Online Binary Classification

Online binary classification via stochastic gradient descent.

Usage

var onlineBinaryClassification = require( '@stdlib/ml/online-binary-classification' );

onlineBinaryClassification( [options] )

Creates an online binary classification model fitted via stochastic gradient descent. The module performs L2 regularization of the model coefficients, shrinking them toward zero by penalizing the squared Euclidean norm of the coefficients.

var model = onlineBinaryClassification();

var idx;
var i;
var x;
var y;

x = [ [ 0, 0, 0.5 ], [ 1, 1, 0.5 ] ];
y = [ -1, 1 ];

// Iterate 500 times:
for ( i = 0; i < 500; i++ ) {
    idx = i % 2;
    model.update( x[ idx ], y[ idx ] );
}

The function accepts the following options:

  • learningRate: string denoting the learning rate to use. Can be constant, pegasos or basic. Default: basic.
  • loss: string denoting the loss function to use. Can be hinge, log, modifiedHuber, perceptron or squaredHinge. Default: log.
  • epsilon: insensitivity parameter. Default: 0.1.
  • lambda: regularization parameter. Default: 1e-3.
  • eta0: constant learning rate. Default: 0.02.
  • intercept: boolean indicating whether to include an intercept. Default: true.

var model = onlineBinaryClassification({
    'loss': 'modifiedHuber',
    'lambda': 1e-4
});

The learningRate option determines how quickly the weights are updated toward the optimal weights. Let i denote the current iteration of the algorithm (i.e., the number of data points seen so far). The possible learning rates are:

Option           Definition
basic (default)  1000.0 / ( i + 1000.0 )
constant         eta0
pegasos          1.0 / ( lambda * i )
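
As an illustration (not part of the package API), the three schedules can be computed directly. Note how basic decays slowly from roughly 1, constant stays fixed at eta0, and pegasos decays like 1/i. The hyperparameter values below are the documented defaults:

```javascript
// Illustrative sketch of the three learning-rate schedules (not module code).
// Assumes the default hyperparameters lambda = 1e-3 and eta0 = 0.02.
var lambda = 1e-3;
var eta0 = 0.02;

function basic( i ) {
    return 1000.0 / ( i + 1000.0 );
}

function constant() {
    return eta0;
}

function pegasos( i ) {
    return 1.0 / ( lambda * i );
}

console.log( basic( 1 ) );       // => ~0.999
console.log( basic( 9000 ) );    // => 0.1
console.log( constant() );       // => 0.02
console.log( pegasos( 1000 ) );  // => 1.0
```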

The loss function is specified via the loss option. The available loss functions are:

  • hinge: hinge loss corresponding to a soft-margin linear Support Vector Machine (SVM), which can handle non-linearly separable data.
  • log: logistic loss. Corresponds to Logistic Regression.
  • modifiedHuber: Huber loss variant for classification.
  • perceptron: hinge loss without a margin. Corresponds to the original Perceptron by Rosenblatt.
  • squaredHinge: squared hinge loss SVM (L2-SVM).
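
To build intuition for the perceptron loss above, here is a minimal plain-JavaScript sketch of a perceptron-style update (an illustration only, not the module's implementation): the weights move only when a point is misclassified, i.e., when y * <w, x> is not positive.

```javascript
// Minimal perceptron-style SGD sketch (illustration only; not the module code).
// The weights are updated only on misclassification, i.e. when y * <w, x> <= 0.
function perceptronUpdate( w, x, y, eta ) {
    var lp = 0.0;
    var i;
    for ( i = 0; i < w.length; i++ ) {
        lp += w[ i ] * x[ i ];
    }
    if ( y * lp <= 0.0 ) {
        for ( i = 0; i < w.length; i++ ) {
            w[ i ] += eta * y * x[ i ];
        }
    }
    return w;
}

// Two linearly separable toy points with labels +1 and -1:
var data = [
    [ [ 2.0, 1.0 ], 1 ],
    [ [ -1.0, -2.0 ], -1 ]
];
var w = [ 0.0, 0.0 ];
var i;
var j;
for ( i = 0; i < 10; i++ ) {
    for ( j = 0; j < data.length; j++ ) {
        w = perceptronUpdate( w, data[ j ][ 0 ], data[ j ][ 1 ], 0.1 );
    }
}
console.log( w );
```

After training, the sign of the inner product <w, x> classifies both toy points correctly.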

The lambda parameter determines the amount of shrinkage inflicted on the model coefficients:

var createRandom = require( '@stdlib/random/base/randu' ).factory;

var model;
var coefs;
var opts;
var rand;
var x1;
var x2;
var i;
var y;

opts = {
    'seed': 23
};
rand = createRandom( opts );

model = onlineBinaryClassification({
    'lambda': 1e-6,
    'loss': 'perceptron'
});

for ( i = 0; i < 10000; i++ ) {
    x1 = rand();
    x2 = rand();
    y = ( x1 + x2 > 1.0 ) ? +1 : -1;
    model.update( [ x1, x2 ], y );
}

coefs = model.coefs;
// returns [ ~4.205, ~4.186, ~-4.206 ]

rand = createRandom( opts );
model = onlineBinaryClassification({
    'lambda': 1e-2
});

for ( i = 0; i < 10000; i++ ) {
    x1 = rand();
    x2 = rand();
    y = ( x1 + x2 > 1.0 ) ? +1 : -1;
    model.update( [ x1, x2 ], y );
}

coefs = model.coefs;
// returns [ ~2.675, ~2.616, ~-2.375 ]

Higher values of lambda reduce the variance of the model coefficient estimates at the expense of introducing bias.

By default, the model contains an intercept term. To omit the intercept, set the corresponding option to false:

var model = onlineBinaryClassification({
    'intercept': false
});
model.update( [ 1.4, 0.5 ], 1 );

var dim = model.coefs.length;
// returns 2

model = onlineBinaryClassification();
model.update( [ 1.4, 0.5 ], -1 );

dim = model.coefs.length;
// returns 3

If intercept is true, an element equal to one is implicitly added to each x vector. Hence, this module performs regularization of the intercept term.
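
The implicit intercept element can be pictured in plain JavaScript (an illustration, not the module's code): appending a constant 1 to x makes the intercept just another coefficient, which is why it is regularized along with the rest.

```javascript
// Illustration: with an intercept, each x is implicitly augmented with a 1,
// so the linear predictor is <[x_0, x_1, 1], [c_0, c_1, c_intercept]>.
function linearPredictor( x, coefs ) {
    var xa = x.concat( [ 1.0 ] ); // implicit intercept element
    var lp = 0.0;
    var i;
    for ( i = 0; i < xa.length; i++ ) {
        lp += xa[ i ] * coefs[ i ];
    }
    return lp;
}

// Hypothetical coefficients [ c_0, c_1, c_intercept ] for a 2-feature model:
var lp = linearPredictor( [ 1.4, 0.5 ], [ 2.0, -1.0, 0.5 ] );
console.log( lp );
// => 1.4*2.0 + 0.5*(-1.0) + 1.0*0.5 = 2.8
```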

Model

Returned models have the following properties and methods...

model.update( x, y )

Updates the model coefficients given incoming data. y must be either +1 or -1, and x must be a numeric array of predictors. The number of predictors is determined upon the first invocation of this method; all subsequent calls must supply x vectors of the same dimensionality.

model.update( [ 1.0, 0.0 ], -1 );

model.predict( x[, type] )

Calculates the linear predictor for a given feature vector x. Given x = [x_0, x_1, ...] and model coefficients c = [c_0, c_1, ...], the linear predictor is equal to x_0*c_0 + x_1*c_1 + ... + c_intercept. For the logistic and modified Huber loss functions, supply probability for the type parameter to retrieve prediction probabilities.

var lp = model.predict( [ 0.5, 2.0 ] );
// returns <number>

var phat = model.predict( [ 0.5, 2.0 ], 'probability' );
// returns <number>
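
For the log loss, the probability prediction corresponds to the logistic transform of the linear predictor. A plain-JavaScript sketch of that mapping (an illustration, not the module's code):

```javascript
// Illustration: mapping a linear predictor to Pr(Y=1) via the logistic function.
function logistic( lp ) {
    return 1.0 / ( 1.0 + Math.exp( -lp ) );
}

console.log( logistic( 0.0 ) ); // => 0.5 (on the decision boundary)
console.log( logistic( 3.0 ) ); // => ~0.953
```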

model.coefs

Getter for the model coefficients / feature weights stored in an array. The coefficients are ordered as [c_0, c_1, ..., c_intercept], where c_0 corresponds to the first feature in x and so on.

var coefs = model.coefs;
// returns <Array>

Notes

  • Stochastic gradient descent is sensitive to the scaling of the features. It is advisable to either scale each feature to [0,1] or [-1,1] or to transform the features into z-scores with zero mean and unit variance. Keep in mind that the same scaling must be applied to test vectors in order to obtain accurate predictions.
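
As a sketch of the z-score transform mentioned above (plain JavaScript, not part of the package), note that the computed mean and standard deviation must be stored and reused when scaling test vectors:

```javascript
// Illustration: standardizing a feature to zero mean and unit variance.
// The returned mean/sd must be reused to scale test vectors consistently.
function standardize( values ) {
    var mean = 0.0;
    var sd = 0.0;
    var i;
    for ( i = 0; i < values.length; i++ ) {
        mean += values[ i ];
    }
    mean /= values.length;
    for ( i = 0; i < values.length; i++ ) {
        sd += Math.pow( values[ i ] - mean, 2.0 ) / values.length;
    }
    sd = Math.sqrt( sd );
    return {
        'mean': mean,
        'sd': sd,
        'zscores': values.map( function map( v ) {
            return ( v - mean ) / sd;
        })
    };
}

var out = standardize( [ 2.0, 4.0, 6.0 ] );
console.log( out.zscores );
// => [ ~-1.225, 0, ~1.225 ]
```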

Examples

var binomial = require( '@stdlib/random/base/binomial' );
var normal = require( '@stdlib/random/base/normal' );
var exp = require( '@stdlib/math/base/special/exp' );
var onlineBinaryClassification = require( '@stdlib/ml/online-binary-classification' );

var phat;
var lp;
var x1;
var x2;
var y;
var i;

// Create model:
var model = onlineBinaryClassification({
    'lambda': 1e-3,
    'loss': 'log',
    'intercept': true
});

// Update model as data comes in...
for ( i = 0; i < 10000; i++ ) {
    x1 = normal( 0.0, 1.0 );
    x2 = normal( 0.0, 1.0 );
    lp = (3.0 * x1) - (2.0 * x2) + 1.0;
    phat = 1.0 / ( 1.0 + exp( -lp ) );
    y = ( binomial( 1, phat ) ) ? 1.0 : -1.0;
    model.update( [ x1, x2 ], y );
}

// Extract model coefficients:
console.log( model.coefs );

// Predict new observations:
console.log( 'Pr(Y=1)_hat = %d; x1 = %d; x2 = %d', model.predict( [0.9, 0.1], 'probability' ), 0.9, 0.1 );
console.log( 'y_hat = %d; x1 = %d; x2 = %d', model.predict( [0.1, 0.9], 'link' ), 0.1, 0.9 );
console.log( 'y_hat = %d; x1 = %d; x2 = %d', model.predict( [0.9, 0.9], 'link' ), 0.9, 0.9 );