Supervised and Unsupervised Learning

In the Machine Learning post we explained how supervised learning includes extra information, called labels, passed into the model to classify the input data. For example, human gesture data might include labels saying whether the data represents running, walking or jumping. This helps the learning stage home in on the neuron weights that best achieve the required output.
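As a rough sketch of what this looks like in practice, the snippet below trains a classifier on labelled accelerometer windows. The window size, the random stand-in data and the choice of RandomForestClassifier are illustrative assumptions, not the method from the post:

```python
# Supervised sketch: each row is a flattened window of x, y, z accelerometer
# samples, and each label names the gesture performed during that window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 300 windows of 50 samples x 3 axes, flattened to 150 values each.
# Real data would come from a recording session; random data stands in here.
X = rng.normal(size=(300, 50 * 3))
y = rng.choice(["walking", "running", "jumping"], size=300)  # the labels

model = RandomForestClassifier().fit(X, y)   # the labels guide the fit
print(model.predict(X[:5]))                  # inferred gestures
```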

Labelled data is usually difficult to obtain, especially when dealing with sensor time series data. Labels often have to be created manually or added to the data after the fact. For example, to create a model that infers human gestures from accelerometer data, we first need to record the x, y, z values together with the gesture being performed at that time. This can be tedious, error prone and open to human bias.

So how does learning on unlabelled data know what to concentrate on to create the best model? It doesn’t. Instead, unsupervised methods concentrate on finding features in the data. Each feature has a numerical value signifying its strength.
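To make "features with a strength" concrete, the sketch below uses PCA, one simple unsupervised method chosen only for illustration (the post doesn't describe the SensorCognition™ model's internals), to turn each accelerometer window into a vector of numerical feature strengths without any labels:

```python
# Unsupervised sketch: no labels are given. PCA learns directions of
# variation in the windows and reports, per window, how strongly each
# learned feature is present.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 150))        # 300 unlabelled windows

pca = PCA(n_components=16).fit(X)      # learn 16 features from data alone
strengths = pca.transform(X)           # one strength value per feature
print(strengths[0])                    # feature strengths for one window
```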

Heatmap of 256 features from a SensorCognition™ edge device unsupervised model

Going back to the human gesture example, the model might see a common up-then-down sequence in the x axis and output this as a feature. The model outputs many features, usually hundreds, which might be sub-features, features of interest (e.g. sitting, running) or a mix of features (e.g. jumping while running).
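One way to picture a single feature is as a template matched against the signal. The hand-built sketch below is an analogy only; a learned feature behaves similarly, except the model discovers the template itself from the data:

```python
import numpy as np

# Hand-built analogy for one feature: an up-then-down template matched
# against the x axis. The feature's strength peaks where the pattern occurs.
template = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 10)])
x_axis = np.concatenate([np.zeros(20), template * 2, np.zeros(20)])

strength = np.correlate(x_axis, template - template.mean(), mode="valid").max()
print(f"feature strength: {strength:.2f}")
```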

There’s usually something extra, whether simple traditional code, data science processing or more machine learning, to turn the output features into detection of the feature of interest (e.g. sitting), detection of anomalies (e.g. falling), classification of the gesture or prediction of the next gesture (e.g. walking after running). A manual way of finding features for detection and classification is to feed known gestures into inference and see which features fire, as sketched below. This obviously involves human effort, but far less than labelling all the supervised input.
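A minimal sketch of that manual step, again assuming PCA as the unsupervised model and with an illustrative threshold (the "sitting" recordings here are random stand-ins):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
background = rng.normal(size=(300, 150))          # unlabelled training windows
pca = PCA(n_components=16).fit(background)

# Manual step: run windows of a *known* gesture through inference and
# note which features fire strongest on average.
sitting_windows = rng.normal(loc=0.5, size=(50, 150))   # stand-in recordings
response = np.abs(pca.transform(sitting_windows)).mean(axis=0)
top = np.argsort(response)[::-1][:3]
print("features that fire for sitting:", top)

# Simple traditional code on top of the features: threshold the ones
# that fired to detect the gesture of interest in new windows.
threshold = 0.8 * response[top].mean()                  # illustrative value
def looks_like_sitting(window):
    return np.abs(pca.transform(window[None, :]))[0, top].mean() > threshold

print(looks_like_sitting(rng.normal(loc=0.5, size=150)))
```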

Read about the advantages of unsupervised learning.