We started supervised machine learning in 2016 using our BeaconRTLS™ platform. BeaconRTLS™ collects data into a server-side database, which is the traditional way of collecting IoT data. The data was extracted from the database and fed into machine learning, both locally and in the cloud. This approach had several problems:
Our clients were storing lots of data and using lots of bandwidth, with associated costs.
We had to implement a way of using the resulting machine learning models, which meant setting up a server to serve inference results over HTTP in response to input sensor data received via HTTP and MQTT.
There was no inherent consistency between the way learning and inference pre-processed data; ensuring the processing matched was a manual process. The data needs to be processed in exactly the same way for learning and inference to get consistent results.
For sensors producing lots of data, for example accelerometers, there was prohibitively large data storage and traffic.
The server round-trip delay was too long for time-sensitive events and alerts.
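The pre-processing consistency problem above can be avoided by routing both the learning and the inference paths through a single function. A minimal sketch of the idea, assuming simple normalisation and fixed-size windowing (the function name, window size and normalisation are illustrative, not how BeaconRTLS™ actually processes data):

```python
import numpy as np

def preprocess(samples, window=32):
    """Normalise raw sensor samples and split them into fixed-size windows.

    Using this ONE function in both the training and the inference paths
    guarantees identical processing. (Hypothetical sketch: the window size
    and normalisation scheme are assumptions for illustration.)
    """
    x = np.asarray(samples, dtype=np.float64)
    x = (x - x.mean()) / (x.std() + 1e-9)   # zero-mean, unit-variance
    n = (len(x) // window) * window         # drop the ragged tail
    return x[:n].reshape(-1, window)

# Training and run-time inference share the same code path:
train_windows = preprocess(range(100))   # shape (3, 32)
live_windows = preprocess(range(64))     # shape (2, 32)
```

Because inference calls the exact function used during learning, there is no separate implementation to keep in sync by hand.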
In 2017 we moved to using Edge devices. These perform data collection, machine learning and run-time inference on a single local device, without the need for server storage of data. Sensors producing lots of data are processed quickly, with latency on the order of milliseconds, to generate alerts. Edge devices provide inherent consistency of data pre-processing between learning and production inference.
The edge devices can still communicate with servers but only need to do so to send out alerts with pertinent data. Data storage is local, rather than remote, aiding security and privacy requirements.
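The send-only-alerts pattern described above can be sketched in a few lines: the device keeps a small rolling window locally and emits only the alert payload, so no raw data leaves it. The threshold, window size and payload shape are assumptions for illustration, not the product's actual logic:

```python
from collections import deque

def edge_monitor(stream, threshold=2.5, window=16):
    """Run entirely on the edge device: keep a small rolling window of
    readings and emit only alert payloads (illustrative sketch).

    `stream` yields (timestamp, magnitude) pairs, e.g. accelerometer
    magnitudes; threshold and window are hypothetical values.
    """
    buf = deque(maxlen=window)
    alerts = []
    for t, magnitude in stream:
        buf.append(magnitude)
        mean = sum(buf) / len(buf)
        if magnitude > threshold * max(mean, 1e-9):
            # Only this small payload would be sent to a server.
            alerts.append({"time": t, "value": magnitude})
    return alerts

# A spike at t=3 against a quiet baseline produces a single alert:
alerts = edge_monitor([(0, 1.0), (1, 1.0), (2, 1.0), (3, 9.0)])
```

Everything before the `alerts.append` happens on-device, which is what keeps the latency in the millisecond range rather than a server round-trip.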
The edge device:
Collects data from sensors
Performs AI machine learning on that data to create a model
Uses the model to detect features in the data at run time
Triggers actions based on detected features.
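The steps above can be sketched end to end. This is a hypothetical nearest-class-mean "model" chosen purely to make the learn / detect / trigger flow concrete; it is not the actual algorithm the edge device runs:

```python
def learn(recorded):
    """Learn: 'train' a toy model by storing the mean value per class.
    (Illustrative stand-in for the real learning step.)"""
    return {label: sum(values) / len(values) for label, values in recorded.items()}

def detect(model, value):
    """Detect: classify a live reading by its nearest class mean."""
    return min(model, key=lambda label: abs(model[label] - value))

def on_feature(label):
    """Trigger: fire an action for the detected feature."""
    return f"ALERT: {label}"

# Learn from recorded data, then detect and trigger at run time:
model = learn({"idle": [0.9, 1.0, 1.1], "vibrating": [4.8, 5.2]})
action = on_feature(detect(model, 5.0))   # → "ALERT: vibrating"
```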
The physical hardware we use for the edge device varies from customer to customer depending on the performance requirements.
The above shows the Input screen with input from an accelerometer and a switch. It shows each sensor's signal strength in dBm and the most recent accelerometer x, y, z data. We set up the Input screen on a per-customer basis based on the required sensor input(s).
The menu gives access to screens for data recording, machine learning to create a model, a real-time view of the generated unsupervised features, and setting up classification, anomaly detection and prediction. Further screens allow testing with pre-generated data, defining actions (for example, email or HTTP triggers) that fire during the Run screen, and importing and exporting data. Read more about machine learning.
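An HTTP trigger of the kind mentioned above can be sketched with Python's standard library. The endpoint URL and payload shape are assumptions for illustration; the actual trigger configuration lives in the actions screen:

```python
import json
import urllib.request

def build_action_request(url, payload):
    """Build the HTTP POST that an action would fire when a
    classification or anomaly occurs. (URL and payload are
    hypothetical examples, not the product's wire format.)"""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fire(req):
    """Send the request; returns the HTTP status code."""
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# Example: a request that would be sent when an anomaly is detected.
req = build_action_request(
    "http://alerts.example.com/trigger",    # hypothetical endpoint
    {"event": "anomaly", "value": 9.0},
)
# fire(req) would transmit it when the Run screen detects the event.
```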
The Learn screen is simple: press a button to start learning against the recorded input data. Learning can take hours, days or weeks depending on the performance of the edge device and the size and complexity of the input data.
Once completed, a model is created that is used in the Features and Run screens. The Features screen gives a real-time view of the generated features:
In this case the model detects up to 128 features in the data, and the heatmap depicts the strength of each feature for the incoming data. The plot on the right-hand side shows the top features over time. Read the post on Supervised and Unsupervised Learning for more about features.
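The "top features over time" plot amounts to ranking each heatmap frame by strength. A minimal sketch, assuming one frame is a 128-wide vector of feature strengths (128 comes from the text; the function name and `k` are illustrative):

```python
import numpy as np

def top_features(strengths, k=3):
    """Return the indices of the k strongest features in one heatmap
    frame, mirroring the 'top features over time' plot.
    (Hypothetical helper; k=3 is an arbitrary choice.)"""
    s = np.asarray(strengths)
    return np.argsort(s)[::-1][:k].tolist()   # strongest first

# One frame with three active features out of 128:
frame = np.zeros(128)
frame[[5, 40, 99]] = [0.9, 0.7, 0.95]
ranked = top_features(frame)   # → [99, 5, 40]
```

Plotting `ranked` for each successive frame reproduces the right-hand panel's view of which features dominate over time.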