The SensorCognition™ edge device:
- Collects data from sensors
- Applies machine learning to that data to create a model
- Uses the model to detect features in the data at run time
- Triggers actions based on detected features
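The four steps above can be sketched as a single loop. This is only an illustration of the collect → learn → detect → trigger flow; the class, method, and feature names are hypothetical and are not the SensorCognition API.

```python
# Illustrative sketch of the edge-device pipeline described above.
# Every name here is hypothetical, not taken from SensorCognition.

class EdgePipeline:
    def __init__(self, sensors, actions):
        self.sensors = sensors   # callables returning the latest sample
        self.actions = actions   # feature name -> callable to trigger
        self.model = None

    def collect(self, n_samples):
        """Step 1: collect n_samples readings from every sensor."""
        return [[read() for read in self.sensors] for _ in range(n_samples)]

    def learn(self, data):
        """Step 2: stand-in for on-device model training."""
        # A real model would be learned from `data`; this placeholder
        # just maps a sample to one made-up feature strength.
        self.model = lambda sample: {"vibration": sum(map(abs, sample))}

    def run(self, sample, threshold=1.0):
        """Steps 3 and 4: detect features, trigger matching actions."""
        for feature, strength in self.model(sample).items():
            if strength > threshold and feature in self.actions:
                self.actions[feature](strength)
```

A caller would wire in real sensor reads and actions (such as an HTTP POST) in place of the placeholders.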
The physical hardware we use for the edge device varies from customer to customer depending on the performance requirements.
The screenshot above shows the Input screen with input from an accelerometer and a switch. It shows each sensor's signal strength in dBm and the most recent accelerometer x, y, z data. We set up the Input screen on a per-customer basis, based on the required sensor input(s).
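One reading of the kind the Input screen displays could be modelled as below. The field names and types are illustrative assumptions, not the product's data format.

```python
# Hypothetical shape of one Input-screen reading; the field names are
# illustrative, not taken from the SensorCognition software.
from dataclasses import dataclass


@dataclass
class AccelerometerReading:
    rssi_dbm: int   # sensor signal strength in dBm, as shown on screen
    x: float        # most recent acceleration along the x axis
    y: float        # y axis
    z: float        # z axis


@dataclass
class SwitchReading:
    rssi_dbm: int   # signal strength in dBm
    closed: bool    # switch state
```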
The menu gives access to screens for recording data, running machine learning to create a model, viewing the generated unsupervised features in real time, and setting up classification, anomaly detection and prediction. Further screens allow testing with pre-generated data, defining actions (for example, email or HTTP triggers) that fire while the Run screen is active, and importing and exporting data. Read more about machine learning.
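An HTTP trigger of the kind mentioned above might look like the following sketch, using only the Python standard library. The endpoint URL and JSON payload shape are made up for illustration; the real product's trigger format is not shown here.

```python
# Sketch of an HTTP trigger action, as described above.
# The payload shape and endpoint are illustrative assumptions.
import json
import urllib.request


def build_payload(feature, strength):
    """JSON body sent when a feature fires (shape is illustrative)."""
    return json.dumps({"feature": feature, "strength": strength}).encode("utf-8")


def make_http_trigger(url):
    """Return an action callable that POSTs the detected feature."""
    def trigger(feature, strength):
        req = urllib.request.Request(
            url,
            data=build_payload(feature, strength),
            headers={"Content-Type": "application/json"},
        )
        # Sends the request when a feature is detected during Run.
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    return trigger
```

An email trigger would have the same outer shape, with `smtplib` in place of the HTTP request.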
The Learn screen is simple: press a button to start learning against the recorded input data. Learning can take hours, days or weeks, depending on the performance of the edge device and the size and complexity of the input data.
Once learning completes, the resulting model is used by the Features and Run screens. The Features screen gives a real-time view of the generated features:
In this case the model detects up to 128 features in the data, and the heatmap depicts the strength of each feature given the incoming data. The plot on the right-hand side shows the top features over time. Read the post on Supervised and Unsupervised Learning for more about features.
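The "top features" plot can be thought of as repeatedly ranking one frame of feature strengths. A minimal sketch, assuming the model emits a vector of up to 128 strengths per frame (the function name and output format are illustrative):

```python
# Sketch of the "top features" view: given one frame of feature
# strengths (e.g. a list of up to 128 values from a hypothetical
# model output), pick the strongest features to plot over time.

def top_features(strengths, k=5):
    """Return the k (feature_index, strength) pairs with the
    highest strength, strongest first."""
    ranked = sorted(enumerate(strengths), key=lambda p: p[1], reverse=True)
    return ranked[:k]
```

Calling this once per incoming frame and plotting the returned indices against time yields the kind of view the Features screen shows.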