Getting Started

We are entering a new age in which many organisations will need to incorporate AI to remain competitive. The tricky part for many owners and managers is knowing where to start.

It turns out the starting place has nothing to do with AI or machine learning and instead involves familiar territory: your current processes.

Most organisations have significant investment in legacy hardware, software and processes that can’t be replaced overnight. Machine learning can significantly improve key aspects of these legacy systems. Once other stakeholders see the gains in efficiency, reduced cost and increased competitiveness, you will be able to propose more far-reaching changes.

So how do you improve key aspects of your legacy systems? The secret is to start thinking about what costs your organisation the most. These could be physical assets, processes or even people’s time.

Some simple examples. While cargo ships are expensive, their largest running cost is fuel, and the biggest financial ‘losses’ come from downtime during preventative maintenance. In health services, we spend considerable amounts treating symptoms rather than preventing illness. In the finance industry, many people invest money using primitive ‘gut feeling’ approaches that can be costly.

Consider what you might do to reduce those costs. For example, you might lengthen the interval between preventative maintenance if you can better predict when things are likely to fail. In some cases you might even replace preventative maintenance with prognostics (condition-based maintenance). In health, you would concentrate on early detection and illness prevention. In finance, you would invest using approaches you better understand and whose risk you can better quantify.

Next, think about what data describes the factors that affect the outcomes of your high-cost scenarios. For the cargo ship, fuel use might be affected by routes and speeds, and vibration sensors might aid machinery prognostics. In health you might have medical instrument data. In finance, you might have weather data that, for example, affects investments in grown commodities.

The key point is that, in the past, it has been very difficult for humans to derive insights from this data: the combinations of data and possible methods are huge. This is where machine learning excels.

In very simple terms, we pass this historical data into a neural network during a process called training (learning). This creates a model which, when fed current data, might for example estimate ship efficiency, predict machinery failure, assess health or tell you when to buy or sell shares.
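To make this concrete, here is an illustrative sketch of the train-then-predict idea using a single artificial neuron (a perceptron, the simplest building block of a neural network). The vibration readings and failure labels are invented for demonstration; real systems use larger networks and far more data.

```python
# Past data: [mean vibration, peak vibration] per machine-day,
# labelled 1 if the machine failed soon afterwards, 0 otherwise.
past_data = [([0.2, 0.5], 0), ([0.3, 0.6], 0), ([0.9, 1.8], 1),
             ([1.1, 2.0], 1), ([0.25, 0.55], 0), ([1.0, 1.9], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    """One artificial neuron: weighted sum of inputs, then a threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

# "Training": nudge the weights whenever the neuron gets an example wrong.
for _ in range(20):                       # passes over the historical data
    for x, label in past_data:
        error = label - predict(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

# "Inference": feed in current readings, get a prediction.
alert = predict([0.95, 1.85])
print("failure expected" if alert else "running normally")
# → failure expected
```

The same two-step pattern, learn a model from past data, then apply it to current data, underlies every application mentioned above.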

In summary, machine learning doesn’t require a big-bang approach to change in your organisation. Concentrate on the costly problems in your organisation rather than letting the technology lead the innovation.

The Business Case

Physical sensors allow you to collect historical and current information on systems, sub-systems, components and even people. The aggregate of this information provides state information on processes.

This can be in industry, health, hospitality, utilities, education or transportation. Whatever the domain, the goal is to provide actionable alerts that enable intelligent decision-making for improved performance, safety, reliability or maintainability.

Alerts are either diagnostic or prognostic: they tell you the current status or warn of an impending situation. They allow you to:

  • Prevent something costly or dangerous from happening. For example, in manufacturing, significant damage to equipment, damage to the products being fabricated, or costly downtime; in healthcare, a patient about to fall.
  • Reduce the need for costly preventative manual checking or over-zealous regular replacement. For example, in manufacturing, reducing the time and cost of maintaining products or processes; in healthcare, reducing wasted human effort monitoring patients who are fine the majority of the time.

The overall aim is to save human effort while avoiding failures and significant disruption. Achieving this using traditional algorithmic programming is difficult, if not impossible, due to:

  • Noise in gathered data and variance in environmental and operating conditions
  • The possibility of false alarms, due to the difficulty of dealing with uncertainty
  • The scarcity of intermittent events, making them difficult to measure and hence predict
  • The complexity of processes with many contributing factors
  • The closed nature of some existing systems that already measure useful data but don’t expose it
  • The varying nature of scenarios and end-user requirements, which prevents standard solutions

Machine learning, fed by auxiliary sensors, is ideal for making sense of such complexity.

Machine Learning Development Process

Our machine learning development process has five stages. You commit to only one stage at a time, moving on only once you are happy with the previous one.

We actively involve your appointed technical staff so that they can help us integrate the final solution into your organisation while picking up valuable machine learning skills along the way.

Initial Chat

Start with a free chat to discover whether we are the right people to help build your AI capability. Aspects include high-level feasibility, geographical considerations, financial considerations and respective company policies.

Scoping and High Level Architecture

This stage involves identifying and understanding your problem(s) and deriving the requirements: the required accuracy, performance and machine learning output actions. We assess the feasibility of a machine learning solution while also considering whether a simpler, conventional algorithmic solution might suffice. We design an architecture for gathering data that can also be used in production. The output is a short document.

Data Collection

We implement mechanism(s) to collect, clean and shape the data. With our help, you then collect the data, which might take hours, days or weeks.
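As a small illustration of what "cleaning and shaping" means in practice, the sketch below filters hypothetical raw sensor readings (the values, field names and plausibility limits are invented) and arranges them into fixed-size windows that a model could learn from. Real pipelines typically use tools such as pandas for this.

```python
# Raw readings from a hypothetical vibration sensor.
raw = [
    {"time": 0, "vibration": 0.21},
    {"time": 1, "vibration": None},    # sensor dropout
    {"time": 2, "vibration": 9.7},     # implausible spike (sensor glitch)
    {"time": 3, "vibration": 0.24},
    {"time": 4, "vibration": 0.26},
]

# Clean: discard missing and physically implausible readings.
clean = [r for r in raw
         if r["vibration"] is not None and 0.0 <= r["vibration"] <= 5.0]

# Shape: turn the reading stream into fixed-size windows for learning.
values = [r["vibration"] for r in clean]
windows = [values[i:i + 2] for i in range(len(values) - 1)]
print(windows)   # → [[0.21, 0.24], [0.24, 0.26]]
```

Garbage in, garbage out: time spent here directly determines the quality of the models in the next stage.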

Machine Learning to Create Models

We run many experiments to develop and refine the machine learning model(s), trying different variants of the data, different machine learning techniques and different parameters. The output is a machine learning model. This takes hours, days or weeks depending on the size and complexity of the data.
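The experiment loop can be sketched in miniature as a search over candidate settings, keeping whichever scores best. Here `evaluate()` is a hypothetical stand-in for training a model and measuring its accuracy on held-out data; the parameter names and values are invented for illustration.

```python
import itertools

def evaluate(learning_rate, hidden_units):
    # Placeholder score; a real version trains a model and measures
    # its accuracy on data the model has never seen.
    return 1.0 - abs(learning_rate - 0.01) - abs(hidden_units - 32) / 100

learning_rates = [0.001, 0.01, 0.1]
hidden_sizes = [16, 32, 64]

# Try every combination of settings and keep the best-scoring one.
best = max(itertools.product(learning_rates, hidden_sizes),
           key=lambda params: evaluate(*params))
print("best settings:", best)   # → best settings: (0.01, 32)
```

Each call to a real `evaluate()` might take minutes or hours, which is why this stage can stretch to days or weeks.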


Deployment

We reuse the mechanism created for data collection to perform inference on new, real-time data, and we implement actions based on the output of the machine learning model.
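In outline, the run-time side looks like the sketch below. `load_model()` and `read_sensor()` are hypothetical stand-ins for your deployed model and data-collection mechanism; the key point is that the model's output drives an action, not just a log entry.

```python
def load_model():
    # Stand-in: flag failure when peak vibration exceeds a learned threshold.
    return lambda reading: reading["peak_vibration"] > 1.5

def read_sensor():
    # Stand-in for the data-collection mechanism; one "live" reading.
    return {"peak_vibration": 1.8}

model = load_model()
reading = read_sensor()

# Act on the model's output rather than merely recording it.
if model(reading):
    print("ALERT: schedule maintenance for this machine")
```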

Contact us to get started

Machine Learning Development Risks

Machine learning projects carry different risks from conventional programming. While our past experience, productised tools and re-use of existing machine learning models mitigate some of the risk, there’s always some degree of experimentation:

  • Feasibility – The problem you wish to solve might not be solvable using sensor data.
  • Accuracy – While the problem might be solvable, the resultant accuracy might not be good enough, or achieving the required accuracy might take too long. You will never get 100% accuracy; if you need that, machine learning isn’t the solution. Before hastily discarding machine learning, though, remember that most non-machine-learning and manual safety-critical processes also have some possibility of failure.
  • Performance – If the input data is very large, it might not be possible to derive a model in a reasonable elapsed time. If a model can be created, it might not be fast enough during inference (real use).
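A short worked example of why a single "accuracy" figure needs care: with rare events, a model can score well overall yet still miss the failures that matter. The predictions and outcomes below are invented for illustration.

```python
# Actual outcomes vs. model predictions for ten machine-days
# (1 = failure). Failures are deliberately rare, as they are in practice.
actual    = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
predicted = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
false_alarms = sum(p == 1 and a == 0 for a, p in zip(actual, predicted))
missed = sum(p == 0 and a == 1 for a, p in zip(actual, predicted))

print(f"accuracy: {accuracy:.0%}, "
      f"false alarms: {false_alarms}, missed failures: {missed}")
# → accuracy: 80%, false alarms: 1, missed failures: 1
```

This is why the scoping stage pins down not just a headline accuracy figure but which kinds of mistakes your organisation can tolerate.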

Focus on Practical Matters

There’s currently a skills shortage, with Google, Apple, Facebook, Amazon and others consuming the majority of AI talent. At the same time, there’s a misconception that you need candidates with PhDs in AI to implement machine learning successfully.

Deloitte’s recent article AI’s ‘most wanted’: Which skills are adopters most urgently seeking? questions whether you really need an AI superstar. Companies report, with hindsight, that people who can work out how best to integrate AI into the organisation matter more:

The less-experienced AI adopters are placing too much emphasis on finding AI researchers.

AI might start as experiments run on servers in small labs or under the data scientist’s desk. But highly successful AI demands infrastructure that can scale beyond experimentation.

IBM: Decoding the 7 traits of companies achieving success with AI

Research by Peltarion shows that while 99% of companies believe in and are trying to use AI, only 1% have deployed it extensively. Problems cited include complexity, lack of specialist skills, scalability, lack of available data and integration. While most AI researchers are expert at solving the first two challenges, the last three often thwart successful rollouts.

The Deloitte article advocates approaches using ready-made AI tools and services that need less AI expertise. Read about Sensor Cognition™.