Machine learning and manufacturing
The field of artificial intelligence has been around for decades, and the world has seen headline-grabbing milestones along the way (e.g., IBM's search-based Deep Blue and, more recently, Google's deep learning-based AlphaGo), but it's only within the past decade that we've seen practical applications of machine learning in an enterprise setting. In the past few years, there has been an explosion in the number of products available that integrate machine learning within a business intelligence platform.
In a manufacturing setting, machine learning is used mostly for finding patterns in industrial data for the purposes of anomaly detection and predictive maintenance. Anomaly detection is certainly not specific to manufacturing, but it is used differently when applied to manufacturing-specific problems.
In looking for abnormalities, the first step is to establish what is normal. Organizations that already have historic data have a leg up in this area because this data can be fed into most machine learning systems to help establish the necessary baselines. Unfortunately, if an organization lacks existing data, the system will need to observe the data over a period of time before it can be confident about what to expect. This period can vary greatly depending on the enterprise; a business whose activity shifts from season to season, for example, will need a longer observation window.
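To make the idea concrete, here is a minimal sketch of baseline-driven anomaly detection: readings are compared against the mean and standard deviation of a trailing window of recent history, which stands in for the "learned" baseline. The function name, window size, and threshold are illustrative choices, not part of any particular product.

```python
from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` values (the baseline)."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# A steady sensor signal with one sudden spike at index 30.
signal = [10.0 + 0.1 * (i % 5) for i in range(40)]
signal[30] = 25.0
print(find_anomalies(signal))  # → [30]
```

A production system would use richer models (seasonality, multivariate correlations), but the shape of the problem is the same: first characterize normal, then score deviations from it.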
Manufacturers can benefit from anomaly detection in a number of ways; a prime example is using anomaly detection to discover defective products early in the production pipeline. Early anomaly detection can give machine operators advance warning of issues downstream in the manufacturing process, so these issues can be resolved quickly and without shutting down the production line.
Predictive maintenance is a subset of anomaly detection that focuses on determining the mechanical status of a machine—for example, whether a machine is approaching its maintenance window or if failure is imminent. By comparing current sensor readings to historic data, the system can use predictive maintenance to detect issues early on—allowing the company to handle repairs at a time when overall impact to the system is minimal. This level of prediction can prevent costly and unplanned maintenance, and lost earnings that may otherwise arise from maintenance that impacts service agreements.
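A toy illustration of the predictive side: if a sensor reading (say, vibration amplitude) drifts steadily toward a known failure threshold, a simple least-squares trend line can project how many operating cycles remain before that threshold is crossed. The numbers and the linear-drift assumption are purely illustrative; real predictive maintenance models are far more sophisticated.

```python
def remaining_cycles(readings, failure_threshold):
    """Fit a least-squares line to the readings and project how many
    more cycles until the trend crosses the failure threshold.
    Returns None if the readings are not trending toward failure."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
    slope /= sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward drift; no failure predicted
    intercept = y_mean - slope * x_mean
    crossing = (failure_threshold - intercept) / slope
    return max(0, round(crossing - (n - 1)))

# Vibration amplitude creeping up 0.5 units per cycle toward a limit of 80.
vibration = [50 + 0.5 * i for i in range(20)]
print(remaining_cycles(vibration, 80.0))  # → 41
```

With an estimate like this in hand, maintenance can be scheduled inside the next planned downtime window rather than forced by an unexpected failure.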
Applications of machine learning
Both GE’s Predix and Siemens’ Sinalytics incorporate machine learning algorithms in their platforms, and Amazon AWS Machine Learning and Microsoft Azure Machine Learning are both commercially available services for companies that already have big data implementations and would like to add machine learning capabilities. There are also smaller companies bringing machine learning to industrial sector clients, such as Anodot and Plat.one.
Current machine learning environments are also far more user friendly than ever before. The majority of modern machine learning tools are rules-based and even have GUIs to help build models. Many of these models can be built by business intelligence staff and data scientists with some scripting knowledge, and they can be deployed on the fly, without custom code.
More advanced machine learning features include asset simulation, in which industrial machines and facilities are modeled in software to simulate a variety of scenarios. This capability will allow industrial enterprises to find ways to optimize all of the variables in their assets to maximize efficiency for any situation. In GE’s Predix, this feature is called the “Digital Twin,” and while GE has yet to model any manufacturing assets using the tool, the company claims that nearly any kind of machine can be simulated using this software.
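The appeal of asset simulation is that parameter sweeps happen in software rather than on the factory floor. The deliberately simplified sketch below models a machine whose output grows with operating speed while quadratic losses (heat, wear) grow faster, then sweeps candidate speeds to find the efficiency peak; the model and all its constants are invented for illustration.

```python
def throughput(speed, wear_rate=0.002):
    """Toy machine model: faster speeds raise output, but heat and
    wear losses grow quadratically, so net throughput peaks in between."""
    output = speed * 10
    losses = wear_rate * speed ** 2
    return output - losses

# Sweep operating speeds in simulation instead of on the real machine.
best_speed = max(range(100, 5001, 100), key=throughput)
print(best_speed)  # → 2500
```

A real digital twin replaces the one-line model with a physics- and data-driven simulation of the asset, but the optimization loop around it looks much the same.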
Natural language processing
One of the biggest challenges in analyzing data from industrial machinery is finding the meaning in the data (data such as error codes and sensor readings). Data formats are often buried deep in service manuals, so much of this information must be mapped into analytics systems manually before it can be interpreted at all. Steven Gustafson, leader of the Knowledge Discovery Lab at GE, explains:
"[In a factory,] we have all these different kinds of machines provided by many different manufacturers. They’re usually connected to control systems in a rudimentary way, just to do alarming, safe shutdown, and safety things. And now we want to have a whole plant view of what’s going on, so we can do optimization. Machine learning is already having a big impact, and the main way is on the data side.
So, we need to do a lot of work to get data structured, and that could be from looking at using natural language processing, and extracting the learnings from plant failures, machine failures, or from other issues, and getting them out of reports. . . . Because, if you took a plant that might have dozens of different kinds of systems that are generating alarms, those alarms usually come with a numeric format, with a string, that is a description of the problem. And, surprisingly, a lot of the natural language processing work involves going through and normalizing all of that alarm information, so that when it flows back in, it is in a digitized form—I like to call it a “computable form”—then we can do automated inference reasoning on it."
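Gustafson's point about normalizing alarm strings can be sketched in a few lines: raw alarms arrive in whatever format each vendor chose, and a small set of parsers maps them into a single computable record. The vendor formats and field names below are hypothetical; a real plant would need one pattern (or a full parser) per alarm source.

```python
import re

# Hypothetical alarm formats from two different vendors.
PATTERNS = [
    re.compile(r"ERR(?P<code>\d+):\s*(?P<message>.+)"),
    re.compile(r"ALARM\s+(?P<code>\d+)\s+-\s+(?P<message>.+)"),
]

def normalize(alarm):
    """Map a raw vendor alarm string to a computable (code, message) record."""
    for pattern in PATTERNS:
        match = pattern.match(alarm)
        if match:
            return {"code": int(match["code"]),
                    "message": match["message"].strip().lower()}
    return None  # unrecognized format; route to a human for mapping

# Two vendor-specific strings normalize to the same record.
print(normalize("ERR104: Bearing Temperature High"))
print(normalize("ALARM 104 - bearing temperature high"))
```

Once every alarm flows through a step like this, the downstream system sees uniform, digitized records—the "computable form" on which automated inference can run.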
Article image: High Tech Manufacturing in China. (source: Cory M. Grenier on Flickr).
Li Ping Chu is a veteran software developer of the Silicon Valley tech boom. With 15 years of experience ranging from five-person startups to consulting for major financial firms like Charles Schwab and major e-tailers like The Gap and Williams-Sonoma, he has been involved with projects of all kinds and all sizes. He is currently located in Taipei, where he most recently helped build an analytics engine for a local mobile gaming company. He loves dogs and tolerates cats.