If data is the new oil, contextual information and process knowledge are the engine and navigation system that turn insights into action and create sustainable customer value.

Stefan Hoh
Head of Market Segment Feed Milling and Premix
Bühler Group
In recent years, digitization has evolved from a buzzword into internet-of-things (IoT) applications with ever-expanding, economically viable use cases. The tipping point has been the development of microprocessor and sensor technologies: microchips and sensors have increased in performance, decreased in cost, and improved in ruggedness and reliability for industrial applications. Today's challenge is no longer collecting data but putting it into a meaningful context and transforming it into valuable information and actionable results.
TAPPING DATA SOURCES
Feed plants produce large quantities of feed – and equally large quantities of unstructured data. A major part of this data is process dependent and can be extracted from information technology (IT) or operations technology (OT) systems, such as enterprise resource planning systems or machine and process line control units. Secondary data sources come from sensing raw materials, equipment states, and environmental and ambient conditions. Just as a human being uses all five senses to acquire data, various kinds of sensors perform this task in an industrial setup. Near-infrared (NIR) sensors can detect product parameters such as moisture, fat, or protein content in grains, and a sensor using laser scattering technology can analyze a product's physical composition, such as the particle size distribution of feed after the grinding process. Defined data standards are a prerequisite for interoperability between IT/OT systems and sensors: they avoid data silos and thereby unlock the full potential of connecting sensor, ambient, and process data.
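To make the idea of interoperable, silo-free data concrete, here is a minimal sketch in Python of how a sensor reading could be tagged with a shared set of identifiers (plant, line, batch) so it can later be joined with ERP or control-system data. All field and identifier names are illustrative assumptions, not an actual data standard or Bühler interface.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SensorReading:
    """One measurement, carrying enough context to avoid a data silo.

    Field names are illustrative assumptions; a real deployment would
    follow an agreed data standard (e.g. a shared asset/batch naming scheme).
    """
    plant_id: str      # production site
    line_id: str       # process line within the plant
    batch_id: str      # lot of feed being produced
    sensor_type: str   # e.g. "NIR" or "laser_scattering"
    quantity: str      # e.g. "moisture_pct" or "median_particle_size_um"
    value: float
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# An NIR moisture reading and a particle-size reading from the same batch
# can now be joined on (plant_id, line_id, batch_id) with ERP or control data.
readings = [
    SensorReading("plant_A", "line_2", "batch_0815", "NIR", "moisture_pct", 12.4),
    SensorReading("plant_A", "line_2", "batch_0815",
                  "laser_scattering", "median_particle_size_um", 612.0),
]
```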

CONTEXT MATTERS
Since a data point is an isolated, often purely numerical, piece of information, it needs to be cleaned up and put into a relevant context before it becomes meaningful and valuable. That sounds trivial, but in practice it is not. For example, a string of particle size readings is meaningless without correlating it with process data, and combining different data sets takes not only the data itself but also the experience to match it with the correct context. A sample of ground feed with a median particle size of 612 microns will mean different things in different contexts: the plant operator may want to increase the median particle size to reduce electrical energy consumption, while the livestock owner may want to decrease it to improve the conversion of feed into animal weight. The conflict of interest is apparent. But even optimizing an isolated target can be challenging. Since the influence of particle size on animal health and performance depends on multiple parameters such as species, genetics, raw materials, and feed composition, there is no universal "optimal" particle size – it is entirely context dependent. In-depth, often cross-disciplinary process knowledge is fundamental to developing a robust model that considers these variables. Finally, relevant information and key performance indicators (KPIs) need to be pulled together in an intuitive user dashboard to assist human decision making or – in an advanced scenario – to set the foundation for self-optimizing, autonomous plant operations.
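As a toy illustration of this context dependence, the sketch below evaluates the same 612-micron median against target bands that vary by species and recipe. The bands and recipe names are invented placeholders to show the mechanism, not nutritional recommendations.

```python
# Hypothetical target bands for median particle size, in microns.
# The values are placeholders chosen for illustration only.
TARGET_BANDS_UM = {
    ("broiler", "corn_soy"): (550, 700),
    ("layer", "corn_soy"): (700, 900),
    ("piglet", "wheat_barley"): (400, 600),
}

def assess_particle_size(median_um: float, species: str, recipe: str) -> str:
    """Interpret one reading in the context of a species/recipe target band."""
    low, high = TARGET_BANDS_UM[(species, recipe)]
    if median_um < low:
        return (f"{median_um:.0f} um below band {low}-{high} um: "
                "grinding finer than needed (extra energy cost)")
    if median_um > high:
        return (f"{median_um:.0f} um above band {low}-{high} um: "
                "feed conversion may suffer")
    return f"{median_um:.0f} um within band {low}-{high} um"

# The identical measurement yields different conclusions in different contexts:
print(assess_particle_size(612, "broiler", "corn_soy"))
print(assess_particle_size(612, "layer", "corn_soy"))
```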

ALGORITHMS ARE ONLY AS SMART AS THEIR DEVELOPERS
While self-controlled process loops, such as moisture regulation of pellets, are already technologically feasible and supported by economically viable use cases, fully autonomous plant operation based on pre-set KPIs is still further down the road. The latter requires a large amount of qualified and diverse data to mimic the long-term experience of well-trained operators. The greatest challenge in the industry today is developing the logic of the semantic data and process model, describing the dependencies between input variables, and selecting appropriate datasets to train and test the algorithm. Since feed ingredients are agricultural commodities, they are exposed to many influencing factors such as climate or transport and storage conditions, which increases the challenge. In contrast, process data and data labels derived from IT/OT systems – motor load, bearing temperatures, machine vibrations, maintenance intervals, etc. – are less volatile and more structured. Such data is already used today for monitoring the health and state of processing equipment: statistical methods such as regression models, in combination with time series analysis, can predict the wear of machine components such as bearings with good precision, enabling predictive maintenance and thereby increasing plant availability.
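The regression-plus-time-series idea can be sketched in a few lines: fit a trend to a vibration time series and extrapolate to an alarm threshold to estimate when a bearing should be swapped. The data, the alarm limit, and the linear-trend assumption below are all simplifications for illustration, not a production condition-monitoring model.

```python
import numpy as np

# Synthetic daily bearing vibration levels (mm/s RMS) with a slow wear trend.
days = np.arange(120)
rng = np.random.default_rng(0)
vibration_rms = 2.0 + 0.015 * days + rng.normal(0, 0.1, days.size)

# Linear regression over the time series; polyfit returns slope, then intercept.
slope, intercept = np.polyfit(days, vibration_rms, deg=1)

ALARM_RMS = 4.5  # assumed vibration limit for this bearing, mm/s

if slope > 0:
    day_at_alarm = (ALARM_RMS - intercept) / slope
    remaining = day_at_alarm - days[-1]
    print(f"Wear trend: +{slope:.3f} mm/s per day; alarm level reached in "
          f"~{remaining:.0f} days -> schedule bearing replacement")
else:
    print("No upward wear trend detected")
```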
Interpreting these data and developing algorithms for predictive maintenance requires not only expertise in data science but also long-term process experience and application knowledge. It is a multidisciplinary task, requiring experts from data science, process technology, operations, maintenance, and even nutrition science. For instance, the motor load of a pellet mill is determined not only by the recipe but also by process conditions such as the feeding rate or the conditioning parameters – which are in turn interrelated with the formulation, for example through how protein or starch are modified during conditioning. Given these dependencies, the question is where to set the threshold value of the motor load to prevent machine damage or a blockage of the pelleting line. The imperative is to avoid both overfitting the model (the machine stops under uncritical conditions) and underfitting it (not all critical scenarios are captured), as either leads to production capacity losses. The larger and more diverse the underlying dataset, the more reliable predictions and self-regulating control loops become.
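One pragmatic way to handle such context-dependent thresholds – sketched below under assumed numbers – is to derive the motor-load trip level per recipe from historical load data rather than using a single global limit. The percentile and safety margin are arbitrary choices for illustration; set too tight, the mill trips under uncritical conditions, and set too loose, blockages slip through.

```python
import numpy as np

# Synthetic historical motor loads (% of rated load) per recipe.
rng = np.random.default_rng(1)
historical_load_pct = {
    "corn_soy_broiler": rng.normal(72, 4, 5_000),
    "wheat_barley_pig": rng.normal(81, 5, 5_000),
}

SAFETY_MARGIN_PCT = 3.0  # assumed margin above normal operating spread

def trip_threshold(recipe: str) -> float:
    """Recipe-specific trip level: 99.5th percentile of normal operation
    plus a safety margin, instead of one global limit for all recipes."""
    loads = historical_load_pct[recipe]
    return float(np.percentile(loads, 99.5)) + SAFETY_MARGIN_PCT

for recipe in historical_load_pct:
    print(f"{recipe}: trip above {trip_threshold(recipe):.1f}% motor load")
```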
The quality of the overall dataset also grows with the number of real-world datasets, and additional value can be leveraged by tapping the data of process equipment from different plant locations. Cloud solutions are the tool of choice to aggregate this data and put it into a meaningful context, enabling feed millers to benchmark performance indicators across multiple production sites or even against industry peers. Managing plants on KPIs such as operating expenses (OPEX), plant utilization, greenhouse gas emissions, or product quality may even trigger new or more flexible business models. Finally, to drive feed millers' margins in a world no longer fueled by oil but by data, contextual information and knowledge are the keys to success.
About Stefan Hoh
Stefan Hoh is Head of Market Segment Feed Milling & Premix at Bühler. He has profound expertise in global product management and digitalization and has been working in the animal feed industry for more than 10 years. Stefan holds Master's degrees in Food Science and Economics and has completed further studies in digitalization strategies.