The authors of this book have spent a lot of time working in IoT, going back well over a decade to when IoT was little more than a buzzword. When we think about IoT, our minds go to cheap, easy-to-use hardware and connected appliances or watches. This new crop of inexpensive hardware has opened people’s eyes to what can be done for minimal cost, but industry often requires a different level of hardware.
We can use IoT hardware and software to accomplish the goals of Industry 4.0 by providing a robust, industrial-strength set of technologies for instrumenting and measuring equipment and its environment. Bear in mind that industry is often conducted in extreme environmental conditions, and the cheapest approach is often not the right one. Weigh the trade-off between cost and reliability: if you have to replace a component too often, the value can be lost in effort, time, or data that goes missing while you wait for the replacement.
Let’s talk about the key areas to consider when turning to IoT as part of the solution. Say we are going to place a simple sensor on or near a device to measure temperature. We don’t need to be specific at the moment; just consider the conditions you might be facing, such as the following:
- Tough: Can your sensors and equipment withstand environmental conditions and pressures? Industrial equipment in the field can be in a rough environment. Does the sensor and corresponding transmitter require an ingress protection (IP) or National Electrical Manufacturers Association (NEMA) enclosure rating for protection? An IP rating classifies an enclosure by how well it protects against access to the internal components and against the ingress of dust, dirt, and liquids, which is essential for harsh outdoor environments. An IP67 rating, for example, indicates an enclosure that is sealed against dust and protected against temporary immersion in water up to a depth of one meter. NEMA ratings are similar to IP ratings but add classifications for corrosion resistance and hazardous locations. For some environments, such as oil and gas, an enclosure rated for Class I or Class II hazardous locations is required due to the presence of corrosive liquids, flammable gasses or vapors, or combustible dust. These environmental conditions and requirements can add cost and time to your effort in sourcing, testing, and possibly certifying your components for use in the field.
- Easy to deploy (and maintain): Make it as simple as possible to ensure speed and accuracy when deploying equipment. Deploying and registering your sensors and equipment should be simple, almost bulletproof, for the engineer on site. When deploying a sensor to a piece of equipment or a location, we have to ensure that once the sensor is in place and operating, we can tie it back to the right asset and location. Without that, the effort is useless, and none of the data further up the chain will be reliable. There are several options here: mobile apps with barcodes and even manual configuration are fine as long as the setup can be done correctly and consistently (a minimal sketch of such a registration record follows this list). Additionally, the sensor should be easy to attach and place. Granted, simple is not always possible, but as much pre-configuration as possible should be done beforehand, leaving the engineer with little to do on-site to complete the setup and installation. Runbooks should be well defined and include any troubleshooting information that might be needed in the field. This is especially true if the deployment people are not experienced with the new technology.
- Scalable: The ability to quickly deploy many sensors in the field should be considered. This can mean dozens, hundreds, or thousands of sensors across multiple locations or across the globe. Both hardware and software can be a concern when thinking about scalability. Something easy to deploy and configure can be rolled out by the thousands; however, if the software or storage is not set up to manage the data, the effort may be wasted. Cloud technology will help with the software part, although the applications used to view and analyze data need to keep up as the system grows. This means data systems and analysis should be designed to accommodate the potential millions of readings you might expect from all those sensors.
- Reliable: This ensures sensors and monitoring keep working over a long period of time. This is not the same as a sensor or node being tough: toughness is about the casing and packaging of the node, whereas reliability is about the electronics inside, which makes it all the more important. Do the electronics have sealed or glued connections? Are the sensors potted or otherwise protected? Potting means filling or surrounding the electronics with some type of gel or epoxy resin, essentially encapsulating them to protect against dust, vibration, and liquids. Of course, before going to this extreme, quality control and using high-quality components are recommended. Hot glue on your connections can be the first line of defense against wires working loose. If you go to the extreme of potting your components, be aware that it cannot be undone, so when a part goes bad, the entire assembly will need to be replaced, which may be costly. Carry out a cost-benefit analysis of the best approach for your industry to make the right design decisions.
- Secure: Make sure data and systems are protected from malicious actors and data theft by unauthorized parties. We will talk about security throughout this book. Using IoT technology can leave security holes across the entire data stream, and there are several aspects to consider. Ensure that data is secure while traveling upstream, and protect endpoints so that fake data cannot be introduced into the system to influence results or actions. Since the endpoints here are sensors or nodes in the field, physical security is the first step to consider. Do you need to apply any physical security to the deployment location? And if not, what type of tamper monitoring can be put in place to alert you if something seems amiss? Once data hits the cloud, traditional IT security methods can come into play, but with equipment in the field, your system can encounter many different types of threats.
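To make the easy to deploy point more concrete, here is a minimal Python sketch of what a provisioning record might look like when an engineer scans a sensor barcode and ties it to an asset and a location. The field names and the registration flow are illustrative assumptions, not any particular product’s API.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical provisioning record tying a physical sensor to an asset and
# a location at install time. Field names are illustrative, not a standard.
@dataclass
class SensorRegistration:
    sensor_id: str        # serial number or barcode scanned on site
    asset_id: str         # the pump, motor, or tank being instrumented
    site: str             # plant or field location
    latitude: float
    longitude: float
    installed_by: str
    installed_at: str     # ISO 8601 timestamp, UTC

def register_sensor(sensor_id: str, asset_id: str, site: str,
                    latitude: float, longitude: float, engineer: str) -> dict:
    """Build the payload a mobile app might upload after a barcode scan."""
    record = SensorRegistration(
        sensor_id=sensor_id,
        asset_id=asset_id,
        site=site,
        latitude=latitude,
        longitude=longitude,
        installed_by=engineer,
        installed_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)

# Example: one scan, one tap, and the sensor is tied to the right asset.
payload = register_sensor("SN-0042", "PUMP-7", "North Field",
                          36.17, -115.14, "j.engineer")
```

The point of pre-building a structure like this is that the engineer on site supplies only what cannot be known in advance; everything else is filled in automatically and consistently.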
If you are in the industry already, you will know some of this; if you are an operator, you will know your environment intimately. But it is still worth considering the environment and defining some basic requirements alongside your goals. This is an excellent point at which to think about the domain you are considering and create a simple checklist of critical criteria that you can build on as we go forward. These considerations go hand in hand with each other and drive toward a common goal. Using the preceding criteria, along with more to be added as we go, can help you navigate decisions and communicate to others the expectations of the technology.
Sensor technology
We use the term sensor technology broadly to include both the sensor or sensors involved in instrumenting an environment and the sensor node that reads the data from the sensor and transmits it to the receiver. We call these out separately because they do not always go hand in hand. Sensor technology is constantly evolving, and transmission protocols, such as low-power wide-area network (LPWAN) technologies and 5G, continue to mature. The sensors you wish to use may have specific characteristics and may not always be compatible with a sensor node that transmits data on the protocol you have defined. Let me share an example.
Several years ago, during the Wild West of IoT evolution (just kidding, it’s still the Wild West), one of the authors was involved with a proposal for a large city in Nevada; you can probably guess which city. Our partner in the deal was an IoT start-up; actually, it was way more than a start-up, with millions in funding, but it was relatively new to the IoT space. The company had some fascinating communication technology, which should have been a strong competitor to cellular technology and most LPWAN technologies. It was low power and could send data over very long distances, outdistancing other technologies such as LoRa (from long range) by miles.
This company spent a lot of money and energy on trying to sell and further develop its network, and since fewer towers or hotspots were required to blanket an area, they felt they could cover large areas, such as a city, with relative ease and lower cost. While this was probably true, little regard was given to the fact that no sensors or sensor nodes were available to use on the network. Chips were available and provided by the network provider, but the cost, time, and effort were left to sensor vendors to implement. Essentially the sensor vendor or consultant had to make a bet on this working based on little more than faith in the network company. In hindsight, it’s a bet we are glad we didn’t take.
Unfortunately, this turned out to be a failed strategy since the investment was too significant and complex compared to more available options at the time, using protocols such as LoRa, Sigfox, and LTE. We are still disappointed the company didn’t have the vision to see this hole in its strategy, and it has moved into the realm of also-rans in the IoT space.
The key takeaway here is to keep in mind the following:
- Can your sensor node or transmission unit communicate back to the cloud with your chosen protocol, or set of protocols if you need redundancy?
- Can your sensor node communicate with your actual sensor or set of sensors to read the measurements for data transmission?
There can often be a mismatch here, as the sensors themselves can use all kinds of unique protocols. For example, SDI-12 is a standard serial communications protocol used in agriculture and weather sensors and can be challenging to read if the sensor node is not designed for it. The protocol was defined in the late 80s and transmits ASCII characters over a single data line. There are many examples of serial protocols in industrial systems that can be decades old but are still very much the standard.
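To give a flavor of what reading such a protocol involves, here is a minimal Python sketch that polls an SDI-12 sensor through a serial adapter using the pyserial library. It assumes a hypothetical SDI-12-to-USB adapter that handles the electrical details of the bus (the break and marking) and it simplifies the command timing; a real deployment would honor the measurement delay the sensor reports.

```python
import time
import serial  # pyserial

# Minimal SDI-12 polling sketch. Assumes an SDI-12-to-USB adapter that handles
# the line break/marking and exposes a plain serial port; wiring and timing
# vary by adapter and sensor.
PORT = "/dev/ttyUSB0"      # hypothetical adapter port
ADDRESS = "0"              # SDI-12 sensor address (a single ASCII character)

def read_sdi12(port: str = PORT, address: str = ADDRESS) -> str:
    # SDI-12 uses 1200 baud, 7 data bits, even parity, 1 stop bit.
    with serial.Serial(port, baudrate=1200, bytesize=serial.SEVENBITS,
                       parity=serial.PARITY_EVEN, stopbits=serial.STOPBITS_ONE,
                       timeout=2) as link:
        link.write(f"{address}M!".encode("ascii"))   # start a measurement
        time.sleep(2)                                # simplification: fixed wait
        link.reset_input_buffer()                    # discard the "time/count" reply
        link.write(f"{address}D0!".encode("ascii"))  # request the measured data
        return link.readline().decode("ascii").strip()

# A soil-moisture probe might return something like "0+1.234+22.1" here.
```

If the sensor node you buy does not speak SDI-12 (or Modbus, or whichever legacy serial protocol your equipment uses), this translation work lands on you.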
Another example is if you need calibrated sensors, such as temperature sensors whose calibration must be traceable to NIST standards. If calibrated temperature sensors using your defined communication protocols are not commercially available, then you have limited options.
Every day, it seems, the sensor world gets a little brighter as a vast array of sensors, edge devices, and transmitter units or nodes becomes more readily available. Many of the most popular protocols are supported, with newer ones arriving more slowly as new network technology is adopted by the community and becomes more readily available. However, there are still sensor solution gaps for many situations.
One of our favorite options for this problem is from a company called Libelium out of Zaragoza, Spain. Libelium offers a robust mix-and-match approach to sensors and communication options of all different types. For example, you can choose sensors for measuring air quality, water quality, security, and agriculture, or for integration with industrial protocols such as Modbus. You can then pick a communication protocol, anything from LoRa to Wi-Fi to 4G, to connect the sensors and send measurement data to an existing application or web service. This flexible approach helps when you are trying to adhere to a standard communication protocol but cannot find an appropriate sensor that works with it.
Cost can certainly be a factor, especially at scale, and while prices seem to be continuously going down, here is where the myth of cheap, consumer-grade IoT again gets in the way of Industry 4.0. You get what you pay for, and this can be crucial in harsh environmental conditions and areas where you need to provide a standard approach.
IT versus OT
There is still a lot of confusion around information technology (IT), operational technology (OT), and this idea of convergence. But essentially, it is a simple concept to understand.
IT is something we are all familiar with in our daily lives. We run applications on our phones or laptops. Many of these applications run on servers or in the cloud and process data, producing orders, sales, and directives, or providing some type of analysis. This is the IT world that we know today. It’s a reasonably open world, and access can be gained from anywhere (provided security concerns are addressed).
OT, especially legacy systems, can be considered a more closed environment. OT traditionally ends within the walls of the factory. When you think of OT, think of supervisory control and data acquisition (SCADA), which is also run by servers but interacts with devices within a defined area of control. At a large scale, consider a power plant or water treatment plant: a pump shuts off when the water in a tank gets too high; when the level drops too low, the pump starts back up again. Monitors and alerts allow operators to visualize and help manage what is taking place with appropriate alarms and controls.
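That tank-and-pump behavior is a classic hysteresis control loop. The sketch below shows the logic in Python purely for illustration; in a real plant it would run in a PLC or SCADA controller, and the setpoints here are made up.

```python
# Keep a tank's level between a low and a high setpoint with hysteresis so
# the pump does not rapidly cycle on and off. Thresholds are illustrative.
HIGH_LEVEL_M = 4.5   # stop filling above this level (meters)
LOW_LEVEL_M = 1.0    # start filling below this level (meters)

def control_pump(level_m: float, pump_on: bool) -> bool:
    """Return the new pump state given the current tank level."""
    if level_m >= HIGH_LEVEL_M:
        return False          # tank is full enough, shut the pump off
    if level_m <= LOW_LEVEL_M:
        return True           # tank is getting low, start the pump
    return pump_on            # in between: keep doing what we were doing

# Example: the pump stays on while the tank refills, then shuts off at 4.6 m.
state = False
for reading in (0.8, 2.0, 3.7, 4.6, 3.9):
    state = control_pump(reading, state)
```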
Industry 4.0 and organizational alignment
Figure 1.1 illustrates how different areas within the business fit together and into the big picture. For this to work, there are strong dependencies across IT, operations, and business and management; all stakeholders must work together to realize the benefit.
Figure 1.1 – Industry 4.0 organizational alignment
OK, what is the big deal? The big deal is that IT’s primary goal is to provide the business and management with information and the ability to support the decisions and operations of the company. How many widgets were produced? How many barrels were processed? How many were sold, and at what price? The business lives and dies on this information and data being available faster and more accurately to provide a competitive edge. Often, what is missing is insight into the real-time production of widgets or processing of barrels. Newly built or upgraded factories can provide real-time information, but in legacy systems, even relatively young ones, that information is hidden. And in production, modifying (reverse engineering) devices and machines voids the warranty and, if not done correctly, may lead to complications. With the emergence of IoT, we can bring some of that data from the closed OT world into the often more integrated IT world, where it can be used more effectively.
The focus of this book is on getting the hidden data, storing and processing it, and then using this information effectively.
The business is not the only one to benefit from introducing new data. Operational teams will gain insight into the equipment and production that they didn’t have before. Uptime and maintenance can be improved, cost reduced, and throughput increased as a new understanding of the environment, a new normal, begins to emerge. The full benefit of digitalization should become clear in the rest of this chapter and throughout this book as we share examples of collecting data and then using that data to realize value across the organizational spectrum.
You can get there from here
Industry 4.0 is driven by IoT, but IoT is just one part of the picture. A big part, granted, as it allows visibility into equipment and operations as never before. A longer roadmap is required to achieve the vision of digitizing your industry and the transformative changes that can take place.
Important note
We are not fans of big, complicated, eye-chart-type visuals, so throughout this book, we will keep the visuals simple and coherent to allow you, the reader, to immediately understand the concept rather than asking you to decipher something overly complex.
Figure 1.2 illustrates a basic roadmap toward digitizing your industry, or moving toward Industry 4.0. We have broken this down into four primary areas of consideration for improvement. Within each of these areas, there are many technical and business factors to weigh.
For many, the status quo or current state of their process is operate. Consider this business as usual, with maybe decades-old processes that, for the most part, just work. There may be some instrumentation, perhaps even a lot, but no cohesion or integration across machines, systems, or plants. Everyone knows we can do better, but how do we move forward? Figure 1.2 illustrates a set of steps for continuous improvement in your equipment and environment. Instrumenting the environment and acting on the resulting data improves both the business and technical responses of the organization.
Figure 1.2 – Industry 4.0 roadmap
Let’s talk about each step of the process in turn. We have labeled each area based on the technical changes because that is the focus of this book. However, this can be adapted based on your specific needs.
Instrument and connect
Moving beyond the general operate state requires in-depth visibility into your systems and environment. Consider this the instrument phase, where the goal is to start gathering data from your systems and environment. The other side of the equation is knowing what should be instrumented and why. This effort of instrumenting and collecting measurements is where business and operations can collaborate to ensure that the data collected is actually needed and to understand how that data will be used to drive processes and the business forward.
It is usually not the best strategy to jump in and instrument everything. While it may seem like more is better and that you have nothing to lose by doing so, spending time and money on equipment, manpower, bandwidth, and storage for data that is never used ends up being a losing proposition. Once deployed, that instrumentation also requires ongoing maintenance even when the data it produces provides little value.
Another question to ask is how much data is needed. This depends on the velocity of what you are measuring and collecting. Some systems can churn out hundreds of measurements per second. How and where should this information be stored and analyzed? Does all of it need to go to the cloud? Can we process it on the edge and send only aggregate results upstream? What are the pros and cons of each approach to managing this data? Business and operations should be intimately involved in these discussions to help decide what level of granularity is needed and how it will be used. This, in turn, can drive IT decisions for data management and processing.
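As an illustration of the edge-aggregation option, here is a minimal Python sketch in which a node condenses a window of high-rate readings into a single summary record before transmitting it upstream. The window size, statistics, and field names are illustrative assumptions rather than a standard.

```python
import statistics
from typing import Iterable

# Edge-side aggregation: a node sampling hundreds of readings per second keeps
# only a summary per window and ships that upstream, instead of forwarding
# every raw value to the cloud.
def summarize_window(readings: Iterable[float], sensor_id: str,
                     window_start: float) -> dict:
    values = list(readings)
    return {
        "sensor_id": sensor_id,
        "window_start": window_start,   # epoch seconds of the window
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": statistics.fmean(values),
        "stdev": statistics.pstdev(values),
    }

# 1,000 raw vibration samples become one compact record to transmit.
summary = summarize_window((0.02 * i % 1.5 for i in range(1000)),
                           sensor_id="VIB-3", window_start=1_700_000_000.0)
```

The trade-off is granularity: you save bandwidth and storage, but you can no longer reconstruct the raw signal in the cloud, which is exactly why business and operations need a say in what level of detail is kept.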
Baseline and analyze
Baselining your system’s normal operating environment can be an eye-opening experience. Sometimes (actually, a lot of the time) we don’t really know what normal is for our equipment until we measure it and then see it in some graphical format. SCADA systems often have this insight into pressure, temperature, and flow characteristics, but not all industrial operations are driven by SCADA, or the information is hidden from all but on-site equipment operators. The insight gained here can be enormous. Measuring a handful of values can provide deeper information about the working condition of a piece of equipment or an end-to-end system and, as we will see later, drive efficiency and surface potential maintenance issues. Understanding the baseline of system performance and conditions at a known production rate is powerful, and it lets you ask questions such as, what happens when the production rate goes up, and how does that affect the machine conditions?
Defining a baseline can take a long time; it is not done in a day or even a week. Expect at least several production cycles, which, for seasonal activities, could take months or years. Hopefully, most cycles do not take that long, but if your industry is influenced by weather or environmental conditions, there is that possibility. You can continuously gain good insight by getting comfortable with what your baseline looks like along the way, but unexpected curves and influences only reveal themselves with time.
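A very simple form of baselining is to capture readings over a period you trust to be normal and keep a statistical band per metric to compare against later. The Python sketch below uses a mean plus or minus three standard deviations as the band; the band width and the sample temperatures are illustrative assumptions.

```python
import statistics

# Build a "normal" band from readings collected at a steady production rate,
# then check later readings against it. Three sigmas is an arbitrary choice.
def build_baseline(history: list[float], sigmas: float = 3.0) -> tuple[float, float]:
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return (mean - sigmas * stdev, mean + sigmas * stdev)

def is_normal(value: float, band: tuple[float, float]) -> bool:
    low, high = band
    return low <= value <= high

# Bearing temperatures (in Celsius) logged over several normal production cycles.
normal_temps = [61.2, 62.0, 60.8, 61.7, 62.3, 61.1, 60.9, 62.5]
band = build_baseline(normal_temps)
print(is_normal(61.8, band), is_normal(71.4, band))   # True False
```

In practice the baseline would be kept per metric, per machine, and per production rate, and would be recomputed as more cycles of data arrive.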
Prediction and alerting
From a technical side, we have many opportunities. Now that systems are being more closely monitored, you should expect to see variations in the data as problems occur and equipment shuts down. Maybe there were some unexpected vibrations or a temperature rise before the shutdown occurred. Can we monitor for a particular set of variances? Do the vibrations occur when a part is ready for replacement or maintenance? This is the beginning of condition-based maintenance, where new data or real-time monitoring of the environment can alert the operator to a set of conditions that may lead to failure.
To accomplish this, we need to start building predictive models. Tooling today makes it a relatively straightforward process to create a predictive model; however, much of the work in the baseline phase will help you determine which data to prepare for modeling. Generally, we are looking at data to help predict downtime or failures of equipment or systems; however, does your data actually support that? We will dive into the details of predictive modeling and how to use it in your architecture in the coming chapters.
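To give a flavor of what such a model can look like, here is a minimal Python sketch, one possible approach among many rather than the method used later in this book, that trains an anomaly detector on baseline data with scikit-learn and flags readings that do not resemble that baseline. The features and numbers are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Train an anomaly detector only on baseline ("normal") data, then flag
# readings that look unlike anything seen during the baseline phase.
rng = np.random.default_rng(seed=7)
normal = np.column_stack([
    rng.normal(61.5, 0.6, 500),    # bearing temperature (Celsius)
    rng.normal(2.0, 0.2, 500),     # vibration (mm/s RMS)
    rng.normal(5.5, 0.1, 500),     # discharge pressure (bar)
])

model = IsolationForest(contamination=0.01, random_state=7).fit(normal)

# New readings: the second row runs hot and vibrates more than the baseline.
new = np.array([[61.7, 2.1, 5.5],
                [68.9, 3.4, 5.2]])
print(model.predict(new))   # 1 = looks normal, -1 = flag for a closer look
```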
We can often start our journey by using simple thresholds or comparisons on specific values or sets. This is especially true when you know what specific events or conditions you are looking for but are not quite sure how predictive models will advance your cause. Does the temperature rise above a specific degree? Does the energy usage on a pump get higher while the pressure gets lower? These are simple examples, granted, but powerful tools in helping to determine when something might need to be checked. At this point, we are still triggering more manual alerts, effectively telling someone to check something. This could be as simple as an email or SMS, or a more advanced trouble ticket being opened automatically on your enterprise asset management system.
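Here is a minimal Python sketch of that kind of rule-based alerting, using the two examples just mentioned. The thresholds and the notification stub are illustrative placeholders; in practice the action might be an email, an SMS, or a ticket opened automatically in an enterprise asset management system.

```python
# Simple rule-based alerting before any predictive model exists.
TEMP_LIMIT_C = 80.0
POWER_LIMIT_KW = 15.0
PRESSURE_FLOOR_BAR = 3.0

def notify(message: str) -> None:
    print(f"ALERT: {message}")   # stand-in for email/SMS/ticket creation

def check_reading(temp_c: float, power_kw: float, pressure_bar: float) -> None:
    if temp_c > TEMP_LIMIT_C:
        notify(f"temperature {temp_c:.1f} C exceeds {TEMP_LIMIT_C} C limit")
    if power_kw > POWER_LIMIT_KW and pressure_bar < PRESSURE_FLOOR_BAR:
        notify("pump drawing more power while pressure drops - check for wear")

check_reading(temp_c=83.2, power_kw=16.4, pressure_bar=2.7)
```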
But really, we can now take this further into the business side of things and better understand production cycles, issues, and capacity constraints, not only for finished products but also for subprocesses that may cause bottlenecks.
Automation and improvement
Industry 2.0 and 3.0 brought a lot of automation into manufacturing and processing. Our focus is more on the automation of the overall business and what is produced. The ability to monitor and eventually steer your production closer to real time allows the business to be more agile and respond more easily to customer demands. That broader topic is well beyond the scope of this book; it would include connecting customer demand, supply, and fulfillment to the digitalized production or factory that is our focus here.
However, as historical data accumulates over time, a more detailed analysis of where and when improvements can be made becomes possible.
Visibility is everything
We probably can’t say this enough. Possibly, this is the gist of the entire book, along with some focus on what to do after you have better visibility. It was mentioned before that understanding your baseline, or the normal operating conditions of your environment, can provide clear insight into what is truly normal and when some type of abnormality occurs. This can only happen with clear instrumentation. This is true in almost any industry or science. Most experts will explain that instrumenting your environment allows you to gain new insight with a precision not previously available. Software developers who have used deeper inspection, such as bytecode instrumentation or injection, can easily explain the advantage of increased visibility into aspects of a running system. The same is true for physical systems and being able to view and analyze the physical characteristics of a piece of equipment or an environment.
Another aspect to consider is global or widely distributed operations. Modern equipment or systems can be outfitted with all the instrumentation needed for the safe and efficient running of the process. However, what about systems that are geographically distributed across vast areas? Combining and even comparing information from systems globally can provide new opportunities for decision-making.
Along this path should be a feedback loop, allowing adjustments and updates across the entire monitoring chain.
Business driving innovation
A quick web search will provide an abundance of IoT information: lists of ideas, examples, and use cases for implementing IoT in your industry and the value it provides. Sometimes there are interesting use cases, but often the content is driven by marketing and sales teams looking to sell their solution. Unless you have a good working knowledge of an industry, this can be misleading. Earlier, we mentioned that just because you can instrument something does not always mean you should. Time and cost should be taken into account: consider not only the cost of adding sensors to gather information, but also the data collection and maintenance costs of continuing to gather data.
IT, operations, and business stakeholders need to work together to understand what it is that they want to achieve. Then, real subject matter experts need to be involved to determine how to get that information and which data points to interpret to achieve those goals. Operations understand better than anyone how to develop, manufacture, or produce materials or goods. Business stakeholders know how to price, sell, and distribute those goods to end customers. There are nuances in business and operations that the other side may not understand intimately or agree with, but working together to achieve better visibility and control can be a powerful weapon for competing in the global market.
The truth is, business and management may not know what they need to instrument at a detailed level. But they do know what information they need to make decisions, such as better overall equipment effectiveness (OEE), downtime reports, or more detailed forecasting for service lines over a period of time. OEE is a measure of manufacturing productivity that combines equipment availability, performance, and the quality of manufacturing output. Operations can then make an informed decision about what they need to do to obtain and provide that information. It’s a complex process that is greatly oversimplified here, but hopefully it provides some guidance that no one area of the business should work in a vacuum on this endeavor.
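Since OEE comes up so often in these discussions, here is a small worked example of the standard calculation, OEE = Availability × Performance × Quality, sketched in Python. The shift numbers are made up purely to illustrate the arithmetic.

```python
# Standard OEE calculation: availability x performance x quality.
def oee(planned_minutes: float, downtime_minutes: float,
        ideal_cycle_time_min: float, total_count: int, good_count: int) -> float:
    run_time = planned_minutes - downtime_minutes
    availability = run_time / planned_minutes                    # time actually running
    performance = (ideal_cycle_time_min * total_count) / run_time  # speed vs. ideal
    quality = good_count / total_count                           # good units vs. total
    return availability * performance * quality

# 480-minute shift, 47 minutes of stops, ideal cycle of 1 minute per widget,
# 400 widgets produced, 388 of them good.
print(f"OEE = {oee(480, 47, 1.0, 400, 388):.1%}")   # OEE = 80.8%
```

Numbers like this are exactly the kind of business-level decision input that, in turn, tells operations which availability, speed, and quality data needs to be instrumented and collected.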
So far, we have provided a big-picture overview of Industry 4.0, the digitalization of industry, and Industrial IoT. There are multiple ways to accomplish systematic improvement in your production and your management of equipment and processes, and the roadmap presented here is one approach. Moving forward, we want to dig deeper into some of the technical aspects of starting your journey and adopting a digitalization mindset and approach. What are the steps and goals for moving forward and getting incremental value along the way? In addition, what are the pitfalls in adoption, and how difficult will it be? We will explore further the ideas of instrumentation, analysis, and convergence for providing value across all stakeholders.