For your big data, sometimes there’s no place like home

As the IoT becomes more widespread, companies are coming to the realization that although IoT stands for “Internet of Things,” the reality is that these solutions are less about the things and more about using the data generated from these things.

As the volume of data that these solutions deliver grows, it is straining traditional ways of reporting on and investigating that data. Those methods were never designed to show users what is in the data; they were built to enable self-guided exploration of it.

The problem is even more complex in industrial IoT (IIoT), where billions of data points can be generated every second from manufacturing systems and factory floors. This really presents an opportunity — not just to have data, but to generate actionable intelligence and deep insights from this dataflow.

This is where machine learning has started to make its presence known. But with all this data and all these new analytical tools, the question for companies deploying IoT solutions becomes: where do these analytics now live?

Here’s the truth: data no longer lives as a static object. The days of fixed data at rest have given way to time series-driven data streams that change state and are constantly in motion.

Think of the “Three Vs”

Think of data today across three attributes — Volume, Velocity, and Variety. With these in mind, we can start to look at where it is most appropriate for certain data to be ingested, processed, and delivered to take action on this data.

The first place where we see this new approach to data is at the edge, with the emergence of edge analytics. For many applications, driving data all the way back to the cloud to be aggregated is neither timely, cost-effective, nor secure. Being able to turn your data around at the edge or on-premises allows for more efficient deployment of solutions that monitor data streams in real time for patterns and anomalies. These can then drive more intelligent business solutions that include automated predictions, efficiency ratings, and time-to-failure analysis.
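To make the idea concrete, here is a minimal sketch of the kind of anomaly watching an edge device might do: flag any sensor reading that drifts too far from a rolling baseline, without ever sending raw data to the cloud. This is an illustrative example, not the ThingWorx implementation; the window size and threshold are hypothetical choices.

```python
from collections import deque
import math

def make_anomaly_detector(window=50, threshold=3.0):
    """Return a checker that flags readings more than `threshold`
    standard deviations away from the rolling mean of recent values."""
    history = deque(maxlen=window)

    def check(value):
        is_anomaly = False
        if len(history) >= 10:  # wait for enough samples to estimate spread
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) > threshold * std:
                is_anomaly = True
        history.append(value)
        return is_anomaly

    return check

# Example: a steady sensor signal with one spike at index 11
detector = make_anomaly_detector(window=20, threshold=3.0)
readings = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1,
            10.0, 50.0, 10.1]
flags = [detector(r) for r in readings]
print(flags.index(True))  # → 11, only the spike is flagged
```

The point of running this at the edge rather than in the cloud is latency and bandwidth: only the flagged events (not every raw reading) need to leave the device.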

This reality is the reason behind ThingWorx Analytics, which gives IIoT solution builders key capabilities to tackle the volume, velocity, and variety of industrial data. It positions companies to leverage their data and build incremental business models without having to staff up dramatically to do so. Although the idea of data is not new, the role of a data scientist within a company is changing, and quite dramatically.

The risk of not getting the edge right

If you deploy technologies at the edge poorly, you can turn your data scientists into what is effectively an expensive professional services organization, and data modeling into a laborious process driven by human effort rather than by the timeliness of your data.

But in business, the right data delivered at the wrong time is still a suboptimal outcome.

The answer is a suite of products that helps your team quickly build models, then wrap automation around those models to keep your data moving at the same speed as your business.

ThingWorx Analytics does this in several steps, designed to automate tasks for data scientists that were previously manual and time-consuming. This accelerates their ability to construct and deploy automated advanced analytical capabilities within solutions. The key to this is context and understanding where your data exists when applied to your internal business processes and use cases.

Without this contextual information, you cannot find the actionable data you need to make informed, proactive decisions. If there is a gap between data and action in IIoT, what bridges that gap is context. This is where we can finally have that meaningful discussion about the use of machine learning within industrial applications.

It’s important to point out that IIoT analytics looks very different from previous generations of business intelligence (BI) tools. Most of those tools are business-assistive: they give a human better ways to traverse data sets and build dashboards, and they rely on that human to discover insights within the data. Machine learning can assist in the investigation of data, often surfacing more insights from deeper and wider data sets.

New tools allow for pattern watching

The emergence of these autonomous learning technologies demands a new approach. With autonomous learning, you can track data streams in real time to watch for patterns and anomalies. You can move past basic analysis to drive predictions and optimizations. And you can alter business or operational processes in real time to maximize a benefit or minimize a risk.

In reality, even teams of humans can never match machine learning’s pace at producing new models. As a result, BI is being pressed into service in places it was never developed for. Traditional BI solutions still hold value in certain operational areas of a business, but they are not built for IoT at their core. One of the advantages that defines solutions such as ThingWorx Analytics is years of IP development in machine learning technologies.

Done right, machine learning can improve the impact of your data scientists, decrease the time to market for new analytics, and make those analytics available to more internal teams beyond just the data team.

This represents a unique opportunity for companies to benefit from their data quickly and in a cost-effective way. But in order to do so, efficiency needs to improve. And it turns out that even with new technologies like IoT, BI, and the cloud, when it comes to data sometimes there’s no place like home.

This article was produced in partnership with PTC. Learn more about how the ThingWorx Analytics platform works and receive important updates.
