Dear Cloud: Connected things have brains, too

One way to gauge a market’s maturity is to look at how simplistically its solutions are described. IoT has often been described as some sort of physical device that pipes sensor data into the cloud for deep learning. This is actually not a bad description. In fact, we do generally want to move as much intelligence as feasible to the commodity, high-power resources in the cloud. However, as IoT matures, we’re realizing that it isn’t feasible for all of the intelligence to live in the cloud – it’s just not that simple.

Context, security and performance drive us to distribute this intelligence not only to the gateways, but also to the edge devices (even resource-constrained ones). That raises the question: why?

Why intelligence on the gateway?

I recently spoke with a prominent cloud IoT platform vendor who said that at least 85% of the IoT solutions he encountered had some sort of gateway. Gateways bridge between a local network of edge devices and the internet or intranet. They typically contain enough processing resources to support an OS such as Linux, making it easy for IT software programmers to transition to gateway programming. Programmers ideally abstract the software functions on the gateway behind service-oriented APIs (yes, running on the gateway) to further aid in cloud IoT platform integration. But why put software here in the first place?

As a bridge between the internet and the local network, the gateway has a context that makes it an ideal place for relevant decisions and intelligence. The local network is often running a different protocol stack than the backhaul network (think 802.15.4 vs. cellular), and the gateway has to adapt between those stacks. For battery-operated devices, the gateway may schedule and manage the traffic to the edge nodes. The gateway also serves as a prime location to house edge device and network logic, so that the cloud platform doesn’t have to understand all of the variations.
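To make that adaptation concrete, here is a minimal, purely illustrative Python sketch of a gateway unpacking a compact binary frame from the local network and re-encoding it as JSON for the backhaul. The 5-byte frame layout, field names and scaling are assumptions for illustration, not a real protocol.

```python
# Illustrative sketch only: unpack a compact binary frame as it might arrive
# from a low-power local network and re-encode it as JSON for the backhaul.
# The frame layout (node id, sensor type, scaled value) is an assumption.
import json
import struct

SENSOR_TYPES = {1: "temperature_c", 2: "humidity_pct"}

def adapt_frame(frame: bytes) -> str:
    node_id, sensor_type, raw_value = struct.unpack(">HBH", frame[:5])
    return json.dumps({
        "node": node_id,
        "metric": SENSOR_TYPES.get(sensor_type, "unknown"),
        "value": raw_value / 100.0,   # value sent as hundredths to save bytes
    })

# Example: node 42 reporting 21.37 degrees Celsius
print(adapt_frame(struct.pack(">HBH", 42, 1, 2137)))
```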

Daniel Barnes, Director of Product Management, Synapse

The gateway makes decisions in many applications due to latency requirements. When you press your light switch, you expect the light to turn on in less than a second. If it doesn’t, you’ll probably press it again. If the decision takes place in the cloud, the event is triggered at the edge device (your finger press), transmitted to the cloud, a decision is made, a resulting event is sent to the light, and then the light turns on. That round-trip delay from edge to cloud and back is difficult to accomplish in hundreds of milliseconds. IoT lighting control developers know this from experience (and logical deduction). The gateway has all of the context to make this decision and is positioned to do it with an acceptable amount of latency.
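A rough sketch of the pattern, with hypothetical turn_on_light and notify_cloud functions standing in for real hardware and cloud clients: the gateway acts on the switch press immediately and reports to the cloud in the background, so the user never waits on the round trip.

```python
# Sketch of a latency-sensitive decision staying on the gateway: the switch
# event is handled locally and the cloud is only notified afterwards.
# turn_on_light() and notify_cloud() are hypothetical stand-ins.
import threading
import time

def turn_on_light(light_id: str) -> None:
    print(f"[local] light {light_id} on")        # happens within milliseconds

def notify_cloud(event: dict) -> None:
    time.sleep(0.5)                              # simulated backhaul latency
    print(f"[cloud] logged {event}")

def on_switch_pressed(switch_id: str, light_id: str) -> None:
    turn_on_light(light_id)                      # act first, locally
    threading.Thread(                            # report in the background
        target=notify_cloud,
        args=({"switch": switch_id, "light": light_id, "ts": time.time()},),
        daemon=True,
    ).start()

on_switch_pressed("kitchen-switch", "kitchen-light")
time.sleep(1)  # keep the demo alive long enough for the cloud notification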

Security is one of those frustrating necessities of life. You never need it until you do, and it’s always getting in the way of flexibility and performance. To make matters worse, it isn’t a binary decision; it’s a gradient. How much security do you need? How much flexibility and performance are you willing to give up? The good news is that the gateway gives you additional flexibility in your security implementation. In many architectures the gateway limits the exposure of the edge devices by providing a single point of control for all of the devices behind it. While some implementations allow direct IP access to the edge devices, most either perform NAT or hide the edge devices entirely behind a set of services on the gateway. Many IoT applications don’t even have an internet connection; they run on an on-premises server or entirely on the gateway.
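As a sketch of that last pattern, here is a hypothetical gateway exposing its edge devices only through a small service API, using nothing but the Python standard library. The cloud (or an on-premises server) talks to this API; the local addresses and radio protocol never leave the gateway. Device names and readings are made up for illustration.

```python
# Minimal sketch of a service-oriented API on a gateway, using only the
# Python standard library. Device names and readings are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a real gateway this cache would be filled by the local radio stack;
# here it is just a hard-coded stand-in.
DEVICE_CACHE = {
    "sensor-01": {"temperature_c": 21.5, "battery_pct": 87},
    "sensor-02": {"temperature_c": 19.8, "battery_pct": 93},
}

class GatewayAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose edge devices as /devices/<id> without revealing the
        # local network protocol or addressing behind the gateway.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "devices" and parts[1] in DEVICE_CACHE:
            body = json.dumps(DEVICE_CACHE[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GatewayAPI).serve_forever()
```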

Gateway software often acts as an on-premises data store, whether because the backhaul is a cellular link or as a fail-safe when the internet connection is lost. The gateway aggregates the data, may perform threshold-crossing analysis and may even implement a rules engine, all to stay off the backhaul link except when necessary.
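A minimal sketch of that behavior, assuming a hypothetical upload_batch function standing in for the real cloud client: readings are buffered on the gateway, and the backhaul is only used when a threshold is crossed or a batch fills up.

```python
# Sketch of a gateway buffering readings locally and only using the backhaul
# when a threshold is crossed or the buffer fills. upload_batch() is a
# hypothetical stand-in for the real cloud client; limits are assumptions.
from collections import deque

BUFFER = deque(maxlen=1000)     # on-gateway storage
THRESHOLD_C = 80.0              # alert threshold (assumed)
BATCH_SIZE = 100

def upload_batch(readings):
    print(f"uploading {len(readings)} readings over the backhaul")

def handle_reading(temp_c: float) -> None:
    BUFFER.append(temp_c)
    if temp_c > THRESHOLD_C:
        upload_batch(list(BUFFER))   # threshold crossed: push immediately
        BUFFER.clear()
    elif len(BUFFER) >= BATCH_SIZE:
        upload_batch(list(BUFFER))   # otherwise batch to save bandwidth
        BUFFER.clear()

for t in [21.0] * 99 + [85.0]:
    handle_reading(t)
```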

Why intelligence on the edge device?

Many IoT cloud platform vendors seem to have adopted the “If we build it, they will come” mentality, meaning if the cloud platform is provided, then people will just plug their things into it. Unfortunately, in business, there are no whispers in beautiful fields of grain, just cold, hard facts. Even though a cloud platform can perform deep learning with neural networks using distributed elastic compute resources, it’ll never produce any valuable insights without real data. That’s the cold, hard fact. IoT needs the cloud platform to produce insights, and it needs the Things to generate the data that makes the insights valuable. It’s true that the Things don’t need neural networks to generate the data, but they do need some intelligence – intelligence that requires software. This should scare every IT developer out there – it’s software, all right, but it’s embedded software without your precious Linux and Java garbage collector. If I’m you (and I have been), then I’m really questioning the need for software on these embedded devices. Well, here are some considerations.

The edge devices integrate with physical devices: temperature sensors, light sensors, vibration sensors and even legacy PLCs. These sensors have a variety of interfaces such as 0-10 V, 4-20 mA and MODBUS over RS-232. This data could be communicated in its raw form to the cloud, but then the burden is on the cloud to understand a lot of detail about each and every device. Most IoT edge devices perform at least some of the conversion from the physical sensor/actuator to a logical set of information that is more generic for the cloud. For example, rather than interpret the voltage scale of a specific temperature sensor, the cloud platform can just read degrees Celsius.
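For instance, a 4-20 mA temperature loop might be converted right on the edge device, so the cloud only ever sees engineering units. The 0 to 100 degrees Celsius span below is an assumed calibration for a hypothetical sensor, not a real device's.

```python
# Sketch of converting a raw 4-20 mA sensor reading into degrees Celsius on
# the edge device. The 0-100 C span is an assumption for illustration.
def ma_to_celsius(current_ma: float, t_min=0.0, t_max=100.0) -> float:
    # 4 mA maps to t_min, 20 mA maps to t_max, linear in between
    current_ma = max(4.0, min(20.0, current_ma))     # clamp faulty readings
    return t_min + (current_ma - 4.0) * (t_max - t_min) / 16.0

print(ma_to_celsius(12.0))   # midpoint of the loop -> 50.0 C
```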

Battery-operated edge devices often implement a state machine in software to manage power consumption. They run in an active-processing mode for very short periods of time and then drop into a low-current sleep. The state machine on the edge device can’t tolerate the round-trip latency of remote cloud control between state transitions. The cloud platform will often control the state of the device at a macro level (such as waking it for an upgrade), but it can’t possibly manage all of the specific actions necessary to perform the state transitions.
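A toy version of such a wake/measure/transmit/sleep cycle is sketched below in ordinary Python (not SNAPpy); the timings and function names are invented for illustration.

```python
# Minimal sketch of the kind of wake/measure/transmit/sleep state machine a
# battery-powered edge device runs locally. Timings and callbacks are
# illustrative assumptions, not a real platform's APIs.
import time

def run_duty_cycle(read_sensor, send_reading, sleep_s=60):
    state = "WAKE"
    value = None
    while True:
        if state == "WAKE":
            state = "MEASURE"
        elif state == "MEASURE":
            value = read_sensor()          # a few milliseconds of active time
            state = "TRANSMIT"
        elif state == "TRANSMIT":
            send_reading(value)            # radio on only as long as needed
            state = "SLEEP"
        elif state == "SLEEP":
            time.sleep(sleep_s)            # stand-in for low-current sleep
            state = "WAKE"

# Example usage (uncomment to run a 1-second duty cycle):
# run_duty_cycle(lambda: 21.5, print, sleep_s=1)
```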

The edge devices also manage the performance of the local network, as well as the number of data points the gateway and cloud platform are exposed to, by making some local decisions. For example, a fire-extinguisher monitoring application may only send the level of the extinguisher fluid when it drops below a certain amount, rather than reporting periodically.
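In code, that report-by-exception logic might look like the sketch below; the thresholds and function names are assumptions for illustration.

```python
# Sketch of report-by-exception on the edge device: the fire-extinguisher
# level is only transmitted when it drops below a limit or changes
# significantly, keeping routine traffic off the network. Names are hypothetical.
LOW_LEVEL_PCT = 75.0
MIN_CHANGE_PCT = 5.0
_last_reported = None

def maybe_report(level_pct: float, send):
    global _last_reported
    if level_pct < LOW_LEVEL_PCT or (
        _last_reported is None or abs(level_pct - _last_reported) >= MIN_CHANGE_PCT
    ):
        send({"extinguisher_level_pct": level_pct})
        _last_reported = level_pct

for reading in [98.0, 97.5, 96.9, 72.0]:
    maybe_report(reading, print)   # only the first and last readings are sent
```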

I could continue to describe cases for software on the edge device, such as encryption on mesh networks, immediate reaction and reliable data transfer. Suffice it to say that there is software on the edge devices. So how will your IT developer write that software, and how will you keep it up to date? The good news is that there are Things platforms solving these problems. Some provide VMs for separating the core OS and networking functions (generally written in C) from the user application (written in a language such as Python). These also provide the added benefit of remotely upgrading the user application separately from the core OS and networking.

The IoT Things Platform

I initially introduced the need for a new type of IoT platform, the Things platform, in Welcome to yet another IoT Platform, describing the large set of needs not met by the IoT cloud platforms. In To recharge or not to recharge: A battery of IoT questions, I described a specific set of problems a Things platform should solve associated with long battery life. We need the IoT cloud platforms to do much of the heavy processing, but we need IoT Things platforms to answer the challenge of distributing intelligence to the Things. SNAP: The Things Platform is Synapse’s answer to the challenges of Things.

This article was produced in partnership with Synapse Wireless
