The picture recently painted of a network of ubiquitous connectivity for each and every class of processor-endowed device on the planet goes so far as to suggest that things such as shipping tags, pacemakers, pipeline stress sensors, thermometers, and nuclear radiation monitors expose their methods to the world – perhaps through a RESTful interface. Only through such openness, the argument goes, could the spark of ingenuity for a standardized, all-purpose API be generated.
As KORE Telematics President and COO Alex Brisbourne told us in Part 1 of this three-part discussion with ReadWriteWeb, the engineering of such specialized devices may actually work against the creation of such an API. But commonality and generalization of functions are what drive costs down for device manufacturers – the more specialized something is, the smaller its potential market. Is there anything that “intelligent device” managers could learn from the design of existing, specialized, narrowcast (to use Brisbourne’s term) devices – lessons that developers could apply to the architecture of the devices that real-world apps will remotely monitor and manage?
Alex Brisbourne, KORE Telematics: I think the broader question is, what are the lessons to be learned? And I think there are three. Number one: When you can utilize standardized interface APIs – whether they be truly industry-standard or broadly available APIs that are utilized and re-utilized by a number of people – you can bring efficiencies of getting products to market. Case in point: If I go back and look five or six years, there was a genuine belief that there was a business in and of itself [around] the capability of simply being able to put a dot on a map for every location, a pin on a map – where your car was driving down the road. That could be measured in man-weeks or months of development; it was specific to a mapping provider, specific to a processor architecture, and probably specific to whether, let’s say, you had location capabilities from the [tracking device].
“A patient alarm system for a guy walking around with a pacemaker arguably is a little bit more critical than somebody else who’s simply trying to repossess a sub-prime auto lease contract.”
Today, that capability is a matter of writing to an API that’s delivered from Google, or somebody like that. And in 15 or 20 minutes, they may be putting location into their application utilizing that standardized interface, and they may or may not be paying some form of royalty to click through and get that data. So certainly, as the industry matures, I think there are broader and broader ranges of building blocks available. And I think it’s important that those are encouraged.
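To make that contrast concrete, here is a minimal sketch of what writing to a standardized mapping interface can look like today. The endpoint shown is Google’s Static Maps API; the function name, coordinates, and API key are illustrative placeholders rather than anything Brisbourne or KORE specified, and exact parameter and billing details may differ.

```python
# A minimal sketch of the "pin on a map" capability described above, using a
# standardized mapping interface instead of a bespoke, provider-specific stack.
# Illustrative only: the device coordinates, function name, and API key are
# hypothetical placeholders.
from urllib.parse import urlencode

STATIC_MAP_ENDPOINT = "https://maps.googleapis.com/maps/api/staticmap"

def map_url_for_device(lat: float, lng: float, api_key: str) -> str:
    """Build a URL that renders a device's last reported position as a red pin."""
    params = {
        "center": f"{lat},{lng}",
        "zoom": 14,
        "size": "400x400",
        "markers": f"color:red|{lat},{lng}",
        "key": api_key,
    }
    return f"{STATIC_MAP_ENDPOINT}?{urlencode(params)}"

# Example: a tracking device reports its position; a few lines turn it into a map.
print(map_url_for_device(40.7128, -74.0060, api_key="YOUR_API_KEY"))
```

The point is less about any particular provider than about reuse: the mapping, projection, and rendering work that once took man-weeks is absorbed behind a widely shared interface.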
Second, what is the in-service lifecycle of the device handler application? The right choice of technologies – 2G, 3G, 4G, satellite – [needs to] have been thought through, along with where the devices are going to be sold and serviced, so that, for example, if you sell devices in Europe, you don’t build them with CDMA radios (a slightly obvious example). It surprises me how few people do think through the use cases.
And the third is ultimately the overall cost of ownership, including support and recurring costs, so that you think through the optimization of applications. Recurring costs – which are oftentimes the network charges – over a five-year lifecycle may represent a third of the TCO of that application. So the guys who win and make a lot of money are the ones who have really tuned their applications and user experience very effectively. I think that increasingly people are getting that message, but it’s certainly one that we consistently feel we need to continue to underscore in people’s design thinking.
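As a purely hypothetical back-of-envelope illustration of that recurring-cost point: none of the dollar figures below come from Brisbourne or KORE; they are invented solely to show how airtime charges can approach a third of a device’s five-year TCO.

```python
# Hypothetical illustration of recurring network charges as a share of 5-year TCO.
# All dollar figures are invented for illustration, not quoted from the interview.
hardware_and_integration = 120.00   # one-time cost per device (hypothetical)
support_per_year = 12.00            # annual support allocation (hypothetical)
airtime_per_month = 1.50            # recurring network charge (hypothetical)
years = 5

recurring_network = airtime_per_month * 12 * years                      # 90.00
total_cost = hardware_and_integration + support_per_year * years + recurring_network

# With these assumed figures, the network alone is roughly a third of the TCO,
# which is why tuning how "chatty" the application is pays off directly.
print(f"Recurring network share of 5-year TCO: {recurring_network / total_cost:.0%}")
```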
Later in our discussion, Alex Brisbourne and I touched on the varying projections that manufacturers with a stake in the Internet of Things (IoT) have made with regard to its potential size and scope – IBM has projected as many as 24 billion simultaneous devices, while others claim even more.
AB: I’m perfectly confident that you’ve read about the 50-billion-device story. There’s a balloon to be pricked, quite frankly, before anybody ruins their career on staking out their share of it.
Scott M. Fulton, III, ReadWriteWeb: I worry about the chatter that takes place. At least up until the advent of Netflix, the thing that really constituted the majority of Internet traffic was all the signals that routers send to one another. They’re basically telling each other, “I’m fine, I’m still fine, I’m still fine.” It’s what Cisco calls “weather reports.” In absolute quantity, it used to dwarf the stuff that browsers send. It’s M2M communication. And imagine blowing that up to a 50-billion-device level, in the way that some people perceive it. We’re not going to be able to handle that.
AB: That’s why I mentioned the early days of working with Novell and IPX. In those days, they were sending keep-alives every 15 or 20 seconds over a piece of standard copper – which really didn’t work terribly well once it had suddenly gone wide-area with a V.56 modem attached to it.
“I must be completely honest and say that much of the research, I think, is a bit mischievous in terms of how strapped we are for spectrum.”
You actually bring up a very good point; it leads to one other area of interest. Much is talked about with regard to capacity concerns in the networks. If you really believe in even a tenth of the 50 billion devices – 5 billion sounds like quite a lot – are they going to create enormous traffic problems in today’s cellular [atmosphere], on the basis that, as all the propaganda would have us believe, there’s [not enough spectrum to satisfy our data demands]? And I must be completely honest and say that much of the research, I think, is a bit mischievous in terms of how strapped we are for spectrum. We certainly have an issue if we, as individuals, continue to use the wireless network as prolifically as we are doing on our iPhones. But much of what happens in the machine-to-machine world is actually microscopically small amounts of data, in the big scheme of events. A hundred thousand home alarm systems, between them, probably generate maybe two gigabytes of data in a month – which is maybe all that one iPad generates.
The average [bandwidth] usage on M2M devices is probably 30 kilobytes per month at the very low end, and maybe five or six megabytes per month at the high end – excluding digital signage and the fringe areas. You can push a heck of a lot of that traffic into today’s network, particularly when some of it comes off-peak.
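The back-of-envelope arithmetic behind those figures is easy to check. The sketch below uses only the numbers from the conversation, plus the assumption (drawn from the “one iPad” remark) that a single tablet consumes roughly two gigabytes a month.

```python
# Back-of-envelope math behind the figures above. The 2 GB/month figure for a
# single iPad comes from the conversation; treating it as a typical monthly
# consumption is an assumption made here for the comparison.
GB = 1024 ** 3
MB = 1024 ** 2
KB = 1024

# 100,000 home alarm systems sharing roughly 2 GB per month
per_alarm = 2 * GB / 100_000
print(f"Per alarm system: {per_alarm / KB:.0f} KB/month")             # ~21 KB/month

# Even a high-end M2M device (5-6 MB/month) is a small fraction of one iPad
high_end_m2m = 6 * MB
print(f"High-end M2M devices per iPad: {2 * GB / high_end_m2m:.0f}")  # ~341 devices
```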
So I’m left less worried about the spectrum and the capacity, and perhaps more worried about policy management and prioritization, which are perhaps going to be the interesting talking points over the next two to three years.
SF3: I’ve talked to some analysts who projected the level of wireless data usage in 2015 and 2020 – how many terabytes of data it will be – and when I asked them what they based their conclusions on, they said, “Scott, just do the math. You put the numbers into a calculator.” They stuck 2008 and 2009 into a spreadsheet, drew a line, and said, “Okay, in 2020 it’s going to be up here at this rate of growth” – instead of looking at the factors and taking into account the possibility, maybe even the probability, that someone will solve the problem of why data is ballooning to that size in the first place.
And most of the data that’s communicated now between people, on the Web, is junk. It’s not even pertinent to what they’re saying to one another. It’s not actionable information. And better applications – and there could very well be some – will reduce that.
AB: They will. And if you look at what’s happening in the world, the next generation of 4G networks – which have in the past been more traditional systems for provisioning – are being taken over by policy managers. So the ability to classify your traffic in differing ways will become very, very important. We’re not quite there yet, I think. We haven’t had the push yet, bluntly, [where implementers say], “We have to utilize the technology available to us to actually control the resources which are at our disposal.” But certainly I do think we’ll start to see these initiatives, primarily because it’s an opportunity to start to differentiate tariffing by application and, frankly, monetize the value of the network in some of these vertical markets.
A patient alarm system for a guy walking around with a pacemaker arguably is a little bit more critical than somebody else who’s simply trying to repossess a sub-prime auto lease contract. They both want to use the same 50 kilobytes of network capacity, but there’s a somewhat different value proposition.