Do We Finally Have an Open and Realistic IoT Model?

Most of you know I think IoT is overhyped, with the popular vision being a whole bunch of new sensors put on the Internet for people to exploit, misuse, or hack, depending on their predisposition.  The real IoT opportunity has to face two realities.  First, most sensors will never be on the Internet directly; they’ll have a gateway that offers them limited exposure.  Second, the IoT opportunity for providers will be in offering digested contextual information derived from sensor gateways.  You don’t hear much about this stuff, so I was happy to talk with the Eclipse Foundation’s IoT people and hear a more realistic vision.

It’s always interesting to see how realists view a market, in contrast to how the market is portrayed.  I explained my own vision of the IoT space, the notion of contextual “information fields” projected from event-driven services.  Even that was a bit too futuristic for the Eclipse team; they were focused more on “industrial” and “facility” IoT.  The good news is that their model of an open IoT ecosystem is perfectly compatible with their own industrial vision, my information/contextual vision, and even the pie-in-the-sky world-of-open-sensors vision.

In a nutshell, the Eclipse Foundation defines three broad software stacks, with the goal of creating three open and symbiotic ecosystems.  One is focused on limited, contained devices like the pie-in-the-sky sensors you hear about all the time.  The second is focused on the critical gateways that connect realistic, locally specialized sensors to broader applications and also provide local processing for closed-loop event handling.  The third is focused on cloud-hosted applications that exercise control and provide analytics and other event digestion and distribution.  My information fields are created by the last of these three.
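To make that division of labor concrete, here’s a minimal Python sketch of how the three tiers might relate.  The class names, thresholds, and metrics are mine, purely for illustration; none of them come from the Eclipse material.

```python
# Hypothetical sketch of the three-stack split: constrained devices emit raw
# readings, the gateway handles closed-loop events locally and exposes only a
# digest, and the cloud tier works on digests.  Names are illustrative only.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SensorReading:
    """A raw reading from a limited, contained device (stack one)."""
    device_id: str
    metric: str        # e.g. "temperature"
    value: float
    timestamp: float


class Gateway:
    """Stack two: local aggregation, closed-loop handling, limited exposure."""
    def __init__(self) -> None:
        self._buffer: List[SensorReading] = []

    def ingest(self, reading: SensorReading) -> None:
        self._buffer.append(reading)
        # Closed-loop event handling stays local, off the Internet.
        if reading.metric == "temperature" and reading.value > 90.0:
            self.actuate("cooling", on=True)

    def actuate(self, device: str, on: bool) -> None:
        print(f"gateway: {device} -> {'on' if on else 'off'}")

    def digest(self) -> Dict[str, float]:
        """All the cloud tier ever sees: summaries, not raw sensors."""
        temps = [r.value for r in self._buffer if r.metric == "temperature"]
        return {"avg_temperature": sum(temps) / len(temps)} if temps else {}


class CloudApplication:
    """Stack three: analytics and distribution over gateway digests."""
    def collect(self, gateways: List[Gateway]) -> List[Dict[str, float]]:
        return [g.digest() for g in gateways]
```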

The overriding mission of the Eclipse Foundation in IoT is to create an ecosystem within and among these three component stacks.  They want each stack to be open within the domain it represents, and they want the stacks to present APIs that can link them together, first in whatever deployment context is mandated by existing services or commitments, and then with whatever new pieces are needed as the opportunity evolves.

The slide deck I reviewed shows the device stack hosted on an embedded OS (labeled “OS/RTOS” in the slide, RTOS meaning “real-time OS”), which I think is almost a given.  The second stack, for the gateway, is similarly labeled, but here I think the OS might be a version of Linux.  The final stack, for applications, is a PaaS, meaning they hope to define middleware tools that would create a standard IoT-centric execution environment.  That, in fact, is really a goal with all the stacks.

Each of the stacks defines a set of tools/protocols/APIs that provide the critical points of exchange and integration.  By defining standards here, Eclipse creates the basis for openness and extension, which is more critical for IoT than perhaps anything we talk about these days.  Information without application is just noise, and we need all the innovation we can get to turn the IoT data into useful stuff.
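The specific protocols in the slides aren’t listed here, but MQTT, via the Eclipse Paho client libraries, is a typical example of the kind of exchange point involved.  Below is a minimal, hypothetical device-to-gateway exchange sketched with paho-mqtt (1.x-style API); the broker address and topic are invented for illustration, not taken from the Eclipse material.

```python
# A minimal publish/subscribe exchange using the Eclipse Paho MQTT client
# (paho-mqtt, 1.x-style API), standing in for a device-to-gateway exchange
# point.  Broker address and topic names are made up for illustration.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "gateway.local"               # hypothetical gateway-hosted broker
TOPIC = "site1/line3/temperature"      # hypothetical local sensor topic


def on_message(client, userdata, msg):
    # Gateway-side handler: decode the reading and act on it locally.
    reading = json.loads(msg.payload)
    print(f"gateway received {reading['value']} from {reading['device_id']}")


# Gateway side: subscribe to the local sensor topic.
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()

# Device side: publish one reading.
publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, json.dumps({"device_id": "temp-17", "value": 72.4}))

time.sleep(1)          # give the network loop a moment to deliver the message
subscriber.loop_stop()
```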

The middle, gateway, stack is defined in two broad examples, one for industrial applications and one for residential.  The structure of the stack is the same, but there are refinements in the interface to accommodate the different price points and functional requirements for these two market spaces.  I think that other versions of the gateway may come out as things like driverless cars and even retail IoT come along.

Up where I think the important stuff is, which is that third stack in the presentation, Eclipse sees a central role for a tool called Ditto, which supports what’s called a “digital twin”.  Digital twins are representational agent processes that can be manipulated and read so as to simulate direct access to a device.  I think this is a great approach since it “objectifies” the IoT elements, but I hope they intend the modeling/twinning to be hierarchical, so that you can create successive virtual devices or elements that represent relationships among a set of lower-level things.
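To make the hierarchy idea concrete, here’s a sketch of two twins as Python dicts.  The layout loosely follows the Ditto Thing model (thingId, attributes, features with properties), but the “children” convention and the resolver are my own illustration of hierarchical twinning, not something Ditto prescribes.

```python
# Two digital twins sketched as Python dicts.  The field layout loosely follows
# the Ditto Thing model (thingId, attributes, features/properties); the
# "children" attribute and the resolver below are a hypothetical illustration
# of hierarchical twinning, not a Ditto feature.
hvac_zone_twin = {
    "thingId": "org.example:hvac-zone-3",
    "attributes": {
        "location": "building-A/floor-2",
        # Hypothetical convention: a composite twin lists its member twins.
        "children": ["org.example:thermostat-17", "org.example:damper-04"],
    },
    "features": {
        "climate": {"properties": {"setpointC": 21.0, "measuredC": 22.4}},
    },
}

thermostat_twin = {
    "thingId": "org.example:thermostat-17",
    "attributes": {"vendor": "acme"},
    "features": {"temperature": {"properties": {"valueC": 22.4}}},
}


def resolve(twin_id, registry):
    """Walk a composite twin down to its member twins (illustrative only)."""
    twin = registry.get(twin_id)
    if twin is None:
        return [twin_id]  # a leaf we only know by reference
    children = twin["attributes"].get("children", [])
    return [twin_id] + [tid for child in children for tid in resolve(child, registry)]


registry = {t["thingId"]: t for t in (hvac_zone_twin, thermostat_twin)}
print(resolve("org.example:hvac-zone-3", registry))
# -> ['org.example:hvac-zone-3', 'org.example:thermostat-17', 'org.example:damper-04']
```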

The big barrier to the Eclipse Foundation model is incumbency.  We already have installed industrial and home systems, and in the residential market there is little tolerance for complexity and perhaps less for paying for integration.  The key would be to get manufacturers to embrace an open model.  That may be easier in Europe, where there are fewer installed proprietary residential systems, or in the industrial space, where users are likely to pressure vendors for openness.

In the long term, history may favor the Eclipse Foundation.  Up to the 1980s, we had a bunch of proprietary computer operating systems because most enterprises did their own software.  As packaged software exploded, it became clear that vendors who didn’t have a large installed base couldn’t interest the software providers.  This created a move first to UNIX and later to Linux.  Could that happen here too?  I think it could, providing there’s a real value for “community IoT”.

Which brings me to the promise I made to summarize my own views on the subject.  There is no question that an IoT community, a collection of symbiotic applications serving many different missions valuable or even critical to consumers and businesses, demands an open model.  The mistake the IoT community made was presuming that the model would be created by directly opening the sensors and controllers.  I don’t think there is any way to satisfy mandates for safety and privacy in such a scenario.  What is possible is what could be called “controlled, digested exposure”.

The future of IoT lies in the creation of “information fields” derived from the collection, correlation, and analysis of sensor data.  These information fields would be created by and projected from the kind of stuff that the Eclipse Foundation is defining in its second and third software stacks—gateways and application platforms.  Applications, which could be run in or on behalf of mobile users, connected cars, autonomous vehicles, and of course cities, governments, retailers, enterprises, and so forth, would intercept some of these fields and utilize them.
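The “information field” notion is mine rather than anything in the Eclipse material, so the sketch below is entirely hypothetical: a field is modeled as a set of digested, geotagged, time-limited facts that the gateway and application tiers project, and that consumer applications query by location, with no raw sensor ever exposed.

```python
# Hypothetical model of an "information field": digested, geotagged facts
# projected by gateways and cloud applications, queried by consumer apps.
# All names and structures are illustrative only.
from dataclasses import dataclass
from typing import List
import math
import time


@dataclass
class Fact:
    kind: str        # e.g. "foot_traffic", "air_quality", "landmark"
    lat: float
    lon: float
    value: float
    expires: float   # facts age out; a field is a moving picture, not a log


class InformationField:
    def __init__(self) -> None:
        self._facts: List[Fact] = []

    def project(self, fact: Fact) -> None:
        """Called by the gateway/application tier, never by raw sensors."""
        self._facts.append(fact)

    def query(self, lat: float, lon: float, radius_km: float, kind: str) -> List[Fact]:
        """Called by consumer apps: phones, cars, city or retail systems."""
        now = time.time()
        return [
            f for f in self._facts
            if f.kind == kind and f.expires > now
            and _distance_km(lat, lon, f.lat, f.lon) <= radius_km
        ]


def _distance_km(lat1, lon1, lat2, lon2) -> float:
    # Haversine great-circle distance between two points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```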

For what?  Mostly for contextual applications.  We want our devices to serve us.  The biggest barrier to their doing that is understanding what we need/want, and the biggest barrier to that is understanding our context.  I’ve said in a number of blogs that I was told the most-asked question of the early Siri was “What’s that?”, as if Siri could know.  But Siri could know with information-field-centric IoT.  We don’t have to make agent processes actually see in order to make them understand what we’d likely see from a given point.  We could use information fields to find places and people, to link goals with directions, to guide remote workers, to control self-driving cars.  The overall architecture is what’s needed here, not a bunch of proposed one-off solutions that couldn’t justify the enormous pre-deployment of assets needed to create critical information mass.
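Continuing the hypothetical sketch above, a “what’s that?”-style answer then becomes a simple query against nearby fields given the user’s position, with the asking application never touching a device directly.

```python
# Usage of the hypothetical InformationField sketch above (assumes the Fact and
# InformationField definitions are in scope).  Coordinates and the "landmark"
# fact kind are invented for illustration.
field = InformationField()
field.project(Fact(kind="landmark", lat=40.7484, lon=-73.9857,
                   value=1.0, expires=time.time() + 3600))

# An assistant answering "what's that?" for a user standing a block away:
nearby = field.query(lat=40.7470, lon=-73.9850, radius_km=0.5, kind="landmark")
print(f"{len(nearby)} landmark(s) within half a kilometer of the user")
```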

For all the good intentions, and good technology, in the Eclipse Foundation work, they still face the same risks as the MEF did (and as I discussed in my blog yesterday).  An ecosystem created from the bottom up may encourage exploitation, but it doesn’t guarantee it.  Since value to the market is created where the money changes hands, at the user level, the value of the Eclipse Foundation ecosystem will have to be built with and on top of their work.  They’re still a very small force in the market overall, and we’ll have to wait to see if they can build momentum in the IoT space.  I’d like to see that, because an open and realistic approach is always critical, but especially so in IoT.