Thursday, February 23, 2023 - 05:00

The recent release of Sparkplug 3.0 marks a major milestone for the project at the Eclipse Foundation. It is, after all, the first version of the specification to be released under the Eclipse Foundation’s specification process. 

For users, not much has changed (although there are a few evolutions, which we’ll get into). For us, the focus has been on cleaning up the language in the Eclipse Sparkplug specification and improving its interoperability. 

Host Application ID Messages’ Topic Changed

Let’s start with what’s changed. There’s just one major change, which was necessary to correct some behavior that wasn’t quite right in version 2.2. 

Sparkplug uses MQTT-based state messages to notify edge nodes when a host application comes online or goes offline. Sparkplug also takes this basic idea a step further, allowing producers of data to be aware when the consumers are ready. Using this functionality, producers of data can store messages if they know that the receiver isn’t ready for them, letting the edge applications function a bit more intelligently. 

This was a good idea. But there were a few issues with the way it was implemented in version 2.2. There were some timing problems, and we also wanted to make the MQTT topic namespace a bit more concise and improve its compatibility. So, we changed the topic and made the payload a bit more descriptive. This should let edge nodes and host applications function more intelligently. It’s a simple replacement, but definitely a necessary one. 
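To make the change concrete, here is a minimal sketch of how a host application might publish its state under the 3.0 topic, using the Eclipse Paho Java client. The broker URL and host application ID are placeholders, and the JSON payload shape follows the online/timestamp fields described in the 3.0 specification.

```java
import java.nio.charset.StandardCharsets;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class HostStateSketch {
    public static void main(String[] args) throws Exception {
        String hostId = "MyHostApp";                    // illustrative host application ID
        String stateTopic = "spBv1.0/STATE/" + hostId;  // 3.0 topic (2.2 used STATE/<id>)

        long now = System.currentTimeMillis();
        byte[] birth = ("{\"online\":true,\"timestamp\":" + now + "}")
                .getBytes(StandardCharsets.UTF_8);
        byte[] death = ("{\"online\":false,\"timestamp\":" + now + "}")
                .getBytes(StandardCharsets.UTF_8);

        MqttClient client = new MqttClient("tcp://localhost:1883", hostId);
        MqttConnectOptions options = new MqttConnectOptions();
        // Death certificate: registered as the MQTT Will, so the broker
        // announces the host as offline if it disconnects unexpectedly.
        options.setWill(stateTopic, death, 1, true);
        client.connect(options);

        // Birth certificate: published retained, so edge nodes that connect
        // later immediately learn that this host application is online.
        client.publish(stateTopic, birth, 1, true);
    }
}
```

Because the birth message is retained, an edge node that subscribes to the state topic after the host connects still sees the current state, which is what lets producers decide whether to store or forward data.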

Two Artifacts Added: TCK and Compatible Implementations

Our major task for 3.0 was making the Sparkplug specification clearer and more rigorously structured. For example, we spent a lot of time extracting normative statements from the text and defining them explicitly. Beyond cleaning up the specification itself, that work led us to build two artifacts from it. 

The first is the Technology Compatibility Kit (TCK). We developed it using many of the tools currently used with the Jakarta EE specifications. But Sparkplug is a bit different, in that it’s not really an API. It’s a system that talks to other networked devices. So, the implementation process for the TCK was quite a bit different as well. 

We rewrote the Sparkplug specification completely in AsciiDoc and embedded annotations in the specification itself. So it is now much more like code than the plain document it was originally. Those annotations let us tie the specification to the TCK: we added the ability to create coverage reports for every assertion (normative statement) in the specification, and to tie those coverage reports to individual tests in the TCK. 

Put together, this means that whenever you run an implementation of Sparkplug against the TCK, you also get a full report of every test it passed or failed.

The next major artifact we added is a set of compatible implementations, one for each of the four Sparkplug profiles: two MQTT server profiles, an edge node profile, and a host application profile. 

What’s really been driving this process is the fact that we have had commercial implementations of Sparkplug for a while at this point. And as the project has been gaining momentum, more and more implementations were coming onto the market that companies wanted to incorporate into their systems. Things weren’t always going very smoothly. So, we cleaned up the language to make it clear what a given implementation had to do, and how it had to act. We provided tools to check that the implementation was in fact adhering to the specification. Ideally, now, systems can simply plug in new implementations of Sparkplug and have them work. 

Weigh In on Sparkplug’s New Features

In the short term, we’ve got a lot of work to do supporting vendors as they go through the compatibility process. We’ve definitely allocated some time to make sure we can assist them with that, as well as to fix any bugs that may crop up.

But in the long term, our big focus is to start working on new features for Sparkplug. We’ve already received a lot of good ideas for features, two of which are likely to get the green light. 

The first one is more intelligent birth certificates. One of the first things an edge node does when it sees that a host application is online is publish a birth certificate containing every metric the node will ever publish, along with the current value of each. Afterwards, values are only sent to the host when they change, which is one of the key ways Sparkplug limits bandwidth usage. But the metadata that gets sent with the birth certificate can be tricky. So, one of the new ideas is to separate out that metadata and allow it to be retained via MQTT. That way, the full metadata doesn’t need to be resent every time a new host application comes online, which is what happens today and can be disruptive to the network. 
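For context, here is a minimal sketch of what assembling a birth certificate payload looks like with the Eclipse Tahu Java classes. The metric names and values are illustrative, and the exact builder API may vary between Tahu releases.

```java
import java.util.Date;

import org.eclipse.tahu.message.model.Metric.MetricBuilder;
import org.eclipse.tahu.message.model.MetricDataType;
import org.eclipse.tahu.message.model.SparkplugBPayload;
import org.eclipse.tahu.message.model.SparkplugBPayload.SparkplugBPayloadBuilder;

public class BirthSketch {
    public static SparkplugBPayload buildNodeBirth() throws Exception {
        // The NBIRTH payload lists every metric the node will ever publish,
        // each with its value at birth time. After this, a metric is only
        // republished when its value changes (report by exception).
        return new SparkplugBPayloadBuilder()
                .setTimestamp(new Date())
                .addMetric(new MetricBuilder("Temperature", MetricDataType.Double, 21.5).createMetric())
                .addMetric(new MetricBuilder("Pressure", MetricDataType.Double, 101.3).createMetric())
                // bdSeq ties this birth certificate to the node's MQTT session
                .addMetric(new MetricBuilder("bdSeq", MetricDataType.Int64, 0L).createMetric())
                .createPayload();
    }
}
```

The proposed change would let the per-metric metadata in a payload like this live in separately retained MQTT messages, rather than being resent in full whenever a new host application appears.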

Another one is the concept of records, which are basically blocks of interrelated data. Sparkplug doesn’t currently have a good way to express those, so some kind of Sparkplug records functionality is likely to be added. 

But these ideas aren’t set in stone. Anyone can weigh in on them in our main repository. We have many more issues tagged for a future release, which anyone can go in and comment on. You can also create a new issue if there’s something you’d like added to Sparkplug that you don’t see anywhere.

About the Author

Wes Johnson

Wes Johnson is a project lead for Eclipse Sparkplug and Eclipse Tahu, and vice-president of software at Cirrus Link Solutions.