The recent certification of the Sparkplug 3.0 specification as an ISO/IEC international standard greatly increases industry confidence in the quality and reliability of the technology. With external assurance that Sparkplug will continue to be supported into the future, we can expect to see accelerated adoption of the specification.
Significant progress is also being made on Sparkplug 4.0. Over the past 10 months, we've been diligently charting the course for the latest version of the specification. Our target is to have a release candidate ready by late 2024 or early 2025.
Although a lot has been plotted on the roadmap, it’s important to note that the entire plan isn’t etched in stone just yet. If you have ideas to elevate Sparkplug’s future, now is the perfect time to jump in and contribute. In the meantime, let’s take a sneak peek at what’s currently on the roadmap.
Addition of MetaBirth Certificate Will Make Sparkplug Slimmer and Easier to Use
Under Sparkplug 3.0, when an edge node comes online, it connects to the MQTT server and sends a birth certificate. This certificate contains metadata for the birth metrics along with the current values. It's a bit clunky because the metadata changes less frequently than the values, yet both are published every time a Sparkplug client connects to an MQTT server.
What's changing? We're splitting the birth certificate into two. You'll still have the regular birth certificate with edge node values, but now there's a cool addition: the MetaBirth certificate. The metadata certificate will be published on a retained topic, so it no longer needs to be republished every time the client connects. This not only makes Sparkplug more user-friendly but also adds a touch of efficiency.
This is not a radical shift: it’s just a natural evolution of Sparkplug. Edge nodes have always been the single source of truth, including for metadata. Initially everything was bundled into the birth certificate, but after seeing Sparkplug in action across different environments, it’s clear that separating the two is the way to go.
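To make the split concrete, here is a minimal sketch of what an edge node's connect sequence could look like using the Eclipse Paho Java client. The NMBIRTH topic token, the spCv1.0 namespace placement, and the payload helper methods are assumptions for illustration only; the actual Sparkplug 4.0 topic layout and Protobuf schema are still being defined.

    // Sketch only: topic names and payload layout below are assumptions,
    // not the finalized Sparkplug 4.0 specification.
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MetaBirthSketch {
        public static void main(String[] args) throws MqttException {
            MqttClient client = new MqttClient("tcp://localhost:1883", "edge-node-1");
            client.connect();

            // Hypothetical MetaBirth: metadata only (metric names, data types, units),
            // published retained so late-joining hosts can read it without a rebirth.
            MqttMessage metaBirth = new MqttMessage(encodeMetadata()); // Protobuf in practice
            metaBirth.setQos(0);
            metaBirth.setRetained(true); // retained: the broker keeps it for new subscribers
            client.publish("spCv1.0/Group1/NMBIRTH/Edge1", metaBirth); // hypothetical topic token

            // Regular birth: current metric values only, published on every connect.
            MqttMessage birth = new MqttMessage(encodeCurrentValues()); // Protobuf in practice
            birth.setQos(0);
            client.publish("spCv1.0/Group1/NBIRTH/Edge1", birth);

            client.disconnect();
        }

        // Placeholders standing in for the real Sparkplug Protobuf encoding.
        private static byte[] encodeMetadata()      { return new byte[] {}; }
        private static byte[] encodeCurrentValues() { return new byte[] {}; }
    }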
Rebirth Topic Addition Enables Multiple Host Applications
When Sparkplug was originally developed, it supported many edge nodes with a single consumer of data. Of course, an MQTT server and a distributed system can support many producers and consumers of data. However, Sparkplug 3.0 only supports a single Sparkplug Primary Host.
Here’s why this can be a problem: add a new host application that isn’t the primary host, and chaos ensues. The new host application gets detected only when an incoming data message prompts the edge node to send it a new birth certificate. This triggers the same action in every other edge node, disrupting data streams from other host applications in the system.
To solve this, we're introducing a rebirth topic. Today, rebirth requests are embedded in command messages, which is also a bit problematic from a security perspective. With a dedicated rebirth topic, every edge node can listen for rebirth requests without needing access control list (ACL) entries that allow it to subscribe to both command messages and rebirth messages.
The rebirth payload will also include a source field that lets a new host application specify its host ID. When that application requests a rebirth, the edge node knows it's the only one that needs a refresh and sends a targeted rebirth back to it. The edge node will also be able to pause its data stream, generate the birth certificate, and count backwards from its current sequence number so that the certificate fits between a death certificate and a new birth certificate, without interrupting any pre-existing Sparkplug host applications.
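As a rough sketch of how an edge node might handle the new topic, the following Paho Java snippet subscribes to a hypothetical NREBIRTH topic, reads a source field identifying the requesting host, and answers with a targeted birth. The topic tokens, the source-field encoding, and the helper methods are all assumptions; none of this is finalized in the specification.

    // Sketch only: the rebirth topic name and payload fields are assumptions for illustration.
    import java.nio.charset.StandardCharsets;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class RebirthTopicSketch {
        public static void main(String[] args) throws MqttException {
            MqttClient client = new MqttClient("tcp://localhost:1883", "edge-node-1");
            client.connect();

            // The edge node only needs an ACL entry for this one topic,
            // rather than also being allowed to receive general command messages.
            client.subscribe("spCv1.0/Group1/NREBIRTH/Edge1", 1, (topic, message) -> {
                // Hypothetical payload: the requesting host's ID carried in a source field.
                String requestingHost = parseSourceField(message);

                // Pause outbound data, then send a targeted birth so that sequence
                // numbers stay consistent for host applications already online.
                pauseDataStream();
                byte[] birth = buildBirthCertificate(); // Protobuf payload in practice
                String targetedTopic = "spCv1.0/Group1/NBIRTH/Edge1/" + requestingHost; // hypothetical
                client.publish(targetedTopic, new MqttMessage(birth));
                resumeDataStream();
            });
        }

        // Placeholders for the pieces a real edge node implementation would provide.
        private static String parseSourceField(MqttMessage m) { return new String(m.getPayload(), StandardCharsets.UTF_8); }
        private static void pauseDataStream()  { /* stop publishing data while the birth is generated */ }
        private static void resumeDataStream() { /* resume data publishing */ }
        private static byte[] buildBirthCertificate() { return new byte[0]; }
    }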
JSON-Based Payloads Will Not Be Included in Sparkplug 4.0
The decision not to include JSON-based payloads in Sparkplug has stirred some controversy within the community. Despite the appeal of JSON’s human-readable and intuitive nature, we decided against its inclusion for several reasons. Sparkplug currently uses Google Protobuf for binary encoding. Sparkplug 3 already specifies the Sparkplug B root topic namespace, and Sparkplug 4 is going to have the C namespace. Introducing JSON would necessitate yet another namespace, contradicting Sparkplug’s core principles.
At its heart, Sparkplug was made to ease communication between producers and consumers of data, which are invariably machines. While making it easier for humans to understand and interpret might seem attractive, it could hinder machines’ efficiency. Supporting both JSON and Protobuf would complicate implementation, forcing a choice that limits interoperability. Furthermore, Sparkplug, while efficient for moving data, doesn’t single-handedly address bandwidth issues. The need for a compact payload, as provided by Protobuf, remains crucial, especially for companies exhausting bandwidth on fibre networks due to inefficient data transmission.
Support for Records to Send Data in Batches
We're also introducing support for records, a feature we've implemented at Cirrus Link but held off on incorporating into Sparkplug to avoid incremental changes. Records make it possible to send an atomic block of data, grouped together in some meaningful way, under its own topic. Various verticals, like manufacturing (particularly pharmaceuticals), have independently adopted similar approaches. For instance, in pharmaceutical manufacturing, consolidating all pertinent data points for a specific batch of products can be very useful for analytics or for tracking down production issues.
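A hypothetical sketch of what publishing such a record could look like is below. The NRECORD topic token, the field names, and the encoding helper are illustrative assumptions, since records are not yet part of the published specification.

    // Sketch only: "records" are not yet in the specification; the topic token
    // and field names here are assumptions for illustration.
    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class BatchRecordSketch {
        public static void main(String[] args) throws MqttException {
            // All data points for one pharmaceutical batch, grouped into a single record
            // so consumers receive the batch atomically rather than as loose metrics.
            Map<String, Object> batchRecord = new LinkedHashMap<>();
            batchRecord.put("batchId", "LOT-2024-0042");
            batchRecord.put("reactorTempC", 37.2);
            batchRecord.put("mixDurationSec", 5400);
            batchRecord.put("operator", "line-3");

            MqttClient client = new MqttClient("tcp://localhost:1883", "edge-node-1");
            client.connect();

            // Published under its own topic, one message per record (hypothetical topic token).
            MqttMessage msg = new MqttMessage(encodeRecord(batchRecord)); // Protobuf in practice
            msg.setQos(1);
            client.publish("spCv1.0/Group1/NRECORD/Edge1", msg);
            client.disconnect();
        }

        // Placeholder for the real Sparkplug Protobuf encoding of a record.
        private static byte[] encodeRecord(Map<String, Object> record) {
            return record.toString().getBytes(java.nio.charset.StandardCharsets.UTF_8);
        }
    }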
Help Shape the Future of Sparkplug and Its Implementations
The roadmap is taking shape, but it’s still flexible. Your input on enhancing Sparkplug is valuable now more than ever.
One key area in need of contributions is the Eclipse Tahu project, focused on compatible implementations of Sparkplug. We’re particularly keen to diversify programming languages, as we’re currently very Java-heavy. Whether you prefer Rust, Python, or any other language, your expertise would be a great addition. Join us in shaping the future of Sparkplug!
If you’re interested, you can reach out to learn more here, take a look at the latest published version, check out the GitHub repository, and join the specification project.