Author: Andrew Foster, IOTech Chief Product Officer
I recently participated in a panel discussion called “Turning Industrial Data into Energy Insight: Scalable Edge Solutions for Modern Grids”, alongside brilliant people from EDF, ABB, and Fluence Energy. We had an hour-long conversation about what it truly takes to transform industrial data into operational value.
Here are some of the key highlights from the session.

The heterogeneity problem is still greatly underestimated. Assets were never designed to work together: different protocols, non-standard naming conventions, and proprietary communication layers all get in the way, and data volumes are growing rapidly on top of that. Before you can do anything useful, you need to normalise and contextualise the data. That step is essential; it’s the foundation.
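As a rough sketch of what that normalisation step looks like in practice, consider three devices reporting the same quantity under different names, units, and conventions. The tag names, scale factors, and schema below are illustrative only, not any particular product's API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NormalisedReading:
    asset_id: str        # canonical asset identifier
    measurement: str     # canonical measurement name, e.g. "active_power"
    value: float         # value converted to a standard unit
    unit: str            # standard unit, e.g. "kW"
    timestamp: datetime  # UTC timestamp

# Illustrative mapping from vendor-specific tags to a canonical vocabulary:
# (asset_id, measurement, unit, scale factor applied to the raw value)
TAG_MAP = {
    "INV01.P_AC":     ("inverter-01", "active_power", "kW", 1.0),
    "bess/stack3/kw": ("battery-03",  "active_power", "kW", 1.0),
    "MTR_7.PWR_W":    ("meter-07",    "active_power", "kW", 0.001),  # W -> kW
}

def normalise(raw_tag: str, raw_value: float, ts: datetime) -> NormalisedReading:
    """Map one vendor-specific reading onto the shared schema."""
    asset_id, measurement, unit, scale = TAG_MAP[raw_tag]
    return NormalisedReading(asset_id, measurement, raw_value * scale, unit,
                             ts.astimezone(timezone.utc))

# Three naming conventions in, one consistent record format out.
print(normalise("MTR_7.PWR_W", 482000.0, datetime.now(timezone.utc)))
```

Everything downstream, from analytics to AI, then consumes the normalised form rather than the device-specific one.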
That foundation must rely on open standards. Proprietary APIs complicate integration and create vendor lock-in that operators keep paying for years later. A standards-based, open architecture gives customers the freedom to choose, and it’s the only way to coordinate effectively across a large number of assets.
Edge computing has become essential, not just a nice-to-have. For decisions that have to happen in under a second, whether for optimisation, safety, or market participation, data processing needs to happen as close to the source as possible. Even in hybrid setups where the cloud plays a coordinating role, latency-sensitive processing must occur at the edge. There’s no debate about that anymore.
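A minimal sketch of that hybrid split, assuming a sub-second local decision and an asynchronous upstream report (the frequency threshold, latency budget, and message shape are made up for illustration):

```python
import queue
import time

LATENCY_BUDGET_S = 0.5                             # assumed budget for the local decision path
cloud_queue: "queue.Queue[dict]" = queue.Queue()   # slower, non-critical reporting path

def local_decision(sample: dict) -> str:
    """Runs on the edge node itself: no network round trip in the control path."""
    return "curtail" if sample["frequency_hz"] < 49.8 else "hold"

sample = {"frequency_hz": 49.75, "timestamp": time.time()}

start = time.monotonic()
action = local_decision(sample)                    # decided locally, within the budget
elapsed = time.monotonic() - start
assert elapsed < LATENCY_BUDGET_S

# The cloud still sees everything, just not in the control loop: summaries are
# queued and shipped upstream whenever connectivity and bandwidth allow.
cloud_queue.put({"action": action, **sample})
print(action, f"decided in {elapsed * 1000:.3f} ms")
```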
However, scaling edge computing requires its own discipline. Timestamp accuracy across distributed nodes matters more than people realise; we have seen inaccurate timestamps lead to real control issues. As you scale, data reduction techniques such as filtering, compression, and deadband reporting become crucial. Orchestration also matters: the ability to update, tag, and add new assets across an entire fleet without losing control. That part often gets ignored until problems arise.
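Deadband reporting is the easiest of those to show in a few lines. The 0.5 kW band below is just an illustrative threshold; in practice it would be tuned per measurement:

```python
class DeadbandFilter:
    """Forward a reading only when it moves outside a band around the last reported value."""

    def __init__(self, band: float) -> None:
        self.band = band
        self.last_reported: float | None = None

    def report(self, value: float) -> bool:
        if self.last_reported is None or abs(value - self.last_reported) >= self.band:
            self.last_reported = value
            return True   # significant change: publish it
        return False      # inside the deadband: drop it at the edge

f = DeadbandFilter(band=0.5)
readings = [10.0, 10.1, 10.2, 10.9, 10.85, 12.0]
published = [v for v in readings if f.report(v)]
print(published)  # [10.0, 10.9, 12.0] -> far fewer points ever leave the device
```

Filtering and compression follow the same principle: decide at the edge which points are worth the bandwidth before anything leaves the device.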
Regarding AI, it currently plays an advisory role: fault diagnostics, predictive analytics, and optimisation recommendations, all with human oversight. To support it at the edge, you need three things: high-quality normalised time-series data, consistent semantic tagging, and the compute to run models locally. Get those right, and lifecycle management (deploying, updating, monitoring, rolling back) becomes your next challenge.
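To make the semantic-tagging point concrete, here is a small illustration. The tag vocabulary is invented for the example rather than taken from any published schema; the point is that a local model selects its inputs by meaning, not by device-specific names:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedPoint:
    """A normalised reading plus the semantic context an edge model needs."""
    value: float
    unit: str
    tags: dict[str, str] = field(default_factory=dict)

point = TaggedPoint(
    value=412.7,
    unit="kW",
    tags={
        "site": "substation-12",        # where the asset lives
        "asset_class": "battery",       # what kind of asset produced it
        "measurement": "active_power",  # canonical measurement name
        "phase": "total",               # disambiguates per-phase vs aggregate
    },
)

def select_features(points: list[TaggedPoint], asset_class: str, measurement: str) -> list[float]:
    """A local model picks its inputs by tag instead of by vendor-specific tag name."""
    return [p.value for p in points
            if p.tags.get("asset_class") == asset_class
            and p.tags.get("measurement") == measurement]

print(select_features([point], "battery", "active_power"))  # [412.7]
```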
We are working towards distributed energy resources (DERs) that operate as coordinated, grid-supportive assets rather than isolated devices. That future is on the horizon. The platform work we’re doing now makes it possible.
These are the conversations that don’t make it into whitepapers. EDF talking candidly about the real cost of normalisation across OEMs. Fluence Energy on the sheer data demands of battery systems at cell level. ABB’s $100 million data lake story. An hour of people who are actually building and operating these systems, not just talking about them. Worth your time.