As artificial intelligence (AI), sensor technology and networking architectures continue to evolve, AI analysis will be crucial for assessing and triaging data at the network edge. But AI analysis at large scale can be slow, expensive and complex. Running that analysis closer to the edge frees up bandwidth, lowers latency and expands the potential for innovation.
To help provide those capabilities, Red Hat partnered with NTT, Nvidia and Fujitsu to develop a solution that improves the potential for real-time AI data analysis at the edge as part of the Innovative Optical and Wireless Network (IOWN) initiative. NTT and Red Hat say they have shown that the platform significantly reduces power consumption while delivering low-latency AI analysis at the edge.
The new platform combines technologies developed by the IOWN Global Forum and is built on Red Hat OpenShift. It also earned proof of concept (PoC) recognition from the IOWN initiative for its real-world potential.
Specifically, the partners leveraged the IOWN All-Photonics Network (APN) and the data pipeline acceleration tools in IOWN Data-Centric Infrastructure (DCI). NTT’s accelerated data pipeline, which is optimized for AI, uses remote direct memory access (RDMA) over the APN to collect and process sensor data in edge environments.
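To make the CPU argument concrete, the sketch below shows the conventional, copy-heavy receive loop that an RDMA-based pipeline is designed to avoid: with plain sockets, every camera frame is copied from kernel buffers into user space by the CPU, whereas RDMA lets the NIC place data directly into application memory. This is a generic illustration, not code from the PoC; the frame size and function name are assumptions.

```python
import socket

FRAME_SIZE = 4 * 1024 * 1024  # hypothetical 4 MB camera frame


def receive_frames(conn: socket.socket):
    """Conventional receive path: one kernel-to-user copy per chunk.

    RDMA removes exactly this per-frame CPU copy by letting the NIC
    write sensor data straight into pre-registered application memory.
    """
    buf = bytearray(FRAME_SIZE)
    view = memoryview(buf)
    while True:
        read = 0
        while read < FRAME_SIZE:
            n = conn.recv_into(view[read:])  # CPU-bound copy per chunk
            if n == 0:
                return  # peer closed the connection
            read += n
        yield bytes(view)  # hand one complete frame to the AI pipeline
```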
Red Hat OpenShift, the company’s container orchestration platform, then provides the flexibility to run workloads within the data pipeline, including across geographically disparate data center locations. “With Red Hat OpenShift, we can help NTT provide large-scale AI data analysis in real time and without limitations,” Red Hat SVP of Global Engineering and IOWN Board Director Chris Wright said.
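Because OpenShift exposes the standard Kubernetes API, placing an inference workload at a particular edge site can be as simple as a labeled deployment. The sketch below, written against the Kubernetes Python client, is a hypothetical illustration: the namespace, labels, image and site name are assumptions, not details from the PoC.

```python
from kubernetes import client, config

config.load_kube_config()  # reuse local kubeconfig credentials

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-ai-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "edge-ai-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-ai-inference"}),
            spec=client.V1PodSpec(
                # Pin the pod to nodes labeled as the (hypothetical) edge site.
                node_selector={"topology.example.com/site": "edge-site-1"},
                containers=[
                    client.V1Container(
                        name="inference",
                        image="registry.example.com/ai/inference:latest",
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1"},  # one GPU per replica
                        ),
                    )
                ],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="edge", body=deployment)
```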
The platform also sets the stage for AI-driven tools and technologies that will help businesses scale sustainably. “We aim to embody the sustainable future of net-zero emissions with IOWN,” NTT SVP and IOWN Chairman Katsuhiko Kawazoe said.
For example, organizations stand to see lower overhead when collecting vast amounts of data, smoother transfer of that data between urban and remote data centers, the ability to tap locally available renewable energy and stronger security for area management.
Red Hat and Nvidia cut power demands, latency

The PoC, built on NVIDIA A100 Tensor Core GPUs and NVIDIA ConnectX-6 NICs for AI inferencing, used Yokosuka City as the sensor installation base, with cameras as the sensor devices, and Musashino City as the remote data center.
By connecting the two locations over the APN, the latency of aggregating sensor data for AI analysis was reduced by 60% compared with conventional AI inference workloads, even with a large number of cameras enabled, according to the vendors.
“These results help prove that we can build AI-enabled solutions that are sustainable and innovative for businesses across the globe,” Red Hat’s Wright said.
The testing also found that power consumption for AI analysis per camera at the edge was reduced by up to 40% compared with conventional technologies. This is because the platform lets the GPU scale up to accommodate a large number of cameras without creating a bottleneck at the CPU. Based on trial calculations, power consumption for 1,000 cameras could be cut by 60% with this platform.
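One common way to let a GPU absorb many camera streams without a CPU bottleneck is batching: rather than one inference call per camera, frames from many cameras are aggregated into a single GPU pass, so the CPU’s per-frame work stays flat as cameras are added. The sketch below illustrates that idea only; `run_gpu_inference` is a hypothetical stand-in for a real batched model call, not part of the vendors’ stack.

```python
from typing import Dict, List


def run_gpu_inference(batch: List[bytes]) -> List[str]:
    """Hypothetical placeholder for one batched model pass on the GPU."""
    return ["result"] * len(batch)


def analyze_cameras(frames_by_camera: Dict[str, bytes]) -> Dict[str, str]:
    """One batched GPU call for N cameras instead of N separate calls."""
    cameras = list(frames_by_camera)
    batch = [frames_by_camera[camera] for camera in cameras]
    results = run_gpu_inference(batch)  # single pass; CPU cost stays flat
    return dict(zip(cameras, results))
```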
Fujitsu’s composable disaggregated infrastructure (CDI) is another part of this solution that impacts performance and efficiency. “Fujitsu enables higher performance and power efficiency with the composability of [its disaggregated infrastructure] and continues to contribute to the realization of IOWN computing infrastructure,” Fujitsu SVP Kenichi Sakai said. “These PoC results show that IOWN’s feasibility has increased towards the commercialization in 2026 and that IOWN has a potential for AI applications.”