In today’s digital world, billions of devices generate data every second. Smartphones, smart home devices, self-driving cars, and industrial sensors constantly collect information that must be processed quickly. Traditionally, most of this data is sent to large cloud data centers for analysis. However, a growing technology known as edge computing is changing how this process works.
Edge computing is an emerging information technology architecture that allows data to be processed locally on devices such as smartphones, autonomous vehicles, local servers, and Internet-of-Things (IoT) systems instead of sending everything to a distant cloud server. By processing data closer to where it is generated, edge computing can reduce delays, improve efficiency, and allow artificial intelligence systems to operate faster while using less power.
Despite its promise, edge computing faces a major challenge: most local devices have limited computing power and limited battery capacity. To perform complex tasks, they often still need to send data to powerful cloud servers. This wireless communication consumes energy and can slow down performance.
A new study by researchers at Nanjing University may offer a powerful solution. Their research, published in Nature Electronics, introduces a novel technology called communication-aware in-memory wireless neural networks, which could significantly improve the efficiency of communication between edge devices and cloud servers.
The Growing Importance of Edge Computing
Edge computing is becoming increasingly important as the number of connected devices continues to grow. From smart cities and healthcare monitoring systems to autonomous vehicles and industrial robots, modern technologies rely heavily on real-time data processing.
When data must travel long distances to a centralized data center, it introduces latency—small delays that can become critical in time-sensitive applications. For example, a self-driving car must analyze sensor data almost instantly to detect obstacles and make safe decisions.
Processing information directly on the device—or at the “edge” of the network—helps reduce this delay. It also decreases the amount of data that must be transmitted over wireless networks.
However, edge devices face serious limitations. Unlike large cloud servers with vast processing power, edge devices operate under strict energy constraints. Their batteries must last for long periods, and their hardware capabilities are limited.
As a result, finding ways to make AI systems run efficiently on these devices is one of the biggest challenges in modern computing.
Combining Memory, Computing, and Communication
To tackle this problem, the research team led by Feng Miao developed a new architecture that merges three critical functions—memory, computing, and wireless communication—into a single system.
The idea behind the project emerged from the expectation that the future will include enormous numbers of intelligent devices operating everywhere—from homes and hospitals to transportation networks and factories.
According to the researchers, these devices will require strong artificial intelligence capabilities while consuming very little energy. This challenge motivated them to explore in-memory computing, a hardware technology that processes data directly where it is stored instead of moving data back and forth between memory and processors.
Traditional computer systems separate memory and computation, which requires large amounts of data movement. This movement consumes energy and slows down performance. In-memory computing reduces this problem by performing calculations directly within memory components.
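As a rough numerical illustration (a sketch of the general principle, not the paper's hardware), the core operation of an in-memory computing array is a matrix-vector multiplication carried out where the weights are stored: device conductances hold the weight values, and applying input voltages yields output currents through Ohm's and Kirchhoff's laws, with no separate memory fetches.

```python
import numpy as np

# Weights stored as device conductances G (illustrative values, in millisiemens).
# Rows correspond to inputs, columns to outputs.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])

# Input voltages applied to the rows (volts).
V = np.array([0.4, 0.1, 0.2])

# Column currents I = G^T V (milliamps): the multiply-accumulate happens
# "in memory", since the weights never leave the array.
I = G.T @ V
print(I)
```

The same operation on a conventional processor would require fetching every weight from memory before each multiply, which is exactly the data movement in-memory computing avoids.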
The researchers realized that combining this approach with wireless communication could dramatically improve energy efficiency for edge computing.
A Communication-Aware AI System
One of the most innovative aspects of the new system is its communication-aware training method.
In conventional wireless communication systems, the goal is to transmit data perfectly without any loss. Engineers design systems that try to preserve every bit of information during transmission. However, ensuring perfect transmission requires significant power.
The researchers proposed a different approach.
Instead of treating wireless communication as a completely separate system that must deliver perfect data, they integrated it directly into the artificial neural network. In this framework, the neural network learns how to handle imperfect or partially lost data.
In other words, the AI model is trained to work effectively even when wireless signals are weak or when data transmission contains minor errors. This allows the system to reduce the amount of power required for communication.
By accepting small imperfections in transmitted data, the system dramatically lowers energy consumption while maintaining high accuracy.
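The training idea can be sketched with a toy model (an illustrative simulation, not the authors' actual architecture or dataset): a simulated additive-noise wireless channel is placed inside the training loop of a small classifier, so the learned weights adapt to the imperfect link rather than assuming perfect transmission.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_channel(x, snr_db, rng):
    """Simulate a lossy wireless link: add Gaussian noise scaled to a target SNR."""
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

# Toy two-class data: the label is the sign of the feature sum.
X = rng.normal(size=(400, 8))
y = (X.sum(axis=1) > 0).astype(float)

# Logistic classifier whose inputs pass through the noisy channel
# during training, so the weights learn to tolerate transmission errors.
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(300):
    Xn = noisy_channel(X, snr_db=5.0, rng=rng)   # channel inside the loop
    p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))      # sigmoid probabilities
    w -= lr * (Xn.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Evaluate on freshly noised inputs at the same low SNR.
Xt = noisy_channel(X, snr_db=5.0, rng=rng)
pred = (1.0 / (1.0 + np.exp(-(Xt @ w + b)))) > 0.5
acc = np.mean(pred == y.astype(bool))
print(f"accuracy under 5 dB channel noise: {acc:.2f}")
```

Because the noise is part of training rather than an afterthought, the model keeps most of its accuracy at a signal-to-noise ratio where a conventionally trained model would need far more transmit power.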
Impressive Performance in Testing
To evaluate their approach, the research team tested the new architecture on an image classification task, a common benchmark used in artificial intelligence research.
The neural network analyzed images and attempted to classify them correctly while communicating with a cloud server under different wireless conditions.
The results were impressive.
Even when transmission conditions were not ideal, the system achieved an accuracy rate of 93.71 percent. This shows that the neural network was able to maintain strong performance even with reduced communication power.
In practical terms, the technology could dramatically lower the energy required for wireless communication between devices and cloud servers.
Using the well-known ImageNet dataset as an example, the researchers demonstrated that their system could reduce the wireless transmission power required for collaborative AI tasks by up to 95 percent.
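For intuition, transmit-power ratios are usually expressed in decibels. A back-of-the-envelope conversion (not a figure from the paper) shows that keeping only 5 percent of the original power corresponds to roughly a 13 dB reduction:

```python
import math

reduction = 0.95                 # 95 percent less transmit power
remaining = 1 - reduction        # 5 percent of the original power remains
db = 10 * math.log10(remaining)  # power ratio in decibels
print(round(db, 1))              # about -13 dB
```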
This level of efficiency could transform how edge devices operate in the future.
Breaking the Barrier Between AI and Wireless Communications
Traditionally, artificial intelligence research and wireless communication engineering have been treated as separate scientific fields. AI experts focus on improving algorithms and models, while communication engineers work on improving signal transmission and network performance.
The new research bridges these two fields.
By designing neural networks that understand and adapt to wireless communication conditions, the system integrates communication directly into the AI training process.
This new perspective could inspire future collaborations between the AI and telecommunications research communities. It also opens the door to smarter and more energy-efficient computing systems.
Potential Real-World Applications
If further developed, this technology could have major impacts across many industries.
For example, autonomous vehicles rely on a constant exchange of information between onboard computers and remote servers. Reducing communication power requirements could help extend battery life and improve reliability.
Smartphones and wearable devices could run more advanced AI applications without draining their batteries quickly.
Smart cities that rely on millions of sensors could process data more efficiently, reducing network congestion and energy consumption.
In healthcare, remote monitoring devices could analyze patient data locally while sending only minimal information to cloud systems.
Overall, the technology could enable a new generation of intelligent devices capable of performing complex tasks with minimal energy usage.
Future Development and Research
Although the results are promising, the researchers say more work is needed before the technology can be widely deployed.
Future research will focus on improving the practicality of the system and optimizing its engineering design. One important direction is extending the method to multiple-input multiple-output (MIMO) communication systems, which are widely used in modern wireless networks such as 5G.
The team also plans to explore on-chip integration, which would allow the entire system—including computing, memory, and communication components—to be integrated directly onto a single microchip.
Such advancements could make the technology more compact, efficient, and suitable for real-world devices.
A Step Toward Energy-Efficient AI
As artificial intelligence continues to expand into everyday devices, improving energy efficiency will become increasingly important.
Edge computing offers a promising way to bring powerful AI capabilities closer to where data is generated. However, solving the challenges of limited battery life and communication costs is essential for its success.
The new communication-aware in-memory wireless neural network developed by researchers at Nanjing University represents an important step toward that goal.
By combining computing, memory, and wireless communication into a single intelligent system, the technology could dramatically reduce energy consumption while maintaining strong AI performance.
If future research confirms its potential, this innovation could help power the next generation of smart devices—making artificial intelligence faster, more efficient, and accessible across billions of connected systems around the world.
Reference: Yang, ZZ., Wang, C., Zhao, Y. et al. Communication-aware in-memory wireless neural networks. Nat Electron (2026). https://doi.org/10.1038/s41928-026-01577-5
