The growing emphasis on “explainable AI” in recent years highlights a fundamental issue: many previous AI algorithms have operated as black boxes, giving little insight into what information they extract and utilize from the training data. While this opaqueness may be tolerated in fields like natural language processing or computer vision, where traditional algorithms have struggled, applying unexplainable AI to an established field like wireless communications is both unnecessary and counterproductive.
In wireless communications, decades of rigorous research and experience from real-world network operation have produced well-functioning, human-crafted algorithms based on established models (e.g., for wave propagation and randomness) and optimized methodologies (e.g., from information theory). If AI is to surpass these state-of-the-art solutions, we must understand why: Is it uncovering previously overlooked characteristics in real-world data, or is it merely exploiting artifacts in a synthetic dataset? The latter is a significant risk that the research community must be mindful of, particularly when training data is generated from simplified numerical models that don’t fully capture real-world complexities but only resemble measured data in some statistical sense.

I have identified three characteristics that are particularly promising to learn from data to improve the operation of the physical and MAC layers in future wireless networks.
- User mobility: People typically move through the coverage area of a wireless network in a structured manner, but this mobility is hard to track and predict using traditional signal processing methods, except in line-of-sight scenarios. AI algorithms can learn complex maps (e.g., channel charts) and use them for predictive tasks such as beam tracking, proactive handover, and rate adaptation.
- User behaviors: People are predictable when it comes to when, how, and where they use particular applications, as well as what content they are looking for. An AI algorithm can learn these patterns and utilize them to enhance the user experience through proactive caching or to save energy by turning off hardware components in low-traffic situations.
- Application behaviors: The data that must be communicated wirelessly to run an application is generally bursty, even if the application is used continuously by the user. The corresponding traffic pattern can be learned by AI and used for proactive scheduling and other resource allocation tasks. Auto-encoders can also be utilized for data compression, an instance of semantic communication.
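As a toy illustration of the auto-encoder idea in the last bullet, the sketch below compresses synthetic 16-slot traffic bursts into a 4-dimensional latent code using a linear auto-encoder trained by plain gradient descent. All dimensions, data, and learning rates are my own illustrative assumptions, not a proposed 6G design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "bursty" traffic: each row is a 16-slot window containing one
# 4-slot burst at a random position (a stand-in for measured traffic).
n, d, k = 512, 16, 4
X = np.zeros((n, d))
starts = rng.integers(0, d - 4, size=n)
for i, s in enumerate(starts):
    X[i, s:s + 4] = rng.uniform(0.5, 1.0, size=4)

W_enc = rng.normal(0, 0.1, (d, k))   # encoder: 16 slots -> 4 latent values
W_dec = rng.normal(0, 0.1, (k, d))   # decoder: 4 latent values -> 16 slots
lr = 0.05

def mse(A, B):
    return float(np.mean((A - B) ** 2))

err_before = mse(X, X @ W_enc @ W_dec)
for _ in range(500):
    Z = X @ W_enc                    # compressed representation
    Xh = Z @ W_dec                   # reconstruction
    G = 2 * (Xh - X) / n             # gradient of the squared error
    W_dec -= lr * Z.T @ G            # gradient-descent updates
    W_enc -= lr * X.T @ (G @ W_dec.T)
err_after = mse(X, X @ W_enc @ W_dec)
print(err_before, err_after)
```

In practice one would train a deep nonlinear auto-encoder on measured traffic; the point of the sketch is only that a learned low-dimensional code can stand in for the raw data, which is the essence of compression for semantic communication.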
A cellular network that utilizes these characteristics will likely implement different AI algorithms in each cell because the performance benefits come from tuning parameters based on the local conditions. AI can also be used at the network operation level for automation and to identify anomalies.
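On the anomaly-identification point, a minimal sketch of what such a network-operation feature could look like, assuming a simple rolling z-score detector on a per-cell KPI time series (the function name, window length, and threshold are my own illustrative choices):

```python
import numpy as np

def zscore_anomalies(kpi, window=24, threshold=4.0):
    """Return the time indices where the KPI deviates more than
    `threshold` standard deviations from its trailing-window mean."""
    kpi = np.asarray(kpi, dtype=float)
    hits = []
    for t in range(window, len(kpi)):
        hist = kpi[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(kpi[t] - mu) > threshold * sigma:
            hits.append(t)
    return hits

# Steady cell throughput around 100 Mbit/s with a sudden drop at t = 40
# (e.g., a failing radio unit) that the detector should flag.
rng = np.random.default_rng(1)
kpi = 100 + rng.normal(0, 2, 60)
kpi[40] = 20
print(zscore_anomalies(kpi))
```

A deployed system would learn richer, cell-specific baselines, but even this simple statistic shows why local tuning matters: the same threshold corresponds to very different absolute deviations in cells with different traffic variability.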
Many of the features mentioned above already exist in 5G networks but have been added on top of the standard at the vendor’s discretion. The vision for 6G as an “AI-native” network is to provide a standardized framework for data collection, sharing, and utilization across the radio access network. This could enable AI-driven optimizations at a scale previously unattainable, unlocking the full potential of AI in wireless communications. When this happens, we must not forget about explainability: there must at least be a high-level understanding of what characteristics are learned from the data and why they can be utilized to make the network more efficient.
I give some further examples of AI opportunities in wireless networks, as well as associated risks and challenges, in the following video: