We have now released the 23rd episode of the podcast Wireless Future! It has the following abstract:
For each wireless generation, we are using more bandwidth and more antennas. While the primary reason is to increase the communication capacity, it also increases the network’s ability to localize objects and sense changes in the wireless environment. The localization and sensing applications impose entirely different requirements on the desired signal and channel properties than communications. To learn more about this, Emil Björnson and Erik G. Larsson have invited Henk Wymeersch, Professor at Chalmers University of Technology, Sweden. The conversation covers the fundamentals of wireless localization, the historical evolution, and future developments that might involve machine learning, terahertz bands, and reconfigurable intelligent surfaces. Further details can be found in the articles “Collaborative sensor network localization” and “Integration of communication and sensing in 6G”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
We have now released the 21st episode of the podcast Wireless Future! It has the following abstract:
The latest wireless technologies rely heavily on beamformed data transmissions, implemented using antenna arrays. Since the signals are spatially directed towards the location of the receiver, the transmitter needs to know where to point the beam. Before the wireless link has been established, the transmitter will not have such knowledge. Hence, the geographical coverage of a network is determined by how we can transmit in the absence of beamforming gains. In this episode, Emil Björnson and Erik G. Larsson discuss how to achieve wide-area coverage in wireless networks without beamforming. The conversation covers deployment fundamentals, pathloss characteristics, beam sweeping, spatial diversity, and space-time codes. To learn more, you can read the textbook “Space-Time Block Coding for Wireless Communications”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
We have now released the 20th episode of the podcast Wireless Future! It has the following abstract:
Many objects around us are embedded with sensors and processors to create the Internet of Things (IoT). Wireless connectivity is an essential component for enabling these devices to exchange data without human interaction. To learn more about this development, Erik G. Larsson and Emil Björnson have invited Liesbet Van der Perre, Professor at KU Leuven, Belgium. The conversation covers IoT applications, connectivity solutions, powering, security, sustainability, and e-waste. Further details can be found in the article “The Art of Designing Remote IoT Devices—Technologies and Strategies for a Long Battery Life”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
Our podcast is back with a second season! The first episode has number 19 and the following abstract:
How far is the capacity of wireless networks from the limits imposed by nature? To seek an answer to this question, Erik G. Larsson and Emil Björnson invited Thomas Marzetta, Distinguished Industry Professor and originator of Massive MIMO, to this first episode of the second season. The conversation covers the history of that technology and the fundamental aspects that will always dictate the capacity of wireless networks: antenna technology, channel state information, spectral efficiency, bandwidth, spectrum bands, and link budgets. To learn more, you can read the article “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
I was recently invited to the Ericsson Imagine Studio to have a look at the company’s wide range of Massive MIMO products. The latest addition is the AIR 3268 with 32 antenna-integrated radios that only weighs 12 kg. In this article, I will share the new insights that I gained from this visit.
Ericsson currently has around 10 mid-band Massive MIMO products, which are divided into three categories: capacity, coverage, and compact. The products provide different combinations of:
Number of radio branches, which determines the beamforming variability;
Maximum bandwidth, which should be matched to the operator’s spectrum assets.
The new lightweight AIR 3268 (which I got the chance to carry myself) belongs to the compact category, since it “only” radiates 200 W over 200 MHz and “only” contains 128 radiating elements, which are connected to 32 radio branches (sometimes referred to as transceiver chains). A radio branch consists of filters, converters, and amplifiers. The radiating elements are organized in an 8 x 8 grid, with dual-polarized elements at each location. Bo Göransson, a Senior Expert at Ericsson, told me that the element spacing is roughly 0.5λ in the horizontal dimension and 0.7λ in the vertical dimension. The exact spacing is fine-tuned based on thermal and mechanical aspects. Moreover, since the physical spacing is fixed, it corresponds to a different fraction of the wavelength λ depending on which frequency band the product is used in.
The reason for having a larger spacing in the vertical dimension is to obtain sharper vertical directivity, so that the radiated energy is focused more towards the ground. This also explains why the box is rectangular, even though the elements are organized in an 8 x 8 grid. Four vertically neighboring elements with equal polarization are connected to the same radio branch; Ericsson calls this group a subarray. Each subarray behaves as an antenna with a fixed radiation pattern that is relatively narrow in the vertical domain. This concept can be illustrated as follows:
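As a rough, self-made illustration of this grouping (not Ericsson material, and with the approximate 0.5λ/0.7λ spacings quoted above taken as assumptions), the Python sketch below maps the 128 dual-polarized elements to 32 radio branches and estimates how narrow the vertical beam of one four-element subarray becomes:

```python
import numpy as np

# Illustrative element-to-branch mapping for an 8 x 8 dual-polarized array:
# 128 radiating elements grouped into 32 radio branches, where each branch
# feeds 4 vertically adjacent elements with the same polarization (a "subarray").
ROWS, COLS, POLS = 8, 8, 2
SUBARRAY_SIZE = 4

branches = []
for pol in range(POLS):
    for col in range(COLS):
        for group in range(ROWS // SUBARRAY_SIZE):
            rows = range(group * SUBARRAY_SIZE, (group + 1) * SUBARRAY_SIZE)
            branches.append([(row, col, pol) for row in rows])
print(f"{len(branches)} radio branches, {ROWS * COLS * POLS} radiating elements")

# Vertical array factor of one 4-element subarray with ~0.7-wavelength spacing
# and equal (fixed) weights, to see how narrow its vertical beam is.
spacing = 0.7                                    # in wavelengths (approximate)
theta = np.radians(np.linspace(-90, 90, 3601))   # angle from broadside
phase = 2 * np.pi * spacing * np.sin(theta)
af = np.abs(np.exp(1j * np.outer(np.arange(SUBARRAY_SIZE), phase)).sum(axis=0))
half_power = np.degrees(theta[af >= af.max() / np.sqrt(2)])
print(f"Vertical half-power beamwidth: about {half_power.max() - half_power.min():.0f} degrees")
```

The roughly 18-degree half-power beamwidth that this toy calculation gives is only indicative; the real pattern also depends on the element design, amplitude tapering, and mutual coupling.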
This lightweight product is well suited for Swedish cities, which are characterized by low-rise buildings and operators that each have around 100 MHz of spectrum in the 3.5 GHz band.
If we take the AIR 3268 as a starting point, the coverage range can be improved by increasing the number of radiating elements to 192 and the maximum output power to 320 W. The AIR 3236 in the coverage category has that particular configuration. To further increase the capacity, the number of radio branches can also be increased to 64, as in the AIR 6419 that I wrote about earlier this year. These changes increase the weight from 12 kg to 20 kg.
Why low weight matters
There are multiple reasons why the weight of a Massive MIMO array matters in practice. Firstly, it eases deployment since a single engineer can carry the unit; in fact, there is a 25 kg per-person limit in the industry, which implies that a single engineer may carry one AIR 3268 in each hand (as shown in the press photo from Ericsson). Secondly, the site rent in towers depends on the weight, as well as on the wind load, which naturally decreases when the array shrinks in size. All current Ericsson products have front dimensions determined by the antenna array size, since all other components are placed behind the radiating elements. This was not the case a few years ago and demonstrates the product evolution. The thickness of the panel is determined by the radio components as well as the heatsink, which is designed to manage ambient temperatures up to 55°C.
The total energy consumption is reduced by 10% in the new product compared to its predecessor, as a result of fine-tuning all the analog components. According to Måns Hagström, Senior Radio Systems Architect at Ericsson, there are no “low-hanging fruits” left in the hardware design since the Massive MIMO product line is now mature. However, there is a new software feature called Deep Sleep, where power amplifiers and analog components are turned off in low-traffic situations to save power. Turning off components is not as simple as it sounds, since it must be possible to turn them on again in a matter of milliseconds so that no coverage or delay issues are created.
Beamforming implementation
In 5G, the channel state information needed for beamforming can be acquired either through codebook-based feedback or by utilizing uplink-downlink reciprocity, where the latter is what most of the academic literature focuses on. The beamforming computation in Ericsson’s products is divided between the Massive MIMO panel and the baseband processing unit, which are interconnected using the eCPRI interface. The purpose of this computational split is to reduce the fronthaul signaling by exploiting the fact that Massive MIMO transmits a relatively small number of data streams/layers (e.g., 1-16) using a substantially larger number of radios (e.g., 32 or 64). More precisely, the Ericsson Silicon in the panel takes care of the mapping from data streams to radio branches, so that the eCPRI capacity requirement is independent of the number of radio branches. It is actually the same silicon that is used in all the current Massive MIMO products. I was told that some kind of regularized zero-forcing processing is utilized when computing the multi-user beamforming. Billy Hogan, Principal Engineer at Ericsson, pointed out that the beamforming implementation is flexible in the sense that there are tunable parameters that can be revised through a software upgrade, as the company learns more about how Massive MIMO works in practical deployments.
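The exact algorithm is proprietary, but for curious readers, here is a minimal sketch of what generic regularized zero-forcing precoding looks like, mapping 4 data streams onto 32 radio branches. The channel model, power normalization, and regularization constant are my own illustrative assumptions, not the product implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 32, 4          # radio branches and simultaneous data streams/users
snr = 10.0            # total transmit power over noise power (assumed)

# Estimated channel matrix: row k is the channel from the M branches to user k.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Regularized zero-forcing precoder: W = H^H (H H^H + alpha*I)^{-1},
# where the regularization alpha balances interference suppression and noise.
alpha = K / snr
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
W *= np.sqrt(snr) / np.linalg.norm(W)   # scale to the total power budget

# The effective channel H @ W is nearly diagonal: each user mostly receives
# its own data stream, with only small residual inter-user interference.
print(np.round(np.abs(H @ W), 2))
```

The printed effective channel H·W is close to diagonal, which is the whole point of the technique: each user receives its own stream with little residual inter-user interference, while the many radio branches provide beamforming gain.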
Hagström also pointed out that a key novelty in 5G is the larger variation in capabilities between handsets: for example, in how many antennas they have, how flexible those antennas are, and how they make measurements and decide which preferred mode of operation to report back to the base station. The 5G standard specifies protocols but leaves the door open for both clever and less sophisticated implementations. While Massive MIMO has been shown to provide impressive spectral efficiency in field trials, it remains to be seen how large the spectral efficiency gains become in practice, when using commercial handsets and real data traffic. It will likely take several years before the data traffic reaches a point where the capability of spatially multiplexing many users is needed most of the time. In the meantime, these Massive MIMO panels will deliver higher single-user data rates throughout the coverage area than previous base stations, thanks to the stronger and more flexible beamforming.
Future development
One of the main takeaways from my visit to the Ericsson Imagine Studio in Stockholm is that the Massive MIMO product development has come much further than I anticipated. Five years ago, when I wrote the book Massive MIMO Networks, I had the vision that we should eventually be able to squeeze all the components into a compact box with a size that matches the antenna array dimensions. But I couldn’t imagine that it would happen already in 2021, when the 5G deployments are still in their infancy. With this in mind, it is challenging to speculate on what will come next. If the industry can already build 64 antenna-integrated radios into a box that weighs less than 20 kg, then one can certainly build even larger arrays once there is demand for them.
The only hint about the future that I picked up from my visit is that Ericsson already considers Massive MIMO technology and its evolutions to be natural parts of 6G solutions.
Four years ago, I reviewed the book “The 5G Myth” by William Webb. The author described how the telecom industry was developing a 5G technology that addresses the wrong issues; for example, higher peak rates and other things that are barely needed and seldom reached in practice. Instead, he argued that more consistent connectivity quality should be the goal for the future. I generally agree with his criticism of the 5G visions that one heard at conferences at the time, even if I noted in my review that the argumentation in the book was sometimes questionable. In particular, the book propagated several myths about MIMO technology.
Webb wrote a blog post earlier this year where he continues to criticize 5G, this time by analyzing whether the 5G visions have been achieved. His main conclusion is that “5G is a long way from delivering on the original promises”, thereby implying that 5G is a failed technology. While the facts that Webb refers to in his blog post are indisputable, the main issue with his argumentation is that what he calls the “original promises” are the long-term visions presented by a few companies around 2013, not the actual 5G requirements set by the ITU. Moreover, it is way too early to tell whether 5G will reach its goals or not.
Increasing data volumes
Let us start by discussing the mobile data volumes. Webb says that 5G promised to increase them by 1000 times. According to the Ericsson Mobility Report, the monthly mobile data traffic in North America grew from 1 to 4 EB between 2015 and 2020, which corresponds to an annual increase of 32%. This growth is created by a gradual increase in the demand for wireless data, which has been enabled by a gradual deployment of new sites and upgrades of existing sites. Looking ahead, the Mobility Report predicts another 3.5-times growth over the next 5 years, corresponding to an annual increase of 28%. These predictions have been fairly stable over the last few years. The point I want to make is that one cannot expect 5G to drastically change the data volumes from one day to the next; the goal of the technological evolution is to support and sustain the long-term growth in data traffic, which will likely remain at around 30% per year for the foreseeable future. Whether 5G will enable this or not is too early to tell, because we have only had the opportunity to observe one year of 5G utilization, in a few markets.
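As a side note, the annual percentages above are simply the compound annual growth rates implied by the quoted traffic figures; a quick Python check:

```python
# Compound annual growth rate implied by the quoted traffic figures.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"2015-2020 (1 -> 4 EB/month):  {cagr(1, 4, 5):.0%} per year")   # about 32%
print(f"Predicted 3.5x over 5 years: {cagr(1, 3.5, 5):.0%} per year")  # about 28%
```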
Importantly, the 5G requirements defined by the ITU don’t contain any relative targets for how much the data volumes should increase over 4G. The 1000-times increase that Webb refers to originates from a 2012 white paper by Qualcomm. That paper discusses, in general terms, how to overcome the “1000x mobile data challenge”, without saying that 5G alone should achieve it or what the exact time frame would be. Nokia presented a similar vision in 2013. I have used the 1000x number in several talks, including a popular YouTube video. However, the goal of those discussions has only been to explain how we can build networks that support 1000 times higher data volumes, not to claim that the demand will grow by such an immense factor any time soon. Even if the traffic were suddenly to start doubling every year, it would take 10 years to reach 1000 times higher traffic than today (since 2^10 = 1024).
The current state of 5G
The 5G deployments have so far been utilizing Massive MIMO technology in the 3 GHz band. This is a technology for managing higher data volumes by spatially multiplexing many users, so it is only when the traffic increases that we can actually measure how well the technology performs. Wireless data isn’t a fixed resource that we can allocate as we like among the users; the deliverable data volume depends on the number of multiplexed users and their respective propagation conditions. However, field trials have shown that the technology delivers on the promise of achieving much higher spectral efficiencies.
When it comes to higher peak data rates, there are indeed ITU targets that one can compare between network generations. The 4G target was 1 Gbps, while it is 20 Gbps in 5G. The latter number is supposed to be achieved using 1 GHz of spectrum in the mmWave bands. The high-band 5G deployments are still in their infancy, but Verizon has at least reached 5 Gbps in their network.
To be fair, Webb provides a disclaimer in his blog post saying that his analysis is based on the current state of 5G, where mmWave is barely used. My point is that it is too early to conclude whether 5G will reach any of its targets since we are just in the first phase of 5G deployments. Most of the new features, including lower latency, higher peak rates, and massive IoT connectivity, aren’t supposed to be supported until much later. Moreover, the consistent connectivity paradigm that Webb pushed for in his book is something 5G might deliver using cell-free implementations, for example, using the Ericsson concept of radio stripes.
Webb draws one more conclusion in his blog post: “4G was actually more revolutionary than 5G.” This might be true in the sense that it was the first mobile broadband generation to be utilized in large parts of the world. While the data volumes have “only” increased by 30% per year in North America over the last decade, the growth rate has been truly revolutionary in developing parts of the world (e.g., +120% per year in India). There is hope that 5G will eventually be the platform that enables the digitalization of society and new industries, including autonomous cars and factories. The future will tell whether those visions materialize, and whether it will be a revolution or an evolution.
Is 5G a failed technology?
It is too early to tell, since the 5G visions have focused on the long-term perspective, but so far, I think 5G is progressing as planned. When discussing these matters, it is important to evaluate 5G against the ITU requirements and not against the (potentially) over-optimistic visions from individual researchers or companies. As Bill Gates once put it:
“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”
We have now released the 18th episode of the podcast Wireless Future, which is the last one in the first season (we are taking a summer break). The episode has the following abstract:
Many individuals are speculating about 6G, but in this episode, you will hear the joint vision of 700+ researchers at Ericsson. Erik G. Larsson and Emil Björnson are visited by Magnus Frodigh, Vice-President and Head of Ericsson Research. His team has recently published the white paper “Ever-present intelligent communication: A research outlook towards 6G”. The conversation covers emerging applications, new requirements, and research challenges that might define the 6G era. How can we achieve limitless connectivity? Which frequency bands will become important? What is a network compute fabric? What should students learn to take part in the 6G development? These are just some of the questions that are answered.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places: