We have now released the 35th episode of the podcast Wireless Future. It has the following abstract:
The main directions for 6G research have been established and include pushing the communication to higher frequency bands, creating smart radio environments, and removing the conventional cell structure. There are many engineering issues to address on the way to realizing these visions. In this episode, Emil Björnson and Erik G. Larsson discuss the article “The Road to 6G: Ten Physical Layer Challenges for Communications Engineers” from 2021. What specific research challenges did the authors identify, and what remains to be done? The conversation covers system modeling complexity, hardware implementation issues, and signal processing scalability. The article can be found here: https://arxiv.org/pdf/2004.07130 The following papers were also mentioned: https://arxiv.org/pdf/2111.15568 and https://arxiv.org/pdf/2104.15027
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
When I first heard Tom Marzetta describe Massive MIMO with an infinite number of antennas, I felt uncomfortable since it challenged my way of thinking about MIMO. His results and conclusions seemed too good to be true and were obtained using a surprisingly simple channel model. I was a Ph.D. student then and couldn’t pinpoint specific errors, but I sensed something was wrong with the asymptotic analysis.
I’ve later grown to understand that Massive MIMO has all the fantastic features that Marzetta envisioned in his seminal paper. The conclusions from his asymptotic analysis are also correct, even if the choice of model overemphasizes the impact of pilot contamination. The only issue is that one cannot reach all the way to the asymptotic limit, where the number of antennas is infinite.
Assuming that the universe is infinite, we could indeed build an infinitely large antenna array. The issue is that the uncorrelated and correlated fading models that were used for asymptotic Massive MIMO analysis during the last decade will, as the number of antennas increases, eventually deliver more signal power to the receiver than was transmitted. This breaches a fundamental physical principle: the law of conservation of energy. Hence, the conventional channel models cannot predict the actual performance limits.
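To make the energy-conservation issue concrete, here is a small numerical sketch (my own illustration, not taken from the paper) that compares the conventional far-field channel-gain model with an exact solid-angle computation for a square receive aperture facing an isotropic transmitter. Polarization and antenna gains are ignored, so the exact capture fraction approaches 1/2 rather than the limits derived in the paper; the point is only that the far-field model eventually predicts more received power than was transmitted, while the exact model saturates.

```python
import math

def far_field_gain(L, d):
    """Conventional far-field model: total channel gain equals the aperture
    area divided by the sphere area 4*pi*d^2. This grows without bound as
    the array side L grows, so it eventually exceeds 1."""
    return L**2 / (4 * math.pi * d**2)

def exact_gain(L, d):
    """Fraction of isotropically radiated power hitting a square aperture of
    side L at broadside distance d, computed via the exact solid angle
    subtended by the aperture. This saturates below 1/2 (half of the power
    travels towards the plane containing the array)."""
    omega = 4 * math.atan((L / 2)**2 / (d * math.sqrt(d**2 + L**2 / 2)))
    return omega / (4 * math.pi)

d = 5.0  # transmitter-to-array distance in meters (illustrative value)
for L in [1.0, 10.0, 100.0, 1000.0]:
    print(f"L = {L:6.0f} m: far-field model {far_field_gain(L, d):10.3f}, "
          f"exact {exact_gain(L, d):.3f}")
```

In this example, the far-field model predicts a total gain above 1 (more power received than transmitted) once the array side exceeds roughly 18 meters, while the exact fraction keeps increasing but never reaches 1/2. This is the essence of why the conventional models cannot predict the actual asymptotic limits.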
In 2019, Luca Sanguinetti and I finally figured out how to study the actual asymptotic performance limits. As the number of antennas and the array size grow, the receiver will eventually be in the radiative near-field of the transmitter. This basically means that the outermost antennas contribute less to the channel gain than the innermost antennas, and this effect becomes dominant as the number of antennas goes to infinity. We published the analytical results in the article “Power Scaling Laws and Near-Field Behaviors of Massive MIMO and Intelligent Reflecting Surfaces” in the IEEE Open Journal of the Communications Society. In particular, we highlighted the implications for MIMO receivers, MIMO relays, and intelligent reflecting surfaces. I have explained our main insights in a previous blog post, so I will not repeat them here.
I am proud to announce that this article has received the 2023 IEEE Communications Society Outstanding Paper Award. We wrote this article to satisfy our curiosity, without knowing that the analysis of near-field propagation would later become one of the leading 6G research directions. From our perspective, we had just found an answer to a fundamental issue that had been bugging us for years and published it in case others would be interested. The paper was first rejected by another journal; thus, this is also a story of how one can reach success through hard work, even when other researchers are initially skeptical of your results and their practical utility.
The following 5-minute video summarizes the paper neatly:
Multiantenna communications have a long and winding history, starting with how Guglielmo Marconi used an array of phase-aligned antennas to communicate over the Atlantic and Karl Ferdinand Braun used a triangular array to transmit phase-shifted signal copies to beamform in a controlled direction. Antenna arrays have since also been adopted for spatial diversity and multiplexing. The cellular network pioneer Martin Cooper tried to launch multi-user MIMO in the 1990s but concluded in 1996 that “computers weren’t powerful enough to operate it”.
During the last 25 years, multiantenna communications have changed from being a technology only used for beamforming and diversity, to becoming a mainstream enabler of high-capacity communication in 5G. It is used for both single-user and multi-user MIMO when connecting any modern mobile phone to the Internet, in both the 3 GHz and mmWave bands.
The IEEE Signal Processing Society is celebrating its 75th anniversary and, therefore, the Signal Processing Magazine is publishing a special issue focusing on the last 25 years of research developments. I have written a paper for this issue called “25 Years of Signal Processing Advances for Multiantenna Communications“. It is now available on arXiv, and it is co-authored by Yonina Eldar, Erik G. Larsson, Angel Lozano, and H. Vincent Poor. I hope you will like it!
We have now released the 34th episode of the podcast Wireless Future. It has the following abstract:
The speed of wired optical fiber technology is soon reaching 1 million megabits per second, also known as 1 terabit/s. Wireless technology is improving at the same pace but is 10 years behind in speed, so we can expect to reach 1 terabit/s over wireless during the next decade. In this episode, Erik G. Larsson and Emil Björnson discuss these expected developments with a focus on the potential use cases and how to reach these immense speeds in different frequency bands – from 1 GHz to 200 GHz. Their own thoughts are mixed with insights gathered at a recent workshop at TU Berlin. Major research challenges remain, particularly related to algorithms, transceiver hardware, and decoding complexity.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
The core of the scientific method is that researchers perform experiments and analyses to make new discoveries, which are then disseminated to other researchers in the field. The discoveries that are accepted by the peers as scientifically valid and replicable become bricks of our common body of knowledge. As discoveries are primarily disseminated in scientific papers, these are scrutinized by other researchers for their correctness and must pass a quality threshold before publication. As an associate editor of a journal, you manage this peer-review process for some of the submitted papers. The papers are assigned to editors based on their expertise and research interests. The editor is the one making the editorial decisions for the assigned papers, and the decision to accept a paper for publication is never forgotten; the editor’s name is printed on the first page of the paper as a stamp of approval. The reputation of the journal and editor is lent to a published paper, thereby setting it apart from preprints and opinion pieces that anyone can share. To increase the likelihood of making a well-informed and fair editorial decision, you ask peer reviewers to read the paper and provide their written opinions. The intention is that the reviewers form a diverse mix of established researchers who are well acquainted with the methodology used in the paper and/or previous work on the topic. To prevent herd mentality (e.g., people borrowing assumptions from earlier papers without questioning their validity), it is also appropriate to ask a senior researcher working on an adjacent topic to provide an outsider perspective. Both TCOM and TGCN require at least three reviewers per paper, but exceptions can be made if reviewers drop out unexpectedly.
Finding suitable reviewers is either easy or hard, seldom in between. When handling a paper close to my daily research activities, I had a good sense of who the skilled researchers on that topic were, and these people were likely to agree to my review requests since the paper matched their interests. However, when handling a paper further from my expertise, meaning that I critically needed input from skillful reviewers to make the right decision, I had little clue of whom to ask. My typical approach was to look for senior researchers that the authors had cited in the submitted paper, get automated suggestions from the editorial system, and make Google searches for papers with similar titles and keywords to find published authors active in the field. I had to take the latter approach quite regularly when working for TGCN because the papers spanned a very wide range of topics related to energy-efficient communications. The result was often a mixed bag of reviews whose quality was hard to assess, which made me less confident in my editorial decisions.
The review time can vary substantially between papers, even if each reviewer is asked to deliver their review within 45 days. The extra delays occur either at the beginning or the end of the process. Only one third of the people that I invited to write reviews accepted my requests, so sometimes I had to invite new people in many rounds before three of them had accepted, which prolonged the process. It is also quite common that reviewers do not deliver their reviews on time. A few extra days are no big deal, but it is annoying when you must chase reviewers who are several weeks late. I urge people who get review requests to remember that it is perfectly fine to decline; you are wasting everyone’s time if you accept the request but cannot deliver on time.
When all the reviews had been received, I went through the paper and reviews to make up my mind on what decision to take: either reject the paper or ask the authors to revise it to address the key issues that have been identified. In principle, the paper can also be accepted immediately but that never happens in practice—no paper is perfect from the beginning. A major revision is basically the most positive decision you can expect from your initial submission, and it means that the editor thinks your paper can be accepted, if you properly address the issues that are brought up. Even if there are “major” things to address, they should still be limited in scope so that you as an editor see a viable path for them to be dealt with within two months. For example, if a flaw in the underlying models has been discovered, which can only be addressed by deriving new formulas and algorithms using entirely different methods and then re-running all the simulations, then I would reject the paper since the revised version would be too different from the initial one. Ideally, the required revisions should only affect a limited number of subsections, figures, and statements so it is sufficient for the editor and reviewer to re-read those parts to judge whether the issues have been addressed or not. Note that there is a path to publication also for papers that are initially rejected from TCOM or TGCN: if you can address the issues that were raised, you are allowed to resubmit the paper and get the same editor and reviewers again.
It is the editor who makes the decision; the reviewers only provide advice. It is therefore important to motivate your decision transparently towards the authors, particularly when the reviewers have conflicting opinions and when there are certain issues that you want the authors to address in detail, while you consider other comments optional or even irrelevant. An example of the latter is when a reviewer asks the authors to cite irrelevant papers. Even when a reviewer has misunderstood the methodology or results, there are good reasons to revise the paper to avoid further misinterpretations. A good paper must be written understandably, in addition to having technical depth and rigor.
Things that characterize a good paper
The review process not only assesses the technical content and scientific rigor, but also the significance, novelty, and usefulness of the paper to the intended readers. The evaluation against these criteria varies over time since the new paper is compared with previous research; a paper that was novel and useful five years ago might be redundant today. For instance, it is natural that the first papers in a new field of communications make use of simplified models to search for basic insights and lead the way for more comprehensive research that uses more realistic models. Some years later, the simplified initial models are outdated and, thus, a new paper that builds on them will be of little significance and usefulness.
A concrete example during my tenure as editor is related to hybrid analog-digital beamforming. This is a wireless technology used in 5G over wideband channels in millimeter-wave bands. The initial research focused on narrowband channels to gain basic insights, but once the beamforming optimization was well understood in that special case, the community moved on to the practically useful wideband case. Nowadays, new papers that treat the narrowband case are mostly redundant. During my last two years, I therefore rejected most such submissions due to limited significance and usefulness.
I think a good paper identifies a set of unanswered research questions and finds concrete answers to them, by making discoveries (e.g., deriving analytical formulas, developing an efficient algorithm, or making experiments). This might seem obvious but many paper submissions that I handled did not have that structure. The authors had often identified a research gap in the sense of “this is the first paper that studies X under conditions Y and Z” but sometimes failed to motivate why that gap must be filled and what the research community can learn from that. Filling research gaps is commendable if we gain a deeper understanding of the fundamentals or bring previous results closer to reality. However, it might also make the research further and further detached from relevance. For example, one can typically achieve novelty by making a previously unthinkable set of assumptions, such as combining a conventional cellular network with UAVs, energy harvesting, secrecy constraints, hardware impairments, intelligent reflecting surfaces, and Nakagami fading—all at once! But such research is insignificant and maybe even useless.
Is the system broken?
The peer-review system receives criticism for being slow, unfair, and unable to catch all errors in papers, and I think the complaints are partially accurate. Some reviewers submit shallow or misleading reviews due to incompetence, rivalry, or lack of time, and the editors are not able to identify all those cases. The papers that I handled in TGCN were often outside my expertise, so I didn’t have the competence to make a deep review myself. Hence, when making my decision, I had to trust the reviewers and judge how critical the identified issues were. I have personally experienced how some editors do not even make that effort but let the reviewers fully decide on the outcome, which can lead to illogical decisions. It is possible to appeal an editorial decision in such situations, either by revising and resubmitting a rejected paper or by contacting the editor (and editor-in-chief) to challenge the decision. My editorial decisions were never challenged in the latter way, but I have handled many resubmissions, some of which were eventually accepted. I have complained about a few decisions on my own papers, which once led to the editor changing his mind but usually resulted in an encouragement to revise and resubmit the paper.
Despite the weaknesses of the peer-review process, I don’t think that abolishing the system is the way forward. Half of all submitted papers are rejected, which is sad for the people who spent time writing them but greatly improves the average quality of the published papers. People who want to publish without scrutiny and having to argue with reviewers already have that option: upload a preprint to arXiv.org or use a pay-to-publish journal. I believe that scientific debate leads to better science and the peer-review system is the primary arena for this. The editor and reviewers might not catch all errors and are not always right, but they are creating a healthy resistance where the authors are challenged and must sharpen their arguments to convince their peers of the validity and importance of their discoveries. This is the first step in the process of accepting discoveries as parts of our common body of knowledge.
If I could change something in the publication process, I would focus on what happens after a paper has been accepted. We will never manage to make the peer-review process fully reliable, thus, there will always be errors in published papers. The journals that I’m familiar with handle error correction in an outdated manner: one cannot correct anything in the published paper, but one can publish a new document that explains what was wrong in the original version. This made sense in a time when journals were printed physically, but not in the digital era. I think published papers should become “living” documents that can be refined with time. For example, IEEE Xplore could contain a feature where the readers of published papers can submit potential typos and the authors can respond, and possibly make corrections directly in the paper. Future readers will thereby obtain the refined version instead of the original one. To facilitate deeper scrutiny of published works, I also think it should become mandatory to attach data and simulation code to published papers; perhaps by turning papers into something that resembles Jupyter notebooks to ensure reproducibility. I’m occasionally informed about genuine errors in my papers or simulation code, which I gladly correct, but I can never revise the officially published versions.
Was it worth it?
Being an editor is one of the unpaid duties that is expected of researchers who want to be active in the international research community. You are a facilitator of scientific debate and a judge of research quality. The reason that scientifically skilled researchers are recruited, instead of non-scientific administrators, is that they should make the final informed publication decisions, not the reviewers. Editors are often recruited once they have established themselves as assistant or associate professors (or similar in industry), either by being handpicked or by volunteering. The experience that these people have accumulated is essential to making insightful decisions.
I learned many things about scientific publishing during my years as an associate editor, such as different ways to write papers and reviews, who I can trust to write timely and insightful reviews, and how to effectively deal with criticism. It was also not overly time-consuming because the journal makes sure that each editor handles a limited number of papers per year. You can free up this time by declining an equal number of regular review requests. It was a useful experience that I don’t regret, and I recommend that others take on the editorial role. Maintaining high scientific standards is a community effort!
We have now released the 33rd episode of the podcast Wireless Future. It has the following abstract:
Research is carried out to obtain new knowledge, find solutions to pertinent problems, and challenge the researchers’ abilities. Two key aspects of the scientific process are reproducibility and replicability, which sound similar but are distinctly different. In this episode, Erik G. Larsson and Emil Björnson discuss these principles and their impact on wireless communication research. The conversation covers the replication crisis, Monte Carlo simulations, best practices, pitfalls that new researchers should avoid, and what the community can become better at. The following article is mentioned: “Reproducible Research: Best Practices and Potential Misuse”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
Authors: William Tärneberg, Gilles Callebaut and Liesbet Van der Perre
What’s in a name? ‘The spoken newspaper’ is the term grandma1 used for the news broadcast on the radio. The curious name bears witness to how rapidly news media have changed. In the same way, the term ‘Access Point (AP)’ literally becomes a somewhat crooked term in the context of 6G networks: the infrastructure will neither provide only access, nor will services be delivered via a single point. This blog post clarifies why 6G applications and sustainability targets require, on the infrastructure side, a compute-connectivity platform consisting of interconnected distributed resources. We introduce new essential terminology to enable a clear discussion when developing novel technologies, originating from lively academic-industrial interactions in the REINDEER project and illustrated in the figure below (spoiler alert!).
We, and the devices we use, rely more and more on information from other devices and sources, accessed wirelessly and progressively dispersed around us and the globe. The vast population of devices generating all this data is generally referred to as Internet-of-Things (IoT) devices. They often inhabit the capillaries of the network, where they sense and report data, enabling us to make more granular and complex decisions. Therefore, over the past decade, the information centre-of-gravity has been shifting from data centres at the heart of the network to the edge of the network. Consequently, traditional cloud data centres are dispersing towards the edge, offering seamless computational capacity throughout the network in the form of edge and fog computing. The potential number and types of devices that want to connect reliably, wirelessly, at low latency and high throughput present a huge challenge to wireless networks. Imagine a throughput-hungry AR headset that shares a space with a few thousand connected sensors whilst being driven around in an autonomous vehicle. Satisfying the needs of that scenario, and many more like it, is the precursor to the 6th generation of wireless networks (6G).
Researchers are in the midst of shaping the 6th generation of wireless networks, capable of supporting divergent applications and especially services that go far beyond what the 5th generation can offer. That effort includes addressing challenges in achieving remarkably high data rates, imperceptibly low latency, unrivalled dependability, and ultra-low power consumption [1,2]. Also, and with increasing urgency, the above should be accomplished with a negligible carbon footprint. Our vision for 6G is focused on realising large intelligent surfaces (LIS) [3]. A LIS is an extension of massive MIMO, where the number of antennas is increased by several orders of magnitude and distributed throughout a three-dimensional space rather than co-located on a plane. Accompanying the antennas are radio and computational resources. This spatial and computational diversity is the single most significant distinction between the 6th generation of wireless networks and its predecessors.
With such diversity, the potential of 6G is monumental, but therein lies its greatest challenge. To seamlessly serve users with LIS, shared computational and radio resources will have to be dynamically allocated and orchestrated at a previously unseen scale and speed. As there are no discrete base stations, all decisions will have to be made in a distributed manner, in what is referred to as a cell-free network.
In cell-free networking [4,5], all resources in the network can be used to offer a given service. This is an appealing concept for fully utilising the available resource capacity. Although the theoretical potential of cell-free networking has been identified, many questions remain concerning how these systems can be designed to be practically feasible. For example, how can such a great pool of resources be coordinated and allocated efficiently? And how can both the infrastructure and the services provided in 6G be made scalable?
Our incarnation of 6G is called RadioWeaves (RW) [6]. Cell-free networking and embedded edge computing are central components in RW. Further, as a means to begin addressing the above challenges, in RW we group resources dynamically in both the temporal and spatial domains. We use the term federation(s) to denote such a group of resources that jointly serves an application. In distinction to current wireless networks, the envisioned RW system will also support precise positioning, wireless energy transfer, and energy-neutral IoT devices. Following this paradigm shift, a new term for the conventional AP is introduced: the contact service point (CSP). It provides a first contact point into the network from the perspective of the user equipment (UE) and supports more than just wireless communication, as mentioned above. To do this, CSPs can host a variety of hardware, such as radio, charging, processing, data storage, and other sensing elements.
The figure above illustrates the abstract notion of a massive number of spatially distributed CSPs, and their grouping in relation to users and applications, i.e., federations. Concretely, the figure shows an example deployment of RW in a smart factory, with four federations, shown in different colours. Before we proceed, we need to take note of the RW terminology used in the figure. RW CSPs are deployed throughout the production hall on the walls and ceiling and are dynamically assigned to federations to serve the devices and their running applications. The constellation of CSPs assigned to each federation is tailored to the particular application’s requirements. The video below goes into more detail about how we address this challenge.
Implementing RW is the natural next step in bringing the benefits of 6G to society. We are therefore implementing two test beds, one located at KU Leuven, Belgium, and the other at Lund University, Sweden. The testbed at KU Leuven, Techtile [8], was inaugurated in October 2021 and already has the physical infrastructure in place, but work is ongoing on the software to manage and run the testbed. Techtile consists of a 4-by-8-by-2-meter room built with 140 modular panels, each of which contains software-defined radios, edge computing units, and sensors. The testbed is specifically designed to study scalable and low-cost RW systems. It, therefore, uses Ethernet for power, data, and synchronisation. As a complement, the Lund University testbed2 is designed to assess high-throughput and latency-critical applications. That testbed consists of 16 panels, each with 4-by-4 RF chains, for a total of 256 antenna elements.
More information regarding the concept and terminology can be found here: G. Callebaut, W. Tärneberg, L. Van der Perre, and E. Fitzgerald, “Dynamic federations for 6G cell-free networking: Concepts and terminology,” in 2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC), 2022, pp. 1–5. doi: 10.1109/SPAWC51304.2022.9833918.
The authors would like to thank the REINDEER team, and especially Emma Fitzgerald, Erik G. Larsson, Pål Frenger, Ove Edfors and Liang Liu, for the rich discussions that have strengthened the definition of the new terminology and realisation of the test beds.
[1] EU H2020 REINDEER project, “REsilient INteractive applications through hyper Diversity in Energy Efficient RadioWeaves technology (REINDEER) project – Deliverable 1.1: Use case-driven specifications and technical requirements and initial channel model,” 2021. [Online]. Available: https://reindeer-project.eu/D1.1 (visited on 2021-07-26).
[2] Ericsson AB, “Joint communication and sensing in 6G networks,” https://www.ericsson.com/en/6g, 2021.
[3] S. Hu, F. Rusek, and O. Edfors, “Beyond massive MIMO: The potential of positioning with large intelligent surfaces,” IEEE Transactions on Signal Processing, 2018.
[4] G. Interdonato, E. Björnson, H. Q. Ngo, P. Frenger, and E. G. Larsson, “Ubiquitous cell-free massive MIMO communications,” EURASIP Journal on Wireless Communications and Networking, 2019.
[5] H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta, “Cell-Free Massive MIMO Versus Small Cells,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1834–1850, 2017. doi: 10.1109/TWC.2017.2655515.
[6] L. Van der Perre, E. G. Larsson, F. Tufvesson, L. De Strycker, E. Björnson, and O. Edfors, “RadioWeaves for efficient connectivity: analysis and impact of constraints in actual deployments,” in Proc. Asilomar Conference on Signals, Systems, and Computers (M. B. Matthews, Ed.), IEEE, 2019, pp. 15–22.
[7] B. J. B. Deutschmann, T. Wilding, E. G. Larsson, and K. Witrisal, “Location-based Initial Access for Wireless Power Transfer with Physically Large Arrays,” in WS08 IEEE ICC 2022 Workshop on Synergies of Communication, Localization, and Sensing towards 6G (ComLS-6G), Seoul, Korea (South), May 2022.
[8] G. Callebaut, J. Van Mulders, G. Ottoy, et al., “Techtile – Open 6G R&D Testbed for Communication, Positioning, Sensing, WPT and Federated Learning,” in 2022 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), Grenoble, France, Jun. 2022.
[9] G. Callebaut, W. Tärneberg, L. Van der Perre, and E. Fitzgerald, “Dynamic federations for 6G cell-free networking: Concepts and terminology,” in 2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC), 2022, pp. 1–5. doi: 10.1109/SPAWC51304.2022.9833918.
1 Livine Cleemput, maternal grandmother (‘mormor’) of L. Van der Perre; original term ‘Gesproken dagblad’ (‘spoken daily newspaper’). 2 www.eit.lth.se/index.php?gpuid=325&L=1
The project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101013425.