A Thought for 2021: Faster Science is Not Better Science

Scientific papers used to appear in journals once per month and in a few major conference proceedings per year. During the last decade, this has changed into a situation where new preprints appear on arXiv.org every day. The hope has been that this trend will lead to swifter scientific progress and a higher degree of openness towards the public. However, it is far from certain that either of these goals is being reached. Reputable academic journals accept only about ⅓ of the submitted papers, often after substantial revisions, so one can suspect that at least ⅔ of the preprints appearing on arXiv are of low quality. How can the general public be expected to tell the difference between a solid paper and a questionable one, when it is time-consuming even for researchers in the field to do so?

During the COVID-19 pandemic, the hope for scientific breakthroughs and guidance became so pressing that the media started to report results from preprints. For example, an early study from the Imperial College London indicated that a lockdown is by far the most effective measure to control the pandemic. This seemed to give scientific support for the draconian lockdown measures taken in many countries around the world, but the study has since been criticized for deriving its conclusion from the model assumptions rather than from the underlying data (see [1], [2], [3]). I cannot tell where the truth lies; the conclusion might eventually turn out to be valid even if the study was substandard. However, this is an example of why a faster scientific process does not necessarily lead to better scientific progress.

Publish or perish

After the big success of two survey papers that provided coherent 5G visions (What will 5G be? and Five disruptive technology directions for 5G), we have recently witnessed a rush to do the same for the next wireless generation. There are already 30+ papers competing to provide the “correct” 6G vision. Since 6G research has barely started, the level of speculation is substantially higher than in the two aforementioned 5G papers, which appeared after several years of exploratory 5G research. Hence, it is important that we do not treat any of these 6G papers as actual roadmaps to the next wireless generation, but merely as different researchers’ rough long-term visions.

The abundance of 6G vision papers is a symptom of a wider trend: academic researchers are chasing citations and measurable impact at a level never seen before. The possibility of instantly uploading an unpublished manuscript to arXiv encourages researchers to be the first to formulate a new research problem and scratch the surface of its solution, instead of patiently solving it first and then publishing the results. The recent Best Readings in Reconfigurable Intelligent Surfaces page is an example of this issue: a large fraction of the listed papers are actually unpublished preprints! Since even published papers in this emerging research area contain basic misconceptions, I think we need to calm down and let papers go through review before treating them as recommended readings.

The more papers a researcher produces per year, the more the citation numbers can be inflated. This is an unfortunate consequence of the “publish-or-perish” pressure that universities and funding agencies put on academic researchers. As long as quantitative bibliometrics (such as the number of papers and citations) are used to determine who will be tenured and who will receive research grants, we will encourage people to act in scientifically undesirable ways: write surveys rather than new technical contributions, ask questions rather than answering them, and make many minor variations of previous works instead of a few major contributions. The 2020s will hopefully be the decade where we find better metrics of scientific excellence that encourage a sounder scientific methodology.

Open science can be a solution

The open science movement is pushing for open access to publications, data, simulation code, and research methodology. New steps in this direction are taken every year, with the pressure coming from funding agencies that gradually require a higher degree of openness. The current situation is very different from ten years ago, when I started my journey towards publishing simulation code and data along with my papers. Unfortunately, open science is currently being implemented in a way that risks degrading the scientific quality. For example, it is great that IEEE is creating new open journals and encouraging the publication of data and code, but it is bad that the rigor of the peer-review process is challenged by the shorter deadlines in these new journals. One cannot expect researchers, young or experienced, to carefully review a full-length paper in only one or two weeks, as these new journals require. I am not saying that all papers published in those journals are bad (I have published several myself), but the quality variations might be almost as large as for unpublished preprints.

During the 2020s, we need to find ways to promote research that is deep and insightful rather than quick and dirty. We need to encourage researchers to take on the important but complicated scientific problems in communication engineering instead of analytically “tractable” problems of little practical relevance. The traditional review process is far from perfect, but it has served us well in upholding the quality of analytical results in published papers. The quality of numerical and experimental results is harder for a referee to validate. Making code and data openly available is not the solution to this problem, but it is a necessary tool in any such solution. We need to encourage researchers to scrutinize and improve each other’s code, as well as to base new research on existing data.

The machine-learning community is leading the way towards a more rigorous treatment of numerical experiments by establishing large common datasets that are used to quantitatively evaluate new algorithms against previous ones. A trend towards similar practices can already be seen in communication engineering, where qualitative comparisons with unknown generalizability have been the norm in the past. Another positive trend is that peer-reviewers have started to be cautious when reference lists contain a large number of preprints, because it is by repeating claims from unpublished papers that myths are occasionally created and spread without control.

I have four New Year’s wishes for 2021:

  1. The research community focuses on basic research, such as developing and certifying the fundamental models, assumptions, and algorithms related to the different pieces of the existing 6G visions.
  2. It becomes the norm that code and data are published alongside articles in all reputable scientific journals, or at least submitted for review.
  3. Preprints are not treated as publications and are only cited in the exceptional case when the papers are closely related.
  4. We get past the COVID-19 pandemic and get to meet at physical conferences again.

Episode 6: Q/A on Massive MIMO

We would like to thank our podcast listeners for all the questions that they have asked on social media. We decided to categorize the questions and answer those related to Massive MIMO in the sixth episode of the podcast Wireless Future. There will be further Q/A episodes next year. The new episode has the following abstract:

In this New Year’s special, Erik G. Larsson and Emil Björnson answer questions from the listeners on the topic of Massive MIMO. Some examples are: How are the antennas calibrated? Will digital beamforming replace analog beamforming? What is channel hardening and how is it related to power control? Can Massive MIMO interact with drones? Practical issues such as the peak-to-average-power ratio (PAPR) and effective isotropic radiated power (EIRP) are also discussed.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Real-time Reconfigurable Metasurfaces

I have written several blog posts about how the hardware concepts of reconfigurable reflectarrays and metasurfaces are gaining interest in the wireless communication community, for example, to create a new type of full-duplex transparent relay. The technology is also known as reconfigurable intelligent surfaces and intelligent reflecting surfaces.

In my latest magazine paper, I identified real-time reconfigurability as the key technical research challenge: we need fast algorithms for over-the-air channel estimation that can handle large surfaces and complex propagation environments. In other words, we need hardware that can be reconfigured and algorithms to find the right configuration.
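To make the second part concrete, here is a minimal Python sketch of the configuration step (my own toy example, not code from the paper): once the cascaded channel through each element has been estimated over the air, the SNR-maximizing phase shifts simply co-phase every reflected path with the direct path. The element count and channel values below are made-up assumptions; the hard part that the paper focuses on is obtaining such estimates quickly for thousands of elements.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256  # assumed number of reconfigurable elements

# Toy channel coefficients, standing in for estimates obtained over the air:
h_d = 0.01 * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # direct TX-RX path
h = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # TX -> element n
g = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # element n -> RX

# SNR-maximizing configuration: rotate every cascaded path h_n * g_n so that it adds
# constructively (in phase) with the direct path h_d at the receiver.
theta = np.angle(h_d) - np.angle(h * g)

h_eff_opt = h_d + np.sum(h * g * np.exp(1j * theta))                          # optimized surface
h_eff_rnd = h_d + np.sum(h * g * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))   # random configuration

print("SNR gain over a random configuration: %.1f dB"
      % (20 * np.log10(np.abs(h_eff_opt) / np.abs(h_eff_rnd))))
```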

The literature contains several theoretical algorithms, but it is a very different thing to demonstrate real-time reconfigurability in lab experiments. I was therefore impressed when I found the following video from the team of Dr. Mohsen Khalily at the University of Surrey:

The video shows how a metasurface is used to reflect a signal from a transmitter to a receiver. In the second half of the video, they move the receiver out of the reflected beam from the metasurface and then press a button to reconfigure the surface to change the direction of the beam.

I asked Dr. Khalily to tell me more about the setup:

“The metasurface consists of several conductive printed patches (scatterers), and the size of each scatterer is a small proportion of the wavelength of the operating frequency. The macroscopic effect of these scatterers defines a specific surface impedance and by controlling this surface impedance, the reflected wave from the metasurface sheet can be manipulated. Each individual scatterer or a cluster of them can be tuned in such a way that the whole surface can reconstruct radio waves with desired characteristics without emitting any additional waves.”

The surface shown in the video contains 2490 patches that are printed on a copper ground plane. The patches are made of a new micro-dispersed ceramic PTFE composite and designed to support a wide range of phase variations along with a low reflection loss for signals in the 3.5 GHz band. The design of the surface was the main challenge according to Dr. Khalily:

“Fabrication was very difficult due to the size of the surface, so we had to divide the surface into six tiles then attach them together. Our surface material has a higher dielectric constant than the traditional PTFE copper-clad laminates to meet the design and manufacturing of circuit miniaturization. This material also possesses high thermal conductivity, which gives an added advantage for heat dissipation of the apparatus.”

The transmitter and receiver were in the far-field of the metasurface in the considered experimental setup. Since there is an unobstructed line-of-sight path, it was sufficient to estimate the angular difference between the receiver and the main reflection angle, and then adjust the surface impedance to compensate for the difference. When this was properly done, the metasurface improved the signal-to-noise ratio (SNR) by almost 15 dB. I cannot judge how close this number is to the theoretical maximum. In the considered in-room setup with highly directional horn antennas at the transmitter and receiver, it might be enough that the reflected beam points in roughly the right direction to achieve a great SNR gain. I’m looking forward to learning more about this experiment when there is a technical paper that describes it.
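As a back-of-the-envelope illustration of what such a far-field reconfiguration involves (my own sketch, not based on implementation details from the Surrey experiment), the generalized law of reflection says that steering the reflected beam away from the specular direction requires a linear phase gradient across the surface. The incidence and reflection angles, element spacing, and row length below are arbitrary assumptions chosen only for the example.

```python
import numpy as np

c = 3e8                     # speed of light [m/s]
f = 3.5e9                   # the experiment operates in the 3.5 GHz band
lam = c / f                 # wavelength, roughly 8.6 cm
d = lam / 2                 # assumed element spacing along the steering axis
theta_i = np.deg2rad(30)    # assumed angle of incidence
theta_r = np.deg2rad(50)    # desired reflection angle, towards the moved receiver

# Generalized law of reflection: sin(theta_r) - sin(theta_i) = (lam / (2*pi)) * dPhi/dx,
# i.e., steering away from the specular direction requires a linear phase gradient.
dphi = 2 * np.pi * (d / lam) * (np.sin(theta_r) - np.sin(theta_i))   # increment per element

n = np.arange(64)                              # element indices along one row of the surface
phase_profile = np.mod(-n * dphi, 2 * np.pi)   # wrapped phase per element (sign convention varies)

print("Phase increment between neighboring elements: %.1f degrees" % np.rad2deg(dphi))
```

In the actual experiment, this effect is obtained by tuning the surface impedance of the sub-wavelength scatterers, as described in the quote above, rather than by programming explicit per-element phases.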

This is not the first experiment of this kind, but I think it constitutes the state-of-the-art when it comes to bringing the concept of reconfigurable intelligent surfaces from theory to practice.

Episode 5: Millimeter Wave Communication

We have now released the fifth episode of the podcast Wireless Future, with the following abstract:

What happened to millimeter wave communications? It is often described as synonymous with 5G, but barely any of the brand new 5G networks make use of it. In this episode, Erik G. Larsson and Emil Björnson discuss the basic properties of millimeter waves, whether it is the long-sought “holy grail” in wireless communications, and where the technology stands today. To learn more, they recommend the articles “Antenna Count for Massive MIMO: 1.9 GHz versus 60 GHz” and “Massive MIMO in Sub-6 GHz and mmWave: Physical, Practical, and Use-Case Differences”.

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:

Who is Who in Massive MIMO?

I taught a course on complex networks this fall, and one component of the course was a hands-on session where the students used the SNAP C++ and Python libraries for graph analysis, and Gephi for visualization. One available dataset is DBLP, a large publication database in computer science that actually includes a lot of electrical engineering as well.

In a small experiment, I filtered DBLP for papers with both “massive” and “MIMO” in the title and analyzed the resulting co-author graph. There are 17200 papers and some 6000 authors. There is one large connected component, along with over 400 additional, much smaller connected components!

Then I looked more closely at authors who have written at least 20 such papers. Each node is an author, its size is proportional to his/her number of “massive MIMO” papers, and its color represents the community the author was assigned to. Edge thickness represents the number of co-authored papers. Some long-standing collaborators, former students, and other friends stand out. (Click on the figure to enlarge it.)

To remind readers of the obvious, prolificacy is not the same as impact, even though the two are often correlated. Also, the study is not entirely rigorous. For one thing, it trusts that DBLP properly distinguishes authors with the same name (consider, e.g., “Li Li”), and I do not know how well it really does that. Second, in a random inspection, all the papers that passed my filter dealt with “massive MIMO” as we know it; theoretically, however, the search criterion could also catch papers on, say, MIMO control theory for a massive power plant. The filtering also misses some papers written before the “massive MIMO” term was established, perhaps most importantly Thomas Marzetta’s seminal paper on “unlimited antennas”. Third, the analysis is limited to publications covered by DBLP, which conversely means that there is no specific quality threshold for the publication venues. If you are interested in learning more, drop me a note.
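For anyone who wants to reproduce a similar analysis, here is a rough Python sketch of the workflow. It uses networkx instead of the SNAP/Gephi toolchain I actually used, and the input file name and JSON fields are hypothetical placeholders for a pre-parsed DBLP export.

```python
import json
from itertools import combinations

import networkx as nx

G = nx.Graph()
papers_per_author = {}

# Hypothetical input: DBLP records pre-parsed into JSON lines with "title" and "authors" fields
with open("dblp_records.jsonl") as f:
    for line in f:
        record = json.loads(line)
        title = record["title"].lower()
        if "massive" not in title or "mimo" not in title:
            continue                                   # the same keyword filter as in the post
        authors = record["authors"]
        G.add_nodes_from(authors)
        for a in authors:
            papers_per_author[a] = papers_per_author.get(a, 0) + 1
        for a, b in combinations(authors, 2):          # co-authorship edges, weighted by paper count
            weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=weight + 1)

components = sorted(nx.connected_components(G), key=len, reverse=True)
prolific = [a for a, n in papers_per_author.items() if n >= 20]
print(f"{len(G)} authors, {len(components)} connected components, {len(prolific)} authors with 20+ papers")
```

Community detection and the visualization shown in the figure were done in Gephi and are not reproduced here.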

Globecom Tutorial on Cell-free Massive MIMO

I am giving a tutorial on “Beyond Massive MIMO: User-Centric Cell-Free Massive MIMO” at Globecom 2020, together with my colleagues Luca Sanguinetti and Özlem Tuğfe Demir. It is a prerecorded 3-hour tutorial that can be viewed online at any time during the conference and there will be a live Q/A session on December 11 where we are available for questions.

The tutorial is based on our upcoming book on the topic: Foundations of User-Centric Cell-Free Massive MIMO.

Until December 11 (the last day of the tutorial), we are offering a free preprint of the book, which can be downloaded by creating an account on the NOW Publishers website. By doing so, I think you will also be notified when the final version of the book becomes available early next year, giving you access to the final PDF and an offer to buy printed copies.

If you download the book and have any feedback that we can take into account when preparing the final version, we would highly appreciate receiving it! Please email me your feedback by December 15. You will find the address in the PDF.

The abstract of the tutorial is as follows:

Massive MIMO (multiple-input multiple-output) is no longer just a promising concept for cellular networks: in 2019, it became a 5G reality, with 64-antenna fully digital base stations being commercially deployed in many countries. However, this is not the final destination in a world where ubiquitous wireless access is demanded by an increasing population. It is therefore time for MIMO and mmWave communication researchers to consider new multi-antenna technologies that might lay the foundations for beyond-5G networks. In particular, we need to focus on improving the uniformity of the service quality.

Suppose all the base station antennas are distributed over the coverage area instead of being co-located in arrays at a few elevated locations, so that the mobile terminals are surrounded by antennas instead of a few base stations being surrounded by mobile terminals. How can we operate such a network? The ideal solution is to let each mobile terminal be served by coherent joint transmission and reception from all the antennas that can make a non-negligible impact on its performance. That effectively leads to a user-centric post-cellular network architecture, called “User-Centric Cell-Free Massive MIMO”. Recent papers have developed innovative signal processing and radio resource allocation algorithms to make this new technology possible, and the industry has taken steps towards implementation. Substantial performance gains compared to small-cell networks (where each distributed antenna operates autonomously) and cellular Massive MIMO have been demonstrated in numerical studies, particularly when it comes to the uniformity of the achievable data rates over the coverage area.
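As a teaser of the kind of comparison the tutorial covers, here is a toy Python sketch (my own illustration, not code from the book or tutorial) that contrasts user-centric coherent joint transmission with a small-cell baseline where each user is only served by its single best antenna. The pathloss model, area size, and cluster size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
L, K, cluster_size = 100, 10, 5   # distributed antennas (APs), users, APs per user-centric cluster

# Toy geometry: random AP and user positions in a 1 km x 1 km area
ap_pos = rng.uniform(0, 1000, size=(L, 2))
ue_pos = rng.uniform(0, 1000, size=(K, 2))
dist = np.linalg.norm(ap_pos[:, None, :] - ue_pos[None, :, :], axis=2)   # L x K distances
beta = 1e-3 * dist ** (-3.7)      # assumed pathloss: -30 dB at 1 m, exponent 3.7

# Rayleigh fading channels between every AP and user (noise power normalized to one)
h = np.sqrt(beta / 2) * (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))

# User-centric clustering: each user is served by the APs with the strongest channels to it
serving = {k: np.argsort(beta[:, k])[-cluster_size:] for k in range(K)}

# Coherent joint transmission with maximum ratio precoding and unit power per AP:
# the serving APs' signals add up in amplitude at the user. (Note: the cluster then
# spends cluster_size times more total power than the single-AP baseline below.)
snr_cellfree = np.array([np.abs(h[serving[k], k]).sum() ** 2 for k in range(K)])

# Small-cell baseline: each user is served only by its single best AP
snr_smallcell = np.array([np.abs(h[:, k]).max() ** 2 for k in range(K)])

print("Worst-user SNR gain from user-centric joint transmission: %.1f dB"
      % (10 * np.log10(snr_cellfree.min() / snr_smallcell.min())))
```

Even in this crude model, the worst-situated user benefits from being served by several surrounding antennas, which is the uniformity argument made in the abstract.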

Episode 4: Is Wireless Technology Secure?

We have now released the fourth episode of the podcast Wireless Future, with the following abstract:

We are creating a society that is increasingly reliant on access to wireless connectivity. In Sweden, you can barely pay for parking without a mobile phone. Will this wireless future have a negative impact on the security of our data and privacy? In this episode, Emil Björnson and Erik G. Larsson discuss security threats to wireless technology, including eavesdropping, jamming, and spoofing. What impact can these illegitimate techniques have on our lives and what do we need to be aware of?

You can watch the video podcast on YouTube:

You can listen to the audio-only podcast at the following places:
