We have now released the tenth episode of the podcast Wireless Future, with the following abstract:
5G promises peak data speeds above 1 gigabit per second. Looking further into the future, will wireless technology eventually deliver 1 terabit per second? How can the technology be evolved to reach that goal, and what would the potential use cases be? In this episode, Erik G. Larsson and Emil Björnson provide answers to these questions and discuss the practical challenges that must be overcome at the hardware level and in wireless propagation. To learn more, they recommend the article “Scoring the Terabit/s Goal: Broadband Connectivity in 6G”.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
We have now released the ninth episode of the podcast Wireless Future, with the following abstract:
In this episode, Emil Björnson and Erik G. Larsson answer questions from the listeners on the topic of reconfigurable intelligent surfaces. Some examples are: What kind of materials are used? When can the technology beat traditional relays? How quickly can one change the surface’s configuration? Are there any real-time experiments? How can the research community avoid misconceptions spreading around new technologies?
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
We have now released the eighth episode of the podcast Wireless Future, with the following abstract:
The new 5G millimeter wave systems make use of classical analog beamforming technology. It is often claimed that digital beamforming cannot be used in these bands due to its high energy consumption. In this episode, Erik G. Larsson and Emil Björnson are visited by Bengt Lindoff, Chief Systems Architect at the startup BeammWave. The conversation covers how fully digital beamforming solutions are now being made truly competitive and what this means for the future of wireless communications. To learn more about BeammWave’s hardware architecture visit https://www.beammwave.com/whitepapers.
You can watch the video podcast on YouTube:
You can listen to the audio-only podcast at the following places:
A new EU-funded 6G initiative, the REINDEER project, brings together academia and industry to develop and build a new type of multi-antenna-based smart connectivity platform that will be integral to future 6G systems. From Ericsson’s new site:
The project’s name is derived from REsilient INteractive applications through hyper Diversity in Energy-Efficient RadioWeaves technology, and the development of “RadioWeaves” technology will be a key deliverable of the project. This new wireless access infrastructure consists of a fabric of distributed radio, computing, and storage resources. It will advance the ideas of large-scale intelligent surfaces and cell-free wireless access to offer capabilities far beyond future 5G networks. The technology is expected to offer quasi-infinitely scalable capacity, perceived zero latency, and interaction with a large number of embedded devices.
Scientific papers used to appear in journals once per month and in a few major conference proceedings per year. During the last decade, this has changed into a situation where new preprints appear on arXiv.org every day. The hope has been that this trend will lead to swifter scientific progress and a higher degree of openness towards the public. However, it is not certain that either of these goals is being reached. The reputable academic journals accept only about one-third of the submitted papers, often after substantial revisions, so one can suspect that at least two-thirds of the preprints appearing on arXiv are of low quality. How can the general public tell the difference between a solid and a questionable paper when it is time-consuming even for researchers in the field to do so?
During the COVID-19 pandemic, the hope for scientific breakthroughs and guidance became so pressing that the media started to report results from preprints. For example, an early study from Imperial College London indicated that a lockdown is by far the most effective measure to control the pandemic. This seemed to give scientific support for the draconian lockdown measures taken in many countries around the world, but the study has since been criticized for obtaining its conclusion from the model assumptions rather than the underlying data (see [1], [2], [3]). I cannot tell where the truth lies; the conclusion might eventually turn out to be valid even if the study was substandard. However, this is an example of why a faster scientific process does not necessarily lead to better scientific progress.
Publish or perish
After the big success of two survey papers that provided coherent 5G visions (What will 5G be? and Five disruptive technology directions for 5G), we have recently witnessed a rush to do the same for the next wireless generation. There are already 30+ papers competing to provide the “correct” 6G vision. Since 6G research has barely started, the level of speculation is substantially higher than in the two aforementioned 5G papers, which appeared after several years of exploratory 5G research. Hence, we should not treat any of these 6G papers as actual roadmaps to the next wireless generation, but only as different researchers’ rough long-term visions.
The abundance of 6G vision papers is a symptom of a wider trend: academic researchers are chasing citations and measurable impact at a level never seen before. The possibility of instantly uploading an unpublished manuscript to arXiv encourages researchers to be the first to formulate a new research problem and scratch the surface of its solution, instead of patiently solving it first and then publishing the results. The recent Best Readings in Reconfigurable Intelligent Surfaces page is an example of this issue: a large fraction of the listed papers are actually unpublished preprints! Since even published papers in this emerging research area contain basic misconceptions, I think we need to calm down and let papers go through review before treating them as recommended readings.
The more papers a researcher produces per year, the more the citation numbers can be inflated. This is an unfortunate consequence of the “publish-or-perish” pressure that universities and funding agencies are putting on academic researchers. As long as quantitative bibliometrics (such as the number of papers and citations) are used to determine who will be tenured and who will receive research grants, we will encourage people to act in a scientifically undesirable way: write surveys rather than new technical contributions, ask questions rather than answering them, and make many minor variations of previous works instead of a few major contributions. The 2020s will hopefully be the decade where we find better metrics of scientific excellence that encourage a sounder scientific methodology.
Open science can be a solution
The open science movement is pushing for open access to publications, data, simulation code, and research methodology. New steps in this direction are taken every year, with the pressure coming from funding agencies that gradually require a higher degree of openness. The current situation is very different from ten years ago, when I started my journey towards publishing simulation code and data along with my papers. Unfortunately, open science is currently being implemented in a way that risks degrading the scientific quality. For example, it is great that IEEE is creating new open journals and encouraging publication of data and code, but it is bad that the rigor of the peer-review process is challenged by the shorter deadlines in these new journals. One cannot expect researchers, young or experienced, to carefully review a full-length paper in only one or two weeks, as these new journals require. I am not saying that all papers published in those journals are bad (I have published several myself), but the quality variations might be almost as large as for unpublished preprints.
During the 2020s, we need to find ways to promote research that is deep and insightful rather than quick and dirty. We need to encourage researchers to take on the important but complicated scientific problems in communication engineering instead of analytically “tractable” problems of little practical relevance. The traditional review process is far from perfect, but it has served us well in upholding the quality of analytical results in published papers. The quality of numerical and experimental results is harder for a referee to validate. Making code and data openly available is not the solution to this problem, but it is a necessary tool in any such solution. We need to encourage researchers to scrutinize and improve each other’s code, as well as to base new research on existing data.
The machine-learning community is leading the way towards a more rigorous treatment of numerical experiments by establishing large common datasets that are used to quantitatively evaluate new algorithms against previous ones. A trend towards similar practices can already be seen in communication engineering, where qualitative comparisons with unknown generalizability have been the norm in the past. Another positive trend is that peer reviewers have started to be cautious when reference lists contain a large number of preprints, because repeating claims from unpublished papers is how myths are occasionally created and spread without control.
I have four New Year’s wishes for 2021:
The research community focuses on basic research, such as developing and certifying fundamental models, assumptions, and algorithms related to the different pieces in the existing 6G visions.
It becomes the norm that code and data are published alongside articles in all reputable scientific journals, or at least submitted for review.
Preprints are not treated as publications and are only cited in the exceptional case when the papers are closely related.
We get past the COVID-19 pandemic and get to meet at physical conferences again.
I have written several blog posts about how the hardware concepts of reconfigurable reflectarrays and metasurfaces are gaining interest in the wireless communication community, for example, to create a new type of full-duplex transparent relays. The technology is also known as reconfigurable intelligent surfaces and intelligent reflecting surfaces.
In my latest magazine paper, I identified real-time reconfigurability as the key technical research challenge: we need fast algorithms for over-the-air channel estimation that can handle large surfaces and complex propagation environments. In other words, we need hardware that can be reconfigured and algorithms to find the right configuration.
The literature contains several theoretical algorithms but it is a very different thing to demonstrate real-time reconfigurability in lab experiments. I was therefore impressed when finding the following video from the team of Dr. Mohsen Khalily at the University of Surrey:
The video shows how a metasurface is used to reflect a signal from a transmitter to a receiver. In the second half of the video, they move the receiver out of the reflected beam from the metasurface and then press a button to reconfigure the surface to change the direction of the beam.
I asked Dr. Khalily to tell me more about the setup:
“The metasurface consists of several conductive printed patches (scatterers), and the size of each scatterer is a small proportion of the wavelength of the operating frequency. The macroscopic effect of these scatterers defines a specific surface impedance and by controlling this surface impedance, the reflected wave from the metasurface sheet can be manipulated. Each individual scatterer or a cluster of them can be tuned in such a way that the whole surface can reconstruct radio waves with desired characteristics without emitting any additional waves.”
The surface shown in the video contains 2490 patches that are printed on a copper ground plane. The patches are made of a new micro-dispersed ceramic PTFE composite and designed to support a wide range of phase variations along with a low reflection loss for signals in the 3.5 GHz band. The design of the surface was the main challenge according to Dr. Khalily:
“Fabrication was very difficult due to the size of the surface, so we had to divide the surface into six tiles then attach them together. Our surface material has a higher dielectric constant than the traditional PTFE copper-clad laminates to meet the design and manufacturing of circuit miniaturization. This material also possesses high thermal conductivity, which gives an added advantage for heat dissipation of the apparatus.”
The transmitter and receiver were in the far-field of the metasurface in the considered experimental setup. Since there is an unobstructed line-of-sight path, it was sufficient to estimate the angular difference between the receiver and the main reflection angle, and then adjust the surface impedance to compensate for the difference. When this was properly done, the metasurface improved the signal-to-noise ratio (SNR) by almost 15 dB. I cannot judge how close this number is to the theoretical maximum. In the considered in-room setup with highly directional horn antennas at the transmitter and receiver, it might be enough that the reflected beam points in roughly the right direction to achieve a great SNR gain. I’m looking forward to learning more about this experiment when there is a technical paper that describes it.
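The steering principle described above can be sketched numerically. The following is a minimal illustration, not a model of the Surrey setup: all parameters except the 3.5 GHz band are invented, and I assume a simple 1-D reflectarray with a linear phase gradient given by the generalized law of reflection.

```python
import numpy as np

# Illustrative parameters (not from the Surrey experiment).
f = 3.5e9                     # carrier frequency [Hz]
lam = 3e8 / f                 # wavelength [m]
k = 2 * np.pi / lam           # wavenumber
N = 64                        # elements in a 1-D cut of the surface
x = np.arange(N) * lam / 2    # element positions, half-wavelength spacing

theta_i = np.deg2rad(0)       # incident angle, measured from the surface normal
theta_r = np.deg2rad(30)      # desired reflection angle

# Generalized law of reflection: a linear phase gradient
# dphi/dx = k*(sin(theta_r) - sin(theta_i)) moves the reflected beam
# from the specular direction to theta_r.
phase = k * x * (np.sin(theta_r) - np.sin(theta_i))

# Quantize to 2-bit phase control, as in many practical reconfigurable surfaces.
step = 2 * np.pi / 4
phase_q = np.round(phase / step) * step

# Reflected array factor versus observation angle; the exponent vanishes
# for all elements at theta = theta_r, so the sum peaks there.
theta = np.deg2rad(np.linspace(-90, 90, 1801))
af = np.abs(np.exp(1j * (phase_q[None, :]
                         - k * x[None, :] * (np.sin(theta)[:, None]
                                             - np.sin(theta_i)))).sum(axis=1))
peak_deg = np.rad2deg(theta[np.argmax(af)])
print(f"Beam steered to {peak_deg:.1f} degrees")
```

With these particular numbers, the ideal per-element phase increment happens to coincide with the 2-bit grid, so the quantization costs nothing; in general, coarse phase control reduces the achievable SNR gain somewhat.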
This is not the first experiment of this kind, but I think it constitutes the state-of-the-art when it comes to bringing the concept of reconfigurable intelligent surfaces from theory to practice.
I taught a course on complex networks this fall, and one component of the course is a hands-on session where students use the SNAP C++ and Python libraries for graph analysis, and Gephi for visualization. One available dataset is DBLP, a large publication database in computer science that actually covers a lot of electrical engineering as well.
In a small experiment, I filtered DBLP for papers with both “massive” and “MIMO” in the title, and analyzed the resulting co-author graph. There are 17200 papers and some 6000 authors. There is one large connected component, plus over 400 much smaller connected components!
Then I looked more closely at authors who have written at least 20 papers. Each node is an author, its size is proportional to his/her number of “massive MIMO papers”, and its color represents identified communities. Edge thicknesses represent the number of co-authored papers. Some long-standing collaborators, former students, and other friends stand out. (Click on the figure to enlarge it.)
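The pipeline behind such a figure can be sketched in a few lines of plain Python. This is not the script I used; the toy records below are invented, and the real analysis would parse the full DBLP dump and feed the graph to SNAP or Gephi.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Toy stand-in for DBLP records as (title, author list) pairs.
# All names here are hypothetical.
papers = [
    ("Massive MIMO for 5G",               ["A. Author", "B. Author"]),
    ("Energy Efficiency in Massive MIMO", ["B. Author", "C. Author"]),
    ("Deep Learning for Beamforming",     ["D. Author"]),
    ("Cell-Free Massive MIMO",            ["E. Author"]),
]

# Step 1: keep papers with both "massive" and "MIMO" in the title.
selected = [p for p in papers
            if "massive" in p[0].lower() and "mimo" in p[0].lower()]

# Step 2: papers per author (node size) and co-authored papers
# per author pair (edge thickness).
paper_count = Counter(a for _, authors in selected for a in authors)
edge_weight = Counter()
for _, authors in selected:
    for u, v in combinations(sorted(authors), 2):
        edge_weight[(u, v)] += 1

# Step 3: connected components of the co-author graph via depth-first search.
adj = defaultdict(set)
for u, v in edge_weight:
    adj[u].add(v)
    adj[v].add(u)

def components(nodes, adj):
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            m = stack.pop()
            if m not in comp:
                comp.add(m)
                seen.add(m)
                stack.extend(adj[m] - comp)
        comps.append(comp)
    return comps

comps = components(list(paper_count), adj)
print(len(comps), "components; largest has", max(len(c) for c in comps), "authors")
```

In the toy data, the filter keeps three papers, the co-author graph splits into one three-author component and one isolated author, which mirrors in miniature the one-large-plus-many-small component structure seen in the real data.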
To remind readers of the obvious, prolificacy is not the same as impact, even though they are often correlated. Also, the study is not entirely rigorous. First, it trusts that DBLP properly distinguishes authors with the same name (consider, e.g., “Li Li”), and I do not know how well it really does that. Second, in a random inspection, all papers that passed the filter dealt with “massive MIMO” as we know it; however, in theory the search criterion would also catch papers on, say, MIMO control theory for a massive power plant. The filter also misses some papers written before the “massive MIMO” term was established, perhaps most importantly Thomas Marzetta’s seminal paper on “unlimited antennas”. Third, the analysis is limited to publications covered by DBLP, which conversely means that there is no specific quality threshold for the publication venues. If you are interested in learning more, drop me a note.