A Thought for 2021: Faster Science is Not Better Science

Scientific papers used to appear in monthly journal issues and in a few major conference proceedings per year. During the last decade, this has changed into a situation where new preprints appear on arXiv.org every day. The hope has been that this trend will lead to swifter scientific progress and a higher degree of openness towards the public. However, neither of these goals is necessarily being reached. Reputable academic journals accept only about ⅓ of the submitted papers, often after substantial revisions, so one can suspect that at least ⅔ of the preprints appearing on arXiv are of low quality. How can the general public tell the difference between a solid paper and a questionable one, when it is time-consuming even for researchers in the field to do so?

During the COVID-19 pandemic, the hope for scientific breakthroughs and guidance became so pressing that the media started to report results from preprints. For example, an early study from Imperial College London indicated that a lockdown is by far the most effective measure to control the pandemic. This seemed to give scientific support for the draconian lockdown measures taken in many countries around the world, but the study was later criticized for obtaining its conclusion from the model assumptions rather than the underlying data (see [1], [2], [3]). I cannot tell where the truth lies; the conclusion might eventually turn out to be valid even if the study was substandard. However, this is an example of why a faster scientific process does not necessarily lead to better scientific progress.

Publish or perish

After the big success of two survey papers that provided coherent 5G visions (“What will 5G be?” and “Five disruptive technology directions for 5G”), we have recently witnessed a rush to do the same for the next wireless generation. There are already 30+ papers competing to provide the “correct” 6G vision. Since 6G research has barely started, the level of speculation is substantially higher than in the two aforementioned 5G papers, which appeared after several years of exploratory 5G research. Hence, it is important that we do not treat any of these 6G papers as actual roadmaps to the next wireless generation, but merely as different researchers’ rough long-term visions.

The abundance of 6G vision papers is a symptom of a wider trend: academic researchers are chasing citations and measurable impact at a level never seen before. The possibility of instantly uploading an unpublished manuscript to arXiv encourages researchers to be the first to formulate a new research problem and scratch the surface of its solution, instead of patiently solving it first and then publishing the results. The recent Best Readings in Reconfigurable Intelligent Surfaces page is an example of this issue: a large fraction of the listed papers are actually unpublished preprints! Since even published papers in this emerging research area contain basic misconceptions, I think we need to calm down and let papers go through review before treating them as recommended readings.

The more papers a researcher produces per year, the more the citation numbers can be inflated. This is an unfortunate consequence of the “publish-or-perish” pressure that universities and funding agencies put on academic researchers. As long as quantitative bibliometrics (such as the number of papers and citations) are used to determine who will be tenured and who will receive research grants, we will encourage people to act in scientifically undesirable ways: writing surveys rather than new technical contributions, asking questions rather than answering them, and making many minor variations of previous works instead of a few major contributions. The 2020s will hopefully be the decade where we find better metrics of scientific excellence that encourage a sounder scientific methodology.
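
As a side note, here is a minimal sketch in Python (with hypothetical citation counts, not real data) of why a quantity-driven metric such as the h-index can reward many minor papers over a few major contributions:

```python
# Minimal sketch: h-index computed for two hypothetical researchers.
# The citation counts below are invented purely for illustration.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

many_minor_papers = [15] * 20      # 20 incremental papers, 300 citations in total
few_major_papers = [300, 150, 80]  # 3 major papers, 530 citations in total

print(h_index(many_minor_papers))  # 15
print(h_index(few_major_papers))   # 3
```

Despite having fewer citations in total, the prolific researcher ends up with a five-times-higher h-index in this toy example, which is exactly the incentive problem described above.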

Open science can be a solution

The open science movement is pushing for open access to publications, data, simulation code, and research methodology. New steps in this direction are taken every year, with the pressure coming from funding agencies that gradually require a higher degree of openness. The current situation is very different from ten years ago, when I started my journey towards publishing simulation code and data along with my papers. Unfortunately, open science is currently being implemented in a way that risks degrading the scientific quality. For example, it is great that IEEE is creating new open journals and encouraging the publication of data and code, but it is bad that the rigor of the peer-review process is challenged by the shorter deadlines in these new journals. One cannot expect researchers, young or experienced, to carefully review a full-length paper in only one or two weeks, as these new journals require. I am not saying that all papers published in those journals are bad (I have published several papers in them myself), but the quality variations might be almost as large as for unpublished preprints.

During the 2020s, we need to find ways to promote research that is deep and insightful rather than quick and dirty. We need to encourage researchers to take on the important but complicated scientific problems in communication engineering instead of analytically “tractable” problems of little practical relevance. The traditional review process is far from perfect, but it has served us well in upholding the quality of analytical results in published papers. The quality of numerical and experimental results is harder for a referee to validate. Making code and data openly available is not the solution to this problem, but it is a necessary tool in any such solution. We need to encourage researchers to scrutinize and improve each other’s code, as well as to base new research on existing data.

The machine-learning community is leading the way towards a more rigorous treatment of numerical experiments by establishing large common datasets that are used to quantitatively evaluate new algorithms against previous ones. A trend towards similar practices can already be seen in communication engineering, where qualitative comparisons with unknown generalizability have been the norm in the past. Another positive trend is that peer reviewers have started to be cautious when reference lists contain a large number of preprints, because it is by repeating claims from unpublished papers that myths are occasionally created and spread without control.
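
To make the contrast concrete, here is a minimal sketch in Python of the quantitative-benchmarking practice described above, using synthetic stand-in data and two illustrative estimators (neither is taken from any specific paper): both methods are scored on the same dataset with the same metric, so the comparison is reproducible rather than anecdotal.

```python
# Minimal benchmarking sketch: two channel estimators evaluated on one
# common (here: synthetic) dataset with one quantitative metric (NMSE).
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed, so anyone can reproduce the numbers

# Stand-in for a shared benchmark dataset: true channels and noisy observations.
h_true = rng.standard_normal((1000, 64))
y_obs = h_true + 0.3 * rng.standard_normal((1000, 64))

def nmse(h_hat, h):
    """Normalized mean-squared error over the whole dataset."""
    return np.sum((h_hat - h) ** 2) / np.sum(h ** 2)

# Baseline: use the noisy observation directly as the estimate.
baseline = y_obs
# "New" method: linear shrinkage towards zero (illustrative only).
shrinkage = y_obs / (1.0 + 0.3 ** 2)

print(f"Baseline NMSE:  {nmse(baseline, h_true):.3f}")   # about 0.090
print(f"Shrinkage NMSE: {nmse(shrinkage, h_true):.3f}")  # about 0.083
```

Because the dataset, the metric, and the random seed are fixed, any claimed improvement can be verified by simply re-running the script.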

I have four New Year’s wishes for 2021:

  1. The research community focuses on basic research, such as developing and certifying fundamental models, assumptions, and algorithms related to the different pieces in the existing 6G visions.
  2. It becomes the norm that code and data are published alongside articles in all reputable scientific journals, or at least submitted for review.
  3. Preprints are not treated as publications and are only cited in the exceptional case when the papers are closely related.
  4. We get past the COVID-19 pandemic and get to meet at physical conferences again.

10 thoughts on “A Thought for 2021: Faster Science is Not Better Science”

  1. I think preprint publishers such as arXiv and TechRxiv really hurt academia; if a piece of research is not good enough to be published, then it shouldn’t be published. They have turned academia into a racing game that measures the winner by some numbers.
    There are tons of citations out there to references that are not really related, and if this continues, readers will just be confused by the unrelated sources that “researchers” cite, instead of getting more resources to learn about a topic and dig into it.

  2. Thanks sincerely for your heartfelt words, Prof. Bjornson. I describe myself as a victim of the academic racing game, occasionally satisfied (when emailed an acceptance letter) but in pain nearly every day.

    I had the dream of doing some small but meaningful things for the community or industry when I decided to start my graduate school journey, but in reality I am publishing many papers with little practical use. Sometimes I cannot even persuade myself why I am starting another paper instead of doing something more practical.

    I have learned much from your wonderful blog, and reading this post today made me admire your attitude towards science even more. It would be so nice if you could give me any suggestions, thanks!

    1. This is a complicated issue to solve. As long as we operate in a scientific climate that incentivizes making many minor contributions instead of a few major contributions, it is rational to work like that. A single researcher cannot change the entire system; we can only do our best within the system that currently exists. We can try to be driven by curiosity and by finding answers to research questions. Even if each question might seem small, the answers to many of them might change our understanding of a topic.

      I think it is only as a community that we can push funding agencies, universities, and governments to change how they measure excellence, to incentivize a different approach to research.

  3. I certainly resonate with your comments. With regards to “Five disruptive technology directions for 5G,” you are correct that, at the time we wrote that paper in 2014, there had already been several years of work. Further, I think the paper was honest in that we acknowledged we were speculating on technologies that would lead to something disruptive that should be called 5G.

    Like many others, I am also thinking about 6G. I am personally focusing on fundamental research into possible technology directions and not spending time on overview papers. I’m excited about collaborating with folks in circuits and antennas to devise new models that will lead to nice theory and more innovation. I’m also exploring more applications of wireless, with the hope that this leads to further innovations. We just started an initiative at NC State called 6GNC. It seems early to use the term 6G, but the cat is out of the bag, as they say.

    I look forward to more discussions on 6G, MIMO, and other topics.

  4. Well written, Emil.
    When it comes to 6G (and 5G for that matter), we are caught by the industry needing to present something revolutionary every 10-11 years (solar cycle?), motivating their customers to buy new equipment. To be honest, the 5G experience tells us that it is now very hard to come up with a really game-changing technology. Industry marketing is now claiming that the whole trend of “digital transformation” of society (which has been going on for 40 years or more) will happen thanks to 5G. From a tech perspective, the really significant advances are in how to build networks (SDN, cloud, slicing, etc.), which are not necessarily tied to 5G but great for making the operator business more efficient. The flip side is that they are hardly noticeable to the users (maybe a lower cost per GB). The higher data rates are taken with a shrug of the shoulders; they are seen as just minor evolutionary steps, “more of the same”. It’s hard to please the crowd nowadays. You may say that we are victims of our own success, and it’s not going to be easier for “6G”.

    On the lighter side, I tend to believe that the difference between technological development efforts (“D”) (e.g., 5G, 6G) and academic research (“R”) is that in the first case we try to solve a given technical problem (regardless of whether it is solvable or not), while in the second case we tend to expand our knowledge in areas where we are able to find solutions. The problem we face is that the money is often in the “D” domain, where we tend to bang our heads against very difficult problems or make “epsilon-improvements” on existing solutions. In the academic “R” domain, we instead tend to come up with beautiful solutions to problems that nobody has heard of.
    I have been a “practical” man all my academic career, ensuring that my research leads to practical value, but I see the limitation of the “D” approach: “6G” will no doubt push us deeper into “epsilon research”. If we really are to be serious about being revolutionary (and not just another G), I think we need to take a broader approach than what you find in many of the current 6G vision papers.

  5. Thank you for a nice article, Prof. Bjornson.

    I fully agree with you that the ‘publish or perish’ attitude towards academic incentives such as promotions (or tenure, as they say in the other part of the world) is fostering an unhealthy race, be it in the USA, Sweden, or India.

    However, my views on open science are a bit different, as I belong to a developing part of the world. Some of the open journals (IEEE or Nature) are really good, but they charge a fortune. In fact, the new IEEE policy of imposing page charges, even for their traditional journals, seems like a trap. For example, IEEE AWPL gives you one free page to write. Isn’t that ridiculous? Who writes a 1-page paper? Shouldn’t they say it straight: that they charge USD 600 for a 4-page paper? I guess all these measures are meant to reduce journal submissions, so that the editors have fewer papers to handle. In the process, many good papers, often NOT backed up by funding in USD/EURO, never reach the editorial office. To me, this is real discrimination and bias.

    This bias has inspired the mushrooming of a lot of bogus OPEN journals, charging only a fraction as much, and deceiving many young researchers with their own versions of the impact factor.

    I really hope that one day the research community will think globally, and we can build a better world with inclusiveness.

    1. On the one hand, there is a cost involved in maintaining a publication database and employing editors who process the published papers, to uphold a professional publication quality. On the other hand, I agree that the current fees are unreasonable. IEEE is supposed to be a not-for-profit organization, but its publications business is probably not organized in that way; instead it is used to generate a surplus that can fund other IEEE activities.

      An interesting initiative is the new ITU Journal on Future and Evolving Technologies (https://www.itu.int/en/journal/j-fet/Pages/default.aspx), which makes similar promises regarding open access and fast review cycles as the IEEE open journals, but is entirely free of charge. I guess its activities are funded by member countries of the ITU.

  6. It seems to be contradictory, since you have recently submitted and published many review papers. Some of them even overlap heavily with each other. I see you have a book on user-centric cell-free massive MIMO and also a survey paper on the same topic. Does it make sense to have two such fundamental publications on the same topic?

    1. It is true that I have recently published a textbook on the topic of cell-free massive MIMO and also co-authored a literature review. These are two different kinds of publications: a tutorial and a survey, respectively.

      IEEE Communications Surveys & Tutorials explains the difference between tutorials and surveys like this: “A tutorial article […] should be designed to help the reader to become familiar with and learn something specific about a chosen topic. In contrast, the term survey, as applied here, is defined to mean a survey of the literature.”

      My book is written to be a good way to learn the state-of-the-art theory and methods in a deep but concise manner, without caring about which researchers came up with which results. It could even be used as a textbook in university courses (my previous book “Massive MIMO Networks” is used at several universities). The survey instead lists and categorizes (almost) all the papers on the topic, with the focus on what each individual paper contains. Its mathematical and analytical content is very limited.
