I taught a course on complex networks this fall, and one component of the course is a hands-on session where students use the SNAP C++ and Python libraries for graph analysis, and Gephi for visualization. One available dataset is DBLP, a large publication database for computer science that also covers a good deal of electrical engineering.

In a small experiment, I filtered DBLP for papers with both “massive” and “MIMO” in the title and analyzed the resulting co-author graph. There are 17,200 papers and some 6,000 authors. There is one large connected component, along with over 400 much smaller connected components!
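For readers who want to try something similar, here is a minimal pure-Python sketch of the pipeline (the course used SNAP and Gephi; the three records below are made-up stand-ins for the parsed DBLP data, and the field names are hypothetical):

```python
import itertools
from collections import defaultdict

# Stand-in for the parsed DBLP records (title + author list per paper)
papers = [
    {"title": "Massive MIMO for 5G", "authors": ["A. Author", "B. Author"]},
    {"title": "Pilot Contamination in Massive MIMO", "authors": ["B. Author", "C. Author"]},
    {"title": "Massive MIMO Energy Efficiency", "authors": ["D. Author"]},
]

edges = defaultdict(int)        # (author, author) -> number of joint papers
paper_count = defaultdict(int)  # author -> number of matching papers
for p in papers:
    title = p["title"].lower()
    if "massive" in title and "mimo" in title:  # the keyword filter
        for a in p["authors"]:
            paper_count[a] += 1
        for a, b in itertools.combinations(sorted(p["authors"]), 2):
            edges[(a, b)] += 1  # edge weight = number of co-authored papers

# Count connected components with a simple union-find
parent = {a: a for a in paper_count}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for a, b in edges:
    parent[find(a)] = find(b)
components = {find(a) for a in paper_count}
print(len(paper_count), "authors,", len(components), "connected components")
# -> 4 authors, 2 connected components
```

Node sizes and edge thicknesses in the visualization then map directly to `paper_count` and the edge weights.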

Then I looked more closely at authors who have written at least 20 papers. Each node is an author, its size is proportional to his/her number of “massive MIMO papers”, and its color represents identified communities. Edge thicknesses represent the number of co-authored papers. Some long-standing collaborators, former students, and other friends stand out. (Click on the figure to enlarge it.)

To remind readers of the obvious, prolificacy is not the same as impact, even though they are often correlated. Also, the study is not entirely rigorous. First, it trusts that DBLP properly distinguishes authors with the same name (consider, e.g., “Li Li”), and I do not know how well it really does that. Second, in a random inspection, all papers that passed the filter dealt with “massive MIMO” as we know it. However, theoretically, the search criterion would also catch papers on, say, MIMO control theory for a massive power plant. Also, the filtering does miss some papers written before the “massive MIMO” term was established, perhaps most importantly Thomas Marzetta’s seminal paper on “unlimited antennas”. Third, the analysis is limited to publications covered by DBLP, which also means, conversely, that there is no specific quality threshold for the publication venues. Anyone interested in learning more, drop me a note.

When T. Marzetta introduced the Massive MIMO concept in his seminal article from 2010, he concluded that “*the phenomenon of pilot contamination impose[s] fundamental limitations on what can be achieved with a noncooperative cellular multiuser MIMO system*.”

More precisely, he showed that the channel capacity under i.i.d. Rayleigh fading converges to a finite limit as the number of base station antennas goes to infinity. The value of this limit is determined by the interference level in the channel estimation phase. There are hundreds of papers on IEEE Xplore that deal with the pilot contamination issue, trying to push the limit upwards or achieve higher performance for a given number of antennas. Various advanced mitigation methods have been developed to cure the symptoms of pilot contamination.
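The saturation effect can be seen numerically in a two-cell toy version of this setting: least-squares channel estimation from a shared pilot, maximum-ratio combining, and growing antenna count. The gain values below are illustrative and the noise-free estimate is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_own, beta_int = 1.0, 0.5  # large-scale gains: own user, contaminating user
trials = 500

def mean_sir(M):
    """Average SIR with MR combining based on a pilot-contaminated estimate."""
    h = np.sqrt(beta_own / 2) * (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M)))
    g = np.sqrt(beta_int / 2) * (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M)))
    h_hat = h + g  # LS estimate from the shared pilot (noise neglected)
    sig = np.abs(np.sum(np.conj(h_hat) * h, axis=1)) ** 2
    intf = np.abs(np.sum(np.conj(h_hat) * g, axis=1)) ** 2
    return np.mean(sig / intf)

for M in [10, 100, 1000]:
    # For large M, saturates near beta_own^2 / beta_int^2 = 4 instead of growing
    print(M, mean_sir(M))
```

The SIR stops improving with M because the estimate is partly aligned with the contaminating user's channel, which is exactly the limitation the lecture revisits.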

But was pilot contamination really a fundamental limitation to start with? In 2018, we published a paper called “Massive MIMO Has Unlimited Capacity” where we showed that there is an unexpectedly simple solution to the problem. You don’t need a sledgehammer to “**crack the pilot contamination nut**”; the right combination of state-of-the-art tools will do. While I have written about this in previous blog posts and briefly mentioned it in videos, I have finally recorded a comprehensive lecture on the topic. It is 82 minutes long and was given online by invitation from Hacettepe University, Turkey. No previous knowledge of the topic is required. I hope you will enjoy it in small or big doses!

Like many other teachers, I had to quickly switch from physical teaching to online mode at the beginning of the COVID-19 pandemic. I normally give lectures in a course called Multiple Antenna Communications in the period March to May. It is a luxury to teach this course since there are only 5-10 students, who have actively selected the course and are therefore truly interested in the topic!

Since I was anyway going to give online lectures, I was thinking: Why not record them in a way that other people could also benefit from them? I normally give 2-hour lectures where I switch between presenting PowerPoint slides and giving examples on the whiteboard; for example, I might make a theoretical derivation on the board and then summarize it on a slide and show simulation results. This time, I decided to decouple these activities by creating one video per lecture that the students could watch in advance, and then I had live sessions where I went through prepared examples and answered questions. This was done by sharing my screen and writing the examples in OneNote using a simple drawing pad, which is not very different from writing on a whiteboard.

I think this online teaching approach was quite successful. I am satisfied with the 12 lecture videos that I created, which consist of a total of 8 hours of narrated slides. The first video has more than 6000 views on YouTube, which is three orders of magnitude more than the number of students that I had in the course. I received many requests for the slides, so I uploaded them to GitHub.

Here is the video series in its entirety:

I will keep these resources available. If you are a teacher, please feel free to reuse the videos or slides in your teaching! I hope that the efforts that I and other teachers put into producing online content during the pandemic can be utilized to aid the learning of students also in the years to come.

Cell-free massive MIMO might be one of the core 6G physical layer technologies. One of my students, Giovanni Interdonato, defended his Ph.D. thesis on this topic earlier this week. In this video, he speaks with me about his thesis work and his time as a doctoral student.

I just finished giving an IEEE Future Networks Webinar on the topic of Cell-free Massive MIMO and radio stripes. The webinar is more technical than my previous popular-science video on the topic, but it can nevertheless be considered an overview of the basics and the implementation of the technology using radio stripes.

If you missed the chance to view the webinar live, you can access the recording and slides afterwards by following this link. The recording contains 42 minutes of presentation and 18 minutes of Q/A session. If your question was not answered during the session, please feel free to ask it here on the blog instead.

**Update:** The recording from the webinar has been delayed (due to the virus crisis), so I have recorded an alternative video:

Emerging intelligent reflecting surface (IRS) technology, also known under the names “reconfigurable intelligent surface” and “software-controlled metasurface”, is sometimes marketed as an enabling technology for 6G. How does it work, what are its use cases, and how much will it improve wireless access performance at large?

The physical principle of an IRS is that the surface is composed of atoms, each of which acts as an “intelligent” scatterer: a small antenna that receives and re-radiates without amplification, but with a controllable phase-shift. Typically, an atom is implemented as a small patch antenna terminated with an adjustable impedance. Assuming the phase shifts are properly adjusted, the scattered wavefronts can be made to add up constructively at the receiver. If coupling between the atoms is neglected, the analysis of an IRS essentially entails (i) finding the Green’s function of the source (a sum of spherical waves if close, or a plane wave if far away), (ii) computing the impinging field at each atom, (iii) integrating this field over the surface of each atom to find a current density, (iv) computing the radiated field from each atom using the physical-optics approximation, and (v) applying the superposition principle to find the field at the receiver. If the atoms are electrically small, one can approximate the re-radiated field by pretending the atoms are point sources, and then the received “signal” is basically a superposition of source signals that are phase-shifted (in proportion to the propagation distances) and amplitude-scaled (inversely with the propagation distances).
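The point-source approximation in the last sentence can be sketched in a few lines. The geometry, carrier frequency, and element count below are illustrative assumptions, and mutual coupling is neglected as in the text:

```python
import numpy as np

wavelength = 0.1            # 3 GHz carrier, in meters
k = 2 * np.pi / wavelength  # wavenumber

# 20 atoms spaced lambda/4 along a line (a 1D "surface" for simplicity)
atoms = np.array([[x, 0.0] for x in np.arange(-0.25, 0.25, wavelength / 4)])
src = np.array([0.0, 5.0])  # transmitter location
rx = np.array([3.0, 4.0])   # intended receiver location

def field(a, b):
    """Spherical-wave response between two points in free space."""
    r = np.linalg.norm(a - b)
    return np.exp(-1j * k * r) / r

incident = np.array([field(src, p) for p in atoms])  # field impinging on atoms
outgoing = np.array([field(p, rx) for p in atoms])   # re-radiation toward rx

# Choose each atom's phase shift to co-phase all paths at the receiver
phase_shifts = np.exp(-1j * np.angle(incident * outgoing))

coherent = np.abs(np.sum(incident * phase_shifts * outgoing))
uncontrolled = np.abs(np.sum(incident * outgoing))  # surface left unoptimized
print(coherent >= uncontrolled)  # True: co-phasing can only increase the field
```

With the phase shifts applied, every term in the superposition has the same phase at the receiver, which is the constructive addition described above.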

A point worth re-iterating is that an atom is a scatterer, not a “mirror”. A more subtle point is that the entire IRS as such, consisting of a collection of scatterers, is itself also a scatterer, not a mirror. “Mirrors” exist only in textbooks, when a plane wave is impinging onto an infinitely large conducting plate (none of which exist in practice). Irrespective of how the IRS is constructed, if it is viewed from far enough away, its radiated field will have a beamwidth that is inversely proportional to its size measured in wavelengths.

Two different operating regimes of IRSs can be distinguished:

1. *Both transmitter and receiver are in the far-field of the surface.* Then the waves seen at the surface can be approximated as planar; the phase differential from the surface center to its edge is less than a few degrees, say. In this case the phase shifts applied to each atom should be linear in the surface coordinate. The foreseen use case would be to improve coverage, or provide an extra path to improve the rank of a point-to-point MIMO channel. Unfortunately, in this case the end-to-end transmitter-IRS-receiver path loss scales very unfavorably: the received power is proportional to $N^2/(d_1 d_2)^2$, where $N$ is the number of meta-atoms in the surface and $d_1$, $d_2$ are the two hop distances. The reason is that, again, the IRS itself acts as a (large) scatterer, not a “mirror”. Therefore the IRS has to be quite large before it becomes competitive with a standard single-antenna decode-and-forward relay, a simple, well understood technology that can be implemented using already widely available components, at small power consumption and with a small form factor. (In addition, full-duplex technology is emerging and may eventually be combined with relaying, or even massive MIMO relaying.)

2. *At least one of the transmitter and the receiver is in the surface near-field.* Here the plane-wave approximation is no longer valid. The IRS can then be sub-optimally configured to act as a “mirror”, in which case the phase shifts vary linearly as a function of the surface coordinate. Alternatively, it can be configured to act as a “lens”, with optimized phase-shifts, which is typically better. As shown for example in this paper, in the near-field case the path loss scales more favorably than in the far-field case. The use cases for the near-field case are less obvious, but one can think of perhaps indoor environments with users close to the walls and every wall covered by an IRS. Another potential use case that I learned about recently is to use the IRS as a MIMO transmitter: a single-antenna transmitter near an IRS can be jointly configured with the IRS to act as a MIMO beamforming array.
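The unfavorable far-field scaling in case 1 can be illustrated under the simple free-space point-source model: $N^2$ coherent combining gain, but a penalty equal to the *product* of the two hop path losses. The distances and element counts below are illustrative; this is the standard toy-model scaling law, not a statement about any particular hardware:

```python
import numpy as np

wavelength = 0.1     # meters
d1, d2 = 50.0, 20.0  # transmitter-IRS and IRS-receiver distances

def fspl_gain(d):
    """Free-space path gain (lambda / (4 pi d))^2 for an isotropic element."""
    return (wavelength / (4 * np.pi * d)) ** 2

# Received power gain via the IRS: N^2 coherent combining,
# penalized by the product of the two hop path losses
gains = {}
for N in [100, 1000, 10000]:
    gains[N] = N ** 2 * fspl_gain(d1) * fspl_gain(d2)
    print(N, "atoms:", 10 * np.log10(gains[N]), "dB")

# For comparison: a single direct hop over the same total distance
print("direct link:", 10 * np.log10(fspl_gain(d1 + d2)), "dB")
```

In this example, the cascaded link only overtakes the single-hop path gain once N grows into the thousands, which is the "quite large before it becomes competitive" point made above.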

So how useful will IRS technology be in 6G? The question seems open. Indoor coverage in niche scenarios, but isn’t this an already solved problem? Outdoor coverage improvement, but then (cell-free) massive MIMO seems to be a much better option? Building MIMO transmitters from a single antenna seems exciting, but is it better than using conventional RF? Perhaps it is for the Terahertz bands, where implementation of coherent MIMO may prove truly challenging, that IRS technology will be most beneficial.

A final point is that nothing requires the atoms in an IRS to be located adjacent to one another, or even to form a surface! But they are probably easier to coordinate if they are in more or less the same place.

I have recorded a popular science video that explains how a cell-free network architecture can provide major performance improvements over 5G cellular networks, and why radio stripes is a promising way to implement it:

If you want more technical details, I recommend our recent survey paper “Ubiquitous Cell-Free Massive MIMO Communications”. One of the authors, Dr. Hien Quoc Ngo at Queen’s University Belfast, has created a blog about Cell-free Massive MIMO. In particular, it contains a list of papers on the topic and links to the programming code of some of them.

Cell-free massive MIMO is likely one of the technologies that will form the backbone of any xG with x>5. What distinguishes cell-free massive MIMO from distributed MIMO, network MIMO, or coordinated multipoint (CoMP)? The short answer is that cell-free massive MIMO works: it can deliver uniformly good service throughout the coverage area, and it requires no prior knowledge of short-term CSI (just like regular cellular massive MIMO). A longer answer is here. The price to pay for this superiority, no shock, is the lack of scalability: for canonical cell-free massive MIMO there is a practical limit on how large the system can be, and this limitation concerns the power control, the signal processing, and the organization of the backhaul.

At ICC this year we presented this approach towards scalable cell-free massive MIMO. A key insight is that power control is vital for performance, and a scalable cell-free massive MIMO solution requires a scalable power control policy. No surprise, some performance must be sacrificed relative to canonical cell-free massive MIMO. Coincidentally, another paper in the same session (WC-26) also devised a power control policy with similar qualities!

Take-away point? There are only three things that matter for the design of cell-free massive MIMO signal processing algorithms and power control policies: scalability, scalability and scalability…

Check out this video, produced by the IEEE Signal Processing Society’s Signal Processing for Communications and Networking (SPCOM) technical committee. The video explains to the layman what 5G is for, and how massive MIMO comes in…

One more reason to attend the IEEE CTW 2019: Participate in the Molecular MIMO competition! There is a USD 500 award to the winning team.

The task is to design a molecular MIMO communication detection method using datasets that contain real measurements. Possible solutions may include classic approaches (e.g., thresholding-based detection) as well as deep learning-based approaches.

More detail: here.

Come to the IEEE Communication Theory Workshop (CTW) 2019 and participate in the MIMO positioning competition!

The object of the competition is to design and train an algorithm that can determine the position of a user, based on estimated channel frequency responses between the user and an antenna array. Possible solutions may build on classic algorithms (fingerprinting, interpolation) or machine-learning approaches. Channel vectors from a dataset created with a MIMO channel sounder will be used.

Competing teams should present a poster at the conference, describing their algorithms and experiments.

A USD 500 prize will be awarded to the winning team.

More detail in this flyer.

Come listen to Liesbet Van der Perre, Professor at KU Leuven (Belgium), on Monday, February 18, at 2:00 pm EST.

She gives a webinar on state-of-the-art circuit implementations of Massive MIMO, and outlines future research challenges. The webinar is based on, among others, this paper.

In more detail, the webinar will summarize the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating with low-complexity RF and analog hardware chains are clarified. It will explain how terminals can benefit from improved energy efficiency. The status of technology and real-life prototypes will be discussed. Open challenges and directions for future research are suggested.

Listen to the webinar by following this link.

When an antenna array is used to focus a transmitted signal on a receiver, we call this beamforming (or precoding) and we usually illustrate it as shown to the right. This cartoonish illustration is only applicable when the antennas are gathered in a compact array and there is a line-of-sight channel to the receiver.

If we want to deploy very many antennas, as in Massive MIMO, it might be preferable to distribute the antennas over a larger area. One such deployment concept is called Cell-free Massive MIMO. The basic idea is to have many distributed antennas that are transmitting phase-coherently to the receiving user. In other words, the antennas’ signal components add constructively at the location of the user, just as when using a compact array for beamforming. It is therefore convenient to call it *beamforming* in both cases—algorithmically it is the same thing!

**The question is: How can we illustrate the beamforming effect when using a distributed array?**

The figure below shows how to do it. I consider a toy example with 80 star-marked antennas deployed along the sides of a square and these antennas are transmitting sinusoids with equal power, but different phases. The phases are selected to make the 80 sine-components phase-aligned at one particular point in space (where the receiving user is supposed to be):

Clearly, the “beamforming” from a distributed array does not give rise to a concentrated signal beam, but the signal amplification is confined to a small spatial region (where the color is blue and the values on the vertical axis are close to one). This is where the signal components from all the antennas are coherently combined. There are minor fluctuations in channel gain at other places, but the general trend is that the components are non-coherently combined everywhere except at the receiving user. (Roughly the same will happen in a rich multipath channel, even if a compact array is used for transmission.)

By looking at a two-dimensional version of the figure (see below), we can see that the coherent combination occurs in a circular region that is roughly half a wavelength in diameter. At the carrier frequencies used for cellular networks, this region will only be a few centimeters or millimeters wide. It is *almost magical* how this distributed array can amplify the signal at such a tiny spatial region! This spatial region is probably what the company Artemis is calling a *personal cell (pCell)* when marketing their distributed MIMO solution.

If you are into the details, you might wonder why I simulated a square region that is only a few wavelengths wide, and why the antenna spacing is only a quarter of a wavelength. This assumption was only made for illustrative purposes. If the physical antenna locations were fixed but we reduced the wavelength, the circular region would shrink and the ripples would become more frequent. Hence, we would need to compute the channel gain at many more spatial sample points to produce a smooth plot.
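As a complement to the original MATLAB code, here is a simplified re-implementation sketch of the experiment: 80 antennas with quarter-wavelength spacing on the perimeter of a square five wavelengths wide, with phases aligned at an arbitrary focal point (the exact geometry differs from the published figures):

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength

# 80 antennas, lambda/4 spacing, along the perimeter of a 5x5-wavelength square
s = np.arange(-2.5, 2.5, 0.25)  # 20 positions per side
ants = np.concatenate([
    np.stack([s, -2.5 * np.ones_like(s)], axis=1),   # bottom side
    np.stack([2.5 * np.ones_like(s), s], axis=1),    # right side
    np.stack([-s, 2.5 * np.ones_like(s)], axis=1),   # top side
    np.stack([-2.5 * np.ones_like(s), -s], axis=1),  # left side
])
focal = np.array([0.3, -0.4])  # intended receiver position (arbitrary)
d0 = np.linalg.norm(ants - focal, axis=1)  # distances used to align the phases

def gain(point):
    """Normalized channel gain when all phases are aligned at `focal`."""
    d = np.linalg.norm(ants - point, axis=1)
    return np.abs(np.sum(np.exp(-1j * k * (d - d0)))) / len(ants)

print(gain(focal))                         # 1.0: fully coherent combining
print(gain(focal + np.array([2.0, 1.0])))  # off-focus: components misalign
```

Evaluating `gain` over a grid of points reproduces the ripple pattern and the small coherent region around the focal point.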

**Reproduce the results**: The code that was used to produce the plots can be downloaded from my GitHub.

The tedious, time-consuming, and buggy nature of system-level simulations is exacerbated with massive MIMO. This post offers some relief in the form of analytical expressions for downlink conjugate beamforming [1]. These expressions enable the testing and calibration of simulators—say, to determine how many cells are needed to represent an infinitely large network with some desired accuracy. The trick that makes the analysis feasible is to let the shadowing grow strong, yet the ensuing expressions capture very well the behaviors with practical shadowing strengths.

The setting is an infinitely large cellular network where each $M$-antenna base station (BS) serves $K$ single-antenna users. The large-scale channel gains include pathloss with exponent $\eta$ and shadowing having log-scale standard deviation $\sigma_{\rm dB}$, with the gain between the $l$th BS and the $k$th user served by a BS of interest denoted by $G_{l,k}$. With conjugate beamforming and receivers reliant on channel hardening, the signal-to-interference ratio (SIR) at such user is [2]

$$\mathrm{SIR}_k = \frac{M \, P_k \, G_k}{\sum_{l} G_{l,k}}$$

where $G_k$ is the gain from the serving BS and $P_k$ is the share of that BS’s power allocated to user $k$. Two power allocations can be analyzed:

- Uniform: $P_k = 1/K$.
- SIR-equalizing [3]: $P_k \propto 1/G_k$, with the proportionality constant ensuring that $\sum_k P_k = 1$. This makes the SIR identical for all $K$ users. Moreover, as $M$ and $K$ grow large, this common SIR converges to a deterministic limit.

The analysis is conducted for $\sigma_{\rm dB} \to \infty$, which makes it valid for arbitrary BS locations.

For notational compactness, let . Define as the solution to where is the lower incomplete gamma function. For , in particular, . Under a uniform power allocation, the CDF of is available in an explicit form involving the Gauss hypergeometric function (available in MATLAB and Mathematica):

where “” indicates asymptotic () equality, is such that the CDF is continuous, and

Alternatively, the CDF can be obtained by solving (e.g., with Mathematica) a single integral involving the Kummer function :

This latter solution can be modified for the SIR-equalizing power allocation as

The spectral efficiency of user $k$ is $\log_2(1 + \mathrm{SIR}_k)$, with CDF readily characterizable from the expressions given earlier. From it, the sum spectral efficiency at the BS of interest can be found by summing over the $K$ users. Expressions for the average user and sum spectral efficiencies are further available in the form of single integrals.

With a uniform power allocation,

(1)

and . For the special case of , the Kummer function simplifies giving

(2)

With an equal-SIR power allocation

(3)

and .

Let us now contrast the analytical expressions (computable instantaneously and exactly, and valid for arbitrary topologies, but asymptotic in the shadowing strength) with some Monte-Carlo simulations (lengthy, noisy, and bug-prone, but for precise shadowing strengths and topologies).
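A Monte-Carlo baseline of this kind can be sketched as follows, assuming the conjugate-beamforming SIR expression $\mathrm{SIR}_k = M P_k G_k / \sum_l G_{l,k}$ with a uniform power allocation, a strongest-BS association, and lognormal shadowing; the topology and parameter values are illustrative, not those used for the figures:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, n_bs, eta, sigma_db = 100, 10, 500, 4.0, 10.0

def sir_samples(n_drops=200):
    out = []
    for _ in range(n_drops):
        bs = rng.uniform(-500, 500, size=(n_bs, 2))  # BS positions (meters)
        user = rng.uniform(-50, 50, size=2)          # user near the center
        d = np.linalg.norm(bs - user, axis=1) + 1.0  # avoid d = 0
        shadow = 10 ** (sigma_db * rng.standard_normal(n_bs) / 10)
        G = shadow * d ** (-eta)                     # large-scale gains
        G0 = G.max()  # serving BS taken as the strongest (an assumption here)
        # Uniform allocation P_k = 1/K; denominator sums over all BSs' gains
        out.append(M * (1 / K) * G0 / G.sum())
    return np.array(out)

sir = sir_samples()
print("median SIR:", np.median(10 * np.log10(sir)), "dB")
```

Building the empirical CDF of `sir` over many drops gives the kind of curve that the analytical expressions are calibrated against, at the cost of noticeable simulation time for large networks.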

First, we simulate a 500-cell hexagonal lattice with , and . Figs. 1a-1b compare the simulations for – dB with the analysis. The behaviors with these typical outdoor values of are well represented by the analysis and, as it turns out, it is in rigidly homogeneous networks such as this one that the gap is largest.

For a more irregular deployment, let us next consider a network whose BSs are uniformly distributed. BSs (500 on average) are dropped around a central one of interest. For each network snapshot, users are then uniformly dropped until of them are served by the central BS. As before, , and . Figs. 2a-2b compare the simulations for dB with the analysis, and the agreement is now complete. The simulated average spectral efficiency with a uniform power allocation is b/s/Hz/user while (2) gives b/s/Hz/user.

The analysis presented in this post is not without limitations, chiefly the absence of noise and pilot contamination. However, as argued in [1], there is a broad operating range (– with very conservative premises) where these effects are rather minor, and the analysis is hence applicable.

[1] G. George, A. Lozano, M. Haenggi, “Massive MIMO forward link analysis for cellular networks,” arXiv:1811.00110, 2018.

[2] T. Marzetta, E. Larsson, H. Yang, and H. Ngo, Fundamentals of Massive MIMO. Cambridge University Press, 2016.

[3] H. Yang and T. L. Marzetta, “A macro cellular wireless network with uniformly high user throughputs,” IEEE Veh. Techn. Conf. (VTC’14), Sep. 2014.

In this video clip I talk about an important lesson learned while working on Massive MIMO:

And in this video clip I talk more in general about our book, Fundamentals of Massive MIMO:

The textbook **Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency**, which I have written together with Jakob Hoydis and Luca Sanguinetti, is from now on available for free download from https://massivemimobook.com. If you want a physical copy, you can buy the color-printed hardback edition from now publishers and major online shops, such as Amazon.

You can read more about this book in a previous blog post and also watch this new video, where I talk about the content and motivation behind the writing of the book.

The next generation of cellular networks needs to be much more energy-efficient than the current generation if we are to deliver 100-1000 times more data in a cost-efficient and environmentally friendly manner. In this video, I explain the methodology that can be used to design energy-efficient 5G networks, and also the key role that Massive MIMO will play.

I wrote this paper to make a single point: the hardware distortion (especially out-band radiation) stemming from transmitter nonlinearities in massive MIMO is a deterministic function of the transmitted signals. One consequence of this is that in most cases of practical relevance, the distortion is correlated among the antennas. Specifically, under line-of-sight propagation conditions this distortion is radiated in specific directions: in the single-user case the distortion is radiated into the same direction as the signal of interest, and in the two-user case the distortion is radiated into two other directions.

The derivation was based on a very simple third-order polynomial model. Questioning that model, or contesting the conclusions? Let’s run WebLab. WebLab is a web-server-based interface to a real power amplifier operating in the lab, developed and run by colleagues at Chalmers University of Technology in Sweden. Anyone can access the equipment in real time (though there might be a queue) by submitting a waveform and retrieving the amplified waveform using a special Matlab function, “weblab.m”, obtainable from their webpages. Since accurate characterization and modeling of amplifiers is a hard nonlinear identification problem, WebLab is a great tool for researchers who want to go beyond polynomial and truncated Volterra-type toy models.

A half-wavelength-spaced uniform linear array with 50 elements beamforms in free space line-of-sight to two terminals at (arbitrarily chosen) angles of -9 and +34 degrees, respectively. A sinusoid with one frequency is sent to the first terminal, and a sinusoid with another frequency is transmitted to the other terminal. (Frequencies are in discrete time; see the WebLab documentation for details.) The actual radiation diagram is computed numerically: line-of-sight in free space is fairly uncontroversial, as superposition applies to wave propagation. However, importantly, the actual amplification of all signals is performed by real hardware in the lab.

The computed radiation diagram is shown below. (Some lines overlap.) There are two large peaks at -9 and +34 degrees, corresponding to the two signals of interest at their respective frequencies. There are also secondary peaks, at angles of approximately -44 and -64 degrees, at frequencies different from those of the transmitted sinusoids. These peaks originate from intermodulation products and represent the out-band radiation caused by the amplifier non-linearity. (Homework: read the paper and verify that these angles are equal to those predicted by the theory.)
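A simplified numerical stand-in for the experiment uses a memoryless third-order polynomial in place of the WebLab amplifier. The array size and angles follow the text; the tone frequencies and nonlinearity coefficient are illustrative assumptions:

```python
import numpy as np

Nant = 50
n = np.arange(Nant)
theta1, theta2 = np.deg2rad(-9.0), np.deg2rad(34.0)
t = np.arange(2048)
f1, f2 = 0.06, 0.08  # normalized discrete-time frequencies (illustrative)

def steer(theta):
    """Array response of a half-wavelength-spaced ULA toward angle theta."""
    return np.exp(1j * np.pi * n * np.sin(theta))

# Per-antenna signals: two tones beamformed toward the two terminals...
x = (np.outer(np.conj(steer(theta1)), np.exp(2j * np.pi * f1 * t)) +
     np.outer(np.conj(steer(theta2)), np.exp(2j * np.pi * f2 * t)))
# ...then passed through a memoryless third-order nonlinearity per antenna
y = x - 0.05 * x * np.abs(x) ** 2

def radiated_power(theta):
    """Mean power of the superposed field in direction theta (free space)."""
    return np.mean(np.abs(steer(theta) @ y) ** 2)

p_sig = radiated_power(theta1)              # main lobe: signal of interest
p_other = radiated_power(np.deg2rad(60.0))  # an arbitrary other direction
print(p_sig > p_other)  # True: distortion lobes exist but are far weaker
```

Sweeping `radiated_power` over all angles, separately per frequency bin, reveals the secondary intermodulation lobes discussed above; with real hardware in the loop, the lobes appear at the same angles.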

The Matlab code for reproduction of this experiment can be downloaded here.

**No**, these are two different but somewhat related concepts, as I will explain in detail below.

Contemporary multiantenna base stations for cellular communications are equipped with 2-8 antennas, which are deployed along a horizontal line. One example is a uniform linear array (ULA), as illustrated in Figure 1 below, where the antenna spacing is uniform. All the antennas in the ULA have the same physical down-tilt, with respect to the ground, and a fixed radiation pattern and directivity.

By sending the same signal from all antennas, but with different phase-shifts, we can steer beams in different angular directions and thereby make the directivity of the radiated signal different from the directivity of the individual antennas. Since the antennas are deployed on a one-dimensional horizontal line in this example, the ULA can only steer beams in the two-dimensional (2D) azimuth plane as illustrated in Figure 1. The elevation angle is the same for all beams, which is why this is called 2D beamforming. The beamwidth in the azimuth domain shrinks the more antennas are deployed. If the array is used for multiuser MIMO, then multiple beams with different azimuth angles are created simultaneously, as illustrated by the colored beams in Figure 1.

If we rotate the ULA so that the antennas are instead deployed at different heights above the ground, then the array can instead steer beams at different elevation angles. This is illustrated in Figure 2. Note that this is still a form of 2D beamforming since every beam will have the same directivity with respect to the azimuth plane. This antenna array can be used to steer beams towards users at different floors of a building. It is also useful for serving flying objects, such as UAVs, jointly with ground users. The beamwidth in the elevation domain shrinks the more antennas are deployed.

If we instead deploy multiple ULAs on top of each other, it is possible to control both the azimuth and elevation angle of a beam. This is called 3D beamforming (or full-dimensional MIMO) and is illustrated in Figure 3 using a planar array with a “massive” number of antennas. This gives the flexibility to not only steer beams towards different buildings but also towards different floors of these buildings, to provide a beamforming gain wherever the user is in the coverage area. It is not necessary to have many antennas to perform 3D beamforming – it is basically enough to have three antennas deployed in a triangle. However, as more antennas are added, the beams become narrower and easier to jointly steer in specific azimuth-elevation directions. This increases the array gain and reduces the interference between beams directed to different users, as illustrated by the colors in Figure 3.
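The azimuth/elevation distinction can be verified with a small array-factor computation. Half-wavelength spacing is assumed, and the array sizes and angles are illustrative; at broadside, a horizontal ULA's response is elevation-independent, while stacking ULAs into a planar array adds elevation resolution:

```python
import numpy as np

def ula_steer(n_h, az, el):
    """Response of a horizontal half-wavelength ULA toward (azimuth, elevation)."""
    m = np.arange(n_h)
    return np.exp(1j * np.pi * m * np.sin(az) * np.cos(el))

def upa_steer(n_h, n_v, az, el):
    """Response of an n_h x n_v planar array (stacked ULAs)."""
    m, p = np.meshgrid(np.arange(n_h), np.arange(n_v))
    phase = np.pi * (m * np.sin(az) * np.cos(el) + p * np.sin(el))
    return np.exp(1j * phase).ravel()

el0, el1 = np.deg2rad(0.0), np.deg2rad(30.0)

# Horizontal ULA at broadside azimuth: identical response at both elevations
a0, a1 = ula_steer(8, 0.0, el0), ula_steer(8, 0.0, el1)
corr_ula = np.abs(np.vdot(a0, a1)) / 8
print(corr_ula)  # 1.0 -> no elevation discrimination (2D beamforming)

# 8x8 planar array: the same two directions are now well separated (3D)
b0, b1 = upa_steer(8, 8, 0.0, el0), upa_steer(8, 8, 0.0, el1)
corr_upa = np.abs(np.vdot(b0, b1)) / 64
print(corr_upa < corr_ula)  # True
```

The normalized correlation between the two steering vectors indicates how much a beam pointed at one direction spills onto the other: 1 means no separation, values near 0 mean the users can be multiplexed.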

The detailed answer to the question “3D Beamforming, is that Massive MIMO?” is as follows. Massive MIMO and 3D beamforming are two different concepts. 3D beamforming can be performed with few antennas, and Massive MIMO can be deployed to only perform 2D beamforming. However, Massive MIMO and 3D beamforming make a great combination in many applications; for example, to spatially multiplex many users in a city with high-rise buildings. One should also bear in mind that, in general, only a fraction of the users are located in line-of-sight, so the formation of angular beams (as shown above) might be of limited importance. The ability to control the array’s radiation pattern in 3D is nonetheless helpful to control the multipath environment such that the many signal components add constructively at the location of the intended receiver.

Last year, I wrote a post about channel hardening. To recap, the achievable data rate of a conventional single-antenna channel varies rapidly over time due to the random small-scale fading realizations, and also over frequency due to frequency-selective fading. However, when you have many antennas at the base station and use them for coherent precoding/combining, the fluctuations in data rate average out; we then say that the channel hardens. One follow-up question that I have received several times is:

**Can we utilize the channel hardening to estimate the channels less frequently?**

Unfortunately, the answer is no. Whenever you move approximately half a wavelength, the multi-path propagation will change each element of the channel vector. The time it takes to move such a distance is called the *coherence time*. This time is the same irrespective of how many antennas the base station has and, therefore, you still need to estimate the channel once per coherence time. The same applies in the frequency domain, where the *coherence bandwidth* is determined by the propagation environment and not by the number of antennas.

The following flow-chart shows what needs to happen in every channel coherence time:

When you get a new realization (at the top of the flow-chart), you compute an estimate (e.g., based on uplink pilots), and then you use the estimate to compute a new receive combining vector and transmit precoding vector. It is when you have applied these vectors to the channel that the hardening phenomenon appears; that is, the randomness averages out. If you use maximum ratio (MR) processing, then the random realization $\mathbf{h}_1$ of the channel vector turns into an almost deterministic scalar channel $\|\mathbf{h}_1\|^2$ after the combining has been applied.

In conclusion, channel hardening appears after coherent combining/precoding has been applied. To maintain a hardened channel over time (and frequency), you need to estimate and update the combining/precoding as often as you would do for a single-antenna channel. If you don’t do that, you will gradually lose the array gain until the point where the channel and the combining/precoding are practically uncorrelated, so there is no array gain left. Hence, there is more to lose from estimating channels too infrequently in Massive MIMO systems than in conventional systems. This is shown in Fig. 10 in a recent measurement paper from Lund University, where you see how the array gain vanishes with time. However, the Massive MIMO system will never be worse than the corresponding single-antenna system.
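Both effects can be reproduced with a small simulation under i.i.d. Rayleigh fading; the correlation coefficient `rho` that models channel aging between estimation and use is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 2000

def effective_gain(M, rho):
    """Gain of MR precoding computed from h_old but applied to the aged h_new."""
    h_old = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    innov = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    h_new = rho * h_old + np.sqrt(1 - rho ** 2) * innov  # aged channel
    w = h_old / np.linalg.norm(h_old, axis=1, keepdims=True)  # MR precoder
    return np.abs(np.sum(np.conj(w) * h_new, axis=1)) ** 2

# (i) Hardening: relative fluctuation of the gain shrinks as M grows (fresh CSI)
for M in [1, 10, 100]:
    g = effective_gain(M, rho=1.0)
    print(M, np.std(g) / np.mean(g))  # roughly 1/sqrt(M)

# (ii) Aging: the array gain collapses as the channel decorrelates
g_fresh, g_stale = effective_gain(100, 1.0), effective_gain(100, 0.0)
print(np.mean(g_fresh) / np.mean(g_stale))  # roughly M = 100
```

Part (i) is the hardening itself; part (ii) is the point of this post: with an outdated precoder, the mean gain falls from about M back to about 1, i.e., the single-antenna level.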

The signal-to-noise ratio (SNR) generally depends on the transmit power P, the channel gain β, and the noise power σ²: SNR = Pβ/σ².

Since the spectral efficiency (bit/s/Hz) and many other performance metrics of interest depend on the SNR, and not on the individual values of the three parameters, it is common practice to normalize one or two of the parameters to unity. This habit makes it easier to interpret performance expressions, to select reasonable SNR ranges, and to avoid mistakes in analytical derivations.
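For concreteness, here is a small sketch (my own; the transmit power, pathloss, bandwidth, and the 7 dB noise figure are illustrative assumptions) of how the absolute parameters combine into the SNR that one then normalizes:

```python
import math

def snr_db(p_tx_w, channel_gain_db, bandwidth_hz, noise_figure_db=7.0):
    """SNR in dB from absolute parameters: P*beta divided by the noise power."""
    p_tx_dbm = 10 * math.log10(p_tx_w * 1000)
    # Thermal noise floor (-174 dBm/Hz at room temperature) plus noise figure:
    noise_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db
    return p_tx_dbm + channel_gain_db - noise_dbm

# Example: 1 W transmit power, -100 dB channel gain, 10 MHz bandwidth
print(round(snr_db(1.0, -100.0, 10e6), 1), "dB")
```

Keeping the computation explicit like this makes it obvious which parameter has been normalized away, which matters in the situations discussed below.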

There are, however, situations when the absolute value of the transmitted/received signal power matters, and not the relative value with respect to the noise power, as measured by the SNR. In these situations, it is easy to make mistakes if you use normalized parameters. I see this type of error far too often, both as a reviewer and in published papers. I will give some specific examples below, but I won’t tell you who has made these mistakes, to not point the finger at anyone specifically.

**Wireless energy transfer**

Electromagnetic radiation can be used to transfer energy to wireless receivers. In such wireless energy transfer, it is the received signal energy that is harvested by the receiver, not the SNR. Since the noise power is extremely small, the SNR is (at least) a billion times larger than the received signal power. Hence, a normalization error can lead to crazy conclusions, such as being able to transfer energy at a rate of 1 W instead of 1 nW. The former is enough to keep a wireless transceiver on continuously, while the latter requires you to harvest energy for a long time period before you can turn the transceiver on for a brief moment.
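A quick numerical sanity check (with illustrative numbers of my own choosing) shows how enormously the harvested power and the SNR differ, which is why confusing them gives absurd results:

```python
p_tx = 1.0       # transmit power [W]
beta_db = -60.0  # channel gain [dB], illustrative
noise_w = 1e-13  # noise power [W], roughly -100 dBm

p_rx = p_tx * 10 ** (beta_db / 10)  # received (harvestable) power [W]
snr = p_rx / noise_w                # dimensionless SNR

print(f"harvested power: {p_rx:.1e} W")  # microwatt scale
print(f"SNR: {snr:.1e}")                 # many orders of magnitude larger
```

Mistaking the (normalized) SNR for the received power here would overestimate the harvested energy by more than nine orders of magnitude.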

**Energy efficiency**

The energy efficiency (EE) of a wireless transmission is measured in bit/Joule. The EE is computed as the ratio between the data rate (bit/s) and the power consumption (Watt=Joule/s). While the data rate depends on the SNR, the power consumption does not. The same SNR value can be achieved over a long propagation distance by using high transmit power or over a short distance by using a low transmit power. The EE will be widely different in these cases. If a “normalized transmit power” is used instead of the actual transmit power when computing the EE, one can get EEs that are one million times smaller than they should be. As a rule-of-thumb, if you compute things correctly, you will get EE numbers in the range of 10 kbit/Joule to 10 Mbit/Joule.
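A hedged sketch of the correct computation (the bandwidth, SNR, and consumed power are my own illustrative assumptions): the denominator must be an actual power consumption in Watts, never a normalized one.

```python
import math

def energy_efficiency(bandwidth_hz, snr, power_consumption_w):
    """EE in bit/Joule: data rate (bit/s) divided by power consumption (W)."""
    rate_bps = bandwidth_hz * math.log2(1 + snr)
    return rate_bps / power_consumption_w

# 10 MHz bandwidth, 20 dB SNR, 10 W total consumed power:
ee = energy_efficiency(10e6, 100.0, 10.0)
print(f"{ee / 1e6:.2f} Mbit/Joule")
```

The result lands at a few Mbit/Joule, consistent with the 10 kbit/Joule to 10 Mbit/Joule rule of thumb above; replacing the 10 W with a normalized "1" would silently change the answer by an order of magnitude.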

**Noise power depends on the bandwidth**

The noise power is proportional to the communication bandwidth. When working with a normalized noise power, it is easy to forget that a given SNR value only applies for one particular value of the bandwidth.
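For reference, the thermal noise power is kTB, i.e., about −174 dBm per Hz of bandwidth at room temperature; a small sketch (my own, stdlib only):

```python
import math

def noise_power_dbm(bandwidth_hz, temperature_k=290.0):
    """Thermal noise power kTB, expressed in dBm."""
    k_boltzmann = 1.380649e-23  # [J/K]
    noise_w = k_boltzmann * temperature_k * bandwidth_hz
    return 10 * math.log10(noise_w * 1000)

for b in (1.0, 1e6, 10e6, 100e6):
    print(f"{b:>11.0f} Hz: {noise_power_dbm(b):7.1f} dBm")
```

Every factor-of-ten increase in bandwidth raises the noise power by 10 dB, so a normalized SNR pinned to one bandwidth is simply wrong at another.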

Some papers normalize the noise variance and channel gain, but then make the SNR equal to the unnormalized transmit power (measured in W). This may greatly overestimate the SNR, but the achievable rates might still be in the reasonable range if you operate the system in an interference-limited regime.

Some papers contain an alternative EE definition where the spectral efficiency (bit/s/Hz) is divided by the power consumption (Joule/s). This leads to the alternative EE unit bit/Joule/Hz. This definition is not formally wrong, but gives the misleading impression that one can multiply the EE value with any choice of bandwidth to get the desired number of bit/Joule. That is not the case since the SNR only holds for one particular value of the bandwidth.

**Knowing when to normalize**

In summary, even if it is convenient to normalize system parameters in wireless communications, you should only do it if you understand when normalization is possible and when it is not. Otherwise, you can make embarrassing mistakes, such as submitting a paper where the results are six orders of magnitude wrong. And, unfortunately, several such papers have been published, and these create a vicious circle by tricking others into making the same mistakes.

The hardback version of the massive new book **Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency** (by Björnson, Hoydis, Sanguinetti) is currently available for the special price of $70 (including worldwide shipping). The original price is $99.

This price is available until the end of April when buying the book directly from the publisher through the following link:

https://www.nowpublishers.com/Order/BuyBook?isbn=978-1-68083-985-2

**Note:** The book’s authors will give a joint tutorial on April 15 at WCNC 2018. A limited number of copies of the book will be available for sale at the conference and, if you attend the tutorial, you will receive an even better deal on the book!

While the research literature is full of papers that design wireless communication systems under constraints on the maximum transmitted power, in practice, it might be constraints on the equivalent isotropically radiated power (EIRP) or the out-of-band radiation that limit the system operation.

Christopher Mollén recently defended his doctoral thesis entitled High-End Performance with Low-End Hardware: Analysis of Massive MIMO Base Station Transceivers. In the following video, he explains the basics of how the non-linear distortion from Massive MIMO transceivers is radiated in space.

The term “Massive MIMO” has become synonymous with providing massive data rates in wireless networks, but this is not the technology’s only good trait. In this video presentation, which has also been given as an IEEE 5G webinar, I explain how Massive MIMO can enhance future cellular networks from many different perspectives.

One reason for why capacity lower bounds are so useful is that they are accurate proxies for link-level performance with modern coding. But this fact, well known to information and coding theorists, is often contested by practitioners. I will discuss some possible reasons for that here.

The recipe is to compute the capacity bound and, depending on the code blocklength, add a dB or a few to the required SNR. That gives the link-performance prediction. The coding literature is full of empirical results showing how far from capacity a code of a given blocklength is for the AWGN channel, and this gap is usually not extremely different for other channel models – although one should always check this.

But there are three main caveats with this:

- First, the capacity bound, or the “SINR” that it often contains, must be information-theoretically correct. A great many papers get this wrong. Emil explained some common errors in his blog post last week. The recommended approach is to map the channel onto one of the canonical cases in Figure 2.9 in Fundamentals of Massive MIMO, verify that the technical conditions are satisfied, and use the corresponding formula.
- When computing expressions of the type E[log(1+”SINR”)], the average should be taken over all quantities that are random within the duration of a codeword. Typically, this means averaging over the randomness incurred by the noise, channel estimation errors, and in many cases the small-scale fading. All other parameters must be kept fixed. Typically, user positions, path losses, shadow fading, scheduling, and pilot assignments are fixed, so the expectation is conditional on those. (Yet, the interference statistics may vary substantially if other users are dropping in and out of the system.) This in turn means that many “drops” have to be generated, where these parameters are drawn at random, and then CDF curves with respect to that second level of randomness need to be computed (numerically). Think of the expectation E[log(1+”SINR”)] as a “link simulation”: every codeword sees many independent noise realizations, and typically small-scale fading realizations, but the same realization of the user positions. Also, neat (and tight) closed-form bounds on E[log(1+”SINR”)] are often available.
- Care is advised when working with relatively short blocks (less than a few hundred bits) and at rates close to the constrained capacity of the foreseen modulation format. In this case, many of the “standard” capacity bounds become overoptimistic. As a rule of thumb, compare the capacity of an AWGN channel with the constrained capacity of the chosen modulation at the spectral efficiency of interest; if the gap is small, the capacity bounds will be useful. If not, then reconsider the choice of modulation format! (See also homework problem 1.4.)
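The two levels of randomness described in the list above can be sketched as follows (a toy single-antenna example of my own, not from the text): the inner Monte Carlo average over fading and noise plays the role of the "link simulation" for one drop, while the outer loop over random path losses generates the material for a CDF over drops.

```python
import numpy as np

rng = np.random.default_rng(1)

def ergodic_rate_for_drop(pathloss, snr0=1.0, fades=10_000):
    """Inner average E[log2(1+SNR)] over small-scale Rayleigh fading,
    for ONE fixed user position (i.e., one fixed pathloss)."""
    h = (rng.standard_normal(fades) + 1j * rng.standard_normal(fades)) / np.sqrt(2)
    g = np.abs(h) ** 2  # exponentially distributed channel gain
    return np.mean(np.log2(1 + snr0 * pathloss * g))

# Outer level: many random "drops" (user positions -> pathlosses),
# from which a CDF over drops can then be computed.
pathlosses = 10 ** rng.uniform(-2, 0, size=200)  # illustrative pathloss draws
rates = np.array([ergodic_rate_for_drop(pl) for pl in pathlosses])
print("median ergodic rate over drops: %.2f bit/s/Hz" % np.median(rates))
```

The key point is that the pathloss stays fixed inside `ergodic_rate_for_drop`, exactly as the user position stays fixed over a codeword.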

How far are the bounds from the actual capacity typically? Nobody knows, but there are good reasons to believe they are extremely close. Here (Figure 1) is a nice example that compares a decoder that uses the measured channel likelihood, instead of assuming a Gaussian (which is implied by the typical bounding techniques). From correspondence with one of the authors: “The dashed and solid lines are the lower bound obtained by Gaussianizing the interference, while the circles are the rate achievable by a decoder exploiting the non-Gaussianity of the interference, painfully computed through days-long Monte-Carlo. (This is not exactly the capacity, because the transmit signals here are Gaussian, so one could deviate from Gaussian signaling and possibly do slightly better — but the difference is imperceptible in all the experiments we’ve done.)”

Concerning Massive MIMO and its capacity bounds, I have long met with arguments that these capacity formulas aren’t useful estimates of actual performance. But in fact, they are: in one simulation study we were less than one dB from the capacity bound by using QPSK and a standard LDPC code (albeit with fairly long blocks). This bound accounts for noise and channel estimation errors. Such examples are in Chapter 1 of Fundamentals of Massive MIMO, and also in the ten-myth paper:

(I wrote the simulation code, and can share it, in case anyone would want to reproduce the graphs.)

So in summary: while capacity bounds are sometimes done wrong, **when done right** they give pretty good estimates of actual link performance with modern coding.

(With thanks to Angel Lozano for discussions.)

We are used to measuring performance in terms of the signal-to-interference-and-noise ratio (SINR), but this is seldom the actual performance metric in communication systems. In practice, we might be interested in a function of the SINR, such as the data rate (a.k.a. spectral efficiency), bit-error-rate, or mean-squared error in the data detection. When the receiver has perfect channel state information (CSI), the aforementioned metrics are all functions of the same SINR expression, where the power of the received signal is divided by the power of the interference plus noise. Details can be found in Examples 1.6-1.8 of the book Optimal Resource Allocation in Coordinated Multi-Cell Systems.

In most cases, the receiver only has imperfect CSI and then it is harder to measure the performance. In fact, it took me years to understand this properly. To explain the complications, consider the uplink of a single-cell Massive MIMO system with K single-antenna users and M antennas at the base station. The received M-dimensional signal is

y = Σ_{i=1}^{K} h_i s_i + n,

where s_i is the unit-power information signal from user i, h_i is the fading channel from this user, and n is unit-power additive Gaussian noise. In general, the base station will only have access to an imperfect estimate ĥ_i of h_i, for i = 1, …, K.

Suppose the base station uses ĥ_k to select a receive combining vector v_k for user k. The base station then multiplies it with the received signal y to form a scalar v_k^H y that is supposed to resemble the information signal s_k:

v_k^H y = v_k^H h_k s_k + Σ_{i≠k} v_k^H h_i s_i + v_k^H n.

From this expression, a common mistake is to directly say that the SINR is

SINR_k = |v_k^H h_k|^2 / ( Σ_{i≠k} |v_k^H h_i|^2 + ||v_k||^2 ),

which is obtained by computing the power of each of the terms (averaged over the signal and noise), and then claim that E{log2(1 + SINR_k)} is an achievable rate (where the expectation is with respect to the random channels). You can find this type of argument in many papers, without proof of the information-theoretic achievability of this rate value. Clearly, SINR_k is an SINR, in the sense that the numerator contains the total signal power and the denominator contains the interference power plus the noise power. However, this doesn’t mean that you can plug SINR_k into “Shannon’s capacity formula” and get something sensible. This will only yield a correct result when the receiver has perfect CSI.

A basic (but non-conclusive) test of the correctness of a rate expression is to check that the receiver can compute the expression based on its available information (i.e., estimates of random variables and deterministic quantities). Any expression containing the exact channels h_1, …, h_K fails this basic test, since you need to know the exact channel realizations to compute it, although the receiver only has access to the estimates.

**What is the right approach?**

Remember that the SINR is not important by itself, but we should start from the performance metric of interest and then we might eventually interpret a part of the expression as an *effective SINR*. In Massive MIMO, we are usually interested in the ergodic capacity. Since the exact capacity is unknown, we look for rigorous lower bounds on the capacity. There are several bounding techniques to choose between, whereof I will describe the two most common ones.

The first lower bound on the uplink capacity can be applied when the channels are Gaussian distributed and ĥ_1, …, ĥ_K are the MMSE estimates, with corresponding estimation error covariance matrices C_1, …, C_K. The ergodic capacity of user k is then lower bounded by

R_k = E{ log2( 1 + |v_k^H ĥ_k|^2 / ( Σ_{i≠k} |v_k^H ĥ_i|^2 + v_k^H ( Σ_{i=1}^{K} C_i + I_M ) v_k ) ) }.

Note that this expression can be computed at the receiver using only the available channel estimates (and deterministic quantities). The ratio inside the logarithm can be interpreted as an effective SINR, in the sense that the rate is equivalent to that of a fading channel where the receiver has perfect CSI and an SNR equal to this effective SINR. A key difference from the erroneous SINR above is that only the part of the desired signal that is received along the estimated channel appears in the numerator of the SINR, while the rest of the desired signal appears as an additional interference term in the denominator. This is the price to pay for having imperfect CSI at the receiver, according to this capacity bound, which has been used by Hoydis et al. and Ngo et al., among others.

The second lower bound on the uplink capacity is

R_k = log2( 1 + |E{v_k^H h_k}|^2 / ( Σ_{i=1}^{K} E{|v_k^H h_i|^2} − |E{v_k^H h_k}|^2 + E{||v_k||^2} ) ),

which can be applied for any channel fading distribution. This bound provides a value close to the first bound when there is substantial channel hardening in the system, while it will greatly underestimate the capacity when the effective channel v_k^H h_k varies a lot between channel realizations. The reason is that, to obtain this bound, the receiver detects the signal as if it were received over a non-fading channel with the deterministic gain E{v_k^H h_k} (which is known in theory and easy to measure in practice), but there are no approximations involved, so it is always a valid bound.

Since all the terms in this second bound are deterministic, the receiver can clearly compute it using its available information. The main merit of the bound is that the expectations in the numerator and denominator can sometimes be computed in closed form; for example, when using maximum-ratio or zero-forcing combining with i.i.d. Rayleigh fading channels, or maximum-ratio combining with correlated Rayleigh fading. Two early works that used this bound are by Marzetta and by Jose et al.

The two uplink rate expressions can be proved using capacity bounding techniques that have been floating around in the literature for more than a decade; the main principle for computing capacity bounds for the case when the receiver has imperfect CSI is found in a paper by Medard from 2000. The first concise description of both bounds (including all the necessary conditions for using them) is found in Fundamentals of Massive MIMO. The expressions that are presented above can be found in Section 4 of the new book Massive MIMO Networks. In these two books, you can also find the right ways to compute rigorous lower bounds on the downlink capacity in Massive MIMO.

In conclusion, to avoid mistakes, always start with rigorously computing the performance metric of interest. If you are interested in the ergodic capacity, then you start from one of the canonical capacity bounds in the above-mentioned books and verify that all the required conditions are satisfied. Then you may interpret part of the expression as an SINR.
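To make the difference between the bounding techniques tangible, here is a minimal Monte Carlo sketch (my own illustrative setup, not from the post): a single user with perfect CSI and MR combining over i.i.d. Rayleigh fading, comparing the ergodic rate against the second ("use-and-then-forget" style) bound described above.

```python
import numpy as np

rng = np.random.default_rng(2)
M, trials = 64, 50_000

# Single user, h ~ CN(0, I_M), unit-power noise, MR combining v = h
# (perfect CSI here, to isolate the bounding technique itself).
h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
g = np.sum(np.abs(h) ** 2, axis=1)          # v^H h = ||h||^2, effective gain

rate_perfect_csi = np.mean(np.log2(1 + g))  # ergodic rate, roughly log2(1+M)

# Second bound: log2(1 + |E[v^H h]|^2 / (E[|v^H h|^2] - |E[v^H h]|^2 + E[||v||^2]))
num = np.mean(g) ** 2
den = np.mean(g ** 2) - num + np.mean(g)    # fading variation + noise term
rate_bound = np.log2(1 + num / den)

print(f"ergodic rate (perfect CSI): {rate_perfect_csi:.2f} bit/s/Hz")
print(f"second capacity bound:      {rate_bound:.2f} bit/s/Hz")
```

With M = 64 antennas the channel hardens substantially, so the bound stays within about one bit/s/Hz of the ergodic rate; for small M the gap would be far larger, which is exactly the hardening dependence noted above.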

I never thought it would happen so fast. When I started to work on Massive MIMO in 2009, the general view was that fully digital, phase-coherent operation of so many antennas would be infeasible, and that power consumption of digital and analog circuitry would prohibit implementations for the foreseeable future. More seriously, reservations were voiced that reciprocity-based beamforming would not work, or that operation in mobile conditions would be impossible.

These arguments, it turned out, all proved to be wrong. In 2017, Massive MIMO was the main physical-layer technology under standardization for 5G, and it is unlikely that any serious future cellular wireless communications system would not have Massive MIMO as a main technology component.

But Massive MIMO is more than a groundbreaking technology for wireless communications: it is also an elegant and mathematically rigorous approach to teaching wireless communications. In the moderately-large number-of-antennas regime, our closed-form capacity bounds become convenient proxies for the link performance achievable with practical coding and modulation.

These expressions take into account the effects of all significant physical phenomena: small-scale and large-scale fading, intra- and inter-cell interference, channel estimation errors, pilot reuse (also known as pilot contamination), and power control. A comprehensive analytical understanding of these phenomena simply has not been possible before, as the corresponding information theory has been too complicated for any practical use.

The intended audiences of Fundamentals of Massive MIMO are engineers and students. I anticipate that as graduate courses on the topic become commonplace, our extensive problem set (with solutions) available online will serve as a useful resource to instructors. While other books and monographs will likely appear down the road, focusing on trendier and more recent research, Fundamentals of Massive MIMO distills the theory and facts that will prevail for the foreseeable future. This, I hope, will become its most lasting impact.

To read the preface of Fundamentals of Massive MIMO, click here. You can also purchase the book here.

For the past two years, I’ve been writing a book about Massive MIMO networks, together with my co-authors Jakob Hoydis and Luca Sanguinetti. It has been a lot of hard work, but also a wonderful experience, since we’ve learned a lot in the writing process. We try to connect all the dots and provide answers to many basic questions that were previously unanswered.

The book has now been published:

Emil Björnson, Jakob Hoydis and Luca Sanguinetti (2017), “**Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency**”, Foundations and Trends® in Signal Processing: Vol. 11, No. 3-4, pp. 154–655. DOI: 10.1561/2000000093.

**What is new with this book?**

Marzetta et al. published Fundamentals of Massive MIMO last year. It provides an excellent, accessible introduction to the topic. By considering spatially uncorrelated channels and two particular processing schemes (MR and ZF), the authors derive closed-form capacity bounds, which convey many practical insights and also allow for closed-form power control.

In the new book, we consider spatially correlated channels and demonstrate how such correlation (which always appears in practice) affects Massive MIMO networks. This modeling uncovers new fundamental behaviors that are important for practical system design. We go deep into the signal processing aspects by covering several types of channel estimators and deriving advanced receive combining and transmit precoding schemes.

In later chapters of the book, we cover the basics of energy efficiency, transceiver hardware impairments, and various practical aspects; for example, spatial resource allocation, channel modeling, and antenna array deployment.

The book is self-contained and written for graduate students, PhD students, and senior researchers who would like to learn Massive MIMO, either in depth or at an overview level. All the analytical proofs, and the basic results on which they build, are provided in the appendices.

On the website massivemimobook.com, you will find Matlab code that reproduces all the simulation figures in the book. You can also download exercises and other supplementary material.

**Update: Get a free copy of the book**

From August 2018, you can download a free PDF of the authors’ version of the manuscript. This version is similar to the official printed books, but has a different front-page and is also regularly updated to correct typos that have been identified.

IEEE ComSoc is continuing to deliver webinars on 5G topics and Massive MIMO is a key part of several of them. The format is a 40-minute presentation followed by a 20-minute Q&A session. Hence, if you attend the webinars “live”, you have the opportunity to ask questions to the presenters. Otherwise, you can also watch each webinar afterwards. For example, 5G Massive MIMO: Achieving Spectrum Efficiency, which was given in August by Liesbet Van der Perre (KU Leuven), can still be watched.

In November, the upcoming Massive MIMO webinars are:

Massive MIMO for 5G: How Big Can it Get? by Emil Björnson (Linköping University), Thursday, 9 November 2017, 3:00 PM EST, 12:00 PM PST, 20:00 GMT.

Real-time Prototyping of Massive MIMO: From Theory to Reality by Douglas Kim (NI) and Fredrik Tufvesson (Lund University), Wednesday, 15 November 2017, 12:00 PM EST, 9:00 AM PST, 17:00 GMT.

Yes, my group had its share of rejected papers as well. Here are some that I especially remember:

- Massive MIMO: 10 myths and one critical question. The first version was rejected by the IEEE Signal Processing Magazine. The main comment was that nobody would think that the points that we had phrased as myths were true. But in reality, each one of the myths was based on an actual misconception heard in public discussions! The paper was eventually published in the IEEE Communications Magazine instead in 2016, and has been cited more than 180 times.
- Massive MIMO with 1-bit ADCs. This paper was rejected by the IEEE Transactions on Wireless Communications. By no means a perfect paper… but the review comments were mostly nonsensical. The editor stated: “The concept as such is straightforward and the conceptual novelty of the manuscript is in that sense limited.” The other authors left my group shortly after the paper was written. I did not predict the hype on 1-bit ADCs for MIMO that would ensue (and this happened despite the fact that yes, the concept as such *is* straightforward and its conceptual novelty *is* rather limited!). Hence I didn’t prioritize a rewrite and resubmission. The paper was never published, but we put the rejected manuscript on arXiv in 2014, and it has been cited 80 times.
- Finally, a paper that was almost rejected upon its initial submission: Energy and Spectral Efficiency of Very Large Multiuser MIMO Systems, eventually published in the IEEE Transactions on Communications in 2013. The review comments included obvious nonsense, such as “Overall, there is not much difference in theory compared to what was studied in the area of MIMO for the last ten years.” The paper subsequently won the IEEE ComSoc Stephen O. Rice Prize, and has more than 1300 citations.

There are several lessons to learn here. First, that peer review may be the best system we know, but it isn’t perfect: disturbingly, it is often affected by incompetence and bias. Second, notwithstanding the first, that many paper rejections are probably also grounded in genuine misunderstandings: writing well takes a lot of experience, and a lot of hard, dedicated work. Finally, and perhaps most significantly, that persistence is really an essential component of success.

I got an email with this question last week. There is not one but many possible answers to this question, so I figured I would write a blog post about it.

One answer is that beamforming and precoding are two words for exactly the same thing, namely to use an antenna array to transmit one or multiple spatially directive signals.

Another answer is that beamforming can be divided into two categories: analog and digital beamforming. In the former category, the same signal is fed to each antenna and then analog phase-shifters are used to steer the signal emitted by the array. This is what a phased array would do. In the latter category, different signals are designed for each antenna in the digital domain. This allows for greater flexibility since one can assign different powers and phases to different antennas and also to different parts of the frequency bands (e.g., subcarriers). This makes digital beamforming particularly desirable for spatial multiplexing, where we want to transmit a superposition of signals, each with a separate directivity. It is also beneficial when having a wide bandwidth because with fixed phases the signal will get a different directivity in different parts of the band. The second answer to the question is that precoding is equivalent to digital beamforming. Some people only mean analog beamforming when they say beamforming, while others use the terminology for both categories.

A third answer is that beamforming refers to a single-user transmission with one data stream, such that the transmitted signal consists of one main-lobe and some undesired side-lobes. In contrast, precoding refers to the superposition of multiple beams for spatial multiplexing of several data streams.

A fourth answer is that beamforming refers to the formation of a beam in a particular angular direction, while precoding refers to any type of transmission from an antenna array. This definition essentially limits the use of beamforming to line-of-sight (LoS) communications, because when transmitting to a non-line-of-sight (NLoS) user, the transmitted signal might not have a clear angular directivity. The emitted signal is instead matched to the multipath propagation so that the multipath components that reach the user add constructively.

A fifth answer is that precoding consists of two parts: choosing the directivity (beamforming) and choosing the transmit power (power allocation).

I used to use the word *beamforming* in its widest meaning (i.e., the first answer), as can be seen in my first book on the topic. However, I have since noticed that some people have a more narrow or specific interpretation of beamforming. Therefore, I nowadays prefer only talking about *precoding*. In Massive MIMO, I think that precoding is the right word to use since what I advocate is a fully digital implementation, where the phases and powers can be jointly designed to achieve high capacity through spatial multiplexing of many users, in both NLoS and LoS scenarios.

Video recordings from the 2017 Joint IEEE SPS and EURASIP Summer School on Signal Processing for 5G Wireless Access are available for IEEE members, as we wrote about in a previous post. Now two of the Massive MIMO tutorial talks are openly available on Youtube.

Prof. Erik. G. Larsson gave a 2.5 hour tutorial on the fundamentals of Massive MIMO, which is highly recommended for anyone learning this topic. You can then follow up by reading his book with the same topic.

When you have viewed Erik’s introduction, you can learn more about the state-of-the-art signal processing schemes for Massive MIMO from another talk at the summer school. Dr. Emil Björnson gave a 3 hour tutorial on this topic:

I am borrowing the title from a column written by my advisor two decades ago, in the array signal processing gold rush era.

Asymptotic analysis is a popular tool within statistical signal processing (infinite SNR or number of samples), information theory (infinitely long blocks) and more recently, [massive] MIMO wireless communications (infinitely many antennas).

Some caution is strongly advisable with respect to the latter. In fact, there are compelling reasons to avoid asymptotics in the number of antennas altogether:

- First, elegant, rigorous and intuitively comprehensible capacity bound formulas are available in closed form. The proofs of these expressions use basic random matrix theory, but no asymptotics at all.
- Second, the notion of “asymptotic limit” or “asymptotic behavior” helps propagate the myth that Massive MIMO somehow relies on asymptotics or “infinite” numbers (or even exorbitantly large numbers) of antennas.
- Third, many approximate performance results for Massive MIMO (particularly “deterministic equivalents”) based on asymptotic analysis are complicated, require numerical evaluation, and offer little intuitive insight. (And, the verification of their accuracy is a formidable task.)

Finally, and perhaps most importantly, careless use of asymptotic arguments may yield erroneous conclusions. For example, in the effective SINRs in multi-cell Massive MIMO, the coherent interference scales with M (the number of antennas) – which yields the commonly held misconception that coherent interference is the main impairment caused by pilot contamination. But in fact, in many relevant circumstances it is not (see case studies here): the main impairment for “reasonable” values of M is the reduction in coherent beamforming gain due to reduced estimation quality, which in turn is independent of M.

In addition, the number of antennas beyond which the far-field assumption is violated is actually smaller than what one might first think (problem 3.14).

Here are the video recordings of the lectures and keynotes from the Joint IEEE SPS and EURASIP Summer School on Signal Processing for 5G.

IEEE SPS members can watch the videos for free but it is necessary to log in through the IEEE website.

IEEE ComSoc provides new online material every month and in August the focus is on Massive MIMO.

First, four carefully selected articles are offered free of charge, see the screenshot below and click here for details.

More precisely, IEEE offers free access to the published versions of these articles, while the accepted versions were already openly available: Paper 1, Paper 2, Paper 3, and Paper 4.

Second, a live webinar entitled “5G Massive MIMO: Achieving Spectrum Efficiency” is organized by IEEE ComSoc on August 24. The speaker is Professor Liesbet Van der Perre from KU Leuven. She was the scientific leader of the MAMMOET project, which is famous for demonstrating that Massive MIMO works in practice. You can expect a unique mix of theoretical concepts and practical implementation insights from this webinar.

Reproducibility is fundamental to scientific research. If you develop a new algorithm and use simulations/experiments to claim its superiority over prior algorithms, your claims are only credible if other researchers can reproduce and confirm them.

The first step towards reproducibility is to describe the simulation procedure in such detail that another researcher can repeat the simulation, but a major effort is typically needed to reimplement everything. The second step is to make the simulation code publicly available, so that any scientist can review it and easily reproduce the results. While the first step is mandatory for publishing a scientific study, there is a movement towards open science that would make also the second step a common practice.

I understand that some researchers are skeptical towards sharing their simulation code, in fear of losing their competitive advantage towards other research groups. My personal principle is to not share any code until the research study is finished and the results have been accepted for publication in a full-length journal. After that, I think that the society benefits the most if other researcher can focus on improving my and others’ research, instead of spending excessive amount of time on reimplementing known algorithms. I also believe that the primary competitive advantage in research is the know-how and technical insights, while the simulation code is of secondary importance.

On my GitHub page, I have published Matlab code packages that reproduce the simulation results in one book, one book chapter, and more than 15 peer-reviewed articles. Most of these publications are related to MIMO or Massive MIMO. I see many benefits from doing this:

1) It increases the credibility of my research group’s work;

2) I write better code when I know that other people will read it;

3) Other researchers can dedicate their time to developing new, improved algorithms and comparing them with my baseline implementations;

4) Young scientists may learn how to implement a basic simulation environment by reading the code.

I hope that other Massive MIMO researchers will also make their simulation code publicly available. Maybe you have already done that? In that case, please feel free to leave a comment on this post with a link to your code.

In January this year, the IEEE Signal Processing Magazine contained an article by Erik G. Larsson, Danyo Danev, Mikael Olofsson, and Simon Sörman on “Teaching the Principles of Massive MIMO: Exploring reciprocity-based multiuser MIMO beamforming using acoustic waves”. It describes an exciting approach to teaching the basics of Massive MIMO communication by implementing the system acoustically, using loudspeaker elements instead of antennas. The fifth-year engineering students at Linköping University performed such implementations in 2014, 2015, and 2016, in the form of a conceive-design-implement-operate (CDIO) project.

The article details the teaching principles and the experiences that the teachers and students gained from the 2015 edition of the CDIO project. This was also described in a previous blog post. In the following video, the students describe and demonstrate the end result of the 2016 edition of the project. The acoustic testbed is now truly massive, since 64 loudspeakers were used.
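The core principle behind the testbed, reciprocity-based beamforming in TDD, can be illustrated with a few lines of simulation code. The following Python sketch is not the students’ actual implementation, and all parameter values are illustrative: the base station estimates the channels from uplink pilots and, by TDD reciprocity, reuses the estimates for conjugate (maximum-ratio) downlink precoding.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K = 64, 4          # 64 "loudspeakers" (antennas), 4 users
snr_pilot = 10.0      # uplink pilot SNR in linear scale (illustrative value)

# True channels: i.i.d. Rayleigh fading, one M-dimensional vector per user
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# With orthogonal uplink pilots, the base station obtains a noisy channel estimate
noise = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
H_hat = H + noise / np.sqrt(snr_pilot)

# TDD reciprocity: the downlink channel equals the uplink channel,
# so the uplink estimate is reused for conjugate (maximum-ratio) precoding
W = H_hat.conj() / np.linalg.norm(H_hat, axis=0)  # unit-norm precoder per user

# Effective downlink gains: diagonal = useful signal, off-diagonal = interference
G = np.abs(H.T @ W) ** 2
print("useful signal gains:", np.diag(G))
print("mean interference:  ", (G.sum() - np.trace(G)) / (K * (K - 1)))
```

Running the sketch shows that the useful signal gains are on the order of M = 64 while the inter-user interference stays around one: the array gain that makes a simple conjugate precoder work so well when the number of elements is large.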

If you want to learn about signal processing foundations for Massive MIMO and mmWave communications, you should attend the

**2017 Joint IEEE SPS and EURASIP Summer School on Signal Processing for 5G**

Signal processing is at the core of the emerging fifth generation (5G) cellular communication systems, which will bring revolutionary changes to the physical layer. Unlike other 5G events, the objective of this summer school is to teach the main physical-layer techniques for 5G from a signal-processing perspective. The lectures will provide a background on the 5G wireless communication concepts and their formulation from a signal processing perspective. Emphasis will be placed on showing specifically how cutting-edge signal processing techniques can and will be applied to 5G. Keynote speeches by leading researchers from Ericsson, Huawei, China Mobile, and Volvo complement the technical lectures.

The summer school covers the following specific topics:

- Massive MIMO communication in TDD and FDD
- mmWave communications and compressed sensing
- mmWave positioning
- Wireless access for massive machine-type communications

The school takes place in Gothenburg, Sweden, from May 29th to June 1st, in the week after ICC in Paris.

This event belongs to the successful series of IEEE SPS and EURASIP Seasonal Schools in Signal Processing. The 2017 edition is jointly organized by Chalmers University of Technology, Linköping University, The University of Texas at Austin, Aalborg University and the University of Vigo.

Registration is now open. A limited number of student travel grants will be available.

For more information and detailed program, see: http://www.sp-for-5g.com/

Our Cambridge book, Fundamentals of Massive MIMO, is now shipping from all major retailers.

**Problem set:** We have developed an extensive set of problems to go with the book. This problem set can be downloaded from the Cambridge resource page, www.cambridge.org/Marzetta, or from this direct link.

The difficulty level of the problems varies widely, making the material suitable for instruction at all levels. The problem set is very much a living document and may be extended or improved in the future. Many, though not all, of the problems were tested on my students when I taught the subject last year. We appreciate, as always, comments or suggestions on the material.

A detailed solution manual is available to instructors who adopt the book.

**List of errata:** There is also a list of errata to the book – available via this direct link, or from the Cambridge resource page.

*Have no fear of perfection — you’ll never reach it.* — Salvador Dali

I regularly get the question “are there any Massive MIMO books?”. So far my answer has always been “no”, but now I can finally give a positive answer.

My colleagues Erik G. Larsson and Hien Quoc Ngo have written a book entitled “Fundamentals of Massive MIMO” together with Thomas L. Marzetta and Hong Yang at Bell Labs, Nokia. The book is published this October/November by Cambridge University Press.

I have read the book and I think it serves as an excellent introduction to the topic. The text is suitable for graduate students, practicing engineers, professors, and doctoral students who would like to learn the basic Massive MIMO concepts, results, and properties. It also provides a clean introduction to the theoretical tools that are suitable for analyzing Massive MIMO performance.

I personally intend to use this book as course material for a Master-level course on multiple-antenna communications next year. I recommend that other teachers consider this possibility as well!

A preview of the book can be found on Google Books:

**Update:** Since November 2017, there is another book: “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency“.

Here is the acoustic Massive MIMO testbed student project at Linköping University. It uses TDD operation and reciprocity-based channel estimation, the same principles as in “real” Massive MIMO:
