All posts by Erik G. Larsson

A case against Massive MIMO?

I had an interesting conversation with a respected colleague who expressed some significant reservations about massive MIMO. Let’s dissect the arguments.


The first argument against massive MIMO was that most traffic is indoors, that deployment of large arrays indoors is impractical, and that outdoor-to-indoor coverage through massive MIMO is undesirable (or technically infeasible). The obvious counterargument here is that the main selling point of massive MIMO is not indoor service provision but outdoor macrocell coverage: the ability of TDD/reciprocity-based beamforming to handle high mobility and to efficiently suppress interference, and thus to provide cell-edge coverage. (The notion of a “cell-edge” user should be broadly interpreted: anyone with a poor nominal signal-to-interference-and-noise ratio, before the MIMO processing kicks in.) But nothing prevents massive MIMO from being installed indoors, if capacity requirements are so high that conventional small-cell or WiFi technology cannot handle the load. Antennas could be integrated into walls, ceilings, window panes, furniture, or even pieces of art. New buildings are often constructed from prefabricated blocks, and antennas could be integrated into these already at production. Nothing prevents the integration of thousands of antennas into natural objects in a large room.

Outdoor-to-indoor coverage doesn’t work? Importantly, current systems already provide outdoor-to-indoor coverage, and there is no reason massive MIMO would not do the same (on the contrary, adding base station antennas is always beneficial for performance!). Yet the ideal deployment scenario of massive MIMO is probably not outdoor-to-indoor, so this seems like a partly valid point. The arguments against outdoor-to-indoor are that modern energy-saving windows have a coating that takes at least 20 dB out of the link budget. In addition, the small angular spread that results when all signals have to pass through windows (except perhaps in wooden buildings) may reduce the rank of the channel so much that little multiplexing to indoor users is possible. This is mostly speculation, and I am not sure whether experiments are available to confirm or refute it.

Let’s move on to the second argument. Here the theme is that as systems use larger and larger bandwidths but cannot increase the radiated power, the maximum transmission distance shrinks (because, for fixed power, the SNR is inversely proportional to the bandwidth). Hence, the cells have to get smaller and smaller, and eventually there will be so few users per cell that the aggressive spatial multiplexing on which massive MIMO relies becomes useless, as there is only a single user to multiplex. This argument may be partly valid, at least given the traffic situation in current networks. But we do not know what future requirements will be. New applications may change the demand for traffic entirely: augmented or virtual reality, large-scale communication with drones and robots, or other use cases that we cannot even envision today.
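To see the mechanics of this argument, here is a back-of-the-envelope sketch (my own illustrative numbers, not from any real deployment): with fixed radiated power, the noise power grows linearly with the bandwidth, so the distance at which a required SNR can still be met shrinks as the bandwidth grows.

```python
# Sketch: how cell radius shrinks as bandwidth grows, for fixed radiated
# power and a required SNR. Assumes a simple d**(-alpha) path-loss model;
# all parameter values are illustrative, not measured.

def max_range(bandwidth_hz, tx_power_w=1.0, alpha=3.5,
              snr_req=10.0, noise_psd=4e-21, ref_gain=1e-4):
    """Largest distance d (m) at which
    tx_power * ref_gain * d**(-alpha) / (noise_psd * bandwidth) >= snr_req."""
    noise = noise_psd * bandwidth_hz
    return (tx_power_w * ref_gain / (snr_req * noise)) ** (1.0 / alpha)

r20 = max_range(20e6)    # 20 MHz carrier
r400 = max_range(400e6)  # 400 MHz carrier: 20x the bandwidth
# The range shrinks by 20**(1/alpha), roughly a factor 2.4 here, so the
# cell area (and hence the expected user count per cell) drops sharply.
```

The key takeaway is that the range penalty is 20^(1/α), so the exact shrinkage depends on the path-loss exponent assumed.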

It is also not obvious that, with massive MIMO, the larger bandwidths are really required. Spatial multiplexing to 20 terminals improves the bandwidth efficiency 20 times compared to conventional technology, so instead of 20 times more bandwidth, one may use 20 times more antennas. Significant multiplexing gains are not only proven theoretically but have been demonstrated in commercial field trials. It is sometimes argued that traffic is so bursty that these multiplexing gains cannot materialize in practice, but this is partly a misconception and partly a consequence of defective higher-layer designs (most importantly TCP/IP) and vested interests in these flawed designs. For example, for the services that constitute most of the raw bits, especially video streaming, there is no good reason to use TCP/IP at all. Hopefully, once the enormous potential of massive MIMO physical-layer technology becomes more widely known and understood, market forces will push a redesign of higher-layer and application protocols so that they can maximally benefit from the massive MIMO physical layer. Does this entail a complete redesign of the Internet? Probably not, but buffers have to be installed and parts of the link layer revamped to make maximal use of the “wires in the air”, ideally suited for aggressive multiplexing of circuit-switched data, that massive MIMO offers.
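The antennas-for-bandwidth trade can be made concrete with a toy calculation (idealized: perfect spatial separation of the users and equal per-link SNR; the numbers are mine, purely for illustration):

```python
import math

# Sketch: serve K = 20 terminals by spatial multiplexing in one 20 MHz
# band, versus one terminal in a 20x wider band, at the same per-link SNR.

def sum_rate(num_users, bandwidth_hz, snr_linear):
    # Ideal aggregate rate, assuming the users are perfectly separated
    # spatially so each gets the full band at the given SNR.
    return num_users * bandwidth_hz * math.log2(1 + snr_linear)

multiplexed = sum_rate(20, 20e6, 100)    # 20 users x 20 MHz (20 dB SNR)
wideband = sum_rate(1, 20 * 20e6, 100)   # 1 user x 400 MHz (20 dB SNR)
# Both deliver the same aggregate rate: antennas substitute for bandwidth.
# In practice the wideband option also pays the ~13 dB noise-bandwidth
# penalty from the previous argument, which this comparison ignores.
```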

Quantifying the Benefits of 64T64R Massive MIMO

I came across this study, which seems interesting: data from the Sprint LTE TDD network, comparing the performance of 64T64R and 8T8R antenna systems side by side.

From the results:

“We observed up to a 3.4x increase in downlink sector throughput and up to an 8.9x increase in the uplink sector throughput versus 8T8R (obviously the gain is substantially higher relative to 2T2R). Results varied based on the test conditions that we identified. Link budget tests revealed close to a triple-digit improvement in uplink data speeds.  Preliminary results for the downlink also showed strong gains. Future improvements in 64T64R are forthcoming based on likely vendor product roadmaps.”

Molecular MIMO at IEEE CTW-2019

One more reason to attend the IEEE CTW 2019: Participate in the Molecular MIMO competition! There is a USD 500 award to the winning team.

The task is to design a molecular MIMO communication detection method using datasets that contain real measurements. Possible solutions may include classic approaches (e.g., thresholding-based detection) as well as deep learning-based approaches.

More detail: here.

MIMO Positioning Competition at IEEE CTW 2019

Come to the IEEE Communication Theory Workshop (CTW) 2019 and participate in the MIMO positioning competition!

The object of the competition is to design and train an algorithm that can determine the position of a user, based on estimated channel frequency responses between the user and an antenna array. Possible solutions may build on classic algorithms (fingerprinting, interpolation) or machine-learning approaches. Channel vectors from a dataset created with a MIMO channel sounder will be used.
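As a sketch of what the classic fingerprinting baseline might look like (the dataset shapes, variable names, and numbers below are hypothetical, not those of the actual competition data):

```python
import numpy as np

# Sketch of nearest-neighbor fingerprinting: match a query channel vector
# against a database of channel vectors with known positions, and average
# the positions of the closest matches. All shapes/values are illustrative.

def knn_position(h_query, h_train, pos_train, k=5):
    """Estimate a position by averaging the positions of the k training
    channel vectors closest (in Euclidean distance) to the query."""
    d = np.linalg.norm(h_train - h_query, axis=1)  # distance to each fingerprint
    idx = np.argsort(d)[:k]                        # indices of k nearest
    return pos_train[idx].mean(axis=0)             # average their positions

# Toy usage: 100 fingerprints of 64 channel coefficients each.
rng = np.random.default_rng(0)
h_train = rng.standard_normal((100, 64))
pos_train = rng.uniform(0, 50, size=(100, 2))      # positions in meters
est = knn_position(h_train[7], h_train, pos_train, k=1)
# With k=1, a query taken from the database recovers its own position.
```

Interpolation-based and learned approaches would replace the simple averaging step with something smarter, but the database-matching structure is the same.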

Competing teams should present a poster at the conference, describing their algorithms and experiments.

A USD 500 prize will be awarded to the winning team.

More detail in this flyer.

Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

Come listen to Liesbet Van der Perre, Professor at KU Leuven (Belgium), on Monday, February 18, at 2.00 pm EST.

She will give a webinar on state-of-the-art circuit implementations of Massive MIMO and outline future research challenges. The webinar is based on, among others, this paper.

In more detail, the webinar will summarize the fundamental technical contributions to efficient digital signal processing for Massive MIMO. It will clarify the opportunities and constraints of operating with low-complexity RF and analog hardware chains, and explain how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes will be discussed, and open challenges and directions for future research suggested.

Listen to the webinar by following this link.

Could chip-scale atomic clocks revolutionize wireless access?

This chip-scale atomic clock (CSAC), developed by Microsemi, delivers atomic-clock timing accuracy (see the specs available in the link) in a volume comparable to a matchbox, with 120 mW power consumption. That is way too much for a handheld gadget, but undoubtedly negligible for any fixed installation powered from the grid. It offers an alternative to synchronization through GNSS that works anywhere, including indoors in GNSS-denied environments.

I haven’t seen a list price, and I don’t know how much exotic metal or what licensing costs its manufacture requires, but let’s ponder the possibility that a CSAC could be manufactured for the mass market for a few dollars each. What new applications would then become viable in wireless?

The answer is mostly (or entirely) speculation. One potential application that might become more practical is positioning using distributed arrays.  Another is distributed multipair relaying. Here and here are some specific ideas that are communication-theoretically beautiful, and probably powerful, but that seem to be perceived as unrealistic because of synchronization requirements. Perhaps CoMP and distributed MIMO, a.k.a. “cell-free Massive MIMO”, applications could also benefit.

Other applications might be found, for example, in IoT, where a device transmits information only sporadically but wants to stay synchronized (perhaps there is no downlink, hence no way of reliably obtaining synchronization information). If a timing offset (or frequency offset, for that matter) is unknown but constant over a very long time, it can be treated as a deterministic unknown and estimated. The difficulty with unknown time and frequency offsets is not their existence per se, but the fact that they change quickly over time.
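To sketch that last point: if a frequency offset is constant, even a short pilot observation pins it down, and the estimate then remains valid for as long as the offset does not drift. A minimal example (my own toy setup, noise-free, illustrative numbers):

```python
import numpy as np

# Sketch: estimate a constant frequency offset from a received unmodulated
# pilot tone, via the average phase increment between successive samples.

def estimate_freq_offset(samples, sample_period):
    """Estimate a constant frequency offset (Hz) from complex baseband
    samples of an unmodulated pilot tone."""
    inc = samples[1:] * np.conj(samples[:-1])  # per-sample phase steps
    phase = np.angle(inc.sum())                # averaged phase step
    return phase / (2 * np.pi * sample_period)

fs = 1e6                                       # 1 Msample/s
t = np.arange(1000) / fs
true_offset = 1234.0                           # Hz, the unknown constant
rx = np.exp(2j * np.pi * true_offset * t)      # noise-free received pilot
est = estimate_freq_offset(rx, 1 / fs)
# est recovers ~1234 Hz; with a stable clock (e.g., a CSAC) this single
# estimate can serve the device across long silent periods.
```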

It’s often said (and true) that the “low” speed of light is the main limiting factor in wireless. (This is because channel state information is the main limiting factor of wireless communications: if light were faster, the wavelength at a given carrier frequency would be longer, so channel coherence would last longer and acquiring channel state information would be easier.) But maybe the unavailability of a ubiquitous, reliable time reference is another, almost as important, limiting factor. Can CSAC technology change that? I don’t know, but perhaps we ought to take a closer look.
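To put a rough number on the coherence argument (using the common quarter-wavelength rule of thumb; the carrier and speed values are illustrative):

```python
# Sketch: channel coherence time scales as wavelength / speed, and the
# wavelength is c / f -- so a larger "speed of light" would directly
# lengthen the coherence time. Numbers below are illustrative.

C = 3e8  # speed of light, m/s

def coherence_time(carrier_hz, speed_mps):
    """Rule-of-thumb coherence time: time to travel a quarter wavelength."""
    wavelength = C / carrier_hz
    return wavelength / (4 * speed_mps)

tc = coherence_time(2e9, 30.0)  # 2 GHz carrier, ~highway speed (30 m/s)
# About 1.25 ms: at 2 GHz a vehicle moves a quarter wavelength in roughly
# a millisecond, which bounds how often CSI must be re-acquired.
```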