@Erik G. Larsson: Thanks, I looked it up in your book 🙂

Let’s compute the signal energy and the noise energy over one second, and take their ratio to get the SNR.

In FDD: The signal energy is P*1, the noise energy is (B/2)*N_0*1. The SNR is 2*P/(B*N_0).

In TDD: The signal energy is P*0.5, the noise energy is B*N_0*0.5. The SNR is P/(B*N_0).

There is a difference by a factor 2, which is 3 dB. While the noise energy is the same in both cases, the signal energy is different since the power amplifier is only active half of the time.
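The bookkeeping above can be sketched numerically. This is a minimal illustration with assumed example values (P = 1 W, B = 1 MHz, N_0 = 1e-9 W/Hz; the 3 dB gap is independent of these choices):

```python
import math

# Assumed example values (not from the original post):
# P = transmit power [W], B = total bandwidth [Hz], N0 = noise PSD [W/Hz]
P, B, N0 = 1.0, 1e6, 1e-9

# FDD: transmit continuously (1 s) over half the bandwidth, B/2.
fdd_signal = P * 1.0
fdd_noise = (B / 2) * N0 * 1.0
snr_fdd = fdd_signal / fdd_noise

# TDD: transmit over the full bandwidth B, but only half the time (0.5 s).
tdd_signal = P * 0.5
tdd_noise = B * N0 * 0.5  # same noise energy as FDD
snr_tdd = tdd_signal / tdd_noise

diff_db = 10 * math.log10(snr_fdd / snr_tdd)
print(f"FDD SNR: {snr_fdd:.0f}, TDD SNR: {snr_tdd:.0f}, gap: {diff_db:.1f} dB")
# The gap comes out to 10*log10(2) ≈ 3.0 dB, entirely from the
# signal energy: the PA is only active half of the time in TDD.
```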

I have one confusion about the “Link budget”. I don’t understand why there is a 3 dB difference. Basically, if we compare FDD and TDD transmission, FDD has a smaller bandwidth while TDD uses twice the bandwidth. However, it is the other way around in time resources. So it seems to me that, with a fixed peak power, both should have the same SNR. In the simplest case, I assume the PA in both FDD and TDD is active at P_max. Then the power received by the user is the same for TDD and FDD, equal to (P_max*BW)/2. Can you please help me see where I’m making the mistake?

Is the AWGN at the receiver causing the 3 dB SNR difference, due to TDD having a larger bandwidth than FDD?
