Figure 5.4 Confusion matrix of the CNN with the synthetic data set.
Compared with the CNN model proposed in [19], the proposed models provide better classification accuracy because the network is trained with two labels, SNR and modulation type, and therefore also attempts to estimate the SNR. The accuracy plot shows an enhancement of around 5%–10% in prediction accuracy, even at low SNR.
5.3.2 Case Study 2: CSI Feedback for FDD Massive MIMO Systems
A massive MIMO base station (BS) requires downlink CSI to achieve the desired gains. Currently deployed systems predominantly operate in FDD mode, and many frequency bands are allocated explicitly for FDD use [49]. In FDD mode, the user equipment (UE) estimates the CSI from the pilots sent by the BS and feeds the estimate back to the BS. Even if a satisfactory channel estimate is obtained, the large-scale CSI matrix can exhaust the frequency resources of the feedback channel. Hence, CSI feedback is a significant problem to be addressed, particularly in FDD massive MIMO.
Existing techniques such as compressed sensing, which exploit channel sparsity, suffer from slow reconstruction and rely on random projections that cannot fully exploit the structure of the channel [20]. Data-driven DL approaches, in contrast, use massive amounts of data, which in communication applications can be generated in real time across an enormous number of users. The data sets generated by users and BSs in various environments can help 5G networks to learn [50].
This case study uses NNs to compress the channel information at the UE and recover it at the BS. Compressing the CSI matrix for feedback is similar to an unsupervised clustering problem [51]. The approach uses an autoencoder structure, as shown in Figure 5.5, where the encoder at the UE converts the CSI matrix into a K-dimensional vector, a reduced representation of the original matrix. This K-dimensional vector is transmitted as feedback to the BS, where the decoder converts the received vector back into its original form.
Figure 5.5 Autoencoder model for CSI feedback.
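As a minimal sketch of this encoder–decoder split, the Keras code below compresses the CSI matrix into a K-dimensional codeword at the UE and expands it back at the BS. The 2 × 32 × 32 real-valued input shape (real and imaginary parts as channels), the value of K, and the plain dense layers are illustrative assumptions only; they are not the InceptNet layers of Section 5.3.2.1.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed CSI input: real and imaginary parts of the channel matrix
# stacked as two 32 x 32 "images" (shape not specified in the chapter).
H, W, C = 32, 32, 2
N = H * W * C          # original dimension of the CSI matrix
K = 512                # codeword length; CR = K / N = 512 / 2048 = 1/4

def build_autoencoder(K):
    # Encoder (runs at the UE): CSI matrix -> K-dimensional codeword
    csi_in = layers.Input(shape=(H, W, C))
    x = layers.Flatten()(csi_in)
    codeword = layers.Dense(K, name="codeword")(x)

    # Decoder (runs at the BS): codeword -> reconstructed CSI matrix
    x = layers.Dense(N)(codeword)
    csi_out = layers.Reshape((H, W, C))(x)
    return Model(csi_in, csi_out, name="csi_autoencoder")

autoencoder = build_autoencoder(K)
autoencoder.summary()
```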
5.3.2.1 Proposed Network Model
This case study proposes a novel network named inception network (InceptNet), which uses inception blocks, as shown in Figure 5.6, similar to those used in GoogLeNet [23]. An inception block consists of multiple parallel convolution branches with different filter sizes. An extra 1 × 1 convolution before the 3 × 3 and 5 × 5 convolutions helps with dimensionality reduction. The different filter sizes help in extracting both the generic features spread across the image and the local features. The encoder and decoder blocks of the proposed network are shown in Figure 5.7, which follow the autoencoder model presented in Figure 5.5.
Figure 5.6 Inception block.
Figure 5.7 Encoder and decoder blocks of InceptNet.
The input to the encoder is the channel matrix, and the output is a vector of reduced dimension K. The encoder consists of a single inception block. This output vector is fed back to the BS, where the decoder network, built from two parallel inception blocks, recovers the channel matrix. The compression from the N-dimensional channel matrix to the K-dimensional vector is expressed as the compression ratio (CR), where CR = K/N.
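The Keras sketch below illustrates this structure under stated assumptions: the filter counts, activations, output layer, and the 2 × 32 × 32 input shape are not given in the chapter and are chosen here only for illustration, and `inception_block` and `build_inceptnet` are hypothetical helper names.

```python
from tensorflow.keras import layers, Model

def inception_block(x, f1=16, f3=16, f5=16):
    # Parallel branches with different filter sizes; counts are illustrative.
    # 1x1 branch
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    # 1x1 reduction followed by a 3x3 convolution
    b3 = layers.Conv2D(f3, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    # 1x1 reduction followed by a 5x5 convolution
    b5 = layers.Conv2D(f5, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    return layers.Concatenate()([b1, b3, b5])

def build_inceptnet(H=32, W=32, C=2, K=512):
    # Encoder at the UE: a single inception block, then compression to K
    csi_in = layers.Input(shape=(H, W, C))
    x = inception_block(csi_in)
    x = layers.Flatten()(x)
    codeword = layers.Dense(K, name="codeword")(x)

    # Decoder at the BS: expand the codeword, pass it through two
    # parallel inception blocks, and merge them into the reconstruction
    y = layers.Dense(H * W * C)(codeword)
    y = layers.Reshape((H, W, C))(y)
    y1 = inception_block(y)
    y2 = inception_block(y)
    y = layers.Concatenate()([y1, y2])
    csi_out = layers.Conv2D(C, 3, padding="same", activation="sigmoid")(y)
    return Model(csi_in, csi_out, name="InceptNet")
```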
5.3.2.2 Results and Discussion
The training and testing of the models are set up in Keras, built on top of TensorFlow, using Google Colaboratory. The COST 2100 channel model is used to generate the data set; the data set provided by Wen et al. [20] is used for the simulations here. The training set consists of 100,000 samples, and the validation and test sets contain 30,000 and 20,000 samples, respectively. All test samples are independent of the training and validation samples. The network is trained for 100 epochs with a batch size of 200 and a learning rate of 0.001. The Adam optimizer is used to update the parameters, and the mean squared error (MSE) is used as the loss function.
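A minimal sketch of this training configuration is given below, assuming the InceptNet sketch of the previous subsection and hypothetical arrays `H_train` and `H_val` holding the training and validation channel matrices; since the autoencoder reproduces its own input, the channel matrices serve as both input and target.

```python
from tensorflow.keras.optimizers import Adam

model = build_inceptnet(K=512)   # CR = 512/2048 = 1/4 for the assumed input size

# Hyperparameters stated in the text: Adam, learning rate 0.001,
# MSE loss, 100 epochs, batch size 200.
model.compile(optimizer=Adam(learning_rate=1e-3), loss="mse")
history = model.fit(H_train, H_train,
                    validation_data=(H_val, H_val),
                    epochs=100, batch_size=200)
```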
The NMSE quantitatively measures the difference between the original channel matrix $\mathbf{H}$ and the reconstructed channel matrix $\hat{\mathbf{H}}$:

$$\mathrm{NMSE} = \mathbb{E}\left\{\frac{\|\mathbf{H} - \hat{\mathbf{H}}\|_2^2}{\|\mathbf{H}\|_2^2}\right\} \quad (5.1)$$
The recovered CSI also serves as a beamforming vector at the BS, so the cosine similarity is used as a second measure of reconstruction quality. Let $\hat{\mathbf{h}}_n$ denote the reconstructed channel vector of the $n$th sub-carrier and $\mathbf{h}_n$ the original one; the cosine similarity is then

$$\rho = \mathbb{E}\left\{\frac{1}{N_c}\sum_{n=1}^{N_c}\frac{|\hat{\mathbf{h}}_n^H \mathbf{h}_n|}{\|\hat{\mathbf{h}}_n\|_2\,\|\mathbf{h}_n\|_2}\right\} \quad (5.2)$$

where $N_c$ is the number of sub-carriers.
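As an illustration of how these two metrics could be computed, the NumPy sketch below assumes real-valued reconstructed matrices of shape (samples, height, width, channels) for the NMSE and complex per-subcarrier channel vectors of shape (samples, Nc, Nt) for the cosine similarity; the function names and array layouts are assumptions, not part of the original work.

```python
import numpy as np

def nmse_db(H, H_hat):
    """NMSE of Eq. (5.1), averaged over test samples and reported in dB."""
    err = np.sum(np.abs(H - H_hat) ** 2, axis=(1, 2, 3))
    ref = np.sum(np.abs(H) ** 2, axis=(1, 2, 3))
    return 10.0 * np.log10(np.mean(err / ref))

def cosine_similarity(h, h_hat):
    """Average cosine similarity rho of Eq. (5.2).

    h, h_hat: complex arrays of shape (samples, Nc, Nt) holding the
    original and reconstructed per-subcarrier channel vectors.
    """
    num = np.abs(np.sum(np.conj(h_hat) * h, axis=-1))
    den = np.linalg.norm(h_hat, axis=-1) * np.linalg.norm(h, axis=-1)
    return np.mean(num / den)
```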
Figure 5.8 shows pseudo-gray plots of the channel matrices: the original image and the channel images recovered by CsiNet and InceptNet for CR = 1/4.
Figure 5.8 Pseudo-gray plots of (a) the original image, (b) the image recovered by CsiNet for CR = 1/4, and (c) the image recovered by InceptNet for CR = 1/4.
Table 5.2 shows a comparative analysis between the existing CsiNet and the proposed InceptNet. For CR = 1/4 (mild compression), the NMSE is −18.68 dB for InceptNet and −14.43 dB for CsiNet, so InceptNet recovers the channel matrix better than CsiNet. Even for the heavier compression of CR = 1/32, the NMSE is −7.987 dB for InceptNet and −5.32 dB for CsiNet. InceptNet outperforms CsiNet in both NMSE and cosine similarity for all compression ratios, and it achieves this performance after training for only 100 epochs. The parallel inception blocks with different filter sizes help in better extraction of both high-level and subtle features. The training time and quantitative results could be further improved with more recently developed network architectures.
Table 5.2 Performance comparison between CsiNet and InceptNet for 100 epochs of training.
CR | CsiNet NMSE (dB) | InceptNet NMSE (dB)
1/4 | −14.43 | −18.68
1/32 | −5.32 | −7.987