Savo G. Glisic

Artificial Intelligence and Quantum Computing for Advanced Wireless Networks



proactive caching; big data learning for AI‐controlled resource allocation; GNN for prediction of resource requirements; and multi‐armed bandit estimators for Markov channels.

      In particular, we consider AI‐based algorithms for traffic classification, traffic routing, congestion control, resource management, fault management, Quality of Service (QoS) and Quality of Experience (QoE) management, network security, ML for caching in small cell networks, Q‐learning‐based joint channel and power level selection in heterogeneous cellular networks, the stochastic non‐cooperative game, multi‐agent Q‐learning, Q‐learning for channel and power level selection, ML for self‐organizing cellular networks, learning in self‐configuration, reinforcement learning (RL) for SON coordination, the SON function model, RL‐based caching, the system model, optimality conditions, big data analytics in wireless networks, the evolution of analytics, data‐driven network optimization, GNNs, network virtualization, GNN‐based dynamic resource management, deep reinforcement learning (DRL) for multi‐operator network slicing, game equilibria by DRL, deep Q‐learning for latency‐limited network virtualization, DRL for dynamic VNF migration, the multi‐armed bandit estimator (MBE), and network representation learning.
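
      To give a concrete flavor of the Q‐learning‐based joint channel and power level selection listed above, the following is a minimal sketch in Python. The environment model (the per‐channel interference levels, the SINR proxy, the power penalty, and all constants) is a toy assumption chosen for illustration, not the system model analyzed in the chapter.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small cell picks a (channel, power) pair each slot.
NUM_CHANNELS, NUM_POWER_LEVELS = 4, 3
NUM_ACTIONS = NUM_CHANNELS * NUM_POWER_LEVELS
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Hidden per-channel interference the learner must discover (assumed model).
CHANNEL_INTERFERENCE = rng.uniform(0.2, 1.0, NUM_CHANNELS)

# Single-state formulation for brevity; Q maps each action to expected return.
Q = np.zeros(NUM_ACTIONS)

def reward(action):
    """Toy reward: throughput-like gain minus a power cost (assumed model)."""
    channel, power = divmod(action, NUM_POWER_LEVELS)
    interference = CHANNEL_INTERFERENCE[channel] + 0.1 * rng.standard_normal()
    sinr = (power + 1) / max(0.05, interference)
    return np.log2(1.0 + sinr) - 0.4 * (power + 1)

for t in range(5000):
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        a = int(rng.integers(NUM_ACTIONS))
    else:
        a = int(np.argmax(Q))
    r = reward(a)
    # Stateless Q-update (a bandit-style special case of Q-learning)
    Q[a] += ALPHA * (r + GAMMA * Q.max() - Q[a])

best = int(np.argmax(Q))
print("learned (channel, power):", divmod(best, NUM_POWER_LEVELS))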

      Chapter 9 (Quantum Channel Information Theory): Quantum information processing exploits the quantum nature of information. It offers fundamentally new solutions in computer science and extends the possibilities of communication to a level unattainable in classical systems. For quantum communication channels, many new capacity definitions have been developed in analogy with their classical counterparts. A quantum channel can be used to transmit classical information or to deliver quantum information, such as quantum entanglement. In this chapter, we review the properties of the quantum communication channel, the various capacity measures, and the fundamental differences between classical and quantum channels [37–43]. Specifically, we will discuss the privacy and performance gains of quantum channels, the quantum channel map, the formal model, quantum channel capacity, the classical capacities of a quantum channel, the quantum capacity of a quantum channel, quantum channel maps and capacities, and practical implementations of quantum channels.
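
      As a small numerical illustration of these capacity notions, the sketch below computes the Holevo quantity χ = S(ρ) − Σᵢ pᵢ S(ρᵢ), which upper‐bounds the accessible classical information per channel use for a given ensemble. The ensemble (equiprobable basis states) and the depolarizing channel parameter are assumptions chosen for illustration.

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def depolarizing(rho, p):
    """Single-qubit depolarizing channel: N(rho) = (1 - p) rho + p I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

# Ensemble: equiprobable computational-basis states sent through the channel.
p = 0.3
inputs = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
probs = [0.5, 0.5]
outputs = [depolarizing(rho, p) for rho in inputs]
avg = sum(pi * rho for pi, rho in zip(probs, outputs))

# Holevo quantity: chi = S(avg) - sum_i p_i S(rho_i).
chi = von_neumann_entropy(avg) - sum(
    pi * von_neumann_entropy(rho) for pi, rho in zip(probs, outputs))
print(f"Holevo chi = {chi:.4f} bits")  # ~0.3902 for p = 0.3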

      Chapter 10 (Quantum Error Correction): The challenge in creating quantum error correction codes lies in finding commuting sets of stabilizers that enable errors to be detected without disturbing the encoded information. Finding such sets is nontrivial, and special code constructions are required to obtain stabilizers with the desired properties. We will start this chapter by discussing how a code can be constructed by concatenating two smaller codes. Other constructions include methods for repurposing classical codes to obtain commuting stabilizer checks [44–47]. Here, we will outline a construction known as the surface code [48, 49]. The realization of a surface‐code logical qubit is a key goal for many quantum computing hardware efforts [50–54]. Surface codes belong to a broader family of so‐called topological codes [55]. Within this framework, in this chapter we will discuss stabilizer codes, surface codes, the rotated lattice, fault‐tolerant gates, fault tolerance, the theoretical framework, classical error correction, and the theory of quantum error correction, in addition to some auxiliary material on binary fields, discrete vector spaces, and noise physics.
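
      As a toy illustration of the stabilizer idea, the sketch below classically simulates the two Z‐type parity checks (Z₁Z₂ and Z₂Z₃) of the three‐qubit bit‐flip repetition code: the measured syndrome locates a single bit‐flip error without revealing the encoded logical bit. This is a minimal sketch of the principle only, not the surface‐code construction discussed in the chapter.

import numpy as np

# Parity-check matrix of the classical [3,1] repetition code; each row is a
# Z-type stabilizer (Z1Z2 and Z2Z3) of the quantum bit-flip code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

# Syndrome -> most likely single bit-flip location (None = no error).
SYNDROME_TABLE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def encode(bit):
    """Logical 0 -> 000, logical 1 -> 111."""
    return np.array([bit] * 3)

def syndrome(word):
    """Parities of the stabilizer checks; does not reveal the logical bit."""
    return tuple(H @ word % 2)

def correct(word):
    """Flip the qubit indicated by the syndrome lookup table, if any."""
    flip = SYNDROME_TABLE[syndrome(word)]
    if flip is not None:
        word = word.copy()
        word[flip] ^= 1
    return word

codeword = encode(1)
corrupted = codeword.copy()
corrupted[0] ^= 1                      # single bit-flip error on qubit 0
print(syndrome(corrupted), correct(corrupted))  # (1, 0) [1 1 1]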

      Chapter 12 (Quantum Machine Learning): In this chapter, we provide a brief description of quantum machine learning (QML) and its correlation with AI. We will see how the quantum counterpart of ML can be much faster and more efficient than classical ML. The core of ML is training a machine to learn from data through the algorithms implemented to handle it; the field sits at the intersection of computer science, AI, and computational statistics. Classical ML, through its deep learning subsets (supervised and unsupervised), helps to classify images, recognize patterns and speech, handle big data, and much more. Thus, classical ML has received a great deal of attention and investment from industry. Nowadays, owing to the huge quantities of data we deal with every day, new approaches are needed to automatically manage, organize, and classify these data. Classical ML, a flexible and adaptable procedure, can recognize patterns efficiently, but some of these problems cannot be solved efficiently by its algorithms. Companies engaged in big database management are aware of these limitations and are very interested in new approaches to overcome them; they have found one such approach in quantum ML, and the interest in implementing these techniques on quantum computers is what paves the way for QML. QML [56–59] aims to implement ML algorithms in quantum systems, using quantum properties such as superposition and entanglement to solve these problems efficiently. This gives QML an edge over classical ML in terms of speed and data handling. In QML, we develop quantum algorithms that perform the tasks of classical algorithms on a quantum computer. Thus, data can be classified, sorted, and analyzed using quantum algorithms for supervised and unsupervised learning. These methods are in turn implemented through models of a quantum neural network or a support vector machine. This is the point where we merge the algorithms discussed in Parts I and II of this book. In particular, we will discuss QML algorithms, quantum neural network preliminaries, quantum classifiers with ML: near‐term solutions, the circuit‐centric quantum classifier, training, gradients of parameterized quantum gates, classification with quantum neural networks, representation, learning, the quantum decision tree classifier, and the model of the classifier, in addition to some auxiliary material on the matrix exponential.
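
      As a minimal example of the gradient machinery behind such classifiers, the sketch below evaluates the expectation ⟨Z⟩ after a single RY(θ) rotation of |0⟩ and differentiates it with the parameter‐shift rule, checking the result against the analytic derivative. The one‐gate circuit is an assumption for illustration, far simpler than the circuit‐centric classifier treated in the chapter.

import numpy as np

def ry(theta):
    """Single-qubit rotation RY(theta) = exp(-i theta Y / 2), real-valued."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

Z = np.diag([1.0, -1.0])
KET0 = np.array([1.0, 0.0])

def expectation(theta):
    """f(theta) = <0| RY(theta)^dag Z RY(theta) |0> = cos(theta)."""
    psi = ry(theta) @ KET0
    return float(psi @ Z @ psi)

def parameter_shift_grad(theta):
    """Exact gradient via the parameter-shift rule:
    df/dtheta = (f(theta + pi/2) - f(theta - pi/2)) / 2."""
    return 0.5 * (expectation(theta + np.pi / 2)
                  - expectation(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta), -np.sin(theta))  # both ~ -0.6442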