
      It must be emphasized that cognition is not limited to wireless networks; the idea also applies to the management of network infrastructure and the various network elements [3]. To stimulate the transition to cognitive networks, their performance gains must outweigh the additional complexity they introduce. The question is how to measure the cost of a cognitive network. Such a cost would primarily depend on the communication required to apply cognition, the architectural complexity, the maintenance cost, and the operational complexity. For example, in wired networks, user behavior is clear and easily predictable, so employing cognition in this type of network may not be worthwhile. In contrast, wireless networks often include heterogeneous elements and exhibit characteristics that cannot be easily predicted, making them the best candidates for adopting the cognition concept.

      Cognitive networks should use different measures, tools, and patterns as inputs to their decision-making processes. They then produce results in the form of procedures or commands that can be implemented in modifiable network elements. It is important to note that a cognitive network must adapt to changes in the environment in which it operates and anticipate problems before they occur. Its architecture must be flexible, scalable, and supportive of future improvements and extensions.

      Several research studies have discussed the architecture and functionalities of cognitive networks. There is a need to rethink network intelligence: rather than depending solely on resource management, it should also address the needs of network users and transfer intelligence to the elements of the network themselves.

      The central mechanism of the cognitive network is the cognitive process. This process performs the actual learning and decides on the appropriate responses and actions based on observations of the network. The operation of the cognitive process mainly depends on whether its implementation is centralized or distributed, as well as on the amount of network state information available to it.
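      To make the cognitive process more concrete, the following Python sketch (not taken from the chapter, and purely hypothetical) illustrates a centralized observe-learn-decide-act loop; the link names, the smoothing rule, and the congestion threshold are all invented for illustration.

import random

class CognitiveProcess:
    """Hypothetical sketch of a cognitive loop: observe -> learn -> decide -> act."""

    def __init__(self):
        # Learned knowledge about the network, e.g., smoothed load per link.
        self.knowledge = {}

    def observe(self, network_state):
        # Learn from observations: exponentially smooth the measured load of each link.
        for link, load in network_state.items():
            previous = self.knowledge.get(link, load)
            self.knowledge[link] = 0.9 * previous + 0.1 * load

    def decide(self):
        # Anticipate problems: flag links whose learned load exceeds a threshold.
        return [link for link, load in self.knowledge.items() if load > 0.8]

    def act(self, congested_links):
        # Produce commands that modifiable network elements could execute.
        return ["reroute traffic away from " + link for link in congested_links]

process = CognitiveProcess()
for _ in range(50):  # simulated observation cycles
    state = {"link-A": 0.7 + 0.3 * random.random(), "link-B": 0.5 * random.random()}
    process.observe(state)
print(process.act(process.decide()))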

      The basic concept of ML is that training data is used as input to a learning algorithm. The learning algorithm then produces a new set of rules, based on inferences from the data, which results in a new algorithm; this new algorithm is formally referred to as the ML model. Traditional algorithms are comprised of a set of pre-programmed instructions used by the processor in the operation and management of a system, whereas the instructions of ML algorithms are formed from real-life data acquired from the system environment. Thus, when a machine is fed a large amount of data, it analyzes and classifies the data and then uses the gained experience to improve its own algorithm and process data in a better way in the future. The strength of ML algorithms lies in their ability to infer new instructions or policies from data. The more data that is available to the learning algorithm during the training phase, the more efficiently and accurately the resulting ML model can carry out its tasks.
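      As a minimal sketch of this idea (not from the chapter), the following Python snippet contrasts a pre-programmed instruction with a rule whose parameter is inferred from labeled training data; the data values and the threshold rule are hypothetical.

import numpy as np

# Hypothetical labeled training data: one feature per sample, binary labels.
X_train = np.array([1.0, 1.5, 2.0, 2.5, 6.0, 6.5, 7.0, 7.5])
y_train = np.array([0,   0,   0,   0,   1,   1,   1,   1])

# Traditional algorithm: a fixed, pre-programmed instruction.
def preprogrammed_rule(x):
    return int(x > 5.0)  # threshold chosen by the developer, independent of data

# ML approach: infer the instruction (here, a threshold) from the data.
def learn_threshold(X, y):
    # Midpoint between the largest class-0 sample and the smallest class-1 sample.
    return (X[y == 0].max() + X[y == 1].min()) / 2.0

threshold = learn_threshold(X_train, y_train)  # the learned "ML model"

def learned_rule(x):
    return int(x > threshold)

print(preprogrammed_rule(4.0), learned_rule(4.0))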

      2.4.1 ML Types

      Depending on the type of task, there are two types of ML models:

       Regression Learning: Also called a prediction model, it is used when the output is a numerical value that cannot be enumerated; the algorithm is asked to predict continuous results. Error metrics are used to measure the quality of the model, for example the Mean Absolute Error, the Mean Squared Error, and the Root Mean Squared Error.

       Classification Learning: The algorithm is asked to assign samples to classes. There are two subtypes: binary classification models and multiclass classification models. Accuracy is used to measure the quality of the model.

      The main difference between classification and regression algorithms is the type of output variable. Methods with quantitative outputs are called regressions, or continuous-variable predictions, while methods with qualitative outputs are called classifications, or discrete-variable predictions.
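      As an illustration of these quality measures (not from the chapter), the short Python sketch below computes the error metrics mentioned for regression and the accuracy used for classification; all data values are invented.

import numpy as np

# Hypothetical regression outputs (continuous values).
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))   # Mean Absolute Error
mse = np.mean((y_true - y_pred) ** 2)    # Mean Squared Error
rmse = np.sqrt(mse)                      # Root Mean Squared Error
print("MAE=%.3f  MSE=%.3f  RMSE=%.3f" % (mae, mse, rmse))

# Hypothetical classification outputs (discrete class labels).
labels_true = np.array([0, 1, 1, 0, 1])
labels_pred = np.array([0, 1, 0, 0, 1])

accuracy = np.mean(labels_true == labels_pred)  # fraction of correct labels
print("accuracy=%.2f" % accuracy)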

      2.4.2 Components of ML Algorithms

      A formal definition of an ML algorithm is “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E” [5].

       Tasks: A task defines a way to process an object or data. An example task is classification, which is a process of assigning a class label to an input object or data point. Regression is another task example, which involves assigning a real value to an object or data point.

       Performance Measure: Defines the criteria by which an ML algorithm is evaluated. For classification algorithms, accuracy refers to the percentage of objects or data points that are assigned the correct class label. Normally, the data is divided into two sets: the first is used for training, while the second is used for testing.

       The Experience: It refers to the knowledge that an ML algorithm gains while learning. The kind of experience divides ML algorithms into the types explained in the next subsection.

      2.4.3 How do Machines Learn?

      Intelligent machines learn from the data available in their environment. The process of applying ML consists of two phases: the training phase and the decision-making phase. In the training phase, ML techniques are used to learn the system model from a training dataset. In the decision-making phase, the machine is able to estimate the output for each new input data point using the trained model.
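      The two phases can be sketched with the scikit-learn library as follows (not from the chapter); the toy dataset and the choice of logistic regression are arbitrary and serve only to show the training and decision-making steps.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical toy dataset: 200 labeled samples with 4 features each.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Split the data into a training set and a testing set, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Training phase: learn the system model from the training dataset.
model = LogisticRegression()
model.fit(X_train, y_train)

# Decision-making phase: estimate the output for new input data points.
y_pred = model.predict(X_test)
print("test accuracy:", accuracy_score(y_test, y_pred))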

      According to the training method, ML techniques can be classified into four general types. Many advanced ML techniques are based on those general types. Figure 2.2 illustrates these types.

       2.4.3.1 Supervised Learning

      This learning method requires a supervisor that tells the system the expected output for each input; the machine then learns from this knowledge. Specifically, the learning algorithm is given labeled data together with the corresponding outputs, and the machine learns a function that maps a given input to the appropriate output. For example, if during the training phase we provide the ML system with different pictures of cars, along with information indicating that these are pictures of cars, it will be able to build a model that can distinguish pictures of cars from any other pictures. The quality of a supervised model depends on the difference between the predicted output and the exact output. The convergence speed of supervised learning is high, although it requires a large amount of labeled data [6]. Next, we discuss the well-known supervised learning algorithms.


      Figure 2.2 Machine learning types.

      Figure 2.3 Illustration of SVM.

      Support Vector Machine

      The Support Vector Machine (SVM) algorithm is a linear supervised binary classifier. It separates data points using a hyperplane. The best hyperplane is the one that results in the maximum separation between the two given classes; it is called the maximum-margin hyperplane. SVM is considered a stable algorithm for binary classification. For multiclass classification problems, the classification task must be reduced to multiple binary classification problems. The basic principle of SVM is illustrated in Figure 2.3.
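      A minimal SVM sketch (not from the chapter) using scikit-learn on hypothetical two-dimensional data is shown below; the linear kernel corresponds to the maximum-margin hyperplane described above, and for multiclass problems the library applies a binary decomposition internally.

import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D training data for two classes.
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.0, 1.0],   # class 0
              [6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# Linear SVM: finds the maximum-margin separating hyperplane.
clf = SVC(kernel="linear")
clf.fit(X, y)

# The support vectors are the training points closest to the hyperplane.
print("support vectors:\n", clf.support_vectors_)

# Classify a new, unseen data point.
print("predicted class:", clf.predict([[3.0, 4.0]]))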