
noise from the dataset, using selected columns to predict the performance and accuracy rates. In hidden neural networks, the weather features may include continuous variations, categorical variables, and other attributes that are not instantly stable [13].

      2.2.3 Improved Bayesian Hidden Markov Frameworks

      A Markov chain is a technique for modeling a sequence of states generated by a random process. In this instance, the system uses it to find state representations in the disaster management dataset. In a Markov chain, the transition probabilities are described by a graph representation, where each edge carries the probability of moving from one state to another. In this state representation, the states are denoted i and j, and a transition moves forward from state i to state j.
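      As an illustration, such a graph can be encoded directly as a transition matrix. The sketch below is a minimal example assuming three hypothetical states for the disaster dataset; the probabilities are placeholders for demonstration, not values fitted from real data:

```python
import numpy as np

# Hypothetical hidden states for the disaster dataset (illustrative only).
states = ["calm", "drought", "earthquake"]

# Transition matrix A: A[i][j] = probability of moving from state i to state j.
# Each row sums to 1; the values are placeholders, not fitted estimates.
A = np.array([
    [0.80, 0.15, 0.05],   # calm       -> calm / drought / earthquake
    [0.30, 0.60, 0.10],   # drought    -> ...
    [0.50, 0.25, 0.25],   # earthquake -> ...
])

def simulate_chain(A, start, n_steps, rng=np.random.default_rng(0)):
    """Walk the Markov chain: the next state depends only on the current one."""
    path = [start]
    for _ in range(n_steps):
        path.append(rng.choice(len(states), p=A[path[-1]]))
    return [states[s] for s in path]

print(simulate_chain(A, start=0, n_steps=5))
```

      Because each row of A sums to 1, sampling from the row of the current state implements exactly one forward step of the random process.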

      In the disaster management dataset, the attributes are temperature, drought, earthquake, and volcano. The temperature attribute records the daily temperature in degrees Celsius, which relates to disaster effects; the earthquake attribute records the causes of each earthquake; and the volcano attribute records the heat of the lava and the temperature of the water vapor.

      Hidden states correspond to the layers that are trained and tested during the modeling process. Hidden layers consist of weights and biases that support training: the weights multiply the inputs, and the biases are added to the weighted input before it is passed to the next hidden layer. These hidden layers are processed in the forward and backward directions, which reduces the error and loss rates and increases the accuracy of the outcomes. In the output layer, an activation function takes the hidden-layer output and converts it into a binary format that the user can interpret.
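      To make the weight-and-bias mechanics concrete, here is a minimal forward-pass sketch in NumPy. The layer sizes, random weights, and decision threshold are illustrative assumptions, not part of the chapter's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative shapes: 3 input features, 4 hidden units, 1 output unit.
# All weights here are random placeholders.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer weights/biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer weights/biases

def forward(x):
    # Weights multiply the input; biases are added to the weighted sum.
    h = np.tanh(x @ W1 + b1)                      # hidden layer activation
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid squashes to (0, 1)
    return (y > 0.5).astype(int)                  # binary output for the user

x = np.array([0.7, 0.1, 0.3])                     # one normalized input record
print(forward(x))                                 # -> array([0]) or array([1])
```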

      Executing all possible state sequences by brute force takes exponential time, so a brute-force solution does not scale to long inputs. Instead, the sequence of observations for each state is processed with the Viterbi algorithm, which stores the maximum probability of a state path for each state and time step. This memoization relies on the Markov property and leads to the model's forward recursion:

$$\delta_t(j) = \max_{i}\left[\delta_{t-1}(i)\, a_{ij}\right] b_j(O_t)$$

      Where,

      $\delta_t(j)$ = maximum probability of a state path ending in state $j$ at time $t$, $O_t$ = observation at time $t$, $i$ = initial state, $j$ = hidden state of the Markov model, $a_{ij}$ = transition probability from $i$ to $j$, and $b_j(O_t)$ = emission probability of $O_t$ in state $j$.
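      A compact implementation of this recursion might look as follows; the two-state parameters at the bottom are placeholder values for demonstration only:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden-state path for an observation sequence.

    obs : observation indices, length T
    pi  : initial state distribution, shape (N,)
    A   : transition probabilities A[i, j] = P(j | i), shape (N, N)
    B   : emission probabilities B[j, o] = P(o | j), shape (N, M)
    """
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))            # best path probability per state/time
    psi = np.zeros((T, N), dtype=int)   # back-pointers

    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A        # all i -> j continuations
        psi[t] = scores.argmax(axis=0)            # best predecessor for each j
        delta[t] = scores.max(axis=0) * B[:, obs[t]]

    # Recover the path by following back-pointers from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(delta[-1].max())

# Illustrative two-state HMM with placeholder probabilities.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1], pi, A, B))
```

      Storing only the best score per state and time step replaces the exponential enumeration with a dynamic program that runs in time proportional to T times N squared.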

      Parameter learning in the hidden Markov model is performed with the Baum-Welch algorithm, which adjusts the parameters until the probability of the observations reaches a local maximum. The following expression describes the learning objective:

$$M^{*} = \arg\max_{M}\, p(O \mid M)$$

      where $p$ = probability, $O$ = observation sequence, and $M$ = model parameters; the re-estimation converges to a local optimum of this likelihood.
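      The sketch below shows one way to implement the Baum-Welch re-estimation in NumPy, under the simplifying assumptions of a discrete-observation HMM and no numerical scaling (so it suits only short sequences); the toy parameters are illustrative:

```python
import numpy as np

def baum_welch(obs, pi, A, B, n_iter=20):
    """Re-estimate HMM parameters (pi, A, B) toward a local maximum of p(O | M).

    Scaling is omitted for brevity, so this sketch suits short sequences only.
    """
    B = B.copy()                       # avoid mutating the caller's array
    N, T = len(pi), len(obs)
    for _ in range(n_iter):
        # Forward pass: alpha[t, i] = p(o_1..o_t, state_t = i)
        alpha = np.zeros((T, N))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

        # Backward pass: beta[t, i] = p(o_{t+1}..o_T | state_t = i)
        beta = np.ones((T, N))
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

        likelihood = alpha[-1].sum()   # p(O | current parameters)

        # E-step: expected state occupancies and transitions.
        gamma = alpha * beta / likelihood                              # (T, N)
        xi = (alpha[:-1, :, None] * A[None] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood  # (T-1, N, N)

        # M-step: update the parameters from the expectations.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for o in range(B.shape[1]):
            B[:, o] = gamma[obs == o].sum(axis=0) / gamma.sum(axis=0)
    return pi, A, B

obs = np.array([0, 1, 1, 0, 1])        # toy observation indices
pi = np.array([0.5, 0.5])
A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = np.array([[0.7, 0.3], [0.4, 0.6]])
print(baum_welch(obs, pi, A, B))
```

      Each iteration cannot decrease the likelihood, which is why the procedure settles on a local rather than a global optimum.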

      These chain properties are applied at each discrete step. Let us consider the sequence of generated variables as follows:

$$\Pr(X_{n+1} = x \mid X_1 = x_1, \ldots, X_n = x_n) = \Pr(X_{n+1} = x \mid X_n = x_n)$$

      where Pr = the probability computation over the input sequence; it states that the move to the next state depends only on the current state, not on the earlier states that preceded it [8].
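      Under this property, the probability of an entire sequence reduces to a product of one-step transition probabilities, as the short sketch below demonstrates with placeholder values:

```python
import numpy as np

# Placeholder initial distribution and transition matrix for two states.
pi = np.array([0.7, 0.3])
A = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def sequence_probability(path, pi, A):
    """Pr(X1..Xn) via the Markov property: each step depends only on the
    previous state, so the chain rule reduces to a product of one-step
    transition probabilities."""
    p = pi[path[0]]
    for prev, curr in zip(path, path[1:]):
        p *= A[prev, curr]
    return p

print(sequence_probability([0, 0, 1, 1], pi, A))  # 0.7 * 0.9 * 0.1 * 0.5
```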



image Natural disasters: drought, earthquake, wildfire, and volcanic activity