Group of Authors

Machine Learning Algorithms and Applications


target="_blank" rel="nofollow" href="#ulink_1d7669b4-8f8c-59cc-bf98-99dae81300e3">Table 1.1 Range of AQI categories.Table 1.2 Precision, recall, and F1-score.Table 1.3 MAE and RMSE scores for different epochs.Table 1.4 MAE scores for LSTM hyper parameters.

      2 Chapter 2
         Table 2.1 Specifications of foreground-background (FB) segmentation CNN model.
         Table 2.2 Specifications of egg location CNN model.
         Table 2.3 Specifications of egg class predictor CNN model.
         Table 2.4 Performance of the CNN model on test datasets.

      3 Chapter 3
         Table 3.1 Statistical information of data collected from Stanford Station.
         Table 3.2 Effect of various parameters.
         Table 3.3 Optimum configuration.
         Table 3.4 Comparison of models.
         Table 3.5 Coverage of points within the boundary of the regression line.

      4 Chapter 4
         Table 4.1 Performance of baseline ResNets.
         Table 4.2 Performance of baseline ResNets without bridge connections.
         Table 4.3 Hyperparameters.
         Table 4.4 Performance of baseline SE-ResNets.
         Table 4.5 Performance of proposed model.
         Table 4.6 Performance improvement from baseline ResNet.
         Table 4.7 Performance improvement from baseline SE-ResNet.

      5 Chapter 5
         Table 5.1 Different CNN-based architectures.
         Table 5.2 Some CAD references driven by deep learning and medical imaging.

      6 Chapter 6
         Table 6.1 Description of experimental datasets.
         Table 6.2 Performance of MLFN, RBFN, DTNN, and ensemble approaches with feature selection on Australian datasets.
         Table 6.3 Performance of MLFN, RBFN, DTNN, and ensemble approaches with feature selection on German-categorical datasets.
         Table 6.4 Performance of MLFN, RBFN, DTNN, and ensemble approaches with feature selection on Japanese datasets.
         Table 6.5 Performance of MLFN, RBFN, DTNN, and ensemble approaches with feature selection on German-numerical datasets.

      7 Chapter 7
         Table 7.1 Comparison based on varying block size.

      8 Chapter 8
         Table 8.1 Classes present in MIT-BIH database with their percentage.
         Table 8.2 XGBoost model performance for heartbeat classification using MIT-BIH arrhythmia dataset with train-test ratio 60:40.
         Table 8.3 XGBoost model performance for heartbeat classification using MIT-BIH arrhythmia dataset with train-test ratio 50:50.
         Table 8.4 XGBoost model performance for heartbeat classification using MIT-BIH arrhythmia dataset with train-test ratio 70:30.
         Table 8.5 XGBoost model performance for heartbeat classification using MIT-BIH arrhythmia dataset with train-test ratio 80:20.
         Table 8.6 XGBoost model performance for heartbeat classification using MIT-BIH arrhythmia dataset with train-test ratio 90:10.
         Table 8.7 Comparison of the overall accuracy achieved by the XGBoost and AdaBoost classifiers using different train-test ratios of the MIT-BIH arrhythmia database.
         Table 8.8 Comparison of classification accuracy of proposed work and other state-of-the-art techniques.

      9 Chapter 9
         Table 9.1 Result for prostate cancer data.
         Table 9.2 Result for DLBCL data.
         Table 9.3 Result for child ALL data.
         Table 9.4 Result for gastric cancer data.
         Table 9.5 Result for lymphoma and leukemia.

      10 Chapter 10
         Table 10.1 Optimal parameters for 2D Gabor.
         Table 10.2 EER (%) values using different channels of the VW images.
         Table 10.3 EER (%) values using feature-level fusion (OR and AND).
         Table 10.4 EER (%) values using score-level fusion.

      11 Chapter 12
         Table 12.1 The percentage of each class of fingerprints...
         Table 12.2 The proposed CNN architecture.
         Table 12.3 Distribution of the images in the training set.
         Table 12.4 Distribution of the images in the testing set.
         Table 12.5 Model performance evaluation.
         Table 12.6 Comparison of the classification accuracies.

      12 Chapter 13
         Table 13.1 Performance (%) of the CNN classification method on FER 2013 datasets.
         Table 13.2 Performance (%) of the different features with SVM classification method on FER 2013 datasets.
         Table 13.3 Fusion of CNN, landmark, and HoG features with SVM classification accuracy results.

      13 Chapter 14
         Table 14.1 Results obtained using pre-trained networks.
         Table 14.2 Results obtained using AnimNet network.

      14 Chapter 15
         Table 15.1 Most strongly and weakly sentiment-associated words in teachers’ feedback.
         Table 15.2 Most strongly and weakly sentiment-associated words in laptops’ feedback.
         Table 15.3 Estimation of overall sentiment score of an item.
         Table 15.4 Summary of results accomplished by different important modules/steps.

      15 Chapter 16
         Table 16.1 Parameter details and their values.
         Table 16.2 Sample candidate phrases extracted from corpus.
         Table 16.3 Sample phrases and their embeddings with similarity scores.
         Table 16.4 Sample words and their embeddings with similarity scores.

      16 Chapter 17
         Table 17.1 Laplace noise mechanism.
         Table 17.2 Gaussian noise mechanism.

      Guide

      1  Cover

      2  Table of Contents

      3  Title Page

      4  Copyright

      5  Acknowledgments

      6  Preface

      7  Begin Reading

      8  Index

      9  End User License Agreement
