eggs and unhatched eggs. The positive samples consist of images in which a single egg is completely visible, while the negative samples consist of images in which eggs are only partially visible or multiple eggs appear. Figure 2.5 shows the classifier model trained to separate positive and negative samples; the positive samples are then used to train a model that predicts the egg center location in pixel coordinates. During practical application, the predicted center location is used to crop a single-egg image, which is fed into a classifier that assigns the selected egg to the hatched (HC) or unhatched (UHC) class. Figure 2.6 shows the overall result of locating egg centers using the egg-location CNN model on one of the test data sheets, where every egg center is marked with a blue dot. A sliding window of 32 × 32 pixels with a stride of (4, 4) was used to achieve these results.
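As a minimal sketch of the sliding-window step described above (not the authors' code), the following generates every 32 × 32 crop of an egg sheet with a stride of 4 pixels; the function name and the downstream calls mentioned in the comments are assumptions for illustration:

```python
import numpy as np

def sliding_windows(sheet: np.ndarray, size: int = 32, stride: int = 4):
    """Yield (x, y, crop) for every size x size window over the egg sheet."""
    h, w = sheet.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield x, y, sheet[y:y + size, x:x + size]

# Assumed pipeline: each window is first screened by the positive/negative
# classifier; positive windows go to the center-regression model, and the
# predicted center is used to crop a single egg for HC/UHC classification.
```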
2.3.4 Predicting Egg Class
The sliding-window method is used to generate the input images, so a single egg may be represented by many overlapping image windows, each of size 32 × 32. Predictions belonging to the same egg are merged using the Euclidean distance between two predicted centers $(x_1, y_1)$ and $(x_2, y_2)$,

$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \tag{2.5}$$
(2.6)
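The sketch below illustrates one plausible way to apply Eq. (2.5) to merge duplicate center predictions; the greedy strategy, the midpoint update, and the 8-pixel threshold are assumptions, not the authors' stated procedure:

```python
import math

def merge_centers(centers, threshold=8.0):
    """Greedily merge predicted (x, y) centers that fall within threshold."""
    merged = []
    for cx, cy in centers:
        for i, (mx, my) in enumerate(merged):
            d = math.sqrt((cx - mx) ** 2 + (cy - my) ** 2)  # Eq. (2.5)
            if d < threshold:
                # Assumed rule: replace with the midpoint of the two centers.
                merged[i] = ((mx + cx) / 2.0, (my + cy) / 2.0)
                break
        else:
            merged.append((cx, cy))
    return merged
```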
Table 2.2 Specifications of egg location CNN model.
Input image | Activation/output | Training samples | Test samples | Validation samples | Test loss | Validation loss |
32 × 32 | Regression: center of the egg (x, y) | 439 × 10³ | 51.6 × 10³ | 25.8 × 10³ | 0.5488 | 0.5450 |
Figure 2.5 CNN training model to predict egg location in terms of pixel values.
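A minimal Keras sketch of a center-regression CNN consistent with Table 2.2 is given below. Only the 32 × 32 input and the two-value regression output (x, y) come from the text; the layer counts, filter sizes, optimizer, and grayscale input channel are assumptions:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),        # 32x32 crop; 1 channel assumed
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2)                          # regression: egg center (x, y)
])
model.compile(optimizer="adam", loss="mse")
```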
Figure 2.6 Result of egg location CNN model.
Table 2.3 Specification of egg class predictor CNN model.
Input image | Activation/output | Training samples | Test samples | Validation samples | Test loss | Accuracy on the test set | Accuracy on the validation set |
32 × 32 | Softmax: 2 classes (0/1) | 2.4 × 10⁶ | 80.2 × 10³ | 30 × 10³ | 0.0077 | 99.8115% | 99.7981% |
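A comparable sketch of the two-class egg classifier in Table 2.3 follows. Again, only the 32 × 32 input and the two-class softmax output are from the table; the hidden layers, the loss, and the mapping of class indices to HC/UHC are assumptions:

```python
from tensorflow.keras import layers, models

classifier = models.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax")  # assumed: 0 = UHC, 1 = HC
])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```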
The overall egg classification and counting result yields an accuracy greater than 97%. Figure 2.7 shows the result generated by the proposed method, where green dots represent hatched eggs and red dots represent unhatched eggs. Some areas of the image are zoomed and shown separately in Figure 2.7, since the full input image is too large to fit on the page and the individual eggs are too small to reveal any features.
Figure 2.7 Result of egg classification generated by the proposed method.
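A short OpenCV sketch of how a result image like Figure 2.7 could be rendered is shown below; it is an illustrative assumption, not the authors' rendering code, and the dot radius and label convention are made up for the example:

```python
import cv2

def draw_results(sheet_bgr, centers, labels):
    """Draw a colored dot at each egg center; assumed label 1 = HC, 0 = UHC."""
    for (x, y), label in zip(centers, labels):
        color = (0, 255, 0) if label == 1 else (0, 0, 255)  # green HC, red UHC
        cv2.circle(sheet_bgr, (int(x), int(y)), 3, color, thickness=-1)
    return sheet_bgr
```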
2.4 Dataset Generation
Unlike the conventional digital-image methods of extracting egg-count information, which require hardly any training data, the proposed CNN-based method requires large datasets to learn the features automatically and produce the required results. As the number of hidden layers increases, the CNN method needs correspondingly more training data, along with test and validation datasets.
There are many datasets freely available for download that can be used to train CNN models to classify handwritten digits, identify objects, and perform many other tasks. However, there is no public dataset corresponding to the sericulture field, especially for silkworm egg counting or classification. So, in our work, training datasets were generated by cropping class images from the silkworm egg sheet and providing class