
UEC Int'l Mini-Conference No.54







Table 2: Comparison of CNN Model Performances

    CNN Models      Accuracy (%)
    VGG16               93.17
    Xception            95.58
    MobileNetV2         97.99
    ResNet50            98.59
    DenseNet201         99.00

3.3   Hyperparameter setting

To train our DCNN models, this study uses custom data. 80% of the dataset was used for training. To track model performance and avoid overfitting, a further 10% of the data was allocated for validation at the beginning of each epoch, and a completely separate 10% was held out for testing to provide an objective assessment of the model's performance. In addition, new real-world data is used at test time to check that the model generalizes to samples it has not seen before. For accurate gradient estimates, a batch size of 32 was used throughout the training phase, even though it requires more memory. The "softmax" function is used in the output layer to map the final layer's outputs to class probabilities. To ensure the models always give the same results, the random seed is fixed at 64. To ensure consistent training and evaluation for an accurate performance comparison, the model's parameters are updated during training based on the input data and the optimization process. Using ImageDataGenerator's "horizontal flip" feature makes more training data available to the model, helping it identify features independently of horizontal orientation and improving its handling of test data with comparable variations. For multiclass classification, the "categorical" class mode was employed, which allowed the models to learn relationships between classes and predict among multiple categories. Images were randomly zoomed in and out with a zoom range of 0.2 (20%) and randomly rotated, with 20% shifting along the X and Y axes. Finally, we applied the "Adam" optimizer function, an improvement on the stochastic gradient descent optimization technique: it adjusts the learning rate for each parameter, corrects estimator bias in the gradient's first and second moments, and improves model stability and performance during training.

Figure 4: The process of classification approach

3.4   Leaf Features Extraction Analysis

Feature extraction plays an important role in DL research for detecting features in digital images, such as edges, shapes, or motion. Medicinal plant leaves are not all the same age: the dataset contains mixed-age leaves, namely pre-mature, semi-mature, and mature. Selecting these three conditions for each species of medicinal plant allows for an in-depth study, because the leaves appear different at various stages of growth. The medicinal plants selected for this research are neem, moringa, Malabar nut, holy basil, arjun, and green chiretta. Reusing the features that a neural network was previously trained on is quick and efficient when performed via feature extraction. This research's main objective is to develop an automated system, built on transfer learning models, for accurately identifying medicinal plants. Additionally, transfer learning serves as a feature extraction and
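As an illustration, the 80/10/10 split with a fixed seed described in Section 3.3 can be sketched as follows. This is a minimal NumPy sketch; the function and variable names are our own, and the paper itself performs the split via Keras utilities.

```python
import numpy as np

def split_indices(n_samples, seed=64):
    """Shuffle sample indices reproducibly and split them 80/10/10."""
    rng = np.random.default_rng(seed)   # fixed seed -> identical split on every run
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)      # 80% for training
    n_val = int(0.1 * n_samples)        # 10% for validation
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]        # remaining 10% held out for testing
    return train, val, test
```

Because the seed is fixed, rerunning the split yields exactly the same partition, which is what makes the performance comparison between models consistent.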
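The role of the output-layer softmax, mapping the final layer's raw outputs to class probabilities, can be shown with a small numerically stable NumPy sketch (our own illustration, not the paper's code):

```python
import numpy as np

def softmax(logits):
    """Map final-layer outputs (logits) to a probability distribution over classes."""
    z = logits - np.max(logits, axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)             # probabilities sum to 1 per sample
```

The class with the largest logit receives the largest probability, and each row of the output sums to one, so the prediction can be read off with an argmax.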
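The augmentation settings described in Section 3.3 might be configured through Keras' ImageDataGenerator roughly as below. This is a hedged sketch: the horizontal flip, the 0.2 zoom range, the 20% shifts, and the 10% validation split come from the text, while the rotation angle in degrees is our assumption, since the paper does not state it.

```python
# Sketch of the Section 3.3 augmentation setup (assumed configuration, not the authors' code).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,    # extra training data; orientation-independent features
    zoom_range=0.2,          # random zoom in/out by up to 20%
    rotation_range=20,       # random rotation in degrees (our assumption)
    width_shift_range=0.2,   # 20% shift along the X axis
    height_shift_range=0.2,  # 20% shift along the Y axis
    validation_split=0.1,    # hold out 10% for validation
)
```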
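The Adam behaviour described above, per-parameter learning rates with bias-corrected first and second moment estimates, corresponds to the following update rule, shown here as a minimal NumPy sketch. The hyperparameter defaults are the commonly used ones, not values stated in the paper.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a parameter, starting from step t = 1."""
    m = beta1 * m + (1 - beta1) * grad        # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the first moment
    v_hat = v / (1 - beta2 ** t)              # bias correction for the second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter scaled step
    return param, m, v
```

The bias correction matters early in training, when the moment estimates are still close to their zero initialization; dividing by (1 - beta**t) rescales them to unbiased estimates, which stabilizes the first updates.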