Computer Vision project to detect different varieties of fruits and vegetables, including Apple, Banana, Eggplant, Hazelnut, Kiwi, Lychee, Mango, Onion, Orange, Pear, and Potato, using a Siamese network, a powerful one-shot-learning technique that is increasingly popular in Deep Learning research and applications.
The dataset was obtained and carefully filtered from the Kaggle Fruits 360 dataset, which consists of 82,213 RGB images of 120 fruit and vegetable varieties. I used only 1,627 of those images for this project.
I converted the training images to luminance-based (grayscale) images rather than RGB, so that the network could not cheat by relying on the colour of the fruits or vegetables when predicting the dissimilarity score of two input images. Each class in the training set contains only 10 images.
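The RGB-to-luminance conversion can be done with Pillow, whose "L" mode applies the standard ITU-R 601-2 luma transform. A minimal sketch (the function name is illustrative, not from the project code):

```python
from PIL import Image


def to_luminance(img: Image.Image) -> Image.Image:
    # Pillow's "L" mode uses the ITU-R 601-2 luma transform:
    #   L = 0.299*R + 0.587*G + 0.114*B
    # so the result carries brightness information but no colour.
    return img.convert("L")
```

Feeding the network single-channel images like this forces it to learn shape and texture cues instead of colour.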
Test images cover eight classes of fruit and vegetable varieties. None of these classes was included when I trained the network:
- Strawberry
- Tomato Cherry Red
- Pepper Yellow
- Cauliflower
- Grapefruit Pink
- Guava
- Clementine
- Physalis with Husk
To research and implement the technique, I relied on two popular open-source Deep Learning frameworks: PyTorch and ReNom.
The hyper-parameters I used to train the network were:
- BATCH_SIZE = 64
- N_EPOCHS = 20
- LR = 0.0005
- Optimizer = Adam
- Loss Function = Contrastive Loss
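As a sketch of the loss function listed above, here is a common PyTorch implementation of contrastive loss; the margin of 2.0 is a typical default, not necessarily the value from my training run, and the label convention (0 = similar pair, 1 = dissimilar pair) is one of two used in practice:

```python
import torch
import torch.nn.functional as F


class ContrastiveLoss(torch.nn.Module):
    """Contrastive loss over a pair of embeddings.

    Similar pairs (label 0) are pulled together; dissimilar pairs
    (label 1) are pushed apart until their distance exceeds `margin`.
    """

    def __init__(self, margin: float = 2.0):
        super().__init__()
        self.margin = margin

    def forward(self, out1, out2, label):
        # Euclidean distance between the two embedding vectors
        dist = F.pairwise_distance(out1, out2, keepdim=True)
        loss = (1 - label) * dist.pow(2) \
            + label * torch.clamp(self.margin - dist, min=0.0).pow(2)
        return loss.mean()
```

During training, the network outputs one embedding per branch and this loss is computed on the embedding pair together with the pair label.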
The plot shows that the model learned the problem quickly, reaching a low loss within just a few training epochs.
As the following results show, the network successfully differentiated all of the test images. When a pair of similar images belonging to the same class was fed into the network, it predicted the lowest dissimilarity score for that pair.
Note that these varieties of fruits and vegetables were never seen by the network during the training process!
From the research I conducted, I found three crucial points to keep in mind when building a Siamese network:
- The dataset used to train the network must be balanced, with as many positive as negative pairs, since we want the network to learn a good similarity function; otherwise the learning process will not reach the desired result.
- The layers in the two subnetworks must share the same weights. This allows the network to learn symmetrically while capturing meaningful features from a pair of input images.
- The squared distance between the feature vectors must be computed. To train the network, we can experiment with three popular loss functions proposed in the research literature: binary cross-entropy, contrastive loss, and triplet loss. In my case, contrastive loss led the learning process to a good result.
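To illustrate the first point, here is a minimal sketch of balanced pair sampling. The `images_by_class` mapping and the helper name are hypothetical, and labels follow the contrastive-loss convention (0 = similar, 1 = dissimilar):

```python
import random


def make_balanced_pairs(images_by_class, n_pairs, seed=0):
    """Sample n_pairs positive (same-class) and n_pairs negative
    (different-class) image pairs from a {class: [images]} mapping."""
    rng = random.Random(seed)
    classes = list(images_by_class)
    pairs = []
    for _ in range(n_pairs):
        # positive pair: two distinct images from one class
        c = rng.choice(classes)
        a, b = rng.sample(images_by_class[c], 2)
        pairs.append((a, b, 0))
        # negative pair: one image from each of two different classes
        c1, c2 = rng.sample(classes, 2)
        pairs.append((rng.choice(images_by_class[c1]),
                      rng.choice(images_by_class[c2]), 1))
    rng.shuffle(pairs)
    return pairs
```

Drawing one positive and one negative pair per iteration keeps the two label populations exactly balanced.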
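For the second point, weight sharing falls out naturally in PyTorch when both inputs are routed through a single subnetwork. The small CNN below is a hypothetical stand-in for the project's actual architecture, sized for single-channel (luminance) inputs:

```python
import torch
import torch.nn as nn


class SiameseNetwork(nn.Module):
    """Both inputs pass through ONE subnetwork, so the two branches
    share weights by construction rather than by synchronisation."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1 channel: luminance
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(8 * 4 * 4, 16),  # 16-dim embedding
        )

    def forward_once(self, x):
        return self.features(x)

    def forward(self, x1, x2):
        # Reusing self.features for both inputs guarantees identical weights.
        return self.forward_once(x1), self.forward_once(x2)
```

The pair of embeddings returned here is exactly what the contrastive loss consumes.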
This project requires Python 3.6 and the following Python libraries installed: