In the past few weeks I have been experimenting with OpenCV and other techniques to find out whether pictures contain record labels, and wrote about that (part 1, part 2, part 3, part 4, part 5 and part 6). As the proof of the pudding is in the eating (and I do like pudding), I decided to run a test to see whether my script could tell if an image has a label on it: the result is either "has label" or "don't know if there is a label". Up until now I had only tested with about 30 images.
I wanted to keep my test small because of time constraints (I still need to add multiprocessing to my labeling script), but big enough to draw interesting conclusions, so I chose one particular Spanish label called Belter. Not because I like the music they released, but because it is a label I encountered many times and it has a rather simple, clean label design, which made it a good candidate. It could also mean that my results need to be taken with a grain of salt.
The first step was to get as many pictures as possible. I took the data dump covering data up until (and including) April 30, 2019 and extracted all the 7" releases (not 12" or LPs) on this particular label that have images.
There are 3315 7" releases with at least 1 image. The breakdown of the number of images per release is as follows:
- 1 image: 1299 releases
- 4 images: 1188 releases
- 2 images: 619 releases
- 3 images: 147 releases
- 5 images: 36 releases
- 6 images: 13 releases
- 8 images: 5 releases
- 10 images: 2 releases
- 7 images: 2 releases
- 11 images: 1 release
- 15 images: 1 release
- 14 images: 1 release
- 9 images: 1 release
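As a sanity check, the breakdown above can be totalled to recover the number of releases and the expected number of images. A minimal Python sketch using the counts from the list:

```python
# Breakdown taken from the list above: {images_per_release: number_of_releases}
breakdown = {
    1: 1299, 4: 1188, 2: 619, 3: 147, 5: 36, 6: 13,
    8: 5, 10: 2, 7: 2, 11: 1, 15: 1, 14: 1, 9: 1,
}

# Total releases should match the 3315 mentioned earlier.
total_releases = sum(breakdown.values())

# Total images expected from the data dump.
total_images = sum(images * releases for images, releases in breakdown.items())

print(total_releases)  # 3315
print(total_images)    # 8111
```

The 8111 images from the dump, plus the 5 that appeared later, indeed add up to the 8116 images classified further down (2517 + 5599).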
Next step: downloading all the images via the Discogs API. To my surprise there were 5 more images than expected, which were likely added in the last few days. More is always better!
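The download step boils down to asking the API for each release and collecting the image URLs it reports. The Discogs `/releases/{id}` endpoint returns JSON with an `images` list, where each entry carries a `uri` field. A minimal sketch of the extraction; `image_urls` is my own helper name, the sample response fragment is made up to show the shape, and authentication plus rate limiting (which the real API enforces) are left out:

```python
def image_urls(release_json):
    """Extract all image URLs from a Discogs release API response.

    The response JSON carries an "images" list; each entry has a "uri"
    field pointing at the actual image file.
    """
    return [img["uri"] for img in release_json.get("images", [])]

# Illustrative (made-up) response fragment, just to show the shape:
sample = {
    "id": 123456,
    "images": [
        {"type": "primary", "uri": "https://i.discogs.com/example-front.jpg"},
        {"type": "secondary", "uri": "https://i.discogs.com/example-label.jpg"},
    ],
}

print(image_urls(sample))
```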
I then ran my label script on every image to see if it thought a label was present or not. The results:
- 2517 images thought to be labels
- 5599 images thought to be not labels
Images tagged as labels
Of the 2517 images that were tagged as labels, 47 turned out to be false positives: 1.87%. This is not bad at all. Looking at the false positives (and processing them again) I could find a few patterns:
- pictures that were not properly cropped; for example, an underlying wooden table can be seen in some pictures. During processing this is treated as some sort of outer ring, which interferes with the method
- during edge detection one extremely tiny circle (with a 0 or 1 pixel inner radius) is found somewhere in the image. Eliminating these removed 11 false positives (0.44 percentage points), bringing the false positive rate down to 1.43%
- there are circles, or something resembling circles, on the releases themselves: a flexi disc, a single with an ellipse in the center, one of the images in a rather elaborate packaging, a release with a circle, and more. The most interesting one was a cardboard release, where my script thought the picture of the back of the release was a label.
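The tiny-circle fix mentioned above amounts to discarding any detected circle whose radius falls below a threshold before deciding whether a label is present. A minimal pure-Python sketch, with circles represented as `(x, y, radius)` tuples like those `cv2.HoughCircles` returns after reshaping; the helper name and the 2-pixel cutoff are my own choices for illustration:

```python
def drop_tiny_circles(circles, min_radius=2):
    """Discard detected circles with a near-zero radius (0 or 1 pixel).

    Such circles turned out to be spurious edge-detection artifacts that
    caused false positives, so they are filtered out before classifying.

    circles: iterable of (x, y, radius) tuples.
    """
    return [c for c in circles if c[2] >= min_radius]

# Two plausible label rings plus one 1-pixel noise circle:
detected = [(120, 118, 95), (121, 119, 40), (17, 203, 1)]
print(drop_tiny_circles(detected))  # the 1-pixel circle is gone
```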
I am sure that better quality images, especially better cropping, would eliminate at least another 50% of the false positives.
Images tagged as "not a label"
On to the false negatives: of the 5599 images tagged as "not a label", 341 actually turned out to contain a label: 6.09%. This is quite a bit more than the false positives, but not as disastrous (as the choice was between "contains a label" and "don't know").

I did not count pictures showing a partial label, or the sleeve and label together, as false negatives. Counting those, the number of false negatives would be higher.
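Both error rates quoted above can be reproduced directly from the raw counts:

```python
# Counts from the test run described above.
false_pos, tagged_label = 47, 2517        # wrongly tagged as "label"
false_neg, tagged_not_label = 341, 5599   # wrongly tagged as "not a label"

fp_rate = 100 * false_pos / tagged_label
fn_rate = 100 * false_neg / tagged_not_label

print(f"{fp_rate:.2f}%")  # 1.87%
print(f"{fn_rate:.2f}%")  # 6.09%
```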
The biggest issues that I found:
- camera flash and other sources of light: these really mess with the intensity of some of the pixels, which confuses the edge detection algorithm
- white stickers covering part of the label, including the edge, which also confuses the edge detection algorithm
- angle of the picture: some pictures were taken at strange angles, and my method works best with a top-down view
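Camera flash mainly hurts because it squeezes pixel intensities into a narrow, bright band. One common mitigation (not something the script does, just a standard preprocessing idea) is to stretch the intensity range back out before running edge detection; OpenCV offers this via `cv2.normalize`, but a pure-Python min-max stretch shows the idea:

```python
def stretch_contrast(pixels, out_min=0, out_max=255):
    """Linearly rescale grayscale values to the full output range, so a
    washed-out image (e.g. from camera flash) regains contrast before
    edge detection runs."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:  # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A washed-out strip of pixels, all crammed into 180..220:
print(stretch_contrast([180, 200, 210, 220]))  # [0, 128, 191, 255]
```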