I started this blog in September 2017 with the intention of finding interesting patterns in the Discogs data. It fairly quickly turned into a blog pointing out what doesn't work in Discogs, which I must admit is a bit negative and not what I originally wanted. I am honestly trying to keep it positive, but this simply isn't always possible. The reason: in the last two years very little has been done to get rid of data entry errors that could be prevented. The only improvement I have seen is the automatic correction of capitalization errors (it is a bit difficult to trigger, but it does a good job). Due to work obligations I couldn't look at the data for a few months and had hoped that things would be different when I returned to processing the data, but this wasn't the case.
I have had conversations with people at Discogs and those conversations were actually very nice, but they don't seem to lead to improvements. Why this is I really don't know. Perhaps these people had no leverage inside Discogs (a few of them have since left Discogs), perhaps the code running Discogs and the underlying data model don't allow for the changes without a major overhaul, which is a decision that should not be taken lightly, or perhaps there is something else going on (the employee reviews on Glassdoor, for example, aren't very positive). I simply don't know what is going on and why my suggestions (some of which are not that difficult to implement) are not followed.
For me this is frustrating: Discogs has a truly great dataset, but it has lots of errors and the website makes adding new errors very easy (as I have been pointing out for the last few years). This requires a constant clean-up of data that, with a bit of extra work, shouldn't need to be cleaned up at all. Cleaning up all this data is an enormous waste of time, and the people doing the clean-up will at some point simply give up, as they will feel that nothing fundamentally changes.
When processing the data it also means that I have to work around all these errors and clean up or rewrite the data to make it (more) usable. This sucks.
In the next few months I will likely have little time for Discogs (except for quickly processing the monthly data dumps). I hope that when I return things will have improved, but I won't be holding my breath.