At the beginning of every month (usually the 3rd or 4th day of the month) Discogs releases a data dump of various parts of the database:
- individual releases
- master releases
- artists
- labels
I have not fully looked into all the data and in this post (and the next few) I will only look at the XML of the individual releases.
Individual releases in the Discogs data
Every month the data for all individual releases of Discogs is made available as a gzip compressed XML dump. This file is quite big: the dump for September 2017 is 4.9 GiB gzip compressed and 32 GiB uncompressed, and contains information about 8,878,391 releases.

The top level element of the file is called releases. All the individual releases are children of this top level element. Each release is contained in an element release. This element has a few important attributes:
- id : the identifier in the database
- status : the status of the release in the database. Currently a release can be accepted (8,873,625 releases in the September 2017 dump), rejected (1,442), deleted (1,005) or draft (2,319).
Apart from these attributes, each release element contains a number of child elements:
- images - metadata about the images of the release uploaded to Discogs (not included in the data dumps, only available via the API)
- artists - artists on the release
- title - the title of the release
- labels - music labels involved in the release
- extraartists - artist data of any guest artists
- formats - formats of the release (usually one, but there can be multi-format releases)
- genres - genres the release fits in
- styles - musical styles
- country - country (or meta-country like EEC) of the release
- released - date (year, or full date, or empty) for the release
- notes - free text field for data that didn't fit anywhere else
- master_id - the identifier of the master release (data of which can be found in another XML file)
- data_quality - quality of the release data as voted for by Discogs users with voting rights
- tracklist - list of tracks on the release
- identifiers - matrix/runouts, barcodes, label codes, and so on
- videos - links to (external) video sites with video clips of songs on the release
- companies - companies involved in the release (pressing, mastering, and so on)
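To make this structure concrete, a heavily trimmed release element could look something like the fragment below. The values are made up for illustration and a real entry contains many more children, but the element and attribute names follow the descriptions above:

```xml
<release id="1" status="Accepted">
  <artists><artist><name>Some Artist</name></artist></artists>
  <title>Some Title</title>
  <country>Netherlands</country>
  <released>1989</released>
  <notes>Barcode is printed on the back sleeve.</notes>
  <identifiers>
    <identifier type="Barcode" value="7 12345 67890 1"/>
  </identifiers>
</release>
```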
Which XML parser to use?
Because of its size you don't want to process the XML with a DOM parser; for a file of this size a SAX parser is a much better option. Since the XML is generated from the Discogs database it always has the same format and the (main) elements always appear in the same order, which allows for a few shortcuts. However, if you want to cross reference data from various fields you will have to process it in another way, for example by first splitting the XML into smaller pieces (say, 10,000 releases per chunk, using a tool such as xml_split) and then processing each chunk separately: parsing the data with a SAX parser, building an internal data representation, and subsequently using that data, possibly after first storing it in a database.
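As a minimal sketch of the SAX approach, the handler below streams through the releases XML and counts releases per status attribute, without ever holding the whole document in memory. The element and attribute names match the dump format described above; the inline sample stands in for the multi-GiB file, and for the real dump you would pass a file object opened with gzip instead:

```python
import io
import xml.sax
from collections import Counter


class ReleaseHandler(xml.sax.ContentHandler):
    """Count releases per status while streaming through the XML."""

    def __init__(self):
        super().__init__()
        self.status_counts = Counter()

    def startElement(self, name, attrs):
        # Only the release elements themselves are of interest here;
        # their children (title, identifiers, ...) are simply skipped.
        if name == 'release':
            self.status_counts[attrs.get('status', 'unknown')] += 1


def count_statuses(source):
    """Parse an XML source (file-like object or filename) and return a Counter."""
    handler = ReleaseHandler()
    xml.sax.parse(source, handler)
    return handler.status_counts


if __name__ == '__main__':
    sample = io.BytesIO(b"""<?xml version="1.0"?>
<releases>
  <release id="1" status="Accepted"><title>A</title></release>
  <release id="2" status="Draft"><title>B</title></release>
  <release id="3" status="Accepted"><title>C</title></release>
</releases>""")
    print(dict(count_statuses(sample)))  # {'Accepted': 2, 'Draft': 1}
```

For the actual dump you would call something like `count_statuses(gzip.open('discogs_releases.xml.gz', 'rb'))`; the handler itself stays the same.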
Discogs error detection script
I have been working on a script to clean up the data, which I have made available under the GPL 3 license. The script can be found in a repository on GitHub. You will need Python 3 to run it. I have only tested it on Linux, but it should work on other platforms as well.

The script takes the XML, processes it and reports any smells. Currently it only looks at a few of the available fields (namely identifiers, notes and country) and performs some very simplistic checks to see if the data in the identifiers fields makes sense, if the wrong value is used, or if a value is in the wrong place (the wrong identifier field, or hidden in the notes and not added to identifiers). As the script is still under active development more checks will be added.
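To give an idea of what such a smell check can look like, here is a hypothetical sketch of one check in the same spirit: flagging a release whose notes mention a barcode while no barcode was entered in the identifiers. The dictionary layout and the check itself are illustrative assumptions, not the actual implementation of the script:

```python
import re

# Illustrative only: in this sketch a release is a plain dict with a
# 'notes' string and a list of identifier dicts with a 'type' key.
BARCODE_IN_NOTES = re.compile(r'\bbarcode\b', re.IGNORECASE)


def check_barcode_smell(release):
    """Return a warning string if a barcode seems hidden in the notes,
    or None if nothing suspicious was found."""
    notes = release.get('notes', '')
    identifiers = release.get('identifiers', [])
    has_barcode_identifier = any(i.get('type') == 'Barcode' for i in identifiers)
    if BARCODE_IN_NOTES.search(notes) and not has_barcode_identifier:
        return 'possible barcode in notes, but not in identifiers'
    return None


if __name__ == '__main__':
    release = {'notes': 'Barcode: 7 12345 67890 1', 'identifiers': []}
    print(check_barcode_smell(release))
    # prints: possible barcode in notes, but not in identifiers
```

A real check would of course be more careful (matching the actual digits, handling spacing variants, and so on), but the shape is the same: inspect a few fields of one release, report a smell or stay silent.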
In the next few blog posts I will walk through each of these checks, plus provide some background information about each of the values and the checks with, hopefully, some interesting statistics.