iTunes Library Preening
Preening one’s iTunes library is one of those continually rewarding pastimes. Adding cover art, fixing up meta-data, adding playlists, it’s all endless fun. iTunes has a particularly entertaining feature where each track in your library can be given a rating, from one to five stars. This is a great way of instantly locating the best-of-the-best tracks in your collection. If you’re like me you preen your ratings constantly.
The trouble I’ve always had though is deciding on the criteria for a given rating. Here are the criteria I have been loosely applying up till now. First the easy ones, then the harder ones:
- 5 Stars Desert-island track. Listen anytime, anywhere. Ridiculous sentimental attachment.
- 1 Star A bit crap. Sometimes totally crap, but I just can’t bear to part with it because it completes an otherwise worthy album. Maybe it’s one of those “we’re just tooling around with the microphones on but we’ll put it on the album for a laugh” type of tracks. On shuffle-play I don’t want to hear these, ever.
- 2 Stars A default rating. Nothing standout about this track. Could stand to lose it from the collection, but don’t actually dislike it.
- 4 Stars Something about this prevents it from accompanying me to the desert island. Maybe I only like it in certain circumstances, or maybe I used to love it but am now growing a bit bored of it.
- 3 Stars Somewhere in-between 2 and 4. The less said about this category the better, OK?
After struggling with these definitions for a while, I decided that they weren’t really working. The 2, 3, and 4 star ratings were the hard ones, and this is where the bulk of my ratings go. This led me to thinking about the distribution of the tracks with each rating. Which led to the idea that maybe the ratings should reflect the distribution.
So instead of defining 5 stars against some arbitrary external criteria, it seems to be just as useful (if not more so) to define these in relative terms. So instead of “desert island track”, I now look at 5 star-rated tracks as those in the top 5% of my library. 4 stars are in the top 15%, 3 stars in the top 50%, and 2 stars in the top 90%. 1 star is reserved for the bottom 10%.
[Don’t ask me why 5 stars is the top 5% and 1 star is the bottom 10%, they are just the figures that popped out of my brain, perhaps some subconsciously-mangled Sturgeon’s Law?]
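The mapping from percentile to stars can be sketched in a few lines. (This is just an illustration of the scheme described above, not part of the actual script; the function name and the 0-is-best percentile convention are my own assumptions.)

```python
def stars_for_percentile(pct: float) -> int:
    """Map a track's percentile position in the library to a star rating.

    pct runs from 0 (best track in the library) to 100 (worst).
    Thresholds follow the relative scheme above: top 5% -> 5 stars,
    top 15% -> 4, top 50% -> 3, top 90% -> 2, bottom 10% -> 1.
    """
    if pct <= 5:
        return 5
    elif pct <= 15:
        return 4
    elif pct <= 50:
        return 3
    elif pct <= 90:
        return 2
    return 1
```

So a track sitting at the 40th percentile of the library would get 3 stars, and one at the 95th percentile gets the dreaded single star.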
But I needed some help to assess how far my current ratings assignments were from these definitions. So I wrote a script to analyse my iTunes library and compare the actual ratings distribution against the target one. Here is the first result:
| | Number | % of rated | Cumulative % of rated (actual) | Cumulative % of rated (target) | Shortfall |
|---|---|---|---|---|---|
| Tracks rated 5 stars | 6 | 1 | 1 | 5 | 4 |
| Tracks rated 4 stars | 136 | 22 | 23 | 15 | -8 |
| Tracks rated 3 stars | 277 | 45 | 68 | 50 | -18 |
| Tracks rated 2 stars | 160 | 26 | 94 | 90 | -4 |
| Tracks rated 1 star | 34 | 6 | 100 | 100 | 0 |
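The calculation behind those numbers is straightforward. My actual script is XSLT, but here is a hedged Python sketch of the same arithmetic, using the counts from the table above (the variable names are mine, not the script's):

```python
# Track counts per star rating, taken from the table above.
counts = {5: 6, 4: 136, 3: 277, 2: 160, 1: 34}
# Target cumulative percentages: top 5% / 15% / 50% / 90% / 100%.
targets = {5: 5, 4: 15, 3: 50, 2: 90, 1: 100}

total = sum(counts.values())

rows = []
cum = 0.0
for stars in range(5, 0, -1):          # walk from 5 stars down to 1
    pct = 100 * counts[stars] / total  # this rating's share of rated tracks
    cum += pct                         # running cumulative share
    shortfall = targets[stars] - round(cum)
    rows.append((stars, counts[stars], round(pct), round(cum), shortfall))

for stars, n, pct, cum_pct, shortfall in rows:
    print(f"{stars} stars: {n:4d} tracks  {pct:3d}%  "
          f"cumulative {cum_pct:3d}%  shortfall {shortfall:+d}")
```

A positive shortfall means I've been stingy with that rating relative to the target; a negative one means I've been too generous.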
As you can see, I either need to realign my ratings definitions, change some of the existing assignments, or overcompensate when rating new tracks. Or all of these. Anyway, I don’t want to be seen to be obsessing over this too much (a bit late now, he realises!) but the idea is to see that I’ve been, for example, a bit stingy with the 5-star ratings, and a bit generous with the 3-star ratings.
All of this obsessive behaviour can be yours, thanks to the latest girtby offering: iTunes Library Stats. It’s an XSLT script which will produce the above output for your own iTunes Library. Works on Windows and MacOS. Enjoy!