[Cross-posted at The Hackerati.]
Principal Component Analysis and Fashion
Our dataset is 807 pictures of dresses from Amazon. They share a standard image size but, unfortunately, not a standard model pose (though the models tend to be centered in the image similarly). Ideally, our principal components would capture only actual dress style, but here many of them will be concerned with model pose. Despite this, we can still do a lot with this dataset.
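As a concrete sketch of the setup, here is how such a decomposition might be computed with scikit-learn. The library choice, the image size, and the random placeholder data are all illustrative assumptions, not the post's actual code; the real analysis would load the flattened pixel values of the actual dress pictures.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data standing in for the 807 dress pictures; each image is
# flattened to one row of pixel values (downscaled here to keep the sketch
# light; the real images have ~60,000 values each).
rng = np.random.default_rng(0)
images = rng.random((807, 6000))

# Fit PCA: each row of pca.components_ is one "eigendress",
# and each dress is described by 70 component values.
pca = PCA(n_components=70)
coords = pca.fit_transform(images)

print(coords.shape)           # (807, 70)
print(pca.components_.shape)  # (70, 6000)
```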
This eigendress is the first principal component, which accounts for the most variation among all the dresses. Broadly, it’s looking at light-colored dresses vs dark-colored dresses.
The second component seems to look at short dresses vs long dresses.
Reddish colors vs bluish colors.
Short hair and sleeveless vs long hair and long sleeves.
Posing with legs close together vs posing with legs farther apart.
And so on.
Using components to recreate images
With a bunch of components like these, we can reduce an image from, e.g., 60,000 points of data (pixel values) to just a handful of numbers.
Let’s recreate this dress from its components.
The following pictures are created from one, two, four, nine, ten, fifteen, thirty, forty, and seventy components (respectively).
As you can see, the more components we have, the more accurate and detailed the dress recreation will be.
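The recreation above amounts to keeping only an image's first k component values and mapping them back to pixel space. A minimal sketch, again assuming scikit-learn and random placeholder data rather than the post's actual code:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
images = rng.random((807, 6000))  # flattened pixel data (illustrative)

pca = PCA(n_components=70).fit(images)

def reconstruct(image, k):
    """Approximate an image using only its first k principal components."""
    coords = pca.transform(image.reshape(1, -1))[0, :k]
    return pca.mean_ + coords @ pca.components_[:k]

# With more components, the reconstruction error can only shrink.
errors = [np.linalg.norm(images[0] - reconstruct(images[0], k))
          for k in (1, 2, 4, 9, 15, 30, 70)]
assert all(a >= b for a, b in zip(errors, errors[1:]))
```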
The data for the middle dress above now looks like this: [-17541.81, -12749.33, -3766.29, 2005.28, 4193.08, 6832.55, -6704.90, -2135.51, 1112.27, 7627.80].
So, if you have a million pictures you need to store, you can save a whole lot of space by saving just the component values instead of the values of every pixel of every dress.
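A quick back-of-envelope on that saving (assuming ten stored component values and four bytes per value, both just for illustration):

```python
# Storage per image: full pixels vs. component values.
pixels_per_image = 60_000   # as in the post
floats_per_image = 10       # ten component values per dress
bytes_per_value = 4         # e.g. float32

full = 1_000_000 * pixels_per_image * bytes_per_value
compressed = 1_000_000 * floats_per_image * bytes_per_value
ratio = full // compressed
print(ratio)  # 6000
```

The component matrix itself must also be stored once, but that cost is shared across all million images.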
It even works for dresses that were not in the training set:
Though it works less well for patterns we haven’t seen before:
And can’t recreate accessories that were not present in the training set (notice the sunglasses and handbag disappear):
And even though the training set only contained dresses, the data is decent at recreating different types of clothing, such as suits and overalls:
Using components for prediction
I’ve also manually categorized the pictures as dresses I like (287 pictures) and ones I dislike (520 pictures).
Now we can use logistic regression on component data to predict whether or not I’ll like a dress.
Sorting all the dresses by score, it can show the prettiest and ugliest dresses of the whole set.
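A sketch of that scoring pipeline, assuming scikit-learn with placeholder pixel data and random like/dislike labels standing in for the real pictures and my hand labels:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
images = rng.random((807, 6000))     # flattened pixels (illustrative)
likes = rng.integers(0, 2, 807)      # 1 = a dress I like, 0 = dislike

# Logistic regression on the component values, not the raw pixels.
coords = PCA(n_components=70).fit_transform(images)
model = LogisticRegression(max_iter=1000).fit(coords, likes)

scores = model.predict_proba(coords)[:, 1]  # P(like) for each dress
ranking = np.argsort(scores)                # ugliest ... prettiest
prettiest, ugliest = ranking[-3:][::-1], ranking[:3]
```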
The prettiest dresses:
The ugliest dresses:
Seems pretty spot on! I could now set up something to watch as new dresses are posted on Amazon, and to alert me to dresses it thinks I will really like.
The misclassifications are interesting too. Here are the three “ugliest pretty dresses”, those that I classified as my style, that the program predicted I should really dislike:
It seems to be about that specific shade of blue.
And here are the “prettiest ugly dresses”, those that I classified as dislikes, that the program predicted I would really like:
These aren’t that bad. I do kinda like them, but think they’d be nicer with some minor adjustments (slightly less form-fitting, slightly less loud pattern, slightly brighter color).
Creating new dresses
For creating pictures, there’s no reason we need to confine ourselves to already known dress component values. We can also choose random values for each component, and see what happens!
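One way to sketch this: draw each component value from a normal distribution scaled to that component's observed variance, then map the random values back to pixel space. As before, scikit-learn and the placeholder data are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
images = rng.random((807, 6000))  # flattened pixels (illustrative)
pca = PCA(n_components=70).fit(images)

# Sample each component at a scale matching its variation in the data,
# then map back to pixel space to get a brand-new "dress" image.
random_coords = rng.normal(0, np.sqrt(pca.explained_variance_))
new_dress = pca.mean_ + random_coords @ pca.components_

print(new_dress.shape)  # (6000,)
```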
Completely new dresses! With more data and better data, this could actually be a viable dress design tool!
Want to play? Code on GitHub
If you’d like to see more about neat applications of PCA, check out Joel Grus’s post!