Advice on How to Take the Perfect Selfie, From a Convolutional Neural Network


Despite selfie-taking being more dangerous than swimming in shark-infested waters, people (and animals) remain very much engaged in selfie culture. Capturing the perfect self-portrait can be more difficult than nabbing Moby-Dick, but thanks to Stanford computer science graduate student Andrej Karpathy, there may be hope.

According to Fast Company, Karpathy trained something called a "convolutional neural network" (which he explains in-depth on his blog) to distinguish between good and bad selfies. He first collected millions of photos tagged #selfie and narrowed them down to a more manageable sample of two million, keeping only photos that contained at least one face. The photos were then ranked by how many likes they received, with the ranking controlled for the number of followers each photographer had.
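
To get a feel for what "controlled for followers" might mean in practice, here is a rough Python sketch of one way to do it: group selfies into buckets of accounts with similar follower counts, then label the more-liked half of each bucket "good" and the less-liked half "bad." The function names, the bucket count, and the 50/50 split are illustrative assumptions, not Karpathy's actual pipeline.

```python
# Illustrative sketch (not Karpathy's actual code): label selfies as good or bad
# by comparing like counts only against selfies from accounts with a similar
# number of followers, so hugely popular accounts don't dominate the "good" class.

from dataclasses import dataclass

@dataclass
class Selfie:
    image_path: str
    likes: int
    followers: int  # follower count of the account that posted it

def label_selfies(selfies, n_buckets=100):
    """Return (selfie, label) pairs, where label 1 = "good" and 0 = "bad"."""
    # Sort by follower count and split into buckets of similar-sized accounts.
    ordered = sorted(selfies, key=lambda s: s.followers)
    bucket_size = max(1, len(ordered) // n_buckets)
    labeled = []
    for i in range(0, len(ordered), bucket_size):
        bucket = sorted(ordered[i:i + bucket_size], key=lambda s: s.likes)
        half = len(bucket) // 2
        labeled += [(s, 0) for s in bucket[:half]]   # fewer likes -> "bad"
        labeled += [(s, 1) for s in bucket[half:]]   # more likes -> "good"
    return labeled
```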


"Best 100 out of 50,000 selfies, as judged by the Convolutional Neural Network." - Andrej Karpathy 

After feeding the data into the ConvNet, Karpathy made observations based on the top 100 selfies. To take the perfect selfie, you should be a woman with long hair, your oversaturated face should occupy at least one-third of the frame with your forehead cut off, and you should apply a filter and add a border. "A good portion of the variability between what makes a good or bad selfie can be explained by the style of the image," Karpathy said of his findings, "as opposed to the raw attractiveness of the person."
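
As a toy illustration of the one-third-of-the-frame rule, the sketch below checks how much of a photo a face bounding box covers. The face box is assumed to come from any off-the-shelf face detector, and the threshold simply echoes the figure quoted above; none of this is Karpathy's code.

```python
# Illustrative check of a composition rule above: does the face fill roughly
# a third of the photo? The face box (x, y, width, height) is assumed to come
# from some face detector; the one-third threshold is from the article's advice.

def face_fraction(image_size, face_box):
    """Fraction of the image area covered by the face bounding box."""
    img_w, img_h = image_size
    _, _, face_w, face_h = face_box
    return (face_w * face_h) / (img_w * img_h)

if __name__ == "__main__":
    # Example: a 1080x1920 portrait photo with a 700x1000 face box.
    frac = face_fraction((1080, 1920), (190, 280, 700, 1000))
    verdict = "OK" if frac >= 1 / 3 else "too small"
    print(f"Face covers {frac:.0%} of the frame ({verdict} by the one-third rule)")
```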

It is interesting to note, though Karpathy does not include it in his observations, that none of the women in the top 100 have dark skin, only two of the 100 wear glasses, and none of them are dressed in bright colors.

Head over to Karpathy's blog to learn more about the convolutional neural network and the results of his experiment.