An internship working on "Customers who bought this also bought" at Amazon 16 years ago

Vikram Oberoi


May 29, 2023



I wrote this tweet about my Amazon internship in 2007 as I rolled out of bed yesterday morning and it went viral.

A screenshot of a tweet by the author stating “I interned at Amazon in 2007 and my entire task was to test changes to the algorithm so that the first Harry Potter 7 book (released that summer) wouldn’t appear in ‘Customers who bought this also bought’ for literally everything (e.g. mops, Now That’s What I Call Music 22, etc.)”
(Yes, you are correct: there is only one Harry Potter 7 book. And no, I did not recommend the second one into print, but I do love that joke.)

How fun!

That internship was my first work experience overall in software and I got incredibly lucky being placed on the Similarities team.

One could do way worse as a 20-year-old landing an internship at Amazon: the Similarities team was doing cutting-edge recommendations and experimentation work, I got to deploy my work and see feedback from it on-site every week, and I was paired with a kind and helpful mentor on a great team.

My experience that summer is the reason I continued doing data & systems work in college and after I graduated.

The Similarities team was responsible for producing the dataset that powered “Customers who bought this also bought”. The dataset was also used in personalized recommendations they’d surface to customers elsewhere: on-site and in emails.

~20% of revenue was attributed to similarities at the time, if I remember correctly (my mentor was kind enough to terrify me with an estimate of the revenue Amazon lost during an outage I caused). So similarities were an effective and important part of the site. But there were still a number of unintuitive similarities that would appear in the “Customers who bought this also bought” widget.

While there was a long tail of random issues in different product categories – Amazon had begun a rapid expansion beyond books and was working on fixing them – the biggest problem, by far, was Harry Potter.

More specifically, in the summer of 2007 it was Harry Potter and the Deathly Hallows.

It would show up as a similarity everywhere. Like, you’d be on Amazon buying a mop and the similarities widget would show a recommendation for Harry Potter and the Deathly Hallows, followed by Pine-Sol and a bucket.

The team computed item-to-item similarities using collaborative filtering with customer order baskets as the input. Put simply, if customers bought products A and B together frequently enough, then Amazon would present A and B as similar items.

But what happens when the same product appears in virtually every order basket? The Harry Potter problem.
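To make the problem concrete, here is a minimal sketch of item-to-item collaborative filtering over order baskets (the products and baskets are made up; "HP7" stands in for Harry Potter and the Deathly Hallows). Because the ubiquitous item lands in nearly every basket, raw co-occurrence counting surfaces it as "similar" to everything:

```python
from collections import Counter
from itertools import combinations

# Hypothetical order baskets. "HP7" appears in almost all of them,
# just as Harry Potter and the Deathly Hallows did in the summer of 2007.
baskets = [
    {"HP7", "mop", "bucket"},
    {"HP7", "pine-sol", "mop"},
    {"HP7", "cat-litter"},
    {"mop", "bucket"},
]

# Count how often each ordered pair of products is bought together.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def similar_items(asin, top_n=3):
    """Rank co-purchased items for `asin` by raw co-occurrence count."""
    scores = {b: n for (a, b), n in pair_counts.items() if a == asin}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# HP7 shows up among the top "similar" items for a mop, despite being
# completely unrelated to mops.
print(similar_items("mop"))
```

Production systems normalize these counts (e.g. by each item's overall popularity) rather than using raw co-occurrence, but even then a book in virtually every basket distorts the scores.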

[Image: a pun on the fifth HP book, Harry Potter and the Order of the Phoenix – Harry works at a fast food restaurant, taking the complicated order of a customer who is also a phoenix.]

The Similarities team put me on one idea they had for addressing the Harry Potter problem: could they use feedback from users to cull unintuitive similarities?

Amazon got feedback in two ways:

  1. Implicitly, through clickstream and conversion data: impressions, clicks, and purchases of recommended items.
  2. Explicitly, when customers told Amazon a recommendation wasn’t relevant.

So I spent the summer trying to use the output of Amazon’s collaborative filtering algorithm + all this clickstream/conversion/feedback data to address unintuitive similarities. The thinking was that if a similarity was unintuitive, then presumably it’d underperform by some measure based on user feedback.

My primary target was Harry Potter and the Deathly Hallows: it stuck out like a sore thumb and it was an easy way to see if an approach was working qualitatively.

Amazon similarities were served by a Berkeley DB (BDB) file at the time. BDBs are embedded key-value stores – a file you can ship around with a format optimized for key-value lookups. Amazon would crunch numbers and emit a new similarities BDB nightly or weekly (I don’t remember which).

The similarities BDB mapped ASINs (product IDs in Amazon parlance) to lists of ASINs, like this:

B00P0H6836: B07K8Y6CMP, B07K8RWVF9, B07K8S9ZQZ, B00P0H6836

That’s the ASIN for this great cat litter mapped to different sizes of pee pad refills in the “Compare with similar items” section. This is also an example of an unintuitive similarity: if I use clumping litter for my cat, it is unlikely that I will use pee pads too.
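Berkeley DB bindings aren’t part of Python’s standard library, but the stdlib `dbm` module gives the same embedded key-value feel; here is a sketch of the shape of that serving path, using the ASINs from the example above (the filename is made up):

```python
import dbm

# Offline: the batch job emits a file mapping an ASIN to a
# comma-separated list of similar ASINs.
with dbm.open("similarities", "c") as db:
    db[b"B00P0H6836"] = b"B07K8Y6CMP,B07K8RWVF9,B07K8S9ZQZ"

# Online: serving a similarity list is a single key lookup in the file,
# with no database server involved.
with dbm.open("similarities", "r") as db:
    sims = db[b"B00P0H6836"].decode().split(",")
print(sims)
```

Shipping the whole dataset as one lookup-optimized file is what made the weekly experiment loop cheap: swapping in a modified set of similarities just meant swapping the file.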

So each week I’d write a Perl script to crunch clickstream/conversion/feedback data and then remove or reorder some of the mappings in the BDB file based on my algorithm that week. We’d now have two sets of similarities:

  1. A: the production similarities from the regular batch job.
  2. B: my modified set.

Then I’d send an email to the team with a link to a CGI script I wrote that allowed us to qualitatively assess B against A. It was a little web page with an input box at the top where you could enter an ASIN and it would show you similarities from A compared to similarities from B.

We’d exchange emails about the quality of my similarities or talk about them in a meeting. Then my mentor would decide whether or not we would push it to production.

Most weeks we’d push something to production and A/B test it. I don’t know what percentage of the site saw my similarities, but I suspect it was low: it would take a few days for us to get conclusive results and even in 2007 Amazon got tons of traffic.

I threw a lot of things at the wall.

Two approaches that I remember relied on the idea that users will simply click more on items on the left side of a page. It is well known that page position is a massive determinant of clickthrough rates. The “Customers who bought this also bought” widget was laid out from left to right, so items in the first slot had a baked-in “boost”.

So, if that is true yet we see a similarity in the first slot “underperform”, maybe we should reorder it. Here are two different ways I did that:

  1. A basic approach: if an ASIN in slot 1 has a lower clickthrough rate than an ASIN in slot 2, swap slots 1 and 2.
  2. A more complicated approach: if an ASIN in slot 1 has a statistically lower clickthrough rate compared to the ASIN in slot 2, swap slots 1 and 2.

I don’t remember the specifics for #2. It might have been something like:

  1. Get the difference in clickthrough rate between slots 1 and 2.
  2. See where it falls on the distribution of “difference in clickthrough rate between slots 1 and 2”.
  3. Decide it’s underperforming if it is some distance away from the mean.
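The three steps above can be sketched as a simple z-score test (this is my reconstruction, not the actual 2007 code; the CTR numbers are invented):

```python
from statistics import mean, stdev

# Hypothetical (slot-1 CTR, slot-2 CTR) pairs across many similarity lists.
ctr_pairs = [
    (0.10, 0.06), (0.12, 0.07), (0.09, 0.05),
    (0.11, 0.06), (0.02, 0.08),  # last pair: slot 1 underperforms badly
]

# Step 1-2: build the distribution of "slot 1 minus slot 2" CTR differences.
diffs = [s1 - s2 for s1, s2 in ctr_pairs]
mu, sigma = mean(diffs), stdev(diffs)

def should_swap(ctr1, ctr2, threshold=1.5):
    """Step 3: flag slot 1 as underperforming if its CTR gap over slot 2
    falls well below the typical gap (more than `threshold` standard
    deviations under the mean)."""
    z = ((ctr1 - ctr2) - mu) / sigma
    return z < -threshold

print(should_swap(0.02, 0.08))  # True: far below the typical gap, so swap
print(should_swap(0.10, 0.06))  # False: a typical gap, keep the order
```

The point of the statistical version over the basic swap is to avoid reordering on noise: slot 1 usually beats slot 2 just from position bias, so only an unusually large deficit counts as evidence of a bad similarity.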

I spent the entire summer trying stuff like this and published a log of what I did on Amazon’s internal wiki.

Some takeaways I recall:

There are many wildly qualified people who have worked on similarities and recommendations at Amazon, Netflix, and elsewhere over the last 15 years. Greg Linden started and led a lot of the personalization work at Amazon and has written about some of it on his blog.

I don’t know if Greg was there in 2007. My mentor and the team’s manager during my internship keep a much lower profile, but were also extremely talented.

👋 to Wes & Brent if you see this! Thank you for setting me up with a delightful and impactful experience 16 years ago.