Let me start with the obvious – there’s a tremendous amount of buzz around Big Data.
Big Data is now deep into the hype phase of the innovation cycle. All the classic signs are there: you can eat buffet dinners all 52 weeks a year at Big Data conferences, Big Data tag lines are now common in emails from industry analysts, and even investment bankers are tossing around the phrase.
Does this mean that there’s little substance beneath the hype?
Not at all. While the hype level may be ahead of reality, there are many concrete, specific examples of how to get value from Big Data. Part of the problem is that in-the-trenches Big Data practitioners aren't speaking and writing about their work more often. Maybe because they are busy doing it!
Following Gandhi’s famously misquoted advice, let me try to “be the change I want to see in this world.”
Here are two specific ways in which CQuotient helps retail marketers get value from Big Data.
Recommendation engines often use some form of collaborative filtering (“people who bought this also bought that” algorithms, in plain English). In retail businesses where the average customer visits a few times a year and the product assortment changes several times during the year (e.g., apparel, fashion, consumer electronics), collaborative filtering often breaks down because there’s not that much in common between people’s literal purchases.
But by switching from products to product attributes, it is possible to better connect the dots. You and I may not have ever bought the exact same shirt, but we both may like pinpoint button-downs. Knowing this helps us use our data for each other’s benefit. Sounds great, right? But where do attributes like button-downs come from?
They come from text-mining Big Data from unstructured product descriptions, product reviews, Facebook posts, tweets and the like. A nice bonus here is that we can do this without taxing the retailer’s already stretched in-house databases, IT or marketing teams.
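To make the idea concrete, here is a minimal sketch of attribute-based matching. This is not CQuotient's actual pipeline; the attribute vocabulary, product descriptions, and customers are hypothetical, and real text mining would be far more sophisticated than keyword lookup. The point is simply that two customers with zero products in common can still have strong attribute overlap.

```python
import re

# Toy attribute vocabulary; in practice, terms like these would be mined
# from product descriptions, reviews, and social posts (hypothetical).
ATTRIBUTE_VOCAB = {"pinpoint", "button-down", "slim-fit", "oxford", "linen"}

def extract_attributes(description):
    """Pull known attribute terms out of free-text product copy."""
    tokens = set(re.findall(r"[a-z-]+", description.lower()))
    return tokens & ATTRIBUTE_VOCAB

def attribute_profile(purchases):
    """Union of attributes across everything a customer bought."""
    profile = set()
    for p in purchases:
        profile |= extract_attributes(p)
    return profile

def jaccard(a, b):
    """Overlap between two sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Two customers who never bought the exact same item...
alice = ["Pinpoint button-down shirt, white", "Slim-fit oxford shirt"]
bob = ["Blue pinpoint button-down, long sleeve"]

product_sim = jaccard(set(alice), set(bob))                            # 0.0
attribute_sim = jaccard(attribute_profile(alice), attribute_profile(bob))
```

At the product level the similarity is exactly zero, but at the attribute level both customers share "pinpoint" and "button-down," giving a usable signal for recommendations.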
Building models to predict customer behavior is nothing new. But the predictive accuracy of such models can be increased substantially by adding hundreds of variables gleaned from Big Data sources, especially website logs.
Extracting these variables from massive quantities of web log data is hard and requires tools like – you guessed it! – Hadoop. And building predictive models with hundreds of such variables across tens of millions of customers is much harder still!
We have had to invent and build our own tools to make this level of machine-learning and predictive modeling possible. But it is absolutely worth it since the needle moves nicely when you go to all this trouble.
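The shape of the second idea can also be sketched in a few lines: aggregate raw clickstream rows into per-customer variables, then feed those variables into a predictive model. Everything below is illustrative and hypothetical, not our production code; the log schema and labels are made up, the aggregation that Hadoop would do over terabytes is done here over a toy list, and the model is a hand-rolled stochastic-gradient-descent logistic regression rather than our in-house tooling.

```python
import math
from collections import defaultdict

# Hypothetical raw web-log rows: (customer_id, page, seconds_on_page).
LOG = [
    ("c1", "/sale/shirts", 40), ("c1", "/cart", 120), ("c1", "/checkout", 60),
    ("c2", "/home", 5),
    ("c3", "/sale/shirts", 90), ("c3", "/sale/shirts", 30), ("c3", "/cart", 45),
]

def log_features(log):
    """Roll up clickstream rows into per-customer model variables.
    At scale, this aggregation is the part a Hadoop job would handle."""
    feats = defaultdict(lambda: {"visits": 0, "sale_views": 0, "cart": 0, "dwell": 0})
    for cust, page, secs in log:
        f = feats[cust]
        f["visits"] += 1
        f["dwell"] += secs
        f["sale_views"] += page.startswith("/sale")
        f["cart"] += page == "/cart"
    return feats

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal stochastic-gradient-descent logistic regression (toy)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

feats = log_features(LOG)
X = [[f["visits"], f["sale_views"], f["cart"], f["dwell"] / 60] for f in feats.values()]
y = [1, 0, 1]  # hypothetical labels: did this customer go on to buy?
w, b = train_logistic(X, y)
scores = {c: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
          for c, x in zip(feats, X)}
```

In a real system there would be hundreds of such variables across tens of millions of customers, which is exactly where off-the-shelf tools run out of road; the toy version just shows why log-derived features are worth the extraction effort.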
In the examples above, incorporating Big Data helps us solve a problem better than we otherwise could. That’s why we care about Big Data. As simple as that.