Sears Holdings – the tenth-largest retailer in the United States by turnover – has outlined a big data-led strategy that it hopes will help it take market share from its competitors and halt a steady decline in sales.
The Illinois-headquartered company, which trades most famously under the Sears and Kmart brands following their merger in 2005, operates 4,000 stores across the US. However, its sales have declined year-on-year since 2007, and last year it incurred a $3.1bn loss.
In a bid to reverse its fortunes, the company has invested heavily in big data analytics (“hundreds of millions”, according to chairman Edward Lampert), with a focus on its loyalty programme – something its closest competitor, Wal-Mart, does not offer.
The company is keen to combine the existing maturity of its rewards programme with customers’ growing appetite for web and mobile interaction. Personalised mobile coupons will be one of the first products of this focus.
In an interview with Forbes, Sears’ CTO Dr Phil Shelley said: “We are re-engineering an old legacy company to become a big data company.”
With 80 million people registered to Sears’ rewards programme (slightly more than a quarter of the US population), the company is using big data to track a huge amount of information at a far-reaching level of granularity, covering every customer: “every SKU, every store, every point of history for as long as you want to [keep it]”.
The value of Hadoop, says Shelley, is that there are now no restrictions on the amount of statistical analysis that can be performed on this data – and therefore on the value that can be derived from it – including, most importantly to Sears, the newfound ability to personalise that insight for individual customers.
“We are bleeding edge on a large scale,” he said. “Some of the innovations are just amazing. Now you can put all your data in one place and achieve a single point of truth and use it at a granular level that was pretty much impossible before.”
Pointing to the value of Hadoop, Shelley told Forbes that Yahoo! recently took one terabyte of data and sorted it twice in 62 seconds using the free, open-source framework.