Big data was one of the biggest buzzwords of 2013, with companies often using the term inappropriately and out of context. This year, people will finally understand what it means.
One could look back at 2013 and consider it the breakthrough year for big data, not in terms of innovation but rather in awareness. The increasing interest in big data meant it received more mainstream attention than ever before. Indeed, the likes of Google, IBM, Facebook and Twitter all acquired companies in the big data space. Documents leaked by Edward Snowden also revealed that intelligence agencies have been collecting big data in the form of metadata and, amongst other things, information from social media profiles for a decade.
And beyond all of that, big data became everyone’s most hated buzzword in 2013 after it was inappropriately used everywhere, from boardrooms to conferences. This has led to countless analysts, journalists and readers calling for people to stop talking about big data. A good example could be seen in the Wall Street Journal last week, where a reader wrote in complaining:
A lot of companies talk about it but not many know what it is.
While that’s a problem, it leads to my first prediction:
1. In 2014, people will finally start to understand the term big data. Because, as it stands, many do not.
The truth is that we’ve only really just started to talk about big data, and companies aren’t going to stop shouting about their latest big data endeavours. In fact, it’s only January and the social bookmarking network Pinterest has already acquired image recognition platform VisualGraph. (Why? Pinterest wants to understand what users are “pinning” and create better algorithms to connect users with their interests.)
So let’s get 2014 off on the right foot with a definition of big data, from researchers at St Andrews, that’s fairly easy to understand:
The storage and analysis of large and/or complex data sets using a series of techniques including, but not limited to: NoSQL, MapReduce and machine learning.
The main elements revolve around volume, velocity and variety. And the word ‘big’? If your personal laptop can handle the data in an Excel spreadsheet, it’s not big.
Matt Asay, a journalist with ReadWriteWeb, also does a good job in explaining what makes a big data problem (as opposed to more traditional business intelligence).
If you know what questions to ask of your transactional cash register data, which fits nicely into a relational database, you probably don’t have a big data problem. If you’re storing this same data and also an array of weather, social and other data to try to find trends that might impact sales, you probably do.
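To make the MapReduce part of that St Andrews definition concrete, here is a minimal sketch in Python of the map, shuffle and reduce steps. The dataset and field names are invented for illustration, echoing Asay’s example of blending sales records with weather data; a real Hadoop job would run the same shape of computation distributed across many machines.

```python
from collections import defaultdict

# Toy records blending transactional and weather data (invented for illustration).
sales = [
    {"store": "A", "weather": "rain", "total": 120.0},
    {"store": "B", "weather": "sun", "total": 200.0},
    {"store": "A", "weather": "rain", "total": 80.0},
    {"store": "B", "weather": "rain", "total": 50.0},
]

# Map step: emit (key, value) pairs -- here, (weather condition, sale total).
mapped = [(record["weather"], record["total"]) for record in sales]

# Shuffle step: group all values that share a key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce step: aggregate each group into a single answer.
totals = {key: sum(values) for key, values in grouped.items()}
print(totals)  # {'rain': 250.0, 'sun': 200.0}
```

The point is not the 20 lines of code but the pattern: once a question is phrased as map, shuffle and reduce, it can be spread over as many machines as the data demands.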
2. Consumers will begin to (voluntarily) give up certain elements of privacy for personalisation.
We’ve all heard of cookies – and we know that our actions around the internet affect the adverts that we see on websites and the suggested items we receive on Amazon. This is a concept that we’ve not only become accustomed to but also accept. After all, if we’re going to have information put in front of us, we’d rather that we could relate to it.
But there have been problems in the past. Some websites have taken advantage of customers, for example increasing the prices for a flight that they’ve previously expressed interest in (consumers might worry that the price will go up even further and therefore decide to buy a ticket).
But as more companies instil big data techniques, customers will cooperate, on the premise that they will benefit. This is likely to follow Tesco’s methodology, whereby customers are sent vouchers for goods that they are likely to buy anyway, creating a win-win situation for both parties. Customers, generally, are happy to receive a discount and retailers are pleased customers are coming back (especially if vouchers have an expiry date).
3. Big data-as-a-service will become a big deal
Despite claims from analysts that all businesses will look to hire data scientists, this just isn’t going to happen. Firstly, there’s a shortfall of data scientists (which goes some way towards explaining why companies are retraining existing staff to work with big data) and secondly, not all companies are ready to (nor do they need to) invest in full-time data scientists to analyse and explain their data.
Instead, just as in other areas, I expect a wave of companies hustling to enter the big data-as-a-service space, an idea that began to creep in towards the end of 2013. For small and medium businesses, this could be anything from complete packages that store, analyse, explain and visualise their data, to more compact services that simply transfer data to cloud-based servers so it can be queried easily in the future.
4. And finally… remember how Hadoop is open-source software? Expect a lot more of that.
Hadoop, famously named after a toy elephant, is well known to anyone curious about data science and provides the backbone for many big data systems, allowing businesses to store and analyse masses of data. Most importantly, it’s open source, which made it inexpensive to implement and allowed many organisations to understand, rather than ignore, the data they were collecting.
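Part of what made Hadoop so accessible is that its Streaming interface lets any program that reads standard input and writes standard output act as a mapper or reducer. A minimal word-count pair, sketched in Python (the demo input at the bottom is invented; with Hadoop, each function would read from sys.stdin instead):

```python
import sys

def mapper(lines):
    """Emit one tab-separated (word, 1) pair per word, as Hadoop Streaming expects."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(pairs):
    """Sum the counts for each word; assumes input sorted by key, as after the shuffle phase."""
    current, count = None, 0
    for pair in pairs:
        word, n = pair.split("\t")
        if word != current:
            if current is not None:
                yield f"{current}\t{count}"
            current, count = word, 0
        count += int(n)
    if current is not None:
        yield f"{current}\t{count}"

if __name__ == "__main__":
    # Standalone demo on toy input; Hadoop would sort the mapper output between phases.
    mapped = sorted(mapper(["big data big", "data"]))
    for result in reducer(mapped):
        print(result)  # big	2 / data	2
```

Because nothing here is Hadoop-specific, the same two scripts can be tested on a laptop with a shell pipeline and then handed to a cluster unchanged, which is a large part of why the barrier to entry fell so far.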
Quentin Gallivan, the chief executive of business analytics software firm Pentaho, explained last month that the rise of new open-source software will bring about more innovation and more ways of understanding data. He said: “New open source projects like Hadoop 2.0 and YARN, as the next generation Hadoop resource manager, will make the Hadoop infrastructure more interactive… projects like STORM, a streaming communications protocol, will enable more real-time, on-demand blending of information in the big data ecosystem.”
Photo: Google’s data centre in Dalles, Oregon, was the first of its kind to be built by the online giant. Photograph: Google/Rex Features