We are living in an age of rapidly growing digital data. Economist writer Ludwig Siegele claims 2012 will be the year in which the ‘big data’ trend gets noticed more widely. “Many more firms will start to analyse huge piles of data to optimise everything from their supply chains to their customer relationships,” he wrote.
This trend will be facilitated by the plummeting cost of storing data and its real-time availability. Software to handle such huge amounts of data is improving, so 2012 will see growing demand for the talent to analyse it.
According to Siegele, such talent is already scarce and will become even scarcer. A study by the McKinsey Global Institute (MGI) found that analysing healthcare data could yield $300 billion (£186 billion) of savings in the US alone. Further evidence confirms that meaningful data analysis skills are needed if we want to take full advantage of this digitally facilitated knowledge.
In the world of procurement, data collection is also rapidly improving. E-sourcing, e-procurement and e-tendering tools, twinned with statistical methods, give us an intelligent basis for making commercial decisions and developing socio-economic projects. Placing that data within an emerging “collaborative consumption” sector — one more interested in renting than buying, and in accessing than owning — could potentially take us in a very new direction.
Can we harness the abundance of available data? Can we make sense of it? Can we create a world in which consumption as we know it now will be replaced by a collaborative and non-ownership philosophy?
My intuition tells me we can, but my analytical side says we cannot. Procurement faces many challenges: global commerce versus regional and local development; transparency versus exclusivity; accessibility versus acquisition; and more. But procurement is a tool that, on a smaller scale, can address such contradictions by designing processes in which new and emerging sectors and social needs can potentially be embraced.
The MGI predicts that by 2018 there will be a ‘talent gap’ of between 140,000 and 190,000 people with the skills to analyse data. Will we end up with data control, or data out of control? That is the question.