CBRE’s award-winning data project helped the company gain greater control of its global supply chain. By refining its spend data management, the firm was able to detect and prevent fraudulent activity in real time.
Fraud and economic crime rates are at a record high, affecting more firms today than ever before. Targeting a firm directly can be tough, but its supply chain offers weak points that are far easier to exploit – and the figures confirm the approach is proving popular. Almost half of companies claim to have experienced fraudulent activity in the past two years, representing a total loss of around $42bn, according to the PwC Global Economic Crime and Fraud Survey 2020.
The survey results, compiled from more than 5,000 organisations, show that a fifth of respondents have fallen victim to procurement fraud over the past two years. Another fifth said vendors and suppliers were the source of their “most disruptive external fraud”. However, businesses are getting wise to these types of attacks and are increasingly turning to data analytics to detect fraudulent schemes and prevent infiltration.
Lay out the foundations
Kicking off a data project can be a mind-boggling task; for larger businesses, simply choosing where to begin seems a daunting decision. For commercial real estate services and investment firm CBRE, there was significant complexity. The firm encompasses four major business segments, each operating multiple financial systems, processes and controls, while its procurement organisation spans 70 countries with a total spend of $23.2bn per year.
Thrown in at the deep end, the company’s supply chain fraud analytics project team was tasked with setting up a digital solution to detect suspicious or fraudulent behaviour in data spanning more than 150,000 suppliers and more than a billion transactions across over 25 financial systems.
“We’ve never had one place where we could access spend data from all the regions and other lines of business,” explains Justyna Maciejewska, global analytics lead at CBRE, who was involved in the project. To make it manageable, the team’s first task was to consolidate information, working with a data scientist to normalise – effectively organise – the data with 40,000 to 50,000 business rules applied using artificial intelligence (AI). “We had several local spend databases, but never a global picture,” Maciejewska continues.
During the project, the team reduced the input to just 51 source systems. “These are ERPs that we use all over the world, but also smaller, local spend trackers and external client systems where we manage spend. The more different sources, the more difficult it is to get a common understanding of the data. So we do normalisation and classification of the data. It’s not perfect, but it’s as good as it can be.”
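CBRE has not published its rule set, but a minimal sketch gives a flavour of what rule-based normalisation and classification of spend data can look like. Everything below – the input file, column names and the handful of rules – is an illustrative assumption standing in for the tens of thousands of rules the team applied; it is not CBRE’s implementation.

```python
# Minimal, hypothetical sketch of rule-based spend normalisation.
# The file, column names and rules are illustrative assumptions only.
import re
import pandas as pd

# A tiny stand-in for the tens of thousands of business rules described
# in the article: keywords in raw descriptions map to spend categories.
CATEGORY_RULES = {
    r"\b(clean|janitorial)\b": "Facilities - Cleaning",
    r"\b(hvac|boiler|chiller)\b": "Facilities - HVAC",
    r"\b(laptop|software|licen[cs]e)\b": "IT",
}

def normalise_supplier(name: str) -> str:
    """Collapse variants such as 'ACME Ltd.' and 'Acme Limited' to one key."""
    name = re.sub(r"[.,]", "", name.lower().strip())
    name = re.sub(r"\b(ltd|limited|inc|llc|gmbh)\b", "", name)
    return re.sub(r"\s+", " ", name).strip()

def classify(description: str) -> str:
    """Return the first matching category, else leave for manual review."""
    for pattern, category in CATEGORY_RULES.items():
        if re.search(pattern, description.lower()):
            return category
    return "Unclassified"

spend = pd.read_csv("raw_spend.csv")  # hypothetical consolidated extract
spend["supplier_key"] = spend["supplier_name"].map(normalise_supplier)
spend["category"] = spend["description"].map(classify)
```

Once supplier keys and categories are consistent, spend pulled from dozens of systems can be aggregated into the single global picture the team was after.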
While this was only the beginning, the rewards were already becoming apparent. By rationalising data, CBRE’s strategic sourcing, category experts and compliance teams could now manipulate and analyse spend to validate current strategies and identify future opportunities.
Putting numbers to work
Three separate work streams were implemented in the second phase of the project, using data to flag fraud by identifying suspicious transactions and suppliers that would require greater scrutiny. For this, the team developed a machine learning algorithm to review transaction-level spend and highlight anomalies, created four bespoke algorithms that use AI to identify suspicious behaviour, and built fraud analytics monitoring into CBRE’s global procure-to-pay (P2P) platform.
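The article does not specify which model the team used, but a common unsupervised choice for this kind of transaction-level anomaly flagging is an isolation forest. The sketch below shows one possible shape of such a check; the input file and engineered features are invented for illustration.

```python
# Illustrative anomaly flagging on transaction-level spend; the model
# choice and feature names are assumptions, not CBRE's actual algorithm.
import pandas as pd
from sklearn.ensemble import IsolationForest

tx = pd.read_csv("transactions.csv")  # hypothetical transaction extract

# Example per-transaction features: the raw amount, how far it sits from
# the supplier's typical invoice, and the hour the entry was keyed in.
tx["supplier_mean"] = tx.groupby("supplier_key")["amount"].transform("mean")
tx["amount_ratio"] = tx["amount"] / tx["supplier_mean"]
features = tx[["amount", "amount_ratio", "entry_hour"]]

# An isolation forest scores points by how easily random splits isolate
# them; rare combinations of feature values surface as outliers (-1).
model = IsolationForest(contamination=0.01, random_state=0)
tx["flag"] = model.fit_predict(features)

review_queue = tx[tx["flag"] == -1]  # routed to compliance for scrutiny
```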
Of course, a system operating at this size and scale required a significant investment, but by spotting a number of duplicate invoices and stopping the payments in real time, it achieved a return on investment within the first six months. According to Mat Langley, CBRE senior vice president and global head of operations, technology and transformation, the P2P platform “easily paid for itself”.
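The platform’s internals are not described in the piece, but duplicate-invoice detection can start from something as simple as grouping submitted invoices by supplier and amount and looking at the gap between their dates. The sketch below is a hypothetical first pass, not the platform’s actual logic; the field names and 30-day window are assumptions.

```python
# Hypothetical duplicate-invoice check; field names and the 30-day
# window are illustrative assumptions, not CBRE's platform rules.
import pandas as pd

invoices = pd.read_csv(
    "submitted_invoices.csv", parse_dates=["invoice_date"]
)  # hypothetical extract of invoices awaiting payment

invoices = invoices.sort_values("invoice_date")
suspects = []
for _, group in invoices.groupby(["supplier_key", "amount"]):
    if len(group) < 2:
        continue  # a lone supplier/amount pair cannot be a duplicate
    # Flag same-supplier, same-amount invoices dated under 30 days apart,
    # which catches re-keyed or re-submitted invoices even when the
    # invoice numbers differ slightly.
    gaps = group["invoice_date"].diff().dt.days
    suspects.append(group[gaps.fillna(999) < 30])

duplicates = pd.concat(suspects) if suspects else invoices.iloc[0:0]
# Payments on these invoices can be held until a person confirms them.
```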
In contrast, much of the investment for the other two work streams focused less on making savings and more on identifying fraudulent or suspicious transactions. “What we were looking to do is to just move those suppliers out of our supply chain, to use it as a way to de-risk our supply chain, so more of a reputational aspect,” Langley explains.
“The core of my technology strategy has been around building in sustainability and building in data quality. So it’s just the underlying strategy we’ve included with everything we’re looking at and all the investment we’re doing,” he says.
Overcoming hurdles
The project’s challenges included not only managing data quality, but also examining the types of data the firm was using and adjusting its focus accordingly, says Maciejewska. “In spend data, we were only looking at what’s actually been paid already. So here, the answer was to use technology which actually looks at the transactions submitted and not yet paid. It allows you to catch the potential fraud before it actually happens,” she says.
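In other words, the checks moved upstream of payment. A toy sketch of that gate – with invented check functions and invoice fields, purely to show the idea – might look like this:

```python
# Toy sketch of a pre-payment gate: checks run on submitted, not-yet-paid
# transactions so a suspect payment can be held before money moves.
# The checks and invoice fields here are invented for illustration.

def looks_duplicate(invoice, history):
    """Same supplier and amount as an earlier invoice."""
    return any(prev["supplier"] == invoice["supplier"]
               and prev["amount"] == invoice["amount"]
               for prev in history)

def round_amount(invoice, _history):
    """Suspiciously round values, a classic red flag."""
    return invoice["amount"] % 1000 == 0

CHECKS = [looks_duplicate, round_amount]

def screen_before_payment(invoice, history):
    """Return 'HOLD' with the reasons if any check fires, else 'PAY'."""
    reasons = [c.__name__ for c in CHECKS if c(invoice, history)]
    return ("HOLD", reasons) if reasons else ("PAY", [])

status, why = screen_before_payment(
    {"supplier": "acme", "amount": 5000},
    history=[{"supplier": "acme", "amount": 5000}],
)
# status == "HOLD", why == ["looks_duplicate", "round_amount"]
```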
CBRE estimates it has saved up to 1% of its annual procurement spend by actively preventing procurement fraud. Its internal spend data algorithm continues to flag transactions that are not compliant; so far, the AI system has flagged 2,300 transactions that are subject to further review, while suppliers identified as suspicious have been vetted. The Spend Guard module in its P2P platform has highlighted 2,600 duplicate invoices and more than 100 suspicious purchase orders (POs).
Langley says: “I think data is in a lot of ways the new oil, but it’s got to be accurate and it’s got to be useful data. That’s why we want to get as much data as we can together and make it accurate. Now we’re at the point of deciding what is useful data so that we can focus on actionable insights and changes. It’s very much an evolutionary process.

“It should always be the idea that, in the end, you have got so much data you have to throw some of it out.”
CBRE’s data intelligence project earned the accolade of best use of digital technology at the CIPS Excellence in Procurement awards 2020.
Three key lessons when starting a data project:
1. Start small
There’s a misconception that data intelligence projects have to be expensive, says Langley. “There are always low-cost options,” he says. “I think a lot of people see that when they want to do a procurement transformation, they choose one application and then just roll it out. These large, integrated platforms have some benefits, and some drawbacks. Data is definitely something that flows through them, but there are a lot of small improvements you can make through processes. What some of the new startups are doing is amazing.
“You can make improvements and keep them small and you can do preparation. I think people make the mistake that you’ve got to spend millions on a big integrated platform or improvement where often you don’t. The hard part is identifying the data sets and bringing them together. It’s more just having this as an aim and seeing what small steps you can make that add up to big steps.”
2. Schedule your resources
Resources and time were two of the biggest challenges CBRE faced in taking on the supply chain fraud analytics project. “The challenge was always going to be just how long it took. It took so long to find the right people that could access and pull in a report and send us the data files on a monthly basis,” Langley says.
“The other learning was that, while we took finance and compliance leadership along on the journey with us, we needed people to be able to assess what the data scientists had done on the algorithms. We wanted someone to check that, and that’s quite a lot of work.
“To get people in compliance and finance to spend time on it was probably the biggest challenge we had. We should have advised them to block out a whole month where they could do work on this. As opposed to us coming in and saying ‘okay we’ve got the data now, can you look at it?’ and them saying ‘maybe in three months’ time we can’.
“The biggest thing was the downstream support we needed and being able to get people to spend time validating what you’ve done,” Langley says.
3. Don’t wait for perfection
Firms shouldn’t be afraid to share data, or think they have to wait until the process is perfect, says Maciejewska. “Don’t be afraid to publish the data, and don’t try to hold it back until it’s perfect, because it never will be. When the data was published to the business, we gathered a lot of feedback and then we were able to improve,” she says.
“If people tell us what’s wrong with it [the data], we’ll be able to fix that and it will benefit both parties. With our spend data, we were lucky enough to be able to publish it quite early. Even though we still had one or two local databases running in parallel, we could see a lot of engagement from the business with this kind of feedback and advice of what should be improved.”