The Fall of Predictive Analytics: What Went Wrong

By Convertr - October 8, 2019

It wasn’t long ago, perhaps four or five years, that the term “predictive analytics” was the most exciting thing in B2B martech. Arriving with the promise of leveraging a company’s first-party data, combined with a “secret sauce” of third-party data and black-box algorithms, predictive analytics was going to revolutionize B2B marketing.

Led by Lattice Engines, a whole host of companies rapidly entered the space. Each claimed that its predictive capabilities would cure a multitude of marketing ailments, make prospecting more manageable and, most importantly, help companies identify buyers who were in market for their products at that very moment. Almost overnight, roughly two dozen players crammed themselves into the “predictive” space, raised an abundance of capital, and went to war with one another as companies scrambled to build predictive capabilities into their marketing strategies.

Fast forward to 2018, and you would’ve been hard-pressed to find a company in B2B martech claiming “predictive analytics” as its core competency. After a few years of false starts, most of the leading players in the space pivoted their messaging to become Customer Data Platforms (CDPs), data orchestration platforms, or other adjacent platform types. Predictive was still part of the offering, and a key selling point, but it became a feature instead of a foundation.

That brings us to the present day, when two of the last remaining players in this formerly white-hot space were purchased by companies well outside of martech. With almost all of the others having been either acquired or reimagined, it’s clear that stand-alone predictive analytics wasn’t ready for prime time.

The question, then, is why predictive didn’t live up to its promise. How could so many intelligent, well-funded, well-intentioned people have gotten this wrong?

Well, having been close to many of these companies through both the boom and the bust years, I believe four main factors led to this current state:

  1. Cost – Bottom line: these solutions weren’t cheap. With the average starting price coming in near the six-figure mark, it was critical for these platforms to deliver value in line with the cost. While global enterprise companies with deep pockets could swallow the hefty price tag and allow some time for results, others with tighter budgets needed to see a substantial return on investment quickly. When results fell even marginally short of expectations, it became easy to justify pulling the plug and chalking the purchase up as a failed experiment.
  2. Data Volume – To adequately feed their models, the predictive companies needed a certain amount of first-party data to deliver viable results. While the same global enterprise customers that could afford the solutions in the first place possessed the volume of data necessary to feed the machine, moving down-market to smaller companies meant a natural decrease in data volume. That decrease then necessitated more “fuzzy math” and more third-party data to fill in the blanks. While these additives worked for some, they didn’t deliver the life-changing results many expected.
  3. Data Quality – This is the foundational problem that exacerbated the first two issues exponentially, and it’s the one problem that was almost entirely out of the predictive companies’ control. It’s pretty simple: to build effective models, you need valid data, and the fact remains that most companies, big and small, suffer from massive data quality issues. If your first-party data is defective to begin with and you feed that flawed data into a high-powered predictive engine, the results that come out the other end will, naturally, be defective as well. While machine learning can smooth some of the bumps, the false positives and negatives lurking in the majority of first-party data sets were too problematic (a short illustration follows below).
  4. Time to Value – While data quality is the most significant point of failure, this last issue was the nail in the coffin. Given enough time, and some serious help with data quality, there’s a strong likelihood that the models would’ve worked for the majority of companies.

Unfortunately, for these tools to prove useful, it takes time to build the models, gather the data, deliver the results, begin marketing to the new targets, move them through the funnel, and close them. When you’ve made a massive investment in a new piece of technology, right or wrong, the expectation is that it will pay off fairly quickly, and that simply wasn’t realistic in this space.
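To make the data quality point concrete, here is a minimal, purely illustrative Python sketch (scikit-learn on synthetic data, not any vendor’s actual model) that trains the same simple classifier on clean versus corrupted training labels. The 30% flip rate is an arbitrary assumption standing in for the false positives and negatives described above.

```python
# Purely illustrative: simulate "garbage in, garbage out" by training the
# same model on clean vs. corrupted labels. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a first-party "in-market buyer" dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Flip 30% of the training labels -- an arbitrary proxy for the false
# positives and negatives common in CRM and marketing-automation data.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)

print("accuracy with clean labels:", accuracy_score(y_test, clean_model.predict(X_test)))
print("accuracy with noisy labels:", accuracy_score(y_test, dirty_model.predict(X_test)))
```

Running it, the model trained on corrupted labels typically scores worse on held-out data despite having identical features and the same algorithm: garbage in, garbage out.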

In the end, the underlying principles upon which most of these companies were built were sound, which means that, despite this stumble, predictive isn’t going anywhere. Instead, predictive capabilities are being rolled up into other platforms that can deliver a more holistic approach to prospect engagement.

Ultimately, though, the onus of ensuring long-term success falls only partially on the people creating the technology. The more significant issue is one that has to be solved by marketing, technology, and sales organizations, and that’s the issue of garbage data. Continue to feed garbage into your models and you’re sure to get garbage out. Fix it at the point of entry, then keep it fixed inside the stack, and the positive outcomes will happen more and more frequently.
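For teams that want to act on this, here’s a hypothetical sketch of what point-of-entry validation might look like. The field names, rules, and free-domain list are illustrative assumptions, not Convertr’s actual implementation.

```python
# Hypothetical sketch of point-of-entry lead validation -- field names and
# rules are illustrative, not any specific platform's implementation.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}$")
FREE_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}  # assumed blocklist

def validate_lead(lead: dict) -> list[str]:
    """Return a list of data-quality problems; an empty list means the
    lead is clean enough to enter the stack."""
    problems = []
    email = lead.get("email", "").strip().lower()
    if not EMAIL_RE.match(email):
        problems.append("invalid email format")
    elif email.split("@")[1] in FREE_DOMAINS:
        problems.append("personal email domain on a B2B lead")
    if not lead.get("company", "").strip():
        problems.append("missing company name")
    if not lead.get("phone", "").strip():
        problems.append("missing phone number")
    return problems

# Flag or route the record to remediation before it ever reaches the CRM,
# the marketing automation platform, or a predictive model.
lead = {"email": "jane@acme.com", "company": "Acme Corp", "phone": ""}
print(validate_lead(lead))  # ['missing phone number']
```

The design point is simply that checks like these run before a record enters the stack, so flawed data is rejected or remediated instead of quietly poisoning everything downstream.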

If data quality is holding you back, request your free demo today. 
