I recently published a blog post arguing that over the next 5 years many commercial MMM engine developers might face an uncomfortable truth: their code and algorithms are not defensible. As part of that article, I separated the “MMM Vendor Value Prop” into four components:
- The core computational engine and algorithms (aka “engine and modeling capabilities”).
- A set of applications that use the trained model provided by the MMM to make recommendations (e.g., spend optimization and revenue forecasting).
- A structural model and set of data definitions.
- A set of integrations into data sources and production processes to run the engine and algorithms.
I then sketched out an argument that because the first two bullet points are very hard to defend, durable value will move “up the stack” into domain-and-vertical-specific intelligence, operational reliability, and ease of integration (into both other product-based components of the marketing stack and with internal toolchains and processes).
Here’s the actual statement:
The first claim I’m making is that open source will take over the first two bullet points. And the second claim I’m making is that, depending on company size, companies will either do the work associated with the last two bullet points themselves, or use an industry/vertical specific provider that leverages the open-source frameworks from the first two bullets (Larger companies will roll their own; smaller companies will use a vendor).
I also summarized that idea in a LinkedIn post with a buyer’s point-of-view question for MMM vendors: Why, concretely, is using their product a better idea than custom-coding a purpose-built solution on top of PyMC?
(using PyMC as a stand-in for open-source tooling).
To my mind, this is the key question that any vendor should be able to answer very concretely (and the answer should be on their website, in very concrete form).
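To make the question concrete, it helps to see how small the core engine primitives actually are. The sketch below is illustrative only (plain Python, not PyMC): it implements the two workhorse MMM transforms, geometric adstock and Hill saturation, with parameter values I invented by hand. A real engine, open source or commercial, would estimate these parameters from data.

```python
# Illustrative sketch: two core MMM transforms in plain Python.
# Parameter values (decay, half_sat, shape) are invented for illustration;
# a real engine estimates them from data.

def geometric_adstock(spend, decay):
    """Carry a fraction `decay` of each period's effect into the next period."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

def hill_saturation(x, half_sat, shape):
    """Diminishing returns: response rises toward 1.0 as adstocked spend grows."""
    return x ** shape / (half_sat ** shape + x ** shape)

weekly_spend = [100.0, 0.0, 0.0, 50.0]
adstocked = geometric_adstock(weekly_spend, decay=0.5)   # [100.0, 50.0, 25.0, 62.5]
response = [hill_saturation(x, half_sat=60.0, shape=1.5) for x in adstocked]
```

The point is not that this toy is an MMM; it's that the undefended part of the stack (the transforms and likelihoods) fits on a page, which is exactly why the vendor answer to the question above has to live somewhere else.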
Two MMM CEOs, Henry Innis (Mutinex) and Charles F. Manning (Kochava), disagreed publicly with the blog post. I’m genuinely happy they did. This industry needs more transparent debate, and both of their responses were professional, substantive, and worthwhile contributions to the conversation. I also want to say clearly: I respect Henry and Charles, and nothing here is meant as a criticism of them or their teams.
Henry Innis’s Point: Incentives and Money Keep Vendors Ahead
Henry’s core disagreement is direct: he believes third-party MMM vendors are (and will remain) “far, far ahead” of open-source implementations (largely because commercial incentives fund product maturity).
Two specific points stood out:
- The value is in the product around MMM, not the algorithm. Henry says most MMM value comes from solving product problems around the model, not from the modeling technique itself.
I think Henry and I are in complete agreement here.
- AI may reduce the incentive to open source. He argues that many open-source efforts are sustained because they monetize elsewhere (implementation partnerships, customization, consulting, benchmarked data). If AI-assisted development reduces the “end state” that needs to be maintained, that value may shift into new SaaS surfaces rather than staying tied to open-source projects in their current form.
This second point is an interesting prediction in and of itself. Many open-source efforts will struggle in the years to come; an early sign of trouble is that Tailwind recently laid off 75% of its engineers.
In essence, Henry’s argument is that generative AI will cause open-source projects to falter, and commercial engines (funded by customer revenue) will be able to stay ahead.
This is a place where reasonable people can disagree. And, to be clear, I disagree with Henry: corporate-backed open source, foundations, and vendor-adjacent ecosystems can sustain maintenance even if smaller OSS projects struggle.
Charles F. Manning’s Point: Trust is Built Outside the Engine
Charles’s response was about “trust and defensibility” – the key idea being that commercial MMM vendors, collectively, have established a basis for customer trust that enables them to defend their market (and that, because of this, the open-source engines will not get additional traction).
Using his numbering, the core of his argument consists of the following three objections:
- Objection 3: Optimization is the Moat. In Charles’s view, the defensible layer is optimization: forecasting outcomes under constraints and balancing short-term performance with long-term value. He claims that commercial MMM optimization is sophisticated and delivers substantial enterprise value and that similar optimization layers don’t exist in typical open-source stacks today.
The disagreement Charles and I have is twofold. First, I am making a set of predictions about what will happen, and what will be true 5 years from now, and he’s talking about what exists in the market today (to some extent, we are talking about different things). For other points of view on the current state of open-source MMM, I recommend the discussions from Digiday, Search Engine Land, and EMarketer.
And, second, I simply don’t think optimizers and spend forecasters are defensible technologies.
- Objection 4: Domain Expertise > Generic Modeling. Charles also emphasizes that domains like mobile advertising have unique constraints (attribution nuances, conversion lags, SKAdNetwork gaps, and so on). You can’t model what you don’t understand, and “generic MMM” will miss important real-world structure. Kochava’s product bakes in domain-specific intelligence based on more than a dozen years in the market.
I don’t think Charles and I disagree on this at all. This is actually a foundational thesis for Game Data Pros: effective optimization requires domain expertise and verticalization. A substantial part of the value-add is knowing what to do in a specific domain, not the core engine or modeling capabilities.
- Objection 5: Modeling Code is not the Product. Charles states that “Model architecture is only ~10% of the challenge.” The rest is data reliability, validation, uplift testing, attribution reconciliation, and governance. These are the operational “scaffolding” that makes results defensible.
Here too, I think Charles and I are in complete agreement. And we both agree with Henry.
Charles concludes his response by saying:
“Moving Up the Stack” Is What We Already Do. The article claims value will shift from algorithms to integration, QA, and scenario planning. That’s already our model. AIM is SaaS MMM built for action, not academic benchmarking. StationOne is next.
To which I can only say: Great. We are in total agreement.
Except, of course, that I think performance standards and benchmarks matter, and that the phrase “academic benchmarking” could be viewed as somewhat dismissive. Without performance standards and benchmarks, I don’t see how a customer can make an informed choice between the 50 or so providers in Marketing Science Today’s Provider map.
There’s a Lot of Common Ground Here
Henry and Charles’s objections align pretty closely with each other and with what I actually wrote.
- Henry: the value is mostly in the product around MMM, and commercial incentives fund that product.
- Charles: the moat is optimization, domain intelligence, reliability, QA, validation, integrations, governance (i.e., everything around the model and algorithms).
That’s extremely close to my claim that engines and algorithms are becoming commodities while value mostly becomes verticalized and domain-specific.
So, where’s the disagreement? I think it’s mostly about what “open source replaces” actually means.
When I say open source “replaces” commercial MMM implementations, I don’t mean the world stops buying (or leasing) MMM engines in the short term. I mean that the core modeling and optimization stack will be increasingly based on open source, and that, over time, we will have open baseline implementations (increasingly good, increasingly automated).
Faced with that, some commercial vendors will continue to develop their engines. But most will try to win by layering value on top of open source platforms (and not by asking customers to trust a proprietary system without independent evidence).
In much the same way that 60% of developers build on PostgreSQL, I would be willing to bet that, in 5 years’ time, 80% of new MMMs will be built on an open-source framework.
About Benchmarks and Test Suites
In a separate LinkedIn post, I praised Mutinex for building an open-source framework for evaluating MMMs and publishing “rough benchmarks” for what good performance looks like. We can argue about whether they chose the right metrics, and whether or not their performance thresholds are the right ones, but I love the fact that they jump-started a public conversation about metrics and performance standards.
- MAPE / sMAPE: excellent <5%, good 5–10%, acceptable 10–15%, poor >15%
- R²: excellent >0.9, good 0.8–0.9, acceptable 0.6–0.8, poor <0.6
- Stability & sanity checks: parameter-change tests, perturbation tests, and placebo ROI bands
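The first two of those bands are straightforward to operationalize. The sketch below is mine, not Mutinex's code: it computes MAPE, sMAPE, and R² in plain Python and grades them against the thresholds quoted above (the function names and example numbers are invented for illustration).

```python
# A minimal sketch of the fit metrics and grading bands quoted above.
# The thresholds come from the benchmarks discussed in this post;
# function names and example data are mine.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def smape(actual, predicted):
    """Symmetric MAPE, in percent."""
    return 100.0 * sum(2.0 * abs(a - p) / (abs(a) + abs(p))
                       for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def grade_mape(value):
    # Bands from the post: <5 excellent, 5-10 good, 10-15 acceptable, >15 poor.
    if value < 5:  return "excellent"
    if value <= 10: return "good"
    if value <= 15: return "acceptable"
    return "poor"

def grade_r2(value):
    # Bands from the post: >0.9 excellent, 0.8-0.9 good, 0.6-0.8 acceptable, <0.6 poor.
    if value > 0.9:  return "excellent"
    if value >= 0.8: return "good"
    if value >= 0.6: return "acceptable"
    return "poor"

actual    = [100.0, 120.0, 90.0, 110.0]   # illustrative weekly revenue
predicted = [ 98.0, 125.0, 92.0, 107.0]   # illustrative model fit
print(grade_mape(mape(actual, predicted)), grade_r2(r_squared(actual, predicted)))
```

On the toy numbers above, both grades land in the "excellent" band; the value of a shared suite is that every vendor's holdout predictions would be graded against the same bands.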
Even more commendably, Henry publicly praised Recast for pioneering the public discussion of MMM performance. And he was right to do so: Recast’s Accuracy Dashboards, their discussion of model validation, and their material on backtesting are exemplary.
Simply put, if we think MMMs are a critical part of the marketing infrastructure, and we think there are substantial performance differences between them, then we ought to be able to define objective performance standards and metrics, and then compare different MMMs using publicly available test suites in exactly the same way that people compare databases.
What we shouldn’t do is claim that the open-source frameworks (or our competitors) aren’t very good, but not have a public test suite or standardized definitions of what good means.
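To show that such a suite needn't be exotic, here is a hedged sketch of one sanity check it could include: a placebo test. The idea is to feed the evaluation random "fake channel" spend and confirm the measured relationship with outcomes sits in a band near zero. The correlation-based check below is my simplified stand-in for refitting a full model with a placebo channel; all names and data are invented for illustration.

```python
# Hypothetical sketch of a placebo check for a public MMM test suite.
# A credible pipeline should find essentially no effect for a channel
# made of pure noise. Averaging |correlation| over random placebo spends
# stands in for the real procedure (refit the model, read the placebo ROI).

import random

def placebo_correlation(outcome, seed=0, n_trials=200):
    """Average |correlation| between the outcome series and random placebo spends."""
    rng = random.Random(seed)
    n = len(outcome)
    mean_y = sum(outcome) / n
    var_y = sum((y - mean_y) ** 2 for y in outcome)
    total = 0.0
    for _ in range(n_trials):
        placebo = [rng.random() for _ in range(n)]
        mean_x = sum(placebo) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(placebo, outcome))
        var_x = sum((x - mean_x) ** 2 for x in placebo)
        total += abs(cov / (var_x * var_y) ** 0.5)
    return total / n_trials

outcome = [100.0 + (i % 7) for i in range(52)]  # a year of toy weekly outcomes
score = placebo_correlation(outcome)
# A real suite would assert the model's placebo ROI stays inside a published band.
```

Databases get compared with TPC-style harnesses that anyone can run; a handful of checks like this, standardized and public, would do the same for MMM engines.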
The Path Forward is Open Source and Test Suites
My original article was long (~4,000 words). Here’s a simplified form of the predictions.
- The modeling and optimization core will become mostly open. I don’t see any reason to recant any of these predictions. The trajectory is the same: better libraries, better tooling, and (with AI) faster iteration and adoption.
- Without a shared test suite and standards of accuracy, open source will win “the engine wars” by default. Without hard evidence, customers have no objective reason to believe a specific proprietary engine is better, and plenty of reasons to prefer an open implementation. And, over time, for the reasons outlined in the original article, the open-source implementations will pull ahead and become the default engines plugged into enterprise marketing architectures.
- Vendors will differentiate above the core. Domain-specific models, priors, and constraints, automated QA, data pipelines, experimentation and uplift integration, governance, and workflow UX are all important pieces of an overall marketing architecture, and they’re the place where differentiation and value creation will happen.
In an upcoming article, I’m going to focus on the second of these bullet points and write more about what credible MMM engine validation should look like (and what a public test harness could include).
But for now, I’m just happy we’re all talking about this in public.



