9 Overlapping Predictions That, Collectively, Explain Why Open Source Will Mostly Replace Commercial MMM Implementations Sometime in the Next Five Years

At various points in the past year (at the 2025 Game Revenue Optimization Mini-Summit and, more recently, on LinkedIn), I’ve been an advocate for a take that makes some people uncomfortable:

Open Source is Going to Dominate the Future of Commercial MMM.

When I say that in private conversations, I usually get one of two flavors of pushback.

  1. “Sounds like a big change. What do you mean by dominate?”
  2. “You do game revenue optimization for a living — talking about the future of MMM isn’t exactly in your swim lane. Why do you care?”

The second question is easy. In mobile games, marketing measurement isn’t an analytics side quest — it’s part of the core loop. If you can’t measure incrementality, you can’t compute marginal Return on Advertising Spend (ROAS) or forecast payback. If you can’t compute marginal ROAS or forecast payback, you can’t scale. And since GDP occasionally gets retained to help evaluate user attribution and marketing measurement systems and build roadmaps, our customer base is effectively saying: “Yes, GDP, this is precisely your swim lane.”
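To make that loop concrete, here's a minimal sketch of why marginal ROAS, not average ROAS, is what gates scaling decisions. The response curve and every number below are hypothetical illustrations, not data from any real campaign:

```python
# Sketch: why marginal ROAS (not average ROAS) gates scaling decisions.
# The response curve and all parameter values are hypothetical.

def revenue(spend, v_max=500_000.0, k=150_000.0):
    """Toy saturating response curve: revenue as a function of channel spend."""
    return v_max * spend / (k + spend)

def average_roas(spend):
    """Total revenue divided by total spend."""
    return revenue(spend) / spend

def marginal_roas(spend, eps=1.0):
    """Revenue gained per extra dollar of spend, via a finite difference."""
    return (revenue(spend + eps) - revenue(spend)) / eps

for s in (50_000.0, 150_000.0, 400_000.0):
    print(f"spend={s:>9,.0f}  avg ROAS={average_roas(s):.2f}  marginal ROAS={marginal_roas(s):.2f}")
```

At high spend levels the average ROAS can still look healthy while the marginal ROAS has collapsed, which is exactly why a channel can "look profitable" and still be unscalable.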

The first question is harder, because “open source will dominate” is imprecise and implies a significant change in the market. Let’s start by defining dominate.

By dominate, I mean that the default foundation for serious MMM implementations will be open-source frameworks like Meridian or PyMC, and that most commercial value will move up the stack into integration, operations, governance, and domain-specific modeling.

How will this happen? The rest of this article contains a set of predictions for how the commercial landscape of MMM technology will evolve over the next 3–7 years (and why I think that the excellent provider maps from Marketing Science Today are going to change dramatically as a result).

Marketing Science Today’s MMM Provider Map.
MMM Provider Map from https://marketingscience.today/

This article is formulated as a set of nine specific predictions that, collectively, justify the claim that open source is going to dominate the future of MMM.

Before we get started, it’s important to note that, conceptually, an “MMM implementation” divides into four pieces:

  • The core computational engine and algorithms (aka “engine and modeling capabilities”). This is the hard data science code and is also commonly referred to using the following names: model layer, inference engine, and model training engine.
  • A set of applications that use the trained model provided by the MMM to make recommendations (e.g., spend optimization and revenue forecasting).
  • A structural model and set of data definitions. This is the data-modeling part of the job and is also commonly referred to by the following names: model structural form, measurement framework, data and metrics taxonomy, schema & definitions, or semantic model.
  • A set of integrations into data sources and production processes to run the engine / algorithms.

The first claim I’m making is that open source will take over the first two bullet points. The second claim is that, depending on company size, companies will either do the work associated with the last two bullet points themselves or use an industry/vertical-specific provider that leverages the open-source frameworks from the first two bullets (larger companies will roll their own; smaller companies will use a vendor).

And, of course, if you’re the sort of person who likes their predictions laced with some empirical validation, everything I’m talking about in this article is already happening (per William Gibson, the future is already here. It’s just not evenly distributed).

Here, for example, is a recent post from LinkedIn:

MMM vendors are increasingly losing deals to the open source platforms.
Source: https://www.linkedin.com/feed/update/urn:li:activity:7407778595366125568/

With that said, let’s get started.

Prediction #1: No Private Vendor Will Maintain a Durable “Engine and Modeling Capabilities” Edge Over Open Source Frameworks

If you’ve worked in software long enough, you know how this goes.

A core technology becomes strategically important and broadly applicable. Open-source communities form. Enterprises start contributing. Vendors stop competing on the core algorithms and software capabilities, and start competing on packaging, workflow, and services.

Three examples from recent history: operating systems (Linux), relational databases (PostgreSQL, MySQL), and container orchestration (Kubernetes).

MMM is lined up for the same pattern because it has the same properties as databases, operating systems, and orchestration frameworks. That is, it has:

  • High strategic value. Being able to optimize advertising spend is mission-critical for most companies.
  • Low “technical secret sauce.” MMM has sixty years of academic research behind it and the core ideas are well-understood. The core ideas have been refined, and re-refined, and most MMM engines have easily understood structural models.
  • MMM is not core competence. For most companies, MMM is an analytical tool that helps them allocate advertising spend more effectively. From a business perspective, the real differentiation is elsewhere (product, brand, …).
  • A constant need to evolve in response to platform changes. In fact, the modern resurgence of MMM, at least in some verticals, dates back to Apple’s decision to change privacy rules (for the current rules on iOS, see Apple’s ATT docs and Apple’s SKAdNetwork docs).
  • A shared problem structure across companies. This will be revisited more extensively below in Prediction #6. For now, suffice it to say that two educational software companies that ship mobile apps and charge a subscription are very likely to have similar MMM implementations, and there is little or no point in either of them building the underlying technology themselves.
  • A huge premium on transparency and trust. In many ways, this is part of “high strategic value.” If a tool is being used to make important decisions, it needs to have a high level of transparency and trust. And MMM is especially vulnerable to open-source standardization because the trust surface area is huge: inputs, priors, assumptions, diagnostics, and decomposition logic all need to be inspectable.

The first three of these argue that companies will outsource development of MMM technology. The last two imply that if your commercial moat is “our engine and algorithms are better but we can’t tell you why because trade-secret,” you might run into problems as the market matures.
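To illustrate just how well-understood the core ideas are, here is a minimal sketch of the two transforms at the heart of most MMM engines: geometric adstock (carryover) and Hill saturation (diminishing returns). This is textbook material, not any vendor's code, and the parameter values are purely illustrative:

```python
# Sketch of the two "well-understood core ideas" behind most MMM engines:
# geometric adstock (carryover) and Hill saturation (diminishing returns).
# All parameter values are illustrative.

def geometric_adstock(spend, decay=0.6):
    """Carry a fraction of each period's effect forward into later periods."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

def hill_saturation(x, half_sat=100.0, shape=2.0):
    """Diminishing returns: the response approaches 1 as the input grows."""
    return x**shape / (half_sat**shape + x**shape)

weekly_spend = [120.0, 80.0, 0.0, 0.0, 150.0]
adstocked = geometric_adstock(weekly_spend)
response = [hill_saturation(a) for a in adstocked]
print(adstocked)
print([round(r, 3) for r in response])
```

The real engineering work in a production MMM is everywhere else: priors, hierarchies, calibration, and diagnostics, which is precisely why the "secret engine" moat is thin.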

Prediction #2: Most Major Companies Will Run Internal MMM Systems On Top of an Open-Source Codebase

The first part of this prediction centers on the following question: at scale (say, $100M in annual media spend), should a company rely on an MMM run by an MMM vendor? For many brands, the answer is no. Most large-scale advertisers should, and will, run and maintain MMM systems internally, even as they lean on external experts for initial setup and periodic checkups.

Why? Because at a certain scale, the MMM isn’t a model or an algorithm or a separate piece of software. It’s part of a much larger system composed of:

  • Data contracts with a large number of other marketing systems.
  • Features engineered on top of proprietary data (which, in many cases, cannot be shared or has to be scrubbed extensively before sharing for compliance reasons).
  • Integrated experimentation layers.
  • Stakeholder workflows, customized dashboards, and integrations to internal planning and financial systems.
  • Repeatable forecasting routines.

All of this is incorporated into an internal source of truth and is tied to mission-critical, highly visible processes that are often company-specific (that is, the decision to bring the MMM in-house mostly means owning the data contracts, the refresh cadence, governance, experimentation, and integrations, not re-inventing Bayesian inference).
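As a sketch of what "owning the data contracts" means in practice, here's a toy validator for one upstream feed. The field names and rules are hypothetical; the point is that the in-house work is mostly boring, durable invariants, not novel modeling:

```python
# Hypothetical sketch: "owning the data contracts" mostly means enforcing
# simple, boring invariants on every upstream feed. Field names are illustrative.

REQUIRED_FIELDS = {"date": str, "channel": str, "spend": float, "impressions": int}

def violations(row):
    """Return a list of contract violations for one ingested row."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in row:
            problems.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            problems.append(f"bad type for {field}: {type(row[field]).__name__}")
    if isinstance(row.get("spend"), float) and row["spend"] < 0:
        problems.append("negative spend")
    return problems

good = {"date": "2025-06-01", "channel": "paid_social", "spend": 1200.0, "impressions": 50_000}
bad = {"date": "2025-06-01", "channel": "paid_social", "spend": "1200"}
print(violations(good))
print(violations(bad))
```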

And once a company decides to use an internal system, the decision to leverage a robust open-source framework is an easy one to make.

This trend is already visible. Google’s Meridian is explicitly positioned as enabling advertisers to run in-house MMM. And Meta’s Robyn was built for “in-house and DIY modelers,” with published case studies including in-house applications.

Robyn’s documentation is clear: the goal is to support in-house modeling.
(Taken from https://facebookexperimental.github.io/Robyn/docs/analysts-guide-to-MMM/)

The interesting second-order effect is contribution. Once enough big companies run open-source MMMs in production, they’ll start contributing code and fixes back. Not out of charity, but because maintaining private forks is expensive and they want the ecosystem to solve shared problems in standard ways (like clean room inputs, reach/frequency handling, calibration tooling, and standardized diagnostics).

That flywheel is why open source solutions tend to accelerate once they reach critical mass (and it’s also why private solutions, once they fall behind, never catch up). And accelerating flywheels lead to dominant solutions.

Prediction #3: The Two “Leading Open-Source Bayesian MMMs” Will Become Fundamentally Different Systems Over Time

Right now, the two Bayesian open-source platforms that are leading the conversation are Google’s Meridian and PyMC-Marketing’s MMM.

They’re both Bayesian. They’re both open source. But they don’t feel like the same product at all.

(If you want a deeper comparison, there are already multiple comparisons floating around, including a head-to-head benchmark from PyMC Labs and some excellent practitioner writeups. See, for example, this comparison from early 2025 or this pair of articles from PyMC.)

My take is simple:

  • If you’re resource constrained and need a tighter “path to value,” Meridian’s ease of use is a very nice feature. Both Google and the community will lean into that, making MMM easily accessible to a large number of lightly-resourced companies.
  • If you have strong internal modeling expertise and you need to build something bespoke (hierarchical, multi-outcome, time-varying, experiment-informed coefficients, …), PyMC-Marketing is the more extensible base. And PyMC will lean into that, in the process becoming the enterprise toolkit for MMM.
  • This gap will widen over time because Meridian will optimize for adoption and repeatability, while PyMC will optimize for extensibility and enterprise-grade composability.

Of course, these first three predictions are the backbone of the prediction everyone wants to argue about.

Prediction #4: By 2030, Many Enterprises Will Run “Open-Source MMM / In-House Team / Ecosystem Contributions”

Today, most enterprise MMM “systems” are still a patchwork of legacy martech tools, bespoke SQL, and spreadsheet glue—refreshed quarterly or semi-annually and dependent on a few heroic analysts. That’s why this shift will feel less like “switching models” and more like infrastructure modernization: once the core technology is standardized, the real work becomes building durable data contracts, QA, governance, and decision workflows around it.

The general pattern is the same one we’ve seen elsewhere:

  • MMM is becoming infrastructure.
  • Infrastructure gets standardized.
  • Standardization favors open source.
  • Enterprises keep control of the instance, the data, and the business logic.

The best mental model here is Kubernetes. Kubernetes won not because one vendor stayed ahead forever, but because it became the standard substrate that everyone extended: cloud providers, security tooling, observability, deployment pipelines, and internal platform teams. MMM is headed toward the same kind of ecosystem. Once a handful of large advertisers operationalize open-source MMM, you’ll see an explosion of “everything around the model”: data connectors, calibration pipelines, scenario tooling, automated QA, governance, and decision workflows.

And this is where contributions become inevitable. In practice, “contributing back” won’t look like brands publishing their spend curves or revealing confidential information. It will look like bug fixes, stability improvements, new diagnostics, better infrastructure for priors, standardized data schemas, and reference implementations for common patterns (geo hierarchies, reach/frequency, promotions, creative fatigue). Those are the shared problems that everyone wants solved once and then maintained by the community.

So, the MMM vendor category doesn’t disappear. Instead, it moves up the stack, from “owning the engine” to “owning deployment, governance, integrations, and vertical packaging.”

Prediction #5: Most Providers in the “MMM Platform Map” Will Be Forced to Pivot (Or Become Commoditized)

If you look at provider maps like the one above from Marketing Science Today, you’re basically looking at a snapshot of a market where most of the enterprise value is currently held by:

  • Proprietary implementations.
  • Bespoke onboarding and integrations.
  • Customer lock-in.
  • Opaque modeling decisions that are hard to replicate.

Once the open-source substrate becomes standard, a substantial percentage of that value simply evaporates.

Which Vendors Will Survive?
AI-Generated MMM Provider Map Circa 2030

Some vendors will still win—not by owning the engines and algorithms, but by owning integrations into clean rooms and walled gardens, governance/model risk tooling, change management, and the operational layer that makes MMM usable week-to-week.

MMM consultants will continue to prosper by offering specialized services (in much the same way that Percona helps companies get the most out of their open-source databases). Enterprises will have internal MMM teams that know the business deeply. They’ll still need help with the initial development of their MMM, and specialist help when things go south in a complicated way.

And some companies will offer “MMM as a service” on top of the open-source platforms. I expect that the way this will roll out is that a company will develop deep expertise in a specific vertical (see the next prediction), and operate and maintain the MMM in production for smaller companies (that don’t want to have expertise in keeping an MMM running). Note that these will be relatively thin layers on top of open-source frameworks.

What won’t prosper is proprietary engines or algorithmic / data-science code.

Prediction #6: Verticalized MMM Becomes a Real Category (And It Will Look Like “Open-Source / Hosting / Domain Expertise”)

Here’s the (slightly) exaggerated version of an important claim:

Companies in the same vertical need the same MMM (in everything except the data, and mostly the same data, too).

This is not a new insight. In 2005, in an article entitled Market Response Models and Marketing Practice, Hanssens, Leeflang, and Wittink talked about “standardized models” and “the availability of empirical generalizations.” And in 2009, in an article entitled Market Response and Marketing Mix Models: Trends and Research Opportunities, Bowman and Gatignon explicitly talked about “Industry Specific Contexts.” Newer work and meta-analyses show that response patterns can be meaningfully different in specific sectors (e.g., entertainment), limiting naïve transferability and strengthening the case for vertical-specific defaults, priors, and diagnostics.

To make this more concrete, consider the following verticals:

  • Subscription digital goods (streaming, SaaS-ish consumer apps, memberships). Focus: long payback windows and retention-driven growth. Core issues: linking media to CAC/LTV, cohort behavior, and delayed revenue realization (making outcome measurement unreliable).
  • Mobile video games / live-service games. Focus: both acquisition and re-engagement, with marketing organized around strong content beats. Core issues: mixed performance + brand dynamics, event-driven baselines, platform signal loss, overlapping measurement systems, creative fatigue, and the need for high-frequency (daily/weekly) budget adjustments.
  • DTC e-commerce for physical goods. Focus: heavy paid social/search, promotion calendars, and operating within inventory constraints. Core issues: major confounders from merchandising/pricing/promo strategy, and separating media effects from cultural events and seasonality (e.g., holidays).
  • Omnichannel retail (brands with physical stores and online commerce). Focus: coordinating a wide mix of legacy and digital media across multiple purchase paths. Core issues: inconsistent measurement units (e.g., GRPs vs. impressions), geo/store hierarchies, distribution changes, attributing media to footfall vs. online activity, and disentangling holiday-driven demand from true incrementality.
  • QSR / food delivery (fast food, restaurants with delivery, delivery aggregators). Focus: local demand generation with always-on promotion strategies, increasingly tied to digital outcomes (e.g., app installs, online orders). Core issues: localized dynamics, promo-driven demand shifts, weather sensitivity, competitive pressure, and multi-outcome measurement across in-store and digital channels.
  • Healthcare services / providers (health systems, urgent care, dental/ortho, telehealth). Focus: high-consideration decisions with conversions that often occur offline (calls, intake, scheduling) and vary heavily by geography. Core issues: multiple outcomes (inquiries, appointments, treatments, and revenue), long and variable lags in ad response, capacity constraints (clinician supply and scheduling), compliance concerns, and confounders like payer mix/open enrollment cycles, network changes, local competition, and seasonal demand shocks.
  • … (feel free to add Education, Insurance, … )

Each of these verticals is clearly distinct from the others (the requirements for digital subscription goods are very different from those for healthcare), and each is ripe for a standardized model and SaaS services built on top of hosted open source.

That is, companies in a single vertical aren’t identical, but they are similar enough that you can build a single verticalized MMM system. Such a system would have:

  • A canonical data model / structural form.
  • A set of priors and response curve defaults.
  • A standard set of confounders.
  • A standard set of integrations to vertical-specific tools.
  • And a standard reporting workflow.

Note also that building this, in the open-source world, requires deep domain expertise, and just-enough MMM expertise to encode the right confounders and workflows (but not the kind of research-grade modeling and coding effort required to build the core framework). In other words, this is best done as a layer on top of the open-source MMM toolkits that are already available.
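To make "a layer on top of the toolkits" concrete, here's a hypothetical sketch of a vertical defaults pack. It's mostly configuration plus light QA, not modeling code; every name and value below is illustrative rather than drawn from any real product:

```python
# Hypothetical sketch: a vertical "defaults pack" is mostly configuration,
# not modeling code. Every name and value below is illustrative.

MOBILE_GAMES_DEFAULTS = {
    "outcome": "daily_net_revenue",
    "channels": ["paid_social", "video_networks", "search", "offerwall"],
    "confounders": ["content_release_calendar", "platform_featuring", "seasonality"],
    "adstock_decay_prior": {"dist": "beta", "alpha": 2, "beta": 5},  # short carryover
    "refresh_cadence_days": 7,
    "reports": ["budget_shift_recommendation", "payback_forecast"],
}

def validate_pack(pack):
    """Minimal QA: a pack must name an outcome, channels, and confounders."""
    required = ("outcome", "channels", "confounders")
    return all(pack.get(key) for key in required)

print(validate_pack(MOBILE_GAMES_DEFAULTS))
```

The domain expertise lives in choices like which confounders to include and how fast adstock should decay; the open-source engine consumes the pack and does the inference.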

I also expect that many of these companies will actually be “spun-out” from companies already doing business in the vertical (in the same way that Discord began life as the communication layer of Fates Forever).

The prediction is that the open-source community will build and maintain the hard data-science parts, as both out-of-the-box systems and extensible toolkits, and the vertical-specific hosting companies will build and maintain the domain-specific models (and compete on domain expertise, not MMM expertise).

Prediction #7: After Vigorous Debate, the Industry Will Converge on What “Accurate MMM” Means (And It Won’t Be a Single Number)

Right now, the idea of “accuracy” is a mess. Two different groups of people, or two different MMM providers, can both say “our MMM is highly accurate” and mean completely different things. For example, they could mean:

  • The model has high R² (or low RMSE, NRMSE, NMAE, …)
  • The model has good out-of-sample prediction error (e.g., using one of RMSE / MAPE / wMAPE / sMAPE / MASE, NRMSE, NMAE, …)
  • The model backtests well.
  • The model matches lift test and incrementality tests.
  • When we run the MCMC sampler again, we get the same results (sampler metrics like BFMI aren’t on this list: they matter for reliability, i.e., whether we can trust the sampler’s output, but they aren’t about accuracy).
  • The results are stable under time-series cross-validation.
  • The decomposition looks plausible to domain experts.

To make progress, the industry has to move toward a layered standard that looks like:

  1. Predictive sanity checks (R², RMSE, MAPE, wMAPE, etc.) with vertical-specific values for “good” and “great” performance (e.g., a 10% wMAPE is probably excellent in omnichannel retail, but not nearly as impressive in subscription digital goods).
  2. Stability checks (time-slice CV, holdouts, parameter stability)
  3. Decomposition plausibility (no insane baselines, response curves make sense to industry experts, and so on)
  4. Calibration / validation against experiments (geo lift, conversion lift, interrupted time-series analysis)
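The first two layers of that stack can be sketched in a few lines. The metric (wMAPE) is standard; the split sizes below are arbitrary illustrations:

```python
# Sketch of the first two layers of the evaluation stack:
# a predictive sanity check (wMAPE) and rolling time-slice splits.

def wmape(actual, predicted):
    """Weighted MAPE: total absolute error divided by total actuals."""
    num = sum(abs(a - p) for a, p in zip(actual, predicted))
    den = sum(abs(a) for a in actual)
    return num / den

def rolling_splits(n_periods, train_size, horizon):
    """Yield (train_indices, test_indices) pairs for time-slice CV."""
    start = 0
    while start + train_size + horizon <= n_periods:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + horizon))
        yield train, test
        start += horizon

actual = [100.0, 120.0, 90.0, 110.0]
predicted = [95.0, 130.0, 85.0, 105.0]
print(f"wMAPE = {wmape(actual, predicted):.1%}")
print(list(rolling_splits(n_periods=10, train_size=6, horizon=2)))
```

The harder layers (decomposition plausibility and experiment calibration) can't be reduced to a one-liner, which is exactly why they need a shared playbook rather than a single metric.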

Note that everyone is starting to talk about accuracy and performance measurement more seriously. Meridian’s documentation explicitly states that the goal is causal inference, and that out-of-sample prediction metrics are useful guardrails but shouldn’t be the primary way fit is assessed. Similarly, PyMC-Marketing explicitly documents evaluation workflows and time-slice CV, including Bayesian scoring rules like CRPS. And Recast has been a staunch advocate for stability and robustness.

The consensus will be less like “everyone uses metric X” and more like “everyone uses a shared evaluation playbook which is customized by vertical.”

Prediction #8: “Interoperability in the Marketing Stack” Will Stop Meaning “Everything has a Dashboard”

Today, most marketing systems are tied together at the dashboard level. System A produces a chart. System B produces another chart. A smart human stares at both charts and then decides what to do.

That’s not interoperability in any real sense. That’s parallel usage (possibly accompanied by “storing the data in the same relational database”).

In the next iteration of marketing measurement, interoperability will mean:

  • Shared metric definitions.
  • Shared data sets.
  • Machine-readable outputs.
  • And automated decision workflows (with humans supervising, not translating).

AI is going to accelerate this, not because LLMs magically fix data, but because they dramatically reduce integration friction.

Protocols like MCP (Model Context Protocol) are basically “standard tool interfaces for AI systems,” and they’re already being applied to analytics. AI tools let companies deal with messy and unstructured data and dramatically lower the barriers to system integration. Ad Exchanger recently published a nice summary of the value of MCP, but the key point is simply this: MCP adoption is spreading rapidly. For example, Google ships a Google Analytics MCP server so an LLM can connect to GA4 data directly; you can manage your Facebook ads via MCP; analytics vendors like Mixpanel have adopted MCP; and so on. Once MMM outputs and measurement systems are exposed through standard interfaces, LLM-driven agents can:

  • Map schemas across platforms
  • Translate metric definitions
  • Generate and maintain transformation code
  • And reconcile “same concept / different naming” problems that currently require senior analysts and significant amounts of tribal folklore.

This is the tedious plumbing work that marketing stacks have always needed… and never staffed adequately. And now it’s, if not easy, doable.
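As a toy illustration of the "same concept / different naming" problem, here's a sketch of reconciling platform columns against a canonical metric vocabulary. All of the column names and aliases below are hypothetical:

```python
# Hypothetical sketch: reconciling "same concept / different naming" across
# platforms with a canonical schema. All names below are illustrative.

CANONICAL = {
    "spend": {"cost", "media_cost", "amount_spent"},
    "impressions": {"imps", "views", "impression_count"},
    "conversions": {"purchases", "installs", "actions"},
}

def to_canonical(platform_row):
    """Rename a platform-specific row into the canonical schema."""
    alias_to_key = {alias: key for key, aliases in CANONICAL.items() for alias in aliases}
    return {alias_to_key.get(col, col): val for col, val in platform_row.items()}

row = {"amount_spent": 1250.0, "imps": 40_000, "purchases": 85}
print(to_canonical(row))
```

Today, building and maintaining that alias table is tribal knowledge held by senior analysts; the bet here is that LLM-driven agents generate and maintain it instead.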

Prediction #9: Standardization Creates Shareable Datasets, Enabling Academic Research that Will Accelerate Model Progress

In the long run, standardization creates three things (that don’t exist today at scale):

  1. Benchmark datasets (mostly synthetic and semi-synthetic) with known ground truth as well as standard definitions for metrics and data elements.
  2. A shared evaluation suite (the “accuracy playbook” from Prediction #7, but runnable as code).
  3. Privacy-safe collaboration patterns that let companies share researchable artifacts without sharing raw sensitive data.

Once those exist, academics can stop doing “MMM research in the void” and start iterating against problems that look like production.

There are already efforts aimed at connecting academics, advertisers, and vendors around MMM research (e.g., industry initiatives convening multiple stakeholders). The next step will be a shared evaluation suite — not just “use RMSE,” but a versioned set of tests that any MMM implementation can run: rolling time-slice CV, stability checks, decomposition plausibility checks, calibration scoring against experiments, and distributional scoring where appropriate.

In other words: an MMM will be able to pass or fail a standardized battery of tests the way software passes unit tests.
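Here's a sketch of what that could look like. Every check name and threshold below is a hypothetical placeholder, not an industry standard:

```python
# Sketch: an MMM "passing a standardized battery" could look like ordinary
# software checks. Check names and thresholds are hypothetical placeholders.

def check_predictive(wmape_value, threshold=0.15):
    """Predictive sanity: out-of-sample wMAPE under a vertical-specific bar."""
    return wmape_value <= threshold

def check_baseline_share(baseline_share, low=0.2, high=0.9):
    """Decomposition plausibility: the baseline share shouldn't be absurd."""
    return low <= baseline_share <= high

def check_lift_calibration(model_lift, experiment_lift, tolerance=0.25):
    """Calibration: model-implied lift within tolerance of the experiment."""
    return abs(model_lift - experiment_lift) <= tolerance * experiment_lift

def run_battery(results):
    checks = {
        "predictive": check_predictive(results["wmape"]),
        "plausibility": check_baseline_share(results["baseline_share"]),
        "calibration": check_lift_calibration(results["model_lift"], results["experiment_lift"]),
    }
    return checks, all(checks.values())

report, passed = run_battery(
    {"wmape": 0.11, "baseline_share": 0.55, "model_lift": 0.09, "experiment_lift": 0.08}
)
print(report, "PASS" if passed else "FAIL")
```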

Once the community has that, we get something we’ve never had: comparability. Practitioners can argue about assumptions instead of arguing about whose dashboard looks nicer. Vendors can compete on reliability and usability. And researchers can publish results that actually translate back into practice because everyone can reproduce them.

Did I Make a Mistake? Surely the Future Isn’t This Predictable

This article contains 9 fairly specific predictions about the future of MMM. Each of the predictions is plausible and reasonably well-supported (I could add more supporting details, but we’re already at almost 4,000 words).

If I’ve done my job well, you agree with six or seven of the predictions and have reservations about two or three of them. But you’re probably still on the fence about whether the MMM provider community is about to implode as their customer base standardizes on top of open-source MMM frameworks.

That’s okay. The goal was to start a conversation.

The point of view here is that we are in the “suddenly” part of the famous Hemingway quote.

“Gradually, and then suddenly” — Hemingway was talking about going bankrupt, but the quote applies to almost every major change. Things start slow and then accelerate.

But timing is hard. Bill Gates might very well wind up with the last word: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

Reporting from the 2025 Game Revenue Optimization Mini-Summit

(To learn how Game Data Pros can help you optimize your games, contact us)

In 2024, we held the first-ever Revenue Optimization in Games Mini-Summit at GDC. We did it because we didn’t like that there weren’t many revenue optimization talks at GDC and that, in general, the idea of “Game Revenue Optimization” doesn’t seem to get much, if any, mindshare at industry conferences.

So, instead of grousing, we organized our own summit in 2024. The feedback we got was incredible: the attendees loved the event, they thought the talks were amazing, and, more generally, they spent the next year asking us if we were going to do it again.

Spoiler alert: we did. We rented the same venue (the incredible American Bookbinders Museum), ordered a few thousand dollars’ worth of goat-cheese tarts and coconut macaroons, invited the world, and put on a show.

And what a show it was!

First and foremost, we had a set of world-class talks

After a brief introduction by Pallas Horwitz, the day’s emcee, the talks began at 2:15. We had five speakers.

  • Our CEO, Bill Grosso, opened the show with “10 Reasons MMMs Are More Interesting Than You Think” — an overview of how Generative AI combines with open-source libraries like Meridian to make building a good and useful MMM much more accessible to small companies than it was even 5 years ago (slides). 
  • Then Ryo Shima, CEO of JetSynthesys Japan, presented “How Game Revenue Optimization is Different in Japan” — an in-depth discussion of the behavioral differences between Japanese and Western gamers, and how that impacts monetization strategies (slides).
  • Tiffany Keller, one of the superstars at Liquid and Grit, followed Ryo and gave a talk on “Advanced Hybrid Monetization.” This was the graduate seminar version of the roundtable she held last February and was a comprehensive overview of the state of the art in hybrid monetization. 
  • And, finally, Joost Van Dreunen and Julian Runge closed the presentation part of the day with a sweeping overview of the future of brand engagement with gaming (slides).
Speakers, from Left: Pallas Horwitz, Bill Grosso, Ryo Shima, Tiffany Keller, Julian Runge, and Joost Van Dreunen.

Second, we had an amazing audience

Like last year, we were slightly nervous about this — the room only holds 105 people, and we had 340 people registered. Ultimately, we decided to issue 180 tickets. 75 people came, most stayed for the entire summit, and the event turned into a caffeine-fueled group conversation about revenue optimization. 

As a side note, the audience included at least one certified game design legend among the other luminaries. 

Third, the happy hour was delightful

“Most awesome part of GDC.” — Evan Van Zelfden

The combination of the incredible speakers and the amazing audience meant that the happy hour was more than just an excuse to have salmon brioches and artichoke salad while downing plastic glasses of red wine. The food was good, but the conversations were excellent and lasted until the museum closed.

Is Mobile App Revenue Moving Off-Platform? Industry Survey Indicates Landslide Changes in Web Store Adoption

During GDC 2024 in San Francisco, we hosted the Revenue Optimization in Games Mini-Summit. Industry leaders gave four fascinating presentations about revenue optimization in gaming, including an overview of the complete survey.

See our Reporting from the Game Revenue Optimization Mini-Summit follow-up post to learn more!

At Game Data Pros, a lot of our recent work on personalization has focused on what the Deconstructor of Fun podcast refers to as “Off-Platform Payments” and what Liquid & Grit calls “Web Stores”. We think it’s a big and important trend in the games industry. But how big? And how important?

To find out, we distributed a survey on LinkedIn, Twitter, and in the Deconstructor of Fun and Mobile Dev Memo communities. While this sampling approach is imperfect, it should yield decent indications of what’s happening in the marketplace. We collected a large number of responses over about two weeks. After removing fraudulent responses (flagged via the provided e-mail addresses and patterns in timing and response behavior), we had a sample of 26 high-quality responses across different companies and backgrounds. That number is too small to support firm conclusions, but it is a good start for gathering indications.

As a little introductory data point, here’s where respondents in the sample say they get their mobile gaming news (multiple responses possible):

The top news sources among survey respondents are Deconstructor of Fun, LinkedIn, and Mobile Dev Memo. Professional communities for the win, yay!

Of course, the responses here might be impacted by how we distributed the survey. But it’s nice to see two mobile / gaming communities — that I personally trust and frequent — land in the top three.

Now, let’s dive in.

Web stores are a major market trend

Respondents believe that the adoption of web stores in the market is far from complete and that there is still ample potential for mobile game developers to move payments off-platform. The community is split on the question of how widespread adoption is: half of respondents think that at least 50% of companies have started running a web store; the other half thinks that most companies are not yet doing it:

The community is split in their beliefs on how widespread web store adoption is. By the way, you can see all questions and the full survey here.

Another question we asked provides us with a more direct read:

Actual web store adoption among respondents outpaces respondents’ beliefs about adoption in the wider market.

Seventeen respondents are live with a web store in one or more games in their company's portfolio. Another four indicate that they plan to go live soon, and three are neither live nor apparently planning to launch one. Beliefs about market-wide adoption, i.e. the results of the previous question, may hence underestimate how many companies are actually already live with a web store.

(Side note: Our sample likely overestimates actual web store adoption as people with interest in the topic are more likely to respond.)

Off-platform payment activity expected to be significant

Now, being live with a web store doesn’t mean that a lot of revenue is going through it. To assess what the market thinks about the economic significance of web stores, we asked respondents for their estimates of what share of revenue will be moving off-platform in one and in five years:

Three quarters of respondents believe that 30%+ of mobile game revenue will be generated off-platform in five years. Wow.

Only 27% of respondents believe that 10% or less of mobile game revenue will be off-platform a year from now. And nobody thinks that off-platform payment activity will be that low in five years.

73% of respondents believe that 30% or more of mobile game revenue will be generated off-platform in five years. 15% even think that a staggering 70% or more of overall mobile game revenue will run through web stores in five years. Mull on that.

A windfall for game creators?

Next, we asked participants about their expectations for the revenue impact of web stores:

Three quarters of respondents say that revenue for mobile game devs — after platform fees — will increase by 10%+.

76% of respondents indicate that revenue after platform fees for mobile game developers will increase by 10% or more. A third thinks that the revenue windfall will clock in at 30% or more, with two respondents expecting a post-fee revenue jump of 70% plus!

The exact impact will certainly vary with the genres and monetization behaviors in a developer's publishing portfolio. If a game's revenue is concentrated in a relatively small set of high-value, high-spending players, and the company successfully entices those players to use a (personalized) web store, such outcomes seem possible. They are, however, unlikely to materialize to this extent across the broader market.

Nonetheless, these results serve to show how much is at stake. Up to 30% of overall mobile game revenue stands to be redistributed, by law and/or through strategic maneuvering by major market participants.

Is everybody of the same opinion?

No. Opinions diverge on the importance of web stores and on what revenue share should go to content creators versus platform operators. Our sample is a little too small to slice and dice it. However, if we look at indicators of “web store bullishness” across the two most important community news sources in our sample, we notice an interesting pattern:

While they’re trending strong, not everyone is equally bullish on web stores and off-platform payments.

Respondents who list Mobile Dev Memo (MDM) as their most important source of mobile (gaming) news appear much more bullish on web stores than respondents who primarily follow Deconstructor of Fun (DoF). 75% of MDMers think that current web store adoption sits at 50% or more, while only 12.5% of DoFers think so. 100% of MDMers believe that 30% or more of revenue will go through web stores a year from now, while only 25% of DoFers do. Expectations start to converge over the longer term: 75% of MDMers see 50%+ of revenue off-platform in five years, and almost 40% of DoFers agree with that perspective.

Bear in mind that these are indications at best; the sample is simply too small for anything more. They would, however, align with the perspective that the MDM community has, on average, more business-minded and less purely gaming-focused members — which seems reasonable. After all, off-platform payments may become even more critical for app developers outside gaming, such as in health, news, music, and other content distribution.

So, is this it?

Again, no. Our survey also asked respondents about the main challenges they face in web store adoption and how they plan to overcome them. For a talk covering the full results, join us at the Revenue Optimization in Games Mini-Summit and Happy Hour on March 20, 2 pm, in downtown San Francisco. Four experts from different corners of the industry will discuss their recent work and what they see in the market. During the reception following the talks, you will have a chance to connect with the speakers and with us to discuss game monetization and its future.

We’re excited to see you there!

Dear Digital-First Advertisers, Are You Media or Marketing Mix Modeling?

As the adoption of MMM among digitally native businesses increases and matures, awareness of the differences between the two can open up new pathways for excellence in marketing analytics.

(Scroll to the end of the article for a TL;DR.)

MMM, the common abbreviation for marketing mix modeling, is experiencing a surge in interest among digital-first advertisers. App publishers, game companies, direct-to-consumer businesses, and others are embracing a new measurement standard as industry and regulatory privacy initiatives rock the data infrastructure of digital advertising. In lieu of deterministic attribution and measurement based on user-level data and identity graphs, advertisers are flocking to probabilistic measurement based on coarser data, aggregated at, e.g., the campaign, state, DMA, or country level. MMM in particular, as the most comprehensive and holistic of the probabilistic measurement methods, is finding adoption as marketers seek to mitigate the risk of “flying blind” should user-level data access continue to deteriorate at its current pace.

Now, as everyone in digital advertising starts talking about MMM, the terms marketing mix modeling and media mix modeling seem to be getting conflated. While the two are highly related and make use of similar and in many ways identical methods, they are not the same. A recent report by the Marketing Science Institute nicely brings this point home by distinguishing MMM (marketing mix modeling) from mMM (media mix modeling). The key difference is that MMM supports a firm's decisions on the full marketing mix (see Figure 1), that is, product, price, promotion, and place/distribution, while mMM informs its decisions on the media mix, i.e., how it sets and allocates its media budget across media and advertising channels (see the upper part of Figure 1).

This blog post aims to achieve three things:

(1) Revisit and summarize differences between MMM and mMM, mostly to help inform current industry conversations in digital advertising;

(2) Talk a little bit about why the concepts of MMM and mMM are often used synonymously and may have fused in digitally native business especially;

(3) Highlight that there may be valuable lessons to be gleaned for digital-first advertisers from the distinction of MMM and mMM.

Figure 1: This overview published by Harvard Business Review nicely summarizes the levers firms can work with to impact their marketing strategy and success. It also provides a succinct summary of the related analytics chain. The only lever I would add is a company's own (new) product releases and launches. (Source: https://hbr.org/2013/03/advertising-analytics-20).

Differences between MMM and mMM

Both MMM and mMM are analytical approaches used by companies to understand the effectiveness of their marketing and advertising efforts. While they share similarities, they have distinct focuses and differences. MMM is a broader approach that analyzes the overall impact of various marketing elements on a company’s sales and other key performance indicators (KPIs). These marketing elements typically include a combination of the “Four Ps” of the marketing mix: Product, Price, Promotion, and Place (distribution). MMM aims to quantify the contributions of each of these elements, and their interactions, to overall sales.

As illustrated in Figure 2, media used for marketing is a subset of all modeling variables used in MMM. In this vein, mMM focuses on analyzing the effectiveness of different advertising media channels in driving sales and other KPIs and determining the optimal allocation of media budget across various channels to achieve the best return on marketing investment (ROMI). It thereby attributes sales or conversions to specific media channels, helping marketers understand which channels are driving the most value. In this way, mMM can sometimes offer insights at a more granular level, such as the impact of specific ad placements, time slots, or online platforms.

Due to their different scopes, as shown in Figure 2, the two approaches require different historical data coverage. MMM requires data inputs covering all the marketing activities of interest, e.g., all Four Ps (product, price, promotion, place), in addition to sales data, other relevant external factors (e.g., competitive and macroeconomic), and potentially media spend. While data on the Four Ps are often added to an mMM as control variables, an mMM does not require them per se and can work from media spend and sales data alone.
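To make this difference in data coverage concrete, here is a toy sketch of the weekly input columns each approach might minimally require. All column names are invented for illustration, not taken from any real schema:

```python
# Illustrative (invented) weekly input columns for a full marketing mix
# model (MMM) versus a media-only model (mMM).
MMM_COLUMNS = {
    "sales",
    "media_spend_tv", "media_spend_search",     # promotion: media
    "avg_selling_price",                        # price
    "promo_discount_depth",                     # promotion: non-media
    "num_distribution_points",                  # place / distribution
    "new_product_launch_flag",                  # product
    "competitor_spend", "consumer_confidence",  # external controls
}

mMM_COLUMNS = {"sales", "media_spend_tv", "media_spend_search"}

# An mMM can run on a strict subset of the inputs an MMM needs.
assert mMM_COLUMNS < MMM_COLUMNS
```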

Figure 2: Media mix modeling (mMM) addresses a subset of the analytical scope of marketing mix modeling (MMM). The author believes that awareness of this difference in scope can hold valuable lessons for digital-first advertisers. (Image source: https://hbr.org/2013/03/advertising-analytics-20)

Similarities between MMM and mMM

In terms of model specification and the methodological approaches used to estimate the models, MMM and mMM are very similar and often use identical methods. An mMM can also be included in a company's MMM, meaning a more comprehensive MMM covers media spend evaluation and optimization as a subset of its overall analytical scope. In both MMM and mMM, a simple starting point is to estimate a parametric model of sales explained by investments in different actions on the Four Ps and in media. Usually, as mentioned above, such a model will also include variables addressing the competitive and macroeconomic landscape. From there, modeling for both MMM and mMM can become more sophisticated: modeling dynamic (e.g., adstock) effects, capturing interactions between different marketing levers, engineering specific features, using experiments to calibrate the model, and making other refinements. More advanced modelers also like to specify, possibly per marketing action, response curves that address diminishing returns to scale, e.g., due to saturation of an advertising medium.
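As a minimal sketch of such a starting point, the following simulates weekly data and fits a linear model of sales on adstocked, saturated media variables plus price. The geometric adstock, the Hill saturation curve, and every number here are illustrative assumptions on simulated data, not a recommended specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def adstock(spend, decay=0.5):
    """Geometric adstock: past spend carries over into later periods."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def hill(x, half_sat=100.0, shape=2.0):
    """Hill saturation: diminishing returns as (adstocked) spend scales."""
    return x**shape / (x**shape + half_sat**shape)

# Simulated weekly data: two media channels plus price as a Four-P input.
n = 104
tv = rng.uniform(0, 200, n)
search = rng.uniform(0, 120, n)
price = rng.uniform(8, 12, n)

X = np.column_stack([
    np.ones(n),                  # baseline sales
    hill(adstock(tv, 0.6)),      # transformed TV spend
    hill(adstock(search, 0.3)),  # transformed search spend
    price,                       # price as a marketing-mix regressor
])
beta_true = np.array([50.0, 30.0, 20.0, -2.0])
sales = X @ beta_true + rng.normal(0, 2.0, n)

# Ordinary least squares fit of the linear-in-transforms model.
beta_hat, *_ = np.linalg.lstsq(X, sales, rcond=None)
```

In practice, the adstock decay and saturation parameters would themselves be estimated (e.g., in a Bayesian framework) rather than fixed as they are here.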

While a simple use case of mMM and MMM is to evaluate past marketing strategy, more advanced uses commonly include forecasting future sales and optimizing future marketing strategy and actions. These more advanced use cases require explicit assumptions and accommodations in the model. For example: Is the data-generating process stationary? Did the competitive or macroeconomic landscape change? Are there new advertising media, product line extensions, or other changes that require specific adjustments for the model to generalize from the past and present to the future? If we increase spending on a medium threefold, how quickly should we expect the returns on that investment to diminish? If we scale down TV advertising, will sales in the next period be unaffected but drop sharply in later periods? If we run large-scale promotions in the next period, how will this increase, decrease, or shift our sales across future periods? A model's architecture needs to be finessed to appropriately reflect these complexities. The larger the model's scope (MMM > mMM) and the more advanced the use case (optimization > forecasting > evaluation), the more effortful and challenging this task becomes, and the more insightful the resulting model.
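The "spend threefold" question above can be made concrete with a saturating response curve. The Hill form and all parameters below are invented for illustration; the point is only that both the average and the marginal return fall as spend moves past the curve's saturation point:

```python
def hill_response(spend, max_effect=100.0, half_sat=50.0, shape=1.5):
    """Illustrative saturating (Hill-form) response curve: incremental
    sales attributable to a channel at a given spend level."""
    return max_effect * spend**shape / (spend**shape + half_sat**shape)

def marginal_return(spend, eps=1e-4):
    """Numerical derivative: extra sales per extra unit of spend."""
    return (hill_response(spend + eps) - hill_response(spend)) / eps

current = 40.0
tripled = 3 * current

# Average and marginal return per unit of spend, before and after tripling.
avg_now = hill_response(current) / current
avg_3x = hill_response(tripled) / tripled
mroi_now = marginal_return(current)
mroi_3x = marginal_return(tripled)
```

Under these made-up parameters, tripling spend still adds sales, but each additional unit of spend buys noticeably less than it did at the current level.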

In summary, MMM is a comprehensive analysis of various marketing elements, while mMM specifically focuses on assessing the impact of advertising across different media channels. Figure 2 succinctly captures this difference in analytical scope. Both approaches aim to provide data-driven insights to help companies make informed decisions about resource allocation and strategy in marketing.

Why are MMM and mMM often used synonymously, especially among digitally native advertisers?

By digitally native advertisers, I mean companies that were started and grew with the increased digitization of the production and delivery of consumer goods through the proliferation of the web, personal computers, social media, and then handheld devices. Examples are web-based and mobile gaming companies, direct-to-consumer businesses, app developers, digital (social) media platforms, or e-commerce operations. I believe there are a few factors that may have contributed to a conflation of MMM and mMM among these digital-first advertisers:

  • A distinction of mMM and MMM was simply not needed or relevant: Digitally native businesses primarily operate in the digital realm, relying heavily on online platforms, social media, and digital advertising for their marketing efforts. Since their marketing activities are predominantly digital, they often equate marketing with media, considering digital media as the core component of their overall marketing strategy.
  • Many digital media are priced “freemium”: Very much related to the previous point, digital consumer goods are predominantly offered under freemium pricing, where initial product adoption and use are free. Price is hence a much less relevant decision criterion for consumers, which in turn reduces its importance in a firm's marketing decision-making.
  • Digitization was accompanied by further significant shifts in the salience of the marketing mix's Four Ps: As freemium pricing reduced the relevance of price in product adoption decisions, promotion became much less relevant as well. Plus, recent research suggests that the effects of price promotions may be very different for digital freemium consumer goods. Distribution collapsed onto digital platforms and media or, in direct-to-consumer commerce, was replaced by targeted advertising and simply disappeared as an essential consideration.
  • On digital media, A/B tests and experiments can be conducted with ease: Publishers of digital goods did not need an MMM to inform their product, price, promotion, and place/distribution decisions. As illustrated in Figure 3, they had (and still have) access to granular, user-level data that lets them run A/B tests and other experiments to inform marketing and product initiatives, yielding “gold standard” reads on price elasticity, inter-temporal substitution, and the effectiveness of promotions.
  • User-level data enable(d) granular analytics and decision support: Similarly, the available detailed first-party and often third-party data could fuel MTA (multi-touch attribution) models or elaborate product analytics efforts to evaluate and attribute merit to different product and marketing strategies and tactics. In digital advertising, this level of data access is currently under siege (i.e., for the third-party use cases in Figure 3), but it is likely to remain in place for the foreseeable future for first-party data. Thus, it can continue to support decision-making for product, price, and promotion on a firm's proprietary digital offerings. When the only reasonable use case of an MMM is to support advertising decisions, it becomes an mMM (see Figure 2).

I want to note that, while these factors might lead to the perception that MMM and mMM are the same, recognizing the distinction between assessments of the overall marketing strategy and of media channel allocation holds valuable lessons. A well-rounded approach considers all marketing elements, even in digitally native businesses, to enable a comprehensive and holistic understanding of the factors driving business growth. A more holistic and comprehensive model is also likely to provide more accurate estimates, e.g., of ROMI, for each individual marketing lever. Further, while user-level data and experimentation may still provide more accurate and reliable decision support in product, price, and promotion to digitally native businesses, setting up an MMM to complement, cross-check, and build on these other analytics tools is a worthwhile effort. It can bring “everything together” in one holistic model and provide valuable higher-level insights, e.g., on longer-term strategic and interaction effects that might otherwise go undetected.
Figure 3: Digitally native businesses have grown accustomed to using first-party experimentation and user-level analytics to support decisions in product, price, and promotion, and third-party experimentation and user-level analytics in digital advertising. MMM-type modeling is hence mostly (or only) relevant to support media-related decisions. This may help explain why MMM and mMM seem to have collapsed into the same meaning for many digital-first advertisers. My inclusion of new product releases in the first-party experiment scope refers to a company's own product releases. (Image source: https://hbr.org/2013/03/advertising-analytics-20)
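The kind of user-level price experiment the list above alludes to can be sketched in a few lines. Arms, prices, and conversion rates are all simulated, hypothetical numbers, used only to show how an experiment yields a direct elasticity read:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical user-level price A/B test: arm A at $4.99, arm B at $3.99.
# Conversion rates are simulated and purely illustrative.
n = 50_000
conv_a = rng.random(n) < 0.020   # control price
conv_b = rng.random(n) < 0.026   # discounted price, higher conversion

price_a, price_b = 4.99, 3.99
rate_a, rate_b = conv_a.mean(), conv_b.mean()

# Arc (midpoint) price elasticity of demand read off the experiment:
# percentage change in conversion over percentage change in price.
elasticity = ((rate_b - rate_a) / ((rate_a + rate_b) / 2)) / (
    (price_b - price_a) / ((price_a + price_b) / 2)
)
```

A real analysis would add a significance test and guard against peeking, but the point stands: with user-level data, a firm can read price response directly instead of inferring it from an MMM.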

TL;DR / Take-Aways

Using the terms marketing mix modeling (MMM) and media mix modeling (mMM) synonymously really is no mistake if you run a fully digital-centric business. Doing so, however, may lead to confusion (1) when you operate both online and offline product and distribution, and (2) when you interface with traditional brand advertisers. So, keep the differences between traditional mMM and MMM in mind and see whether you can learn anything for your digital-first MMM from “old school” brick-and-mortar marketing mix modeling:

  • Could you include data on price and promotion and inform your pricing and promotional strategies from your MMM? Could the resulting estimates substitute for or complement your existing price and promotion analytics, e.g., by reducing the need to run experiments?
  • Are there distribution and advertising channels that you have not considered so far and that could meaningfully increase demand for your product(s)?
  • Can a model that more comprehensively addresses your actions on the marketing mix surface insights on synergistic effects that you were so far unaware of? E.g., do promotional efforts increase the effectiveness of your advertising? Is there evidence that lower prices in certain territories increase product usage and, in turn, word-of-mouth in those regions?

In this way, as MMM adoption among digital-first advertisers matures, awareness of the differences between MMM and mMM can open up new pathways for excellence in marketing analytics. Once your mMM is in (a good) place, strive to complement it with an MMM as the next frontier of digital marketing analytics. MMM and mMM can work nicely together: E.g., you can use a more comprehensive MMM to assess your overall marketing strategy and set a media budget that you then allocate based on your mMM. Your media tactics can additionally be informed by further lower-level analytics such as an MTA model or campaign optimization tools. You can also use outputs from granular product analytics and experiments across product, price, promotion, and advertising to calibrate and fine-tune your marketing and media mix model. And you may be able to inform the design of treatments and strategies that you test experimentally using the insights provided by your MMM.
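The division of labor sketched above (the MMM sets the total media budget; the mMM allocates it across channels) can be illustrated with a greedy allocator over per-channel response curves. Channel names and curve parameters are made up, and the greedy step is valid here only because each illustrative curve exhibits diminishing returns:

```python
def response(spend, max_effect, half_sat):
    """Simple saturating response curve (Hill with shape 1) per channel."""
    return max_effect * spend / (spend + half_sat)

# Hypothetical per-channel curves that an mMM might have estimated:
# channel -> (max_effect, half_sat).
channels = {
    "tv":     (120.0, 80.0),
    "search": (90.0, 40.0),
    "social": (60.0, 30.0),
}

def allocate(total_budget, step=1.0):
    """Greedy allocation: give each next unit of budget to the channel
    with the highest marginal return. With concave (diminishing-return)
    curves, this approximates the optimal split."""
    alloc = {ch: 0.0 for ch in channels}
    for _ in range(int(total_budget / step)):
        best = max(
            channels,
            key=lambda ch: response(alloc[ch] + step, *channels[ch])
            - response(alloc[ch], *channels[ch]),
        )
        alloc[best] += step
    return alloc

budget_from_mmm = 150.0   # total media budget, e.g. set by the broader MMM
plan = allocate(budget_from_mmm)
```

At the optimum, marginal returns are roughly equalized across channels; a channel with a steep early curve (like "search" here) receives budget first but stops absorbing it once its returns saturate.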

