Sometimes, Everyone Agrees

I recently published a blog post arguing that over the next 5 years many commercial MMM engine developers might face an uncomfortable truth: their code and algorithms are not defensible. As part of that article, I separated the “MMM Vendor Value Prop” into four components:

  1. The core computational engine and algorithms (aka “engine and modeling capabilities”).
  2. A set of applications that use the trained model provided by the MMM to make recommendations (e.g., spend optimization and revenue forecasting).
  3. A structural model and set of data definitions.
  4. A set of integrations into data sources and production processes to run the engine and algorithms.

I then sketched out an argument that because the first two bullet points are very hard to defend, durable value will move “up the stack” into domain-and-vertical-specific intelligence, operational reliability, and ease of integration (into both other product-based components of the marketing stack and with internal toolchains and processes).

Here’s the actual statement:

The first claim I’m making is that open source will take over the first two bullet points. And the second claim I’m making is that, depending on company size, companies will either do the work associated with the last two bullet points themselves, or use an industry/vertical-specific provider that leverages the open-source frameworks from the first two bullets (larger companies will roll their own; smaller companies will use a vendor).

I also summarized that idea in a LinkedIn post with a buyer’s point-of-view question for MMM vendors: Why, concretely, is using their product a better idea than custom-coding a purpose-built solution on top of PyMC (using PyMC as a stand-in for open-source tooling)?

To my mind, this is the key question that any vendor should be able to answer very concretely (and the answer should be on their website, in equally concrete form).

Two MMM CEOs, Henry Innis (Mutinex) and Charles F. Manning (Kochava), disagreed publicly with the blog post. I’m genuinely happy they did. This industry needs more transparent debate, and both of their responses were professional, substantive, and worthwhile contributions to the conversation. I also want to say clearly: I respect Henry and Charles, and nothing here is meant as a criticism of them or their teams.

Henry Innis’s Point: Incentives and Money Keep Vendors Ahead

Henry’s core disagreement is direct: he believes third-party MMM vendors are (and will remain) “far, far ahead” of open-source implementations (largely because commercial incentives fund product maturity).

Two specific points stood out:

  • The value is in the product around MMM, not the algorithm. Henry says most MMM value comes from solving product problems around the model, not from the modeling technique itself.

I think Henry and I are in complete agreement here.

  • AI may reduce the incentive to open source. He argues that many open-source efforts are sustained because they monetize elsewhere (implementation partnerships, customization, consulting, benchmarked data). If AI-assisted development reduces the “end state” that needs to be maintained, that value may shift into new SaaS surfaces rather than staying tied to open-source projects in their current form.

This second point is an interesting prediction in its own right. Many open-source efforts will struggle in the years to come; an early warning sign is the fact that Tailwind recently laid off 75% of their engineers.

In essence, Henry’s argument is that generative AI will cause open-source projects to falter, and commercial engines (funded by customer revenue) will be able to stay ahead.

This is a place where reasonable people can disagree. And, to be clear, I disagree with Henry: corporate-backed open source, foundations, and vendor-adjacent ecosystems can sustain maintenance even if smaller OSS projects struggle.

Charles F. Manning’s Point: Trust is Built Outside the Engine

Charles’s response was about “trust and defensibility” – the key idea being that commercial MMM vendors, collectively, have established a basis for customer trust that enables them to defend their market (and that, because of this, the open-source engines will not get additional traction).

Using his numbering, the core of his argument is the following three objections:

  • Objection 3: Optimization is the Moat. In Charles’s view, the defensible layer is optimization: forecasting outcomes under constraints and balancing short-term performance with long-term value. He claims that commercial MMM optimization is sophisticated and delivers substantial enterprise value and that similar optimization layers don’t exist in typical open-source stacks today.

The disagreement Charles and I have is twofold. First, I am making a set of predictions about what will happen, and what will be true 5 years from now, and he’s talking about what exists in the market today (to some extent, we are talking about different things). For other points of view on the current state of open-source MMM, I recommend the discussions from Digiday, Search Engine Land, and EMarketer.

And, second, I simply don’t think optimizers and spend forecasters are defensible technologies.

  • Objection 4: Domain Expertise > Generic Modeling. Charles also emphasizes that domains like mobile advertising have unique constraints (attribution nuances, conversion lags, SKAdNetwork gaps, and so on). You can’t model what you don’t understand, and “generic MMM” will miss important real-world structure. Kochava’s product bakes in domain-specific intelligence based on more than a dozen years in the market.

I don’t think Charles and I disagree on this at all. This is actually a foundational thesis for Game Data Pros: effective optimization requires domain expertise and verticalization. A substantial part of the value-add is knowing what to do in a specific domain, not the core engine or modeling capabilities.

  • Objection 5: Modeling Code is not the Product. Charles states that “Model architecture is only ~10% of the challenge.” The rest is data reliability, validation, uplift testing, attribution reconciliation, and governance. These are the operational “scaffolding” that makes results defensible.

Here too, I think Charles and I are in complete agreement. And we both agree with Henry.

Charles concludes his response by saying:

 “Moving Up the Stack” Is What We Already Do. The article claims value will shift from algorithms to integration, QA, and scenario planning. That’s already our model. AIM is SaaS MMM built for action, not academic benchmarking. StationOne is next.

To which I can only say: Great. We are in total agreement.

Except, of course, that I think performance standards and benchmarks matter, and that the phrase “academic benchmarking” could be viewed as somewhat dismissive. Without performance standards and benchmarks, I don’t see how a customer can make an informed choice between the 50 or so providers in Marketing Science Today’s Provider map.

There’s a Lot of Common Ground Here

Henry and Charles’s objections align pretty closely with each other and with what I actually wrote.

  • Henry: the value is mostly in the product around MMM, and commercial incentives fund that product.
  • Charles: the moat is optimization, domain intelligence, reliability, QA, validation, integrations, governance (i.e., everything around the model and algorithms).

That’s extremely close to my claim that engines and algorithms are becoming commodities while value mostly becomes verticalized and domain-specific.

So, where’s the disagreement? I think it’s mostly about what “open source replaces” actually means.

When I say open source “replaces” commercial MMM implementations, I don’t mean the world stops buying (or leasing) MMM engines in the short-term. I mean that the core modeling and optimization stack will be increasingly based on open source, and that, over time, we will have open baseline implementations (increasingly good, increasingly automated).

Faced with that, some commercial vendors will continue to develop their engines. But most will try to win by layering value on top of open source platforms (and not by asking customers to trust a proprietary system without independent evidence).

In much the same way that 60% of developers build on PostgreSQL, I would be willing to bet that, in 5 years’ time, 80% of new MMMs will be built on an open-source framework.

About Benchmarks and Test Suites

In a separate LinkedIn post, I praised Mutinex for building an open-source framework for evaluating MMMs and publishing “rough benchmarks” for what good performance looks like. We can argue about whether they chose the right metrics, and whether or not their performance thresholds are the right ones, but I love the fact that they jump-started a conversation about metrics and performance standards. Their rough bands:

  • MAPE / sMAPE: excellent <5%, good 5–10%, acceptable 10–15%, poor >15%
  • R²: excellent >0.9, good 0.8–0.9, acceptable 0.6–0.8, poor <0.6
  • Stability & sanity checks: parameter change, perturbation change, and placebo ROI bands
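
To make those bands concrete, here is a minimal sketch of the first two rows in plain Python. The thresholds come straight from the list above; the function names and the grading helper are my own illustrative choices, not part of Mutinex’s framework:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, as a fraction (0.05 == 5%)."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def smape(actual, predicted):
    """Symmetric MAPE: bounded, and better behaved when actuals approach zero."""
    return sum(2 * abs(p - a) / (abs(a) + abs(p))
               for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination on a holdout window."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def grade_error(value):
    """Map a MAPE/sMAPE value onto the rough bands listed above."""
    if value < 0.05:
        return "excellent"
    if value < 0.10:
        return "good"
    if value <= 0.15:
        return "acceptable"
    return "poor"
```

For example, weekly revenue actuals of [100, 120, 90, 110] against predictions of [102, 115, 95, 108] give a MAPE of roughly 3.4%, landing in the “excellent” band. The point is not the code, which is trivial, but that the definitions are simple enough that every vendor could report them on a shared holdout protocol.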

Even more commendably, Henry publicly praised Recast for pioneering the public discussion of MMM performance. And he was right to do so: Recast’s Accuracy Dashboards, their discussion of model validation, and their writing on backtesting are exemplary.

Simply put, if we think MMMs are a critical part of the marketing infrastructure, and we think there are substantial performance differences between them, then we ought to be able to define objective performance standards and metrics, and then compare different MMMs using publicly available test suites in exactly the same way that people compare databases.

What we shouldn’t do is claim that the open-source frameworks (or our competitors) aren’t very good while having no public test suite and no standardized definition of what “good” means.

The Path Forward is Open Source and Test Suites

My original article was long (~4,000 words). Here’s a simplified form of the predictions.

  • The modeling and optimization core will become mostly open. I don’t see any reason to recant any of the predictions. The trajectory is the same: better libraries, better tooling, and (with AI) faster iteration and adoption.
  • Without a shared test suite and standards of accuracy, open source will win “the engine wars” by default. Without hard evidence, customers have no objective reason to believe a specific proprietary engine is better, and plenty of reasons to prefer an open implementation. And, over time, for the reasons outlined in the original article, the open-source implementations will pull ahead and become the default engines that get plugged into enterprise marketing architectures.
  • Vendors will differentiate above the core. Domain-specific models, priors, and constraints, automated QA, data pipelines, experimentation and uplift integration, governance, and workflow UX are all important pieces of an overall marketing architecture, and they’re the place where differentiation and value creation will happen.

In an upcoming article, I’m going to focus on the second of these bullet points and write more about what credible MMM engine validation should look like (and what a public test harness could include).

But for now, I’m just happy we’re all talking about this in public.

Seven Things That Are Absolutely Going To Happen in 2026

One of the more unusual attributes of the video game industry is the extent to which people make year-end predictions. Every December, hundreds, if not thousands, of articles are published with predictions for the coming year; something that simply does not happen in consumer packaged goods, for example (nobody’s out here publishing “10 wild predictions about laundry detergent in 2026”).

Gaming is also an unusually volatile industry. Things change constantly. As a result, many of the predictions are wrong. Nobody looks like a prophet one year later, and many major industry events simply were not on anyone’s list. This last point is particularly interesting — we looked and couldn’t find a single set of predictions for 2025 that included the recent partnership between Unity and Unreal (which is arguably one of the five most important events to happen in gaming in 2025).

Unity and Epic Games today announced they are working together to bring Unity games into Fortnite, creating more opportunity and value for players and developers. Developers will have the ability to publish Unity games into Fortnite, one of the world’s largest gaming ecosystems with more than 500 million registered accounts worldwide, and participate in the Fortnite Creator Economy

Our 2025 predictions, made in late 2024, are also a great example of this. Six of the seven predictions we made were correct, but we missed three huge trends (two of which were directly in our field of expertise). Netting the hits against the misses, maybe we get a 60% correct score (which, given industry volatility, feels like a solid B+).

This industry-wide habit of predicting the future is a good thing. How do you model the future in a highly volatile industry? The best way is to have lots of people make lots of predictions (ideally, as independently as possible), and then look for patterns and themes. In practice, the gaming industry has stumbled into a superforecasting-style yearly tournament. As the people from D-Lab put it:

While many domains still rely heavily on the opinions of credentialed experts – pundits, analysts, and consultants – an alternative solution has gained traction: crowdsourcing the predictions and then aggregating the collective wisdom. Scholars of collective intelligence have long posited that the aggregation of diverse, independent opinions can often outperform even the most sophisticated and knowledgeable individual experts, particularly in highly uncertain domains. Moreover, the fallacy and biases of expertise have become increasingly apparent across a wide range of fields.

In keeping with our recent tradition, we are also going to include instant reactions from ChatGPT personas (in this case, we asked ChatGPT to assume the personas of a CEO of a mid-sized gaming company and an industry pundit knowledgeable in the specific prediction area).

Our (AI) Expert Panel, Debating Furiously

And now, without any further ado… here are seven things that are really, absolutely, 100%, beyond a shadow of a doubt going to happen in 2026.

The Predictions, At a Glance


7. The New Normal Will Continue. Revenue Will Be Up, Headcount Will Be Down

2025 saw both a decrease in headcount and an increase in revenue. This is a slight change from the previous year: in 2024, at least 14,000 jobs were lost while revenue was approximately flat (the data is unclear: Newzoo says that 2023 revenue was $183.9B USD, that 2024 revenue was $182.7B USD, and yet that 2024 was a 3.2% increase over 2023).

Newzoo forecast for 2025 (from Sept 2025)

We’re leaning into that 3.2% number for 2024 (and the associated 3.4% for 2025). While the headcount reductions will continue, revenue will continue to climb. And while there is substantial skepticism around Owen Mahoney’s prediction that game industry revenue will triple in the next 5 to 7 years, the smart money is on the rate of revenue growth increasing as well. It might be too much to expect double-digit revenue growth, but we think Mahoney is directionally correct and that 7% to 10% seems likely.

(update Dec 21: The ‘final’ numbers for 2025 are starting to come in, and 2025 actually grew at 7.5% according to Newzoo. If anything, this emphasizes that we are switching into a bull market in 2027 and beyond.)

At the same time, under the hood, expensive senior staff have been laid off, mid‑career folks are being inexorably squeezed, and much of the “missing” capacity is being replaced by AI tools, outsourcing networks, and a more flexible contractor/remote workforce. This looks like a stable equilibrium where investors get the margin expansion they want, players keep getting content, and the labor market continues to shrink.

Our expert panel was in complete agreement:

  • CEO. This is pretty much what I’m planning for. Our board wants us to show operating leverage: flat or slightly lower headcount while revenue grows. The 2025 data is already clear— industry layoffs in the tens of thousands, but market forecasts are back in growth mode. So in 2026 I have to assume capital markets will reward lean teams that adopt AI, automation, and external partners rather than rebuilding the 2021 org chart.
  • Pundit. This prediction lines up almost perfectly with the data trail from 2022–2025. Revenue is rebounding—Newzoo projects nearly $189B in 2025. At the same time, layoff trackers estimate 4,000+ additional games industry job cuts in 2025 alone, for a total of 45,000 jobs lost since 2022.

6. Adjacent Verticals Will Continue to Staff Marketing Teams from Mobile Gaming

The case studies are in, and mobile‑game‑style performance marketing is rapidly becoming the default playbook across a wide swath of consumer apps—fintech, health, dating, edtech, e-commerce, and even traditional media (from playables to programmatic advertising to …). And mobile-game monetization systems are taking hold as well.

These companies don’t just copy tactics; they hire the people who built free‑to‑play UA machines. Growth leads from top mobile studios are increasingly taking roles at streaming media apps, edtech, sportsbooks, neo-banks, and DTC brands, bringing with them cohort thinking, ROAS‑driven budgeting, LTV modeling, and live‑ops‑style promotion calendars.

At the same time, non‑gaming products keep importing gaming engagement mechanics—streaks, quests, passes, … —which further increases demand for talent steeped in F2P game design and UA.

The upshot: in 2026, “mobile games veteran” on a LinkedIn profile will be a strong signal for senior roles in growth for almost any consumer application (this also combines nicely with prediction #7’s continued workforce contraction in gaming).

Here’s how our expert panel sees it:

  • CEO. This one is already biting us. I’m losing UA and growth PMs to non‑gaming apps that offer better comp, fewer content fires, and less hit‑driven risk. I’m running lean and these losses hurt. The Deconstructor of Fun article on Duolingo, DraftKings, and Tinder really underlines how aggressively these adjacent verticals are pulling from the game design and UA toolkit.
  • Pundit. The prediction is directionally right, but I’d widen it. It’s not just staffing migration; it’s strategy migration. Engagement mechanics from mobile games are showing up in top non‑gaming apps, and the people driving those efforts are often ex‑games PMs and UA leads.


5. Mobile Ads Will Continue to Get Weirder

Let’s be clear: when you compare mobile game advertising in 2025 to mobile game advertising in 2020 (or even to mobile game advertising in 2024), the rate of change is astonishing. Playable ads and mini‑games have become standard tools for serious UA teams and the creative envelope keeps being pushed by AI‑generated actors, deepfake‑style influencer clones, interactive AR lenses, and boundary‑pushing “shock” concepts. Things are already pretty weird.

Moreover, current data shows a strong performance advantage for interactive/playable formats and the continued arms race in creative volume; at the same time, regulators are reacting to deceptive or offensive ads, as in the UK ASA’s crackdown on sexualized mobile ads. At this point, the best mobile ads are more like a short piece of interactive entertainment—or uncanny AI spectacle—than a traditional banner ad or static video. Half the time, players remember the ad better than the game it’s selling (which is both impressive and slightly terrifying).

To understand what’s going to happen in 2026, let’s briefly review what Meta has been doing in 2024 and 2025:

  • Meta has already rolled out tools that let advertisers feed in a few basic ingredients—like a product image, brand assets, and a short text prompt—and then the system automatically creates multiple ad variations using AI. These variants are then tested in real time across placements and Meta’s Advantage + system automatically prioritizes the formats and creatives that perform best.
  • Meta is also gradually removing advertisers’ ability to manually tune targeting (whether by specifying demographic criteria or by managing exclusions). The vision is clear: instead of an advertiser manually picking narrow audiences based on demographic criteria, the advertiser will feed Meta signals (creative variations, objectives, first‑party data, product catalogs, conversion events, and measures of the user’s value), and Meta systems will automatically slice users into countless “micro‑segments” on the fly, constantly shifting budget toward the most responsive micro-segments.

The prediction for 2026 consists of:

  • The prediction that the AI tools will continue to get better in 2026. It’s going to keep getting easier to create bespoke creative content using AI tools (whether using Meta’s tooling or other systems). By the end of 2026, it will become the default at most companies to have the advertising networks automatically generate the ads.
  • The prediction that Meta will engage in cross-advertiser learning for both content generation and targeting. We don’t know how much they’re doing this today, but it is inevitable (and it seems like an inevitable consequence of GEM, their new “Foundation Model for Ads”). They won’t share the semantic models with their advertisers, but they will build them.
  • The prediction that the combination of having the cost of creative generation go to zero, the ability to understand and target creative to micro-segments, and the ability to learn targeting across all the advertisers will lead to a feedback loop that will create and reward highly differentiated content as long as it appeals to users.
  • The claim that the other advertising networks (especially the SRNs) are not far behind and will also roll out similar tools.
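
The budget-shifting loop described in the second bullet (test variants live, shift spend toward responsive segments) can be sketched as a simple Thompson-sampling bandit. This is purely illustrative: Meta’s actual systems are proprietary, and the creative names and conversion rates below are invented.

```python
import random

def pick_creative(stats):
    """Thompson sampling: draw a plausible conversion rate for each creative
    from its Beta posterior, then serve the creative with the highest draw."""
    draws = {c: random.betavariate(s + 1, f + 1) for c, (s, f) in stats.items()}
    return max(draws, key=draws.get)

def simulate(true_rates, impressions=5000, seed=7):
    """Serve impressions one at a time; spend drifts toward what converts."""
    random.seed(seed)
    stats = {c: [0, 0] for c in true_rates}  # creative -> [conversions, misses]
    for _ in range(impressions):
        c = pick_creative(stats)
        if random.random() < true_rates[c]:
            stats[c][0] += 1
        else:
            stats[c][1] += 1
    return {c: s + f for c, (s, f) in stats.items()}  # impressions served

# A "weird" interactive ad that converts better quickly absorbs the budget.
served = simulate({"weird_playable": 0.05, "static_banner": 0.01})
```

Run it and the higher-converting creative ends up with the overwhelming majority of the impressions, with no human ever setting a target audience. That is the feedback loop in miniature: once creative generation is nearly free, the system can afford to try strange things, and anything that resonates gets amplified automatically.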

The net effect will be an avalanche of unusual, obscure, and creative ads. In many, if not most, cases the ad will be more interesting than the games.

Our expert panel thought long and hard about this one:


4. GTA-6 Will Ship and the LiveOps Backlash Will End

The prediction here isn’t just “GTA-6 actually makes its latest launch deadline” (though this prediction will be wrong if that doesn’t happen).

Given GTA history and Take‑Two’s massive expectations, GTA-6 is almost guaranteed to lean on long‑tail monetization. Rockstar will thread the needle with a strong core game, cosmetic‑heavy monetization, and a steady cadence of content that will once again prove that Live Services, done well, are key to both building a long-term community and effective monetization (note that if you’re reading this and trying to figure out whether to invest in Take Two, we strongly suggest reading Joost Van Dreunen’s take on the situation).

And this will finally normalize large‑scale live operations for premium games in a way players accept. After several years of backlash (see also: here and here) against poorly executed live‑service titles (roadmaps scrapped, servers shut down, exploitative monetization), GTA-6 will arrive as a $70+ box product with a robust single‑player experience and an evolving online/live‑ops layer that actually delivers value.

Here’s how our expert panel reacted to this prediction:

  • CEO. If Rockstar pulls this off, it will help executives like me argue that ‘live ops’ isn’t inherently evil—it’s just usually done badly. Right now, I look at consumer sentiment and developer surveys and see real fatigue: developers saying they don’t want to make their next project live service, and articles cataloguing the ‘life and death’ of shut‑down games.
  • Pundit. The delay to November 2026 actually strengthens the logic of this prediction. Take‑Two is clearly optimizing for quality and long‑term impact. Meanwhile, live‑service fatigue is very real in 2025. You can see it in opinion pieces warning of burnout and monetization pressure, and in the number of live‑service games quietly shutting down.

3. Apple Will Drop the Standard Rate to 15% for Everyone. Google Will Be a Fast Follower

Apple’s original policy for processing payments was simple: applications distributed through Apple’s App Store were required to use Apple to process any in-app transactions involving digital goods. And Apple took 30% of every transaction (a policy that was modelled on Facebook Credits).

For the past 10 years, Apple has been under extraordinary pressure to allow developers to use alternative payment systems (i.e., not using Apple to process transactions).

As a result of that pressure (and the regulatory and legal changes it has produced), Apple has been slowly losing share of payment volume. Publishers, rightly, see the opportunity to move from a 30% fee to a 5% fee (Naavik estimates 3 to 4%) as extraordinarily compelling. Apple hasn’t moved quickly because of the enormous volume of purchases made through the App Store: cutting the rate from 30% to 15% halves that revenue stream, which is significant even to Apple. But their long-term choice is clear: either lower the overall rate to a more competitive number or see most transactions move off their platform.
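
To put the rate changes in concrete terms, here is a quick arithmetic sketch (illustrative numbers only; the 5% figure follows the Naavik estimate mentioned above):

```python
def net_revenue(gross, fee_rate):
    """Developer's take-home after the platform fee."""
    return gross * (1 - fee_rate)

gross = 100.0  # per $100 of gross in-app purchases
at_30 = net_revenue(gross, 0.30)  # $70 kept at the historical 30% fee
at_15 = net_revenue(gross, 0.15)  # $85 kept at a universal 15% fee
at_05 = net_revenue(gross, 0.05)  # $95 kept via alternative payments

# Relative uplift in developer net revenue versus the 30% baseline:
uplift_15 = at_15 / at_30 - 1  # roughly 21%
uplift_05 = at_05 / at_30 - 1  # roughly 36%
```

Framed this way, the pull toward alternative payments is easy to see: moving from a 30% to a 5% fee grows a developer’s net revenue by more than a third, which is why the pressure on Apple’s 30% rate is so intense.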

Note also that Apple has already been lowering their rate for very specific carveouts. Here’s a snapshot of their current policy:

This prediction is simply that, in the face of ongoing pressure, declining market share, and increasing payments policy complexity, Apple moves to a universal 15% fee in 2026. And, of course, once they do so, Google will follow quickly.

Our expert panel wasn’t surprised, but was surprisingly cool to this prediction:

  • CEO. As a CEO running a portfolio of F2P and hybrid-casual titles, a universal 15% is… nice, but not life-changing. It’s a few extra points of margin, which matters at scale, but it doesn’t fundamentally change how hard it is to build a profitable game in 2026.
  • Pundit. Dropping to a flat 15% for everyone is less a revolution and more the end of a very long, very public negotiation between Apple, regulators, and the ecosystem. For years, the real story has been erosion of the 30% norm: carve-outs for small devs, sweetheart deals for big media partners, regional compliance hacks after the DMA, and mounting legal pressure in the US and EU.

2. Elon Musk Will Begin to Talk Extensively about the Neuralink as the Ultimate Gaming Platform

Elon Musk is one of the world’s most-covered celebrities. He’s not quite up there with Taylor Swift, but by all accounts he is one of the people the world is most fascinated by. He got there, of course, by being a fabulously successful CEO: simultaneously building Tesla, SpaceX, and Starlink while also buying Twitter (now X), serving as the head of DOGE, and fervently promoting the colonization of Mars. He also got there by frequently making public predictions about technology that didn’t quite come true (although, to be fair, the ones that were correct get less press coverage).

Most importantly for this prediction, he is the founder of Neuralink and, in 2025, has been claiming to be one of the world’s top Diablo 4 players (to be fair, he has a long history with video games, including writing a simple video game in the early 1980s, and this Washington Post profile from 2015).

While Neuralink is still early and experimental, and while there are still substantial ethical questions, it is also moving forward steadily. It’s only a matter of time before recent advances, such as this one where a paralyzed man is able to control his computer for the first time, lead to sustained and frequent public speculation, including aggressive timelines for delivery, about mass-market adoption of the Neuralink as a gaming controller.

Neural Control of a Fighting Game
(Where Reaction Time Really Matters)

Our prediction is simply that this conversation hits the mainstream, shepherded by Elon, in the second half of 2026.

After they finished snorting coffee through their noses (and then wiping up the mess; they are apparently neat-freaks), our expert panel reacted to this prediction:

  • CEO. I’m not building a Neuralink P&L any time soon! The timelines and regulatory risk are huge. But I am paying attention because BCI gaming has been around for a long time and patients are already playing simple games using only their thoughts. And that means it’s only a matter of time.
  • Pundit. This wasn’t on my radar. But you’re right. In 2026, ‘ultimate gaming platform’ will mostly be rhetoric and early lab demos. But the idea will be in the cultural water, which matters a lot for how investors and big platforms think about the 2030s.

1. The PS6 Will Be Delayed Until 2028

The PlayStation 5 was released on November 12, 2020 to generally positive reviews. The reviews, while positive, were constrained: Ars Technica, for example, called it “more of a generational hop than a leap [forward].” The consensus was that the core of the machine, the CPU and the graphics, was improved, but that the controllers were the best feature. In the five years since launch, opinion has solidified — the PS5 is the leading console, but it “defines gaming’s standard — and its plateau.” Or, as GamesRadar put it:

All this time later, the PS5 is easily my most used console, but – as a straightforward replacement for the PS4 – it’s not felt like an earth-shattering generational shift.

All of this is to say that there’s a certain amount of pressure for the PS6 to be awesome, not incremental.

So, while Sony’s historical cadence of seven‑year console cycles points to a 2027 launch window for the PS6 (the PS3 was released in 2006 and the PS4 in 2013), and the industry consensus is that it will ship in time for the 2027 holiday season, there are starting to be signs that it could be mid-2028 instead.

Our money is on Sony opting for a 2028 window. By doing so, Sony extends PS5/PS5 Pro’s life, lets the Project Amethyst‑style hardware mature into something mind-boggling, and aligns the PS6 with a more stable post‑GTA‑6 landscape.

Here’s how our expert panel reacted to this prediction:

  • CEO. This prediction tracks what I’m hearing in platform briefings and from analysts. A longer PS5/PS5 Pro cycle gives us more runway to recoup AAA budgets—but it also means another few years of intense competition for store placement and subscription visibility on current‑gen hardware. I’m not loving my console business right now.
  • Pundit. Sony hasn’t given a date for the PS6 yet, but the breadcrumbs are there. TechRadar, GamesRadar, Tom’s Guide, and a slew of rumor coverage all converge on a 2027–2028 window, with many insiders leaning 2028.
 

Extra Bonus Observation: 2028 Will Be the Beginning of the Next Video Game Boom

These last two predictions (that the Neuralink will eventually become a gaming platform, and that the PS6 will be released in 2028 with amazing hardware features) can be combined, along with the continued growth of VR and AR gaming and the rumored 2028 Xbox release, into a larger prediction: 2028 will be the year when the platform shifts really happen.

When you combine that with the emerging world of weird ads and micro-targeting, the increased maturity of AI toolchains, and the coming enormous revenue expansion predicted by Owen Mahoney, it feels like 2028 is going to be the start of a sustained boom period for video games.

Evaluating Our Predictions for 2025

For the most part, the gaming industry runs on a yearly cyclical cadence (there are large-scale decades-long trends and patterns, such as Joost Van Dreunen’s Play Pendulum, but the yearly cycles are how the industry self-organizes). So, for example, if it’s March, it’s time for GDC (and welcome once again, my friends, to the Game Revenue Optimization Mini-Summit). And if it’s December, it’s time for the pundits to gather round and tell us what will happen in the coming year.

Right now we’re in the prelude before the prediction bonanza, the calm before the storm, that long moment when the baby has been dropped on the floor but has not yet started screaming. It’s the time of the year when the people who made the predictions ‘fess up and talk about why they were right (and how it was reality that got it badly wrong).

That’s what this is. We made some predictions last year, and we’ve been thinking hard about what’s coming next year. But we owe it to you, dear reader, to let you know how we did and to let you draw your own conclusions about how seriously to take our next set of predictions.

How We Evaluated the Predictions

We evaluated our predictions along two distinct axes:

  • Correctness. Were we right? Did our predictions come true? If what we said wasn’t true, you’d have good reason to ignore our upcoming set of predictions.
  • Completeness. Did we miss anything important? Part of the value of predicting is not just being right, it’s covering all the important events. If we were running the country and we predicted an increase in street traffic but completely missed an attack by an army of CHUD (Cannibalistic Humanoid Underground Dwellers), you’d probably wonder whether we were the right leaders.

(As a side note, we apologize to the fans of the undergraduate logic curriculum, who are no doubt saddened that we omitted compactness as a third evaluation criterion.)

Separately, there’s also the question of who evaluates the predictions. The pattern in years past, and in most of the gaming industry press, is for the predictors to self-evaluate. That is, the people who make the predictions mostly get to decide whether they did a good job.

But we’ve been told that AI changes everything. Since 2025 is the year of agentic AI, we decided to have ChatGPT (5.1, pro) evaluate our predictions using three distinct personas.

Our expert panel comprised generative AI simulations of:

  • A CEO of a small to midsize gaming company that is struggling to stay afloat during industry hard times.
  • An industry pundit with deep knowledge of the space and a regular platform (e.g., a blog or Substack).
  • An external observer, not working in the games industry but with some knowledge of the space and working in an adjacent industry, who feels somewhat skeptical about the value of year-end predictions.

Each of these personas was given the task of scoring us on both correctness and completeness.

We Were Mostly Correct

After a hearty breakfast, we convened the panel for the initial round of deliberations. Here’s the short version: we made seven predictions; six came true, and one came close.

The Expert Panel, Debating Correctness
(Taken During the Morning Session)

Here’s the detailed breakdown:

Prediction 1: There will be more revenue.
  • CEO Evaluation: This prediction has basically come true in headline terms … However, that does not mean 2025 feels like a boom.
  • Pundit Evaluation: Accurate, but unexciting.
  • Skeptical Observer Evaluation: Global games revenue is higher in 2025 than in 2024, but 3 to 4 percent growth against similar levels of inflation yields only a small real gain.
  • Our Reaction: Yes! We nailed it!

Prediction 2: Mobile UA teams will keep leaning on MMM, often without proper validation.
  • CEO Evaluation: Directionally right.
  • Pundit Evaluation: Mostly correct, but a bit pessimistic.
  • Skeptical Observer Evaluation: Agree with the prediction’s spirit that many teams cling to MMM without sufficient experimentation.
  • Our Reaction: Nailed it again!

Prediction 3: No new Top 100 mobile games without web shops will add one in 2025.
  • CEO Evaluation: Whether or not literally zero new Top 100 titles launched web shops in 2025 is hard to verify from public information and irrelevant.
  • Pundit Evaluation: Too strongly worded but directionally plausible.
  • Skeptical Observer Evaluation: The more meaningful observation is that web shops have rapidly become yet another layer of complexity in mobile monetization.
  • Our Reaction: We weren’t wrong!

Prediction 4: Web storefronts will see big gains from experimentation and personalization.
  • CEO Evaluation: This prediction rings true in spirit. The 2025 ecosystem of DTC tooling, analytics integrations, and vendor case studies all pushes toward more experimentation and segmentation on web storefronts.
  • Pundit Evaluation: Partially correct but slightly overstated.
  • Skeptical Observer Evaluation: The prediction sounds plausible but I question whether, across the whole of mobile gaming in 2025, this really counts as one of the defining revenue growth engines.
  • Our Reaction: Still not wrong! We’re 4 for 4 so far!

Prediction 5: AR and VR will finally gain significant traction.
  • CEO Evaluation: AR and VR do have more momentum in 2025 than a few years ago, but the impact on my business is still limited.
  • Pundit Evaluation: Broadly right. The data clearly show a meaningful upswing in AR or VR and smart glasses shipments in 2025, with AI enabled glasses emerging as a genuinely new category.
  • Skeptical Observer Evaluation: The prediction captures a real uptick in momentum but exaggerates how decisive 2025 is as a turning point.
  • Our Reaction: Yes! Right again!

Prediction 6: Alternative app stores will exceed 10 percent of western mobile gaming installs.
  • CEO Evaluation: Overly optimistic about adoption speed. In 2025, alternative stores are a real strategic consideration, especially for EU focused titles and for Android in markets where OEM or carrier stores matter.
  • Pundit Evaluation: Almost certainly incorrect given the data available in late 2025.
  • Skeptical Observer Evaluation: A classic case of over-extrapolating from regulatory headlines. Changing default distribution behavior for hundreds of millions of mainstream users is extremely difficult.
  • Our Reaction: Ouch. These judges are tough.

Prediction 7: Selling physical goods in game will stop being surprising.
  • CEO Evaluation: Largely accurate but unevenly distributed across the industry and irrelevant to me.
  • Pundit Evaluation: Mostly correct given how 2025 has unfolded.
  • Skeptical Observer Evaluation: Agree that 2025 made the idea of buying physical goods inside games feel more normal.
  • Our Reaction: Yes! Back on track and 6 for 7 overall!

But We Missed Three Big Trends

After lunch, we reconvened the expert panel to discuss the harder question: What did we miss?

The Expert Panel After Lunch, Thinking Hard
(Mainly About Where to Go for Dinner)

According to our experts, we missed three major trends.

Missed Trend 1: AI-native game businesses. By late 2025, most commercially serious studios will treat AI as a default part of production and live ops, and the hard problems will be ROI measurement, workflow integration, and governance, not ‘should we use AI?’
  • CEO Opinion: My P&L reality in 2025 is that AI is the only lever big enough to offset rising costs and shrinking margins.
  • Pundit Opinion: 2025’s biggest structural shift is that AI is becoming the new production function. The bottleneck in games used to be content; now it’s taste and data quality.
  • Skeptical Observer Opinion: AI is obviously big, but the ROI is opaque at this point.
  • In Our Defense: How did we miss this? AI is huge and transformative and we’re doing a lot of work helping companies with their AI transformations. This omission is inexcusable, especially since 30% of the respondents to a recent GDC survey said that they believe that generative AI is having a negative impact on the games industry.

Missed Trend 2: Platformized creator economies. By end of 2025, creator platforms like Roblox and Fortnite will represent a third major commercial pillar alongside traditional PC/console/mobile, with built‑in A/B testing, regional pricing, and engagement-based payouts that make them some of the most data‑instrumented game economies on earth.
  • CEO Opinion: In 2025, a real strategic choice is: do we become a ‘studio on a platform’ (Roblox, UEFN) instead? This is new and a significant challenge.
  • Pundit Opinion: This is the blind spot: you treated platforms mainly as distribution and webshops, not as competing economic systems.
  • Skeptical Observer Opinion: Maybe. But are these ecosystems net-new value, or just reshuffling time and money away from other games?
  • In Our Defense: What can we say? After VR and then Web3, we are skeptical of platform shifts and overlooked this one.

Missed Trend 3: Hybrid monetization is the new norm. In 2025, the median successful game will be running at least two monetization models (e.g., ads + IAP, or IAP + subscription), and platform-level subscriptions will keep pulling value out of pure à‑la‑carte spending. The hard problem moves from ‘which model?’ to ‘how do we optimize LTV across overlapping ones?’
  • CEO Opinion: Your predictions focused on ‘more revenue’ and DTC mechanics but didn’t explicitly call out how messy monetization design has become.
  • Pundit Opinion: Hybrid monetization and subscription stacking are now the design constraint for game businesses.
  • Skeptical Observer Opinion: I don’t like it. This is starting to look like game design is just financial engineering. Games are not spreadsheets.
  • In Our Defense: This is business as usual and not “prediction-worthy.” Hybrid monetization was well-established heading into 2025 and isn’t really a trend. And, anyway, Tiffany Keller covered it extensively at our Game Revenue Optimization Mini-Summit in 2025.

In Conclusion

We were reasonably accurate — for the most part, the things we predicted did happen. At the same time, the events we didn’t predict are significant omissions. While our 2025 predictions captured the headline moves in revenue, UA, and monetization, our AI panel rightly called out the deeper structural shifts around AI‑native production, platformized creator economies, and hybrid monetization. It’s reasonable to say that we predicted the linear trends, extrapolating correctly from patterns already in motion, but failed to anticipate the bigger non-linear shifts that will cause significant structural changes in the gaming industry.

As we head into the next prediction cycle, we’ll keep treating predictions as hypotheses to be tested, not pronouncements from on high—and we’ll try to be more explicit about the kinds of large-scale structural changes we missed this year.

Integrating Experimentation into Marketing Measurement

Introduction

Understanding advertising effectiveness is crucial for any marketing strategy because it directly impacts resource allocation, campaign optimization, and overall return on investment (ROI). By measuring how well advertisements perform, marketers can determine which messages resonate with their target audience, identify underperforming channels, and refine their creative approach to boost engagement. Effective ad analysis also helps pinpoint the ideal balance between reach, frequency, and targeting precision, ensuring that budgets are not wasted on ads that fail to drive revenue. Moreover, it provides valuable insights into consumer behavior, helping businesses adjust to changing preferences and trends. Ultimately, understanding ad effectiveness enables data-driven decision-making, empowering marketers to create more impactful campaigns that achieve measurable outcomes and foster long-term brand growth.

Integrating experimentation into marketing measurement is one of the most effective ways to achieve advertising effectiveness. You can optimize resource allocation and improve ROI by embedding controlled experiments, such as A/B tests or randomized controlled trials (RCTs), into your marketing processes and analytics. In a recent study, advertisers on an online advertising platform who used ad experiments for measurement saw substantially higher performance than those who did not. An e-commerce advertiser running 15 experiments (versus none) saw about 30% higher ad performance in the same year and 45% in the year after. While this evidence is correlational, it’s reasonable to assume that, in today’s data-driven landscape, experimentation, personalization, and automation are not just best practices; they are becoming a competitive necessity.

However, integrating an experimentation strategy into marketing measurement can be complex, often requiring large-scale organizational changes and careful planning. This means clearly articulating objectives, establishing a hierarchy for measurement and analytics, selecting the right types of metrics, and determining a system of ground truths and methodologies. You must decide on your marketing and business goals, such as prioritizing ROI or top-line growth. By clearly understanding these goals, you can more effectively design experiments and integrate these with observational analytics to refine your strategies. This ensures that the integration of experimentation is not just a technical procedure but a crucial part of a larger, comprehensive strategy to achieve business success.

In this article, we provide high-level guidance on how you can succeed with integrating experimentation into your marketing measurement.

Why Experimentation is Necessary

In the 20th century, the field of marketing experienced a dramatic transformation driven by advancements in data collection, analytics, and communication technologies. Early in the century, marketing effectiveness was primarily assessed through anecdotal evidence and crude measures, such as sales increases and consumer feedback. The rise of mass media—newspapers, radio, and television—ushered in an era of broad audience outreach, leading to the development of audience metrics such as radio ratings and TV viewership statistics. The mid-century saw a growing interest in market research, with the establishment of industry giants like Nielsen providing quantitative insights into consumer behavior.

By the late 20th century, computers had revolutionized data analysis, enabling sophisticated consumer segmentation and predictive modeling. It became common practice to use econometric models to estimate the relationships among the various factors driving marketing outcomes. In particular, the field of Observational Causal Inference (OCI) seeks to identify causal relationships from observational data when no experimental variation and randomization are present.

However, as two of the authors recently noted: “Despite its widespread use, a growing body of evidence indicates that OCI techniques often stray from correctly identifying true causal effects [in marketing analytics].[1] This is a critical issue because incorrect inferences can lead to misguided business decisions, resulting in financial losses, inefficient marketing strategies, or misaligned product development efforts.” One of the most common and longstanding OCI techniques in marketing measurement is media and marketing mix models (MMM).

In our recent note, we called on the business and marketing analytics community to embrace experimentation and to use experimental estimates to validate and calibrate OCI models. The community response was lively, including a contextualizing piece on AdExchanger.

It should be pointed out that this is not a new observation. Many early papers in OCI advocated for experimental validation of modeling results. For example, Figure 1 shows the abstract from a 1957 paper by M. L. Vidale and H. B. Wolfe.

Figure 1. The abstract from "An Operations-Research Study of Sales Response to Advertising."
(Vidale and Wolfe)

What is new is that, in the modern internet era, wide-scale experimentation is now both possible and widely accessible. It’s still not easy, but it is doable.

Types of Experiments in Marketing

In its broadest sense, marketing experimentation refers to any intentionally designed intervention that can help marketers measure the effects of their actions. This includes deliberate variations in spend, share, allocation, or other strategic and tactical decisions made for the purpose of measurement.

For instance, a marketer might introduce intentional variation in daily or weekly spending for a specific channel to estimate its impact on outcomes. By analyzing how performance changes with these fluctuations, marketers can better isolate and quantify the channel’s true effect.
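To make this concrete, here is a minimal sketch (with entirely made-up numbers) of how deliberately randomized spend variation lets you read off a channel's marginal effect with a simple regression. Because the variation is randomized, the slope is not confounded by seasonality or targeting decisions.

```python
import random
import statistics

def ols_slope(x, y):
    """Slope of y regressed on x: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Simulated example: jitter daily spend randomly around a $1,000 baseline,
# and assume (purely for illustration) that each incremental $1 of spend
# drives $1.50 of revenue plus noise.
random.seed(0)
spend = [1000 + random.uniform(-300, 300) for _ in range(90)]
revenue = [5000 + 1.5 * s + random.gauss(0, 200) for s in spend]

# Because the spend variation was deliberately randomized, the slope is a
# noisy but unconfounded estimate of marginal revenue per dollar of spend.
marginal_roas = ols_slope(spend, revenue)
print(f"Estimated marginal revenue per $1 of spend: {marginal_roas:.2f}")
```

In practice the fluctuations would be planned up front (size, duration, and channel), and the analysis would control for trend and seasonality, but the core idea is exactly this regression.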

In more extreme cases, experimentation might involve “going dark”—completely halting marketing activity in a specific channel or geographic location. By observing the performance drop (or lack thereof) when marketing is paused, marketers can try to measure the incremental impact of that channel. While this approach can yield insights, it comes with risks (such as confounding variable bias), particularly in high-stakes environments where even short-term losses are undesirable. And it is clearly not an RCT, where we know that effect estimates will be unbiased on average.

Tests with Treatment and Control Groups

Narrowing the focus, experimentation can be defined as specifically designed tests that involve treatment and control groups to estimate effects. Under this definition, experimentation encompasses a wide spectrum of tests, ranging from basic ad platform tools to more rigorous methodologies.

Many advertising platforms, like Google and Facebook/Meta, provide split (or A/B) testing tools. These often self-serve tools enable marketers to compare various tactics or creative assets without the need for control groups, using only exposed, non-overlapping audiences. Split testing tools divide the audience into two or more groups, each receiving a different version of the ad. Marketers might also run simultaneous campaigns with varying parameters to observe performance differences.

While these tools can be useful for directional insights, split tests are typically limited to optimizing specific campaign elements because they fall short of delivering true incrementality measurement.

The Gold Standard: Randomized Controlled Trials (RCTs)

Randomized Controlled Trials (RCTs) are often called the gold standard of effectiveness research. In an RCT, ad exposure is fully randomized across users, with some users serving as a control group who do not see the ad or campaign being measured. This level of rigor ensures that the treatment effect (the ad’s impact) can be isolated and measured without bias on average.

RCTs are widely recognized as the most reliable method for causal inference. However, RCTs are often challenging to execute. Many marketers lack the ability to control ad exposure at the user level, particularly when working across multiple platforms or channels. Privacy regulations and restrictions on user-level data access have further complicated the implementation of RCTs in recent years.

Most ad platforms offer RCTs, but these are sometimes not usable without dedicated support personnel (and they often require more effort to implement successfully).
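Once an RCT has run, the basic readout is straightforward. The sketch below (with hypothetical numbers) computes the absolute conversion lift between treatment and control, a normal-approximation confidence interval, and the implied incremental conversions.

```python
import math

def rct_lift(n_t, conv_t, n_c, conv_c, z=1.96):
    """Absolute lift in conversion rate (treatment minus control) with a
    normal-approximation 95% confidence interval."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    # Incremental conversions attributable to the ad among the treated users
    incremental = lift * n_t
    return lift, (lift - z * se, lift + z * se), incremental

# Hypothetical experiment: 100k users per arm, 2.4% vs. 2.0% conversion
lift, ci, inc = rct_lift(100_000, 2400, 100_000, 2000)
print(f"lift={lift:.4f}, 95% CI=({ci[0]:.4f}, {ci[1]:.4f}), incremental~{inc:.0f}")
```

A production analysis would also account for non-compliance (users assigned to treatment who were never actually exposed), but this is the core calculation.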

A Practical Middle Ground: Cluster-Level Randomized Experiments

When user-level randomization is not feasible, cluster-level randomization and experiments can offer a practical alternative. In cluster-level randomization, the assignment of experimental ads is managed at broader levels, like geographic regions, rather than at the level of the individual user. With geo experiments, the most common type of cluster experiments, ad exposure is varied at a geographic level – such as ZIP codes, designated market areas (DMAs), or cities – rather than at the level of individual consumers. Some regions serve as test groups, receiving the ad campaign, while others act as controls.

Geo experiments allow marketers to measure the incremental impact of campaigns while avoiding some of the complexities of user-level RCTs. They are particularly valuable when privacy or technological restrictions limit access to granular user data, or when there might be spillover effects. (A spillover effect is an unintended impact of a marketing intervention or campaign on individuals, groups, or regions that were not directly targeted by the campaign. This can occur when the influence of an advertisement, message, or promotion “spills over” to adjacent groups or regions, leading to indirect exposure and potential behavior changes outside the intended treatment group.) Figure 2 below provides an overview of the different types of experiments available to marketers in different situations (source: Figure 1 in this article):

Figure 2. Taken from "It’s time to close the experimentation gap in advertising:
Confronting myths surrounding ad testing."

There is another reason that clustered experimentation is sometimes desirable: restricting an experiment to a small sub-population, demographic, or geography is often a way to mitigate perceived risk. If key stakeholders are uncomfortable experimenting on the entire population, or worried about the potential impact of spillover effects, isolating the test to a small sub-population can be a good compromise.

However, clustered experiments are not without challenges. They require careful planning, significant resources, and rigorous execution to ensure clean results. Marketers must account for regional differences, external factors, and spillover effects (where the impact of a campaign in one region influences neighboring regions). It can also be difficult to hold out large cities with attractive contiguous market areas from campaigns, making it challenging to create balanced test and control market groups.
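The core readout of a simple geo experiment can be sketched as a difference-in-differences: compare the change in the test markets to the change in the control markets over the same period. The numbers below are invented for illustration; a production analysis would also model regional covariates and quantify uncertainty.

```python
import statistics

def diff_in_diff(test_pre, test_post, ctrl_pre, ctrl_post):
    """(change in test markets) minus (change in control markets).
    Subtracting the control change nets out market-wide trends."""
    test_change = statistics.fmean(test_post) - statistics.fmean(test_pre)
    ctrl_change = statistics.fmean(ctrl_post) - statistics.fmean(ctrl_pre)
    return test_change - ctrl_change

# Hypothetical weekly sales (in $k) for matched test and control DMAs
test_pre  = [100, 102,  98, 101]   # before the campaign
test_post = [112, 115, 110, 113]   # during the campaign
ctrl_pre  = [ 99, 101, 100,  98]
ctrl_post = [103, 104, 102, 101]

did = diff_in_diff(test_pre, test_post, ctrl_pre, ctrl_post)
print(f"Estimated incremental weekly sales per test market: ${did:.2f}k")
```

The control-market change nets out whatever would have happened anyway (seasonality, macro trends), which is exactly why balanced test and control market groups matter.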

Successful Experimentation Requires a Commitment Across the Organization

Organizational success with experimentation requires more than just tools and processes. Most of the time, it requires a cultural shift and support. Executives must encourage teams to test hypotheses, embrace failure as a learning opportunity, and prioritize data-driven decision-making. Executive buy-in is critical to ensure experimentation becomes a core part of your marketing strategy. Here are a set of essential steps that can help you succeed:

Staff and Endorse Marketing Analytics Appropriately

The foundation of a successful experimentation program lies in having the right people and organizational support. This starts with hiring a dedicated data scientist or analytics team with expertise in marketing measurement and experimental design and analysis. These experts will be responsible for designing, running, and analyzing experiments and ensuring that insights are actionable.

Equally important is securing executive endorsement. A dotted reporting line to a C-level executive can signal the strategic importance of marketing analytics and experimentation. This endorsement helps prioritize the initiative across the organization and ensures that resources are allocated effectively.

Foster a Culture of Experimentation

For experimentation to thrive, firms must embed it into their organizational culture. This means fostering curiosity, encouraging data-driven decision-making, and rewarding teams for testing assumptions – even when experiments don’t yield the desired outcomes.

Leadership plays a critical role in shaping this culture. By promoting the value of experimentation and celebrating learnings from both successes and failures, executives can inspire teams to embrace testing as a core part of their workflow.

Depending on the setup of your wider analytics organization, and whether there is a central experimentation team and platform, it can be wise to formally link the marketing analytics group with the platform team. Research suggests that organizations with mostly decentralized decisions but a single authority that sets consistent implementation thresholds achieve more robust returns to experimentation. Experiment-based innovation and learning further thrive on cross-pollination, which the central team can facilitate.

One of the most challenging obstacles within an organization is overcoming the silos that exist between various departments, such as analytics, planning, strategy, marketing, finance, and leadership. These silos can hinder communication, collaboration, or the flow of information, ultimately impacting the organization’s ability to make data-informed decisions and execute effective strategies.

Commit to a Learning Agenda and Hold the Marketing Analytics Team Accountable

Bridging these departmental gaps requires a concerted effort to foster a culture of collaboration and open communication. One powerful approach to breaking down barriers is committing to a learning agenda that encourages cross-departmental engagement with shared objectives. By aligning all teams around common goals and promoting continuous learning, commitment to a joint learning agenda can be the single most important step in transforming organizational dynamics.

Ask the marketing analytics team to set clear objectives and a roadmap for experimentation. Every experiment should begin with a specific, measurable goal. The team needs to be able to answer questions like: What do we want to learn? What are the hypotheses we are testing? How will the results influence our decisions? How will we use the results in the wider measurement framework, e.g., to validate and calibrate OCI models? Clear objectives ensure that experiments are focused and actionable. They also help prioritize testing efforts, directing resources toward questions with the highest potential impact.

Create Feedback Loops

The true value of experimentation lies in its ability to inform decision-making. Firms need to establish feedback loops where insights from experiments inform future campaigns, strategies, and even the design of new experiments. Regularly reviewing and acting on experimental results, possibly following a fixed-timed process, ensures that insights drive tangible business outcomes. This iterative approach fosters continuous improvement and adaptation to changing market dynamics.

Tactics that Lead to Successful Experimentation

To integrate experimentation into marketing measurement effectively, marketing analytics teams must establish a clear framework that balances rigor and practicality. Here’s how marketers can get started:

Align Hypotheses, Objectives, and Governance

As a practical first step, commit to a learning agenda that fosters cross-departmental collaboration and aligns all relevant teams around shared objectives, helping to overcome organizational and communication silos.

Start with Broad Interventions


If your team is new to experimentation, begin with simpler interventions, such as introducing controlled variations in spending or campaign parameters. For example, randomly adjusting daily spending across campaigns can help identify baseline performance trends and directional insights.

Leverage Platform Tools and External Know-How


Modern marketing platforms like Google Ads and Meta Ads Manager include built-in experimentation tools. These platforms allow firms to test different variables – such as targeting criteria or bidding strategies – directly within their campaigns. Use these tools as a stepping stone. While these tests may not meet the highest standards of rigor, they can provide valuable learnings when executed thoughtfully. Ensure you understand the limitations of these tools, particularly around randomization and confounding.

Similarly, if you are primarily active on one or a couple of ad platforms, the provided attribution tools can provide reasonably reliable estimates of your advertising effectiveness. Build on these insights directly to validate and calibrate OCI models if you have those.

Firms can also turn to specialized vendors like Optimizely, Eppo, Adobe Target, or Game Data Pros for more complex needs. These vendors provide advanced capabilities for designing and analyzing experiments and building related software tools. Investing in these tools can streamline the experimentation process and make it easier to scale testing efforts.

Prioritize RCTs and Incorporate Cluster-Level Experiments


Whenever feasible, prioritize RCTs. Collaborate with platforms, publishers, or third-party measurement providers to implement RCTs that deliver unbiased causal estimates. RCTs may not always be practical, but they should remain the gold standard you aspire to. One particular caveat is to make sure there is enough statistical power: insufficient budget or duration can undermine the reliability of the experiment and its results. To address this, ensure that the budget, duration, and holdout size are adequate, based on power calculations.
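As a rough illustration of the power calculation mentioned above, the standard two-proportion formula gives the users needed per arm to detect a given absolute lift in a conversion rate. This is a sketch with hardcoded z-values (two-sided alpha of 0.05, 80% power); real planning should use your platform's or your statistician's tooling.

```python
import math

def sample_size_per_arm(p_base, lift_abs):
    """Approximate users needed per arm to detect an absolute lift in a
    conversion rate. Normal approximation; the z-values correspond to a
    two-sided alpha of 0.05 and 80% power."""
    z_alpha, z_beta = 1.959964, 0.841621
    p1, p2 = p_base, p_base + lift_abs
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / lift_abs ** 2)

# Hypothetical: 2% baseline conversion, detect a 0.2-point absolute lift
n = sample_size_per_arm(0.02, 0.002)
print(f"~{n:,} users needed per arm")
```

The takeaway is the shape of the tradeoff: small baselines and small lifts require very large holdouts, which is exactly why underpowered experiments are such a common failure mode.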

As your experimentation capabilities mature, explore geo and other cluster-level randomized experiments to measure the incremental impact of campaigns. Partner with data scientists or measurement specialists to effectively design and execute these tests. Geo experiments can bridge the gap between observational measurement and user-level RCTs.

Set Up OCI Model(s)

Once your marketing efforts involve more than two channels and you’re looking to scale up, it is time to build a comprehensive measurement framework that captures the full scope of these marketing activities. This involves cataloging marketing activities, i.e., listing all current and upcoming campaigns, channels, and tactics, along with their associated costs and KPIs. The figure in this article may be helpful for this exercise. Then set up a holistic measurement model, e.g., a media or marketing mix model, that includes all these activities plus control variables, trends, and adstock. This article provides an introduction to how you can do this using an open-source package.
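Two transformations do most of the work in a typical MMM specification: adstock (the carryover of past spend) and saturation (diminishing returns). Here is a minimal pure-Python sketch of both, with illustrative parameter values; real packages fit these parameters to data.

```python
def geometric_adstock(spend, decay=0.5):
    """Geometric adstock: adstock[t] = spend[t] + decay * adstock[t-1],
    i.e., past spend keeps working with exponentially decaying weight."""
    out, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        out.append(carry)
    return out

def saturate(adstocked, half_sat=200.0):
    """Simple diminishing-returns curve; equals 0.5 when x == half_sat."""
    return [x / (x + half_sat) for x in adstocked]

# A single $100 burst of spend keeps having an effect after it stops:
print(geometric_adstock([100, 0, 0, 0]))   # [100.0, 50.0, 25.0, 12.5]

# In an MMM, the KPI is then regressed on saturate(geometric_adstock(spend))
# for each channel, alongside trend, seasonality, and control variables.
effect_input = saturate(geometric_adstock([100, 0, 0, 0]))
```

The decay and half-saturation values here are placeholders; in a fitted model they are estimated per channel, and the functional forms vary by package.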

A holistic model serves as the baseline for measuring the incremental impact of experiments and provides a framework for interpreting results in the context of broader marketing dynamics. Figure 3, taken from a presentation by Meta, visualizes how different OCI approaches can come together with experimentation.

Figure 3. Taken from a Presentation by Meta.

Validate OCI Model(s)

Take the outputs from split tests, trusted attribution models, geo experiments, and RCTs to validate and calibrate your observational measurement models. To start, you can compare experimental and observational model results to ensure that they are “similar.” Similar can mean that both approaches pick the same winning ad variant/strategy or directionally agree. If the results are inconsistent, update the observational model to achieve similarity.

A somewhat more advanced approach uses experiment results to choose between OCI models. The marketing analytics team can build an ensemble of different models and then pick the one that agrees most closely with the ad experiment results for the KPI of interest, e.g., cost per incremental conversion or sales.
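Mechanically, this selection step can be as simple as picking the candidate whose estimate sits closest to the experimental benchmark. A toy sketch, with hypothetical model names and numbers:

```python
def pick_calibrated_model(model_estimates, experiment_estimate):
    """Choose the OCI model whose estimate (e.g., cost per incremental
    conversion for a channel) is closest to the experimental result."""
    return min(model_estimates,
               key=lambda name: abs(model_estimates[name] - experiment_estimate))

# Hypothetical cost-per-incremental-conversion estimates from three MMM variants
candidates = {"mmm_weekly": 42.0, "mmm_daily": 35.0, "mmm_bayesian": 32.0}
experiment = 33.0   # benchmark from a geo experiment

best = pick_calibrated_model(candidates, experiment)
print(best)
```

With several experiments across channels, you would instead minimize an aggregate discrepancy (e.g., the sum of absolute deviations across all experimental benchmarks).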

Calibrate OCI Model(s)

The most advanced and quantitative approach incorporates experiment results into the OCI model directly. Getting this right requires a robust understanding of statistical modeling. In a Bayesian modeling framework, the experimental results can enter your model as a prior. In a Frequentist model, they can serve to define a permissible range on the coefficient estimates: Say your experiment shows a 150% return-on-ad-spend with a 120% lower and 180% upper confidence bound; you can constrain your model’s estimate for that channel to that range.
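Both flavors of calibration can be sketched in a few lines. The Bayesian version below treats the experiment's ROAS estimate as a normal prior and combines it with the observational estimate by precision weighting (a conjugate normal-normal update); the frequentist version simply clamps the model estimate to the experiment's confidence bounds. Apart from the 150% / [120%, 180%] example from the text, the numbers are hypothetical.

```python
def combine_normal(prior_mean, prior_sd, est_mean, est_sd):
    """Conjugate normal-normal update: precision-weighted average of the
    experimental prior and the observational estimate."""
    w_prior, w_est = prior_sd ** -2, est_sd ** -2
    mean = (w_prior * prior_mean + w_est * est_mean) / (w_prior + w_est)
    sd = (w_prior + w_est) ** -0.5
    return mean, sd

# Experiment: 150% ROAS with a 95% CI of [120%, 180%]
prior_mean, prior_sd = 1.50, 0.30 / 1.96   # CI half-width / 1.96
# Hypothetical observational MMM estimate for the same channel
est_mean, est_sd = 2.20, 0.40

post_mean, post_sd = combine_normal(prior_mean, prior_sd, est_mean, est_sd)
print(f"calibrated ROAS: {post_mean:.2f} +/- {post_sd:.2f}")

# Frequentist alternative from the text: constrain (clamp) the model's
# estimate to the experiment's confidence bounds.
clamped = min(max(est_mean, 1.20), 1.80)
```

Note how the calibrated estimate lands much closer to the experiment than to the raw model output: the experiment's interval is tighter, so it carries more precision weight.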

Under a machine learning approach, you can use multi-objective optimization. Meta’s Robyn package does this: You can set it to not only optimize for statistical fit to observational data but also for minimal deviation from experimental results. This article provides a detailed walk-through of this relatively novel idea.

Identify Channels that Have Too Little Data for OCI Models to Work

OCI models, like all machine learning models, require data for creation and calibration. For example, an advertising channel must have a volume of historical data above a minimal threshold, along with variation in spend and exposure, in order to be meaningfully incorporated into an MMM.

If an MMM model has an advertising channel with too little data, several strategies can help address the issue. For example, incorporating prior knowledge through Bayesian methods can help stabilize estimates when data is sparse. Grouping similar channels with shared characteristics also allows performance to be estimated collectively, assuming similar behavior. In either case, experiments can quickly generate additional data to validate assumptions.
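The grouping idea can be sketched as precision-weighted shrinkage: pull a noisy channel estimate toward the average of similar channels, with noisier estimates pulled harder. The numbers below are invented; a real implementation would estimate the variances from data, for example via a hierarchical Bayesian model.

```python
def shrink_toward_group(channel_est, channel_var, group_mean, group_var):
    """Shrink a noisy channel estimate toward the mean of similar channels.
    The weight on the channel's own estimate grows as its variance shrinks."""
    w = group_var / (group_var + channel_var)
    return w * channel_est + (1 - w) * group_mean

# Hypothetical: a new channel with only a few weeks of data shows ROAS 3.0,
# but very noisily (variance 1.0); similar channels average ROAS 1.4 with
# between-channel variance 0.2.
pooled = shrink_toward_group(3.0, 1.0, 1.4, 0.2)
print(f"pooled ROAS estimate: {pooled:.2f}")
```

The pooled estimate sits between the channel's own noisy reading and the group mean, which is usually a better decision input than either extreme; a follow-up experiment on the channel can then replace the borrowed information with direct evidence.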

Integrating Experiments Pays Off

In conclusion, integrating experimentation into marketing measurement is essential for improving the accuracy and reliability of advertising effectiveness insights. While observational methods like MMM and OCI models provide valuable insights, they can suffer from biases without experimental validation. Controlled experiments can help calibrate and enhance these models by offering unbiased causal estimates.

However, success with experimentation requires work and planning. It requires an organizational commitment to data-driven decision-making, cross-departmental collaboration, and continuous learning. By aligning hypotheses, leveraging platform tools, fostering a culture of testing, and iteratively improving OCI models with experimental data, organizations can optimize resource allocation, better measure performance, and seize new growth opportunities across channels. Ultimately, experimentation transforms marketing from intuition-based strategies to a rigorously tested framework that drives both short-term results and long-term growth.

The effort is worth it, though. Evidence is mounting that OCI estimates can often stray far from those of RCTs, and that firms that embrace experimentation as an analytics strategy perform better. It’s not either OCI or ad experiments. It’s OCI and ad experiments.

We hope our article will help you get started.


[1] For the case of advertising, e.g., see Blake, Nosko & Tadelis (2015), Gordon et al. (2019), or Gordon, Moakler & Zettelmayer (2022); for the case of pricing, Bray, Sanders & Stamatopoulos (2024).

The Walls Continue to Crumble

Developers everywhere rejoice as the walls around Apple’s 30% fee continue to be systematically dismantled by governments.

Apple was handed a significant setback by a U.S. federal court yesterday in the ongoing legal battles over purchases related to mobile applications on the iPhone. GamesBeat’s coverage offers a concise summary of the ruling and captures the core finding nicely.

A federal district court judge found that Apple willfully violated a court order in the Epic Games v. Apple antitrust case … The judge said that “in stark contrast to Apple’s initial in-court testimony,” the documents revealed that Apple knew exactly what it was doing and at every turn chose the most anticompetitive option.

The short-term impact is that Apple will not be able to charge a 27% “tax” on web storefronts that are linked from the game. It’s worth noting that this was already true in the EU due to the Digital Markets Act. Joost Van Dreunen puts the event into a more detailed historical perspective in a post entitled Bad Apple. The days of the Walled Garden charging 30% are clearly numbered. And, as always, Eric Seufert has magnificently chronicled the entire adventure.

This is a huge win for developers and will accelerate the current trends towards web storefronts. It is also almost certainly a foreshadowing of what is to come in 2026 / 2027.

Here at GDP we expect that

  • Other walled gardens will continue to crumble. The EU’s enforcement actions against Meta will continue to gain steam, and the expansion of regulatory oversight of large-scale platforms will continue on all fronts. This is part of a large-scale societal trend as we all continue to grapple with the pervasive nature of modern technology.
  • Alternative payments platforms and SDKs will continue to gain ground, not just in Europe but in America as well. There simply isn’t much of a limiting principle in any of the legal injunctions: the reasoning in this ruling extends logically to the idea that alternative payment SDKs (such as Xsolla’s) will eventually be allowed, even in games downloaded via Apple’s publishing platform.
  • The combined pressure of lower-friction web storefronts, alternative distribution mechanisms, and alternative payment SDKs will eventually cause Apple (and Google) to drop the 30% fee on in-app purchases.
  • With a wide variety of different and lower-cost payment options, the case for experimentation and personalization in payments (which we talked about as part of our 2025 predictions) becomes even more compelling.

The Game Revenue Optimization Mini-Summit Rides Again

Want to learn more about revenue optimization in gaming? Join us for the Revenue Optimization in Games Mini-Summit and Happy Hour at 2 PM on March 19 at the American Bookbinder’s Museum in downtown San Francisco.

We’re a little over two weeks away from our yearly GDC event (signup here) and I’m really excited. This is our once-a-year community-building event where we take some time to talk revenue optimization with our peers, as a community, and it’s looking incredible.

We’ve got a great schedule:

  • The doors open at 2:00. We’ll have coffee for people who had a lot of carbs at lunch.
  • At 2:15, I’m kicking things off with “MMMs Are More Useful Than You Think.”
  • At 2:40, Ryo Shima will talk about “How Revenue Optimization is Different in Japan.”
  • At 3:05, Tiffany Keller will speak on “Advanced Hybrid Monetization.”
  • At 3:30, there will be a brief coffee-break. Note also that there will be cookies and coffee available throughout.
  • At 3:40, Joost Van Dreunen and Julian Runge will present a special double-length talk on “Gaming’s New Game: How Brand Partnerships Are Reshaping Entertainment Marketing.”
  • From 4:30 to 5:30, we’ll have a reception. Great hors d’oeuvres from a grazing station, along with various beverages.

That’s an amazing set of speakers and talks. But, to be honest, I’m equally excited by the audience. I’ve been looking at the people who’ve signed up, and, wow, the list is extraordinary. There might just be some legends in attendance.

The talks will be great, the side conversations (we’ve got the entire first floor of the American Bookbinder’s Museum and there are a couple of rooms that are perfect for follow-on conversations) will be awesome, and the reception will take it to the next level.

We’d love to have you join us.

2025 Games Industry Predictions Roundup 

As we head into 2025, one thing is clear: the gaming industry has no shortage of would-be-Nostradamuses. Predictions are flying by at high speed, sometimes as lists on LinkedIn and sometimes as full-fledged articles.  

Image by Terrence Dorsey

I made my own predictions, Seven Things That Are Really Going To Happen in 2025, last week. This week, to help everyone get a sense of what’s coming, the team gathered a few of our favorite prediction articles into a single list.  


First and foremost, of course, are Dean Takahashi’s yearly predictions. Dean is a veteran games journalist and helpfully includes an analysis of his previous predictions and how he did in 2024. VentureBeat also has a good list of 2025’s best sellers.

The brain trust at Deconstructor of Fun has come out with their 6 Predictions. They have a strong focus on geopolitical trends, and also think we’re headed for a new era of AI realism. They also have a set of Mobile Games Marketing Predictions that are interesting.

GamesIndustry.biz, as usual, is focused on large-scale business events and trends. Their panel of industry-watchers reviews their calls from the past year and offers new forecasts for the year to come.

Eric Seufert and Mobile Dev Memo are a definitive source for mobile marketers, and Eric published his 2025 year-ahead predictions related to mobile marketing and mobile gaming for the 11th year running (paywall). 

PocketGamer held a gathering of mobile mavens and collected their thoughts in a three-part series on mobile gaming in 2025.  

DFC Intelligence and GeekWire teamed up to tell us what will happen as well in “Gaming industry trends to watch in 2025: Distribution channels, console wars, and more,” focusing on distribution channels and console gaming.

MIDiA Research published five trends they think are key, including an interesting take on the rise of portable devices. Everyone expects the Switch 2 to ship. But … Microsoft and Sony shipping portable devices as well? That is a bold prediction. 

As you might expect, Esports Insider has some thoughts on the future of esports, and 2025 looks set to be a milestone year for the industry. Also included: a link back to their industry insiders’ 2024 predictions.

Servers.com focused on trends in live services in gaming, including an unusual prediction about AI (it will help reduce operational costs). 

And, as mentioned previously, Game Data Pros has analyzed the trends and has seven predictions about how game revenue optimization will evolve in 2025.


Got a favorite that we missed? Let us know and we’ll add it to the list.  

Seven Things That Are Really Going To Happen in 2025

Every year around this time, you see the same two things happen. People make New Year’s Resolutions about how they will change their lives. And they make predictions about how the world will change. Oddly enough, both activities are often highly repetitive – “This year I will lose 20 pounds” is repeated almost as frequently as “This is the year that AppLovin will buy Unity.”

Here at Game Data Pros, we’re not immune. We, too, have resolutions and predictions. But since we’re experts in Game Revenue Optimization (and not, for example, in game design or large-scale M&A), our industry predictions are focused entirely on game revenue.

If you want help leveraging any of these trends, or simply want to talk revenue optimization, please don’t hesitate to contact us.

And now, without any further ado … here are seven things that are really going to happen in 2025.

The Predictions, At a Glance


There Will Be More Revenue

This is the most straightforward prediction and probably the least controversial: Sales will go up.

It’s easy to lose sight of the revenue numbers in the face of widespread industry problems. For example, 2024 was the third straight year of record layoffs in the video game industry. Matthew Ball shared a nice visual of the layoff trend on X.

Figure 1. Record Game Layoffs. Source: Matt Ball.

But at the same time, industry revenue has never been higher and has never been healthier.

How healthy is it? At the recent AWS re:Invent conference, Amazon estimated the digital entertainment market as follows:

  • Digital Entertainment in 2024. $1 Trillion in consumer spending.
  • Gaming in 2024. At least 25% of that.
  • Digital Entertainment in 2028. $3.8 trillion in consumer spending.
  • Gaming in 2028. At least 25% of that.
Figure 2. Digital entertainment forecast from AWS re:Invent 2024.

Restating that: Amazon forecasts that gaming will be at least a $950 billion industry in 2028 (25% of $3.8 trillion).


Mobile UA teams will continue to use MMMs for ROAS, and many teams will not experiment with or empirically validate their models

A strange thing happened in gaming between 1990 and 2020.  Marketing precision first improved dramatically and then regressed almost as dramatically.  

In 1990, marketers bought ads and knew they worked, but they didn’t have a good idea of which ads worked or how effective a particular ad was. John Wanamaker, a famous merchant from the 1800s, once described this way of advertising with the quip “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.”

As advertising became digital and performance marketing gained traction, that changed. This was especially true in mobile gaming—in 2017, by using IDFAs and other forms of deterministic attribution, you could start to precisely measure which ads a user saw and attribute installs to ads (it wasn’t perfect, but it was good. Possibly very good).

And then IDFA deprecation happened and … well …  umm … Tim Sweeney put it best:

Figure 3. Tim Sweeney, posting on X.

Marketers reacted quickly by resuming their use of observational causal inference models to measure advertising effectiveness.  

Unfortunately, however, the problem of misinformation originating from observational causal inference in business analytics is serious. 


None of the “Top 100” mobile games that don’t already have a web storefront will implement one in 2025

This prediction is really an observation: web storefronts have already achieved peak penetration. It follows that there won’t be many new large-scale web storefronts coming online in 2025.

To understand the observation: 2024 was the year that web storefronts got everyone’s attention. Perhaps most dramatically, Appcharge analyzed web store adoption rates in mobile games and discovered that 72 of the top 100 mobile games already had a web storefront.  Appcharge also discovered that adoption varied widely by genre: 100% of social casinos had web storefronts, but only 30% of casual games had web storefronts.

The reasons for this variation in adoption are explained in detail in the Appcharge article cited above and in a related article by Jeff Gurian over at Mobile Game Doctor.  But, in brief, they are:

  • The early and middle adopters have already adopted. 72% is already a high number.
  • Web storefronts aren’t free—they require implementation and maintenance and often require changes to game design.
  • To make the web storefront decision profitable, the game needs the players to make their purchases at the web storefront. The challenges involved in shifting purchase traffic can be significant.
  • In general, any game whose IAP revenue profile involves large numbers of players making small and infrequent impulse purchases will find it hard to move purchase traffic to a web storefront.

Given the compelling monetary incentives associated with web storefronts in general, we think that most of the top 100 games without a web storefront have already made a rational decision that one won’t work for them.


Web Storefronts Will Achieve Significant Revenue Growth by Adopting Experimentation and Personalization

But, still, 72% of the top 100 mobile games have a web storefront. As the game teams gain experience with their storefronts and add more functionality, the possibilities for personalization-driven revenue growth are enormous. The most significant revenue growth driver in mobile games in 2025 will be the widespread adoption of experimentation-based personalization regimes.

Eric Seufert made the case eloquently in a tweet and then a follow-up article on Mobile Dev Memo.

Figure 4. Tweet by Eric Seufert in 2022.

There is an interesting question, though: Eric has emphasized the value of experimentation and personalization since 2022, and GDP grew out of our experiences doing personalized pricing and bundling at Scientific Revenue from 2013 to 2019. Why aren’t all web stores personalized already? Why do we think this is the primary avenue for revenue growth in 2025?

The answer is simple: first, you have to build the store. As Stash put it in their case study, these things have a lifecycle.

Figure 5. Stash Lifecycle. First, you build the store, then you market it, and then you personalize it.

This can take years to execute.


AR/VR will finally gain significant traction, making revenue optimization an even more difficult problem

This is probably the first controversial prediction: AR/VR is on the verge of a breakthrough. It may not happen in 2025, but by the end of the year, everyone will agree that the momentum is real.

It’s easy to point to Apple’s spectacular failure with the Vision Pro and the fact that analysts slashed forecasts for the Meta Quest 3 as proof points for the opposite case. And, indeed, many industry luminaries are doing precisely that on LinkedIn.

Figure 6. Eric Kress thinks VR is over

But this ignores (or, at least, glosses over) the ongoing adoption trends:

Even if you assume linear growth over the 2024 numbers, you’re looking at 15 to 20 million units sold. Admittedly, this is nothing compared to mobile phones, but it is roughly the number of PlayStation 5s that Sony sold in 2024 (and, given that we’re in the back end of the console lifecycle, more than Sony will sell in 2025).

It seems like a very bad bet to assume that amidst all those people, nothing compelling will emerge.

Of course, this is terrible news if you’re trying to build a global, long-term revenue optimization platform for gaming. Modeling the LTV impact of Console <-> Meta cross-play is the stuff of statistical nightmares.


Alternative App stores will be more than 10% of the Western mobile gaming market by install

2024 was the year of the webstore in mobile gaming, and a previous prediction was about the rise of personalization in webstores. But there’s another equally compelling trend happening because of Epic: Epic’s almost-quixotic legal battles to open up distribution and on-deck purchases are being quietly resolved in Epic’s favor.

The resolution in Europe is happening via regulation. The European Union’s Digital Markets Act (DMA), which came into effect in 2024, introduces regulations to enhance competition and limit the dominance of major tech platforms, referred to as “gatekeepers” by the DMA.  A key provision of the DMA requires these gatekeepers to permit the installation of alternative app stores on their devices, thereby reducing their control over app distribution.

In the US, the courts are starting to decide in favor of Epic. In October of 2024, judges ruled that Google’s Android app store was an illegal monopoly.

Consequently, alternative app stores are gradually emerging—first in Europe, but eventually everywhere. The Epic Appstore is already live in Europe, and the Microsoft Appstore is code-complete (Microsoft is just waiting for the appeals to be exhausted).

It remains to be seen how widespread adoption of these app stores will be, but Epic has also started having its app store preinstalled on Android devices and it seems a safe bet that they will get 10% of the market within a year.


It will become ordinary for games to sell physical goods and game-related merchandise in-game

Quietly, almost stealthily, Amazon released Amazon Anywhere in 2023. The idea is simple: it’s a way for developers to incorporate Amazon e-commerce from within their games or apps easily. That is, it’s a way for players to easily buy goods from Amazon without leaving the game. 

The first use was with “Peridot,” an augmented reality game developed by Niantic in which players can buy Peridot-branded merchandise directly within the game.

Amazon explains it this way: “Developers and creators are able to broaden in-game or in-app environments to offer more than digital products, opening up a new way to engage their audiences without worrying about selection, shipping, or fulfillment. Instead, they can focus on creating incredible experiences.”

We see this as being related to the following trends:

  • There is an increasing industry focus on cross-platform strategies that leverage IP to create growth. At the recent GamesBeat Next in October, the growth conversation began with “Beloved IPs are Crossing between Games and Other Mediums”
  • Relatedly, many companies from other verticals are focusing on video games. While it would be incorrect to say companies like Mattel, Sands, and Hasbro are new to video games, they are now paying unprecedented attention to them.
Figure 7. IP and Transmedia Were Projected to Be a Major Source of Growth at GamesBeat Next

Both of these trends mean that experimentation with “Play the game, buy the plushie” is inevitable. We don’t think this will become a common or standard practice in 2025, but we do think it will no longer be surprising when we see it.

Like our blog? Join our Substack.
