Thomas Thurston

IRL app faked 95% of its users: humans bought in, but AI didn't

This week I saw the headline for the first time: “A messaging app startup that raised $200M from SoftBank and others is shutting down because 95% of its users were fake.”[i]

No way! I’d been wondering about that one. Before the announcement, our AI (which we use to analyze startups) had been estimating IRL’s valuation at roughly one-tenth of its listed valuation, but I’d assumed the AI was wrong. We thought it must be a glitch. Now it seems the AI was right all along and we, the humans, were the glitch.

The startup IRL, short for “In Real Life,” was a social media app founded in 2016. It became a “unicorn” in 2021 after raising around $200 million at a valuation of more than $1 billion from SoftBank, Kleiner Perkins, Founders Fund and other notables. Make no mistake: these are some of the world’s most sophisticated venture capital firms.

Apparently, an internal investigation revealed that 95% of the app’s reported 20 million users were fake.[ii] While not as expensive a scandal as Theranos, there are many similarities.

IRL’s fraud had been hard for investors and even insiders to detect. The company's CEO, Abraham Shafi, was very secretive about the company's user data. He refused to share any of it with investors or employees and allegedly went so far as to fire an employee who questioned the accuracy of his user claims.[iii] Last year IRL laid off around a quarter of its team, suspicious employees began expressing more doubt about Shafi’s claims, and then came an SEC investigation.[iv]

Looking at how our AI valued IRL over time, you can see (below) things more or less lined up at first. Then, in early 2021, there was a huge divergence between IRL’s listed valuation and our AI-based estimates.

It isn’t unusual for differences to arise between our AI estimates and real-world valuations, but they’re usually at least in the same ballpark. For example, the AI estimated IRL’s valuation at around $50M in 2019, which was twice as high as the last-known valuation at the time of around $25M. That’s a big gap, but the prior $25M listed valuation was more than a year old, so if IRL had been growing since then our estimate might at least be within the realm of plausibility.

Everything changed in 2021 when, all of a sudden, IRL became a unicorn and the AI seemed wrong by a factor of 10. That isn’t a plausible gap; it’s a canyon.
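The gap the article describes can be framed as a simple sanity check: compare a model's valuation estimate to the listed valuation and flag when the ratio between them leaves a plausible band. A minimal sketch of that idea in Python, where the `max_ratio` threshold and the 2021 estimate figure are illustrative assumptions, not details of the actual model:

```python
def divergence_flag(listed: float, estimated: float, max_ratio: float = 3.0):
    """Return the gap between a listed valuation and a model estimate,
    and whether it exceeds a plausibility threshold in either direction."""
    ratio = max(listed, estimated) / min(listed, estimated)
    return ratio, ratio > max_ratio

# 2019: AI estimate ~$50M vs. last-known ~$25M -> a 2x gap, still plausible
print(divergence_flag(25e6, 50e6))   # (2.0, False)

# 2021: listed ~$1B vs. an AI estimate 10x smaller -> a canyon, flagged
print(divergence_flag(1e9, 1e8))     # (10.0, True)
```

The threshold is a judgment call; the point is only that a 2x gap and a 10x gap sit on opposite sides of any reasonable plausibility band.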

I thought the AI had to be making a mistake. After all, the “market” and smart investors had priced IRL at over $1B. Heck, for what it’s worth, even Pitchbook’s “exit predictor” had IRL at a 98% likelihood of a successful exit. 98%! You don't see that very often.

This month, now that fraud has been discovered, we humans are finally learning what our AI already knew: IRL simply wasn’t worth $1 billion. Not even close. AI had detected – long before the rest of us – that IRL hadn’t created as much market value as it claimed. Lies, fake numbers and bots can fool us humans, at least for a while, but in this case they couldn’t fool AI. The AI had no eyes to pull wool over, so to speak.

As much as I obsess over AI and its use in venture capital, the IRL scandal somehow struck a new chord. At this moment in history, the conversation over using AI in venture capital is still mostly rooted in the assumption that people’s market transactions are the “reality” that AI should aspire to emulate. In other words, the prices people agree to in funding rounds are real, AI estimates are fake, and AI is only as good as its proximity to those human decisions.

The IRL scandal flips this on its head, at least in this case. Here, AI had a better understanding of reality than the people involved. The AI estimates were “real” (or at least more real), the prices people agreed to in funding rounds were fake (even though they didn’t know it), and the human decisions would have been only as good as their proximity to the AI.

In real life (irony intended), it was AI rather than us humans that had better visibility into what was real versus fake.

When basing decisions on AI, everything obviously depends on the quality of the AI involved (that goes without saying). I’m also not advocating some sort of blind submission to whatever an algorithm spits out. I am saying, however, the IRL scandal felt (at least to me) like a glimpse into what’s ahead. Or, in this case, a future that’s already here.

In the not-so-distant past, the mention of AI conjured up images of machines that needed constant supervision and oversight from humans. It seemed as though we had to watch over AI like a vigilant guardian, fearful of its missteps and unpredictable behavior. But as the relentless march of progress continues, a fascinating shift continues to happen. The better our tools get, the more ways we can learn to use them.

Today, we find ourselves in a remarkable era where AI isn't just a tool we second-guess but one that can help second-guess us, our biases, our egos and what we think we know.

Human judgment and AI are becoming more like equal partners, displacing an assumption of master and servant. In a big picture sense, this partnership can be good or bad depending on the details; the wisdom of AI is very case-by-case. In this case, AI could have helped protect sincere humans from fake ones - in real life.

[i] Steve Mollman, “A messaging app startup that raised $200M from SoftBank and others is shutting down because 95% of its users were fake,” Fortune (June 26, 2023)

[ii] Amanda Silberling, “Unicorn social app IRL to shut down after admitting 95% of its users were fake,” TechCrunch (June 26, 2023)

[iii] Mark Matousek, “Social App IRL, Which Raised $200 Million, Shuts Down After CEO Misconduct Probe,” The Information (June 23, 2023)

[iv] Amanda Silberling, “Unicorn social app IRL to shut down after admitting 95% of its users were fake,” TechCrunch (June 26, 2023)

