How does it feel to win the T Challenge Award? What does this recognition mean to you?
Zohaib Ahmed, Founder and CEO of Resemble AI: It’s fantastic. Every startup dreams of partnering with large enterprises, and to have the chance to work directly with companies like Deutsche Telekom and T-Mobile US is incredible. This award validates our progress and gives us the opportunity to ensure our solution is used exactly as we envisioned—fulfilling our mission.
Throughout the challenge, we spoke with stakeholders across Deutsche Telekom and T-Mobile and discovered how strongly aligned they are with our mission. The space we’re tackling—verifying authenticity in an era of generative AI—has become critically important. Today, people can’t always tell if what they see or hear online is genuine. That’s true in real time on phone calls, too. The enthusiasm and support we received from everyone at Deutsche Telekom and T-Mobile have been overwhelmingly positive. It shows they share our goal of restoring trust in digital media.
Your startup, Resemble AI, directly addresses this fundamental problem: trust. When everything can be faked, how can we distinguish the real from the fake? You’re offering a tool designed to defend against that threat. Do you see your work as just a business, or is it more of a mission?
It’s definitely a mission. Our internal slogan is “fight AI with AI,” because that’s the only effective way to combat malicious generative content. Already, humans struggle to tell whether audio or video is real. These AI models create content that looks and sounds unbelievably authentic. It’s hard to believe it’s only been three years since ChatGPT’s debut—these models have improved so rapidly that they’ve spawned an entirely new category of challenge.
In fact, Resemble AI was founded on principles of trust and ethics even before ChatGPT existed. From day one, we’ve open-sourced models for speaker verification and identification, and published research on techniques like watermarking to enhance content authenticity. Our core objective has always been larger than any single company: to establish a new standard for verifying AI-generated media. As generative AI evolves, the need for countermeasures must grow alongside it.
Let’s delve into the technology. Many companies claim to detect fakes—what sets your solution apart? What’s happening “under the hood” of your engine?
Honestly, our greatest differentiator is our team. We're aggressively publishing generative AI models, and our DNA is rooted in generative AI research. Originally, our goal was to build generative models for Hollywood, and we succeeded: our technology was used in Netflix's The Andy Warhol Diaries, which received four Emmy nominations. Warhol's voice throughout that docuseries was generated with our models. Last year, another documentary, Dirty Pop, also used Resemble AI's technology.
To combat deepfakes effectively, you must understand how to build generative models yourself. It's fundamentally a data problem: collecting large, high-quality datasets and synthesizing realistic samples. Our team has deep expertise in creating those datasets and training those models. That's our biggest advantage. For example, right now on Hugging Face, the top voice synthesis model is ours, with DeepSeek in second place, and our model earned 5,000 GitHub stars in just four days. That shows how rapidly we're pushing the frontier in generative AI.
We believe that to control generative AI, you must learn how it works. It’s like learning how to ride a horse: if you know how the horse moves, you can ride it safely. If you just jump on without understanding, you risk falling off. In the same way, to detect or watermark AI-generated content, you need that insider expertise.
Suppose I’m a customer—what tools does Resemble AI offer? I understand you started with voice, but I imagine you’ve expanded.
We began with voice, and it quickly became our bread and butter. In the past nine months, we’ve seen tremendous demand for image and video solutions as well. Today’s image and video generative models are so realistic—no more extra fingers or mismatched eyes—that fraudsters can easily create convincing fake visuals. Voice was the first vector we tackled because generative voice fraud was already happening: scammers were using AI audio to impersonate people and trick unsuspecting individuals.
But as image and video models improve, new forms of fraud emerge: insurance scams, falsified expense reports, and more. For example, you can use an app like ChatGPT to generate a picture of your car with a broken bumper, then submit that to an insurer as evidence. Insurance companies face a massive problem because they can no longer trust the images they receive. I tried it once with my father; he was stunned by how realistic the result was and couldn't believe it was fake. It's clear that our eyes and ears alone aren't enough anymore; we need AI to detect AI-generated fraud.
When you demonstrated the voice examples and asked us to identify which were fake, I scored six out of nine, despite expecting to do better. It highlights how convincing these fakes are. Have you encountered real-life cases where your technology prevented harm or solved similar problems?
Absolutely. When we first built this product, our first seven customers were all in law enforcement and intelligence agencies. They faced serious issues: child pornography using synthetic audio and video, deepfake recordings of politicians, and content that could sway entire elections. Agencies around the world now use our technology.
This problem is global. It’s not limited to North America or Europe; it’s happening in Singapore, Japan, Australia—everywhere. We’ve helped fight child exploitation, detect election-related deepfakes, and more. Last year I testified before the United States Senate on election deepfakes. I was just as nervous as anyone else—pacing in a blue suit—but it was a critical moment to highlight how quickly AI-generated threats are emerging.
Think about the adoption curve: the internet took decades to reach universal adoption, but generative AI has exploded in just three years. The pace has overwhelmed traditional defenses, so law enforcement and the public sector are scrambling to catch up. Their interest in our solutions is a clear signal that enterprises need them, too. Public-sector agencies regularly advise banks, telecoms, and other industries on how to respond. In the past month alone, four CEOs have been scammed by deepfakes, and the next attack could just as easily involve you.
Are policymakers and politicians hearing your message? Do they grasp the scale of this threat?
Yes. There’s growing regulatory momentum, but regulations must strike a balance: you don’t want to stifle innovation by overreaching. Our goal isn’t to halt generative AI; it’s to provide technological solutions that address these new risks. Regulators understand this dilemma—AI is effectively an arms race, with multi-billion-dollar deals shaping the landscape. If you overregulate, other countries with more flexible approaches will pull ahead.
That’s why we’re engaging with agencies and lawmakers to develop technological safeguards. Aside from deepfake detection, we’ve open-sourced an AI watermarking model that embeds invisible markers in media files. We’ve worked with Sony Music and RIAA to track sources of training data and authenticate content. If creators can watermark their original media in a way that can’t be removed, recipients can immediately verify its origin. There’s no single “silver bullet.” We need multiple approaches—detection, watermarking, verification—to stay ahead of adversaries.
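The watermarking idea can be sketched in miniature. The snippet below is a purely illustrative toy, not Resemble AI's actual algorithm: it shows the classic spread-spectrum approach, where a key-seeded pseudorandom pattern is mixed into audio at low amplitude and later detected by correlation. A production watermark must also survive compression, resampling, and editing, which this toy does not attempt.

```python
# Toy spread-spectrum audio watermark (illustrative only, not a
# production scheme). A secret key seeds a pseudorandom +/-1 pattern
# that is added to the signal at low amplitude; detection correlates
# the received signal with the same key-derived pattern.
import numpy as np

def embed(signal, key, strength=0.05):
    """Mix a key-derived pseudorandom pattern into the signal."""
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=signal.shape)
    return signal + strength * mark

def detect(signal, key, threshold=0.02):
    """Correlate against the key's pattern; high score => watermarked."""
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=signal.shape)
    score = float(np.mean(signal * mark))
    return score > threshold

# One second of a 440 Hz tone at 16 kHz stands in for real audio.
audio = 0.5 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
marked = embed(audio, key=42)

print(detect(marked, key=42))  # correct key on watermarked audio
print(detect(audio, key=42))   # clean audio, no watermark
print(detect(marked, key=7))   # wrong key fails to find the mark
```

The point the toy makes is the one from the interview: without the key, the embedded pattern is statistically indistinguishable from noise, while a recipient holding the key can verify the media's origin immediately.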
So far, we’ve discussed these solutions with virtually every U.S. agency you can imagine, as well as many international bodies. Senators and officials have listened attentively and asked for our input. It’s clear that everyone is on the same page: the threats are real, and they require collaborative, technology-driven responses.
Resemble AI is based in the U.S. Does international politics—tariffs, trade tensions—impact your work?
Tariffs don’t directly affect us. This challenge is bigger than politics. In the U.S., you’ll find bipartisan agreement on the need to address AI threats. Whether you’re Republican or Democrat, left or right, AI is one issue everyone takes seriously. That unity offers some protection against political headwinds. If anything, we’ve been encouraged by support from all sides.
Finally, beyond prestige, this award comes with a monetary prize. How will the funding help you scale? Does it change your roadmap?
First and foremost, the recognition and validation matter more than the cash. Having Deutsche Telekom and T-Mobile prioritize our mission signals that what we're doing is important, and that kind of credibility is invaluable.
Of course, the financial award also helps us reinvest in this long-term partnership. We view Deutsche Telekom as a decades-long partner, not a one-off. AI isn’t going away. The next 20 innovation challenges will likely be AI-focused. This funding goes right back into our engagement: building pilots, running proofs of concept, securing additional compute resources—whatever it takes to succeed.
What was your experience like in the T Challenge program, from application to winning the award? How was the journey?
The mentorship has been the most valuable aspect. Transparency from day one was crucial. It’s encouraging when mentors share our mission, but it’s even more useful when they tell us the hurdles we’ll face: accuracy requirements, legal and compliance concerns, deployment challenges, and so on. Deutsche Telekom and T-Mobile have assigned several experts to us, and we can tap into them anytime. That kind of inside perspective is rare. Most pilots don’t come with eight dedicated mentors ready to explain enterprise bottlenecks.
Their feedback has been frank: “Legal is worried about X, Y, Z—how can you address it?” Then it becomes a dialogue: “If we tweak our technology this way, can we satisfy compliance?” On our side, we don’t know exactly how to integrate our solution into a massive enterprise network. But with their guidance, we figure it out together. That collaborative process has been invaluable.
Any advice from the 2025 winners for next year’s applicants who might hesitate to apply?
This is a golden opportunity to put your ideas on paper and have them directly reviewed by executives at Deutsche Telekom and T-Mobile. Many startups focus on outbound pitches—cold emails, networking—to reach customers. Here, they’re handing you the chance: “Fill out this application, and we’ll read it.” If your proposal resonates, they’ll bring you in.
Every team that advanced to the program, regardless of whether they won, was very different from the others. Large enterprises face hundreds of problems, and this is your chance to align with one of those challenges and solve it. Even if you don’t win, you’ll learn whether your solution fits their needs and can iterate from there. If you have a chance to write those words on a page and get in front of that audience, seize it.
Great – thank you for your time, and congratulations again!
Thank you! It’s been our pleasure.