When the Truth Gets Buried: Gaurav Srivastava and the Mechanics of Modern Disinformation

Review of Episode 2 of Gaurav Srivastava’s Story (Targeted Podcast)

In the second installment of Targeted’s two-part exploration of Gaurav Srivastava’s downfall, host Zach Abramowitz deepens the portrait of a man swept into what Srivastava describes as a deliberately orchestrated storm of character assassination. The podcast—produced by Next Chapter Podcasts—continues to ask timely and unsettling questions: What happens when a well-funded smear campaign manipulates the digital ecosystem? Can someone claw back their reputation after it’s been algorithmically dismantled?

The episode picks up where the first left off. Srivastava, once a respected commodities investor with deep industry ties, finds himself reeling from what he alleges to be a calculated media and information war initiated by a former business partner. The previous episode detailed the origins of the feud, rooted in accusations of sanctions violations and financial misconduct. Episode two, however, drills into the tactics of reputational destruction—and perhaps more disturbingly, the mechanisms that make such campaigns possible in today’s media landscape.

Quicksand and the Wikipedia Spiral

The episode opens on a vivid, emotionally charged scene: Srivastava watching his children learn the story of Jonah during a school chapel assembly. “I identified with Jonah,” he recounts. It’s an analogy that sets the tone—one of confusion, isolation, and entrapment.

But where this episode really gains traction is in its examination of how disinformation metastasizes across platforms. “The Wikipedia angle,” as Abramowitz calls it, is central. As Gaurav Srivastava’s public image unraveled, one of the most insidious elements of the attack was the creation of a Wikipedia page titled Gaurav Srivastava Scandal. It wasn’t just the title that was problematic—it was the speed, authorship, and coordination behind it. According to a Wikipedia contributor interviewed in the episode (using the pseudonym “David”), the article was not a natural product of open-source contribution but the work of a single editor bearing the hallmarks of “sock puppetry”—a deceptive practice in which one person creates multiple fake accounts to fabricate consensus.

The page appeared, was taken down after being labeled an attack page, then restored and weaponized again. “It literally sprung up overnight,” David says, with citations pulled from obscure blogs and low-tier publications, most of which, according to Srivastava, had been deliberately seeded to support this version of events. Another user connected to the page demanded $40,000 in cryptocurrency in exchange for an interview; as the podcast later confirms, he has since been banned from Wikipedia following an internal sock-puppet investigation.

The implication here is stark: Wikipedia, once a beacon of collective truth, can be gamed. If you control enough content upstream—through third-tier articles, coordinated blogs, even paid-for features—you can fabricate a reality that even Wikipedia will document. “They wrote the story, and then created the sources to back it up,” Srivastava laments. It wasn’t just that the information was wrong—it was designed to be unchallengeable.

The Echo Chamber of Respectability

From there, the episode moves into its most blistering critique—not of the murky blogs or shadowy PR hands, but of the supposed sentinels of respectable journalism. The Wall Street Journal and Financial Times, once seen as unimpeachable arbiters of truth, are called out for echoing the claims seeded by this disinformation campaign without adequate scrutiny.

“The FT headline reads: an oil trader ‘has a staggering story to tell about how he got sanctioned,’” Abramowitz notes, adding, “It’s a story about the story. Not an investigation into the facts.” The suggestion isn’t that the Financial Times or Wall Street Journal acted maliciously, but that in a media ecosystem hungry for pre-digested narratives, even the most prestigious outlets can become unwitting amplifiers of misinformation.

This is perhaps the most chilling takeaway of the episode: once a narrative reaches critical mass, even elite publications will pick it up—not to interrogate, but to report that it’s being talked about. The authority of the story becomes self-reinforcing. “Confirmation becomes currency,” Srivastava says, “and the lie becomes real.”

It’s a point that extends beyond this specific case and into a broader indictment of how reputations can be algorithmically dismantled and epistemically erased. The podcast doesn’t shy away from the uncomfortable fact that many listeners, if skimming headlines or checking Wikipedia, might very well conclude Srivastava is guilty. That perception is not accidental; it was designed.

Personal Fallout and Psychological Cost

The episode doesn’t rely solely on media criticism. It remains grounded in the deeply personal effects of this campaign. Banks closed accounts. Business partners distanced themselves. His children became pariahs at school. There’s a harrowing moment when Srivastava recounts how his kids were excluded from birthday parties, and how other parents believed the FBI might raid the family’s home.

That fallout, he explains, became existential. “I contemplated whether I had failed as a father,” he says. It’s not merely the loss of business or dignity—it’s the loss of trust, of routine, of identity.

There is also an exploration of how artificial intelligence could exacerbate these dynamics. The podcast runs a brief clip of what sounds like Srivastava thanking the host—except, as Abramowitz later reveals, the voice was generated by AI. The message is clear: with access to someone’s voice, it’s no longer difficult to produce “evidence.” This casts doubt on the still-unconfirmed voice recordings that both the FT and WSJ referenced in their reporting—allegedly of Srivastava calling himself a spy.

The segment demonstrates how easy it is to fabricate damning audio with modern tools, hinting that even key “evidence” used by reputable publications may not be what it seems.

A Broader Implication

As the episode closes, Srivastava is less concerned with vengeance than with redemption. “I want my life back,” he says, “but I want people… who have faced very similar issues to have faith.” He hopes that by telling his story, he might light a path for others. That’s the quiet thesis of Targeted: not simply to expose scandal, but to explore how systems of power, PR, and digital manipulation can collapse someone’s world—and what can be done to rebuild it.

In its framing, this second episode of Gaurav Srivastava’s Targeted interview is not merely a character piece, but an exploration of how digital reputations are forged and fractured. It’s a grim reminder that in a world driven by clicks, consensus can be constructed with enough coordination—and once built, it takes on a life of its own.

As the series prepares to pivot to its next guest—BBC journalist Amalia Zaatari and her investigation into the culture of snitching and manipulation in Russia—Targeted continues to promise a rare blend of intimate storytelling and systemic critique. Whether or not one believes every detail of Srivastava’s version of events, this episode makes a compelling case that reputational warfare is real, and the court of public opinion is far more dangerous than most imagine.
