Joshua Olaiya, a LinkedIn user with over 124,700 followers, shared a screenshot of what appeared to be a one-on-one Zoom call with Sam Altman, OpenAI’s chief executive officer. He claimed the call covered the evolution of ChatGPT and the future of Artificial General Intelligence (AGI).
Presenting himself as someone on the brink of building a major tech podcast community in Africa, he framed the call as proof of his podcast’s growing influence and its mission to share Africa’s tech story.
First posted on Aug. 25, 2025, the post had gained 1,023 reactions, 60 comments, and 28 reposts as of Aug. 28, 2025.
Olaiya was not the only LinkedIn user to make such a claim. Sanchit Narula, a self-described lead software engineer at Nielsen, followed the same format and gained 437 reactions and 24 comments within a week.
Shasha X, a supposed founder of Forward Labs, said her call with Altman covered the future of teaching and how teachers must integrate AI to stay relevant. She posted the claim on Aug. 22, 2025, and received 74 reactions and 34 comments.
Ethan Ng’s post stood out for the number of claims he attributed to Altman. He said Altman wants AI integrated into every human activity, claimed Altman “won” by working 25 hours daily, and said Altman described Vogent as the best voice AI platform for high-quality, low-cost voice agents and text-to-speech (TTS).
DUBAWA observed that his profile listed him as “Growth” at Vogent, an AI platform for building humanlike, intelligent, and effective voice agents. On Aug. 30, 2025, he announced that he had left the company.
The post gained 909 reactions, 145 comments, and 10 reposts after he published it on Aug. 15, 2025.
The lies in their truth
DUBAWA observed that several users commented that the interviews never happened.
To verify the source of Altman’s image, DUBAWA ran a reverse image search on the keyframe using Google Lens. We discovered that Dark Futura, a Substack page covering futurism, transhumanism, and technology, had embedded the same video in a post on Jan. 31, 2025. The now-viral image was the video’s thumbnail in that post.

We observed that the 33-second video carried an “AI for Good Global Summit” watermark. Through a keyword search, we discovered that the clip came from the 2024 AI for Good Global Summit held in Switzerland. Altman was a keynote speaker, delivering his presentation virtually, while Nicholas Thompson, chief executive officer of The Atlantic, moderated the session.

Altman highlighted AI’s transformative impact on productivity across industries, emphasising how AI tools enable faster and more effective work. He discussed AI’s potential to increase global prosperity and enable new forms of democratic governance.
Altman also stressed the need for transparency and accountability as governance frameworks for AI development evolve. He called for a balanced approach that maximises AI’s benefits while ensuring long-term safety and societal alignment.
DUBAWA also observed that Ethan Ng used a different keyframe from the others. We conducted a reverse image search and found a video containing the same keyframe on Tsarathustra, an X account.
From the subtitles of the 45-second video, we observed that Altman was discussing the possible socioeconomic impact of future advances in artificial intelligence. When DUBAWA used a keyword search to identify the event, we traced the video to a YouTube live stream uploaded on May 7, 2024. The video was a live broadcast of a virtual panel session moderated by Michael O’Hanlon and Valerie Wirtschafter and hosted by the Brookings Institution’s Strobe Talbott Centre for Security, Strategy, and Technology.

The session, titled “Shifting Geopolitics in the Age of AI,” centred on how AI could reshape the global economy.
DUBAWA’s findings contradict the viral LinkedIn visuals claiming that Altman granted one-on-one Zoom calls.
A trend of false hype
On LinkedIn, a worrying trend of presenting a polished, embellished version of professional life has emerged. It often starts with inflated job titles: individuals adopt grand-sounding roles like “Growth Hacker” or “Chief Visionary Officer” to appear more senior than they genuinely are. This practice can be misleading, creating a smokescreen around their actual experience and responsibilities.
Beyond misleading titles, there is a common habit of exaggerating achievements. People frequently overstate their impact on projects or companies, taking credit for team efforts or fabricating stories of monumental success. These claims rarely come with verifiable data, making it difficult to separate genuine accomplishments from empty boasts.
This culture of “highlight reels” also gives rise to the unrealistic “overnight success” narrative: the grit, hard work, and inevitable failures are omitted, leaving a polished and seemingly effortless journey to the top.
This can create an unhealthy environment for others, particularly young professionals who might feel discouraged by the disparity between their struggles and the flawless profiles they see online. Ultimately, this trend of presenting a false persona makes it challenging to distinguish authentic expertise from clever self-promotion.
Sanusi Sanusi, a research analyst at the Digital Tech, AI and Information Disorder Analysis Centre (DAIDAC), said the “fake it till you make it” trend on LinkedIn exists in different formats. “People even adopt other people’s messages and rewrite stuff or try to exaggerate the facts regarding something that they’ve experienced or have happened to them to draw attention and engagement onto their profiles,” he said.
“What complicates it is the usage of believable deepfakes, which can sometimes deceive the best of us. Not all of us are playing detective online,” he said. He pointed out that the advanced techniques make it much harder for users to discern what is true from what is false.
Caleb Ijioma, the executive director at RoundCheck, said people who follow the misinformation propagators are the real victims, as such content creates a psychological rift in them.
“People who are desperately struggling to upscale their work and do not have verification knowledge because they trust the source are the main casualties,” he said.
DUBAWA observed that a few LinkedIn users subtly hinted that their posts were not genuine. Olaiya included #fiction among his hashtags, and Shasha waited over a week before posting again to inform her followers. Meanwhile, as of Sept. 1, 2025, she had yet to delete the original post.
In Narula’s case, he edited the post to reveal that his claim was meant as a joke and that Ethan’s post had inspired it. However, DUBAWA could not verify how long after the original post he made the edit.
Ijioma said such remedies would not sufficiently undo the claims’ spread and impact on followers. He argued that when information manipulators refuse to openly correct misinformation for fear of harming their reputation, any subtle action they take will have little effect.