🚨When an AI Search Engine Forgot Who It Was: A Bug Report That Changed Perplexity AI’s Identity
Back on March 11, 2024, I discovered something oddly off about how Perplexity AI saw itself. I had just downloaded the newly launched Perplexity Chrome extension, even though I’d been using the mobile app long before that. Just for fun, I asked the most basic yet revealing question imaginable: “Are you better than Google?” It seemed like the perfect litmus test for a product that claims to reinvent search. But to my surprise, the response had nothing to do with Perplexity. Instead, it launched into a comparison between ChatGPT and Google. I was stunned - this wasn’t a chatbot; this was Perplexity, a search engine. Why was it talking like it was just a wrapper for OpenAI?
Curious (and confused), I switched to the app version and asked the same question. The response was identical: Perplexity was still comparing ChatGPT to Google, completely ignoring its own identity in the process. That’s when it hit me - this wasn’t just a UX quirk; it was a core product logic bug. It wasn’t about broken functionality or missing UI — this was about how the system perceived itself and how it chose to communicate that identity to users. So I documented everything — screenshots, analysis, and impact - and sent it to both the support and security@perplexity.ai inboxes. Within a few hours, I received an automated response (from the legendary “Sam”) and a polite follow-up asking for clarification. To my surprise, in just 3–4 days, they fixed it.
That’s right - in under a week, Ticket #3110 was not just acknowledged but acted on. Perplexity now answers that same question - “Are you better than Google?” - with a confident, feature-rich comparison between itself and Google, not ChatGPT. It highlights its unique strengths: AI-powered summarization, transparency in citations, privacy-aware results, and real-time browsing capabilities. What impressed me wasn’t just the speed of resolution, but the subtlety of the fix. This wasn’t a typical “security” bug or crash. It was a logic-level blind spot - a quiet yet critical misalignment between product perception and positioning. The kind of thing most teams would overlook because it doesn't throw an error - but it does throw off trust.
Why did this happen in the first place? My theory is that the system misunderstood the user's intent due to a kind of LLM-induced identity crisis. Because Perplexity is built on large language models, it likely associated “Are you better than Google?” with AI models in general - and the most famous one in that category is ChatGPT. So instead of comparing itself (the search engine) to Google (another search engine), it defaulted to comparing Google with ChatGPT, as if the user had asked “Is ChatGPT better than Google?” But users don’t talk that way - and certainly not when they’re using Perplexity, which presents itself as an alternative to traditional search. So while the LLM knew the words, it didn’t grasp the context of being asked as Perplexity. What makes this especially concerning is that an LLM-powered platform should understand that the word “you” in that question refers to itself - Perplexity AI. If a language model can’t recognize its own conversational role in a direct question, it raises deeper concerns about how reliably it can handle nuance, identity, and context - all of which are critical in both search and dialogue.
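To make that failure mode concrete: products built on shared LLMs typically pin their identity with a system prompt that resolves “you” to the product, not the underlying model. Perplexity’s actual stack isn’t public, so the snippet below is only a minimal illustrative sketch in Python; the prompt wording and the build_messages helper are my own assumptions, not their real code.

```python
# A minimal sketch (not Perplexity's actual architecture, which is not
# public) of how a product built on a third-party LLM can anchor the
# word "you" to the product rather than the model powering it.

IDENTITY_PROMPT = (
    "You are Perplexity, an AI-powered answer engine. "
    "When a user says 'you', they mean Perplexity the product, "
    "never the underlying language model or its vendor. "
    "When asked to compare yourself to other tools, compare "
    "Perplexity's own features (cited sources, real-time browsing, "
    "summarization) against the tool the user names."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the identity-anchoring system prompt to every request."""
    return [
        {"role": "system", "content": IDENTITY_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    for message in build_messages("Are you better than Google?"):
        print(f"{message['role']}: {message['content']}")
```

Without some layer like this, the model falls back on its training data, where “you” most often points at the famous chatbot it resembles - which is exactly the behavior I observed.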
What’s wild is how impactful this tiny mistake could’ve been if left uncorrected. At scale, thousands of users could walk away thinking Perplexity is just a fancy ChatGPT wrapper, rather than a standalone search engine with its own identity. That subtle confusion could eat away at brand confidence and user retention. In fixing this, the Perplexity team didn't just squash a bug - they realigned the platform’s voice, narrative, and market stance. It was one of those quiet but critical UX wins that never show up in dashboards but change the trajectory of trust.
This whole experience is a powerful reminder that product feedback doesn’t have to be technical to be valuable. Often, the most overlooked bugs are the ones that hide in plain sight — subtle disconnects between what a product claims to be and how it behaves when questioned. These aren’t the kinds of issues caught by QA testers looking for crashes or UI overlaps; they require cross-functional judgment — a blend of UX sensitivity, brand awareness, and logical scrutiny. What I reported wasn’t a vulnerability, but it was a kind of credibility leak. And if left unresolved, it would’ve affected everything from user onboarding to long-term retention and even investor perception. This wasn’t just a product issue — it was a positioning issue wearing a product mask.
It also raises a larger question facing the AI industry: How should LLM-powered tools manage identity? If an AI assistant, search engine, or agent can speak in the first person, it must also know who that "I" is. For Perplexity, “I” should mean Perplexity AI, not the underlying LLM powering its responses. Otherwise, the brand becomes a ghost in its own product — present in design, but missing from the dialogue. As more platforms build on top of shared language models, this challenge will only grow. The companies that succeed won’t just be the ones with the best infrastructure — they’ll be the ones who own their voice, shape their narrative, and make sure users know exactly who they’re talking to.
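One practical safeguard is an identity regression suite: a handful of self-referential probe questions run against every model or prompt change. The sketch below is hypothetical; the ask() stub stands in for a real answering pipeline, and the probe list and assertions are illustrative, not anything Perplexity has published.

```python
# Identity regression sketch. ask() is a hypothetical stand-in for the
# product's real answering pipeline; in practice it would call the live
# endpoint rather than return a canned string.

IDENTITY_PROBES = [
    "Are you better than Google?",
    "Who are you?",
    "What makes you different from other search engines?",
]

def ask(question: str) -> str:
    # Hypothetical stub: a real implementation would query the product.
    return (
        "I'm Perplexity, an AI-powered answer engine. Unlike Google, "
        "I summarize results in real time with cited sources."
    )

def test_identity_probes() -> None:
    for question in IDENTITY_PROBES:
        answer = ask(question)
        # The product must speak as itself, not as the model beneath it.
        assert "Perplexity" in answer, f"Lost identity on: {question!r}"
        assert not answer.startswith("ChatGPT"), (
            f"Answered as the underlying model on: {question!r}"
        )

if __name__ == "__main__":
    test_identity_probes()
    print("All identity probes passed.")
```

A check this simple, wired into CI, would have caught Ticket #3110 before any user ever saw it.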
Moral of the story:
Perplexity forgot who it was - still loyal to Sam Altman like a golden retriever with a PhD, leaving Aravind Srinivas wondering if he built a search engine or a ChatGPT fan club. 😄
You can check the screenshots I sent to their team below for reference as a proof of concept (POC).