AI Applications Are Far from Perfect: Trust, but Verify

Breanna Faye

In the last few years, AI has made incredible strides in generating content, solving problems, and helping us automate complex processes. But here’s a truth we can’t ignore: AI applications, especially those powered by large language models (LLMs), are far from perfect right now. You cannot blindly trust the output they provide.

Imagine you’re working with a super eager, overzealous intern. They’re quick, always willing to help, but also prone to mistakes — sometimes big ones. AI, in its current state, is just like that intern. Helpful? Absolutely. But flawless? Definitely not.

The Reality: AI Can “Hallucinate” Information

One major problem that AI systems currently face is hallucination, a phenomenon where the AI produces completely fabricated or “made-up” information that it presents as fact. Unlike a human, who might hesitate when unsure, AI can confidently churn out an answer, even when it’s entirely wrong.

Here’s an example: Imagine you ask an AI to transcribe feedback from a user interview. Instead of providing a word-for-word transcription, the AI fills in the gaps with assumptions and made-up details, producing a report that looks polished but is completely unreliable. Unless you cross-check the output, you may end up with user feedback that never existed, or insights that are, frankly, fictional.

Photo and example by Erica Heinz

Let’s take it a step further. You’re designing a product based on user feedback. The AI-generated transcript says the user had issues with a particular feature, but in reality, the user loved it. Now you’ve made decisions based on inaccurate data — decisions that could derail your project.

AI’s Overzealous Intern Moment

We need to treat AI the way we would an overexcited intern who’s trying to impress on their first day. LLMs and AI assistants will do the work quickly, but if you don’t check it, they will make mistakes. And those mistakes could cost you time, energy, money, and opportunity. You don’t want to be the person pointing the finger at ChatGPT (which was doing all of the work instead of you) when something goes wrong.

Here are a few ways AI gets it wrong:

  1. Inventing Citations: Ask an AI to provide sources for a claim, and it might create completely fake citations. That’s not just a simple mistake — that’s creating an academic mirage. Sometimes the links are broken; often they never existed to begin with.
  2. Misinterpreting Data: AI can misunderstand the context or fail to grasp the nuance of your data inputs. Just because it can spit out answers in seconds doesn’t mean those answers make sense.
  3. Fictionalizing Details: If tasked with creating summaries or reports, LLMs sometimes embellish information they don’t have — improvising rather than sticking to the facts.
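One practical way to catch the third failure mode is to check AI-generated quotes against the original source. Here’s a minimal sketch (the function name and example text are my own, not from any specific tool): it pulls every quoted phrase out of an AI summary and flags any that don’t appear verbatim in the source transcript.

```python
import re

def find_unverified_quotes(summary: str, transcript: str) -> list[str]:
    """Return quoted phrases from the AI summary that do not appear
    verbatim (case-insensitive) in the source transcript."""
    quotes = re.findall(r'"([^"]+)"', summary)
    source = transcript.lower()
    return [q for q in quotes if q.lower() not in source]

# Hypothetical example: one real quote, one the "intern" made up.
transcript = "I really loved the new dashboard. Export was a bit slow."
summary = ('The user said "export was a bit slow" '
           'and "the search feature is confusing".')

print(find_unverified_quotes(summary, transcript))
# → ['the search feature is confusing']
```

A verbatim check like this won’t catch paraphrased inventions, but it’s a cheap first pass before you trust a polished-looking report.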

Truth Checking Will Be Native, But Not Yet

In a few years, AI systems will likely have built-in truth-checking capabilities, constantly cross-referencing their outputs with real-world data. But until then, the burden is on us to verify. Just as you wouldn’t trust an intern to make high-stakes decisions without oversight, you shouldn’t trust AI to operate without careful review.

Designing with AI

AI can be an incredible tool for designers, developers, and researchers. It can save hours of manual work and offer creative solutions, but it’s not a replacement for human judgment. If you’re using AI in your workflows — whether it’s generating user personas, summarizing feedback, or even helping with creative copywriting — always check its work. This “intern” can help you move faster, but it will also make its fair share of mistakes along the way.

In the world of design, where details matter and accuracy is critical, trusting AI without verification is a risk you can’t afford to take. Until AI matures into the polished professional it has the potential to be, treat its output as a helpful draft, not the final word.

#designingwithAI #AI #artificialintelligence #emergingtechnology
