A lesson on the risks of AI chatbots

Summary:

This article was written by me, with a final spelling and grammar check conducted by AI.

Two days ago, users of the popular AI coding tool Cursor found themselves logged out of one device whenever they opened the app on another. While some software products (e.g., Microsoft Office, Netflix) do enforce this kind of single-device restriction to encourage multiple subscriptions, it’s generally not a popular feature.

This unexpected behavior led many users to contact the company, unsure whether it was a bug or a change in policy. Given that Cursor had made a few unpopular decisions recently, emotions were already high—so when the support team responded by confirming that this was a deliberate policy change, many users were furious.

One problem: it wasn’t a policy change. It was a bug.

The support responses had been generated by AI, which completely made up (or “hallucinated”) the idea that this was an intentional decision. This led to backlash on Reddit and a wave of cancelled subscriptions.

A valuable lesson

The key logic behind AI chatbots isn’t traditional code—it’s human-readable instructions. But AI makes mistakes. And one of the most common? Not following the instructions it’s been given.

We’ll probably never know whether this particular failure was due to poor prompt design or the AI simply ignoring its instructions. Either way, it’s a valuable case study.
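The "human-readable instructions" in question are typically a system prompt: plain-language rules prepended to every conversation before the user's question. A minimal sketch of what that looks like (the prompt wording and the `build_messages` helper are illustrative, not Cursor's actual setup):

```python
# Illustrative system prompt for a support chatbot. Note that the model
# is merely *asked* to follow these rules -- nothing in the code can
# enforce them, which is exactly where hallucinations slip through.
SYSTEM_PROMPT = """\
You are a customer-support assistant for an example software product.
- Answer only from the policy excerpts provided below.
- If the answer is not in the excerpts, say you don't know and offer
  to escalate to a human agent. Never guess or invent policies.
- Always disclose that you are an AI assistant.
"""

def build_messages(policy_excerpts: str, user_question: str) -> list[dict]:
    """Assemble the message list a typical chat-completion API receives."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT + "\n" + policy_excerpts},
        {"role": "user", "content": user_question},
    ]
```

Unlike an if-statement, there is no guarantee the model obeys any of these lines. A well-designed prompt lowers the odds of a confident fabrication; it cannot eliminate them.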

A few takeaways:

  • Using chatbots for important customer interactions is risky.
  • If you don’t disclose that the support agent is an AI, most people will assume they’re talking to a human, and will take a confident response as a sign that the human knows what they’re talking about.
  • Clearly disclosing that the response is AI-generated helps mitigate this risk, as users can then decide whether to trust the information or seek human confirmation.
  • People are rapidly getting used to dealing with AI (many already prefer it over humans for certain tasks), so being open about AI usage is likely to build trust, not erode it.

Tech companies are leading the way in reaping the enormous productivity benefits of AI, but they’re also learning the hard lessons of what can happen when AI is relied on too heavily or isn’t properly configured. Learning from both their wins and their stumbles is a great way for others to be fast followers and to gain the benefits of AI without taking on all the risk of operating right at the bleeding edge.

Read more: