It’s very simple. What if ChatGPT or another AI gets it wrong? Everyone remembers the lawyers who used ChatGPT to write a legal brief. The AI “hallucinated,” making up court cases. A federal judge didn't think it was funny.
Take another example. A Long Island University study found that 75% of ChatGPT’s drug-related responses were incomplete or wrong.
These are dramatic cases of Generative AI mistakes or distortions. But less newsworthy errors or omissions can also prove very damaging, especially to businesses, brands, and executives.
Imagine you’re a public company in a competitive space. You’re an innovator and a category leader. Yet when it comes to your core services, a ChatGPT summary comes back with a line like this real example:
“The website does not provide specific details about these services.”
It gets even worse. Here’s another real response (name withheld):
“Corporation did not provide a clear explanation for the decline in revenue and operating income in the earnings press release or the conference call…This lack of clarity on the reasons behind the financial underperformance may have contributed to the negative sentiment.”
Your company, of course, shared substantial information with equity analysts, stakeholders, and the press. It’s required by Regulation FD! Yet the large language models (LLMs) that power ChatGPT and Bing just aren’t reading it that way.
What do you do?
Call Sam Altman? Email the board at OpenAI? Or if it’s Bard, ring Sergey’s office?
Traditional search engine optimization (SEO) isn’t going to be enough either. SEO is all about links, and links were yesterday’s biggest opportunity and challenge. Chatbot summaries are what will matter more and more.
There is an answer.
You need to think like the machines. Why? Because the machines are now a crucial audience. It’s the machines that read your press releases, crawl your site, and analyze your investor conference calls. And guess what?
The machines didn’t get it!
Learn what to do about it in part two!