Almost a decade ago, when I was a partner at a digital marketing and content agency, I read an article about how researchers had fed books to an AI model to prove Kurt Vonnegut’s theory that all stories could be reduced to simple narratives. They graphed the results of the sentiment analysis (the “happiness” of words at different points of the plot) and gave evocative names to the universal stories coded in the rising and falling of sentiment: Man In a Hole, Cinderella. It made AI analysis seem simple in spite of the dizzying complexity of the computations.
I thought: my clients will love this.
I set out to create a startup that would quantify narrative and subject the words we used in PR and marketing to that same data-driven analysis. No longer would brand messaging be a modern version of painting in a cave – broad strokes with only the faintest relation to reality. I wanted to tell my clients that they could be a rags-to-riches story (another core narrative), and we had the data to make it happen.
Boy, was I wrong. Clients were looking for an a-ha moment but, too often, we handed them charts and spreadsheets that represented statistical probabilities. I could tell you Elon Musk’s score on a standard sentiment analysis scale of -1 to +1, but not whether activist investors would deny his pay package.
Clients wanted a Rembrandt; we offered them a Jackson Pollock. My frustration rising, I would explain that the machine doesn’t spit out simple insights.
I was looking for one kind of truth but I stumbled on another: AI seems like it should make our jobs easier, but it’s made them more complicated. Why is that?
AI Is a Probability Machine
I named the company TL;DR, the shorthand for Too Long; Didn’t Read. Our Natural Language Processing technology could ingest tens of thousands of news articles, read them and then deliver layers of analysis in chart and table form (our tech was pre-chatbot). The simplest output of our platform could show how a company that suffered a major crisis, like a plane crash, was a kind of Man In a Hole, with news coverage going sharply negative then gradually reverting to neutral.
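Our platform was proprietary, but the shape of that output is easy to sketch. Here is a minimal, hypothetical version in Python (it assumes NLTK’s off-the-shelf VADER scorer and made-up headlines, not anything we actually built): score each headline on the familiar -1 to +1 scale, average by week, and the Man In a Hole dip and recovery shows up even in a crude text chart.

```python
# Minimal sketch: a "Man In a Hole" arc from news-coverage sentiment.
# Assumes NLTK's VADER as a stand-in scorer; the headlines are invented.
from collections import defaultdict

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
scorer = SentimentIntensityAnalyzer()

# Hypothetical coverage: (week number, headline)
coverage = [
    (1, "Acme Air praised for record on-time performance"),
    (2, "Acme Air flight crashes on landing; investigation underway"),
    (2, "Families demand answers after Acme Air disaster"),
    (3, "Regulators fault Acme Air maintenance lapses"),
    (4, "Acme Air grounds fleet, pledges sweeping safety review"),
    (5, "Acme Air resumes flights after passing safety audit"),
    (6, "Acme Air bookings slowly recover as confidence returns"),
]

# Score each headline on VADER's -1 (negative) to +1 (positive) compound scale,
# then average by week to trace the narrative arc.
weekly = defaultdict(list)
for week, headline in coverage:
    weekly[week].append(scorer.polarity_scores(headline)["compound"])

for week in sorted(weekly):
    avg = sum(weekly[week]) / len(weekly[week])
    bar = "#" * int((avg + 1) * 20)  # crude text chart, shifted so -1 starts at zero
    print(f"week {week}: {avg:+.2f} {bar}")
```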
But most narratives resisted simple interpretation when we graphed them. And everything we did to tame them – limiting data sources, narrowing search terms, changing the time scale – created distortions. The conclusions would become less clear.
I’ve learned that some version of these struggles is endemic to real-world applications of AI. Whether you’re working with a platform like our Machine Learning algorithm or a Large Language Model, the project cycle usually goes like this:
- The Promise Phase: A company seeks to automate some resource-intensive function.
- The Rise-of-the-Edge-Cases Phase: As the project moves further from a simple use case that demos well, the edge cases multiply. The system requires special rules and exceptions to account for variables like brand guidelines, or to handle requests that the project do more.
- The Inadequate User Phase: The learning curve is steep and not everyone is motivated. Users are encouraged to become statisticians, or told to become prompt engineers, to make the platform work. It feels like a new way of telling people to “learn to code.”
- The FUBAR Phase: The data is incomplete or, worse, the machine has trouble separating the good data from the bad. Sometimes the models stop faithfully executing instructions, or gaslight users by claiming they are.
The reason it’s so difficult to wrestle AI to the ground, particularly a machine learning-based model, is that it isn’t a fancy calculator that spits out predictable answers; it’s a probability machine. You adjust variables in a model until it spits out answers that are probably right. But sometimes there’s too little training data and sometimes there’s too much.
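A toy example makes the point. The sketch below uses scikit-learn’s logistic regression on made-up headlines (a stand-in for any learned model, not ours): the machine doesn’t return a verdict, it returns a probability, and you decide where to draw the line.

```python
# Toy illustration of the "probability machine": a learned classifier
# returns probabilities, not verdicts. Hypothetical data, not our model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "record profits and strong growth",
    "award-winning customer service",
    "lawsuit filed over safety failures",
    "shares plunge after scandal",
]
train_labels = [1, 1, 0, 0]  # 1 = positive coverage, 0 = negative coverage

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = LogisticRegression().fit(X, train_labels)

new_headline = ["strong growth despite safety lawsuit"]
prob_positive = model.predict_proba(vectorizer.transform(new_headline))[0][1]

# The model's "answer" is a probability; whether that counts as good or bad
# news depends on where you set the threshold, and on the training data.
print(f"P(positive) = {prob_positive:.2f}")
print("call it positive" if prob_positive >= 0.5 else "call it negative")
```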
As one of my partners noted, often when you have a system that is already complex, adding more variables, like new data or rules, makes it chaotic and unpredictable.
It’s no surprise then that a recent study from Upwork, a digital freelancing platform, found that 47 percent of employees using AI say they don’t know how to “achieve productivity gains their employers expect.” More than two-thirds of those surveyed said AI decreased their productivity.
The Next Iteration of AI
Let’s talk about what AI should be, at least for marketing and communications. It should be a simplicity machine. It should perform tasks that are tedious, difficult, or impossible for humans and then generate something usable.
Training an LLM on a modest amount of data and asking it to write a press release in the house style is a good use of AI. Ingesting gigabytes or terabytes of unstructured data and asking it to deliver a client-ready output is not.
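To show what that first kind of use can look like, here is a minimal sketch that steers a model with a handful of past press releases via few-shot prompting, a lightweight stand-in for a proper fine-tune. The examples, the facts, and the generate() placeholder are all hypothetical; swap in whatever model API you actually use.

```python
# Minimal sketch of the "modest data, house style" use case: steer a model
# with a few past press releases (few-shot prompting as a lightweight
# alternative to fine-tuning). generate() is a placeholder for your model API.
HOUSE_EXAMPLES = [
    "FOR IMMEDIATE RELEASE\nAcme Corp Opens Austin Office\nAUSTIN, TX -- Acme Corp today announced ...",
    "FOR IMMEDIATE RELEASE\nAcme Corp Names New CFO\nNEW YORK, NY -- Acme Corp today announced ...",
]

def build_prompt(facts: str) -> str:
    """Assemble a few-shot prompt: house-style examples first, then the new facts."""
    examples = "\n\n---\n\n".join(HOUSE_EXAMPLES)
    return (
        "You write press releases in the house style shown in the examples.\n\n"
        f"{examples}\n\n---\n\n"
        "Write a press release, in the same style, from these facts:\n"
        f"{facts}\n"
    )

def generate(prompt: str) -> str:
    raise NotImplementedError("swap in your model API call here")

prompt = build_prompt("Acme Corp acquires Widget Labs; deal expected to close in Q3.")
# print(generate(prompt))
print(prompt)
```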
Real-world events are hard to map to universal narratives because they're open-ended – the stories don't end. Our role as marketing and comms professionals isn't to mold our clients' ups and downs into patterned, airtight story arcs. It's to manage complexity – to prevent projects from spinning into the "Rise-of-the-Edge-Cases" and "Inadequate User" phases.
I have hope that the AI narrative will rise again. But for now, we shouldn't fight these machines with the hope of turning them into something they're not. AI is just a probability machine. And if we embrace that, it can still bring insight and efficiency to marketing and comms.
Matthew Van Dusen is the co-founder of tech-enabled consultancy Literate AI and the AI startup Too Long; Didn’t Read.