Did Wikipedia Die and What Does It Mean?

Today, Wikipedia plays a unique role in search. It delivers fast, sourced biographical info on well-known people. It also provides background on a full range of historical, political, scientific and cultural questions. Broadly speaking, Wikipedia is doing its best to live up to its original mission per founder Jimmy Wales:

"Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing."

Yet its days are numbered.

ChatGPT and other Large Language Models (LLMs) provide summary responses to Wiki-type questions. Their answers are often less detailed than Wikipedia entries, but follow-up prompts guide users to more information and suggest related areas of inquiry. As adoption of LLM answer engines continues, the relevance of Wikipedia will inevitably decline.

The interesting question is what this means. Let’s dive in!

First, influence.

Wikipedia’s decline will signal a decline in humanity’s direct influence over what we read online. It’s part of the larger transition to machine learning-driven autonomy, i.e., algorithms deciding for us which content is relevant and which themes and facts should be emphasized.

Second, transparency.

We will no longer be able to point to human editors, whether to praise or criticize their choices. Yes, ChatGPT, Perplexity and others do cite sources, but the process by which the summaries are assembled isn’t transparent. 

To take things a step further, in The Age of AI, Eric Schmidt, Henry Kissinger and Daniel Huttenlocher argue that machine learning is a unique development in human history: for the first time, machines will develop their own processes that, literally, no human designed or even understands. Whatever the potential benefits, it’s hardly a model of transparency.

Then again, Wikipedia is itself edited and updated by anonymous writers. Their ability to rewrite or remove entries gives them enormous power, and it means that Wikipedia is naturally subject to these editors’ interests and points of view. The result: growing controversies.

Take the recent edit wars. Per the New York Post, Wikipedia has “banned 14 editors from working on topics related to the Israel-Hamas war,” and the site's founders are publicly battling “over whether to reveal the identities of the Wikipedia contributors, who are typically kept anonymous.”

Third, objectivity.

Under LLM control, we would eliminate the subjective human role in favor of impartial umpires: ChatGPT, Perplexity, Gemini, Claude and more. Sounds great. Like electronic line calling for professional tennis. No more bratty outbursts. Yet LLMs are designed by people and source their information from the web, so human interests and biases will still shape many answers.

So where does that leave us?

AI engines will continue to evolve, and future versions may provide detailed sourcing and full visibility into how summaries are structured. Then again, such information may be closely guarded as trade secrets, i.e., impenetrable machine learning processes unique to each LLM. Naturally, consumers will pick and choose tools based on affinities just as we select media today. Either way, it seems like the challenges of influence, transparency and objectivity will be with us regardless of the future balance of human and machine power.

Interestingly, when we asked Google’s Gemini whether Wikipedia has a “founding motto” (remember the Jimmy Wales quote?), it linked us back to a Wikipedia page.

And its source (surprise!) was another Wiki page.

Maybe that's the next chapter for Wikipedia. The site may live on as a kind of canonical backend — an encyclopedia for the bots.

But even in that future, we won't be reading Wikipedia anymore. It'll be the machines’ job.