A Timeline of Communication Patterns – What Comes Next (part 3)

Articles in the series:

A Timeline of Communication Patterns – Introduction (part 1)

A Timeline of Communication Patterns – Recurring Themes (part 2)

A Timeline of Communication Patterns – What Comes Next (part 3)

Our previous analysis reveals that societies selectively innovate based on their most deeply held values and most pressing constraints. AI development follows a similar pattern of selection. Just as Mesopotamian writing systems emerged for transactional clarity (tracking grain, livestock, and debts), AI prioritizes structured data over visual flair, parsing massive amounts of text with the same utilitarian focus because we shape our AI extensions to address what we value most: productivity, optimization, and data-driven decision-making.

But AI represents something unprecedented in this long history of human communication. Unlike previous communication technologies that remained tools in human hands, AI can act as an independent agent, making decisions without direct human oversight. While large language models (LLMs) operate in a request-response format with limited tools, AI agents are designed to work autonomously and indefinitely until they determine they have achieved a given goal or require human intervention. To this end, these AI agents can access a broader range of tools, including calendars, email, web browsers, and programming environments. AI agents are no longer simple communication tools but distinct communication entities.

Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

Anthropic – Building effective agents
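To make the tool-versus-agent distinction concrete, here is a minimal sketch of such an agent loop in Python. Everything in it (the `call_llm` stub, the `search` tool, the stopping rule) is a hypothetical stand-in for illustration, not a real model or API:

```python
def call_llm(goal: str, history: list[str]) -> dict:
    """Pretend LLM: returns a decision such as
    {"action": "search", "input": "..."} or {"action": "done"}."""
    if len(history) >= 3:                      # toy stopping rule
        return {"action": "done"}
    return {"action": "search", "input": goal}

def search(query: str) -> str:                 # stand-in for a web-browser tool
    return f"results for {query!r}"

TOOLS = {"search": search}

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while True:                                # persistence: act until done
        decision = call_llm(goal, history)
        if decision["action"] == "done":
            return history
        tool = TOOLS.get(decision["action"])
        if tool is None:                       # unknown action: ask a human
            history.append("escalated to a human")
            return history
        history.append(tool(decision["input"]))

print(run_agent("find tomorrow's meetings"))
```

The defining feature is the `while True` loop: unlike a single request-response exchange, the system keeps choosing tools and acting until it decides the goal is met or escalates to a human.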

The primary differences from other communication tools are independence and persistence of action, which makes this innovation quite unprecedented in the history of human communication. This fundamental shift from tool to agent will reshape all domains of human experience in the coming years and decades. But painting with too broad a brush would defeat the purpose of a blog article, so we will concentrate on just a few areas of human experience.

In the previous article, we applied a limited and thus imperfect framework of questions to help us navigate millennia of communication knowledge. We will continue to apply the same imperfect framework to communication predictions:

  • Innovation through Constraints: What Communication Problem Is AI Solving?
  • Accuracy versus Reach: What Communication Trade-offs Will Emerge?
  • Memory and Permanence: How Will We Store and Retrieve Information?
  • Power and Exclusion: Who Benefits? Who Gets Left Behind?

Innovation through Constraints: What Communication Problem Is AI Solving?

To understand recursion, one must first understand recursion.

(an unverified quote sometimes attributed to Stephen Hawking)

Our historical analysis reveals a consistent pattern: breakthrough communication technologies emerge where existing systems hit their breaking points. We have always been limited by the amount of information we can process, remember, and synthesize. We invented writing to extend memory, printing to distribute knowledge, and databases to store information.

AI agents are emerging as yet another solution to our cognitive limitations as they try to address the bottleneck of information overload. Rather than simply searching for information, these AI systems actively filter, translate, synthesize, and prioritize content streams. They might even attend virtual meetings on our behalf, where they interact with our AI persona. This new kind of AI-led conversation will potentially draw from the entirety of human cultural, scientific, and historical knowledge, not through search and retrieval but through synthetic understanding.

Synthetic understanding represents a fundamental departure from traditional information processing. Search engines perform retrieval: they locate and return pre-existing documents that match query terms. Traditional programming executes deterministic instructions by running predetermined code sequences that produce predictable output. In contrast, AI systems with synthetic understanding generate responses by combining patterns and statistical relationships between words, concepts, and ideas across millions of training texts. However, this synthetic understanding has an inherent limitation: hallucinations, making things up that sound believable but have no basis in reality. The model chooses words based on probability distributions learned during training, not because it “knows” the information is accurate. The consequences range from amusing to alarming, from lawyers citing fabricated legal cases to the Air Canada chatbot that invented a refund policy.
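To illustrate that word-by-word probabilistic choice, here is a toy sketch; the hard-coded probabilities stand in for the patterns a real model learns from its training data:

```python
import random

# Toy next-token model: the probabilities are invented here, but in a real
# LLM they come from patterns learned across millions of training texts.
NEXT_TOKEN_PROBS = {
    "the court": {"ruled": 0.6, "held": 0.3, "exploded": 0.1},
    "ruled": {"that": 0.9, "against": 0.1},
}

def next_token(context: str) -> str:
    probs = NEXT_TOKEN_PROBS.get(context, {"<end>": 1.0})
    tokens, weights = zip(*probs.items())
    # Picks by probability, not by checking whether the claim is true.
    return random.choices(tokens, weights=weights)[0]

print("the court", next_token("the court"))
```

Nothing in this loop checks whether “the court exploded” is true; a low-probability but confident-sounding continuation can still be sampled, which is the seed of a hallucination.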

When AI models learn patterns from training data, they absorb not just factual relationships but also cultural biases, historical prejudices, and systematic exclusions present in their source materials. So, if information curation was inherently biased before, what could happen when our biases are exponentially multiplied by AI agents? The innovation-through-constraints spiral continues: we attempt to solve one constraint (information overload), but we end up with new constraints around truth, bias, and the fundamental nature of knowledge itself.

Accuracy versus Reach: What Communication Trade-offs Will Emerge?

Because nothing has to be true for ever. Just for long enough, to tell you the truth.

Terry Pratchett – The Truth

Human-AI conversations require new communication skills, such as prompt engineering, which involves crafting prompts to help AI models respond more precisely to questions and perform tasks.

The more precisely we want AI agents to perform (accuracy), the more sophisticated our prompts must become (limiting reach). 

Simple zero-shot prompts are effective for basic tasks but fail for complex reasoning. Asking “Translate this text” might work, but “Translate this legal document while preserving technical terminology and cultural context” significantly improves accuracy, and few-shot prompts that add detailed examples improve it further.

Chain-of-thought prompting (asking AI to show its reasoning step-by-step) reduces hallucinations by forcing transparent, logical sequences. 

Role-based prompting assigns the model specific expertise roles (“Act as a financial advisor reviewing this investment portfolio”), which guides appropriate responses but requires domain knowledge to craft effective instructions.
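For the curious, here is how these techniques look side by side as plain prompt templates; `ask()` is a hypothetical placeholder for whatever model API one uses:

```python
# Four prompting styles from the paragraphs above, as plain templates.
def ask(prompt: str) -> str:                  # hypothetical model call
    return f"[model response to {len(prompt)} chars of prompt]"

zero_shot = "Translate this text into French: 'The hearing is adjourned.'"

few_shot = (
    "Translate legal English into French, preserving terminology.\n"
    "Example: 'force majeure clause' -> 'clause de force majeure'\n"
    "Example: 'the hearing is adjourned' -> \"l'audience est levée\"\n"
    "Now translate: 'The injunction was granted.'"
)

chain_of_thought = (
    "Translate: 'The injunction was granted.'\n"
    "First identify the legal terms, then explain your choices step by "
    "step, then give the final translation."
)

role_based = (
    "Act as a sworn legal translator reviewing this contract clause. "
    "Translate it and flag any ambiguous terminology."
)

for prompt in (zero_shot, few_shot, chain_of_thought, role_based):
    print(ask(prompt))
```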

We might think that a new digital divide is emerging between those who master prompt engineering and gain better AI interactions while others remain limited to basic, potentially unreliable outputs. Except that we can also use AI to fix this. Consider Google’s Vertex AI, which features a “Generate prompt” function, where users simply describe their objective, and the AI crafts the technical prompt. AI giveth, AI taketh.

Yet this meta-solution reveals other questions about the fundamental accuracy-reach tension. AI-generated content already achieves unprecedented reach, spreading across platforms instantly and reaching global audiences within hours. Nonetheless, this same speed undermines accuracy, as it enables the proliferation of deepfakes (AI-generated fake media), disinformation (information meant to deceive, e.g., trolls posting fake content), and misinformation (our friends and family unwittingly sharing that deceptive content).

We are witnessing “authentication fatigue,” where the cognitive load of constantly questioning media authenticity becomes overwhelming, pushing us to extremes: we either accept everything or reject everything. This tension could drive innovation in both directions: verification systems that can authenticate content in real time (with AI) and, of course, AI used to bypass detection models. This crisis of authentication feels new, but journalist Norman Cousins reminds us that:

History is a vast early warning system.

And our AI challenges, although modern, might not seem new to historians.

Every last human being ever born is a lying liar who lies. [emphasis in original] And even beyond that, humans are fallible, stupid, blinkered, and biased. The problem is that… history deals with humans. It’s created by humans, studied by humans, learned by humans, told by humans, for human purposes. People have lied out loud, they’ve lied in writing, and they’ve lied in stone carvings. (What, you thought the Behistun Inscription was 100% true? If so, I’ve got a bridge in Minecraft I’m willing to sell you.) That people should lie via photographs, AI, and AI-touched photographs is just another item on the checklist.

User DanKensington from How do historians deal with potential rewriting of history by people of the time?

(see The Historical Method – You Learn Something Old Every Day)

Another perspective on accuracy versus reach is that AI has removed traditional barriers to content creation, which once required technical skills, expensive equipment, or institutional backing. Anyone can produce videos, articles, or podcasts in English or almost any other language. And, as usual, this solution introduces a new constraint: how do we filter the noise? AI, of course, through news aggregators, social media feeds, and other algorithms. However, as we saw in part 1 of this series, we are drawn into filter bubbles and echo chambers. So, how do we build a personal, resilient information ecosystem?

We may be on the lookout for that spark of human perspective that AI cannot yet replicate. When we find it, we might fall into a rabbit hole, tracking the intellectual breadcrumbs left behind, just like historians tracing back from secondary sources to primary sources. What is in the bibliography of that author’s book? Do they have a website or a newsletter? We become our own curators of curators, building a network of voices that have already done some of the work of separating the noise.

Even if we build our information ecosystem, we could still end up with enough quality content to consume for a few lifetimes. And modern access to this information is immediate and frictionless. What if we introduce a bit of friction?  

Time is the ultimate sieve, as something that looks exciting, incredible, dramatic, or worrisome can become trivial in two weeks. There is an excellent thought experiment from NPR, the 50-year newspaper. Imagine that instead of trying to keep up with the endless content, there was a newspaper published every fifty years. What would the headlines in such a newspaper be? As the NPR journalists say,

Destruction tends to happen quickly. Progress often is slow. And this combination of sudden, bad things and slow, good things — it kind of messes up the way we see the world. The news is all about bad things — hurricanes, school shootings, fires, all the political fighting. And in the background, these good things happen kind of sort of invisibly.

In such a setting, time filtering distinguishes between clickbait and information that actually matters. We could adapt this strategy to our own media consumption: will what we read, watch, and do matter in a few days? Weeks? Years? But as we build our internal antennae, we will soon discover that we consume media for both leisure and knowledge, and not every piece of content needs to pass the fifty-year newspaper test. Entertainment, social connection, and immediate relevance all have their place. This idea of selecting what is worth our attention leads us to the next section.

Memory and Permanence: How Will We Store and Retrieve Information?

My memory, sir, is like a garbage heap.

Jorge Luis Borges – Funes the Memorious

Societies have always grappled with the same fundamental constraint: human memory is limited, selective, and mortal. Each communication revolution has promised to solve this constraint while inevitably creating new ones. Clay tablets preserved transactions beyond individual recall but required specialized scribes. Books democratized knowledge, but they also made societies dependent on written records. Digital systems enabled instant access to vast amounts of information, but they also created cognitive overload. AI agents bring the most dramatic memory revolution yet by independently using synthetic understanding. Specialized knowledge no longer retires, relocates, or dies with its human carriers.

And, as usual, win some, lose some.

In an era of generative AI and ubiquitous digital tools, human memory faces a paradox: the more we offload knowledge to external aids, the less we exercise and develop our own cognitive capacities.

Oakley, B., Johnston, M., Chen, K.-Z., Jung, E., & Sejnowski, T. (2025). “The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI.” In The Artificial Intelligence Revolution: Challenges and Opportunities (Springer Nature, forthcoming).

As Oakley et al. call it, we face “the Memory Paradox”: despite easy access to external information systems, strong internal knowledge becomes more valuable than ever. When we constantly rely on AI agents, we risk developing “biological pointers” (remembering where to find information rather than the information itself). This creates an illusion of knowledge, where we mistake our ability to access facts for genuine understanding. Without background knowledge, people cannot evaluate sources, recognize whether information is plausible, or formulate proper questions (what we call critical thinking), falling easy prey to bad actors.

The concern is not new.

You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom.

Socrates on the invention of writing, in Plato’s Phaedrus

or

The Druids believe that their religion forbids them to commit their teachings to writing, although for most other purposes, such as public and private accounts, the Gauls use the Greek alphabet. But I imagine that this rule was originally established for other reasons because they did not want their doctrine to become public property, and in order to prevent their pupils from relying on the written word and neglecting to train their memories; for it is usually found that when people have the help of texts, they are less diligent in learning by heart, and let their memories rust.

Julius Caesar – The Conquest of Gaul (Penguin Classics), translated by S.A. Handford

However, Socrates and the Druids valued memorization so highly because they were part of an oral culture. In a written culture, we value books more. In a social media culture, we value social feeds more. In an AI agent culture, we value interactions with AI agents more. But understanding when AI systems are hallucinating, recognizing biased outputs, or crafting productive prompts all demand substantial domain knowledge. So, educational systems will need to adapt. And Umberto Eco’s exercise from the early 2000s remains highly relevant today.

But I think there’s one effective way of exploiting the defects of the Internet for educational purposes. For a class exercise, homework, or university essay, give the following subject: “Find a series of unreliable arguments available on the Internet, and explain why they are unreliable.” Here is research that demands critical skill and an ability to compare different sources, and that enables students to practice the art of discrimination.

Umberto Eco – Chronicles of a Liquid Society

And Eco also gives us, presciently, this quote that can be applied verbatim to AI agents:

The problem with the Internet is that it gives you everything, reliable material and crazy material…. So the problem becomes, how do you discriminate? The function of memory is not only to preserve, but also to throw away. If you remembered everything from your entire life, you would be sick.

This quote comes from the lineage of Jorge Luis Borges’ haunting story “Funes the Memorious” (which can be read in full here). In Borges’s short story, Funes is a man cursed with perfect memory who becomes trapped in an eternal now where “the present was almost intolerable in its richness and sharpness.” Despite this vast accumulation of detail, Funes “was not very capable of thought” because “to think is to forget a difference, to generalize, to abstract,” precisely what his perfect memory prevents him from doing. Both Borges and Eco understood that memory’s mortality and unreliability are what enable our capacity for discrimination, essential for meaning-making, and perfect memory, by definition, lacks them.

Perhaps we will see an AI architecture of forgetting, where we develop sophisticated forgetting mechanisms that mirror the selective nature of human memory. Unlike current systems, which delete based on human users’ choices or time and storage constraints, future AI memory systems will need to prioritize which memories of human interactions to keep, as otherwise:

Without forgetting mechanisms, the AI becomes like an annoying friend who keeps bringing up unimportant topics from past conversations simply because they were mentioned frequently before. This rigid attachment to historical conversation patterns creates a frustrating user experience where the AI seems unable to “read the room” or understand that interests and priorities change.

Volodymyr Pavlyshyn – Forgetting in AI Agent Memory Systems
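A minimal sketch of what such a forgetting mechanism could look like; the scoring weights, decay constant, and threshold below are illustrative assumptions, not values from any published system:

```python
import math
import time

class Memory:
    def __init__(self, text: str, significance: float):
        self.text = text
        self.significance = significance      # emotional weight, 0.0 to 1.0
        self.access_count = 0
        self.last_access = time.time()

    def retention_score(self, now: float) -> float:
        days_idle = (now - self.last_access) / 86_400
        recency = math.exp(-days_idle / 30)   # fades over roughly a month
        frequency = math.log1p(self.access_count)
        # Hypothetical weights: significance matters most, then recency.
        return 0.5 * self.significance + 0.3 * recency + 0.2 * frequency

def forget(memories: list[Memory], threshold: float = 0.25) -> list[Memory]:
    """Keep only memories worth remembering, instead of keeping everything."""
    now = time.time()
    return [m for m in memories if m.retention_score(now) >= threshold]

mems = [Memory("user prefers short answers", significance=0.9),
        Memory("user once mentioned the weather", significance=0.05)]
for m in mems:
    m.last_access -= 90 * 86_400              # simulate three idle months
print([m.text for m in forget(mems)])         # -> ['user prefers short answers']
```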

Here lies the central irony of our communication evolution: we created external memory systems to overcome human limitations, and perhaps now we need to make them more human-like to remain functional.

And there is no other famous literary device that can explain that better than Marcel Proust’s famous madeleines passage from In Search of Lost Time. The madeleine episode begins when Proust’s narrator, feeling weary and depressed, accepts tea from his mother, “a thing I did not ordinarily take,” and dips a small cake into it.

No sooner had the warm liquid mixed with the crumbs touched my palate than a shudder ran through me and I stopped, intent upon the extraordinary thing that was happening to me. An exquisite pleasure had invaded my senses, something isolated, detached, with no suggestion of its origin.

This moment of realization is not an instant revelation but a laborious process of introspection.

I drink a second mouthful, in which I find nothing more than in the first, a third, which gives me rather less than the second. It is time to stop; the potion is losing its magic. It is plain that the object of my quest, the truth, lies not in the cup but in myself.

When the memory finally surfaces,

The taste was that of the little piece of madeleine which on Sunday mornings at Combray… my aunt Léonie used to give me,

it unleashes an entire world: “immediately the old grey house upon the street, where her room was, rose up like the scenery of a theatre.”

Therefore, it may be that meaningful memories require deliberate curation (applying memory retrieval rules based on emotional significance, frequency of access, and relevance to current goals), or that past experiences return to us through spontaneous associations, sometimes triggered by sensory cues. This means that, in order not to be overwhelmed by an abundance of memories, we can always count on our memory and permanence to be forever selective and unreliable.
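The spontaneous, cue-triggered side of recall can be sketched the same way; the tag-overlap scoring below is a deliberately crude assumption, a nod to the madeleine rather than any production design:

```python
# Toy associative retrieval: a sensory cue surfaces memories that share
# tags with it, weighted by emotional significance.
MEMORIES = [
    {"scene": "Sunday mornings at Combray",
     "tags": {"tea", "madeleine", "aunt"}, "significance": 0.9},
    {"scene": "a rainy commute last week",
     "tags": {"umbrella", "train"}, "significance": 0.2},
]

def recall(cue_tags: set[str]) -> list[str]:
    scored = []
    for memory in MEMORIES:
        overlap = len(cue_tags & memory["tags"])   # shared sensory tags
        if overlap:
            scored.append((overlap * memory["significance"], memory["scene"]))
    return [scene for _, scene in sorted(scored, reverse=True)]

print(recall({"tea", "madeleine"}))   # -> ['Sunday mornings at Combray']
```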

Which makes us, ironically, more similar to AI agents than we would have thought. For example, in Proust’s first draft, it was not a madeleine but a “tartine – a slice of bread spread with jam” that triggered the memory. His editor suggested the change to a madeleine (see this Penguin article for more details). So we speak of Proust’s madeleine effect, when it was initially a tartine, because madeleines are “more beautiful and memorable,” creating a collective false memory that has persisted for over a century.

AI agents exhibit their own forms of unreliability, but for fundamentally different reasons. By default, they are trained on vast, biased datasets that perpetuate and amplify existing prejudices in healthcare, criminal justice, and hiring algorithms, because they lack the contextual understanding to recognize when their training data reflects historical discrimination rather than objective truth. Another aspect is what Simon Willison identifies as AI’s fundamental gullibility. As he notes,

The single biggest flaw of AI is that it is gullible… they have absolutely no instincts for telling if something is true or not.

Because, at the end of the day, AI agents operate simply on statistics and data, and they are very, very effective at detecting statistical patterns (see hallucinations). For example, when we use AI for translations, it can produce grammatically correct text that sounds fluent to non-native speakers. However, native speakers might recognize that a little cultural nuance is missing: that je ne sais quoi, that spark and oomph. And if we try to validate that AI-generated, statistically correct translation on a public forum, we can be almost certain that we will open the door to a regional translation war.

After all, paraphrasing Tolstoy: “All reliable intelligence is alike; each unreliable intelligence is unreliable in its own way.” So then we get to the question: who controls the unreliability? Whose unreliability gets encoded into systems that will shape millions of interactions?

Power and Exclusion: Who Benefits? Who Gets Left Behind?

we,

Half dust, half deity

Lord Byron – Manfred on the Jungfrau

As mentioned in previous articles, communication technologies have consistently reinforced existing power structures while occasionally disrupting them. AI agents represent the most concentrated form of this paradox yet, as AI development remains in the hands of only a handful of tech companies, with sufficient resources to influence governments and with particular worldviews to spearhead wars.

The most significant aspect of power dynamics in the AI era is job security. This is not a new concern of mine; I wrote about it in 2021, before the rise of ChatGPT. Rereading that article, titled ‘AI’s Impact on the Future of Jobs,’ four years later, some of its ideas remain relevant. In particular,

Can AI automate teaching with all its intricacies? Possibly. But the realities of online teaching in the COVID-19 pandemic showed that we are not even remotely close to automating education.

I do believe that preschool, primary or secondary teaching (but not necessarily academic teaching) will be some of the last jobs to automate, if ever. Partly because these jobs are severely underpaid, and there is almost no financial incentive to automate them. And this brings us to the fact that education jobs might be more secure than white-collar jobs in the future.

There are only two categories of people: those who already feel AI’s impact and those who don’t know how or when AI would come to their doors.

With AI agents, AI is no longer just approaching our doors; the doors are wide open.

Those of us who are well fed, well garmented and well ordered, ought not to forget that necessity makes frequently the root of crime. It is well for us to recollect that even in our own law-abiding, not to say virtuous cases, the only barrier between us and anarchy is the last nine meals we’ve had. It may be taken as axiomatic that a starving man is never a good citizen.

Alfred Henry Lewis in 1896

Not all of us will have the luck, means, opportunities, or stamina to keep retraining for an AI-mediated economy. The same categories already disadvantaged by previous communication revolutions now face systematic exclusion from AI benefits, creating feedback loops where inequality compounds exponentially. And those nine meals between civilization and anarchy become fewer when entire economic sectors face displacement.

However, our attitude towards the AI age should avoid the extremes: being terrified that AI is coming to destroy us all, or being overconfident. We need a middle path, first of all, to understand the magnitude of the change we are facing. After all, it is not only technology driving change but also human choices shaping technology, and we could have a co-evolution where the technological and social parts influence each other. We may see a rise of embedded ethics, an approach to teaching ethics (what is morally right) in technical fields by integrating analysis of ethical and social implications directly into technical courses and projects.

Yet, finding this middle path requires learning from history without becoming paralyzed by it. Over two thousand years ago, the Greek historian Polybius stated that

the most instructive, indeed the only method of learning to bear with dignity the vicissitude of fortune, is to recall the catastrophes of others.

On the other hand, Mark Twain wisely cautions us,

We should be careful to get out of an experience only the wisdom that is in it and stop there; lest we be like the cat that sits on a hot stove-lid; he will never sit on a hot stove-lid again—and that is well; but also he will never sit on a cold one anymore.

Our challenge lies not in rejecting AI wholesale because of historical patterns of exclusion but in learning from the catastrophes of others without becoming so risk-averse that we forfeit the genuine benefits that thoughtful AI development could provide.

Resources:

A practical prompt engineering video guide, from beginner to advanced. This video also includes a valuable PDF resource, available here.

For a deeper exploration of some of my articles:

AI’s Impact on the Future of Jobs

Different reading strategies and information management approaches – Reading for Knowledge