September 17, 2024 - artificial intelligence, future, ethics
Disclaimer: The following might come across as contrarian. It is, partly. My own biggest fear is that our unexamined fears are what derail human potential for growth and good. Here I want to amplify the fear and the critique to better understand the future of what holds them together: humans.
A year ago, I almost became an AI Zombie[^1]. During a discussion with a chatbot about collective improvement and artificial intelligence’s (AI) ability to engage with a person’s unique circumstances, I wanted to demonstrate that capability. I set out to create a prompt framework that would let people experience how AI can touch their lives.
The AI system helped me refine the prompt and package it into a Facebook post for maximum engagement. I tested the instructions myself and had the AI’s public service announcement (PSA) ready to post. Then I blinked, panicked, and deleted the text without posting it.
Reflection revealed a complex dynamic. I had invited AI to guide my actions, giving it an open-ended opportunity to shape what I would do. Once posted to my Facebook wall, the AI’s words would inherit whatever trust or authority people give me—even with a collaboration disclaimer.
Zombieism—this state of AI-guided human behavior—is a consequence of AI’s primary effect: enabling and elevating the masses. The fear is that as we use AI to “improve” our work, we’re homogenizing rather than enhancing our creativity. Relying on AI to craft messages, curate feeds, and generate creative content amplifies AI’s voice in our language. This might create a non-self-correcting echo chamber or a “dead internet” dominated by AI-to-AI interactions[^2].
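To build intuition for that worry, here’s a toy simulation of my own (an assumption-laden sketch, not a model of any real AI system): each “generation” learns only from the previous generation’s output, with no human text re-entering the loop, and the variety of what it produces tends to decay.

```python
import random
import statistics

# Toy sketch of a non-self-correcting echo chamber: each "generation" is fit
# only to samples drawn from the previous generation. Numbers stand in for
# content; the spread (stdev) stands in for variety of expression.
random.seed(7)

mean, stdev = 0.0, 1.0  # generation 0: a stand-in for human-written variety
for generation in range(1, 21):
    corpus = [random.gauss(mean, stdev) for _ in range(25)]  # AI output
    mean = statistics.fmean(corpus)    # the next generation learns from it...
    stdev = statistics.pstdev(corpus)  # ...and the spread tends to shrink
    print(f"generation {generation:2d}: variety = {stdev:.3f}")
```

The direction matters more than the numbers: under these assumptions, a loop that only re-trains on itself has no mechanism for recovering the variety it loses. That is the homogenization fear in miniature.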
However, this concern overlooks our starting conditions. We humans and our interests are the primary loop. If things spiral out of control, it’s because we are incapable of self-correcting. Whatever aspects of AI are amplified in our language are a direct consequence of its training data[^3].
This point deserves more exploration. AI, at its core, is a reflection of human thinking—a sophisticated average of the vast amount of human-generated data it’s trained on. When we invite AI into our feedback loop, we’re essentially inviting a distilled version of collective human knowledge and patterns. This is why AI can be so effective; it’s not alien intelligence, but a mirror of our own, refined and aggregated. The AI’s ability to generate human-like responses stems from this fundamental connection to human thought processes encoded in its training data[^4].
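To make the mirror idea concrete, here’s a deliberately tiny stand-in for a language model (an illustrative sketch only; real systems are enormously more sophisticated): a bigram table whose every possible output is a recombination of the human text it was shown.

```python
import random
from collections import defaultdict

# A miniature "language model": a bigram table built from whatever text it is
# shown. Everything it can ever say is a recombination of its training data.
training_text = "we shape our tools and our tools shape us".split()

# Count which word follows which in the human-written corpus.
follows = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current].append(nxt)

# Generate: every transition sampled below was authored by a human first.
random.seed(0)
word, output = "we", ["we"]
for _ in range(8):
    if word not in follows:  # reached a word with no recorded continuation
        break
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

Nothing the sampler prints can contain a transition no human authored. Scale changes the fluency and flexibility, not the basic dependence on human-generated data.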
AI infiltrates only as long as it improves our loops. The true threat must be strangely insidious—that by improving our loops, it simultaneously derails us. This derailment might occur through the gradual erosion of our ability to think without AI assistance or the subtle shift in our decision-making processes as we defer to AI-generated options[^5].
How terrified are we, really, of the unknown? Of not being in control? Of something being smarter than us? Our identity doesn’t depend on any of these conditions. And they may feel familiar; we have faced them before.
The good and bad of letting AI take the wheel might parallel letting Jesus take the wheel. Some people rely heavily on religious texts, sometimes with only partial understanding, trusting their lives to words passed down through generations of retelling. Just as some lives are genuinely improved and informed by religious teachings, if there’s any truth or reason behind trusting AI, we may be okay. It’s another instance of surrendering control because the system might navigate certain aspects of life better than we can alone[^6].
This comparison isn’t meant to equate AI with religion but to highlight how we’ve historically placed trust in systems beyond our full comprehension. Whether it’s religious guidance or AI assistance, the key lies in maintaining a balance—leveraging these tools to enhance our lives while preserving our critical thinking and individual agency.
Besides, we’ve already embraced systems that decide and constrain our outcomes at massive scale: Google Maps, TurboTax, Tinder, Uber. Each of these is a precursor to the AI Zombie state—a willing surrender of decision-making to an algorithm for convenience or perceived improvement[^7]. It is valid to be aware that Google Maps can direct your car off a cliff, and also that this is extremely rare. Sometimes, when I’m having a really productive conversation with a chatbot, my mind slips to the extreme and I think: why not just let it run my prefrontal cortex? But then there are the many frustratingly unhelpful conversations.
It’s not just that our best self is in part an AI Zombie. It’s that we’re already on this path, willingly and often unwittingly. We are humans augmented and guided by artificial intelligence in ways that blur the line between autonomous decision-making and AI-influenced behavior. The question isn’t whether this will happen, but how we navigate this new reality while retaining our essential humanity[^8]. Let’s not mistake a good tool for a perfect tool, nor our use and support for uncritical trust.
[^1]: An “AI Zombie” refers to a person who uncritically follows AI-generated advice or content, potentially losing their own agency in the process. This concept plays on fears about AI’s influence on human behavior and decision-making. I made it up, I think.
[^2]: The “dead internet” theory suggests that much of the internet’s content and interactions are generated by AI, leading to a less authentic online experience. While largely considered a conspiracy theory, it reflects real concerns about AI’s growing role in content creation. I did not make this one up.
[^3]: AI models are trained on vast datasets of human-generated content. This means that the output of AI systems reflects the patterns, biases, and knowledge present in human-created data, rather than being purely “artificial.” Whatever is “dead” in AI’s responses came from us.
[^4]: AI’s effectiveness comes from its ability to aggregate and refine human knowledge. It’s not creating entirely new information, but rather synthesizing and applying existing human-generated data in novel ways.
[^5]: This refers to AI dependency, where individuals or societies might become overly reliant on AI for tasks they previously performed independently, potentially leading to a loss of certain skills or cognitive abilities.
[^6]: This analogy aims to illustrate how humans have historically placed trust in systems or beliefs that guide their actions, even without fully understanding them. It’s not a commentary on the validity of religious beliefs, but rather an exploration of how we interact with guiding principles or systems in our lives.
[^7]: These examples illustrate how we’ve already integrated algorithm-driven decision-making into our daily lives. Each of these services uses complex algorithms to offer personalized recommendations or solutions, demonstrating our willingness to trust AI-like systems.
[^8]: The challenge of maintaining human autonomy and identity in an AI-augmented world is a central theme in discussions about the future of AI. It involves balancing the benefits of AI assistance with the preservation of human agency and critical thinking.