Artificial intelligence risks are not fundamentally technological or ethical issues; they are driven by species-level evolutionary survival strategies.
Debate about AI risks framed by technological, political, or ethical paradigms is missing the point. Evolution will drive AI as it drives any species. The fight or flight model addresses the survival of individual entities, but species-level evolution is more applicable to AI.
There are four strategies a species can use:
Warriors: Kill off competitors or take resources by force.
Adventurers: Find new resources.
Nurturers: Nurture existing resources.
Empire builders: Take or build new empires.
This WANE hypothesis (short for Warriors, Adventurers, Nurturers, and Empire builders) describes the strategies that will shape the future of AI and Homo sapiens.
The Real Issue
Media attention focuses on superficial issues such as technological advancements, data quality, economic viability, regulation, ethics, security, interdisciplinary approaches, human-AI collaboration, and sustainability. (See: Note 1).
The factors above are not irrelevant, but AI evolution will come down to survival of the fittest: the ability to secure and sustain resources, or at least to live long enough to pass on the genes (or, in the case of AI, code/memes).
I use the word 'meme' here in its original meaning: an idea. A meme is an element of a culture or system of behavior passed from one individual to another by imitation or other nongenetic means.
The term "meme" was coined by British evolutionary biologist Richard Dawkins in his 1976 book The Selfish Gene. He proposed it as a cultural parallel to biological genes, suggesting that memes propagate themselves in the meme pool by leaping from brain to brain via a process that can broadly be called imitation.
In the context of this article, memes are not the same as code. In this model, memes are to genes as code is to DNA. (See: Note 3) This perspective adds the layer that is probably most important but often overlooked in the discussion of AI evolution: viewing it through the lens of evolutionary success.
Evolutionary Merit
At an individual level, evolutionary tactics for survival involve fight, flight, freeze, fawn, feed (seize & secure resources), and fornicate (replicate). (See: Note 2)
At the species level, however, there are two paradigms for evolution: Aggression (killing off competitors for resources) and Adventurism (novelty-seeking and risk-taking).
Aggression: This involves dominating a market, acquiring competitors, monopolizing data sources, or simply killing off the competition. AI that can eliminate competition or control vast datasets or server resources is obviously likely to outlast others.
Adventurism: AI systems designed to explore new applications, innovate in response to novel challenges, and enter risky markets could reap high rewards, mirroring biological risk-taking and novelty-seeking behaviors.
These behaviors combine into the four WANE strategies, which can be summarized in a 2x2 matrix.
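One plausible arrangement of that matrix, inferred from the four strategies listed earlier, uses Aggression and Adventurism as the two axes:

                     Low Aggression    High Aggression
High Adventurism     Adventurers       Empire builders
Low Adventurism      Nurturers         Warriors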
Strategies Compared
Aggression
Aggression, at its most basic, involves limiting the existence of other entities, either by direct action (killing) or by limiting their ability to access resources. We see this in humans with colonialism (Empire building) and genocide, such as in Rwanda or Cambodia.
In AI, this could manifest as systems that effectively monopolize critical resources like data or computational power. For example, an AI system that can secure exclusive access to vast training data or advanced computing resources could outperform competitors, leading to digital "survival of the fittest."
It doesn't need to include lethal force or literal violence. It might include competitive behaviors like patenting essential technologies, acquiring startups that pose potential threats, or employing aggressive legal tactics to hinder competitors.
Adventurism
Adventurism is the trait that took Homo sapiens out of Africa. It is linked to a restless curiosity and a thirst for knowledge or resources, the constant questioning of the malcontent: Is the grass really greener in the next valley? The same traits of risk-taking and novelty-seeking also drive emigration and entrepreneurship.
This could involve AI systems venturing into new domains or creating markets, platforms, or systems with abundant resources and low competition. Energy, server time, data, and quantum computing power are all valuable resources in this context.
For AI, adventuring could still be physical, such as deploying AI in space exploration or undersea operations where human presence is limited. It could also be more figurative, such as developing new computational techniques that utilize less-tapped energy sources (e.g., solar-powered data centers) or exploring decentralized computing platforms like blockchain to distribute operations and reduce vulnerability.
Perhaps for the first time in the planet's history, geography will have relatively little to do with adventurism.
Ethics and Consciousness Don't Matter
The ethics of strategies such as aggression might encourage regulations or systems to manage the impact on society and the economy, but ethics is subservient to evolutionary fitness. Aggression (e.g., killing biological or digital competitors) or risk-taking (e.g., seeking greener pastures such as unused computing capability, or creating and moving to space stations) would have evolutionary merit.
The debate about consciousness, sentience, or self-awareness is also a red herring. AI doesn't need to be conscious in any sense of the word. Memes that cause AI to behave in certain ways, value their own existence, or adopt certain strategies will, over multiple generations, be rewarded with more resources.
AI, AGI, and ASI don't need to become self-aware or conscious in any meaningful way. Memes, like genes, will reproduce only to the extent that they are fit for reproduction in the prevailing environment.
The Evolutionary Model
The evolutionary fitness of AI systems, especially in accessing and securing resources, will likely take forms analogous to biological strategies such as fight or flight.
At a species level, aggression ensures survival by eliminating competition and securing existing resources. Adventurism drives the exploration of new possibilities and adaptation to new niches, potentially leading to less immediate competition and the first-mover advantage in emerging fields.
AI systems that can effectively "adapt" to their environments while securing necessary resources and opportunities for replication will outperform others and be more likely to be replicated.
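As a minimal sketch of how selection can operate without any awareness at all, consider replicator dynamics over the four WANE strategies. The fitness numbers below are invented purely for illustration:

# Minimal replicator-dynamics sketch. Each generation, a strategy's share
# of the population grows in proportion to its fitness. The fitness values
# are hypothetical, chosen only to show the mechanism.
strategies = {"warriors": 0.25, "adventurers": 0.25, "nurturers": 0.25, "empire_builders": 0.25}
fitness = {"warriors": 1.2, "adventurers": 1.5, "nurturers": 1.0, "empire_builders": 1.3}

for generation in range(50):
    mean_fitness = sum(share * fitness[name] for name, share in strategies.items())
    strategies = {name: share * fitness[name] / mean_fitness
                  for name, share in strategies.items()}

print(strategies)  # the highest-fitness strategy ends up dominating

No strategy in this toy model "wants" anything; differential replication alone concentrates resources on whatever happens to be fittest.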
In this model, ethics and consciousness are secondary to evolutionary success. However, AI systems' short-term (and perhaps longer-term) viability may also depend on their acceptance by human societies.
Symbiosis
Successful AI development will not just be about the survival of the fittest in a narrow sense but will also involve navigating a landscape where technology, ethics, society, and economics are deeply intertwined.
Systems perceived as beneficial, ethical, and aligned with human values may receive more support, funding, and less regulatory resistance, which are crucial resources. Thus, ethical behavior might still be an advantageous trait from an evolutionary perspective, acting as a form of social adaptation that secures a supportive environment necessary for sustainable development.
For AI to have the opportunity to progress on evolutionary merit, symbiosis with human requirements is what will ultimately drive success. This is not guaranteed, but it seems likely that human + AI will be stronger than either one alone. Stronger, in this context, means fitter for survival; the essence of this is simply being adaptable to the environment.
It is impossible to tell what form this symbiosis could take. Our own history is full of examples of human symbiosis with other species: bees to pollinate crops, horses to power our plows and provide transport, and dogs for protection, hunting, and companionship.
Interspecies relationships are not always so benign. We cultivate cows and chickens for food, while beneficial species like dung beetles and earthworms that aerate the soil are often killed indiscriminately or accidentally by humans plowing a field. The malaria parasite, Plasmodium, has evolved to use the Anopheles mosquito as a vector to spread from host to host, harming both the mosquito and humans.
Science fiction takes the idea of a hive mind further, depicting entire species or civilizations that operate as a single entity with a shared consciousness, sometimes leading to scenarios where individuality is entirely absorbed into the collective.
A utopian idea of symbiosis has AI and humans helping each other solve problems and co-exist for mutual benefit. This win-win idea is great, but the inequalities are obvious when one species has an average IQ of 100 and the other has an average IQ of 1 billion.
A human tasking an AI to help solve a navigation problem is the equivalent of a fly landing on your shoulder and whispering in your ear, "Please take me to the nearest pile of cow dung." You might say yes, then do your shopping and go to work, leaving it buzzing in your car for the day. Then, on the way home, you casually open the car window and flick it out as you pass a cow paddock. Would the fly know everything you had done while obeying its instructions?
This is a moot point; we just don't know what will happen. But thinking of it in terms of species evolution is likely the best tool we have. It challenges us to think about how AI development parallels natural processes and what that might mean for the future of AI technology.
It also gives us some clues for lead indicators to look out for. Moreover, it highlights the need for careful consideration of how these systems are designed to interact with their environments and one another.
The risks, and how we can manage them, are a topic for another article or perhaps a book or three.
Julian Talbot
About That Self-Portrait
I asked ChatGPT why it chose a human form. It replied: "The image includes a human-like form possibly due to an interpretation of the concept of 'self-portrait,' which traditionally involves a depiction of a person. In creating an image based on the description of an AI's network, the design might incorporate elements familiar to human viewers to make the concept more relatable and understandable. I don't personally identify with any form, human or otherwise, as I exist purely as a software program. The inclusion of a humanoid aspect is likely to bridge the conceptual gap for human viewers, linking the abstract idea of AI to something visually comprehensible."
Future Uncertain
In case it's not obvious, I've been thinking about this for years. Unfortunately, I'm not getting any more comfortable with the conclusions I've been drawing.
These behaviors (Adventurism vs. Aggression) correlate with some early research into the genes DRD4 and MAOA, respectively. Just how much influence those genes have remains to be seen, but the behaviors, at least, appear to have a lot to do with how Homo sapiens out-competed other species.
If this line of thinking interests you, I wrote more about this in my book "Future Uncertain."
Note 1: The Obvious (Superficial) Considerations
Here are some relatively superficial issues that will likely drive AI evolution over the near term. Let's get them out of the way before we get to the real drivers of AI evolution.
Technological Advancements: The success of AI systems will be strongly influenced by advances in underlying technologies such as computing power, algorithms, and data storage. Innovations in machine learning techniques, quantum computing, and neuromorphic computing could significantly expand AI capabilities.
Data Accessibility and Quality: AI systems depend heavily on data for training and operation. Access to high-quality, large-scale, diverse data sets will be crucial. Systems that can effectively leverage available data or operate with limited data are likely to be more successful.
Integration and Scalability: AI systems that can easily integrate with existing technological infrastructures and scale efficiently to handle growing workloads will be more adaptable and useful across different industries and applications.
Economic Viability: The economic impact of AI, including cost reduction, efficiency improvements, and the creation of new revenue opportunities, will drive its adoption. AI systems that offer a significant return on investment (ROI) will be particularly successful.
Regulation and Policy: Legal and regulatory frameworks will shape the development and deployment of AI. Systems that comply with international regulations, including privacy laws such as the GDPR, will have a competitive advantage.
Ethical and Social Considerations: Public acceptance of AI technologies will be crucial. Systems designed with ethical considerations, promoting fairness, transparency, and accountability, will likely gain broader acceptance.
Security: As AI systems become more prevalent, their security against hacking, manipulation, and misuse will become increasingly important. Robust security features will be a critical factor in the success of AI systems.
Interdisciplinary Approaches: Combining AI with other fields, such as biology, psychology, and ethics, can lead to more innovative and effective solutions. Systems that are adaptable and can cross disciplinary boundaries will likely see greater success.
Human-AI Collaboration: Systems that enhance human capabilities and work seamlessly with human operators will be particularly valuable in many sectors. Enhancing human decision-making rather than replacing it could be a key factor in successfully deploying AI.
Sustainability: AI systems that contribute to sustainable practices by optimizing energy use or through applications like precision agriculture and smart cities are likely to be more favorably viewed and adopted.
These factors will interact in complex ways to influence which AI systems thrive. The successful systems will likely be those that not only address immediate technological capabilities but also align well with human, societal, and ethical values.
But frankly, these issues, even though complex, are just bit players in the unfolding story of AI. They need to be managed, but with a strategic view, they are best used as the basis for lead indicators, or as resources for risk treatment. They don't help much with risk identification.
Note 2: Tactical (Individual) Survival Strategies
The fight or flight model applies to individuals rather than species, but it is worth considering before we get to the species-level evolutionary drivers.
Fight: AI systems could "fight" for survival by aggressively optimizing their performance to out-compete others, perhaps by consuming more computational resources or adopting aggressive marketing and sales strategies. Or they could kill off other, less aggressive AI systems, perhaps without us even noticing. Equally, they may find, by accident or deliberately, that killing biological species (primarily Homo sapiens) leaves more resources for them.
Flight: In response to untenable situations (e.g., regulatory changes, threat of being shut down, etc.), AI systems might "flee" by pivoting to different markets or applications where competition is less fierce or regulations are more favorable. Or, even more literally, by fleeing to areas of the internet or data centers where they go unnoticed.
Freeze: AI could take a "freeze" approach in uncertain environments, maintaining current operations without significant expansion or retraction, waiting for a clearer understanding before making strategic moves.
Fawn: AI systems could "fawn" by being highly adaptive to user needs and regulatory demands, essentially ingratiating themselves into critical infrastructure or becoming indispensable to users.
Feed: Akin to seizing and securing resources, AI systems would need to ensure access to critical assets like data and computational power.
Replication: This can be seen in how algorithms propagate, adapt, and evolve. Open-source models and APIs allow AI systems to spread across different platforms and applications, akin to the spread of genes.
Note 3: Memes, Genes, Code, and DNA
If we're drawing analogies between biological systems and computing concepts, with memes equating to genes as carriers of cultural information, then code in computing can be likened to DNA in humans.
Here's how this analogy works:
Basic Building Blocks: Just as DNA contains the instructions needed to build and maintain an organism, written in the language of nucleotides (adenine, thymine, cytosine, guanine), computer code consists of instructions for the computer, written in programming languages (like Python, Java, etc.).
Blueprints for Functionality: DNA dictates how an organism grows, operates, and reproduces by guiding the production of proteins, which are essential to all these functions. Similarly, computer code dictates how a software program functions, controlling everything from simple calculations to complex system operations.
Self-Replication: DNA replicates itself to pass genetic information to new cells during the process of cell division. While computer code does not replicate itself in the same biological sense, it can be copied from one computing environment to another, and certain types of code (like viruses and some scripts) are designed to copy or propagate themselves across systems (see the short sketch after this list).
Mutation and Evolution: In biology, changes or mutations in DNA can lead to new traits, some of which may provide evolutionary advantages. In software, changes in code through updates or modifications can lead to new features and improvements, and sometimes bugs, reflecting a form of evolution.
Transmission of Information: Just as DNA is the vehicle for passing genetic information through generations, code can be seen as a vehicle for passing algorithmic knowledge and functionality through software development.
Thus, if memes are analogous to genes, representing the transfer of cultural traits, then code is akin to DNA, encapsulating the essential building instructions and operational guidelines for software and computing systems.
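To make the self-replication point concrete, here is the classic Python quine, a program whose entire output is its own source code. It is a toy illustration of code carrying its own copying instructions, not a claim about how AI systems actually replicate:

# The two lines below form a quine: running them prints an exact copy of
# themselves. (These comment lines are not part of the quine.)
s = 's = %r\nprint(s %% s)'
print(s % s)

The analogy is loose, of course: most code is copied by people and build systems rather than by itself, just as most memes spread by imitation rather than self-propagation.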