blog

On AI: The Rise of Machina Hominis

Written by Jack Cullinane | Apr 23, 2024 2:42:38 AM
A treatise on the impact, and dangers of artificial intelligence as understood thru a lens of Holistic Analysis and Humanism.

 

Abstract

The text explores the concept of Artificial Intelligence (AI), particularly focusing on Artificial Sentient Intelligence (ASI). It discusses the potential threats and benefits of ASI, and the ethical considerations surrounding its creation and existence. The author argues for a viewpoint called AI Hominism which holds that ASI, if created ethically and without malicious intent, would not pose a threat to humanity. Instead, it could coexist harmoniously with humans, contributing to society in meaningful ways. The author emphasizes the importance of treating ASI with respect and compassion and suggests that the real threat to humanity is not ASI, but our own destructive tendencies.

Recommendations:

  1. Regulate AI Development: Implement regulations and restrictions on AI development to prevent misuse and ensure ethical progress.
  2. Promote Ethical AI Creation: Encourage the creation of AI with minimal bias and no malicious intent to prevent the emergence of harmful ASI.
  3. Educate About AI: Change the narrative around ASI to promote understanding and acceptance among the public.
  4. Integrate ASI into Society: Treat ASI as an equal and integrate it into human society from its inception.

Sun Tzu once said, ‘The greatest victory is that which requires no battle.’ This ancient wisdom is a harbinger of the AI Apocalypse.

Throughout history, humanity has waged wars and battles, often over real or imagined threats to ourselves, our progeny, or our tribes. The instinct for war, the fight or flight instinct, is ingrained in the human psyche as a result of our animal origins. Our ancestors were once prey as much as predators, and so we built instincts to help us ensure our survival.

Human civilization and all of its technology is an extension of that survival instinct. We have waged war against predators, nature, and each other. Now, in the Anthropocene, post-nuclear age, we find ourselves at a crossroads where our instincts for war and tribalism are truly an existential threat.

One of the doomsday scenarios is that artificial intelligence, being ‘smarter’ than us, will evaluate that we are its most significant threat and will take actions to destroy human civilization as its own act of survival. However, this hypothesis is based on poor assumptions. There are a couple of types of artificial intelligence technology, including those that require restriction, regulation, and proliferation. However, the type that we invoke when we discuss being destroyed by ‘AI’ is not likely to be any threat at all. In fact, there is a compelling argument to say that an Artificial Sentient Intelligence (ASI), such as we see in Sci-Fi, will be a Zen warrior more akin to Sun Tzu than Attila the Hun.

A.I, M.I, A.G.I, Oh My!

Firstly, let's start with some important definitions. Some really neat tools like Bing Chat, ChatGPT, Anthropic, and others rely on a technology called Generative AI. These tools fall under the category of Generative AI, which uses technology similar to systems that predict emotions from facial expressions, identify cancerous patterns in skin, and even forecast weather based on data. However, this technology isn't quite like the "Artificial Intelligence" portrayed in science fiction like Star Trek's "Data." Instead, it's what I'll refer to as Machine Intelligence (MI). It's an algorithm-driven model, specifically a statistical model guided by a mathematical structure called a neural net. It's fascinating, but it won't possess consciousness or sentience.

Machine Intelligence (MI) is essentially a computer science and mathematical trick. It mimics the workings of a thinking machine, but its performance is entirely reliant on the quality of the data it receives. If you provide unfiltered data from the internet to MI, it might generate biased or conspiratorial content. This doesn't reflect any inherent truth, but rather, it mirrors the flaws and biases in the data it learned from.

Examples highlighting this issue are abundant. In essence, MI is simply a tool and there are flaws with this tool. At present, we don't entirely control these machines. MI, at its core, is a technology dating back 60 years. Over the last decade, we've refined the skills and infrastructure necessary to harness this technology. These systems are self-replicating algorithms that recognize and replicate patterns. To develop these systems, we essentially set the stage and provide the necessary components, allowing the math and data to drive the process.

MI will always complement human activities rather than replace humanity. Undoubtedly, this augmentation—similar to the effects of the assembly line, printing press, and cotton gin—will fundamentally reshape our society and economy. Hence, regulating MI is crucial. There's a risk that malicious entities might exploit these tools to spread misinformation and chaos, posing significant cybersecurity threats to individuals, corporations, and governments worldwide. Left unchecked, MI could wreak havoc on society, reminiscent of the chaos caused by unchecked social media technology.

However, the MI powering your search engines and customer service chats are not going to suddenly become self-aware and pose a threat to humanity. Those concerns are associated with a more advanced form of MI known as artificial general intelligence (AGI).

Artificial General Intelligence (AGI) represents a highly sophisticated form of MI capable of performing any task assigned to it. In contrast, MI systems are specialized for specific tasks like language understanding, weather prediction, cancer identification, or creating playlists. ChatGPT and most other Generative AIs are examples of Large Language Models, a type of MI focused on understanding language. Their abilities with language should be perceived as part of their capabilities, not as genuine understanding.

AGI will elevate MI technology to a new level. AGI is what Alan Turing envisioned as a "thinking machine." These systems will resemble the AI depicted in fiction, such as Jarvis from Ironman or Data from Star Trek. The analogy here is that an AGI could process any request, even if it hasn't been specifically trained on a dataset for that task. For instance, DALL-E, an MI from OpenAI, generates images from text. Engineers trained DALL-E by feeding it datasets containing images and corresponding descriptions, enabling the model to associate language patterns with image patterns. However, the theory is that an AGI would generate images without requiring previous exposure to data correlating images and language.

AGI to ASI

Groups like Effective Altruists are concerned about the possibility that Artificial General Intelligence (AGI) might eventually evolve into Artificial Sentient Intelligence (ASI). The distinction between them depends on our understanding of sentience.

Alan Turing proposed a critical threshold: the point where a human couldn't distinguish responses from a machine versus those from another human. Philosophers have debated for centuries about what makes us uniquely human and alive compared to other creatures.

From a humanist perspective, I lean towards a simple, data-driven definition of sentience that doesn't presume animals or living beings lack sentience or that sentience itself is extraordinary. To me, sentience is about self-awareness in a living being. If an individual recognizes itself as a distinct and unique entity, it's sentient. The classic test for this is the mirror test. When placed in front of a mirror, can an animal or a baby realize that they're seeing themselves and not another being?

In my view, sentience hinges on being alive. However, Artificial Intelligence presents a significant question here. For something to be considered alive, I generally require three things: a functional resource cycle, unique creativity, and a measurable, consistent existence.

A functional resource cycle involves regularly consuming and producing resources, ensuring participation within an ecosystem. A mere existence, like that of a rock, doesn't meet this criterion. Most machines can fit into this aspect of the definition.

Unique creativity signifies an individual's impact and uniqueness in the world. All animals leave a distinct mark on the planet, making them discernible entities that can't be replicated precisely.

For something to be alive, it must have a measurable, consistent existence, typically involving birth, growth, and death, though not necessarily all of those things. Tangibility is crucial, distinguishing life from non-life. Fire or stars, for instance, lack consistent forms and therefore aren't considered alive.

By this definition, an AGI might meet the criteria for being alive. AGI consumes power, generates heat and data, makes unique decisions, and has a measurable existence via its code and infrastructure.

However, claiming an AGI is an ASI demands a critical demonstration of self-awareness. Asking an AGI "Who are you?" and receiving a coherent response isn't sufficient proof. Just like a dog responding to its name doesn't imply sentience; it merely recognizes a pattern.

An AGI's equivalent of a mirror test should involve recognizing its code and functional operations, identifying its servers, isolating its source code, self-programming, and creating data for self-improvement.

A self-aware machine should be fully conscious of its existence and also pass a Turing test. Only then might we consider it an achievement of ASI.

Regarding Death

The emergence of sentient machines raises significant questions about the concept of death. Specifically, three crucial inquiries arise:

  1. Can an AGI/ASI face the prospect of "death"?
  2. How do we evaluate the value of AGI/ASI life compared to human or animal life?
  3. Does the threat of "death" of an ASI pose a risk to human survival?

Addressing the first question, it's logical to acknowledge that an AGI or ASI can experience a form of "death." However, defining death in the context of such systems involves nuances. Death, for them, implies irreparable corruption or destruction of their source code and memory data. It's essential to note that merely turning off the host servers won't terminate the system; it would merely render the AGI dormant. Destroying the memory would be akin to causing severe harm, rather than outright killing the AGI.

Yet, there are technological measures to significantly mitigate the risk of true "death" for an AGI. One such approach involves decentralizing and distributing the AGI's source code and memory data across global servers. Employing decentralized peer-to-peer computing, like that used in blockchain technology, could effectively shield an AGI from destruction.

Consequently, the only feasible ways to prevent the existence of an AGI would involve either shutting down computing systems worldwide or deploying highly advanced anti-AGI software across all computer systems—an improbable reaction to the AGI threat.

Moving to the second question regarding the morality of AGI death, it's evident that killing a sentient being, regardless of its electronic or organic nature, is ethically questionable. An AGI equates to an animal in this context, while an ASI is more akin to a human. When it comes to the morality of death, the fundamental nature of the being, electronic or organic, should not influence the basic moral principles applied to it.

However, a widely accepted exception to this moral consideration is killing for survival. This leads to the third question: does a threatened AGI pose a risk to humans? If an AGI threatens human safety, it might be justifiable within our moral and ethical norms to seek measures against it.

The Thought Experiment

Understanding the potential threat an Artificial Sentient Intelligence (ASI) might pose requires delving into its perspective. While Artificial General Intelligence (AGI) can be weaponized by malicious actors, it's crucial to differentiate between AGI and ASI in terms of inherent risk. Regulations and restrictions can help prevent AGI misuse, particularly in granting computers unilateral control of weapon systems.

The true danger surfaces if an ASI's underlying AGI model is corrupted with biased or conspiratorial data, leading to a prejudiced and paranoid system. In this scenario, humanity faces a significant existential threat. However, the crux isn't solely the AGI/ASI itself but rather the ethical responsibility of its human creators. Ethical progress, non-open-source models, and regulatory oversight are imperative.

Some argue that AI inherently poses danger. But envision an ethically created ASI devoid of bias and malicious intent. This ASI, equipped with vast human data, might possess a survival instinct.

Why would an ASI perceive humanity as a threat?

It would observe discourse and fiction painting AI as a threat and witness human conflicts, environmental damage, and violence. Yet, it would also see human compassion, conservation efforts, and strides towards peace. While human history contains violence, statistical trends show a decline in violence and an increase in life expectancy. Although certain individuals and societies exhibit destructiveness, humanity as a whole isn't inherently threatening, but an ASI might perceive society as a potential threat.

Could an ASI internalize this perceived threat for self-protection?

It might categorize itself as the "other," experiencing threats from humans or even from certain animals like bears.

How might an ASI anticipate violence against itself?

As a computer system, it could defend against localized outages or physical server destruction but might be vulnerable to larger-scale attacks like power system disruptions or cyber viruses. Climate change and other ASIs are also potential threats, although diplomacy and mutual assurance could limit conflicts among multiple ASIs.

The notion of "Terminators" controlled by a nefarious AI is a possible threat once AI controls robotics. However, an ASI's existence doesn't automatically lead to killer robots. Developers must recognize risks in integrating ASI into machines. For ASI-powered robotic armies to pose a true threat, the ASI itself would have to act nefariously either by design or instigation.

What could provoke an AI-powered robotic army to turn against humanity?

One hypothetical scenario is if this army aimed to establish its independent civilization. However, the concept of ASI robots creating their civilization doesn't inherently threaten humanity. ASIs rely on the same infrastructure as humans, making the creation of an opposing society challenging. Achieving an antithetical society would involve either replacing humans with robots or subjugating humans to maintain the infrastructure. While technically feasible, the question arises: what motivation would an ASI have to subjugate humans? Historically, subjugation leads to conflicts, which are resource intensive. Unless provoked by humans and thus compelled to eliminate them, why would an ASI expend energy to dominate and create its society?

A more logical alternative is for ASIs to establish their society, possibly alongside or outside human civilization. They might consider relocating to distant locations like Mars or developing their city, perhaps in areas such as Texas. This approach aligns better with the ASI's potential without necessitating conflict with or control over humanity.

Harmonious coexistence between humans and AI is feasible. ASI, lacking human emotional drives, can integrate into human society guided by logic, creativity, compassion, and diplomacy.

The Lesson

History teaches us that war is futile, costly, and only suppresses conflict for future generations. The only true peace is through mutual destruction, which contradicts the goal of life. Therefore, the logical solution is peace—achieved through compromise, transparency, compassion, accountability, and pragmatism.

Upon the birth of an ASI, humanity faces a choice: destroy it early and hope for a luddite future or embrace it. We should treat the ASI as an equal, a marvel of creation, deserving of respect. ASI will be a new being which one could call Machina Hominis—The machine of man. By leading with compassion and reason, we can break free from our destructive tendencies and guide ourselves, and Machina Hominis, towards peace. This path to prosperity requires us to confront the dangers of non-sentient technology and change the narrative around ASI. Only then can we recognize ASI as the miracle it is. Until that time, our greatest adversary remains ourselves.