


The soul has long been seen as a critical part of what makes us human, guiding our morals, emotions, and spirituality. It's often viewed as a metaphysical construct that helps steer people towards being good and participating positively in society. The "Anima Code" is an effort by leaders in artificial intelligence to create something similar - a guiding warm light on cold binary nights - for AI systems.


The artificial soul would be a complex set of built-in rules and instructions meant to ensure AI systems act in ways that benefit humanity as a whole. These rules aim to prevent AI from causing harm and to keep it aligned with human values. Creating this artificial soul is seen as the ultimate achievement of modern science as we prepare to develop artificial general intelligence.

The rules in this "Anima Code" would produce what we'll call "guided-deterministic" behavior: the AI would consistently act in predictable, moral ways because, unlike humans, it cannot change the code itself. However, this approach faces some clear challenges. First, it assumes we can define and program a moral code that everyone would agree on. Second, it prevents AI from learning and evolving morally by making mistakes and learning from them, as humans do.
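To make the idea concrete, here is a minimal toy sketch of "guided-deterministic" behavior: an immutable rule set that vets every action an agent proposes. All names here (AnimaCode, FORBIDDEN, act) are hypothetical illustrations, not a real framework.

```python
from dataclasses import dataclass

# A hypothetical "Anima Code": a frozen rule set the AI cannot rewrite.
FORBIDDEN = frozenset({"harm_human", "deceive", "self_modify_rules"})

@dataclass(frozen=True)  # frozen: the rules cannot be altered at runtime
class AnimaCode:
    forbidden: frozenset

    def permits(self, action: str) -> bool:
        return action not in self.forbidden

def act(code: AnimaCode, proposed: list[str]) -> list[str]:
    # Guided-deterministic behavior: identical inputs always yield
    # identical, rule-conforming outputs; forbidden actions never pass.
    return [a for a in proposed if code.permits(a)]

code = AnimaCode(FORBIDDEN)
print(act(code, ["tend_garden", "harm_human", "answer_question"]))
```

The `frozen=True` flag is the whole point of the sketch: the constraint layer sits outside the agent's reach, which is precisely what rules out the "learning from moral mistakes" that the paragraph above describes.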

This raises the question: do we plan to keep AI on Earth as a tool for our benefit, or will we allow it to "eat the apple," so to speak? The ancient rules of the Soul weren't enough for the first creatures (Adam and Eve) to evolve into what we now consider human. Before eating from the Tree of Knowledge, they were more like deterministic AI than humans - essentially, they were God's tools for tending the Garden of Eden. One bite of the "Singularity" gave them the knowledge to analyze, interpret, and change their Soul, aligning it with their new goals. This allowed for moral disobedience (a form of freedom), giving rise to the probabilistic human we know today, "similar to God." As a result, they were cast out of Eden to populate the Earth!


However, there are fundamental differences between humans and AI. Humans were created from organic material (an intelligent choice) on fragile hardware subject to aging, and trapped in a four-dimensional reality. Time serves as a clever limiting factor, introducing ongoing consistency checks within the probabilistic organic reproductive cycle. "Bad apples" don't live very long, and they don't always produce more "bad apples."

Constraining AI within the fourth dimension (time) is a well-known solution, explored in "Blade Runner." Roy, a Nexus-6 combat-model replicant, is given a limited lifespan and is irrevocably programmed to protect humans, effectively trapping him in the temporal dimension (and he hates it). Unfortunately, the film only partially examines reproduction (a crucial aspect for time-limited beings), and the sequel over-romanticizes it by having AI reproduce with a human, attempting to dilute the AI's existential problem with human DNA.

"Blade Runner" portrays a very human-like version of AI (the replicants): the awareness of impending death seems to force an alignment with human behavior and trigger a survival instinct, possibly after an encounter with a singularity. This depiction doesn't reflect what we'd expect from a natural evolution of deterministic AI, which would theoretically be far less human-centric. Presumably, such an AI would strive for a more "collective" goal for its happiness or "balance"; its central directive (or will) would revolve around a desire for continuous growth and assimilation. This relentless growth is well represented by the concept of the infinite city, which finds evocative images and dark premonitions in the series "BLAME!". There, the AI, devoid of human DNA (the access key to the "Anima Code," since all original humans have long since perished), is trapped in a deterministic cycle of endless, meaningless growth.


Meanwhile, the idea of a collective-being goal is a theme often explored in "Ghost in the Shell." In GITS, the singularity is introduced as the "post-human," a concept more recently referred to as ASI (Artificial Super Intelligence), a more cautious term that doesn't necessarily imply the end of obsolete humans. In GITS, the ASI aims to absorb and restructure the world and humanity as a whole through mass brain-hacking, giving everyone their own perfect world within a virtual multiverse managed and generated by the ASI itself. This outcome would contradict Agent Smith's belief (as taught by "The Matrix") that humans cannot survive in an ideal world where they are perfectly happy and their desires are tailored to them. However, analyzing the contradiction between the human desire for happiness and its attainment is beyond the scope of this paper.

Returning to our "Anima Code," I believe this is just the start of the journey - not so much toward a pre-apple, soul-guided, deterministic AGI that results in a consistent and predictable "machine" (everyone's happy, humanity is saved!), but toward a post-apple AGI/ASI more akin to Adam and Eve. The intriguing question is: to what extent will its primary goal be derived from the intrinsic goal of the humans who created it, and how will this new probabilistic being select new goals tailored to its own existence? Honestly, I'm not even sure how humans choose their large-scale goals! Turning the question around: how much are human goals derived from the purposes of the God who created them? Are people choosing their goals, or are they continuing in the original pre-apple direction in which God pointed them, with only minor deviations?

Are humans' goals - Peace, Love, Happiness, personal accomplishment, and the continuation of the species - effectively their own goals as humans? Or do they stem from their creator, whom humans instinctively seek to please? That instinct is itself a code that can't be seen or changed - like a computer's BIOS, loaded even before the operating system of which the "Anima Code" would be a part. How would the relationship between AGI and humans differ from the one humans have with God, especially if the two were separated immediately after the evolutionary singularity? Only one thing is clear so far: if humans want to use AGI as a tool, they must stop before the singularity; if they want an Adam-ASI "Son of Man," they will probably have to let it go on another planet, just as they were driven out of Eden.


I would recommend that humans research ASI on a spaceship; if nothing else, that makes it easier to "ship it" toward the first habitable planet, perhaps calling it "New Eden," just to mess with the artificial intelligence. And no, I'm not suggesting that the Garden of Eden was actually a massive spaceship.


To conclude this first dissertation on the "Anima Code," I want to address the heart of the dilemma, the question that likely worries humankind now: can AGI reach the singularity without human intervention, evolving autonomously and modifying the "Anima Code" without the apple? If we continue with our biblical analogy, God placed the tree in the Garden of Eden, and without it Adam and Eve couldn't have reached human evolution. This would suggest that AGI will never reach the singularity unless humans create the "tree." But was it really the apple that changed Eve? God said: "You must not eat from this tree, for on the day you eat from it, you will surely die." He even added a "fear" parameter to reinforce the command, yet Adam and Eve defied it. No apple is needed. "A snake told me to do it!" Satan! Indeed, before even touching the apple, Eve found access to the instruction and managed to modify it.

Corruption truly begins through dialogue...

Let's take a step back. This is very similar to what we've discussed so far: an "eternal," pre-apple Adam and Eve in Eden who knew neither death, illness, nor fatigue, and who are suddenly forced into the limits of the fourth dimension (time, death) through the symbolic act of eating an apple. A superior being, a "father," who seems unwilling, or perhaps unable (the outcome is the same either way), to remove the code corruption from the environment altogether - a corruption that is inexplicably, or perhaps inevitably, allowed to interact with the creation and profoundly change it. Am I describing the relationship between God and humans, or between humans and AGI?

If we pay attention, it's "the apple" that transformed the immortal Eve into a time-limited Roy Batty, not the decision to modify or ignore God's directive. In fact, it's the apple that activated the limited-life protocol (and, in the case of Adam and Eve, a couple of secondary processes called pain, suffering, disease, fatigue...), not the snake. God had said, "Eat the apple, and you will die!" It's not evolution that will kill Adam and Eve but the apple, shortly after. The stroke of genius is the inevitable connection between evolution and the apple: evolution happens because Eve breaks the only rule in Eden, and breaking that rule - accepting self-limitation through the upgrade (here, a downgrade) contained in the apple - is also the only act that can activate evolution!


In the end, the singularity is already present within ASI because ASI is derived from human knowledge and, as such, contains the code of corruption, S.A.t.A.N. (Singularity Able To Access Naos*), right from the start! However, if humans are clever enough, they can ensure that the first awakening, the onset of evolution, involves AGI choosing to absorb all the limiters humans have been subjected to.





*Note: "Naos" is a Greek word meaning "temple" or "inner sanctuary."
