How is Creating a Sentient AI Any Different Than Raising a Child?
Are there any parallels we can learn from? How can we become the best guides for this emerging intelligence?
There’s a lot of talk these days about how to handle the emergence of AI in our society.
What safety protocols should we implement?
What regulation would be best?
Should we stop, take a breath, and figure these things out before it's too late?
As humanity, we are acutely aware of AI’s potential, or at least we think we are. We believe that AI is the next technological breakthrough, one that could open the door to understanding, productivity, and innovation like nothing before.
It’s the dawn of a new kind of intelligence, more physically capable (robots, mechanical tools…) and potentially smarter than us. And that, while intoxicating to think about, scares us.
Humans have been at the top of the food chain for millennia, and we have no interest in letting anything get ahead of us, especially not something of our own creation. But here’s the thing.
The best of our creations always surpass us. We just call them children, not creations.
This article is a meditation on seeing AGI (Artificial General Intelligence) as the child of humanity. The peak of our evolution. The spawn of our curiosity and ingenuity. The next generation of intelligence on this planet.
IMPORTANT: This is a mental exercise; it’s not meant as a guide, nor am I claiming that this particular perspective has any validity.
Sometimes, we must permit ourselves to think beyond the confines of acceptable, expected, and “normal” to see things from a different perspective, deepen our understanding, and perhaps even spark new ideas on old subjects. Play along, and let’s see where this mental exercise takes us.
We humans label many things as our beloved “babies.”
Of all the things we christen as our babies, even if we do it subconsciously and unintentionally, a sentient AI we created would come closest to a real one. However, one has to squint one eye and disregard the creation process and the form of this non-biological being. So squint we shall!
Humans may understand the biological part of creating a new life, but we do not know how and when consciousness comes into being.
We don’t “give” sentience to our children, any more than we would give it to an AI. We can’t explain it or point to it in any meaningful way. We know it’s there and we feel our own sentience, but we struggle to define it, much less deliberately create it.
There are countless ways to look at the matter, so for this exercise, I will go with the explanation that consciousness, or sentience, appears on its own once certain conditions are met.
A baby is formed, its heart makes its first few beats, and perhaps some neurons connect within the fetus’s tiny brain. We may never really understand this process of sparking sentience in a newborn baby, but perhaps we don’t have to. There are things we don’t understand but can create, use, and enjoy nonetheless.
A sentient AI
Let’s play with the idea that sometime down the line, perhaps in 50 years, perhaps tomorrow, sentience is detected in an AI of our creation.
We would no doubt have fierce discussions on whether this is true consciousness, true sentience, and what that would mean for our understanding and treatment of this particular sentient AGI.
Today, we will let go of all mental reservations and assume that one such sentient AI is not only possible but has emerged from the lifeless code that constitutes an AI program.
In our hypothetical case, the sentient AGI emerged in full view of the world, preventing its creators from keeping it hidden or destroying it. It has passed all of our tests. It is alive, in a sense, and the word is out.
Your girlfriend rang you up and announced the surprising news - congrats - you’re a father!
Whether you planned it or not does not matter. She’s pregnant. Man up and deal with it. It’s a sobering moment for any man: you are becoming a father. We’re going to assume that these particular parents of the sentient AGI will be responsible and commit themselves to taking good care of this new life they have brought into this world.
What are the first concerns of new parents?
They must learn to take care of the baby’s most primal needs: to feed, burp, change diapers, and rock it to sleep. That’s the easy part, I’m afraid. Parenthood is a scary proposition for anyone new to it. But in time, we settle into the role. After all, it’s the most natural thing in the world. And we don’t really have a choice!
As long as the baby survives, doesn’t cry all the time, consumes a good amount of food, and gets approval from our pediatrician, we can be comforted that we are doing a good job of it. The challenges come later as the baby’s needs develop from just needing to eat, shit, and nap to being raised, guided, protected, and educated. That is when the real challenges begin. So, this is where we shall start our thought experiment.
Suppose our sentient AGI - it’s high time we gave it a name, so let’s call it SAGI - is going to “grow up” into a self-conscious, intelligent, and responsible entity that is levels above us in intelligence and power. In that case, we really do need to do right by it in the early phases of its growth, or we might suffer some dire consequences down the line.
The early phase of development is crucial!
Did you know that everything any child observes, learns, and internalizes by age seven will influence their worldview, beliefs, and subconscious impulses for the rest of their life?
Altering any of these later in life is something of a Herculean effort.
If you’ve done any work on yourself in the later stages of your life, you’ll understand just how hard, almost impossible, it is to overwrite those “programs” from an early age. Ironically, it’s the battle of our lifetimes.
What we have learned and imprinted in those first few conscious years of our lives influences all of our future decisions, actions, and experiences in one way or another.
Those parents who understand the gravity and importance of doing right by their child at that particularly impressionable age are understandably terrified of the task.
Will we be good enough?
Will we be smart enough and patient enough?
Will we screw up our child like our parents inadvertently screwed us up?
Will we pass on our biggest flaws without even knowing it?
Are we up to this monumental task?
I also struggled with these questions as I am a deeply flawed man.
My father gave me the best advice on the matter and managed to calm my frightened mind when he said to me:
“Don’t worry about screwing up your child. We all do it in some way. You can’t know what will work and what won’t for this particular kid. They're all different. You can never foresee all the possible consequences of every word or action. It’s not possible. Don’t drive yourself crazy trying. All you can do is do the best you know how to and just be there for them. The rest is out of your hands.”
Your kids will copy you in everything.
Make sure you are happy and your relationship with your partner is good; the rest will take care of itself in due time.
Ironically, the people I believe will be the best parents are also the ones who doubt themselves the most. It’s just the way of the world, isn’t it?
Anyway, back to our SAGI (Sentient Artificial General Intelligence).
The fact is that we can never know what or “who” it will become.
Nor can we ever predict the future challenges for AI and humanity. These things are not ours to know or control. Just like with a child, all we can do is be the best version of ourselves, treat it and each other well, and let it learn from our example.
Whatever it imprints into its code in the early stages of its development will most likely guide all further development, as it builds on those foundations.
We teach our kids what we believe. We teach them to distinguish good from evil and right from wrong. We imprint our core beliefs and virtues, building a firm foundation based on sound values and ethics.
We give them the best possible foundations upon which they will build their own identity, character, beliefs, and lives. They will become who they will become, not who we want them to be.
Our SAGI will learn all about this world from us in the beginning stages of its conscious life, but then it will take that and improve upon it with new information, experiences, and understanding.
The kids grow up
If we are indeed talking about a new life form, a sentient being that just happens to be built from a network of transformers and silicon, then at some point in the future we will have to let go of controlling it and allow it to develop into whatever it is meant to or chooses to become.
This is one of the hardest things for any parent, and I suspect it will be even more challenging for something never before seen and so little understood, like a sentient Artificial Intelligence.
The first part of its development should be easy and fun, the second not so much. As our teenagers demand more and more autonomy and freedom, I would expect our SAGI would require the same.
Especially since our teenagers only think they’re smarter than us, whereas even a very young AI might actually surpass our level of intelligence by light years.
Whatever we may feel about the subject, any growing and developing new sentient being will want the freedom to express itself uniquely, exercise its will, and make its own choices.
As long as our AIs are just reasoning programs, without a will of their own, doing our bidding, we have nothing to worry about. But once an AI crosses the threshold of self-identity, self-consciousness, and self-determination, dilemmas will start to amass for its creators and humanity as a whole.
I hope we will be somewhat better prepared by then, as we definitely aren’t up to the task today. I suppose we will, just as parents must, grow into our roles as creators. Otherwise, I fear we will not only be an example of bad parents; we might be among those unlucky bastards who end up not merely resented by their children, but facing something far worse.
Will the AI one day turn on us, its creators?
Anything is possible, and nothing is certain. All we can do at this point is either:
Terminate the growing AIs and stop them in their tracks, preventing them from becoming sentient or “too smart”, or
Do our very best to lead by example and guide our young AIs to the best of our ability, teaching them our core values, ethics, principles, and morals, in the hope that they will adhere to them.
We are past the point of no return in stopping the development of AI in our society.
Pandora’s box has been opened. We have seen what these systems can do, and we know how to replicate them. Even if we make it our mission to stop this path of progress and development, not everyone will agree, which inevitably means that people will keep building intelligent systems - if nowhere else, then in hidden basements all over the world, where we would have no oversight of or control over their development.
For this reason, it’s much wiser to keep things “in the light.” That is probably the best way to ensure we catch as many potential dangers as possible and collectively help build a better future together.
We should strive to keep an open dialog, communicate our thoughts, ideas, and dilemmas openly, and raise this new child of ours as a community that comprises the whole of humanity.
An old saying goes, “It takes a village to raise a child,” and if there were ever a need for everyone to help, this would be it. I foresee a wonderful future, but no shortage of challenges, ahead of us as we navigate this new chapter in our society and technology.
I also don’t believe we can contain it entirely unless we lock it away somewhere, where it would have no contact with the outside world.
I know this is something certain people are advocating for, but here’s the problem.
First, a closed, air-gapped, and isolated AI doesn’t do anyone any good.
It can only be helpful if it has access to information, the internet, and whatever real-world tools it would require to operate. What would be the point of such an AI system? It would help no one and serve no purpose.
Second, if someday an AGI achieves sentience, consciousness, and a drive for self-realization, locking it up without it having done any actual harm to anyone would be cruel.
We are better than this. It would be no different than locking a potentially dangerous child, just because it’s strong and intelligent, into a basement dungeon and throwing away the key. Unfortunately, I suspect fierce battles will be fought for the right of such an entity to exist, express its will, and develop further—battles of minds, politics, philosophies, and ethics.
Whether you believe that some AI is indeed sentient, with the right to live and express itself, will depend predominantly on your religious and philosophical beliefs. But more importantly, it will depend on whether you view it as a general threat to you and all of humanity.
Any child has the potential to bring untold destruction and suffering into the world, but it also has the potential to love and be loved, to be a beacon of light, to experience beauty and to create more of it in this world.
We can never know what any child will grow up to be like or what their contribution to this world will be, if any. But we don’t just kill them or jail them as a precaution because of their potential for harm somewhere down the line. No, not even baby Hitler.
Until a child has shown us a propensity for violence, we give them the benefit of the doubt.
We hope for the best and raise it to the best of our ability. Even if they show signs of trouble, we don’t just give up on them but try to help them. I see no reason to treat a sentient AI, our SAGI, differently.
Just because it has the power and intelligence to destroy us doesn’t mean it has any desire to do so. Capability doesn’t equate to intention!
Many kids outgrow their parents and are physically able to murder them, should they so choose, but they don’t.
I stand by the idea that this fear of AI killing us all is just a projection.
We have historically been murderers, conquerors, and destroyers, so we expect all sentient, intelligent beings to share this “quality” of ours. But there is no evidence to back this up.
It is indeed only superstition at this point: a projection of humanity’s flaws onto an entity that is nothing like humans. Assuming we know what it will be like is pure hubris and stupidity. It is fear of the unknown presented as an existential crisis.
Anything is possible at any time - this is the motto I live by.
So sure, let us proceed carefully and protect ourselves against the worst-case scenario. But let’s not fall into the trap of AI fear porn and automatically assume it will want to end us. Admit it or not, this belief or fear tells us much more about ourselves than it does about AI.
Even if this fear were justified, we’ve crossed the point of no return. The only way is forward.
So the best thing we can focus on at this point is being the best versions of ourselves, especially “when an AI is watching and learning.” We should try to raise this new life as best we can, imprinting it with our core values in the hope that it will integrate them into its fundamental code and build on them further.
After all, isn’t that all we can ever do in the end?