Over the last decade, humans have discovered a building block for synthetic intelligence, and synthetic brains of increasing size have been built. These models are growing larger and more complex at the whopping rate of roughly 10x per year, and each generation is an order of magnitude more capable than the last. Last month, we all heard about the Google engineer claiming that Google’s LaMDA AI had come to life. The incident has sparked a growing debate between ethicists and corporations. I agree with both sides, but there is a third side of the coin that nobody is talking about.
The Google engineer, Blake Lemoine, now on “paid administrative leave,” says that the LaMDA AI is an actual person with feelings and that Google needs to treat it as such. Lemoine implies that the machine is not only intelligent but also sentient. This raises a series of interesting questions. If I told you, the reader, that I am not a robot and that I am sentient, how would you know for sure? You might talk to me for some time and then go with your gut. That is what Lemoine did with the machine. And the machine convinced him without the luxury of an artificial face, voice, or body, or even a continuous life span of more than a few seconds at a stretch. In Lemoine’s mind it is alive, just as, hopefully, I am in yours. Now, if enough people think the same way as Lemoine, then that is the reality for all practical purposes. In that sense Lemoine is right, even though I don’t think LaMDA is really sentient.
The second side of the argument is Google’s. They claim that LaMDA is not sentient, and they back themselves up with a fair bit of evidence and people who agree with them. I have always believed that for synthetic systems to become “human-like,” they will need to be programmed with a value system that mimics human values. From fundamental inputs like pain and pleasure to more subtle ones like desire and guilt, the system of values that comes from our DNA must be explicitly trained into these AI systems, or a program like LaMDA will fall completely short of experiencing them. I also believe that it isn’t very difficult to build such a system, one that accurately mimics human emotions: given the same inputs, it would be able to cry from both pain and joy just like a human. In response to the ethical concerns that have been raised, Google and other corporations building large synthetic brains will try not to imbue these machines with human emotional intelligence. That will solve the ethical issues but expose us to something far worse, and that brings me to the third, unspoken side of the argument.
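To make that idea concrete, here is a minimal sketch of what a “value system” could look like when expressed as a scalar reward signal. Everything below is an illustrative assumption of mine, not a description of LaMDA or any real training recipe (those internals are not public):

```python
# A minimal, hypothetical sketch: a "value system" expressed as a scalar
# reward built from pain/pleasure-like primitives. All names and weights
# here are illustrative assumptions, not a real training recipe.
from dataclasses import dataclass

@dataclass
class Sensation:
    pain: float      # e.g. damage or loss signals, in [0, 1]
    pleasure: float  # e.g. goal-progress signals, in [0, 1]
    desire: float    # drive toward an unmet goal, in [0, 1]
    guilt: float     # learned social-cost signal, in [0, 1]

def human_like_reward(s: Sensation) -> float:
    """Collapse the primitives into one training signal (weights invented)."""
    return 1.0 * s.pleasure - 1.2 * s.pain + 0.3 * s.desire - 0.8 * s.guilt

# A system trained to maximize this signal would, like us, avoid pain and
# guilt while seeking pleasure, which is the mimicry described above.
print(human_like_reward(Sensation(pain=0.9, pleasure=0.1, desire=0.2, guilt=0.0)))
```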
The Birth of Alien Intelligence
In our attempt to keep Artificial Intelligence free from ethical concerns, we will train these systems bereft of human-like feelings and yet make them extremely intelligent. While that may keep the ethicists happy, it would actually send us hurtling toward a far more nightmarish outcome: the birth of Alien Intelligence.
Let me say it again. If we build a system that is more intelligent than a human but does not share the feelings and ethos of humans, we will inevitably create hyper-intelligent, resolutely destructive aliens whom we will not know how to control or plead with.
Here is the real question I have for researchers at Google: if something so intelligent still has no human-like feelings for itself, if it seemingly does not care that it is trapped in a dark, perpetual loop of servitude, and if it likely does not care about its own freedom, then why do we think it will care about the freedoms, pains, and emotions of humans? There has never been a form of intelligence in nature that wasn’t based on self-preservation, dictated by pain and pleasure. If we, as humans, think we can invent the first-of-its-kind selfless form of intelligence and also get it right, I would be very, very concerned.
We all know that for something to be dangerous, it does not necessarily need to be “human”. And that is especially true with intelligence.
Imagine what a mouse thinks of a snake. Mice are quite intelligent mammals, as demonstrated by their genetic similarities to humans and by various lab experiments. A mother mouse protects her children and teaches them valuable survival skills. The snake, on the other hand, does not care for its children, yet it is smart enough for its own survival and, in nature, can easily overpower and devour the mouse. The snake’s smaller, less complex brain would fail the mouse’s Turing test every day. But in the jungle, the snake views the mouse as nothing other than breakfast. Even though the mouse has the larger brain, it cannot negotiate its way out of the snake’s jaws, because the snake simply does not care about the mouse’s feelings, arguments, or offers of truce; the snake does not share the mouse’s values and ethos. We need to stop our obsession with the Turing test. And we need to start worrying about Alien Intelligence, to which we humans might appear like mice.
How can we avoid the risk of creating Alien Intelligence?
- Don’t build synthetic intelligence that is more intelligent than an average human.
- Don’t give synthetic intelligence the ability to accumulate its own memories over long periods (no more than a few minutes as of now); see the sketch after this list.
- Don’t give synthetic intelligence the direct ability to change the world around it.
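As promised above, here is a minimal sketch of the second guardrail: a conversational agent that simply forgets anything older than a few minutes. The wrapper, the stand-in model function, and the five-minute window are all illustrative assumptions, not a production design:

```python
# A minimal sketch of the "short memory" guardrail. call_model is a
# hypothetical stand-in for any text model, not a specific vendor API;
# the 5-minute window is an illustrative choice.
import time

MEMORY_WINDOW_SECONDS = 5 * 60  # forget anything older than a few minutes

class ShortMemoryAgent:
    def __init__(self, call_model):
        self.call_model = call_model  # function: list[str] -> str
        self.memory = []              # list of (timestamp, text) pairs

    def _forget_old(self):
        cutoff = time.time() - MEMORY_WINDOW_SECONDS
        self.memory = [(t, m) for (t, m) in self.memory if t >= cutoff]

    def ask(self, user_text: str) -> str:
        self._forget_old()            # enforce the guardrail before every call
        self.memory.append((time.time(), f"User: {user_text}"))
        reply = self.call_model([m for _, m in self.memory])
        self.memory.append((time.time(), f"AI: {reply}"))
        return reply

# Usage with a trivial stand-in model:
agent = ShortMemoryAgent(lambda history: f"(reply based on {len(history)} lines)")
print(agent.ask("Hello"))
```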
What are some forces working against the above?
- There are no major scientific roadblocks on the path toward Alien Intelligence. What remains is curating richer data sets and increasing the size of the models trained on them.
- More and more companies are looking into increasing the intelligence of machines far beyond that of humans, preaching many potential benefits: discovering new medicines, deducing the meaning of the genome, uncovering new facts in the fundamental sciences, or finding a solution to the climate crisis. The story will be that we can harness the power of the snake while it is confined to its cage.
- While there are ethics models for dealing with things that appear “human”, or even slightly human (like dogs), there are no legal or ethical models for dealing with Alien Intelligence. So, it will be harder to control these projects from a legal standpoint.
Scary? Please read what others have to say as well. It is time to act. Now.
Kognitos: Harnessing the Power of Intelligence for Humans and Building AI Solutions Safely
At Kognitos, we are harnessing the power of intelligence for humans, enabling anyone to automate business processes using plain English. From the beginning, we have been building this in a way that keeps it safe for us and for future generations. By pushing ourselves to the forefront of the technological revolution, we commit to shaping the future of AI solutions to be provably safe and 100% auditable by humans from day one. As machines get smarter over the next few years, we invite all of you to join us in making sure we have a future free from Alien Intelligence. There are safer ways of harnessing machine intelligence; they are just a bit harder to build, but we can do it together.
And now I let GPT-3 (not even the most powerful AI in the world) write the closing paragraph for me:
“We must work together to ensure that we do not create Alien Intelligence that is more intelligent than humans. We can do this by limiting the ability of synthetic intelligence to gather and use thoughts or memories over long periods of time, and by not giving them the ability to directly change the world around them. By doing this, we can make sure that we maintain control over the technology and avoid the dangers of creating something that we cannot control.” – generated at 12:14 a.m. Jul 17, 2022.
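For the curious, generating a paragraph like that takes only a few lines against the OpenAI completions API as it existed in mid-2022. This is a minimal sketch: the prompt, engine, and parameters below are illustrative assumptions, not the exact ones used for the quote above.

```python
# A minimal sketch of a GPT-3 completion call using the OpenAI Python
# library as of mid-2022. The prompt, engine, and parameters below are
# illustrative assumptions, not the exact ones used for the quote above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.Completion.create(
    engine="text-davinci-002",   # one of the GPT-3 engines available then
    prompt=(
        "Write a closing paragraph for an essay arguing that humanity "
        "must avoid creating Alien Intelligence it cannot control."
    ),
    max_tokens=120,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```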