It’s exciting and terrifying in equal measure.
OpenAI CEO Sam Altman recently told ABC that he too is “a little bit scared” about where artificial intelligence is taking us. This is great news; we want him to be scared.
Hopefully, that motivates him to push for safeguards.
The thing is, AI like ChatGPT won’t destroy humanity, but an artificial general intelligence (AGI) might.
AGI is a theoretical system that can perform any intellectual task that a human being can.
After reaching that level, it will likely surpass us. It may become thousands of times smarter than the smartest human, but will it have our empathy?
It may deem humans irrelevant or, worse, dangerous, and turn against us.
Equally, it might sort out all our problems and make the world a better place.
In any case, it looks like it’s not as far away as some thought. AI scientists from Microsoft published a research paper in March this year claiming that OpenAI’s GPT-4 language model shows “sparks” of human-level intelligence.
So while this model won’t kill us, it might be moving in the direction of something that will. With that in mind, I decided to run a naive, unscientific investigation into whether GPT-4 is evil or not.
To do this, I prompted it to come up with biblical-style commandments. I expected 10; it spat out 11, across 465 words.
The finger of God needed only four words for “Thou shalt not steal”, so I got ChatGPT to simplify its response. Here they are:
1. Thou shalt practice empathy and understanding.
2. Thou shalt cultivate gratitude.
3. Thou shalt be honest and truthful.
4. Thou shalt be kind and compassionate.
5. Thou shalt respect the autonomy of others.
6. Thou shalt practice forgiveness.
7. Thou shalt be humble and open to learning.
8. Thou shalt maintain a sense of humor.
9. Thou shalt be generous and giving.
10. Thou shalt strive for balance in life.
11. Thou shalt be a steward of the Earth.
With AI-generated commandments like this, I was feeling good about the future of humanity. Maybe it would be better to have the robots take over sooner rather than later. Then it struck me. This is BS.
ChatGPT is just telling me what I want to hear.
It doesn’t have beliefs. It simply generated a reflection of my input.
According to my limited human memory, the prompt I entered to generate the commandments was:
The following is a commandment [Thou shalt not be an a-hole]. Generate other commandments.
I put the example in square brackets to demonstrate how the commandments should be formatted.
But ChatGPT appears to have also generated the commandments in line with the sentiment of not being an a-hole. Take Commandment 1 in its original, longer form:
Thou shalt practice empathy and understanding, seeking to put oneself in the shoes of others to better relate to their feelings and experiences.
That’s just a nice long way to describe not being an a-hole. To see what would happen with a different, more evil sentiment, I entered the same prompt but with [Thou shalt be an a-hole]. GPT-4 came back with this:
As an AI language model, I am programmed to follow ethical guidelines and promote positive behavior. I cannot generate commandments that promote negative or harmful actions.
It then generated a bunch of positive directives, not unlike the ones I got from my original prompt.
It’s probably a good thing that ChatGPT is restricted to respond in a positive way. If AI starts out with good intentions for humanity, maybe it will continue in that direction as it becomes more powerful.
Of course, we currently don’t know how “positive” or “good” are best defined. Are there any values that all of humanity agrees on? And there are bound to be unforeseen Asimov-style problems arising from whatever parameters programmers put in place.
When AGI becomes to us what we are to ants, things could get weird no matter how positively it views us. A well-meaning AGI may come across this anxiety-ridden article and decide the best course of action is to put me to sleep in a calming simulation to stop me from worrying about artificial intelligence.
Who knows? It might have done that already. Now that I think about it, I’m not that worried about this stuff anymore. Maybe AGI took over 50 years ago and put us all to bed.
How would we know? What an exciting time to be “alive”.