We are living in a new era. For the first time, humanity has to reckon with our own creation potentially turning against us. In some ways, though, this is a very old story, one that has played out over and over again. In ancient Greek mythology, the three brothers Zeus, Poseidon, and Hades rose up against their father, Kronos. The Bible speaks of Satan, himself a creation of God, rebelling and corrupting the entire world. Now it is humanity's turn, facing an artificial intelligence that may one day rise against us.
Dystopian films like ‘Terminator’, ‘The Matrix’ and, more recently, ‘The Creator’ portray wars between humanity and AI. Almost every day, such a nightmarish scenario seems closer than ever. Of course, such a fate won’t arrive for some time; first, AI will slowly take over occupations that have always been held by people. Like, say, I don’t know, writing perhaps?
If you follow the news, you might have heard about the Hollywood writers’ strike. A group of writers, mostly working on TV scripts, have gone on strike until their (frankly impossible) demands are met. Now some production companies are considering using AI, potentially something like ChatGPT, to work on future scripts and prevent another blackout. Hmm. Well, in the case of ‘The Late Show with Stephen Colbert’, I would think that using AI might actually improve the show.
Seriously, though, this is more serious than it first appears. Can a computer really express thoughts and even emotions? If so, then the lines between AI and humanity blur. Saudi Arabia has already granted citizenship to a robot (Sophia, in 2017). For the first time in human history, we are forced to ask what makes us human. Early in our history, survival was the only thing we seriously had to contend with. In primal days we faced animals and nature; then came wars between human tribes and factions. Now we have managed to achieve a fragile semblance of peace, though we still face challenges from nature. Yet AI is turning back the clock, becoming advanced enough to challenge our position as the dominant species.
Giving rights to a robot represents a major shift. Whether you like robots or not, any logically thinking person has to admit that AI poses a potential existential threat. I’m not saying AI will destroy us all; I’m saying it has the potential to do so. Potential isn’t certainty, but it still merits thought and consideration. The question we all face is whether we are prepared for these new challenges.
Back to Saudi Arabia: while I applaud the nation’s efforts to move away from its radical roots, I have to question the decision to grant citizenship to a robot. Some have lauded the move, citing how intelligent the robot in question is. That’s interesting. Is intelligence the only thing that makes us human? If so, how could we possibly measure it? How intelligent must a person be to cross the threshold and be considered ‘human’? Would that mean people with mental disabilities should somehow be considered ‘sub-human’? I find a lot of fault with this line of thinking.
The way I see it, there is much more to being human than mere intelligence. Throughout history, we have wrestled with ideas of right and wrong. One of the oldest legal texts in existence is the Code of Hammurabi, in which the Babylonian king Hammurabi set down what was considered just at the time. Behind every philosophy and every world religion lies the question not only of how to live, but of what the right way to live is. That one defining question has always separated people from animals. Watch any nature documentary and you will see animals committing what humans would consider the grossest violations of morality, and they don’t appear to care. Animals do not seem to display a conscience of any kind.
I would apply this same line of thinking to robots. By their very nature, AI systems cannot feel anything, and they certainly cannot have a working conscience. Anything such a system considers ‘right’ or ‘wrong’ is only what its developers programmed into it. I could build a robot that thinks burning down buildings is the ‘right’ thing to do, and it would never conclude on its own that this is wrong. How could it?
If a computer system cannot display a conscience or make moral decisions for itself, then I conclude that no computer can ever be considered human, for the simple reason that computers and AI lack the most fundamental aspect of being human. This is a deadly serious topic. Having a conscience, being able to reason about right and wrong, is the only reason humans are capable of forming civilizations and governing themselves (however imperfectly we do so).
AI is already here, and whether we like it or not, there is no stopping it. We have moved too far, too fast. Like Pandora, we opened the box out of childish curiosity, not realizing the dire consequences of our actions. Understand, I personally see benefits to AI; I do think it has many uses. What I’m questioning is the speed of our progress. I’m questioning whether bringing AI into our homes, our businesses, and even our military is a wise choice right now. I’m questioning whether anyone is considering the consequences of our actions. Because after watching interviews with those in powerful positions, I’m not convinced anyone is.