Could artificial intelligence destroy reality as we know it? It sounds like clickbait, I know. Just another random keyboard warrior out to scare people with dystopian nightmares. I get it; I don't care for those either. But this is not one of those clickbait articles you regret reading at the end. What we're talking about is the most serious subject of our time. Artificial intelligence is here, it's become more advanced than we could have dreamed, and now it is causing problems that, to be frank, we haven't prepared for.
How can anything destroy reality? Reality is a constant; it is solid and concrete. I can walk right outside of my house, pick up a rock, and say with confidence, 'this is a rock.' A geologist might come along, shake his head at my ignorance, and inform me that what I am holding is, in fact, a piece of granite. Fair enough, I guess; neither one of us is wrong. The geologist simply knows more than I do. But the piece of granite is still a rock. Now, let's imagine a different scenario. Let's say I walked outside of my house one day and, to my astonishment, found that I had somehow walked onto Mars. That bizarre scenario is exactly what we're about to discuss.
What does AI have to do with this? If you've been paying attention to the news, you might have noticed a lot of talk about online AI bots like ChatGPT. For the first time ever, self-learning AI is open to the public, right there on the internet. AI at its core is simply a very sophisticated algorithm. Nothing more, nothing less. What makes algorithms like ChatGPT interesting is that they can analyze terabytes of data in mere seconds. What this translates into is a sort of pseudo-learning. When these online chatbots interact with people (like you and me), they essentially take notes. They pay attention to patterns and use them to form a base conclusion of sorts. Whereas previously AI only understood whatever its creator(s) put into it, now AI is able to form conclusions on its own. Say, for example, that the vast majority of users talked about how great cheeseburgers were. The algorithm would formulate the conclusion that cheeseburgers are good for people. That sounds simple enough.
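To make the cheeseburger example concrete, here is a deliberately crude toy sketch of that kind of pattern-counting "pseudo-learning". This is NOT how ChatGPT actually works internally; it is just an illustration of the idea that a program can reach a "conclusion" purely by tallying what people say, without understanding any of it. The function name and word lists are made up for this example.

```python
# Toy illustration of frequency-based "pseudo-learning".
# An algorithm can form a "conclusion" by simply counting how often
# a topic appears alongside positive or negative words -- no actual
# understanding required.
from collections import Counter

def learn_opinion(messages, topic):
    """Tally positive vs. negative mentions of a topic and report
    whichever side the majority falls on."""
    positive_words = {"great", "good", "love", "delicious"}
    negative_words = {"bad", "awful", "hate", "gross"}
    tally = Counter()
    for msg in messages:
        words = set(msg.lower().split())
        if topic in words:
            if words & positive_words:
                tally["positive"] += 1
            if words & negative_words:
                tally["negative"] += 1
    if tally["positive"] > tally["negative"]:
        return f"{topic}s are good"
    if tally["negative"] > tally["positive"]:
        return f"{topic}s are bad"
    return f"no conclusion about {topic}s"

chats = [
    "I love a good cheeseburger",
    "cheeseburger night is great",
    "that cheeseburger was awful",
]
print(learn_opinion(chats, "cheeseburger"))  # prints "cheeseburgers are good"
```

Notice that the program "concludes" cheeseburgers are good simply because two out of three messages used positive words. It has no idea what a cheeseburger is, which is exactly the parrot problem described next.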
Up until recently, all AI could truly do was mimic people. Sort of like a bio-digital parrot. AI is capable of analyzing patterns without really understanding their significance, like parrots who can mimic speech patterns without having the foggiest idea of what they're even talking about. Some people are having great fun with this. You can log on to ChatGPT, tell it to write an essay, and presto! In seconds it will have a five-hundred-word essay ready to go. It looks good too: nicely formatted, well punctuated. Until you start reading it and realize the essay is full of nonsense. One ChatGPT essay lectured on how good ground porcelain was in baby formula. Some of the sheer babble that comes out of ChatGPT is actually funny.
Amusement aside, there's a darker side to this. You see, 'artificial intelligence' is a misnomer. The algorithms being used aren't intelligent at all. Actually, they're quite stupid. While these AI algorithms are indeed capable of storing and processing mountains of information, they are incapable of making any sort of moral judgment. Concepts of right and wrong are completely foreign to them. What this means is that there are absolutely no restrictions on how AI systems are being used.
Which brings me back to our original question. As of right now, this very moment, AI is already distorting reality. Across the internet, fake videos are proliferating like rabbits. Fake videos of celebrities and even ex-Presidents. They look pretty good; if you aren't paying attention, they will fool you. One viral video showed what appeared to be the actor Tom Cruise imitating a snapping turtle. It did look pretty good, but if you paid attention you could tell it wasn't real. (And why would Tom Cruise be imitating a snapping turtle anyway?) That one isn't really important, but other videos show a more nefarious intent, like one video made of former President Donald Trump apparently making racist remarks. That also wasn't real, but no one caught on to it immediately. Let this all sink in for a moment. This fake video of Donald Trump circulated for days as a legitimate video before someone finally caught on to what was going on.
AI is capable of taking in information and then reconstructing it into another format, like writing essays or making videos of people who look and sound very similar to the real thing. This is being implemented very, very quickly. If you watched the series "The Book of Boba Fett" on Disney+, you've already seen it. Mark Hamill made an appearance as Luke Skywalker, and the production crew took advantage of AI to make him look thirty years younger. It looked very good. Disney plans to reuse the same system in future projects like "Indiana Jones & The Dial of Destiny" and an untitled Luke Skywalker project. That is for entertainment purposes. But what if, say, a terrorist decided to use this easily accessible technology to make the current US President appear to declare war on a superpower? That in and of itself could trigger World War 3 without anyone having to fire a single bullet. I'm simply presenting a possibility here. A very real possibility. After all, terrorism has become more sophisticated over the years. Does anyone remember the propaganda videos the Islamic State would put out? They had surprisingly good production values, which just made them even more eerie.