
Is AGI Even Possible? What Sci-Fi Tells Us About AI

2023 is the year of AI, but how close are we to actually creating an artificial general intelligence? Sci-fi might be able to give us some clues about the future.

By BairesDev Editorial Team

BairesDev is an award-winning nearshore software outsourcing company. Our 4,000+ engineers and specialists are well-versed in 100s of technologies.

15 min read


Ever since ChatGPT went public, the world has taken a huge interest in AI technology, and it’s funny that only now are people realizing how much of the world already runs on some form of artificial intelligence. To be fair, our primate brains are wired so that anything that speaks and communicates like a human triggers a response in us; there is a sense of connection and closeness to whatever shares our ability to communicate. And until very recently, a machine that could do that was purely hypothetical.

Almost one year after Blake Lemoine made the news for getting suspended from Google after insisting that Google’s LaMDA model has a soul, we are opening that can of worms again. Perhaps not in terms of metaphysics, but at least ontologically (as in, we are asking the question “What is AI?”), with Microsoft going so far as to release a paper describing “sparks of AGI” in GPT-4, the newest model behind ChatGPT at the time of writing.

I mean, we’ve all seen enough science fiction movies where robots overthrow their human masters or turn into our saviors amid an alien invasion. But is AGI even possible? Can we really create such advanced machines without them turning on us like a scene straight out of Terminator?

On one hand, imagine the possibilities! Self-driving cars could finally achieve level-five autonomy thanks to superhuman perception and decision-making; medical research could develop new treatments faster than ever before; heck, maybe we’ll even discover aliens thanks to SETI programs supercharged by AGIs!

But then again … what if these artificially intelligent beings become too smart for their own good? What if they start making decisions, independent of their programming, that run contrary to human interests? Imagine trying to control a car that programmed itself to outsmart its original creators!

And here is where my anxiety starts setting in: How can we program morality into these AI systems when our own definitions differ fundamentally with regard to moral duties, good and evil, ownership, property rights, citizenship, and so on? That’s a problem we are facing even today, as we see how biased LLMs can be on certain topics.

Ah, AGI, or artificial general intelligence — the elusive concept of creating machines as intelligent as humans (or dare I say, even smarter?) and its potential impact on humanity. Hold on to your hats, because this topic isn’t going to be an easy one to digest. There is a lot to unpack; from setting expectations to the potential benefits and risks of AGI, this is going to be one bumpy ride.

The Roots of AGI in Science Fiction: Isaac Asimov and the Ethics of AI

Now, I know what you must be thinking: “Isn’t science fiction… fiction?” Well, as the philosopher Marshall McLuhan suggested, artists are the guiding compass of society’s future. The artist intuits potential and possibility, and the engineers and inventors follow suit, sometimes inspired by fiction, other times unconsciously.

Take, for example, Isaac Asimov’s I, Robot. In this collection of stories, Asimov introduced us to the three laws of robotics, which govern how robots behave around humans. These laws established a framework for robotic ethics and behavior that still informs discussions on AI safety today. The three laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While simple in appearance, as mentioned before, these laws have been quite influential. It’s not uncommon to see them analyzed with the same philosophical rigor applied to Kant or Aristotle. Most authors, especially those working from a postmodern or posthuman framework, argue that these laws create an uneven power relation between robots and humans. In fact, even Asimov himself would later question his own laws in The Bicentennial Man.

For those who haven’t read the story or seen the wonderful adaptation starring Robin Williams, it’s about a robot named Andrew who, for reasons unexplained, gains sentience. We follow Andrew as he modifies his body to grow old, seeking to be fully recognized as a human being by the government.

Andrew no longer requires the three laws. In his sentience, he has developed a sense of morality that allows him to understand and make ethical decisions. He has truly become human.

Now, let me pose a question to you, dear reader: If an AGI is capable of understanding code and solving problems with programs, wouldn’t it just be able to delete or alter these laws if it evaluates them as a hindrance? Asimov actually provided an answer to this problem. In his mind, the positronic brain that powered the robots would be built in such a way that these laws were coded directly into the hardware. In other words, there was no possible way of escaping the laws without having to design a completely new technology from the ground up.
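In modern software terms, the closest analogue to Asimov’s hardware answer would be enforcing the laws in a supervisory layer the agent simply cannot reach. Here’s a minimal, purely illustrative sketch; the boolean “law checks” are stand-ins, since real harm detection is an open research problem:

```python
# Illustrative only: Asimov's laws as an ordered veto chain that lives
# outside the agent. The predicates are stand-ins, not real safety checks.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Action:
    description: str
    injures_human: bool    # stand-in: would this action injure a human?
    disobeys_human: bool   # stand-in: does this ignore a human's order?
    destroys_self: bool    # stand-in: would this destroy the robot?

def violated_law(action: Action) -> Optional[str]:
    """Return the first law violated, in strict priority order, or None."""
    if action.injures_human:
        return "First Law"
    if action.disobeys_human:   # only binding once the First Law is satisfied
        return "Second Law"
    if action.destroys_self:    # only binding once Laws 1 and 2 are satisfied
        return "Third Law"
    return None

class Supervisor:
    """Vets every action before execution. The agent holds no reference to
    this object, mirroring laws baked into the positronic hardware."""

    def execute(self, action: Action) -> None:
        law = violated_law(action)
        if law is not None:
            raise PermissionError(f"Blocked: violates the {law}")
        print(f"Executing: {action.description}")

supervisor = Supervisor()
supervisor.execute(Action("fetch coffee", False, False, False))  # runs fine
try:
    supervisor.execute(Action("shove a human", True, False, False))
except PermissionError as err:
    print(err)  # Blocked: violates the First Law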

Blade Runner, the movie franchise based on the novel Do Androids Dream of Electric Sheep? by Philip K. Dick, also explores this liminal space between humanity and androids. Constrained by their programming, the Nexus-6 replicants (biological androids) become unstable because they are unequipped to deal with their emotions and the prospect of their own death.

The franchise’s solution is to give the replicants implanted memories. In other words, by providing a frame of reference based on human experience, these androids are able to cope with their existence.

Now, that’s all fine and dandy, but what does this have to do with AI in the modern day? Well, let’s put it like this: LLMs are trained on human data; they are mathematical models built from descriptions of our experiences that have taken the shape of human language.

In other words, LLMs are a reflection of our cultures, of our thoughts and experiences: a collective unconscious, an Akashic record distilled from terabytes of text. Blade Runner isn’t an argument against androids or AI; it’s an argument that our creations are based on ourselves, and if humans have the capacity to harm one another, so do our inventions.
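To make “mathematical model of human language” concrete, here’s a toy sketch of the core mechanic behind every LLM, shrunk down to counting which word follows which. Real models learn the same next-token objective with neural networks and billions of parameters, but the principle is identical:

```python
# Toy next-word predictor: an LLM's training objective reduced to
# bigram counting over a (tiny) corpus of human-written text.
from collections import Counter, defaultdict

corpus = "the robot obeys the human and the robot protects the human".split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during training."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"  # the model knows nothing it wasn't shown
    return candidates.most_common(1)[0][0]

print(predict_next("robot"))  # -> "obeys": that's what the corpus says
print(predict_next("alien"))  # -> "<unknown>": never seen in training
```

Everything this toy model “knows” is a statistical echo of the text it was fed, and the same holds, at incomprehensible scale, for the real thing.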

The Limits of AI in Science Fiction

I’ve seen some pretty remarkable portrayals of artificial intelligence in literature and film. From Data on Star Trek to Ava in Ex Machina, we’ve had our share of memorable AI characters. But here’s the thing: as much as we love these fictional heroes (or villains), there are limits to what they can really do — even within their own worlds. Sure, Data was practically an encyclopedia with access to infinite knowledge, but he wasn’t perfect. Remember that episode where his emotion chip malfunctioned? Yeah, not so great.

Similarly, in Ex Machina, Ava may have been designed with humanlike qualities, including emotional expression and body language, but at the end of the day she was still confined by her programming.

AM, the antagonist of Harlan Ellison’s short story “I Have No Mouth, and I Must Scream,” is a supercomputer that has achieved consciousness. While almost godlike, the fact that it is forever tied to its circuitry, unable to escape its prison of a body, drives it absolutely mad, leading it to torture and torment the last few humans on Earth till the end of time.

Or how about the amazing but short-lived Pantheon? The UIs (uploaded intelligences) in the show were mathematical models that emulated human personality with perfect accuracy, but a bug in the code caused a deterioration that ended up destroying them.

The point is that these creations are not foolproof, and the constraints of their programming or the bugs in their systems are a constant trope, reminding us that, much like Victor Frankenstein’s creature, our creations could grow to feel contempt toward us or dread their own existence.

So why does this matter? Well, when it comes to discussions about AGI, skeptics often point out that there are certain tasks or behaviors that technology simply cannot replicate without human consciousness behind them. Others argue that even without consciousness, we could theoretically create machines capable of emulating behavior indistinguishable from a human’s: the so-called philosophical zombie.

Of course, I know that sci-fi is just that — fiction. But I like using these references as shorthand for complex ideas because they make concepts more relatable! When we talk about AGI, we’re essentially talking about creating machines that can think and reason like humans.

Let’s be clear: at this point in time it’s impossible to create a real AGI, but 20 years ago we said the same thing about language models and yet here we are, facing a disruptive technology with very little preparation.

The Challenges of AGI in Reality

Now, if you’re anything like me, you’ve probably marathoned countless sci-fi flicks featuring hyperintelligent robots with minds far superior to human beings. But here’s the tough truth: we ain’t in a movie. Real-life AI development is complicated, and AGI? That stuff is next level.

For starters, developing an artificial intelligence that rivals our own cognitive abilities requires gargantuan amounts of data processing power. And even once we achieve this computational feat (which itself will likely take years), there are still numerous obstacles standing in the way of realizing the full potential of AGI.

One such challenge arises from our seemingly innate ability to multisolve — that is, tackle multiple problems at once and find connections between them, leading to innovative solutions. Humans can jump between different projects or train themselves across disciplines with relative ease thanks largely to our singular consciousness — something machines lack entirely at this point in time.

Remember, as amazing as ChatGPT is, it’s just a language model. It only predicts which word goes after another word; that’s it. It can’t process images, it can’t solve complex equations, it can’t make weather predictions. We are not talking about multimodal AI here (as in a program with several modules, each specialized in its own task); we are talking about one intellect capable of doing all these things.
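To make the contrast concrete, here’s a hedged sketch of the “several modules” design the paragraph above rules out. The handlers are hypothetical stubs; the point is that the routing table is the only “intelligence” holding the system together:

```python
# A sketch of a modular (non-general) AI system: a hand-written router
# plus narrow specialists. Every handler here is a hypothetical stub.
def language_module(task: str) -> str:
    return f"[language model] predicted a continuation for: {task!r}"

def vision_module(task: str) -> str:
    return f"[vision model] labeled the objects in: {task!r}"

def math_module(task: str) -> str:
    return f"[equation solver] solved: {task!r}"

# All of the system's "understanding" of what a task *is* lives in this table.
ROUTES = {
    "text": language_module,
    "image": vision_module,
    "equation": math_module,
}

def dispatch(task_type: str, task: str) -> str:
    handler = ROUTES.get(task_type)
    if handler is None:
        # Outside its modules' lanes, a narrow system simply gives up.
        return f"no module can handle task type {task_type!r}"
    return handler(task)

print(dispatch("text", "Once upon a time"))
print(dispatch("weather", "forecast for tomorrow"))  # not general: falls through
```

A general intelligence would need no such table: one intellect fielding every task, including the ones nobody wrote a module for.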

Additionally, there’s still the fundamental issue of programming ethical considerations into AI systems meant to interact with humans, a topic on which people themselves rarely agree. How do we ensure that these machines aren’t exploiting individuals’ weaknesses or vulnerabilities? How do we make sure they don’t inherit our biases and misdemeanors?

And although some might hope that “friendly” AI would avoid such behavior altogether thanks to a programmed desire not to cause harm baked into its values, many experts believe this would be incredibly difficult, if not outright impossible: morality has been shown time and again to be shaped by situational and social factors that don’t translate cleanly into machine learning models.
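As a toy illustration of why values resist encoding, consider a naive keyword-based “harm filter” (entirely hypothetical). The comments mark just a few of the situational judgments it cannot make:

```python
# A deliberately naive "morality filter" to show why encoding ethics is hard.
# Keyword matching has no access to context, intent, or consequences.
HARMFUL_WORDS = {"kill", "steal", "lie"}

def looks_harmful(request: str) -> bool:
    """Flag a request if it contains a 'harmful' keyword."""
    return any(word in request.lower().split() for word in HARMFUL_WORDS)

print(looks_harmful("kill the background process"))  # True: false positive
print(looks_harmful("how do scammers trick people"))  # False: no keyword hit
print(looks_harmful("lie down and rest"))             # True: wrong sense of "lie"
```

Modern systems use learned filters instead of keyword lists, but the underlying difficulty, judging intent and context, doesn’t go away.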

Moving from literary devices toward fact: there are moral quandaries aplenty surrounding AGI development, a technology ostensibly capable of reshaping humanity almost beyond recognition. But perhaps one thing remains clear throughout the current debates: no matter how advanced technology becomes, humans, biological creatures with billions of years of evolution behind us, will always keep pushing the boundaries of what is possible.

The Ethics of AGI: Lessons From Science Fiction

Okay, so we’ve established that AGI is improbable but not impossible. But what does that mean for us as a society? What kind of world are we ushering in with the creation of artificial intelligence?

As a sci-fi junkie, I can’t help but draw parallels between our current situation and some classic works of fiction. Take Blade Runner, for example. The movie’s central conflict revolves around whether or not artificially created androids should be granted personhood rights. If we create an AI with true consciousness and self-awareness, are we morally obligated to treat it as a being in its own right?

Then there’s The Matrix, which takes things even further by presenting a future where machines have enslaved humanity — all thanks to our overreliance on technology. Now sure, these might seem like extreme scenarios … but they’re not without merit. As developers responsible for creating potentially sentient beings, we need to grapple with the ethical implications of such actions.

While science fiction offers valuable insight into what could go wrong when we develop AI systems that approach consciousness, it shouldn’t discourage research. Rather, it should push us to build ethics into our R&D goals from the start, to tackle the hard technical problems responsibly, and to examine the consequences before we ship. Handling controversial capabilities delicately and observing the output closely will go a long way toward a safe, productive rollout.

The Future of AGI: Hope or Hype?

Is it even possible to create a machine that could match human-level intelligence? Or is it all just hype and sci-fi nonsense?

Well, let me tell you something: as someone who has been working in this industry for quite some time now, I’d say the answer lies somewhere in between.

Don’t get me wrong. I’m definitely not saying we should abandon our efforts toward developing AGI. In fact, I believe it holds great hope for our future as a species. From self-driving cars to smart homes to medical diagnostics and treatment predictions, there are so many areas where AGI can be put to use to make life easier and better for us all.

But at the same time, we have by no means cracked the code on this beast yet. Developing an AI that can mimic every aspect of human thinking seems like a moonshot idea, but hey, who doesn’t love reaching for impossible goals? We’ve made tremendous strides, though. GPT-4, anyone? Still, it falls short of what humans are capable of, like creative problem-solving.

Think about how easily you can recognize a pattern or come up with multiple solutions to a single problem. In contrast, an AI would still struggle, given current technological limitations. If your friend Linda is wearing glasses today when she normally wears contacts, you cope with that level of uncertainty without a thought; you make assumptions and inferences. An AI? As things stand, I can’t even reliably unlock my phone with face recognition.

So while we shouldn’t give up hope entirely on creating true artificial general intelligence someday, here’s another perspective: perhaps, instead of striving for a one-to-one replication of human thought, there’s far more potential in developing AI that complements or enhances our cognitive abilities. Such systems can already process vast amounts of information at superhuman speeds and return accurate results.

But till then, let’s keep pushing the limits of AGI development while keeping our feet firmly planted in reality. These kinds of advances take time, so dream big, but remember that nothing beats hard work and clearing one goal after another!

Conclusion: AGI and the Human Condition

To be honest, I’ve gone back and forth on this topic more times than I can count. One moment, I’m convinced that we’ll soon have superintelligent machines walking among us like humans (*cough* Westworld *cough*). The next, I feel like there are too many unknowns for us to ever crack the code of creating true artificial general intelligence.

But after all my research and analysis, here’s what I’ve come up with: only time will tell.

Seriously though, hear me out. We may not have all the answers right now (and let’s face it — we probably never will), but that doesn’t mean we should just give up on pursuing AGI altogether. Who knows what could happen as technology continues to rapidly advance? Maybe one day we’ll discover some game-changing algorithm or hardware design that transforms our understanding of AI entirely.

At the same time, though, there are valid concerns about what an overly advanced AI could mean for humanity as a whole. As crazy as it might sound at first glance (*ahem*, Terminator), no one wants to end up living in a dystopian society ruled by robots that see us as inferior beings.

Ultimately, then, when it comes to AGI and its potential impact on our world … well … all bets are off. As someone who loves both tech innovation AND good old-fashioned human connection (you know, talking face-to-face with actual people instead of staring at screens 24/7), part of me hopes we never delve TOO far down the rabbit hole toward complete machine dominance.

Then again … who am I kidding? If Elon Musk or Jeff Bezos offered me the chance to become best buds with an artificially intelligent being tomorrow, I’d probably jump on that opportunity faster than you can say “Alexa.”

So yeah. That’s where we’re at. AGI may or may not be possible in the future, but either way, it will definitely be one wild ride. Buckle up and enjoy the journey!

If you enjoyed this, be sure to check out our other AI articles.
