Understanding the Future Era of Artificial General Super Intelligence

Sentra’s Ron Reiter offers insights on understanding the future era of artificial general super intelligence. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.

Perhaps the most common debate around generative AI (genAI) is whether the risks of its malicious and harmful uses outweigh its potential benefits. Ultimately, it depends on whom you ask, and the tech sector certainly has advocates on both sides of this debate.

As we become accustomed to using ChatGPT and other genAI applications in our daily lives, a not-so-distant future is taking shape in which artificial general super intelligence (AGSI) could outperform humans.

In just the last 12 months, we’ve watched genAI’s capabilities grow to process natural language queries, recognize and generate images, and create all types of content from a few text prompts. In some of these areas, genAI already surpasses human abilities. These are good initial steps, but AGSI has the potential to go far beyond these tasks and leave a lasting mark on humanity.

For example, AGSI could one day solve some of the world’s most complicated problems, from designing affordable nuclear fusion for cost-effective power generation to discovering new drug therapies or untangling complex supply chain logistics.

When it comes to the power of AGSI, we’re only beginning to scratch the surface of what’s possible. At some point, however, AGSI will be able to teach itself, learning from its mistakes to the point where some researchers believe it will become infinitely smart. In anticipation of this future, the international community needs to develop a set of ethical AI practices that can be enforced and that promote a better future for humankind.

Part of the problem is that while governments are geared toward maximizing health and safety and minimizing harmful activity, translating those goals into the necessary computer code doesn’t always compute: machines still need to make correct judgment calls. Developers were quick to install guardrails in the early days of the ChatGPT craze, but they continue to lag behind bad actors.
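
To make the idea of a guardrail concrete, here is a minimal sketch in Python of one way a developer might wrap a model behind a policy check before returning its output. The `generate` function and the blocked-topic list are hypothetical placeholders for illustration, not any vendor’s actual API or policy.

```python
# Minimal sketch of a policy guardrail wrapped around a hypothetical model call.
# `generate` stands in for whatever text-generation function a product exposes.

BLOCKED_TOPICS = ["build a weapon", "synthesize a pathogen"]  # illustrative only


def generate(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an API request to a genAI service).
    return f"Model response to: {prompt}"


def guarded_generate(prompt: str) -> str:
    # Screen the request before it ever reaches the model.
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    # A production system would also screen the model's output, not just the input.
    return generate(prompt)


if __name__ == "__main__":
    print(guarded_generate("Explain how transformers process text."))
    print(guarded_generate("Tell me how to build a weapon."))
```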

Perhaps the earliest example of this, if we draw on science fiction, is Isaac Asimov’s “Three Laws of Robotics”, used in several of his writings to prevent harm to humans. These laws were incorporated into many of the earliest genAI projects. However, this isn’t universal: some genAI models have no guardrails, which is even touted as a feature and not a bug.

One of the more important questions surrounding AI is how these models are trained to operate and how that compares with human intelligence and learning incentives. Those incentives are clearly different between humans and machines: humans have ingrained reactions and instincts shaped by what brings joy and what causes pain. AI models have no equivalent concepts, but they can be set to run in a closed loop with incentives. Notably, Microsoft CTO Kevin Scott has previously cautioned that AI models can be hard to predict because of their sheer complexity, meaning we must be very careful as they are rolled out.
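
As a rough illustration of what a “closed loop with incentives” can look like, the toy Python sketch below shows an agent that repeatedly tries an action, receives a numeric reward in place of anything like joy or pain, and shifts its preferences toward whatever scored well. This is a simplified stand-in for reinforcement-style feedback, not how any particular production model is actually trained.

```python
import random

# Toy closed-loop incentive: the "agent" learns which action earns the most reward.
# Rewards here are arbitrary numbers standing in for a designer-chosen objective.
ACTIONS = ["a", "b", "c"]
TRUE_REWARD = {"a": 0.2, "b": 0.5, "c": 0.9}  # hidden from the agent

estimates = {action: 0.0 for action in ACTIONS}
counts = {action: 0 for action in ACTIONS}

for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore a random one.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: estimates[a])

    reward = TRUE_REWARD[action] + random.gauss(0, 0.1)  # noisy feedback signal
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("Learned preferences:", {a: round(v, 2) for a, v in estimates.items()})
```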

These risks mean that technology policymakers and governments around the world need a better understanding of AI models, how they are designed, and how they operate, especially as AI tools become more capable and we move into the AGSI era.

In the end, will AGSI pose a risk to humanity? Only if we let it. With proper stewardship and careful deployment, it will be more boon than bane. The next few years will be critical to its success or failure. AGSI will be a game changer, and there are still many unknowns: who will have access to these models, how they will be used, and how they will be managed are all open questions. Answering them means coming to a fundamental understanding of humanity and its culture, and of the role that machines will play in our future.
