
This Is What Happens When We Debate Ethics In Front Of Superintelligent AI


Is there a uniform set of moral laws? And if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.

In the film, the creators of an artificial general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow, a task that proves anything but easy.

Complex moral dilemmas often don’t have a clear-cut answer, and humans haven’t yet been able to translate ethics into a set of unambiguous rules. It’s questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.




So how are we going to teach the rules of ethics to artificial intelligence, and by doing so, avoid having AI ultimately do us great harm or even destroy us? This may seem like a theme from science fiction, yet it’s become a matter of mainstream debate in recent years.

OpenAI, for example, launched in late 2015 with a billion dollars in pledged funding to figure out how to build safe and beneficial AI. And earlier this year, AI experts convened in Asilomar, California to debate best practices for building beneficial AI.




Concerns have been voiced about AI turning out racist or sexist, reflecting human bias in ways we didn't intend. But an AI can only learn from the data available to it, and in many cases that data is all too human.

As much as the engineers in the film insist ethics can be “solved” and there must be a “definitive set of moral laws,” the philosopher argues that such a set of laws is impossible, because “ethics requires interpretation.”

There’s a sense of urgency to the conversation, and with good reason. All the while, the AI is listening and adjusting its algorithm. One of the most crucial, yet hardest to comprehend, features of computing and AI is the speed at which it’s improving, and the sense that progress will continue to accelerate. As one of the engineers in the film puts it:

“The intelligence explosion will be faster than we can imagine.”

Futurists like Ray Kurzweil predict this intelligence explosion will lead to the “singularity”: the moment when computers, advancing their own intelligence in an accelerating cycle of improvements, far surpass all human intelligence. The questions both in the film and among leading AI experts are: What will that moment look like for humanity? And what can we do to ensure artificial superintelligence benefits rather than harms us?

The engineers and philosopher in the film are mortified when the AI offers to “act just like humans have always acted.” The AI’s idea to instead learn only from history’s religious leaders is met with even more anxiety. If artificial intelligence is going to become smarter than us, we also want it to be morally better than us. Or as the philosopher in the film so concisely puts it:

“We can’t rely on humanity to provide a model for humanity. That goes without saying.”

If we’re unable to teach ethics to an AI, it will end up teaching itself. And what happens then? It just may decide we humans can’t handle the awesome power we’ve bestowed on it… and it will take off, or take over.




Vanessa Bates Ramirez

Vanessa is senior editor of Singularity Hub. She’s interested in renewable energy, health and medicine, international development, and countless other topics. When she’s not reading or writing you can usually find her outdoors, in water, or on a plane.

Image Credit: The Guardian/YouTube

