Ray Kurzweil Rebuts Calls To Pause AI Research


Ray Kurzweil, a noted futurist and director of engineering at Google, published a rebuttal to the letter calling for a pause in AI research, offering reasons why the proposal is impractical and would deprive the world of medical breakthroughs and innovations that profoundly benefit humanity.

International Letter to Pause AI Development

An open letter signed by scientists and celebrities from around the world (published on FutureOfLife.org) called for a complete pause on developing AI that is more powerful than GPT-4, OpenAI's latest model.

In addition to a pause on further AI development, the letter also called for the development of safety protocols overseen by independent third-party experts.

Some of the points that the authors of the open letter make:

  • AI poses a profound risk
  • AI development should only proceed once beneficial applications of the technology are enumerated and justified
  • AI should only proceed if “we” (the thousands of signatories to the letter) are confident that AI risks are manageable.
  • AI developers are called on to work with policymakers to develop AI governance systems consisting of regulatory bodies.
  • Development of watermarking technologies to help identify AI-created content and control the spread of the technology.
  • A system for assigning liability for harms created by AI
  • Creation of institutions to deal with the disruptions caused by AI technology

The letter seems to come from a viewpoint that AI technology is centralized and can be paused by the few organizations in control of the technology. But AI is not exclusively in the hands of governments, research institutes and corporations.

AI is, at this point, an open-source and decentralized technology, developed collaboratively by thousands of individuals around the world.

Ray Kurzweil: Futurist, Author and Director of Engineering at Google

Ray Kurzweil has been designing software and machines focused on artificial intelligence since the 1960s. He has written many popular books on the topic and is famous for making predictions about the future that tend to be correct.

Of the 147 predictions he made about life in 2009, only three, roughly 2%, were wrong.

Among his predictions from the 1990s was that physical media, such as books, would lose popularity as they became digitized. At a time when computers were large and bulky, he predicted that by 2009 computers would be small enough to wear, which turned out to be true (How My Predictions Are Faring – 2010 PDF).

Ray Kurzweil’s recent predictions focus on all the good that AI will bring, particularly on medical and science breakthroughs.

Kurzweil is also focused on the ethics of AI.

In 2017 he was one of the participants (along with OpenAI CEO Sam Altman) who crafted the Asilomar AI Principles, guidelines for the safe and ethical development of artificial intelligence that were also published on the Future of Life website.

Among the principles he helped create:

  • “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
  • Investments in AI should be accompanied by funding for research on ensuring its beneficial use
  • There should be constructive and healthy exchange between AI researchers and policy-makers.
  • Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
  • Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

Kurzweil’s response to the open letter calling for a pause in AI development draws on a lifetime of innovating technology with a focus on the good it can do for humanity and the natural world.

His response focused on three main points:

  • The call for a pause is too vague to be practical
  • All nations must agree to the pause or the goals are defeated from the start
  • A pause in development ignores the benefits such as identifying cures for diseases.

Too Vague to be Practical

His first point is that the letter is too vague because it calls for a pause on AI that is more powerful than GPT-4, which assumes that GPT-4 is the only kind of AI.

Kurzweil wrote:

“Regarding the open letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical.”

Nations Will Opt Out of Pause

His second point is that the demands outlined in the letter can work only if all researchers around the world voluntarily cooperate.

Any nation that refuses to sign on would gain an advantage, which is probably what would happen.

He writes:

“And the proposal faces a serious coordination problem: those that agree to a pause may fall far behind corporations or nations that disagree.”

This point makes it clear that the goal of a complete pause is not viable: nations won’t cede an advantage, and AI is already democratized and open source, in the hands of individuals around the world.

AI Brings Significant Benefits to Humanity

There have been editorials dismissing AI as having little benefit to society, arguing that increased worker productivity is not enough to justify the risks the authors fear.

Kurzweil’s final point is that the open letter seeking a pause in AI development completely ignores all of the good that AI can do.

He explains:

“There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields.

…more nuance is needed if we wish to unlock AI’s profound advantages to health and productivity while avoiding the real perils.”

Perils, Fear of the Unknown and Benefits to Humanity

Kurzweil makes good points about how AI can benefit society. His point that there is no way to actually pause AI is sound.

His explanation of AI emphasizes the profound benefits to mankind that are inherent in AI.

Could it be that OpenAI’s implementation of AI as a chatbot trivializes AI and overshadows the benefit to humanity, while simultaneously frightening people who don’t understand how generative AI works?

Featured image by Shutterstock/Iurii Motov


