Artificial intelligence (A.I.) pioneer Geoffrey Hinton helped create a groundbreaking AI technology in 2012 that now underpins AI programs at some of the world's biggest companies. But Hinton has joined a growing circle of critics warning about the risks of generative AI, the technology that powers popular chatbots like ChatGPT. He has officially left his position at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he can speak freely about those risks.
Hinton says he has some regrets about his life's work, but consoles himself with the thought that if he had not done it, someone else would have. His journey from AI trailblazer to doomsayer marks a pivotal moment for the tech industry, which stands at one of its most significant turning points in decades. Industry leaders believe the new AI systems could prove as important as the web browser, introduced in the early 1990s, with potential in areas such as medical research and education. But many in the industry fear they are releasing something dangerous into the world.
Generative AI can already be a tool for misinformation, and in the future it could become a threat to jobs and, ultimately, to humanity.
In March, OpenAI, a San Francisco startup, released a new version of ChatGPT, its popular chatbot powered by artificial intelligence. In response, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new AI systems, warning that the technology poses a serious threat to society and humanity.
Shortly after, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a society with a 40-year history, released their own letter warning about the risks of AI. The group included Eric Horvitz, the chief scientific officer at Microsoft, which has deployed OpenAI’s technology across various products, including Bing, its search engine.
Dr. Geoffrey Hinton, known as the "Godfather of A.I.," did not sign either open letter because he did not want to publicly criticize Google or other companies while he was still employed. Last month, however, he resigned from his job at Google and spoke with Sundar Pichai, the chief executive of Alphabet, Google's parent company, about his concerns. He did not reveal the details of that conversation.
Google’s chief scientist, Jeff Dean, reassured the public that the company is committed to a responsible approach to AI and is constantly learning to understand emerging risks while also innovating boldly.
Dr. Hinton is a 75-year-old British academic who has dedicated his career to the development and use of AI. In 1972, as a graduate student at the University of Edinburgh, he began working on neural networks, mathematical systems that learn skills by analyzing data. He has been committed to the idea ever since.
In the 1980s, Dr. Hinton worked as a professor of computer science at Carnegie Mellon University but later moved to Canada because he did not want to accept Pentagon funding for his research. Dr. Hinton is firmly against using AI in warfare, particularly in the form of "robot soldiers." In 2012, Dr. Hinton, along with his students Ilya Sutskever and Alex Krizhevsky, created a neural network that could learn to identify common objects by analyzing thousands of photos.
Dr. Hinton's startup, which he founded with those two students, was acquired by Google for $44 million. Their technology became the basis for a number of highly advanced systems, including the popular chatbots ChatGPT and Google Bard. One of those students, Mr. Sutskever, went on to become the chief scientist at OpenAI. In 2018, Dr. Hinton and his colleagues were awarded the Turing Award, considered the computing equivalent of the Nobel Prize.
Dr. Hinton once believed that such systems were a poor imitation of how humans process language. His views changed as Google and OpenAI introduced more capable A.I. systems. While he still believes the human brain handles language better in some respects, he now thinks these systems have surpassed human intelligence in others.
As the technology advances, he argues, it becomes more dangerous. Dr. Hinton believes A.I. is remarkably powerful, and that its risks must be dealt with.
Until last year, Dr. Hinton said, Google was cautious with the technology, making sure it did not release anything that could cause harm. But now that Microsoft has incorporated a chatbot into its Bing search engine, Google is competing fiercely to deploy similar technology. Dr. Hinton believes this competition between the tech giants may be impossible to stop.
One of his main concerns is the spread of false information on the internet: photos, videos, and text that cannot be easily verified. He fears people will no longer be able to distinguish truth from falsehood.
Dr. Hinton is also worried that A.I. technologies will gradually replace workers in roles such as paralegal, personal assistant, and translator, and in other jobs built on repetitive tasks. While this may take away the drudgery of such work, it could also lead to a significant loss of jobs.
Looking into the future, Dr. Hinton is concerned that A.I. technology may threaten humanity as it continues to learn unexpected behaviors from the vast amounts of data it analyzes. He is worried that individuals and companies may allow A.I. systems to generate their own computer code and even run that code on their own, leading to the development of truly autonomous weapons or “killer robots”. He admits that he used to think this was a far-off scenario, but now he sees it as a possibility much closer than he initially believed.