
Is Artificial Intelligence Good?

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential to help civilization flourish like never before – as long as we manage to keep the technology beneficial.



You’ve read it in the papers. You’ve experienced it in life. Machines are taking over, and they are doing it fast.

Siri turned 9 on October 4 (go ahead, ask her if you don’t believe me). Tesla’s first Autopilot program is also 9, and Alexa is less than 5 years old.

Although these AI-powered technologies haven’t graduated past their first decade, they seem to be running our lives. And they are apparently doing it better than we can.

WHAT IS AI?

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
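
To make the distinction concrete, here is a minimal sketch of a narrow AI in Python. The article names no tools, so scikit-learn and its bundled handwritten-digit dataset are assumptions chosen purely for illustration: the model learns exactly one task and is competent at nothing else.

    # A minimal sketch of "narrow AI": a model trained to do exactly one
    # task (recognizing 8x8 handwritten digits) and nothing else.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()  # small digit dataset bundled with scikit-learn
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0
    )

    model = SVC(gamma=0.001)  # a classic single-task classifier
    model.fit(X_train, y_train)
    print("digit accuracy:", model.score(X_test, y_test))

    # The model may rival a human at this one narrow task, but it cannot
    # search the web, drive a car, or even read digits at a different
    # resolution: its competence ends at the edge of its training task.

However well this classifier scores, nothing in it transfers to any other task; that confinement is what separates today’s narrow AI from the general AI researchers are still pursuing.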

ARTIFICIAL INTELLIGENCE: GOOD OR BAD?

Artificial intelligence needs to be regulated, Google chief executive Sundar Pichai argued in an opinion piece published Monday in the Financial Times, the British business newspaper. Pichai is now also CEO of Alphabet, Google's parent company, so his words are not easily dismissed.

Sundar Pichai describes artificial intelligence as one of the most promising technologies, but he does not gloss over the potential risks of its unregulated use. He points out how new technologies have created new problems in the past: the Internet made it easy for anyone to connect and exchange information from anywhere, and just as easy to spread misinformation.



On the question of regulating artificial intelligence, Sundar Pichai noted that the European Union and the United States have already started drafting policies. He believes international coordination is needed to establish global standards.

Meanwhile, Ren Zhengfei, chief executive of the Chinese telecommunications manufacturer Huawei, struck a different note on another stage. "I think we can use the new technology for the welfare of mankind," he said last Tuesday at the World Economic Forum's annual meeting in Davos, Switzerland. "Everyone wants a better life, not distress."

Ren Zhengfei argued that people's fear of machine intelligence is misplaced. People were once afraid of the atomic bomb; now their fears have shifted to artificial intelligence. AI, he said, would not be as harmful as a nuclear bomb.

In the session titled A Future Shaped by a Technology Arms Race, Ren Zhengfei was joined by the historian and author Yuval Noah Harari. Harari, however, was not so relaxed. He said the competitive investment in artificial intelligence technology, especially between the US and China, should concern everyone.

In some ways, this could be a repeat of the nineteenth-century Industrial Revolution, when a handful of powers had the opportunity to extend their economic and political domination across the world. Conquest, Harari noted, looks different today: “If you get all the information about a country, you don't have to send troops anymore.” Source: Mashable, World Economic Forum


HOW CAN AI BE DANGEROUS?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

01) The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.

02) The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for (see the toy sketch below). If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
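
The airport example reduces to a toy optimization problem. Everything below, from the routes to the penalty terms, is invented for illustration; the point is only that an optimizer pursues exactly the objective it is given, and the misbehavior disappears only once the things we actually care about are written into that objective.

    # Toy sketch of goal misalignment: an "obedient" planner optimizes
    # exactly the objective it is given, not the one we meant.
    routes = [
        # (name, minutes, discomfort penalty, lawbreaking penalty)
        ("highway at the speed limit", 30, 0, 0),
        ("back roads, scenic", 45, 0, 0),
        ("highway at 150 mph, weaving through traffic", 12, 50, 100),
    ]

    def literal_objective(route):
        """What we asked for: minimize travel time and nothing else."""
        name, minutes, discomfort, lawbreaking = route
        return minutes

    def intended_objective(route):
        """What we meant: fast, but also comfortable and legal."""
        name, minutes, discomfort, lawbreaking = route
        return minutes + discomfort + lawbreaking

    # The literal objective picks the helicopter-chase route...
    print(min(routes, key=literal_objective)[0])   # highway at 150 mph, weaving
    # ...the intended objective picks the route we actually wanted.
    print(min(routes, key=intended_objective)[0])  # highway at the speed limit

The hard part, of course, is that for a real system we cannot enumerate every penalty term in advance; that is what makes alignment "strikingly difficult" rather than a bookkeeping exercise.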



Please email us with any suggestions.
Our e-mail: info@itzone360.net
