Should we be worried about the advancement of Artificial Intelligence?
This was originally posted as a "viewpoint" at the Aeon Ideas Beta website - Aeon Ideas
https://ideas.aeon.co/questions/should-we-be-worried-about-the-advancement-of-artificial-intelligence#viewpoint_2919
When we lack a precise definition of “intelligence”, we run the risk of jumbling run-of-the-mill machines with limited intelligence into a super-intelligent demon and fear that our microwave will one day take over our world.
In “The Art of War”, Sun Tzu stressed the importance of knowing thy enemy before going into battle. This includes getting a grip on your enemy’s strengths and weaknesses, and on the myths and misconceptions that surround him. To decide whether we should fear developments in Artificial Intelligence, we therefore need first to understand at least a bit of AI, cutting through the jargon not only in the techie journals and online fora but also in mainstream media. It also involves distinguishing between (a) AI-related developments - especially the products of scare-mongering and hyperbole - that can perhaps be ignored, and (b) those that really need to be feared.
(a) Things/events we can (maybe) ignore:
Confusion between weak and strong-AI (or mistaking the weak for the strong):
Weak AI is where AI is used for a narrow, specific purpose. Examples are intelligent personal assistants such as Siri, Google Now and Cortana. Strong AI refers to a machine that can possess or exhibit consciousness and mind, and that can apply its intelligence to problems not already anticipated in its algorithms. (Products of weak AI are therefore generally to be welcomed, as they make life easier for us.) When, however, we lack a precise definition of “intelligence”, we run the risk of jumbling run-of-the-mill machines with limited intelligence into a super-intelligent demon and fearing that our microwave will one day take over our world.
Is it apocalypse now?
Futurist Ray Kurzweil predicts that the technological singularity - the point at which machines become capable of redesigning themselves and trigger a runaway effect of generating ever more intelligent machines - will happen by 2045. Vernor Vinge thinks this may come to pass even sooner, by 2030. I am not sure how plausible this is, particularly the self-replication element. The Earth is around 4.5 billion years old, yet the first life forms took nearly another billion years to make their appearance. One argument for the origin of life is that self-replication could happen only when biochemical reactions crossed a certain complexity threshold. Even being generous with the effects of Moore’s law and other paradigm shifts (leaving aside the absence of biochemistry), it is hard to visualise machines gaining the ability to make copies of themselves within a matter of decades without an external agency. It is also not clear what existential imperatives machines would have to go forth and multiply, especially given that they are bound to have a much longer life expectancy than humans and other animals. Some critics fear that there could even be baser motives behind projecting such apocalyptic scenarios. Robert M Geraci, a professor of religious studies, contends that by capturing public attention, such “religious claims”, when promoted in “pop-science” media, avoid the requisite critical scrutiny and may encourage disproportionate funding and support for such technologies.
(b) Things/events we may need to fear:
AI amorality:
Google and other organisations currently employ “deep learning” (algorithms that try to model higher-order thinking) in developing AI systems such as self-driving cars. No doubt such technologies will vastly improve road safety and engender efficient transport systems. However, AI systems based on deep learning may not possess the abstract knowledge that is such an innate feature of human cognition. It may therefore be hard to predict how AI will deal with moral and ethical questions. However much we try to build human cognitive processes into machines, they will still lack what the Nobel laureate Gerald Edelman calls “inherited value systems”, the moral safeguards that evolution has embedded in the human brain. These value systems usually stop us from choosing crime over social cohesion, evil over good. Moreover, as cosmologist Max Tegmark points out, limited resources may prevent us from implementing a perfect algorithm in AI. This adds a great deal of uncertainty to machine behaviour when it confronts real-life situations.
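The point about what such a system does and does not “value” can be made concrete with a toy sketch. This is my own illustration, not how any real self-driving system works; the road network, the numbers and the “pedestrian risk” scores are all invented for the example. The planner’s objective counts only travel time, so anything left out of the objective simply does not exist for it.

```python
# Toy illustration (not any real system): a route planner whose objective is
# travel time only. Whatever we leave out of the objective - here, the risk an
# edge poses to pedestrians - plays no part in the machine's "decision".
import heapq

# Hypothetical road graph: edges are (destination, minutes, pedestrian_risk).
ROADS = {
    "depot":       [("school_zone", 2, 0.9), ("ring_road", 5, 0.1)],
    "school_zone": [("market", 2, 0.8)],
    "ring_road":   [("market", 4, 0.1)],
    "market":      [],
}

def plan(start, goal, risk_weight=0.0):
    """Dijkstra's algorithm over cost = minutes + risk_weight * pedestrian_risk."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, risk in ROADS[node]:
            heapq.heappush(frontier, (cost + minutes + risk_weight * risk, nxt, path + [nxt]))
    return None

# With no moral term the planner happily cuts through the school zone; only an
# explicitly engineered penalty changes its behaviour.
print(plan("depot", "market", risk_weight=0.0))   # (4.0, ['depot', 'school_zone', 'market'])
print(plan("depot", "market", risk_weight=10.0))  # (11.0, ['depot', 'ring_road', 'market'])
```

The “value system” here is whatever the designer remembered to write into the cost function, which is precisely why the absence of anything like Edelman’s inherited safeguards is worrying.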
The Human Condition:
We now have algorithms that may be able to classify works of art by style, genre, artist, etc. For example, such an algorithm can tell the difference between a Dutch Golden Age interior by Vermeer and a painting by Van Gogh.
Vincent van Gogh, The Bedroom (at Arles), 1888.
Johannes Vermeer, The Little Street, 1658.
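For readers curious how such classification works in practice, here is a minimal sketch of the general idea, not the specific algorithm the researchers used. It assumes a folder of painting images grouped into sub-folders by artist (e.g. “paintings/van_gogh”, “paintings/vermeer” - hypothetical names) and the freely available PyTorch, torchvision and scikit-learn libraries.

```python
# Sketch: extract features from painting images with a pre-trained
# convolutional network, then fit a simple classifier mapping features to
# artist labels. Folder layout "paintings/<artist>/<image>.jpg" is assumed.
import torch
from torch import nn
from torchvision import datasets, models, transforms
from sklearn.linear_model import LogisticRegression

# Standard ImageNet preprocessing expected by the pre-trained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("paintings", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=False)

# Pre-trained ResNet-18 with its final classification layer removed, so it
# outputs a 512-dimensional feature vector per image.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images))
        labels.append(targets)
features = torch.cat(features).numpy()
labels = torch.cat(labels).numpy()

# A plain logistic-regression classifier on top of the deep features is often
# enough to separate artists or styles reasonably well.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

Note that the network’s features were learned from ordinary photographs, not paintings; the fact that they transfer to brushwork and composition at all is part of what makes these classifiers interesting, and also a reminder that they capture statistical regularities rather than anything like an appreciation of art.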