Should we be worried about the advancement of Artificial Intelligence?



In "The Art of War", Sun Tzu stressed the importance of knowing your enemy before going into battle: getting a grip on his strengths and weaknesses and on the myths and misconceptions that surround him. To decide whether we should fear developments in Artificial Intelligence, we therefore first need to understand at least a little about AI, cutting through the jargon not only in the techie journals and online fora but also in the mainstream media. It also means distinguishing between (a) AI-related developments, especially the products of scare-mongering and hyperbole, that can perhaps be ignored and (b) those that genuinely need to be feared.

(a) Things/events we can ignore (maybe):
Confusion between weak and strong AI (or mistaking the weak for the strong):
Weak AI is AI applied to a narrow, specific purpose; examples are intelligent personal assistants such as Siri, Google Now and Cortana. Strong AI refers to a machine that possesses or exhibits consciousness and mind, and that can apply its intelligence to problems not already anticipated in its algorithms. (Products of weak AI are therefore generally to be welcomed, as they make life easier for us.) When, however, we lack a precise definition of "intelligence", we run the risk of jumbling run-of-the-mill machines with limited intelligence into a super-intelligent demon and fearing that our microwave will one day take over the world.

Is it apocalypse now?
Futurist Ray Kurzweil predicts that the technological singularity, the point at which machines become capable of redesigning themselves and trigger a runaway effect generating ever more intelligent machines, will happen by 2045. Vernor Vinge thinks it may come to pass even sooner, by 2030. I am not sure how plausible this is, particularly the self-replication element. The Earth is around 4.5 billion years old, yet the first life forms took nearly another billion years to make their appearance. One argument about the origin of life is that self-replication could happen only once biochemical reactions crossed a certain complexity threshold. Even being generous with the effects of Moore's law and other paradigm shifts (leaving aside the absence of biochemistry), it is hard to visualise machines gaining the ability to make copies of themselves within a matter of decades without an external agency. It is also not clear what existential imperatives machines would have to go forth and multiply, especially given that they are bound to have much longer life expectancies than humans and other animals. Some critics fear there could even be baser motives behind projecting such apocalyptic scenarios. Robert M Geraci, a professor of religious studies, contends that such "religious claims", by capturing public attention when promoted in "pop-science" media, avoid the requisite critical scrutiny and may encourage disproportionate funding and support for these technologies.

(b) Things/events we may need to fear:
AI amorality:
Google and other organisations currently employ "deep learning" (algorithms that try to model higher-order thinking) in developing AI systems such as self-driving cars. No doubt such technologies will vastly improve road safety and engender more efficient transport systems. However, AI systems based on deep learning may not possess the abstract knowledge that is such an innate feature of human cognition, so it may be hard to predict how they will deal with moral and ethical questions. However much we try to build human cognitive processes into machines, they will still lack what the Nobel laureate Gerald Edelman calls "inherited value systems", the moral safeguards that evolution has embedded in the human brain. These value systems usually stop us from choosing crime over social cohesion, evil over good. Moreover, as cosmologist Max Tegmark points out, limited resources may preclude us from implementing a perfect algorithm in AI, which adds a great deal of uncertainty to machine behaviour when it confronts real-life situations.

The Human Condition:
We now have an algorithm that may be able to classify works of art by style, genre, artist and so on. For example, it can tell the difference between a painting by a seventeenth-century Dutch master such as Vermeer and one by Van Gogh.
[Images: Vincent van Gogh, The Bedroom (at Arles), 1888; Johannes Vermeer, The Little Street, 1658.]
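To make the idea concrete, here is a minimal sketch of how such a style classifier might be put together: a pretrained convolutional network is used as a fixed feature extractor, and a simple classifier is fitted on top of those features. This is an illustration only, not the particular algorithm referred to above; the choice of ResNet-18, the file names and the labels are all assumptions made for the example.

```python
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Pretrained ResNet-18 used as a fixed feature extractor (classification head removed).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

# Standard ImageNet preprocessing expected by the pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return a 512-dimensional feature vector for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return resnet(img).squeeze(0).numpy()

# Hypothetical labelled training images: 0 = Dutch Golden Age, 1 = Post-Impressionism.
paths = ["vermeer_little_street.jpg", "van_gogh_bedroom.jpg"]   # placeholder file names
labels = [0, 1]
features = [embed(p) for p in paths]

# A simple linear classifier fitted on the extracted features.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict([embed("unknown_painting.jpg")]))   # guesses the style label
```

Note that nothing in such a pipeline "understands" the paintings; it merely maps pixel statistics to labels, which is precisely the gap the rest of this section is concerned with.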
Are we then close to the day when machines can impose an Orwellian censorship on us? For instance, the work of Vermeer might be deemed acceptable while that of Edvard Munch becomes undesirable, because an AI may assume that the meaning behind the former's Delft streetscapes is clear as day, whereas it cannot relate to the spiritual angst that The Scream alludes to and that, in any case, only humans feel. This raises the question of where the border lies between logic and rationality on the one hand and emotions and feelings on the other. What is the difference between a robot that passes the Turing Test, acting and sounding like a human, and a member of the species Homo sapiens? Is a machine explaining away a rainbow as a prosaic interplay of photons, photoreceptors and neurons the same as me witnessing it as one of nature's most beautiful visions? As Roger Scruton points out, aesthetics, theology and music are not the same as science: they are concerned with understanding the human condition rather than explaining it. If we attempt to rebrand them as neuroscience, Scruton fears, we will end up only with "neuro-nonsense".
