
Artificial Intelligence: A Guide For Thinking Humans – Melanie Mitchell

by Venky


René Descartes, the French philosopher, mathematician and scientist, in elucidating his famous theory of dualism, expounded that there exist two kinds of substance: mental and physical. The mind can exist outside of the body, and the body cannot think. Popularly known as mind-body dualism or Cartesian Duality (after the theory’s proponent), the central tenet of this philosophy is that the immaterial mind and the material body, while being ontologically distinct substances, causally interact. The British philosopher Gilbert Ryle, in describing Descartes’ mind-body dualism, introduced the now immortal phrase “ghost in the machine” to highlight the view of Descartes and others that mental and physical activity occur simultaneously but separately.

Ray Kurzweil, the high priest of futurism and a Director of Engineering at Google, takes Cartesian Duality to a higher plane with his public advocacy of concepts such as the Technological Singularity and radical life extension. Kurzweil argues that giant leaps in the domain of Artificial Intelligence will bring mankind radical life extension by 2045. Skeptics, on the other hand, bristle at the very notion, dismissing such “Kurzweilian” aspirations as fantasies that put to shame even the most ludicrous of pipe dreams.

The advances in the field of AI have spawned a seminal debate with a vertical cleave. On one side of the chasm stand undying optimists such as Ray Kurzweil, predicting a new epoch in the history of mankind; on the other side stand pessimists and naysayers such as Nick Bostrom, James Barrat, and even the likes of Bill Gates, Elon Musk and Stephen Hawking, who advocate extreme caution and warn of existential risks. So where does the truth lie? Melanie Mitchell, a computer science professor at Portland State University, takes this conundrum head on in her eminently readable book, “Artificial Intelligence: A Guide for Thinking Humans.” A measured book that abhors mind-numbing technicalities and arcane elaborations, Ms. Mitchell’s work embodies a matter-of-fact narrative that seeks to demystify the future of both AI and its users.

The book begins with a meeting organized by Blaise Agüera y Arcas, a computer scientist leading Google’s foray into machine intelligence. At the meeting, Douglas Hofstadter, the genius AI pioneer and author of the Pulitzer Prize-winning book “Gödel, Escher, Bach: an Eternal Golden Braid” (or just “gee-ee-bee”), expresses downright alarm at the notion of the Singularity touted by Kurzweil: “If this actually happens, we will be superseded. We will be relics. We will be left in the dust.” A former research assistant of Hofstadter’s, Ms. Mitchell is surprised to hear such an exclamation from her mentor. It spurs her on to assess the impact of AI in an unbiased vein.

Tracing the modest trajectory of AI’s beginnings, Ms. Mitchell informs her reader about a small workshop at Dartmouth in 1956 where the seeds of AI were first sown. John McCarthy, universally acknowledged as the father of AI and the coiner of the term itself, persuaded Marvin Minsky, a fellow student at Princeton, Claude Shannon, the inventor of information theory, and Nathaniel Rochester, a pioneering electrical engineer, to help him organize “a 2 month, 10-man study of artificial intelligence to be carried out during the summer of 1956.” What began as a muted endeavor has now morphed into a creature that is revered and reviled in equal measure. Ms. Mitchell lends a technical element to the book by dwelling on concepts such as symbolic and sub-symbolic AI, and she offers a fascinating insight into the myriad ways in which intrepid pioneers and computer experts attempted to distill the element of “learning” into a computer, thereby bestowing it with immense scalability and computational skill.

For example, in a technique termed back-propagation, the error observed at the output units is “propagated” backward through the network so as to assign a proper share of the blame for that error to each of the weights. This allows back-propagation to determine how much to change each weight in order to reduce the error. The beauty of Ms. Mitchell’s explanations lies in their simplicity: she breaks down seemingly esoteric concepts into small chunks of ‘learnable’ elements.
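To make the idea concrete, here is a minimal sketch of back-propagation in Python. It is not taken from the book: the tiny XOR task, the two-layer network, the layer sizes, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets (an illustrative toy task, not from the book).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases: 2 inputs -> 4 hidden -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(10000):
    # Forward pass: compute each layer's activations.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network outputs

    # Error observed at the output units.
    err = out - y

    # Backward pass: "propagate" the blame for the error backward,
    # assigning a share of it to each unit and hence to each weight.
    delta_out = err * out * (1 - out)             # blame at the output
    delta_h = (delta_out @ W2.T) * h * (1 - h)    # blame at hidden units

    # Nudge every weight in proportion to its share of the blame.
    W2 -= lr * h.T @ delta_out;  b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h;    b1 -= lr * delta_h.sum(axis=0)

# For most random seeds this converges to outputs close to [0, 1, 1, 0].
print(np.round(out.ravel(), 2))
```

The backward pass is the whole trick: each weight receives exactly the share of the output error it was responsible for, which tells the algorithm how much, and in which direction, to adjust it.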

It is these kinds of techniques that enabled IBM’s Deep Blue to defeat World Chess Champion Garry Kasparov, and IBM’s Watson to triumph over Jeopardy! champions Ken Jennings and Brad Rutter. So, with such stupendous advances, is the time when Artificial Intelligence surpasses human intelligence already upon us? Ms. Mitchell does not think so. Taking recourse to the “argument from consciousness” that Alan Turing examined in his famous paper, Ms. Mitchell brings to our attention the words of the neurologist Geoffrey Jefferson, as quoted by Turing:

“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

Ms. Mitchell also highlights – in a somewhat metaphysical manner – the inherent limitations of a computer in gainfully engaging with the attributes of abstraction and analogy. In the words of her own mentor Hofstadter and his coauthor, the psychologist Emmanuel Sander, “Without concepts there can be no thought, and without analogies there can be no concepts.” If computers are bereft of common sense, it is not for want of their users trying to ‘embed’ some into them. A famous case in point is Douglas Lenat’s Cyc project, which ultimately turned out to be a bold, albeit futile, exercise.

A computer’s inherent limitation in thinking like a human being is also demonstrated by Winograd schemas: sentence pairs designed precisely to be easy for humans but tricky for computers. A classic example asks what “they” refers to in “The city councilmen refused the demonstrators a permit because they feared violence” versus “… because they advocated violence” – trivial for a human, but demanding real-world knowledge of a machine. Three AI researchers, Hector Levesque, Ernest Davis, and Leora Morgenstern, proposed using a large set of Winograd schemas as an alternative to the Turing test, arguing that, unlike the Turing test, a test consisting of Winograd schemas forestalls the possibility of a machine giving the correct answer without actually understanding anything about the sentence. The three researchers hypothesized (in notably cautious language) that “with a very high probability, anything that answers correctly is engaging in behaviour that we would say shows thinking in people.”

Finally, Ms. Mitchell concludes by declaring that machines are as yet incapable of generalizing, understanding cause and effect, or transferring knowledge from situation to situation – skills human beings begin to develop in infancy. Thus, while computers won’t dethrone man anytime soon, goading them on to bring such an endeavor to fruition might not be a wise idea after all.


3 comments

crispina kemp October 30, 2019 - 2:57 am

I’d say the time to really get worried is when they’ve made a machine that hallucinates and has delusions.

venkyninja1976 October 30, 2019 - 8:15 am

Ha ha ha! True that.

crispina kemp October 30, 2019 - 2:10 pm

🙂
