“Listen! And understand. That Terminator is out there! It can’t be bargained with, can’t be reasoned with! It doesn’t feel pity or remorse or fear, and it absolutely will not stop. EVER! Until you are dead!” – Michael Biehn a.k.a. Sgt. Kyle Reese in The Terminator
The chilling lines set out above have attained iconic status. Taken from the Hollywood blockbuster “The Terminator”, starring an incredibly menacing Arnold Schwarzenegger, they set the scene for a time when mankind is taken over by, made subservient to, and finally dominated by the forces of Artificial Intelligence (“AI”). At the time of this writing, while we are not yet the helpless lab rats of and for Skynet, we are close to reaching an inflection point in our employ of and interaction with AI. From the notion of driverless cars to robotic surgeries, the crosswinds of AI are buffeting the world of science and technology.
The burgeoning rise of AI has invariably spawned a debate that has an unambiguous vertical divide. On one side of the deliberation stand eternal optimists of the likes of Ray Kurzweil, who are ready to bet the farm on the potential of AI. Kurzweil in fact is so fervent a believer in the tenets of AI that he vehemently champions a concept termed ‘Singularity’. In a bestseller bearing the same title as the concept, Kurzweil proposes that by the year 2045, “Singularity will help us multiply our effective intelligence a billion fold by merging with the intelligence we have created.” Opposing Kurzweil and his band of brothers are the likes of James Barrat. In his own bestseller, “Our Final Invention”, Barrat advocates extreme caution when it comes to the employ of AI. Barrat takes pains to adumbrate the fact that Artificial Superintelligence (“ASI”), instead of being the embodiment of an agglomeration of sentient notions, will be a scheming, sinister, surgical monster of intelligence having both the potential and the inclination to wipe humanity off the face of Planet Earth. A machine endowed with ASI and an inbuilt drive for self-perpetuation will be in a position to “repurpose the world’s molecules using nanotechnology”, thereby leading to “ecophagy” – eating the environment.
So is AI the proverbial bane or the quintessential boon? In his measured work, “A Human’s Guide to Machine Intelligence – How Algorithms Are Shaping Our Lives and How We Can Stay in Control”, Kartik Hosanagar – the John C. Hower Professor of Technology and Digital Business and a Professor of Marketing at the Wharton School of the University of Pennsylvania – attempts to answer this very question. From an entangled mesh of conflicts, confusions and conundrums, Mr. Hosanagar tries to ascertain whether AI posits an existential crisis or whether the concerns regarding machine learning represent an outlandish exaggeration. This Mr. Hosanagar proposes to do by broadly answering the following questions:
- What causes algorithms to behave in unpredictable, biased, and potentially harmful ways?
- If algorithms can be irrational and unpredictable, how do we decide when to use them? And
- How do we, as individuals who use algorithms in our personal or professional lives and as a society, shape the narrative of how algorithms impact us?
Beginning with the astonishing example of two contradictory and divergent outcomes originating from two identical endeavours by the same company, Mr. Hosanagar sets the platform for an informed discussion. In 2014, Microsoft launched XiaoIce, a chatbot, in China. The result was a phenomenal success, with users raving endlessly over their fabulous interactions with XiaoIce. Bolstered by this result, Microsoft launched Tay, an artificial-intelligence chatbot, on Twitter on March 23, 2016. Controversy followed almost immediately when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service a mere 16 hours after its launch.
As Mr. Hosanagar elucidates, “As machines become more intelligent and dynamic, they also become more unpredictable.”
As Mr. Hosanagar takes the trouble to educate his readers, Google’s self-driving car is based on algorithms whose rules are not programmed by humans directly but are instead “trained on a database of videos of humans driving”, allowing the car to arrive at “its own driving policy using machine learning.” A self-driving car that learns like a teenager in a driver’s education class may not inspire confidence, but, as Hosanagar observes, the algorithm has driven millions of miles in training, something almost no human has ever done.
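The idea of a “learned policy” can feel abstract, so here is a deliberately toy sketch of the principle: instead of hand-writing driving rules, the program simply imitates the action a human took in the most similar recorded situation. The data and the nearest-neighbour “policy” below are entirely hypothetical illustrations; real systems learn from video with deep neural networks, not a four-row lookup table.

```python
# Toy illustration (NOT Google's system): a "driving policy" learned from
# recorded human examples rather than from hand-written rules.
# Each hypothetical example maps a situation -> the action a human driver took.
# Situation = (distance to the car ahead in metres, current speed in km/h).

human_examples = [
    ((50.0, 60.0), "maintain"),
    ((10.0, 60.0), "brake"),
    ((80.0, 40.0), "accelerate"),
    ((5.0, 30.0), "brake"),
]

def learned_policy(situation):
    """Return the action taken in the most similar recorded situation
    (1-nearest-neighbour imitation of the human examples)."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(human_examples, key=lambda ex: squared_distance(ex[0], situation))
    return nearest[1]

# A new situation never seen verbatim: 8 m behind a car at 55 km/h.
# It most resembles the (10.0, 60.0) example, so the policy imitates it.
print(learned_policy((8.0, 55.0)))  # prints "brake"
```

The point of the sketch is the one Hosanagar makes: no line of this program says “brake when closer than X metres” – that behaviour emerges from the examples, which is also why such systems can surprise their creators when the examples are unrepresentative.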
So how can algorithms and their users co-exist with confidence instead of getting entangled in a queasy relationship characterized by mistrust and apprehension? Mr. Hosanagar banks on a solution proposed by the founding fathers of American Democracy and even the creators of Magna Carta. In his own words, “based on what we know about AI and its potential impacts on society, I believe there should be four main pillars of an algorithmic bill of rights, including a set of responsibilities for users of decision-making algorithms:
- First, those who use algorithms or who are impacted by decisions made by algorithms should have a right to a description of the data used to train them and details as to how that data was collected;
- Second, those who use algorithms or who are impacted by decisions made by algorithms should have a right to an explanation regarding the procedures used by the algorithms, expressed in terms simple enough for the average person to easily access and interpret. These first two pillars are both related to the general principle of transparency;
- Third, those who use algorithms or who are impacted by decisions made by algorithms should have some level of control over the way those algorithms work–that is, there should always be a feedback loop between the user and the algorithm; and
- Fourth, those who use algorithms or who are impacted by decisions made by algorithms should have the responsibility to be aware of the unanticipated consequences of automated decision making.”
Mr. Hosanagar also derives inspiration from the words of caution sounded by pioneers of scientific temper who warned the world about the perils of their own inventions. Just days before his death, Albert Einstein, whose brainchild hastened the consummation of the Manhattan Project, drafted the “Russell–Einstein Manifesto”, an eloquent call to scientists to act for the good of humanity. Supported by such other notable scientists and intellectuals as Max Born, Frédéric Joliot-Curie, Linus Pauling, and Bertrand Russell, the manifesto states:
There lies before us, if we choose, continual progress in happiness, knowledge, and wisdom. Shall we, instead, choose death, because we cannot forget our quarrels? We appeal as human beings to human beings: Remember your humanity, and forget the rest. If you can do so, the way opens to a new Paradise; if you cannot, there lies before you the risk of universal death.
While Mr. Hosanagar candidly acknowledges that abandoning AI would be akin to driving ourselves back to the Stone Age, there is no dispute that great care and caution ought to be exercised before embracing AI wholesale. Towards this endeavour, “A Human’s Guide to Machine Intelligence” is a thought-provoking and hands-on user’s guide to this at first seemingly esoteric sphere of dynamic and fluid technology.
(Written as part of the Blogchatter’s A2Z Challenge) – PART 1 ALPHABET A