“Human” Automation

Job Description of a “Picker” in an Amazon warehouse (via the FT):

The last group, the “pickers”, push trolleys around and pick out customers’ orders from the aisles. Amazon’s software calculates the most efficient walking route to collect all the items to fill a trolley, and then simply directs the worker from one shelf space to the next via instructions on the screen of the handheld satnav device… **“You’re sort of like a robot, but in human form,” said the Amazon manager. “It’s human automation, if you like.”** Amazon recently bought a robot company, but says it still expects to keep plenty of humans around because they are so much better at coping with the vast array of differently shaped products the company sells.

The bolded line touches upon a theme I have analysed many times, most recently in the essay ‘Technological Unemployment Amidst Stagnation’:

Many routine jobs that provided avenues of mass employment during the twentieth century have typically been jobs requiring the use of human sensory and motor skills, skills that have proven hardest to automate. This phenomenon is known as ‘Moravec’s Paradox’, named after the artificial intelligence researcher Hans Moravec, who observed that those skills we typically identify with intelligence (e.g. rational decision making) tend to be the skills that are easiest to replicate via an artificial intelligence (a combination of data and algorithms). But those skills that even a baby possesses, such as the ability to move around complex environments and pick up a variety of objects, tend to be the hardest to replicate in a robot. In a way, some of what separates us from the machines is what unites us with the animals.

The Importance of Forgetting and Limited Memory

My memory, sir, is like a garbage heap.

— Funes the Memorious, Jorge Luis Borges

One popular conception of how systems such as Watson can aid human beings is by acting as a kind of extension of the database of the human brain and giving us better and speedier algorithms. So a doctor could instantaneously access all the information and data that he cannot possibly analyse on his own. Implicit in this conception is an assumption that we are better off if we can process and store more information, and that our own limited, forgetful memory is not up to the task of dealing with complex domains such as medical diagnosis. And a robotic aid is surely so much better than memory-enhancing hormones or training to become a memory athlete. However, the assumption that more memory is better is unwarranted. As Gerd Gigerenzer notes, “the philosophical world in which perfect memory would flourish is a completely predictable world, with no uncertainty”, whereas human cognition is adapted to an unpredictable and uncertain environment.

The importance of limited memory in learning was highlighted in a study by cognitive scientist Jeffrey Elman. Elman demonstrated that under certain conditions, initial restrictions on the memory of an artificial neural network may improve its ability to comprehend the complex grammatical relationships that are key to learning a language. In Elman’s words:

one might have predicted that the more powerful the network, the greater its ability to learn a complex domain. However, this appears not always to be the case. If the domain is of sufficient complexity, and if there are abundant false solutions, then the opportunities for failure are great. What is required is some way to artificially constrain the solution space to just that region which contains the true solution. The initial memory limitations fill this role; they act as a filter on the input, and focus learning on just that subset of facts which lay the foundation for future success.

It is in this context that the limited memory capacity of infants has a positive impact by acting “like a protective veil, shielding the infant from stimuli which may either be irrelevant or require prior learning to be interpreted.”
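The flavour of Elman’s idea can be sketched in a few lines of code. To be clear about what is and is not in the original: Elman constrained the *network’s* recurrent memory (periodically wiping its context units) rather than the training data, and the function names below (`memory_window`, `truncate_for_phase`) are my own invention for illustration. The sketch simply shows the scheduling mechanism: early in training the learner can only “see” short stretches of each sequence, and the window widens in later phases.

```python
# Toy sketch of Elman's "start small" memory constraint (illustrative only;
# Elman limited the recurrent network's context units, not the input data).

def memory_window(phase, base=3, step=2):
    """Maximum context length available in a given training phase."""
    return base + step * phase

def truncate_for_phase(sequence, phase):
    """Split a sequence into chunks no longer than the phase's memory window,
    mimicking a recurrent memory that is periodically wiped."""
    w = memory_window(phase)
    return [sequence[i:i + w] for i in range(0, len(sequence), w)]

sentence = "the boy who the dogs chase runs".split()

# Phase 0: a 3-word memory -- only local word-to-word regularities are visible,
# which (on Elman's account) forces the learner onto simple structure first.
print(truncate_for_phase(sentence, 0))

# Phase 2: a 7-word memory -- the full long-distance dependency between
# "boy" and "runs" now fits inside a single chunk.
print(truncate_for_phase(sentence, 2))
```

The point of the constraint is exactly what Elman states in the quote above: the narrow early window acts as a filter on the input, focusing learning on the subset of facts that lays the foundation for later success.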

The most striking example of how perfect memory can malform human intelligence is the case of the Russian journalist and mnemonist Shereshevsky. While studying him, the neuropsychologist Alexander Luria found that Shereshevsky possessed a memory of almost unlimited capacity and durability. Luria tested Shereshevsky’s memory by asking him to repeat arbitrary series of numbers, words and syllables, a task that Shereshevsky completed without error no matter how long the series and no matter how long ago the series had been given to him. Indeed, he possessed a flawless recollection of series that Luria had given him as long as fifteen years earlier. In many respects, Shereshevsky’s mind resembled that of a computer. Luria notes that when asked to reproduce a particular word in a series, Shereshevsky “would pause for a minute, as though searching for the word, but immediately after would be able to answer my questions and generally made no mistakes”, as if he were searching through a vast database with an incredibly accurate and efficient algorithm. Perfect memory, however, carried a high cost. Shereshevsky struggled to understand the meaning of simple passages of text (especially poetry or metaphors), describing the effort as “a struggle against images that kept rising to the surface of his mind.” He found it almost impossible to extract any true meaning from them or to be truly aware of anything at an abstract level. In this respect, Shereshevsky resembles Jorge Luis Borges’s famous character ‘Funes the Memorious’, whose prodigious memory meant that he was “incapable of ideas of a general, Platonic sort”.