Since the dawn of human ingenuity, people have devised ever more cunning tools to cope with work that is dangerous, boring, onerous, or just plain nasty. That compulsion has culminated in robotics – the science of conferring various human capabilities on machines.
A The modern world is increasingly populated by quasi-intelligent gizmos whose presence we barely notice but whose creeping ubiquity has removed much human drudgery. Our factories hum to the rhythm of robot assembly arms. Our banking is done at automated teller terminals that thank us with rote politeness for the transaction. Our subway trains are controlled by tireless robo-drivers. Our mine shafts are dug by automated moles, and our nuclear accidents – such as those at Three Mile Island and Chernobyl – are cleaned up by robotic muckers fit to withstand radiation.
Such is the scope of uses envisioned by Karel Capek, the Czech playwright who coined the term ‘robot’ in 1920 (the word ‘robota’ means ‘forced labor’ in Czech). As progress accelerates, the experimental becomes the exploitable at record pace.
B Other innovations promise to extend the abilities of human operators. Thanks to the incessant miniaturisation of electronics and micromechanics, there are already robot systems that can perform some kinds of brain and bone surgery with submillimeter accuracy – far greater precision than highly skilled physicians can achieve with their hands alone. At the same time, techniques of long-distance control will keep people even farther from hazard. In 1994 a ten-foot-tall NASA robotic explorer called Dante, with video-camera eyes and spiderlike legs, scrambled over the menacing rim of an Alaskan volcano while technicians 2,000 miles away in California watched the scene by satellite and controlled Dante’s descent.
C But if robots are to reach the next stage of labour-saving utility, they will have to operate with less human supervision and be able to make at least a few decisions for themselves – goals that pose a formidable challenge. ‘While we know how to tell a robot to handle a specific error,’ says one expert, ‘we can’t yet give a robot enough common sense to reliably interact with a dynamic world.’ Indeed, the quest for true artificial intelligence (AI) has produced very mixed results. Despite a spasm of initial optimism in the 1960s and 1970s, when it appeared that transistor circuits and microprocessors might be able to perform in the same way as the human brain by the 21st century, researchers lately have extended their forecasts by decades if not centuries.
D What they found, in attempting to model thought, is that the human brain’s roughly one hundred billion neurons are much more talented – and human perception far more complicated – than previously imagined. They have built robots that can recognise the misalignment of a machine panel by a fraction of a millimeter in a controlled factory environment. But the human mind can glimpse a rapidly changing scene and immediately disregard the 98 per cent that is irrelevant, instantaneously focusing on the woodchuck at the side of a winding forest road or the single suspicious face in a tumultuous crowd. The most advanced computer systems on Earth can’t approach that kind of ability, and neuroscientists still don’t know quite how we do it.
E Nonetheless, as information theorists, neuroscientists, and computer experts pool their talents, they are finding ways to get some lifelike intelligence from robots. One method renounces the linear, logical structure of conventional electronic circuits in favour of the messy, ad hoc arrangement of a real brain’s neurons. These ‘neural networks’ do not have to be programmed. They can ‘teach’ themselves by a system of feedback signals that reinforce electrical pathways that produced correct responses and, conversely, wipe out connections that produced errors. Eventually the net wires itself into a system that can pronounce certain words or distinguish certain shapes.
F In other areas researchers are struggling to fashion a more natural relationship between people and robots in the expectation that some day machines will take on some tasks now done by humans in, say, nursing homes. This is particularly important in Japan, where the percentage of elderly citizens is rapidly increasing. So researchers at the Science University of Tokyo have created a ‘face robot’ – a life-size, soft plastic model of a female head with a video camera embedded in the left eye – as a prototype. The researchers’ goal is to create robots that people feel comfortable around. They are concentrating on the face because they believe facial expressions are the most important way to transfer emotional messages. We read those messages by interpreting expressions to decide whether a person is happy, frightened, angry, or nervous. Thus the Japanese robot is designed to detect emotions in the person it is ‘looking at’ by sensing changes in the spatial arrangement of the person’s eyes, nose, eyebrows, and mouth. It compares those configurations with a database of standard facial expressions and guesses the emotion. The robot then uses an ensemble of tiny pressure pads to adjust its plastic face into an appropriate emotional response.
G Other labs are taking a different approach, one that doesn’t try to mimic human intelligence or emotions. Just as computer design has moved away from one central mainframe in favour of myriad individual workstations – and single processors have been replaced by arrays of smaller units that break a big problem into parts that are solved simultaneously – many experts are now investigating whether swarms of semi-smart robots can generate a collective intelligence that is greater than the sum of its parts. That’s what beehives and ant colonies do, and several teams are betting that legions of mini-critters working together like an ant colony could be sent to explore the climate of planets or to inspect pipes in dangerous industrial situations.
Questions 14-19
Reading Passage 2 has seven paragraphs, A-G. Choose the most suitable heading for each paragraph from the list of headings below.
List of Headings
i Some success has resulted from observing how the brain functions.
ii Are we expecting too much from one robot?
iii Scientists are examining the humanistic possibilities.
iv There are judgements that robots cannot make.
v Has the power of robots become too great?
vi Human skills have been heightened with the help of robotics.
vii There are some things we prefer the brain to control.
viii Robots have quietly infiltrated our lives.
ix Original predictions have been revised.
x Another approach meets the same result.
14. Paragraph A
15. Paragraph B
16. Paragraph C
17. Paragraph D
18. Paragraph E
19. Paragraph F
Questions 20-24
Do the following statements agree with the views of the writer in Reading Passage 2? In boxes 20-24 on your answer sheet, write
YES if the statement agrees with the views of the writer
NO if the statement contradicts the views of the writer
NOT GIVEN if it is impossible to say what the writer thinks about this
20. Karel Capek successfully predicted our current uses for robots.
21. Lives were saved by the NASA robot, Dante.
22. Robots are able to make fine visual judgements.
23. The internal workings of the brain can be replicated by robots.
24. The Japanese have the most advanced robot systems.
Questions 25-27
Complete the summary below with words taken from paragraph F. Use NO MORE THAN THREE WORDS for each answer.
The prototype of the Japanese ‘face robot’ observes humans through a (25) …………………… which is planted in its head. It then refers to a (26) …………………… of typical ‘looks’ that the human face can have, to decide what emotion the person is feeling. To respond to this expression, the robot alters its own expression using a number of (27) ……………………