Silicon Consciousness


by Aria Ratmandanu

        As we’ve seen, human consciousness is an imperfect patchwork of different abilities developed over millions of years of evolution. Given information about their physical and social world, robots may be able to create simulations similar (or in some respects, even superior) to ours, but silicon consciousness might differ from ours in two key areas: emotions and goals.

        Historically, AI researchers ignored the problem of emotions, considering it a secondary issue. The goal was to create a robot that was logical and rational, not scatterbrained and impulsive. Hence, the science fiction of the 1950s and ’60s stressed robots (and humanoids like Spock on Star Trek) that had perfect, logical brains.

        We saw with the uncanny valley that robots will have to look a certain way if they’re to enter our homes, but some people argue that robots must also have emotions so that we can bond with, take care of, and interact productively with them. In other words, robots will need Level II consciousness. To accomplish this, robots will first have to recognize the full spectrum of human emotions. By analyzing subtle facial movements of the eyebrows, eyelids, lips, cheeks, etc., a robot will be able to identify the emotional state of a human, such as its owner. One institution that has excelled in creating robots that recognize and mimic emotion is the MIT Media Laboratory. 

        Two intriguing robots from the lab are called Huggable and Nexi. Their creator, Dr. Cynthia Breazeal, explained that these robots were built with specific goals. Huggable is a cute, teddy bear-like robot designed to bond with children. It can identify a child's emotions; it has video cameras for eyes, a speaker for its mouth, and sensors in its skin (so it can tell when it is being tickled, poked, or hugged). Eventually, a robot like this might become a tutor, babysitter, nurse's aide, or playmate.

        Nexi, on the other hand, can bond with adults. It looks a little like the Pillsbury Doughboy. It has a round, puffy, friendly face, with large eyes that can roll around. It has already been tested in a nursing home, and the elderly patients all loved it. Once the seniors got accustomed to Nexi, they would kiss it, talk to it, and miss it when it had to leave.




Figure 1. Huggable (top) and Nexi (bottom), two robots built at the MIT Media Laboratory that were explicitly designed to interact with humans via emotions.


        Dr. Breazeal told us that she designed Huggable and Nexi because she was not satisfied with earlier robots, which looked like tin cans full of wires, gears, and motors. To design a robot that could interact emotionally with people, she needed to figure out how to get it to perform and bond the way we do. She also wanted robots that weren't stuck on a laboratory shelf but could venture out into the real world. The former director of MIT's Media Lab, Dr. Frank Moss, says, "That is why Breazeal decided in 2004 that it was time to create a new generation of social robots that could live anywhere: homes, schools, hospitals, elder care facilities, and so on."

        At Waseda University in Japan, scientists are working on a robot with upper-body motions that represent emotions (fear, anger, surprise, joy, disgust, sadness) and that can hear, smell, see, and touch. It has been programmed to carry out simple goals, such as satisfying its hunger for energy and avoiding dangerous situations. Their goal is to integrate the senses with the emotions, so that the robot acts appropriately in different situations.
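
To make the idea of such "drives" concrete, here is a minimal sketch of how hunger for energy and fear of danger might compete to select a behavior. The class, variable names, and thresholds are illustrative assumptions, not the Waseda team's actual design:

```python
# Sketch of a drive-based agent: internal drives (hunger for energy,
# fear of danger) compete, and the most urgent one selects a behavior.
# All names and thresholds are illustrative assumptions.

class DriveBasedAgent:
    def __init__(self):
        self.energy = 1.0             # 1.0 = fully charged, 0.0 = empty
        self.perceived_danger = 0.0   # 0.0 = safe, 1.0 = imminent threat

    def sense(self, energy_level, danger_level):
        """Update internal state from (simulated) sensor readings."""
        self.energy = energy_level
        self.perceived_danger = danger_level

    def select_behavior(self):
        """Pick the behavior serving the most urgent drive."""
        hunger = 1.0 - self.energy            # urgency grows as energy drops
        if self.perceived_danger > max(hunger, 0.5):
            return "flee"                     # safety outranks feeding
        if hunger > 0.3:
            return "seek_charging_station"
        return "explore"

agent = DriveBasedAgent()
agent.sense(energy_level=0.2, danger_level=0.1)
print(agent.select_behavior())  # -> "seek_charging_station"
```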

       Not to be outdone, the European Commission is funding an ongoing project called Feelix Growing, which seeks to promote artificial intelligence research in the UK, France, Switzerland, Greece, and Denmark.

Emotional Robots


        What’s more, AI researchers have begun to realize that emotions may be a key to consciousness. Neuroscientists like Dr. Antonio Damasio have found that when the link between the prefrontal lobe (which governs rational thought) and the emotional centers (e.g., the limbic system) is damaged, patients cannot make value judgments. They are paralyzed when making the simplest of decisions (what things to buy, when to set an appointment, which color pen to use) because everything has the same value to them. Hence, emotions are not a luxury; they are absolutely essential, and without them a robot will have difficulty determining what is important and what is not. So emotions, instead of being peripheral to the progress of artificial intelligence, are now assuming central importance.

       If a robot encounters a raging fire, it might rescue the computer files first, not the people, since its programming might say that valuable documents cannot be replaced but workers always can be. It is crucial that robots be programmed to distinguish between what is important and what is not, and emotions are shortcuts the brain uses to rapidly determine this. Robots would thus have to be programmed to have a value system—that human life is more important than material objects, that children should be rescued first in an emergency, that objects with a higher price are more valuable than objects with a lower price, etc. Since robots do not come equipped with values, a huge list of value judgments must be uploaded into them.
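
One way to picture such an uploaded value system is as an explicit priority ordering that the robot consults during an emergency. The following Python sketch is illustrative only; the categories and their ranking are assumptions, not a published scheme:

```python
# Sketch of an explicit value hierarchy a rescue robot might consult.
# Lower rank = rescued first. The categories and ordering are
# illustrative assumptions.

RESCUE_PRIORITY = {
    "child": 0,    # children rescued first
    "adult": 1,    # any human life outranks objects
    "animal": 2,
    "object": 3,   # material objects last, higher price first
}

def triage(entities):
    """Sort entities so that whatever the programmed value system
    deems most important comes first; objects are ordered by price."""
    return sorted(
        entities,
        key=lambda e: (RESCUE_PRIORITY[e["kind"]], -e.get("price", 0)),
    )

scene = [
    {"kind": "object", "name": "server backup", "price": 50_000},
    {"kind": "adult",  "name": "worker"},
    {"kind": "child",  "name": "visitor's child"},
]
for entity in triage(scene):
    print(entity["name"])
# -> visitor's child, worker, server backup
```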

      The problem with emotions, however, is that they are sometimes irrational, while robots are mathematically precise. So silicon consciousness may differ from human consciousness in key ways. For example, humans have little control over emotions, since they happen so rapidly and because they originate in the limbic system, not the prefrontal cortex of the brain. Furthermore, our emotions are often biased. Numerous tests have shown that we tend to overestimate the abilities of people who are handsome or pretty. Good-looking people tend to rise higher in society and have better jobs, although they may not be as talented as others. As the expression goes, "Beauty has its privileges."

       Similarly, silicon consciousness may not take into account subtle cues that humans use when they meet one another, such as body language. When people enter a room, young people usually defer to older ones and low-ranked staff members show extra courtesy to senior officials. We show our deference in the way we move our bodies, our choice of words, and our gestures. Because body language is older than language itself, it is hardwired into the brain in subtle ways. Robots, if they are to interact socially with people, will have to learn these unconscious cues.

       Our consciousness is influenced by peculiarities in our evolutionary past, which robots will not have, so silicon consciousness may not have the same gaps or quirks as ours.

Programming Emotions


       In this discussion we have so far avoided the difficult question of precisely how these emotions would be programmed into a computer. Because of their complexity, emotions will probably have to be programmed in stages. First, the easiest part is identifying an emotion by analyzing the movements of a person's face, lips, and eyebrows, and the tone of their voice. Today's facial recognition technology can already build a dictionary of emotions, in which certain facial expressions mean certain things. This process actually goes back to Charles Darwin, who spent a considerable amount of time cataloging the emotions common to animals and humans.
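
As a rough sketch of this first stage, a robot might locate a face with an off-the-shelf detector and hand the cropped image to a trained emotion classifier. The Haar-cascade face detector below ships with OpenCV; the `emotion_model` is a stand-in assumption for whatever classifier the robot carries, not a specific library:

```python
import cv2

# Stage 1 sketch: find a face, then classify its expression.
# The face detector is real (bundled with OpenCV); "emotion_model"
# is an assumed placeholder for a trained classifier.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def identify_emotion(frame, emotion_model):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3,
                                           minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        # emotion_model.predict is assumed to return an index into EMOTIONS
        label_index = emotion_model.predict(face)
        return EMOTIONS[label_index]
    return None  # no face found
```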

      Second, the robot must respond rapidly to this emotion. This is also easy. If someone is laughing, the robot will grin. If someone is angry, the robot will get out of the person's way and avoid conflict. The robot would have a large encyclopedia of emotions programmed into it, and hence would know how to make a rapid response to each one.
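
In code, this second stage can be as simple as a lookup table, which is why it is easy. The emotion-response pairings below are illustrative assumptions:

```python
# Stage 2 sketch: a fixed "encyclopedia" mapping a recognized emotion
# to an immediate canned response. The pairings are illustrative.
RESPONSE_TABLE = {
    "joy":      "smile_and_approach",
    "anger":    "back_away_and_deescalate",
    "sadness":  "speak_softly_and_offer_help",
    "fear":     "check_surroundings_for_threat",
    "surprise": "pause_and_observe",
    "disgust":  "apologize_and_withdraw",
}

def rapid_response(emotion):
    """Return the pre-programmed reaction; stay neutral if unrecognized."""
    return RESPONSE_TABLE.get(emotion, "remain_neutral")
```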

      The third stage is perhaps the most complex, because it involves trying to determine the underlying motivation behind the original emotion. This is difficult, since a variety of situations can trigger the same emotion. Laughter may mean that someone is happy, heard a joke, or watched someone fall; it might also mean that a person is nervous, anxious, or mocking someone. Likewise, if someone is screaming, there may be an emergency, or perhaps the person is just reacting with joy and surprise. Determining the reason behind an emotion is a skill that even humans have difficulty with. To do this, the robot will have to list the possible reasons behind the emotion and pick the one that best fits the data.
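
One simple way to sketch this third stage is to score each candidate cause by how well it matches the cues observed in context, and pick the best fit. The candidate causes and cues below are invented for illustration; a real system would use learned likelihoods rather than a hand-written table:

```python
# Stage 3 sketch: rank candidate causes of observed laughter by how
# many contextual cues each one explains. Causes and cues are
# illustrative assumptions.
CAUSES_OF_LAUGHTER = {
    "heard_a_joke":      {"another_speaker", "smiling_faces"},
    "nervousness":       {"formal_setting", "fidgeting"},
    "mocking_someone":   {"pointing", "target_looks_upset"},
    "someone_fell_down": {"sudden_motion", "person_on_ground"},
}

def most_likely_cause(observed_cues):
    """Return the cause whose expected cues best match what was seen."""
    def fit(cause):
        expected = CAUSES_OF_LAUGHTER[cause]
        return len(expected & observed_cues) / len(expected)
    return max(CAUSES_OF_LAUGHTER, key=fit)

print(most_likely_cause({"formal_setting", "fidgeting"}))  # -> "nervousness"
```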

     And fourth, once the robot has determined the origin of this emotion, it has to make the appropriate response. This is also difficult, since there are often several possible responses, and the wrong one may make the situation worse. The robot already has, within its programming, a list of possible responses to the original emotion. It has to calculate which one will best serve the situation, which means simulating the future.
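
This fourth stage can be caricatured as a one-step lookahead: simulate each candidate response, score the predicted outcome, and choose the highest scorer. The situations, responses, and scores below are invented placeholders; a real robot would need a genuine model of the world:

```python
# Stage 4 sketch: choose among pre-programmed responses by simulating
# the likely outcome of each one. The predicted scores are invented
# placeholders standing in for a learned world model.
def simulate_outcome(situation, response):
    """Toy world model: predict how well a response defuses a situation.
    Returns a score in [0, 1]; higher is better."""
    PREDICTED = {
        ("angry_customer", "apologize"):       0.8,
        ("angry_customer", "explain_policy"):  0.4,
        ("angry_customer", "call_supervisor"): 0.6,
    }
    return PREDICTED.get((situation, response), 0.1)

def best_response(situation, candidate_responses):
    """Pick the response whose simulated future scores highest."""
    return max(candidate_responses,
               key=lambda r: simulate_outcome(situation, r))

print(best_response("angry_customer",
                    ["apologize", "explain_policy", "call_supervisor"]))
# -> "apologize"
```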
