By CORNELIA DEAN
ATLANTA, Georgia - In the heat of battle, even the best-trained soldiers can act in ways that violate the Geneva Conventions or battlefield rules of engagement. Now some researchers suggest that robots could do better.
“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at the Georgia Institute of Technology, who is designing software for battlefield robots under contract with the United States Army. “That’s the case I make.”
Robot drones, mine detectors and sensing devices are already common on the battlefield, but they are controlled by humans. Dr. Arkin is talking about true robots operating on their own.
He and others say that the technology to make lethal autonomous robots is inexpensive and proliferating, and that it is only a matter of time before these robots are deployed on the battlefield. That means, they say, it is time for people to start talking about whether this technology is something they want to put to use.
Noel Sharkey, a computer scientist at the University of Sheffield in Britain, wrote last year in the journal Innovative Technology for Computer Professionals that “this is not a ‘Terminator’-style science fiction but grim reality.” He said South Korea and Israel were already deploying armed robot border guards.
“We don’t want to get to the point where we should have had this discussion 20 years ago,” said Colin Allen, a philosopher at Indiana University in Bloomington and a co-author of the new book “Moral Machines: Teaching Robots Right From Wrong.”
Randy Zachery, who directs the Information Science Directorate of the Army Research Office, which is financing Dr. Arkin’s work, said the Army hoped this “basic science” would show how human soldiers might use and interact with autonomous systems and how software might be developed to “allow autonomous systems to operate within the bounds imposed by the war fighter.”
In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, with no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents.
Dr. Arkin said he could imagine a number of ways in which autonomous robot agents might be used - in countersniper operations, clearing buildings of suspected terrorists or other dangerous assignments.
But first those robots would need to be programmed with rules and instructions about whom to shoot, when it is acceptable to fire and how to distinguish attacking enemy troops from civilians, the wounded or someone trying to surrender.
Dr. Arkin’s battlefield simulations play out on computer displays. Robot pilots have information a human pilot might have, including maps showing the location of houses of worship, apartment buildings, schools and other centers of civilian life.
They are instructed as to the whereabouts of enemy troops, materiel and high-priority targets. And they are given the rules of engagement, directives that limit the circumstances in which they can initiate and carry out combat.
In one simulation, a robot pilot flies past a small cemetery. The pilot spots a tank at the cemetery entrance, a potential target. But a group of civilians has gathered at the cemetery, too. So the pilot decides to keep moving, and soon spots another tank, standing by itself in a field. The pilot fires; the target is destroyed.
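The decision logic in that simulation - engage only a valid military target, and hold fire when civilians or protected sites are nearby - can be sketched in a few lines of code. The Python fragment below is a hypothetical illustration only: the entity types, the may_engage check and the standoff distance are assumptions made for this example, not details of Dr. Arkin’s actual software.

# A hypothetical rule-of-engagement check, mirroring the cemetery
# scenario above. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from math import hypot

@dataclass
class Entity:
    kind: str   # e.g. "tank", "civilian", "cemetery"
    x: float    # position in meters
    y: float

MILITARY_TARGETS = {"tank", "artillery"}
PROTECTED = {"civilian", "cemetery", "school", "hospital"}
SAFE_DISTANCE = 100.0  # assumed minimum standoff from protected entities

def may_engage(target: Entity, scene: list) -> bool:
    """Fire only at a military target with no protected entity nearby."""
    if target.kind not in MILITARY_TARGETS:
        return False
    return all(hypot(target.x - e.x, target.y - e.y) > SAFE_DISTANCE
               for e in scene if e.kind in PROTECTED)

# Tank at the cemetery entrance, civilians gathered nearby: hold fire.
scene = [Entity("civilian", 20.0, 5.0), Entity("cemetery", 10.0, 0.0)]
print(may_engage(Entity("tank", 0.0, 0.0), scene))      # False
# A second tank alone in a field, far from anything protected: engage.
print(may_engage(Entity("tank", 500.0, 500.0), scene))  # True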
Some who have studied the issue worry that battlefield robots designed without emotions would lack empathy. Dr. Arkin, a Christian who acknowledged the help of God and Jesus in the preface to his 1998 book “Behavior-Based Robotics,” reasons that because rules like the Geneva Conventions are based on humane principles, building them into a machine’s mental architecture would endow it with a kind of empathy. He added, though, that it would be difficult to design “perceptual algorithms” that could recognize when people were, for example, wounded or holding up a white flag.
Dr. Arkin said he saw provoking discussion about the technology as the most important part of his work. And if autonomous battlefield robots are ultimately banned, he said, “I would not be uncomfortable with that at all.”