By its nature, the Open Roboethics Initiative is easy to dismiss — until you read anything they’ve published. As we head toward a self-driving future in which virtually all of us will spend some portion of the day with our lives in the hands of a piece of autonomous software, it ought to be clear that robot morality is anything but academic. Should your car kill the child on the street, or the one in your passenger seat? Even if we can master such calculus and make it morally simple, we will do so only in time to watch a flood of household robots enter the market and create a host of much more vexing problems. There’s nothing frivolous about it — robot ethics is the most important philosophical issue of our time.
Biologists at Bielefeld University present expanded software architecture for the walking robot Hector
A year ago, researchers at Bielefeld University showed that their software endowed the walking robot Hector with a simple form of consciousness. Their new research goes a step further: they have now developed a software architecture that could enable Hector to see himself as others see him. “With this, he would have reflexive consciousness,” explains Dr. Holk Cruse, professor at the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University. The architecture is based on artificial neural networks. Together with his colleague Dr. Malte Schilling, Prof. Dr. Cruse published this new study in the online collection Open MIND, a volume from the MIND Group, a group of philosophers and other scientists studying the mind, consciousness, and cognition.
In Her, Spike Jonze’s Oscar-winning romance between a man and his operating system, we are pushed to reevaluate our relationships with computers. If it could listen and respond intelligently to your every concern, would you prefer dating a computer over a distracted, self-involved human? But for me, it’s not in my love life that I feel most replaceable. It’s in my choice of profession—medicine.
It started innocently enough. Robots like the da Vinci surgical system behaved as tools, needing the hands of their masters to function. Then some advanced past mechanical labor to read and write, and became a little less deferential. Electronic health records stop doctors if we prescribe the wrong medications or if we forget to ask the right questions at an annual checkup.
Now machines have become stand-ins. Virtual avatars called “relational agents” handle daily conversations to motivate weight loss and “watch” while patients take their medications. And their diagnostic skills are improving: in January 2013, a group from Indiana University reported that their artificial intelligence algorithm achieved 41.9 percent better diagnostic accuracy than trained physicians. Four years of medical school and five-plus years of residency later, we’ve got nothing on Her.
Torobokun doesn’t need to worry about getting a university place. The robot would easily pass the entrance exams at Japanese universities.
Last week at the Army Aviation Symposium, in Arlington, Va., a U.S. Army officer announced that the Army is looking to slim down its personnel numbers and adopt more robots over the coming years. The biggest surprise, though, is the scale of the downsizing the Army might aim for.
At the current rate, the Army is expected to shrink from 540,000 people down to 420,000 by 2019. But at last week’s event, Gen. Robert Cone, head of the Army’s Training and Doctrine Command, offered some surprising details about the slim-down plans. As Defense News put it, he “quietly dropped a bomb,” saying the Army is studying the possibility of reducing the size of a brigade from 4,000 soldiers to 3,000 in the coming years.