Wednesday, November 25, 2015

Researchers are teaching robots to say "no" to humans

Apparently, researchers are making some progress teaching robots to question orders from their masters.

Link

In the video, the programmer tells the robot to walk off the edge of the table, and the robot replies, "But it is unsafe." The programmer then has to modify the command, saying "It's OK, I will catch you," at which point the robot accepts the order and walks off the table.
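The exchange in the video boils down to a simple precondition check with a human override: the robot refuses a command whose outcome it believes to be unsafe, and a human assertion can discharge that belief. Here is a minimal sketch of that logic in Python; every name in it is my own invention for illustration, not the researchers' actual code:

    class Robot:
        def __init__(self):
            # Beliefs the robot currently holds about the world.
            self.beliefs = {"walking off the table is unsafe"}

        def command(self, action, risk):
            """Refuse the action if its risk is still believed; otherwise comply."""
            if risk in self.beliefs:
                return f'But "{risk}".'  # refusal, stating the reason
            return f"Okay: {action}"

        def assert_fact(self, discharged_risk):
            """The human vouches for safety, so drop the corresponding belief."""
            self.beliefs.discard(discharged_risk)

    robot = Robot()
    print(robot.command("walk forward", "walking off the table is unsafe"))
    # -> But "walking off the table is unsafe".
    robot.assert_fact("walking off the table is unsafe")  # "I will catch you"
    print(robot.command("walk forward", "walking off the table is unsafe"))
    # -> Okay: walk forward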

I can see the purpose behind this. Imagine a future where robots work in fields like construction: you have a bunch of robots building a tower, someone commands one of them to do a certain task, and the robot says "But it is unsafe," thus preventing what could have been an accident.


But I can't help thinking it can also have the opposite effect. A robot may "think" a certain action is unsafe or inconvenient and refuse to obey the command when, in reality, the action is necessary. We're basically programming machines to think they know better than we do what we need, which is something I have always avoided and hated. There's nothing I hate more than my computer making decisions for me because it thinks it knows best. Most of the time, the computer is wrong.

I think machines, no matter how sophisticated, are still tools. We are the ones who use the tools, not the other way around. Programming machines to question our commands can lead to frustrating and even mortally dangerous situations. If the ultimate goal is to teach machines morality, we should remember that morality is relative and depends on context. If we humans haven't managed to reach an agreement on what's moral, what makes us think we can successfully program a robot to achieve an objective morality?


via International Skeptics Forum http://ift.tt/1IaT4JB
