Hello!
The three laws, I think, are really for humans to design robots by. No one (human) would want a robot to bring injury to another person (unless it was a Terminator!).
Does the class of robot dictate the robot's moral philosophy? Meaning, if there are terminator robots made for war and domestic robots made for personal use, what is to stop someone from transferring the "mindset" of one robot to another? And if that can be done, how would we know whether an accidental software transfer at the factory had created a "mutant" robot?
The best thing to do is to limit the "mind" of the robot to dedicated, work-related areas and to avoid using robots for war. That limitation will never happen, though, as any advantage will soon be used to obtain supremacy. Having said all this, maybe we should apply the laws for robots to ourselves (the 1st law, at least) and lead by example. What are the odds of that happening?
Good thought-provoking question.