Whatever happened to Asimov's 3 laws?
If a robot can be programmed not to kill out of hatred (or any other such reason), I would argue it would be unethical *not* to use a robot soldier instead of a human soldier.
The problem is not the lack of emotions, but rather the situations where they go haywire. We all know that a human brain (even a not-so-bright one) is far superior to a robot, and will be for a long time, when it comes to e.g. pattern recognition.
And it's easy to imagine a scenario where even light damage (a bullet, shrapnel or whatever) makes a robot go haywire, and then civilians will be in real danger.
And what's the point if, in some future battle, both sides send robots and no humans? It reminds me of two kids arguing over whose father can beat up the other one's father.
Funny thing is, people advocating such things always start their dogmatic outburst with "I really don't like war, it's terrible, but...". If all engineers simply refused to develop killer 'bots, guns, bombs etc., there wouldn't be any such devices.
Quote from: Soeren on December 01, 2012, 11:07:39 AM
"The problem is not the lack of emotions, but rather the situations where they go haywire and we all know that a human brain (even a not so bright one) is far superior to what a robot is, and will be for a lot of time, when it comes to eg. 'pattern recognition'"
As long as this is true, robots will never be fielded in those situations. The above article is for the hypothetical day when robot brains are actually superior.
Humans don't even need light damage to go haywire. My Lai, for example.
When wars are fought over resources (such as oil fields or territory), the victor keeps them. When a war is fought over ideals, the victor gets to force their ideals on the loser. And so on.
And history shows the better equipped force is much more likely to win.
Did you see the monetary prize for the latest DARPA robot competition? And there will always be engineers who believe what they are doing is right because it is for their nation.
I'm not an advocate of war, or making weapons.
I refuse to make or assist in the creation of killer robots (yes, peaceful robot tech can be used for evil).
But I realize that wars are not going away any time soon, that dictators won't step down if you say 'please', and people will wage those wars regardless of the weapons they have.
HRW can't ban robots any more than they can ban guns or war itself. It's a political issue, not an engineering one.
Yes, chemical weapons were banned, and the same goes for cluster munitions, but only because they couldn't discriminate between combatants and civilians.
Carpet bombing was ended with the development of the smart bomb. And my argument is that future robots will be better than the current option.
Besides, how are we going to fight off the alien invasion if we don't have giant fighting robots?
Carpet bombing was ended because smart bombs are a cheaper way to reach the same goal, with fewer protests from people against war.
And the killer 'bot... How is it going to discriminate between a civilian and a soldier? People wearing a green jacket may have been forced to do so, and guerrillas may look like civilians. How should it decide who's who, when it's sometimes impossible even for humans?
If a robot intentionally kills a non-combatant, [...]
That said, I think the next tech to be fielded will actually be soldiers in exo-suits. But sooner or later, like aircraft today, the pilot will no longer be needed to do the job.
And no need to bash the US (in every country there are good people, and there are bad people)
I'm more concerned with the unintentional behavior.
You may think it is possible to make the distinction under all ambient conditions, but I doubt that. And there are at least two ways for the enemy to render a robot useless. First, make some civilians look like combatants to a robot (while not to a human), get them killed, and make an international case of it, which should force the controllers to shut the 'bots down. Second, make the combatants look like civilians, and the robots won't harm them.
But it's OK to bash, e.g., Russia?
Finally we can get our troops pulled from Afghanistan (where they shouldn't have been in the first place).
So in my opinion robots still have a long way to go before they "prove" themselves as trustworthy autonomous beings.