Asimov himself agreed that there were bugs in the three laws. Consider this:
In war, people hurt other people, so any robot in a war zone should strive to protect them. And since the soldiers are fighting each other, the robot would conceivably have to protect soldiers on both sides.
So far so good, but here's the catch: the first law states not only that a robot must not harm a human being through its own actions, but also that it must not allow a human being to come to harm through its *inaction*. A sufficiently versatile robot, then, would cease to be useful the moment it heard of a war going on anywhere, since it would be obliged to act. Even if the robot is in North America and the war is taking place in Europe, if it knows it is capable of getting to Europe, it would have to go. So unless mankind stopped wars completely before implementing the three laws, any robot capable of reaching a conflict would have to do so, becoming completely useless with regard to its original programming :-P
I guess that's sort of a bug arising from the laws' intended features, though...