This seems wrong, but I don't understand why. If I have a wildly inaccurate sensor, but one that gives random error within a certain range, and I take a large (approaching infinite) number of sensor readings and then average them, shouldn't the value I get approach the true mean?
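For what it's worth, here's a quick simulation sketch of what I have in mind (the numbers are made up, and I'm assuming the error is uniform and averages to zero):

```python
# Simulate a "wildly inaccurate" sensor: true value 10.0,
# plus uniform random error anywhere in [-5, 5].
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

for n in (10, 1_000, 100_000):
    # n independent readings, each corrupted by zero-mean uniform noise
    readings = true_value + rng.uniform(-5.0, 5.0, size=n)
    print(f"n = {n:>7}: mean of readings = {readings.mean():.4f}")
```

When I run something like this, the average of the readings gets closer and closer to 10.0 as n grows, which is exactly the behavior I'd expect. So where does my intuition go wrong?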