The subject of consciousness is a difficult one for philosophical reasons. Many people of a more spiritual bent than I think that consciousness is something beyond the physical, related to some transcendent concept such as the soul. I don't believe this myself, and to be honest I don't think the concept of consciousness is well-defined enough to even debate reasonably at present. To me, consciousness simply implies awareness of yourself and your environment. I see this as a prerequisite for anything beyond the most rudimentary, instinctive behavior. By this standard, a chimpanzee's ability to learn by imitation behaviors like termite fishing or breaking open coconuts qualifies it as conscious. Some think imitation is instinctive, but I'm not sure I agree. It implies an awareness of the other chimpanzee, and an ability to recognize, if not understand, causal relationships. Interestingly, I think this is a larger accomplishment than figuring out how to bash the coconut open in the first place - that can happen by accident, or out of desperation. Imitating the action when witnessed implies an ability to watch another, and imagine yourself in that position, basically to think "That could be me, I could be eating that."
As far as I know, achieving this level of behavior in a robot is far beyond our present ability, though that is as much because of the difficulty of processing visual data in so sophisticated a manner - identifying not only another robot, but also an action taken by that robot that is not part of the observer robot's own programming - as for any other reason.
There are other, probably better, ways to measure intelligence, but in my view we will not have true human-level intelligence in a robot until that robot is capable of abstract thought - not necessarily in human language, but in some abstracted language that serves the same purpose for the machine. Without that, the best we can hope for is the robotic equivalent of a dog or cat, capable of some degree of adaptability but not capable of really understanding or consciously impacting its environment. Without the ability to reason in a general, abstracted way (as opposed to the specific, restricted way that expert machines reason), a robot can't reach a human level of sophistication.
As for the mobility question, I think your original thought was humanoid mobility, rather than just mobility in general? As far as I am concerned, a real human level of kinematic intelligence requires some of that same reasoning ability described above; you might program a bot to play baseball, or climb rocks, but no matter how many specific actions it is programmed with, it is no more likely to learn an entirely new behavior on its own - say, climbing a rope ladder - than any other humanoid bot. To make that leap requires an understanding of your body and the ability to apply that knowledge to new tasks. When we can make a robot that can learn a sport or martial art the same way a human does - through practice, observation, and verbal instruction rather than through programming or mechanical imitation - then we will have a truly human level of intelligence and mobility.
I hope we'll accomplish this goal in the next 50 years, but I'm not knowledgeable enough about the subject to really guess with any accuracy.