fairly techie explanation here, try to follow it, but don't blow your brain up
right, not in a conventional sense. unless you like matrix style outputs (the movie or the math, take your pick). the robot's map isn't really understandable at a basic level of interaction. add in low data transfer rates, and it becomes even more horrible.
the way around this is to pass data on the objects the robot detects to the pc. (boring techie detail) this keeps data transfer rates low, and needs minimal power and processing capability on the robot. the con is that the computational cost to the pc is higher, and you duplicate some processes, increasing the overall computational cost of the system. (end of boring techie detail)
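to show how small that robot-to-pc payload can be, here's a rough sketch in python (the names and the wire format are made up for illustration, not from any real robot): each detected object is one line segment, packed as four fixed-point coordinates.

```python
import struct

# hypothetical payload: each detected object is one line segment,
# four signed 16-bit coords (x1, y1, x2, y2), say in millimetres.
def pack_segments(segments):
    """pack a list of (x1, y1, x2, y2) tuples into bytes for the radio link."""
    out = bytearray(struct.pack("<H", len(segments)))  # 2-byte segment count
    for seg in segments:
        out += struct.pack("<4h", *seg)                # 8 bytes per segment
    return bytes(out)

def unpack_segments(payload):
    """inverse of pack_segments, run on the pc side."""
    (count,) = struct.unpack_from("<H", payload, 0)
    return [struct.unpack_from("<4h", payload, 2 + 8 * i) for i in range(count)]

# one giant wall = one segment = 8 bytes on the wire, plus the 2-byte header
wall = [(0, 0, 5000, 0)]
assert unpack_segments(pack_segments(wall)) == [(0, 0, 5000, 0)]
```

the point is the size: one wall costs 10 bytes, however many sensor readings went into detecting it.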
in layman's terms, the robot can have a low powered processor, but the pc needs more processing power. so basically, the pc needs to be running windows 98 or better.
pcs have masses of processing power. it just adds a few nanoseconds to the processing time.
if you post-process the data, you can get a vector representation of what the robot can detect. you can then process that into a visual representation. the easiest way is to draw the vectors as lines on a 2d bit of "paper".
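the "draw the vectors as lines on a 2d bit of paper" step, sketched with a character grid standing in for the paper (in a real app you'd draw onto a bitmap with whatever your GUI toolkit provides; this is just the idea, using the standard bresenham line algorithm):

```python
def draw_segment(grid, x1, y1, x2, y2, ch="#"):
    """plot one line segment onto a 2d character grid (bresenham's algorithm)."""
    dx, dy = abs(x2 - x1), -abs(y2 - y1)
    sx = 1 if x1 < x2 else -1
    sy = 1 if y1 < y2 else -1
    err = dx + dy
    while True:
        grid[y1][x1] = ch
        if (x1, y1) == (x2, y2):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x1 += sx
        if e2 <= dx:
            err += dx
            y1 += sy

# blank 10x20 "paper", then draw one wall vector across it
paper = [[" "] * 20 for _ in range(10)]
draw_segment(paper, 0, 9, 19, 0)
print("\n".join("".join(row) for row in paper))
```

feed it the segments the robot sent and you get a crude map on screen, no image ever crossing the wireless link.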
effectively, you can't really get the robot itself to draw a map and send it to the computer. instead, it passes a vector representation of the objects it detects to the pc, and the pc does the image processing. the volume of data passed is minimal. if the robot sees a giant wall across all of its sensor positions, it passes a single vector for the whole wall, which is a lot faster than sending each sensor reading, or building a map and passing an image over wireless.
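one way the "giant wall becomes one vector" merge could work on the robot: walk the sensor hit points in order and keep extending the current segment while each new point stays on the same straight line. this is only an illustration, real sensor data is noisy and the tolerance would need tuning:

```python
def merge_collinear(points, tol=1e-6):
    """collapse an ordered run of sensor hit points into line segments.
    consecutive points on the same straight line (within tol, tested
    with a cross product) get merged into one segment."""
    if len(points) < 2:
        return []
    segments = []
    start, prev = points[0], points[1]
    for p in points[2:]:
        # cross product of (prev - start) and (p - start): zero means collinear
        cross = (prev[0] - start[0]) * (p[1] - start[1]) \
              - (prev[1] - start[1]) * (p[0] - start[0])
        if abs(cross) > tol:
            segments.append((start, prev))  # direction changed, close the segment
            start = prev
        prev = p
    segments.append((start, prev))
    return segments

# ten readings along one wall collapse into a single vector
readings = [(x, 0) for x in range(10)]
print(merge_collinear(readings))  # → [((0, 0), (9, 0))]
```

so instead of ten readings going over the radio, one segment does.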
at some point you need to get into GUIs and image processing. I recommend C#, but use whatever you want.