A microcontroller can do what you suggest, if treated carefully.
You should never use a delay() call anywhere.
You will probably use interrupts and overlap certain operations.
Most of your code will be a big state machine that's run in the main loop:
1) Check any events/flags since last loop.
2) Read any available inputs.
3) Figure out what to do next based on available state.
4) Possibly start some new operation.
5) Update outputs.
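To make that concrete, here's a minimal sketch of one pass through such a loop in plain C. The names (`micros()`, the one-shot LED pulse "operation") are stand-ins I made up for illustration, not a real API; on an actual board you'd read a hardware timer instead of the faked one here:

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in for a hardware microsecond timer. */
static uint32_t fake_now = 0;
static uint32_t micros(void) { return fake_now; }

typedef enum { IDLE, PULSE_ON } state_t;

typedef struct {
    state_t  state;
    uint32_t started_at;   /* when the current operation began */
    bool     button_event; /* flag set by an interrupt or input scan */
    bool     led;          /* output driven at the end of each pass */
} app_t;

/* One pass of the main loop: check flags, decide, maybe start
   something new, update outputs. Never blocks, never delays. */
void loop_once(app_t *app)
{
    uint32_t now = micros();

    /* 1) Check events/flags since last loop; 4) maybe start a
       new operation: a hypothetical 1000 us LED pulse. */
    if (app->state == IDLE && app->button_event) {
        app->button_event = false;
        app->state = PULSE_ON;
        app->started_at = now;
    }

    /* 3) Decide based on state: has the pulse run its course?
       (Unsigned subtraction so timer wraparound is handled.) */
    if (app->state == PULSE_ON && (uint32_t)(now - app->started_at) >= 1000) {
        app->state = IDLE;
    }

    /* 5) Update outputs. */
    app->led = (app->state == PULSE_ON);
}
```

The key property is that `loop_once()` returns immediately every time; progress happens because it keeps getting called and keeps comparing timestamps, not because anything waits.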
For example, to read an ultrasonic sensor, you typically don't want to pull the trigger line high and then delay while waiting for the input pulse. Instead, you pull the line high and store away the current time, measured in some small unit such as microseconds, or in ticks of a timer running somewhere between 1/16 and 16 microseconds per tick.
In your main loop, when you see that the current time is greater than or equal to the start time plus the required pulse width, turn off the pulse and record the time again.
In an interrupt handler that runs when the ultrasonic sensor input pin goes from high to low, read the value of the timer and set a flag that you got that edge.
Again, in your main loop, when you see this flag, read the "pulse started time" and the "pulse completed interrupt time" to calculate how long the pulse was. This is now a new sensor reading available to the rest of the logic.
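The whole trigger/echo sequence above can be sketched as a small state machine plus one interrupt handler. This is an illustrative sketch, not a drop-in driver: the pin and timer plumbing is abstracted away, the 10 us trigger width is an assumption (typical of HC-SR04-style sensors), and all names are hypothetical:

```c
#include <stdint.h>
#include <stdbool.h>

#define TRIG_PULSE_US 10u  /* assumed trigger width, HC-SR04-style */

typedef enum { US_IDLE, US_TRIGGERING, US_WAIT_ECHO, US_DONE } us_state_t;

typedef struct {
    us_state_t state;
    uint32_t trig_started;         /* when we raised the trigger line */
    uint32_t listen_started;       /* when we dropped it and began listening */
    volatile uint32_t echo_fall;   /* timer value captured in the ISR */
    volatile bool     echo_done;   /* flag set by the ISR */
    bool     trig_pin;             /* output: level to drive on the trigger pin */
    uint32_t pulse_width_us;       /* result of the measurement */
} us_t;

/* Kick off a measurement: raise the trigger line and note the time. */
void us_start(us_t *s, uint32_t now_us)
{
    s->trig_pin = true;
    s->trig_started = now_us;
    s->echo_done = false;
    s->state = US_TRIGGERING;
}

/* Falling-edge interrupt on the echo pin: just capture the timer
   and set a flag. Keep ISRs this short; the main loop does the rest. */
void us_echo_falling_isr(us_t *s, uint32_t timer_now)
{
    s->echo_fall = timer_now;
    s->echo_done = true;
}

/* Called every pass of the main loop with the current timer value. */
void us_poll(us_t *s, uint32_t now_us)
{
    switch (s->state) {
    case US_TRIGGERING:
        /* Trigger has been high long enough: drop it and record
           the time again, as described above. */
        if ((uint32_t)(now_us - s->trig_started) >= TRIG_PULSE_US) {
            s->trig_pin = false;
            s->listen_started = now_us;
            s->state = US_WAIT_ECHO;
        }
        break;
    case US_WAIT_ECHO:
        /* The ISR flagged the edge: compute the pulse width from the
           two recorded times. A new sensor reading is now available. */
        if (s->echo_done) {
            s->echo_done = false;
            s->pulse_width_us = s->echo_fall - s->listen_started;
            s->state = US_DONE;
        }
        break;
    default:
        break;
    }
}
```

Note that nothing in `us_poll()` waits: while the echo is in flight, the rest of the main loop keeps running, which is exactly what lets you overlap this measurement with everything else the robot is doing.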
Note that planning out microcontroller programs like this is *hard*, because it requires managing many different pieces of state and making sure they don't step on each other. This is why most microcontroller tutorials only show you how to do one thing at a time.
If you've never done threaded, distributed, or otherwise asynchronous programming before, you *will* get it wrong a fair bit. This is the kind of software design that causes undergraduate computer science students to fail.
Welcome to robotics :-)