Optimism is good!
However, before taking on a substantial project (and this one must be far beyond what is possible at present), it's a very good idea to assess the bare necessities, an outline if you will, of what has to be done to reach the goal.
The current state of affairs, even among the best and brightest scientists, is a fumbling start: we know how little we know.
We still have a lot to learn before we can replicate even a mediocre attempt at intelligence. That's why I really dislike the term Artificial Intelligence (AI): it implies that what we have is intelligence, just made artificially (i.e. by humans, you might say). The best we have today can rightfully be termed nothing more than Simulated Intelligence (SI), and as long as humanity cannot define exactly what intelligence is, we cannot hope for anything more than simulating it.
Besides this, to get closer to the target we would have to give robots behavioural responses based on sometimes irrational impulses as well as real-world rationality, but all we can do is equip them with simulated behaviour.
To loosely quote Immanuel Kant, who probably couldn't have imagined robots as we know them at all: "you can give a machine behaviour based on sensors, but as long as it doesn't feel the vision, taste, smell, etc., intelligence is out of the question" (his idea, my wording).
So even with programmed behavioural responses, it's all just a simulation. We would have to accept a "Skynet" to get there, and what would become of us then?
Without ("real") intelligence, truly understanding language is impossible. Too many words sound the same but carry different meanings depending on context, cultural influence (even within a family or a small town) and plenty of other "modifiers". And let's not forget that more than half of interpersonal communication is nonverbal, in several hard-to-describe ways that we learn only through our intelligence.