(just a little brain dump)
Neural networks are limited because they can't reason about what they know. They can learn probabilistically, and can adapt to the hidden variables of a complex system quite well, but they lack the capacity for self-reflection and introspection. To self-reflect, their information would need to be encoded differently: right now everything they know is stored as meaningless vectors in a high-dimensional space. What they need is a secondary network on top that groups those meaningless vectors into "concepts", a kind of meta-cognition, like humans have.
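(the "grouping vectors into concepts" idea feels a bit like clustering a network's hidden activations -- here's a toy sketch of that reading, with synthetic "activations" and a plain k-means loop standing in for the hypothetical concept-grouping layer; none of this is an established method, just me making the idea concrete)

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations from a trained network:
# three loose clumps in a 16-dimensional space (purely synthetic data).
centers = rng.normal(size=(3, 16)) * 5
vectors = np.vstack([c + rng.normal(size=(40, 16)) for c in centers])

def kmeans(x, k, iters=50, seed=0):
    """Group vectors into k 'concepts' with plain k-means.

    Init picks points that are far apart (farthest-point init) so we
    don't accidentally start with two centroids in the same clump.
    """
    r = np.random.default_rng(seed)
    centroids = [x[r.integers(len(x))]]
    for _ in range(k - 1):
        # distance from each point to its nearest existing centroid
        d = np.min(
            np.linalg.norm(x[:, None] - np.array(centroids)[None], axis=2),
            axis=1,
        )
        centroids.append(x[d.argmax()])
    centroids = np.array(centroids)

    for _ in range(iters):
        # assign each vector to its nearest centroid ("concept")
        d = np.linalg.norm(x[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned vectors,
        # leaving it alone if no vectors were assigned to it
        centroids = np.stack([
            x[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
    return labels, centroids

labels, centroids = kmeans(vectors, k=3)
print("concept sizes:", np.bincount(labels))
```

the point isn't that k-means is the answer -- it's that once the vectors are grouped, you have discrete units a second network could actually reason over, instead of raw coordinates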
(what's this area of neural net research called?)