(just a little brain dump)

Neural networks are limited because they can't reason about what they know. They learn probabilistically and adapt well to the hidden variables of a complex system, but they have no capacity for self-reflection or introspection. To self-reflect, their knowledge would need to be encoded differently: right now all of it is stored as meaningless vectors in a high-dimensional space. What they need is a secondary network sitting on top that groups those meaningless vectors into "concepts" -- a kind of meta-cognition, like humans have.

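A rough sketch of what that "grouping into concepts" step could look like (purely illustrative -- the clustering method, array names, and sizes below are placeholders, not a worked-out design): take the vectors a base network produces for a batch of inputs and cluster them, so each vector gets a discrete, nameable label.

    # Illustrative sketch only: treat hidden activations from some base
    # network as points in a high-dimensional space, and let a tiny
    # "secondary" step group them into discrete concepts by clustering.
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for hidden activations produced by a base network:
    # 200 inputs, each represented as a 64-dimensional vector.
    hidden_activations = rng.normal(size=(200, 64))

    def cluster_into_concepts(vectors, n_concepts=5, n_iters=20):
        """Very small k-means: give each vector a discrete 'concept' label."""
        centers = vectors[rng.choice(len(vectors), n_concepts, replace=False)]
        for _ in range(n_iters):
            # Distance from every vector to every concept center.
            dists = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=-1)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of the vectors assigned to it.
            for k in range(n_concepts):
                members = vectors[labels == k]
                if len(members) > 0:
                    centers[k] = members.mean(axis=0)
        return labels, centers

    concept_labels, concept_centers = cluster_into_concepts(hidden_activations)
    print(concept_labels[:10])  # each vector now carries a nameable concept label

In a real system the clustering would presumably be something learned rather than plain k-means, and the concept labels would have to feed back into the base network for it to count as introspection, but the basic shape is the same: a second layer of structure imposed on otherwise meaningless vectors.
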
Resources
