On the Subject of Distributed Neural Networking and MIT’s ConceptNet

I have been experimenting with MIT’s ConceptNet for the past few months. Some scripting and a bit of research have revealed a lot of insight into what it might take to build a bot that can carry on a basic conversation driven by a mixture of simulated impulse and context.

ConceptNet is a utility for lexical analysis. It can’t be characterized as “smart” by itself, but it is extremely useful and carries a lot of potential in certain areas of lexical analysis and artificial intelligence. It is a curated database of relational data, and the amount of effort put into the project should be commended.

At the simplest level, you can take two concepts and find relationships between them. For example, we can determine that “people” and “cats” have overlapping relational attributes, like “people can pet” and “cats can be petted” (a sketch of this kind of lookup follows the list below). But you should be cautious with this data, because it comes with some inherent caveats:

  1. These facts are direct attributes in the database, not assertions or assumptions. In other words, the word “people” has an attribute in the database stating that people can pet; this is a simple noun-verb association with no other subtext. Likewise, the database also states that cats are able to be petted. Each concept has hundreds of additional attributes (e.g., cats have four legs), but I cherry-picked these two reciprocal attributes about cat-petting for my demonstration.
  2. These attributes are aggregated from a lot of different sources, so while it is generally true that “people can pet cats,” it should be understood that some people have disabilities that make them incapable of petting cats. In addition, some cats cannot be petted because they are feral and avoid humans.
  3. Despite the aforementioned exceptions, these generalized facts should not be discarded. A concept database that treats every relationship as both true and false because of exceptions would be less than useful, so exceptions should be handled in a context-sensitive way. More on this later.
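
As a rough sketch of what that kind of two-concept lookup looks like, here’s a query against ConceptNet’s public web API. The endpoint, parameters, and field names below are the public API’s, not the exact local tooling I’ve been scripting against, so treat it as illustrative:

    import requests

    def shared_edges(concept_a, concept_b, limit=50):
        # Ask ConceptNet for edges that directly connect the two concepts,
        # in either direction, and print the relation plus its surface text.
        resp = requests.get("http://api.conceptnet.io/query", params={
            "node": "/c/en/" + concept_a,
            "other": "/c/en/" + concept_b,
            "limit": limit,
        })
        resp.raise_for_status()
        for edge in resp.json().get("edges", []):
            print(edge["rel"]["label"], "->", edge.get("surfaceText") or edge["@id"])

    shared_edges("person", "cat")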

Despite the above warnings, pure ConceptNet queries can support some valid assertions using overlapping attributes: “cats have hair” and “people have hair,” therefore you can assert that “both humans and cats have hair.” Note again that we are not taking into account the exceptions of hairless cats and human baldness, because, as I stated before, there is a difference between the attributes themselves and a contextual exception to those attributes.
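
One concrete way to surface those overlapping attributes is to read each concept’s outgoing edges separately and intersect the targets. Again, this is sketched against the public web API; the relation name (HasA) is ConceptNet’s, but the little helper is my own placeholder:

    import requests

    def targets(concept, rel, limit=100):
        # Collect the end-points of one relation for a single concept,
        # e.g. targets("cat", "HasA") -> {"hair", "four legs", ...}
        resp = requests.get("http://api.conceptnet.io/query", params={
            "start": "/c/en/" + concept,
            "rel": "/r/" + rel,
            "limit": limit,
        })
        resp.raise_for_status()
        return {edge["end"]["label"].lower() for edge in resp.json().get("edges", [])}

    for shared in targets("person", "HasA") & targets("cat", "HasA"):
        print("both people and cats have", shared)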

Making assertions based on concept attributes is tricky, though, and it can lead to some bizarre results. Cats and people are both mammals, too, but if we weight these relationships too heavily we can make the mistake of declaring that “cats are people,” an actual result I’ve received from a ConceptNet query. In fact, it’s quite easy to exploit this behavior to make a lot of inaccurate or weird assertions between two concepts. Here are a few examples:

  • Spoon is fork
  • Person does not want to be a computer
  • Horse is a kind of person

Granted, these assertions were a fairly crude sampling using some sample ConceptNet Python code. They should not be considered a fundamental flaw in ConceptNet itself, but in how it’s being used to extrapolate new information between two concepts. Making more robust queries to validate assertions would help solve this problem. In the last example above, a thorough query would reveal that horses have much different properties than humans do, and therefore cannot be the same. This is a simple way to weed out the oddities above as fundamentally false or irrelevant.
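
One way to hedge against these oddities, along the lines described above, is to require that an “A is B” assertion be backed by a reasonable amount of shared structure before accepting it. The relation choice and threshold below are arbitrary placeholders, not tuned values:

    import requests

    def properties(concept, rel="HasProperty", limit=100):
        # Gather one concept's property labels from the public ConceptNet API.
        resp = requests.get("http://api.conceptnet.io/query", params={
            "start": "/c/en/" + concept,
            "rel": "/r/" + rel,
            "limit": limit,
        })
        resp.raise_for_status()
        return {edge["end"]["label"].lower() for edge in resp.json().get("edges", [])}

    def plausible_is_a(a, b, threshold=0.3):
        # Veto "a is b" when the two concepts share too few properties (Jaccard overlap).
        props_a, props_b = properties(a), properties(b)
        if not props_a or not props_b:
            return False  # not enough evidence to support the assertion
        overlap = len(props_a & props_b) / len(props_a | props_b)
        return overlap >= threshold

    print(plausible_is_a("horse", "person"))  # a thorough query says: probably not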

Further edge-case optimization can be added, especially for specialized topics, but any extra work at this level may run into the law of diminishing returns as more and more exceptions are found. Ideally we would want to do the least possible mucking around at the low-level concept and attribute level.

Just one piece of the puzzle

ConceptNet lacks any sort of state or memory, rationality, or personality. If one were to make two separate queries to ConceptNet, each result would be deterministic and have no effect on the other. Even though the data can be used to make assertions, ConceptNet doesn’t have the capacity to visualize the experience, react to stimuli, or form responses by itself.

My interest in ConceptNet is as an alternative to emulating the whole brain at the neuronal level (which is impractical on consumer hardware). This alternative would be built from small interactive parts layered on top of ConceptNet, augmenting it with human language parsing, knowledge, understanding, and reasoning that rivals our own. This would ultimately produce a system that’s more compatible with humanity for specific tasks, particularly ones that can and should be automated.

Neocortex

Thinking and reasoning are essential to consciousness, and in mammals they are the responsibility of the neocortex. The low-level specialized brain functions we discussed above all need to come together at a single point to form the seemingly fluid, flowing sensation of consciousness. For the sake of usability, our neocortical model should directly recognize the context of a stimulus.

Implementations of a neocortex range from a simple finite-state machine to more complex state matrices that are quantized using simulated annealing. There are plenty of other methods out there, some simulating all the way down to the neuronal level, but I feel the former two strike a practical balance of intelligence, effort, and usable results on common low-grade consumer hardware like a phone, Raspberry Pi, or a netbook.

Finite-State Neocortex

In the case of the finite-state neocortex, each operation the central parts of the brain make is deterministic. That is, given the same stimuli, the AI will intentionally produce predictable, repeatable results. Integrating enough pieces and states to obtain useful results may be an exercise in considerable patience and effort. You can think of it as a more concrete, rule-based architecture.

A mission-critical system might benefit more from this type of architecture because you can plan scenarios and test for failure points. There is a quantifiable (finite) number of states this cortex can exhibit. Special-case scenarios or functionality can be handled the way one would handle programming exceptions on a computer. However, the results of a finite-state machine can still be non-trivial and depend heavily on implementation, so it can be much less robotic than it sounds.
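
To make the idea concrete, the whole cortex boils down to a transition table. The states, stimulus kinds, and actions below are placeholders I invented for illustration, not the actual design:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Stimulus:
        kind: str     # e.g. "speech", "silence"
        text: str = ""

    # (state, stimulus kind) -> (next state, action).
    TRANSITIONS = {
        ("idle", "speech"):        ("listening", "acknowledge"),
        ("listening", "speech"):   ("listening", "update_context"),
        ("listening", "silence"):  ("responding", "compose_reply"),
        ("responding", "silence"): ("idle", "none"),
    }

    def step(state, stimulus):
        # Same state plus same stimulus always yields the same result, which makes
        # the whole cortex testable by enumerating its finite transition table.
        return TRANSITIONS.get((state, stimulus.kind), (state, "none"))

    state = "idle"
    for s in [Stimulus("speech", "hello"), Stimulus("silence"), Stimulus("silence")]:
        state, action = step(state, s)
        print(state, action)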

Quantum Neocortex

For my design I’ve decided to treat stimuli and context as individual thought units that can be quantized in a streaming fashion. This design will provide interfaces both for processing stimuli and for acting upon those stimuli in a way that doesn’t need to be explicitly triggered (e.g., it can respond to speech naturally, or when it feels like it needs to). It can also be processed quickly by capturing each state and calculating the next via simulated annealing.
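
Here’s a minimal sketch of that last step: a generic annealing loop over candidate thought units, with a caller-supplied energy function standing in for everything context-sensitive. It’s the shape of the idea, not the finished design:

    import math
    import random

    def next_state(current, candidates, energy, steps=500, t0=1.0, cooling=0.995):
        # Pick the next thought unit by simulated annealing over candidate states.
        # `energy` scores how poorly a candidate fits the current context; lower is better.
        state = random.choice(candidates)
        temperature = t0
        for _ in range(steps):
            proposal = random.choice(candidates)
            delta = energy(current, proposal) - energy(current, state)
            # Always accept improvements; sometimes accept worse moves while hot.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                state = proposal
            temperature *= cooling
        return state

The energy function is where the context sensitivity discussed earlier would live: candidates that clash with the current context score poorly and are rarely accepted once the temperature has cooled.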

Next steps

A proof-of-concept is in the works. There are three different types of machine learning that need to be performed independently just to get the quantum neocortex to respond to speech, and each of them requires considerable processing time and a lot of storage and memory. I may have to build or rent some servers.

Aside from that, speech recognition and synthesis systems have already been set up. I have high-availability servers running several instances of this software remotely. While it would be nice to run all of this locally on a phone, it takes up too much space and CPU. I’ll look at optimizing this later, though.

I plan on posting more when I have actual data to go off of, and I will probably start producing some code and a guide to get your own distributed neural network up and running on your own servers.

Featured image courtesy gerard79 
