Diffusion Labs is an enterprise whose mission is to research, analyze, and abstract complex systems; advance knowledge and understanding of systems theory; and develop fuzzy expert systems for a range of social and industrial applications.
The vision of this enterprise is to promote soft computing solutions that increase the efficiency, knowledgeability, and functionality of intelligent agents, equipping them with the means to improve the quality of life of those they serve.
The core disciplines exercised by this enterprise include thermodynamics, electrodynamics, quantum mechanics, computational intelligence, systems theory, information theory, and systems biology.
Artificial Intelligence 2.0
Computational intelligence (CI) can be thought of as sub-symbolic artificial intelligence (AI). While classical AI focuses on high-level symbolic representations of information, CI focuses on how units of information relate to one another, in addition to the collective value they represent. This is a useful departure from classical AI, as it provides a more scientific framework for investigation.
This view of intelligence is derived primarily from neural networks, where a cognitive state is represented as a multidimensional vector of neural unit activation (probability potential) values, and knowledge is represented as a matrix of neural unit connection strength values. The concept of the neural network originated as an effort to explain biological cognition, and has been successful in modeling many natural connectionist functions. Neural networks, in contrast to the circuits of most electronic hardware, exhibit what is referred to as neuroplasticity: an ability to undergo autonomous adaptation and modification, or more simply, to learn.
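This state-and-weights view can be made concrete with a minimal sketch (all values below are illustrative assumptions, not a model of any particular network):

```python
import numpy as np

# The "cognitive state" is a vector of unit activations; the "knowledge"
# is a matrix of connection strengths between those units.
state = np.array([0.9, 0.1, 0.4])           # activations of three units
weights = np.array([[0.0,  0.8, -0.2],      # connection strengths
                    [0.8,  0.0,  0.5],
                    [-0.2, 0.5,  0.0]])

def sigmoid(x):
    """Squash net input into a (0, 1) activation value."""
    return 1.0 / (1.0 + np.exp(-x))

# One update step: each unit's new activation is a function of the
# weighted activations feeding into it.
next_state = sigmoid(weights @ state)
print(next_state)
```

The matrix-vector product gathers each unit's weighted inputs; the squashing function keeps activations bounded, loosely mirroring a neuron's firing behavior.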
One popular mechanism by which neural networks learn is known as Hebbian-style learning. Hebb's rule, introduced by Donald Hebb in 1949, attempts to describe how the strength of natural and artificial neural unit connections increases in response to simultaneous activation or potentiation. It forms the basis for several more modern theories of associative learning and neuroplasticity models such as Bienenstock-Cooper-Munro (BCM) theory and the Generalized Hebbian Algorithm (GHA).
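Hebb's rule can be sketched in a few lines (learning rate and activations below are illustrative assumptions): units that fire together wire together, so each connection strengthens in proportion to the product of the activations it joins.

```python
import numpy as np

eta = 0.1                          # learning rate (assumed)
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activations
post = np.array([1.0, 1.0])        # postsynaptic activations

weights = np.zeros((2, 3))         # connection strengths, initially zero

# One Hebbian update: dw[j, i] = eta * post[j] * pre[i]
weights += eta * np.outer(post, pre)
print(weights)
# Connections between co-active units grew by 0.1; the connection to the
# inactive middle unit stayed at 0.0.
```

Note that the plain rule only strengthens connections; refinements such as BCM theory add thresholds and decay so that weights can also weaken and remain bounded.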
These mechanisms for learning are prominent in fuzzy systems, where continuous truth values are defined by input variables mapped to fuzzy set membership functions. Traditional control systems are generally rigid and myopic, requiring a rigorous framework to map their input variables to their output. When a rule base is used in combination with fuzzy sets, fuzzy systems can make highly effective decisions given incomplete or delayed input values through the use of inference.
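A minimal fuzzy-inference sketch makes the contrast with rigid control concrete. The membership shapes, rule base, and temperatures below are illustrative assumptions, not a production controller:

```python
def cold(t):
    """Membership in "cold": fully cold at <= 0, not cold at >= 30."""
    return max(0.0, min(1.0, (30.0 - t) / 30.0))

def hot(t):
    """Membership in "hot": not hot at <= 10, fully hot at >= 40."""
    return max(0.0, min(1.0, (t - 10.0) / 30.0))

def heater_power(t):
    """Rule base: IF cold THEN power = 100; IF hot THEN power = 0.
    Defuzzify with a weighted average of the rule outputs."""
    mu_cold, mu_hot = cold(t), hot(t)
    total = mu_cold + mu_hot
    if total == 0.0:               # no rule fires at all
        return 0.0
    return (mu_cold * 100.0 + mu_hot * 0.0) / total

print(heater_power(0.0))    # fully cold: full power
print(heater_power(20.0))   # partly cold AND partly hot: blended output
print(heater_power(40.0))   # fully hot: no power
```

Because a temperature can belong to "cold" and "hot" simultaneously to different degrees, the controller interpolates smoothly between rules instead of switching at a hard threshold.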
The inductive logic of choice for many has its origins in Bayes' theorem, which relates the conditional and marginal probabilities of two random events. Bayesian statistics is sometimes used in the course of machine learning, where a hypothesis is assigned a posterior probability proportional to the product of its prior probability and the likelihood of the observed evidence. As a result, Bayesian networks are often used to represent a belief system that can be queried for knowledge.
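The prior-times-likelihood update can be shown with a short numeric sketch (the fault-detection scenario and its probabilities are illustrative assumptions):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h = 0.01              # prior: P(H), e.g. a rare fault
p_e_given_h = 0.9       # likelihood: P(E|H), alarm fires when fault present
p_e_given_not_h = 0.05  # false-positive rate: P(E|~H)

# Marginal probability of the evidence (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)

# Posterior: prior times likelihood, normalized by the marginal.
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))
```

Even a reliable alarm leaves the posterior well below certainty here, because the prior is so small; this is the kind of belief revision a Bayesian network performs across many linked variables at once.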
Perhaps the most promising strategy for solving problems through system optimization involves evolutionary techniques. These techniques are typically metaheuristic and highly iterative. Some techniques such as evolutionary algorithms deal primarily with highly interconnected domains, while others like swarm intelligence, self-organization, and cultural algorithms deal with decentralized or fragmented domains. Evolutionary techniques have already proven very effective, and have sometimes even outperformed traditional expert systems by large margins.
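The mutate-then-select loop at the heart of these techniques can be sketched with a (1+1) evolution strategy, chosen here for brevity (the objective function, mutation width, and iteration count are illustrative assumptions):

```python
import random

def fitness(x):
    """Toy objective: maximize -(x - 3)^2, with its optimum at x = 3."""
    return -(x - 3.0) ** 2

random.seed(0)
parent = 0.0                                   # initial candidate solution
for _ in range(2000):
    child = parent + random.gauss(0.0, 0.5)    # mutation
    if fitness(child) >= fitness(parent):      # selection: keep improvements
        parent = child

print(round(parent, 2))                        # converges near 3.0
```

No gradient or domain model is required, only a way to score candidates, which is why such metaheuristics apply to problems where traditional expert systems struggle.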
PO Box 15177
Panama City, FL 32406-5177