Optimization for Unknown Unknowns, and a Path to Universal Intelligence
A group of agents in a dynamic landscape is searching for an optimal solution: there is a good optimum close by and a less impressive one below. Over time the nearby one shrinks while the other grows.

Left: Using local optimization, the agents are unable to let go of the nearest local minimum. Only when it evaporates completely do they start moving. Right: Using Metaphor’s Intentional Evolution, the agents quickly discover both hot spots, maintain a presence in each, and rapidly migrate when one collapses.
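The contrast in the figure can be sketched in a few lines. This is a toy stand-in, not Metaphor’s Intentional Evolution: it compares plain gradient descent against hill climbing with occasional long-range leaps on a 1-D landscape whose two wells trade depth over time. All function names and parameter values here are illustrative assumptions.

```python
import math
import random

def landscape(x, t, T=100):
    """Two wells: the near one (x = 1) evaporates as the far one (x = 4) deepens."""
    near = (1 - t / T) * math.exp(-(x - 1.0) ** 2)  # shrinking well
    far = (t / T) * math.exp(-(x - 4.0) ** 2)       # growing well
    return -(near + far)  # lower is better

def grad(x, t, h=1e-4):
    """Numerical gradient of the landscape at time t."""
    return (landscape(x + h, t) - landscape(x - h, t)) / (2 * h)

def local_agents(n=20, steps=100, lr=0.1):
    """Plain gradient descent: each agent follows only the local slope."""
    xs = [random.gauss(1.0, 0.3) for _ in range(n)]
    for t in range(steps):
        xs = [x - lr * grad(x, t) for x in xs]
    return xs

def leap_agents(n=20, steps=100, sigma=0.3, p_jump=0.2, lo=-2.0, hi=6.0):
    """Hill climbing with occasional long-range leaps: each agent mostly takes
    small local steps, but sometimes samples a point anywhere in the domain,
    keeping a candidate only if it improves fitness at the current time."""
    xs = [random.gauss(1.0, 0.3) for _ in range(n)]
    for t in range(steps):
        for i in range(n):
            if random.random() < p_jump:
                cand = random.uniform(lo, hi)            # long-range leap
            else:
                cand = xs[i] + random.gauss(0.0, sigma)  # local step
            if landscape(cand, t) < landscape(xs[i], t):
                xs[i] = cand
    return xs

random.seed(0)
in_far_basin = lambda xs: sum(1 for x in xs if abs(x - 4.0) < 1.0)
loc, leap = local_agents(), leap_agents()
print(f"migrated to the new optimum: local {in_far_basin(loc)}/20, "
      f"leaping {in_far_basin(leap)}/20")
```

In this sketch the gradient-descent agents sit in the nearby well even as it vanishes, while the leaping agents discover the distant well and migrate to it as soon as it becomes the better bet.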
Metaphor is a new path toward Universal Intelligence. We’re building self-modifying neural networks that can adapt to change, use analogies, and search for optimal solutions both locally and by leaps and bounds.
The end result is a new paradigm and workflow that automates AI research and development: machines continuously test different architectures, topologies, and parameters, while researchers constantly try out new meta-learning priors, hypotheses, and heuristics.
This approach can be applied immediately to optimization in any kind of dynamical system, especially those with a short chaotic timescale and an ability to leverage bets: markets and exchanges, recommendation systems, cat-and-mouse scenarios, creative generation, and scientific discovery.