    Decentralized ELM Architectures

    Inspired by Nature

    By Julian Wilkison-Duran

    Neural networks are complex systems designed for advanced data crunching and prediction, or at least that is how we currently use them, but it doesn't have to be that way. We borrowed the concept of neural networks from nature, and in nature the neural network we know best, yet understand so little about, is our own brain. The brain is a complex chemical, electrical, and biomechanical machine capable of extraordinary things, and, incredibly, we are starting to replicate it cybernetically. But what if we stepped back from generative AI for a minute and looked at smaller neural networks again? The human brain has over 86 billion neurons; a starfish has about 500. Even with only 500 neurons, a starfish is capable of many things.

    Key Activities of Starfish

    • Sensing the Environment: Starfish can detect touch, light, temperature, orientation, and water conditions. This helps them navigate and find resources.
    • Locomotion: Their nervous system, particularly the nerve ring and radial nerves, facilitates the coordinated movement of their numerous tube feet, allowing them to crawl along surfaces and even achieve a synchronized bouncing motion.
    • Feeding: When an arm touches food, it can become dominant and direct the starfish towards the food source. Starfish also have the unique ability to evert their stomachs to digest prey externally.
    • Regeneration: Starfish are renowned for their ability to regenerate lost limbs. This remarkable feat is possible because their decentralized nervous system allows each arm to function somewhat independently, and vital organs are duplicated in each arm.
    • Defense: Starfish have various defense mechanisms, including spines and armor, chemical repellents, and the ability to shed an arm to escape predators.
    • Learning: Research suggests that starfish are capable of simple forms of learning, such as habituation (decreasing the response to a repeated stimulus) and perhaps even associative learning, where specific cues become associated with food or danger.

    Communication

    Starfish may use chemical signals (pheromones) to communicate with each other, such as signaling good feeding spots or releasing distress signals.

    How do they achieve this with a limited nervous system?

    Decentralized Control

    Instead of a central brain, their nerve ring and radial nerves distribute control throughout the body.

    Mechanical Coupling

    The tube feet are structurally attached and mechanically coupled, allowing for coordinated movement without requiring a central command center for every action.

    Embodied Cognition

    Some cognitive processes, like directional memory, might be distributed within the limbs themselves.

    How can we apply this to neural networks in programming?

    The AsterMind ELM-based architecture is a working example of how these biological principles can inspire software systems:

    Decentralized Control in AsterMind

    Instead of having one massive neural network responsible for all tasks, AsterMind chains multiple Extreme Learning Machines (ELMs), each specialized in a distinct but related task. This architecture reflects a starfish-like system: there is no central controller. Instead, modules make autonomous predictions and influence one another through shared outputs, much like radial nerves passing signals between the limbs of a starfish.
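    AsterMind's internals aren't shown in this article, but the core idea of one specialized module can be sketched with a classic ELM: a fixed random hidden layer whose readout weights are solved in closed form rather than by backpropagation. The class below is a minimal Python/NumPy illustration, not AsterMind's actual API.

```python
import numpy as np

class ELMModule:
    """One specialized module: random hidden layer, closed-form readout."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)              # fixed random biases
        self.beta = None                                # learned readout

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)             # hidden activations

    def fit(self, X, T):
        # Solve H @ beta ~= T in a single least-squares step (no backprop).
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

    Because training is one linear solve, a module like this is cheap enough to retrain or replace on its own, which is what makes chaining many of them practical.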

    Mechanical Coupling in AsterMind

    Each ELM module's output is tightly coupled to the input of the next module. Just as starfish tube feet are structurally linked, the ELMs in this system are mechanically coupled via intermediate feature vectors, for instance:

    • The output of an AutoComplete ELM can directly feed an Encoder ELM.
    • The Encoder ELM provides features to multiple downstream classifiers.
    • A Combiner ELM can then merge multiple modalities into a single prediction vector consumed by other ELMs.
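    The chain above can be sketched as follows. The module names come from the list, but the interfaces are hypothetical; each trained ELM is reduced to a fixed random transform so the focus stays on how one module's output mechanically feeds the next module's input.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_module(n_in, n_out):
    """Stand-in for a trained ELM: a fixed transform with set dimensions."""
    W = rng.normal(size=(n_in, n_out))
    return lambda x: np.tanh(x @ W)

autocomplete = make_module(16, 32)  # raw text features -> completion features
encoder      = make_module(32, 24)  # completion features -> shared embedding
intent_clf   = make_module(24, 4)   # embedding -> intent scores
topic_clf    = make_module(24, 6)   # embedding -> topic scores
combiner     = make_module(10, 3)   # merged modalities -> final prediction

x = rng.normal(size=(1, 16))        # one input sample
z = encoder(autocomplete(x))        # coupling: output feeds the next input
merged = np.concatenate([intent_clf(z), topic_clf(z)], axis=1)
y = combiner(merged)                # 4 + 6 = 10 features in, 3 out
```

    The coupling is enforced purely by matching vector dimensions, so no module needs to know anything about the system beyond its own inputs.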

    Embodied Cognition in AsterMind

    This hardwired sequence creates emergent coordination without requiring any model to know the whole system state, just like a starfish moving as one without centralized control.

    Rather than all cognition residing in a monolithic model, AsterMind distributes intelligence, for instance:

    • Different agents can handle encoding and classification.
    • Directional context and metadata can be embedded in the inputs themselves.
    • Confidence and refinement can be decided locally from partial inputs.

    No single ELM needs to "understand everything." Each one embodies intelligence in its structure, training, and input context, much as a starfish limb retains a sense of direction or feeding dominance.
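    A minimal sketch of such a local decision, assuming each module exposes raw scores and applies its own confidence threshold (the 0.7 cutoff is an arbitrary illustration, not a value from AsterMind):

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())  # shift for numerical stability
    return e / e.sum()

def local_decision(scores, threshold=0.7):
    """Each module judges its own output; no global state is consulted."""
    probs = softmax(scores)
    best = int(probs.argmax())
    if probs[best] >= threshold:
        return ("accept", best)        # confident: emit the prediction
    return ("defer", best)             # unsure: hand off or ask for more input
```

    Because the gate only needs that module's own scores, confidence handling stays local even when many modules run in a chain.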

    Why This Is Unique

    This approach diverges from conventional neural architectures in several key ways. Most machine learning systems—especially those utilizing deep learning—rely on centralized, monolithic models, such as large language models (LLMs), which ingest raw input and output a result, often with limited interpretability. AsterMind's ELM chain architecture instead distributes cognitive functions into purpose-built, lightweight networks that collaborate.

    What makes this system particularly novel is:

    • Specialized autonomy: Each ELM is optimized for a single cognitive task, reducing complexity and improving transparency.
    • Dynamic orchestration: Instead of static processing, results from upstream modules guide behavior downstream, including fallback logic when confidence is low.
    • Modularity: Any ELM can be retrained or swapped independently, allowing the overall system to evolve organically, mirroring biological systems.
    • Embodied logic: Intelligence is embedded not only in the models but also in their data flow and environmental interaction, enabling contextual reasoning.
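    The fallback behavior described above might look like the following sketch, where `orchestrate` and the `(label, confidence)` interface are hypothetical stand-ins rather than AsterMind's real API:

```python
def orchestrate(x, modules):
    """Run specialist modules in order; fall back when confidence is low.

    `modules` is a list of (name, predict_fn) pairs, most specialized first.
    Each predict_fn returns (label, confidence) in [0, 1].
    """
    for name, predict in modules:
        label, confidence = predict(x)
        if confidence >= 0.7:
            return name, label      # an upstream result decides the outcome
    return "default", None          # every specialist declined

# Any entry can be retrained or swapped independently of the others:
specialists = [
    ("intent_elm",  lambda x: ("greeting", 0.4)),  # unsure on this input
    ("keyword_elm", lambda x: ("greeting", 0.9)),  # simpler fallback is sure
]
```

    Swapping a module here means replacing one list entry, which mirrors the modularity point above: the rest of the pipeline is untouched.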

    This is not just a technical novelty; it opens a philosophical door. Rather than hardcoding behavior or relying solely on centralized intelligence, AsterMind explores how cognition can emerge from relationships and structure—something nature figured out long ago.

    Data as a First-Class Citizen

    In traditional programming paradigms, logic dominates. We write code that defines behavior, and data follows those structures. But in the AsterMind model, this is inverted. The architecture is data-centric: models learn from data, structure themselves around data, and even determine what data is still missing.

    Each ELM doesn't merely respond to data—it depends on it. Inputs are not passive—they guide how each module behaves, adapts, and routes decisions. This makes data the driving force behind behavior, not just a parameter.

    This could represent a new paradigm for programming:

    • Instead of defining business logic upfront, you let ELMs discover structure from data.
    • Instead of static forms or workflows, interfaces dynamically query users for just the data they need.
    • Instead of having all logic baked into the code, the intelligence lives in data relationships and adaptive models.

    This is still an emerging concept that has not been widely adopted or formalized in mainstream software engineering. You could argue it is a new form of programming: data-first, emergent, embodied computation, and it has yet to be fully explored. AsterMind could be one of the first practical demonstrations of this idea.

    Example Architectures for AsterMind

    Before diving into the architecture examples, it's worth discussing how these systems can be trained in the real world, especially when labeled data is scarce. One powerful approach is to begin with synthetic training data generated by rules or even large generative models (like GPT). For example, you could prompt a language model to simulate realistic conversations, form-filling behavior, or sensor logs, giving you a rich, labeled dataset to bootstrap training.
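    Synthetic bootstrapping doesn't even require a generative model; simple rules can already produce a labeled starter set. The categories and phrases below are invented purely for illustration:

```python
import random

random.seed(0)  # reproducible synthetic data

GREETINGS = ["hi", "hello", "hey there", "good morning"]
QUESTIONS = ["what time is it", "where is the office",
             "how do I reset my password"]

def synthesize(n=100):
    """Rule-based (text, label) pairs to bootstrap training before any
    real-world data exists."""
    rows = []
    for _ in range(n):
        if random.random() < 0.5:
            rows.append((random.choice(GREETINGS), "greeting"))
        else:
            rows.append((random.choice(QUESTIONS), "question"))
    return rows

dataset = synthesize(200)  # ready for a module's fit step
```

    A generative model would simply replace the rules with prompted samples; the downstream training loop stays the same either way.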

    Once your ELM network is up and running using synthetic data, it can start making real-world predictions. From there, you activate a human-in-the-loop feedback system: users or domain experts review the predictions, flag errors, and provide corrections. These corrections become new training examples, which you can use to retrain your ELMs incrementally. Over time, the models shift from relying on synthetic patterns to learning from actual, verified real-world data.
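    One way to sketch that feedback loop in Python (the `FeedbackLoop` class and its interface are illustrative, not part of AsterMind): corrections accumulate, and the model is rebuilt once enough of them arrive. Retraining from scratch is viable here precisely because an ELM trains in a single least-squares solve.

```python
class FeedbackLoop:
    """Accumulate human corrections and periodically retrain a model.

    `train_fn(examples)` rebuilds the model from every example seen so far,
    so synthetic seed data is gradually diluted by verified corrections.
    """

    def __init__(self, train_fn, seed_examples, retrain_every=10):
        self.train_fn = train_fn
        self.examples = list(seed_examples)   # start from synthetic data
        self.retrain_every = retrain_every
        self.pending = 0
        self.model = train_fn(self.examples)

    def correct(self, x, true_label):
        """A reviewer flags a wrong prediction and supplies the right answer."""
        self.examples.append((x, true_label))
        self.pending += 1
        if self.pending >= self.retrain_every:
            self.model = self.train_fn(self.examples)  # fold feedback in
            self.pending = 0
```

    Batching the retrains (`retrain_every`) is a judgment call: retraining on every single correction is simpler but wasteful when corrections arrive in bursts.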

    This approach is especially valuable when starting from zero. Synthetic data gets you off the ground fast, while continuous human feedback ensures your system adapts and improves as it encounters genuine, messy, real-world inputs. It's a practical blend of bootstrapping and iterative refinement—perfect for agile, evolving systems like AsterMind.
