

    AsterMind-ELM Documentation

    MIT license

    A modular Extreme Learning Machine (ELM) library for JS/TS (browser + Node)

    What you can build — and why this is groundbreaking

    Instant, tiny, on-device ML for the web — train in milliseconds, predict with microsecond latency

    AsterMind brings instant, tiny, on-device ML to the web. It lets you ship models that train in milliseconds, predict with microsecond latency, and run entirely in the browser — no GPU, no server, no tracking. With Kernel ELMs, Online ELM, DeepELM, and Web Worker offloading, you can create:

    • Private, on-device classifiers (language, intent, toxicity, spam) that retrain on user feedback
    • Real-time retrieval & reranking with compact embeddings (ELM, KernelELM, Nyström whitening) for search and RAG
    • Interactive creative tools (music/drum generators, autocompletes) that respond instantly
    • Edge analytics: regressors/classifiers from data that never leaves the page
    • Deep ELM chains: stack encoders → embedders → classifiers for powerful pipelines, still tiny and transparent

    Why it matters: ELMs give you closed-form training (no heavy SGD), interpretable structure, and tiny memory footprints. AsterMind modernizes ELM with kernels, online learning, workerized training, robust preprocessing, and deep chaining — making seriously fast ML practical for every web app.
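    The closed-form training mentioned above can be sketched from scratch in a few lines. This is an illustrative implementation of the generic ELM recipe (random hidden layer, ridge solve), not the library's internals; `trainELM`, `matmul`, and `solve` are hypothetical helper names:

```javascript
// Generic ELM sketch: H = tanh(X·W + b) with random, fixed W and b,
// then a single ridge solve: beta = (HᵀH + λI)⁻¹ Hᵀ T.

function matmul(A, B) {
  return A.map(row =>
    B[0].map((_, j) => row.reduce((s, a, k) => s + a * B[k][j], 0)));
}

function transpose(A) {
  return A[0].map((_, j) => A.map(row => row[j]));
}

// Solve M·x = y by Gaussian elimination with partial pivoting.
function solve(M, y) {
  const n = M.length;
  const A = M.map((row, i) => [...row, y[i]]);
  for (let i = 0; i < n; i++) {
    let p = i;
    for (let r = i + 1; r < n; r++) if (Math.abs(A[r][i]) > Math.abs(A[p][i])) p = r;
    [A[i], A[p]] = [A[p], A[i]];
    for (let r = i + 1; r < n; r++) {
      const f = A[r][i] / A[i][i];
      for (let c = i; c <= n; c++) A[r][c] -= f * A[i][c];
    }
  }
  const x = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = A[i][n];
    for (let c = i + 1; c < n; c++) s -= A[i][c] * x[c];
    x[i] = s / A[i][i];
  }
  return x;
}

function trainELM(X, T, hiddenUnits, lambda = 1e-4) {
  const inputDim = X[0].length;
  // Random hidden layer: these weights are never trained.
  const W = Array.from({ length: inputDim }, () =>
    Array.from({ length: hiddenUnits }, () => Math.random() * 2 - 1));
  const b = Array.from({ length: hiddenUnits }, () => Math.random() * 2 - 1);
  const hidden = Xin =>
    matmul(Xin, W).map(row => row.map((v, j) => Math.tanh(v + b[j])));

  // Closed-form ridge solve, one column of beta per output dimension.
  const H = hidden(X);
  const Ht = transpose(H);
  const G = matmul(Ht, H).map((row, i) =>
    row.map((v, j) => (i === j ? v + lambda : v)));
  const HtT = matmul(Ht, T);
  const beta = transpose(transpose(HtT).map(col => solve(G, col)));

  const predict = Xnew =>
    hidden(Xnew).map(h =>
      beta[0].map((_, j) => h.reduce((s, v, k) => s + v * beta[k][j], 0)));
  return { predict };
}
```

    Because W and b stay fixed, "training" is one linear solve — which is why ELMs train in milliseconds rather than iterating SGD epochs.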

    New in this release

    • Kernel ELMs (KELMs) — exact and Nyström kernels (RBF/Linear/Poly/Laplacian/Custom) with ridge solve
    • Whitened Nyström — optional Kmm^(-1/2) whitening via symmetric eigendecomposition
    • Online ELM (OS-ELM) — streaming RLS updates with forgetting factor (no full retrain)
    • DeepELM — multi-layer stacked ELM with non-linear projections
    • Web Worker adapter — off-main-thread training/prediction for ELM and KELM
    • Matrix upgrades — Jacobi eigendecomp, invSqrtSym, improved Cholesky
    • EmbeddingStore 2.0 — unit-norm vectors, ring buffer capacity, metadata filters
    • ELMChain+Embeddings — safer chaining with dimension checks, JSON I/O
    • Activations — added linear and gelu; centralized registry
    • Configs — split into Numeric and Text configs; stronger typing
    • UMD exports — window.astermind exposes ELM, OnlineELM, KernelELM, DeepELM, etc.
    • Robust preprocessing — safer encoder path, improved error handling
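    The OS-ELM bullet above refers to recursive least squares (RLS). The update can be sketched as follows — an illustrative implementation only; `makeRLS` and its options are hypothetical names, not the OnlineELM API:

```javascript
// RLS with a forgetting factor: per sample, given the hidden activation h
// (length L) and target t, update P ← (P − P·h·hᵀ·P / d) / λ and
// beta ← beta + (P·h / d)·eᵀ, where d = λ + hᵀ·P·h and e = t − hᵀ·beta.
// No full retrain is needed; each sample costs O(L²).

function makeRLS(L, outputs, { delta = 1e2, forgetting = 1.0 } = {}) {
  // P starts as delta·I (large delta ≈ weak prior on beta), beta as zeros.
  let P = Array.from({ length: L }, (_, i) =>
    Array.from({ length: L }, (_, j) => (i === j ? delta : 0)));
  const beta = Array.from({ length: L }, () => new Array(outputs).fill(0));

  function update(h, t) {
    const Ph = P.map(row => row.reduce((s, v, k) => s + v * h[k], 0));
    const denom = forgetting + h.reduce((s, v, k) => s + v * Ph[k], 0);
    // Prediction error before the update.
    const e = t.map((tv, j) => tv - h.reduce((s, v, k) => s + v * beta[k][j], 0));
    for (let k = 0; k < L; k++)
      for (let j = 0; j < outputs; j++)
        beta[k][j] += (Ph[k] / denom) * e[j];
    // Forgetting factor < 1 inflates P, discounting old samples.
    P = P.map((row, i) =>
      row.map((v, j) => (v - Ph[i] * Ph[j] / denom) / forgetting));
  }

  const predict = h =>
    beta[0].map((_, j) => h.reduce((s, v, k) => s + v * beta[k][j], 0));
  return { update, predict };
}
```

    With `forgetting` at 1.0 this converges to the batch least-squares solution; values slightly below 1.0 let the model track drifting data.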

    AsterMind: Decentralized ELM Framework Inspired by Nature

    Welcome to AsterMind, a modular, decentralized ML framework built around cooperating Extreme Learning Machines (ELMs) that self-train, self-evaluate, and self-repair — like the nervous system of a starfish.

    How This ELM Library Differs from a Traditional ELM

    This library preserves the core Extreme Learning Machine idea — random hidden layer, nonlinear activation, closed-form output solve — but extends it with:

    • Multiple activations (ReLU, LeakyReLU, Sigmoid, Tanh, Linear, GELU)
    • Xavier/Uniform/He initialization
    • Dropout on hidden activations
    • Sample weighting
    • Metrics gate (RMSE, MAE, Accuracy, F1, Cross-Entropy, R²)
    • JSON export/import
    • Model lifecycle management
    • UniversalEncoder for text (char/token)
    • Data augmentation utilities
    • Chaining (ELMChain) for stacked embeddings
    • Weight reuse (simulated fine-tuning)
    • Logging utilities

    AsterMind is designed for:

    • Lightweight, in-browser ML pipelines
    • Transparent, interpretable predictions
    • Continuous, incremental learning
    • Resilient systems with no single point of failure

    Core Features

    Architecture

    • ✅ Modular Architecture
    • ✅ Closed-form training (ridge/pseudoinverse)
    • ✅ JSON import/export
    • ✅ Self-governing training
    • ✅ Flexible preprocessing

    Activations & Kernels

    • ✅ relu, leakyrelu, sigmoid, tanh, linear, gelu
    • ✅ Initializers: uniform, xavier, he
    • ✅ Kernel ELM with Nyström + whitening
    • ✅ Online ELM (RLS) with forgetting factor
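    For reference, the newer activations are standard functions; `gelu` is commonly computed with the tanh approximation shown below. The library's registry may use the exact erf-based form instead, so treat these one-liners as a sketch:

```javascript
// Common activation functions. GELU uses the tanh approximation
// (Hendrycks & Gimpel); the exact form is 0.5·x·(1 + erf(x/√2)).
const relu = x => Math.max(0, x);
const leakyrelu = (x, alpha = 0.01) => (x >= 0 ? x : alpha * x);
const sigmoid = x => 1 / (1 + Math.exp(-x));
const gelu = x =>
  0.5 * x * (1 + Math.tanh(Math.sqrt(2 / Math.PI) * (x + 0.044715 * x ** 3)));
```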

    Advanced Features

    • ✅ DeepELM (stacked layers)
    • ✅ Web Worker adapter
    • ✅ Embeddings & Chains for retrieval and deep pipelines
    • ✅ Retrieval and classification utilities
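    As a sketch of how retrieval over unit-norm embeddings works (hypothetical names, not the EmbeddingStore API): once vectors are unit-normalized, cosine similarity reduces to a plain dot product, so top-k retrieval is a scan, score, and sort:

```javascript
// Normalize a vector to unit length.
const normalize = v => {
  const n = Math.hypot(...v);
  return v.map(x => x / n);
};

// Return the k entries of `store` most similar to `query`.
// With unit-norm vectors, the dot product equals cosine similarity.
function topK(query, store, k = 3) {
  const q = normalize(query);
  return store
    .map(({ id, vector }) => ({
      id,
      score: vector.reduce((s, x, i) => s + x * q[i], 0),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```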

    Deployment

    • ✅ Lightweight (ESM + UMD)
    • ✅ Zero server/GPU required
    • ✅ Private, on-device ML
    • ✅ Numeric + Text configs

    Installation

    NPM (scoped package):

    npm install @astermind/astermind-elm
    # or
    pnpm add @astermind/astermind-elm
    # or
    yarn add @astermind/astermind-elm

    CDN / <script> (UMD global astermind):

    <!-- jsDelivr -->
    <script src="https://cdn.jsdelivr.net/npm/@astermind/astermind-elm/dist/astermind.umd.js"></script>
    
    <!-- or unpkg -->
    <script src="https://unpkg.com/@astermind/astermind-elm/dist/astermind.umd.js"></script>
    
    <script>
      const { ELM, KernelELM } = window.astermind;
    </script>

    Usage Examples

    Basic ELM Classifier

    import { ELM } from "@astermind/astermind-elm";
    
    const config = { categories: ['English', 'French'], hiddenUnits: 128 };
    const elm = new ELM(config);
    
    // Load or train logic here
    const results = elm.predict("bonjour");
    console.log(results);

    CommonJS / Node:

    const { ELM } = require("@astermind/astermind-elm");

    Why Use AsterMind?

    Because you can build AI systems that:

    • Are decentralized
    • Self-heal and retrain independently
    • Run in the browser
    • Are transparent and interpretable

    Suggested Experiments

    • Compare retrieval performance against Sentence-BERT and TF-IDF baselines
    • Experiment with activations and token vs char encoding
    • Deploy in-browser retraining workflows
    "AsterMind doesn't just mimic a brain—it functions more like a starfish: fully decentralized, self-evaluating, and self-repairing."
    Published: January 15, 2025
    Last updated: June 10, 2025