v0.1.0 — Now with distributed training

The Intelligence Engine

Self-verifying, self-evolving AI that runs on your hardware. Five specialized domains. Quantum-enhanced reasoning. Train it with the Hive. No GPU required.

0.982 benchmark · 78.6 tok/s (MPS) · 5 domains · 12 API endpoints

Capabilities

Not a chatbot. Infrastructure.

Production-grade AI with domain adapters, neural compression, autonomous invention, and self-healing training.

Multi-Domain LoRA

5 specialized adapters — programming, cybersecurity, quantum, fintech, general. Switch domains with a single parameter. Each adapter is independently trainable.
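Conceptually, domain switching is a lookup from the request's `domain` parameter to a LoRA adapter. A minimal sketch, assuming a registry keyed by the five domains above (the adapter names echo the `cuilabs/bee-hive-*` naming seen in the Hive log and are illustrative, not Bee's actual internals):

```python
# Hypothetical adapter registry; names are illustrative assumptions.
ADAPTERS = {
    "programming": "cuilabs/bee-hive-programming",
    "cybersecurity": "cuilabs/bee-hive-cybersecurity",
    "quantum": "cuilabs/bee-hive-quantum",
    "fintech": "cuilabs/bee-hive-fintech",
    "general": "cuilabs/bee-hive-general",
}

def select_adapter(domain: str) -> str:
    """Resolve a request's `domain` parameter to its LoRA adapter."""
    if domain not in ADAPTERS:
        raise ValueError(f"unknown domain: {domain}")
    return ADAPTERS[domain]

print(select_adapter("quantum"))  # cuilabs/bee-hive-quantum
```

Because each adapter sits on top of the same frozen base model, switching is a weight swap, not a model reload.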

Neural Compression

VQ-VAE hierarchical autoencoders compress hidden states at 2x/4x/8x. Process longer contexts with the same memory footprint.
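The memory arithmetic behind the ratios is simple: compressing hidden states k-fold lets the same activation budget hold k times as many tokens. A back-of-envelope sketch (the budget and dimensions below are illustrative assumptions, not Bee's measured figures):

```python
# Illustrative only: tokens that fit in a fixed activation budget
# when hidden states are compressed by `ratio` (1x/2x/4x/8x).
def max_context(budget_mb: float, hidden_dim: int,
                bytes_per_val: int, ratio: int) -> int:
    per_token = hidden_dim * bytes_per_val / ratio  # bytes per compressed token
    return int(budget_mb * 1024 * 1024 // per_token)

for ratio in (1, 2, 4, 8):
    print(f"{ratio}x -> {max_context(512, 960, 2, ratio)} tokens")
```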

RAG + Document Grounding

FAISS + sentence-transformers retrieve relevant chunks from your documents and inject them into the prompt. Every answer is grounded in retrieved sources, sharply reducing hallucination on known material.
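The retrieval step ranks document chunks by similarity to the query embedding. A simplified stand-in using stdlib bag-of-words cosine similarity (in Bee the embeddings come from sentence-transformers and the index is FAISS; this sketch only illustrates the ranking logic):

```python
# Toy retrieval: rank chunks by cosine similarity of word counts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "entanglement links the states of two particles",
    "LoRA adapters fine-tune a frozen base model",
]
query = embed("what is quantum entanglement")
best = max(chunks, key=lambda c: cosine(query, embed(c)))
print(best)  # entanglement links the states of two particles
```

The top-ranked chunks are then injected into the prompt so the model answers from your documents rather than from memory.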

Self-Verification

Adaptive router estimates difficulty, routes queries, and self-verifies outputs. Every response is checked before delivery. 100% verification pass rate.
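The routing loop can be pictured as: score the query's difficulty, pick a strategy, check the output before returning it. A minimal sketch; the heuristics and function names are assumptions, not Bee's internals:

```python
# Illustrative adaptive router: difficulty estimate -> route -> verify.
def estimate_difficulty(prompt: str) -> float:
    words = prompt.split()
    # Longer prompts and proof-style requests score harder (toy heuristic).
    return min(1.0, len(words) / 50 + 0.3 * ("prove" in prompt.lower()))

def route(prompt: str) -> str:
    return "deep" if estimate_difficulty(prompt) > 0.5 else "fast"

def verify(answer: str) -> bool:
    # Placeholder: a real verifier re-scores the output before delivery.
    return bool(answer.strip())

print(route("Explain quantum entanglement"))  # fast
```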

Autonomous Invention

Evolutionary search discovers novel algorithms — attention mechanisms, compression schemes, state-space models. Bee writes its own improvements.
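At its core, evolutionary search is mutate-score-select in a loop. A toy version that climbs a one-dimensional fitness landscape (real invention search scores candidate architectures, not numbers; everything here is illustrative):

```python
# Toy evolutionary search: mutate the best candidate, keep improvements.
import random

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2  # single peak at x = 3

def evolve(generations: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    best = rng.uniform(-10, 10)
    for _ in range(generations):
        child = best + rng.gauss(0, 0.5)    # mutate
        if fitness(child) > fitness(best):  # select
            best = child
    return best

print(round(evolve(), 2))
```

Swap the fitness function for a benchmark score over generated attention or compression variants and the same loop searches architecture space.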

Self-Healing Training

Monitors gradient health, detects training anomalies, auto-adjusts learning rate, rolls back to stable checkpoints. Training never crashes.
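The self-healing loop reduces to: watch gradient norms, cool the learning rate on anomalies, and keep a known-good checkpoint to roll back to. A sketch with illustrative thresholds and names (not Bee's actual implementation):

```python
# Illustrative self-healing monitor for a training loop.
class HealingMonitor:
    def __init__(self, lr: float = 1e-4, grad_clip: float = 10.0):
        self.lr = lr
        self.grad_clip = grad_clip
        self.best_loss = float("inf")
        self.checkpoint = None  # last known-good state

    def step(self, loss: float, grad_norm: float, state: dict) -> str:
        if grad_norm > self.grad_clip or loss != loss:  # spike or NaN
            self.lr *= 0.5                              # cool down
            return "rollback" if self.checkpoint else "skip"
        if loss < self.best_loss:
            self.best_loss = loss
            self.checkpoint = dict(state)               # save stable point
        return "ok"

mon = HealingMonitor()
print(mon.step(2.41, 1.2, {"step": 1}))   # ok
print(mon.step(9.90, 50.0, {"step": 2}))  # rollback
```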

API

OpenAI-compatible. Zero lock-in.

Drop-in replacement for OpenAI. REST + WebSocket streaming. RAG ingestion. Human-in-the-loop feedback. Bearer auth with request tracing.

POST /v1/chat/completions
WS /v1/chat
POST /v1/documents/upload
POST /v1/feedback
GET /v1/models
# One line to switch from OpenAI
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "bee",
    "messages": [{
      "role": "user",
      "content": "Explain quantum entanglement"
    }],
    "domain": "quantum"
  }'
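The same call can be built from Python with only the standard library. The endpoint and `domain` field match the curl example above; dispatching the request is left to `urllib.request.urlopen(req)` once a local server is running:

```python
# Build the chat-completions request shown above with stdlib urllib.
import json
import urllib.request

payload = {
    "model": "bee",
    "messages": [{"role": "user", "content": "Explain quantum entanglement"}],
    "domain": "quantum",
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)
# resp = urllib.request.urlopen(req)  # send once the server is up
```

Because the endpoint shape is OpenAI-compatible, any OpenAI client pointed at `http://localhost:8000/v1` should work the same way.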

Bee Hive

Distributed training. Anyone can contribute.

Run one command on any machine. Your compute trains Bee. Validated adapters push to the community automatically. The longer it runs, the smarter Bee gets.

STEP 01

Install

pip install bee-engine

STEP 02

Train

python -m bee.hive

STEP 03

Contribute

Adapters auto-push to Hub

# Works on MacBook, Linux, Colab, Kaggle — any hardware
$ python -m bee.hive --device auto

> Detected Apple M4 Max (MPS)
> Loading base model: SmolLM2-360M-Instruct
> Training domain: programming (LoRA r=16, α=32)
> Eval loss improved: 2.41 → 2.18 ✓
> Pushing adapter to cuilabs/bee-hive-programming
# Cycle complete. Starting next domain...

Runs everywhere

Apple Silicon · NVIDIA CUDA · Google Colab · HuggingFace · VS Code · Any CPU

Intelligence should be free.

Run locally. Train with the Hive. Use the API. No subscriptions. No vendor lock-in. Just intelligence.