We make machines think

We are pushing towards the final frontier of intelligence: Artificial General Intelligence.

AI Agents will overtake the output of human workers before 2040.

We research and build towards AGI guided by geometry.


Our Approach

Better Coordinates for AI

Inference is now the bottleneck: memory bandwidth, KV cache pressure, and serving cost often decide what actually reaches production. We attack this with a geometry-led approach to quantization and compression—finding better “coordinates” for neural representations so models retain critical information even at lower precision.
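
As a small illustration of what a coordinate choice means at the lowest level, here is a generic NumPy sketch of symmetric per-channel int8 weight quantization (an illustrative baseline, not our production method): each output channel gets its own scale, so one outlier channel does not consume the rounding-error budget of the whole matrix.

    import numpy as np

    def quantize_per_channel(w: np.ndarray, num_bits: int = 8):
        """Symmetric per-channel quantization of a weight matrix (out_features x in_features).
        Each output channel gets its own scale, so a single outlier row does not
        dictate the rounding error of the entire matrix."""
        qmax = 2 ** (num_bits - 1) - 1                          # 127 for int8
        scales = np.abs(w).max(axis=1, keepdims=True) / qmax    # one scale per row
        scales = np.where(scales == 0, 1.0, scales)             # guard all-zero rows
        w_q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
        return w_q, scales

    def dequantize(w_q: np.ndarray, scales: np.ndarray) -> np.ndarray:
        return w_q.astype(np.float32) * scales

    # Rough check of reconstruction error on a random matrix.
    w = np.random.randn(512, 512).astype(np.float32)
    w_q, s = quantize_per_channel(w)
    print("mean abs error at int8:", float(np.abs(w - dequantize(w_q, s)).mean()))

The interesting research questions start where this sketch stops: which basis, grouping, and scaling choices preserve the information that matters once precision drops.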


The outcome we care about is simple and measurable: lower $/token and latency at the same quality, or higher quality at the same budget. Our work pairs theory with engineering: reproducible evaluation harnesses, practical baselines, and methods that integrate into real serving stacks.
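
For concreteness, this is the arithmetic behind the $/token metric. The numbers are placeholders, not measurements:

    # Illustrative numbers only: $/token from GPU price and sustained throughput.
    gpu_cost_per_hour = 2.50        # assumed on-demand GPU price, USD/hr
    tokens_per_second = 2_000       # assumed sustained decode throughput
    usd_per_million_tokens = gpu_cost_per_hour / (tokens_per_second * 3600) * 1e6
    print(f"${usd_per_million_tokens:.2f} per 1M tokens")      # ~ $0.35
    # If compression lifts throughput to 3,000 tokens/s at equal quality,
    # the same arithmetic gives ~ $0.23 per 1M tokens, roughly a one-third cost cut.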


Near-term focus: quantization efficiency improvements on open models → partner pilots on real workloads → public releases and benchmarks.

New Machine Learning Architectures

Today’s dominant architectures are conservative. We explore alternatives guided by structure, invariances, and representation theory, aiming to unlock capability and efficiency gains that scaling alone won't deliver.


Our architecture research is intentionally tied to deployment reality: we prioritize designs that can improve inference efficiency, reliability, and controllability—especially for agentic systems that must act under constraints and feedback.


Long-horizon goal: a principled design framework for agents and models—grounded in measurable benchmarks, not vibes.

Benchmarks and Datasets

What you measure shapes what you build. Many benchmarks reward “headline capability” while ignoring the constraints that dominate real deployments: latency, memory, robustness, and cost.


We create economic benchmarks that evaluate models the way production teams and frontier labs actually operate—under budgets, SLAs, and iterative improvement loops. This includes datasets and harnesses designed to test autonomy, tool use, and self-improvement in controlled settings.
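
A minimal sketch of what benchmarking under budgets can look like in code. Everything here is hypothetical scaffolding rather than a released harness: the Task shape, the SLA threshold, and the quality-per-dollar summary are assumptions, and the generate, grade, and cost_of callables stand in for whatever serving stack and grader a given pilot uses.

    import time
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        prompt: str
        reference: str

    def evaluate(generate: Callable[[str], str],
                 grade: Callable[[str, str], bool],
                 cost_of: Callable[[str], float],
                 tasks: list[Task],
                 latency_sla_s: float = 2.0) -> dict:
        """Score a model under production-style constraints: raw accuracy,
        accuracy within a latency SLA, and quality per dollar."""
        rows = []
        for task in tasks:
            start = time.perf_counter()
            answer = generate(task.prompt)
            latency = time.perf_counter() - start
            rows.append((grade(answer, task.reference), latency, cost_of(answer)))
        n = len(rows)
        accuracy = sum(ok for ok, _, _ in rows) / n
        sla_accuracy = sum(ok and lat <= latency_sla_s for ok, lat, _ in rows) / n
        total_cost = sum(cost for _, _, cost in rows)
        return {
            "accuracy": accuracy,                      # headline capability
            "accuracy_within_sla": sla_accuracy,       # what users actually see
            "quality_per_dollar": accuracy / max(total_cost, 1e-9),
        }

    # Trivial stand-ins just to show the call shape.
    report = evaluate(
        generate=lambda p: p.upper(),
        grade=lambda ans, ref: ans == ref,
        cost_of=lambda ans: len(ans) * 1e-6,           # e.g. tokens x price per token
        tasks=[Task("hello", "HELLO"), Task("world", "WORLD")],
    )
    print(report)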


Deliverables: open evaluation suites, reference baselines, and partner-specific benchmark tracks for real workloads.

Real-World Focus

We're a research lab, but we don't do research in a vacuum. Our best ideas come from domains where outcomes are measurable and iteration is fast: scientific discovery, quantitative decision-making, and optimization-heavy products.


We collaborate through pilots and co-authored research: we define target constraints, run controlled experiments, and deliver reproducible results that teams can ship. Our default mode is “publish + ship”: open-source when possible, and always benchmarked.


If you're deploying LLMs and your inference bill is growing faster than your revenue, we should talk.

Geometry of Intelligence

We envision a future where AGI emerges from geometric harmony and swarm coordination, creating "Baby AGI" systems that self-improve and accelerate human progress. By focusing on research-led optimization, we target a "Science Singularity"—AI-driven breakthroughs in discovery and innovation.

Key Pillars:

▶ Geometric Machine Learning: Lower costs, better architectures.
▶ Agentic Systems: Emergent intelligence from decentralized coordination.
▶ Real-World Playgrounds: Domains like science (infinite data) and marketing (fast feedback) for testing and refinement.

Join us in shaping this future through partnerships and shared exploration.
