After years of research and development, ULAM is evolving. We're shifting from consulting to investing our own capital in the technologies and companies we believe will shape the AGI future. Our main beliefs now guide not just our research, but our investment strategy.
We make machines think
We are pushing toward the final frontier of intelligence: Artificial General Intelligence.
AI agents will surpass the output of human workers before 2040.
We research, invest, and build toward AGI, guided by the beliefs below.

Our Main Beliefs
AGI will be achieved through Swarm Intelligence
Swarm-style coordination can combine many narrow agents into an emergent system. Think decentralized problem-solving, redundancy, and robustness - collective intelligence emerging from simple individual behaviors to solve complex problems.
The path to AGI looks less like a single towering model and more like a coordinated economy of agents. In a swarm, many specialized models—planners, tool users, researchers, critics, executors—coordinate through protocols to produce results no single component could achieve alone. The intelligence emerges from composition: decomposition of tasks, parallel exploration, competitive proposal and selection, and relentless self-correction. What matters is not just raw model capacity but the market-like mechanisms that allocate attention, verify claims, price uncertainty, and converge on decisions.
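As a toy illustration of that composition (every name and interface below is hypothetical, not a description of any production system), the core loop can be as small as: fan a task out to several proposers in parallel, let independent critics score each proposal, and select the winner.

    # Toy propose-critique-select loop for a small agent swarm.
    # All names are hypothetical; real systems add routing, memory, tools, and retries.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Proposal:
        agent: str         # which specialist produced this answer
        answer: str
        confidence: float  # the proposer's own uncertainty estimate, in [0, 1]

    def run_swarm(task: str,
                  proposers: List[Callable[[str], Proposal]],
                  critics: List[Callable[[str, Proposal], float]]) -> Proposal:
        # Parallel exploration: each specialist attempts the task independently.
        proposals = [propose(task) for propose in proposers]
        # Competitive selection: critics score every proposal, and the score discounts
        # confident-but-unverified answers - the "pricing uncertainty" step above.
        def score(p: Proposal) -> float:
            return sum(critic(task, p) for critic in critics) * p.confidence
        return max(proposals, key=score)

Self-correction enters when losing proposals are returned to their authors as critique instead of being discarded.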
Practically, a swarm architecture scales along three axes at once: more agents, more tools, and richer coordination rules. Add narrow experts to widen coverage; plug them into software, data stores, and robots to extend reach; and tighten the contract between them with bidding, critique, simulation, and testing. The system improves because the slowest step—learning—shifts from weight updates to organizational refinement: better task routing, smarter memory, higher-quality feedback, and stronger incentives for truthful, useful outputs. As orchestration matures, capability rises faster than any single model’s parameter count would suggest.
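One hypothetical way to picture "organizational refinement": capability routing is just data, so improving the table below raises system capability without touching any model's weights. The tags and agent names are assumptions.

    # Hypothetical task router; capability tags and agent names are illustrative only.
    ROUTES = {
        "plan":     "planner-agent",
        "retrieve": "research-agent",
        "execute":  "executor-agent",
        "verify":   "critic-agent",
    }

    def route(task_type: str) -> str:
        # Unknown work defaults to the planner, which can decompose it further.
        return ROUTES.get(task_type, "planner-agent")

    assert route("verify") == "critic-agent"
    assert route("negotiate") == "planner-agent"   # falls back for decomposition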
The agentic future is an operations story as much as a research story. Swarms demand durable identity, permissions, and budgets for agents; audit trails and reproducibility for their decisions; and service-level objectives for latency, cost, and reliability. Organizations will talk about “intelligence supply chains”: prompts, policies, tools, and datasets moving through staging, canarying, and production, with spend controls, incident response, and red-teaming baked in. Winners will treat the swarm like a product line—versioned, observable, costed—rather than a demo, and they will measure lift in cycle time, quality, and margin at the workflow level, not model benchmarks in isolation.
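A hypothetical agent manifest makes that operational contract concrete; every field name below is an assumption, but the categories (identity, permissions, budget, service levels, audit) mirror the ones above.

    # Hypothetical agent manifest: versioned, observable, costed.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AgentManifest:
        agent_id: str                     # durable identity
        version: str                      # which prompt/policy/tool bundle is live
        allowed_tools: List[str] = field(default_factory=list)  # permissions
        daily_budget_usd: float = 0.0     # hard spend ceiling
        latency_slo_ms: int = 2000        # service-level objective
        audit_log_uri: str = "s3://example-bucket/audit/"  # placeholder location for decision records

    procurement_bot = AgentManifest(
        agent_id="procurement-bot-7",
        version="2025.03-canary",
        allowed_tools=["erp.read", "email.send"],
        daily_budget_usd=500.0,
    )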
Safety and governance become system properties, not afterthoughts. Swarms are robust because they are redundant and diverse: different agents verify each other, simulate consequences, and quarantine risky actions. They are also governable because behavior is shaped at the coordination layer—what tools are allowed, what proofs are required, which critics must sign off—so policy updates propagate instantly without retraining. With the right checkpoints, you can require provenance, chain-of-thought summaries for auditors (not end users), and sandboxed execution before anything touches production systems or the physical world.
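A sketch of a coordination-layer checkpoint, under assumed names: the gate, not the model, decides whether an action may run, which is why a policy update changes behavior instantly without retraining.

    # Hypothetical coordination-layer gate: policy is data, so updates need no retraining.
    POLICY = {
        "allowed_tools": {"search", "simulator", "sandbox.exec"},
        "required_signoffs": 2,        # independent critics that must approve
        "require_provenance": True,
    }

    def gate(tool: str, signoffs: int, has_provenance: bool) -> bool:
        if tool not in POLICY["allowed_tools"]:
            return False               # quarantine: tool is not on the whitelist
        if signoffs < POLICY["required_signoffs"]:
            return False               # not enough critics signed off
        if POLICY["require_provenance"] and not has_provenance:
            return False               # no audit trail, no execution
        return True

    assert gate("sandbox.exec", signoffs=2, has_provenance=True)
    assert not gate("prod.exec", signoffs=3, has_provenance=True)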
Framed this way, “AGI” arrives gradually as the swarm crosses thresholds of generality, autonomy, and trust. You do not wait for one model to know everything; you assemble a society of competent agents, give them memory, tools, incentives, and courts, and let them compete and collaborate toward goals. As the cost of coordination falls and the quality of verification rises, the system behaves more like a capable organization than a chatbot—able to plan, act, learn, and justify at scale. That’s the agentic future: intelligence as an ecosystem, where capability emerges from architecture and governance, not a single monolith.
Rare & Precious Metals are key to Robotics & Computing
Advanced semiconductors, batteries, sensors, and high-torque motors rely on a small set of critical elements whose unique properties are hard to replace. Designing with these materials, and planning for their availability, directly affects performance, cost, and time-to-market.
Robotics and advanced computing are increasingly gated by materials economics. Rare earths like neodymium, dysprosium, and terbium concentrate the torque and precision needed for compact motors; gallium nitride and silicon carbide lift power density in motor drives and data-center converters; tantalum, hafnium, ruthenium, and indium surface across capacitors, transistor stacks, and displays. In parallel, silver and gold act as connectivity and reliability enablers, carrying current cleanly and preserving signal integrity at the edges where downtime is most expensive. Together, these inputs shift procurement from a back-office function to a strategic lever that determines launch cadence and unit margins.
Demand is being pulled by three engines that compound each other: electrified motion in growing robot fleets, high-density compute for training and inference, and the energy transition that powers both. Each new motor adds rare-earth magnet mass; each rack adds wide-bandgap devices and thermal interfaces; each expansion of solar and storage multiplies silver usage and pressures upstream refining. Even when per-unit grams are small, deployment at industrial scale turns those grams into kilotons and pushes companies toward multi-year, capacity-linked agreements instead of spot buys that collapse under cycle pressure.
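A back-of-the-envelope illustration of that scaling (both figures below are assumed for the example, not sourced estimates):

    # Assumed, illustrative numbers only.
    grams_per_robot_magnet = 300      # rare-earth magnet mass per motor set (assumed)
    robots_per_year = 5_000_000       # hypothetical fleet build rate

    tonnes_per_year = grams_per_robot_magnet * robots_per_year / 1_000_000
    print(f"{tonnes_per_year:,.0f} tonnes of magnet material per year")  # -> 1,500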
Silver and gold matter on the business side because they convert reliability into a financial outcome. Data centers, telecom, aerospace, medical, and heavy industry pay for availability, and premium finishes and connections using these metals reduce outages and RMAs across multi-year service lives. That effect shows up in total cost of ownership rather than unit price, which is why mature buyers justify higher input costs with hard metrics like uptime, service intervals, and warranty performance. As volumes ramp, scrap and end-of-life recovery for Au/Ag create credits that blunt price volatility and improve gross margin resilience.
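A simplified total-cost-of-ownership comparison (every figure is a placeholder) shows how a pricier precious-metal finish can still win on the hard metrics:

    # Placeholder numbers, illustrating the TCO argument rather than quoting data.
    def tco(unit_price: float, failures_per_year: float,
            cost_per_failure: float, years: int = 5) -> float:
        return unit_price + failures_per_year * cost_per_failure * years

    standard_finish = tco(unit_price=2.0, failures_per_year=0.020, cost_per_failure=400)
    gold_finish     = tco(unit_price=6.0, failures_per_year=0.002, cost_per_failure=400)
    print(standard_finish, gold_finish)   # 42.0 vs 10.0 over a five-year service life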
Supply constraints are less about geology and more about processing concentration and policy. Rare earth separation is clustered; gallium and germanium exports can be throttled; cobalt and nickel flows hinge on a handful of jurisdictions; plating and finishing capacity for precious metals is finite and bookable months ahead. Companies that map their BOMs to specific refining and finishing steps—rather than just mines and distributors—see disruptions earlier, negotiate better, and can stage inventory where it actually breaks bottlenecks. Indexed pricing, hedged agreements, toll refining, and take-back programs turn volatile inputs into manageable, contractible services.
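One hypothetical way to operationalize that mapping: tie each BOM line to its processing steps and flag single-processor choke points. Parts, steps, and counts below are illustrative.

    # Illustrative BOM-to-processing map; nothing here reflects a real supply chain.
    BOM = {
        "traction_magnet": {"steps": ["mining", "separation", "sintering"], "qualified_processors": 1},
        "power_module":    {"steps": ["wafer", "epitaxy", "packaging"],     "qualified_processors": 3},
        "connector_set":   {"steps": ["stamping", "gold_plating"],          "qualified_processors": 2},
    }

    # A part with a single qualified processor is the bottleneck worth pre-booking.
    choke_points = [part for part, info in BOM.items() if info["qualified_processors"] < 2]
    print(choke_points)   # ['traction_magnet']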
The practical strategy is to treat rare metals as performance gates and silver/gold as uptime insurance, then plan commercial moves accordingly. Align product roadmaps with realistic materials lead times, pre-book separation and finishing capacity in lockstep with deployment schedules, and structure pricing to pass through metal curves without constant re-quotes. Build circularity into the P&L so recovered magnets, boards, and connectors finance a portion of new builds. In a market where performance and availability are the product, mastering both rare-metal supply and precious-metal reliability becomes a durable competitive advantage.
Nuclear power will be crucial to keep up with energy demands
High-capacity, low-carbon baseload power supports energy-hungry compute clusters and robotics infrastructure. Nuclear energy provides the reliable, scalable foundation needed for AI advancement. Nuclear companies will lead the next revolution, and uranium will be key.
Nuclear is re-emerging as the quiet backbone of the AI era: dense, low-carbon baseload that can sit next to data centers, fabs, and electrified industry. The small-reactor wave shifts the conversation from megaprojects to productized units—factory-built modules, shorter schedules, standardized designs, and passive safety that lowers operational overhead. For compute and robotics infrastructure, the value is simple: predictable, around-the-clock power that doesn’t depend on weather, with sites that can co-locate where demand is growing fastest.
Startups in the space are pushing toward commercial reality by converging on repeatable designs, modular construction, and streamlined operations. The goal isn’t just a new reactor; it’s a replicable playbook: common components, well-defined supply chains, digital twins for maintenance, and service models that look more like long-term power-as-a-service than one-off engineering projects. Advances in load-following, thermal storage integration, and improved safety systems make these reactors more compatible with variable renewables and the spiky loads of AI clusters.
On the fuel side, uranium is the strategic input that sets the pace. After years of under-investment, the market has tightened as utilities and new buyers seek long-dated certainty. Mining, conversion, and enrichment capacity take time to ramp, so procurement is shifting from spot purchases to multi-year contracting and diversified sourcing. As advanced designs arrive, fuel-cycle services—fabrication, take-back, and eventual recycling pathways—become part of the commercial bundle rather than an afterthought, giving large power buyers more confidence in long-horizon planning.
The recent technical advances are practical rather than flashy: improved manufacturing of key components, more mature passive safety features, digital instrumentation and control, and licensing frameworks that favor standardized units over bespoke builds. Together they compress development risk and bring timelines closer to the cadence at which data centers, industrial campuses, and transit electrification are expanding. For operators, that translates to cleaner capacity that can be forecast, financed, and delivered in serial production rather than as a once-in-a-generation bet.
The business case is straightforward. If the next economy is agentic and electrified, it runs on firm power with known costs. Small nuclear units provide that anchor: they hedge volatility, stabilize grids that are absorbing large intermittent resources, and create a pathway to grow compute without chasing distant generation. Companies that secure standardized reactors and fuel arrangements early will control their energy destiny in the 2030s—turning power from a constraint into a competitive advantage.
Money will be mostly digital and easy for AI agents to manage
Programmatic payments, APIs, and smart contracts enable autonomous transactions. Digital-first financial systems allow AI agents to efficiently manage resources, make payments, and participate in the economy.
The agentic economy assumes money that is machine-readable by default. As AI systems plan, procure, and operate on their own schedules, they need payment rails that expose clear rules, deterministic settlement, and programmable controls. Digital-first money—whether account-based through APIs or token-based on open ledgers—turns finance from a back-office handoff into an embedded capability, so agents can price options, post collateral, and close loops without waiting for human batching or bank hours.
Stable-value digital currencies play a central role because they compress volatility and friction. With price-stable tokens, agents can quote, escrow, and settle in seconds while keeping books in unit-of-account terms that match real costs. Programmable settlement allows conditional releases tied to delivery proofs, oracle signals, or service-level outcomes, reducing counterparty risk and disputes. The same mechanisms make micropayments and streaming payments practical, so compute, bandwidth, data access, and robotics services can be billed per second or per task instead of through coarse subscriptions.
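A toy metering-and-settlement sketch (hypothetical names, no real payment rail or API): work is priced per second and funds are released only when a delivery proof checks out.

    # Toy per-task escrow with a delivery condition; no real payment rail is used.
    from dataclasses import dataclass

    @dataclass
    class Escrow:
        amount: float
        released: bool = False

        def release(self, delivery_proof_ok: bool) -> bool:
            # Conditional release: funds move only if the proof check passes.
            if delivery_proof_ok and not self.released:
                self.released = True
            return self.released

    price_per_second = 0.0004                        # assumed rate for a compute task
    escrow = Escrow(amount=price_per_second * 900)   # 15 minutes of work pre-funded
    assert escrow.release(delivery_proof_ok=True)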
The operational benefits compound as finance becomes an API surface. Wallets, custodial policies, and spending limits can be encoded as guardrails: budgets that reset, approvals that require multiple signatures, and compliance checks that run before funds move. This shifts risk management from after-the-fact reconciliation to pre-trade enforcement. For enterprises, it also improves auditability—agents leave cryptographic receipts and immutable logs that can be sampled, monitored, and tied to cost centers, turning financial oversight into real-time observability rather than quarterly archaeology.
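A sketch of pre-trade enforcement, with assumed policy fields: the check runs before funds move, and every attempt leaves a log entry that can be tied to a cost center later.

    # Hypothetical spend guardrail: budget and approval checks happen before any transfer.
    from datetime import date

    POLICY = {"daily_limit": 1_000.0, "multisig_over": 250.0}
    ledger = []   # audit trail of attempted payments

    def authorize(amount: float, spent_today: float, approvals: int) -> bool:
        ok = (spent_today + amount <= POLICY["daily_limit"]
              and (amount <= POLICY["multisig_over"] or approvals >= 2))
        ledger.append({"date": date.today().isoformat(), "amount": amount, "approved": ok})
        return ok

    assert authorize(200.0, spent_today=0.0, approvals=1)       # within both thresholds
    assert not authorize(600.0, spent_today=0.0, approvals=1)   # needs a second signature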
Interoperability across payment stacks matters as much as raw speed. Agents will navigate account-based rails, real-time gross settlement systems, and public chains, choosing routes by latency, fee, and jurisdictional policy. Bridges, on–off ramps, and tokenized deposits make it possible to keep treasury centralized while letting edge agents transact locally, with automated rebalancing bringing funds home. The net effect is a portfolio of settlement options that can be optimized like any other supply chain, with resilience gained from multiple paths rather than a single dependency.
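A minimal route selector under assumed fees, latencies, and policy flags: pick the cheapest rail that satisfies the constraints, and return nothing when no route qualifies so the caller can fall back.

    # Illustrative settlement-route selection; rails, fees, and latencies are assumptions.
    RAILS = [
        {"name": "instant_rtgs",  "fee_bps": 2,   "latency_s": 5,  "jurisdiction_ok": True},
        {"name": "public_chain",  "fee_bps": 8,   "latency_s": 12, "jurisdiction_ok": True},
        {"name": "card_network",  "fee_bps": 180, "latency_s": 3,  "jurisdiction_ok": True},
        {"name": "offshore_rail", "fee_bps": 1,   "latency_s": 2,  "jurisdiction_ok": False},
    ]

    def pick_rail(max_latency_s: int):
        eligible = [r for r in RAILS
                    if r["jurisdiction_ok"] and r["latency_s"] <= max_latency_s]
        return min(eligible, key=lambda r: r["fee_bps"]) if eligible else None

    print(pick_rail(max_latency_s=10)["name"])   # instant_rtgs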
As these capabilities mature, “money” behaves more like infrastructure than a product. Contracts execute themselves, invoices collapse into events, and working capital cycles tighten because settlement and delivery converge. In that world, the advantage goes to organizations that treat digital cash, stablecoins, and programmable finance as core primitives: they give agents discretionary budgets with policy guardrails, codify compliance and reporting, and design business models—usage-based, outcome-based, marketplace-based—that only make sense when transactions are instant, verifiable, and automated.
Our 2017 manifesto - We want to make machines think
Currently, machines can perform only low-level general tasks (like ordering a taxi), yet they exceed human capabilities in many specialized tasks.
Computers are better than humans at playing chess, Go, and Atari games. We go further and treat mathematics as the next game at which machines will excel. Mathematics is a test case for our ability to make machines think abstractly.
As a company, we research machine reasoning and then apply it to different areas of business (finance, telecom, industry, etc.).