We’re excited to share that we’ve raised $75M in new funding from Sequoia and Spark Capital—partnering with Sonya Huang, Mikowai Ashwill, and Yasmin Razavi, all of whom are incredibly thoughtful and deeply supportive of our long-term mission of building aligned AGI. We’ve also brought on angels & advisors including Milan Kovac, Stanley Druckenmiller, and Andrej Karpathy.
Our early results with FDM-1 moved computer use from a data-constrained regime to a compute-constrained one, and this latest round of funding unlocks several orders of magnitude of compute scaling for that work. With the FDM model series we have a path to scale agentic capabilities through video pretraining, and we expect to achieve superhuman performance on general computer tasks, just as current language models achieve superhuman performance on coding tasks.
We’re also now able to invest in the blue-sky research necessary for our long-term mission of building aligned general learners. To realize the civilizationally transformative impacts of AI, models must generalize far beyond their training distributions, actively exploring and building skills in new environments. This capability represents a substantial shift from the current paradigm of model training. We believe that current alignment techniques are insufficient to predictably and safely steer a model with human-level learning capabilities, so we’re studying small versions of this problem in controlled environments to develop a science of alignment for general learners.
We’re a team of 6 people in San Francisco. We’re hiring world-class researchers and engineers to help us achieve our mission. If that’s you, please get in touch.
— Standard Intelligence