Anthropic Raises $580 Million Series B to Develop Steerable, Interpretable, and Robust AI Systems
Anthropic, a leading AI safety and research company headed by CEO Dario Amodei, a co-founder and former OpenAI VP of Research, has raised $580 million in its Series B round. The round was led by Sam Bankman-Fried, CEO of FTX, with participation from Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research (CERR).
With the new funding, Anthropic plans to expand its research into AI alignment, aiming to develop AI systems that can reason about the world and make decisions grounded in that understanding.
The company is also focused on creating AI systems that are more steerable, robust, and interpretable, with an emphasis on practical applications. It has made progress in understanding the source of pattern-matching behavior in large language models and has developed baseline techniques to make such models more "helpful and harmless".
In addition, Anthropic has begun to mathematically reverse-engineer the behavior of small language models. The funding will be used to build large-scale experimental infrastructure for studying and improving AI models.
The company has also released a dataset to help other research labs train models that are better aligned with human preferences, and has published two analyses of sudden changes in the performance of large language models and their potential societal impacts.
Moreover, Anthropic is planning a research collaboration with the University of Oxford on AI safety and alignment, and has released further datasets to aid other research labs in this work.
Anthropic's ongoing research into making AI systems more steerable, robust, and interpretable also includes using reinforcement learning to further improve the properties of large language models, and the company recently published a further analysis of sudden performance changes in large language models and their potential societal impacts.
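For readers unfamiliar with how preference data of this kind is typically used, the sketch below trains a toy reward model on pairwise human-preference comparisons using a Bradley-Terry objective, the standard first stage before reinforcement-learning fine-tuning. It is a generic, hypothetical illustration with synthetic data, not Anthropic's code, dataset, or method.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8
true_w = rng.normal(size=DIM)  # hidden "true" human preference direction (synthetic)

# Hypothetical preference dataset: each pair is (chosen, rejected) feature vectors,
# meaning a human preferred the response represented by `chosen`.
pairs = []
for _ in range(500):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    pairs.append((a, b) if a @ true_w > b @ true_w else (b, a))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a linear reward model r(x) = w @ x by gradient ascent on the
# Bradley-Terry log-likelihood: log sigmoid(r(chosen) - r(rejected)).
w = np.zeros(DIM)
learning_rate = 0.5
for _ in range(200):
    grad = np.zeros(DIM)
    for chosen, rejected in pairs:
        diff = chosen - rejected
        grad += (1.0 - sigmoid(w @ diff)) * diff
    w += learning_rate * grad / len(pairs)

# Check how often the learned reward agrees with held-out "human" preferences.
correct = 0
for _ in range(1000):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    correct += (a @ true_w > b @ true_w) == (a @ w > b @ w)
print(f"agreement with held-out preferences: {correct / 1000:.1%}")
```

In a full reinforcement-learning pipeline, a reward model of this kind would score sampled model outputs, and a subsequent RL step would update the language model to produce higher-reward responses.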
This significant investment in Anthropic signals a growing recognition of the importance of AI safety research and underscores the company's commitment to building AI systems that are safe, beneficial, and aligned with human values.