A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust.
Anthropic’s long-term benefit trust is a governance mechanism that Anthropic claims helps it prioritize safety over profit, and which has the power to elect some of the company’s board of directors. The trust’s other members include Centre for Effective Altruism CEO Zachary Robinson, Clinton Health Access Initiative CEO Neil Buddy Shah, and Evidence Action President Kanika Bahl.
In a statement, Anthropic CEO Dario Amodei said that Fontaine’s hiring will “[strengthen] the trust’s ability to guide Anthropic through complex decisions” about AI as it relates to security.
“Richard’s expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations,” Amodei continued. “I’ve long believed that ensuring democratic nations maintain leadership in responsible AI development is essential for both global security and the common good.”
Fontaine, who as a trustee won’t have a financial stake in Anthropic, previously served as a foreign policy adviser to the late Sen. John McCain and was an adjunct professor at Georgetown teaching security studies. For more than six years, he led the Center for a New American Security, a national security think tank based in Washington, D.C., as its president.
Anthropic has increasingly courted U.S. national security customers as it looks for new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic’s major partner and investor, Amazon, to sell Anthropic’s AI to defense customers.
To be clear, Anthropic isn’t the only top AI lab pursuing defense contracts. OpenAI is seeking to build a closer relationship with the U.S. Defense Department, and Meta recently revealed that it’s making its Llama models available to defense partners. Meanwhile, Google is refining a version of its Gemini AI capable of operating within classified environments, and Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models.
Fontaine’s hiring comes as Anthropic beefs up its executive ranks. In May, the company named Netflix co-founder Reed Hastings to its board.