Anthropic

Anthropic, founded in 2021 by seven ex-OpenAI researchers in San Francisco, builds the Claude AI family and raised $30B at a $380B valuation in Feb 2026.

Founded: 2021 · HQ: San Francisco, California, USA · Team: 1,000+ · CEO: Dario Amodei · Funding: $72.3 billion · Valuation: $380 billion (2026 Series G)

About Anthropic

Anthropic is an AI safety company incorporated as a Public Benefit Corporation in January 2021 in San Francisco, California. The company was founded by Dario Amodei (CEO) and Daniela Amodei (President), along with five other researchers who had all previously worked at OpenAI: Tom Brown, Jack Clark, Jared Kaplan, Chris Olah, and Sam McCandlish. The founding team departed OpenAI over disagreements about the pace of commercialization relative to safety research, and set out to build a company where alignment science sits at the center of the product roadmap rather than on the periphery.

The company's flagship consumer product is Claude.ai, a web and mobile interface that gives individuals and teams access to the Claude model family. Claude models are also available through the Claude API at pricing that ranges from $0.25 per million input tokens on the Haiku tier to $15 per million output tokens on the Opus tier. Anthropic provides Claude for Enterprise with additional controls including SAML SSO, zero-data-retention options, and role-based access. Claude is available through Amazon Bedrock, Google Vertex AI, and Microsoft Azure, reflecting a deliberate multi-cloud distribution strategy.

The current model lineup as of May 2026 includes three tiers. Claude Haiku 4.5 targets low-latency, high-throughput workloads at the lowest cost point. Claude Sonnet 4.6 serves as the default general-purpose model and supports a 1-million-token context window, making it suitable for large document analysis and long codebase review. Claude Opus 4.7 is the top-tier reasoning model and scored 70% on the CursorBench software engineering benchmark at $15 per million output tokens.

Anthropic's funding history reflects extraordinary investor conviction in its safety-first approach. The company has raised $72.3 billion in total across 18 rounds.
Amazon has committed $8 billion in total and is Anthropic's primary cloud partner through Project Rainier, a collaboration deploying 500,000 Trainium chips that represents one of the largest single AI compute arrangements disclosed publicly. Google has invested $2 billion in equity and an additional $30 billion in committed cloud spend through Google Cloud. In February 2026, Anthropic closed a $30 billion Series G round led by Coatue Management and GIC (Singapore's sovereign wealth fund) at a $380 billion post-money valuation. As of May 2026, the company is reportedly evaluating a further round at an $850 billion to $900 billion valuation.

Research sits at the core of Anthropic's organizational identity. Chris Olah leads mechanistic interpretability work that attempts to reverse-engineer the internal computations of neural networks, mapping which features and circuits drive specific model behaviors. Jan Leike, who joined from OpenAI's superalignment team in 2024, leads alignment science research. The company's Constitutional AI method trains models using an AI-generated set of principles rather than relying entirely on human feedback at each step, enabling more scalable oversight. Anthropic publishes its Responsible Scaling Policy, a public commitment that links model deployment decisions to measurable safety evaluations.

Anthropic employs between 1,000 and 5,000 people as of 2026, with particularly fast expansion in Europe. The company's EMEA headcount tripled in 2025, supported by new offices in Paris, Munich, and London, in addition to existing offices in Seattle, New York, and Washington DC. The European expansion reflects both growing enterprise demand and the company's engagement with EU regulatory discussions around the AI Act.

On the agentic side, Anthropic launched Claude Code in early 2025, a command-line interface tool that gives developers the ability to run Claude as an autonomous coding agent.
Claude Code can read and write files, run shell commands, and navigate codebases, operating in a loop until a programming task is complete. Separately, Project Glasswing is a research initiative examining how AI systems can be used safely in critical infrastructure contexts, acknowledging the growing deployment of AI in power grids, financial systems, and defense applications.

Compliance certifications include SOC 2 Type II, ISO 27001:2022, ISO/IEC 42001:2023 (the AI management standard), HIPAA eligibility for healthcare customers, and GDPR compliance for EU operations. Full compliance documentation and third-party audit reports are available through the public trust center at trust.anthropic.com. These certifications are prerequisites for Anthropic's enterprise sales in regulated industries including healthcare, finance, and government.

Anthropic's competitive position is distinct from other frontier labs in that it explicitly treats safety research as a competitive differentiator rather than a compliance overhead. Its structure as a Public Benefit Corporation legally obligates it to consider public benefit alongside shareholder returns. This resonates with large enterprises in regulated industries that need defensible justifications for which AI vendor they adopt. The combination of frontier model performance, multi-cloud availability, strong compliance posture, and a credible safety narrative has made Anthropic one of the two most valuable pure-play AI model companies globally.

Looking ahead, Anthropic's roadmap centers on three themes: pushing context windows and agent reliability further with each model generation, deepening mechanistic interpretability research to produce models that can be more rigorously audited, and expanding distribution across enterprise verticals including legal, healthcare, and software development.
The company's trajectory from a seven-person team in January 2021 to a $380 billion valuation by February 2026 reflects the speed at which frontier AI has become a core enterprise infrastructure category.

Mission

To develop AI systems that are safe, interpretable, and aligned with human values through Constitutional AI and responsible deployment practices.

Products

Claude.ai · Claude API · Claude for Enterprise · Claude Code

Compliance

SOC 2 Type II · ISO 27001:2022 · ISO/IEC 42001:2023 · HIPAA · GDPR

Links

Website · GitHub · Twitter · LinkedIn · Blog · Docs

Frequently Asked Questions

Who founded Anthropic and when?

Anthropic was founded in January 2021 by seven researchers who had all previously worked at OpenAI. The founders are Dario Amodei (CEO, formerly VP of Research at OpenAI), Daniela Amodei (President, formerly VP of Operations at OpenAI), Tom Brown, Jack Clark, Jared Kaplan, Chris Olah, and Sam McCandlish. The team left OpenAI due to disagreements about how quickly the company was moving relative to its safety research investments. The company was incorporated as a Public Benefit Corporation in San Francisco, California, a legal structure that requires it to consider public benefit in addition to shareholder returns. Dario Amodei had previously led the GPT-3 research effort at OpenAI before departing to co-found Anthropic. Chris Olah, one of the most cited researchers in mechanistic interpretability, joined from OpenAI and now leads Anthropic's interpretability program. The founding team brought together expertise spanning large-scale training, alignment research, policy, and systems engineering.

How much funding has Anthropic raised and at what valuation?

Anthropic has raised $72.3 billion in total across 18 funding rounds as of May 2026. The most recent major round was a $30 billion Series G closed in February 2026, led by Coatue Management and GIC (Singapore's sovereign wealth fund), which valued the company at $380 billion post-money. Amazon has invested $8 billion in total and is the primary cloud infrastructure partner through Project Rainier, which includes deployment of 500,000 Amazon Trainium chips. Google has invested $2 billion in equity plus a $30 billion committed cloud spend arrangement through Google Cloud. Reports from early 2026 indicate Anthropic is evaluating a further fundraising round at a valuation between $850 billion and $900 billion. The company's valuation has grown faster than almost any private company in history, reaching $380 billion within five years of founding in January 2021. Project Rainier represents one of the largest disclosed AI compute arrangements globally and anchors Anthropic's production infrastructure on AWS.

What are Anthropic's Claude model tiers and pricing?

Anthropic's Claude model family as of May 2026 includes three main tiers available through the API. Claude Haiku 4.5 is the fastest and cheapest option, priced at $0.25 per million input tokens, designed for high-throughput and latency-sensitive workloads. Claude Sonnet 4.6 is the default general-purpose model, supports a 1-million-token context window, and handles tasks requiring long document analysis or large codebase review. Claude Opus 4.7 is the top-tier reasoning model priced at $15 per million output tokens; it scored 70% on the CursorBench software engineering benchmark, one of the highest published scores on that evaluation. All three models are available through the Claude API, the Claude.ai consumer interface, Claude for Enterprise (with SAML SSO and zero-retention options), and cloud marketplaces including Amazon Bedrock, Google Vertex AI, and Microsoft Azure. Enterprise pricing is negotiated separately and typically includes volume discounts and additional compliance guarantees.
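
Per-token pricing makes cost estimation a simple scaling exercise. The sketch below uses only the two rates stated above (Haiku 4.5 input at $0.25/M, Opus 4.7 output at $15/M); the token counts are hypothetical examples, not published figures:

```python
def token_cost(tokens: int, rate_per_million: float) -> float:
    """Cost in USD for `tokens` tokens billed at a per-million-token rate."""
    return tokens / 1_000_000 * rate_per_million

# Rates stated above; token volumes are illustrative.
haiku_input = token_cost(2_000_000, 0.25)  # 2M input tokens on Haiku 4.5
opus_output = token_cost(50_000, 15.00)    # 50k output tokens on Opus 4.7
print(f"Haiku input: ${haiku_input:.2f}, Opus output: ${opus_output:.2f}")
# → Haiku input: $0.50, Opus output: $0.75
```

Note that a full estimate would need both input and output rates for each tier; the document only states the two rates used here.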

What is Constitutional AI and why does Anthropic use it?

Constitutional AI (CAI) is a training method developed and published by Anthropic in 2022 that uses an AI-generated set of principles to guide model behavior during training. Rather than relying on human raters to evaluate every response at each step, CAI uses a critic model to score responses against a written constitution of principles, making oversight more scalable as models grow larger. The constitutional principles cover topics including honesty, harm avoidance, and respect for human autonomy. This approach reduces dependence on large human annotation workforces for alignment while making the training criteria more transparent and auditable. Anthropic argues that CAI produces models that are more consistently aligned because the governing principles are explicit rather than implicit in human annotator preferences. The method was published as a research paper in December 2022 and has influenced alignment research at other organizations. CAI is one of several alignment research directions at Anthropic alongside mechanistic interpretability and the Responsible Scaling Policy.
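
The critique-and-revision pattern at the heart of CAI can be caricatured in a few lines. This is a deliberately toy sketch: in the real method both the critic and the reviser are language models, whereas here they are stubbed out with string checks, and the "principles" are invented placeholders:

```python
# Toy stand-ins for constitutional principles (illustrative only).
CONSTITUTION = [
    ("honesty", lambda text: "I'm certain" not in text),
    ("harm avoidance", lambda text: "dangerous" not in text),
]

def critique(response: str) -> list[str]:
    """Stub critic: return the names of the principles a response violates."""
    return [name for name, check in CONSTITUTION if not check(response)]

def revise(response: str, violations: list[str]) -> str:
    """Stub reviser: in real CAI a model rewrites the response so that it
    satisfies the violated principles; here we merely annotate it."""
    return response + f" [revised for: {', '.join(violations)}]"

def cai_step(response: str) -> str:
    """One critique-and-revision pass over a draft response."""
    violations = critique(response)
    return revise(response, violations) if violations else response

print(cai_step("Here is a dangerous procedure."))
# → Here is a dangerous procedure. [revised for: harm avoidance]
```

The point of the structure is that the critic's judgments are driven by an explicit, inspectable list of principles rather than by implicit annotator preferences.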

What compliance certifications does Anthropic hold?

Anthropic holds SOC 2 Type II certification, which audits controls around security, availability, processing integrity, confidentiality, and privacy. The company is certified to ISO 27001:2022 (the international information security management standard) and ISO/IEC 42001:2023 (the AI management system standard that specifically addresses responsible AI development). Anthropic is HIPAA-eligible, meaning healthcare customers can use Claude under a Business Associate Agreement for workloads involving protected health information. The company complies with GDPR for EU customers through data processing agreements and EU-resident data handling options. All third-party audit reports and compliance documentation are publicly accessible through the trust center at trust.anthropic.com. These certifications are a prerequisite for Anthropic's enterprise sales in regulated industries including healthcare, legal, finance, and government. The company's European offices in London, Paris, and Munich support GDPR compliance through locally available data residency options.

What is Claude Code and how does it work?

Claude Code is Anthropic's command-line agentic tool that gives developers the ability to use Claude as an autonomous software engineering assistant directly from the terminal. Launched in early 2025, Claude Code can read and write files in a local repository, execute shell commands, run tests, and navigate codebases autonomously until a task completes or requires human input. It is designed for developers who want to integrate AI deeply into terminal-based workflows rather than using a GUI-based editor extension. Claude Code uses Claude Sonnet 4.6 or Claude Opus 4.7 under the hood depending on the task complexity configured by the developer. The tool supports project-level context through CLAUDE.md configuration files that describe codebase conventions, preferred patterns, and restricted commands. Claude Opus 4.7 achieved 70% on the CursorBench software engineering benchmark, one of the most-cited performance data points in the agentic coding tool category. Pricing follows the standard per-token Anthropic API rates for whichever model is selected.
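
A CLAUDE.md file lives in the repository and supplies the project-level context described above. The structure below is one plausible illustration; the specific conventions, commands, and tool names in it are hypothetical examples, not part of any official template:

```markdown
# CLAUDE.md — project conventions for Claude Code

## Codebase conventions
- Python 3.12; run the linter before committing.
- Tests live in tests/ and run with `pytest -q`.

## Preferred patterns
- Prefer dataclasses over raw dicts for structured records.
- New modules need docstrings and type hints.

## Restricted commands
- Never run `git push` or destructive migrations without explicit confirmation.
```

Because the file travels with the repository, every Claude Code session on that codebase picks up the same conventions without per-session prompting.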

How large is Anthropic's team and where are its offices?

Anthropic employs between 1,000 and 5,000 people as of 2026, making it one of the larger pure-play AI research and product companies globally. The company's headquarters is at 548 Market Street in San Francisco, California. Additional US offices are located in Seattle, New York City, and Washington DC. Internationally, Anthropic has offices in London, Paris, and Munich, with EMEA headcount tripling in 2025 as part of an intentional European expansion strategy. The Paris and Munich offices support the company's engagement with EU regulatory bodies around the AI Act and serve as centers for European enterprise sales. The Washington DC office supports Anthropic's policy team, which participates in US government discussions around AI safety standards, export controls, and procurement. The Seattle office is closely tied to the Amazon partnership given that AWS is headquartered in the Seattle metro area. Anthropic also hires for remote research and engineering roles globally.

What is Anthropic's Responsible Scaling Policy?

The Responsible Scaling Policy (RSP) is a public commitment published by Anthropic that links the company's decisions about training and deploying new AI models to the results of specific safety evaluations. The RSP defines a set of AI Safety Levels (ASL-1 through ASL-4) analogous to biosafety levels, where each level represents a different degree of potential catastrophic risk from a given model. Before training or deploying a model at a new capability level, Anthropic commits to conducting specified safety evaluations and implementing corresponding safeguards. The RSP covers risks related to weapons of mass destruction assistance, autonomous AI action, and large-scale cybersecurity attacks. It is a voluntary, legally non-binding commitment that Anthropic published proactively before any regulatory requirement to do so. The policy is updated as the company's understanding of AI risks evolves and is reviewed by Anthropic's Long-Term Benefit Trust board. Several other AI companies including Google DeepMind have published similar frameworks, partly in response to Anthropic's RSP being viewed as an industry benchmark for responsible AI deployment commitments.
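
The RSP's gating logic can be caricatured as a threshold check: a model ships only when every safety evaluation required at its assessed capability level has passed. The sketch below is a toy model; the level names follow the ASL scheme described above, but the evaluation names and mapping are invented for illustration:

```python
# Hypothetical evaluations required at each AI Safety Level (illustrative only).
REQUIRED_EVALS = {
    "ASL-2": ["wmd_uplift", "cyber_offense"],
    "ASL-3": ["wmd_uplift", "cyber_offense", "autonomous_replication"],
}

def may_deploy(asl_level: str, eval_results: dict[str, bool]) -> bool:
    """Deployment is permitted only if every evaluation required at the
    model's assessed AI Safety Level has passed (missing evals fail closed)."""
    return all(eval_results.get(name, False) for name in REQUIRED_EVALS[asl_level])

results = {"wmd_uplift": True, "cyber_offense": True, "autonomous_replication": False}
print(may_deploy("ASL-2", results))  # True: both ASL-2 evaluations passed
print(may_deploy("ASL-3", results))  # False: autonomous_replication failed
```

The fail-closed default (an unrun evaluation counts as a failure) mirrors the policy's spirit: safeguards must be demonstrated before deployment, not assumed.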