
What is Anthropic, the company behind Claude AI?

TLDR

  • Anthropic focuses on ethical AI development, emphasizing safety and privacy in its models.
  • The company developed Claude AI, designed for natural conversations and complex tasks.
  • Anthropic's AI follows "constitutional AI" principles, ensuring responsible technology use.

Anthropic’s CEO recently penned an essay predicting that “powerful AI” could be developed as early as 2026. The nearly 15,000-word essay by Dario Amodei paints a rather rosy picture of a world transformed by artificial intelligence: he sees a future where AI dramatically accelerates human progress. As a co-founder of the company behind the flagship AI model Claude, he has previously said he hopes to make the model “more warm, more human, more engaging.”

Founded in 2021, Anthropic is a San Francisco-based AI startup with a public-benefit focus. Although the company’s efforts are frequently associated with mitigating the potential risks of AI, Amodei points out that he is not a pessimist.

He wrote: “[People] sometimes draw the conclusion that I’m a pessimist or ‘doomer’ who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future.”

Here’s what we know about Anthropic.

What company is behind Claude AI?

Anthropic is an AI startup founded by former OpenAI members, including siblings Dario and Daniela Amodei, who played a key role in the development of GPT-3. Concerned about AI safety, they left OpenAI and launched Anthropic in 2021 with a focus on ethical AI development.

One of Anthropic’s creations is Claude, an AI assistant designed to handle various tasks like writing, analysis, and coding. Since its debut in March 2023, Claude has undergone several upgrades. Claude 3, released in March 2024, even added the ability to analyze images. Available via API, web, and mobile apps, Claude is built for natural, conversational interactions, excelling in tasks like summarization, Q&A, decision-making, and code-writing.

The latest version, Claude 3.5 Sonnet, offers even better capabilities and is freely accessible to the public. Earlier, Anthropic offered three versions of Claude: Claude 1, Claude 2, and the faster Claude Instant, each with its own strengths. Claude is regularly updated with new training data and can process up to 75,000 words at a time, meaning it can read a short book and provide insights or answers.

Anthropic has quickly gained traction in the AI space, securing investments from major players like Google and Amazon, with its current valuation reaching $20 billion. What sets Anthropic apart is its commitment to AI safety. While OpenAI’s GPT-4 is trained on human preferences, Anthropic takes a different route.

Their “constitutional AI” approach aligns their AI systems with a set of guiding principles promoting freedom, humane treatment, and privacy, putting safety and responsibility at the core of Anthropic’s AI development.

Is Anthropic’s Claude AI better than ChatGPT?

Claude’s lineup of models parallels OpenAI’s offerings pretty closely. Claude-Instant is the faster, more affordable option, much like GPT-3.5, while Claude-2 is the more advanced, slower model, competing with GPT-4 in terms of capability.

However, Claude has some limitations. It can’t look up information beyond what’s in its prompt and can’t generate images, unlike some of the features ChatGPT offers.

Both Claude and ChatGPT have their strengths, so which one is better depends on what you’re after. For free versions, Claude generally outshines ChatGPT. But once you look at paid subscriptions, ChatGPT, especially with GPT-4, pulls ahead with a broader range of knowledge and capabilities.

Comparing AI models isn’t straightforward, since no single test can measure everything, but benchmarks help give a clearer picture. Here at ReadWrite, we’ve run our own side-by-side comparison of the models. Another useful tool is LMSYS’s Chatbot Arena leaderboard, which ranks models based on three evaluations.

First, there’s the Elo rating system, originally used in chess. It compares AI models side-by-side with human input to rank the best responses.
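To make the idea concrete, here’s a minimal sketch of how an Elo update works after a single head-to-head comparison. The starting ratings and the k-factor below are illustrative placeholders, not the parameters LMSYS actually uses:

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Update two Elo ratings after one head-to-head comparison.

    score_a is 1.0 if model A's response wins, 0.0 if it loses,
    and 0.5 for a tie. k controls how fast ratings move.
    """
    # Expected score for A, given the current rating gap
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Two models start level at 1000; A wins one comparison
a, b = elo_update(1000, 1000, 1.0)
print(round(a), round(b))  # 1016 984
```

Because the update is weighted by the expected score, beating a much higher-rated model moves the ratings far more than beating an equal one, which is what lets the leaderboard converge from many pairwise votes.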

Then, there’s the MT-Bench, which uses GPT-4 to evaluate multi-turn conversations and aligns with human evaluations about 80% of the time. The MMLU (Massive Multitask Language Understanding) benchmark also tests how well models understand and answer questions across a range of topics and complexity.

Claude outperforms the free version of ChatGPT, but GPT-4 remains on top in the higher, paid tiers. Beyond that, Claude stands out in tasks like reading and summarizing long documents, and handling up to 150 pages, which is great for reviewing longer texts.

That said, Claude Pro lacks many features that ChatGPT+ offers, like voice chat, image creation, web browsing, and data analysis, so it has some catching up to do to match ChatGPT’s premium service.

Is Claude AI safe?

Anthropic says it takes data privacy seriously and has put strong measures in place to protect user interactions. They automatically delete prompts and responses from their systems within 90 days unless users request otherwise.

One key difference between Claude AI and other models is that Anthropic doesn’t use interactions from its consumer or beta services to train its models unless users give explicit permission. This adds an extra layer of privacy for those concerned about how their data is used.

In addition, all data transmitted through Claude AI is encrypted, which ensures that your conversations can’t be intercepted or accessed by unauthorized individuals.

The tech firm also follows strict privacy policies and complies with data protection regulations, ensuring that your information is handled responsibly and securely.

On a broader scale, Anthropic’s leadership has raised concerns about the potential dangers AI could pose to civil society. They’ve proposed that democracies work together to secure the AI supply chain and prevent adversaries with harmful intentions from accessing the tools needed for powerful AI production, like semiconductors.

Amodei writes: “I don’t think it is ethically okay to coerce people, but we can at least try to increase people’s scientific understanding—and perhaps AI itself can help us with this.”

Even though the company touts privacy, Anthropic was sued for allegedly training its chatbot on pirated copies of copyrighted books. In August, a lawsuit filed in a San Francisco federal court alleged that Anthropic taught its AI product using libraries of pirated works.

How much does Claude AI cost?

For the average user, Anthropic makes it easy to try out their best Claude model for free through their chat interface at claude.ai, which is still in open beta as of October 2023. Signing up is simple—just create a free account!

If you’re looking for more, they also offer Claude Pro, a subscription similar to ChatGPT+, for $20 per month. The free tier gives you access to their most advanced model, Claude 2, which is a big plus since OpenAI reserves GPT-4 for its paid subscribers.

Claude Pro comes with some added perks, according to Anthropic’s website:

  • 5x more usage of Claude 2 compared to the free tier, allowing you to send many more messages
  • Priority access during busy periods
  • Early access to new features to get the most out of Claude

Anthropic sells access to their AI models to three main groups: standard users (like those using the chat interface), developers who want to integrate Claude into their apps, and businesses in need of enterprise-level support.

For developers, if you want to integrate Claude into your applications, you can access Anthropic’s models via their API or through Amazon Bedrock. Both platforms follow a pay-as-you-go model, where pricing is based on how much text you want Claude to process, making it flexible for different project needs.
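To illustrate how pay-as-you-go pricing works out in practice, here’s a small sketch that estimates the cost of a single request from its token counts. The per-million-token rates below are hypothetical placeholders, not Anthropic’s actual prices; check Anthropic’s pricing page for real figures:

```python
# Hypothetical per-million-token rates, for illustration only.
RATES = {
    "input": 3.00,    # $ per million input (prompt) tokens
    "output": 15.00,  # $ per million output (response) tokens
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one API call under pay-as-you-go pricing."""
    cost = (input_tokens / 1_000_000) * RATES["input"]
    cost += (output_tokens / 1_000_000) * RATES["output"]
    return cost

# A 2,000-token prompt with a 500-token reply
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0135
```

Since output tokens are typically billed at a higher rate than input tokens, long documents are cheap to feed in relative to the cost of generating long responses, which suits Claude’s strength in summarizing large texts.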

Featured image: Anthropic

About ReadWrite’s Editorial Process

The ReadWrite Editorial policy involves closely monitoring the tech industry for major developments, new product launches, AI breakthroughs, video game releases and other newsworthy events. Editors assign relevant stories to staff writers or freelance contributors with expertise in each particular topic area. Before publication, articles go through a rigorous round of editing for accuracy, clarity, and to ensure adherence to ReadWrite's style guidelines.

Suswati Basu
Tech journalist

Suswati Basu is a multilingual, award-winning editor and the founder of the intersectional literature channel, How To Be Books. She was shortlisted for the Guardian Mary Stott Prize and longlisted for the Guardian International Development Journalism Award. With 18 years of experience in the media industry, Suswati has held significant roles such as head of audience and deputy editor for NationalWorld news, and digital editor for Channel 4 News and ITV News. She has also contributed to the Guardian and received training at the BBC. As an audience, trends, and SEO specialist, she has participated in panel events alongside Google. Her…
