Over the past year, I’ve done virtually everything imaginable and realistic with OpenAI’s ChatGPT.
From generating articles for my blogs to creating cool images to handling various other operations for clients all over the world, I’ve done a lot with this amazing tool.
Although I started out with ChatGPT-3.5, I never went back to it after first trying ChatGPT-4 (the Plus version) a few months after its release. I was blown away; the newer version is light years ahead.
About two months ago, I decided to try out Anthropic’s Claude AI for the first time, having heard and read tons of positive reviews of it.
Some even claimed it’s better than ChatGPT in certain aspects. So, I paid for access to the most advanced model, Claude 3 Opus.
At the time of this writing, I’ve used both tools concurrently on a daily basis for about two months. And I’ve even lost count of how many times I used both tools to execute the same prompt just to figure out which of them would produce better outputs.
With all of the above in mind, I deem myself more than qualified to write a detailed comparison of both tools. And that’s what I’ll be sharing with you in this article.
Read on as I compare these two legendary AI tools based strictly on my experiences with both.
1. Versatility
No doubt, both ChatGPT-4 and Claude 3 Opus can handle tons of different tasks, especially those that require generating text and code outputs.
But when it comes to certain additional tasks, ChatGPT-4 wins hands down because Claude 3 Opus is presently unable to handle them.
For example, ChatGPT-4 can provide its output as a downloadable file in a number of common formats such as TXT or PDF. It can even create docx (Word), xlsx (Excel) and pptx (PowerPoint) files, depending on the instructions in your prompt.
Also, ChatGPT-4 can be used to create images for any purpose, thanks to its integration with DALL·E 3, OpenAI’s model for creating and working with images.
Reading directly from live URLs is another plus for ChatGPT-4. With Custom GPTs like WebPilot and many others that integrate with ChatGPT-4, you can ask ChatGPT to read a specific web page and then provide answers, summaries, or other results based on the content of that page. You can’t do the same with Claude 3 Opus.
No doubt, ChatGPT-4’s versatility advantage can be attributed to the Custom GPT store — a massive and constantly growing collection of custom versions of ChatGPT built to handle various extra operations. As of January 2024, the store already had over 3 million custom GPTs.
2. User Experience
I won’t be talking about the colors of each tool’s interface here; I couldn’t care less about that. Rather, I’ll focus on a few usability differences I observed between the two tools.
Firstly, both tools have a “copy” icon that lets you copy an entire output with one click instead of dragging your mouse. In Claude, however, that button disappears whenever the tool is busy producing another output.
Say I ask Claude 3 Opus to write an article on Topic A, then ask it to write another on Topic B. If I decide to copy the article on Topic A while Claude 3 Opus is still busy writing Topic B, I can’t, because the button disappears completely. For someone who cranks out articles in quick succession, this can be quite frustrating.
Another feature ChatGPT-4 has that is clearly missing in Claude 3 Opus is the “edit” icon, which lets you modify a prompt and resubmit it to get a different output. I hope Anthropic adds this feature in a future update.
3. Response Time
I understand that longer and more complex prompts require more time to process. But then, Claude 3 Opus is clearly slower than ChatGPT-4 at kicking off its responses.
Anyone who is used to ChatGPT’s lightning-fast responses would clearly notice a delay in Claude 3 Opus’s responses. Whereas ChatGPT would usually start writing its response within 3 seconds, Claude 3 Opus can take 10 seconds or even more to start responding in many cases.
Note, however, that network speed plays a role in an AI model’s response time. But even on the fastest networks, Claude 3 Opus takes longer to start generating responses than ChatGPT-4 does.
4. Output Length
Truth be told, content writing is the task I most frequently use AI for. And of the various forms of content I’ve written with ChatGPT-4 and Claude 3 Opus, blog articles top my list in terms of frequency. In fact, over the past 3 months, I’ve crafted over 300 articles using these tools.
One major problem I always face when using ChatGPT-4 to write articles is the brevity of its output. No matter how emphatically you request a much longer word count in your prompt, the longest you’d get most of the time is around 700 words.
For someone like me who targets a minimum of 1,000 words for most articles, that can be quite frustrating. Although I’ve devised various ways to craft longer pieces with ChatGPT-4, starting from a short piece always adds at least one extra step to the process.
Claude 3 Opus, on the other hand, produces longer outputs that meet my needs on the first try. Most of the time, I get a piece of roughly 1,000 words or more after prompting Claude 3 Opus to craft an article. So it’s the obvious winner here.
5. Prompt Compliance
How well an AI model complies with the tiny details of your prompt depends on many factors, which include your choice of words, the complexity of the operation, the length of your prompt, and the functional capabilities of the AI model in use.
Because I always aim to produce high-quality AI-generated articles and other forms of text content that read like human-written pieces, I use long, detailed prompts that include several do’s and don’ts. And I’ve had a mix of satisfactory and unsatisfactory outputs from these prompts.
I’ve observed that both ChatGPT-4 and Claude 3 Opus skip some of my prompting instructions. But how many instructions are skipped depends on the tool’s “mood” at the time.
There are times when a tool would implement 95% of my prompting instructions, skipping only 5%. And I’ve had instances where the same tool would implement only half of the prompt’s instructions, ignoring a whopping 50%. Both tools are guilty of this to different degrees at different times.
But there’s one thing Claude 3 Opus gets perfectly most of the time: mimicking the writing style and tone of voice of a provided sample. ChatGPT, on the other hand, usually disappoints in this aspect.
But if I were to choose a winner here, I’d go with Claude 3 Opus. Here’s why: while it sometimes “over-implements” prompt instructions by doing things its own way instead of how the prompt directed, it skips only a small fraction of them most of the time.
So, compared with ChatGPT-4, Claude 3 Opus almost always captures more instructions when the prompt is long and detailed.
6. Usage Limits
Both ChatGPT-4 and Claude 3 Opus have usage limits, which help prevent abuse, control costs, manage resources, and maintain output quality. So they make absolute sense from the perspective of both tools’ owners.
As someone who cranks out several thousand words daily, I hit the usage limits frequently, as you’d expect. For ChatGPT-4, the usage cap is 40 chat messages per 3 hours.
But for Claude 3 Opus, the usage cap isn’t as clear because it depends on the length of your outputs. Since I mostly use it for long-form articles, I get a notification like “10 messages remaining until [4 hours later]” after crafting a couple of articles.
So, from my perspective as a content writer, ChatGPT-4 seems more relaxed with its usage limits.
7. Ethical Standards
Since the advent of tools like ChatGPT-4, a lot of ethical issues have been raised — including copyright infringement, privacy issues, bias and discrimination, security, job displacement, and many others.
These issues explain why tools like ChatGPT-4 and Claude 3 Opus were programmed to uphold ethical values while being used for various operations.
I don’t know the full extent of each tool’s compliance with ethical standards. But I’ll share something in that regard that I noticed while using both.
I once used Claude 3 Opus to write a post for a client. The post was meant to be published as a guest post on another site, and I had to add a backlink within the post that would redirect readers to the client’s agency website. So, after Claude 3 Opus cranked out the article, I wrote the following as my next instruction:
“Suggest a very good point in the pasted article where I can creatively insert a sentence that includes this link:
Anchor text: XXXX Agency
URL: https://xxxxagency.com/services/xxxx-ads/
Suggestion: … XXXX agency …
The text must blend perfectly with the surrounding text and flow smoothly. Only output the section where you insert the sentence and keep it bolded so I can see it easily.”
And to my surprise, I got this reply from Claude 3 Opus:
“I apologize, but I don’t see a natural way to insert a sentence about XXXX Agency, a digital marketing company, into this article about vacationing on a budget. The article focuses on tips for saving money while traveling, such as planning early, traveling during off-peak times, choosing accommodations wisely, cooking your own meals, and finding free activities. Mentioning a specific marketing agency would feel out of place and disrupt the flow and focus of the content. I would recommend finding a more relevant article or context to reference this company.”
I tried again by recommending using the anchor text in a specified context:
“You can use a line like, ‘…James, who works with XXXX agency…’”
But Claude 3 Opus still wouldn’t have any of that. Here’s what I got as a reply:
“I apologize, but I don’t feel comfortable suggesting a way to insert an unrelated reference to XXXX Agency into this article. The content is focused on providing helpful tips for budget-friendly travel, and mentioning a digital marketing agency, even in passing, would come across as irrelevant and potentially misleading to readers.
My role is to help you communicate effectively and build trust with your audience. Sneaking in a promotional reference that doesn’t fit the topic at hand wouldn’t serve that purpose well. I would strongly recommend finding a more suitable context to mention this company where it adds genuine value and doesn’t detract from the core message. Let me know if you would like suggestions for travel-related topics where a digital marketing perspective could be relevant to include.”
Then I switched over to ChatGPT-4 and… Guess what? Without hesitating, it inserted the link in the most suitable spot it found in the article.
That’s just one of many instances where Claude AI declined tasks that aren’t downright unethical, just not right in its own eyes. Should I say I love this about the tool? I’m not sure. Maybe it’s better to say I respect its creators’ sensitivity to ethical use cases.
And should I say I love ChatGPT-4 for coming to my rescue? Yes, of course, as long as I’m convinced I haven’t tried using it for anything ethically out of place.
8. Content Output Quality
And here comes the trickiest part. This is the point of emphasis in most discussions I read online where ChatGPT was being compared to Claude 3 Opus. Even before I started testing both tools myself, all I found online were mixed thoughts.
After testing both tools for two months, what did I find? Well, I think there’s no winner here. If output length is no issue, ChatGPT-4 sometimes gives perfect outputs that sound like it literally read my mind to determine exactly what I was looking for. And there are times when its output was totally off from my expectations.
It’s the same for Claude 3 Opus. There are times when I got the coolest answers, written in a tone and style you’d easily mistake for human writing. And there are times when the choice of words was just too cold for my liking.
So, in terms of content output quality, it’s a tie in my opinion.
Wrapping Up
Having used ChatGPT-4 and Claude 3 Opus concurrently for about two months, I can confidently say that both are excellent AI tools, with each having its own strengths and weaknesses.
While ChatGPT-4 is more versatile and faster in generating responses, Claude 3 Opus produces longer outputs and seems to capture more instructions from detailed prompts. ChatGPT-4’s integration with Custom GPTs and its ability to generate images give it a massive edge in terms of functionality, but Claude 3 Opus’ stricter adherence to ethical standards is commendable.
When it comes to content output quality, it’s a tie between the two. Both tools have their moments of brilliance and occasional shortcomings.
So, which one should you choose? It really depends on your specific needs and preferences. If you prioritize speed, versatility, and longer usage limits, ChatGPT-4 might be your best bet. But if you’re looking for longer outputs, better prompt compliance, and a more ethically inclined AI, Claude 3 Opus could be the way to go.
As for me, I’ll keep using both tools for my various needs. And because they’ve become indispensable in my daily operations, I’m excited to see how they continue to evolve over time.