There is a mediocre-content deluge coming to the internet, the likes of which we have not seen. If you could produce 10x the content at a 10x cost savings, what would you do? Even if the content were mediocre, would you still be tempted to throw content against the wall and see what sticks?
What would that mean for websites, link farms, private blog networks, link builders, SEOs and search engine algorithms? What would a deluge of poor content mean for quality, believable and original content?
What is GPT-3 & How Does it Work?
GPT-3 stands for Generative Pre-trained Transformer. Per Wikipedia:
GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI.
As a natural language processor and generator, GPT-3 is a language-learning engine that ingests vast amounts of existing content and code to learn patterns, recognizes syntax, and can produce unique outputs based on prompts, questions and other inputs.
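The "autoregressive" idea mentioned above is simple at its core: the model predicts the next token from the tokens generated so far, then feeds that prediction back in and repeats. Here is a minimal toy sketch of that loop using a word-level bigram model; this is an illustration of the autoregressive principle only, not GPT-3's actual transformer architecture, and all names here are our own.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Toy stand-in for 'learning patterns': record which word follows which."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, prompt, length=5, seed=0):
    """Autoregressive loop: each new word is predicted from the previous one."""
    random.seed(seed)
    output = prompt.split()
    for _ in range(length):
        candidates = model.get(output[-1])
        if not candidates:  # no learned continuation; stop early
            break
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(generate(model, "the", length=4))
```

GPT-3 does the same feed-the-output-back-in loop, just with 175 billion learned parameters instead of a frequency table, which is why its continuations read like human prose rather than word salad.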
But GPT-3 is not just for content marketers, as witnessed by OpenAI’s recent partnership with GitHub on a code-generation tool dubbed “Copilot.” Autoregressive language modeling applies not only to human language but also to various types of code. The outputs are currently limited, but the future potential uses could be vast and far-reaching.
How GPT-3 is Currently Kept at Bay
With beta access to the OpenAI API, we developed our own tool on top of it. The current application and submission process with OpenAI is stringent: before any tool built on the API can be released to the public for commercial use, OpenAI requires a detailed submission and use case for approval by its team. Among the requirements for approval are limitations on the types and lengths of outputs that may be pulled from the API.
For instance, OpenAI currently prohibits the API’s use on certain social platforms, including Twitter, out of the belief that tweets produced at massive scale could be used for nefarious or political ends, swaying or manufacturing public opinion that may not be accurate.
Additionally, OpenAI restricts any tool using the API to outputs no greater than 200 characters. The mission is to serve a much higher purpose than producing more mediocre content that humans are likely never to read.
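In practice, a tool built on the API has to enforce that kind of cap on every completion it surfaces. A minimal sketch of a client-side guardrail, assuming the 200-character figure above (the helper name is our own, not part of the OpenAI API):

```python
MAX_OUTPUT_CHARS = 200  # the output cap described above

def enforce_output_cap(text, limit=MAX_OUTPUT_CHARS):
    """Truncate a completion to the allowed length, ending on a whole word."""
    if len(text) <= limit:
        return text
    truncated = text[:limit]
    # Drop any partial trailing word so the cut-off still reads cleanly.
    return truncated.rsplit(" ", 1)[0]

print(enforce_output_cap("This short completion passes through untouched."))
```

Of course, a client-side check like this only keeps honest tools honest, which is exactly why OpenAI also reviews use cases before approval rather than relying on developers to police themselves.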
Keeping tight controls on a beta product that could be used nefariously is more than smart, but it doesn’t mean would-be abusers won’t still find ways to circumvent the rules.
Examples of GPT-3 Content at Scale
Since developing our own tool on the OpenAI platform, we have used it extensively in-house, testing it on some of our own and our clients’ projects. Here are a few examples where we have found it extremely helpful in creating content that would otherwise cost more and take more resources to produce:
- Landing pages at large scale. While the tool is not especially talented at creating blog-type content, it is fairly astute at creating landing pages for things like “locations” and “industries” served. We recently tested this by creating over 1,100 city and state landing pages for an internal project at BIKE.co, where we trained several offshore assistants on the tool and instructed them on how to plug GPT-3 prompt outputs into a basic replicated Elementor design on WordPress.
- Podcast introductions. We have found that introductions to podcasts, for ourselves and for clients, can be produced more easily using GPT-3. To make it even creepier, we have also tested AI-powered voice technology for the audio of the podcasts themselves. Imagine that: an entire podcast show where no humans create any of the content!
- Social media. While there are current restrictions on the length and format where GPT-3 can be used, there is a real possibility that social posts could be produced at scale.
- Email spamming. Spam algorithms currently catch patterns in emails, particularly in the copy; that is one way AI/ML is used to filter garbage email. But if not policed, a large number of unique emails could be sent separately with a lower likelihood of getting flagged as spam.
- Content spinning. Because the API can produce longer, unique outputs from a simple, shorter input, the ability to spin and recreate similar content for online publication is a real temptation, even if the outputs have to be stitched together to make it happen.
These represent only a small sample of the potential uses (legitimate and otherwise) for GPT-3. We are only scratching the surface of how this particular AI tool will impact us, but there are those whose motivations, while not negative per se, will still use the tool to create a deluge of content that adds little to no value beyond simply providing online content for content’s sake.
Why Content at Scale Will Ruin the Current State of the Internet
Twenty years ago we joked that you had to be careful which truths pulled from the web you believed. New technology may actually revert us to a bygone era when facts are looser and content quality is worse, not better. It is estimated that 7.5 million new blog posts are created each day. Imagine if machines could do it in the cloud with only a simple algorithm.
Content creation will look similar to how Syndrome in Disney’s “The Incredibles” described his plan for a post-superhero world, where he would provide machines that would make everyone super:
When everyone is super, no one will be.
That’s exactly what is happening with GPT-3’s ability to provide content at massive scale.
When anyone can create content at scale at little to no cost, the only future differentiator will be quality. In short, I agree with OpenAI’s sentiment that strict controls should be placed on the quantity and purpose of the content GPT-3 produces. Otherwise, we will have much more of much less when it comes to written content on the web.