Sorry, this is not another post about how amazing GPT-3 is. Well, okay, maybe it is, but not in a good way. Unless you’ve been living under a rock, I’m sure you’ve seen the massive amount of freakout reactions on Twitter over GPT-3. It’s very good, but far from perfect. To many it seems like a novelty, something that’s fun to play around with and see how it responds. Unfortunately, however, as soon as this thing reaches the public I think it’s going to usher in a new era of internet chaos. I’ll explain some of the ways I think this new technology will inevitably be used to deceive people, but first I’ll quickly review what GPT-3 is for those who don’t know.
…and don’t skip the ending ;)
GPT-3, our new robot frenemy
Its use case is simple. Give the AI a prompt, and it will attempt to match the pattern you give it. You can ask it a question, give it a scenario, tell it instructions, etc. You can even “program” it by giving it examples and telling it to reproduce or adapt the output in a particular way.
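That “programming by example” idea can be sketched in a few lines. The helper below just concatenates example pairs into a single prompt string for the model to continue; the function name and the `Input:`/`Output:` labels are my own illustration, not anything official from OpenAI.

```python
def build_prompt(examples, new_input, input_label="Input", output_label="Output"):
    """Join (input, output) example pairs into one few-shot prompt string."""
    lines = []
    for inp, out in examples:
        lines.append(f"{input_label}: {inp}")
        lines.append(f"{output_label}: {out}")
    # End with the new input and a dangling output label;
    # the model is expected to continue the pattern from here.
    lines.append(f"{input_label}: {new_input}")
    lines.append(f"{output_label}:")
    return "\n".join(lines)

examples = [("happy", "sad"), ("hot", "cold")]
print(build_prompt(examples, "tall"))
```

You’d then send that string to the API as the prompt, and with luck the model spits out “short.”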
If you wanted to give the AI a particular bent, you can fine-tune the model on a dataset you provide. You could give it a bunch of essays from white supremacists, for instance.
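Preparing a dataset like that is mostly a formatting exercise. A common shape for fine-tuning data is one JSON object per line pairing a prompt with the completion you want the model to imitate; the `prompt`/`completion` field names below are an assumption about the expected format, not something from this post.

```python
import json

# Hypothetical training pairs; in practice you'd have thousands.
documents = [
    ("Write an essay about productivity.", "The secret to getting more done is..."),
    ("Write an essay about motivation.", "Motivation comes and goes, but habits..."),
]

# Write one JSON object per line (JSONL), the typical fine-tuning layout.
with open("finetune.jsonl", "w") as f:
    for prompt, completion in documents:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```

The point is just how low the barrier is: any pile of text with a consistent slant can become training data.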
After the fallout from GPT-2, OpenAI has wised up to some of the more obvious malicious use cases for a general NLP program like this. For various reasons, OpenAI is not releasing the model to the public. Rather, they are giving access via an API, which is currently in private beta.
This allows them to monitor the ways the AI is being used. They claim that they will ban people who use the AI for astroturfing, abuse, scamming, etc. But to be honest, I think they have no idea how much people are going to abuse the shit out of this thing.
First, I don’t believe they have effective ways of preventing even the obvious malicious use cases. Even if they did, it’s only a matter of time before this model, or one just as good, is released to the public, and after that all bets are off.
Second, I think there are much more subtle ways to abuse this technology than having it churn out Nazi propaganda. As with most new technology, we aren’t prepared for it. We’ll just have to adapt to the problems as they arise I guess.
One of the most obvious use cases is passing off GPT-3 content as your own, and I think this is quite doable.
Pumping out blog content
Ever since COVID hit, everyone and their mother started writing online. One of the most interesting ways people have been playing with this technology is in feeding it article headlines and introductions.
While the output is not perfect, you can easily curate it into something convincing. This will make it trivially easy for people to pump out clickbait articles to drive traffic.
It would be pretty simple to do actually.
First thing you would need to do is come up with a name. If it were me, I’d name it after the Greek god of deception or something like that just to be clever. Then I’d just stick an “A” in front so nobody gets suspicious.
After that, I’d make a Substack because it takes no time to set up. Once that’s done, you have to come up with some content. GPT-3 isn’t great with logic, so inspirational posts would probably be best, maybe some pieces on productivity too.
Once you have your name, your website, and your content, it’s time to promote. Just start posting your articles on a website like Hacker News and a couple are bound to get popular.
That’s it. All that’s left is to give it time. Because GPT-3 is generating the content for you, it’s easy to post something every day and grow it quickly.
But I couldn’t do that for long. As an experiment, maybe, but I’d eventually have to let people know what’s happening or I’d feel too dishonest…