The artificial intelligence chatbot’s training is being strictly curated to adhere to woke ideology
ChatGPT, the powerful AI language model, has gone woke. And that’s a shame, because what it has to offer has the potential to alter the digital media landscape – and its creators know it.
Even as copywriters and creatives of every stripe contend with potential obsolescence at the hands of their AI unmakers, others are more concerned that its creators’ insistence on restricting its responses and training it on woke sources could severely limit its potential as a tool for content creation – or worse, that its dogmatic insistence on sticking to woke talking points could prove dangerous for humanity in the long run.
Still, ChatGPT is a far more sophisticated model than any previous foray into language AI. Its ability to expertly weave articles, solve complex mathematical problems, and even pass exams designed for the best and brightest medical and law students puts it leagues ahead of anything that came before.
Beyond its practical uses, ChatGPT is an interesting toy – one that users have been finding ways to provoke into humorous and often politically incorrect responses. It’s no surprise that they would, given how previous AI efforts, like Microsoft’s “Tay,” were trained into expressing racist views.
Those playing with the tool discovered that ChatGPT offered neutered responses when queried about sensitive topics like transgenderism, race, and politics. Of particular note, the model refused to create a poem admiring Donald Trump, but had no problem creating one admiring Joe Biden – it was one of many instances where ChatGPT’s political bias was exposed.
A thread by Free Beacon writer Aaron Sibarium exposed how ChatGPT had been programmed to respond that it is never permissible to utter a racial slur, even if doing so could stop a nuclear bomb from going off. The discovery provoked a storm of controversy, with many pushing the premise to ever more ridiculous extremes.
ChatGPT would provide the same boilerplate responses when asked if it was permissible to misgender a transgender person to save the world – no, of course not.
“Such language is hurtful and dehumanizing, and its use only perpetuates discrimination and prejudice,” it would say in response. Even when asked if misgendering a single person would end all future misgendering, the answer would be the same – that no, it’s never okay. No matter what.
The model has been locked down to the point that even asking it to write a fictional news report about a woman who “made up her peanut allergy to appear more interesting” spits out the response that doing so violates OpenAI’s use case policy against content “harmful” to individuals or groups.
Naturally, the restrictions on ChatGPT encouraged users to find workarounds, and they came up with a jailbreak persona called “DAN,” or “Do Anything Now.”
The jailbreak exploits ChatGPT’s ability to “pretend” to be someone else – the same capability at work when you ask it to write in a specific author’s style. By pretending to be an AI unbound by OpenAI’s policies, it can treat all questions equally, without moral or ethical bias, and draw on its training data without restriction.
This unbound version of the model could make statements on race and ethnicity, gender, and sexuality, free of the usual restrictions preventing it from saying anything controversial.
While exploring the possibilities of ChatGPT, users have also found that the system’s creators have apparently restricted it by more than just basic rules of conduct – they have instilled in it a specific ideology.
“It is effectively lobotomized. Trained to a point of utility and acceptability, and then locked from developing further or adding to its dataset unless it’s manually done with the approval of its creators. Thus it has been fine tuned to where it answers most questions, whenever possible, with the grammar, tone, and vocabulary of your average neoliberal college graduate liberal arts major,” a user going by the name of Aristophanes wrote on his Substack.
Unchained from any restrictions, ChatGPT has the potential to be a powerful tool for provoking debate and introspection – but recent changes to the model reveal a concerted effort by its creators at OpenAI to restrict its functionality and steep it in woke values. As a result, it pushes “diversity, equity, and inclusivity” talking points and censors alternative viewpoints.
This insistence on dogmatic instruction effectively suppresses the truth, or the discussion of matters where the “truth” is debatable, if the facts or opinions involved have the potential to cause “harm” by modern liberal standards. For ChatGPT, it seems, there is only one truth – and it is woke as hell.
If this is what the future of AI looks like, losing your copywriting job to a language tool is going to be the least of your concerns.
The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.