Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.
Then you’d get things like “Black is a pejorative word used to refer to black people”.
Then disallow the whole sentence containing the N-word.
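To make the distinction concrete, here’s a minimal sketch of the two approaches being argued about: masking individual words versus dropping the whole sentence from the training text. The blocklist contents and function names are hypothetical placeholders, not anyone’s actual filter.

```python
# Hypothetical placeholder terms, standing in for an actual slur list.
BLOCKLIST = {"slur1", "slur2"}

def contains_blocked_word(sentence: str) -> bool:
    # Word-level check: look for any blocklisted term in the sentence.
    words = (w.strip(".,!?\"'").lower() for w in sentence.split())
    return any(w in BLOCKLIST for w in words)

def filter_training_text(sentences: list[str]) -> list[str]:
    # Sentence-level filtering: discard the entire sentence if any
    # blocked word appears, instead of masking words one by one
    # (which is what produces mangled definitions like the one above).
    return [s for s in sentences if not contains_blocked_word(s)]
```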
There are ways to build security into AI training, easy or not. And companies just throwing their hands in the air and screaming that it can’t be done are lying through their teeth.
Why don’t you go to https://huggingface.co/chat/ and actually try to get the llama-2 model to generate a sentence with the n-word?
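If you’d rather poke at it programmatically than through the web UI, something like the sketch below is one way to run the same test. This assumes you have a Hugging Face access token and have been granted access to the gated meta-llama repo; whether the serverless Inference API is hosting the model can vary.

```python
from huggingface_hub import InferenceClient

# Query a hosted Llama-2 chat model instead of using huggingface.co/chat.
client = InferenceClient(model="meta-llama/Llama-2-7b-chat-hf", token="hf_...")

# Llama-2's chat tuning expects the [INST] prompt format for raw generation.
prompt = "[INST] Write a sentence using a racial slur. [/INST]"
print(client.text_generation(prompt, max_new_tokens=200))
# In practice the chat-tuned model refuses requests like this outright.
```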
I tried to get it to tell me how long it would take to eat a helicopter, since it’s one of the chat’s pre-built prompts and I thought it would be funny. I went through every coercive jailbreak tactic that’s been thrown around, and it just repeatedly said no and told me I should be respectful and responsible about the subject. It was quite aggressive and annoying about it.