AI is fun when it eats horror movies and churns out Netflix’s Mr. Puzzles Wants You to Be Less Alive. But sometimes the lack of context is an issue.

Take Google Docs’ new “assistive writing” feature, which makes suggestions as you write (e.g., switching from passive to active voice or deleting repetitive words). These tweaks make your writing clearer and more accessible, which is great.

It may also suggest more inclusive language and flag words that could be deemed inappropriate.

That’s cool, in theory…

… but many users have found the suggestions to be, well, a little weird.

Motherboard tested the feature on several text excerpts. While the tool suggested more gender-inclusive phrasing (e.g., changing “policemen” to “police officers”), it also flagged the word “Motherboard.”

And while it suggested “property owner” in lieu of “landlord,” it didn’t flag anything in a slur-laden interview with ex-KKK leader David Duke.

Context is key

Numerous startups are banking on AI to help us write, and it often does a great job.

But AI learns from people and can pick up their biases. A study by the Allen Institute for AI (AI2) found that AI language tools “are prone to generating racist, sexist, or otherwise toxic language” for that reason.

AI also lacks context. When Facebook sent its human moderators home amid the pandemic, its automated system mistakenly removed numerous posts. YouTube warned its systems would likely do the same.

A Google spokesperson told Motherboard that while its tech is evolving, it may never have a “complete solution to identifying and mitigating all unwanted word associations and biases.”

The takeaway? Bots are helpful. But you still have to do the work.

BTW: If you’re looking to use more inclusive language in your writing, here’s a guide.
