
When ChatGPT exploded onto the scene in late 2022, it promised a new frontier in creativity and productivity. But for many writers—especially Black creatives—the AI revolution has brought a harsher reality: their words, ideas and voices are being co-opted, misattributed and, in some cases, weaponized against them.
From lawsuits against tech giants to wrongful accusations that upend careers, a troubling picture is emerging—one where artificial intelligence isn’t just changing the writing world, but actively harming those who’ve long been pushed to the margins of it.
AI’s appetite for copyrighted work
Major AI companies, including Meta and OpenAI, have trained their language models on massive datasets scraped from the internet—books, essays, blogs and articles, often included without permission or compensation. Now, a growing number of writers are fighting back.
Meta is facing a class-action lawsuit filed by authors who accuse the company of using copyrighted material without consent to train its models. The complaint underscores how AI systems are built on the backs of creative professionals—often without regard for ownership, credit or compensation.
“This isn’t innovation. It’s exploitation,” said one plaintiff. “Our intellectual property is not just data—it’s our livelihood.”
Accused by the machines
Beyond copyright violations, there’s another, quieter crisis unfolding: writers being falsely accused of using AI when they haven’t.
Rose Jackson-Beavers, a seasoned author, was stunned to find her work flagged as AI-generated.
“I was accused of using AI, and when I clicked on the link, it was my own website,” she said. “It was my bio. I put two chapters on Grammarly and was angry that it said my text matched a website. It said I had patterns that resemble AI.”
Morgan McDonald, a writer in the nonprofit world, shared a similar experience.
“I just quit my job at a reproductive justice org, partially because my manager consistently accused me of using AI for applications. She said Grammarly was flagging my work as AI-generated and that it was a security concern. But I wasn’t using it.”
These accusations, often based on AI detection tools of dubious accuracy, are causing real harm. Freelance writers are losing gigs. Students are being denied diplomas. Professionals are being censured, silenced or shamed—all for writing in their own voice.
The problem with AI detectors
After ChatGPT’s launch, dozens of startups rushed to market with detection tools—GPTZero, Copyleaks, Originality.AI and Winston AI among them—that claim to spot machine-written text with near-perfect accuracy. However, experts say those claims are misleading at best and dangerous at worst.
“These companies are in the business of selling snake oil,” said Debora Weber-Wulff, a computer science professor who co-authored a study on AI detection reliability. “There is no magic software that can detect AI-generated text with certainty.”
Studies have shown that AI detectors flag work from marginalized writers—particularly Black writers and non-native English speakers—at disproportionately high rates. A 2023 report from Common Sense Media found that Black students were more than twice as likely as their white or Latino peers to be falsely accused of using AI.
According to the report, about 79% of teens whose assignments were wrongly flagged by a teacher said their work had been run through AI detection tools, while 27% said their work had not been submitted to such tools at all.
Experts say those disparities may stem from the detection tools themselves, from biases held by the educators using them, or from both.
“We know that AI is putting out incredibly biased content,” said Amanda Lenhart, head of research at Common Sense. “Humans come in with biases and preconceived notions about students in their classroom. AI is just another place in which unfairness is being laid upon students of color.”
In other words, while AI tools aren’t human, they still mirror the prejudices—conscious or not—of the people who create and use them.
“AI is not going to walk us out of our pre-existing biases,” Lenhart said.
The human cost
These issues aren’t just legal or technical—they’re deeply personal.
False accusations erode trust between writers and editors, between students and teachers, and between workers and employers. They fuel anxiety and depression and have a chilling effect on creativity.
Meanwhile, the broader economic toll is mounting. As publishers and platforms flood their feeds with cheap AI-generated content, the demand for human writers is shrinking.
Opportunities are drying up—not just for book deals, but for essays, articles and grant proposals. And when those who remain are falsely labeled as AI cheats, their reputations can suffer irreparable damage.
Legal and ethical battles ahead
Groups like the Authors Guild are fighting to hold AI companies accountable and push for transparency in model training. Some legislators are now proposing laws that require clear consent and compensation when creative work is used to train AI.
There are also calls for independent audits of AI detection tools and clearer standards on when and how they can be used—especially in education and employment settings.
“AI isn’t going away. It will continue to reshape journalism, literature, and freelance creative work. But writers—especially those from marginalized communities—deserve protection, respect, and agency,” Jackson-Beavers said.
