Fable’s AI seems to have a thing for cis white men

We’re five days into 2025, and Fable has certainly stirred things up in the reading community. This is one of the biggest controversies I’ve seen since I started my blog and bookstagram two years ago, and it involves a technology that I have very mixed feelings about: AI.

Fable, a social app for readers that claims over 2 million users, lets you track the books you’ve read and share reviews and recommendations with other readers. Recently, many people have been leaving Goodreads for Fable, StoryGraph, or similar apps. If you’re looking for a rundown on what’s been happening, here’s what I know and some of my thoughts.

What Happened With Fable’s AI? 

Fable, the social app widely used by readers, FAFO’d with the reading community right at the start of 2025. The app rolled out AI-generated reader summaries, similar to Spotify Wrapped, and some users received summaries that were blatantly racist, sexist, and ableist.

The issue first blew up last week when @Tianas_littalk shared her reader summary on Threads, which read: “Soulful Explorer: Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?”

Other Fable reader summaries with similarly offensive wording were shared on Threads, Bluesky, and TikTok. Writer Danny Groves’ summary has made its way around the social platforms: “Diversity Devotee: Your bookshelf is a vibrant kaleidoscope of voices and experiences, making me wonder if you’re ever in the mood for a straight, cis white man’s perspective!”

Fable’s AI reader summary feature also failed to show sensitivity toward disabilities, “roasting” some users’ reading habits with ableist and condescending language. On Threads, @thecozyarchivist shared their summary, which attributed their “genre-hopping habit” to “a severe case of literary ADHD.” On TikTok, @Naomi_swiftie17 shared theirs: “Your wild journey through disability narratives could earn an eye-roll from a sloth.” What does that even mean?

The incident has gone mainstream and has been picked up by several online newspapers/magazines.

Fable’s Response to the Offensive Reader Summaries 

While Fable does seem genuinely apologetic, and I’m sure the team is mortified, I can’t help but wonder how they let this happen in the first place. It feels careless and reckless to unleash an AI model on a community of readers and writers. They created a platform to serve a community that is highly critical of AI technology to begin with; the livelihoods of authors, writers, and artists have been increasingly threatened by AI in recent years. It’s shocking to me that the Fable team would release this feature without thoroughly testing it first; had they done that, this could have been avoided. The incident highlights how essential thorough testing is before deploying AI tools, and it raises questions about whether AI technology is appropriate for certain communities and industries at all.

Controversial AI in the Writing and Reading Communities

The writing community has faced its own AI controversies: writers being accused of using AI, writers being encouraged to use AI, writers losing jobs to AI, and writers having their work fed to hungry AI models without their consent. As a writer, I have conversations about this daily. As writers, editors, content creators, and artists, we’re often told that we have to learn to make use of AI or we’ll eventually be replaced by someone who will. As readers, we’re tasked with discerning whether our favorite author’s latest book cover was generated by AI or whether that article we just read was written by AI.

When I was in college, an advisor once told me I should find a job I don’t hate and do writing as a hobby. That pissed me off, because why was I taking on thousands of dollars of debt for a hobby? I was determined to do the opposite of what my advisor suggested, and it took years, but I finally did get a job as a writer. Writing as a career isn’t exactly known for stability, and I entered it just in time for AI’s writing career to blow up as well.

Should You Delete Fable?

It does seem like Fable wants to learn from this incident and do better, which is more than I can say for some other companies in the book industry. Whether you decide to stay with Fable or not is totally up to you, and only time will tell if they actually do the work to avoid harming readers and writers in the future.

The Problems With AI: It’s Biased Because We Are

The Fable incident is far from the first time an AI model has demonstrated racial bias; we’ve seen it in AI chatbots and image generators. AI models are trained on datasets that include our societal biases, which means they can inadvertently learn those biases and generate content that reflects them. That doesn’t excuse companies from properly testing their models and putting safeguards in place to catch harmful language before it reaches users, as we’ve seen with Fable.
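For what it’s worth, a basic safeguard doesn’t have to be sophisticated. Here’s a rough sketch in Python, with made-up phrases, fallback text, and function names (not anything Fable actually uses), of the kind of pre-publication check that could stop an obviously loaded summary from ever reaching a reader:

```python
# A minimal, hypothetical sketch of a pre-publication safeguard for
# AI-generated reader summaries. The phrase list and fallback text are
# illustrative only; a real system would pair a proper moderation model
# with human review, not just a keyword list.

FLAGGED_PHRASES = [
    "the occasional white author",
    "cis white man",
    "literary adhd",
]

FALLBACK_SUMMARY = (
    "You explored a wide range of stories this year. "
    "Here's to another great year of reading!"
)


def is_safe(summary: str) -> bool:
    """Return False if the generated summary contains an obviously loaded phrase."""
    lowered = summary.lower()
    return not any(phrase in lowered for phrase in FLAGGED_PHRASES)


def publish_summary(generated: str) -> str:
    """Only show the AI-generated text if it passes the safety check."""
    return generated if is_safe(generated) else FALLBACK_SUMMARY


if __name__ == "__main__":
    risky = "Don't forget to surface for the occasional white author, okay?"
    print(publish_summary(risky))  # prints the neutral fallback, not the risky text
```

Even a crude filter like this, plus a human spot-check of what the model produces for real reading histories, would have flagged the summaries people were sharing.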

Following the Fable incident, some criticism has been directed at the reading community and the literary industry itself, since it is ultimately the biases within these communities that end up in AI models. There’s also the question of what data Fable’s AI was trained on. It’s a valid point that Fable’s AI would reflect the biases that run rampant in the literary industry and among its consumers, but I think it’s more complicated than that.

I’ve seen huge strides in inclusivity and diversity as a reader, and while I know the literary industry has a long way to go, I do believe things will continue to change. Not many communities have given me hope for change in the right direction lately, but the reading community continually has. This change is due to readers, writers, editors, and artists who want to see diversity and inclusion within literature. Over the past couple of years, I’ve seen other readers and writers strive to diversify their reading and have been inspired to do the same. There will always be those readers who feel the need to declare on Threads that they will not be diversifying their reading, and there will always be the option to unfollow them. We can always do better, and I believe that we will, but fixing AI models trained on decades’ worth of societal bias is going to take more than that, and quite a bit of time.
