We’re five days into 2025, and Fable has certainly stirred things up in the reading community. This is one of the biggest controversies I’ve seen since I started my blog and bookstagram two years ago, and it involves a technology that I have very mixed feelings about: AI.
Fable claims to have over 2 million users and is a social app for readers, where you can track books read and share reviews and recommendations with other readers. Recently, many people have been leaving Goodreads and going to Fable, StoryGraph, or similar apps. If you’re looking for a rundown on what’s been happening, here’s what I know and some of my thoughts.
What Happened With Fable’s AI?
Fable, the social app widely used by readers, FAFO’d with the reading community right out of the gate in 2025. The app offered AI-generated reader summaries, similar to Spotify Wrapped, and some users received summaries that were blatantly racist, sexist, and ableist.
The issue first blew up last week when @Tianas_littalk shared her reader summary on Threads, which read: “Soulful Explorer: Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?”
Other Fable reader summaries were shared on Threads, Bluesky, and TikTok, which demonstrated similar offensive wording. Writer Danny Groves’ summary has made its way around the social platforms: “Diversity Devotee: Your bookshelf is a vibrant kaleidoscope of voices and experiences, making me wonder if you’re ever in the mood for a straight, cis white man’s perspective!”
Fable’s AI reader summary feature also failed to demonstrate sensitivity toward disabilities, “roasting” some users’ reading habits with ableist and condescending language. On Threads, @thecozyarchivist shared their summary, which attributed a “genre-hopping habit” to “a severe case of literary ADHD.” On TikTok, @Naomi_swiftie17 shared their summary: “Your wild journey through disability narratives could earn an eye-roll from a sloth.” (What does that even mean?)
The incident has gone mainstream and has been picked up by several online newspapers/magazines.
Fable’s Response to the Offensive Reader Summaries
Fable has issued several responses through social media comments and posts, as well as two videos of Chris Gallello, Fable’s head of product. In the first video, Gallello apologized and said the Fable team was shocked to discover the reader summary feature had generated “very bigoted, racist language” and attributed it to their AI model’s “playfulness.” Gallello promised that the team would work on updating safeguards to try to prevent offensive language from being generated in the future and would add a feature where users could flag inappropriate summaries.
In a second update video, Gallello said that due to user feedback following the incidents and the first video, they would remove the Wrapped feature and some other AI-powered features. Gallello also announced that Fable will be holding a town hall over Zoom on January 6 at 7 p.m. EST. Gallello said they “want to use this as an opportunity to learn and grow.”
While they do seem genuinely apologetic, and I’m sure the team at Fable is mortified, I can’t help but wonder how they let this happen in the first place. It feels completely careless and reckless to unleash an AI model on a community of readers and writers, especially since they built their platform to serve a community that is highly critical of AI technology to begin with. The livelihood of authors, writers, and artists has been increasingly threatened by AI in recent years. It’s shocking to me that the Fable team would release this feature without thoroughly testing it first; had they done that, this could have been avoided. The incident highlights the importance of thorough testing before deploying AI tools and raises real questions about whether AI belongs in certain communities and industries at all.
Controversial AI in the Writing and Reading Communities
The writing community has faced its own AI controversies: writers being accused of using AI, writers being encouraged to use AI, writers losing jobs to AI, and writers having their work fed to hungry AI models without their consent. As a writer, I have conversations about this daily. As writers, editors, content creators, and artists, we’re often told that we have to learn to make use of AI or we’ll eventually be replaced by someone who will. As readers, we’re tasked with discerning whether our favorite author’s latest book cover was generated by AI or whether the article we just read was written by AI.
When I was in college, an advisor once told me I should find a job I don’t hate and do writing as a hobby. That pissed me off, because why was I taking on thousands of dollars of debt for a hobby? I was determined to do the opposite of what my advisor suggested, and it took years, but I finally did get a job as a writer. Writing as a career isn’t exactly known for stability, and I entered it just in time for AI’s writing career to blow up as well.
Should You Delete Fable?
Many people in the reading community have said that they have deleted or will be deleting Fable. I tried the app about a year ago and never really got into it. Personally, I’ve been using StoryGraph, which also uses AI for certain features but gives you the choice to opt out. Goodreads was my go-to, but I recently canceled my Amazon Prime and Kindle Unlimited subscriptions and have really just been over Goodreads for a while now.
It does seem like Fable wants to learn from this incident and do better, which is more than I can say for some other companies in the book industry. Whether you decide to stay with Fable or not is totally up to you, and only time will tell if they actually do the work to avoid harming readers and writers in the future.
The Problems With AI: It’s Biased Because We Are
The Fable incident is far from the first time an AI model has demonstrated racial bias; we’ve seen it in AI chatbots and image generators. AI models are trained on datasets that include our societal biases, which means they can inadvertently learn those biases and generate content that reflects them. This doesn’t excuse companies from properly testing their models and putting safeguards in place to prevent harmful language, as we’ve seen with Fable.
Following the Fable incidents, some have criticized the reading community and the literary industry itself, since it is ultimately the biases within these communities that influence AI. There’s also the question of what data Fable’s AI was trained on. It’s a valid point that Fable’s AI would reflect the biases that run rampant in the literary industry and among its consumers, but I think it’s more complicated than that.
I’ve seen huge strides in inclusivity and diversity as a reader, and while I know the literary industry has a long way to go, I do believe things will continue to change. There haven’t been many communities that have given me hope for change in the right direction lately, but the reading community continually has. This change is due to readers, writers, editors, and artists who want to see diversity and inclusion within literature. Over the past couple of years, I’ve seen other readers and writers strive to diversify their reading and have been inspired to do the same. There will always be those readers who feel the need to declare on Threads that they will not be diversifying their reading, and there will always be the option to unfollow them. We can always do better, and I believe that we will, but I think it’s going to take more than that and quite a bit of time to fix AI models that have been trained on decades worth of societal bias.