Hello, and welcome to this month’s Reputation Digest from Fire on the Hill, where we deliver a rundown of the latest stories making waves in the communications sector. This month, the BAFTAs ended in disaster, Keir Starmer is digging his own grave, and theories about an AI-generated novel rock the internet.
The BAFTAs broadcast debate
The BAFTA Awards found themselves in the headlines this month after a moment during the ceremony sparked considerable controversy. During the show, a Tourette’s syndrome campaigner shouted an involuntary racial slur while actors Michael B Jordan and Delroy Lindo were presenting one of the categories.
On the whole, the incident was understood to be the result of a vocal tic rather than something intentional, and attention quickly shifted to the organisers as clips of the moment began circulating online. The shout was not edited out of the TV broadcast, which aired on BBC One on a two-hour delay, and the ceremony remained available to stream on iPlayer until the morning after.
The scandal deepened after claims that an actor’s “Free Palestine” reference was cut from the show while the racial slur was left in. The BBC’s editorial complaints unit investigated and found that the slur’s inclusion in the broadcast was highly offensive, had no editorial justification, and represented a breach of the BBC’s editorial standards, but that the breach was unintentional.
A mistake it may have been, but it’s one that, for many people, has overshadowed the release of a film focused on the difficulties of living with Tourette’s, which aimed to empower the community.
Awards ceremonies are increasingly forced to compete with digital content and streaming services. Unscripted moments, especially those that generate controversy, tend to attract attention. While there is no evidence to suggest intentionality in this case, the outcome is consistent with a broader pattern in which unexpected incidents at award shows become the focal point.
A breakout novel under suspicion
One of the largest book publishers in the United States has pulled a novel from the shelves following claims that it was written by AI.
Shy Girl, by Mia Ballard, is a new addition to the feminist horror genre and follows a young woman taken advantage of by a wealthy older man. The book had gained traction online due to its controversial subject matter and intense themes. While originally self-published, it was then picked up by Hachette Book Group, part of a growing industry trend to traditionally publish successful self-published works.
Claims started to circulate on online forums that the book showed signs of being AI-generated after a Reddit user highlighted patterns in the overuse of adjectives, similes, and individual words. For instance, “sharp” is used a total of 159 times in the novel – almost once every two pages.
This theory blew up thanks to a New York Times exposé quoting an analysis from a detection company called Pangram, which scanned the book and returned a score of 78 percent AI-generated. Hachette dropped Ballard the same day the piece went live – the first time a Big Five publisher has ever pulled a commercially published book over AI allegations.
Mia Ballard has denied personally using AI to write the novel; however, she did admit that an acquaintance she hired incorporated AI tools, which, to some, is enough to amount to an admission of guilt. Damaging questions have also been raised about how a respected publishing house failed to pick up on this.
But there is one final twist. The person who first surfaced the claims turned out to be a sales employee at Pangram – the very company whose product became the key evidence. The whole affair was essentially a carefully orchestrated PR campaign designed to raise the profile of the company’s product.
What this episode ultimately illustrates is the reputational risk of passing off AI-generated work as your own, but it also shows how much trust we place in AI detection systems that have repeatedly been shown to be fallible.
When the reputation of authors and editors is on the line, is analysis from a single tool enough evidence to make a final decision?
Starmer’s Mandelson appointment continues to dominate headlines
Keir Starmer’s decision to appoint Peter Mandelson as US Ambassador has loomed large over his premiership for the last few months.
Mandelson had twice been forced to resign from government over separate scandals. He was also known to have been a friend of Jeffrey Epstein, even after the New York financier was convicted. Nonetheless, he was appointed as ambassador, with some suspecting that the choice was made because Downing Street felt he was the right man to deal with Trump.
Mandelson was removed from the post within a year, after documents emerged indicating that his connection to Epstein was closer than previously understood. The public reaction was intense, with many questioning not only the appointment itself, but also the decision-making process behind it.
The most significant development came earlier this month, when the Guardian revealed that the Foreign Office had proceeded with the appointment despite internal vetting advice recommending that Mandelson should not be granted security clearance. No 10 pleaded ignorance, claiming that neither the prime minister nor any government minister was aware that Mandelson had failed the vetting process.
The popular consensus seems to be that Starmer is either lying or woefully incompetent. Either is potentially damning when it comes to his position as Prime Minister and head of the Labour Party. The lack of transparency around key decisions continues to erode public trust and creates space for speculation.
Each new day seems to bring fresh revelations that keep Starmer in the spotlight. And as elections in both Wales and Scotland draw nearer, it seems increasingly likely that Labour’s reputational crisis will have concrete consequences.