A few weeks ago, Apple News made headlines for all the wrong reasons. Its AI summarisation tool generated inaccurate and sometimes offensive summaries of news articles. While some errors were merely laughable, others seriously damaged trust in the platform. Apple almost certainly used a Large Language Model (LLM) to generate those summaries.
LLMs are the reason you hear so much about AI right now, as they transform how entire industries work. They have become popular for tasks like summarisation because they can process vast amounts of text and generate natural-sounding prose. However, they have a well-documented flaw called “hallucination,” where the model invents content that isn’t grounded in its source material or in reality. It’s a risk that many organisations overlook, right up until it creates a public relations nightmare.
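To make the risk concrete, here is a minimal sketch of how an LLM-based summariser is typically wired up, assuming the OpenAI Python SDK; Apple’s actual pipeline is not public, and the model name below is just a placeholder. The key point is that nothing in the call itself constrains the output to be factual.

```python
# Minimal sketch of an LLM-based article summariser (not Apple's actual pipeline).
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarise(article_text: str) -> str:
    """Ask the model for a one-sentence summary of the article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whichever model you actually use
        messages=[
            {"role": "system", "content": "Summarise the article in one neutral sentence."},
            {"role": "user", "content": article_text},
        ],
    )
    # Nothing here guarantees factuality: the returned text is whatever the model
    # judges most plausible, which is exactly where hallucinations slip in.
    return response.choices[0].message.content
```

The summary comes back as free-form text, with no built-in check that it reflects the article it was given.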