Over the past few years, Microsoft has been increasingly, aggressively, and arguably recklessly integrating large language model (LLM) artificial intelligence (AI) agents into the most recent versions of Windows, with potential security issues for businesses reliant on secure IT solutions.
This initiative, which Microsoft called “Agentic AI”, is somewhat controversial, in part due to the many issues that the company admitted to in that same update.
Namely, LLMs, AI Agents or chatbots (all synonyms for roughly the same technology) are designed to simplify using Windows by having users type prompts or talk to an agent such as Copilot, who will then use natural language processing to complete the task.
The claim is that this would be quicker and easier than doing the same task using a keyboard and mouse, but the problems with the system, besides its difficulty in completing some basic office IT tasks faster than a typical user could, are that sometimes it will make stuff up.
This issue, known as hallucinations, delusions or confabulations, is where an LLM will output a result that is plausible but completely wrong, such as citing made-up papers in an essay, creating code that looks functional but does not, and failing to recognise the correct person from an image or video.
More dangerously, AI has lied about the state of a system or database, leading to a worst-case scenario reported by PC Mag, where an AI agent deleted an entire database despite having strict instructions not to.
This happens broadly due to the ways in which LLMs work; they do not necessarily understand the right answer but instead find the most plausible answer that will lead to the most positive response by the end user, in a way similar but significantly more sophisticated than an autocomplete function on a mobile phone.
The system is also vulnerable to security exploits through “prompt injection”, where an AI tool reads hidden instructions that are designed to bypass safety filters and security instructions to cause data breaches or unwanted outputs.
This has put Microsoft in the position of creating a two-pronged security issue; agentic AI can create whole new security issues if activated more widely, whilst over a billion Windows 10 users have elected to stay put as of December 2025, not all of whom will be receiving security updates.

