AI in Pharmaceutical Manufacturing: Where the Industry Actually Stands

Last week, I attended Making Pharma 2026 in Coventry, and one of the panels I sat in on was probably the most useful hour I have spent on the topic of artificial intelligence in pharmaceutical manufacturing. The session, AI in Pharma: Navigating Innovation and Regulation Together, brought together Kevin Bailey of the MHRA, Adam McLennan of AstraZeneca, and Nick Kesterton of ISPE UK, and was moderated by Dr Andrew King of AstraZeneca.

What made the discussion valuable was not novelty. It was precision. Most public conversation about AI in pharmaceutical manufacturing still swings between hype and anxiety, and neither is particularly useful when you actually have to run a regulated factory. The panel did neither. It set out a calm, accurate picture of where the technology is, what regulators expect, and where the genuine opportunities lie. That picture is worth sharing in some detail because the AI conversation in our industry suffers from a shortage of clarity.

This post sets out four themes that emerged from the discussion, with some context and explanation for readers who do not work with AI day to day. It closes with a reflection from the perspective of an equipment manufacturer.

Theme One: Regulators are moving together, and they are moving deliberately

When a new technology starts to enter regulated industries, the most important question is rarely “what can it do” but “what will the regulator allow.” The good news for pharmaceutical manufacturers is that the answer is becoming clearer.

The European Medicines Agency, the UK’s Medicines and Healthcare products Regulatory Agency and the Pharmaceutical Inspection Co-operation Scheme (PIC/S) have been working in close coordination on AI guidance. In July 2025, the European Commission published the draft of Annex 22, the first dedicated guideline on the use of artificial intelligence within the EU’s Good Manufacturing Practice framework. It was developed by EMA’s Inspectors’ Working Group together with PIC/S, with the FDA and MHRA participating as observers. The final version is expected during 2026, followed by a grace period for implementation.

The approach is deliberately principles-based and tool-agnostic. The framework defines how AI should be qualified, validated, monitored and documented, but does not name particular tools or platforms. This is the right design choice. The technology is changing too fast for any rule that names specific products to remain useful for long, and a principles-based approach gives manufacturers and suppliers a stable target to design against even as the underlying tools evolve.

For organisations that have been waiting for regulatory clarity before investing in AI, the message is straightforward. The clarity is here, or close enough to plan around. The framework is conservative, but it is workable.

Theme Two: The accountability line is firm, and it is not moving

A consistent thread through the panel was the role of AI relative to human decision-making. The position is unambiguous. AI in GMP manufacturing is being positioned as a support to expert judgement, not a replacement for it. No AI is releasing batches. No AI is replacing a Qualified Person. MHRA’s public language describes AI as a tool to augment expert judgement, not replace it, and that framing carries real weight in how systems can be built and deployed.

This is where the concept of human-in-the-loop becomes important, and worth explaining properly because it is widely misunderstood.

In AI systems, human-in-the-loop refers to a design principle in which a person reviews, validates or makes the final call on the AI’s output before any consequential action is taken. The principle exists because most AI systems, however sophisticated, can fail in ways that are difficult to predict from the outside. They can be confidently wrong. They can drift in performance over time. They can encounter cases that fall outside their training data and produce outputs that look plausible but are unreliable. A human in the loop is not a fallback for when the AI fails. It is a structural feature of the system designed to keep accountability where it belongs and to catch the edge cases the AI cannot.

In a pharmaceutical context this is particularly important. The cost of an undetected error can include patient harm, regulatory action, batch rejection, or recall. A human-in-the-loop design ensures that the responsibility for decisions of consequence remains with a qualified person, while still allowing the AI to do useful work — typically by handling volume, surfacing anomalies, or pre-processing information so that the human reviewer can focus on the decisions that genuinely need their judgement.
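To make the principle concrete, here is a minimal sketch of how a human-in-the-loop gate might be structured in software. Everything in it is illustrative: the names (`ModelOutput`, `release_decision`) are hypothetical, and the lambda stands in for what would in practice be an interactive review step by a qualified person.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    """Hypothetical AI classification of one item under review."""
    item_id: str
    prediction: str
    confidence: float

def release_decision(output: ModelOutput, reviewer_approves) -> str:
    """The AI proposes; a qualified person disposes.

    No consequential action is taken on the model output alone:
    every prediction is routed through a human review step, and only
    the reviewer's decision is acted upon. The AI output is advisory,
    never authoritative.
    """
    approved = reviewer_approves(output)
    return output.prediction if approved else "escalate-for-investigation"

# Illustration only: a stand-in reviewer that rejects low-confidence
# proposals outright, forcing them into a manual investigation path.
decision = release_decision(
    ModelOutput("item-0042", "no-growth", 0.97),
    reviewer_approves=lambda o: o.confidence >= 0.9,
)
```

The structural point is that the review step sits on the only path to an action, not off to the side as an optional check.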

This framing does a lot of useful work for the industry. It tells manufacturers where the regulatory boundary sits, so they can invest below it with confidence. It gives auditors a clear set of questions to ask. And it tells those of us who design and supply equipment and systems for pharma exactly what to build for. Human oversight is not a feature to add later — it is a design constraint from the outset.

Theme Three: AI is earning its keep, but not where the headlines suggest

The most useful part of the discussion was about where AI is actually delivering value in pharmaceutical manufacturing right now. The picture is much less dramatic than the public conversation implies, and considerably more useful.

The mature use cases are not autonomous factories or self-optimising production lines. They are in the parts of the business that involve enormous volumes of structured paperwork: validation documentation, testing records, deviation investigations, CAPA write-ups, and regulatory submissions. Pharma’s documentation burden is genuinely vast, and modern generative AI tools have matured fastest in exactly the kind of structured-language work this involves. The savings being reported in this area are substantial and consistent across multiple operators.

On the production side, AstraZeneca’s work on AI-assisted environmental monitoring is a useful public example, and was discussed during the panel. The team trained models on thousands of microbiological settle plates, validated the system rigorously against experienced microbiologists, and now uses the AI to read plates with consistency that exceeds the manual process. The work is part of AstraZeneca’s published partnership with Clever Culture Systems on the APAS platform, has been progressing through validation at the Macclesfield site, and has satisfied MHRA from a GMP perspective.

What makes this case study instructive is the unexpected benefit. The AI not only performed the reading task better than the manual process, but it also produced a meaningful improvement in data integrity. Every plate the AI reads is automatically logged with an image and a timestamp, creating a complete digital audit trail that a human reader does not routinely generate. The compliance dividend was a byproduct of the deployment, not its primary purpose.
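The audit-trail benefit described above is easy to picture in code. The sketch below shows the kind of record an automated plate reader might emit for each read; the function and field names are hypothetical, not taken from any real system, but the shape (what was seen, when, and what was concluded) is the point.

```python
import hashlib
import json
from datetime import datetime, timezone

def plate_audit_record(plate_id: str, image_bytes: bytes, result: str) -> str:
    """Build one audit-trail entry for an automated plate read.

    Each read captures a hash of the stored image, a UTC timestamp,
    and the model's conclusion, so the record can be independently
    verified later. A manual read leaves none of this behind.
    """
    record = {
        "plate_id": plate_id,
        "read_at": datetime.now(timezone.utc).isoformat(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "result": result,
        "reader": "automated",
    }
    return json.dumps(record, sort_keys=True)

# Each entry would be appended to an immutable log as it is produced.
entry = plate_audit_record("plate-0042", b"<raw image bytes>", "no-growth")
```

Because every read generates an entry as a side effect of the reading itself, the audit trail is complete by construction rather than by procedural discipline.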

The pattern is worth copying. Narrow scope. Rigorous validation against known controls. Human in the loop throughout. Compliance benefits that emerge as a side effect rather than a sales pitch. Manufacturers looking to deploy AI inside GMP environments will not go wrong by following this template.

Theme Four: Good AI starts with good data, and good data starts with the equipment

If there is one thing that gets glossed over in the AI conversation, it is the question of where the data actually comes from. Most discussions of AI in pharmaceutical manufacturing assume that the data is already there: clean, structured, real-time, and accessible. In practice, that assumption frequently does not hold.

The fundamental constraint of any AI system is the quality of the data it receives. An AI model trained on incomplete, noisy, or inconsistent data will produce incomplete, noisy, or inconsistent outputs. This is true across every industry, but it matters particularly in pharmaceutical manufacturing because the standards for data integrity are exceptionally high, and because the consequences of acting on poor data are exceptionally serious. A predictive maintenance model is only useful if the equipment it monitors is producing reliable telemetry. A real-time release model is only meaningful if the process parameters feeding it are clean and traceable. A deviation-detection algorithm is only as good as the upstream data describing what the process actually did.

In manufacturing, that data comes from the equipment. A tablet coater, a granulator, a fluid-bed dryer, a mill: these machines are the source of the process information that any AI layer above them will rely on. If the equipment does not expose its process parameters cleanly, in real time, in structured and standardised formats, it puts a hard ceiling on what AI can do further up the stack. No amount of investment in MES platforms or analytics tools can compensate for telemetry that is missing, ambiguous or locked inside proprietary controllers.
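What “structured and standardised” means in practice can be shown with a small sketch. The record below is illustrative only (the class, field names and values are assumptions, not any real protocol): the point is that each reading carries its own context, so a downstream MES or analytics layer can use it without reverse-engineering a controller.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class ProcessReading:
    """One timestamped, unit-tagged process parameter from a machine.

    A reading that names its source, parameter, unit and time is
    self-describing: any consumer can interpret it without out-of-band
    knowledge of the equipment that produced it.
    """
    equipment_id: str   # e.g. "coater-03"
    parameter: str      # e.g. "inlet_air_temp"
    value: float
    unit: str           # units stated explicitly, never implied
    timestamp_utc: str  # ISO 8601, so readings are orderable and traceable

reading = ProcessReading("coater-03", "inlet_air_temp", 62.4, "degC",
                         "2026-02-10T09:15:00Z")
payload = json.dumps(asdict(reading))  # a structured, self-describing record
```

Telemetry shaped like this is what gives the software layers above the equipment something solid to build on; telemetry without units, timestamps or provenance is what imposes the hard ceiling described above.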

This is exactly the conversation we are having internally at Gansons. The next generation of our equipment is being designed with data accessibility as a first-order requirement, not an afterthought. The aim is for every machine that leaves our works to expose its process parameters in a form that any modern MES, analytics platform or AI layer can use directly: structured, real-time, traceable, and rich enough to support genuinely useful analysis. We see this as our part of the Pharma 4.0 contract. The software layers above the equipment will only ever be as good as the data the equipment provides, and we believe equipment manufacturers have a real responsibility to raise that floor.

Pharma 4.0 is often described as a software journey. It is more accurate to say it is a data journey, and that journey begins at the machine.

A closing thought

A line from the panel stayed with me. AI in GMP manufacturing is a tool within the change-control framework, not a replacement for it. The manufacturers who succeed with AI in this industry will be the ones who treat it the way they treat any other piece of critical equipment — qualified, validated, monitored, and understood by the people running the line. There is no shortcut around that, and there shouldn’t be. The pharmaceutical industry’s quality standards exist for good reason, and any technology that wants to play a serious role inside them needs to earn its place.

What was encouraging about the panel was the sense that the industry, the regulators and the responsible suppliers are largely in agreement about what that earning process looks like. That alignment is rarer than it sounds, and worth noting when it appears.

Anamika Banerjee

Anamika Banerjee leads UK operations for Gansons, a manufacturer of solid-dosage pharmaceutical processing equipment with eight decades of experience supplying tablet coating, granulation, drying, milling and blending systems to regulated manufacturers worldwide.
