When Alexander Fleming discovered penicillin in 1928, he did not think it would be a useful antibiotic for treating infections. It was hard to isolate, it was unstable, and he doubted it would last long enough in the body to take effect. It wasn’t until 1941 that pharmacologist Howard Florey and biochemist Ernst Boris Chain treated the first patient with penicillin. However, they ran out of the drug, and the patient died. Florey and team member Norman Heatley then traveled to the U.S. to scale up production using deep-tank fermentation, with the help of the U.S. government and pharmaceutical companies. Pfizer opened its first plant for large-scale penicillin production in 1944.
“A lot of the older [fermentation] facilities that are out there today were built in the 1940s or 1950s to make those kinds of products,” says Mark Warner, co-founder and CEO of Liberation Labs and the 2025 SynBioBeta Conference Chair for Biomanufacturing Scale-Up. The following decades saw facilities focus on producing biopharmaceuticals. More recently, bioreactors are being repurposed to produce ingredients for food, biofuels, and biomaterials, such as heme protein, spider silk, and human milk oligosaccharides.
Bioprocesses, which use living cells to create a desired product, are complex and require a series of steps to optimize both cell growth and product output. Automation can help make the process more reproducible and less prone to contamination or error.
Bruce Li, general manager at TJX Bioengineering, describes two main ways that bioprocessing uses automation: process automation and workflow automation. Process automation is not new, he says. “This dates back to the 1950s or 1960s when people started using computers to control bioreactors.” Monitoring culture conditions such as nutrient levels, off-gas composition, and temperature could all be done with sensors that feed information into a computer.
Workflow automation, on the other hand, uses robotic arms for liquid-handling tasks: automating assembly, dosing, and inoculation, for example. This type of automation has been around for about a decade, he says.
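Of the two, process automation is the part that maps most directly onto a software control loop. The following is only a minimal sketch of the idea, not any vendor's system: the sensor-reading function, setpoints, and corrective actions are all hypothetical placeholders standing in for a bioreactor's real instrumentation.

```python
import time

# Hypothetical sensor reader; in a real plant this would wrap the reactor's
# own probes (thermocouple, pH electrode, dissolved-oxygen sensor, off-gas analyzer).
def read_sensors():
    return {"temperature_c": 30.2, "ph": 6.4, "dissolved_oxygen_pct": 38.0}

# Assumed target conditions for the culture.
SETPOINTS = {"temperature_c": 30.0, "ph": 6.5, "dissolved_oxygen_pct": 40.0}

def control_step(reading):
    """Return simple corrective actions when a reading drifts from its setpoint."""
    actions = []
    if reading["temperature_c"] > SETPOINTS["temperature_c"] + 0.5:
        actions.append("increase cooling")
    if reading["ph"] < SETPOINTS["ph"] - 0.2:
        actions.append("dose base")
    if reading["dissolved_oxygen_pct"] < SETPOINTS["dissolved_oxygen_pct"] - 5:
        actions.append("raise agitation or airflow")
    return actions

while True:
    reading = read_sensors()
    for action in control_step(reading):
        print(f"{time.strftime('%H:%M:%S')}  {action}  ({reading})")
    time.sleep(60)  # poll the reactor once a minute
```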
What past process automation technologies have lacked is the ability to quickly turn reactor conditions into indicators of yield. That kind of prediction can help find the conditions that best grow the cells and maximize protein production.
AI can help make these decisions based on current and historical data to automatically optimize the bioreactor’s operation, says Li; the approach is known as model predictive control. “The idea itself is not new, but we have been lacking the computing power. Back in the 90s, there were already books written on using AI and machine learning in bioprocess and chemical process optimization.”
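As a rough illustration of the concept, and not a description of any company's actual software: a model trained on historical runs can be queried inside the control loop to pick the next setpoint. The feature names, units, sample data, and candidate feed rates below are all assumptions made purely for the sketch.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Assumed historical run data: each row is (temperature C, pH, feed rate g/L/h),
# and y is the titer eventually achieved. Real data would come from plant records.
X_hist = np.array([
    [30.0, 6.5, 2.0], [30.0, 6.8, 3.0], [32.0, 6.5, 2.5],
    [28.0, 6.2, 1.5], [31.0, 6.6, 3.5], [29.0, 6.4, 2.8],
])
y_hist = np.array([12.1, 14.3, 13.0, 9.8, 15.2, 13.6])  # g/L titer

model = GradientBoostingRegressor().fit(X_hist, y_hist)

def choose_feed_rate(temperature, ph, candidates=np.linspace(1.0, 4.0, 13)):
    """Predict titer for each candidate feed rate and return the best one."""
    grid = np.column_stack([
        np.full_like(candidates, temperature),
        np.full_like(candidates, ph),
        candidates,
    ])
    predicted = model.predict(grid)
    return candidates[np.argmax(predicted)], predicted.max()

feed, expected = choose_feed_rate(temperature=30.5, ph=6.5)
print(f"set feed rate to {feed:.2f} g/L/h (predicted titer {expected:.1f} g/L)")
```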
However, optimizing a bioprocess requires a lot of data, covering both successful and failed runs. Li says this data is still lacking.
One way to solve this problem is to use high-throughput parallel bioreactors, sets of bioreactors that can run independently at the same time, each with different conditions. “The idea of using high-throughput parallel bioreactors is to generate as much data as possible very quickly,” says Li. “This data is too much for any human to analyze, so you use machine learning algorithms to process the data.”
Li, who did his PhD on multi-phase processing, notes that he had only one bioreactor to work with during his doctoral research. “It was shared by four PhD students so we took turns. I probably only did maybe 5 or 6 runs during that time. High-throughput parallel bioreactors can do more experiments than I could have ever imagined in one week. Imagine 64 data points in one week,” he says.
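With that many runs in a week, the analysis becomes a machine learning exercise. Below is a minimal sketch, assuming a 4 × 4 × 4 factorial design over temperature, pH, and feed rate (64 conditions); the yields are simulated with a made-up response surface purely so the example runs, whereas in practice they would be measured from the parallel reactors.

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Full-factorial design over three factors: 4 x 4 x 4 = 64 conditions,
# one per miniature bioreactor run in a single week.
temperatures = [28, 30, 32, 34]      # degrees C
ph_values = [6.0, 6.3, 6.6, 6.9]
feed_rates = [1.5, 2.0, 2.5, 3.0]    # g/L/h
conditions = np.array(list(itertools.product(temperatures, ph_values, feed_rates)))

# Placeholder yields: an invented response surface plus noise, standing in
# for the titers actually measured at the end of each parallel run.
rng = np.random.default_rng(0)
yields = (
    -0.3 * (conditions[:, 0] - 31) ** 2
    - 8.0 * (conditions[:, 1] - 6.5) ** 2
    + 2.0 * conditions[:, 2]
    + rng.normal(0, 0.3, len(conditions))
)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(conditions, yields)

# Rank the tested conditions by predicted yield and see which factors matter most.
best = conditions[np.argsort(model.predict(conditions))[::-1][:3]]
print("top conditions (T, pH, feed):", best.tolist())
print("feature importances (T, pH, feed):", model.feature_importances_.round(2))
```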
While parallel bioreactors may be more suitable for process development, continuous bioprocessing can enable the generation of large amounts of data at the manufacturing scale for real-time optimization of production. Pow.bio, founded by Shannon Hall and Ouwei Wang, uses continuous fermentation split across two tanks – one for growth and one for production – to eliminate contamination and genetic drift issues common in continuous systems. “What we have done is we've broken the fermentation into unit operations and then overlaid that with a set of AI programs so that we can get high performance out of that fermentation,” says Hall.
This software monitors the fermenters for up to three weeks, continuously optimizing different conditions to achieve the best production faster. “What we get in one tank is like getting 100 tanks done in a traditional way,” Hall says. “The key is we have this incredible roadmap that lets us drive to reproducible results every single time.”
Pow.bio believes its system can produce higher amounts of protein at lower capital and operational costs, frequently achieving a 2-10x gain in productivity. “On the economics side, that means that we can use much lower capital to produce a unit of a product,” says Hall. “We need to make proteins at quantities that make sense for really going to market. Because if we don't go to market, these are excellent ideas that don't have social impact.”
Bioreactors designed for biopharma have different cost and scale requirements from those needed for food production. “Pharmaceuticals are usually smaller scale, but higher cost products. Foods are usually much larger in scale and at a lower cost,” says Warner. “Most of the facilities out there today were built to make one specific product, and they had to be retooled or reworked to make other products.” However, it’s difficult to repurpose existing bioreactors for precision fermentation needs. “95% of these facilities that are out there weren’t designed for food,” says Warner. “You might think a pharmaceutical facility should be fine for making food, but it's more of a struggle than you would think it is.”
For example, purifying proteins, whether for food or other uses, relies on downstream processes such as microfiltration and ultrafiltration, which separate molecules by size. In contrast, small-molecule pharmaceuticals are generally purified by distillation and solvent extraction.
The microorganism Komagataella phaffii (formerly known as Pichia pastoris) is commonly used today to produce food proteins, but it uses methanol as its carbon source. “Very few of the existing facilities can run methanol in their fermenters,” says Warner. “For us, we were able to design a new facility knowing we needed to do that. We're building it to serve a wide swath of processes.”
Earlier this year, Liberation Labs began building its first commercial-scale precision fermentation facility, containing four 150,000-liter bioreactors. It plans to build a larger 4 million-liter facility containing eight fermenters of 500,000 liters each.
Fermenters can be automated entirely, but this is not often done in practice. “We can run our software fully autonomously or in a co-pilot mode with the process engineer,” Hall says. “It's fairly common to have what we call hold points,” Warner adds. These hold points allow users to check that everything needed has been completed correctly before moving the process forward.
A lot of this comes down to risk. “A failed batch can cost hundreds of thousands of dollars. Do you really trust that process to be 100% not monitored?” asks Warner.