When infectious diseases spread, the first warning signs may not come from hospitals or doctor reports but from Google searches among worried residents.
A KAUST-led study, focusing on a 2024 outbreak of dengue fever in Brazil, has found that internet searches for disease-related terms can provide faster and sometimes more accurate estimates of case numbers than official surveillance reports[1].
The research has prompted the development of a website that incorporates Google search activity across Brazil’s 26 states and the Federal District to support timely decisions and resource allocation. It also provides proof of concept of how digital data can help health officials track outbreaks in real time.
“It is urgent that we take action and work together in order to reduce the impacts associated with dengue and other mosquito-borne diseases — and one way to do it is through the development of these kinds of disease surveillance systems,” says data scientist Paula Moraga, who led the study.
Moraga and her team, in collaboration with statisticians from Brazil, evaluated several prediction models used for tracking dengue transmission. The researchers compared traditional epidemiological approaches, which predict cases on the basis of recent trends in confirmed weekly case counts, with models incorporating search data from Google Trends, a tool that analyzes the popularity of different search queries.
The difference was striking. In most Brazilian states, the simplest model — built only on search queries for the word ‘dengue’ — proved more accurate than traditional approaches: errors in estimating weekly cases were consistently smaller, and the search-based model captured the timing of surges more precisely.
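As a rough illustration of the kind of comparison involved (not the study’s actual models or data), the sketch below fits two simple nowcasting regressions on synthetic weekly counts: a baseline that uses only last week’s reported cases, and a second model that adds a Google Trends-style search signal.

```python
# Illustrative sketch only: hypothetical data and simplified models,
# not the study's actual methodology.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
weeks = 104

# Simulate a hypothetical dengue season: cases rise and fall, and search
# interest tracks cases with a slight lead plus some noise.
t = np.arange(weeks)
true_cases = 500 + 400 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 50, weeks)
search_volume = np.roll(true_cases, -1) / 10 + rng.normal(0, 5, weeks)

# Official counts arrive a week late and with reporting noise.
reported_lag1 = np.roll(true_cases, 1) + rng.normal(0, 80, weeks)

X_baseline = reported_lag1[2:].reshape(-1, 1)                     # cases only
X_search = np.column_stack([reported_lag1[2:], search_volume[2:]])  # cases + search
y = true_cases[2:]

split = 80  # train on earlier weeks, evaluate on later ones
for name, X in [("baseline (cases only)", X_baseline),
                ("search-augmented", X_search)]:
    model = LinearRegression().fit(X[:split], y[:split])
    err = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name}: mean absolute error = {err:.1f} cases/week")
```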
The study demonstrates how digital information sources — such as internet search terms, chatbot interactions, and social media posts — can complement traditional health surveillance methods, notes Moraga, who won the 2023 Letten Prize for her work developing statistical methods for public health surveillance.
“This information is not produced for epidemiological research, but we can use it to understand disease activity levels in real time,” she explains.
The search-based model was especially valuable in the southeastern state of Rio de Janeiro, where official surveillance data were delayed and incomplete, yet the Google-based approach produced timely estimates that were adopted by the Ministry of Health.
Such discrepancies underscore the value of ‘nowcasting’ — estimating the current state of an outbreak when official statistics are delayed or incomplete. For diseases like dengue, which can overwhelm hospitals in a matter of weeks, even modest time gains can shape how health authorities deploy doctors, hospital beds, and mosquito-control campaigns.
“Nowcasting methods allow us to understand current disease activity levels and make better informed decisions,” says study co-author Yang Xiao, a member of Moraga’s GeoHealth research group at KAUST.
The research has also been put directly into practice through the creation of Dengue Tracker, an online platform publishing weekly forecasts and interactive maps for every Brazilian state. “These reports assisted policymakers and the general public in understanding dengue levels and guided their decisions,” Xiao says.
Google queries are not a substitute for robust surveillance, emphasizes Moraga, especially in areas with limited internet access. However, she says that digital signals provide a valuable complement — one that may be especially useful in the Gulf region, where mosquito-borne diseases such as dengue, malaria and Rift Valley fever remain persistent threats.
A three-dimensional imaging method utilizes substrates that are birefringent — they have different refractive indices along different crystal axes — to enhance the precision and depth of single-particle tracking, eliminating the need for high-tech hardware. Developed by KAUST, the user-friendly method is compatible with a standard fluorescence microscope and makes molecular motion in complex environments easier to visualize — a potential research tool for general users[1].
Three-dimensional single-particle tracking enables the direct characterization of molecular motion in complex environments. In life science, it provides a crucial understanding of the motion and associated behavior of biological molecules and complexes in cells. This includes the cellular uptake of viruses and DNA hybridization.
Most tracking approaches determine the spatial coordinates of individual particles from a unique pattern known as the point spread function (PSF), which represents what the microscope ‘sees’ for a single point of light. Engineering the PSF usually requires placing additional optics, such as spatial light modulators, in the detection path of the microscope; these create patterns that vary with the axial position, or depth, of the particles, revealing their location.
“This is very useful but involves special knowledge and a sophisticated, custom-built microscope,” says principal investigator Satoshi Habuchi.
Now, a team led by Habuchi and Shuho Nozue has devised a convenient tracking method that uses mica plates as substrates. The method does not require a customized microscope; instead, the birefringent substrates modify the way light propagates, generating axial-position-dependent patterns.
An evaluation of the substrates’ performance showed that fluorescent nanoparticles supported by mica produced distinct, non-concentric patterns that changed depending on their axial position. The method performed well, achieving a maximum axial tracking range of 30 micrometers with a localization precision of better than 30 nanometers, surpassing conventional tracking techniques.
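To give a sense of how such axial-position-dependent patterns translate into a depth measurement, the following generic sketch (with a made-up calibration curve, not the team’s actual image analysis) records a scalar PSF feature at known depths and then inverts that calibration for a new particle.

```python
# Generic sketch of calibration-based axial localization.
# The PSF feature and its values are invented for illustration; the real
# method extracts features from mica-induced, non-concentric PSF patterns.
import numpy as np

# Calibration: image a bead at known z positions (micrometers) and measure
# some scalar PSF feature (for example, a pattern asymmetry metric).
z_calib = np.linspace(0, 30, 31)                    # 0-30 um axial range
feature_calib = 0.02 * z_calib**2 + 0.1 * z_calib   # hypothetical monotonic response

def estimate_z(feature_measured):
    """Invert the calibration curve by interpolation to recover depth."""
    return np.interp(feature_measured, feature_calib, z_calib)

# A new particle whose PSF feature is measured from a single camera frame:
measured = 7.4
print(f"Estimated axial position: {estimate_z(measured):.2f} um")
```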
“We knew that mica substrates could distort the PSF because of their birefringence, but had no idea whether the distortion could be used for PSF engineering,” Habuchi says. “We were amazed by the large tracking range.”
Simulations by a group led by colleague Ying Wu perfectly matched the observed axial-position-dependent PSF, confirming the role of birefringence in PSF distortion.
Variations in substrate thickness also altered the patterns by changing the axial range: a thicker substrate produced an axial-position-dependent PSF over a larger axial range, and hence a larger axial tracking range, and vice versa.
Habuchi explains that, depending on tracking experiment requirements, this allows users to choose mica substrates of different thicknesses that exhibit an axial tracking range matching the sample thickness.
The researchers demonstrated that their method could localize and track nanoparticles in live cells for an axial range of more than 20 micrometers. They clearly captured the three-dimensional trajectory of an individual nanoparticle and localized multiple nanoparticles in plant cells, which are much larger and more challenging to examine by fluorescence microscopy than animal cells.
This suggests that the new method can provide insight into the spatiotemporal dynamics of target molecules and the delivery of external materials, such as genetic materials for genome editing, into plant cells, as well as motion in multicellular systems, such as biological tissues.
The team is now working to identify birefringent materials that outperform mica, to streamline the image-processing pipeline using deep learning, and to expand the method’s capabilities to three-dimensional orientations.
The complex sugar molecules that festoon our cells are often treated as little more than biological decoration. A new study suggests they hold hidden patterns — distinct signatures that can separate one cancer from another.
By tracing the genetic machinery that sculpts these sugar tags, called glycans, scientists at KAUST have uncovered a sweet-and-simple diagnostic code, one that could make identifying and classifying tumors faster and more precise[1].
“We have created the building blocks for a one-stop classification system for all cancers,” says cell biologist Jasmeen Merzaban, who co-led the study.
The project began as a collaboration between Merzaban and her colleague, computational biologist Xin Gao, who specializes in applying artificial intelligence to health-related challenges. Together, they trained a machine learning algorithm on gene expression data from thousands of tumor samples — focusing not on the full catalogue of gene readouts, as existing AI cancer-classification tools have done, but on a lean set of 71 genes responsible for building glycans. These cancer-pattern glycosyltransferases, or CPGTs, are known to play a pivotal role in how tumors proliferate and spread.
The resulting model proved remarkably powerful: it sorted tumors into 27 categories with more than 95 percent accuracy — a performance on par with, and in some cases surpassing, gold-standard genomic classifiers that rely on far larger gene sets. And, unlike many cancer-classification systems that require massive datasets and heavy computational power, the KAUST model runs quickly on a standard laptop. Results can be generated in under half an hour, enabling broader use in hospitals and labs that lack high-performance computing resources.
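As a minimal sketch of this general approach (a generic classifier trained on synthetic expression values, not the published model or its data), a small gene panel can be used to assign tumor samples to classes as follows.

```python
# Minimal sketch of classifying tumors from a small gene-expression panel.
# Synthetic data and a generic classifier stand in for the study's
# 71 CPGT genes, real tumor samples, and published model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_samples, n_genes, n_classes = 3000, 71, 27

# Give each synthetic cancer type its own mean expression profile.
class_profiles = rng.normal(0, 1, (n_classes, n_genes))
labels = rng.integers(0, n_classes, n_samples)
expression = class_profiles[labels] + rng.normal(0, 0.5, (n_samples, n_genes))

X_train, X_test, y_train, y_test = train_test_split(
    expression, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```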
The payoff was not just in speed. In gliomas — an especially aggressive form of brain cancer — glycan-related gene expression patterns predicted patient survival more reliably than standard clinical markers. In breast cancer, the CPGT-based classifier nearly doubled the accuracy of a widely used genomic test in distinguishing tumor subtypes.
“That means CPGTs may reveal not only what kind of cancer a patient has, but also how the disease is likely to progress,” notes Jing Kai, a Ph.D. student in Merzaban’s lab group and co-first author of the study. “Our work here shows the untapped potential for using glycans in cancer diagnosis and prognosis,” she adds.
Ali AlZahrani, a thyroid cancer specialist and co-author of the study from the King Faisal Specialist Hospital and Research Center in Riyadh, agrees. “This study is proof of a new concept in the diagnosis and classification of cancer with potentially wide-reaching applications,” he says.
To move the technology closer to clinical use, the KAUST team is now streamlining their methods for analyzing CPGT expression and, in collaboration with AlZahrani and other Saudi clinicians, validating their model in larger patient cohorts. At the same time, the researchers have joined forces with KAUST structural biologist Andreas Naschberger to solve the three-dimensional structures of key CPGTs, aiming to identify new therapeutic targets that could inspire future drug development.
“The diagnostic tool marks a first step toward turning sugar biology into a practical tool for precision medicine for people with cancer,” notes Merzaban. “This ultimately broadens the toolkit for both basic discovery and translational oncology,” she concludes.
A novel way to train AI models that empowers them to better assist in cutting-edge research has been developed by researchers at KAUST[1]. The new machine learning method enables accurate AI prediction even in frontier areas of science where only very limited data is available to train the model.
“The new method is already generating new leads in the development of sustainable aviation fuel (SAF), potentially helping to overcome a major challenge in the clean energy transition,” says the lead author of the study, Basem Eraqi, a Ph.D. student in the Clean Energy Research Platform, led by Mani Sarathy.
AI models with property prediction capabilities could dramatically accelerate the discovery of molecules with advanced performance for a specific task. “To build such models, conventional machine learning techniques typically require large, well-balanced datasets to achieve reliable performance,” Eraqi says. However, in many cases — including the development of new pharmaceuticals and polymers, as well as sustainable aviation fuels — there is very little data available for each molecular property of interest.
“Our goal was to develop a machine learning method that performs well even in this ultra-low-data regime, enabling performant material discovery in data-scarce domains,” Eraqi says.
The team based their approach on a method called multi-task learning (MTL), which trains a model to predict multiple properties at once. “The core idea is that, by learning related tasks simultaneously, the model can extract and reuse shared patterns in the data,” Eraqi explains. A molecule’s flammability limits, for example, are related to its volatility, and so learning these properties together can enhance the model’s predictive performance.
The smaller or more imbalanced the dataset used for MTL, however, the greater the chance of ‘negative transfer’, where the model makes erroneous connections that harm its predictive performance.
To protect against negative transfer, the team developed a novel training scheme called Adaptive Checkpointing with Specialization (ACS). “ACS monitors each task’s performance and preserves the best-performing model state for that task, allowing for safe and effective knowledge sharing,” Eraqi says. By mitigating negative transfer, ACS can improve the accuracy and stability of molecular property predictions.
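The following PyTorch sketch illustrates the general idea of per-task checkpointing in multi-task training; the architecture, data, and checkpointing rule are simplified placeholders rather than the published ACS implementation.

```python
# Illustrative sketch of multi-task learning with per-task checkpointing,
# loosely inspired by the ACS idea described above. Data, architecture,
# and the checkpointing rule are simplified placeholders.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
n_tasks, n_features, n_train, n_val = 3, 16, 64, 32

# Synthetic regression data: each task predicts a different property.
X_train, X_val = torch.randn(n_train, n_features), torch.randn(n_val, n_features)
true_w = torch.randn(n_tasks, n_features)
y_train, y_val = X_train @ true_w.T, X_val @ true_w.T

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.shared(x)
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

best_val = [float("inf")] * n_tasks
best_states = [None] * n_tasks          # one preserved checkpoint per task

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)   # joint loss over all tasks
    loss.backward()
    opt.step()

    with torch.no_grad():
        val_pred = model(X_val)
        for t in range(n_tasks):
            task_val = loss_fn(val_pred[:, t], y_val[:, t]).item()
            if task_val < best_val[t]:        # keep the state that is best for THIS task
                best_val[t] = task_val
                best_states[t] = copy.deepcopy(model.state_dict())

print("Best per-task validation losses:", [round(v, 3) for v in best_val])
```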
The team trialed ACS by testing its capability to predict properties of potential SAF components. “SAF development is a high-impact, real-world challenge where experimental data is extremely limited and labour-intensive to obtain,” Eraqi concludes. ACS delivered robust and accurate predictions across 15 SAF properties, consistently outperforming conventional models. It performed especially well in ultra-low-data settings, with as few as 29 training data points, achieving over 20% higher predictive accuracy than conventional training methods.
“The model’s accurate predictions are already helping to accelerate the discovery and development of new SAF blends,” Sarathy says. “We are applying the ACS methodology to predict several dozen SAF-relevant properties that can impact aircraft emissions and efficiency,” he adds. “These property predictions are then being fed into a fuel design tool targeting novel SAF formulations for an industrial partner.”
The team has also tested ACS on pharmaceutical and molecular toxicity datasets, confirming that it delivered significant predictive accuracy improvements over conventional training methods.
Microbes are masters of survival, evolving ingenious strategies to capture energy from their surroundings. For decades, scientists believed that only a handful of bacteria used specialized molecular “circuits” to shuttle electrons outside their cells — a process known as extracellular electron transfer (EET). This mechanism is critical for cycling carbon, sulfur, nitrogen, and metals in nature, and it underpins applications ranging from wastewater treatment to bioenergy and bioelectronics materials.
Now, KAUST researchers have discovered that this remarkable ability is far more versatile and widespread than previously imagined.
Working with Desulfuromonas acetexigens — a bacterium capable of generating high electrical currents — the team combined bioelectrochemistry, genomics, transcriptomics, and proteomics to map its electron transfer machinery[1]. To their surprise, D. acetexigens simultaneously activated three distinct electron transfer pathways previously thought to have evolved separately in unrelated microbes: the metal-reducing (Mtr), outer-membrane cytochrome (Omc), and porin-cytochrome (Pcc) systems.
“This is the first time we’ve seen a single organism express these phylogenetically distant pathways in parallel,” says first author Dario Rangel Shaw. “It challenges the long-held view that these systems were exclusive to specific microbial groups.”
The team also identified unusually large cytochromes, including one with a record-breaking 86 heme-binding motifs, which could enable exceptional electron transfer and storage capacity. Tests showed that the bacterium could channel electrons directly to electrodes and natural iron minerals, achieving current densities comparable to the model species Geobacter sulfurreducens.
By extending their analysis to publicly available genomes, the researchers identified more than 40 Desulfobacterota species carrying similar multipathway systems across diverse environments, from sediments and soils to wastewater and hydrothermal vents.
“This reveals an unrecognized versatility in microbial respiration,” explains Krishna Katuri, co-author of the study. “Microbes with multiple electron transfer routes may gain a competitive advantage by tapping into a wider range of electron acceptors in nature.”
The implications go well beyond ecology. Harnessing bacteria that can employ multiple electron transfer strategies could accelerate innovations in bioremediation, wastewater treatment, bioenergy production, and bioelectronics. For instance, electroactive biofilms like those formed by D. acetexigens could help recover energy from waste streams while simultaneously treating pollutants.
“Our findings expand the known diversity of electron transfer proteins and highlight untapped microbial resources,” adds Pascal Saikaly, who led the study. “This opens the door to designing more efficient microbial systems for sustainable biotechnologies.”
As researchers delve deeper into the microbial world, the discovery that a single bacterium can use multiple pathways underscores how much remains to be explored and how these hidden strategies could power a cleaner, more sustainable future.
In the urgent search for scalable, science-based solutions to the global coral reef crisis, there is a demand for robust ways to test interventions that bridge the gap between laboratory and field experiments. In 2021, this prompted an international team of scientists, led by KAUST researchers, to establish a functional underwater research laboratory on a natural coral reef in the Red Sea that could provide a space to trial coral adaptation options.
Called the Coral Probiotics Village (CPV), this facility enables scientists to test and monitor the success of administering probiotics for corals in a real-world reef environment, among other innovative solutions to restore coral reefs. The first results from projects conducted at the CPV show glimmers of hope for future reef conservation efforts.
“Coral reefs are declining at alarming rates, and the mass coral bleaching event in 2024 had devastating effects worldwide,” says Neus Garcias-Bonet at KAUST, who was involved in developing the CPV with colleagues. “The CPV provides the perfect capabilities and close monitoring frameworks to test probiotics and other coral restoration tools under real ocean conditions.”
Coral probiotics boost corals’ own natural symbiotic microbes, or ‘good bacteria’, to help them remain resilient and healthy in the face of warming oceans. The approach has gained traction in recent years and shows promise in laboratory trials; however, testing it on actual ocean reefs has remained challenging.
The team presented the design, establishment and full scientific validation of the CPV in a paper published in Ecology and Evolution in 2025, offering a blueprint for other, similar underwater laboratories to be built across the world[1]. “We believe that the CPV provides a reproducible model for testing of integrated reef restoration,” says Garcias-Bonet.
For the CPV, the researchers designed and built a diverse and continuous surveillance platform capable of tracking underwater conditions and thermal trends across different years, so that probiotics can be administered quickly and effectively. They are also developing integrated underwater sensor networks and AI-assisted reef monitoring, together with autonomous vehicles and other technologies, to gather robust data during all projects conducted at the CPV.
The first projects conducted at the CPV – led by KAUST’s coral probiotics expert Raquel Peixoto – expanded successful initial laboratory trials of coral probiotics onto the reef.
“We were delighted with the successful completion of the first field trials of coral probiotics at the CPV,” says Peixoto. “These trials demonstrated that beneficial microbes can be safely incorporated by corals, improving their health and resilience without causing harm to other reef organisms or the surrounding environment. Remarkably, we also observed that treated corals helped protect nearby reef life, suggesting potential for broader ecosystem-level benefits.”
These results position the CPV as a scientifically robust platform for advancing reef restoration and conservation, notes Peixoto. As the laboratory is clearly mapped and well signposted, with named streets and zones, it also serves as an excellent tool for outreach and education.
“We envision the CPV as a long-term, multi-disciplinary research hub that enables the rigorous testing and refinement of microbial therapies and other advanced technologies for assisted coral reef restoration,” says Garcias-Bonet. “The CPV offers a pathway to accelerate the development, validation, and deployment of interventions at meaningful ecological scales.”
Lithium-ion (Li-ion) batteries have long dominated the market for portable electronics, electric vehicles, and grid energy storage. A new generation of batteries using aqueous electrolytes offers compelling advantages for large-scale applications, including lower cost, improved safety, and environmental sustainability. However, parasitic chemical reactions continue to limit their cycle life.
To better understand these limitations, researchers at KAUST are developing advanced analytical tools to precisely identify the root causes of chemical degradation in aqueous batteries[1].
“Even in established battery chemistry, there are still mysteries to solve. By applying new investigative tools, we can uncover hidden mechanisms that determine battery performance,” says Yunpei Zhu, the lead author of the study. “Understanding the ‘why’ behind these processes lays the foundation to designing cheaper, safer, and longer-lasting batteries.”
A typical rechargeable battery includes a liquid electrolyte into which positively charged ions of a metal, such as lithium, sodium, or zinc, are dissolved. When the battery is charged, the ions capture electrons from the surface of an electrode — a process called reduction — and the metal is deposited onto the electrode in its solid form. During discharging, the reverse chemical reaction — oxidation — returns the metal back into solvated ions in the electrolyte.
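For zinc, the model metal used in this study, these charging and discharging steps correspond to the standard plating and stripping half-reactions, shown schematically below.

```latex
% Schematic half-reactions for a zinc metal anode in an aqueous electrolyte.
% Charging: zinc ions are reduced and plated onto the electrode.
\mathrm{Zn^{2+} + 2\,e^- \longrightarrow Zn_{(s)}}
% Discharging: solid zinc is oxidized back into solvated ions.
\mathrm{Zn_{(s)} \longrightarrow Zn^{2+} + 2\,e^-}
```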
One factor affecting the lifetime of a battery is the shape and texture of the solid metal electrodeposited on the electrode. For example, needle-like structures known as dendrites can grow on the electrode surface. Dendrites reduce the amount of useful metal and can short-circuit the battery, causing failure, overheating, or fire, which represents a major safety concern.
Zhu and his KAUST co-workers investigated these parasitic reactions using a combination of advanced nuclear magnetic resonance, electron microscopy, ultrafast electrochemical experiments, and simulations. Taking zinc as a model metal, they tested five different zinc salts in a water-based, or aqueous, electrolyte: zinc sulfate, zinc perchlorate, zinc chloride, zinc triflate, and zinc bis(trifluoromethanesulfonyl)imide. Each salt uses a different type of negatively charged ion, or anion, in the reaction.
Their analysis indicated that low reversibility in batteries arises from the free water molecules in the aqueous electrolyte, but that these could be reduced by the correct choice of anion. “We used our advanced techniques to watch, at the molecular level, how water molecules behave in battery electrolytes,” explains Zhu. “By comparing different salts, we discovered that certain ions can ‘calm’ the motion of water molecules — and this subtle control greatly improves the performance and lifespan of metal anodes in aqueous batteries.”
Specifically, the sulfate ions performed best: batteries using zinc sulfate achieved very high reversibility and stability, and the zinc anode lasted much longer than those in the other electrolytes.
“This work is representative of our core focus and efforts in innovative energy technologies at KAUST’s Center for Renewable Energy and Storage Technologies (CREST),” says Husam Alshareef, who led the group. “Our next goal is to scale up this chemistry into larger battery systems suitable for grid-scale energy storage — particularly for renewable energy in Saudi Arabia.”
Artificial vision systems that combine image sensing, memory, and processing in one compact platform are a step closer to real-world application following a major advancement led by researchers at KAUST. The team has developed high-performance, light-controlled memory devices, or photonic memristors, that mark a significant step toward energy-efficient, integrated ‘smart vision’ hardware[1].
Memristors exhibit a resistance that varies with applied current flow and retain this resistance even when the current is turned off. Their ability to remember resistance based on past current flow enables memory and computation in one component, which is essential for data storage and neuromorphic computing. They display two distinct brain-like resistive switching modes: non-volatile and volatile modes, mimicking long-term and short-term memory, respectively.
Typically, memristors comprise metal-oxide thin films that respond to electrical stimuli but suffer from several manufacturing challenges and performance limitations. Photonic memristors use light, a low-power, non-destructive, and contactless stimulus, to trigger switching, providing a fast and energy-efficient alternative to conventional devices. Devices containing atomically thin two-dimensional materials, such as hexagonal boron nitride (hBN), feature excellent thermal stability, mechanical flexibility, and transparency; however, they are limited to narrow wavelength ranges and operate in a single mode.
To harness hBN’s exceptional thermal stability and silicon’s light-absorption capabilities, an international team led by Maolin Chen and Xixiang Zhang, together with co-workers from KAUST, created photonic memristors that combine both materials in a layered arrangement.
The researchers produced uniform nanocrystalline hBN films using a low-temperature process called plasma-enhanced chemical vapor deposition to ensure compatibility with existing silicon-based manufacturing. They incorporated the films into memristor arrays on four-inch wafers, demonstrating that the devices can be scaled up toward industrial applications.
“Our memristors enable ‘all-in-one’ vision chips: they include image sensing, data storage, and parallel processing,” Chen says. In addition to high memory stability and durability, the devices exhibit a switching ratio exceeding one billion.
The memristors dynamically change their resistive switching behavior when exposed to different light conditions. They respond to a wide wavelength range from ultraviolet to near-infrared light, indicating compatibility with broadband operation.
They also achieve on-demand reconfigurability between memory modes using light intensity. They do not show any resistive switching in the dark but change from volatile to non-volatile modes when light intensity increases.
“Volatile and non-volatile switching mimics neuroplasticity, such as short-term adaptation versus long-term memory,” Chen says. This multi-mode behavior emulates how human visual neurons respond to stimuli of varying strength, which is crucial for artificial vision systems operating in dynamic environments.
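A toy numerical model, entirely separate from the device physics reported here, can help visualize the difference between the two memory modes: a conductive state that decays once the stimulus is removed behaves like short-term memory, while one that persists behaves like long-term memory.

```python
# Toy model contrasting volatile (short-term) and non-volatile (long-term)
# resistive switching. Parameters are arbitrary and purely illustrative.
import numpy as np

def simulate(retention, steps=60, pulse_end=20):
    """Conductance rises while a light/voltage pulse is on, then relaxes.

    retention close to 1.0 -> non-volatile: the state persists after the pulse.
    retention well below 1.0 -> volatile: the state decays back toward zero.
    """
    g = np.zeros(steps)
    for t in range(1, steps):
        if t < pulse_end:                      # stimulus on: state builds up
            g[t] = g[t - 1] + 0.1 * (1.0 - g[t - 1])
        else:                                  # stimulus off: state relaxes
            g[t] = retention * g[t - 1]
    return g

volatile = simulate(retention=0.85)      # mimics short-term memory
non_volatile = simulate(retention=1.0)   # mimics long-term memory
print(f"Final state: volatile={volatile[-1]:.3f}, "
      f"non-volatile={non_volatile[-1]:.3f}")
```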
The light-induced change arises from interactions between photogenerated electrons from the silicon layer and hydrogen ions migrating within the hBN layer. These interactions create tiny conductive paths, or filaments, which set the resistance state. The filaments form, persist, or vanish depending on how light interacts with the materials.
The researchers discovered that the filaments originate from the ionization of airborne water molecules under applied voltage. The electric fields dissociate the water molecules at the grain boundaries of hBN to yield the migrating hydrogen ions, Chen explains.
The team is working on scaling down devices for higher-density integration, implementing three-dimensional stacking for ultra-compact neuromorphic hardware, and testing in-memory computing for real-time AI vision tasks.
With the decline of coral reefs well documented, there is now evidence of heritable heat tolerance in a common reef-building coral. This finding is helping scientists to understand if, and how, corals may be adapting naturally to rising ocean temperatures driven by global warming.
An international team has demonstrated that heritable genetic variation for heat tolerance is more widespread than previously thought in the reef-building coral Platygyra daedalea[1]. The pressure of more frequent marine heatwaves appears to be enhancing selection for gene variants that help the coral withstand higher temperatures.
“We urgently need to understand whether corals can adapt quickly enough to keep pace with climate change and, if so, how they might do this,” says Manuel Aranda from KAUST, who led the project alongside Emily Howells from Southern Cross University in Australia. “This knowledge is essential to guide conservation strategies and prioritize interventions in coral reefs while there is still time to act.”
P. daedalea is broadly distributed across the Indo-Pacific, including in the Red Sea and Arabian Gulf, which are among the world’s hottest reef environments. This makes it an ideal coral to study both for adaptation to extreme heat and for the potential for gene flow between populations.
In this ambitious project, the team combined large-scale quantitative breeding experiments with genomic analyses across ten populations of P. daedalea collected from six ocean regions.
“The greatest challenge was logistical, conducting controlled coral breeding and heat-stress experiments across multiple locations,” explains Aranda. “This coral species spawns only once a year for just a few nights, so our teams needed to be in the right place at the right time, often under challenging field conditions.”
The proximity of KAUST to the Red Sea allowed the scientists to collect and breed corals from populations known for their thermal tolerance.
“This breadth of sampling allowed us to directly assess heritable variation in heat tolerance at local and global scales,” notes Aranda. “Understanding the limits of coral heat tolerance, and the genetic basis of that tolerance, also tells us whether corals retain sufficient genetic variation to continue adapting in the future.”
The team subjected the coral to water of different temperatures in the lab, and monitored the resulting survival and settlement of coral larvae. They used specific breeding designs that allowed for trait selection, and genomic analyses to disentangle genetic from environmental effects. The results indicate that some coral populations possess heritable variation in heat tolerance, shaped by the history of heatwave exposure in each region.
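As a generic illustration of what ‘heritable variation’ means quantitatively (synthetic numbers and a textbook method, not the study’s analysis), narrow-sense heritability of a trait can be estimated by regressing offspring trait values on mid-parent values.

```python
# Generic illustration of estimating narrow-sense heritability (h^2)
# from a mid-parent/offspring regression, using synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_families, true_h2 = 200, 0.4

# Synthetic heat-tolerance scores (arbitrary units).
midparent = rng.normal(0, 1, n_families)
offspring = true_h2 * midparent + rng.normal(0, np.sqrt(1 - true_h2**2), n_families)

# The slope of offspring values on mid-parent values approximates h^2.
slope = np.polyfit(midparent, offspring, 1)[0]
print(f"Estimated heritability: {slope:.2f}")
```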
According to Aranda, this suggests that corals are already adapting to warming oceans. “However, this adaptive potential is being depleted in the most thermally extreme environments, which may limit future adaptation. This finding is crucial because it highlights both hope and urgency for conservation actions,” he adds.
Aranda urges caution when it comes to engineering corals, because heat tolerance is a finely balanced trait that involves many genes. Conservation strategies should instead focus on preserving genetic diversity in reefs to maintain their evolutionary potential. The team suggests it may one day be possible to boost reef systems with naturally heat-tolerant genotypes.
In the next phase of the project, the researchers will apply high-throughput genotyping (ezRAD sequencing) to link genetic variation with thermal tolerance. This will allow them to determine how heat tolerance genes are passed down through generations, and to identify the genetic markers under selection.
A new machine-learning tool that classifies lab-grown embryo models with exceptional speed and accuracy offers a solution to one of the most pressing problems in developmental biology: how to reliably analyze vast numbers of stem-cell-derived structures known as blastoids, which mimic early human embryos, without relying on slow and subjective human inspection.
The KAUST-developed system, known as deepBlastoid, uses deep learning to sort these structures by morphology in a fraction of the time it would take trained embryologists, with performance that rivals, and in some cases surpasses, expert judgment[1]. In benchmark testing, it proved highly adept at classifying early developmental structures, opening new possibilities for high-throughput profiling and uncovering subtle biological effects that might otherwise be missed.
“AI tools like deepBlastoid could reshape how we study the earliest stages of life,” says Zejun Fan, a Ph.D. student who helped develop the tool. “They enable researchers to run larger, more complex experiments, screen new drugs more efficiently, and study rare developmental events with greater precision. This could accelerate discoveries in infertility treatment, toxicology, and synthetic embryo modeling.”
To build deepBlastoid, the team — led by stem-cell biologist Mo Li and computer scientist Peter Wonka — trained an AI tool to recognize patterns in around 1,800 microscope images of blastoids. Each image had been sorted by experts into one of five categories corresponding to the quality of the blastoids; these ranged from well-formed structures with clear inner cell clusters and fluid-filled cavities, to misshapen ones and empty wells.
The researchers found that the AI learned to match the expert labels with 87 percent accuracy. When the team added a step that sent uncertain cases to human reviewers, the accuracy jumped to 97 percent.
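The deferral step can be pictured with a simple confidence threshold, as in the sketch below; the probabilities are random placeholders rather than deepBlastoid outputs, and the threshold value is arbitrary.

```python
# Sketch of human-in-the-loop classification: low-confidence predictions
# are deferred to expert review. Probabilities here are random placeholders,
# not deepBlastoid outputs.
import numpy as np

rng = np.random.default_rng(7)
n_images, n_classes, threshold = 1000, 5, 0.8

# Stand-in for softmax outputs of an image classifier over 5 quality classes.
logits = rng.normal(0, 2, (n_images, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

confidence = probs.max(axis=1)
auto = confidence >= threshold            # classifier's answer is accepted
deferred = ~auto                          # sent to human reviewers instead

print(f"Auto-classified: {auto.mean():.0%}, deferred to experts: {deferred.mean():.0%}")
```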
The team benchmarked the tool’s performance against three expert annotators in a head-to-head test. In only 20 minutes, the AI processed thousands of images — around 1,000 times faster than human experts — while matching or even surpassing their accuracy.
To showcase its utility, the team applied deepBlastoid to two real-world use cases. First, they exposed blastoids to a gradient of lysophosphatidic acid (LPA), a signaling molecule known to influence early development. The model detected the expected increases in overall cavitation — the formation of fluid-filled cavities — but also revealed a previously overlooked surge in a specific quality class of blastoids at low LPA concentrations.
Second, they examined the effects of dimethyl sulfoxide (DMSO), a common solvent in drug screening. While the overall morphology appeared unaffected, deepBlastoid pinpointed subtle shifts in blastoid class frequencies, hinting at possible developmental impacts even at low doses.
“It’s a powerful assistant that improves overall efficiency and reliability,” says Li. “This opens the door to data-driven insights about how external factors influence embryo-like development.”
To encourage broader adoption, the team has made deepBlastoid freely available and open-source, allowing other labs to retrain the system with their own images or adapt it to different embryo models.
Fan notes that the field of developmental biology is only beginning to tap the full potential of artificial intelligence. He hopes that tools like deepBlastoid, combined with community engagement and standardized imaging protocols, will lower technical barriers and speed up scientific discovery.
“The main hurdles to adoption are integration into existing lab workflows, the need for high-quality training datasets, and ensuring trust and interpretability of AI decisions in sensitive biological contexts,” he says. “Overcoming these challenges will be crucial to ensure responsible and effective deployment of such technologies.”