Energy systems can come under enormous strain from sudden changes in renewable generation, such as when sunlight rapidly increases as clouds pass, or when strong gusts hit a wind farm. A clean energy storage technology that handles these power peaks and troughs with ease, converting renewable electricity into green hydrogen, has been demonstrated by researchers at KAUST[1].
Storing renewable energy as clean hydrogen fuel is a critical element of future energy systems. Green hydrogen is made by using renewable electricity to split water molecules, using a device called an electrolyzer.
Today’s electrolyzers are poorly suited to this task. “Most water-splitting electrolyzers depend on steady electricity from the power grid — but that electricity often comes from fossil fuels, negating hydrogen’s environmental benefit,” says Abdul Malek, a postdoc in the lab of Xu Lu, who led the research.
One electrolyzer component highly vulnerable to sudden power surges is the water-splitting catalyst. Low-cost nickel–iron (NiFe) catalysts work well when the power supply is steady but can degrade rapidly when connected to renewable power sources that keep switching on and off, Malek explains. “Until now, there was no clear way to help such catalysts survive these harsh conditions for long periods,” he adds.
Some previous studies had suggested that adding chromium to NiFe catalysts might improve performance, but other reports concluded that the chromium-modified catalyst quickly broke down during operation. Lu and his team wondered if both findings might be true.
“We suspected that chromium might act like a temporary helper, guiding the catalyst into its most active and stable form when the system first starts running,” Malek says. “This idea was inspired by our earlier research, where we observed that chromium gradually washes out during operation — but instead of harming the catalyst, the process left behind a more porous structure that improved performance.”
The researchers tested the concept by designing a NiFe catalyst that incorporated a sacrificial quantity of chromium. They used a raft of analytical techniques, including X-ray photoelectron spectroscopy and inductively coupled plasma–optical emission spectroscopy, to track the catalyst’s changing composition and structure during use.
The results confirmed that the chromium gradually disappeared during electrolyzer operation, leaving behind an open structure of nickel and iron in a stable oxidized state.
A lab-scale electrolyzer fitted with the new catalyst maintained strong, stable performance over 30 days of fluctuating power. The researchers then teamed up with industry to test the catalyst at scale. “We demonstrated that an eight-cell electrolyzer stack, which delivered 2.5 kW peak power, remained stable over 13 simulated stop-start solar cycles,” Malek says. The device recovered instantly after a sudden power loss test, he adds.
“The challenge is to develop inexpensive electrolyzer systems that can operate stably for thousands of hours under real dynamic conditions,” Lu says. “Our next steps are larger stacks, direct coupling with solar power, and further improved earth-abundant catalysts and system engineering. The goal is practical, renewable-powered green hydrogen production that works outside the lab,” he concludes.
Quantum dot (QD) semiconductor lasers have been shown to operate reliably under strong optical feedback, which results from external light being reflected back from other circuit components[1]. A KAUST-led team says its discovery is the key to simpler and cheaper on-chip integration.
This advance brings these lasers closer to practical use in compact, scalable photonic circuits that enable faster data transfer and processing while using less energy.
Photonic integrated circuits typically use quantum well-based lasers containing III-V-type semiconductor materials like gallium arsenide, which are ideal for long-distance, high-speed data transmission in fiber optic networks. But when incorporated into standard silicon-based circuits, these lasers face specific hurdles. They are highly sensitive to optical feedback, which degrades performance, and can undergo coherence collapse — a chaotic state in which the laser signal becomes unstable and noisy — even under modest feedback levels.
As a result, quantum well-based lasers typically require optical isolators, which allow light transmission in just one direction, or complex engineering to prevent feedback when used on circuits. These protective measures add cost, complexity, and energy consumption.
In contrast, QD lasers are thermally stable, efficient, and resistant to optical feedback thanks to their ability to maintain a consistent, narrow-linewidth signal. This could eliminate the need for optical isolators, simplifying packaging and reducing costs. But can the lasers stay reliable without isolators in real circuits, where reflections can be much stronger?
The research team — led by Yating Wan, with postdoc Ying Shi, and coworkers from KAUST and the University of California, Santa Barbara — has developed a laser setup to establish a realistic and quantitative feedback limit that system designers can rely on.
“We needed to push QD lasers far beyond previously explored regimes and directly observe where they finally become unstable,” Shi says.
The researchers coupled the QD gain medium with a Fabry-Perot cavity, a simple arrangement of mirrors and optical elements that allowed them to isolate the properties that govern feedback tolerance.
“Using this design ensured any improvements in feedback tolerance truly come from the quantum dot material itself, rather than from added cavity engineering,” Shi adds.
The system withstood feedback levels up to −6.7 dB before collapsing, which is tens of decibels better than standard quantum well-based lasers. “This confirmed that QD lasers are not feedback immune, yet they remained remarkably stable just below this limit,” Shi explains.
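For readers unfamiliar with decibel notation, the −6.7 dB threshold can be converted into a linear power ratio with a one-line calculation (an illustrative sketch, not code from the study):

```python
def db_to_ratio(db):
    """Convert a power level in decibels to a linear ratio."""
    return 10 ** (db / 10)

# The reported collapse threshold of -6.7 dB corresponds to roughly
# one fifth of the emitted power being fed back into the laser.
print(f"{db_to_ratio(-6.7):.2f}")  # about 0.21
```

In other words, the QD laser stayed stable until roughly 21 percent of its light was reflected back, whereas quantum well-based lasers destabilize at far weaker reflections.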
Even near the collapse threshold, the laser could transfer data at a sustained maximum speed of 10 gigabits per second without significant performance degradation. It also maintained strong thermal stability, long-term stability, and reproducibility.
The system performed as well as hybrid platforms, which combine two microchips, and outperformed current state-of-the-art devices in feedback tolerance. Modeling revealed that coherence collapse is influenced by the external cavity length and circuit design, providing practical guidance for building photonic circuits that don’t require optical isolators.
“We are extending this work to application-oriented devices, such as narrow-linewidth and mode-locked quantum dot lasers,” Wan says. The team ultimately aims to develop robust, energy-efficient, and fully isolator-free circuits for emerging applications, such as LiDAR and optical computing.
Coral reefs differ widely in the types of corals they host, how their populations are structured, and the extent of coral cover. These differences are influenced by environmental and biological factors, from local conditions to regional climate patterns. Now, KAUST researchers have determined baselines in spatial variability for eight reefs in the northeastern and central Red Sea, providing vital information for future management and conservation efforts[1].
“This study was conducted at a critical moment for coral reefs,” says Chiara Pisapia, who worked on the study with colleagues Eslam Osman and Maggie Johnson.
“Data was collected from November 2023 to January 2024, between two back-to-back mass bleaching events,” continues Pisapia. “These timely datasets will help scientists assess how much change, loss, or recovery follows subsequent bleaching events. This will improve understanding of the long-term consequences of climate stress on coral reefs.”
Latitudinal differences in temperature and salinity, together with factors such as nutrient availability, light levels, water quality and fishing pressures, all combine to determine the success of coral communities in different regions.
The team investigated spatial differences in coral cover, taxonomic composition and demographic structure in three common reef-building coral genera: Acropora, massive Porites and Pocillopora. They surveyed five reefs in the northeastern Red Sea, which is cooler and less saline than southern and central regions, and compared the data to those gathered at three reefs in the central Red Sea.
“This study goes beyond traditional reef surveys that focus mainly on coral cover. It’s one of the first studies in the Red Sea to directly compare coral population demography, including both adults and juveniles, across different reef habitats and large spatial gradients,” says Pisapia. “This meant we could identify how well populations are structured to recover in the future, which is essential for understanding resilience.”
Their findings showed pronounced spatial differences, not just between latitudes but also between reef habitats (the reef crest and reef slope). Specifically, northern reefs had high live coral cover – almost double that of the central reefs at the time the surveys were conducted.
“Coral populations are not uniform, and there were substantial differences in coral assemblages across regions and habitats in these reefs,” says Pisapia. “This highlights the sensitivity of coral populations to environmental gradients and disturbance history. It is vital that scientists include spatial context when evaluating reef condition.”
The team’s discovery highlights the need for conservation and management strategies that are specifically tailored to each habitat type and region.
“Some reefs, or specific reef habitats, may be better positioned to recover from climate disturbances than others,” says Johnson. “This information can help managers prioritize protection, restoration, and monitoring efforts where they are likely to be most effective. It’s only with pivotal data like those collected by Pisapia that we can evaluate the true effect of environmental change on Saudi Arabia’s coral reefs.”
Pisapia plans to revisit these sites to track how coral populations change through time as disturbances continue to intensify, and assess which reefs are most likely to persist.
The study was supported by an Ocean Science and Solutions Applied Research Institute (OSSARI) grant awarded to Pisapia and Osman, and a National Geographic Society Award granted to Osman.
Increasing AI’s ability to tackle complex challenges with greater accuracy and energy efficiency is not simply a matter of adding more computing power. Subtle details in the networking of the computing elements can have a significant impact on AI performance.
The architecture, or wiring, of the computing elements in an AI is inspired by how neurons form circuits to process information and learn. However, a key aspect of neural network structure has so far been overlooked in AI design, as a KAUST-led team has shown[1][2].
Jesper Tegnér and his team at KAUST – in collaboration with an international team including the AI technology company NVIDIA – made the discovery while examining the network architecture of an AI solving a balance task. “We focused on ‘network motifs’, which often form the fundamental building blocks of large, complex networks,” explains Haoling Zhang, a Ph.D. student in Tegnér’s lab.
Network motifs combine to form complete, complex networks, just as words come together to create language. “Network motifs have been widely studied in biology and social science, but surprisingly little in the AI literature,” Zhang says.
The concept had been so overlooked in the AI community that no methods existed to study it. “To make progress, we had to build from scratch our own analytical methods, combining neuroevolution with machine learning,” Zhang adds.
The team focused on some of the simplest possible network motifs, consisting of three nodes – two inputs and one output – connected to form a triangle. Depending on how they are wired and how information flows through them, different types of three-node network motifs can form. Two of the most important are known as ‘coherent’ and ‘incoherent’ loops. “The abundance of incoherent motifs in natural systems, where both activation and repression occur on the same node in small circuits, has remained intriguing,” Tegnér explains.
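The sign-based distinction between the two loop types can be sketched in a few lines, following the standard feed-forward-loop classification from systems biology (an illustrative example; the networks analyzed in the study are far larger):

```python
def classify_ffl(sign_ab, sign_bc, sign_ac):
    """Classify a three-node feed-forward loop (A->B, B->C, A->C).

    The loop is coherent when the indirect path A->B->C pushes the
    output node C in the same direction as the direct path A->C, and
    incoherent when the two paths disagree -- that is, when activation
    and repression meet on the same node.
    """
    indirect = sign_ab * sign_bc
    return "coherent" if indirect == sign_ac else "incoherent"

print(classify_ffl(+1, +1, +1))  # coherent: both paths activate C
print(classify_ffl(+1, -1, +1))  # incoherent: B represses C while A activates it
```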
“Initially, we didn’t suspect that these two loop types would have a significant impact on AI performance,” Zhang says. But, as the team progressed from testing individual network motifs to assessing more complex networks in various tasks, a consistent and notable difference emerged between the two loop types.
Neural networks dominated by coherent loops tended to quickly focus on feature-rich ‘high-gradient’ patterns encoded in the learning dataset. In contrast, networks dominated by incoherent loops explored patterns without any early preference for specific features.
In real-world situations, random or irrelevant noise in datasets is almost impossible to eliminate. During the early training phase of machine learning, noise can easily mislead a neural network because it cannot yet distinguish meaningful from non-meaningful signals. By rapidly homing in on high-gradient regions of the dataset, coherent loops were more susceptible to being distracted by noise. “This could slow learning and distort what is learned,” explains Zhang.
Networks built from incoherent loops, in contrast, learned in a richer, more balanced way. “In noisy datasets, networks with more incoherent loops stayed noticeably more stable and got less confused,” Zhang says.
“We found that these biological connectivity patterns had functional significance in an AI system,” Tegnér says. The team is now exploring various nature-inspired ways to develop high-performing, creative, energy-efficient next-generation AI systems.
Hydrogen is a clean-burning gas that could help to tackle climate change by reducing our dependence on fossil fuels. But storing and transporting hydrogen is expensive and technically challenging, typically requiring high-pressure gas tanks or cryogenic systems that operate at very cold temperatures.
One promising alternative involves incorporating hydrogen into carbon-based molecules known as Liquid Organic Hydrogen Carriers (LOHCs), which are safer and easier to handle than the gas itself. KAUST researchers have shown that certain LOHCs could reliably store hydrogen underground in depleted oil fields, and then help to recover residual oil from those reservoirs[1].
“Together, these advantages make LOHCs a compelling alternative to conventional hydrogen storage technologies,” says Hussein Hoteit, who led the research team.
LOHC systems use a catalyst to chemically combine hydrogen with a liquid organic molecule, forming a hydrogenated liquid that can be stored or transported like a conventional fuel. A second catalytic reaction is subsequently used to release the hydrogen and regenerate the initial carrier molecule.
Crucially, LOHCs can be handled using existing petrochemical infrastructure, such as pipelines, tankers, and large-scale storage facilities. “This significantly reduces the cost and complexity of building new hydrogen-specific infrastructure, which is one of the major barriers to widespread hydrogen deployment,” says Zeeshan Tariq, a member of the team.
The researchers simulated how two different LOHC systems would perform in a depleted sandstone reservoir at a depth of about 2,200 meters, typical of oil fields in Saudi Arabia. Their calculations included a wide range of factors, including the viscosity, stability, and hydrogen-storage capacity of the LOHC molecules.
In the first system, hydrogen is combined with toluene at the surface to produce methylcyclohexane. Both molecules are stable, widely available, and already used in above-ground LOHC facilities. Toluene stores about 6.2 percent of its weight in hydrogen, while methylcyclohexane has a low viscosity that enables it to flow easily underground.
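The quoted storage capacity follows directly from the reaction stoichiometry, toluene (C7H8) + 3 H2 → methylcyclohexane (C7H14). A quick check, expressed as is conventional per mass of the hydrogenated carrier, reproduces the figure:

```python
M_H, M_C = 1.008, 12.011    # atomic masses, g/mol

m_h2  = 3 * 2 * M_H         # three H2 molecules absorbed per toluene
m_mch = 7 * M_C + 14 * M_H  # molar mass of methylcyclohexane
wt_pct = 100 * m_h2 / m_mch
print(f"{wt_pct:.1f} wt% hydrogen")  # prints "6.2 wt% hydrogen"
```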
In one simulation, methylcyclohexane was injected into the reservoir for five months, left for two months, and then extracted over five months. The yearlong cycle was repeated 15 times. Calculations suggest that about three-quarters of the methylcyclohexane could be recovered after each cycle. By the end of the simulation, more than half of the residual oil trapped in the field had also been recovered. This additional oil would offset storage costs, and the researchers estimate that the whole project would generate $70 million more in value than it consumed.
The second LOHC system could store more hydrogen per molecule, but its higher viscosity caused greater resistance during injection and extraction, leading to much poorer performance.
Although recovering residual oil would ultimately lead to downstream CO2 emissions, these would be small compared with the climate benefits offered by large-scale hydrogen use. “Carrier-based storage does not undermine climate goals,” says Hoteit. “Instead, it helps make hydrogen storage deployable at scale today, using existing assets, while supporting a gradual and economically viable transition to a low-carbon energy system.”
The team now plans to extend their study to multi-well reservoir systems, in which several injection and production wells operate simultaneously across a depleted oil field.
Technologies capable of generating freshwater efficiently and cost-effectively are critical for reaching sustainability goals, particularly in arid regions such as the Middle East. Researchers have now developed a polymeric membrane that can desalinate seawater and brines at ambient temperature and pressure[1]. The work was carried out by an international research team led by scientists at King Abdullah University of Science and Technology (KAUST).
“Water scarcity is severe in Saudi Arabia and is reaching unprecedented levels in countries once thought to be safe from such pressures,” says Noreddine Ghaffour, who led the research. “We urgently need to produce freshwater from seawater and brines at any scale, efficiently and cost-effectively, while conserving energy.”
Conventional membrane-based technologies such as reverse osmosis are most cost-effective at very large scales and depend on sophisticated energy-recovery systems. Even under these conditions, treating highly concentrated brines remains difficult because of the extreme pressures required. Membrane distillation offers an alternative approach, but it typically relies on elevated temperatures to vaporize water before it passes through a membrane and condenses as freshwater.
The membrane developed by Ghaffour’s team consists of an ultrathin polymeric film supported by a porous substrate and is designed for membrane distillation at low temperatures. The film contains sub-nanometer-sized pores and has a highly water-repellent, or superhydrophobic, surface, allowing the process to operate under ambient pressure.
In the system, warm saline water at approximately 25 °C flows along one side of the membrane, while cooler water at 20 °C flows along the other. This small temperature difference creates a natural driving force that pulls only water vapor across the membrane, where it condenses as pure water, leaving salt and other contaminants behind.
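The driving force comes from the difference in water's saturation vapor pressure across the membrane. A rough estimate using the textbook Antoine correlation for water (not a calculation from the study) shows that even a 5-degree gap is significant:

```python
def p_sat_mmhg(t_c):
    """Saturation vapor pressure of water (mmHg) from the Antoine
    equation; these constants are valid from roughly 1 to 100 degC."""
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_c))

warm, cool = p_sat_mmhg(25.0), p_sat_mmhg(20.0)
# Roughly 23.7 mmHg vs 17.5 mmHg: a ~6 mmHg gap pulls vapor
# from the warm side to the cool side of the membrane.
print(f"{warm:.1f} mmHg vs {cool:.1f} mmHg")
```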
“What distinguishes our membrane is its ultrathin separating layer, only a fraction of a micrometer thick, combined with a highly water-repellent surface,” says project team member Mohamed Obaid Awad. “This superhydrophobicity is crucial because it prevents liquid seawater from entering and flooding the membrane pores.”
At the nanoscale, the membrane remains ‘air-filled’, ensuring that only water vapor can pass through. Water does not need to boil to evaporate; even at room temperature some water molecules naturally escape into the vapor phase. Here, the extremely small pores enhance this effect by increasing local vapor pressure, facilitating evaporation.
The membrane also demonstrates high rejection of salt and acts as a total barrier to boron and other contaminants, preventing dissolved ions from entering the vapor pathway and improving performance in realistic desalination conditions.
“Salt ions and other dissolved species cannot evaporate under these conditions and are therefore excluded,” says Sofiane Soukane, a member of the research team. “Also, the membrane material is resistant to chlorine-based oxidants, which improves durability and long-term stability.”
Moving beyond the laboratory, the team is currently testing the technology in a pilot plant at KAUST. “Lessons from the pilot study will guide how we scale up membrane production,” says Ghaffour. “We have several industrial partners keen to get involved.”
Smart biomedical devices are transforming modern healthcare, using skin-mounted sensors to capture in-depth health information directly from the body. As clinicians increasingly use biosensing devices to guide patient care, accurate and reliable signal acquisition is critical.
A new system that can rapidly detect when the electrodes of devices such as heart monitors start to detach from the skin has been developed by a team at KAUST[1]. Unlike indirect electrode monitoring techniques, the new system directly measures electrode integrity by evaluating digital signal quality between electrodes.
“Traditional methods for checking whether medical electrodes are properly attached, based on impedance or indirect monitoring, were developed many years ago and assume relatively stable conditions,” explains Rajat Kumar, a student in the lab of Ahmed Eltawil, who led the research.
But in real life, as people move and sweat, electrodes can partially loosen or intermittently lose skin contact, which traditional indirect monitoring methods can struggle to detect.
“This is especially problematic for home-based wearable medical devices, where poor electrode contact may go unnoticed for long periods, leading to inaccurate data being recorded and relied upon,” says Abdelhay Ali, a postdoc in Eltawil’s group.
To develop smarter electrode connection monitoring, the team rethought the role of the body itself, Eltawil says. “Instead of treating the body as something that interferes with measurements, we considered whether it could be part of the solution.”
Tiny electrical signals can safely pass through the body, previous research has shown. “We realized that if electrodes could exchange digital signals through the body, then the quality of that communication would directly reflect how well the electrodes were attached,” Kumar says.
The team tested the concept by building a system around a custom chip designed and developed at KAUST. The chip sends and receives tiny digital signals between electrodes placed across the body. A small processing unit then analyses how well each signal is received. “Clear signals indicate good electrode skin contact; small errors indicate weakening contact; and missing signals indicate disconnection,” Ali explains.
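The three-way decision Ali describes can be sketched as a simple classifier over the received digital signal (the threshold below is a hypothetical placeholder for illustration, not a value from the KAUST chip):

```python
def contact_state(bit_error_rate, signal_received=True):
    """Map received-signal quality to an electrode contact state.

    Clear signals -> good contact; small errors -> weakening contact;
    missing signals -> disconnected. The 1% error threshold is a
    made-up placeholder, not a parameter from the study.
    """
    if not signal_received:
        return "disconnected"
    if bit_error_rate < 0.01:
        return "good contact"
    return "weakening contact"

print(contact_state(0.001))                       # good contact
print(contact_state(0.05))                        # weakening contact
print(contact_state(0.0, signal_received=False))  # disconnected
```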
A final electronic component manages the electrode checking sequence, ensuring the system can automatically monitor multiple electrodes without interrupting medical measurements.
The team tested the system using electrode pairs attached to human skin, and showed it could clearly differentiate electrodes that were firmly attached, partially loose, intermittently losing contact, or completely disconnected. “Importantly, the system detected the early signs of contact degradation that traditional methods often miss,” Kumar says.
“The system’s very low power consumption should enable practical integration with wearable medical devices that need to run continuously for long periods,” adds Ali. “These components form a compact and efficient solution that can be added to existing medical devices with minimal changes.”
The team now plans to turn their laboratory prototype into a fully integrated, single-chip system that can monitor many electrodes simultaneously, for use in clinical devices such as multi-lead heart monitors.
“Ultimately, our goal is to translate this KAUST-developed technology into practical medical devices that are more reliable, more trustworthy, and better suited for continuous health monitoring in the clinic and at home,” Eltawil says.
Wind and solar energy promise to make the electricity grid greener by delivering renewable energy at scale. But to smooth out seasonal renewable energy fluctuations and decarbonize parts of the global energy and transport system that are difficult or impossible to electrify, we will need clean-burning, carbon-free fuels produced from renewable sources.
Ammonia (NH3) is a carbon-free molecule that can serve directly as a fuel, can be produced renewably at scale, and can be stored and transported easily. However, in its pure form, ammonia has a low burn rate and is difficult to ignite. To address this, ammonia can be ‘cracked’ — heated to high temperatures over a catalyst — to partially break it down into a mixture of ammonia, hydrogen, and nitrogen, which increases the fuel’s reactivity and improves flame stability.
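The composition of the resulting fuel follows from the cracking stoichiometry, 2 NH3 → N2 + 3 H2. A small sketch (illustrative only) gives the mole fractions for any cracking fraction:

```python
def cracked_mix(x):
    """Mole fractions in partially cracked ammonia for a cracking
    fraction x (0 = pure NH3, 1 = fully cracked), via 2 NH3 -> N2 + 3 H2."""
    nh3, n2, h2 = 1 - x, x / 2, 3 * x / 2
    total = nh3 + n2 + h2  # equals 1 + x: cracking increases the mole count
    return {"NH3": nh3 / total, "N2": n2 / total, "H2": h2 / total}

# Cracking 30% of the ammonia already makes hydrogen ~35% of the mixture.
print({k: round(v, 3) for k, v in cracked_mix(0.3).items()})
```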
“Partially cracked ammonia is a realistic and promising fuel for future clean power and propulsion systems,” says Suliman Abdelwahid, postdoctoral researcher in Hong Im’s lab. “Understanding its combustion behavior under realistic conditions is essential for its safe and efficient deployment.”
The components of partially cracked ammonia have widely varying physical and combustion properties. The fuel burns in complex, turbulent flows that can produce toxic emissions of unburnt ammonia and NOx. “Using high-fidelity computer simulations, we aim to identify optimal combustion configurations and conditions to ensure complete combustion of ammonia with minimal NOx emissions,” Im says.
Laboratory experiments can provide essential data about the fuel’s characteristics, but they are expensive to run and reveal little about the inner structure of a turbulent flame. “Computational modeling can provide critical information that enables researchers to study flame behavior in detail, safely testing operating conditions on a computer,” says Junjun Guo, a research scientist on the team. “Once these models are validated against experiments, they can help engineers to design complex combustion systems that optimize performance, while reducing the need for costly physical testing,” he adds.
Fully modeling the complex chemistry of partially cracked ammonia combustion is computationally prohibitive, making it a major challenge to simulate the process efficiently and accurately. “So, instead of trying to track dozens of chemicals, we focused on a few key variables that capture the main behavior of the flame,” Abdelwahid says.
The team used AI to assist with the task[1]. “We trained neural networks on high-quality flame data, to accurately reconstruct temperature and species mass fractions,” Guo says. “The model accounts for differences in how fuel molecules move and mix.”
Whereas detailed chemistry simulations might have taken a month to complete, the team’s model produced results in about half a day, achieving a major efficiency gain without sacrificing accuracy. “By reliably predicting flame structure, mixing, stability, and extinction, our model will help designers to optimize burners and operating conditions, supporting the development of safer, cleaner, and more efficient ammonia-based combustion technologies,” Abdelwahid explains.
Combining their advanced model’s output with a few key physical experiments, the team aims to collect enough data to build a virtual ‘digital twin’ combustion system, from which they can design and optimize a real-world system for partially cracked ammonia combustion.
Chemical genomic screening provides a powerful way to pinpoint which genes help an organism to survive (or falter) under a particular stressor, and to identify the functional roles played by these genes. For example, such screening tests can assess microbes for resistance to antibiotics, or monitor their responses to new drug candidates.
However, progress has been slowed by the lack of a standardized protocol for conducting these valuable tests. This gap has now been addressed by a team at KAUST, in collaboration with scientists at the Universities of Birmingham and Newcastle in the United Kingdom[1]. The collaboration combined established practices from multiple laboratories to create a coherent and broadly applicable screening workflow.
“Chemical genomic screening has traditionally been performed using lab-specific, ad hoc methods,” says Georgia Williams, who worked on the project under the supervision of Danesh Moradigaravand. “Differences in equipment, organisms, experimental design and data analysis meant that no single protocol captured the full process from start to finish. As a result, experiments have been difficult to reproduce across institutions,” she explains.
Chemical genomic screening is used to assess the effect of chemical or environmental stressors on single-gene mutant libraries and can also be adapted for use with clinical strain collections from hospitals. The different gene mutants respond in different ways to stressors, resulting in variations in colony size, biofilm formation, and structural composition. The tests allow scientists to study isolated ‘phenotypes’ of a gene responding to specific conditions and determine the functional links between a stressor and that gene.
The research team focused on arrayed library-based screening, where individual gene mutants are placed in different wells in an array, then monitored for their responses to stressors. Their new protocol is fully integrated and covers every step of the process, from experimental setup to data analysis. It combines automated imaging, standardized data handling and built-in quality control with clear troubleshooting guidance for users.
For individual scientists, this protocol will save time, reduce error in the lab, and produce consistent, high-quality data. For the wider scientific community, the standardization of workflows enables data comparison and integration across laboratories, making large-scale datasets more reliable and reusable.
“Our workflow allows researchers to focus on biological questions rather than dealing with technical setup and troubleshooting,” says Williams. “Shared workflows also promote collaboration by allowing researchers from different institutions to work within a common technical framework, which can speed up discovery in microbial genetics and systems biology.”
The workflow can be used for instances when researchers need to assess how genetic changes affect microbial fitness under stress. For example, it can accelerate antibiotic discovery by identifying genes involved in drug sensitivity or resistance. It can also support synthetic biology by enabling scientists to rapidly compare engineered strains under industrial or environmental stress conditions. In environmental microbiology, the workflow can be used to screen microbes for tolerance to toxins or pollutants.
“We plan to expand the workflow to assess additional microbial species and stress conditions,” concludes Moradigaravand. “This will include many antibiotic-resistant bacterial strains that are on the WHO priority pathogen list, for which new drugs are urgently needed.”
All plants are holobionts: they survive and thrive thanks to complex interactions with their associated microbial communities, or ‘microbiomes’. KAUST researchers have shown for the first time that a specific gene in the host genome shapes the seed microbiome of Arabidopsis thaliana plants, so that they can grow in low pH, iron-rich soils[1].
“There have been many studies on microbiomes in different parts of plants – in the roots, leaves and fruits – but little is known about the seed microbiome, which is where each plant begins its life,” says postdoc Sabiha Parween, who worked on the project under the supervision of Heribert Hirt. “We wanted to understand if the host genome inside the plant helps to shape the composition of the seed microbiome, and if so, how it does this.”
Arabidopsis thaliana has been a model organism in plant science for decades. The team had access to 250 naturally occurring accessions, or varieties, of the plant collected from different locations across the globe. They investigated the diversity of seed microbiomes in these different accessions.
“Our previous seed microbiome study into millet showed that differences in seed microbiomes reflected varying lifestyles of accessions depending on geographical, environmental and soil particularities,” says Hirt. “However, we lacked a causal genetic proof for this conclusion. Now, we’ve found similar variations in Arabidopsis, and this time we’ve also found a causal link between the host genome and the seed microbiome.”
All 250 Arabidopsis accessions had already been genetically sequenced, enabling the team to conduct a genome-wide association study (GWAS) to search for the genes responsible for specific microbiome structures in particular accessions. Through GWAS, the researchers proved that certain microbial networks co-evolve alongside the Arabidopsis genome, helping the plants to adapt and thrive on particular soils.
“In Arabidopsis thaliana, host genetic variations across accessions associate directly with seed microbiome variations,” says Parween. “Also, the relative abundance of different microbial groups in the microbiome varied from genotype to genotype, and can be influenced by external factors, such as climatic conditions.”
Digging deeper, the team identified a gene in Arabidopsis – one that encodes the RNA-binding protein RBP47B – that actively shapes the seed microbiome. This genetic trait enables the plant to thrive in low pH soils with a high iron content, typical of northern latitudes.
“Too much iron in soils causes reactive oxygen species to accumulate, which damages plant DNA and growth,” says Hirt. “The gene we’ve pinpointed is responsible for enabling plant growth under these particular conditions, and we believe there is a co-evolution taking place. The host genome ensures that the plant recruits and passes on the right microbes to support the growth of future generations under low pH, high iron conditions.”
This gene is also found in many crop plants and has previously been identified as a marker of stress, though not specifically for iron toxicity. The researchers’ findings could provide actionable insights into crop development for iron-rich soils.
“If you’re working in plant genomics from now on, you can’t forget about the microbes,” concludes Hirt. “Plant genomics has to become holobiont genomics, and shaping seed microbiomes holds potential to improve crop growth.”