Coral reefs differ widely in the types of corals they host, how their populations are structured, and the extent of coral cover. These differences are influenced by environmental and biological factors, from local conditions to regional climate patterns. Now, KAUST researchers have established baselines of spatial variability for eight reefs in the northeastern and central Red Sea, providing vital information for future management and conservation efforts[1].
“This study was conducted at a critical moment for coral reefs,” says Chiara Pisapia, who worked on the study with colleagues Eslam Osman and Maggie Johnson.
“Data was collected from November 2023 to January 2024, between two back-to-back mass bleaching events,” continues Pisapia. “These timely datasets will help scientists assess how much change, loss, or recovery follows subsequent bleaching events. This will improve understanding of the long-term consequences of climate stress on coral reefs.”
Latitudinal differences in temperature and salinity, together with factors such as nutrient availability, light levels, water quality and fishing pressures, all combine to determine the success of coral communities in different regions.
The team investigated spatial differences in coral cover, taxonomic composition and demographic structure in three common reef-building coral genera: Acropora, massive Porites and Pocillopora. They surveyed five reefs in the northeastern Red Sea, which is cooler and less saline than southern and central regions, and compared the data to those gathered at three reefs in the central Red Sea.
“This study goes beyond traditional reef surveys that focus mainly on coral cover. It’s one of the first studies in the Red Sea to directly compare coral population demography, including both adults and juveniles, across different reef habitats and large spatial gradients,” says Pisapia. “This meant we could identify how well populations are structured to recover in the future, which is essential for understanding resilience.”
Their findings showed pronounced spatial differences, not just between latitudes but also between reef habitats (the reef crest and reef slope). Specifically, northern reefs had high live coral cover at the time the surveys were conducted – almost double that of the central reefs.
“Coral populations are not uniform, and there were substantial differences in coral assemblages across regions and habitats in these reefs,” says Pisapia. “This highlights the sensitivity of coral populations to environmental gradients and disturbance history. It is vital that scientists include spatial context when evaluating reef condition.”
The team’s discovery highlights the need for conservation and management strategies that are specifically tailored to each habitat type and region.
“Some reefs, or specific reef habitats, may be better positioned to recover from climate disturbances than others,” says Johnson. “This information can help managers prioritize protection, restoration, and monitoring efforts where they are likely to be most effective. It’s only with pivotal data like those collected by Pisapia that we can evaluate the true effect of environmental change on Saudi Arabia’s coral reefs.”
Pisapia plans to revisit these sites to track how coral populations change through time as disturbances continue to intensify, and assess which reefs are most likely to persist.
The study was supported by the Ocean Science and Solutions Applied Research Institute (OSSARI) awarded to Pisapia and Osman, and a National Geographic Society Award granted to Osman.
Increasing AI’s ability to tackle complex challenges with greater accuracy and energy efficiency is not simply a matter of adding more computing power. Subtle details in the networking of the computing elements can have a significant impact on AI performance.
The architecture, or wiring, of the computing elements in an AI is inspired by how neurons form circuits to process information and learn. However, a key aspect of neural network structure has so far been overlooked in AI design, as a KAUST-led team has shown[1][2].
Jesper Tegnér and his team at KAUST – in collaboration with an international team including the AI technology company NVIDIA – made the discovery while examining the network architecture of an AI solving a balance task. “We focused on ‘network motifs’, which often form the fundamental building blocks of large, complex networks,” explains Haoling Zhang, a Ph.D. student in Tegnér’s lab.
Network motifs combine to form complete, complex networks, just as words come together to create language. “Network motifs have been widely studied in biology and social science, but surprisingly little in the AI literature,” Zhang says.
The concept had been so overlooked in the AI community that no methods existed to study it. “To make progress, we had to build our own analytical methods from scratch, combining neuroevolution with machine learning,” Zhang adds.
The team focused on some of the simplest possible network motifs, consisting of three nodes – two inputs and one output – connected to form a triangle. Depending on how they are wired and how information flows through them, different types of three-node network motifs can form. Two of the most important are known as ‘coherent’ and ‘incoherent’ loops. “The abundance of incoherent motifs in natural systems, where both activation and repression occur on the same node in small circuits, has remained intriguing,” Tegnér explains.
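The coherent/incoherent distinction can be stated as a simple sign rule: a three-node loop is coherent when the indirect path through the middle node pushes the output in the same direction as the direct edge. A minimal sketch, with hypothetical function and argument names not drawn from the study's code:

```python
# Illustrative sketch: classifying a three-node feed-forward loop as
# coherent or incoherent from the signs of its edges. Signs are +1
# (activation) or -1 (repression); names are hypothetical.

def classify_loop(x_to_y: int, y_to_z: int, x_to_z: int) -> str:
    """Coherent when the indirect path X->Y->Z carries the same
    overall sign as the direct edge X->Z; incoherent otherwise."""
    indirect = x_to_y * y_to_z
    return "coherent" if indirect == x_to_z else "incoherent"

# All-activating loop: both paths push Z the same way.
print(classify_loop(+1, +1, +1))  # coherent
# Activation via Y but direct repression of Z: the paths conflict.
print(classify_loop(+1, +1, -1))  # incoherent
```

Under this rule, a node that is simultaneously activated along one path and repressed along the other — the pattern Tegnér highlights in natural systems — is exactly the incoherent case.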
“Initially, we didn’t suspect that these two loop types would have a significant impact on AI performance,” Zhang says. But, as the team progressed from testing individual network motifs to assessing more complex networks in various tasks, a consistent and notable difference emerged between the two loop types.
Neural networks dominated by coherent loops tended to quickly focus on feature-rich ‘high-gradient’ patterns encoded in the learning dataset. In contrast, networks dominated by incoherent loops explored patterns without any early preference for specific features.
In real-world situations, random or irrelevant noise in datasets is almost impossible to eliminate. During the early training phase of machine learning, noise can easily mislead a neural network because it cannot yet distinguish meaningful from non-meaningful signals. Because they rapidly home in on high-gradient regions of the dataset, networks dominated by coherent loops were more susceptible to being distracted by noise. “This could slow learning and distort what is learned,” explains Zhang.
Networks built from incoherent loops, in contrast, learned in a richer, more balanced way. “In noisy datasets, networks with more incoherent loops stayed noticeably more stable and got less confused,” Zhang says.
“We found that these biological connectivity patterns had functional significance in an AI system,” Tegnér says. The team is now exploring various nature-inspired ways to develop high-performing, creative, energy-efficient next-generation AI systems.
Hydrogen is a clean-burning gas that could help to tackle climate change by reducing our dependence on fossil fuels. But storing and transporting hydrogen is expensive and technically challenging, typically requiring high-pressure gas tanks or cryogenic systems that operate at very cold temperatures.
One promising alternative involves incorporating hydrogen into carbon-based molecules known as Liquid Organic Hydrogen Carriers (LOHCs), which are safer and easier to handle than the gas itself. KAUST researchers have shown that certain LOHCs could reliably store hydrogen underground in depleted oil fields, and then help to recover residual oil from those reservoirs[1].
“Together, these advantages make LOHCs a compelling alternative to conventional hydrogen storage technologies,” says Hussein Hoteit, who led the research team.
LOHC systems use a catalyst to chemically combine hydrogen with a liquid organic molecule, forming a hydrogenated liquid that can be stored or transported like a conventional fuel. A second catalytic reaction is subsequently used to release the hydrogen and regenerate the initial carrier molecule.
Crucially, LOHCs can be handled using existing petrochemical infrastructure, such as pipelines, tankers, and large-scale storage facilities. “This significantly reduces the cost and complexity of building new hydrogen-specific infrastructure, which is one of the major barriers to widespread hydrogen deployment,” says Zeeshan Tariq, a member of the team.
The researchers simulated how two different LOHC systems would perform in a depleted sandstone reservoir at a depth of about 2,200 meters, typical of oil fields in Saudi Arabia. Their calculations accounted for a wide range of factors, including the viscosity, stability, and hydrogen-storage capacity of the LOHC molecules.
In the first system, hydrogen is combined with toluene at the surface to produce methylcyclohexane. Both molecules are stable, widely available, and already used in above-ground LOHC facilities. Methylcyclohexane carries about 6.2 percent of its weight as releasable hydrogen, and its low viscosity enables it to flow easily underground.
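The quoted capacity can be sanity-checked from molar masses: hydrogenating toluene (C7H8) adds three H2 molecules to give methylcyclohexane (C7H14). A back-of-envelope calculation using standard atomic weights, not figures from the study:

```python
# Gravimetric hydrogen capacity of the toluene/methylcyclohexane pair,
# estimated from standard molar masses (g/mol).
M_H2 = 2.016
M_TOLUENE = 92.14              # C7H8
M_MCH = M_TOLUENE + 3 * M_H2   # C7H14 = toluene + 3 H2

wt_pct = 100 * 3 * M_H2 / M_MCH
print(f"hydrogen content of methylcyclohexane: {wt_pct:.1f} wt%")  # ~6.2
```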
In one simulation, methylcyclohexane was injected into the reservoir for five months, left for two months, and then extracted over five months. The yearlong cycle was repeated 15 times. Calculations suggest that about three-quarters of the methylcyclohexane could be recovered after each cycle. By the end of the simulation, more than half of the residual oil trapped in the field had also been recovered. This additional oil would offset storage costs, and the researchers estimate that the whole project would generate $70 million more in value than it consumed.
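As an illustration of the cycle arithmetic described above, a toy material balance (normalized volumes; not the study's reservoir-simulation outputs) shows how the unrecovered quarter of carrier accumulates over the 15 yearlong cycles:

```python
# Illustrative material balance for the repeated storage cycle:
# inject a fixed carrier volume, recover ~75% of it, repeat 15 times.
# Numbers are placeholders for the pattern, not simulation results.
INJECTED_PER_CYCLE = 1.0   # normalized volume of methylcyclohexane
RECOVERY_FRACTION = 0.75   # ~three-quarters recovered each cycle
N_CYCLES = 15

lost = 0.0
for cycle in range(N_CYCLES):
    lost += INJECTED_PER_CYCLE * (1 - RECOVERY_FRACTION)

print(f"carrier left underground after {N_CYCLES} cycles: "
      f"{lost:.2f} injection volumes")
```

In the study's economics, this retained carrier is weighed against the value of the additional oil displaced from the reservoir.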
The second LOHC system could store more hydrogen per molecule, but its higher viscosity caused greater resistance during injection and extraction, leading to much poorer performance.
Although recovering residual oil would ultimately lead to downstream CO2 emissions, these would be small compared with the climate benefits offered by large-scale hydrogen use. “Carrier-based storage does not undermine climate goals,” says Hoteit. “Instead, it helps make hydrogen storage deployable at scale today, using existing assets, while supporting a gradual and economically viable transition to a low-carbon energy system.”
The team now plans to extend their study to multi-well reservoir systems, in which several injection and production wells operate simultaneously across a depleted oil field.
Technologies capable of generating freshwater efficiently and cost-effectively are critical for reaching sustainability goals, particularly in arid regions such as the Middle East. Researchers have now developed a polymeric membrane that can desalinate seawater and brines at ambient temperature and pressure[1]. The work was carried out by an international research team led by scientists at King Abdullah University of Science and Technology (KAUST).
“Water scarcity is severe in Saudi Arabia and is reaching unprecedented levels in countries once thought to be safe from such pressures,” says Noreddine Ghaffour, who led the research. “We urgently need to produce freshwater from seawater and brines at any scale, efficiently and cost-effectively, while conserving energy.”
Conventional membrane-based technologies such as reverse osmosis are most cost-effective at very large scales and depend on sophisticated energy-recovery systems. Even under these conditions, treating highly concentrated brines remains difficult because of the extreme pressures required. Membrane distillation offers an alternative approach, but it typically relies on elevated temperatures to vaporize water before it passes through a membrane and condenses as freshwater.
The membrane developed by Ghaffour’s team consists of an ultrathin polymeric film supported by a porous substrate and is designed for membrane distillation at low temperatures. The film contains sub-nanometer-sized pores and has a highly water-repellent, or superhydrophobic, surface, allowing the process to operate under ambient pressure.
In the system, warm saline water at approximately 25 °C flows along one side of the membrane, while cooler water at 20 °C flows along the other. This small temperature difference creates a natural driving force that pulls only water vapor across the membrane, where it condenses as pure water, leaving salt and other contaminants behind.
“What distinguishes our membrane is its ultrathin separating layer, only a fraction of a micrometer thick, combined with a highly water-repellent surface,” says project team member Mohamed Obaid Awad. “This superhydrophobicity is crucial because it prevents liquid seawater from entering and flooding the membrane pores.”
At the nanoscale, the membrane remains ‘air-filled’, ensuring that only water vapor can pass through. Water does not need to boil to evaporate; even at room temperature some water molecules naturally escape into the vapor phase. Here, the extremely small pores enhance this effect by increasing local vapor pressure, facilitating evaporation.
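The driving force in this configuration is the difference in saturation vapor pressure between the warm and cool streams, which can be estimated with the Antoine equation for water (a standard correlation, not taken from the study):

```python
# Saturation vapor pressure of water via the Antoine equation
# (coefficients valid roughly 1-100 °C; pressure in mmHg).
def p_sat_mmhg(t_celsius: float) -> float:
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))

dp = p_sat_mmhg(25.0) - p_sat_mmhg(20.0)
print(f"vapor-pressure driving force: {dp:.1f} mmHg")
```

Even a 5 °C gap yields a pressure difference of several mmHg, which is what sustains vapor transport through the air-filled pores at ambient conditions.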
The membrane also demonstrates high rejection of salt and acts as a total barrier to boron and other contaminants, preventing dissolved ions from entering the vapor pathway and improving performance in realistic desalination conditions.
“Salt ions and other dissolved species cannot evaporate under these conditions and are therefore excluded,” says Sofiane Soukane, a member of the research team. “Also, the membrane material is resistant to chlorine-based oxidants, which improves durability and long-term stability.”
Moving beyond the laboratory, the team is currently testing the technology in a pilot plant at KAUST. “Lessons from the pilot study will guide how we scale up membrane production,” says Ghaffour. “We have several industrial partners keen to get involved.”
Smart biomedical devices are transforming modern healthcare, using skin-mounted sensors to capture in-depth health information directly from the body. As clinicians increasingly use biosensing devices to guide patient care, accurate and reliable signal acquisition is critical.
A new system that can rapidly detect when the electrodes of devices such as heart monitors start to detach from the skin has been developed by a team at KAUST[1]. Unlike indirect electrode monitoring techniques, the new system directly measures electrode integrity by evaluating digital signal quality between electrodes.
“Traditional methods for checking whether medical electrodes are properly attached, based on impedance or indirect monitoring, were developed many years ago and assume relatively stable conditions,” explains Rajat Kumar, a student in the lab of Ahmed Eltawil, who led the research.
But in real life, as people move and sweat, electrodes can partially loosen or intermittently lose skin contact, which traditional indirect monitoring methods can struggle to detect.
“This is especially problematic for home-based wearable medical devices, where poor electrode contact may go unnoticed for long periods, leading to inaccurate data being recorded and relied upon,” says Abdelhay Ali, a postdoc in Eltawil’s group.
To develop smarter electrode connection monitoring, the team rethought the role of the body itself, Eltawil says. “Instead of treating the body as something that interferes with measurements, we considered whether it could be part of the solution.”
Tiny electrical signals can safely pass through the body, previous research has shown. “We realized that if electrodes could exchange digital signals through the body, then the quality of that communication would directly reflect how well the electrodes were attached,” Kumar says.
The team tested the concept by building a system around a custom chip designed and developed at KAUST. The chip sends and receives tiny digital signals between electrodes placed across the body. A small processing unit then analyses how well each signal is received. “Clear signals indicate good electrode skin contact; small errors indicate weakening contact; and missing signals indicate disconnection,” Ali explains.
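The mapping Ali describes can be sketched as a decision rule over two received-signal statistics; the thresholds and names below are hypothetical illustrations, not values from the KAUST chip:

```python
# Hypothetical sketch of the contact-state mapping: grade each
# electrode by how well test signals are received through the body.
# Thresholds are illustrative only.
def contact_state(received_fraction: float, bit_error_rate: float) -> str:
    if received_fraction == 0.0:
        return "disconnected"        # missing signals
    if bit_error_rate < 0.01 and received_fraction > 0.99:
        return "good contact"        # clear signals
    if received_fraction < 0.9:
        return "intermittent contact"
    return "weakening contact"       # small errors

print(contact_state(1.0, 0.0))       # good contact
print(contact_state(0.95, 0.05))     # weakening contact
print(contact_state(0.0, 0.0))       # disconnected
```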
A final electronic component manages the electrode checking sequence, ensuring the system can automatically monitor multiple electrodes without interrupting medical measurements.
The team tested the system using electrode pairs attached to human skin, and showed it could clearly differentiate electrodes that were firmly attached, partially loose, intermittently losing contact, or completely disconnected. “Importantly, the system detected the early signs of contact degradation that traditional methods often miss,” Kumar says.
“The system’s very low power consumption should enable practical integration with wearable medical devices that need to run continuously for long periods,” adds Ali. “These components form a compact and efficient solution that can be added to existing medical devices with minimal changes.”
The team now plans to turn their laboratory prototype into a fully integrated, single-chip system that can monitor many electrodes simultaneously, for use in clinical devices such as multi-lead heart monitors.
“Ultimately, our goal is to translate this KAUST-developed technology into practical medical devices that are more reliable, more trustworthy, and better suited for continuous health monitoring in the clinic and at home,” Eltawil says.
Wind and solar energy promise to make the electricity grid greener by delivering renewable energy at scale. But to smooth out seasonal renewable energy fluctuations and decarbonize parts of the global energy and transport system that are difficult or impossible to electrify, we will need clean-burning, carbon-free fuels produced from renewable sources.
Ammonia (NH3) is a carbon-free molecule that can serve directly as a fuel, can be produced renewably at scale, and can be stored and transported easily. However, in its pure form, ammonia has a low burn rate and is difficult to ignite. To address this, ammonia can be ‘cracked’ — heated to high temperatures over a catalyst — to partially break it down into a mixture of ammonia, hydrogen, and nitrogen, which increases the fuel’s reactivity and improves flame stability.
“Partially cracked ammonia is a realistic and promising fuel for future clean power and propulsion systems,” says Suliman Abdelwahid, postdoctoral researcher in Hong Im’s lab. “Understanding its combustion behavior under realistic conditions is essential for its safe and efficient deployment.”
The components of partially cracked ammonia have widely varying physical and combustion properties. The fuel burns in complex, turbulent flows that can produce toxic emissions of unburnt ammonia and NOx. “Using high-fidelity computer simulations, we aim to identify optimal combustion configurations and conditions to ensure complete combustion of ammonia with minimal NOx emissions,” Im says.
Laboratory experiments can provide essential data about the fuel’s characteristics, but they are expensive to run and reveal little about the inner structure of a turbulent flame. “Computational modeling can provide critical information that enables researchers to study flame behavior in detail, safely testing operating conditions on a computer,” says Junjun Guo, a research scientist on the team. “Once these models are validated against experiments, they can help engineers to design complex combustion systems that optimize performance, while reducing the need for costly physical testing,” he adds.
Fully modeling the complex chemistry of partially cracked ammonia combustion is computationally prohibitive, making it a major challenge to simulate the process efficiently and accurately. “So, instead of trying to track dozens of chemicals, we focused on a few key variables that capture the main behavior of the flame,” Abdelwahid says.
The team used AI to assist with the task[1]. “We trained neural networks on high-quality flame data, to accurately reconstruct temperature and species mass fractions,” Guo says. “The model accounts for differences in how fuel molecules move and mix.”
Whereas detailed chemistry simulations might have taken a month to complete, the team’s model produced results in about half a day, achieving a major efficiency gain without sacrificing accuracy. “By reliably predicting flame structure, mixing, stability, and extinction, our model will help designers to optimize burners and operating conditions, supporting the development of safer, cleaner, and more efficient ammonia-based combustion technologies,” Abdelwahid explains.
Combining their advanced model’s output with a few key physical experiments, the team aims to collect enough data to build a virtual ‘digital twin’ combustion system, from which they can design and optimize a real-world system for partially cracked ammonia combustion.
Chemical genomic screening provides a powerful way to pinpoint which genes help an organism to survive (or falter) under a particular stressor, and to identify the functional roles played by these genes. For example, such screening tests can assess microbes for resistance to antibiotics, or monitor their responses to new drug candidates.
However, the lack of a standardized protocol for conducting these valuable tests has slowed progress. This gap has now been addressed by a team at KAUST, in collaboration with scientists at the Universities of Birmingham and Newcastle in the United Kingdom[1]. The collaboration combined established practices from multiple laboratories to create a coherent and broadly applicable screening workflow.
“Chemical genomic screening has traditionally been performed using lab-specific, ad hoc methods,” says Georgia Williams, who worked on the project under the supervision of Danesh Moradigaravand. “Differences in equipment, organisms, experimental design and data analysis meant that no single protocol captured the full process from start to finish. As a result, experiments have been difficult to reproduce across institutions,” she explains.
Chemical genomic screening is used to assess the effect of chemical or environmental stressors on single-gene mutant libraries and can also be adapted for use with clinical strain collections from hospitals. The different gene mutants respond in different ways to stressors, resulting in variations in colony size, biofilm formation, and structural composition. The tests allow scientists to study isolated ‘phenotypes’ of a gene responding to specific conditions and determine the functional links between a stressor and that gene.
The research team focused on arrayed library-based screening, where individual gene mutants are placed in different wells in an array, then monitored for their responses to stressors. Their new protocol is fully integrated and covers every step of the process, from experimental setup to data analysis. It combines automated imaging, standardized data handling and built-in quality control with clear troubleshooting guidance for users.
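One analysis step common to arrayed screens of this kind is normalizing colony sizes against the plate median so that growth defects can be compared across plates; the sketch below, with made-up numbers and mutant names, illustrates that generic practice rather than the published protocol itself:

```python
# Generic illustration of arrayed-screen analysis: normalize colony
# sizes by the plate median, then flag mutants with strong growth
# defects under the applied stressor. All values are made up.
from statistics import median

plate = {"mutA": 420, "mutB": 180, "mutC": 400, "mutD": 395}  # pixel areas
m = median(plate.values())
relative_fitness = {name: size / m for name, size in plate.items()}

# Mutants growing at less than half the plate-typical size are
# candidate genes linked to the stressor.
hits = [name for name, f in relative_fitness.items() if f < 0.5]
print(hits)  # ['mutB']
```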
For individual scientists, this protocol will save time, reduce error in the lab, and produce consistent, high-quality data. For the wider scientific community, the standardization of workflows enables data comparison and integration across laboratories, making large-scale datasets more reliable and reusable.
“Our workflow allows researchers to focus on biological questions rather than dealing with technical setup and troubleshooting,” says Williams. “Shared workflows also promote collaboration by allowing researchers from different institutions to work within a common technical framework, which can speed up discovery in microbial genetics and systems biology.”
The workflow can be used for instances when researchers need to assess how genetic changes affect microbial fitness under stress. For example, it can accelerate antibiotic discovery by identifying genes involved in drug sensitivity or resistance. It can also support synthetic biology by enabling scientists to rapidly compare engineered strains under industrial or environmental stress conditions. In environmental microbiology, the workflow can be used to screen microbes for tolerance to toxins or pollutants.
“We plan to expand the workflow to assess additional microbial species and stress conditions,” concludes Moradigaravand. “This will include many antibiotic-resistant bacterial strains that are on the WHO priority pathogen list, for which new drugs are urgently needed.”
All plants are holobionts: they survive and thrive thanks to complex interactions with their associated microbial communities, or ‘microbiomes’. KAUST researchers have shown for the first time that a specific gene in the host genome shapes the seed microbiome of Arabidopsis thaliana plants, so that they can grow in low pH, iron-rich soils[1].
“There have been many studies on microbiomes in different parts of plants – in the roots, leaves and fruits – but little is known about the seed microbiome, which is where each plant begins its life,” says postdoc Sabiha Parween, who worked on the project under the supervision of Heribert Hirt. “We wanted to understand if the host genome inside the plant helps to shape the composition of the seed microbiome, and if so, how it does this.”
Arabidopsis thaliana has been a model organism in plant science for decades. The team had access to 250 naturally occurring accessions, or varieties, of the plant collected from different locations across the globe. They investigated the diversity of seed microbiomes in these different accessions.
“Our previous seed microbiome study into millet showed that differences in seed microbiomes reflected varying lifestyles of accessions depending on geographical, environmental and soil particularities,” says Hirt. “However, we lacked a causal genetic proof for this conclusion. Now, we’ve found similar variations in Arabidopsis, and this time we’ve also found a causal link between the host genome and the seed microbiome.”
All 250 Arabidopsis accessions had already been genetically sequenced, enabling the team to conduct a genome-wide association study (GWAS) to search for the genes responsible for specific microbiome structures in particular accessions. Through GWAS, the researchers proved that certain microbial networks co-evolve alongside the Arabidopsis genome, helping the plants to adapt and thrive on particular soils.
“In Arabidopsis thaliana, host genetic variations across accessions associate directly with seed microbiome variations,” says Parween. “Also, the relative abundance of different microbial groups in the microbiome varied from genotype to genotype, and can be influenced by external factors, such as climatic conditions.”
Digging deeper, the team identified a gene in Arabidopsis – one that encodes the RNA-binding protein RPB47B – that actively shapes the seed microbiome. This genetic trait enables the plant to thrive in low pH soils with a high iron content, typical of northern latitudes.
“Too much iron in soils causes reactive oxygen species to accumulate, which damages plant DNA and growth,” says Hirt. “The gene we’ve pinpointed is responsible for enabling plant growth under these particular conditions, and we believe there is a co-evolution taking place. The host genome ensures that the plant recruits and passes on the right microbes to support the growth of future generations under low pH, high iron conditions.”
This gene is also found in many crop plants and has previously been identified as a marker of stress, though not specifically for iron toxicity. The researchers’ findings could provide actionable insights into crop development for iron-rich soils.
“If you’re working in plant genomics from now on, you can’t forget about the microbes,” concludes Hirt. “Plant genomics has to become holobiont genomics, and shaping seed microbiomes holds potential to improve crop growth.”
Cardiovascular diseases remain a leading cause of death worldwide, and every year thousands of children are born with congenital heart disease. Reliable, efficient tools that can accurately assess heart function are critical for heart disease diagnosis and treatment.
Now, a novel machine learning method of analyzing real-time cardiac magnetic resonance imaging (MRI) scans has been developed by KAUST researcher Raúl Tempone in collaboration with scientists at the German Aerospace Center, in Cologne[1].
“Cardiac real-time MRI is an invaluable method of scanning the heart, and can acquire up to 50 frames per second, which is excellent in clinical terms,” says Tempone. “However, this generates thousands of images that are near-impossible to process manually.”
A typical short scan at 30 frames per second for 10 seconds can produce around 4,500 images to annotate across 15 ‘slices’, or cross-sectional images through the heart. Neural networks can accurately segment most of these images, but fail to consistently segment the ventricular cavity in the outer slices at the base and apex of the heart. This means that the ventricular volume — the amount of blood inside the heart’s ventricles throughout the cardiac cycle — in the outer slices is not always estimated reliably by the neural networks.
In the existing workflow, a clinician must perform a visual quality check to separate reliably segmented inner slices from the unreliable outer slices. This is particularly important for patients with rare heart diseases and unusual anatomy, such as univentricular hearts.
“We wanted to find a way to attain trustworthy estimates of ventricular volume across the entire heart, and analyze the most pertinent images accurately,” says Tempone. Ventricular volume is a critical measurement in the assessment of cardiac diseases, but it changes continuously in complex ways depending on the patient’s breathing and heartbeat.
“The volume of a heart ventricle over time is not arbitrary: it is dominated by a small number of frequencies, primarily associated with the heart rate and breathing pattern,” says Tempone. “Our modeling framework uses sparse Bayesian learning (SBL) to identify the most relevant frequencies for each patient.”
Their model learns a patient-specific ‘frequency fingerprint’ from ventricular volume curves extracted from the reliably segmented inner slices. Those dominant frequencies correspond primarily to the heart rhythm and breathing. The SBL model identifies which frequencies really matter, and automatically prunes irrelevant frequency components during training. The model then selects the most informative frames (time points) for manual labeling on the unreliable outer slices, to reduce uncertainty.
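A toy version of the 'frequency fingerprint' idea, with made-up numbers, shows how a volume curve dominated by a heartbeat and a breathing component yields a sparse spectrum. The real method uses sparse Bayesian learning with automatic pruning; this simple correlation picker only illustrates why a handful of frequencies suffices:

```python
import math

# Synthetic ventricular volume curve: a heartbeat (~1.2 Hz) plus a
# breathing (~0.25 Hz) component, sampled at 30 frames per second.
# All numbers are illustrative, not patient data.
FS = 30.0
N = 600                          # 20 s of data
t = [n / FS for n in range(N)]
volume = [5.0 * math.cos(2 * math.pi * 1.2 * x)
          + 2.0 * math.cos(2 * math.pi * 0.25 * x) for x in t]

def power(freq: float) -> float:
    """Spectral power: project the signal onto a sinusoid pair."""
    c = sum(v * math.cos(2 * math.pi * freq * x) for v, x in zip(volume, t))
    s = sum(v * math.sin(2 * math.pi * freq * x) for v, x in zip(volume, t))
    return (c * c + s * s) / len(volume)

candidates = [round(0.05 * k, 2) for k in range(1, 60)]  # 0.05-2.95 Hz
dominant = sorted(candidates, key=power, reverse=True)[:2]
print(sorted(dominant))  # [0.25, 1.2] — the two planted frequencies
```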
“With a few carefully chosen manual labels, we can reconstruct accurate ventricular volume curves across the entire heart,” says Tempone. “Crucially, we can also quantify uncertainty: in other words, our model clearly states when automated measurements can, and can’t, be trusted.”
In tests on real-time MRI data from two patients with univentricular hearts, the approach required only a few manually labeled images to accurately predict ventricular volume on the challenging outer slices.
“Our model-based labeling strategy is generic and can be integrated into other cardiac MRI workflows,” concludes Tempone. “The model could easily be transferred into other medical image analysis pipelines that have similar data and label scarcity issues.”
Millions of servings of sustainably sourced fish are being lost each year: data from a study of thousands of tropical reefs show that reef fish stocks could provide far more food if sustainably managed.
Coral reef fish are a critical source of nutritious food for many tropical communities, and managing such resources sustainably offers potential food security benefits, especially for nations facing high burdens of malnutrition.
“Well-managed reef fisheries can deliver sustainable supplies of nutritious aquatic foods while helping reef ecosystems and dependent communities increase their resilience to other stresses,” says Jessica Zamborain-Mason, interdisciplinary marine scientist at KAUST, who led the international research team. “Millions of people are losing out on sustainably sourced and nutritious food supplies.”
According to Zamborain-Mason, the finding underscores the need for effective reef fisheries management as the global burden of malnutrition grows each year and reef ecosystems face unprecedented pressures from climate change[1].
Reef fisheries are inherently complex. They capture many species at once and are often located in data-poor regions with limited monitoring and management. This makes it difficult for scientists to collect and analyze data to assess their sustainability levels.
“What’s different now is the availability of large, fisheries-independent datasets that compile reef fish and associated information from across the globe,” says Zamborain-Mason. “These new datasets reveal patterns that were not visible with localized and scattered data.”
The team analyzed global data from 1,211 individual reef sites and 23 jurisdictions that had been identified as being below their maximum production levels. Rather than focusing on loss, the research team investigated what could be gained from replenishing reef fish stocks and ensuring that reef fisheries are sustainably managed.
Their results show that allowing fish stocks to recover could generate from 20,000 up to 162 million additional sustainable servings of reef fish annually, depending on the jurisdiction. For an individual jurisdiction such as Indonesia, this could meet recommended fish intake levels for an additional 1.4 million people per year. Achieving this, however, requires active recovery of fish stocks, with many reefs needing to double their current biomass.
While recovery timeframes will depend on the state of depletion, the researchers estimate that, on average across all reefs examined, recovery could take from six years to almost 50 years, depending on the level of fishing allowed.
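An idealized logistic-growth sketch, with illustrative parameters rather than the study's estimates, shows why the time to double biomass stretches from years toward decades as the fishing pressure allowed during recovery rises:

```python
# Logistic biomass growth with a fishing mortality term, integrated
# with small Euler steps until biomass doubles. Growth rate r and
# fishing rates are illustrative only, not fitted values.
def years_to_double(fishing_rate: float, r: float = 0.2,
                    dt: float = 0.01) -> float:
    b, target = 0.25, 0.5     # biomass as a fraction of carrying capacity
    t = 0.0
    while b < target and t < 100:
        b += dt * (r * b * (1 - b) - fishing_rate * b)
        t += dt
    return t

print(f"no fishing during recovery: ~{years_to_double(0.0):.0f} years")
print(f"moderate fishing allowed:   ~{years_to_double(0.09):.0f} years")
```

The qualitative point matches the study's range: the more harvest permitted while stocks rebuild, the longer recovery takes.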
“If nations invest now in sustainably rebuilding their reef fishery resources, they can increase long-term yields and aquatic food supplies,” says Zamborain-Mason. “This could reduce their reliance on imports, improve nutritional and health outcomes, and boost ecosystem and economic resilience.”
Zamborain-Mason hopes their findings will encourage rigorous and systematic reef fisheries monitoring and management, while also raising awareness of the critical role of fisheries management for human health.
Such interventions could include greater investments in fisheries monitoring, management and enforcement, and the integration of fisheries into future food security plans and policies. Zamborain-Mason also considers that Saudi Arabia has the potential to provide a world-leading example of climate-resilient and sustainable reef fisheries management.
“By improving our understanding of reef fisheries and aquatic food systems in the region, we can provide scientifically grounded management information and support the Kingdom’s mission centered on sustainability, food security, and public health,” concludes Zamborain-Mason.