Plant diversity is the strongest predictor of resistance to grazing pressure in dryland ecosystems, according to a global-scale study by an international team led by KAUST[1]. This finding provides a scientific basis for developing and implementing more sustainable rangeland management strategies.
This insight is critical for global drylands, which support about half of the world’s livestock production and sustain the livelihoods of more than one billion people. Despite their vital role, these ecosystems face intense pressures and are highly vulnerable to desertification.
“Current trends indicate a global increase in meat consumption and livestock production, and so the pressures on dryland rangelands will also increase,” says Lucio Biancari, who worked on the study under the supervision of Fernando T. Maestre. “We already know from previous studies that ecosystems don’t all respond to grazing in the same way. In some dryland areas, vegetation declines rapidly as grazing intensifies, while in others, ecosystems remain surprisingly resistant.”
However, most previous work has been conducted at local or regional scales, making it difficult to identify general patterns or underlying mechanisms for these contrasting responses, notes Biancari. To examine these patterns on a global scale, an international team coordinated by Maestre conducted an extensive survey. The researchers analyzed data from 73 dryland sites across 25 countries to evaluate how climate, soil, vegetation and grazing-related factors influence ecosystem resistance.
They found that increased grazing pressure led to a decline in vegetation cover at 80 percent of the 73 sites. On a global scale, the average effect of increased grazing was a 35 percent reduction in vegetation cover. Plant species richness emerged as the strongest predictor of ecosystem resistance: sites with more plant species lost less vegetation cover.
“Sustainable grazing cannot rely only on reducing animal numbers, particularly in ecosystems with a long history of grazing such as Saudi Arabia,” says Biancari. “While stocking rates clearly matter, our results highlight the importance of management approaches that actively promote and protect plant diversity. Conserving plant diversity is not just a conservation objective; it is central to maintaining ecosystem functioning as land-use pressure increases.”
“Our results show that diversity enhances resistance not just because some plant species can replace others, but because diverse plant communities contain complementary strategies to boost resilience,” explains Biancari.
For example, different species exhibit variation in growth form, palatability, rooting depth, or chemical defenses, and collectively these features help the vegetation to withstand grazing impacts. Simplified plant communities may therefore be more vulnerable to degradation than more diverse communities.
The team highlights that management frameworks that support diverse plant communities are more likely to enhance long-term resistance to grazing, helping drylands remain productive while reducing the risk of degradation.
“This is particularly relevant for hyper-arid and arid systems like those in Saudi Arabia, where rangelands represent the most extensive land use across the country,” says Maestre. “We are now conducting a large, standardized field survey across major ecosystems in Saudi Arabia to understand their responses to grazing pressure. The results will help inform evidence-based grazing management and restoration strategies in the Kingdom.”
Materials that repel water are used in countless applications, including industrial separation processes, routine laboratory pipetting, and medical devices. When water touches these surfaces, the interface where they meet tends to acquire a small electrical charge — an effect that is ubiquitous, yet poorly understood.
KAUST researchers have now studied this charging effect in detail, and their findings could have broad implications[1].
“This is not a niche laboratory curiosity,” says Yinfeng Xu, a Ph.D. student who led the experimental work in Himanshu Mishra’s laboratory. “This phenomenon plays a role in environmental processes involving dew droplets and raindrops; in industrial operations involving sprays, condensates, or emulsions; and in modern microfluidic and liquid-handling systems used in laboratories worldwide.”
Xu conducted a series of experiments that involved drawing water into a hydrophobic capillary tube, and then dispensing droplets into a Faraday cup — a copper container connected to a sensitive electrometer that measured the tiny electrical charge carried by the droplets. Hydrophobicity ensured that water left the capillary as a drop, without leaving much trace behind.
The researchers used glass capillaries with different chemical coatings to determine whether the charging effect was due to the water itself or to the surface. This revealed that the droplets could carry either positive or negative charge, depending on the coating.
Next, Xu varied how quickly water was taken up and released. Surprisingly, he found that the rate of liquid uptake had almost no effect on the charge, and neither did the length of time that the water remained in the capillary. In contrast, the rate of droplet release had a significant impact: “In simple terms, the faster the liquid pulls away from the surface, the more charge is generated,” says Mishra.
The team then studied the effect of repeatedly drawing water into a capillary and releasing droplets into the Faraday cup. This confirmed that the final step of each cycle — droplet release — was where most of the charging occurred. “One would intuitively expect that charging happens mainly when water first touches a surface,” says Mishra. “Our findings show that the opposite is true.”
Curiously, changing the rate of droplet release also affected charging during the uptake step of the following cycle. “The interface does not simply return to a neutral state after each droplet,” says Xu. “It ‘remembers’ its recent past, and that memory influences how charge is transferred in subsequent cycles.”
Together, these findings help to reconcile previously reported contradictory observations by other scientists. They also provide a basis for further investigation of exactly how the charging process happens.
Meanwhile, the study has significant implications for industrial processes involving liquids, or for microfluidic devices that handle very small volumes of liquid. “In microfluidic systems, even small amounts of charge can noticeably influence how particles, droplets, or biomolecules move,” explains Xu. “By clarifying when and how charge is generated, our findings can help improve the reliability and reproducibility of microfluidic devices, especially in experiments that rely on precise control of tiny liquid volumes.”
The researchers plan to develop a theoretical model that could predict the generation of charge based on the movement of water across hydrophobic surfaces.
Following the devastating magnitude 7.8 and 7.6 Kahramanmaraş earthquakes in south-central Turkey and northwestern Syria in 2023, KAUST researchers and an international team have conducted a pioneering study into post-seismic deformation across the region. The work was enabled by exceptionally comprehensive satellite radar datasets. Their results underscore the importance of considering both temporal and 3D spatial data in exploring lithospheric recovery[1].
In the aftermath of major earthquakes, the Earth’s crust and uppermost mantle (the lithosphere) continue to deform and shift at plate boundaries, taking time to recover. Monitoring post-seismic surface deformation can help scientists understand which associated subsurface recovery processes occur after earthquakes, and offer insights into possible future tectonic movements.
“Unlike most large earthquakes that occur under the ocean, this massive and shallow earthquake doublet occurred on a continental plate boundary,” says Jihong Liu, a research scientist who worked on the study under the supervision of Sigurjón Jónsson. “Having land surface on both sides of the fault allowed us to make detailed deformation observations using interferometric synthetic aperture radar (InSAR) images.”
“The tragedy is that while these types of earthquakes are incredibly useful for scientists, they are generally also the deadliest, causing widespread destruction and loss of life,” says Jónsson. The Kahramanmaraş earthquakes killed more than 50,000 people along the plate boundary (the East Anatolian Fault) between the Arabian Plate and the Anatolian Plate.
Scientists have long tried to identify which recovery processes operate in the lithosphere after large earthquakes, but these processes produce similar visible effects, making them difficult to distinguish. Using detailed InSAR images of the region, the team mapped the full spatial pattern and evolution of surface deformation in the first two years after the earthquakes. They then modeled the most likely processes that would result in these deformation patterns. The underlying plate geology helped narrow down the model, as Liu explains:
“Broadly speaking, the Arabian Plate is stiffer and more uniform, with relatively little internal seismic activity. In contrast, the softer Anatolian Plate contains a complex and fragmented fault network and is actively deforming.”
The team found that the deformation across the fault is asymmetric not only in space but also in time, with the two sides relaxing at markedly different rates. This provides critical evidence that the dominant post-seismic process is contrasting viscoelastic relaxation in the different rocks beneath the surface. Viscoelastic relaxation is the gradual release of stress over time inside a material that has been stretched or squeezed.
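The contrast in relaxation rates can be pictured with the textbook Maxwell model of viscoelasticity, in which stress decays exponentially with a characteristic time τ = η/G (viscosity divided by shear modulus). The sketch below is purely illustrative, using made-up relaxation times to show why a stiffer side relaxes more slowly; it is not the team's model.

```python
import math

def relaxed_fraction(t_years: float, tau_years: float) -> float:
    """Fraction of co-seismic stress released after time t in a Maxwell solid."""
    return 1.0 - math.exp(-t_years / tau_years)

# Hypothetical relaxation times: a longer tau means a stiffer, slower side.
tau_arabian = 50.0    # stiffer, more uniform Arabian Plate side
tau_anatolian = 10.0  # softer, fragmented Anatolian Plate side

t = 2.0  # a two-year post-seismic observation window
print(f"Arabian side:   {relaxed_fraction(t, tau_arabian):.1%} of stress relaxed")
print(f"Anatolian side: {relaxed_fraction(t, tau_anatolian):.1%} of stress relaxed")
```

With these illustrative numbers, the softer side has released several times more stress after two years, producing exactly the kind of temporal asymmetry across the fault that the study observed.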
“Only this process can reproduce what we see on the surface here,” says Liu. There are also indications of poroelastic rebound, where groundwater fluid pressure recovers in pores in the rocks following the earthquake shock, causing gradual uplift in some areas and subsidence in others.
“These results are probably the best example to date of determining post-seismic recovery processes after a major earthquake,” explains Jónsson. “The constraints will directly support more reliable seismic hazard assessments for this tectonically active region.”
The team plans to continue monitoring the fault zone and will use their modeling framework to examine other large earthquakes around the world.
Energy systems can come under enormous strain from sudden changes in renewable generation, such as when sunlight rapidly increases as clouds pass, or when strong gusts hit a wind farm. A clean energy storage technology that handles these power peaks and troughs with ease, converting renewable electricity into green hydrogen, has been demonstrated by researchers at KAUST[1].
Storing renewable energy as clean hydrogen fuel is a critical element of future energy systems. Green hydrogen is made by using renewable electricity to split water molecules, using a device called an electrolyzer.
Today’s electrolyzers are poorly suited to this task. “Most water-splitting electrolyzers depend on steady electricity from the power grid — but that electricity often comes from fossil fuels, negating hydrogen’s environmental benefit,” says Abdul Malek, a postdoc in the lab of Xu Lu, who led the research.
One electrolyzer component highly vulnerable to sudden power surges is the water-splitting catalyst. Low-cost nickel–iron (NiFe) catalysts work well when the power supply is steady but can degrade rapidly when connected to renewable power sources that keep switching on and off, Malek explains. “Until now, there was no clear way to help such catalysts survive these harsh conditions for long periods,” he adds.
Some previous studies had suggested that adding chromium to NiFe catalysts might improve performance, but other reports concluded that the chromium-modified catalyst quickly broke down during operation. Lu and his team wondered if both findings might be true.
“We suspected that chromium might act like a temporary helper, guiding the catalyst into its most active and stable form when the system first starts running,” Malek says. “This idea was inspired by our earlier research, where we observed that chromium gradually washes out during operation — but instead of harming the catalyst, the process left behind a more porous structure that improved performance.”
The researchers tested the concept by designing a NiFe catalyst that incorporated a sacrificial quantity of chromium. They used a raft of analytical techniques, including X-ray photoelectron spectroscopy and inductively coupled plasma–optical emission spectroscopy, to track the catalyst’s changing composition and structure during use.
The results confirmed that the chromium gradually disappeared during electrolyzer operation, leaving behind an open structure of nickel and iron in a stable oxidized state.
A lab-scale electrolyzer fitted with the new catalyst maintained strong, stable performance over 30 days of fluctuating power. The researchers then teamed up with industry to test the catalyst at scale. “We demonstrated that an eight-cell electrolyzer stack, which delivered 2.5 kW peak power, remained stable over 13 simulated stop-start solar cycles,” Malek says. The device recovered instantly after a sudden power loss test, he adds.
“The challenge is to develop inexpensive electrolyzer systems that can operate stably for thousands of hours under real dynamic conditions,” Lu says. “Our next steps are larger stacks, direct coupling with solar power, and further improved earth-abundant catalysts and system engineering. The goal is practical, renewable-powered green hydrogen production that works outside the lab,” he concludes.
Quantum dot (QD) semiconductor lasers have been shown to operate reliably under strong optical feedback, which results from external light being reflected back from other circuit components[1]. A KAUST-led team says its discovery is the key to simpler and cheaper on-chip integration.
This advance brings these lasers closer to practical use in compact, scalable photonic circuits that enable faster data transfer and processing while using less energy.
Photonic integrated circuits typically use quantum well-based lasers containing III-V-type semiconductor materials like gallium arsenide, which are ideal for long-distance, high-speed data transmission in fiber optic networks. But when incorporated into standard silicon-based circuits, these lasers face specific hurdles. They are highly sensitive to optical feedback, which degrades performance, and can undergo coherence collapse — a chaotic state in which the laser signal becomes unstable and noisy — even under modest feedback levels.
As a result, quantum well-based lasers typically require optical isolators, which allow light transmission in just one direction, or complex engineering to prevent feedback when used on circuits. These protective measures add cost, complexity, and energy consumption.
In contrast, QD lasers are thermally stable, efficient, and resistant to optical feedback thanks to their ability to maintain a consistent, narrow-linewidth signal. This could eliminate the need for optical isolators, simplifying packaging and reducing costs. But can the lasers stay reliable without isolators in real circuits, where reflections can be much stronger?
The research team — led by Yating Wan, with postdoc Ying Shi, and coworkers from KAUST and the University of California, Santa Barbara — has developed a laser setup to establish a realistic and quantitative feedback limit that system designers can rely on.
“We needed to push QD lasers far beyond previously explored regimes and directly observe where they finally become unstable,” Shi says.
The researchers coupled the QD gain medium with a Fabry-Perot cavity, a simple arrangement of mirrors and optical elements that allowed them to isolate the properties that govern feedback tolerance.
“Using this design ensured any improvements in feedback tolerance truly come from the quantum dot material itself, rather than from added cavity engineering,” Shi adds.
The system withstood feedback levels up to −6.7 dB before collapsing, which is tens of decibels better than standard quantum well-based lasers. “This confirmed that QD lasers are not feedback immune, yet they remained remarkably stable just below this limit,” Shi explains.
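For readers unfamiliar with decibel notation, −6.7 dB corresponds to a surprisingly large fraction of the emitted light being reflected back. The conversion below uses only the standard decibel formula; the −6.7 dB threshold comes from the study, while the −40 dB comparison value is a hypothetical illustration of a far less tolerant laser.

```python
def db_to_power_ratio(level_db: float) -> float:
    """Convert a feedback level in decibels to a linear power ratio:
    level_db = 10 * log10(P_reflected / P_emitted)."""
    return 10 ** (level_db / 10)

# Collapse threshold reported for the QD laser.
qd_threshold = db_to_power_ratio(-6.7)
print(f"{qd_threshold:.1%} of the emitted power reflected back")  # ~21.4%

# A hypothetical laser destabilized at -40 dB, by contrast, would only
# tolerate about 0.01% reflected power -- "tens of decibels" really does
# mean several orders of magnitude in power ratio.
print(f"{db_to_power_ratio(-40):.2%}")
```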
Even near the collapse threshold, the laser could transfer data at a sustained maximum speed of 10 gigabits per second without significant performance degradation. It also maintained strong thermal and long-term stability, as well as reproducibility.
The system performed as well as hybrid platforms, which combine two microchips, and outperformed current state-of-the-art devices in feedback tolerance. Modeling revealed that coherence collapse is influenced by the external cavity length and circuit design, providing practical guidance for building photonic circuits that don’t require optical isolators.
“We are extending this work to application-oriented devices, such as narrow-linewidth and mode-locked quantum dot lasers,” Wan says. The team ultimately aims to develop robust, energy-efficient, and fully isolator-free circuits for emerging applications, such as LiDAR and optical computing.
Coral reefs differ widely in the types of corals they host, how their populations are structured, and the extent of coral cover. These differences are influenced by environmental and biological factors, from local conditions to regional climate patterns. Now, KAUST researchers have determined baselines in spatial variability for eight reefs in the northeastern and central Red Sea, providing vital information for future management and conservation efforts[1].
“This study was conducted at a critical moment for coral reefs,” says Chiara Pisapia, who worked on the study with colleagues Eslam Osman and Maggie Johnson.
“Data was collected from November 2023 to January 2024, between two back-to-back mass bleaching events,” continues Pisapia. “These timely datasets will help scientists assess how much change, loss, or recovery follows subsequent bleaching events. This will improve understanding of the long-term consequences of climate stress on coral reefs.”
Latitudinal differences in temperature and salinity, together with factors such as nutrient availability, light levels, water quality and fishing pressures, all combine to determine the success of coral communities in different regions.
The team investigated spatial differences in coral cover, taxonomic composition and demographic structure in three common reef-building coral genera: Acropora, massive Porites and Pocillopora. They surveyed five reefs in the northeastern Red Sea, which is cooler and less saline than southern and central regions, and compared the data to those gathered at three reefs in the central Red Sea.
“This study goes beyond traditional reef surveys that focus mainly on coral cover. It’s one of the first studies in the Red Sea to directly compare coral population demography, including both adults and juveniles, across different reef habitats and large spatial gradients,” says Pisapia. “This meant we could identify how well populations are structured to recover in the future, which is essential for understanding resilience.”
Their findings showed pronounced spatial differences, not just between latitudes but also between reef habitats (the reef crest and reef slope). Specifically, northern reefs had high live coral cover – almost double that of the central reefs at the time the surveys were conducted.
“Coral populations are not uniform, and there were substantial differences in coral assemblages across regions and habitats in these reefs,” says Pisapia. “This highlights the sensitivity of coral populations to environmental gradients and disturbance history. It is vital that scientists include spatial context when evaluating reef condition.”
The team’s discovery highlights the need for conservation and management strategies that are specifically tailored to each habitat type and region.
“Some reefs, or specific reef habitats, may be better positioned to recover from climate disturbances than others,” says Johnson. “This information can help managers prioritize protection, restoration, and monitoring efforts where they are likely to be most effective. It’s only with pivotal data like those collected by Pisapia that we can evaluate the true effect of environmental change on Saudi Arabia’s coral reefs.”
Pisapia plans to revisit these sites to track how coral populations change through time as disturbances continue to intensify, and assess which reefs are most likely to persist.
The study was supported by an Ocean Science and Solutions Applied Research Institute (OSSARI) award to Pisapia and Osman, and by a National Geographic Society Award granted to Osman.
Increasing AI’s ability to tackle complex challenges with greater accuracy and energy efficiency is not simply a matter of adding more computing power. Subtle details in the networking of the computing elements can have a significant impact on AI performance.
The architecture, or wiring, of the computing elements in an AI is inspired by how neurons form circuits to process information and learn. However, a key aspect of neural network structure has so far been overlooked in AI design, as a KAUST-led team has shown[1][2].
Jesper Tegnér and his team at KAUST – in collaboration with an international team including the AI technology company NVIDIA – made the discovery while examining the network architecture of an AI solving a balance task. “We focused on ‘network motifs’, which often form the fundamental building blocks of large, complex networks,” explains Haoling Zhang, a Ph.D. student in Tegnér’s lab.
Network motifs combine to form complete, complex networks, just as words come together to create language. “Network motifs have been widely studied in biology and social science, but surprisingly little in the AI literature,” Zhang says.
The concept had been so overlooked in the AI community that no methods existed to study it. “To make progress, we had to build from scratch our own analytical methods, combining neuroevolution with machine learning,” Zhang adds.
The team focused on some of the simplest possible network motifs, consisting of three nodes – two inputs and one output – connected to form a triangle. Depending on how they are wired and how information flows through them, different types of three-node network motifs can form. Two of the most important are known as ‘coherent’ and ‘incoherent’ loops. “The abundance of incoherent motifs in natural systems, where both activation and repression occur on the same node in small circuits, has remained intriguing,” Tegnér explains.
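The two loop types are easiest to see in a toy simulation. The sketch below implements the classic coherent and incoherent feedforward loops from systems biology with simple threshold units and a one-step delay on the indirect path; it illustrates the motif concept only and is not the team's neuroevolution framework.

```python
def step(x: float) -> float:
    """Simple threshold activation unit."""
    return 1.0 if x > 0.5 else 0.0

def simulate(loop: str, steps: int = 6) -> list:
    """Run a three-node motif: input A, intermediate B, output C."""
    a_signal = [0.0] + [1.0] * (steps - 1)  # input A switches on at t = 1
    b, outputs = 0.0, []
    for a in a_signal:
        if loop == "coherent":
            c = step(a + b - 0.5)   # direct and indirect paths both activate C
        else:
            c = step(a - b + 0.2)   # A activates C, but B (delayed) represses C
        outputs.append(c)
        b = step(a)                 # B follows A with a one-step delay
    return outputs

print(simulate("coherent"))    # C switches on after a delay and stays on
print(simulate("incoherent"))  # C emits one brief pulse, then shuts off
```

Even in this tiny example, the wiring changes the behavior qualitatively: the coherent loop latches onto a sustained input, while the incoherent loop responds only transiently, hinting at why the two types treat noisy signals so differently.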
“Initially, we didn’t suspect that these two loop types would have a significant impact on AI performance,” Zhang says. But, as the team progressed from testing individual network motifs to assessing more complex networks in various tasks, a consistent and notable difference emerged between the two loop types.
Neural networks dominated by coherent loops tended to quickly focus on feature-rich ‘high-gradient’ patterns encoded in the learning dataset. In contrast, networks dominated by incoherent loops explored patterns without any early preference for specific features.
In real-world situations, random or irrelevant noise in datasets is almost impossible to eliminate. During the early training phase of machine learning, noise can easily mislead a neural network, because the network cannot yet distinguish meaningful from meaningless signals. By rapidly zeroing in on high-gradient regions of the dataset, coherent loops were more susceptible to being distracted by noise. “This could slow learning and distort what is learned,” explains Zhang.
Networks built from incoherent loops, in contrast, learned in a richer, more balanced way. “In noisy datasets, networks with more incoherent loops stayed noticeably more stable and got less confused,” Zhang says.
“We found that these biological connectivity patterns had functional significance in an AI system,” Tegnér says. The team is now exploring various nature-inspired ways to develop high-performing, creative, energy-efficient next-generation AI systems.
Hydrogen is a clean-burning gas that could help to tackle climate change by reducing our dependence on fossil fuels. But storing and transporting hydrogen is expensive and technically challenging, typically requiring high-pressure gas tanks or cryogenic systems that operate at very cold temperatures.
One promising alternative involves incorporating hydrogen into carbon-based molecules known as Liquid Organic Hydrogen Carriers (LOHCs), which are safer and easier to handle than the gas itself. KAUST researchers have shown that certain LOHCs could reliably store hydrogen underground in depleted oil fields, and then help to recover residual oil from those reservoirs[1].
“Together, these advantages make LOHCs a compelling alternative to conventional hydrogen storage technologies,” says Hussein Hoteit, who led the research team.
LOHC systems use a catalyst to chemically combine hydrogen with a liquid organic molecule, forming a hydrogenated liquid that can be stored or transported like a conventional fuel. A second catalytic reaction is subsequently used to release the hydrogen and regenerate the initial carrier molecule.
Crucially, LOHCs can be handled using existing petrochemical infrastructure, such as pipelines, tankers, and large-scale storage facilities. “This significantly reduces the cost and complexity of building new hydrogen-specific infrastructure, which is one of the major barriers to widespread hydrogen deployment,” says Zeeshan Tariq, a member of the team.
The researchers simulated how two different LOHC systems would perform in a depleted sandstone reservoir at a depth of about 2,200 meters, typical of oil fields in Saudi Arabia. Their calculations included a wide range of factors, including the viscosity, stability, and hydrogen-storage capacity of the LOHC molecules.
In the first system, hydrogen is combined with toluene at the surface to produce methylcyclohexane. Both molecules are stable, widely available, and already used in above-ground LOHC facilities. Toluene stores about 6.2 percent of its weight in hydrogen, while methylcyclohexane has a low viscosity that enables it to flow easily underground.
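The quoted hydrogen capacity can be checked from standard molar masses. This back-of-envelope sketch (an illustration, not part of the study's simulations) recovers the roughly 6.2 percent figure as the mass of stored hydrogen relative to the hydrogenated carrier, methylcyclohexane.

```python
# Toluene (C7H8) + 3 H2 -> methylcyclohexane (C7H14, "MCH")
M_H, M_C = 1.008, 12.011  # standard atomic masses, g/mol

m_toluene = 7 * M_C + 8 * M_H    # ~92.1 g/mol
m_h2_stored = 3 * 2 * M_H        # three H2 molecules, ~6.0 g/mol
m_mch = m_toluene + m_h2_stored  # ~98.2 g/mol

# Hydrogen mass as a fraction of the hydrogenated carrier (MCH):
wt_percent = 100 * m_h2_stored / m_mch
print(f"{wt_percent:.1f} wt% hydrogen")  # ~6.2
```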
In one simulation, methylcyclohexane was injected into the reservoir for five months, left for two months, and then extracted over five months. The yearlong cycle was repeated 15 times. Calculations suggest that about three-quarters of the methylcyclohexane could be recovered after each cycle. By the end of the simulation, more than half of the residual oil trapped in the field had also been recovered. This additional oil would offset storage costs, and the researchers estimate that the whole project would generate $70 million more in value than it consumed.
The second LOHC system could store more hydrogen per molecule, but its higher viscosity caused greater resistance during injection and extraction, leading to much poorer performance.
Although recovering residual oil would ultimately lead to downstream CO2 emissions, these would be small compared with the climate benefits offered by large-scale hydrogen use. “Carrier-based storage does not undermine climate goals,” says Hoteit. “Instead, it helps make hydrogen storage deployable at scale today, using existing assets, while supporting a gradual and economically viable transition to a low-carbon energy system.”
The team now plans to extend their study to multi-well reservoir systems, in which several injection and production wells operate simultaneously across a depleted oil field.
Technologies capable of generating freshwater efficiently and cost-effectively are critical for reaching sustainability goals, particularly in arid regions such as the Middle East. Researchers have now developed a polymeric membrane that can desalinate seawater and brines at ambient temperature and pressure[1]. The work was carried out by an international research team led by scientists at King Abdullah University of Science and Technology (KAUST).
“Water scarcity is severe in Saudi Arabia and is reaching unprecedented levels in countries once thought to be safe from such pressures,” says Noreddine Ghaffour, who led the research. “We urgently need to produce freshwater from seawater and brines at any scale, efficiently and cost-effectively, while conserving energy.”
Conventional membrane-based technologies such as reverse osmosis are most cost-effective at very large scales and depend on sophisticated energy-recovery systems. Even under these conditions, treating highly concentrated brines remains difficult because of the extreme pressures required. Membrane distillation offers an alternative approach, but it typically relies on elevated temperatures to vaporize water before it passes through a membrane and condenses as freshwater.
The membrane developed by Ghaffour’s team consists of an ultrathin polymeric film supported by a porous substrate and is designed for membrane distillation at low temperatures. The film contains sub-nanometer-sized pores and has a highly water-repellent, or superhydrophobic, surface, allowing the process to operate under ambient pressure.
In the system, warm saline water at approximately 25 °C flows along one side of the membrane, while cooler water at 20 °C flows along the other. This small temperature difference creates a natural driving force that pulls only water vapor across the membrane, where it condenses as pure water, leaving salt and other contaminants behind.
“What distinguishes our membrane is its ultrathin separating layer, only a fraction of a micrometer thick, combined with a highly water-repellent surface,” says project team member Mohamed Obaid Awad. “This superhydrophobicity is crucial because it prevents liquid seawater from entering and flooding the membrane pores.”
At the nanoscale, the membrane remains ‘air-filled’, ensuring that only water vapor can pass through. Water does not need to boil to evaporate; even at room temperature some water molecules naturally escape into the vapor phase. Here, the extremely small pores enhance this effect by increasing local vapor pressure, facilitating evaporation.
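The size of this ambient-temperature driving force can be estimated with the standard Antoine equation for the saturation vapor pressure of water. The sketch below is a textbook illustration using the study's 25 °C and 20 °C operating temperatures; the coefficients are standard published values for water, not parameters from the paper.

```python
def water_vapor_pressure(t_celsius: float) -> float:
    """Saturation vapor pressure of water in mmHg via the Antoine equation.
    Coefficients are standard published values, valid roughly 1-100 deg C."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + t_celsius))

p_feed = water_vapor_pressure(25.0)      # warm saline side, ~23.7 mmHg
p_permeate = water_vapor_pressure(20.0)  # cooler side, ~17.5 mmHg

# This vapor-pressure difference is the driving force that moves water
# vapor through the air-filled pores, even at ambient temperature.
print(f"driving force: {p_feed - p_permeate:.1f} mmHg")
```

Even a five-degree difference near room temperature yields a vapor-pressure gap of several mmHg, which is why no boiling or applied pressure is needed.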
The membrane also demonstrates high rejection of salt and acts as a total barrier to boron and other contaminants, preventing dissolved ions from entering the vapor pathway and improving performance in realistic desalination conditions.
“Salt ions and other dissolved species cannot evaporate under these conditions and are therefore excluded,” says Sofiane Soukane, a member of the research team. “Also, the membrane material is resistant to chlorine-based oxidants, which improves durability and long-term stability.”
Moving beyond the laboratory, the team is currently testing the technology in a pilot plant at KAUST. “Lessons from the pilot study will guide how we scale up membrane production,” says Ghaffour. “We have several industrial partners keen to get involved.”
Smart biomedical devices are transforming modern healthcare, using skin-mounted sensors to capture in-depth health information directly from the body. As clinicians increasingly use biosensing devices to guide patient care, accurate and reliable signal acquisition is critical.
A new system that can rapidly detect when the electrodes of devices such as heart monitors start to detach from the skin has been developed by a team at KAUST[1]. Unlike indirect electrode monitoring techniques, the new system directly measures electrode integrity by evaluating digital signal quality between electrodes.
“Traditional methods for checking whether medical electrodes are properly attached, based on impedance or indirect monitoring, were developed many years ago and assume relatively stable conditions,” explains Rajat Kumar, a student in the lab of Ahmed Eltawil, who led the research.
But in real life, as people move and sweat, electrodes can partially loosen or intermittently lose skin contact, which traditional indirect monitoring methods can struggle to detect.
“This is especially problematic for home-based wearable medical devices, where poor electrode contact may go unnoticed for long periods, leading to inaccurate data being recorded and relied upon,” says Abdelhay Ali, a postdoc in Eltawil’s group.
To develop smarter electrode connection monitoring, the team rethought the role of the body itself, Eltawil says. “Instead of treating the body as something that interferes with measurements, we considered whether it could be part of the solution.”
Tiny electrical signals can safely pass through the body, previous research has shown. “We realized that if electrodes could exchange digital signals through the body, then the quality of that communication would directly reflect how well the electrodes were attached,” Kumar says.
The team tested the concept by building a system around a custom chip designed and developed at KAUST. The chip sends and receives tiny digital signals between electrodes placed across the body. A small processing unit then analyzes how well each signal is received. “Clear signals indicate good electrode skin contact; small errors indicate weakening contact; and missing signals indicate disconnection,” Ali explains.
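The classification logic can be sketched as follows. Everything here (the test pattern, the bit-error-rate thresholds, and the wording of the categories) is hypothetical and for illustration only, not the actual chip's parameters.

```python
def classify_contact(sent: str, received: str) -> str:
    """Map the bit-error rate of a known test pattern to a contact state.
    Thresholds are illustrative only."""
    if not received:                      # nothing arrived: electrode detached
        return "disconnected"
    errors = sum(s != r for s, r in zip(sent, received))
    ber = errors / len(sent)
    if ber == 0:
        return "good contact"
    if ber < 0.1:
        return "weakening contact"
    return "poor contact"

pattern = "1011001010110010"  # hypothetical 16-bit test pattern
print(classify_contact(pattern, pattern))             # good contact
print(classify_contact(pattern, pattern[:-1] + "1"))  # weakening contact
print(classify_contact(pattern, "0" * 16))            # poor contact
print(classify_contact(pattern, ""))                  # disconnected
```

The key design idea is that the body itself is the communication channel, so any degradation of the electrode-skin interface shows up directly as bit errors in the received pattern.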
A final electronic component manages the electrode checking sequence, ensuring the system can automatically monitor multiple electrodes without interrupting medical measurements.
The team tested the system using electrode pairs attached to human skin, and showed it could clearly differentiate electrodes that were firmly attached, partially loose, intermittently losing contact, or completely disconnected. “Importantly, the system detected the early signs of contact degradation that traditional methods often miss,” Kumar says.
“The system’s very low power consumption should enable practical integration with wearable medical devices that need to run continuously for long periods,” adds Ali. “These components form a compact and efficient solution that can be added to existing medical devices with minimal changes.”
The team now plans to turn their laboratory prototype into a fully integrated, single-chip system that can monitor many electrodes simultaneously, for use in clinical devices such as multi-lead heart monitors.
“Ultimately, our goal is to translate this KAUST-developed technology into practical medical devices that are more reliable, more trustworthy, and better suited for continuous health monitoring in the clinic and at home,” Eltawil says.