
Advanced carbon fiber materials could be used in applications from wind turbine blades to biomedical implants following the development of a low-cost carbon fiber feedstock.

The carbon fibers were spun by members of KAUST’s Clean Energy Research Platform from synergistic blends of the low-value heavy oils left over from crude oil refining[1]. The work could not only facilitate broader carbon fiber uptake but also create sustainable new uses for residual oils as the world transitions to alternative energy systems.

“Crude oil is a resource with immense potential beyond fuels,” says Edwin Guevara Romero, a researcher in the labs of Mani Sarathy, who led the work. “Using oil residues as feedstocks for carbon materials is an innovative, high-value application of oil-derived resources, paving the way for economic diversification,” he says.

Carbon fiber is in increasing demand across many industries due to its exceptional properties, including high mechanical strength and durability, low weight, thermal stability, and electrical conductivity. One limiting factor is its high cost, which can largely be attributed to the expensive carbon precursor, polyacrylonitrile (PAN), used to make it.

PAN’s high cost has prompted a search for alternative feedstocks. “Oil residues could offer a cost-effective and abundant alternative,” Guevara says. For their research, the team targeted the heaviest, most complex – and traditionally, hardest to process – components of residual oil, called asphaltenes and resins.

Previously, asphaltenes have been trialed as carbon fiber feedstocks. However, efforts to spin these materials into fibers were limited by their tendency to break, and the carbon fiber yield from the final carbonization heat treatment step was relatively low.

“Previous studies of oil residues have suggested that resins stabilize asphaltene molecules, highlighting their strong molecular affinity,” Guevara says. “This led us to hypothesize that blending asphaltenes with resins could create a synergistic feedstock for carbon fiber production.”

The team showed that the blend offered several advantages over asphaltenes alone as carbon fiber feedstocks. It had better flow characteristics and could be spun at a lower temperature, reducing energy consumption. The team also observed fewer strand breakages during spinning and attained a higher yield after the carbonization step. “This improves the viability of the process by maximizing the conversion of the precursor material into the final carbon fiber product,” Guevara says.

The resulting carbon fibers were also of high quality. “The properties of our fibers are comparable to those of ‘isotropic carbon fibers’, which are commonly used in applications requiring moderate-to-high mechanical performance,” Guevara says.

“Traditionally, oil residues have been used in very low-value applications such as road surfacing. By extracting the heaviest asphaltenes and resins for high-value carbon fiber manufacturing, the remaining residual oils can also be more easily processed to produce cleaner fuels or valuable small molecules, further improving the economics of the process,” Sarathy adds.

The researchers are now fine-tuning the residual oil blend to maximize the chemical interaction between the components to improve the fibers’ physical attributes further. They are also collaborating with Saudi Aramco to scale up the process. “Our primary objective is to generate high-performance carbon fibers and then scale towards an industrial process,” Sarathy says. “This will give Saudi Arabia a unique, economically competitive product that can be marketed globally.”

Imagine editing a manuscript, only to find random paragraphs from other books slipping into the text. That is the risk revealed in a new study of the celebrated genome-editing tool called CRISPR-Cas9: various unintended snippets of DNA sometimes get inserted in the edited region where they do not belong.

KAUST scientists uncovered this side effect in a detailed analysis of human embryonic stem cells edited with different types of donor DNA, whereby helper sequences provide a template for fixing the genome during the CRISPR process. They found that the genome editing platform, while efficient in cutting DNA, can sometimes leave behind large fragments of genetic material that were not part of the plan[1].

These stray pieces of DNA included repetitive sequences, regulatory elements and chunks from other parts of the genome. Though relatively rare — occurring in less than 1% of all edited cells — these large insertions could have significant consequences, especially in medical applications where accuracy is critical.

“This unexpected finding highlights the complexity of Cas9-editing outcomes,” says KAUST bioscientist Mo Li, who led the study.

Unlike large deletions, which typically result in the loss of gene functions, large insertions can lead to more complex and unpredictable genetic disruptions, including the activation of cellular pathways linked to cancer and other diseases. “While we cannot directly link these insertions to cancer risk yet, the presence of such sequences raises significant safety concerns for clinical applications,” Li says.

Li and his colleagues discovered that the type of donor DNA used plays a significant role in these outcomes. Linear donor DNA, for example, was more prone to causing unintended donor insertions compared to circular templates. The DNA repair process itself, which relies on the cell’s natural machinery, may also introduce these fragments when it mistakenly incorporates surrounding DNA into the repair site.
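As a purely illustrative sketch (not the authors’ analysis pipeline), the comparison between donor types can be pictured as counting how often edited alleles carry far more inserted DNA than intended; every number and cutoff below is hypothetical.

```python
# Illustrative sketch only: NOT the authors' analysis pipeline.
# Compares, with made-up numbers, the frequency of unintended large insertions
# between linear and circular donor templates.

def large_insertion_rate(insertion_sizes_bp, expected_edit_bp, threshold_bp=50):
    """Fraction of edited alleles whose inserted sequence exceeds the
    intended edit by more than `threshold_bp` (hypothetical cutoff)."""
    unintended = [s for s in insertion_sizes_bp if s > expected_edit_bp + threshold_bp]
    return len(unintended) / len(insertion_sizes_bp)

# Hypothetical per-allele insertion sizes (bp) recovered from long-read sequencing
linear_donor   = [20, 20, 22, 480, 20, 21, 950, 20, 20, 23]
circular_donor = [20, 21, 20, 20, 22, 20, 310, 20, 21, 20]

print("linear donor  :", large_insertion_rate(linear_donor, expected_edit_bp=20))
print("circular donor:", large_insertion_rate(circular_donor, expected_edit_bp=20))
```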

There is a simple fix. By chemically tweaking the donor DNA template with phosphorylation — a process that adds phosphate groups to the two ends of the DNA — the researchers found that the likelihood of unintended large insertions was roughly halved, without compromising the efficiency of the intended edits.

“Phosphorylation of donor DNA is simple, does not alter experimental design or workflow, and reduces the risk of unintended structural variants without affecting gene-editing efficiency,” explains Chongwei Bi, a postdoctoral researcher in Li’s group and the first author of the new study. “However, further validation is required in other settings to confirm the clinical impact and scalability of this approach.”

Without sounding alarms, the study authors emphasize the importance of refining CRISPR technology to make it even safer and more reliable — and of using ultra-sensitive DNA sequencing techniques like those developed in Li’s lab to systematically evaluate gene-editing products for rare unintended events.

“Additionally, designing editing strategies that minimize double-strand breaks or using alternative gene-editing tools without DNA breaks could further reduce these risks,” Li says. As gene editing moves into mainstream use, he points out, understanding and managing these subtleties will be crucial.

“During a research trip to Palmyra Atoll, we made a surprising discovery: a whole network of rhodolith beds[1],” says Lena Li, a recent graduate of KAUST. “No one knew rhodolith beds even existed in the tropical central Pacific.”

Despite years of research, there are still many mysteries to uncover in and around the world’s coral reefs. Rhodolith beds are one example. Scientists have limited understanding of these calcareous nodules made by coralline red algae, which accumulate in shallow seas close to coral reefs. 

Rhodoliths are not fixed to the seabed like corals; rather they accumulate over time into ‘beds’ made up of separate nodules, some of which are living, and others dead. These distinctive, pink-colored aggregations provide a unique, highly diverse habitat for multiple creatures in shallow seas. Rhodolith beds may serve as an indicator of wider reef resilience, because they provide a foundation for reef regeneration following periods of stress. However, scientists know little about how rhodoliths respond to stressors such as warming oceans.

Following their exciting discovery, the team improvised to gather as much data as they could on location.

“Palmyra Atoll is located within the largest marine protected area on Earth,” says Li, who was supervised by faculty member Maggie Johnson. “It is as close to pristine and untouched as we can get, with no permanent human population and no local stressors. This allows us to get an idea of a baseline, healthy rhodolith ecosystem structure and function, without human activities clouding our view.”

The area where they found the rhodolith beds was quite inaccessible and had not yet been explored thoroughly. The team had to maneuver a small boat through tiny cuts between islands and could only reach the beds when the tides were just right.

“We hadn’t come prepared to study rhodoliths, so we used the equipment we had – namely myself, a pair of fins, a snorkel and a handheld GPS unit on a float,” says Johnson. “I swam around the edges of all the beds and further out to survey their full extent, and took a GPS point every two fin kicks. Exhausting, but worth it!”
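A back-of-the-envelope way to turn such a perimeter of GPS fixes into a bed area, shown here only as a sketch with hypothetical coordinates rather than the team’s actual mapping workflow, is to project the points locally and apply the shoelace formula:

```python
# Illustrative sketch: estimating a rhodolith bed's area from a perimeter of
# GPS fixes (hypothetical coordinates), not the team's actual mapping workflow.
import math

def polygon_area_m2(latlon_points):
    """Shoelace formula on a local equirectangular projection of (lat, lon) points."""
    lat0 = math.radians(sum(p[0] for p in latlon_points) / len(latlon_points))
    R = 6_371_000  # mean Earth radius in meters
    xy = [(math.radians(lon) * R * math.cos(lat0), math.radians(lat) * R)
          for lat, lon in latlon_points]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(xy, xy[1:] + xy[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# Hypothetical perimeter of one bed (roughly 30 m x 35 m)
track = [(5.8840, -162.0780), (5.8840, -162.0777), (5.8837, -162.0777), (5.8837, -162.0780)]
print(f"{polygon_area_m2(track):.0f} m^2")
```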

The team mapped 15 different rhodolith beds, collectively covering 1.5 hectares. Together with the surrounding coral reefs, the whole ecosystem stretched to around 15 hectares. The beds host an ecosystem distinct from the neighboring coral reef, providing food and shelter for large marine mammals, alongside numerous fish, mollusks, echinoderms and sponges.

The team also found an abundance of cryptic invertebrates there, compared with the surrounding coral rubble.

“Rhodoliths provide many of their ecosystem services because of their structural complexity, which is dependent on both their species and local environmental conditions,” explains Li. “Integrating molecular, morpho-anatomical and environmental data allowed us to understand some of the factors driving their complexity.”

“We also discovered two new rhodolith species,” says Li.

The team are now further sequencing their samples in collaboration with researchers in Italy. They have also been working closer to home, examining rhodoliths and encrusting coralline algae in the central Red Sea for the first time.

“Rhodoliths are particularly sensitive to both local and global human impacts, so it is vital that we understand them and establish effective conservation policies to protect them,” concludes Johnson.

The glacier-fed streams (GFS) of the world’s highest mountains contain a diverse range of microorganisms but until recently, little was known about them.

A remarkable biodiversity in these streams has been revealed in a major study by a large international team led by the Swiss Federal Institute of Technology in Lausanne (EPFL), and including KAUST researchers Ramona Marasco and Daniele Daffonchio[1].

The study involved collecting and analyzing samples from 152 GFS across Earth’s major mountain ranges, as part of the Vanishing Glaciers project.

Glacier-fed streams are largely restricted to mountain tops, where they initiate the flow of water for some of the world’s largest rivers. Glacier shrinkage due to climate change affects these streams, putting their ecosystem services and biodiversity at risk. “The shrinking of mountain glaciers in many parts of the world leads to decreasing freshwater supply to downstream areas,” says Daffonchio.

Microbial biofilms (the microbial communities attached to streambed sediments) that dominate life in the GFS have an important role in regulating the glacier and river biogeochemical cycles and the downstream water cycle, but very little is known about their ecology, biology and functions.

“This research helps to fill the gap by presenting a biogeographical and ecological analysis of the GFS microbiomes from the Earth’s major high mountain ranges,” says Daffonchio.

Using next-generation sequencing (NGS) technology, Marasco and the team sequenced the bacteria to create the dataset needed to describe the bacterial diversity. A challenge was ensuring sufficient sequencing depth and maintaining high-quality standards throughout the process.

“This was critical to guarantee that we could compare data from the different sequencing and libraries, as well as to define a final dataset that could accurately characterize the bacterial diversity and ecology of the GFS,” Marasco explains.

“We were also dealing with a diverse and complex environmental sample, with varying biomass (or microbial load) so we needed to ensure that the DNA sequencing was comprehensive enough to capture rare and abundant taxa alike,” she says.
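One common way to check whether sequencing depth is sufficient, sketched below with made-up data and not necessarily the exact quality-control procedure used in the study, is to subsample reads at increasing depths and see whether the number of detected taxa levels off:

```python
# Illustrative sketch of a rarefaction-style depth check (hypothetical data),
# not the exact quality-control pipeline used in the study.
import random

random.seed(0)

# Hypothetical sample: taxon labels drawn with a long-tailed abundance profile
community = [f"taxon_{i}" for i in range(200) for _ in range(max(1, 200 - i))]

def taxa_detected(reads, depth):
    """Number of distinct taxa observed in a random subsample of `depth` reads."""
    return len(set(random.sample(reads, depth)))

# If the taxon count stops climbing as depth increases, rare taxa are being captured.
for depth in (100, 1_000, 10_000):
    print(depth, taxa_detected(community, depth))
```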

The high-quality data generated enabled the researchers to construct a robust and comprehensive picture of the bacterial communities associated with these fragile ecosystems.

GFS are characterized by near-zero temperatures, low nutrient concentrations, and periods of either diminished light or strong UV radiation. Given these inhospitable conditions, the scientists were surprised to find such a wealth of microbial biodiversity.

The survey uncovered bacteria from 44 phyla, including many previously unclassified, with genetic diversity decreasing with elevation. It showed a bacterial microbiome that is taxonomically and functionally distinct from other cryospheric (frozen) microbiomes, characterized by both high regional specificity and local uniqueness.

“The GFS harbor a wide variety of bacterial lineages with many not previously characterized, highlighting their resilience and adaptability in extreme environments,” says Marasco.

The results showed that GFS microbial communities have a remarkable niche specialization. Certain taxa appear to be specific to particular environmental conditions: more than half were specific to a single mountain range and some were unique to a single stream, while a few were both cosmopolitan and abundant.
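The kind of occupancy bookkeeping behind such statements can be sketched simply; the presence/absence table below is invented for illustration and is not the study’s dataset.

```python
# Illustrative sketch (hypothetical presence/absence data): classifying taxa as
# endemic to one mountain range, regional, or cosmopolitan by their occupancy.
from collections import defaultdict

# taxon -> set of mountain ranges where it was detected (made-up data)
occurrence = {
    "ASV_001": {"Alps"},
    "ASV_002": {"Alps", "Caucasus"},
    "ASV_003": {"Alps", "Caucasus", "Himalaya", "Andes", "Pamir"},
    "ASV_004": {"Andes"},
}

def classify(ranges, total_ranges=5):
    if len(ranges) == 1:
        return "endemic to one range"
    if len(ranges) == total_ranges:
        return "cosmopolitan"
    return "regional"

counts = defaultdict(int)
for taxon, ranges in occurrence.items():
    counts[classify(ranges)] += 1
print(dict(counts))
```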

“This diversity and specificity provide new insights into the complex ecological interactions that drive these ecosystems,” says Marasco.

The team has now compiled the first global atlas of microbes in glacier-fed streams, which provides a baseline for future studies on the vanishing GFS ecosystem.

“We show that GFS microorganisms are distinct from the other microbial communities of the cryosphere and present high biodiversity and strong biogeographical patterns. The large dataset and the information presented in the study provide a novel reference on these important but vanishing microbiomes,” concludes Daffonchio.

Accurate assessment of the land surface damage (such as small-scale fracturing and inelastic deformation) from two major earthquakes in 2023 can help scientists assess future earthquake hazards and therefore minimize risk to people and infrastructure. However, attaining precise extensive measurements in earthquake zones remains challenging.

The two earthquakes that struck on 6 February 2023 were devastating: they were of magnitude 7.8 and 7.6 and occurred in quick succession near the border between Syria and Turkey. They caused widespread infrastructure destruction and resulted in tens of thousands of deaths across multiple provinces.

Using the two Kahramanmaraş earthquakes as a case study, KAUST researchers have demonstrated that the surface damage and inelastic deformation away from main faults probably extend more widely than previously thought[1].

“The Kahramanmaraş earthquakes offered us a unique opportunity to gain insights into details of the co-seismic surface displacement,” says Jihong Liu, postdoctoral fellow in KAUST’s Crustal Deformation and InSAR Group, who carried out the study in collaboration with colleagues from KAUST and IPGP in France. “Our results suggest that the width of the crustal damage zone can reach up to five kilometers from the fault itself, rather than just a few hundred meters as suggested by previous case studies.”

Large earthquakes occur when two tectonic plates that are stuck together move suddenly, instead of sliding steadily past one another a few centimeters per year. This sudden slip, which yields meter-scale movement within seconds, causes extensive crustal damage. The damage occurs not only in the immediate vicinity of the plate boundary or fault, but also as “off-fault damage” (OFD) away from the main fault. Measuring OFD accurately is a critical element of estimating fault slip rates and earthquake cycles, yet most case studies of major earthquakes appear to have significantly underestimated OFD.

The team used image data from Synthetic Aperture Radar (SAR) satellites, acquired before and after the two earthquakes, to quantify the OFD and 3D surface displacement that the events caused.

“Radar satellites have transformed the study of earthquake zones, enabling us to visualize and analyze large areas in depth without requiring field observations,” says Liu.

Liu developed the SM-VCE method, an advanced co-seismic 3D surface displacement measurement technique, which the team used to precisely determine the 3D motion, OFD and surface deformation across a wide area around the two earthquakes. They showed that OFD consumed up to 35% of the co-seismic displacement, suggesting that slip rates in different parts of the world could be underestimated by as much as one-third. They also found that geometrically complex fault sections experienced a higher level of OFD than simple straight fault sections.
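To see why a 35% off-fault share translates into slip rates underestimated by up to one-third, a simple accounting helps (the displacement value below is hypothetical; the 35% fraction is the study’s figure):

```python
# Illustrative accounting of the 35% off-fault share; the total displacement
# value is hypothetical, the 35% fraction comes from the study.
total_coseismic = 4.0                    # meters of relative motion across the fault zone (hypothetical)
ofd_fraction = 0.35                      # share of displacement taken up by off-fault damage
on_fault_slip = total_coseismic * (1 - ofd_fraction)

print(f"slip visible on the fault trace   : {on_fault_slip:.2f} m")
print(f"motion hidden as off-fault damage : {total_coseismic - on_fault_slip:.2f} m")
# Reading only the fault trace would underestimate the total motion by about one-third.
```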

“Our results hold implications for geologic measurements of fault slip rates. If the OFD is larger than previously thought, it means that the plate boundary may be moving faster and could trigger more large earthquakes than anticipated,” says Sigurjón Jónsson, who led the team. “This increases the estimated earthquake hazard, with serious implications for planned infrastructure, buildings and decision making. Accurate OFD should also be factored into computer models of earthquake zones.”

“We will conduct OFD measurements on other typical earthquake cases to further validate and support the findings of this study,” concludes Liu.

Phytoplankton communities form the basis of the marine food web and are essential for nutrient cycling in the world’s oceans. However, both natural and human activities can easily disrupt the balance and functioning of these essential microalgae communities. Understanding how phytoplankton respond to external pressures over time can provide useful insights for safeguarding these vital ecosystem components in the future.

Now, KAUST researchers, in collaboration with King Fahd University of Petroleum and Minerals (KFUPM), have studied sediment cores taken from the western Arabian Gulf to examine centennial and decadal trends in the region’s phytoplankton communities[1].

“People may be surprised to learn that phytoplankton produce nearly half of the oxygen on the planet,” says Sdena Nunes, a postdoctoral researcher at KAUST, who worked on the study with KAUST faculty Carlos M. Duarte and Susana Agusti. “However, when agricultural run-off, urban wastewater discharge and industrial effluents enter the marine environment, the water becomes enriched with nutrients such as phosphorus and nitrogen. This ‘eutrophication’ process fuels the rapid growth of phytoplankton, leading to harmful algal blooms (HABs).”

As these blooms decay, the decomposition process consumes large amounts of oxygen, creating hypoxic zones, or “dead zones,” where marine life cannot survive. Instead of producing oxygen and feeding wider ecosystems, the phytoplankton suffocate habitats.

HABs also pose health risks to humans. Scientists are concerned about the visible rise in eutrophication and HABs in the Arabian Gulf in recent years.

To learn more about the historical trends in phytoplankton in the region, the team analyzed the phytoplankton pigments present in the layers of sediment that have built up over centuries on the seafloor. 

“Pigments are colored compounds present in all photosynthetic organisms, including phytoplankton. They are naturally preserved in seafloor sediments after the phytoplankton die,” says Nunes. “These pigments can indicate what species were present and their prevalence over different decades.”
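In practice, this amounts to turning a dated sediment core into a time series of pigment markers. The sketch below uses invented values, not the study’s measurements, with fucoxanthin standing in as a diatom marker and zeaxanthin as a cyanobacteria marker:

```python
# Minimal sketch with hypothetical values: tracking diagnostic pigments down a
# dated sediment core. Not the study's actual data or analysis pipeline.
core = [
    # (approximate year of layer, chlorophyll-a, fucoxanthin [diatom marker], zeaxanthin [cyanobacteria marker])
    (1920, 0.8, 0.2, 0.6),
    (1960, 1.0, 0.3, 0.6),
    (1990, 1.6, 0.7, 0.3),
    (2020, 2.4, 1.2, 0.2),
]

for year, chla, fuco, zea in core:
    print(f"{year}: chl-a={chla:.1f}, diatom/cyanobacteria marker ratio={fuco / zea:.1f}")
```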

The team’s results show clear evidence of widespread eutrophication in the Arabian Gulf, which has intensified since the 1980s. Chlorophyll-a concentrations increased in all the sediment cores over the last 40 years, reflecting a steady rise in phytoplankton abundance. Over time, the phytoplankton community has shifted from cyanobacteria and prasinophytes, which dominated in the early 20th century, to diatoms and dinoflagellates.

“These changes align with the region’s rapid urbanization and industrialization, and with rising nutrient inputs into the Gulf’s waters,” says Nunes. “The frequency and intensity of HABs have increased rapidly since the early 2000s, and there are also seasonal phytoplankton variations that are influenced by other factors, such as coastal wastewater discharges.”

Cyanobacteria experienced a sharp decline during the 1990s, coinciding with oil spills from the Gulf War. Oil pollutants inhibit the growth, metabolism and photosynthesis processes of cyanobacteria.

“These findings underscore the widespread and ongoing impact of eutrophication — compounded by anthropogenic pressures — on the region’s marine ecosystems,” says Agusti. “By mapping these changes on a centennial scale, we’ve gained insights into how phytoplankton dynamics respond to environmental pressures.”

The team hope these findings can guide the development of national and regional monitoring and management strategies to reduce nutrient in-flows and control pollution in the Arabian Gulf.

A better understanding of how tiny airborne pollutant particles, or aerosols, promote sulfate formation could help improve the management of air quality, a KAUST-led team[1] has shown. The findings could improve models used to predict and reduce secondary pollution.

Aerosols pose serious environmental and health threats, especially in Asia and the Middle East and North Africa region. Primary aerosols are generated by outdoor and household sources, such as power plants, industries, agriculture, automobiles, wildfires, dust storms, cooking activities, and incense burning. Secondary aerosols result from chemical reactions in the atmosphere.

Together, primary and secondary aerosols cause respiratory and cardiovascular disorders that lead to seven million premature deaths annually. Air pollution in Saudi Arabia, with one of the world’s highest aerosol concentrations, has shortened life expectancy by almost 1.5 years.

Aerosols from burning biomass represent 60 to 85 percent of the total primary organic aerosols emitted annually and are likely to increase as wildfires become more frequent and severe with intensifying climate change. They also absorb sunlight and worsen haze, which accelerates the warming of the Earth’s atmosphere.

Sulfate is a major aerosol component that results from the oxidation of sulfur dioxide during haze events. The Middle East emits more than 15 percent of global sulfur dioxide. Conventional air quality models have attributed this reaction to oxidants present in the gas phase, yet fail to explain the elevated sulfate levels observed when haze occurs.

Some molecules in the wildfire smoke absorb light and can transition to long-lived high-energy states called triplet excited states when exposed to light. In a triplet state, the molecules show unique reactivity and can initiate reactions with other compounds.

Inspired by the ability of these brown carbon molecules to form reactive species inside atmospheric particles, a team led by Chak Chan and postdoctoral researcher Zhancong Liang has now proposed that triplet states generated in biomass-burning organic aerosols account for the ‘missing sulfate’.

The researchers assessed the photochemical reactivity of the aerosol particles in the multiphase oxidation of sulfur dioxide. They burned typical biomass to produce particles and then used an aerosol flow reactor to better mimic the reactions of submicron particulates in the atmosphere.

The team discovered that biomass smoke contained photosensitizers and generated triplet states that were key in stimulating the oxidation of sulfur dioxide into sulfate in the aerosol particles. The aerosol particles also displayed oxidation rates three orders of magnitude higher than those in bulk solution.

Simulations further indicated that triplet-state-driven sulfate formation enhances sulfate levels in wildfire-prone regions that are rich in biomass-burning organic aerosols. “Incorporating our kinetic parameters into atmospheric models can improve predictions of secondary pollution and help improve understanding of the environmental impact of global warming,” Liang says.
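Schematically, incorporating such kinetic parameters means adding a particle-phase, triplet-driven production term alongside the conventional gas-phase chemistry. The box-model sketch below uses made-up rate constants, not the study’s parameters, purely to illustrate the idea:

```python
# Schematic box-model sketch (made-up rate constants, not the study's parameters):
# sulfate production with and without a particle-phase, triplet-driven pathway.
k_gas = 1.0e-6       # effective gas-phase SO2 -> sulfate rate (1/s), hypothetical
k_triplet = 5.0e-6   # additional triplet-mediated rate in biomass-burning particles (1/s), hypothetical
so2 = 40.0           # SO2 level in arbitrary units, assumed constant for simplicity
hours = 6
dt = 60.0            # time step in seconds

sulfate_conventional = sulfate_with_triplets = 0.0
for _ in range(int(hours * 3600 / dt)):
    sulfate_conventional += k_gas * so2 * dt
    sulfate_with_triplets += (k_gas + k_triplet) * so2 * dt

print(f"conventional model : {sulfate_conventional:.2f}")
print(f"with triplet path  : {sulfate_with_triplets:.2f}")
```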

Next, the researchers plan to develop a predictive framework for the reactivity of various biomass-burning organic aerosols using their chemical structures. “The diversity and mechanistic ambiguity of organic aerosols make it complex to parameterize and predict triplet-state-driven reactions,” Liang says, noting the need to use samples with diverse compositions to investigate interactions with different atmospheric molecules, such as volatile organic compounds.

The team also collaborates with Imed Gallouzi, Chair of the KAUST Center of Excellence for Smart Health, on health impacts of atmospheric particles.

A mathematical proof established more than 140 years ago provided the key for a KAUST-led team to develop a computational method for accurately simulating complex biophysical processes, such as the spread of disease and the growth of tumors[1].

Reaction-diffusion equations are widely used to model the dynamics of complex systems by providing a macroscopic mathematical description of the interplay between local interactions and random motion.

“Reaction-diffusion equations play a crucial role in many clinical applications, including computational epidemiology — an interdisciplinary field that combines mathematics and computational science — to enhance our understanding and control of the spatiotemporal spread of diseases in real time,” says Rasha Al Jahdali from the KAUST research team.

“This discipline is vital for informing health policy decisions worldwide. Since studying epidemiological phenomena is often complex and sometimes unfeasible, it is important to develop efficient, robust, and predictive algorithms for reaction-diffusion processes.”

These equations capture real physical mechanisms in the form of continuous ‘partial differential’ equations that describe rates of change over space and time. While mathematically ideal, such equations can be very difficult to solve for the purposes of simulating complex biophysical processes because computers require numerical methods involving discrete calculations of real numbers.

“Discretizing continuous reaction-diffusion equations using numerical schemes allows us to use computers to solve them numerically,” says Al Jahdali. “This involves approximating continuous variables and their derivatives or rates of change at specific points in space and time. The problem is that existing discretization methods often lack the ability to maintain stability and accuracy when simulating complex, nonlinear interactions like those inherent in biological systems.”
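For concreteness, a generic reaction-diffusion equation and the simplest textbook discretization of it (forward Euler in time, central differences in space) look like this; the study’s schemes are higher-order and provably stable, unlike this illustrative example:

```latex
% Generic reaction-diffusion equation (continuous form):
\[
\frac{\partial u}{\partial t} = D\,\nabla^{2} u + f(u)
\]
% and a simple illustrative discretization on a 1D grid, where u_i^n approximates
% u at grid point i and time step n (forward Euler in time, central differences in space):
\[
u_i^{\,n+1} = u_i^{\,n} + \Delta t \left( D\,\frac{u_{i+1}^{\,n} - 2u_i^{\,n} + u_{i-1}^{\,n}}{\Delta x^{2}} + f\!\left(u_i^{\,n}\right) \right)
\]
```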

That means existing numerical schemes often produce ‘unphysical’ results or break down, particularly with spatially varying reaction-diffusion equations, which limits their utility for making reliable predictions in a clinical context.

To solve this problem, Al Jahdali and her colleagues turned to an old mathematical proof, called the Lyapunov direct method, that tests for the existence of a stable solution of a time-varying system without needing to solve the underlying partial differential equations.

“Our computational framework leverages Lyapunov’s direct method to develop fully discrete and ‘smart’ self-adapting schemes of arbitrary accuracy in space and time,” says Al Jahdali. “This new computational framework provides robust and accurate solutions suitable for applications in complex environments, correctly capturing the dynamics of the phenomena, which is crucial for accurately modeling real-world scenarios in biological and clinical applications.”

The researchers applied their method to a commonly used spatially varying ‘susceptible-infected’ model for predicting the endemic spread of disease, showing that their numerical approach stayed consistent with the original physically accurate reaction-diffusion solution. They also used their approach to model the treatment of a tumor in the brain using virotherapy, involving complex interactions in space and time based on known biochemical processes.
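A minimal sketch of the kind of spatial ‘susceptible-infected’ reaction-diffusion model referred to above is given below; it uses toy parameters and a basic explicit scheme, not the team’s structure-preserving method:

```python
# Toy susceptible-infected (SI) reaction-diffusion model on a 1D domain.
# Explicit finite differences with illustrative parameters; NOT the team's
# provably stable, self-adapting scheme.
import numpy as np

nx, dx, dt, steps = 100, 1.0, 0.1, 2000
D_s, D_i, beta, gamma = 1.0, 0.5, 0.3, 0.05   # diffusion, transmission, recovery (made up)

S = np.ones(nx)
I = np.zeros(nx)
I[nx // 2] = 0.1                              # small local outbreak in the middle

def laplacian(u):
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic boundaries

for _ in range(steps):
    infection = beta * S * I
    S += dt * (D_s * laplacian(S) - infection)
    I += dt * (D_i * laplacian(I) + infection - gamma * I)

print(f"infected fraction after {steps * dt:.0f} time units: {I.mean():.3f}")
```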

“Our approach demonstrates superior performance compared to traditional numerical methods for solving reaction-diffusion partial differential equations, which will enable more reliable and physically consistent results,” says Al Jahdali. “This represents a significant step in developing computational methods that are not only theoretically sound, but practically useful for addressing pressing global problems.”

Optoelectronic devices developed at KAUST that behave as either synapses or neurons, and adapt and reconfigure their response to light, could find use in optical neuromorphic information processing and edge computing[1].

The team from KAUST has designed and fabricated metal-oxide semiconductor capacitors (MOSCaps) based on the 2D material hafnium diselenide (HfSe2) that act as smart memories. The devices feature a vertical stack structure where HfSe2 is sandwiched between layers of aluminum oxide (Al2O3) and placed on a p-type silicon substrate. A transparent indium tin oxide (ITO) layer sits on top, allowing light to enter from above.

“When hafnium diselenide nanosheets are integrated into charge-trapping memory devices through solution-based processes, they enable both optical data sensing and retention capabilities,” says graduate student Bashayr Alqahtani. This allows the device to be reconfigured to sense light or store optical data after the light source is removed, depending on the bias conditions. “Our device is based on a two-terminal capacitive memory, which shows promise for device 3D stacking, paving the way for more adaptive and energy-efficient solutions,” she explains.

Experiments show that the charge trapping and capacitance of the MOSCaps change with light conditions, allowing them to serve as smart memories that learn using light. As a result, optical signals can be used to train and alter the response of the device, while electrical bias signals can be used to erase it. In particular, the team has shown that exposure to blue light with a wavelength of 465 nm can reinforce or strengthen the response to red light at 635 nm, a behavior known as associative learning. In the terminology of neuromorphic computing, the MOSCap acts like an artificial synapse showing both long-term potentiation (an increase in synaptic response) and long-term depression (a weakening of synaptic response).
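A purely phenomenological picture of this potentiation and depression behavior can be written as a saturating weight update; the rule and numbers below are invented for illustration and are not a physical model of the HfSe2 device:

```python
# Phenomenological sketch of an optical synapse: light pulses potentiate the stored
# weight and an electrical erase pulse depresses it. The update rules and numbers
# are invented and are not a physical model of the HfSe2 MOSCap.
weight = 0.2                                   # normalized synaptic weight (e.g., relative capacitance shift)

def light_pulse(w, strength=0.1):
    return min(1.0, w + strength * (1.0 - w))  # long-term potentiation, saturating toward 1

def erase_pulse(w, strength=0.3):
    return max(0.0, w - strength * w)          # long-term depression toward 0

for _ in range(5):                             # train with five optical pulses
    weight = light_pulse(weight)
print(f"after optical training: {weight:.2f}")

weight = erase_pulse(weight)                   # an electrical bias partially erases the state
print(f"after electrical erase: {weight:.2f}")
```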

“This work investigates how artificial neurons respond and adapt to optical stimuli — specifically, changes in light intensity, duration and wavelength,” explains Nazek El-Atab, who led the team. “This research is crucial for understanding these smart memories’ capabilities and improving their adaptive learning mechanisms.”

The team used these characteristics to run a simulation predicting that a capacitive synaptic array circuit based on the devices could recognize handwritten digits from the industry-standard MNIST database with an accuracy of 96%. They also showed that, in principle, the adaptive sensing capabilities of a MOSCap neuron could be used for exoplanet detection, correctly spotting, with 90% accuracy, the transient dips in a star’s light intensity as an exoplanet periodically passes in front of it.
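In software terms, the exoplanet task boils down to flagging brief, periodic dips in a star’s light curve. The toy example below, with synthetic data and a simple threshold, illustrates the idea but is not the device-level simulation reported in the study:

```python
# Toy version of the transit-detection task: flag brief dips in a star's light
# curve. Synthetic data and a simple threshold; not the device-level simulation.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000)
flux = 1.0 + 0.001 * rng.standard_normal(t.size)   # noisy baseline brightness
flux[(t % 200) < 10] -= 0.01                       # 1% dip every 200 samples (a "transit")

dips = flux < 1.0 - 0.005                          # threshold halfway into the dip depth
transit_starts = t[dips & ~np.roll(dips, 1)]       # first sample of each dip
print("detected transit start times:", transit_starts)
```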

“These devices demonstrate in-memory light sensing capabilities that make them ideal for edge computing applications,” comments El-Atab. “They show particular promise for artificial intelligence applications where rapid processing and storage of large data volumes is essential, especially when it comes to optical data. The range of potential applications can be wide — from autonomous vehicles to virtual reality and IoT systems.”

Separating and purifying closely related mixtures of molecules is among the most energy-intensive processes in the chemical industry, and contributes to its globally significant carbon footprint. In many cases, traditional industrial separation protocols could be replaced with the latest energy-efficient nanofiltration membranes — but testing the best separation technology for each industrial use case is slow and expensive.

A computational tool that can reduce this work by comparing separation technologies for a given chemical mixture, and predict the most efficient and inexpensive technology for the task, has been developed by researchers at KAUST[1].

“We are able to predict the separation of millions of molecules relevant across industries such as pharmaceuticals, pesticides, and pigments,” says Gyorgy Szekely, who led the research.

Commercial nanofiltration membranes can slash the energy cost of chemical separations, compared to traditional heat-driven methods such as evaporation and distillation, by selectively filtering out the desired product. Nanofiltration does not work in all cases, however. “Predicting the separation performance of membranes for different chemical mixtures is a notoriously difficult challenge,” Szekely says.

To develop their overall chemical separation technology selection tool, Szekely and his team compiled a collection of nearly 10,000 nanofiltration measurements from the scientific literature, focusing on commercially available membranes.

The researchers used machine learning to analyze the data, generating an AI model able to predict the nanofiltration performance for untested chemical mixtures. This information was combined with mechanistic models to estimate the energy and cost requirements of a chemical separation if it was performed by nanofiltration, evaporation or extraction.
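Conceptually, the tool couples a data-driven predictor of nanofiltration performance with mechanistic-style cost estimates and then selects the cheapest feasible option. The sketch below is a deliberately simplified stand-in, with invented numbers and formulas, not the published model:

```python
# Highly simplified sketch of the selection idea: a (here, trivial) data-driven
# predictor of nanofiltration performance feeds mechanistic-style energy estimates,
# and the cheapest feasible technology is chosen. All numbers and formulas are
# invented for illustration.

def predicted_rejection(molecular_weight_gmol):
    """Stand-in for the machine-learning model: bigger solutes are better rejected."""
    return min(0.99, molecular_weight_gmol / 500)

def energy_kwh_per_m3(technology, molecular_weight_gmol):
    if technology == "nanofiltration":
        rej = predicted_rejection(molecular_weight_gmol)
        return None if rej < 0.90 else 2.0          # infeasible below 90% rejection
    if technology == "evaporation":
        return 25.0                                 # heat-driven, roughly constant
    if technology == "extraction":
        return 8.0
    raise ValueError(technology)

solute_mw = 480  # g/mol, hypothetical pharmaceutical intermediate
options = {tech: energy_kwh_per_m3(tech, solute_mw)
           for tech in ("nanofiltration", "evaporation", "extraction")}
feasible = {t: e for t, e in options.items() if e is not None}
best = min(feasible, key=feasible.get)
print(options, "->", best)
```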

“Our novel hybrid modelling approach enables us to evaluate millions of potential separation options, to identify the most suitable and energy-efficient technology for any given chemical separation task,” says Gergo Ignacz, a member of Szekely’s team. “This will allow industry to make better-informed decisions that significantly reduce operating costs, energy consumption, and carbon emissions,” he adds.

The predictive power of the hybrid model was experimentally validated using three industrially relevant case studies, Szekely says. “We found an excellent match between the values that our model predicted, and measured values for these processes,” he says.

The researchers showed that the carbon dioxide emissions of pharmaceutical purifications could be reduced by up to 90 percent by selecting the most efficient technology for the task. Overall, the energy consumption and carbon dioxide emissions of industrial separations could be cut by an average 40 percent using this method, they estimated.

One surprising finding was the stark difference between the best method and the other two methods for any given separation, Ignacz says. “For most cases, either nanofiltration, evaporation, or extraction emerged as a clear winner, with one method significantly outperforming the others based on economic and energy metrics, leaving little middle ground,” he says.

Although the predictive power of the model proved to be high, there is still room for improvement and further validation, Szekely says. “Our tools are available as open access through the OSN Database at www.osndatabase.com, and we encourage the community to use them,” he says.