Hydrogen is a clean-burning fuel that could help to replace fossil fuels in transportation, the chemicals industry, and many other sectors. However, hydrogen is also an explosive gas, so it is essential to have safety systems that can reliably detect leaks in a variety of circumstances.
KAUST researchers have invented a robust, highly sensitive, low-cost hydrogen sensor that outperforms commercial detectors, offering a vital safeguard for the burgeoning hydrogen economy[1].
“Conventional hydrogen sensors face several limitations,” explains Suman Mandal of the Physical Science and Engineering Division at KAUST, a member of the team behind the work. “These sensors often respond slowly to hydrogen leaks, cannot detect trace levels of hydrogen, and must be heated during operation, for example.”
The researchers have overcome these problems using a semiconducting polymer called DPP-DTT, which they coated onto a pair of platinum electrodes. Exposure to hydrogen reduced the current flowing through the device by up to 10,000 times, offering a powerful detection signal, with the drop in current corresponding to the concentration of hydrogen.
“This high responsivity ensures rapid and precise detection of gas leaks, which is essential for safety in industrial and transportation sectors,” says Mandal.
The device operates at room temperature and can detect traces of hydrogen at just 192 parts per billion. It responds within one second of exposure and consumes barely two microwatts of power. Laboratory tests showed the device could operate over a wide temperature and humidity range and remained functional for two years.
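To make that signal concrete, the short sketch below converts a measured current-drop ratio into an estimated hydrogen concentration by interpolating a calibration curve. The calibration points and the log-log interpolation are assumptions chosen purely for illustration; the paper reports the sensor's response characteristics, not this exact relation.

```python
import numpy as np

# Hypothetical calibration: current-drop ratio (I_air / I_H2) measured at
# known hydrogen concentrations in parts per billion. Values are illustrative
# only and are not taken from the paper.
cal_conc_ppb = np.array([192, 1_000, 10_000, 100_000, 1_000_000])
cal_ratio = np.array([1.5, 5, 60, 900, 10_000])

def estimate_h2_ppb(current_ratio):
    """Interpolate the calibration curve on log-log axes, since the
    response spans several orders of magnitude."""
    log_conc = np.interp(np.log10(current_ratio),
                         np.log10(cal_ratio), np.log10(cal_conc_ppb))
    return 10 ** log_conc

print(f"ratio of 100x -> ~{estimate_h2_ppb(100):,.0f} ppb hydrogen")
```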
The researchers tested the device in various real-world scenarios, which included hydrogen leaking from a pipe and bursting hydrogen-filled balloons in a room. They even mounted the device on a drone and flew it through an area where a hydrogen leak had occurred. In all scenarios, the device performed better than commercial sensors.
The sensor could also detect hydrogen in mixtures of volatile molecules such as ethanol and acetone, and in complex gas mixtures. The sensor only failed when the atmosphere lacked any oxygen, which provided an important clue about how it works.
Oxygen from the air enters the polymer and draws electrons from the material. This increases the current flowing through the device and leaves oxygen within the polymer and on the electrodes. If there is any hydrogen in the surrounding air, it also passes through the polymer and reaches the electrodes, where it splits into hydrogen atoms that stick to the platinum’s surface. Hydrogen and oxygen atoms then combine to form water, which escapes the device. Removing this oxygen reduces the current flowing through the device, which signals the presence of hydrogen.
“This is an entirely new hydrogen sensing mechanism,” Mandal says.
Using an inexpensive screen-printing method, the sensor could be manufactured at a low cost, making it an affordable and practical way to rapidly identify hydrogen leaks.
The team has filed a patent on the work, and plans to collaborate with a company to further develop the technology. “I believe these efforts will help address hydrogen safety issues in a cost-effective and environmentally friendly manner,” says Mandal.
The need to tackle antibiotic resistance is becoming more urgent as antibiotic resistance genes (ARGs) proliferate in the wider environment, posing threats to the health of all species. Wastewater treatment plants are a key hotspot for the spread of antimicrobial resistance, because ARGs present in waste from humans can pass through the treatment process intact and disseminate into the environment.
“Solid waste, or sludge, is a byproduct of wastewater management that is regularly discharged into the environment after wastewater is treated,” says Julie Sanchez Medina, Ph.D. student at KAUST, who is supervised by faculty member Pei-Ying Hong. “However, sludge now contains considerable numbers of ARGs, and they are free to potentially move between species in the sludge, spreading antimicrobial resistance still further.”
Now, a metagenomics study by Sanchez Medina and co-workers has demonstrated that one type of bioreactor used in some wastewater plants – anaerobic membrane bioreactors – may be better at reducing the amount of ARGs released into the environment[1].
Membrane bioreactors use a bacterial digestion process to break down organic pollutants in wastewater. This digestion process either uses aerobic bacteria (those that thrive with oxygen) or anaerobic bacteria (without oxygen). The wastewater is then filtered through a membrane to separate out the contaminants, resulting in clean fluid effluent and leaving behind the solid sludge.
“We wanted to examine whether there are differences in the amount of ARGs and antibiotic-resistant bacterial strains in the sludge produced by aerobic or anaerobic membrane bioreactors,” says Sanchez Medina.
The team compared two systems that were treating the same stream of wastewater, and collected sludge from each of them in two independent runs during a five-month period. They extracted DNA from the sludge, then sequenced and analyzed it with bioinformatic tools. The resulting datasets enabled the researchers to estimate the abundance of ARGs according to the sludge volume that was released by each bioreactor.
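That final normalization step, scaling ARG counts by the volume of sludge each reactor discharges, can be pictured with a short sketch like the one below. The column names, units and sequencing-depth correction are assumptions for illustration and do not reproduce the team's actual bioinformatic pipeline.

```python
import pandas as pd

# Hypothetical per-sample table: ARG reads detected by annotation, total
# sequenced reads, and the sludge volume discharged per day (litres).
samples = pd.DataFrame({
    "reactor": ["aerobic", "aerobic", "anaerobic", "anaerobic"],
    "arg_reads": [5200, 4800, 1500, 1700],
    "total_reads": [2.0e7, 1.8e7, 2.1e7, 1.9e7],
    "sludge_litres_per_day": [120.0, 120.0, 35.0, 35.0],
})

# Normalize by sequencing depth, then scale by discharged sludge volume to
# compare how large an ARG load each system actually releases.
samples["arg_per_million_reads"] = samples["arg_reads"] / samples["total_reads"] * 1e6
samples["arg_load_per_day"] = samples["arg_per_million_reads"] * samples["sludge_litres_per_day"]

print(samples.groupby("reactor")["arg_load_per_day"].mean())
```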
“We found that sludge from the anaerobic system had a lower abundance of ARGs. Crucially, there was also a lower potential for the horizontal transfer of genes to opportunistic pathogens in the anaerobic system compared to the aerobic one,” says Sanchez Medina.
The team also examined the diversity in the types of antibiotic resistance for each system. Aerobic sludge had a higher diversity of potentially harmful ARGs, including those that confer resistance to broad-spectrum antibiotics like quinolones and tetracycline. Anaerobic sludge had fewer mobile genetic elements and the conditions within the system were less conducive to gene transfer.
“Coupled with the other advantages of anaerobic membrane bioreactors – lower overall sludge waste and lower energy requirements – this technology may be especially useful in the fight against antimicrobial resistance,” notes Hong.
“We plan to conduct a further, long-term study of anaerobic versus aerobic systems to gain more insights into the risk of antibiotic resistance dissemination and the downstream impacts in terms of reusing treated wastewater, which is a vital consideration for an arid country such as Saudi Arabia,” concludes Sanchez Medina.
A new tool created by KAUST researchers helps make climate models more practical to use[1]. The tool, called an online stochastic generator, reduces the space needed to store and analyze climate data while enabling researchers to generate nearly real-time climate data, helping them understand climate change in a timely manner.
An important element in climate modeling is reanalysis data, in which observations are incorporated into a model’s predictions to improve its accuracy. Reanalysis data can be extremely large and expensive to generate, making storage a serious issue. “The storage aspect is becoming a big problem for climate research centers because they run simulations on supercomputers that take weeks or months, and then they have terabytes of data to store somewhere for future use, which has a cost. And they’re reluctant to throw that data away,” says KAUST’s Al-Khawarizmi Distinguished Professor of Statistics Marc Genton, the study’s senior author.
To address this challenge, researchers can use tools called stochastic generators. A stochastic generator represents the climate data in a statistical model, which can be used to recreate statistically similar data. “If you fit a stochastic generator to the data, you only need to store the parameters. You could throw away the data and re-simulate it at any time quickly and cheaply,” explains Genton.
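As a minimal illustration of that idea, the sketch below fits a simple first-order autoregressive model to a time series at a single grid point, keeps only the fitted parameters, and re-simulates statistically similar data on demand. The AR(1) choice is an assumption made purely for illustration; the team's generator is a far richer multivariate, space-time model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar1(series):
    """Fit an AR(1) model x[t] = mu + phi*(x[t-1] - mu) + noise.
    Only three numbers per series need to be stored."""
    mu = series.mean()
    x = series - mu
    phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    sigma = np.std(x[1:] - phi * x[:-1])
    return mu, phi, sigma

def simulate_ar1(params, n):
    """Regenerate a statistically similar series from stored parameters."""
    mu, phi, sigma = params
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, sigma)
    return x

# 'data' stands in for one grid point of a large reanalysis product
data = simulate_ar1((10.0, 0.8, 1.0), 5000)
params = fit_ar1(data)                    # store these few numbers...
surrogate = simulate_ar1(params, 5000)    # ...then the raw data can be discarded
print(params, surrogate.mean())
```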
Stochastic generators also enable researchers to regenerate multiple ensembles of climate data from the stored parameters. This can give climate modelers better insight into the uncertainty of the data and help them reach more accurate predictions and conclusions about how the climate works.
However, existing stochastic generators suffer from a few shortcomings. They aren’t developed with storage constraints in mind and can’t be updated live as new data come in. “Since reanalysis data can come in real-time and span a considerable number of time points, a stochastic generator for reanalysis data must address these two challenges,” explains Dr. Yan Song, the postdoc who led the study.
Together with a collaborator at Lahore University of Management Sciences, Song and Genton have developed a stochastic generator that can incorporate new data as they arrive—an online stochastic generator. Their paper describing the new stochastic generator is one of the five finalists for the ADIA Lab Best Paper Award in Climate Data Sciences for “Pioneering Solutions for a Sustainable Future.”
The generator can take data into the model sequentially as blocks, so the model’s parameters can be updated as new data come in. “Our online stochastic generator can emulate near real-time data at high resolution, so it’s suitable for reanalysis data,” says Song. “The performance of our online stochastic generator is comparable to that of a stochastic generator developed using the entire dataset at a single time.”
Processing the data in blocks also offers a way to reduce the computational load of climate models. For example, when the available computational resources aren’t enough to store and analyze a data set, it can instead be processed as a sequence of blocks. “Because it doesn’t process all the data at once, the model we’ve developed can deal with higher resolutions in both space and time,” says Genton.
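The block-wise updating can be pictured with a streaming estimator like the one sketched below, which refreshes a running mean and variance as each new block arrives, so the full dataset never has to be held in memory at once. This is only a schematic of the online idea under simple assumptions; the actual generator updates the parameters of a full space-time statistical model.

```python
import numpy as np

class OnlineMoments:
    """Keep running estimates of mean and variance, updated one data
    block at a time using the standard merging formula."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the mean

    def update(self, block):
        block = np.asarray(block, dtype=float)
        n_b, mean_b = block.size, block.mean()
        m2_b = ((block - mean_b) ** 2).sum()
        delta = mean_b - self.mean
        n_new = self.n + n_b
        self.mean += delta * n_b / n_new
        self.m2 += m2_b + delta ** 2 * self.n * n_b / n_new
        self.n = n_new

    @property
    def variance(self):
        return self.m2 / max(self.n - 1, 1)

est = OnlineMoments()
rng = np.random.default_rng(1)
for _ in range(10):                  # ten blocks arriving sequentially
    est.update(rng.normal(5.0, 2.0, size=1_000))
print(est.mean, est.variance)        # close to 5 and 4, without storing all the data
```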
The new generator can also handle multiple variables rather than just one. The team used it to analyze two different wind speed components, but it could also be adjusted for other variables. Some variables, such as precipitation, are more complicated to model and would take more work to include in the stochastic generator. The researchers plan to continue developing the generator to handle such variables.
A key challenge lies in balancing patient privacy with the opportunity to improve future outcomes when training artificial intelligence (AI) models for applications such as medical diagnosis and treatment. A KAUST-led research team has now developed a machine-learning approach that allows relevant knowledge about a patient’s unique genetic, disease and treatment profile to be passed between AI models without transferring any original data[1].
“When it comes to machine learning, more data generally improves model quality,” says Norah Alballa, a computer scientist from KAUST. “However, much data is private and hard to share due to legal and privacy concerns. Collaborative learning is an approach that aims to train models without sharing private training data for enhanced privacy and scalability. Still, existing methods often fail in heterogeneous data environments when local data representation is insufficient.”
Learning from sensitive data while preserving privacy is a long-standing problem in AI: it restricts access to large data sets, such as clinical records, that could greatly accelerate research and the effectiveness of personalized medicine.
One way privacy can be maintained in machine learning is to break up the dataset and train AI models on individual subsets. The trained model can then share just the learnings from the underlying data without breaching privacy.
This approach, known as federated learning, can work well when the datasets are largely similar, but when distinctly different datasets form part of the training library, the machine-learning process can break down.
“These approaches can fail because, in a heterogeneous data environment, a local client can ‘forget’ existing knowledge when new updates interfere with previously learned information,” says Alballa. “In some cases, introducing new tasks or classes from other datasets can lead to catastrophic forgetting, causing old knowledge to be overwritten or diluted.”
Alballa, working with principal investigator Marco Canini in the SANDS computing lab, addressed this problem by modifying an existing approach called knowledge distillation (KD). Their modifications include data-free relevance estimation, designed to improve the relevance of retrieved learnings; a masking process that uses synthetic data to filter out irrelevant knowledge; and a two-phase training strategy that integrates new knowledge without disrupting existing knowledge, avoiding the risk of catastrophic forgetting.
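For readers unfamiliar with knowledge distillation, the sketch below shows the core ingredient in its plainest form: a student network is trained to match a teacher's softened class probabilities, here restricted to a queried subset of classes. This is a generic, hypothetical illustration of distillation with a simple query mask, not the team's QKT implementation, which adds data-free relevance estimation and the two-phase training strategy described above.

```python
import torch
import torch.nn.functional as F

def query_distillation_loss(student_logits, teacher_logits, query_classes, T=2.0):
    """Distill only the knowledge about the queried classes.

    student_logits, teacher_logits: (batch, num_classes) tensors
    query_classes: list of class indices the student is asking about
    T: temperature that softens the probability distributions
    """
    # keep only the columns for the queried classes, filtering out the rest
    s = student_logits[:, query_classes]
    t = teacher_logits[:, query_classes]
    log_p_student = F.log_softmax(s / T, dim=1)
    p_teacher = F.softmax(t / T, dim=1)
    # temperature-scaled KL divergence, the standard distillation objective
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# toy example: 4 samples, 10 classes, the student queries classes 2 and 7
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
loss = query_distillation_loss(student, teacher, [2, 7])
loss.backward()   # gradients flow only through the student's logits
```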
“Mitigating knowledge interference and catastrophic forgetting was our main challenge in developing our query-based knowledge transfer (QKT) approach,” says Alballa. “QKT outperforms methods like naive KD, federated learning, and ensemble approaches by a significant margin – more than 20 percentage points in single-class queries. It also eliminates communication overhead by operating in a single round, unlike federated learning, which requires multiple communication steps.”
QKT has broad applications in medical diagnosis. It could enable, for example, one hospital’s AI model to learn to detect a rare disease from another hospital’s model without sharing sensitive patient data.
It also has applications in other systems where models must adapt to new knowledge while preserving privacy, such as fraud detection and intelligent Internet-of-Things systems.
“By balancing learning efficiency, knowledge customization, and privacy preservation, QKT represents a step forward in decentralized and collaborative machine learning,” Alballa says.
Next-generation solar cells made from organic materials could soon contribute to real-world renewable energy generation, suggests an outdoor stress test conducted under intense Saudi Arabian sunlight. The long-term study showed that certain organic light-capturing materials are surprisingly resilient to light-induced ‘photodegradation’ and revealed new ways to further optimize the cells’ longevity in realistic conditions[1].
Lightweight, semi-transparent and flexible, organic solar cells (OSCs) could potentially be used in a range of situations where conventional silicon solar panels would be too heavy, rigid or opaque to be deployed. “Recently, OSCs’ solar power conversion efficiencies (PCEs) have improved rapidly, surpassing 20 percent in laboratory settings,” says Han Xu, a postdoc in the KAUST lab of Derya Baran, who led the research with then postdoc Jianhua Han. “However, there has been much less focus on improving long-term stability of OSCs, which remains a major bottleneck hindering commercial viability,” he says.
The performance of an OSC can decline precipitously when exposed to the heat, light and moisture of outdoor environments; this occurs via degradation pathways that are poorly understood. “To bridge this knowledge gap, we systematically investigated the degradation behavior of various OSCs under light, thermal stress, and outdoor conditions,” Xu says.
The researchers focused on a component of the OSC’s light-harvesting core called the polymer donor. These materials’ photodegradation pathways have rarely been studied, despite their crucial role in OSC light absorption, charge generation and transport.
The team made a series of OSCs incorporating different polymer donors and studied the impact of factors such as polymer molecular structure on OSC longevity.
The key weak spot for the polymer donors’ photodegradation, it turned out, was the side chains that branch from the polymers’ central molecular backbone. The team showed that light could knock a hydrogen atom from a side chain or break off a side chain entirely, initiating cascading damage. “This can result in by-products such as cleaved side chains, radicals, twisted polymer backbones, and cross-linked structures,” Xu says.
Some side chains, however, were far less susceptible to this form of light-driven damage than others, the team showed. “In our study, OSCs incorporating the polymer donor PCE10, which features robust side chains, achieved 91 percent of retained PCE even after seven months of outdoor stability testing,” Xu says.
“Also notable is that some OSCs can survive harsh environmental conditions over a long period of time,” Baran says. PCE10’s class-leading stability was a surprise, she adds, because previous results have shown that PCE10 is photo-unstable in air. For their outdoor testing, the team encapsulated their OSCs to exclude air and moisture from the device. Under these conditions, PCE10 proved to be remarkably resilient to degradation — despite experiencing intense sunlight and peak temperatures of over 65 degrees Celsius.
The study highlights that OSCs should be optimized not only for their initial power conversion efficiency, but also for how well that efficiency is maintained over time in outdoor environments, Baran says.
Based on their findings, the team’s next step will be to test a new set of OSCs, trialing them under various conditions that solar cells might be exposed to in different outdoor environments around the world.
By uniting different forms of the same semiconductor material, KAUST researchers have created a self-powered device that detects ultraviolet light[1]. The key part of the device is known as a phase heterojunction (PHJ), an arrangement that could open up a host of new electronics applications.
The atoms within a crystalline semiconductor can be arranged in various patterns known as polymorphic phases. Each phase offers distinct properties, such as the light wavelength it absorbs. The new device uses two phases of gallium oxide (Ga2O3), a stable and relatively inexpensive material that absorbs deep in the ultraviolet part of the spectrum.
Electronic devices often contain adjacent layers of two semiconductor materials. But marrying two phases of the same semiconductor to create a PHJ offers several advantages over the traditional approach, explains team member Yi Lu.
For instance, PHJs can avoid absorbing unwanted wavelengths of light, and they tend to create a strong electric field at the interface between the phases, which could significantly enhance the performance of devices including solar cells, transistors and photodetectors. “Growing and integrating polymorphic phases of the same material is also more straightforward and economical than combining dissimilar materials, but it is difficult to maintain a high-quality interface,” says Lu.
The researchers used a method called epitaxy to prepare their PHJ. This involves firing a laser at a target to generate a stream of atoms that subsequently assemble on a substrate.
First, they grew gallium oxide’s orthorhombic phase (κ-Ga2O3) on a sapphire substrate. They used high vacuum conditions, and included some tin in the target, which helped to generate the desired phase. On top of that, they formed a layer of the monoclinic phase (β-Ga2O3), using oxygen-rich conditions to ensure the correct crystal structure.
“The main contribution of this work is the first demonstration of a phase heterojunction with a clear, atomically-sharp, and well-defined interface,” says Xiaohang Li, who led the team.
When ultraviolet light hits this PHJ, it excites electrons into a higher energy band, leaving positively charged ‘holes’ behind in a lower-energy band. Crucially, each of these energy bands differs slightly between the two phases, which creates an electric field at the interface between the layers. This helps to quickly and efficiently separate the electrons and holes, generating a current without having to apply an external voltage — meaning the device is self-powered.
“Researchers have previously tried to demonstrate this PHJ,” says Ph.D. student and team member Patsy A. Miranda Cortez. “However, they just formed randomly distributed mixed phases, which may not be suitable for semiconductor device mass production.”
The PHJ created a current roughly 1,000 times greater than similar devices that contained only a single phase of gallium oxide, and it did so much more quickly. Consequently, it could produce a much stronger detection signal in response to very weak deep ultraviolet light.
The team now plans to combine other phases of gallium oxide, and apply their PHJs to areas such as advanced imaging, energy-efficient photonics, and power electronics.
Engineered photosynthetic algae could be used within sunlight-powered sustainable chemical biofactories using a new circular production process developed at KAUST[1]. The scalable process uses bespoke functionalized microparticles — rather than flammable organic solvents — for the critical step of harvesting the valuable chemicals that the algae produce. The microparticles are robust, reusable, and can be tailored to capture a range of chemical products.
Algae, whose metabolism has been reprogrammed to biosynthesize target chemicals in high volume, could offer a green and sustainable way to make essential products. “Our lab has several metabolically engineered algal strains that produce chemicals called terpenoids. These have potential applications ranging from cosmetics ingredients to biofuels,” says Sebastian Overmans, a postdoc in the lab of Kyle Lauersen, who led the research.
Overmans explains that upscaling the process has been hampered by challenges with extracting the chemicals from the watery fluid that the algae are grown in. “Traditionally, the terpenoids are extracted using a layer of alkane solvent on top of the algal culture,” he says. As the algae circulate in the medium, the chemicals that they generate can dissolve into the solvent layer, where they can be collected.
Other challenges limit the scalability of this process. Firstly, the solvents can be toxic and flammable.
Secondly, as air and carbon dioxide are bubbled through the reactor to support algal growth, the solvent layer can mix into the culture layer. “This mixing causes foam formation and emulsions, which limit productivity and hamper extraction,” Lauersen says.
A cross-campus collaboration with Himanshu Mishra and his team at KAUST could finally have solved the problem. “They recommended that we chemically link the solvent onto the surface of low-cost silica particles to create particles that capture our target terpenoids like a solvent but are physically easier to work with,” Lauersen adds.
The new particles work harmoniously with the gas stream bubbled up through the reactor. “When we tested this method in commercial hanging-bag reactors, we confirmed that the uplift of the air-CO2 gas mix is sufficient to keep the particles suspended,” Overmans explains. “Once we stop the bubbling, the microparticles settle quickly, which makes them easy to separate from the algal culture for product recovery.”
The valuable chemicals captured in the particles’ solvent coating can be collected by washing them with a small volume of ethanol. “The efficiency of the particles’ extraction varied depending on the compound that we were trying to extract from the algae — but even in cases where the microparticles extracted less well than traditional solvents, the advantages of our method far outweighed the slightly lower performance,” Overmans says.
The team is currently optimizing parameters such as the mixing rate and particle separation procedure, which could further improve efficiency.
“Also, one distinct advantage is that the microparticles can be tailored with coatings optimized for different compounds of interest,” Overmans concludes. “The next step is to show that the technology can be upscaled further before benchmarking its performance in real-world bioproduction.”
A simple new design for a passive wireless strain sensor offers unprecedented sensitivity while being thin enough to be embedded within structures without causing defects[1]. With further development, the KAUST team expects that the chipless design could be adapted to make sensors that respond to other stimuli, such as the temperature or chemical environment.
The design combines several existing technologies into a very efficient package. “The advantage in terms of sensitivity comes from merging different physics for two different responses from two materials,” explains Hassan Mahmoud, a doctoral researcher in the team of Gilles Lubineau, who led the study.
The sensor is printed with inks that change their electrical resistance in response to strain. The ink is printed in such a way as to create a circuit with capacitive domains, enabling the sensor to be activated wirelessly by a specific frequency. The circuit is designed to make the activation frequency very sensitive to stimulation, so it reflects the amount of strain the sensor is under.
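A rough way to picture the wireless readout is as a resonant tag whose resonance shifts when strain alters the printed circuit. The sketch below assumes a simple LC resonance and an illustrative strain-induced capacitance change; the actual device merges resistive and capacitive responses in a more sophisticated way, so both the circuit model and the numbers here are assumptions for illustration.

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonance of an ideal LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 1 uH printed loop with a 10 pF capacitive domain
L = 1.0e-6
C0 = 10.0e-12
f_rest = resonant_frequency(L, C0)

# Suppose strain changes the effective capacitance by 2 percent (assumed figure)
f_strained = resonant_frequency(L, C0 * 1.02)
print(f"at rest: {f_rest / 1e6:.1f} MHz, under strain: {f_strained / 1e6:.1f} MHz")
```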
Manufacturing the new sensor should be quick and easy. “Our mechanism uses simple available materials and techniques, so it is suitable and scalable for use by industry,” says Mahmoud.
The design might have to be adjusted depending on the use case, but Mahmoud does not expect that to be challenging. “The R&D is done, so it is just about customizing it to be suitable to the application, like using a substrate that’s right for wearables or that can withstand the high temperatures in structural manufacturing,” he explains.
Mahmoud, a mechanical engineer, explains that the sensor was developed in response to engineers’ needs. “We have lots of challenges when it comes to inspecting or monitoring structures during operations, especially with composite materials, which are very sensitive to having sensors embedded,” he says. But the new sensor is thin enough that it could even be used in composites.
He sees a wide range of potential applications for the new sensor. “It can be used in industrial applications, such as aerospace structures, oil and gas rigs, and many other structures that need real-time monitoring, like bridges. The technology can also be used for wearables, for example, to monitor the movement of muscles in sports or to track relevant health indicators,” he says.
Lubineau outlines how further work will open even more applications. “With additional research, we can tune the architecture that we invented to design sensors for other types of stimuli, like the chemical environment. We can build a full portfolio of sensors using the same physical principles,” he explains.
Marine and coastal plant-based ecosystems, including mangroves, seagrass meadows and macroalgae (seaweeds), play a significant role in capturing and storing atmospheric carbon. Understanding these blue carbon resources is increasingly important in tackling climate change, so KAUST researchers are finding out more about the understudied realm of the Red Sea’s macroalgae species[1].
“Not many people will think of seaweed as a versatile tool. But alongside sequestering carbon, certain macroalgae species hold potential for bioremediation, helping restore polluted coastal areas and improve overall ecosystem health,” says Chunzhi Cai, former Ph.D. student at KAUST. “Some species are excellent at filtering out metal contaminants, for example, while others can help prevent eutrophication and algal blooms.”
For Saudi Arabia, where oil refineries and industrial activity contribute significantly to emissions, understanding the health and the role of macroalgae is particularly relevant. Alongside their carbon-sequestration capabilities, macroalgae are often used as ingredients in food, pharmaceutical and beauty products. Hence, a comprehensive analysis of nutrients and pollutants is critical to safeguard human health.
Cai collaborated with colleagues, under the supervision of KAUST faculty Susana Agusti and Carlos M. Duarte, to conduct a comprehensive analysis of 161 macroalgae samples collected from 45 sites along the Saudi Arabian Red Sea coast. They determined the concentrations of 22 chemical elements, including nutrients and heavy metals, in the 19 different species of macroalgae sampled.
Their results revealed high levels of potassium, sodium and sulfur in many species, which can be attributed to the Red Sea’s unique high salinity, low rainfall rates and high evaporation levels.
There were significant differences in nutrient and trace metal levels depending on the location and habitats the macroalgae came from. For example, sediments can trap pollutants such as metals, resulting in macroalgae harvested from coastal seagrass meadows exhibiting higher metal accumulation compared to those from coral reefs, where sediment is limited or absent. Macroalgae in the southern Red Sea showed higher levels of total organic carbon, nitrogen, phosphorus and cadmium compared to those in northern locations.
“This trend is influenced by nutrient inflows from the Indian Ocean and the unique semi-enclosed nature of the Red Sea,” says Cai. “Crucially, we also showed that as macroalgae absorb more heavy metals, their ability to store carbon is potentially reduced.”
The macroalgae Amphiroa fragilissima, Padina sp. and Udotea flabellum had the highest trace metal contents of all samples taken. A. fragilissima shows considerable potential as a bio-remediator, given its ability to absorb large amounts of metals like chromium.
The team also identified the Red Sea’s Halymenia species as particularly nutritious, providing a vital food source for marine creatures. Halymenia may also be suitable for human consumption. Another macroalgae species with high potassium content may be useful for agricultural fertilizers.
However, the researchers also uncovered worrying contamination trends in certain locations. Samples collected near several Saudi coastal towns exhibited chromium and nickel levels that exceeded toxicity thresholds.
“Given that the Red Sea is a shared environment, regional collaboration is crucial to safeguard marine ecosystems,” concludes Agusti. “All Red Sea nations need to establish coordinated pollution management strategies. Regional initiatives could include joint monitoring programs, data sharing and the development of common regulations for protecting and utilizing these valuable ecosystems.”
A biogas purification system that is compact, efficient and cost effective could help to turn organic waste into valuable streams of methane and carbon dioxide at small scales.
Biogas is a renewable energy source, produced using microbes that break down sewage sludge, agricultural residues or food waste. Biogas is roughly 50 to 70 percent methane — which can replace natural gas used for cooking, heating, or generating electricity — while the remainder is mostly carbon dioxide.
A method called pressure swing adsorption (PSA) is used to separate these gases, by passing high-pressure biogas over a porous adsorbent material in a separation column. The adsorbent selectively traps carbon dioxide from the mixture, leaving relatively pure methane. Once this methane has been piped away, the pressure is lowered so that the adsorbent releases its carbon dioxide in a stream of ‘tail gas’. Several separation columns can be connected together to improve the purity of each gas stream over multiple adsorption cycles.
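The cycle can be pictured with a toy mass balance like the one below, which splits a biogas feed into a methane-rich product and a CO2-rich tail gas using assumed capture and slip fractions. The numbers are illustrative only; real PSA design relies on adsorption isotherms and breakthrough simulations rather than fixed fractions.

```python
def psa_step(feed_ch4_mol, feed_co2_mol, co2_capture=0.95, ch4_slip=0.10):
    """Toy single-column PSA mass balance with assumed fractions.

    co2_capture: fraction of CO2 trapped by the adsorbent at high pressure
    ch4_slip:    fraction of methane co-adsorbed and later lost to tail gas
    """
    adsorbed_co2 = co2_capture * feed_co2_mol
    adsorbed_ch4 = ch4_slip * feed_ch4_mol
    # high-pressure step: whatever is not adsorbed leaves as methane-rich product
    product = {"CH4": feed_ch4_mol - adsorbed_ch4, "CO2": feed_co2_mol - adsorbed_co2}
    # blowdown step: lowering the pressure releases the adsorbed gas as tail gas
    tail = {"CH4": adsorbed_ch4, "CO2": adsorbed_co2}
    return product, tail

# 100 mol of a typical 60/40 biogas feed
product, tail = psa_step(60.0, 40.0)
purity = product["CH4"] / (product["CH4"] + product["CO2"])
print(f"product purity: {purity:.1%}  CH4 lost to tail gas: {tail['CH4']:.1f} mol")
```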
However, tail gas often contains traces of methane, which has up to 30 times greater global warming potential than carbon dioxide. This mixture is typically released into the atmosphere, where it contributes to climate change.
Carlos Grande at KAUST, along with his Ph.D. student, Saravanakumar Ganesan, and research scientist, Rafael Canevesi, aimed to design a PSA system that could economically upgrade biogas into purer product streams, even when operating at small scales[1]. This would not only avoid methane emissions in the tail gas, but also deliver carbon dioxide that is sufficiently pure for industrial use, so that it is not merely vented.
“If small units could be made affordable, they could be implemented in farms or small communities, extending the capabilities of producing biomethane as a renewable fuel,” says Grande. “Our studies in this field are targeted to make a profitable case for farm-scale implementation of biogas upgrading while also being environmentally responsible.”
The team simulated different PSA configurations that used a commercially available carbon molecular sieve as the adsorbent, and found that a dual-stage PSA was the most successful[1]. In the first stage, four columns produce a high-purity stream of carbon dioxide. Methane-rich gas from this stage is then piped into the second stage, which uses two columns to maximize the methane’s purity. Tail gas from the second stage is recycled back into the first stage, giving any stray methane another opportunity to be extracted.
Then the researchers refined their design by using industrial balloons to manage the flow of gases through the system. This enabled a simpler PSA configuration with only two columns that achieved more than 97 percent methane purity, and carbon dioxide purity over 99.5 percent — enough to satisfy the most stringent legislation in this area, Grande says. This system also had a lower energy demand than previous designs, and would be cheaper to set up[2].
“The next stage is to reduce the cost by 30 percent in small units,” says Grande. “We can also tailor the PSA units to different biogas sources, adapting the technology to treat different streams available in Saudi Arabia.”