Statistics
Trio of tuning tools for modeling large spatial datasets
Statistical tools help fine-tune the parameters and approximations of models used to make sense of large spatial datasets.
Predictive modeling of very large datasets, such as environmental measurements across a wide area, can be highly computationally intensive. These computational demands can be reduced significantly by applying various approximations, but at what cost to accuracy? KAUST researchers have now developed statistical tools that help remove the guesswork from this approximation process.
“In spatial statistics, it is extremely time-consuming to fit a standard process model to large datasets using the most accurate likelihood-based methods,” says Yiping Hong, who led the research. “Approximation methods can cut down the computation time and computing resources significantly.”
Rather than model the relationship between each pair of observations explicitly using a standard process model, approximation methods adopt an alternative modeling structure to describe the relationships in the data. This approach is less accurate but far less computationally demanding. The tile low-rank (TLR) estimation method developed by KAUST, for example, applies a block-wise approximation to reduce the computation time.
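The article does not give the TLR implementation details, but the core idea of a block-wise low-rank approximation can be sketched as follows: split the covariance matrix into tiles, keep the diagonal tiles dense, and compress each off-diagonal tile with a truncated SVD. This is a minimal illustration in NumPy, not the actual ExaGeoStat/TLR code; the kernel, tile size, and tolerance are arbitrary choices for the demo.

```python
import numpy as np

def exp_cov(x, y, range_=0.1):
    # Exponential covariance between two 1-D location vectors
    return np.exp(-np.abs(x[:, None] - y[None, :]) / range_)

def tlr_compress(C, tile=64, tol=1e-2):
    # Replace each off-diagonal tile of C with a truncated SVD,
    # keeping singular values above tol * (largest in that tile).
    # Diagonal tiles stay dense, as in tile low-rank schemes.
    A = np.zeros_like(C)
    n = C.shape[0]
    kept_rank = 0
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            blk = C[i:i + tile, j:j + tile]
            if i == j:
                A[i:i + tile, j:j + tile] = blk
                continue
            U, s, Vt = np.linalg.svd(blk, full_matrices=False)
            k = max(1, int(np.sum(s > tol * s[0])))
            A[i:i + tile, j:j + tile] = (U[:, :k] * s[:k]) @ Vt[:k]
            kept_rank += k
    return A, kept_rank

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(size=256))
C = exp_cov(x, x)
A, kept_rank = tlr_compress(C)
rel_err = np.linalg.norm(A - C) / np.linalg.norm(C)
```

The tuning parameters Hong refers to appear here as `tile` and `tol`: coarser tiles and looser tolerances store fewer numbers and run faster, at the price of a larger approximation error `rel_err`.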
“Thus, one needs to determine some tuning parameters, such as how many blocks the data should be split into and the precision of the block approximation,” says Hong. “For this, we developed three criteria to assess the loss of prediction efficiency, or the loss of information, when the model is approximated.”
Lacking informative measures for evaluating the impact of approximation, Hong, along with computational scientist Sameh Abdulah and statisticians Marc Genton and Ying Sun, developed their own. The three measures, the mean loss of efficiency, the mean misspecification and a root mean square of the mean misspecification, together provide insight into the “fit” of the approximation parameters to the dataset, including prediction variability, and not just the point-by-point evaluation given by conventional prediction criteria.
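The article does not reproduce the formulas behind these criteria, but the flavor of a loss-of-efficiency measure can be shown with a simplified sketch (not the paper's exact definitions). For Gaussian simple kriging, the expected squared prediction error of any linear predictor has a closed form, so we can compare the predictor from the exact model against one from a deliberately misspecified model, used here as a stand-in for an approximated model, and average the relative loss over prediction locations.

```python
import numpy as np

def exp_cov(x, y, range_):
    # Exponential covariance between two 1-D location vectors
    return np.exp(-np.abs(x[:, None] - y[None, :]) / range_)

x_obs = np.sort(np.random.default_rng(1).uniform(size=100))
x_new = np.linspace(0.05, 0.95, 20)  # prediction locations
nug = 1e-8  # tiny nugget for numerical stability

def kriging_weights(range_):
    # Simple-kriging weights for each prediction location
    K = exp_cov(x_obs, x_obs, range_) + nug * np.eye(x_obs.size)
    return np.linalg.solve(K, exp_cov(x_obs, x_new, range_))

def expected_mse(W, range_true=0.1):
    # E[(Z(s0) - w'Z)^2] = C(0) - 2 w'c + w'Kw under the true model
    K = exp_cov(x_obs, x_obs, range_true) + nug * np.eye(x_obs.size)
    c = exp_cov(x_obs, x_new, range_true)
    return 1.0 - 2.0 * np.sum(W * c, axis=0) + np.einsum('ij,ik,kj->j', W, K, W)

W_true = kriging_weights(0.1)    # predictor from the exact model
W_appr = kriging_weights(0.15)   # stand-in for an approximated model
loe = expected_mse(W_appr) / expected_mse(W_true) - 1.0  # per location
mloe = loe.mean()  # mean loss of efficiency across locations
```

Because the exact-model weights minimize the expected squared error under the true model, `loe` is non-negative at every location, and a small `mloe` indicates that the approximation sacrifices little prediction efficiency.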
“We can use our criteria to compare the prediction performance of the TLR method with different tuning parameters, which allows us to suggest the best parameters to use,” says Hong.
The team applied the method to a real dataset of high-resolution soil moisture measurements from the Mississippi Basin. By adjusting the tuning parameters using the new measures, the TLR approximation yielded estimates very close to the exact maximum likelihood estimates, in a significantly shorter computation time.
“Our criteria, which were developed to choose the tuning parameter for TLR, can also be used to tune other approximation methods,” says Hong. “We now plan to compare the performance of other approximation methods developed for large spatial datasets, which will provide valuable guidance for analysis of real data.”
References

Hong, Y., Abdulah, S., Genton, M. G. & Sun, Y. Efficiency assessment of approximated spatial predictions for large datasets. Spatial Statistics 43, 100517 (2021).