Equipped with the continuous representation capability of Multi-Layer Perceptrons (MLP), Implicit Neural Representation (INR) has been successfully employed for Arbitrary-scale Super-Resolution (ASR). However, the limited receptive field of the linear layers in an MLP restricts the representation capability of INR, and querying the MLP once per pixel to render an image is computationally expensive. Recently, Gaussian Splatting (GS) has shown advantages over INR in both visual quality and rendering speed in 3D tasks, which motivates us to explore whether GS can be employed for the ASR task. However, directly applying 3D GS to ASR is exceptionally challenging, since the original GS is an optimization-based method that overfits each individual scene, whereas in ASR we aim to learn a single model that generalizes across different images and scaling factors. We overcome these challenges with two novel techniques. First, to generalize GS for ASR, we elaborately design an architecture that predicts image-conditioned Gaussians for the input low-resolution image in a feed-forward manner. Second, we implement an efficient, differentiable 2D GPU/CUDA-based scale-aware rasterization, which renders super-resolved images by sampling discrete RGB values from the predicted continuous Gaussians for a given scaling factor. Through end-to-end training, our optimized network, namely GSASR, can perform ASR for any image and unseen scaling factors. Extensive experiments validate the effectiveness of our proposed method.
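To illustrate the core idea of scale-aware rasterization described above, the following is a minimal NumPy sketch (not the paper's CUDA implementation): a set of 2D Gaussians defined in normalized image coordinates is sampled on pixel grids of different resolutions, so the same predicted Gaussians yield a super-resolved image for any scaling factor. All function and parameter names here are hypothetical, and the axis-aligned covariance is a simplification for clarity.

```python
import numpy as np

def render_gaussians(mus, sigmas, colors, opacities, out_h, out_w):
    """Render an image by splatting 2D Gaussians onto a pixel grid.

    Hypothetical sketch: Gaussians live in normalized [0, 1]^2
    coordinates, so the same set can be rasterized at any output
    resolution (i.e., any scaling factor).

    mus:       (N, 2) Gaussian centers in [0, 1]^2
    sigmas:    (N, 2) per-axis standard deviations (axis-aligned sketch)
    colors:    (N, 3) RGB values in [0, 1]
    opacities: (N,)   per-Gaussian opacity in [0, 1]
    """
    # Query pixel centers in the same normalized coordinates.
    ys = (np.arange(out_h) + 0.5) / out_h
    xs = (np.arange(out_w) + 0.5) / out_w
    grid = np.stack(np.meshgrid(xs, ys), axis=-1)          # (H, W, 2)

    # Evaluate every Gaussian at every pixel (dense for clarity;
    # a real rasterizer would tile the image and cull Gaussians
    # by their spatial footprint).
    diff = grid[None] - mus[:, None, None, :]              # (N, H, W, 2)
    expo = -0.5 * np.sum((diff / sigmas[:, None, None, :]) ** 2, axis=-1)
    weights = opacities[:, None, None] * np.exp(expo)      # (N, H, W)

    # Opacity-weighted average of Gaussian colors per pixel.
    num = np.einsum('nhw,nc->hwc', weights, colors)
    den = weights.sum(axis=0)[..., None] + 1e-8
    return num / den                                       # (H, W, 3)

# The same Gaussians rendered at two scaling factors (x2 and x4
# of a hypothetical 16x16 low-resolution input).
rng = np.random.default_rng(0)
n = 64
mus = rng.uniform(0, 1, (n, 2))
sigmas = rng.uniform(0.02, 0.1, (n, 2))
colors = rng.uniform(0, 1, (n, 3))
opacities = rng.uniform(0.5, 1.0, n)
img_x2 = render_gaussians(mus, sigmas, colors, opacities, 32, 32)
img_x4 = render_gaussians(mus, sigmas, colors, opacities, 64, 64)
```

Because every operation above is differentiable, gradients can flow from the rendered pixels back to the Gaussian parameters, which is what enables end-to-end training of the feed-forward predictor.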
@article{chen2025gsasr,
  title={Generalized and Efficient 2D Gaussian Splatting for Arbitrary-scale Super-Resolution},
  author={Chen, Du and Chen, Liyi and Zhang, Zhengqiang and Zhang, Lei},
  journal={arXiv preprint arXiv:2501.06838},
  year={2025},
}
We sincerely thank CompletionFormer for their open-source code. We also thank HAMMER for the open-source dataset.