Spatiotemporal-aware Neural Fields for Dynamic CT Reconstruction

National University of Defense Technology

Abstract

We propose a dynamic Computed Tomography (CT) reconstruction framework called STNF4D (SpatioTemporal-aware Neural Fields). First, we represent the 4D scene using four orthogonal volumes and compress these volumes into more compact hash grids. Compared with plane-decomposition methods, this design enhances the model's capacity while keeping the representation compact and efficient. However, in densely predicted high-resolution dynamic CT scenes, the lack of constraints and hash conflicts among hash-grid features lead to noticeable dot-like artifacts and blurring in the reconstructed images. To address these issues, we propose the Spatiotemporal Transformer (ST-Former), which guides the model in selecting and optimizing features by sensing the spatiotemporal information in different hash grids, significantly improving the quality of the reconstructed images. We conducted experiments on medical and industrial datasets covering various motion types, sampling modes, and reconstruction resolutions. Experimental results show that our method outperforms the second-best by 5.99 dB and 4.27 dB in medical and industrial scenes, respectively.
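To make the decomposition concrete, the sketch below shows one plausible way a 4D point could be split into four orthogonal 3D sub-coordinates, each looked up in its own hash grid, with the resulting features concatenated. All names, table sizes, hash primes, and the single-level lookup are illustrative assumptions, not the paper's actual implementation (which additionally feeds the features through the ST-Former).

```python
import numpy as np

# Illustrative hash primes (a common choice in multiresolution hash encodings);
# the actual hash function used by STNF4D is an assumption here.
PRIMES = (1, 2654435761, 805459861)

def hash_grid_lookup(table, coord, resolution):
    """Look up a feature for a 3D coordinate in a hashed feature table.

    coord: 3 floats in [0, 1); table: (T, F) array of learnable features.
    """
    idx = np.floor(np.asarray(coord) * resolution).astype(np.int64)
    h = 0
    for i, prime in enumerate(PRIMES):
        h ^= int(idx[i]) * prime  # XOR-combine per-axis hashed indices
    return table[h % table.shape[0]]

def stnf4d_features(tables, p, resolution=64):
    """Gather features for a 4D point p = (x, y, z, t) from four
    orthogonal volumes: (x,y,z), (x,y,t), (x,z,t), (y,z,t)."""
    x, y, z, t = p
    sub_coords = [(x, y, z), (x, y, t), (x, z, t), (y, z, t)]
    feats = [hash_grid_lookup(tab, c, resolution)
             for tab, c in zip(tables, sub_coords)]
    return np.concatenate(feats)  # fused spatiotemporal feature vector

# Example: four hash tables with 2**14 entries of 2 features each.
rng = np.random.default_rng(0)
tables = [rng.standard_normal((2**14, 2)) for _ in range(4)]
feat = stnf4d_features(tables, (0.3, 0.5, 0.7, 0.2))  # shape (8,)
```

In a real pipeline the tables would be trained jointly with the downstream decoder; hash collisions in these tables are exactly the source of the dot-like artifacts that the ST-Former is designed to suppress.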

Pipeline

[Pipeline overview figure]

More results

We present the sampling patterns and motion states of the medical and industrial datasets, along with additional experimental results.

BibTeX

@InProceedings{Zhou_2025_AAAI,
  title={Spatiotemporal-Aware Neural Fields for Dynamic CT Reconstruction},
  author={Zhou, Qingyang and Ye, Yunfan and Cai, Zhiping},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={10},
  pages={10834--10842},
  year={2025}
}