Hyperspectral Imagery (HSI) has been used in many applications to non-destructively determine the material and/or chemical compositions of samples. There is growing interest in creating 3D hyperspectral reconstructions, which could provide both spatial and spectral information while also mitigating common HSI challenges such as non-Lambertian surfaces and translucent objects. However, traditional 3D reconstruction with HSI is difficult due to technological limitations of hyperspectral cameras. In recent years, Neural Radiance Fields (NeRFs) have seen widespread success in creating high-quality volumetric 3D representations of scenes captured by a variety of camera models. Leveraging recent advances in NeRFs, we propose computing a hyperspectral 3D reconstruction in which every point in space and view direction is characterized by wavelength-dependent radiance and transmittance spectra. To evaluate our approach, we collected a dataset containing nearly 2,000 hyperspectral images across 8 scenes and 2 cameras. We perform comparisons against traditional RGB NeRF baselines and apply ablation testing with alternative spectra representations. Finally, we demonstrate the potential of hyperspectral NeRFs for hyperspectral super-resolution and imaging sensor simulation. We show that our hyperspectral NeRF approach enables fast, accurate volumetric 3D hyperspectral reconstruction and opens up several new applications and areas for future study.
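To give a rough feel for the core idea, the sketch below shows a wavelength-conditioned field that, for a queried 3D point and view direction, returns a per-wavelength density spectrum (from which per-wavelength transmittance follows during volume rendering) and a per-wavelength radiance spectrum. This is a minimal PyTorch sketch, not the exact architecture from the paper; the class name, layer widths, use of raw coordinates instead of a positional encoding, and output activations are all placeholder assumptions.

# Minimal sketch of a wavelength-dependent radiance field
# (NOT the exact architecture from the paper; sizes and encodings are placeholders).
import torch
import torch.nn as nn

class HyperspectralField(nn.Module):
    def __init__(self, num_channels: int = 128, hidden: int = 256):
        super().__init__()
        self.num_channels = num_channels
        # Position branch: per-wavelength density spectrum plus a feature vector.
        self.sigma_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_channels + hidden),
        )
        # View-direction branch: per-wavelength radiance spectrum.
        self.radiance_net = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_channels),
        )

    def forward(self, xyz: torch.Tensor, viewdir: torch.Tensor):
        h = self.sigma_net(xyz)
        sigma = torch.relu(h[..., : self.num_channels])     # wavelength-dependent density
        feat = h[..., self.num_channels :]
        radiance = torch.sigmoid(
            self.radiance_net(torch.cat([feat, viewdir], dim=-1))
        )
        return sigma, radiance

# Example query: one 3D point and view direction -> two 128-channel spectra.
model = HyperspectralField()
sigma, radiance = model(torch.rand(1, 3), torch.rand(1, 3))
print(sigma.shape, radiance.shape)  # torch.Size([1, 128]) torch.Size([1, 128])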
Here are some examples of raw images taken by the hyperspectral cameras.
Our HS-NeRF approach (Ours-Hyper), which uses all 128 channels, clearly outperforms NeRFs trained on only RGB.
We can observe that the right-most column (trained using all 128 hyperspectral channels) produces better color accuracy and clarity than the middle three columns (trained using only pseudo-RGB images).
We believe this is because the additional channels improve the effective SNR: nearby wavelengths are correlated, so the extra samples provide more information to the model. This benefit outweighs the increased data complexity, while model complexity (and parameter count) stays roughly constant.
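As a toy numerical illustration of this intuition (not an analysis from the paper), averaging k noisy measurements of nearly the same underlying reflectance shrinks the noise by roughly the square root of k; the signal level, noise level, and channel count below are arbitrary.

# Toy illustration: k correlated neighbouring channels share the same underlying
# reflectance but carry independent noise, so averaging them reduces the error.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.5          # reflectance shared by nearby wavelengths (arbitrary)
noise_std = 0.1            # per-channel measurement noise (arbitrary)
k_channels = 8             # number of correlated neighbouring channels

single = true_signal + rng.normal(0, noise_std, size=10_000)
grouped = true_signal + rng.normal(0, noise_std, size=(10_000, k_channels))
averaged = grouped.mean(axis=1)

print("mean abs error, 1 channel :", np.abs(single - true_signal).mean())
print("mean abs error, 8 channels:", np.abs(averaged - true_signal).mean())  # ~1/sqrt(8) smaller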
Using our method on pseudo-RGB data (Ours-RGB and Ours-Cont), we see little performance difference from the nerfacto baseline despite the more challenging representation.
Depth maps can reveal issues with the 3D structures of NeRFs that are not immediately apparent in the rendered images.
Qualitatively, ours is noticeably better than the RGB baselines and is among the best of the ablations.
Finally, we can also export the NeRFs to point clouds to verify that their 3D structure is reasonable. Shown below are screenshots of the point clouds for Anacampseros (left) and Caladium (right).
Qualitatively, our proposed approach appears among the best, though slightly different architectures are not significantly worse.
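For reference, an exported point cloud can be inspected with a few lines of Open3D; the file name below is hypothetical and stands in for any exported .ply file.

# Sketch: load and visually inspect an exported point cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("anacampseros.ply")  # hypothetical exported file
print(pcd)                                         # reports the number of points
o3d.visualization.draw_geometries([pcd])           # interactive 3D viewer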
This video is the easiest way to compare performance across the different methods.
This video shows the camera moving, demonstrating that the NeRF's performance remains consistently good across novel viewpoints.
Rendered wavelength channels shown: Ch 15 (477 nm), Ch 35 (576 nm), Ch 55 (675 nm), Ch 75 (772 nm), Ch 95 (869 nm), Ch 105 (918 nm).
This video shows the camera moving while the rendered wavelength sweeps simultaneously, demonstrating that the NeRF's performance remains consistently good across novel viewpoints at all wavelengths.
Our NeRF loses almost no accuracy in predicting the full hyperspectral image, even when trained on only 1/8th of the wavelengths.
Our NeRFs can accurately predict an unseen viewpoint consistently across all wavelengths.
Furthermore, even withholding 87.5% of the wavelengths from the training set has only a marginal impact on accuracy.
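For concreteness, this hold-out amounts to keeping every 8th band for training and evaluating interpolation on the remaining 87.5%. The sketch below illustrates the split; the band-centre wavelengths are approximate and the variable names are ours, not from the paper's code.

# Sketch of the wavelength hold-out split described above.
import numpy as np

wavelengths = np.linspace(400.0, 1000.0, 128)       # approximate band centres (nm)
train_idx = np.arange(0, 128, 8)                    # keep every 8th band (1/8 of bands)
test_idx = np.setdiff1d(np.arange(128), train_idx)  # withhold the remaining 87.5%

train_bands = wavelengths[train_idx]
test_bands = wavelengths[test_idx]
gaps = np.min(np.abs(test_bands[:, None] - train_bands[None, :]), axis=1)
print(f"train on {len(train_bands)} bands, interpolate {len(test_bands)} held-out bands")
print(f"largest spectral gap to a trained band: {gaps.max():.1f} nm")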
Watch this rotating video to see that the wavelength interpolation performance is consistently good across novel viewpoints.
Inspect the individual rendered images by wavelength and amount of interpolation.
This is the same as the video above, but in image form to allow for closer inspection.
Hyperspectral NeRFs allow us to simulate arbitrary camera image sensors from a single reference image.
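As a sketch of how this works, a rendered hyperspectral cube can be integrated against a target sensor's spectral response curves to produce that sensor's image. The Gaussian response curves and band centres below are placeholders standing in for a real sensor's measured spectral sensitivities.

# Sketch: simulate an RGB-like sensor from a rendered hyperspectral cube.
import numpy as np

def simulate_sensor(cube, responses):
    """Integrate an (H, W, C) hyperspectral cube against per-band sensor
    responses of shape (C, num_outputs), returning an (H, W, num_outputs) image."""
    image = np.tensordot(cube, responses, axes=([2], [0]))
    return image / responses.sum(axis=0)  # normalise by each response's total weight

wavelengths = np.linspace(400.0, 1000.0, 128)   # approximate band centres (nm)
centres = np.array([610.0, 540.0, 465.0])       # placeholder R, G, B response peaks
responses = np.exp(-0.5 * ((wavelengths[:, None] - centres[None, :]) / 30.0) ** 2)

cube = np.random.rand(4, 4, 128)                # stand-in for a rendered hyperspectral view
rgb = simulate_sensor(cube, responses)
print(rgb.shape)                                # (4, 4, 3)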
@article{Chen24arxiv_HS-NeRF,
title={Hyperspectral Neural Radiance Fields},
author={Gerry Chen and Sunil Kumar Narayanan and Thomas Gautier Ottou and Benjamin Missaoui and Harsh Muriki and Cédric Pradalier and Yongsheng Chen},
journal={arXiv preprint arXiv:2403.14839},
year={2024}
}