Software is rapidly taking over tasks that once belonged to dedicated hardware, and these advances are replacing human expertise to some extent as well, especially in domains such as image processing and software-driven mobility. In the past, capturing clean photos in low light depended almost entirely on the sensor; today, software methods are catching up quickly. Large full-frame sensors are not an option for smartphone photography, so even though low-light photography is heavily hardware-dependent, small smartphone sensors require substantial software processing to perform well in dim conditions, and noise remains a significant problem.
Although G-Cam already works wonders on even simple budget phones, Google researchers have developed MultiNeRF, an AI tool built on Neural Radiance Fields (NeRF) that performs remarkably well in low light. This AI-based image-processing tool could be a breakthrough for noiseless low-light smartphone photography. The accompanying papers will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR). The team has developed three methods that extend the existing NeRF algorithm: Mip-NeRF 360, Ref-NeRF, and RawNeRF. The newly released technology is expected to considerably enhance the Pixel phone's photo and video capabilities.
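All three methods build on NeRF's core idea: querying a learned scene representation for density and color at sample points along a camera ray, then compositing those samples into a pixel. The sketch below, written in JAX (the language of the MultiNeRF codebase), shows that standard volume-rendering step in a minimal form; the function name and simplified inputs are illustrative and not taken from the repository.

```python
import jax.numpy as jnp

def volume_render(densities, colors, deltas):
    """Composite per-sample densities and RGB colors along one ray.

    densities: (N,) non-negative volume densities at the samples
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between consecutive samples
    Returns the rendered (3,) RGB value for the ray.
    """
    # Opacity contributed by each ray segment
    alphas = 1.0 - jnp.exp(-densities * deltas)
    # Transmittance: how much light survives up to (but not including) each sample
    trans = jnp.cumprod(1.0 - alphas + 1e-10)
    trans = jnp.concatenate([jnp.ones(1), trans[:-1]])
    # Per-sample compositing weights, then a weighted sum of colors
    weights = alphas * trans
    return jnp.sum(weights[:, None] * colors, axis=0)
```

In the full method these densities and colors come from a trained MLP; here they are plain arrays so the compositing math stands alone. A near-opaque sample occludes everything behind it, which is exactly the behavior the transmittance term encodes.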
The Mip-NeRF 360 algorithm extends the existing Mip-NeRF method and permits image synthesis of unbounded 360-degree scenes. It can produce fully realized 3D objects and scenes, allowing photos to be viewed from any angle. RawNeRF is a version of NeRF adapted for low-light conditions. By giving users control over exposure and tone mapping, it offers multi-stage image noise reduction and attractive shallow depth-of-field effects. Ref-NeRF re-parameterizes view-dependent appearance using surface normals, which helps the algorithm better infer materials and light sources and is especially useful for objects with shiny surfaces.
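The exposure and tone-mapping control described for RawNeRF comes from rendering in linear (raw-like) color space and only converting to a display image afterwards. The sketch below illustrates that post-processing idea using the standard linear-to-sRGB transfer curve; the function names and the simple stops-based exposure parameter are assumptions for illustration, not RawNeRF's actual API.

```python
import jax.numpy as jnp

def srgb_tonemap(linear_rgb):
    # Standard linear-to-sRGB transfer curve (piecewise linear/gamma)
    return jnp.where(
        linear_rgb <= 0.0031308,
        12.92 * linear_rgb,
        1.055 * jnp.power(linear_rgb, 1.0 / 2.4) - 0.055,
    )

def expose_and_tonemap(linear_rgb, exposure_stops=0.0):
    """Apply an exposure adjustment in linear space, then tone-map for display.

    Because the scene is rendered in linear HDR values, scaling before
    tone mapping behaves like changing the camera's exposure after the fact.
    """
    scaled = linear_rgb * (2.0 ** exposure_stops)
    return srgb_tonemap(jnp.clip(scaled, 0.0, 1.0))
```

Adjusting `exposure_stops` brightens or darkens the image without re-rendering the scene, which is the kind of after-the-fact control the RawNeRF paper highlights.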
The researchers believe MultiNeRF performs best in most aerial photogrammetry applications where the user's primary goal is rendering, although mesh export is not currently available. According to the MultiNeRF project page, the released code contains Google's implementations of all three methods: it reproduces the results of Mip-NeRF 360, while the Ref-NeRF and RawNeRF results may differ slightly from those in the papers. Tools and examples for training, evaluation, testing, and rendering are also available on the MultiNeRF project website. The implementation is written in JAX and is a fork of Mip-NeRF.
Users who tested the recently released MultiNeRF agreed that its camera utilities perform well for large scenes. Google Research has made the complete code open source and welcomes contributions from the AI community, but notes that the code is research-stage and should be used with caution, as it is not an officially supported Google product.
Paper ‘Mip-NeRF 360’ | Paper ‘Ref-NeRF’ | Paper ‘NeRF in the Dark’ | Github Code
Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about Machine Learning, Natural Language Processing, and Web Development, and enjoys learning more about the technical field by participating in challenges.