Will NeRFs Replace Photogrammetry?

Computer Vision Decoded

In this episode of Computer Vision Decoded, we dive into one of the hottest topics in the industry: Neural Radiance Fields (NeRFs).

We are joined by Matt Tancik, a PhD student in the electrical engineering and computer sciences department at UC Berkeley. He contributed to the original NeRF research in 2020 and to several follow-up NeRF projects since then.

Last but not least, he is building Nerfstudio, a collaboration-friendly studio for NeRFs.

In this episode you will learn what NeRFs are and, more importantly, what they are not. Matt also digs into the challenges of large-scale NeRF creation, drawing on his experience with Block-NeRF.
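
For listeners new to the topic, the core idea is that a NeRF is a neural network that maps a 3D position and viewing direction to a color and a density, and a pixel is rendered by compositing many such samples along a camera ray, rather than by reconstructing explicit surface geometry the way photogrammetry does. The snippet below is a minimal, illustrative Python sketch of that compositing step only; it is not code from the episode or from Nerfstudio, and the function and variable names are our own.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Alpha-composite per-sample colors along one camera ray.

    colors:    (N, 3) RGB values a NeRF network would predict at N samples along the ray
    densities: (N,)   volume densities (sigma) predicted at those samples
    deltas:    (N,)   distances between consecutive samples
    """
    # Opacity contributed by each segment of the ray.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach each sample.
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1] + 1e-10)))
    # Each sample's contribution, then the final composited pixel color.
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: random numbers stand in for network outputs at 64 ray samples.
rng = np.random.default_rng(0)
pixel = composite_ray(rng.random((64, 3)), 5.0 * rng.random(64), np.full(64, 0.02))
print(pixel)  # a single RGB color for this ray/pixel
```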

Follow Matt's work at https://www.matthewtancik.com/

Get started with Nerfstudio here: https://docs.nerf.studio/en/latest/

Block-NeRF details: https://waymo.com/research/block-nerf/

00:00 Intro
00:45 Matt’s Background in NeRF Research
04:00 What is a NeRF and how is it different from photogrammetry?
11:57 Can geometry be extracted from NeRFs?
15:30 Will NeRFs supersede photogrammetry in the future? 
22:47 Block-NeRF and the pros and cons of using 360 cameras
25:30 What is the goal of Block-NeRF?
30:44 Why do NeRFs need large GPUs to compute?
35:45 Meshes to simulate NeRF visualizations
40:28 What is Nerfstudio?
47:40 How to get started with Nerfstudio

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
