Hacker News
Nvidia Kaolin Wisp: a PyTorch library to work with neural fields (github.com/nvidiagameworks)
83 points by lnyan on Aug 20, 2022 | 13 comments


It's a CUDA accelerated toolkit for storing and tracing volume data.

In my opinion, this one is pretty pointless, because everyone who uses NeRF or similar technology in a game or end-user product will need AMD and Intel GPU support, so CUDA is a no-go. And researchers can just copy a 200-line PyTorch model if they want to build a customized NeRF. Plus, if you do that, you get the production export to ONNX for free.

In short, I can't imagine any case where using a CUDA-specific closed source library would be superior to using readily available PyTorch scripts.


It appears to be a framework for creating and viewing NeRFs and similar for research and experimentation. You could do the same thing starting from PyTorch but then you’d have to write something like this yourself, and now you don’t have to.


Researchers have a strong need for cross-platform compatibility?

Why waste time porting something to a different platform when the platform of choice already has a ready-made script?

I’ll admit right away that I know nothing about the subject at hand, but browsing a bit through this repo, we’re most definitely not talking about 200 trivial lines of code…


Exactly. The output is some models you can dump into any of a number of inference engines. The hard part is learning/training, and NV's constant work to make proprietary, well-featured toolkits keeps the world hooked on their devices for machine learning. It's a wildly good strategy, and a rarely practiced one: non-cooperatively giving away software for free.


The thing is, because this is an nvidia product, there are people being paid to work on it. What’s going to get made will be what gives nvidia gpus the most features, or rather, the biggest edge.

We aren’t quite yet at the stage of this technology where there is something like an OpenGL equivalent for NeRFs that can be expected to work across many architectures. It just means that nvidia will probably get some early-mover commitment advantage before the rest of the ML community catches up by implementing a cross-platform library. That said, there is interesting work by Microsoft using DirectX 12 to let any DX12 GPU accelerate TensorFlow. So it's not inconceivable that somebody implements something like this in TensorFlow so that it is cross-platform.


> We aren’t quite yet at the stage of this technology that there is something like an OpenGL equivalent for NeRFs that can be expected to work across many architectures

If you export the original NeRF code to ONNX, it'll execute just fine on non-CUDA GPUs using (for example) the DirectML backend on Windows.


The great set of software libraries and debugging tools available to CUDA users.

It is up to AMD and Intel to actually improve their software offerings, not researchers to downgrade themselves to lesser tools.


What is a neural field?


A neural network mapping spacetime coordinates to output values.
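In PyTorch terms, that definition is just a small MLP queried per coordinate. This is a minimal illustrative sketch (the class name, layer sizes, and the RGB+density output convention are assumptions, loosely following NeRF-style models), not Kaolin Wisp's actual API:

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Maps a 3D coordinate (x, y, z) to an output value
    (here 4 channels, e.g. RGB color + density)."""

    def __init__(self, in_dim=3, hidden=64, out_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        # coords: (batch, 3) query points in space
        return self.net(coords)

field = NeuralField()
coords = torch.rand(8, 3)   # 8 query points
values = field(coords)      # one output value per point
print(values.shape)         # torch.Size([8, 4])
```

Training fits the network's weights so that querying it at any continuous coordinate reproduces the observed scene; the "field" is the function itself, not a stored grid.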


Is there a way to generate industry-standard 3D models from NeRF? Say, something that Unity or Unreal can use?


It's volumetric data, so you can use marching cubes to extract a mesh. Nvidia has also released nvdiffrec with the explicit goal of generating meshes, materials, and lighting.
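A rough sketch of the marching-cubes step, assuming scikit-image is available and using a synthetic sphere in place of densities sampled from a trained NeRF:

```python
import numpy as np
from skimage import measure  # scikit-image's marching cubes

# Stand-in for a NeRF sampled on a lattice: a density grid that is
# positive inside the unit sphere and negative outside.
n = 32
axis = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
density = 1.0 - np.sqrt(x**2 + y**2 + z**2)

# Extract the iso-surface at level 0.5 as a triangle mesh.
# verts: (V, 3) vertex positions, faces: (F, 3) triangle indices.
verts, faces, normals, _ = measure.marching_cubes(density, level=0.5)

print(verts.shape[1], faces.shape[1])  # 3 3: triangles in 3D
```

Once you have vertices and faces, writing them out as OBJ or glTF gets you something Unity or Unreal can import, though the result won't carry NeRF's view-dependent appearance without extra work (which is what nvdiffrec targets).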

https://nvlabs.github.io/nvdiffrec/


Google has been putting significant work into a neural fields extension of their very popular JAX machine-learning toolkit too: https://github.com/google-research/jax3d


This appears to be a game-changer.



