
We use the CUDA device allocator for allocations on the GPU via Rust's default allocator.
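Roughly, the shape is a `#[global_allocator]` that forwards to the device-side `malloc`/`free`. A simplified sketch (not our exact code; alignment handling and error paths are elided):

    use core::alloc::{GlobalAlloc, Layout};

    // Device-side malloc/free provided by the CUDA runtime heap.
    extern "C" {
        fn malloc(size: usize) -> *mut u8;
        fn free(ptr: *mut u8);
    }

    struct CudaDeviceAlloc;

    unsafe impl GlobalAlloc for CudaDeviceAlloc {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            // Device malloc guarantees 16-byte alignment; larger alignments
            // are not handled in this sketch.
            malloc(layout.size())
        }
        unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
            free(ptr)
        }
    }

    #[global_allocator]
    static ALLOC: CudaDeviceAlloc = CudaDeviceAlloc;

With that in place, `Box`, `Vec`, `String`, etc. allocate out of the device heap from inside kernel code.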

Have you considered “allocating” out of shared memory instead?

Flip on the pedantic switch: we have std::fs, std::time, some of std::io, and std::net(!). While the `libc` calls go to the host, all the `std` code in between runs on the GPU.
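To make that concrete, this is the kind of ordinary `std` code we mean (illustrative only; the file name and address are made up). The path handling, buffering, and `Duration` math execute on the GPU, while the underlying `open`/`read`/`connect` calls are forwarded to the host:

    use std::fs;
    use std::io::Write;
    use std::net::TcpStream;
    use std::time::Instant;

    fn main() -> std::io::Result<()> {
        let start = Instant::now();                              // std::time
        let config = fs::read_to_string("config.toml")?;         // std::fs  -> host open/read
        let mut stream = TcpStream::connect("127.0.0.1:8080")?;  // std::net -> host socket calls
        stream.write_all(config.as_bytes())?;                    // std::io
        println!("took {:?}", start.elapsed());
        Ok(())
    }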

Author here! Flip on the pedantic switch, we agree ;-)

You might be interested in a previous blog post where we showed one codebase running on many types of GPUs: https://rust-gpu.github.io/blog/2025/07/25/rust-on-every-gpu...


One of the founders here, feel free to ask whatever. We purposefully didn't put much technical detail in the post as it is an announcement post (other people posted it here, we didn't).


1. What does it mean to be a GPU-native process?

2. Can modern GPU hardware efficiently make system calls? (if you can do this, you can eventually build just about anything, treating the CPU as just another subordinate processor).

3. At what order-of-magnitude size might being GPU-native break down? (Can CUDA dynamically load new code modules into an existing process? That used to be problematic years ago)

Thinking about what's possible, this looks like an exceptionally fun project. Congrats on working on an idea that seems crazy at first glance but seems more and more possible the more you think about it. Still it's all a gamble of whether it'll perform well enough to be worth writing applications this way.


1. The GPU owns the control loop and only sparingly kicks out to the CPU when it can't do something (there's a sketch of the general shape at the end of this comment).

2. Yes

3. We're still investigating the limitations. A lot of them are hardware-dependent; data center cards obviously have higher limits and more capability than desktop cards.

Thanks! It is super fun trailblazing and realizing more of the pieces are there than everybody expects.
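To make #1 a bit more concrete, here is one common shape for the pattern, modeled with plain threads and a channel standing in for the shared GPU<->CPU request queue (illustrative only, not our exact design): the GPU-side loop owns all the work and only writes a request for the host when it hits something it can't do itself.

    use std::sync::mpsc;
    use std::thread;

    // Things the control loop can't do on its own and hands to the host.
    enum HostRequest {
        ReadFile(String, mpsc::Sender<Vec<u8>>),
    }

    fn main() {
        let (to_host, from_gpu) = mpsc::channel::<HostRequest>();

        // CPU side: a subordinate service loop that only fulfills requests.
        let host = thread::spawn(move || {
            for req in from_gpu {
                match req {
                    HostRequest::ReadFile(path, reply) => {
                        let bytes = std::fs::read(&path).unwrap_or_default();
                        let _ = reply.send(bytes);
                    }
                }
            }
        });

        // "GPU" side: owns the control loop, does its own work, and only
        // sparingly kicks out to the CPU.
        for step in 0..3 {
            // ... GPU-resident work would happen here ...
            if step == 1 {
                let (reply_tx, reply_rx) = mpsc::channel();
                to_host
                    .send(HostRequest::ReadFile("data.bin".into(), reply_tx))
                    .unwrap();
                let bytes = reply_rx.recv().unwrap();
                println!("host returned {} bytes", bytes.len());
            }
        }

        drop(to_host); // closing the queue ends the host service loop
        host.join().unwrap();
    }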


Pedantic note: rust-cuda was created by https://github.com/RDambrosio016 and he is not currently involved in VectorWare. rust-gpu was created by the folks at Embark Studios. We are the current maintainers of both.

We didn't post this or write the title; we would never claim we created the projects from scratch.


My bad! "contributors" is more accurate, but HN doesn't allow editing titles, sadly :(


HN allows the submitter to edit the title, at least it did last time I checked.


It still does, but only for the first several minutes after submission.

I routinely have to fix the autoformatting done by HN.


No worries, just wanted to correct it for folks. Thanks for posting!


folks at Embark Studios

Seems like Embark has disembarked from Rust and dropped support for it altogether.


Yes, it is all these folks getting together and getting resources to push those projects to the next level: https://www.vectorware.com/team/

wgpu, ash, and cudarc are great. We're focusing on the actual code that runs on the GPU in Rust, and we work with those projects. We have cust in rust-cuda, but that existed before cudarc and we have been seriously discussing just killing it in favor of cudarc.


We are investing in them and they form the basis of what we are doing. That being said, we are also exploring other technical avenues with different tradeoffs; we don't want to assume a solution merely because we are familiar with it.


More software than you think can run fully on the GPU, especially with datacenter cards. We'll be sharing some demos in the coming weeks.


We have some demos coming in the next couple of weeks. The hardware is there; the software isn't!


