talonmies

I'd like to allocate executable memory in CUDA ...

There is no such thing as user-allocatable "executable" memory. All the empirical evidence I have seen, and the architecture whitepapers NVIDIA has released over the years, suggest that the GPU has a programmable MMU and that NVIDIA has chosen to logically divide the GPU DRAM into regions for different functions (global memory, constant memory, local memory, code pages). The code pages appear to be fully inaccessible from user code by design.

write SASS/CUBIN code there, and then execute this code.

I don’t see how that could work either. The CUDA execution model requires static allocation of global symbols, registers, local memory, and constant memory in a linking phase which must be performed before code is loaded onto the GPU and executed. This linking phase can be done at compile time or at runtime, but it must be done. This is the purpose of the nvJitLink API which you reject in your question. There is, to the best of my knowledge, no way you could conceivably run code whose resource requirements are not known a priori.
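To illustrate what the supported path looks like: the closest you can get to "write code and run it" is to hand the driver a fully linked cubin image via the driver API, which performs the final placement into code pages for you. This is a minimal sketch, not from the answer above; the file name "kernel.cubin" and kernel name "myKernel" are illustrative, and error checking is elided. It assumes a pre-linked cubin produced offline (e.g. by nvcc) for the target architecture.

```cpp
// Sketch of the supported runtime-loading path with the CUDA driver API.
// The driver resolves resource requirements (registers, local/constant
// memory) at module load time -- before any launch -- which is exactly
// the linking step the answer describes as unavoidable.
#include <cuda.h>
#include <cstdio>
#include <vector>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Read a pre-linked cubin image from disk into host memory
    // (file name is hypothetical).
    FILE* f = std::fopen("kernel.cubin", "rb");
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<char> image(size);
    std::fread(image.data(), 1, size, f);
    std::fclose(f);

    // Hand the image to the driver: it validates and places the code
    // into the (user-inaccessible) code pages.
    CUmodule mod;
    cuModuleLoadData(&mod, image.data());

    // Look up and launch a kernel from the loaded module
    // ("myKernel" is an illustrative name).
    CUfunction fn;
    cuModuleGetFunction(&fn, mod, "myKernel");
    cuLaunchKernel(fn, 1, 1, 1, 32, 1, 1, 0, nullptr, nullptr, nullptr);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```

Note that even here the user never writes into executable memory directly: the image goes through `cuModuleLoadData`, and the driver controls where and how the code lands.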

Finally, I would regard the ability to bypass all of the protections which NVIDIA have implemented in their driver and runtime, and to inject and run arbitrary code on the GPU, as a potential security flaw, and would expect NVIDIA to eliminate it if such a vector were documented to exist.

