
I'd like to allocate executable memory in CUDA, write SASS/CUBIN code there, and then execute this code. On the CPU under Linux this is easy and well-documented: a combination of mmap and mprotect does the job, letting you allocate memory and mark it executable.
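
For concreteness, here is a minimal sketch of the CPU-side technique I mean (x86-64 Linux; the bytes are hand-assembled machine code that just returns 1337):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    // x86-64 machine code for: mov eax, 1337; ret
    unsigned char code[] = { 0xb8, 0x39, 0x05, 0x00, 0x00, 0xc3 };
    // Allocate a writable page, copy the code in, then flip the
    // protection to read/execute before calling it.
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memcpy(buf, code, sizeof(code));
    mprotect(buf, 4096, PROT_READ | PROT_EXEC);
    int (*fun)(void) = (int (*)(void)) buf;
    printf("%d\n", fun()); // prints 1337
    munmap(buf, 4096);
    return 0;
}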

I have tried the following on an RTX 4070, showing that memory is not executable by default (compile via nvcc -arch=sm_89 FILE.cu -lcuda):

#include <stdio.h>
#include <cstdint> // for uint64_t
#include <cuda.h>
#include <cassert>

typedef void (*funptr)(int *);

__global__ void globalfunc(int *a, void *fun)
{
    funptr ptr = (funptr) fun;
    ptr(a);
}

int main(void)
{
    int h_a[1];
    int *d_a;
    uint64_t h_ins[32] =
    {
        // This is the SASS code for sm_89 with function signature
        // __device__ void myfunc(int * a)
        // {
        //     *a = 1337;
        // }
        0x0000053900037802,
        0x000fe20000000f00,
        0x0000460000047ab9,
        0x000fc80000000a00,
        0x0000000304007985,
        0x0001e4000c101904,
        0x0000000014007950,
        0x001fea0003e00000,
        0xfffffff000007947,
        0x000fc0000383ffff,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000,
        0x0000000000007918,
        0x000fc00000000000
    };
    void *d_ins;
    cudaMalloc((void **) &d_a, 1 * sizeof(int));
    cudaMalloc((void **) &d_ins, 32 * sizeof(uint64_t));
    cudaMemcpy(d_ins, h_ins, 32 * sizeof(uint64_t), cudaMemcpyHostToDevice);
    // Executable code seems to require 128-byte alignment, at least on the Ada
    // architecture. cudaMalloc allegedly allocates on 256-byte boundaries, so
    // we assert that this is indeed the case.
    assert(((uint64_t) d_ins) % 256 == 0);
    // Launch the kernel with one block and one thread
    globalfunc<<<1, 1>>>(d_a, d_ins);
    // Copy the result back to the host
    cudaMemcpy(h_a, d_a, sizeof(int), cudaMemcpyDeviceToHost);
    // Print the result
    printf("*h_a = %d\n", *h_a);
    // Free device memory
    cudaFree(d_a);
    cudaFree(d_ins);
    return 0;
}

That is, passing the kernel the address of an actual __device__ void myfunc(int * a) works as intended, but pointing it at the SASS instructions copied into global memory only yields *h_a = 0.
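
(For what it's worth, checking for errors after the launch distinguishes a faulting kernel from one that silently does nothing. A minimal sketch, dropped into main() above right after the launch:)

    globalfunc<<<1, 1>>>(d_a, d_ins);
    cudaError_t err = cudaGetLastError();   // launch/configuration errors
    if (err != cudaSuccess)
        printf("launch: %s\n", cudaGetErrorString(err));
    err = cudaDeviceSynchronize();          // faults while the kernel runs
    if (err != cudaSuccess)
        printf("sync: %s\n", cudaGetErrorString(err));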

I have also tried cuMemSetAccess, reusing the code provided in this answer and changing the line

accessDesc.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;

to

accessDesc.flags = (CUmemAccess_flags) 0x7;

since this seems to correspond to readable, writable and executable memory access in the NVIDIA Linux Open Kernel Modules' (internal?) header nvport/memory.h. However, such a change yields a CUDA error.
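
For completeness, the sequence I attempted looks roughly like this (a sketch adapted from that answer; error handling omitted, device ordinal 0 assumed):

#include <cuda.h>
#include <stdio.h>

int main(void)
{
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = 0;

    size_t gran;
    cuMemGetAllocationGranularity(&gran, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);

    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, gran, &prop, 0);

    CUdeviceptr ptr;
    cuMemAddressReserve(&ptr, gran, 0, 0, 0);
    cuMemMap(ptr, gran, 0, handle, 0);

    CUmemAccessDesc accessDesc = {};
    accessDesc.location = prop.location;
    // The documented flags stop at CU_MEM_ACCESS_FLAGS_PROT_READWRITE (0x3);
    // 0x7 is the read/write/execute value from nvport/memory.h.
    accessDesc.flags = (CUmemAccess_flags) 0x7;
    CUresult res = cuMemSetAccess(ptr, gran, &accessDesc, 1);
    printf("cuMemSetAccess returned %d\n", (int) res); // non-zero error here

    cuMemUnmap(ptr, gran);
    cuMemAddressFree(ptr, gran);
    cuMemRelease(handle);
    cuCtxDestroy(ctx);
    return 0;
}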

I am aware of NVIDIA's nvJitLink, but I am not interested in answers involving this here.

So, how can I allocate and use executable memory for NVIDIA cards?

When answering the question, you may assume that I am on a recent Ubuntu system with sudo access, on an x86 CPU and RTX 4070 GPU.

asked Nov 22, 2024 at 15:05
  • Can someone please explain why my question got downvoted? I am using this as a sort of last resort, and I really tried my best in formulating the question and providing good information. Commented Nov 22, 2024 at 15:34
  • Agreed, upvoted. Commented Nov 22, 2024 at 19:48
  • What makes you think it is even possible? I very much doubt it is. Commented Nov 23, 2024 at 12:10
  • @talonmies How is the GPU able to execute instructions if they are not in memory? Indeed, there does not seem to be a documented way of obtaining executable memory, which is partly why I'm asking here. Commented Nov 23, 2024 at 12:49
  • You are assuming that the GPU can execute arbitrary instructions in the logical global memory address space. As best I can tell, that isn’t possible. The logical memory segments for code are separate and not accessible from user space, except via the exposed APIs you apparently aren’t interested in using. Commented Nov 23, 2024 at 12:57

1 Answer

I'd like to allocate executable memory in CUDA ...

There is no such thing as user-allocatable "executable" memory. All the empirical evidence I have seen, and the architecture whitepapers NVIDIA has released over the years, suggest that the GPU has a programmable MMU and that NVIDIA has chosen to logically divide the GPU DRAM into regions for different functions (global memory, constant memory, local memory, code pages). The last of these appears fully inaccessible from user code by design.

write SASS/CUBIN code there, and then execute this code.

I don’t see how that could work either. The CUDA execution model requires static allocation of global symbols, registers, local memory, and constant memory in a linking phase, which must be performed before code is loaded onto the GPU and executed. This linking phase can be done at compile time or at runtime, but it must be done; that is the purpose of the nvJitLink API which you reject in your question. The GPU runtime must then take the resource requirements of the linked code payload, reserve the necessary register file pages, statically defined shared memory, and so on, and run the code when (or if) those resources are available on the target device. There is, to the best of my knowledge, no way you could conceivably run code whose resource requirements are not known and for which the runtime has not reserved the necessary GPU resources a priori.

Finally, I would regard the ability to bypass all of the protections NVIDIA has implemented in its driver and runtime, and to inject and run arbitrary code on the GPU, as a potential security flaw, and I would expect NVIDIA to eliminate such a vector if it were documented to exist.


1 Comment

Thanks for your elaborate response. I would still like to see some sources for your claims (I do, however, believe you are right). I resorted to writing the SASS instructions directly into a CUBIN image on the heap, and loading it through cuModuleLoadData and cuModuleGetFunction.
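
(A sketch of that workaround, assuming the image is a complete CUBIN containing an extern "C" __global__ kernel; "mykernel.cubin" and "mykernel" are placeholder names, and here the image is read from disk rather than assembled in memory:)

#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Read a CUBIN image into a heap buffer; any other way of
    // producing the bytes (e.g. a runtime assembler) works too.
    FILE *f = fopen("mykernel.cubin", "rb");
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    void *image = malloc(size);
    fread(image, 1, size, f);
    fclose(f);

    // Load the image and look up the (extern "C") kernel by name.
    CUmodule mod;
    cuModuleLoadData(&mod, image);
    CUfunction fun;
    cuModuleGetFunction(&fun, mod, "mykernel");

    CUdeviceptr d_a;
    cuMemAlloc(&d_a, sizeof(int));
    void *args[] = { &d_a };
    cuLaunchKernel(fun, 1, 1, 1, 1, 1, 1, 0, NULL, args, NULL);
    cuCtxSynchronize();

    int h_a;
    cuMemcpyDtoH(&h_a, d_a, sizeof(int));
    printf("*h_a = %d\n", h_a);

    cuMemFree(d_a);
    cuModuleUnload(mod);
    free(image);
    cuCtxDestroy(ctx);
    return 0;
}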
