A very naive WGSL backend for LLVM, intended for porting CUDA programs to the browser via WebGPU.
- If you are cloning this repo for the first time, clone it recursively so that `dawn`'s repo is fetched as well:

  ```sh
  git clone https://github.com/grx6741/llvm-wgsl.git --recursive
  ```
- Install LLVM 19 from source or with a package manager:

  ```sh
  sudo apt install llvm-19-dev
  ```
- Configure with `cmake`:

  ```sh
  # Note: may not work with other compilers like gcc
  export CC=clang
  export CXX=clang++
  cmake -B build -G Ninja # only tested with Ninja
  ```
- Build with `ninja`:

  ```sh
  cmake --build build
  ```
- Compile a CUDA program to LLVM IR with `clang`. Adding `-S` and `-o cuda.ll` makes it emit textual IR into `cuda.ll`; an example `cuda.cu` is sketched below this step:

  ```sh
  clang -x cuda -S -emit-llvm cuda.cu --cuda-gpu-arch=sm_50 --cuda-device-only --cuda-path=${CUDA_PATH} -o cuda.ll
  ```
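  For reference, a minimal `cuda.cu` that this command accepts might look like the following; the kernel name and signature are illustrative assumptions, not files shipped with this repo.

  ```cuda
  // cuda.cu -- hypothetical example input; any simple __global__ kernel works.
  // Compiled device-only, so no host-side launch code is needed here.
  __global__ void add(const float* a, const float* b, float* out, int n) {
      // One thread per element, the canonical CUDA indexing pattern.
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) {
          out[i] = a[i] + b[i];
      }
  }
  ```

  A `__global__` kernel like this is presumably the kind of function the backend would eventually lower to a WGSL compute entry point.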
- Run the WGSL backend:

  ```sh
  ./build/bin/llvm-wgsl cuda.ll
  ```
- The output will be a WGSL file.
- Nothing is implemented yet.