
Commit 112e822

feat: add example script for CUDA usage and update README instructions
1 parent 8edb984

File tree

README.md
examples/with_cuda.py
examples/with_gpu.py

3 files changed: +38 -29 lines

README.md

Lines changed: 3 additions & 3 deletions

@@ -36,9 +36,9 @@ pip install -U kokoro-onnx

 1. Install [uv](https://docs.astral.sh/uv/getting-started/installation) for isolated Python (Recommended).

-   Basically, open the terminal (PowerShell / Bash) and run the command listed on their website.
-
-   _Note: you don't have to use `uv`, but it makes things much simpler. You can use regular Python as well._
+   ```console
+   pip install uv
+   ```

 2. Create new project folder (you name it)
 3. Run in the project folder
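
Taken together, the updated README step amounts to installing uv from PyPI and then working from a fresh project folder. A short console sketch of that flow for readers following along; the `uv --version` check and the folder name are illustrative additions, not part of the diff, and the actual step-3 command lives outside this hunk:

```console
pip install uv
uv --version                      # confirm uv is on PATH
mkdir kokoro-demo && cd kokoro-demo
```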

examples/with_cuda.py

Lines changed: 35 additions & 0 deletions (new file)

"""
Note:
On Linux you also need to run: apt-get install portaudio19-dev
The [gpu] extra is needed only on Linux and Windows; macOS uses the GPU by default.
You can see which execution provider is used by enabling debug logging. See with_log.py.
This script was tested with CUDA 12.1 and cuDNN 9.1.0 on an RTX 4060 Ti.
You may need to install the matching CUDA and cuDNN versions (12.1, 9.1.0).
See https://developer.nvidia.com/cuda-12-1-0-download-archive and https://developer.nvidia.com/cudnn-9-1-0-download-archive

Setup:
pip install -U kokoro-onnx[gpu] soundfile
wget https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files-v1.0/kokoro-v1.0.onnx
wget https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files-v1.0/voices-v1.0.bin

Run with Python:
python examples/with_cuda.py

Run with uv (if you cloned the repo):
uv run --extra gpu ./examples/with_cuda.py
"""

import onnxruntime as ort
import soundfile as sf

from kokoro_onnx import Kokoro

# Check which execution providers ONNX Runtime can see on this machine.
providers = ort.get_available_providers()
print("Available providers:", providers)  # Make sure CUDAExecutionProvider is listed
print(f'Is CUDA available: {"CUDAExecutionProvider" in providers}')

# Load the model and voice data, then synthesize speech.
kokoro = Kokoro("kokoro-v1.0.onnx", "voices-v1.0.bin")
samples, sample_rate = kokoro.create(
    "Hello. This audio was generated by kokoro!", voice="af_sarah", speed=1.0, lang="en-us"
)

# Write the generated samples to a WAV file.
sf.write("audio.wav", samples, sample_rate)
print("Created audio.wav")
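
The script above only checks that `CUDAExecutionProvider` is available, not which provider ONNX Runtime actually selects for the loaded model. A minimal sketch of one way to verify that with plain onnxruntime, assuming `kokoro-v1.0.onnx` sits in the working directory; this is not part of the commit, and kokoro-onnx may configure its own session differently (the debug logging route via with_log.py is what the docstring suggests):

```python
import onnxruntime as ort

# Ask for CUDA first and fall back to CPU if it is unavailable.
session = ort.InferenceSession(
    "kokoro-v1.0.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# If CUDA was actually selected, it appears first in the session's provider list.
print("Providers in use:", session.get_providers())
```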

examples/with_gpu.py

Lines changed: 0 additions & 26 deletions
This file was deleted.

0 commit comments
