Commit ff8a696

build(Makefile): add back single target to build native llama-cpp (#2448)

Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent: 10c64db

2 files changed (+17, -4 lines)

Makefile

Lines changed: 8 additions & 0 deletions
@@ -672,6 +672,14 @@ else
 	LLAMA_VERSION=$(CPPLLAMA_VERSION) $(MAKE) -C backend/cpp/${VARIANT} grpc-server
 endif
 
+# This target is for manually building a variant with-auto detected flags
+backend-assets/grpc/llama-cpp: backend-assets/grpc
+	cp -rf backend/cpp/llama backend/cpp/llama-cpp
+	$(MAKE) -C backend/cpp/llama-cpp purge
+	$(info ${GREEN}I llama-cpp build info:avx2${RESET})
+	$(MAKE) VARIANT="llama-cpp" build-llama-cpp-grpc-server
+	cp -rfv backend/cpp/llama-cpp/grpc-server backend-assets/grpc/llama-cpp
+
 backend-assets/grpc/llama-cpp-avx2: backend-assets/grpc
 	cp -rf backend/cpp/llama backend/cpp/llama-avx2
 	$(MAKE) -C backend/cpp/llama-avx2 purge
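With this change the natively-optimized variant can be built by hand with a single target, mirroring the existing per-flagset targets such as `backend-assets/grpc/llama-cpp-avx2`. A minimal usage sketch, assuming you are at the root of a LocalAI checkout and the usual llama.cpp build prerequisites (C/C++ toolchain, cmake, gRPC/protobuf) are already installed:

```bash
# Build the llama.cpp gRPC backend with whatever flags the local machine supports;
# the new target copies backend/cpp/llama to backend/cpp/llama-cpp, purges any
# previous build, and places the resulting grpc-server in backend-assets/grpc/llama-cpp.
make backend-assets/grpc/llama-cpp
```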

docs/content/docs/advanced/advanced-usage.md

Lines changed: 9 additions & 4 deletions
@@ -351,7 +351,7 @@ For example, to start vllm manually after compiling LocalAI (also assuming runni
 ./local-ai --external-grpc-backends "vllm:$PWD/backend/python/vllm/run.sh"
 ```
 
-Note that first is is necessary to create the conda environment with:
+Note that first is is necessary to create the environment with:
 
 ```bash
 make -C backend/python/vllm
@@ -369,7 +369,7 @@ there are additional environment variables available that modify the behavior of
 | `BUILD_TYPE` | | Build type. Available: `cublas`, `openblas`, `clblas` |
 | `GO_TAGS` | | Go tags. Available: `stablediffusion` |
 | `HUGGINGFACEHUB_API_TOKEN` | | Special token for interacting with HuggingFace Inference API, required only when using the `langchain-huggingface` backend |
-| `EXTRA_BACKENDS` | | A space separated list of backends to prepare. For example `EXTRA_BACKENDS="backend/python/diffusers backend/python/transformers"` prepares the conda environment on start |
+| `EXTRA_BACKENDS` | | A space separated list of backends to prepare. For example `EXTRA_BACKENDS="backend/python/diffusers backend/python/transformers"` prepares the python environment on start |
 | `DISABLE_AUTODETECT` | `false` | Disable autodetect of CPU flagset on start |
 | `LLAMACPP_GRPC_SERVERS` | | A list of llama.cpp workers to distribute the workload. For example `LLAMACPP_GRPC_SERVERS="address1:port,address2:port"` |
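As a usage sketch for the variable this row documents (the backend paths are just the example values from the table, not a recommendation from the commit):

```bash
# Illustrative only: prepare two python backend environments when local-ai starts
EXTRA_BACKENDS="backend/python/diffusers backend/python/transformers" ./local-ai
```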

@@ -475,15 +475,15 @@ If you wish to build a custom container image with extra backends, you can use t
 ```Dockerfile
 FROM quay.io/go-skynet/local-ai:master-ffmpeg-core
 
-RUN PATH=$PATH:/opt/conda/bin make -C backend/python/diffusers
+RUN make -C backend/python/diffusers
 ```
 
 Remember also to set the `EXTERNAL_GRPC_BACKENDS` environment variable (or `--external-grpc-backends` as CLI flag) to point to the backends you are using (`EXTERNAL_GRPC_BACKENDS="backend_name:/path/to/backend"`), for example with diffusers:
 
 ```Dockerfile
 FROM quay.io/go-skynet/local-ai:master-ffmpeg-core
 
-RUN PATH=$PATH:/opt/conda/bin make -C backend/python/diffusers
+RUN make -C backend/python/diffusers
 
 ENV EXTERNAL_GRPC_BACKENDS="diffusers:/build/backend/python/diffusers/run.sh"
 ```
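A hedged follow-up to the Dockerfile above; the image tag and port mapping are assumptions for illustration, not part of the commit:

```bash
# Build the custom image (tag is arbitrary) and run it; LocalAI listens on 8080 by default
docker build -t localai-extra-backends .
docker run -p 8080:8080 localai-extra-backends
```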
@@ -525,3 +525,8 @@ A list of the environment variable that tweaks parallelism is the following:
 
 Note that, for llama.cpp you need to set accordingly `LLAMACPP_PARALLEL` to the number of parallel processes your GPU/CPU can handle. For python-based backends (like vLLM) you can set `PYTHON_GRPC_MAX_WORKERS` to the number of parallel requests.
 
+### Disable CPU flagset auto detection in llama.cpp
+
+LocalAI will automatically discover the CPU flagset available in your host and will use the most optimized version of the backends.
+
+If you want to disable this behavior, you can set `DISABLE_AUTODETECT` to `true` in the environment variables.
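Tying the new section to the parallelism note above it, an illustrative startup line (the numeric values are placeholders, not recommendations from the commit):

```bash
# Placeholder values: disable CPU flagset autodetection and bound parallelism
DISABLE_AUTODETECT=true LLAMACPP_PARALLEL=4 PYTHON_GRPC_MAX_WORKERS=4 ./local-ai
```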
