
Get Llama.cpp working as a backend inference provider #41

Closed
@rmusser01

Description


Title.

As a user, I should be able to select Llama.cpp, set it up, and use it as a local inference server.
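
For context, a minimal sketch of what the client side of this could look like: llama.cpp's bundled `llama-server` exposes an OpenAI-compatible `/v1/chat/completions` endpoint (default port 8080), so the app could talk to it over HTTP once the user points it at their local server. The URL constant, the `chat` helper, and the sampling parameters below are illustrative assumptions, not the app's actual integration code.

```python
import requests

# Assumes llama-server is already running with a model loaded, e.g.:
#   llama-server -m ./models/model.gguf --port 8080
# The server speaks the OpenAI-compatible chat completions protocol.
LLAMACPP_URL = "http://localhost:8080/v1/chat/completions"  # hypothetical default


def chat(prompt: str) -> str:
    """Send a single-turn chat request to the local llama.cpp server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,   # illustrative sampling settings
        "max_tokens": 256,
    }
    resp = requests.post(LLAMACPP_URL, json=payload, timeout=120)
    resp.raise_for_status()
    # OpenAI-style response shape: first choice's message content
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Say hello in one sentence."))
```

In practice the host and port would come from the app's provider settings rather than a hard-coded constant, so the user can point the integration at wherever their llama.cpp server is running.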
