
Commit 3766702 (authored Jun 3, 2025)
Merge pull request #40 from rmusser01/dev: Sync
2 parents d99aa00 + 534a5ef


55 files changed: +9633 −1738 lines
 

Docs/Design/Architecture_and_Design.md

Lines changed: 34 additions & 1 deletion
@@ -30,4 +30,37 @@
 - **`tldw_chatbook/Widgets/`**: Reusable UI components used across different screens.
 - **`tldw_chatbook/Config.py`**: Contains all configuration settings for the application, including API keys, database paths, and other settings.
 - **`tldw_chatbook/Constants.py`**: Contains all constants used throughout the application, such as default values and error messages.
-- **`tldw_chatbook/Logging_Config.py`**: Contains the logging configuration for the application, setting up loggers, handlers, and formatters.
+- **`tldw_chatbook/Logging_Config.py`**: Contains the logging configuration for the application, setting up loggers, handlers, and formatters.
+
+## LLM Backend Integrations
+
+This section details the various Large Language Model (LLM) inference backends integrated into `tldw_chatbook`.
+
+### Llama.cpp Integration
+
+-
+
+### Llamafile Integration
+
+### Ollama Integration
+
+### vLLM Integration
+
+### Transformers Integration
+
+### ONNX Runtime Integration
+
+### MLX-LM Integration
+
+- https://github.com/ml-explore/mlx-lm/tree/main
+
+The application now supports MLX-LM for running local language models optimized for Apple Silicon hardware.
+Users can manage MLX-LM instances via the "LLM Management" tab, allowing configuration of:
+
+* **Model Path**: Specify a HuggingFace model ID compatible with MLX or a path to a local MLX model.
+* **Server Host & Port**: Configure the network address for the MLX-LM server.
+* **Additional Arguments**: Pass extra command-line arguments to the `mlx_lm.server` process.
+
+The integration starts a local `mlx_lm.server` process and interacts with it, assuming an OpenAI-compatible API endpoint (typically at `/v1`). This allows for efficient local inference leveraging MLX's performance benefits on supported hardware.
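The launch-and-query flow described above can be sketched as follows. This is a minimal illustration, not tldw_chatbook's actual implementation: the helper names, the default port, and the example model ID are invented for this sketch; only the `mlx_lm.server` module path, its `--model`/`--host`/`--port` flags, and the OpenAI-style `/v1/chat/completions` route reflect the integration described in the text.

```python
import json
import shlex
import urllib.request


def build_mlx_server_command(model_path, host="127.0.0.1", port=8080, extra_args=""):
    """Assemble the argv list for launching mlx_lm.server.

    model_path maps to the "Model Path" field, host/port to "Server Host
    & Port", and extra_args to "Additional Arguments" (split shell-style
    and appended verbatim).
    """
    cmd = [
        "python", "-m", "mlx_lm.server",
        "--model", model_path,
        "--host", host,
        "--port", str(port),
    ]
    cmd += shlex.split(extra_args)
    return cmd


def chat_completion_request(host, port, prompt, model="local"):
    """Build an OpenAI-compatible chat request aimed at the local /v1 endpoint."""
    url = f"http://{host}:{port}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Launching and querying (commented out so the sketch stays side-effect free;
# the model ID below is a hypothetical example):
# import subprocess
# proc = subprocess.Popen(build_mlx_server_command(
#     "mlx-community/Mistral-7B-Instruct-v0.3-4bit", port=8080))
# resp = urllib.request.urlopen(chat_completion_request("127.0.0.1", 8080, "Hi"))
```

Because the server speaks the OpenAI wire format, the same request-building code works against any of the other OpenAI-compatible backends listed above by changing only the host and port.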

Docs/Design/Coding_Tab.md

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+https://github.com/eyaltoledano/claude-task-master/tree/main
+https://github.com/eyaltoledano/claude-task-master/blob/main/docs/tutorial.md
+https://repoprompt.com/
+https://github.com/snarktank/ai-dev-tasks

0 commit comments