Sync #40

Merged: 40 commits, Jun 3, 2025

- 40037f3 Create __init__.py (rmusser01, Jun 1, 2025)
- ffc7284 MLX placeholder (rmusser01, Jun 1, 2025)
- d6c5224 Update chat_message.py (rmusser01, Jun 1, 2025)
- dacd0f2 stuff (rmusser01, Jun 1, 2025)
- b389a5f eh (rmusser01, Jun 1, 2025)
- dfe3efd Update worker_events.py (rmusser01, Jun 1, 2025)
- 4e54172 aaaaaaaaaaaaaaa (rmusser01, Jun 1, 2025)
- 8dda1a4 checkpoint (rmusser01, Jun 1, 2025)
- c56ab75 failed attempt at fixing large copy/paste (rmusser01, Jun 1, 2025)
- 7d9be56 break out ollama and vllm from llm mgmt events (rmusser01, Jun 1, 2025)
- 989546d eh (rmusser01, Jun 1, 2025)
- 3ce48e8 ollama (rmusser01, Jun 1, 2025)
- bfaf483 funky buttons (rmusser01, Jun 1, 2025)
- 58c5eb9 Create Coding_Tab.md (rmusser01, Jun 1, 2025)
- 01d914e footer widget (rmusser01, Jun 1, 2025)
- 031c801 eh attempted footer fixes to show ctrl+p (rmusser01, Jun 1, 2025)
- 2a20b18 Update Constants.py (rmusser01, Jun 1, 2025)
- 182bbc4 emoji picker in notes (rmusser01, Jun 1, 2025)
- b72b562 fix (rmusser01, Jun 1, 2025)
- 564ffce checkpoint (rmusser01, Jun 2, 2025)
- 42a2f26 wew (rmusser01, Jun 2, 2025)
- a031a87 llama.cpp working (on windows at least) (rmusser01, Jun 2, 2025)
- 24170b2 Added a reference guide for llama.cpp server (rmusser01, Jun 2, 2025)
- a21604c llamafile being a pain (rmusser01, Jun 2, 2025)
- 74490e0 strip thinking tags toggle (rmusser01, Jun 2, 2025)
- c335c84 coding tab placeholder, thinking tags toggle broken (rmusser01, Jun 2, 2025)
- f45119e Llamafile, pyproject.toml, progress on vllm (rmusser01, Jun 3, 2025)
- 5351618 Themes (rmusser01, Jun 3, 2025)
- e46ae3a Update themes.py (rmusser01, Jun 3, 2025)
- af4f407 refactor logging into logging_config from app.py (rmusser01, Jun 3, 2025)
- a876d7d Move helper functions out of app.py into Utils.py (rmusser01, Jun 3, 2025)
- 16ac21b CCP is funky and event handlers moved out (rmusser01, Jun 3, 2025)
- 4c703f3 character chat populate moved (rmusser01, Jun 3, 2025)
- ad3db13 Update ingest_events.py (rmusser01, Jun 3, 2025)
- f34e0ff More refactoring, app.py now less than 3k liens (rmusser01, Jun 3, 2025)
- ee021f3 Create chat_events_sidebar.py (rmusser01, Jun 3, 2025)
- 75d4d89 fix (rmusser01, Jun 3, 2025)
- 74b256a idk (rmusser01, Jun 3, 2025)
- fc0d41c Update Conv_Char_Window.py (rmusser01, Jun 3, 2025)
- 534a5ef Update Conv_Char_Window.py (rmusser01, Jun 3, 2025)
35 changes: 34 additions & 1 deletion Docs/Design/Architecture_and_Design.md
@@ -30,4 +30,37 @@
- **`tldw_chatbook/Widgets/`**: Reusable UI components used across different screens.
- **`tldw_chatbook/Config.py`**: Contains all configuration settings for the application, including API keys, database paths, and other settings.
- **`tldw_chatbook/Constants.py`**: Contains all constants used throughout the application, such as default values and error messages.
- **`tldw_chatbook/Logging_Config.py`**: Contains the logging configuration for the application, setting up loggers, handlers, and formatters.





## LLM Backend Integrations

This section details the various Large Language Model (LLM) inference backends integrated into `tldw_chatbook`.

### Llama.cpp Integration
-
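
This section is still being filled in. As a minimal sketch in the meantime, the integration (like the other backends) launches a local server process and then talks to an OpenAI-compatible endpoint; a client typically needs to wait for the server to come up first. The `/health` route below is the one `llama-server` exposes; treat the exact path and timings as assumptions:

```python
import time
import urllib.error
import urllib.request


def wait_for_server(base_url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll a local llama-server-style backend until it answers, or give up.

    Assumes the server exposes a GET /health route (llama-server does);
    other backends may need a different readiness probe.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Server not accepting connections yet; retry after a short pause.
            pass
        time.sleep(interval)
    return False
```

Once `wait_for_server("http://127.0.0.1:8080")` returns `True`, requests can be sent to the OpenAI-compatible `/v1` routes.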

### Llamafile Integration

### Ollama Integration

### vLLM Integration

### Transformers Integration

### ONNX Runtime Integration

### MLX-LM Integration
- https://github.com/ml-explore/mlx-lm/tree/main

The application now supports MLX-LM for running local language models optimized for Apple Silicon hardware.
Users can manage MLX-LM instances via the "LLM Management" tab, allowing configuration of:

* **Model Path**: Specify a HuggingFace model ID compatible with MLX or a path to a local MLX model.
* **Server Host & Port**: Configure the network address for the MLX-LM server.
* **Additional Arguments**: Pass extra command-line arguments to the `mlx_lm.server` process.

The integration starts a local `mlx_lm.server` process and interacts with it, assuming an OpenAI-compatible API endpoint (typically at `/v1`). This allows for efficient local inference leveraging MLX's performance benefits on supported hardware.
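
The flow described above can be sketched as follows. The `--model`, `--host`, and `--port` flags match the upstream `mlx_lm.server` CLI, but treat the exact invocation and the `/v1/chat/completions` route as assumptions rather than a definitive implementation:

```python
import json
import urllib.request


def build_server_command(model, host="127.0.0.1", port=8080, extra_args=None):
    """Build the command line for launching a local mlx_lm.server process.

    `extra_args` mirrors the "Additional Arguments" field in the UI.
    """
    cmd = ["python", "-m", "mlx_lm.server",
           "--model", model, "--host", host, "--port", str(port)]
    if extra_args:
        cmd.extend(extra_args)
    return cmd


def chat_completion_request(base_url, model, prompt):
    """Send one request to the assumed OpenAI-compatible /v1 endpoint."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The command from `build_server_command(...)` would be handed to `subprocess.Popen` to start the server; once it is up, `chat_completion_request("http://127.0.0.1:8080", model, "Hello")` exercises the endpoint.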
19 changes: 19 additions & 0 deletions Docs/Design/Coding_Tab.md
@@ -0,0 +1,19 @@
https://github.com/eyaltoledano/claude-task-master/tree/main
https://github.com/eyaltoledano/claude-task-master/blob/main/docs/tutorial.md
https://repoprompt.com/
https://github.com/snarktank/ai-dev-tasks

5 changes: 5 additions & 0 deletions Docs/Design/Packaging.md
@@ -0,0 +1,5 @@
# Packaging


https://fedoramagazine.org/enhancing-your-python-workflow-with-uv-on-fedora/
https://github.com/beeware/briefcase
16 changes: 8 additions & 8 deletions Docs/Design/TUIs.md
@@ -4,13 +4,13 @@

### Link Dump
https://github.com/Textualize/toolong
https://github.com/the-impact-craft/sourcerer
https://github.com/presstab/jrdev
https://github.com/edward-jazzhands/textual-window
https://github.com/juftin/browsr
https://github.com/NSPC911/carto
https://github.com/Salvodif/TomeTrove
https://github.com/edward-jazzhands/rich-pyfiglet
https://terminaltrove.com/language/python/


87 changes: 87 additions & 0 deletions Docs/FAQs.md
@@ -0,0 +1,87 @@
# Frequently Asked Questions (FAQs)

## Table of Contents
- [What is the purpose of this documentation?](#what-is-the-purpose-of-this-documentation)
- [How can I contribute to this project?](#how-can-i-contribute-to-this-project)
- [Where can I find the source code?](#where-can-i-find-the-source-code)
- [How do I report a bug or issue?](#how-do-i-report-a-bug-or-issue)
- [How do I request a feature?](#how-do-i-request-a-feature)
- [How do I get help or support?](#how-do-i-get-help-or-support)
- [What are the system requirements?](#what-are-the-system-requirements)
- [How do I install the application?](#how-do-i-install-the-application)
- [How do I run the application?](#how-do-i-run-the-application)
- [How do I update the application?](#how-do-i-update-the-application)
- [How do I uninstall the application?](#how-do-i-uninstall-the-application)
- [How do I configure the application?](#how-do-i-configure-the-application)
- [How do I use the application?](#how-do-i-use-the-application)
- [How do I customize the application?](#how-do-i-customize-the-application)
- [How do I troubleshoot common issues?](#how-do-i-troubleshoot-common-issues)
- [How do I reset the application?](#how-do-i-reset-the-application)
- [How do I back up my data?](#how-do-i-back-up-my-data)
- [How do I restore my data?](#how-do-i-restore-my-data)
- [How do I delete my data?](#how-do-i-delete-my-data)
- [How do I manage my data?](#how-do-i-manage-my-data)
- [How do I export my data?](#how-do-i-export-my-data)
- [How do I import my data?](#how-do-i-import-my-data)
- [How do I sync my data?](#how-do-i-sync-my-data)
- [How do I share my data?](#how-do-i-share-my-data)
- [How do I secure my data?](#how-do-i-secure-my-data)
- [How do I encrypt my data?](#how-do-i-encrypt-my-data)
- [How do I decrypt my data?](#how-do-i-decrypt-my-data)



## Handling Large Pastes in Windows Terminal

When using this application (or any command-line application) within Windows Terminal, you might encounter a warning when attempting to paste a large amount of text (typically over 5 KiB). This is a built-in safety feature of Windows Terminal itself.

### Windows Terminal's Paste Warnings

Windows Terminal has specific settings that control how paste operations are handled:

1. **`largePasteWarning`**:
* **Default**: `true`
* **Behavior**: If you try to paste text exceeding 5 KiB, Windows Terminal will display a confirmation dialog asking if you want to proceed. If you select "No" (or cancel), the text may not be pasted into the application.
* This is the most common warning users encounter when dealing with large text blocks.

2. **`multiLinePasteWarning`**:
* **Default**: `true`
* **Behavior**: If you try to paste text that contains multiple lines, Windows Terminal will display a confirmation dialog. This is a security measure, as pasting multiple lines (each potentially a command) into a shell could have unintended consequences.

These settings are part of Windows Terminal's configuration and are independent of this application's behavior.

### Workaround / Configuration

If you frequently paste large amounts of text and find this warning disruptive, you can configure Windows Terminal to disable it.

**To change these settings:**

1. Open Windows Terminal.
2. Go to **Settings** (usually by clicking the dropdown arrow in the tab bar or pressing `Ctrl+,`).
3. In the settings UI, navigate to the "Interaction" section (the names might vary slightly depending on your Terminal version).
4. Look for options related to "Warn when pasting large amounts of text" (for `largePasteWarning`) and "Warn when pasting text with multiple lines" (for `multiLinePasteWarning`).
5. Alternatively, you can directly edit the `settings.json` file:
* Click on "Open JSON file" in the Settings tab.
* In the root of the JSON structure, you can add or modify these properties:
```json
"largePasteWarning": false, // Disables the warning for large pastes
"multiLinePasteWarning": false // Disables the warning for multi-line pastes
```
* Set the desired value to `false` to disable the warning.
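
As a sketch of the same edit done programmatically, the helper below flips both keys in the settings text. Windows Terminal's `settings.json` may contain `//` comments (JSONC), so the sketch strips full-line comments before parsing; that regex is an assumption and does not handle `//` inside string values:

```python
import json
import re


def set_paste_warnings(settings_text: str,
                       large: bool = False,
                       multi_line: bool = False) -> str:
    """Return settings JSON with the two paste-warning keys set.

    Strips full-line // comments first, since Windows Terminal's
    settings.json is JSONC and plain json.loads would reject it.
    """
    cleaned = re.sub(r"^\s*//.*$", "", settings_text, flags=re.MULTILINE)
    settings = json.loads(cleaned)
    settings["largePasteWarning"] = large
    settings["multiLinePasteWarning"] = multi_line
    return json.dumps(settings, indent=4)
```

Note that round-tripping through `json.dumps` discards any comments that were in the file, so editing by hand in the Settings UI remains the safer route.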

**Important Considerations:**

* **Security**: Disabling `multiLinePasteWarning` can be risky, especially if you paste commands from untrusted sources, as it won't prompt you before potentially executing multiple commands.
* **Application Behavior**: If these terminal warnings are disabled, large amounts of text will be sent directly to the application. This application does not currently implement its own separate warning for large pastes, relying on the user's terminal configuration.

If Windows Terminal's `largePasteWarning` is enabled and you confirm the paste but the text still does not appear correctly in the application, the terminal may be failing to deliver the full input, or the application may have limitations of its own; in practice, the reported 5 KiB issue appears tied directly to the terminal's warning.

Refer to the official [Windows Terminal Interaction Settings Documentation](https://docs.microsoft.com/en-us/windows/terminal/customize-settings/interaction) for the most up-to-date information.




### Samplers
https://rentry.org/samplers
39 changes: 39 additions & 0 deletions Docs/LLM_FAQs.md
@@ -0,0 +1,39 @@
# Large Language Model (LLM) Frequently Asked Questions (FAQs)

## Table of Contents
- [What is an LLM?](#what-is-an-llm)
- [What are the main features of LLMs?](#what-are-the-main-features-of-llms)
- [How do LLMs work?](#how-do-llms-work)
- [What are the common applications of LLMs?](#what-are-the-common-applications-of-llms)
- [What are the limitations of LLMs?](#what-are-the-limitations-of-llms)
- [What are the ethical considerations of using LLMs?](#what-are-the-ethical-considerations-of-using-llms)
- [How do I choose the right LLM for my needs?](#how-do-i-choose-the-right-llm-for-my-needs)
- [How do I fine-tune an LLM?](#how-do-i-fine-tune-an-llm)
- [How do I deploy an LLM?](#how-do-i-deploy-an-llm)
- [How do I evaluate the performance of an LLM?](#how-do-i-evaluate-the-performance-of-an-llm)
- [How do I troubleshoot common issues with LLMs?](#how-do-i-troubleshoot-common-issues-with-llms)

### Determinism in LLMs
https://docs.sglang.ai/references/faq.html