Commit f6cc666: Complete and refactor docs
1 parent 265c556

5 files changed: +166 and -109 lines

**README.md** (45 additions & 109 deletions)

````diff
@@ -9,11 +9,14 @@ A command-line interface for LLMs written in Bash.
 *Demo usage of ell (GIF, 2.1MB)*
 
 - Ask LLMs from your terminal
+- Pipe friendly
 - Bring your terminal context to the LLMs and ask questions
 - Chat with LLMs in your terminal
 
 ## Requirements
 
+To use ell, you need the following:
+
 - bash
 - jq (For parsing JSON)
 - curl (For sending HTTPS requests)
@@ -23,171 +26,104 @@ A command-line interface for LLMs written in Bash.
 
 ```
 git clone https://github.com/simonmysun/ell.git ~/.ellrcd
-export PATH="/home/$USER/.ellrcd:$PATH"
+echo 'export PATH="${HOME}/.ellrcd:${PATH}"' >> ~/.bashrc
 ```
+
 or
+
 ```
 git clone git@github.com:simonmysun/ell.git ~/.ellrcd
-export PATH="/home/$USER/.ellrcd:$PATH"
+echo 'export PATH="${HOME}/.ellrcd:${PATH}"' >> ~/.bashrc
 ```
 
 This will clone the repository into `.ellrcd` in your home directory and add it to your PATH.
 
 ## Configuration
 
-ell can be configured in three ways (in order of precedence, from lowest to highest):
-
-- configuration files
-- environment variables
-- command line arguments
-
-The configuration files are read in the following order:
-
-- `~/.ellrc`
-- `.ellrc` in the current directory
-- `$ELL_CONFIG` specified in the environment variables or command line arguments.
-
-Specifying `ELL_CONFIG` in the file provided with the `-c` / `--config` option will not work since looking for the config file is not recursive.
-
-### Variables
-
-The following variables can be set in the configuration files, environment variables:
-
-- `ELL_LOG_LEVEL`: The log level of the logger. The default is `3`.
-- `ELL_CONFIG`: The configuration file to use. The default is `~/.ellrc`.
-- `ELL_LLM_MODEL`: The model to use. Default is `gpt-4o-mini`.
-- `ELL_LLM_TEMPERATURE`: The temperature of the model. The default is `0.6`.
-- `ELL_LLM_MAX_TOKENS`: The maximum number of tokens to generate. The default is `4096`.
-- `ELL_TEMPLATE_PATH`: The path to the templates. The default is `~/.ellrc.d/templates`.
-- `ELL_TEMPLATE`: The template to use. The default is `default`. The file extension is not needed.
-- `ELL_INPUT_FILE`: The input file to use. If specified, it will override the prompt given in command line arguments.
-- `ELL_RECORD`: This is used for controlling whether record mode is on. It should be set to `false` unless you want to disable recording.
-- `ELL_INTERACTIVE`: Run ell in interactive mode. The default is `false`.
-- `ELL_API_STYLE`: The API style to use. The default is `openai`.
-- `ELL_API_KEY`: The API key to use.
-- `ELL_API_URL`: The API URL to use.
-- `ELL_API_STREAM`: Whether to stream the output. The default is `true`.
-
-The following variables can be set in the command line arguments:
-
-- `-m, --model`: `ELL_LLM_MODEL`
-- `-T, --template-path`: `ELL_TEMPLATE_PATH`
-- `-t, --template`: `ELL_TEMPLATE`
-- `-f, --input-file`: `ELL_INPUT_FILE`
-- `-i, --interactive`: `ELL_INTERACTIVE`;
-- `--api-style`: `ELL_API_STYLE`
-- `--api-key`: `ELL_API_KEY`
-- `--api-url`: `ELL_API_URL`
-- `--api-disable-streaming`: sets `ELL_API_STREAM` to **false**
-- `-c, --config`: `ELL_CONFIG`
-- `-l, --log-level`: `ELL_LOG_LEVEL`
-- `-o, --option`: Other options. The format is `A=b` or `C=d,E=f`.
+See [Configuration](docs/Configuration.md).
 
-For example, to use `gpt-4o-mini` from OpenAI, you need to set these variables:
+Here's an example configuration to use `gpt-4o-mini` from OpenAI. You need to set these variables in your `~/.ellrc`:
 
-```
+```ini
 ELL_API_STYLE=openai
 ELL_LLM_MODEL=gpt-4o-mini
 ELL_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 ELL_API_URL=https://api.openai.com/v1/chat/completions
 ```
 
-Currently, only OpenAI style API is supported. More API styles are coming soon.
+## Usage examples
 
-## Usage
-```
-Usage: ell [options] PROMPT
--h, --help: show this help
--V, --version: show version
--m, --model: model name
--T, --template-path: path to search for templates
--t, --template: template filename without extension
--f, --input-file: use file as input prompt
--r, --record: enter record mode
--i, --interactive: enter interactive mode
---api-style: api style
---api-key: api key
---api-url: api url
---api-disable-streaming: disable api response streaming
--c, --config: config file
--l, --log-level: log level
--o, --option: other options, e.g. -o A=b -o C=d,E=f
-PROMPT: prompt to input
-For more information, see https://github.com/simonmysun/ell
-```
+Make sure you have configured ell correctly.
 
-### Examples
-#### Ask a question
+Ask a question:
 
 ```bash
 ell "What is the capital of France?"
 ```
 
-#### Specify a template
+Specify a model and use a file as input:
 
 ```bash
-ell -t default "What is the capital of France?"
+ell -m gpt-4o -f user_prompt.txt
 ```
 
-#### Specify a model
+Specify a template:
 
 ```bash
-ell -m gpt-4o "What is the capital of France?"
+ell -t default "What is the capital of France?"
 ```
 
-#### Record terminal input and output and use as context
+Record terminal input and output and use as context:
 
 ```bash
 ell -r
 # do random stuff
-ell What does the error mean?
+ell What does the error code mean?
 ell How to fix it?
 ```
 
-#### Run in interactive mode
+Run in interactive mode:
 
 ```bash
 ell -i
 ```
 
-if you were in record mode via `ell -r`, the context of the shell will be used. The two modes can be combined: `ell -r -i`.
+If you were in record mode via `ell -r`, the context of the shell will be used. The two modes can be combined: `ell -r -i`.
+
+## Writing Templates
 
-### Writing Templates
-Currently, there are two variables that can be used in the templates, except the ones given by users:
+See [Templates](docs/Templates.md).
 
-- `$SHELL_CONTEXT`: The context of the shell. This only works when the shell is started with `ell -r`.
-- `$USER_PROMPT`: The prompt text given by the user.
+## Styling
 
-For more information, please refer to [templates/default.json].
+See [Styling](docs/Styling.md).
 
-More possibilities are coming soon!
+## Plugins
 
+See [Plugins](docs/Plugins.md).
 
 ## Risks to consider
 
-- The prompts are sent to LLM backends, so be careful with sensitive information.
-- The output of LLMs is not guaranteed to be correct or safe.
-- In record mode, all your input and output history are written to `/tmp/tmp.xxxx` and are readable by root user.
-- LLM can be tuned or prompted to return deceptive results, e.g. manipulating your terminal
-- Unexpected exit of record mode may cause the history file to remain in `/tmp/`.
-- Password input is not recorded by `script`, so it is safe to type sudo or ssh passwords in terminal.
+See [Risk Consideration](docs/Risk_Consideration.md).
+
+## FAQ
+
+- **Q**: Why is it called "ell"?
+- **A**: "ell" is a combination of shell and LLM. It is a shell script to use LLM backends. "shellm" was once considered, but it was dropped because it could be misunderstood as "she llm". "ell" is shorter, easy to type and easy to remember. It does not conflict with any active software. Note that the name "shell" of shell scripts is because it is the outer layer of the operating system exposed to the user. It doesn't indicate that it is a CLI or GUI. Unfortunately it cannot be shortened to "L", which has the same pronunciation, because that would conflict with too many things.
+
+
+- **Q**: Why is it written in Bash?
+- **A**: Because Bash is the most common shell on Unix-like systems and there is just no need to use a more complex language for this.
+
+
+- **Q**: What is the difference between ell and other similar projects?
+- **A**: ell is written in almost pure Bash, which makes it very lightweight and easy to install. It is also very easy to extend and modify. It is pipe friendly, which means it is designed to be used in combination with other tools.
 
 ## Similar Projects
 
-- https://github.com/kardolus/chatgpt-cli
-  - A CLI for ChatGPT written in Go.
-  - Cannot bring terminal context to the LLMs.
-  - No syntax highlighting.
-  - Supports system prompt customization but doesn't support prompt templates.
-- https://github.com/kharvd/gpt-cli
-  - A CLI for various LLM backends written in Python.
-  - Has syntax highlighting for markdown output.
-  - Cannot bring terminal context to the LLMs.
-  - Supports system prompt customization but doesn't support prompt templates.
-- https://github.com/JohannLai/gptcli
-  - A CLI for OpenAI LLMs written in TypeScript.
-  - Has plugin system.
-  - Support customizing CLI tools
+- https://github.com/kardolus/chatgpt-cli - A CLI for ChatGPT written in Go.
+- https://github.com/kharvd/gpt-cli - A CLI for various LLM backends written in Python.
+- https://github.com/JohannLai/gptcli - A CLI for OpenAI LLMs written in TypeScript.
 
 ## Contributing
 
````
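The new installation step appends the PATH export to `~/.bashrc` every time it runs. As a suggestion (not part of this commit), the append can be guarded so that reinstalling does not duplicate the line; `append_once` is a made-up helper name for this sketch:

```shell
#!/usr/bin/env bash
# append_once FILE LINE: append LINE to FILE only if FILE does not
# already contain it as an exact whole line. This helper is a
# suggestion, not code from the repository.
append_once() {
  grep -qxF -- "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

rc=$(mktemp)  # stand-in for ~/.bashrc in this demo
append_once "$rc" 'export PATH="${HOME}/.ellrcd:${PATH}"'
append_once "$rc" 'export PATH="${HOME}/.ellrcd:${PATH}"'  # second call is a no-op
```

Running it twice leaves exactly one PATH line in the file.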
**docs/Configuration.md** (new file, 56 additions)

# Configuration

## Order of precedence

ell can be configured in three ways (in order of precedence, from lowest to highest):

- configuration files
- environment variables
- command line arguments

The configuration files are read and applied in the following order:

- `~/.ellrc`
- `.ellrc` in the current directory
- `$ELL_CONFIG`, specified as an environment variable or command line argument

Specifying `ELL_CONFIG` inside the file provided with the `-c` / `--config` option will not work, since the search for config files is not recursive.

## Configurable variables

The following variables can be set in configuration files or as environment variables:

- `ELL_LOG_LEVEL`: The log level of the logger. The default is `3`.
- `ELL_CONFIG`: The configuration file to use. The default is `~/.ellrc`.
- `ELL_LLM_MODEL`: The model to use. The default is `gpt-4o-mini`.
- `ELL_LLM_TEMPERATURE`: The temperature of the model. The default is `0.6`.
- `ELL_LLM_MAX_TOKENS`: The maximum number of tokens to generate. The default is `4096`.
- `ELL_TEMPLATE_PATH`: The path to search for templates. The default is `~/.ellrc.d/templates`.
- `ELL_TEMPLATE`: The template to use. The default is `default`. The file extension is not needed.
- `ELL_INPUT_FILE`: The input file to use. If specified, it overrides the prompt given in command line arguments.
- `ELL_RECORD`: Controls whether record mode is on. Leave it at its default unless you want to disable recording by setting it to `false`.
- `ELL_INTERACTIVE`: Run ell in interactive mode. The default is `false`.
- `ELL_API_STYLE`: The API style to use. The default is `openai`.
- `ELL_API_KEY`: The API key to use.
- `ELL_API_URL`: The API URL to use.
- `ELL_API_STREAM`: Whether to stream the output. The default is `true`.
- Plugin-related variables:
  - `TO_TTY`: Force whether ell outputs with syntax highlighting and pagination.
- Styling-related variables are described in [Styling](Styling.md).

The following command line arguments map to these variables:

- `-m, --model`: `ELL_LLM_MODEL`
- `-T, --template-path`: `ELL_TEMPLATE_PATH`
- `-t, --template`: `ELL_TEMPLATE`
- `-f, --input-file`: `ELL_INPUT_FILE`
- `-i, --interactive`: `ELL_INTERACTIVE`
- `--api-style`: `ELL_API_STYLE`
- `--api-key`: `ELL_API_KEY`
- `--api-url`: `ELL_API_URL`
- `--api-disable-streaming`: sets `ELL_API_STREAM` to **false**
- `-c, --config`: `ELL_CONFIG`
- `-l, --log-level`: `ELL_LOG_LEVEL`
- `-o, --option`: Other options. The format is `A=b` or `C=d,E=f`.

Currently, only the OpenAI style API is supported. More API styles are coming soon.
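The `-o` / `--option` format above (`A=b` or `C=d,E=f`) is a comma-separated list of `KEY=VALUE` pairs. As an illustration only (this is not ell's actual parser, and `parse_options` is a made-up name), splitting such a value in Bash could look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: turn a "-o" style value such as "C=d,E=f"
# into shell variables. Not taken from ell's source.
parse_options() {
  local pair pairs
  IFS=',' read -ra pairs <<< "$1"
  for pair in "${pairs[@]}"; do
    # split on the FIRST "=", so values may themselves contain "="
    export "${pair%%=*}=${pair#*=}"
  done
}

parse_options 'ELL_LLM_MODEL=gpt-4o,ELL_LLM_TEMPERATURE=0.2'
printf '%s\n' "$ELL_LLM_MODEL"  # prints: gpt-4o
```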

**docs/Plugins.md** (new file, 26 additions)

# Plugins

ell supports plugins to extend its functionality through a hook system. Currently, the following hooks are available:

- `post_input`: Called after the user prompt is received.
- `pre_llm`: Called before the payload is sent to the language model.
- `post_llm`: Called after the response is received and decoded from the language model.
- `pre_output`: Called before the output is sent to the user.

Plugins should be placed in the `./plugins` directory relative to the ell script, typically located at `~/.ellrc.d/plugins` if you followed the installation instructions in the readme.

Each plugin should be a folder containing executable shell scripts. The file name should follow the format `XX_${HOOK_NAME}.sh`, where `XX` is a number that determines the execution order among other plugins. For example, the paginator plugin is placed at `~/.ellrc.d/plugins/paginator/90_pre_output.sh`.

Plugin scripts are executed in ascending numerical order and piped into each other.

It is recommended to write plugins in a streaming manner.

Below is an example of a simple plugin script:

```bash
#!/usr/bin/env bash

cat
```

This plugin simply passes its input through to the next plugin in the chain.
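Building on the pass-through example, here is a slightly more realistic sketch of a streaming plugin. The plugin name `strip-ansi`, the priority `50`, and the use of GNU `sed` are illustrative assumptions, not code shipped with ell:

```shell
#!/usr/bin/env bash
# Hypothetical ~/.ellrc.d/plugins/strip-ansi/50_pre_output.sh:
# removes ANSI color escape sequences from the output stream before
# later hooks see it. Everything here is an illustration.
strip_ansi() {
  # GNU sed's -u keeps the pipe unbuffered, so the plugin streams
  # line by line instead of waiting for the whole response.
  sed -u $'s/\x1b\\[[0-9;]*m//g'
}

# demo: a colored "hello" comes out plain
printf '\x1b[31mhello\x1b[0m\n' | strip_ansi
```

Because the function only filters stdin to stdout, it composes naturally with the numbered-pipeline execution model described above.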

**docs/Risk_Consideration.md** (new file, 10 additions)

# Risk Consideration

The following risks should be considered when using ell:

- Prompts are sent to LLM backends, so be careful with sensitive information.
- The output of LLMs is not guaranteed to be correct or safe.
- In record mode, all your input and output history is written to `/tmp/tmp.xxxx` and is readable by the root user.
- An LLM can be tuned or prompted to return deceptive results, e.g. ones that try to manipulate your terminal.
- An unexpected exit from record mode may leave the history file behind in `/tmp/`.
- Password input is not recorded by `script`, so it is safe to type sudo or ssh passwords in the terminal.

**docs/Templates.md** (new file, 29 additions)

# Templates

Templates are used to generate the payload sent to the language model. They are written in JSON format and can be customized by users. Templates are where you set the prompt text and other parameters for the language model.

Currently, there are two variables that can be used in templates, besides the ones given by users:

- `$SHELL_CONTEXT`: The context of the shell. This only works when the shell is started with `ell -r`.
- `$USER_PROMPT`: The prompt text given by the user.

More possibilities are coming soon!

An example template:

```json
{
  "model": "${ELL_LLM_MODEL}",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant. "
    },
    {
      "role": "user",
      "content": "${USER_PROMPT}"
    }
  ],
  "temperature": ${ELL_LLM_TEMPERATURE},
  "max_tokens": ${ELL_LLM_MAX_TOKENS},
  "stream": ${ELL_API_STREAM}
}
```
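The `${...}` placeholders in the template above look like shell parameter expansion. As an illustration of how such a template could be instantiated (not necessarily how ell itself does it), Bash can expand them through a here-document:

```shell
#!/usr/bin/env bash
# Illustration only: expand ${USER_PROMPT}-style placeholders in a
# template string via an unquoted here-document. ell's real
# substitution logic may differ; eval makes this approach unsafe
# for untrusted templates.
USER_PROMPT='What is the capital of France?'
template='{"role": "user", "content": "${USER_PROMPT}"}'

rendered=$(eval "cat <<EOF
$template
EOF
")
printf '%s\n' "$rendered"
```

A real implementation would also need to JSON-escape the user prompt; `jq`, already a requirement of ell, can do that safely.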
