That's really it. You're now ready to use the AI Assistant.
The basic idea is that you set up a QuickAdd Macro, which will trigger the AI Assistant.
The AI Assistant will then use the prompt template you specify to generate a prompt, which it will then send to your selected provider.
The provider will then return a response, which the AI Assistant passes on to the QuickAdd Macro.
You can then use the response in subsequent steps in the macro, e.g. to capture to a note, or create a new note.
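The template-to-response flow described above can be sketched in a few lines of Python. This is purely illustrative: the function and variable names are hypothetical and are not QuickAdd's actual internals.

```python
# Illustrative sketch of the flow: template -> prompt -> provider -> response.
# Names here are hypothetical, not QuickAdd's actual API.
def run_ai_assistant(template: str, variables: dict, send_to_provider) -> str:
    prompt = template.format(**variables)  # render the prompt template
    response = send_to_provider(prompt)    # e.g. an OpenAI-compatible request
    return response                        # handed back to the macro's next steps

# A stand-in provider, just for demonstration:
echo_provider = lambda prompt: f"response to: {prompt}"
result = run_ai_assistant("Summarize: {text}", {"text": "my note"}, echo_provider)
# result == "response to: Summarize: my note"
```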
**Creating prompt templates is simple: just create a note in your prompt templates folder.**
You can also use AI Assistant features from within the [API](./QuickAddAPI.md).

## Providers
QuickAdd supports multiple providers for LLMs.
QuickAdd works with OpenAI-compatible APIs and also supports Google Gemini.
Here are a few providers that are known to work with QuickAdd:
- [OpenAI](https://openai.com)
- [Gemini (Google AI)](https://ai.google.dev)
- [TogetherAI](https://www.together.ai)
- [Groq](https://groq.com)
- [Ollama (local)](https://ollama.com)
Paid providers expose their own API, which you can use with QuickAdd. Free providers, such as Ollama, are also supported.
By default, QuickAdd will add the OpenAI and Gemini providers. You can add more providers by clicking the "Add Provider" button in the AI Assistant settings.
Here's a video showcasing adding Groq as a provider:
And that's it! You can now use Ollama as a provider in QuickAdd.
Make sure you add the model you want to use. [mistral](https://ollama.com/library/mistral) is great.
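For reference, a minimal Ollama provider entry follows the same shape as the other provider configs in this document. The URL below is Ollama's default local address; the Max Tokens value is an illustrative assumption, not a requirement.

```
Name: Ollama
URL: http://localhost:11434
Api Key: (empty)
Models:
- mistral (Max Tokens: 8000)
```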
### Gemini (Google AI)

Gemini is supported out of the box.

```
Name: Gemini
URL: https://generativelanguage.googleapis.com
API Key: (AI Studio API key)
Models (add one or more):
- gemini-1.5-pro (Max Tokens: 1000000)
- gemini-1.5-flash (Max Tokens: 1000000)
- gemini-1.5-flash-8b (Max Tokens: 1000000)
```

Notes:

- Use only supported parameters for Gemini (temperature, top_p). Frequency/presence penalties are not sent to Gemini.
- Make sure "Disable AI & Online features" is turned off in QuickAdd settings to enable requests.
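The note about supported Gemini parameters amounts to filtering the request before it is sent. The sketch below is a minimal illustration of that idea, not QuickAdd's actual code.

```python
# Illustration: only Gemini-supported sampling parameters are forwarded;
# frequency/presence penalties are dropped before the request is sent.
GEMINI_SUPPORTED_PARAMS = {"temperature", "top_p"}

def params_for_gemini(params: dict) -> dict:
    return {k: v for k, v in params.items() if k in GEMINI_SUPPORTED_PARAMS}

sent = params_for_gemini({
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.5,   # dropped for Gemini
    "presence_penalty": 0.2,    # dropped for Gemini
})
# sent == {"temperature": 0.7, "top_p": 0.9}
```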
## AI Assistant Settings
Within the main AI Assistant settings accessible via QuickAdd settings, you can configure the following options:
- Providers: Configure provider endpoints and API keys.
- Prompt Templates Folder: The location where all your prompt templates reside.
- Default model: The default model to be used.
- Show Assistant: Toggle for status messages.

You can also tweak model parameters in advanced settings:
- **temperature:** Allows you to adjust the sampling temperature between 0 and 2. Higher values result in more random outputs, while lower values make the output more focused and deterministic.
- **top_p:** This parameter relates to nucleus sampling. The model considers only the tokens comprising the top 'p' probability mass. For example, 0.1 means only tokens from the top 10% probability mass are considered.
- **frequency_penalty:** A parameter ranging between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the existing text, reducing the model's tendency to repeat the same lines. (Not applicable to Gemini.)
- **presence_penalty:** Also ranging between -2.0 and 2.0, positive values penalize new tokens based on their presence in the existing text, encouraging the model to introduce new topics. (Not applicable to Gemini.)
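The documented ranges can be expressed as a small check. This is an illustrative sketch of the ranges above, not QuickAdd's own validation code.

```python
# Illustrative range checks for the advanced settings described above.
def check_model_params(temperature=1.0, top_p=1.0,
                       frequency_penalty=0.0, presence_penalty=0.0) -> bool:
    return (
        0.0 <= temperature <= 2.0           # sampling temperature
        and 0.0 <= top_p <= 1.0             # nucleus-sampling probability mass
        and -2.0 <= frequency_penalty <= 2.0
        and -2.0 <= presence_penalty <= 2.0
    )

ok = check_model_params(temperature=0.8, top_p=0.1)
bad = check_model_params(temperature=3.0)  # outside the documented 0-2 range
```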