
Commit 0a26469 (1 parent: 28cb833)

docs(ai): document Gemini provider setup and compatibility

Add Gemini (Google AI) provider details, setup steps, supported params, and notes to AI Assistant docs.

1 file changed (+27, -7 lines)


docs/docs/AIAssistant.md

Lines changed: 27 additions & 7 deletions
@@ -19,8 +19,8 @@ To set up the AI Assistant, follow these steps:
 That's really it. You're now ready to use the AI Assistant.
 
 The basic idea is that you set up a QuickAdd Macro, which will trigger the AI Assistant.
-The AI Assistant will then use the prompt template you specify to generate a prompt, which it will then send to OpenAI.
-OpenAI will then return a response, which the AI Assistant passes on to the QuickAdd Macro.
+The AI Assistant will then use the prompt template you specify to generate a prompt, which it will then send to your selected provider.
+The provider will then return a response, which the AI Assistant passes on to the QuickAdd Macro.
 You can then use the response in subsequent steps in the macro, e.g. to capture to a note, or create a new note.
 
 **Creating prompt templates is simple: just create a note in your prompt templates folder.**
@@ -36,18 +36,19 @@ You can also use AI Assistant features from within the [API](./QuickAddAPI.md).
 ## Providers
 
 QuickAdd supports multiple providers for LLMs.
-The only requirement is that they are OpenAI-compatible, which means their API should be similar to OpenAIs.
+QuickAdd works with OpenAI-compatible APIs and also supports Google Gemini.
 
 Here are a few providers that are known to work with QuickAdd:
 
 - [OpenAI](https://openai.com)
+- [Gemini (Google AI)](https://ai.google.dev)
 - [TogetherAI](https://www.together.ai)
 - [Groq](https://groq.com)
 - [Ollama (local)](https://ollama.com)
 
 Paid providers expose their own API, which you can use with QuickAdd. Free providers, such as Ollama, are also supported.
 
-By default, QuickAdd will add the OpenAI provider. You can add more providers by clicking the "Add Provider" button in the AI Assistant settings.
+By default, QuickAdd will add the OpenAI and Gemini providers. You can add more providers by clicking the "Add Provider" button in the AI Assistant settings.
 
 Here's a video showcasing adding Groq as a provider:
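The "OpenAI-compatible" requirement above boils down to providers exposing the same chat-completions request shape, so only the base URL changes per provider. A minimal sketch (the helper name is illustrative, not QuickAdd's internal API; the endpoint path follows the OpenAI chat-completions convention, which Ollama also exposes):

```python
import json

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat-completions URL and JSON body."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, json.dumps(body)

# Same builder, different providers:
print(build_chat_request("https://api.openai.com", "gpt-4", "Hello")[0])
# https://api.openai.com/v1/chat/completions
print(build_chat_request("http://localhost:11434", "mistral", "Hello")[0])
# http://localhost:11434/v1/chat/completions (Ollama's OpenAI-compatible endpoint)
```

This is why adding a new compatible provider only requires a name, URL, API key, and model list.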

@@ -75,11 +76,30 @@ Api Key: (empty)
 And that's it! You can now use Ollama as a provider in QuickAdd.
 Make sure you add the model you want to use. [mistral](https://ollama.com/library/mistral) is great.
 
+### Gemini (Google AI)
+
+Gemini is supported out of the box.
+
+```
+Name: Gemini
+URL: https://generativelanguage.googleapis.com
+API Key: (AI Studio API key)
+Models (add one or more):
+- gemini-1.5-pro (Max Tokens: 1000000)
+- gemini-1.5-flash (Max Tokens: 1000000)
+- gemini-1.5-flash-8b (Max Tokens: 1000000)
+```
+
+Notes:
+
+- Use only supported parameters for Gemini (temperature, top_p). Frequency/presence penalties are not sent to Gemini.
+- Make sure "Disable AI & Online features" is turned off in QuickAdd settings to enable requests.
+
 ## AI Assistant Settings
 
 Within the main AI Assistant settings accessible via QuickAdd settings, you can configure the following options:
 
-- OpenAI API Key: The key to interact with OpenAI's models.
+- Providers: Configure provider endpoints and API keys.
 - Prompt Templates Folder: The location where all your prompt templates reside.
 - Default model: The default OpenAI model to be used.
 - Show Assistant: Toggle for status messages.
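The Gemini provider entry added in this commit can be sanity-checked with a request sketch. The URL path and payload shape below follow Google's public Generative Language `generateContent` REST API; `build_gemini_request` is a hypothetical helper, not QuickAdd's actual code:

```python
import json

def build_gemini_request(base_url, model, api_key, prompt,
                         temperature=1.0, top_p=0.95):
    """Return (url, json_body) for a Gemini generateContent call."""
    url = f"{base_url}/v1beta/models/{model}:generateContent?key={api_key}"
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        # Only Gemini-supported knobs are sent; frequency/presence
        # penalties are deliberately absent, matching the notes above.
        "generationConfig": {"temperature": temperature, "topP": top_p},
    }
    return url, json.dumps(body)

url, body = build_gemini_request(
    "https://generativelanguage.googleapis.com",
    "gemini-1.5-flash",
    "YOUR_AI_STUDIO_KEY",
    "Summarize the current note in one sentence.",
)
print(url)
# https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=YOUR_AI_STUDIO_KEY
```

Note the difference from the OpenAI-compatible shape: the model is in the URL path, and sampling settings live under `generationConfig`.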
@@ -96,8 +116,8 @@ You can also tweak model parameters in advanced settings:
 
 - **temperature:** Allows you to adjust the sampling temperature between 0 and 2. Higher values result in more random outputs, while lower values make the output more focused and deterministic.
 - **top_p:** This parameter relates to nucleus sampling. The model considers only the tokens comprising the top 'p' probability mass. For example, 0.1 means only tokens from the top 10% probability mass are considered.
-- **frequency_penalty:** A parameter ranging between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the existing text, reducing the model's tendency to repeat the same lines.
-- **presence_penalty:** Also ranging between -2.0 and 2.0, positive values penalize new tokens based on their presence in the existing text, encouraging the model to introduce new topics.
+- **frequency_penalty:** A parameter ranging between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the existing text, reducing the model's tendency to repeat the same lines. (Not applicable to Gemini.)
+- **presence_penalty:** Also ranging between -2.0 and 2.0, positive values penalize new tokens based on their presence in the existing text, encouraging the model to introduce new topics. (Not applicable to Gemini.)
 
 ## AI-Powered Workflows
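The "(Not applicable to Gemini.)" notes added in this hunk amount to per-provider parameter filtering: all four knobs go to OpenAI-compatible providers, but only temperature and top_p go to Gemini. A minimal sketch of that rule (names are illustrative, not QuickAdd's internals):

```python
# Sampling parameters Gemini accepts; penalties are silently dropped for it.
GEMINI_SUPPORTED = {"temperature", "top_p"}

def filter_model_params(provider, params):
    """Keep only the parameters the given provider supports."""
    if provider.lower() == "gemini":
        return {k: v for k, v in params.items() if k in GEMINI_SUPPORTED}
    return dict(params)  # OpenAI-compatible providers take everything

settings = {"temperature": 0.7, "top_p": 0.9,
            "frequency_penalty": 0.5, "presence_penalty": 0.2}
print(filter_model_params("Gemini", settings))
# {'temperature': 0.7, 'top_p': 0.9}
print(filter_model_params("OpenAI", settings) == settings)
# True
```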
