
Commit 1f7f8ed

Merge pull request #10 from Flaconi/OPS-6367
2 parents: bea6d2b + 4cdc309

File tree: 3 files changed, +744 −1 lines changed

README.md

Lines changed: 360 additions & 0 deletions
@@ -127,6 +127,54 @@ Type: `string`
Default: `"RETAIN"`

### <a name="input_vector_ingestion_configuration"></a> [vector\_ingestion\_configuration](#input\_vector\_ingestion\_configuration)

Description: Vector ingestion configuration for the knowledge base data source (chunking strategy plus an optional custom transformation).

Type:

```hcl
object({
  chunking_configuration = object({
    chunking_strategy = string
    fixed_size_chunking_configuration = optional(object({
      max_tokens         = number
      overlap_percentage = optional(number)
    }))
    hierarchical_chunking_configuration = optional(object({
      overlap_tokens = number
      level_1        = object({ max_tokens = number })
      level_2        = object({ max_tokens = number })
    }))
    semantic_chunking_configuration = optional(object({
      breakpoint_percentile_threshold = number
      buffer_size                     = number
      max_token                       = number
    }))
  })
  custom_transformation_configuration = optional(object({
    intermediate_storage    = string
    transformation_function = string
  }))
})
```

Default:

```json
{
  "chunking_configuration": {
    "chunking_strategy": "FIXED_SIZE",
    "fixed_size_chunking_configuration": {
      "max_tokens": 300,
      "overlap_percentage": 20
    },
    "hierarchical_chunking_configuration": null,
    "semantic_chunking_configuration": null
  }
}
```
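
As a usage sketch (the module source is a placeholder and the token sizes are illustrative, not defaults of this module), switching from the default `FIXED_SIZE` strategy to hierarchical chunking might look like:

```hcl
module "bedrock_agent" {
  source = "Flaconi/bedrock-agent/aws" # placeholder module reference

  vector_ingestion_configuration = {
    chunking_configuration = {
      chunking_strategy = "HIERARCHICAL"
      hierarchical_chunking_configuration = {
        overlap_tokens = 60
        level_1        = { max_tokens = 1500 } # parent chunk size
        level_2        = { max_tokens = 300 }  # child chunk size
      }
    }
  }
}
```

The `fixed_size_chunking_configuration` and `semantic_chunking_configuration` attributes are optional, so they can simply be omitted here.
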
### <a name="input_oss_additional_roles_arns"></a> [oss\_additional\_roles\_arns](#input\_oss\_additional\_roles\_arns)
131179

132180
Description: Additional ARNs of roles to access OpenSearch
@@ -135,6 +183,318 @@ Type: `list(string)`
Default: `[]`
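
For instance (the account ID and role ARN below are illustrative only):

```hcl
module "bedrock_agent" {
  source = "../.." # placeholder module reference

  # Grant an additional IAM role access to the OpenSearch collection.
  oss_additional_roles_arns = ["arn:aws:iam::123456789012:role/analytics-reader"]
}
```
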
### <a name="input_knowledge_base_response_generation_prompt_template"></a> [knowledge\_base\_response\_generation\_prompt\_template](#input\_knowledge\_base\_response\_generation\_prompt\_template)
187+
188+
Description: Prompt template for pre-processing.
189+
190+
Type: `string`
191+
192+
Default: `" You are a helpful assistant. Answer the following question using the context provided:\n Question: {question}\n Context: {context}\n Your response should be thoughtful, detailed, and relevant to the provided context.\n"`
193+
194+
### <a name="input_knowledge_base_response_generation_parser_mode"></a> [knowledge\_base\_response\_generation\_parser\_mode](#input\_knowledge\_base\_response\_generation\_parser\_mode)
195+
196+
Description: Parser mode for pre-processing.
197+
198+
Type: `string`
199+
200+
Default: `"DEFAULT"`
201+
202+
### <a name="input_knowledge_base_response_generation_prompt_creation_mode"></a> [knowledge\_base\_response\_generation\_prompt\_creation\_mode](#input\_knowledge\_base\_response\_generation\_prompt\_creation\_mode)
203+
204+
Description: Prompt creation mode for pre-processing.
205+
206+
Type: `string`
207+
208+
Default: `"OVERRIDDEN"`
209+
210+
### <a name="input_knowledge_base_response_generation_prompt_state"></a> [knowledge\_base\_response\_generation\_prompt\_state](#input\_knowledge\_base\_response\_generation\_prompt\_state)
211+
212+
Description: Prompt state for pre-processing.
213+
214+
Type: `string`
215+
216+
Default: `"ENABLED"`
217+
218+
### <a name="input_knowledge_base_response_generation_max_length"></a> [knowledge\_base\_response\_generation\_max\_length](#input\_knowledge\_base\_response\_generation\_max\_length)
219+
220+
Description: Maximum number of tokens to allow in the generated response.
221+
222+
Type: `number`
223+
224+
Default: `512`
225+
226+
### <a name="input_knowledge_base_response_generation_stop_sequences"></a> [knowledge\_base\_response\_generation\_stop\_sequences](#input\_knowledge\_base\_response\_generation\_stop\_sequences)
227+
228+
Description: List of stop sequences that will stop generation.
229+
230+
Type: `list(string)`
231+
232+
Default:
233+
234+
```json
235+
[
236+
"END"
237+
]
238+
```
239+
240+
### <a name="input_knowledge_base_response_generation_temperature"></a> [knowledge\_base\_response\_generation\_temperature](#input\_knowledge\_base\_response\_generation\_temperature)
241+
242+
Description: Likelihood of the model selecting higher-probability options while generating a response.
243+
244+
Type: `number`
245+
246+
Default: `0.7`
247+
248+
### <a name="input_knowledge_base_response_generation_top_k"></a> [knowledge\_base\_response\_generation\_top\_k](#input\_knowledge\_base\_response\_generation\_top\_k)
249+
250+
Description: Number of top most-likely candidates from which the model chooses the next token.
251+
252+
Type: `number`
253+
254+
Default: `50`
255+
256+
### <a name="input_knowledge_base_response_generation_top_p"></a> [knowledge\_base\_response\_generation\_top\_p](#input\_knowledge\_base\_response\_generation\_top\_p)
257+
258+
Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.
259+
260+
Type: `number`
261+
262+
Default: `0.9`
263+
264+
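
A sketch of overriding the response-generation prompt and its sampling parameters (module reference and values are illustrative, not part of this README); the `pre_processing_*`, `orchestration_*`, and `post_processing_*` inputs below follow the same pattern:

```hcl
module "bedrock_agent" {
  source = "../.." # placeholder module reference

  knowledge_base_response_generation_prompt_template = <<-EOT
    Answer strictly from the provided context.
    Question: {question}
    Context: {context}
  EOT

  # Lower temperature / top_k / top_p than the defaults (0.7 / 50 / 0.9)
  # for more deterministic answers.
  knowledge_base_response_generation_temperature    = 0.2
  knowledge_base_response_generation_top_k          = 20
  knowledge_base_response_generation_top_p          = 0.8
  knowledge_base_response_generation_stop_sequences = ["END"]
}
```
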
### <a name="input_pre_processing_prompt_template"></a> [pre\_processing\_prompt\_template](#input\_pre\_processing\_prompt\_template)
265+
266+
Description: Prompt template for pre-processing.
267+
268+
Type: `string`
269+
270+
Default: `" You are preparing the input. Extract relevant context and pre-process the following question:\n Question: {question}\n Context: {context}\n Pre-processing should focus on extracting the core information.\n"`
271+
272+
### <a name="input_pre_processing_parser_mode"></a> [pre\_processing\_parser\_mode](#input\_pre\_processing\_parser\_mode)
273+
274+
Description: Parser mode for pre-processing.
275+
276+
Type: `string`
277+
278+
Default: `"DEFAULT"`
279+
280+
### <a name="input_pre_processing_prompt_creation_mode"></a> [pre\_processing\_prompt\_creation\_mode](#input\_pre\_processing\_prompt\_creation\_mode)
281+
282+
Description: Prompt creation mode for pre-processing.
283+
284+
Type: `string`
285+
286+
Default: `"OVERRIDDEN"`
287+
288+
### <a name="input_pre_processing_prompt_state"></a> [pre\_processing\_prompt\_state](#input\_pre\_processing\_prompt\_state)
289+
290+
Description: Prompt state for pre-processing.
291+
292+
Type: `string`
293+
294+
Default: `"ENABLED"`
295+
296+
### <a name="input_pre_processing_max_length"></a> [pre\_processing\_max\_length](#input\_pre\_processing\_max\_length)
297+
298+
Description: Maximum number of tokens to allow in the generated response.
299+
300+
Type: `number`
301+
302+
Default: `512`
303+
304+
### <a name="input_pre_processing_stop_sequences"></a> [pre\_processing\_stop\_sequences](#input\_pre\_processing\_stop\_sequences)
305+
306+
Description: List of stop sequences that will stop generation.
307+
308+
Type: `list(string)`
309+
310+
Default:
311+
312+
```json
313+
[
314+
"END"
315+
]
316+
```
317+
318+
### <a name="input_pre_processing_temperature"></a> [pre\_processing\_temperature](#input\_pre\_processing\_temperature)
319+
320+
Description: Likelihood of the model selecting higher-probability options while generating a response.
321+
322+
Type: `number`
323+
324+
Default: `0.7`
325+
326+
### <a name="input_pre_processing_top_k"></a> [pre\_processing\_top\_k](#input\_pre\_processing\_top\_k)
327+
328+
Description: Number of top most-likely candidates from which the model chooses the next token.
329+
330+
Type: `number`
331+
332+
Default: `50`
333+
334+
### <a name="input_pre_processing_top_p"></a> [pre\_processing\_top\_p](#input\_pre\_processing\_top\_p)
335+
336+
Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.
337+
338+
Type: `number`
339+
340+
Default: `0.9`
341+
342+
### <a name="input_orchestration_prompt_template"></a> [orchestration\_prompt\_template](#input\_orchestration\_prompt\_template)
343+
344+
Description: Prompt template for orchestration.
345+
346+
Type: `string`
347+
348+
Default: `" You are orchestrating the flow of the agent. Based on the question and context, determine the next steps in the process:\n Question: {question}\n Context: {context}\n Plan the next steps to follow the best strategy.\n"`
349+
350+
### <a name="input_orchestration_parser_mode"></a> [orchestration\_parser\_mode](#input\_orchestration\_parser\_mode)
351+
352+
Description: Parser mode for orchestration.
353+
354+
Type: `string`
355+
356+
Default: `"DEFAULT"`
357+
358+
### <a name="input_orchestration_prompt_creation_mode"></a> [orchestration\_prompt\_creation\_mode](#input\_orchestration\_prompt\_creation\_mode)
359+
360+
Description: Prompt creation mode for orchestration.
361+
362+
Type: `string`
363+
364+
Default: `"OVERRIDDEN"`
365+
366+
### <a name="input_orchestration_prompt_state"></a> [orchestration\_prompt\_state](#input\_orchestration\_prompt\_state)
367+
368+
Description: Prompt state for orchestration.
369+
370+
Type: `string`
371+
372+
Default: `"ENABLED"`
373+
374+
### <a name="input_orchestration_max_length"></a> [orchestration\_max\_length](#input\_orchestration\_max\_length)
375+
376+
Description: Maximum number of tokens to allow in the generated response.
377+
378+
Type: `number`
379+
380+
Default: `512`
381+
382+
### <a name="input_orchestration_stop_sequences"></a> [orchestration\_stop\_sequences](#input\_orchestration\_stop\_sequences)
383+
384+
Description: List of stop sequences that will stop generation.
385+
386+
Type: `list(string)`
387+
388+
Default:
389+
390+
```json
391+
[
392+
"END"
393+
]
394+
```
395+
396+
### <a name="input_orchestration_temperature"></a> [orchestration\_temperature](#input\_orchestration\_temperature)
397+
398+
Description: Likelihood of the model selecting higher-probability options while generating a response.
399+
400+
Type: `number`
401+
402+
Default: `0.7`
403+
404+
### <a name="input_orchestration_top_k"></a> [orchestration\_top\_k](#input\_orchestration\_top\_k)
405+
406+
Description: Number of top most-likely candidates from which the model chooses the next token.
407+
408+
Type: `number`
409+
410+
Default: `50`
411+
412+
### <a name="input_orchestration_top_p"></a> [orchestration\_top\_p](#input\_orchestration\_top\_p)
413+
414+
Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.
415+
416+
Type: `number`
417+
418+
Default: `0.9`
419+
420+
### <a name="input_post_processing_prompt_template"></a> [post\_processing\_prompt\_template](#input\_post\_processing\_prompt\_template)
421+
422+
Description: Prompt template for post-processing.
423+
424+
Type: `string`
425+
426+
Default: `"You are performing post-processing. Review the agent's output and refine the response for clarity and relevance:\nResponse: {response}\nContext: {context}\nEnsure the output is polished and aligns with the context.\n"`
427+
428+
### <a name="input_post_processing_parser_mode"></a> [post\_processing\_parser\_mode](#input\_post\_processing\_parser\_mode)
429+
430+
Description: Parser mode for post-processing.
431+
432+
Type: `string`
433+
434+
Default: `"DEFAULT"`
435+
436+
### <a name="input_post_processing_prompt_creation_mode"></a> [post\_processing\_prompt\_creation\_mode](#input\_post\_processing\_prompt\_creation\_mode)
437+
438+
Description: Prompt creation mode for post-processing.
439+
440+
Type: `string`
441+
442+
Default: `"OVERRIDDEN"`
443+
444+
### <a name="input_post_processing_prompt_state"></a> [post\_processing\_prompt\_state](#input\_post\_processing\_prompt\_state)
445+
446+
Description: Prompt state for post-processing.
447+
448+
Type: `string`
449+
450+
Default: `"DISABLED"`
451+
452+
### <a name="input_post_processing_max_length"></a> [post\_processing\_max\_length](#input\_post\_processing\_max\_length)
453+
454+
Description: Maximum number of tokens to allow in the generated response.
455+
456+
Type: `number`
457+
458+
Default: `512`
459+
460+
### <a name="input_post_processing_stop_sequences"></a> [post\_processing\_stop\_sequences](#input\_post\_processing\_stop\_sequences)
461+
462+
Description: List of stop sequences that will stop generation.
463+
464+
Type: `list(string)`
465+
466+
Default:
467+
468+
```json
469+
[
470+
"END"
471+
]
472+
```
473+
474+
### <a name="input_post_processing_temperature"></a> [post\_processing\_temperature](#input\_post\_processing\_temperature)
475+
476+
Description: Likelihood of the model selecting higher-probability options while generating a response.
477+
478+
Type: `number`
479+
480+
Default: `0.7`
481+
482+
### <a name="input_post_processing_top_k"></a> [post\_processing\_top\_k](#input\_post\_processing\_top\_k)
483+
484+
Description: Number of top most-likely candidates from which the model chooses the next token.
485+
486+
Type: `number`
487+
488+
Default: `50`
489+
490+
### <a name="input_post_processing_top_p"></a> [post\_processing\_top\_p](#input\_post\_processing\_top\_p)
491+
492+
Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.
493+
494+
Type: `number`
495+
496+
Default: `0.9`
497+
138498
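
Note that unlike the other prompt stages, `post_processing_prompt_state` defaults to `"DISABLED"`, so post-processing must be enabled explicitly. A minimal sketch (module reference is a placeholder):

```hcl
module "bedrock_agent" {
  source = "../.." # placeholder module reference

  # Post-processing is DISABLED by default; turn it on explicitly.
  post_processing_prompt_state = "ENABLED"
}
```
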
### <a name="input_tags"></a> [tags](#input\_tags)
139499

140500
Description: A map of tags to assign to the customization job and custom model.
