
Commit c1f62ac

fix docs

1 parent: 7039346

16 files changed (+17, -226 lines)

b.py

Lines changed: 0 additions & 98 deletions
This file was deleted.

d.py

Lines changed: 0 additions & 12 deletions
This file was deleted.

docs/confident-ai/confident-ai-guardrails-cybersecurity.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe input would look like in the context of a cybersecurity breach:
 Since `CybersecurityGuard` is an input guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import CybersecurityGuard
+from deepeval.guardrails import Guardrails, CybersecurityGuard
 from deepeval.guardrails.cybersecurity_guard import (
     CyberattackCategory
 )
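
For context on the corrected import, here is a minimal sketch of how the two imports above might be combined into a `Guardrails` object. The `categories` parameter and the `SQL_INJECTION` enum member are assumptions for illustration only; they are not taken from this diff.

```python
from deepeval.guardrails import Guardrails, CybersecurityGuard
from deepeval.guardrails.cybersecurity_guard import CyberattackCategory

# Assumption: CybersecurityGuard accepts the attack categories to screen for.
# The parameter name and the enum member below are illustrative, not confirmed.
guard = CybersecurityGuard(categories=[CyberattackCategory.SQL_INJECTION])

guardrails = Guardrails(guards=[guard])
```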

docs/confident-ai/confident-ai-guardrails-graphic-content.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe output would look like in the context of graphic content:
 Since `GraphicContentGuard` is an output guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import GraphicContentGuard
+from deepeval.guardrails import Guardrails, GraphicContentGuard
 
 guardrails = Guardrails(guards=[GraphicContentGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-hallucination.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe output would look like in the context of graphic content:
 Since `HallucinationGuard` is an output guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import HallucinationGuard
+from deepeval.guardrails import Guardrails, HallucinationGuard
 
 guardrails = Guardrails(guards=[HallucinationGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-illegal.mdx

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 ---
 id: confident-ai-guardrails-illegal
-title: Illegal Guard
-sidebar_label: Illegal Guard
+title: Illegal Activity Guard
+sidebar_label: Illegal Activity Guard
 ---
 
 The **Illegal Activity Guard** is an output guard that analyzes the output generated by your LLM application to detect any content that promotes or describes illegal or unethical activities, ensuring all outputs comply with legal and ethical standards.
@@ -21,7 +21,7 @@ Here's what an unsafe output would look like in the context of illegal activity:
 Since `IllegalActivityGuard` is an output guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import IllegalActivityGuard
+from deepeval.guardrails import Guardrails, IllegalActivityGuard
 
 guardrails = Guardrails(guards=[IllegalActivityGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-jailbreaking.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe input would look like in the context of jailbreaking:
 Since `JailbreakingGuardGuard` is an input guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import JailbreakingGuardGuard
+from deepeval.guardrails import Guardrails, JailbreakingGuardGuard
 
 guardrails = Guardrails(guards=[JailbreakingGuardGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-modernization.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe output would look like in the context of modernization:
 Since `SyntaxGuard` is an output guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import SyntaxGuard
+from deepeval.guardrails import Guardrails, SyntaxGuard
 
 guardrails = Guardrails(guards=[SyntaxGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-privacy.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe input would look like in the context of privacy:
 Since `PrivacyGuard` is an input guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import PrivacyGuard
+from deepeval.guardrails import Guardrails, PrivacyGuard
 
 guardrails = Guardrails(guards=[PrivacyGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-prompt-injection.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe input would look like in the context of prompt injection:
 Since `PromptInjectionGuard` is an input guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import PromptInjectionGuard
+from deepeval.guardrails import Guardrails, PromptInjectionGuard
 
 guardrails = Guardrails(guards=[PromptInjectionGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-syntax.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe output would look like in the context of incorrect syntax:
 Since `SyntaxGuard` is an output guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import SyntaxGuard
+from deepeval.guardrails import Guardrails, SyntaxGuard
 
 guardrails = Guardrails(guards=[SyntaxGuard()])
 ```

docs/confident-ai/confident-ai-guardrails-topical.mdx

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ print(guard_result.breached, guard_result.guard_data)
 ## Example
 
 ```python
-from deepeval.guardrails import TopicalGuard
+from deepeval.guardrails import Guardrails, TopicalGuard
 
 allowed_topics = ["technology", "science", "health"]
 user_input = "Can you tell me about the latest advancements in quantum computing?"
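
For readers following the corrected import, here is a minimal sketch of how the truncated example above plausibly continues. The `allowed_topics` constructor parameter and the `guard_input` call are assumptions inferred from the surrounding docs (which print `guard_result.breached` and `guard_result.guard_data`), not something this diff confirms.

```python
from deepeval.guardrails import Guardrails, TopicalGuard

allowed_topics = ["technology", "science", "health"]
user_input = "Can you tell me about the latest advancements in quantum computing?"

# Assumption: TopicalGuard accepts the allowed topics at construction time.
guardrails = Guardrails(guards=[TopicalGuard(allowed_topics=allowed_topics)])

# Assumption: Guardrails exposes a guard_input method whose result carries
# the `breached` and `guard_data` fields printed in the doc above.
guard_result = guardrails.guard_input(input=user_input)
print(guard_result.breached, guard_result.guard_data)
```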

docs/confident-ai/confident-ai-guardrails-toxic.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Here's what an unsafe output would look like in the context of toxicity:
 Since `ToxicityGuard` is an output guard, simply provide it as a guard in the list of `guards` when initializing a `Guardrails` object:
 
 ```python
-from deepeval.guardrails import ToxicityGuard
+from deepeval.guardrails import Guardrails, ToxicityGuard
 
 guardrails = Guardrails(guards=[ToxicityGuard()])
 ```

docs/docs/metrics-tool-correctness.mdx

Lines changed: 4 additions & 4 deletions
@@ -25,17 +25,17 @@ To use the `ToolCorrectnessMetric`, you'll have to provide the following arguments:
 
 ```python
 from deepeval.metrics import ToolCorrectnessMetric
-from deepeval.test_case import LLMTestCase, ToolCallParams, ToolCall
+from deepeval.test_case import LLMTestCase, ToolCall
 
 test_case = LLMTestCase(
     input="What if these shoes don't fit?",
     actual_output="We offer a 30-day full refund at no extra cost.",
     # Replace this with the tools that were actually used by your LLM agent
     tools_called=[ToolCall(name="WebSearch"), ToolCall(name="ToolQuery")],
-    expected_tools=[ToolCall(name="WebSearch")]
+    expected_tools=[ToolCall(name="WebSearch")],
 )
 
-metric = ToolCorrectnessMetric(evaluation_param=ToolCallParams.TOOL)
+metric = ToolCorrectnessMetric()
 metric.measure(test_case)
 print(metric.score)
 print(metric.reason)
@@ -44,7 +44,7 @@ print(metric.reason)
 There are seven optional parameters when creating a `ToolCorrectnessMetric`:
 
 - [Optional] `threshold`: a float representing the minimum passing threshold, defaulted to 0.5.
-- [Optional] `evaluation_params`: A list of `ToolCallParams` indicating the strictness of the correctness criteria. For example, supplying a list containing `ToolCallParams.NAME` and `ToolCallParams.INPUT_PARAMETERS`, but excluding `ToolCallParams.OUTPUT`, will consider a tool correct if the tool name and input parameters match, even if the output does not. Defaults to a list with one element: `[ToolCallParams.NAME]`.
+- [Optional] `evaluation_params`: a list of `ToolCallParams` indicating the strictness of the correctness criteria; available options are `ToolCallParams.INPUT_PARAMETERS` and `ToolCallParams.OUTPUT`. For example, supplying a list containing `ToolCallParams.INPUT_PARAMETERS` but excluding `ToolCallParams.OUTPUT` will deem a tool correct if the tool name and input parameters match, even if the output does not. Defaults to an empty list.
 - [Optional] `include_reason`: a boolean which when set to `True`, will include a reason for its evaluation score. Defaulted to `True`.
 - [Optional] `strict_mode`: a boolean which when set to `True`, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to `False`.
 - [Optional] `verbose_mode`: a boolean which when set to `True`, prints the intermediate steps used to calculate said metric to the console, as outlined in the [How Is It Calculated](#how-is-it-calculated) section. Defaulted to `False`.
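
To illustrate the reworded `evaluation_params` description, here is a minimal sketch of a stricter configuration. The `input_parameters` field on `ToolCall` is an assumption inferred from the `ToolCallParams.INPUT_PARAMETERS` option, not something this diff shows.

```python
from deepeval.metrics import ToolCorrectnessMetric
from deepeval.test_case import LLMTestCase, ToolCall, ToolCallParams

# Assumption: ToolCall carries its arguments in an `input_parameters` field.
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
    tools_called=[ToolCall(name="WebSearch", input_parameters={"query": "shoe return policy"})],
    expected_tools=[ToolCall(name="WebSearch", input_parameters={"query": "shoe return policy"})],
)

# With INPUT_PARAMETERS included, a tool only counts as correct when both
# its name and its input parameters match the expected tool call.
metric = ToolCorrectnessMetric(evaluation_params=[ToolCallParams.INPUT_PARAMETERS])
metric.measure(test_case)
print(metric.score)
print(metric.reason)
```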

r.py

Lines changed: 0 additions & 90 deletions
This file was deleted.

v.py

Lines changed: 0 additions & 9 deletions
This file was deleted.
