Commit d2d81ec

Interim Gen-ai solution
1 parent a9453cc commit d2d81ec

File tree

2 files changed: +351, -249 lines changed


docs/gen-ai/gen-ai-events.md

Lines changed: 2 additions & 249 deletions
@@ -6,17 +6,6 @@ linkTitle: Events
 
 **Status**: [Development][DocumentStatus]
 
-<!-- toc -->
-
-- [Events](#events)
-  - [Custom events](#custom-events)
-  - [Examples](#examples)
-    - [Chat completion](#chat-completion)
-    - [Tools](#tools)
-    - [Chat completion with multiple choices](#chat-completion-with-multiple-choices)
-
-<!-- tocstop -->
-
 > [!Warning]
 >
 > Existing GenAI instrumentations that are using
@@ -65,247 +54,11 @@ Instrumentations MAY offer configuration options allowing to disable events or a
 
 Is now described in the namespace registry.
 
+To see usage of the events defined in the registry refer to the [Gen-AI implementations](gen-ai-implementations.md) documentation.
+
 ## Custom events
 
 System-specific events that are not covered in this document SHOULD be documented in corresponding Semantic Conventions extensions and
 SHOULD follow `{gen_ai.provider.name}.*` naming pattern.
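
As a purely illustrative aside, a hypothetical provider named `acme_ai` could define an `acme_ai.content_filter.result` event under this pattern. A minimal sketch follows; the provider name, event name, and attributes are made up, and the stable span-event API is used only as a stand-in emission mechanism, not the mechanism prescribed by these conventions.

```python
# Hypothetical provider-specific event following the
# `{gen_ai.provider.name}.*` naming pattern. The provider name "acme_ai",
# the event name, and its attributes are invented for illustration.
# Assumes a TracerProvider is configured elsewhere.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("chat acme-chat-1") as span:
    span.set_attribute("gen_ai.provider.name", "acme_ai")
    span.add_event(
        "acme_ai.content_filter.result",  # provider-prefixed custom event name
        {"acme_ai.content_filter.severity": "low"},
    )
```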

-## Examples
-
-### Chat completion
-
-This is an example of telemetry generated for a chat completion call with system and user messages.
-
-```mermaid
-%%{init:
-{
-"sequence": { "messageAlign": "left", "htmlLabels":true },
-"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
-}
-}%%
-sequenceDiagram
-participant A as Application
-participant I as Instrumented Client
-participant M as Model
-A->>+I: #U+200D
-I->>M: gen_ai.system.message: You are a helpful bot<br/>gen_ai.user.message: Tell me a joke about OpenTelemetry
-Note left of I: GenAI Client span
-I-->M: gen_ai.choice: Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!
-I-->>-A: #U+200D
-```
-
-**GenAI Client span:**
-
-| Attribute name | Value |
-|---------------------------------|--------------------------------------------|
-| Span name | `"chat gpt-4"` |
-| `gen_ai.provider.name` | `"openai"` |
-| `gen_ai.request.model` | `"gpt-4"` |
-| `gen_ai.request.max_tokens` | `200` |
-| `gen_ai.request.top_p` | `1.0` |
-| `gen_ai.response.id` | `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
-| `gen_ai.response.model` | `"gpt-4-0613"` |
-| `gen_ai.usage.output_tokens` | `47` |
-| `gen_ai.usage.input_tokens` | `52` |
-| `gen_ai.response.finish_reasons`| `["stop"]` |
-
-**Events:**
-
-1. `gen_ai.system.message`
-
-| Property | Value |
-|---------------------|-------------------------------------------------------|
-| `gen_ai.provider.name`| `"openai"` |
-| Event body (with content enabled) | `{"content": "You're a helpful bot"}` |
-
-2. `gen_ai.user.message`
-
-| Property | Value |
-|---------------------|-------------------------------------------------------|
-| `gen_ai.provider.name`| `"openai"` |
-| Event body (with content enabled) | `{"content":"Tell me a joke about OpenTelemetry"}` |
-
-3. `gen_ai.choice`
-
-| Property | Value |
-|---------------------|-------------------------------------------------------|
-| `gen_ai.provider.name`| `"openai"` |
-| Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
-| Event body (without content) | `{"index":0,"finish_reason":"stop","message":{}}` |
-
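To make the tables in the removed example concrete, here is a minimal sketch of how an instrumentation might record this span and its events with the OpenTelemetry Python tracing API. It is not the official instrumentation: the `body` attribute and the `capture_content` toggle are assumptions standing in for the structured event bodies and the content-capture configuration described above.

```python
# Illustrative only: the chat-completion example expressed with the stable
# tracing API. Assumes a TracerProvider is configured elsewhere; span events
# and a JSON-serialized "body" attribute stand in for structured event bodies.
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)
capture_content = True  # hypothetical toggle for recording message content

with tracer.start_as_current_span("chat gpt-4") as span:
    # Attributes from the "GenAI Client span" table.
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.max_tokens", 200)
    span.set_attribute("gen_ai.response.model", "gpt-4-0613")
    span.set_attribute("gen_ai.usage.input_tokens", 52)
    span.set_attribute("gen_ai.usage.output_tokens", 47)
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])

    if capture_content:
        # Prompt events carry message content only when capture is enabled.
        span.add_event("gen_ai.system.message",
                       {"gen_ai.provider.name": "openai",
                        "body": json.dumps({"content": "You are a helpful bot"})})
        span.add_event("gen_ai.user.message",
                       {"gen_ai.provider.name": "openai",
                        "body": json.dumps({"content": "Tell me a joke about OpenTelemetry"})})

    # The completion is always reported; its message content is dropped
    # when content capture is disabled.
    choice = {"index": 0, "finish_reason": "stop",
              "message": {"content": "Why did the developer bring OpenTelemetry "
                                     "to the party? Because it always knows how "
                                     "to trace the fun!"} if capture_content else {}}
    span.add_event("gen_ai.choice",
                   {"gen_ai.provider.name": "openai", "body": json.dumps(choice)})
```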
-### Tools
-
-This is an example of telemetry generated for a chat completion call with a user message and a function definition
-that results in the model requesting the application to call the provided function. The application executes the function and
-requests another completion, now with the tool response.
-
-```mermaid
-%%{init:
-{
-"sequence": { "messageAlign": "left", "htmlLabels":true },
-"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
-}
-}%%
-sequenceDiagram
-participant A as Application
-participant I as Instrumented Client
-participant M as Model
-A->>+I: #U+200D
-I->>M: gen_ai.user.message: What's the weather in Paris?
-Note left of I: GenAI Client span 1
-I-->M: gen_ai.choice: Call to the get_weather tool with Paris as the location argument.
-I-->>-A: #U+200D
-A -->> A: parse tool parameters<br/>execute tool<br/>update chat history
-A->>+I: #U+200D
-I->>M: gen_ai.user.message: What's the weather in Paris?<br/>gen_ai.assistant.message: get_weather tool call<br/>gen_ai.tool.message: rainy, 57°F
-Note left of I: GenAI Client span 2
-I-->M: gen_ai.choice: The weather in Paris is rainy and overcast, with temperatures around 57°F
-I-->>-A: #U+200D
-```
-
-Here's the telemetry generated for each step in this scenario:
-
-**GenAI Client span 1:**
-
-| Attribute name | Value |
-|---------------------|-------------------------------------------------------|
-| Span name | `"chat gpt-4"` |
-| `gen_ai.provider.name`| `"openai"` |
-| `gen_ai.request.model`| `"gpt-4"` |
-| `gen_ai.request.max_tokens`| `200` |
-| `gen_ai.request.top_p`| `1.0` |
-| `gen_ai.response.id`| `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
-| `gen_ai.response.model`| `"gpt-4-0613"` |
-| `gen_ai.usage.output_tokens`| `17` |
-| `gen_ai.usage.input_tokens`| `47` |
-| `gen_ai.response.finish_reasons`| `["tool_calls"]` |
-
-**Events**:
-
-All the following events are parented to the **GenAI chat span 1**.
-
-1. `gen_ai.user.message` (not reported when capturing content is disabled)
-
-| Property | Value |
-|---------------------|-------------------------------------------------------|
-| `gen_ai.provider.name`| `"openai"` |
-| Event body | `{"content":"What's the weather in Paris?"}` |
-
-2. `gen_ai.choice`
-
-| Property | Value |
-|---------------------|-------------------------------------------------------|
-| `gen_ai.provider.name`| `"openai"` |
-| Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}}` |
-| Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}}` |
-
-**GenAI Client span 2:**
-
-| Attribute name | Value |
-|---------------------------------|-------------------------------------------------------|
-| Span name | `"chat gpt-4"` |
-| `gen_ai.provider.name` | `"openai"` |
-| `gen_ai.request.model` | `"gpt-4"` |
-| `gen_ai.request.max_tokens` | `200` |
-| `gen_ai.request.top_p` | `1.0` |
-| `gen_ai.response.id` | `"chatcmpl-call_VSPygqKTWdrhaFErNvMV18Yl"` |
-| `gen_ai.response.model` | `"gpt-4-0613"` |
-| `gen_ai.usage.output_tokens` | `52` |
-| `gen_ai.usage.input_tokens` | `47` |
-| `gen_ai.response.finish_reasons`| `["stop"]` |
-
-**Events**:
-
-All the following events are parented to the **GenAI chat span 2**.
-
-In this example, the event content matches the original messages, but applications may also drop messages or change their content.
-
-1. `gen_ai.user.message`
-
-| Property | Value |
-|----------------------------------|------------------------------------------------------------|
-| `gen_ai.provider.name` | `"openai"` |
-| Event body | `{"content":"What's the weather in Paris?"}` |
-
-2. `gen_ai.assistant.message`
-
-| Property | Value |
-|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
-| `gen_ai.provider.name` | `"openai"` |
-| Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
-| Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |
-
-3. `gen_ai.tool.message`
-
-| Property | Value |
-|----------------------------------|------------------------------------------------------------------------------------------------|
-| `gen_ai.provider.name` | `"openai"` |
-| Event body (content enabled) | `{"content":"rainy, 57°F","id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
-| Event body (content not enabled) | `{"id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
-
-4. `gen_ai.choice`
-
-| Property | Value |
-|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
-| `gen_ai.provider.name` | `"openai"` |
-| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F"}}` |
-| Event body (content not enabled) | `{"index":0,"finish_reason":"stop","message":{}}` |
-
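A similar sketch for the second call in this tool scenario, under the same assumptions as the previous snippet (span events with a hypothetical `body` attribute standing in for structured event bodies):

```python
# Illustrative only: "GenAI Client span 2" from the tool-calling example,
# with the tool call and tool result attached as events.
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

tool_call = {
    "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
    "function": {"name": "get_weather", "arguments": "{\"location\": \"Paris\"}"},
    "type": "function",
}

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])

    # The chat history sent with the second request: the original user
    # message, the assistant's tool call, and the tool's response.
    span.add_event("gen_ai.user.message",
                   {"body": json.dumps({"content": "What's the weather in Paris?"})})
    span.add_event("gen_ai.assistant.message",
                   {"body": json.dumps({"tool_calls": [tool_call]})})
    span.add_event("gen_ai.tool.message",
                   {"body": json.dumps({"content": "rainy, 57°F", "id": tool_call["id"]})})

    # The final completion produced from the tool output.
    span.add_event("gen_ai.choice",
                   {"body": json.dumps({
                       "index": 0,
                       "finish_reason": "stop",
                       "message": {"content": "The weather in Paris is rainy and "
                                              "overcast, with temperatures around 57°F"},
                   })})
```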
-### Chat completion with multiple choices
-
-This example covers the scenario where the user requests the model to generate two completions for the same prompt:
-
-```mermaid
-%%{init:
-{
-"sequence": { "messageAlign": "left", "htmlLabels":true },
-"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
-}
-}%%
-sequenceDiagram
-participant A as Application
-participant I as Instrumented Client
-participant M as Model
-A->>+I: #U+200D
-I->>M: gen_ai.system.message - "You are a helpful bot"<br/>gen_ai.user.message - "Tell me a joke about OpenTelemetry"
-Note left of I: GenAI Client span
-I-->M: gen_ai.choice - Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!<br/>gen_ai.choice - Why did OpenTelemetry get promoted? It had great span of control!
-I-->>-A: #U+200D
-```
-
-**GenAI Client Span**:
-
-| Attribute name | Value |
-|---------------------|--------------------------------------------|
-| Span name | `"chat gpt-4"` |
-| `gen_ai.provider.name`| `"openai"` |
-| `gen_ai.request.model`| `"gpt-4"` |
-| `gen_ai.request.max_tokens`| `200` |
-| `gen_ai.request.top_p`| `1.0` |
-| `gen_ai.response.id`| `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
-| `gen_ai.response.model`| `"gpt-4-0613"` |
-| `gen_ai.usage.output_tokens`| `77` |
-| `gen_ai.usage.input_tokens`| `52` |
-| `gen_ai.response.finish_reasons`| `["stop", "stop"]` |
-
-**Events**:
-
-All events are parented to the GenAI chat span above.
-
-1. `gen_ai.system.message`: the same as in the [Chat Completion](#chat-completion) example
-2. `gen_ai.user.message`: the same as in the [Chat Completion](#chat-completion) example
-3. `gen_ai.choice`
-
-| Property | Value |
-|------------------------------|-------------------------------------------------------|
-| `gen_ai.provider.name` | `"openai"` |
-| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
-
-4. `gen_ai.choice`
-
-| Property | Value |
-|------------------------------|-------------------------------------------------------|
-| `gen_ai.provider.name` | `"openai"` |
-| Event body (content enabled) | `{"index":1,"finish_reason":"stop","message":{"content":"Why did OpenTelemetry get promoted? It had great span of control!"}}` |
-
 [DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
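
For the multiple-choices scenario, each returned completion maps to its own `gen_ai.choice` event, distinguished by the `index` field in the body. A sketch under the same assumptions as the earlier snippets (span events and a hypothetical `body` attribute, not the conventions' prescribed mechanism):

```python
# Illustrative only: when the model returns multiple choices, one
# gen_ai.choice event is emitted per choice, distinguished by "index".
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

choices = [
    {"index": 0, "finish_reason": "stop",
     "message": {"content": "Why did the developer bring OpenTelemetry to the "
                            "party? Because it always knows how to trace the fun!"}},
    {"index": 1, "finish_reason": "stop",
     "message": {"content": "Why did OpenTelemetry get promoted? "
                            "It had great span of control!"}},
]

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.usage.output_tokens", 77)
    span.set_attribute("gen_ai.response.finish_reasons", ["stop", "stop"])
    # One gen_ai.choice event per generated completion.
    for choice in choices:
        span.add_event("gen_ai.choice",
                       {"gen_ai.provider.name": "openai", "body": json.dumps(choice)})
```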
