Fix metadata propagation from scrape to otel components #4098

Draft · ptodev wants to merge 1 commit into main from ptodev/fix-metadata

Conversation

@ptodev (Contributor) commented on Jul 31, 2025

PR Description

Fixes propagation of metric metadata, but only for pipelines that start with a prometheus.scrape component and transition into otelcol components. The metrics need to be sent by an otelcol exporter.
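
For reference, a minimal sketch of the kind of pipeline this targets (the component labels and the OTLP endpoint are placeholders, and honor_metadata is assumed to be the new opt-in attribute used later in this thread):

prometheus.scrape "example" {
	targets        = [{"__address__" = "localhost:9090"}]
	honor_metadata = true
	forward_to     = [otelcol.receiver.prometheus.default.receiver]
}

otelcol.receiver.prometheus "default" {
	output {
		metrics = [otelcol.exporter.otlp.default.input]
	}
}

otelcol.exporter.otlp "default" {
	client {
		endpoint = "collector.example.local:4317"
	}
}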

I can see the metadata in live debugging. Before:

[Screenshot: live debugging view before the change, 2025-07-31 18:26]

After:

[Screenshot: live debugging view after the change, 2025-07-31 18:39]

Which issue(s) this PR fixes

Related to #547

Notes to the Reviewer

I need to assess the performance impact of this feature before writing the docs and before opening the PR for review.
For now it would be good to keep it as an experimental opt-in feature until we are sure it is stable and does not consume significantly more resources.

PR Checklist

  • CHANGELOG.md updated
  • Documentation added
  • Tests updated
  • Config converters updated

Comment on lines +84 to +86
// TODO(@ptodev): What exactly fails if we don't include the target in the context?
// Can we instead use the more recent translator package:
// https://github.com/prometheus/otlptranslator

Looks like the otel prometheus receiver requires it, just like it requires a metadata store to be present:

target, ok := scrape.TargetFromContext(t.ctx)
if !ok {
	return nil, errors.New("unable to find target in context")
}
t.mc, ok = scrape.MetricMetadataStoreFromContext(t.ctx)
if !ok {
	return nil, errors.New("unable to find MetricMetadataStore in context")
}
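
If that is the requirement, the scrape side has to inject both values into the context before the receiver's appender runs. A rough sketch of that producer side, assuming the upstream helpers from the github.com/prometheus/prometheus scrape package (contextForReceiver and its arguments are placeholders, not the actual code in this PR):

import (
	"context"

	"github.com/prometheus/prometheus/scrape"
)

// contextForReceiver is a hypothetical helper: it attaches the scrape target and
// the metric metadata store to the context so that the receiver's
// TargetFromContext / MetricMetadataStoreFromContext lookups above can succeed.
func contextForReceiver(ctx context.Context, target *scrape.Target, store scrape.MetricMetadataStore) context.Context {
	ctx = scrape.ContextWithTarget(ctx, target)
	ctx = scrape.ContextWithMetricMetadataStore(ctx, store)
	return ctx
}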

@ptodev force-pushed the ptodev/fix-metadata branch from 7a5d8b9 to 9b9a610 on August 6, 2025 at 14:08
@kgeckhart (Contributor) commented:

Using the following configuration:

prometheus.remote_write "k3d" {
	endpoint {
		url = "http://mimir.k3d.localhost:9999/api/prom/push"

		headers = {
			"X-Scope-OrgID" = "001",
		}
	}
}

prometheus.write.queue "k3d" {
	endpoint "default" {
		url = "http://mimir.k3d.localhost:9999/api/prom/push"

		headers = {
			"X-Scope-OrgID" = "001",
		}
	}
}

prometheus.exporter.unix "integrations_node_exporter" { }

prometheus.scrape "integrations_node_exporter" {
	targets    = prometheus.exporter.unix.integrations_node_exporter.targets
        honor_metadata = true

        // forward_to = [prometheus.relabel.remotewrite_relabel.receiver]
        forward_to = [prometheus.relabel.writequeue_relabel.receiver]
}

prometheus.relabel "remotewrite_relabel" {
	forward_to = [prometheus.remote_write.k3d.receiver]

	rule {
		action = "replace"
		replacement = "remotewrite"
		target_label = "job"
	}
}

prometheus.relabel "writequeue_relabel" {
	forward_to = [prometheus.write.queue.k3d.receiver]

	rule {
		action = "replace"
		replacement = "writequeue"
		target_label = "job"
	}
}

I was able to verify that Mimir had metadata when using prometheus.write.queue, but had no metadata when using prometheus.remote_write.

I'll see if I can test out the other components we talked about, anything else using Fanout, and update the metadata ticket.
