Metrics

Sentry provides an abstraction called ‘metrics’ for internal monitoring, generally timings and various counters.

The default backend simply discards them (though some values are still kept in the internal time series database).
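The shape of the abstraction can be sketched roughly as follows. This is an illustrative sketch, not Sentry's actual class definitions: backends expose operations such as incr and timing, and the default backend simply drops whatever it receives.

```python
class MetricsBackend:
    """Illustrative sketch of the metrics backend interface."""

    def incr(self, key, amount=1, instance=None, tags=None):
        raise NotImplementedError

    def timing(self, key, value, instance=None, tags=None):
        raise NotImplementedError


class NoopMetricsBackend(MetricsBackend):
    """Default behaviour: discard every metric it receives."""

    def incr(self, key, amount=1, instance=None, tags=None):
        pass

    def timing(self, key, value, instance=None, tags=None):
        pass
```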

Statsd Backend

SENTRY_METRICS_BACKEND = 'sentry.metrics.statsd.StatsdMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'host': 'localhost',
    'port': 8125,
}
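For illustration, this is roughly what the backend does on the wire: statsd metrics are plain-text UDP datagrams in the `key:value|type` line protocol. The metric names below are made up.

```python
import socket


def send_statsd(key, value, metric_type, host="localhost", port=8125):
    # statsd line protocol: "<key>:<value>|<type>", e.g. "sentry.events:1|c"
    packet = "%s:%s|%s" % (key, value, metric_type)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # UDP is fire-and-forget: no acknowledgement, and no error if
    # nothing is listening on the other end.
    sock.sendto(packet.encode("utf-8"), (host, port))
    sock.close()
    return packet


send_statsd("sentry.events.total", 1, "c")      # counter increment
send_statsd("sentry.events.latency", 25, "ms")  # timing in milliseconds
```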

Datadog Backend

The Datadog backend requires the datadog package to be installed in your Sentry environment:

$ pip install datadog

In your sentry.conf.py:

SENTRY_METRICS_BACKEND = 'sentry.metrics.datadog.DatadogMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'api_key': '...',
    'app_key': '...',
    'tags': {},
}

Once configured, Sentry's metrics will be emitted to the Datadog REST API over HTTPS.

DogStatsD Backend

Using the DogStatsD backend requires a running Datadog Agent with DogStatsD enabled (on by default, listening on port 8125).

You must also install the datadog Python package into your Sentry environment:

$ pip install datadog

In your sentry.conf.py:

SENTRY_METRICS_BACKEND = 'sentry.metrics.dogstatsd.DogStatsdMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'statsd_host': 'localhost',
    'statsd_port': 8125,
    'tags': {},
}

Once configured, the metrics backend will emit metrics to the DogStatsD server, which periodically flushes them to Datadog over HTTPS.
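The practical difference from plain statsd is tag support: DogStatsD extends the statsd line protocol with a trailing `|#tag:value,...` section. A sketch of the encoding (the helper name and metric names are illustrative, not part of the datadog package):

```python
def dogstatsd_packet(key, value, metric_type, tags=None):
    # DogStatsD datagram: "<key>:<value>|<type>|#tag1:v1,tag2:v2"
    packet = "%s:%s|%s" % (key, value, metric_type)
    if tags:
        packet += "|#" + ",".join("%s:%s" % kv for kv in sorted(tags.items()))
    return packet


dogstatsd_packet("sentry.events.total", 1, "c", {"platform": "python"})
# -> "sentry.events.total:1|c|#platform:python"
```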

Logging Backend

The LoggingBackend reports all operations to the sentry.metrics logger. In addition to the metric name and value, log messages include extra data such as the instance and tags values, which can be displayed using a custom formatter.

SENTRY_METRICS_BACKEND = 'sentry.metrics.logging.LoggingBackend'

LOGGING['loggers']['sentry.metrics'] = {
    'level': 'DEBUG',
    'handlers': ['console:metrics'],
    'propagate': False,
}

LOGGING['formatters']['metrics'] = {
    'format': '[%(levelname)s] %(message)s; instance=%(instance)r; tags=%(tags)r',
}

LOGGING['handlers']['console:metrics'] = {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'metrics',
}
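To see what that formatter produces, here is a self-contained sketch wiring it up with the standard logging module; the metric name and tag values are made up, and the `instance`/`tags` fields are supplied via the log record's extra data, as the backend does:

```python
import logging
import sys

logger = logging.getLogger("sentry.metrics")
logger.setLevel(logging.DEBUG)
logger.propagate = False

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "[%(levelname)s] %(message)s; instance=%(instance)r; tags=%(tags)r"))
logger.addHandler(handler)

# The formatter references %(instance)r and %(tags)r, so every record
# must carry those attributes via `extra`:
logger.debug("events.processed: 1",
             extra={"instance": None, "tags": {"platform": "python"}})
# prints: [DEBUG] events.processed: 1; instance=None; tags={'platform': 'python'}
```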

Notes on Metrics

When emitting metrics you need to be careful about cardinality. Particularly for metrics sent to Datadog, a single high-cardinality tag can produce very large server bills at our scale. Examples of high-cardinality tags are:

  • event_id, request_id, etc.: these are unique values; tagging on them is pointless and, at our scale, very, very expensive.
  • project_id, org_id: still a very bad idea for common operations that happen across the entire user base. They may be acceptable in rare circumstances, such as tracking a beta feature used by a small segment of users, but even then probably a horrible idea.

Probably acceptable cardinality:

  • platform as in SDK platform: there is a finite number of them.
  • status as in HTTP status code: not great, but we expect a finite number of them.
  • task_name, endpoint etc.
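The reason cardinality matters: each metric creates one stored time series per distinct combination of tag values, so tag cardinalities multiply. A toy illustration (the counts are invented):

```python
def series_count(tag_cardinalities):
    """Number of distinct time series a metric produces, given the
    number of distinct values each tag can take."""
    total = 1
    for n in tag_cardinalities.values():
        total *= n
    return total


# Acceptable: a handful of platforms times a handful of status codes.
print(series_count({"platform": 10, "status": 8}))             # 80 series

# Disastrous: tagging by project_id across the whole user base.
print(series_count({"platform": 10, "project_id": 100_000}))   # 1,000,000 series
```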