Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports {micrometer-docs}[numerous monitoring systems], including those described later in this section.
TIP: To learn more about Micrometer’s capabilities, see its reference documentation, in particular the {micrometer-concepts-docs}[concepts section].
Spring Boot auto-configures a composite `MeterRegistry` and adds a registry to the composite for each of the supported implementations that it finds on the classpath.
Having a dependency on `micrometer-registry-{system}` in your runtime classpath is enough for Spring Boot to configure the registry.
Most registries share common features. For instance, you can disable a particular registry even if the Micrometer registry implementation is on the classpath. The following example disables Datadog:
[source,yaml]
----
management:
  metrics:
    export:
      datadog:
        enabled: false
----
You can also disable all registries unless stated otherwise by the registry-specific property, as the following example shows:
[source,yaml]
----
management:
  metrics:
    export:
      defaults:
        enabled: false
----
Spring Boot also adds any auto-configured registries to the global static composite registry on the `Metrics` class, unless you explicitly tell it not to:
[source,yaml]
----
management:
  metrics:
    use-global-registry: false
----
You can register any number of `MeterRegistryCustomizer` beans to further configure the registry, such as applying common tags, before any meters are registered with the registry:
link:{docs-java}/actuator/metrics/gettingstarted/commontags/MyMeterRegistryConfiguration.java[role=include]
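The referenced listing is not reproduced here; as a rough sketch, such a customizer might look like the following (the `region` tag and its value are illustrative only):

[source,java]
----
import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyMeterRegistryConfiguration {

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        // Adds a common "region" tag to every meter registered with the registry
        return (registry) -> registry.config().commonTags("region", "us-east-1");
    }

}
----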
You can apply customizations to particular registry implementations by being more specific about the generic type:
link:{docs-java}/actuator/metrics/gettingstarted/specifictype/MyMeterRegistryConfiguration.java[role=include]
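For instance, a customizer narrowed to `GraphiteMeterRegistry` might look roughly like this sketch (the naming-convention choice is only an illustration):

[source,java]
----
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.graphite.GraphiteMeterRegistry;

import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyGraphiteRegistryConfiguration {

    @Bean
    public MeterRegistryCustomizer<GraphiteMeterRegistry> graphiteNamingConvention() {
        // Applies only to the GraphiteMeterRegistry because of the narrowed generic type
        return (registry) -> registry.config().namingConvention(NamingConvention.dot);
    }

}
----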
Spring Boot also configures built-in instrumentation that you can control through configuration or dedicated annotation markers.
=== Supported Monitoring Systems
This section briefly describes each of the supported monitoring systems.
==== AppOptics
By default, the AppOptics registry periodically pushes metrics to https://api.appoptics.com/v1/measurements.
To export metrics to SaaS {micrometer-registry-docs}/appOptics[AppOptics], your API token must be provided:
[source,yaml]
----
management:
  metrics:
    export:
      appoptics:
        api-token: "YOUR_TOKEN"
----
==== Atlas
By default, metrics are exported to {micrometer-registry-docs}/atlas[Atlas] running on your local machine.
You can provide the location of the Atlas server:
[source,yaml]
----
management:
  metrics:
    export:
      atlas:
        uri: "https://atlas.example.com:7101/api/v1/publish"
----
==== Datadog
A Datadog registry periodically pushes metrics to datadoghq.
To export metrics to {micrometer-registry-docs}/datadog[Datadog], you must provide your API key:
[source,yaml]
----
management:
  metrics:
    export:
      datadog:
        api-key: "YOUR_KEY"
----
You can also change the interval at which metrics are sent to Datadog:
[source,yaml]
----
management:
  metrics:
    export:
      datadog:
        step: "30s"
----
==== Dynatrace
Dynatrace offers two metrics ingest APIs, both of which are implemented for {micrometer-registry-docs}/dynatrace[Micrometer].
Configuration properties in the `v1` namespace apply only when exporting to the {dynatrace-help}/dynatrace-api/environment-api/metric-v1/[Timeseries v1 API].
Configuration properties in the `v2` namespace apply only when exporting to the {dynatrace-help}/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/[Metrics v2 API].
Note that this integration can export only to either the `v1` or `v2` version of the API at a time.
If the `device-id` (required for v1 but not used in v2) is set in the `v1` namespace, metrics are exported to the v1 endpoint.
Otherwise, `v2` is assumed.
You can use the v2 API in two ways.
If a local OneAgent is running on the host, metrics are automatically exported to the {dynatrace-help}/how-to-use-dynatrace/metrics/metric-ingestion/ingestion-methods/local-api/[local OneAgent ingest endpoint].
The ingest endpoint forwards the metrics to the Dynatrace backend.
This is the default behavior and requires no special setup beyond a dependency on `io.micrometer:micrometer-registry-dynatrace`.
If no local OneAgent is running, the endpoint of the {dynatrace-help}/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/[Metrics v2 API] and an API token are required.
The {dynatrace-help}/dynatrace-api/basics/dynatrace-api-authentication/[API token] must have the “Ingest metrics” (`metrics.ingest`) permission set.
We recommend limiting the scope of the token to this one permission.
You must ensure that the endpoint URI contains the path (for example, `/api/v2/metrics/ingest`):
The URL of the Metrics API v2 ingest endpoint is different according to your deployment option:
- SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`
- Managed deployments: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
The example below configures metrics export using the `example` environment ID:
[source,yaml]
----
management:
  metrics:
    export:
      dynatrace:
        uri: "https://example.live.dynatrace.com/api/v2/metrics/ingest"
        api-token: "YOUR_TOKEN"
----
When using the Dynatrace v2 API, the following optional features are available:
- Metric key prefix: Sets a prefix that is prepended to all exported metric keys.
- Enrich with Dynatrace metadata: If a OneAgent or Dynatrace operator is running, enrich metrics with additional metadata (for example, about the host, process, or pod).
- Default dimensions: Specify key-value pairs that are added to all exported metrics. If tags with the same key are specified with Micrometer, they overwrite the default dimensions.
It is possible to not specify a URI and API token, as shown in the following example. In this scenario, the local OneAgent endpoint is used:
[source,yaml]
----
management:
  metrics:
    export:
      dynatrace:
        # Specify uri and api-token here if not using the local OneAgent endpoint.
        v2:
          metric-key-prefix: "your.key.prefix"
          enrich-with-dynatrace-metadata: true
          default-dimensions:
            key1: "value1"
            key2: "value2"
----
The Dynatrace v1 API metrics registry pushes metrics to the configured URI periodically by using the {dynatrace-help}/dynatrace-api/environment-api/metric-v1/[Timeseries v1 API].
For backwards-compatibility with existing setups, when `device-id` is set (required for v1, but not used in v2), metrics are exported to the Timeseries v1 endpoint.
To export metrics to {micrometer-registry-docs}/dynatrace[Dynatrace], your API token, device ID, and URI must be provided:
[source,yaml]
----
management:
  metrics:
    export:
      dynatrace:
        uri: "https://{your-environment-id}.live.dynatrace.com"
        api-token: "YOUR_TOKEN"
        v1:
          device-id: "YOUR_DEVICE_ID"
----
For the v1 API, you must specify the base environment URI without a path, as the v1 endpoint path is added automatically.
In addition to the API endpoint and token, you can also change the interval at which metrics are sent to Dynatrace.
The default export interval is `60s`.
The following example sets the export interval to 30 seconds:
[source,yaml]
----
management:
  metrics:
    export:
      dynatrace:
        step: "30s"
----
You can find more information on how to set up the Dynatrace exporter for Micrometer in {micrometer-registry-docs}/dynatrace[the Micrometer documentation].
==== Elastic
By default, metrics are exported to {micrometer-registry-docs}/elastic[Elastic] running on your local machine.
You can provide the location of the Elastic server to use by using the following property:
[source,yaml]
----
management:
  metrics:
    export:
      elastic:
        host: "https://elastic.example.com:8086"
----
==== Ganglia
By default, metrics are exported to {micrometer-registry-docs}/ganglia[Ganglia] running on your local machine.
You can provide the Ganglia server host and port, as the following example shows:
[source,yaml]
----
management:
  metrics:
    export:
      ganglia:
        host: "ganglia.example.com"
        port: 9649
----
==== Graphite
By default, metrics are exported to {micrometer-registry-docs}/graphite[Graphite] running on your local machine.
You can provide the Graphite server host and port, as the following example shows:
[source,yaml]
----
management:
  metrics:
    export:
      graphite:
        host: "graphite.example.com"
        port: 9004
----
Micrometer provides a default `HierarchicalNameMapper` that governs how a dimensional meter ID is {micrometer-registry-docs}/graphite#_hierarchical_name_mapping[mapped to flat hierarchical names].
[TIP]
====
To take control over this behavior, define your own `GraphiteMeterRegistry` and supply your own `HierarchicalNameMapper`, as the following example shows:

link:{docs-java}/actuator/metrics/export/graphite/MyGraphiteConfiguration.java[role=include]
====
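A minimal sketch of such a configuration, assuming the auto-configured `GraphiteConfig` and `Clock` beans are available for injection, might look like the following (the trivial mapping shown simply drops tags and is only illustrative):

[source,java]
----
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.graphite.GraphiteConfig;
import io.micrometer.graphite.GraphiteMeterRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyGraphiteConfiguration {

    @Bean
    public GraphiteMeterRegistry graphiteMeterRegistry(GraphiteConfig config, Clock clock) {
        // Defining the registry yourself lets you pass a custom HierarchicalNameMapper
        return new GraphiteMeterRegistry(config, clock, this::toHierarchicalName);
    }

    private String toHierarchicalName(Meter.Id id, NamingConvention convention) {
        // Illustrative mapping that keeps only the conventional meter name and drops tags
        return id.getConventionName(convention);
    }

}
----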
==== Humio
By default, the Humio registry periodically pushes metrics to https://cloud.humio.com.
To export metrics to SaaS {micrometer-registry-docs}/humio[Humio], you must provide your API token:
[source,yaml]
----
management:
  metrics:
    export:
      humio:
        api-token: "YOUR_TOKEN"
----
You should also configure one or more tags to identify the data source to which metrics are pushed:
[source,yaml]
----
management:
  metrics:
    export:
      humio:
        tags:
          alpha: "a"
          bravo: "b"
----
==== Influx
By default, metrics are exported to an {micrometer-registry-docs}/influx[Influx] v1 instance running on your local machine with the default configuration.
To export metrics to InfluxDB v2, configure the `org`, `bucket`, and authentication `token` for writing metrics.
You can provide the location of the Influx server to use by using:
[source,yaml]
----
management:
  metrics:
    export:
      influx:
        uri: "https://influx.example.com:8086"
----
==== JMX
Micrometer provides a hierarchical mapping to {micrometer-registry-docs}/jmx[JMX], primarily as a cheap and portable way to view metrics locally.
By default, metrics are exported to the `metrics` JMX domain.
You can provide the domain to use by using:
[source,yaml]
----
management:
  metrics:
    export:
      jmx:
        domain: "com.example.app.metrics"
----
Micrometer provides a default `HierarchicalNameMapper` that governs how a dimensional meter ID is {micrometer-registry-docs}/jmx#_hierarchical_name_mapping[mapped to flat hierarchical names].
[TIP]
====
To take control over this behavior, define your own `JmxMeterRegistry` and supply your own `HierarchicalNameMapper`, as the following example shows:

link:{docs-java}/actuator/metrics/export/jmx/MyJmxConfiguration.java[role=include]
====
==== KairosDB
By default, metrics are exported to {micrometer-registry-docs}/kairos[KairosDB] running on your local machine.
You can provide the location of the KairosDB server to use by using:
[source,yaml]
----
management:
  metrics:
    export:
      kairos:
        uri: "https://kairosdb.example.com:8080/api/v1/datapoints"
----
==== New Relic
A New Relic registry periodically pushes metrics to {micrometer-registry-docs}/new-relic[New Relic].
To export metrics to New Relic, you must provide your API key and account ID:
[source,yaml]
----
management:
  metrics:
    export:
      newrelic:
        api-key: "YOUR_KEY"
        account-id: "YOUR_ACCOUNT_ID"
----
You can also change the interval at which metrics are sent to New Relic:
[source,yaml]
----
management:
  metrics:
    export:
      newrelic:
        step: "30s"
----
By default, metrics are published through REST calls, but you can also use the Java Agent API if you have it on the classpath:
[source,yaml]
----
management:
  metrics:
    export:
      newrelic:
        client-provider-type: "insights-agent"
----
Finally, you can take full control by defining your own NewRelicClientProvider
bean.
==== Prometheus
{micrometer-registry-docs}/prometheus[Prometheus] expects to scrape or poll individual application instances for metrics.
Spring Boot provides an actuator endpoint at /actuator/prometheus
to present a Prometheus scrape with the appropriate format.
TIP: By default, the endpoint is not available and must be exposed. See exposing endpoints for more details.
The following example `scrape_config` adds to `prometheus.yml`:
[source,yaml]
----
scrape_configs:
  - job_name: 'spring'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['HOST:PORT']
----
For ephemeral or batch jobs that may not exist long enough to be scraped, you can use Prometheus Pushgateway support to expose the metrics to Prometheus. To enable Prometheus Pushgateway support, add the following dependency to your project:
[source,xml]
----
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_pushgateway</artifactId>
</dependency>
----
When the Prometheus Pushgateway dependency is present on the classpath and the configprop:management.metrics.export.prometheus.pushgateway.enabled[] property is set to `true`, a `PrometheusPushGatewayManager` bean is auto-configured.
This manages the pushing of metrics to a Prometheus Pushgateway.
You can tune the `PrometheusPushGatewayManager` by using properties under `management.metrics.export.prometheus.pushgateway`.
For advanced configuration, you can also provide your own PrometheusPushGatewayManager
bean.
==== SignalFx
The SignalFx registry periodically pushes metrics to {micrometer-registry-docs}/signalFx[SignalFx].
To export metrics to SignalFx, you must provide your access token:
[source,yaml]
----
management:
  metrics:
    export:
      signalfx:
        access-token: "YOUR_ACCESS_TOKEN"
----
You can also change the interval at which metrics are sent to SignalFx:
[source,yaml]
----
management:
  metrics:
    export:
      signalfx:
        step: "30s"
----
==== Simple
Micrometer ships with a simple, in-memory backend that is automatically used as a fallback if no other registry is configured.
This lets you see what metrics are collected in the metrics endpoint.
The in-memory backend disables itself as soon as you use any other available backend. You can also disable it explicitly:
[source,yaml]
----
management:
  metrics:
    export:
      simple:
        enabled: false
----
==== Stackdriver
The Stackdriver registry periodically pushes metrics to Stackdriver.
To export metrics to SaaS {micrometer-registry-docs}/stackdriver[Stackdriver], you must provide your Google Cloud project ID:
[source,yaml]
----
management:
  metrics:
    export:
      stackdriver:
        project-id: "my-project"
----
You can also change the interval at which metrics are sent to Stackdriver:
[source,yaml]
----
management:
  metrics:
    export:
      stackdriver:
        step: "30s"
----
==== StatsD
The StatsD registry eagerly pushes metrics over UDP to a StatsD agent.
By default, metrics are exported to a {micrometer-registry-docs}/statsD[StatsD] agent running on your local machine.
You can provide the StatsD agent host, port, and protocol to use by using:
[source,yaml]
----
management:
  metrics:
    export:
      statsd:
        host: "statsd.example.com"
        port: 9125
        protocol: "udp"
----
You can also change the StatsD line protocol to use (it defaults to Datadog):
[source,yaml]
----
management:
  metrics:
    export:
      statsd:
        flavor: "etsy"
----
==== Wavefront
The Wavefront registry periodically pushes metrics to {micrometer-registry-docs}/wavefront[Wavefront].
If you are exporting metrics to Wavefront directly, you must provide your API token:
[source,yaml]
----
management:
  metrics:
    export:
      wavefront:
        api-token: "YOUR_API_TOKEN"
----
Alternatively, you can use a Wavefront sidecar or an internal proxy in your environment to forward metrics data to the Wavefront API host:
[source,yaml]
----
management:
  metrics:
    export:
      wavefront:
        uri: "proxy://localhost:2878"
----
NOTE: If you publish metrics to a Wavefront proxy (as described in the Wavefront documentation), the host must be in the `proxy://HOST:PORT` format.
You can also change the interval at which metrics are sent to Wavefront:
[source,yaml]
----
management:
  metrics:
    export:
      wavefront:
        step: "30s"
----
=== Supported Metrics and Meters
Spring Boot provides automatic meter registration for a wide variety of technologies.
In most situations, the defaults provide sensible metrics that can be published to any of the supported monitoring systems.
==== JVM Metrics
Auto-configuration enables JVM Metrics by using core Micrometer classes.
JVM metrics are published under the `jvm.` meter name.
The following JVM metrics are provided:
- Various memory and buffer pool details
- Statistics related to garbage collection
- Thread utilization
- The number of classes loaded and unloaded
==== System Metrics
Auto-configuration enables system metrics by using core Micrometer classes.
System metrics are published under the `system.`, `process.`, and `disk.` meter names.
The following system metrics are provided:
- CPU metrics
- File descriptor metrics
- Uptime metrics (both the amount of time the application has been running and a fixed gauge of the absolute start time)
- Disk space available
==== Application Startup Metrics
Auto-configuration exposes application startup time metrics:

- `application.started.time`: time taken to start the application.
- `application.ready.time`: time taken for the application to be ready to service requests.
Metrics are tagged by the fully qualified name of the application class.
==== Logger Metrics
Auto-configuration enables the event metrics for both Logback and Log4J2.
The details are published under the `log4j2.events.` or `logback.events.` meter names.
==== Task Execution and Scheduling Metrics
Auto-configuration enables the instrumentation of all available `ThreadPoolTaskExecutor` and `ThreadPoolTaskScheduler` beans, as long as the underlying `ThreadPoolExecutor` is available.
Metrics are tagged by the name of the executor, which is derived from the bean name.
==== Spring MVC Metrics
Auto-configuration enables the instrumentation of all requests handled by Spring MVC controllers and functional handlers.
By default, metrics are generated with the name, `http.server.requests`.
You can customize the name by setting the configprop:management.metrics.web.server.request.metric-name[] property.
`@Timed` annotations are supported on `@Controller` classes and `@RequestMapping` methods (see actuator.adoc for details).
If you do not want to record metrics for all Spring MVC requests, you can set configprop:management.metrics.web.server.request.autotime.enabled[] to `false` and exclusively use `@Timed` annotations instead.
By default, Spring MVC related metrics are tagged with the following information:
|===
| Tag | Description

| `exception`
| The simple class name of any exception that was thrown while handling the request.

| `method`
| The request’s method (for example, `GET` or `POST`).

| `outcome`
| The request’s outcome, based on the status code of the response. 1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`.

| `status`
| The response’s HTTP status code (for example, `200` or `500`).

| `uri`
| The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`).
|===
To add to the default tags, provide one or more `@Bean`s that implement `WebMvcTagsContributor`.
To replace the default tags, provide a `@Bean` that implements `WebMvcTagsProvider`.
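As a rough illustration, a `WebMvcTagsContributor` bean might look like the following sketch; it assumes a servlet-based (`javax.servlet`) application, and the `protocol` tag is a made-up example:

[source,java]
----
import java.util.Collections;
import java.util.List;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import io.micrometer.core.instrument.Tag;

import org.springframework.boot.actuate.metrics.web.servlet.WebMvcTagsContributor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyWebMvcTagsConfiguration {

    @Bean
    public WebMvcTagsContributor protocolTagContributor() {
        return new WebMvcTagsContributor() {

            @Override
            public Iterable<Tag> getTags(HttpServletRequest request, HttpServletResponse response, Object handler,
                    Throwable exception) {
                // Adds a "protocol" tag (for example, HTTP/1.1) to every http.server.requests meter
                return List.of(Tag.of("protocol", request.getProtocol()));
            }

            @Override
            public Iterable<Tag> getLongRequestTags(HttpServletRequest request, Object handler) {
                return Collections.emptyList();
            }

        };
    }

}
----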
TIP: In some cases, exceptions handled in web controllers are not recorded as request metrics tags.
Applications can opt in and record exceptions by setting handled exceptions as request attributes.
==== Spring WebFlux Metrics
Auto-configuration enables the instrumentation of all requests handled by Spring WebFlux controllers and functional handlers.
By default, metrics are generated with the name, `http.server.requests`.
You can customize the name by setting the configprop:management.metrics.web.server.request.metric-name[] property.
`@Timed` annotations are supported on `@Controller` classes and `@RequestMapping` methods (see actuator.adoc for details).
If you do not want to record metrics for all Spring WebFlux requests, you can set configprop:management.metrics.web.server.request.autotime.enabled[] to `false` and exclusively use `@Timed` annotations instead.
By default, WebFlux related metrics are tagged with the following information:
|===
| Tag | Description

| `exception`
| The simple class name of any exception that was thrown while handling the request.

| `method`
| The request’s method (for example, `GET` or `POST`).

| `outcome`
| The request’s outcome, based on the status code of the response. 1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`.

| `status`
| The response’s HTTP status code (for example, `200` or `500`).

| `uri`
| The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`).
|===
To add to the default tags, provide one or more beans that implement `WebFluxTagsContributor`.
To replace the default tags, provide a bean that implements `WebFluxTagsProvider`.
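A sketch of a `WebFluxTagsContributor` bean might look like this; the `scheme` tag is a made-up example:

[source,java]
----
import java.util.List;

import io.micrometer.core.instrument.Tag;

import org.springframework.boot.actuate.metrics.web.reactive.server.WebFluxTagsContributor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyWebFluxTagsConfiguration {

    @Bean
    public WebFluxTagsContributor schemeTagContributor() {
        // Adds a "scheme" tag (http or https) to every http.server.requests meter
        return (exchange, exception) -> List.of(Tag.of("scheme", exchange.getRequest().getURI().getScheme()));
    }

}
----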
TIP: In some cases, exceptions handled in controllers and handler functions are not recorded as request metrics tags.
Applications can opt in and record exceptions by setting handled exceptions as request attributes.
==== Jersey Server Metrics
Auto-configuration enables the instrumentation of all requests handled by the Jersey JAX-RS implementation whenever Micrometer’s micrometer-jersey2
module is on the classpath.
By default, metrics are generated with the name, `http.server.requests`.
You can customize the name by setting the configprop:management.metrics.web.server.request.metric-name[] property.
`@Timed` annotations are supported on request-handling classes and methods (see actuator.adoc for details).
If you do not want to record metrics for all Jersey requests, you can set configprop:management.metrics.web.server.request.autotime.enabled[] to `false` and exclusively use `@Timed` annotations instead.
By default, Jersey server metrics are tagged with the following information:
|===
| Tag | Description

| `exception`
| The simple class name of any exception that was thrown while handling the request.

| `method`
| The request’s method (for example, `GET` or `POST`).

| `outcome`
| The request’s outcome, based on the status code of the response. 1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`.

| `status`
| The response’s HTTP status code (for example, `200` or `500`).

| `uri`
| The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`).
|===
To customize the tags, provide a `@Bean` that implements `JerseyTagsProvider`.
==== HTTP Client Metrics
Spring Boot Actuator manages the instrumentation of both `RestTemplate` and `WebClient`.
For that, you have to inject the auto-configured builder and use it to create instances:
- `RestTemplateBuilder` for `RestTemplate`
- `WebClient.Builder` for `WebClient`
You can also manually apply the customizers responsible for this instrumentation, namely `MetricsRestTemplateCustomizer` and `MetricsWebClientCustomizer`.
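For example, a component that builds its `RestTemplate` from the auto-configured builder (and therefore gets instrumented requests) might look roughly like this sketch; the class name and URL are illustrative:

[source,java]
----
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class MyRestClientService {

    private final RestTemplate restTemplate;

    public MyRestClientService(RestTemplateBuilder restTemplateBuilder) {
        // Building the RestTemplate from the auto-configured builder applies the metrics
        // customizer, so outgoing requests are recorded as http.client.requests
        this.restTemplate = restTemplateBuilder.build();
    }

    public String fetchGreeting() {
        return this.restTemplate.getForObject("https://example.com/greeting", String.class);
    }

}
----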
By default, metrics are generated with the name, `http.client.requests`.
You can customize the name by setting the configprop:management.metrics.web.client.request.metric-name[] property.
By default, metrics generated by an instrumented client are tagged with the following information:
|===
| Tag | Description

| `clientName`
| The host portion of the URI.

| `method`
| The request’s method (for example, `GET` or `POST`).

| `outcome`
| The request’s outcome, based on the status code of the response. 1xx is `INFORMATIONAL`, 2xx is `SUCCESS`, 3xx is `REDIRECTION`, 4xx is `CLIENT_ERROR`, and 5xx is `SERVER_ERROR`.

| `status`
| The response’s HTTP status code if available (for example, `200`).

| `uri`
| The request’s URI template prior to variable substitution, if possible (for example, `/api/person/{id}`).
|===
To customize the tags, and depending on your choice of client, you can provide a `@Bean` that implements `RestTemplateExchangeTagsProvider` or `WebClientExchangeTagsProvider`.
There are convenience static functions in `RestTemplateExchangeTags` and `WebClientExchangeTags`.
==== Tomcat Metrics
Auto-configuration enables the instrumentation of Tomcat only when an `MBeanRegistry` is enabled.
By default, the `MBeanRegistry` is disabled, but you can enable it by setting configprop:server.tomcat.mbeanregistry.enabled[] to `true`.
Tomcat metrics are published under the `tomcat.` meter name.
==== Cache Metrics
Auto-configuration enables the instrumentation of all available `Cache` instances on startup, with metrics prefixed with `cache`.
Cache instrumentation is standardized for a basic set of metrics.
Additional, cache-specific metrics are also available.
The following cache libraries are supported:
- Caffeine
- EhCache 2
- Hazelcast
- Any compliant JCache (JSR-107) implementation
- Redis
Metrics are tagged by the name of the cache and by the name of the CacheManager
, which is derived from the bean name.
NOTE: Only caches that are configured on startup are bound to the registry.
For caches not defined in the cache’s configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required.
A `CacheMetricsRegistrar` bean is made available to make that process easier.
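A rough sketch of such an explicit registration might look like the following; the component and the cache name are illustrative, and the cache is assumed to be created at runtime rather than on startup:

[source,java]
----
import org.springframework.boot.actuate.metrics.cache.CacheMetricsRegistrar;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class MyCacheMetricsRegistration {

    private final CacheMetricsRegistrar cacheMetricsRegistrar;

    private final CacheManager cacheManager;

    public MyCacheMetricsRegistration(CacheMetricsRegistrar cacheMetricsRegistrar, CacheManager cacheManager) {
        this.cacheMetricsRegistrar = cacheMetricsRegistrar;
        this.cacheManager = cacheManager;
    }

    public void registerLateCache(String cacheName) {
        // Explicitly bind a cache created after startup so that its metrics are published
        Cache cache = this.cacheManager.getCache(cacheName);
        if (cache != null) {
            this.cacheMetricsRegistrar.bindCacheToRegistry(cache);
        }
    }

}
----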
==== DataSource Metrics
Auto-configuration enables the instrumentation of all available `DataSource` objects with metrics prefixed with `jdbc.connections`.
Data source instrumentation results in gauges that represent the currently active, idle, maximum allowed, and minimum allowed connections in the pool.
Metrics are also tagged by the name of the DataSource
computed based on the bean name.
TIP: By default, Spring Boot provides metadata for all supported data sources.
You can add additional `DataSourcePoolMetadataProvider` beans if your favorite data source is not supported.
See `DataSourcePoolMetadataProvidersConfiguration` for examples.
Also, Hikari-specific metrics are exposed with a `hikaricp` prefix.
Each metric is tagged by the name of the pool (you can control it with `spring.datasource.name`).
==== Hibernate Metrics
If `org.hibernate:hibernate-micrometer` is on the classpath, all available Hibernate `EntityManagerFactory` instances that have statistics enabled are instrumented with a metric named `hibernate`.
Metrics are also tagged by the name of the `EntityManagerFactory`, which is derived from the bean name.
To enable statistics, the standard JPA property `hibernate.generate_statistics` must be set to `true`.
You can enable that on the auto-configured `EntityManagerFactory`:
[source,yaml]
----
spring:
  jpa:
    properties:
      "[hibernate.generate_statistics]": true
----
==== Spring Data Repository Metrics
Auto-configuration enables the instrumentation of all Spring Data `Repository` method invocations.
By default, metrics are generated with the name, `spring.data.repository.invocations`.
You can customize the name by setting the configprop:management.metrics.data.repository.metric-name[] property.
`@Timed` annotations are supported on `Repository` classes and methods (see actuator.adoc for details).
If you do not want to record metrics for all `Repository` invocations, you can set configprop:management.metrics.data.repository.autotime.enabled[] to `false` and exclusively use `@Timed` annotations instead.
By default, repository invocation related metrics are tagged with the following information:
|===
| Tag | Description

| `repository`
| The simple class name of the source `Repository`.

| `method`
| The name of the `Repository` method that was invoked.

| `state`
| The result state (`SUCCESS`, `ERROR`, `CANCELED`, or `RUNNING`).

| `exception`
| The simple class name of any exception that was thrown from the invocation.
|===
To replace the default tags, provide a `@Bean` that implements `RepositoryTagsProvider`.
==== RabbitMQ Metrics
Auto-configuration enables the instrumentation of all available RabbitMQ connection factories with a metric named `rabbitmq`.
==== Spring Integration Metrics
Spring Integration automatically provides {spring-integration-docs}system-management.html#micrometer-integration[Micrometer support] whenever a MeterRegistry
bean is available.
Metrics are published under the `spring.integration.` meter name.
==== Kafka Metrics
Auto-configuration registers a MicrometerConsumerListener
and MicrometerProducerListener
for the auto-configured consumer factory and producer factory, respectively.
It also registers a KafkaStreamsMicrometerListener
for StreamsBuilderFactoryBean
.
For more detail, see the {spring-kafka-docs}#micrometer-native[Micrometer Native Metrics] section of the Spring Kafka documentation.
==== MongoDB Metrics
This section briefly describes the available metrics for MongoDB.
===== MongoDB Command Metrics
Auto-configuration registers a `MongoMetricsCommandListener` with the auto-configured `MongoClient`.
A timer metric named `mongodb.driver.commands` is created for each command issued to the underlying MongoDB driver.
Each metric is tagged with the following information by default:
|===
| Tag | Description

| `command`
| The name of the command issued.

| `cluster.id`
| The identifier of the cluster to which the command was sent.

| `server.address`
| The address of the server to which the command was sent.

| `status`
| The outcome of the command (`SUCCESSFUL` or `FAILED`).
|===
To replace the default metric tags, define a `MongoCommandTagsProvider` bean, as the following example shows:
link:{docs-java}/actuator/metrics/supported/mongodb/command/MyCommandTagsProviderConfiguration.java[role=include]
To disable the auto-configured command metrics, set the following property:
[source,yaml]
----
management:
  metrics:
    mongo:
      command:
        enabled: false
----
===== MongoDB Connection Pool Metrics
Auto-configuration registers a `MongoMetricsConnectionPoolListener` with the auto-configured `MongoClient`.
The following gauge metrics are created for the connection pool:
- `mongodb.driver.pool.size` reports the current size of the connection pool, including idle and in-use members.
- `mongodb.driver.pool.checkedout` reports the count of connections that are currently in use.
- `mongodb.driver.pool.waitqueuesize` reports the current size of the wait queue for a connection from the pool.
Each metric is tagged with the following information by default:
|===
| Tag | Description

| `cluster.id`
| The identifier of the cluster to which the connection pool corresponds.

| `server.address`
| The address of the server to which the connection pool corresponds.
|===
To replace the default metric tags, define a `MongoConnectionPoolTagsProvider` bean:
link:{docs-java}/actuator/metrics/supported/mongodb/connectionpool/MyConnectionPoolTagsProviderConfiguration.java[role=include]
To disable the auto-configured connection pool metrics, set the following property:
[source,yaml]
----
management:
  metrics:
    mongo:
      connectionpool:
        enabled: false
----
==== Jetty Metrics
Auto-configuration binds metrics for Jetty’s `ThreadPool` by using Micrometer’s `JettyServerThreadPoolMetrics`.
Metrics for Jetty’s `Connector` instances are bound by using Micrometer’s `JettyConnectionMetrics` and, when configprop:server.ssl.enabled[] is set to `true`, Micrometer’s `JettySslHandshakeMetrics`.
==== @Timed Annotation Support
You can use the `@Timed` annotation from the `io.micrometer.core.annotation` package with several of the supported technologies described earlier.
If supported, you can use the annotation at either the class level or the method level.
For example, the following code shows how you can use the annotation to instrument all request mappings in a `@RestController`:
link:{docs-java}/actuator/metrics/supported/timedannotation/all/MyController.java[role=include]
If you want only to instrument a single mapping, you can use the annotation on the method instead of the class:
link:{docs-java}/actuator/metrics/supported/timedannotation/single/MyController.java[role=include]
You can also combine class-level and method-level annotations if you want to change the timing details for a specific method:
link:{docs-java}/actuator/metrics/supported/timedannotation/change/MyController.java[role=include]
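As an illustrative sketch (the controller, request paths, and metric name below are made up), combining the two levels might look like this:

[source,java]
----
import io.micrometer.core.annotation.Timed;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Timed
public class MyTimedController {

    // Timed by the class-level annotation with the default settings
    @GetMapping("/api/greetings")
    public String greetings() {
        return "hello";
    }

    // The method-level annotation overrides the timing details for this mapping only
    @GetMapping("/api/people/{id}")
    @Timed(value = "people.requests", histogram = true)
    public String person(@PathVariable Long id) {
        return "person-" + id;
    }

}
----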
NOTE: A `@Timed` annotation with `longTask = true` enables a long task timer for the method.
Long task timers require a separate metric name and can be stacked with a short task timer.
==== Redis Metrics
Auto-configuration registers a `MicrometerCommandLatencyRecorder` for the auto-configured `LettuceConnectionFactory`.
For more details, refer to the {lettuce-docs}#command.latency.metrics.micrometer[Micrometer Metrics section] of the Lettuce documentation.
=== Registering Custom Metrics
To register custom metrics, inject `MeterRegistry` into your component:
link:{docs-java}/actuator/metrics/registeringcustom/MyBean.java[role=include]
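As an illustrative sketch (the bean and the `orders.created` metric name are made up), injecting the registry and registering a counter might look like this:

[source,java]
----
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.stereotype.Component;

@Component
public class MyBean {

    private final Counter ordersCounter;

    public MyBean(MeterRegistry registry) {
        // Registers (or looks up) a counter on the Spring-managed registry
        this.ordersCounter = registry.counter("orders.created");
    }

    public void handleOrder() {
        // Increment the counter each time an order is handled
        this.ordersCounter.increment();
    }

}
----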
If your metrics depend on other beans, we recommend that you use a `MeterBinder` to register them:
link:{docs-java}/actuator/metrics/registeringcustom/MyMeterBinderConfiguration.java[role=include]
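A rough sketch of such a binder, assuming a `Queue` bean is defined elsewhere in the application, might look like the following:

[source,java]
----
import java.util.Queue;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.binder.MeterBinder;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyMeterBinderConfiguration {

    @Bean
    public MeterBinder queueSize(Queue<?> queue) {
        // The gauge is registered only once both the MeterRegistry and the queue bean are available
        return (registry) -> Gauge.builder("queue.size", queue::size).register(registry);
    }

}
----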
Using a MeterBinder
ensures that the correct dependency relationships are set up and that the bean is available when the metric’s value is retrieved.
A MeterBinder
implementation can also be useful if you find that you repeatedly instrument a suite of metrics across components or applications.
NOTE: By default, metrics from all `MeterBinder` beans are automatically bound to the Spring-managed `MeterRegistry`.
=== Customizing Individual Metrics
If you need to apply customizations to specific `Meter` instances, you can use the `io.micrometer.core.instrument.config.MeterFilter` interface.
For example, if you want to rename the `mytag.region` tag to `mytag.area` for all meter IDs beginning with `com.example`, you can do the following:
link:{docs-java}/actuator/metrics/customizing/MyMetricsFilterConfiguration.java[role=include]
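An illustrative sketch of such a filter bean, using Micrometer's `MeterFilter.renameTag` factory method, might look like this:

[source,java]
----
import io.micrometer.core.instrument.config.MeterFilter;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyMetricsFilterConfiguration {

    @Bean
    public MeterFilter renameRegionTagMeterFilter() {
        // Renames the "mytag.region" tag to "mytag.area" for meter IDs starting with "com.example"
        return MeterFilter.renameTag("com.example", "mytag.region", "mytag.area");
    }

}
----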
NOTE: By default, all `MeterFilter` beans are automatically bound to the Spring-managed `MeterRegistry`.
Make sure to register your metrics by using the Spring-managed `MeterRegistry` and not any of the static methods on `Metrics`.
These use the global registry that is not Spring-managed.
==== Common Tags
Common tags can be applied to all meters by using properties, as the following example shows:

[source,yaml]
----
management:
  metrics:
    tags:
      region: "us-east-1"
      stack: "prod"
----
The preceding example adds `region` and `stack` tags to all meters with a value of `us-east-1` and `prod`, respectively.
NOTE: The order of common tags is important if you use Graphite.
As the order of common tags cannot be guaranteed by using this approach, Graphite users are advised to define a custom `MeterFilter` instead.
==== Per-meter Properties
In addition to `MeterFilter` beans, you can apply a limited set of customization on a per-meter basis by using properties.
Per-meter customizations apply to any meter IDs that start with the given name.
The following example disables any meters that have an ID starting with `example.remote`:
[source,yaml]
----
management:
  metrics:
    enable:
      example:
        remote: false
----
The following properties allow per-meter customization:
|===
| Property | Description

| configprop:management.metrics.enable[]
| Whether to prevent meters from emitting any metrics.

| configprop:management.metrics.distribution.percentiles-histogram[]
| Whether to publish a histogram suitable for computing aggregable (across dimension) percentile approximations.

| configprop:management.metrics.distribution.minimum-expected-value[], configprop:management.metrics.distribution.maximum-expected-value[]
| Publish fewer histogram buckets by clamping the range of expected values.

| configprop:management.metrics.distribution.percentiles[]
| Publish percentile values computed in your application.

| configprop:management.metrics.distribution.expiry[], configprop:management.metrics.distribution.buffer-length[]
| Give greater weight to recent samples by accumulating them in ring buffers which rotate after a configurable expiry, with a configurable buffer length.

| configprop:management.metrics.distribution.slo[]
| Publish a cumulative histogram with buckets defined by your service-level objectives.
|===
For more details on the concepts behind `percentiles-histogram`, `percentiles`, and `slo`, see the {micrometer-concepts-docs}#_histograms_and_percentiles[“Histograms and percentiles” section] of the Micrometer documentation.
=== Metrics Endpoint
Spring Boot provides a `metrics` endpoint that you can use diagnostically to examine the metrics collected by an application.
The endpoint is not available by default and must be exposed. See exposing endpoints for more details.
Navigating to `/actuator/metrics` displays a list of available meter names.
You can drill down to view information about a particular meter by providing its name as a selector — for example, `/actuator/metrics/jvm.memory.max`.
TIP: The name you use here should match the name used in the code, not the name after it has been naming-convention normalized for a monitoring system to which it is shipped.
In other words, if `jvm.memory.max` appears as `jvm_memory_max` in Prometheus because of its snake-case naming convention, you should still use `jvm.memory.max` as the selector when inspecting the meter in the `metrics` endpoint.
You can also add any number of `tag=KEY:VALUE` query parameters to the end of the URL to dimensionally drill down on a meter — for example, `/actuator/metrics/jvm.memory.max?tag=area:nonheap`.
TIP: The reported measurements are the sum of the statistics of all meters that match the meter name and any tags that have been applied.
In the preceding example, the returned value is the sum of the maximum memory footprints of the non-heap memory areas (such as “Code Cache”, “Compressed Class Space”, and “Metaspace”).