Preface
In real projects, exposing a metrics endpoint, hooking it up to the company's monitoring system, and making the service observable is a basic requirement. Integrating Prometheus metrics into Spring Boot 1.x was very simple, but Spring Boot 2.x takes more effort: the official prometheus-client-java library is not compatible with Spring Boot 2.x, so we have to go through Micrometer.
Step 1: Add the required dependencies
Add the following to the pom.xml file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
Step 2: Add the configuration
Add the following settings to application.yml:
management:
  endpoints:
    web:
      exposure:
        include: ["metrics","prometheus"]
  endpoint:
    metrics:
      enabled: true
    prometheus:
      enabled: true
  metrics:
    export:
      prometheus:
        enabled: true
PS: If you want to expose the other metrics endpoints as well, you can set include: ["*"].
Step 3: Run and check the metrics
Run the project and open http://localhost:8090/actuator; you will see the following:
{"_links":{"self":{"href":"http://localhost:8090/actuator","templated":false},"prometheus":{"href":"http://localhost:8090/actuator/prometheus","templated":false},"metrics-requiredMetricName":{"href":"http://localhost:8090/actuator/metrics/{requiredMetricName}","templated":true},"metrics":{"href":"http://localhost:8090/actuator/metrics","templated":false}}}
PS: Note that my project runs on port 8090.
As you can see above, we can fetch the metrics in Prometheus format from http://localhost:8090/actuator/prometheus.
The output looks like this:
# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
# TYPE system_load_average_1m gauge
system_load_average_1m 3.12939453125
# HELP system_cpu_count The number of processors available to the Java virtual machine
# TYPE system_cpu_count gauge
system_cpu_count 8.0
# HELP system_cpu_usage The "recent cpu usage" for the whole system
# TYPE system_cpu_usage gauge
system_cpu_usage 0.11287867482465304
# HELP jvm_gc_pause_seconds Time spent in GC pause
# TYPE jvm_gc_pause_seconds summary
jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 1.0
jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 0.014
# HELP jvm_gc_pause_seconds_max Time spent in GC pause
# TYPE jvm_gc_pause_seconds_max gauge
jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.014
# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
# TYPE process_cpu_usage gauge
process_cpu_usage 2.803742769828689E-4
# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the young generation memory pool after one GC to before the next
# TYPE jvm_gc_memory_allocated_bytes_total counter
jvm_gc_memory_allocated_bytes_total 1.73539328E8
# HELP process_uptime_seconds The uptime of the Java virtual machine
# TYPE process_uptime_seconds gauge
process_uptime_seconds 175.835
# HELP tomcat_sessions_active_current_sessions
# TYPE tomcat_sessions_active_current_sessions gauge
tomcat_sessions_active_current_sessions 0.0
# HELP tomcat_global_received_bytes_total
# TYPE tomcat_global_received_bytes_total counter
tomcat_global_received_bytes_total{name="http-nio-8090",} 0.0
# HELP tomcat_global_error_total
# TYPE tomcat_global_error_total counter
tomcat_global_error_total{name="http-nio-8090",} 0.0
# HELP tomcat_threads_current_threads
# TYPE tomcat_threads_current_threads gauge
tomcat_threads_current_threads{name="http-nio-8090",} 10.0
# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use
# TYPE jvm_memory_committed_bytes gauge
jvm_memory_committed_bytes{area="heap",id="PS Survivor Space",} 1.8874368E7
jvm_memory_committed_bytes{area="heap",id="PS Old Gen",} 1.63053568E8
jvm_memory_committed_bytes{area="heap",id="PS Eden Space",} 1.73539328E8
jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 5.505024E7
jvm_memory_committed_bytes{area="nonheap",id="Code Cache",} 1.114112E7
jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 7602176.0
# HELP tomcat_sessions_expired_sessions_total
# TYPE tomcat_sessions_expired_sessions_total counter
tomcat_sessions_expired_sessions_total 0.0
# HELP tomcat_sessions_rejected_sessions_total
# TYPE tomcat_sessions_rejected_sessions_total counter
tomcat_sessions_rejected_sessions_total 0.0
# HELP jvm_threads_states_threads The current number of threads having NEW state
# TYPE jvm_threads_states_threads gauge
jvm_threads_states_threads{state="runnable",} 11.0
jvm_threads_states_threads{state="blocked",} 0.0
jvm_threads_states_threads{state="waiting",} 13.0
jvm_threads_states_threads{state="timed-waiting",} 5.0
jvm_threads_states_threads{state="new",} 0.0
jvm_threads_states_threads{state="terminated",} 0.0
# HELP tomcat_sessions_alive_max_seconds
# TYPE tomcat_sessions_alive_max_seconds gauge
tomcat_sessions_alive_max_seconds 0.0
# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
# TYPE jvm_threads_live_threads gauge
jvm_threads_live_threads 29.0
# HELP tomcat_global_sent_bytes_total
# TYPE tomcat_global_sent_bytes_total counter
tomcat_global_sent_bytes_total{name="http-nio-8090",} 9114.0
# HELP jvm_gc_max_data_size_bytes Max size of old generation memory pool
# TYPE jvm_gc_max_data_size_bytes gauge
jvm_gc_max_data_size_bytes 0.0
# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
# TYPE jvm_memory_max_bytes gauge
jvm_memory_max_bytes{area="heap",id="PS Survivor Space",} 1.8874368E7
jvm_memory_max_bytes{area="heap",id="PS Old Gen",} 2.863661056E9
jvm_memory_max_bytes{area="heap",id="PS Eden Space",} 1.392508928E9
jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
jvm_memory_max_bytes{area="nonheap",id="Code Cache",} 2.5165824E8
jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
# HELP process_files_open_files The open file descriptor count
# TYPE process_files_open_files gauge
process_files_open_files 142.0
# HELP tomcat_sessions_active_max_sessions
# TYPE tomcat_sessions_active_max_sessions gauge
tomcat_sessions_active_max_sessions 0.0
# HELP jvm_threads_daemon_threads The current number of live daemon threads
# TYPE jvm_threads_daemon_threads gauge
jvm_threads_daemon_threads 25.0
# HELP tomcat_threads_config_max_threads
# TYPE tomcat_threads_config_max_threads gauge
tomcat_threads_config_max_threads{name="http-nio-8090",} 200.0
# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
# TYPE jvm_buffer_count_buffers gauge
jvm_buffer_count_buffers{id="direct",} 5.0
jvm_buffer_count_buffers{id="mapped",} 0.0
# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
# TYPE jvm_gc_memory_promoted_bytes_total counter
jvm_gc_memory_promoted_bytes_total 8192.0
# HELP logback_events_total Number of error level events that made it to the logs
# TYPE logback_events_total counter
logback_events_total{level="warn",} 0.0
logback_events_total{level="debug",} 0.0
logback_events_total{level="error",} 0.0
logback_events_total{level="trace",} 0.0
logback_events_total{level="info",} 77.0
# HELP jvm_gc_live_data_size_bytes Size of old generation memory pool after a full GC
# TYPE jvm_gc_live_data_size_bytes gauge
jvm_gc_live_data_size_bytes 0.0
# HELP tomcat_global_request_seconds
# TYPE tomcat_global_request_seconds summary
tomcat_global_request_seconds_count{name="http-nio-8090",} 2.0
tomcat_global_request_seconds_sum{name="http-nio-8090",} 0.131
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="PS Survivor Space",} 1.4836336E7
jvm_memory_used_bytes{area="heap",id="PS Old Gen",} 2.4336488E7
jvm_memory_used_bytes{area="heap",id="PS Eden Space",} 1.60499616E8
jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 5.22724E7
jvm_memory_used_bytes{area="nonheap",id="Code Cache",} 1.0880512E7
jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 7006832.0
# HELP http_server_requests_seconds
# TYPE http_server_requests_seconds summary
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator/prometheus",} 1.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator/prometheus",} 0.071661037
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator",} 1.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator",} 0.026864224
# HELP http_server_requests_seconds_max
# TYPE http_server_requests_seconds_max gauge
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator/prometheus",} 0.071661037
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator",} 0.026864224
# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
# TYPE jvm_buffer_memory_used_bytes gauge
jvm_buffer_memory_used_bytes{id="direct",} 40960.0
jvm_buffer_memory_used_bytes{id="mapped",} 0.0
# HELP process_start_time_seconds Start time of the process since unix epoch.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.556615678449E9
# HELP tomcat_threads_busy_threads
# TYPE tomcat_threads_busy_threads gauge
tomcat_threads_busy_threads{name="http-nio-8090",} 1.0
# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
# TYPE jvm_threads_peak_threads gauge
jvm_threads_peak_threads 36.0
# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
# TYPE jvm_classes_loaded_classes gauge
jvm_classes_loaded_classes 10376.0
# HELP tomcat_sessions_created_sessions_total
# TYPE tomcat_sessions_created_sessions_total counter
tomcat_sessions_created_sessions_total 0.0
# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool
# TYPE jvm_buffer_total_capacity_bytes gauge
jvm_buffer_total_capacity_bytes{id="direct",} 40960.0
jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
# HELP tomcat_global_request_max_seconds
# TYPE tomcat_global_request_max_seconds gauge
tomcat_global_request_max_seconds{name="http-nio-8090",} 0.103
# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
# TYPE jvm_classes_unloaded_classes_total counter
jvm_classes_unloaded_classes_total 1.0
# HELP process_files_max_files The maximum file descriptor count
# TYPE process_files_max_files gauge
process_files_max_files 10240.0
Step 4: Define your own custom metrics
In the service layer, write the concrete MetricsService implementation, as follows:
import com.scmp.scmpnotify.service.MetricsService;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class MetricsServiceImpl implements MetricsService {

    private final Counter sendSuccessCounter;
    private final Counter sendFaileCounter;

    // The counters are registered with the MeterRegistry that Spring Boot auto-configures.
    MetricsServiceImpl(MeterRegistry registry) {
        this.sendFaileCounter = Counter.builder("send_faile")
                .description("send faile email total")
                .register(registry);
        this.sendSuccessCounter = Counter.builder("send_success")
                .description("send success email total")
                .register(registry);
    }

    @Override
    public void sendSuccessIncrement() {
        sendSuccessCounter.increment();
    }

    @Override
    public void sendFaileIncrement() {
        sendFaileCounter.increment();
    }
}
Here we define two Counter fields, which count the number of successfully sent and failed messages respectively.
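As a side note that is not part of the original code: Micrometer counters can also carry tags, so the success/failure split could be modeled as a single metric with a status tag instead of two separately named counters. A minimal sketch, using a hypothetical meter name email_send:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

// Sketch: one logical metric, distinguished by a "status" tag.
// The meter name "email_send" is chosen for illustration only.
public class TaggedEmailMetrics {

    private final Counter success;
    private final Counter failure;

    public TaggedEmailMetrics(MeterRegistry registry) {
        this.success = Counter.builder("email_send")
                .description("emails sent, by outcome")
                .tag("status", "success")
                .register(registry);
        this.failure = Counter.builder("email_send")
                .description("emails sent, by outcome")
                .tag("status", "failure")
                .register(registry);
    }

    public void markSuccess() { success.increment(); }
    public void markFailure() { failure.increment(); }
}

On the Prometheus side this would show up as email_send_total{status="success"} and email_send_total{status="failure"}, which is often easier to query than two separate metric names.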
In the controller layer, add:
@Autowired
private MetricsService metricsService;
Then, wherever the business logic needs it, simply call metricsService.sendSuccessIncrement() (or metricsService.sendFaileIncrement()).
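For illustration, here is a minimal controller sketch. Only the metricsService calls come from the article; the controller class, the request mapping and the doSend helper are hypothetical:

import com.scmp.scmpnotify.service.MetricsService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller showing where the counters get incremented.
@RestController
public class NotifyController {

    @Autowired
    private MetricsService metricsService;

    @PostMapping("/notify/email")
    public String send(@RequestBody String message) {
        try {
            doSend(message);                        // stands in for the real sending logic
            metricsService.sendSuccessIncrement();  // count a successful send
            return "ok";
        } catch (Exception e) {
            metricsService.sendFaileIncrement();    // count a failed send
            return "error";
        }
    }

    private void doSend(String message) {
        // the real email-sending code would live here
    }
}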
Finally, restart the project and open http://localhost:8090/actuator/prometheus again; the custom metrics are now present.
For example:
...
# HELP send_success_total send success email total
# TYPE send_success_total counter
send_success_total 0.0
# HELP send_faile_total send faile email total
# TYPE send_faile_total counter
send_faile_total 0.0
...
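If you want to sanity-check the counters without starting the whole application, Micrometer's in-memory SimpleMeterRegistry is enough. A sketch, assuming it sits in the same package as MetricsServiceImpl (whose constructor is package-private); the class name here is made up:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

// Quick standalone check of the counter behaviour against an in-memory registry.
public class CounterSmokeCheck {

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        MetricsServiceImpl service = new MetricsServiceImpl(registry);

        service.sendSuccessIncrement();
        service.sendSuccessIncrement();
        service.sendFaileIncrement();

        // Meters are stored under the names passed to Counter.builder(...);
        // the Prometheus endpoint later exposes them with a _total suffix.
        System.out.println(registry.get("send_success").counter().count()); // 2.0
        System.out.println(registry.get("send_faile").counter().count());   // 1.0
    }
}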