Monitoring and alerting prototype diagram
Diagram explanation
prometheus and alertmanager run as two containers in the same pod, managed by a Deployment controller. alertmanager listens on port 9093 by default, and because prometheus and alertmanager share a pod, prometheus can reach alertmanager at localhost:9093 to deliver alert notifications. The alerting rule file rules.yml is mounted into the prometheus container as a ConfigMap for prometheus to use, and the notification configuration is likewise mounted into the alertmanager container as a ConfigMap. Here we use email to receive alert notifications; the details are in alertmanager.yml.
Test environment
OS: Linux 3.10.0-693.el7.x86_64 x86_64 GNU/Linux
Platform: Kubernetes v1.10.5
Tip: the complete prometheus and alertmanager configurations are at the end of this article.
Specify the alerting rule path in prometheus. rules.yml holds the alerting rules; we mount it into the /etc/prometheus directory as a ConfigMap:
rule_files:
  - /etc/prometheus/rules.yml
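Once the pod is up (see the full manifests at the end of the article), you can confirm the ConfigMap landed where prometheus expects it. A minimal sketch; the pod name is a placeholder you would look up first:

# Find the prometheus pod name
kubectl get pods -n kube-system -l app=prometheus

# rules.yml and prometheus.yml should both be listed
kubectl exec -n kube-system <prometheus-pod> -c prometheus -- ls /etc/prometheus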
Here we define a single InstanceDown alert: if a host stays down for 1 minute, prometheus fires the alert.
rules.yml: |
  groups:
  - name: example
    rules:
    - alert: InstanceDown
      expr: up == 0
      for: 1m
      labels:
        severity: page
      annotations:
        summary: "Instance {{ $labels.instance }} down"
        description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."
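Rule files are easy to get subtly wrong, and promtool (bundled with prometheus) can validate this one before it goes into the ConfigMap. A sketch, assuming the rules are saved locally as rules.yml:

# Exits non-zero and prints the offending line on a syntax error
promtool check rules rules.yml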
Configure communication between prometheus and alertmanager (so prometheus can send alert notifications to alertmanager)
alertmanager listens on port 9093 by default, and since prometheus and alertmanager share a pod, prometheus can reach alertmanager directly at localhost:9093:
alerting:
  alertmanagers:
  - static_configs:
    - targets: ["localhost:9093"]
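A quick way to confirm the in-pod link is to hit alertmanager's v1 status endpoint from inside the pod. A sketch; the pod name is a placeholder, and it assumes the busybox wget bundled in the prom/prometheus image:

# alertmanager answers on localhost:9093 inside the shared pod
kubectl exec -n kube-system <prometheus-pod> -c prometheus -- \
  wget -qO- http://localhost:9093/api/v1/status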
Configure alert notification receivers in alertmanager
Here we use email notification as an example: when alertmanager receives an alert from prometheus, it sends an alert email to the configured mailbox. This configuration is also mounted into the alertmanager container as a ConfigMap:
alertmanager.yml: |-
  global:
    smtp_smarthost: "smtp.exmail.qq.com:465"
    smtp_from: "xin.liu@woqutech.com"
    smtp_auth_username: "xin.liu@woqutech.com"
    smtp_auth_password: "xxxxxxxxxxxx"
    smtp_require_tls: false
  route:
    group_by: [alertname]
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 10m
    receiver: default-receiver
  receivers:
  - name: "default-receiver"
    email_configs:
    - to: "1148576125@qq.com"
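alertmanager ships with amtool, which can lint this file before you load it into the ConfigMap. A sketch, assuming the file is saved locally as alertmanager.yml:

# Parses the routes and receivers; reports the parsed tree on success
amtool check-config alertmanager.yml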
Prototype results
The configured alerting rules are visible in the prometheus web UI.
To see the test take effect, shut down one of the host nodes:
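Any way of taking the node offline works. A sketch, assuming ssh access to the node (the addresses are placeholders), plus a query against prometheus's HTTP API to watch up drop to 0:

# Power off one worker node
ssh root@<node-ip> 'shutdown -h now'

# After ~1 minute the InstanceDown alert moves from pending to firing;
# up == 0 for the dead node is visible via the query API (NodePort 30065)
curl 'http://<master-ip>:30065/api/v1/query?query=up'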
In the prometheus web UI, an InstanceDown alert is triggered.
In the alertmanager web UI, alertmanager has received the alert sent by prometheus.
The configured mailbox receives the alert email sent by alertmanager.
Complete configuration
node_exporter_daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    app: node_exporter
spec:
  selector:
    matchLabels:
      name: node_exporter
  template:
    metadata:
      labels:
        name: node_exporter
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: node-exporter
        image: alery/node-exporter:1.0
        ports:
        - name: node-exporter
          containerPort: 9100
          hostPort: 9100
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: host
          mountPath: /host
          readOnly: true
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: host
        hostPath:
          path: /
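To roll the DaemonSet out and confirm each node is exporting metrics, a sketch (the node IP is a placeholder):

kubectl apply -f node_exporter_daemonset.yaml
kubectl get pods -n kube-system -o wide | grep node-exporter

# hostPort: 9100 exposes the exporter on each node's own address
curl -s http://<node-ip>:9100/metrics | head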
alertmanager-cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: kube-system
data:
  alertmanager.yml: |-
    global:
      smtp_smarthost: "smtp.exmail.qq.com:465"
      smtp_from: "xin.liu@woqutech.com"
      smtp_auth_username: "xin.liu@woqutech.com"
      smtp_auth_password: "xxxxxxxxxxxx"
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 10m
      receiver: default-receiver
    receivers:
    - name: "default-receiver"
      email_configs:
      - to: "1148576125@qq.com"
prometheus-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
  namespace: kube-system
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
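You can spot-check that the binding gives the prometheus ServiceAccount what pod discovery needs, using kubectl's impersonation support. A sketch:

kubectl apply -f prometheus-rbac.yaml

# Both commands should print "yes" once the binding is in place
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:prometheus
kubectl auth can-i watch endpoints --as=system:serviceaccount:kube-system:prometheus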
prometheus-cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-config-v0.1.0
  namespace: kube-system
data:
  prometheus.yml: |
    rule_files:
    - /etc/prometheus/rules.yml
    alerting:
      alertmanagers:
      - static_configs:
        - targets: ["localhost:9093"]
    scrape_configs:
    - job_name: "node"
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_ip]
        action: replace
        target_label: __address__
        replacement: $1:9100
      - source_labels: [__meta_kubernetes_pod_host_ip]
        action: replace
        target_label: instance
      - source_labels: [__meta_kubernetes_pod_node_name]
        action: replace
        target_label: node_name
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(name)
      - source_labels: [__meta_kubernetes_pod_label_name]
        regex: node_exporter
        action: keep
  rules.yml: |
    groups:
    - name: example
      rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."
      - alert: APIHighRequestLatency
        expr: api_http_request_latencies_second{quantile="0.5"} > 1
        for: 10m
        annotations:
          summary: "High request latency on {{ $labels.instance }}"
          description: "{{ $labels.instance }} has a median request latency above 1s (current value: {{ $value }}s)"
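Since rule_files points at /etc/prometheus/rules.yml, the easiest place to validate the whole configuration is inside the running container, where that path resolves. A sketch (the pod name is a placeholder; promtool is bundled in the prom/prometheus image):

kubectl exec -n kube-system <prometheus-pod> -c prometheus -- \
  promtool check config /etc/prometheus/prometheus.yml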
prometheus.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: kube-system
  name: prometheus
  labels:
    name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      name: prometheus
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      securityContext:
        runAsUser: 0
        fsGroup: 0
      containers:
      - name: prometheus
        image: prom/prometheus:v2.4.0
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus
        - name: prometheus-storage
          mountPath: /prometheus
        - name: localtime
          mountPath: /etc/localtime
      - name: alertmanager
        image: prom/alertmanager:v0.14.0
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config-v0.1.0
      - name: alertmanager-config
        configMap:
          name: alertmanager
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: prometheus-storage
        hostPath:
          path: /gaea/prometheus
          type: DirectoryOrCreate
      - name: alertmanager-storage
        hostPath:
          path: /gaea/alertmanager
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: "true"
  name: prometheus
  namespace: kube-system
spec:
  ports:
  - name: prometheus
    nodePort: 30065
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: "true"
  name: alertmanager
  namespace: kube-system
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
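Putting it all together, a sketch of the full rollout using the file names above (node IPs are placeholders):

kubectl apply -f node_exporter_daemonset.yaml
kubectl apply -f prometheus-rbac.yaml
kubectl apply -f prometheus-cm.yaml
kubectl apply -f alertmanager-cm.yaml
kubectl apply -f prometheus.yaml

# The pod runs both containers, so READY should show 2/2
kubectl get pods -n kube-system -l app=prometheus

# Web UIs via the NodePort services:
#   prometheus:   http://<any-node-ip>:30065
#   alertmanager: http://<any-node-ip>:30066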