
add es exporter

Your Name 6 years ago
parent
commit
fb7e0c599e

+ 24 - 0
prometheus-operator/exporter/elasticsearch-exporter/.helmignore

@@ -0,0 +1,24 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+# OWNERS file for Kubernetes
+OWNERS
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/

+ 15 - 0
prometheus-operator/exporter/elasticsearch-exporter/Chart.yaml

@@ -0,0 +1,15 @@
+apiVersion: v1
+description: Elasticsearch stats exporter for Prometheus
+name: elasticsearch-exporter
+version: 1.1.3
+appVersion: 1.0.2
+home: https://github.com/justwatchcom/elasticsearch_exporter
+sources:
+  - https://github.com/justwatchcom/elasticsearch_exporter
+keywords:
+  - metrics
+  - elasticsearch
+  - monitoring
+maintainers:
+  - name: svenmueller
+    email: sven.mueller@commercetools.com

+ 6 - 0
prometheus-operator/exporter/elasticsearch-exporter/OWNERS

@@ -0,0 +1,6 @@
+approvers:
+- desaintmartin
+- svenmueller
+reviewers:
+- desaintmartin
+- svenmueller

+ 87 - 0
prometheus-operator/exporter/elasticsearch-exporter/README.md

@@ -0,0 +1,87 @@
+# Elasticsearch Exporter
+
+Prometheus exporter for various metrics about Elasticsearch, written in Go.
+
+Learn more: https://github.com/justwatchcom/elasticsearch_exporter
+
+## TL;DR
+
+```bash
+$ helm install stable/elasticsearch-exporter
+```
+
+## Introduction
+
+This chart creates an Elasticsearch-Exporter deployment on a [Kubernetes](http://kubernetes.io)
+cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.8+ with Beta APIs enabled
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+$ helm install --name my-release stable/elasticsearch-exporter
+```
+
+The command deploys Elasticsearch-Exporter on the Kubernetes cluster using the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
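+Once the pods are running, you can check the exporter's metrics endpoint locally. This is a quick sanity check using the chart's default port and metrics path; it assumes a kubectl version that can port-forward a Deployment (otherwise, target the pod directly):
+
+```bash
+# forward the exporter's HTTP port to localhost
+$ kubectl port-forward deploy/my-release-elasticsearch-exporter 9108 &
+# print the first lines of the Prometheus metrics output
+$ curl -s http://127.0.0.1:9108/metrics | head
+```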
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete --purge my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Elasticsearch-Exporter chart and their default values.
+
+Parameter | Description | Default
+--- | --- | ---
+`replicaCount` | desired number of pods | `1`
+`restartPolicy` | container restart policy | `Always`
+`image.repository` | container image repository | `justwatch/elasticsearch_exporter`
+`image.tag` | container image tag | `1.0.2`
+`image.pullPolicy` | container image pull policy | `IfNotPresent`
+`resources` | resource requests & limits | `{}`
+`priorityClassName` | priorityClassName | `nil`
+`nodeSelector` | Node labels for pod assignment | `{}`
+`tolerations` | Node tolerations for pod assignment | `{}`
+`podAnnotations` | Pod annotations | `{}`
+`service.type` | type of service to create | `ClusterIP`
+`service.httpPort` | port for the http service | `9108`
+`service.annotations` | Annotations on the http service | `{}`
+`es.uri` | address of the Elasticsearch node to connect to | `http://external-es.logging.svc:9200`
+`es.all` | if `true`, query stats for all nodes in the cluster, rather than just the node we connect to | `true`
+`es.indices` | if `true`, query stats for all indices in the cluster | `true`
+`es.timeout` | timeout for trying to get stats from Elasticsearch | `30s`
+`es.ssl.enabled` | if `true`, a secure connection to the ES cluster is used | `false`
+`es.ssl.ca.pem` | PEM that contains trusted CAs used for setting up a secure Elasticsearch connection |
+`es.ssl.client.pem` | PEM that contains the client cert to connect to Elasticsearch |
+`es.ssl.client.key` | private key for client auth when connecting to Elasticsearch |
+`web.path` | path under which to expose metrics | `/metrics`
+`serviceMonitor.enabled` | if `true`, a ServiceMonitor CRD is created for the Prometheus Operator | `true`
+`serviceMonitor.labels` | labels for the Prometheus Operator | `{}`
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```bash
+$ helm install --name my-release \
+    --set key_1=value_1,key_2=value_2 \
+    stable/elasticsearch-exporter
+```
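+For instance, to point the exporter at a different Elasticsearch endpoint and disable per-index stats (the address below is only a placeholder):
+
+```bash
+$ helm install --name my-release \
+    --set es.uri=http://elasticsearch-client.default.svc:9200 \
+    --set es.indices=false \
+    stable/elasticsearch-exporter
+```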
+
+Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
+
+```bash
+# example for staging
+$ helm install --name my-release -f values.yaml stable/elasticsearch-exporter
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml) as a starting point.
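+
+As a sketch, a minimal override file for this chart could look like the following (the Elasticsearch address and the `prometheus` label value are placeholders):
+
+```yaml
+es:
+  ## address of the Elasticsearch node to scrape
+  uri: http://elasticsearch-client.default.svc:9200
+  indices: false
+serviceMonitor:
+  ## let the Prometheus Operator discover this exporter
+  enabled: true
+  labels:
+    prometheus: kube-prometheus
+```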

+ 99 - 0
prometheus-operator/exporter/elasticsearch-exporter/alter.rules

@@ -0,0 +1,99 @@
+ALERT Elastic_UP
+  IF elasticsearch_up{job="elasticsearch"} != 1
+  FOR 120s
+  LABELS { severity="alert", value = "{{$value}}" }
+  ANNOTATIONS {
+    summary = "Instance {{ $labels.instance }}: Elasticsearch instance status is not 1",
+    description = "This server's Elasticsearch instance status has a value of {{ $value }}.",
+  }
+
+ALERT Elastic_Cluster_Health_RED
+  IF elasticsearch_cluster_health_status{color="red"}==1
+  FOR 300s
+  LABELS { severity="alert", value = "{{$value}}" }
+  ANNOTATIONS {
+    summary = "Instance {{ $labels.instance }}: not all primary and replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}",
+    description = "Instance {{ $labels.instance }}: not all primary and replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}.",
+  }
+
+ALERT Elastic_Cluster_Health_Yellow
+  IF elasticsearch_cluster_health_status{color="yellow"}==1
+  FOR 300s
+  LABELS { severity="alert", value = "{{$value}}" }
+  ANNOTATIONS {
+    summary = "Instance {{ $labels.instance }}: not all primary and replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}",
+    description = "Instance {{ $labels.instance }}: not all primary and replica shards are allocated in elasticsearch cluster {{ $labels.cluster }}.",
+  }
+
+ALERT Elasticsearch_JVM_Heap_Too_High
+ IF elasticsearch_jvm_memory_used_bytes{area="heap"} / elasticsearch_jvm_memory_max_bytes{area="heap"} > 0.8
+ FOR 15m
+ LABELS { severity="alert", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node {{ $labels.instance }} heap usage is high",
+    description = "The heap in {{ $labels.instance }} is over 80% for 15m.",
+  }
+
+ALERT Elasticsearch_health_up
+ IF elasticsearch_cluster_health_up != 1
+ FOR 1m
+ LABELS { severity="alert", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node: {{ $labels.instance }} last scrape of the ElasticSearch cluster health failed",
+    description = "ElasticSearch node: {{ $labels.instance }} last scrape of the ElasticSearch cluster health failed",
+  }
+
+ALERT Elasticsearch_Too_Few_Nodes_Running
+  IF elasticsearch_cluster_health_number_of_nodes < 3
+  FOR 5m
+  LABELS { severity="alert", value = "{{$value}}" }
+  ANNOTATIONS {
+    description="There are only {{$value}} < 3 ElasticSearch nodes running",
+    summary="ElasticSearch running on less than 3 nodes"
+  }
+
+ALERT Elasticsearch_Count_of_JVM_GC_Runs
+ IF rate(elasticsearch_jvm_gc_collection_seconds_count{}[5m])>5
+ FOR 60s
+ LABELS { severity="warning", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node {{ $labels.instance }}: Count of JVM GC runs > 5 per sec and has a value of {{ $value }}",
+    description = "ElasticSearch node {{ $labels.instance }}: Count of JVM GC runs > 5 per sec and has a value of {{ $value }}",
+  }
+
+ALERT Elasticsearch_GC_Run_Time
+ IF rate(elasticsearch_jvm_gc_collection_seconds_sum[5m])>0.3
+ FOR 60s
+ LABELS { severity="warning", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node {{ $labels.instance }}: GC run time in seconds > 0.3 sec and has a value of {{ $value }}",
+    description = "ElasticSearch node {{ $labels.instance }}: GC run time in seconds > 0.3 sec and has a value of {{ $value }}",
+  }
+
+ALERT Elasticsearch_json_parse_failures
+ IF elasticsearch_cluster_health_json_parse_failures>0
+ FOR 60s
+ LABELS { severity="warning", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node {{ $labels.instance }}: json parse failures > 0 and has a value of {{ $value }}",
+    description = "ElasticSearch node {{ $labels.instance }}: json parse failures > 0 and has a value of {{ $value }}",
+  }
+
+
+ALERT Elasticsearch_breakers_tripped
+ IF rate(elasticsearch_breakers_tripped{}[5m])>0
+ FOR 60s
+ LABELS { severity="warning", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node {{ $labels.instance }}: breakers tripped > 0 and has a value of {{ $value }}",
+    description = "ElasticSearch node {{ $labels.instance }}: breakers tripped > 0 and has a value of {{ $value }}",
+  }
+
+ALERT Elasticsearch_health_timed_out
+ IF elasticsearch_cluster_health_timed_out>0
+ FOR 60s
+ LABELS { severity="warning", value = "{{$value}}" }
+ ANNOTATIONS {
+    summary = "ElasticSearch node {{ $labels.instance }}: Number of cluster health checks timed out > 0 and has a value of {{ $value }}",
+    description = "ElasticSearch node {{ $labels.instance }}: Number of cluster health checks timed out > 0 and has a value of {{ $value }}",
+  }

+ 15 - 0
prometheus-operator/exporter/elasticsearch-exporter/templates/NOTES.txt

@@ -0,0 +1,15 @@
+1. Get the application URL by running these commands:
+{{- if contains "NodePort" .Values.service.type }}
+  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch-exporter.fullname" . }})
+  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+  echo http://$NODE_IP:$NODE_PORT{{ .Values.web.path }}
+{{- else if contains "LoadBalancer" .Values.service.type }}
+     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+           You can watch its status by running 'kubectl get svc -w {{ template "elasticsearch-exporter.fullname" . }}'
+  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch-exporter.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+  echo http://$SERVICE_IP:{{ .Values.service.httpPort }}{{ .Values.web.path }}
+{{- else if contains "ClusterIP"  .Values.service.type }}
+  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch-exporter.fullname" . }}" -o jsonpath="{.items[0].metadata.name}")
+  echo "Visit http://127.0.0.1:{{ .Values.service.httpPort }}{{ .Values.web.path }} to use your application"
+  kubectl port-forward $POD_NAME {{ .Values.service.httpPort }}:{{ .Values.service.httpPort }} --namespace {{ .Release.Namespace }}
+{{- end }}

+ 33 - 0
prometheus-operator/exporter/elasticsearch-exporter/templates/_helpers.tpl

@@ -0,0 +1,33 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "elasticsearch-exporter.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "elasticsearch-exporter.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "elasticsearch-exporter.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+

+ 16 - 0
prometheus-operator/exporter/elasticsearch-exporter/templates/cert-secret.yaml

@@ -0,0 +1,16 @@
+{{- if .Values.es.ssl.enabled }}
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{ template "elasticsearch-exporter.fullname" . }}-cert
+  labels:
+    chart: {{ template "elasticsearch-exporter.chart" . }}
+    app: {{ template "elasticsearch-exporter.name" . }}
+    release: "{{ .Release.Name }}"
+    heritage: "{{ .Release.Service }}"
+type: Opaque
+data:
+  ca.pem: {{ .Values.es.ssl.ca.pem | b64enc }}
+  client.pem: {{ .Values.es.ssl.client.pem | b64enc }}
+  client.key: {{ .Values.es.ssl.client.key | b64enc }}
+{{- end }}

+ 107 - 0
prometheus-operator/exporter/elasticsearch-exporter/templates/deployment.yaml

@@ -0,0 +1,107 @@
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: {{ template "elasticsearch-exporter.fullname" . }}
+  labels:
+    chart: {{ template "elasticsearch-exporter.chart" . }}
+    k8s-app: {{ template "elasticsearch-exporter.name" . }}
+    release: "{{ .Release.Name }}"
+    heritage: "{{ .Release.Service }}"
+spec:
+  replicas: {{ .Values.replicaCount }}
+  selector:
+    matchLabels:
+      k8s-app: {{ template "elasticsearch-exporter.name" . }}
+      release: "{{ .Release.Name }}"
+  strategy:
+    rollingUpdate:
+      maxSurge: 1
+      maxUnavailable: 0
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        k8s-app: {{ template "elasticsearch-exporter.name" . }}
+        release: "{{ .Release.Name }}"
+      {{- if .Values.podAnnotations }}
+      annotations:
+{{ toYaml .Values.podAnnotations | indent 8 }}
+      {{- end }}
+    spec:
+{{- if .Values.priorityClassName }}
+      priorityClassName: "{{ .Values.priorityClassName }}"
+{{- end }}
+      restartPolicy: {{ .Values.restartPolicy }}
+      securityContext:
+        runAsNonRoot: true
+        runAsUser: 1000
+      containers:
+        - name: {{ .Chart.Name }}
+          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+          imagePullPolicy: {{ .Values.image.pullPolicy }}
+          command: ["elasticsearch_exporter",
+                    "-es.uri={{ .Values.es.uri }}",
+                    "-es.all={{ .Values.es.all }}",
+                    "-es.indices={{ .Values.es.indices }}",
+                    "-es.timeout={{ .Values.es.timeout }}",
+                    {{- if .Values.es.ssl.enabled }}
+                    "-es.ca=/ssl/ca.pem",
+                    "-es.client-cert=/ssl/client.pem",
+                    "-es.client-private-key=/ssl/client.key",
+                    {{- end }}
+                    "-web.listen-address=:{{ .Values.service.httpPort }}",
+                    "-web.telemetry-path={{ .Values.web.path }}"]
+          securityContext:
+            capabilities:
+              drop:
+                - SETPCAP
+                - MKNOD
+                - AUDIT_WRITE
+                - CHOWN
+                - NET_RAW
+                - DAC_OVERRIDE
+                - FOWNER
+                - FSETID
+                - KILL
+                - SETGID
+                - SETUID
+                - NET_BIND_SERVICE
+                - SYS_CHROOT
+                - SETFCAP
+            readOnlyRootFilesystem: true
+          resources:
+{{ toYaml .Values.resources | indent 12 }}
+          ports:
+            - containerPort: {{ .Values.service.httpPort }}
+              name: http
+          livenessProbe:
+            httpGet:
+              path: /health
+              port: http
+            initialDelaySeconds: 30
+            timeoutSeconds: 10
+          readinessProbe:
+            httpGet:
+              path: /health
+              port: http
+            initialDelaySeconds: 10
+            timeoutSeconds: 10
+          volumeMounts:
+            {{- if .Values.es.ssl.enabled }}
+            - mountPath: /ssl
+              name: ssl
+            {{- end }}
+{{- if .Values.nodeSelector }}
+      nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+{{- end }}
+{{- if .Values.tolerations }}
+      tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+{{- end }}
+      volumes:
+        {{- if .Values.es.ssl.enabled }}
+        - name: ssl
+          secret:
+            secretName: {{ template "elasticsearch-exporter.fullname" . }}-cert
+        {{- end }}

+ 22 - 0
prometheus-operator/exporter/elasticsearch-exporter/templates/service.yaml

@@ -0,0 +1,22 @@
+kind: Service
+apiVersion: v1
+metadata:
+  name: {{ template "elasticsearch-exporter.fullname" . }}
+  labels:
+    chart: {{ template "elasticsearch-exporter.chart" . }}
+    k8s-app: {{ template "elasticsearch-exporter.name" . }}
+    release: "{{ .Release.Name }}"
+    heritage: "{{ .Release.Service }}"
+{{- if .Values.service.annotations }}
+  annotations:
+{{ toYaml .Values.service.annotations | indent 4 }}
+{{- end }}
+spec:
+  type: {{ .Values.service.type }}
+  ports:
+    - name: http
+      port: {{ .Values.service.httpPort }}
+      protocol: TCP
+  selector:
+    k8s-app: {{ template "elasticsearch-exporter.name" . }}
+    release: "{{ .Release.Name }}"

+ 30 - 0
prometheus-operator/exporter/elasticsearch-exporter/templates/servicemonitor.yaml

@@ -0,0 +1,30 @@
+{{- if .Values.serviceMonitor.enabled }}
+---
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: {{ template "elasticsearch-exporter.fullname" . }}
+  labels:
+    chart: {{ template "elasticsearch-exporter.chart" . }}
+    k8s-app: {{ template "elasticsearch-exporter.name" . }}
+    release: "{{ .Release.Name }}"
+    heritage: "{{ .Release.Service }}"
+    {{- if .Values.serviceMonitor.labels }}
+    {{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
+    {{- end }}
+spec:
+  endpoints:
+  - interval: 10s
+    honorLabels: true
+    port: http
+    path: {{ .Values.web.path }}
+    scheme: http
+  jobLabel: "{{ .Release.Name }}"
+  selector:
+    matchLabels:
+      k8s-app: {{ template "elasticsearch-exporter.name" . }}
+      release: "{{ .Release.Name }}"
+  namespaceSelector:
+    matchNames:
+      - {{ .Release.Namespace }}
+{{- end }}

+ 87 - 0
prometheus-operator/exporter/elasticsearch-exporter/values.yaml

@@ -0,0 +1,87 @@
+## number of exporter instances
+##
+replicaCount: 1
+
+## restart policy for all containers
+##
+restartPolicy: Always
+
+image:
+  repository: justwatch/elasticsearch_exporter
+  tag: 1.0.2
+  pullPolicy: IfNotPresent
+
+resources: {}
+  # requests:
+  #   cpu: 100m
+  #   memory: 128Mi
+  # limits:
+  #   cpu: 100m
+  #   memory: 128Mi
+
+priorityClassName: ""
+
+nodeSelector: {}
+
+tolerations: {}
+
+podAnnotations: {}
+
+service:
+  type: ClusterIP
+  httpPort: 9108
+  annotations: {}
+
+es:
+  ## Address (host and port) of the Elasticsearch node we should connect to.
+  ## This could be a local node (localhost:9200, for instance), or the address
+  ## of a remote Elasticsearch server. When basic auth is needed,
+  ## specify as: <proto>://<user>:<password>@<host>:<port>. e.g., http://admin:pass@localhost:9200.
+  ##
+  uri: http://external-es.logging.svc:9200
+
+  ## If true, query stats for all nodes in the cluster, rather than just the
+  ## node we connect to.
+  ##
+  all: true
+
+  ## If true, query stats for all indices in the cluster.
+  ##
+  indices: true
+
+  ## Timeout for trying to get stats from Elasticsearch. (ex: 20s)
+  ##
+  timeout: 30s
+
+  ssl:
+    ## If true, a secure connection to ES cluster is used (requires SSL certs below)
+    ##
+    enabled: false
+
+    ca:
+
+      ## PEM that contains trusted CAs used for setting up secure Elasticsearch connection
+      ##
+      # pem:
+
+    client:
+
+      ## PEM that contains the client cert to connect to Elasticsearch.
+      ##
+      # pem:
+
+      ## Private key for client auth when connecting to Elasticsearch
+      ##
+      # key:
+
+web:
+  ## Path under which to expose metrics.
+  ##
+  path: /metrics
+
+serviceMonitor:
+  ## If true, a ServiceMonitor CRD is created for a prometheus operator
+  ## https://github.com/coreos/prometheus-operator
+  ##
+  enabled: true
+  labels: {}