

The Pelorus Core configuration applies to Prometheus, Grafana, Thanos and other operational aspects of the Pelorus stack.

These configuration options are used in the Pelorus configuration object YAML file to create a Pelorus application instance.

Each Pelorus Core configuration option must be placed under spec in the Pelorus configuration object YAML file, as in the example:

kind: Pelorus
apiVersion: charts.pelorus.dev/v1alpha1
metadata:
  name: pelorus-pelorus-instance
  namespace: pelorus
spec:
  exporters:
    [...] # Pelorus exporters configuration options
  # Pelorus Core configuration options

For the configuration options for exporters, check the exporters configuration guide.


An example configuration part of the Pelorus object YAML file, with some non-default options:

kind: Pelorus
apiVersion: charts.pelorus.dev/v1alpha1
metadata:
  name: pelorus-instance
  namespace: pelorus
spec:
  exporters:
    [...] # Pelorus exporters configuration options
  openshift_prometheus_basic_auth_pass: mysecretpassword
  openshift_prometheus_htpasswd_auth: 'internal:{SHA}CM2SM2eJAAllfquBJ1M3m9syHus='
  prometheus_retention: 500d
  prometheus_retention_size: 2GB
  prometheus_storage: true
  prometheus_storage_pvc_capacity: 3Gi
  prometheus_storage_pvc_storageclass: mystorageclass

List of all configuration options

| Variable | Required | Default Value |
|----------|----------|---------------|
| [exporters] | yes | - |
| prometheus_retention_size | no | 1GB |
| prometheus_retention | no | 1y |
| prometheus_storage | no | false |
| prometheus_storage_pvc_capacity | no | 2Gi |
| prometheus_storage_pvc_storageclass | no | gp2 |
| openshift_prometheus_htpasswd_auth | no | internal:{SHA}+pvrmeQCmtWmYVOZ57uuITVghrM= |
| openshift_prometheus_basic_auth_pass | no | changeme |
| [extra_prometheus_hosts] | no | - |
| thanos_version | no | v0.28.0 |
| bucket_access_point | no | - |
| bucket_access_key | no | - |
| bucket_secret_access_key | no | - |
| thanos_bucket_name | no | thanos |
| custom_ca | no | - |


Prometheus

Pelorus allows configuring a few aspects of Prometheus, which is deployed via the Prometheus Operator available from the OLM dependency mechanism.

For detailed information about planning Prometheus storage capacity and configuration options, please refer to the operational aspects section of the Prometheus documentation.

Prometheus Data Retention

prometheus_retention_size

  • Required: no
  • Default Value: 1GB
  • Type: string

The maximum size of storage to be used by Prometheus. The oldest data is removed first once that limit is exceeded; data still within the retention time but over the retention size is also removed.
Units supported: MB, GB, TB, PB, EB
prometheus_retention

  • Required: no
  • Default Value: 1y
  • Type: string

Prometheus removes data older than 1 year, so if the metric you are interested in is older than 1 year it will not be visible.
Units supported: d, y
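For example, to keep at most 30 days of data capped at 5GB, whichever limit is hit first (values here are illustrative):

```yaml
spec:
  prometheus_retention: 30d
  prometheus_retention_size: 5GB
```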

Prometheus Persistent Volume

Unlike an ephemeral volume, which has the lifetime of a pod, a persistent volume withstands container restarts or crashes, making Prometheus data resilient to such situations. Pelorus uses the underlying Prometheus Operator storage capabilities via a Kubernetes StorageClass.

It is recommended to use a Prometheus Persistent Volume together with Thanos for long-term storage.

prometheus_storage

  • Required: no
  • Default Value: false
  • Type: boolean

Controls whether Prometheus should use a persistent volume. If set to true, a PersistentVolumeClaim will be created.
prometheus_storage_pvc_capacity

  • Required: no
  • Default Value: 2Gi
  • Type: string

The amount of storage available to the PVC.
Units supported: as documented in the Kubernetes Quantity API
prometheus_storage_pvc_storageclass

  • Required: no
  • Default Value: gp2
  • Type: string

The StorageClass name to be used for the PersistentVolumeClaim.

Prometheus credentials

openshift_prometheus_htpasswd_auth

  • Required: no
  • Default Value: internal:{SHA}+pvrmeQCmtWmYVOZ57uuITVghrM=
  • Type: string

Credentials for the internal user, used by Grafana to communicate with the Prometheus instance deployed by Pelorus. These credentials must use the internal user name, and the password must match the openshift_prometheus_basic_auth_pass password from the Grafana credentials configuration option.
Format supported: Base64-encoded SHA-1
Note: To generate a new password for the internal user, you may invoke the htpasswd CLI as in the example:
$ htpasswd -nbs internal <my-secret-password>
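The {SHA} htpasswd scheme is simply the Base64-encoded SHA-1 digest of the password. As a rough sketch of what `htpasswd -nbs` produces (a minimal illustration, not a replacement for the htpasswd CLI):

```python
import base64
import hashlib


def htpasswd_sha(user: str, password: str) -> str:
    """Build an htpasswd entry using the {SHA} scheme:
    user:{SHA} followed by the Base64-encoded SHA-1 digest of the password."""
    digest = base64.b64encode(
        hashlib.sha1(password.encode("utf-8")).digest()
    ).decode("ascii")
    return f"{user}:{{SHA}}{digest}"


# Example with a placeholder password; the digest part is always 28
# Base64 characters, since SHA-1 digests are 20 bytes long.
print(htpasswd_sha("internal", "mysecretpassword"))
```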

Multiple Prometheus

By default, Pelorus gathers data from the Prometheus instance deployed in the same cluster in which it is running. To collect data across multiple OpenShift clusters, additional Prometheus hosts have to be configured. To do this, the extra_prometheus_hosts configuration option is used.

extra_prometheus_hosts

  • Required: no
  • Type: list

A list that consists of three configuration items per additional Prometheus host:
  • id - a description of the Prometheus host (used as a label to select metrics in the federated instance).
  • hostname - the fully qualified domain name or IP address of the host with the extra Prometheus instance.
  • password - the password used for the internal basic auth account (provided by the k8s metrics Prometheus instances in a secret).


- id: "ci-1"
  hostname: ""
  password: "<redacted>"

- id: "ci-2"
  hostname: ""
  password: "<redacted>"
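In the Pelorus configuration object, these entries sit under spec. A sketch of the full structure (the hostnames below are hypothetical placeholders, not real endpoints):

```yaml
spec:
  extra_prometheus_hosts:
    - id: "ci-1"
      hostname: "prometheus-ci-1.example.com"   # hypothetical hostname
      password: "<redacted>"
    - id: "ci-2"
      hostname: "prometheus-ci-2.example.com"   # hypothetical hostname
      password: "<redacted>"
```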


Grafana

Grafana is a dashboard that presents the data stored in Prometheus. It is deployed via the Grafana Operator available from the OLM dependency mechanism.

Grafana credentials

openshift_prometheus_basic_auth_pass

  • Required: no
  • Default Value: changeme
  • Type: string

The password that Grafana will use for its Prometheus datasource. Must match the password encoded in openshift_prometheus_htpasswd_auth.


Thanos

The Pelorus chart supports deploying a Thanos instance for long-term storage. If you don't have an object storage provider, we recommend NooBaa as a free, open source option. See NooBaa for Long Term Storage for a guide on how to host an instance on OpenShift and configure Pelorus to use it.

thanos_version

  • Required: no
  • Default Value: v0.28.0
  • Type: string

The version of the official Thanos image to use.
bucket_access_point

  • Required: no
  • Type: string

The S3 named network endpoint used to perform S3 object operations.
bucket_access_key

  • Required: no
  • Type: string

The S3 Access Key ID.
bucket_secret_access_key

  • Required: no
  • Type: string

The S3 Secret Access Key.
thanos_bucket_name

  • Required: no
  • Default Value: thanos
  • Type: string

The S3 bucket name.
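Putting the Thanos options together, a sketch of the object-storage part of the spec (the endpoint and keys below are placeholders to be replaced with your provider's values):

```yaml
spec:
  thanos_version: v0.28.0
  bucket_access_point: s3.example.com        # placeholder S3 endpoint
  bucket_access_key: <access-key-id>         # placeholder
  bucket_secret_access_key: <secret-key>     # placeholder
  thanos_bucket_name: thanos
```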

Custom PKI

custom_ca

  • Required: no
  • Type: 'true' string or commented out for 'false'

Whether or not the cluster serves custom-signed certificates for ingress (e.g. router certs). If set to 'true', the custom certificates will be loaded via the certificate injection method.
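For example, on a cluster with custom-signed ingress certificates (note the quoted string, per the type above):

```yaml
spec:
  custom_ca: "true"
```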

Deploying Across Multiple Clusters

By default, Pelorus will pull in data from the cluster in which it is running, but it also supports collecting data across multiple OpenShift clusters. To do this, the Thanos sidecar can be configured to read from a shared S3 bucket across clusters. See Pelorus Multi-Cluster Architecture for details. You define exporters for the desired metrics in each of the clusters, and the main cluster's Grafana dashboard will display a combined view of the metrics collected in the shared S3 bucket via Thanos.