Since version 5.0, Grafana includes a new feature: provisioning. It lets you describe your datasources and dashboards in YAML files. Every time Grafana restarts, it reads these files and sets the current configuration to the values listed in the YAML files found in the datasources/ and dashboards/ subdirectories of the provisioning folder.

Our historical way of updating and maintaining datasources was the HTTP API coupled with Ansible for automation. It was getting really slow as the number of entries grew, and we were also getting errors because our SQLite database had become pretty big. The configuration was saved in a grafana.yml file that was over 1,600 lines long (yikes!), which didn't help with readability. This led us to give provisioning a try (and do a massive code refactoring).

Setting up provisioning:

  1. grafana.ini
    Make sure the provisioning setting is not commented out and points to your desired folder. Be aware that Grafana will use this as a base path only: it will then look for your datasources.yaml in the datasources/ subdirectory.

And that's about it for the configuration part!
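As a sketch, the relevant part of grafana.ini looks like this (conf/provisioning is Grafana's default value; adjust the path to wherever you want to keep the files):

```ini
[paths]
# Base directory that Grafana scans for provisioning config.
# Grafana will read datasources/*.yaml and dashboards/*.yaml below it.
provisioning = conf/provisioning
```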

Setting up Ansible for automation:

  1. grafana.yml

Here, we define the attributes of our clusters such as:

  • URL of the cluster
  • Password of the user

We can then delve into the configuration of each individual datasource by adding its properties.

    grafana_datasources:
      cluster_1:
        url: "{{ elasticsearch_cluster_1_url }}:9200"
        user: "{{ grafana_cluster_1_user }}"
        pass: "{{ grafana_password_cluster_1 }}"
        datasources:
          - name: datasource_1
            index: "[metrics_datasource_1_]YYYY.MM.DD"
            interval: Daily
            group_by_time: "1m"
            org_id: 1
            max_concurrent_shard_requests: 21
      cluster_2:
        url: "{{ elasticsearch_cluster_2_url }}:9200"
        user: "{{ grafana_cluster_2_user }}"
        pass: "{{ grafana_password_cluster_2 }}"
        datasources:
          - name: metrics_inf_ucs_rin
            index: "[metrics_c1_inf_ucs_ido_]YYYY.MM"
            interval: Monthly
            group_by_time: "5m"

  2. datasources.yaml.j2

This is where all the magic happens!
Our example configuration file is specifically tailored to manage Elasticsearch datasources across several clusters. Some of the more complex constructs can largely be avoided depending on your situation (you don't need a double loop if you only have one cluster, for example).

# config file version
apiVersion: 1

# list of datasources to insert/update depending
# on what's available in the database
datasources:
{% for id, cluster in grafana_datasources.items() %}
{% for datasource in cluster.datasources %}
  - name: "{{ datasource.name }}"
    type: elasticsearch
    access: proxy
    url: "{{ cluster.url }}"
    basicAuth: true
    basicAuthUser: "{{ cluster.user }}"
    basicAuthPassword: "{{ cluster.pass }}"
    database: "{{ datasource.index }}"
    orgId: {{ datasource.get('org_id', 1) }}
    jsonData:
      interval: "{{ datasource.get('interval', '') }}"
      timeInterval: "{{ datasource.group_by_time }}"
      timeField: "{{ datasource.get('time_field', '@timestamp_second') }}"
      esVersion: 56
      maxConcurrentShardRequests: "{{ datasource.get('max_concurrent_shard_requests', 42) }}"
      tlsAuth: true
      tlsAuthWithCACert: true
    secureJsonData:
      tlsCACert: |
        {{ grafana_tls_ca_cert | indent(8) }}
      tlsClientCert: |
        {{ tls_certificate_cert | indent(8) }}
      tlsClientKey: |
        {{ tls_certificate_key | indent(8) }}
    editable: true
{% endfor %}
{% endfor %}
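To make the result concrete, here is roughly what the rendered datasources.yaml would contain for the first datasource of the first cluster (hostnames and credentials are placeholders, and the TLS certificate blocks are elided):

```yaml
apiVersion: 1

datasources:
  - name: datasource_1
    type: elasticsearch
    access: proxy
    url: "https://es-cluster-1.example.com:9200"   # placeholder hostname
    basicAuth: true
    basicAuthUser: "grafana"                       # placeholder user
    basicAuthPassword: "********"
    database: "[metrics_datasource_1_]YYYY.MM.DD"
    orgId: 1
    jsonData:
      interval: "Daily"
      timeInterval: "1m"
      timeField: "@timestamp_second"
      esVersion: 56
      maxConcurrentShardRequests: 21
    editable: true
```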

Once you have those two files created and edited to match your configuration, all that is left is to drop the rendered file into the correct Grafana folder using the template module, with a task that should look a little like this:

  - name: Deploy the datasources file
    template:
      src: "templates/datasources.yaml.j2"
      dest: "{{ grafana_provisioning_dir }}/datasources/datasources.yaml"

Grafana will now load the datasources on each restart. Depending on the number of datasources, it might take a little while for your Grafana instance to come back up.
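If you manage the Grafana service with Ansible as well, a simple handler can take care of the restart (grafana-server is the service name used by the official packages; adjust it if yours differs):

```yaml
- name: restart grafana
  service:
    name: grafana-server
    state: restarted
```

You can then attach "notify: restart grafana" to the template task above, so Grafana is only restarted when the rendered file actually changes.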

With around 150 datasources and a datasources.yaml file that is almost 7 MB, it takes Grafana around 15 seconds to restart. Additionally, a major refactoring of our grafana.yml file allowed us to remove half the lines! (Now under the 800-line mark... still big, but a lot more manageable.)

Thanks for reading, and I hope this article helps you manage your Grafana datasources more efficiently.