
Prometheus Extension 2.0 404 error

a2
Participant

Hello,

I created a local Prometheus extension, and from the server itself I am able to query the Prometheus server and get results using the /api/v1/query endpoint. When I use that endpoint in my config it returns a 404, even though the documentation says it is supported. The log also shows an error from the EEC, which is enabled at both the host and environment level. Has anyone come across this before? Maybe it's something in the extension config?

[ds:prometheus] [severe] [EEC_CONNECTION_ERROR] (http://localhost:9090/api/v1/query/): Server returned response code 404 from  [status code=30]
 
monitoring_config:
[
  {
    "value": {
      "enabled": true,
      "description": "monitoring_config",
      "version": "1.0.1",
      "activationContext": "LOCAL",
      "prometheusLocal": {
        "endpoints": [
          {
            "url": "http://localhost:9090/api/v1/query",
            "authentication": {
              "scheme": "none",
              "skipVerifyHttps": true
            }
          }
        ]
      }
    },
    "scope": "HOST01"
  }
]
 
extension_config:
 
name: custom:prometheus-confluent
version: 1.0.1
minDynatraceVersion: "1.236"
author:
  name: ccloud-exporter

prometheus:
  - group: confluent kafka metrics
    interval: 1m
    dimensions:
      - key: confluent
        value: kafka
    subgroups:
      - subgroup: Connections
        metrics:
          - key: custom.prometheus-confluent.confluent_kafka_server_active_connection_count
            value: metric:confluent_kafka_server_active_connection_count
            type: gauge

metrics:
  - key: custom.prometheus-confluent.confluent_kafka_server_active_connection_count
    metadata:
      displayName: confluent kafka server active connection count
      description: The count of active authenticated connections.
      unit: Count

JamesKitson
Dynatrace Guru

I haven't tried this before, but I set up a local instance very quickly to see what it does when you use that endpoint and I think there may be a clue a little further down in the logs:

 

[screenshot: JamesKitson_1-1649879264660.png]

 

[prometheus-8db6a00c-ea7b-3066-9308-3da27177abe8][10312][out]|[2022-04-13 19:40:19.087Z] [ds:prometheus] [info] [OK] Sending event message: {"endpoint":"http://localhost:9090/api/v1/query/","message":"There are no queryMetrics defined in the extension. Nothing to query Prometheus API for.","timestamp":""}

 

It is not in the online documentation, but it is in the downloadable schemas: it looks like you need to define queryMetrics to use an endpoint like that. This makes sense, as in that case the query would be appended to the URL and it wouldn't 404.

 

I have not tried this myself, but I will update this thread if I get a chance to test it before someone who has responds.

 

    "prometheus_query_metric" : {
      "type" : "object",
      "additionalProperties" : false,
      "required" : [ "key" ],
      "properties" : {
        "key" : {
          "type" : "string",
          "description" : "generic definition of multiple metrics"
        },
        "featureSet" : {
          "$ref" : "extension.schema.json#/definitions/featureSet"
        },
        "interval" : {
          "$ref" : "extension.schema.json#/definitions/interval"
        }
      }
    }
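Reading that schema, only key is required (featureSet and interval are optional), and value/type are not listed as allowed properties. So a minimal queryMetrics block would presumably look something like the sketch below — I haven't tested this myself:

prometheus:
  - group: confluent kafka metrics
    interval: 1m
    queryMetrics:
      # Only "key" is required per the schema above; "value" and "type" are not
      # listed there, so including them may be rejected on upload.
      - key: confluent_kafka_server_active_connection_count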

 

[screenshot: JamesKitson_2-1649879443687.png]

 

 

I am trying the queryMetrics approach and have slimmed my file down to just this one metric for testing, but I get an error when I try to upload the extension. This is my config and the error:

 

name: custom:prometheus-confluent
version: 1.0.2
minDynatraceVersion: "1.236"
author:
  name: ccloud-exporter

prometheus:
  - group: confluent kafka metrics
    interval: 1m
    dimensions:
      - key: confluent
        value: kafka
    queryMetrics:
          - key: custom.prometheus-confluent.confluent_kafka_server_active_connection_count
            value: metric:confluent_kafka_server_active_connection_count
            type: gauge
 
[screenshot: a2_0-1649942393043.png]

 

a2
Participant

I tried it without the value and am still getting the 404 and the "metrics undefined" message. This is my latest config:

 

name: custom:prometheus-confluent
version: 1.0.5
minDynatraceVersion: "1.236"
author:
  name: ccloud-exporter

prometheus:
  - group: confluent kafka metrics
    interval:
      minutes: 1
    dimensions:
      - key: confluent
        value: kafka
    queryMetrics:
      - key: confluent_kafka_server_active_connection_count

I'm still trying to track down how it would be used. I will update as soon as I have something working.

JamesKitson
Dynatrace Guru

OK, I think I have a better understanding of this now. The message I was seeing was just an informational log entry, and in this case queryMetrics are not required (you may be using an older schema, hence the error you saw afterwards, as the variable names changed in more recent versions).

Regardless, you may be able to resolve your original error simply by removing 'query' from the URL you enter in the configuration. Just .../api/v1/ is enough; Dynatrace will append query?query=... when it runs. Putting query in the URL seems to break it.

[screenshot: JamesKitson_0-1649972851303.png]
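Applied to the monitoring configuration from the original post, that would just mean trimming the endpoint URL — everything else unchanged, roughly:

[
  {
    "value": {
      "enabled": true,
      "description": "monitoring_config",
      "version": "1.0.1",
      "activationContext": "LOCAL",
      "prometheusLocal": {
        "endpoints": [
          {
            "url": "http://localhost:9090/api/v1/",
            "authentication": {
              "scheme": "none",
              "skipVerifyHttps": true
            }
          }
        ]
      }
    },
    "scope": "HOST01"
  }
]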

This is what I have working now (with the above URL configured):

name: custom:prometheus-server-api-test
minDynatraceVersion: "1.238"
version: 0.0.17
author:
  name: "Me"

metrics:
  - key: new_prometheus_go_gc_cycles_automatic_gc_cycles_total
    metadata:
      displayName: "New Prometheus GO GC Cycles"
      unit: Count

prometheus:
  - group: testing
    featureSet: testing
    interval:
      minutes: 1
    metrics:
      - key: new_prometheus_go_gc_cycles_automatic_gc_cycles_total
        value: metric:go_gc_cycles_automatic_gc_cycles_total

 

James, I switched my config back to include all the metrics I am trying to get, and I no longer get the 404 after putting in just the api/v1 endpoint, but it still states no metrics are defined. I have no issue hitting http://localhost:9090/api/v1/metadata, /targets, or /query?query=... on the server. Attached is the log.

The message about no queryMetrics being defined was what I focused on at first as well, but I found it is merely informational and it should be able to collect the specific metrics that are defined in the metrics section (like in my example).

[prometheus-58f5d752-87b4-39e5-b68e-eded6b0923c0][38696][out]|[2022-04-15 02:04:15.488Z] [ds:prometheus] [info] [OK] Sending event message: {"endpoint":"http://localhost:9090/api/v1/","message":"1 items returned from Prometheus API.","timestamp":""}
[prometheus-58f5d752-87b4-39e5-b68e-eded6b0923c0][38696][out]|[2022-04-15 02:04:15.597Z] [ds:prometheus] [info] [OK] Sending event message: {"endpoint":"http://localhost:9090/api/v1/","message":"There are no queryMetrics defined in the extension. Nothing to query Prometheus API for.","timestamp":""}

 

The important thing to look for is the "X items returned from Prometheus API" message, which in my case is the 1 metric I had configured and in yours is 0, which I can only interpret as the query not returning any values for that metric.
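If you want to rule the extension out, you could run the same instant query by hand against your server and check whether that metric has a value right now — using one of the metric keys from your config, for example:

# Instant query for one of the Confluent metrics; an empty "result" array here
# would line up with the "0 items returned from Prometheus API" log message.
curl -s 'http://localhost:9090/api/v1/query?query=confluent_kafka_server_active_connection_count'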

If you're able to see this working with your metrics:

[screenshot: JamesKitson_0-1649988657118.png]

 

I'm not sure what else can be looked at apart from double-checking everything. My environment is on a newer version than what you would be using, but I can't say whether this is a version issue that has already been fixed.

The only difference I see between our setups is that mine has a trailing forward slash in the URL. I don't think that should break it, and in fact I've seen that it is supposed to be added explicitly, but it is something to try.

I had been testing with a remote activation, but I also tried local now and it behaves the same, collecting my single metric.

 

 

I added the metric "up" and can see it coming through, and I noticed that the Confluent datapoints are anywhere from 30s to 1m behind. So if I query a Confluent metric at the current time I get nothing, but if I query it for 30s ago I get a datapoint. How can I account for that? The interval doesn't help, as it will still query the current time. Basically I need to tell it to run the query with -30s.
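For reference, what I mean by "-30s" at the PromQL level is an offset modifier on the instant query, e.g.:

# The PromQL "offset" modifier shifts the instant query 30s into the past,
# which is where the Confluent exporter's datapoints actually are.
confluent_kafka_server_active_connection_count offset 30s

Whether the extension's value: metric:... selector or any other setting accepts an offset like this isn't clear from the schema, so this only illustrates the behaviour at the Prometheus level.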
