
Kafka metrics via hub extensions

vpasinelli
Observer

Hi,
We have installed the Confluent Kafka and Apache Kafka extensions in our environment via the Hub. Among the metrics we need from Confluent Kafka are the schema metrics, but we get "Metric confluent_kafka_schema_registry_schema_count.gauge not found" and "Metric confluent_kafka_schema_registry_request_gauge not found"; in general, all the schema-related metrics are missing.
Reading the documentation, I checked with our colleagues who manage the Kafka hosts, and they confirmed that all prerequisites have been met.
Also, could you give us some guidance on which metrics to observe to check the status of Kafka tasks?

Thank you very much


DavidMass
Dynatrace Mentor

Hi @vpasinelli , 
That metric comes from the Confluent Cloud extension, in the Schema Registry Metrics feature set:
https://www.dynatrace.com/hub/detail/confluent-cloud-kafka 

[Screenshot: Schema Registry Metrics feature set in the Confluent Cloud extension]

Are you using Confluent Cloud, and have you added the Schema Registry ID(s) to the Public Confluent Export API URL?

[Screenshot: extension configuration showing the Public Confluent Export API URL]

 

vpasinelli
Observer

Thank you very much, but we were mistaken: our Confluent deployment does not run in the cloud but on our own machines. So the idea that would suit us best is to retrieve the metrics through metric ingestion, but from the documentation it is not clear to me whether I can make the call from Confluent and then push the results to Dynatrace. Thank you very much.
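For reference, something like this is what I have in mind: a rough sketch of pushing a value we have already read from our on-prem Schema Registry into the Dynatrace Metrics API v2 line protocol. The environment URL, token, metric key, and dimension below are all placeholders, not an official integration:

```python
import requests

# Placeholders - substitute your own environment URL and an API token
# that has the "metrics.ingest" scope.
DT_ENV_URL = "https://abc12345.live.dynatrace.com"
DT_API_TOKEN = "dt0c01.XXXX..."

# Hypothetical value, e.g. read beforehand from the on-prem Schema Registry.
schema_count = 42

# Dynatrace metric ingestion accepts a plain-text line protocol:
#   <metric key>,<dimensions> <value>
payload = f"custom.kafka.schema_registry.schema_count,registry=onprem {schema_count}"

resp = requests.post(
    f"{DT_ENV_URL}/api/v2/metrics/ingest",
    headers={
        "Authorization": f"Api-Token {DT_API_TOKEN}",
        "Content-Type": "text/plain; charset=utf-8",
    },
    data=payload,
)
resp.raise_for_status()  # expect HTTP 202 when the line is accepted
print(resp.status_code, resp.text)
```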

PierreGutierrez
Dynatrace Pro

Hi @vpasinelli 
If you are already seeing the message "Metric confluent_kafka_schema_registry_schema_count.gauge not found", it means that you have already configured the Confluent Cloud (Kafka) extension.

I suggest you check carefully:
- That you have an ActiveGate on which you can install extensions and that has unrestricted internet access
- The Schema Registry ID
- The full URL entered
- The credentials entered

Example URL:

https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?resource.schema_registry.id=xxxx-xxxxxx

Replace "xxxx-xxxxxx" with a value like "lsrc-7yd7zx".
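If you want to rule out a credentials or URL problem first, you can test the endpoint directly before saving the configuration. A rough sketch: the API key, secret, and registry ID below are placeholders, and I am assuming the endpoint accepts Basic auth with your Confluent Cloud API key and secret, as the extension uses:

```python
import requests

# Placeholders - use your own Confluent Cloud API key/secret
# and your Schema Registry ID.
API_KEY = "XXXXXXXX"
API_SECRET = "YYYYYYYY"
REGISTRY_ID = "lsrc-7yd7zx"

resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    params={"resource.schema_registry.id": REGISTRY_ID},
    auth=(API_KEY, API_SECRET),  # Basic auth
)
print(resp.status_code)
# On success the body should contain Prometheus-style lines such as
# confluent_kafka_schema_registry_schema_count{...} <value>
print(resp.text[:500])
```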

It is common for there to be an error in that part of the configuration. You can also validate that communication is working from the "Monitoring configurations" section.

[Screenshot: "Monitoring configurations" section of the extension]

If you have already validated everything, you can identify any possible errors by clicking on the "Status" value.
In the example the status reads "Error", and clicking it takes you to the log information ingested in Dynatrace with the details of what happened.

[Screenshot: monitoring configuration with status "Error", linking to the ingested log details]

When you have everything configured and working, you should see metrics like the following in the Kafka Dashboard Template:

[Screenshot: Kafka Dashboard Template populated with Confluent metrics]
---------------------------------------------------------------------------------------------------
I think two of the most important metrics for Confluent Kafka Cloud monitoring are:

[Screenshot: two key Confluent Kafka Cloud metrics]


I hope it's helpful 💪

Pierre Gutierrez - LATAM ACE Consultant - Loving Cats! Loving Technology !

vpasinelli
Observer

Thank you so much for your answers. Unfortunately, I have been informed that our Confluent deployment is on-prem, not in the cloud, so Confluent Cloud is not the right extension for us, and the Apache Kafka extension does not include that metric. How could we recover it (schema and task status)?

Hi @vpasinelli
I understand what you're saying; I did some checking.
The metrics you get from the integration with the extension are specific to the Confluent Kafka Cloud platform, exposed through the "Confluent metric export API" ( https://api.telemetry.confluent.cloud/docs#tag/Version-2/paths/~1v2~1metrics~1%7Bdataset%7D~1export/... ), and it appears that those APIs are only available for cloud and cloud-custom environments.

[Screenshot: Confluent export API documentation noting support for cloud and cloud-custom environments]

That may be the reason why metrics do not arrive.

-----------------------------------------------------------------

I have never tested this scenario, but because Confluent Kafka is based on Apache Kafka, perhaps installing an agent on the server will give you visibility into the metrics through the Apache Kafka extension.
(https://www.dynatrace.com/hub/detail/apache-kafka/?query=apache+kafka&filter=all )

Another idea is to use the Confluent Kafka APIs on-premises, connect them to Prometheus, and expose the metrics to Dynatrace.
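As a very rough, untested sketch of that relay idea: read the Prometheus text exposition from an on-prem endpoint and forward the values as Dynatrace line protocol. All endpoints and the token below are placeholders, and I am assuming the on-prem side exposes Prometheus text format (e.g. via a JMX exporter):

```python
import requests

# Placeholders for illustration only.
PROM_ENDPOINT = "http://schema-registry.internal:7778/metrics"  # e.g. a JMX exporter
DT_INGEST = "https://abc12345.live.dynatrace.com/api/v2/metrics/ingest"
DT_API_TOKEN = "dt0c01.XXXX..."

# Pull the Prometheus text exposition from the on-prem exporter.
prom_text = requests.get(PROM_ENDPOINT, timeout=10).text

# Convert only the simplest lines ("name value") to Dynatrace line protocol,
# skipping comments and labeled series to keep the sketch short.
lines = []
for line in prom_text.splitlines():
    if line.startswith("#") or "{" in line:
        continue
    parts = line.split()
    if len(parts) == 2:
        name, value = parts
        lines.append(f"custom.{name} {value}")

resp = requests.post(
    DT_INGEST,
    headers={
        "Authorization": f"Api-Token {DT_API_TOKEN}",
        "Content-Type": "text/plain; charset=utf-8",
    },
    data="\n".join(lines),
)
print(resp.status_code, resp.text)
```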

These are just ideas; I have not validated them.

I hope someone can help you with more specific detail 💪

Pierre Gutierrez - LATAM ACE Consultant - Loving Cats! Loving Technology !
