12 Oct 2022 03:50 PM - last edited on 16 May 2023 09:07 AM by Michal_Gebacki
Hello,
I want to collect metrics through the JMX plugin, but split them across more than one dimension:
{
"timeseries": {
"key": "metric_put_batch_avg_time_ms_",
"unit": "MilliSecond",
"displayname": "KafkaConnect-put-batch-avg-time-ms",
"dimensions": [
"rx_pid",
"task"
]
},
"source": {
"domain": "kafka.connect",
"keyProperties": {
"type": "sink-task-metrics",
"task": "*"
},
"attribute": "put-batch-avg-time-ms",
"allowAdditionalKeys": true,
"calculateDelta": false,
"calculateRate": false,
"aggregation": "AVG",
"splitting": {
"name": "task",
"type": "keyProperty",
"keyProperty": "task"
}
}
}
In this example, the "task" dimension works fine, but I want to add another dimension. Adding it to the dimensions list is straightforward:
"dimensions": [
"rx_pid",
"task",
"connector"
]
But how do I add it to the splitting field? Is there something else to add?
Thanks.
12 Oct 2022 04:36 PM
Maybe this is not a direct answer to your question, but I have checked a JMX metric with three dimensions in one of our environments, through both the Timeseries API and the Metrics API. See the results below; maybe they help.
Timeseries API v1
{
"timeseriesId": "custom.jmx.PendingRequestCount_V21663771262394:metric_PendingRequestCount_1663771256883",
"displayName": "PendingRequestCount",
"dimensions": [
"PROCESS_GROUP_INSTANCE",
"Process ID",
"Name"
],
"aggregationTypes": [
"AVG",
"SUM",
"MIN",
"MAX"
],
"unit": "Count (count)",
"filter": "PLUGIN",
"detailedSource": "PendingRequestCount_V2",
"pluginId": "custom.jmx.PendingRequestCount_V21663771262394",
"types": [],
"warnings": []
}
Metrics API v2
{
"metricId": "ext:custom.jmx.PendingRequestCount_V2.metric_PendingRequestCount_1663771256883",
"displayName": "PendingRequestCount",
"description": "",
"unit": "Count",
"dduBillable": true,
"created": 1663771262476,
"lastWritten": 1665587469308,
"entityType": [
"PROCESS_GROUP_INSTANCE"
],
"aggregationTypes": [
"auto",
"avg",
"count",
"max",
"min",
"sum"
],
"transformations": [
"filter",
"fold",
"limit",
"merge",
"names",
"parents",
"timeshift",
"sort",
"last",
"splitBy",
"lastReal",
"setUnit"
],
"defaultAggregation": {
"type": "avg"
},
"dimensionDefinitions": [
{
"key": "dt.entity.process_group_instance",
"name": "Process",
"displayName": "Process",
"index": 0,
"type": "ENTITY"
},
{
"key": "rx_pid",
"name": "rx_pid",
"displayName": "rx_pid",
"index": 1,
"type": "NUMBER"
},
{
"key": "Name",
"name": "Name",
"displayName": "Name",
"index": 2,
"type": "STRING"
}
],
"tags": [],
"metricValueType": {
"type": "unknown"
},
"scalar": false,
"resolutionInfSupported": true
}
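For reference, this metadata can be pulled directly from the two APIs. Below is a minimal sketch in Python; the environment URL and token are placeholders, and the token needs the relevant read scopes (DataExport for the v1 call, metrics.read for the v2 call).

import requests

# Placeholders - use your own environment URL and API token.
ENV = "https://{your-environment-id}.live.dynatrace.com"
HEADERS = {"Authorization": "Api-Token <your-token>"}

# Timeseries API v1: descriptor of the custom JMX timeseries
ts_id = "custom.jmx.PendingRequestCount_V21663771262394:metric_PendingRequestCount_1663771256883"
r1 = requests.get(f"{ENV}/api/v1/timeseries/{ts_id}", headers=HEADERS)
print(r1.json().get("dimensions"))

# Metrics API v2: the same metric, including its dimensionDefinitions
metric_id = "ext:custom.jmx.PendingRequestCount_V2.metric_PendingRequestCount_1663771256883"
r2 = requests.get(f"{ENV}/api/v2/metrics/{metric_id}", headers=HEADERS)
print(r2.json().get("dimensionDefinitions"))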
Best regards,
Mizső
13 Oct 2022 10:57 AM
Thanks for your suggestion.
It should be possible, because when I ingest some metrics they can automatically be split across many dimensions, but I can't find any documentation on how to define this in a custom plugin.
It was a good idea to check the metrics through the API, but it doesn't help me much 😞
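For example, a metric pushed through the metric ingestion endpoint carries all of its dimensions on a single protocol line. A minimal sketch in Python (the environment URL, token, and metric key are illustrative placeholders; the token needs the metrics.ingest scope):

import requests

# Placeholders - use your own environment URL and API token.
ENV = "https://{your-environment-id}.live.dynatrace.com"
TOKEN = "<your-token>"

# One line of the ingestion protocol: metric key, comma-separated dimensions, value.
line = "kafka.connect.put_batch_avg_time_ms,connector=my-sink,task=0 12.5"

resp = requests.post(
    f"{ENV}/api/v2/metrics/ingest",
    headers={
        "Authorization": f"Api-Token {TOKEN}",
        "Content-Type": "text/plain; charset=utf-8",
    },
    data=line,
)
print(resp.status_code, resp.text)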
10 Oct 2023 09:26 PM
Just adding that my company could use this feature as well, also for collecting Kafka server metrics like you are. It's very common to want to split by both topic and partition, for example. Let me know if you ever found a way to do this via the JMX extensions.
11 Oct 2023 08:22 AM
You can do this with the new JMX datasource for extensions 2.0 in Dynatrace, which is the updated version of the JMX framework you have all been using:
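For illustration only, here is a rough sketch of what such a 2.0 extension fragment could look like, with both connector and task attached as dimensions of one metric. The field names and the MBean query should be checked against the JMX datasource documentation, and the group, subgroup, and metric key names below are made up for the example:

# Illustrative Extensions 2.0 JMX datasource fragment (not a complete extension)
jmx:
  - group: kafka.connect
    subgroups:
      - subgroup: sink task metrics
        # MBean query with wildcards on the key properties we want as dimensions
        query: "kafka.connect:type=sink-task-metrics,connector=*,task=*"
        dimensions:
          - key: connector
            value: property:connector   # taken from the ObjectName key property
          - key: task
            value: property:task
        metrics:
          - key: kafka.connect.put-batch-avg-time-ms
            value: attribute:put-batch-avg-time-ms
            type: gauge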