You can add a filter to the metric to pick the key request you are looking for.
The filter will look something like:
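A hedged sketch, since the original example is not in the thread (the metric key and the entity ID here are placeholders, not values from this conversation):

```
builtin:service.keyRequest.response.time
  :filter(eq("dt.entity.service_method","SERVICE_METHOD-1234567890ABCDEF"))
  :splitBy("dt.entity.service_method")
  :avg
```

The `:filter(eq("dt.entity.service_method",…))` transformation restricts the built-in key-request metric to a single key request; in practice the whole selector is written on one line.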
More information about filter transformations can be found in the documentation.
You will be looking for the metric(s) for response time. Please see the glossary for the definition here: https://www.dynatrace.com/support/help/get-started/glossary/#expand-response-time-72
Note that in Dynatrace the term 'latency' usually refers to a specific component of response time, such as network latency, whereas the overall service transaction time is the response time.
Hi Wai Keat,
Yes, this is possible. It doesn't even need to be a request marked as a key request; it just needs to be a metric.
For this you can create a calculated service metric (from the service, click the chart option for the request, then 'Create metric' in the multidimensional analysis view).
There's actually a walkthrough that uses this exact use case as an example in the "Getting Started with SLOs in Dynatrace" performance clinic webinar. You can watch it either in Dynatrace University or on YouTube.
It starts at the 26-minute mark if you're in a hurry.
Hope that helps.
The only concern with creating metrics is that customers do care about the DDUs that metric creation consumes under their license, while creating a key user action and using the built-in metric is free of DDU charges.
The new feature is not the same as the one used in Dynatrace University; we lost a lot of parameters...
Can we have details about how to create the same SLOs shown in the videos with the current features?
If the video seems not quite up to date with the current release, I would suggest reviewing the documentation here: https://www.dynatrace.com/support/help/shortlink/objectives-hub
It's often quicker for the Dynatrace docs team to update the online documentation, since review and re-recording of training videos takes longer and the SLO feature is still evolving and improving over time.
To add further to this,
Calculated service metrics can be created using conditionals on request name or request type (among others). A calculated service metric can then be used as a single metric or included as the numerator/denominator for SLO creation.
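As a sketch of the numerator/denominator approach described above (the `calc:service.…` metric names are hypothetical, not from this thread), the SLO's metric expression could combine two calculated service metrics like this:

```
(100)*(calc:service.good_requests:splitBy())/(calc:service.total_requests:splitBy())
```

Here `good_requests` would be a calculated service metric with a condition such as "request name equals X and HTTP status < 500", and `total_requests` the same condition without the status clause; `:splitBy()` collapses any dimensions so the ratio is a single series.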
This needs to be better documented, as it is standard SLO/SLI practice. We managed to get it working, but found it was incompatible with the "SLO filter" when using calculated service metrics. We also couldn't filter by status code without a calculated service metric:
e.g. Latency SLI
<custom success metrics/filter service, endpoint, availability, and latency>
e.g. Availability SLI
SLO filter: type("SERVICE_METHOD") AND entityId("SERVICE_METHOD-XXX")
I'm having a problem with metrics for SLO calculation based on key requests. The entityId("SERVICE_METHOD-XXX") changes over time, which causes the SLO to break. If I instead use entitySelector("type(service),entityName(~"createEvent~")") in the filter, I run into the problem of duplicate names, and when I try to add the controller to the entitySelector, for example, it returns no data. As a result, whenever the SERVICE_METHOD ID changes the dashboard has to be adjusted again, and in some cases not even that works.
I would appreciate if someone has had this problem and has managed to mitigate it somehow.
Hi @Kenny ,
if there is a change in the entityId of a key request, then it is in fact a new key request. Unless there is an architectural change (e.g. the host group changed) or a deployment change in the application, the entityId will not change.
Technically you should be able to use an entity selector with entityName, leveraging the entity relationships. For example:
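A sketch of such a selector (the relationship name and the service ID are assumptions from memory, not from this thread; verify the exact relationship names for SERVICE_METHOD via GET /api/v2/entityTypes/SERVICE_METHOD in your environment):

```
type("SERVICE_METHOD"),entityName.contains("/my/request/name"),fromRelationships.isServiceMethodOf(type("SERVICE"),entityId("SERVICE-ABCDEF0123456789"))
```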
This will return key requests which contain "/my/request/name" in the name and are on the service with the ID specified.
Anyway - I recommend finding out why your SERVICE_METHOD ID has changed. It should not.
Thanks @Julius_Loman .
It works for me for example for availability SLO as follows:
(100)*(builtin:service.keyRequest.errors.fivexx.successCount:filter(eq(dt.entity.service_method,"SERVICE_METHOD-XXX")):splitBy())/(builtin:service.keyRequest.count.server:filter(eq(dt.entity.service_method,"SERVICE_METHOD-XXX")):splitBy())
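For clarity, the arithmetic that expression performs is just a good-events-to-total-events ratio scaled to a percentage; a minimal sketch (the zero-traffic behavior is a policy choice, not something the metric expression defines):

```python
def availability_percent(good_count: float, total_count: float) -> float:
    """Availability SLO status as a percentage: good events / total events.

    Mirrors the metric expression (100) * successCount / count.server,
    where the two counts come from the filtered built-in key-request metrics.
    """
    if total_count == 0:
        # No traffic in the evaluation window: treat as meeting the objective.
        return 100.0
    return 100.0 * good_count / total_count

print(availability_percent(997, 1000))  # → 99.7
```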
We are analyzing why we lose the SERVICE_METHOD ID in some cases (it is not all of them). For the availability case we only swapped the good-events metric for a calculated one and it also works perfectly; we just have to determine the origin of the key request loss.
@Kenny hard to tell - most likely there has been a change in deployment and Dynatrace considers even the service itself a different one. I'd suggest choosing a long timeframe in the global time selector so it spans the time when the "old" key request was present as well as the new one. Look at the process group. I'd bet you will see two services with the same name.
Thanks a lot for the feedback!
I've already triggered a documentation improvement based on your help/feedback/examples, and we are thinking about how to make the filtering more self-explanatory.
The SLO filter applies to both metrics; for the custom calculated metric, only the service dimension is available.
This will change with a new query syntax for metrics (coming along with calculated metrics), which will also be part of the SLO setup. It will then become clearer what can be filtered on the individual metric queries, which makes the overall setup easier.
I have the same problem. With calculated metrics I could build a latency SLO, but this generates licensing costs; with the built-in metrics I have only managed availability SLOs by service and key request.
Hi @Malaik,
this is not possible, at least in current Dynatrace versions. SLOs work on metrics only. You won't have the metric (response time, for example) for a single request unless you mark it as a key request or create a calculated service metric.
Also, this should now be easier to implement as code, since there is now a key requests REST API for web and mobile apps.
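A sketch of what that could look like (the endpoint path, token scope, and IDs are assumptions from memory, not from this thread; verify against the API explorer in your environment):

```
# List the key user actions of a web application (assumed v2 endpoint)
curl -H "Authorization: Api-Token $DT_API_TOKEN" \
  "https://{your-environment-id}.live.dynatrace.com/api/v2/applications/web/{applicationId}/keyUserActions"
```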