
Discrepancy between the number of Problem IDs in the GUI vs JSON


Hope someone can clarify the following discrepancy. We run a managed Dynatrace cluster. I select one tenant and a one-day timeframe, and filter the events down to only Error-type events outside of maintenance windows. The GUI shows 5 such events, with IDs 673, 285, 983, 67, and 924.

Then I downloaded the data via the API and imported it into an Excel pivot table. Instead of 5 events, I see 13 Error-type events: the 5 listed above plus 8 more.

I see the same situation on other days as well. In fact, for the whole week the JSON file contained 40 problems, while the GUI showed only 28.
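For anyone reproducing this comparison, the API pull can be sketched roughly like this (a minimal sketch: the environment URL, tenant ID, and token below are placeholders, and the `/api/v2/problems` endpoint with `from`/`to`/`pageSize` parameters is assumed; adjust to whichever Problems API version your cluster exposes):

```python
import urllib.parse
import urllib.request

# Placeholders only, not real values.
BASE_URL = "https://dynatrace.example.com/e/TENANT-ID/api/v2/problems"
API_TOKEN = "dt0c01.EXAMPLE"  # an API token with problem-read permission

def build_problems_request(base_url, token, frm, to):
    """Build a GET request for all problems in the given timeframe."""
    query = urllib.parse.urlencode({"from": frm, "to": to, "pageSize": 500})
    return urllib.request.Request(
        f"{base_url}?{query}",
        headers={"Authorization": f"Api-Token {token}"},
    )

req = build_problems_request(
    BASE_URL, API_TOKEN, "2023-05-01T00:00:00Z", "2023-05-02T00:00:00Z"
)
print(req.full_url)

# To actually fetch (needs network access and a valid token):
# import json
# with urllib.request.urlopen(req) as resp:
#     problems = json.loads(resp.read())["problems"]
#     print(len(problems), "problems in the JSON export")
```

Counting the entries in the resulting JSON is what surfaces the mismatch against the GUI's problem list.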

Can anyone give an explanation?




The Problem numbers do recycle, but the ID will always be unique. It can be found in the URL and in the API export of the event.


I understand that the Problem number is in the 1-999 range and rotates. But I am asking about a different thing: for a specific time range there are problems with IDs, and only a subset of them is listed in the GUI compared to the JSON.

In the above-mentioned example, I see problems with IDs 15, 229, 504, and others in the JSON, but I don't see them when logged into the GUI. Why?
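To pin down exactly which problems are JSON-only, a set difference over the two ID lists does the job (sample IDs taken from this thread; replace with your own lists):

```python
# IDs shown in the GUI vs. IDs found in the JSON export.
gui_ids = {673, 285, 983, 67, 924}
json_ids = {673, 285, 983, 67, 924, 15, 229, 504}

# Problems present in the API export but hidden in the GUI.
json_only = sorted(json_ids - gui_ids)
print(json_only)  # → [15, 229, 504]
```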

Dynatrace Champion

We do hide semantically duplicate problems from the UI. Once Davis finds, after some minutes (problems evolve over time), that two problems are semantically identical, it hides the duplicates from the UI.

Nevertheless, we do have to export them in the API and send out closing notifications, because alert notifications were already sent out for the duplicate problems. Receiving systems therefore expect a closing message for the duplicates as well.

The backlink of course also works as expected; it's just that the problem list does not show the duplicates.

Best greetings,


Wolfgang, thanks.

I have follow-up questions then: how do I filter duplicate problems out of the JSON file? Is there an additional tag/property/key on such problems/events that helps to identify and filter them out afterwards?
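If such a marker exists, the post-processing step would look something like this (the boolean `duplicate` field and the sample records here are entirely hypothetical; substitute whatever property, if any, actually marks duplicates in your export):

```python
import json

# Hypothetical export fragment: assume duplicates carry a "duplicate" flag.
sample_export = json.loads("""
{"problems": [
    {"displayId": "P-673", "duplicate": false},
    {"displayId": "P-15",  "duplicate": true},
    {"displayId": "P-229", "duplicate": true}
]}
""")

# Keep only problems not flagged as duplicates.
unique_problems = [p for p in sample_export["problems"] if not p.get("duplicate")]
print([p["displayId"] for p in unique_problems])  # → ['P-673']
```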

How can a single point of truth be determined/established? If the GUI is that single point of truth, then I need the data pulled via the API, and the notifications, to be in sync with it.

Consider this: a problem is sent out via the notification system, support folks see n such notifications, then they log into the GUI and see n-m problems displayed on the page. Extra explanation and digging is then needed to determine which one is the real event.

How much time does it take for the AI engine to consolidate events/problems? Consider the following example: there is a setting that a 'Failure Rate Increased' event is sent out only if it has been going on for 30 minutes or more. Will the notification sent after 30 minutes include the duplicate events, or will only consolidated events be sent out?