
Dashboard performance with Timeouts & Errors

JeanBlanc
Helper

Hi everyone 👋

I'm currently troubleshooting a dashboard performance issue and I'm seeing a pattern I'd like to validate with the community.

When I select larger time ranges (e.g., 24h), some of the backend queries triggered by the dashboard are being aborted by the frontend — I consistently see net::ERR_ABORTED in the browser's dev tools.

🧪 Based on HAR analysis:

  • The backend does not return an error.

  • The request is cut off after ~450ms — even for valid queries.

  • When I reduce the time range (e.g., to 30 minutes), the same queries complete successfully, even when they take longer (~580ms).

So my question is: Is there any known frontend-side timeout or cancellation mechanism that would abort requests if they take too long (e.g., >450ms)?

Any insights, docs, or similar experiences would be greatly appreciated!

Thanks in advance 🙏

4 REPLIES

zietho
Dynatrace Leader

Hey, sorry for coming back rather late. Any news on your side?
Is this behavior still the same?
Have you already raised a support ticket for this?

JeanBlanc
Helper

Hey @zietho

The issue actually occurs when working with a large dataset.

In my case, I’m using a dashboard variable backed by an array of over 10,000 entries. The dropdown becomes very slow, and any query using an IN clause eventually times out.
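To illustrate the failing pattern, here is a minimal sketch (the field names and the $hosts variable are placeholders, not my real setup):

  fetch logs
  // $hosts is the dashboard variable backed by 10k+ values
  | filter in(host.name, array($hosts))

With the full 10k+ array expanded into the query, this is the kind of statement that times out for me.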

I worked around it by changing the variable type to Text, which helped avoid the timeout. However, I lost the dropdown selection and autocompletion features in the process.

Do you have any alternative suggestions to handle such large datasets more efficiently?

Best regards,

zietho
Dynatrace Leader

Hey, sorry for the late reply, I missed your previous one.

Our lead product engineer tested a bit internally and found the following:

  • Timeouts: he found that we should not have such a short timeout. When he tried out some large data sets (~10k and more), he was able to let the queries run for >71 seconds. The backend limit should be around 5 minutes or so. When he tried it with 100k, he started to run into query limits, and you should then see the following message: "TOO_MANY_ELEMENTS_IN_REPETITIVE_GROUP".
  • Limits: we should currently be able to deal with 10k values in the variable, but not more, because the query would not allow it any more. Why is that? Because we built it in a way that for 80% of the cases you don't need to do crazy DQL stunts (see the workaround next).

Alternatives/Solutions if you need > 10k variable values: 

  1. You use more narrowing filters when defining your variable to push the number of values below 10k, or
  2. You use the following workaround:
    1. Create an "Any" or "All" (naming is up to you) entry in your variable, for example by using data record()... and | append [...
    2. When referencing the variable value, handle the "All" case by dropping the filter.

Workaround

For example, to add an "All" entry covering all Kubernetes clusters, you could do it like this:

[screenshot: zietho_2-1749151285641.png]
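In case the screenshot doesn't render, here is a rough sketch of what such a variable query can look like (the entity source and the clusterName field are assumptions, not necessarily what the screenshot shows):

  fetch dt.entity.kubernetes_cluster
  | fields clusterName = entity.name
  // append a synthetic "All" record so it appears as a dropdown option
  | append [ data record(clusterName = "All") ]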

And then, when referencing the variable, you can simply do it like this: if you select "All", the condition "drops" the filter because it evaluates to true and therefore doesn't actually filter.

[screenshot: zietho_3-1749151776734.png]
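And a sketch of the referencing side under the same assumptions (the variable name $Cluster is made up for the example):

  fetch logs
  // if "All" is selected, the left-hand condition is true for every record,
  // so the filter effectively drops away
  | filter $Cluster == "All" or k8s.cluster.name == $Cluster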

Link to a working example on the playground https://playground.apps.dynatrace.com/ui/apps/dynatrace.dashboards/dashboard/30a2fd95-90e8-40d8-8387... 

 

JeanBlanc
Helper

Hi @zietho 

Thanks for your reply.

I’ve used a similar approach to handle large volumes of data.

Regards,
