
Number of data points exceeded the limit of 20000000

AlanFoley
Observer

Hi all

 

I am trying to extract some metrics via the API (api/v2). I also apply filtering by tags, osType and mzName to reduce the output, however I am still getting the following message:

"Number of data points exceeded the limit of 20000000" - 

 

The .csv that is returned contains only 881 rows of data - see a sample of a single row below:

"builtin:host.disk.usedPct:names,Host001,HOST-FE7A6A123B192776,D:\,DISK-1E2CC761ED90F1C,2021-10-01 00:00:00,54.4775917176209"

 

How is it possible to reach 20 million data points with only 881 rows and 7 columns of data? It just seems impossible to get to 20m data points.

 

I am on version 1.226.116.20211006-102024

 

I would be keen to understand how the data points are calculated based on the sample above.

 

Any ideas, workarounds or suggestions welcome.

 

Appreciate the help

 

Thanks

Alan

 


david_lopes
Dynatrace Mentor

You have 1 datapoint per host, per disk, per minute.

Let's say you are looking at 2 weeks of data,

and assuming you have 100 hosts, and every host has 3 disks:

100 hosts * 3 disks * 60 datapoints/hour * 336 hours (2 weeks) = 6,048,000 datapoints
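A quick sanity check of that arithmetic (the host and disk counts are the assumed example numbers above, not values from Alan's environment):

```python
# Rough check of the datapoint math: 1 datapoint per host, per disk, per minute,
# over a 2-week window, using the example fleet size from the reply above.
hosts = 100
disks_per_host = 3
datapoints_per_hour = 60      # one datapoint per minute
hours = 2 * 7 * 24            # 2 weeks = 336 hours

total = hosts * disks_per_host * datapoints_per_hour * hours
print(total)  # 6048000
```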

As you can see, you can quickly get to millions of datapoints. The error means the limit has been reached; whatever response you got won't include those millions of datapoints.

My suggestion: have your script make one request per host (or a few hosts at a time), or use resolution=Inf, depending on your use case.
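The per-host batching idea could be sketched like this, assuming the standard Dynatrace Metrics API v2 query endpoint; the environment URL, token placeholder and host IDs below are illustrative, not values from this thread:

```python
from urllib.parse import urlencode

# Sketch of "one request per host" to stay under the datapoint limit.
# BASE_URL and the host IDs are placeholders.
BASE_URL = "https://your-environment.live.dynatrace.com/api/v2/metrics/query"
host_ids = ["HOST-FE7A6A123B192776", "HOST-0000000000000001"]

def build_query_url(host_id, resolution="1m"):
    """Build a Metrics API v2 query URL scoped to a single host, so each
    response covers only that host's disks for the requested window."""
    params = {
        "metricSelector": "builtin:host.disk.usedPct:names",
        "entitySelector": f'type("HOST"),entityId("{host_id}")',
        "resolution": resolution,  # pass "Inf" for one aggregated value
        "from": "now-2w",
    }
    return f"{BASE_URL}?{urlencode(params)}"

for hid in host_ids:
    url = build_query_url(hid)
    # Fetch with your HTTP client of choice, e.g.:
    # requests.get(url, headers={"Authorization": f"Api-Token {API_TOKEN}"})
    print(url)
```

Splitting by entitySelector keeps each response small; resolution=Inf instead collapses the whole timeframe into a single datapoint per series.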



Hi David

Appreciate the detailed response - I have set my collections to run every 4 hours and, for those metrics where it is a fit, I set resolution=Inf. This way I have been able to avoid the data limits and still have useful data.
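That workaround (a 4-hour window collapsed by resolution=Inf) might look like this as a query, again assuming the standard Metrics API v2 endpoint; the environment URL is a placeholder:

```python
from urllib.parse import urlencode

# Sketch of the 4-hour-window workaround: the server aggregates the whole
# timeframe into one datapoint per series via resolution=Inf.
params = urlencode({
    "metricSelector": "builtin:host.disk.usedPct:names",
    "from": "now-4h",
    "resolution": "Inf",
})
url = "https://your-environment.live.dynatrace.com/api/v2/metrics/query?" + params
print(url)
```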

Thanks a mil

Alan
