I am trying to extract some metrics via the api/v2 API, filtering by tags, osType, and mzName to reduce the output. However, I am still getting the following message:
"Number of data points exceeded the limit of 20000000"
The .csv that is returned contains only 881 rows of data (see the sample of a single row below).
How is it possible to reach 20M data points with only 881 rows and 7 columns of data? It just seems impossible to get to 20M data points.
I am on version 126.96.36.19911006-102024
I would be keen to understand how the data points are calculated based on the sample above.
Any ideas, workarounds, or suggestions are welcome.
Appreciate the help.
You have 1 datapoint per host, per disk, per minute.
Let's say you are looking at 2 weeks of data
And assuming you have 100 hosts, every host has 3 disks
100 hosts * 3 disks * 60 datapoints/hour * 336 hours (2 weeks) = 6,048,000 datapoints
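The arithmetic above can be reproduced in a few lines (the host and disk counts are the example's assumptions, not your actual environment):

```python
# Back-of-envelope datapoint estimate: one datapoint per host,
# per disk, per minute, over a 2-week query window.
hosts = 100
disks_per_host = 3
datapoints_per_hour = 60      # 1-minute resolution
hours = 2 * 7 * 24            # 2 weeks = 336 hours

total = hosts * disks_per_host * datapoints_per_hour * hours
print(f"{total:,} datapoints")  # 6,048,000
```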
As you can see, you can quickly reach millions of datapoints. The error means the limit has been reached, and whatever response you got will not include the full set of datapoints.
My suggestion: have your script make one request per host (or a small batch of hosts) at a time, or use Inf resolution, depending on your use case.
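A minimal sketch of the per-host approach, assuming the standard Dynatrace Metrics API v2 query endpoint. The environment URL, token, metric selector, and host IDs are all placeholders you would replace with your own:

```python
# Split one large /api/v2/metrics/query call into one request per host,
# so each response stays well under the 20,000,000-datapoint limit.
import urllib.parse

BASE_URL = "https://YOUR-ENVIRONMENT.live.dynatrace.com"   # placeholder
METRIC = "builtin:host.disk.avail"                         # example selector

def build_query_url(host_id: str, resolution: str = "1m") -> str:
    """Build a metrics query URL scoped to a single host."""
    params = {
        "metricSelector": METRIC,
        "entitySelector": f'type("HOST"),entityId("{host_id}")',
        "resolution": resolution,   # e.g. "1m", "1h", or "Inf"
        "from": "now-2w",
    }
    return f"{BASE_URL}/api/v2/metrics/query?{urllib.parse.urlencode(params)}"

# One request per host; fetch each URL with your HTTP client of choice,
# e.g. requests.get(url, headers={"Authorization": "Api-Token <token>"}).
for host in ["HOST-0000000000000001", "HOST-0000000000000002"]:
    url = build_query_url(host)
    print(url)
```

Passing `resolution="Inf"` instead collapses the whole timeframe into a single aggregated datapoint per series, which sidesteps the limit entirely if you only need aggregates.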