Hi, I'm a newbie here and would like to understand why the DDUs are different depending on the metric keys.
In Data Explorer I used the following query (in advanced mode):
builtin:billing.ddu.metrics.byMetric
:filter(and(or(
  eq("Metric Key","builtin:cloud.aws.dynamo.capacityUnits.consumed.write"),
  eq("Metric Key","builtin:cloud.aws.dynamo.capacityUnits.consumed.read"),
  eq("Metric Key","builtin:cloud.aws.dynamo.capacityUnits.write"),
  eq("Metric Key","builtin:cloud.aws.dynamo.capacityUnits.read"),
  eq("Metric Key","builtin:cloud.aws.dynamo.capacityUnits.provisioned.write"),
  eq("Metric Key","builtin:cloud.aws.dynamo.tables"),
  eq("Metric Key","builtin:cloud.aws.dynamo.capacityUnits.provisioned.read")
)))
:splitBy("Metric Key")
:sort(value(auto,descending))
:limit(20)
The result is shown in the attached screenshot. As you can see, dynamo.tables shows 0.002 for a 10-minute window. Does that mean the data point was checked only twice in 10 minutes? I thought it would be checked every minute. Shouldn't each metric key have the same value? Note that we hardly have any custom-built dashboards or queries yet.
AWS metrics are pulled via the built-in Amazon DynamoDB service integration, without an ActiveGate instance; we use the SaaS deployment.
I think the key to understanding your data here is the aggregation. In the attached image, are you using the default, avg, or last aggregation? Those aggregations show the value for only a single minute, which would make more sense of the numbers. What you probably need is the sum aggregation, which gives you the total consumption for the time frame you're looking at.
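To make the difference concrete, here is a minimal Python sketch. The 0.001 DDU-per-data-point rate matches the figures in this thread; the one-data-point-per-minute series is an assumption for illustration only.

```python
# Hypothetical per-minute DDU consumption for one metric over a 10-minute window,
# assuming one ingested data point per minute at 0.001 DDU each.
per_minute_ddus = [0.001] * 10

# An avg (or last) aggregation shows roughly the per-minute rate...
avg_value = sum(per_minute_ddus) / len(per_minute_ddus)  # ~0.001

# ...while a sum aggregation shows the total consumption for the window.
sum_value = sum(per_minute_ddus)  # ~0.01
```

With avg you would see a value close to a single minute's consumption regardless of the window length, whereas sum grows with the window, which is what you want when checking billing.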
The metrics can also have different consumption depending on the dimensions that come with them. For example, the one with the lowest consumption has AWS credentials and availability zone as dimensions, while the others have the DynamoDB table as a dimension. The more distinct dimension tuples a metric produces, the more it costs.
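A rough sketch of that cost model, assuming each distinct dimension tuple produces its own data point per minute at 0.001 DDU (the helper below is illustrative, not a Dynatrace API):

```python
DDU_PER_DATAPOINT = 0.001  # billing rate per ingested metric data point

def ddus_per_minute(dimension_tuples: int) -> float:
    """Each distinct dimension tuple writes its own data point every minute."""
    return dimension_tuples * DDU_PER_DATAPOINT

# A metric split by 6 DynamoDB tables writes 6 data points per minute...
high = ddus_per_minute(6)
# ...while one split only by AWS credentials + availability zone may write just one.
low = ddus_per_minute(1)
```

So two metrics polled at the same interval can still differ in DDU consumption purely because of how many dimension combinations they report.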
@victor_balbuena, thank you for your reply.
I changed the query to use the sum aggregation explicitly, but the results were the same. These metrics are retrieved from AWS CloudWatch.
I figured out that there are six tables, and each one produces one data point per minute for capacityUnits.write/read and capacityUnits.consumed.write/read, which generates 0.001 DDU × 6 tables × 10 minutes = 0.06 DDUs.
Also, capacityUnits.provisioned.write/read actually produced only two data points each in the window, which generates 0.001 DDU × 6 tables × 2 data points = 0.012 DDUs.
The only thing I cannot explain is the last one, for tables. According to the documentation, the metric counts the number of tables per availability zone. Because the value is 0.002 DDU, I wonder whether the data point was ingested only twice in the 10-minute window.
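The arithmetic above can be double-checked with a small sketch. The counts are taken from the numbers in this thread; treating the tables metric as a single dimension tuple with two data points in the window is a guess, not something confirmed by the data.

```python
DDU_PER_DATAPOINT = 0.001  # rate per ingested metric data point

def window_ddus(tuples: int, datapoints_per_tuple: int) -> float:
    """Total DDUs for a window: dimension tuples x data points per tuple x rate."""
    return tuples * datapoints_per_tuple * DDU_PER_DATAPOINT

# consumed/plain write+read: 6 tables, one data point per minute, 10 minutes
consumed = window_ddus(tuples=6, datapoints_per_tuple=10)    # ~0.06
# provisioned write/read: 6 tables, only 2 data points each in the window
provisioned = window_ddus(tuples=6, datapoints_per_tuple=2)  # ~0.012
# tables metric: assumed 1 tuple (per availability zone), 2 data points
tables = window_ddus(tuples=1, datapoints_per_tuple=2)       # ~0.002
```

The 0.002 figure is consistent with the tables metric reporting one dimension tuple that was ingested twice in the 10-minute window, which would match a slower CloudWatch publication interval for that metric.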