DQL
Questions about Dynatrace Query Language

How do I add timestamp and schema to a statement performance tile I have? (Oracle Database Monitoring Extension)

badgerfifteen
Organizer

Hi all,

This is regarding the Statement Performance tile in the 'Databases' section (applicable to the Oracle Database Monitoring extension only).

In this section, when you click 'Pin to Dashboard', the following query is produced:

fetch logs 
| filter dt.extension.name == "com.dynatrace.extension.sql-oracle"
| filter matchesValue(event.group, "query_performance")
| filter dt.cost.costcenter == "GROUP"
| sort timestamp desc
| filter not contains(content, "/*dt:ownQuery*/")
| summarize {number = count(),
  sql_id = takeFirst(sql_id),
  fetch_exec_plan_for_sql_id = takeFirst(sql_id),
  event.group = takeFirst(event.group),
  endpoint = takeFirst(endpoint),
  device.port = takeFirst(device.port),
  device.name = takeFirst(device.name),
  device.address = takeFirst(device.address),
  device = takeFirst(device),
  instance.name = takeFirst(instance.name),
  database.identifier = takeFirst(database.identifier),
  loglevel = takeFirst(loglevel),
  dt.extension.config.id = takeFirst(dt.extension.config.id),
  dt.extension.endpoint.hints = takeFirst(dt.extension.endpoint.hints),
  dt.extension.name = takeFirst(dt.extension.name)},
  by:{content, container.name}
| fields
  sql_id,
  container.name,
  id = concat(sql_id, container.name),
  fetch_exec_plan_for_sql_id,
  event.group,
  endpoint,
  device.port,
  device.name,
  device.address,
  device,
  instance.name,
  database.identifier,
  loglevel,
  dt.extension.config.id,
  dt.extension.endpoint.hints,
  dt.extension.name,
  content
| join [
  fetch logs
| filter dt.extension.name == "com.dynatrace.extension.sql-oracle"
AND matchesValue(event.group, "query_performance")
| filter dt.cost.costcenter == "GROUP"
  | makeTimeseries {
      executions_timeseries=avg(toDouble(executions)), 
      elapsed_time_timeseries=avg(toDouble(elapsed_time)),
      application_wait_time_timeseries=avg(toDouble(application_wait_time)),
      concurrency_wait_time_timeseries=avg(toDouble(concurrency_wait_time)),
      user_io_wait_time_timeseries=avg(toDouble(user_io_wait_time)),
      cluster_wait_time_timeseries=avg(toDouble(cluster_wait_time)),
      cpu_time_timeseries=avg(toDouble(cpu_time)),
      rows_processed_timeseries=avg(toDouble(rows_processed)),
      parse_calls_timeseries=avg(toDouble(parse_calls)),
      direct_writes_timeseries=avg(toDouble(direct_writes)),
      disk_reads_timeseries=avg(toDouble(disk_reads))
    },
    by: {sql_id, container.name}, interval:5m
    | fieldsAdd 
      executions_delta = arrayDelta(executions_timeseries), 
      elapsed_time_delta = arrayDelta(elapsed_time_timeseries),
      application_wait_time_delta = arrayDelta(application_wait_time_timeseries),
      concurrency_wait_time_delta = arrayDelta(concurrency_wait_time_timeseries),
      user_io_wait_time_delta = arrayDelta(user_io_wait_time_timeseries),
      cluster_wait_time_delta = arrayDelta(cluster_wait_time_timeseries),
      cpu_time_delta = arrayDelta(cpu_time_timeseries),
      rows_processed_delta = arrayDelta(rows_processed_timeseries),
      parse_calls_delta = arrayDelta(parse_calls_timeseries),
      direct_writes_delta = arrayDelta(direct_writes_timeseries),
      disk_reads_delta = arrayDelta(disk_reads_timeseries)
    | fields executions = arraySum(iCollectArray(if(executions_delta[] > 0, executions_delta[], else: 0))),
      elapsed_time = arraySum(iCollectArray(if(elapsed_time_delta[] > 0, elapsed_time_delta[], else: 0))),
      application_wait_time = arraySum(iCollectArray(if(application_wait_time_delta[] > 0, application_wait_time_delta[], else: 0))),
      concurrency_wait_time = arraySum(iCollectArray(if(concurrency_wait_time_delta[] > 0, concurrency_wait_time_delta[], else: 0))),
      user_io_wait_time = arraySum(iCollectArray(if(user_io_wait_time_delta[] > 0, user_io_wait_time_delta[], else: 0))),
      cluster_wait_time = arraySum(iCollectArray(if(cluster_wait_time_delta[] > 0, cluster_wait_time_delta[], else: 0))),
      cpu_time = arraySum(iCollectArray(if(cpu_time_delta[] > 0, cpu_time_delta[], else: 0))),
      rows_processed = arraySum(iCollectArray(if(rows_processed_delta[] > 0, rows_processed_delta[], else: 0))),
      parse_calls = arraySum(iCollectArray(if(parse_calls_delta[] > 0, parse_calls_delta[], else: 0))),
      direct_writes = arraySum(iCollectArray(if(direct_writes_delta[] > 0, direct_writes_delta[], else: 0))),
      disk_reads = arraySum(iCollectArray(if(disk_reads_delta[] > 0, disk_reads_delta[], else: 0))),
      id = concat(sql_id, container.name)
], on:{id}, fields: {executions, elapsed_time, application_wait_time, concurrency_wait_time, user_io_wait_time, cluster_wait_time, cpu_time, rows_processed, parse_calls, direct_writes, disk_reads}
| filter isNotNull(elapsed_time)
| sort elapsed_time desc
| fields content, 
  sql_id,
  container.name,
  executions, elapsed_time, application_wait_time, concurrency_wait_time, user_io_wait_time, cluster_wait_time, cpu_time, rows_processed, parse_calls, direct_writes, disk_reads,
  mean_elapsed_time = elapsed_time/executions, 
  fetch_exec_plan_for_sql_id
| fields content, sql_id, container.name, executions, elapsed_time, mean_elapsed_time, fetch_exec_plan_for_sql_id
| filter elapsed_time >= 329709768

It's slightly edited to meet my requirements (see the filters at the top), but what I'm trying to get as part of this DQL query is the timestamp, and possibly the schema, attached to each statement. I've been finding it quite difficult, as I can't quite work out where to add it.

The obvious choice would be to use 

| fieldsAdd timestamp

...and then remove portions of the `fields` stages (so the field selection isn't limited), but the timestamp still doesn't come through. Any ideas? It must be getting dropped somewhere in the query.
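For reference, my current theory is that `summarize` only outputs its aggregates and `by:` fields, so `timestamp` is already gone by the time a later `fieldsAdd timestamp` runs. One pattern I'm considering is to aggregate the timestamp inside `summarize` itself (since the records are sorted by `timestamp desc` just before it, `takeFirst` should give the newest one) and then carry it through each subsequent `fields` stage. Sketched on a trimmed-down version of the query (the schema field name `parsing_schema_name` is only a guess at what the extension might emit, not something I've confirmed):

```dql
fetch logs
| filter dt.extension.name == "com.dynatrace.extension.sql-oracle"
| filter matchesValue(event.group, "query_performance")
| filter not contains(content, "/*dt:ownQuery*/")
| sort timestamp desc
| summarize {
    number = count(),
    latest_timestamp = takeFirst(timestamp),          // newest record per statement
    // schema = takeFirst(parsing_schema_name),        // hypothetical - only if the extension emits it
    sql_id = takeFirst(sql_id)
    // ...remaining takeFirst() aggregations as in the original query...
  }, by:{content, container.name}
// latest_timestamp must then be listed in every later `fields` stage,
// otherwise those stages drop it again
| fields content, sql_id, container.name, latest_timestamp
```

If that's roughly right, the original query would need `latest_timestamp` added in three places: the `summarize` block, the first `fields` list, and the final `fields` list after the join.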

0 REPLIES
