
Response status 202 with API call

ni5hat
Newcomer

Sometimes when I make a POST API call to Dynatrace through Python, I get 202 as the response status code, meaning the request was accepted but is still being processed.

My final goal is to get the final JSON response, so I need a way to check the status of long-running requests. In general, a 202 response often includes a Location header, which provides a URL where you can check the status of your request. But Dynatrace does not include the Location header in the 202 response.
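For illustration, this is the generic pattern I have in mind; a minimal sketch only, with a placeholder URL and token, assuming the server does return a Location header:

import time
import requests

# Generic 202 handling (not Dynatrace-specific): poll the URL from the
# Location header until the long-running request finishes.
resp = requests.post(
    "https://example.com/api/v2/some-long-running-call",  # placeholder URL
    headers={"Authorization": "Api-Token <placeholder>"},
    json={"payload": "..."},
)

if resp.status_code == 202:
    status_url = resp.headers.get("Location")  # Dynatrace does not send this
    while status_url:
        poll = requests.get(status_url,
                            headers={"Authorization": "Api-Token <placeholder>"})
        if poll.status_code == 200:
            result = poll.json()
            break
        time.sleep(2)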

Could anyone help me with the specific endpoint or method for checking long-running operations in Dynatrace?

3 REPLIES

Yosi_Neuman
DynaMight Guru

Hi @ni5hat 

Under Dynatrace API - Response codes, the following is written:

In case when a successful request may return different codes, it is specified in the description of the request.

Anyhow, can you elaborate on which API call you are using that returns 202?
My guess is that you are using a POST call to create something; within the response body you should receive an ID that IMO you can check with the appropriate GET API.
HTH
Yos 
dynatrace certificated professional - dynatrace master partner - Matrix Soft Ware Division - Israel

PedroSantos
Advisor

Hello @ni5hat 

 

If your request takes too long to be processed, for example when you query Grail through the API, it is entirely possible to get a 202 with a JSON body that looks like this:

{ "state": "RUNNING",

"requestToken": "[You can use this token on the /query:poll endpoint to check its status],

"ttlSeconds": 79 }

 

However, I have found that you can adjust the "requestTimeoutMilliseconds" parameter to tell Dynatrace how long you're willing to wait for a query to be processed. When querying Grail through the API, you can edit the curl call as follows:

curl -X 'POST' \
  'https://[yourenvironment]/platform/storage/query/v1/query:execute' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer [your Bearer Token]' \
  -H 'Content-Type: application/json' \
  -d '{
  "query": "[The DQL query]",
  "defaultTimeframeStart": "2025-02-01T00:00:00Z[example, edit to fit your needs]",
  "defaultTimeframeEnd": "2025-02-20T23:59:59Z[example, edit to fit your needs]",
  "timezone": "UTC",
  "locale": "en_US",
  "maxResultRecords": 1000,
  "maxResultBytes": 1000000,
  "fetchTimeoutSeconds": 60,
  "requestTimeoutMilliseconds": [EDIT THIS ONE in order to get the full response instead of the 202],
  "enablePreview": true,
  "defaultSamplingRatio": 100,
  "defaultScanLimitGbytes": 100,
  "queryOptions": null,
  "filterSegments": null
}'
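Since you are calling the API from Python, the same request with a longer timeout could look roughly like this (a sketch with the same placeholders as the curl above; 60000 ms is just an example value):

import requests

url = "https://[yourenvironment]/platform/storage/query/v1/query:execute"
headers = {
    "accept": "application/json",
    "Authorization": "Bearer [your Bearer Token]",
    "Content-Type": "application/json",
}
payload = {
    "query": "[The DQL query]",
    "defaultTimeframeStart": "2025-02-01T00:00:00Z",
    "defaultTimeframeEnd": "2025-02-20T23:59:59Z",
    "timezone": "UTC",
    "locale": "en_US",
    "maxResultRecords": 1000,
    "maxResultBytes": 1000000,
    "fetchTimeoutSeconds": 60,
    # Raise this so Dynatrace waits long enough to return 200 with the full
    # result instead of a 202 with a requestToken.
    "requestTimeoutMilliseconds": 60000,
    "enablePreview": True,
    "defaultSamplingRatio": 100,
    "defaultScanLimitGbytes": 100,
    "queryOptions": None,
    "filterSegments": None,
}

resp = requests.post(url, headers=headers, json=payload)
print(resp.status_code)
print(resp.json())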

 

It is entirely possible that you may be able to work around your issue by doing this.

To make an error is human. To spread the error across all servers in an automated way is DevOps.

Hi @PedroSantos, that is exactly what I did to resolve this issue 🙂 I played around with both the "requestTimeoutMilliseconds" and "maxResultRecords" parameters and was able to get a finished response.
