08 Nov 2024 08:12 AM
Hi,
We have a use case where we want to test an end-to-end feature with a synthetic monitor (creating a request in a web portal / filling a form and sending it / automatic batch processing via a 3rd-party tool / checking that the request is "done" / verifying that the object has been created / deleting the request / deleting the object).
My concern is that the batch processing after the request is sent can take between 10 and 30 minutes.
Our first idea was to schedule every hour:
- schedule a 1st scenario (request filling + creation of the request) at xx:00
- schedule a 2nd scenario (check that the request is done, verification of the object / deletions) at xx:40
We tried to schedule the 2 scenarios like this, but unfortunately it doesn't seem possible to schedule at a defined time with the usual synthetic setup.
What we have done instead is, via a workflow, trigger the 1st scenario by "on-demand" execution at xx:00 and then the 2nd scenario via on-demand execution at xx:40. This works fine and suits the need.
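(For reference, such a workflow can trigger each run through the Synthetic on-demand executions API. A minimal sketch, where the environment URL, token, scope name, and monitor ID are placeholders to double-check against the docs:)
// Minimal sketch of triggering an on-demand execution from a scheduled
// workflow via the batch executions endpoint. The environment URL, the
// API token (needs the syntheticExecutions.write scope, if I recall
// correctly) and the monitor ID are placeholders.
fetch('https://{your-environment}.live.dynatrace.com/api/v2/synthetic/executions/batch', {
  method: 'POST',
  headers: {
    'Authorization': 'Api-Token ON_DEMAND_EXECUTION_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    monitors: [{ monitorId: 'SYNTHETIC_TEST-0000000000000000' }]
  })
});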
Unfortunately, when problems are created, the event timeout (dt.davis.timeout) is set to a value we can't configure (8 min).
This means that if we have an issue with the batch for a few hours:
-> at 01:40 a problem will be raised / at 01:48, the problem will be auto-closed
-> at 02:40 a problem will be raised / at 02:48, the problem will be auto-closed, etc.
According to support, dt.davis.timeout can't be configured for on-demand executions.
Maybe we are not implementing this the right way. Do you have any idea how to implement this kind of requirement and manage problems correctly?
Best regards,
Christophe
12 Nov 2024 03:55 PM
The hard timeout for Browser Monitors is 15 minutes, so having two monitors is the only option in this scenario. The event timeout for Synthetic monitors running on demand unfortunately cannot be changed. I think it would make a good Product Idea to be able to set it for on-demand-frequency monitors (I thought it was already a Product Idea, but I can't find it currently).
I can't think of any alternatives to the way you have it set up.
14 Nov 2024 09:37 AM
Thanks Hannah for your response.
I'll open a Product Idea for timeout configuration!
22 Nov 2024 10:48 AM
Actually, there is a way to achieve the desired configuration, though it's not the most convenient one.
What I would consider is to define a browser monitor with a JavaScript event as the first step. Using custom scripting, it's possible to skip selected events.
The workaround would be to define a browser monitor with a 10-minute frequency and, in the first JavaScript event, decide (e.g. based on the current time) which part of the monitor to execute.
The JavaScript step could look like this:
var minutes = Math.floor(new Date().getMinutes() / 10); // current 10-minute slot (0-5)
if (minutes === 3) {
  api.skipSyntheticEvents([4, 5, 6, 7]); // execute the 1st scenario only
} else if (minutes === 4) {
  api.skipSyntheticEvents([2, 3]); // execute the 2nd scenario and validation
} else {
  api.skipSyntheticEvents([2, 3, 4, 5, 6]); // execute the validation step only
}
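(In this sketch, events 2-3 would be the 1st scenario, events 4-6 the 2nd scenario, and event 7 the validation; the indices have to match your actual clickpath. With this slot logic, the 1st scenario runs in the execution starting at xx:30-xx:39 and the 2nd in the one starting at xx:40-xx:49.)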
The validation step should fail the monitor in case of any issue in the 1st or 2nd scenario; that will keep the problem open.
22 Nov 2024 12:23 PM
Hi Piotr,
Thanks for the solution, you may have saved me one Workflow licence! (Indeed, we had set up this logic in a workflow and not in the scenario.) That's a very good idea. 😀
However, I'm not sure about failure management.
I don't quite understand this sentence: "The validation step should fail the monitor in case of any issue in the 1st or 2nd scenario; that will keep the problem open." Can you explain?
Christophe
22 Nov 2024 01:03 PM
The validation depends on the details of your scenario. We would need to verify whether the last executions (of both scenarios) have been successful.
If there is an option to verify that on the web portal under test, the validation event could use it, e.g. verify whether a request was created and deleted in the last 'n' minutes.
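For illustration, assuming (hypothetically) that the portal exposes the request's status somewhere in the page, such a validation event could be as simple as:
// Hypothetical validation event; the selector and the expected status
// text are placeholders for whatever the portal under test really shows.
var status = document.querySelector('#request-status');
if (!status || status.textContent.trim().toLowerCase() !== 'done') {
  // api.fail marks this execution as failed, which keeps the problem open
  api.fail('Request was not completed by the batch in the expected window');
}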
If that's not possible, I would suggest verifying the results of the monitor's previous executions via the public Metrics API and the builtin:synthetic.browser.failure.geo metric. In case the previous executions have failed, we could use the power of custom scripting again and fail the execution using the api.fail method.
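A rough sketch of that check (the environment URL, token, and monitor entity ID are placeholders, the metric selector is approximate, and it assumes the API is reachable from the monitor's execution context):
// Rough sketch: sum the failures of this monitor over the last hour via
// the Metrics API v2 and fail the current execution if any were found.
var selector = 'builtin:synthetic.browser.failure.geo'
  + ':filter(eq("dt.entity.synthetic_test","SYNTHETIC_TEST-0000000000000000"))'
  + ':splitBy():sum';
var url = 'https://{your-environment}.live.dynatrace.com/api/v2/metrics/query'
  + '?metricSelector=' + encodeURIComponent(selector) + '&from=now-1h';
var xhr = new XMLHttpRequest();
xhr.open('GET', url, false); // synchronous, to avoid async handling in the event
xhr.setRequestHeader('Authorization', 'Api-Token METRICS_READ_TOKEN');
xhr.send();
var failures = 0;
var result = JSON.parse(xhr.responseText).result[0];
if (result && result.data.length > 0) {
  result.data[0].values.forEach(function (v) { if (v) { failures += v; } });
}
if (failures > 0) {
  api.fail(failures + ' failed execution(s) in the last hour');
}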
I know it's not the easiest way to achieve the goal.