We are trying to implement Dynatrace AppMon in our continuous build and delivery environment with Jenkins. We have installed the Dynatrace plugin for Jenkins.
It works fine for unit tests (JUnit), but we face issues with our integration tests, where we want to reuse the LoadRunner scripts from our performance tests. The first concern is that on the first execution after a deployment, response times are very poor. Is there a way to aggregate several executions into a single measure (averaging the different runs that share the same test UUID)? We currently work around this by running a first, uninstrumented execution of the tests to warm up the server.
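Since there does not appear to be a built-in way to average across runs, the aggregation could also be done on the Jenkins side before reporting. This is a minimal, hypothetical sketch (not part of the Dynatrace plugin) that groups response times by test UUID and averages them:

```python
from collections import defaultdict
from statistics import mean

def aggregate_by_test_uuid(results):
    """Average response times per test UUID across several executions.

    `results` is a list of (test_uuid, response_time_ms) tuples
    collected from repeated runs of the same LoadRunner scenario.
    """
    grouped = defaultdict(list)
    for test_uuid, response_time in results:
        grouped[test_uuid].append(response_time)
    return {uuid: mean(times) for uuid, times in grouped.items()}

# Three executions of the same test: the first (cold) run is slow,
# so averaging smooths out the post-deployment warm-up effect.
runs = [("tx-login", 950.0), ("tx-login", 120.0), ("tx-login", 110.0)]
averages = aggregate_by_test_uuid(runs)
```

This only smooths the reported numbers; it does not remove the warm-up effect itself, so the uninstrumented warm-up run may still be worth keeping.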
The second issue is more problematic: in our VUGen scripts, we use web_reg_find calls to ensure that the responses we get are functionally valid. When we do not get the expected response, the VUGen test fails and stops, but from a Dynatrace point of view the test passes, because we still receive an HTTP 200 response. There is also no obvious sign of failure when looking at the Jenkins project build status.
We would like to be able to mark the test as failed when the web_reg_find fails. This would be easy with a REST call passing the test UUID from the VUGen script. Does Dynatrace AppMon 6.3 offer such an interface? (I could not find anything like this, but maybe there are some undocumented REST calls?)
A very dirty workaround I can think of would be to reuse the same X-dynatrace header (TN and TR) on a URL that does not return 200 (RC), which should result in a test failure. But this does not mark the original test itself as failed...
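The dirty workaround could be sketched as follows. Note that the exact X-dynatrace tag format (TN/TR key-value pairs) and the failing endpoint `/force-failure` are assumptions used for illustration; verify the header contract against your AppMon version before relying on it:

```python
import urllib.request

def build_dynatrace_header(test_name, test_run_id):
    # TN (test name) and TR (test run) tags as described in the post;
    # the exact semicolon-separated format is an assumption to verify
    # against the AppMon documentation for your version.
    return "TN={};TR={}".format(test_name, test_run_id)

def flag_test_failed(base_url, test_name, test_run_id):
    # Request a URL known to return a non-200 code while carrying the
    # same X-dynatrace tag, so AppMon records a failed request under
    # that test run. "/force-failure" is a hypothetical endpoint.
    req = urllib.request.Request(
        base_url + "/force-failure",
        headers={"X-dynatrace": build_dynatrace_header(test_name, test_run_id)},
    )
    # urlopen raises HTTPError on non-2xx, which the caller can ignore:
    # the point is only that AppMon sees the failing tagged request.
    return urllib.request.urlopen(req)

header = build_dynatrace_header("tx-login", "run-42")
```

As noted above, this leaves the original test marked as passed; it only adds a second, failing entry under the same tag.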
Does anyone have a solution to this problem? I don't think it is addressed on the "Integrate Web API Performance Monitoring in JMeter/SoapUI" page.
Thanks in advance for sharing your experience on this.
Great question and thanks for the details to describe your scenario. Here are some quick answers:
#1: There is currently no REST API to mark a test as failed AFTER it was executed. Our intention was that your functional testing tool does the functional verification, and Dynatrace then provides the overview of architectural validation, scalability and performance. If you push all of these metrics to Jenkins, you should be able to say: functional check failed in the testing tool -> no need to look at Dynatrace; OR functional check passed -> now let's look at whether Dynatrace found a regression.
#2: The only option to fail a WebAPI test right now is to specify an Expected HTTP Return Code. If the actual code doesn't match, we mark the test as failed.
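On the Jenkins side, the same rule can be reproduced in a build step to fail the build consistently with what AppMon reports; a minimal sketch (not AppMon code, just the comparison it describes):

```python
def webapi_test_failed(actual_code, expected_code=200):
    # Mirror the AppMon rule: the WebAPI test is marked as failed
    # whenever the actual HTTP return code differs from the expected one.
    return actual_code != expected_code
```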
#3: I think your request for such a REST API is perfectly valid. We actually discussed such a REST API with the product team just recently, e.g. if Jenkins tells you that the build is bad because of a problem found somewhere else, then you may also want to mark ALL tests, or certain tests, as failed so that Dynatrace has the same type of information.
I will forward this posting to our product team - we are very eager to learn how our users use our Test Automation integration and which use cases are missing.
In addition to what Andi said, I want to let you know that we are already looking into providing a REST API to mark WebAPI tests as failed. This should make it into the Spring 2017 release.