We are facing an issue viewing test results in the Load Test Overview report. We ran unit tests with different versions (e.g. 5.0 and 6.0) and they are visible in the Test Overview dashlet, but when we try to view/analyze the load test results we cannot see the tests that were executed. We tried with filters, but still no data is displayed. Please help us.
The question is: are you executing Unit Tests, or are you executing HTTP-based Load Tests?
The Load Testing Dashboard is meant for analyzing HTTP-based load tests executed with tools such as LoadRunner, SilkPerformer, JMeter, BlazeMeter, ... Please have a look at my YouTube tutorial on Load Testing with Dynatrace.
If you really run individual unit tests - tests that are executed once per build and exercise a small portion of your code - then the Test Automation integration you have shown in your screenshot is the right approach. You can double-click on these bars, which brings you to the Test Results dashlet. In this dashlet you see the individual test results for each version, and from there you can compare individual test runs. This works for your unit tests (JUnit, NUnit) and for "Web API/REST API tests" using tools such as JMeter. Check out my tutorial on this as well: Test Automation.
These tests WILL NOT show up in the Load Testing Dashboard. The Load Testing Dashboard only shows web requests that carry a special HTTP header named "dynatrace". We offer this as an integration option for load testing tools.
Please have a look at my tutorials and feel free to also check out the online documentation on Load Testing Integration with Dynatrace.
The question is: "How many web service calls does your unit test execute?" Just one? Hundreds? Thousands? If your unit test is in fact executing A LOT of web requests, then you might want to add this special HTTP header to those calls and look at the Load Testing Dashboard instead of the unit test results in the Test Automation dashlet.
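As a rough sketch of what adding that header looks like from test code (Python; the URL, test-case name, and virtual-user label are all placeholders, and the NA/VU field layout follows the X-Dynatrace sample format shown in this thread):

```python
# Minimal sketch: tag each web request a test issues so Dynatrace can pick it
# up in the Load Testing Dashboard. Field names NA (test-case/timer name) and
# VU (virtual user) follow the X-Dynatrace sample format; URL is a placeholder.
import urllib.request

def dynatrace_header(test_case: str, virtual_user: str) -> str:
    """Build the header value as semicolon-separated key=value pairs."""
    return f"NA={test_case};VU={virtual_user}"

req = urllib.request.Request(
    "http://myapp.example.com/api/orders",  # placeholder endpoint
    headers={"X-Dynatrace": dynatrace_header("TestCase1", "TestForBuild1")},
)
# req is not sent here; pass it to urllib.request.urlopen(req) in a real test
print(req.get_header("X-dynatrace"))  # urllib normalizes the header key
```

Whatever HTTP client your tests use, the idea is the same: attach the header to every request the test fires so those PurePaths are tagged.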
We want to compare unit test results, e.g. 5.0 vs 6.0 (testresults.png). In the Test Results dashlet I can see both tests, but I don't see any comparison such as response time, improved runs, etc. Can you please let me know how to do this? In the end we want to see which version is preferable in terms of performance.
In your screenshot you can see two dots for your DB Count measure. Each dot represents a test run against a different build. You can simply double-click the second dot, which opens a comparison dashboard between that test and the test from the previous build.
If you want to see response times you need to specify the test category "performance" instead of "unit" when you register your test runs. Depending on the test category, Dynatrace looks at different metrics. Here is an overview of test categories and the metrics we baseline: https://community.dynatrace.com/community/display/...
As I said - you specify the category when registering a test run through the REST interface: https://community.dynatrace.com/community/display/DOCDT62/REST+Interfaces+for+Test+Automation
Thanks. I could see PurePath duration after setting the category to "performance".
Is there any way to see at a glance which version performs better when we run particular tests multiple times? I ran 10 tests for one version and the same for the other. I would like to see which one is performing better (samplereport.png).
How do we configure the report to show the test results as in the attached file (it contains a sample test run on different versions)?
If you run the same test multiple times, I would say this is more like a little "load test". Our Test Automation framework is able to analyze individual tests but not load tests. We do, however, have a cool feature that gives you exactly what you want - but it is not in the Test Automation dashlet.
You can run your first set of tests using our Load Testing integration option. This is basically the same HTTP header that you already use, but you do not specify the TestRunID. Let's say you have 10 PurePaths for that "mini load test". Now you run the same test against the next version and again have 10 PurePaths. The cool thing is that you can now COMPARE these 20 PurePaths with each other. The best practice is to either store these PurePaths in separate session files and compare the two sessions, OR tag your two tests with one of the Load Testing HTTP header fields, e.g. use the VU or TE field to specify "TestsForBuild1" or "TestsForBuild2". Here is a sample HTTP header: X-Dynatrace: NA=TestCase1;VU=TestForBuild1
You will end up seeing these requests in the Tagged Web Requests dashlet. You can configure that dashlet to show not only the name of your test (the NA field) but also the virtual user name, so you get a nice table with an overview of all captured PurePaths by test name and per virtual user. You can then compare it with your second set of tests by specifying a comparison data source on your dashboard with a Tagged Web Request filter on header field "TE" with the value "TestForBuild1"; for the regular source you specify the same filter but with "TestForBuild2".
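A sketch of how the two test sets might be labeled from the driving script (Python; the test-case name and build labels are placeholders, and the TE field is used to carry the build label as suggested above):

```python
# Sketch: label each of the 10 runs per version with its build so the Tagged
# Web Requests dashlet can filter the two sets apart. TE carries the build
# label here; "TestCase1" and the build names are placeholders.
def tagged_header(test_case: str, build_label: str) -> str:
    return f"NA={test_case};TE={build_label}"

headers_sent = []
for build in ("TestForBuild1", "TestForBuild2"):
    for run in range(1, 11):                  # 10 runs per version
        header = {"X-Dynatrace": tagged_header("TestCase1", build)}
        # pass `header` to the HTTP client that drives this run, e.g.:
        # urllib.request.Request(url, headers=header)
        headers_sent.append(header["X-Dynatrace"])

print(len(headers_sent))  # 20 tagged runs in total
```

With that in place, the dashboard-side filter on TE = "TestForBuild1" versus TE = "TestForBuild2" splits the 20 PurePaths into the two comparable sets.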
I recommend that you check out the following docs & videos:
Hope this helps