Why is no data being added to the Test Automation dashlet?
To troubleshoot and resolve this issue:
- Make sure the Performance Warehouse is connected.
- For unit tests, make sure the Java Tests or .NET Tests Sensor Pack is enabled for your Agent Group.
- Review the Agents overview to make sure the Agent is connected to the Server and that it is assigned to the correct Agent Group.
- Make sure the measures listed on the Capture Performance Data from Tests page have the Create a measure for each agent option selected. Also see below: Why are some measures missing in the Metrics section of the Test Results dashlet?
- Make sure you are using a Pre-Production License. Test Automation is not available in Production Edition.
Why aren't comparison and drilldowns available for the selected test runs?
To troubleshoot this issue:
- Check whether session recording was enabled for the test run.
- Check whether the session recording for this test run has already been deleted.
Why are some measures missing in the Metrics section of the Test Results dashlet?
The Metrics section contains a limited set of measures. Each test category has its own, separate list of measures, matching its characteristics. You can find a list of measures assigned to each test category on the Capture Performance Data from Tests page.
A measure might not appear in the Metrics section if it is not configured to be split by Agent. Select the Create a measure for each agent option in the measure properties (Details panel, Measure Splitting section) to enable that behavior.
Why are some measures duplicated in the Metrics section of the Test Results dashlet?
We consider measures to be different if they come from Agents running on different hosts. Such measures are reported in separate rows (display the Host column in the metric table to verify). In larger CI environments where many hosts run the builds and execute the tests (a build farm), this approach can lead to unwanted measure duplication. To solve the problem, use the overridehostname Agent option, which causes the Agent to report a given host name instead of the detected one. See the Java Agent Configuration page for details.
Note that if the machines executing the builds have different performance capabilities and you force all of them to be reported as a single host, you may see shifts in performance-related metric values as consecutive test executions are reported from different hosts. That may cause unwanted alerts on measure volatility or baseline violations.
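As a rough sketch, the option is typically appended to the Agent's comma-separated option string at JVM startup. The agent library path, agent name, server address, and host name below are placeholders; verify the exact syntax for your AppMon version on the Java Agent Configuration page.

```shell
# Hypothetical JVM startup flag for the AppMon Java Agent.
# All paths and names here are examples only.
java \
  -agentpath:/opt/dynatrace/agent/lib64/libdtagent.so=name=UnitTestAgent,server=appmon-server:9998,overridehostname=buildfarm \
  -jar my-tests.jar
```

With overridehostname set to the same value on every build machine, test measures from the whole build farm are reported under a single host and no longer appear as duplicates.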
Test Automation Terms
AppMon identifies each test method of a unit test from testing frameworks such as JUnit, NUnit, and MSUnit as a test case. Every test method is listed as an individual test.
The corridor is the expected range of values for a measure in a test case.
- The corridor is the (100 - False Positive %) confidence interval of the Student's t-distribution of a measure. By default, False Positive % is 1, so the default Confidence Interval % is 99.
- The False Positive % is an AppMon term used for setting the confidence interval.
- Volatility is the Coefficient of Variation. By changing the Volatile % in the Test Automation settings, you define how high this Coefficient of Variation has to be for a test to be considered volatile.
- The calibration runs are applied every time changes are accepted.
- When accepting changes, the system behaves exactly as if the selected measurement were the first one ever observed; all existing values (for standard deviation, etc.) are discarded and only remain visible in the chart. When you select Accept Change instead of Accept Changes from Here, the first measurement outside the corridor is chosen for accepting changes.
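The two statistics above can be sketched in a few lines of Python. This is not AppMon's implementation, only an illustration of the underlying math: volatility as the coefficient of variation (standard deviation over mean), and a corridor as a t-distribution confidence interval around the mean. The sample values and the hardcoded t critical value (3.25, the two-sided 99% quantile for 9 degrees of freedom) are assumptions for the example.

```python
import math
import statistics

def coefficient_of_variation(samples):
    """Volatility: sample standard deviation over mean, in percent."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100

def corridor(samples, t_critical):
    """Confidence interval for the mean: mean +/- t * stdev / sqrt(n).
    t_critical is the Student's t quantile for the chosen confidence
    level and degrees of freedom (here passed in directly)."""
    mean = statistics.mean(samples)
    half_width = t_critical * statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - half_width, mean + half_width

# Hypothetical response times (ms) from ten runs of one test case.
response_times_ms = [120, 118, 125, 122, 119, 121, 123, 120, 124, 118]

print(round(coefficient_of_variation(response_times_ms), 2))  # 2.02
low, high = corridor(response_times_ms, t_critical=3.25)      # 99% CI, df=9
print(round(low, 1), round(high, 1))                          # 118.5 123.5
```

A new measurement falling outside the (low, high) range is the kind of event that triggers corridor-violation handling; a coefficient of variation above the configured Volatile % marks the test as volatile.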
Test Case Assignee
The test case assignee is the person responsible for the test's performance. The assignee receives notification emails when a measure for a test case stays outside the corridor for the number of tests configured in the System Profile - Test Automation settings.
One execution of a test case, e.g. a unit test.