Build Run - Summary
TestReport.io
Last updated 6 months ago
Welcome to the Build Run page! This page gives you detailed insights into the results of a specific test run in two parts: Summary and Tests.
Let’s break down each of them step by step.

At the top of the page, you can see the Build Summary:

- Total Tests: The total number of tests run in this build, shown in the center.
- Passed Tests: The number of tests that passed successfully (green).
- Failed Tests: The number of tests that failed during this run (red).
- Skipped Tests: Tests that were skipped (yellow).
- Ignored Tests: Tests that were ignored for any reason (grey).
For example, in this report, 46 tests were run, with:
- 30 passed
- 3 failed
- 10 skipped
- 3 ignored
On the right side, you can see the Build History. This is a visual bar chart showing the results of recent test runs. It helps you quickly see trends in test results over time:

- Green bars: Tests that passed.
- Red bars: Tests that failed.
- Yellow bars: Skipped tests.
- Grey bars: Ignored tests.
Each bar represents a build, giving you a historical perspective of performance across multiple builds.
The Build Stability graph shows how stable your test runs have been over the last 10 runs. The line chart provides a visual representation of your test performance over time: the higher the stability percentage, the better your build performance is.

In this example, the stability is 60.29%.
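As a rough illustration, a stability percentage like this can be derived as the share of tests that passed across the recent runs. The function below is a hypothetical sketch, not TestReport.io's actual formula, and the `runs`/`window` names are assumptions:

```python
def build_stability(runs: list[dict], window: int = 10) -> float:
    """Illustrative only: percentage of tests that passed across
    the last `window` runs. Each run is a dict with 'total' and
    'passed' counts. The real product may weight runs differently."""
    recent = runs[-window:]
    total = sum(r["total"] for r in recent)
    passed = sum(r["passed"] for r in recent)
    # Guard against an empty window to avoid division by zero.
    return round(100 * passed / total, 2) if total else 0.0
```

For example, a single run with 6 of 10 tests passing would yield a stability of 60.0 under this definition.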
Test Flakiness, Always Failing, Muted Tests, & New Failures
Under the build summary, you’ll find sections for:

- Flakiness: Tests that pass in some runs and fail in others. Here, there are 14 flaky tests, meaning these tests may not be reliable.
- Always Failing: Tests that fail consistently in every build. There are 11 always-failing tests in this build.
- Muted Tests: Tests that have been intentionally ignored or silenced.
- New Failures: Tests that failed for the first time in the current run.
All of these are important indicators of test health and stability, helping you pinpoint unreliable or problematic tests.
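To make these categories concrete, here is a minimal sketch of how a per-test pass/fail history could be bucketed into them. This is an illustration of the concepts, not TestReport.io's actual classification logic, and the category strings are assumptions:

```python
def classify(history: list[str]) -> str:
    """Bucket a test by its outcome history (oldest first).
    `history` is a non-empty list of 'pass'/'fail' strings.
    Illustrative thresholds only -- the product's rules may differ."""
    if all(o == "fail" for o in history):
        return "always failing"
    # Failed in the latest run but passed in every earlier one.
    if history[-1] == "fail" and all(o == "pass" for o in history[:-1]):
        return "new failure"
    # Mixed passes and fails anywhere else in the history.
    if "fail" in history:
        return "flaky"
    return "stable"
```

Under this sketch, a test with history `["pass", "fail", "pass"]` would be flagged as flaky, while `["pass", "pass", "fail"]` would be a new failure.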
The Unique Errors section provides a breakdown of the distinct failure causes in the test run. Each error listed here represents a unique failure cause. For example, you can see errors like:

- org.openqa.selenium.TimeoutException
- org.openqa.selenium.JavascriptException
There are 150 unique errors across 10 unique failures, giving you detailed insight into what went wrong.
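Conceptually, this breakdown amounts to grouping raw failure messages by their leading exception class. The snippet below is an illustrative sketch of that aggregation (the function name and message format are assumptions, though the exception classes shown are real Selenium ones):

```python
from collections import Counter

def group_unique_errors(messages: list[str]) -> Counter:
    """Count failures per exception type, assuming each message
    starts with the exception class followed by ':' and details.
    Illustrative only -- not TestReport.io's grouping logic."""
    return Counter(m.split(":", 1)[0].strip() for m in messages)
```

For instance, two `org.openqa.selenium.TimeoutException` failures with different messages would collapse into one unique error with a count of 2.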
This section categorizes the failures in your tests. Each failure can be classified into the following categories:

- To Be Investigated: Failures that need further inspection.
- Automation Bug: Issues related to the automation scripts.
- Environment Issue: Problems caused by the test environment.
- Product Bug: Failures caused by actual bugs in the product under test.
- No Defect: If no specific issue is detected.
In this example, there are 2 failures marked as "To Be Investigated", 3 as "Automation Bug", 2 as "Environment Issue", 4 as "Product Bug", and 2 in the "No Defect" category.
At the bottom, the Run Summary provides an overview of all the test runs:

- Total Runs: The total number of tests executed in the run.
- Passed: The number of tests that passed.
- Failed: The number of tests that failed.
- Skipped: Tests that were skipped during the run.
- Ignored: Tests that were ignored.
For this build, 15 tests were run, with 7 passing and 6 failing.
Now, if you select the “Tests” tab at the top of the page, you’ll see a comprehensive breakdown of all the tests in this run.