In the last two weeks, I have started coding for the “Mochitest Failure Investigator” GSoC project (Bug 1014125). The work done in these two weeks:
- Added mach support for `--bisect-chunk` (this option lets a user explicitly provide the name of the failing test). This helps with faster local debugging.
- Wrote a prototype patch implementing two algorithms: Reverse Search and Binary Search. As the name suggests, Reverse Search splits all the tests before the failing test into 10 chunks and iterates over each chunk until the failing chunk is found. Once the failing chunk is found, each test in that chunk is run in turn to determine the test causing the failure. Binary Search, on the other hand, splits the tests into halves and recursively iterates over each half to find the failure point.
- The mochitest test harness only supported sequential test filtering, not arbitrary subsets: if we needed to run “test1”, “test2” and “test99”, we could not do that, and had to run all the tests from 1 to 99. So I initially implemented the algorithms to run tests sequentially, but this was not optimal, as a lot of unnecessary tests were run again and again.
- Next, I optimized both search methods and added support for running an arbitrary subset of tests in the mochitest test harness. This was done by filtering the tests that were added to tests.json when the profile was created.
- I refactored the patch following :jmaher’s recommendations and made the code more modular. I then tested the patch on the try server, using the sample test cases I had initially developed, for mochitest-plain, mochitest-chrome and browser-chrome tests.
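To make the two strategies concrete, here is a minimal, self-contained sketch of how the reverse search and binary search bisection could work. Everything here is a stand-in: `run_chunk()` and the test names are hypothetical, not the actual mochitest harness API, and the "bad test" is simulated so the sketch runs on its own.

```python
# Hypothetical sketch of the two bisection strategies; run_chunk() and the
# test names are stand-ins, not the real mochitest harness API.

BAD_TEST = "test42"  # simulated "bleedthrough" test that breaks later tests

def run_chunk(tests):
    """Pretend test runner: the chunk passes unless the bad test is in it."""
    return BAD_TEST not in tests

def reverse_search(tests, failing_index, chunks=10):
    """Split the tests before the failing one into chunks, scan the chunks
    starting from the one nearest the failing test, then narrow down the
    failing chunk one test at a time."""
    candidates = tests[:failing_index]
    failing = tests[failing_index]
    size = max(1, len(candidates) // chunks)
    for start in reversed(range(0, len(candidates), size)):
        chunk = candidates[start:start + size]
        if not run_chunk(chunk + [failing]):
            # Failing chunk found: re-run it one test at a time.
            for i in range(len(chunk)):
                if not run_chunk(chunk[:i + 1] + [failing]):
                    return chunk[i]
    return None

def binary_search(tests, failing_index):
    """Recursively halve the candidate tests, re-running each half together
    with the failing test, until a single culprit remains."""
    candidates = tests[:failing_index]
    failing = tests[failing_index]
    while len(candidates) > 1:
        mid = len(candidates) // 2
        first, second = candidates[:mid], candidates[mid:]
        if not run_chunk(first + [failing]):
            candidates = first
        elif not run_chunk(second + [failing]):
            candidates = second
        else:
            return None  # failure does not reproduce in either half
    if candidates and not run_chunk(candidates + [failing]):
        return candidates[0]
    return None

tests = ["test%d" % i for i in range(1, 100)]  # test1 .. test99
print(reverse_search(tests, 98))  # test42 makes test99 fail
print(binary_search(tests, 98))
```

Both strategies find the same culprit; the trade-off is in how many chunks they run, which is why their relative cost depends on where the bleedthrough test sits.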
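The subset-filtering idea can be sketched as well. This assumes a simplified tests.json layout with a list of entries keyed by `path`; the real manifest schema is richer, so treat this purely as an illustration.

```python
# Assumed, simplified tests.json structure; the real manifest has more fields.
import json

def filter_tests(manifest_json, wanted):
    """Keep only the requested tests so an arbitrary subset can be run."""
    manifest = json.loads(manifest_json)
    manifest["tests"] = [t for t in manifest["tests"] if t["path"] in wanted]
    return json.dumps(manifest)

# Build a fake manifest of 99 tests and keep a non-contiguous subset.
manifest = json.dumps({"tests": [{"path": "test%d" % i} for i in range(1, 100)]})
filtered = json.loads(filter_tests(manifest, {"test1", "test2", "test99"}))
print([t["path"] for t in filtered["tests"]])  # ['test1', 'test2', 'test99']
```

Filtering at manifest-creation time means the harness itself needs no knowledge of the bisection logic; it simply runs whatever tests remain in the manifest.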
The results on try were fantastic.
A typical binary search bisection algorithm on try looks like this:
A typical reverse search algorithm looks like this:
The “TEST-BLEEDTHROUGH” annotation shows the test responsible for the failure. As we would expect, reverse search performs better than binary search when the failure point is close to the failing test, and vice versa. The bisection algorithms took about 20 minutes on average to compute the result.
How is all of this useful?
Contributors at Mozilla spend a large amount of effort investigating test failures. This tool will increase productivity, save on average 4-6 hours of tracking a failing test down, and reduce the number of unnecessary try pushes. Once this tool is hooked up on try, it will monitor the tests, and as soon as the first failure occurs, it will bisect and find the failure point. We can also use it to validate new tests and reduce intermittent problems, by adding tests in chunks and verifying whether they pass, and if not, which existing tests affect the test to be added. It can also help find the reason for a timeout or crash of a chunk.
It has been quite exciting to tackle mochitest problems with :jmaher; he is an amazing mentor. In the coming weeks, I will be working on making the tool support intermittent problems and incorporating the logic of auto-bisection. Happy hacking!