The first month of Google Summer of Code

Already a month has passed since the start of Google Summer of Code 2014. As previously mentioned, I had developed a test case that exhibits a failure so that we can detect it. In the past month, most of my time was spent familiarizing myself with the existing mochitest harness and researching mochitest-plain, browser-chrome, and mochitest-chrome. Mochitest-plain tests are normal HTML tests that run without elevated privileges, while chrome tests run with elevated (chrome) privileges. Browser-chrome tests are similar to chrome tests, but they drive and test the browser's user interface (UI). Also, chrome tests depend on harness.xul while browser-chrome tests depend on browser-harness.xul, and browser-test-overlay.xul provides an overlay shared by both types of tests. These files are the entry points to the harness and handle the UI of the tests.

Next, I learnt about test chunking (how the tests are split into groups), manifest filtering (selecting which tests to run based on the command-line options), and log parsing. It has been an interesting and humbling experience to learn about the different pieces that hold a framework together. Then I started working on Bug 992911. Until now, mochitests have been run per manifest; this bug deals with running mochitests per directory and displaying the cumulative results (passed, failed, todos) at the end. The advantage of doing this is that we can determine how many tests run in each directory, and we strike a good balance between runtime and test isolation. Joel Maher had already coded the basic algorithm for this functionality, but there was a problem with log parsing and summarization. On his advice, I fixed up the log parsing so that we correctly display the cumulative number of pass/fail/todo tests, and I also added mach support for the '--run-by-dir' option. The patch is almost done, and we will have this functionality very soon.
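The core idea of run-by-dir can be sketched in a few lines of Python: group the tests by their containing directory, run each group as one harness invocation, and add up the per-run counts into one cumulative summary. The function and variable names below are purely illustrative, not the actual harness API.

```python
from collections import defaultdict

def group_tests_by_dir(test_paths):
    """Group test file paths by their containing directory.

    Each directory then becomes one harness invocation, which is
    the essence of running mochitests per directory. Illustrative
    sketch only; the real harness works on manifests, not bare paths.
    """
    groups = defaultdict(list)
    for path in test_paths:
        directory = path.rsplit("/", 1)[0] if "/" in path else "."
        groups[directory].append(path)
    return dict(groups)

def summarize(per_run_results):
    """Accumulate per-directory (passed, failed, todo) tuples into one
    cumulative summary, as the fixed log parsing reports at the end."""
    totals = {"passed": 0, "failed": 0, "todo": 0}
    for passed, failed, todo in per_run_results:
        totals["passed"] += passed
        totals["failed"] += failed
        totals["todo"] += todo
    return totals
```

Grouping by directory (rather than by manifest or one giant run) is what gives the runtime/isolation trade-off mentioned above: each directory gets a fresh run, but tests within it still share one.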
In the coming weeks, I will start working on the tool for bisecting the chunks to find the point at which a test starts failing. Stay tuned!
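The bisection idea can be sketched as a binary search over the tests in a chunk: repeatedly run one half of the remaining tests, keep whichever half still reproduces the failure, and stop when a single test is left. The `reproduces` callback below is a hypothetical stand-in for "run this subset through the harness and check whether the failure occurs"; the actual tool's interface may differ.

```python
def bisect_chunk(tests, reproduces):
    """Narrow a failing chunk down to a single culprit test.

    `reproduces(subset)` is assumed to run the given subset (in the
    real tool, typically together with the known-failing test) and
    return True if the failure still occurs. Illustrative sketch only.
    """
    while len(tests) > 1:
        mid = len(tests) // 2
        first_half = tests[:mid]
        if reproduces(first_half):
            tests = first_half          # failure is in the first half
        else:
            tests = tests[mid:]         # otherwise it must be in the second
    return tests[0]
```

With n tests in a chunk, this needs only about log2(n) harness runs instead of n, which matters when each run takes minutes.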