A growing body of evidence suggests that Marathon sometimes fails to clean up fully after itself when the test suite runs, and this leads to instability in our tests.
One example can be found in the failure rate for our `ForwardToLeader` tests.
The input for this report was builds 938 to 1175, which shows a small cluster of failures in the middle of the dataset. An informal scan of the HEADs for each failing build does not surface any significant code change to which we can attribute the cluster; it seems entirely unrelated to code changes. However, there IS a 4 hour idle period between build 1099 and the next successful build.
We are not entirely sure why there was such a gap (the best hypothesis is that the Jenkins master had issues). The slave may have timed out from sitting idle during this period, or it may have been destroyed and recreated as part of some recovery action. These are guesses at this point, but the best one (and the data could support it) is that some event cleared the accumulated junk from the Jenkins slave's state.
This will take some investigative work to solve. It would help to collect data: for example, capture the output of `ps aux` (and perhaps other state) after each build. If that data proves there is a leak, we can either fix it, or add a pre-flight stage that detects leaked processes from a prior run and kills them.
Or, we could simply recycle our Jenkins slaves more often.
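The data-collection and pre-flight ideas above could be sketched as a small script hooked into the build job. This is only a sketch under assumptions: `LOG_DIR` and `LEAK_PATTERN` are hypothetical names, and the pattern used to identify leaked test processes would need to be set to whatever our suite actually spawns.

```shell
#!/usr/bin/env bash
# Sketch: post-build process snapshots plus a pre-flight leak check.
# LEAK_PATTERN is a placeholder; it must match the processes our tests spawn.
set -u

LOG_DIR="${LOG_DIR:-/tmp/build-process-logs}"
LEAK_PATTERN="${LEAK_PATTERN:-marathon}"   # hypothetical pattern

mkdir -p "$LOG_DIR"

# After each build: snapshot the process table so leaks can be diffed later.
snapshot() {
  ps aux > "$LOG_DIR/ps-$(date +%Y%m%d-%H%M%S).txt"
}

# Before each build: look for processes left over from a prior run and kill them.
preflight() {
  leaked=$(pgrep -f "$LEAK_PATTERN" || true)
  if [ -n "$leaked" ]; then
    echo "Leaked processes detected: $leaked"
    # shellcheck disable=SC2086  # word-splitting the PID list is intentional
    kill $leaked 2>/dev/null || true
  fi
}

case "${1:-}" in
  snapshot)  snapshot ;;
  preflight) preflight ;;
  *)         echo "usage: $0 {snapshot|preflight}" ;;
esac
```

The job would call `snapshot` in a post-build step and `preflight` before each run; diffing consecutive snapshots should show whether processes are accumulating between builds.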