Our code base is pretty big. It takes 30 minutes to compile and run the unit tests, and that does not include building the installers. Recently there have been constant failures in the server unit tests. There are two sets of tests. One set creates a Derby (Java) database and uses Hibernate to perform the actual unit tests. The other set consists of normal unit tests that use mock objects. The unit tests fail randomly because of problems between the real DAOs and the mock DAOs, yet all of them pass when they are run separately.
1) Batch any test cases inheriting DAOTestCase together, and batch any test cases inheriting BaseTestCase together.
Explanation (from the Ant junit task documentation for the forkmode attribute, since Ant 1.6.2): Controls how many Java Virtual Machines get created if you want to fork some tests. Possible values are "perTest" (the default), "perBatch" and "once". "once" creates only a single Java VM for all tests while "perTest" creates a new VM for each TestCase class. "perBatch" creates a VM for each nested <batchtest> and one collecting all nested <test>s. Note that only tests with the same settings of filtertrace, haltonerror, haltonfailure, errorproperty and failureproperty can share a VM, so even if you set forkmode to "once", Ant may have to create more than a single Java VM. This attribute is ignored for tests that don't get forked into a new Java VM.
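As a rough illustration of the grouping (not our actual build code), the two batches could be made explicit as JUnit suites that the build file points at; two <batchtest> elements with matching include patterns would achieve the same split. All class names in this sketch are made up.

```java
// DaoTestSuite.java -- hypothetical sketch only, not code from our build.
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class DaoTestSuite {

    // Placeholder test so the sketch compiles on its own; in the real code base
    // the entries would be the existing classes that extend DAOTestCase.
    public static class ExampleDaoTest extends TestCase {
        public void testSomethingAgainstDerby() {
            assertTrue(true);
        }
    }

    /** One batch: every test that needs the Derby database and Hibernate. */
    public static Test suite() {
        TestSuite suite = new TestSuite("DAO tests (Derby + Hibernate)");
        suite.addTestSuite(ExampleDaoTest.class);
        // ...add the remaining DAOTestCase subclasses here...
        return suite;
    }
    // A sibling MockTestSuite would do the same for the BaseTestCase subclasses.
}
```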
2) Reduce the number of test cases that require an external database. Looking into it, many of the test cases that require the external database are not actually testing the DAO classes. In other words, those test cases are not of very high quality and could be reworked so they don't use the external DB at all.
Explanation: I agree with your assumption that the non-DAO tests that are going against the database are of lower quality and should be corrected so that they are more isolated in their scope, i.e. no database interaction at all. My philosophy is to avoid the database at all costs, because it is expensive with respect to time.
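To make point 2 concrete, here is a hedged sketch of what a corrected test might look like once it no longer touches the database: the collaborating DAO is replaced by a stub that returns canned data, so no Derby or Hibernate setup is needed. All the type names here are hypothetical stand-ins, not classes from our code base.

```java
import junit.framework.TestCase;

import java.util.Arrays;
import java.util.List;

public class CustomerServiceTest extends TestCase {

    // Minimal stand-ins for the real production types, defined inline so the
    // sketch is self-contained.
    interface CustomerDao {
        List<String> findActiveCustomerNames();
    }

    static class CustomerService {
        private final CustomerDao dao;
        CustomerService(CustomerDao dao) { this.dao = dao; }
        int countActiveCustomers() { return dao.findActiveCustomerNames().size(); }
    }

    public void testActiveCustomersAreCounted() {
        // Stub DAO returns canned data: no Derby, no Hibernate session.
        CustomerDao stubDao = new CustomerDao() {
            public List<String> findActiveCustomerNames() {
                return Arrays.asList("Alice", "Bob");
            }
        };
        assertEquals(2, new CustomerService(stubDao).countActiveCustomers());
    }
}
```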
3) Introduce JUnit 4, which lets us initialize the database and Hibernate once per test class or test suite rather than for each test case. That would substantially cut the time we spend setting up and tearing down databases, make the unit tests faster, and consequently cause fewer problems.
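A minimal sketch of how point 3 could look, assuming a plain Hibernate setup with a hibernate.cfg.xml on the classpath: with JUnit 4 the expensive database/Hibernate initialization runs once per test class via @BeforeClass (and can be hoisted to once per suite with a suite runner) instead of running in setUp() before every test method.

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class CustomerDaoJUnit4Test {

    private static SessionFactory sessionFactory;

    @BeforeClass
    public static void startDatabaseOnce() {
        // Built once for the whole class, not in setUp() for every test method.
        sessionFactory = new Configuration().configure().buildSessionFactory();
    }

    @AfterClass
    public static void stopDatabaseOnce() {
        sessionFactory.close();
    }

    @Test
    public void canOpenSession() {
        Session session = sessionFactory.openSession();
        assertNotNull(session);
        session.close();
    }
}
```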
4) Use an in-memory database for cruise testing and the Derby database for daily testing. The rationale is the same as for 3).
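One possible way to wire point 4, sketched here with made-up names: let the build pass a system property that selects the JDBC URL, so the cruise build gets an in-memory database while daily testing keeps the on-disk Derby. The property name and both URLs below are examples only, not our actual configuration.

```java
// Hypothetical helper: picks the test database from a system property,
// e.g. run the cruise build with -Dtest.db=memory and the daily build without it.
public final class TestDatabaseConfig {

    private TestDatabaseConfig() {
    }

    public static String jdbcUrl() {
        if ("memory".equals(System.getProperty("test.db"))) {
            return "jdbc:hsqldb:mem:amt_tests";       // in-memory, thrown away after the run
        }
        return "jdbc:derby:amt_tests;create=true";    // embedded on-disk Derby for daily testing
    }
}
```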
5) Temporarily use forkmode="perTest" to stabilize the current AMT build.
Cons: I realize from our discussions yesterday that 5) will add time to the build, but it may be a way to bridge the problem while we try to correct the root cause. However, there is a risk of the "oh, it's working now, let's not touch it" syndrome sneaking in and stopping us at this stage before we fix what is really going on.
6) The current AMT cruise build is 30 minutes long, which definitely drags down productivity. We should try to reduce the time by multiple means, such as splitting Sentry and Ardmore out of the frameworks.
Pros: Yes, the 30-minute build is something truly amazing in terms of time. Please, if you find some time, present me with some good arguments for different mechanisms for restructuring the build that could make it more efficient. We can then review them and see what we can do going forward.
Well, going by lean software development principles, we should probably stop and fix this problem now.
So what it comes down to is this: I believe we should be fixing the build, at least to get rid of this recurring problem with failing tests in the framework. We need to work some more build stories into the iterations, and therefore I believe we have to prepare some compelling arguments, with solid goals and at least a rough plan of how to get there, to present to management so we can encourage them to "buy" some build improvements.