I have recently been writing about big theoretical discussions. While I think those are important, the other part of writing blog posts is giving back to the community. This post covers best practices for testing Cappuccino and other Objective-J projects. I have been doing this for several months now and (hopefully) can write something reasonably authoritative on the subject. I hope this helps those new to Cappuccino and/or testing!

The culture of testing in the Cappuccino community has been pretty weak. However, mindshare has been slowly but steadily increasing. Writing good unit tests is essential to the growth and acceptance of that testing culture. I am going to present some best practices that should help the newly minted tester get into the testing mindset.

OJTest Suite

First, you need to know the tools. The OJTest repository contains all[1] of the tools necessary to effectively test Cappuccino applications. In this repository there are two particularly important frameworks: OJUnit and OJMoq. OJUnit is an xUnit style of framework that we will use to actually run our tests. OJMoq is a mocking framework that will allow us to break dependencies in tests.

In order to run a test, you need to issue the following command

ojtest SomeTest.j

where SomeTest.j is a test that follows the conventional test setup

@implementation SomeTest : OJTestCase

- (void)testThatSomeTestWorks
{
  [self assertTrue:YES];
}

@end

(we will get more into the conventions of OJTest in a second). In the above code, we are creating a test case called SomeTest. In OJTest, a test case is actually a group of tests, and there is usually a one-to-one mapping of classes to test cases. For example, SomeTest.j would probably be testing a class called Some that is contained in Some.j.
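
To make that mapping concrete, here is a minimal sketch of what Some.j might look like next to its test; the class and its isValid method are hypothetical, just enough to give a test something to assert against.

@import <Foundation/Foundation.j>

@implementation Some : CPObject

- (BOOL)isValid
{
  return YES;
}

@end

A test in Test/SomeTest.j could then exercise it directly with [self assertTrue:[[[Some alloc] init] isValid]].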

OJTest Conventions

There are some conventions that we normally use in the testing environment for Objective-J. These conventions make the development of additional tools (like OJAutotest) possible and allow for easier navigation and readability by other Cappuccino developers.

The first convention is where we put our tests. We should put our tests into a folder called Test in the top level directory. For example,

AppController.j
SomeClass.j
AnotherClass.j
Test/
- AppControllerTest.j
- SomeClassTest.j
- AnotherClassTest.j

You will also notice that in the above example, each test corresponds to a single file (e.g. AppController.j -> AppControllerTest.j) and that tests are identified by appending Test to the end of the file's name. You should also note that it is a de facto convention that SomeClass.j contains a single class called SomeClass.

The last and most important convention is that the names of all of your tests must begin with "test". This is important because that is how the test runner identifies the tests it needs to run. We can also leverage this for our own purposes: any method whose name does not begin with "test" is ignored by the runner, which allows us to create helper methods.

@implementation SomeTest : OJTestCase

- (void)testThatSomeTestWorks
...

- (Some)buildASome
{
  return [[Some alloc] init];
}

@end

In the above example, testThatSomeTestWorks will be run by the test runner but buildASome will not.

OJTest Techniques

There are a variety of techniques that can be used to effectively test your Cappuccino and/or Objective-J projects. These techniques are best practices that I have found when testing over the past few months and are not as concrete as the descriptions above. Hopefully, they will be a good starting point for many developers starting to test applications.

Assertions

Assertions are the key technique used in unit testing. Assertions are methods on OJTestCase and can be used in the following manner:

[self assertTrue:YES];
[self assertFalse:NO];
[self assert:expected equals:actual];
...

For a full list of available assertions, you can look at the class documentation on OJTest. They can also be seen in the source.

Small Tests

Make your tests small. This is really an xUnit best practice, but in a dynamic language such as Objective-J it is all the more important. Small tests give you the granularity to quickly pinpoint problems in your production code. A good rule of thumb (though this is definitely not always the case!) is one assertion per test.
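
As an illustration, here is a sketch of two small tests; the Person class and its initializer are hypothetical, but the shape is the point: each test sets up one scenario and makes one assertion.

@implementation PersonTest : OJTestCase

- (void)testThatFullNameJoinsFirstAndLastName
{
  var person = [[Person alloc] initWithFirstName:@"Ada" lastName:@"Lovelace"];
  [self assert:@"Ada Lovelace" equals:[person fullName]];
}

- (void)testThatFullNameFallsBackToFirstNameAlone
{
  var person = [[Person alloc] initWithFirstName:@"Ada" lastName:nil];
  [self assert:@"Ada" equals:[person fullName]];
}

@end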

Mocking

Mocking is one of the more controversial techniques in testing communities across a variety of languages. However, I find that mocking can be useful in a pragmatic way. I usually use a technique that involves mocking only dependencies at a distance greater than 1 from the class under test (I refer to this as the "1plus" mocking technique). This preserves some of the mini-integration benefits of instantiating everything while keeping the locality benefits of mocking.

As an example, picture a dependency graph in which we are attempting to test ClassUnderTest, which has three immediate dependencies: Database, View, and MathUtility. With the 1plus technique, we would instantiate the MathUtility and View classes (they have no external dependencies). We would also instantiate the Database class, but we would mock its two dependencies (LocalStorage and Connection). As with any rule, though, there are exceptions. If, for example, MathUtility cannot be instantiated in a test (which can happen for a variety of reasons), then we should prefer a dynamic mock over a fake.
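
Here is a rough sketch of what that looks like with OJMoq. The initializers below are hypothetical (your constructors will differ); the point is that the near dependencies are real instances while the far ones are mocks.

- (void)testThatClassUnderTestSavesItsData
{
  // Far dependencies (distance greater than 1): mock them.
  var localStorage = moq();
  var connection = moq();

  // Immediate dependencies: use real instances where we can.
  var database = [[Database alloc] initWithStorage:localStorage connection:connection];
  var view = [[View alloc] initWithFrame:CGRectMakeZero()];
  var mathUtility = [[MathUtility alloc] init];

  var target = [[ClassUnderTest alloc] initWithDatabase:database view:view mathUtility:mathUtility];

  [self assertTrue:[target save]];
}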

OJMoq makes mocking awesome. It is very simple and allows you to create stubs, mocks and spies in a very intuitive way. In order to create a stub, you just do the following

var stub = moq();
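
A bare stub will happily swallow any message; usually you also want it to return a canned value. The selector:returns: call below reflects OJMoq's API as I remember it, so double-check it against the OJMoq README if it does not behave as expected.

var stub = moq();
[stub selector:@selector(name) returns:@"Stubbed name"];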

In order to create a mock, you just add expectations to a stub.

var mock = moq();
[mock selector:@selector(init) times:1];
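
An expectation only pays off if you verify it at the end of the test. If I remember the OJMoq API correctly, that looks something like the following (again, consult the README if the method name has changed).

var mock = moq();
[mock selector:@selector(reload) times:1];

// ... exercise the code that should send -reload to the mock ...

[mock verifyThatAllExpectationsHaveBeenMet];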

In order to create a spy, you just wrap an existing object.

var dependency = [[View alloc] initWithFrame:CGRectMakeZero()];
var spy = moq(dependency);

In order to keep this post short(er), I won't go into the differences between stubs, mocks and spies. If you aren't sure which to use for your situation, just ask in the IRC channel (irc.freenode.net#cappuccino).

SetUp / TearDown

For anyone familiar with setup and teardown methods in other xUnit frameworks, OJTest supports these as well. The setUp and tearDown methods are called before and after each test, respectively. This allows us to set up state that would otherwise be repeated over and over. In order to use them, just define methods called setUp and tearDown.

@implementation SomeTest : OJTestCase
{
  Some  target;
}

- (void)setUp
{
  target = [[Some alloc] init];
}

- (void)tearDown
{
  [target cleanUp];
}

- (void)testSomething
...

@end

Run Your Tests!

Part of the beauty of unit testing is that it also future-proofs your code. You are simultaneously testing a small segment of code and incrementally growing your regression test suite. But the regression test suite isn't useful unless you run those tests! I suggest that you add this to your Jakefile

task("test", function()
{
    var tests = new FileList('Test/*Test.j');
    var cmd = ["ojtest"].concat(tests.items());
    var cmdString = cmd.map(OS.enquote).join(" ");

    var code = OS.system(cmdString);
    if (code !== 0)
        OS.exit(code);
});

which will allow you to run all of your tests with jake test.

Refactor

Refactoring is considered a huge benefit in production code. Unfortunately, it is not leveraged as much in test code. We can reduce the pain of testing by refactoring our tests as well. This includes using the setUp and tearDown methods described above, but it also includes creating object factories for frequently used objects and static utility functions for frequently executed code.
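
For example, a factory method keeps individual tests readable when building the object under test takes several steps. The Order class and its methods below are hypothetical; the pattern is simply a non-"test" helper, like the buildASome example earlier.

@implementation OrderTest : OJTestCase

- (void)testThatPaidOrdersCanBeShipped
{
  var order = [self buildPaidOrder];
  [self assertTrue:[order canShip]];
}

// Not run as a test because its name doesn't start with "test".
- (Order)buildPaidOrder
{
  var order = [[Order alloc] init];
  [order addItem:@"Widget" price:9.99];
  [order markAsPaid];
  return order;
}

@end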

Conclusion

Hopefully this will be a good starting point for everyone interested in testing for Cappuccino. There are definitely areas of this post that could be expanded upon, and I am willing to answer questions about them! Just shoot me an email ( derek r hammer at gmail dot com ) or catch me on the IRC channel.

[1] There are other testing tools that are not part of the OJTest repository. Notably, there are OJSpec and Barista.