In the past couple of months, there has been a backlash among some industry leaders against the role of automated acceptance tests. James Shore and Michael Feathers have both expressed disillusionment with the state of automated acceptance testing. While I understand their frustration, I whole-heartedly disagree.
The arguments against automated acceptance testing boil down to three main points: a) automated acceptance tests are too costly and difficult to maintain, b) customers do not want to, or cannot, participate in automated acceptance testing and c) the tools are too ineffective. The first argument is a knee-jerk reaction. The second can be resolved by tailoring the solution to the particular customer. The last is the most legitimate, but it is not an unsolvable problem.
The claim that acceptance tests are too costly and do not provide enough business value is a familiar kind of reaction. A great example is how the industry first reacted to automated regression tests (which I think most industry leaders, and most people reading this blog, would agree are a Good Thing(TM)). There were (and still are) people in our industry who state that regression test suites are an investment not worth making. They argue that when you write unit tests, you waste valuable time and effort that could be put toward implementing new features and increasing business value. However, what we've come to realize is that regression test suites give us the confidence to deliver and to refactor. They also catch bugs caused by changes in far-away places, bugs that might not otherwise be caught until production. These benefits (and others) increase the value of the product we are creating.
While I cannot list all of the benefits of automated acceptance testing (we need wide-scale adoption to really understand the true benefits of any practice), I can say that acceptance tests give developers, project managers and customers a measuring stick for progress. We can measure velocity and similar metrics. Acceptance tests also tell developers when they are done. When the test passes, the developer should not write any more code (refactoring / clean-up excluded, of course). And, finally, they create a medium for the idea of a negotiable contract. This is a core XP principle that satisfies the values of communication and adaptability. We can more easily give the customer what she wants.
Another argument is a lack of customer willingness. And I certainly agree that being thrown into a tool such as FitNesse is outside most customers' comfort zone. However, I think this area needs to be very fluid and must match the needs of the customer from project to project. I'll demonstrate that fluidity with a couple of examples:
You have a customer who is very knowledgeable about the domain and serves as the project champion for the rest of his company. However, he is neither a business analyst nor, in any form, a technically oriented person. In this case, attempting to get him to use Cucumber, FitNesse or Selenium is not going to work very well. However, he will be more than capable of helping the development team draw up user stories. These user stories can then (hopefully) be quickly translated into acceptance tests in a tool by a business analyst or developer on the team. At that point you would continue to triage and satisfy as normal.
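To make that hand-off concrete, here is a minimal sketch of what the translation might look like, assuming a hypothetical user story about a shopping cart; `ShoppingCart` and its methods are illustrative stand-ins for whatever the real system exposes, not any particular tool's API:

```python
# Hypothetical translation of a customer's user story into an
# automated acceptance test. ShoppingCart is an illustrative
# stand-in for the system under test.

class ShoppingCart:
    """Minimal stand-in for the real system under test."""
    def __init__(self):
        self._items = {}

    def add(self, sku, quantity=1):
        self._items[sku] = self._items.get(sku, 0) + quantity

    def total_items(self):
        return sum(self._items.values())


def test_shopper_can_add_an_item_to_the_cart():
    # Story (from the customer): "As a shopper, I can add an item
    # to my cart so that I can buy it later."
    cart = ShoppingCart()           # Given an empty cart
    cart.add("SKU-42", quantity=2)  # When the shopper adds two of an item
    assert cart.total_items() == 2  # Then the cart holds two items


test_shopper_can_add_an_item_to_the_cart()
```

The customer supplies the story in his own words; the analyst or developer only has to express the given/when/then structure in the team's tool of choice.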
You have a customer who is able to tackle technical responsibility and can fill the role of business analyst for the team. She sits with the team, understands the high-level technical details of the project and can communicate with developers. In this case, the customer would write the acceptance tests herself. She can handle the tool effectively and can write effective tests because she understands both the problem domain and the high-level technical details of the project. Depending on her effectiveness, a developer or the team's business analyst may need to review the acceptance tests.
Of course, you have to have a full-time customer contact on your team. This is another XP suggestion: pay for the contact to be a part of your team out of your own contract. That's a tough jump for many organizations. However, if you want the best result from your project, having a full-time client contact is essential.
The last argument is that the tools are too slow, too brittle and too end-to-end to be effective. Let me first say that, in general, I agree. Selenium and its cousins are end-to-end web application testing tools that are very, very slow and very brittle. They require specialized knowledge to run. They require a full stack just to work at all. However, this is a problem we can solve.
The Goal: Provide a system-level test that ensures certain behaviors of the system for a user.

The Current Solution: Create a user-interface, full-stack, end-to-end automated user acceptance test suite.

The Problem with the Current Solution: Too slow and too brittle (end-to-end is not bad in itself, it's just hard).

Another Solution: Create a user-interface, subset-of-the-full-stack, end-to-middle automated user acceptance test suite.

Why This Works: Acceptance tests are NOT end-to-end tests. They ensure that a system responds to input according to its specifications.
There is a solution to this problem; we just need to stop thinking of UATs as "big integration tests." Today's tools are built around the big-integration-test mindset. However, if we just want to ensure the behavior of the system from the user's perspective, then we can most assuredly test only the part that interacts with the user. We can create fakes and mocks to enable fast, flexible tools that we can rely on to deliver the user experience we want.
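Here is a minimal sketch of that end-to-middle idea; every name in it is hypothetical. The user-facing layer is exercised directly with the inputs a user would supply, while the slow full stack (database, network) is replaced by an in-memory fake, so the test runs in microseconds with no deployed environment:

```python
# Sketch of an "end-to-middle" acceptance test: drive the layer the
# user interacts with, but swap the full stack for a fake.
# FakeAccountStore and WithdrawalScreen are hypothetical names.

class FakeAccountStore:
    """In-memory stand-in for a database-backed account store."""
    def __init__(self, balances):
        self._balances = dict(balances)

    def balance(self, account_id):
        return self._balances[account_id]

    def withdraw(self, account_id, amount):
        self._balances[account_id] -= amount


class WithdrawalScreen:
    """The user-facing layer under test; it works with any store
    that offers balance() and withdraw()."""
    def __init__(self, store):
        self._store = store

    def submit(self, account_id, amount):
        # This is the behavior the acceptance test pins down:
        # what the user sees for valid and invalid input.
        if amount > self._store.balance(account_id):
            return "Insufficient funds"
        self._store.withdraw(account_id, amount)
        return f"New balance: {self._store.balance(account_id)}"


def test_user_sees_error_when_overdrawing():
    screen = WithdrawalScreen(FakeAccountStore({"acct-1": 50}))
    assert screen.submit("acct-1", 100) == "Insufficient funds"


def test_user_sees_new_balance_after_withdrawal():
    screen = WithdrawalScreen(FakeAccountStore({"acct-1": 50}))
    assert screen.submit("acct-1", 20) == "New balance: 30"


test_user_sees_error_when_overdrawing()
test_user_sees_new_balance_after_withdrawal()
```

Notice that nothing end-to-end is lost that matters to the customer: the test still states, in behavioral terms, what the system does for the user; only the plumbing behind the user-facing layer has been faked out.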
I'm sad to see the disillusionment caused by misconceptions and a lack of imagination in our industry. Hopefully new tools will come out that resolve these issues, and automated UATs (and other forms of ATs) will become as mainstream as regression tests.