We’ve spent a lot of time recently fixing up our automated feature tests (AATs). The problem is that these failing tests have blinded us to real problems that have crept into live systems. The usual answer is to ‘just run them again’ until they eventually go green. The root of it is our attitude to the tests: we don’t respect them and we don’t listen to them, so they provide no feedback and are effectively pointless. The response to broken feature tests normally ranges from ‘the test environment is down’, to ‘the data is in contention’, to ‘the database is down’, and so on, but never ‘something I’ve done has broken something’.
So what is the solution?