Browser automation is a fairly new discipline that has been implemented in several projects, and the good news is that there is always something new to learn.
One of the headaches of this process is dealing with non-deterministic tests: if a test fails sporadically, there is no confidence in it, because it is hard to determine whether it failed due to a set of changes that indirectly affected it, a timing error, or some other problem in the framework.
I’ve started to implement a set of changes in the TeamMentor automation process, based on the following best practices:
- Using a meaningful test name, for instance Search_Feature_Return_results
- Adding a description to the test, so future developers can easily determine its original purpose
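As a sketch of this practice (the fixture and test body are hypothetical), NUnit's [Description] attribute can carry that explanation right next to the test:

```csharp
using NUnit.Framework;

[TestFixture]
public class SearchTests
{
    [Test]
    [Description("Verifies that searching for an existing article " +
                 "returns at least one result in the results panel.")]
    public void Search_Feature_Return_results()
    {
        // ... drive the browser and assert on the search results here
    }
}
```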
- Always using the NUnit assertion overloads that take a message, so a description is shown when the assertion fails. Failing assertions are a very common case, and a descriptive message lets you identify why the test failed. See the following example:
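A minimal sketch of the idea (the results array stands in for real data scraped from the page): without a message, a failure only reports the expected and actual values; with one, the failure output explains why the check matters.

```csharp
using NUnit.Framework;

[TestFixture]
public class SearchAssertionExamples
{
    [Test]
    public void Search_Feature_Return_results()
    {
        // Placeholder for results collected from the browser
        var results = new[] { "Article A", "Article B" };

        // The message appears in the test report when the assertion fails,
        // making the reason for the failure immediately clear.
        Assert.IsNotEmpty(results,
            "The search returned no results; the search feature or the test data may be broken.");
    }
}
```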
Along with this, I’m also trying to avoid the nasty Thread.Sleep() calls in the tests, replacing them with a way to determine when the asynchronous requests have finished.
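One common way to do this with Selenium WebDriver is an explicit wait: instead of sleeping a fixed amount of time, poll until a condition becomes true or a timeout expires. The sketch below assumes the results are rendered into an element with id "searchResults"; the id and timeout are illustrative, not taken from the TeamMentor code.

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WaitHelpers
{
    public static IWebElement WaitForSearchResults(IWebDriver driver)
    {
        // Wait up to 10 seconds, polling periodically, instead of
        // sleeping blindly with Thread.Sleep().
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));

        // Until() returns as soon as the lambda yields a non-null value,
        // i.e. as soon as the asynchronous request has rendered results.
        return wait.Until(d =>
        {
            var element = d.FindElement(By.Id("searchResults"));
            return element.Displayed ? element : null;
        });
    }
}
```

This keeps fast runs fast (the wait ends the moment the element appears) while still tolerating slow responses, which is exactly where fixed sleeps cause the sporadic failures described above.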