Implementing best practices on the TeamMentor automation project, based on lessons learned

Browser automation is still a fairly young discipline, and we have implemented it across several projects. The good news is that every project is a new opportunity to learn.

One of the headaches of this process is non-deterministic tests: if a test fails sporadically, you lose confidence in it, because it is hard to determine whether it failed due to a set of changes that indirectly affected it, a timing error, or some other problem in the framework.

I’ve started to implement a set of changes in the TeamMentor automation process. These best practices are:

  • Use a meaningful test name, for instance Search_Feature_Return_results.
  • Add a description to the test, so future developers can easily determine what its original purpose was.
  • Always use the NUnit assertion overloads that take a description, so a message is shown when the assertion fails. Sporadic failures are a very common case, and a descriptive message lets you identify the reason why the test failed.
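The original screenshots for these points did not survive, so here is a minimal sketch of what such a test can look like. The page object `SearchPage`, its `SearchFor` method, and the search term are hypothetical placeholders, not actual TeamMentor code; the NUnit attributes and the `Assert` message overload are the parts the practices above refer to:

```csharp
using NUnit.Framework;

[TestFixture]
public class SearchTests
{
    [Test]
    // The Description attribute tells future developers why this test exists.
    [Description("Verifies that searching for an existing topic returns at least one result.")]
    public void Search_Feature_Return_results()
    {
        // Hypothetical page object; the real project wires this up to the browser.
        var searchPage = new SearchPage();
        var results = searchPage.SearchFor("XSS");

        // The message overload tells us *why* the test failed, not just that it failed.
        Assert.IsNotEmpty(results,
            "Searching for 'XSS' returned no results; the page may not have finished loading.");
    }
}
```

With the message in place, a sporadic failure reports the suspected cause in the test output instead of a bare assertion failure.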

Along with this, I’m also trying to avoid the nasty Thread.Sleep() calls in the tests, replacing them with a way to determine when the asynchronous requests have actually finished.
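As a sketch of that idea, assuming the tests drive the browser through Selenium WebDriver (the element id and timeout below are illustrative, not taken from the project):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WaitExamples
{
    public static void WaitForResults(IWebDriver driver)
    {
        // Bad: guesses how long the asynchronous request takes,
        // and fails sporadically whenever the guess is wrong.
        // System.Threading.Thread.Sleep(5000);

        // Better: poll for the condition we actually care about, up to a timeout.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until(d => d.FindElement(By.Id("searchResults")).Displayed);
    }
}
```

The explicit wait returns as soon as the condition holds, so it is both faster than a fixed sleep on the happy path and far less likely to fail on a slow run.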


About Michael Hidalgo

Michael is a Software Developer Engineer based in San José, Costa Rica. He leads the OWASP Costa Rica Chapter.
