As mentioned last time, I decided to target the methods listed here, as those are the ones TeamMentor uses most. If you remember, I’d hacked up a little Python script last week to test authorization for all 121 methods across all 3 types of users, and I got a CSV file as output telling me who could query what. Some of the methods could be accessed as expected, but many threw exceptions, either because I didn’t have enough permissions OR because I didn’t have enough business data. Not having enough permissions is fine; that’s the whole point of the testing. Not having enough business data is NOT, and that’s why I’m creating unit tests for the 17 methods on the link I mentioned at the start of the post.
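To give you a feel for what that script does, here’s a minimal sketch of the idea: call every method as every user and record the outcome in a CSV. The names (`call_method`, `USERS`, the two methods listed) are hypothetical stand-ins, not the real project code.

```python
import csv

USERS = ["anonymous", "reader", "admin"]      # the 3 user types
METHODS = ["GetLibraries", "CreateArticle"]   # in reality, all 121 methods

def call_method(method, user):
    """Stand-in for the real SOAP call via Suds; returns 'OK' or an error string."""
    # Pretend only admin may call CreateArticle.
    if method == "CreateArticle" and user != "admin":
        return "PermissionDenied"
    return "OK"

def build_matrix(methods, users, invoke):
    """One row per method, one column per user, holding the call outcome."""
    rows = []
    for method in methods:
        row = {"method": method}
        for user in users:
            row[user] = invoke(method, user)
        rows.append(row)
    return rows

def write_report(rows, users, path):
    """Dump the who-can-query-what matrix to a CSV report."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["method"] + users)
        writer.writeheader()
        writer.writerows(rows)
```

The real script does this against the live WSDL, which is what makes the CSV useful as an authorization report.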
Much as I did when choosing a language, I did a lot of Googling and reading before deciding which unit test framework to use. I had no intention of learning every single thing in the framework; I just needed enough to write a test. With that in mind, after reading a little about pyunit, doctest and py.test, I finally plumped for py.test as it seemed very intuitive and did just about exactly what I wanted, with me hardly having to learn anything. It’s quite easy to get up and running with py.test on Ubuntu; I’ll leave that for you guys to figure out.
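What sold me on py.test is how little there is to learn: it discovers any function named `test_*` and runs plain `assert` statements, with no TestCase class or assertEquals boilerplate. Here’s a toy example of that shape; the service call is faked so the snippet stands alone (the real tests use a Suds client talking to the WSDL).

```python
class FakeService(object):
    """Fake stand-in for the real SOAP service exposed by the WSDL."""
    def GetLibraries(self):
        return ["Lib1", "Lib2"]

class FakeClient(object):
    """Fake stand-in for a suds.client.Client."""
    service = FakeService()

def get_all_libraries(client):
    # In the real tests this is a Suds client calling the live service.
    return client.service.GetLibraries()

def test_get_libraries_returns_data():
    # py.test finds this automatically because the name starts with test_
    libraries = get_all_libraries(FakeClient())
    assert len(libraries) > 0
```

Run `py.test` in the directory and it picks this up with zero configuration.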
As you all know by now, I’m a fairly average programmer :). So I started out writing all my unit tests in one fat file, full of functions and assert statements. I showed those to Dinis after I’d done around 7, and he said, “Yeah cool. But they need to be separate, so we know which test failed”. I didn’t initially agree with this, because if you run py.test in verbose mode, it clearly tells you which test has failed. On second thoughts, however, separating tests looks cleaner, especially if you name all the tests well.
Now at this point, since I’d been using 2 different WSDLs [Test and Production setup] for my testing, half of my tests ran against UAT and half against production. So I first converted all the tests to work with the UAT setup. This involved changing the values of some of the data used in some methods. So I changed the data and also updated my main datatype-to-sample-value mapping file. I then thought I’d re-run my auth testing script and generate a new report, just to check how many exceptions were resolved after updating my values. Once I did this and got a report, I suddenly found that all my data had vanished for some reason. I pinged Dinis, who checked and said that I must’ve deleted it somehow. As it turns out, my auth test had called DeleteLibrary with a valid LibraryID and wiped it out, leaving me with no sample data. Dinis wrote a nice post about it here. Oh well… 😀
Eventually, though, I managed to get a total of 15 unit tests working. The only ones I couldn’t get working were the CreateArticle and UpdateGuidanceItem tests, primarily because I couldn’t figure out a way to tell Suds to pass parameters for these methods. The values for these parameters weren’t simple integers or strings; they were actual class objects. For example, the definition for a function I could write a unit test for looked like this: CreateArticle_Simple(ns0:guid libraryId, xs:string title, xs:string dataType, xs:string htmlCode), but the definition for one I couldn’t looked like this: CreateArticle(TeamMentor_Article article). Now, TeamMentor_Article has further structures inside it, so I need to figure out a way to initialize the entire structure with correct data in Python/Suds and then create a test for this method. The other method I couldn’t do was UpdateGuidanceItem, for the same reason.
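From what I’ve read, the usual Suds way to build a complex WSDL type is `client.factory.create()`, which instantiates a type defined in the WSDL schema so you can set its fields before passing it to the service. Here’s a sketch of what that might look like for CreateArticle; note the attribute names on TeamMentor_Article below are illustrative guesses, not the real schema, and I haven’t got this working against the live service yet.

```python
def build_article(client, library_id, title, html):
    """Build a TeamMentor_Article via Suds' type factory.

    `client` is a suds.client.Client for the TeamMentor WSDL;
    factory.create() instantiates the complex type by its schema name.
    The field names set below are guesses for illustration only.
    """
    article = client.factory.create('TeamMentor_Article')
    article.LibraryId = library_id
    article.Title = title
    article.HtmlCode = html
    return article

# Usage against the real WSDL would look something like:
#   from suds.client import Client
#   client = Client('https://example/TM_WebServices.asmx?WSDL')
#   article = build_article(client, some_guid, 'My Article', '<p>hi</p>')
#   client.service.CreateArticle(article)
```

If the nested structures inside TeamMentor_Article also need populating, each one would presumably get its own `factory.create()` call before being attached to the parent object.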
I’ve pushed all my unit tests to the private Git repository we’re using, but thought you guys would like to take a look at the code for one unit test. Here it is. Also, here are 2 screenshots showing how “separating unit tests” makes sense.
So the key lesson to be learnt here is NOT to invoke every single method automatically. I now need to tweak my auth testing script to test only the methods I tell it to, and not do whatever it wants to do :). That apart, we’re on track. I’ll catch up with you guys once I’ve updated my authorization testing script.
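One way I might do that tweak is a simple whitelist: only invoke methods explicitly marked safe, so destructive calls like DeleteLibrary never fire by accident. This is just a sketch of the idea; `SAFE_METHODS` and `invoke` are hypothetical names, not the real script.

```python
# Hypothetical whitelist of read-only methods that are safe to invoke.
SAFE_METHODS = set(["GetLibraries", "GetGuidanceItem"])

def run_auth_tests(method_names, invoke):
    """Invoke only whitelisted methods; skip everything else loudly."""
    results = {}
    for name in method_names:
        if name not in SAFE_METHODS:
            results[name] = "SKIPPED (not whitelisted)"
            continue
        results[name] = invoke(name)
    return results
```

Recording the skips in the report (rather than silently dropping them) makes it obvious which methods still need safe sample data before they can be tested.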