Monday, December 12, 2011

Scrutinizing another project's code base: what to look for

When you're evaluating an unfamiliar project's codebase, be careful not to be deceived by first impressions. On our current project the code looked like it had a good architecture to begin with, but after working with it for a few weeks you begin to notice things that are just weird.

Here are some things to watch out for (keep in mind this is an ASP.NET MVC 2 project):
  • Controller logic mixed into the view layer (a terrible sign). Not very common in this project, but I've seen plenty of it elsewhere.
  • Related to the previous point: too much C# code in the view layer that isn't really controller logic. Loops, if blocks, and the like that render different HTML elements are just ugly.
  • Embedded JavaScript code inside aspx files. In my opinion this is sloppy and pollutes the markup. What's even worse is JavaScript that has C# code embedded inside it!
  • Large JavaScript files. JavaScript should be treated like the rest of your code -- this includes separating it into a modular structure (which makes it unit testable too).
  • Code duplication. This is a big one: why is the code being duplicated? A severe case in this project is two JavaScript files containing the same code in two different places -- they even had the same typos! A really annoying variant is three classes with the same name, each in a different namespace.
  • Doing too much in the data layer. Branching logic and the like in the data layer makes the bugs that pop up from it hard to unit test.
  • Commented-out code. This isn't automatically a red flag, but it should still raise questions. Why is there code that's been commented out? If the project is under source control (and it had better be) you already have a copy of it; if it isn't, why is there no explanation of why the code was commented out?
  • Swallowing exceptions. This is my favorite, because to the user it often just looks like their button doesn't work, when in reality an exception was caught, swallowed, and the action silently never proceeded. There's a small illustration of this one right after the list.
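
Here's a hypothetical sketch of the pattern, as an action method on a made-up controller -- none of these names are from the actual project:

public ActionResult Save(OrderModel model)
{
    try
    {
        _orderService.Save(model);
    }
    catch (Exception)
    {
        // Swallowed: no log, no rethrow, no message to the user.
        // To the user, the Save button simply "doesn't work".
    }
    return RedirectToAction("Index");
}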
I've seen all of these things in the project we're on now, so this list is based on that experience. Your results may vary.

Thursday, December 8, 2011

Selenium tests that fail "sometimes" and Jenkins

We're running Jenkins as our CI server, and part of the automated deployment is making sure the UI tests pass. Well, sometimes the Selenium tests fail for one reason or another after passing for, say, 20 builds. Initially I hypothesized the following:
  • Click events don't register
  • IE has a bad day
  • Some kind of timing thing
All of these things have occurred in the past -- especially with IE. So, with the help of Stack Overflow, I went ahead and implemented some things to fix the problem.

The first thing was to re-run any UI test that failed. Of course, retrying comes with obvious drawbacks: a test that only passes on the second attempt points to a problem with the test itself. In this case that was okay because the test was still useful for confirming a bug was fixed, but we still needed the build to pass. I'll come back to this issue later in the post.

Well, it turns out that while JUnit has the @Rule annotation (see this post), which allows you to intercept test calls, NUnit doesn't have an equivalent. Oops. I tried using the NUnit extensibility API, but that didn't turn out too well (my fault, I'm sure).

So I resorted to this method in our UI test base class, SeleniumTest:

public void RetryFailure(Action testMethod)
{
    // Guard against re-entry: the test method calls back into this method,
    // so bail out if a retry is already in progress.
    if (_retrying)
    {
        _retrying = false;
        return;
    }

    try
    {
        _retrying = true;
        testMethod();
    }
    catch (Exception e)
    {
        Console.WriteLine("::::::retrying:::::::::::");
        Console.WriteLine("error was {0}", e.Message);

        // Restart the browser before the second attempt so a wedged
        // IE session doesn't doom the retry as well.
        WebDriver.Quit();
        SwitchBrowser(_currentBrowser);

        // If this second attempt throws, the exception propagates
        // and the test fails for real.
        testMethod();
    }
    finally
    {
        // Reset in a finally so a failed retry doesn't poison the next test.
        _retrying = false;
    }
}

I'm not too excited about the code quality of this method, but it works. All it does is attempt to run the test, catch any failure, restart the browser, and run the test a second time. Here's what the beginning of a test that uses it looks like:

[TestCase] 
public void AdminCancelANeedViaOfficeCalendar()
{
    if (!Retrying)
    {
        RetryFailure(AdminCancelANeedViaOfficeCalendar);
        return;
    }

    // ... more code
}

As you can see, we need this additional bit of logic at the beginning of each test. It would have been easy to just wrap the test code in some kind of retry loop, but doing it this way gives the SeleniumTest.RetryFailure method all of the control.

Alright, so the next step was to generate screenshots of test failures. This is pretty easy, as Selenium already provides a way to do it: cast the WebDriver to ITakesScreenshot. Nothing exciting there.
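
For reference, here's a minimal sketch of what that cast looks like. The SaveFailureScreenshot name and the testImagePath parameter are illustrative, not the project's actual code:

private void SaveFailureScreenshot(string testImagePath)
{
    // Cast the driver to ITakesScreenshot and dump the browser state to a PNG.
    Screenshot shot = ((ITakesScreenshot)WebDriver).GetScreenshot();
    shot.SaveAsFile(testImagePath, System.Drawing.Imaging.ImageFormat.Png);
}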

Now to integrate this into Jenkins. At this point we have a bunch of failure screenshots being archived as artifacts, but you have to actually go digging through them to view anything. I'm lazy, so instead I took a shot at providing a custom HTML report of the failures with their screenshots:
Summary of test failures

The screenshot above is of a page accessible from the Jenkins project page. This means we can be lazy and still get the information we need.

UI Test Failures link shows up on the project page

The HTML Publisher plugin takes care of the details; all we have to do is generate the actual page. This is done, once again, inside SeleniumTest.

private void BuildReport(string reportFile, string testName, string testImagePath)
{
    // First failure of the run: create the report file and write the header.
    if (!File.Exists(reportFile))
    {
        using (StreamWriter sw = new StreamWriter(new FileStream(reportFile, FileMode.Create, FileAccess.Write)))
        {
            sw.WriteLine("<html><body>");
            sw.WriteLine("<h1>Test Failure Screenshots</h1>");
        }
    }

    try
    {
        // Each failing test appends its own section. Note the closing
        // </body></html> tags are never written; browsers are forgiving.
        using (StreamWriter sw = new StreamWriter(new FileStream(reportFile, FileMode.Append, FileAccess.Write)))
        {
            sw.WriteLine("<div>");
            sw.WriteLine("<h2>" + testName + "</h2>");
            sw.WriteLine("<img src=\"" + testImagePath + "\"/>");
            sw.WriteLine("</div>");
        }
    }
    catch (Exception e)
    {
        // Don't let report problems fail the test run itself.
        Console.WriteLine("Problem writing report: {0}", e.Message);
    }
}

Pretty simple, and it could probably be moved somewhere more fitting. In the meantime, though, it works well enough.

Anyway, the point of all of this was to let us see why tests fail, but also to retry them and find out whether they fail reproducibly. It turns out that a couple of the tests repeatedly fail the first time but not the second, so there's probably some work to do there.

See you next time.

Sunday, December 4, 2011

Another abstraction layer on top of PageObjects with Selenium

Hi everyone,

As I mentioned before, our Selenium 2 tests are extremely important on this project. We've tried to hold the test code to the same standard as the production code, so it naturally has its own object model, and we already use PageObjects extensively.
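
For anyone unfamiliar with the pattern, here's a rough sketch of what a PageObject like LogOn might look like; the element IDs and URL here are made up for illustration:

public class LogOn
{
    private readonly IWebDriver _driver;

    public LogOn(IWebDriver driver)
    {
        _driver = driver;
    }

    public void Go()
    {
        _driver.Navigate().GoToUrl("http://localhost/LogOn"); // illustrative URL
    }

    public void Login(string userName, string password)
    {
        // The page's details live here, so tests just say login.Login(...).
        _driver.FindElement(By.Id("UserName")).SendKeys(userName);
        _driver.FindElement(By.Id("Password")).SendKeys(password);
        _driver.FindElement(By.Id("LogOnButton")).Click();
    }

    public void Logout()
    {
        _driver.FindElement(By.LinkText("Log Off")).Click();
    }
}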

Recently a few bugs popped up that we reproduced with Selenium tests, and those tests turned out to be pretty similar, requiring the same sequences of actions. Here's an example of the duplicated code:

// first step: log in as a temp to accept a need
LogOn login = new LogOn(WebDriver);
login.Go();
login.Login("Temp", "temporary");

TempNeeds need = new TempNeeds(WebDriver);
need.ViewFirst();
need.Accept();

login.Logout();
login.Go();
login.Login("Chris", "foo");

Billing billing = new Billing(WebDriver);
billing.Go();
billing.CreateNewInvoices();
billing.UpdateAndReview();
billing.FinishCreate();

It turns out this bit of logic is common across our tests, so it gets duplicated everywhere. So, in the interest of minimizing the amount of code, I modified it to look more like this:

Temp temp = new Temp(WebDriver, "Temp", "temporary");
temp.AcceptFirstAvailableNeed();

Admin admin = new Admin(WebDriver, "Chris", "foo");
admin.CreateInvoices();

This code does the same thing; it's just been hidden behind another abstraction layer that represents the different users of the site and what they would do. Usually, each method on a user interacts with many different PageObjects. The idea is to make the code reveal its purpose by telling you what the user is trying to do, not which PageObjects need to be used.
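
To make the new layer concrete, here's a rough sketch of what the Admin class might look like. It reuses the LogOn and Billing PageObjects from the duplicated code above, but the exact wiring is my illustration, not the project's real implementation:

public class Admin
{
    private readonly IWebDriver _driver;

    public Admin(IWebDriver driver, string userName, string password)
    {
        _driver = driver;

        // Logging on in the constructor keeps tests down to
        // "create an Admin, then say what the admin does".
        LogOn login = new LogOn(driver);
        login.Go();
        login.Login(userName, password);
    }

    // A single user-level action can drive several PageObjects.
    public void CreateInvoices()
    {
        Billing billing = new Billing(_driver);
        billing.Go();
        billing.CreateNewInvoices();
        billing.UpdateAndReview();
        billing.FinishCreate();
    }
}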

Anyway, we're trying this approach out on newer tests now, so we'll see how it goes. Hopefully, the more tests we write and the more user objects we build up, the less new Selenium code we'll have to write.