REST web service client in C#

Here is an example of how to make REST web requests in C#. The example shown is a POST; for a GET, just remove the request body handling and change request.Method:

(Updated with checks for Basic Auth and GET requests.)

// requires: using System; using System.IO; using System.Net; using System.Text;

// Create a request and set the headers
HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
request.Method = "POST";
request.ContentType = contentType;
request.UserAgent = userAgent;
request.Timeout = timeout;

if (useBasicAuth)
{
    // Create a token for basic authentication and add a header for it
    String authorization = System.Convert.ToBase64String(Encoding.UTF8.GetBytes(username + ":" + password));
    request.Headers.Add("Authorization", "Basic " + authorization);
}

if (request.Method == "POST" && requestBody != null)
{
    // Convert the request contents to a byte array and include it
    byte[] requestBodyBytes = System.Text.Encoding.UTF8.GetBytes(requestBody);
    Stream requestStream = request.GetRequestStream();
    requestStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
    requestStream.Close();
}

// Initialize the response
HttpWebResponse response = null;
String responseText = null;

// Now try to send the request
try {
    response = request.GetResponse() as HttpWebResponse;

    // expect the unexpected: a WebException may already have been
    // thrown for some of these cases, like a timeout or a 404
    if (request.HaveResponse && response == null) {
        String msg = "response was not returned or is null";
        throw new WebException(msg);
    }
    if (response.StatusCode != HttpStatusCode.OK) {
        String msg = "response with status: " + response.StatusCode + " " + response.StatusDescription;
        throw new WebException(msg);
    }

    // check response headers for the content type
    contentType = response.GetResponseHeader("Content-Type");

    // get the response content
    StreamReader reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
    responseText = reader.ReadToEnd();
    reader.Close();

// handle failures
} catch (WebException e) {
    if (e.Response != null) {
        response = (HttpWebResponse) e.Response;
        log.Error(response.StatusCode + " " + response.StatusDescription);
    } else {
        log.Error(e.Message);
    }

// and clean up after ourselves
} finally {
    if (response != null) {
        response.Close();
    }
}

// display the response
if (responseText != null) {
   System.Console.Write(responseText);
}

As you can see, there’s a lot of drudgework here — and this is an abbreviated example. Ideally, I’d like to wrap it all up in a class with a pretty bow so I can do something like this:

RestClient client = new RestClient();
System.Console.Write(client.get(url));

Of course a more complex example might look more like this:

var settings = new Dictionary<String, String> {
   {"key1", "value1"},
   {"key2", "value2"}
};

RestClient client = new RestClient(settings);
client.Url = "http://rest.example.com/1/resource/method?param=value";
client.BasicAuth("username", "password");
client.Headers.Add("header", "value");

client.post("content");

System.Console.WriteLine(client.Response.Body);

foreach (KeyValuePair<String, String> header in client.Response.Headers) {
    System.Console.WriteLine(header.Key + ": " + header.Value);
}

if (client.Response.Error != null) {
    System.Console.WriteLine(client.Response.StatusCode);
    System.Console.WriteLine(client.Response.Error.Message);
}

Then I could simply extend RestClient to handle things like URLs, parameters, and custom headers for each web service client class.

Currently, I’m sorry to admit I have a jumble of cut & paste code that intermingles JSON serialization with HttpWebRequest handling — but at least I’ve abstracted out URLs & stream handling into a base class.

For maintainability (by someone else), I’ve avoided extending HttpWebRequest and HttpWebResponse to make them less painful. Since they don’t expose public constructors anyway, I’d end up having to use composition.

I’ve looked at other libraries, like Hammock, but I don’t really see much value added. The level of abstraction it uses for just the 2 lines of basic auth code is staggering.

Sponsoring a volunteer for OSSO in Ecuador

In 2004 Kelsey went to Ecuador to participate in the OSSO program. She did volunteer work at orphanages and made many new friends there, both Ecuadorian and among the other volunteers.

When I came back from Fiji, she took me to Ecuador with her to see the place & people she loved so much. In 2007, shortly after getting married, we moved to Cuenca, Ecuador together for 6 months, frequently visiting the orphanages and OSSO house. We returned to the USA after Kelsey got pregnant. 4 years and 2 children later, we’re moving back again.

It’s safe to say Ecuador has a permanent place in our hearts, and that Kelsey’s experience with OSSO has had a major part in shaping our lives.

We want others to have that experience. If you want to volunteer, if you’d like to experience the wonderful Ecuadorian culture, make lifelong friends, and have your heart touched by children who need your love, please consider volunteering for OSSO. If money is a concern, don’t let that be an obstacle.

We can help 1 person with up to 50% of the cost of your OSSO program fees. This doesn’t include the application fee or spending money. Once you’ve been accepted, send us a sponsor letter. Our only stipulation is that you work with all your heart for at least 8 weeks. If you happen to be in Cuenca when we’re there, you can buy us dinner if you’d like to say thanks.

Send us a note if you’re seriously interested or have any questions about our experience. Check out http://orphanagesupport.org for more information.

Getting a QC test coverage report from JUnit

I’ve written a quick example that shows how to use custom annotations on JUnit tests to map them to Quality Center test cases.

You can create an annotation called “QCTestCases” and add it to your JUnit tests like this:

package oneshore.example.qcintegration;

import static org.junit.Assert.*;
import org.junit.Test;

public class MyTest extends TestBase {
	@Test
	@QCTestCases(covered={"QC-TEST-1", "QC-TEST-2"}, related={"QC-TEST-3"})
	public void testSomething() {
		assertTrue(true);
	}
}

The annotation is declared as a simple @interface:

package oneshore.example.qcintegration;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface QCTestCases {
	String[] covered();
	String[] related();
}

And the base class uses reflection to discover if the annotation has been added for the test case. You can then use it to report the coverage:

package oneshore.example.qcintegration;

import java.lang.reflect.Method;

import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;

public class TestBase {
	@Rule public TestName testName = new TestName();
	
	QCTestCases qcTestCases;
	
	@Before
	public void getQCTestCoverage() {
		try {
			Method m = this.getClass().getMethod(testName.getMethodName());
			
			if (m.isAnnotationPresent(QCTestCases.class)) {
				qcTestCases = m.getAnnotation(QCTestCases.class);
			} 
		} catch (SecurityException e) {
			e.printStackTrace();
		} catch (NoSuchMethodException e) {
			e.printStackTrace();
		}
	}
	
	@After
	public void writeCoverageReport() {
		if (qcTestCases == null) {
			return; // this test method had no QCTestCases annotation
		}

		StringBuilder result = new StringBuilder();
		result.append(testName.getMethodName());
		result.append(",");

		for (String qcTestId : qcTestCases.covered())
		{
			result.append(qcTestId);
			result.append(",");
		}

		System.out.println("qc tests covered: " + result);
	}
}

The trick is using a @Rule with TestName (available in JUnit 4.7+) to get the currently executing test method.

The example is probably a bit more complex than necessary, since “related” is something extra I added for my own analysis.
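To turn that console output into an actual coverage report, the printed lines can be parsed back into a mapping from test method to QC test cases. Here’s a minimal sketch; the class name CoverageReportParser is my own invention, but the line format is the one produced by writeCoverageReport above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CoverageReportParser {
    // Parses lines like "testSomething,QC-TEST-1,QC-TEST-2," (the format
    // printed by writeCoverageReport) into a map from JUnit test method
    // name to the QC test case ids it covers.
    public static Map<String, List<String>> parse(List<String> lines) {
        Map<String, List<String>> coverage = new LinkedHashMap<String, List<String>>();
        for (String line : lines) {
            String[] fields = line.split(","); // trailing empty fields are dropped
            if (fields.length == 0 || fields[0].isEmpty()) {
                continue; // skip blank lines
            }
            List<String> qcIds = new ArrayList<String>(
                Arrays.asList(fields).subList(1, fields.length));
            coverage.put(fields[0], qcIds);
        }
        return coverage;
    }

    public static void main(String[] args) {
        List<String> output = Arrays.asList("testSomething,QC-TEST-1,QC-TEST-2,");
        System.out.println(parse(output));
    }
}
```

From a map like this it’s a short step to a CSV or QC upload of which test cases each automated test covers.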

This code is available in the QCIntegration repository on GitHub.

Moving to Ecuador

I’m moving to Ecuador in January.

My wife Kelsey volunteered for OSSO (http://orphanagesupport.org) in 2004 before we were married. When I came back from Fiji, she took me there for Christmas to play Santa Claus. After we were married we moved to Ecuador for 6 months in 2007. She got pregnant, we came back to Seattle and had two kids. Now we’re going back.

Besides getting rid of all my stuff (wanna buy a surfboard or guitar?), I need to buy more hardware – a new laptop, plus an Android phone & tablet for development (I just got an iPhone & iMac). I’m going to try to get monitors there.

I’m going to be looking for telecommute work & test consulting through my company, One Shore. I’m also going to be working on my test management tool, QA Site, and perhaps some other projects. Kelsey will be watching the kids, and we will try to do what we can to help OSSO & the orphanages in Cuenca, Ecuador.

Running NUnit tests programmatically

I’m working on a test framework that needs to be run by less-technical testers. The tests are data-driven from a spreadsheet (Google Docs spreadsheet API + GData).

Tests will be run locally (for now at least) since there isn’t a test lab available for remote execution, and no CI. I didn’t want to have to require users to install NUnit to execute tests.

At first I started by writing my own Main() and rolling my own assertions. But I decided that the parameterized test features of NUnit were worth the effort of a little research. NUnit can, in fact, be run programmatically, though the execution appears less flexible than with other frameworks.

I created a TestRunner class with a Main method:


using System;
using System.IO;
using NUnit.Core;
using NUnit.Framework.Constraints;
using NLog;

namespace oneshore.qa.testrunner
{
    class TestRunner
    {
        static Logger log = NLog.LogManager.GetCurrentClassLogger();

        public static void Main(String[] args)
        {
            //get from command line args
            String pathToTestLibrary = "C:\\dev\\oneshore.Tests.DLL"; 

            DateTime startTime = System.DateTime.Now;
            log.Info("starting test execution...");

            TestRunner runner = new TestRunner();
            runner.run(pathToTestLibrary);

            log.Info("...test execution finished");
            DateTime finishTime = System.DateTime.Now;
            TimeSpan elapsedTime = finishTime.Subtract(startTime);
            log.Info("total elapsed time: " + elapsedTime);
        }

        public void run(String pathToTestLibrary)
        {
            CoreExtensions.Host.InitializeService();
            TestPackage testPackage = new TestPackage(pathToTestLibrary);
            testPackage.BasePath = Path.GetDirectoryName(pathToTestLibrary);
            TestSuiteBuilder builder = new TestSuiteBuilder();
            TestSuite suite = builder.Build(testPackage);
            TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

            log.Debug("has results? " + result.HasResults);
            log.Debug("results count: " + result.Results.Count);
            log.Debug("success? " + result.IsSuccess);
        }
    }
}

Link to gist of this code.

The advantage to running tests this way is that NUnit does not need to be installed (though DLLs for NUnit — nunit.core.dll & nunit.core.interfaces.dll — and any other dependencies like NLog need to be packaged with the TestRunner.) You can still write and execute your tests from NUnit while under development.

One disadvantage is that you don’t have the full test results by using the TestSuiteBuilder to bundle every test it finds into one suite. I’d like to find a way to improve that. You also can’t run more than one test assembly at the same time — you can create a nunit project xml for that — and at that point you might as well bundle the nunit test framework.

For now, my base test class (which each of my NUnit tests inherits from) reports by catching and counting assertion failures and writing them to a log file. It can then use the Quality Center integration tool I described in an earlier post. Eventually I’m planning on wiring it all together, so anyone can run tests automatically by clicking on an icon, selecting the test library with a File Picker dialog (see upcoming post), and having the results entered in QC.

This will allow distributed parameterized testing that can be done by anyone. I may try to set up a web UI like fitnesse for data driven tests as well.

Documentation for TDConnection (OTAClient API)

Documentation on the OTA API can be downloaded from Quality Center.

Go to Help -> …

The file is called OTA_API_Reference.chm. It is a “help file” and once downloaded, you may need to unblock it. If you’re seeing this message:

This file came from another computer and might be blocked to help protect this computer.

Right click on the file, select Properties, and then click “Unblock”.

Microsoft’s idea of security is to write insecure code and then, instead of patching it, have you click a button to perform normal actions that could just as easily still install a virus.

Updating test results in QC using the QC OTA API explained

Yesterday I cleaned up and posted my example QCIntegration utility on GitHub.

While it works as a standalone tool, some people might not want to wade through the code to understand or modify it. So today, I’m going to try to explain how the OTA API works by recreating the steps as a blog post with explanation in a simple script.

I’ll start with an example using C# and then give an equivalent Python example. I’ll use the same scenario, updating test case results in QC, but if requested, I can also show how to get test steps from a test plan, or read & update defects in QC using the OTA library.

First, create a new project in Visual Studio (or SharpDevelop). You’ll need to add the OTAClient.dll as a reference. It is a COM library and contains the single interface TDConnection.

When searching for the library name it is called the “OTA COM Type Library”. The package is “TDApiOle80.” Since it is a COM library, it needs to use an interop for C#, but this is handled automatically by the IDE.

using TDAPIOLELib;
TDConnection tdConn = new TDConnection();

Now, let’s create a connection to your Quality Center server. You’ll need to know the URL of your QC Server and have valid login credentials with access to an existing Domain and Project.

Assuming you have Quality Center installed on your local machine (not a typical setup), you might have the following settings:

string qcUrl = "http://localhost:8080/qcbin";
string qcDomain = "oneshore";
string qcProject = "qa-site";
string qcLoginName = "aaron";
string qcPassword = "secret";

Note: I do not use this same password for my bank account.

There are several ways to log in, but I’ll use the simplest here:

tdConn.InitConnectionEx(qcUrl);
tdConn.ConnectProjectEx(qcDomain, qcProject, qcLoginName, qcPassword);

Now you need to find the test sets that need to be updated. I typically use a folder structure that goes something like:

Project – Iteration – Component – Feature

It’s a bit convoluted, but here’s the code to get a test set:

string testFolder = @"Root\QASite\Sprint5\Dashboard\Recent Updates";
string testSetName = "Recent Updates - New Defects Logged";

TestSetFactory tsFactory = (TestSetFactory)tdConn.TestSetFactory;
TestSetTreeManager tsTreeMgr = (TestSetTreeManager)tdConn.TestSetTreeManager;
TestSetFolder tsFolder = (TestSetFolder)tsTreeMgr.get_NodeByPath(testFolder);
List tsList = tsFolder.FindTestSets(testSetName, false, null);

The parameters for FindTestSets are a pattern to match, whether to match case, and a filter. Since I’m looking for a specific test set, I don’t bother with the other two parameters.
You could easily get a list of all test sets that haven’t been executed involving the recent updates feature by substituting this line:

List tsList = tsFolder.FindTestSets("recent updates", true, "status=No Run");

Now we want to loop through the test set and build a collection of tests to update. Note that we might have more than one test set in the folder and one or more subfolders as well:

foreach (TestSet testSet in tsList)
{
	TSTestFactory tsTestFactory = (TSTestFactory)testSet.TSTestFactory;
	List tsTestList = tsTestFactory.NewList("");

And finally, update each test case status:

    foreach (TSTest tsTest in tsTestList)
    {
        Run lastRun = (Run)tsTest.LastRun;

        // only add a run if the test hasn't been run yet, so we don't
        // clobber results that may have been entered by someone else
        if (lastRun == null)
        {
            RunFactory runFactory = (RunFactory)tsTest.RunFactory;
            String date = DateTime.Now.ToString("yyyyMMddhhmmss");
            Run run = (Run)runFactory.AddItem("Run" + date);
            run.Status = "Pass";
            run.Post();
        }
    } // end loop of test cases

} // end outer loop of test sets

Of course you might want to add your actual test results. If you have a dictionary of test names and statuses, you can simply do this:

Dictionary<string, string> testResults = new Dictionary<string, string>();
testResults.Add("New defects in Recent Updates are red", "Pass");
testResults.Add("Resolved defects in Recent Updates are green", "Pass");
testResults.Add("Reopened defects in Recent Updates are bold", "Fail");

if (testResults.ContainsKey(tsTest.TestName))
{
    string status = testResults[tsTest.TestName];
    recordTestResult(tsTest, status);
}

That’s all for now. I’ll translate the example into Python tomorrow, but you’ll see it’s really quite straightforward.

Upload Selenium/JUnit test results to Quality Center

Since I’ve had so many requests (the latest being today), and since I’m working on something very similar for a current client, I decided to take some time today to respond in detail about integrating QC with JUnit & Selenium.

The steps I’ve used for Selenium / QC integration are:

1. Write tests with Selenium using an open source test framework (JUnit/TestNG/PHPUnit/NUnit/RSpec/py.test)

2. Map your test cases to QC test cases. I described how I did this in detail by extending JUnit in these posts:

Note that this isn’t necessarily the way I’d do it now. Annotations are useful, but I think a mapping file is perhaps easier. Simply create a spreadsheet with the QC TestId in one column and the xUnit test name in another column. This is a little trickier with parameterized tests.
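To sketch the mapping-file idea (the two-column CSV format and the class name here are my own invention, not part of any framework): each row holds a QC test id and the corresponding xUnit test name, and loading it gives you a lookup table from test name to QC id.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class QcTestMapping {
    // Loads rows of the form "QC-TEST-1,testSomething" into a map from
    // xUnit test name to QC test id, so results parsed from the test
    // runner can be matched to the Quality Center test cases they cover.
    public static Map<String, String> load(List<String> csvRows) {
        Map<String, String> qcIdByTestName = new LinkedHashMap<String, String>();
        for (String row : csvRows) {
            String[] cols = row.split(",");
            if (cols.length < 2) {
                continue; // skip blank or malformed rows
            }
            qcIdByTestName.put(cols[1].trim(), cols[0].trim());
        }
        return qcIdByTestName;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
            "QC-TEST-1,testSomething",
            "QC-TEST-2,testSomethingElse");
        System.out.println(load(rows).get("testSomething"));
    }
}
```

With a table like this, updating QC reduces to looking up each finished test’s name and posting its status against the matching QC test id.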

3. Parse the test results from your test runner and update Quality Center using the OTA API (OTAClient.dll is installed with QC Explorer & can be downloaded from QC. Go to Help->Addins Page->HP Quality Center Client Side Setup Add-in)

I have a quick example of how to connect to Quality Center using OTAClient here:

Connecting to HP/Mercury Quality Center from a client side script

You can check out my sample QCIntegration project on GitHub:
http://github.com/fijiaaron/QCIntegration

I’m adding more details about integrating with Quality Center on One Shore.

Building a better spreadsheet

How many people are frustrated just doing a sum or a match count – or don’t even try? How about auto-resizing of fields? Or viewing two tables side by side? Or adding comments without filling a cell? Or creating simple charts? These should be easy in spreadsheets.

Things that seem tricky like using a relational database on the backend or versioning data aren’t really tricky, and they’re not complex concepts.

This is 2011! You should be able to have multiple tables side by side, with descriptive bubbles and dynamically updated data, automatic formula calculation from templates, and form data-entry workflow without even thinking about it.

How bad spreadsheets suck

This morning I was thinking about how we’re stuck with pretty much the interface VisiCalc gave us back in 1980 (without the user-friendliness), and it doesn’t have to be that way.

90% of people that use spreadsheets don’t even know how to get the sum for a column – which is the whole reason spreadsheets were invented! That stuff should be automatic. Highlight and click the big green plus sign. Or select a row, type a word, and hit “filter” or “count”.

I like the idea of storing formulas, but I think I could go further to make it easier, so grandma can use it like a calculator backed by a database. As it is, she buys crap like QuickBooks to balance her checkbook. It’s not that it’s too hard to do; it’s that the UI of spreadsheets is so horrible.

I think we can do better, and I’m toying with the idea of building a better spreadsheet. There has to be a better interface.