Interview with Rich Seddon on m2eclipse Testing


February 24, 2009 By Tim O'Brien

In a previous post entitled m2e Roadmap, Jason discussed Sonatype’s commitment to quality and testing for m2eclipse. As a follow-up, I did a quick interview with Rich Seddon and asked him some questions about Sonatype’s approach to testing Eclipse plugins. The interview is approximately five minutes long, and in it you’ll hear Rich give some detailed descriptions of the tools he uses to test the m2eclipse plugin.

[media id=1 width=320 height=70]

INTERVIEW TRANSCRIPT

Tim O’Brien: So, Rich could you introduce yourself?

Rich Seddon: Yeah, my name is Rich Seddon, and I’ve been brought on at Sonatype to produce a set of automated tests for the m2eclipse product. The reason for doing this is that there are a lot of performance improvements and bugs that we want to fix, but if we don’t have automated tests in place, we won’t know if we’re breaking things. And it has to be automated, because the number of platforms we support and the number of operating systems we run on is very large; the manual testing process is really much, much more than a small company can possibly handle.

TO: So, can you give me a sense of the difference between what you do and something like a basic JUnit test?

RS: Well, what I’m actually doing here is a full set of system tests for m2eclipse. In the test harness, I actually bring up m2eclipse on various platforms, drive the UI through a set of scenarios, and verify along the way that no errors have occurred, that the state of the UI is as expected, and that the state of the project under test is as expected. You might say that what I’m doing is trying to capture, in an automated fashion, the types of usage that our users actually go through from day to day.

TO: What kind of tools are you using for this?

RS: Right now I’m using kind of a hybrid approach. There are two approaches to doing Eclipse UI testing. One of them is to drive it with a sort of UI robot, where the robot pushes buttons and manipulates trees and things like that. The other approach is a white-box testing approach, where you bring up Eclipse in a regular old unit test and drive Eclipse at the API level from the unit test. Both of these approaches have their advantages and disadvantages.

The reason I got attracted to this position is that, in the last few years, a few of these sort of hybrid approaches have come along. One of them is called SWT POD (sp?), there’s TPTP from Eclipse, and then there was Abbot. There’s also a commercial company, Instantiations, which has a product called WindowTester. These allow you to do both kinds of testing, UI-robot testing and API testing, at the same time. What they do is record your actions as you drive Eclipse through a set of scenarios and produce a JUnit test that you can then edit.
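To make the record-then-edit idea concrete, here is a rough sketch of what such a generated-and-edited UI test might look like. This is purely illustrative, not from Rich’s actual test suite; the API shown resembles SWTBot’s `SWTWorkbenchBot`, and it can only run inside a launched Eclipse workbench, so project names and menu labels here are assumptions.

```java
import static org.junit.Assert.assertTrue;

import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
import org.junit.Test;

public class NewMavenProjectScenarioTest {

    // The bot drives the running Eclipse workbench like a user would.
    private final SWTWorkbenchBot bot = new SWTWorkbenchBot();

    @Test
    public void createProjectThroughWizard() {
        // Recorded UI-robot steps: walk the File > New menu.
        bot.menu("File").menu("New").menu("Project...").click();

        // Hand-edited afterwards: fill in the wizard fields.
        // ("demo-project" is a made-up name for this sketch.)
        bot.tree().expandNode("Maven").select("Maven Project");
        bot.button("Next >").click();
        bot.button("Finish").click();

        // API-level verification mixed in with the robot steps:
        // check workbench state rather than pixels on the screen.
        assertTrue("project should appear in the workspace",
                bot.viewByTitle("Package Explorer")
                   .bot().tree().getAllItems().length > 0);
    }
}
```

The point of the hybrid style is visible here: the robot calls (`menu`, `click`) came from recording, while the assertions at the end are the kind of API-level checks you add by editing the generated test.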

TO: Is this run as a part of the Maven build for m2eclipse?

RS: Yeah, it is run in… integration using the Maven OSGi plugin. There’s an OSGi test runner plug-in that actually brings up Eclipse, loads in the unit tests, and then runs them inside the running Eclipse instance against the project.
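For readers unfamiliar with this setup, a build configuration along these lines is what such an OSGi test runner implies: a Maven plugin bound to the integration-test phase that launches an Eclipse/OSGi runtime and executes the tests inside it. The coordinates below are placeholders, not the actual plugin Rich is using, which isn’t named in the interview.

```xml
<!-- Sketch only: groupId/artifactId/version are hypothetical placeholders. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.example.osgi</groupId>
      <artifactId>osgi-test-runner-plugin</artifactId>
      <version>1.0</version>
      <executions>
        <execution>
          <!-- Run during integration-test, after the plugin jar is packaged. -->
          <phase>integration-test</phase>
          <goals>
            <goal>test</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- The runner provisions an Eclipse instance, installs the
             bundle under test plus the test bundle, and runs JUnit
             inside that OSGi runtime. -->
        <testSuite>org.example.m2e.tests</testSuite>
      </configuration>
    </plugin>
  </plugins>
</build>
```

In today’s terms, Eclipse Tycho’s `tycho-surefire-plugin` fills this role, but whatever the exact runner, the shape is the same: package, launch OSGi, run the tests in-process.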

TO: So, what sorts of things have you found through the testing that we wouldn’t have found if we hadn’t been doing this?

RS: Well, I’ve found quite a few things. There have certainly been some deadlock situations that I don’t think we would have been able to reproduce. We would have gotten reports of them, but we never would have been able to reproduce them in-house, because they don’t happen that often. But when you have these tests running continuously and you’re capturing their output when things go wrong, that allows us to find intermittent bugs, which is very difficult to do with manual testing.

The other thing I’m finding, though, is that because I’m actually driving Eclipse through an automated scenario, it forces me to pay very close attention to Eclipse itself. So I actually think even the manual testing I’ve been doing has benefited from this process, because I have to think very carefully about how I’m driving Eclipse through various scenarios in order to produce these tests.

TO: I notice that sometimes the bugs that people find in an Eclipse plug-in have nothing to do with the plug-in itself. Are you testing different distributions of Eclipse, like different combinations of plug-ins?

RS: I’m not doing much of that yet; I just got started about a month ago, but it’s on the road map. Right now the automated tests are just running in Eclipse 3.4, essentially with all of the plug-ins that m2eclipse could ever need: WTP, Subclipse, and all kinds of things like that. But yes, it’s on the road map. In maybe a month or two, I’m going to start expanding the set of platforms, with different sets and different combinations of plug-ins. Certainly Eclipse 3.3, possibly Eclipse 3.2, IBM RAD, and of course running on different operating systems; these are all on the road map. And once I’ve got the automated tests running (this is the advantage of automation, right?), I can really easily expand them to get good quality testing on all these different platforms.

TO: Well, thanks for taking the time to talk to us. And, we’ll check with you in a few months to see what progress you’ve made.

RS: All right, sounds good.