Unfortunately, there are still many developers who remain unconvinced of the usefulness of automated testing, particularly if they have to author the tests themselves. Enter the Software Development Engineer in Test (SDET), or Software Engineer in Test (SET). Opinions vary about the roles and responsibilities of an SDET. This post is unlikely to conform to [your favorite company]’s description, but I expect it will have a lot in common.
SDETs are subject-matter experts in the testability of code.
This requires them to know …
1) … what kinds of tests to write.
There are different ways to test a product, and most products require a combination of test types. Common types include unit, integration, and functional (Google uses small, medium, and large).
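The post itself contains no code, but a minimal Python sketch (with a hypothetical `apply_discount` function and `Checkout` class) can illustrate how the "size" of a test grows with the number of collaborators it exercises:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100). Hypothetical example unit."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class Checkout:
    """Tiny collaborator so the larger test can wire two pieces together."""

    def __init__(self, percent: float):
        self.percent = percent

    def total(self, prices) -> float:
        return sum(apply_discount(p, self.percent) for p in prices)


class UnitTests(unittest.TestCase):
    """Small/unit: one function, no collaborators, milliseconds to run."""

    def test_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)


class IntegrationTests(unittest.TestCase):
    """Medium/integration: multiple units exercised together."""

    def test_checkout_total(self):
        self.assertEqual(Checkout(10).total([10.0, 20.0]), 27.0)
```

A functional (large) test would go one step further and drive the whole product through its external interface; the same layering idea applies.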
2) … approximately how many of each kind of test are needed.
Different types of tests have different strengths, weaknesses, and ROIs. Factors such as how long a test takes to create, execute, and maintain affect how many of each type yield the greatest value.
3) … techniques for refactoring product code so that it is testable.
Especially with legacy code, it is common to have to break dependencies before you can properly test a unit of code. For functional tests, the product needs to include hooks (e.g. identifiers for UI components, backdoors for initializing state, etc.) to facilitate testing.
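One common dependency-breaking technique is to inject the dependency instead of hard-wiring it. As a hedged illustration (the `Greeter` class and its clock are made up for this sketch), here is the before/after in Python:

```python
import datetime


# Before: a hard-wired dependency makes the unit hard to test in isolation.
class GreeterLegacy:
    def greeting(self) -> str:
        hour = datetime.datetime.now().hour  # hidden dependency on the real clock
        return "Good morning" if hour < 12 else "Good afternoon"


# After: the dependency is injected, so a test can control the clock.
class Greeter:
    def __init__(self, clock=datetime.datetime.now):
        self._clock = clock  # seam for tests; defaults to the real clock

    def greeting(self) -> str:
        return "Good morning" if self._clock().hour < 12 else "Good afternoon"


# A test can now pin the time deterministically:
fixed_9am = lambda: datetime.datetime(2024, 1, 1, 9, 0)
assert Greeter(clock=fixed_9am).greeting() == "Good morning"
```

The production behavior is unchanged, but the seam makes the unit deterministic under test; the same idea underlies the UI identifiers and state-initialization backdoors mentioned above.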
4) … that bad tests are much worse than no tests.
Bad tests might require an excessive amount of work to maintain, fail all at once (making it hard to locate the source of an issue), or produce false positives and negatives that lead to distrust of the tests.
I think bad tests are a major source of skepticism among developers. Many developers have been burned by first attempts at test automation that resulted in unreliable, unmaintainable tests.
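A small Python sketch (the `format_name` function is invented for illustration) shows one common shape of a bad test: it exercises the code but asserts nothing, so it passes even when the behavior breaks.

```python
import unittest


def format_name(first: str, last: str) -> str:
    """Hypothetical function under test."""
    return f"{last}, {first}"


class BadTest(unittest.TestCase):
    # False-positive risk: no assertion, so this "passes" no matter what
    # format_name returns. Green, but meaningless.
    def test_format_runs(self):
        format_name("Ada", "Lovelace")


class BetterTest(unittest.TestCase):
    # Asserts on observable behavior callers actually rely on.
    def test_format_name(self):
        self.assertEqual(format_name("Ada", "Lovelace"), "Lovelace, Ada")
```

A suite full of tests like `BadTest` reports success while catching nothing, which is exactly the kind of experience that breeds distrust.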
5) … how to choose great tools.
Different products require different testing tools. For example, MOQ and RhinoMocks are sufficient mocking libraries for greenfield development in C#. However, a legacy system in C# might require something like TypeMock Isolator to get poorly designed code under test.
You want to choose tools that make it easy to write good tests.
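The post names C# mocking libraries; as a language-neutral illustration of what such a tool buys you, here is the same idea with Python's standard-library `unittest.mock` (the `OrderService` and `mailer` are hypothetical):

```python
from unittest.mock import Mock


class OrderService:
    """Hypothetical service with an injected collaborator."""

    def __init__(self, mailer):
        self._mailer = mailer  # injected, so a test double can stand in

    def place(self, order_id: str) -> None:
        self._mailer.send(f"Order {order_id} confirmed")


# A good mocking tool makes the test short and intention-revealing:
mailer = Mock()
OrderService(mailer).place("42")
mailer.send.assert_called_once_with("Order 42 confirmed")
```

When the tool makes this easy, developers write more tests; when every test double is hand-rolled boilerplate, they stop.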
6) … how to mentor others.
The benefits of testable code go beyond the tests themselves. SDETs need to be able to explain why code changes made for testability are not merely in support of testing, but actually improve the quality of the code.
SDETs are not …
7) … lesser/beginner developers.
Based on the items above, I wouldn’t expect a beginner programmer to fill those shoes. Would you? I think this notion comes from developers who believe product code is more important than test code. In my experience, writing tests and making code testable is often more challenging than writing product code that works.
8) … the only ones writing tests.
Quality needs to be a priority for the entire team, and it is hard to achieve high levels of quality without tests. Also, product developers nearly always outnumber SDETs by a significant ratio (1:4 on my current team), and the amount of test code usually exceeds the amount of product code. That being the case, developers have to write tests, too, if you want a shot at adequate test coverage.
9) … forbidden to touch product code.
As mentioned in #3, a product often has to be changed to make it testable. Even when using TDD, you might still need to add hooks for functional tests.
Also, it shouldn’t surprise you that working within a code base makes you better equipped to test it. Refactoring has always been a quick way for me to familiarize myself with a code base. I think that SDETs should spend time implementing features, too, just not as much time as product developers.
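The "backdoors for initializing state" mentioned in #3 are one concrete place an SDET touches product code. A hedged Python sketch (the `InMemoryUserStore` and its `test_seed` hook are invented for illustration):

```python
class InMemoryUserStore:
    """Hypothetical product component with a test-only seeding hook."""

    def __init__(self):
        self._users = {}

    def register(self, name: str) -> None:
        # Normal production path; imagine validation, events, etc. here.
        self._users[name] = {"name": name}

    def test_seed(self, names) -> None:
        """Test-only backdoor: bulk-load state instead of driving the
        slow front door once per user in every functional test."""
        for name in names:
            self._users[name] = {"name": name}

    def exists(self, name: str) -> bool:
        return name in self._users


store = InMemoryUserStore()
store.test_seed(["alice", "bob"])
assert store.exists("alice")
```

Adding a hook like this requires changing product code, which is precisely why walling SDETs off from it is counterproductive.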
10) … permanent as a role.
I believe that if the SDET role is properly realized in an organization, it will eventually phase itself out. As product developers begin to see the value of testing and incorporate it into their daily routine, there will be less need for a distinction between product developers and SDETs.