I'd like to share my take on parameterized unit tests: how we write them, and how you probably don't yet (but will want to).
I could open with a beautiful phrase about how important it is to test, and to test correctly, but plenty has been said and written about that before me. Instead I will simply try to summarize and highlight what, in my opinion, people rarely use (or understand), which is what this article is really about.
The main goal of the article is to show how you can (and should) stop cluttering your unit tests with object-creation code, and how to create test data declaratively when mock()/any() is not enough — and there are plenty of such situations.
Let's create a Maven project and add JUnit 5, junit-jupiter-params and Mockito to it.
To keep things from getting boring, we will start with the test itself, as TDD advocates like to do. We need a service to test declaratively; any will do, so let it be HabrService.
Let's create a test class, HabrServiceTest, and add a HabrService field to it:
public class HabrServiceTest {
    private HabrService habrService;
    @Test
    void handleTest() {
    }
}
Create the service via the IDE (a quick shortcut away) and add the @InjectMocks annotation to the field.
Let's get straight to the test: in our small application HabrService will have a single handle() method that takes a single HabrItem argument, so the test now looks like this:
public class HabrServiceTest {
    @InjectMocks
    private HabrService habrService;
    @Test
    void handleTest() {
        HabrItem item = new HabrItem();
        habrService.handle(item);
    }
}
Let's add a handle() method to HabrService that takes a HabrItem and returns the id of the new post once it has been moderated and saved to the database; we will also create the HabrItem class itself. The test now compiles, but fails.
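For reference, here is roughly what a first-pass skeleton of the service might look like at this point (the article does not show this intermediate version, so treat it as an assumption):

public class HabrService {
    // first-pass skeleton: returns nothing useful yet, just enough for the test to compile
    public Long handle(final HabrItem item) {
        return null;
    }
}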
The test fails because we have also added an assertion on the expected return value:
public class HabrServiceTest {
    @InjectMocks
    private HabrService habrService;
    @BeforeEach
    void setUp() {
        initMocks(this);
    }
    @Test
    void handleTest() {
        HabrItem item = new HabrItem();
        Long actual = habrService.handle(item);
        assertEquals(1L, actual);
    }
}
I also want to make sure that during the handle() call ReviewService and PersistenceService were invoked, strictly one after the other, exactly once each, and that no other methods were called. In other words, like this:
public class HabrServiceTest {
    @InjectMocks
    private HabrService habrService;
    @BeforeEach
    void setUp() {
        initMocks(this);
    }
    @Test
    void handleTest() {
        HabrItem item = new HabrItem();
        Long actual = habrService.handle(item);
        InOrder inOrder = Mockito.inOrder(reviewService, persistenceService);
        inOrder.verify(reviewService).makeRewiew(item);
        inOrder.verify(persistenceService).makePersist(item);
        inOrder.verifyNoMoreInteractions();
        assertEquals(1L, actual);
    }
}
Add reviewService and persistenceService fields to the test class, create the corresponding classes, and give them makeRewiew() and makePersist() methods, respectively. Now everything compiles, but of course the test is red.
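Minimal sketches of these two collaborators could look something like this (my assumption — the article never shows them; each would live in its own file):

public class ReviewService {
    // moderation details are irrelevant here: the service will be mocked in the test
    public HabrItem makeRewiew(final HabrItem item) {
        return item;
    }
}

public class PersistenceService {
    // persistence details are irrelevant here: the service will be mocked in the test
    public Long makePersist(final HabrItem item) {
        return 0L;
    }
}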
In the context of this article the ReviewService and PersistenceService implementations are not that important anyway; the HabrService implementation is what matters, so let's make it a little more interesting than it is now:
public class HabrService {
    private final ReviewService reviewService;
    private final PersistenceService persistenceService;
    public HabrService(final ReviewService reviewService, final PersistenceService persistenceService) {
        this.reviewService = reviewService;
        this.persistenceService = persistenceService;
    }
    public Long handle(final HabrItem item) {
        HabrItem reviewedItem = reviewService.makeRewiew(item);
        Long persistedItemId = persistenceService.makePersist(reviewedItem);
        return persistedItemId;
    }
}
Using when().then() constructs we stub the behavior of the auxiliary components; as a result, our test looks like this and is now green:
public class HabrServiceTest {
    @Mock
    private ReviewService reviewService;
    @Mock
    private PersistenceService persistenceService;
    @InjectMocks
    private HabrService habrService;
    @BeforeEach
    void setUp() {
        initMocks(this);
    }
    @Test
    void handleTest() {
        HabrItem source = new HabrItem();
        HabrItem reviewedItem = mock(HabrItem.class);
        when(reviewService.makeRewiew(source)).thenReturn(reviewedItem);
        when(persistenceService.makePersist(reviewedItem)).thenReturn(1L);
        Long actual = habrService.handle(source);
        InOrder inOrder = Mockito.inOrder(reviewService, persistenceService);
        inOrder.verify(reviewService).makeRewiew(source);
        inOrder.verify(persistenceService).makePersist(reviewedItem);
        inOrder.verifyNoMoreInteractions();
        assertEquals(1L, actual);
    }
}
The scaffolding for demonstrating the power of parameterized tests is ready.
Let's add a hubType field to HabrItem, the service's request model, and create a HubType enum with a few types in it:
public enum HubType {
    JAVA, C, PYTHON
}
and add a getter and a setter for the new HubType field to the HabrItem model.
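The article never shows HabrItem itself; at this point it is presumably just a plain mutable POJO along these lines (a sketch, with more fields added later):

public class HabrItem {
    private HubType hubType;
    public HubType getHubType() {
        return hubType;
    }
    public void setHubType(final HubType hubType) {
        this.hubType = hubType;
    }
}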
Suppose a switch is hidden somewhere in the depths of HabrService that does something with the request depending on the hub type, and in the test we want to cover each of those cases. The naive test would look like this:
@Test
void handleTest() {
    HabrItem reviewedItem = mock(HabrItem.class);
    HabrItem source = new HabrItem();
    source.setHubType(HubType.JAVA);
    when(reviewService.makeRewiew(source)).thenReturn(reviewedItem);
    when(persistenceService.makePersist(reviewedItem)).thenReturn(1L);
    Long actual = habrService.handle(source);
    InOrder inOrder = Mockito.inOrder(reviewService, persistenceService);
    inOrder.verify(reviewService).makeRewiew(source);
    inOrder.verify(persistenceService).makePersist(reviewedItem);
    inOrder.verifyNoMoreInteractions();
    assertEquals(1L, actual);
}
We can make this prettier and more convenient by turning the test into a parameterized one and taking the enum value as a parameter; the test declaration then looks like this:
@ParameterizedTest
@EnumSource(HubType.class)
void handleTest(final HubType type)
Nice and declarative: the test is run once for every constant of the enum. The annotation also has parameters that let us include or exclude specific constants.
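For instance, the standard names and mode attributes of @EnumSource can restrict which constants are used — a quick illustration:

// runs the test for every HubType except C
@ParameterizedTest
@EnumSource(value = HubType.class, names = "C", mode = EnumSource.Mode.EXCLUDE)
void handleTest(final HubType type)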
But perhaps I haven't convinced you yet that parameterized tests are a good thing. Let's add a new editCount field to the original HabrItem request, holding the number of thousands of times a Habr author edits the article before posting it so that you like it at least a little, and assume that somewhere in the depths of HabrService there is some logic that does who-knows-what depending on how hard the author tried. What if I don't want to write 5 or 55 tests for every possible editCount, but want to test declaratively and list all the values I care about in one place? Nothing could be simpler: using the parameterized tests API, the method declaration becomes something like this:
@ParameterizedTest
@ValueSource(ints = {0, 5, 14, 23})
void handleTest(final int editCount)
There is a catch, though: now we want to pass two values to the test method at once, declaratively. Another excellent option is @CsvSource, which is perfect for simple parameters with a simple expected value (extremely handy for testing utility classes); a short illustration follows below. But what if the object gets much more complicated — say, around ten fields, and not only primitives and standard Java types?
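For completeness, here is what a @CsvSource test for a simple utility method typically looks like (a hypothetical example, not part of our project): each CSV line becomes one test invocation, and its columns are mapped onto the method parameters.

// hypothetical utility test: each line is "input, expected"
@ParameterizedTest
@CsvSource({
        "habr, HABR",
        "java, JAVA"
})
void toUpperCaseTest(final String input, final String expected) {
    assertEquals(expected, input.toUpperCase());
}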
The @MethodSource annotation comes to the rescue: the test method becomes noticeably shorter, there are no more setters in it, and the incoming request is fed to the test method as a parameter:
@ParameterizedTest
@MethodSource("generateSource")
void handleTest(final HabrItem source) {
    HabrItem reviewedItem = mock(HabrItem.class);
    when(reviewService.makeRewiew(source)).thenReturn(reviewedItem);
    when(persistenceService.makePersist(reviewedItem)).thenReturn(1L);
    Long actual = habrService.handle(source);
    InOrder inOrder = Mockito.inOrder(reviewService, persistenceService);
    inOrder.verify(reviewService).makeRewiew(source);
    inOrder.verify(persistenceService).makePersist(reviewedItem);
    inOrder.verifyNoMoreInteractions();
    assertEquals(1L, actual);
}
The @MethodSource annotation takes the string "generateSource". What is that? It is the name of the method that will build the required model for us; its declaration looks like this:
private static Stream<Arguments> generateSource() {
    HabrItem habrItem = new HabrItem();
    habrItem.setHubType(HubType.JAVA);
    habrItem.setEditCount(999L);
    return nextStream(() -> habrItem);
}
For convenience, I moved the creation of the argument stream, nextStream, into a separate test utility class:
public class CommonTestUtil {
    private static final Random RANDOM = new Random();
    public static <T> Stream<Arguments> nextStream(final Supplier<T> supplier) {
        return Stream.generate(() -> Arguments.of(supplier.get())).limit(nextIntBetween(1, 10));
    }
    public static int nextIntBetween(final int min, final int max) {
        return RANDOM.nextInt(max - min + 1) + min;
    }
}
Now, when the test starts, the HabrItem request model is supplied declaratively as the test method parameter, and the test is executed as many times as there are arguments generated by our test utility — in our case, from 1 to 10.
This is especially convenient when the model in the argument stream is built not from hardcoded values, as in our example, but with randomizers (hello, flaky tests — although if such a test fails, it usually points to a real problem).
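A randomized argument source could look something like this, reusing the CommonTestUtil helpers above (my sketch, not code from the article):

private static Stream<Arguments> generateRandomSource() {
    return CommonTestUtil.nextStream(() -> {
        HabrItem item = new HabrItem();
        // a random hub type and edit count for every generated argument
        HubType[] types = HubType.values();
        item.setHubType(types[CommonTestUtil.nextIntBetween(0, types.length - 1)]);
        item.setEditCount((long) CommonTestUtil.nextIntBetween(0, 999));
        return item;
    });
}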
In my opinion, this is already great: the test now describes only the behavior of our stubs and the expected results.
But here's the catch: a new field, text, is added to the HabrItem model — an array of strings that may or may not be large, it doesn't matter. The point is that we don't want to clutter our tests, we don't need random data, we want a strictly defined model with specific data, and we don't want to build it in the test or anywhere else. It would be great if we could take the body of a json request from anywhere (from Postman, for example), turn it into a mock file, and build the model declaratively in the test by specifying only the path to the json file with the data.
Excellent. Let's use the @JsonSource annotation, which takes a path parameter with the relative file path and a target class. Except... there is no such annotation in parameterized tests, much as I would like there to be.
Let's write it ourselves.
In JUnit, an ArgumentsProvider is responsible for handling the source annotations used with @ParameterizedTest, so let's write our own JsonArgumentProvider:
public class JsonArgumentProvider implements ArgumentsProvider, AnnotationConsumer<JsonSource> {
    private String path;
    private MockDataProvider dataProvider;
    private Class<?> clazz;
    @Override
    public void accept(final JsonSource jsonSource) {
        this.path = jsonSource.path();
        this.dataProvider = new MockDataProvider(new ObjectMapper());
        this.clazz = jsonSource.clazz();
    }
    @Override
    public Stream<Arguments> provideArguments(final ExtensionContext context) {
        return nextSingleStream(() -> dataProvider.parseDataObject(path, clazz));
    }
}
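nextSingleStream is not shown in the article; presumably it is a single-element sibling of nextStream in CommonTestUtil, roughly:

// assumed helper in CommonTestUtil: wraps one generated value into a single-element argument stream
public static <T> Stream<Arguments> nextSingleStream(final Supplier<T> supplier) {
    return Stream.of(Arguments.of(supplier.get()));
}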
MockDataProvider is a class for parsing mock json files (ObjectMapper here is Jackson's, ClassPathResource is Spring's); its implementation is extremely simple:
public class MockDataProvider {
    private static final String PATH_PREFIX = "json/";
    private final ObjectMapper objectMapper;
    public MockDataProvider(final ObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }
    public <T> T parseDataObject(final String name, final Class<T> clazz) {
        try {
            return objectMapper.readValue(new ClassPathResource(PATH_PREFIX + name).getInputStream(), clazz);
        } catch (final IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
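On its own the provider can be used like this (the file name here is just a placeholder for any file under classpath:/json/):

// hypothetical usage; "item.json" stands for a file under src/test/resources/json/
MockDataProvider provider = new MockDataProvider(new ObjectMapper());
HabrItem item = provider.parseDataObject("item.json", HabrItem.class);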
The mock provider is ready, and so is the argument provider for our annotation; all that remains is to add the annotation itself:
/**
 * Source annotation that builds the test argument from a mock json file.
 */
@Target({ElementType.ANNOTATION_TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@ArgumentsSource(JsonArgumentProvider.class)
public @interface JsonSource {
    /**
     * Path to the json file, relative to classpath:/json/
     *
     * @return the relative file path
     */
    String path() default "";
    /**
     * Target class into which the json is deserialized
     *
     * @return the target class
     */
    Class<?> clazz();
}
Hooray! Our annotation is ready to use, and the test method now looks like this:
@ParameterizedTest
@JsonSource(path = MOCK_FILE_PATH, clazz = HabrItem.class)
void handleTest(final HabrItem source) {
    HabrItem reviewedItem = mock(HabrItem.class);
    when(reviewService.makeRewiew(source)).thenReturn(reviewedItem);
    when(persistenceService.makePersist(reviewedItem)).thenReturn(1L);
    Long actual = habrService.handle(source);
    InOrder inOrder = Mockito.inOrder(reviewService, persistenceService);
    inOrder.verify(reviewService).makeRewiew(source);
    inOrder.verify(persistenceService).makePersist(reviewedItem);
    inOrder.verifyNoMoreInteractions();
    assertEquals(1L, actual);
}
With mock json files we can quickly produce as many of the objects we need as we like, and from now on there is no code anywhere that distracts from the essence of the test. Of course, you can often get away with mocks for building test data — but not always.
Summing up, I would like to say this: we often keep working the way we have worked for years, without realizing that some things can be done beautifully and simply using the standard API of libraries we have relied on for years without knowing all their capabilities.
P.S. This article is not an attempt to teach TDD concepts; I only added a TDD-style narrative around the test data to make the story a little clearer and more interesting.