This topic is dedicated to an important and powerful tool for measuring application performance: the Java Microbenchmark Harness (JMH). We will start from the very basics: how to set up a Maven project for benchmarking, make the required configuration, and run a simple benchmark with the default settings. Then you will see how to run a benchmark with custom settings. At the end, you will explore alternative ways to set up and run a benchmarking project.
Setting up the environment
Let's create a Maven project from the command line using the archetype described in the JMH GitHub repo. It will set up a project with the required dependencies and configuration. You could set the project up another way, but then there is no guarantee you won't run into issues, so we will follow the official instructions and use Maven.
An archetype is a project template that Maven uses to create a project. When you create a Maven project from IDEA, it uses maven-archetype-quickstart, but in our case we will use another one. You can find its name on the fourth line of the command below: jmh-java-benchmark-archetype.
/* Unix */
mvn archetype:generate \
-DinteractiveMode=false \
-DarchetypeGroupId=org.openjdk.jmh \
-DarchetypeArtifactId=jmh-java-benchmark-archetype \
-DgroupId=org.sample \
-DartifactId=test \
-Dversion=1.0
/* Windows */
mvn archetype:generate ^
-DinteractiveMode=false ^
-DarchetypeGroupId=org.openjdk.jmh ^
-DarchetypeArtifactId=jmh-java-benchmark-archetype ^
-DgroupId=org.sample ^
-DartifactId=test ^
-Dversion=1.0

Now, let's choose a destination folder for the project and run the appropriate command in that folder from the command line. Maven will download all the necessary files and create the project named test (-DartifactId=test).
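The exact contents may vary with the archetype version, but you should get a layout roughly like this, with a generated MyBenchmark class inside the org.sample package (the package name comes from -DgroupId=org.sample):

test
├── pom.xml
└── src
    └── main
        └── java
            └── org
                └── sample
                    └── MyBenchmark.java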
So, the first step is completed. It's time to explore what's inside the project and understand what each component does.
Exploring the project components
If you open and explore pom.xml, you will see a large configuration file. Here, we will focus on three sections that are important for understanding how JMH runs with this configuration. They are:
Dependencies
Properties
Plugins
In the first section, you will have two dependencies:
JMH Core. The JMH tool itself for building, running, and analyzing benchmarks written in JVM languages.
JMH Generators: Annotation Processors. A benchmark generator that uses annotation processors. You will use it to configure benchmark settings with annotations.
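For reference, the generated dependencies section should look roughly like this (the exact version is set by the jmh.version property shown below):

<dependencies>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>${jmh.version}</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>${jmh.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>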
In the properties section, you will see something like this:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<!--
JMH version to use with this project.
-->
<jmh.version>1.35</jmh.version>
<!--
Java source/target to use for compilation.
-->
<javac.target>1.8</javac.target>
<!--
Name of the benchmark Uber-JAR to generate.
-->
<uberjar.name>benchmarks</uberjar.name>
</properties>

The last property is the Uber-JAR name, which is benchmarks. We need this packaging type to run benchmarks from the command line. Remember the name; you will need it later when running your benchmarks.
Finally, in the plugins section, there are the Maven Compiler Plugin and the Maven Shade Plugin; the latter is responsible for packaging the project into an Uber-JAR.
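The relevant part of the Shade Plugin configuration ties things together: it names the Uber-JAR using the uberjar.name property and sets JMH's launcher as the main class. In the generated pom.xml, it looks roughly like this (plugin version and some extra settings omitted):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <finalName>${uberjar.name}</finalName>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>org.openjdk.jmh.Main</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>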
With these configurations, we can write benchmarks and run them.
Running the first benchmark
So, we already have a project with the necessary configuration. Now, we just need to write a small benchmark and run it. Let's remove the comments from the testMethod() of the MyBenchmark class and write some code instead. Note that each benchmark method must have the @Benchmark annotation; otherwise, it won't be recognized as a benchmark.
We will test a simple code sample. In practice, you may not want to measure the map creation time; JMH provides functionality to exclude such code from measurement, but you'll learn about it in due time. For now, our results will also include the map creation time.
package org.sample;

import org.openjdk.jmh.annotations.Benchmark;

import java.util.HashMap;
import java.util.Map;

public class MyBenchmark {

    @Benchmark
    public String testMethod() {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "A");
        return map.get(1);
    }
}

As you might remember, the JVM performs dead code elimination if you write code whose result is never used. To avoid such a situation, the method returns the entry value.
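Another way to prevent dead code elimination, shown here as a minimal sketch (the method name is arbitrary), is to sink values into JMH's Blackhole class; it is handy when a benchmark produces more than one value:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

import java.util.HashMap;
import java.util.Map;

public class MyBenchmark {

    @Benchmark
    public void testMethodWithBlackhole(Blackhole bh) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "A");
        // consume() marks the value as "used", so the JVM cannot eliminate the code
        bh.consume(map.get(1));
    }
}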
To run the benchmark, first open your project folder in the command line and run the mvn clean verify command. This creates the target folder inside your project, which contains the benchmarks Uber-JAR file you will execute. Now, run the java -jar target/benchmarks.jar command from the same place. When all benchmarks finish, you will see a message like the one below.
In our case, there is just one method, but if there are more, you can explicitly specify which benchmark to run. For instance, to run only testMethod(), the command is java -jar target/benchmarks.jar testMethod (the argument is treated as a regular expression matched against benchmark names). If you don't specify a name and have multiple benchmarks, they'll run one by one.
// Section 1 - Environment info
/* ... */
// Section 2 - Benchmark info
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.MyBenchmark.testMethod
// Section 3 - Iteration output
# Run progress: 0,00% complete, ETA 00:08:20
# Fork: 1 of 5
# Warmup Iteration 1: 32354813,827 ops/s // 10 seconds
# Warmup Iteration 2: 33237144,499 ops/s // 10 seconds
# Warmup Iteration 3: 29950004,229 ops/s // 10 seconds
# Warmup Iteration 4: 22132665,384 ops/s // 10 seconds
# Warmup Iteration 5: 27434002,069 ops/s // 10 seconds
Iteration 1: 34581742,854 ops/s // 10 seconds
Iteration 2: 33504374,005 ops/s // 10 seconds
Iteration 3: 36413999,946 ops/s // 10 seconds
Iteration 4: 36256528,915 ops/s // 10 seconds
Iteration 5: 36339061,181 ops/s // 10 seconds
# Run progress: 20,00% complete, ETA 00:06:42
# Fork: 2 of 5
/** Warmup iterations and iteration data **/
# Run progress: 40,00% complete, ETA 00:05:01
# Fork: 3 of 5
/** Warmup iterations and iteration data **/
# Run progress: 60,00% complete, ETA 00:03:21
# Fork: 4 of 5
/** Warmup iterations and iteration data **/
# Run progress: 80,00% complete, ETA 00:01:40
# Fork: 5 of 5
# Warmup Iteration 1: 21229268,528 ops/s
# Warmup Iteration 2: 21105387,390 ops/s
# Warmup Iteration 3: 21740334,267 ops/s
# Warmup Iteration 4: 16963808,167 ops/s
# Warmup Iteration 5: 18290098,057 ops/s
Iteration 1: 19665236,170 ops/s
Iteration 2: 19800213,880 ops/s
Iteration 3: 27456219,881 ops/s
Iteration 4: 23776323,726 ops/s
Iteration 5: 28538151,918 ops/s
// Section 4 - Results
Result "org.sample.MyBenchmark.testMethod":
25417752,517 ±(99.9%) 4824294,111 ops/s [Average]
(min, avg, max) = (17784316,018, 25417752,517, 36413999,946), stdev = 6440294,614
CI (99.9%): [20593458,406, 30242046,628] (assumes normal distribution)
# Run complete. Total time: 00:08:23
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.
NOTE: Current JVM experimentally supports Compiler Blackholes, and they are in use. Please exercise
extra caution when trusting the results, look into the generated code to check the benchmark still
works, and factor in a small probability of new VM bugs. Additionally, while comparisons between
different JVMs are already problematic, the performance difference caused by different Blackhole
modes can be very significant. Please make sure you use the consistent Blackhole mode for comparisons.
Benchmark Mode Cnt Score Error Units
MyBenchmark.testMethod thrpt 25 25417752,517 ± 4824294,111 ops/s
Here you can see the default benchmark configuration. It may change in future JMH versions, so don't expect to see exactly the same defaults every time.
Let's take a look at what is shown here in detail. The message starts with the environment info, which isn't important here, so we've skipped it. In the first two lines of section 2, the message tells us that a single benchmark cycle consists of five warmup iterations and five measurement iterations, 10 seconds each. After that, you see that all benchmarks are executed in one thread, that the benchmark will run in the Throughput mode, and the full name of your benchmark method. The rest of the message is printed gradually during benchmark execution.

In section 3, the first line shows an ETA (estimated time of arrival) of 8 minutes 20 seconds, which equals 500 seconds. The next lines tell us there will be 5 forks, each consisting of 5 warmup iterations and 5 measurement iterations. Each iteration lasts 10 seconds, so each fork takes 100 seconds. Inside a fork, each line shows the measured throughput, that is, how often testMethod() was executed. For example, on the first line you see 32354813,827 ops/s, which is the average number of method executions per second during that 10-second iteration. Finally, in section 4 you see the final results: the average score, the minimum and maximum, the standard deviation, and the 99.9% confidence interval.
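The Throughput mode shown above is only the default. As an illustrative sketch (not part of the generated project), the reporting mode and its time unit can be changed with annotations:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class MyBenchmark {

    // Report the average time per call in nanoseconds instead of ops/s
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public String testMethod() {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "A");
        return map.get(1);
    }
}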
Specifying the number of iterations
Now that you know how to run a benchmark with the default settings, let's learn how to adjust them to our needs. In the previous section, the benchmark was executed with 5 forks, each with 5 warmup iterations and 5 measurement iterations. Let's modify these settings one by one using annotations.
@Benchmark
@Fork(2)
public String testMethod() {
Map<Integer, String> map = new HashMap<>();
map.put(1, "A");
return map.get(1);
}

In the code above, we've explicitly specified 2 forks. All you need to do now is run mvn clean verify and then run the benchmark.
Remember that each time you modify your benchmarks, you need to run the mvn clean verify command; otherwise, the new configuration won't take effect.
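So the typical edit-and-measure cycle from the project folder looks like this (assuming the archetype defaults used above):

# rebuild the Uber-JAR so the new annotations take effect
mvn clean verify
# rerun the benchmark
java -jar target/benchmarks.jar testMethod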
In this code, we've specified only the number of forks, but the annotation has another useful parameter.
@Benchmark
@Fork(value = 2, warmups = 2)
public String testMethod() {
Map<Integer, String> map = new HashMap<>();
map.put(1, "A");
return map.get(1);
}

Here, you will have 4 forks in total: the first two are executed as warmup forks, whose results are discarded, and after them you will have two measured forks.
Great! You have mastered the settings for forks; now let's move on to the settings for iterations inside a fork.
@Benchmark
@Fork(2)
@Measurement(iterations = 3)
@Warmup(iterations = 2)
public String testMethod() {
Map<Integer, String> map = new HashMap<>();
map.put(1, "A");
return map.get(1);
}
You can also use the command line to run benchmarks with the given configuration. In this case, you don't need to rerun the mvn clean verify command after changing the settings, because they are passed at run time. For instance, the example above would run with the java -jar ./target/benchmarks.jar testMethod -f 2 -i 3 -wi 2 command. The java -jar ./target/benchmarks.jar -h command displays all available arguments.
In this benchmark, there are 2 forks: each will perform 2 warmup iterations and 3 measurement iterations.
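For reference, here are a few more options you are likely to use (a non-exhaustive sample; run -h for the full list):

java -jar target/benchmarks.jar testMethod -f 2 -wi 2 -i 3   # forks, warmup and measurement iterations
java -jar target/benchmarks.jar testMethod -w 5s -r 5s       # warmup and measurement time per iteration
java -jar target/benchmarks.jar testMethod -t 4              # number of worker threads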
Alternative ways of launching benchmarks
So far, you've explored how to run benchmarks from the command line, but we mentioned that there are other ways to do that. This section will introduce you to two approaches:
JMH Plugin. If you visit the official GitHub repo, you will find quite a simple explanation of what it is: a plugin that allows you to run JMH benchmarks in the same way as JUnit tests. It requires little configuration: only the two dependencies you are already familiar with:
<dependencies>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>1.35</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>1.35</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

Just create a regular Maven project from your IDE. Then, add the jmh package inside the java folder and write your benchmarks; run them using the appropriate gutter icon. This is all you need to do. As you can see, you can run either a single benchmark or all benchmarks inside a class.
Running from the main() method. This approach is also easy to configure. Assuming you have the same project from the previous point, you can just add the main() method and set up the run configuration via the OptionsBuilder class as shown below:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.HashMap;
import java.util.Map;

public class App {

    @Benchmark
    public String testMethod() {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "A");
        return map.get(1);
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(App.class.getSimpleName())
                .forks(5)
                .measurementIterations(10)
                .warmupIterations(5)
                .build();

        new Runner(opt).run();
    }
}

The code above will run a benchmark similarly to this one with annotations:
@Benchmark
@Fork(5)
@Measurement(iterations = 10)
@Warmup(iterations = 5)
public String testMethod() {
    Map<Integer, String> map = new HashMap<>();
    map.put(1, "A");
    return map.get(1);
}

Here you don't even need the JMH plugin. An empty project with the two required dependencies is enough.
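The OptionsBuilder offers many more chainable settings. As a brief sketch, here are a couple of other methods that exist on the builder and mirror annotations (Mode comes from org.openjdk.jmh.annotations, TimeUnit from java.util.concurrent):

Options opt = new OptionsBuilder()
        .include(App.class.getSimpleName())
        .forks(2)
        .warmupIterations(2)
        .measurementIterations(3)
        .mode(Mode.AverageTime)         // like @BenchmarkMode(Mode.AverageTime)
        .timeUnit(TimeUnit.NANOSECONDS) // like @OutputTimeUnit(TimeUnit.NANOSECONDS)
        .build();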
Conclusion
Now you know the fundamentals of this important technique: how to set up a project and run basic benchmarks with the JMH tool. You learned how to set up a project using an archetype, as well as how to write and run a benchmark with both default and custom configurations. You also learned about some alternative approaches to launching benchmarks, such as using the JMH Plugin or running from the main() method. This topic covered the basics of JMH; there are many other configuration options that will help you design various benchmark tests.