One of the first applications we decided both to use internally and to optimize was Jira, as many development environments use this tool and can relate to its performance issues. In this post, I'll document the test environment that was created and the setup of the scripts used to perform the benchmark; the next post will detail the actual results observed.

Hardware & Software configuration

  • Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz
  • 32 GB RAM
  • ST3500418AS (note database optimizations to avoid disk IO issues below)
  • Ubuntu Linux 15.10 with current patches
  • Jira v6.4.12, the latest version supported by the Jira Data Generator add-on
  • PostgreSQL 9.4

Optimization Configuration

  • OS CPU frequency governor set to "performance"
    • This is necessary to obtain consistent performance during benchmarks, and should be used whenever repeatable results are required
  • PostgreSQL configured to disable sync writes for optimal disk write performance (not recommended for production environments) via:
    • fsync = off
    • synchronous_commit = off
    • Note:  This minimizes the impact of disk IO speed for writes
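The governor setting above can be applied with a small script. This is a minimal sketch assuming the standard Linux cpufreq sysfs layout; the SYSFS_BASE override is not part of the original setup and exists only so the loop can be exercised against a fake tree without root:

```shell
#!/bin/bash
# Sketch: force every core onto the "performance" governor.
# Assumes the standard Linux cpufreq sysfs layout; on a real system this
# must run as root, and SYSFS_BASE would be left at its default.
SYSFS_BASE="${SYSFS_BASE:-/sys/devices/system/cpu}"

set_governor() {
  local gov="$1"
  local f
  for f in "$SYSFS_BASE"/cpu[0-9]*/cpufreq/scaling_governor; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    echo "$gov" > "$f"
  done
}
```

Running `set_governor performance` before a benchmark run (and restoring the previous governor afterwards) keeps clock speeds from varying mid-test.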

Steps to install Jira:

  1. Downloaded and installed Jira with default settings
  2. Created the PostgreSQL database “Jira”
  3. Edited the startup script to allow a 16 GB heap, necessary for a complete data load
  4. Restarted Jira and performed the initial configuration, using the local PostgreSQL database
  5. Updated all plugins
  6. Installed the Jira Data Generation addon
  7. Ran metadata generation to replicate a previously documented test, with the following parameters:
    • 550 Projects
    • 628 custom fields
    • 10,000 users
    • Result:
      Created 550 projects in 0:00:06.340
      No workflows generation requested, skipping.
      Generated 10000 users in 0:03:53.598
      Generated 628 custom fields in 0:00:03.631, Field Managers refresh took 0:00:00.001.
      Generated 0 screen configs in 0:00:00.000.
      Generated custom permission scheme in 0:00:11.570
  8. Ran data generation, creating 450,000 issues (if async writes aren’t enabled on the database, this could take days to complete):
    • Result:
      Created 0 versions in 0:00:00.001
      – generating versions: 0:00:00.000
      Created 450000 issues in 0:22:10.471
      – generating issues: 0:05:44.502
      – generating comments: 0:01:26.974
      – generating worklogs: 0:00:29.316
      – generating custom field values: 0:14:29.643
  9. Copied heimdalldriver.jar and hazelcast-3.6.jar into /opt/atlassian/jira/lib
  10. Edited /var/atlassian/application-data/jira to change the JDBC URL to the local Heimdall server instance:
    • <url>jdbc:heimdall://</url>
    • <driver-class>com.heimdalldata.HeimdallDriver</driver-class>
  11. Restarted Jira
  12. Verified Jira was working properly
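As an aside on step 3, the heap is raised in Jira's setenv.sh startup script. A hedged excerpt follows; the variable names are Jira's standard setenv.sh settings, and the minimum-heap value is illustrative rather than taken from this test:

```shell
# Excerpt from Jira's bin/setenv.sh (standard variable names; the 16 GB
# maximum heap was needed for the data load to finish; minimum is illustrative)
JVM_MINIMUM_MEMORY="2048m"
JVM_MAXIMUM_MEMORY="16384m"
```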

At this point, the Heimdall driver is in place, and we can set up the scripts on the test machine.

Test client configuration

  • Intel(R) Core(TM) i7-4720HQ CPU @ 2.60GHz
  • 16 GB RAM
  • SSD 840 EVO 1TB
  • Ubuntu Linux 15.10 with current patches

Data files & Scripts:

  • ids.txt: a list of all project keys now loaded in Jira, extracted via pgAdmin III with the query "select pkey from public.project"
    • Example entry: UULS
  • urls.txt: a list of URLs to query as part of the benchmark. Note that not every URL queried in a real Jira pageview is included, as some are harder to replicate than others.
  • headers.txt: the HTTP headers sent with each request:
    • User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:43.0) Gecko/20100101 Firefox/43.0.4 Waterfox/43.0.4
      X-Requested-With: XMLHttpRequest
      Cookie: atlassian.xsrf.token=…
  • Note: the Cookie value needs to be captured after a login to Jira, so that the test has access to Jira as an authenticated user
  • The benchmark script:
    • #!/bin/bash
      limit=500
      eval 'ids=($(cat ids.txt))'
      while true; do
        # Pick a random project key (from the first 100) and a random count
        let "rand=${RANDOM} % ${#ids[@]} % 100"
        id="${ids[${rand}]}"
        let "count=${RANDOM} % ${limit} + 1"
        curtime=$(date +%s%N | cut -b1-13)
        cat urls.txt | {
          # Substitute the timestamp, project key, and count into each URL template
          while read url; do
            url=$(echo $url | sed "s/{{ts}}/$curtime/" | sed "s/{{id}}/${id}/g" | sed "s/{{count}}/$count/")
            urls="$urls -k $url "
          done
          # Fetch all of the URLs in one curl invocation, timing the page view
          curl -s -o /dev/null -H "$(cat headers.txt)" -g ${urls} > /dev/null
          donetime=$(date +%s%N | cut -b1-13)
          let "elapsed=donetime-curtime"
          echo "$curtime, EXECUTE, $id, $count, $elapsed"
        }
      done
  • The latency script, executed on the Jira/database server; this induces latency at the network level, allowing the impact of latency between Jira and the database to be measured:
    • #!/bin/bash
      # Clear any leftover netem rule and record a zero-delay baseline
      tc qdisc del dev eno1 root netem
      timer=$(date +%s%N | cut -b1-13)
      echo "$timer, DELAY, 0"
      sleep 1m
      # Apply each delay (in ms) from list.txt for one minute at a time
      cat list.txt | while read delay; do
        timer=$(date +%s%N | cut -b1-13)
        echo "$timer, DELAY, $delay"
        tc qdisc add dev eno1 root netem delay ${delay}ms
        sleep 1m
        tc qdisc del dev eno1 root netem
      done
      tc qdisc del dev eno1 root netem
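The list.txt file read by the latency script holds one delay value, in milliseconds, per line. The exact values used aren't given here, so the following is a hypothetical example that simply ramps toward the roughly 18 ms maximum mentioned in the conclusion:

```shell
# Hypothetical list.txt contents: netem delays in milliseconds, one per line
cat > list.txt <<'EOF'
0.100
0.500
1
2
4
8
18
EOF
```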

When executed, the benchmark script provides output similar to:

1457041912832, EXECUTE, UULS, 302, 711
1457041913554, EXECUTE, HTSI, 210, 136
1457041913700, EXECUTE, BHMI, 191, 133
1457041913842, EXECUTE, INDI, 92, 124
1457041913976, EXECUTE, OAVR, 57, 116

The benefit of this is that the output can be sorted together with the Heimdall logs, and the actual SQL events relating to a given execution can be correlated based on the timestamps.  Likewise, the output of the latency script is:

1457042187225, DELAY, 0
1457042247245, DELAY, 0.100

This output can also be sorted in with the Heimdall logs and the individual pageview logs to allow tracking of what happened under different conditions.  Once merged with the other data, the following command can be used to break the logs into individual parts, for analysis at each latency level:

csplit --prefix=outfile infile "/DELAY/" "{*}"
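Putting the pieces together, here is a minimal sketch of the merge-and-split step; the file names and the sample log lines are illustrative stand-ins for the real benchmark and latency outputs:

```shell
#!/bin/bash
# Illustrative sketch: merge benchmark and latency logs by their leading
# epoch-millisecond timestamp, then split at each DELAY marker so every
# output file covers one latency level. Sample data stands in for real logs.
workdir=$(mktemp -d) && cd "$workdir"
printf '100, EXECUTE, UULS, 302, 711\n' > bench.log
printf '50, DELAY, 0\n150, DELAY, 0.100\n' > delay.log

# Numeric merge on the first comma-separated field (the timestamp)
sort -n -t, -k1,1 bench.log delay.log > infile

# One outfileNN per DELAY marker; outfile00 holds anything before the first
csplit --quiet --prefix=outfile infile "/DELAY/" "{*}"
```

After this runs, each outfileNN contains the EXECUTE lines recorded while one particular delay was in effect.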

The output files can then be imported into a spreadsheet program and analyzed in more detail.

When starting the benchmarking, Java takes time to warm up and the caches take time to populate, so it is recommended that the script run for an extended period, at least 10 minutes, before data gathering starts.  This allows the JIT compiler to kick in, common content to be cached, etc.

The results of this analysis will be presented in the next blog segment, and will show the impact of caching on Jira, as well as caching vs. non-caching performance as the latency is increased from zero up to about 18ms.  Hint: latency between the application and the database is deadly, even in very small amounts.