Using Artillery to test REST Services

Stay awhile and listen… load testing is a crucial step in ensuring the performance and scalability of REST-like web services. Artillery is a popular open-source load testing tool that lets you simulate high user traffic and measure a system’s response under different loads. In this guide, we’ll walk through the process of creating an Artillery load test for a REST service. The official Artillery docs cover everything here in more depth.


Key Concepts

  • Test Script: Artillery test scripts or test definitions are usually written as YAML (with the option for near-infinite customization with Node.js code, which may use any public or private npm packages). A test definition is composed of one or more scenarios.
  • Virtual Users: A virtual user emulates a real user interacting with your app. A “real user” may be a person, or an API client depending on your app. A virtual user executes a scenario. Every virtual user running the scenario is completely independent of other virtual users running the same scenario, just like users and API clients in the real world. No memory, network connections, cookies, or other state is shared between virtual users.
  • Scenarios: Each scenario is a series of steps representing a typical sequence of requests or messages sent by a user of an application. It is usually a sequence of actions, such as making an HTTP GET request followed by an HTTP POST request, and so on.
  • Phases: A load phase tells Artillery how many virtual users to create over a period of time. A production-grade load test will usually have multiple load phases, such as a warm-up phase, which gently increases load over a period of time, followed by one or more heavy load phases. Load phases are expressed as a duration plus an arrival rate; each arrival is a new virtual user.

Installing Artillery

Node.js is a prerequisite for Artillery; the installation procedure is documented on the official Node.js site.
To install Artillery globally, open a terminal or command prompt and run the following command:

npm install -g artillery

To test the installation run:

artillery

You should get something like this:

artillery command output

You can also check the installation by running the dino command:

artillery dino -m "Kernel Panic!" -r

And you should get a nice dinosaur like this one:

artillery dino command output

Test Script

Artillery uses YAML files to define test scripts and their scenarios. Create a new file with a .yml extension (e.g., asciiart-load-test.yml).
In this file we have root properties like config and scenarios, among others.

  • config is what defines how our load test will run, e.g. the URL of the system we’re testing, how much load will be generated, any plugins we want to use, and so on.
  • scenarios is where we define what the virtual users created by Artillery will do. A scenario is usually a sequence of steps that describes a user session in the app.
  • plugins, which lives under config, is where you enable plugins; there are built-in and third-party plugins available

In the config section you should define the target URL; all requests will use that base URL by default.

target: "http://asciiart.artillery.io:8080"

Also in the config section you’ll need to define the phases. Load phases tell Artillery how many virtual users to create, and describe the shape of the load we want.

phases:
  - name: Warm up the API
    duration: 60
    arrivalRate: 5
    rampTo: 10
  - name: Ramp up to peak load
    duration: 60
    arrivalRate: 10
    rampTo: 50
  - name: Sustained peak load
    duration: 300
    arrivalRate: 50

In this test we’ve defined three distinct phases:

  • Warm up the API – this phase will run for 60 seconds. Artillery will start by creating 5 new virtual users per second, and gradually ramp up to 10 new virtual users per second by the end of the phase.
  • Ramp up to peak load – this phase will also last for 60 seconds. Artillery will continue ramping up load from 10 to 50 virtual users per second.
  • Sustained peak load – this phase will run for 300 seconds. Artillery will create 50 new virtual users every second during this phase.
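As a back-of-envelope check of the phase definitions above: a linear ramp averages its start and end rates, so you can estimate how many virtual users each phase will create. A small Node.js sketch (not part of Artillery, just the arithmetic):

```javascript
// Rough arrival count for a phase: a linear ramp from `arrivalRate` to
// `rampTo` averages the two rates; a flat phase just multiplies rate by time.
function approxArrivals({ duration, arrivalRate, rampTo }) {
  const endRate = rampTo ?? arrivalRate;
  return ((arrivalRate + endRate) / 2) * duration;
}

console.log(approxArrivals({ duration: 60, arrivalRate: 5, rampTo: 10 }));  // 450
console.log(approxArrivals({ duration: 60, arrivalRate: 10, rampTo: 50 })); // 1800
console.log(approxArrivals({ duration: 300, arrivalRate: 50 }));            // 15000
```

So this script generates roughly 17,000 virtual users over its 7 minutes, most of them during the sustained peak.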

Now we’ll define our scenarios.

scenarios:
  - name: Get 3 animal pictures
    flow:
      - get:
          url: "/dino"
      - get:
          url: "/armadillo"
      - get:
          url: "/pony"

This scenario contains three actions, and will be executed by all virtual users created by Artillery for this test.


Running the Test

Execute the following command to run the test:

artillery run asciiart-load-test.yml

Artillery will run the test as described in asciiart-load-test.yml. As virtual users run their scenarios, Artillery collects performance metrics and prints a report every 10 seconds with a summary of the metrics for that period.

The output will look similar to this, with reports describing the number of virtual users created, HTTP response codes, and response times from API endpoints we’re testing:

test finished report

Reports

Artillery can create nice HTML reports. To generate one, add the parameter --output asciiart-load-test.json to the run command, and when the test is finished, execute:

artillery report asciiart-load-test.json

This will generate a file named asciiart-load-test.json.html, which can be opened in any browser. The output contains a summary and many charts, such as http.codes.200 and http.response_time, shown below.

http.codes.200 chart

http.response_time chart


Example of a Functional YAML File

config:
  target: "http://asciiart.artillery.io:8080"
  phases:
    - duration: 10
      arrivalRate: 5
      rampTo: 10
    - duration: 20
      arrivalRate: 10
      rampTo: 20
    - duration: 40
      arrivalRate: 20
scenarios:
  - name: Get 3 animal pictures
    flow:
      - get:
          url: "/dino"
      - get:
          url: "/armadillo"
      - get:
          url: "/pony"

Extending the Tests

Sometimes we need to execute routines before running the test, or generate random data to use during the test. To achieve this, we can extend our test script with external custom JavaScript functions.

Now we create a JavaScript file called testUtils.js and write our functions in it.

For a particular test I was running, I had to generate a random number with a specific format and send it in a POST request, so I created the following functions.

// Builds a 13-digit numeric string: the fixed prefix "55009"
// followed by 8 random digits
function generateNumber() {
    let c = 8;
    let l = ['55', '00', '9'];
    while (c) {
        l.push(`${Math.floor(Math.random() * 10)}`);
        c--;
    }
    return l.join('');
}

// Artillery beforeRequest hook: injects the generated number into the
// JSON body of the outgoing request
function modifyRequestBody(requestParams, context, ee, next) {
    requestParams.json.myRandomNumber = generateNumber();
    return next();
}

module.exports = { modifyRequestBody };

To use these functions we need to set the config.processor property to the path of a CommonJS module, which will be require()d and made available to scenarios.

config:
  target: "https://myservice:44334"
  phases:
    - duration: 10
      arrivalRate: 5
      name: Warm up
    - duration: 30
      arrivalRate: 5
      rampTo: 80
      name: Ramp up load
    - duration: 60
      arrivalRate: 80
      name: Sustained load
  processor: "./testUtils.js"

After that, the exported functions can be used inside the scenario flow through one of the available hooks: beforeScenario, afterScenario, beforeRequest and afterResponse.
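The hook signatures Artillery expects in a processor module look roughly like this (a sketch based on the documented hook APIs; the mock invocation at the end is purely illustrative and not something Artillery requires):

```javascript
// Sketch of the hook signatures Artillery expects in a processor module.
// Scenario-level hooks receive the virtual user's context; request-level
// hooks also receive the request parameters (and, for afterResponse, the
// response object).
function beforeScenario(context, events, done) { done(); }
function afterScenario(context, events, done) { done(); }
function beforeRequest(requestParams, context, ee, next) { next(); }
function afterResponse(requestParams, response, context, ee, next) { next(); }

module.exports = { beforeScenario, afterScenario, beforeRequest, afterResponse };

// Quick sanity check with mock arguments (Artillery supplies the real ones
// at runtime):
beforeRequest({ json: {} }, { vars: {} }, {}, () => console.log("beforeRequest ran"));
```

Each hook must call its continuation (done or next) so the virtual user can proceed to the next step.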

In this particular example, I attached the custom function modifyRequestBody to the beforeRequest hook:

scenarios:
  - name: "simple POST test"
    flow:
      - post:
          url: "/myapicontroller/mypostaction"
          headers:
            Content-Type: "application/json"
            api-token: "mysecretkey"
          json:
            myOrigin: "Artillery Test"
            myRandomNumber: "modified by the custom function"
          beforeRequest: "modifyRequestBody"
          capture:
            - json: "$.success"
              as: "success"
            - json: "$.message"
              as: "message"
              strict: false

To retain data from this request we can use the capture property and define where the data should be captured from. Captured data can be used in any other request made by the same virtual user.

The capture option must always have an as attribute, which names the value for use in subsequent requests. It also requires one of the following attributes:

  • json – Allows you to define a JSONPath expression.
  • xpath – Allows you to define an XPath expression.
  • regexp – Allows you to define a regular expression that gets passed to a RegExp constructor. A specific capturing group to return may be set with the group attribute (set to an integer index of the group in the regular expression). Flags for the regular expression may be set with the flags attribute.
  • header – Allows you to set the name of the response header whose value you want to capture.
  • selector – Allows you to define a Cheerio element selector. The attr attribute will contain the name of the attribute whose value we want. An optional index attribute may be set to a number to grab an element matching a selector at the specified index, “random” to grab an element at random, or “last” to grab the last matching element. If the index attribute is not specified, the first matching element will get captured.

By default, captures are strict. If a capture rule fails because nothing matches, any subsequent requests in the scenario will not run, and that virtual user will stop making requests. This behavior is the default since subsequent requests typically depend on captured values and fail when one is not available.

In some cases, it may be useful to turn this behavior off. To make virtual users continue with running requests even after a failed capture, set strict to false:

- get:
    url: "/"
    capture:
      json: "$.id"
      as: "id"
      strict: false # We don't mind if `id` can't be captured and the next request 404s
- get:
    url: "/things/{{ id }}"
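Conceptually, a json capture rule extracts a value from the response body and stores it under the as name in the virtual user's variables. Here is a simplified Node.js sketch of that idea (not Artillery's actual implementation — real capture rules use a full JSONPath engine; this only handles simple "$.a.b" paths):

```javascript
// Simplified sketch of what a `capture` rule does: extract a value from the
// JSON response body and store it under `as` in the virtual user's variables.
function applyCapture(body, { json, as }, vars) {
  const keys = json.replace(/^\$\.?/, "").split(".").filter(Boolean);
  let value = body;
  for (const key of keys) {
    if (value == null) return false;      // capture failed
    value = value[key];
  }
  if (value === undefined) return false;  // capture failed
  vars[as] = value;
  return true;
}

const vars = {};
applyCapture({ success: true, message: "created" }, { json: "$.message", as: "message" }, vars);
console.log(vars.message); // created
```

In strict mode, a failed capture (the false branches above) stops the virtual user; with strict: false, the scenario continues and the variable simply stays undefined.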

Multiple Environments

Typically, you may want to reuse a load testing script across multiple environments with minor tweaks. For instance, you may want to run the same performance tests in development, staging, and production. However, for each environment, you need to set a different target and modify the load phases.

Instead of duplicating your test definition files for each environment, you can use the config.environments setting. It allows you to define any number of named environments, each with environment-specific configuration.

A typical use-case is to define multiple targets with different load phase definitions for each of those systems:

config:
  target: "http://service1.acme.corp:3003"
  phases:
    - duration: 10
      arrivalRate: 1
  environments:
    production:
      target: "http://service1.prod.acme.corp:44321"
      phases:
        - duration: 1200
          arrivalRate: 10
    local:
      target: "http://127.0.0.1:3003"
      phases:
        - duration: 1200
          arrivalRate: 20

When running your performance test, you can specify the environment on the command line using the -e flag. For example, to execute the example test script defined above with the production configuration:

artillery run -e production my-script.yml

When running your tests in a specific environment, you can access the name of the current environment using the $environment variable.

For example, you can print the name of the current environment from a scenario during test execution:

config:
  environments:
    local:
      target: "http://127.0.0.1:3003"
      phases:
        - duration: 120
          arrivalRate: 20
scenarios:
  - flow:
      - log: "Current environment is set to: {{ $environment }}"

Plugins

Artillery has support for plugins, which can add functionality and extend its built-in features. Plugins can hook into Artillery’s internal APIs and extend its behavior with new capabilities.

Plugins are distributed as normal npm packages which are named with an artillery-plugin- prefix, e.g. artillery-plugin-expect.

There are some built-in plugins, like expect, which can be used to check assertions on HTTP responses.

To enable expect, add it to the config section:

config:
  target: "https://myservice:44334"
  plugins:
    expect: {} # plugins usually take parameters inside the braces; expect doesn't require any

Adding expectations to an HTTP response:

scenarios:
  - name: Get a movie
    flow:
      - get:
          url: "/movies/5"
          capture:
            - json: "$.title"
              as: title
          expect:
            - statusCode: 200
            - contentType: json
            - hasProperty: title
            - equals:
                - "From Dusk Till Dawn"
                - "{{ title }}"

This is a simple example, but Artillery can be easily extended and has great powers! Check out the documentation.

Artillery can be used together with tracing/APM tools like New Relic, Datadog or Dynatrace to track down bottlenecks and other performance issues.

Conclusion

In conclusion, Artillery proves to be an invaluable asset in the world of software development and performance testing. Its versatility, ease of use, and powerful features make it an essential tool for any development team looking to ensure the reliability and scalability of their applications.

By simulating realistic user traffic and behavior, Artillery provides insight into an application’s performance under various scenarios. Working together with a tracing service like New Relic, Datadog or Dynatrace allows you to identify and rectify potential bottlenecks, optimize resource allocation, and fine-tune code for optimal efficiency.

See ya!

Self-signed certificates made simple

A self-signed certificate is a digital certificate NOT signed by a recognized public or private certificate authority (CA). Instead, it is signed by the same entity that uses it — in other words, you sign it yourself. Self-signed certificates are commonly used for testing purposes or for securing communication within a private network.

Creating a self-signed certificate is a relatively simple task, and can be done using OpenSSL, a widely used open-source cryptographic tool/library.

Here is a simple guide on how to create a self-signed certificate, following these steps:

  • Install required tools
  • Create your own CA
  • Create a private key to generate the CSR
  • Create a certificate with the CSR, the CA and the private key
  • Using the self-signed certificate

I’m using Ubuntu Linux 20.04 to generate a certificate but the commands should be similar for other Linux distributions.

If you don’t have openssl installed, use apt, yum, rpm, or build it from source.

On Ubuntu and Debian-like systems, you can install it with the following:

sudo apt install openssl -y

Now create a directory to put your files to be generated with the command:

mkdir mycertificate && cd mycertificate

Before generating a certificate, you’ll need your own CA (Certificate Authority) to sign it, as mentioned before. Use this command to generate the CA:

openssl req -x509 \
-sha256 -days 3650 \
-nodes \
-newkey rsa:2048 \
-subj "/CN=my.domain.com/C=BR/ST=Bahia/L=Salvador" \
-keyout CA.key -out CA.crt

This command will generate a CA certificate and its private key, valid for 10 years. You should change the command to fit your needs. Here are the meanings of the -subj fields: CN (common name), C (country name), ST (state or province name) and L (locality).

The next step is to generate a private key using the following command:

openssl genrsa -out server.key 2048

Now we create a CSR (Certificate Signing Request). To create it, we first write a configuration file:

cat > csr.conf <<EOF
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C = BR
ST = Bahia
L = Salvador
O = MyOrganization
OU = MyOrganization Unit
CN = my.domain.com

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = my.domain.com
DNS.2 = www.my.domain.com
IP.1 = 192.168.0.254
IP.2 = 192.168.0.253

EOF

Now run this command to generate the CSR:

openssl req -new \
-key server.key \
-config csr.conf \
-out server.csr

This will create a new CSR file named server.csr in the current directory. If you omit the option -config csr.conf, you will be prompted to enter some information about the certificate, such as the country, state, and organization name.

You should have these files in your directory: CA.key, CA.crt, csr.conf, server.csr and server.key.

Now let’s create another configuration file for the final certificate:

cat > cert.conf <<EOF

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = my.domain.com

EOF

To generate the certificate run this command:

openssl x509 -req \
-in server.csr \
-CA CA.crt -CAkey CA.key \
-CAcreateserial -out server.crt \
-days 3650 \
-sha256 -extfile cert.conf

The command will generate a file named server.crt, which should be used together with the previously generated server.key. The -days option specifies the number of days the certificate will be valid.

Example of use with nginx:

server {
    listen 443 ssl;
    server_name my.domain.com www.my.domain.com;

    ssl_certificate /path/to/mycertificate/server.crt;
    ssl_certificate_key /path/to/mycertificate/server.key;
    
    location / {
        root /var/www/html/my.domain.com/;
        index index.html;
    }
}

After generating the self-signed certificate, you can use it to secure communication between two entities. However, it’s worth noting that self-signed certificates are not trusted by web browsers or operating systems by default. This means that if you use a self-signed certificate on a public website, visitors will see a warning message in their browser.

If you want to use this certificate on a group of LAN computers, you could add the CA to the trust list of the operating system.

On Ubuntu, you can run these commands:

sudo mkdir -p /usr/local/share/ca-certificates/my.ca.entity
sudo cp CA.crt /usr/local/share/ca-certificates/my.ca.entity/CA.crt
sudo update-ca-certificates

This will copy the CA into /usr/local/share/ca-certificates/my.ca.entity and, via update-ca-certificates, link it into the system’s trusted CA store (note that files there must keep the .crt extension to be picked up). The next time you access the site from that machine, clients that use the system trust store will no longer complain about a “not trusted” certificate.