Friday, February 2, 2018

Procedure - (Node + MongoDB) Application Performance Benchmarking with JMeter + Grafana + collectd

Well, the post title is pretty self-explanatory ...

How do you benchmark your app and database tiers? There are a lot of tools for this; you just have to look for the procedure that fits your needs.

Our typical test environment looks like this:



At Sepalo Software we use JMeter as our reference benchmarking tool; most apps can be stressed through API calls that are easy to generate with JMeter. The apps are typically Node applications (cloud based) with MongoDB (cloud based) as the backend database.

Our first goal was to parameterize all the API call scenarios by grouping them and exposing a set of parameters that can be passed to the JMeter executable to adjust the benchmark as needed (more update calls vs. insert calls, user ramp-up, test interval, burst intervals, and so on), and even the type of scenario itself (Stability, Stress, Burst).

We met this goal by using JMeter properties (e.g. "${__P(users,1)}") whose values we can set on the JMeter command line.
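For example, a run can be launched in JMeter non-GUI mode, overriding each property with a -J flag that the test plan then reads through ${__P(name,default)}. This is just a minimal sketch: the test plan file name and every property except "users" are hypothetical, shown only to illustrate the pattern.

  # Minimal sketch of a parameterized non-GUI JMeter run.
  # api_benchmark.jmx and the properties other than "users" are hypothetical.
  jmeter -n \
    -t api_benchmark.jmx \
    -l results/stress_run.jtl \
    -Jusers=200 \
    -Jrampup=120 \
    -Jduration=3600 \
    -Jscenario=stress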

With these parameters we can execute multiple test scenarios:
  • Stability Test scenario is meant to verify whether the application can continuously perform well over (or just beyond) the expected period. It is often referred to as load or endurance testing.
  • Burst Test scenario is a type of performance testing focused on determining an application’s robustness, availability, and reliability under peak load conditions. The goal of burst testing is to check if application performance recovers after the peak conditions.
  • Stress Test scenario is a type of performance testing focused on determining an application’s robustness, availability, and reliability under extreme conditions. The goal of stress testing is to identify application issues that arise or become apparent only under extreme conditions.

And also, we can have multiple different configurations within a single scenario (a distinct percentage of API calls for each request type).
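As a rough sketch of how one parameterized test plan can drive the three scenario types plus the call mix (again, the property names and values below are illustrative assumptions, not our exact configuration):

  # Stability: moderate, constant load sustained for a long period
  jmeter -n -t api_benchmark.jmx -Jusers=100 -Jduration=14400 -Jburst_users=0

  # Burst: baseline load with periodic peaks, to check recovery after each peak
  jmeter -n -t api_benchmark.jmx -Jusers=100 -Jduration=7200 -Jburst_users=400 -Jburst_interval=600

  # Stress: ramp the load up until the application starts to misbehave
  jmeter -n -t api_benchmark.jmx -Jusers=500 -Jrampup=1800 -Jduration=3600

  # Within any scenario, the API call mix can also be tuned, e.g.:
  #   -Jinsert_pct=50 -Jupdate_pct=30 -Jquery_pct=20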

After automating these configurable JMeter benchmarks, we realized that most of our test time was spent preparing the results: taking the CSV output files and building pivot tables and charts in spreadsheets.

So we configured Grafana as our time-series dashboard solution to show the test results, which JMeter saves automatically in an InfluxDB database (through its Backend Listener). For OS statistics and metrics we use collectd, which automatically stores all server statistics in the same InfluxDB. All these tools are open source (which does not necessarily mean free).
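Before building the Grafana dashboards, it is easy to sanity-check the pipeline by querying InfluxDB directly. The sketch below assumes InfluxDB 1.x with a "jmeter" database fed by JMeter's Backend Listener and a "collectd" database fed by collectd's network plugin; the database and measurement names depend on your actual setup.

  # Verify that JMeter samples are arriving (the Backend Listener writes
  # aggregated samples to a "jmeter" measurement by default):
  influx -database 'jmeter' -execute 'SELECT count("count") FROM "jmeter" WHERE time > now() - 5m'

  # Verify that collectd metrics are arriving (e.g. CPU values per host):
  influx -database 'collectd' -execute 'SELECT mean("value") FROM "cpu_value" WHERE time > now() - 5m GROUP BY "host"'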

Finally, we have customizable, web-based results like these:





With this test environment configuration (cloud based, of course ...) we now spend our time on the most important phase of the whole process: the initial definition of the API calls that stress the apps.

Please feel free to contact Sepalo Software if you need more information.
