Performance Testing for APIs: Basic Setup with K6

Introduction
Performance testing is the process of evaluating how a system behaves under various levels of load, and it is a crucial part of modern software development. For APIs, it means simulating many concurrent users making requests and verifying that the API stays responsive and stable, so it can handle real-world traffic and deliver a smooth user experience.
Key goals
- Measure response times
- Identify bottlenecks
- Ensure scalability
- Validate reliability under stress
Major types of Performance Testing
There are several major kinds of performance testing, each with a specific purpose; a short K6 load-profile sketch follows this list:
- Load Testing: Measures how the system performs under expected user loads. The goal is to identify performance bottlenecks before the software application goes live.
- Stress Testing: Determines the system’s robustness by testing it under extreme conditions, often beyond its expected capacity, to see how it handles high traffic or data processing.
- Soak (Endurance) Testing: Checks the system’s stability and performance over an extended period, ensuring it can handle sustained use without degradation.
- Spike Testing: Evaluates the system’s reaction to sudden large spikes in the load generated by users.
- Scalability Testing: Assesses the system’s ability to scale up or down in response to increased or decreased load.
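In K6, most of these test types differ only in the load profile you describe with the stages option. Below is a minimal sketch of a spike-style profile against the endpoint used later in this article; the file name, durations, and virtual-user counts are illustrative, not recommendations.

// k6-spike-profile.js (hypothetical file name) – sketch of a spike-style load profile
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '30s', target: 10 },  // warm up with a baseline load
    { duration: '10s', target: 200 }, // sudden spike in virtual users
    { duration: '30s', target: 10 },  // drop back to baseline
    { duration: '30s', target: 0 },   // ramp down to zero
  ],
};

export default function () {
  http.get('http://localhost:5127/mostborrowedbook');
}

A soak test would keep a long, flat middle stage instead of a spike, and a stress test would keep raising the target until the API starts to degrade.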
When Should You Do Performance Testing?
Performance testing is especially useful in the following situations:
- Before production releases to catch issues early.
- After major changes (features, refactoring, infrastructure).
- Periodically as part of CI/CD pipelines.
- When expecting traffic spikes (campaigns, launches, events).
Why Use K6 for API Performance Testing?
K6 is an open-source, developer-centric tool for load testing APIs and web applications. It’s popular because:
- It uses JavaScript for scripting, making it accessible to most developers.
- It’s easy to automate and integrate into CI/CD pipelines (see the thresholds sketch after this list).
- It provides rich metrics and reporting.
- It’s lightweight and fast.
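The CI/CD point deserves a quick illustration. K6 lets you declare pass/fail criteria as thresholds, and a failed threshold makes the k6 process exit with a non-zero status code, which is exactly what a pipeline needs to fail a build. A minimal sketch, with illustrative limits:

import http from 'k6/http';

export const options = {
  vus: 10,
  duration: '30s',
  thresholds: {
    // fail the run if the 95th-percentile latency exceeds 500ms
    http_req_duration: ['p(95)<500'],
    // fail the run if more than 1% of requests fail
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('http://localhost:5127/mostborrowedbook');
}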
Other Popular Tools for API Performance Testing
While K6 is a great choice, there are several other widely used tools for API performance testing:
- Apache JMeter: Mature, GUI-based, supports many protocols.
- Gatling: Scala-based, expressive DSL, detailed reports.
- Locust: Python-based, easy scripting, scalable.
- Artillery: JavaScript-based, CLI and YAML/JS scripting.
- NBomber: .NET-based, C#/F#, HTTP/WebSockets/gRPC, extensible.
Each tool has its strengths—choose based on your team’s language preferences, required features, and integration needs.
How: A Practical Example with a .NET API and K6
Let’s walk through a real-world example using this repository.
The Use Case: Most Borrowed Book API
We have a simple ASP.NET Core Web API that exposes a single endpoint, /mostborrowedbook, which returns the book with the highest borrow count from an in-memory database seeded with thousands of records.
API Code Example
// Controller action for GET /mostborrowedbook
[HttpGet]
public async Task<IActionResult> Get()
{
    // Project each book with its borrow count, then take the most borrowed one
    var result = await _context.Books
        .Select(b => new
        {
            b.Id,
            b.Title,
            b.Author,
            BorrowCount = b.Borrows.Count
        })
        .OrderByDescending(b => b.BorrowCount)
        .FirstOrDefaultAsync();

    if (result == null || result.BorrowCount == 0)
        return NotFound("No borrow records found.");

    return Ok(result);
}
Setting Up K6 for Performance Testing
Steps
- Install K6: On macOS, run 'brew install k6'.
- Write a K6 test script in JavaScript.
- Run your API and execute the K6 test.
- Optionally, generate an HTML report with k6-reporter.
K6 Test Script Example
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,         // number of virtual users
  duration: '10s', // test duration
};

export default function () {
  const res = http.get('http://localhost:5127/mostborrowedbook');

  check(res, {
    'status is 200': (r) => r.status === 200,
  });

  sleep(1); // pause for 1 second between iterations
}
What the Script Does
- vus: Number of virtual users (simulated clients)
- duration: How long the test runs
- check: Asserts that the response status is 200
- sleep: Waits 1 second between requests
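Checking only the status code is a good start. If you also want to assert on the response body, the variant below is a sketch that assumes the endpoint returns a JSON object with a borrowCount property (the camelCase form of the BorrowCount field in the controller above); adjust the property name to whatever your API actually serializes.

import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  const res = http.get('http://localhost:5127/mostborrowedbook');

  check(res, {
    'status is 200': (r) => r.status === 200,
    'body has a positive borrow count': (r) => {
      const book = r.json(); // parse the JSON response body
      return book && book.borrowCount > 0;
    },
  });

  sleep(1);
}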
Running the API and the Test
First, start the API (in one terminal):
cd MostBorrowedBookApi
dotnet run
Then, in a second terminal, run the K6 test against it:
k6 run k6-mostborrowedbook.js
Generate an HTML Report (Optional)
k6 run k6-mostborrowedbook.js --out json=results.json
npx k6-reporter results.json
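Another common way to produce an HTML report, if the npx step does not fit your workflow, is the k6-reporter module from benc-uk/k6-reporter. It plugs into K6's handleSummary hook so the report is written at the end of the run without extra tooling. A sketch (the import URL is the one documented in that project's README, and summary.html is just an example output name):

import http from 'k6/http';
import { htmlReport } from 'https://raw.githubusercontent.com/benc-uk/k6-reporter/main/dist/bundle.js';

export const options = { vus: 10, duration: '10s' };

export default function () {
  http.get('http://localhost:5127/mostborrowedbook');
}

// K6 calls handleSummary once at the end of the run; returning a
// file-name-to-content map writes the HTML report to disk (this replaces
// the default end-of-test summary printed to the terminal).
export function handleSummary(data) {
  return {
    'summary.html': htmlReport(data),
  };
}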
Demo Repository
The full demo code for this article is available at: ajaysskumar/API-Performance-Testing
This repository contains:
- MostBorrowedBookApi: ASP.NET Core Web API with an in-memory database, seeded with thousands of books and borrows at startup. The /mostborrowedbook endpoint returns the book with the highest borrow count.
- k6-mostborrowedbook.js: K6 test script for load testing the API.
- run-perf-test.sh: Bash script to run the API and K6 test in one shot.
One-Shot Bash Script
To run the API and the K6 test together, use the run-perf-test.sh script in the repository root. This script:
- Checks if K6 is installed (installs via Homebrew on macOS if needed).
- Starts the API in the background.
- Waits for the API to start.
- Runs the K6 test script.
- Stops the API process after the test.
This makes it easy to run the full demo and performance test with a single command.
Understanding K6 Metrics
After running the test, K6 outputs a summary like this:
Running K6 test...
execution: local
script: k6-mostborrowedbook.js
output: -
scenarios: (100.00%) 1 scenario, 10 max VUs, 40s max duration (incl. graceful stop):
* default: 10 looping VUs for 10s (gracefulStop: 30s)
✓ status is 200
checks.........................: 100.00% ✓ 60 ✗ 0
data_received..................: 14 kB 1.2 kB/s
data_sent......................: 5.8 kB 509 B/s
http_req_blocked...............: avg=1.63ms min=1µs med=6µs max=11.72ms p(90)=11.09ms p(95)=11.33ms
http_req_connecting............: avg=589.03µs min=0s med=0s max=7.52ms p(90)=2.06ms p(95)=4.84ms
http_req_duration..............: avg=874.47ms min=758.66ms med=823.23ms max=1.18s p(90)=1.15s p(95)=1.17s
{ expected_response:true }...: avg=874.47ms min=758.66ms med=823.23ms max=1.18s p(90)=1.15s p(95)=1.17s
http_req_failed................: 0.00% ✓ 0 ✗ 60
http_req_receiving.............: avg=1.09ms min=13µs med=48µs max=11.35ms p(90)=1.23ms p(95)=10.06ms
http_req_sending...............: avg=1.43ms min=5µs med=27.5µs max=9.13ms p(90)=8.22ms p(95)=8.46ms
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=871.94ms min=758.58ms med=823.17ms max=1.17s p(90)=1.13s p(95)=1.16s
http_reqs......................: 60 5.302087/s
iteration_duration.............: avg=1.87s min=1.75s med=1.82s max=2.2s p(90)=2.18s p(95)=2.19s
iterations.....................: 60 5.302087/s
vus............................: 10 min=10 max=10
vus_max........................: 10 min=10 max=10
running (11.3s), 00/10 VUs, 60 complete and 0 interrupted iterations
default ✓ [======================================] 10 VUs 10s
Here’s a breakdown of the key metrics:
Metric | Explanation |
---|---|
checks | Percentage and count of successful checks (assertions), e.g., status 200. |
data_received | Total and average amount of data received from the server. |
data_sent | Total and average amount of data sent to the server. |
http_req_blocked | Time spent waiting for a free TCP connection slot (blocked by OS or client limits). |
http_req_connecting | Time spent establishing TCP connections to the server. |
http_req_duration | Total time for request (from start to end, including waiting, sending, receiving). |
{ expected_response:true } | Duration for requests that received expected responses (usually status 200). |
http_req_failed | Percentage and count of failed HTTP requests. |
http_req_receiving | Time spent receiving response data from the server. |
http_req_sending | Time spent sending request data to the server. |
http_req_tls_handshaking | Time spent on TLS/SSL handshake (if using HTTPS). |
http_req_waiting | Time spent waiting for server to respond after sending request (server processing time). |
http_reqs | Total number of HTTP requests made and average per second. |
iteration_duration | Total time taken for each test iteration (one virtual user run). |
iterations | Total number of test iterations and average per second. |
vus | Number of active virtual users during the test. |
vus_max | Maximum number of virtual users reached during the test. |
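All of the metrics above are built in. If you need to track something they don't cover, K6 also supports custom metrics. The sketch below adds a Trend that records only the server wait time for this endpoint; the metric name is made up for illustration.

import http from 'k6/http';
import { Trend } from 'k6/metrics';

// Custom Trend metric; it appears in the summary with min/avg/max/percentiles.
// The second argument marks it as a time-based metric.
const mostBorrowedWaiting = new Trend('most_borrowed_waiting_time', true);

export const options = { vus: 10, duration: '10s' };

export default function () {
  const res = http.get('http://localhost:5127/mostborrowedbook');
  // Record only the server wait time (http_req_waiting) for this call
  mostBorrowedWaiting.add(res.timings.waiting);
}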
If you use k6-reporter, you’ll get an HTML file with charts and detailed breakdowns.
Real-World Analogy
Think of your API like a busy coffee shop. Performance testing is like simulating hundreds of customers ordering coffee at once. You want to ensure the baristas (your API) can handle the rush without slowing down or making mistakes. K6 helps you create that simulated rush and measure how well your API copes.
Conclusion
Performance testing is essential for delivering reliable APIs. K6 makes it easy to simulate real-world load, catch bottlenecks, and ensure your API is production-ready. With a simple setup and powerful reporting, it’s a great tool for developers and DevOps alike.
Try it out with this repository and see how your API performs!