After testing an Erlang framework that serves a JSON API from a PostgreSQL database, I remembered that I had CouchDB installed on my laptop, so why not load the same data into it and run the same httperf load test?
The JSON result from CouchDB:
{
  "total_rows": 2,
  "offset": 0,
  "rows": [
    {
      "id": "2f9bc9fb62f3e8fa19ace932b9000d9f",
      "key": "2f9bc9fb62f3e8fa19ace932b9000d9f",
      "value": {
        "_id": "2f9bc9fb62f3e8fa19ace932b9000d9f",
        "_rev": "1-0a77ba71f874dc7ca2b7d22893cf4882",
        "task": "learn",
        "status": "not done"
      }
    },
    {
      "id": "2f9bc9fb62f3e8fa19ace932b90013d9",
      "key": "2f9bc9fb62f3e8fa19ace932b90013d9",
      "value": {
        "_id": "2f9bc9fb62f3e8fa19ace932b90013d9",
        "_rev": "1-6127c1359f9d34d2733943876931e7d4",
        "task": "erlang",
        "status": "not done"
      }
    }
  ]
}
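For reference, a small database like this can be set up with a few curl calls. This is just a sketch, assuming a local CouchDB with no admin credentials and a database named todo, as in the view path; CouchDB generates the _id and _rev values itself:

# create the database (ignore the error if it already exists)
curl -X PUT http://127.0.0.1:5984/todo

# insert the two todo documents
curl -X POST http://127.0.0.1:5984/todo \
     -H 'Content-Type: application/json' \
     -d '{"task": "learn", "status": "not done"}'
curl -X POST http://127.0.0.1:5984/todo \
     -H 'Content-Type: application/json' \
     -d '{"task": "erlang", "status": "not done"}'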
And the design document used to retrieve the data, saved at the path /todo/_design/todo/_view/list:

function(doc) {
  emit(doc._id, doc);
}
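For completeness, here is roughly how such a design document can be saved and the view queried with curl, assuming the same local instance (a sketch, not necessarily how it was created originally):

# save the map function as the "list" view of the _design/todo document
curl -X PUT http://127.0.0.1:5984/todo/_design/todo \
     -H 'Content-Type: application/json' \
     -d '{"views": {"list": {"map": "function(doc) { emit(doc._id, doc); }"}}}'

# query the view, the same URI httperf hits below
curl http://127.0.0.1:5984/todo/_design/todo/_view/list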
And here are the results with the same data:
httperf --client=0/1 --server=127.0.0.1 --port=5984 --uri=/todo/_design/todo/_view/list --rate=150 --send-buffer=4096 --recv-buffer=16384 --num-conns=27000 --num-calls=1
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1
Total: connections 27000 requests 27000 replies 27000 test-duration 179.995 s
Connection rate: 150.0 conn/s (6.7 ms/conn, <=13 concurrent connections)
Connection time [ms]: min 0.6 avg 1.1 max 92.4 median 0.5 stddev 2.5
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000
Request rate: 150.0 req/s (6.7 ms/req)
Request size [B]: 90.0
Reply rate [replies/s]: min 149.8 avg 150.0 max 150.0 stddev 0.0 (36 samples)
Reply time [ms]: response 1.1 transfer 0.1
Reply size [B]: header 231.0 content 470.0 footer 2.0 (total 703.0)
Reply status: 1xx=0 2xx=27000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 62.39 system 117.63 (user 34.7% system 65.4% total 100.0%)
Net I/O: 115.9 KB/s (0.9*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
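As a quick back-of-the-envelope check (my own arithmetic, not part of the httperf output), the Net I/O figure is consistent with the request and reply sizes above:

# (90 B request + 703 B reply) per call, at 150 calls/s
echo $(( (90 + 703) * 150 ))        # 118950 B/s
echo $(( (90 + 703) * 150 / 1024 )) # ~116 KB/s, close to the reported 115.9 KB/s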
As we can see, the results are almost the same: no errors, and every request processed in about 6.7 ms.
The test finished in 179 s, roughly 3 minutes. I don't know whether every Erlang framework uses the same approach to dump JSON.
Anyhow, a great result, and I feel positive about it.