Method / Steps
1
A commonly used form of the command is:
ab -n <number of requests> -c <concurrency> <URL>
2
I ran a simple demo:
usertekiMacBook-Pro:~ zhaoxianlie$ ab -n 200 -c 10 http://127.0.0.1:8793/
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8793
Document Path: /
Document Length: 28012 bytes
Concurrency Level: 10
Time taken for tests: 6.847 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 5665503 bytes
HTML transferred: 5601103 bytes
Requests per second: 29.21 [#/sec] (mean)
Time per request: 342.343 [ms] (mean)
Time per request: 34.234 [ms] (mean, across all concurrent requests)
Transfer rate: 808.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 148 338 93.2 335 633
Waiting: 148 337 93.3 335 633
Total: 148 338 93.2 336 634
Percentage of the requests served within a certain time (ms)
50% 336
66% 371
75% 397
80% 413
90% 461
95% 500
98% 568
99% 610
100% 634 (longest request)
3
Of these, Requests per second and Time per request are probably the two metrics most people pay attention to.
That said, with this kind of load test you shouldn't open with -n 1000 -c 10 right away; ramp up slowly. Start with something like -n 50 -c 10 and increase gradually, then take the maximum Requests per second as the HTTP server's performance figure; that should be more reliable.
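To automate that ramp-up, one could run ab several times and pull the Requests per second line out of its text output, keeping the maximum. A hypothetical helper (parseRps is not an ab feature, just a sketch):

```javascript
// Hypothetical helper: extract the "Requests per second" value from ab's
// text output, e.g. to take the max across runs at increasing -c levels.
function parseRps(abOutput) {
  const m = abOutput.match(/Requests per second:\s+([\d.]+)/);
  return m ? parseFloat(m[1]) : null;
}

// Example, using a line from the demo output above:
const sample = 'Requests per second:    29.21 [#/sec] (mean)';
console.log(parseRps(sample)); // 29.21
```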
Suppose this test server's peak RPS is 30: it can then handle roughly 1800 requests per minute, and that's on a single CPU core.
Now suppose we use one of my company's servers, with 24 cores and Node's cluster mode enabled; RPS should scale up roughly in proportion. I'll grab a server at the office tomorrow and run the test. If that holds, it could handle roughly 720 requests per second, which works out to 2.592 million requests an hour.
And suppose the stack is Nginx + Node, with Nginx doing load balancing and proxy_pass pointing at an upstream of 4 Node servers, each evenly loaded: then it could handle a bit over 10 million requests an hour, or about 240 million in a day running at full load.
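The back-of-envelope arithmetic above, spelled out (the exact daily figure is 248,832,000, which the text rounds to 240 million):

```javascript
// Capacity estimates from the text above, assuming linear scaling.
const rpsSingleCore = 30;                      // assumed peak RPS on one core
const perMinute = rpsSingleCore * 60;          // 1800 requests/minute

const cores = 24;                              // cluster mode, one worker per core
const rpsCluster = rpsSingleCore * cores;      // 720 requests/second
const perHour = rpsCluster * 3600;             // 2,592,000 requests/hour

const nodeServers = 4;                         // Nginx upstream of 4 Node servers
const perHourTotal = perHour * nodeServers;    // 10,368,000 requests/hour
const perDayTotal = perHourTotal * 24;         // 248,832,000 requests/day

console.log(perMinute, rpsCluster, perHour, perHourTotal, perDayTotal);
```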