
Spring Cloud Gateway Load Testing (wrk, k8s, nginx)

Are there reference benchmark numbers for Spring Cloud Gateway on a given machine configuration?

Test Environment

wrk is installed in a K8s container, and SCG (Spring Cloud Gateway) is deployed in a K8s container. Requests go through nginx to SCG, and SCG forwards them to a static HTML page served by nginx.

K8s container spec: 4 CPU cores, 8 GB memory

wrk: https://github.com/wg/wrk

JVM settings: the gateway uses very little memory; a 1 GB heap is enough.
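As a sketch of the launch settings this implies (the entrypoint and jar name below are hypothetical, not taken from the original setup):

```shell
# Hypothetical container entrypoint; the 1 GB heap matches the
# observation above that the gateway barely uses memory.
JAVA_OPTS="-Xms1g -Xmx1g"
exec java $JAVA_OPTS -jar scg-test.jar
```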

SCG configuration (the original snippet listed `spring:` twice, which is invalid in a single YAML document; the two blocks are merged here):

```yaml
spring:
  application:
    name: scg-test
  cloud:
    gateway:
      httpclient:
        connect-timeout: 3000
        response-timeout: 3s
      routes:
        - id: r_maxtest
          uri: http://192.168.0.184   # forward to the local nginx page
          predicates:
            - Path=/gwmanager/**
          filters:
            - StripPrefix=1
server:
  servlet:
    context-path: /
  tomcat:
    accept-count: 200       # connection backlog
    connection-timeout: 3s  # connection timeout
```

Note that Spring Cloud Gateway runs on Netty (WebFlux) rather than Tomcat, so the `server.tomcat.*` and `server.servlet.*` options have no effect on the gateway itself.
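As a quick illustration of what `StripPrefix=1` does, the shell parameter expansion below mimics removing the first path segment before the request is forwarded to nginx (a sketch of the path rewrite only, not how the gateway implements it):

```shell
# StripPrefix=1 drops the first path segment before forwarding,
# so /gwmanager/index.html reaches nginx as /index.html.
path=/gwmanager/index.html
stripped=/${path#/*/}
echo "$stripped"
```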

wrk Usage

```shell
./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
```
  • --latency: print the latency distribution
  • -t: number of threads, typically 2 × CPU cores; tune it up or down depending on whether the workload is IO-bound or CPU-bound
  • -c: total number of connections, divided evenly among the threads; it must not exceed the number of available TCP ports
  • -d: test duration
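For example, with the flags above each wrk thread manages roughly `-c / -t` connections (a back-of-the-envelope sketch, not wrk's exact internal bookkeeping):

```shell
threads=8
connections=100
# Each of the 8 threads services about 12 of the 100 connections.
echo $(( connections / threads ))
```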
```
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec    626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
```
  • Latency: response time
  • Req/Sec: requests handled per thread per second
  • Avg: average
  • Stdev: standard deviation; a larger value means the samples are more spread out, which may indicate unstable machine or service performance
  • Max: maximum
  • +/- Stdev: the percentage of samples that fall within one standard deviation of the mean
  • Latency Distribution: the percentage of requests completed within a given latency
  • Requests/sec: average number of requests handled per second
  • Transfer/sec: average amount of data transferred per second
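The summary numbers are internally consistent: Requests/sec is simply the request count divided by the elapsed time (the small difference from the reported 4985.86 comes from wrk's more precise internal timer):

```shell
# 49908 requests over the reported 10.01 s elapsed time
awk 'BEGIN { printf "%.2f\n", 49908 / 10.01 }'
```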

Test Results and Analysis

Duration Test

```
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec    626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d1m http://192.168.0.184:30001/gwmanager
Running 1m test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.98ms   29.04ms  95.57ms   74.34%
    Req/Sec    613.72     46.70   787.00    70.65%
  Latency Distribution
     50%    6.52ms
     75%   57.97ms
     90%   79.76ms
     99%   84.23ms
  293262 requests in 1.00m, 42.23MB read
Requests/sec:   4886.49
Transfer/sec:    720.58KB
```

Conclusion: with the same concurrency, test duration does not noticeably affect RPS (4985.86 for the 10 s run vs. 4886.49 for the 1 m run).
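Checking that claim against the numbers above, the throughput difference between the 10 s and 1 m runs is about 2%, within normal run-to-run noise:

```shell
# Relative difference between the two Requests/sec figures
awk 'BEGIN { printf "%.1f%%\n", (4985.86 - 4886.49) / 4985.86 * 100 }'
```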

Concurrency Test

```
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c100 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    27.41ms   28.46ms  89.75ms   75.26%
    Req/Sec    626.35     70.19     1.22k    86.62%
  Latency Distribution
     50%    6.71ms
     75%   54.47ms
     90%   79.40ms
     99%   83.89ms
  49908 requests in 10.01s, 7.19MB read
Requests/sec:   4985.86
Transfer/sec:    735.26KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c200 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.65ms   32.40ms 103.63ms   47.76%
    Req/Sec    630.29     72.27     1.11k    75.25%
  Latency Distribution
     50%   44.15ms
     75%   82.33ms
     90%   87.58ms
     99%   93.23ms
  50215 requests in 10.01s, 7.23MB read
Requests/sec:   5016.90
Transfer/sec:    739.90KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c500 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   100.10ms   43.26ms 435.69ms   76.63%
    Req/Sec    611.35    105.46     2.14k    83.62%
  Latency Distribution
     50%   99.54ms
     75%  105.39ms
     90%  180.69ms
     99%  198.63ms
  48697 requests in 10.02s, 7.02MB read
Requests/sec:   4861.25
Transfer/sec:    717.47KB
[root@k8s-master-yace wrk]# ./wrk --latency -t8 -c1000 -d10s http://192.168.0.184:30001/gwmanager
Running 10s test @ http://192.168.0.184:30001/gwmanager
  8 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   202.90ms   65.88ms 669.54ms   63.63%
    Req/Sec    597.58    130.27     1.10k    70.12%
  Latency Distribution
     50%  199.57ms
     75%  209.78ms
     90%  297.11ms
     99%  393.08ms
  47606 requests in 10.02s, 6.87MB read
Requests/sec:   4749.34
Transfer/sec:    701.98KB
```

Conclusion: as concurrency increases, RPS trends slightly downward (4985.86 at 100 connections down to 4749.34 at 1000), while average response time grows roughly in proportion to the number of connections (27 ms at 100 connections, 203 ms at 1000).
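A Little's-law sanity check (open connections ≈ throughput × mean latency) supports this reading: at 1000 connections the gateway is saturated at roughly the same throughput as before, so the extra connections mostly just add queueing delay:

```shell
# N = X * R: 4749.34 req/s * 0.2029 s mean latency ≈ the 1000 open connections
awk 'BEGIN { printf "%.0f\n", 4749.34 * 0.20290 }'
```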

Disclaimer: this article was contributed by a community user and does not represent the views of the wpsshop blog; copyright remains with the original author. Source: https://www.wpsshop.cn/w/花生_TL007/article/detail/364828