Run JMeter in non-GUI mode with jmeter -n (the same on Linux/Mac). As a quick test target, start a simple HTTP server:

python -m http.server 80

You can attach jconsole to watch the JMeter JVM; its options are passed through the JVM_ARGS parameter, which on Windows is set inside jmeter.bat. A typical non-GUI run:

jmeter -n -t aggregate.jmx -l test_http.jtl

The .jtl file is the result report.
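Since the .jtl is a plain CSV file by default, you can post-process it yourself. Below is a minimal sketch that computes the average response time from the elapsed column; it assumes the default CSV output format with a header row (the format is configurable in jmeter.properties, so adjust if yours differs):

import csv

# Average response time from a JMeter .jtl result file.
# Assumes the default CSV format with a header row; "elapsed"
# is the response time in milliseconds.
with open("test_http.jtl", newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

print(f"{len(elapsed)} samples, avg {sum(elapsed) / len(elapsed):.1f} ms")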
The .jmx test plan itself is in XML format, so you can open it directly and edit the configuration.

Next, use an nginx container as a more realistic target:

docker run --name nginx-load-test -p 88:80 -d nginx:1.17.9

Here container port 80 is mapped to local port 88, which is the difference from the python server. While the test runs, monitor the container's resource usage with docker stats nginx-load-test, and compare the average response time in the JMeter report.
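Before aiming JMeter at the container, a one-off sanity check that nginx actually answers on the mapped port can save a confusing run. A minimal sketch, assuming the requests package is installed:

import requests

# Does nginx answer on the port mapped by docker run (-p 88:80)?
resp = requests.get("http://localhost:88/")
print(resp.status_code, resp.elapsed.total_seconds())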
For distributed testing, edit jmeter.properties on every machine:

server.rmi.ssl.disable=true   # do not use SSL
java.rmi.server.hostname=192.168.100.99   # the machine's own LAN IP

Then start the agent on each slave (here 192.168.109.131) by running jmeter-server.bat. On the master, list the slaves in jmeter.properties:
# Remote Hosts - comma delimited
remote_hosts=192.168.109.131:1099,192.168.109.132
Create a thread group and configure the address to request.

Use Remote Start to have the cluster machines send requests to the machine under test; with Start All, every slave executes the whole plan (request/record controller).

1099 is the slave's default port, so it can be left out; it can also be set explicitly in jmeter.properties with server_port=1099.

If the master fails to connect to a slave, it is usually because the slave's firewall was never turned off: systemctl stop firewalld.service.
Finally, run the distributed plan from the command line, passing the slaves with -R:

jmeter -n -t fenbushi.jmx -l fenbushi.jtl -R 192.168.109.131:1099,192.168.109.132
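Since the most common failure at this point is a slave that is unreachable on its RMI port, a quick pre-flight check is worth it. A minimal sketch; the host list simply mirrors the remote_hosts line above:

import socket

# Check that each slave's JMeter RMI port accepts connections
# before launching the distributed run (1099 is the default).
slaves = [("192.168.109.131", 1099), ("192.168.109.132", 1099)]
for host, port in slaves:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as e:
        print(f"{host}:{port} NOT reachable: {e}")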
influxdb is the data source; you can think of it as a time-series database. After unpacking, run influxd from cmd, then open the web UI in a browser.
There are three ways to operate the database:

UI: the graphical interface, operated in the browser.
CLI (command line interface): a standalone command-line tool that must be downloaded and installed separately.
HTTP API: through Client Libraries (driving it from a programming language; the UI has tutorials), or by calling the API directly with a tool such as cURL.
Let's look at the HTTP API first: pick a language you are familiar with and walk through the tutorial (see the official docs). The core concept is the bucket, where all the data lives, similar to a table in SQL. Full code:
import influxdb_client, os, time
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# InfluxDB Cloud uses Tokens to authenticate API access.
# We've created an all-access token for you for this set up process.
# token = os.environ.get("INFLUXDB_TOKEN")
token = "xxx-xxx"  # use your own token
print(token)
org = "Roy"
# url = "https://us-west-2-1.aws.cloud2.influxdata.com/"
url = "http://localhost:8086"

# initialize the token, organization info, and server url
# that are needed to set up the initial connection to InfluxDB.
# The client connection is then established with the InfluxDBClient initialization.
client = influxdb_client.InfluxDBClient(url=url, token=token, org=org)
bucket = "first_bucket"

# get the write API and perform the write
write_api = client.write_api(write_options=SYNCHRONOUS)
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
# run the part above first to write the data into the bucket

if __name__ == '__main__':
    # query it back
    query_api = client.query_api()
    # the query language is called Flux; it plays the role SQL does elsewhere
    # The query client sends the Flux query to InfluxDB
    # and returns a Flux object with a table structure.
    query = 'from(bucket:"first_bucket")\
        |> range(start: -10m)\
        |> filter(fn:(r) => r._measurement == "my_measurement")\
        |> filter(fn:(r) => r.location == "Prague")\
        |> filter(fn:(r) => r._field == "temperature")'
    result = query_api.query(org=org, query=query)
    results = []
    for table in result:
        for record in table.records:
            results.append((record.get_field(), record.get_value()))
    print(results)  # [('temperature', 25.3)]
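A note on the write path: write_options=SYNCHRONOUS blocks on every point and is the easiest to reason about while learning; the Python client also supports batched writes (via WriteOptions), which is generally a better fit when streaming metrics at load-test volume.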
For dashboards, run grafana-server.exe and visit http://localhost:3000.
When adding InfluxDB as the data source, choose Flux as the query language: the influxdb here is the OSS v2.5 release, which no longer uses the concept of databases; even the SQL-like route ends up translated onto buckets.
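To get a feel for what a Grafana panel query does under the hood, here is a sketch that runs a windowed-mean Flux query through the same Python client as before; the bucket, measurement, and the 10s window are carried over from the earlier example rather than anything Grafana requires:

from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="xxx-xxx", org="Roy")

# aggregateWindow() is essentially what a time-series panel does:
# group raw points into fixed windows and take the mean of each.
flux = '''
from(bucket: "first_bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "my_measurement")
  |> filter(fn: (r) => r._field == "temperature")
  |> aggregateWindow(every: 10s, fn: mean, createEmpty: false)
'''
for table in client.query_api().query(flux, org="Roy"):
    for record in table.records:
        print(record.get_time(), record.get_value())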
The node_exporter application is the piece that actually monitors the host; on Linux, just unpack and run it. Then open http://localhost:9182/ and the data is visible under Metrics (it listens on whichever machine it runs on; note that 9182 is the Windows exporter's default port, while node_exporter on Linux defaults to 9100). In prometheus.yml, write in the IP:port where the exporter runs:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    scrape_interval: 5s
    metrics_path: "/metrics"
    static_configs:
      - targets: ["localhost:9182"]
Start Prometheus, open http://localhost:9090/, and the target shows up under Targets.
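As a final check that scraping works, you can query Prometheus's HTTP API instead of clicking through the UI. A minimal sketch, again assuming the requests package; the built-in up metric is 1 for every target whose last scrape succeeded:

import requests

# Ask Prometheus which scrape targets are up (1) or down (0).
resp = requests.get("http://localhost:9090/api/v1/query", params={"query": "up"})
for item in resp.json()["data"]["result"]:
    print(item["metric"].get("instance"), "->", item["value"][1])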