I've recently been working on an OpenResty project in which every incoming request has to be checked against Redis to decide whether access is allowed.
The problem:
If every request opens a fresh connection to Redis, the connection count explodes under high concurrency, hurting both performance and the stability of the system.
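For context, the naive pattern we want to avoid looks roughly like this (a minimal sketch; the host/port match the examples below). Every request pays a full TCP handshake and teardown:
- -- access_by_lua: naive version, one fresh connection per request
- local redis = require "resty.redis"
- local red = redis:new()
- local ok, err = red:connect("127.0.0.1", 6379)
- if not ok then
-     ngx.log(ngx.ERR, "redis connect failed: ", err)
-     return
- end
- local banned = red:get(ngx.var.remote_addr)
- red:close()  -- torn down immediately; the next request reconnects
Under load, the number of simultaneous Redis connections grows with the request rate.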
Scheme 1:
Create the Redis connection once in init_by_lua, then use it directly in access_by_lua.
init.lua:
- -- cosockets are not available in the init_by_lua phase, so this uses
- -- the blocking lua-redis client rather than resty.redis
- local redis = require "redis"
- client = redis.connect('127.0.0.1', 6379)  -- global, shared with later phases
filter.lua (a demo of a simple blacklist: if the request's remote IP exists as a key in Redis, return 403):
- -- `client` is the global connection created in init.lua
-
- local function redisHasKey(client, keyname)
-     -- lua-redis raises a Lua error if the connection is broken;
-     -- the pcall below turns that into "let the request through"
-     if client:get(keyname) then
-         return true
-     else
-         return false
-     end
- end
-
- local keyname = ngx.var.remote_addr
- if pcall(redisHasKey, client, keyname) then
-     if redisHasKey(client, keyname) then
-         ngx.exit(403)
-     else
-         return
-     end
- else
-     return
- end
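For completeness, the nginx.conf wiring would look roughly like this (a sketch: the file paths are placeholders, and the port is taken from the test logs later in the post):
- http {
-     # runs once when nginx loads its configuration
-     init_by_lua_file  /path/to/init.lua;
-
-     server {
-         listen 8082;
-         location / {
-             # runs on every request, before the content phase
-             access_by_lua_file  /path/to/filter.lua;
-         }
-     }
- }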
The connection is created once when nginx starts and only that connection is ever used, so the problem of too many connections never arises.
Problems with scheme 1:
1. If Redis fails while in use, the feature breaks, and even after restarting Redis you also have to restart nginx to re-establish the connection.
2. A Linux TCP connection is not guaranteed to stay up indefinitely; once it drops, there is no way to recover it.
3. A single connection serializes concurrent requests: under high concurrency, requests queue up waiting for the connection to become free, or simply fail.
Scheme 2:
A connection pool, to manage the number of connections.
OpenResty itself ships with connection-pool support (set_keepalive):
syntax: ok, err = red:set_keepalive(max_idle_timeout, pool_size)
Let's give it a try; this time only filter.lua is used:
- local redis = require "resty.redis"
- local red = redis:new()
- red:set_timeouts(1000, 1000, 1000)
- red:set_keepalive(1000, 20)
- red:connect("127.0.0.1", 6379)
- local client = red
-
- local function redisHasKey(client, keyname)
-     -- resty.redis returns ngx.null (which is truthy) for a missing
-     -- key, so it has to be compared against explicitly
-     local res = client:get(keyname)
-     if res and res ~= ngx.null then
-         return true
-     else
-         return false
-     end
- end
-
- local keyname = ngx.var.remote_addr
- if pcall(redisHasKey, client, keyname) then
-     if redisHasKey(client, keyname) then
-         ngx.exit(403)
-     else
-         return
-     end
- else
-     return
- end
The problem:
After putting this into use I hit an even bigger problem: the connection count was not being limited by the pool at all, and the error log filled with entries like these:
- 2021/05/19 13:57:57 [error] 2734932#0: *2019 attempt to send data on a closed socket: u:00000000401F79D0, c:0000000000000000, ft:0 eof:0, client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
- 2021/05/19 13:57:57 [error] 2734932#0: *2019 attempt to send data on a closed socket: u:00000000401F79D0, c:0000000000000000, ft:0 eof:0, client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
- 2021/05/19 13:57:57 [alert] 2734932#0: *2019 socket() failed (24: Too many open files), client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
- 2021/05/19 13:57:57 [error] 2734932#0: *2019 attempt to send data on a closed socket: u:00000000401F6290, c:0000000000000000, ft:0 eof:0, client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
The connection had already been closed by the time it was used.
The fix:
Let's look at what the official documentation says about set_keepalive:
syntax: ok, err = red:set_keepalive(max_idle_timeout, pool_size)

Puts the current Redis connection immediately into the ngx_lua cosocket connection pool.

You can specify the max idle timeout (in ms) when the connection is in the pool and the maximal size of the pool every nginx worker process.

In case of success, returns 1. In case of errors, returns nil with a string describing the error.

Only call this method in the place you would have called the close method instead. Calling this method will immediately turn the current redis object into the closed state. Any subsequent operations other than connect() on the current object will return the closed error.
In short, set_keepalive is a replacement for close: once it has been called, any operation on the connection other than connect returns the closed error.
It really pays to read the official documentation!
Improved version:
- local redis = require "resty.redis"
- local red = redis:new()
- red:set_timeouts(1000, 1000, 1000)
- local ok, err = red:connect("127.0.0.1", 6379)
- if not ok then
-     ngx.log(ngx.ERR, "failed to connect to redis: ", err)
-     return
- end
- local client = red
-
- local function redisHasKey(client, keyname)
-     -- ngx.null (truthy) means the key does not exist
-     local res = client:get(keyname)
-     if res and res ~= ngx.null then
-         return true
-     else
-         return false
-     end
- end
-
- local keyname = ngx.var.remote_addr
- if pcall(redisHasKey, client, keyname) then
-     if redisHasKey(client, keyname) then
-         -- hand the connection back to the pool in place of close()
-         red:set_keepalive(1000, 200)
-         ngx.exit(403)
-     else
-         red:set_keepalive(1000, 200)
-         return
-     end
- else
-     red:set_keepalive(1000, 200)
-     return
- end
It's still pretty ugly to use, but it does solve our problem.
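If the repetition bothers you, one way to tidy it up (my own sketch, not from the official docs) is to release the connection at a single point, using pcall's return value instead of calling redisHasKey twice:
- local keyname = ngx.var.remote_addr
- local ok, hasKey = pcall(redisHasKey, client, keyname)
- red:set_keepalive(1000, 200)  -- single release point, replaces close()
- if ok and hasKey then
-     ngx.exit(403)
- end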
Load-testing with ab shows that the connection count is not held strictly within set_keepalive's pool_size, but it does stay proportional to the concurrency of the test.
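For reference, a test along these lines (the URL matches the logs above; the request and concurrency numbers are illustrative), with a quick way to watch the Redis connection count from another shell:
- ab -n 10000 -c 100 http://127.0.0.1:8082/
- # count established connections to Redis while ab runs:
- netstat -an | grep ':6379' | grep ESTABLISHED | wc -l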