openresty lua-resty-mlcache: multi-level caching

Official site: https://github.com/thibaultcha/lua-resty-mlcache

Multi-level caching

* L1 cache: a lua-resty-lrucache instance holds the most frequently used data; each worker keeps its own copy in its Lua VM.
* L2 cache: a lua_shared_dict holds data shared by all workers; it is consulted on an L1 miss.
* L3: on an L1 and L2 miss, the data is fetched from the backend (guarded by lua-resty-lock so that concurrent requests do not all hit the backend at once) and written back into the L2 cache, where the other workers can share it.
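
Putting the levels together, a minimal sketch (the shared dict name and the db_query helper are placeholders, not part of the library):

-- nginx.conf: lua_shared_dict cache_shared_dict 10m;
local mlcache = require "resty.mlcache"

local cache = assert(mlcache.new("my_cache", "cache_shared_dict"))

local value, err, hit_level = cache:get("key", nil, function()
    -- L3 callback: runs in a single worker, under a lua-resty-lock,
    -- and only when both L1 and L2 missed
    return db_query("key") -- hypothetical backend lookup
end)
-- hit_level reports which level served the value: 1 (L1), 2 (L2) or 3 (L3)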

         

             

                                  

Creating a cache instance

new: create a cache instance

Syntax: cache, err = mlcache.new(name, shm, opts?)
* Creates a cache instance. On failure, it returns nil plus a string describing the error.
* name: the name of the cache instance. Instances created with the same name share their data.
* shm: the name of the lua_shared_dict to use as the L2 cache. mlcache instances pointing at the same shm share their data.
* opts: an optional table of options:
  * lru_size: size of the L1 (lua-resty-lrucache) cache. Default: 100.
  * ttl: expiration time of cached values, in seconds; 0 means never expire. Default: 30.
  * neg_ttl: expiration time of negative cache entries (the L3 callback returned nil), in seconds; 0 means never expire. Default: 5.
  * resurrect_ttl: when the L3 callback fails (returns nil plus an error), keep serving the stale value for this many seconds.
  * lru: a lua-resty-lrucache instance of your choice to use as the L1 cache.
  * shm_set_tries: number of tries for the L2 set() operation when the shm is full. Default: 3.
  * shm_miss: the name of a lua_shared_dict. When specified, misses (callbacks returning nil) are cached in this separate dict, which keeps negative entries from evicting proper values from the main shm.
  * shm_locks: the name of a lua_shared_dict. When specified, lua-resty-lock will use this dict to store its locks.
  * resty_lock_opts: options for the lua-resty-lock instances. When mlcache runs the L3 callback, it uses lua-resty-lock to ensure that a single worker runs the provided callback.
  * ipc_shm: the name of a lua_shared_dict used to synchronize the workers' L1 caches. If you wish to use set(), delete(), or purge(), you must provide an IPC (Inter-Process Communication) mechanism for workers to synchronize and invalidate their L1 caches.
  * ipc: like the ipc_shm option above, but lets you use the IPC library of your choice to propagate inter-worker events.
  * l1_serializer: a function called each time a value is promoted from the L2 cache into the L1 (worker Lua VM). Its signature and accepted values are documented under the get() method, along with an example.

         

Example: creating a multi-level cache

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "cache_shared_dict", {
    lru_size = 1000, -- hold up to 1000 items in the L1 cache (Lua VM)
    ttl      = 3600, -- caches scalar types and tables for 1h
    neg_ttl  = 60    -- caches nil values for 60s
})
if not cache then
    error("could not create mlcache: " .. err)
end

         

Example: multiple instances sharing the same L2 shm

local mlcache = require "resty.mlcache"

local cache_1 = mlcache.new("cache_1", "cache_shared_dict", { lru_size = 100 })
local cache_2 = mlcache.new("cache_2", "cache_shared_dict", { lru_size = 1e5 })

* In this example, cache_1 is ideal for holding a few, very large values; cache_2 can be used to hold a large number of small values.
* Both instances rely on the same shm: lua_shared_dict cache_shared_dict 2048m;.
* Even if you use identical keys in both caches, they will not conflict with each other, since each instance has its own namespace.

             

Example: synchronizing the L1 caches through a shared dict

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache_with_ipc", "cache_shared_dict", {
    lru_size = 1000,
    ipc_shm = "ipc_shared_dict"
})
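
The less common options combine the same way; a sketch, assuming every *_shared_dict name below has been declared with lua_shared_dict:

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "cache_shared_dict", {
    lru_size      = 1000,
    ttl           = 3600,
    resurrect_ttl = 30,                   -- serve stale values for 30s when the L3 callback fails
    shm_miss      = "misses_shared_dict", -- keep cached misses out of the main shm
    shm_locks     = "locks_shared_dict"   -- keep lua-resty-lock entries out of the main shm
})
if not cache then
    error("could not create mlcache: " .. err)
end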

        

               

                                  

Getting a cached value

get: look up a value

Syntax: value, err, hit_level = cache:get(key, opts?, callback?, ...)
* Performs a cache lookup. This is the primary and most efficient method of this module. A typical pattern is to not call set(), and let get() perform all the work.
* When this method succeeds, it returns value and no error. Because nil values from the L3 callback can be cached ("negative caching"), value can be nil even though it is cached; you must rely on the second return value, err, to determine whether the call succeeded.
* The third return value, hit_level, is a number set when no error was encountered. It indicates the level at which the value was fetched: 1 for L1, 2 for L2, 3 for L3.
* If an error is encountered, this method returns nil plus a string describing the error.
* key: a string. Each value must be stored under a unique key.
* opts: an optional table of options:
  * ttl: expiration time of the value, in seconds; 0 means never expire. Inherited from the instance by default.
  * neg_ttl: expiration time when the L3 callback returns nil, in seconds; 0 means never expire. Inherited from the instance by default.
  * resurrect_ttl: when the L3 callback fails, keep serving the stale value for this many seconds.
  * shm_set_tries: number of tries for the L2 set() operation.
  * l1_serializer: a function called each time a value is promoted from the L2 cache into the L1 (worker Lua VM); see the example further below.
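
Because a cached nil and a failed lookup both leave value as nil, checking err (and hit_level where useful) is the reliable pattern; a small sketch, reusing the fetch_user callback from the example further below:

local value, err, hit_level = cache:get("users:42", nil, fetch_user, 42)
if err then
    -- the lookup itself failed (shm error, lock timeout, callback error, ...)
    ngx.log(ngx.ERR, "lookup failed: ", err)
    return
end

-- no error: value may still be nil (negative caching)
ngx.say("served from level ", hit_level) -- 1, 2 or 3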

        

callback: an optional callback function

* If provided, it must be a function. Its signature and return values are documented in the following example.
* The variadic arguments of get() (those following callback) are forwarded to the callback, like so: cache:get(key, opts, callback, arg1, arg2, arg3).
* The callback returns three values: value, err, ttl.

local function callback(arg1, arg2, arg3)
    -- I/O lookup logic
    -- ...

    -- value: the value to cache (Lua scalar or table)
    -- err: if not `nil`, will abort get(), which will return `value` and `err`
    -- ttl: override the ttl for this value.
    --      If returned as `ttl >= 0`, it will override the instance
    --      (or option) `ttl` or `neg_ttl`.
    --      If returned as `ttl < 0`, `value` will be returned by get(),
    --      but not cached. This return value is ignored if not a number.
    return value, err, ttl
end

          

When no callback is provided

* If callback is not provided, get() still looks up the requested key in the L1 and L2 caches and returns it if found.
* When no value is found and no callback is provided, get() returns nil, nil, -1, where -1 signifies a cache miss (no value).
* This is not to be confused with return values such as nil, nil, 1, where 1 signifies a negative cached item (a cached nil) found in L1.
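
A small sketch of the distinction between a miss (-1) and a cached nil (the key names are arbitrary):

-- key never cached: a miss
local value, err, hit_level = cache:get("never_set_key")
-- value == nil, err == nil, hit_level == -1

-- cache a nil through the L3 callback ("negative caching")
cache:get("nil_key", nil, function() return nil end)

local value2, err2, hit_level2 = cache:get("nil_key")
-- value2 == nil, err2 == nil, hit_level2 == 1 (cached nil, found in L1)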

                 

When a callback is provided

When provided a callback, get() follows the logic below:

1. Query the L1 cache (the lua-resty-lrucache instance). This cache lives in the Lua VM, and as such it is the most efficient one to query.
   * If the L1 cache has the value, return it.
   * If the L1 cache does not have the value (L1 miss), continue.
2. Query the L2 cache (the lua_shared_dict memory zone). This cache is shared by all workers and is almost as efficient as the L1 cache, but it requires serialization of stored Lua tables.
   * If the L2 cache has the value, return it: if l1_serializer is set, run it and promote the resulting value into the L1 cache; otherwise, promote the value as-is.
   * If the L2 cache does not have the value (L2 miss), continue.
3. Create a lua-resty-lock and ensure that a single worker will run the callback; other workers trying to access the same value wait.
4. The single worker holding the lock runs the L3 callback (e.g. performs a database query):
   * If the callback succeeds and returns a value, the value is set in the L2 cache and then in the L1 cache (as-is by default, or as returned by l1_serializer if specified).
   * If the callback fails and returns nil, err: if resurrect_ttl is specified and the stale value is still available, it is resurrected in the L2 cache with its lifetime renewed and promoted to L1; otherwise, get() returns nil, err.
5. The other workers that were waiting for the same value are unlocked; they read the value from the L2 cache (they do not run the L3 callback) and return it.

When not provided a callback, get() only executes steps 1 and 2.

          

Example

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "cache_shared_dict", {
    lru_size = 1000,
    ttl      = 3600,
    neg_ttl  = 60
})

local function fetch_user(user_id)
    local user, err = db:query_user(user_id)
    if err then
        -- in this case, get() will return `nil` + `err`
        return nil, err
    end

    return user -- table or nil
end

local user_id = 3
local user, err = cache:get("users:" .. user_id, nil, fetch_user, user_id)
if err then
    ngx.log(ngx.ERR, "could not retrieve user: ", err)
    return
end

-- `user` could be a table, but could also be `nil` (does not exist)
-- regardless, it will be cached and subsequent calls to get() will
-- return the cached value, for up to `ttl` or `neg_ttl`.
if user then
    ngx.say("user exists: ", user.name)
else
    ngx.say("user does not exist")
end

         

Example: l1_serializer

-- Our l1_serializer, called when a value is promoted from L2 to L1
--
-- Its signature receives a single argument: the item as returned from
-- an L2 hit. Therefore, this argument can never be `nil`. The result will be
-- kept in the L1 cache, but it cannot be `nil`.
--
-- This function can return `nil` and a string describing an error, which
-- will bubble up to the caller of `get()`. It also runs in protected mode
-- and will report any Lua error.
local function load_code(user_row)
    if user_row.custom_code ~= nil then
        local f, err = loadstring(user_row.raw_lua_code)
        if not f then
            -- in this case, nothing will be stored in the cache (as if the L3
            -- callback failed)
            return nil, "failed to compile custom code: " .. err
        end

        user_row.f = f
    end

    return user_row
end

local user, err = cache:get("users:" .. user_id,
                            { l1_serializer = load_code },
                            fetch_user, user_id)
if err then
    ngx.log(ngx.ERR, "could not retrieve user: ", err)
    return
end

-- now we can call a function that was already loaded once, upon entering
-- the L1 cache (Lua VM)
user.f()

            

                  

                                  

Bulk lookups

get_bulk: perform several lookups at once

Syntax: res, err = cache:get_bulk(bulk, opts?)
* Performs several get() lookups at once (in bulk). Any of these lookups requiring an L3 callback call will be executed concurrently, in a pool of ngx.thread.
* The first argument, bulk, is a table containing n operations.
* The second argument, opts, is optional. If provided, it must be a table holding the options for this bulk lookup:
  * concurrency: a number greater than 0, the number of threads that will concurrently execute the L3 callbacks for this bulk lookup. A concurrency of 3 with 6 callbacks to run means each thread executes 2 callbacks; a concurrency of 1 with 6 callbacks means a single thread executes all 6; with a concurrency of 6 and 1 callback, a single thread runs it. Default: 3.
* Upon success, this method returns res, a table containing the results of each lookup, and no error.
* Upon failure, this method returns nil plus a string describing the error.

          

Example

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "cache_shared_dict")

cache:get("key_c", nil, function() return nil end)

local res, err = cache:get_bulk({
    -- bulk layout:
    -- key      opts          L3 callback                     callback argument
    "key_a",    { ttl = 60 }, function() return "hello" end,  nil,
    "key_b",    nil,          function() return "world" end,  nil,
    "key_c",    nil,          function() return "bye" end,    nil,
    n = 3 -- specify the number of operations
}, { concurrency = 3 })
if err then
    ngx.log(ngx.ERR, "could not execute bulk lookup: ", err)
    return
end

-- res layout:
-- { data, "err", hit_lvl, ... }
for i = 1, res.n, 3 do
    local data = res[i]
    local err = res[i + 1]
    local hit_lvl = res[i + 2]
    if not err then
        ngx.say("data: ", data, ", hit_lvl: ", hit_lvl)
    end
end

-- output:
-- data: hello, hit_lvl: 3
-- data: world, hit_lvl: 3
-- data: nil, hit_lvl: 1

         

                  

                                  

Building a bulk lookup table

new_bulk: create a bulk lookup table

Syntax: bulk = mlcache.new_bulk(n_lookups?)
* Creates a table holding lookup operations for the get_bulk() function. Using this function to construct a bulk lookup table is not required, but it provides a nice abstraction.
* The first and only argument, n_lookups, is optional; if specified, it is a number hinting at the number of lookups this bulk will eventually contain, so that the underlying table can be pre-allocated for optimization purposes.

         

Example

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "cache_shared_dict")

local bulk = mlcache.new_bulk(3)

-- arguments are: key, opts, callback, callback argument
bulk:add("key_a", { ttl = 60 }, function(n) return n * n end, 42)
bulk:add("key_b", nil, function(str) return str end, "hello")
bulk:add("key_c", nil, function() return nil end)

local res, err = cache:get_bulk(bulk)

            

                 

                                  

Iterating over bulk results

each_bulk_res: iterate over a bulk res table

Syntax: iter, res, i = mlcache.each_bulk_res(res)
* Provides an abstraction to iterate over a get_bulk() res return table.
* Using this method to iterate over a res table is not required, but it provides a nice abstraction.

           

Example

local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("my_cache", "cache_shared_dict")

local res, err = cache:get_bulk(bulk)

for i, data, err, hit_lvl in mlcache.each_bulk_res(res) do
    if not err then
        ngx.say("lookup ", i, ": ", data)
    end
end

       

              

                                  

Peeking into the L2 cache

peek: query the L2 cache directly, without promoting the result into the L1 cache

Syntax: ttl, err, value = cache:peek(key, stale?)
* Peeks into the L2 (lua_shared_dict) cache.
* The first argument, key, is a string: the key to look up in the cache.
* The second argument, stale, is optional. If true, peek() considers stale values as cached values. If not provided, peek() treats stale values as if they were not in the cache (i.e. by default, stale values are ignored).
* This method returns nil and a string describing the error upon failure.
* If there is no value for the queried key, it returns nil and no error.
* If there is a value for the queried key, it returns a number indicating the remaining TTL of the cached value (in seconds) and no error. If the value has expired but is still in the L2 cache, the returned TTL is negative; the third return value in that case is the cached value itself, for convenience.
* This method is useful when you want to determine whether a value is cached. A value stored in the L2 cache is considered cached regardless of whether it is also set in the L1 cache of the worker, because the L1 cache is considered volatile (its size unit is a number of slots), and the L2 cache is still several orders of magnitude faster than the L3 callback anyway.
* Since its only intent is to take a "peek" into the cache to determine its warmth for a given value, peek() does not count as a query like get(), and does not promote the value into the L1 cache.
                   

Example

local mlcache = require "resty.mlcache"

local cache = mlcache.new("my_cache", "cache_shared_dict")

local ttl, err, value = cache:peek("key")
if err then
    ngx.log(ngx.ERR, "could not peek cache: ", err)
    return
end

ngx.say(ttl)   -- nil because `key` has no value yet
ngx.say(value) -- nil

-- cache the value
cache:get("key", { ttl = 5 }, function() return "some value" end)

-- wait 2 seconds
ngx.sleep(2)

local ttl, err, value = cache:peek("key")
if err then
    ngx.log(ngx.ERR, "could not peek cache: ", err)
    return
end

ngx.say(ttl)   -- 3
ngx.say(value) -- "some value"

                

                 

                                  

Setting a value

set: write a value into the cache

Syntax: ok, err = cache:set(key, opts?, value)
* Unconditionally sets a value in the L2 cache and broadcasts an event to the other workers so they can evict the value from their L1 cache.
* The first argument, key, is a string: the key under which to store the value.
* The second argument, opts, is optional; if provided, it is identical to the opts of get().
* The third argument, value, is the value to cache, similar to the return value of the L3 callback. Just like the callback's return value, it must be a Lua scalar, a table, or nil. If an l1_serializer is provided, either from the constructor or in the opts argument, it will be called with value if value is not nil.
* On success, the first return value is true. On failure, this method returns nil and a string describing the error.
* Note: by its nature, set() requires that the other mlcache instances (in other workers) bearing the same name refresh their L1 cache. If set() is called from a single worker, the other workers must call update() before their cache is queried during the next request, to make sure they refreshed their L1 cache.
* Note bis: it is generally considered inefficient to call set() on a hot code path (such as in a request being served by OpenResty). Instead, rely on get() and its built-in mutex in the L3 callback. set() is better suited to occasional calls from a single worker, for example upon a particular event that triggers a cached value to be updated. Once set() updates the L2 cache with the fresh value, the other workers rely on update() to poll the invalidation event and invalidate their L1 cache, which makes them fetch the fresh value from L2.
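
A short sketch (the instance must have been created with the ipc_shm or ipc option for set() to work; the key and value are arbitrary):

local ok, err = cache:set("users:42", nil, { name = "alice" })
if not ok then
    ngx.log(ngx.ERR, "could not set value: ", err)
    return
end

-- other workers must call cache:update() before their next get()
-- so that their stale L1 copy of "users:42" is evicted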

       

             

                                  

Deleting a value

delete: remove a value from the cache

Syntax: ok, err = cache:delete(key)
* Deletes a value from the L2 cache and publishes an event to the other workers so they can evict the value from their L1 cache.
* The first and only argument, key, is the string under which the value is stored.
* On success, the first return value is true. On failure, this method returns nil and a string describing the error.
* Note: by its nature, delete() requires that the other mlcache instances (in other workers) bearing the same name refresh their L1 cache. If delete() is called from a single worker, the other workers must call update() before their cache is queried during the next request, to make sure they refreshed their L1 cache.
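
A matching sketch for delete(), under the same ipc_shm/ipc requirement:

local ok, err = cache:delete("users:42")
if not ok then
    ngx.log(ngx.ERR, "could not delete value: ", err)
    return
end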

          

               

                                  

Purging the cache

purge: wipe the L1 and L2 caches

Syntax: ok, err = cache:purge(flush_expired?)
* Purges the content of the cache at both the L1 and L2 levels, then publishes an event to the other workers so they can purge their L1 cache as well.
* This method recycles the lua-resty-lrucache instance and calls ngx.shared.DICT:flush_all, so it can be rather expensive.
* The first and only argument, flush_expired, is optional; if true, this method also calls ngx.shared.DICT:flush_expired (with no arguments). This is useful to release memory claimed by the L2 (shm) cache if needed.
* On success, the first return value is true. On failure, this method returns nil and a string describing the error.
* Note: it is not possible to call purge() when using a custom LRU cache (the lru option) on OpenResty 1.13.6.1 and below; this limitation does not apply to OpenResty 1.13.6.2 and above.
* Note: by its nature, purge() requires that the other mlcache instances (in other workers) bearing the same name refresh their L1 cache. If purge() is called from a single worker, the other workers must call update() before their cache is queried during the next request, to make sure they refreshed their L1 cache.
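
And for purge(), a minimal sketch that also releases the memory held by expired L2 entries:

local ok, err = cache:purge(true) -- true: also call ngx.shared.DICT:flush_expired()
if not ok then
    ngx.log(ngx.ERR, "could not purge cache: ", err)
    return
end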

             

                

                                  

Updating the cache

update: poll and apply invalidation events

Syntax: ok, err = cache:update(timeout?)
* Polls and executes pending cache invalidation events published by the other workers.
* The set(), delete(), and purge() methods require that the other mlcache instances (in other workers) refresh their L1 cache. Since OpenResty currently has no built-in mechanism for inter-worker communication, this module bundles an "off-the-shelf" IPC library to propagate inter-worker events. If the bundled IPC library is used, the lua_shared_dict specified in the ipc_shm option must not be used by actors other than mlcache itself.
* This method allows a worker to update its L1 cache (by purging values considered stale because another worker called set(), delete(), or purge()) before processing a request.
* timeout: in seconds, default 0.3 (300ms). The update operation times out when this threshold is reached, which prevents update() from staying on the CPU too long when there are too many events to process. In an eventually consistent system, the remaining events can wait for the next call to be processed.
* A typical design pattern is to call update() only once at the start of each request. In the best-case scenario (no invalidation events were received), the hot code path performs a single shm access and all get() calls hit the L1 cache. Only in the worst case (n values were evicted by another worker) will get() access the L2 or L3 cache n times; subsequent requests hit the best case again, because get() repopulated the L1 cache.
* So if your workers make use of set(), delete(), or purge() anywhere in your application, call update() at the entrance of your hot code path, before using get(); otherwise you risk reading stale data from the L1 cache.

        

Example

http {
    server {
        listen 9000;

        location / {
            content_by_lua_block {
                local cache = ... -- retrieve mlcache instance

                -- make sure L1 cache is evicted of stale values
                -- before calling get()
                local ok, err = cache:update()
                if not ok then
                    ngx.log(ngx.ERR, "failed to poll eviction events: ", err)
                    -- /!\ we might get stale data from get()
                end

                -- L1/L2/L3 lookup (best case: L1)
                local value, err = cache:get("key_1", nil, cb1)

                -- L1/L2/L3 lookup (best case: L1)
                local other_value, err = cache:get("key_2", nil, cb2)

                -- value and other_value are up-to-date because:
                -- either they were not stale and directly came from L1 (best case scenario)
                -- either they were stale and evicted from L1, and came from L2
                -- either they were not in L1 nor L2, and came from L3 (worst case scenario)
            }
        }

        location /delete {
            content_by_lua_block {
                local cache = ... -- retrieve mlcache instance

                -- delete some value
                local ok, err = cache:delete("key_1")
                if not ok then
                    ngx.log(ngx.ERR, "failed to delete value from cache: ", err)
                    return ngx.exit(500)
                end

                ngx.exit(204)
            }
        }

        location /set {
            content_by_lua_block {
                local cache = ... -- retrieve mlcache instance

                -- update some value
                local ok, err = cache:set("key_1", nil, 123)
                if not ok then
                    ngx.log(ngx.ERR, "failed to set value in cache: ", err)
                    return ngx.exit(500)
                end

                ngx.exit(200)
            }
        }
    }
}

           

                

                                  

Full usage example

nginx.conf: declare the shared cache

pcre_jit on;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    client_body_temp_path /var/run/openresty/nginx-client-body;
    proxy_temp_path /var/run/openresty/nginx-proxy;
    fastcgi_temp_path /var/run/openresty/nginx-fastcgi;
    uwsgi_temp_path /var/run/openresty/nginx-uwsgi;
    scgi_temp_path /var/run/openresty/nginx-scgi;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;

    # declare the shared dict used as the L2 cache
    lua_shared_dict test 10m;
}

         

default.conf

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/local/openresty/nginx/html;
        index index.html index.htm;
    }

    location /test {
        content_by_lua_block {
            local mlcache = require 'resty.mlcache';

            local cache, err = mlcache.new("cache", "test", {
                lru_size = 1000,
                ttl = 3600,
                neg_ttl = 60
            });

            local res, err = cache:get("1");
            if err then
                ngx.say("no data found ==> ", err);
                return
            end

            ngx.say("query result ==> ", res);
        }
    }

    location /test2 {
        content_by_lua_block {
            local mlcache = require 'resty.mlcache';

            local cache, err = mlcache.new("cache", "test", {
                lru_size = 1000,
                ttl = 3600,
                neg_ttl = 60
            });

            local function fetch_data_from_db(arg)
                -- simulate fetching data from a backend
                ngx.say("callback argument ==> ", arg);
                if arg == '1' then
                    ngx.say("callback returns ==> ", 1);
                    return '1';
                else
                    ngx.say("callback returns ==> ", 0);
                    return '0';
                end
            end

            local res, err = cache:get("1", nil, fetch_data_from_db, 1);
            if err then
                ngx.say("no data found ==> ", err);
                return
            end

            ngx.say("query result ==> ", res);
        }
    }

    location /test3 {
        content_by_lua_block {
            local mlcache = require 'resty.mlcache';

            local cache, err = mlcache.new("cache", "test", {
                lru_size = 1000,
                ttl = 3600,
                neg_ttl = 60
            });

            local function fetch_data_from_db(arg)
                -- simulate fetching data from a backend
                ngx.say("callback argument ==> ", arg);
                if arg == '1' then
                    ngx.say("callback returns ==> ", 1);
                    return '1' * 10;
                else
                    ngx.say("callback returns ==> ", 0);
                    return '0';
                end
            end

            local function serialize(data)
                ngx.say("serializer argument ==> ", data);
                if data == '1' then
                    ngx.say("serializer returns ==> ", data);
                    return '10';
                end

                ngx.say("serializer returns ==> ", data);
                return data;
            end

            local arg = ngx.var.arg_name;
            local res, err = cache:get("1", { l1_serializer = serialize }, fetch_data_from_db, arg);
            if err then
                ngx.say("no data found ==> ", err);
                return
            end

            ngx.say("query result ==> ", res);
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/openresty/nginx/html;
    }
}

           

Create a container

docker run -it -d --net fixed --ip 172.18.0.101 -p 8001:80 \
    -v /Users/huli/lua/openresty/cache/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
    -v /Users/huli/lua/openresty/cache/default5.conf:/etc/nginx/conf.d/default.conf \
    --name open-cache5 lihu12344/openresty

        

Enter the container and install lua-resty-mlcache

huli@hudeMacBook-Pro cache % docker exec -it open-cache5 bash
[root@46007ec3d010 /]# cd /usr/local/openresty/bin

# search for the mlcache package
[root@46007ec3d010 bin]# opm search lua-resty-mlcache
thibaultcha/lua-resty-mlcache                 Multi-level caching library for OpenResty

# install mlcache
[root@46007ec3d010 bin]# opm install thibaultcha/lua-resty-mlcache
* Fetching thibaultcha/lua-resty-mlcache
  Downloading https://opm.openresty.org/api/pkg/tarball/thibaultcha/lua-resty-mlcache-2.5.0.opm.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 33256  100 33256    0     0  71927      0 --:--:-- --:--:-- --:--:-- 71982
Package thibaultcha/lua-resty-mlcache 2.5.0 installed successfully under /usr/local/openresty/site/ .

# list installed third-party packages
[root@46007ec3d010 bin]# opm list
thibaultcha/lua-resty-mlcache                                2.5.0

               

Test it

# L1/L2 lookup only (no callback), nothing cached yet
huli@hudeMacBook-Pro cache % curl --location --request GET 'localhost:8001/test'
query result ==> nil

# the callback provides the data
huli@hudeMacBook-Pro cache % curl --location --request GET 'localhost:8001/test2'
callback argument ==> 1
callback returns ==> 0
query result ==> 0

# L1, L2, then L3 lookup, with the l1_serializer applied
huli@hudeMacBook-Pro cache % curl --location --request GET 'localhost:8001/test3'
callback argument ==> nil
callback returns ==> 0
serializer argument ==> 0
serializer returns ==> 0
query result ==> 0

# served from the cache
huli@hudeMacBook-Pro cache % curl --location --request GET 'localhost:8001/test3?name=1'
query result ==> 0

           

                  
