Squashed 'src/deps/src/lua-resty-mlcache/' content from commit f140f5666

git-subtree-dir: src/deps/src/lua-resty-mlcache
git-subtree-split: f140f56663cbdb9cdd247d29f75c299c702ff6b4
Théophile Diot 2023-06-30 15:38:43 -04:00
commit 31bf774f63
26 changed files with 13778 additions and 0 deletions

.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,59 @@
name: CI
on:
push:
branches: main
pull_request:
branches: '*'
workflow_dispatch:
inputs:
openresty:
description: 'OpenResty version (e.g. 1.21.4.1rc2)'
required: true
defaults:
run:
shell: bash
jobs:
tests:
name: Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
openresty:
- 1.21.4.1
- 1.19.9.1
- 1.19.3.2
- 1.17.8.2
- 1.15.8.3
- 1.13.6.2
- 1.11.2.5
steps:
- if: ${{ github.event_name == 'workflow_dispatch' }}
run: echo "OPENRESTY_VER=${{ github.event.inputs.openresty }}" >> $GITHUB_ENV
- if: ${{ github.event_name == 'push' || github.event_name == 'pull_request' }}
run: echo "OPENRESTY_VER=${{ matrix.openresty }}" >> $GITHUB_ENV
- uses: actions/checkout@v2
- name: Setup OpenResty
uses: thibaultcha/setup-openresty@main
with:
version: ${{ env.OPENRESTY_VER }}
- run: prove -r t/
lint:
name: Lint
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
openresty: [1.19.9.1]
steps:
- uses: actions/checkout@v2
- name: Setup OpenResty
uses: thibaultcha/setup-openresty@main
with:
version: ${{ matrix.openresty }}
- run: |
echo "luarocks check"

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
t/servroot*
lua-resty-mlcache-*/
*.tar.gz
*.rock
work/

.luacheckrc Normal file

@@ -0,0 +1,3 @@
std = "ngx_lua"
redefined = false
max_line_length = 80

CHANGELOG.md Normal file

@@ -0,0 +1,270 @@
# Table of Contents
- [2.6.0](#2.6.0)
- [2.5.0](#2.5.0)
- [2.4.1](#2.4.1)
- [2.4.0](#2.4.0)
- [2.3.0](#2.3.0)
- [2.2.1](#2.2.1)
- [2.2.0](#2.2.0)
- [2.1.0](#2.1.0)
- [2.0.2](#2.0.2)
- [2.0.1](#2.0.1)
- [2.0.0](#2.0.0)
- [1.0.1](#1.0.1)
- [1.0.0](#1.0.0)
## [2.6.0]
> Released on: 2022/08/22
#### Added
- Use the new LuaJIT `string.buffer` API for L2 (shm layer) encoding/decoding
when available.
[#110](https://github.com/thibaultcha/lua-resty-mlcache/pull/110)
[Back to TOC](#table-of-contents)
## [2.5.0]
> Released on: 2020/11/18
#### Added
- `get()` callback functions are now optional. Without a callback, `get()` now
still performs on-cpu L1/L2 lookups (no yielding). This allows implementing
new cache lookup patterns guaranteed to be on-cpu for a more constant,
smoother latency tail end (e.g. values are refreshed in background timers with
`set()`).
Thanks Hamish Forbes and Corina Purcarea for proposing this feature and
participating in its development!
[#96](https://github.com/thibaultcha/lua-resty-mlcache/pull/96)
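  A minimal sketch of this callback-less pattern (the shm name `cache_shm`, the
  key names, and the `fetch_from_db` helper are illustrative assumptions, not
  part of the change):

  ```lua
  -- Hypothetical sketch: a background timer refreshes values with set(),
  -- while request handlers call get() without a callback and thus stay
  -- on-cpu (L1/L2 lookups only, no yielding).
  local mlcache = require "resty.mlcache"

  local cache = assert(mlcache.new("my_cache", "cache_shm")) -- assumed shm

  -- refresh from a background timer (e.g. ngx.timer.every)
  local function refresh(premature)
      if premature then return end
      local value = fetch_from_db() -- hypothetical fetch function
      assert(cache:set("my_key", nil, value))
  end

  -- hot path: no callback argument, so get() never yields
  local value, err, hit_lvl = cache:get("my_key")
  ```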
#### Fixed
- Improve `update()` robustness to worker crashes. Now, the library behind
`cache:update()` is much more robust to re-spawned workers when initialized in
the `init_by_lua` phase.
[#97](https://github.com/thibaultcha/lua-resty-mlcache/pull/97)
- Document the `peek()` method `stale` argument which was not mentioned, as well
as the possibility of negative TTL return values for expired items.
[Back to TOC](#table-of-contents)
## [2.4.1]
> Released on: 2020/01/17
#### Fixed
- The IPC module now avoids replaying all events when spawning new workers, and
gets initialized with the latest event index instead.
[#88](https://github.com/thibaultcha/lua-resty-mlcache/pull/88)
[Back to TOC](#table-of-contents)
## [2.4.0]
> Released on: 2019/03/28
#### Added
- A new `get_bulk()` API allows for fetching several values from the layered
caches in a single call, and will execute all L3 callback functions
concurrently, in a configurable pool of threads.
[#77](https://github.com/thibaultcha/lua-resty-mlcache/pull/77)
- `purge()` now clears the L1 LRU cache with the new `flush_all()` method when
used in OpenResty >= 1.13.6.2.
Thanks [@Crack](https://github.com/Crack) for the patch!
[#78](https://github.com/thibaultcha/lua-resty-mlcache/pull/78)
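  A sketch of the bulk API described above, following the documented
  `key, opts, callback, arg` quadruplet layout (the callback names and the
  `concurrency` option shown here are assumptions taken from the project
  documentation, not from this changelog entry):

  ```lua
  -- Hypothetical get_bulk() sketch: fetch three keys in one call; all L3
  -- callbacks are executed concurrently in a pool of threads.
  local res, err = cache:get_bulk({
      "key_a", nil, cb_a, nil, -- key, opts, callback, callback arg
      "key_b", nil, cb_b, nil,
      "key_c", nil, cb_c, nil,
      n = 3,                   -- number of operations in this bulk
  }, { concurrency = 3 })      -- assumed option controlling the thread pool

  -- res holds a data, err, hit_lvl triplet for each operation
  ```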
#### Fixed
- `get()` is now resilient to L3 callback functions calling `error()` with
non-string arguments. Such functions could result in a runtime error when
LuaJIT is compiled with `-DLUAJIT_ENABLE_LUA52COMPAT`.
Thanks [@MartinAmps](https://github.com/MartinAmps) for the patch!
[#75](https://github.com/thibaultcha/lua-resty-mlcache/pull/75)
- Instances using a custom L1 LRU cache in OpenResty < 1.13.6.2 are now
restricted from calling `purge()`, since doing so would result in the LRU
cache being overwritten.
[#79](https://github.com/thibaultcha/lua-resty-mlcache/pull/79)
[Back to TOC](#table-of-contents)
## [2.3.0]
> Released on: 2019/01/17
#### Added
- Returning a negative `ttl` value from the L3 callback will now make the
fetched data bypass the cache (it will still be returned by `get()`).
This is useful when some fetched data indicates that it is not cacheable.
Thanks [@eaufavor](https://github.com/eaufavor) for the patch!
[#68](https://github.com/thibaultcha/lua-resty-mlcache/pull/68)
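  A sketch of a callback using this behavior (`fetch_value` and the
  `uncacheable` flag are hypothetical):

  ```lua
  -- The fetched value is still returned by get(), but a negative ttl
  -- makes it bypass both the L1 and L2 caches entirely.
  local value, err = cache:get("key", nil, function()
      local value = fetch_value() -- assumed fetch function
      if value and value.uncacheable then
          return value, nil, -1   -- third return value overrides the ttl
      end
      return value
  end)
  ```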
[Back to TOC](#table-of-contents)
## [2.2.1]
> Released on: 2018/07/28
#### Fixed
- When `get()` returns a value from L2 (shm) during its last millisecond of
freshness, we do not erroneously cache the value in L1 (LRU) indefinitely
anymore. Thanks [@jdesgats](https://github.com/jdesgats) and
[@javierguerragiraldez](https://github.com/javierguerragiraldez) for the
report and initial fix.
[#58](https://github.com/thibaultcha/lua-resty-mlcache/pull/58)
- When `get()` returns a previously resurrected value from L2 (shm), we now
correctly set the `hit_lvl` return value to `4`, instead of `2`.
[307feca](https://github.com/thibaultcha/lua-resty-mlcache/commit/307fecad6adac8755d4fcd931bbb498da23d069c)
[Back to TOC](#table-of-contents)
## [2.2.0]
> Released on: 2018/06/29
#### Added
- Implement a new `resurrect_ttl` option. When specified, `get()` will behave
in a more resilient way upon errors, and in particular callback errors.
[#52](https://github.com/thibaultcha/lua-resty-mlcache/pull/52)
- New `stale` argument to `peek()`. When specified, `peek()` will return stale
shm values.
[#52](https://github.com/thibaultcha/lua-resty-mlcache/pull/52)
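  A sketch combining both additions (shm and key names are assumed):

  ```lua
  -- Hypothetical sketch: resurrect_ttl serves stale data upon L3 callback
  -- errors, and peek() can now also report stale shm values.
  local cache = assert(mlcache.new("my_cache", "cache_shm", {
      resurrect_ttl = 30, -- seconds during which stale data may be served
  }))

  local ttl, err, value = cache:peek("key", true) -- stale = true
  -- a negative ttl denotes an expired (stale) item
  ```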
[Back to TOC](#table-of-contents)
## [2.1.0]
> Released on: 2018/06/14
#### Added
- Implement a new `shm_locks` option. This option receives the name of a
lua_shared_dict, and, when specified, the mlcache instance will store
lua-resty-lock objects in it instead of storing them in the cache hits
lua_shared_dict. This can help reduce LRU churn in some workloads.
[#55](https://github.com/thibaultcha/lua-resty-mlcache/pull/55)
- Provide stack traceback in `err` return value when the L3 callback throws an
error.
[#56](https://github.com/thibaultcha/lua-resty-mlcache/pull/56)
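  A sketch of the `shm_locks` option (dict names are assumptions; both dicts
  must be declared with `lua_shared_dict`):

  ```lua
  -- Hypothetical sketch: lua-resty-lock objects are stored in a dedicated
  -- dict instead of the cache hits dict, which can reduce LRU churn in the
  -- hits shm under contention.
  local cache = assert(mlcache.new("my_cache", "cache_shm", {
      shm_locks = "locks_shm", -- assumed name of a declared lua_shared_dict
  }))
  ```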
#### Fixed
- Ensure `no memory` errors returned by shm insertions are properly returned
by `set()`.
[#53](https://github.com/thibaultcha/lua-resty-mlcache/pull/53)
[Back to TOC](#table-of-contents)
## [2.0.2]
> Released on: 2018/04/09
#### Fixed
- Make `get()` lookup in shm after lock timeout. This prevents a possible (but
rare) race condition under high load. Thanks to
[@jdesgats](https://github.com/jdesgats) for the report and initial fix.
[#49](https://github.com/thibaultcha/lua-resty-mlcache/pull/49)
[Back to TOC](#table-of-contents)
## [2.0.1]
> Released on: 2018/03/27
#### Fixed
- Ensure the `set()`, `delete()`, `peek()`, and `purge()` methods properly
support the new `shm_miss` option.
[#45](https://github.com/thibaultcha/lua-resty-mlcache/pull/45)
[Back to TOC](#table-of-contents)
## [2.0.0]
> Released on: 2018/03/18
This release implements numerous new features. The major version digit has been
bumped to ensure that the changes to the interpretation of the callback return
values (documented below) do not break any dependent application.
#### Added
- Implement a new `purge()` method to clear all cached items in both
the L1 and L2 caches.
[#34](https://github.com/thibaultcha/lua-resty-mlcache/pull/34)
- Implement a new `shm_miss` option. This option receives the name
of a lua_shared_dict, and when specified, will cache misses there instead of
the instance's `shm` shared dict. This is particularly useful for certain
types of workload where a large number of misses can be triggered and
eventually evict too many cached values (hits) from the instance's `shm`.
[#42](https://github.com/thibaultcha/lua-resty-mlcache/pull/42)
- Implement a new `l1_serializer` callback option. It allows the
deserialization of data from L2 or L3 into arbitrary Lua data inside the LRU
cache (L1). This includes userdata, cdata, functions, etc...
Thanks to [@jdesgats](https://github.com/jdesgats) for the contribution.
[#29](https://github.com/thibaultcha/lua-resty-mlcache/pull/29)
- Implement a new `shm_set_tries` option to retry `shm:set()`
operations and ensure LRU eviction when caching values of disparate sizes.
[#41](https://github.com/thibaultcha/lua-resty-mlcache/issues/41)
- The L3 callback can now return `nil + err`, which will be bubbled up
to the caller of `get()`. Prior to this change, the second return value of
callbacks was ignored, and users had to throw hard Lua errors from inside
their callbacks.
[#35](https://github.com/thibaultcha/lua-resty-mlcache/pull/35)
- Support for custom IPC module.
[#31](https://github.com/thibaultcha/lua-resty-mlcache/issues/31)
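  A sketch combining the 2.0.0 additions described above (shm names and the
  use of cjson as an `l1_serializer` are illustrative assumptions):

  ```lua
  local cache = assert(mlcache.new("my_cache", "cache_shm", {
      shm_miss      = "cache_shm_miss", -- assumed dict caching misses apart
      shm_set_tries = 3,                -- retries for shm:set() eviction
      l1_serializer = function(s)       -- decode L2 strings into L1 tables
          return require("cjson").decode(s)
      end,
  }))

  -- L3 callbacks may now return nil + err, bubbled up to the get() caller:
  local value, err = cache:get("key", nil, function()
      return nil, "could not reach the database"
  end)
  ```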
#### Fixed
- In the event of a `no memory` error returned by the L2 lua_shared_dict cache
(after all `shm_set_tries` attempts have failed), we do not interrupt the `get()`
flow to return an error anymore. Instead, the retrieved value is now bubbled
up for insertion in L1, and returned to the caller. A warning log is (by
default) printed in the nginx error logs.
[#41](https://github.com/thibaultcha/lua-resty-mlcache/issues/41)
[Back to TOC](#table-of-contents)
## [1.0.1]
> Released on: 2017/08/26
#### Fixed
- Do not rely on memory address of mlcache instance in invalidation events
channel names. This ensures invalidation events are properly broadcasted to
sibling instances in other workers.
[#27](https://github.com/thibaultcha/lua-resty-mlcache/pull/27)
[Back to TOC](#table-of-contents)
## [1.0.0]
> Released on: 2017/08/23
Initial release.
[Back to TOC](#table-of-contents)
[2.6.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.5.0...2.6.0
[2.5.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.4.1...2.5.0
[2.4.1]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.4.0...2.4.1
[2.4.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.3.0...2.4.0
[2.3.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.2.1...2.3.0
[2.2.1]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.2.0...2.2.1
[2.2.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.1.0...2.2.0
[2.1.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.0.2...2.1.0
[2.0.2]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.0.1...2.0.2
[2.0.1]: https://github.com/thibaultcha/lua-resty-mlcache/compare/2.0.0...2.0.1
[2.0.0]: https://github.com/thibaultcha/lua-resty-mlcache/compare/1.0.1...2.0.0
[1.0.1]: https://github.com/thibaultcha/lua-resty-mlcache/compare/1.0.0...1.0.1
[1.0.0]: https://github.com/thibaultcha/lua-resty-mlcache/tree/1.0.0

LICENSE Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2017-2022 Thibault Charbonnier
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

README.md Normal file

File diff suppressed because it is too large

dist.ini Normal file

@@ -0,0 +1,9 @@
name = lua-resty-mlcache
abstract = Layered caching library for OpenResty
author = Thibault Charbonnier (thibaultcha)
is_original = yes
license = mit
lib_dir = lib
repo_link = https://github.com/thibaultcha/lua-resty-mlcache
main_module = lib/resty/mlcache.lua
requires = openresty

lib/resty/mlcache.lua Normal file

File diff suppressed because it is too large

lib/resty/mlcache/ipc.lua Normal file

@@ -0,0 +1,257 @@
-- vim: ts=4 sts=4 sw=4 et:
local ERR = ngx.ERR
local WARN = ngx.WARN
local INFO = ngx.INFO
local sleep = ngx.sleep
local shared = ngx.shared
local worker_pid = ngx.worker.pid
local ngx_log = ngx.log
local fmt = string.format
local sub = string.sub
local find = string.find
local min = math.min
local type = type
local pcall = pcall
local error = error
local insert = table.insert
local tonumber = tonumber
local setmetatable = setmetatable
local INDEX_KEY = "lua-resty-ipc:index"
local FORCIBLE_KEY = "lua-resty-ipc:forcible"
local POLL_SLEEP_RATIO = 2
local function marshall(worker_pid, channel, data)
return fmt("%d:%d:%s%s", worker_pid, #data, channel, data)
end
local function unmarshall(str)
local sep_1 = find(str, ":", nil , true)
local sep_2 = find(str, ":", sep_1 + 1, true)
local pid = tonumber(sub(str, 1 , sep_1 - 1))
local data_len = tonumber(sub(str, sep_1 + 1, sep_2 - 1))
local channel_last_pos = #str - data_len
local channel = sub(str, sep_2 + 1, channel_last_pos)
local data = sub(str, channel_last_pos + 1)
return pid, channel, data
end
local function log(lvl, ...)
return ngx_log(lvl, "[ipc] ", ...)
end
local _M = {}
local mt = { __index = _M }
function _M.new(shm, debug)
local dict = shared[shm]
if not dict then
return nil, "no such lua_shared_dict: " .. shm
end
local self = {
dict = dict,
pid = debug and 0 or worker_pid(),
idx = 0,
callbacks = {},
}
return setmetatable(self, mt)
end
function _M:subscribe(channel, cb)
if type(channel) ~= "string" then
error("channel must be a string", 2)
end
if type(cb) ~= "function" then
error("callback must be a function", 2)
end
if not self.callbacks[channel] then
self.callbacks[channel] = { cb }
else
insert(self.callbacks[channel], cb)
end
end
function _M:broadcast(channel, data)
if type(channel) ~= "string" then
error("channel must be a string", 2)
end
if type(data) ~= "string" then
error("data must be a string", 2)
end
local marshalled_event = marshall(worker_pid(), channel, data)
local idx, err = self.dict:incr(INDEX_KEY, 1, 0)
if not idx then
return nil, "failed to increment index: " .. err
end
local ok, err, forcible = self.dict:set(idx, marshalled_event)
if not ok then
return nil, "failed to insert event in shm: " .. err
end
if forcible then
-- take note that eviction has started
-- we repeat this flagging to prevent this key itself from
-- ever being evicted
local ok, err = self.dict:set(FORCIBLE_KEY, true)
if not ok then
return nil, "failed to set forcible flag in shm: " .. err
end
end
return true
end
-- Note: if this module were to be used by users (that is, users can implement
-- their own pub/sub events and thus, callbacks), this method would then need
-- to consider the time spent in callbacks to prevent long running callbacks
-- from penalizing the worker.
-- Since this module is currently only used by mlcache, whose callback is an
-- shm operation, we only worry about the time spent waiting for events
-- between the 'incr()' and 'set()' race condition.
function _M:poll(timeout)
if timeout ~= nil and type(timeout) ~= "number" then
error("timeout must be a number", 2)
end
local shm_idx, err = self.dict:get(INDEX_KEY)
if err then
return nil, "failed to get index: " .. err
end
if shm_idx == nil then
-- no events to poll yet
return true
end
if type(shm_idx) ~= "number" then
return nil, "index is not a number, shm tampered with"
end
if not timeout then
timeout = 0.3
end
if self.idx == 0 then
local forcible, err = self.dict:get(FORCIBLE_KEY)
if err then
return nil, "failed to get forcible flag from shm: " .. err
end
if forcible then
-- shm lru eviction occurred, we are likely a new worker
-- skip indexes that may have been evicted and resume current
-- polling idx
self.idx = shm_idx - 1
end
else
-- guard: self.idx <= shm_idx
self.idx = min(self.idx, shm_idx)
end
local elapsed = 0
for _ = self.idx, shm_idx - 1 do
-- fetch event from shm with a retry policy in case
-- we run our :get() in between another worker's
-- :incr() and :set()
local v
local idx = self.idx + 1
do
local perr
local pok = true
local sleep_step = 0.001
while elapsed < timeout do
v, err = self.dict:get(idx)
if v ~= nil or err then
break
end
if pok then
log(INFO, "no event data at index '", idx, "', ",
"retrying in: ", sleep_step, "s")
-- sleep is not available in all ngx_lua contexts
-- if we fail once, never retry to sleep
pok, perr = pcall(sleep, sleep_step)
if not pok then
log(WARN, "could not sleep before retry: ", perr,
" (note: it is safer to call this function ",
"in contexts that support the ngx.sleep() ",
"API)")
end
end
elapsed = elapsed + sleep_step
sleep_step = min(sleep_step * POLL_SLEEP_RATIO,
timeout - elapsed)
end
end
-- fetch next event on next iteration
-- even if we timeout, we might miss 1 event (we return in timeout and
-- we don't retry that event), but it's better than being stuck forever
-- on an event that might have been evicted from the shm.
self.idx = idx
if elapsed >= timeout then
return nil, "timeout"
end
if err then
log(ERR, "could not get event at index '", self.idx, "': ", err)
elseif type(v) ~= "string" then
log(ERR, "event at index '", self.idx, "' is not a string, ",
"shm tampered with")
else
local pid, channel, data = unmarshall(v)
if self.pid ~= pid then
-- coming from another worker
local cbs = self.callbacks[channel]
if cbs then
for j = 1, #cbs do
local pok, perr = pcall(cbs[j], data)
if not pok then
log(ERR, "callback for channel '", channel,
"' threw a Lua error: ", perr)
end
end
end
end
end
end
return true
end
return _M
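The `marshall()`/`unmarshall()` pair above encodes each event as
`<pid>:<data_len>:<channel><data>`; the channel boundary is recovered from the
data length, so channel names may safely contain `:`. A standalone round-trip
of that encoding, reproduced here outside of the ngx runtime (the pid and
strings are arbitrary test values):

```lua
-- Standalone round-trip of the "<pid>:<data_len>:<channel><data>" encoding.
local fmt, sub, find = string.format, string.sub, string.find

local function marshall(pid, channel, data)
    return fmt("%d:%d:%s%s", pid, #data, channel, data)
end

local function unmarshall(str)
    local sep_1 = find(str, ":", nil, true)
    local sep_2 = find(str, ":", sep_1 + 1, true)
    local pid = tonumber(sub(str, 1, sep_1 - 1))
    local data_len = tonumber(sub(str, sep_1 + 1, sep_2 - 1))
    -- the channel ends where the trailing data_len bytes of data begin
    local channel_last_pos = #str - data_len
    local channel = sub(str, sep_2 + 1, channel_last_pos)
    local data = sub(str, channel_last_pos + 1)
    return pid, channel, data
end

local event = marshall(1234, "my:channel", "hello world")
local pid, channel, data = unmarshall(event)
-- pid == 1234, channel == "my:channel", data == "hello world"
```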

lua-resty-mlcache-2.6.0-1.rockspec Normal file

@@ -0,0 +1,36 @@
package = "lua-resty-mlcache"
version = "2.6.0-1"
source = {
url = "git://github.com/thibaultcha/lua-resty-mlcache",
tag = "2.6.0"
}
description = {
summary = "Layered caching library for OpenResty",
detailed = [[
This library can be manipulated as a key/value store caching scalar Lua
types and tables, combining the power of the lua_shared_dict API and
lua-resty-lrucache, which results in an extremely performant and flexible
layered caching solution.
Features:
- Caching and negative caching with TTLs.
- Built-in mutex via lua-resty-lock to prevent dog-pile effects to your
database/backend on cache misses.
- Built-in inter-worker communication to propagate cache invalidations and
allow workers to update their L1 (lua-resty-lrucache) caches upon changes
(`set()`, `delete()`).
- Support for split hits and misses caching queues.
- Multiple isolated instances can be created to hold various types of data
while relying on the *same* `lua_shared_dict` L2 cache.
]],
homepage = "https://github.com/thibaultcha/lua-resty-mlcache",
license = "MIT"
}
build = {
type = "builtin",
modules = {
["resty.mlcache.ipc"] = "lib/resty/mlcache/ipc.lua",
["resty.mlcache"] = "lib/resty/mlcache.lua"
}
}

t/00-ipc.t Normal file

@@ -0,0 +1,717 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
workers(1);
plan tests => repeat_each() * (blocks() * 5);
our $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict ipc 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: new() ensures shm exists
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
local ipc, err = mlcache_ipc.new("foo")
ngx.say(err)
}
}
--- request
GET /t
--- response_body
no such lua_shared_dict: foo
--- no_error_log
[error]
=== TEST 2: broadcast() sends an event through shm
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "received event from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "hello world"))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
--- error_log
received event from my_channel: hello world
=== TEST 3: broadcast() runs event callback in protected mode
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
error("my callback had an error")
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "hello world"))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- error_log eval
qr/\[error\] .*? \[ipc\] callback for channel 'my_channel' threw a Lua error: init_worker_by_lua:\d: my callback had an error/
--- no_error_log
lua entry thread aborted: runtime error
=== TEST 4: poll() catches invalid timeout arg
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
}
}
--- config
location = /t {
content_by_lua_block {
local ok, err = pcall(ipc.poll, ipc, false)
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
timeout must be a number
--- no_error_log
[error]
=== TEST 5: poll() catches up with all events
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "received event from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "msg 1"))
assert(ipc:broadcast("my_channel", "msg 2"))
assert(ipc:broadcast("my_channel", "msg 3"))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
--- error_log
received event from my_channel: msg 1
received event from my_channel: msg 2
received event from my_channel: msg 3
=== TEST 6: poll() resumes to current idx if events were previously evicted
This ensures new workers spawned during a master process' lifecycle do not
attempt to replay all events from index 0.
https://github.com/thibaultcha/lua-resty-mlcache/issues/87
https://github.com/thibaultcha/lua-resty-mlcache/issues/93
--- http_config eval
qq{
lua_package_path "$::pwd/lib/?.lua;;";
lua_shared_dict ipc 32k;
init_by_lua_block {
require "resty.core"
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "my_channel event: ", data)
end)
for i = 1, 32 do
-- fill shm, simulating busy workers
-- this must trigger eviction for this test to succeed
assert(ipc:broadcast("my_channel", string.rep(".", 2^10)))
end
}
}
--- config
location = /t {
content_by_lua_block {
ngx.say("ipc.idx: ", ipc.idx)
assert(ipc:broadcast("my_channel", "first broadcast"))
assert(ipc:broadcast("my_channel", "second broadcast"))
-- first poll without new() to simulate new worker
assert(ipc:poll())
-- ipc.idx set to shm_idx-1 ("second broadcast")
ngx.say("ipc.idx: ", ipc.idx)
}
}
--- request
GET /t
--- response_body
ipc.idx: 0
ipc.idx: 34
--- error_log
my_channel event: second broadcast
--- no_error_log
my_channel event: first broadcast
[error]
=== TEST 7: poll() does not execute events from self (same pid)
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc"))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "received event from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "hello world"))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
received event from my_channel: hello world
=== TEST 8: poll() runs all registered callbacks for a channel
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback 1 from my_channel: ", data)
end)
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback 2 from my_channel: ", data)
end)
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback 3 from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "hello world"))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
--- error_log
callback 1 from my_channel: hello world
callback 2 from my_channel: hello world
callback 3 from my_channel: hello world
=== TEST 9: poll() exits when no event to poll
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
callback from my_channel: hello world
=== TEST 10: poll() runs all callbacks from all channels
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback 1 from my_channel: ", data)
end)
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback 2 from my_channel: ", data)
end)
ipc:subscribe("other_channel", function(data)
ngx.log(ngx.NOTICE, "callback 1 from other_channel: ", data)
end)
ipc:subscribe("other_channel", function(data)
ngx.log(ngx.NOTICE, "callback 2 from other_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "hello world"))
assert(ipc:broadcast("other_channel", "hello ipc"))
assert(ipc:broadcast("other_channel", "hello ipc 2"))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
--- error_log
callback 1 from my_channel: hello world
callback 2 from my_channel: hello world
callback 1 from other_channel: hello ipc
callback 2 from other_channel: hello ipc
callback 1 from other_channel: hello ipc 2
callback 2 from other_channel: hello ipc 2
=== TEST 11: poll() catches tampered shm (by third-party users)
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "msg 1"))
assert(ngx.shared.ipc:set("lua-resty-ipc:index", false))
local ok, err = ipc:poll()
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
index is not a number, shm tampered with
--- no_error_log
[error]
=== TEST 12: poll() retries getting an event until timeout
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "msg 1"))
ngx.shared.ipc:delete(1)
ngx.shared.ipc:flush_expired()
local ok, err = ipc:poll()
if not ok then
ngx.log(ngx.ERR, "could not poll: ", err)
end
}
}
--- request
GET /t
--- response_body
--- error_log eval
[
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.001s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.002s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.004s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.008s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.016s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.032s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.064s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.128s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.045s/,
qr/\[error\] .*? could not poll: timeout/,
]
=== TEST 13: poll() reaches custom timeout
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "msg 1"))
ngx.shared.ipc:delete(1)
ngx.shared.ipc:flush_expired()
local ok, err = ipc:poll(0.01)
if not ok then
ngx.log(ngx.ERR, "could not poll: ", err)
end
}
}
--- request
GET /t
--- response_body
--- error_log eval
[
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.001s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.002s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.004s/,
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.003s/,
qr/\[error\] .*? could not poll: timeout/,
]
=== TEST 14: poll() logs errors and continues if event has been tampered with
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("my_channel", "msg 1"))
assert(ipc:broadcast("my_channel", "msg 2"))
assert(ngx.shared.ipc:set(1, false))
assert(ipc:poll())
}
}
--- request
GET /t
--- response_body
--- error_log eval
[
qr/\[error\] .*? \[ipc\] event at index '1' is not a string, shm tampered with/,
qr/\[notice\] .*? callback from my_channel: msg 2/,
]
=== TEST 15: poll() is safe to be called in contexts that don't support ngx.sleep()
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback from my_channel: ", data)
end)
}
}
--- config
location = /t {
return 200;
log_by_lua_block {
assert(ipc:broadcast("my_channel", "msg 1"))
ngx.shared.ipc:delete(1)
ngx.shared.ipc:flush_expired()
local ok, err = ipc:poll()
if not ok then
ngx.log(ngx.ERR, "could not poll: ", err)
end
}
}
--- request
GET /t
--- response_body
--- error_log eval
[
qr/\[info\] .*? \[ipc\] no event data at index '1', retrying in: 0\.001s/,
qr/\[warn\] .*? \[ipc\] could not sleep before retry: API disabled in the context of log_by_lua/,
qr/\[error\] .*? could not poll: timeout/,
]
=== TEST 16: poll() guards self.idx from growing beyond the current shm idx
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
assert(ipc:broadcast("other_channel", ""))
assert(ipc:poll())
            assert(ipc:broadcast("my_channel", "first broadcast"))
assert(ipc:broadcast("other_channel", ""))
assert(ipc:broadcast("my_channel", "second broadcast"))
-- shm idx is 5, let's mess with the instance's idx
ipc.idx = 10
assert(ipc:poll())
-- we may have skipped the above events, but we are able to resume polling
assert(ipc:broadcast("other_channel", ""))
assert(ipc:broadcast("my_channel", "third broadcast"))
assert(ipc:poll())
}
}
--- request
GET /t
--- ignore_response_body
--- error_log
callback from my_channel: third broadcast
--- no_error_log
callback from my_channel: first broadcast
callback from my_channel: second broadcast
[error]
=== TEST 17: poll() JITs
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
for i = 1, 10e3 do
assert(ipc:poll())
end
}
}
--- request
GET /t
--- response_body
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):2 loop\]/
=== TEST 18: broadcast() JITs
--- http_config eval
qq{
$::HttpConfig
init_worker_by_lua_block {
local mlcache_ipc = require "resty.mlcache.ipc"
ipc = assert(mlcache_ipc.new("ipc", true))
ipc:subscribe("my_channel", function(data)
ngx.log(ngx.NOTICE, "callback from my_channel: ", data)
end)
}
}
--- config
location = /t {
content_by_lua_block {
for i = 1, 10e3 do
assert(ipc:broadcast("my_channel", "hello world"))
end
}
}
--- request
GET /t
--- response_body
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):2 loop\]/

605
t/01-new.t Normal file
@ -0,0 +1,605 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
repeat_each(2);
plan tests => repeat_each() * (blocks() * 3) + 4;
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
};
run_tests();
__DATA__
=== TEST 1: module has version number
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
ngx.say(mlcache._VERSION)
}
}
--- request
GET /t
--- response_body_like
\d+\.\d+\.\d+
--- no_error_log
[error]
=== TEST 2: new() validates name
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new)
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
name must be a string
=== TEST 3: new() validates shm
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name")
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
shm must be a string
=== TEST 4: new() validates opts
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", "foo")
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
opts must be a table
=== TEST 5: new() ensures shm exists
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("name", "foo")
if not cache then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
no such lua_shared_dict: foo
=== TEST 6: new() supports ipc_shm option and validates it
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", { ipc_shm = 1 })
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
ipc_shm must be a string
=== TEST 7: new() supports opts.ipc_shm and ensures it exists
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("name", "cache_shm", { ipc_shm = "ipc" })
if not cache then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- ignore_response_body
--- error_log eval
qr/\[error\] .*? no such lua_shared_dict: ipc/
--- no_error_log
[crit]
=== TEST 8: new() supports ipc options and validates it
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", { ipc = false })
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
opts.ipc must be a table
--- no_error_log
[error]
=== TEST 9: new() prevents both opts.ipc_shm and opts.ipc from being given
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
ipc_shm = "ipc",
ipc = {}
})
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
cannot specify both of opts.ipc_shm and opts.ipc
--- no_error_log
[error]
=== TEST 10: new() validates ipc.register_listeners + ipc.broadcast + ipc.poll (type: custom)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local args = {
"register_listeners",
"broadcast",
"poll",
}
for _, arg in ipairs(args) do
local ipc_opts = {
register_listeners = function() end,
broadcast = function() end,
poll = function() end,
}
ipc_opts[arg] = false
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
ipc = ipc_opts,
})
if not ok then
ngx.say(err)
end
end
}
}
--- request
GET /t
--- response_body
opts.ipc.register_listeners must be a function
opts.ipc.broadcast must be a function
opts.ipc.poll must be a function
--- no_error_log
[error]
=== TEST 11: new() ipc.register_listeners can return nil + err (type: custom)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("name", "cache_shm", {
ipc = {
register_listeners = function()
return nil, "something happened"
end,
broadcast = function() end,
poll = function() end,
}
})
if not cache then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body_like
failed to initialize custom IPC \(opts\.ipc\.register_listeners returned an error\): something happened
--- no_error_log
[error]
=== TEST 12: new() calls ipc.register_listeners with events array (type: custom)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("name", "cache_shm", {
ipc = {
register_listeners = function(events)
local res = {}
for ev_name, ev in pairs(events) do
table.insert(res, string.format("%s | channel: %s | handler: %s",
ev_name, ev.channel, type(ev.handler)))
end
table.sort(res)
for i = 1, #res do
ngx.say(res[i])
end
end,
broadcast = function() end,
poll = function() end,
}
})
if not cache then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
invalidation | channel: mlcache:invalidations:name | handler: function
purge | channel: mlcache:purge:name | handler: function
--- no_error_log
[error]
=== TEST 13: new() ipc.poll is optional (some IPC libraries might not need it)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("name", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function() end,
poll = nil
}
})
if not cache then
ngx.say(err)
end
ngx.say("ok")
}
}
--- request
GET /t
--- response_body
ok
--- no_error_log
[error]
=== TEST 14: new() validates opts.lru_size
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
lru_size = "",
})
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
opts.lru_size must be a number
=== TEST 15: new() validates opts.ttl
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
ttl = ""
})
if not ok then
ngx.log(ngx.ERR, err)
end
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
ttl = -1
})
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
opts.ttl must be a number
opts.ttl must be >= 0
=== TEST 16: new() validates opts.neg_ttl
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
neg_ttl = ""
})
if not ok then
ngx.log(ngx.ERR, err)
end
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
neg_ttl = -1
})
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
opts.neg_ttl must be a number
opts.neg_ttl must be >= 0
=== TEST 17: new() validates opts.resty_lock_opts
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
resty_lock_opts = false,
})
if not ok then
ngx.log(ngx.ERR, err)
end
}
}
--- request
GET /t
--- response_body
--- error_log
opts.resty_lock_opts must be a table
=== TEST 18: new() validates opts.shm_set_tries
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local values = {
false,
-1,
0,
}
for _, v in ipairs(values) do
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
shm_set_tries = v,
})
if not ok then
ngx.say(err)
end
end
}
}
--- request
GET /t
--- response_body
opts.shm_set_tries must be a number
opts.shm_set_tries must be >= 1
opts.shm_set_tries must be >= 1
--- no_error_log
[error]
=== TEST 19: new() validates opts.shm_miss
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
shm_miss = false,
})
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
opts.shm_miss must be a string
--- no_error_log
[error]
=== TEST 20: new() ensures opts.shm_miss exists
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = mlcache.new("name", "cache_shm", {
shm_miss = "foo",
})
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
no such lua_shared_dict for opts.shm_miss: foo
--- no_error_log
[error]
=== TEST 21: new() creates an mlcache object with default attributes
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("name", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
end
ngx.say(type(cache))
ngx.say(type(cache.ttl))
ngx.say(type(cache.neg_ttl))
}
}
--- request
GET /t
--- response_body
table
number
number
--- no_error_log
[error]
=== TEST 22: new() accepts user-provided LRU instances via opts.lru
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local pureffi_lrucache = require "resty.lrucache.pureffi"
local my_lru = pureffi_lrucache.new(100)
local cache = assert(mlcache.new("name", "cache_shm", { lru = my_lru }))
ngx.say("lru is user-provided: ", cache.lru == my_lru)
}
}
--- request
GET /t
--- response_body
lru is user-provided: true
--- no_error_log
[error]

2702
t/02-get.t Normal file
File diff suppressed because it is too large
666
t/03-peek.t Normal file
@ -0,0 +1,666 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
workers(2);
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3) + 2;
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict cache_shm_miss 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: peek() validates key
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = pcall(cache.peek, cache)
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
key must be a string
--- no_error_log
[error]
=== TEST 2: peek() returns nil if a key has never been fetched before
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ttl, err = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", ttl)
}
}
--- request
GET /t
--- response_body
ttl: nil
--- no_error_log
[error]
=== TEST 3: peek() returns the remaining ttl if a key has been fetched before
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
local function cb()
return nil
end
local val, err = cache:get("my_key", { neg_ttl = 19 }, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
local ttl, err = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl))
ngx.sleep(1)
local ttl, err = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl))
}
}
--- request
GET /t
--- response_body
ttl: 19
ttl: 18
--- no_error_log
[error]
=== TEST 4: peek() returns a negative ttl when a key expired
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local function cb()
return nil
end
local val, err = cache:get("my_key", { neg_ttl = 0 }, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.sleep(1)
local ttl = assert(cache:peek("my_key"))
ngx.say("ttl: ", math.ceil(ttl))
ngx.sleep(1)
local ttl = assert(cache:peek("my_key"))
ngx.say("ttl: ", math.ceil(ttl))
}
}
--- request
GET /t
--- response_body
ttl: -1
ttl: -2
--- no_error_log
[error]
=== TEST 5: peek() returns remaining ttl if shm_miss is specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
shm_miss = "cache_shm_miss",
}))
local function cb()
return nil
end
local val, err = cache:get("my_key", { neg_ttl = 19 }, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
local ttl, err = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl))
ngx.sleep(1)
local ttl, err = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl))
}
}
--- request
GET /t
--- response_body
ttl: 19
ttl: 18
--- no_error_log
[error]
=== TEST 6: peek() returns the value if a key has been fetched before
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
local function cb_number()
return 123
end
local function cb_nil()
return nil
end
local val, err = cache:get("my_key", nil, cb_number)
if err then
ngx.log(ngx.ERR, err)
return
end
local val, err = cache:get("my_nil_key", nil, cb_nil)
if err then
ngx.log(ngx.ERR, err)
return
end
local ttl, err, val = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl), " val: ", val)
local ttl, err, val = cache:peek("my_nil_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl), " nil_val: ", val)
}
}
--- request
GET /t
--- response_body_like
ttl: \d* val: 123
ttl: \d* nil_val: nil
--- no_error_log
[error]
=== TEST 7: peek() returns the value if shm_miss is specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
shm_miss = "cache_shm_miss",
}))
local function cb_nil()
return nil
end
local val, err = cache:get("my_nil_key", nil, cb_nil)
if err then
ngx.log(ngx.ERR, err)
return
end
local ttl, err, val = cache:peek("my_nil_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", math.ceil(ttl), " nil_val: ", val)
}
}
--- request
GET /t
--- response_body_like
ttl: \d* nil_val: nil
--- no_error_log
[error]
=== TEST 8: peek() JITs on hit
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local function cb()
return 123456
end
local val = assert(cache:get("key", nil, cb))
ngx.say("val: ", val)
for i = 1, 10e3 do
assert(cache:peek("key"))
end
}
}
--- request
GET /t
--- response_body
val: 123456
--- no_error_log
[error]
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):13 loop\]/
=== TEST 9: peek() JITs on miss
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
for i = 1, 10e3 do
local ttl, err, val = cache:peek("key")
assert(err == nil)
assert(ttl == nil)
assert(val == nil)
end
}
}
--- request
GET /t
--- response_body
--- no_error_log
[error]
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):6 loop\]/
=== TEST 10: peek() returns nil if a value expired
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
assert(cache:get("my_key", { ttl = 0.3 }, function()
return 123
end))
ngx.sleep(0.3)
local ttl, err, data, stale = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", ttl)
ngx.say("data: ", data)
ngx.say("stale: ", stale)
}
}
--- request
GET /t
--- response_body
ttl: nil
data: nil
stale: nil
--- no_error_log
[error]
=== TEST 11: peek() returns nil if a value expired in 'shm_miss'
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
shm_miss = "cache_shm_miss"
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("my_key", { neg_ttl = 0.3 }, function()
return nil
end)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.sleep(0.3)
local ttl, err, data, stale = cache:peek("my_key")
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", ttl)
ngx.say("data: ", data)
ngx.say("stale: ", stale)
}
}
--- request
GET /t
--- response_body
ttl: nil
data: nil
stale: nil
--- no_error_log
[error]
=== TEST 12: peek() accepts stale arg and returns stale values
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
assert(cache:get("my_key", { ttl = 0.3 }, function()
return 123
end))
ngx.sleep(0.3)
local ttl, err, data, stale = cache:peek("my_key", true)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", ttl)
ngx.say("data: ", data)
ngx.say("stale: ", stale)
}
}
--- request
GET /t
--- response_body_like chomp
ttl: -0\.\d+
data: 123
stale: true
--- no_error_log
[error]
=== TEST 13: peek() accepts stale arg and returns stale values from 'shm_miss'
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
shm_miss = "cache_shm_miss"
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("my_key", { neg_ttl = 0.3 }, function()
return nil
end)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.sleep(0.3)
local ttl, err, data, stale = cache:peek("my_key", true)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("ttl: ", ttl)
ngx.say("data: ", data)
ngx.say("stale: ", stale)
}
}
--- request
GET /t
--- response_body_like chomp
ttl: -0\.\d+
data: nil
stale: true
--- no_error_log
[error]
=== TEST 14: peek() does not evict stale items from L2 shm
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ttl = 0.3,
}))
local data, err = cache:get("key", nil, function()
return 123
end)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.sleep(0.3)
for i = 1, 3 do
                local remaining_ttl, err, data = cache:peek("key", true)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("remaining_ttl: ", remaining_ttl)
ngx.say("data: ", data)
end
}
}
--- request
GET /t
--- response_body_like chomp
remaining_ttl: -\d\.\d+
data: 123
remaining_ttl: -\d\.\d+
data: 123
remaining_ttl: -\d\.\d+
data: 123
--- no_error_log
[error]
=== TEST 15: peek() does not evict stale negative data from L2 shm_miss
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
neg_ttl = 0.3,
shm_miss = "cache_shm_miss",
}))
local data, err = cache:get("key", nil, function()
return nil
end)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.sleep(0.3)
for i = 1, 3 do
                local remaining_ttl, err, data = cache:peek("key", true)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("remaining_ttl: ", remaining_ttl)
ngx.say("data: ", data)
end
}
}
--- request
GET /t
--- response_body_like chomp
remaining_ttl: -\d\.\d+
data: nil
remaining_ttl: -\d\.\d+
data: nil
remaining_ttl: -\d\.\d+
data: nil
--- no_error_log
[error]

117
t/04-update.t Normal file
@ -0,0 +1,117 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
workers(2);
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3);
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict ipc_shm 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: update() errors if no ipc
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local ok, err = pcall(cache.update, cache, "foo")
ngx.say(err)
}
}
--- request
GET /t
--- response_body
no polling configured, specify opts.ipc_shm or opts.ipc.poll
--- no_error_log
[error]
=== TEST 2: update() calls ipc poll() with timeout arg
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function() end,
poll = function(...)
ngx.say("called poll() with args: ", ...)
return true
end,
}
}))
assert(cache:update(3.5, "not me"))
}
}
--- request
GET /t
--- response_body
called poll() with args: 3.5
--- no_error_log
[error]
=== TEST 3: update() JITs when no events to catch up
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
for i = 1, 10e3 do
assert(cache:update())
end
}
}
--- request
GET /t
--- ignore_response_body
--- no_error_log
[error]
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):8 loop\]/

624
t/05-set.t Normal file
@ -0,0 +1,624 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3) + 2;
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict cache_shm_miss 1m;
lua_shared_dict ipc_shm 1m;
};
run_tests();
__DATA__
=== TEST 1: set() errors if no ipc
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local ok, err = pcall(cache.set, cache, "foo")
ngx.say(err)
}
}
--- request
GET /t
--- response_body
no ipc to propagate update, specify opts.ipc_shm or opts.ipc
--- no_error_log
[error]
=== TEST 2: set() validates key
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
local ok, err = pcall(cache.set, cache)
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
key must be a string
--- no_error_log
[error]
=== TEST 3: set() puts a value directly in shm
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- setting a value in shm
assert(cache:set("my_key", nil, 123))
-- declaring a callback that MUST NOT be called
local function cb()
ngx.log(ngx.ERR, "callback was called but should not have")
end
-- try to get()
local value = assert(cache:get("my_key", nil, cb))
ngx.say("value from get(): ", value)
-- value MUST BE in lru
local value_lru = cache.lru:get("my_key")
ngx.say("cache lru value after get(): ", value_lru)
}
}
--- request
GET /t
--- response_body
value from get(): 123
cache lru value after get(): 123
--- no_error_log
[error]
=== TEST 4: set() puts a negative hit directly in shm_miss if specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
shm_miss = "cache_shm_miss",
}))
-- setting a value in shm
assert(cache:set("my_key", nil, nil))
-- declaring a callback that MUST NOT be called
local function cb()
ngx.log(ngx.ERR, "callback was called but should not have")
end
-- try to get()
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("value from get(): ", value)
}
}
--- request
GET /t
--- response_body
value from get(): nil
--- no_error_log
[error]
=== TEST 5: set() puts a value directly in its own LRU
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- setting a value in shm
assert(cache:set("my_key", nil, 123))
            -- value MUST BE in lru
local value_lru = cache.lru:get("my_key")
ngx.say("cache lru value after set(): ", value_lru)
}
}
--- request
GET /t
--- response_body
cache lru value after set(): 123
--- no_error_log
[error]
=== TEST 6: set() respects 'ttl' for non-nil values
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- setting a non-nil value in shm
assert(cache:set("my_key", {
ttl = 0.2,
neg_ttl = 1,
}, 123))
-- declaring a callback that logs accesses
local function cb()
ngx.say("callback called")
return 123
end
-- try to get() (callback MUST NOT be called)
ngx.say("calling get()")
local value = assert(cache:get("my_key", nil, cb))
ngx.say("value from get(): ", value)
-- wait until expiry
ngx.say("waiting until expiry...")
ngx.sleep(0.3)
ngx.say("waited 0.3s")
-- try to get() (callback MUST be called)
ngx.say("calling get()")
local value = assert(cache:get("my_key", nil, cb))
ngx.say("value from get(): ", value)
}
}
--- request
GET /t
--- response_body
calling get()
value from get(): 123
waiting until expiry...
waited 0.3s
calling get()
callback called
value from get(): 123
--- no_error_log
[error]
=== TEST 7: set() respects 'neg_ttl' for nil values
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- setting a nil value in shm
assert(cache:set("my_key", {
ttl = 1,
neg_ttl = 0.2,
}, nil))
-- declaring a callback that logs accesses
local function cb()
ngx.say("callback called")
return nil
end
-- try to get() (callback MUST NOT be called)
ngx.say("calling get()")
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
end
ngx.say("value from get(): ", value)
-- wait until expiry
ngx.say("waiting until expiry...")
ngx.sleep(0.3)
ngx.say("waited 0.3s")
-- try to get() (callback MUST be called)
ngx.say("calling get()")
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
end
ngx.say("value from get(): ", value)
}
}
--- request
GET /t
--- response_body
calling get()
value from get(): nil
waiting until expiry...
waited 0.3s
calling get()
callback called
value from get(): nil
--- no_error_log
[error]
=== TEST 8: set() respects 'shm_set_tries'
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local dict = ngx.shared.cache_shm
dict:flush_all()
dict:flush_expired()
local mlcache = require "resty.mlcache"
-- fill up shm
local idx = 0
while true do
local ok, err, forcible = dict:set(idx, string.rep("a", 2^2))
if not ok then
ngx.log(ngx.ERR, err)
return
end
if forcible then
break
end
idx = idx + 1
end
-- shm:set() will evict up to 30 items when the shm is full
            -- now, set() a larger value which should trigger LRU
            -- eviction and force the slab allocator to free pages
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
local data, err = cache:set("key", {
shm_set_tries = 5,
}, string.rep("a", 2^12))
if err then
ngx.log(ngx.ERR, err)
return
end
-- from shm
cache.lru:delete("key")
local cb_called
local function cb()
cb_called = true
end
local data, err = cache:get("key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("type of data in shm: ", type(data))
ngx.say("callback was called: ", cb_called ~= nil)
}
}
--- request
GET /t
--- response_body
type of data in shm: string
callback was called: false
--- no_error_log
[warn]
[error]
=== TEST 9: set() with shm_miss can set a nil where a value was
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
shm_miss = "cache_shm_miss",
}))
local function cb()
return 123
end
-- install a non-nil value in the cache
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("initial value from get(): ", value)
-- override that value with a negative hit that
-- must go in the shm_miss (and the shm value must be
-- erased)
assert(cache:set("my_key", nil, nil))
-- and remove it from the LRU
cache.lru:delete("my_key")
-- ok, now we should be getting nil from the cache
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("value from get() after set(): ", value)
}
}
--- request
GET /t
--- response_body
initial value from get(): 123
value from get() after set(): nil
--- no_error_log
[error]
=== TEST 10: set() with shm_miss can set a value where a nil was
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
shm_miss = "cache_shm_miss",
}))
local function cb()
return nil
end
            -- install a negative hit (nil) in the cache
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("initial value from get(): ", value)
            -- override that nil with a positive hit that
            -- must go in the shm (and the shm_miss value must be
            -- erased)
assert(cache:set("my_key", nil, 123))
-- and remove it from the LRU
cache.lru:delete("my_key")
            -- ok, now we should be getting 123 from the cache
local value, err = cache:get("my_key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("value from get() after set(): ", value)
}
}
--- request
GET /t
--- response_body
initial value from get(): nil
value from get() after set(): 123
--- no_error_log
[error]
=== TEST 11: set() returns 'no memory' errors upon fragmentation in the shm
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- fill shm
local idx = 0
while true do
local ok, err, forcible = ngx.shared.cache_shm:set(idx, true)
if not ok then
ngx.log(ngx.ERR, err)
return
end
if forcible then
break
end
idx = idx + 1
end
-- set large value
local ok, err = cache:set("my_key", { shm_set_tries = 1 }, string.rep("a", 2^10))
ngx.say(ok)
ngx.say(err)
}
}
--- request
GET /t
--- response_body
nil
could not write to lua_shared_dict 'cache_shm': no memory
--- no_error_log
[error]
[warn]
=== TEST 12: set() does not set LRU upon shm insertion error
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- fill shm
local idx = 0
while true do
local ok, err, forcible = ngx.shared.cache_shm:set(idx, true)
if not ok then
ngx.log(ngx.ERR, err)
return
end
if forcible then
break
end
idx = idx + 1
end
-- set large value
local ok = cache:set("my_key", { shm_set_tries = 1 }, string.rep("a", 2^10))
assert(ok == nil)
local data = cache.lru:get("my_key")
ngx.say(data)
}
}
--- request
GET /t
--- response_body
nil
--- no_error_log
[error]
=== TEST 13: set() calls broadcast() with invalidated key
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel, data, ...)
ngx.say("channel: ", channel)
ngx.say("data: ", data)
ngx.say("other args:", ...)
return true
end,
poll = function() end,
}
}))
assert(cache:set("my_key", nil, nil))
}
}
--- request
GET /t
--- response_body
channel: mlcache:invalidations:my_mlcache
data: my_key
other args:
--- no_error_log
[error]

252 t/06-delete.t Normal file
@@ -0,0 +1,252 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
workers(2);
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3);
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict cache_shm_miss 1m;
lua_shared_dict ipc_shm 1m;
};
run_tests();
__DATA__
=== TEST 1: delete() errors if no ipc
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local ok, err = pcall(cache.delete, cache, "foo")
ngx.say(err)
}
}
--- request
GET /t
--- response_body
no ipc to propagate deletion, specify opts.ipc_shm or opts.ipc
--- no_error_log
[error]
=== TEST 2: delete() validates key
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
local ok, err = pcall(cache.delete, cache, 123)
ngx.say(err)
}
}
--- request
GET /t
--- response_body
key must be a string
--- no_error_log
[error]
=== TEST 3: delete() removes a cached value from LRU + shm
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
local value = 123
local function cb()
ngx.say("in callback")
return value
end
-- set a value (callback call)
local data = assert(cache:get("key", nil, cb))
ngx.say("from callback: ", data)
-- get a value (no callback call)
data = assert(cache:get("key", nil, cb))
ngx.say("from LRU: ", data)
            -- check the shm directly (the key is prefixed with the cache name)
local v = ngx.shared.cache_shm:get(cache.name .. "key")
ngx.say("shm has value before delete: ", v ~= nil)
-- delete the value
assert(cache:delete("key"))
local v = ngx.shared.cache_shm:get(cache.name .. "key")
ngx.say("shm has value after delete: ", v ~= nil)
-- ensure LRU was also deleted
v = cache.lru:get("key")
ngx.say("from LRU: ", v)
-- start over from callback again
value = 456
data = assert(cache:get("key", nil, cb))
ngx.say("from callback: ", data)
}
}
--- request
GET /t
--- response_body
in callback
from callback: 123
from LRU: 123
shm has value before delete: true
shm has value after delete: false
from LRU: nil
in callback
from callback: 456
--- no_error_log
[error]
=== TEST 4: delete() removes a cached nil from shm_miss if specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
shm_miss = "cache_shm_miss",
}))
local value = nil
local function cb()
ngx.say("in callback")
return value
end
-- set a value (callback call)
local data, err = cache:get("key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("from callback: ", data)
-- get a value (no callback call)
data, err = cache:get("key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("from LRU: ", data)
            -- check the shm directly (the key is prefixed with the cache name)
local v = ngx.shared.cache_shm_miss:get(cache.name .. "key")
ngx.say("shm_miss has value before delete: ", v ~= nil)
-- delete the value
assert(cache:delete("key"))
local v = ngx.shared.cache_shm_miss:get(cache.name .. "key")
ngx.say("shm_miss has value after delete: ", v ~= nil)
-- ensure LRU was also deleted
v = cache.lru:get("key")
ngx.say("from LRU: ", v)
-- start over from callback again
value = 456
data, err = cache:get("key", nil, cb)
if err then
ngx.log(ngx.ERR, err)
return
end
ngx.say("from callback again: ", data)
}
}
--- request
GET /t
--- response_body
in callback
from callback: nil
from LRU: nil
shm_miss has value before delete: true
shm_miss has value after delete: false
from LRU: nil
in callback
from callback again: 456
--- no_error_log
[error]
=== TEST 5: delete() calls broadcast with invalidated key
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel, data, ...)
ngx.say("channel: ", channel)
ngx.say("data: ", data)
ngx.say("other args:", ...)
return true
end,
poll = function() end,
}
}))
assert(cache:delete("my_key"))
}
}
--- request
GET /t
--- response_body
channel: mlcache:invalidations:my_mlcache
data: my_key
other args:
--- no_error_log
[error]

741 t/07-l1_serializer.t Normal file
@@ -0,0 +1,741 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
workers(2);
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3) + 1;
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict ipc_shm 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: l1_serializer is validated by the constructor
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "my_mlcache", "cache_shm", {
l1_serializer = false,
})
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
opts.l1_serializer must be a function
--- no_error_log
[error]
=== TEST 2: l1_serializer is called on L1+L2 cache misses
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
return string.format("transform(%q)", s)
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("key", nil, function() return "foo" end)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
}
}
--- request
GET /t
--- response_body
transform("foo")
--- no_error_log
[error]
=== TEST 3: get() JITs when hit of scalar value coming from shm with l1_serializer
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(i)
return i + 2
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local function cb_number()
return 123456
end
for i = 1, 10e2 do
local data = assert(cache:get("number", nil, cb_number))
assert(data == 123458)
cache.lru:delete("number")
end
}
}
--- request
GET /t
--- response_body
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):18 loop\]/
--- no_error_log
[error]
=== TEST 4: l1_serializer is not called on L1 hits
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local calls = 0
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
calls = calls + 1
return string.format("transform(%q)", s)
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
for i = 1, 3 do
local data, err = cache:get("key", nil, function() return "foo" end)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
end
ngx.say("calls: ", calls)
}
}
--- request
GET /t
--- response_body
transform("foo")
transform("foo")
transform("foo")
calls: 1
--- no_error_log
[error]
=== TEST 5: l1_serializer is called on each L2 hit
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local calls = 0
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
calls = calls + 1
return string.format("transform(%q)", s)
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
for i = 1, 3 do
local data, err = cache:get("key", nil, function() return "foo" end)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
cache.lru:delete("key")
end
ngx.say("calls: ", calls)
}
}
--- request
GET /t
--- response_body
transform("foo")
transform("foo")
transform("foo")
calls: 3
--- no_error_log
[error]
=== TEST 6: l1_serializer is called on boolean false hits
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
return string.format("transform_boolean(%q)", s)
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local function cb()
return false
end
local data, err = cache:get("key", nil, cb)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
}
}
--- request
GET /t
--- response_body
transform_boolean("false")
--- no_error_log
[error]
=== TEST 7: l1_serializer is called in protected mode (L2 miss)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
error("cannot transform")
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("key", nil, function() return "foo" end)
if not data then
ngx.say(err)
end
ngx.say(data)
}
}
--- request
GET /t
--- response_body_like
l1_serializer threw an error: .*?: cannot transform
--- no_error_log
[error]
=== TEST 8: l1_serializer is called in protected mode (L2 hit)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local called = false
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
if called then error("cannot transform") end
called = true
return string.format("transform(%q)", s)
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
assert(cache:get("key", nil, function() return "foo" end))
cache.lru:delete("key")
local data, err = cache:get("key", nil, function() return "foo" end)
if not data then
ngx.say(err)
end
ngx.say(data)
}
}
--- request
GET /t
--- response_body_like
l1_serializer threw an error: .*?: cannot transform
--- no_error_log
[error]
=== TEST 9: l1_serializer is not called for L2+L3 misses (no record)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local called = false
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
called = true
return string.format("transform(%s)", s)
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("key", nil, function() return nil end)
if data ~= nil then
ngx.log(ngx.ERR, "got a value for a L3 miss: ", tostring(data))
return
elseif err ~= nil then
ngx.log(ngx.ERR, "got an error for a L3 miss: ", tostring(err))
return
end
-- our L3 returned nil, we do not call the l1_serializer and
-- we store the LRU nil sentinel value
ngx.say("l1_serializer called for L3 miss: ", called)
-- delete from LRU, and try from L2 again
cache.lru:delete("key")
local data, err = cache:get("key", nil, function() error("not supposed to call") end)
if data ~= nil then
ngx.log(ngx.ERR, "got a value for a L3 miss: ", tostring(data))
return
elseif err ~= nil then
ngx.log(ngx.ERR, "got an error for a L3 miss: ", tostring(err))
return
end
ngx.say("l1_serializer called for L2 negative hit: ", called)
}
}
--- request
GET /t
--- response_body
l1_serializer called for L3 miss: false
l1_serializer called for L2 negative hit: false
--- no_error_log
[error]
=== TEST 10: l1_serializer is not supposed to return a nil value
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
return nil
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = cache:get("key", nil, function() return "foo" end)
assert(not ok, "get() should not return successfully")
ngx.say(err)
}
}
--- request
GET /t
--- response_body_like
l1_serializer returned a nil value
--- no_error_log
[error]
=== TEST 11: l1_serializer can return nil + error
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
return nil, "l1_serializer: cannot transform"
end,
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("key", nil, function() return "foo" end)
if not data then
ngx.say(err)
end
ngx.say("data: ", data)
}
}
--- request
GET /t
--- response_body
l1_serializer: cannot transform
data: nil
--- no_error_log
[error]
=== TEST 12: l1_serializer can be given as a get() argument
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("key", {
l1_serializer = function(s)
return string.format("transform(%q)", s)
end
}, function() return "foo" end)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
}
}
--- request
GET /t
--- response_body
transform("foo")
--- no_error_log
[error]
=== TEST 13: l1_serializer as get() argument has precedence over the constructor one
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
l1_serializer = function(s)
return string.format("constructor(%q)", s)
end
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local data, err = cache:get("key1", {
l1_serializer = function(s)
return string.format("get_argument(%q)", s)
end
}, function() return "foo" end)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
local data, err = cache:get("key2", nil, function() return "bar" end)
if not data then
ngx.log(ngx.ERR, err)
return
end
ngx.say(data)
}
}
--- request
GET /t
--- response_body
get_argument("foo")
constructor("bar")
--- no_error_log
[error]
=== TEST 14: get() validates l1_serializer is a function
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm")
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = pcall(cache.get, cache, "key", {
l1_serializer = false,
}, function() return "foo" end)
            if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
opts.l1_serializer must be a function
--- no_error_log
[error]
=== TEST 15: set() calls l1_serializer
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
l1_serializer = function(s)
return string.format("transform(%q)", s)
end
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = cache:set("key", nil, "value")
if not ok then
ngx.log(ngx.ERR, err)
return
end
local value, err = cache:get("key", nil, error)
if not value then
ngx.log(ngx.ERR, err)
return
end
ngx.say(value)
}
}
--- request
GET /t
--- response_body
transform("value")
--- no_error_log
[error]
=== TEST 16: set() calls l1_serializer for boolean false values
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
l1_serializer = function(s)
return string.format("transform_boolean(%q)", s)
end
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = cache:set("key", nil, false)
if not ok then
ngx.log(ngx.ERR, err)
return
end
local value, err = cache:get("key", nil, error)
if not value then
ngx.log(ngx.ERR, err)
return
end
ngx.say(value)
}
}
--- request
GET /t
--- response_body
transform_boolean("false")
--- no_error_log
[error]
=== TEST 17: l1_serializer as set() argument has precedence over the constructor one
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
l1_serializer = function(s)
return string.format("constructor(%q)", s)
end
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = cache:set("key", {
l1_serializer = function(s)
return string.format("set_argument(%q)", s)
end
}, "value")
if not ok then
ngx.log(ngx.ERR, err)
return
end
local value, err = cache:get("key", nil, error)
if not value then
ngx.log(ngx.ERR, err)
return
end
ngx.say(value)
}
}
--- request
GET /t
--- response_body
set_argument("value")
--- no_error_log
[error]
=== TEST 18: set() validates l1_serializer is a function
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache, err = mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
})
if not cache then
ngx.log(ngx.ERR, err)
return
end
local ok, err = pcall(cache.set, cache, "key", {
l1_serializer = true
}, "value")
            if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
opts.l1_serializer must be a function
--- no_error_log
[error]

402 t/08-purge.t Normal file
@@ -0,0 +1,402 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
use lib '.';
use t::Util;
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3);
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict cache_shm_miss 1m;
lua_shared_dict ipc_shm 1m;
};
run_tests();
__DATA__
=== TEST 1: purge() errors if no ipc
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local ok, err = pcall(cache.purge, cache)
ngx.say(err)
}
}
--- request
GET /t
--- response_body
no ipc to propagate purge, specify opts.ipc_shm or opts.ipc
--- no_error_log
[error]
=== TEST 2: purge() deletes all items from L1 + L2 (sanity 1/2)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- populate mlcache
for i = 1, 100 do
assert(cache:get(tostring(i), nil, function() return i end))
end
-- purge
assert(cache:purge())
for i = 1, 100 do
local value, err = cache:get(tostring(i), nil, function() return nil end)
if err then
ngx.log(ngx.ERR, err)
return
end
if value ~= nil then
ngx.say("key ", i, " had: ", value)
end
end
ngx.say("ok")
}
}
--- request
GET /t
--- response_body
ok
--- no_error_log
[error]
=== TEST 3: purge() deletes all items from L1 (sanity 2/2)
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- populate mlcache
for i = 1, 100 do
assert(cache:get(tostring(i), nil, function() return i end))
end
-- purge
assert(cache:purge())
for i = 1, 100 do
local value = cache.lru:get(tostring(i))
if value ~= nil then
ngx.say("key ", i, " had: ", value)
end
end
ngx.say("ok")
}
}
--- request
GET /t
--- response_body
ok
--- no_error_log
[error]
=== TEST 4: purge() deletes all items from L1 with a custom LRU
--- skip_eval: 3: t::Util::skip_openresty('<', '1.13.6.2')
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local lrucache = require "resty.lrucache"
local lru = lrucache.new(100)
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
lru = lru,
}))
-- populate mlcache
for i = 1, 100 do
assert(cache:get(tostring(i), nil, function() return i end))
end
-- purge
assert(cache:purge())
for i = 1, 100 do
local value = cache.lru:get(tostring(i))
if value ~= nil then
ngx.say("key ", i, " had: ", value)
end
end
ngx.say("ok")
ngx.say("lru instance is the same one: ", lru == cache.lru)
}
}
--- request
GET /t
--- response_body
ok
lru instance is the same one: true
--- no_error_log
[error]
=== TEST 5: purge() is prevented if custom LRU does not support flush_all()
--- skip_eval: 3: t::Util::skip_openresty('>', '1.13.6.1')
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local lrucache = require "resty.lrucache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
lru = lrucache.new(10),
}))
local pok, perr = pcall(cache.purge, cache)
if not pok then
ngx.say(perr)
return
end
ngx.say("ok")
}
}
--- request
GET /t
--- response_body
cannot purge when using custom LRU cache with OpenResty < 1.13.6.2
--- no_error_log
[error]
=== TEST 6: purge() deletes all items from shm_miss if specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
shm_miss = "cache_shm_miss",
}))
-- populate mlcache
for i = 1, 100 do
local _, err = cache:get(tostring(i), nil, function() return nil end)
if err then
ngx.log(ngx.ERR, err)
return
end
end
-- purge
assert(cache:purge())
for i = 1, 100 do
local value, err = cache:get(tostring(i), nil, function() return i end)
if value ~= i then
ngx.say("key ", i, " had: ", value)
end
end
ngx.say("ok")
}
}
--- request
GET /t
--- response_body
ok
--- no_error_log
[error]
=== TEST 7: purge() does not call shm:flush_expired() by default
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
do
local cache_shm = ngx.shared.cache_shm
local mt = getmetatable(cache_shm)
local orig_cache_shm_flush_expired = mt.flush_expired
mt.flush_expired = function(self, ...)
ngx.say("flush_expired called with 'max_count'")
return orig_cache_shm_flush_expired(self, ...)
end
end
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
assert(cache:purge())
}
}
--- request
GET /t
--- response_body_unlike
flush_expired called with 'max_count'
--- no_error_log
[error]
=== TEST 8: purge() calls shm:flush_expired() if argument specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
do
local cache_shm = ngx.shared.cache_shm
local mt = getmetatable(cache_shm)
local orig_cache_shm_flush_expired = mt.flush_expired
mt.flush_expired = function(self, ...)
local arg = { ... }
local n = arg[1]
ngx.say("flush_expired called with 'max_count': ", n)
return orig_cache_shm_flush_expired(self, ...)
end
end
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
}))
assert(cache:purge(true))
}
}
--- request
GET /t
--- response_body
flush_expired called with 'max_count': nil
--- no_error_log
[error]
=== TEST 9: purge() calls shm:flush_expired() if shm_miss is specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
do
local cache_shm = ngx.shared.cache_shm
local mt = getmetatable(cache_shm)
local orig_cache_shm_flush_expired = mt.flush_expired
mt.flush_expired = function(self, ...)
local arg = { ... }
local n = arg[1]
ngx.say("flush_expired called with 'max_count': ", n)
return orig_cache_shm_flush_expired(self, ...)
end
end
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
shm_miss = "cache_shm_miss",
}))
assert(cache:purge(true))
}
}
--- request
GET /t
--- response_body
flush_expired called with 'max_count': nil
flush_expired called with 'max_count': nil
--- no_error_log
[error]
=== TEST 10: purge() calls broadcast() on purge channel
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel, data, ...)
ngx.say("channel: ", channel)
ngx.say("data:", data)
ngx.say("other args:", ...)
return true
end,
poll = function() end,
}
}))
assert(cache:purge())
}
}
--- request
GET /t
--- response_body
channel: mlcache:purge:my_mlcache
data:
other args:
--- no_error_log
[error]

375 t/09-isolation.t Normal file
@@ -0,0 +1,375 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
repeat_each(2);
plan tests => repeat_each() * (blocks() * 3);
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict ipc_shm 1m;
};
run_tests();
__DATA__
=== TEST 1: multiple instances with the same name have same lua-resty-lru instance
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache_1 = assert(mlcache.new("my_mlcache", "cache_shm"))
local cache_2 = assert(mlcache.new("my_mlcache", "cache_shm"))
ngx.say("lua-resty-lru instances are the same: ",
cache_1.lru == cache_2.lru)
}
}
--- request
GET /t
--- response_body
lua-resty-lru instances are the same: true
--- no_error_log
[error]
=== TEST 2: multiple instances with different names have different lua-resty-lru instances
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache_1 = assert(mlcache.new("my_mlcache_1", "cache_shm"))
local cache_2 = assert(mlcache.new("my_mlcache_2", "cache_shm"))
ngx.say("lua-resty-lru instances are the same: ",
cache_1.lru == cache_2.lru)
}
}
--- request
GET /t
--- response_body
lua-resty-lru instances are the same: false
--- no_error_log
[error]
=== TEST 3: garbage-collected instances also GC their lru instance
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
collectgarbage("collect")
local cache_1 = assert(mlcache.new("my_mlcache", "cache_shm"))
local cache_2 = assert(mlcache.new("my_mlcache", "cache_shm"))
-- cache something in cache_1's LRU
cache_1.lru:set("key", 123)
-- GC cache_1 (the LRU should survive because it is shared with cache_2)
cache_1 = nil
collectgarbage("collect")
-- prove LRU survived
ngx.say((cache_2.lru:get("key")))
-- GC cache_2 (and the LRU this time, since no more references)
cache_2 = nil
collectgarbage("collect")
-- re-create the caches and a new LRU
cache_1 = assert(mlcache.new("my_mlcache", "cache_shm"))
cache_2 = assert(mlcache.new("my_mlcache", "cache_shm"))
-- this is a new LRU, it has nothing in it
ngx.say((cache_2.lru:get("key")))
}
}
--- request
GET /t
--- response_body
123
nil
--- no_error_log
[error]
=== TEST 4: multiple instances with different names get() of the same key are isolated
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
-- create 2 mlcache
local cache_1 = assert(mlcache.new("my_mlcache_1", "cache_shm"))
local cache_2 = assert(mlcache.new("my_mlcache_2", "cache_shm"))
-- set a value in both mlcaches
local data_1 = assert(cache_1:get("my_key", nil, function() return "value A" end))
local data_2 = assert(cache_2:get("my_key", nil, function() return "value B" end))
-- get values from LRU
local lru_1_value = cache_1.lru:get("my_key")
local lru_2_value = cache_2.lru:get("my_key")
ngx.say("cache_1 lru has: ", lru_1_value)
ngx.say("cache_2 lru has: ", lru_2_value)
-- delete values from LRU
cache_1.lru:delete("my_key")
cache_2.lru:delete("my_key")
-- get values from shm
local shm_1_value = assert(cache_1:get("my_key", nil, function() end))
local shm_2_value = assert(cache_2:get("my_key", nil, function() end))
ngx.say("cache_1 shm has: ", shm_1_value)
ngx.say("cache_2 shm has: ", shm_2_value)
}
}
--- request
GET /t
--- response_body
cache_1 lru has: value A
cache_2 lru has: value B
cache_1 shm has: value A
cache_2 shm has: value B
--- no_error_log
[error]
=== TEST 5: multiple instances with different names delete() of the same key are isolated
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
-- create 2 mlcaches
local cache_1 = assert(mlcache.new("my_mlcache_1", "cache_shm", {
ipc_shm = "ipc_shm",
}))
local cache_2 = assert(mlcache.new("my_mlcache_2", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- set a value in each mlcache
local data_1 = assert(cache_1:get("my_key", nil, function() return "value A" end))
local data_2 = assert(cache_2:get("my_key", nil, function() return "value B" end))
-- check the shm directly: the stored key is namespaced with the cache name
local shm_v = ngx.shared.cache_shm:get(cache_1.name .. "my_key")
ngx.say("cache_1 shm has a value: ", shm_v ~= nil)
-- delete value from mlcache 1
ngx.say("delete from cache_1")
assert(cache_1:delete("my_key"))
-- ensure cache 1 key is deleted from LRU
local lru_v = cache_1.lru:get("my_key")
ngx.say("cache_1 lru has: ", lru_v)
-- ensure cache 1 key is deleted from shm
local shm_v = ngx.shared.cache_shm:get(cache_1.name .. "my_key")
ngx.say("cache_1 shm has: ", shm_v)
-- ensure cache 2 still has its value
local shm_v_2 = ngx.shared.cache_shm:get(cache_2.name .. "my_key")
ngx.say("cache_2 shm has a value: ", shm_v_2 ~= nil)
local lru_v_2 = cache_2.lru:get("my_key")
ngx.say("cache_2 lru has: ", lru_v_2)
}
}
--- request
GET /t
--- response_body
cache_1 shm has a value: true
delete from cache_1
cache_1 lru has: nil
cache_1 shm has: nil
cache_2 shm has a value: true
cache_2 lru has: value B
--- no_error_log
[error]
=== TEST 6: multiple instances with different names peek() of the same key are isolated
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
-- must reset the shm so that when repeated, this test doesn't
-- return unpredictable TTLs (0.9xxxs)
ngx.shared.cache_shm:flush_all()
ngx.shared.cache_shm:flush_expired()
local mlcache = require "resty.mlcache"
-- create 2 mlcaches
local cache_1 = assert(mlcache.new("my_mlcache_1", "cache_shm", {
ipc_shm = "ipc_shm",
}))
local cache_2 = assert(mlcache.new("my_mlcache_2", "cache_shm", {
ipc_shm = "ipc_shm",
}))
-- reset LRUs so repeated tests allow the below get() to set the
-- value in the shm
cache_1.lru:delete("my_key")
cache_2.lru:delete("my_key")
-- set a value in both mlcaches
local data_1 = assert(cache_1:get("my_key", { ttl = 1 }, function() return "value A" end))
local data_2 = assert(cache_2:get("my_key", { ttl = 2 }, function() return "value B" end))
-- peek cache 1
local ttl, err, val = assert(cache_1:peek("my_key"))
ngx.say("cache_1 ttl: ", ttl)
ngx.say("cache_1 value: ", val)
-- peek cache 2
local ttl, err, val = assert(cache_2:peek("my_key"))
ngx.say("cache_2 ttl: ", ttl)
ngx.say("cache_2 value: ", val)
}
}
--- request
GET /t
--- response_body
cache_1 ttl: 1
cache_1 value: value A
cache_2 ttl: 2
cache_2 value: value B
--- no_error_log
[error]
=== TEST 7: non-namespaced instances use different delete() broadcast channel
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
-- create 2 mlcaches
local cache_1 = assert(mlcache.new("my_mlcache_1", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel)
ngx.say("cache_1 channel: ", channel)
return true
end,
poll = function() end,
}
}))
local cache_2 = assert(mlcache.new("my_mlcache_2", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel)
ngx.say("cache_2 channel: ", channel)
return true
end,
poll = function() end,
}
}))
assert(cache_1:delete("my_key"))
assert(cache_2:delete("my_key"))
}
}
--- request
GET /t
--- response_body
cache_1 channel: mlcache:invalidations:my_mlcache_1
cache_2 channel: mlcache:invalidations:my_mlcache_2
--- no_error_log
[error]
=== TEST 8: non-namespaced instances use different purge() broadcast channel
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
-- create 2 mlcaches
local cache_1 = assert(mlcache.new("my_mlcache_1", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel)
ngx.say("cache_1 channel: ", channel)
return true
end,
poll = function() end,
}
}))
local cache_2 = assert(mlcache.new("my_mlcache_2", "cache_shm", {
ipc = {
register_listeners = function() end,
broadcast = function(channel)
ngx.say("cache_2 channel: ", channel)
return true
end,
poll = function() end,
}
}))
assert(cache_1:purge())
assert(cache_2:purge())
}
}
--- request
GET /t
--- response_body
cache_1 channel: mlcache:purge:my_mlcache_1
cache_2 channel: mlcache:purge:my_mlcache_2
--- no_error_log
[error]
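The isolation verified by the tests above comes down to key namespacing at the shm level: entries are stored under `cache.name .. key` (as checked via `ngx.shared.cache_shm:get(cache_1.name .. "my_key")`). A minimal model of that property in Python — purely illustrative, not mlcache's own Lua code:

```python
# Hypothetical model of mlcache's shm-level isolation: one shared
# dict, keys prefixed by the instance name. Same-named instances
# share entries; differently-named ones are isolated.

shm = {}  # stands in for the single lua_shared_dict "cache_shm"

def shm_set(name: str, key: str, value: str) -> None:
    # mlcache stores under name .. key, concatenated with no separator
    shm[name + key] = value

def shm_get(name: str, key: str):
    return shm.get(name + key)

shm_set("my_mlcache_1", "my_key", "value A")
shm_set("my_mlcache_2", "my_key", "value B")
assert shm_get("my_mlcache_1", "my_key") == "value A"
assert shm_get("my_mlcache_2", "my_key") == "value B"
assert shm_get("other_name", "my_key") is None
```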

319 t/10-ipc_shm.t Normal file
@@ -0,0 +1,319 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
use lib '.';
use t::Util;
workers(2);
#repeat_each(2);
plan tests => repeat_each() * (blocks() * 3) + 2;
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict ipc_shm 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: update() with ipc_shm catches up with invalidation events
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}))
cache.ipc:subscribe(cache.events.invalidation.channel, function(data)
ngx.log(ngx.NOTICE, "received event from invalidations: ", data)
end)
assert(cache:delete("my_key"))
assert(cache:update())
}
}
--- request
GET /t
--- ignore_response_body
--- no_error_log
[error]
--- error_log
received event from invalidations: my_key
=== TEST 2: update() with ipc_shm timeouts when waiting for too long
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}))
cache.ipc:subscribe(cache.events.invalidation.channel, function(data)
ngx.log(ngx.NOTICE, "received event from invalidations: ", data)
end)
assert(cache:delete("my_key"))
assert(cache:delete("my_other_key"))
ngx.shared.ipc_shm:delete(2)
local ok, err = cache:update(0.1)
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
could not poll ipc events: timeout
--- no_error_log
[error]
received event from invalidations: my_other
--- error_log
received event from invalidations: my_key
=== TEST 3: update() with ipc_shm JITs when no events to catch up
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm", {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}))
for i = 1, 10e3 do
assert(cache:update())
end
}
}
--- request
GET /t
--- ignore_response_body
--- no_error_log
[error]
--- error_log eval
qr/\[TRACE\s+\d+ content_by_lua\(nginx\.conf:\d+\):7 loop\]/
=== TEST 4: set() with ipc_shm invalidates other workers' LRU cache
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local opts = {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}
local cache = assert(mlcache.new("namespace", "cache_shm", opts))
local cache_clone = assert(mlcache.new("namespace", "cache_shm", opts))
do
local lru_delete = cache.lru.delete
cache.lru.delete = function(self, key)
ngx.say("called lru:delete() with key: ", key)
return lru_delete(self, key)
end
end
assert(cache:set("my_key", nil, nil))
ngx.say("calling update on cache")
assert(cache:update())
ngx.say("calling update on cache_clone")
assert(cache_clone:update())
}
}
--- request
GET /t
--- response_body
calling update on cache
called lru:delete() with key: my_key
calling update on cache_clone
called lru:delete() with key: my_key
--- no_error_log
[error]
=== TEST 5: delete() with ipc_shm invalidates other workers' LRU cache
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local opts = {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}
local cache = assert(mlcache.new("namespace", "cache_shm", opts))
local cache_clone = assert(mlcache.new("namespace", "cache_shm", opts))
do
local lru_delete = cache.lru.delete
cache.lru.delete = function(self, key)
ngx.say("called lru:delete() with key: ", key)
return lru_delete(self, key)
end
end
assert(cache:delete("my_key"))
ngx.say("calling update on cache")
assert(cache:update())
ngx.say("calling update on cache_clone")
assert(cache_clone:update())
}
}
--- request
GET /t
--- response_body
called lru:delete() with key: my_key
calling update on cache
called lru:delete() with key: my_key
calling update on cache_clone
called lru:delete() with key: my_key
--- no_error_log
[error]
=== TEST 6: purge() with ipc_shm invalidates other workers' LRU cache (OpenResty < 1.13.6.2)
--- skip_eval: 3: t::Util::skip_openresty('>=', '1.13.6.2')
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local opts = {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}
local cache = assert(mlcache.new("namespace", "cache_shm", opts))
local cache_clone = assert(mlcache.new("namespace", "cache_shm", opts))
local lru = cache.lru
local lru_clone = cache_clone.lru
assert(cache:purge())
-- cache.lru should be different now
ngx.say("cache has new lru: ", cache.lru ~= lru)
ngx.say("cache_clone still has same lru: ", cache_clone.lru == lru_clone)
ngx.say("calling update on cache_clone")
assert(cache_clone:update())
-- cache_clone.lru should be different now
ngx.say("cache_clone has new lru: ", cache_clone.lru ~= lru_clone)
}
}
--- request
GET /t
--- response_body
cache has new lru: true
cache_clone still has same lru: true
calling update on cache_clone
cache_clone has new lru: true
--- no_error_log
[error]
=== TEST 7: purge() with ipc_shm invalidates other workers' LRU cache (OpenResty >= 1.13.6.2)
--- skip_eval: 3: t::Util::skip_openresty('<', '1.13.6.2')
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local opts = {
ipc_shm = "ipc_shm",
debug = true -- allows same worker to receive its own published events
}
local cache = assert(mlcache.new("namespace", "cache_shm", opts))
local cache_clone = assert(mlcache.new("namespace", "cache_shm", opts))
local lru = cache.lru
ngx.say("both instances use the same lru: ", cache.lru == cache_clone.lru)
do
local lru_flush_all = lru.flush_all
cache.lru.flush_all = function(self)
ngx.say("called lru:flush_all()")
return lru_flush_all(self)
end
end
assert(cache:purge())
ngx.say("calling update on cache_clone")
assert(cache_clone:update())
ngx.say("both instances use the same lru: ", cache.lru == cache_clone.lru)
ngx.say("lru didn't change after purge: ", cache.lru == lru)
}
}
--- request
GET /t
--- response_body
both instances use the same lru: true
called lru:flush_all()
calling update on cache_clone
called lru:flush_all()
both instances use the same lru: true
lru didn't change after purge: true
--- no_error_log
[error]

115 t/11-locks_shm.t Normal file
@@ -0,0 +1,115 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
plan tests => repeat_each() * (blocks() * 3);
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
lua_shared_dict locks_shm 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: new() validates opts.shm_locks
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = pcall(mlcache.new, "name", "cache_shm", {
shm_locks = false,
})
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
opts.shm_locks must be a string
--- no_error_log
[error]
=== TEST 2: new() ensures opts.shm_locks exists
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local ok, err = mlcache.new("name", "cache_shm", {
shm_locks = "foo",
})
if not ok then
ngx.say(err)
end
}
}
--- request
GET /t
--- response_body
no such lua_shared_dict for opts.shm_locks: foo
--- no_error_log
[error]
=== TEST 3: get() stores resty-locks in opts.shm_locks if specified
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("name", "cache_shm", {
shm_locks = "locks_shm",
}))
local function cb()
local keys = ngx.shared.locks_shm:get_keys()
for i, key in ipairs(keys) do
ngx.say(i, ": ", key)
end
return 123
end
cache:get("key", nil, cb)
}
}
--- request
GET /t
--- response_body
1: lua-resty-mlcache:lock:namekey
--- no_error_log
[error]

1047 t/12-resurrect-stale.t Normal file
File diff suppressed because it is too large

1735 t/13-get_bulk.t Normal file
File diff suppressed because it is too large

227 t/14-bulk-and-res.t Normal file
@@ -0,0 +1,227 @@
# vim:set ts=4 sts=4 sw=4 et ft=:
use Test::Nginx::Socket::Lua;
use Cwd qw(cwd);
use lib '.';
use t::Util;
no_long_string();
workers(2);
#repeat_each(2);
plan tests => repeat_each() * blocks() * 3;
my $pwd = cwd();
our $HttpConfig = qq{
lua_package_path "$pwd/lib/?.lua;;";
lua_shared_dict cache_shm 1m;
#lua_shared_dict cache_shm_miss 1m;
init_by_lua_block {
-- local verbose = true
local verbose = false
local outfile = "$Test::Nginx::Util::ErrLogFile"
-- local outfile = "/tmp/v.log"
if verbose then
local dump = require "jit.dump"
dump.on(nil, outfile)
else
local v = require "jit.v"
v.on(outfile)
end
require "resty.core"
-- jit.opt.start("hotloop=1")
-- jit.opt.start("loopunroll=1000000")
-- jit.off()
}
};
run_tests();
__DATA__
=== TEST 1: new_bulk() creates a bulk
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local bulk = mlcache.new_bulk()
ngx.say("type: ", type(bulk))
ngx.say("size: ", #bulk)
ngx.say("bulk.n: ", bulk.n)
}
}
--- request
GET /t
--- response_body
type: table
size: 0
bulk.n: 0
--- no_error_log
[error]
=== TEST 2: new_bulk() creates a bulk with narr in arg #1
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local bulk = mlcache.new_bulk(3)
ngx.say("type: ", type(bulk))
ngx.say("size: ", #bulk)
ngx.say("bulk.n: ", bulk.n)
}
}
--- request
GET /t
--- response_body
type: table
size: 0
bulk.n: 0
--- no_error_log
[error]
=== TEST 3: bulk:add() adds bulk operations
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local function cb() end
local bulk = mlcache.new_bulk(3)
for i = 1, 3 do
bulk:add("key_" .. i, nil, cb, i)
end
for i = 1, 3*4, 4 do
ngx.say(tostring(bulk[i]), " ",
tostring(bulk[i + 1]), " ",
tostring(bulk[i + 2]), " ",
tostring(bulk[i + 3]))
end
ngx.say("bulk.n: ", bulk.n)
}
}
--- request
GET /t
--- response_body_like
key_1 nil function: 0x[0-9a-fA-F]+ 1
key_2 nil function: 0x[0-9a-fA-F]+ 2
key_3 nil function: 0x[0-9a-fA-F]+ 3
bulk\.n: 3
--- no_error_log
[error]
=== TEST 4: bulk:add() can be given to get_bulk()
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local function cb(i) return i end
local bulk = mlcache.new_bulk(3)
for i = 1, 3 do
bulk:add("key_" .. i, nil, cb, i)
end
local res, err = cache:get_bulk(bulk)
if not res then
ngx.log(ngx.ERR, err)
return
end
for i = 1, res.n, 3 do
ngx.say(tostring(res[i]), " ",
tostring(res[i + 1]), " ",
tostring(res[i + 2]))
end
}
}
--- request
GET /t
--- response_body
1 nil 3
2 nil 3
3 nil 3
--- no_error_log
[error]
=== TEST 5: each_bulk_res() iterates over get_bulk() results
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local cache = assert(mlcache.new("my_mlcache", "cache_shm"))
local res, err = cache:get_bulk {
"key_a", nil, function() return 1 end, nil,
"key_b", nil, function() return 2 end, nil,
"key_c", nil, function() return 3 end, nil,
n = 3,
}
if not res then
ngx.log(ngx.ERR, err)
return
end
for i, data, err, hit_lvl in mlcache.each_bulk_res(res) do
ngx.say(i, " ", data, " ", err, " ", hit_lvl)
end
}
}
--- request
GET /t
--- response_body
1 1 nil 3
2 2 nil 3
3 3 nil 3
--- no_error_log
[error]
=== TEST 6: each_bulk_res() throws an error on unrecognized res
--- http_config eval: $::HttpConfig
--- config
location = /t {
content_by_lua_block {
local mlcache = require "resty.mlcache"
local pok, perr = pcall(mlcache.each_bulk_res, {})
if not pok then
ngx.say(perr)
end
}
}
--- request
GET /t
--- response_body
res must have res.n field; is this a get_bulk() result?
--- no_error_log
[error]
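The tests above exercise the flat-array layout behind `new_bulk()`/`get_bulk()`: operations occupy 4 slots each (key, opts, callback, callback arg) with an explicit `n` counter (since embedded nils make Lua's `#t` unreliable), and results occupy 3 slots each (data, err, hit_lvl). A hypothetical Python sketch of that layout — a model of the data structure, not mlcache's implementation:

```python
# Model of the bulk layout: ops packed 4-wide, results 3-wide,
# each with an explicit element count `n`.

def new_bulk():
    return {"slots": [], "n": 0}

def bulk_add(bulk, key, opts, cb, arg):
    bulk["slots"] += [key, opts, cb, arg]  # 4 slots per operation
    bulk["n"] += 1

def get_bulk(bulk):
    # Stand-in for cache:get_bulk(): run every callback as an L3 miss.
    res = {"slots": [], "n": 0}
    for i in range(0, bulk["n"] * 4, 4):
        key, opts, cb, arg = bulk["slots"][i:i + 4]
        # data, err, hit_lvl (3 == value produced by the callback)
        res["slots"] += [cb(arg), None, 3]
        res["n"] += 1
    return res

bulk = new_bulk()
for i in range(1, 4):
    bulk_add(bulk, f"key_{i}", None, lambda x: x, i)

res = get_bulk(bulk)
# mirrors TEST 4's output: "1 nil 3", "2 nil 3", "3 nil 3"
assert res["slots"] == [1, None, 3, 2, None, 3, 3, None, 3]
assert res["n"] == 3
```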

51 t/Util.pm Normal file
@@ -0,0 +1,51 @@
use strict;
package t::Util;
sub get_openresty_canon_version (@) {
sprintf "%d.%03d%03d%03d", $_[0], $_[1], $_[2], $_[3];
}
sub get_openresty_version () {
my $NginxBinary = $ENV{TEST_NGINX_BINARY} || 'nginx';
my $out = `$NginxBinary -V 2>&1`;
if (!defined $out || $? != 0) {
bail_out("Failed to get the version of the OpenResty in PATH");
die;
}
if ($out =~ m{openresty[^/]*/(\d+)\.(\d+)\.(\d+)\.(\d+)}s) {
return get_openresty_canon_version($1, $2, $3, $4);
}
if ($out =~ m{nginx[^/]*/(\d+)\.(\d+)\.(\d+)}s) {
return;
}
bail_out("Failed to parse the output of \"nginx -V\": $out\n");
die;
}
sub skip_openresty {
my ($op, $ver) = @_;
my $OpenrestyVersion = get_openresty_version();
if ($ver =~ m{(\d+)\.(\d+)\.(\d+)\.(\d+)}s) {
$ver = get_openresty_canon_version($1, $2, $3, $4);
} else {
bail_out("Invalid skip_openresty() arg: $ver");
die;
}
if (defined $OpenrestyVersion and eval "$OpenrestyVersion $op $ver") {
return 1;
}
return;
}
our @EXPORT = qw(
skip_openresty
);
1;
# vim: set ft=perl:
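For reference, the trick in `get_openresty_canon_version()` above is to pack a 4-part version into a single number via `sprintf "%d.%03d%03d%03d"`, so that the plain numeric comparison in `skip_openresty()` orders versions correctly even when components differ in digit width. A sketch of the same scheme in Python (illustrative only, outside the Perl harness):

```python
# Mimic Perl's sprintf "%d.%03d%03d%03d" canonicalization, which lets
# "1.13.6.2" compare greater than "1.9.15.1" numerically.

def canon(ver: str) -> float:
    a, b, c, d = (int(x) for x in ver.split("."))
    return float(f"{a}.{b:03d}{c:03d}{d:03d}")

# 1.13.6.2 is the boundary the purge() tests above gate on.
assert canon("1.13.6.2") == float("1.013006002")
assert canon("1.13.6.2") > canon("1.9.15.1")   # 013006002 > 009015001
assert canon("1.21.4.1") > canon("1.13.6.2")
```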