|
8 | 8 | This module is under active development and is production ready. |
9 | 9 |
|
10 | 10 | Version |
11 | | - This document describes ngx_lua v0.6.10 |
12 | | - (<https://github.com/chaoslawful/lua-nginx-module/tags>) released on 5 |
| 11 | + This document describes ngx_lua v0.7.0 |
| 12 | + (<https://github.com/chaoslawful/lua-nginx-module/tags>) released on 10 |
13 | 13 | October 2012. |
14 | 14 |
|
15 | 15 | Synopsis |
@@ -4700,6 +4700,278 @@ Nginx API for Lua |
4700 | 4700 |
|
4701 | 4701 | This API was first enabled in the "v0.6.0" release. |
4702 | 4702 |
|
| 4703 | + ngx.thread.spawn |
| 4704 | + syntax: *co = ngx.thread.spawn(func, arg1, arg2, ...)* |
| 4705 | + |
| 4706 | + context: *rewrite_by_lua*, access_by_lua*, content_by_lua** |
| 4707 | + |
    | 4708 | + Spawns a new user "light thread" with the Lua function "func" as well as
    | 4709 | + the optional arguments "arg1", "arg2", etc. Returns a Lua thread (or Lua
    | 4710 | + coroutine) object that represents this "light thread".
| 4711 | + |
| 4712 | + "Light threads" are just a special kind of Lua coroutines that are |
| 4713 | + scheduled automatically by the "ngx_lua" module. |
| 4714 | + |
    | 4715 | + Before "ngx.thread.spawn" returns, "func" is called with the optional
    | 4716 | + arguments and runs until it returns, aborts with an error, or yields
    | 4717 | + automatically due to an I/O operation via the Nginx API for Lua (like
    | 4718 | + tcpsock:receive).
| 4719 | + |
    | 4720 | + After "ngx.thread.spawn" returns, the newly-created "light thread" will
    | 4721 | + keep running asynchronously, typically resumed on various I/O events.
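    | | + 
    | | + For example, a minimal sketch of this behavior (assuming a content_by_lua
    | | + context) might look like this:
    | | + 
    | | +     ngx.thread.spawn(function ()
    | | +         ngx.say("child: before sleep")  -- runs before spawn() returns
    | | +         ngx.sleep(0.001)  -- yields; control goes back to the spawning code
    | | +         ngx.say("child: after sleep")  -- resumed later by the scheduler
    | | +     end)
    | | + 
    | | +     ngx.say("parent: spawn returned")
    | | + 
    | | + This would print "child: before sleep", then "parent: spawn returned",
    | | + and finally "child: after sleep".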
| 4722 | + |
    | 4723 | + All the Lua code chunks run by rewrite_by_lua, access_by_lua, and
    | 4724 | + content_by_lua execute in a boilerplate "light thread" created
    | 4725 | + automatically by "ngx_lua". Such boilerplate "light threads" are also
    | 4726 | + called "entry threads".
| 4727 | + |
    | 4728 | + By default, the corresponding Nginx handler (e.g., the rewrite_by_lua
    | 4729 | + handler) will not terminate until one of the following occurs (see the
    | | + sketch after this list):
    | 4730 | + 
    | 4731 | + 1. both the "entry thread" and all the user "light threads" terminate,
| 4732 | + |
    | 4733 | + 2. a "light thread" (either the "entry thread" or a user "light thread")
    | 4734 | + aborts by calling ngx.exit, ngx.exec, ngx.redirect, or
    | 4735 | + ngx.req.set_uri(uri, true), or
| 4736 | + |
| 4737 | + 3. the "entry thread" terminates with a Lua error. |
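    | | + 
    | | + As a sketch of condition 1 above (assuming a content_by_lua context), the
    | | + response below is not finalized until the background "light thread" has
    | | + terminated as well:
    | | + 
    | | +     ngx.thread.spawn(function ()
    | | +         ngx.sleep(1)  -- yields; the "entry thread" finishes first
    | | +         ngx.say("background thread done")
    | | +     end)
    | | + 
    | | +     ngx.say("entry thread done")
    | | +     -- the handler only terminates about 1 second later, after the
    | | +     -- spawned "light thread" has terminated too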
| 4738 | + |
    | 4739 | + When a user "light thread" terminates with a Lua error, however, it does
    | 4740 | + not abort the other running "light threads" the way the "entry thread"
    | 4741 | + does.
| 4742 | + |
    | 4743 | + Due to a limitation in the Nginx subrequest model, it is generally not
    | 4744 | + possible to abort a running Nginx subrequest. It is therefore also
    | 4745 | + prohibited to abort a running "light thread" that is pending on one or
    | 4746 | + more Nginx subrequests. You must call ngx.thread.wait to wait for those
    | 4747 | + "light threads" to terminate before quitting the "world".
| 4748 | + |
| 4749 | + The "light threads" are not scheduled in a pre-emptive way. In other |
| 4750 | + words, no automatic time-slicing is performed. A "light thread" will |
| 4751 | + keep running exclusively on the CPU until |
| 4752 | + |
| 4753 | + 1. a (nonblocking) I/O operation cannot be completed in a single run, |
| 4754 | + |
| 4755 | + 2. it calls coroutine.yield to actively give up execution, or |
| 4756 | + |
| 4757 | + 3. it is aborted by a Lua error or an invocation of ngx.exit, ngx.exec, |
| 4758 | + ngx.redirect, or ngx.req.set_uri(uri, true). |
| 4759 | + |
| 4760 | + For the first two cases, the "light thread" will usually be resumed |
| 4761 | + later by the "ngx_lua" scheduler unless a "stop-the-world" event |
| 4762 | + happens. |
| 4763 | + |
    | 4764 | + User "light threads" can create "light threads" themselves, and normal
    | 4765 | + user coroutines created by coroutine.create can also create "light
    | 4766 | + threads". The coroutine (be it a normal Lua coroutine or a "light
    | 4767 | + thread") that directly spawns a "light thread" is called the "parent
    | 4768 | + coroutine" of the newly spawned "light thread".
| 4769 | + |
| 4770 | + The "parent coroutine" can call ngx.thread.wait to wait on the |
| 4771 | + termination of its child "light thread". |
| 4772 | + |
| 4773 | + You can call coroutine.status() and coroutine.yield() on the "light |
| 4774 | + thread" coroutines. |
| 4775 | + |
    | 4776 | + The status of a "light thread" coroutine can be "zombie" if all of the
    | | + following hold (see the sketch after this list):
    | 4777 | + 
    | 4778 | + 1. the "light thread" has already terminated (either successfully or
    | 4779 | + with an error),
| 4780 | + |
| 4781 | + 2. its parent coroutine is still alive, and |
| 4782 | + |
| 4783 | + 3. its parent coroutine is not waiting on it with ngx.thread.wait. |
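    | | + 
    | | + A minimal sketch of this state (the child below terminates right away
    | | + because it performs no I/O at all):
    | | + 
    | | +     local co = ngx.thread.spawn(function () return "quick result" end)
    | | + 
    | | +     -- the child has already terminated, its parent is still alive, and
    | | +     -- the parent has not called ngx.thread.wait on it yet:
    | | +     ngx.say(coroutine.status(co))  -- expected to print "zombie"
    | | + 
    | | +     local ok, res = ngx.thread.wait(co)  -- collect the terminated child
    | | +     if ok then
    | | +         ngx.say(res)  -- quick result
    | | +     end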
| 4784 | + |
| 4785 | + The following example demonstrates the use of coroutine.yield() in the |
| 4786 | + "light thread" coroutines to do manual time-slicing: |
| 4787 | + |
| 4788 | + local yield = coroutine.yield |
| 4789 | + |
| 4790 | + function f() |
| 4791 | + local self = coroutine.running() |
| 4792 | + ngx.say("f 1") |
| 4793 | + yield(self) |
| 4794 | + ngx.say("f 2") |
| 4795 | + yield(self) |
| 4796 | + ngx.say("f 3") |
| 4797 | + end |
| 4798 | + |
| 4799 | + local self = coroutine.running() |
| 4800 | + ngx.say("0") |
| 4801 | + yield(self) |
| 4802 | + |
| 4803 | + ngx.say("1") |
| 4804 | + ngx.thread.spawn(f) |
| 4805 | + |
| 4806 | + ngx.say("2") |
| 4807 | + yield(self) |
| 4808 | + |
| 4809 | + ngx.say("3") |
| 4810 | + yield(self) |
| 4811 | + |
| 4812 | + ngx.say("4") |
| 4813 | + |
    | 4814 | + This will generate the output:
| 4815 | + |
| 4816 | + 0 |
| 4817 | + 1 |
| 4818 | + f 1 |
| 4819 | + 2 |
| 4820 | + f 2 |
| 4821 | + 3 |
| 4822 | + f 3 |
| 4823 | + 4 |
| 4824 | + |
| 4825 | + "Light threads" are mostly useful for doing concurrent upstream requests |
| 4826 | + in a single Nginx request handler, kinda like a generalized version of |
| 4827 | + ngx.location.capture_multi that can work with all the Nginx API for Lua. |
| 4828 | + The following example demonstrates parallel requests to MySQL, |
| 4829 | + Memcached, and upstream HTTP services in a single Lua handler, and |
| 4830 | + outputting the results in the order that they actually return (very much |
| 4831 | + like the Facebook BigPipe model): |
| 4832 | + |
    | 4833 | + -- query mysql, memcached, and a remote http service at the same
    | 4834 | + -- time, outputting the results in the order in which they
    | 4835 | + -- actually return
| 4836 | + |
    | 4837 | + local mysql = require "resty.mysql"
    | 4838 | + local memcached = require "resty.memcached"
    | | + local cjson = require "cjson"  -- used below to encode the mysql result
| 4839 | + |
| 4840 | + local function query_mysql() |
| 4841 | + local db = mysql:new() |
| 4842 | + db:connect{ |
| 4843 | + host = "127.0.0.1", |
| 4844 | + port = 3306, |
| 4845 | + database = "test", |
| 4846 | + user = "monty", |
| 4847 | + password = "mypass" |
| 4848 | + } |
| 4849 | + local res, err, errno, sqlstate = |
| 4850 | + db:query("select * from cats order by id asc") |
| 4851 | + db:set_keepalive(0, 100) |
| 4852 | + ngx.say("mysql done: ", cjson.encode(res)) |
| 4853 | + end |
| 4854 | + |
| 4855 | + local function query_memcached() |
| 4856 | + local memc = memcached:new() |
| 4857 | + memc:connect("127.0.0.1", 11211) |
| 4858 | + local res, err = memc:get("some_key") |
| 4859 | + ngx.say("memcached done: ", res) |
| 4860 | + end |
| 4861 | + |
| 4862 | + local function query_http() |
| 4863 | + local res = ngx.location.capture("/my-http-proxy") |
| 4864 | + ngx.say("http done: ", res.body) |
| 4865 | + end |
| 4866 | + |
| 4867 | + ngx.thread.spawn(query_mysql) -- create thread 1 |
| 4868 | + ngx.thread.spawn(query_memcached) -- create thread 2 |
| 4869 | + ngx.thread.spawn(query_http) -- create thread 3 |
| 4870 | + |
| 4871 | + This API was first enabled in the "v0.7.0" release. |
| 4872 | + |
| 4873 | + ngx.thread.wait |
| 4874 | + syntax: *ok, res1, res2, ... = ngx.thread.wait(thread1, thread2, ...)* |
| 4875 | + |
| 4876 | + context: *rewrite_by_lua*, access_by_lua*, content_by_lua** |
| 4877 | + |
| 4878 | + Waits on one or more child "light threads" and returns the results of |
| 4879 | + the first "light thread" that terminates (either successfully or with an |
| 4880 | + error). |
| 4881 | + |
    | 4882 | + The arguments "thread1", "thread2", etc. are the Lua thread objects
    | 4883 | + returned by earlier calls to ngx.thread.spawn.
| 4884 | + |
    | 4885 | + The return values have exactly the same meaning as those of
    | 4886 | + coroutine.resume, that is, the first value returned is a boolean value
    | 4887 | + indicating whether the "light thread" terminated successfully or not,
    | 4888 | + and the subsequent values are the return values of the user Lua
    | 4889 | + function that was used to spawn the "light thread" (in case of success)
    | 4890 | + or the error object (in case of failure).
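    | | + 
    | | + For example, a sketch of checking these return values ("my_task" here is
    | | + just a placeholder for a user Lua function):
    | | + 
    | | +     local co = ngx.thread.spawn(my_task)
    | | + 
    | | +     local ok, ret = ngx.thread.wait(co)
    | | +     if not ok then
    | | +         -- "ret" holds the error object raised inside my_task
    | | +         ngx.log(ngx.ERR, "light thread failed: ", ret)
    | | +         return
    | | +     end
    | | + 
    | | +     -- "ret" (and any further return values) come from my_task itself
    | | +     ngx.say("light thread returned: ", ret)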
| 4891 | + |
| 4892 | + Only the direct "parent coroutine" can wait on its child "light thread", |
| 4893 | + otherwise a Lua exception will be raised. |
| 4894 | + |
| 4895 | + The following example demonstrates the use of "ngx.thread.wait" and |
| 4896 | + ngx.location.capture to emulate ngx.location.capture_multi: |
| 4897 | + |
| 4898 | + local capture = ngx.location.capture |
| 4899 | + local spawn = ngx.thread.spawn |
| 4900 | + local wait = ngx.thread.wait |
| 4901 | + local say = ngx.say |
| 4902 | + |
| 4903 | + local function fetch(uri) |
| 4904 | + return capture(uri) |
| 4905 | + end |
| 4906 | + |
| 4907 | + local threads = { |
| 4908 | + spawn(fetch, "/foo"), |
| 4909 | + spawn(fetch, "/bar"), |
| 4910 | + spawn(fetch, "/baz") |
| 4911 | + } |
| 4912 | + |
| 4913 | + for i = 1, #threads do |
| 4914 | + local ok, res = wait(threads[i]) |
| 4915 | + if not ok then |
| 4916 | + say(i, ": failed to run: ", res) |
| 4917 | + else |
| 4918 | + say(i, ": status: ", res.status) |
| 4919 | + say(i, ": body: ", res.body) |
| 4920 | + end |
| 4921 | + end |
| 4922 | + |
    | 4923 | + This essentially implements the "wait all" model.
| 4924 | + |
| 4925 | + And below is an example demonstrating the "wait any" model: |
| 4926 | + |
| 4927 | + function f() |
| 4928 | + ngx.sleep(0.2) |
| 4929 | + ngx.say("f: hello") |
| 4930 | + return "f done" |
| 4931 | + end |
| 4932 | + |
| 4933 | + function g() |
| 4934 | + ngx.sleep(0.1) |
| 4935 | + ngx.say("g: hello") |
| 4936 | + return "g done" |
| 4937 | + end |
| 4938 | + |
| 4939 | + local tf, err = ngx.thread.spawn(f) |
| 4940 | + if not tf then |
| 4941 | + ngx.say("failed to spawn thread f: ", err) |
| 4942 | + return |
| 4943 | + end |
| 4944 | + |
| 4945 | + ngx.say("f thread created: ", coroutine.status(tf)) |
| 4946 | + |
| 4947 | + local tg, err = ngx.thread.spawn(g) |
| 4948 | + if not tg then |
| 4949 | + ngx.say("failed to spawn thread g: ", err) |
| 4950 | + return |
| 4951 | + end |
| 4952 | + |
| 4953 | + ngx.say("g thread created: ", coroutine.status(tg)) |
| 4954 | + |
    | 4955 | + local ok, res = ngx.thread.wait(tf, tg)
| 4956 | + if not ok then |
| 4957 | + ngx.say("failed to wait: ", res) |
| 4958 | + return |
| 4959 | + end |
| 4960 | + |
| 4961 | + ngx.say("res: ", res) |
| 4962 | + |
| 4963 | + -- stop the "world", aborting other running threads |
| 4964 | + ngx.exit(ngx.OK) |
| 4965 | + |
| 4966 | + And it will generate the following output: |
| 4967 | + |
| 4968 | + f thread created: running |
| 4969 | + g thread created: running |
| 4970 | + g: hello |
| 4971 | + res: g done |
| 4972 | + |
| 4973 | + This API was first enabled in the "v0.7.0" release. |
| 4974 | + |
4703 | 4975 | ndk.set_var.DIRECTIVE |
4704 | 4976 | syntax: *res = ndk.set_var.DIRECTIVE_NAME* |
4705 | 4977 |
|
@@ -4943,8 +5215,7 @@ Known Issues |
4943 | 5215 | module-level global variables that are shared among *all* requests: |
4944 | 5216 |
|
4945 | 5217 | getmetatable(foo.bar).__newindex = function (table, key, val) |
4946 | | - error('Attempt to write to undeclared variable "' .. key .. '": ' |
4947 | | - .. debug.traceback()) |
| 5218 | + error('Attempt to write to undeclared variable "' .. key .. '"') |
4948 | 5219 | end |
4949 | 5220 |
|
4950 | 5221 | Assuming the current Lua module is named "foo.bar", this will guarantee |
|