Yann Ylavic [Fri, 10 Aug 2018 16:15:50 +0000 (16:15 +0000)]
core: ap_filter_output_pending() to flush outer-most filters first.
Since upstream output filters may use ap_filter_should_yield() to determine
whether they should send more data (e.g. ap_request_core_filter), we need
to flush pending data from the core output filter first, and so on up the
chain.
Otherwise we may enter an infinite loop where ap_request_core_filter() does
nothing when ap_filter_output_pending() is called from the event MPM.
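A minimal sketch of the idea, with simplified/hypothetical helpers (the
actual code walks the connection's pending filters): re-pass each pending
filter its set-aside brigade, starting from the filter nearest the network:

    #include "httpd.h"
    #include "util_filter.h"

    /* Hypothetical helper: pending[] is ordered outer-most (network) first. */
    static apr_status_t flush_pending_outermost_first(ap_filter_t **pending,
                                                      int n)
    {
        int i;
        for (i = 0; i < n; ++i) {
            ap_filter_t *f = pending[i];
            if (f->bb && !APR_BRIGADE_EMPTY(f->bb)) {
                apr_status_t rv = ap_pass_brigade(f, f->bb);
                if (rv != APR_SUCCESS) {
                    return rv;
                }
            }
        }
        return APR_SUCCESS;
    }

With the outer-most data drained first, an upper filter's later call to
ap_filter_should_yield() reflects the real downstream state.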
Rainer Jung [Tue, 7 Aug 2018 10:25:31 +0000 (10:25 +0000)]
mod_status: Complete the data shown for async MPMs in "auto" mode: added
the number of processes, the number of stopping processes, and the numbers
of busy and idle workers.
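For illustration, a "?auto" scrape may then include lines like the following
(values made up, field names per mod_status's auto format):

    Processes: 4
    Stopping: 0
    BusyWorkers: 3
    IdleWorkers: 97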
Rainer Jung [Tue, 7 Aug 2018 10:17:33 +0000 (10:17 +0000)]
mod_proxy: Improve the balancer member data shown in mod_status when
"ProxyStatus" is "On": add a "busy" count, and in auto mode always show
byte counts in units of kilobytes.
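For reference, a minimal configuration exposing this data might look like
(location illustrative):

    ProxyStatus On
    <Location "/server-status">
        SetHandler server-status
    </Location>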
Yann Ylavic [Fri, 3 Aug 2018 09:53:42 +0000 (09:53 +0000)]
event, worker: initialize the objects used by signal_threads() first.
Follow up to r1835845.
If a signal is received early while the MPM children start, signal_threads()
may be called concurrently with start_threads(), thus before the latter (or
the threads it spawns, like listener_thread) has had a chance to create and
initialize the queues, mutexes, pollset and sockets array used by the former.
So move those initializations to a new setup_threads_runtime() function called
before start_threads(), where the pruntime pool is also created.
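A condensed sketch of the intended ordering in child_main() (details and
surrounding code elided):

    setup_threads_runtime(); /* pruntime + queues, mutexes, pollset, sockets */
    /* ... from here on, signal_threads() may run safely ... */
    start_threads();         /* spawn the listener and worker threads */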
If ProxyPassReverse is used for reverse mapping of relative redirects, subsequent ProxyPassReverse statements, whether they are relative or absolute, may fail.
Jim Jagielski [Wed, 1 Aug 2018 11:27:28 +0000 (11:27 +0000)]
Fix PR54848 in a 2.4.x-backportable format. Ideally, the use of ->client
would also be deprecated in whatever 2.4.x version this is added to.
mod_ratelimit: Don't interfere with "chunked" encoding.
By the time ap_http_header_filter() sends the header brigade and adds the
"CHUNK" filter, we need to guarantee that the headers have gone through the
whole filter stack, and more specifically past ap_http_chunk_filter(), which
assumes that everything it receives is body data.
Since rate_limit_filter() may retain the header brigade, make it run after
ap_http_chunk_filter(), just before AP_FTYPE_CONNECTION filters.
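A hedged sketch of what such a registration looks like (the exact filter
type/priority used by mod_ratelimit.c may differ):

    /* Run after ap_http_chunk_filter(), just below connection filters: */
    ap_register_output_filter("RATE_LIMIT", rate_limit_filter, NULL,
                              AP_FTYPE_CONNECTION - 1);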
Also, ap_http_header_filter() shouldn't eat the EOS bucket for HEAD/no-body
responses. mod_ratelimit, for instance, has depended on it since r1835168,
but any subsequent request filter may need it as well to flush and/or bail
out appropriately.
This fixes the regression introduced in 2.4.34 (r1835168).
PR 62568.
mod_proxy_http: follow up to r1836588/r1836648: handle unread 100-continue.
When the backend responds with a non-interim response to a 100-continue,
mod_proxy_http won't read the client's body, so make sure "Connection: close"
ends up being added to the response if nobody reads that body later.
The right thing to do at the mod_proxy level, rather than forcing
AP_CONN_CLOSE, is to restore r->expecting_100 so that further processing
(like error_override or trying the next balancer member) can still work.
Eric Covener [Thu, 26 Jul 2018 00:51:31 +0000 (00:51 +0000)]
Expand on the ProxyPassReverse args: split the two arguments into their own
paragraphs, and try to reinforce that the 2nd argument has to match the
response header, and what the first one is used for.
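For example (hostnames illustrative; the second argument must match the URL
as sent back by the backend in the response header):

    ProxyPass        "/mirror/foo/" "http://backend.example.com/"
    ProxyPassReverse "/mirror/foo/" "http://backend.example.com/"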
mod_proxy_http: follow up to r1836588: avoid 100-continue responses from core.
When mod_proxy_http handles end-to-end "100 continue", it can't let
ap_http_filter() send its own interim response whenever the body is read.
So save/restore r->expecting_100 before/after handling the request, and use
req->expecting_100 internally (including to restore r->expecting_100
appropriately).
While at it, add comments and debug logs about 100 continue handling, and
fill in missing APLOGNO()s from r1836588.
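A minimal sketch of the save/restore pattern described above (surrounding
code elided; req is the proxy_http_req_t context):

    req->expecting_100 = r->expecting_100; /* remember the client's Expect */
    r->expecting_100 = 0;                  /* keep ap_http_filter() from
                                            * sending its own interim 100 */
    /* ... prefetch and forward the request ... */
    r->expecting_100 = req->expecting_100; /* restore for later processing */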
* ap_proxy_balancer_get_best_worker cannot be exported and used as an
optional function at the same time. So rename it to
proxy_balancer_get_best_worker and make it static (this is what the optional
function resolves to), and recreate ap_proxy_balancer_get_best_worker as an
exported thin wrapper around it.
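A condensed sketch of the pattern (parameter list simplified/hypothetical,
optional-fn declaration elided):

    static proxy_worker *proxy_balancer_get_best_worker(proxy_balancer *b,
                                                        request_rec *r)
    {
        /* ... actual selection logic ... */
        return NULL;
    }

    /* Exported thin wrapper for direct (linked) users: */
    PROXY_DECLARE(proxy_worker *) ap_proxy_balancer_get_best_worker(
            proxy_balancer *b, request_rec *r)
    {
        return proxy_balancer_get_best_worker(b, r);
    }

    static void register_hooks(apr_pool_t *p)
    {
        /* Optional-fn users (mod_lbmethod_*) resolve it at run time: */
        APR_REGISTER_OPTIONAL_FN(proxy_balancer_get_best_worker);
    }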
Handle end-to-end 100-continue, according to RFC 7231, such that the client
request body is not read/forwarded (according to its "Expect:" header) until
the backend wants to receive it (with interim 100 continue response), or never
forwarded if the backend provides a (non-interim) response and doesn't need
the client body at all.
This is achieved by filling the header_brigade in ap_proxy_http_prefetch()
and letting ap_proxy_http_request() determine whether it should forward that
brigade only (with the "Expect: 100-continue" specified by the client or added
according to "ping=" configuration), or forward the whole body for the usual
case (as before).
When 100-continue expectation is in place, the body is actually forwarded by
ap_proxy_http_process_response() when/if a "100 continue" response is sent by
the backend, otherwise the body is discarded; a future enhancement could make
it so that, in a balancer configuration, the body is forwarded to another
balancer member depending on the status/error from the backend.
So stream_reqbody_cl() and stream_reqbody_chunked() functions are adapted to be
called by either ap_proxy_http_request() or ap_proxy_http_process_response(),
while spool_reqbody_cl() still spools the body in ap_proxy_http_prefetch(),
thus before the backend is connected/reused, to avoid inactivity on the
connection during the prefetch time (the prefetched body is also forwarded
according to the 100-continue expectation, though).
Also, since the brigades and other runtime objects now need to be shared by the
ap_proxy_http_*() functions chain, a proxy_http_req_t struct/context is created
from the start and passed to them as (the single) argument. This is also a good
candidate for a future async baton, if we wanted to let the MPM event wait for
connection data for us at any stage and be called back ;)
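A hedged sketch of such a context (fields illustrative, not the exact
struct):

    typedef struct {
        request_rec        *r;
        proxy_conn_rec     *backend;
        apr_bucket_brigade *header_brigade; /* filled by ..._prefetch() */
        apr_bucket_brigade *input_brigade;  /* prefetched/spooled body */
        int                 expecting_100;  /* saved r->expecting_100 */
        int                 do_100_continue;/* end-to-end expectation? */
    } proxy_http_req_t;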
Finally, ap_send_interim_response() is modified to correctly handle
100-continue responses only once, and to clear r->expecting_100 only for them.
* mod_proxy: Remove load order and link dependency between mod_lbmethod_*
modules and mod_proxy by providing mod_proxy's ap_proxy_balancer_get_best_worker
as an optional function.
They were superseded by ap_filter_should_yield() and ap_run_in/output_pending()
in r1706669 and had poor semantics since then (we can't maintain pending
semantics both by filter and for the whole connection).
Register ap_filter_input_pending() as the default input_pending hook (which
seems to have been forgotten in the first place).
On the MPM event side, we don't need to flush pending output data when the
connection has just been processed; ap_filter_should_yield() is lightweight
and sufficient to determine whether we should really enter write completion
state or go straight to reading. ap_run_output_pending() is used only when
write completion is in place and needs to be completed before more
processing.
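A condensed sketch of the event-side decision (simplified):

    if (ap_filter_should_yield(c->output_filters)) {
        /* pending data downstream: enter write completion state */
    }
    else {
        /* nothing pending: go straight to reading the next request */
    }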
mod_proxy_hcheck: take balancer's SSLProxy* directives into account.
mod_proxy_hcheck was missing the merge of the SSLProxy* directives defined
for the balancer with those of the VirtualHost.
Since ap_proxy_connection_create_ex() needs a merged r->per_dir_config to apply
the correct SSL configuration, let's split create_request_rec() in two:
- create_request_rec() to only initialize the non-connection fields and merge
balancer->section_config into r->per_dir_config,
- set_request_connection() to associate the connection with the request once
it's been created from the merged configuration of the minimal request.
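A hedged sketch of the resulting call sequence in mod_proxy_hcheck
(signatures simplified and hypothetical):

    r = create_request_rec(ptemp, ctx->s, balancer); /* merges section_config */
    /* r->per_dir_config now reflects the balancer's SSLProxy* settings */
    set_request_connection(r, backend_conn);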
core: integrate data_in_{in,out}put_filter to ap_filter_{in,out}put_pending().
Straightforward for ap_filter_input_pending() since c->data_in_input_filter is
always checked wherever ap_run_input_pending(c) is.
For ap_filter_output_pending(), this allows setting c->data_in_output_filter
in ap_process_request_after_handler() and avoids a useless flush from
mpm_event.
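Sketch of the idea (simplified):

    /* In ap_process_request_after_handler(): just flag the pending data,
     * ap_filter_output_pending() will report it to the MPM. */
    c->data_in_output_filter = 1;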
The core output filter used to determine first whether it needed to block
before trying to send its data (including set-aside data), and if so it
called send_brigade_blocking().
This can be avoided by making send_brigade_nonblocking() send as much data
as possible (nonblocking), and only if data remain, checking whether they
should be flushed (blocking), according to the same
ap_filter_reinstate_brigade() heuristics but applied afterward.
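A condensed sketch of the new flow (helper names/signatures simplified;
should_flush() and flush_blocking() are hypothetical):

    apr_status_t rv = send_brigade_nonblocking(s, bb, &bytes_written);
    if (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb)) {
        if (should_flush(f, bb)) {              /* heuristics, applied after */
            rv = flush_blocking(s, bb);         /* hypothetical blocking path */
        }
        else {
            rv = ap_filter_setaside_brigade(f, bb); /* keep for next pass */
        }
    }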
This both simplifies the code (axing send_brigade_blocking() and some
duplicated logic) and optimizes sends, since send_brigade_nonblocking() is
now given all the buckets and can make use of scatter/gather (iovec) or the
NOPUSH option with the whole picture.
When sendfile is available and/or with fine tuning of FlushMaxThreshold (and
ReadBufferSize) from r1836032, one can now take advantage of modern network
speeds and bandwidth.
This commit also adds some APLOG_TRACE6 messages for output bytes (including
at the mod_ssl level, since splitting happens there when it's active).
core: Add ReadBufferSize, FlushMaxThreshold and FlushMaxPipelined directives.
ReadBufferSize allows configuring the size of read buffers; for now it's
mainly used for file bucket reads (apr_bucket_file_set_buf_size), but it
could be used to replace AP_IOBUFSIZE in multiple places.
FlushMaxThreshold and FlushMaxPipelined allow configuring the hardcoded
THRESHOLD_MAX_BUFFER and MAX_REQUESTS_IN_PIPELINE from "util_filter.c".
The former sets the maximum size above which pending data are forcibly
flushed to the network (eventually blocking), and the latter sets the number
of pipelined/pending responses above which they are flushed regardless of
whether a pipelined request is immediately available (zero disables
pipelining).
Larger ReadBufferSize and FlushMaxThreshold values can trade memory
consumption for performance, given the capacity of today's networks.
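Illustrative tuning (values are examples, not recommendations):

    ReadBufferSize    65536
    FlushMaxThreshold 131072
    FlushMaxPipelined 8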
Since this is internal util_filter usage, we shouldn't expose it in conn_rec;
it can be replaced with a pooled brigade provided by
ap_reuse_brigade_from_pool().
util_filter: follow up to r1835640: pending_filter_cleanup() precedence.
Register pending_filter_cleanup() as a normal cleanup (not pre_cleanup) so
that the pending filters are still there on pool cleanup, and f->bb is set
to NULL where needed.
The is_pending_filter() check is then moved where relevant.
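For reference, the two registration flavors (arguments illustrative):

    /* pre_cleanup: would run too early, before child pools are destroyed */
    apr_pool_pre_cleanup_register(p, f, pending_filter_cleanup);
    /* normal cleanup: runs late enough that pending filters still exist */
    apr_pool_cleanup_register(p, f, pending_filter_cleanup,
                              apr_pool_cleanup_null);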
Always favor APR_POLLSET_WAKEABLE over method/implementation.
This is probably more about correctness than a real issue, since systems are
unlikely to implement more than one (i.e. their own) method...
This also makes use of pruntime for event_pollset (an oversight from r1835845).
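A hedged sketch of the fallback logic (variables illustrative):

    rv = apr_pollset_create_ex(&event_pollset, size, pruntime,
                               APR_POLLSET_WAKEABLE, preferred_method);
    if (rv != APR_SUCCESS) {
        /* keep the wakeup capability, give up the preferred method */
        rv = apr_pollset_create(&event_pollset, size, pruntime,
                                APR_POLLSET_WAKEABLE);
    }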
MPMs event and worker both need a dedicated pool to handle the creation of
the threads (listener, workers) and synchronization objects (queues, pollset,
mutexes...) in the start_threads() thread, with at least the lifetime of
the connections they handle, and thus survive pchild destruction (notably
in ONE_PROCESS mode, but SIG_UNGRACEFUL is affected too).
For instance, without this fix, the backtrace below can happen in ONE_PROCESS
mode when a signal/^C is received (with active connections):
Thread 1 "httpd" received signal SIGSEGV, Segmentation fault.
(gdb) bt
#0 <BOOM>
#1 0x00007ffff7c7e016 in apr_file_write (thefile=0x0, ...)
^ NULL (cleared)
at file_io/unix/readwrite.c:230
#2 0x00007ffff7c7e4a7 in apr_file_putc (ch=1 '\001', thefile=0x0)
^ NULL (cleared)
at file_io/unix/readwrite.c:377
#3 0x00007ffff7c8da4a in apr_pollset_wakeup (pollset=0x55555568b870)
^ already destroyed by pchild
at poll/unix/pollset.c:224
#4 0x00007ffff7fc16c7 in decrement_connection_count (cs_=0x7fff08000ea0)
at event.c:811
#5 0x00007ffff7c83e15 in run_cleanups (cref=0x7fffe4002b78)
at memory/unix/apr_pools.c:2672
#6 0x00007ffff7c82c2f in apr_pool_destroy (pool=0x7fffe4002b58)
^ master_conn
at memory/unix/apr_pools.c:1007
#7 0x00007ffff7c82c12 in apr_pool_destroy (pool=0x7fff08000c28)
^ ptrans
at memory/unix/apr_pools.c:1004
#8 0x00007ffff7c82c12 in apr_pool_destroy (pool=0x555555638698)
^ pconf
at memory/unix/apr_pools.c:1004
#9 0x00007ffff7c82c12 in apr_pool_destroy (pool=0x555555636688)
^ pglobal
at memory/unix/apr_pools.c:1004
#10 0x00005555555f4709 in ap_terminate ()
at unixd.c:522
#11 0x00007ffff6dbc8f1 in __run_exit_handlers (...)
at exit.c:108
#12 0x00007ffff6dbc9ea in __GI_exit (status=<optimized out>)
at exit.c:139
#13 0x00007ffff7fc1616 in clean_child_exit (code=0)
at event.c:774
^ pchild already destroyed here
#14 0x00007ffff7fc5ae4 in child_main (child_num_arg=0, child_bucket=0)
at event.c:2869
...
While at it, add comments about the lifetimes of the MPMs' pools and their
objects, and give each pool a tag (e.g. "pchild", consistently with the
other MPMs).
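Sketch of the pool setup (assuming pconf as the parent, per the lifetimes
described above):

    apr_pool_create(&pruntime, pconf); /* survives pchild destruction */
    apr_pool_tag(pruntime, "pruntime");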
Lucien Gentis [Thu, 12 Jul 2018 13:05:22 +0000 (13:05 +0000)]
Rebuild (with 8-bit characters replaced by their HTML entities, because the
-Xbootclasspath/p option was disabled in the build.sh script, as it is no
longer supported in OpenJDK 10).
util_filter: keep filters with aside buckets in order.
Reads or writes of a filter's pending data must happen in the same order as
the filter chain, so we can't use an apr_hash_t to maintain the pending
filters, since it provides no guarantee on this matter.
Instead, use an APR_RING maintained in c->pending_filters; since both the
name (was c->filters) and the type changed, MAJOR is bumped (trunk-only code
anyway so far).
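A hedged sketch of an APR_RING-based pending list (element type
hypothetical):

    #include "apr_ring.h"

    struct pending_elem {
        APR_RING_ENTRY(pending_elem) link;
        ap_filter_t *f;
    };
    APR_RING_HEAD(pending_ring, pending_elem);
    /* conn_rec then carries a struct pending_ring *pending_filters;
     * insertion/removal preserves filter-chain order, unlike apr_hash_t. */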
Joe Orton [Wed, 11 Jul 2018 07:46:08 +0000 (07:46 +0000)]
* modules/ssl/ssl_engine_pphrase.c (modssl_load_engine_keypair): Load
the engine associated with the private key (&cert) explicitly
rather than requiring the engine to be set as the default method
for all operations (with "SSLCryptoDevice <engine>").
(Thanks to Anderson Sasaki <ansasaki redhat.com> for the suggested
improvement and guidance.)
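A condensed sketch of the explicit-engine load (standard OpenSSL ENGINE API;
identifiers and error handling illustrative):

    #include <openssl/engine.h>

    ENGINE *e = ENGINE_by_id(engine_id);  /* engine named for this key */
    if (e && ENGINE_init(e)) {
        EVP_PKEY *pkey = ENGINE_load_private_key(e, key_id,
                                                 ui_method, ui_data);
        /* ... pair pkey with the certificate, then ... */
        ENGINE_finish(e);
    }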
Joe Orton [Fri, 6 Jul 2018 12:01:29 +0000 (12:01 +0000)]
Hook up PKCS#11 PIN entry through configured passphrase entry method.
* modules/ssl/ssl_engine_pphrase.c: Add wrappers for the OpenSSL UI* API
around passphrase entry.
(modssl_load_engine_keypair): Take the vhost ID and use the above rather
than the default OpenSSL UI.
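A minimal sketch of such a wrapper (callback name hypothetical):

    UI_METHOD *ui_method = UI_create_method("httpd passphrase UI");
    UI_method_set_reader(ui_method, passphrase_ui_reader);
    /* passphrase_ui_reader() feeds the result of the configured
     * passphrase entry method into UI_set_result() for each prompt. */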