Changes with Apache 2.4.19
+ *) mod_http2: Accept-Encoding is, when present on the initiating request,
+ added to push promises. This lets compressed content work in pushes.
+ [Stefan Eissing]
+
+ *) mod_http2: fixed possible read after free when streams were cancelled early
+ by the client. [Stefan Eissing]
+
+ *) mod_http2: fixed possible deadlock during connection shutdown. Thanks to
+ @FrankStolle for reporting and getting the necessary data.
+ [Stefan Eissing]
+
+ *) mod_http2: fixed apr_uint64_t formatting in a log statement to use the
+ proper APR def, thanks to @Sp1l.
+
+ *) mod_http2: the number of worker threads allowed to a connection is
+ adjusted dynamically. Starting with 4, the number is doubled when streams
+ can be served without blocking on http/2 connection flow. The number is
+ halved when the server has to wait on client flow control grants.
+ This can happen with a maximum frequency of 5 times per second.
+ When a connection occupies too many workers, repeatable requests
+ (GET/HEAD/OPTIONS) are cancelled and placed back in the queue. Should that
+ not suffice and a stream is busy longer than the server timeout, the
+ connection will be aborted with error code ENHANCE_YOUR_CALM.
+ This does *not* limit the number of streams a client may open, rather the
+ number of server threads a connection might use.
+ [Stefan Eissing]
+
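     The doubling/halving heuristic above can be illustrated with a minimal
     sketch; `h2_workers_adjust` and its arguments are hypothetical names for
     illustration, not the actual mod_http2 internals (which also rate-limit
     the adjustment to 5 times per second):

     ```c
     #include <assert.h>

     /* Hypothetical sketch: start at 4 workers, double the limit while
      * streams are served without blocking on connection flow control,
      * halve it when the server must wait on client window grants. */
     #define H2_WORKERS_START 4

     static int h2_workers_adjust(int limit, int blocked_on_client_flow)
     {
         if (blocked_on_client_flow) {
             limit /= 2;              /* client consumes slowly: back off */
             return limit < 1 ? 1 : limit;
         }
         return limit * 2;            /* streams flow freely: grow */
     }
     ```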
+ *) mod_http2: allow the Link header to specify multiple "rel" values,
+ space-separated inside a quoted string. Push is prohibited when the Link
+ parameter "nopush" is present.
+ [Stefan Eissing]
+
+ *) mod_http2: reworked connection state handling. Idle connections accept a
+ GOAWAY from the client without further reply. Otherwise the
+ module makes a best effort to send one last GOAWAY to the client.
+
+ *) mod_http2: the values from the standard directives Timeout and
+ KeepAliveTimeout are properly applied to http/2 connections.
+ [Stefan Eissing]
+
+ *) mod_http2: idle connections are returned to async MPMs. The new hook
+ "pre_close_connection" is used to send a GOAWAY frame when not already done.
+ The event MPM server config for the main connection is set "by hand" to
+ the correct negotiated server.
+ [Stefan Eissing]
+
+ *) mod_http2: keep-alive blocking reads are done with 1 second timeouts to
+ check for MPM stopping. The module will announce an early GOAWAY, finish
+ processing open streams, then close.
+ [Stefan Eissing]
+
+ *) mod_http2: bytes read/written on slave connections are reported via the
+ optional mod_logio functions. Fixes PR 58871.
+
*) mod_ssl: Add SSLOCSPProxyURL to add the possibility to do all queries
to OCSP responders through a HTTP proxy. [Ruediger Pluem]
be inherited by virtual hosts that define a CustomLog.
[Edward Lu]
- *) mod_http2: connection how keep a "push diary" where hashes of already
+ *) mod_http2: connections now keep a "push diary" where hashes of already
pushed resources are kept. See directive H2PushDiarySize for managing this.
Push diaries can be initialized by clients via the "Cache-Digest" request
header. This carries a base64url encoded, compressed Golomb set as described
when available for request.
[Stefan Eissing]
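     The push-diary idea can be sketched as a toy model; `diary_offer`,
     `toy_hash` and the fixed-size array are illustrative stand-ins, not the
     module's implementation (which uses 64-bit hashes, Golomb-coded sets
     and the H2PushDiarySize bound):

     ```c
     #include <assert.h>

     #define DIARY_SIZE 8   /* toy stand-in for H2PushDiarySize */

     typedef struct {
         unsigned long entries[DIARY_SIZE];
         int n;
     } push_diary;

     /* djb2-style toy hash of a resource path */
     static unsigned long toy_hash(const char *s)
     {
         unsigned long h = 5381;
         while (*s) h = h * 33 + (unsigned char)*s++;
         return h;
     }

     /* Returns 1 if the resource is new (and records it, so it may be
      * pushed), 0 if it was already promised on this connection. */
     static int diary_offer(push_diary *d, const char *path)
     {
         int i;
         unsigned long h = toy_hash(path);
         for (i = 0; i < d->n; i++) {
             if (d->entries[i] == h) {
                 return 0;
             }
         }
         if (d->n < DIARY_SIZE) {
             d->entries[d->n++] = h;
         }
         return 1;
     }
     ```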
- *) mod_http2: new config directives and the implementation behind
- them: H2Timeout, H2KeepAliveTimeout, H2StreamTimeout. Documentation in
- the http2 manual.
- [Stefan Eissing]
-
*) mod_http2: fixed bug in input window size calculation by moving chunked
request body encoding into later stage of processing. Fixes PR 58825.
[Stefan Eissing]
modules/http2/h2_mplx.c modules/http2/h2_push.c
modules/http2/h2_request.c modules/http2/h2_response.c
modules/http2/h2_session.c modules/http2/h2_stream.c
- modules/http2/h2_stream_set.c modules/http2/h2_switch.c
+ modules/http2/h2_switch.c
modules/http2/h2_task.c modules/http2/h2_task_input.c
- modules/http2/h2_task_output.c modules/http2/h2_task_queue.c
+ modules/http2/h2_task_output.c modules/http2/h2_int_queue.c
modules/http2/h2_util.c modules/http2/h2_worker.c
modules/http2/h2_workers.c
)
-#
-# This Makefile requires the environment var NGH2SRC
-# pointing to the base directory of nghttp2 source tree.
-#
-
#
# Declare the sub-directories to be built here
#
#
# build this level's files
+
#
# Make sure all needed macros are defined
#
# INCDIRS
#
XINCDIRS += \
- $(APR)/include \
- $(APRUTIL)/include \
- $(SRC)/include \
- $(NGH2SRC)/lib/ \
- $(NGH2SRC)/lib/includes \
- $(SERVER)/mpm/NetWare \
- $(NWOS) \
$(EOLIST)
#
# These defines will come after DEFINES
#
XDEFINES += \
- -DHAVE_CONFIG_H \
$(EOLIST)
#
# These flags will be added to the link.opt file
#
XLFLAGS += \
- -L$(OBJDIR) \
$(EOLIST)
#
# This is used by the link 'name' directive to name the nlm. If left blank
# TARGET_nlm (see below) will be used.
#
-NLM_NAME = mod_http2
+NLM_NAME =
#
# This is used by the link '-desc ' directive.
# If left blank, NLM_NAME will be used.
#
-NLM_DESCRIPTION = Apache $(VERSION_STR) HTTP2 Support module (w/ NGHTTP2 Lib)
+NLM_DESCRIPTION =
#
# This is used by the '-threadname' directive. If left blank,
# NLM_NAME Thread will be used.
#
-NLM_THREAD_NAME = $(NLM_NAME)
+NLM_THREAD_NAME =
#
# If this is specified, it will override VERSION value in
#
# If this is specified, it will override the default of 64K
#
-NLM_STACK_SIZE = 65536
+NLM_STACK_SIZE =
+
#
# If this is specified it will be used by the link '-entry' directive
NLM_CHECK_SYM =
#
-# If this is specified it will be used by the link '-flags' directive
+# If these are specified they will be used by the link '-flags' directive
#
NLM_FLAGS =
#
XDCDATA =
-#
-# Declare all target files (you must add your files here)
-#
-
#
# If there is an NLM target, put it here
#
TARGET_nlm = \
- $(OBJDIR)/$(NLM_NAME).nlm \
+ $(OBJDIR)/mod_http2.nlm \
$(EOLIST)
#
# If there is an LIB target, put it here
#
TARGET_lib = \
- $(OBJDIR)/nghttp2.lib \
$(EOLIST)
#
# These are the OBJ files needed to create the NLM target above.
# Paths must all use the '/' character
#
-FILES_nlm_objs := $(sort $(patsubst %.c,$(OBJDIR)/%.o,$(wildcard *.c)))
+FILES_nlm_objs = \
+ $(EOLIST)
#
# These are the LIB files needed to create the NLM target above.
# These will be added as a library command in the link.opt file.
#
FILES_nlm_libs = \
- $(PRELUDE) \
- $(OBJDIR)/nghttp2.lib \
$(EOLIST)
#
# These will be added as a module command in the link.opt file.
#
FILES_nlm_modules = \
- Libc \
- Apache2 \
$(EOLIST)
#
# Any additional imports go here
#
FILES_nlm_Ximports = \
- @libc.imp \
- @aprlib.imp \
- @httpd.imp \
$(EOLIST)
#
# Any symbols exported to here
#
FILES_nlm_exports = \
- http2_module \
$(EOLIST)
#
# These are the OBJ files needed to create the LIB target above.
# Paths must all use the '/' character
#
-FILES_lib_objs := $(sort $(patsubst $(NGH2SRC)/lib/%.c,$(OBJDIR)/%.o,$(wildcard $(NGH2SRC)/lib/*.c)))
+FILES_lib_objs = \
+ $(EOLIST)
+
#
# implement targets and dependencies (leave this section alone)
#
-libs :: $(OBJDIR) $(NGH2SRC)/lib/config.h $(TARGET_lib)
+libs :: $(OBJDIR) $(TARGET_lib)
nlms :: libs $(TARGET_nlm)
# correct place. (See $(AP_WORK)/build/NWGNUhead.inc for examples)
#
install :: nlms FORCE
- $(call COPY,$(OBJDIR)/*.nlm, $(INSTALLBASE)/modules/)
+ $(call COPY,$(OBJDIR)/*.nlm, $(INSTALLBASE)/modules/)
-clean ::
- $(call DEL,$(NGH2SRC)/lib/config.h)
#
# Any specialized rules here
#
-vpath %.c $(NGH2SRC)/lib
-
-$(NGH2SRC)/lib/config.h : NWGNUmakefile
- @echo $(DL)GEN $@$(DL)
- @echo $(DL)/* For NetWare target.$(DL) > $@
- @echo $(DL)** Do not edit - created by Make!$(DL) >> $@
- @echo $(DL)*/$(DL) >> $@
- @echo $(DL)#ifndef NGH2_CONFIG_H$(DL) >> $@
- @echo $(DL)#define NGH2_CONFIG_H$(DL) >> $@
- @echo #define HAVE_ARPA_INET_H 1 >> $@
- @echo #define HAVE_CHOWN 1 >> $@
- @echo #define HAVE_DECL_STRERROR_R 1 >> $@
- @echo #define HAVE_DLFCN_H 1 >> $@
- @echo #define HAVE_DUP2 1 >> $@
- @echo #define HAVE_FCNTL_H 1 >> $@
- @echo #define HAVE_GETCWD 1 >> $@
- @echo #define HAVE_INTTYPES_H 1 >> $@
- @echo #define HAVE_LIMITS_H 1 >> $@
- @echo #define HAVE_LOCALTIME_R 1 >> $@
- @echo #define HAVE_MALLOC 1 >> $@
- @echo #define HAVE_MEMCHR 1 >> $@
- @echo #define HAVE_MEMMOVE 1 >> $@
- @echo #define HAVE_MEMORY_H 1 >> $@
- @echo #define HAVE_MEMSET 1 >> $@
- @echo #define HAVE_NETDB_H 1 >> $@
- @echo #define HAVE_NETINET_IN_H 1 >> $@
- @echo #define HAVE_PTRDIFF_T 1 >> $@
- @echo #define HAVE_PWD_H 1 >> $@
- @echo #define HAVE_SOCKET 1 >> $@
- @echo #define HAVE_SQRT 1 >> $@
- @echo #define HAVE_STDDEF_H 1 >> $@
- @echo #define HAVE_STDINT_H 1 >> $@
- @echo #define HAVE_STDLIB_H 1 >> $@
- @echo #define HAVE_STRCHR 1 >> $@
- @echo #define HAVE_STRDUP 1 >> $@
- @echo #define HAVE_STRERROR 1 >> $@
- @echo #define HAVE_STRERROR_R 1 >> $@
- @echo #define HAVE_STRINGS_H 1 >> $@
- @echo #define HAVE_STRING_H 1 >> $@
- @echo #define HAVE_STRSTR 1 >> $@
- @echo #define HAVE_STRTOL 1 >> $@
- @echo #define HAVE_STRTOUL 1 >> $@
- @echo #define HAVE_SYSLOG_H 1 >> $@
- @echo #define HAVE_SYS_SOCKET_H 1 >> $@
- @echo #define HAVE_SYS_STAT_H 1 >> $@
- @echo #define HAVE_SYS_TIME_H 1 >> $@
- @echo #define HAVE_SYS_TYPES_H 1 >> $@
- @echo #define HAVE_TIME_H 1 >> $@
- @echo #define HAVE_UNISTD_H 1 >> $@
-
- @echo #define SIZEOF_INT_P 4 >> $@
- @echo #define STDC_HEADERS 1 >> $@
- @echo #define STRERROR_R_CHAR_P 4 >> $@
-
-# Hint to compiler a function parameter is not used
- @echo #define _U_ >> $@
-
- @echo #ifndef __cplusplus >> $@
- @echo #define inline __inline >> $@
- @echo #endif >> $@
-
- @echo $(DL)#endif /* NGH2_CONFIG_H */$(DL) >> $@
#
# Include the 'tail' makefile that has targets that depend on variables defined
--- /dev/null
+#
+# This Makefile requires the environment var NGH2SRC
+# pointing to the base directory of nghttp2 source tree.
+#
+
+#
+# Declare the sub-directories to be built here
+#
+
+SUBDIRS = \
+ $(EOLIST)
+
+#
+# Get the 'head' of the build environment. This includes default targets and
+# paths to tools
+#
+
+include $(AP_WORK)/build/NWGNUhead.inc
+
+#
+# build this level's files
+#
+# Make sure all needed macros are defined
+#
+
+#
+# These directories will be at the beginning of the include list, followed by
+# INCDIRS
+#
+XINCDIRS += \
+ $(APR)/include \
+ $(APRUTIL)/include \
+ $(SRC)/include \
+ $(NGH2SRC)/lib/ \
+ $(NGH2SRC)/lib/includes \
+ $(SERVER)/mpm/NetWare \
+ $(STDMOD)/ssl \
+ $(NWOS) \
+ $(EOLIST)
+
+#
+# These flags will come after CFLAGS
+#
+XCFLAGS += \
+ $(EOLIST)
+
+#
+# These defines will come after DEFINES
+#
+XDEFINES += \
+ -DHAVE_CONFIG_H \
+ $(EOLIST)
+
+#
+# These flags will be added to the link.opt file
+#
+XLFLAGS += \
+ -L$(OBJDIR) \
+ $(EOLIST)
+
+#
+# These values will be appended to the correct variables based on the value of
+# RELEASE
+#
+ifeq "$(RELEASE)" "debug"
+XINCDIRS += \
+ $(EOLIST)
+
+XCFLAGS += \
+ $(EOLIST)
+
+XDEFINES += \
+ $(EOLIST)
+
+XLFLAGS += \
+ $(EOLIST)
+endif
+
+ifeq "$(RELEASE)" "noopt"
+XINCDIRS += \
+ $(EOLIST)
+
+XCFLAGS += \
+ $(EOLIST)
+
+XDEFINES += \
+ $(EOLIST)
+
+XLFLAGS += \
+ $(EOLIST)
+endif
+
+ifeq "$(RELEASE)" "release"
+XINCDIRS += \
+ $(EOLIST)
+
+XCFLAGS += \
+ $(EOLIST)
+
+XDEFINES += \
+ $(EOLIST)
+
+XLFLAGS += \
+ $(EOLIST)
+endif
+
+#
+# These are used by the link target if an NLM is being generated
+# This is used by the link 'name' directive to name the nlm. If left blank
+# TARGET_nlm (see below) will be used.
+#
+NLM_NAME = mod_http2
+
+#
+# This is used by the link '-desc ' directive.
+# If left blank, NLM_NAME will be used.
+#
+NLM_DESCRIPTION = Apache $(VERSION_STR) HTTP2 Support module (w/ NGHTTP2 Lib)
+
+#
+# This is used by the '-threadname' directive. If left blank,
+# NLM_NAME Thread will be used.
+#
+NLM_THREAD_NAME = $(NLM_NAME)
+
+#
+# If this is specified, it will override VERSION value in
+# $(AP_WORK)/build/NWGNUenvironment.inc
+#
+NLM_VERSION =
+
+#
+# If this is specified, it will override the default of 64K
+#
+NLM_STACK_SIZE = 65536
+
+#
+# If this is specified it will be used by the link '-entry' directive
+#
+NLM_ENTRY_SYM =
+
+#
+# If this is specified it will be used by the link '-exit' directive
+#
+NLM_EXIT_SYM =
+
+#
+# If this is specified it will be used by the link '-check' directive
+#
+NLM_CHECK_SYM =
+
+#
+# If this is specified it will be used by the link '-flags' directive
+#
+NLM_FLAGS =
+
+#
+# If this is specified it will be linked in with the XDCData option in the def
+# file instead of the default of $(NWOS)/apache.xdc. XDCData can be disabled
+# by setting APACHE_UNIPROC in the environment
+#
+XDCDATA =
+
+#
+# Declare all target files (you must add your files here)
+#
+
+#
+# If there is an NLM target, put it here
+#
+TARGET_nlm = \
+ $(OBJDIR)/$(NLM_NAME).nlm \
+ $(EOLIST)
+
+#
+# If there is an LIB target, put it here
+#
+TARGET_lib = \
+ $(OBJDIR)/nghttp2.lib \
+ $(EOLIST)
+
+#
+# These are the OBJ files needed to create the NLM target above.
+# Paths must all use the '/' character
+#
+FILES_nlm_objs = \
+ $(OBJDIR)/h2_alt_svc.o \
+ $(OBJDIR)/h2_bucket_eoc.o \
+ $(OBJDIR)/h2_bucket_eos.o \
+ $(OBJDIR)/h2_config.o \
+ $(OBJDIR)/h2_conn.o \
+ $(OBJDIR)/h2_conn_io.o \
+ $(OBJDIR)/h2_ctx.o \
+ $(OBJDIR)/h2_filter.o \
+ $(OBJDIR)/h2_from_h1.o \
+ $(OBJDIR)/h2_h2.o \
+ $(OBJDIR)/h2_int_queue.o \
+ $(OBJDIR)/h2_io.o \
+ $(OBJDIR)/h2_io_set.o \
+ $(OBJDIR)/h2_mplx.o \
+ $(OBJDIR)/h2_push.o \
+ $(OBJDIR)/h2_request.o \
+ $(OBJDIR)/h2_response.o \
+ $(OBJDIR)/h2_session.o \
+ $(OBJDIR)/h2_stream.o \
+ $(OBJDIR)/h2_switch.o \
+ $(OBJDIR)/h2_task.o \
+ $(OBJDIR)/h2_task_input.o \
+ $(OBJDIR)/h2_task_output.o \
+ $(OBJDIR)/h2_util.o \
+ $(OBJDIR)/h2_worker.o \
+ $(OBJDIR)/h2_workers.o \
+ $(OBJDIR)/mod_http2.o \
+ $(EOLIST)
+
+#
+# These are the LIB files needed to create the NLM target above.
+# These will be added as a library command in the link.opt file.
+#
+FILES_nlm_libs = \
+ $(PRELUDE) \
+ $(OBJDIR)/nghttp2.lib \
+ $(EOLIST)
+
+#
+# These are the modules that the above NLM target depends on to load.
+# These will be added as a module command in the link.opt file.
+#
+FILES_nlm_modules = \
+ Libc \
+ Apache2 \
+ $(EOLIST)
+
+#
+# If the nlm has a msg file, put its path here
+#
+FILE_nlm_msg =
+
+#
+# If the nlm has a hlp file, put its path here
+#
+FILE_nlm_hlp =
+
+#
+# If this is specified, it will override $(NWOS)\copyright.txt.
+#
+FILE_nlm_copyright =
+
+#
+# Any additional imports go here
+#
+FILES_nlm_Ximports = \
+ @libc.imp \
+ @aprlib.imp \
+ @httpd.imp \
+ $(EOLIST)
+
+#
+# Any symbols exported to here
+#
+FILES_nlm_exports = \
+ http2_module \
+ $(EOLIST)
+
+#
+# These are the OBJ files needed to create the LIB target above.
+# Paths must all use the '/' character
+#
+FILES_lib_objs := $(sort $(patsubst $(NGH2SRC)/lib/%.c,$(OBJDIR)/%.o,$(wildcard $(NGH2SRC)/lib/*.c)))
+#
+# implement targets and dependencies (leave this section alone)
+#
+
+libs :: $(OBJDIR) $(NGH2SRC)/lib/config.h $(TARGET_lib)
+
+nlms :: libs $(TARGET_nlm)
+
+#
+# Updated this target to create necessary directories and copy files to the
+# correct place. (See $(AP_WORK)/build/NWGNUhead.inc for examples)
+#
+install :: nlms FORCE
+ $(call COPY,$(OBJDIR)/*.nlm, $(INSTALLBASE)/modules/)
+
+clean ::
+ $(call DEL,$(NGH2SRC)/lib/config.h)
+#
+# Any specialized rules here
+#
+vpath %.c $(NGH2SRC)/lib
+
+$(NGH2SRC)/lib/config.h : NWGNUmakefile
+ @echo $(DL)GEN $@$(DL)
+ @echo $(DL)/* For NetWare target.$(DL) > $@
+ @echo $(DL)** Do not edit - created by Make!$(DL) >> $@
+ @echo $(DL)*/$(DL) >> $@
+ @echo $(DL)#ifndef NGH2_CONFIG_H$(DL) >> $@
+ @echo $(DL)#define NGH2_CONFIG_H$(DL) >> $@
+ @echo #define HAVE_ARPA_INET_H 1 >> $@
+ @echo #define HAVE_CHOWN 1 >> $@
+ @echo #define HAVE_DECL_STRERROR_R 1 >> $@
+ @echo #define HAVE_DLFCN_H 1 >> $@
+ @echo #define HAVE_DUP2 1 >> $@
+ @echo #define HAVE_FCNTL_H 1 >> $@
+ @echo #define HAVE_GETCWD 1 >> $@
+ @echo #define HAVE_INTTYPES_H 1 >> $@
+ @echo #define HAVE_LIMITS_H 1 >> $@
+ @echo #define HAVE_LOCALTIME_R 1 >> $@
+ @echo #define HAVE_MALLOC 1 >> $@
+ @echo #define HAVE_MEMCHR 1 >> $@
+ @echo #define HAVE_MEMMOVE 1 >> $@
+ @echo #define HAVE_MEMORY_H 1 >> $@
+ @echo #define HAVE_MEMSET 1 >> $@
+ @echo #define HAVE_NETDB_H 1 >> $@
+ @echo #define HAVE_NETINET_IN_H 1 >> $@
+ @echo #define HAVE_PTRDIFF_T 1 >> $@
+ @echo #define HAVE_PWD_H 1 >> $@
+ @echo #define HAVE_SOCKET 1 >> $@
+ @echo #define HAVE_SQRT 1 >> $@
+ @echo #define HAVE_STDDEF_H 1 >> $@
+ @echo #define HAVE_STDINT_H 1 >> $@
+ @echo #define HAVE_STDLIB_H 1 >> $@
+ @echo #define HAVE_STRCHR 1 >> $@
+ @echo #define HAVE_STRDUP 1 >> $@
+ @echo #define HAVE_STRERROR 1 >> $@
+ @echo #define HAVE_STRERROR_R 1 >> $@
+ @echo #define HAVE_STRINGS_H 1 >> $@
+ @echo #define HAVE_STRING_H 1 >> $@
+ @echo #define HAVE_STRSTR 1 >> $@
+ @echo #define HAVE_STRTOL 1 >> $@
+ @echo #define HAVE_STRTOUL 1 >> $@
+ @echo #define HAVE_SYSLOG_H 1 >> $@
+ @echo #define HAVE_SYS_SOCKET_H 1 >> $@
+ @echo #define HAVE_SYS_STAT_H 1 >> $@
+ @echo #define HAVE_SYS_TIME_H 1 >> $@
+ @echo #define HAVE_SYS_TYPES_H 1 >> $@
+ @echo #define HAVE_TIME_H 1 >> $@
+ @echo #define HAVE_UNISTD_H 1 >> $@
+
+ @echo #define SIZEOF_INT_P 4 >> $@
+ @echo #define STDC_HEADERS 1 >> $@
+ @echo #define STRERROR_R_CHAR_P 4 >> $@
+
+# Hint to compiler a function parameter is not used
+ @echo #define _U_ >> $@
+
+ @echo #ifndef __cplusplus >> $@
+ @echo #define inline __inline >> $@
+ @echo #endif >> $@
+
+ @echo $(DL)#endif /* NGH2_CONFIG_H */$(DL) >> $@
+
+#
+# Include the 'tail' makefile that has targets that depend on variables defined
+# in this makefile
+#
+
+include $(APBUILD)/NWGNUtail.inc
+
+
h2_filter.lo dnl
h2_from_h1.lo dnl
h2_h2.lo dnl
+h2_int_queue.lo dnl
h2_io.lo dnl
h2_io_set.lo dnl
h2_mplx.lo dnl
h2_response.lo dnl
h2_session.lo dnl
h2_stream.lo dnl
-h2_stream_set.lo dnl
h2_switch.lo dnl
h2_task.lo dnl
h2_task_input.lo dnl
h2_task_output.lo dnl
-h2_task_queue.lo dnl
h2_util.lo dnl
h2_worker.lo dnl
h2_workers.lo dnl
AC_MSG_WARN([nghttp2 library is unusable])
fi
dnl # nghttp2 >= 1.3.0: access to stream weights
- AC_CHECK_FUNCS([nghttp2_stream_get_weight],
- [APR_ADDTO(MOD_CPPFLAGS, ["-DH2_NG2_STREAM_API"])], [])
+ AC_CHECK_FUNCS([nghttp2_stream_get_weight], [], [liberrors="yes"])
+ if test "x$liberrors" != "x"; then
+ AC_MSG_WARN([nghttp2 version >= 1.3.0 is required])
+ fi
dnl # nghttp2 >= 1.5.0: changing stream priorities
AC_CHECK_FUNCS([nghttp2_session_change_stream_priority],
[APR_ADDTO(MOD_CPPFLAGS, ["-DH2_NG2_CHANGE_PRIO"])], [])
handling. Implemented by mod_http2. This module requires a libnghttp2 installation.
See --with-nghttp2 on how to manage non-standard locations. This module
is usually linked shared and requires loading. ], $http2_objs, , most, [
-# APACHE_CHECK_OPENSSL
-# if test "$ac_cv_openssl" = "yes" ; then
-# APR_ADDTO(MOD_CPPFLAGS, ["-DH2_OPENSSL"])
-# fi
+ APACHE_CHECK_OPENSSL
+ if test "$ac_cv_openssl" = "yes" ; then
+ APR_ADDTO(MOD_CPPFLAGS, ["-DH2_OPENSSL"])
+ fi
APACHE_CHECK_NGHTTP2
if test "$ac_cv_nghttp2" = "yes" ; then
])
# Ensure that other modules can pick up mod_http2.h
-APR_ADDTO(INCLUDES, [-I\$(top_srcdir)/$modpath_current])
+# APR_ADDTO(INCLUDES, [-I\$(top_srcdir)/$modpath_current])
dnl # end of module specific part
APACHE_MODPATH_FINISH
--- /dev/null
+/* Copyright 2015 greenbytes GmbH (https://www.greenbytes.de)
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef __mod_h2__h2__
+#define __mod_h2__h2__
+
+/**
+ * The magic PRIamble of RFC 7540 that is always sent when starting
+ * a h2 communication.
+ */
+extern const char *H2_MAGIC_TOKEN;
+
+#define H2_ERR_NO_ERROR (0x00)
+#define H2_ERR_PROTOCOL_ERROR (0x01)
+#define H2_ERR_INTERNAL_ERROR (0x02)
+#define H2_ERR_FLOW_CONTROL_ERROR (0x03)
+#define H2_ERR_SETTINGS_TIMEOUT (0x04)
+#define H2_ERR_STREAM_CLOSED (0x05)
+#define H2_ERR_FRAME_SIZE_ERROR (0x06)
+#define H2_ERR_REFUSED_STREAM (0x07)
+#define H2_ERR_CANCEL (0x08)
+#define H2_ERR_COMPRESSION_ERROR (0x09)
+#define H2_ERR_CONNECT_ERROR (0x0a)
+#define H2_ERR_ENHANCE_YOUR_CALM (0x0b)
+#define H2_ERR_INADEQUATE_SECURITY (0x0c)
+#define H2_ERR_HTTP_1_1_REQUIRED (0x0d)
+
+#define H2_HEADER_METHOD ":method"
+#define H2_HEADER_METHOD_LEN 7
+#define H2_HEADER_SCHEME ":scheme"
+#define H2_HEADER_SCHEME_LEN 7
+#define H2_HEADER_AUTH ":authority"
+#define H2_HEADER_AUTH_LEN 10
+#define H2_HEADER_PATH ":path"
+#define H2_HEADER_PATH_LEN 5
+#define H2_CRLF "\r\n"
+
+/* Maximum number of padding bytes in a frame, rfc7540 */
+#define H2_MAX_PADLEN 256
+/* Initial default window size, RFC 7540 ch. 6.5.2 */
+#define H2_INITIAL_WINDOW_SIZE ((64*1024)-1)
+
+#define H2_HTTP_2XX(a) ((a) >= 200 && (a) < 300)
+
+#define H2_STREAM_CLIENT_INITIATED(id) (id&0x01)
+
+#define H2_ALEN(a) (sizeof(a)/sizeof((a)[0]))
+
+#define H2MAX(x,y) ((x) > (y) ? (x) : (y))
+#define H2MIN(x,y) ((x) < (y) ? (x) : (y))
+
+typedef enum {
+ H2_DEPENDANT_AFTER,
+ H2_DEPENDANT_INTERLEAVED,
+ H2_DEPENDANT_BEFORE,
+} h2_dependency;
+
+typedef struct h2_priority {
+ h2_dependency dependency;
+ int weight;
+} h2_priority;
+
+typedef enum {
+ H2_PUSH_NONE,
+ H2_PUSH_DEFAULT,
+ H2_PUSH_HEAD,
+ H2_PUSH_FAST_LOAD,
+} h2_push_policy;
+
+typedef enum {
+ H2_STREAM_ST_IDLE,
+ H2_STREAM_ST_OPEN,
+ H2_STREAM_ST_RESV_LOCAL,
+ H2_STREAM_ST_RESV_REMOTE,
+ H2_STREAM_ST_CLOSED_INPUT,
+ H2_STREAM_ST_CLOSED_OUTPUT,
+ H2_STREAM_ST_CLOSED,
+} h2_stream_state_t;
+
+typedef enum {
+ H2_SESSION_ST_INIT, /* send initial SETTINGS, etc. */
+ H2_SESSION_ST_DONE, /* finished, connection close */
+ H2_SESSION_ST_IDLE, /* nothing to write, expecting data inc */
+ H2_SESSION_ST_BUSY, /* read/write without stop */
+ H2_SESSION_ST_WAIT, /* waiting for tasks reporting back */
+ H2_SESSION_ST_LOCAL_SHUTDOWN, /* we announced GOAWAY */
+ H2_SESSION_ST_REMOTE_SHUTDOWN, /* client announced GOAWAY */
+} h2_session_state;
+
+/* h2_request is the transformer of HTTP2 streams into HTTP/1.1 internal
+ * format that will be fed to various httpd input filters to finally
+ * become a request_rec to be handled by someone.
+ */
+typedef struct h2_request h2_request;
+
+struct h2_request {
+ int id; /* stream id */
+
+ const char *method; /* pseudo header values, see ch. 8.1.2.3 */
+ const char *scheme;
+ const char *authority;
+ const char *path;
+
+ apr_table_t *headers;
+ apr_table_t *trailers;
+
+ apr_time_t request_time;
+ apr_off_t content_length;
+
+ unsigned int chunked : 1; /* iff request body needs to be forwarded as chunked */
+ unsigned int eoh : 1; /* iff end-of-headers has been seen and request is complete */
+ unsigned int body : 1; /* iff this request has a body */
+ unsigned int serialize : 1; /* iff this request is written in HTTP/1.1 serialization */
+ unsigned int push_policy; /* which push policy to use for this request */
+};
+
+typedef struct h2_response h2_response;
+
+struct h2_response {
+ int stream_id;
+ int rst_error;
+ int http_status;
+ apr_off_t content_length;
+ apr_table_t *headers;
+ apr_table_t *trailers;
+ const char *sos_filter;
+};
+
+
+#endif /* defined(__mod_h2__h2__) */
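A few of the macros above can be exercised directly. This sketch repeats them so it compiles standalone; `can_push_response` is an illustrative helper, not part of the module (client-initiated stream IDs are odd per RFC 7540, server pushes use even IDs):

```c
#include <assert.h>

/* Repeated from h2.h so this sketch is self-contained */
#define H2_HTTP_2XX(a) ((a) >= 200 && (a) < 300)
#define H2_STREAM_CLIENT_INITIATED(id) (id&0x01)
#define H2MAX(x,y) ((x) > (y) ? (x) : (y))
#define H2MIN(x,y) ((x) < (y) ? (x) : (y))
#define H2_INITIAL_WINDOW_SIZE ((64*1024)-1)

/* Illustrative only: a push promise goes out on a server-initiated
 * (even) stream and only makes sense for a successful response. */
static int can_push_response(int stream_id, int status)
{
    return !H2_STREAM_CLIENT_INITIATED(stream_id) && H2_HTTP_2XX(status);
}
```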
#include <apr_strings.h>
+#include "h2.h"
#include "h2_alt_svc.h"
#include "h2_ctx.h"
#include "h2_conn.h"
1, /* TLS cooldown secs */
1, /* HTTP/2 server push enabled */
NULL, /* map of content-type to priorities */
- -1, /* connection timeout */
- -1, /* keepalive timeout */
- 0, /* stream timeout */
256, /* push diary size */
};
conf->tls_cooldown_secs = DEF_VAL;
conf->h2_push = DEF_VAL;
conf->priorities = NULL;
- conf->h2_timeout = DEF_VAL;
- conf->h2_keepalive = DEF_VAL;
- conf->h2_stream_timeout = DEF_VAL;
conf->push_diary_size = DEF_VAL;
return conf;
else {
n->priorities = add->priorities? add->priorities : base->priorities;
}
- n->h2_timeout = H2_CONFIG_GET(add, base, h2_timeout);
- n->h2_keepalive = H2_CONFIG_GET(add, base, h2_keepalive);
- n->h2_stream_timeout = H2_CONFIG_GET(add, base, h2_stream_timeout);
n->push_diary_size = H2_CONFIG_GET(add, base, push_diary_size);
return n;
return H2_CONFIG_GET(conf, &defconf, tls_cooldown_secs);
case H2_CONF_PUSH:
return H2_CONFIG_GET(conf, &defconf, h2_push);
- case H2_CONF_TIMEOUT_SECS:
- return H2_CONFIG_GET(conf, &defconf, h2_timeout);
- case H2_CONF_KEEPALIVE_SECS:
- return H2_CONFIG_GET(conf, &defconf, h2_keepalive);
- case H2_CONF_STREAM_TIMEOUT_SECS:
- return H2_CONFIG_GET(conf, &defconf, h2_stream_timeout);
case H2_CONF_PUSH_DIARY_SIZE:
return H2_CONFIG_GET(conf, &defconf, push_diary_size);
default:
return NULL;
}
-static const char *h2_conf_set_timeout(cmd_parms *parms,
- void *arg, const char *value)
-{
- h2_config *cfg = (h2_config *)h2_config_sget(parms->server);
- (void)arg;
- cfg->h2_timeout = (int)apr_atoi64(value);
- if (cfg->h2_timeout < 0) {
- return "value must be >= 0";
- }
- return NULL;
-}
-
-static const char *h2_conf_set_keepalive(cmd_parms *parms,
- void *arg, const char *value)
-{
- h2_config *cfg = (h2_config *)h2_config_sget(parms->server);
- (void)arg;
- cfg->h2_keepalive = (int)apr_atoi64(value);
- if (cfg->h2_keepalive < 0) {
- return "value must be >= 0";
- }
- return NULL;
-}
-
-static const char *h2_conf_set_stream_timeout(cmd_parms *parms,
- void *arg, const char *value)
-{
- h2_config *cfg = (h2_config *)h2_config_sget(parms->server);
- (void)arg;
- cfg->h2_stream_timeout = (int)apr_atoi64(value);
- if (cfg->h2_stream_timeout < 0) {
- return "value must be >= 0";
- }
- return NULL;
-}
-
static const char *h2_conf_set_push_diary_size(cmd_parms *parms,
void *arg, const char *value)
{
RSRC_CONF, "off to disable HTTP/2 server push"),
AP_INIT_TAKE23("H2PushPriority", h2_conf_add_push_priority, NULL,
RSRC_CONF, "define priority of PUSHed resources per content type"),
- AP_INIT_TAKE1("H2Timeout", h2_conf_set_timeout, NULL,
- RSRC_CONF, "read/write timeout (seconds) for HTTP/2 connections"),
- AP_INIT_TAKE1("H2KeepAliveTimeout", h2_conf_set_keepalive, NULL,
- RSRC_CONF, "timeout (seconds) for idle HTTP/2 connections, no streams open"),
- AP_INIT_TAKE1("H2StreamTimeout", h2_conf_set_stream_timeout, NULL,
- RSRC_CONF, "read/write timeout (seconds) for HTTP/2 streams"),
AP_INIT_TAKE1("H2PushDiarySize", h2_conf_set_push_diary_size, NULL,
RSRC_CONF, "size of push diary"),
AP_END_CMD
H2_CONF_TLS_WARMUP_SIZE,
H2_CONF_TLS_COOLDOWN_SECS,
H2_CONF_PUSH,
- H2_CONF_TIMEOUT_SECS,
- H2_CONF_KEEPALIVE_SECS,
- H2_CONF_STREAM_TIMEOUT_SECS,
H2_CONF_PUSH_DIARY_SIZE,
} h2_config_var_t;
int h2_push; /* if HTTP/2 server push is enabled */
struct apr_hash_t *priorities;/* map of content-type to h2_priority records */
- int h2_timeout; /* timeout for http/2 connections */
- int h2_keepalive; /* timeout for idle connections, no streams */
- int h2_stream_timeout; /* timeout for http/2 streams, slave connections */
int push_diary_size; /* # of entries in push diary */
} h2_config;
#include "h2_mplx.h"
#include "h2_session.h"
#include "h2_stream.h"
-#include "h2_stream_set.h"
#include "h2_h2.h"
#include "h2_task.h"
#include "h2_worker.h"
static h2_mpm_type_t mpm_type = H2_MPM_UNKNOWN;
static module *mpm_module;
static int async_mpm;
+static apr_socket_t *dummy_socket;
static void check_modules(int force)
{
mpm_module = m;
break;
}
- else if (!strcmp("worker.c", m->name)) {
- mpm_type = H2_MPM_WORKER;
+ else if (!strcmp("motorz.c", m->name)) {
+ mpm_type = H2_MPM_MOTORZ;
+ mpm_module = m;
+ break;
+ }
+ else if (!strcmp("mpm_netware.c", m->name)) {
+ mpm_type = H2_MPM_NETWARE;
mpm_module = m;
break;
}
mpm_module = m;
break;
}
+ else if (!strcmp("simple_api.c", m->name)) {
+ mpm_type = H2_MPM_SIMPLE;
+ mpm_module = m;
+ break;
+ }
+ else if (!strcmp("mpm_winnt.c", m->name)) {
+ mpm_type = H2_MPM_WINNT;
+ mpm_module = m;
+ break;
+ }
+ else if (!strcmp("worker.c", m->name)) {
+ mpm_type = H2_MPM_WORKER;
+ mpm_module = m;
+ break;
+ }
}
checked = 1;
}
status = ap_mpm_query(AP_MPMQ_IS_ASYNC, &async_mpm);
if (status != APR_SUCCESS) {
- ap_log_error(APLOG_MARK, APLOG_TRACE1, status, s, "querying MPM for async");
/* some MPMs do not implement this */
async_mpm = 0;
status = APR_SUCCESS;
ap_register_input_filter("H2_IN", h2_filter_core_input,
NULL, AP_FTYPE_CONNECTION);
+ status = h2_mplx_child_init(pool, s);
+
+ if (status == APR_SUCCESS) {
+ status = apr_socket_create(&dummy_socket, APR_INET, SOCK_STREAM,
+ APR_PROTO_TCP, pool);
+ }
+
return status;
}
}
h2_ctx_session_set(ctx, session);
+
return APR_SUCCESS;
}
}
status = h2_session_process(h2_ctx_session_get(ctx), async_mpm);
- if (c->cs) {
- c->cs->state = CONN_STATE_WRITE_COMPLETION;
- }
if (APR_STATUS_IS_EOF(status)) {
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, status, c, APLOGNO(03045)
"h2_session(%ld): process, closing conn", c->id);
return DONE;
}
-
-static void fix_event_conn(conn_rec *c, conn_rec *master);
-
-conn_rec *h2_slave_create(conn_rec *master, apr_pool_t *p,
- apr_thread_t *thread, apr_socket_t *socket)
+apr_status_t h2_conn_pre_close(struct h2_ctx *ctx, conn_rec *c)
{
- conn_rec *c;
-
- AP_DEBUG_ASSERT(master);
- ap_log_cerror(APLOG_MARK, APLOG_TRACE3, 0, master,
- "h2_conn(%ld): created from master", master->id);
-
- /* This is like the slave connection creation from 2.5-DEV. A
- * very efficient way - not sure how compatible this is, since
- * the core hooks are no longer run.
- * But maybe it's is better this way, not sure yet.
- */
- c = (conn_rec *) apr_palloc(p, sizeof(conn_rec));
- if (c == NULL) {
- ap_log_cerror(APLOG_MARK, APLOG_ERR, APR_ENOMEM, master,
- APLOGNO(02913) "h2_task: creating conn");
- return NULL;
- }
-
- memcpy(c, master, sizeof(conn_rec));
-
- /* Replace these */
- c->id = (master->id & (long)p);
- c->master = master;
- c->pool = p;
- c->current_thread = thread;
- c->conn_config = ap_create_conn_config(p);
- c->notes = apr_table_make(p, 5);
- c->input_filters = NULL;
- c->output_filters = NULL;
- c->bucket_alloc = apr_bucket_alloc_create(p);
- c->cs = NULL;
- c->data_in_input_filters = 0;
- c->data_in_output_filters = 0;
- c->clogging_input_filters = 1;
- c->log = NULL;
- c->log_id = NULL;
-
- /* TODO: these should be unique to this thread */
- c->sbh = master->sbh;
-
- /* Simulate that we had already a request on this connection. */
- c->keepalives = 1;
-
- ap_set_module_config(c->conn_config, &core_module, socket);
+ apr_status_t status;
- /* This works for mpm_worker so far. Other mpm modules have
- * different needs, unfortunately. The most interesting one
- * being mpm_event...
- */
- switch (h2_conn_mpm_type()) {
- case H2_MPM_WORKER:
- /* all fine */
- break;
- case H2_MPM_EVENT:
- fix_event_conn(c, master);
- break;
- default:
- /* fingers crossed */
- break;
+ status = h2_session_pre_close(h2_ctx_session_get(ctx), async_mpm);
+ if (status == APR_SUCCESS) {
+ return DONE; /* This is the same, right? */
}
-
- return c;
+ return status;
}
/* This is an internal mpm event.c struct which is disguised
c->cs = &(cs->pub);
}
+conn_rec *h2_slave_create(conn_rec *master, apr_pool_t *parent,
+ apr_allocator_t *allocator)
+{
+ apr_pool_t *pool;
+ conn_rec *c;
+ void *cfg;
+
+ AP_DEBUG_ASSERT(master);
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE3, 0, master,
+ "h2_conn(%ld): create slave", master->id);
+
+ /* We create a pool with its own allocator to be used for
+ * processing a request. This is the only way to have the processing
+ * independent of its parent pool in the sense that it can work in
+ * another thread.
+ */
+ if (!allocator) {
+ apr_allocator_create(&allocator);
+ }
+ apr_pool_create_ex(&pool, parent, NULL, allocator);
+ apr_pool_tag(pool, "h2_slave_conn");
+ apr_allocator_owner_set(allocator, parent);
+
+ c = (conn_rec *) apr_palloc(pool, sizeof(conn_rec));
+ if (c == NULL) {
+ ap_log_cerror(APLOG_MARK, APLOG_ERR, APR_ENOMEM, master,
+ APLOGNO(02913) "h2_task: creating conn");
+ return NULL;
+ }
+
+ memcpy(c, master, sizeof(conn_rec));
+
+ /* Replace these */
+ c->master = master;
+ c->pool = pool;
+ c->conn_config = ap_create_conn_config(pool);
+ c->notes = apr_table_make(pool, 5);
+ c->input_filters = NULL;
+ c->output_filters = NULL;
+ c->bucket_alloc = apr_bucket_alloc_create(pool);
+ c->data_in_input_filters = 0;
+ c->data_in_output_filters = 0;
+ c->clogging_input_filters = 1;
+ c->log = NULL;
+ c->log_id = NULL;
+    /* Simulate that we already had a request on this connection. */
+ c->keepalives = 1;
+ /* We cannot install the master connection socket on the slaves, as
+ * modules mess with timeouts/blocking of the socket, with
+ * unwanted side effects to the master connection processing.
+ * Fortunately, since we never use the slave socket, we can just install
+ * a single, process-wide dummy and everyone is happy.
+ */
+ ap_set_module_config(c->conn_config, &core_module, dummy_socket);
+ /* TODO: these should be unique to this thread */
+ c->sbh = master->sbh;
+ /* TODO: not all mpm modules have learned about slave connections yet.
+ * copy their config from master to slave.
+ */
+ if (h2_conn_mpm_module()) {
+ cfg = ap_get_module_config(master->conn_config, h2_conn_mpm_module());
+ ap_set_module_config(c->conn_config, h2_conn_mpm_module(), cfg);
+ }
+
+ switch (h2_conn_mpm_type()) {
+ case H2_MPM_EVENT:
+ fix_event_conn(c, master);
+ break;
+ default:
+ break;
+ }
+
+ return c;
+}
+
+void h2_slave_destroy(conn_rec *slave, apr_allocator_t **pallocator)
+{
+ apr_allocator_t *allocator = apr_pool_allocator_get(slave->pool);
+ apr_pool_destroy(slave->pool);
+ if (pallocator) {
+ *pallocator = allocator;
+ }
+ else {
+ apr_allocator_destroy(allocator);
+ }
+}
*/
apr_status_t h2_conn_run(struct h2_ctx *ctx, conn_rec *c);
+/**
+ * The connection is about to close. If we have not sent a GOAWAY
+ * yet, this is the last chance.
+ */
+apr_status_t h2_conn_pre_close(struct h2_ctx *ctx, conn_rec *c);
+
/* Initialize this child process for h2 connection work,
* to be called once during child init before multi processing
* starts.
H2_MPM_WORKER,
H2_MPM_EVENT,
H2_MPM_PREFORK,
+ H2_MPM_MOTORZ,
+ H2_MPM_SIMPLE,
+ H2_MPM_NETWARE,
+ H2_MPM_WINNT,
} h2_mpm_type_t;
/* Returns the type of MPM module detected */
h2_mpm_type_t h2_conn_mpm_type(void);
-conn_rec *h2_slave_create(conn_rec *master, apr_pool_t *p,
- apr_thread_t *thread, apr_socket_t *socket);
+conn_rec *h2_slave_create(conn_rec *master, apr_pool_t *parent,
+ apr_allocator_t *allocator);
+void h2_slave_destroy(conn_rec *slave, apr_allocator_t **pallocator);
#endif /* defined(__mod_h2__h2_conn__) */
*/
#define WRITE_SIZE_MAX (TLS_DATA_MAX - 100)
-#define WRITE_BUFFER_SIZE (8*WRITE_SIZE_MAX)
+#define WRITE_BUFFER_SIZE (5*WRITE_SIZE_MAX)
apr_status_t h2_conn_io_init(h2_conn_io *io, conn_rec *c,
const h2_config *cfg,
return APR_SUCCESS;
}
- ap_update_child_status(c->sbh, SERVER_BUSY_WRITE, NULL);
+ ap_update_child_status_from_conn(c->sbh, SERVER_BUSY_WRITE, c);
status = apr_brigade_length(bb, 0, &bblen);
if (status == APR_SUCCESS) {
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, c, APLOGNO(03044)
return APR_SUCCESS;
}
-apr_status_t h2_conn_io_write(h2_conn_io *io,
- const char *buf, size_t length)
-{
- apr_status_t status = APR_SUCCESS;
- pass_out_ctx ctx;
-
- ctx.c = io->connection;
- ctx.io = io;
- io->unflushed = 1;
- if (io->bufsize > 0) {
- ap_log_cerror(APLOG_MARK, APLOG_TRACE4, 0, io->connection,
- "h2_conn_io: buffering %ld bytes", (long)length);
-
- if (!APR_BRIGADE_EMPTY(io->output)) {
- status = h2_conn_io_pass(io);
- io->unflushed = 1;
- }
-
- while (length > 0 && (status == APR_SUCCESS)) {
- apr_size_t avail = io->bufsize - io->buflen;
- if (avail <= 0) {
-
- bucketeer_buffer(io);
- status = pass_out(io->output, &ctx);
- io->buflen = 0;
- }
- else if (length > avail) {
- memcpy(io->buffer + io->buflen, buf, avail);
- io->buflen += avail;
- length -= avail;
- buf += avail;
- }
- else {
- memcpy(io->buffer + io->buflen, buf, length);
- io->buflen += length;
- length = 0;
- break;
- }
- }
-
- }
- else {
- ap_log_cerror(APLOG_MARK, APLOG_TRACE4, status, io->connection,
- "h2_conn_io: writing %ld bytes to brigade", (long)length);
- status = apr_brigade_write(io->output, pass_out, &ctx, buf, length);
- }
-
- return status;
-}
-
apr_status_t h2_conn_io_writeb(h2_conn_io *io, apr_bucket *b)
{
APR_BRIGADE_INSERT_TAIL(io->output, b);
- io->unflushed = 1;
return APR_SUCCESS;
}
-apr_status_t h2_conn_io_consider_flush(h2_conn_io *io)
-{
- apr_status_t status = APR_SUCCESS;
-
- /* The HTTP/1.1 network output buffer/flush behaviour does not
- * give optimal performance in the HTTP/2 case, as the pattern of
- * buckets (data/eor/eos) is different.
- * As long as we have not found out the "best" way to deal with
- * this, force a flush at least every WRITE_BUFFER_SIZE amount
- * of data.
- */
- if (io->unflushed) {
- apr_off_t len = 0;
- if (!APR_BRIGADE_EMPTY(io->output)) {
- apr_brigade_length(io->output, 0, &len);
- }
- len += io->buflen;
- if (len >= WRITE_BUFFER_SIZE) {
- return h2_conn_io_pass(io);
- }
- }
- return status;
-}
-
static apr_status_t h2_conn_io_flush_int(h2_conn_io *io, int force, int eoc)
{
- if (io->unflushed || force) {
+ if (io->buflen > 0 || !APR_BRIGADE_EMPTY(io->output)) {
pass_out_ctx ctx;
if (io->buflen > 0) {
ap_log_cerror(APLOG_MARK, APLOG_TRACE4, 0, io->connection,
"h2_conn_io: flush, flushing %ld bytes", (long)io->buflen);
bucketeer_buffer(io);
- io->buflen = 0;
}
if (force) {
ap_log_cerror(APLOG_MARK, APLOG_TRACE4, 0, io->connection,
"h2_conn_io: flush");
/* Send it out */
- io->unflushed = 0;
-
+ io->buflen = 0;
ctx.c = io->connection;
ctx.io = eoc? NULL : io;
+
return pass_out(io->output, &ctx);
/* no more access after this, as we might have flushed an EOC bucket
* that de-allocated us all. */
return APR_SUCCESS;
}
-apr_status_t h2_conn_io_write_eoc(h2_conn_io *io, apr_bucket *b)
+apr_status_t h2_conn_io_pass(h2_conn_io *io, int flush)
{
- APR_BRIGADE_INSERT_TAIL(io->output, b);
- return h2_conn_io_flush_int(io, 1, 1);
+ return h2_conn_io_flush_int(io, flush, 0);
}
-apr_status_t h2_conn_io_flush(h2_conn_io *io)
+apr_status_t h2_conn_io_consider_pass(h2_conn_io *io)
{
- return h2_conn_io_flush_int(io, 1, 0);
+ apr_off_t len = 0;
+
+ if (!APR_BRIGADE_EMPTY(io->output)) {
+ apr_brigade_length(io->output, 0, &len);
+ }
+ len += io->buflen;
+ if (len >= WRITE_BUFFER_SIZE) {
+ return h2_conn_io_pass(io, 0);
+ }
+ return APR_SUCCESS;
}
-apr_status_t h2_conn_io_pass(h2_conn_io *io)
+apr_status_t h2_conn_io_write_eoc(h2_conn_io *io, h2_session *session)
+{
+ apr_bucket *b = h2_bucket_eoc_create(io->connection->bucket_alloc, session);
+ APR_BRIGADE_INSERT_TAIL(io->output, b);
+ b = apr_bucket_flush_create(io->connection->bucket_alloc);
+ APR_BRIGADE_INSERT_TAIL(io->output, b);
+ return h2_conn_io_flush_int(io, 0, 1);
+}
+
+apr_status_t h2_conn_io_write(h2_conn_io *io,
+ const char *buf, size_t length)
{
- return h2_conn_io_flush_int(io, 0, 0);
+ apr_status_t status = APR_SUCCESS;
+ pass_out_ctx ctx;
+
+ ctx.c = io->connection;
+ ctx.io = io;
+ if (io->bufsize > 0) {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE4, 0, io->connection,
+ "h2_conn_io: buffering %ld bytes", (long)length);
+
+ if (!APR_BRIGADE_EMPTY(io->output)) {
+ status = h2_conn_io_pass(io, 0);
+ }
+
+ while (length > 0 && (status == APR_SUCCESS)) {
+ apr_size_t avail = io->bufsize - io->buflen;
+            if (avail <= 0) {
+                status = h2_conn_io_pass(io, 0);
+            }
+ else if (length > avail) {
+ memcpy(io->buffer + io->buflen, buf, avail);
+ io->buflen += avail;
+ length -= avail;
+ buf += avail;
+ }
+ else {
+ memcpy(io->buffer + io->buflen, buf, length);
+ io->buflen += length;
+ length = 0;
+ break;
+ }
+ }
+
+ }
+ else {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE4, status, io->connection,
+ "h2_conn_io: writing %ld bytes to brigade", (long)length);
+ status = apr_brigade_write(io->output, pass_out, &ctx, buf, length);
+ }
+
+ return status;
}
char *buffer;
apr_size_t buflen;
apr_size_t bufsize;
- int unflushed;
} h2_conn_io;
apr_status_t h2_conn_io_init(h2_conn_io *io, conn_rec *c,
int h2_conn_io_is_buffered(h2_conn_io *io);
+/**
+ * Append data to the buffered output.
+ * @param io the connection io
+ * @param buf the data to append
+ * @param length the length of the data to append
+ */
apr_status_t h2_conn_io_write(h2_conn_io *io,
const char *buf,
size_t length);
-
+
+/**
+ * Append a bucket to the buffered output.
+ * @param io the connection io
+ * @param b the bucket to append
+ */
apr_status_t h2_conn_io_writeb(h2_conn_io *io, apr_bucket *b);
-apr_status_t h2_conn_io_consider_flush(h2_conn_io *io);
+/**
+ * Append an End-Of-Connection bucket to the output that, once destroyed,
+ * will tear down the complete http2 session.
+ */
+apr_status_t h2_conn_io_write_eoc(h2_conn_io *io, struct h2_session *session);
-apr_status_t h2_conn_io_pass(h2_conn_io *io);
-apr_status_t h2_conn_io_flush(h2_conn_io *io);
-apr_status_t h2_conn_io_write_eoc(h2_conn_io *io, apr_bucket *b);
+/**
+ * Pass any buffered data on to the connection output filters.
+ * @param io the connection io
+ * @param flush if a flush bucket should be appended to any output
+ */
+apr_status_t h2_conn_io_pass(h2_conn_io *io, int flush);
+
+/**
+ * Check the amount of buffered output and pass it on if enough has accumulated.
+ * @param io the connection io
+ */
+apr_status_t h2_conn_io_consider_pass(h2_conn_io *io);
#endif /* defined(__mod_h2__h2_conn_io__) */
return ctx && ctx->task;
}
-struct h2_task *h2_ctx_get_task(h2_ctx *ctx)
+h2_task *h2_ctx_get_task(h2_ctx *ctx)
{
return ctx? ctx->task : NULL;
}
+
+h2_task *h2_ctx_cget_task(conn_rec *c)
+{
+ return h2_ctx_get_task(h2_ctx_get(c, 0));
+}
+
+h2_task *h2_ctx_rget_task(request_rec *r)
+{
+ return h2_ctx_get_task(h2_ctx_rget(r));
+}
int h2_ctx_is_task(h2_ctx *ctx);
struct h2_task *h2_ctx_get_task(h2_ctx *ctx);
+struct h2_task *h2_ctx_cget_task(conn_rec *c);
+struct h2_task *h2_ctx_rget_task(request_rec *r);
#endif /* defined(__mod_h2__h2_ctx__) */
#include "h2_push.h"
#include "h2_task.h"
#include "h2_stream.h"
-#include "h2_stream_set.h"
#include "h2_request.h"
#include "h2_response.h"
#include "h2_session.h"
return cin;
}
-void h2_filter_cin_timeout_set(h2_filter_cin *cin, int timeout_secs)
+void h2_filter_cin_timeout_set(h2_filter_cin *cin, apr_interval_time_t timeout)
{
- cin->timeout_secs = timeout_secs;
+ cin->timeout = timeout;
}
apr_status_t h2_filter_core_input(ap_filter_t* f,
{
h2_filter_cin *cin = f->ctx;
apr_status_t status = APR_SUCCESS;
- apr_time_t saved_timeout = UNSET;
+ apr_interval_time_t saved_timeout = UNSET;
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, f->c,
- "core_input(%ld): read, %s, mode=%d, readbytes=%ld, timeout=%d",
+ "core_input(%ld): read, %s, mode=%d, readbytes=%ld",
(long)f->c->id, (block == APR_BLOCK_READ)? "BLOCK_READ" : "NONBLOCK_READ",
- mode, (long)readbytes, cin->timeout_secs);
+ mode, (long)readbytes);
if (mode == AP_MODE_INIT || mode == AP_MODE_SPECULATIVE) {
return ap_get_brigade(f->next, brigade, mode, block, readbytes);
* in the scoreboard is preserved.
*/
if (block == APR_BLOCK_READ) {
- if (cin->timeout_secs > 0) {
- apr_time_t t = apr_time_from_sec(cin->timeout_secs);
+ if (cin->timeout > 0) {
apr_socket_timeout_get(cin->socket, &saved_timeout);
- apr_socket_timeout_set(cin->socket, t);
+ apr_socket_timeout_set(cin->socket, cin->timeout);
}
}
status = ap_get_brigade(f->next, cin->bb, AP_MODE_READBYTES,
if (saved_timeout != UNSET) {
apr_socket_timeout_set(cin->socket, saved_timeout);
}
- ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
- "core_input(%ld): got_brigade", (long)f->c->id);
}
switch (status) {
case APR_EOF:
case APR_EAGAIN:
case APR_TIMEUP:
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
+ "core_input(%ld): read", (long)f->c->id);
break;
default:
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, status, f->c, APLOGNO(03046)
bbout(" \"session_id\": %ld,\n", (long)session->id);
bbout(" \"streams_max\": %d,\n", (int)session->max_stream_count);
bbout(" \"this_stream\": %d,\n", stream->id);
- bbout(" \"streams_open\": %d,\n", (int)h2_stream_set_size(session->streams));
+ bbout(" \"streams_open\": %d,\n", (int)h2_ihash_count(session->streams));
bbout(" \"max_stream_started\": %d,\n", mplx->max_stream_started);
bbout(" \"requests_received\": %d,\n", session->requests_received);
bbout(" \"responses_submitted\": %d,\n", session->responses_submitted);
h2_filter_cin_cb *cb;
void *cb_ctx;
apr_socket_t *socket;
- int timeout_secs;
+ apr_interval_time_t timeout;
apr_time_t start_read;
} h2_filter_cin;
h2_filter_cin *h2_filter_cin_create(apr_pool_t *p, h2_filter_cin_cb *cb, void *ctx);
-void h2_filter_cin_timeout_set(h2_filter_cin *cin, int timeout_secs);
+void h2_filter_cin_timeout_set(h2_filter_cin *cin, apr_interval_time_t timeout);
apr_status_t h2_filter_core_input(ap_filter_t* filter,
apr_bucket_brigade* brigade,
return from_h1;
}
-apr_status_t h2_from_h1_destroy(h2_from_h1 *from_h1)
-{
- from_h1->bb = NULL;
- return APR_SUCCESS;
-}
-
static void set_state(h2_from_h1 *from_h1, h2_from_h1_state_t state)
{
if (from_h1->state != state) {
h2_from_h1 *h2_from_h1_create(int stream_id, apr_pool_t *pool);
-apr_status_t h2_from_h1_destroy(h2_from_h1 *response);
-
apr_status_t h2_from_h1_read_response(h2_from_h1 *from_h1,
ap_filter_t* f, apr_bucket_brigade* bb);
#include <http_request.h>
#include <http_log.h>
+#include "mod_ssl.h"
+
#include "mod_http2.h"
#include "h2_private.h"
/*******************************************************************************
* The optional mod_ssl functions we need.
*/
-APR_DECLARE_OPTIONAL_FN(int, ssl_engine_disable, (conn_rec*));
-APR_DECLARE_OPTIONAL_FN(int, ssl_is_https, (conn_rec*));
-
-static int (*opt_ssl_engine_disable)(conn_rec*);
-static int (*opt_ssl_is_https)(conn_rec*);
-/*******************************************************************************
- * SSL var lookup
- */
-APR_DECLARE_OPTIONAL_FN(char *, ssl_var_lookup,
- (apr_pool_t *, server_rec *,
- conn_rec *, request_rec *,
- char *));
-static char *(*opt_ssl_var_lookup)(apr_pool_t *, server_rec *,
- conn_rec *, request_rec *,
- char *);
+static APR_OPTIONAL_FN_TYPE(ssl_engine_disable) *opt_ssl_engine_disable;
+static APR_OPTIONAL_FN_TYPE(ssl_is_https) *opt_ssl_is_https;
+static APR_OPTIONAL_FN_TYPE(ssl_var_lookup) *opt_ssl_var_lookup;
/*******************************************************************************
* - process_conn take over connection in case of h2
*/
static int h2_h2_process_conn(conn_rec* c);
+static int h2_h2_pre_close_conn(conn_rec* c);
static int h2_h2_post_read_req(request_rec *r);
/*******************************************************************************
*/
ap_hook_process_connection(h2_h2_process_conn,
mod_ssl, mod_reqtimeout, APR_HOOK_LAST);
-
+
+ /* One last chance to properly say goodbye if we have not done so
+ * already. */
+ ap_hook_pre_close_connection(h2_h2_pre_close_conn, NULL, mod_ssl, APR_HOOK_LAST);
+
/* With "H2SerializeHeaders On", we install the filter in this hook
* that parses the response. This needs to happen before any other post
* read function terminates the request with an error. Otherwise we will
return DECLINED;
}
+static int h2_h2_pre_close_conn(conn_rec *c)
+{
+ h2_ctx *ctx;
+
+ /* slave connection? */
+ if (c->master) {
+ return DECLINED;
+ }
+
+ ctx = h2_ctx_get(c, 0);
+ if (ctx) {
+        /* If the session has already been closed cleanly, we will not
+         * find an h2_ctx here. Its presence indicates that the session
+         * is still ongoing. */
+ return h2_conn_pre_close(ctx, c);
+ }
+ return DECLINED;
+}
+
static int h2_h2_post_read_req(request_rec *r)
{
/* slave connection? */
*/
extern const char *h2_tls_protos[];
-/**
- * The magic PRIamble of RFC 7540 that is always sent when starting
- * a h2 communication.
- */
-extern const char *H2_MAGIC_TOKEN;
-
-#define H2_ERR_NO_ERROR (0x00)
-#define H2_ERR_PROTOCOL_ERROR (0x01)
-#define H2_ERR_INTERNAL_ERROR (0x02)
-#define H2_ERR_FLOW_CONTROL_ERROR (0x03)
-#define H2_ERR_SETTINGS_TIMEOUT (0x04)
-#define H2_ERR_STREAM_CLOSED (0x05)
-#define H2_ERR_FRAME_SIZE_ERROR (0x06)
-#define H2_ERR_REFUSED_STREAM (0x07)
-#define H2_ERR_CANCEL (0x08)
-#define H2_ERR_COMPRESSION_ERROR (0x09)
-#define H2_ERR_CONNECT_ERROR (0x0a)
-#define H2_ERR_ENHANCE_YOUR_CALM (0x0b)
-#define H2_ERR_INADEQUATE_SECURITY (0x0c)
-#define H2_ERR_HTTP_1_1_REQUIRED (0x0d)
-
-/* Maximum number of padding bytes in a frame, rfc7540 */
-#define H2_MAX_PADLEN 256
-/* Initial default window size, RFC 7540 ch. 6.5.2 */
-#define H2_INITIAL_WINDOW_SIZE ((64*1024)-1)
-
-#define H2_HTTP_2XX(a) ((a) >= 200 && (a) < 300)
-
-#define H2_STREAM_CLIENT_INITIATED(id) (id&0x01)
-
-typedef enum {
- H2_DEPENDANT_AFTER,
- H2_DEPENDANT_INTERLEAVED,
- H2_DEPENDANT_BEFORE,
-} h2_dependency;
-
-typedef struct h2_priority {
- h2_dependency dependency;
- int weight;
-} h2_priority;
-
/**
* Provide a user readable description of the HTTP/2 error code-
* @param h2_error http/2 error code, as in rfc 7540, ch. 7
--- /dev/null
+/* Copyright 2015 greenbytes GmbH (https://www.greenbytes.de)
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <assert.h>
+#include <stddef.h>
+#include <apr_pools.h>
+
+#include "h2_int_queue.h"
+
+
+static void tq_grow(h2_int_queue *q, int nlen);
+static void tq_swap(h2_int_queue *q, int i, int j);
+static int tq_bubble_up(h2_int_queue *q, int i, int top,
+ h2_iq_cmp *cmp, void *ctx);
+static int tq_bubble_down(h2_int_queue *q, int i, int bottom,
+ h2_iq_cmp *cmp, void *ctx);
+
+h2_int_queue *h2_iq_create(apr_pool_t *pool, int capacity)
+{
+ h2_int_queue *q = apr_pcalloc(pool, sizeof(h2_int_queue));
+ if (q) {
+ q->pool = pool;
+ tq_grow(q, capacity);
+ q->nelts = 0;
+ }
+ return q;
+}
+
+int h2_iq_empty(h2_int_queue *q)
+{
+ return q->nelts == 0;
+}
+
+int h2_iq_size(h2_int_queue *q)
+{
+ return q->nelts;
+}
+
+
+void h2_iq_add(h2_int_queue *q, int sid, h2_iq_cmp *cmp, void *ctx)
+{
+ int i;
+
+ if (q->nelts >= q->nalloc) {
+ tq_grow(q, q->nalloc * 2);
+ }
+
+ i = (q->head + q->nelts) % q->nalloc;
+ q->elts[i] = sid;
+ ++q->nelts;
+
+ if (cmp) {
+ /* bubble it to the front of the queue */
+ tq_bubble_up(q, i, q->head, cmp, ctx);
+ }
+}
+
+int h2_iq_remove(h2_int_queue *q, int sid)
+{
+ int i;
+ for (i = 0; i < q->nelts; ++i) {
+ if (sid == q->elts[(q->head + i) % q->nalloc]) {
+ break;
+ }
+ }
+
+ if (i < q->nelts) {
+ ++i;
+ for (; i < q->nelts; ++i) {
+ q->elts[(q->head+i-1)%q->nalloc] = q->elts[(q->head+i)%q->nalloc];
+ }
+ --q->nelts;
+ return 1;
+ }
+ return 0;
+}
+
+void h2_iq_clear(h2_int_queue *q)
+{
+ q->nelts = 0;
+}
+
+void h2_iq_sort(h2_int_queue *q, h2_iq_cmp *cmp, void *ctx)
+{
+ /* Assume that changes in ordering are minimal. This needs,
+     * best case, q->nelts - 1 comparisons to check that nothing
+ * changed.
+ */
+ if (q->nelts > 0) {
+ int i, ni, prev, last;
+
+ /* Start at the end of the queue and create a tail of sorted
+ * entries. Make that tail one element longer in each iteration.
+ */
+ last = i = (q->head + q->nelts - 1) % q->nalloc;
+ while (i != q->head) {
+ prev = (q->nalloc + i - 1) % q->nalloc;
+
+ ni = tq_bubble_up(q, i, prev, cmp, ctx);
+ if (ni == prev) {
+ /* i bubbled one up, bubble the new i down, which
+ * keeps all tasks below i sorted. */
+ tq_bubble_down(q, i, last, cmp, ctx);
+ }
+ i = prev;
+ };
+ }
+}
+
+
+int h2_iq_shift(h2_int_queue *q)
+{
+ int sid;
+
+ if (q->nelts <= 0) {
+ return 0;
+ }
+
+ sid = q->elts[q->head];
+ q->head = (q->head + 1) % q->nalloc;
+ q->nelts--;
+
+ return sid;
+}
+
+static void tq_grow(h2_int_queue *q, int nlen)
+{
+ if (nlen > q->nalloc) {
+ int *nq = apr_pcalloc(q->pool, sizeof(int) * nlen);
+        if (q->nelts > 0) {
+            /* copy the contiguous run starting at head first; cap its
+             * length at nalloc - head so it cannot go negative when the
+             * elements wrap around the end of the ring */
+            int l = q->nelts;
+            if (q->head + l > q->nalloc) {
+                l = q->nalloc - q->head;
+            }
+            memmove(nq, q->elts + q->head, sizeof(int) * l);
+            if (l < q->nelts) {
+                /* elts wrapped, append elts in [0, remain) to nq */
+                int remain = q->nelts - l;
+                memmove(nq + l, q->elts, sizeof(int) * remain);
+            }
+        }
+ q->elts = nq;
+ q->nalloc = nlen;
+ q->head = 0;
+ }
+}
+
+static void tq_swap(h2_int_queue *q, int i, int j)
+{
+ int x = q->elts[i];
+ q->elts[i] = q->elts[j];
+ q->elts[j] = x;
+}
+
+static int tq_bubble_up(h2_int_queue *q, int i, int top,
+ h2_iq_cmp *cmp, void *ctx)
+{
+ int prev;
+ while (((prev = (q->nalloc + i - 1) % q->nalloc), i != top)
+ && (*cmp)(q->elts[i], q->elts[prev], ctx) < 0) {
+ tq_swap(q, prev, i);
+ i = prev;
+ }
+ return i;
+}
+
+static int tq_bubble_down(h2_int_queue *q, int i, int bottom,
+ h2_iq_cmp *cmp, void *ctx)
+{
+ int next;
+ while (((next = (q->nalloc + i + 1) % q->nalloc), i != bottom)
+ && (*cmp)(q->elts[i], q->elts[next], ctx) > 0) {
+ tq_swap(q, next, i);
+ i = next;
+ }
+ return i;
+}
--- /dev/null
+/* Copyright 2015 greenbytes GmbH (https://www.greenbytes.de)
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef __mod_h2__h2_int_queue__
+#define __mod_h2__h2_int_queue__
+
+/**
+ * h2_int_queue keeps a list of stream ids (ints) sorted in ascending order.
+ */
+typedef struct h2_int_queue h2_int_queue;
+
+struct h2_int_queue {
+ int *elts;
+ int head;
+ int nelts;
+ int nalloc;
+ apr_pool_t *pool;
+};
+
+/**
+ * Comparator for two stream ids to determine their order.
+ *
+ * @param s1 stream id to compare
+ * @param s2 stream id to compare
+ * @param ctx provided user data
+ * @return value is the same as for strcmp() and has the effect:
+ * == 0: s1 and s2 are treated equal in ordering
+ * < 0: s1 should be sorted before s2
+ * > 0: s2 should be sorted before s1
+ */
+typedef int h2_iq_cmp(int s1, int s2, void *ctx);
+
+
+/**
+ * Allocate a new queue from the pool and initialize.
+ * @param pool the memory pool
+ * @param capacity the initial capacity of the queue
+ */
+h2_int_queue *h2_iq_create(apr_pool_t *pool, int capacity);
+
+/**
+ * Return != 0 iff there are no ids in the queue.
+ * @param q the queue to check
+ */
+int h2_iq_empty(h2_int_queue *q);
+
+/**
+ * Return the number of ids in the queue.
+ * @param q the queue to get size on
+ */
+int h2_iq_size(h2_int_queue *q);
+
+/**
+ * Add a stream id to the queue.
+ *
+ * @param q the queue to append the id to
+ * @param sid the stream id to add
+ * @param cmp the comparator for sorting
+ * @param ctx user data for comparator
+ */
+void h2_iq_add(h2_int_queue *q, int sid, h2_iq_cmp *cmp, void *ctx);
+
+/**
+ * Remove the stream id from the queue.
+ * @param q the queue
+ * @param sid the stream id to remove
+ * @return != 0 iff the id was found in the queue
+ */
+int h2_iq_remove(h2_int_queue *q, int sid);
+
+/**
+ * Remove all entries in the queue.
+ */
+void h2_iq_clear(h2_int_queue *q);
+
+/**
+ * Sort the stream id queue again. Call if the ordering
+ * of the ids has changed.
+ *
+ * @param q the queue to sort
+ * @param cmp the comparator for sorting
+ * @param ctx user data for the comparator
+ */
+void h2_iq_sort(h2_int_queue *q, h2_iq_cmp *cmp, void *ctx);
+
+/**
+ * Get the first stream id from the queue, or 0 if the queue is empty.
+ * The id is removed from the queue.
+ *
+ * @param q the queue to get the first id from
+ * @return the first stream id of the queue, 0 if empty
+ */
+int h2_iq_shift(h2_int_queue *q);
+
+#endif /* defined(__mod_h2__h2_int_queue__) */
#include "h2_task.h"
#include "h2_util.h"
-h2_io *h2_io_create(int id, apr_pool_t *pool)
+h2_io *h2_io_create(int id, apr_pool_t *pool, const h2_request *request)
{
h2_io *io = apr_pcalloc(pool, sizeof(*io));
if (io) {
io->id = id;
io->pool = pool;
io->bucket_alloc = apr_bucket_alloc_create(pool);
+ io->request = h2_request_clone(pool, request);
}
return io;
}
-void h2_io_destroy(h2_io *io)
+void h2_io_redo(h2_io *io)
{
- if (io->pool) {
- apr_pool_destroy(io->pool);
- /* gone */
+ io->worker_started = 0;
+ io->response = NULL;
+ io->rst_error = 0;
+ if (io->bbin) {
+ apr_brigade_cleanup(io->bbin);
+ }
+ if (io->bbout) {
+ apr_brigade_cleanup(io->bbout);
+ }
+ if (io->tmp) {
+ apr_brigade_cleanup(io->tmp);
+ }
+ io->started_at = io->done_at = 0;
+}
+
+int h2_io_is_repeatable(h2_io *io)
+{
+ if (io->submitted
+ || io->input_consumed > 0
+ || !io->request) {
+ /* cannot repeat that. */
+ return 0;
}
+ return (!strcmp("GET", io->request->method)
+ || !strcmp("HEAD", io->request->method)
+ || !strcmp("OPTIONS", io->request->method));
}
void h2_io_set_response(h2_io *io, h2_response *response)
return io->eos_in || (io->bbin && h2_util_has_eos(io->bbin, -1));
}
+int h2_io_in_has_data(h2_io *io)
+{
+ return io->bbin && h2_util_bb_has_data_or_eos(io->bbin);
+}
+
int h2_io_out_has_data(h2_io *io)
{
return io->bbout && h2_util_bb_has_data_or_eos(io->bbout);
}
-void h2_io_signal_init(h2_io *io, h2_io_op op, int timeout_secs, apr_thread_cond_t *cond)
+void h2_io_signal_init(h2_io *io, h2_io_op op, apr_interval_time_t timeout,
+ apr_thread_cond_t *cond)
{
io->timed_op = op;
io->timed_cond = cond;
- if (timeout_secs > 0) {
- io->timeout_at = apr_time_now() + apr_time_from_sec(timeout_secs);
+ if (timeout > 0) {
+ io->timeout_at = apr_time_now() + timeout;
}
else {
io->timeout_at = 0;
}
}
+ if (status == APR_SUCCESS && (!io->bbin || APR_BRIGADE_EMPTY(io->bbin))) {
+ if (io->eos_in) {
+ if (!io->eos_in_written) {
+ status = append_eos(io, bb, trailers);
+ io->eos_in_written = 1;
+ }
+ }
+ }
+
+ if (status == APR_SUCCESS && APR_BRIGADE_EMPTY(bb)) {
+ return APR_EAGAIN;
+ }
return status;
}
return APR_ECONNABORTED;
}
- if (io->eos_out) {
+ if (io->eos_out_read) {
*plen = 0;
*peos = 1;
return APR_SUCCESS;
else {
status = h2_util_bb_readx(io->bbout, cb, ctx, plen, peos);
if (status == APR_SUCCESS) {
- io->eos_out = *peos;
+ io->eos_out_read = *peos;
}
}
return APR_ECONNABORTED;
}
- if (io->eos_out) {
+ if (io->eos_out_read) {
*plen = 0;
*peos = 1;
return APR_SUCCESS;
return APR_EAGAIN;
}
- io->eos_out = *peos = h2_util_has_eos(io->bbout, *plen);
+ io->eos_out_read = *peos = h2_util_has_eos(io->bbout, *plen);
return h2_util_move(bb, io->bbout, *plen, NULL, "h2_io_read_to");
}
if (io->rst_error) {
return APR_ECONNABORTED;
}
- if (!io->eos_out) { /* EOS has not been read yet */
+ if (!io->eos_out_read) { /* EOS has not been read yet */
process_trailers(io, trailers);
if (!io->bbout) {
io->bbout = apr_brigade_create(io->pool, io->bucket_alloc);
}
- if (!h2_util_has_eos(io->bbout, -1)) {
- APR_BRIGADE_INSERT_TAIL(io->bbout,
- apr_bucket_eos_create(io->bucket_alloc));
+ if (!io->eos_out) {
+ io->eos_out = 1;
+ if (!h2_util_has_eos(io->bbout, -1)) {
+ APR_BRIGADE_INSERT_TAIL(io->bbout,
+ apr_bucket_eos_create(io->bucket_alloc));
+ }
}
}
return APR_SUCCESS;
struct apr_thread_cond_t;
struct h2_mplx;
struct h2_request;
+struct h2_task;
typedef apr_status_t h2_io_data_cb(void *ctx, const char *data, apr_off_t len);
H2_IO_READ,
H2_IO_WRITE,
H2_IO_ANY,
-}
-h2_io_op;
+} h2_io_op;
typedef struct h2_io h2_io;
unsigned int orphaned : 1; /* h2_stream is gone for this io */
unsigned int worker_started : 1; /* h2_worker started processing for this io */
unsigned int worker_done : 1; /* h2_worker finished for this io */
+ unsigned int submitted : 1; /* response has been submitted to client */
unsigned int request_body : 1; /* iff request has body */
unsigned int eos_in : 1; /* input eos has been seen */
unsigned int eos_in_written : 1; /* input eos has been forwarded */
- unsigned int eos_out : 1; /* output eos has been seen */
+ unsigned int eos_out : 1; /* output eos is present */
+ unsigned int eos_out_read : 1; /* output eos has been forwarded */
h2_io_op timed_op; /* which operation is waited on, if any */
struct apr_thread_cond_t *timed_cond; /* condition to wait on, maybe NULL */
apr_time_t timeout_at; /* when IO wait will time out */
+ apr_time_t started_at; /* when processing started */
+ apr_time_t done_at; /* when processing was done */
apr_size_t input_consumed; /* how many bytes have been read */
int files_handles_owned;
/**
* Creates a new h2_io for the given stream id.
*/
-h2_io *h2_io_create(int id, apr_pool_t *pool);
-
-/**
- * Frees any resources hold by the h2_io instance.
- */
-void h2_io_destroy(h2_io *io);
+h2_io *h2_io_create(int id, apr_pool_t *pool, const struct h2_request *request);
/**
* Set the response of this stream.
*/
void h2_io_rst(h2_io *io, int error);
+int h2_io_is_repeatable(h2_io *io);
+void h2_io_redo(h2_io *io);
+
/**
* The input data is completely queued. Blocked reads will return immediately
* and give either data or EOF.
* Output data is available.
*/
int h2_io_out_has_data(h2_io *io);
+/**
+ * Input data is available.
+ */
+int h2_io_in_has_data(h2_io *io);
void h2_io_signal(h2_io *io, h2_io_op op);
-void h2_io_signal_init(h2_io *io, h2_io_op op, int timeout_secs,
+void h2_io_signal_init(h2_io *io, h2_io_op op, apr_interval_time_t timeout,
struct apr_thread_cond_t *cond);
void h2_io_signal_exit(h2_io *io);
apr_status_t h2_io_signal_wait(struct h2_mplx *m, h2_io *io);
return sp;
}
-void h2_io_set_destroy(h2_io_set *sp)
-{
- int i;
- for (i = 0; i < sp->list->nelts; ++i) {
- h2_io *io = h2_io_IDX(sp->list, i);
- h2_io_destroy(io);
- }
- sp->list->nelts = 0;
-}
-
static int h2_stream_id_cmp(const void *s1, const void *s2)
{
h2_io **pio1 = (h2_io **)s1;
int last;
APR_ARRAY_PUSH(sp->list, h2_io*) = io;
/* Normally, streams get added in ascending order if id. We
- * keep the array sorted, so we just need to check of the newly
+ * keep the array sorted, so we just need to check if the newly
* appended stream has a lower id than the last one. if not,
* sorting is not necessary.
*/
--sp->list->nelts;
n = sp->list->nelts - idx;
if (n > 0) {
- /* Close the hole in the array by moving the upper
- * parts down one step.
- */
+ /* There are n h2_io* behind idx. Move the rest down */
h2_io **selts = (h2_io**)sp->list->elts;
memmove(selts + idx, selts + idx + 1, n * sizeof(h2_io*));
}
int i;
for (i = 0; i < sp->list->nelts; ++i) {
h2_io *e = h2_io_IDX(sp->list, i);
- if (e == io) {
+ if (e->id == io->id) {
remove_idx(sp, i);
return e;
}
return NULL;
}
-h2_io *h2_io_set_pop_highest_prio(h2_io_set *set)
+h2_io *h2_io_set_shift(h2_io_set *set)
{
/* For now, this just removes the first element in the set.
* the name is misleading...
h2_io_set *h2_io_set_create(apr_pool_t *pool);
-void h2_io_set_destroy(h2_io_set *set);
-
apr_status_t h2_io_set_add(h2_io_set *set, struct h2_io *io);
h2_io *h2_io_set_get(h2_io_set *set, int stream_id);
h2_io *h2_io_set_remove(h2_io_set *set, struct h2_io *io);
* @param ctx user data for the callback
* @return 1 iff iteration completed for all members
*/
-int h2_io_set_iter(h2_io_set *set,
- h2_io_set_iter_fn *iter, void *ctx);
+int h2_io_set_iter(h2_io_set *set, h2_io_set_iter_fn *iter, void *ctx);
-h2_io *h2_io_set_pop_highest_prio(h2_io_set *set);
+h2_io *h2_io_set_shift(h2_io_set *set);
#endif /* defined(__mod_h2__h2_io_set__) */
#include <stddef.h>
#include <stdlib.h>
-#include <apr_atomic.h>
#include <apr_thread_mutex.h>
#include <apr_thread_cond.h>
#include <apr_strings.h>
#include <http_core.h>
#include <http_log.h>
+#include "mod_http2.h"
+
#include "h2_private.h"
#include "h2_config.h"
#include "h2_conn.h"
+#include "h2_ctx.h"
#include "h2_h2.h"
+#include "h2_int_queue.h"
#include "h2_io.h"
#include "h2_io_set.h"
#include "h2_response.h"
#include "h2_mplx.h"
#include "h2_request.h"
#include "h2_stream.h"
-#include "h2_stream_set.h"
#include "h2_task.h"
#include "h2_task_input.h"
#include "h2_task_output.h"
-#include "h2_task_queue.h"
#include "h2_worker.h"
#include "h2_workers.h"
#include "h2_util.h"
} while(0)
+/* NULL or the mutex held by this thread, used for recursive calls
+ */
+static apr_threadkey_t *thread_lock;
+
+apr_status_t h2_mplx_child_init(apr_pool_t *pool, server_rec *s)
+{
+ return apr_threadkey_private_create(&thread_lock, NULL, pool);
+}
+
+static apr_status_t enter_mutex(h2_mplx *m, int *pacquired)
+{
+ apr_status_t status;
+ void *mutex = NULL;
+
+ /* Enter the mutex if this thread already holds the lock or
+     * if we can acquire it. Only in the latter case do we unlock
+     * on leaving the mutex.
+     * This allows recursive entering of the mutex from the same thread,
+ * which is what we need in certain situations involving callbacks
+ */
+ apr_threadkey_private_get(&mutex, thread_lock);
+ if (mutex == m->lock) {
+ *pacquired = 0;
+ return APR_SUCCESS;
+ }
+
+ status = apr_thread_mutex_lock(m->lock);
+ *pacquired = (status == APR_SUCCESS);
+ if (*pacquired) {
+ apr_threadkey_private_set(m->lock, thread_lock);
+ }
+ return status;
+}
+
+static void leave_mutex(h2_mplx *m, int acquired)
+{
+ if (acquired) {
+ apr_threadkey_private_set(NULL, thread_lock);
+ apr_thread_mutex_unlock(m->lock);
+ }
+}
+
static int is_aborted(h2_mplx *m, apr_status_t *pstatus)
{
AP_DEBUG_ASSERT(m);
"h2_mplx(%ld): destroy, ios=%d",
m->id, (int)h2_io_set_size(m->stream_ios));
m->aborted = 1;
- if (m->ready_ios) {
- h2_io_set_destroy(m->ready_ios);
- m->ready_ios = NULL;
- }
- if (m->stream_ios) {
- h2_io_set_destroy(m->stream_ios);
- m->stream_ios = NULL;
- }
check_tx_free(m);
* than protecting a shared h2_session one with an own lock.
*/
h2_mplx *h2_mplx_create(conn_rec *c, apr_pool_t *parent,
- const h2_config *conf,
+ const h2_config *conf,
+ apr_interval_time_t stream_timeout,
h2_workers *workers)
{
apr_status_t status = APR_SUCCESS;
if (!m->pool) {
return NULL;
}
+ apr_pool_tag(m->pool, "h2_mplx");
apr_allocator_owner_set(allocator, m->pool);
status = apr_thread_mutex_create(&m->lock, APR_THREAD_MUTEX_DEFAULT,
return NULL;
}
- m->q = h2_tq_create(m->pool, h2_config_geti(conf, H2_CONF_MAX_STREAMS));
+ status = apr_thread_cond_create(&m->task_done, m->pool);
+ if (status != APR_SUCCESS) {
+ h2_mplx_destroy(m);
+ return NULL;
+ }
+
+ m->q = h2_iq_create(m->pool, h2_config_geti(conf, H2_CONF_MAX_STREAMS));
m->stream_ios = h2_io_set_create(m->pool);
m->ready_ios = h2_io_set_create(m->pool);
m->stream_max_mem = h2_config_geti(conf, H2_CONF_STREAM_MAX_MEM);
+ m->stream_timeout = stream_timeout;
m->workers = workers;
+ m->workers_max = h2_config_geti(conf, H2_CONF_MAX_WORKERS);
+ m->workers_def_limit = 4;
+ m->workers_limit = m->workers_def_limit;
+ m->last_limit_change = m->last_idle_block = apr_time_now();
+ m->limit_change_interval = apr_time_from_msec(200);
m->tx_handles_reserved = 0;
m->tx_chunk_size = 4;
-
- m->stream_timeout_secs = h2_config_geti(conf, H2_CONF_STREAM_TIMEOUT_SECS);
}
return m;
}
int h2_mplx_get_max_stream_started(h2_mplx *m)
{
int stream_id = 0;
+ int acquired;
- apr_thread_mutex_lock(m->lock);
+ enter_mutex(m, &acquired);
stream_id = m->max_stream_started;
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
return stream_id;
}
* Therefore: ref counting for h2_workers in not needed, ref counting
* for h2_worker using this is critical.
*/
+ m->need_registration = 0;
h2_workers_register(m->workers, m);
}
h2_io_set_remove(m->stream_ios, io);
h2_io_set_remove(m->ready_ios, io);
- h2_io_destroy(io);
+ if (m->redo_ios) {
+ h2_io_set_remove(m->redo_ios, io);
+ }
if (pool) {
apr_pool_clear(pool);
h2_io_set_remove(m->ready_ios, io);
if (!io->worker_started || io->worker_done) {
/* already finished or not even started yet */
- h2_tq_remove(m->q, io->id);
+ h2_iq_remove(m->q, io->id);
io_destroy(m, io, 1);
return 0;
}
return io_stream_done((h2_mplx*)ctx, io, 0);
}
+static int stream_print(void *ctx, h2_io *io)
+{
+ h2_mplx *m = ctx;
+ if (io && io->request) {
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c, /* NO APLOGNO */
+ "->03198: h2_stream(%ld-%d): %s %s %s -> %s %d"
+ "[orph=%d/started=%d/done=%d/eos_in=%d/eos_out=%d]",
+ m->id, io->id,
+ io->request->method, io->request->authority, io->request->path,
+ io->response? "http" : (io->rst_error? "reset" : "?"),
+ io->response? io->response->http_status : io->rst_error,
+ io->orphaned, io->worker_started, io->worker_done,
+ io->eos_in, io->eos_out);
+ }
+ else if (io) {
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c, /* NO APLOGNO */
+ "->03198: h2_stream(%ld-%d): NULL -> %s %d"
+ "[orph=%d/started=%d/done=%d/eos_in=%d/eos_out=%d]",
+ m->id, io->id,
+ io->response? "http" : (io->rst_error? "reset" : "?"),
+ io->response? io->response->http_status : io->rst_error,
+ io->orphaned, io->worker_started, io->worker_done,
+ io->eos_in, io->eos_out);
+ }
+ else {
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c, /* NO APLOGNO */
+ "->03198: h2_stream(%ld-NULL): NULL", m->id);
+ }
+ return 1;
+}
+
apr_status_t h2_mplx_release_and_join(h2_mplx *m, apr_thread_cond_t *wait)
{
apr_status_t status;
-
+ int acquired;
+
h2_workers_unregister(m->workers, m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
int i, wait_secs = 5;
/* disable WINDOW_UPDATE callbacks */
h2_mplx_set_consumed_cb(m, NULL, NULL);
+ h2_iq_clear(m->q);
+ apr_thread_cond_broadcast(m->task_done);
while (!h2_io_set_iter(m->stream_ios, stream_done_iter, m)) {
/* iterate until all ios have been orphaned or destroyed */
}
- /* Any remaining ios have handed out requests to workers that are
- * not done yet. Any operation they do on their assigned stream ios will
- * be errored ECONNRESET/ABORTED, so that should find out pretty soon.
+ /* If we still have busy workers, we cannot release our memory
+ * pool yet, as slave connections have child pools of their respective
+ * h2_io's.
+     * Any remaining ios are processed in these workers. Any operation
+     * they do on their input/outputs will fail with ECONNRESET/ABORTED,
+     * so processing should abort and workers *should* return.
*/
- for (i = 0; h2_io_set_size(m->stream_ios) > 0; ++i) {
+ for (i = 0; m->workers_busy > 0; ++i) {
m->join_wait = wait;
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
"h2_mplx(%ld): release_join, waiting on %d worker to report back",
*/
ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c, APLOGNO(03198)
"h2_mplx(%ld): release, waiting for %d seconds now for "
- "all h2_workers to return, have still %d requests outstanding",
- m->id, i*wait_secs, (int)h2_io_set_size(m->stream_ios));
+                          "%d h2_workers to return, still have %d requests outstanding",
+ m->id, i*wait_secs, m->workers_busy,
+ (int)h2_io_set_size(m->stream_ios));
+ if (i == 1) {
+ h2_io_set_iter(m->stream_ios, stream_print, m);
+ }
}
+ m->aborted = 1;
+ apr_thread_cond_broadcast(m->task_done);
}
}
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, m->c, APLOGNO(03056)
"h2_mplx(%ld): release_join -> destroy", m->id);
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
h2_mplx_destroy(m);
/* all gone */
}
void h2_mplx_abort(h2_mplx *m)
{
apr_status_t status;
+ int acquired;
AP_DEBUG_ASSERT(m);
if (!m->aborted) {
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
m->aborted = 1;
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
}
}
apr_status_t h2_mplx_stream_done(h2_mplx *m, int stream_id, int rst_error)
{
- apr_status_t status;
+ apr_status_t status = APR_SUCCESS;
+ int acquired;
+    /* This may be called from inside callbacks that already hold the lock.
+ * E.g. when we are streaming out DATA and the EOF triggers the stream
+ * release.
+ */
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
/* there should be an h2_io, once the stream has been scheduled
* for processing, e.g. when we received all HEADERs. But when
* a stream is cancelled very early, it will not exist. */
if (io) {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+ "h2_mplx(%ld-%d): marking stream as done.",
+ m->id, stream_id);
io_stream_done(m, io, rst_error);
}
-
- apr_thread_mutex_unlock(m->lock);
- }
- return status;
-}
-static const h2_request *pop_request(h2_mplx *m)
-{
- const h2_request *req = NULL;
- int sid;
- while (!m->aborted && !req && (sid = h2_tq_shift(m->q)) > 0) {
- h2_io *io = h2_io_set_get(m->stream_ios, sid);
- if (io) {
- req = io->request;
- io->worker_started = 1;
- if (sid > m->max_stream_started) {
- m->max_stream_started = sid;
- }
- }
- }
- return req;
-}
-
-void h2_mplx_request_done(h2_mplx **pm, int stream_id, const h2_request **preq)
-{
- h2_mplx *m = *pm;
-
- apr_status_t status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
- h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
- ap_log_cerror(APLOG_MARK, APLOG_TRACE2, 0, m->c,
- "h2_mplx(%ld): request(%d) done", m->id, stream_id);
- if (io) {
- io->worker_done = 1;
- if (io->orphaned) {
- io_destroy(m, io, 0);
- if (m->join_wait) {
- apr_thread_cond_signal(m->join_wait);
- }
- }
- else {
- /* hang around until the stream deregisteres */
- }
- }
-
- if (preq) {
- /* someone wants another request, if we have */
- *preq = pop_request(m);
- }
- if (!preq || !*preq) {
- /* No request to hand back to the worker, NULLify reference
- * and decrement count */
- *pm = NULL;
- }
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
+ return status;
}
apr_status_t h2_mplx_in_read(h2_mplx *m, apr_read_type_e block,
struct apr_thread_cond_t *iowait)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
H2_MPLX_IO_IN(APLOG_TRACE2, m, io, "h2_mplx_in_read_pre");
- h2_io_signal_init(io, H2_IO_READ, m->stream_timeout_secs, iowait);
+ h2_io_signal_init(io, H2_IO_READ, m->stream_timeout, iowait);
status = h2_io_in_read(io, bb, -1, trailers);
while (APR_STATUS_IS_EAGAIN(status)
&& !is_aborted(m, &status)
else {
status = APR_EOF;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_bucket_brigade *bb)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
H2_MPLX_IO_IN(APLOG_TRACE2, m, io, "h2_mplx_in_write_pre");
else {
status = APR_ECONNABORTED;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_status_t h2_mplx_in_close(h2_mplx *m, int stream_id)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
status = h2_io_in_close(io);
else {
status = APR_ECONNABORTED;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_status_t h2_mplx_in_update_windows(h2_mplx *m)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
if (m->aborted) {
return APR_ECONNABORTED;
}
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
update_ctx ctx;
ctx.m = m;
if (ctx.streams_updated) {
status = APR_SUCCESS;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_table_t **ptrailers)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
H2_MPLX_IO_OUT(APLOG_TRACE2, m, io, "h2_mplx_out_readx_pre");
}
*ptrailers = (*peos && io->response)? io->response->trailers : NULL;
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_table_t **ptrailers)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
H2_MPLX_IO_OUT(APLOG_TRACE2, m, io, "h2_mplx_out_read_to_pre");
status = APR_ECONNABORTED;
}
*ptrailers = (*peos && io->response)? io->response->trailers : NULL;
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
-h2_stream *h2_mplx_next_submit(h2_mplx *m, h2_stream_set *streams)
+h2_stream *h2_mplx_next_submit(h2_mplx *m, h2_ihash_t *streams)
{
apr_status_t status;
h2_stream *stream = NULL;
+ int acquired;
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
- h2_io *io = h2_io_set_pop_highest_prio(m->ready_ios);
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
+ h2_io *io = h2_io_set_shift(m->ready_ios);
if (io && !m->aborted) {
- stream = h2_stream_set_get(streams, io->id);
+ stream = h2_ihash_get(streams, io->id);
if (stream) {
+ io->submitted = 1;
if (io->rst_error) {
h2_stream_rst(stream, io->rst_error);
}
* reset by the client. Should no longer happen since such
* streams should clear io's from the ready queue.
*/
- ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c, APLOGNO(03347)
"h2_mplx(%ld): stream for response %d closed, "
"resetting io to close request processing",
m->id, io->id);
h2_io_signal(io, H2_IO_WRITE);
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return stream;
}
&m->tx_handles_reserved);
/* Wait for data to drain until there is room again or
* stream timeout expires */
- h2_io_signal_init(io, H2_IO_WRITE, m->stream_timeout_secs, iowait);
+ h2_io_signal_init(io, H2_IO_WRITE, m->stream_timeout, iowait);
while (status == APR_SUCCESS
&& !APR_BRIGADE_EMPTY(bb)
&& iowait
struct apr_thread_cond_t *iowait)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
if (m->aborted) {
status = APR_ECONNABORTED;
}
h2_util_bb_log(m->c, stream_id, APLOG_TRACE1, "h2_mplx_out_open", bb);
}
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
struct apr_thread_cond_t *iowait)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
status = out_write(m, io, f, bb, trailers, iowait);
- ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, m->c,
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE2, status, m->c,
"h2_mplx(%ld-%d): write with trailers=%s",
m->id, io->id, trailers? "yes" : "no");
H2_MPLX_IO_OUT(APLOG_TRACE2, m, io, "h2_mplx_out_write");
else {
status = APR_ECONNABORTED;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_status_t h2_mplx_out_close(h2_mplx *m, int stream_id, apr_table_t *trailers)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
if (!io->response && !io->rst_error) {
else {
status = APR_ECONNABORTED;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_status_t h2_mplx_out_rst(h2_mplx *m, int stream_id, int error)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->rst_error && !io->orphaned) {
h2_io_rst(io, error);
else {
status = APR_ECONNABORTED;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
int h2_mplx_in_has_eos_for(h2_mplx *m, int stream_id)
{
int has_eos = 0;
+ int acquired;
+
apr_status_t status;
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
has_eos = h2_io_in_has_eos_for(io);
else {
has_eos = 1;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return has_eos;
}
+int h2_mplx_in_has_data_for(h2_mplx *m, int stream_id)
+{
+ apr_status_t status;
+ int has_data = 0;
+ int acquired;
+
+ AP_DEBUG_ASSERT(m);
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
+ h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
+ if (io && !io->orphaned) {
+ has_data = h2_io_in_has_data(io);
+ }
+ else {
+ has_data = 0;
+ }
+ leave_mutex(m, acquired);
+ }
+ return has_data;
+}
+
int h2_mplx_out_has_data_for(h2_mplx *m, int stream_id)
{
apr_status_t status;
int has_data = 0;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
h2_io *io = h2_io_set_get(m->stream_ios, stream_id);
if (io && !io->orphaned) {
has_data = h2_io_out_has_data(io);
else {
has_data = 0;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return has_data;
}
apr_thread_cond_t *iowait)
{
apr_status_t status;
+ int acquired;
+
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
if (m->aborted) {
status = APR_ECONNABORTED;
}
}
m->added_output = NULL;
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
apr_status_t h2_mplx_reprioritize(h2_mplx *m, h2_stream_pri_cmp *cmp, void *ctx)
{
apr_status_t status;
+ int acquired;
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
if (m->aborted) {
status = APR_ECONNABORTED;
}
else {
- h2_tq_sort(m->q, cmp, ctx);
+ h2_iq_sort(m->q, cmp, ctx);
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
"h2_mplx(%ld): reprioritize tasks", m->id);
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
return status;
}
-static h2_io *open_io(h2_mplx *m, int stream_id)
+static h2_io *open_io(h2_mplx *m, int stream_id, const h2_request *request)
{
apr_pool_t *io_pool = m->spare_pool;
h2_io *io;
if (!io_pool) {
apr_pool_create(&io_pool, m->pool);
+ apr_pool_tag(io_pool, "h2_io");
}
else {
m->spare_pool = NULL;
}
- io = h2_io_create(stream_id, io_pool);
+ io = h2_io_create(stream_id, io_pool, request);
h2_io_set_add(m->stream_ios, io);
return io;
h2_stream_pri_cmp *cmp, void *ctx)
{
apr_status_t status;
- int was_empty = 0;
+ int do_registration = 0;
+ int acquired;
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
if (m->aborted) {
status = APR_ECONNABORTED;
}
else {
- h2_io *io = open_io(m, stream_id);
- io->request = req;
+ h2_io *io = open_io(m, stream_id, req);
if (!io->request->body) {
status = h2_io_in_close(io);
}
- was_empty = h2_tq_empty(m->q);
- h2_tq_add(m->q, io->id, cmp, ctx);
+ m->need_registration = m->need_registration || h2_iq_empty(m->q);
+ do_registration = (m->need_registration && m->workers_busy < m->workers_max);
+ h2_iq_add(m->q, io->id, cmp, ctx);
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, m->c,
"h2_mplx(%ld-%d): process", m->c->id, stream_id);
H2_MPLX_IO_IN(APLOG_TRACE2, m, io, "h2_mplx_process");
}
- apr_thread_mutex_unlock(m->lock);
+ leave_mutex(m, acquired);
}
- if (status == APR_SUCCESS && was_empty) {
+ if (status == APR_SUCCESS && do_registration) {
workers_register(m);
}
return status;
}
-const h2_request *h2_mplx_pop_request(h2_mplx *m, int *has_more)
+static h2_task *pop_task(h2_mplx *m)
{
- const h2_request *req = NULL;
+ h2_task *task = NULL;
+ int sid;
+ while (!m->aborted && !task
+ && (m->workers_busy < m->workers_limit)
+ && (sid = h2_iq_shift(m->q)) > 0) {
+ h2_io *io = h2_io_set_get(m->stream_ios, sid);
+ if (io && io->orphaned) {
+ io_destroy(m, io, 0);
+ if (m->join_wait) {
+ apr_thread_cond_signal(m->join_wait);
+ }
+ }
+ else if (io) {
+ conn_rec *slave = h2_slave_create(m->c, m->pool, m->spare_allocator);
+ m->spare_allocator = NULL;
+ task = h2_task_create(m->id, io->request, slave, m);
+ io->worker_started = 1;
+ io->started_at = apr_time_now();
+ if (sid > m->max_stream_started) {
+ m->max_stream_started = sid;
+ }
+ ++m->workers_busy;
+ }
+ }
+ return task;
+}
+
+h2_task *h2_mplx_pop_task(h2_mplx *m, int *has_more)
+{
+ h2_task *task = NULL;
apr_status_t status;
+ int acquired;
AP_DEBUG_ASSERT(m);
- status = apr_thread_mutex_lock(m->lock);
- if (APR_SUCCESS == status) {
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
if (m->aborted) {
- req = NULL;
*has_more = 0;
}
else {
- req = pop_request(m);
- *has_more = !h2_tq_empty(m->q);
+ task = pop_task(m);
+ *has_more = !h2_iq_empty(m->q);
}
- apr_thread_mutex_unlock(m->lock);
+
+ if (has_more && !task) {
+ m->need_registration = 1;
+ }
+ leave_mutex(m, acquired);
}
- return req;
+ return task;
}
+static void task_done(h2_mplx *m, h2_task *task)
+{
+ if (task) {
+ if (task->frozen) {
+ /* this task was handed over to an engine for processing */
+ h2_task_thaw(task);
+ /* TODO: can we signal an engine that it can now start on this? */
+ }
+ else {
+ h2_io *io = h2_io_set_get(m->stream_ios, task->stream_id);
+
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE2, 0, m->c,
+ "h2_mplx(%ld): task(%s) done", m->id, task->id);
+ /* clean our references and report request as done. Signal
+ * that we want another unless we have been aborted */
+ /* TODO: this will keep a worker attached to this h2_mplx as
+             * long as it has requests to handle. Might not be fair to
+ * other mplx's. Perhaps leave after n requests? */
+ h2_mplx_out_close(m, task->stream_id, NULL);
+ if (m->spare_allocator) {
+ apr_allocator_destroy(m->spare_allocator);
+ m->spare_allocator = NULL;
+ }
+ h2_slave_destroy(task->c, &m->spare_allocator);
+ task = NULL;
+ if (io) {
+ apr_time_t now = apr_time_now();
+ if (!io->orphaned && m->redo_ios
+ && h2_io_set_get(m->redo_ios, io->id)) {
+ /* reset and schedule again */
+ h2_io_redo(io);
+ h2_io_set_remove(m->redo_ios, io);
+ h2_iq_add(m->q, io->id, NULL, NULL);
+ }
+ else {
+ io->worker_done = 1;
+ io->done_at = now;
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+ "h2_mplx(%ld): request(%d) done, %f ms"
+ " elapsed", m->id, io->id,
+ (io->done_at - io->started_at) / 1000.0);
+ if (io->started_at > m->last_idle_block) {
+ /* this task finished without causing an 'idle block', e.g.
+ * a block by flow control.
+ */
+ if (now - m->last_limit_change >= m->limit_change_interval
+ && m->workers_limit < m->workers_max) {
+ /* Well behaving stream, allow it more workers */
+ m->workers_limit = H2MIN(m->workers_limit * 2,
+ m->workers_max);
+ m->last_limit_change = now;
+ m->need_registration = 1;
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+ "h2_mplx(%ld): increase worker limit to %d",
+ m->id, m->workers_limit);
+ }
+ }
+ }
+
+ if (io->orphaned) {
+ io_destroy(m, io, 0);
+ if (m->join_wait) {
+ apr_thread_cond_signal(m->join_wait);
+ }
+ }
+ else {
+                /* hang around until the stream deregisters */
+ }
+ }
+ apr_thread_cond_broadcast(m->task_done);
+ }
+ }
+}
+
+void h2_mplx_task_done(h2_mplx *m, h2_task *task, h2_task **ptask)
+{
+ int acquired;
+
+ if (enter_mutex(m, &acquired) == APR_SUCCESS) {
+ task_done(m, task);
+ --m->workers_busy;
+ if (ptask) {
+ /* caller wants another task */
+ *ptask = pop_task(m);
+ }
+ leave_mutex(m, acquired);
+ }
+}
+
+/*******************************************************************************
+ * h2_mplx DoS protection
+ ******************************************************************************/
+
+typedef struct {
+ h2_mplx *m;
+ h2_io *io;
+ apr_time_t now;
+} io_iter_ctx;
+
+static int latest_repeatable_busy_unsubmitted_iter(void *data, h2_io *io)
+{
+ io_iter_ctx *ctx = data;
+ if (io->worker_started && !io->worker_done
+ && h2_io_is_repeatable(io)
+ && !h2_io_set_get(ctx->m->redo_ios, io->id)) {
+ /* this io occupies a worker, the response has not been submitted yet,
+         * it has not been cancelled, and it is a repeatable request
+ * -> it can be re-scheduled later */
+ if (!ctx->io || ctx->io->started_at < io->started_at) {
+ /* we did not have one or this one was started later */
+ ctx->io = io;
+ }
+ }
+ return 1;
+}
+
+static h2_io *get_latest_repeatable_busy_unsubmitted_io(h2_mplx *m)
+{
+ io_iter_ctx ctx;
+ ctx.m = m;
+ ctx.io = NULL;
+ h2_io_set_iter(m->stream_ios, latest_repeatable_busy_unsubmitted_iter, &ctx);
+ return ctx.io;
+}
+
+static int timed_out_busy_iter(void *data, h2_io *io)
+{
+ io_iter_ctx *ctx = data;
+ if (io->worker_started && !io->worker_done
+ && (ctx->now - io->started_at) > ctx->m->stream_timeout) {
+ /* timed out stream occupying a worker, found */
+ ctx->io = io;
+ return 0;
+ }
+ return 1;
+}
+static h2_io *get_timed_out_busy_stream(h2_mplx *m)
+{
+ io_iter_ctx ctx;
+ ctx.m = m;
+ ctx.io = NULL;
+ ctx.now = apr_time_now();
+ h2_io_set_iter(m->stream_ios, timed_out_busy_iter, &ctx);
+ return ctx.io;
+}
+
+static apr_status_t unschedule_slow_ios(h2_mplx *m)
+{
+ h2_io *io;
+ int n;
+
+ if (!m->redo_ios) {
+ m->redo_ios = h2_io_set_create(m->pool);
+ }
+ /* Try to get rid of streams that occupy workers. Look for safe requests
+ * that are repeatable. If none found, fail the connection.
+ */
+ n = (m->workers_busy - m->workers_limit - h2_io_set_size(m->redo_ios));
+ while (n > 0 && (io = get_latest_repeatable_busy_unsubmitted_io(m))) {
+ h2_io_set_add(m->redo_ios, io);
+ h2_io_rst(io, H2_ERR_CANCEL);
+ --n;
+ }
+
+ if ((m->workers_busy - h2_io_set_size(m->redo_ios)) > m->workers_limit) {
+ io = get_timed_out_busy_stream(m);
+ if (io) {
+ /* Too many busy workers, unable to cancel enough streams
+ * and with a busy, timed out stream, we tell the client
+ * to go away... */
+ return APR_TIMEUP;
+ }
+ }
+ return APR_SUCCESS;
+}
+
+apr_status_t h2_mplx_idle(h2_mplx *m)
+{
+ apr_status_t status = APR_SUCCESS;
+ apr_time_t now;
+ int acquired;
+
+ if (enter_mutex(m, &acquired) == APR_SUCCESS) {
+ apr_size_t scount = h2_io_set_size(m->stream_ios);
+ if (scount > 0 && m->workers_busy) {
+            /* We have streams in connection state 'IDLE', meaning
+             * all streams are ready to send data out, but lack
+             * WINDOW_UPDATEs.
+             *
+             * This is ok, unless we have streams that still occupy
+             * h2 workers. As worker threads are a scarce resource,
+             * we need to take measures so that we do not get DoSed.
+             *
+             * This is what we call an 'idle block'. Limit the number
+             * of busy workers we allow for this connection until it
+             * behaves well.
+ */
+ now = apr_time_now();
+ m->last_idle_block = now;
+ if (m->workers_limit > 2
+ && now - m->last_limit_change >= m->limit_change_interval) {
+ if (m->workers_limit > 16) {
+ m->workers_limit = 16;
+ }
+ else if (m->workers_limit > 8) {
+ m->workers_limit = 8;
+ }
+ else if (m->workers_limit > 4) {
+ m->workers_limit = 4;
+ }
+ else if (m->workers_limit > 2) {
+ m->workers_limit = 2;
+ }
+ m->last_limit_change = now;
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+ "h2_mplx(%ld): decrease worker limit to %d",
+ m->id, m->workers_limit);
+ }
+
+ if (m->workers_busy > m->workers_limit) {
+ status = unschedule_slow_ios(m);
+ }
+ }
+ leave_mutex(m, acquired);
+ }
+ return status;
+}
+
+/*******************************************************************************
+ * HTTP/2 request engines
+ ******************************************************************************/
+
+typedef struct h2_req_entry h2_req_entry;
+struct h2_req_entry {
+ APR_RING_ENTRY(h2_req_entry) link;
+ request_rec *r;
+};
+
+#define H2_REQ_ENTRY_NEXT(e) APR_RING_NEXT((e), link)
+#define H2_REQ_ENTRY_PREV(e) APR_RING_PREV((e), link)
+#define H2_REQ_ENTRY_REMOVE(e) APR_RING_REMOVE((e), link)
+
+typedef struct h2_req_engine_i h2_req_engine_i;
+struct h2_req_engine_i {
+ h2_req_engine pub;
+ conn_rec *c; /* connection this engine is assigned to */
+ h2_mplx *m;
+ unsigned int shutdown : 1; /* engine is being shut down */
+ apr_thread_cond_t *io; /* condition var for waiting on data */
+ APR_RING_HEAD(h2_req_entries, h2_req_entry) entries;
+ apr_size_t no_assigned; /* # of assigned requests */
+ apr_size_t no_live; /* # of live */
+ apr_size_t no_finished; /* # of finished */
+};
+
+#define H2_REQ_ENTRIES_SENTINEL(b) APR_RING_SENTINEL((b), h2_req_entry, link)
+#define H2_REQ_ENTRIES_EMPTY(b) APR_RING_EMPTY((b), h2_req_entry, link)
+#define H2_REQ_ENTRIES_FIRST(b) APR_RING_FIRST(b)
+#define H2_REQ_ENTRIES_LAST(b) APR_RING_LAST(b)
+
+#define H2_REQ_ENTRIES_INSERT_HEAD(b, e) do { \
+h2_req_entry *ap__b = (e); \
+APR_RING_INSERT_HEAD((b), ap__b, h2_req_entry, link); \
+} while (0)
+
+#define H2_REQ_ENTRIES_INSERT_TAIL(b, e) do { \
+h2_req_entry *ap__b = (e); \
+APR_RING_INSERT_TAIL((b), ap__b, h2_req_entry, link); \
+} while (0)
+
+static apr_status_t h2_mplx_engine_schedule(h2_mplx *m,
+ h2_req_engine_i *engine,
+ request_rec *r)
+{
+ h2_req_entry *entry = apr_pcalloc(r->pool, sizeof(*entry));
+
+ APR_RING_ELEM_INIT(entry, link);
+ entry->r = r;
+ H2_REQ_ENTRIES_INSERT_TAIL(&engine->entries, entry);
+ return APR_SUCCESS;
+}
+
+
+apr_status_t h2_mplx_engine_push(const char *engine_type,
+ request_rec *r, h2_mplx_engine_init *einit)
+{
+ apr_status_t status;
+ h2_mplx *m;
+ h2_task *task;
+ int acquired;
+
+ task = h2_ctx_rget_task(r);
+ if (!task) {
+ return APR_ECONNABORTED;
+ }
+ m = task->mplx;
+ AP_DEBUG_ASSERT(m);
+
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
+ h2_io *io = h2_io_set_get(m->stream_ios, task->stream_id);
+ if (!io || io->orphaned) {
+ status = APR_ECONNABORTED;
+ }
+ else {
+ h2_req_engine_i *engine = (h2_req_engine_i*)m->engine;
+
+ apr_table_set(r->connection->notes, H2_TASK_ID_NOTE, task->id);
+ status = APR_EOF;
+
+ if (task->ser_headers) {
+ /* Max compatibility, deny processing of this */
+ }
+ else if (engine && !strcmp(engine->pub.type, engine_type)) {
+ if (engine->shutdown
+ || engine->no_assigned >= H2MIN(engine->pub.capacity, 100)) {
+ ap_log_rerror(APLOG_MARK, APLOG_TRACE1, status, r,
+ "h2_mplx(%ld): engine shutdown or over %s",
+ m->c->id, engine->pub.id);
+ engine = NULL;
+ }
+ else if (h2_mplx_engine_schedule(m, engine, r) == APR_SUCCESS) {
+ /* this task will be processed in another thread,
+ * freeze any I/O for the time being. */
+ h2_task_freeze(task, r);
+ engine->no_assigned++;
+ status = APR_SUCCESS;
+ ap_log_rerror(APLOG_MARK, APLOG_DEBUG, status, r,
+ "h2_mplx(%ld): push request %s",
+ m->c->id, r->the_request);
+ }
+ else {
+ ap_log_rerror(APLOG_MARK, APLOG_TRACE1, status, r,
+ "h2_mplx(%ld): engine error adding req %s",
+ m->c->id, engine->pub.id);
+ engine = NULL;
+ }
+ }
+
+ if (!engine && einit) {
+ engine = apr_pcalloc(task->c->pool, sizeof(*engine));
+ engine->pub.id = apr_psprintf(task->c->pool, "eng-%ld-%d",
+ m->id, m->next_eng_id++);
+ engine->pub.pool = task->c->pool;
+ engine->pub.type = apr_pstrdup(task->c->pool, engine_type);
+ engine->pub.window_bits = 30;
+ engine->pub.req_window_bits = h2_log2(m->stream_max_mem);
+ engine->c = r->connection;
+ APR_RING_INIT(&engine->entries, h2_req_entry, link);
+ engine->m = m;
+ engine->io = task->io;
+ engine->no_assigned = 1;
+ engine->no_live = 1;
+
+ status = einit(&engine->pub, r);
+ ap_log_rerror(APLOG_MARK, APLOG_TRACE1, status, r,
+ "h2_mplx(%ld): init engine %s (%s)",
+ m->c->id, engine->pub.id, engine->pub.type);
+ if (status == APR_SUCCESS) {
+ m->engine = &engine->pub;
+ }
+ }
+ }
+
+ leave_mutex(m, acquired);
+ }
+ return status;
+}
+
+static h2_req_entry *pop_non_frozen(h2_req_engine_i *engine)
+{
+ h2_req_entry *entry;
+ h2_task *task;
+
+ for (entry = H2_REQ_ENTRIES_FIRST(&engine->entries);
+ entry != H2_REQ_ENTRIES_SENTINEL(&engine->entries);
+ entry = H2_REQ_ENTRY_NEXT(entry)) {
+ task = h2_ctx_rget_task(entry->r);
+ AP_DEBUG_ASSERT(task);
+ if (!task->frozen) {
+ H2_REQ_ENTRY_REMOVE(entry);
+ return entry;
+ }
+ }
+ return NULL;
+}
+
+static apr_status_t engine_pull(h2_mplx *m, h2_req_engine_i *engine,
+ apr_read_type_e block, request_rec **pr)
+{
+ h2_req_entry *entry;
+
+ AP_DEBUG_ASSERT(m);
+ AP_DEBUG_ASSERT(engine);
+ while (1) {
+ if (m->aborted) {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE2, 0, m->c,
+                          "h2_mplx(%ld): mplx abort while pulling requests for engine %s",
+ m->id, engine->pub.id);
+ *pr = NULL;
+ return APR_EOF;
+ }
+
+ if (!H2_REQ_ENTRIES_EMPTY(&engine->entries)
+ && (entry = pop_non_frozen(engine))) {
+ ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, entry->r,
+ "h2_mplx(%ld): request %s pulled by engine %s",
+ m->c->id, entry->r->the_request, engine->pub.id);
+ engine->no_live++;
+ entry->r->connection->current_thread = engine->c->current_thread;
+ *pr = entry->r;
+ return APR_SUCCESS;
+ }
+ else if (APR_NONBLOCK_READ == block) {
+ *pr = NULL;
+ return APR_EAGAIN;
+ }
+ else if (H2_REQ_ENTRIES_EMPTY(&engine->entries)) {
+ engine->shutdown = 1;
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+                          "h2_mplx(%ld): empty queue, shutdown engine %s",
+ m->id, engine->pub.id);
+ *pr = NULL;
+ return APR_EOF;
+ }
+ apr_thread_cond_timedwait(m->task_done, m->lock,
+ apr_time_from_msec(100));
+ }
+}
+
+apr_status_t h2_mplx_engine_pull(h2_req_engine *pub_engine,
+ apr_read_type_e block, request_rec **pr)
+{
+ h2_req_engine_i *engine = (h2_req_engine_i*)pub_engine;
+ h2_mplx *m = engine->m;
+ apr_status_t status;
+ int acquired;
+
+ *pr = NULL;
+ if ((status = enter_mutex(m, &acquired)) == APR_SUCCESS) {
+ status = engine_pull(m, engine, block, pr);
+ leave_mutex(m, acquired);
+ }
+ return status;
+}
+
+static void engine_done(h2_mplx *m, h2_req_engine_i *engine, h2_task *task,
+ int waslive, int aborted)
+{
+ int acquired;
+ ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, m->c,
+ "h2_mplx(%ld): task %s %s by %s",
+ m->id, task->id, aborted? "aborted":"done",
+ engine->pub.id);
+ h2_task_output_close(task->output);
+ engine->no_finished++;
+ if (waslive) engine->no_live--;
+ engine->no_assigned--;
+ if (task->c != engine->c) { /* do not release what the engine runs on */
+ if (enter_mutex(m, &acquired) == APR_SUCCESS) {
+ task_done(m, task);
+ leave_mutex(m, acquired);
+ }
+ }
+}
+
+void h2_mplx_engine_done(h2_req_engine *pub_engine, conn_rec *r_conn)
+{
+ h2_req_engine_i *engine = (h2_req_engine_i*)pub_engine;
+ h2_mplx *m = engine->m;
+ h2_task *task;
+ int acquired;
+
+ task = h2_ctx_cget_task(r_conn);
+ if (task && (enter_mutex(m, &acquired) == APR_SUCCESS)) {
+ engine_done(m, engine, task, 1, 0);
+ leave_mutex(m, acquired);
+ }
+}
+
+void h2_mplx_engine_exit(h2_req_engine *pub_engine)
+{
+ h2_req_engine_i *engine = (h2_req_engine_i*)pub_engine;
+ h2_mplx *m = engine->m;
+ int acquired;
+
+ if (enter_mutex(m, &acquired) == APR_SUCCESS) {
+ if (!m->aborted
+ && !H2_REQ_ENTRIES_EMPTY(&engine->entries)) {
+ h2_req_entry *entry;
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c,
+ "h2_mplx(%ld): exit engine %s (%s), "
+                      "still has requests queued, shutdown=%d, "
+ "assigned=%ld, live=%ld, finished=%ld",
+ m->c->id, engine->pub.id, engine->pub.type,
+ engine->shutdown,
+ (long)engine->no_assigned, (long)engine->no_live,
+ (long)engine->no_finished);
+ for (entry = H2_REQ_ENTRIES_FIRST(&engine->entries);
+ entry != H2_REQ_ENTRIES_SENTINEL(&engine->entries);
+ entry = H2_REQ_ENTRY_NEXT(entry)) {
+ request_rec *r = entry->r;
+ h2_task *task = h2_ctx_rget_task(r);
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c,
+ "h2_mplx(%ld): engine %s has queued task %s, "
+ "frozen=%d, aborting",
+ m->c->id, engine->pub.id, task->id, task->frozen);
+ engine_done(m, engine, task, 0, 1);
+ }
+ }
+ if (!m->aborted && (engine->no_assigned > 1 || engine->no_live > 1)) {
+ ap_log_cerror(APLOG_MARK, APLOG_WARNING, 0, m->c,
+ "h2_mplx(%ld): exit engine %s (%s), "
+ "assigned=%ld, live=%ld, finished=%ld",
+ m->c->id, engine->pub.id, engine->pub.type,
+ (long)engine->no_assigned, (long)engine->no_live,
+ (long)engine->no_finished);
+ }
+ else {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, m->c,
+ "h2_mplx(%ld): exit engine %s (%s)",
+ m->c->id, engine->pub.id, engine->pub.type);
+ }
+ if (m->engine == &engine->pub) {
+ m->engine = NULL; /* TODO */
+ }
+ leave_mutex(m, acquired);
+ }
+}
struct apr_thread_mutex_t;
struct apr_thread_cond_t;
struct h2_config;
+struct h2_ihash_t;
struct h2_response;
struct h2_task;
struct h2_stream;
struct h2_io_set;
struct apr_thread_cond_t;
struct h2_workers;
-struct h2_stream_set;
-struct h2_task_queue;
+struct h2_int_queue;
+struct h2_req_engine;
+#include <apr_queue.h>
#include "h2_io.h"
typedef struct h2_mplx h2_mplx;
apr_pool_t *pool;
unsigned int aborted : 1;
+ unsigned int need_registration : 1;
- struct h2_task_queue *q;
+ struct h2_int_queue *q;
struct h2_io_set *stream_ios;
struct h2_io_set *ready_ios;
+ struct h2_io_set *redo_ios;
int max_stream_started; /* highest stream id that started processing */
+ int workers_busy; /* # of workers processing on this mplx */
+ int workers_limit; /* current # of workers limit, dynamic */
+ int workers_def_limit; /* default # of workers limit */
+ int workers_max; /* max, hard limit # of workers in a process */
+    apr_time_t last_idle_block;  /* last time this mplx entered IDLE while
+                                  * streams were ready */
+    apr_time_t last_limit_change;/* last time the worker limit changed */
+ apr_interval_time_t limit_change_interval;
apr_thread_mutex_t *lock;
struct apr_thread_cond_t *added_output;
+ struct apr_thread_cond_t *task_done;
struct apr_thread_cond_t *join_wait;
apr_size_t stream_max_mem;
- int stream_timeout_secs;
+ apr_interval_time_t stream_timeout;
apr_pool_t *spare_pool; /* spare pool, ready for next io */
+ apr_allocator_t *spare_allocator;
+
struct h2_workers *workers;
apr_size_t tx_handles_reserved;
apr_size_t tx_chunk_size;
h2_mplx_consumed_cb *input_consumed;
void *input_consumed_ctx;
+
+ struct h2_req_engine *engine;
+    /* TODO: signal for waiting tasks */
+ apr_queue_t *engine_queue;
+ int next_eng_id;
};
* Object lifecycle and information.
******************************************************************************/
+apr_status_t h2_mplx_child_init(apr_pool_t *pool, server_rec *s);
+
/**
* Create the multiplexer for the given HTTP2 session.
* Implicitly has reference count 1.
*/
h2_mplx *h2_mplx_create(conn_rec *c, apr_pool_t *master,
const struct h2_config *conf,
+ apr_interval_time_t stream_timeout,
struct h2_workers *workers);
/**
*/
void h2_mplx_abort(h2_mplx *mplx);
-void h2_mplx_request_done(h2_mplx **pm, int stream_id, const struct h2_request **preq);
+struct h2_task *h2_mplx_pop_task(h2_mplx *mplx, int *has_more);
+
+void h2_mplx_task_done(h2_mplx *m, struct h2_task *task, struct h2_task **ptask);
/**
* Get the highest stream identifier that has been passed on to processing.
*/
apr_status_t h2_mplx_stream_done(h2_mplx *m, int stream_id, int rst_error);
-/* Return != 0 iff the multiplexer has data for the given stream.
+/* Return != 0 iff the multiplexer has output data for the given stream.
*/
int h2_mplx_out_has_data_for(h2_mplx *m, int stream_id);
+/* Return != 0 iff the multiplexer has input data for the given stream.
+ */
+int h2_mplx_in_has_data_for(h2_mplx *m, int stream_id);
+
/**
* Waits on output data from any stream in this session to become available.
* Returns APR_TIMEUP if no data arrived in the given time.
*/
apr_status_t h2_mplx_reprioritize(h2_mplx *m, h2_stream_pri_cmp *cmp, void *ctx);
-const struct h2_request *h2_mplx_pop_request(h2_mplx *mplx, int *has_more);
-
/**
 * Register a callback for the amount of input data consumed per stream. The
 * callback will only ever be invoked from the thread creating this h2_mplx,
 * e.g. when
 * @param bb the brigade to place any existing response body data into
*/
struct h2_stream *h2_mplx_next_submit(h2_mplx *m,
- struct h2_stream_set *streams);
+ struct h2_ihash_t *streams);
/**
* Reads output data from the given stream. Will never block, but
*/
#define H2_MPLX_REMOVE(e) APR_RING_REMOVE((e), link)
+/*******************************************************************************
+ * h2_mplx DoS protection
+ ******************************************************************************/
+
+/**
+ * Master connection has entered idle mode.
+ * @param m the mplx instance of the master connection
+ * @return != APR_SUCCESS iff connection should be terminated
+ */
+apr_status_t h2_mplx_idle(h2_mplx *m);
+
+/*******************************************************************************
+ * h2_mplx h2_req_engine handling.
+ ******************************************************************************/
+
+typedef apr_status_t h2_mplx_engine_init(struct h2_req_engine *engine,
+ request_rec *r);
+
+apr_status_t h2_mplx_engine_push(const char *engine_type,
+ request_rec *r, h2_mplx_engine_init *einit);
+
+apr_status_t h2_mplx_engine_pull(struct h2_req_engine *engine,
+ apr_read_type_e block, request_rec **pr);
+
+void h2_mplx_engine_done(struct h2_req_engine *engine, conn_rec *r_conn);
+
+void h2_mplx_engine_exit(struct h2_req_engine *engine);
#endif /* defined(__mod_h2__h2_mplx__) */
#ifndef mod_h2_h2_private_h
#define mod_h2_h2_private_h
+#include <apr_time.h>
+
#include <nghttp2/nghttp2.h>
extern module AP_MODULE_DECLARE_DATA http2_module;
APLOG_USE_MODULE(http2);
-
-#define H2_HEADER_METHOD ":method"
-#define H2_HEADER_METHOD_LEN 7
-#define H2_HEADER_SCHEME ":scheme"
-#define H2_HEADER_SCHEME_LEN 7
-#define H2_HEADER_AUTH ":authority"
-#define H2_HEADER_AUTH_LEN 10
-#define H2_HEADER_PATH ":path"
-#define H2_HEADER_PATH_LEN 5
-#define H2_CRLF "\r\n"
-
-#define H2_ALEN(a) (sizeof(a)/sizeof((a)[0]))
-
-#define H2MAX(x,y) ((x) > (y) ? (x) : (y))
-#define H2MIN(x,y) ((x) < (y) ? (x) : (y))
-
#endif
return 1;
}
-static int set_header(void *ctx, const char *key, const char *value)
+static int set_push_header(void *ctx, const char *key, const char *value)
{
- apr_table_setn(ctx, key, value);
+ size_t klen = strlen(key);
+ if (H2_HD_MATCH_LIT("User-Agent", key, klen)
+ || H2_HD_MATCH_LIT("Accept", key, klen)
+ || H2_HD_MATCH_LIT("Accept-Encoding", key, klen)
+ || H2_HD_MATCH_LIT("Accept-Language", key, klen)
+ || H2_HD_MATCH_LIT("Cache-Control", key, klen)) {
+ apr_table_setn(ctx, key, value);
+ }
return 1;
}
+static int has_param(link_ctx *ctx, const char *param)
+{
+ const char *p = apr_table_get(ctx->params, param);
+ return !!p;
+}
+
+static int has_relation(link_ctx *ctx, const char *rel)
+{
+    const char *s, *val = apr_table_get(ctx->params, "rel");
+    if (val) {
+        if (!strcmp(rel, val)) {
+            return 1;
+        }
+        /* scan for rel as a space-separated token; a prefix match
+         * such as "preloader" must not end the search */
+        for (s = ap_strstr_c(val, rel); s; s = ap_strstr_c(s + 1, rel)) {
+            if (s == val || s[-1] == ' ') {
+                const char *e = s + strlen(rel);
+                if (!*e || *e == ' ') {
+                    return 1;
+                }
+            }
+        }
+    }
+    return 0;
+}
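The token boundary rules used by `has_relation` above can be checked in isolation. A minimal standalone sketch, assuming plain `strstr` in place of `ap_strstr_c` and a hypothetical `rel_match` helper name:

```c
#include <string.h>

/* Sketch of the has_relation() token test: returns 1 iff `rel`
 * occurs in `val` as a complete, space-separated token. Scans all
 * occurrences so that a prefix like "preloader" does not mask a
 * later real token. */
static int rel_match(const char *val, const char *rel)
{
    const char *s;
    size_t rlen = strlen(rel);

    for (s = strstr(val, rel); s; s = strstr(s + 1, rel)) {
        if ((s == val || s[-1] == ' ')
            && (s[rlen] == '\0' || s[rlen] == ' ')) {
            return 1;
        }
    }
    return 0;
}
```

For example, `rel_match("preloader preload", "preload")` skips the prefix match inside "preloader" and finds the second, complete token.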
static int add_push(link_ctx *ctx)
{
/* so, we have read a Link header and need to decide
      * whether to transform it into a push.
*/
- const char *rel = apr_table_get(ctx->params, "rel");
- if (rel && !strcmp("preload", rel)) {
+ if (has_relation(ctx, "preload") && !has_param(ctx, "nopush")) {
apr_uri_t uri;
if (apr_uri_parse(ctx->pool, ctx->link, &uri) == APR_SUCCESS) {
if (uri.path && same_authority(ctx->req, &uri)) {
* TLS (if any) parameters.
*/
path = apr_uri_unparse(ctx->pool, &uri, APR_URI_UNP_OMITSITEPART);
-
push = apr_pcalloc(ctx->pool, sizeof(*push));
-
switch (ctx->req->push_policy) {
case H2_PUSH_HEAD:
method = "HEAD";
break;
}
headers = apr_table_make(ctx->pool, 5);
- apr_table_do(set_header, headers, ctx->req->headers,
- "User-Agent",
- "Cache-Control",
- "Accept-Language",
- NULL);
- req = h2_request_createn(0, ctx->pool, ctx->req->config,
- method, ctx->req->scheme,
- ctx->req->authority,
- path, headers);
+ apr_table_do(set_push_header, headers, ctx->req->headers, NULL);
+ req = h2_request_createn(0, ctx->pool, method, ctx->req->scheme,
+ ctx->req->authority, path, headers,
+ ctx->req->serialize);
/* atm, we do not push on pushes */
h2_request_end_headers(req, ctx->pool, 1, 0);
push->req = req;
return NULL;
}
-void h2_push_policy_determine(struct h2_request *req, apr_pool_t *p, int push_enabled)
-{
- h2_push_policy policy = H2_PUSH_NONE;
- if (push_enabled) {
- const char *val = apr_table_get(req->headers, "accept-push-policy");
- if (val) {
- if (ap_find_token(p, val, "fast-load")) {
- policy = H2_PUSH_FAST_LOAD;
- }
- else if (ap_find_token(p, val, "head")) {
- policy = H2_PUSH_HEAD;
- }
- else if (ap_find_token(p, val, "default")) {
- policy = H2_PUSH_DEFAULT;
- }
- else if (ap_find_token(p, val, "none")) {
- policy = H2_PUSH_NONE;
- }
- else {
- /* nothing known found in this header, go by default */
- policy = H2_PUSH_DEFAULT;
- }
- }
- else {
- policy = H2_PUSH_DEFAULT;
- }
- }
- req->push_policy = policy;
-}
-
/*******************************************************************************
* push diary
+ *
+ * - The push diary keeps track of resources already PUSHed via HTTP/2 on this
+ * connection. It records a hash value from the absolute URL of the resource
+ * pushed.
+ * - Lacking openssl, it uses 'apr_hashfunc_default' to calculate the hash value
+ * - with openssl, it uses SHA256 to calculate the hash value
+ * - whatever the method to generate the hash, the diary keeps a maximum of 64
+ * bits per hash, limiting the memory consumption to about
+ * H2PushDiarySize * 8
+ * bytes. Entries are sorted by most recently used and oldest entries are
+ * forgotten first.
+ * - Clients can initialize/replace the push diary by sending a 'Cache-Digest'
+ * header. Currently, this is the base64url encoded value of the cache digest
+ * as specified in https://datatracker.ietf.org/doc/draft-kazuho-h2-cache-digest/
+ * This draft can be expected to evolve and the definition of the header
+ * will be added there and refined.
+ * - The cache digest header is a Golomb Coded Set of hash values, but it may
+ *   limit the number of bits per hash value even further. For a good description
+ * of GCS, read here:
+ * http://giovanni.bajo.it/post/47119962313/golomb-coded-sets-smaller-than-bloom-filters
+ * - This means that the push diary might be initialized with hash values of much
+ * less than 64 bits, leading to more false positives, but smaller digest size.
******************************************************************************/
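As a rough illustration of the bookkeeping described above — not the module's actual hashing, which is apr_hashfunc_default or SHA256 truncated to 64 bits — a 64-bit FNV-1a stand-in and the resulting memory bound:

```c
#include <stdint.h>
#include <string.h>

/* Derive a 64-bit diary entry from an absolute URL. FNV-1a is only
 * a stand-in here for illustration. */
static uint64_t diary_hash64(const char *url)
{
    uint64_t h = 0xcbf29ce484222325ULL;     /* FNV offset basis */
    size_t i, len = strlen(url);
    for (i = 0; i < len; ++i) {
        h ^= (unsigned char)url[i];
        h *= 0x100000001b3ULL;              /* FNV prime */
    }
    return h;
}

/* Upper bound of diary memory: 8 bytes per entry, i.e. roughly
 * H2PushDiarySize * 8 bytes as stated in the comment above. */
static size_t diary_mem_max(size_t n_entries)
{
    return n_entries * sizeof(uint64_t);
}
```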
return h2_push_diary_update(stream->session, pushes);
}
-/* h2_log2(n) iff n is a power of 2 */
-static unsigned char h2_log2(apr_uint32_t n)
-{
- int lz = 0;
- if (!n) {
- return 0;
- }
- if (!(n & 0xffff0000u)) {
- lz += 16;
- n = (n << 16);
- }
- if (!(n & 0xff000000u)) {
- lz += 8;
- n = (n << 8);
- }
- if (!(n & 0xf0000000u)) {
- lz += 4;
- n = (n << 4);
- }
- if (!(n & 0xc0000000u)) {
- lz += 2;
- n = (n << 2);
- }
- if (!(n & 0x80000000u)) {
- lz += 1;
- }
-
- return 31 - lz;
-}
-
static apr_int32_t h2_log2inv(unsigned char log2)
{
return log2? (1 << log2) : 1;
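For reference, standalone copies of the pair of helpers (`h2_log2` is being moved out of this file; `h2_log2inv` inverts it for powers of two):

```c
#include <stdint.h>

/* h2_log2(n) returns floor(log2(n)) via leading-zero counting,
 * and 0 for n == 0. Copied from the module for illustration. */
static unsigned char h2_log2(uint32_t n)
{
    int lz = 0;
    if (!n) return 0;
    if (!(n & 0xffff0000u)) { lz += 16; n <<= 16; }
    if (!(n & 0xff000000u)) { lz += 8;  n <<= 8; }
    if (!(n & 0xf0000000u)) { lz += 4;  n <<= 4; }
    if (!(n & 0xc0000000u)) { lz += 2;  n <<= 2; }
    if (!(n & 0x80000000u)) { lz += 1; }
    return (unsigned char)(31 - lz);
}

/* Inverse for powers of two: h2_log2inv(h2_log2(2^k)) == 2^k. */
static int32_t h2_log2inv(unsigned char log2)
{
    return log2 ? (1 << log2) : 1;
}
```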
/* Intentional no APLOGNO */
ap_log_perror(APLOG_MARK, GCSLOG_LEVEL, 0, encoder->pool,
"h2_push_diary_enc: val=%"APR_UINT64_T_HEX_FMT", delta=%"
- APR_UINT64_T_HEX_FMT" flex_bits=%ld, "
- "fixed_bits=%d, fixed_val=%"APR_UINT64_T_HEX_FMT,
+                  APR_UINT64_T_HEX_FMT", flex_bits=%"APR_UINT64_T_FMT", "
+                  "fixed_bits=%d, fixed_val=%"APR_UINT64_T_HEX_FMT,
pval, delta, flex_bits, encoder->fixed_bits, delta&encoder->fixed_mask);
for (; flex_bits != 0; --flex_bits) {
status = gset_encode_bit(encoder, 1);
#ifndef __mod_h2__h2_push__
#define __mod_h2__h2_push__
+#include "h2.h"
+
struct h2_request;
struct h2_response;
struct h2_ngheader;
struct h2_session;
struct h2_stream;
-typedef enum {
- H2_PUSH_NONE,
- H2_PUSH_DEFAULT,
- H2_PUSH_HEAD,
- H2_PUSH_FAST_LOAD,
-} h2_push_policy;
-
typedef struct h2_push {
const struct h2_request *req;
} h2_push;
const struct h2_request *req,
const struct h2_response *res);
-/**
- * Set the push policy for the given request. Takes request headers into
- * account, see draft https://tools.ietf.org/html/draft-ruellan-http-accept-push-policy-00
- * for details.
- *
- * @param req the request to determine the policy for
- * @param p the pool to use
- * @param push_enabled if HTTP/2 server push is generally enabled for this request
- */
-void h2_push_policy_determine(struct h2_request *req, apr_pool_t *p, int push_enabled);
-
/**
* Create a new push diary for the given maximum number of entries.
*
#include <scoreboard.h>
#include "h2_private.h"
-#include "h2_config.h"
-#include "h2_mplx.h"
#include "h2_push.h"
#include "h2_request.h"
-#include "h2_task.h"
#include "h2_util.h"
-h2_request *h2_request_create(int id, apr_pool_t *pool,
- const struct h2_config *config)
+h2_request *h2_request_create(int id, apr_pool_t *pool, int serialize)
{
- return h2_request_createn(id, pool, config,
- NULL, NULL, NULL, NULL, NULL);
+ return h2_request_createn(id, pool, NULL, NULL, NULL, NULL, NULL,
+ serialize);
}
h2_request *h2_request_createn(int id, apr_pool_t *pool,
- const struct h2_config *config,
const char *method, const char *scheme,
const char *authority, const char *path,
- apr_table_t *header)
+ apr_table_t *header, int serialize)
{
h2_request *req = apr_pcalloc(pool, sizeof(h2_request));
req->id = id;
- req->config = config;
req->method = method;
req->scheme = scheme;
req->authority = authority;
req->path = path;
req->headers = header? header : apr_table_make(pool, 10);
req->request_time = apr_time_now();
-
+ req->serialize = serialize;
+
return req;
}
-void h2_request_destroy(h2_request *req)
-{
-}
-
static apr_status_t inspect_clen(h2_request *req, const char *s)
{
char *end;
}
+apr_status_t h2_request_make(h2_request *req, apr_pool_t *pool,
+ const char *method, const char *scheme,
+ const char *authority, const char *path,
+ apr_table_t *headers)
+{
+ req->method = method;
+ req->scheme = scheme;
+ req->authority = authority;
+ req->path = path;
+
+ AP_DEBUG_ASSERT(req->scheme);
+ AP_DEBUG_ASSERT(req->authority);
+ AP_DEBUG_ASSERT(req->path);
+ AP_DEBUG_ASSERT(req->method);
+
+ return add_all_h1_header(req, pool, headers);
+}
+
apr_status_t h2_request_rwrite(h2_request *req, request_rec *r)
{
apr_status_t status;
+ const char *scheme, *authority;
- req->config = h2_config_rget(r);
- req->method = r->method;
- req->scheme = (r->parsed_uri.scheme? r->parsed_uri.scheme
- : ap_http_scheme(r));
- req->authority = r->hostname;
- req->path = apr_uri_unparse(r->pool, &r->parsed_uri,
- APR_URI_UNP_OMITSITEPART);
-
- if (!ap_strchr_c(req->authority, ':') && r->server && r->server->port) {
- apr_port_t defport = apr_uri_port_of_scheme(req->scheme);
+ scheme = (r->parsed_uri.scheme? r->parsed_uri.scheme
+ : ap_http_scheme(r));
+ authority = r->hostname;
+ if (!ap_strchr_c(authority, ':') && r->server && r->server->port) {
+ apr_port_t defport = apr_uri_port_of_scheme(scheme);
if (defport != r->server->port) {
/* port info missing and port is not default for scheme: append */
- req->authority = apr_psprintf(r->pool, "%s:%d", req->authority,
- (int)r->server->port);
+ authority = apr_psprintf(r->pool, "%s:%d", authority,
+ (int)r->server->port);
}
}
- AP_DEBUG_ASSERT(req->scheme);
- AP_DEBUG_ASSERT(req->authority);
- AP_DEBUG_ASSERT(req->path);
- AP_DEBUG_ASSERT(req->method);
-
- status = add_all_h1_header(req, r->pool, r->headers_in);
-
+ status = h2_request_make(req, r->pool, r->method, scheme, authority,
+ apr_uri_unparse(r->pool, &r->parsed_uri,
+ APR_URI_UNP_OMITSITEPART),
+ r->headers_in);
ap_log_rerror(APLOG_MARK, APLOG_DEBUG, status, r, APLOGNO(03058)
"h2_request(%d): rwrite %s host=%s://%s%s",
req->id, req->method, req->scheme, req->authority, req->path);
-
return status;
}
dst->authority = OPT_COPY(p, src->authority);
dst->path = OPT_COPY(p, src->path);
dst->headers = apr_table_clone(p, src->headers);
+ if (src->trailers) {
+ dst->trailers = apr_table_clone(p, src->trailers);
+ }
dst->content_length = src->content_length;
dst->chunked = src->chunked;
dst->eoh = src->eoh;
}
+h2_request *h2_request_clone(apr_pool_t *p, const h2_request *src)
+{
+ h2_request *nreq = apr_pcalloc(p, sizeof(*nreq));
+ memcpy(nreq, src, sizeof(*nreq));
+ h2_request_copy(p, nreq, src);
+ return nreq;
+}
+
request_rec *h2_request_create_rec(const h2_request *req, conn_rec *conn)
{
request_rec *r;
r->allowed_methods = ap_make_method_list(p, 2);
- r->headers_in = apr_table_copy(r->pool, req->headers);
+ r->headers_in = apr_table_clone(r->pool, req->headers);
r->trailers_in = apr_table_make(r->pool, 5);
r->subprocess_env = apr_table_make(r->pool, 25);
r->headers_out = apr_table_make(r->pool, 12);
}
ap_parse_uri(r, req->path);
- r->protocol = (char*)"HTTP/2";
+ r->protocol = "HTTP/2";
r->proto_num = HTTP_VERSION(2, 0);
r->the_request = apr_psprintf(r->pool, "%s %s %s",
#ifndef __mod_h2__h2_request__
#define __mod_h2__h2_request__
-/* h2_request is the transformer of HTTP2 streams into HTTP/1.1 internal
- * format that will be fed to various httpd input filters to finally
- * become a request_rec to be handled by soemone.
- */
-struct h2_config;
-struct h2_to_h1;
-struct h2_mplx;
-struct h2_task;
-
-typedef struct h2_request h2_request;
-
-struct h2_request {
- int id; /* stream id */
+#include "h2.h"
- const char *method; /* pseudo header values, see ch. 8.1.2.3 */
- const char *scheme;
- const char *authority;
- const char *path;
-
- apr_table_t *headers;
- apr_table_t *trailers;
-
- apr_time_t request_time;
- apr_off_t content_length;
-
- unsigned int chunked : 1; /* iff requst body needs to be forwarded as chunked */
- unsigned int eoh : 1; /* iff end-of-headers has been seen and request is complete */
- unsigned int body : 1; /* iff this request has a body */
- unsigned int push_policy; /* which push policy to use for this request */
- const struct h2_config *config;
-};
-
-h2_request *h2_request_create(int id, apr_pool_t *pool,
- const struct h2_config *config);
+h2_request *h2_request_create(int id, apr_pool_t *pool, int serialize);
h2_request *h2_request_createn(int id, apr_pool_t *pool,
- const struct h2_config *config,
const char *method, const char *scheme,
const char *authority, const char *path,
- apr_table_t *headers);
+ apr_table_t *headers, int serialize);
-void h2_request_destroy(h2_request *req);
+apr_status_t h2_request_make(h2_request *req, apr_pool_t *pool,
+ const char *method, const char *scheme,
+ const char *authority, const char *path,
+ apr_table_t *headers);
apr_status_t h2_request_rwrite(h2_request *req, request_rec *r);
void h2_request_copy(apr_pool_t *p, h2_request *dst, const h2_request *src);
+h2_request *h2_request_clone(apr_pool_t *p, const h2_request *src);
+
/**
* Create a request_rec representing the h2_request to be
* processed on the given connection.
#ifndef __mod_h2__h2_response__
#define __mod_h2__h2_response__
-struct h2_request;
-struct h2_push;
-
-typedef struct h2_response {
- int stream_id;
- int rst_error;
- int http_status;
- apr_off_t content_length;
- apr_table_t *headers;
- apr_table_t *trailers;
- const char *sos_filter;
-} h2_response;
+#include "h2.h"
/**
* Create the response from the status and parsed header lines.
*/
#include <assert.h>
+#include <stddef.h>
#include <apr_thread_cond.h>
#include <apr_base64.h>
#include <apr_strings.h>
+#include <ap_mpm.h>
+
#include <httpd.h>
#include <http_core.h>
#include <http_config.h>
#include "h2_request.h"
#include "h2_response.h"
#include "h2_stream.h"
-#include "h2_stream_set.h"
#include "h2_from_h1.h"
#include "h2_task.h"
#include "h2_session.h"
#include "h2_workers.h"
-static int frame_print(const nghttp2_frame *frame, char *buffer, size_t maxlen);
-
static int h2_session_status_from_apr_status(apr_status_t rv)
{
if (rv == APR_SUCCESS) {
}
else {
apr_pool_create(&stream_pool, session->pool);
+ apr_pool_tag(stream_pool, "h2_stream");
}
stream = h2_stream_open(stream_id, stream_pool, session);
- h2_stream_set_add(session->streams, stream);
+ h2_ihash_add(session->streams, stream);
if (H2_STREAM_CLIENT_INITIATED(stream_id)
&& stream_id > session->max_stream_received) {
++session->requests_received;
return stream;
}
-#ifdef H2_NG2_STREAM_API
-
/**
* Determine the importance of streams when scheduling tasks.
* - if both stream depend on the same one, compare weights
return spri_cmp(sid1, s1, sid2, s2, session);
}
-#else /* ifdef H2_NG2_STREAM_API */
-
-/* In absence of nghttp2_stream API, which gives information about
- * priorities since nghttp2 1.3.x, we just sort the streams by
- * their identifier, aka. order of arrival.
- */
-static int stream_pri_cmp(int sid1, int sid2, void *ctx)
-{
- (void)ctx;
- return sid1 - sid2;
-}
-
-#endif /* (ifdef else) H2_NG2_STREAM_API */
-
static apr_status_t stream_schedule(h2_session *session,
h2_stream *stream, int eos)
{
if (APLOGcdebug(session->c)) {
char buffer[256];
- frame_print(frame, buffer, sizeof(buffer)/sizeof(buffer[0]));
+ h2_util_frame_print(frame, buffer, sizeof(buffer)/sizeof(buffer[0]));
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, session->c, APLOGNO(03063)
"h2_session(%ld): recv unknown FRAME[%s], frames=%ld/%ld (r/s)",
session->id, buffer, (long)session->frames_received,
if (APLOGcdebug(session->c)) {
char buffer[256];
- frame_print(frame, buffer, sizeof(buffer)/sizeof(buffer[0]));
+ h2_util_frame_print(frame, buffer, sizeof(buffer)/sizeof(buffer[0]));
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, session->c, APLOGNO(03066)
"h2_session(%ld): recv FRAME[%s], frames=%ld/%ld (r/s)",
session->id, buffer, (long)session->frames_received,
if (APLOGctrace2(session->c)) {
char buffer[256];
- frame_print(frame, buffer,
- sizeof(buffer)/sizeof(buffer[0]));
+ h2_util_frame_print(frame, buffer,
+ sizeof(buffer)/sizeof(buffer[0]));
ap_log_cerror(APLOG_MARK, APLOG_TRACE2, 0, session->c,
"h2_session: on_frame_rcv %s", buffer);
}
if (status == APR_SUCCESS) {
stream->data_frames_sent++;
- h2_conn_io_consider_flush(&session->io);
+ h2_conn_io_consider_pass(&session->io);
return 0;
}
else {
if (APLOGcdebug(session->c)) {
char buffer[256];
- frame_print(frame, buffer, sizeof(buffer)/sizeof(buffer[0]));
+ h2_util_frame_print(frame, buffer, sizeof(buffer)/sizeof(buffer[0]));
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, session->c, APLOGNO(03068)
"h2_session(%ld): sent FRAME[%s], frames=%ld/%ld (r/s)",
session->id, buffer, (long)session->frames_received,
(long)session->frames_sent);
}
++session->frames_sent;
+ switch (frame->hd.type) {
+ case NGHTTP2_HEADERS:
+ case NGHTTP2_DATA:
+ /* no explicit flushing necessary */
+ break;
+ default:
+ session->flush = 1;
+ break;
+ }
return 0;
}
if (APLOGctrace1(session->c)) {
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, session->c,
"h2_session(%ld): destroy, %d streams open",
- session->id, (int)h2_stream_set_size(session->streams));
+ session->id, (int)h2_ihash_count(session->streams));
}
if (session->mplx) {
h2_mplx_set_consumed_cb(session->mplx, NULL, NULL);
h2_mplx_release_and_join(session->mplx, session->iowait);
session->mplx = NULL;
}
- if (session->streams) {
- h2_stream_set_destroy(session->streams);
- session->streams = NULL;
- }
if (session->pool) {
apr_pool_destroy(session->pool);
}
}
-static apr_status_t h2_session_shutdown(h2_session *session, int reason, const char *msg)
+static apr_status_t h2_session_shutdown(h2_session *session, int reason,
+ const char *msg, int force_close)
{
apr_status_t status = APR_SUCCESS;
const char *err = msg;
h2_mplx_get_max_stream_started(session->mplx),
reason, (uint8_t*)err, err? strlen(err):0);
status = nghttp2_session_send(session->ngh2);
- h2_conn_io_flush(&session->io);
+ h2_conn_io_pass(&session->io, 1);
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, session->c, APLOGNO(03069)
"session(%ld): sent GOAWAY, err=%d, msg=%s",
session->id, reason, err? err : "");
dispatch_event(session, H2_SESSION_EV_LOCAL_GOAWAY, reason, err);
+
+ if (force_close) {
+ h2_mplx_abort(session->mplx);
+ }
+
return status;
}
if (status != APR_SUCCESS) {
return NULL;
}
+ apr_pool_tag(pool, "h2_session");
session = apr_pcalloc(pool, sizeof(h2_session));
if (session) {
session->max_stream_count = h2_config_geti(session->config, H2_CONF_MAX_STREAMS);
session->max_stream_mem = h2_config_geti(session->config, H2_CONF_STREAM_MAX_MEM);
- session->timeout_secs = h2_config_geti(session->config, H2_CONF_TIMEOUT_SECS);
- if (session->timeout_secs <= 0) {
- session->timeout_secs = apr_time_sec(session->s->timeout);
- }
- session->keepalive_secs = h2_config_geti(session->config, H2_CONF_KEEPALIVE_SECS);
- if (session->keepalive_secs <= 0) {
- session->keepalive_secs = apr_time_sec(session->s->keep_alive_timeout);
- }
-
+
status = apr_thread_cond_create(&session->iowait, session->pool);
if (status != APR_SUCCESS) {
return NULL;
}
- session->streams = h2_stream_set_create(session->pool, session->max_stream_count);
-
+ session->streams = h2_ihash_create(session->pool,offsetof(h2_stream, id));
session->workers = workers;
- session->mplx = h2_mplx_create(c, session->pool, session->config, workers);
+ session->mplx = h2_mplx_create(c, session->pool, session->config,
+ session->s->timeout, workers);
h2_mplx_set_consumed_cb(session->mplx, update_window, session);
if (APLOGcdebug(c)) {
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, c, APLOGNO(03200)
- "session(%ld) created, timeout=%d, keepalive_timeout=%d, "
- "max_streams=%d, stream_mem=%d, push_diary(type=%d,N=%d)",
- session->id, session->timeout_secs, session->keepalive_secs,
- (int)session->max_stream_count, (int)session->max_stream_mem,
- session->push_diary->dtype,
- (int)session->push_diary->N);
+ "session(%ld) created, max_streams=%d, stream_mem=%d, push_diary(type=%d,N=%d)",
+ session->id, (int)session->max_stream_count, (int)session->max_stream_mem,
+ session->push_diary->dtype, (int)session->push_diary->N);
}
}
return session;
nghttp2_strerror(*rv));
}
}
+
+ h2_conn_io_pass(&session->io, 1);
return status;
}
int resume_count;
} resume_ctx;
-static int resume_on_data(void *ctx, h2_stream *stream)
+static int resume_on_data(void *ctx, void *val)
{
+ h2_stream *stream = val;
resume_ctx *rctx = (resume_ctx*)ctx;
h2_session *session = rctx->session;
AP_DEBUG_ASSERT(session);
static int h2_session_resume_streams_with_data(h2_session *session)
{
AP_DEBUG_ASSERT(session);
- if (!h2_stream_set_is_empty(session->streams)
+ if (!h2_ihash_is_empty(session->streams)
&& session->mplx && !session->mplx->aborted) {
resume_ctx ctx;
/* Resume all streams where we have data in the out queue and
* which had been suspended before. */
- h2_stream_set_iter(session->streams, resume_on_data, &ctx);
+ h2_ihash_iter(session->streams, resume_on_data, &ctx);
return ctx.resume_count;
}
return 0;
h2_stream *h2_session_get_stream(h2_session *session, int stream_id)
{
if (!session->last_stream || stream_id != session->last_stream->id) {
- session->last_stream = h2_stream_set_get(session->streams, stream_id);
+ session->last_stream = h2_ihash_get(session->streams, stream_id);
}
return session->last_stream;
}
apr_pool_t *pool = h2_stream_detach_pool(stream);
/* this may be called while the session has already freed
- * some internal structures. */
+ * some internal structures or even when the mplx is locked. */
if (session->mplx) {
h2_mplx_stream_done(session->mplx, stream->id, stream->rst_error);
- if (session->last_stream == stream) {
- session->last_stream = NULL;
- }
}
+ if (session->last_stream == stream) {
+ session->last_stream = NULL;
+ }
if (session->streams) {
- h2_stream_set_remove(session->streams, stream->id);
+ h2_ihash_remove(session->streams, stream->id);
}
h2_stream_destroy(stream);
return APR_SUCCESS;
}
-static int frame_print(const nghttp2_frame *frame, char *buffer, size_t maxlen)
-{
- char scratch[128];
- size_t s_len = sizeof(scratch)/sizeof(scratch[0]);
-
- switch (frame->hd.type) {
- case NGHTTP2_DATA: {
- return apr_snprintf(buffer, maxlen,
- "DATA[length=%d, flags=%d, stream=%d, padlen=%d]",
- (int)frame->hd.length, frame->hd.flags,
- frame->hd.stream_id, (int)frame->data.padlen);
- }
- case NGHTTP2_HEADERS: {
- return apr_snprintf(buffer, maxlen,
- "HEADERS[length=%d, hend=%d, stream=%d, eos=%d]",
- (int)frame->hd.length,
- !!(frame->hd.flags & NGHTTP2_FLAG_END_HEADERS),
- frame->hd.stream_id,
- !!(frame->hd.flags & NGHTTP2_FLAG_END_STREAM));
- }
- case NGHTTP2_PRIORITY: {
- return apr_snprintf(buffer, maxlen,
- "PRIORITY[length=%d, flags=%d, stream=%d]",
- (int)frame->hd.length,
- frame->hd.flags, frame->hd.stream_id);
- }
- case NGHTTP2_RST_STREAM: {
- return apr_snprintf(buffer, maxlen,
- "RST_STREAM[length=%d, flags=%d, stream=%d]",
- (int)frame->hd.length,
- frame->hd.flags, frame->hd.stream_id);
- }
- case NGHTTP2_SETTINGS: {
- if (frame->hd.flags & NGHTTP2_FLAG_ACK) {
- return apr_snprintf(buffer, maxlen,
- "SETTINGS[ack=1, stream=%d]",
- frame->hd.stream_id);
- }
- return apr_snprintf(buffer, maxlen,
- "SETTINGS[length=%d, stream=%d]",
- (int)frame->hd.length, frame->hd.stream_id);
- }
- case NGHTTP2_PUSH_PROMISE: {
- return apr_snprintf(buffer, maxlen,
- "PUSH_PROMISE[length=%d, hend=%d, stream=%d]",
- (int)frame->hd.length,
- !!(frame->hd.flags & NGHTTP2_FLAG_END_HEADERS),
- frame->hd.stream_id);
- }
- case NGHTTP2_PING: {
- return apr_snprintf(buffer, maxlen,
- "PING[length=%d, ack=%d, stream=%d]",
- (int)frame->hd.length,
- frame->hd.flags&NGHTTP2_FLAG_ACK,
- frame->hd.stream_id);
- }
- case NGHTTP2_GOAWAY: {
- size_t len = (frame->goaway.opaque_data_len < s_len)?
- frame->goaway.opaque_data_len : s_len-1;
- memcpy(scratch, frame->goaway.opaque_data, len);
- scratch[len+1] = '\0';
- return apr_snprintf(buffer, maxlen, "GOAWAY[error=%d, reason='%s']",
- frame->goaway.error_code, scratch);
- }
- case NGHTTP2_WINDOW_UPDATE: {
- return apr_snprintf(buffer, maxlen,
- "WINDOW_UPDATE[length=%d, stream=%d]",
- (int)frame->hd.length, frame->hd.stream_id);
- }
- default:
- return apr_snprintf(buffer, maxlen,
- "type=%d[length=%d, flags=%d, stream=%d]",
- frame->hd.type, (int)frame->hd.length,
- frame->hd.flags, frame->hd.stream_id);
- }
-}
-
int h2_session_push_enabled(h2_session *session)
{
/* iff we can and they can */
static apr_status_t h2_session_send(h2_session *session)
{
- int rv = nghttp2_session_send(session->ngh2);
+ apr_interval_time_t saved_timeout;
+ int rv;
+ apr_socket_t *socket;
+
+ socket = ap_get_conn_socket(session->c);
+ if (socket) {
+ apr_socket_timeout_get(socket, &saved_timeout);
+ apr_socket_timeout_set(socket, session->s->timeout);
+ }
+
+ rv = nghttp2_session_send(session->ngh2);
+
+ if (socket) {
+ apr_socket_timeout_set(socket, saved_timeout);
+ }
if (rv != 0) {
if (nghttp2_is_fatal(rv)) {
dispatch_event(session, H2_SESSION_EV_PROTO_ERROR, rv, nghttp2_strerror(rv));
return APR_SUCCESS;
}
-static apr_status_t h2_session_read(h2_session *session, int block, int loops)
+static apr_status_t h2_session_read(h2_session *session, int block)
{
apr_status_t status, rstatus = APR_EAGAIN;
conn_rec *c = session->c;
- int i;
+ apr_off_t read_start = session->io.bytes_read;
- for (i = 0; i < loops; ++i) {
+ while (1) {
/* H2_IN filter handles all incoming data against the session.
* We just pull at the filter chain to make it happen */
status = ap_get_brigade(c->input_filters,
case APR_TIMEUP:
return status;
default:
- if (!i) {
+ if (session->io.bytes_read == read_start) {
/* first attempt failed */
if (APR_STATUS_IS_ETIMEDOUT(status)
|| APR_STATUS_IS_ECONNABORTED(status)
if (!is_accepting_streams(session)) {
break;
}
+ if ((session->io.bytes_read - read_start) > (64*1024)) {
+ /* read enough in one go, give write a chance */
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE2, status, c,
+ "h2_session(%ld): read 64k, returning", session->id);
+ break;
+ }
}
return rstatus;
}
+static int unsubmitted_iter(void *ctx, void *val)
+{
+ h2_stream *stream = val;
+ if (h2_stream_needs_submit(stream)) {
+ *((int *)ctx) = 1;
+ return 0;
+ }
+ return 1;
+}
+
+static int has_unsubmitted_streams(h2_session *session)
+{
+ int has_unsubmitted = 0;
+ h2_ihash_iter(session->streams, unsubmitted_iter, &has_unsubmitted);
+ return has_unsubmitted;
+}
+
+static int suspended_iter(void *ctx, void *val)
+{
+ h2_stream *stream = val;
+ if (h2_stream_is_suspended(stream)) {
+ *((int *)ctx) = 1;
+ return 0;
+ }
+ return 1;
+}
+
+static int has_suspended_streams(h2_session *session)
+{
+ int has_suspended = 0;
+ h2_ihash_iter(session->streams, suspended_iter, &has_suspended);
+ return has_suspended;
+}
+
static apr_status_t h2_session_submit(h2_session *session)
{
apr_status_t status = APR_EAGAIN;
h2_stream *stream;
- if (h2_stream_set_has_unsubmitted(session->streams)) {
+ if (has_unsubmitted_streams(session)) {
/* If we have responses ready, submit them now. */
while ((stream = h2_mplx_next_submit(session->mplx, session->streams))) {
status = submit_response(session, stream);
default:
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, session->c,
"h2_session(%ld): conn error -> shutdown", session->id);
- h2_session_shutdown(session, arg, msg);
+ h2_session_shutdown(session, arg, msg, 0);
break;
}
}
default:
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, session->c,
"h2_session(%ld): proto error -> shutdown", session->id);
- h2_session_shutdown(session, arg, msg);
+ h2_session_shutdown(session, arg, msg, 0);
break;
}
}
transit(session, "conn timeout", H2_SESSION_ST_DONE);
break;
default:
- h2_session_shutdown(session, arg, msg);
+ h2_session_shutdown(session, arg, msg, 1);
transit(session, "conn timeout", H2_SESSION_ST_DONE);
break;
}
{
switch (session->state) {
case H2_SESSION_ST_BUSY:
+ case H2_SESSION_ST_LOCAL_SHUTDOWN:
+ case H2_SESSION_ST_REMOTE_SHUTDOWN:
/* nothing for input and output to do. If we remain
* in this state, we go into a tight loop and suck up
* CPU cycles. Ideally, we'd like to do a blocking read, but that
* is not possible if we have scheduled tasks and wait
* for them to produce something. */
- if (h2_stream_set_is_empty(session->streams)) {
- /* When we have no streams, no task event are possible,
- * switch to blocking reads */
- transit(session, "no io", H2_SESSION_ST_IDLE);
+ if (h2_ihash_is_empty(session->streams)) {
+ if (!is_accepting_streams(session)) {
+ /* We are no longer accepting new streams and have
+ * finished processing existing ones. Time to leave. */
+ h2_session_shutdown(session, arg, msg, 0);
+ transit(session, "no io", H2_SESSION_ST_DONE);
+ }
+ else {
+ /* When we have no streams, no task events are possible,
+ * switch to blocking reads */
+ transit(session, "no io", H2_SESSION_ST_IDLE);
+ session->idle_until = (session->requests_received?
+ session->s->keep_alive_timeout :
+ session->s->timeout) + apr_time_now();
+ }
}
- else if (!h2_stream_set_has_unsubmitted(session->streams)
- && !h2_stream_set_has_suspended(session->streams)) {
+ else if (!has_unsubmitted_streams(session)
+ && !has_suspended_streams(session)) {
/* none of our streams is waiting for a response or
* new output data from task processing,
- * switch to blocking reads. */
+ * switch to blocking reads. We are probably waiting on
+ * window updates. */
transit(session, "no io", H2_SESSION_ST_IDLE);
+ session->idle_until = apr_time_now() + session->s->timeout;
}
else {
/* Unable to do blocking reads, as we wait on events from
}
}
-static void h2_session_ev_wait_timeout(h2_session *session, int arg, const char *msg)
+static void h2_session_ev_stream_ready(h2_session *session, int arg, const char *msg)
{
switch (session->state) {
case H2_SESSION_ST_WAIT:
- transit(session, "wait timeout", H2_SESSION_ST_BUSY);
+ transit(session, "stream ready", H2_SESSION_ST_BUSY);
break;
default:
/* nop */
}
}
-static void h2_session_ev_stream_ready(h2_session *session, int arg, const char *msg)
+static void h2_session_ev_data_read(h2_session *session, int arg, const char *msg)
{
switch (session->state) {
+ case H2_SESSION_ST_IDLE:
case H2_SESSION_ST_WAIT:
- transit(session, "stream ready", H2_SESSION_ST_BUSY);
+ transit(session, "data read", H2_SESSION_ST_BUSY);
break;
default:
/* nop */
break;
}
}
-static void h2_session_ev_data_read(h2_session *session, int arg, const char *msg)
+static void h2_session_ev_ngh2_done(h2_session *session, int arg, const char *msg)
{
switch (session->state) {
- case H2_SESSION_ST_IDLE:
- transit(session, "data read", H2_SESSION_ST_BUSY);
+ case H2_SESSION_ST_DONE:
+ /* nop */
break;
- /* fall through */
default:
+ transit(session, "nghttp2 done", H2_SESSION_ST_DONE);
+ break;
+ }
+}
+
+static void h2_session_ev_mpm_stopping(h2_session *session, int arg, const char *msg)
+{
+ switch (session->state) {
+ case H2_SESSION_ST_DONE:
+ case H2_SESSION_ST_LOCAL_SHUTDOWN:
/* nop */
break;
+ default:
+ h2_session_shutdown(session, arg, msg, 0);
+ break;
}
}
-static void h2_session_ev_ngh2_done(h2_session *session, int arg, const char *msg)
+static void h2_session_ev_pre_close(h2_session *session, int arg, const char *msg)
{
switch (session->state) {
case H2_SESSION_ST_DONE:
+ case H2_SESSION_ST_LOCAL_SHUTDOWN:
/* nop */
break;
default:
- transit(session, "nghttp2 done", H2_SESSION_ST_DONE);
+ h2_session_shutdown(session, arg, msg, 1);
break;
}
}
case H2_SESSION_EV_NO_IO:
h2_session_ev_no_io(session, arg, msg);
break;
- case H2_SESSION_EV_WAIT_TIMEOUT:
- h2_session_ev_wait_timeout(session, arg, msg);
- break;
case H2_SESSION_EV_STREAM_READY:
h2_session_ev_stream_ready(session, arg, msg);
break;
case H2_SESSION_EV_NGH2_DONE:
h2_session_ev_ngh2_done(session, arg, msg);
break;
+ case H2_SESSION_EV_MPM_STOPPING:
+ h2_session_ev_mpm_stopping(session, arg, msg);
+ break;
+ case H2_SESSION_EV_PRE_CLOSE:
+ h2_session_ev_pre_close(session, arg, msg);
+ break;
default:
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, session->c,
"h2_session(%ld): unknown event %d",
static const int MAX_WAIT_MICROS = 200 * 1000;
+static void update_child_status(h2_session *session, int status, const char *msg)
+{
+ apr_snprintf(session->status, sizeof(session->status),
+ "%s, streams: %d/%d/%d/%d/%d (open/recv/resp/push/rst)",
+ msg? msg : "-",
+ (int)h2_ihash_count(session->streams),
+ (int)session->requests_received,
+ (int)session->responses_submitted,
+ (int)session->pushes_submitted,
+ (int)(session->pushes_reset + session->streams_reset));
+ ap_update_child_status_descr(session->c->sbh, status, session->status);
+}
+
apr_status_t h2_session_process(h2_session *session, int async)
{
apr_status_t status = APR_SUCCESS;
conn_rec *c = session->c;
- int rv, have_written, have_read;
+ int rv, have_written, have_read, mpm_state, no_streams;
ap_log_cerror( APLOG_MARK, APLOG_TRACE1, status, c,
"h2_session(%ld): process start, async=%d", session->id, async);
+ if (c->cs) {
+ c->cs->state = CONN_STATE_WRITE_COMPLETION;
+ }
+
while (1) {
have_read = have_written = 0;
+ if (!ap_mpm_query(AP_MPMQ_MPM_STATE, &mpm_state)) {
+ if (mpm_state == AP_MPMQ_STOPPING) {
+ dispatch_event(session, H2_SESSION_EV_MPM_STOPPING, 0, NULL);
+ break;
+ }
+ }
+
+ session->status[0] = '\0';
+
switch (session->state) {
case H2_SESSION_ST_INIT:
+ ap_update_child_status_from_conn(c->sbh, SERVER_BUSY_READ, c);
if (!h2_is_acceptable_connection(c, 1)) {
- h2_session_shutdown(session, NGHTTP2_INADEQUATE_SECURITY, NULL);
+ update_child_status(session, SERVER_BUSY_READ, "inadequate security");
+ h2_session_shutdown(session, NGHTTP2_INADEQUATE_SECURITY, NULL, 1);
}
else {
- ap_update_child_status(c->sbh, SERVER_BUSY_READ, NULL);
+ update_child_status(session, SERVER_BUSY_READ, "init");
status = h2_session_start(session, &rv);
- ap_log_cerror(APLOG_MARK, APLOG_DEBUG, status, c,
- APLOGNO(03079)
+ ap_log_cerror(APLOG_MARK, APLOG_DEBUG, status, c, APLOGNO(03079)
"h2_session(%ld): started on %s:%d", session->id,
session->s->server_hostname,
c->local_addr->port);
break;
case H2_SESSION_ST_IDLE:
- h2_filter_cin_timeout_set(session->cin, session->keepalive_secs);
- ap_update_child_status(c->sbh, SERVER_BUSY_KEEPALIVE, NULL);
- status = h2_session_read(session, 1, 10);
- if (status == APR_SUCCESS) {
- have_read = 1;
- dispatch_event(session, H2_SESSION_EV_DATA_READ, 0, NULL);
- }
- else if (status == APR_EAGAIN) {
- /* nothing to read */
- }
- else if (APR_STATUS_IS_TIMEUP(status)) {
- dispatch_event(session, H2_SESSION_EV_CONN_TIMEOUT, 0, NULL);
- break;
+ no_streams = h2_ihash_is_empty(session->streams);
+ update_child_status(session, (no_streams? SERVER_BUSY_KEEPALIVE
+ : SERVER_BUSY_READ), "idle");
+ if (async && no_streams && !session->r && session->requests_received) {
+ ap_log_cerror( APLOG_MARK, APLOG_TRACE1, status, c,
+ "h2_session(%ld): async idle, nonblock read", session->id);
+ /* We do not return to the async mpm immediately, since under
+ * load, mpms tend to discard keep_alive connections very quickly.
+ * So, if we are still processing streams, we wait for the
+ * normal timeout first and, on timeout, close.
+ * If we have no streams, we still wait a short amount of
+ * time here for the next frame to arrive, before handing
+ * the connection over to the mpm's keep_alive processing.
+ */
+ status = h2_session_read(session, 0);
+
+ if (status == APR_SUCCESS) {
+ have_read = 1;
+ dispatch_event(session, H2_SESSION_EV_DATA_READ, 0, NULL);
+ }
+ else if (APR_STATUS_IS_EAGAIN(status) || APR_STATUS_IS_TIMEUP(status)) {
+ if (apr_time_now() > session->idle_until) {
+ dispatch_event(session, H2_SESSION_EV_CONN_TIMEOUT, 0, NULL);
+ }
+ else {
+ status = APR_EAGAIN;
+ goto out;
+ }
+ }
+ else {
+ ap_log_cerror( APLOG_MARK, APLOG_DEBUG, status, c,
+ "h2_session(%ld): idle, no data, error",
+ session->id);
+ dispatch_event(session, H2_SESSION_EV_CONN_ERROR, 0, "timeout");
+ }
}
else {
- dispatch_event(session, H2_SESSION_EV_CONN_ERROR, 0, NULL);
+ /* We wait in smaller increments, using a 1 second timeout.
+ * That gives us the chance to check for MPMQ_STOPPING often.
+ */
+ status = h2_mplx_idle(session->mplx);
+ if (status != APR_SUCCESS) {
+ dispatch_event(session, H2_SESSION_EV_CONN_ERROR,
+ H2_ERR_ENHANCE_YOUR_CALM, "less is more");
+ }
+ h2_filter_cin_timeout_set(session->cin, apr_time_from_sec(1));
+ status = h2_session_read(session, 1);
+ if (status == APR_SUCCESS) {
+ have_read = 1;
+ dispatch_event(session, H2_SESSION_EV_DATA_READ, 0, NULL);
+ }
+ else if (status == APR_EAGAIN) {
+ /* nothing to read */
+ }
+ else if (APR_STATUS_IS_TIMEUP(status)) {
+ if (apr_time_now() > session->idle_until) {
+ dispatch_event(session, H2_SESSION_EV_CONN_TIMEOUT, 0, "timeout");
+ }
+ /* continue handling reads */
+ }
+ else {
+ dispatch_event(session, H2_SESSION_EV_CONN_ERROR, 0, "error");
+ }
}
+
break;
case H2_SESSION_ST_BUSY:
case H2_SESSION_ST_LOCAL_SHUTDOWN:
case H2_SESSION_ST_REMOTE_SHUTDOWN:
if (nghttp2_session_want_read(session->ngh2)) {
- ap_update_child_status(c->sbh, SERVER_BUSY_READ, NULL);
- h2_filter_cin_timeout_set(session->cin, session->timeout_secs);
- status = h2_session_read(session, 0, 10);
+ h2_filter_cin_timeout_set(session->cin, session->s->timeout);
+ status = h2_session_read(session, 0);
if (status == APR_SUCCESS) {
have_read = 1;
dispatch_event(session, H2_SESSION_EV_DATA_READ, 0, NULL);
}
}
- if (!h2_stream_set_is_empty(session->streams)) {
+ if (!h2_ihash_is_empty(session->streams)) {
/* resume any streams for which data is available again */
h2_session_resume_streams_with_data(session);
/* Submit any responses/push_promises that are ready */
}
}
- if (nghttp2_session_want_write(session->ngh2)) {
+ while (nghttp2_session_want_write(session->ngh2)) {
+ ap_update_child_status(session->c->sbh, SERVER_BUSY_WRITE, NULL);
status = h2_session_send(session);
if (status == APR_SUCCESS) {
have_written = 1;
}
if (have_read || have_written) {
- session->wait_us = 0;
+ if (session->wait_us) {
+ session->wait_us = 0;
+ update_child_status(session, SERVER_BUSY_READ, "busy");
+ }
}
- else {
+ else if (!nghttp2_session_want_write(session->ngh2)) {
dispatch_event(session, H2_SESSION_EV_NO_IO, 0, NULL);
}
break;
case H2_SESSION_ST_WAIT:
- session->wait_us = H2MAX(session->wait_us, 10);
+ if (session->wait_us <= 0) {
+ session->wait_us = 10;
+ session->start_wait = apr_time_now();
+ update_child_status(session, SERVER_BUSY_READ, "wait");
+ }
+ else if ((apr_time_now() - session->start_wait) >= session->s->timeout) {
+ /* waited long enough */
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, APR_TIMEUP, c,
+ "h2_session: wait for data");
+ dispatch_event(session, H2_SESSION_EV_CONN_TIMEOUT, 0, NULL);
+ }
+ else {
+ /* repeating, increase timer for graceful backoff */
+ session->wait_us = H2MIN(session->wait_us*2, MAX_WAIT_MICROS);
+ }
+
if (APLOGctrace1(c)) {
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, c,
"h2_session: wait for data, %ld micros",
(long)session->wait_us);
}
-
- ap_log_cerror( APLOG_MARK, APLOG_TRACE2, status, c,
- "h2_session(%ld): process -> trywait", session->id);
status = h2_mplx_out_trywait(session->mplx, session->wait_us,
session->iowait);
if (status == APR_SUCCESS) {
- dispatch_event(session, H2_SESSION_EV_STREAM_READY, 0, NULL);
+ session->wait_us = 0;
+ dispatch_event(session, H2_SESSION_EV_DATA_READ, 0, NULL);
}
else if (status == APR_TIMEUP) {
- /* nothing, increase timer for graceful backup */
- session->wait_us = H2MIN(session->wait_us*2, MAX_WAIT_MICROS);
- dispatch_event(session, H2_SESSION_EV_WAIT_TIMEOUT, 0, NULL);
+ /* go back to checking all inputs again */
+ transit(session, "wait cycle", H2_SESSION_ST_BUSY);
}
else {
- h2_session_shutdown(session, H2_ERR_INTERNAL_ERROR, "cond wait error");
+ h2_session_shutdown(session, H2_ERR_INTERNAL_ERROR, "cond wait error", 0);
}
break;
case H2_SESSION_ST_DONE:
+ update_child_status(session, SERVER_CLOSING, "done");
status = APR_EOF;
goto out;
break;
}
- if (have_written) {
- h2_conn_io_flush(&session->io);
- }
- else if (!nghttp2_session_want_read(session->ngh2)
+ h2_conn_io_pass(&session->io, 1);
+ if (!nghttp2_session_want_read(session->ngh2)
&& !nghttp2_session_want_write(session->ngh2)) {
dispatch_event(session, H2_SESSION_EV_NGH2_DONE, 0, NULL);
}
}
out:
- if (have_written) {
- h2_conn_io_flush(&session->io);
- }
+ h2_conn_io_pass(&session->io, session->flush);
+ session->flush = 0;
ap_log_cerror( APLOG_MARK, APLOG_TRACE1, status, c,
"h2_session(%ld): [%s] process returns",
if (session->state == H2_SESSION_ST_DONE) {
if (!session->eoc_written) {
session->eoc_written = 1;
- h2_conn_io_write_eoc(&session->io,
- h2_bucket_eoc_create(session->c->bucket_alloc, session));
+ h2_conn_io_write_eoc(&session->io, session);
}
}
return status;
}
+
+apr_status_t h2_session_pre_close(h2_session *session, int async)
+{
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, session->c,
+ "h2_session(%ld): pre_close", session->id);
+ dispatch_event(session, H2_SESSION_EV_PRE_CLOSE, 0, "timeout");
+ return APR_SUCCESS;
+}
*
*/
+#include "h2.h"
+
struct apr_thread_mutex_t;
struct apr_thread_cond_t;
struct h2_ctx;
struct h2_config;
struct h2_filter_cin;
+struct h2_ihash_t;
struct h2_mplx;
struct h2_priority;
struct h2_push;
struct nghttp2_session;
-typedef enum {
- H2_SESSION_ST_INIT, /* send initial SETTINGS, etc. */
- H2_SESSION_ST_DONE, /* finished, connection close */
- H2_SESSION_ST_IDLE, /* nothing to write, expecting data inc */
- H2_SESSION_ST_BUSY, /* read/write without stop */
- H2_SESSION_ST_WAIT, /* waiting for tasks reporting back */
- H2_SESSION_ST_LOCAL_SHUTDOWN, /* we announced GOAWAY */
- H2_SESSION_ST_REMOTE_SHUTDOWN, /* client announced GOAWAY */
-} h2_session_state;
-
typedef enum {
H2_SESSION_EV_INIT, /* session was initialized */
H2_SESSION_EV_LOCAL_GOAWAY, /* we send a GOAWAY */
H2_SESSION_EV_PROTO_ERROR, /* protocol error */
H2_SESSION_EV_CONN_TIMEOUT, /* connection timeout */
H2_SESSION_EV_NO_IO, /* nothing has been read or written */
- H2_SESSION_EV_WAIT_TIMEOUT, /* timeout waiting for tasks */
H2_SESSION_EV_STREAM_READY, /* stream signalled availability of headers/data */
H2_SESSION_EV_DATA_READ, /* connection data has been read */
H2_SESSION_EV_NGH2_DONE, /* nghttp2 wants neither read nor write anything */
+ H2_SESSION_EV_MPM_STOPPING, /* the process is stopping */
+ H2_SESSION_EV_PRE_CLOSE, /* connection will close after this */
} h2_session_event_t;
typedef struct h2_session {
h2_session_state state; /* state session is in */
unsigned int reprioritize : 1; /* scheduled streams priority changed */
unsigned int eoc_written : 1; /* h2 eoc bucket written */
+ unsigned int flush : 1; /* flushing output necessary */
apr_interval_time_t wait_us; /* timeout during BUSY_WAIT state, in microseconds */
int unsent_submits; /* number of submitted, but not yet written responses. */
apr_size_t max_stream_count; /* max number of open streams */
apr_size_t max_stream_mem; /* max buffer memory for a single stream */
- int timeout_secs; /* connection timeout (seconds) */
- int keepalive_secs; /* connection idle timeout (seconds) */
+ apr_time_t start_wait; /* Time we started waiting for something to happen */
+ apr_time_t idle_until; /* Time we shut down due to sheer boredom */
apr_pool_t *pool; /* pool to use in session handling */
apr_bucket_brigade *bbtmp; /* brigade for keeping temporary data */
struct h2_mplx *mplx; /* multiplexer for stream data */
struct h2_stream *last_stream; /* last stream worked with */
- struct h2_stream_set *streams; /* streams handled by this session */
+ struct h2_ihash_t *streams; /* streams handled by this session */
apr_pool_t *spare; /* spare stream pool */
struct h2_workers *workers; /* for executing stream tasks */
struct h2_push_diary *push_diary; /* remember pushes, avoid duplicates */
+
+ char status[64]; /* status message for scoreboard */
} h2_session;
*/
apr_status_t h2_session_process(h2_session *session, int async);
+/**
+ * Last chance to do anything before the connection is closed.
+ */
+apr_status_t h2_session_pre_close(h2_session *session, int async);
+
/**
* Cleanup the session and all objects it still contains. This will not
* destroy h2_task instances that have not finished yet.
return 1;
}
-static int input_open(h2_stream *stream)
+static int input_open(const h2_stream *stream)
{
switch (stream->state) {
case H2_STREAM_ST_OPEN:
{
h2_stream *stream = h2_stream_create(id, pool, session);
set_state(stream, H2_STREAM_ST_OPEN);
- stream->request = h2_request_create(id, pool, session->config);
+ stream->request = h2_request_create(id, pool,
+ h2_config_geti(session->config, H2_CONF_SER_HEADERS));
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, session->c, APLOGNO(03082)
"h2_stream(%ld-%d): opened", session->id, stream->id);
apr_status_t h2_stream_destroy(h2_stream *stream)
{
AP_DEBUG_ASSERT(stream);
- if (stream->request) {
- h2_request_destroy(stream->request);
- stream->request = NULL;
- }
-
if (stream->pool) {
apr_pool_destroy(stream->pool);
}
}
set_state(stream, H2_STREAM_ST_OPEN);
status = h2_request_rwrite(stream->request, r);
+ stream->request->serialize = h2_config_geti(h2_config_rget(r),
+ H2_CONF_SER_HEADERS);
+
return status;
}
return status;
}
-int h2_stream_is_scheduled(h2_stream *stream)
+int h2_stream_is_scheduled(const h2_stream *stream)
{
return stream->scheduled;
}
stream->session->id, stream->id, stream->suspended);
}
-int h2_stream_is_suspended(h2_stream *stream)
+int h2_stream_is_suspended(const h2_stream *stream)
{
AP_DEBUG_ASSERT(stream);
return stream->suspended;
return stream->sos->read_to(stream->sos, bb, plen, peos);
}
-int h2_stream_input_is_open(h2_stream *stream)
+int h2_stream_input_is_open(const h2_stream *stream)
{
return input_open(stream);
}
-int h2_stream_needs_submit(h2_stream *stream)
+int h2_stream_needs_submit(const h2_stream *stream)
{
switch (stream->state) {
case H2_STREAM_ST_OPEN:
#ifndef __mod_h2__h2_stream__
#define __mod_h2__h2_stream__
+#include "h2.h"
+
/**
* A HTTP/2 stream, e.g. a client request+response in HTTP/1.1 terms.
*
*/
#include "h2_io.h"
-typedef enum {
- H2_STREAM_ST_IDLE,
- H2_STREAM_ST_OPEN,
- H2_STREAM_ST_RESV_LOCAL,
- H2_STREAM_ST_RESV_REMOTE,
- H2_STREAM_ST_CLOSED_INPUT,
- H2_STREAM_ST_CLOSED_OUTPUT,
- H2_STREAM_ST_CLOSED,
-} h2_stream_state_t;
-
struct h2_mplx;
struct h2_priority;
struct h2_request;
* @param stream the stream to check on
* @return != 0 iff stream has been scheduled
*/
-int h2_stream_is_scheduled(h2_stream *stream);
+int h2_stream_is_scheduled(const h2_stream *stream);
struct h2_response *h2_stream_get_response(h2_stream *stream);
* @param stream the stream to check
* @return != 0 iff stream is suspended.
*/
-int h2_stream_is_suspended(h2_stream *stream);
+int h2_stream_is_suspended(const h2_stream *stream);
/**
* Check if the stream has open input.
* @param stream the stream to check
* @return != 0 iff stream has open input.
*/
-int h2_stream_input_is_open(h2_stream *stream);
+int h2_stream_input_is_open(const h2_stream *stream);
/**
* Check if the stream has not submitted a response or RST yet.
* @param stream the stream to check
* @return != 0 iff stream has not submitted a response or RST.
*/
-int h2_stream_needs_submit(h2_stream *stream);
+int h2_stream_needs_submit(const h2_stream *stream);
/**
* Submit any server push promises on this stream and schedule
#include "h2_private.h"
#include "h2_conn.h"
#include "h2_config.h"
+#include "h2_ctx.h"
#include "h2_from_h1.h"
#include "h2_h2.h"
#include "h2_mplx.h"
return h2_from_h1_read_response(task->output->from_h1, f, bb);
}
+static apr_status_t h2_response_freeze_filter(ap_filter_t* f,
+ apr_bucket_brigade* bb)
+{
+ h2_task *task = f->ctx;
+ AP_DEBUG_ASSERT(task);
+
+ if (task->frozen) {
+ ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, f->r,
+ "h2_response_freeze_filter, saving");
+ return ap_save_brigade(f, &task->frozen_out, &bb, task->c->pool);
+ }
+
+ if (APR_BRIGADE_EMPTY(bb)) {
+ return APR_SUCCESS;
+ }
+
+ ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, f->r,
+ "h2_response_freeze_filter, passing");
+ return ap_pass_brigade(f->next, bb);
+}
+
/*******************************************************************************
* Register various hooks
*/
static int h2_task_pre_conn(conn_rec* c, void *arg);
static int h2_task_process_conn(conn_rec* c);
+APR_OPTIONAL_FN_TYPE(ap_logio_add_bytes_in) *h2_task_logio_add_bytes_in;
+APR_OPTIONAL_FN_TYPE(ap_logio_add_bytes_out) *h2_task_logio_add_bytes_out;
+
void h2_task_register_hooks(void)
{
/* This hook runs on new connections before mod_ssl has a say.
NULL, AP_FTYPE_PROTOCOL);
ap_register_output_filter("H2_TRAILERS", h2_response_trailers_filter,
NULL, AP_FTYPE_PROTOCOL);
+ ap_register_output_filter("H2_RESPONSE_FREEZE", h2_response_freeze_filter,
+ NULL, AP_FTYPE_RESOURCE);
+}
+
+/* post config init */
+apr_status_t h2_task_init(apr_pool_t *pool, server_rec *s)
+{
+ h2_task_logio_add_bytes_in = APR_RETRIEVE_OPTIONAL_FN(ap_logio_add_bytes_in);
+ h2_task_logio_add_bytes_out = APR_RETRIEVE_OPTIONAL_FN(ap_logio_add_bytes_out);
+
+ return APR_SUCCESS;
}
static int h2_task_pre_conn(conn_rec* c, void *arg)
}
h2_task *h2_task_create(long session_id, const h2_request *req,
- apr_pool_t *pool, h2_mplx *mplx)
+ conn_rec *c, h2_mplx *mplx)
{
- h2_task *task = apr_pcalloc(pool, sizeof(h2_task));
+ h2_task *task = apr_pcalloc(c->pool, sizeof(h2_task));
if (task == NULL) {
- ap_log_perror(APLOG_MARK, APLOG_ERR, APR_ENOMEM, pool,
+ ap_log_cerror(APLOG_MARK, APLOG_ERR, APR_ENOMEM, c,
APLOGNO(02941) "h2_task(%ld-%d): create stream task",
session_id, req->id);
h2_mplx_out_close(mplx, req->id, NULL);
return NULL;
}
- task->id = apr_psprintf(pool, "%ld-%d", session_id, req->id);
+ task->id = apr_psprintf(c->pool, "%ld-%d", session_id, req->id);
task->stream_id = req->id;
+ task->c = c;
task->mplx = mplx;
task->request = req;
task->input_eos = !req->body;
- task->ser_headers = h2_config_geti(req->config, H2_CONF_SER_HEADERS);
+ task->ser_headers = req->serialize;
+
+ h2_ctx_create_for(c, task);
return task;
}
-apr_status_t h2_task_do(h2_task *task, conn_rec *c, apr_thread_cond_t *cond,
- apr_socket_t *socket)
+apr_status_t h2_task_do(h2_task *task, apr_thread_cond_t *cond)
{
+ apr_status_t status;
+
AP_DEBUG_ASSERT(task);
task->io = cond;
- task->input = h2_task_input_create(task, c->pool, c->bucket_alloc);
- task->output = h2_task_output_create(task, c->pool);
-
- ap_process_connection(c, socket);
+ task->input = h2_task_input_create(task, task->c);
+ task->output = h2_task_output_create(task, task->c);
- ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, c,
- "h2_task(%s): processing done", task->id);
+ ap_process_connection(task->c, ap_get_conn_socket(task->c));
- h2_task_input_destroy(task->input);
- h2_task_output_close(task->output);
- h2_task_output_destroy(task->output);
- task->io = NULL;
+ if (task->frozen) {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, task->c,
+ "h2_task(%s): process_conn returned frozen task",
+ task->id);
+ /* cleanup delayed */
+ status = APR_EAGAIN;
+ }
+ else {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, task->c,
+ "h2_task(%s): processing done", task->id);
+ status = APR_SUCCESS;
+ }
- return APR_SUCCESS;
+ return status;
}
-static apr_status_t h2_task_process_request(const h2_request *req, conn_rec *c)
+static apr_status_t h2_task_process_request(h2_task *task, conn_rec *c)
{
- request_rec *r;
+ const h2_request *req = task->request;
conn_state_t *cs = c->cs;
+ request_rec *r;
r = h2_request_create_rec(req, c);
if (r && (r->status == HTTP_OK)) {
ap_update_child_status(c->sbh, SERVER_BUSY_READ, r);
- if (cs)
+ if (cs) {
cs->state = CONN_STATE_HANDLER;
+ }
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, c,
+ "h2_task(%s): start process_request", task->id);
ap_process_request(r);
+ if (task->frozen) {
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, c,
+ "h2_task(%s): process_request frozen", task->id);
+ }
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, c,
+ "h2_task(%s): process_request done", task->id);
+
/* After the call to ap_process_request, the
* request pool will have been deleted. We set
* r=NULL here to ensure that any dereference
* will result in a segfault immediately instead
* of nondeterministic failures later.
*/
- if (cs)
+ if (cs)
cs->state = CONN_STATE_WRITE_COMPLETION;
r = NULL;
}
- ap_update_child_status(c->sbh, SERVER_BUSY_WRITE, NULL);
c->sbh = NULL;
return APR_SUCCESS;
if (!ctx->task->ser_headers) {
ap_log_cerror(APLOG_MARK, APLOG_TRACE2, 0, c,
"h2_h2, processing request directly");
- h2_task_process_request(ctx->task->request, c);
+ h2_task_process_request(ctx->task, c);
return DONE;
}
ap_log_cerror(APLOG_MARK, APLOG_TRACE2, 0, c,
}
return DECLINED;
}
+
+apr_status_t h2_task_freeze(h2_task *task, request_rec *r)
+{
+ if (!task->frozen) {
+ conn_rec *c = task->c;
+
+ task->frozen = 1;
+ task->frozen_out = apr_brigade_create(c->pool, c->bucket_alloc);
+ ap_add_output_filter("H2_RESPONSE_FREEZE", task, r, r->connection);
+ ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, c,
+ "h2_task(%s), frozen", task->id);
+ }
+ return APR_SUCCESS;
+}
+
+apr_status_t h2_task_thaw(h2_task *task)
+{
+ if (task->frozen) {
+ task->frozen = 0;
+ ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, task->c,
+ "h2_task(%s), thawed", task->id);
+ }
+ return APR_SUCCESS;
+}
+
#ifndef __mod_h2__h2_task__
#define __mod_h2__h2_task__
+#include <http_core.h>
+
/**
 * An h2_task fakes an HTTP/1.1 request from the data in an HTTP/2 stream
 * (HEADER+CONT.+DATA) the module receives.
struct h2_task {
const char *id;
int stream_id;
+ conn_rec *c;
struct h2_mplx *mplx;
const struct h2_request *request;
unsigned int filters_set : 1;
unsigned int input_eos : 1;
unsigned int ser_headers : 1;
+ unsigned int frozen : 1;
struct h2_task_input *input;
struct h2_task_output *output;
struct apr_thread_cond_t *io; /* used to wait for events on */
+
+ apr_bucket_brigade *frozen_out;
};
h2_task *h2_task_create(long session_id, const struct h2_request *req,
- apr_pool_t *pool, struct h2_mplx *mplx);
+ conn_rec *c, struct h2_mplx *mplx);
-apr_status_t h2_task_do(h2_task *task, conn_rec *c,
- struct apr_thread_cond_t *cond, apr_socket_t *socket);
+apr_status_t h2_task_do(h2_task *task, struct apr_thread_cond_t *cond);
void h2_task_register_hooks(void);
+/*
+ * One-time, post-config initialization.
+ */
+apr_status_t h2_task_init(apr_pool_t *pool, server_rec *s);
+
+extern APR_OPTIONAL_FN_TYPE(ap_logio_add_bytes_in) *h2_task_logio_add_bytes_in;
+extern APR_OPTIONAL_FN_TYPE(ap_logio_add_bytes_out) *h2_task_logio_add_bytes_out;
+
+apr_status_t h2_task_freeze(h2_task *task, request_rec *r);
+apr_status_t h2_task_thaw(h2_task *task);
#endif /* defined(__mod_h2__h2_task__) */
return 1;
}
-h2_task_input *h2_task_input_create(h2_task *task, apr_pool_t *pool,
- apr_bucket_alloc_t *bucket_alloc)
+h2_task_input *h2_task_input_create(h2_task *task, conn_rec *c)
{
- h2_task_input *input = apr_pcalloc(pool, sizeof(h2_task_input));
+ h2_task_input *input = apr_pcalloc(c->pool, sizeof(h2_task_input));
if (input) {
+ input->c = c;
input->task = task;
input->bb = NULL;
+ input->block = APR_BLOCK_READ;
if (task->ser_headers) {
- ap_log_perror(APLOG_MARK, APLOG_TRACE1, 0, pool,
+ ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, c,
"h2_task_input(%s): serialize request %s %s",
task->id, task->request->method, task->request->path);
- input->bb = apr_brigade_create(pool, bucket_alloc);
+ input->bb = apr_brigade_create(c->pool, c->bucket_alloc);
apr_brigade_printf(input->bb, NULL, NULL, "%s %s HTTP/1.1\r\n",
task->request->method, task->request->path);
apr_table_do(ser_header, input, task->request->headers, NULL);
apr_brigade_puts(input->bb, NULL, NULL, "\r\n");
if (input->task->input_eos) {
- APR_BRIGADE_INSERT_TAIL(input->bb, apr_bucket_eos_create(bucket_alloc));
+ APR_BRIGADE_INSERT_TAIL(input->bb, apr_bucket_eos_create(c->bucket_alloc));
}
}
else if (!input->task->input_eos) {
- input->bb = apr_brigade_create(pool, bucket_alloc);
+ input->bb = apr_brigade_create(c->pool, c->bucket_alloc);
}
else {
/* We do not serialize and have eos already, no need to
return input;
}
-void h2_task_input_destroy(h2_task_input *input)
+void h2_task_input_block_set(h2_task_input *input, apr_read_type_e block)
{
- input->bb = NULL;
+ input->block = block;
}
apr_status_t h2_task_input_read(h2_task_input *input,
return APR_EOF;
}
- while ((bblen == 0) || (mode == AP_MODE_READBYTES && bblen < readbytes)) {
+ while (bblen == 0) {
/* Get more data for our stream from mplx.
*/
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
input->task->id, block,
(long)readbytes, (long)bblen);
- /* Although we sometimes get called with APR_NONBLOCK_READs,
- we seem to fill our buffer blocking. Otherwise we get EAGAIN,
- return that to our caller and everyone throws up their hands,
- never calling us again. */
- status = h2_mplx_in_read(input->task->mplx, APR_BLOCK_READ,
+ /* Override the block mode we get called with depending on the input's
+ * setting.
+ */
+ status = h2_mplx_in_read(input->task->mplx, block,
input->task->stream_id, input->bb,
f->r? f->r->trailers_in : NULL,
input->task->io);
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
"h2_task_input(%s): mplx in read returned",
input->task->id);
- if (status != APR_SUCCESS) {
+ if (APR_STATUS_IS_EAGAIN(status)
+ && (mode == AP_MODE_GETLINE || block == APR_BLOCK_READ)) {
+ /* chunked input handling does not seem to like it if we
+ * return with APR_EAGAIN from a GETLINE read...
+ * upload 100k test on test-ser.example.org hangs */
+ status = APR_SUCCESS;
+ }
+ else if (status != APR_SUCCESS) {
return status;
}
+
status = apr_brigade_length(input->bb, 1, &bblen);
if (status != APR_SUCCESS) {
return status;
}
- if ((bblen == 0) && (block == APR_NONBLOCK_READ)) {
- return h2_util_has_eos(input->bb, -1)? APR_EOF : APR_EAGAIN;
- }
+
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
"h2_task_input(%s): mplx in read, %ld bytes in brigade",
input->task->id, (long)bblen);
+ if (h2_task_logio_add_bytes_in) {
+ h2_task_logio_add_bytes_in(f->c, bblen);
+ }
}
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
typedef struct h2_task_input h2_task_input;
struct h2_task_input {
+ conn_rec *c;
struct h2_task *task;
apr_bucket_brigade *bb;
+ apr_read_type_e block;
};
-h2_task_input *h2_task_input_create(struct h2_task *task, apr_pool_t *pool,
- apr_bucket_alloc_t *bucket_alloc);
-
-void h2_task_input_destroy(h2_task_input *input);
+h2_task_input *h2_task_input_create(struct h2_task *task, conn_rec *c);
apr_status_t h2_task_input_read(h2_task_input *input,
ap_filter_t* filter,
apr_read_type_e block,
apr_off_t readbytes);
+void h2_task_input_block_set(h2_task_input *input, apr_read_type_e block);
+
#endif /* defined(__mod_h2__h2_task_input__) */
#include "h2_util.h"
-h2_task_output *h2_task_output_create(h2_task *task, apr_pool_t *pool)
+h2_task_output *h2_task_output_create(h2_task *task, conn_rec *c)
{
- h2_task_output *output = apr_pcalloc(pool, sizeof(h2_task_output));
-
+ h2_task_output *output = apr_pcalloc(c->pool, sizeof(h2_task_output));
if (output) {
+ output->c = c;
output->task = task;
output->state = H2_TASK_OUT_INIT;
- output->from_h1 = h2_from_h1_create(task->stream_id, pool);
+ output->from_h1 = h2_from_h1_create(task->stream_id, c->pool);
if (!output->from_h1) {
return NULL;
}
return output;
}
-void h2_task_output_destroy(h2_task_output *output)
+static apr_table_t *get_trailers(h2_task_output *output)
{
- if (output->from_h1) {
- h2_from_h1_destroy(output->from_h1);
- output->from_h1 = NULL;
+ if (!output->trailers_passed) {
+ h2_response *response = h2_from_h1_get_response(output->from_h1);
+ if (response && response->trailers) {
+ output->trailers_passed = 1;
+ if (h2_task_logio_add_bytes_out) {
+ /* count trailers as if we'd do an HTTP/1.1 serialization */
+ h2_task_logio_add_bytes_out(output->c,
+ h2_util_table_bytes(response->trailers, 3)+1);
+ }
+ return response->trailers;
+ }
}
+ return NULL;
}
static apr_status_t open_if_needed(h2_task_output *output, ap_filter_t *f,
- apr_bucket_brigade *bb)
+ apr_bucket_brigade *bb, const char *caller)
{
if (output->state == H2_TASK_OUT_INIT) {
h2_response *response;
if (!response) {
if (f) {
/* This happens currently when ap_die(status, r) is invoked
- * by a read request filter.
- */
+ * by a read request filter. */
ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, f->c, APLOGNO(03204)
- "h2_task_output(%s): write without response "
+ "h2_task_output(%s): write without response by %s "
"for %s %s %s",
- output->task->id, output->task->request->method,
+ output->task->id, caller,
+ output->task->request->method,
output->task->request->authority,
output->task->request->path);
f->c->aborted = 1;
return APR_ECONNABORTED;
}
- output->trailers_passed = !!response->trailers;
+ if (h2_task_logio_add_bytes_out) {
+ /* count headers as if we'd do an HTTP/1.1 serialization */
+ /* TODO: count a virtual status line? */
+ apr_off_t bytes_written;
+ apr_brigade_length(bb, 0, &bytes_written);
+ bytes_written += h2_util_table_bytes(response->headers, 3)+1;
+ h2_task_logio_add_bytes_out(f->c, bytes_written);
+ }
+ get_trailers(output);
+ ap_log_cerror(APLOG_MARK, APLOG_DEBUG, 0, f->c, APLOGNO(03348)
+ "h2_task_output(%s): open as needed %s %s %s",
+ output->task->id, output->task->request->method,
+ output->task->request->authority,
+ output->task->request->path);
return h2_mplx_out_open(output->task->mplx, output->task->stream_id,
response, f, bb, output->task->io);
}
return APR_EOF;
}
-static apr_table_t *get_trailers(h2_task_output *output)
-{
- if (!output->trailers_passed) {
- h2_response *response = h2_from_h1_get_response(output->from_h1);
- if (response && response->trailers) {
- output->trailers_passed = 1;
- return response->trailers;
- }
- }
- return NULL;
-}
-
void h2_task_output_close(h2_task_output *output)
{
- open_if_needed(output, NULL, NULL);
+ open_if_needed(output, NULL, NULL, "close");
if (output->state != H2_TASK_OUT_DONE) {
+ if (output->task->frozen_out
+ && !APR_BRIGADE_EMPTY(output->task->frozen_out)) {
+ h2_mplx_out_write(output->task->mplx, output->task->stream_id,
+ NULL, output->task->frozen_out, NULL, NULL);
+ }
h2_mplx_out_close(output->task->mplx, output->task->stream_id,
get_trailers(output));
output->state = H2_TASK_OUT_DONE;
}
}
-int h2_task_output_has_started(h2_task_output *output)
-{
- return output->state >= H2_TASK_OUT_STARTED;
-}
-
/* Bring the data from the brigade (which represents the result of the
* request_rec out filter chain) into the h2_mplx for further sending
* on the master connection.
return APR_SUCCESS;
}
- status = open_if_needed(output, f, bb);
+ if (output->task->frozen) {
+ h2_util_bb_log(output->c, output->task->stream_id, APLOG_TRACE2,
+ "frozen task output write", bb);
+ return ap_save_brigade(f, &output->task->frozen_out, &bb,
+ output->c->pool);
+ }
+
+ status = open_if_needed(output, f, bb, "write");
if (status != APR_EOF) {
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, status, f->c,
"h2_task_output(%s): opened and passed brigade",
ap_log_cerror(APLOG_MARK, APLOG_TRACE1, 0, f->c,
"h2_task_output(%s): write brigade", output->task->id);
+ if (h2_task_logio_add_bytes_out) {
+ apr_off_t bytes_written;
+ apr_brigade_length(bb, 0, &bytes_written);
+ h2_task_logio_add_bytes_out(f->c, bytes_written);
+ }
return h2_mplx_out_write(output->task->mplx, output->task->stream_id,
f, bb, get_trailers(output), output->task->io);
}
typedef struct h2_task_output h2_task_output;
struct h2_task_output {
+ conn_rec *c;
struct h2_task *task;
h2_task_output_state_t state;
struct h2_from_h1 *from_h1;
unsigned int trailers_passed : 1;
};
-h2_task_output *h2_task_output_create(struct h2_task *task, apr_pool_t *pool);
-
-void h2_task_output_destroy(h2_task_output *output);
+h2_task_output *h2_task_output_create(struct h2_task *task, conn_rec *c);
apr_status_t h2_task_output_write(h2_task_output *output,
ap_filter_t* filter,
void h2_task_output_close(h2_task_output *output);
-int h2_task_output_has_started(h2_task_output *output);
+apr_status_t h2_task_output_freeze(h2_task_output *output);
+apr_status_t h2_task_output_thaw(h2_task_output *output);
#endif /* defined(__mod_h2__h2_task_output__) */
#include "h2_request.h"
#include "h2_util.h"
+/* h2_log2(n) returns floor(log2(n)); exact iff n is a power of 2 */
+unsigned char h2_log2(apr_uint32_t n)
+{
+ int lz = 0;
+ if (!n) {
+ return 0;
+ }
+ if (!(n & 0xffff0000u)) {
+ lz += 16;
+ n = (n << 16);
+ }
+ if (!(n & 0xff000000u)) {
+ lz += 8;
+ n = (n << 8);
+ }
+ if (!(n & 0xf0000000u)) {
+ lz += 4;
+ n = (n << 4);
+ }
+ if (!(n & 0xc0000000u)) {
+ lz += 2;
+ n = (n << 2);
+ }
+ if (!(n & 0x80000000u)) {
+ lz += 1;
+ }
+
+ return 31 - lz;
+}
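The function added above computes floor(log2(n)) by binary-searching the leading-zero count in halving steps (16, 8, 4, 2, 1). A standalone sketch of the same trick, compilable outside httpd (the name `log2_u32` is mine, not the module's):

```c
#include <assert.h>
#include <stdint.h>

/* floor(log2(n)) via binary search on the leading-zero count,
 * mirroring the 16/8/4/2/1 halving steps in h2_log2 above. */
static unsigned char log2_u32(uint32_t n)
{
    int lz = 0;
    if (!n) {
        return 0;
    }
    if (!(n & 0xffff0000u)) { lz += 16; n <<= 16; }
    if (!(n & 0xff000000u)) { lz += 8;  n <<= 8;  }
    if (!(n & 0xf0000000u)) { lz += 4;  n <<= 4;  }
    if (!(n & 0xc0000000u)) { lz += 2;  n <<= 2;  }
    if (!(n & 0x80000000u)) { lz += 1; }
    return (unsigned char)(31 - lz);
}
```

For powers of two the result is exact; for other inputs it rounds down, e.g. 1023 maps to 9 while 1024 maps to 10.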
+
size_t h2_util_hex_dump(char *buffer, size_t maxlen,
const char *data, size_t datalen)
{
return NULL;
}
+
+/*******************************************************************************
+ * ihash - hash for structs with int identifier
+ ******************************************************************************/
+struct h2_ihash_t {
+ apr_hash_t *hash;
+ size_t ioff;
+};
+
+static unsigned int ihash(const char *key, apr_ssize_t *klen)
+{
+ return (unsigned int)(*((int*)key));
+}
+
+h2_ihash_t *h2_ihash_create(apr_pool_t *pool, size_t offset_of_int)
+{
+ h2_ihash_t *ih = apr_pcalloc(pool, sizeof(h2_ihash_t));
+ ih->hash = apr_hash_make_custom(pool, ihash);
+ ih->ioff = offset_of_int;
+ return ih;
+}
+
+size_t h2_ihash_count(h2_ihash_t *ih)
+{
+ return apr_hash_count(ih->hash);
+}
+
+int h2_ihash_is_empty(h2_ihash_t *ih)
+{
+ return apr_hash_count(ih->hash) == 0;
+}
+
+void *h2_ihash_get(h2_ihash_t *ih, int id)
+{
+ return apr_hash_get(ih->hash, &id, sizeof(id));
+}
+
+typedef struct {
+ h2_ihash_iter_t *iter;
+ void *ctx;
+} iter_ctx;
+
+static int ihash_iter(void *ctx, const void *key, apr_ssize_t klen,
+ const void *val)
+{
+ iter_ctx *ictx = ctx;
+ return ictx->iter(ictx->ctx, (void*)val); /* why is this passed const? */
+}
+
+void h2_ihash_iter(h2_ihash_t *ih, h2_ihash_iter_t *fn, void *ctx)
+{
+ iter_ctx ictx;
+ ictx.iter = fn;
+ ictx.ctx = ctx;
+ apr_hash_do(ihash_iter, &ictx, ih->hash);
+}
+
+void h2_ihash_add(h2_ihash_t *ih, void *val)
+{
+ apr_hash_set(ih->hash, ((char *)val + ih->ioff), sizeof(int), val);
+}
+
+void h2_ihash_remove(h2_ihash_t *ih, int id)
+{
+ apr_hash_set(ih->hash, &id, sizeof(id), NULL);
+}
+
+void h2_ihash_clear(h2_ihash_t *ih)
+{
+ apr_hash_clear(ih->hash);
+}
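The ihash introduced above keys an `apr_hash_t` directly on an int member embedded in the value struct: the caller passes `offsetof(struct_type, id_member)` once at creation, and `h2_ihash_add` derives the key pointer as `(char *)val + ioff`. A minimal standalone sketch of just that key-derivation idea (the APR hash itself is not reproduced; `item`, `key_of`, and `demo_key` are hypothetical names of mine):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A value struct carrying its own int identifier, as h2_stream
 * and h2_task do. */
typedef struct {
    const char *name;
    int id;
} item;

/* Extract the int key at byte offset ioff inside val, the way
 * h2_ihash_add() locates the key for apr_hash_set(). */
static int key_of(const void *val, size_t ioff)
{
    int id;
    memcpy(&id, (const char *)val + ioff, sizeof(id));
    return id;
}

static int demo_key(void)
{
    item it = { "stream", 7 };
    return key_of(&it, offsetof(item, id));
}
```

The benefit is that callers never maintain a separate key allocation: the key lives inside the stored struct, so adding and removing stay O(1) pointer arithmetic plus the hash operation.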
+
+/*******************************************************************************
+ * h2_util for apr_table_t
+ ******************************************************************************/
+
+typedef struct {
+ apr_size_t bytes;
+ apr_size_t pair_extra;
+} table_bytes_ctx;
+
+static int count_bytes(void *x, const char *key, const char *value)
+{
+ table_bytes_ctx *ctx = x;
+ if (key) {
+ ctx->bytes += strlen(key);
+ }
+ if (value) {
+ ctx->bytes += strlen(value);
+ }
+ ctx->bytes += ctx->pair_extra;
+ return 1;
+}
+
+apr_size_t h2_util_table_bytes(apr_table_t *t, apr_size_t pair_extra)
+{
+ table_bytes_ctx ctx;
+
+ ctx.bytes = 0;
+ ctx.pair_extra = pair_extra;
+ apr_table_do(count_bytes, &ctx, t, NULL);
+ return ctx.bytes;
+}
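`h2_util_table_bytes` above sums `strlen(key) + strlen(value) + pair_extra` over every pair, which the logio callers use to approximate what an HTTP/1.1 serialization of the headers would have cost. A plain-C sketch of the accounting over a key/value array (assumed names `table_bytes` and `demo_table_bytes` are mine):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sum key and value lengths plus a fixed per-pair overhead,
 * mirroring the count_bytes() callback logic above. */
static size_t table_bytes(const char *const kv[][2], size_t n,
                          size_t pair_extra)
{
    size_t bytes = 0, i;
    for (i = 0; i < n; ++i) {
        bytes += strlen(kv[i][0]) + strlen(kv[i][1]) + pair_extra;
    }
    return bytes;
}

static size_t demo_table_bytes(void)
{
    const char *const hdrs[2][2] = {
        { "host", "example" },   /* 4 + 7 + 3 = 14 */
        { "te",   "trailers" },  /* 2 + 8 + 3 = 13 */
    };
    return table_bytes(hdrs, 2, 3);
}
```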
+
+
/*******************************************************************************
* h2_util for bucket brigades
******************************************************************************/
H2_DEF_LITERAL("www-authenticate"),
H2_DEF_LITERAL("proxy-authenticate"),
};
+static literal IgnoredProxyRespHds[] = {
+ H2_DEF_LITERAL("alt-svc"),
+};
static int ignore_header(const literal *lits, size_t llen,
const char *name, size_t nlen)
return ignore_header(H2_LIT_ARGS(IgnoredResponseTrailers), name, len);
}
-void h2_req_strip_ignored_header(apr_table_t *headers)
+int h2_proxy_res_ignore_header(const char *name, size_t len)
{
- int i;
- for (i = 0; i < H2_ALEN(IgnoredRequestHeaders); ++i) {
- apr_table_unset(headers, IgnoredRequestHeaders[i].name);
+ return (h2_req_ignore_header(name, len)
+ || ignore_header(H2_LIT_ARGS(IgnoredProxyRespHds), name, len));
+}
+
+
+/*******************************************************************************
+ * frame logging
+ ******************************************************************************/
+
+int h2_util_frame_print(const nghttp2_frame *frame, char *buffer, size_t maxlen)
+{
+ char scratch[128];
+ size_t s_len = sizeof(scratch)/sizeof(scratch[0]);
+
+ switch (frame->hd.type) {
+ case NGHTTP2_DATA: {
+ return apr_snprintf(buffer, maxlen,
+ "DATA[length=%d, flags=%d, stream=%d, padlen=%d]",
+ (int)frame->hd.length, frame->hd.flags,
+ frame->hd.stream_id, (int)frame->data.padlen);
+ }
+ case NGHTTP2_HEADERS: {
+ return apr_snprintf(buffer, maxlen,
+ "HEADERS[length=%d, hend=%d, stream=%d, eos=%d]",
+ (int)frame->hd.length,
+ !!(frame->hd.flags & NGHTTP2_FLAG_END_HEADERS),
+ frame->hd.stream_id,
+ !!(frame->hd.flags & NGHTTP2_FLAG_END_STREAM));
+ }
+ case NGHTTP2_PRIORITY: {
+ return apr_snprintf(buffer, maxlen,
+ "PRIORITY[length=%d, flags=%d, stream=%d]",
+ (int)frame->hd.length,
+ frame->hd.flags, frame->hd.stream_id);
+ }
+ case NGHTTP2_RST_STREAM: {
+ return apr_snprintf(buffer, maxlen,
+ "RST_STREAM[length=%d, flags=%d, stream=%d]",
+ (int)frame->hd.length,
+ frame->hd.flags, frame->hd.stream_id);
+ }
+ case NGHTTP2_SETTINGS: {
+ if (frame->hd.flags & NGHTTP2_FLAG_ACK) {
+ return apr_snprintf(buffer, maxlen,
+ "SETTINGS[ack=1, stream=%d]",
+ frame->hd.stream_id);
+ }
+ return apr_snprintf(buffer, maxlen,
+ "SETTINGS[length=%d, stream=%d]",
+ (int)frame->hd.length, frame->hd.stream_id);
+ }
+ case NGHTTP2_PUSH_PROMISE: {
+ return apr_snprintf(buffer, maxlen,
+ "PUSH_PROMISE[length=%d, hend=%d, stream=%d]",
+ (int)frame->hd.length,
+ !!(frame->hd.flags & NGHTTP2_FLAG_END_HEADERS),
+ frame->hd.stream_id);
+ }
+ case NGHTTP2_PING: {
+ return apr_snprintf(buffer, maxlen,
+ "PING[length=%d, ack=%d, stream=%d]",
+ (int)frame->hd.length,
+ frame->hd.flags&NGHTTP2_FLAG_ACK,
+ frame->hd.stream_id);
+ }
+ case NGHTTP2_GOAWAY: {
+ size_t len = (frame->goaway.opaque_data_len < s_len)?
+ frame->goaway.opaque_data_len : s_len-1;
+ memcpy(scratch, frame->goaway.opaque_data, len);
+ scratch[len] = '\0';
+ return apr_snprintf(buffer, maxlen, "GOAWAY[error=%d, reason='%s']",
+ frame->goaway.error_code, scratch);
+ }
+ case NGHTTP2_WINDOW_UPDATE: {
+ return apr_snprintf(buffer, maxlen,
+ "WINDOW_UPDATE[stream=%d, incr=%d]",
+ frame->hd.stream_id,
+ frame->window_update.window_size_increment);
+ }
+ default:
+ return apr_snprintf(buffer, maxlen,
+ "type=%d[length=%d, flags=%d, stream=%d]",
+ frame->hd.type, (int)frame->hd.length,
+ frame->hd.flags, frame->hd.stream_id);
}
}
+/*******************************************************************************
+ * push policy
+ ******************************************************************************/
+void h2_push_policy_determine(struct h2_request *req, apr_pool_t *p, int push_enabled)
+{
+ h2_push_policy policy = H2_PUSH_NONE;
+ if (push_enabled) {
+ const char *val = apr_table_get(req->headers, "accept-push-policy");
+ if (val) {
+ if (ap_find_token(p, val, "fast-load")) {
+ policy = H2_PUSH_FAST_LOAD;
+ }
+ else if (ap_find_token(p, val, "head")) {
+ policy = H2_PUSH_HEAD;
+ }
+ else if (ap_find_token(p, val, "default")) {
+ policy = H2_PUSH_DEFAULT;
+ }
+ else if (ap_find_token(p, val, "none")) {
+ policy = H2_PUSH_NONE;
+ }
+ else {
+ /* nothing known was found in this header, use the default */
+ policy = H2_PUSH_DEFAULT;
+ }
+ }
+ else {
+ policy = H2_PUSH_DEFAULT;
+ }
+ }
+ req->push_policy = policy;
+}
#ifndef __mod_h2__h2_util__
#define __mod_h2__h2_util__
+/*******************************************************************************
+ * some debugging/format helpers
+ ******************************************************************************/
struct h2_request;
struct nghttp2_frame;
void h2_util_camel_case_header(char *s, size_t len);
-int h2_req_ignore_header(const char *name, size_t len);
-int h2_req_ignore_trailer(const char *name, size_t len);
-void h2_req_strip_ignored_header(apr_table_t *headers);
-int h2_res_ignore_trailer(const char *name, size_t len);
+int h2_util_frame_print(const nghttp2_frame *frame, char *buffer, size_t maxlen);
+
+/*******************************************************************************
+ * ihash - hash for structs with int identifier
+ ******************************************************************************/
+typedef struct h2_ihash_t h2_ihash_t;
+typedef int h2_ihash_iter_t(void *ctx, void *val);
+
+/**
+ * Create a hash for structures that have an identifying int member.
+ * @param pool the pool to use
+ * @param offset_of_int the offsetof() the int member in the struct
+ */
+h2_ihash_t *h2_ihash_create(apr_pool_t *pool, size_t offset_of_int);
+
+size_t h2_ihash_count(h2_ihash_t *ih);
+int h2_ihash_is_empty(h2_ihash_t *ih);
+void *h2_ihash_get(h2_ihash_t *ih, int id);
+
+/**
+ * Iterate over the hash members (without defined order) and invoke
+ * fn for each member until 0 is returned.
+ * @param ih the hash to iterate over
+ * @param fn the function to invoke on each member
+ * @param ctx user supplied data passed into each iteration call
+ */
+void h2_ihash_iter(h2_ihash_t *ih, h2_ihash_iter_t *fn, void *ctx);
+
+void h2_ihash_add(h2_ihash_t *ih, void *val);
+void h2_ihash_remove(h2_ihash_t *ih, int id);
+void h2_ihash_clear(h2_ihash_t *ih);
+
+/*******************************************************************************
+ * common helpers
+ ******************************************************************************/
+/* h2_log2(n) returns floor(log2(n)); exact iff n is a power of 2 */
+unsigned char h2_log2(apr_uint32_t n);
+
+/**
+ * Count the bytes that all key/value pairs in a table have
+ * in length (excluding terminating 0s), plus additional extra per pair.
+ *
+ * @param t the table to inspect
+ * @param pair_extra the extra amount to add per pair
+ * @return the number of bytes all key/value pairs have
+ */
+apr_size_t h2_util_table_bytes(apr_table_t *t, apr_size_t pair_extra);
/**
* Return != 0 iff the string s contains the token, as specified in
const char *h2_util_first_token_match(apr_pool_t *pool, const char *s,
const char *tokens[], apr_size_t len);
+/** Match a header value against a string constant, case-insensitively */
+#define H2_HD_MATCH_LIT(l, name, nlen) \
+ ((nlen == sizeof(l) - 1) && !apr_strnatcasecmp(l, name))
+
+/*******************************************************************************
+ * HTTP/2 header helpers
+ ******************************************************************************/
+int h2_req_ignore_header(const char *name, size_t len);
+int h2_req_ignore_trailer(const char *name, size_t len);
+int h2_res_ignore_trailer(const char *name, size_t len);
+int h2_proxy_res_ignore_header(const char *name, size_t len);
+
+/**
+ * Set the push policy for the given request. Takes request headers into
+ * account, see draft https://tools.ietf.org/html/draft-ruellan-http-accept-push-policy-00
+ * for details.
+ *
+ * @param req the request to determine the policy for
+ * @param p the pool to use
+ * @param push_enabled if HTTP/2 server push is generally enabled for this request
+ */
+void h2_push_policy_determine(struct h2_request *req, apr_pool_t *p, int push_enabled);
+
+/*******************************************************************************
+ * base64 url encoding, different table from normal base64
+ ******************************************************************************/
/**
* I always wanted to write my own base64url decoder...not. See
* https://tools.ietf.org/html/rfc4648#section-5 for description.
const char *h2_util_base64url_encode(const char *data,
apr_size_t len, apr_pool_t *pool);
-#define H2_HD_MATCH_LIT(l, name, nlen) \
- ((nlen == sizeof(l) - 1) && !apr_strnatcasecmp(l, name))
+/*******************************************************************************
+ * nghttp2 helpers
+ ******************************************************************************/
#define H2_HD_MATCH_LIT_CS(l, name) \
((strlen(name) == sizeof(l) - 1) && !apr_strnatcasecmp(l, name))
h2_ngheader *h2_util_ngheader_make_req(apr_pool_t *p,
const struct h2_request *req);
+/*******************************************************************************
+ * apr brigade helpers
+ ******************************************************************************/
/**
* Moves data from one brigade into another. If maxlen > 0, it only
* moves up to maxlen bytes into the target brigade, making bucket splits
* @macro
* Version number of the http2 module as c string
*/
-#define MOD_HTTP2_VERSION "1.2.2"
+#define MOD_HTTP2_VERSION "1.3.2"
/**
* @macro
* release. This is a 24 bit number with 8 bits for major number, 8 bits
* for minor and 8 bits for patch. Version 1.2.3 becomes 0x010203.
*/
-#define MOD_HTTP2_VERSION_NUM 0x010202
+#define MOD_HTTP2_VERSION_NUM 0x010302
#endif /* mod_h2_h2_version_h */
#include <http_core.h>
#include <http_log.h>
+#include "h2.h"
#include "h2_private.h"
#include "h2_conn.h"
#include "h2_ctx.h"
#include "h2_h2.h"
#include "h2_mplx.h"
-#include "h2_request.h"
#include "h2_task.h"
#include "h2_worker.h"
static void* APR_THREAD_FUNC execute(apr_thread_t *thread, void *wctx)
{
h2_worker *worker = (h2_worker *)wctx;
- apr_status_t status;
-
- (void)thread;
- /* Other code might want to see a socket for this connection this
- * worker processes. Allocate one without further function...
- */
- status = apr_socket_create(&worker->socket,
- APR_INET, SOCK_STREAM,
- APR_PROTO_TCP, worker->pool);
- if (status != APR_SUCCESS) {
- ap_log_perror(APLOG_MARK, APLOG_ERR, status, worker->pool,
- APLOGNO(02948) "h2_worker(%d): alloc socket",
- worker->id);
- worker->worker_done(worker, worker->ctx);
- return NULL;
- }
+ int sticky;
while (!worker->aborted) {
- h2_mplx *m;
- const h2_request *req;
+ h2_task *task;
- /* Get a h2_mplx + h2_request from the main workers queue. */
- status = worker->get_next(worker, &m, &req, worker->ctx);
-
- while (req) {
- conn_rec *c, *master = m->c;
- int stream_id = req->id;
+ /* Get a h2_task from the main workers queue. */
+ worker->get_next(worker, worker->ctx, &task, &sticky);
+ while (task) {
+ h2_task_do(task, worker->io);
- c = h2_slave_create(master, worker->task_pool,
- worker->thread, worker->socket);
- if (!c) {
- ap_log_cerror(APLOG_MARK, APLOG_WARNING, status, c,
- APLOGNO(02957) "h2_request(%ld-%d): error setting up slave connection",
- m->id, stream_id);
- h2_mplx_out_rst(m, stream_id, H2_ERR_INTERNAL_ERROR);
+ /* if someone was waiting on this task, time to wake up */
+ apr_thread_cond_signal(worker->io);
+ /* report the task done and maybe get another one from the same
+ * mplx (= master connection), if we can be sticky.
+ */
+ if (sticky && !worker->aborted) {
+ h2_mplx_task_done(task->mplx, task, &task);
}
else {
- h2_task *task;
-
- task = h2_task_create(m->id, req, worker->task_pool, m);
- h2_ctx_create_for(c, task);
- h2_task_do(task, c, worker->io, worker->socket);
+ h2_mplx_task_done(task->mplx, task, NULL);
task = NULL;
-
- apr_thread_cond_signal(worker->io);
}
-
- /* clean our references and report request as done. Signal
- * that we want another unless we have been aborted */
- /* TODO: this will keep a worker attached to this h2_mplx as
- * long as it has requests to handle. Might no be fair to
- * other mplx's. Perhaps leave after n requests? */
- req = NULL;
- apr_pool_clear(worker->task_pool);
- h2_mplx_request_done(&m, stream_id, worker->aborted? NULL : &req);
}
}
- if (worker->socket) {
- apr_socket_close(worker->socket);
- worker->socket = NULL;
- }
-
worker->worker_done(worker, worker->ctx);
return NULL;
}
apr_allocator_create(&allocator);
apr_allocator_max_free_set(allocator, ap_max_mem_free);
apr_pool_create_ex(&pool, parent_pool, NULL, allocator);
+ apr_pool_tag(pool, "h2_worker");
apr_allocator_owner_set(allocator, pool);
w = apr_pcalloc(pool, sizeof(h2_worker));
return NULL;
}
- apr_pool_create(&w->task_pool, w->pool);
apr_thread_create(&w->thread, attr, execute, w, w->pool);
}
return w;
return worker->aborted;
}
-h2_task *h2_worker_create_task(h2_worker *worker, h2_mplx *m,
- const h2_request *req)
-{
- h2_task *task;
-
- task = h2_task_create(m->id, req, worker->task_pool, m);
- return task;
-}
-
* until a h2_mplx becomes available or the worker itself
* gets aborted (idle timeout, for example). */
typedef apr_status_t h2_worker_mplx_next_fn(h2_worker *worker,
- struct h2_mplx **pm,
- const struct h2_request **preq,
- void *ctx);
+ void *ctx,
+ struct h2_task **ptask,
+ int *psticky);
/* Invoked just before the worker thread exits. */
typedef void h2_worker_done_fn(h2_worker *worker, void *ctx);
int id;
apr_thread_t *thread;
apr_pool_t *pool;
- apr_pool_t *task_pool;
struct apr_thread_cond_t *io;
- apr_socket_t *socket;
h2_worker_mplx_next_fn *get_next;
h2_worker_done_fn *worker_done;
int h2_worker_is_aborted(h2_worker *worker);
-struct h2_task *h2_worker_create_task(h2_worker *worker, struct h2_mplx *m,
- const struct h2_request *req);
-
#endif /* defined(__mod_h2__h2_worker__) */
#include <http_core.h>
#include <http_log.h>
+#include "h2.h"
#include "h2_private.h"
#include "h2_mplx.h"
-#include "h2_request.h"
-#include "h2_task_queue.h"
+#include "h2_task.h"
#include "h2_worker.h"
#include "h2_workers.h"
}
}
+static h2_task *next_task(h2_workers *workers)
+{
+ h2_task *task = NULL;
+ h2_mplx *last = NULL;
+ int has_more;
+
+ /* Get the next h2_mplx to process that has a task to hand out.
+ * If it does, place it at the end of the queue and return the
+ * task to the worker.
+ * If it (currently) has no tasks, remove it so that it needs
+ * to register again for scheduling.
+ * If we run out of h2_mplx in the queue, we need to wait for
+ * new mplx to arrive. Depending on how many workers do exist,
+ * we do a timed wait or block indefinitely.
+ */
+ while (!task && !H2_MPLX_LIST_EMPTY(&workers->mplxs)) {
+ h2_mplx *m = H2_MPLX_LIST_FIRST(&workers->mplxs);
+
+ if (last == m) {
+ break;
+ }
+ H2_MPLX_REMOVE(m);
+ --workers->mplx_count;
+
+ task = h2_mplx_pop_task(m, &has_more);
+ if (has_more) {
+ H2_MPLX_LIST_INSERT_TAIL(&workers->mplxs, m);
+ ++workers->mplx_count;
+ if (!last) {
+ last = m;
+ }
+ }
+ }
+ return task;
+}
+
/**
* Get the next task for the given worker. Will block until a task arrives
* or the max_wait timer expires and more than min workers exist.
- * The previous h2_mplx instance might be passed in and will be served
- * with preference, since we can ask it for the next task without aquiring
- * the h2_workers lock.
*/
-static apr_status_t get_mplx_next(h2_worker *worker, h2_mplx **pm,
- const h2_request **preq, void *ctx)
+static apr_status_t get_mplx_next(h2_worker *worker, void *ctx,
+ h2_task **ptask, int *psticky)
{
apr_status_t status;
- apr_time_t max_wait, start_wait;
- h2_workers *workers = (h2_workers *)ctx;
+ apr_time_t wait_until = 0, now;
+ h2_workers *workers = ctx;
+ h2_task *task = NULL;
- max_wait = apr_time_from_sec(apr_atomic_read32(&workers->max_idle_secs));
- start_wait = apr_time_now();
+ *ptask = NULL;
+ *psticky = 0;
status = apr_thread_mutex_lock(workers->lock);
if (status == APR_SUCCESS) {
- const h2_request *req = NULL;
- h2_mplx *m = NULL;
- int has_more = 0;
-
- ++workers->idle_worker_count;
+ ++workers->idle_workers;
ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
"h2_worker(%d): looking for work", h2_worker_get_id(worker));
- while (!req && !h2_worker_is_aborted(worker) && !workers->aborted) {
-
- /* Get the next h2_mplx to process that has a task to hand out.
- * If it does, place it at the end of the queu and return the
- * task to the worker.
- * If it (currently) has no tasks, remove it so that it needs
- * to register again for scheduling.
- * If we run out of h2_mplx in the queue, we need to wait for
- * new mplx to arrive. Depending on how many workers do exist,
- * we do a timed wait or block indefinitely.
- */
- m = NULL;
- while (!req && !H2_MPLX_LIST_EMPTY(&workers->mplxs)) {
- m = H2_MPLX_LIST_FIRST(&workers->mplxs);
- H2_MPLX_REMOVE(m);
+ while (!h2_worker_is_aborted(worker) && !workers->aborted
+ && !(task = next_task(workers))) {
+
+ /* Need to wait for new tasks to arrive. If we are above the
+ * minimum number of workers, we do a timed wait. When the timeout
+ * occurs and we still have more workers than the minimum, we shut
+ * them down one after the other. */
+ cleanup_zombies(workers, 0);
+ if (workers->worker_count > workers->min_workers) {
+ now = apr_time_now();
+ if (now >= wait_until) {
+ wait_until = now + apr_time_from_sec(workers->max_idle_secs);
+ }
- req = h2_mplx_pop_request(m, &has_more);
- if (req) {
- if (has_more) {
- H2_MPLX_LIST_INSERT_TAIL(&workers->mplxs, m);
- }
- else {
- has_more = !H2_MPLX_LIST_EMPTY(&workers->mplxs);
- }
+ ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
+ "h2_worker(%d): waiting signal, "
+ "workers=%d, idle=%d", worker->id,
+ (int)workers->worker_count,
+ workers->idle_workers);
+ status = apr_thread_cond_timedwait(workers->mplx_added,
+ workers->lock,
+ wait_until - now);
+ if (status == APR_TIMEUP
+ && workers->worker_count > workers->min_workers) {
+ /* waited long enough without getting a task and
+ * we are above min workers, abort this one. */
+ ap_log_error(APLOG_MARK, APLOG_TRACE3, 0,
+ workers->s,
+ "h2_workers: aborting idle worker");
+ h2_worker_abort(worker);
break;
}
}
-
- if (!req) {
- /* Need to wait for a new mplx to arrive.
- */
- cleanup_zombies(workers, 0);
-
- if (workers->worker_count > workers->min_size) {
- apr_time_t now = apr_time_now();
- if (now >= (start_wait + max_wait)) {
- /* waited long enough without getting a task. */
- if (workers->worker_count > workers->min_size) {
- ap_log_error(APLOG_MARK, APLOG_TRACE3, 0,
- workers->s,
- "h2_workers: aborting idle worker");
- h2_worker_abort(worker);
- break;
- }
- }
- ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
- "h2_worker(%d): waiting signal, "
- "worker_count=%d", worker->id,
- (int)workers->worker_count);
- apr_thread_cond_timedwait(workers->mplx_added,
- workers->lock, max_wait);
- }
- else {
- ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
- "h2_worker(%d): waiting signal (eternal), "
- "worker_count=%d", worker->id,
- (int)workers->worker_count);
- apr_thread_cond_wait(workers->mplx_added, workers->lock);
- }
+ else {
+ ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
+ "h2_worker(%d): waiting signal (eternal), "
+ "worker_count=%d, idle=%d", worker->id,
+ (int)workers->worker_count,
+ workers->idle_workers);
+ apr_thread_cond_wait(workers->mplx_added, workers->lock);
}
}
- /* Here, we either have gotten task and mplx for the worker or
- * needed to give up with more than enough workers.
- /* Here, we either have gotten a task or decided to shut down
+ * the calling worker.
*/
- if (req) {
- ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
- "h2_worker(%d): start request(%ld-%d)",
- h2_worker_get_id(worker), m->id, req->id);
- *pm = m;
- *preq = req;
+ if (task) {
+ /* Ok, we got something to give back to the worker for execution.
+ * If we have more idle workers than h2_mplx in our queue, then
+ * we let the worker be sticky, i.e. making it poll the task's
+ * h2_mplx instance for more work before asking back here.
+ * This avoids entering our global lock as long as enough idle
+ * workers remain. Stickiness of a worker ends when the connection
+ * has no new tasks to process, so the worker will get back here
+ * eventually.
+ */
+ *ptask = task;
+ *psticky = (workers->max_workers >= workers->mplx_count);
- if (has_more && workers->idle_worker_count > 1) {
+ if (workers->mplx_count && workers->idle_workers > 1) {
apr_thread_cond_signal(workers->mplx_added);
}
- status = APR_SUCCESS;
- }
- else {
- status = APR_EOF;
}
- --workers->idle_worker_count;
+ --workers->idle_workers;
apr_thread_mutex_unlock(workers->lock);
}
- return status;
+ return *ptask ? APR_SUCCESS : APR_EOF;
}
static void worker_done(h2_worker *worker, void *ctx)
{
- h2_workers *workers = (h2_workers *)ctx;
+ h2_workers *workers = ctx;
apr_status_t status = apr_thread_mutex_lock(workers->lock);
if (status == APR_SUCCESS) {
ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
"h2_workers: starting");
- while (workers->worker_count < workers->min_size
+ while (workers->worker_count < workers->min_workers
&& status == APR_SUCCESS) {
status = add_worker(workers);
}
}
h2_workers *h2_workers_create(server_rec *s, apr_pool_t *server_pool,
- int min_size, int max_size,
+ int min_workers, int max_workers,
apr_size_t max_tx_handles)
{
apr_status_t status;
* happen on the pool handed to us, which we do not guard.
*/
apr_pool_create(&pool, server_pool);
+ apr_pool_tag(pool, "h2_workers");
workers = apr_pcalloc(pool, sizeof(h2_workers));
if (workers) {
workers->s = s;
workers->pool = pool;
- workers->min_size = min_size;
- workers->max_size = max_size;
- apr_atomic_set32(&workers->max_idle_secs, 10);
+ workers->min_workers = min_workers;
+ workers->max_workers = max_workers;
+ workers->max_idle_secs = 10;
workers->max_tx_handles = max_tx_handles;
workers->spare_tx_handles = workers->max_tx_handles;
apr_status_t status = apr_thread_mutex_lock(workers->lock);
if (status == APR_SUCCESS) {
ap_log_error(APLOG_MARK, APLOG_TRACE3, status, workers->s,
- "h2_workers: register mplx(%ld)", m->id);
+ "h2_workers: register mplx(%ld), idle=%d",
+ m->id, workers->idle_workers);
if (in_list(workers, m)) {
- ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
- "h2_workers: already registered mplx(%ld)", m->id);
status = APR_EAGAIN;
}
else {
H2_MPLX_LIST_INSERT_TAIL(&workers->mplxs, m);
+ ++workers->mplx_count;
status = APR_SUCCESS;
}
- if (workers->idle_worker_count > 0) {
+ if (workers->idle_workers > 0) {
apr_thread_cond_signal(workers->mplx_added);
}
else if (status == APR_SUCCESS
- && workers->worker_count < workers->max_size) {
+ && workers->worker_count < workers->max_workers) {
ap_log_error(APLOG_MARK, APLOG_TRACE3, 0, workers->s,
"h2_workers: got %d workers, adding 1",
workers->worker_count);
" is not valid, ignored.", idle_secs);
return;
}
- apr_atomic_set32(&workers->max_idle_secs, idle_secs);
+ workers->max_idle_secs = idle_secs;
}
apr_size_t h2_workers_tx_reserve(h2_workers *workers, apr_size_t count)
struct h2_mplx;
struct h2_request;
struct h2_task;
-struct h2_task_queue;
typedef struct h2_workers h2_workers;
apr_pool_t *pool;
int next_worker_id;
- int min_size;
- int max_size;
+ int min_workers;
+ int max_workers;
+ int worker_count;
+ int idle_workers;
+ int max_idle_secs;
apr_size_t max_tx_handles;
apr_size_t spare_tx_handles;
APR_RING_HEAD(h2_worker_list, h2_worker) workers;
APR_RING_HEAD(h2_worker_zombies, h2_worker) zombies;
APR_RING_HEAD(h2_mplx_list, h2_mplx) mplxs;
-
- int worker_count;
- volatile apr_uint32_t max_idle_secs;
- volatile apr_uint32_t idle_worker_count;
+ int mplx_count;
struct apr_thread_mutex_t *lock;
struct apr_thread_cond_t *mplx_added;
#include <apr_optional.h>
#include <apr_optional_hooks.h>
+#include <apr_time.h>
#include <apr_want.h>
#include <httpd.h>
#include "h2_config.h"
#include "h2_ctx.h"
#include "h2_h2.h"
+#include "h2_mplx.h"
#include "h2_push.h"
#include "h2_request.h"
#include "h2_switch.h"
apr_pool_t *ptemp, server_rec *s)
{
void *data = NULL;
- const char *mod_h2_init_key = "mod_h2_init_counter";
+ const char *mod_h2_init_key = "mod_http2_init_counter";
nghttp2_info *ngh2;
apr_status_t status;
(void)plog;(void)ptemp;
MOD_HTTP2_VERSION, ngh2? ngh2->version_str : "unknown");
switch (h2_conn_mpm_type()) {
+ case H2_MPM_SIMPLE:
+ case H2_MPM_MOTORZ:
+ case H2_MPM_NETWARE:
+ case H2_MPM_WINNT:
+ /* not sure we need something extra for those. */
+ break;
case H2_MPM_EVENT:
case H2_MPM_WORKER:
/* all fine, we know these ones */
if (status == APR_SUCCESS) {
status = h2_switch_init(p, s);
}
+ if (status == APR_SUCCESS) {
+ status = h2_task_init(p, s);
+ }
return status;
}
conn_rec *, request_rec *, char *name);
static int http2_is_h2(conn_rec *);
+static apr_status_t http2_req_engine_push(const char *engine_type,
+ request_rec *r,
+ h2_req_engine_init *einit)
+{
+ return h2_mplx_engine_push(engine_type, r, einit);
+}
+
+static apr_status_t http2_req_engine_pull(h2_req_engine *engine,
+ apr_read_type_e block,
+ request_rec **pr)
+{
+ return h2_mplx_engine_pull(engine, block, pr);
+}
+
+static void http2_req_engine_done(h2_req_engine *engine, conn_rec *r_conn)
+{
+ h2_mplx_engine_done(engine, r_conn);
+}
+
+static void http2_req_engine_exit(h2_req_engine *engine)
+{
+ h2_mplx_engine_exit(engine);
+}
+
+
/* Runs once per created child process. Perform any process
 * related initialization here.
*/
APLOGNO(02949) "initializing connection handling");
}
- APR_REGISTER_OPTIONAL_FN(http2_is_h2);
- APR_REGISTER_OPTIONAL_FN(http2_var_lookup);
}
/* Install this module into the apache2 infrastructure.
{
static const char *const mod_ssl[] = { "mod_ssl.c", NULL};
+ APR_REGISTER_OPTIONAL_FN(http2_is_h2);
+ APR_REGISTER_OPTIONAL_FN(http2_var_lookup);
+ APR_REGISTER_OPTIONAL_FN(http2_req_engine_push);
+ APR_REGISTER_OPTIONAL_FN(http2_req_engine_pull);
+ APR_REGISTER_OPTIONAL_FN(http2_req_engine_done);
+ APR_REGISTER_OPTIONAL_FN(http2_req_engine_exit);
+
ap_log_perror(APLOG_MARK, APLOG_TRACE1, 0, pool, "installing hooks");
/* Run once after configuration is set, but before mpm children initialize.
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /MD /W3 /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "ssize_t=long" /FD /c
-# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /I "../../srclib/nghttp2/lib/includes" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /D "ssize_t=long" /Fd"Release\mod_http2_src" /FD /c
+# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../ssl" /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /I "../../srclib/nghttp2/lib/includes" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /D "ssize_t=long" /Fd"Release\mod_http2_src" /FD /c
# ADD BASE MTL /nologo /D "NDEBUG" /win32
# ADD MTL /nologo /D "NDEBUG" /mktyplib203 /win32
# ADD BASE RSC /l 0x409 /d "NDEBUG"
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /MDd /W3 /EHsc /Zi /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "ssize_t=long" /FD /c
-# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /I "../../srclib/nghttp2/lib/includes" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /D "ssize_t=long" /Fd"Debug\mod_http2_src" /FD /c
+# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../ssl" /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /I "../../srclib/nghttp2/lib/includes" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /D "ssize_t=long" /Fd"Debug\mod_http2_src" /FD /c
# ADD BASE MTL /nologo /D "_DEBUG" /win32
# ADD MTL /nologo /D "_DEBUG" /mktyplib203 /win32
# ADD BASE RSC /l 0x409 /d "_DEBUG"
# End Source File
# Begin Source File
+SOURCE=./h2_int_queue.c
+# End Source File
+# Begin Source File
+
SOURCE=./h2_io.c
# End Source File
# Begin Source File
# End Source File
# Begin Source File
-SOURCE=./h2_stream_set.c
-# End Source File
-# Begin Source File
-
SOURCE=./h2_switch.c
# End Source File
# Begin Source File
# End Source File
# Begin Source File
-SOURCE=./h2_task_queue.c
-# End Source File
-# Begin Source File
-
SOURCE=./h2_util.c
# End Source File
# Begin Source File
* limitations under the License.
*/
-#ifndef mod_http2_mod_http2_h
-#define mod_http2_mod_http2_h
+#ifndef __MOD_HTTP2_H__
+#define __MOD_HTTP2_H__
/** The http2_var_lookup() optional function retrieves HTTP2 environment
* variables. */
-APR_DECLARE_OPTIONAL_FN(char *, http2_var_lookup,
- (apr_pool_t *, server_rec *,
- conn_rec *, request_rec *,
- char *));
+APR_DECLARE_OPTIONAL_FN(char *,
+ http2_var_lookup, (apr_pool_t *, server_rec *,
+ conn_rec *, request_rec *, char *));
/** An optional function which returns non-zero if the given connection
* or its master connection is using HTTP/2. */
-APR_DECLARE_OPTIONAL_FN(int, http2_is_h2, (conn_rec *));
+APR_DECLARE_OPTIONAL_FN(int,
+ http2_is_h2, (conn_rec *));
+
+
+/*******************************************************************************
+ * HTTP/2 request engines
+ ******************************************************************************/
+
+struct apr_thread_cond_t;
+
+typedef struct h2_req_engine h2_req_engine;
+
+/**
+ * Initialize a h2_req_engine. The structure is allocated and passed
+ * in with only the identifying fields (such as id and type) set. The
+ * function should initialize all remaining fields.
+ * @param engine the allocated, partially filled structure
+ * @param r the first request to process, or NULL
+ */
+typedef apr_status_t h2_req_engine_init(h2_req_engine *engine, request_rec *r);
+
+/**
+ * The public structure of a h2_req_engine. It gets allocated by the http2
+ * infrastructure, assigned id, type, pool, io and connection and passed to the
+ * h2_req_engine_init() callback to complete initialization.
+ * This happens whenever a new request gets "push"ed for an engine type and
+ * no instance, or no free instance, for the type is available.
+ */
+struct h2_req_engine {
+ const char *id; /* identifier */
+ apr_pool_t *pool; /* pool for engine specific allocations */
+ const char *type; /* name of the engine type */
+ unsigned char window_bits;/* preferred size of overall response data
+ * mod_http2 is willing to buffer as log2 */
+ unsigned char req_window_bits;/* preferred size of response body data
+ * mod_http2 is willing to buffer per request,
+ * as log2 */
+ apr_size_t capacity; /* maximum concurrent requests */
+ void *user_data; /* user specific data */
+};
+
+/**
+ * Push a request to an engine with the specified name for further processing.
+ * If no such engine is available and einit is not NULL, einit is called
+ * with a new engine record and the caller is responsible for running the
+ * new engine instance.
+ * @param engine_type the type of the engine to add the request to
+ * @param r the request to push to an engine for processing
+ * @param einit an optional initialization callback for a new engine
+ * of the requested type, should no instance be available.
+ * By passing a non-NULL callback, the caller is willing
+ * to init and run a new engine itself.
+ * @return APR_SUCCESS iff the request was successfully added to an engine
+ */
+APR_DECLARE_OPTIONAL_FN(apr_status_t,
+ http2_req_engine_push, (const char *engine_type,
+ request_rec *r,
+ h2_req_engine_init *einit));
+
+/**
+ * Get a new request for processing in this engine.
+ * @param engine the engine asking for a new request
+ * @param block APR_BLOCK_READ to wait until a request is available,
+ * APR_NONBLOCK_READ to return immediately when none is
+ * @param pr the request that needs processing, or NULL
+ * @return APR_SUCCESS if a new request was assigned
+ * APR_EAGAIN/APR_TIMEUP if no new request is available
+ * APR_ECONNABORTED if the engine needs to shut down
+ */
+APR_DECLARE_OPTIONAL_FN(apr_status_t,
+ http2_req_engine_pull, (h2_req_engine *engine,
+ apr_read_type_e block,
+ request_rec **pr));
+/**
+ * Report the connection of a pulled request as processed.
+ * @param engine the engine that processed the request
+ * @param rconn the connection of the processed request
+ */
+APR_DECLARE_OPTIONAL_FN(void,
+ http2_req_engine_done, (h2_req_engine *engine,
+ conn_rec *rconn));
+/**
+ * The given request engine is done processing and needs to be excluded
+ * from further handling.
+ * @param engine the engine to exit
+ */
+APR_DECLARE_OPTIONAL_FN(void,
+ http2_req_engine_exit, (h2_req_engine *engine));
+
+
+#define H2_TASK_ID_NOTE "http2-task-id"
#endif