From a08b5085e8f9fc3337af2552de92c35700ae613f Mon Sep 17 00:00:00 2001
From: Jim Jagielski
This module is experimental. Its behaviors, directives, and defaults are
subject to more change from release to release relative to other standard
modules. Users are encouraged to consult the "CHANGES" file for potential
updates.
You must enable HTTP/2 via the Protocols directive in order to use the
functionality described in this document:

    Protocols h2 http/1.1
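The directive can be set for the whole server or per virtual host. As a
minimal sketch (the hostname is a placeholder, not part of the original
document), enabling HTTP/2 alongside HTTP/1.1 for a single TLS virtual host
could look like this:

    <VirtualHost *:443>
        ServerName www.example.org
        Protocols h2 http/1.1
    </VirtualHost>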
This module can be configured to provide HTTP/2 related information as
additional environment variables to the SSI and CGI namespace:

    HTTP2     flag, set when the request is served over HTTP/2
    H2PUSH    flag, set when HTTP/2 server push is enabled for the connection
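For illustration only (not part of the original patch), these variables can
be picked up wherever environment variables are available, for example in a
custom log format; the format name h2combined and the log path below are
placeholders:

    # Record whether HTTP/2 and server push were in play for each request
    LogFormat "%h %l %u %t \"%r\" %>s %b http2=%{HTTP2}e push=%{H2PUSH}e" h2combined
    CustomLog logs/access_log h2combined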
Enabling HTTP/2 on your Apache server has an impact on resource
consumption, and if you run a busy site, you should consider the
implications carefully.

The first noticeable thing after enabling HTTP/2 is that your server
processes start additional threads. The reason for this is that HTTP/2
hands all requests it receives to its own worker threads for processing,
collects the results and streams them out to the client.
In the current implementation, these workers use a separate thread pool
from the MPM workers that you might be familiar with. This is just how
things are right now and not intended to stay like this forever. (It might
stay this way for the 2.4.x release line, though.) So, HTTP/2 workers, or
H2Workers for short, will not show up in mod_status. They are also not
counted against directives such as ThreadsPerChild. However, they use
ThreadsPerChild as their default if you have not configured something else
via H2MinWorkers and H2MaxWorkers.
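As a sketch (the numbers are illustrative, not recommendations), the size
of the HTTP/2 worker pool can be pinned explicitly:

    # Keep between 10 and 30 HTTP/2 worker threads per child process
    H2MinWorkers 10
    H2MaxWorkers 30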
Another thing to watch out for is memory consumption. Since HTTP/2 keeps
more state on the server to manage all the open requests, their priorities
and the dependencies between them, it will always need more memory than
HTTP/1.1 processing. There are three directives which steer the memory
footprint of an HTTP/2 connection: H2MaxSessionStreams, H2WindowSize and
H2StreamMaxMemSize.
H2MaxSessionStreams limits the number of parallel requests that a client
can make on an HTTP/2 connection. How many you should allow depends on your
site. The default is 100, which is plenty, and unless you run into memory
problems, I would keep it this way. Most requests that browsers send are
GETs without a body, so they use up only a little bit of memory until the
actual processing starts.
H2WindowSize controls how much the client is allowed to send as the body
of a request before it has to wait for the server to encourage more. Or,
looked at the other way around, it is the amount of request body data the
server needs to be able to buffer. This amount applies per request.
And last, but not least, H2StreamMaxMemSize controls how much response
data shall be buffered. The request sits in an H2Worker thread and produces
data, which the HTTP/2 connection tries to send to the client. If the client
does not read fast enough, the connection will buffer this amount of data
and then suspend the H2Worker.
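As a configuration sketch tying these three directives together (the
window and buffer sizes shown are illustrative, not recommendations):

    # Cap concurrent streams and per-stream buffering on HTTP/2 connections
    H2MaxSessionStreams 100
    H2WindowSize        65535
    H2StreamMaxMemSize  65536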
If you serve a lot of static files, H2SessionExtraFiles is of interest.
This tells the server how many file handles per HTTP/2 connection it is
allowed to spend for better performance. When a request produces a static
file as the response, the file handle gets passed around and buffered, not
the file contents. That allows the server to deliver many large files
without wasting memory or copying data unnecessarily. However, file handles
are a limited resource for a process, and if too many are used this way,
requests may fail under load because the limit on open handles has been
exceeded.
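A minimal sketch (the number is illustrative):

    # Allow up to 10 extra open file handles per HTTP/2 connection
    H2SessionExtraFiles 10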
Many sites use the same TLS certificate for multiple virtual hosts. The
certificate either has a wildcard name, such as '*.example.org', or carries
several alternate names. Browsers using HTTP/2 will recognize that and reuse
an already opened connection for such hosts.

While this is great for performance, it comes at a price: such vhosts need
more care in their configuration. The problem is that you will have multiple
requests for multiple hosts on the same TLS connection. And that makes
renegotiation impossible; in fact, the HTTP/2 standard forbids it.

So, if you have several virtual hosts using the same certificate and want
to use HTTP/2 for them, you need to make sure that all vhosts have exactly
the same SSL configuration. You need the same protocol, ciphers and settings
for client verification.

If you mix things, Apache httpd will detect it and return a special response
code, 421 Misdirected Request, to the client.
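A sketch of what "exactly the same SSL configuration" can look like for two
such vhosts (hostnames, certificate paths and the cipher string are
placeholders, not recommendations):

    <VirtualHost *:443>
        ServerName www.example.org
        Protocols h2 http/1.1
        SSLEngine on
        SSLCertificateFile    /path/to/wildcard-cert.pem
        SSLCertificateKeyFile /path/to/wildcard-key.pem
        SSLProtocol           all -SSLv3 -TLSv1 -TLSv1.1
        SSLCipherSuite        HIGH:!aNULL
    </VirtualHost>

    <VirtualHost *:443>
        ServerName api.example.org
        Protocols h2 http/1.1
        SSLEngine on
        # identical TLS settings, so HTTP/2 connection reuse stays safe
        SSLCertificateFile    /path/to/wildcard-cert.pem
        SSLCertificateKeyFile /path/to/wildcard-key.pem
        SSLProtocol           all -SSLv3 -TLSv1 -TLSv1.1
        SSLCipherSuite        HIGH:!aNULL
    </VirtualHost>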
Resources to push can be announced by adding 'Link' response headers via
mod_headers as:

    <Location /index.html>
        Header add Link "</css/site.css>;rel=preload"
        Header add Link "</images/logo.jpg>;rel=preload"
    </Location>

As the example shows, there can be several Link headers added.
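Server push as a whole can also be switched off where it is not wanted; as
a sketch (the hostname is a placeholder), the H2Push directive toggles it
per server or virtual host:

    <VirtualHost *:443>
        ServerName static.example.org
        Protocols h2 http/1.1
        # Disable HTTP/2 server push for this host entirely
        H2Push off
    </VirtualHost>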
The H2PushPriority directive defines the priority handling of pushed
responses, for example:

    H2PushPriority application/json 32          # an After rule
    H2PushPriority image/jpeg before            # weight inherited
    H2PushPriority text/css interleaved         # weight 256 default
Measurements by Google performance labs show that the best performance on
TLS connections is reached if initial record sizes stay below the MTU level,
so that a complete record fits into an IP packet.
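mod_http2 exposes this trade-off through the H2TLSWarmUpSize and
H2TLSCoolDownSecs directives; a sketch with illustrative values (not tuning
advice):

    # Send small TLS records until about 1 MB has been written on the
    # connection; fall back to small records after 1 second of idle time
    H2TLSWarmUpSize   1048576
    H2TLSCoolDownSecs 1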
-- 2.50.0