1 <?xml version="1.0" encoding="UTF-8" ?>
2 <!DOCTYPE manualpage SYSTEM "../style/manualpage.dtd">
3 <?xml-stylesheet type="text/xsl" href="../style/manual.en.xsl"?>
4 <!-- $LastChangedRevision$ -->
7 Copyright 2002-2004 The Apache Software Foundation
9 Licensed under the Apache License, Version 2.0 (the "License");
10 you may not use this file except in compliance with the License.
11 You may obtain a copy of the License at
13 http://www.apache.org/licenses/LICENSE-2.0
15 Unless required by applicable law or agreed to in writing, software
16 distributed under the License is distributed on an "AS IS" BASIS,
17 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18 See the License for the specific language governing permissions and
19 limitations under the License.
22 <manualpage metafile="rewriteguide.xml.meta">
23 <parentdocument href="./">Miscellaneous Documentation</parentdocument>
25 <title>URL Rewriting Guide</title>
29 <p>Originally written by<br />
30 <cite>Ralf S. Engelschall <rse@apache.org></cite><br />
34 <p>This document supplements the <module>mod_rewrite</module>
35 <a href="../mod/mod_rewrite.html">reference documentation</a>.
It describes how one can use Apache's <module>mod_rewrite</module>
to solve typical URL-based problems with which webmasters are
commonly confronted. We give detailed descriptions of how to
solve each problem by configuring URL rewriting rulesets.</p>
45 <title>Introduction to <code>mod_rewrite</code></title>
<p>The Apache module <module>mod_rewrite</module> is a killer
one, i.e. it is a really sophisticated module which provides
a powerful way to do URL manipulations. With it you can do nearly
all types of URL manipulations you ever dreamed of.
The price you have to pay is complexity, because
<module>mod_rewrite</module>'s major drawback is that it is
not easy for the beginner to understand and use. And even
Apache experts sometimes discover new aspects where
<module>mod_rewrite</module> can help.</p>
57 <p>In other words: With <module>mod_rewrite</module> you either
58 shoot yourself in the foot the first time and never use it again
59 or love it for the rest of your life because of its power.
This paper tries to give you a few initial successes and to
avoid the first case by presenting ready-made solutions
68 <title>Practical Solutions</title>
<p>Here are a lot of practical solutions I've either invented
myself or collected from other people's solutions in the past.
72 Feel free to learn the black magic of URL rewriting from
75 <note type="warning">ATTENTION: Depending on your server-configuration
76 it can be necessary to slightly change the examples for your
77 situation, e.g. adding the <code>[PT]</code> flag when
78 additionally using <module>mod_alias</module> and
79 <module>mod_userdir</module>, etc. Or rewriting a ruleset
80 to fit in <code>.htaccess</code> context instead
of per-server context. Always try to understand what a
particular ruleset really does before you use it; this
avoids many problems.</note>
89 <title>URL Layout</title>
93 <title>Canonical URLs</title>
<p>On some webservers there is more than one URL for a
resource. Usually there are canonical URLs (which should
actually be used and distributed) and those which are just
shortcuts, internal ones, etc. Independent of which URL the
user supplied with the request, he should finally see only
the canonical one.</p>
110 <p>We do an external HTTP redirect for all non-canonical
URLs to fix them in the location view of the browser and
112 for all subsequent requests. In the example ruleset below
113 we replace <code>/~user</code> by the canonical
114 <code>/u/user</code> and fix a missing trailing slash for
115 <code>/u/user</code>.</p>
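As an aside (a sketch added here, not part of the original guide), the intended mapping of the two patterns can be sanity-checked with Python's <code>re</code> module; the username <code>quux</code> is made up:

```python
import re

def canonicalize(url):
    # Rule 1: replace /~user by the canonical /u/user
    url = re.sub(r'^/~([^/]+)/?(.*)', r'/u/\1/\2', url)
    # Rule 2: add the missing trailing slash for bare /u|g|e/name URLs
    url = re.sub(r'^/([uge])/([^/]+)$', r'/\1/\2/', url)
    return url

print(canonicalize('/~quux/foo.html'))  # /u/quux/foo.html
print(canonicalize('/u/quux'))          # /u/quux/
```

Note that the second pattern requires a slash-free tail after the first path segment, so it only fires on bare <code>/u/user</code> style URLs and leaves deeper paths alone.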
118 RewriteRule ^/<strong>~</strong>([^/]+)/?(.*) /<strong>u</strong>/$1/$2 [<strong>R</strong>]
119 RewriteRule ^/([uge])/(<strong>[^/]+</strong>)$ /$1/$2<strong>/</strong> [<strong>R</strong>]
128 <title>Canonical Hostnames</title>
131 <dt>Description:</dt>
139 RewriteCond %{HTTP_HOST} !^fully\.qualified\.domain\.name [NC]
140 RewriteCond %{HTTP_HOST} !^$
141 RewriteCond %{SERVER_PORT} !^80$
142 RewriteRule ^/(.*) http://fully.qualified.domain.name:%{SERVER_PORT}/$1 [L,R]
143 RewriteCond %{HTTP_HOST} !^fully\.qualified\.domain\.name [NC]
144 RewriteCond %{HTTP_HOST} !^$
145 RewriteRule ^/(.*) http://fully.qualified.domain.name/$1 [L,R]
154 <title>Moved <code>DocumentRoot</code></title>
157 <dt>Description:</dt>
160 <p>Usually the <directive module="core">DocumentRoot</directive>
161 of the webserver directly relates to the URL "<code>/</code>".
But often this data is not really of top-level priority;
it is perhaps just one of a lot of data pools. For instance at
164 our Intranet sites there are <code>/e/www/</code>
165 (the homepage for WWW), <code>/e/sww/</code> (the homepage for
166 the Intranet) etc. Now because the data of the <directive module="core"
167 >DocumentRoot</directive> stays at <code>/e/www/</code> we had
168 to make sure that all inlined images and other stuff inside this
169 data pool work for subsequent requests.</p>
<p>We just redirect the URL <code>/</code> to
<code>/e/www/</code>. While this seems trivial, it is
actually trivial only with <module>mod_rewrite</module>,
because the typical old mechanisms of URL <em>Aliases</em>
(as provided by <module>mod_alias</module> and friends)
use only <em>prefix</em> matching. With these you cannot
do such a redirection because the <directive module="core"
>DocumentRoot</directive> is a prefix of all URLs. With
<module>mod_rewrite</module> it is really trivial:</p>
187 RewriteRule <strong>^/$</strong> /e/www/ [<strong>R</strong>]
196 <title>Trailing Slash Problem</title>
199 <dt>Description:</dt>
<p>Every webmaster can sing a song about the problem of
the trailing slash on URLs referencing directories. If it
is missing, the server dumps an error, because if you say
<code>/~quux/foo</code> instead of <code>/~quux/foo/</code>
then the server searches for a <em>file</em> named
<code>foo</code>. And because this name refers to a
directory, the server complains. Actually it tries to fix
this itself in most cases, but sometimes this mechanism
needs to be emulated by you, for instance after you have
done a lot of complicated URL rewritings to CGI scripts etc.</p>
217 <p>The solution to this subtle problem is to let the server
218 add the trailing slash automatically. To do this
219 correctly we have to use an external redirect, so the
220 browser correctly requests subsequent images etc. If we
only did an internal rewrite, this would only work for the
222 directory page, but would go wrong when any images are
223 included into this page with relative URLs, because the
224 browser would request an in-lined object. For instance, a
225 request for <code>image.gif</code> in
226 <code>/~quux/foo/index.html</code> would become
227 <code>/~quux/image.gif</code> without the external
230 <p>So, to do this trick we write:</p>
235 RewriteRule ^foo<strong>$</strong> foo<strong>/</strong> [<strong>R</strong>]
238 <p>The crazy and lazy can even do the following in the
239 top-level <code>.htaccess</code> file of their homedir.
240 But notice that this creates some processing
246 RewriteCond %{REQUEST_FILENAME} <strong>-d</strong>
247 RewriteRule ^(.+<strong>[^/]</strong>)$ $1<strong>/</strong> [R]
256 <title>Webcluster through Homogeneous URL Layout</title>
259 <dt>Description:</dt>
262 <p>We want to create a homogeneous and consistent URL
layout over all WWW servers on an Intranet webcluster, i.e.
264 all URLs (per definition server local and thus server
265 dependent!) become actually server <em>independent</em>!
266 What we want is to give the WWW namespace a consistent
267 server-independent layout: no URL should have to include
268 any physically correct target server. The cluster itself
269 should drive us automatically to the physical target
<p>First, the knowledge of the target servers comes from
(distributed) external maps which contain information on
where our users, groups and entities stay. They have the
282 user1 server_of_user1
283 user2 server_of_user2
287 <p>We put them into files <code>map.xxx-to-host</code>.
Second, we need to instruct all servers to redirect URLs
300 http://physical-host/u/user/anypath
301 http://physical-host/g/group/anypath
302 http://physical-host/e/entity/anypath
<p>when the URL is not locally valid to a server. The
following ruleset does this for us with the help of the map
files (assuming that server0 is a default server which
will be used if a user has no entry in the map):</p>
313 RewriteMap user-to-host txt:/path/to/map.user-to-host
314 RewriteMap group-to-host txt:/path/to/map.group-to-host
315 RewriteMap entity-to-host txt:/path/to/map.entity-to-host
317 RewriteRule ^/u/<strong>([^/]+)</strong>/?(.*) http://<strong>${user-to-host:$1|server0}</strong>/u/$1/$2
318 RewriteRule ^/g/<strong>([^/]+)</strong>/?(.*) http://<strong>${group-to-host:$1|server0}</strong>/g/$1/$2
319 RewriteRule ^/e/<strong>([^/]+)</strong>/?(.*) http://<strong>${entity-to-host:$1|server0}</strong>/e/$1/$2
321 RewriteRule ^/([uge])/([^/]+)/?$ /$1/$2/.www/
322 RewriteRule ^/([uge])/([^/]+)/([^.]+.+) /$1/$2/.www/$3\
331 <title>Move Homedirs to Different Webserver</title>
334 <dt>Description:</dt>
<p>Many webmasters have asked for a solution to the
following situation: they want to redirect all
homedirs on a webserver to another webserver. They usually
need such things when establishing a newer webserver which
will replace the old one over time.</p>
347 <p>The solution is trivial with <module>mod_rewrite</module>.
348 On the old webserver we just redirect all
349 <code>/~user/anypath</code> URLs to
350 <code>http://newserver/~user/anypath</code>.</p>
354 RewriteRule ^/~(.+) http://<strong>newserver</strong>/~$1 [R,L]
363 <title>Structured Homedirs</title>
366 <dt>Description:</dt>
369 <p>Some sites with thousands of users usually use a
370 structured homedir layout, i.e. each homedir is in a
371 subdirectory which begins for instance with the first
372 character of the username. So, <code>/~foo/anypath</code>
373 is <code>/home/<strong>f</strong>/foo/.www/anypath</code>
374 while <code>/~bar/anypath</code> is
375 <code>/home/<strong>b</strong>/bar/.www/anypath</code>.</p>
381 <p>We use the following ruleset to expand the tilde URLs
382 into exactly the above layout.</p>
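As a hypothetical cross-check (added here, not from the original guide), the same expansion can be expressed with Python's <code>re</code> module:

```python
import re

def expand_home(url):
    # pattern and substitution from the ruleset: the inner group
    # captures the first character of the username
    return re.sub(r'^/~(([a-z])[a-z0-9]+)(.*)', r'/home/\2/\1/.www\3', url)

print(expand_home('/~foo/anypath'))  # /home/f/foo/.www/anypath
print(expand_home('/~bar/anypath'))  # /home/b/bar/.www/anypath
```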
386 RewriteRule ^/~(<strong>([a-z])</strong>[a-z0-9]+)(.*) /home/<strong>$2</strong>/$1/.www$3
395 <title>Filesystem Reorganization</title>
398 <dt>Description:</dt>
401 <p>This really is a hardcore example: a killer application
402 which heavily uses per-directory
403 <code>RewriteRules</code> to get a smooth look and feel
404 on the Web while its data structure is never touched or
405 adjusted. Background: <strong><em>net.sw</em></strong> is
406 my archive of freely available Unix software packages,
407 which I started to collect in 1992. It is both my hobby
and job to do this, because while I'm studying computer
409 science I have also worked for many years as a system and
410 network administrator in my spare time. Every week I need
411 some sort of software so I created a deep hierarchy of
412 directories where I stored the packages:</p>
415 drwxrwxr-x 2 netsw users 512 Aug 3 18:39 Audio/
416 drwxrwxr-x 2 netsw users 512 Jul 9 14:37 Benchmark/
417 drwxrwxr-x 12 netsw users 512 Jul 9 00:34 Crypto/
418 drwxrwxr-x 5 netsw users 512 Jul 9 00:41 Database/
419 drwxrwxr-x 4 netsw users 512 Jul 30 19:25 Dicts/
420 drwxrwxr-x 10 netsw users 512 Jul 9 01:54 Graphic/
421 drwxrwxr-x 5 netsw users 512 Jul 9 01:58 Hackers/
422 drwxrwxr-x 8 netsw users 512 Jul 9 03:19 InfoSys/
423 drwxrwxr-x 3 netsw users 512 Jul 9 03:21 Math/
424 drwxrwxr-x 3 netsw users 512 Jul 9 03:24 Misc/
425 drwxrwxr-x 9 netsw users 512 Aug 1 16:33 Network/
426 drwxrwxr-x 2 netsw users 512 Jul 9 05:53 Office/
427 drwxrwxr-x 7 netsw users 512 Jul 9 09:24 SoftEng/
428 drwxrwxr-x 7 netsw users 512 Jul 9 12:17 System/
429 drwxrwxr-x 12 netsw users 512 Aug 3 20:15 Typesetting/
430 drwxrwxr-x 10 netsw users 512 Jul 9 14:08 X11/
433 <p>In July 1996 I decided to make this archive public to
434 the world via a nice Web interface. "Nice" means that I
435 wanted to offer an interface where you can browse
436 directly through the archive hierarchy. And "nice" means
that I didn't want to change anything inside this
hierarchy - not even by putting some CGI scripts at the
top of it. Why? Because the above structure should later
be accessible via FTP as well, and I didn't want any
441 Web or CGI stuff to be there.</p>
447 <p>The solution has two parts: The first is a set of CGI
448 scripts which create all the pages at all directory
449 levels on-the-fly. I put them under
450 <code>/e/netsw/.www/</code> as follows:</p>
453 -rw-r--r-- 1 netsw users 1318 Aug 1 18:10 .wwwacl
454 drwxr-xr-x 18 netsw users 512 Aug 5 15:51 DATA/
455 -rw-rw-rw- 1 netsw users 372982 Aug 5 16:35 LOGFILE
456 -rw-r--r-- 1 netsw users 659 Aug 4 09:27 TODO
457 -rw-r--r-- 1 netsw users 5697 Aug 1 18:01 netsw-about.html
458 -rwxr-xr-x 1 netsw users 579 Aug 2 10:33 netsw-access.pl
459 -rwxr-xr-x 1 netsw users 1532 Aug 1 17:35 netsw-changes.cgi
460 -rwxr-xr-x 1 netsw users 2866 Aug 5 14:49 netsw-home.cgi
461 drwxr-xr-x 2 netsw users 512 Jul 8 23:47 netsw-img/
462 -rwxr-xr-x 1 netsw users 24050 Aug 5 15:49 netsw-lsdir.cgi
463 -rwxr-xr-x 1 netsw users 1589 Aug 3 18:43 netsw-search.cgi
464 -rwxr-xr-x 1 netsw users 1885 Aug 1 17:41 netsw-tree.cgi
465 -rw-r--r-- 1 netsw users 234 Jul 30 16:35 netsw-unlimit.lst
468 <p>The <code>DATA/</code> subdirectory holds the above
469 directory structure, i.e. the real
470 <strong><em>net.sw</em></strong> stuff and gets
471 automatically updated via <code>rdist</code> from time to
472 time. The second part of the problem remains: how to link
473 these two structures together into one smooth-looking URL
474 tree? We want to hide the <code>DATA/</code> directory
475 from the user while running the appropriate CGI scripts
476 for the various URLs. Here is the solution: first I put
477 the following into the per-directory configuration file
478 in the <directive module="core">DocumentRoot</directive>
479 of the server to rewrite the announced URL
480 <code>/net.sw/</code> to the internal path
481 <code>/e/netsw</code>:</p>
484 RewriteRule ^net.sw$ net.sw/ [R]
485 RewriteRule ^net.sw/(.*)$ e/netsw/$1
488 <p>The first rule is for requests which miss the trailing
489 slash! The second rule does the real thing. And then
490 comes the killer configuration which stays in the
491 per-directory config file
492 <code>/e/netsw/.www/.wwwacl</code>:</p>
495 Options ExecCGI FollowSymLinks Includes MultiViews
499 # we are reached via /net.sw/ prefix
502 # first we rewrite the root dir to
503 # the handling cgi script
504 RewriteRule ^$ netsw-home.cgi [L]
505 RewriteRule ^index\.html$ netsw-home.cgi [L]
507 # strip out the subdirs when
508 # the browser requests us from perdir pages
509 RewriteRule ^.+/(netsw-[^/]+/.+)$ $1 [L]
511 # and now break the rewriting for local files
512 RewriteRule ^netsw-home\.cgi.* - [L]
513 RewriteRule ^netsw-changes\.cgi.* - [L]
514 RewriteRule ^netsw-search\.cgi.* - [L]
515 RewriteRule ^netsw-tree\.cgi$ - [L]
516 RewriteRule ^netsw-about\.html$ - [L]
517 RewriteRule ^netsw-img/.*$ - [L]
519 # anything else is a subdir which gets handled
520 # by another cgi script
521 RewriteRule !^netsw-lsdir\.cgi.* - [C]
522 RewriteRule (.*) netsw-lsdir.cgi/$1
525 <p>Some hints for interpretation:</p>
528 <li>Notice the <code>L</code> (last) flag and no
substitution field ('<code>-</code>') in the fourth part</li>
531 <li>Notice the <code>!</code> (not) character and
532 the <code>C</code> (chain) flag at the first rule
533 in the last part</li>
535 <li>Notice the catch-all pattern in the last rule</li>
544 <title>NCSA imagemap to Apache <code>mod_imap</code></title>
547 <dt>Description:</dt>
550 <p>When switching from the NCSA webserver to the more
551 modern Apache webserver a lot of people want a smooth
552 transition. So they want pages which use their old NCSA
553 <code>imagemap</code> program to work under Apache with the
554 modern <module>mod_imap</module>. The problem is that there
555 are a lot of hyperlinks around which reference the
556 <code>imagemap</code> program via
557 <code>/cgi-bin/imagemap/path/to/page.map</code>. Under
558 Apache this has to read just
559 <code>/path/to/page.map</code>.</p>
565 <p>We use a global rule to remove the prefix on-the-fly for
570 RewriteRule ^/cgi-bin/imagemap(.*) $1 [PT]
579 <title>Search pages in more than one directory</title>
582 <dt>Description:</dt>
585 <p>Sometimes it is necessary to let the webserver search
586 for pages in more than one directory. Here MultiViews or
587 other techniques cannot help.</p>
<p>We program an explicit ruleset which searches for the
files in the directories.</p>
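The search-in-order logic can also be sketched outside Apache; the following Python sketch (with the made-up <code>dir1</code>/<code>dir2</code> names matching the placeholders in the ruleset) mimics the chained file-existence tests:

```python
import os
import tempfile

def find_page(filename, docroot, dirs=('dir1', 'dir2')):
    # try each candidate directory in order, like the chained
    # "RewriteCond ... -f" / "RewriteRule ... [L]" pairs
    for d in dirs:
        candidate = os.path.join(docroot, d, filename)
        if os.path.isfile(candidate):
            return candidate
    return None  # fall through, as the final [PT] rule does

docroot = tempfile.mkdtemp()
os.makedirs(os.path.join(docroot, 'dir2'))
open(os.path.join(docroot, 'dir2', 'page.html'), 'w').close()
print(find_page('page.html', docroot))     # .../dir2/page.html
print(find_page('missing.html', docroot))  # None
```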
# first try to find it in dir1/...
# ...and if found stop and be happy:
601 RewriteCond /your/docroot/<strong>dir1</strong>/%{REQUEST_FILENAME} -f
602 RewriteRule ^(.+) /your/docroot/<strong>dir1</strong>/$1 [L]
# second try to find it in dir2/...
# ...and if found stop and be happy:
606 RewriteCond /your/docroot/<strong>dir2</strong>/%{REQUEST_FILENAME} -f
607 RewriteRule ^(.+) /your/docroot/<strong>dir2</strong>/$1 [L]
609 # else go on for other Alias or ScriptAlias directives,
611 RewriteRule ^(.+) - [PT]
620 <title>Set Environment Variables According To URL Parts</title>
623 <dt>Description:</dt>
626 <p>Perhaps you want to keep status information between
627 requests and use the URL to encode it. But you don't want
628 to use a CGI wrapper for all pages just to strip out this
635 <p>We use a rewrite rule to strip out the status information
and remember it via an environment variable which can
later be dereferenced from within XSSI or CGI. This way a
638 URL <code>/foo/S=java/bar/</code> gets translated to
639 <code>/foo/bar/</code> and the environment variable named
640 <code>STATUS</code> is set to the value "java".</p>
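To illustrate (a Python sketch added here, with made-up URLs), the pattern used in the rule both strips the status segment and captures its value:

```python
import re

def strip_status(url):
    # pattern from the rule: ^(.*)/S=([^/]+)/(.*)
    m = re.match(r'^(.*)/S=([^/]+)/(.*)', url)
    if not m:
        return url, None
    prefix, status, rest = m.groups()
    # STATUS would be set via [E=STATUS:$2] in the real rule
    return f'{prefix}/{rest}', status

print(strip_status('/foo/S=java/bar/'))  # ('/foo/bar/', 'java')
```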
644 RewriteRule ^(.*)/<strong>S=([^/]+)</strong>/(.*) $1/$3 [E=<strong>STATUS:$2</strong>]
653 <title>Virtual User Hosts</title>
656 <dt>Description:</dt>
659 <p>Assume that you want to provide
<code>www.<strong>username</strong>.host.com</code>
661 for the homepage of username via just DNS A records to the
662 same machine and without any virtualhosts on this
669 <p>For HTTP/1.0 requests there is no solution, but for
670 HTTP/1.1 requests which contain a Host: HTTP header we
671 can use the following ruleset to rewrite
672 <code>http://www.username.host.com/anypath</code>
673 internally to <code>/home/username/anypath</code>:</p>
677 RewriteCond %{<strong>HTTP_HOST</strong>} ^www\.<strong>[^.]+</strong>\.host\.com$
678 RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
679 RewriteRule ^www\.<strong>([^.]+)</strong>\.host\.com(.*) /home/<strong>$1</strong>$2
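The combined effect of the two chained rules can be sketched in Python (the hostname <code>quux</code> is made up):

```python
import re

def map_vhost(host, path):
    # net effect of the two chained rules: pull the username
    # out of the Host header and map into /home/username
    m = re.match(r'^www\.([^.]+)\.host\.com$', host)
    if not m:
        return path
    return f'/home/{m.group(1)}{path}'

print(map_vhost('www.quux.host.com', '/anypath'))  # /home/quux/anypath
print(map_vhost('www.elsewhere.com', '/anypath'))  # /anypath (untouched)
```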
688 <title>Redirect Homedirs For Foreigners</title>
691 <dt>Description:</dt>
694 <p>We want to redirect homedir URLs to another webserver
695 <code>www.somewhere.com</code> when the requesting user
does not come from the local domain
697 <code>ourdomain.com</code>. This is sometimes used in
698 virtual host contexts.</p>
704 <p>Just a rewrite condition:</p>
708 RewriteCond %{REMOTE_HOST} <strong>!^.+\.ourdomain\.com$</strong>
709 RewriteRule ^(/~.+) http://www.somewhere.com/$1 [R,L]
718 <title>Redirect Failing URLs To Other Webserver</title>
721 <dt>Description:</dt>
724 <p>A typical FAQ about URL rewriting is how to redirect
725 failing requests on webserver A to webserver B. Usually
726 this is done via <directive module="core"
727 >ErrorDocument</directive> CGI-scripts in Perl, but
728 there is also a <module>mod_rewrite</module> solution.
729 But notice that this performs more poorly than using an
730 <directive module="core">ErrorDocument</directive>
737 <p>The first solution has the best performance but less
738 flexibility, and is less error safe:</p>
742 RewriteCond /your/docroot/%{REQUEST_FILENAME} <strong>!-f</strong>
743 RewriteRule ^(.+) http://<strong>webserverB</strong>.dom/$1
746 <p>The problem here is that this will only work for pages
747 inside the <directive module="core">DocumentRoot</directive>. While you can add more
748 Conditions (for instance to also handle homedirs, etc.)
there is a better variant:</p>
753 RewriteCond %{REQUEST_URI} <strong>!-U</strong>
754 RewriteRule ^(.+) http://<strong>webserverB</strong>.dom/$1
757 <p>This uses the URL look-ahead feature of <module>mod_rewrite</module>.
The result is that this will work for all types of URLs
and is a safe way. But it has a performance impact on
the webserver, because for every request there is one
more internal subrequest. So, if your webserver runs on a
762 powerful CPU, use this one. If it is a slow machine, use
763 the first approach or better a <directive module="core"
764 >ErrorDocument</directive> CGI-script.</p>
772 <title>Extended Redirection</title>
775 <dt>Description:</dt>
778 <p>Sometimes we need more control (concerning the
779 character escaping mechanism) of URLs on redirects.
Usually the Apache kernel's URL escape function also
781 escapes anchors, i.e. URLs like "<code>url#anchor</code>".
782 You cannot use this directly on redirects with
783 <module>mod_rewrite</module> because the
784 <code>uri_escape()</code> function of Apache
785 would also escape the hash character.
786 How can we redirect to such a URL?</p>
<p>We have to use a kludge: an NPH-CGI script which does
the redirect itself, because there no escaping is done
(NPH = non-parsed headers). First we introduce a
795 new URL scheme <code>xredirect:</code> by the following
796 per-server config-line (should be one of the last rewrite
800 RewriteRule ^xredirect:(.+) /path/to/nph-xredirect.cgi/$1 \
801 [T=application/x-httpd-cgi,L]
804 <p>This forces all URLs prefixed with
805 <code>xredirect:</code> to be piped through the
806 <code>nph-xredirect.cgi</code> program. And this program
#!/path/to/perl
##
## nph-xredirect.cgi -- NPH/CGI script for extended redirects
813 ## Copyright (c) 1997 Ralf S. Engelschall, All Rights Reserved.
$| = 1;   # unbuffered output, required for NPH scripts
$url = $ENV{'PATH_INFO'};
819 print "HTTP/1.0 302 Moved Temporarily\n";
820 print "Server: $ENV{'SERVER_SOFTWARE'}\n";
821 print "Location: $url\n";
print "Content-type: text/html\n";
print "\n";
824 print "<html>\n";
825 print "<head>\n";
826 print "<title>302 Moved Temporarily (EXTENDED)</title>\n";
827 print "</head>\n";
828 print "<body>\n";
829 print "<h1>Moved Temporarily (EXTENDED)</h1>\n";
830 print "The document has moved <a HREF=\"$url\">here</a>.<p>\n";
831 print "</body>\n";
832 print "</html>\n";
837 <p>This provides you with the functionality to do
redirects to all URL schemes, i.e. including the ones
which are not directly accepted by <module>mod_rewrite</module>.
840 For instance you can now also redirect to
841 <code>news:newsgroup</code> via</p>
844 RewriteRule ^anyurl xredirect:news:newsgroup
<note>Notice: You must not put <code>[R]</code> or
<code>[R,L]</code> on the above rule because the
<code>xredirect:</code> URL needs to be expanded later
by our special "pipe through" rule above.</note>
858 <title>Archive Access Multiplexer</title>
861 <dt>Description:</dt>
864 <p>Do you know the great CPAN (Comprehensive Perl Archive
865 Network) under <a href="http://www.perl.com/CPAN"
866 >http://www.perl.com/CPAN</a>?
This does a redirect to one of several FTP servers around
the world which carry a CPAN mirror and are approximately
near the location of the requesting client. Actually this
can be called an FTP access multiplexing service. While
CPAN runs via CGI scripts, how can a similar approach be
implemented via <module>mod_rewrite</module>?</p>
878 <p>First we notice that from version 3.0.0
879 <module>mod_rewrite</module> can
880 also use the "<code>ftp:</code>" scheme on redirects.
881 And second, the location approximation can be done by a
882 <directive module="mod_rewrite">RewriteMap</directive>
883 over the top-level domain of the client.
884 With a tricky chained ruleset we can use this top-level
885 domain as a key to our multiplexing map.</p>
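As an illustration (a sketch with invented hosts, and a full URL substituted for the ruleset's bare <code>ftp.default.dom</code> default), the two chained rules amount to extracting the client's top-level domain and looking it up in the map:

```python
import re

# invented sample entries in the spirit of map.cxan
multiplex = {
    'de': 'ftp://ftp.cxan.de/CxAN/',
    'uk': 'ftp://ftp.cxan.uk/CxAN/',
}

def redirect(remote_host, url):
    m = re.match(r'^/CxAN/(.*)', url)
    if not m:
        return url
    # second rule: take everything after the last dot as the TLD key
    tld_match = re.match(r'^.+\.([a-zA-Z]+)$', remote_host)
    tld = tld_match.group(1) if tld_match else None
    target = multiplex.get(tld, 'ftp://ftp.default.dom/CxAN/')
    return target + m.group(1)

print(redirect('client.provider.de', '/CxAN/dir/file'))
# ftp://ftp.cxan.de/CxAN/dir/file
```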
889 RewriteMap multiplex txt:/path/to/map.cxan
890 RewriteRule ^/CxAN/(.*) %{REMOTE_HOST}::$1 [C]
891 RewriteRule ^.+\.<strong>([a-zA-Z]+)</strong>::(.*)$ ${multiplex:<strong>$1</strong>|ftp.default.dom}$2 [R,L]
896 ## map.cxan -- Multiplexing Map for CxAN
899 de ftp://ftp.cxan.de/CxAN/
900 uk ftp://ftp.cxan.uk/CxAN/
901 com ftp://ftp.cxan.com/CxAN/
912 <title>Time-Dependent Rewriting</title>
915 <dt>Description:</dt>
<p>When tricks like time-dependent content should happen,
a lot of webmasters still use CGI scripts which, for
instance, redirect to specialized pages. How can it be done
via <module>mod_rewrite</module>?</p>
927 <p>There are a lot of variables named <code>TIME_xxx</code>
928 for rewrite conditions. In conjunction with the special
929 lexicographic comparison patterns <code><STRING</code>,
930 <code>>STRING</code> and <code>=STRING</code> we can
931 do time-dependent redirects:</p>
935 RewriteCond %{TIME_HOUR}%{TIME_MIN} >0700
936 RewriteCond %{TIME_HOUR}%{TIME_MIN} <1900
937 RewriteRule ^foo\.html$ foo.day.html
938 RewriteRule ^foo\.html$ foo.night.html
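Note that <code>&gt;STRING</code> and <code>&lt;STRING</code> compare lexicographically, which works here because the <code>HHMM</code> values are zero-padded; a quick Python analogue (added as a sketch, not from the original guide):

```python
# lexicographic comparison of zero-padded HHMM strings, as the
# >STRING / <STRING patterns in the conditions above do
def is_day(hhmm):
    return hhmm > '0700' and hhmm < '1900'

print(is_day('0830'))  # True
print(is_day('2215'))  # False
```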
941 <p>This provides the content of <code>foo.day.html</code>
942 under the URL <code>foo.html</code> from
<code>07:00-19:00</code> and during the remaining time the
944 contents of <code>foo.night.html</code>. Just a nice
945 feature for a homepage...</p>
953 <title>Backward Compatibility for YYYY to XXXX migration</title>
956 <dt>Description:</dt>
959 <p>How can we make URLs backward compatible (still
960 existing virtually) after migrating <code>document.YYYY</code>
961 to <code>document.XXXX</code>, e.g. after translating a
962 bunch of <code>.html</code> files to <code>.phtml</code>?</p>
968 <p>We just rewrite the name to its basename and test for
969 existence of the new extension. If it exists, we take
970 that name, else we rewrite the URL to its original state.</p>
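The existence test and fallback can be mimicked outside Apache; this Python sketch (with a throwaway temp directory standing in for the <code>DocumentRoot</code>) shows the intended behavior:

```python
import os
import tempfile

def resolve(basename, docroot):
    # mirror the ruleset: strip .html, serve .phtml if it exists,
    # otherwise restore the original .html name
    if os.path.isfile(os.path.join(docroot, basename + '.phtml')):
        return basename + '.phtml'
    return basename + '.html'

docroot = tempfile.mkdtemp()
open(os.path.join(docroot, 'document.phtml'), 'w').close()
print(resolve('document', docroot))  # document.phtml
print(resolve('other', docroot))     # other.html
```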
974 # backward compatibility ruleset for
975 # rewriting document.html to document.phtml
976 # when and only when document.phtml exists
977 # but no longer document.html
980 # parse out basename, but remember the fact
981 RewriteRule ^(.*)\.html$ $1 [C,E=WasHTML:yes]
# rewrite to document.phtml if it exists
983 RewriteCond %{REQUEST_FILENAME}.phtml -f
984 RewriteRule ^(.*)$ $1.phtml [S=1]
985 # else reverse the previous basename cutout
986 RewriteCond %{ENV:WasHTML} ^yes$
987 RewriteRule ^(.*)$ $1.html
996 <section id="content">
998 <title>Content Handling</title>
1002 <title>From Old to New (intern)</title>
1005 <dt>Description:</dt>
1008 <p>Assume we have recently renamed the page
1009 <code>foo.html</code> to <code>bar.html</code> and now want
to provide the old URL for backward compatibility. Actually
we want users of the old URL to not even recognize that
the page was renamed.</p>
1018 <p>We rewrite the old URL to the new one internally via the
1024 RewriteRule ^<strong>foo</strong>\.html$ <strong>bar</strong>.html
1033 <title>From Old to New (extern)</title>
1036 <dt>Description:</dt>
1039 <p>Assume again that we have recently renamed the page
1040 <code>foo.html</code> to <code>bar.html</code> and now want
to provide the old URL for backward compatibility. But this
time we want the users of the old URL to get hinted to
the new one, i.e. their browser's Location field should
<p>We force an HTTP redirect to the new URL which leads to a
change of the browser's and thus the user's view:</p>
1056 RewriteRule ^<strong>foo</strong>\.html$ <strong>bar</strong>.html [<strong>R</strong>]
1065 <title>Browser Dependent Content</title>
1068 <dt>Description:</dt>
1071 <p>At least for important top-level pages it is sometimes
1072 necessary to provide the optimum of browser dependent
content, i.e. one has to provide a maximum version for the
latest Netscape variants, a minimum version for the Lynx
browsers and an average feature version for all others.</p>
1081 <p>We cannot use content negotiation because the browsers do
1082 not provide their type in that form. Instead we have to
act on the HTTP header "User-Agent". The following config
does the following: if the HTTP header "User-Agent"
begins with "Mozilla/3", the page <code>foo.html</code>
is rewritten to <code>foo.NS.html</code> and the
rewriting stops. If the browser is "Lynx" or "Mozilla" of
1088 version 1 or 2 the URL becomes <code>foo.20.html</code>.
1089 All other browsers receive page <code>foo.32.html</code>.
1090 This is done by the following ruleset:</p>
1093 RewriteCond %{HTTP_USER_AGENT} ^<strong>Mozilla/3</strong>.*
1094 RewriteRule ^foo\.html$ foo.<strong>NS</strong>.html [<strong>L</strong>]
1096 RewriteCond %{HTTP_USER_AGENT} ^<strong>Lynx/</strong>.* [OR]
1097 RewriteCond %{HTTP_USER_AGENT} ^<strong>Mozilla/[12]</strong>.*
1098 RewriteRule ^foo\.html$ foo.<strong>20</strong>.html [<strong>L</strong>]
1100 RewriteRule ^foo\.html$ foo.<strong>32</strong>.html [<strong>L</strong>]
1109 <title>Dynamic Mirror</title>
1112 <dt>Description:</dt>
1115 <p>Assume there are nice webpages on remote hosts we want
1116 to bring into our namespace. For FTP servers we would use
1117 the <code>mirror</code> program which actually maintains an
1118 explicit up-to-date copy of the remote data on the local
1119 machine. For a webserver we could use the program
<code>webcopy</code> which acts similarly via HTTP. But both
techniques have one major drawback: the local copy is
always only as up-to-date as the last time we ran the
program. It would be much better if the mirror were not a
static one we have to establish explicitly. Instead we want
a dynamic mirror with data which gets updated automatically
as needed (when the data on the remote host changes).</p>
1132 <p>To provide this feature we map the remote webpage or even
1133 the complete remote webarea to our namespace by the use
1134 of the <dfn>Proxy Throughput</dfn> feature
1135 (flag <code>[P]</code>):</p>
1140 RewriteRule ^<strong>hotsheet/</strong>(.*)$ <strong>http://www.tstimpreso.com/hotsheet/</strong>$1 [<strong>P</strong>]
1146 RewriteRule ^<strong>usa-news\.html</strong>$ <strong>http://www.quux-corp.com/news/index.html</strong> [<strong>P</strong>]
1155 <title>Reverse Dynamic Mirror</title>
1158 <dt>Description:</dt>
1167 RewriteCond /mirror/of/remotesite/$1 -U
1168 RewriteRule ^http://www\.remotesite\.com/(.*)$ /mirror/of/remotesite/$1
1177 <title>Retrieve Missing Data from Intranet</title>
1180 <dt>Description:</dt>
1183 <p>This is a tricky way of virtually running a corporate
1184 (external) Internet webserver
1185 (<code>www.quux-corp.dom</code>), while actually keeping
and maintaining its data on an (internal) Intranet webserver
1187 (<code>www2.quux-corp.dom</code>) which is protected by a
1188 firewall. The trick is that on the external webserver we
1189 retrieve the requested data on-the-fly from the internal
1196 <p>First, we have to make sure that our firewall still
1197 protects the internal webserver and that only the
1198 external webserver is allowed to retrieve data from it.
1199 For a packet-filtering firewall we could for instance
1200 configure a firewall ruleset like the following:</p>
1203 <strong>ALLOW</strong> Host www.quux-corp.dom Port >1024 --> Host www2.quux-corp.dom Port <strong>80</strong>
1204 <strong>DENY</strong> Host * Port * --> Host www2.quux-corp.dom Port <strong>80</strong>
1207 <p>Just adjust it to your actual configuration syntax.
1208 Now we can establish the <module>mod_rewrite</module>
1209 rules which request the missing data in the background
1210 through the proxy throughput feature:</p>
RewriteRule ^/~([^/]+)/?(.*) /home/$1/.www/$2
RewriteCond %{REQUEST_FILENAME} <strong>!-f</strong>
RewriteCond %{REQUEST_FILENAME} <strong>!-d</strong>
RewriteRule ^/home/([^/]+)/\.www/?(.*) http://<strong>www2</strong>.quux-corp.dom/~$1/pub/$2 [<strong>P</strong>]
1225 <title>Load Balancing</title>
1228 <dt>Description:</dt>
1231 <p>Suppose we want to load balance the traffic to
1232 <code>www.foo.com</code> over <code>www[0-5].foo.com</code>
1233 (a total of 6 servers). How can this be done?</p>
<p>There are many possible solutions for this problem.
We will first discuss a commonly known DNS-based variant
and then a special one using <module>mod_rewrite</module>:</p>
1245 <strong>DNS Round-Robin</strong>
<p>The simplest method for load-balancing is to use
the DNS round-robin feature of <code>BIND</code>.
Here you just configure <code>www[0-5].foo.com</code>
as usual in your DNS with A (address) records, e.g.</p>
1261 <p>Then you additionally add the following entry:</p>
www IN CNAME www0.foo.com.
   IN CNAME www1.foo.com.
   IN CNAME www2.foo.com.
   IN CNAME www3.foo.com.
   IN CNAME www4.foo.com.
   IN CNAME www5.foo.com.
<p>Notice that this seems wrong, but is actually an
intended feature of <code>BIND</code> and can be used
this way. Now when <code>www.foo.com</code> gets
resolved, <code>BIND</code> gives out <code>www0-www5</code>
- but in a slightly rotated order each time.
This way the clients are spread over the various
servers. But notice that this is not a perfect load
balancing scheme, because DNS resolution results
get cached by other nameservers on the net, so
once a client has resolved <code>www.foo.com</code>
to a particular <code>wwwN.foo.com</code>, all its
subsequent requests go to that same name
<code>wwwN.foo.com</code>. But the final result is
acceptable, because the total of the requests is
spread over the various webservers.</p>
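<p>The rotation behaviour described above can be modelled in a few
lines of Python (a simplification for illustration; real
<code>BIND</code> permutes the record set per response, and the
hostnames are the example's own):</p>

```python
from collections import deque

# Simplified model of DNS round-robin: each resolution returns the
# record list rotated by one, so successive fresh clients see a
# different first entry and are spread over the servers.
records = deque("www%d.foo.com" % i for i in range(6))

def resolve():
    records.rotate(-1)   # rotate the answer set by one position
    return list(records)

# six fresh resolutions yield six different first answers
first_answers = [resolve()[0] for _ in range(6)]
```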
1291 <strong>DNS Load-Balancing</strong>
1293 <p>A sophisticated DNS-based method for
1294 load-balancing is to use the program
1295 <code>lbnamed</code> which can be found at <a
1296 href="http://www.stanford.edu/~schemers/docs/lbnamed/lbnamed.html">
1297 http://www.stanford.edu/~schemers/docs/lbnamed/lbnamed.html</a>.
It is a Perl 5 program in conjunction with auxiliary
tools which provides real load-balancing for
1304 <strong>Proxy Throughput Round-Robin</strong>
<p>In this variant we use <module>mod_rewrite</module>
and its proxy throughput feature. First we dedicate
<code>www0.foo.com</code> to actually be
<code>www.foo.com</code> by using a single</p>
1312 www IN CNAME www0.foo.com.
<p>entry in the DNS. Then we convert
<code>www0.foo.com</code> to a proxy-only server,
i.e. we configure this machine so all arriving URLs
are simply pushed through its internal proxy to one of
the 5 other servers (<code>www1-www5</code>). To
accomplish this we first establish a ruleset which
contacts a load balancing script <code>lb.pl</code>
1326 RewriteMap lb prg:/path/to/lb.pl
1327 RewriteRule ^/(.+)$ ${lb:$1} [P,L]
1330 <p>Then we write <code>lb.pl</code>:</p>
1335 ## lb.pl -- load balancing script
1340 $name = "www"; # the hostname base
1341 $first = 1; # the first server (not 0 here, because 0 is myself)
1342 $last = 5; # the last server in the round-robin
1343 $domain = "foo.dom"; # the domainname
1346 while (<STDIN>) {
1347 $cnt = (($cnt+1) % ($last+1-$first));
1348 $server = sprintf("%s%d.%s", $name, $cnt+$first, $domain);
1349 print "http://$server/$_";
<note>A final note: why is this useful? It may seem that
<code>www0.foo.com</code> is still overloaded. It is, but
only with plain proxy throughput requests! All SSI, CGI,
ePerl, etc. processing is completely done on the other
machines. This is the essential point.</note>
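<p>The counter arithmetic of <code>lb.pl</code> can be sketched in
Python (hostname, domain and server range mirror the script above):</p>

```python
# Round-robin target selection mirroring lb.pl: cnt cycles over
# 0..(last-first), and the chosen server index is cnt+first, so
# www0 (the proxy itself) is never a target.
def make_balancer(name="www", first=1, last=5, domain="foo.dom"):
    cnt = 0
    def next_url(path):
        nonlocal cnt
        cnt = (cnt + 1) % (last + 1 - first)
        return "http://%s%d.%s/%s" % (name, cnt + first, domain, path)
    return next_url
```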
1364 <strong>Hardware/TCP Round-Robin</strong>
<p>There is a hardware solution available, too. Cisco
has a beast called LocalDirector which does load
balancing at the TCP/IP level. Actually this is some
sort of circuit-level gateway in front of a
webcluster. If you have enough money and really need
a high-performance solution, use this one.</p>
1381 <title>New MIME-type, New Service</title>
1384 <dt>Description:</dt>
<p>On the net there are many nifty CGI programs. But
their usage is usually boring, so a lot of webmasters
don't use them. Even Apache's Action handler feature for
MIME-types is only appropriate when the CGI programs
don't need special URLs (actually <code>PATH_INFO</code>
and <code>QUERY_STRINGS</code>) as their input. First,
let us configure a new file type with extension
<code>.scgi</code> (for secure CGI) which will be processed
by the popular <code>cgiwrap</code> program. The problem
here is that if, for instance, we use a Homogeneous URL Layout
(see above), a file inside the user homedirs has the URL
<code>/u/user/foo/bar.scgi</code>. But
<code>cgiwrap</code> needs the URL in the form
<code>/~user/foo/bar.scgi/</code>. The following rule
solves the problem:</p>
1404 RewriteRule ^/[uge]/<strong>([^/]+)</strong>/\.www/(.+)\.scgi(.*) ...
1405 ... /internal/cgi/user/cgiwrap/~<strong>$1</strong>/$2.scgi$3 [NS,<strong>T=application/x-http-cgi</strong>]
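<p>The effect of this rule can be reproduced with an equivalent
Python substitution (pattern and replacement mirror the rule; the
URLs are illustrative):</p>

```python
import re

# Mirror of the RewriteRule: capture the user name, the script path
# and any trailing PATH_INFO, and rebuild the URL in cgiwrap form.
pattern = re.compile(r"^/[uge]/([^/]+)/\.www/(.+)\.scgi(.*)")

def rewrite_scgi(url):
    return pattern.sub(r"/internal/cgi/user/cgiwrap/~\1/\2.scgi\3", url)
```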
<p>Or assume we have some more nifty programs:
<code>wwwlog</code> (which displays the
<code>access.log</code> for a URL subtree) and
<code>wwwidx</code> (which runs Glimpse on a URL
subtree). We have to provide the URL area to these
programs so they know which area they have to act on.
But usually this is ugly, because they are still
requested from those areas, i.e. typically we would
run the <code>wwwidx</code> program from within
<code>/u/user/foo/</code> via hyperlink to</p>

/internal/cgi/user/wwwidx?i=/u/user/foo/

<p>which is ugly, because we have to hard-code
<strong>both</strong> the location of the area
<strong>and</strong> the location of the CGI inside the
hyperlink. Whenever we have to reorganize the area, we spend a
lot of time changing the various hyperlinks.</p>
1433 <p>The solution here is to provide a special new URL format
1434 which automatically leads to the proper CGI invocation.
1435 We configure the following:</p>
1438 RewriteRule ^/([uge])/([^/]+)(/?.*)/\* /internal/cgi/user/wwwidx?i=/$1/$2$3/
1439 RewriteRule ^/([uge])/([^/]+)(/?.*):log /internal/cgi/user/wwwlog?f=/$1/$2$3
1442 <p>Now the hyperlink to search at
1443 <code>/u/user/foo/</code> reads only</p>
1449 <p>which internally gets automatically transformed to</p>
1452 /internal/cgi/user/wwwidx?i=/u/user/foo/
1455 <p>The same approach leads to an invocation for the
1456 access log CGI program when the hyperlink
1457 <code>:log</code> gets used.</p>
1465 <title>From Static to Dynamic</title>
1468 <dt>Description:</dt>
<p>How can we transform a static page
<code>foo.html</code> into a dynamic variant
<code>foo.cgi</code> in a seamless way, i.e. without the
browser/user noticing?</p>
<p>We just rewrite the URL to the CGI-script and force the
correct MIME-type so it really gets run as a CGI-script.
This way a request to <code>/~quux/foo.html</code>
internally leads to the invocation of
<code>/~quux/foo.cgi</code>.</p>
1489 RewriteRule ^foo\.<strong>html</strong>$ foo.<strong>cgi</strong> [T=<strong>application/x-httpd-cgi</strong>]
1498 <title>On-the-fly Content-Regeneration</title>
1501 <dt>Description:</dt>
<p>Here comes a really esoteric feature: dynamically
generated but statically served pages, i.e. pages should be
delivered as pure static pages (read from the filesystem
and just passed through), but they have to be generated
dynamically by the webserver if missing. This way you can
have CGI-generated pages which are statically served unless
someone (or a cronjob) removes the static contents. Then the
contents get refreshed.</p>
1517 This is done via the following ruleset:
1520 RewriteCond %{REQUEST_FILENAME} <strong>!-s</strong>
1521 RewriteRule ^page\.<strong>html</strong>$ page.<strong>cgi</strong> [T=application/x-httpd-cgi,L]
<p>Here a request to <code>page.html</code> leads to an
internal run of a corresponding <code>page.cgi</code> if
<code>page.html</code> is missing or has zero
filesize. The trick here is that <code>page.cgi</code> is a
usual CGI script which (in addition to its <code>STDOUT</code>)
writes its output to the file <code>page.html</code>.
Once it has run, the server sends out the data of
<code>page.html</code>. When the webmaster wants to force
a refresh of the contents, he just removes
<code>page.html</code> (usually done by a cronjob).</p>
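<p>The decision the <code>!-s</code> condition implements can be
sketched as follows (file names are illustrative):</p>

```python
import os

# Serve page.html statically when it exists with non-zero size;
# otherwise fall back to page.cgi, which regenerates the file.
def target(path):
    try:
        if os.path.getsize(path) > 0:
            return path                      # serve the static copy
    except OSError:
        pass                                 # file missing
    return path[:-len(".html")] + ".cgi"     # regenerate via CGI
```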
1541 <title>Document With Autorefresh</title>
1544 <dt>Description:</dt>
1547 <p>Wouldn't it be nice while creating a complex webpage if
1548 the webbrowser would automatically refresh the page every
1549 time we write a new version from within our editor?
<p>No! We just combine the MIME multipart feature, the
webserver NPH feature and the URL manipulation power of
<module>mod_rewrite</module>. First, we establish a new
URL feature: adding just <code>:refresh</code> to any
URL causes the page to be refreshed every time the
corresponding file is updated on the filesystem.</p>
1564 RewriteRule ^(/[uge]/[^/]+/?.*):refresh /internal/cgi/apache/nph-refresh?f=$1
1567 <p>Now when we reference the URL</p>
1570 /u/foo/bar/page.html:refresh
1573 <p>this leads to the internal invocation of the URL</p>
1576 /internal/cgi/apache/nph-refresh?f=/u/foo/bar/page.html
1579 <p>The only missing part is the NPH-CGI script. Although
1580 one would usually say "left as an exercise to the reader"
1581 ;-) I will provide this, too.</p>
1586 ## nph-refresh -- NPH/CGI script for auto refreshing pages
1587 ## Copyright (c) 1997 Ralf S. Engelschall, All Rights Reserved.
1591 # split the QUERY_STRING variable
1592 @pairs = split(/&/, $ENV{'QUERY_STRING'});
1593 foreach $pair (@pairs) {
1594 ($name, $value) = split(/=/, $pair);
1595 $name =~ tr/A-Z/a-z/;
1596 $name = 'QS_' . $name;
1597 $value =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg;
1598 eval "\$$name = \"$value\"";
1600 $QS_s = 1 if ($QS_s eq '');
1601 $QS_n = 3600 if ($QS_n eq '');
1603 print "HTTP/1.0 200 OK\n";
1604 print "Content-type: text/html\n\n";
1605 print "&lt;b&gt;ERROR&lt;/b&gt;: No file given\n";
1609 print "HTTP/1.0 200 OK\n";
1610 print "Content-type: text/html\n\n";
1611 print "&lt;b&gt;ERROR&lt;/b&gt;: File $QS_f not found\n";
1615 sub print_http_headers_multipart_begin {
1616 print "HTTP/1.0 200 OK\n";
1617 $bound = "ThisRandomString12345";
1618 print "Content-type: multipart/x-mixed-replace;boundary=$bound\n";
1619 &print_http_headers_multipart_next;
1622 sub print_http_headers_multipart_next {
1623 print "\n--$bound\n";
1626 sub print_http_headers_multipart_end {
1627 print "\n--$bound--\n";
1631 local($buffer) = @_;
1632 $len = length($buffer);
1633 print "Content-type: text/html\n";
1634 print "Content-length: $len\n\n";
1640 local(*FP, $size, $buffer, $bytes);
1641 ($x, $x, $x, $x, $x, $x, $x, $size) = stat($file);
1642 $size = sprintf("%d", $size);
1643 open(FP, "&lt;$file");
1644 $bytes = sysread(FP, $buffer, $size);
1649 $buffer = &readfile($QS_f);
1650 &print_http_headers_multipart_begin;
1651 &displayhtml($buffer);
1654 local($file) = $_[0];
1657 ($x, $x, $x, $x, $x, $x, $x, $x, $x, $mtime) = stat($file);
1661 $mtimeL = &mystat($QS_f);
1663 for ($n = 0; $n &lt; $QS_n; $n++) {
1665 $mtime = &mystat($QS_f);
1666 if ($mtime ne $mtimeL) {
1669 $buffer = &readfile($QS_f);
1670 &print_http_headers_multipart_next;
1671 &displayhtml($buffer);
1673 $mtimeL = &mystat($QS_f);
1680 &print_http_headers_multipart_end;
1693 <title>Mass Virtual Hosting</title>
1696 <dt>Description:</dt>
<p>The <directive type="section" module="core"
>VirtualHost</directive> feature of Apache is nice
and works great when you just have a few dozen
virtual hosts. But when you are an ISP and have hundreds of
virtual hosts to provide, this feature is not the best
<p>To provide this feature we map the requested hostname
to the document root of the corresponding virtual host
via a map file of the following form:</p>
1718 www.vhost1.dom:80 /path/to/docroot/vhost1
1719 www.vhost2.dom:80 /path/to/docroot/vhost2
1721 www.vhostN.dom:80 /path/to/docroot/vhostN
1729 # use the canonical hostname on redirects, etc.
1733 # add the virtual host in front of the CLF-format
1734 CustomLog /path/to/access_log "%{VHOST}e %h %l %u %t \"%r\" %>s %b"
1737 # enable the rewriting engine in the main server
1740 # define two maps: one for fixing the URL and one which defines
1741 # the available virtual hosts with their corresponding
1743 RewriteMap lowercase int:tolower
1744 RewriteMap vhost txt:/path/to/vhost.map
1746 # Now do the actual virtual host mapping
1747 # via a huge and complicated single rule:
1749 # 1. make sure we don't map for common locations
RewriteCond %{REQUEST_URI} !^/commonurl1/.*
RewriteCond %{REQUEST_URI} !^/commonurl2/.*
RewriteCond %{REQUEST_URI} !^/commonurlN/.*
1755 # 2. make sure we have a Host header, because
1756 # currently our approach only supports
1757 # virtual hosting through this header
1758 RewriteCond %{HTTP_HOST} !^$
1760 # 3. lowercase the hostname
1761 RewriteCond ${lowercase:%{HTTP_HOST}|NONE} ^(.+)$
1763 # 4. lookup this hostname in vhost.map and
1764 # remember it only when it is a path
1765 # (and not "NONE" from above)
1766 RewriteCond ${vhost:%1} ^(/.*)$
# 5. finally we can map the URL to its docroot location
# and remember the virtual host for logging purposes
RewriteRule ^/(.*)$ %1/$1 [E=VHOST:${lowercase:%{HTTP_HOST}}]
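<p>Steps 2-5 of the ruleset can be sketched as follows (the map
contents are illustrative):</p>

```python
# Map the Host header to a document root: require a non-empty host,
# lowercase it, look it up, and accept the result only when it is an
# absolute path, exactly as conditions 2-4 above do.
vhost_map = {
    "www.vhost1.dom": "/path/to/docroot/vhost1",
    "www.vhost2.dom": "/path/to/docroot/vhost2",
}

def map_request(host, url):
    if not host:                                # step 2: Host header present?
        return url
    docroot = vhost_map.get(host.lower())       # steps 3+4: lowercase + lookup
    if not (docroot and docroot.startswith("/")):
        return url                              # unknown vhost: leave URL alone
    return docroot + url                        # step 5: map into the docroot
```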
1780 <section id="access">
1782 <title>Access Restriction</title>
1786 <title>Blocking of Robots</title>
1789 <dt>Description:</dt>
1792 <p>How can we block a really annoying robot from
1793 retrieving pages of a specific webarea? A
1794 <code>/robots.txt</code> file containing entries of the
1795 "Robot Exclusion Protocol" is typically not enough to get
1796 rid of such a robot.</p>
<p>We use a ruleset which forbids the URLs of the webarea
<code>/~quux/foo/arc/</code> (perhaps a very deep
directory-indexed area where robot traversal would
create a big server load). We have to make sure that we
forbid access only to the particular robot, i.e. just
forbidding the host where the robot runs is not enough.
This would block users from this host, too. We accomplish
this by also matching the User-Agent HTTP header
1813 RewriteCond %{HTTP_USER_AGENT} ^<strong>NameOfBadRobot</strong>.*
1814 RewriteCond %{REMOTE_ADDR} ^<strong>123\.45\.67\.[8-9]</strong>$
1815 RewriteRule ^<strong>/~quux/foo/arc/</strong>.+ - [<strong>F</strong>]
1824 <title>Blocked Inline-Images</title>
1827 <dt>Description:</dt>
1830 <p>Assume we have under <code>http://www.quux-corp.de/~quux/</code>
1831 some pages with inlined GIF graphics. These graphics are
1832 nice, so others directly incorporate them via hyperlinks to
1833 their pages. We don't like this practice because it adds
1834 useless traffic to our server.</p>
<p>While we cannot 100% protect the images from inclusion,
we can at least restrict the cases where the browser
sends an HTTP Referer header.</p>
RewriteCond %{HTTP_REFERER} <strong>!^$</strong>
RewriteCond %{HTTP_REFERER} !^http://www\.quux-corp\.de/~quux/.*$ [NC]
RewriteRule <strong>.*\.gif$</strong> - [F]
1851 RewriteCond %{HTTP_REFERER} !^$
1852 RewriteCond %{HTTP_REFERER} !.*/foo-with-gif\.html$
1853 RewriteRule <strong>^inlined-in-foo\.gif$</strong> - [F]
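<p>The blocking logic of both rulesets boils down to a simple
predicate, sketched here in Python (the site URL is the example's
own):</p>

```python
# Forbid a GIF request exactly when a Referer header is present but
# does not point back into our own pages, as the conditions above do.
OWN_PREFIX = "http://www.quux-corp.de/~quux/"

def hotlink_blocked(referer, path):
    if not path.endswith(".gif"):
        return False                  # rule only matches GIF requests
    if not referer:
        return False                  # no Referer header: let it pass
    return not referer.lower().startswith(OWN_PREFIX)
```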
1862 <title>Host Deny</title>
1865 <dt>Description:</dt>
1868 <p>How can we forbid a list of externally configured hosts
1869 from using our server?</p>
1875 <p>For Apache >= 1.3b6:</p>
1879 RewriteMap hosts-deny txt:/path/to/hosts.deny
1880 RewriteCond ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]
1881 RewriteCond ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND
1882 RewriteRule ^/.* - [F]
1885 <p>For Apache <= 1.3b6:</p>
1889 RewriteMap hosts-deny txt:/path/to/hosts.deny
1890 RewriteRule ^/(.*)$ ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND}/$1
1891 RewriteRule !^NOT-FOUND/.* - [F]
1892 RewriteRule ^NOT-FOUND/(.*)$ ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND}/$1
1893 RewriteRule !^NOT-FOUND/.* - [F]
1894 RewriteRule ^NOT-FOUND/(.*)$ /$1
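<p>The prefixing trick used by the older ruleset can be sketched as
follows (the map entries and addresses are illustrative):</p>

```python
# Emulate the three-pass ruleset: prefix the URL with the map lookup
# result; anything but the NOT-FOUND prefix means "deny", and a
# surviving NOT-FOUND prefix is stripped again at the end.
hosts_deny = {"badhost.mydomain.com": "-", "10.1.2.3": "-"}

def check(remote_host, remote_addr, url):
    by_host = hosts_deny.get(remote_host, "NOT-FOUND") + url
    if not by_host.startswith("NOT-FOUND/"):
        return "FORBIDDEN"                     # host is on the deny list
    by_addr = hosts_deny.get(remote_addr, "NOT-FOUND") + url
    if not by_addr.startswith("NOT-FOUND/"):
        return "FORBIDDEN"                     # address is on the deny list
    return by_addr[len("NOT-FOUND"):]          # strip prefix, pass through
```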
1901 ## ATTENTION! This is a map, not a list, even when we treat it as such.
1902 ## mod_rewrite parses it for key/value pairs, so at least a
1903 ## dummy value "-" must be present for each entry.
1917 <title>Proxy Deny</title>
1920 <dt>Description:</dt>
1923 <p>How can we forbid a certain host or even a user of a
1924 special host from using the Apache proxy?</p>
1930 <p>We first have to make sure <module>mod_rewrite</module>
1931 is below(!) <module>mod_proxy</module> in the Configuration
1932 file when compiling the Apache webserver. This way it gets
1933 called <em>before</em> <module>mod_proxy</module>. Then we
1934 configure the following for a host-dependent deny...</p>
RewriteCond %{REMOTE_HOST} <strong>^badhost\.mydomain\.com$</strong>
RewriteRule !^http://[^/.]+\.mydomain\.com.* - [F]
1941 <p>...and this one for a user@host-dependent deny:</p>
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} <strong>^badguy@badhost\.mydomain\.com$</strong>
RewriteRule !^http://[^/.]+\.mydomain\.com.* - [F]
1954 <title>Special Authentication Variant</title>
1957 <dt>Description:</dt>
<p>Sometimes a very special authentication is needed, for
instance an authentication which checks for a set of
explicitly configured users. Only these users should receive
access, and without explicit prompting (which would occur
when using Basic Auth via <module>mod_auth_basic</module>).</p>
1970 <p>We use a list of rewrite conditions to exclude all except
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} <strong>!^friend1@client1\.quux-corp\.com$</strong>
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} <strong>!^friend2</strong>@client2\.quux-corp\.com$
RewriteCond %{REMOTE_IDENT}@%{REMOTE_HOST} <strong>!^friend3</strong>@client3\.quux-corp\.com$
1977 RewriteRule ^/~quux/only-for-friends/ - [F]
1986 <title>Referer-based Deflector</title>
1989 <dt>Description:</dt>
1992 <p>How can we program a flexible URL Deflector which acts
1993 on the "Referer" HTTP header and can be configured with as
1994 many referring pages as we like?</p>
2000 <p>Use the following really tricky ruleset...</p>
2003 RewriteMap deflector txt:/path/to/deflector.map
2005 RewriteCond %{HTTP_REFERER} !=""
2006 RewriteCond ${deflector:%{HTTP_REFERER}} ^-$
2007 RewriteRule ^.* %{HTTP_REFERER} [R,L]
2009 RewriteCond %{HTTP_REFERER} !=""
2010 RewriteCond ${deflector:%{HTTP_REFERER}|NOT-FOUND} !=NOT-FOUND
2011 RewriteRule ^.* ${deflector:%{HTTP_REFERER}} [R,L]
2014 <p>... in conjunction with a corresponding rewrite
2022 http://www.badguys.com/bad/index.html -
2023 http://www.badguys.com/bad/index2.html -
2024 http://www.badguys.com/bad/index3.html http://somewhere.com/
<p>This automatically redirects the request back to the
referring page (when "<code>-</code>" is used as the value
in the map) or to a specific URL (when a URL is specified
in the map as the second argument).</p>
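<p>The map semantics can be sketched as follows (the map entries are
the ones from the example above):</p>

```python
# Deflector decision: "-" in the map bounces the client back to the
# referring page, a URL redirects there, and an unlisted or empty
# referer passes through (returns None).
deflector_map = {
    "http://www.badguys.com/bad/index.html": "-",
    "http://www.badguys.com/bad/index2.html": "-",
    "http://www.badguys.com/bad/index3.html": "http://somewhere.com/",
}

def deflect(referer):
    if not referer:
        return None
    to = deflector_map.get(referer)
    if to == "-":
        return referer        # redirect back to the referring page
    return to                 # a target URL, or None for pass-through
```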
2038 <section id="other">
2040 <title>Other</title>
2044 <title>External Rewriting Engine</title>
2047 <dt>Description:</dt>
<p>A FAQ: How can we solve the FOO/BAR/QUUX/etc.
problem? There seems to be no solution using
<module>mod_rewrite</module>...</p>
<p>Use an external <directive module="mod_rewrite"
>RewriteMap</directive> program, i.e. a program which acts
like a <directive module="mod_rewrite"
>RewriteMap</directive>. It is run once when Apache starts,
then receives the requested URLs on <code>STDIN</code> and has
to put the resulting (usually rewritten) URLs on
<code>STDOUT</code> (in the same order!).</p>
2068 RewriteMap quux-map <strong>prg:</strong>/path/to/map.quux.pl
2069 RewriteRule ^/~quux/(.*)$ /~quux/<strong>${quux-map:$1}</strong>
# disable buffered I/O which would lead
# to deadlocks for the Apache server
2079 # read URLs one per line from stdin and
2080 # generate substitution URL on stdout
<p>This is a demonstration-only example and just rewrites
all URLs <code>/~quux/foo/...</code> to
<code>/~quux/bar/...</code>. Actually you can program
whatever you like. But notice that while such maps can
also be <strong>used</strong> by an average user, only the
system administrator can <strong>define</strong> them.</p>
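<p>A minimal stand-in for such a map program, written here in Python
rather than the original Perl, could look like this (the foo/bar
mapping matches the demonstration described above):</p>

```python
import sys

# prg: RewriteMap protocol: read one lookup key per line from stdin,
# answer with exactly one line on stdout, and flush after every
# answer so the pipe to Apache cannot deadlock on buffering.
def rewrite(key):
    # demonstration only: map foo/... to bar/...
    if key.startswith("foo/"):
        return "bar/" + key[len("foo/"):]
    return key

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(rewrite(line.rstrip("\n")) + "\n")
        sys.stdout.flush()
```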