Kamil Dudka [Tue, 28 Mar 2017 13:50:59 +0000 (15:50 +0200)]
http: do not treat FTPS over CONNECT as HTTPS
If we use FTPS over CONNECT, the TLS handshake for the FTPS control
connection needs to be initiated in the SENDPROTOCONNECT state, not
the WAITPROXYCONNECT state. Otherwise, if the TLS handshake completed
without blocking, the information about the completed TLS handshake
would be saved to the wrong flag. Consequently, the TLS handshake would
be initiated in the SENDPROTOCONNECT state once again on the same
connection, resulting in a failure of the TLS handshake. I was able to
observe the failure with the NSS backend when curl was run through valgrind.
Note that this commit partially reverts curl-7_21_6-52-ge34131d.
Daniel Stenberg [Mon, 27 Mar 2017 10:14:57 +0000 (12:14 +0200)]
pause: handle mixed types of data when paused
When receiving chunked-encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header data to
store and resend on unpause. Previously, libcurl returned an error in
that case.
Added test case 1540 to verify.
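As a rough illustration (not code from this change), here is a minimal
write callback that pauses the transfer on its first call; the userdata
flag is a placeholder name for this sketch:

  #include <curl/curl.h>

  /* While paused, libcurl must store whatever arrives - body and, with
     chunked trailers, header data too - and replay it on unpause. */
  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    int *paused = (int *)userp;     /* hypothetical flag owned by the caller */
    (void)ptr;
    if(!*paused) {
      *paused = 1;
      return CURL_WRITEFUNC_PAUSE;  /* ask libcurl to pause this transfer */
    }
    return size * nmemb;            /* consume the delivered data normally */
  }

  /* later, resume with: curl_easy_pause(easy, CURLPAUSE_CONT); */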
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
Isaac Boukris [Thu, 23 Mar 2017 19:28:28 +0000 (21:28 +0200)]
http: Fix proxy connection reuse with basic-auth
When using basic-auth, connections and proxy connections can be
re-used with different Authorization headers, since basic-auth does not
authenticate the connection itself (unlike NTLM, which does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
Fix the above by removing the username and password compare when
re-using a proxy connection in proxy_info_matches().
However, this fix brings back another bug that would make curl re-send
the old Proxy-Authorization header of the previous basic-auth proxy
connection, because it wasn't cleared.
For instance, in the below command the second request should fail if
the proxy requires authentication, but it would succeed after the above
fix (and before the aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by unconditionally clearing conn->allocptr.proxyuserpwd after
use, the same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect the digest auth header to be re-sent
when the connection is reused.
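A rough libcurl equivalent of the commands above, reusing the same
placeholder proxy and credentials from this message; it is only an
illustrative sketch of the connection reuse, not code from the change:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://localhost:8181");

      /* first request authenticated to the proxy as alice */
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "alice:a");
      curl_easy_perform(curl);

      /* second request as bob; with basic-auth the existing proxy
         connection can be reused despite the different credentials */
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "bob:b");
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }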
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
Marcel Raad [Tue, 21 Mar 2017 22:20:03 +0000 (23:20 +0100)]
.gitattributes: turn off CRLF for *.am
If Makefile.am uses CRLF, buildconf in a Windows checkout fails with:
".ibtoolize: error: AC_CONFIG_MACRO_DIRS([m4]) conflicts with
ACLOCAL_AMFLAGS=-I m4"
Jay Satiro [Wed, 22 Mar 2017 18:39:41 +0000 (14:39 -0400)]
openssl: fall back on SSL_ERROR_* string when no error detail
- If SSL_get_error is called but no extended error detail is available,
then show the SSL_ERROR_* code as a string.
Prior to this change there was some inconsistency in that case: the
SSL_ERROR_* code may or may not have been shown, or may have been shown
as unknown even if it was known.
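The idea, sketched roughly (not the actual libcurl code; the function
name below is made up for illustration):

  #include <openssl/ssl.h>
  #include <openssl/err.h>
  #include <stdio.h>

  /* If OpenSSL has queued no error detail, fall back on reporting the
     SSL_ERROR_* code itself rather than an empty or "unknown" message. */
  static void report_tls_failure(SSL *ssl, int ret)
  {
    int err = SSL_get_error(ssl, ret);
    unsigned long detail = ERR_get_error();
    char buf[256];

    if(detail)
      fprintf(stderr, "TLS error: %s\n", ERR_error_string(detail, buf));
    else
      fprintf(stderr, "TLS error: SSL_ERROR code %d\n", err);
  }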
Peter Wu [Sat, 25 Feb 2017 13:40:24 +0000 (14:40 +0100)]
cmake: build manual pages (including curl.1)
Also make Perl mandatory to allow building the docs.
While CMakeLists.txt could probably read the list of manual pages from
Makefile.am, actually putting those in CMakeLists.txt is cleaner so that
is what is done here.
Jay Satiro [Thu, 16 Mar 2017 22:23:31 +0000 (18:23 -0400)]
tool_operate: Fix showing HTTPS-Proxy options on CURLE_SSL_CACERT
- Show the HTTPS-proxy options on CURLE_SSL_CACERT if libcurl was built
with HTTPS-proxy support.
Prior to this change those options were shown only if an HTTPS-proxy was
specified by --proxy, but that did not take into account environment
variables such as http_proxy, https_proxy, etc. Follow-up to e1187c4.
Dan Fandrich [Sun, 12 Mar 2017 22:23:31 +0000 (23:23 +0100)]
test1440/1: depend on well-defined file: behaviour
Depend on the known behaviour of URLs for nonexistent files rather than
the undefined behaviour of URLs for directories (which fails on Windows).
The test isn't about file: URLs at all, so the URL used doesn't really
matter.
Dan Fandrich [Sat, 11 Mar 2017 09:59:34 +0000 (10:59 +0100)]
tool_writeout: fixed a buffer read overrun on --write-out
If a % ended the format string, the string's trailing NUL would be skipped
and memory past the end of the buffer would be accessed and potentially
displayed as part of the --write-out output. Added tests 1440 and 1441
to check for this kind of condition.
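A simplified sketch of the kind of bounds check involved when scanning
the format string (illustrative only; the real tool_writeout code is
more involved):

  #include <stdio.h>

  static void writeout(const char *fmt)
  {
    const char *ptr = fmt;
    while(*ptr) {
      if(*ptr == '%') {
        if(!ptr[1]) {
          /* a lone % ends the string: print it and stop rather than
             reading past the terminating NUL */
          fputc('%', stdout);
          break;
        }
        /* handle %% or %{variable} here; this sketch just skips the
           character following the % */
        ptr += 2;
      }
      else {
        fputc(*ptr, stdout);
        ptr++;
      }
    }
  }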
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
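A minimal usage sketch of the new libcurl option (the URL and proxy
below are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://localhost:8181");
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
      /* do not pass the proxy's CONNECT response headers to the
         header/write callbacks */
      curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }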
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
Jay Satiro [Sat, 11 Mar 2017 23:21:31 +0000 (18:21 -0500)]
http_proxy: Ignore TE and CL in CONNECT 2xx responses
A client MUST ignore any Content-Length or Transfer-Encoding header
fields received in a successful response to CONNECT.
"Successful" described as: 2xx (Successful). RFC 7231 4.3.6
Prior to this change such a case would cause an error.
In some ways this bug appears to be a regression since c50b878. Prior to
that libcurl may have appeared to function correctly in such cases by
acting on those headers instead of causing an error. But that behavior
was also incorrect.
Daniel Stenberg [Thu, 9 Mar 2017 22:55:30 +0000 (23:55 +0100)]
tests: disabled 1903 now
Test 1903 does HTTP pipelining, which is a timing- and
ordering-sensitive operation, and it fails far too often on Travis CI,
leading to people more or less ignoring test failures there. Not good.
The end of pipelining is probably coming sooner rather than later
anyway...
Dan Fandrich [Thu, 9 Mar 2017 21:45:40 +0000 (22:45 +0100)]
build: fixed making man page in out-of-tree tarball builds
The man page taken from the release package is found in a different
location than if it's built from source. It must be referenced as $< in
the rule to get its correct location in the VPATH.
Dan Fandrich [Thu, 9 Mar 2017 21:15:54 +0000 (22:15 +0100)]
mkhelp: simplified the gzip code
This eliminates the need for an external gzip program, which wasn't
working with Busybox's gzip anyway. It now compresses using Perl's
IO::Compress::Gzip.
updatemanpages.pl: Update man pages to use current date and versions
Added script to update man pages to use the current date and
curl/libcurl versions.
updatemanpages.pl has three arrays: a list of directories to look in,
a list of extensions to process, and a list of files to exclude from
processing.
Check the man page in the git repository using the date from the
existing man page before updating, to avoid updating the man page if no
change has been made.
If data is received from the git command, then update the man page with
the current date and version; otherwise leave it alone.
Applied a patch from badger to make the date argument optional, change
the git command used, add a date argument to the processfile
subroutine, and print to STDERR if no date is found in a man page.
Added code to process the changed man page into a new man page with
.dist added to the filename to keep the original source files unchanged.
Updated POD documentation to reflect that the date argument is optional.
This fixes an assertion error which occurs when a redirect is done with
a 0-length body via HTTP/2, the easy handle is reused, but a new
connection is established due to a hostname change.
Daniel Stenberg [Mon, 6 Mar 2017 15:08:21 +0000 (16:08 +0100)]
URL: return error on malformed URLs with junk after port number
... because it causes confusion among users. Example URLs:
"http://[127.0.0.1]:11211:80" which a lot of languages' URL parsers will
parse and claim uses port number 80, while libcurl would use port number
11211.
"http://user@example.com:80@localhost" which by the WHATWG URL spec will
be treated to contain user name 'user@example.com' but according to
RFC3986 is user name 'user' for the host 'example.com' and then port 80
is followed by "@localhost"
Both these formats are now rejected, as verified in test 1260.
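A quick sketch of what an application now sees for such a URL
(illustrative; the exact error code is not spelled out in this log, so
the sketch only checks for a non-OK result):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURLcode res;
    CURL *curl = curl_easy_init();
    if(curl) {
      /* junk after the port number: now rejected as malformed */
      curl_easy_setopt(curl, CURLOPT_URL, "http://[127.0.0.1]:11211:80");
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        printf("rejected: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    return 0;
  }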