David Woodhouse [Wed, 17 Aug 2016 09:30:21 +0000 (11:30 +0200)]
curl: allow "pkcs11:" prefix for client certificates
RFC7512 provides a standard method of referencing certificates in
PKCS#11 tokens, by means of a URI starting with 'pkcs11:'.
We're working on fixing various applications so that whenever they would
have been able to use certificates from a file, users can simply insert
a PKCS#11 URI instead and expect it to work. This expectation is now a
part of the Fedora packaging guidelines, for example.
This doesn't work with cURL because of the way that the colon is used
to separate the certificate argument from the passphrase. So instead of
curl -E 'pkcs11:manufacturer=piv_II;id=%01' …
I need to invoke cURL with the colon escaped, like this:
curl -E 'pkcs11\:manufacturer=piv_II;id=%01' …
This is suboptimal because we want *consistency* — the URI should be
usable in place of a filename anywhere, without having strange
differences for different applications.
This patch therefore disables the processing in parse_cert_parameter()
when the string starts with 'pkcs11:'. It means you can't pass a
passphrase with an unescaped PKCS#11 URI, but there's no need to do so
because RFC7512 allows a PIN to be given as a 'pin-value' attribute in
the URI itself.
Also, if users are already using RFC7512 URIs with the colon escaped as
in the above example (even providing a passphrase for cURL to handle
instead of using a pin-value attribute), that will continue to work,
because their string will start with 'pkcs11\:' and won't match the check.
What *does* break with this patch is the extremely unlikely case that a
user has a file which is in the local directory and literally named
just "pkcs11", and they have a passphrase on it. If that ever happened,
the user would need to refer to it as './pkcs11:<passphrase>' instead.
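A minimal sketch of the check this adds, with illustrative names that
are not necessarily curl's actual source:

    #include <string.h>
    #include <stdlib.h>

    /* If the argument is an RFC7512 PKCS#11 URI, take it verbatim as
       the certificate name and skip the colon/passphrase splitting;
       a PIN can be carried in a pin-value attribute instead. */
    static void parse_cert_parameter(const char *param,
                                     char **certname, char **passphrase)
    {
      if(strncmp(param, "pkcs11:", 7) == 0) {
        *certname = strdup(param);
        *passphrase = NULL;
        return;
      }
      /* ... existing splitting on the first unescaped colon ... */
    }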
This fixes tests that were added after 113f04e664b, as the tests would
otherwise fail.
We bring back "Proxy-Connection: Keep-Alive" now unconditionally to fix
regressions with old and stupid proxies, but we could possibly switch to
using it only for CONNECT or only for NTLM in the future if we want to
gradually reduce its use.
Daniel Stenberg [Mon, 15 Aug 2016 08:46:27 +0000 (10:46 +0200)]
proxy: reject attempts to use unsupported proxy schemes
I discovered that some people have been using "https://example.com"
style strings as a proxy and that it "works" (curl doesn't complain)
because curl ignores unknown schemes and then assumes plain HTTP
instead.
I think this misleads users into believing curl uses HTTPS to the
proxy when it doesn't. Now curl rejects proxy strings that use
unsupported schemes, instead of just ignoring the scheme and
defaulting to HTTP.
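A sketch of the kind of check involved; the helper is hypothetical,
and the scheme list reflects what curl supported as proxy protocols at
the time as I understand it:

    #include <string.h>

    /* Hypothetical helper: accept only proxy schemes curl can actually
       speak, instead of downgrading unknown ones to plain HTTP. */
    static int proxy_scheme_supported(const char *proxy)
    {
      static const char *schemes[] = {
        "http://", "socks4://", "socks4a://", "socks5://", "socks5h://"
      };
      size_t i;
      for(i = 0; i < sizeof(schemes)/sizeof(schemes[0]); i++)
        if(strncmp(proxy, schemes[i], strlen(schemes[i])) == 0)
          return 1;
      return 0; /* e.g. "https://example.com" is now refused */
    }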
Jay Satiro [Fri, 12 Aug 2016 08:10:29 +0000 (04:10 -0400)]
openssl: accept subjectAltName iPAddress if no dNSName match
Undo change introduced in d4643d6 which caused iPAddress match to be
ignored if dNSName was present but did not match.
Also, if iPAddress is present but does not match, and dNSName is not
present, fail as no-match. Prior to this change, in such a case the CN
would have been checked for a match.
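A compilable sketch of the matching order described, using hypothetical
types rather than OpenSSL's actual API:

    #include <string.h>

    /* Hypothetical SAN representation, for illustration only. */
    struct san { int is_ip; const unsigned char *data; size_t len; };

    /* The peer was addressed by IP: check iPAddress entries even when
       dNSName entries exist and mismatch; fall back to the CN only
       when no iPAddress entry is present at all. */
    static int ip_host_matches(const unsigned char *addr, size_t addrlen,
                               const struct san *sans, size_t nsans,
                               int (*check_cn)(void))
    {
      size_t i;
      int saw_ip = 0;
      for(i = 0; i < nsans; i++) {
        if(sans[i].is_ip) {
          saw_ip = 1;
          if(sans[i].len == addrlen &&
             memcmp(sans[i].data, addr, addrlen) == 0)
            return 1; /* iPAddress match */
        }
      }
      if(saw_ip)
        return 0;        /* present but no match: fail, no CN fallback */
      return check_cn(); /* no iPAddress entries: legacy CN check */
    }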
Daniel Stenberg [Thu, 11 Aug 2016 06:33:36 +0000 (08:33 +0200)]
HTTP: retry failed HEAD requests too
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code, and
I spotted that we don't retry failed HEAD requests, which seems totally
inconsistent; I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. An HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Erik Janssen [Wed, 10 Aug 2016 06:58:10 +0000 (08:58 +0200)]
rtsp: accept any RTSP session id
Makes libcurl work when communicating with gstreamer-based RTSP
servers. The original code validated the session id against the RFC's
grammar. I think it is better not to do that:
- For curl, the actual content of the session id doesn't matter.
- The RFC is not entirely clear here anyway: is a bare $ allowed, or
  only as \$?
- Gstreamer appears to URL-encode the session id, even though % is not
  allowed by the RFC.
- It is less code.
With this patch curl will correctly handle real-life lines like:
Session: biTN4Kc.8%2B1w-AF.; timeout=60
Simon Warta [Tue, 9 Aug 2016 06:29:59 +0000 (08:29 +0200)]
winbuild: Free name $(CC) in Makefile (#950)
In the old line number 290, CC and CURL_CC had the same value. After
that, /DCURL_STATICLIB was added to CC but not CURL_CC (intended?).
This gets rid of the CC variable entirely. It is a first step towards
making it possible to manually set a CC variable in order to change
the compiler.
Jay Satiro [Mon, 8 Aug 2016 04:25:03 +0000 (00:25 -0400)]
cmake: Enable win32 large file support by default
All compilers used by cmake on Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
Daniel Stenberg [Thu, 4 Aug 2016 22:42:52 +0000 (00:42 +0200)]
http2: always wait for readable socket
Since the server can send an HTTP/2 frame to us at any time, we need to
wait for the socket to be readable during all transfers, so that we can
act on incoming frames even while uploading etc.
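In poll() terms, the change amounts to something like the following
hypothetical helper (illustrative, not curl's source):

    #include <poll.h>

    /* Always watch for readability so server-initiated frames are
       seen promptly; add writability only while we have data to send. */
    static short h2_poll_events(int uploading)
    {
      short events = POLLIN;
      if(uploading)
        events |= POLLOUT;
      return events;
    }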
mbedtls: set debug threshold to 4 (verbose) when MBEDTLS_DEBUG is defined
In order to make MBEDTLS_DEBUG work, the debug threshold must be
nonzero. This patch also adds a comment on how mbedTLS must be compiled
in order to make debugging work, and explains the possible debug levels.
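For reference, this is roughly what enabling verbose mbedTLS debugging
looks like in application code; the two configuration calls are
mbedTLS's real API, the callback is illustrative, and mbedTLS itself
must be built with MBEDTLS_DEBUG_C:

    #include <stdio.h>
    #include <mbedtls/debug.h>
    #include <mbedtls/ssl.h>

    static void dbg(void *ctx, int level, const char *file, int line,
                    const char *msg)
    {
      fprintf((FILE *)ctx, "%s:%d (level %d) %s", file, line, level, msg);
    }

    static void enable_verbose_debug(mbedtls_ssl_config *conf)
    {
      mbedtls_debug_set_threshold(4);          /* 0=off .. 4=verbose */
      mbedtls_ssl_conf_dbg(conf, dbg, stderr);
    }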
Daniel Stenberg [Thu, 30 Jun 2016 12:56:02 +0000 (14:56 +0200)]
CURLOPT_TCP_NODELAY: now enabled by default
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
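At the socket level, the new default amounts to the standard
setsockopt() call shown below (an illustrative helper, not curl's
source); from libcurl, an application can still switch it off by
setting the real CURLOPT_TCP_NODELAY option to 0L:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm so small writes go out immediately
       instead of being coalesced while waiting for ACKs. */
    static int set_nodelay(int sockfd)
    {
      int on = 1;
      return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                        &on, sizeof(on));
    }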
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
Serj Kalichev [Tue, 2 Aug 2016 22:29:09 +0000 (00:29 +0200)]
TFTP: Fix upload problem with piped input
When the input stream for curl is stdin and the input is generated by a
script rather than read from a file, curl can truncate the transfer at
an arbitrary size, since a partial read from the pipe was sent as a
short packet, which TFTP treats as the end of the transfer.
Daniel Stenberg [Tue, 2 Aug 2016 08:57:30 +0000 (10:57 +0200)]
multi: make Curl_expire() work with 0 ms timeouts
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, and thus a 0 timeout is fine to
set and will trigger a timeout as soon as possible.
This will help remove short delays, particularly noticeable when doing
HTTP/2.
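A compilable sketch of the difference; the prototypes are simplified
for illustration and may not match the internal signatures exactly:

    struct Curl_easy;
    void Curl_expire(struct Curl_easy *data, long milli);
    void Curl_expire_clear(struct Curl_easy *data);

    static void example(struct Curl_easy *data)
    {
      Curl_expire(data, 0);    /* now: a real timeout, fires ASAP */
      Curl_expire_clear(data); /* now: the explicit way to wipe timers */
    }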
Daniel Stenberg [Mon, 1 Aug 2016 22:48:23 +0000 (00:48 +0200)]
transfer: return without select when the read loop reached maxcount
Regression added in 790d6de48515. The check was added to avoid one
particular transfer starving out others. But when aborting due to
having read maxcount times, the connection must be marked to be read
from again without first doing a select, as for some protocols (like
SFTP/SCP) the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
Daniel Stenberg [Sun, 31 Jul 2016 09:48:44 +0000 (11:48 +0200)]
include: revert 9adf3c4 and make public types void * again
Many applications assume the actual contents of the public types and
use that to, for example, make forward declarations (saving them from
including our public header), which then breaks when we switch from
void * to a struct *.
I'm not convinced we were wrong, but since this practice seems
widespread enough I'm willing to (partly) step down.
Now libcurl uses the struct itself when it is built and it allows
applications to use the struct type if CURL_STRICTER is defined at the
time of the #include.
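Conceptually, the public header ends up with a pattern along these
lines (a sketch of the idea, not necessarily the exact header text):

    #ifdef CURL_STRICTER
    typedef struct Curl_easy CURL;
    typedef struct Curl_multi CURLM;
    #else
    typedef void CURL;
    typedef void CURLM;
    #endif

An application that defines CURL_STRICTER before including the header
gets the stricter struct-pointer types; everyone else keeps the old
void * typedefs and their forward declarations keep compiling.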
Jay Satiro [Thu, 21 Jul 2016 02:00:45 +0000 (22:00 -0400)]
vauth: Fix memleak by freeing credentials if out of memory
This is a follow-up to the parent commit dcdd4be, which fixes one leak
but creates another by failing to free the credentials handle if out of
memory. Also, there's a second location a few lines down where we fail
to do the same. This commit fixes both of those issues.