zsh.pl: update regex to better match curl -h output
The current regex fails to match '<...>' arguments properly (e.g. those
with spaces in them), which results in a completion script with wrong
descriptions for some options.
Here's a diff of the generated completion script, comparing the previous
version to the one with this fix:
_arguments -C -S \
--happy-eyeballs-timeout-ms'[How long to wait in milliseconds for IPv6 before trying IPv4]':'<milliseconds>' \
+ --resolve'[Resolve the host+port to this address]':'<host:port:address[,address]...>' \
{-c,--cookie-jar}'[Write cookies to <filename> after operation]':'<filename>':_files \
{-D,--dump-header}'[Write the received headers to <filename>]':'<filename>':_files \
{-y,--speed-time}'[Trigger '\''speed-limit'\'' abort after this time]':'<seconds>' \
--proxy-cacert'[CA certificate to verify peer against for proxy]':'<file>':_files \
- --tls13-ciphers'[of TLS 1.3 ciphersuites> TLS 1.3 cipher suites to use]':'<list' \
+ --tls13-ciphers'[TLS 1.3 cipher suites to use]':'<list of TLS 1.3 ciphersuites>' \
{-E,--cert}'[Client certificate file and password]':'<certificate[:password]>' \
--libcurl'[Dump libcurl equivalent code of this command line]':'<file>':_files \
--proxy-capath'[CA directory to verify peer against for proxy]':'<dir>':_files \
- --proxy-negotiate'[HTTP Negotiate (SPNEGO) authentication on the proxy]':'Use' \
--proxy-pinnedpubkey'[FILE/HASHES public key to verify proxy with]':'<hashes>' \
--crlfile'[Get a CRL list in PEM format from the given file]':'<file>':_files \
- --proxy-insecure'[HTTPS proxy connections without verifying the proxy]':'Do' \
- --proxy-ssl-allow-beast'[security flaw for interop for HTTPS proxy]':'Allow' \
+ --proxy-negotiate'[Use HTTP Negotiate (SPNEGO) authentication on the proxy]' \
--abstract-unix-socket'[Connect via abstract Unix domain socket]':'<path>' \
--pinnedpubkey'[FILE/HASHES Public key to verify peer against]':'<hashes>' \
+ --proxy-insecure'[Do HTTPS proxy connections without verifying the proxy]' \
--proxy-pass'[Pass phrase for the private key for HTTPS proxy]':'<phrase>' \
+ --proxy-ssl-allow-beast'[Allow security flaw for interop for HTTPS proxy]' \
{-p,--proxytunnel}'[Operate through an HTTP proxy tunnel (using CONNECT)]' \
--socks5-hostname'[SOCKS5 proxy, pass host name to proxy]':'<host[:port]>' \
--proto-default'[Use PROTOCOL for any URL missing a scheme]':'<protocol>' \
- --proxy-tls13-ciphers'[list> TLS 1.3 proxy cipher suites]':'<ciphersuite' \
+ --proxy-tls13-ciphers'[TLS 1.3 proxy cipher suites]':'<ciphersuite list>' \
--socks5-gssapi-service'[SOCKS5 proxy service name for GSS-API]':'<name>' \
--ftp-alternative-to-user'[String to replace USER \[name\]]':'<command>' \
- --ftp-ssl-control'[SSL/TLS for FTP login, clear for transfer]':'Require' \
{-T,--upload-file}'[Transfer local FILE to destination]':'<file>':_files \
--local-port'[Force use of RANGE for local port numbers]':'<num/range>' \
--proxy-tlsauthtype'[TLS authentication type for HTTPS proxy]':'<type>' \
{-R,--remote-time}'[Set the remote file'\''s time on the local output]' \
- --retry-connrefused'[on connection refused (use with --retry)]':'Retry' \
- --suppress-connect-headers'[proxy CONNECT response headers]':'Suppress' \
- {-j,--junk-session-cookies}'[session cookies read from file]':'Ignore' \
- --location-trusted'[--location, and send auth to other hosts]':'Like' \
+ --ftp-ssl-control'[Require SSL/TLS for FTP login, clear for transfer]' \
--proxy-cert-type'[Client certificate type for HTTPS proxy]':'<type>' \
{-O,--remote-name}'[Write output to a file named as the remote file]' \
+ --retry-connrefused'[Retry on connection refused (use with --retry)]' \
+ --suppress-connect-headers'[Suppress proxy CONNECT response headers]' \
--trace-ascii'[Like --trace, but without hex output]':'<file>':_files \
--connect-timeout'[Maximum time allowed for connection]':'<seconds>' \
--expect100-timeout'[How long to wait for 100-continue]':'<seconds>' \
{-g,--globoff}'[Disable URL sequences and ranges using {} and \[\]]' \
+ {-j,--junk-session-cookies}'[Ignore session cookies read from file]' \
{-m,--max-time}'[Maximum time allowed for the transfer]':'<seconds>' \
--dns-ipv4-addr'[IPv4 address to use for DNS requests]':'<address>' \
--dns-ipv6-addr'[IPv6 address to use for DNS requests]':'<address>' \
- --ignore-content-length'[the size of the remote resource]':'Ignore' \
{-k,--insecure}'[Allow insecure server connections when using SSL]' \
+ --location-trusted'[Like --location, and send auth to other hosts]' \
--mail-auth'[Originator address of the original email]':'<address>' \
--noproxy'[List of hosts which do not use proxy]':'<no-proxy-list>' \
--proto-redir'[Enable/disable PROTOCOLS on redirect]':'<protocols>' \
@@ -62,18 +62,19 @@
--socks5-basic'[Enable username/password auth for SOCKS5 proxies]' \
--cacert'[CA certificate to verify peer against]':'<file>':_files \
{-H,--header}'[Pass custom header(s) to server]':'<header/@file>' \
+ --ignore-content-length'[Ignore the size of the remote resource]' \
{-i,--include}'[Include protocol response headers in the output]' \
--proxy-header'[Pass custom header(s) to proxy]':'<header/@file>' \
--unix-socket'[Connect through this Unix domain socket]':'<path>' \
{-w,--write-out}'[Use output FORMAT after completion]':'<format>' \
- --http2-prior-knowledge'[HTTP 2 without HTTP/1.1 Upgrade]':'Use' \
{-o,--output}'[Write to file instead of stdout]':'<file>':_files \
- {-J,--remote-header-name}'[the header-provided filename]':'Use' \
+ --preproxy'[\[protocol://\]host\[:port\] Use this proxy first]' \
--socks4a'[SOCKS4a proxy on given host + port]':'<host[:port]>' \
{-Y,--speed-limit}'[Stop transfers slower than this]':'<speed>' \
{-z,--time-cond}'[Transfer based on a time condition]':'<time>' \
--capath'[CA directory to verify peer against]':'<dir>':_files \
{-f,--fail}'[Fail silently (no output at all) on HTTP errors]' \
+ --http2-prior-knowledge'[Use HTTP 2 without HTTP/1.1 Upgrade]' \
--proxy-tlspassword'[TLS password for HTTPS proxy]':'<string>' \
{-U,--proxy-user}'[Proxy user and password]':'<user:password>' \
--proxy1.0'[Use HTTP/1.0 proxy on given port]':'<host[:port]>' \
@@ -81,52 +82,49 @@
{-A,--user-agent}'[Send User-Agent <name> to server]':'<name>' \
--egd-file'[EGD socket path for random data]':'<file>':_files \
--fail-early'[Fail on first transfer error, do not continue]' \
- --haproxy-protocol'[HAProxy PROXY protocol v1 header]':'Send' \
- --preproxy'[Use this proxy first]':'[protocol://]host[:port]' \
+ {-J,--remote-header-name}'[Use the header-provided filename]' \
--retry-max-time'[Retry only within this period]':'<seconds>' \
--socks4'[SOCKS4 proxy on given host + port]':'<host[:port]>' \
--socks5'[SOCKS5 proxy on given host + port]':'<host[:port]>' \
- --socks5-gssapi-nec'[with NEC SOCKS5 server]':'Compatibility' \
- --ssl-allow-beast'[security flaw to improve interop]':'Allow' \
--cert-status'[Verify the status of the server certificate]' \
- --ftp-create-dirs'[the remote dirs if not present]':'Create' \
{-:,--next}'[Make next URL use its separate set of options]' \
--proxy-key-type'[Private key file type for proxy]':'<type>' \
- --remote-name-all'[the remote file name for all URLs]':'Use' \
{-X,--request}'[Specify request command to use]':'<command>' \
--retry'[Retry request if transient problems occur]':'<num>' \
- --ssl-no-revoke'[cert revocation checks (WinSSL)]':'Disable' \
--cert-type'[Certificate file type (DER/PEM/ENG)]':'<type>' \
--connect-to'[Connect to host]':'<HOST1:PORT1:HOST2:PORT2>' \
--create-dirs'[Create necessary local directory hierarchy]' \
+ --haproxy-protocol'[Send HAProxy PROXY protocol v1 header]' \
--max-redirs'[Maximum number of redirects allowed]':'<num>' \
{-n,--netrc}'[Must read .netrc for user name and password]' \
+ {-x,--proxy}'[\[protocol://\]host\[:port\] Use this proxy]' \
--proxy-crlfile'[Set a CRL list for proxy]':'<file>':_files \
--sasl-ir'[Enable initial response in SASL authentication]' \
- --socks5-gssapi'[GSS-API auth for SOCKS5 proxies]':'Enable' \
+ --socks5-gssapi-nec'[Compatibility with NEC SOCKS5 server]' \
+ --ssl-allow-beast'[Allow security flaw to improve interop]' \
+ --ftp-create-dirs'[Create the remote dirs if not present]' \
--interface'[Use network INTERFACE (or address)]':'<name>' \
--key-type'[Private key file type (DER/PEM/ENG)]':'<type>' \
--netrc-file'[Specify FILE for netrc]':'<filename>':_files \
{-N,--no-buffer}'[Disable buffering of the output stream]' \
--proxy-service-name'[SPNEGO proxy service name]':'<name>' \
- --styled-output'[styled output for HTTP headers]':'Enable' \
+ --remote-name-all'[Use the remote file name for all URLs]' \
+ --ssl-no-revoke'[Disable cert revocation checks (WinSSL)]' \
--max-filesize'[Maximum file size to download]':'<bytes>' \
--negotiate'[Use HTTP Negotiate (SPNEGO) authentication]' \
--no-keepalive'[Disable TCP keepalive on the connection]' \
{-#,--progress-bar}'[Display transfer progress as a bar]' \
- {-x,--proxy}'[Use this proxy]':'[protocol://]host[:port]' \
- --proxy-anyauth'[any proxy authentication method]':'Pick' \
{-Q,--quote}'[Send command(s) to server before transfer]' \
- --request-target'[the target for this request]':'Specify' \
+ --socks5-gssapi'[Enable GSS-API auth for SOCKS5 proxies]' \
{-u,--user}'[Server user and password]':'<user:password>' \
{-K,--config}'[Read config from a file]':'<file>':_files \
{-C,--continue-at}'[Resumed transfer offset]':'<offset>' \
--data-raw'[HTTP POST data, '\''@'\'' allowed]':'<data>' \
- --disallow-username-in-url'[username in url]':'Disallow' \
--krb'[Enable Kerberos with security <level>]':'<level>' \
--proxy-ciphers'[SSL ciphers to use for proxy]':'<list>' \
--proxy-digest'[Use Digest authentication on the proxy]' \
--proxy-tlsuser'[TLS username for HTTPS proxy]':'<name>' \
+ --styled-output'[Enable styled output for HTTP headers]' \
{-b,--cookie}'[Send cookies from string/file]':'<data>' \
--data-urlencode'[HTTP POST data url encoded]':'<data>' \
--delegation'[GSS-API delegation permission]':'<LEVEL>' \
@@ -134,7 +132,10 @@
--post301'[Do not switch to GET after following a 301]' \
--post302'[Do not switch to GET after following a 302]' \
--post303'[Do not switch to GET after following a 303]' \
+ --proxy-anyauth'[Pick any proxy authentication method]' \
+ --request-target'[Specify the target for this request]' \
--trace-time'[Add time stamps to trace/verbose output]' \
+ --disallow-username-in-url'[Disallow username in url]' \
--dns-servers'[DNS server addrs to use]':'<addresses>' \
{-G,--get}'[Put the post data in the URL and use GET]' \
--limit-rate'[Limit transfer speed to RATE]':'<speed>' \
@@ -148,21 +149,21 @@
--metalink'[Process given URLs as metalink XML file]' \
--tr-encoding'[Request compressed transfer encoding]' \
--xattr'[Store metadata in extended file attributes]' \
- --ftp-skip-pasv-ip'[the IP address for PASV]':'Skip' \
--pass'[Pass phrase for the private key]':'<phrase>' \
--proxy-ntlm'[Use NTLM authentication on the proxy]' \
{-S,--show-error}'[Show error even when -s is used]' \
- --ciphers'[of ciphers> SSL ciphers to use]':'<list' \
+ --ciphers'[SSL ciphers to use]':'<list of ciphers>' \
--form-string'[Specify multipart MIME data]':'<name=string>' \
--login-options'[Server login options]':'<options>' \
--tftp-blksize'[Set TFTP BLKSIZE option]':'<value>' \
- --tftp-no-options'[not send any TFTP options]':'Do' \
{-v,--verbose}'[Make the operation more talkative]' \
+ --ftp-skip-pasv-ip'[Skip the IP address for PASV]' \
--proxy-key'[Private key for HTTPS proxy]':'<key>' \
{-F,--form}'[Specify multipart MIME data]':'<name=content>' \
--mail-from'[Mail from this address]':'<address>' \
--oauth2-bearer'[OAuth 2 Bearer Token]':'<token>' \
--proto'[Enable/disable PROTOCOLS]':'<protocols>' \
+ --tftp-no-options'[Do not send any TFTP options]' \
--tlsauthtype'[TLS authentication type]':'<type>' \
--doh-url'[Resolve host names over DOH]':'<URL>' \
--no-sessionid'[Disable SSL session-ID reusing]' \
@@ -173,14 +174,13 @@
--ftp-ssl-ccc'[Send CCC after authenticating]' \
{-4,--ipv4}'[Resolve names to IPv4 addresses]' \
{-6,--ipv6}'[Resolve names to IPv6 addresses]' \
- --netrc-optional'[either .netrc or URL]':'Use' \
--service-name'[SPNEGO service name]':'<name>' \
{-V,--version}'[Show version number and quit]' \
--data-ascii'[HTTP POST ASCII data]':'<data>' \
--ftp-account'[Account data string]':'<data>' \
- --compressed-ssh'[SSH compression]':'Enable' \
--disable-eprt'[Inhibit using EPRT or LPRT]' \
--ftp-method'[Control CWD usage]':'<method>' \
+ --netrc-optional'[Use either .netrc or URL]' \
--pubkey'[SSH Public key file name]':'<key>' \
--raw'[Do HTTP "raw"; no transfer decoding]' \
--anyauth'[Pick any authentication method]' \
@@ -189,6 +189,7 @@
--no-alpn'[Disable the ALPN TLS extension]' \
--tcp-nodelay'[Use the TCP_NODELAY option]' \
{-B,--use-ascii}'[Use ASCII/text transfer]' \
+ --compressed-ssh'[Enable SSH compression]' \
--digest'[Use HTTP Digest Authentication]' \
--proxy-tlsv1'[Use TLSv1 for HTTPS proxy]' \
--engine'[Crypto engine to use]':'<name>' \
Marcel Raad [Wed, 6 Feb 2019 13:59:15 +0000 (14:59 +0100)]
tool_operate: fix typecheck warning
Use long for CURLOPT_HTTP09_ALLOWED to fix the following warning:
tool_operate.c: In function 'operate_do':
../include/curl/typecheck-gcc.h:47:9: error: call to
'_curl_easy_setopt_err_long' declared with attribute warning:
curl_easy_setopt expects a long argument for this option [-Werror]
Chris Araman [Wed, 6 Feb 2019 05:56:36 +0000 (21:56 -0800)]
url: close TLS before removing conn from cache
- Fix potential crashes in schannel shutdown.
Ensure any TLS shutdown messages are sent before removing the
association between the connection and the easy handle. Reverts
@bagder's previous partial fix for #3412.
Commit 7a09b52c98ac8d840a8a9907b1a1d9a9e684bcf5 introduced support
for the draft-ietf-httpbis-cookie-alone-01 cookie draft, and while
the entry was removed from the TODO it was mistakenly left here.
Fix by removing and rewording the entry slightly.
Closes #3530 Reviewed-by: Daniel Stenberg <daniel@haxx.se>
But on line 3868, that option is omitted. This caused problems for me,
as the symbol-scan.pl script in particular couldn't find its
dependencies properly:
If the incoming len is 5, but the buffer does not have a termination
after 5 bytes, the strtol() call may keep reading through the line
buffer until it exceeds its boundary. Fix by ensuring that we are
using a bounded read with a temporary buffer on the stack.
Bug: https://curl.haxx.se/docs/CVE-2019-3823.html Reported-by: Brian Carpenter (Geeknik Labs)
CVE-2019-3823
georgeok [Tue, 29 Jan 2019 17:26:31 +0000 (18:26 +0100)]
spnego_sspi: add support for channel binding
Attempt to add support for Secure Channel binding when negotiate
authentication is used. The problem to solve is that by default IIS
accepts channel binding and curl doesn't utilise them. The result was a
401 response. Scope affects only the Schannel(winssl)-SSPI combination.
John Marshall [Thu, 31 Jan 2019 11:52:51 +0000 (11:52 +0000)]
doc: use meaningless port number in CURLOPT_LOCALPORT example
Use an ephemeral port number here; previously the example had 8080
which could be confusing as the common web server port number might
be misinterpreted as suggesting this option affects the remote port.
Jay Satiro [Tue, 29 Jan 2019 05:33:14 +0000 (00:33 -0500)]
TODO: WinSSL: 'Add option to disable client cert auto-send'
By default WinSSL selects and sends a client certificate automatically,
but for privacy and consistency we should offer an option to disable the
default auto-send behavior.
Jeremie Rapin [Wed, 23 Jan 2019 14:35:46 +0000 (15:35 +0100)]
sigpipe: if mbedTLS is used, ignore SIGPIPE
mbedTLS doesn't have sigpipe management. If a write/read occurs when
the remote closes the socket, the signal is raised and kills the
application. Use the curl mechanisms to fix this behavior.
Felix Hädicke [Wed, 23 Jan 2019 22:10:39 +0000 (23:10 +0100)]
setopt: enable CURLOPT_SSH_KNOWNHOSTS and CURLOPT_SSH_KEYFUNCTION for libssh
CURLOPT_SSH_KNOWNHOSTS and CURLOPT_SSH_KEYFUNCTION are supported for
libssh as well. So accepting these options only when compiling with
libssh2 is wrong here.
Felix Hädicke [Wed, 23 Jan 2019 22:47:55 +0000 (23:47 +0100)]
libssh: do not let libssh create socket
By default, libssh creates a new socket, instead of using the socket
created by curl for SSH connections.
Pass the socket created by curl to libssh using ssh_options_set() with
SSH_OPTIONS_FD directly after ssh_new(). So libssh uses our socket
instead of creating a new one.
This approach is very similar to what is done in the libssh2 code, where
the socket created by curl is passed to libssh2 when
libssh2_session_startup() is called.
There is no real gain in performing memcmp() comparisons on single
characters, so change these to array subscript inspections which
saves a call and makes the code clearer.
Closes #3486 Reviewed-by: Daniel Stenberg <daniel@haxx.se> Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
The overloadable attribute is removed again starting from NDK 17; it
only exists in two NDK versions (15 and 16). With overloadable, the
first condition tried will succeed, resulting in a wrong detection
result.
JDepooter [Thu, 17 Jan 2019 01:18:20 +0000 (17:18 -0800)]
ssh: log the libssh2 error message when ssh session startup fails
When an SSH session startup fails, it is useful to know why it has
failed. This commit changes the message from:
"Failure establishing ssh session"
to something like this, for example:
"Failure establishing ssh session: -5, Unable to exchange encryption keys"
Daniel Stenberg [Fri, 11 Jan 2019 22:43:38 +0000 (23:43 +0100)]
extract_if_dead: use a known working transfer when checking connections
Make sure that this function sets a proper "live" transfer for the
connection before calling the protocol-specific connection check
function, and then clear it again afterward as a non-used connection has
no current transfer.
Daniel Stenberg [Wed, 2 Jan 2019 17:04:58 +0000 (18:04 +0100)]
urldata: rename easy_conn to just conn
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attach" and "detach" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) to "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
travis: turn off copyright year checks in checksrc
Invoking the maintainer-intended COPYRIGHTYEAR check for everyone
in the PR pipeline is too invasive, especially at the turn of the
year when many files get affected. Remove and leave it as a tool
for maintainers to verify patches before commits.
Daniel Stenberg [Tue, 8 Jan 2019 13:24:15 +0000 (14:24 +0100)]
multi: multiplexing improvements
Fixes #3436
Closes #3448
Problem 1
After LOTS of scratching my head, I eventually realized that even when doing
10 uploads in parallel, sometimes the socket callback to the application that
tells it what to wait for on the socket, looked like it would reflect the
status of just the single transfer that just changed state.
Digging into the code revealed that this was indeed the truth. When multiple
transfers are using the same connection, the application did not correctly get
the *combined* flags for all transfers which then could make it switch to READ
(only) when in fact most transfers wanted to get told when the socket was
WRITEABLE.
Problem 1b
A separate but related regression had also been introduced by me when I
cleared connection/transfer association better a while ago, as now the logic
couldn't find the connection and see if that was marked as used by more
transfers and then it would also prematurely remove the socket from the socket
hash table even while other transfers were still using it!
Fix 1
Make sure that each socket stored in the socket hash has a "combined" action
field of what to ask the application to wait for, that is potentially the ORed
action of multiple parallel transfers. And remove that socket hash entry only
if there are no transfers left using it.
Problem 2
The socket hash entry stored an association to a single transfer using that
socket - and when curl_multi_socket_action() was called to tell libcurl about
activities on that specific socket only that transfer was "handled".
This was WRONG, as a single socket/connection can be used by numerous parallel
transfers and not necessarily a single one.
Fix 2
We now store a list of handles in the socket hashtable entry and when libcurl
is told there's traffic for a particular socket, it now iterates over all
known transfers using that single socket.
Added Curl_resolver_kill() for all three resolver modes, which only
blocks when necessary, along with test 1592 to confirm
curl_multi_remove_handle() doesn't block unless it must.
Marcel Raad [Thu, 3 Jan 2019 14:22:44 +0000 (15:22 +0100)]
schannel: fix compiler warning
When building with Unicode on MSVC, the compiler warns about freeing a
pointer to const in Curl_unicodefree. Fix this by declaring it as
non-const and casting the argument to Curl_convert_UTF8_to_tchar to
non-const too, like we do in all other places.
When a non-empty list is appended to, and used as the return value,
the list pointer can leak in case of an allocation failure in the
curl_slist_append() call. This is correctly handled in curl code
usage but we weren't explicitly pointing it out in the API call
documentation. Fix by extending the RETURNVALUE manpage section
and example code.
Closes #3424 Reported-by: dnivras on github Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Marcel Raad [Tue, 1 Jan 2019 17:03:11 +0000 (18:03 +0100)]
tvnow: silence conversion warnings
MinGW-w64 defaults to targeting Windows 7 now, so GetTickCount64 is
used and the milliseconds are represented as unsigned long long,
leading to a compiler warning when implicitly converting them to long.
The previous fix for parsing IPv6 URLs with a zone index was a paddle
short for URLs without an explicit port. This patch fixes that case
and adds a unit test case.
This bug was highlighted by issue #3408, and while it's not the full
fix for the problem there it is an isolated bug that should be fixed
regardless.
Closes #3411 Reported-by: GitYuanQu on github Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Claes Jakobsson [Thu, 27 Dec 2018 13:23:13 +0000 (14:23 +0100)]
hostip: support wildcard hosts
This adds support for wildcard hosts in CURLOPT_RESOLVE. These are
try-last so any non-wildcard entry is resolved first. If specified,
any host not matched by another CURLOPT_RESOLVE config will use this
as fallback.
Example: send a.com to 10.0.0.1 and everything else to 10.0.0.2:
curl --resolve *:443:10.0.0.2 --resolve a.com:443:10.0.0.1 \
https://a.com https://b.com
This is probably quite similar to using:
--connect-to a.com:443:10.0.0.1:443 --connect-to :443:10.0.0.2:443
Closes #3406 Reviewed-by: Daniel Stenberg <daniel@haxx.se>