Daniel Stenberg [Wed, 3 Jan 2007 22:18:38 +0000 (22:18 +0000)]
- Matt Witherspoon fixed the flaw that made libcurl 7.16.0 always store
downloaded data in two buffers, just to be able to deal with a special HTTP
pipelining case. That double-buffering is now only activated for pipelined
transfers. In Matt's case, the flaw showed up as a considerable performance
difference.
Daniel Stenberg [Tue, 2 Jan 2007 22:34:56 +0000 (22:34 +0000)]
- Victor Snezhko helped us fix bug report #1603712
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36): --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) was broken
on Windows (since 7.16.0, but that is when the options were introduced, as
prior to that the limiting logic was done in the application only and not in
the library). It was actually also broken on select()-based systems (as
opposed to poll()-based ones), but we have had no such reports. We now use
select(), Sleep() or delay() properly to sleep a while, without waiting for
any input or output, when rate limiting is activated with the easy interface.
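A minimal sketch of enabling the limits with the easy interface; the URL and
the 10000 bytes/second cap are made-up values for illustration:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file");
      /* cap both directions at roughly 10000 bytes/second; the limit
         is enforced as an average over time, not per individual read
         or write */
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE,
                       (curl_off_t)10000);
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                       (curl_off_t)10000);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }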
Daniel Stenberg [Tue, 2 Jan 2007 12:14:21 +0000 (12:14 +0000)]
- Modified libcurl.pc.in to use Libs.private for the libs libcurl itself
needs when built static. This has been mentioned before and was again
brought to our attention by Nathanael Nerode who filed debian bug report
#405226 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=405226).
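For illustration, a sketch of how such a pkg-config file separates the two;
the exact private dependencies vary with the build configuration:

  prefix=/usr/local
  libdir=${prefix}/lib
  includedir=${prefix}/include

  Name: libcurl
  Description: Library to transfer files with ftp, http, etc.
  Version: 7.16.1
  Libs: -L${libdir} -lcurl
  Libs.private: -lssl -lcrypto -lz
  Cflags: -I${includedir}

'pkg-config --libs libcurl' then yields just -lcurl, while
'pkg-config --static --libs libcurl' adds the Libs.private entries needed
for static linking.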
Daniel Stenberg [Fri, 22 Dec 2006 15:04:59 +0000 (15:04 +0000)]
- Robert Foreman provided a prime example snippet showing how libcurl would
get confused and fail to acknowledge the 'no_proxy' variable properly once
it had used the proxy and the same easy handle was re-used. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
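A sketch of the confusing case, assuming http_proxy and no_proxy are set in
the environment and both hostnames are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    /* assume the environment holds for example:
       http_proxy=http://proxy.example.com:8080
       no_proxy=internal.example.com */
    CURL *curl = curl_easy_init();
    if(curl) {
      /* the first transfer goes through the proxy */
      curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
      curl_easy_perform(curl);

      /* re-using the same handle: this host matches no_proxy and must
         be fetched directly, not through the remembered proxy */
      curl_easy_setopt(curl, CURLOPT_URL, "http://internal.example.com/");
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }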
Daniel Stenberg [Fri, 22 Dec 2006 07:30:21 +0000 (07:30 +0000)]
When setting a proxy with environment variables and (for example) running
'curl [URL]' with a URL without a protocol prefix, curl would not send a
correct request as it failed to add the protocol prefix.
Daniel Stenberg [Thu, 21 Dec 2006 10:15:38 +0000 (10:15 +0000)]
Robson Braga Araujo reported bug #1618359
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading two zero-byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 makes one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle" change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information we had about the
header size. This patch moves it back to the connectdata struct.
Daniel Stenberg [Tue, 19 Dec 2006 14:28:01 +0000 (14:28 +0000)]
* removed the SSH-based protocols as they are now being implemented
* added a mention of doing the stunnel equivalent ourselves for the test suite
* spell-check
Daniel Stenberg [Tue, 19 Dec 2006 09:09:44 +0000 (09:09 +0000)]
37. Having more than one connection to the same host when doing NTLM
authentication (which performs multiple "passes" and authenticates a
connection rather than an HTTP request), and particularly when using the
multi interface, there's a risk that libcurl will re-use the wrong
connection when doing the different passes in the NTLM negotiation and thus
fail to negotiate (in seemingly mysterious ways). A sketch of the kind of
setup at risk follows after item 36 below.
36. --limit-rate (CURLOPT_MAX_SEND_SPEED_LARGE and
CURLOPT_MAX_RECV_SPEED_LARGE) are broken on Windows (since 7.16.0, but
that is when the options were introduced, as prior to that the limiting
logic was done in the application only and not in the library). This
problem is easily repeated, but it takes a Windows person firing up a
debugger to fix it. http://curl.haxx.se/bug/view.cgi?id=1603712
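Regarding item 37: since NTLM authenticates the connection rather than each
request, the whole multi-pass handshake must complete on one and the same
connection. A minimal sketch of such a transfer (hostname and credentials
are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://server.example.com/");
      /* NTLM is connection-oriented: if libcurl picks another cached
         connection between the handshake passes, the negotiation
         fails */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NTLM);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }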
Daniel Stenberg [Mon, 11 Dec 2006 09:32:58 +0000 (09:32 +0000)]
Alexey Simak found out that when doing FTP with the multi interface and
something went wrong, like getting a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also made sure the path
is no longer "remembered" in vain for cases where the connection is about to
get closed.
Daniel Stenberg [Wed, 6 Dec 2006 09:52:04 +0000 (09:52 +0000)]
clarify --limit-rate somewhat: it might send away/receive chunks of data at
temporarily higher speeds than requested, but the given limit is meant
"over time" and is an average
Daniel Stenberg [Wed, 6 Dec 2006 09:37:40 +0000 (09:37 +0000)]
Sebastien Willemijns reported bug #1603712
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface, and it didn't work as
documented, so I reworked it somewhat and now it works in my tests.
Daniel Stenberg [Tue, 5 Dec 2006 21:40:14 +0000 (21:40 +0000)]
Stefan Krause pointed out a compiler warning with a picky MSVC compiler when
passing a curl_off_t argument to the Curl_read_rewind() function, which
takes a size_t argument. Curl_read_rewind() also had debug code left in it,
and it was put in a different source file for no good reason when it is only
used from one single spot.
Daniel Stenberg [Tue, 5 Dec 2006 16:04:01 +0000 (16:04 +0000)]
Sh Diao reported that CURLOPT_CLOSEPOLICY doesn't work, and indeed, there is
no code present in the library that receives the option. Since it was not
possible to use, we know that no current users exist, so we simply removed
it from the docs and made the code always use the default code path.
Daniel Stenberg [Tue, 5 Dec 2006 15:36:26 +0000 (15:36 +0000)]
Jared Lundell filed bug report #1604956
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified that setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than one entry in the connection cache.
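A sketch of the setting that used to trigger the crash; libcurl now clamps
the value so the connection cache never has fewer than one entry:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* a zero here used to make libcurl crash later on; it is now
         treated as one */
      curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 0L);
      curl_easy_cleanup(curl);
    }
    return 0;
  }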
Daniel Stenberg [Tue, 5 Dec 2006 14:57:43 +0000 (14:57 +0000)]
Martin Skinner brought back bug report #1230118 to haunt us once again
(http://curl.haxx.se/bug/view.cgi?id=1230118): curl_getdate() did not work
properly for all input dates on Windows. It was mostly seen in some time
zones using DST. Luckily, Martin also provided a fix.
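For reference, curl_getdate() parses a date string into a time_t (the second
argument is unused and may be NULL); a minimal sketch of the kind of call
that misbehaved:

  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    time_t t = curl_getdate("Sun, 06 Nov 1994 08:49:37 GMT", NULL);
    if(t != -1)
      printf("parsed to %ld seconds since the epoch\n", (long)t);
    return 0;
  }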
Daniel Stenberg [Tue, 5 Dec 2006 13:49:29 +0000 (13:49 +0000)]
Alexey Simak filed bug report #1600447
(http://curl.haxx.se/bug/view.cgi?id=1600447) in which he noted that active
FTP connections don't work with the multi interface. The problem is that the
multi interface state machine has a state during which it can wait for the
data connection to connect, but the active connection is not made in the
same step in the sequence as the passive one is, so it doesn't quite work
for active mode. The active FTP code still uses a blocking function to allow
the remote server to connect.
The fix (work-around is a better word) for this problem is to prematurely
set the boolean saying that the data connection is complete, so that the
"wait for connect" phase ends at once.
Daniel Stenberg [Tue, 5 Dec 2006 13:37:05 +0000 (13:37 +0000)]
Matt Witherspoon fixed a problem case where the CPU load went to 100% when
an HTTP upload was disconnected:
"What appears to be happening is that my system (Linux 2.6.17 and 2.6.13) is
setting *only* POLLHUP on poll() when the conditions in my previous mail
occur. As you can see, select.c:Curl_select() does not check for POLLHUP. So
basically what was happening, is poll() was returning immediately (with
POLLHUP set), but when Curl_select() looked at the bits, neither POLLERR or
POLLOUT was set. This still caused Curl_readwrite() to be called, which
quickly returned. Then the transfer() loop kept continuing at full speed
forever."
Daniel Stenberg [Fri, 1 Dec 2006 07:49:22 +0000 (07:49 +0000)]
Toon Verwaest reported that there are servers that send the Content-Range:
header in a third format, not supported by libcurl, and we agreed that we
could make the parser more forgiving to accept all three found variations.
Daniel Stenberg [Sat, 25 Nov 2006 13:32:04 +0000 (13:32 +0000)]
Venkat Akella found out that libcurl did not like HTTP responses that simply
consist of a single status line with no headers and no body. Starting now,
an HTTP response on a persistent connection (i.e. one not set to be closed
after the response has been taken care of) must have Content-Length or
chunked encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases with bad
HTTP responses like this, and even the test HTTP server (sws) responded
badly when queried by the test system to verify that it is the test server.
So although the actual fix for the problem was tiny, going through all the
newly failing test cases got really painful and boring.
Daniel Stenberg [Wed, 22 Nov 2006 22:54:41 +0000 (22:54 +0000)]
Michael Wallner fixed this problem: when domains are set in the options
struct and there are domain/search entries in /etc/resolv.conf, the domains
from the options struct would get overridden.
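Assuming this refers to the ares options struct (curl bundles the c-ares
resolver), a sketch of setting domains that way:

  #include <string.h>
  #include <ares.h>

  int main(void)
  {
    ares_channel channel;
    struct ares_options opts;
    char *domains[] = { "example.com" };

    memset(&opts, 0, sizeof(opts));
    /* ARES_OPT_DOMAINS tells ares_init_options() to take the search
       domains from the struct; the fix keeps resolv.conf entries from
       overriding them */
    opts.domains = domains;
    opts.ndomains = 1;
    if(ares_init_options(&channel, &opts, ARES_OPT_DOMAINS) == ARES_SUCCESS)
      ares_destroy(channel);
    return 0;
  }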
Yang Tse [Wed, 22 Nov 2006 18:41:34 +0000 (18:41 +0000)]
Added a check in configure that verifies if <signal.h> is available,
defining HAVE_SIGNAL_H if the header is available.
Added a check in configure that tests if the sig_atomic_t type is
available, defining HAVE_SIG_ATOMIC_T if it is, and providing a suitable
default in setup_once.h if it is not.
Added a check in configure that tests if the sig_atomic_t type is
already defined as volatile, defining HAVE_SIG_ATOMIC_T_VOLATILE if so.
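A sketch of what the setup_once.h fallback might look like under these
defines; the exact code in the tree may differ:

  #ifdef HAVE_SIGNAL_H
  #include <signal.h>
  #endif

  #ifndef HAVE_SIG_ATOMIC_T
  /* suitable default when the type is missing: a plain int is atomic
     enough for signal flags on the supported platforms */
  typedef int sig_atomic_t;
  #endif

  #ifdef HAVE_SIG_ATOMIC_T_VOLATILE
  /* the system already declares the type volatile, so we must not add
     the qualifier a second time */
  #define SIG_ATOMIC_T static sig_atomic_t
  #else
  #define SIG_ATOMIC_T static volatile sig_atomic_t
  #endif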
Yang Tse [Fri, 17 Nov 2006 16:44:22 +0000 (16:44 +0000)]
The hash of running servers is now a hash of hashes, which for each running
server holds not only its two main pids but also the pidfile of the test
server and the 'slavepidfiles' for ftp* servers. This allows better control
when stopping servers.
Now when test servers are stopped from runtests.pl, they are signalled in
the sequence TERM, INT and KILL, allowing time in between for them to die.
This gives us a chance of gracefully stopping test servers, which we didn't
have when we simply killed them outright.
Daniel Stenberg [Mon, 13 Nov 2006 17:29:07 +0000 (17:29 +0000)]
Ron in bug #1595348 (http://curl.haxx.se/bug/view.cgi?id=1595348) pointed
out a stack overwrite (and provided the corresponding fix) on 64-bit Windows
when dealing with HTTP chunked encoding.
Daniel Stenberg [Thu, 9 Nov 2006 21:54:33 +0000 (21:54 +0000)]
Dmitriy Sergeyev found a SIGSEGV with his test04.c example posted on 7 Nov
2006. It turned out we wrongly assumed that the connection cache was present
when tearing down a connection.
Daniel Stenberg [Thu, 9 Nov 2006 21:36:18 +0000 (21:36 +0000)]
Ciprian Badescu found a SIGSEGV when doing multiple TFTP transfers using the
multi interface, but I could also repeat it doing multiple sequential ones
with the easy interface. Using Ciprian's test case, I could fix it.
Daniel Stenberg [Wed, 8 Nov 2006 21:49:14 +0000 (21:49 +0000)]
Bradford Bruce reported that when setting CURLOPT_DEBUGFUNCTION without
CURLOPT_VERBOSE set to non-zero, you still got a few debug messages from the
SSL handshake. This is now stopped.
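For reference, a minimal sketch of installing a debug callback; with
CURLOPT_VERBOSE left at its zero default, as here, libcurl should no longer
invoke it for SSL handshake messages:

  #include <stdio.h>
  #include <curl/curl.h>

  /* matches the curl_debug_callback prototype */
  static int my_trace(CURL *handle, curl_infotype type, char *data,
                      size_t size, void *userptr)
  {
    (void)handle; (void)data; (void)userptr;
    fprintf(stderr, "debug: type %d, %lu bytes\n", (int)type,
            (unsigned long)size);
    return 0;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace);
      /* CURLOPT_VERBOSE is not enabled, so no debug callbacks are
         expected any longer */
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }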