Matt Caswell [Tue, 8 Mar 2016 20:59:50 +0000 (20:59 +0000)]
Fix memory leak in ssltest
The new RAND usage of the thread API exposed a bug in ssltest. ssltest "cheats"
and uses internal headers to directly call functions that you normally
wouldn't be able to call. This means that auto-init doesn't happen, and
therefore auto-deinit doesn't happen either, meaning that the new RAND locks
don't get cleaned up properly.
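For context, a minimal sketch (not the actual fix) of how an ordinary
application picks up auto-init, and with it the auto-deinit that releases
resources such as the RAND locks; ssltest bypasses this path by calling
internal functions directly:

    #include <openssl/ssl.h>

    int main(void)
    {
        /*
         * Explicitly trigger library initialisation; normally this happens
         * automatically on the first call into the public API, which also
         * registers the automatic de-initialisation that runs at exit.
         */
        if (!OPENSSL_init_ssl(0, NULL))
            return 1;

        /* ... use libssl/libcrypto as usual ... */

        /* Optional: run the cleanup explicitly instead of waiting for exit. */
        OPENSSL_cleanup();
        return 0;
    }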
Richard Levitte [Wed, 9 Mar 2016 00:17:27 +0000 (01:17 +0100)]
Adapt unix Makefile template to 'no-makedepend'
This change is a bit more complex, as it involves several recipe
variants.
Also, remove the $(CROSS_COMPILE) prefix for the makedepend program.
When we use the "makedepend" program, the prefix serves no purpose,
and when we use the compiler, this value isn't used at all.
Richard Levitte [Tue, 8 Mar 2016 18:19:53 +0000 (19:19 +0100)]
Redo the Unix source code generator
For assembler, we want the final target to be foo.s (lowercase s).
However, the build.info may have lines like this (note upper case S):
GENERATE[foo.S]=foo.pl
This indicates that foo.s (lowercase s) is still to be produced, but
that producing it will take an extra step via $(CC) -E. Therefore,
the following variants (simplified for display) can be generated:
This adds a new accessor function DSA_SIG_get0.
The customisation of DSA_SIG structure initialisation has been removed. This
means that the 'r' and 's' components are automatically allocated when
DSA_SIG_new() is called. Update documentation.
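A minimal usage sketch of the new accessor; the exact prototype is an
assumption here (the DSA_SIG_get0(sig, &r, &s) form):

    #include <stdio.h>
    #include <openssl/dsa.h>
    #include <openssl/bn.h>

    /* Print the r and s components without touching the DSA_SIG structure
     * members directly. */
    static void print_dsa_sig(const DSA_SIG *sig)
    {
        const BIGNUM *r, *s;

        DSA_SIG_get0(sig, &r, &s);
        BN_print_fp(stdout, r);
        printf("\n");
        BN_print_fp(stdout, s);
        printf("\n");
    }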
Andy Polyakov [Tue, 8 Mar 2016 08:46:19 +0000 (09:46 +0100)]
SPARCv9 assembly pack: unify build rules and argument handling.
Make all scripts produce .S, make interpretation of $(CFLAGS)
pre-processor's responsibility, start accepting $(PERLASM_SCHEME).
[$(PERLASM_SCHEME) is redundant in this case, because there are
no deviations between Solaris and Linux assemblers. This is
purely to unify .pl->.S handling across all targets.]
Reviewed-by: Richard Levitte <levitte@openssl.org>
Benjamin Kaduk [Tue, 8 Mar 2016 00:00:03 +0000 (18:00 -0600)]
GH815: The ChaCha20/Poly1305 codepoints are official
CCA8, CCA9, CCAA, CCAB, CCAC, CCAD, and CCAE are now present in
https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml
so remove the "as per draft-ietf-tls-chacha20-poly1305-03" note
accordingly.
Signed-off-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Todd Short [Sat, 5 Mar 2016 13:47:55 +0000 (08:47 -0500)]
GH787: Fix ALPN
* Perform ALPN after the SNI callback; the SSL_CTX may change due to
that processing (see the sketch below).
* Add flags to indicate that we actually sent ALPN, so we can properly
error out if it is unexpectedly received.
* Clean up ssl3_free(): there is no need to explicitly clear fields that
are already covered by the memset.
* Document the ALPN functions.
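A server-side sketch (not taken from the commit) of why the ordering
matters: the ALPN selection callback is looked up on the SSL_CTX that is in
effect when ALPN is processed, so a callback registered on the SSL_CTX that
the SNI callback switches to can now take effect:

    #include <openssl/ssl.h>

    /* Protocol list in ALPN wire format: length-prefixed names. */
    static const unsigned char alpn_protos[] = {
        2, 'h', '2',
        8, 'h', 't', 't', 'p', '/', '1', '.', '1'
    };

    static int alpn_select_cb(SSL *ssl, const unsigned char **out,
                              unsigned char *outlen, const unsigned char *in,
                              unsigned int inlen, void *arg)
    {
        unsigned char *selected;

        if (SSL_select_next_proto(&selected, outlen, alpn_protos,
                                  sizeof(alpn_protos), in, inlen)
                != OPENSSL_NPN_NEGOTIATED)
            return SSL_TLSEXT_ERR_NOACK;
        *out = selected;
        return SSL_TLSEXT_ERR_OK;
    }

    /* Register the callback on both the default SSL_CTX and the one the
     * SNI callback may switch to (ctx names are illustrative). */
    void setup_alpn(SSL_CTX *default_ctx, SSL_CTX *sni_ctx)
    {
        SSL_CTX_set_alpn_select_cb(default_ctx, alpn_select_cb, NULL);
        SSL_CTX_set_alpn_select_cb(sni_ctx, alpn_select_cb, NULL);
    }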
Richard Levitte [Mon, 7 Mar 2016 23:04:27 +0000 (00:04 +0100)]
Change the INSTALL documentation for unified builds
Because of the unified scheme, building on different platforms is very
similar. We currently have Unix and OpenVMS on the unified scheme,
which means that a separate INSTALL.VMS is no longer needed.
Matt Caswell [Mon, 7 Mar 2016 12:17:42 +0000 (12:17 +0000)]
Rename the numpipes argument to ssl3_enc/tls1_enc
The numpipes argument to ssl3_enc/tls1_enc is actually the number of
records passed in the array. To make this clearer rename the argument to
|n_recs|.
Matt Caswell [Mon, 7 Mar 2016 10:17:27 +0000 (10:17 +0000)]
Rename EVP_CIPHER_CTX_cipher_data to EVP_CIPHER_CTX_get_cipher_data
We had the function EVP_CIPHER_CTX_cipher_data(), which was newly added for
1.1.0. As we now also need an EVP_CIPHER_CTX_set_cipher_data(), it makes
more sense for the former to be called EVP_CIPHER_CTX_get_cipher_data().
Matt Caswell [Tue, 16 Feb 2016 14:00:55 +0000 (14:00 +0000)]
Add documentation for the EVP_CIPHER_CTX_cipher_data functions
The new pipeline code added a new function
EVP_CIPHER_CTX_set_cipher_data(). Add documentation for this and the
existing EVP_CIPHER_CTX_cipher_data() function.
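A rough sketch of how a custom (e.g. engine-provided) cipher implementation
might use the pair; the cipher and its state structure here are hypothetical:

    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>

    /* Hypothetical per-context state for a custom cipher. */
    struct my_cipher_state {
        unsigned char key[16];
    };

    static int my_cipher_init(EVP_CIPHER_CTX *ctx, const unsigned char *key,
                              const unsigned char *iv, int enc)
    {
        struct my_cipher_state *state = EVP_CIPHER_CTX_get_cipher_data(ctx);

        /* If no cipher data was allocated for this context, attach our own. */
        if (state == NULL) {
            state = OPENSSL_zalloc(sizeof(*state));
            if (state == NULL)
                return 0;
            EVP_CIPHER_CTX_set_cipher_data(ctx, state);
        }
        if (key != NULL)
            memcpy(state->key, key, sizeof(state->key));
        return 1;
    }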
Matt Caswell [Tue, 16 Feb 2016 12:10:53 +0000 (12:10 +0000)]
Remove the wrec record layer field
We used to use the wrec field in the record layer for keeping track of the
current record that we are writing out. As part of the pipelining changes
this has been moved to stack allocated variables to do the same thing,
therefore the field is no longer needed.
Matt Caswell [Tue, 16 Feb 2016 10:36:18 +0000 (10:36 +0000)]
Add documentation for SSL_has_pending()
A previous commit added the SSL_has_pending() function, which provides a
way of knowing whether OpenSSL has buffered, but as yet unprocessed,
record data.
Matt Caswell [Fri, 12 Feb 2016 13:33:45 +0000 (13:33 +0000)]
Ensure s_client and s_server work when read_ahead is set
Previously s_client and s_server relied on using SSL_pending() which does
not take into account read_ahead. For read pipelining to work, read_ahead
gets set automatically. Therefore s_client and s_server have been
converted to use SSL_has_pending() instead.
Matt Caswell [Fri, 12 Feb 2016 12:03:58 +0000 (12:03 +0000)]
Add an SSL_has_pending() function
This is similar to SSL_pending() but returns 1 if there is data pending in
the internal OpenSSL buffers or 0 otherwise (as opposed to SSL_pending(),
which returns the number of bytes available). Unlike SSL_pending(), this
will work even if "read_ahead" is set (which is the case if you are using
read pipelining, or if you are doing DTLS). A 1
return value means that we have unprocessed data. It does *not* necessarily
indicate that there will be application data returned from a call to
SSL_read(). The unprocessed data may not be application data or there
could be errors when we attempt to parse the records.
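A minimal sketch of the intended usage pattern (the surrounding loop and
buffer handling are illustrative only):

    #include <openssl/ssl.h>

    /* Drain whatever OpenSSL has already buffered before going back to
     * poll/select on the underlying socket. */
    static void drain_buffered(SSL *ssl)
    {
        unsigned char buf[4096];
        int n;

        while (SSL_has_pending(ssl)) {
            n = SSL_read(ssl, buf, sizeof(buf));
            if (n <= 0)
                break;  /* pending data may not be application data */
            /* ... handle n bytes of application data ... */
        }
    }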
Matt Caswell [Wed, 13 Jan 2016 14:20:25 +0000 (14:20 +0000)]
Add an ability to set the SSL read buffer size
This capability is required for read pipelining. We will only read in as
many records as will fit in the read buffer (and the network can provide
in one go). The bigger the buffer the more records we can process in
parallel.
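The commit text does not name the API; assuming the
SSL_CTX_set_default_read_buffer_len()/SSL_set_default_read_buffer_len()
setters introduced alongside this work, usage would look roughly like:

    #include <openssl/ssl.h>

    /* Make room for several 16KB records so that multiple records can be
     * read from the network and processed in one go. */
    void configure_read_buffer(SSL_CTX *ctx, SSL *ssl)
    {
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * 16384);
        /* ... or per connection ... */
        SSL_set_default_read_buffer_len(ssl, 4 * 16384);
    }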
Matt Caswell [Wed, 13 Jan 2016 11:44:04 +0000 (11:44 +0000)]
Lazily initialise the compression buffer
With read pipelining we use multiple SSL3_RECORD structures for reading.
There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
would be used). Each one has a 16k compression buffer allocated! This
results in a significant amount of memory being consumed which, most of the
time, is not needed. This change swaps the allocation of the compression
buffer to be lazy so that it is only done immediately before it is actually
used.
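An illustrative sketch of the lazy-allocation pattern (the names here are
hypothetical, not the actual record-layer code):

    #include <openssl/crypto.h>

    #define COMP_BUF_SIZE (16 * 1024)

    struct record {
        unsigned char *comp_buf;  /* NULL until compression is actually used */
    };

    /* Allocate the compression buffer only when a record is about to be
     * (de)compressed, instead of up front for every pipeline slot. */
    static int get_comp_buf(struct record *rec)
    {
        if (rec->comp_buf == NULL) {
            rec->comp_buf = OPENSSL_malloc(COMP_BUF_SIZE);
            if (rec->comp_buf == NULL)
                return 0;
        }
        return 1;
    }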
Matt Caswell [Tue, 12 Jan 2016 14:52:35 +0000 (14:52 +0000)]
Implement read pipeline support in libssl
Read pipelining is controlled in a slightly different way than with write
pipelining. While reading we are constrained by the number of records that
the peer (and the network) can provide to us in one go. The more records
we can get in one go the more opportunity we have to parallelise the
processing.
There are two parameters that affect this:
* The number of pipelines that we are willing to process in one go. This is
controlled by max_pipelines (as for write pipelining).
* The size of our read buffer. A subsequent commit will provide an API for
adjusting the size of the buffer.
Another requirement for this to work is that "read_ahead" must be set. The
read_ahead parameter will attempt to read as much data into our read buffer
as the network can provide. Without this set, data is read into the read
buffer on demand. Setting the max_pipelines parameter to a value greater
than 1 will automatically also turn read_ahead on.
Finally, read pipelining as currently implemented will only parallelise the
processing of application data records. Doing the same for other record
types would only make a difference during renegotiation, so it is unlikely
to have a significant impact.
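A rough configuration sketch based on the description above (a pipeline
capable cipher, such as the one in the dasync engine, must also be in use
for this to have any effect):

    #include <openssl/ssl.h>

    /* Enable read pipelining: allow up to 4 records to be processed in
     * parallel; setting max_pipelines > 1 also turns read_ahead on, but it
     * is shown explicitly here for clarity. */
    void enable_read_pipelining(SSL_CTX *ctx)
    {
        SSL_CTX_set_max_pipelines(ctx, 4);
        SSL_CTX_set_read_ahead(ctx, 1);
    }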
Matt Caswell [Tue, 22 Sep 2015 10:23:33 +0000 (11:23 +0100)]
Add pipeline support to s_server and s_client
Add the options min_send_frag and max_pipelines to s_server and s_client
in order to control pipelining capabilities. This will only have an effect
if a pipeline capable cipher is used (such as the one provided by the
dasync engine).