Matt Caswell [Fri, 12 Feb 2016 13:33:45 +0000 (13:33 +0000)]
Ensure s_client and s_server work when read_ahead is set
Previously s_client and s_server relied on using SSL_pending() which does
not take into account read_ahead. For read pipelining to work, read_ahead
gets set automatically. Therefore s_client and s_server have been
converted to use SSL_has_pending() instead.
Matt Caswell [Fri, 12 Feb 2016 12:03:58 +0000 (12:03 +0000)]
Add an SSL_has_pending() function
This is similar to SSL_pending() but just returns a 1 if there is data
pending in the internal OpenSSL buffers or 0 otherwise (as opposed to
SSL_pending() which returns the number of bytes available). Unlike
SSL_pending() this will work even if "read_ahead" is set (which is the
case if you are using read pipelining, or if you are doing DTLS). A 1
return value means that we have unprocessed data. It does *not* necessarily
indicate that there will be application data returned from a call to
SSL_read(). The unprocessed data may not be application data or there
could be errors when we attempt to parse the records.
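For illustration (this sketch is not part of the commit), a read loop might use SSL_has_pending() to drain whatever OpenSSL has already buffered before going back to poll()/select() on the socket, where ssl is an established connection:

    #include <openssl/ssl.h>

    /* Minimal sketch: drain records that OpenSSL has already buffered.
     * SSL_has_pending() == 1 does not guarantee that the next SSL_read()
     * will return application data (the buffered data may be a non-application
     * record, or may fail to parse). */
    static void drain_buffered(SSL *ssl)
    {
        unsigned char buf[4096];

        do {
            int n = SSL_read(ssl, buf, sizeof(buf));
            if (n <= 0)
                break;      /* inspect SSL_get_error(ssl, n) in real code */
            /* ... handle n bytes of application data ... */
        } while (SSL_has_pending(ssl));
    }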
Matt Caswell [Wed, 13 Jan 2016 14:20:25 +0000 (14:20 +0000)]
Add an ability to set the SSL read buffer size
This capability is required for read pipelining. We will only read in as
many records as will fit in the read buffer (and the network can provide
in one go). The bigger the buffer the more records we can process in
parallel.
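The commit message does not name the new accessors; assuming they are the SSL_CTX_set_default_read_buffer_len()/SSL_set_default_read_buffer_len() pair documented for OpenSSL 1.1.0, a rough usage sketch is:

    #include <openssl/ssl.h>

    /* Sketch only: enlarge the read buffer so that several records can be
     * pulled off the network (and later processed) in one go.  The sizing
     * below is an arbitrary example, not a recommendation. */
    static void configure_read_buffer(SSL_CTX *ctx)
    {
        /* Roughly four 16KB records plus some per-record overhead. */
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * (16 * 1024 + 512));
    }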
Matt Caswell [Wed, 13 Jan 2016 11:44:04 +0000 (11:44 +0000)]
Lazily initialise the compression buffer
With read pipelining we use multiple SSL3_RECORD structures for reading.
There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
would be used). Each one has a 16k compression buffer allocated! This
results in a significant amount of memory being consumed which, most of the
time, is not needed. This change swaps the allocation of the compression
buffer to be lazy so that it is only done immediately before it is actually
used.
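The change is ordinary lazy allocation; a simplified sketch of the pattern (hypothetical names, not the actual SSL3_RECORD code) is:

    #include <stdlib.h>

    #define COMP_BUF_SIZE (16 * 1024)

    struct rec {                /* hypothetical stand-in for SSL3_RECORD */
        unsigned char *comp;    /* compression buffer, allocated on demand */
    };

    /* Allocate the 16k buffer only when a record is actually about to be
     * (de)compressed, rather than up front for all 32 pipeline slots. */
    static int ensure_comp_buf(struct rec *r)
    {
        if (r->comp == NULL) {
            r->comp = malloc(COMP_BUF_SIZE);
            if (r->comp == NULL)
                return 0;
        }
        return 1;
    }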
Matt Caswell [Tue, 12 Jan 2016 14:52:35 +0000 (14:52 +0000)]
Implement read pipeline support in libssl
Read pipelining is controlled in a slightly different way than with write
pipelining. While reading we are constrained by the number of records that
the peer (and the network) can provide to us in one go. The more records
we can get in one go the more opportunity we have to parallelise the
processing.
There are two parameters that affect this:
* The number of pipelines that we are willing to process in one go. This is
controlled by max_pipelines (as for write pipelining)
* The size of our read buffer. A subsequent commit will provide an API for
adjusting the size of the buffer.
Another requirement for this to work is that "read_ahead" must be set. With
read_ahead set, we attempt to read as much data into our read buffer as the
network can provide; without it, data is read into the read buffer only on
demand. Setting the max_pipelines parameter to a value greater than 1 will
automatically turn read_ahead on as well.
Finally, read pipelining as currently implemented only parallelises the
processing of application data records. This restriction would only make a
difference during renegotiation, so it is unlikely to have a significant
impact.
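Pulling those knobs together, enabling read pipelining from an application might look roughly like the sketch below (it assumes a pipeline-capable cipher implementation is in use, e.g. via an engine, and that the read-buffer accessor is the one described in the earlier commit):

    #include <openssl/ssl.h>

    /* Sketch only: configure a context for read pipelining. */
    static void enable_read_pipelining(SSL_CTX *ctx)
    {
        /* Process up to 4 records in parallel; a value > 1 also turns
         * read_ahead on automatically. */
        SSL_CTX_set_max_pipelines(ctx, 4);

        /* read_ahead is implied by the above, but can be set explicitly. */
        SSL_CTX_set_read_ahead(ctx, 1);

        /* Make the read buffer large enough to hold several full records
         * (arbitrary example sizing). */
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * (16 * 1024 + 512));
    }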
Matt Caswell [Tue, 22 Sep 2015 10:23:33 +0000 (11:23 +0100)]
Add pipeline support to s_server and s_client
Add the options min_send_frag and max_pipelines to s_server and s_client
in order to control pipelining capabilities. This will only have an effect
if a pipeline capable cipher is used (such as the one provided by the
dasync engine).
Matt Caswell [Tue, 22 Sep 2015 10:12:50 +0000 (11:12 +0100)]
Implement write pipeline support in libssl
Use the new pipeline cipher capability to encrypt multiple records being
written out all in one go. Two new SSL/SSL_CTX parameters can be used to
control how this works: max_pipelines and split_send_fragment.
max_pipelines defines the maximum number of pipelines that can ever be used
in one go for a single connection. It must always be less than or equal to
SSL_MAX_PIPELINES (currently defined to be 32). By default only one
pipeline will be used (i.e. normal non-parallel operation).
split_send_fragment defines how data is split up into pipelines. The number
of pipelines used will be determined by the amount of data provided to the
SSL_write call divided by split_send_fragment. For example if
split_send_fragment is set to 2000 and max_pipelines is 4 then:
SSL_write called with 0-2000 bytes == 1 pipeline used
SSL_write called with 2001-4000 bytes == 2 pipelines used
SSL_write called with 4001-6000 bytes == 3 pipelines used
SSL_write called with 6001+ bytes == 4 pipelines used
split_send_fragment must always be less than or equal to max_send_fragment.
By default it is set to be equal to max_send_fragment. This will mean that
the same number of records will always be created as would have been
created in the non-parallel case, although the data will be apportioned
differently. In the parallel case data will be spread equally between the
pipelines.
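For example, the 2000-byte split described above could be configured as in this sketch (again assuming a pipeline-capable cipher is in use):

    #include <openssl/ssl.h>

    /* Sketch only: allow up to 4 write pipelines, splitting outgoing data
     * into 2000-byte chunks, so an 8000-byte SSL_write() produces 4 records
     * that can be encrypted in parallel. */
    static void enable_write_pipelining(SSL_CTX *ctx)
    {
        SSL_CTX_set_max_pipelines(ctx, 4);
        SSL_CTX_set_split_send_fragment(ctx, 2000);
    }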
Matt Caswell [Tue, 22 Sep 2015 10:11:24 +0000 (11:11 +0100)]
Update the dasync engine to add a pipeline cipher
Implement aes128-cbc as a pipeline capable cipher in the dasync engine.
As dasync is just a dummy engine, it actually just performs the parallel
encrypts/decrypts in serial.
Matt Caswell [Tue, 22 Sep 2015 10:08:25 +0000 (11:08 +0100)]
Add defines for pipeline capable ciphers
Add a flag to indicate that a cipher is capable of performing
"pipelining", i.e. multiple encrypts/decrypts in parallel. Also add some
new ctrls that ciphers will need to implement if they are pipeline capable.
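From the libssl side, detecting such a cipher is just a flag test; the flag name below (EVP_CIPH_FLAG_PIPELINE) is the one this feature adds, quoted here from memory rather than from the commit text:

    #include <openssl/evp.h>

    /* Sketch: decide whether multiple records can be handed to the cipher
     * in a single encrypt/decrypt call. */
    static int cipher_can_pipeline(const EVP_CIPHER *cipher)
    {
        return (EVP_CIPHER_flags(cipher) & EVP_CIPH_FLAG_PIPELINE) != 0;
    }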
Emilia Kasper [Mon, 7 Mar 2016 14:15:20 +0000 (15:15 +0100)]
Trim Travis config part 3
- Only build & test two configurations. Make all the
other build variants build-only on gcc (clang on osx).
- Don't build with default clang at all on linux.
- Only use gcc-5 and clang-3.6 for the sanitizer builds. Re-running
e.g. CONFIG_OPTS="shared" with them seems redundant.
Reviewed-by: Richard Levitte <levitte@openssl.org>
David Woodhouse [Mon, 22 Feb 2016 16:44:46 +0000 (16:44 +0000)]
Elide OPENSSL_INIT_set_config_filename() for no-stdio build
Strictly speaking, it isn't stdio and file access which offend me here;
it's the fact that UEFI doesn't provide a strdup() function. But the
fact that it's pointless without file access is a good enough excuse for
compiling it out.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
Richard Levitte [Sat, 5 Mar 2016 18:05:25 +0000 (19:05 +0100)]
Make OpenSSL::Test::setup() a bit more forgiving
It was unexpected that OpenSSL::Test::setup() should be called twice
by the same recipe. However, that may happen if a recipe combines
OpenSSL::Test and OpenSSL::Test::Simple, which can be a sensible thing
to do. Therefore, we now allow it.
Richard Levitte [Mon, 7 Mar 2016 13:50:37 +0000 (14:50 +0100)]
Unified - Add the build.info command OVERRIDE, to avoid build file clashes
Should it be needed because the recipes within a RAW section might
clash with those generated by Configure, it's possible to tell it
not to generate them with the use of OVERRIDES.
The value of each GENERATE line is a command line or part of it.
Configure places no rules on the command line, except that the first
item must be the generator file. It is, however, entirely up to the
build file template to define exactly how those command lines should
be handled, how the output is captured and so on.
Andrea Grandi [Wed, 9 Dec 2015 07:26:38 +0000 (07:26 +0000)]
Add support for async jobs in OpenSSL speed
Summary of the changes:
* Move the calls to the crypto operations inside wrapper functions.
This is required because ASYNC_start_job takes a function as an argument.
* Add new function run_benchmark() that manages the jobs for all the operations.
In the POSIX case it uses select() to receive the events from the engine
and resume the jobs that are paused, while in the Windows case it uses PeekNamedPipe().
* Add new option argument async_jobs to enable and specify the number of async jobs
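The general shape of driving one operation through an async job, as the API ended up in OpenSSL 1.1.0 (the signatures in the tree at the time of this commit may differ slightly), is roughly:

    #include <openssl/async.h>

    /* Hypothetical wrapper: speed wraps each benchmarked operation in a
     * function like this so it can be handed to ASYNC_start_job(). */
    static int do_one_op(void *args)
    {
        /* ... perform the crypto operation; the engine may pause the job ... */
        return 1;
    }

    /* Sketch only; assumes ASYNC_init_thread() has already set up a job pool. */
    static int run_one_async(void)
    {
        ASYNC_JOB *job = NULL;
        ASYNC_WAIT_CTX *wctx = ASYNC_WAIT_CTX_new();
        int ret = 0;

        if (wctx == NULL)
            return 0;

        for (;;) {
            switch (ASYNC_start_job(&job, wctx, &ret, do_one_op, NULL, 0)) {
            case ASYNC_FINISH:
                ASYNC_WAIT_CTX_free(wctx);
                return ret;
            case ASYNC_PAUSE:
                /* Job yielded waiting on an engine event: wait on the fds
                 * from ASYNC_WAIT_CTX_get_all_fds() with select() (POSIX)
                 * or PeekNamedPipe() (Windows), then loop to resume. */
                break;
            default:            /* ASYNC_ERR or ASYNC_NO_JOBS */
                ASYNC_WAIT_CTX_free(wctx);
                return 0;
            }
        }
    }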
Emilia Kasper [Thu, 3 Mar 2016 18:50:03 +0000 (19:50 +0100)]
Rework the default cipherlist.
- Always prefer forward-secure handshakes.
- Consistently order ECDSA above RSA.
- Next, always prefer AEADs to non-AEADs, irrespective of strength.
- Within AEADs, prefer GCM > CHACHA > CCM for a given strength.
- Prefer TLS v1.2 ciphers to legacy ciphers.
- Remove rarely used DSS, IDEA, SEED, CAMELLIA, CCM from the default
list to reduce ClientHello bloat.
Andy Polyakov [Fri, 4 Mar 2016 10:39:11 +0000 (11:39 +0100)]
bn/asm/x86[_64]-mont*.pl: complement alloca with page-walking.
Some OSes, *cough*-dows, insist on stack being "wired" to
physical memory in strictly sequential manner, i.e. if stack
allocation spans two pages, then reference to farmost one can
be punishable by SEGV. But page walking can do good even on
other OSes, because it guarantees that villain thread hits
the guard page before it can make damage to innocent one...
clucey [Tue, 23 Feb 2016 08:01:01 +0000 (08:01 +0000)]
Rework based on feedback:
1. Cleaned up eventfd handling
2. Reworked socket setup code to allow other algorithms to be added in
future
3. Fixed compile errors for static build
4. Added error to error stack in all cases of ALG_PERR/ALG_ERR
5. Called afalg_aes_128_cbc() from bind() to avoid race conditions
6. Used MAX_INFLIGHT define in io_getevents system call
7. Coding style fixes
Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Emilia Kasper [Sun, 6 Mar 2016 21:31:18 +0000 (22:31 +0100)]
Trim Travis config part 2
- Remove Win builds (temporarily). They're slow, allowed to fail,
and therefore not useful as they are.
- Make the --unified part of the matrix build-only. (This can be
swapped if --unified becomes the default)
- Only build 'no-engine' once, don't run any tests, but don't allow it
to fail.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Emilia Kasper [Sun, 6 Mar 2016 20:59:53 +0000 (21:59 +0100)]
Trim the Travis config
- Remove no-asm. We've got to cut something, and this is at least
partially covered by the sanitizer builds.
- Remove enable-crypto-mdebug from sanitizer
builds. enable-crypto-mdebug has been shown to catch some static
initialization bugs that the standard leak sanitizer can't so
perhaps it has _some_ value; but we shouldn't let the two compete.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Richard Levitte [Fri, 4 Mar 2016 12:48:59 +0000 (13:48 +0100)]
No -fno-common for Darwin
When object files with common block symbols are added to static
libraries on Darwin, those symbols are invisible to the linker that
tries to use them. Our solution was to use -fno-common when compiling
C source.
Unfortunately, there is assembler code that defines OPENSSL_ia32cap_P
as a common block symbol, unconditionally, and in some cases, there is
no other definition. -fno-common doesn't help in this case.
However, 'ranlib -c' adds common block symbols to the index of the
static library, which makes them visible to the linker using it, and
that solves the problem we've seen.
The common conclusion is, either use -fno-common or ranlib -c on
Darwin. Since we have common block symbols unconditionally, choosing
the method for our source is easy.
Add support for an application-supplied ANY DEFINED BY callback. An
application can change the selector value if it wishes. This is
mainly intended for values which are only known at runtime, for
example dynamically created OIDs.
Rob Percival [Thu, 3 Mar 2016 16:06:59 +0000 (16:06 +0000)]
If a CT log entry in CTLOG_FILE is invalid, skip it and continue loading
Previously, the remaining CT log entries would not be loaded.
Also, CTLOG_STORE_load_file would return 1 even if a log entry was
invalid, resulting in no errors being shown.
Reviewed-by: Ben Laurie <ben@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
Matt Caswell [Thu, 3 Mar 2016 15:40:51 +0000 (15:40 +0000)]
Don't build RC4 ciphersuites into libssl by default
RC4 based ciphersuites in libssl have been disabled by default. They can
be added back by building OpenSSL with the "enable-weak-ssl-ciphers"
Configure option at compile time.
Richard Levitte [Thu, 3 Mar 2016 09:07:29 +0000 (10:07 +0100)]
Restore the zlib / zlib-dynamic logic
The proper logic is that both zlib and zlib-dynamic are disabled by
default and that enabling zlib-dynamic would enable zlib. Somewhere
along the way, the logic got changed, zlib-dynamic was enabled by
default and zlib didn't get automatically enabled.
PVK files with abnormally large length or salt fields can cause an
integer overflow which can result in an OOB read and heap corruption.
However, this is a rarely used format and private key files do not
normally come from untrusted sources, so the security implications are not
significant.
Fix by limiting PVK length field to 100K and salt to 10K: these should be
more than enough to cover any files encountered in practice.