Branden Archer [Mon, 7 Nov 2016 01:32:26 +0000 (20:32 -0500)]
Use ctest on VC builds to run unit tests
msbuild was running the unit tests, but if a failure was hit, the output
of the unit tests was not printed. As a result, there was no indication
of what actually failed in a given unit test program. Using ctest (part
of CMake) should print the unit test program's output, allowing better
failure diagnosis.
Branden Archer [Sat, 5 Nov 2016 04:27:58 +0000 (00:27 -0400)]
Better error reporting for check_check_sub failures
If a unit test covered in check_check_sub fails, report the specific
test case and test name, so as to better identify which test is
failing and under what environment.
Branden Archer [Sun, 6 Nov 2016 14:01:40 +0000 (09:01 -0500)]
List test names in the master_tests table
Sometimes it is tricky to track down a unit test failure, as it is
difficult to see which test actually failed. To make this easier,
all unit tests in the master_tests table will now have their test
name recorded. The name is also verified at unit test time, to
ensure that it is accurate.
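For illustration, an entry in such a table might carry the name alongside the
expected result. The struct below is a hypothetical sketch; the field names and
layout are assumptions, not the actual contents of check_check_master.c:

    /* Hypothetical sketch only: field names and layout are assumptions. */
    typedef struct
    {
        const char *tcname;    /* test case (TCase) the entry belongs to */
        const char *test_name; /* name of the individual test, now recorded */
        const char *msg;       /* expected failure message */
    } master_test_t;

    static const master_test_t master_tests[] = {
        { "Simple Tests", "test_lno", "Failure expected" },
    };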
Branden Archer [Sat, 5 Nov 2016 04:14:52 +0000 (00:14 -0400)]
Add call for retrieving test name
This will be used internally to determine the currently running test
and record it during the check_check_sub tests, in order to make
debugging unit test failures easier. The call may also be useful
for users of Check.
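A minimal sketch of how such a call might be used from within a test. The name
tcase_name() matches the API Check later documents for retrieving the current
test name, but the exact signature here should be treated as an assumption:

    #include <check.h>
    #include <stdio.h>

    START_TEST(test_reports_its_own_name)
    {
        /* Assumed API: returns the name of the currently running test. */
        const char *name = tcase_name();

        printf("running: %s\n", name);
        ck_assert(name != NULL);
    }
    END_TEST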
Support arbitrary tagging and selection of test cases
A test case can optionally have a list of tags associated with it.
An SRunner can be run with an optional include list of tags and an
optional exclude list of tags; these filter the test cases that
would otherwise be run.
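A minimal sketch of how the tag API might be used. tcase_set_tags() and
srunner_run_tagged() are the names this feature is documented under in later
Check releases, but the exact signatures in this sketch are assumptions:

    #include <check.h>
    #include <stdlib.h>

    START_TEST(quick_math)
    {
        ck_assert_int_eq(1 + 1, 2);
    }
    END_TEST

    int main(void)
    {
        Suite *s = suite_create("Tagged");
        TCase *tc = tcase_create("fast");
        SRunner *sr;
        int failed;

        /* Associate space-separated tags with the test case. */
        tcase_set_tags(tc, "quick smoke");
        tcase_add_test(tc, quick_math);
        suite_add_tcase(s, tc);

        sr = srunner_create(s);
        /* Run only test cases tagged "quick", excluding anything tagged "slow". */
        srunner_run_tagged(sr, NULL, NULL, "quick", "slow", CK_NORMAL);

        failed = srunner_ntests_failed(sr);
        srunner_free(sr);
        return (failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
    }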
Reduce timeout double scaling test scale to avoid UT failures
The test "test_sleep2_fail" occasionally passes (i.e. fails to fail)
in the Jenkins UT runs; perhaps 1.6 seconds does not give enough leeway.
Make it 1.5 seconds so that it sits exactly in the middle between
sleep1_pass and sleep2_fail.
Branden Archer [Wed, 29 Jun 2016 02:12:23 +0000 (22:12 -0400)]
Do not report failure line numbers if file not set up
When running the master suite, failure line numbers are written to
a file. Usually this is fine. However, the check_mem_leak.c test
re-uses the same tests, and every time a failure is hit a SEGV
occurs because the file is never set up.
The memory leak test does not check the line numbers, so omitting
them is fine. This fix prevents many SEGV failures when running
check_mem_leak.c under valgrind.
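A hedged sketch of the guard this describes; the file handle and helper names
below are illustrative, not necessarily the ones used in the test harness:

    #include <stdio.h>

    static FILE *line_num_failures = NULL;  /* opened only by the master suite */

    static void record_failure_line_num(int line)
    {
        /* check_mem_leak.c re-uses the tests without opening the file;
         * skip recording instead of writing through a NULL FILE*. */
        if (line_num_failures == NULL)
            return;

        fprintf(line_num_failures, "%d\n", line);
    }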
Branden Archer [Sun, 19 Jun 2016 20:52:26 +0000 (16:52 -0400)]
Disallow a test case from being added to a suite twice
It was possible to add a test case to a suite more than once. When
the suite was later freed, the test case was freed twice, causing a
double free error.
Although this is likely a bug on the test writer's part, this change
makes Check disallow the duplicate addition, preventing a double
free of the test case.
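In other words, the following usage used to crash when the suite was freed.
The helper function is hypothetical, and whether the duplicate call is silently
ignored or reported is assumed in the comment below:

    #include <check.h>

    Suite *make_suite(void)
    {
        Suite *s = suite_create("Example");
        TCase *tc = tcase_create("Core");

        suite_add_tcase(s, tc);
        /* Previously this second call put the same TCase on the suite's list
         * twice, leading to a double free when the suite was freed.  With this
         * change the duplicate addition is disallowed (assumed here to be a
         * no-op rather than an error). */
        suite_add_tcase(s, tc);

        return s;
    }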
Branden Archer [Sun, 19 Jun 2016 20:49:20 +0000 (16:49 -0400)]
Remove double addition of test suite
The original purpose of adding the default timeout tests twice no
longer seems relevant; the timeout tests have since been split into
several different types, each using either tcase_set_timeout() or an
environment variable. The tc_timeout_default suite never has anything
but the default timeout set, so the reason the suite was originally
added twice no longer applies.
In addition, if the suite were formally torn down, the double addition
would cause a double free error.
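For reference, the two configuration paths mentioned above look roughly like
the following; CK_DEFAULT_TIMEOUT is the environment variable I believe is
meant, but treat that name as an assumption:

    #include <check.h>

    static void configure_timeout(TCase *tc)
    {
        /* Explicit per-TCase timeout, in seconds. */
        tcase_set_timeout(tc, 9);
    }

    /* ...or, without recompiling, via an environment variable, e.g.:
     *     CK_DEFAULT_TIMEOUT=9 ./check_check
     */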
Branden Archer [Thu, 26 May 2016 03:15:17 +0000 (23:15 -0400)]
Add test case where %n is passed into a ck_assert call
In the past it was possible to cause issues if arguments containing
%n were passed into ck_assert* macros. Existing unit tests check
output format options such as %d and %f, but no test existed for %n,
which writes a value to memory when interpreted.
Ulrich Eckhardt observed that passing something containing %n would
cause issues when running Check 0.9.10. Although the issue is already
resolved, this commit adds the test case mentioned in Ulrich's GitHub
issue: https://github.com/libcheck/check/issues/41
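A hedged sketch of the kind of regression test this adds; the test name and
string below are illustrative, not the exact case from the issue:

    #include <check.h>

    START_TEST(test_percent_n_in_assertion)
    {
        const char *tricky = "%n%n%n";

        /* The comparison fails on purpose so that the failure message
         * containing %n is actually formatted by the framework.  If the
         * string were used directly as a printf format, %n would write
         * to memory; the macros must treat it as plain data. */
        ck_assert_str_eq(tricky, "something else");
    }
    END_TEST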
The ltmain.sh script generated by libtoolize does not seem to honor the AC_CONFIG_AUX_DIR parameter. As a
result, the configure script will fail. To work around this, remove the AC_CONFIG_AUX_DIR option
so that all generated scripts are placed into the current directory.
This is half an issue report and half a pull request.
While autoreconf -fi works for most of the needed scripts, it does
not copy ltmain.sh into build-aux, which results in the following
./configure failure:
configure.ac:137: error: required file 'build-aux/ltmain.sh' not found
This patch, similar to one I've provided to cloog[0], allows
./configure to at least complete.
I'm not sure what you think, since the directory was obviously used
for some reason, likely to reduce top-level directory clutter, but
this at least works for me.
Joshua D. Boyd [Tue, 22 Mar 2016 04:50:34 +0000 (00:50 -0400)]
In cmake build, use mkstemp when present
This fixes the following warning, seen when building with CMake:
check/src/check_msg.c:247: warning: the use of 'tempnam' is dangerous, better use `mkstemp'
CMakeLists.txt checks for mkstemp being present and sets a CMake
variable HAVE_MKSTEMP. The check_msg.c file will use mkstemp if
HAVE_MKSTEMP is true, but the CMake variable wasn't causing HAVE_MKSTEMP
to be defined for the C compiler.
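The pattern in question looks roughly like the following. The real check_msg.c
differs in detail, so read this as a sketch of why HAVE_MKSTEMP must reach the
C compiler as a preprocessor definition:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static FILE *open_tmp_file(char **name)
    {
    #ifdef HAVE_MKSTEMP
        /* Preferred: mkstemp() creates and opens the file atomically. */
        char tmpl[] = "/tmp/check_XXXXXX";
        int fd = mkstemp(tmpl);

        if (fd < 0)
            return NULL;
        *name = strdup(tmpl);
        return fdopen(fd, "w+b");
    #else
        /* Fallback: tempnam() only picks a name, hence the compiler warning. */
        *name = tempnam(NULL, "check_");
        return (*name != NULL) ? fopen(*name, "w+b") : NULL;
    #endif
    }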
lib/strsignal.c: strsignal() should not be declared const
POSIX requires strsignal() to return a pointer to char, not a pointer
to const char. [1] On uClibc, and possibly other libcs, declaring it
const conflicts with the correct declaration in string.h.
[1] man 3 strsignal
Signed-off-by: Anthony G. Basile <blueness@gentoo.org>
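For reference, the conforming prototype (as declared in string.h) is:

    #include <string.h>

    /* POSIX: the return type is char *, not const char *. */
    char *strsignal(int sig);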
Jan Pokorný [Tue, 1 Mar 2016 17:41:58 +0000 (18:41 +0100)]
Fix segfault when memmoving with negative/enormous n
I observed a segmentation fault caused by a memmove of -15 bytes, which
becomes 18446744073709551601 on my 64-bit platform after argument type
promotion (from int to size_t). In my case this was connected with the
disk filling up during a test facilitated by check, from which I conclude
that the main issue was that not enough bytes for a particular type of
message were actually read (or previously written, for that matter), and
because of this incompleteness, get_result happily consumed more bytes
than had been read.
Additional debugging info at the point of segfault (src/check_pack.c):
> 468│ /* Move remaining data in buffer to the beginning */
> 469├> memmove(buf, buf + n, nparse);
> 470│ /* If EOF has not been seen */
> 471│ if(nread > 0)
>
> (gdb) p nparse
> $1 = -15
> (gdb) p n
> $2 = 23
> (gdb) p nread
> $3 = 0
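A hedged sketch of the kind of guard implied by the report; the variable names
mirror the gdb session above, but the surrounding logic is illustrative rather
than check_pack.c verbatim:

    #include <string.h>
    #include <sys/types.h>

    static void compact_buffer(char *buf, ssize_t n, ssize_t nparse)
    {
        /* A short read can leave fewer bytes in the buffer than the message
         * header promised, making nparse negative.  Bail out instead of
         * letting it be promoted to a huge size_t in the memmove call. */
        if (nparse < 0)
            return;  /* or report a framing error */

        /* Move remaining data in buffer to the beginning. */
        memmove(buf, buf + n, (size_t)nparse);
    }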
Georg Sauthoff [Tue, 29 Dec 2015 17:10:52 +0000 (18:10 +0100)]
Use only POSIX conforming features of printf
This fixes 3 test failures on Solaris 10.
POSIX standardizes printf, but hex-style \xHH escape sequences are not
part of the standard; POSIX printf does understand octal-style \NNN
sequences, though.
This should also work with shells where printf is a builtin (e.g. on
Lubuntu, which probably uses dash).
Tested with xpg4-sh on Solaris 10 and bash/zsh/dash on Linux.