Bruce Momjian [Sat, 15 May 1999 22:31:07 +0000 (22:31 +0000)]
I made it roll over files at 1MB. My table ended up with 120
segments, and my indexes had 3 (yes, it DOES work!).
DROP TABLE removed ALL segments from the table, but only the main index
segment.
So it looks like removing the table itself uses mdunlink() in md.c,
while removing indexes uses FileNameUnlink(), which only unlinks one file.
As far as I can tell, FileNameUnlink() and mdunlink() behave basically the
same, except that mdunlink() also deletes any extra segments (sketched below).
I've done some testing and it seems to work. It also passes the regression
tests (except float8, geometry and rules, but that's normal).
If this patch is right, this fixes all known multi-segment problems on
Linux.
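For illustration, a minimal sketch (not the actual md.c code) of what
"remove all segments" amounts to, assuming the relname, relname.1,
relname.2, ... segment naming used for multi-segment relations:

    /* Unlink the base file, then numbered segments until one is missing. */
    #include <stdio.h>
    #include <unistd.h>

    static void
    unlink_all_segments(const char *relpath)
    {
        char segpath[1024];
        int  segno;

        unlink(relpath);                /* main (first) segment */
        for (segno = 1;; segno++)
        {
            snprintf(segpath, sizeof(segpath), "%s.%d", relpath, segno);
            if (unlink(segpath) < 0)
                break;                  /* no more segments to remove */
        }
    }

    int
    main(void)
    {
        unlink_all_segments("/tmp/testrel");
        return 0;
    }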
Bruce Momjian [Sat, 15 May 1999 22:18:51 +0000 (22:18 +0000)]
I've got two pretty small patches.
configtype.patch    simply fixes a typo in config.h.in
pg_dump.c.patch     updates a bunch of error messages to include a reason
                    from the backend, and also removes a couple of
                    unnecessary ifs
Add double quotes around the sequence name generated to support the
SERIAL data type DEFAULT clause.
This fixes a problem finding the sequence name when mixed case table names
are involved.
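A hypothetical snippet (not the actual parser code) showing why the quotes
matter: without them, a sequence name derived from a mixed-case table name
would be folded to lower case when the DEFAULT expression is reparsed.

    #include <stdio.h>

    int
    main(void)
    {
        const char *seqname = "MyTable_id_seq";     /* assumed naming scheme */
        char        defexpr[256];

        /* double-quote the sequence name inside the nextval() argument */
        snprintf(defexpr, sizeof(defexpr), "nextval('\"%s\"')", seqname);
        printf("%s\n", defexpr);        /* prints: nextval('"MyTable_id_seq"') */
        return 0;
    }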
Tom Lane [Thu, 13 May 1999 07:29:22 +0000 (07:29 +0000)]
Rip out QueryTreeList structure, root and branch. Querytree
lists are now plain old garden-variety Lists, allocated with palloc,
rather than specialized expansible-array data allocated with malloc.
This substantially simplifies their handling and eliminates several
sources of memory leakage.
Several basic types of erroneous queries (syntax error, attempt to
insert a duplicate key into a unique index) now demonstrably leak
zero bytes per query.
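A minimal sketch of the plain-List idiom, assuming the backend's
nodes/pg_list.h helpers (NIL, lappend, foreach, lfirst); the function names
here are invented for illustration:

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "nodes/parsenodes.h"

    /* Build a querytree list in palloc'd memory -- no malloc, no
     * hand-grown array; the memory context reclaims it automatically. */
    static List *
    collect_queries(Query *q1, Query *q2)
    {
        List *queries = NIL;

        queries = lappend(queries, q1);
        queries = lappend(queries, q2);
        return queries;
    }

    static void
    walk_queries(List *queries)
    {
        List *l;

        foreach(l, queries)
        {
            Query *q = (Query *) lfirst(l);

            elog(DEBUG, "query command type %d", q->commandType);
        }
    }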
Bruce Momjian [Wed, 12 May 1999 12:47:24 +0000 (12:47 +0000)]
I am sorry, I misinterpreted the still-failing trigger regression test.
The offending code has been removed; the action is now always dependent :-)
I suggest the following patch to finally make the trigger regression test
happy again:
<<refint1.patch>>
After that you can remove the following from TODO:
Remove ERROR: check_primary_key: even number of arguments should be
specified
Trigger regression test fails
Include mention of CASE, COALESCE, and IFNULL.
Add date/time parsing procedure (perhaps should be in appendix).
Add time zone information (ditto).
Update keyword list.
Add keywords to implement Vadim's transaction isolation
and lock syntax as fully parsed tokens.
Two of the keywords for isolation are non-reserved SQL92 keywords
(COMMITTED, SERIALIZABLE).
All the other new keywords are non-reserved Postgres extensions, not SQL92
(ACCESS, EXCLUSIVE, MODE, SHARE).
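For illustration only (not the real keywords.c table), the effect of adding
them as fully parsed tokens is that they become entries in the scanner's
sorted keyword list rather than being treated as plain identifiers; the
token values below are invented:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct { const char *name; int token; } KeywordEntry;

    enum { ACCESS_TOK = 1, COMMITTED_TOK, EXCLUSIVE_TOK, MODE_TOK,
           SERIALIZABLE_TOK, SHARE_TOK };

    /* must stay sorted by name, since lookup is a binary search */
    static const KeywordEntry keywords[] = {
        {"access", ACCESS_TOK},
        {"committed", COMMITTED_TOK},
        {"exclusive", EXCLUSIVE_TOK},
        {"mode", MODE_TOK},
        {"serializable", SERIALIZABLE_TOK},
        {"share", SHARE_TOK},
    };

    static int
    kwcmp(const void *key, const void *entry)
    {
        return strcmp((const char *) key,
                      ((const KeywordEntry *) entry)->name);
    }

    int
    main(void)
    {
        const KeywordEntry *kw = (const KeywordEntry *)
            bsearch("share", keywords,
                    sizeof(keywords) / sizeof(keywords[0]),
                    sizeof(keywords[0]), kwcmp);

        printf("share -> token %d\n", kw ? kw->token : 0);
        return 0;
    }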
Add syntax to allow CREATE [GLOBAL|LOCAL] TEMPORARY TABLE, throwing an
error if GLOBAL is specified.
Fix problem with multiple indices being defined when using both column- and
table-constraints. Reported by Tom Lane.
Now check for duplicate indices and retain the one which is a primary key.
Adjust elog NOTICE messages to surround table and column names with single
quotes.
Keep long non-quoted numeric strings *as* untyped strings if they fail
the obvious conversion.
Define a new pattern "decimal" which is non-exponential floating point
for use with numeric() and decimal() types.
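A standalone sketch of the fallback described above (not the scan.l code):
attempt the obvious integer conversion, and if it overflows, keep the
literal as an untyped string so a wider type can claim it later.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *literal = "123456789012345678901234567890";
        char       *end;
        long        val;

        errno = 0;
        val = strtol(literal, &end, 10);
        if (errno == ERANGE || *end != '\0')
            printf("keep '%s' as an untyped string\n", literal);
        else
            printf("integer value %ld\n", val);
        return 0;
    }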
Bruce Momjian [Mon, 10 May 1999 15:27:19 +0000 (15:27 +0000)]
libpq++ uses fe_setauthsvc(), which is deprecated and results in an error
on connection. This patch changes it to use PQconnectdb() rather than
{fe_setauthsvc,PQsetdb}. This still isn't the complete solution, as there
is no provision for user/password in class PgEnv, but it does get rid of
the error message. Tested with gcc version egcs-2.91.60 19981201
(egcs-1.1.1 release) under NetBSD-1.3K/i386.
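For reference, a minimal sketch of the PQconnectdb()-style connection that
replaces the old fe_setauthsvc()/PQsetdb() pair; the conninfo keywords shown
(dbname, user) are illustrative values, not taken from the patch itself:

    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=template1 user=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        /* ... issue queries with PQexec() ... */
        PQfinish(conn);
        return 0;
    }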
Bruce Momjian [Mon, 10 May 1999 04:57:07 +0000 (04:57 +0000)]
This patch adds more comments to postgres.init.sh, clarifies the options
available, and adds easy support for installing postgres into the
runlevel system.
"sh postgres.init.sh install" will now install "postgres" in the
/etc/rc.d/init.d directory and execute /sbin/chkconfig to hook up the
symbolic links. An uninstall option is also added.
Tom Lane [Sun, 9 May 1999 00:52:08 +0000 (00:52 +0000)]
Add 'temporary file' facility to fd.c, and arrange for temp
files to be closed automatically at transaction abort or commit, should
they still be open. Also close any still-open stdio files allocated with
AllocateFile at abort/commit. This should eliminate problems with leakage
of file descriptors after an error. Also, put in some primitive buffered-IO
support so that psort.c can use virtual files without severe performance
penalties.
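A sketch of the AllocateFile() usage pattern this enables (FreeFile is
assumed here to be the matching release call in fd.c): a stdio file obtained
this way is tracked, so an elog(ERROR) before the explicit close no longer
leaks the descriptor.

    #include "postgres.h"
    #include "storage/fd.h"

    static void
    write_report(const char *path)
    {
        FILE *fp = AllocateFile((char *) path, "w");

        if (fp == NULL)
            elog(ERROR, "write_report: could not open %s", path);

        fprintf(fp, "hello\n");
        /* an elog(ERROR) raised here would be cleaned up at abort */
        FreeFile(fp);
    }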
Bruce Momjian [Fri, 7 May 1999 02:37:08 +0000 (02:37 +0000)]
Please apply the following patch for regress.sh to do something useful with
"SYSTEM", and unpack the files in the uuencoded .tar.gz file at the end in
src/test/regress so that the int2, int4 and geometry tests pass on NetBSD/i386.
They currently fail only on different wording of error messages and, e.g.,
printing "0" rather than "-0". At a guess the same will be true for the
other NetBSD ports,
but I can't test them.
Tom Lane [Thu, 6 May 1999 01:30:58 +0000 (01:30 +0000)]
fix_indxqual_references didn't cope with ArrayRef nodes,
meaning that this failed:
select proname,typname,prosrc from pg_proc,pg_type
where proname = 'float8' and pg_proc.proargtypes[0] = pg_type.oid;
Tom Lane [Thu, 6 May 1999 00:30:47 +0000 (00:30 +0000)]
Fix some nasty coredump bugs in hashjoin. This code was just
about certain to fail anytime it decided the relation to be hashed was
too big to fit in memory --- the code for 'batching' a series of hashjoins
had multiple errors. I've fixed the easier problems. A remaining big
problem is that you can get 'hashtable out of memory' if the code's
guesstimate about how much overflow space it will need turns out wrong.
That will require much more extensive revisions to fix, so I'm committing
these fixes now before I start on that problem.
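For context, a toy illustration of the batching idea (not the hashjoin code
itself): both relations are partitioned by the same hash-modulo rule, batch 0
is joined in memory, and the remaining batches are spooled to temp files and
joined one at a time.

    #include <stdio.h>

    #define NBATCH 4

    /* the same rule must be applied to inner and outer tuples */
    static int
    batch_for_hash(unsigned int hashvalue)
    {
        return (int) (hashvalue % NBATCH);
    }

    int
    main(void)
    {
        unsigned int hashes[] = {3, 17, 42, 101, 9, 64};
        int          i;

        for (i = 0; i < (int) (sizeof(hashes) / sizeof(hashes[0])); i++)
            printf("hash %u -> batch %d\n", hashes[i], batch_for_hash(hashes[i]));
        return 0;
    }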
Use sprintf() to convert float8 to a string during conversion to numeric.
Original code used float8out(), but the resulting exponential notation
was not handled (e.g. '3E9' was decoded as '3').
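A standalone illustration of the difference, assuming float8out() produced
%g-style output: %g switches to exponential notation for large values, while
a plain fixed-point sprintf() keeps all the digits for the numeric conversion
to read.

    #include <stdio.h>

    int
    main(void)
    {
        double val = 3e9;
        char   expform[64];
        char   plainform[64];

        snprintf(expform, sizeof(expform), "%g", val);        /* "3e+09"      */
        snprintf(plainform, sizeof(plainform), "%.0f", val);  /* "3000000000" */
        printf("%s vs %s\n", expform, plainform);
        return 0;
    }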
Tom Lane [Tue, 4 May 1999 00:00:20 +0000 (00:00 +0000)]
Make sure targetlist generated for subplan does not share
nodes with HAVING qualifier of upper plan. Have not seen any failures,
just being a little bit paranoid...
Bruce Momjian [Mon, 3 May 1999 19:10:48 +0000 (19:10 +0000)]
Here are some patches for 6.5.0 which I already submitted but have never
been applied. The patches are in the .tar.gz attachment at the end:
varchar-array.patch  this patch adds support for arrays of bpchar() and
                     varchar(), which were always missing from postgres.
                     These datatypes can be used to replace the _char4,
                     _char8, etc., which were dropped some time ago.
block-size.patch     this patch fixes many errors in the parser and other
                     programs which happen with very large query statements
                     (> 8K) when using a page size larger than 8192.
                     This patch is needed if you want to submit queries
                     larger than 8K. Postgres supports tuples up to 32K,
                     but you can't insert them because you can't submit
                     queries larger than 8K. My patch fixes this problem.
                     The patch also replaces all the occurrences of `8192'
                     and `1<<13' in the sources with the proper constants
                     defined in include files; you should now never find
                     8192 hardwired in the C code, which also makes the
                     code clearer.
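As a small illustration of the constant-vs-literal point, assuming the
block-size macro is BLCKSZ as defined in the tree's config headers (a local
fallback is provided so the snippet stands alone):

    #include <stdio.h>

    #ifndef BLCKSZ
    #define BLCKSZ 8192             /* would normally come from config.h */
    #endif

    int
    main(void)
    {
        char page[BLCKSZ];          /* rather than char page[8192]; */

        printf("block size is %d bytes\n", (int) sizeof(page));
        return 0;
    }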
Tom Lane [Mon, 3 May 1999 00:38:44 +0000 (00:38 +0000)]
Revise union_planner and associated routines to clean up breakage
from EXCEPT/HAVING patch. Cases involving nontrivial GROUP BY expressions
now work again. Also, the code is at least somewhat better documented...
Tom Lane [Sat, 1 May 1999 19:09:46 +0000 (19:09 +0000)]
Arrange for VACUUM to delete the init file that relcache.c uses
to save a little bit of backend startup time. This way, the first
backend started after a VACUUM will rebuild the init file with up-to-date
statistics for the critical system indexes.
Tom Lane [Fri, 30 Apr 1999 04:04:27 +0000 (04:04 +0000)]
Fill in reasonable-looking cost estimates in inserted nodes.
This makes no difference to the optimizer, which has already decided what
it's gonna do, but it makes the output of EXPLAIN much more plausible.
Tom Lane [Fri, 30 Apr 1999 04:01:44 +0000 (04:01 +0000)]
Clean up some bogosities in path cost estimation, like
sometimes estimating an index scan of a table to be cheaper than a
sequential scan of the same tuples...
Tom Lane [Fri, 30 Apr 1999 03:59:06 +0000 (03:59 +0000)]
Fix nasty little typo that prevented get_cheapest_path_for_joinkeys
from ever returning a path. This put a bit of a crimp in the system's
ability to generate intelligent merge-join plans...
Tom Lane [Thu, 29 Apr 1999 03:01:50 +0000 (03:01 +0000)]
Defend against 'update oid'. Someday we might want to support
that, but it'd be a New Feature, wouldn't it ... in the meantime,
avoiding a backend crash seems worthwhile.