--- /dev/null
+Array iterator functions, by Massimo Dal Zotto <dz@cs.unitn.it>
+Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>
+
+This software is distributed under the GNU General Public License
+either version 2, or (at your option) any later version.
+
+
+This loadable module defines a new class of functions which take
+an array and a scalar value, iterate a scalar operator over the
+elements of the array and the value, and compute a result as
+the logical OR or AND of the iteration results.
+For example array_int4eq returns true if any element of an int4
+array is equal to the given value:
+
+ array_int4eq({1,2,3}, 1) --> true
+ array_int4eq({1,2,3}, 4) --> false
+
+If we have defined T array types and O scalar operators we can
+define T x O x 2 array functions. Each of them has a name like
+"array_[all_]<basetype><operation>" and takes an array of type T,
+iterating the operator O over all the elements. Note however
+that some of the possible combinations are invalid, for example
+array_int4_like, because there is no like operator for int4.
+
+We can then define new operators based on these functions and use
+them to write queries with qualification clauses based on the
+values of some of the elements of an array.
+For example, to select rows having some or all elements of an array
+attribute equal to a given value or matching a regular expression:
+
+ create table t(id int4[], txt text[]);
+
+ -- select tuples with some id element equal to 123
+ select * from t where t.id *= 123;
+
+ -- select tuples with some txt element matching '[a-z]'
+ select * from t where t.txt *~ '[a-z]';
+
+ -- select tuples with all txt elements matching '^[A-Z]'
+ select * from t where t.txt[1:3] **~ '^[A-Z]';
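+
+For instance, the *= operator used above could be defined along these
+lines (a sketch; array_iterator.sql contains the authoritative version,
+and the path of the shared object is installation-specific):
+
+ create function array_int4eq(_int4, int4) returns bool
+ as '/path/to/array_iterator.so' language 'c';
+
+ create operator *= (leftarg=_int4, rightarg=int4,
+ procedure=array_int4eq);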
+
+The scheme is quite general: each operator which operates on a base type
+can be iterated over the elements of an array. It seems to work well but
+defining each new operator requires writing a different C function.
+Furthermore in each function there are two hardcoded OIDs which reference
+a base type and a procedure. Not very portable. Can anyone suggest a
+better and more portable way to do it?
+
+See also array_iterator.sql for an example of how to use this module.
--- /dev/null
+Date: Wed, 1 Apr 1998 15:19:32 -0600 (CST)
+From: Hal Snyder <hal@vailsys.com>
+To: vmehr@ctp.com
+Subject: [QUESTIONS] Re: Spatial data, R-Trees
+
+> From: Vivek Mehra <vmehr@ctp.com>
+> Date: Wed, 1 Apr 1998 10:06:50 -0500
+
+> Am just starting out with PostgreSQL and would like to learn more about
+> the spatial data handling abilities of postgreSQL - in terms of using
+> R-tree indexes, user defined types, operators and functions.
+>
+> Would you be able to suggest where I could find some code and SQL to
+> look at to create these?
+
+Here's the setup for adding an operator '<@>' to give the distance in
+statute miles between two points on the earth's surface. Coordinates
+are in degrees. Points are taken as (longitude, latitude) and not vice
+versa, as longitude is closer to the intuitive idea of an x-axis and
+latitude to a y-axis.
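+
+Once installed, usage would look something like this (a sketch; the
+coordinates are rough made-up values for Chicago and New York):
+
+ select '(-87.6,41.8)'::point <@> '(-73.9,40.7)'::point;
+
+which should return the approximate distance in miles between the two
+points.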
+
+There's C source, Makefile for FreeBSD, and SQL for installing and
+testing the function.
+
+Let me know if anything looks fishy!
+
+A note on testing C extensions - it seems not enough to drop a function
+and re-create it - if I change a function, I have to stop and restart
+the backend for the new version to be seen. I guess it would be too
+messy to track which functions are added from a .so and do a dlclose
+when the last one is dropped.
--- /dev/null
+
+ findoidjoins
+
+This program scans a database, and prints oid fields (also regproc fields)
+and the tables they join to. CAUTION: it is ver-r-r-y slow on a large
+database, or even a not-so-large one. We don't really recommend running
+it on anything but an empty database, such as template1.
+
+Uses pgeasy library.
+
+Run on an empty database, it returns the system join relationships (shown
+below for 7.0). Note that unexpected matches may indicate bogus entries
+in system tables --- don't accept a peculiar match without question.
+In particular, a field shown as joining to more than one target table is
+probably messed up. In 7.0, the *only* field that should join to more
+than one target is pg_description.objoid. (Running make_oidjoins_check
+is an easy way to spot fields joining to more than one table, BTW.)
+
+The shell script make_oidjoins_check converts findoidjoins' output
+into an SQL script that checks for dangling links (entries in an
+OID or REGPROC column that don't match any row in the expected table).
+Note that fields joining to more than one table are NOT processed.
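+
+For each link listed below, the generated script contains a query of
+roughly this shape (a sketch for the pg_aggregate.aggtransfn1 entry;
+the real script is mechanically generated and may differ in detail):
+
+ SELECT oid, aggtransfn1
+ FROM pg_aggregate
+ WHERE aggtransfn1 != 0 AND
+ NOT EXISTS (SELECT 1 FROM pg_proc
+ WHERE pg_proc.oid = pg_aggregate.aggtransfn1);
+
+Any rows returned have a dangling reference.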
+
+The result of make_oidjoins_check should be installed as the "oidjoins"
+regression test. The oidjoins test should be updated after any
+revision in the patterns of cross-links between system tables.
+(Ideally we'd just regenerate the script as part of the regression
+tests themselves, but that seems too slow...)
+
+---------------------------------------------------------------------------
+
+Join pg_aggregate.aggtransfn1 => pg_proc.oid
+Join pg_aggregate.aggtransfn2 => pg_proc.oid
+Join pg_aggregate.aggfinalfn => pg_proc.oid
+Join pg_aggregate.aggbasetype => pg_type.oid
+Join pg_aggregate.aggtranstype1 => pg_type.oid
+Join pg_aggregate.aggtranstype2 => pg_type.oid
+Join pg_aggregate.aggfinaltype => pg_type.oid
+Join pg_am.amgettuple => pg_proc.oid
+Join pg_am.aminsert => pg_proc.oid
+Join pg_am.amdelete => pg_proc.oid
+Join pg_am.ambeginscan => pg_proc.oid
+Join pg_am.amrescan => pg_proc.oid
+Join pg_am.amendscan => pg_proc.oid
+Join pg_am.ammarkpos => pg_proc.oid
+Join pg_am.amrestrpos => pg_proc.oid
+Join pg_am.ambuild => pg_proc.oid
+Join pg_am.amcostestimate => pg_proc.oid
+Join pg_amop.amopid => pg_am.oid
+Join pg_amop.amopclaid => pg_opclass.oid
+Join pg_amop.amopopr => pg_operator.oid
+Join pg_amproc.amid => pg_am.oid
+Join pg_amproc.amopclaid => pg_opclass.oid
+Join pg_amproc.amproc => pg_proc.oid
+Join pg_attribute.attrelid => pg_class.oid
+Join pg_attribute.atttypid => pg_type.oid
+Join pg_class.reltype => pg_type.oid
+Join pg_class.relam => pg_am.oid
+Join pg_description.objoid => pg_proc.oid
+Join pg_description.objoid => pg_type.oid
+Join pg_index.indexrelid => pg_class.oid
+Join pg_index.indrelid => pg_class.oid
+Join pg_opclass.opcdeftype => pg_type.oid
+Join pg_operator.oprleft => pg_type.oid
+Join pg_operator.oprright => pg_type.oid
+Join pg_operator.oprresult => pg_type.oid
+Join pg_operator.oprcom => pg_operator.oid
+Join pg_operator.oprnegate => pg_operator.oid
+Join pg_operator.oprlsortop => pg_operator.oid
+Join pg_operator.oprrsortop => pg_operator.oid
+Join pg_operator.oprcode => pg_proc.oid
+Join pg_operator.oprrest => pg_proc.oid
+Join pg_operator.oprjoin => pg_proc.oid
+Join pg_proc.prolang => pg_language.oid
+Join pg_proc.prorettype => pg_type.oid
+Join pg_rewrite.ev_class => pg_class.oid
+Join pg_statistic.starelid => pg_class.oid
+Join pg_statistic.staop => pg_operator.oid
+Join pg_trigger.tgrelid => pg_class.oid
+Join pg_trigger.tgfoid => pg_proc.oid
+Join pg_type.typrelid => pg_class.oid
+Join pg_type.typelem => pg_type.oid
+Join pg_type.typinput => pg_proc.oid
+Join pg_type.typoutput => pg_proc.oid
+Join pg_type.typreceive => pg_proc.oid
+Join pg_type.typsend => pg_proc.oid
+
+---------------------------------------------------------------------------
+
+Bruce Momjian (root@candle.pha.pa.us)
--- /dev/null
+An attempt at some sort of Full Text Indexing for PostgreSQL.
+
+The included software is an attempt to add some sort of Full Text Indexing
+support to PostgreSQL. I mean by this that we can ask questions like:
+
+ Give me all rows that have 'still' and 'nash' in the 'artist' field.
+
+Of course we can write this as:
+
+ select * from cds where artist ~* 'stills' and artist ~* 'nash';
+
+But this does not use any indices, and therefore, if your database
+gets very large, it will not have very high performance (the above
+query requires at least one sequential scan of the entire table).
+
+The approach used by this add-on is to define a trigger on the table
+and column you want to run these queries on. On every insert into the
+table, it takes the value in the specified column, breaks the text in
+this column up into pieces, and stores all sub-strings in another
+table, together with a reference to the row in the original table that
+contained this sub-string (it uses the oid of that row).
+
+By creating an index on the 'fti-table', we can now search for
+substrings that occur in the original table. By making a join between
+the fti-table and the orig-table, we can get the actual rows we want
+(this can also be done by using subselects, as sketched below, and
+maybe there are other ways too).
+
+The trigger code also allows an array called StopWords that prevents
+certain words from being indexed.
+
+As an example we take the previous query, where we assume we have all
+sub-strings in the table 'cds-fti':
+
+ select c.*
+ from cds c, cds-fti f1, cds-fti f2
+ where f1.string ~ '^stills' and
+ f2.string ~ '^nash' and
+ f1.id = c.oid and
+ f2.id = c.oid ;
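+
+The subselect formulation mentioned above would look something like
+this (a sketch under the same assumptions):
+
+ select c.*
+ from cds c
+ where c.oid in (select id from cds-fti where string ~ '^stills')
+ and c.oid in (select id from cds-fti where string ~ '^nash');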
+
+We can use the ~ (case-sensitive regular expression) here, because of
+the way sub-strings are built: from right to left, i.e. house -> 'se' +
+'use' + 'ouse' + 'house'. If a ~ search starts with a ^ (match start of
+string), btree indices can be used by PostgreSQL.
+
+Now, how do we create the trigger that maintains the fti-table? First,
+the fti-table should have the following schema:
+
+ create table cds-fti ( string varchar(N), id oid );
+
+Don't change the *names* of the columns; the varchar() can in fact also
+be of type text. If you do use varchar, make sure the largest possible
+sub-string will fit.
+
+Then create the function that contains the trigger:
+
+ create function fti() returns opaque as
+ '/path/to/fti.so' language 'newC';
+
+And finally define the trigger on the 'cds' table:
+
+ create trigger cds-fti-trigger after update or insert or delete on cds
+ for each row execute procedure fti(cds-fti, artist);
+
+Here, the trigger will be defined on table 'cds', it will create
+sub-strings from the field 'artist', and it will place those sub-strings
+in the table 'cds-fti'.
+
+Now populate the table 'cds'. This will also populate the table 'cds-fti'.
+It's fastest to populate the table *before* you create the indices.
+
+Before you start using the system, you should at least have the following
+indices:
+
+ create index cds-fti-idx on cds-fti (string, id);
+ create index cds-oid-idx on cds (oid);
+
+To get the most performance out of this, you should have 'cds-fti'
+clustered on disk, i.e. all rows with the same sub-strings should be
+close to each other. There are 3 ways of doing this:
+
+1. After you have created the indices, execute 'cluster cds-fti-idx on cds-fti'.
+2. Do a 'select * into tmp-table from cds-fti order by string' *before*
+ you create the indices, then 'drop table cds-fti' and
+ 'alter table tmp-table rename to cds-fti'
+3. *Before* creating indices, dump the contents of the cds-fti table using
+ 'pg_dump -a -t cds-fti dbase-name', remove the \connect
+ from the beginning and the \. from the end, sort it using the
+ UNIX 'sort' program, and reload the data.
+
+Method 1 is very slow, 2 a lot faster, and for very large tables, 3 is
+preferred.
+
+
+BENCH:
+~~~~~
+
+Maarten Boekhold <maartenb@dutepp0.et.tudelft.nl>
+The following data was generated by the 'timings.sh' script included
+in this directory. It uses a very large table with music-related
+articles as a source for the fti-table. The tables used are:
+
+product : contains product information : 540,429 rows
+artist_fti : fti table for product : 4,501,321 rows
+clustered : same as above, only clustered : 4,501,321 rows
+
+A sequential scan of the artist_fti table (and thus also the clustered table)
+takes around 6:16 minutes....
+
+Unfortunately I cannot provide anybody else with this test-data, since I
+am not allowed to redistribute the data (it's a database being sold by
+a couple of wholesale companies). Anyway, it's megabytes, so you probably
+wouldn't want it in this distribution anyway.
+
+I haven't tested this with less data.
+
+The test-machine is a Pentium 133, 64 MB, Linux 2.0.32 with the database
+on a 'QUANTUM BIGFOOT_CY4320A, 4134MB w/67kB Cache, CHS=8960/15/63'. This
+is a very slow disk.
+
+The postmaster was running with:
+
+ postmaster -i -b /usr/local/pgsql/bin/postgres -S 1024 -B 256 \
+ -o -o /usr/local/pgsql/debug-output -F -d 1
+
+('trashing' means a 'select count(*) from artist_fti' to completely trash
+any disk-caches and buffers....)
+
+TESTING ON UNCLUSTERED FTI
+trashing
+1: ^lapton and ^ric : 0.050u 0.000s 5m37.484s 0.01%
+2: ^lapton and ^ric : 0.050u 0.030s 5m32.447s 0.02%
+3: ^lapton and ^ric : 0.030u 0.020s 5m28.822s 0.01%
+trashing
+1: ^lling and ^tones : 0.020u 0.030s 0m54.313s 0.09%
+2: ^lling and ^tones : 0.040u 0.030s 0m5.057s 1.38%
+3: ^lling and ^tones : 0.010u 0.050s 0m2.072s 2.89%
+trashing
+1: ^aughan and ^evie : 0.020u 0.030s 0m26.241s 0.19%
+2: ^aughan and ^evie : 0.050u 0.010s 0m1.316s 4.55%
+3: ^aughan and ^evie : 0.030u 0.020s 0m1.029s 4.85%
+trashing
+1: ^lling : 0.040u 0.010s 0m55.104s 0.09%
+2: ^lling : 0.030u 0.030s 0m4.716s 1.27%
+3: ^lling : 0.040u 0.010s 0m2.157s 2.31%
+trashing
+1: ^stev and ^ray and ^vaugh : 0.040u 0.000s 1m5.630s 0.06%
+2: ^stev and ^ray and ^vaugh : 0.050u 0.020s 1m3.561s 0.11%
+3: ^stev and ^ray and ^vaugh : 0.050u 0.010s 1m5.923s 0.09%
+trashing
+1: ^lling (no join) : 0.050u 0.020s 0m24.139s 0.28%
+2: ^lling (no join) : 0.040u 0.040s 0m1.087s 7.35%
+3: ^lling (no join) : 0.020u 0.030s 0m0.772s 6.48%
+trashing
+1: ^vaughan (no join) : 0.040u 0.030s 0m9.075s 0.77%
+2: ^vaughan (no join) : 0.030u 0.010s 0m0.609s 6.56%
+3: ^vaughan (no join) : 0.040u 0.010s 0m0.503s 9.94%
+trashing
+1: ^rol (no join) : 0.020u 0.030s 0m49.898s 0.10%
+2: ^rol (no join) : 0.030u 0.020s 0m3.136s 1.59%
+3: ^rol (no join) : 0.030u 0.020s 0m1.231s 4.06%
+
+TESTING ON CLUSTERED FTI
+trashing
+1: ^lapton and ^ric : 0.020u 0.020s 2m17.120s 0.02%
+2: ^lapton and ^ric : 0.030u 0.020s 2m11.767s 0.03%
+3: ^lapton and ^ric : 0.040u 0.010s 2m8.128s 0.03%
+trashing
+1: ^lling and ^tones : 0.020u 0.030s 0m18.179s 0.27%
+2: ^lling and ^tones : 0.030u 0.010s 0m1.897s 2.10%
+3: ^lling and ^tones : 0.040u 0.010s 0m1.619s 3.08%
+trashing
+1: ^aughan and ^evie : 0.070u 0.010s 0m11.765s 0.67%
+2: ^aughan and ^evie : 0.040u 0.010s 0m1.198s 4.17%
+3: ^aughan and ^evie : 0.030u 0.020s 0m0.872s 5.73%
+trashing
+1: ^lling : 0.040u 0.000s 0m28.623s 0.13%
+2: ^lling : 0.030u 0.010s 0m2.339s 1.70%
+3: ^lling : 0.030u 0.010s 0m1.975s 2.02%
+trashing
+1: ^stev and ^ray and ^vaugh : 0.020u 0.010s 0m17.667s 0.16%
+2: ^stev and ^ray and ^vaugh : 0.030u 0.010s 0m3.745s 1.06%
+3: ^stev and ^ray and ^vaugh : 0.030u 0.020s 0m3.439s 1.45%
+trashing
+1: ^lling (no join) : 0.020u 0.040s 0m2.218s 2.70%
+2: ^lling (no join) : 0.020u 0.020s 0m0.506s 7.90%
+3: ^lling (no join) : 0.030u 0.030s 0m0.510s 11.76%
+trashing
+1: ^vaughan (no join) : 0.040u 0.050s 0m2.048s 4.39%
+2: ^vaughan (no join) : 0.030u 0.020s 0m0.332s 15.04%
+3: ^vaughan (no join) : 0.040u 0.010s 0m0.318s 15.72%
+trashing
+1: ^rol (no join) : 0.020u 0.030s 0m2.384s 2.09%
+2: ^rol (no join) : 0.020u 0.030s 0m0.676s 7.39%
+3: ^rol (no join) : 0.020u 0.030s 0m0.697s 7.17%
--- /dev/null
+
+ISBN (books) and ISSN (serials)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This directory contains definitions for a couple of PostgreSQL
+external types, for a couple of international-standard namespaces:
+ISBN (books) and ISSN (serials). Rather than just using a char()
+member of the appropriate length, I wanted my database to include
+the validity-checking that both these numbering systems were designed
+to encompass. A little bit of research revealed the formulae
+for computing the check digits, and I also included some validity
+constraints on the number of hyphens.
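+
+For reference (this worked example is mine, not from the module's
+source): an ISBN-10 is valid when the sum of its digits weighted
+10 down to 1 is divisible by 11. For ISBN 0-306-40615-2:
+
+ 10*0 + 9*3 + 8*0 + 7*6 + 6*4 + 5*0 + 4*6 + 3*1 + 2*5 + 1*2 = 132
+
+and 132 = 12 * 11, so the number is valid. ISSN uses the same idea
+with weights 8 down to 2 over seven digits, the check digit making
+the total divisible by 11 (an 'X' stands for a check value of 10).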
+
+The internal representation of these types is intended to be
+compatible with `char16', in the (perhaps vain) hope that
+this will make it possible to create indices of these types
+using char16_ops.
+
+These are based on Tom Ivar Helbekkmo's IP address type definition,
+from which I have copied the entire form of the implementation.
+
+Garrett A. Wollman, August 1998
--- /dev/null
+PostgreSQL type extension for managing Large Objects
+----------------------------------------------------
+
+Overview
+
+One of the problems with the JDBC driver (and this affects the ODBC driver
+also) is that the specification assumes that references to BLOBs (Binary
+Large OBjects) are stored within a table, and that if such an entry is
+changed, the associated BLOB is deleted from the database.
+
+As PostgreSQL stands, this doesn't occur. It allocates an OID for each
+object, and it is up to the application to store, and ultimately delete,
+the objects.
+
+Now this is fine for new PostgreSQL-specific applications, but existing
+ones using JDBC or ODBC won't delete the objects, leading to orphaning:
+objects that are not referenced by anything and simply occupy disk space.
+
+The Fix
+
+I've fixed this by creating a new data type 'lo', some support functions,
+and a trigger which handles the orphaning problem.
+
+The 'lo' type was created because we needed to differentiate between
+normal OIDs and Large Objects. Currently the JDBC driver handles this
+dilemma easily, but (after talking to Byron) the ODBC driver needed a
+unique type. They had created an 'lo' type, but not the solution to
+orphaning.
+
+Install
+
+Ok, first build the shared library and install it. Typing 'make install'
+in the contrib/lo directory should do it.
+
+Then, as the postgres super user, run the lo.sql script. This will install the
+type, and define the support functions.
+
+How to Use
+
+The easiest way is by an example:
+
+> create table image (title text,raster lo);
+> create trigger t_image before update or delete on image for each row execute procedure lo_manage(raster);
+
+Here, a trigger is created for each column that contains a lo type.
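+
+A session might then look like this (a sketch; lo_import here is the
+server-side function, and the coercion from its oid result to the lo
+column is assumed to be handled by the support functions):
+
+> insert into image (title, raster) values ('einstein', lo_import('/etc/motd'));
+> update image set raster = lo_import('/etc/group') where title = 'einstein';
+> delete from image where title = 'einstein';
+
+On the update and the delete, the trigger should unlink the large
+object that is no longer referenced.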
+
+Issues
+
+* dropping a table will still orphan any objects it contains, as the
+ trigger is not fired.
+
+ For now, precede the 'drop table' with 'delete from {table}'. However, this
+ could be fixed by having 'drop table' perform an additional
+
+ 'select lo_unlink({colname}::oid) from {tablename}'
+
+ for each column, before actually dropping the table.
+
+* Some frontends may create their own tables, and will not create the
+ associated trigger(s). Also, users may not remember (or know) to create
+ the triggers.
+
+ This can be solved, but would involve changes to the parser.
+
+As the ODBC driver needs a permanent lo type (and JDBC could be optimised
+to use it if its OID is fixed), and as the above issues can only be fixed
+by some internal changes, I feel it should become a permanent built-in
+type.
+
+I'm releasing this into contrib, just to get it out, and tested.
+
+Peter Mount <peter@retep.org.uk> June 13 1998
+
--- /dev/null
+
+Hello! :)
+
+(Sorry for my English. But if I wrote in Portuguese, you wouldn't
+ understand anything. :])
+
+ I figured this is the right place to post this. I'm a newcomer to
+these lists. I hope I did it right. :]
+
+<BOREDOM>
+ When I started using SQL, I started with mSQL. I developed a lot
+of useful apps for me and my job in C, mainly because I loved its
+elegant, simple API. But for a large project I'm doing these days, I
+thought it was not enough, because it lacked a lot of features I started
+to need, like security and subselects. (And it's not free. :))
+ So after looking at the options, I chose to start again with
+Postgres. It offered everything I needed, and the documentation is
+really good (remind me to thank whoever wrote it).
+ But my little apps needed porting to libpq. After looking at
+pq's syntax, I found it was better to write a bridge between the mSQL
+API and libpq. I found that rewriting the libmsql.a routines to call
+libpq would make things much easier. I guess the results are quite
+good right now.
+</BOREDOM>
+
+ Ok. Let's summarize it:
+
+ mpgsql.c is the bridge. Acting as a wrapper, it's really good,
+since I could run mSQL programs unchanged. But it's not accurate. Some
+highlights:
+
+ CONS:
+ * It's not well documented
+ (this post is, in fact, its first documentation attempt);
+ * It doesn't handle field types correctly. I plan to fix it
+ if people start giving feedback;
+ * It's limited to 10 simultaneous connections. I plan to enhance
+ this; I'm just figuring out how;
+ * I'd like to make it reentrant/thread-safe, although I don't
+ think this could be done without changing the API structure;
+ * Error management should be better. This is my first priority
+ now;
+ * Some calls are just empty implementations.
+
+ PROS:
+ * the mSQL Monitor runs okay. :]
+ * It's really cool. :)
+ * It makes mSQL-made applications compatible with PostgreSQL just by
+ changing link options.
+ * It uses PostgreSQL. :]
+ * the mSQL API is far easier to use and understand than libpq.
+ Consider this example:
+
+#include <stdio.h>
+#include "msql.h"
+
+int main(void) {
+ int sid;
+
+ sid = msqlConnect(NULL); /* connects via unix socket */
+
+ if (sid >= 0) {
+ m_result *rlt;
+ m_row row; /* m_row is already a row pointer */
+ msqlSelectDB(sid, "hosts");
+ if (msqlQuery(sid, "select host_id from hosts") >= 0) {
+ rlt = msqlStoreResult();
+ while ((row = msqlFetchRow(rlt)) != NULL)
+ printf("hostid: %s\n", row[0]);
+ msqlFreeResult(rlt);
+ }
+ msqlClose(sid);
+ }
+ return 0;
+}
+
+ I enclose mpgsql.c inside. I'd like to maintain it, and (maybe, am
+I dreaming?) make it part of the pgsql distribution. I guess it doesn't
+depend on me, but mainly on its acceptance by its users.
+
+ Hm... I forgot: you'll need a copy of msql.h, since it's copyrighted
+by Hughes Technologies Pty Ltd. If you don't have it yet, fetch one
+from www.hughes.com.au.
+
+ I would like to hear users' ideas. My next goal is to add better
+error handling, to make it better documented, and to try to get relshow
+to run through it. :)
+
+ done. Aldrin Leal <aldrin@americasnet.com>
--- /dev/null
+Miscellaneous utility functions for PostgreSQL.
+Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>
+
+This software is distributed under the GNU General Public License
+either version 2, or (at your option) any later version.
+
+query_limit(n)
+
+ sets a limit on the maximum number of rows returned by a query
+ from a backend. It can be used to limit the result size retrieved
+ by the application in case of poor input data, or to avoid an
+ accidental Cartesian product while playing with SQL.
+
+backend_pid()
+
+ returns the pid of our corresponding backend.
+
+unlisten(relname)
+
+ unlistens from a relation, or from all relations if the argument
+ is null, empty or '*'.
+ It is now obsoleted by the new UNLISTEN command, but it is still
+ useful if you want to unlisten a name computed by the query.
+ Note that a listen/notify relname can be any ascii string, not
+ just valid relation names.
+
+min(x,y)
+max(x,y)
+
+ return the min or max between two integers.
+
+assert_enable(bool)
+
+ enables/disables assert checking in the backend, if it has been
+ compiled with USE_ASSERT_CHECKING.
+
+assert_test(bool)
+
+ tests the assert enable/disable code, if the backend has been
+ compiled with ASSERT_CHECKING_TEST.
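+
+Once the module is loaded, these are called like any other SQL
+function; for illustration (a sketch):
+
+ select query_limit(100);
+ select backend_pid();
+ select unlisten('*');
+ select max(2, 3);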
+
+--
+Massimo Dal Zotto <dz@cs.unitn.it>
--- /dev/null
+
+
+noupdate
+~~~~~~~~
+
+ - trigger to prevent updates on single columns.
+
+
+Example:
+~~~~~~~
+
+CREATE TABLE TEST ( COL1 INT, COL2 INT, COL3 INT );
+
+CREATE TRIGGER BT BEFORE UPDATE ON TEST FOR EACH ROW
+ EXECUTE PROCEDURE
+ noup ('COL1');
+
+-- Now Try
+INSERT INTO TEST VALUES (10,20,30);
+UPDATE TEST SET COL1 = 5;
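+
+-- If the trigger does its job, the UPDATE above is refused because it
+-- changes COL1, while an update that leaves COL1 alone should still
+-- succeed:
+UPDATE TEST SET COL2 = 5;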
--- /dev/null
+This directory contains support functions for the ODBC driver
+supplied with PostgreSQL-7.0.
+
+To enable additional ODBC functions with PostgreSQL-7.0, simply
+execute the commands in odbc.sql:
+
+psql
+Welcome to psql, the PostgreSQL interactive terminal.
+
+Type: \copyright for distribution terms
+ \h for help with SQL commands
+ \? for help on internal slash commands
+ \g or terminate with semicolon to execute query
+ \q to quit
+
+postgres=# \i odbc.sql
+CREATE
+...
+
+
+To enable additional ODBC functions with versions of PostgreSQL
+prior to PostgreSQL-7.0 (e.g. PostgreSQL-6.5.3), build the shared
+library and SQL commands as follows:
+
+make pre7
+psql
+Welcome to psql, the PostgreSQL interactive terminal.
+
+Type: \copyright for distribution terms
+ \h for help with SQL commands
+ \? for help on internal slash commands
+ \g or terminate with semicolon to execute query
+ \q to quit
+
+postgres=# \i odbc-pre7.sql
+CREATE
+...
+
--- /dev/null
+
+How to use pg_dumplo?
+=====================
+
+(c) 2000, Pavel Janík ml. <Pavel.Janik@linux.cz>
+
+
+Q: How do you use pg_dumplo?
+============================
+
+A: This is a small demo of backing up a database table with Large Objects:
+
+
+We will create a demo database and a small and useless table `lo' inside
+it:
+
+SnowWhite:$ createdb test
+CREATE DATABASE
+
+Ok, our database named 'test' is created. Now we should create a demo
+table which will contain only one column, named 'id', holding the oid
+number of a Large Object:
+
+SnowWhite:$ psql test
+Welcome to psql, the PostgreSQL interactive terminal.
+
+Type: \copyright for distribution terms
+ \h for help with SQL commands
+ \? for help on internal slash commands
+ \g or terminate with semicolon to execute query
+ \q to quit
+
+test=# CREATE TABLE lo (id oid);
+CREATE
+test=# \lo_import /etc/aliases
+lo_import 19338
+test=# INSERT INTO lo VALUES (19338);
+INSERT 19352 1
+test=# select * from lo;
+ id
+-------
+ 19338
+(1 row)
+
+test=# \q
+
+In the above example you can see that we have also imported one "Large
+Object" - the file /etc/aliases. It has an oid of 19338, so we have
+inserted this oid number into the column id of the table lo. The final
+SELECT shows that we have one record in the table.
+
+Now we can demonstrate the work of pg_dumplo. We will create a dump
+directory which will contain the whole dump of large objects (/tmp/dump):
+
+mkdir -p /tmp/dump
+
+Now we can dump all large objects from the database `test' that have
+their oids stored in the column `id' of the table `lo':
+
+SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id
+pg_dumplo: dump lo.id (1 large obj)
+
+Voila, we have the dump of all Large Objects in our directory:
+
+SnowWhite:$ tree /tmp/dump/
+/tmp/dump/
+`-- test
+ |-- lo
+ | `-- id
+ | `-- 19338
+ `-- lo_dump.index
+
+3 directories, 2 files
+SnowWhite:$
+
+Isn't it nice? :-) Yes, it is, but we are only halfway. We should also
+be able to recreate the contents of the table lo and of the Large
+Objects when something goes wrong. It is very easy; we will demonstrate
+this by dropping the database and recreating it from scratch with
+pg_dumplo:
+
+SnowWhite:$ dropdb test
+DROP DATABASE
+
+SnowWhite:$ createdb test
+CREATE DATABASE
+
+Ok, our database with the name `test' is created again. We should also
+create the table `lo' again:
+
+SnowWhite:$ psql test
+Welcome to psql, the PostgreSQL interactive terminal.
+
+Type: \copyright for distribution terms
+ \h for help with SQL commands
+ \? for help on internal slash commands
+ \g or terminate with semicolon to execute query
+ \q to quit
+
+test=# CREATE TABLE lo (id oid);
+CREATE
+test=# \q
+SnowWhite:$
+
+Now the database with the table `lo' is created again, but we do not
+have any information stored in it. But we have the dump of the complete
+Large Object data, so we can recreate the contents of the whole
+database from the directory /tmp/dump:
+
+SnowWhite:$ pg_dumplo -s /tmp/dump -d test -i
+19338 lo id test/lo/id/19338
+SnowWhite:$
+
+And this is everything.
+
+Summary: In this small example we have shown that pg_dumplo can be used to
+completely dump the database's Large Objects very easily.
--- /dev/null
+pgbench 1.2 README 2000/1/15 Tatsuo Ishii (t-ishii@sra.co.jp)
+
+o What is pgbench?
+
+ pgbench is a simple program to run a benchmark test along the lines
+ of TPC-B. pgbench is a client application of PostgreSQL and runs
+ with PostgreSQL only. It performs lots of small and simple
+ transactions including select/update/insert operations, then
+ calculates the number of transactions successfully completed within a
+ second (transactions per second, tps). The target data includes a
+ table with at least 100k tuples.
+
+ Example outputs from pgbench look like:
+
+ number of clients: 4
+ number of transactions per client: 100
+ number of processed transactions: 400/400
+ tps = 19.875015(including connections establishing)
+ tps = 20.098827(excluding connections establishing)
+
+ A similar program called "JDBCBench" already exists, but it requires
+ Java, which may not be available on every platform. Moreover, some
+ people were concerned that the overhead of Java might lead to
+ inaccurate results. So I decided to write it in pure C, and named
+ it "pgbench."
+
+o features of pgbench
+
+ - pgbench is written in C using libpq only. So it is very portable
+ and easy to install.
+
+ - pgbench can simulate concurrent connections using asynchronous
+ capability of libpq. No threading is required.
+
+o How to install pgbench
+
+ (1) Edit the first line in Makefile
+
+ POSTGRESHOME = /usr/local/pgsql
+
+ so that it points to the directory where PostgreSQL is installed.
+
+ (2) Run configure
+
+ (3) Run make. You will see an executable file "pgbench" there.
+
+o How to use pgbench?
+
+ (1) Initialize database by:
+
+ pgbench -i <dbname>
+
+ where <dbname> is the name of the database. pgbench uses four tables:
+ accounts, branches, history and tellers. These tables will be
+ destroyed. Be very careful if you have tables with the same
+ names. The default test data contains:
+
+ table # of tuples
+ -------------------------
+ branches 1
+ tellers 10
+ accounts 100000
+ history 0
+
+ You can increase the number of tuples by using the -s option. See
+ below.
+
+ (2) Run the benchmark test
+
+ pgbench <dbname>
+
+ The default configuration is:
+
+ number of clients: 1
+ number of transactions per client: 10
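+
+ For example, to run with 10 simulated clients, 1000 transactions
+ each, against a database named "test":
+
+ pgbench -c 10 -t 1000 test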
+
+o options
+
+ pgbench has a number of options.
+
+ -h hostname
+ hostname where the backend is running. If this option
+ is omitted, pgbench will connect to the localhost via
+ a Unix domain socket.
+
+ -p port
+ the port number the backend is accepting connections on.
+ The default is 5432.
+
+ -c number_of_clients
+ number of clients simulated. The default is 1.
+
+ -t number_of_transactions
+ number of transactions each client runs. The default is 10.
+
+ -s scaling_factor
+ this should be used with the -i (initialize) option.
+ The number of tuples generated will be a multiple of the
+ scaling factor. For example, -s 100 will imply 10M
+ (10,000,000) tuples in the accounts table.
+ The default is 1.
+
+ -n
+ Do not vacuum or clean the history table before the
+ test is run.
+
+ -v
+ Do vacuuming before testing. This will take some time.
+ With neither -n nor -v, pgbench will vacuum the tellers and
+ branches tables only.
+
+ -S
+ Perform select-only transactions instead of TPC-B.
+
+ -d
+ debug option.
+
+
+o What is the "transaction" actually performed in pgbench?
+
+ (1) begin;
+
+ (2) update accounts set abalance = abalance + :delta where aid = :aid;
+
+ (3) select abalance from accounts where aid = :aid;
+
+ (4) update tellers set tbalance = tbalance + :delta where tid = :tid;
+
+ (5) update branches set bbalance = bbalance + :delta where bid = :bid;
+
+ (6) insert into history(tid,bid,aid,delta) values(:tid,:bid,:aid,:delta);
+
+ (7) end;
+
+o License?
+
+Basically it is the same as the BSD license. See pgbench.c for more details.
+
+o History
+
+2000/1/15 pgbench-1.2 contributed to PostgreSQL
+ * Add -v option
+
+1999/09/29 pgbench-1.1 released
+ * Apply cygwin patches contributed by Yutaka Tanida
+ * More robust when backends die
+ * Add -S option (select only)
+
+1999/09/04 pgbench-1.0 released
\ No newline at end of file
--- /dev/null
+pgbench 1.2 README 2000/1/15 Tatsuo Ishii (t-ishii@sra.co.jp)
+
+o What is pgbench?
+
+pgbench is a program to run a benchmark test along the lines of TPC-B.
+At the moment it works with PostgreSQL only.
+
+pgbench runs transactions including select/update/insert statements,
+and from the total run time and the number of transactions actually
+completed it reports the number of transactions executed per second
+(tps). By default the tables under test contain 100,000 tuples.
+
+The actual output looks like this:
+
+number of clients: 4
+number of transactions per client: 100
+number of processed transactions: 400/400
+tps = 19.875015(including connections establishing)
+tps = 20.098827(excluding connections establishing)
+
+pgbench was written with reference to JDBCBench, a JDBC benchmark
+program originally written for MySQL.
+
+o Features of pgbench
+
+ - Written in C using only libpq, so it is highly portable and easy
+ to install.
+
+ - pgbench simulates a multi-user environment using the asynchronous
+ processing capability of libpq, so concurrent-connection setups
+ can be tested easily.
+
+o Installing pgbench
+
+Edit the line
+
+ POSTGRESHOME = /usr/local/pgsql
+
+at the top of the Makefile as needed, then just run configure; make.
+
+o Using pgbench
+
+The basic usage is:
+
+$ pgbench [dbname]
+
+If the database name is omitted, a database with the same name as the
+user is assumed. The database must be initialized beforehand with the
+-i option described below.
+
+pgbench has a number of options.
+
+-h hostname specifies the host where the PostgreSQL database daemon
+ postmaster is running. If omitted, pgbench connects to the local
+ host via a Unix domain socket.
+
+-p port specifies the port number the postmaster is using. If
+ omitted, 5432 is assumed.
+
+-c number_of_clients specifies the number of concurrent clients. The
+ default is 1. pgbench uses one file descriptor per concurrent
+ client, so the number of clients cannot exceed the number of
+ available file descriptors, which can be checked with the limit
+ or ulimit command.
+
+-t number_of_transactions specifies the number of transactions each
+ client runs. The default is 10.
+
+-s scaling_factor
+
+ Used together with the -i option.
+ The scaling factor is an integer of 1 or more. Changing the
+ scaling factor changes the size of the tables under test to
+ 100,000 x [scaling factor] tuples.
+ The default scaling factor is 1.
+
+-v With this option, a vacuum and a clearing of the history table
+ are performed before the benchmark starts. If both -v and -n are
+ omitted, a minimal vacuum is done instead: the history table is
+ cleared and the history, branches and tellers tables are
+ vacuumed. This keeps the vacuum time to a minimum while still
+ removing the garbage that affects performance. Usually it is
+ recommended to omit both -v and -n.
+
+-n With this option, no vacuum and no clearing of the history table
+ are performed before the benchmark.
+
+-S Runs select-only transactions instead of TPC-B. Use this when
+ you want to measure search speed.
+
+-d Debug option. Various information is printed.
+
+o Initializing the database
+
+To run a benchmark test with pgbench, the database must be initialized
+and the test data created beforehand:
+
+$ pgbench -i [dbname]
+
+This creates the following tables (for scaling factor == 1).
+
+*Caution*
+Existing tables with the same names will be deleted!!
+
+table name # of tuples
+-------------------------
+branches 1
+tellers 10
+accounts 100000
+history 0
+
+If the scaling factor is changed to 10, 100, 1000 and so on, the tuple
+counts above are multiplied by 10, 100 or 1000 accordingly. For
+example, with a scaling factor of 10 we get:
+
+table name # of tuples
+-------------------------
+branches 10
+tellers 100
+accounts 1000000
+history 0
+
+o Definition of a "transaction"
+
+pgbench counts one transaction when the following sequence completes
+in full:
+
+(1) begin;
+
+(2) update accounts set abalance = abalance + :delta where aid = :aid;
+ Here :delta is a random value between 1 and 1000 and :aid is a
+ random value between 1 and 100000. Each random variable keeps
+ the same value throughout a single transaction.
+
+(3) select abalance from accounts where aid = :aid;
+ Only one row is retrieved here.
+
+(4) update tellers set tbalance = tbalance + :delta where tid = :tid;
+ Here :tid is a random value between 1 and 10.
+
+(5) update branches set bbalance = bbalance + :delta where bid = :bid;
+ Here :bid is a random value between 1 and [scaling factor].
+
+(6) insert into history(tid,bid,aid,delta) values(:tid,:bid,:aid,:delta);
+
+(7) end;
+
+o Author and license
+
+pgbench was written by Tatsuo Ishii. The license terms are at the top
+of pgbench.c. As long as those terms are observed, it can be used free
+of charge and redistributed freely.
+
+o History
+
+2000/1/15 pgbench-1.2 contributed to PostgreSQL
+ * Add -v option
+
+1999/09/29 pgbench-1.1 released
+ * Apply cygwin patches contributed by Yutaka Tanida
+ * More robust when backends die
+ * Add -S option (select only)
+
+1999/09/04 pgbench-1.0 released
--- /dev/null
+
+SELECT text_soundex('hello world!');
+
+CREATE TABLE s (nm text)\g
+
+insert into s values ('john')\g
+insert into s values ('joan')\g
+insert into s values ('wobbly')\g
+
+select * from s
+where text_soundex(nm) = text_soundex('john')\g
+
+select nm from s a, s b
+where text_soundex(a.nm) = text_soundex(b.nm)
+and a.oid <> b.oid\g
+
+CREATE FUNCTION text_sx_eq(text, text) RETURNS bool AS
+'select text_soundex($1) = text_soundex($2)'
+LANGUAGE 'sql'\g
+
+CREATE FUNCTION text_sx_lt(text,text) RETURNS bool AS
+'select text_soundex($1) < text_soundex($2)'
+LANGUAGE 'sql'\g
+
+CREATE FUNCTION text_sx_gt(text,text) RETURNS bool AS
+'select text_soundex($1) > text_soundex($2)'
+LANGUAGE 'sql';
+
+CREATE FUNCTION text_sx_le(text,text) RETURNS bool AS
+'select text_soundex($1) <= text_soundex($2)'
+LANGUAGE 'sql';
+
+CREATE FUNCTION text_sx_ge(text,text) RETURNS bool AS
+'select text_soundex($1) >= text_soundex($2)'
+LANGUAGE 'sql';
+
+CREATE FUNCTION text_sx_ne(text,text) RETURNS bool AS
+'select text_soundex($1) <> text_soundex($2)'
+LANGUAGE 'sql';
+
+DROP OPERATOR #= (text,text)\g
+
+CREATE OPERATOR #= (leftarg=text, rightarg=text, procedure=text_sx_eq,
+commutator=text_sx_eq)\g
+
+SELECT *
+FROM s
+WHERE text_sx_eq(nm,'john')\g
+
+SELECT *
+from s
+where s.nm #= 'john';
+
--- /dev/null
+
+Here are general trigger functions provided as workable examples
+of using SPI and triggers. "General" means that the functions may be
+used to define triggers on any tables, but you have to specify
+table/field names (as described below) while creating a trigger.
+
+1. refint.c - functions for implementing referential integrity.
+
+check_primary_key () is to be used for foreign keys of a table.
+
+ You are to create a trigger (BEFORE INSERT OR UPDATE) using this
+function on a table referencing another table. You are to specify as
+function arguments: the triggered table column names which correspond
+to the foreign key, the referenced table name, and the column names in
+the referenced table which correspond to the primary/unique key.
+ You may create as many triggers as you need - one trigger for
+one reference.
+
+check_foreign_key () is to be used for primary/unique keys of a table.
+
+ You are to create a trigger (BEFORE DELETE OR UPDATE) using this
+function on a table referenced by other table(s). You are to specify as
+function arguments: the number of references for which the function has
+to perform checking, the action to take if a referencing key is found
+('cascade' - to delete the corresponding foreign key, 'restrict' - to
+abort the transaction if foreign keys exist, 'setnull' - to set the
+foreign key referencing the primary/unique key being deleted to null),
+the triggered table column names which correspond to the primary/unique
+key, and the referencing table name and column names corresponding to
+the foreign key (, ... - as many referencing tables/keys as specified
+by the first argument).
+ Note that the NOT NULL constraint and unique index have to be
+defined by yourself.
+
+ There are examples in refint.example and regression tests
+(sql/triggers.sql).
+
+ To CREATE FUNCTIONs use refint.sql (will be made by gmake from
+refint.source).
+
+
+2. timetravel.c - functions for implementing time travel feature.
+
+ Old internally supported time-travel (TT) used insert/delete
+transaction commit times. To get the same feature using triggers,
+you are to add to a table two columns of abstime type to store
+the date when a tuple was inserted (start_date) and changed/deleted
+(stop_date):
+
+CREATE TABLE XXX (
+ ... ...
+ date_on abstime default currabstime(),
+ date_off abstime default 'infinity'
+ ... ...
+);
+
+- so, tuples being inserted with NULLs in date_on/date_off will get
+_current_date_ in date_on (the name of the start_date column in XXX)
+and INFINITY in date_off (the name of the stop_date column in XXX).
+
+ Tuples with stop_date equal to INFINITY are "valid now": when the
+trigger is fired for UPDATE/DELETE of a tuple with stop_date NOT equal
+to INFINITY, this tuple will not be changed/deleted!
+
+ If stop_date is equal to INFINITY, then on
+
+UPDATE: only the stop_date of the tuple being updated will be changed,
+to the current date, and a new tuple with the new data (coming from
+SET ... in the UPDATE) will be inserted. The start_date of this new
+tuple will be set to the current date and the stop_date to INFINITY.
+
+DELETE: a new tuple will be inserted with stop_date set to the current
+date (and with the same data in the other columns as in the tuple being
+deleted).
+
+ NOTE:
+1. To get tuples "valid now" you are to add _stop_date_ = 'infinity'
+ to the WHERE clause, as in the sketch after this note. The internally
+ supported TT made it possible to avoid this... Fixed rewriting RULEs
+ could help here... As a workaround you may use VIEWs...
+2. You can't change the start/stop date columns with UPDATE!
+ Use set_timetravel (below) if you need to do this.
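+
+For the XXX table above, selecting the tuples "valid now" would look
+like this (a sketch):
+
+SELECT * FROM XXX WHERE date_off = 'infinity';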
+
+ FUNCTIONs:
+
+timetravel() is the general trigger function.
+
+ You are to create a trigger BEFORE (!!!) UPDATE OR DELETE using this
+function on a time-traveled table. You are to specify two arguments: the
+name of the start_date column and the name of the stop_date column in
+the triggered table.
+
+currabstime() may be used in DEFAULT for start_date column to get
+current date.
+
+set_timetravel() allows you to turn time-travel ON/OFF for a table:
+
+ set_timetravel('XXX', 1) will turn TT ON for table XXX (and report
+the old status).
+ set_timetravel('XXX', 0) will turn TT OFF for table XXX (ditto).
+
+Turning TT OFF allows you to do anything you want with the table.
+
+ There is an example in timetravel.example.
+
+ To CREATE FUNCTIONs use timetravel.sql (will be made by gmake from
+timetravel.source).
--- /dev/null
+--Columns ID and ID1 of table A form the primary key:
+
+CREATE TABLE A (
+ ID int4 not null,
+ id1 int4 not null,
+primary key (ID,ID1)
+);
+
+--Columns REFB/REFB1 of table B and REFC/REFC1 of C are foreign keys
+--referencing ID/ID1 of A:
+
+CREATE TABLE B (
+ REFB int4,
+ REFB1 INT4
+);
+CREATE INDEX BI ON B (REFB);
+
+CREATE TABLE C (
+ REFC int4,
+ REFC1 int4
+);
+CREATE INDEX CI ON C (REFC);
+
+--Trigger for table A:
+
+CREATE TRIGGER AT BEFORE DELETE ON A FOR EACH ROW
+EXECUTE PROCEDURE
+check_foreign_key (2, 'cascade', 'ID','id1', 'B', 'REFB','REFB1', 'C', 'REFC','REFC1');
+
+
+CREATE TRIGGER AT1 AFTER UPDATE ON A FOR EACH ROW
+EXECUTE PROCEDURE
+check_foreign_key (2, 'cascade', 'ID','id1', 'B', 'REFB','REFB1', 'C', 'REFC','REFC1');
+
+
+CREATE TRIGGER BT BEFORE INSERT OR UPDATE ON B FOR EACH ROW
+EXECUTE PROCEDURE
+check_primary_key ('REFB','REFB1', 'A', 'ID','ID1');
+
+CREATE TRIGGER CT BEFORE INSERT OR UPDATE ON C FOR EACH ROW
+EXECUTE PROCEDURE
+check_primary_key ('REFC','REFC1', 'A', 'ID','ID1');
+
+
+
+-- Now try
+
+INSERT INTO A VALUES (10,10);
+INSERT INTO A VALUES (20,20);
+INSERT INTO A VALUES (30,30);
+INSERT INTO A VALUES (40,41);
+INSERT INTO A VALUES (50,50);
+
+INSERT INTO B VALUES (1); -- invalid reference
+INSERT INTO B VALUES (10,10);
+INSERT INTO B VALUES (30,30);
+INSERT INTO B VALUES (30,30);
+
+INSERT INTO C VALUES (11); -- invalid reference
+INSERT INTO C VALUES (20,20);
+INSERT INTO C VALUES (20,21);
+INSERT INTO C VALUES (30,30);
+
+-- now updates work well
+update A set ID = 100 , ID1 = 199 where ID=30 ;
+
+SELECT * FROM A;
+SELECT * FROM B;
+SELECT * FROM C;
--- /dev/null
+String io module for postgresql.
+Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>
+
+This software is distributed under the GNU General Public License
+either version 2, or (at your option) any later version.
+
+
+These output functions can be used as substitutes for the standard text
+output functions to get the values of text fields printed in the format
+used for C strings. This allows the output of queries or the exported
+files to be processed more easily using standard unix filter programs
+like perl or awk.
+
+If you use the standard functions instead, you could find a single tuple
+split across many lines, and the tabs embedded in the values could be
+confused with those used as field delimiters.
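+
+For example (an illustration of the escaping, not actual program
+output), a value containing an embedded tab and newline, which the
+standard output functions would print literally across two lines,
+comes out on a single line as:
+
+ one\ttwo\nthree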
+
+My function translates all non-printing characters into the
+corresponding escape sequences as defined by the C syntax. All you need
+to reconstruct the exact value in your application is a corresponding
+unescape function, like the string_input defined in the source code.
+
+Massimo Dal Zotto <dz@cs.unitn.it>
--- /dev/null
+User locks, by Massimo Dal Zotto <dz@cs.unitn.it>
+Copyright (C) 1999, Massimo Dal Zotto <dz@cs.unitn.it>
+
+This software is distributed under the GNU General Public License
+either version 2, or (at your option) any later version.
+
+
+This loadable module, together with my user-lock.patch applied to the
+backend, provides support for user-level long-term cooperative locks.
+For example one can write:
+
+ select some_fields, user_write_lock_oid(oid) from table where id='key';
+
+Now if the returned user_write_lock_oid field is 1, you have acquired a
+user lock on the oid of the selected tuple and can do some long operation
+on it, such as letting the data be edited by the user.
+If it is 0, it means that the lock has already been acquired by some other
+process and you should not use that item until the other has finished.
+Note that in this case the query returns 0 immediately without waiting on
+the lock. This is good if the lock is held for a long time.
+After you have finished your work on that item you can do:
+
+ update table set some_fields where id='key';
+ select user_write_unlock_oid(oid) from table where id='key';
+
+You can also ignore the failure and go ahead, but this could produce
+conflicts or inconsistent data in your application. User locks require
+cooperative behavior between users. User locks don't interfere with the
+normal locks used by postgres for transaction processing.
+
+This could also be done by setting a flag in the record itself but in
+this case you have the overhead of the updates to the records and there
+could be some locks not released if the backend or the application crashes
+before resetting the lock flag.
+It could also be done with a begin/end block but in this case the entire
+table would be locked by postgres and it is not acceptable to do this for
+a long period because other transactions would block completely.
+
+The generic user locks use two values, group and id, to identify a lock,
+which correspond to ip_posid and ip_blkid of an ItemPointerData.
+Group is a 16 bit value while id is a 32 bit integer which could also be
+an oid. The oid user lock functions, which take only an oid as argument,
+use a group equal to 0.
+
+The meaning of group and id is defined by the application. The user
+lock code just takes two numbers and tells you if the corresponding
+entity has been successfully locked. What this means is up to you.
+
+My suggestion is that you use the group to identify an area of your
+application and the id to identify an object in this area.
+Or you can just lock the oid of the tuples, which are by definition
+unique.
+
+Note also that a process can acquire more than one lock on the same
+entity, and it must release the lock the corresponding number of times.
+This can be done by calling the unlock function until it returns 0, as
+in the sketch below.
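+
+For example, a process that has locked the same oid twice could release
+it like this (a sketch; 12345 stands for the locked oid):
+
+ select user_write_unlock_oid(12345); -- first release
+ select user_write_unlock_oid(12345); -- second release
+ select user_write_unlock_oid(12345); -- returns 0: nothing left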
--- /dev/null
+$Header: /cvsroot/pgsql/contrib/vacuumlo/Attic/README.vacuumlo,v 1.1 2000/06/19 14:02:16 momjian Exp $
+
+This is a simple utility that will remove any orphaned large objects out of a
+PostgreSQL database.
+
+Compiling
+---------
+
+Simply run make. A single executable "vacuumlo" is created.
+
+Usage
+-----
+
+vacuumlo [-v] database [db2 ... dbn]
+
+The -v flag outputs some progress messages to stdout.
+
+Method
+------
+
+First, it builds a temporary table which contains all of the oids of
+the large objects in that database.
+
+It then scans through all columns in the database that are of type
+'oid', and removes every oid it finds there from the temporary table.
+
+The oids left in the temporary table belong to orphaned large objects,
+and these are removed.
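+
+In SQL terms the method is roughly the following (a sketch, not the
+exact statements the program issues; "vacuum_l" and the "image" table
+with oid column "raster" are made-up names):
+
+-- for each oid-typed column found in the database:
+DELETE FROM vacuum_l WHERE lo IN (SELECT raster FROM image);
+
+-- whatever is left in vacuum_l is orphaned; remove it:
+SELECT lo_unlink(lo) FROM vacuum_l;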
+
+I decided to place this in contrib as it needs further testing, but
+hopefully this (or a variant of it) will make it into the backend as a
+"vacuum lo" command in a later release.
+
+Peter Mount <peter@retep.org.uk>
+http://www.retep.org.uk
+March 21 1999
+
+Committed April 10 1999 Peter