<TITLE>PostgreSQL Developers FAQ</TITLE>
</HEAD>
- <BODY bgcolor="#FFFFFF" text="#000000" link="#FF0000" vlink="#A00000" alink="#0000FF">
- <H1>Developer's Frequently Asked Questions (FAQ) for PostgreSQL</H1>
+ <BODY bgcolor="#FFFFFF" text="#000000" link="#FF0000" vlink="#A00000"
+ alink="#0000FF">
+ <H1>Developer's Frequently Asked Questions (FAQ) for
+ PostgreSQL</H1>
<P>Last updated: Tue Dec 4 01:20:03 EST 2001</P>
<P>Current maintainer: Bruce Momjian (<A href="mailto:pgman@candle.pha.pa.us">pgman@candle.pha.pa.us</A>)<BR>
</P>
<P>The most recent version of this document can be viewed at the PostgreSQL Web site, <A href="http://www.PostgreSQL.org">http://www.PostgreSQL.org</A>.<BR>
</P>
<HR>
<BR>
</CENTER>
<A href="#1">1</A>) What tools are available for developers?<BR>
<A href="#2">2</A>) What books are good for developers?<BR>
- <A href="#3">3</A>) Why do we use <I>palloc</I>() and <I>pfree</I>() to allocate memory?<BR>
- <A href="#4">4</A>) Why do we use <I>Node</I> and <I>List</I> to make data structures?<BR>
+ <A href="#3">3</A>) Why do we use <I>palloc</I>() and
+ <I>pfree</I>() to allocate memory?<BR>
+ <A href="#4">4</A>) Why do we use <I>Node</I> and <I>List</I> to
+ make data structures?<BR>
<A href="#5">5</A>) How do I add a feature or fix a bug?<BR>
- <A href="#6">6</A>) How do I download/update the current source tree?<BR>
+ <A href="#6">6</A>) How do I download/update the current source
+ tree?<BR>
<A href="#7">7</A>) How do I test my changes?<BR>
- <A href="#7">7</A>) I just added a field to a structure. What else should I do?<BR>
- <A href="#8">8</A>) Why are table, column, type, function, view names sometimes referenced as <I>Name</I> or <I>NameData,</I> and sometimes as <I>char *?</I><BR>
- <A href="#9">9</A>) How do I efficiently access information in tables from the backend code?<BR>
+ <A href="#7">7</A>) I just added a field to a structure. What else
+ should I do?<BR>
+ <A href="#8">8</A>) Why are table, column, type, function, view
+ names sometimes referenced as <I>Name</I> or <I>NameData,</I> and
+ sometimes as <I>char *?</I><BR>
+ <A href="#9">9</A>) How do I efficiently access information in
+ tables from the backend code?<BR>
<A href="#10">10</A>) What is elog()?<BR>
<A href="#11">11</A>) What is configure all about?<BR>
<A href="#12">12</A>) How do I add a new port?<BR>
<A href="#14">14</A>) Why don't we use threads in the backend?<BR>
<A href="#15">15</A>) How are RPM's packaged?<BR>
<A href="#16">16</A>) How are CVS branches handled?<BR>
- <A href="#17">17</A>) How do I get involved in PostgreSQL development?<BR>
+ <A href="#17">17</A>) How do I get involved in PostgreSQL
+ development?<BR>
<BR>
<HR>
- <H3><A name="1">1</A>) What tools are available for developers?</H3>
+ <H3><A name="1">1</A>) What tools are available for
+ developers?</H3>
<P>Aside from the User documentation mentioned in the regular FAQ, there are
several development tools available. First, all the files in the
<I>src/tools</I> directory are designed for developers.</P>
<PRE>
RELEASE_CHANGES changes we have to make for each release
SQL_keywords standard SQL'92 keywords
pginclude scripts for adding/removing include files
unused_oids in pgsql/src/include/catalog
</PRE>
Let me note some of these. If you point your browser at the
<I>file:/usr/local/src/pgsql/src/tools/backend/index.html</I> directory, you
will see a few paragraphs describing the data flow, the backend components in
a flow chart, and a description of the shared memory area. You can click on
any flowchart box to see a description. If you then click on the directory
name, you will be taken to the source directory, to browse the actual source
code behind it. We also have several README files in some source directories
to describe the function of the module. The browser will also display these
when you enter the directory. The <I>tools/backend</I> directory is also
contained on our web page under the title <I>How PostgreSQL Processes a
Query.</I>

<P>Second, you really should have an editor that can handle tags, so you can
tag a function call to see the function definition, and then tag inside that
function to see an even lower-level function, and then back out twice to
return to the original function. Most editors support this via <I>tags</I> or
<I>etags</I> files.</P>
<P>Third, you need to get <I>id-utils</I> from:</P>
<PRE>
- <A href="ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz">ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz</A>
- <A href="ftp://tug.org/gnu/id-utils-3.2d.tar.gz">ftp://tug.org/gnu/id-utils-3.2d.tar.gz</A>
- <A href="ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz">ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz</A>
+ <A href=
+"ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz">ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz</A>
+ <A href=
+"ftp://tug.org/gnu/id-utils-3.2d.tar.gz">ftp://tug.org/gnu/id-utils-3.2d.tar.gz</A>
+ <A href=
+"ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz">ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz</A>
</PRE>
By running <I>tools/make_mkid</I>, an archive of source symbols can be
created that can be rapidly queried like <I>grep</I> or edited. Others prefer
<I>glimpse.</I>
<P><I>make_diff</I> has tools to create patch diff files that can be applied
to the distribution. This produces context diffs, which are our preferred
format.</P>
<P>Our standard format is to indent each code level with one tab, where each
tab is four spaces. You will need to set your editor to display tabs as four
spaces:<BR>
</P>
<PRE>
        vi in ~/.exrc:
                set tabstop=4
                set sw=4
</PRE>
<BR>
<I>pgindent</I> will format the code by specifying flags to your operating
system's <I>indent</I> utility.
<P><I>pgindent</I> is run on all source files just before each beta test
period. It auto-formats all source files to make them consistent. Comment
blocks that need specific line breaks should be formatted as <I>block
comments,</I> where the comment starts as <CODE>/*------</CODE>. These
comments will not be reformatted in any way.</P>
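<P>For example, a comment in this style keeps its own line breaks through a
<I>pgindent</I> run (the wording is just illustrative):</P>
<PRE>
<CODE>/*------------------------------------------------------------
 * This comment keeps its own line breaks:
 *      indented notes,
 *      small tables, etc.
 * pgindent leaves it alone because of the dashed first line.
 *------------------------------------------------------------
 */
</CODE>
</PRE>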
<P><I>pginclude</I> contains scripts used to add needed
<CODE>#include</CODE>'s to include files, and remove unneeded
<CODE>#include</CODE>'s.</P>
<P>When adding system types, you will need to assign oids to them. There is
also a script called <I>unused_oids</I> in <I>pgsql/src/include/catalog</I>
that shows the unused oids.</P>
<H3><A name="2">2</A>) What books are good for developers?</H3>
<P>I have four good books: <I>An Introduction to Database Systems,</I> by
C.J. Date, Addison-Wesley; <I>A Guide to the SQL Standard,</I> by C.J. Date,
et al., Addison-Wesley; <I>Fundamentals of Database Systems,</I> by Elmasri
and Navathe; and <I>Transaction Processing,</I> by Jim Gray, Morgan
Kaufmann.</P>

<P>There is also a database performance site, with a handbook on-line written
by Jim Gray, at <A href=
"http://www.benchmarkresources.com">http://www.benchmarkresources.com</A>.</P>

<H3><A name="3">3</A>) Why do we use <I>palloc</I>() and <I>pfree</I>() to
allocate memory?</H3>

<P><I>palloc()</I> and <I>pfree()</I> are used in place of malloc() and
free() because we automatically free all memory allocated when a transaction
completes. This makes it easier to make sure we free memory that gets
allocated in one place, but only freed much later. There are several memory
contexts in which memory can be allocated, and this controls when the
allocated memory is automatically freed by the backend.</P>
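<P>A minimal sketch of the idiom (the buffer and its size are only
illustrative):</P>
<PRE>
<CODE>char   *buf;

buf = (char *) palloc(1024);    /* allocated in the current memory context */
/* ... use buf ... */
pfree(buf);                     /* explicit free; otherwise it is released
                                 * automatically when the context is reset,
                                 * e.g. at transaction end */
</CODE>
</PRE>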
<H3><A name="4">4</A>) Why do we use <I>Node</I> and <I>List</I> to make data
structures?</H3>

<P>We do this because it allows a consistent and flexible way to pass data
around inside the backend. Every node has a <I>NodeTag</I> which specifies
what type of data is inside the Node. <I>Lists</I> are groups of <I>Nodes</I>
chained together as a forward-linked list.</P>
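<P>A small sketch of how the tag is used: <I>makeNode()</I> allocates a node
and sets its <I>NodeTag,</I> and <I>IsA()</I> tests it (the variable names are
illustrative):</P>
<PRE>
<CODE>Node   *node = (Node *) makeNode(Var);  /* palloc a Var and set its tag to T_Var */

if (IsA(node, Var))                     /* same as nodeTag(node) == T_Var */
{
    Var    *var = (Var *) node;

    /* safe to use the Var-specific fields of var here */
}
</CODE>
</PRE>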
<P>Here are some of the <I>List</I> manipulation commands:</P>
<BLOCKQUOTE>
<DL>
<DT>foreach(i, list)</DT>
<DD>
loop through <I>list,</I> assigning each list element to <I>i.</I> It is
important to note that <I>i</I> is a List *, not the data in the <I>List</I>
element. You need to use <I>lfirst(i)</I> to get at the data. Here is a
typical code snippet that loops through a List containing <I>Var *'s</I> and
processes each one:
<PRE>
<CODE>List   *i, *list;

foreach(i, list)
{
    Var    *var = (Var *) lfirst(i);

    /* ... process var here ... */
}
</CODE>
</PRE>
</DD>
<DT>lcons(node, list)</DT>
<DD>add <I>node</I> to the front of <I>list,</I> or create a new list with
<I>node</I> if <I>list</I> is <I>NIL.</I></DD>
<DT>lappend(list, node)</DT>
<DD>add <I>node</I> to the end of <I>list.</I> This is more expensive than
lcons. (See the short example after this list.)</DD>
<DT>nconc(list1, list2)</DT>
<DD>concatenate <I>list2</I> on to the end of <I>list1.</I></DD>
<DT>lconsi, ...</DT>
<DD>There are integer versions of these: <I>lconsi, lappendi, nthi.</I>
<I>Lists</I> containing integers instead of Node pointers are used to hold
lists of relation object ids and other integer quantities.</DD>
</DL>
</BLOCKQUOTE>
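<P>A short sketch of building a list with the calls above (<I>NIL</I> is the
empty list; the node variables are illustrative):</P>
<PRE>
<CODE>List   *l = NIL;

l = lappend(l, node1);          /* l now holds node1 */
l = lcons(node2, l);            /* node2 is pushed onto the front */
</CODE>
</PRE>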
You can print nodes easily inside <I>gdb.</I> First, to disable output
truncation when you use the gdb <I>print</I> command:
<PRE>
<CODE>(gdb) set print elements 0
</CODE>
</PRE>
Instead of printing values in gdb format, you can use the next two commands
to print out List, Node, and structure contents in a verbose format that is
easier to understand. Lists are unrolled into nodes, and nodes are printed in
detail. The first prints in a short format, and the second in a long format:
<PRE>
<CODE>(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)
</CODE>
</PRE>
The output appears in the postmaster log file, or on your screen if you are
running a backend directly without a postmaster.
<H3><A name="5">5</A>) How do I add a feature or fix a bug?</H3>
<P>The source code is over 250,000 lines. Many problems/features are isolated
to one specific area of the code. Others require knowledge of much of the
source. If you are confused about where to start, ask the hackers list, and
they will be glad to assess the complexity and give pointers on where to
start.</P>

<P>Another thing to keep in mind is that many fixes and features can be added
with surprisingly little code. I often start by adding code, then looking at
other areas in the code where similar things are done, and by the time I am
finished, the patch is quite small and compact.</P>

<P>When adding code, keep in mind that it should use the existing facilities
in the source, for performance reasons and for simplicity. Often a review of
existing code doing similar things is helpful.</P>

<H3><A name="6">6</A>) How do I download/update the current source
tree?</H3>

<P>There are several ways to obtain the source tree. Occasional developers
can just get the most recent source tree snapshot from ftp.postgresql.org.
For regular developers, you can use CVS. CVS allows you to download the
source tree, then occasionally update your copy of the source tree with any
new changes. Using CVS, you don't have to download the entire source each
time, only the changed files. Anonymous CVS does not allow developers to
update the remote source tree, though privileged developers can do this.
There is a CVS FAQ on our web site that describes how to use remote CVS. You
can also use CVSup, which has similar functionality, and is available from
ftp.postgresql.org.</P>

<P>To update the source tree, there are two ways. You can generate a patch
against your current source tree, perhaps using the make_diff tools mentioned
above, and send it to the patches list. It will be reviewed, and applied in a
timely manner. If the patch is major, and we are in beta testing, the
developers may wait for the final release before applying your patches.</P>

<P>For hard-core developers, Marc (scrappy@postgresql.org) will give you a
Unix shell account on postgresql.org, so you can use CVS to update the main
source tree, or you can ftp your files into your account, apply the patch,
and use cvs to commit the changes directly into the source tree.</P>
<H3><A name="6">6</A>) How do I test my changes?</H3>
<P>First, use <I>psql</I> to make sure it is working as you expect. Then run
<I>src/test/regress</I> and get the output of
<I>src/test/regress/checkresults</I> with and without your changes, to see
that your patch does not change the regression tests in unexpected ways. This
practice has saved me many times. The regression tests exercise the code in
ways I would never do, and have caught many bugs in my patches. By finding
the problems now, you save yourself a lot of debugging later when things are
broken, and you can't figure out when it happened.</P>

<H3><A name="8">8</A>) I just added a field to a structure. What else should
I do?</H3>

<P>The structures passed around from the parser, rewrite, optimizer, and
executor require quite a bit of support. Most structures have support
routines in <I>src/backend/nodes</I> used to create, copy, read, and output
those structures. Make sure you add support for your new field to these
files. Find any other places the structure may need code for your new field.
<I>mkid</I> is helpful with this (see above).</P>

<H3><A name="9">9</A>) Why are table, column, type, function, and view names
sometimes referenced as <I>Name</I> or <I>NameData,</I> and sometimes as
<I>char *?</I></H3>

<P>Table, column, type, function, and view names are stored in system tables
in columns of type <I>Name.</I> Name is a fixed-length, null-terminated type
of <I>NAMEDATALEN</I> bytes. (The default value for NAMEDATALEN is 32
bytes.)</P>
<PRE>
<CODE>typedef struct nameData
{
    char        data[NAMEDATALEN];
} NameData;
typedef NameData *Name;
</CODE>
</PRE>
Table, column, type, function, and view names that come into the backend via
user queries are stored as variable-length, null-terminated character
strings.

<P>Many functions are called with both types of names, i.e.
<I>heap_open().</I> Because the Name type is null-terminated, it is safe to
pass it to a function expecting a char *. Because there are many cases where
on-disk names (Name) are compared to user-supplied names (char *), there are
many cases where Name and char * are used interchangeably.</P>
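<P>A small sketch of the two forms side by side (the names are illustrative):
<I>namestrcpy()</I> copies a C string into a fixed-size <I>Name,</I> and
<I>NameStr()</I> reads a <I>Name</I> back as a plain C string:</P>
<PRE>
<CODE>NameData    relname;
char       *user_supplied = "pg_class";

namestrcpy(&amp;relname, user_supplied);    /* char * into a fixed-size Name */

/* a Name can be read as an ordinary null-terminated string */
if (strcmp(NameStr(relname), user_supplied) == 0)
    elog(DEBUG, "names match");
</CODE>
</PRE>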
- <H3><A name="9">9</A>) How do I efficiently access information in tables from the backend code?</H3>
-
- <P>You first need to find the tuples(rows) you are interested in. There are two ways. First, <I>SearchSysCache()</I> and related functions allow you to query the system catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. The caches use system table indexes to look up tuples. A list of available caches is located in <I>src/backend/utils/cache/syscache.c.</I> <I>src/backend/utils/cache/lsyscache.c</I> contains many column-specific cache lookup functions.</P>
-
- <P>The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by <I>SearchSysCache()</I>. What you <I>should</I> do is release it with <I>ReleaseSysCache()</I> when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call <I>ReleaseSysCache()</I>, then the cache entry will remain locked in the cache until end of transaction, which is tolerable but not very desirable.</P>
-
- <P>If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache.</P>
-
- <P>Open the table with <I>heap_open().</I> You can then start a table scan with <I>heap_beginscan(),</I> then use <I>heap_getnext()</I> and continue as long as <I>HeapTupleIsValid()</I> returns true. Then do a <I>heap_endscan().</I> <I>Keys</I> can be assigned to the <I>scan.</I> No indexes are used, so all rows are going to be compared to the keys, and only the valid rows returned.</P>
-
- <P>You can also use <I>heap_fetch()</I> to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with <I>heap_fetch(),</I> you must pass a <I>Buffer</I> pointer, and <I>ReleaseBuffer()</I> it when completed.</P>
-
- <P>Once you have the row, you can get data that is common to all tuples, like <I>t_self</I> and <I>t_oid,</I> by merely accessing the <I>HeapTuple</I> structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the <I>GETSTRUCT()</I> macro to access the table-specific start of the tuple. You then cast the pointer as a <I>Form_pg_proc</I> pointer if you are accessing the pg_proc table, or <I>Form_pg_type</I> if you are accessing pg_type. You can then access the columns by using a structure pointer:</P>
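<P>A rough sketch of the lookup pattern (the cache identifier, the
<I>typeoid</I> variable, and the field read are only examples; see
<I>syscache.c</I> for the real cache list):</P>
<PRE>
<CODE>HeapTuple       tuple;
Form_pg_type    typtup;

tuple = SearchSysCache(TYPEOID,
                       ObjectIdGetDatum(typeoid),
                       0, 0, 0);
if (!HeapTupleIsValid(tuple))
    elog(ERROR, "cache lookup failed for type %u", typeoid);

typtup = (Form_pg_type) GETSTRUCT(tuple);
/* ... read fields such as typtup->typlen here ... */

ReleaseSysCache(tuple);         /* let the cache discard the entry if needed */
</CODE>
</PRE>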
<P>If you can't use the system cache, you will need to retrieve the data
directly from the heap table, using the buffer cache that is shared by all
backends. The backend automatically takes care of loading the rows into the
buffer cache.</P>

<P>Open the table with <I>heap_open().</I> You can then start a table scan
with <I>heap_beginscan(),</I> then use <I>heap_getnext()</I> and continue as
long as <I>HeapTupleIsValid()</I> returns true. Then do a
<I>heap_endscan().</I> <I>Keys</I> can be assigned to the <I>scan.</I> No
indexes are used, so all rows are going to be compared to the keys, and only
the valid rows returned.</P>
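<P>A rough sketch of such a scan (lock modes and the exact argument lists
vary between versions, and <I>relid</I> is illustrative, so treat this as an
outline rather than exact code):</P>
<PRE>
<CODE>Relation        rel;
HeapScanDesc    scan;
HeapTuple       tuple;

rel = heap_open(relid, AccessShareLock);
scan = heap_beginscan(rel, false, SnapshotNow, 0, (ScanKey) NULL);

while (HeapTupleIsValid(tuple = heap_getnext(scan, 0)))
{
    /* ... examine the tuple here ... */
}

heap_endscan(scan);
heap_close(rel, AccessShareLock);
</CODE>
</PRE>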
<P>You can also use <I>heap_fetch()</I> to fetch rows by block number/offset.
While scans automatically lock/unlock rows from the buffer cache, with
<I>heap_fetch()</I> you must pass a <I>Buffer</I> pointer, and
<I>ReleaseBuffer()</I> it when completed.</P>

<P>Once you have the row, you can get data that is common to all tuples, like
<I>t_self</I> and <I>t_oid,</I> by merely accessing the <I>HeapTuple</I>
structure entries. If you need a table-specific column, you should take the
HeapTuple pointer, and use the <I>GETSTRUCT()</I> macro to access the
table-specific start of the tuple. You then cast the pointer as a
<I>Form_pg_proc</I> pointer if you are accessing the pg_proc table, or
<I>Form_pg_type</I> if you are accessing pg_type. You can then access the
columns by using a structure pointer:</P>
<PRE>
<CODE>((Form_pg_class) GETSTRUCT(tuple))->relnatts
</CODE>
</PRE>
You must not directly change <I>live</I> tuples in this way. The best way is
to use <I>heap_modifytuple()</I> and pass it your original tuple, and the
values you want changed. It returns a palloc'ed tuple, which you pass to
<I>heap_replace().</I> You can delete tuples by passing the tuple's
<I>t_self</I> to <I>heap_destroy().</I> You use <I>t_self</I> for
<I>heap_update()</I> too. Remember, tuples can be either system cache copies,
which may go away after you call <I>ReleaseSysCache()</I>, or read directly
from disk buffers, which go away when you <I>heap_getnext()</I>,
<I>heap_endscan()</I>, or <I>ReleaseBuffer()</I>, in the <I>heap_fetch()</I>
case. Or it may be a palloc'ed tuple, which you must <I>pfree()</I> when
finished.
<H3><A name="10">10</A>) What is elog()?</H3>
<P><I>elog()</I> is used to send messages to the front-end, and optionally
terminate the current query being processed. The first parameter is an elog
level of <I>NOTICE,</I> <I>DEBUG,</I> <I>ERROR,</I> or <I>FATAL.</I>
<I>NOTICE</I> prints on the user's terminal and in the postmaster logs.
<I>DEBUG</I> prints only in the postmaster logs. <I>ERROR</I> prints in both
places, and terminates the current query, never returning from the call.
<I>FATAL</I> terminates the backend process. The remaining parameters of
<I>elog</I> are a <I>printf</I>-style format string and arguments to
print.</P>
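<P>For example (the message text and variables are only illustrative):</P>
<PRE>
<CODE>/* report a problem and abort the current query; this call does not return */
elog(ERROR, "cannot open relation \"%s\"", relname);

/* send an informational message to the client and the postmaster log */
elog(NOTICE, "processed %d rows", ntuples);
</CODE>
</PRE>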
<H3><A name="11">11</A>) What is configure all about?</H3>
<P>The files <I>configure</I> and <I>configure.in</I> are part of the GNU
<I>autoconf</I> package. Configure allows us to test for various capabilities
of the OS, and to set variables that can then be tested in C programs and
Makefiles. Autoconf is installed on the PostgreSQL main server. To add
options to configure, edit <I>configure.in,</I> and then run <I>autoconf</I>
to generate <I>configure.</I></P>

<P>When <I>configure</I> is run by the user, it tests various OS
capabilities, stores those in <I>config.status</I> and <I>config.cache,</I>
and modifies a list of <I>*.in</I> files. For example, if there exists a
<I>Makefile.in,</I> configure generates a <I>Makefile</I> that contains
substitutions for all @var@ parameters found by configure.</P>

<P>When you need to edit files, make sure you don't waste time modifying
files generated by <I>configure.</I> Edit the <I>*.in</I> file, and re-run
<I>configure</I> to recreate the needed file. If you run <I>make
distclean</I> from the top-level source directory, all files derived by
configure are removed, so you see only the files contained in the source
distribution.</P>
<H3><A name="12">12</A>) How do I add a new port?</H3>
<P>There are a variety of places that need to be modified to add a new port.
First, start in the <I>src/template</I> directory. Add an appropriate entry
for your OS. Also, use <I>src/config.guess</I> to add your OS to
<I>src/template/.similar.</I> You shouldn't match the OS version exactly. The
<I>configure</I> test will look for an exact OS version number, and if not
found, find a match without the version number. Edit <I>src/configure.in</I>
to add your new OS. (See the configure item above.) You will need to run
autoconf, or patch <I>src/configure</I> too.</P>

<P>Then, check <I>src/include/port</I> and add your new OS file, with
appropriate values. Hopefully, there is already locking code in
<I>src/include/storage/s_lock.h</I> for your CPU. There is also a
<I>src/makefiles</I> directory for port-specific Makefile handling. There is
a <I>backend/port</I> directory if you need special files for your OS.</P>
<H3><A name="13">13</A>) What is CommandCounterIncrement()?</H3>
<P>Normally, transactions can not see the rows they modify. This allows
<CODE>UPDATE foo SET x = x + 1</CODE> to work correctly.</P>

<P>However, there are cases where a transaction needs to see rows affected in
previous parts of the transaction. This is accomplished using a Command
Counter. Incrementing the counter allows transactions to be broken into
pieces so each piece can see rows modified by previous pieces.
<I>CommandCounterIncrement()</I> increments the Command Counter, creating a
new part of the transaction.</P>
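<P>A sketch of the usual pattern in backend code (the insert call here is
only a stand-in for whatever catalog change came before):</P>
<PRE>
<CODE>/* change a catalog row in the current command */
heap_insert(rel, tuple);

/* make that change visible to the rest of this transaction */
CommandCounterIncrement();

/* a later lookup in this same transaction, e.g. via SearchSysCache(),
 * can now see the newly inserted row */
</CODE>
</PRE>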
- <H3><A name="14">14</A>) Why don't we use threads in the backend?</H3>
+ <H3><A name="14">14</A>) Why don't we use threads in the
+ backend?</H3>
<P>There are several reasons threads are not used:</P>
<UL>
<LI>An error in one backend can corrupt other backends.</LI>
<LI>Speed improvements using threads are small compared to the remaining
backend startup time.</LI>
<LI>The backend code would be more complex.</LI>
</UL>
<H3><A name="16">16</A>) How are RPMs packaged?</H3>
<P>2001-05-03</P>
<P>As to how the RPMs are built -- to answer that question sanely requires me
to know how much experience you have with the whole RPM paradigm. 'How is the
RPM built?' is a multifaceted question. The obvious simple answer is that I
maintain:</P>
<OL>
<LI>A set of patches to make certain portions of the source tree 'behave' in
the different environment of the RPMset;</LI>
<LI>The initscript;</LI>
<LI>Any other ancillary scripts and files;</LI>
<LI>A README.rpm-dist document that tries to adequately document both the
differences between the RPM build and the WHY of the differences, as well as
useful RPM environment operations (like, using syslog, upgrading, getting
postmaster to start at OS boot, etc);</LI>
<LI>The spec file that throws it all together. This is not a trivial
undertaking in a package of this size.</LI>
</OL>
<P>I then download and build on as many different canonical distributions as
I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on my
personal hardware. Occasionally I receive the opportunity from certain
commercial enterprises such as Great Bridge and PostgreSQL, Inc. to build on
other distributions.</P>

<P>I test the build by installing the resulting packages and running the
regression tests. Once the build passes these tests, I upload to the
postgresql.org ftp server and make a release announcement. I am also
responsible for maintaining the RPM download area on the ftp site.</P>

<P>You'll notice I said 'canonical' distributions above. That simply means
that the machine is as stock 'out of the box' as practical -- that is,
everything (except a select few programs) on these boxen is installed by RPM;
only official Red Hat released RPMs are used (except in unusual circumstances
involving software that will not alter the build -- for example, installing a
newer non-RedHat version of the Dia diagramming package is OK -- installing
Python 2.1 on the box that has Python 1.5.2 installed is not, as that alters
the PostgreSQL build). The RPM as uploaded is built to as close to
out-of-the-box pristine as is possible. Only the standard released 'official
to that release' compiler is used -- and only the standard official kernel is
used as well.</P>

<P>For a time I built on Mandrake for RedHat consumption -- no more.
Nonstandard RPM building systems are worse than useless. Which is not to say
that Mandrake is useless! By no means is Mandrake useless -- unless you are
building Red Hat RPMs -- and Red Hat is useless if you're trying to build
Mandrake or SuSE RPMs, for that matter. But I would be foolish to use 'Lamar
Owen's Super Special RPM Blend Distro 0.1.2' to build for public consumption!
:-)</P>

<P>I _do_ attempt to make the _source_ RPM compatible with as many
distributions as possible -- however, since I have limited resources (as a
volunteer RPM maintainer) I am limited as to the amount of testing said build
will get on other distributions, architectures, or systems.</P>

<P>And, while I understand people's desire to immediately upgrade to the
newest version, realize that I do this as a side interest -- I have a
regular, full-time job as a broadcast engineer/webmaster/sysadmin/Technical
Director which occasionally prevents me from making timely RPM releases. This
happened during the early part of the 7.1 beta cycle -- but I believe I was
pretty much on the ball for the Release Candidates and the final release.</P>

<P>I am working towards a more open RPM distribution -- I would dearly love
to more fully document the process and put everything into CVS -- once I
figure out how I want to represent things such as the spec file in a CVS
form. It makes no sense to maintain a changelog, for instance, in the spec
file in CVS when CVS does a better job of changelogs -- I will need to write
a tool to generate a real spec file from a CVS spec-source file that would
add version numbers, changelog entries, etc. to the result before building
the RPM. IOW, I need to rethink the process -- and then go through the
motions of putting my long RPM history into CVS one version at a time so that
version history information isn't lost.</P>

<P>As to why all these files aren't part of the source tree, well, unless
there was a large cry for it to happen, I don't believe it should be.
PostgreSQL is very platform-agnostic -- and I like that. Including the RPM
stuff as part of the Official Tarball (TM) would, IMHO, slant that agnostic
stance in a negative way. But maybe I'm too sensitive to that. I'm not
opposed to doing that if that is the consensus of the core group -- and that
would be a sneaky way to get the stuff into CVS :-). But if the core group
isn't thrilled with the idea (and my instinct says they're not likely to be),
I am opposed to the idea -- not to keep the stuff to myself, but to not
hinder the platform-neutral stance. IMHO, of course.</P>

<P>Of course, there are many projects that DO include all the files necessary
to build RPMs from their Official Tarball (TM).</P>
<H3><A name="16">16</A>) How are CVS branches managed?</H3>
<P>2001-05-07</P>
- <P>If you just do basic "cvs checkout", "cvs update", "cvs commit", then you'll always be dealing with the HEAD version of the files in CVS. That's what you want for development, but if you need to patch past stable releases then you have to be able to access and update the "branch" portions of our CVS repository. We normally fork off a branch for a stable release just before starting the development cycle for the next release.</P>
-
- <P>The first thing you have to know is the branch name for the branch you are interested in getting at. To do this, look at some long-lived file, say the top-level HISTORY file, with "cvs status -v" to see what the branch names are. (Thanks to Ian Lance Taylor for pointing out that this is the easiest way to do it.) Typical branch names are:</P>
+ <P>If you just do basic "cvs checkout", "cvs update", "cvs commit",
+ then you'll always be dealing with the HEAD version of the files in
+ CVS. That's what you want for development, but if you need to patch
+ past stable releases then you have to be able to access and update
+ the "branch" portions of our CVS repository. We normally fork off a
+ branch for a stable release just before starting the development
+ cycle for the next release.</P>
+
+ <P>The first thing you have to know is the branch name for the
+ branch you are interested in getting at. To do this, look at some
+ long-lived file, say the top-level HISTORY file, with "cvs status
+ -v" to see what the branch names are. (Thanks to Ian Lance Taylor
+ for pointing out that this is the easiest way to do it.) Typical
+ branch names are:</P>
<PRE>
REL7_1_STABLE
REL7_0_PATCHES
REL6_5_PATCHES
</PRE>
<P>OK, so how do you do work on a branch? By far the best way is to create a
separate checkout tree for the branch and do your work in that. Not only is
that the easiest way to deal with CVS, but you really need to have the whole
past tree available anyway to test your work. (And you *better* test your
work. Never forget that dot-releases tend to go out with very little beta
testing --- so whenever you commit an update to a stable branch, you'd better
be doubly sure that it's correct.)</P>

<P>Normally, to checkout the head branch, you just cd to the place you want
to contain the toplevel "pgsql" directory and say</P>
<PRE>
cvs ... checkout pgsql
</PRE>
<P>To get a past branch, you cd to wherever you want it and say</P>
<PRE>
cvs ... checkout -r BRANCHNAME pgsql
</PRE>
<P>and now you have a maintenance copy of 7.1.*.</P>
<P>When you've done a checkout in this way, the branch name is "sticky": CVS
automatically knows that this directory tree is for the branch, and whenever
you do "cvs update" or "cvs commit" in this tree, you'll fetch or store the
latest version in the branch, not the head version. Easy as can be.</P>
<P>So, if you have a patch that needs to apply to both the head and a recent
stable branch, you have to make the edits and do the commit twice, once in
your development tree and once in your stable branch tree. This is kind of a
pain, which is why we don't normally fork the tree right away after a major
release --- we wait for a dot-release or two, so that we won't have to
double-patch the first wave of fixes.</P>
- <H3><A name="17">17</A>) How go I get involved in PostgreSQL development?</H3>
+ <H3><A name="17">17</A>) How go I get involved in PostgreSQL
+ development?</H3>
<P>This was written by Lamar Owen:</P>
<P>2001-06-22</P>
<H3>What open source development process is used by the PostgreSQL
team?</H3>
<P>Read HACKERS for six months (or a full release cycle, whichever is
longer). Really. HACKERS _is_ the process. The process is not well documented
(AFAIK -- it may be somewhere that I am not aware of) -- and it changes
continually.</P>
<H3>What development environment (OS, system, compilers, etc) is required to
develop code?</H3>
- <P><A href="developers.postgresql.org">Developers Corner</A> on the website has links to this information. The distribution tarball itself includes all the extra tools and documents that go beyond a good Unix-like development environment. In general, a modern unix with a modern gcc, GNU make or equivalent, autoconf (of a particular version), and good working knowledge of those tools are required.</P>
+ <P><A href="developers.postgresql.org">Developers Corner</A> on the
+ website has links to this information. The distribution tarball
+ itself includes all the extra tools and documents that go beyond a
+ good Unix-like development environment. In general, a modern unix
+ with a modern gcc, GNU make or equivalent, autoconf (of a
+ particular version), and good working knowledge of those tools are
+ required.</P>
<H3>What area or two needs some support?</H3>
<P>The TODO list.</P>
<P>You've made the first step, by finding and subscribing to HACKERS. Once
you find an area to look at in the TODO, and have read the documentation on
the internals, etc., then you check out a current CVS, write what you are
going to write (keeping your CVS checkout up to date in the process), and
make up a patch (as a context diff only) and send it to the PATCHES list,
preferably.</P>

<P>Discussion on the patch typically happens here. If the patch adds a major
feature, it would be a good idea to talk about it first on the HACKERS list,
in order to increase the chances of it being accepted, as well as to avoid
duplication of effort. Note that experienced developers with a proven track
record usually get the big jobs -- for more than one reason. Also note that
PostgreSQL is highly portable -- nonportable code will likely be dismissed
out of hand.</P>

<P>Once your contributions get accepted, things move from there. Typically,
you would be added as a developer on the list on the website when one of the
other developers recommends it. Membership on the steering committee is by
invitation only, by the other steering committee members, from what I have
gathered watching from a distance.</P>

<P>I make these statements from having watched the process for over two
years.</P>

<P>To see a good example of how one goes about this, search the archives for
the name 'Tom Lane' and see what his first post consisted of, and where he
took things. In particular, note that this hasn't been _that_ long ago -- and
his bugfixing and general deep knowledge of this codebase is legendary. Take
a few days to read through his posts. And pay special attention to both the
sheer quantity as well as the painstaking quality of his work. Both are in
high demand.</P>
</BODY>
</HTML>