+The developer FAQ can be found on the PostgreSQL wiki:
- Developer's Frequently Asked Questions (FAQ) for PostgreSQL
-
- Last updated: Tue Feb 10 10:16:31 EST 2004
-
- Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
-
- The most recent version of this document can be viewed at
- http://www.PostgreSQL.org/docs/faqs/FAQ_DEV.html.
- _________________________________________________________________
-
- General Questions
-
- 1.1) How do I get involved in PostgreSQL development?
- 1.2) How do I add a feature or fix a bug?
- 1.3) How do I download/update the current source tree?
- 1.4) How do I test my changes?
- 1.5) What tools are available for developers?
- 1.6) What books are good for developers?
- 1.7) What is configure all about?
- 1.8) How do I add a new port?
- 1.9) Why don't you use threads/raw devices/async-I/O, <insert your
- favorite wizz-bang feature here>?
- 1.10) How are RPM's packaged?
- 1.11) How are CVS branches handled?
- 1.12) Where can I get a copy of the SQL standards?
-
- Technical Questions
-
- 2.1) How do I efficiently access information in tables from the
- backend code?
- 2.2) Why are table, column, type, function, view names sometimes
- referenced as Name or NameData, and sometimes as char *?
- 2.3) Why do we use Node and List to make data structures?
- 2.4) I just added a field to a structure. What else should I do?
- 2.5) Why do we use palloc() and pfree() to allocate memory?
- 2.6) What is ereport()?
- 2.7) What is CommandCounterIncrement()?
- _________________________________________________________________
-
- General Questions
-
- 1.1) How do I get involved in PostgreSQL development?
-
- This was written by Lamar Owen:
-
- 2001-06-22
- What open source development process is used by the PostgreSQL team?
-
- Read HACKERS for six months (or a full release cycle, whichever is
- longer). Really. HACKERS _is_ the process. The process is not well
- documented (AFAIK -- it may be somewhere that I am not aware of) --
- and it changes continually.
- What development environment (OS, system, compilers, etc) is required
- to develop code?
-
- Developers Corner on the website has links to this information. The
- distribution tarball itself includes all the extra tools and documents
- that go beyond a good Unix-like development environment. In general, a
- modern unix with a modern gcc, GNU make or equivalent, autoconf (of a
- particular version), and good working knowledge of those tools are
- required.
- What areas need support?
-
- The TODO list.
-
- You've made the first step, by finding and subscribing to HACKERS.
- Once you find an area to look at in the TODO, and have read the
- documentation on the internals, etc, then you check out a current
- CVS, write what you are going to write (keeping your CVS checkout up to
- date in the process), and make up a patch (as a context diff only) and
- send it, preferably to the PATCHES list.
-
- Discussion on the patch typically happens here. If the patch adds a
- major feature, it would be a good idea to talk about it first on the
- HACKERS list, in order to increase the chances of it being accepted,
- as well as to avoid duplication of effort. Note that experienced
- developers with a proven track record usually get the big jobs -- for
- more than one reason. Also note that PostgreSQL is highly portable --
- nonportable code will likely be dismissed out of hand.
-
- Once your contributions get accepted, things move from there.
- Typically, you would be added as a developer on the list on the
- website when one of the other developers recommends it. Membership on
- the steering committee is by invitation only, by the other steering
- committee members, from what I have gathered watching from a distance.
-
- I make these statements from having watched the process for over two
- years.
-
- To see a good example of how one goes about this, search the archives
- for the name 'Tom Lane' and see what his first post consisted of, and
- where he took things. In particular, note that this hasn't been _that_
- long ago -- and his bugfixing and general deep knowledge of this
- codebase is legendary. Take a few days to read after him. And pay
- special attention to both the sheer quantity as well as the
- painstaking quality of his work. Both are in high demand.
-
- 1.2) How do I add a feature or fix a bug?
-
- The source code is over 350,000 lines. Many fixes/features are
- isolated to one specific area of the code. Others require knowledge of
- much of the source. If you are confused about where to start, ask the
- hackers list, and they will be glad to assess the complexity and give
- pointers on where to start.
-
- Another thing to keep in mind is that many fixes and features can be
- added with surprisingly little code. I often start by adding code,
- then looking at other areas in the code where similar things are done,
- and by the time I am finished, the patch is quite small and compact.
-
- When adding code, keep in mind that it should use the existing
- facilities in the source, for performance reasons and for simplicity.
- Often a review of existing code doing similar things is helpful.
-
- The usual process for source additions is:
- * Review the TODO list.
- * Discuss the desirability of the fix/feature on the hackers list.
- * How should it behave in complex circumstances?
- * How should it be implemented?
- * Submit the patch to the patches list.
- * Answer email questions.
- * Wait for the patch to be applied.
-
- 1.3) How do I download/update the current source tree?
-
- There are several ways to obtain the source tree. Occasional
- developers can just get the most recent source tree snapshot from
- ftp.postgresql.org. For regular developers, you can use CVS. CVS
- allows you to download the source tree, then occasionally update your
- copy of the source tree with any new changes. Using CVS, you don't
- have to download the entire source each time, only the changed files.
- Anonymous CVS does not allow developers to update the remote source
- tree, though privileged developers can do this. There is a CVS FAQ on
- our web site that describes how to use remote CVS. You can also use
- CVSup, which has similar functionality and is available from
- ftp.postgresql.org.
-
- To update the source tree, there are two ways. You can generate a
- patch against your current source tree, perhaps using the make_diff
- tools mentioned above, and send them to the patches list. They will be
- reviewed, and applied in a timely manner. If the patch is major, and
- we are in beta testing, the developers may wait for the final release
- before applying your patches.
-
- For hard-core developers, Marc (scrappy@postgresql.org) will give you a
- Unix shell account on postgresql.org, so you can use CVS to update the
- main source tree, or you can ftp your files into your account, patch,
- and cvs install the changes directly into the source tree.
-
- 1.4) How do I test my changes?
-
- First, use psql to make sure it is working as you expect. Then run
- src/test/regress and get the output of src/test/regress/checkresults
- with and without your changes, to see that your patch does not change
- the regression test in unexpected ways. This practice has saved me
- many times. The regression tests test the code in ways I would never
- do, and have caught many bugs in my patches. By finding the problems
- now, you save yourself a lot of debugging later when things are
- broken, and you can't figure out when it happened.
-
- 1.5) What tools are available for developers?
-
- Aside from the User documentation mentioned in the regular FAQ, there
- are several development tools available. First, all the files in the
- /tools directory are designed for developers.
- RELEASE_CHANGES changes we have to make for each release
- SQL_keywords standard SQL'92 keywords
- backend description/flowchart of the backend directories
- ccsym find standard defines made by your compiler
- entab converts tabs to spaces, used by pgindent
- find_static finds functions that could be made static
- find_typedef finds typedefs in the source code
- find_badmacros finds macros that use braces incorrectly
- make_ctags make vi 'tags' file in each directory
- make_diff make *.orig and diffs of source
- make_etags make emacs 'etags' files
- make_keywords make comparison of our keywords and SQL'92
- make_mkid make mkid ID files
- mkldexport create AIX exports file
- pgindent indents C source files
- pgjindent indents Java source files
- pginclude scripts for adding/removing include files
- unused_oids in pgsql/src/include/catalog
-
- Let me note some of these. If you point your browser at
- file:/usr/local/src/pgsql/src/tools/backend/index.html, you
- will see a few paragraphs describing the data flow, the backend
- components in a flow chart, and a description of the shared memory
- area. You can click on any flowchart box to see a description. If you
- then click on the directory name, you will be taken to the source
- directory, to browse the actual source code behind it. We also have
- several README files in some source directories to describe the
- function of the module. The browser will display these when you enter
- the directory also. The tools/backend directory is also contained on
- our web page under the title How PostgreSQL Processes a Query.
-
- Second, you really should have an editor that can handle tags, so you
- can tag a function call to see the function definition, and then tag
- inside that function to see an even lower-level function, and then
- back out twice to return to the original function. Most editors
- support this via tags or etags files.
-
- Third, you need to get id-utils from:
- ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz
- ftp://tug.org/gnu/id-utils-3.2d.tar.gz
- ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz
-
- By running tools/make_mkid, an archive of source symbols can be
- created that can be rapidly queried like grep or edited. Others prefer
- glimpse.
-
- make_diff has tools to create patch diff files that can be applied to
- the distribution. This produces context diffs, which is our preferred
- format.
-
- Our standard format is to indent each code level with one tab, where
- each tab is four spaces. You will need to set your editor to display
- tabs as four spaces:
- vi in ~/.exrc:
- set tabstop=4
- set sw=4
- more:
- more -x4
- less:
- less -x4
- emacs:
- M-x set-variable tab-width
-
- or
-
- (c-add-style "pgsql"
- '("bsd"
- (indent-tabs-mode . t)
- (c-basic-offset . 4)
- (tab-width . 4)
- (c-offsets-alist .
- ((case-label . +)))
- )
- nil ) ; t = set this style, nil = don't
-
- (defun pgsql-c-mode ()
- (c-mode)
- (c-set-style "pgsql")
- )
-
- and add this to your autoload list (modify file path in macro):
-
- (setq auto-mode-alist
- (cons '("\\`/home/andrew/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
- auto-mode-alist))
- or
- /*
- * Local variables:
- * tab-width: 4
- * c-indent-level: 4
- * c-basic-offset: 4
- * End:
- */
-
- pgindent will format the code by specifying flags to your operating
- system's indent utility. This article describes the value of a
- consistent coding style.
-
- pgindent is run on all source files just before each beta test period.
- It auto-formats all source files to make them consistent. Comment
- blocks that need specific line breaks should be formatted as block
- comments, where the comment starts as /*------. These comments will
- not be reformatted in any way.
-
- pginclude contains scripts used to add needed #include's to include
- files, and remove unneeded #include's.
-
- When adding system types, you will need to assign oids to them. There
- is also a script called unused_oids in pgsql/src/include/catalog that
- shows the unused oids.
-
- 1.6) What books are good for developers?
-
- I have four good books, An Introduction to Database Systems, by C.J.
- Date, Addison-Wesley, A Guide to the SQL Standard, by C.J. Date, et
- al., Addison-Wesley, Fundamentals of Database Systems, by Elmasri and
- Navathe, and Transaction Processing, by Jim Gray, Morgan Kaufmann.
-
- There is also a database performance site, with a handbook on-line
- written by Jim Gray at http://www.benchmarkresources.com.
-
- 1.7) What is configure all about?
-
- The files configure and configure.in are part of the GNU autoconf
- package. Configure allows us to test for various capabilities of the
- OS, and to set variables that can then be tested in C programs and
- Makefiles. Autoconf is installed on the PostgreSQL main server. To add
- options to configure, edit configure.in, and then run autoconf to
- generate configure.
-
- When configure is run by the user, it tests various OS capabilities,
- stores those in config.status and config.cache, and modifies a list of
- *.in files. For example, if there exists a Makefile.in, configure
- generates a Makefile that contains substitutions for all @var@
- parameters found by configure.
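
 As an illustration of the substitution step (the variable names below are
 generic examples, not taken from the actual PostgreSQL Makefiles), a
 Makefile.in template and the Makefile that configure might generate from
 it look like this:

```
# Makefile.in -- the template checked into the source tree
CC = @CC@
CFLAGS = @CFLAGS@

# Makefile -- generated by configure; the substituted values depend on
# what configure detected on the build host (shown here illustratively)
CC = gcc
CFLAGS = -O2
```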
-
- When you need to edit files, make sure you don't waste time modifying
- files generated by configure. Edit the *.in file, and re-run configure
- to recreate the needed file. If you run make distclean from the
- top-level source directory, all files derived by configure are
- removed, so you see only the files contained in the source
- distribution.
-
- 1.8) How do I add a new port?
-
- There are a variety of places that need to be modified to add a new
- port. First, start in the src/template directory. Add an appropriate
- entry for your OS. Also, use src/config.guess to add your OS to
- src/template/.similar. You shouldn't match the OS version exactly. The
- configure test will look for an exact OS version number, and if not
- found, find a match without version number. Edit src/configure.in to
- add your new OS. (See configure item above.) You will need to run
- autoconf, or patch src/configure too.
-
- Then, check src/include/port and add your new OS file, with
- appropriate values. Hopefully, there is already locking code in
- src/include/storage/s_lock.h for your CPU. There is also a
- src/makefiles directory for port-specific Makefile handling. There is
- a backend/port directory if you need special files for your OS.
-
- 1.9) Why don't you use threads/raw devices/async-I/O, <insert your favorite
- wizz-bang feature here>?
-
- There is always a temptation to use the newest operating system
- features as soon as they arrive. We resist that temptation.
-
- First, we support 15+ operating systems, so any new feature has to be
- well established before we will consider it. Second, most new
- wizz-bang features don't provide dramatic improvements. Third, they
- usually have some downside, such as decreased reliability or
- additional code required. Therefore, we don't rush to use new features
- but rather wait for the feature to be established, then ask for
- testing to show that a measurable improvement is possible.
-
- As an example, threads are not currently used in the backend code
- because:
- * Historically, threads were unsupported and buggy.
- * An error in one backend can corrupt other backends.
- * Speed improvements using threads are small compared to the
- remaining backend startup time.
- * The backend code would be more complex.
-
- So, we are not ignorant of new features. It is just that we are
- cautious about their adoption. The TODO list often contains links to
- discussions showing our reasoning in these areas.
-
- 1.10) How are RPM's packaged?
-
- This was written by Lamar Owen:
-
- 2001-05-03
-
- As to how the RPMs are built -- to answer that question sanely
- requires me to know how much experience you have with the whole RPM
- paradigm. 'How is the RPM built?' is a multifaceted question. The
- obvious simple answer is that I maintain:
- 1. A set of patches to make certain portions of the source tree
- 'behave' in the different environment of the RPMset;
- 2. The initscript;
- 3. Any other ancillary scripts and files;
- 4. A README.rpm-dist document that tries to adequately document both
- the differences between the RPM build and the WHY of the
- differences, as well as useful RPM environment operations (like,
- using syslog, upgrading, getting postmaster to start at OS boot,
- etc);
- 5. The spec file that throws it all together. This is not a trivial
- undertaking in a package of this size.
-
- I then download and build on as many different canonical distributions
- as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1
- on my personal hardware. Occasionally I receive opportunity from
- certain commercial enterprises such as Great Bridge and PostgreSQL,
- Inc. to build on other distributions.
-
- I test the build by installing the resulting packages and running the
- regression tests. Once the build passes these tests, I upload to the
- postgresql.org ftp server and make a release announcement. I am also
- responsible for maintaining the RPM download area on the ftp site.
-
- You'll notice I said 'canonical' distributions above. That simply
- means that the machine is as stock 'out of the box' as practical --
- that is, everything (except a select few programs) on these boxen is
- installed by RPM; only official Red Hat released RPMs are used (except
- in unusual circumstances involving software that will not alter the
- build -- for example, installing a newer non-RedHat version of the Dia
- diagramming package is OK -- installing Python 2.1 on the box that has
- Python 1.5.2 installed is not, as that alters the PostgreSQL build).
- The RPM as uploaded is built to as close to out-of-the-box pristine as
- is possible. Only the standard released 'official to that release'
- compiler is used -- and only the standard official kernel is used as
- well.
-
- For a time I built on Mandrake for RedHat consumption -- no more.
- Nonstandard RPM building systems are worse than useless. Which is not
- to say that Mandrake is useless! By no means is Mandrake useless --
- unless you are building Red Hat RPMs -- and Red Hat is useless if
- you're trying to build Mandrake or SuSE RPMs, for that matter. But I
- would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro
- 0.1.2' to build for public consumption! :-)
-
- I _do_ attempt to make the _source_ RPM compatible with as many
- distributions as possible -- however, since I have limited resources
- (as a volunteer RPM maintainer) I am limited as to the amount of
- testing said build will get on other distributions, architectures, or
- systems.
-
- And, while I understand people's desire to immediately upgrade to the
- newest version, realize that I do this as a side interest -- I have a
- regular, full-time job as a broadcast
- engineer/webmaster/sysadmin/Technical Director which occasionally
- prevents me from making timely RPM releases. This happened during the
- early part of the 7.1 beta cycle -- but I believe I was pretty much on
- the ball for the Release Candidates and the final release.
-
- I am working towards a more open RPM distribution -- I would dearly
- love to more fully document the process and put everything into CVS --
- once I figure out how I want to represent things such as the spec file
- in a CVS form. It makes no sense to maintain a changelog, for
- instance, in the spec file in CVS when CVS does a better job of
- changelogs -- I will need to write a tool to generate a real spec file
- from a CVS spec-source file that would add version numbers, changelog
- entries, etc to the result before building the RPM. IOW, I need to
- rethink the process -- and then go through the motions of putting my
- long RPM history into CVS one version at a time so that version
- history information isn't lost.
-
- As to why all these files aren't part of the source tree, well, unless
- there was a large cry for it to happen, I don't believe it should.
- PostgreSQL is very platform-agnostic -- and I like that. Including the
- RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that
- agnostic stance in a negative way. But maybe I'm too sensitive to
- that. I'm not opposed to doing that if that is the consensus of the
- core group -- and that would be a sneaky way to get the stuff into CVS
- :-). But if the core group isn't thrilled with the idea (and my
- instinct says they're not likely to be), I am opposed to the idea --
- not to keep the stuff to myself, but to not hinder the
- platform-neutral stance. IMHO, of course.
-
- Of course, there are many projects that DO include all the files
- necessary to build RPMs from their Official Tarball (TM).
-
- 1.11) How are CVS branches handled?
-
- This was written by Tom Lane:
-
- 2001-05-07
-
- If you just do basic "cvs checkout", "cvs update", "cvs commit", then
- you'll always be dealing with the HEAD version of the files in CVS.
- That's what you want for development, but if you need to patch past
- stable releases then you have to be able to access and update the
- "branch" portions of our CVS repository. We normally fork off a branch
- for a stable release just before starting the development cycle for
- the next release.
-
- The first thing you have to know is the branch name for the branch you
- are interested in getting at. To do this, look at some long-lived
- file, say the top-level HISTORY file, with "cvs status -v" to see what
- the branch names are. (Thanks to Ian Lance Taylor for pointing out
- that this is the easiest way to do it.) Typical branch names are:
- REL7_1_STABLE
- REL7_0_PATCHES
- REL6_5_PATCHES
-
- OK, so how do you do work on a branch? By far the best way is to
- create a separate checkout tree for the branch and do your work in
- that. Not only is that the easiest way to deal with CVS, but you
- really need to have the whole past tree available anyway to test your
- work. (And you *better* test your work. Never forget that dot-releases
- tend to go out with very little beta testing --- so whenever you
- commit an update to a stable branch, you'd better be doubly sure that
- it's correct.)
-
- Normally, to checkout the head branch, you just cd to the place you
- want to contain the toplevel "pgsql" directory and say
- cvs ... checkout pgsql
-
- To get a past branch, you cd to wherever you want it and say
- cvs ... checkout -r BRANCHNAME pgsql
-
- For example, just a couple days ago I did
- mkdir ~postgres/REL7_1
- cd ~postgres/REL7_1
- cvs ... checkout -r REL7_1_STABLE pgsql
-
- and now I have a maintenance copy of 7.1.*.
-
- When you've done a checkout in this way, the branch name is "sticky":
- CVS automatically knows that this directory tree is for the branch,
- and whenever you do "cvs update" or "cvs commit" in this tree, you'll
- fetch or store the latest version in the branch, not the head version.
- Easy as can be.
-
- So, if you have a patch that needs to apply to both the head and a
- recent stable branch, you have to make the edits and do the commit
- twice, once in your development tree and once in your stable branch
- tree. This is kind of a pain, which is why we don't normally fork the
- tree right away after a major release --- we wait for a dot-release or
- two, so that we won't have to double-patch the first wave of fixes.
-
- 1.12) Where can I get a copy of the SQL standards?
-
- There are two pertinent standards, SQL92 and SQL99. These standards
- are endorsed by ANSI and ISO. A draft of the SQL92 standard is
- available at http://www.contrib.andrew.cmu.edu/~shadow/. The SQL99
- standard must be purchased from ANSI at
- http://webstore.ansi.org/ansidocstore/default.asp. The main standards
- documents are ANSI X3.135-1992 for SQL92 and ANSI/ISO/IEC 9075-2-1999
- for SQL99. The SQL 200X standards are at
- ftp://sqlstandards.org/SC32/WG3/Progression_Documents/FCD
-
- A summary of these standards is at
- http://dbs.uni-leipzig.de/en/lokal/standards.pdf and
- http://db.konkuk.ac.kr/present/SQL3.pdf.
-
- Technical Questions
-
- 2.1) How do I efficiently access information in tables from the backend code?
-
- You first need to find the tuples (rows) you are interested in. There
- are two ways. First, SearchSysCache() and related functions allow you
- to query the system catalogs. This is the preferred way to access
- system tables, because the first call to the cache loads the needed
- rows, and future requests can return the results without accessing the
- base table. The caches use system table indexes to look up tuples. A
- list of available caches is located in
- src/backend/utils/cache/syscache.c.
- src/backend/utils/cache/lsyscache.c contains many column-specific
- cache lookup functions.
-
- The rows returned are cache-owned versions of the heap rows.
- Therefore, you must not modify or delete the tuple returned by
- SearchSysCache(). What you should do is release it with
- ReleaseSysCache() when you are done using it; this informs the cache
- that it can discard that tuple if necessary. If you neglect to call
- ReleaseSysCache(), then the cache entry will remain locked in the
- cache until end of transaction, which is tolerable but not very
- desirable.
-
- If you can't use the system cache, you will need to retrieve the data
- directly from the heap table, using the buffer cache that is shared by
- all backends. The backend automatically takes care of loading the rows
- into the buffer cache.
-
- Open the table with heap_open(). You can then start a table scan with
- heap_beginscan(), then use heap_getnext() and continue as long as
- HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
- assigned to the scan. No indexes are used, so all rows are going to be
- compared to the keys, and only the valid rows returned.
-
- You can also use heap_fetch() to fetch rows by block number/offset.
- While scans automatically lock/unlock rows from the buffer cache, with
- heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it
- when completed.
-
- Once you have the row, you can get data that is common to all tuples,
- like t_self and t_oid, by merely accessing the HeapTuple structure
- entries. If you need a table-specific column, you should take the
- HeapTuple pointer, and use the GETSTRUCT() macro to access the
- table-specific start of the tuple. You then cast the pointer as a
- Form_pg_proc pointer if you are accessing the pg_proc table, or
- Form_pg_type if you are accessing pg_type. You can then access the
- columns by using a structure pointer:
-((Form_pg_class) GETSTRUCT(tuple))->relnatts
-
- You must not directly change live tuples in this way. The best way is
- to use heap_modifytuple() and pass it your original tuple, and the
- values you want changed. It returns a palloc'ed tuple, which you pass
- to heap_replace(). You can delete tuples by passing the tuple's t_self
- to heap_destroy(). You use t_self for heap_update() too. Remember,
- tuples can be either system cache copies, which may go away after you
- call ReleaseSysCache(), or read directly from disk buffers, which go
- away when you call heap_getnext(), heap_endscan(), or, in the
- heap_fetch() case, ReleaseBuffer(). Or it may be a palloc'ed tuple
- that you must pfree() when finished.
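
 The GETSTRUCT() idea above -- stepping past a fixed tuple header to reach
 the table-specific row data -- can be sketched in a self-contained way.
 The MiniTupleHeader and MiniFormPgClass types here are invented for
 illustration; they are not the backend's actual structures, which carry
 many more fields.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical miniature of a heap tuple: a fixed header followed
 * directly by the table-specific data. */
typedef struct MiniTupleHeader
{
    unsigned t_len;   /* total tuple length */
    unsigned t_hoff;  /* offset to the row data, like t_hoff in PG */
} MiniTupleHeader;

typedef struct MiniFormPgClass  /* toy stand-in for Form_pg_class */
{
    int relnatts;               /* number of attributes */
} MiniFormPgClass;

/* Analogue of GETSTRUCT(): skip the header to reach the row data. */
void *mini_getstruct(MiniTupleHeader *tup)
{
    return (char *) tup + tup->t_hoff;
}

/* Build a tuple whose payload is a MiniFormPgClass. */
MiniTupleHeader *make_class_tuple(int relnatts)
{
    size_t len = sizeof(MiniTupleHeader) + sizeof(MiniFormPgClass);
    MiniTupleHeader *tup = malloc(len);

    tup->t_len = (unsigned) len;
    tup->t_hoff = sizeof(MiniTupleHeader);
    ((MiniFormPgClass *) mini_getstruct(tup))->relnatts = relnatts;
    return tup;
}
```

 Casting the result of mini_getstruct() to the row type mirrors the
 ((Form_pg_class) GETSTRUCT(tuple))->relnatts pattern shown above.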
-
- 2.2) Why are table, column, type, function, view names sometimes referenced
- as Name or NameData, and sometimes as char *?
-
- Table, column, type, function, and view names are stored in system
- tables in columns of type Name. Name is a fixed-length,
- null-terminated type of NAMEDATALEN bytes. (The default value for
- NAMEDATALEN is 64 bytes.)
-typedef struct nameData
- {
- char data[NAMEDATALEN];
- } NameData;
- typedef NameData *Name;
-
- Table, column, type, function, and view names that come into the
- backend via user queries are stored as variable-length,
- null-terminated character strings.
-
- Many functions are called with both types of names, e.g. heap_open().
- Because the Name type is null-terminated, it is safe to pass it to a
- function expecting a char *. Because there are many cases where
- on-disk names (Name) are compared to user-supplied names (char *), there
- are many cases where Name and char * are used interchangeably.
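
 A minimal sketch of the fixed-length type and of copying a user-supplied
 string into it. The name_from_cstring() helper is invented for
 illustration (the backend has its own copy routine); the struct itself
 follows the definition quoted above.

```c
#include <string.h>

#define NAMEDATALEN 64  /* the default size, as noted above */

typedef struct nameData
{
    char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;

/* Copy a variable-length char * into a fixed-length Name, truncating
 * if necessary and always null-terminating.  Because the result is
 * null-terminated, it can safely be passed to functions expecting
 * a plain char *. */
void name_from_cstring(Name name, const char *str)
{
    strncpy(name->data, str, NAMEDATALEN - 1);
    name->data[NAMEDATALEN - 1] = '\0';
}
```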
-
- 2.3) Why do we use Node and List to make data structures?
-
- We do this because this allows a consistent way to pass data inside
- the backend in a flexible way. Every node has a NodeTag which
- specifies what type of data is inside the Node. Lists are groups of
- Nodes chained together as a forward-linked list.
-
- Here are some of the List manipulation commands:
-
- lfirst(i)
- return the data at list element i.
-
- lnext(i)
- return the next list element after i.
-
- foreach(i, list)
- loop through list, assigning each list element to i. It is
- important to note that i is a List *, not the data in the List
- element. You need to use lfirst(i) to get at the data. Here is
- a typical code snippet that loops through a List containing Var
- *'s and processes each one:
-
-List *i, *list;
-
- foreach(i, list)
- {
- Var *var = lfirst(i);
-
- /* process var here */
- }
-
- lcons(node, list)
- add node to the front of list, or create a new list with node
- if list is NIL.
-
- lappend(list, node)
- add node to the end of list. This is more expensive than lcons.
-
- nconc(list1, list2)
- Concat list2 on to the end of list1.
-
- length(list)
- return the length of the list.
-
- nth(i, list)
- return the i'th element in list.
-
- lconsi, ...
- There are integer versions of these: lconsi, lappendi, etc.
- Also versions for OID lists: lconso, lappendo, etc.
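
 The commands above can be mimicked with a small cons-list. This is a toy
 in the spirit of the backend's List, not the real implementation (real
 list cells carry Node pointers and the backend versions live in
 src/backend/nodes); it is only meant to show how lcons, lfirst, lnext,
 and foreach fit together.

```c
#include <stdlib.h>

/* A miniature forward-linked cons list storing void * payloads. */
typedef struct List
{
    void *elem;
    struct List *next;
} List;

#define NIL ((List *) NULL)
#define lfirst(l)  ((l)->elem)
#define lnext(l)   ((l)->next)
/* Loop through list, assigning each cell (not its data) to i. */
#define foreach(i, list)  for ((i) = (list); (i) != NIL; (i) = lnext(i))

/* Add datum to the front of list, or start a new list if list is NIL. */
List *lcons(void *datum, List *list)
{
    List *cell = malloc(sizeof(List));

    cell->elem = datum;
    cell->next = list;
    return cell;
}

int length(List *list)
{
    int n = 0;
    List *i;

    foreach(i, list)
        n++;
    return n;
}
```

 As in the foreach() snippet above, the loop variable is a List *, so
 lfirst() is needed to reach the data in each cell.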
-
- You can print nodes easily inside gdb. First, to disable output
- truncation when you use the gdb print command:
-(gdb) set print elements 0
-
- Instead of printing values in gdb format, you can use the next two
- commands to print out List, Node, and structure contents in a verbose
- format that is easier to understand. Lists are unrolled into nodes,
- and nodes are printed in detail. The first prints in a short format,
- and the second in a long format:
-(gdb) call print(any_pointer)
- (gdb) call pprint(any_pointer)
-
- The output appears in the postmaster log file, or on your screen if
- you are running a backend directly without a postmaster.
-
- 2.4) I just added a field to a structure. What else should I do?
-
- The structures passed around among the parser, rewriter, optimizer, and
- executor require quite a bit of support. Most structures have support
- routines in src/backend/nodes used to create, copy, read, and output
- those structures. Make sure you add support for your new field to
- these files. Find any other places the structure may need code for
- your new field. mkid is helpful with this (see above).
-
- 2.5) Why do we use palloc() and pfree() to allocate memory?
-
- palloc() and pfree() are used in place of malloc() and free() because
- we find it easier to automatically free all memory allocated when a
- query completes. This assures us that all memory that was allocated
- gets freed even if we have lost track of where we allocated it. There
- are special non-query contexts that memory can be allocated in. These
- affect when the allocated memory is freed by the backend.
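
 The key property -- that a whole context can be freed at once, even for
 chunks the caller has lost track of -- can be sketched with a toy
 allocator. The ToyContext names are invented; the backend's real memory
 contexts (in src/backend/utils/mmgr) are far more sophisticated.

```c
#include <stdlib.h>

/* Every chunk is prefixed with a link so the context remembers it. */
typedef struct Chunk
{
    struct Chunk *next;
} Chunk;

typedef struct ToyContext
{
    Chunk *chunks;
} ToyContext;

/* Like palloc(): allocate from a context rather than bare malloc(). */
void *toy_palloc(ToyContext *cxt, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);

    c->next = cxt->chunks;
    cxt->chunks = c;
    return c + 1;           /* caller's memory starts after the link */
}

/* Free everything ever allocated in the context in one sweep,
 * returning how many chunks were released. */
int toy_context_delete(ToyContext *cxt)
{
    int freed = 0;

    while (cxt->chunks)
    {
        Chunk *c = cxt->chunks;

        cxt->chunks = c->next;
        free(c);
        freed++;
    }
    return freed;
}
```

 Resetting a query's context at completion frees every allocation made
 during the query, which is exactly the leak-proofing described above.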
-
- 2.6) What is ereport()?
-
- ereport() is used to send messages to the front-end, and optionally
- terminate the current query being processed. The first parameter is an
- ereport level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL,
- or PANIC. NOTICE prints on the user's terminal and the postmaster
- logs. INFO prints only to the user's terminal and LOG prints only to
- the server logs. (These can be changed from postgresql.conf.) ERROR
- prints in both places, and terminates the current query, never
- returning from the call. FATAL terminates the backend process. The
- remaining parameters of ereport are a printf-style set of parameters
- to print.
-
- ereport(ERROR) frees most memory and open file descriptors so you
- don't need to clean these up before the call.
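
 A toy version of the call shape -- a severity level followed by
 printf-style arguments -- can illustrate the interface. This sketch only
 formats the message into a buffer; the real ereport() routes output to
 the client and/or server log and, at ERROR and above, does not return.
 The TOY_* level names and toy_ereport() are invented for illustration.

```c
#include <stdarg.h>
#include <stdio.h>

enum
{
    TOY_DEBUG, TOY_LOG, TOY_INFO, TOY_NOTICE, TOY_ERROR, TOY_FATAL
};

static const char *level_name[] =
{
    "DEBUG", "LOG", "INFO", "NOTICE", "ERROR", "FATAL"
};

/* Format "LEVEL:  message" into buf, printf-style. */
void toy_ereport(char *buf, size_t buflen, int level,
                 const char *fmt, ...)
{
    va_list ap;
    int n = snprintf(buf, buflen, "%s:  ", level_name[level]);

    va_start(ap, fmt);
    vsnprintf(buf + n, buflen - n, fmt, ap);
    va_end(ap);
}
```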
-
- 2.7) What is CommandCounterIncrement()?
-
- Normally, transactions can not see the rows they modify. This allows
- UPDATE foo SET x = x + 1 to work correctly.
-
- However, there are cases where a transaction needs to see rows
- affected in previous parts of the transaction. This is accomplished
- using a Command Counter. Incrementing the counter allows transactions
- to be broken into pieces so each piece can see rows modified by
- previous pieces. CommandCounterIncrement() increments the Command
- Counter, creating a new part of the transaction.
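A toy model of the visibility rule (invented code, not the real tuple-header logic): each row is stamped with the command id (cmin) that created it, and a command only sees rows created by earlier commands in the same transaction.

```c
#include <assert.h>

/* The current command id within the (toy) transaction. */
static int current_cid = 0;

typedef struct ToyRow
{
    int cmin;   /* command id that inserted this row */
    int value;
} ToyRow;

static ToyRow
toy_insert(int value)
{
    ToyRow row = { current_cid, value };

    return row;
}

/* A row is visible only if an *earlier* command created it. */
static int
toy_visible(const ToyRow *row)
{
    return row->cmin < current_cid;
}

static void
toy_command_counter_increment(void)
{
    current_cid++;  /* rows made by the previous command become visible */
}
```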
+ http://wiki.postgresql.org/wiki/Development_information
<HTML>
<HEAD>
- <META name="generator" content="HTML Tidy, see www.w3.org">
-
<TITLE>PostgreSQL Developers FAQ</TITLE>
</HEAD>
<BODY bgcolor="#FFFFFF" text="#000000" link="#FF0000" vlink="#A00000"
alink="#0000FF">
- <H1>Developer's Frequently Asked Questions (FAQ) for
- PostgreSQL</H1>
-
- <P>Last updated: Tue Feb 10 10:16:31 EST 2004</P>
-
- <P>Current maintainer: Bruce Momjian (<A href=
- "mailto:pgman@candle.pha.pa.us">pgman@candle.pha.pa.us</A>)<BR>
- </P>
-
- <P>The most recent version of this document can be viewed at <A href=
- "http://www.PostgreSQL.org/docs/faqs/FAQ_DEV.html">http://www.PostgreSQL.org/docs/faqs/FAQ_DEV.html</A>.</P>
-
- <HR>
- <BR>
-
-
- <CENTER>
- <H2>General Questions</H2>
- </CENTER>
- <A href="#1.1">1.1</A>) How do I get involved in PostgreSQL
- development?<BR>
- <A href="#1.2">1.2</A>) How do I add a feature or fix a bug?<BR>
- <A href="#1.3">1.3</A>) How do I download/update the current source
- tree?<BR>
- <A href="#1.4">1.4</A>) How do I test my changes?<BR>
- <A href="#1.5">1.5</A>) What tools are available for developers?<BR>
- <A href="#1.6">1.6</A>) What books are good for developers?<BR>
- <A href="#1.7">1.7</A>) What is configure all about?<BR>
- <A href="#1.8">1.8</A>) How do I add a new port?<BR>
- <A href="#1.9">1.9</A>) Why don't you use threads/raw
- devices/async-I/O, &lt;insert your favorite wizz-bang feature
- here&gt;?<BR>
- <A href="#1.10">1.10</A>) How are RPM's packaged?<BR>
- <A href="#1.11">1.11</A>) How are CVS branches handled?<BR>
- <A href="#1.12">1.12</A>) Where can I get a copy of the SQL
- standards?<BR>
-
- <CENTER>
- <H2>Technical Questions</H2>
- </CENTER>
- <A href="#2.1">2.1</A>) How do I efficiently access information in
- tables from the backend code?<BR>
- <A href="#2.2">2.2</A>) Why are table, column, type, function, view
- names sometimes referenced as <I>Name</I> or <I>NameData,</I> and
- sometimes as <I>char *?</I><BR>
- <A href="#2.3">2.3</A>) Why do we use <I>Node</I> and <I>List</I> to
- make data structures?<BR>
- <A href="#2.4">2.4</A>) I just added a field to a structure. What else
- should I do?<BR>
- <A href="#2.5">2.5</A>) Why do we use <I>palloc</I>() and
- <I>pfree</I>() to allocate memory?<BR>
- <A href="#2.6">2.6</A>) What is ereport()?<BR>
- <A href="#2.7">2.7</A>) What is CommandCounterIncrement()?<BR>
- <BR>
-
- <HR>
-
- <CENTER>
- <H2>General Questions</H2>
- </CENTER>
-
- <H3><A name="1.1">1.1</A>) How do I get involved in PostgreSQL
- development?</H3>
-
- <P>This was written by Lamar Owen:</P>
-
- <P>2001-06-22</P>
-
- <B>What open source development process is used by the PostgreSQL
- team?</B>
-
- <P>Read HACKERS for six months (or a full release cycle, whichever
- is longer). Really. HACKERS _is_ the process. The process is not
- well documented (AFAIK -- it may be somewhere that I am not aware
- of) -- and it changes continually.</P>
-
- <B>What development environment (OS, system, compilers, etc) is
- required to develop code?</B>
-
- <P><A href="http://developer.postgresql.org">Developers Corner</A> on the
- website has links to this information. The distribution tarball
- itself includes all the extra tools and documents that go beyond a
- good Unix-like development environment. In general, a modern Unix
- with a modern gcc, GNU make or equivalent, autoconf (of a
- particular version), and good working knowledge of those tools are
- required.</P>
-
- <B>What areas need support?</B>
-
- <P>The TODO list.</P>
-
- <P>You've made the first step, by finding and subscribing to
- HACKERS. Once you find an area to look at in the TODO, and have
- read the documentation on the internals, etc, then you check out a
- current CVS, write what you are going to write (keeping your CVS
- checkout up to date in the process), make up a patch (as a
- context diff only), and send it, preferably to the PATCHES list.</P>
-
- <P>Discussion on the patch typically happens here. If the patch
- adds a major feature, it would be a good idea to talk about it
- first on the HACKERS list, in order to increase the chances of it
- being accepted, as well as to avoid duplication of effort. Note that
- experienced developers with a proven track record usually get the
- big jobs -- for more than one reason. Also note that PostgreSQL is
- highly portable -- nonportable code will likely be dismissed out of
- hand.</P>
-
- <P>Once your contributions get accepted, things move from there.
- Typically, you would be added as a developer on the list on the
- website when one of the other developers recommends it. Membership
- on the steering committee is by invitation only, by the other
- steering committee members, from what I have gathered watching
- from a distance.</P>
-
- <P>I make these statements from having watched the process for over
- two years.</P>
-
- <P>To see a good example of how one goes about this, search the
- archives for the name 'Tom Lane' and see what his first post
- consisted of, and where he took things. In particular, note that
- this hasn't been _that_ long ago -- and his bugfixing and general
- deep knowledge with this codebase is legendary. Take a few days to
- read after him. And pay special attention to both the sheer
- quantity as well as the painstaking quality of his work. Both are
- in high demand.</P>
-
- <H3><A name="1.2">1.2</A>) How do I add a feature or fix a bug?</H3>
-
- <P>The source code is over 350,000 lines. Many fixes/features
- are isolated to one specific area of the code. Others require
- knowledge of much of the source. If you are confused about where to
- start, ask the hackers list, and they will be glad to assess the
- complexity and give pointers on where to start.</P>
-
- <P>Another thing to keep in mind is that many fixes and features
- can be added with surprisingly little code. I often start by adding
- code, then looking at other areas in the code where similar things
- are done, and by the time I am finished, the patch is quite small
- and compact.</P>
-
- <P>When adding code, keep in mind that it should use the existing
- facilities in the source, for performance reasons and for
- simplicity. Often a review of existing code doing similar things is
- helpful.</P>
-
- <P>The usual process for source additions is:
- <UL>
- <LI>Review the TODO list.</LI>
- <LI>Discuss on the HACKERS list the desirability of the fix/feature.</LI>
- <LI>How should it behave in complex circumstances?</LI>
- <LI>How should it be implemented?</LI>
- <LI>Submit the patch to the patches list.</LI>
- <LI>Answer email questions.</LI>
- <LI>Wait for the patch to be applied.</LI>
- </UL></P>
- <H3><A name="1.3">1.3</A>) How do I download/update the current source
- tree?</H3>
-
- <P>There are several ways to obtain the source tree. Occasional
- developers can just get the most recent source tree snapshot from
- ftp.postgresql.org. For regular developers, you can use CVS. CVS
- allows you to download the source tree, then occasionally update
- your copy of the source tree with any new changes. Using CVS, you
- don't have to download the entire source each time, only the
- changed files. Anonymous CVS does not allow developers to update
- the remote source tree, though privileged developers can do this.
- There is a CVS FAQ on our web site that describes how to use remote
- CVS. You can also use CVSup, which has similar functionality, and
- is available from ftp.postgresql.org.</P>
-
- <P>There are two ways to get your changes into the source tree.
- You can generate patches against your current source tree, perhaps
- using the make_diff tools mentioned above, and send them to the patches list.
- They will be reviewed, and applied in a timely manner. If the patch
- is major, and we are in beta testing, the developers may wait for
- the final release before applying your patches.</P>
-
- <P>For hard-core developers, Marc (scrappy@postgresql.org) will give
- you a Unix shell account on postgresql.org, so you can use CVS to
- update the main source tree, or you can ftp your files into your
- account, apply the patch, and commit the changes via CVS directly
- into the source tree.</P>
-
- <H3><A name="1.4">1.4</A>) How do I test my changes?</H3>
-
- <P>First, use <I>psql</I> to make sure it is working as you expect.
- Then run <I>src/test/regress</I> and get the output of
- <I>src/test/regress/checkresults</I> with and without your changes,
- to see that your patch does not change the regression tests in
- unexpected ways. This practice has saved me many times. The
- regression tests exercise the code in ways I never would, and have
- caught many bugs in my patches. By finding the problems now, you
- save yourself a lot of debugging later when things are broken, and
- you can't figure out when it happened.</P>
-
- <H3><A name="1.5">1.5</A>) What tools are available for
- developers?</H3>
-
- <P>Aside from the User documentation mentioned in the regular FAQ,
- there are several development tools available. First, all the files
- in the <I>/tools</I> directory are designed for developers.</P>
-<PRE>
- RELEASE_CHANGES changes we have to make for each release
- SQL_keywords standard SQL'92 keywords
- backend description/flowchart of the backend directories
- ccsym find standard defines made by your compiler
- entab converts tabs to spaces, used by pgindent
- find_static finds functions that could be made static
- find_typedef finds typedefs in the source code
- find_badmacros finds macros that use braces incorrectly
- make_ctags make vi 'tags' file in each directory
- make_diff make *.orig and diffs of source
- make_etags make emacs 'etags' files
- make_keywords make comparison of our keywords and SQL'92
- make_mkid make mkid ID files
- mkldexport create AIX exports file
- pgindent indents C source files
- pgjindent indents Java source files
- pginclude scripts for adding/removing include files
- unused_oids in pgsql/src/include/catalog
-</PRE>
- Let me note some of these. If you point your browser at
- <I>file:/usr/local/src/pgsql/src/tools/backend/index.html,</I>
- you will see a few paragraphs describing the data flow,
- the backend components in a flow chart, and a description of the
- shared memory area. You can click on any flowchart box to see a
- description. If you then click on the directory name, you will be
- taken to the source directory, to browse the actual source code
- behind it. We also have several README files in some source
- directories to describe the function of the module. The browser
- will display these when you enter the directory also. The
- <I>tools/backend</I> directory is also contained on our web page
- under the title <I>How PostgreSQL Processes a Query.</I>
-
- <P>Second, you really should have an editor that can handle tags,
- so you can tag a function call to see the function definition, and
- then tag inside that function to see an even lower-level function,
- and then back out twice to return to the original function. Most
- editors support this via <I>tags</I> or <I>etags</I> files.</P>
-
- <P>Third, you need to get <I>id-utils</I> from:</P>
-<PRE>
- <A href=
-"ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz">ftp://alpha.gnu.org/gnu/id-utils-3.2d.tar.gz</A>
- <A href=
-"ftp://tug.org/gnu/id-utils-3.2d.tar.gz">ftp://tug.org/gnu/id-utils-3.2d.tar.gz</A>
- <A href=
-"ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz">ftp://ftp.enst.fr/pub/gnu/gnits/id-utils-3.2d.tar.gz</A>
-</PRE>
- By running <I>tools/make_mkid</I>, an archive of source symbols can
- be created that can be rapidly queried like <I>grep</I> or edited.
- Others prefer <I>glimpse.</I>
-
- <P><I>make_diff</I> has tools to create patch diff files that can
- be applied to the distribution. This produces context diffs, which
- is our preferred format.</P>
-
- <P>Our standard format is to indent each code level with one tab,
- where each tab is four spaces. You will need to set your editor to
- display tabs as four spaces:<BR>
- </P>
-<PRE>
- vi in ~/.exrc:
- set tabstop=4
- set sw=4
- more:
- more -x4
- less:
- less -x4
- emacs:
- M-x set-variable tab-width
-
- or
-
- (c-add-style "pgsql"
- '("bsd"
- (indent-tabs-mode . t)
- (c-basic-offset . 4)
- (tab-width . 4)
- (c-offsets-alist .
- ((case-label . +)))
- )
- nil ) ; t = set this style, nil = don't
-
- (defun pgsql-c-mode ()
- (c-mode)
- (c-set-style "pgsql")
- )
-
- and add this to your autoload list (modify file path in macro):
-
- (setq auto-mode-alist
- (cons '("\\`/home/andrew/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
- auto-mode-alist))
- or
- /*
- * Local variables:
- * tab-width: 4
- * c-indent-level: 4
- * c-basic-offset: 4
- * End:
- */
-</PRE>
- <BR>
- <I>pgindent</I> formats code by passing flags to your
- operating system's <I>indent</I> utility. This
- <A HREF="http://ezine.daemonnews.org/200112/single_coding_style.html">
- article</A> describes the value of a consistent coding style.
-
- <P><I>pgindent</I> is run on all source files just before each beta
- test period. It auto-formats all source files to make them
- consistent. Comment blocks that need specific line breaks should be
- formatted as <I>block comments,</I> where the comment starts as
- <CODE>/*------</CODE>. These comments will not be reformatted in
- any way.</P>
-
- <P><I>pginclude</I> contains scripts used to add needed
- <CODE>#include</CODE>'s to include files, and to remove unneeded
- <CODE>#include</CODE>'s.</P>
-
- <P>When adding system types, you will need to assign oids to them.
- There is also a script called <I>unused_oids</I> in
- <I>pgsql/src/include/catalog</I> that shows the unused oids.</P>
-
- <H3><A name="1.6">1.6</A>) What books are good for developers?</H3>
-
- <P>I have four good books, <I>An Introduction to Database
- Systems,</I> by C.J. Date, Addison-Wesley, <I>A Guide to the SQL
- Standard,</I> by C.J. Date, et al., Addison-Wesley,
- <I>Fundamentals of Database Systems,</I> by Elmasri and Navathe,
- and <I>Transaction Processing,</I> by Jim Gray, Morgan
- Kaufmann.</P>
-
- <P>There is also a database performance site, with a handbook
- on-line written by Jim Gray at <A href=
- "http://www.benchmarkresources.com">http://www.benchmarkresources.com.</A></P>
-
- <H3><A name="1.7">1.7</A>) What is configure all about?</H3>
-
- <P>The files <I>configure</I> and <I>configure.in</I> are part of
- the GNU <I>autoconf</I> package. Configure allows us to test for
- various capabilities of the OS, and to set variables that can then
- be tested in C programs and Makefiles. Autoconf is installed on the
- PostgreSQL main server. To add options to configure, edit
- <I>configure.in,</I> and then run <I>autoconf</I> to generate
- <I>configure.</I></P>
-
- <P>When <I>configure</I> is run by the user, it tests various OS
- capabilities, stores those in <I>config.status</I> and
- <I>config.cache,</I> and modifies a list of <I>*.in</I> files. For
- example, if there exists a <I>Makefile.in,</I> configure generates
- a <I>Makefile</I> that contains substitutions for all @var@
- parameters found by configure.</P>
-
- <P>When you need to edit files, make sure you don't waste time
- modifying files generated by <I>configure.</I> Edit the <I>*.in</I>
- file, and re-run <I>configure</I> to recreate the needed file. If
- you run <I>make distclean</I> from the top-level source directory,
- all files derived by configure are removed, so you see only the
- files contained in the source distribution.</P>
-
- <H3><A name="1.8">1.8</A>) How do I add a new port?</H3>
-
- <P>There are a variety of places that need to be modified to add a
- new port. First, start in the <I>src/template</I> directory. Add an
- appropriate entry for your OS. Also, use <I>src/config.guess</I> to
- add your OS to <I>src/template/.similar.</I> You shouldn't match
- the OS version exactly. The <I>configure</I> test will look for an
- exact OS version number, and if not found, find a match without
- version number. Edit <I>src/configure.in</I> to add your new OS.
- (See configure item above.) You will need to run autoconf, or patch
- <I>src/configure</I> too.</P>
-
- <P>Then, check <I>src/include/port</I> and add your new OS file,
- with appropriate values. Hopefully, there is already locking code
- in <I>src/include/storage/s_lock.h</I> for your CPU. There is also
- a <I>src/makefiles</I> directory for port-specific Makefile
- handling. There is a <I>backend/port</I> directory if you need
- special files for your OS.</P>
-
- <H3><A name="1.9">1.9</A>) Why don't you use threads/raw
- devices/async-I/O, &lt;insert your favorite wizz-bang feature
- here&gt;?</H3>
-
- <P>There is always a temptation to use the newest operating system
- features as soon as they arrive. We resist that temptation.</P>
-
- <P>First, we support 15+ operating systems, so any new feature has
- to be well established before we will consider it. Second, most new
- <I>wizz-bang</I> features don't provide <I>dramatic</I>
- improvements. Third, they usually have some downside, such as
- decreased reliability or additional code required. Therefore, we
- don't rush to use new features but rather wait for the feature to be
- established, then ask for testing to show that a measurable
- improvement is possible.</P>
-
- <P>As an example, threads are not currently used in the backend code
- because:</P>
-
- <UL>
- <LI>Historically, threads were unsupported and buggy.</LI>
-
- <LI>An error in one backend can corrupt other backends.</LI>
-
- <LI>Speed improvements using threads are small compared to the
- remaining backend startup time.</LI>
-
- <LI>The backend code would be more complex.</LI>
- </UL>
-
- <P>So, we are not ignorant of new features. It is just that
- we are cautious about their adoption. The TODO list often
- contains links to discussions showing our reasoning in
- these areas.</P>
-
- <H3><A name="1.10">1.10</A>) How are RPM's packaged?</H3>
-
- <P>This was written by Lamar Owen:</P>
-
- <P>2001-05-03</P>
-
- <P>As to how the RPMs are built -- to answer that question sanely
- requires me to know how much experience you have with the whole RPM
- paradigm. 'How is the RPM built?' is a multifaceted question. The
- obvious simple answer is that I maintain:</P>
-
- <OL>
- <LI>A set of patches to make certain portions of the source tree
- 'behave' in the different environment of the RPMset;</LI>
-
- <LI>The initscript;</LI>
-
- <LI>Any other ancillary scripts and files;</LI>
-
- <LI>A README.rpm-dist document that tries to adequately document
- both the differences of the RPM build and the WHY of those
- differences, as well as useful RPM environment operations (like
- using syslog, upgrading, getting postmaster to start at OS boot,
- etc);</LI>
-
- <LI>The spec file that throws it all together. This is not a
- trivial undertaking in a package of this size.</LI>
- </OL>
-
- <P>I then download and build on as many different canonical
- distributions as I can -- currently I am able to build on Red Hat
- 6.2, 7.0, and 7.1 on my personal hardware. Occasionally I receive
- opportunity from certain commercial enterprises such as Great
- Bridge and PostgreSQL, Inc. to build on other distributions.</P>
-
- <P>I test the build by installing the resulting packages and
- running the regression tests. Once the build passes these tests, I
- upload to the postgresql.org ftp server and make a release
- announcement. I am also responsible for maintaining the RPM
- download area on the ftp site.</P>
-
- <P>You'll notice I said 'canonical' distributions above. That
- simply means that the machine is as stock 'out of the box' as
- practical -- that is, everything (except a select few programs) on
- these boxen are installed by RPM; only official Red Hat released
- RPMs are used (except in unusual circumstances involving software
- that will not alter the build -- for example, installing a newer
- non-RedHat version of the Dia diagramming package is OK --
- installing Python 2.1 on the box that has Python 1.5.2 installed is
- not, as that alters the PostgreSQL build). The RPM as uploaded is
- built to as close to out-of-the-box pristine as is possible. Only
- the standard released 'official to that release' compiler is used
- -- and only the standard official kernel is used as well.</P>
-
- <P>For a time I built on Mandrake for RedHat consumption -- no
- more. Nonstandard RPM building systems are worse than useless.
- Which is not to say that Mandrake is useless! By no means is
- Mandrake useless -- unless you are building Red Hat RPMs -- and Red
- Hat is useless if you're trying to build Mandrake or SuSE RPMs, for
- that matter. But I would be foolish to use 'Lamar Owen's Super
- Special RPM Blend Distro 0.1.2' to build for public consumption!
- :-)</P>
-
- <P>I _do_ attempt to make the _source_ RPM compatible with as many
- distributions as possible -- however, since I have limited
- resources (as a volunteer RPM maintainer) I am limited as to the
- amount of testing said build will get on other distributions,
- architectures, or systems.</P>
-
- <P>And, while I understand people's desire to immediately upgrade
- to the newest version, realize that I do this as a side interest --
- I have a regular, full-time job as a broadcast
- engineer/webmaster/sysadmin/Technical Director which occasionally
- prevents me from making timely RPM releases. This happened during
- the early part of the 7.1 beta cycle -- but I believe I was pretty
- much on the ball for the Release Candidates and the final
- release.</P>
-
- <P>I am working towards a more open RPM distribution -- I would
- dearly love to more fully document the process and put everything
- into CVS -- once I figure out how I want to represent things such
- as the spec file in a CVS form. It makes no sense to maintain a
- changelog, for instance, in the spec file in CVS when CVS does a
- better job of changelogs -- I will need to write a tool to generate
- a real spec file from a CVS spec-source file that would add version
- numbers, changelog entries, etc to the result before building the
- RPM. IOW, I need to rethink the process -- and then go through the
- motions of putting my long RPM history into CVS one version at a
- time so that version history information isn't lost.</P>
-
- <P>As to why all these files aren't part of the source tree, well,
- unless there was a large cry for it to happen, I don't believe it
- should. PostgreSQL is very platform-agnostic -- and I like that.
- Including the RPM stuff as part of the Official Tarball (TM) would,
- IMHO, slant that agnostic stance in a negative way. But maybe I'm
- too sensitive to that. I'm not opposed to doing that if that is the
- consensus of the core group -- and that would be a sneaky way to
- get the stuff into CVS :-). But if the core group isn't thrilled
- with the idea (and my instinct says they're not likely to be), I am
- opposed to the idea -- not to keep the stuff to myself, but to not
- hinder the platform-neutral stance. IMHO, of course.</P>
-
- <P>Of course, there are many projects that DO include all the files
- necessary to build RPMs from their Official Tarball (TM).</P>
-
- <H3><A name="1.11">1.11</A>) How are CVS branches managed?</H3>
-
- <P>This was written by Tom Lane:</P>
-
- <P>2001-05-07</P>
-
- <P>If you just do basic "cvs checkout", "cvs update", "cvs commit",
- then you'll always be dealing with the HEAD version of the files in
- CVS. That's what you want for development, but if you need to patch
- past stable releases then you have to be able to access and update
- the "branch" portions of our CVS repository. We normally fork off a
- branch for a stable release just before starting the development
- cycle for the next release.</P>
-
- <P>The first thing you have to know is the branch name for the
- branch you are interested in getting at. To do this, look at some
- long-lived file, say the top-level HISTORY file, with "cvs status
- -v" to see what the branch names are. (Thanks to Ian Lance Taylor
- for pointing out that this is the easiest way to do it.) Typical
- branch names are:</P>
-<PRE>
- REL7_1_STABLE
- REL7_0_PATCHES
- REL6_5_PATCHES
-</PRE>
-
- <P>OK, so how do you do work on a branch? By far the best way is to
- create a separate checkout tree for the branch and do your work in
- that. Not only is that the easiest way to deal with CVS, but you
- really need to have the whole past tree available anyway to test
- your work. (And you *better* test your work. Never forget that
- dot-releases tend to go out with very little beta testing --- so
- whenever you commit an update to a stable branch, you'd better be
- doubly sure that it's correct.)</P>
-
- <P>Normally, to checkout the head branch, you just cd to the place
- you want to contain the toplevel "pgsql" directory and say</P>
-<PRE>
- cvs ... checkout pgsql
-</PRE>
-
- <P>To get a past branch, you cd to wherever you want it and
- say</P>
-<PRE>
- cvs ... checkout -r BRANCHNAME pgsql
-</PRE>
-
- <P>For example, just a couple days ago I did</P>
-<PRE>
- mkdir ~postgres/REL7_1
- cd ~postgres/REL7_1
- cvs ... checkout -r REL7_1_STABLE pgsql
-</PRE>
-
- <P>and now I have a maintenance copy of 7.1.*.</P>
-
- <P>When you've done a checkout in this way, the branch name is
- "sticky": CVS automatically knows that this directory tree is for
- the branch, and whenever you do "cvs update" or "cvs commit" in
- this tree, you'll fetch or store the latest version in the branch,
- not the head version. Easy as can be.</P>
-
- <P>So, if you have a patch that needs to apply to both the head and
- a recent stable branch, you have to make the edits and do the
- commit twice, once in your development tree and once in your stable
- branch tree. This is kind of a pain, which is why we don't normally
- fork the tree right away after a major release --- we wait for a
- dot-release or two, so that we won't have to double-patch the first
- wave of fixes.</P>
-
- <H3><A name="1.12">1.12</A>) Where can I get a copy of the SQL
- standards?</H3>
-
- <P>There are two pertinent standards, SQL92 and SQL99. These
- standards are endorsed by ANSI and ISO. A draft of the SQL92
- standard is available at <a
- href="http://www.contrib.andrew.cmu.edu/~shadow/">
- http://www.contrib.andrew.cmu.edu/~shadow/</a>. The SQL99 standard
- must be purchased from ANSI at <a
- href="http://webstore.ansi.org/ansidocstore/default.asp">
- http://webstore.ansi.org/ansidocstore/default.asp</a>. The main
- standards documents are ANSI X3.135-1992 for SQL92 and ANSI/ISO/IEC
- 9075-2-1999 for SQL99. The SQL 200X standards are at <a href=
- "ftp://sqlstandards.org/SC32/WG3/Progression_Documents/FCD">
- ftp://sqlstandards.org/SC32/WG3/Progression_Documents/FCD</A></P>
-
- <P>A summary of these standards is at <a
- href="http://dbs.uni-leipzig.de/en/lokal/standards.pdf">
- http://dbs.uni-leipzig.de/en/lokal/standards.pdf</a> and <a
- href="http://db.konkuk.ac.kr/present/SQL3.pdf">
- http://db.konkuk.ac.kr/present/SQL3.pdf</a>.</P>
-
- <CENTER>
- <H2>Technical Questions</H2>
- </CENTER>
-
- <H3><A name="2.1">2.1</A>) How do I efficiently access information in
- tables from the backend code?</H3>
-
- <P>You first need to find the tuples (rows) you are interested in.
- There are two ways. First, <I>SearchSysCache()</I> and related
- functions allow you to query the system catalogs. This is the
- preferred way to access system tables, because the first call to
- the cache loads the needed rows, and future requests can return the
- results without accessing the base table. The caches use system
- table indexes to look up tuples. A list of available caches is
- located in <I>src/backend/utils/cache/syscache.c.</I>
- <I>src/backend/utils/cache/lsyscache.c</I> contains many
- column-specific cache lookup functions.</P>
-
- <P>The rows returned are cache-owned versions of the heap rows.
- Therefore, you must not modify or delete the tuple returned by
- <I>SearchSysCache()</I>. What you <I>should</I> do is release it
- with <I>ReleaseSysCache()</I> when you are done using it; this
- informs the cache that it can discard that tuple if necessary. If
- you neglect to call <I>ReleaseSysCache()</I>, then the cache entry
- will remain locked in the cache until end of transaction, which is
- tolerable but not very desirable.</P>
-
- <P>If you can't use the system cache, you will need to retrieve the
- data directly from the heap table, using the buffer cache that is
- shared by all backends. The backend automatically takes care of
- loading the rows into the buffer cache.</P>
-
- <P>Open the table with <I>heap_open().</I> You can then start a
- table scan with <I>heap_beginscan(),</I> then use
- <I>heap_getnext()</I> and continue as long as
- <I>HeapTupleIsValid()</I> returns true. Then do a
- <I>heap_endscan().</I> <I>Keys</I> can be assigned to the
- <I>scan.</I> No indexes are used, so all rows are going to be
- compared to the keys, and only the valid rows returned.</P>
-
- <P>You can also use <I>heap_fetch()</I> to fetch rows by block
- number/offset. While scans automatically lock/unlock rows from the
- buffer cache, with <I>heap_fetch(),</I> you must pass a
- <I>Buffer</I> pointer, and <I>ReleaseBuffer()</I> it when
- completed.</P>
-
- <P>Once you have the row, you can get data that is common to all
- tuples, like <I>t_self</I> and <I>t_oid,</I> by merely accessing
- the <I>HeapTuple</I> structure entries. If you need a
- table-specific column, you should take the HeapTuple pointer, and
- use the <I>GETSTRUCT()</I> macro to access the table-specific start
- of the tuple. You then cast the pointer as a <I>Form_pg_proc</I>
- pointer if you are accessing the pg_proc table, or
- <I>Form_pg_type</I> if you are accessing pg_type. You can then
- access the columns by using a structure pointer:</P>
-<PRE>
-<CODE>((Form_pg_class) GETSTRUCT(tuple))->relnatts
-</CODE>
-</PRE>
- You must not directly change <I>live</I> tuples in this way. The
- best way is to use <I>heap_modifytuple()</I> and pass it your
- original tuple, and the values you want changed. It returns a
- palloc'ed tuple, which you pass to <I>heap_replace().</I> You can
- delete tuples by passing the tuple's <I>t_self</I> to
- <I>heap_destroy().</I> You use <I>t_self</I> for
- <I>heap_update()</I> too. Remember, tuples can be either system
- cache copies, which may go away after you call
- <I>ReleaseSysCache()</I>, or read directly from disk buffers, which
- go away when you <I>heap_getnext()</I>, <I>heap_endscan()</I>, or
- <I>ReleaseBuffer()</I>, in the <I>heap_fetch()</I> case. Or it may
- be a palloc'ed tuple that you must <I>pfree()</I> when finished.
-
- <H3><A name="2.2">2.2</A>) Why are table, column, type, function, view
- names sometimes referenced as <I>Name</I> or <I>NameData,</I> and
- sometimes as <I>char *?</I></H3>
-
- <P>Table, column, type, function, and view names are stored in
- system tables in columns of type <I>Name.</I> Name is a
- fixed-length, null-terminated type of <I>NAMEDATALEN</I> bytes.
- (The default value for NAMEDATALEN is 64 bytes.)</P>
-<PRE>
-<CODE>typedef struct nameData
- {
- char data[NAMEDATALEN];
- } NameData;
- typedef NameData *Name;
-</CODE>
-</PRE>
- Table, column, type, function, and view names that come into the
- backend via user queries are stored as variable-length,
- null-terminated character strings.
-
- <P>Many functions are called with both types of names, e.g.,
- <I>heap_open().</I> Because the Name type is null-terminated, it is
- safe to pass it to a function expecting a char *. Because there are
- many cases where on-disk names (Name) are compared to user-supplied
- names (char *), there are many cases where Name and char * are used
- interchangeably.</P>
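A self-contained sketch of the Name layout described above (toy_namestrcpy is an invented stand-in for the backend helper that stores a user-supplied string into a fixed-length Name):

```c
#include <assert.h>
#include <string.h>

#define NAMEDATALEN 64          /* the default, as described above */

/* Same layout as the backend's NameData: a fixed-size,
 * null-terminated buffer. */
typedef struct nameData
{
    char data[NAMEDATALEN];
} NameData;
typedef NameData *Name;

/* Store a user-supplied (char *) name into a fixed-length Name,
 * truncating if necessary and always null-terminating. */
static void
toy_namestrcpy(Name dst, const char *src)
{
    strncpy(dst->data, src, NAMEDATALEN - 1);
    dst->data[NAMEDATALEN - 1] = '\0';
}
```

Because the buffer is always null-terminated, any char * string function can operate on a Name's data directly, which is what makes the two interchangeable in practice.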
-
- <H3><A name="2.3">2.3</A>) Why do we use <I>Node</I> and <I>List</I> to
- make data structures?</H3>
-
- <P>We do this because it provides a consistent and flexible way to
- pass data around inside the backend. Every node has a
- <I>NodeTag</I> which specifies what type of data is inside the
- Node. <I>Lists</I> are groups of <I>Nodes</I> chained together as a
- forward-linked list.</P>
-
- <P>Here are some of the <I>List</I> manipulation commands:</P>
-
- <BLOCKQUOTE>
- <DL>
- <DT>lfirst(i)</DT>
-
- <DD>return the data at list element <I>i.</I></DD>
-
- <DT>lnext(i)</DT>
-
- <DD>return the next list element after <I>i.</I></DD>
-
- <DT>foreach(i, list)</DT>
-
- <DD>
- loop through <I>list,</I> assigning each list element to
- <I>i.</I> It is important to note that <I>i</I> is a List *,
- not the data in the <I>List</I> element. You need to use
- <I>lfirst(i)</I> to get at the data. Here is a typical code
- snippet that loops through a List containing <I>Var *'s</I>
- and processes each one:
-<PRE>
-<CODE>List *i, *list;
-
- foreach(i, list)
- {
- Var *var = lfirst(i);
-
- /* process var here */
- }
-</CODE>
-</PRE>
- </DD>
-
- <DT>lcons(node, list)</DT>
-
- <DD>add <I>node</I> to the front of <I>list,</I> or create a
- new list with <I>node</I> if <I>list</I> is <I>NIL.</I></DD>
-
- <DT>lappend(list, node)</DT>
-
- <DD>add <I>node</I> to the end of <I>list.</I> This is more
- expensive than <I>lcons.</I></DD>
-
- <DT>nconc(list1, list2)</DT>
-
- <DD>Concat <I>list2</I> on to the end of <I>list1.</I></DD>
-
- <DT>length(list)</DT>
-
- <DD>return the length of the <I>list.</I></DD>
-
- <DT>nth(i, list)</DT>
-
- <DD>return the <I>i</I>'th element in <I>list.</I></DD>
-
- <DT>lconsi, ...</DT>
-
- <DD>There are integer versions of these: <I>lconsi, lappendi</I>,
- etc. Also versions for OID lists: <I>lconso, lappendo</I>, etc.</DD>
- </DL>
- </BLOCKQUOTE>
- You can print nodes easily inside <I>gdb.</I> First, to disable
- output truncation when you use the gdb <I>print</I> command:
-<PRE>
-<CODE>(gdb) set print elements 0
-</CODE>
-</PRE>
- Instead of printing values in gdb format, you can use the next two
- commands to print out List, Node, and structure contents in a
- verbose format that is easier to understand. Lists are unrolled
- into nodes, and nodes are printed in detail. The first prints in a
- short format, and the second in a long format:
-<PRE>
-<CODE>(gdb) call print(any_pointer)
- (gdb) call pprint(any_pointer)
-</CODE>
-</PRE>
- The output appears in the postmaster log file, or on your screen if
- you are running a backend directly without a postmaster.
-
- <H3><A name="2.4">2.4</A>) I just added a field to a structure. What
- else should I do?</H3>
-
- <P>The structures passed around among the parser, rewriter,
- optimizer, and executor require quite a bit of support. Most
- structures have support routines in <I>src/backend/nodes</I> used
- to create, copy, read, and output those structures. Make sure you
- add support for your new field to these files. Find any other
- places the structure may need code for your new field. <I>mkid</I>
- is helpful with this (see above).</P>
-
- <H3><A name="2.5">2.5</A>) Why do we use <I>palloc</I>() and
- <I>pfree</I>() to allocate memory?</H3>
-
- <P><I>palloc()</I> and <I>pfree()</I> are used in place of malloc()
- and free() because we find it easier to automatically free all
- memory allocated when a query completes. This assures us that all
- memory that was allocated gets freed even if we have lost track of
- where we allocated it. There are special non-query memory contexts
- in which memory can be allocated; these control when the backend
- frees the allocated memory.</P>
-
- <H3><A name="2.6">2.6</A>) What is ereport()?</H3>
-
- <P><I>ereport()</I> is used to send messages to the front-end, and
- optionally terminate the current query being processed. The first
- parameter is an ereport level of <I>DEBUG</I> (levels 1-5), <I>LOG,</I>
- <I>INFO,</I> <I>NOTICE,</I> <I>ERROR,</I> <I>FATAL,</I> or
- <I>PANIC.</I> <I>NOTICE</I> prints to the user's terminal and to the
- postmaster logs. <I>INFO</I> prints only to the user's terminal and
- <I>LOG</I> prints only to the server logs. (These can be changed
- from <I>postgresql.conf.</I>) <I>ERROR</I> prints in both places,
- and terminates the current query, never returning from the call.
- <I>FATAL</I> terminates the backend process. The remaining
- parameters of <I>ereport</I> are a <I>printf</I>-style set of
- parameters to print.</P>
-
- <P><I>ereport(ERROR)</I> frees most memory and open file descriptors so
- you don't need to clean these up before the call.</P>
-
- <H3><A name="2.7">2.7</A>) What is CommandCounterIncrement()?</H3>
-
- <P>Normally, transactions cannot see the rows they modify. This
- allows <CODE>UPDATE foo SET x = x + 1</CODE> to work correctly.</P>
-
- <P>However, there are cases where a transaction needs to see rows
- affected in previous parts of the transaction. This is accomplished
- using a Command Counter. Incrementing the counter allows
- transactions to be broken into pieces so each piece can see rows
- modified by previous pieces. <I>CommandCounterIncrement()</I>
- increments the Command Counter, creating a new part of the
- transaction.</P>
+ <P>The developer FAQ can be found on the PostgreSQL wiki:</P>
+<P><A href="http://wiki.postgresql.org/wiki/Development_information">http://wiki.postgresql.org/wiki/Development_information</A></P>
</BODY>
</HTML>