As far as I know, all of the tools used for making the files on this site, not to mention the server you are getting them from, are open-source software.
This site is served by the Apache web server, running on a machine that belongs to my ISP (Internet Service Provider), a2i communication. It is either a Sun running Solaris or an Intel box running FreeBSD. As long as it's some kind of Unix, Apache doesn't care, I don't care, and neither should you.
Apache is by far the most popular server on the Web. It has many add-on modules (sort of like plug-ins), none of which I'm using at the moment.
I test my pages on my home Linux system, using a copy of Apache running on port 8080 to avoid conflict with my local intranet server.
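Running a second, private copy of Apache is just a matter of pointing it at its own configuration file that names a different port. Something like this (the paths are made up, and depending on the Apache version the directive is Port or Listen):

    # in the test server's httpd.conf:
    #   Port 8080        (or "Listen 8080")
    # then start the test copy by hand:
    /usr/sbin/httpd -f $HOME/www-test/httpd.conf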
``Authoring'' is a barbarism. The correct word is ``writing.'' I write the files on this site. OK?
All of my writing is done on my home Linux system using the GNU Emacs text editor, written by Richard Stallman of the Free Software Foundation. I use html-helper-mode for editing HTML. Emacs uses a variant of Lisp as its extension language, which makes it easy to design modes for different languages.
On those rare occasions when I need graphics, I will create or edit them with one of the following:
xpaint, a simple paint program for the X window system.
xfig, a rather elaborate object-oriented drawing program. It can export Postscript, GIF, and other formats, but I usually use the associated transfig package to produce Postscript offline, then use the pbm programs to turn that into web-friendly compressed graphics with transparent backgrounds (see the sketch after this list).
gimp, a high-end image-manipulation program.
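Roughly, the xfig-to-web pipeline looks like this (the file names are made up, and the exact pbm tools and their options vary with the netpbm version, so take it as a sketch rather than a recipe):

    fig2dev -L ps drawing.fig drawing.ps    # transfig: turn the .fig into Postscript
    pstopnm -stdout drawing.ps |            # rasterize the Postscript
        pnmcrop |                           # trim away the white margins
        ppmtogif -transparent rgb:ff/ff/ff > drawing.gif   # compressed GIF, white made transparent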
Older photographs were scanned in; newer ones were mostly taken directly using a Ricoh digital camera which I borrowed from work. The Ricoh takes images direct to JPEG (and amazingly quickly). The best way to transfer them to a computer is by sneakernet: pull the PCMCIA card out of the camera and put it into the slot on the computer.
Previewing is done using the Netscape and Lynx browsers. Lynx is a very fast, text-only browser; a DOS version is very popular with blind people because it works well with a text-to-speech program. Netscape is now free and open-source.
A certain amount of text processing is necessary. Any one-time operation that can't be done as a one- or two-line shell script I will normally write in Perl. In particular, I used to use scripts of this sort to perform the same action (such as turning the background black) on groups of pages. I now use the PIA for this.
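A typical throwaway of that sort was hardly more than a one-liner; the details varied from script to script, but this is the flavor (the file pattern and the edit itself are just for illustration):

    # make every page's background black, editing the files in place
    # (keeping .bak copies in case the regexp bites something it shouldn't):
    perl -pi.bak -e 's/BGCOLOR="[^"]*"/BGCOLOR="#000000"/gi' */*.html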
The PIA (Platform for Information Applications) from RiSource.org is an amazingly useful tool that lets you define your own tags for HTML. I use it to enforce a uniform look and feel on the site by expanding things like <header> and <footer>. It's free, and because it's written in Java it's highly portable.
Disclaimer: I am the chief architect of the PIA, so you have to take my enthusiasm in the preceding paragraph with a sizeable grain of salt. Maybe three or four tablespoons.
I use the CVS (Concurrent Versions System) package for version control, in conjunction with the pcl-cvs package in Emacs. It's designed for use in large, distributed projects, but it works just fine for an individual.
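Day-to-day use boils down to a handful of commands; a typical session looks something like this (module and file names made up, and CVSROOT already set in the environment):

    cvs checkout www                    # grab a working copy of the site
    cd www/linux
    emacs index.html                    # edit as usual
    cvs commit -m "update the Linux page"   # check the change in
    cvs update                          # pick up changes made elsewhere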
It would be possible to use CVS to ``check out'' files on my service provider's system, but he hasn't installed it yet, and besides, it would require logging in on his machine (which I can do, but why bother?).
Instead, I use FTP to transfer files from my home system. I use make to recursively go through my working directory, find all of the files that have changed since the last time I did make put, and prepare a script in each directory to drive FTP.
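The per-directory scripts amount to a canned FTP session, roughly this shape (host and file names are made up; in practice you keep the login in ~/.netrc rather than in the script):

    ftp -v ftp.example.com <<'EOF'
    cd public_html/linux
    binary
    put index.html
    put bootparam.html
    quit
    EOF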
I developed these make scripts several years ago; they have gone through several iterations and are quite reliable. At some point I may switch to rsync instead of ftp, or replace the whole recursive make system with a single recursive rsync at the top level. rsync sends compressed differences, which makes it much more efficient than ftp, but it's not quite as portable. Also, using it recursively is much less selective, which can be either an advantage or a disadvantage depending on what you need.
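The single top-level rsync would replace all of that with one command; something like this (host, paths, and options are illustrative):

    # -a preserves the directory tree, -z compresses, -v shows what's happening;
    # add -n first for a dry run to see what it would transfer.
    rsync -avz --exclude CVS/ ~/www/ myname@www.example.com:public_html/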
I firmly believe in the original philosophy behind the Web and generic markup languages (which includes HTML): an author should concentrate on a document's content, and let the user (and their agent, the browser program) worry about what it looks like. All of the documents on this site are designed to look good on any browser whatsoever.
(In fact, they are designed to look almost equally good in source form. After all, this is the way I have to look at them. Go ahead -- try ``View Source'' on any of my pages and see what they look like when I'm editing them.)
I use tables sparingly; not all browsers render them very well. This is especially true of old versions of NCSA Mosaic and browsers derived from it. I try to make sure that when I do use a table, it will still look OK on a browser that doesn't support tables.
All images, without exception, have ALT, HEIGHT, and WIDTH attributes. ALT defines the text to use if you are not downloading images; defining the size ensures that the browser can lay out the page without having to wait for all of the images.
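For example, a typical image reference on these pages looks something like <IMG SRC="penguin.gif" ALT="[Tux the penguin]" WIDTH=120 HEIGHT=140> (the file name and dimensions are made up): a text-only browser shows the ALT string, and a graphical one can reserve the right amount of space before the image arrives.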
I don't use CGI programs, server-side includes, or other forms of active content unless I absolutely have to. (So far, on this site, I haven't had to.) The fact that my ISP counts CGI hits as 10 ordinary hits is more-or-less irrelevant -- generating pages on the fly takes longer and uses server-side resources that would be better employed getting your data to you as quickly as possible.
Instead, everything that needs processing is generated offline, by the PIA and make, when a page is changed rather than every time it's served. Because they run offline, these tools can spend as much time as it takes to do a good job, without having to worry about how long you're waiting for your page to download.