Get NAT Public IP

Neat use of wget to get the IP of the public-facing interface from behind a NAT:

$ wget -O - -q icanhazip.com

There are other alternatives to icanhazip.com, such as canihazip.com/s.
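
If wget isn't available, curl can do the same job (a minimal equivalent; the -s flag just silences curl's progress output):

$ curl -s icanhazip.com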


Subversion Branching Guide

A Quick Guide On Subversion Branching

This article summarises how to create and manage branches in a Subversion repository. For thorough coverage of the subject, see the Red Bean Subversion book for your release of Subversion.

Creating a New Branch

A new branch is created in a repository using the svn copy command. Because both paths are repository URLs, the copy is committed immediately and so needs a log message:

$ svn copy http://svn.example.com/path/to/trunk \
    http://svn.example.com/path/to/branches/branch-name \
    -m "Creating the branch-name branch."
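
You can confirm that the new branch exists by listing the branches directory (a quick check using the example URLs above):

$ svn list http://svn.example.com/path/to/branches/
branch-name/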

Working on a Branch

If you have a working copy checked out from trunk (usually the main line of development) and have local changes that should be committed to the newly created branch, first update your working copy so that it is at the same trunk revision as the new branch.

You may now use the svn switch command to change your local working copy to work with your new branch.

$ cd my/local/working/copy/
$ svn update
Updated to revision 123.
$ svn switch http://svn.example.com/path/to/branches/branch-name

Alternatively, you can perform a check-out directly from a branch:

$ svn checkout http://svn.example.com/path/to/branches/branch-name
$ cd branch-name

You can now commit any changes that are in your local working copy and they will be applied to the branch and not the trunk.

$ svn commit -m "Some changes for my new branch."

Merging Trunk Changes into the Branch

You should keep your branch up-to-date with changes that are committed onto the trunk. This minimises the potential for conflicts when the time comes to merge your branch back into the trunk. You use the svn merge command to pull trunk changes into your local working copy of the branch. If any conflicts are found, then you should resolve them before committing the effect of the merge and any conflict resolution changes:

$ svn update
Updated to revision 124.
$ svn merge http://svn.example.com/path/to/trunk
...
$ svn commit -m "Bringing branch up-to-date with changes to trunk."

Reintegrating Branch Changes into Trunk

Before merging your branch back into the trunk, ensure it is up-to-date with any changes that have been applied on the trunk – see the section above for details. You can then apply the changes on your branch to the trunk using the svn merge command and the reintegrate flag. First get a clean working copy of the latest version of the trunk:

$ svn checkout http://svn.example.com/path/to/trunk trunk
...
$ cd trunk
$ svn merge --reintegrate \
    http://svn.example.com/path/to/branches/branch-name
--- Merging differences between repository URLs into '.':
...

As with merging trunk changes into a branch, you should now resolve any conflicts that may have arisen merging the branch into your local working copy of the trunk. Once you’ve done that you can commit the effect of the merge into the trunk:

$ svn commit -m "Merge my branch into trunk."

You’re done!

After merging a branch into the trunk using the --reintegrate flag, the branch can no longer be used (at least with Subversion version 1.5). You could tidy up your repository's branches folder by deleting the old branch, safe in the knowledge that it is still retained in Subversion's database at a specific revision number. All that is required to bring it back is to find that revision number (say, using the svn log command) and use the svn copy command to copy that revision to a repository branch location.
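
For example, here is a sketch of removing the dead branch and later resurrecting it (revision 123 is hypothetical; find the real revision with svn log):

$ svn delete -m "Removing reintegrated branch." \
    http://svn.example.com/path/to/branches/branch-name
$ svn log -v http://svn.example.com/path/to/branches
...
$ svn copy -m "Resurrecting branch." \
    http://svn.example.com/path/to/branches/branch-name@123 \
    http://svn.example.com/path/to/branches/branch-name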

WiFi Configuration On Ubuntu Workstation/Server

802.11g Wireless Networking

A recent change of office required some wireless networking, involving installation of TP-Link TL-WN321G USB dongles on an Ubuntu Linux 9.10 workstation, 8.04 server and a couple of Windows XP workstations. This article summarises getting the device to work on an Ubuntu Linux system.

I broadly followed the (out-of-date) Ubuntu Community Documentation Wifi How To, with a few tips from elsewhere, and a recollection that it’s sometimes better to be without Gnome’s Network Manager.

Kernel Driver Support

After plugging in the dongle, `lsusb` gave the following (snipped)
information:

$ lsusb
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 148f:2573 Ralink Technology, Corp. RT2501USB Wireless Adapter
...

The second line of output gives the vendor (148f) and product (2573)
identifiers for the device. Checking a Linux wireless adapter chipset directory showed that supporting kernel drivers exist for this device’s chipset.
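
One way to double-check the match from the machine itself is to look for those vendor and product identifiers among a candidate driver's USB aliases (a sketch, using the rt73usb driver that turns out to support this chipset, as seen below):

$ modinfo rt73usb | grep -i 'v148fp2573'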

Listing the currently loaded kernel drivers showed that both the rt73usb and rt2500usb drivers had been automatically loaded:

$ lsmod
Module                  Size  Used by
...
rt73usb                26336  0
rt2500usb              xxxxx  0
...

Seeing that two drivers had been loaded I chose to blacklist one of them – the rt2500usb. For that I created a new, custom blacklist file in the /etc/modprobe.d/ directory, avoiding editing any existing blacklist file that my distribution had created. I chose to create the file /etc/modprobe.d/blacklist-custom.conf:

$ cat /etc/modprobe.d/blacklist-custom.conf
blacklist rt2500usb
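
To apply the blacklist without a reboot, the unwanted module can also be unloaded by hand (a sketch; this assumes the module isn't currently in use):

$ sudo modprobe -r rt2500usb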

The rt2500usb driver may work, though I haven’t tested it.

Gnome Network Manager Problems

The Ubuntu workstation default setup makes use of Gnome Network Manager to manage wired and wireless network connectivity. Inspecting the kernel message log showed that although the wireless interface was being activated, it was then very quickly deactivating:

$ dmesg
...
[ xxxx.xxxxxx] rt73usb 1-2:1.0: firmware: requesting rt73.bin
[ xxxx.xxxxxx] ADDRCONF(NETDEV_UP): wlan2: link is not ready
[ xxxx.xxxxxx] wlan2: authenticate with AP xx:xx:xx:xx:xx:xx
[ xxxx.xxxxxx] wlan2: authenticated
...
[ xxxx.xxxxxx] wlan2: deauthenticating by local choice (reason=3)
...

Suspecting that Network Manager may be trying to maintain a wired-only connection and knowing the workstation doesn’t move anywhere, I decided to dispense with the network-manager package altogether:

$ sudo apt-get remove network-manager
...

Network Interface Configuration

In order to test the connection with my WAP, I first disabled authentication and encryption on the WAP through its configuration interface. Here is the simple configuration for the USB dongle in the workstation's /etc/network/interfaces file:

auto lo
iface lo inet loopback
    address 127.0.0.1
    netmask 255.0.0.0

auto wlan2
iface wlan2 inet static
    address 192.168.1.115
    netmask 255.255.255.0
    gateway 192.168.1.1
    # Wireless config
    wireless-essid myssid
    wireless-channel 1
    wireless-mode managed
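
After editing /etc/network/interfaces, the interface can be bounced to pick up the changes (a sketch; ifdown will complain if the interface isn't already up):

$ sudo ifdown wlan2
$ sudo ifup wlan2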

Using iwconfig I could see that the USB dongle was connecting to the WAP.

Now that we know that we can connect to the wireless access point, we can protect our communication. I’m using WPA2 to connect to and encrypt the communication channel to my WAP. Here’s the snipped /etc/network/interfaces:

auto lo
iface lo inet loopback
    address 127.0.0.1
    netmask 255.0.0.0

auto wlan2
iface wlan2 inet static
    address 192.168.1.115
    netmask 255.255.255.0
    gateway 192.168.1.1
    # WPA2 config
    wpa-psk secretpasskey
    wpa-driver wext
    wpa-key-mgmt WPA-PSK
    wpa-proto WPA2
    wpa-ssid myssid
    pre-up sleep 5

WPA2 details can go straight into the interfaces file, though you may wish to externalise the WPA2 supplicant information into a separate file that is read-protected from anything but the root user and group.
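
For example, here is a sketch of externalising the supplicant settings (the file path is illustrative; wpa_passphrase writes a hashed form of the passphrase):

$ wpa_passphrase myssid secretpasskey | sudo tee /etc/wpa_supplicant.conf
$ sudo chmod 0600 /etc/wpa_supplicant.conf

The wpa-* lines in the interface stanza are then replaced with a single reference:

wpa-conf /etc/wpa_supplicant.conf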

A check of the network interfaces shows a working wlan network interface:

$ ifconfig
lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:33 errors:0 dropped:0 overruns:0 frame:0
    TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:1831 (1.8 KB) TX bytes:1831 (1.8 KB)

...

wlan2 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
    inet addr:192.168.1.115 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::227:19ff:feb7:668a/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:468550 errors:0 dropped:0 overruns:0 frame:0
    TX packets:398811 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:406451098 (406.4 MB) TX bytes:184734469 (184.7 MB)

Checking wireless connection information with iwconfig shows the wlan interface is connected:

$ iwconfig
lo no wireless extensions.

...

wmaster0 no wireless extensions.

wlan2 IEEE 802.11bg ESSID:"myssid"
    Mode:Managed Frequency:2.462 GHz Access Point: xx:xx:xx:xx:xx:xx
    Bit Rate=54 Mb/s Tx-Power=9 dBm
    Retry long limit:7 RTS thr:off Fragment thr:off
    Power Management:on
    Link Quality=38/70 Signal level=-72 dBm
    Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
    Tx excessive retries:0 Invalid misc:0 Missed beacon:0

Object Oriented JavaScript – Quick Reference

Object Definition and Creation

Constructor-based Creation

var Example = function(val) {
    this.foo = val;
}

var example = new Example('something');

// Although it's unusual to see this technique, new instances of
// Example may be created using Example's built-in constructor property:
var anotherExample = new example.constructor('something else');

JSON-based Creation

var example = {
    foo: 'something'
};

Non-shared Member Attributes

var Example = function() {
    // Each instance of Example gets its own copy of the foo() function.
    this.foo = function() { ... };
};

Shared Member Attributes

var Example = function() {};

// Add a shared foo() function attribute to all past and future instances
// of Example.
Example.prototype.foo = function() { ... };

Note that when declaring a shared function on an object's prototype, any constructor-scoped variables will not be accessible from within the shared function. For example, a variable declared with var inside the Example constructor function will not be directly available from within the prototype's foo().

Function Access to the Parent ‘this’

var Example = function() {
    var that = this;
    this.foo = '';

    var bar = function(val) {
        // Because inner functions have their own 'this' attribute, the
        // parent object's 'this' attribute is obscured. A common
        // workaround is to assign 'this' to a variable (conventionally
        // named 'that') at parent scope:
        that.foo = val;
    }
};

Object Inheritance

JavaScript is a very flexible language and although it does not have the classical inheritance mechanisms of, say, Java and C++, there are a number of common JavaScript idioms that provide object inheritance. Two of these techniques are shown below.

Start with a base object, Animal:

var Animal = function() { ... };
Animal.prototype.getLegCount = function() { ... };

Prototype Chaining

var Cat = function() { ... };

// Prototype chaining allows Cat to inherit from Animal.
Cat.prototype = new Animal();

// Add Cat-specific behaviour only after replacing the prototype,
// otherwise the assignment above would discard it.
Cat.prototype.meow = function() { ... };

// New instances of Cat are created using the 'new' operator.
var fluffy = new Cat();

Parasitic Inheritance

var Dog = function() {
    var that = new Animal();

    // parasitic extension of Animal...
    that.woof = function() { ... };

    return that;
};

// New instances of Dog are created using the 'new' operator.
var spot = new Dog();

Because the `Dog` constructor function returns an object, the new operator will use that object as its result, assigning it to the variable spot, above (see Step 9 of Construct in ECMAScript Language Specification).

Eclipse: An Empty ‘Available Software’ List

The Problem

I recently (reluctantly) installed Eclipse on my Linux development box (Ubuntu 9.10) for the first time in over a year. I quickly ran into problems when trying to view and install available plug-ins on the ‘Available Software’ list (available from the menu: Help > Install New Software…). The available software list appears to be empty; however, it is actually just not being painted correctly.

A quick search pulled up a bug report that suggests a problem with Eclipse’s use of the GTK, and a little more searching found an explanation about mixing calls through the GTK with calls to the native windowing system.

The Solution

The solution is to make sure all Eclipse GUI calls go directly to the native windowing system, avoiding the mix of native and GTK calls. This involves setting the GDK_NATIVE_WINDOWS shell environment variable. If starting eclipse from the command line, then this can be done as follows:

$ export GDK_NATIVE_WINDOWS=1
$ eclipse

If starting from a desktop short-cut or similar, then rename the eclipse executable to, say, `eclipse.bin` and create the following shell script, named eclipse, in the same directory:

#!/bin/bash

export GDK_NATIVE_WINDOWS=1
# Hand off to the real binary, passing through any arguments.
exec "$(dirname "$0")/eclipse.bin" "$@"

Be sure to give the appropriate users permission to execute this script. From the command line:

$ chmod ug+x eclipse

Ownership of the new script file may also need changing:

$ sudo chown $(ls -l eclipse.bin | awk '{OFS=":"; print $3,$4}') eclipse

(That’s the long way round, but it saves a little explaining and introduces an interesting use of awk!)

Eclipse Again – An Aside

I switched to Netbeans a while ago, having grown weary of the bugs I was encountering in Eclipse. I find Netbeans to be extremely stable and trouble-free, by the way.

I installed Eclipse again recently in order to take advantage of the Android Development Tools (ADT) plug-in for Eclipse, and soon faced having to deal with the bug described above. I'd like to get on with the job of developing decent software; an IDE that requires trouble-shooting, work-arounds and restarts takes me away from that and breaks concentration. That's frustrating!

I'm inclined to attribute this bug to quality issues in the Eclipse code-base. That's based upon using Eclipse for a number of years (primarily for Java and C++ development) and seeing it become increasingly unstable. The explanation regarding mixing GUI calls may also add weight to this view: Eclipse would seem to be breaking the Law of Demeter in an interesting way by going to extra lengths to bypass the GDK layer from within the JRE.

My recent experience with Eclipse has done nothing to bring me back to using it again, and so I’ll continue to use the ever improving Netbeans where possible.


PHP Output to Syslog on Ubuntu

There are just a few things you need to know in order to output trace or error information to syslog on an Ubuntu system from PHP (the following is probably true for other Debian-based distributions, too).

  • Set error logging to syslog in php.ini (the system-wide php.ini can be found at /etc/php5/php.ini):

error_log = syslog

  • Restart Apache:

$ sudo /etc/init.d/apache2 restart

  • Use the error_log() function to output your error messages (a quick test is sketched below this list).
  • View the output of the error messages:

$ tail -f /var/log/syslog

The -f flag makes tail follow the file, printing new messages as they are appended.
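
A quick way to test the whole path from the command line (a sketch; the -d flag forces the error_log setting for PHP's CLI, which reads its own php.ini):

$ php -d error_log=syslog -r 'error_log("Test message from PHP");'
$ tail -n 1 /var/log/syslog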


Selecting A Free Software License

Motives

I’ve been stuck in the world of open source software licensing recently. I’ve written a reasonable amount of software for use by my company, getpepper, and I would be happy to make some of that software open source. Although I’m inclined to help others, and generally hope to do things that are considered to be socially useful, my choice to open source isn’t purely altruistic. Certainly I hope it benefits others as it benefits getpepper by its use. But I also hope that getpepper and maybe me, individually, will benefit from the wider exposure that open sourcing can bring.

The initial items of software that I’m considering open sourcing are plugins for WordPress and jQuery, both of which currently use the GPL version 2 license. There is a view that developing extensions for at least one of these platforms means that those extensions themselves become GPL’d. I want to understand the commercial implications of this as it affects my company when it releases its software under an open source license – and here I probably mean the GPL, or a GPL-compatible license.

Concerns

So with the above in mind here are a couple of my concerns.

  • As the original author and copyright holder of the software, can I recover that software to apply a different, non-GPL-compatible license to it?
  • Are the rights of the original author any different to the rights of any other user of the software?

Maybe I’m being overly cautious and a little edgy about committing my time and hard work to an open source license, or maybe I’m not really giving too many of my rights away. I’m not sure as I’m not a lawyer and prefer to spend my time writing software, not figuring out the intricacies of licensing law – which seems pretty dull compared to writing most, but not all, types of software. Anyway, hopefully some of my future posts will reflect a more legally aware view!

A Quick JDBC How-To

In order to access a persistence store using JDBC, it's necessary to load the JDBC driver using the Class class's static method forName(). Once the JDBC driver has been loaded, it should be possible to make a connection to a database managed by the DBMS. Here's how the PostgreSQL JDBC driver is loaded (don't forget to make sure the JDBC driver jar file is either on the class path or referenced directly), and then used to get a database connection:

// Load the driver class
Class.forName("org.postgresql.Driver");

// Obtain a connection to the DBMS
java.sql.Connection connection =
    java.sql.DriverManager.getConnection(
        "jdbc:postgresql://localhost/dbname",
        "username",
        "password");

Once you have a connection you can start manipulating the database. Use the java.sql.Statement class to do this, with java.sql.ResultSet managing the results of executing a statement.

There are two other types of statement available, namely CallableStatement and PreparedStatement, which offer stored procedure statements and pre-parsed SQL statements, respectively. These have the advantage of reducing the overhead of parsing executed SQL at run-time. The PreparedStatement is likely the more useful due to its simplicity and improved efficiency. Here’s an example of its use:

try {
    PreparedStatement ps =
        connection.prepareStatement("INSERT INTO authors VALUES (?, ?, ?)");
    ps.setInt(1, 495);
    ps.setString(2, "Light-Williams");
    ps.setString(3, "Corwin");

    ps.executeUpdate();
} catch (SQLException se) {
    System.out.println(
        "We got an exception while preparing a statement: " +
        "probably bad SQL.");
    se.printStackTrace();
    System.exit(1);
}

After executing an SQL statement, obtain the result set data, including the names and types of columns that have been updated by a statement, using an instance of ResultSetMetaData obtained from the ResultSet instance.

Similarly, DatabaseMetaData can be used to obtain information such as the catalogues available from the connected database, the producer of the database and the username used for the connection. Get an instance of DatabaseMetaData from the Connection object via its getMetaData() method.

It’s worth noting a few points about the result set returned by executing an SQL statement. Firstly, the returned ResultSet object starts off pointing to the position prior to the first record, which means the next() method must be called on the ResultSet object in order to reach the first record. Also, there is no way of finding out the number of records held by a returned ResultSet instance except by stepping through it and counting them. Finally, in multi-threaded applications ensure that each thread uses its own ResultSet objects.


Data Back-up Bash Script

Data Back-up Requirement

Last year (2008), when getting processes in place for our new web and software development business, getpepper, I put together an all-important data backup procedure. My aim was, in the worst case, to ensure that we could restore all but the last few days' worth of data for our own systems and those of our clients.

We mostly use open source systems and tools to create and manage our websites and write our software, with the Ubuntu Linux distribution forming the platform upon which our server-based systems (source control (Subversion), issue tracking (Trac), CRM, etc.) and some of our desktop systems run.

After some research on Linux-based back-up facilities I settled on using a Bash shell script that would allow us to run a regular back-up cycle using external portable hard disks. The weekly back-up shell script, which saves to USB hard drives, is based upon a script provided by Mike Rubel.

The Script

The script uses the rsync tool to provide the incremental functionality of the back-up. Here’s the adapted script that we use.

#!/bin/bash
# ============================
# Author: Paul Pepper (though see description below for credits)
# Created: 6 November 2008
# Description:
# Rotating-snapshot utility adapted from Mike Rubel's make_snapshot.sh, which
# can be found at http://www.mikerubel.org/computers/rsync_snapshots/
# Basically, this script performs rotating backup-snapshots of /home whenever
# it is called.
# ============================

unset PATH # suggestion from H. Milz: avoid accidental use of $PATH

# ============================
# System commands used by this script
# ============================
ID=/usr/bin/id
ECHO=/bin/echo

MOUNT=/bin/mount
UMOUNT=/bin/umount
RM=/bin/rm
MV=/bin/mv
CP=/bin/cp
TOUCH=/bin/touch
RSYNC=/usr/bin/rsync

# ============================
# File names and locations
# ============================

MOUNT_DEVICE=/dev/sdb1
MOUNT_POINT=/media/sdb
BACKUP_TO_DIR=backup
BACKUP_FROM_DIR=/home
SNAPSHOT=snapshot
# Optionally, the path of a file listing rsync exclude patterns:
EXCLUDES=
BACKUP_TO_PATH=${MOUNT_POINT}/${BACKUP_TO_DIR}/${SNAPSHOT}

# ============================
# The script
# ============================

# Make sure we're running as root
if [ `$ID -u` != '0' ]; then
    $ECHO "$0 must be executed as root. Exiting!"
    exit 1
fi

# Attempt to mount the backup device, else abort
$MOUNT -o rw $MOUNT_DEVICE $MOUNT_POINT
if [ $? -ne 0 ]; then
    $ECHO "$0: Could not mount $MOUNT_DEVICE on $MOUNT_POINT as readwrite"
    exit 1
fi

# Step 1: delete the oldest snapshot, if it exists:
if [ -d ${BACKUP_TO_PATH}.3 ] ; then
    $RM -rf ${BACKUP_TO_PATH}.3
fi

# Step 2: shift the middle snapshots(s) back by one, if they exist
if [ -d ${BACKUP_TO_PATH}.2 ] ; then
    $MV ${BACKUP_TO_PATH}.2 \
    ${BACKUP_TO_PATH}.3
fi

if [ -d ${BACKUP_TO_PATH}.1 ] ; then
    $MV ${BACKUP_TO_PATH}.1 \
    ${BACKUP_TO_PATH}.2
fi

# Step 3: make a hard-link-only (except for dirs) copy of the latest snapshot,
# if that exists
if [ -d ${BACKUP_TO_PATH}.0 ] ; then
    $CP -al ${BACKUP_TO_PATH}.0 \
    ${BACKUP_TO_PATH}.1
fi

# Step 4: rsync from the system into the latest snapshot (notice that
# rsync behaves like cp --remove-destination by default, so the destination
# is unlinked first. If it were not so, this would copy over the other
# snapshot(s) too!)
# Only pass --exclude-from when EXCLUDES actually names a file, so the
# script also works with EXCLUDES left empty, as above.
$RSYNC -va --delete --delete-excluded \
    ${EXCLUDES:+--exclude-from="$EXCLUDES"} \
    ${BACKUP_FROM_DIR} ${BACKUP_TO_PATH}.0

# Step 5: update the mtime of our most recent snapshot.
$TOUCH ${BACKUP_TO_PATH}.0

# Unmount the device to which we've written the backup
${UMOUNT} ${MOUNT_POINT}
if [ $? -ne 0 ]; then
    $ECHO "$0: Could not unmount ${MOUNT_POINT}"
    exit 1
fi

Here’s an outline of our weekly back-up process:

1. Grab one of the external hard disks.

2. Attach hard disk to server via a USB connector.

3. Log in to the server as a regular user – don't _su_ to root!

4. Run the back-up shell script:

$ sudo ./backup-snapshot.sh

5. Disconnect hard disk and return to its place of safekeeping!

And that’s it!

All users who perform the back-up must have the necessary permissions to run the shell script. I enforce this by adding those users as sudoers, permitting privileged access only to the back-up script and the system commands it uses. Here are the relevant parts of the sudoers file that grant that access; note that it is recommended that you use visudo when editing this file.

# User alias specification
User_Alias BACKUP_USERS = ann, bill

# Cmnd alias specification
Cmnd_Alias BACKUP_CMND = /somepath/backup-snapshot.sh, /bin/mount, /bin/umount, /bin/touch, /usr/bin/rsync

# Permit BACKUP_USERS to run the back-up script as root from all locations
BACKUP_USERS ALL=(root) BACKUP_CMND
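
Once added, a back-up user can check exactly what they have been granted (a quick check, run as that user):

$ sudo -l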

Enabling Apache Digest User Authentication

Background

These notes relate to Debian-based systems running Apache 2.2, so you’ll have to make the appropriate changes to paths, and possibly commands, for your operating system or Linux distro.

The example setup that I've provided here allows users with an operational Apache user directory (mod_userdir) to set their own access permissions, rather than taking a system-wide approach.

System-wide Settings

The Apache2 configuration files can be found in /etc/apache2/. Update the file /etc/apache2/apache2.conf to include the following directives:

<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit
    Options Indexes SymLinksIfOwnerMatch IncludesNoExec
</Directory>

You might simply be able to uncomment existing text within the config file. Among other things, this permits users to enable authentication checking in their public_html directories, or whatever you set the directory name to. You’ll also have to enable Apache’s mod_userdir if it isn’t already enabled:

$ sudo a2enmod userdir

Support for digest authentication is also provided in an Apache module. The digest authentication module is not enabled by default, but can also be enabled using a2enmod:

$ sudo a2enmod auth_digest

If a2enmod isn't available on your distribution, then you may wish to enable Apache modules by creating a symbolic link to the appropriate module in the following manner:

$ sudo ln -s /etc/apache2/mods-available/auth_digest.load /etc/apache2/mods-enabled/

Password Generation

Passwords are generated using the htdigest tool that ships with the Apache2 distribution. The file created by this tool places the username, realm and hashed password together on a colon-delimited line. This file should be placed in a location where Apache cannot serve it up to a client (e.g. don't place it under /var/www).

In order to add an entry to the password file, run the htdigest tool as follows:

$ htdigest -c /directory/path/digest.htpasswd myrealm username

Caution: the -c flag forces htdigest to overwrite the digest password file if it already exists. Drop its use if you need to add new entries to an existing file. You should also replace the values myrealm and username with values appropriate for your system. The realm value is a security context that should be recognisable to the user in order to allow them to provide the correct username and password.
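
An entry in the resulting file looks like the following (the hash shown here is illustrative, not a real digest):

username:myrealm:5f4dcc3b5aa765d61d8327deb882cf99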

Directory-Level Configuration

You can now create a .htaccess file within each of the directories and subdirectories to which you would like to control access using digest authentication. Here's an example .htaccess file that might, for example, be placed immediately within a user's public_html directory:

AuthType Digest
AuthName "myrealm"
AuthDigestDomain / http://subdomain.mydomain.com/
AuthUserFile /directory/path/digest.htpasswd
Require valid-user

Here’s an explanation of each of the above Apache directives.

  • The AuthName value is the same realm value that was given when using the htdigest program (see details above).
  • AuthDigestDomain provides the list of URIs that are in the protection space. These URIs can be absolute or relative, and sub-directories of those given are matched as well.
  • The value of AuthUserFile points to the location of the password file that was created using the htdigest tool.
  • Require can take several values so that extra requirements are imposed. The value used here, valid-user, grants access to any user with a valid entry in the password file (created using htdigest, as shown above); specific usernames can be listed instead.

Now Try It!

Restart Apache:

$ sudo /etc/init.d/apache2 restart

If all has gone well, you should be challenged for your credentials when you try to browse your protected directories.
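
You can also exercise the digest challenge from the command line with curl (a sketch; the URL is illustrative and curl will prompt for the password):

$ curl --digest -u username http://subdomain.mydomain.com/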
