HTTPS with Letsencrypt on nginx

Open ports 80 and 443 in the Shorewall firewall using ACCEPT rules in /etc/shorewall/rules:

# Allow access to web server default ports (secure and unsecured HTTP)
ACCEPT net $FW tcp 80
ACCEPT net $FW tcp 443

Download the certbot-auto program:

$ cd /etc/nginx
$ sudo mkdir letsencrypt
$ cd letsencrypt
$ sudo wget https://dl.eff.org/certbot-auto
$ sudo chmod a+x certbot-auto

To obtain certificates for a domain, certbot verifies ownership by creating files in a hidden directory under the domain’s web root and issuing an HTTP request for them. To ensure nginx serves those files rather than denying access (403 Forbidden), add the following to your nginx config. For instance, you might add the rule to /etc/nginx/custom-conf/restrictions.conf, before any rules restricting access to dot files:

# Allow the letsencrypt ACME Challenge.
location ~ /\.well-known/acme-challenge {
    allow all;
}
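What certbot writes is simple: for each challenge, the CA fetches /.well-known/acme-challenge/<token> and expects the token joined to the account key’s thumbprint (the HTTP-01 “key authorization”). A rough Python sketch of the path and body involved; the token and thumbprint values are invented and challenge_file is a hypothetical helper, not certbot’s API:

```python
# Sketch of the HTTP-01 challenge file certbot drops under the webroot.
# The CA requests the corresponding URL and compares the response body
# against token.thumbprint.
import os

def challenge_file(webroot, token, thumbprint):
    """Return the (path, body) pair for an HTTP-01 challenge."""
    path = os.path.join(webroot, ".well-known", "acme-challenge", token)
    body = f"{token}.{thumbprint}"
    return path, body

path, body = challenge_file("/var/www/www.paulpepper.com/html",
                            "evaGxfADs6pSRb2LAv9IZ",
                            "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
```

This is why the location rule above matters: if nginx returns 403 for that URL, validation fails even though certbot wrote the file correctly.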


Run certbot-auto, specifying the web root from which the domains’ files are served and the names of the domains that the requested certificate applies to:

$ sudo ./certbot-auto certonly --agree-tos --webroot -w /var/www/www.paulpepper.com/html -d paulpepper.com -d www.paulpepper.com

Some of the web applications that I maintain serve static files from /static/. For those domains I place the following location block inside the application’s server block, changing the location from which the letsencrypt verification files are served:

# Allow the letsencrypt ACME Challenge.
location ^~ /.well-known/acme-challenge {
    root /var/www/www.paulpepper.com/html/letsencrypt;
    default_type 'text/plain';
    allow all;
}

Certbot is then executed as follows, with a change in the webroot passed via the -w flag:

$ sudo ./certbot-auto certonly --agree-tos --webroot -w /var/www/www.paulpepper.com/html/letsencrypt -d paulpepper.com -d www.paulpepper.com

If successful, then new key and certificate files should have been created under /etc/letsencrypt/live/paulpepper.com after running the above command.

Next, configure nginx to serve HTTPS requests for the domains using the new certificate. In the virtual server config:

server {
    listen 443 ssl;
    server_name paulpepper.com;

    root /var/www/www.paulpepper.com/html;

    include custom-conf/restrictions.conf;

    ssl_certificate /etc/letsencrypt/live/paulpepper.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/paulpepper.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/paulpepper.com/chain.pem;
}

You might also wish to redirect HTTP requests to HTTPS, canonicalising on www.paulpepper.com (the following replaces the earlier server block for paulpepper.com):

server {
    listen 443 ssl;
    server_name www.paulpepper.com;

    root /var/www/www.paulpepper.com/html;

    include custom-conf/restrictions.conf;

    ssl_certificate /etc/letsencrypt/live/paulpepper.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/paulpepper.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/paulpepper.com/chain.pem;
}

server {
    listen 80;
    server_name paulpepper.com www.paulpepper.com;
    return 301 https://www.paulpepper.com$request_uri;
}

server {
    listen 443 ssl;
    server_name paulpepper.com;
    return 301 https://www.paulpepper.com$request_uri;
}

Check your config and restart nginx:

$ sudo nginx -t
...
$ sudo systemctl restart nginx.service

If your distribution is still using the Upstart init system then restart nginx as follows:

$ sudo service nginx restart

The certbot documentation recommends running a cron job twice per day to renew certificates. Let’s Encrypt will only renew certificates if they are due to expire, so it’s safe and good practice to run the renewal frequently.

Create a new file under /etc/cron.d/ called letsencrypt with the following content:

# Run the letsencrypt renewal service using certbot-auto.
8 06 * * * root /etc/nginx/letsencrypt/certbot-auto renew --no-self-upgrade --post-hook '/bin/systemctl reload nginx.service'

The above crontab entry will attempt the certbot-auto renewal once per day at 6:08am and then reload nginx if any SSL certificates were renewed (the --post-hook command only executes when certificates were due for renewal). Add a second daily run time if you want to follow certbot’s twice-daily recommendation exactly.

WordPress content that uses the old http protocol prefix can be replaced using an SQL UPDATE. Here’s how you might do that:

$ mysqldump -u root -p paulpepper > paulpepper-db-backup.sql
$ mysql -u root -p
...
mysql> CONNECT paulpepper;
mysql> UPDATE `wp_posts` SET `post_content` = REPLACE(`post_content`, 'http://www.paulpepper.com', 'https://www.paulpepper.com');
mysql> UPDATE `wp_posts` SET `post_content` = REPLACE(`post_content`, 'http://paulpepper.com', 'https://paulpepper.com');

The table wp_posts contains the content for pages and posts, but it may be named differently depending upon the value of the $table_prefix variable found in your wp-config.php file. The standard value for this variable is ‘wp_’, so wp_posts is normally the table that should be targeted by the above SQL.
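The effect of MySQL’s REPLACE() is easy to preview outside the database before committing to the UPDATE. A small Python emulation; https_rewrite is a hypothetical helper and the sample markup is invented:

```python
# Emulate MySQL's REPLACE() on post content to preview the http -> https
# change before running the UPDATE statements against wp_posts.
def https_rewrite(content, domains):
    """Rewrite http:// URLs for each domain to https://."""
    for domain in domains:
        content = content.replace(f"http://{domain}", f"https://{domain}")
    return content

sample = '<a href="http://www.paulpepper.com/about/">About</a>'
print(https_rewrite(sample, ["www.paulpepper.com", "paulpepper.com"]))
# -> <a href="https://www.paulpepper.com/about/">About</a>
```

Running the www form first mirrors the order of the UPDATE statements above.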


WordPress Development Environment

Here’s a little detail about the development environment setup that I use whenever I need to develop a WordPress theme or plugin. My aim is to somewhat isolate each project (sadly, not quite as effectively as Python’s VirtualEnv) and keep it from impacting more general workstation setup such as configs under /etc/.

Run PHP in Isolation

No need to set up a new virtual web server for each site. Use PHP’s CLI and a router…

Get and unzip the latest version of WordPress and rename as necessary for your new project:

$ wget https://wordpress.org/latest.tar.gz
$ tar xzvf latest.tar.gz
$ mv wordpress newproject
$ cd newproject
$ mv wp-config-sample.php wp-config.php

Ensure the PHP command-line is installed:

$ sudo apt-get install php7.0-cli

Create a router.php file in the root directory (we named ours ‘newproject’, above) of the WordPress install:

<?php
$root = $_SERVER['DOCUMENT_ROOT'];
chdir($root);

$path = '/'.ltrim(parse_url($_SERVER['REQUEST_URI'] )['path'],'/');

if (file_exists($root.$path)) {
    if (is_dir($root.$path) && substr($path,strlen($path) - 1, 1) !== '/') {
        header('Location: '.rtrim( $path,'/' ).'/');
        exit;
    }

    if (strpos($path,'.php') === false) {
        return false;
    } else {
        chdir(dirname($root.$path));
        require_once $root.$path;
    } 
} else {
    include_once 'index.php';
}

From within the root directory of the WordPress installation run PHP’s built-in server from the command-line, passing the name of our router.php file as an argument:

$ php -S localhost:8080 router.php

PHP will then pass all HTTP requests through the router, which either redirects, hands control to WordPress, or serves up static assets as necessary.
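The router’s decision tree can be paraphrased in Python for clarity; route is a hypothetical helper purely for illustration, and the real logic lives in router.php above:

```python
# Paraphrase of router.php's decision tree: redirect directory requests
# missing a trailing slash, let the built-in server handle static files,
# execute .php scripts, and hand everything else to WordPress's index.php.
def route(path, files, dirs):
    """Classify a request path given sets of known files and directories."""
    if path in files or path in dirs:
        if path in dirs and not path.endswith("/"):
            return "redirect"    # header('Location: ...')
        if ".php" not in path:
            return "static"      # return false -> built-in server serves it
        return "php"             # require_once the script
    return "wordpress"           # include_once 'index.php'

files = {"/wp-login.php", "/wp-content/uploads/logo.png"}
dirs = {"/wp-admin"}
```

For example, route("/2016/05/hello-world/", files, dirs) falls through to WordPress, which resolves the permalink itself.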

From a browser access the newly created project (http://localhost:8080) and the familiar WordPress installation routine should be presented.

If you’d additionally like to avoid setting up a database instance for your new dev project, then read on…

Disposable Database Setup

Get the latest version of the WordPress SQLite plugin and extract it inside the plugins directory:

$ cd wp-content/plugins
$ wget https://downloads.wordpress.org/plugin/sqlite-integration.zip
$ unzip sqlite-integration.zip

Copy db.php from the sqlite-integration plugin directory into the wp-content directory:

$ cp sqlite-integration/db.php ../.

Ensure php7.0-sqlite3 and sqlite3 are both installed:

$ sudo apt-get install sqlite3 php7.0-sqlite3

You should now be able to run the PHP built-in webserver, as described above, but now your WordPress data will be persisted to an SQLite file, wp-content/database/.ht.sqlite.


AngularJS Providers, Factories and Services

Angular Services

Angular relies heavily upon objects that it refers to as Services. An Angular Service is a singleton object that can be injected into other Angular components (other Services, Controllers, Directives, Filters, etc).

Services may be defined and registered in a number of ways using Angular’s Module interface, which offers the functions provider, factory and service for service definition. (There are other service-definition functions on this interface, but they’re of less interest here.) Each of these functions is used in a manner that the Angular documentation refers to as a recipe, with each offering varying degrees of sophistication and configurability.

The $provide service can also be used to register Services directly (it is what the Module functions delegate to), however we normally only require the Module interface.

Provider

Module.provider() offers the greatest flexibility in defining and registering a Service. Module.factory() and Module.service() wrap and simplify Module.provider() to achieve their results.

Here’s an example of a Pony Service defined and registered using Module.provider().

var Pony = function(ponyColour, FoodService) {
    // ...
};

var ponyModule = angular.module('ponyModule', []);

ponyModule.provider('Pony', function() {
    var colour = 'pink';

    this.setColour = function(value) {
        colour = value;
    };

    this.$get = ['Cheese', function(Cheese) {
        return new Pony(colour, Cheese);
    }];
});

After creating an Angular module named ponyModule, the above code defines and registers an Angular Service Provider which is used to create the Pony Service. The Service Provider is responsible for constructing and returning the Angular Service instance of Pony from its $get() factory function. Notice that the Pony service is also injected with the Cheese service (defined elsewhere).

The Provider method of creating a Service is overkill in most circumstances, but it is useful to understand Service creation in its more fundamental form. Only use this technique when a Service is used by more than one application and requires application-specific configuration during Angular’s module configuration phase.

In the above code, a pony’s colour may be overridden by calling the setColour() function (defined on the Provider object instance) during Angular’s configuration phase. Angular makes the Provider instance available to us by appending the word Provider to the Service name – PonyProvider in our case. The Provider instance can then be injected into the application’s configuration function:

var app = angular.module('app', ['ponyModule']);

app.config(['PonyProvider', function(PonyProvider) {
  PonyProvider.setColour('blue');
  // ...
}]);

Incidentally, the Provider instance itself can be injected with other Service Provider instances during Angular’s configuration phase, in much the same manner that dependency injection is performed on a Service.

Factory

Module.factory() offers the next most sophisticated way to define and register a Service. Using this technique, only the Provider’s $get() factory function is defined and registered:

ponyModule.factory('Pony', ['Cheese', function(Cheese) {
  return new Pony('blue', Cheese);
}]);

Here, Service configurability (during Angular’s configuration phase) has been sacrificed in order to simplify Service registration.

Service

Module.service() is the simplest technique for defining and registering a Service:

ponyModule.service('Pony', ['Cheese', function(Cheese) {
  this.colour = 'blue';
  // ...
}]);

Angular will call new on the Service constructor function for us. A little more configurability has been lost again, though the Service definition and registration is simpler still. In the above use of Module.service() we have set a Pony’s colour within the constructor function.

How much flexibility you require in Service configuration will likely drive which of the above three techniques you employ. Module.service() will likely be sufficient for defining and registering the Services used by most applications.

Preserve the Shell Environment Using sudo

When executing a command or script as another user, it may be necessary to preserve the current shell’s environment. sudo provides the -E flag for this.

However, on Ubuntu systems the PATH environment variable is not preserved by the -E flag. Work around this by passing the current shell’s PATH environment variable on the command-line in the form of PATH=$PATH.

$ sudo -E -u anotheruser PATH=$PATH ./some-command.sh

There are other ways to manage environment variables with sudo by editing /etc/sudoers (use sudoedit, don’t edit it directly!), but the above can be useful to quickly get the job done.


Locating Django Applications in Their Own Sub-Directory

It’s straightforward to keep Django applications in their own directory if necessary – though the convention is to place them in the project’s root directory.

Instructions below assume $PWD is the project’s root directory.

  • Create the directory in which Django applications will be located:
$ mkdir apps
  • Create a new application named newapp under apps/newapp using startapp’s optional directory parameter (this requires that the target directory exist, so create it first):
$ mkdir apps/newapp
$ ./manage.py startapp newapp apps/newapp
  • Modify the project’s manage.py file to ensure the development server can find the applications. It should end up looking something like the following:
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

    # Add the apps directory to Python's path. In production the apps
    # directory must be added to the path by other means (see below).
    from os.path import abspath, dirname, join
    PROJECT_ROOT = abspath(dirname(__file__))
    sys.path.append(join(PROJECT_ROOT, "apps"))

    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)
  • For a development environment, that’s it. For production, ensure the apps directory is included on the project’s PYTHONPATH. If using Apache and mod_wsgi, ensure that the WSGIDaemonProcess directive in the project’s <VirtualHost> includes the path to the newly added apps directory, as well as the paths to the (virtualenv’s) Python site-packages directory and the project’s root directory:
WSGIDaemonProcess myapp python-path=/myapp/apps:/myapp/python-packages:/myapp
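If you prefer not to rely on the python-path option, the project’s wsgi.py can make the same sys.path addition as manage.py. A sketch; extend_python_path is a hypothetical helper, and a real wsgi.py would typically derive the project root from __file__:

```python
# Append <project root>/apps to sys.path so Django can import applications
# kept in the apps sub-directory, mirroring the manage.py change above.
import sys
from os.path import join

def extend_python_path(project_root):
    """Add project_root/apps to sys.path (idempotently) and return it."""
    apps_dir = join(project_root, "apps")
    if apps_dir not in sys.path:
        sys.path.append(apps_dir)
    return apps_dir
```

Guarding against duplicate entries keeps repeated imports of the module from growing sys.path.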

Django, X-SendFile and Apache

Static File Access Management

Django documentation advises against using an application to directly serve static files when in a production environment. A dedicated HTTP server, optimised for serving static files, should be used for this purpose. That’s fine for serving publicly accessible files. But serving static files which require user access permission checking necessarily involves routing the download request via application code.

It’s possible to get the best of both, view-level permission management in Django and dedicated HTTP-server download handling, by using an HTTP server extension that processes response headers applied by the web application. The mod_xsendfile Apache module is one such extension. Used with django-sendfile, application management of file downloads can be cleanly separated into interface and implementation.
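The underlying contract is simple: the application returns an empty-bodied response carrying an X-Sendfile header, and Apache streams the named file as the body. A minimal illustration of that contract, not django-sendfile’s actual code:

```python
# Minimal sketch of the X-Sendfile contract: the application sends headers
# only; the path in the X-Sendfile header tells Apache which file to stream
# as the response body.
def xsendfile_response(file_path, permitted):
    """Build a header-only response dict for a permitted download."""
    if not permitted:
        return {"status": 403, "headers": {}, "body": b"Forbidden"}
    return {
        "status": 200,
        "headers": {"X-Sendfile": file_path},  # Apache serves this file
        "body": b"",                           # the app sends no body itself
    }
```

Because the application never reads the file, permission checks stay in Python while the heavy I/O stays in Apache.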

django-sendfile and Apache mod_xsendfile Installation

Install the mod_xsendfile module:

$ sudo apt-get install libapache2-mod-xsendfile

The mod_xsendfile module is usually enabled by the install scripts. Use a2enmod to further manage its enabled status.

Enable X-Sendfile header processing within the application’s Apache virtual host. For instance, a virtual host for a Django application serving example.com, /etc/apache2/sites-available/example.com:

<VirtualHost *:80>
    ServerName example.com

    # mod_wsgi settings
    WSGIDaemonProcess example python-path=/var/www/example.com/app
    WSGIProcessGroup example
    WSGIScriptAlias / /var/www/example.com/app/example/wsgi.py

    # Publicly available static files directly available via Apache.
    Alias /static /var/www/example.com/app/static
    Alias /pub-uploads /var/www/example.com/pub-uploads

    # Restricted access files via Apache mod_xsendfile.
    XSendFile On
    XSendFilePath /var/www/example.com/priv-uploads

    # Apache <directory/> directives here...
</VirtualHost>

For the Django application, install django-sendfile:

$ pip install django-sendfile

Within the Django application’s settings.py:

SENDFILE_BACKEND = 'sendfile.backends.xsendfile'

Conventionally this can be overridden in a local_settings.py, imported by settings.py, for development environments:

SENDFILE_BACKEND = 'sendfile.backends.development'

The Django view used to manage download access permissions may then look something like the following:

from django.views.generic.base import View
from sendfile import sendfile

class DownloadFile(View):
    def get(self, request):
        # Get user access rights and the file's file-system path.
        # ...
        # If access denied, return HttpResponseForbidden(); else:
        return sendfile(request, file_path)

The sendfile API also provides response-header management, such as setting the Content-Disposition header:

def sendfile(request, filename,
    attachment=False, attachment_filename=None,
    mimetype=None, encoding=None)
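The attachment arguments govern the Content-Disposition header that the response carries. Roughly, and assuming the conventional header format rather than django-sendfile’s exact output:

```python
# Sketch of how attachment-style arguments map to a Content-Disposition
# header value, following the common "attachment; filename=..." convention.
def content_disposition(attachment=False, attachment_filename=None):
    """Return the Content-Disposition value implied by sendfile-style args."""
    if not attachment:
        return None  # browsers display the file inline by default
    if attachment_filename:
        return f'attachment; filename="{attachment_filename}"'
    return "attachment"

print(content_disposition(True, "report.pdf"))
# -> attachment; filename="report.pdf"
```

With attachment=True the browser prompts a download instead of rendering the file in place.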

Set up git and gitosis on Ubuntu

Introduction

Gitosis is used to help centrally manage git repositories. Gitosis allows:

  • SSH access to repositories (with the help of openssh-server).
  • User management without the need to add server shell accounts for each person accessing repositories.
  • Access through a single shell account: while gitosis manages per-user repository access, each connection is restricted to a specific gitosis command via its ssh configuration.

Central Repository Server

Install gitosis (apt-get should install all dependencies):

paul@server$ sudo apt-get install gitosis

As the first administrator of the gitosis installation, grant yourself access by passing your SSH public key to the gitosis-init command. (The key you currently use for secure ssh access to the server will work, but better practice is to use one created specifically for gitosis access; see the section below on creating and managing ssh keys.)

paul@server$ sudo -H -u gitosis gitosis-init < ~/.ssh/id_rsa.pub

After executing the above command you should notice that the gitosis authorized_keys file (~gitosis/.ssh/authorized_keys) has been populated with your public key. gitosis will add new entries to this file when new users are granted access to the gitosis system.

Cloning the gitosis-admin Project

You should now be able to clone the gitosis admin repository to your workstation:

paul@workstation$ git clone gitosis@server:gitosis-admin.git

Be sure that your workstation is correctly configured to use the ssh private key counterpart to the public key that you used when initialising gitosis (see above).

As the admin you can now manage gitosis system access by adding and removing user public keys in the keydir directory of the gitosis admin project directory (shown cloned above as gitosis-admin). Projects, and user access to those projects, are managed by editing the gitosis.conf file found in the same directory.

SSH Key Management

This isn’t a tutorial on ssh; just a little assistance with commonly required ssh config when adding access to new gitosis users on your system.

Each user should create a public/private key pair for exclusive use in accessing your gitosis service. The key pair can be created using ssh-keygen as follows:

$ ssh-keygen -t rsa

When ssh-keygen requests a filename, provide something that will help you, the workstation user, associate the key file names with their intended use, e.g. gitosis@server-name.id_rsa

In your workstation’s ~/.ssh/config you should instruct ssh to use those keys against your server for the gitosis user:

Host gitosis.server-name.com
User gitosis
Hostname server-name.com
PreferredAuthentications publickey
IdentityFile ~/.ssh/gitosis@server-name.id_rsa

The ssh public key file (the one ending .pub) can then be added to the keydir directory of the gitosis admin project. You may wish to rename the public key files to something like paul@workstation when copying them into the keydir directory.

Adding New Projects

As an administrator of a gitosis system, it is possible to add new projects. Within the gitosis-admin project, add a new project entry, adding the names of the public key files (less the .pub extension) for the members you wish to grant access:

[group project-team]
writable = new_project
members = paul@workstation fred@anotherworkstation
 
[group gitosis-admin]
...

Commit and push the changes to the gitosis server:

$ git commit -a -m "Added new_project as a new project."
$ git push

It should now be possible to push the project files up to your gitosis server:

$ mkdir myproject
$ cd myproject
$ :> hello.py
$ git init
$ git add .
$ git commit -a -m "Initial commit."
$ git remote add origin gitosis@server.com:new_project.git
$ git push origin master

Gitosis Username and Project Directory

Warning: you probably shouldn’t do this… The Apt scripts will assume the original username and home directory, so the following changes may break future Apt updates.

The Ubuntu Apt system creates the user gitosis to access the server. If a different username and/or home directory are required then it’s necessary to apply changes to the gitosis user account. To change the home directory (from the default /srv/gitosis to /home/git):

paul@server$ sudo usermod --home /home/git gitosis

To change the username used to access gitosis (from gitosis to git):

paul@server$ sudo usermod --login git gitosis


Overusing Software Reuse

Well-intentioned but poorly considered efforts at code reuse can cause all sorts of maintenance pain; engineering best practice shouldn’t be applied unthinkingly. Although software reuse can be good (see the DRY principle, for example), it should only be applied where there isn’t too much variation from the common case.

Taking code reuse as an example, a good software engineer will judge what should be considered the common case and how much variation from it would result in poor application of reuse. That type of judgement often comes with the experience of having attempted reuse where it isn’t best suited!

Poor results often come where a good case for reuse hasn’t yet been established, i.e. where such a case is only anticipated. Before embarking on an effort to create generic, reusable code, the rule of thumb ‘use before reuse’ normally applies. A corollary to this is that good-quality reusable code is often factored from existing, well-designed code, or at least evolves from it.

Setting date and time on Debian and Ubuntu

The Linux date command can take several formats to set the system date and time.

Here’s the format that I use when setting the date and the time:

mmddHHiiYYYY.ss

where,

mm = month
dd = day of month
HH = hour in 24 hour format
ii = minutes
YYYY = four-digit year
ss = seconds
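The same token can be produced from Python’s strftime, which doubles as a check that the field order is right; date_token is a hypothetical helper for illustration:

```python
# Build date's mmddHHiiYYYY.ss token from a datetime object.
from datetime import datetime

def date_token(dt):
    """Format dt in the MMDDhhmmYYYY.ss form accepted by date(1)."""
    return dt.strftime("%m%d%H%M%Y.%S")

print(date_token(datetime(2012, 5, 22, 11, 57, 0)))
# -> 052211572012.00
```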

For example, to set the date and time to 22nd May 2012, 11:57am:

$ sudo date 052211572012.00

If only the time needs to be set then use the -s flag as follows:

$ sudo date -s "11:57:00"

If the system’s time zone requires changing, then on Debian-based systems use:

$ sudo dpkg-reconfigure tzdata

Installing pip, virtualenv and virtualenvwrapper

As recommended in a public announcement

cd into a directory where distribute_setup.py can be downloaded and executed.

$ wget http://python-distribute.org/distribute_setup.py
$ sudo python distribute_setup.py
$ sudo easy_install pip
$ sudo pip install virtualenv
$ sudo pip install virtualenvwrapper