Glen Turner

Postcards from Semaphore

There are only two ethernet settings

I can't believe I have to write this in 2016, more than twenty years after the bug in the DEC "Tulip" ethernet controller chip which created this mess.

There are only two ethernet speed and autonegotiation settings you should configure on a switch port or host:


Auto negotiation = on


Auto negotiation = off
Speed = 10Mbps
Duplex = half

These are the only two settings which work when the partner interface is set to autonegotiation = on.

If you are considering other settings then buy new hardware. It will work out cheaper.

That is all.


Oh, so you know what you are doing. You know that explicitly setting a speed or duplex implicitly disables autonegotiation and therefore you need to explicitly set the partner interface's speed and duplex as well.

But if you know all that then you also know the world is not a perfect place. Equipment breaks. Operating systems get reinstalled. And you've left a landmine there, waiting for an opportunity...

A goal of modern network and systems administration is to push down the cost of overhead. That means being ruthless with exceptions which store away trouble for the future.
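On Linux hosts these two configurations can be expressed with ethtool. A sketch, assuming the interface is named eth0 (substitute your own interface name):

```shell
# Setting one: autonegotiation on. This is the default and is
# almost always what you want.
ethtool -s eth0 autoneg on

# Setting two: autonegotiation off, 10Mbps, half duplex. This is
# the fallback an autonegotiating partner arrives at when it
# detects a link partner which does not negotiate.
ethtool -s eth0 autoneg off speed 10 duplex half
```

Both commands need root and real hardware, so treat them as illustrations rather than something to paste blindly.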


Embedding files into the executable

Say you've got a file you want to put into an executable. Some help text, a copyright notice. Putting these into the source code is painful:

static const char *copyright_notice[] = {
    "This program is free software; you can redistribute it and/or modify",
    "it under the terms of the GNU General Public License as published by",
    "the Free Software Foundation; either version 2 of the License, or (at",
    "your option) any later version.",
    NULL   /* Marks end of text. */
};

#include <stdio.h>

const char **line_p;
for (line_p = copyright_notice; *line_p != NULL; line_p++) {
    puts(*line_p);
}

If the file is binary, such as an image, then the pain rises exponentially. If you must take this approach then you'll want to know about Vim's xxd hexdump tool:

$ xxd -i copyright.txt > copyright.i

which gives a file which can be included into a C program:

unsigned char copyright_txt[] = {
  0x54, 0x68, 0x69, 0x73, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d,
  0x20, 0x69, 0x73, 0x20, 0x66, 0x72, 0x65, 0x65, 0x20, 0x73, 0x6f, 0x66,
  /* … */
  0x30, 0x31, 0x2c, 0x20, 0x55, 0x53, 0x41, 0x2e, 0x0a
};
unsigned int copyright_txt_len = 681;

That program looks like so:

#include "copyright.i"
unsigned char *p;
unsigned int len;
for (p = copyright_txt, len = 0;
     len < copyright_txt_len;
     p++, len++) {

If you are going to use this in anger then modify the generated .i file to declare a static const unsigned char …[]. A sed command can do that easily enough; that way the Makefile can re-create the .i file upon any change to the input binary file.
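A sketch of such a sed command, assuming the default symbol names which xxd -i generates from copyright.txt:

```shell
# Mark xxd's generated array and length as static const so the
# symbols stay private to the file which includes them.
sed -e 's/^unsigned char/static const unsigned char/' \
    -e 's/^unsigned int/static const unsigned int/' \
    copyright.i > copyright.static.i
```

A Makefile rule can run xxd and this sed whenever copyright.txt changes.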

It is much easier to insert a binary file using the linker, and the rest of this blog post explores how that is done. Again the example file will be copyright.txt, but the technique applies to any file, not just text.

Fortunately the GNU linker supports a binary object format, so using the typical linkage tools a binary file can be transformed into an object file simply with:

$ ld --relocatable --format=binary --output=copyright.o copyright.txt
$ cc -c helloworld.c
$ cc -o helloworld helloworld.o copyright.o

The GNU linker's --relocatable indicates that this object file is to be linked with other object files, and therefore addresses in this object file will need to be relocated at the final linkage.

The final cc in the example doesn't compile anything: it runs ld to link the object files of C programs on this particular architecture and operating system.

The linker defines some symbols in the object file marking the start, end and size of the copied copyright.txt:

$ nm copyright.o
000003bb D _binary_copyright_txt_end
000003bb A _binary_copyright_txt_size
00000000 D _binary_copyright_txt_start

Ignore the address of 00000000: this is a relocatable object file and the final linkage will assign a final address and fix up references to it.

A C program can access these symbols with:

extern const unsigned char _binary_copyright_txt_start[];
extern const unsigned char _binary_copyright_txt_end[];
/* The size symbol is absolute (note the 'A' in the nm listing): its
   address, not its contents, is the size of the file. Use it as
   (size_t)_binary_copyright_txt_size, or subtract _start from _end. */
extern const unsigned char _binary_copyright_txt_size[];

Don't rush ahead and puts() this variable. The copyright.txt file has no final ASCII NUL character, which C uses to mark the end of strings. Perhaps use the old-fashioned UNIX write():

#include <stdio.h>
#include <unistd.h>
fflush(stdout);  /* Synchronise C's stdio and UNIX's I/O. */
write(STDOUT_FILENO, _binary_copyright_txt_start,
      _binary_copyright_txt_end - _binary_copyright_txt_start);

Alternatively, add a final NUL to the copyright.txt file:

$ echo -e -n "\x00" >> copyright.txt

and program:

#include <stdio.h>
extern const unsigned char _binary_copyright_txt_start[];
fputs((const char *)_binary_copyright_txt_start, stdout);

There's one small wrinkle:

$ objdump -s copyright.o
copyright.o:   file format elf32-littlearm
Contents of section .data:
 0000 54686973 2070726f 6772616d 20697320  This program is 
 0010 66726565 20736f66 74776172 653b2079  free software; y
 0020 6f752063 616e2072 65646973 74726962  ou can redistrib
 0030 75746520 69742061 6e642f6f 72206d6f  ute it and/or mo

The .data section is copied into memory for all running instances of the executable. We really want the contents of the copyright.txt file to be in the .rodata section so that there is only ever one copy in memory no matter how many copies are running.

objcopy could have copied an input ‘binary’ copyright.txt file to a particular section in an output object file, and that particular section could have been .rodata. But objcopy's options require us to state the architecture of the output object file. We really don't want a different command for compiling on x86, AMD64, ARM and so on.

So here's a hack: let ld set the architecture details when it generates its default output and then use objcopy to rename the section from .data to .rodata. Remember that .data contains only the three _binary_… symbols and so they are the only symbols which will move from .data to .rodata:

$ ld --relocatable --format=binary --output=copyright.tmp.o copyright.txt
$ objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents copyright.tmp.o copyright.o
$ objdump -s copyright.o
copyright.o:   file format elf32-littlearm
Contents of section .rodata:
 0000 54686973 2070726f 6772616d 20697320  This program is 
 0010 66726565 20736f66 74776172 653b2079  free software; y
 0020 6f752063 616e2072 65646973 74726962  ou can redistrib
 0030 75746520 69742061 6e642f6f 72206d6f  ute it and/or mo

Link this copyright.o with the remainder of the program as before:

$ cc -c helloworld.c
$ cc -o helloworld helloworld.o copyright.o

Getting started with Northbound Networks' Zodiac FX OpenFlow switch

Yesterday I received a Zodiac FX, a four-port 100Base-TX OpenFlow switch, as a result of Northbound Networks' KickStarter. Today I put the Zodiac FX through its paces.

Plug the supplied USB cable into the Zodiac FX and into a PC. The Zodiac FX will appear in Debian as the serial device /dev/ttyACM0. The kernel log says:

debian:~ $ dmesg
usb 1-1.1.1: new full-speed USB device number 1 using dwc_otg
usb 1-1.1.1: New USB device found, idVendor=03eb, idProduct=2404
usb 1-1.1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-1.1.1: Product: Zodiac
usb 1-1.1.1: Manufacturer: Northbound Networks
cdc_acm 1-1.1.1:1.0: ttyACM0: USB ACM device

You can use Minicom (obtained with sudo apt-get install minicom) to speak to that serial port by starting it with minicom --device /dev/ttyACM0. You'll want to be in the "dialout" group; you can add yourself with sudo usermod --append --groups dialout $USER, but you'll need to log in again for that to take effect. The serial parameters are speed = 115,200bps, data bits = 8, parity = none, stop bits = 1, CTS/RTS = off, XON/XOFF = off.

The entry text is:

 _____             ___               _______  __
/__  /  ____  ____/ (_)___ ______   / ____/ |/ /
  / /  / __ \/ __  / / __ `/ ___/  / /_   |   /
 / /__/ /_/ / /_/ / / /_/ / /__   / __/  /   |
/____/\____/\__,_/_/\__,_/\___/  /_/    /_/|_|
            by Northbound Networks
Type 'help' for a list of available commands
Typing "help" gives:
The following commands are currently available:
 show ports
 show status
 show version
 show config
 show vlans
 set name <name>
 set mac-address <mac address>
 set ip-address <ip address>
 set netmask <netmasks>
 set gateway <gateway ip address>
 set of-controller <openflow controller ip address>
 set of-port <openflow controller tcp port>
 set failstate <secure|safe>
 add vlan <vlan id> <vlan name>
 delete vlan <vlan id>
 set vlan-type <openflow|native>
 add vlan-port <vlan id> <port>
 delete vlan-port <port>
 factory reset
 set of-version <version(0|1|4)>
 show status
 show flows
 clear flows
 read <register>
 write <register> <value>

Some baseline messing about:

Zodiac_FX# show ports
Port 1
 Status: DOWN
 VLAN type: OpenFlow
 VLAN ID: 100
Port 2
 Status: DOWN
 VLAN type: OpenFlow
 VLAN ID: 100
Port 3
 Status: DOWN
 VLAN type: OpenFlow
 VLAN ID: 100
Port 4
 Status: DOWN
 VLAN type: Native
 VLAN ID: 200

Zodiac_FX# show status
Device Status
 Firmware Version: 0.57
 CPU Temp: 37 C
 Uptime: 00:00:01

Zodiac_FX# show version
Firmware version: 0.57

Zodiac_FX# config

Zodiac_FX(config)# show config
 Name: Zodiac_FX
 MAC Address: 70:B3:D5:00:00:00
 IP Address:
 OpenFlow Controller:
 OpenFlow Port: 6633
 Openflow Status: Enabled
 Failstate: Secure
 Force OpenFlow version: Disabled
 Stacking Select: MASTER
 Stacking Status: Unavailable

Zodiac_FX(config)# show vlans
	VLAN ID		Name			Type
	100		'Openflow'		OpenFlow
	200		'Controller'		Native

Zodiac_FX(config)# exit

Zodiac_FX# openflow

Zodiac_FX(openflow)# show status
OpenFlow Status
 Status: Disconnected
 No tables: 1
 No flows: 0
 Table Lookups: 0
 Table Matches: 0

Zodiac_FX(openflow)# show flows
No Flows installed!

Zodiac_FX(openflow)# exit

We want to use the controller address on our PC and connect eth0 on the PC to Port 4 of the switch (probably by plugging them both into the same local area network).

Zodiac_FX# show ports
Port 4
 Status: UP
 VLAN type: Native
 VLAN ID: 200
debian:~ $ sudo ip addr add label eth0:zodiacfx dev eth0
debian:~ $ ip addr show label eth0:zodiacfx
    inet scope global eth0:zodiacfx
       valid_lft forever preferred_lft forever
debian:~ $ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=0.287 ms
64 bytes from icmp_seq=2 ttl=255 time=0.296 ms
64 bytes from icmp_seq=3 ttl=255 time=0.271 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.271/0.284/0.296/0.022 ms

Now to check the OpenFlow basics. We'll use the POX controller, which is a simple controller written in Python 2.7.

debian:~ $ git clone https://github.com/noxrepo/pox.git
debian:~ $ cd pox
debian:~ $ ./pox.py openflow.of_01 --address= --port=6633 --verbose
POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
DEBUG:core:POX 0.2.0 (carp) going up...
DEBUG:core:Running on CPython (2.7.9/Mar 8 2015 00:52:26)
DEBUG:core:Platform is Linux-4.1.19-v7+-armv7l-with-debian-8.0
INFO:core:POX 0.2.0 (carp) is up.
DEBUG:openflow.of_01:Listening on
INFO:openflow.of_01:[70-b3-d5-00-00-00 1] connected
Zodiac_FX(openflow)# show status
 Status: Connected
 Version: 1.0 (0x01)
 No tables: 1
 No flows: 0
 Table Lookups: 0
 Table Matches: 0

You can then load POX programs to manipulate the network. A popular first choice might be to turn the Zodiac FX into a flooding hub.

debian:~ $ ./pox.py --verbose openflow.of_01 --address= --port=6633 forwarding.hub
POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
INFO:forwarding.hub:Hub running.
DEBUG:core:POX 0.2.0 (carp) going up...
DEBUG:core:Running on CPython (2.7.9/Mar 8 2015 00:52:26)
DEBUG:core:Platform is Linux-4.1.19-v7+-armv7l-with-debian-8.0
INFO:core:POX 0.2.0 (carp) is up.
DEBUG:openflow.of_01:Listening on
INFO:openflow.of_01:[70-b3-d5-00-00-00 1] connected
INFO:forwarding.hub:Hubifying 70-b3-d5-00-00-00
Zodiac_FX(openflow)# show flows
Flow 1
  Incoming Port: 0			Ethernet Type: 0x0000
  Source MAC: 00:00:00:00:00:00		Destination MAC: 00:00:00:00:00:00
  VLAN ID: 0				VLAN Priority: 0x0
  IP Protocol: 0			IP ToS Bits: 0x00
  TCP Source Address:
  TCP Destination Address:
  TCP/UDP Source Port: 0		TCP/UDP Destination Port: 0
  Wildcards: 0x0010001f			Cookie: 0x0
  Priority: 32768			Duration: 9 secs
  Hard Timeout: 0 secs			Idle Timeout: 0 secs
  Byte Count: 0			Packet Count: 0
  Action 1:
   Output: FLOOD

If we now send a packet into Port 1 we see it flooded to Port 2 and Port 3.

We also see it flooded to Port 4 (which is in 'native' mode). Flooding the packet up the same port as the OpenFlow controller isn't a great design choice. It would be better if the switch had four possible modes for ports with traffic kept distinct between them: native switch forwarding, OpenFlow forwarding, OpenFlow control, and switch management. The strict separation of forwarding, control and management is one of the benefits of software defined networks (that does lead to questions around how to bootstrap a remote switch, but the Zodiac FX isn't the class of equipment where that is a realistic issue).

VLANs between ports only seem to matter for native mode. An OpenFlow program can — and will — happily ignore the port's VLAN assignment.

The Zodiac FX is currently an OpenFlow 1.0 switch, so it can manipulate MAC addresses but not other packet headers. That still gives a surprising number of applications. Northbound Networks say OpenFlow 1.3 -- with its manipulation of IP addresses -- is imminent.

The Zodiac FX is an interesting bit of kit. It is well worth buying one even at this early stage of development because it is much better for getting your hands dirty (and thus learning) than software-only simulated OpenFlow networks.

The source code is open source. It is on Github in some Atmel programming workbench format [Errata: these were some Microsoft Visual Studio 'solution' files]. I suppose it's time to unpack that, see if there's a free software Atmel toolchain, and set about fixing this port mode bug. I do hope simple modification of the switch's software is possible: a switch to teach people OpenFlow is great; a switch to teach people embedded network programming would be magnificent.


Moments in Linux history: Pentium II

One neglected moment in Linux history was the arrival of the Pentium II processor with Deschutes core in 1998. Intel had been making capable 32-bit processors since the 80486, but these processors were handily outperformed by Alpha, MIPS and SPARC. The Pentium II 450MHz turned the tables. These high-end PCs easily outperformed the MIPS- and SPARC-based workstations and drew level with the much more expensive Alpha.

UNIX™ users looking to update their expensive workstations looked at a high-end PC and thought "I wonder if that runs Unix?". Inserting a Red Hat Linux 6.0 CD into the drive slot and installing the OS led to the discovery of a capable and mature operating system, a better Unix than the UNIX™ they had been using previously. Within a few years the majority of UNIX™ systems administrators were familiar with Linux, because they were running it on their own workstations, whatever Unixen they were administering over SSH.

This familiarity in turn led to an appreciation of Linux's stability. When it was time to field new small services — such as DNS and DHCP — it was financially attractive to serve these from a Linux platform rather than a UNIX™ platform. Moreover the Linux distributors did a much better job of packaging the software which people used, whereas the traditional Unix manufacturers took a "not invented here" attitude: shipping very old versions of software such as DNS servers, and making users download and compile simple tools rather than having the tools pre-packaged for simple installation.

The Linux distributors did such a good job that it was much easier to run a web site from Linux than from Windows. The relative importance of these 'Internet' applications was missed by a Microsoft keen to dominate the 'enterprise' market. Before 1999 the ambition of Microsoft to crush the Unixen looked likely. After 2000 that ambition was unrealistic hubris.


Raspberry Pi 3 performance, power and heat

When you order a Raspberry Pi 3 then do yourself a favour and also order the matching 5.1VDC 2.5A power supply (eg: STONTRONICS T5875DV, Element 14 item 2520785). The RPi3 is four cores of 64-bit ARM with an impressive GPU -- that's a lot to power. If you present it with too little power the circuitry will make the red "power" LED blink and the software will reduce the CPU's clock rate.

You'll notice the clever use of tolerances to allow the RPi3 power supply to charge a phone, as you might expect from its Micro USB connector (5.0V + 10% = 5.5V, 5.1V + 5% ≅ 5.4V). The cable on the RPi3 power supply has an impressive amount of copper, so they are serious about avoiding voltage drop due to thin cables.

You can argue that this is poor design, that the RPi should really use one of the higher power delivery solutions designed for mobile phones. But with Google, Apple and Samsung all choosing different solutions? Whatever the RPi's designers chose to do then most purchasers would have to buy the matching power supply. At least this design is simple for makers and hobbyists to power the RPi3 (simply provide the specified voltage and current, no USB signalling is needed).

The RPi3 will also slow down when it gets too hot; this is called throttling and is a feature of all modern CPUs. People are currently experimenting with heat sinks. Even a traditional aluminium 10mm heat sink seems to make a worthwhile difference in preventing throttling on CPU-intensive tasks; although how often such tasks occur in practice is another question. The newer ceramic heat sinks are about four times more effective than the classic black aluminium heat sinks, so keep your eyes out for someone offering a kit of those for the RPi3. This is a further complication when looking at cases, as the airflow through most RPi2 cases is quite poor. I've simply taken a drill to the plastic RPi2 case I am using, although there are ugly industrial cases and expensive attractive cases with good airflow.

Further reading: Raspberry Pi 3 Cooling / Heat Sink Ideas, Pi3B thermal throttling.

Academic publishing, now being tried by the bottom-feeders

There are a lot of fake academic journals out there seeking to defraud authors. Not really surprising: academic publishing has such a high profit margin that even established publishers have a whiff of running a scam[1], open access has blurred the edge of what a journal is, and the sharks have moved in.

But now it seems that even scammers who once may have been Nigerian princes are now trying their hand:

From: Kate .M. (editor)
Sent: Friday, 28 August 2015 9:21 AM
Subject: Assist in Peer-Reviewing Research Papers

Dear Professor,

Thank you for your time for reading this mail. Science Publication wishes to invite you to become our Journal Review Board member.

Your professional expertise will be greatly appreciated by us as well as authors who have submitted their research manuscript for peer-review evaluation and publication.

Our journal deals on the following key studies:

Microbiology | Biochemistry | Medicine and Clinical Trials | Biotechnology | Agricultural Research and Management | Physics | Mathematics and Statistics | Pure and Applied Chemistry | Environmental Engineering Research | Electrical and Electronic Engineering | Civil Engineering and Architecture | Chemical Engineering Research | Economics | Business Management | Psychology | Sociology and Anthropology |

Please inform us of your interest to participate in our Review Board. More information will be provided to you upon your reply.

Thank you.
Assistant Editor
Science Journal Publication

NOTE: Simply Send A Blank Message With Unsubscribe As Subject to Remove Your E-mail From Our List.

For the record: not a professor.

[1] Someone else pays for research to be done. Peer review and editorial work is done for free. "Page fees" charged to authors cover the publishing costs. The journal's subscription revenue is handsome profit.

Customising a systemd unit file

Once in a while you want to start a daemon with differing parameters from the norm.

For example, the default parameters to Fedora's packaging of ladvd give too much access to unauthenticated remote network units when it allows those units to set the port description on your interfaces[1]. So let's use that as our example.

Systemd unit files in /etc/systemd/system/ shadow those in /usr/lib/systemd/system/. So we could copy the ladvd.service unit file from /usr/lib/... to /etc/..., but we're old, experienced sysadmins and we know that this will lead to trouble in the long run: /usr/lib/systemd/system/ladvd.service will be updated to support some new systemd feature and we'll miss that update in the copy of the file.

What we want is an "include" command which will pull in the text of the distributor's configuration file. Then we can set about changing it. Systemd has a ".include" command. Unfortunately its parser also checks that some commands occur exactly once, so we can't modify those commands as including the file consumes that one definition.

In response, systemd allows a variable to be cleared; when the variable is set again it is counted as being set once.

Thus our modification of ladvd.service occurs by creating a new file /etc/systemd/system/ladvd.service containing:

.include /usr/lib/systemd/system/ladvd.service
[Service]
# Clear the variable so that it counts as being set just once below.
ExecStart=
# was ExecStart=/usr/sbin/ladvd -f -a -z
# but -z allows string to be passed to kernel by unauthed external user
ExecStart=/usr/sbin/ladvd -f -a
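Later systemd releases deprecate .include in favour of drop-in directories, which shadow individual settings without copying or including the distributor's unit. A sketch of the equivalent drop-in (the directory name follows from the unit name; the ladvd options are as above):

```shell
mkdir -p /etc/systemd/system/ladvd.service.d
cat > /etc/systemd/system/ladvd.service.d/local.conf <<'EOF'
[Service]
# Clear the distributor's ExecStart, then supply our own without -z.
ExecStart=
ExecStart=/usr/sbin/ladvd -f -a
EOF
systemctl daemon-reload
```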

[1] At the very least, a security issue equal to the "rude words in SSID lists" problem. At its worst, an overflow attack vector.


Configuring Zotero PDF full text indexing in Debian Jessie


Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.

The competing product is EndNote. Frankly, EndNote belongs to a previous era of research methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.

Anyway, this article isn't a Zotero versus EndNote smackdown; there are plenty of those on the web. This article shows how to configure Zotero's full text indexing on the RaspberryPi and other Debian machines.

Installing Zotero

There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)

Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project you may one day have trouble with Firefox and someone will suggest clearing your Firefox profile, and Firefox once again works fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry before carefully trying to re-sync.

So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero on Firefox press the "Z" button, press the Gear icon, select "Preferences" from the dropbox menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.

I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.

After making the change quit from Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files

Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.

Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.

The page does have instructions for manual configuration but the page lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian

Install the pdftotext and pdfinfo programs:

$ sudo apt-get install poppler-utils

Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l

In the Zotero data directory create a symbolic link to the installed programs. The printed kernel-name and machine are part of the link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)

Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O redirect.sh https://raw.githubusercontent.com/zotero/zotero/4.0/resource/redirect.sh
$ chmod a+x redirect.sh

Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version

Start Firefox and open Zotero's gear icon, "Preferences", "Search"; it should report something like:

PDF indexing
  pdftotext version 0.26.5 is installed
  pdfinfo version 0.26.5 is installed

Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.


Notes for upgrading RaspberryPi from Raspbian Wheezy to Raspbian Jessie

Debian distributions for the Raspberry Pis

The Raspbian distribution is Debian recompiled and tuned for the ARM instruction set used in the original Raspberry Pi Model A, Model B, and Model B+.

The Raspberry Pi2 has a more recent ARM instruction set. That gives RaspberryPi2 users two paths to Debian Jessie: use the Raspbian distribution or use the stock Debian ARM distribution with a hack for the Raspberry Pi kernel.

This article is about upgrading an existing Raspbian Wheezy distribution to Raspbian Jessie. Some Linux systems administration skill is required to do this.

Alter /etc/apt/sources.list

Edit the files /etc/apt/sources.list and /etc/apt/sources.list.d/*.list replacing every occurrence of "wheezy" with "jessie".

For example, if /etc/apt/sources.list says:

deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi

then alter that to:

deb http://mirrordirector.raspbian.org/raspbian/ jessie main contrib non-free rpi

Similarly /etc/apt/sources.list.d/raspi.list contained:

deb http://archive.raspberrypi.org/debian/ wheezy main

and this becomes:

deb http://archive.raspberrypi.org/debian/ jessie main

The repository described by the file /etc/apt/sources.list.d/collabora.list doesn't yet have a Jessie section.


The number of packages to upgrade will depend on how many packages you installed in addition to those which originally arrived with Raspbian Wheezy. Expect to download somewhere around 1GB of data.

Upgrades are best done from the old-fashioned text console. Press Ctrl-Alt-F1 and log in as root.


# apt-get update
# apt-get dist-upgrade
# apt-get autoremove

Do not reboot when those commands complete. We'll fix a few of the more common issues with the upgrade at this most convenient moment.

Correct errors

udev rules

There are two files containing syntax errors in /lib/udev/rules.d/ which cause udev to fail to start: 60-python-pifacecommon.rules and 60-python3-pifacecommon.rules. These files are not owned by any packages, which is a little annoying and naive of their authors. Rename them to stop udev attempting to read them and failing.

# cd /lib/udev/rules.d
# mv 60-python-pifacecommon.rules 60-python-pifacecommon.rules.failed
# mv 60-python3-pifacecommon.rules 60-python3-pifacecommon.rules.failed

ifplugd replaced by wicd

Networking of plugin interfaces is done using ifplugd in Wheezy. This is done using wicd in Jessie.

# apt-get purge ifplugd

systemd is the cgroups controller

The init system is done using System V-style scripts in Wheezy. This is done using systemd in Jessie.

Systemd uses control groups so that unexpected process stop is reported to systemd. In Linux a control group can only have one controlling process, which has to be systemd in Jessie. This isn't a poor outcome, as systemd makes a fine controller.

However if another process attempts to be the control groups controller then systemd can fail when starting processes. So remove any existing controllers:

# apt-get purge cgmanager cgroup-bin cgroup-tools libcgroup1

systemd is the init system

A package called systemd-shim allows other init systems to use logind and other programs, as systemd-shim provides just enough of systemd's function. Jessie uses systemd, so we don't need systemd-shim. Unfortunately the dist-upgrade seems to pull it in:

# apt-get purge systemd-shim

[Thanks to ktb on the RaspberryPi.org forums for correcting a typo here.]

Allow logging to the journal

systemd doubles the number of system loggers, by adding a new logger called journald. It can provide logs to the usual syslogd. However when debugging startup issues it can be useful to have journald write the files itself. To do this, create the directory /var/log/journal/.

journald keeps its logs in binary. Use journalctl -xb to see the logs of the current boot; the -b -1 parameter is handy to view the log of the previous boot.

Consider booting in single user mode

You might choose to give yourself a way to debug startup issues by having the kernel start in single user mode. Edit /boot/cmdline.txt, appending the text single.

The workflow here is:

  • Boot RPi.

  • Press Ctrl-D when asked for a password to enter single user mode. The boot will continue into multi-user mode.

  • If this hangs then press Ctrl+Alt+Del to shut down.

  • Restart RPi. This time provide the root password at the single user mode password prompt. Use journalctl to view the log of the previous boot and examine what went wrong.

  • Correct the error, and use shutdown -r now to try again from the top.

Once you have sorted issues during system init then remove the single phrase from /boot/cmdline.txt so that the system boots into multiuser mode.


Reboot and work through issues resulting from the upgrade.

Clean up

# apt-get clean

Journald is an enterprise-level logging solution, so it is keen on flushing data to disk. This radically increases the number of flash blocks written, and the resulting reduction in flash card lifetime isn't appreciated on the RPi. So it's probably best to remove /var/log/journal/* and allow journald to log to RAM and syslog instead.
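Journald can also be told that policy explicitly, rather than relying on the absence of the directory. A sketch of the relevant lines of /etc/systemd/journald.conf (Storage and ForwardToSyslog are standard journald.conf options):

```
[Journal]
# Keep the journal in RAM-backed /run/log/journal rather than flash.
Storage=volatile
# Keep passing messages to the traditional syslog daemon.
ForwardToSyslog=yes
```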

Fedora 21: automatic software updates

The way Fedora does automatic software updates has changed with the replacement of yum(8) with dnf(8).

Start by disabling yum's automatic updates, if installed:

# dnf remove yum-cron yum-cron-daily

Then install the dnf automatic update software:

# dnf install dnf-automatic

Alter /etc/dnf/automatic.conf to change the "apply_updates" line:

apply_updates = yes

Instruct systemd to run the updates periodically:

# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer

