
Introducing the meta-webkit Yocto layer.

Lately, in my daily work at Igalia, I have been learning to use Yocto / OpenEmbedded to create custom distributions and products targeting different embedded hardware. One of the goals was to create a kiosk-like browser based on a modern Web engine that is light (especially regarding RAM requirements) and fast.

WebKit fits these requirements perfectly, so I started checking the recipes available in the different Yocto layers for building WebKit. Unfortunately, the available WebKitGTK+ recipes were for ancient versions of the engine (without the WebKit2 multi-process model). This has changed recently, as the WebKitGTK+ recipe available in the oe-core layer has been updated to a new version. In any case, we needed this for an older release of Yocto, so I ended up creating a new layer.

I have also added some recipes for WebKitForWayland and released the result as meta-webkit. The idea is to collect recipes for building browsers and web engines based on WebKit here. For the moment it includes recipes for building the WebKitGTK+ and WebKitForWayland ports.

WebKitGTK+ vs WebKitForWayland

First, a few words about the main differences between these two ports of WebKit, so you have a better understanding of when to use one or the other.

Let’s start by defining both engines:

  • WebKit for Wayland port pairs the WebKit engine with the Wayland display protocol, allowing embedders to create simple and performant systems based on Web platform technologies. It is designed with hardware acceleration in mind, relying on EGL, the Wayland EGL platform, and OpenGL ES.
  • WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers. It offers WebKit’s full functionality and is useful in a wide range of systems from desktop computers to embedded systems like phones, tablets, and televisions.

From the definitions you may already guess some differences. WebKitForWayland focuses on simple and performant Web-oriented systems, while WebKitGTK+ targets any kind of product (complex or simple) that requires full Web integration.

WebKitForWayland is what you need when you want a very fast and light HTML5/JS runtime, capable of squeezing the hardware acceleration of your platform (Wayland EGL platform) up to maximum performance. But it is not suitable if you are thinking of building a general purpose browser on top of it. It lacks some features that are not needed for a Web runtime but that are desirable for a more generic browser.

Here you can see a video of several instances of WebKitForWayland running on the Weston ivi-shell, showing different demos (poster circle and several WebGL demos) on an Intel NUC (Intel Atom E3815, 1 core). The OS is a custom image built with Yocto 1.8 and this meta-webkit layer.

On the Weston terminal you can appreciate the low resource usage (CPU and RAM) despite several browser processes running the demanding demos at full speed. Link to the video on YouTube here.

In case you want to try it, you might want to build an image for your device with meta-webkit and Yocto (see below), or you can test the image that I built for the demo in the above video. You can flash it to a USB memory stick as follows, and boot it on any Intel x86_64 machine that has an Intel GPU:

# /dev/sdX is the device of your usb memory stick (for example /dev/sdc)
curl -s http://ftp.neutrino.es/webkitforwayland-yocto-ivi-demo/webkitforwayland-demo-ivi-weston-intel-corei7-64-20151019201929.hddimg.xz | xz -dc | dd bs=4k of=/dev/sdX
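Before running the dd command above, double check that you picked the right device; lsblk can help you identify the USB stick (the column list below is just one convenient choice, and device names will differ on your system):

# The TRAN column shows the transport bus; the USB stick shows up as "usb"
lsblk -o NAME,SIZE,MODEL,TRAN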

On the other hand, WebKitGTK+ allows you to build a custom browser on top of it. It has a rich and stable API and many goodies like support for plugins, an integrated web inspector, full toolkit support, etc. WebKitGTK+ also performs very well and consumes few resources; it can run on top of both Wayland and X11. But at the moment of writing this, the support for Wayland is still incomplete (we are working on it). So if possible we recommend that you base your product on the X11 backend. You could later migrate to Wayland without much effort once we complete the support for it.

Building a Yocto image with WebKitForWayland

The usual way to create an image with Yocto and WebKitForWayland is:

  • Setup the environment and source oe-init-build-env as usual.
  • Checkout the branch of meta-webkit that matches your Yocto/OE version (for example: fido).
    Note that fido (1.8) is the oldest supported version. If you are using the fido branch you will also need to add the meta-ruby layer that is available in meta-openembedded.
  • Add the path to the meta-webkit layer in your conf/bblayers.conf file (see the example snippet after this list).
  • Append the following lines to the conf/local.conf file:

    DISTRO_FEATURES_append = " opengl wayland"
    IMAGE_INSTALL_append = " webkitforwayland"

  • Then build the target image, for example:

    bitbake core-image-weston

  • Then boot the image on the target device and run the WebKitForWayland engine from a Weston terminal:

    WPELauncher http://postercircle.com
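As an illustration of the bblayers.conf step above, a minimal layer list could look like the following. The paths are assumptions that depend on where you cloned poky, meta-openembedded and meta-webkit, and meta-ruby is only needed on the fido branch:

BBLAYERS ?= " \
  /home/user/yocto/poky/meta \
  /home/user/yocto/poky/meta-yocto \
  /home/user/yocto/poky/meta-yocto-bsp \
  /home/user/yocto/meta-openembedded/meta-oe \
  /home/user/yocto/meta-openembedded/meta-ruby \
  /home/user/yocto/meta-webkit \
  "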

Building a Yocto image with WebKitGTK+

There are some things to take into account when building WebKitGTK+:

  • The package webkitgtk contains the shared libraries and the webkitgtk runtime.
  • The package webkitgtk-bin contains the MiniBrowser executable. This is a very basic browser built on top of the webkitgtk runtime, mainly used for testing purposes.
  • In the recipe there are several PACKAGECONFIG options that you can tune. For example, to enable WebGL support you can add the following to your conf/local.conf file:
    PACKAGECONFIG_pn-webkitgtk = "x11 webgl"
    

    Check the recipe source code to see all the available options (the snippet after this list shows another way to inspect them).

  • The name of the recipe is the same as the one available in oe-core (master), so you should select which version of webkitgtk you want to build. For example, to build 2.10.3 add to conf/local.conf:
    PREFERRED_VERSION_webkitgtk = "2.10.3"
    
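If you prefer not to read the recipe source, another way to inspect the PACKAGECONFIG flags is to dump the recipe environment with bitbake from an initialized build directory (a quick sketch; the grep pattern is only illustrative):

# Print the effective PACKAGECONFIG value and related variables as BitBake sees them
bitbake -e webkitgtk | grep '^PACKAGECONFIG'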

So, the usual way to create an image with Yocto and WebKitGTK+ is:

  • Setup the environment and source oe-init-build-env as usual.
  • Checkout the branch of meta-webkit that matches your Yocto/OE version (for example: fido).
    Note that fido (1.8) is the oldest supported version. If you are using the fido branch you will also need to add the meta-ruby layer that is available in meta-openembedded.
  • Add the following lines to your conf/local.conf file (for building the X11 backend of WebKitGTK+):

    DISTRO_FEATURES_append = " opengl x11"
    IMAGE_INSTALL_append = " webkitgtk-bin"

  • Then build the X11 image:

    bitbake core-image-sato

  • Then boot the image on the target device and run the included test browser from an X terminal (or any other browser that you have developed on top of the WebKitGTK+ API):

    MiniBrowser http://google.com

Further info

If you are new to Yocto / OpenEmbedded, a good starting point is to check out the documentation.

If you have any issue or doubt with the meta-webkit layer, please let me know (you can mail me at clopez@igalia.com or open an issue on GitHub).

Finally, if you need help integrating a Web engine into your product, you can also hire us. As maintainers of the GTK+ and WebKit for Wayland ports of WebKit, we have considerable experience creating, maintaining and optimizing WebKit ports.

WebKitGTK+ Performance Bot!

Lately I have been working on the WebKitGTK+ build bots that Igalia maintains.

We achieved several improvements, and the most interesting one is the performance bot. You can check the numbers at http://perf.webkit.org. Those numbers are not valid for comparing performance between the different WebKit ports (EFL/GTK+/Mac), because each of these performance bots runs on different hardware. The idea is to track the performance between revisions, and identify the commits that have a performance impact.

The performance bot we deployed is not very powerful (an i5 661), but it does the job. I took extra care to ensure that external factors don’t affect the performance results of the tests on the machine. This is a summary of those measures:

  • I disabled cron and every other daemon.
  • Gave the performance test processes very high priority (nice -19, ionice -c1; see the sketch after this list).
  • Disabled Hyper-Threading: while the tests run in sequential order, the JavaScript engine (JSC) or other parts of WebKit can be parallelized. HT is great if you want to squeeze the maximum performance out of your machine, but it can become a problem when you care about consistent performance results (or you care about real time).
  • By default the performance tests are run inside Xvfb, and this is not great if you are interested in the most realistic scenario possible. So I patched WebKit to allow running the tests on the native X display.
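As an illustration of the priority tweak above, this is roughly the kind of invocation that gives the test process the highest CPU and I/O priority (a sketch; the actual bot scripts may differ, and root is needed for a negative niceness and the real-time I/O class):

# nice -n -19: highest CPU priority; ionice -c 1: real-time I/O scheduling class
sudo nice -n -19 ionice -c 1 Tools/Scripts/run-perf-tests --platform=gtk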

These are some details of the bot we deployed:

  • The machine is an Intel Core i5 661 with 8GB of RAM.
  • As explained, we run the tests on a real X server:
    $ glxinfo | grep "render.*:"
    direct rendering: Yes
    OpenGL renderer string: Mesa DRI Intel(R) Ironlake Desktop 
    
  • The OS where the tests are run is Debian Jessie.
  • The DE is GNOME 3: I disabled every power-saving feature of GNOME (including the screensaver). Since the window of the browser that runs the tests is positioned at the top-left of the screen, I installed the extension Activities Configurator, which allows hiding the activities top bar and disabling the hot upper-left corner.

If you want to run the performance tests on your machine, this is as easy as:

Tools/Scripts/run-perf-tests --platform=gtk

If you want to run them on your X display, simply export the variable USE_NATIVE_XDISPLAY before:

export USE_NATIVE_XDISPLAY=1

If you don’t want to run the full suite (it takes no less than 2 hours), you can specify the test names (look in the PerformanceTests directory for the names).

For example, to run the Speedometer test (DoYouEvenBench was renamed to Speedometer):

Tools/Scripts/run-perf-tests --platform=gtk Speedometer/Full.html

Check the WebKit wiki for further information and keep an eye on http://perf.webkit.org: if you notice any performance regression, please let us know by opening a bug.

How to properly activate TRIM for your SSD on Linux: fstrim, lvm and dm-crypt

Unlike hard disk drives (HDDs), the NAND flash memory that makes up an SSD cannot overwrite existing data. This means that you first have to erase the old data before writing new data. Flash memory is divided into blocks, which are further divided into pages. The minimum write unit is a page, but the smallest erase unit is a block.

Data can be written directly into an empty page, but only whole blocks can be erased. Therefore, to reclaim the space taken up by invalid data, all the valid data from one block must be first copied and written into the empty pages of a new block. Only then can the invalid data in the original block be erased, making it ready for new valid data to be written.

Do you see the problem? This means that as time goes on, the SSD will internally fragment the valid pages among the different blocks, until it reaches a point where no empty pages are available. Then, every time the drive needs to write a page into a partially used block, it first has to copy the valid pages of that block to a buffer, erase the whole block, and finally rewrite the old pages along with the new one. So as time goes on the SSD performance degrades more and more, because every write has to go through a cycle of read-erase-modify-write. This is known as “write amplification”.

Without TRIM the disk is unable to know which blocks are in use by a file and which ones are marked as free space. This is because when a file is deleted, the only thing the OS does is mark the blocks that were used by the file as free inside the file system index; the OS won’t tell the disk about this. This means that over time the performance of the SSD will degrade more and more, and it doesn’t matter how much space you free, because the SSD won’t know about it.

What is TRIM? TRIM was invented to solve this problem. TRIM is the name of a command that the operating system can send to tell the SSD which blocks are free in the filesystem. The SSD uses this information to internally defragment the blocks and keep free pages available to be written quickly and efficiently.
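Before going any further you can check whether the drive itself advertises TRIM support. A quick sketch, assuming the SSD is /dev/sda:

# Non-zero DISC-GRAN and DISC-MAX values mean the device accepts discard/TRIM requests
lsblk --discard /dev/sda

# For SATA disks, hdparm can also report it
hdparm -I /dev/sda | grep TRIM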

How to activate TRIM on Linux? The first thing to know is that TRIM should be enabled on all the I/O abstraction layers. This means that if you have an ext4 partition on top of LVM, which in turn is on top of an encrypted volume with LUKS/dm-crypt, then you must enable support for TRIM in these three layers: the filesystem, LVM and dm-crypt. There is no point in enabling it at the filesystem level if you don’t also enable it on the other layers. The TRIM command has to be passed down from one layer to the next until it reaches the SSD.

  1. Enabling TRIM support on dm-crypt

    We simply have to add the option discard inside our crypttab:

    • $ cat /etc/crypttab

      # <target name>    <source device>    <key file>       <options>
      sda2_crypt         /dev/sda2          none             luks,discard
      

    Note: The usage of TRIM on dm-crypt could cause some security issues, like revealing which sectors of your disk are unused.

  2. Enabling TRIM support on LVM

    We have to enable the option issue_discards in the LVM configuration:

    • $ cat /etc/lvm/lvm.conf

      # [...]
      devices {
         # [...]
         issue_discards = 1
         # [...]
      }
      # [...]
      
  3. Enabling TRIM support on the file system

    This is the most interesting part. Most people simply add the “discard” option to the mount options in /etc/fstab. However, this means that every time you delete a file, the OS reports in real time to the SSD which blocks were occupied by that file and are no longer in use, and the SSD then has to defragment and erase those internal blocks, an operation that takes more time than desired.

    In order to optimize the performance of the SSD, I strongly advise you to avoid doing the TRIM operation in real time (whenever a file is deleted), because you would be putting an unnecessary extra amount of work on the SSD. In other words: you should not enable the discard option in fstab.

    Instead, what I recommend is to periodically run a script that tells the SSD which blocks are free with the command fstrim. Doing this operation daily or weekly is more than enough. This way we do not lose any performance due to TRIM when deleting files, and we periodically keep the SSD informed about the free blocks.

    Other advantages of the fstrim way are:

    • If you didn’t correctly enable TRIM support in the layers described above, you will receive an error when executing fstrim (see the manual check after this list). On the other hand, if you were using the discard option in fstab you wouldn’t receive any error, and you would end up thinking that you got TRIM working properly when you didn’t.
    • If you delete a file by mistake (you know it happens), you can recover it before anacron runs your fstrim script. On the other hand, if you were using the discard-at-fstab option you wouldn’t have any chance of recovering the file, because the OS would have told the SSD to TRIM those blocks as soon as you deleted the file, and the SSD would have irreversibly destroyed them.

    Here you have a simple script to run fstrim on the /, /boot and /home partitions, which can be scheduled to run periodically by anacron:

    • $ cat /etc/cron.weekly/dofstrim

      #! /bin/sh
      for mount in / /boot /home; do
      	fstrim $mount
      done
      

    It’s worth mentioning that both fstrim and the discard-at-fstab option only work on filesystems that implement the FITRIM ioctl. Today, such filesystems are:

    ~/kernel/linux (v3.8) $ grep -lr FITRIM fs/ | cut -d/ -f2 | sort | uniq | xargs echo
    btrfs ext3 ext4 gfs2 jfs ocfs2 xfs
    
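If you want a quick one-off check that TRIM really works end-to-end through all the layers described above, you can run fstrim by hand in verbose mode (a sketch; adjust the mount point to your setup):

# -v prints how many bytes were discarded; if any layer (filesystem, LVM,
# dm-crypt) lacks TRIM support, fstrim fails with an error instead
fstrim -v /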

Note: On most setups you will have to rebuild your initramfs with update-initramfs -u (Debian and derivatives) or dracut -f (Red Hat and derivatives) and reboot the machine after touching the configuration options of LVM or dm-crypt.

Update 25-Feb-2013: It seems that Fedora users with a dm-crypt volume will be affected by this problem: https://bugzilla.redhat.com/890533

Update 14-Aug-2014: The following script can be used to automatically detect and fstrim all filesystems that have TRIM support enabled.

  • #!/bin/sh
    #
    # To find which filesystems support TRIM, we check that DISC-MAX (discard max bytes)
    # is greater than zero. Check the discard_max_bytes documentation at
    # https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt
    #
    for fs in $(lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE | grep -E '^/.* [1-9]+.* ' | awk '{print $1}'); do
    	fstrim "$fs"
    done
    

Git: Copy a file or directory from another repository preserving the history

How to copy a file or directory from another GIT repository while preserving its history?

The Internet is full of magic formulas, each one more complex than the last.

Here I’m proposing a much simpler and faster one: make a git format-patch of the entire history of the file or subdirectory that we want, and then import it into the destination repository.

mkdir /tmp/mergepatchs
cd ~/repo/org
export reposrc=myfile.c #or mydir
git format-patch -o /tmp/mergepatchs $(git log $reposrc|grep ^commit|tail -1|awk '{print $2}')^..HEAD $reposrc
cd ~/repo/dest
git am /tmp/mergepatchs/*.patch
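If you want the imported files to end up under a subdirectory of the destination repository instead of at their original path, git am can forward the --directory option to git apply (third_party/mylib below is just an example target path):

git am --directory=third_party/mylib /tmp/mergepatchs/*.patch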

Simple and fast :)

IKEA Hackers: LackRack

Among the hundreds of hacks from the fantastic website IKEA Hackers, one is particularly interesting.

How to build a rack with an IKEA Lack table worth less than 10 euros?
Well, with this manual:

The best thing about the LackRack (after its price) is that its construction is modular and you can grow it to fit your needs:

5x LackRack

The LackRack is the ultimate, low-cost, high-shininess solution for your modular datacenter-in-the-living-room. It is said that Google engineers were the first to explore the idea of using Lack tables for data centers. The LackRack is so famous that it even has its own website: http://lackrack.org/ 😀

Unlocking a LUKS encrypted root partition remotely via SSH

If you are thinking of sending a new server to a remote datacenter for colocation, or you have rented one or more servers in the cloud, you have probably thought that you would like to encrypt your server’s hard disk.

The problem is that if you encrypt the whole hard disk (the root partition), you will need some kind of KVM to type the password remotely every time the server is restarted… sure??? No!

Thanks to this nifty trick, you can enter the password remotely during the boot process: it involves embedding a small SSH server (dropbear) in the initramfs, which lets you type in the passphrase for the root partition at boot time.

For those who are lucky enough to use Debian, the procedure is as simple and easy as this:

1) Install your server with the root partition encrypted.

2) Install the required packages:

apt-get install openssh-server dropbear busybox

3) Copy the SSH key that has been generated automatically

scp root@my.server.ip.addr:/etc/initramfs-tools/root/.ssh/id_rsa ~/id_rsa.initramfs

4) If your server gets its IP address automatically (DHCP), ignore this step; otherwise you have to specify the IP configuration on the kernel boot line. To do this, edit the file /etc/default/grub and define the line:

GRUB_CMDLINE_LINUX="ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>"

Using the format specified in the file Documentation/nfsroot.txt of the Linux kernel documentation. For example:

GRUB_CMDLINE_LINUX="ip=192.168.122.192::192.168.122.1:255.255.255.0::eth0:none"

Reload the grub configuration

update-grub

5) Reboot

reboot

6) And unlock remotely!

ssh -o "UserKnownHostsFile=~/.ssh/known_hosts.initramfs" \
	-i "~/id_rsa.initramfs" root@my.server.ip.addr \
	"echo -ne \"MyS3cr3tK3y\" >/lib/cryptsetup/passfifo"

And for those not lucky enough to use Debian, and also for those who are, but want more details on this procedure, I am pasting here the file cryptsetup/README.remote from Debian, which I am sure you will find very useful :)

$ zcat /usr/share/doc/cryptsetup/README.remote.gz

unlocking rootfs via ssh login in initramfs
-------------------------------------------

You can unlock your rootfs on bootup from remote, using ssh to log in to the
booting system while it's running with the initramfs mounted.


Setup
-----

For remote unlocking to work, the following packages have to be installed
before building the initramfs: dropbear busybox

The file /etc/initramfs-tools/initramfs.conf holds the configuration options
used when building the initramfs. It should contain BUSYBOX=y (this is set as
the default when the busybox package is installed) to have busybox installed
into the initramfs, and should not contain DROPBEAR=n, which would disable
installation of dropbear to initramfs. If set to DROPBEAR=y, dropbear will
be installed in any case; if DROPBEAR isn't set at all, then dropbear will only
be installed in case of an existing cryptroot setup.

The host keys used for the initramfs are dropbear_dss_host_key and
dropbear_rsa_host_key, both located in /etc/initramfs-tools/etc/dropbear/.
If they do not exist when the initramfs is compiled, they will be created
automatically. Following are the commands to create them manually:

# dropbearkey -t dss -f /etc/initramfs-tools/etc/dropbear/dropbear_dss_host_key
# dropbearkey -t rsa -f /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key

As the initramfs will not be encrypted, publickey authentication is assumed.
The key(s) used for that will be taken from
/etc/initramfs-tools/root/.ssh/authorized_keys.
If this file doesn't exist when the initramfs is compiled, it will be created
and /etc/initramfs-tools/root/.ssh/id_rsa.pub will be added to it.
If the latter file doesn't exist either, it will be generated automatically -
you will find the matching private key which you will later need to log in to
the initramfs under /etc/initramfs-tools/root/.ssh/id_rsa (or id_rsa.dropbear
in case you need it in dropbear format). Following are the commands to do the
respective steps manually:

To create a key (in dropbear format):

# dropbearkey -t rsa -f /etc/initramfs-tools/root/.ssh/id_rsa.dropbear

To convert the key from dropbear format to openssh format:

# /usr/lib/dropbear/dropbearconvert dropbear openssh \
	/etc/initramfs-tools/root/.ssh/id_rsa.dropbear \
	/etc/initramfs-tools/root/.ssh/id_rsa

To extract the public key:

# dropbearkey -y -f /etc/initramfs-tools/root/.ssh/id_rsa.dropbear | \
	grep "^ssh-rsa " > /etc/initramfs-tools/root/.ssh/id_rsa.pub

To add the public key to the authorized_keys file:

# cat /etc/initramfs-tools/root/.ssh/id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys

In case you want some interface to get configured using dhcp, setting DEVICE= in
/etc/initramfs-tools/initramfs.conf should be sufficient.  The initramfs should
also honour the ip= kernel parameter.
In case you use grub, you probably might want to set it in /boot/grub/menu.lst,
either in the '# kopt=' line or appended to specific 'kernel' line(s).
The ip= kernel parameter is documented in Documentation/nfsroot.txt in the
kernel source tree.


Issues
------

Don't forget to run update-initramfs when you changed the config to make it
effective!

Collecting enough entropy for the ssh daemon sometimes seems to be an issue.
Startup of the ssh daemon might be delayed until enough entropy has been
retrieved. This is non-blocking for the startup process, so when you are at the
console you won't have to wait for the sshd to complete its startup.


Unlocking procedure
-------------------

To unlock from remote, you could do something like this:

# ssh -o "UserKnownHostsFile=~/.ssh/known_hosts.initramfs" \
	-i "~/id_rsa.initramfs" root@initramfshost.example.com \
	"echo -ne \"secret\" >/lib/cryptsetup/passfifo"

This example assumes that you have an extra known_hosts file 
"~/.ssh/known_hosts.initramfs" which holds the cryptroot system's host-key,
that you have a file "~/id_rsa.initramfs" which holds the authorized-key for
the cryptroot system, that the cryptroot system's name is
"initramfshost.example.com", and that the cryptroot passphrase is "secret"

-- <debian@x.ray.net>, Wed, 30 Sep 2009

OpenRISC: Making the stack 100% free

OpenRISC is the flagship project of the OpenCores community. This project aims to develop a series of general purpose open source RISC CPU architectures.

The OpenRISC 1200 (OR1200) is a synthesizable CPU core that runs on FPGAs from a broad number of vendors, and is currently being used in a number of successful industrial projects. This CPU is an open source implementation (LGPL) of the OpenRISC 1000 RISC architecture.

These days the project has reached an important milestone: they have submitted their patches for review to the Linux kernel mailing list and hopefully will get them merged, so Linux will officially support the OpenRISC architecture.

This is exciting news!

If you have an FPGA board you can load it with the OpenRISC softcore and execute Linux on it (demo video).

But if you don’t have one, then you can try the nice OpenRISC emulator: it’s easy and fun.
Here is a link to the detailed howto, and here is my cheat sheet:

sudo apt-get install build-essential libmpc-dev lzop git-core
sudo apt-get build-dep gcc-4.4
mkdir ~/openrisc
cd ~/openrisc
git clone git://openrisc.net/jonas/or1ksim-svn
cd or1ksim-svn
./configure && make && sudo make install
cd ~/openrisc
git clone git://openrisc.net/jonas/toolchain
cd toolchain
git submodule update --init
# go for a coffee
export PATH=~/openrisc/toolchain/bin:$PATH
make PREFIX=~/openrisc/toolchain
cd linux
make ARCH=openrisc defconfig
make CROSS_COMPILE=or32-linux-
# go for a tea
sim -f arch/openrisc/or1ksim.cfg vmlinux

And voila! You are running a 100% free system, from the CPU to the Operating System, including the Wishbone bus :)

# cat /proc/cpuinfo
cpu                     : OpenRISC-18
revision                : 1
dcache size             : 8192 kB
dcache block size       : 16 bytes
icache size             : 8192 kB
icache block size       : 16 bytes
immu                    : 64 entries, 1 ways
dmmu                    : 64 entries, 1 ways
bogomips                : 40.00

As the OpenRISC 1000 is now considered stable, the OpenCores project is trying to build a cost-efficient ASIC with this design to get improved performance. They launched a call for donations in 2011 with the aim to produce the first ASIC in Q1 2012.

I would love to get one of these!

The open-source methodology is here to stay, just take a look at the success story behind Linux/Android, and all the great benefits that open-source offers. We now need to take the next step with open-source hardware development, meaning to actually create “true” open-source Semiconductor components in order to stay competitive, and to make sure that smaller companies can compete with large Semiconductor giants. So please join us and help us rewrite the history books for hardware development…

Don’t eject your USB storage: stop it!

This is probably one of the biggest mistakes I’ve dragged along for the longest time as a Linux user, partly for not reading the man pages and partly for assuming too much from the name of an executable.

When working with USB storage I was used to plugging it in, mounting it, doing some stuff with it, syncing, unmounting, ejecting and unplugging it. And it turns out that eject does not apply to USB storage, only to “ejectable” media: CDs/DVDs/floppies…

So, the right way to stop a USB storage device is to use:

sdparm --command=stop /dev/device

Anyway, I recommend you check out the following script, which does it very nicely, plus other useful things like unbinding the device. The script is here: suspend-usb-device

PS: I’m afraid that I was not the only unwary one out there “ejecting” USB storage before unplugging it, so I hope you find this tip useful hehe 😀

RFKill Applet for GNOME

I don’t like to use wireless devices if I can avoid them; I am a bit paranoid about these things, and I have also empirically found that whenever I use my laptop’s wireless for a long time I end up with headaches. If you are worried about this too, I encourage you to watch the documentary “surrounded by waves” (just google it).

One of the things that I miss on my Asus laptop is a hardware switch for turning the wireless and Bluetooth devices on and off. Months ago I discovered the rfkill utility (package wireless-tools on Debian/Ubuntu) and I loved it. I used to disable or enable the wireless devices from the terminal with this utility, but soon I realized that it was not comfortable to open a terminal and run the commands each time.
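For reference, these are the kinds of commands I was running from the terminal (the exact device types and indexes depend on your hardware):

rfkill list            # show all RF devices and their soft/hard block state
rfkill block all       # soft-block every RF device (wifi, bluetooth, ...)
rfkill unblock wifi    # re-enable only the wireless LAN devices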

So, as part of my hackfest time here at Igalia, I wrote an applet for GNOME that lets the user disable or enable the emission of radiation from RF devices with a simple click on the applet icon. I took the inspiration for this from the GNOME power manager inhibit applet.

It also polls the system for the status of the RF devices and updates the icon shown, so it also works as a notification applet: it displays the emitting icon if at least one of the RF devices of your system is active, and the inhibited icon if all RF devices are blocked. Finally, if a problem is found or you don’t have RF devices on your system, it displays the unknown icon.

GNOME RFKill Applet

You can get it from gitorious.

I hope you find it useful 😉

Auth External: the Swiss Army Knife of Apache Auth

When you need to deploy a custom, weird or non-typical auth scheme with Apache, knowing about the Auth External module can save you tons of time.

It is commonly deployed along with pwauth to allow Unix users to authenticate against Apache, but this is only one use case. Actually, it can be used for almost any auth setup that you can imagine.

The main idea behind it is very simple and powerful: it delegates Apache authentication to any program or script that you want. It simply feeds your program the user/password variables along with other interesting variables like IP, cookies, URI, etc., and your program must return zero if the user is allowed access or non-zero otherwise.

I will show you a simple example.

“I have my home server where I run an LDAP directory with some users, and I also have a remote server at a hosting company where I want to allow the users of my LDAP directory to access some HTTPS directory protected by Apache authentication.

Accessing the LDAP directory from the remote server is not an option because I have a firewall protecting it and I can’t/don’t want to do such a thing. And configuring a slave LDAP server on the remote server is an even worse option.”

So here is where Auth External comes to the rescue.

On my home server I configure a dummy HTTPS virtual host that authenticates users via the local LDAP:

<VirtualHost *:443>
    
    SSLEngine on
    
    ServerName   dummy-https.homeserver.com
    ServerAdmin  clopez@igalia.com
    
    TransferLog  /var/log/apache2/dummy-https-access.log
    ErrorLog     /var/log/apache2/dummy-https-error.log
        
    <Directory /var/www/empty>
        AuthName Auth
        AuthType Basic
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative On
        AuthLDAPURL "ldap://localhost/dc=homeserver,dc=com?uid"
        AuthLDAPGroupAttribute uniqueMember
        AuthLDAPGroupAttributeIsDN On
        Require valid-user
        Order allow,deny
        Allow from all
    </Directory>

</VirtualHost>

And on the remote server I just implement a very simple script that reads a user and a password string from stdin and tries to establish an HTTPS connection to my Apache home server using the supplied credentials. It returns zero if it can establish the connection, otherwise it returns 1:

#! /bin/sh
read user
read password
/usr/bin/curl -f -u "$user":"$password" https://dummy-https.homeserver.com >/dev/null 2>&1 || exit 1
exit 0
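You can check the script by hand before wiring it into Apache (here I assume you saved it locally as home-ssl.sh; the credentials are of course hypothetical):

# Feed a username and a password on stdin; exit code 0 means the remote
# dummy vhost accepted the credentials
printf 'alice\ns3cr3t\n' | sh ./home-ssl.sh; echo "exit code: $?"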

I saved this script as /usr/local/bin/home-ssl.sh. Now I will install the Auth External module and enable it:

apt-get install libapache2-mod-authnz-external
a2enmod authnz_external

Note: for Debian Lenny, grab the backports package.

And now I will configure the Apache virtual host with the auth external authentication:

<VirtualHost *:443>
    
    SSLEngine on
    
    ServerName   securearea.hostingprovider.com
    ServerAdmin  clopez@igalia.com
    
    TransferLog  /var/log/apache2/securearea.hostingprovider-access.log
    ErrorLog     /var/log/apache2/securearea.hostingprovider-error.log
      
    DocumentRoot /var/www/securearea
        
    <Directory /var/www/securearea>
        AuthType Basic
        AuthName "Restricted"
        AuthBasicProvider external
        AuthExternal home-ssl
        require valid-user
    </Directory>

    AddExternalAuth home-ssl /usr/local/bin/home-ssl.sh
    SetExternalAuthMethod home-ssl pipe

</VirtualHost>

In summary:

  • The user accesses https://securearea.hostingprovider.com/
  • Apache@hostingprovider asks the user for credentials
  • Apache@hostingprovider calls /usr/local/bin/home-ssl.sh and feeds it the user credentials
  • /usr/local/bin/home-ssl.sh calls Apache@homeserver at https://dummy-https.homeserver.com
  • Apache@homeserver checks the user credentials against the local LDAP database
  • If the credentials are correct, Apache@homeserver allows /usr/local/bin/home-ssl.sh access, making it return zero, so the user is allowed to access the secure area at Apache@hostingprovider
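Once everything is in place you can test the whole chain from any client with curl (again, hypothetical credentials):

# Returns the protected content when the LDAP credentials are valid,
# and an HTTP 401 error otherwise
curl -u alice:s3cr3t https://securearea.hostingprovider.com/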

Want to write an external authenticator? Here you have a quick howto. Enjoy!