Introducing the meta-webkit Yocto layer.

Lately, in my daily work at Igalia, I have been learning to use Yocto / OpenEmbedded to create custom distributions and products targeting different embedded hardware. One of the goals was to create a kiosk-like browser based on a modern Web engine that is light (especially regarding RAM requirements) and fast.

WebKit fits these requirements perfectly, so I started checking the recipes available on the different Yocto layers for building WebKit. Unfortunately, the available WebKitGTK+ recipes were for ancient versions of the engine (no WebKit2 multi-process model). This has changed recently, as the WebKitGTK+ recipe available in the oe-core layer has been updated to a new version. In any case, we needed this for an older release of Yocto, so I ended up creating a new layer.

I have also added some recipes for WebKitForWayland and released everything as meta-webkit. The idea is to collect in this layer recipes for building browsers and web engines based on WebKit. For the moment it includes recipes for building the WebKitGTK+ and WebKitForWayland ports.

WebKitGTK+ vs WebKitForWayland

First, a few words about the main differences between these two ports of WebKit, so you have a better understanding of where to use one or the other.

Let’s start by defining both engines:

  • WebKit for Wayland port pairs the WebKit engine with the Wayland display protocol, allowing embedders to create simple and performant systems based on Web platform technologies. It is designed with hardware acceleration in mind, relying on EGL, the Wayland EGL platform, and OpenGL ES.
  • WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers. It offers WebKit’s full functionality and is useful in a wide range of systems from desktop computers to embedded systems like phones, tablets, and televisions.

From the definitions you may already guess some differences: WebKitForWayland focuses on simple and performant Web-oriented systems, while WebKitGTK+ focuses on any kind of product (complex or simple) that requires full Web integration.

WebKitForWayland is what you need when you want a very fast and light HTML5/JS runtime, capable of squeezing the maximum performance out of the hardware acceleration of your platform (the Wayland EGL platform). But it is not suitable if you are thinking of building a general-purpose browser on top of it: it lacks some features that are not needed for a Web runtime but are desirable for a more generic browser.

Here you can see a video of WebKitForWayland running on the Weston ivi-shell, with several instances of WebKitForWayland showing different demos (poster circle and several WebGL demos) on an Intel NUC (Intel Atom E3815, 1 core). The OS is a custom image built with Yocto 1.8 and this meta-webkit layer.

On the Weston terminal you can appreciate the low resource usage (CPU and RAM) despite having several browser processes running the demanding demos at full speed. Link to the video on YouTube here.

In case you want to try it, you might want to build an image for your device with meta-webkit and Yocto (see below), or you can test the image that I built for the demo in the above video. You can flash it to a USB memory stick as follows, and boot it on any Intel x86_64 machine that has an Intel GPU:

# /dev/sdX is the device of your usb memory stick (for example /dev/sdc)
curl -s http://ftp.neutrino.es/webkitforwayland-yocto-ivi-demo/webkitforwayland-demo-ivi-weston-intel-corei7-64-20151019201929.hddimg.xz | xz -dc | dd bs=4k of=/dev/sdX

On the other hand, WebKitGTK+ allows you to build a custom browser on top of it. It has a rich and stable API and many goodies like support for plugins, an integrated web inspector, full toolkit support, etc. WebKitGTK+ also performs very well and consumes few resources; it can run on top of both Wayland and X11. But at the moment of writing this, the support for Wayland is still incomplete (we are working on it), so if possible we recommend that you base your product on the X11 backend. You could later migrate to Wayland without much effort once we complete the support for it.

Building a Yocto image with WebKitForWayland

The usual way to create an image with Yocto and WebKitForWayland is:

  • Setup the environment and source oe-init-build-env as usual.
  • Checkout the branch of meta-webkit that matches your Yocto/OE version (for example: fido).
    Note that fido (1.8) is the oldest version supported. If you are using the fido branch you will also need to add the meta-ruby layer that is available in meta-openembedded.
  • Add the path to the meta-webkit layer in your conf/bblayers.conf file (see the sketch after this list).
  • Append the following lines to the conf/local.conf file:

    DISTRO_FEATURES_append = " opengl wayland"
    IMAGE_INSTALL_append = " webkitforwayland"

  • Then build the target image, for example:

    bitbake core-image-weston

  • Then boot the image on the target device and run the WebKitForWayland engine from a Weston terminal:

    WPELauncher http://postercircle.com
    
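For reference, this is roughly what the conf/bblayers.conf addition looks like. The exact paths depend on where you checked out each layer, so take this as a sketch rather than a drop-in file:

# conf/bblayers.conf (sketch; /home/user/yocto is a placeholder path)
BBLAYERS ?= " \
  /home/user/yocto/poky/meta \
  /home/user/yocto/poky/meta-yocto \
  /home/user/yocto/meta-openembedded/meta-oe \
  /home/user/yocto/meta-openembedded/meta-ruby \
  /home/user/yocto/meta-webkit \
  "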

Building a Yocto image with WebKitGTK+

There are some things to take into account when building WebKitGTK+:

  • The package webkitgtk contains the shared libraries and the webkitgtk runtime.
  • The package webkitgtk-bin contains the MiniBrowser executable. This is a very basic browser built on top of the webkitgtk runtime, mainly used for testing purposes.
  • The recipe has several PACKAGECONFIG options that you can tune. For example, to enable WebGL support you can add the following to your conf/local.conf file:
    PACKAGECONFIG_pn-webkitgtk = "x11 webgl"
    

    Check the recipe source code to see all the available options.

  • The name of the recipe is the same as the one available in oe-core (master), so you should select which version of webkitgtk you want to build. For example, to build 2.10.3 add this to conf/local.conf:
    PREFERRED_VERSION_webkitgtk = "2.10.3"
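
    Putting the two options above together, the relevant conf/local.conf fragment could look like this (just a sketch combining the previous examples; adjust the version and options to your needs):

    # conf/local.conf fragment (sketch)
    PREFERRED_VERSION_webkitgtk = "2.10.3"
    PACKAGECONFIG_pn-webkitgtk = "x11 webgl"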
    

So, the usual way to create an image with Yocto and WebKitGTK+ is:

  • Setup the environment and source oe-init-build-env as usual.
  • Checkout the branch of meta-webkit that matches your Yocto/OE version (for example: fido).
    Note that fido (1.8) is the oldest version supported. If you are using the fido branch you will also need to add the meta-ruby layer that is available in meta-openembedded.
  • Add the path to the meta-webkit layer in your conf/bblayers.conf file, as in the previous section.
  • Add the following lines to your conf/local.conf file (for building the X11 backend of WebKitGTK+):

    DISTRO_FEATURES_append = " opengl x11"
    IMAGE_INSTALL_append = " webkitgtk-bin"

  • Then build the X11 image:

    bitbake core-image-sato

  • Then boot the image on the target device and run the included test browser from an X terminal (or any other browser that you have developed on top of the WebKitGTK+ API):

    MiniBrowser http://google.com
    

Further info

If you are new to Yocto / OpenEmbedded, a good starting point is to check out the documentation.

If you have any issue or doubt with the meta-webkit layer, please let me know (you can mail me at clopez@igalia.com or open an issue on GitHub).

Finally, if you need help integrating a Web engine into your product, you can also hire us. As maintainers of the GTK+ and WebKitForWayland ports of WebKit, we have considerable experience creating, maintaining and optimizing ports of WebKit.

WebKitGTK+ Performance Bot!

Lately I have been working on the WebKitGTK+ build bots that Igalia maintains.

Among the improvements we achieved, the most interesting one is the performance bot. You can check the numbers at http://perf.webkit.org. Those numbers are not valid for comparing the performance of the different WebKit ports (EFL/GTK+/Mac), because each of these performance bots runs on different hardware. The idea is to track the performance between revisions, and to identify the commits that have a performance impact.

The performance bot we deployed is not very powerful (i5 661), but it does the job. I took extra care to ensure that external factors don't affect the performance results of the tests on the machine. This is a summary of those measures:

  • I disabled cron and every other daemon.
  • Gave the performance test processes very high priority (nice -19, ionice -c1); see the example after this list.
  • Disabled Hyper-Threading: while the tests run in sequential order, the JavaScript engine (JSC) and other parts of WebKit can be parallelized. HT is great if you want to squeeze the maximum performance out of your machine, but it can become a problem when you care about consistent performance results (or about real time).
  • By default the performance tests run inside Xvfb, which is not great if you are interested in the most realistic scenario possible, so I patched WebKit to allow running the tests on the native X display.
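
As an illustration of the priority setup, the test runner can be launched along these lines (a sketch; the exact wrapper used on the bot may differ):

# nice -n -19 requests the highest CPU scheduling priority, and ionice -c1
# selects the real-time I/O scheduling class; both require root privileges
sudo nice -n -19 ionice -c1 Tools/Scripts/run-perf-tests --platform=gtk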

These are some details of the bot we deployed:

  • The machine is an Intel Core i5 661 with 8GB of RAM.
  • As explained, we run the tests on a real X server:
    $ glxinfo | grep "render.*:"
    direct rendering: Yes
    OpenGL renderer string: Mesa DRI Intel(R) Ironlake Desktop 
    
  • The OS where the tests run is Debian Jessie.
  • The DE is GNOME 3. I disabled every power-saving feature of GNOME (including the screensaver). Since the window of the browser that runs the tests is positioned at the top-left of the screen, I installed the Activities Configurator extension, which allows hiding the activities top bar and disabling the hot upper-left corner.

If you want to run the performance tests on your machine, this is as easy as:

Tools/Scripts/run-perf-tests --platform=gtk

If you want to run them on your X display, simply export the variable USE_NATIVE_XDISPLAY first:

export USE_NATIVE_XDISPLAY=1

If you don't want to run the full suite (it takes no less than 2 hours), you can specify individual test names (look in the PerformanceTests directory for the names).

For example, to run the Speedometer test (DoYouEvenBench was renamed to Speedometer):

Tools/Scripts/run-perf-tests --platform=gtk Speedometer/Full.html

Check the WebKit wiki for further information and keep an eye on http://perf.webkit.org: if you notice any performance regression, please let us know by opening a bug.

How to connect to .onion Tor domains with Firefox on Linux

So you are looking for an easy way of surfing the hidden network of .onion domains? Many people will probably use one of the available proxies to reach .onion domains. Unfortunately this is not a good idea for your privacy, since the proxy owner can spy on all your communications.

So, I'm going to explain in 3 easy steps how to configure Firefox to access .onion domains directly through your local Tor daemon. This way, we ensure all that traffic goes straight to the Tor network.

  1. Install the Tor daemon.

    On Debian/Ubuntu Linux distributions this is as easy as:

    • sudo apt-get install tor

  2. Configure Firefox to tunnel requests to .onion domains via the Tor network.

    This is the tricky part, because we only want Firefox to tunnel requests to .onion domains via the Tor network; we don't want Firefox to tunnel all the other requests through it. To achieve this we are going to use a proxy auto-configuration (PAC) file.

    • Save the following content in a file (for example as ~/.proxy_pac):

      function FindProxyForURL(url, host) {
          // Tunnel *.onion hosts through the local Tor SOCKS proxy
          var tor = "SOCKS 127.0.0.1:9050";
          if (shExpMatch(host, "*.onion")) {
              return tor;
          }
          // Everything else connects directly, without Tor
          return "DIRECT";
      }
      
    • And configure Firefox to use that file as a proxy configuration file (Edit->Preferences->Advanced->Network->Settings->Automatic proxy configuration URL).
      • You have to use the following syntax: file://absolute-path-to-the-file
      • If you are not sure, just open a new Firefox window and type file:// in the browser location bar (the place where you type web addresses). Now browse your disk to where you saved the file, right-click on it and select Copy Link Location. Then paste that link location into the Automatic proxy configuration URL setting.
  3. Configure Firefox to tunnel DNS queries via the SOCKS5 proxy.

    The last step is to tell Firefox that it should tunnel DNS lookups via the Tor SOCKS5 proxy when accessing a .onion domain. By default Firefox will try to resolve .onion domains using the local DNS resolver, which will fail.
    To fix this, we should enable network.proxy.socks_remote_dns on the advanced configuration page:

    • In the browser location bar (the place where you type web addresses), type about:config and press Enter. This opens a different set of Firefox preferences. Where it says Search: at the top, type network.proxy.socks. The list of preferences will automatically change to show your proxy preferences.
      Highlight network.proxy.socks_remote_dns by clicking it only once. Then, right-click it. This opens a small pull-down menu. Select Toggle from the menu to change its value to true.

    This will make Firefox tunnel all queries to .onion domains via our local Tor daemon. It also adds privacy by preventing DNS queries for .onion domains from leaking.

Now restart Firefox and you should be able to surf .onion domains directly.
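
If something doesn't work, a quick sanity check (my suggestion, not part of the original steps) is to verify that the Tor daemon is actually listening on its default SOCKS port:

# Tor's SOCKS proxy listens on 127.0.0.1:9050 by default
netstat -tln | grep 9050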

How to properly activate TRIM for your SSD on Linux: fstrim, lvm and dm-crypt

Unlike hard disk drives (HDDs), the NAND flash memory that makes up SSDs cannot overwrite existing data. This means that you first have to erase the old data before writing new data. Flash memory is divided into blocks, which are further divided into pages. The minimum write unit is a page, but the smallest erase unit is a block.

Data can be written directly into an empty page, but only whole blocks can be erased. Therefore, to reclaim the space taken up by invalid data, all the valid data from one block must be first copied and written into the empty pages of a new block. Only then can the invalid data in the original block be erased, making it ready for new valid data to be written.

Do you see the problem? As time goes on, the SSD internally fragments the data among the blocks, until it reaches a point where no empty pages are available. Then, every time the drive needs to write a page into a partially-used block, it first has to copy the valid pages of that block to a buffer, erase the whole block, and finally rewrite the old pages along with the new one. This means that over time the SSD's performance degrades more and more, because every write has to go through a cycle of read-erase-modify-write. This is known as "write amplification".

Without TRIM, the disk has no way to know which blocks are in use by a file and which ones are free space. This is because when a file is deleted, the only thing the OS does is mark the blocks that were used by the file as free in the file system index; it doesn't tell the disk about it. This means that over time the performance of the SSD will degrade more and more, and it doesn't matter how much space you free, because the SSD won't know about it.

What is TRIM? TRIM was invented to solve this problem. TRIM is the name of a command that the operating system can send to tell the SSD which blocks are free in the filesystem. The SSD uses this information to internally defragment the blocks and keep free pages available, ready to be written quickly and efficiently.

How to activate TRIM on Linux? The first thing to know is that TRIM must be enabled on all I/O abstraction layers. This means that if you have an ext4 partition on top of LVM, which in turn is on top of an encrypted volume with LUKS/dm-crypt, then you must enable TRIM support in all three layers: the filesystem, LVM and dm-crypt. There is no point in enabling it at the filesystem level if you don't also enable it in the other layers; the TRIM command must be passed down from one layer to the next until it reaches the SSD.
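
Before touching any configuration, you can check which layers currently accept discards (a suggestion of mine; it needs a reasonably recent util-linux):

# A DISC-GRAN/DISC-MAX of zero means that layer does not pass discards down
lsblk --discard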

  1. Enabling TRIM support on dm-crypt

    We simply have to add the option discard to our crypttab:

    • $ cat /etc/crypttab

      # <target name>    <source device>    <key file>       <options>
      sda2_crypt         /dev/sda2          none             luks,discard
      

    Note: The usage of TRIM on dm-crypt can cause some security issues, like revealing which sectors of your disk are unused.

  2. Enabling TRIM support on LVM

    We have to enable the option issue_discards in the LVM configuration:

    • $ cat /etc/lvm/lvm.conf

      # [...]
      devices {
         # [...]
         issue_discards = 1
         # [...]
      }
      # [...]
      
  3. Enabling TRIM support on the file system

    This is the most interesting part. Most people simply add the "discard" option to the mount options in /etc/fstab. However, this means that every time you delete a file, the OS reports in real time to the SSD which blocks were occupied by that file and are no longer in use, and the SSD then has to perform a defragmentation and deletion of those internal blocks, an operation that takes longer than desired.

    In order to optimize the performance of the SSD, I strongly advise you to avoid doing the TRIM operation in real time (whenever a file is deleted), because you would be putting an unnecessary extra amount of work on the SSD. In other words: you should not enable the discard option in fstab.

    Instead, what I recommend is to periodically run a script that tells the SSD which blocks are free, using the command fstrim. Doing this daily or weekly is more than enough. This way we don't lose any performance to TRIM when deleting files, and we keep the SSD periodically informed about the free blocks.

    Other advantages of the fstrim way are:

    • If you didn't correctly enable TRIM support in the layers described above, you will get an error when executing fstrim. On the other hand, if you were using the discard option in fstab you wouldn't have received any error, and you would end up thinking you got TRIM working properly when you didn't.
    • If you delete a file by mistake (you know it happens), you can recover it before anacron runs your fstrim script. On the other hand, if you were using the discard-at-fstab option you wouldn't have any chance of recovering the file, because the OS would have told the SSD to TRIM those blocks as soon as you deleted the file, and the SSD would have irreversibly destroyed them.

    Here you have a simple script that runs fstrim on the /, /boot and /home partitions, which can be scheduled for periodic execution by anacron:

    • $ cat /etc/cron.weekly/dofstrim

      #! /bin/sh
      # Trim the free space of each of the filesystems listed below
      for mount in / /boot /home; do
      	fstrim $mount
      done
      

    It's worth mentioning that both fstrim and the discard option in fstab only work on filesystems that implement the FITRIM ioctl. Today, such filesystems are:

    ~/kernel/linux (v3.8) $ grep -lr FITRIM fs/ | cut -d/ -f2 | sort | uniq | xargs echo
    btrfs ext3 ext4 gfs2 jfs ocfs2 xfs
    

Note: On most setups you will have to rebuild your initramfs with update-initramfs -u (Debian and derivatives) or dracut -f (Red Hat and derivatives), and reboot the machine, after touching the configuration of LVM or dm-crypt.
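
After rebooting, a simple end-to-end check (again, my suggestion) is to run fstrim verbosely on one mount point; an error here usually means one of the layers is still not passing discards down:

# -v makes fstrim report how many bytes were discarded
fstrim -v /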

Update 25-Feb-2013: It seems that Fedora users with a dm-crypt volume will be affected by this problem: https://bugzilla.redhat.com/890533

Update 14-Aug-2014: The following script can be used to automatically detect and fstrim all filesystems that have TRIM support enabled.

  • #!/bin/sh
    #
    # To find which filesystems support TRIM, we check that DISC-MAX (discard max
    # bytes) is greater than zero. Check the discard_max_bytes documentation at
    # https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt
    #
    for fs in $(lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE | grep -E '^/.* [1-9]+.* ' | awk '{print $1}'); do
    	fstrim "$fs"
    done
    

Git: Copy a file or directory from another repository preserving the history

How can you copy a file or directory from another Git repository while preserving its history?

The Internet is full of magic formulas, each one more complex than the last.

Here I'm proposing a much simpler and faster one: generate a git format-patch for the entire history of the file or subdirectory that we want, and then import it into the destination repository.

mkdir /tmp/mergepatchs
cd ~/repo/org           # the source repository
export reposrc=myfile.c # or mydir
# Export the whole history of $reposrc, from its first commit up to HEAD
git format-patch -o /tmp/mergepatchs $(git log $reposrc|grep ^commit|tail -1|awk '{print $2}')^..HEAD $reposrc
cd ~/repo/dest          # the destination repository
git am /tmp/mergepatchs/*.patch

Simple and fast :)
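
As a side note (not part of the original recipe), if you want the imported files to end up under a different path inside the destination repository, git am can prepend a directory when applying the patches:

# Root the imported paths under imported/ in the destination repository
git am --directory=imported/ /tmp/mergepatchs/*.patch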

IKEA Hackers: LackRack

Among the hundreds of hacks from the fantastic website IKEA Hackers, one is particularly interesting.

How do you build a rack with an IKEA Lack table that costs less than 10 euros?
Well, with this manual:

The best thing about the LackRack (after its price) is that its construction is modular, so you can grow it with your needs:

5x LackRack

The LackRack is the ultimate low-cost, high-shininess solution for your modular datacenter-in-the-living-room. It is said that Google engineers were the first to explore the idea of using Lack tables for data centers. The LackRack is so famous that it even has its own website: http://lackrack.org/ 😀

Unlocking a LUKS encrypted root partition remotely via SSH

If you are thinking of sending a new server to a remote datacenter for colocation, or you have rented one or more servers in the cloud, you have probably thought that you would like to encrypt your server's hard disk.

The problem is that if you encrypt the whole hard disk (the root partition), you will need some kind of KVM to type the password remotely every time the server is restarted… sure??? No!

Thanks to this nifty trick, you can enter the password remotely during the boot process. The trick involves embedding a small SSH server (dropbear) in the initramfs, which allows you to enter the password for the root partition remotely at boot time.

For those who are lucky enough to use Debian, the procedure is as simple and easy as:

1) Install your server with the root partition encrypted.

2) Install the required packages:

apt-get install openssh-server dropbear busybox

3) Copy the SSH key that has been generated automatically:

scp root@my.server.ip.addr:/etc/initramfs-tools/root/.ssh/id_rsa ~/id_rsa.initramfs

4) If your server gets its IP address automatically (DHCP), ignore this step; otherwise you have to specify the IP configuration on the kernel boot line. To do this, edit the file /etc/default/grub and define the line:

GRUB_CMDLINE_LINUX="ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>"

The format is specified in the file Documentation/nfsroot.txt of the Linux kernel documentation. For example:

GRUB_CMDLINE_LINUX="ip=192.168.122.192::192.168.122.1:255.255.255.0::eth0:none"

Then reload the grub configuration:

update-grub

5) Reboot

reboot

6) And unlock remotely!

ssh -o "UserKnownHostsFile=~/.ssh/known_hosts.initramfs" \
	-i "~/id_rsa.initramfs" root@my.server.ip.addr \
	"echo -ne \"MyS3cr3tK3y\" >/lib/cryptsetup/passfifo"

And for those not lucky enough to use Debian, and also for those who are but want more details on this procedure, I am pasting here the file cryptsetup/README.remote from Debian, which I am sure you will find very useful :)

$ zcat /usr/share/doc/cryptsetup/README.remote.gz

unlocking rootfs via ssh login in initramfs
-------------------------------------------

You can unlock your rootfs on bootup from remote, using ssh to log in to the
booting system while it's running with the initramfs mounted.


Setup
-----

For remote unlocking to work, the following packages have to be installed
before building the initramfs: dropbear busybox

The file /etc/initramfs-tools/initramfs.conf holds the configuration options
used when building the initramfs. It should contain BUSYBOX=y (this is set as
the default when the busybox package is installed) to have busybox installed
into the initramfs, and should not contain DROPBEAR=n, which would disable
installation of dropbear to initramfs. If set to DROPBEAR=y, dropbear will
be installed in any case; if DROPBEAR isn't set at all, then dropbear will only
be installed in case of an existing cryptroot setup.

The host keys used for the initramfs are dropbear_dss_host_key and
dropbear_rsa_host_key, both located in /etc/initramfs-tools/etc/dropbear/.
If they do not exist when the initramfs is compiled, they will be created
automatically. Following are the commands to create them manually:

# dropbearkey -t dss -f /etc/initramfs-tools/etc/dropbear/dropbear_dss_host_key
# dropbearkey -t rsa -f /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key

As the initramfs will not be encrypted, publickey authentication is assumed.
The key(s) used for that will be taken from
/etc/initramfs-tools/root/.ssh/authorized_keys.
If this file doesn't exist when the initramfs is compiled, it will be created
and /etc/initramfs-tools/root/.ssh/id_rsa.pub will be added to it.
If the latter file doesn't exist either, it will be generated automatically -
you will find the matching private key which you will later need to log in to
the initramfs under /etc/initramfs-tools/root/.ssh/id_rsa (or id_rsa.dropbear
in case you need it in dropbear format). Following are the commands to do the
respective steps manually:

To create a key (in dropbear format):

# dropbearkey -t rsa -f /etc/initramfs-tools/root/.ssh/id_rsa.dropbear

To convert the key from dropbear format to openssh format:

# /usr/lib/dropbear/dropbearconvert dropbear openssh \
	/etc/initramfs-tools/root/.ssh/id_rsa.dropbear \
	/etc/initramfs-tools/root/.ssh/id_rsa

To extract the public key:

# dropbearkey -y -f /etc/initramfs-tools/root/.ssh/id_rsa.dropbear | \
	grep "^ssh-rsa " > /etc/initramfs-tools/root/.ssh/id_rsa.pub

To add the public key to the authorized_keys file:

# cat /etc/initramfs-tools/root/.ssh/id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys

In case you want some interface to get configured using dhcp, setting DEVICE= in
/etc/initramfs-tools/initramfs.conf should be sufficient.  The initramfs should
also honour the ip= kernel parameter.
In case you use grub, you probably might want to set it in /boot/grub/menu.lst,
either in the '# kopt=' line or appended to specific 'kernel' line(s).
The ip= kernel parameter is documented in Documentation/nfsroot.txt in the
kernel source tree.


Issues
------

Don't forget to run update-initramfs when you changed the config to make it
effective!

Collecting enough entropy for the ssh daemon sometimes seems to be an issue.
Startup of the ssh daemon might be delayed until enough entropy has been
retrieved. This is non-blocking for the startup process, so when you are at the
console you won't have to wait for the sshd to complete its startup.


Unlocking procedure
-------------------

To unlock from remote, you could do something like this:

# ssh -o "UserKnownHostsFile=~/.ssh/known_hosts.initramfs" \
	-i "~/id_rsa.initramfs" root@initramfshost.example.com \
	"echo -ne \"secret\" >/lib/cryptsetup/passfifo"

This example assumes that you have an extra known_hosts file 
"~/.ssh/known_hosts.initramfs" which holds the cryptroot system's host-key,
that you have a file "~/id_rsa.initramfs" which holds the authorized-key for
the cryptroot system, that the cryptroot system's name is
"initramfshost.example.com", and that the cryptroot passphrase is "secret"

-- <debian@x.ray.net>, Wed, 30 Sep 2009

OpenRISC: Making the stack 100% free

OpenRISC is the flagship project of the OpenCores community. This project aims to develop a series of general purpose open source RISC CPU architectures.

The OpenRISC 1200 (OR1200) is a synthesizable CPU core that runs on FPGAs from a broad number of vendors, and is currently being used in a number of successful industrial projects. This CPU is an open source implementation (LGPL) of the OpenRISC 1000 RISC architecture.

These days the project has reached an important milestone: they have submitted their patches for review to the Linux kernel mailing list, and hopefully they will get them merged, so that Linux officially supports the OpenRISC architecture.

This is exciting news!

If you have an FPGA board you can load it with the OpenRISC softcore and execute Linux on it (demo video).

But if you don't have one, then you can try the nice OpenRISC emulator: it's easy and fun.
Here is a link to the detailed howto, and here is my cheat sheet:

sudo apt-get install build-essential libmpc-dev lzop git-core
sudo apt-get build-dep gcc-4.4
mkdir ~/openrisc
cd ~/openrisc
git clone git://openrisc.net/jonas/or1ksim-svn
cd or1ksim-svn
./configure && make && sudo make install
cd ~/openrisc
git clone git://openrisc.net/jonas/toolchain
cd toolchain
git submodule update --init
# go for a coffee
export PATH=~/openrisc/toolchain/bin:$PATH
make PREFIX=~/openrisc/toolchain
cd linux
make ARCH=openrisc defconfig
make CROSS_COMPILE=or32-linux-
# go for a tea
sim -f arch/openrisc/or1ksim.cfg vmlinux

And voilà! You are running a 100% free system, from the CPU to the operating system, including the Wishbone bus :)

# cat /proc/cpuinfo
cpu                     : OpenRISC-18
revision                : 1
dcache size             : 8192 kB
dcache block size       : 16 bytes
icache size             : 8192 kB
icache block size       : 16 bytes
immu                    : 64 entries, 1 ways
dmmu                    : 64 entries, 1 ways
bogomips                : 40.00

As the OpenRISC 1000 is now considered stable, the OpenCores project is trying to build a cost-efficient ASIC from this design to get improved performance. They launched a call for donations in 2011 with the aim of producing the first ASIC in Q1 2012.

I would love to get one of these!

The open-source methodology is here to stay, just take a look at the success story behind Linux/Android, and all the great benefits that open-source offers. We now need to take the next step with open-source hardware development, meaning to actually create “true” open-source Semiconductor components in order to stay competitive, and to make sure that smaller companies can compete with large Semiconductor giants. So please join us and help us rewrite the history books for hardware development…

Don’t eject your USB storage: stop it!

This is probably one of the biggest mistakes I've dragged along for the longest time as a Linux user, partly from not reading the man pages and partly from assuming too much based on the name of an executable.

When working with USB storage I used to plug it in, mount it, do some stuff with it, sync, unmount, eject and unplug it. It turns out that eject does not apply to USB storage, only to "ejectable" media: CDs/DVDs/floppies…

So, the right way to stop a USB storage device is to use:

sdparm --command=stop /dev/device
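
Putting it all together, a safe removal sequence might look like this (a sketch; /dev/sdb and /media/usb are placeholders for your actual device and mount point):

sync                            # flush pending writes to the device
umount /media/usb               # unmount the filesystem
sdparm --command=stop /dev/sdb  # stop the device before unplugging it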

Anyway, I recommend you check out the following script, which does this very nicely, plus other useful things like unbinding the device. The script is here: suspend-usb-device

PS: I'm afraid I was not the only unwary user out there "ejecting" USB storage before unplugging it, so I hope you find this tip useful hehe 😀

RFKill Applet for GNOME

I don't like to use wireless devices if I can avoid them. I am a bit paranoid about these things, and I have also empirically verified that whenever I use my laptop's wireless for a long time I end up with a headache. If you are worried about this too, I encourage you to watch the documentary "Surrounded by Waves" (just google it).

One of the things I miss on my Asus laptop is a hardware switch for turning the wireless and bluetooth devices on and off. Months ago I discovered the rfkill utility (package wireless-tools on Debian/Ubuntu) and I loved it. I used to disable or enable the wireless devices from the terminal with it, but soon I realized that it was not comfortable to open a terminal and run the commands each time.
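
For reference, these are the kinds of commands I was typing (the available device types may vary per system):

rfkill list              # show all RF devices and their block status
rfkill block bluetooth   # soft-block every bluetooth device
rfkill unblock wifi      # re-enable the wireless LAN devices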

So, as part of my hackfest time here at Igalia, I wrote an applet for GNOME that lets the user disable or enable the emission of radiation from RF devices with a simple click on the applet icon. I took the inspiration for this from the GNOME power manager inhibit applet.

It also polls the system for the status of the RF devices and updates the icon shown, so it works as a notification applet too: it displays the emitting icon if at least one of the RF devices in your system is active, and the inhibited icon if all RF devices are blocked. Finally, if a problem is found or you don't have RF devices on your system, it displays the unknown icon.

GNOME RFKill Applet

You can get it from gitorious.

I hope you find it useful 😉