Linux hard drive information

Sometimes it is useful to get more information about your hard drive than just the remaining size. For this purpose there is a Linux package called smartmontools, which can be installed easily.

$ sudo apt-get install smartmontools

Once it is available you can inspect various quantities and qualities of your drive by issuing smartctl commands. A good article on the topic, written by Vincent Danen, is also worth reading.

The tools will be installed to /usr/sbin/smartctl. In case you are on a machine that does not have sbin in its path (I was working on a Banana Pi, which does not), you will have to prefix the commands with the full path.

Some commands and their output are given below.

Get the basic information from your drive

$ sudo smartctl -i /dev/sda
smartctl 6.4 2014-10-07 r4002 [armv7l-linux-3.4.112-bananian] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke,

Model Family:     Western Digital Red (AF)
Device Model:     WDC WD20EFRX-68EUZN0
Serial Number:    WD-WCC4M0AX088S
LU WWN Device Id: 5 0014ee 2b66bd176
Firmware Version: 82.00A82
User Capacity:    2.000.398.934.016 bytes [2,00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sun Apr 16 17:57:13 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Starting a short test

$ sudo smartctl --test=short /dev/sda

Viewing the test results

The test results will be at the end of the output provided by smartctl -a. Test information starts with a hash.

$ sudo smartctl -a /dev/sda

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      8034         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
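For scripting, the verdict in that status line can be checked programmatically. A minimal sketch; the here-doc stands in for real output, and on a real system you would feed in the output of "sudo smartctl -l selftest /dev/sda" instead:

```shell
# Check the most recent SMART self-test result from a script.
# The sample log below mirrors the output shown above; for real use replace
# the here-doc with:  sudo smartctl -l selftest /dev/sda
log=$(cat <<'EOF'
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      8034         -
EOF
)
if printf '%s\n' "$log" | grep -q 'Completed without error'; then
    verdict="ok"
else
    verdict="check the drive"
fi
echo "last self-test: $verdict"
```

This is handy in a cron job that mails you only when the verdict is not "ok".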

Setting up an NFS server on Banana Pi

Install the NFS Server

sudo apt-get install nfs-kernel-server


Mount the drive

sudo nano /etc/fstab
more /etc/fstab


# 512 MB swapfile
/swapfile1 swap swap defaults 0 0

# NAS Volume mount
# /dev/sda2 /mnt/nas_volume ext4 auto 0 0

Setting up the exports

sudo mkdir /nfs
cd /nfs
sudo ln -s /mnt/nas_volume/Public/ Public
sudo ln -s /mnt/nas_volume/georg/ Georg
sudo ln -s /mnt/nas_volume/marit/ Marit
sudo ln -s /mnt/nas_volume/backup/ Backup
sudo ln -s /mnt/nas_volume/gabi/ Gabi

Configuring the exported directories

sudo nano /etc/exports
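For reference, an /etc/exports along these lines would publish the directories linked above. The subnet and the option set are my assumptions; adjust them to your network:

```
# /etc/exports -- example entries (subnet is hypothetical)
/nfs/Public  192.168.1.0/24(rw,sync,no_subtree_check)
/nfs/Backup  192.168.1.0/24(rw,sync,no_subtree_check)
```

After saving, sudo exportfs -ra reloads the table and sudo exportfs -v shows what is currently exported. Note that NFS clients cannot follow symlinks pointing outside an export, so depending on your client you may need to export the real /mnt/nas_volume directories or use bind mounts instead of the symlinks.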

sudo exportfs -ra

Starting the server

sudo service nfs-kernel-server start
sudo tail /var/log/messages

After a normal restart

georg@bananapi:~$ sudo ps -elf |grep rpc
[sudo] password for georg:
1 S root 23 2 0 60 -20 - 0 rescue 18:16 ? 00:00:00 [rpciod]
5 S root 1432 1 0 80 0 - 473 poll_s 18:16 ? 00:00:00 /sbin/rpcbind -w
5 S statd 1464 1 0 80 0 - 551 poll_s 18:16 ? 00:00:00 /sbin/rpc.statd
1 S root 1476 1 0 80 0 - 581 epoll_ 18:16 ? 00:00:00 /usr/sbin/rpc.idmapd
1 S root 1992 1 0 80 0 - 663 poll_s 18:16 ? 00:00:00 /usr/sbin/rpc.mountd --manage-gids
0 S georg 2684 2618 0 80 0 - 931 pipe_w 18:18 pts/0 00:00:00 grep rpc
georg@bananapi:~$ sudo ps -elf |grep nfs
1 S root 29 2 0 60 -20 - 0 rescue 18:16 ? 00:00:00 [nfsiod]
1 S root 1955 2 0 60 -20 - 0 rescue 18:16 ? 00:00:00 [nfsd4]
1 S root 1956 2 0 60 -20 - 0 rescue 18:16 ? 00:00:00 [nfsd4_callbacks]
1 S root 1957 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1958 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1959 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1960 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1961 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1962 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1963 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
1 S root 1964 2 0 80 0 - 0 svc_re 18:16 ? 00:00:00 [nfsd]
0 S georg 2687 2618 0 80 0 - 931 pipe_w 18:18 pts/0 00:00:00 grep nfs

Adding the NFS group

sudo groupadd --gid 1111 nfs
sudo usermod -aG nfs georg

Assign the new group to the existing files (run this inside the exported data directory, e.g. /mnt/nas_volume)

sudo chown -R root:nfs *

Adding the group on the client machine

sudo addgroup --gid 1111 nfs
sudo usermod -aG nfs georg
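With the group in sync on both sides, the client can mount the exports. A sketch; the hostname and mount point are assumptions matching the setup above:

```
# /etc/fstab entry on the client (hostname and mount point are assumptions)
bananapi:/nfs/Public  /mnt/public  nfs  defaults,_netdev  0  0
```

Create the mount point with sudo mkdir -p /mnt/public first, then test the entry with sudo mount /mnt/public.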

Using Californium in Eclipse

Why Californium?

When dealing with IoT on a technical level, one of the protocols to look at is CoAP. The Californium™ library is one of the rare libraries that provide CoAP on top of the secure DTLS protocol. Both CoAP and DTLS are based on UDP.

So let’s go for it…

Setting up Eclipse for usage with Californium

I am using Ubuntu 14.04, and the Eclipse you can install with the package manager does not ship with the required extensions:

  • EGit
  • Maven

So the first thing to do is to install them manually.

Eclipse Install Menu

Select a site with Eclipse software

Eclipse Install Software Dialog. Select Site

Install EGit into Eclipse

Eclipse software dialog - Select EGit

Eclipse software installation - Egit

Install Maven Integration into Eclipse

Eclipse software installation - Maven

Import Californium into Eclipse

Importing a maven project into Eclipse

Select californium directory with projects
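If you prefer the command line over EGit for the initial checkout, a clone plus Maven build before the import would look roughly like this (repository location as of the time of writing; an assumption, so check the project page):

```shell
# Clone Californium and build it once so Eclipse finds the Maven artifacts.
git clone https://github.com/eclipse/californium.git
cd californium
mvn clean install -DskipTests
```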


Setting up your own project using Californium




Getting Ginkgo CADx up and running on Ubuntu

I needed to look at some MRI images, but the CDs doctors hand you after your ride through the tube only contain a Windows viewer. So when I clicked on the image data on the disc, Ubuntu recommended Ginkgo CADx to me, which I installed right away.

The first start was, however, a disappointment. I was able to see a thumbnail of the image, but when clicking on it no image opened and an error message was displayed.

"Unable to open modality MR with transfer syntax 1.2.840.10008.1.2.1"

The Ginkgo CADx web site was also of no help. It seems a fix was described there once upon a time, but now it is gone. By the way, I am using version 2.6 of the program.

Starting the program from the command line reveals an error about shared libraries that do not load properly. In the end, editing the .inf files in the installation directory so that the correct library files are loaded fixed the problem for me.

How to fix it

1. Change into the directory containing the .inf files:


2. Open up the .inf file in an editor with super user privileges:

$ sudo atom lightvisualizator.inf visualizator.inf

3. Change the first line in each of the two files by removing the trailing “.2.6.0.” each time, as shown below, and of course save the changes.
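The same edit can be done with sed instead of an editor. A sketch; the library name below is hypothetical, so check the first line of your own .inf files first, and run the substitution with sudo against the real files:

```shell
# Demonstrate the substitution on a scratch file rather than the real .inf.
f=$(mktemp)
printf 'libginkgovisualizator.so.2.6.0\n' > "$f"   # hypothetical first line
# Strip the trailing version suffix, as described in step 3:
sed -i 's/\.2\.6\.0$//' "$f"
fixed=$(cat "$f")
echo "$fixed"
rm -f "$f"
```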

Start the program again and it should work. I recommend starting it from the command line.

Original lightvisualizator.inf file

Bildschirmfoto Ginkgo-error vom 2016-03-02 21:35:44

Fixed file

Bildschirmfoto Ginkgo vom 2016-03-02 21:34:48

Original visualizator.inf file

Ginkgo inf file

Fixed file

Bildschirmfoto Ginkgo vom 2016-03-02 21:34:15

Using Ginkgo CADx

The program works nicely. Remember, it is free.

Open the “dicomdat” folder of your CD data and click on a thumbnail to view the real images.

Zoom in and out by turning the mouse wheel.

Hold down the Ctrl key while turning the mouse wheel to flip through the views like a movie.

Quite impressive is the 3D tool, which combines a series of images taken from one angle into a 3D view. Just try it out.

I prefer to start this program from the command line as it will not exit properly; issuing Ctrl-C will then end the program brutally but effectively.

Solving an Ubuntu upgrade problem

For the second time now I have run into the problem that upgrades of the Ubuntu system can no longer be installed. Unfortunately, searching for the symptoms in German does not turn up much either. So here is my guide to how I got the installation working again, including the deletion of kernel versions that are no longer needed.
As a reminder for myself, and maybe as inspiration for you.

You should also be able to handle the shell. I accept no liability for the correctness of this guide or for damage caused by following it. You act at your own risk.


  • The normal upgrade manager reported a problem during the installation and could not install the Linux kernel header files.
  • A stop sign appears in the command bar, indicating that the upgrade function is broken.
  • sudo apt-get -f install throws out roughly the log shown in the following subsection.
  • The Ubuntu Software Center immediately shows a message that the packaging system is out of step and that you can neither install nor uninstall anything.
  • With baobab (invoked from the shell) you see roughly the following in /usr/
    Screenshot: baobab showing the Ubuntu kernel headers

I then tried all kinds of variants, e.g. to uninstall the Linux header files, but everything ran into an error.

sudo apt-get -f install

You should be facing the same problem if the message highlighted in red below appears.


georg@georg-PC:~$ sudo apt-get -f install
Paketlisten werden gelesen... Fertig
Abhängigkeitsbaum wird aufgebaut
Statusinformationen werden eingelesen... Fertig
Abhängigkeiten werden korrigiert... Fertig
Die folgenden Pakete wurden automatisch installiert und werden nicht mehr benötigt:
kde-l10n-de language-pack-kde-de language-pack-kde-en linux-headers-3.2.0-29 linux-headers-3.2.0-64 language-pack-kde-en-base kde-l10n-engb
linux-headers-3.2.0-64-generic-pae linux-headers-3.2.0-29-generic-pae language-pack-kde-de-base nvidia-settings-304 libkms1 openjdk-7-jre-lib
Verwenden Sie »apt-get autoremove«, um sie zu entfernen.
Die folgenden zusätzlichen Pakete werden installiert:
The following NEW packages will be installed
0 to upgrade, 1 to newly install, 0 to remove and 3 not to upgrade.
3 nicht vollständig installiert oder entfernt.
Es müssen noch 0 B von 11,7 MB an Archiven heruntergeladen werden.
Nach dieser Operation werden 56,4 MB Plattenplatz zusätzlich benutzt.
Möchten Sie fortfahren [J/n]? Y
(Lese Datenbank ... 1194962 Dateien und Verzeichnisse sind derzeit installiert.)
Entpacken von linux-headers-3.2.0-84 (aus .../linux-headers-3.2.0-84_3.2.0-84.121_all.deb) ...
dpkg: Fehler beim Bearbeiten von /var/cache/apt/archives/linux-headers-3.2.0-84_3.2.0-84.121_all.deb (--unpack):
 »/usr/src/linux-headers-3.2.0-84/include/linux/netfilter/xt_sctp.h.dpkg-new« konnte nicht angelegt werden (während der Verarbeitung von »./usr/src/linux-headers-3.2.0-84/include/linux/netfilter/xt_sctp.h«): Auf dem Gerät ist kein Speicherplatz mehr verfügbar
Es wurde kein Apport-Bericht verfasst, da die Fehlermeldung auf einen Fehler wegen voller Festplatte hindeutet
dpkg-deb: Fehler: Unterprozess einfügen wurde durch Signal (Datenübergabe unterbrochen (broken pipe)) getötet
Fehler traten auf beim Bearbeiten von:
E: Sub-process /usr/bin/dpkg returned an error code (1)


  1. In any case, first run
    sudo apt-get -f install
    in the shell. Maybe that already solves the problem.

If that did not help and you also find the log line “Auf dem Gerät ist kein Speicherplatz mehr verfügbar” (no space left on device), proceed as follows to free up space manually.

Freeing up disk space manually

Careful: deleting kernel files can render a system unusable. In particular, deleting the files of the currently running kernel leads straight to ruin.

  1. Determine which kernel you are running yourself. For me, uname -r yields 3.2.0-84-generic-pae.
    Under no circumstances delete files containing this number, 3.2.0-84 in my case!! (It will certainly be a different number for you.)
  2. List the header files: ls /usr/src
  3. Delete header files: rm -rf /usr/src/3.xxxxx (pick out the lowest numbers and delete the whole directory)
  4. List the lib files: ls /lib/modules/
  5. Delete the lib files: rm -rf /lib/modules/3.xxxx (use the same numbers here as for the headers)
  6. Now run sudo apt-get -f install again. If it worked, it will now run through.
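The version numbers that are safe to delete can also be worked out with a small script. A sketch with a sample package list; on a real system you would generate the list with dpkg -l as shown in the comment, and the running kernel with uname -r:

```shell
# Pick removal candidates: every installed kernel image except the running one.
# Sample data; for real use:  dpkg -l 'linux-image-3*' | awk '/^ii/ {print $2}'
current="3.2.0-84-generic-pae"           # real use: current=$(uname -r)
installed="linux-image-3.2.0-29-generic-pae
linux-image-3.2.0-83-generic-pae
linux-image-3.2.0-84-generic-pae"
candidates=$(printf '%s\n' "$installed" | grep -v "$current")
printf '%s\n' "$candidates"
```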

But that was only half the battle, because now it is time to…


And this is how to clean up and free even more disk space

  1. You can list the header packages of the installed kernels by typing the following in the shell
    sudo apt-get remove linux-
    and then pressing Tab twice.
  2. Now remove with apt-get the header packages of the kernels whose files you deleted manually before. Under no circumstances the version determined above with uname -r!!
    sudo apt-get remove linux-headers-3.2.0-29
  3. You can list the kernel libs by typing
    sudo apt-get purge linux-image-3
    and pressing Tab twice.
  4. Now, as in step 2, do the same for the libs
    sudo apt-get purge linux-image-3.2.0-29
  5. Repeat all of this for the kernels you no longer need. Personally, I would leave the last five in place.
georg@georg-PC:~$ sudo apt-get purge linux-image-3.2.0-
linux-image-3.2.0-29-generic-pae  linux-image-3.2.0-60-generic-pae
linux-image-3.2.0-35-generic-pae  linux-image-3.2.0-61-generic-pae
linux-image-3.2.0-36-generic-pae  linux-image-3.2.0-63-generic-pae
linux-image-3.2.0-37-generic-pae  linux-image-3.2.0-64-generic-pae
linux-image-3.2.0-38-generic-pae  linux-image-3.2.0-65-generic-pae
linux-image-3.2.0-39-generic-pae  linux-image-3.2.0-67-generic-pae
linux-image-3.2.0-40-generic-pae  linux-image-3.2.0-68-generic-pae
linux-image-3.2.0-41-generic-pae  linux-image-3.2.0-69-generic-pae
linux-image-3.2.0-43-generic-pae  linux-image-3.2.0-70-generic-pae
linux-image-3.2.0-44-generic-pae  linux-image-3.2.0-72-generic-pae
linux-image-3.2.0-45-generic-pae  linux-image-3.2.0-75-generic-pae
linux-image-3.2.0-48-generic-pae  linux-image-3.2.0-76-generic-pae
linux-image-3.2.0-52-generic-pae  linux-image-3.2.0-77-generic-pae
linux-image-3.2.0-53-generic-pae  linux-image-3.2.0-79-generic-pae
linux-image-3.2.0-54-generic-pae  linux-image-3.2.0-80-generic-pae
linux-image-3.2.0-55-generic-pae  linux-image-3.2.0-82-generic-pae
linux-image-3.2.0-56-generic-pae  linux-image-3.2.0-83-generic-pae
linux-image-3.2.0-57-generic-pae  linux-image-3.2.0-84-generic-pae
georg@georg-PC:~$ sudo apt-get purge linux-image-3.2.0-29-generic-pae linux-image-3.2.0-35-generic-pae linux-image-3.2.0-36-generic-pae linux-image-3.2.0-37-generic-pae linux-image-3.2.0-38-generic-pae linux-image-3.2.0-39-generic-pae linux-image-3.2.0-40-generic-pae 
 linux-image-3.2.0-41-generic-pae linux-image-3.2.0-43-generic-pae linux-image-3.2.0-44-generic-pae linux-image-3.2.0-45-generic-pae linux-image-3.2.0-48-generic-pae linux-image-3.2.0-52-generic-pae linux-image-3.2.0-53-generic-pae linux-image-3.2.0-54-generic-pae
linux-image-3.2.0-55-generic-pae linux-image-3.2.0-56-generic-pae linux-image-3.2.0-57-generic-pae linux-image-3.2.0-58-generic-pae linux-image-3.2.0-60-generic-pae linux-image-3.2.0-61-generic-pae linux-image-3.2.0-63-generic-pae linux-image-3.2.0-64-generic-pae linux-image-3.2.0-65-generic-pae linux-image-3.2.0-67-generic-pae linux-image-3.2.0-68-generic-pae linux-image-3.2.0-69-generic-pae linux-image-3.2.0-70-generic-pae linux-image-3.2.0-72-generic-pae linux-image-3.2.0-75-generic-pae linux-image-3.2.0-76-generic-pae linux-image-3.2.0-77-generic-pae
[sudo] password for georg:
Paketlisten werden gelesen... Fertig
Abhängigkeitsbaum wird aufgebaut
Statusinformationen werden eingelesen... Fertig
Die folgenden Pakete wurden automatisch installiert und werden nicht mehr benötigt:
kde-l10n-de language-pack-kde-de language-pack-kde-en
language-pack-kde-en-base kde-l10n-engb language-pack-kde-de-base
nvidia-settings-304 libkms1 openjdk-7-jre-lib
Verwenden Sie »apt-get autoremove«, um sie zu entfernen.
The following packages will be REMOVED
linux-image-3.2.0-29-generic-pae* linux-image-3.2.0-35-generic-pae*
linux-image-3.2.0-36-generic-pae* linux-image-3.2.0-37-generic-pae*
linux-image-3.2.0-38-generic-pae* linux-image-3.2.0-39-generic-pae*
linux-image-3.2.0-40-generic-pae* linux-image-3.2.0-41-generic-pae*
linux-image-3.2.0-43-generic-pae* linux-image-3.2.0-44-generic-pae*
linux-image-3.2.0-45-generic-pae* linux-image-3.2.0-48-generic-pae*
linux-image-3.2.0-52-generic-pae* linux-image-3.2.0-53-generic-pae*
linux-image-3.2.0-54-generic-pae* linux-image-3.2.0-55-generic-pae*
linux-image-3.2.0-56-generic-pae* linux-image-3.2.0-57-generic-pae*
linux-image-3.2.0-58-generic-pae* linux-image-3.2.0-60-generic-pae*
linux-image-3.2.0-61-generic-pae* linux-image-3.2.0-63-generic-pae*
linux-image-3.2.0-64-generic-pae* linux-image-3.2.0-65-generic-pae*
linux-image-3.2.0-67-generic-pae* linux-image-3.2.0-68-generic-pae*
linux-image-3.2.0-69-generic-pae* linux-image-3.2.0-70-generic-pae*
linux-image-3.2.0-72-generic-pae* linux-image-3.2.0-75-generic-pae*
linux-image-3.2.0-76-generic-pae* linux-image-3.2.0-77-generic-pae*
0 to upgrade, 0 to newly install, 32 to remove and 3 not to upgrade.
Nach dieser Operation werden 3.634 MB Plattenplatz freigegeben.

After the clean-up, as an example, the remaining header files.
Screenshot: baobab showing the Ubuntu kernel headers after the clean-up


At the moment of the failure I had:

  • 89 MB of header files per kernel and
  • 135 MB of lib files per kernel
  • All of that 37 times!

After cleaning up I suddenly had 6 GB of free disk space.

Depending on your perspective, the problem is:

  • my system partition being too small
  • the Ubuntu upgrade process, which fails to clean up the old kernels in a user-friendly way.

I tend towards the latter. For a normal user it is incomprehensible why the system suddenly refuses to work. It would be desirable to keep only, say, the last five kernels, or to cap the space used for kernels sensibly.

And honestly, how often do you fall back more than one kernel?

File recovery from WD My Book World Edition NAS

I have a WD® My Book World Edition NAS and was happy with it, but after six years of constant usage the device stopped working. At first the NAS was inaccessible from time to time, but a restart helped. Then one day it stopped responding completely, and the LED boot indicator (white light) no longer showed any progress at all.

Lucky you if you have a recent back-up. Mine was not recent enough, and there were some pictures on the NAS that I wanted to recover. So here is what happened; it may help you as well. A few useful links that helped me can be found at the end of this article.

I provide this as a report about what helped in my situation; yours may be completely different. If you follow the steps given in this article you act at your own risk, and I will not be liable for any damage! I assume you have some knowledge of Linux and drives.

I hope you have a backup of your data 😉

Well, there are three main components of such a device that may break down: the power adapter, the controller and the hard drive. In my case the controller had a malfunction, but the hard drive and power adapter were working.

A grain of salt for you: I was lucky that the drive was still working, but if you have a problem it may just as well be the case that your hard drive has suffered a major failure. If you desperately need the data recovered, don’t tinker around with it; please seek out professional support. Following the steps in this article may worsen the situation, and you may lose all your data.

Windows will kind of detect the drive but will not display it easily, so I used Ubuntu 12.04.

Just mounting the drive to get a look at the data?

My first shot at the problem was to simply mount the drive with the following commands and see what happens. (If your drive is not discovered as sdb you will have to change the device name accordingly, maybe to sdc4, sdd4, …)

sudo mkdir /media/sdb
sudo mount -t ext3 -o ro /dev/sdb4 /media/sdb

Now you can explore the drive with the Nautilus file manager GUI. However, there was no accessible data on that drive. The good news for me was that the drive looked physically OK!

So next tool please.

Using palimpsest to look at the drive

There is a disk utility program that provides a nice graphical overview of what you have on a drive. The English name for it is “disk utility”, the German name is “Laufwerksverwaltung”, so you had better use the name palimpsest if your GUI is not set to English by default.

Open the Ubuntu Dash (press the Windows key), enter palimpsest and select the disk utility. This will open the following program, which allows you to have a look at the installed hard drives.

Bildschirmfoto vom 2015-04-18 23:42:08-small

As you may see, there are four RAID partitions on that drive.

Partitions one to three (sdb1, sdb2, sdb3) contain the operating system and the swap partition of the WD controller.

Partition four (here sdb4) is the larger partition and contains the data.

So my version of the WD My Book World Edition for some reason utilizes a software RAID that you can use on a Linux machine by installing mdadm. For the standard use case this just adds hassle; from my point of view, RAID makes most sense in set-ups of two or more disks for your data.

Using the tool parted to get drive information

$ sudo parted -l
Modell: ATA SAMSUNG SSD 830 (scsi)
Festplatte  /dev/sda:  128GB
Sektorgröße (logisch/physisch): 512B/512B
Partitionstabelle: msdos

Nummer  Anfang  Ende    Größe   Typ       Dateisystem     Flags
1      32,3kB  62,9GB  62,9GB  primary   ntfs            boot
2      62,9GB  105GB   42,0GB  extended
5      62,9GB  82,9GB  20,0GB  logical   ext4
6      82,9GB  103GB   20,0GB  logical   ext4

Modell: ATA WDC WD10EARS-00M (scsi)
Festplatte  /dev/sdb:  1000GB
Sektorgröße (logisch/physisch): 512B/512B
Partitionstabelle: gpt

Nummer  Anfang  Ende    Größe   Dateisystem     Name     Flags
1      32,9MB  2040MB  2007MB  ext3            primary  RAID
2      2040MB  2303MB  263MB   linux-swap(v1)  primary  RAID
3      2303MB  3315MB  1012MB  ext3            primary  RAID
4      3315MB  1000GB  997GB                   primary  RAID

Modell: ATA WDC WD20EFRX-68E (scsi)
Festplatte  /dev/sdc:  2000GB
Sektorgröße (logisch/physisch): 512B/4096B
Partitionstabelle: gpt

Nummer  Anfang  Ende    Größe   Dateisystem  Name  Flags
1      1049kB  1144GB  1144GB  ext2
2      1144GB  2000GB  856GB   ext2

Installing the multi device administration program (mdadm)

To work with raid drives you will have to install the mdadm program.

$ sudo apt-get install mdadm
Generating mdadm.conf... done.
Removing any system startup links for /etc/init.d/mdadm-raid ...
update-initramfs: deferring update (trigger activated)
* Starting MD monitoring service mdadm --monitor [ OK ]

Then you can get some brief information about the drive by invoking mdadm as shown here.

$ sudo mdadm --examine /dev/sdb1

Magic : a92b4e
Version : 0.90.00
UUID : 2ac90fc8:85f6502a:3b
Creation Time : Thu Mar 11 04:54:42 2010
Raid Level : raid1
Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB)
Array Size : 1959872 (1914.26 MiB 2006.91 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0

Update Time : Sat Apr 18 12:52:12 2015
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Checksum : 3e885ad7 - correct
Events : 481246

Number Major Minor RaidDevice State
this 0 8 1 0 active sync /dev/sda1

0 0 8 1 0 active sync /dev/sda1
1 1 0 0 1 faulty removed

A word of caution: in my case, rebooting the machine at this point led to a situation where Ubuntu discovered a RAID file system and asked whether I wanted to mount it. I chose not to, to stay on the safe side. After this question you may end up in a boot shell, more specifically in initramfs; there you should type “exit”, and if asked to manually mount the drive, do not do it. Thereafter you boot normally into Ubuntu.

I booted up in this constellation a few times, and once I discovered that the drives were mounted, shown in Nautilus and usable! In other words, everything below would be unnecessary if that worked every time. However, I do not know how to reproduce it.

My way to success

My procedure in short is:

  1. Get the WD hard drive out of its case and plug it into the PC
  2. Get a second hard drive to store the recovered data from the WD drive
  3. Generate a copy of the WD data partition to the other hard drive
  4. Bring up that image as a device
  5. Start the device with mdadm as a raid device
  6. Mount the raid device as a normal drive

Generate a copy

First I generated a copy of the sdb4 partition with the dd command. This took some time, in my case 24 hours. The recovery drive is the sdc drive, and the sdc1 partition on it is big enough to hold the image.

$ sudo dd if=/dev/sdb4 of=/media/sdc1/sdb4.img bs=4096 conv=sync,noerror
1055197+0 Datensätze ein
1055196+0 Datensätze aus
4322082816 Bytes (4,3 GB) kopiert, 50,5156 s, 85,6 MB/s
1236669+0 Datensätze ein
1236668+0 Datensätze aus
5065392128 Bytes (5,1 GB) kopiert, 59,0936 s, 85,7 MB/s
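dd treats partitions and plain files alike, so the call can be rehearsed on a scratch file before pointing it at the real partition. A sketch; the file size is arbitrary:

```shell
# Rehearse the dd invocation on a scratch file and verify the copy with cmp.
src=$(mktemp); dst=$(mktemp)
head -c 8192 /dev/zero > "$src"              # 8 KiB of sample data
dd if="$src" of="$dst" bs=4096 conv=sync,noerror 2>/dev/null
result=$(cmp -s "$src" "$dst" && echo "copy verified")
echo "$result"
rm -f "$src" "$dst"
```

conv=sync,noerror keeps dd going over bad sectors and pads short reads, which is exactly what you want when imaging a possibly ailing drive.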

Bind the image as a device

With the losetup command it is possible to bind the image as a device; on my machine it becomes /dev/loop0. See [2].

$ sudo losetup -d /dev/loop0
$ sudo losetup /dev/loop0 /media/sdc1/sdb4.img

The command losetup -d is used to unbind any possible previous use of the loop0 device. This is usually unnecessary; here I do it just to be sure.

Start the raid

$ sudo mdadm --assemble /dev/md4 /dev/loop0

mdadm: /dev/md4 has been started with 1 drive (out of 2).

Mount the raid

$ sudo mkdir -p /media/raid_img
$ sudo mount /dev/md4 /media/raid_img

Useful links in English


Useful links in German


A first time Banana Pi experience

In this article I will share the experience I had installing everything needed to boot up a Banana Pi with the Lubuntu operating system. I will also show you what went wrong and sprinkle in some hints on how to configure the system.

How did I get this idea?

I had always wondered how to get my own server up and running at home. I took my first steps with a laptop, but that was not satisfactory: I wanted to use the laptop for other things as well, and furthermore I disliked the ratio between power consumption and the time I actually used the server. So when hearing about the Raspberry Pi I was interested, but only when reading about the Banana Pi was I hooked, as I believed it would suit my requirements:

  • No fan, no noise
  • Small and unobtrusive
  • Constantly on as it consumes low power

Banana Pi Shopping cart

So I bought a few components.


  1. Banana Pi board
  2. SD Card SanDisk Ultra 30MB/s 8GB class 10
  3. Micro USB AC/DC Adapter, Model “MAREL-5V2000”


But the Banana Pi hardware will not boot up on its own; you need a decent operating system, and a bunch of them exist. You will find all of them on the LeMaker web pages. As I fancy Ubuntu, my first choice was the Lubuntu distro. Besides, you will find quite a few installation instructions, which helped me a lot, but there were also pitfalls.
I will show you how it worked in my case, and as always I take no responsibility; you use the following instructions at your own risk. 😉

Preparing the SD card

The below is pretty much the same as in the instructions; just the name of the SD card device is different. I used my laptop, which lets me insert SD cards, with Ubuntu 12.04 installed.

When hitting return on the command in line 6 (the dd invocation), be prepared to take a break. For me this took about 20 to 30 minutes to finish.

sudo fdisk -l
umount /dev/mmcblk0p1
sudo fdisk -l
tar zvxf Lubuntu_For_BananaPi_v3.1.1.tgz
ls *img
sudo dd bs=4M if=Lubuntu_1404_For_BananaPi_v3_1_1.img of=/dev/mmcblk0

First start, first experience

Now it became interesting. I connected my old tube TV, which features an AV input, to the AV output of the Banana Pi and plugged in the power. What shall I say? The screen stayed dark. One minute, two minutes, five minutes. The only “eye candy” was two LEDs: a red LED that was constantly on, and a green LED that first was on and then went into a slow blinking pattern. An SOS? No, actually this is a good sign, as it shows that the CPU and the SD card are talking to each other.

  • RED LED = power
  • GREEN LED = SD Card Access

But then I connected the Banana Pi via its HDMI plug to the TV and it worked!

Logging in to the Banana Pi

I wanted to use the machine remotely, so I tried to log in from a console via ssh. For this to succeed you need to know the IP address that the Banana Pi was assigned via DHCP.

ping lemaker


The IP might of course be different in your case, but for me the following command allowed logging in as root.

ssh root@


Congratulations, your first log-in!

As you can see, there are quite a few upgrades pending.

Therefore the next thing to try is to call

apt-get upgrade

Please note: as we just logged in using the root account, this will work perfectly. If we instead use, for example, the bananapi user account, where you would call

sudo apt-get upgrade

this will fail, as the bananapi user account has no sudo permission. If you paid close attention you probably spotted that the console screenshot above shows a log-in with the bananapi user account. So I actually ran into that problem, and some chapters below I will show you how to get rid of this limitation.

A second SD card does not work

While mainly using ssh and no monitor, I realized that I was in for an OS change; an OS such as Bananian, without X support, might fit better. Besides, I find it a really cool feature to just swap SD cards to fire up another OS, so I bought another SD card in a shop: also 8GB, same vendor, but this card was just Class 4 and not Class 10 like my initial card.

  • SanDisk SDHC Card 8GB class 4

I repeated the steps I had made for the first SD card, this time with no success: it did not work. The green LED was permanently on which, as I know today, is not a good sign; in this state the device is not booting up and does not get an IP address.

After hours of casting “format” commands and “dd” commands in different variations at the SD card I felt frustrated and had to accept failure. The card itself works perfectly in my laptop, but it is not tolerated by the Banana Pi.

Trying a third card identical to the first

I wanted to have a second working card and simply bought the same Class 10 card again, and of course it worked! So why does the Class 4 card not work…? If anyone knows, please drop a comment.

Power consumption

As I mentioned, power consumption is a key parameter for me, so I tried to measure it with the meter I have used in the past. Well, what shall I say: the meter that works happily on a TV, laptop or radio is not accurate enough for the Banana Pi. The Banana Pi simply does not consume enough power, and my unbothered meter keeps displaying a consumption of 0.

Do you need a case for the Banana Pi?

If you studied my list of ingredients at the top of the article carefully, you may have noticed that I did not buy a case. I thought I could save the money, but handling the naked board is cumbersome if it sits loosely in front of you on your table. So I would suggest that if you can spare the money, go and buy a case. I will do the same.

Configuration changes I would recommend

Do not stay with the default passwords

The OS just installed has a password that is known to everyone. Even worse, the password of root on that machine is known to everyone, which is dangerous. That should send cold shivers down your spine. So we had better go and change it now!

It is not a good idea to keep the passwords unchanged!

Just enter the commands below and type and re-type your new cunning password as requested.

passwd root
passwd bananapi

Assign SUDO rights to user bananapi

I do not like working as root. To grant sudo rights to the user bananapi you have to add it to the sudo group; by default the user belongs only to the groups below:

id bananapi
uid=1000(bananapi) gid=1000(bananapi) groups=1000(bananapi),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),44(video),46(plugdev),102(netdev),105(fuse),108(scanner),113(lpadmin)

We add the user bananapi to the group sudo:

usermod -a -G sudo bananapi
id bananapi

uid=1000(bananapi) gid=1000(bananapi) groups=1000(bananapi),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),102(netdev),105(fuse),108(scanner),113(lpadmin)


Good-bye, root! Now we can exit back to the bananapi account and work with the new sudo rights.

There is one catch: the system did not re-read the groups. The easiest way to get the group IDs refreshed is to exit the SSH session and reopen the connection.
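As a small sketch of my own (not from the original steps, and assuming standard coreutils), you can also start a fresh login shell instead of reconnecting, and then check which groups are active for the session:

```shell
# Starting a fresh login shell re-reads the group database,
# as an alternative to closing and reopening the SSH session:
#   su - bananapi
# Then verify that "sudo" shows up in the active group list:
id -nG | tr ' ' '\n' | sort
```

If “sudo” appears in that list, the new membership is in effect for the current shell.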

Inflating a 4GB image onto an 8GB SD card leaves space

The OS image is apparently made for a 4GB card, while I used an 8GB card, which leaves about 4GB unused, as you can see by entering the command fdisk -l when logged in to the Banana Pi. In the output below, look at the used sectors and divide by 2 to get the size in kBytes.

sudo fdisk -l

Disk /dev/mmcblk0: 7948 MB, 7948206080 bytes
4 heads, 16 sectors/track, 242560 cylinders, total 15523840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2b1c25d6

Device Boot      Start         End      Blocks   Id  System
/dev/mmcblk0p1            2048      124927       61440   83  Linux
/dev/mmcblk0p2          124928     7167999     3521536   83  Linux
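The sectors-to-size arithmetic can be checked directly in the shell; the sector count below is taken from the fdisk output for /dev/mmcblk0p2 above:

```shell
# Each sector is 512 bytes, so dividing the sector count by 2
# gives the size in kBytes; a further /1024 gives MBytes.
sectors=3521536
kbytes=$((sectors / 2))
mbytes=$((kbytes / 1024))
echo "$kbytes kB (~$mbytes MB)"
# -> 1760768 kB (~1719 MB)
```

That is roughly 3.4 GB for the root partition, confirming that about half of the 8GB card is unused.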

Now what to do with it?

As said my aim is to use it as a server but this I will cover in another article where I am going to show how to setup Ruby On Rails with MySQL on that tiny computer.

Copyright © 2014 Georg Schweflinghaus

Mounting FAT drive in Ubuntu with the right permissions

I just plugged an old hard disk with a FAT filesystem that had been used on a Windows machine into my Ubuntu system, and ran into a problem managing the files on that drive. I seldom do this, and if you run into the same issues maybe the following will help you. But first, as always, a word of caution: if you follow the steps I describe, you do so at your own risk. I will not be responsible or liable for any damage or loss.


The symptoms were that I could see all the data but was not able to delete or even move files.

>ls -l
-rwxr-xr-x 1 root root 113846 Apr 25 2006 TrekStor.ico
drwxr-xr-x 4 root root  32768 May  9 2009 Video
-r-xr-xr-x 1 root root  57636 Jul 18 2010 cache_0001.xml

Initially I had this in the fstab file. /dev/sdb1 is the device I mounted, and /media/sdb is the place in the Linux file system where I mounted it as a FAT drive. Maybe you have something similar.

/dev/sdb1                /media/sdb     vfat    rw,user    0    0

Thinking about the symptoms I realized that I was mounting a filesystem that knows nothing about user rights into a filesystem that does. One might think that the “rw” option in fstab would be sufficient, but it is not.

The solution

To come to the point: I wanted the files on the mounted FAT drive to belong to my group, I wanted full read, write and execute rights on them, and I wanted them to be invisible to anybody else on the machine. To achieve this, first find out your group ID by issuing the id command. (The output below shows my return values; yours may look different.)

>id
uid=1000(georg) gid=1000(georg) groups=1000(georg),…
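As a small addition of mine (not part of the original steps; standard coreutils assumed), id can also print just the group ID instead of the full line:

```shell
# Print only the numeric primary group ID, without the rest
# of the id output (useful for scripting the fstab entry):
id -g
# And the corresponding group name:
id -gn
```

Whichever way you obtain it, that numeric ID is what goes into the gid= mount option below.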

So now that we know the group ID is 1000 (in my case), we enter it together with a umask into the fstab by issuing:

>sudo gedit /etc/fstab

And then adding a line looking something like this:

/dev/sdb1 /media/sdb vfat rw,user,gid=1000,umask=0007 0 0
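To see why umask=0007 produces exactly these permissions: FAT has no real permission bits, so the vfat driver derives them from the mask. A quick sanity check of the arithmetic in the shell:

```shell
# The umask bits are removed from the base mode 0777, so the
# effective mode is 0777 & ~0007 = 0770, i.e. rwxrwx---
# (owner and group get everything, others get nothing).
printf 'effective mode: %o\n' $((0777 & ~0007))
# -> effective mode: 770
```

So with gid=1000 and umask=0007, every file on the drive appears as mode 770 owned by group 1000.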

Now unmount and mount again, which will only work if you are no longer using any files on the mounted drive, so please close editors or cd out of the directory.

>sudo umount /dev/sdb1
>sudo mount /dev/sdb1

If you now look at the files on the drive, you will see that the permissions are set properly for you to change whatever files you want.

>ls -l
-rwxrwx--- 1 root georg 113846 Apr 25 2006 TrekStor.ico
drwxrwx--- 4 root georg  32768 May  9 2009 Video
-r-xr-x--- 1 root georg  57636 Jul 18 2010 cache_0001.xml

Finally Ubuntu’s Zeitgeist is useful for me

I never understood the concept of Zeitgeist, which seems to be a service running in the background and recording what I do. In fact, I only know about its existence by chance: one day I tried to find a certain file I knew I had used with a certain program, and a short internet search revealed that there is such a service built into Ubuntu.

The only things I was able to lay my hands on, however, were settings for what Zeitgeist is and is not recording. As I did not, and to a large extent still do not, know what the program is truly doing, it made me feel uneasy, especially as there was obviously no direct use in it for me.

That changed when I stumbled across a program called “GNOME Activity Journal” today. I have added a screenshot below which shows that this program displays your activities over time. It is even able to show the files you opened in a preview mode, which is very nice.

You will find it by searching for gnome-activity-journal.

Screenshot from 2014-06-01 21:53:22

Now this is useful to me. Often some of my pet projects are carried out at loose intervals, and if an interval gets too long I have trouble recalling what I changed or did the last time I was tinkering with them.

Adding aFileChooser to your Android project

I had the problem of adding the aFileChooser library and here is how I solved it.
First you need to download the zipped source code from here:
Then I unpacked it:
Folder structure afileChooser

As I did not need the examples, I copied only the aFileChooser directory into my existing Android project, directly under the /lib directory. I like this because it keeps all the code I use in one directory.

Folder structure afileChooser in your own project

Then in Eclipse you have to open the library as a NEW project by selecting:
File -> New -> Project…
Screenshot from 2013-05-15 21:20:52

Next you will need to add the aFileChooser library to your project.
You do this by selecting your project in the explorer and right-clicking on its name. This opens the context menu, where you have to click on Properties to get to the following screen.
Eclipse project properties

By clicking Add you can then select the aFileChooser library.
Eclipse library dialog

Press OK in this and the previous dialog.

If you are unlucky like me, you will now see an error occurring, because your own project and the aFileChooser library project each contain, under their respective /lib directories, the same JAR file “android-support-v4.jar” but in different sizes and therefore versions.
Error in the Eclipse console.

Just copy your android-support-v4.jar file to the /lib/aFileChooser/lib folder and press F5 back in Eclipse to refresh.