24 November, 2013

Striped LVM (RAID 0) performance on a NAS system

Now I am setting up a new NAS, and to decide whether I should use striped LVM I did some measurements. Striped LVM means that the data volume is spread across 2 physical disks and data reads and writes are executed in parallel on the two disks, so in theory this will double the disk speed. However, striping has one disadvantage: if one of the disks fails, then all data is lost, while with normal volumes there is a chance that the data on the functional disk can be restored. The NAS has a Gigabit Ethernet LAN connection, so in theory speeds of 125 MBytes/sec can be achieved.

To do the tests I set up two 200 GByte logical volumes on the LVM containing two 4 TByte physical disks, one striped across the two disks and the other without striping. First I measured the speed within the NAS, using the Linux dd command:

On the normal volume:

>dd if=/dev/zero of=/media/data/Adatok/test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 13.979 s, 150 MB/s

and on the striped one:

>dd if=/dev/zero of=/media/data/Adatok/test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 7.52203 s, 279 MB/s

This looks promising: the striped volume is almost 2 times faster (and the overall speed is also pretty impressive).

In the next step I have set up a Samba share and used CrystalMark to measure the speeds on the Windows 7 client machine:

What can be seen is that with a 100 MB test size the results are almost identical, and with a 1000 MB size the results are also very similar, with some differences at small block sizes. These differences are probably some kind of measurement error.

So the conclusion is clear: it is not worth using RAID 0 or LVM striping in a NAS, because it just reduces data security and doesn't improve the speed.
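The arithmetic behind this conclusion fits in a few lines of Python (the disk speeds are the dd measurements above; 125 MB/s is the theoretical Gigabit Ethernet limit):

```python
# Why striping cannot help over the LAN: even the non-striped volume is
# already faster than what Gigabit Ethernet can carry.
wire_limit = 125.0   # MB/s: 1 Gbit/s divided by 8, before protocol overhead
plain_lv = 150.0     # MB/s: dd result on the normal volume
striped_lv = 279.0   # MB/s: dd result on the striped volume

bottleneck = min(wire_limit, plain_lv, striped_lv)
print(bottleneck)  # -> 125.0, so the network, not the disks, limits the NAS
```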

For comparison I include 3 other measurements done on the same Windows machine with internal/USB disks:

This may also explain the poor 4K performance; it may be normal for HDDs, and on the NAS with small file sizes the cache may help to improve it to an almost SSD level.

20 November, 2013

Speeding up NTFS file system access on Linux with a Windows virtual machine

Summary: ntfs-3g is not performing well; here are some measurements and a complicated way to get faster NTFS filesystem access on Linux.

Disclaimer: These tests were done on an Ubuntu 10.04 LTS installation, with the latest software versions included in this Ubuntu distribution, but on the ntfs-3g site there are more recent versions of ntfs-3g.

I wanted to do a backup of my Linux based NAS to an NTFS formatted hard drive, which meant copying around 4 TByte of data.

As the first step I built the disk into the NAS machine over a SATA connection and measured the drive speed:

>sudo hdparm -t /dev/sdf

 Timing buffered disk reads:  432 MB in  3.01 seconds = 143.62 MB/sec

This looks good: 4 TB at 143 MB/s means roughly 8 hours of copy time.
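The estimate itself, in Python (decimal units assumed):

```python
# Rough copy-time estimate from the hdparm reading.
data_bytes = 4e12      # ~4 TB to copy
speed_bps = 143.62e6   # bytes per second measured by hdparm
hours = data_bytes / speed_bps / 3600
print(round(hours, 1))  # -> 7.7
```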

Next I mounted the already formatted drive and did a copy test:

>sudo mount -t auto /dev/sdf2 /media/windows/
>dd if=/dev/zero of=/media/windows/test bs=512 count=390625
390625+0 records in
390625+0 records out
200000000 bytes (200 MB) copied, 94.6419 s, 2.1 MB/s

Ouch, this looks terrible: 4 TB / 2.1 MB/s ~ 22 days of copy time.

Then I started to look into whether there are some performance issues with the ntfs-3g driver.

On the Tuxera site it says: "Normally the I/O performance of NTFS-3G and the other file systems are very comparable." and there are some hints as to why it can be slow. After checking them all, in my case only the too-small block size applies, so let's try to increase the block size:

>dd if=/dev/zero of=/media/windows/test bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 11.3721 s, 18.4 MB/s

Ok, this now looks much better, but still far from the hard disk speed.
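ntfs-3g runs in user space via FUSE, so every write call is a round trip between the kernel and the driver; fewer, larger writes mean less overhead. A small Python sketch (illustrative only, it runs on any filesystem) showing how the block size changes the number of write calls for the same amount of data:

```python
import os
import tempfile

def write_zeros(path, total_bytes, block_size):
    """Write total_bytes of zeros in chunks of block_size;
    return how many write() calls were needed."""
    block = b"\0" * block_size
    calls = 0
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
            calls += 1
    return calls

total = 16 * 1024 * 1024  # 16 MiB is enough to show the ratio
with tempfile.TemporaryDirectory() as tmp:
    small_bs = write_zeros(os.path.join(tmp, "small"), total, 512)
    large_bs = write_zeros(os.path.join(tmp, "large"), total, 1024 * 1024)
    print(small_bs, large_bs)  # -> 32768 16
```

With bs=512, 2048 times as many write calls are needed as with bs=1M; each call carries a fixed overhead, which is why the small block size was so slow.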

After reading a bit more on the Tuxera site, there is a comparison of Embedded NTFS and other file systems, among them NTFS-3G:

NTFS-3G is at the bottom, so there is no sense in searching further for why ntfs-3g is slow on my machine: it is generally slow and probably it is not possible to speed it up.

Then came the idea to set up a virtual Windows machine inside that computer and use that as a driver for the NTFS hard disk.

I set up the virtual machine with VirtualBox (easy apt-get installation) and an evaluation copy of Windows Server 2008 R2 (which is also very easy to install) and continued the tests.

First I wanted to use iSCSI, but it also had some overhead, so finally I concluded to assign the complete 4 TB drive as a physical drive to the VirtualBox virtual machine.

For copying, I shared the 4 TB disk in Windows Server and mounted it on the Linux system as a CIFS file system.

So let's test:

>dd if=/dev/zero of=/media/test/test bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 3.8882 s, 53.9 MB/s

Still not even close to 100 MB/s but already an acceptable speed to copy the 4 TB in one day.

There are 2 more tuning possibilities which I thought about but did not try:

  1. Somewhere I read that as ntfs-3g is CPU limited and uses only one core, the Cool'n'Quiet feature of my AMD processor may reduce the clock speed. If I disabled it, maybe the ntfs-3g driver would perform better.
  2. The network between the virtual machine and the Linux host was a bridged Ethernet; I do not know much about the internals of this, but using the lo interface may provide better speeds.

19 October, 2013

Installing Ubuntu server 12.04 LTS from pendrive on an old PC to create a NAS

I have a pretty old PC from 2003 which I have been using for some time as a NAS. It is a standard self-built desktop PC, but it has 8 IDE hard drives ranging from 200 GByte up to 500 GByte. The total capacity of the drives is around 2 TB, and I just decided to give the system a refresh to back up my digital videos to it.

I have been using FreeNAS 7.2, which is pretty nice, but it is BSD based, and as I have more experience with Linux and especially with Ubuntu Server, I decided to switch to that.

Here comes a short guide on how to set up Ubuntu Server on a machine to work as a simple NAS.

Installing Ubuntu Server from pendrive (Flash memory)

Today this should be a straightforward thing: you take the installation ISO image, copy it to a pendrive with some tool, plug it into the machine and install, but in my case I ran into plenty of problems. In the end I solved all of them, and here is a list of how to solve them one by one.

1) Copy the ISO to a pendrive: I used Linux Live USB Creator; it worked fine without any problem.

2) It boots perfectly and starts the installation, but hangs after some time as it doesn't recognize the USB drive as a source for packages:

After some googling I found the solution. You have to add the following directive when starting up the install system:


You can do this while the system is starting by manually editing the startup string, or you can add it to the default boot menu in the txt.cfg file in the syslinux directory; then it will work automatically on each start.

3) Now it recognizes the USB as a source, but after some progress it complains about missing packages:

The reason here was a bit less obvious, but it turned out that somehow, when extracting the ISO image to the USB drive - probably because the file names were too long - the extensions and file names got truncated. In my case all the problems were in the:


folder. You can identify them where the extension is not .deb or .udeb, but you should also check the last part of the file name before the extension: it should be _i386 or _all. I manually corrected these problems and thus finally managed to install the system.
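Hunting for these files by hand is tedious; here is a Python sketch that lists the suspicious names in a folder (the _i386/_all suffix rule comes from my i386 install media, and this is my own helper, not an installer tool):

```python
import os
import tempfile

def suspicious_packages(root):
    """List package files whose name looks truncated: the extension is not
    .deb/.udeb, or the part before it does not end in _i386 or _all."""
    bad = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            stem, ext = os.path.splitext(name)
            if ext not in (".deb", ".udeb") or not stem.endswith(("_i386", "_all")):
                bad.append(os.path.join(dirpath, name))
    return sorted(bad)

# Quick demonstration with one truncated and one correct file name:
with tempfile.TemporaryDirectory() as demo:
    open(os.path.join(demo, "libc6-udeb_2.15_i386.ude"), "w").close()  # truncated
    open(os.path.join(demo, "base-files_6.5_i386.deb"), "w").close()   # fine
    print(suspicious_packages(demo))  # prints only the truncated .ude file
```

Running it against the package directory of the USB stick prints the candidates to rename by hand.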

The installation is straightforward; you should just follow the questions asked by the installer.

When you get to the software selection, choose the following components:

OpenSSH server
LAMP server
Samba file server

LAMP is not needed for the backup purpose, but I thought that it is always good to have a working web server on the machine, just in case it is needed.

Out of Range on the monitor after booting

After restarting the successfully installed server, I got only an "Out of Range" message on the monitor. Obviously GRUB tried to put the monitor into some nice graphics mode and failed.
After some research I found this article: http://askubuntu.com/questions/54067/how-do-i-safely-change-grub2-screen-resolution and based on it I opened /etc/default/grub and switched back to text mode by uncommenting GRUB_TERMINAL=console.
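For reference, the relevant line in /etc/default/grub after the change (remember to run sudo update-grub afterwards so the setting is picked up):

```
# /etc/default/grub - uncommented to force GRUB back to text mode:
GRUB_TERMINAL=console
```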

Initial setup

Before doing anything run:

sudo apt-get update
sudo apt-get upgrade 

To upgrade the computer to the latest software.


The first program I install is always Webmin as it gives me an easy way to set up the rest of the system.

Here is a description of how to install it: http://www.ubuntugeek.com/how-to-install-webmin-on-ubuntu-12-04-precise-server.html, but maybe it is better to follow the installation guide on the Webmin web page.

For me this was the following:

wget http://prdownloads.sourceforge.net/webadmin/webmin_1.660_all.deb

sudo dpkg --install webmin_1.660_all.deb

This returned an error that 4 packages are missing. Instead of installing the missing packages one by one, it is enough to tell apt-get to fix the dependencies:

sudo apt-get -f install

This now ran successfully, and at the end it said that Webmin is installed. And indeed, after this I could log in to Webmin (https://yourserverip:10000).

Network setup

I prefer to use static addresses for server-like devices. To set it up in Webmin, go to Networking/Network Configuration/Network Interfaces and set up a static address for the interface in use, then in Routing and Gateways set up the default gateway. In Hostname and DNS Client set up the needed settings.
When ready with all the changes, you can apply them all at once with the Apply Configuration button.

After this step you have to redirect your browser to the new Webmin address and if you have a terminal session change that as well.

LVM setup

In my practice I use LVM (Logical Volume Manager) to merge the individual disks into one big "virtual" disk, so when using it I do not need to care whether I have enough space on the individual drives, as the data spans across them automatically. It would be possible to introduce some kind of RAID here, but as this will be a backup device I do not feel it is necessary. This approach has the drawback that if one of the disks fails I may lose the data on all disks, but as I said, this is a backup, a low priority backup, and I hope that with some advanced "bithunting" technology it would be possible to recover the data at least from the remaining disks.

First we need to install LVM from the command line:

sudo apt-get install lvm2

After this, press Refresh Modules in Webmin, log out and back in, and Logical Volume Management will appear under Hardware.

Before creating the volumes, partitions should first be created to be added to the LVM volumes. This can be done from the Partitions on Local Disks Webmin module. Here we need to add one partition to each disk, covering the full disk space. For the disks, I first wiped and recreated the partition table, then added a partition of type "none" for the entire disk.

Now go to the LVM manager: first create a Volume Group, add the partition on the first disk and give it a meaningful name. Next go to Physical Volumes and add all the remaining disk partitions to the Volume Group. Finally go to Logical Volumes and allocate all the space in the recently created Volume Group to one logical volume. This logical volume will be accessible in the operating system as one big drive.
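The same setup can also be sketched from the command line; the device names and volume names below are examples only (roughly equivalent to the Webmin clicks above, not a verified transcript):

```shell
# Example device names - replace with your own partitions.
sudo pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1        # prepare the partitions for LVM
sudo vgcreate datavg /dev/sda1 /dev/sdb1 /dev/sdc1 # one volume group over all disks
sudo lvcreate -l 100%FREE -n datalv datavg         # one LV spanning all free space
```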

Checking drive for errors

The next step is to check the complete new logical volume for errors. Normally this is not necessary, but as I have very old hardware with very old HDDs, I prefer to run a test before starting to use it.

As this will probably run on my computer for longer than a day, I use the "screen" utility, which allows me to disconnect from the command line and reconnect again later.

The command for checking the disk is:

sudo badblocks -vws /dev/mapper/LogicalVolumeName

I was right with the precaution, as during the tests I got some errors, but after rearranging the data cables of the HDDs I managed to fix that. Badblocks isn't fast at all: for my 1.7 TB logical volume it took more than 3 days to complete, but finally the test passed with 0 errors.

Formatting and mounting the new volume

The next step is to format the volume; it can be done in Webmin's Logical Volume Management. Select the newly created LV and select Create Filesystem on the detail view page of the volume. I selected the Ext4 filesystem with default parameters, except for the reserved blocks where I specified 2%, to reduce the overhead on this big volume.

Although you can do the mounting here in Logical Volume Management as well, Disks and Network File Systems in the System menu of Webmin provides many more options. So we go there, select to add a new ext4 mount and mount the filesystem to /media/datalv.

Samba Configuration

The last step is to configure this volume for network access from Windows with Samba.

In Webmin we go to Servers/Samba and in the top list select "Create a new file share". For the backup server I do not need access control and user level permissions, so I will set up the share for "guest only" access. When creating the share we give it a name of our choice, then select /media/datalv as the directory to share. If we do not want to see the lost+found directory within the share, we can add a subdirectory to it; Webmin will automatically create that directory. To facilitate guest access we should change the owner to nobody and the group to nogroup. We can leave the rest at the defaults.
After creating the share we should click on it to get to its settings and select "Security and Access Control". Here we change it to writable and to guest only access, and we are ready.
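What Webmin generates ends up in /etc/samba/smb.conf; a guest-only share configured as above looks roughly like this (the share name and the subdirectory are examples):

```ini
[backup]
    path = /media/datalv/share
    writable = yes
    guest ok = yes
    guest only = yes
    force user = nobody
    force group = nogroup
```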

From here on, the system is ready and you can use it for file storage.

04 February, 2013

Cleaning file names in XBMC before scraping

XBMC is an excellent media center available on a lot of different platforms. Besides the various playback possibilities it has very good scraping and library functionality, which means that based on the video file name it can download the DVD cover, director, actors and all kinds of useful information. When using the program you can select a movie based on all this nicely presented information and pictures, and not only on the similar-looking file names.

During scraping - when the data is collected from different movie databases on the internet (like IMDb) - XBMC first tries to figure out the movie title from the file name and then calls a plugin, which retrieves the data from the internet.

In this post I will discuss how the file name to movie title conversion works. The cleandatetime and cleanstrings mentioned here can be customized in advancedsettings.xml, so you are able to fine-tune the process.
  1. The file name is matched against cleandatetime, a regular expression which finds the date in the file name; if it finds one, then everything before the date is considered the name of the movie and everything after the date is discarded. The date is a 4 digit number starting with 19 or 20, separated by special characters like comma, period, hyphen, _, ( and ) from the rest of the file name. The full regexp is the following:
    (.+[^ _\,\.\(\)\[\]\-])[ _\.\(\)\[\]\-]+(19[0-9][0-9]|20[0-1][0-9])([ _\,\.\(\)\[\]\-][^0-9]|$)
    Here \1 is the movie name, \2 is the date, and \3 and everything after it is discarded.
    If it doesn't find a date, then the file extension is removed from the file name, and the rest is used as the movie name.
  2. After the first step, the result is processed further with cleanstrings. Cleanstrings is a set of regexps which are processed one after the other. When a match is found, the match and everything after it is removed from the end of the name.
    As I have a lot of movies named local title - original title, I have added a very simple rule to my settings:
    <cleanstrings action="append"> <regexp>-</regexp> </cleanstrings>
    This simply discards the original title, and the scraper is then able to find the movie.
    It is a pity that it can only remove from the end of the file name; I have some files with abbreviations at the beginning of the file name, and it would be nice to be able to remove those as well.
  3. In the last step the underscores and dots are converted to spaces, but the dots only if there are no spaces at all in the file name.
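The first and third steps can be sketched in a few lines of Python; this is only an illustration using the regexp quoted above, not XBMC's actual code, and the cleanstrings step is left out:

```python
import re

# The cleandatetime expression quoted above, verbatim.
CLEANDATETIME = re.compile(
    r"(.+[^ _\,\.\(\)\[\]\-])[ _\.\(\)\[\]\-]+"
    r"(19[0-9][0-9]|20[0-1][0-9])"
    r"([ _\,\.\(\)\[\]\-][^0-9]|$)"
)

def guess_title(filename):
    """Sketch of steps 1 and 3: find the year and keep what precedes it,
    then turn underscores (and, if there are no spaces, dots) into spaces."""
    stem = filename.rsplit(".", 1)[0]   # file name without extension
    m = CLEANDATETIME.search(filename)
    name = m.group(1) if m else stem
    year = m.group(2) if m else None
    name = name.replace("_", " ")
    if " " not in name:
        name = name.replace(".", " ")
    return name, year

print(guess_title("Some.Movie.2008.DVDRip.mkv"))  # -> ('Some Movie', '2008')
```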

So this is the process of converting the file name to a movie title, and the way you can customize it.
If you have a lot of movies to scrape, it is worth customizing the process, and if you run into limitations, you can do at least 2 things: a) find or write a utility which renames the movies to a format more compatible with XBMC, b) the scraper plugins also offer some flexibility to do regexp processing.

09 January, 2013

Difference between W25 SW and Netgear WNR2000v2 SW

Recently I bought some used Netgear 300M routers for a very affordable price, and I have looked a bit into the internal structure of the software that came with the router.

It was very interesting to look at the architecture of this software, as it is quite different from the one found on the W25.

The first positive surprise is that Netgear provides the complete build environment for the router, with a lot of source code and instructions on how to build the firmware. I have not actually tried to build the firmware, but when checking the details of the operation I could find all the relevant source files. As far as I know, Ericsson has never released the source code of the W25, probably violating the GNU licence of a lot of its components.

On the other hand, the structure of the W25 is much nicer: it is like a mini Linux machine, keeping most of the conventions of desktop Linux operating systems. The configuration is stored in the /etc directory, the standard tools to set up the network and other parts of the system are included, and with desktop Linux knowledge you are able to look around and do simple tasks. One part of the flash is configured as a rw filesystem, and the files which need to be changed are symlinked to this area.

The Netgear has a very different concept. The data here is also stored in a separate part of the flash, but it is not exposed as a file system; instead there is a utility called nvram which can read and write parameters in this area. The configuration is done by setting parameters there and then using C utilities to set up the router based on this information. Even the init process and the web server are hard coded C programs, so there is not much tweaking you can do without compiling a new firmware.
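From the shell this looks roughly like the following (a sketch; the parameter name is an example, and the exact set of nvram subcommands may differ between firmware versions):

```shell
nvram get lan_ipaddr              # read one parameter from the settings area
nvram set lan_ipaddr=192.168.1.1  # change it in RAM only
nvram commit                      # write the changed settings back to flash
```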

So the good news is that both devices can be tweaked, but not with the same level of difficulty.

Android phone not syncing with Google account

Recently I ran into the above mentioned problem twice, and the reasons were not obvious, so I share the common causes with you:

1) Adding new calendar event on the phone

When you add a new calendar event on your phone, with the default settings it is added to the local calendar on your phone, which is not synchronized to your Google account. Once you have done this you cannot correct it; you have to add the event again to your Google calendar. To avoid this mistake you should always check that the appointment is added to the correct calendar.

2) The Google account not syncing at all

In this case none of the mail, calendar or contacts are syncing: you go to your accounts, you see that synchronization is enabled for all of them, but the latest synchronization is some days old, and even if you ask for an immediate synchronization nothing happens. The reason in my case was that the available free space had gone down, and I already had a notification icon about it. It was a bit strange, because I had more than 15 MB of free space, which should be enough for all my email, contacts and calendar, but after uninstalling some applications the syncing suddenly started, and I had my latest email and calendar on my phone.

So if you have strange problems with your synchronization, check these 2 things first.