
budgetwarrior 0.2.1 - Minor changes and Gentoo ebuild
   Posted:


I've released a new version of budgetwarrior, the release 0.2.1. budgetwarrior is a simple command line application to manage a personal budget.

Version 0.2.1 contains several bug fixes for archived accounts and for budgets spanning several years.

The application and its source code are available online: https://github.com/wichtounet/budgetwarrior

I've created Gentoo ebuilds for this application. They are available on my Portage overlay: https://github.com/wichtounet/portage-overlay

Gentoo Installation

  • Edit the overlays section of /etc/layman/layman.cfg. Here's an example:

overlays: http://www.gentoo.org/proj/en/overlays/repositories.xml http://github.com/wichtounet/portage-overlay/raw/master/repository.xml

  • Sync layman
layman -S
  • Add the overlay:
layman -a wichtounet
  • Install budgetwarrior
emerge budgetwarrior

Conclusion

If you find any issues with the tool, don't hesitate to post an issue on Github. If you have comments about it, you can post a comment on this post or contact me by email.


Home Server Adventure – Step 3
   Posted:


Here is some news about my home server installation project. In the past, I already installed a server in a custom Norco case. I wanted to replace my QNAP NAS with a better server, the QNAP being too slow and not extensible enough for my needs.

Here is how it looks right now (sorry about the photo quality :( my phone does not seem to focus anymore...):

My Home Server Rack

So I replaced my QNAP NAS with a custom-built NAS. Again, I bought a Norco case, the RPC-4220. This case has 20 SATA/SAS bays. I ordered it with the SAS backplane replaced by a SATA one. I also ordered some replacement fans to make it less noisy. I installed my six hard disks in RAID 5, managed with mdadm, with LVM partitions on top of the array.

I also added an APC UPS, which lets me ride through all the minor power glitches in my old apartment and also gives me about 10 minutes of runtime when there is a power outage.

I haven't added a lot of services on the server yet. I now run Owncloud on the server, and it completely replaces my Dropbox account. I also improved my Sabnzbd installation with other newsgroup automation tools.

Not directly related to my rack, but I also installed a custom XBMC server for my TV. It reads from the NAS server. And of course, it runs Gentoo too.

In the future, I'll add a new simple server as a front firewall, to manage security a bit more and to avoid having to configure redirections in my shitty router (which I would like to replace, but unfortunately there are not a lot of rack-mountable routers compatible with my ISP). It will probably use a Norco case too.

If you have any questions about my build, don't hesitate ;)


Zabbix - Low Level Discovery of cores, CPUs and Hard Disk
   Posted:


Zabbix SSD Status, configured with Low Level Discovery

At home, I'm using Zabbix to monitor my servers. It has plenty of interesting features and can be extended a lot by using User Parameters.

In this post, I'm gonna talk about Low Level Discovery (LLD). If you are only interested in the final result, go to the Conclusion section, where you can download my template containing all the rules ;)

Low Level Discovery (LLD)

LLD is a feature to automatically discover some properties of the monitored host and create items, triggers and graphs.

By default, Zabbix supports three types of item discovery:

  • Mounted filesystems
  • Network interface
  • SNMP's OIDs

The first two are very useful, since they give you by default, for instance, the free space of each mounted file system or the bandwidth going in and out of each network interface. As I only monitor Linux servers, I don't use the last one, but it may well interest other people.

Another very interesting thing about this feature is that you can extend it to discover more items. In this article, I will show how to discover CPUs, CPU cores and hard disks.

The most important part of custom discovery is a script on each monitored machine that can "discover" something. It can be any executable; the only important thing is that it outputs data in the correct format. I have to say that the format is quite ugly, but that is probably not very important ;) Here is the output of my hard disk discovery script:

{
"data":[
    {"{#DISKNAME}":"/dev/sda","{#SHORTDISKNAME}":"sda"},
    {"{#DISKNAME}":"/dev/sdb","{#SHORTDISKNAME}":"sdb"},
    {"{#DISKNAME}":"/dev/sdc","{#SHORTDISKNAME}":"sdc"},
    {"{#DISKNAME}":"/dev/sdd","{#SHORTDISKNAME}":"sdd"},
    {"{#DISKNAME}":"/dev/sde","{#SHORTDISKNAME}":"sde"},
    {"{#DISKNAME}":"/dev/sdf","{#SHORTDISKNAME}":"sdf"},
    {"{#DISKNAME}":"/dev/sdg","{#SHORTDISKNAME}":"sdg"}
]
}

You can have as many keys as you want for each discovered item, but the format must remain the same. In the item, trigger and graph prototypes, you will then use {#DISKNAME} or {#SHORTDISKNAME} to refer to the discovered values.
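Since every discovery script must emit this same wrapper, the boilerplate can be factored out. Here is a minimal sketch (the lld_json helper is my own invention, not part of Zabbix): it reads one discovered value per line on stdin and wraps it in the LLD format, taking care of the trailing comma:

```shell
#!/bin/bash
# Hypothetical helper: wrap one value per line (from stdin) into the LLD
# JSON format, using the macro name passed as $1
lld_json() {
    echo "{"
    echo "\"data\":["
    while read -r value; do
        echo "    {\"{#$1}\":\"$value\"},"
    done | sed '$ s/,$//' # strip the trailing comma so the JSON is valid
    echo "]"
    echo "}"
}

printf 'sda\nsdb\n' | lld_json SHORTDISKNAME
```

This prints the same structure as the hard disk example above, with one {#SHORTDISKNAME} entry per input line and no trailing comma after the last element.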

Once you have created your script, you have to register it in the Zabbix configuration as a user parameter. For instance, if you use the Zabbix agent daemon, you need these lines in /etc/zabbix/zabbix_agentd.conf:

EnableRemoteCommands=1
...
UnsafeUserParameters=1
...
UserParameter=discovery.hard_disk,/scripts/discover_hdd.sh

Now, when you create the discovery rule, you can use discovery.hard_disk as the key.

A discovery rule in itself is useless without prototypes; you can create three types of prototypes:

  • Item Prototype: This will create a new item for each discovered entity.
  • Trigger Prototype: This will create a new trigger for each discovered entity.
  • Graph Prototype: This will create a new graph for each discovered entity.

The most useful are by far the item and trigger prototypes. The biggest problem with graphs is that you cannot create an aggregate graph of all the discovered items. For instance, if you record the temperature of your CPU cores, you cannot automatically create a single graph with the temperature of each discovered core; you have to create that graph in each host. Which makes, imho, graph prototypes pretty useless. Anyway...

In the next sections, I'll show how I have created discovery rules for hard disks, CPUs and CPU cores.

Discover Hard Disks

The discovery script is really simple:

#!/bin/bash
# List the /dev/sdX devices, strip the partition numbers and deduplicate
disks=$(ls /dev/sd* | sed 's/[0-9]*$//' | uniq)
echo "{"
echo "\"data\":["
for disk in $disks
do
    echo "    {\"{#DISKNAME}\":\"$disk\",\"{#SHORTDISKNAME}\":\"${disk:5}\"},"
done | sed '$ s/,$//' # drop the trailing comma so the JSON stays valid
echo "]"
echo "}"

It just lists all the /dev/sdX devices, strips the partition numbers and removes the duplicates, to end up with only the hard disks.
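The partition-stripping step can be checked on canned input. Note that anchoring the sed expression on the end of the name, as below, only removes trailing partition numbers, which is a bit safer than deleting every digit in the device name:

```shell
# Strip trailing partition numbers, then collapse duplicates
printf '/dev/sda1\n/dev/sda2\n/dev/sdb1\n' | sed 's/[0-9]*$//' | uniq
```

This prints /dev/sda and /dev/sdb, one per line.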

I've created several item prototypes for each hard disk. Here are some examples using S.M.A.R.T. (you can download the template with all the items in the Conclusion section):

  • Raw Read Error Rate
  • Spin Up Time
  • SSD Life Left
  • Temperature
  • ...

You may notice that some of them only make sense for an SSD (SSD Life Left) and some others do not make any sense for an SSD (Spin Up Time). This is not a problem, since they will just be marked as Not Supported by Zabbix.

All this data is collected using the smartctl utility.

I've also created some triggers to indicate the coming failure of a hard disk:

  • SSD Life Left too low
  • Reallocated Sector Count too low
  • ...

I've just used the thresholds reported by smartctl; they may differ from one disk manufacturer to another. I don't put a lot of faith in these values, since disks generally fail before reaching their thresholds, but they can be a good indicator anyway.

Discover CPUs

Here is the script to discover CPUs:

#!/bin/bash
# Count the logical CPUs (first "CPU(s):" line of lscpu output)
cpus=$(lscpu | grep "CPU(s):" | head -1 | awk '{print $NF}')
cpus=$((cpus-1))
echo "{"
echo "\"data\":["
for cpu in $(seq 0 $cpus)
do
    echo "    {\"{#CPUID}\":\"$cpu\"},"
done | sed '$ s/,$//' # drop the trailing comma so the JSON stays valid
echo "]"
echo "}"

It just parses the output of lscpu to find the number of CPUs and then creates an entry for each of them.

I just have one item for each CPU: the CPU utilization.

I haven't created any trigger here.
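If you prefer not to parse lscpu's human-readable output (its exact format can vary between versions), the logical CPU count can also be obtained directly; both of these should be available on most Linux machines:

```shell
nproc                       # coreutils
getconf _NPROCESSORS_ONLN   # POSIX getconf
```

Either value could replace the grep/awk pipeline in the script above.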

Discover CPU Cores

We just discovered the CPUs, but it is also interesting to discover the cores (if you don't have Hyperthreading, the result will be the same). It is especially useful to get the temperature of each core. Here is the script:

#!/bin/bash
# Count the physical cores per socket
cores=$(lscpu | grep "Core(s) per socket:" | awk '{print $NF}')
cores=$((cores-1))
echo "{"
echo "\"data\":["
for core in $(seq 0 $cores)
do
    echo "    {\"{#COREID}\":\"$core\"},"
done | sed '$ s/,$//' # drop the trailing comma so the JSON stays valid
echo "]"
echo "}"

It works in the same way as the previous script.

I've only created one item prototype, to get the temperature of each core with lm_sensors.

Wrap-Up

Here are all the UserParameter entries necessary to make the discovery rules and the items work:

### System Temperature ###
UserParameter=system.temperature.core[*],sensors|grep Core\ $1 |cut -d "(" -f 1|cut -d "+" -f 2|cut -c 1-4
### DISK I/O###
UserParameter=custom.vfs.dev.read.ops[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$4}'
UserParameter=custom.vfs.dev.read.ms[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$7}'
UserParameter=custom.vfs.dev.write.ops[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$8}'
UserParameter=custom.vfs.dev.write.ms[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$11}'
UserParameter=custom.vfs.dev.io.active[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$12}'
UserParameter=custom.vfs.dev.io.ms[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$13}'
UserParameter=custom.vfs.dev.read.sectors[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$6}'
UserParameter=custom.vfs.dev.write.sectors[*],cat /proc/diskstats | egrep $1 | head -1 | awk '{print $$10}'
UserParameter=system.smartd_raw[*],sudo smartctl -A $1| egrep $2| tail -1| xargs| awk '{print $$10}'
UserParameter=system.smartd_value[*],sudo smartctl -A $1| egrep $2| tail -1| xargs| awk '{print $$4}'
### Discovery ###
UserParameter=discovery.hard_disk,/scripts/discover_hdd.sh
UserParameter=discovery.cpus,/scripts/discover_cpus.sh
UserParameter=discovery.cores,/scripts/discover_cores.sh

(these must be set in zabbix_agentd.conf)
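The two smartd_* parameters depend on the column layout of smartctl -A, which prints one attribute per line with the normalized value in column 4 and the raw value in column 10. A canned sample line (the numbers are made up) shows what the awk filters extract:

```shell
# A fake smartctl -A attribute line, for illustration only
line='194 Temperature_Celsius 0x0022 064 053 000 Old_age Always - 36'
echo "$line" | awk '{print $10}' # the raw value
echo "$line" | awk '{print $4}'  # the normalized value
```

Here the raw value is 36 and the normalized value 064.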

You also need to give the zabbix user the right to use sudo with smartctl. For that, edit your /etc/sudoers file and add this line:

zabbix ALL=(ALL) NOPASSWD: /usr/sbin/smartctl

Conclusion and Download

I hope that this helps some people use Low Level Discovery in their Zabbix monitoring installation.

LLD greatly eases the creation of items for hosts with different hardware or configurations. However, it has some problems for which I have not yet found a proper solution. First, you have to duplicate the discovery scripts on each host (or at least have them on a share available from each of them). Then, the user parameters are also duplicated in the configuration of each agent. The biggest problem, I think, is that you cannot automatically create a graph aggregating the generated items of all discovered entities. For instance, I had to create a CPU Temperature graph in each of my hosts. If you have only a few hosts, like me, it is acceptable, but if you have hundreds of hosts, you just don't do it.

All the scripts and the template export file are available in the zabbix-lld repository. For everything to work, you need the lscpu, lm_sensors and smartmontools utilities.

If you have any questions or if something doesn't work (I don't offer any guarantee, but it should work on most recent Linux machines), don't hesitate to comment on this post.


Thor OS: Boot Process
   Posted:


Some time ago, I started a hobby project: writing a new operating system. I'm not trying to create a competitor to Linux, I'm just trying to learn some more about operating systems. I'm gonna try to write some posts about this kernel on this blog.

In this post, I'll describe the boot process I've written for this operating system.

Bootloader Step

The first step is of course the bootloader. The bootloader sits in the MBR and is loaded by the BIOS at 0x7C00.

I'm doing the bootloading in two stages. The first stage (one sector) prints some messages and then loads the second stage (one sector) from the floppy to 0x900. The goal of the two stages is simply to be able to overwrite the bootloader memory with the second stage. The second stage then loads the kernel into memory from the floppy; the kernel is loaded at 0x1000 and executed directly.

The bootloader stages are written in assembly.

Real mode

When the processor starts, it boots in real mode (16 bits) and you have to set up plenty of things before you can go into long mode (64 bits). So the first steps of the kernel run in 16 bits. The kernel is mostly written in C++ with some inline assembly.

Here are all the things that are done in this mode:

  1. The memory is inspected using the BIOS E820 function. It is necessary to do this now, since BIOS function calls are no longer available after switching to protected mode. This function gives a map of the available memory, which is used later by the dynamic memory allocator.
  2. Interrupts are disabled and a fake Interrupt Descriptor Table is configured to make sure no interrupts are raised in protected mode.
  3. The Global Descriptor Table is set up. This table describes the different portions of memory and what can be done with each of them. I have three descriptors: a 32-bit code segment, a data segment and a 64-bit code segment.
  4. Protected mode is activated by setting the PE bit of the CR0 control register.
  5. Paging is disabled.
  6. Jump to the next step. It is necessary to use a far jump so that the code segment is changed.

Protected Mode

At this point, the processor is running in protected mode (32 bits). BIOS interrupts are not available anymore.

Again, several steps are necessary:

  1. To be able to address all the memory, Physical Address Extensions (PAE) are activated.
  2. Long mode is enabled by setting the EFER.LME bit.
  3. Paging is set up; the first MiB of memory is identity-mapped to the exact same virtual addresses.
  4. The address of the Page-Map Level 4 Table is set in the CR3 register.
  5. Finally, paging is activated.
  6. Jump to the long mode kernel, again using a far jump to change the code segment.

Long Mode

The kernel finally runs in 64 bits.

There are still some initialization steps that need to be done:

  1. SSE extensions are enabled.
  2. The final Interrupt Descriptor Table is set up.
  3. ISRs are created for each possible processor exception.
  4. The IRQs are installed in the IDT.
  5. Interrupts are enabled.

At this point, the kernel is fully loaded and starts its initialization: loading drivers, preparing memory, setting up timers...

If you want more information about this process, you can read the different source files involved (stage1.asm, stage2.asm, boot_16.cpp, boot_32.cpp and kernel.cpp) and if you have any questions, you can comment on this post.


New hobby project: Thor-OS, 64bit Operating System in C++
   Posted:


It's been a long time since I have posted about a project on this blog. A bit more than two months ago, I started a new one: thor-os.

This project is a simple 64-bit operating system, written in C++. After having written a compiler, I decided it could be fun to try an operating system. And it is fun indeed :) It is a really exciting project and there are plenty of things to do in every direction.

I've also written the bootloader myself, but it is a very simple one. It just reads the kernel from the floppy, loads it in memory and then jumps to it, nothing else.

Features

Right now, the project is fairly modest. Here are the features of the kernel:

  • Serial Text Console
  • Keyboard driver
  • Timer driver (PIT)
  • Dynamic Memory Allocation
  • ATA driver
  • FAT32 driver (Work In progress)
  • Draft of an ACPI support (only for shutdown)

All the commands are accessible with a simple shell integrated directly in the kernel.

Testing

All the testing is done in Bochs and QEMU. I don't have any other computer available to test on real hardware right now, but that is something I really want to do. For now, my bootloader only supports floppies, so it will need to be improved to load the kernel from a disk, since it is not likely that I will find a floppy disk to test with :D

Here is a screenshot of the OS in action:

Thor OS Screenshot

Future

The next thing that I will improve is the FAT32 driver to have a complete implementation including creating and writing to files.

After that, I still don't know whether I will try to implement a simple framebuffer or start implementing user space.

As for all my projects, you can find the complete source code on Github: https://github.com/wichtounet/thor-os

Don't hesitate to comment if you have any questions or suggestions for this project ;) I will try to write some posts about it in the future; again, if you have ideas of subjects for these posts, don't hesitate. The first will probably be about the boot process.


Gentoo Tips: Avoid Gnome 3.8 from being emerged automatically
   Posted:


Since Gnome 3.8 has been in the Portage tree, a lot of problems arise when you try to emerge something. If this only happened when updating the system, it would be OK, but it happens every time you try to install something.

For instance, if I try to update vim on my system, it tries to update empathy to version 3.8 and then pulls some other dependencies, causing blocks and other USE problems. I personally don't think empathy should be emerged when emerging vim. Fortunately, you can disable this behavior by invoking emerge in this way:

emerge --ignore-built-slot-operator-deps=y ...

With that, when you emerge vim, it doesn't emerge Gnome 3.8. It is very useful if you want to stay with Gnome 3.6 for the moment.
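If you always want this behavior, it should also be possible to make it permanent through the EMERGE_DEFAULT_OPTS variable in /etc/portage/make.conf (note that it then applies to every emerge invocation, which may hide updates you actually want):

```shell
# /etc/portage/make.conf
EMERGE_DEFAULT_OPTS="--ignore-built-slot-operator-deps=y"
```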

I already used this tip several times. I hope that this will be useful to other people.


budgetwarrior 0.2 - Visual reports, fortune status and expenses aggregates
   Posted:


I've released a new version of budgetwarrior, version 0.2.

I've added several new features to the tool. First, I've added a graph of the expenses/earnings/balances of each month of a given year, in the form of a bar plot. You can see an example in practice here:

budgetwarrior monthly report

Nothing fancy, but it gives a good overview of the current state of your budget.

I've added a new module, called fortune, that lets you enter your total fortune and then computes the difference between the entered fortune statuses. For now, it doesn't do anything else with this data, but in the future I want to correlate it with the balances, to check the difference between the recorded expenses and earnings and the actual evolution of the fortune.

I've also added a more convenient way of creating expenses and earnings. Just type "budget expense add" and you'll be able to fill all the fields one by one. Of course, the command line commands are still available.

The last new feature I've added is an aggregate report (budget overview aggregate). This view simply groups together all the expenses of a year that have the same name. If you always use the same expense title for your groceries, you'll see the total you spent on groceries in a year. You can also name your expenses with the format "Category/Expense", and all the expenses with the same category will be grouped together in the aggregate view. That lets you keep enough detail in the monthly overview while still logically grouping your expenses in the aggregate view.
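To illustrate the grouping idea (with made-up entries; this is not budgetwarrior's actual storage format), the category aggregation can be sketched in awk: sum the amounts by the part of the expense name before the first '/':

```shell
# Sum amounts by the "Category/" prefix of each expense name
printf 'Food/Groceries 120\nFood/Restaurant 80\nCar/Fuel 60\n' |
awk '{ split($1, parts, "/"); total[parts[1]] += $2 }
     END { for (c in total) print c, total[c] }' | sort
```

This prints "Car 60" and "Food 200", one category per line.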

The other changes are minor. I've improved the monthly overview to sort the expenses and earnings by date. To facilitate storing the files in a service like Dropbox, the data and configuration files are now only written if they have been modified. The mean in the current overview now takes into account only the months up to the current one, not future months (which were just ruining the means).

If you are interested in the tool, you can download it on Github: budgetwarrior

I hope this tool will be useful to some people. If you have any questions, just leave a comment on this post or contact me directly by email. I'll be glad to help.


Home Server Adventure - Step 2
   Posted:


If you remember, I talked about my home server project in a previous post. I had installed an old Dell Poweredge server, a 3com gigabit switch and a monitoring console. The problem with this configuration was that it was too noisy as the rack is installed in my apartment.

The first thing I did was to replace my old 3Com switch with a Zyxel managed fanless switch. Being fanless, this switch is completely silent which makes a good difference :)

After that, I replaced the Dell server with a custom installation in a Norco RPC-230 case. The basic case comes with two 80mm fans in the front. These two fans are very powerful, but quite noisy. I replaced them with two Enermax T.B. Silence 80mm fans, which are almost silent; it is really great. It is probably not enough airflow for a large configuration, but in my case, I think it will be largely enough.

I've already installed several services on my server:

  • Tiny Tiny RSS: a web-based news feed reader and aggregator. I use it to replace Feedly, which I was getting less and less fond of.
  • Sabnzbd: I already had this NZB downloader on my desktop, but now that it is on the server, it can download even when I'm not at home.
  • Zabbix: a monitoring application that manages all my appliances and servers (the new server, the NAS, the switch and the router). It is my first attempt with Zabbix. It is a bit cryptic to configure, but the features are very numerous.
  • Teamspeak and Mumble: They already were installed on the previous server.

I plan to install new services in the future, but I have no concrete plans for now.

Here is a picture of the rack in its current state:

19" Rack

I have also installed my router and the NAS on a shelf in the rack.

You can't see it, but I have also added two rackable PDUs on the back to help organize the cables a bit. Even if it is still not perfect, it is already better than before.

For now, I think that my system is in good shape. When I have some more budget, I will replace the QNAP NAS with a custom server, probably again with a Norco case.


budgetwarrior 0.1.0 - command-line personal budgeting tool
   Posted:


Being bored by using Google spreadsheets for my personal budgeting, I decided to write an application to do it. Being a huge fan of taskwarrior, I decided to write a somewhat similar application for my personal budget, and budgetwarrior was born. I have been using it for two months and I thought that it could be useful to other people. The application is developed in C++. More information is available on Github: https://github.com/wichtounet/budgetwarrior.

budgetwarrior 0.1.0 is a command-line only tool. It works on this principle: you create a set of accounts, each with a certain limit, and then you declare your expenses in each of these accounts. You can also record earnings in each account. You can also keep track of your debts with the application. It also supports automatic creation of recurring expenses, for instance when you pay the rent (for now, only monthly expenses are supported).

Once you've put all your data in the application, it provides reports on the state of your budget by month or by year. For instance, here is my current monthly report:

Monthly Report

You can see directly which accounts are in a good shape and which are not.

Here is the current yearly report:

Yearly report

In this view, you can see directly how your accounts evolve during the year and where are your biggest expenses and earnings.

As everything is displayed horizontally, the more accounts you have, the larger the view becomes. With the 7 accounts I have, it takes about 1600 pixels of width to display. I will try to improve that in the future if people are interested in making it work on smaller screens.

Installation

You can install budgetwarrior directly from the sources:

git clone -b master git://github.com/wichtounet/budgetwarrior.git
cd budgetwarrior
cmake .
make
sudo make install

After that, you can use budgetwarrior by using the command budget.

The usage is fairly simple; you can use budget help to get the list of the commands you need to create expenses and earnings and to display overviews.

The project

If you have any questions about the project, don't hesitate to contact me or to post a comment on this post. If people are interested, I can write more complete documentation.

If you have a suggestion or you found a bug, please post an issue on the github project: https://github.com/wichtounet/budgetwarrior.


Norco RPC 230 Case Review
   Posted:


As my previous Dell 1U server was too noisy for my home server installation, I invested in a new case, a Norco RPC-230. This is a standard 2U case for 19" racks. It supports micro-ATX motherboards and ATX power supplies. These components are easy to find, which makes it a very interesting case for a home server installation.

I ordered the case from Ri-Vier Automatisering, the European Norco distributor. I ordered it directly with the RL-26 rails for the rack and the ATX power supply provided by Ri-Vier. For this case, I also bought a micro-ATX Gigabyte G41M-Combo motherboard. I already had 4GB of DDR2 RAM and a Core 2 Duo processor from a previous configuration. I also took my old 120GB Samsung SSD for this server. This will be largely sufficient for what I plan to run on it.

The case has four 3.5" drive bays and one 5.25" drive bay. The airflow is provided by two 80mm fans on the front panel.

Package

I received the package very quickly from Ri-Vier. The case is packaged in a double box and everything in the package is well ordered. The power supply was already in the box itself. Here is a picture of the contents of the package:

2013-08-20 09.45.14

It contains only the case and the necessary screws. The rails are not in this picture, since I had already installed them in my rack. As you see, there is no manual, which may be a bit frustrating. Once you remove the top panel (4 screws), you can see the inside of the case:

2013-08-20 09.54.09

On the top are the four bays for the drives; they are removable. As you can see in the picture, the power supply has plenty of cables. I would have expected a power supply with fewer cables to come with this case, as there is not much room.

Here is a view of the front panel:

2013-08-20 11.11.22

There are two USB ports on the front, the Power and Reset buttons, as well as LEDs.

Installation

Even without a manual, the installation is fairly easy. The top panel can be removed by unscrewing 4 screws on the sides. I strongly recommend removing the 4 drive bays before installing the other components, to make some room. Once that is done, the motherboard can be screwed in place. The power supply is attached by four screws to the back panel.

I installed a 2.5" SSD in the leftmost disk bay. I had some issues with this installation, as the bays do not have any screw holes on the sides, only on the bottom. They are really not made for anything other than 3.5" drives.

Here is a picture of my installation once finished:

2013-08-20 11.11.39

I managed to make the cabling look nice, but only because I have not installed any CD-ROM drive, so I could put a lot of cables in its place.

Once that was done, I installed it in my rack. It is not straightforward without a manual, but I was able to screw it to the rails. I put two screws on each side:

2013-08-20 11.31.08

Once in place, it looks quite good:

2013-08-20 11.31.51

However, I don't know if I have done something wrong, but the rails do not slide smoothly; they block at two places and the last part is quite hard to push. Anyway, as I won't be moving it a lot, I think it is OK. You also have to remember that even if this is a short-depth case, the RL-26 rails are not short at all; they are not made for short-depth racks.

Once launched, the server is quite noisy due to the two 80mm fans. However, the airflow is quite strong, which is what most customers want. It is still more silent than my Dell server, but not enough for home use. I will try to replace the fans with more silent ones in the future.

Conclusion

Pros

  • Enough bays for a simple server
  • Supports standard component formats
  • Enough airflow
  • Very short depth
  • Seems to be of heavy-duty construction
  • Looks nice
  • Not too expensive
  • Should fit in most racks

Cons

  • No manual at all
  • Disk bays not very convenient
  • The provided power supply is not well adapted to the case
  • The provided rails do not slide smoothly

To conclude, I would say that this case is very well suited to a custom installation, as all the components are standard. I would recommend it for a usage similar to mine.
