pm 0.1.1 - A simple workspace manager for Git projects

Over the last month, I've developed a very simple tool in Python: pm. This tool lets you check the status of all the Git repositories inside a directory. I've just released the first version of this tool: pm 0.1.1.

Those who follow this blog may wonder why Python and not C++ :) The reason is quite simple: I wanted to improve my Python skills, and what better way than to develop a project from scratch?


The main feature of this application is to show the status of every project in a directory. The status of your projects can be queried with pm status. On my computer, this gives something like this:


The state of each branch of each project is shown. There are several possible statuses (they are cumulative):

  • Behind remote: Commits are available on the remote repository
  • Ahead of remote: Some local commits are not pushed
  • Diverged: Both behind and ahead
  • Uncommitted changes: Some changes are not committed
  • Clean: Everything is committed, pushed and pulled
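These cumulative statuses can be sketched as a small classifier. The following is illustrative only (the function name and inputs are my own, not pm's actual code); in practice the ahead/behind counts would come from something like git rev-list --left-right --count:

```python
def branch_status(ahead, behind, dirty):
    """Classify a branch from its commit counts relative to the remote.

    ahead/behind are commit counts, dirty means uncommitted changes.
    Hypothetical helper, not pm's actual implementation.
    """
    statuses = []
    if ahead and behind:
        statuses.append("Diverged")
    elif behind:
        statuses.append("Behind remote")
    elif ahead:
        statuses.append("Ahead of remote")
    if dirty:
        statuses.append("Uncommitted changes")
    # A branch with nothing to report is clean
    return statuses or ["Clean"]
```

Note how "Uncommitted changes" can combine with any of the remote-related statuses, which is what makes them cumulative.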

By default, the directory is ~/dev/ but you can change it by passing another directory to the command; if you pass a relative path, it is resolved relative to your home directory. For instance, here is the status of my doc repositories:


Another useful feature is that it can check the status of submodules with the -s option:


As you can see, it supports recursive submodules. For each submodule, it indicates whether new commits are available.

pm is not only able to show the status of the projects, it can also fetch the state of branches from the remote with pm fetch. All the remote branches are fetched. It can also automatically update the projects that are behind their remote (the equivalent of git pull) with pm update. Only projects that can be fast-forwarded are updated.
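The fast-forward-only rule boils down to a simple predicate: a branch can be fast-forwarded only when it has remote commits to pull and no local commits of its own. A hypothetical sketch (not pm's actual code):

```python
def can_fast_forward(ahead, behind):
    """True when a plain `git pull` would fast-forward the branch.

    ahead  -- local commits not on the remote
    behind -- remote commits not present locally
    Illustrative predicate, not pm's actual implementation.
    """
    # Any local-only commit would force a merge or rebase instead
    return behind > 0 and ahead == 0
```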


Thanks to pip, installation of pm is quite simple:

pip install pm

If you don't want to use pip, you can install it by hand:

tar xf 0.1.1.tar.gz
cd 0.1.1
python setup.py install

For those interested, the source code is available on GitHub.

If you have any suggestions about the tool or the source code, post a comment below ;)


A Mutt Journey: Download mails with offlineimap

In the series of posts about Mutt, I recently presented how I was filtering my email. In this post, I'll show how I download my emails locally using offlineimap. This is the perfect companion for Mutt.

With Mutt, you can directly query an IMAP server and keep the views up to date with it. There are a few problems with this approach:

  • First, you won't be able to read your mails when you're offline. That's rarely an issue these days, but it can happen.
  • Opening an IMAP folder with a large number of mails (>1000) can be quite slow. I have several large folders and opening them was a pain.
  • When Mutt synchronizes with the state of the IMAP server, you'll encounter a freeze. If you want to synchronize often, that gets quite annoying.

Having your mails offline on your computer solves all these problems. Moreover, it is also a good way to have a backup of your mails. I'll talk here about the usage with Mutt, but you can use offlineimap just for backup or migration purposes. The downside is that you have to store everything locally. My mails take around 5GB on my computer.

offlineimap is a very simple tool to synchronize emails from IMAP servers. It only supports IMAP, but these days that is hardly a limitation. The synchronization works both ways: it will upload your local changes to the IMAP server. It is very powerful when paired with an MUA such as Mutt.

To use offlineimap, you put your configuration in ~/.offlineimaprc. You can synchronize several accounts at once; in this post we'll focus on one, but the process is the same for several accounts. I'll also focus on Gmail, but again it is the same for other mail providers with slightly different parameters.


First, we have to declare the account:

[general]
accounts = Gmail

[Account Gmail]
localrepository = Gmail-Local
remoterepository = Gmail-Remote

accounts is the list of accounts we have, here only one. In the account section, the two repositories are just names for the repositories we'll declare now.

The local repository has to be configured:

[Repository Gmail-Local]
type = Maildir
localfolders = /data/oi/Gmail/
sep = /

The first important option is localfolders, which sets where the mail will be stored on your computer. sep defines the separator used for nested IMAP folders. I recommend / since Mutt will nest folders automatically if / is the separator.

Then, the remote repository has to be configured:

[Repository Gmail-Remote]
type = Gmail
remoteuser = USER
remotepass = PASSWORD
realdelete = no
folderfilter = lambda folder: folder not in ['[Gmail]/All Mail',
                                             '[Gmail]/Important',
                                             '[Gmail]/Starred']
sslcacertfile = /etc/ssl/certs/ca-certificates.crt

remoteuser and remotepass are your user name and password. You can also use remotepassfile to read the password from a file. realdelete = no indicates that we only want to remove all the labels of deleted mails; for Gmail, it means that the mail will still be in the All Mail folder. The last line (sslcacertfile) is mandatory for recent versions of offlineimap. folderfilter is a function that filters out some folders. In my case, I do not want to fetch the "All Mail", "Important" and "Starred" folders of my Gmail account because they only contain duplicates of the mails under other labels. What is pretty cool with offlineimap is that you can write Python directly in some of the configuration options. The filter rule here is plain Python, so you can do complicated filtering if you want.

Last but not least, offlineimap can generate a list of mailboxes (one for each folder in every account). This is pretty useful since Mutt can read this file and you'll find your mailboxes directly configured in Mutt :)

The following mbnames configuration generates a file ~/.mutt/mailboxes that you can source in your Mutt configuration to get the complete list of available mailboxes. It is kept up to date if you add new IMAP folders on the server, for instance:

[mbnames]
enabled = yes
filename = ~/.mutt/mailboxes
header = "mailboxes "
peritem = "+%(accountname)s/%(foldername)s"
sep = " "
footer = "\n"

Translate names

You may have seen in the previous section some weird folder names like "[Gmail]/All Mail"; this is how Gmail names the folders that are not labels. It is quite ugly and will create odd-looking folders on your computer. You can configure offlineimap to rename them to something better. For that, you'll need two rules (in Python ;) ): one to translate from remote to local and one to do the reverse.

Here is what I did:

[Repository Gmail-Local]
nametrans = lambda folder: {'drafts':   '[Gmail]/Drafts',
                            'sent':     '[Gmail]/Sent Mail',
                            'spam':     '[Gmail]/Spam',
                            'starred':  '[Gmail]/Starred',
                            'trash':    '[Gmail]/Trash',
                            'archive':  '[Gmail]/All Mail',
                            }.get(folder, folder)

[Repository Gmail-Remote]
nametrans = lambda folder: {'[Gmail]/Drafts':    'drafts',
                            '[Gmail]/Sent Mail': 'sent',
                            '[Gmail]/Starred':   'flagged',
                            '[Gmail]/Important':   'important',
                            '[Gmail]/Spam':   'spam',
                            '[Gmail]/Trash':     'trash',
                            '[Gmail]/All Mail':  'archive',
                            }.get(folder, folder)

I simply renamed all the "[Gmail]" folders into something more readable that makes more sense to me. This is not limited to the special Gmail folders, of course; the same mechanism can be used to rename any folder X into a folder Y. As it is Python, you can do sophisticated stuff if necessary.

Speed up things

If you happen to sync your mails often, you may want to speed things up. There are several ways to do that.

The first thing you can do is use several connections to the server by setting maxconnections to a number higher than 1 in the remote repository configuration. I tested several values and, for Gmail, 2 was the fastest choice. Try a few values with your server to see which works best.

Instead of plain text files for the status of the mails, offlineimap can use a SQLite backend. This is much faster since the complete file is not rewritten on each flag update. For that, set status_backend = sqlite in the Account configuration.

Another thing you can do is reduce the I/O involved during sync by setting fsync to false in the general section. With that, offlineimap won't wait for disk operations to complete after each operation.
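Put together, and mapped to the sections shown earlier, these speed-related settings look like this (the maxconnections value is the one that worked best for me; try your own):

```ini
[general]
fsync = false

[Account Gmail]
status_backend = sqlite

[Repository Gmail-Remote]
maxconnections = 2
```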

You can run offlineimap in quick mode with the -q option. With this option, changes in the flags of remote messages will not be updated locally; changes on the local side will still be uploaded correctly. It is generally a good idea to run offlineimap in quick mode often (every X minutes) and in normal mode once or twice a day.

You can also specify which folders to sync with the -f option. Sometimes it is enough to sync only INBOX, for instance, which can be much faster.


Now that you have fully configured offlineimap, you can run it by hand or from a cron job. I personally run it every 5 minutes; choose your favourite frequency according to your workflow. I think I'll reduce the frequency further: it is more comfortable to get mails in batches, and not too many of them at once.

If you're interested, you can take a look at my .offlineimaprc configuration.

If you want more information about this awesome tool, you can take a look at the reference documentation.

This is it for this part of this series. In the next post, I'll present my Mutt configuration and how I use it.


A Mutt Journey: Filter mails with imapfilter

About a month ago, I decided to switch to Mutt to read my emails. I kept my Gmail account, but I don't use the web interface anymore. It took me a long time to prepare a complete environment.

Currently, I'm using:

  • imapfilter to filter mails
  • offlineimap to download my mails
  • notmuch to quickly search all my mails

And of course Mutt. To be precise, I use mutt-kz, a fork of mutt with very good notmuch integration.

I'll try to explain each part of my environment in a series of articles on this blog. The first one will be about imapfilter.

imapfilter is a mail filtering utility. It connects to a remote server using IMAP and is then able to move, copy or delete mails around. You can use it for several tasks:

  • Delete unwanted mail
  • Move mails into folders according to rules

What is pretty cool is that the configuration is entirely made in Lua. It is quite easy to write rules and then apply them to several mailboxes as if you were programming.

Another advantage of imapfilter is that it works at the server level. Therefore, even if you use the web client from time to time or check your mail on your phone, the changes will still be visible.

The configuration goes in the ~/.imapfilter/config.lua file. It is quite easy: you declare an IMAP object for the account.

local account = IMAP {
    server = 'imap_server',
    username = 'username',
    password = 'password',
    ssl = 'ssl3',
}

As the configuration is in Lua, you can easily get the password from another file. For instance, here is my account declaration:

-- Utility function to get the IMAP password from a file
function get_imap_password(file)
    local home = os.getenv("HOME")
    local path = home .. "/" .. file
    local str = io.open(path):read()
    return str
end

local account = IMAP {
    server = '',
    username = '',
    password = get_imap_password(".password.offlineimaprc"),
    ssl = 'ssl3',
}

It gets the password by reading a file in the home directory.

Once you have the account, you can check the status of a folder with the check_status() function. For instance:

account.INBOX:check_status()


You can run imapfilter simply by launching imapfilter on the command line. Once it runs, it prints the status of the folders you chose:

38 messages, 0 recent, 6 unseen, in INBOX.
70 messages, 0 recent, 67 unseen, in [Gmail]/Trash.

Several functions are important:

  • select_all() on a folder retrieves its messages so that you can then perform actions on them
  • contain_subject('subject') on a list of mails keeps only the mails that contain 'subject' in their subject
  • contain_from('from') on a list of mails keeps only the mails that come from 'from'
  • contain_to('to') on a list of mails keeps only the mails that are addressed to 'to'
  • delete_messages() on a collection of mails deletes all of them
  • move_messages(folder) on a collection of mails moves all of them to another folder.

You can also mix different IMAP accounts, you don't have to use only one.

For instance, if you wanted to delete all the mails coming from me, you could do:

mails = account.INBOX:select_all()
filtered = mails:contain_from("")
filtered:delete_messages()

Or you could move all the mails containing Urgent in the subject line to another IMAP folder:

mails = account.INBOX:select_all()
filtered = mails:contain_subject("Urgent")
-- 'urgent' is an example destination folder name
filtered:move_messages(account['urgent'])

If you want some more examples, you can take a look at my imapfilter configuration.

The best way to start using it is to look at examples; there are plenty of them on the internet, especially in GitHub dotfiles repositories.

The reference documentation is available via man imapfilter_config; there is plenty more to see there.

For more information, you can also consult the official site.

That is it for this part of the Mutt series. In the next post about Mutt, I'll talk about how I use offlineimap to get my mails.


budgetwarrior 0.4 - Enhanced wish list and aggregate

I've just released a new version of my command-line budget manager: budgetwarrior 0.4.

Enhanced aggregate overview

The aggregate overviews have been greatly improved. First, there is now a budget overview month command that groups all the expenses of a month together. Here is a possible output:


It is also possible to use the --full option to aggregate the different accounts together as well:


Another new option is --no-group, which disables the grouping by categories:


Moreover, the separator of categories can now be configured with the --separator= option.

All these options can also be set in the configuration with these options:

  • aggregate_full : If set to true, does the same as the --full option.
  • aggregate_no_group : If set to true, does the same as the --no-group option.
  • aggregate_separator : Sets the separator for grouping.
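For instance, the equivalent entries in the configuration file could look like this (the separator value / is just an example):

```ini
aggregate_full=true
aggregate_no_group=true
aggregate_separator=/
```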

Enhanced wish list

The wishes management has also been improved.

First, each wish can now be given an Urgency and an Importance level. These are shown in wish status as simple indicators:


Moreover, the accuracy of the estimation compared to the amount actually paid is shown in wish list:


Various changes

Objective status now shows more information about the status of the objectives:


The versioning module has been improved. versioning sync now performs a commit as well as a pull/push. versioning push, versioning pull and versioning status commands have been added.

The budget version command shows the version of budgetwarrior.

Aliases are now available to make commands shorter:

  • budget sync -> budget versioning sync
  • budget aggregate -> budget overview aggregate


If you are on Gentoo, you can install it using layman:

layman -a wichtounet
emerge -a budgetwarrior

If you are on Arch Linux, you can use this AUR repository.

For other systems, you'll have to install from sources:

git clone git://
cd budgetwarrior
sudo make install


If you are interested in the sources, you can download them on GitHub: budgetwarrior.

If you have a suggestion or you found a bug, please post an issue on Github.

If you have any comment, don't hesitate to contact me, either by leaving a comment on this post or by email.


Compute integer Square Roots at compile-time in C++

For one of my projects, I needed to evaluate a square root at compile-time. There are several ways to implement it and some are better than others.

In this post, I'll show several versions, both with Template Metaprogramming (TMP) and constexpr functions.

Naive version

The easiest way to implement it is to enumerate the integers until we find one whose square equals our number. This can easily be implemented in C++ with a class template and partial specialization:

template <std::size_t N, std::size_t I=1>
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I> {};

template<std::size_t N>
struct ct_sqrt<N,N> : std::integral_constant<std::size_t, N> {};

Really easy, isn't it? If we test it with 100, it gives 10. But if we try higher values, we run into problems. For instance, when compiled with 289, here is what clang++ gives me:

src/sqrt/tmp.cpp:5:64: fatal error: recursive template instantiation exceeded maximum depth of 256
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 257>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 256>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 255>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 254>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 253>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: (skipping 247 contexts in backtrace; use -ftemplate-backtrace-limit=0 to see all)
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 5>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 4>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 3>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:5:64: note: in instantiation of template class 'ct_sqrt<289, 2>' requested here
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};
src/sqrt/tmp.cpp:11:18: note: in instantiation of template class 'ct_sqrt<289, 1>' requested here
    std::cout << ct_sqrt<289>::value << std::endl;
src/sqrt/tmp.cpp:5:64: note: use -ftemplate-depth=N to increase recursive template instantiation depth
struct ct_sqrt : std::integral_constant<std::size_t, (I*I<N) ? ct_sqrt<N,I+1>::value : I > {};

And that is only to compute the square root of 289, not a big number. We could of course increase the template depth limit (-ftemplate-depth=X), but that would only get us a bit farther. If you try with g++, you should see that this works; that is because g++ has a higher default template depth limit (900 for 4.8.2 on my machine) whereas clang defaults to 256. Note too that g++ skips no context, so the error is quite long.

Now that C++11 gives us constexpr functions, we can rewrite it more cleanly:

constexpr std::size_t ct_sqrt(std::size_t n, std::size_t i = 1){
    return n == i ? n : (i * i < n ? ct_sqrt(n, i + 1) : i);
}

Much nicer :) And it works perfectly with 289, and indeed quite well for larger values. But it still fails once we hit large numbers. For instance, here is what clang++ gives me with 302500 (550*550):

src/sqrt/constexpr.cpp:8:36: error: constexpr variable 'result' must be initialized by a constant expression
static constexpr const std::size_t result = ct_sqrt(SQRT_VALUE);
                                   ^        ~~~~~~~~~~~~~~~~~~~
src/sqrt/constexpr.cpp:5:38: note: constexpr evaluation exceeded maximum depth of 512 calls
    return n == i ? n : (i * i < n ? ct_sqrt(n, i + 1) : i);
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 512)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 511)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 510)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 509)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 508)'
src/sqrt/constexpr.cpp:5:38: note: (skipping 502 calls in backtrace; use -fconstexpr-backtrace-limit=0 to see all)
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 5)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 4)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 3)'
src/sqrt/constexpr.cpp:5:38: note: in call to 'ct_sqrt(302500, 2)'
src/sqrt/constexpr.cpp:8:45: note: in call to 'ct_sqrt(302500, 1)'
static constexpr const std::size_t result = ct_sqrt(SQRT_VALUE);

Again, we run into the limits of the compiler. And again, the limit can be changed, this time with -fconstexpr-depth=X (the -fconstexpr-backtrace-limit=X option mentioned in the error only changes how much of the backtrace is shown). With g++, the result is the same (without the skipped part, which makes the error horribly long), and the flag to change the depth is also -fconstexpr-depth=X.

So, if we need to compute higher square roots at compile-time, we need a better version.

Binary Search version

To find the square root, you don't need to iterate through all the numbers from 1 to N; you can perform a binary search over the numbers to test. I found a very nice implementation by John Khvatov (source).

Here is an adaptation of his code:

#define MID(a, b) ((a+b)/2)
#define POW(a) (a*a)

template<std::size_t res, std::size_t l = 1, std::size_t r = res>
struct ct_sqrt;

template<std::size_t res, std::size_t r>
struct ct_sqrt<res, r, r> : std::integral_constant<std::size_t, r> {};

template <std::size_t res, std::size_t l, std::size_t r>
struct ct_sqrt : std::integral_constant<std::size_t, ct_sqrt<res,
        (POW(MID(r, l)) >= res ? l : MID(r, l)+1),
        (POW(MID(r, l)) >= res ? MID(r, l) : r)>::value> {};

With a binary search, you drastically reduce the number of values that need to be tested to find the answer. It very easily finds the answer for 302500. It can compute the square root of almost any integer, failing only due to overflow. I think it is really great :)

Of course, we can also do the constexpr version:

static constexpr std::size_t ct_mid(std::size_t a, std::size_t b){
    return (a+b) / 2;
}

static constexpr std::size_t ct_pow(std::size_t a){
    return a*a;
}

static constexpr std::size_t ct_sqrt(std::size_t res, std::size_t l, std::size_t r){
    return l == r ? r
        : ct_sqrt(res,
            ct_pow(ct_mid(r, l)) >= res ? l : ct_mid(r, l) + 1,
            ct_pow(ct_mid(r, l)) >= res ? ct_mid(r, l) : r);
}

static constexpr std::size_t ct_sqrt(std::size_t res){
    return ct_sqrt(res, 1, res);
}

Which is a bit more understandable. It works the same way as the previous one and is only limited by numeric overflow.

C++14 Fun

In C++14, the constraints on constexpr functions have been greatly relaxed: we can now use variables, if/else statements, loops and so on in constexpr functions, making them much more readable. Here is the C++14 version of the previous code:

static constexpr std::size_t ct_sqrt(std::size_t res, std::size_t l, std::size_t r){
    if(l == r){
        return r;
    } else {
        const auto mid = (r + l) / 2;

        if(mid * mid >= res){
            return ct_sqrt(res, l, mid);
        } else {
            return ct_sqrt(res, mid + 1, r);
        }
    }
}

static constexpr std::size_t ct_sqrt(std::size_t res){
    return ct_sqrt(res, 1, res);
}

I think this version is highly superior to the previous one. Don't you think?

It performs exactly the same as the previous one. For now, this can only be compiled with clang, but it will eventually come to gcc too.


As you saw, there are several ways to compute a square root at compile-time in C++. The constexpr versions are much more readable and generally more scalable than the template metaprogramming versions. Moreover, with C++14, we can now write constexpr functions almost like standard functions, which is really great.

I hope this is helpful to some of you :)

All the sources are available on GitHub:


All Packt Publishing ebooks at $10 for the 10th anniversary


To celebrate the 10th anniversary of Packt Publishing, all of the ebooks available from Packt Publishing are priced at $10, with no limit per customer.

In 10 years, Packt Publishing has published more than 200 titles. Through its Open Source Project Royalty Scheme, Packt Publishing supports Open Source projects and has already awarded more than $400'000.

If you need something to read or just want to use this occasion to stock up on ebooks, don't hesitate to take advantage of this superb offer.

More information on the official site:


Build OpenCV with libc++ on Gentoo and static-libs

When you build C++ projects with Clang, you can choose between libstdc++, which ships with G++, and the newer libc++, which is provided by Clang.

libc++ is another implementation of the C++ Standard Library. It is dual-licensed under the MIT and UIUC licenses. It especially targets C++11 and already has complete C++14 support. This last point is the reason I use libc++ in several of my projects. Moreover, it is also the default on Mac OS X.

The problem when linking against another standard library is that you can only use libraries that have also been compiled against libc++. For instance, if you want to use the Boost dynamic libraries, you'll have to compile Boost from source with libc++.

For one of my projects, I'm using OpenCV with libc++. To simplify the installation of OpenCV, I created a new ebuild with a libcxx USE flag to optionally build the library with libc++. This requires LLVM/Clang on the build machine. Moreover, by default, the Gentoo ebuild cannot build the static libraries. The reason is that the OpenCV build system is not able to build dynamic and static libraries at the same time. I added a static-libs USE flag that builds the static libraries by building OpenCV a second time after the first. This will likely double the compile time (unless ccache is used), but it is still simply easier than building it by hand on several machines.

The ebuild is available on my overlay. You can add the overlay to your machine by modifying /etc/layman/layman.cfg:


Then, you can add it to layman:

layman -S
layman -a wichtounet

For now, I have only created an ebuild for opencv-2.4.8-r1. If someone is interested in other versions, I'd be glad to create new ebuilds.

I hope that this ebuild will be helpful.


Software Reliability Presentation

On behalf of my school (College of Engineering and Architecture of Fribourg), I gave a short presentation about Software Reliability. In it, I outline the main issues around the subject and propose some solutions:

  • Software Validation
  • Defensive Programming
  • Software Analysis Tools

In the Software Analysis Tools part, I present three tools: cppcheck, Valgrind and the Clang static analyzer. Several examples are given for each tool, as well as some recommendations for using them. A short presentation of SonarQube is also included.

I thought it could be of interest to some readers, so here it is:

Don't hesitate if you have any comments or questions about the presentation ;)

The source code for the examples is available on GitHub.


budgetwarrior 0.3.1 - Git versioning and easier creation

I've finished a new version of budgetwarrior: budgetwarrior 0.3.1


The most interesting change is the ability to estimate the date when it is a good time to buy something from the wish list. This is done with the budget wish estimate command:

budget wish estimate

This command gives you two dates for each wish in your list. The first is the date when every yearly objective will be fulfilled; the second one considers only the monthly objectives. For now, no estimation of expenses is made for the future months, meaning the estimation behaves as if there were no expenses in the coming months. I'll try to improve that by using the average expenses of the previous months to make it more reliable.

Still in the wish module, you can now mark wishes as paid instead of deleting them. This helps you keep track of the prices of your wishes. It is done with the budget wish paid id command. Finally, the totals of the unpaid and paid wishes are displayed in budget wish list.

Another helpful change is the ability to set a date relative to today when creating an expense or an earning. For instance, you can create an expense dated one month ago (-1m), in one year (+1y), or yesterday (-1d):

new date selection mechanism

Of course, you can also still set the date manually.

The last major change is the addition of a new module: budget versioning. This module helps you manage your budget directory with Git. There are two new commands:

  • budget versioning save: Commit the current changes with a default message (Update).
  • budget versioning sync: Pull the changes from the remote directory and push the local changes.

This will only work if you have already configured your budget directory to use Git.


I hope you'll find these changes interesting :)

If you are interested in the tool, you can download it on GitHub: budgetwarrior

If you have a suggestion or you found a bug, please post an issue on the github project:

If you have any comment, don't hesitate to contact me, either by leaving a comment on this post or by email.


Changing root hard disk on Gentoo with LVM and Grub2

I recently changed the hard disk on one of my servers, and I didn't find a simple, up-to-date tutorial for that, so I decided to post here how I did it.

I do not claim that this is the best, easiest or safest method, but it is the one I used and it worked quite well. You could do it online with LVM, but when changing the boot hard disk, I'd rather be safe, especially since you need to reinstall Grub on the new hard disk and probably alter its configuration. Still, doing it online directly with LVM should work.

My server uses LVM and Grub2. You will need a Gentoo Live CD (or DVD, USB, or any other Linux live installation with the necessary tools). If you do not use LVM, this guide is not for you. Even though I wrote this guide for Gentoo, it will also work on other systems as long as you use LVM and Grub2; just use your distribution's live CD.

Here I will assume /dev/sda is your old disk and /dev/sdb is your new disk. I will also assume that your previous VG is named vg0. If yours are named differently, replace them in the commands.

1. Obviously, shutdown your computer.

2. Install the new hard disk in it.

3. Reboot on the Live CD:

  • Activate LVM: vgchange -a y
  • Create the LVM partition on the new disk (/dev/sdb) with fdisk.
  • Create a new PV: pvcreate /dev/sdb1
  • Create a new VG: vgcreate vg1 /dev/sdb1
  • Recreate all your LVs on vg1 with lvcreate. In my case, I have root and data, so I created /dev/vg1/root and /dev/vg1/data

Now, you'll need to copy your data. Here is an example with my root:

mkdir old
mkdir new
mount /dev/vg0/root old
mount /dev/vg1/root new
cp -ax old/. new/
umount old
umount new

You have to do that for each LV you have.

4. Shutdown the computer

5. Remove the old hard disk and use the connector of the old hard disk to connect the new one.

6. Reboot on the Live CD:

  • Activate LVM: vgchange -a y
  • Rename the new VG to the old name: vgrename vg1 vg0
  • Chroot in the new hard disk
  • Install grub on the hard disk: grub2-install /dev/sda
  • Regenerate the Grub config: grub2-mkconfig -o /boot/grub/grub.cfg
  • Exit from chroot

7. Reboot on Gentoo

8. You have now switched hard disks.

At any point, if something goes wrong, you still have the previous hard disk completely ready at hand.

I hope this will be useful to some Linux users.