EDDIC 0.5 : Functions and foreach

I'm pleased to announce the release of version 0.5 of the EDDI Compiler.

This new version introduces the first version of function calls. Functions can take several parameters but cannot return anything at the moment. A first version of the foreach loop is also available in the language.

You can also declare global variables in the source code. Global variables are stored in the .data section of the ELF file, while local variables are stored on the stack.

The error reporting of the compiler has also been improved: syntax errors are now reported with their exact location in the source.

There are also a lot of improvements in the source code. The big header files have been split into several smaller files. I replaced all raw pointers with smart pointers, which allowed me to remove all the memory leaks in the application and to simplify memory management. Finally, I started using some of the new C++11 features to improve the source code of the application.

The next version will certainly bring return types for functions and perhaps a first version of switch/case. Moreover, I have a lot of improvements to make at the assembly level, since the generated assembly is not efficient at all. I may also consider adding arrays in this version.

You can find the compiler in the GitHub repository: https://github.com/wichtounet/eddic. If you watch the repository, you'll see that I now follow a new branching model, the one proposed and enforced by the git-flow tool.

The exact version I refer to is v0.5, available in the GitHub tags.
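If you want to look at exactly this release, you can clone the repository and check out the tag directly:

# get the sources of the compiler
git clone https://github.com/wichtounet/eddic.git
cd eddic

# switch to the exact released version
git checkout v0.5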

How to install git-flow on Linux

One week ago, I started using git-flow on eddic. git-flow is a collection of Git extensions that makes it easy to follow a branching-model convention in a Git project. I will try to describe this model in more detail later on this blog.

You can install git-flow using this simple command:

wget --no-check-certificate -q -O - https://github.com/nvie/gitflow/raw/develop/contrib/gitflow-installer.sh | sudo sh

I also recommend installing a script to autocomplete the git-flow commands and parameters:

mkdir -p ~/src/external && cd ~/src/external
git clone https://github.com/bobthecow/git-flow-completion.git git-flow-completion
mkdir -p ~/bin/ && cd ~/bin
ln -s ~/src/external/git-flow-completion/git-flow-completion.bash ./git-flow-completion.bash

Then add this simple line to your .bashrc file:

source ~/bin/git-flow-completion.bash
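Once everything is installed, a typical git-flow session looks like this (the feature name and release number are only examples):

# set up the branch conventions (master, develop, feature/, release/, hotfix/) in a repository
git flow init

# start a new feature in its own feature branch
git flow feature start foreach-loop

# merge the finished feature back into develop and remove the feature branch
git flow feature finish foreach-loop

# create a release branch, then merge it into master and develop and tag it
git flow release start 0.5
git flow release finish 0.5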

If you want an introduction to git-flow, I recommend this blog post: Why aren't you using git-flow?

Diploma Thesis : Inlining Assistance for large-scale object-oriented applications

One month ago, my diploma thesis was accepted and I received my Bachelor of Science in Computer Science.

I wrote my diploma thesis at Lawrence Berkeley National Laboratory in Berkeley, California, where I was in the team responsible for the development of the ATLAS software for the LHC at CERN. The title of my thesis is Inlining Assistance for large-scale object-oriented applications.

The goal of this project was to create a C++ analyzer that finds the best functions and call sites to inline. The input of the analyzer is a call graph generated by Callgrind, part of the Valgrind project.

The functions and call sites to inline are selected using a heuristic called the temperature. This heuristic is based on the cost of calling the given function, the frequency of the calls, and the size of the function. The cost of calling a function depends on the number of parameters, whether the function is virtual, and the shared object the function is located in.

The analyzer is also able to find clusters of call sites, where a cluster is a set of hot call sites related to each other. It can also find functions that should be moved from one library to another, as well as functions that do not need to be virtual, by testing how each function of a class hierarchy is used.

To carry out this project, it was necessary to study in detail how a function is called on the Linux platform. The inlining optimization itself was also studied, to understand the advantages and drawbacks of this technique.

To retrieve information about the size and virtuality of each function, it was necessary to read the shared libraries and executable files. For that, we used libelf. Whether a function is virtual is determined by reading each virtual table and searching for the function in the virtual tables' contents.
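As a simple illustration of this idea, you can already see which virtual tables a shared object contains with standard binutils tools (libfoo.so is just a placeholder here; the analyzer itself reads this information with libelf):

# list the virtual tables exported by a shared object, with demangled C++ names
nm -C --dynamic libfoo.so | grep "vtable for"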

The graph manipulation is done with the Boost Graph Library. As it is an advanced library, working with it helped me improve my skills in specific topics like templates, traits, and template metaprogramming.

The analyzer runs on the Linux platform and can analyze any program that has been compiled with GCC.

Read more…

Packt Open Source Awards 2011

Packt has launched the Open Source Awards 2011, a contest that aims to encourage, support, recognize, and reward open source projects.

This contest has been running since 2006.

The nominations started on the 1st of August and ended on the 9th of September. The finalists in each category are available on the website.

You can vote for your favorite open source project in each of these categories:

  • Open Source CMS
  • Open Source Mobile Toolkits and Libraries
  • Most Promising Open Source project
  • Open Source Business Applications
  • Open Source JavaScript Libraries
  • Open Source Multimedia Software

The winner of each category will receive $2,500!

Moreover, if you vote for your favorite project, you will be entered into a prize draw to win a Kindle!

The votes are open; you can vote now on this page.

You can find more information about the awards here.

Google+ is now open to all

After about 90 days of invitation-only trial, Google+ is now open to everybody.

For those who don't know, Google+ is Google's social network platform, with several interesting features like Circles and Hangouts.

For example, you can see my page on Google+.

Personally, I find this social network very interesting, but there are not yet enough people on it to really compete with Facebook and the other networks. Don't hesitate to give it a try; it's worth it!

Book Review : Effective C++

Some time ago, I read Accelerated C++. After this introductory book, I chose to read a book focused on good practices in C++ development. For that purpose, I bought Effective C++, Third Edition, by Scott Meyers. I finished this book months ago, but I didn't find the time and motivation to review it until now.

First of all, this book is aimed at people who already know C++ and want to improve their skills in the language, especially to produce better programs and designs. It can also be used by C developers who have just switched to C++. The book is organized around 55 specific guidelines, each describing a way to write better C++.

Read more…

How to profile C++ application with Callgrind / KCacheGrind

I have shown before how to profile a C++ application using the Linux perf tools. In this post, we will see how to profile the same kind of application using Callgrind, a tool that is part of the Valgrind toolchain and runs inside the Valgrind framework. The principle is not the same: when you use Callgrind to profile an application, your application is translated into an intermediate representation and then run on a virtual processor emulated by Valgrind. This has a huge run-time overhead, but the precision is really good and the profiling data is complete. An application running under Callgrind can be 10 to 50 times slower than normal.

The output of Callgrind is a flat call graph that is not really usable directly. In this post, we will use KCachegrind to display the profiling information of the analyzed application.
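To give a rough idea before the detailed post, a typical session looks something like this (myapp is only a placeholder for your own program):

# build with debug symbols so that KCachegrind can map the costs back to the source code
g++ -g -O2 -o myapp myapp.cpp

# run the application under Callgrind; this writes a callgrind.out.<pid> file
valgrind --tool=callgrind ./myapp

# open the collected data in KCachegrind
kcachegrind callgrind.out.*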

Read more…

How to compute metrics of C++ project using CCCC

CCCC (C and C++ Code Counter) is a small command-line tool that generates metrics from the source code of a C or C++ project. The output of the tool is a simple HTML report with information about all your sources.

CCCC generates not only information about the number of lines of code in each of your modules, but also complexity metrics like the McCabe cyclomatic complexity of your modules and functions, design metrics like the coupling between modules, and object-oriented metrics like the depth of the inheritance tree of each of your classes.
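To give an idea of the basic usage, running CCCC over a project is a single command (the paths are only an example; by default, the report should end up in a .cccc directory):

# compute the metrics for all the C++ sources and headers of the project
cccc src/*.cpp include/*.h

# open the generated report in a browser
firefox .cccc/cccc.html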

Read more…

Java 7 has been released!

Five years after Java 6, Oracle has just released Java 7!

This is the first release of Java since Oracle bought Sun Microsystems.

This new version of Java introduces a lot of new features, but some of the planned language features have been deferred to Java 8, as decided in "Plan B".

In this version, there are some great new language features, specified by JSR 334 (Project Coin), such as strings in switch statements, the diamond operator, try-with-resources, and multi-catch exception handling.

Java 7 also brings the new NIO.2 API (specified by JSR 203).

A new bytecode instruction, invokedynamic, has also been added to the virtual machine.

You can download Java SE 7 on the Oracle website.

I think it was about time for Java to get a refreshed new version, and in my opinion it is a good one. I just hope that the next version, Java 8, will arrive in less than five years and finally give us closures.