Newbie's Questions

Q: What is GiNaC?

A: GiNaC is Not a Computer Algebra System (CAS). It is a library for non-interactively manipulating symbolic mathematical expressions in a predictable and well-defined way. It is designed to allow the creation of integrated systems that embed symbolic manipulations together with more established areas of computer science (like computation-intensive numeric applications, graphical interfaces, etc.) under one roof.

Q: Why did you write this?

A: The xloops project aims at calculating so-called loop diagrams in Quantum Field Theory, a branch of physics dealing with the fundamental interactions between elementary particles. Those diagrams eventually evaluate to a number that (with some luck) can be checked by experiment. The calculation involves several integrations. Of course it would be possible to calculate this number numerically, given infinite computing resources. A more practical approach is to use symbolic integration for some integrals and only use numerical methods at the very end. This process should not be imagined as typing in some formulae and using some built-in magic symbolic integrator; instead, one has to carefully arrange the terms and thus "manually" perform the integration. The computer is used only to keep track of huge expressions that cannot be handled with pencil and paper. Not only do we have to handle a huge number of terms in our project, but the number of lines of code to do so may also become rather large. At a certain point it was no longer possible for Maple to satisfy our needs and we decided that a hand-made solution was more appropriate.

Q: What was the problem with Maple?

A: First of all: Maple is a wonderful product. Yes, it is. However, when you start writing large-scale applications you are doomed to run into trouble with it unless you are extremely careful and know exactly what you are doing. One important problem with Maple (and any other CAS) is the lack of a standardized, up-to-date language. For instance, the concept of object-oriented design is not present. Quite generally, facilities for encapsulation are poorly developed. Maple's language is dynamically scoped, and from time to time you find that Maple's developers slipped up by not properly declaring variables to be local, resulting in obscure (history-dependent) bugs. How are we supposed to write scientific software in a language that even the Maple developers have trouble handling? Mathematica and Macsyma face the same problem, by the way.

Rather than pointing out a number of Maple's linguistic and structural weaknesses, let us ponder one simple fact. The purpose of symbolic computation is to "simplify" mathematical expressions so that we can more easily understand their structure or code them more efficiently for numerical evaluation by a computing machine. Most beginners simply use their Computer Algebra tool by typing in some expression and then telling the system to "simplify" it, usually by a command with that name. This is fine, so far. However, when several people embark on a large-scale project that relies considerably on symbolic computation, it is unacceptable. This is because whenever somebody codes simplify(expression) somewhere, it is a demonstration of their inability to understand what's going on. Do they really want to "simplify" a rational function by canceling a greatest common divisor from numerator and denominator? Or maybe they really only want to expand an expression and later collect in some variable? When the CAS manufacturer ships the next release of the product, such calls to "simplify" are doomed to break. Sure, CASs do an amazing job at simplifying results. But since nobody ever defined what "simple" really means, the next release might come up with different (though still hopefully correct) results. This frequently leads to subtle errors that are very hard to debug. When you start a large-scale program involving Computer Algebra, it is a good idea to memorize this: "simplify" is evil.

Q: Is GiNaC an expression template library?

A: No. GiNaC is a system where the problems are solved dynamically during program execution. Expression templates are quite powerful and certainly one of the funkiest features of C++. However, in a symbolic context it would be pointless to try to solve the problems at compile time. Executables are much faster and have a smaller memory footprint than a compiler.

Q: Why that silly name?

A: The name GiNaC was a preliminary name in spring 1999, just to give the kid some name. Eventually, we became fond of it, partly because it bears so much resemblance to Orangina, one of our favorite soft drinks. Also, the letter G was a definite must: As Gene Kan put it in the book "Peer-to-Peer", published by O'Reilly: "GNU is short for GNU's Not Unix, the geekish rallying cry of a new generation of software developers who enjoy giving free access to the source code of their products."

And, by the way: the name has nothing to do with Frank P. Ginac, the author of several books about software development, or with the Ginac Group, a company engaged in career consulting. Also, the abbreviation "Gaming Is Not A Crime" was not popular back in 1999, when our work started.

Q: Why on earth do you claim it is not a CAS?

A: Would you call LaTeX a word processor? No, seriously: one of the main problems we were faced with during our long practical experience with CASs was the instability of the languages invented by CAS designers. We felt it would be better to rely on something standardized. If you consider that GNU stands for "GNU's Not Unix", yet GNU widely defines Unix nowadays, the name should be put into the proper perspective. ;-)

Q: Ginac seems to be a Polish word. What does it mean?

A: It is a verb meaning "to vanish". Incidentally, testing for zero is one of the most frequent uses of Computer Algebra Systems, but that is pure coincidence: nobody knew about this when GiNaC was given its name. :-)

Q: Can GiNaC do my homework assignments?

A: Argh! Go pay more attention in your classes and do them the old-fashioned way.

Q: I don't like the GPL. May I have another license, please?

A: The idea to dual-license GiNaC has been proposed a number of times. It has, however, been squarely rejected by some developers. Asking for permission to dual-license it is likely to waste your valuable time. That being said: the GPL is a good license. Why not directly put derived work under the GPL, too?

Beginner's Questions

Q: Will it run on my Computer?

A: That depends on your computer. GiNaC is written in ISO C++. The developers themselves use the C++ compiler from the GNU Compiler Collection (GCC), so GiNaC should run on any system with a reasonably recent GCC installed. This includes most Unix variants (Linux, BSD, Solaris, ...) as well as machines running Windows with Cygwin or MinGW as the compiler.

Another way of using GiNaC on Windows is cross-compiling the library on a Linux host for MinGW as described in this post, or using Alexei Sheplyakov's scripts.

Q: What is CLN?

A: CLN is a C++ library for efficiently handling arbitrary-size exact numbers (integers, rationals) and arbitrary-precision floating-point numbers. It was written by Bruno Haible and is covered by the GNU General Public License (just as GiNaC is). Any realistic symbolic manipulation needs arbitrary-sized integers; machine-sized integers are simply not enough. GiNaC depends on these data types from CLN and will not run without it. Installing CLN is quite easy (thanks to GNU autoconf), but you may first want to check whether it comes packaged for your operating system (some Linux distributions do have it prepackaged).

Q: Why rely on CLN? Why not GMP/NTL/BN/Piologie/...

A: Because it's free software and because it's good software. CLN has some features that make it ideally suited for symbolic computation: it has immediate small integers, it honors the injection of integers into rationals by directly converting them, etc. It is also one of the fastest systems available.

People have occasionally objected to CLN's speed. They point out that compared to other systems it is quite slow for small integers and refer to Daniel J. Bernstein's Benchmarks where CLN version 1.0.1 was compared to a number of other libraries. However, those numbers were flawed and Daniel Bernstein has kindly updated them in February 2002. Please check again.

Q: GiNaC is so huge! Compiling the beast makes my pitiful little machine swap like a wounded moth!

A: Is there a question? Anyway, here's a nickel, kid. Go buy more memory.

Q: How can I learn good C++?

A: There are countless books and tutorials out there. We cannot give a review of the literature here, but only mention some books that we found helpful.

Don't bother reading the international standard ISO/IEC 14882:1998(E) unless you're writing a C++ compiler. It is not suited for learning the language.

Q: I don't like the non-interactiveness. Isn't there some other way of using GiNaC?

A: To reiterate: GiNaC is a C++ library for non-interactively manipulating symbolic mathematical expressions. This means that the most natural way of interfacing with it is by writing programs in C++, compiling them, and linking them against libginac. This does not, however, restrict anybody from writing frontend applications that use GiNaC as a backend. One such example is the interactive program ginsh, distributed with GiNaC's sources. It is a simple but useful tool that lets you quickly test some things in a Maple-like language, but without any conditionals or loop statements. For the brave and adventurous, there is also an interface to the Cint C/C++ interpreter. Please refer to the installation instructions provided in the tutorial. And if this is not enough, you are more than welcome to write your own frontend!

Q: You say you are using GCC for development. Seriously: How portable is it?

A: Okay, since you want to know the truth: it is highly portable, as long as your compiler is standard-compliant. Even CLN can be compiled easily as long as you are building a static library only. The reason is that in shared libraries the 'static initialization order fiasco' strikes, and one has to guarantee that at runtime the order of global object initialization is such that no object that invokes another object's method is initialized before that other object exists. CLN uses some PROVIDE/REQUIRE preprocessor magic to solve this problem; GiNaC uses a 'construct on first use' idiom. The macros give more natural code but work only with GCC, and only if optimization is switched on! Using static libraries, we have successfully built CLN and GiNaC with other compilers. But there is another reason why we don't recommend other compilers: performance. In our experience, GCC-compiled C++ code is much faster than anything else we have tried so far. Please don't be led to believe that GCC is a poor compiler simply because some Fortran dudes out there tell you so. This is C++. And things are different here.

Q: I tried it and got a different result at each run!

A: The terms of sums and products (and some other things like the arguments of symmetric functions, the indices of symmetric tensors etc.) are re-ordered into a canonical form that is deterministic, but not lexicographical or in any other way easy to guess (it almost always depends on the number and order of the symbols you define). However, constructing the same expression twice, either implicitly or explicitly, will always result in the same canonical form.

The canonical form might change from run to run. Using ginsh, this can be studied on the command line:

$ for i in `seq 1 100`; do
>     echo "a = x; b = y; x - y;" | ginsh
> done | sort -n | uniq

Depending on luck, the canonical form of the same expression might be either 'x-y' or '-y+x' (of course it is the same during one run).

The terms of sums (and products) are sorted according to a hash value. This makes collecting similar terms reasonably fast. As a side effect it makes the order of terms 'unpredictable', that is, deterministic, but not easy to guess (that's a property of any reasonable hash function). This is not a bug. Fast(er) automatic evaluation is much more important than a pretty output. Given that input and output expressions typically contain 10^4 terms or more, the exact term order does not really matter anyway. If you really need a fancy output, write a custom printing context.

Advanced Questions

Q: How can I use class ex as keys in associative STL-containers?

A: The problem here is usually that the comparison operator < is used inside these containers, but that operator for class ex is not good enough. Given two symbols a and b, is a<b or b<a? Both must return false since nothing is known about a and b. But that confuses associative containers like map, set or multiset. The solution is to specify a comparison functor that does a canonical ordering as an additional template parameter. There already is one, for your convenience, called ex_is_less. So you would define a map of expressions to values of type T as map<ex, T, ex_is_less>, and a set of expressions as set<ex, ex_is_less>.

Q: How do I simplify things like sin^2+cos^2 to 1?

A: You don't; at least there is no wizard that does so automatically. This is one of GiNaC's limitations: it has no knowledge that this is an identity. But if you have such knowledge and you know exactly that you want to apply this transformation to your expressions, you may identify the arguments and substitute them with a hand-written program (after expanding and collecting, perhaps). One possibility is to eliminate one of the two dependent functions by replacing any occurrence of cos(foo) by sqrt(1-pow(sin(foo),2)). For simple cases like this one, however, a syntactic wildcard substitution may be enough, like so:

    ex e = pow(sin(x),2)+pow(cos(x),2);
    e = e.subs(cos(wild())==sqrt(1-pow(sin(wild()),2)));

Once you get the idea, you may feel quite disappointed, because it is not very powerful. In that case, please read the next question.

Q: How do I write a tree-traversal algorithm?

A: There are many ways. Ideally, you would think, by adding new methods to all the classes that might occur in your expressions. At least this is the canonical way in class hierarchies like the one provided by GiNaC. It might require some rethinking, however, for people accustomed to List-oriented systems and unfortunately it will be difficult to maintain in the light of new upstream versions that don't know about your methods. As an alternative, we'll describe how the so-called visitor-pattern can readily be applied to algebraic expression trees.

In the example above we have seen how to substitute wildcards. Sometimes, this is not enough, though. Many times it is necessary for a tree-traversing function to carry some state. For instance, in the expression 2*cos(x)^2-cos(2*x) the arguments of the cosine are related by a factor of two, and one may use the formulae for double angles to reduce this expression to 1, provided we know that we want to express cos(2*x) in terms of cos(x). We first have to think up a strategy for such a reduction to common arguments. We may apply the rules cos(2*x)==2*cos(x)^2-1 and sin(2*x)==2*cos(x)*sin(x) to arbitrary expressions. But that is not enough, since the expression may also contain cos(3*x) or the like. In the general case where the arguments are multiples of each other with an odd quotient we can use the formula cos(n*x)==cos((n-1)*x)*cos(x)-sin((n-1)*x)*sin(x) together with sin(n*x)==cos((n-1)*x)*sin(x)+sin((n-1)*x)*cos(x). After applying them, the resulting expression contains cos(x) as well as cos((n-1)*x), but now n-1 is even and the procedure may be repeated recursively until no multiples of x occur inside the arguments of sin and cos. For this purpose, we define a function object with an operator()(const ex &) so we may call it like a function. We want to let this function object traverse expression trees. The GiNaC class hierarchy has hooks for this, but to make use of them we need to derive our function object from map_function. Also, it needs to know in which variable to reduce the arguments (x in our examples, so far). Here is the declaration:

class sin_cos_multiple_angle_reducer : public map_function {
    ex arg;
public:
    sin_cos_multiple_angle_reducer(const ex & x) : arg(x) {}
    ex operator()(const ex &);
};

The definition of operator() then implements our rules for reducing the arguments of sin and cos to common expressions. It could look like this:

ex sin_cos_multiple_angle_reducer::operator()(const ex & expr)
{
    if (is_ex_the_function(expr, cos)) {
        const ex trialdiv = normal(expr.op(0)/arg);
        if (is_a<numeric>(trialdiv)) {
            // rules: cos(n*x)==cos((n-1)*x)*cos(x)-sin((n-1)*x)*sin(x)
            //        cos(2*n*x)==2*cos(n*x)^2-1
            //        sin(-n*x)==-sin(n*x),  cos(-n*x)==cos(n*x)
            const numeric n = ex_to<numeric>(trialdiv);
            if (n.is_integer()) {
                if (n.is_even())
                    return (2*pow(cos(n/2*arg),2)-1).map(*this);
                else
                    return (cos(abs(n-1)*arg)*cos(arg)
                           -csgn(n-1)*sin(abs(n-1)*arg)*sin(arg)).map(*this);
            }
        }
        return expr;
    }
    if (is_ex_the_function(expr, sin)) {
        const ex trialdiv = normal(expr.op(0)/arg);
        if (is_a<numeric>(trialdiv)) {
            // rules: sin(n*x)==cos((n-1)*x)*sin(x)+sin((n-1)*x)*cos(x)
            //        sin(2*n*x)==2*cos(n*x)*sin(n*x)
            //        sin(-n*x)==-sin(n*x),  cos(-n*x)==cos(n*x)
            const numeric n = ex_to<numeric>(trialdiv);
            if (n.is_integer()) {
                if (n.is_even())
                    return (2*cos(n/2*arg)*sin(n/2*arg)).map(*this);
                else
                    return (+csgn(n-1)*sin(abs(n-1)*arg)*cos(arg)
                           +cos(abs(n-1)*arg)*sin(arg)).map(*this);
            }
        }
        return expr;
    }
    return expr.map(*this);
}

Now, given the expression cos(3*x)+3*cos(x), we can simplify it to 4*cos(x)^3 by first constructing a sin_cos_multiple_angle_reducer for the minimal argument x, then calling it, then substituting the arising sin(x) with sqrt(1-cos(x)^2), and then expanding again:

    ex e = cos(3*x)+3*cos(x);
    sin_cos_multiple_angle_reducer f(x);
    e = f(e).subs(sin(wild())==sqrt(1-pow(cos(wild()),2))).expand();
    // e is now 4*cos(x)^3

There is one silly problem, however. In the above example, we need to know what to reduce the arguments of sin and cos to. A more clever program would figure this out by itself. The solution is, you might have guessed it, to write another function object that traverses the expression first and establishes a list of all the arguments that are prospective candidates for such a reduction. We'll do this step by step here. In addition to the tree-traversing operator(), its declaration will have a GiNaC::lst member variable to hold all the arguments we find, a helper method called choose_candidate(const ex &, const ex &) that for instance returns x given the arguments 2*x and x and another helper method reduce() that throws out all the redundancies:

class sin_cos_multiple_argument_finder {
    lst reduced;
    static const ex choose_candidate(const ex &, const ex &);
    void reduce(void);
public:
    ex operator()(const ex &);
};

Let's start with the implementation of operator(). We want to recursively take apart the ex argument. For this purpose we cannot use .map(*this) as above, because that wants to map sums to sums and products to products, which is not anything like splitting. Instead, we just call operator() explicitly. (By the way, we haven't derived sin_cos_multiple_argument_finder from map_function because that is only needed when calling .map().)

ex sin_cos_multiple_argument_finder::operator()(const ex & expr)
{
    if (is_ex_the_function(expr, cos) || is_ex_the_function(expr, sin)) {
        reduced.append(expr.op(0));
    } else {
        // recurse into all subexpressions, collecting arguments as we go
        for (size_t i=0; i<expr.nops(); ++i)
            this->operator()(expr.op(i));
    }
    reduce();  // idempotent, so calling it at every level is merely redundant
    return reduced;
}

The reduction of the collected arguments is performed at the end. Mathematically, this is a set rather than a list, so we can be clever and use the uniqueness of the elements of the container std::set for this purpose:

void sin_cos_multiple_argument_finder::reduce(void)
{
    set<ex, ex_is_less> uniques;
    for (size_t i=0; i<reduced.nops(); ++i)
        uniques.insert(reduced.op(i));
    set<ex, ex_is_less> candidates;
    for (set<ex,ex_is_less>::iterator i=uniques.begin(); i!=uniques.end(); ++i) {
        ex candidate = *i;
        for (set<ex,ex_is_less>::iterator j=uniques.begin(); j!=uniques.end(); ++j) {
            // commensurable arguments (rational ratio) collapse into one candidate
            ex ratio = normal((*j)/candidate);
            if (is_a<numeric>(ratio) && ex_to<numeric>(ratio).is_rational())
                candidate = choose_candidate(*j, candidate);
        }
        candidates.insert(candidate);
    }
    reduced = lst();  // clear the list and rebuild it from the candidates
    for (set<ex,ex_is_less>::iterator i=candidates.begin(); i!=candidates.end(); ++i)
        reduced.append(*i);
}

(Please see the item about associative STL containers if you don't know what ex_is_less is good for.) The last missing piece is the function that chooses the right candidate from a pair:

const ex sin_cos_multiple_argument_finder::choose_candidate(const ex & A, const ex & B)
{
    const ex gcd_AB = gcd(A, B);
    const numeric a = ex_to<numeric>(normal(A/gcd_AB));
    const numeric b = ex_to<numeric>(normal(B/gcd_AB));
    const numeric denoms_lcm = lcm(a.denom(), b.denom());
    return gcd_AB * gcd(a*denoms_lcm, b*denoms_lcm) / denoms_lcm;
}

It looks a bit involved, but this is because, given x and x/2, we want it to pick x/2 in order to apply sin_cos_multiple_angle_reducer(x/2) to expressions containing both.

So, what can we do with this? Given an expression containing terms like cos(x), cos(2*x), cos(y), cos(1), etc., we can first find the multiple-argument candidates (x, y and 1 in this case) and then apply a sin_cos_multiple_angle_reducer for each candidate. Here is a little code-snippet:

    symbol x("x"), y("y");
    ex e = 2*pow(sin(1),2)+cos(2)+sin(5*x)*cos(2*x)-sin(2*x)*cos(5*x)
          +sin(x)-4*sin(x)*pow(cos(x),2)+cos(2*y)-2*pow(cos(y),2);
    sin_cos_multiple_argument_finder f1;
    ex a = f1(e);
    for (size_t i=0; i<a.nops(); ++i) {
        sin_cos_multiple_angle_reducer f2(a.op(i));
        e = f2(e);
    }
    e = e.expand().subs(cos(wild())==sqrt(1-pow(sin(wild()),2))).expand();

After that, the longish expression e will be magically reduced to zero. It should be clear now that a collection of carefully written function objects can replace the anonymous simplifiers that conventional CASs provide.

Q: What exactly triggers this automatic evaluation thingie?

A: It occurs automatically. Nothing to see here. Please move along...

Okay, since you insist: There really isn't any magic in here. First some very basic simplification already takes place at the constructor level, like this one: (a+b)+c->a+b+c. Then come the .eval() methods: you have seen that there are all these .eval() methods and that they get as an argument an integer specifying how deep in the expression tree evaluation should take place. The mechanism is triggered automatically whenever an ex is constructed from a basic-derived object. And it is triggered to one level only but this is enough since all the basic-derived containers (add...) hold other objects of class ex and the process repeats therein. That's all.

You can even probe this process: The system knows that psi(0,x)=psi(x) by means of psi(x,y)'s eval function. So try calling .nops() once on psi(0,x) and once on ex(psi(0,x)) and see what comes out.

Q: I have various modules, some global variables and...

A: ...and you are getting weird results with symbols not being identical? Imagine this situation: a file globals.h declares symbol x("x"). Two other modules #include "globals.h", do something with symbol x and return an expression ex containing the symbol to the caller (e.g. to main()). When the caller combines the two expressions there are really two different symbols, both with the same print-name x. This may cause terms like x-x to stay around which will never be simplified to 0, because the system has no knowledge that the two symbols are supposed to be the same. How should it? After all, the declarations really happened in different compilation units!

Here is your solution. You have to build a factory for the global symbols so they are shared whenever they are constructed from the same string. In the language of design patterns this is called a flyweight factory. It can be readily coded using an STL map<key,value> internally like this:

const symbol symbol_factory(const string & s)
{
    static map<string, symbol> directory;
    map<string, symbol>::iterator i = directory.find(s);
    if (i != directory.end())
        return i->second;
    return directory.insert(map<string, symbol>::value_type(s, symbol(s))).first->second;
}

Instead of declaring symbol x("x"); the header globals.h now declares symbol x = symbol_factory("x"); and everything is taken care of.

Q: I wish to mix matrices and indexed objects. How can that be done?

A: The indexed class (and most derived classes) is intended for tensor manipulation without referring to a particular basis. Thus, the indexed class is well suited for calculations involving the (formally defined) tensor algebra of a non-integer-dimensional space. This is particularly useful for the evaluation of Feynman integrals in the framework of dimensional regularization. In that framework, unrolling a_i a_i to a_1^2+a_2^2+a_3^2+... is not possible, since the dimension is not an integer! On the other hand, the matrix class is not treated as a tensor, so mixing matrices with indexed objects typically gives meaningless results.

As an example, the following expression (in ginsh syntax) won't compare equal unless a has been declared a matrix:

X1 = [[-1,0],[0,1]]~mu~mu * a~mu 
X2 = [[-1,0],[0,1]].nu~mu * a~nu

Q: My programs dump core. How can I debug with GiNaC?

A: We assume you have already put a try { ... } catch (exception & e) { cout << e.what() << endl; } block around the statement where it crashes. Haven't you? GiNaC throws exceptions derived from std::exception if it encounters some algebraic impossibilities like division by zero or the like. If this doesn't help, read on.

Use gdb or one of its nifty frontends like DDD. GiNaC provides a method called dbgprint(void) for displaying the contents of expressions when debugging. It is just a small wrapper around the method print(ostream&,unsigned), which usually cannot be called directly since the object cout is not accessible from within the debugger. It can be invoked with the debugger's call command.

(Note: If your debugger complains "Cannot resolve method ex::dbgprint to any overloaded instance" you are likely to be using gdb 4.18. This has been fixed in gdb 5.0. In the meantime, you may work around this problem by switching off overload resolution: set overload-resolution off in gdb. In DDD versions >= 3.2 this can be permanently switched off in the preferences.)

Last updated July 15 2005.