@end example
-All numbers occuring in GiNaC's expressions can be converted into floating
-point numbers with the @code{evalf} method, to arbitrary accuracy:
+Exact numbers are always retained as exact numbers and are only
+evaluated as floating point numbers if requested. For instance, numeric
+radicals are dealt with pretty much like symbols. Products of sums of
+them can be expanded:
+
+@example
+> expand((1+a^(1/5)-a^(2/5))^3);
+1+3*a+3*a^(1/5)-5*a^(3/5)-a^(6/5)
+> expand((1+3^(1/5)-3^(2/5))^3);
+10-5*3^(3/5)
+> evalf((1+3^(1/5)-3^(2/5))^3);
+0.33408977534118624238
+@end example
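+
+Rational arithmetic stays exact in the same way until a floating point
+evaluation is explicitly requested. As a brief sketch of a hypothetical
+ginsh session (assuming the default precision of 20 decimal digits):
+
+@example
+> 1/7+1/14;
+3/14
+> evalf(1/7+1/14);
+0.21428571428571428571
+@end example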
+
+The function @code{evalf} that was used above converts any number in
+GiNaC's expressions into a floating point number. This can be done to
+arbitrary predefined accuracy:
@example
> evalf(1/7);
0.14285714285714285714
but also on other parameters, for instance what value for @env{CXXFLAGS}
you entered. Optimization may be very time-consuming.
-Just to make sure GiNaC works properly you may run a simple test
-suite by typing
+Just to make sure GiNaC works properly you may run a collection of
+regression tests by typing
@example
$ make check
@end example
-This will compile some sample programs, run them and compare the output
-to reference output. Each of the checks should return a message @samp{passed}
-together with the CPU time used for that particular test. If it does
-not, something went wrong. This is mostly intended to be a QA-check
-if something was broken during the development, not a sanity check
-of your system. Another intent is to allow people to fiddle around
-with optimization. If @acronym{CLN} was installed all right
-this step is unlikely to return any errors.
+This will compile some sample programs, run them and check the output
+for correctness. The regression tests fall into three categories.
+First, the so-called @emph{exams} are performed, simple tests where some
+predefined input is evaluated (like a pupil's exam). Second, the
+@emph{checks} test the coherence of results among each other with
+possible random input. Third, some @emph{timings} are performed, which
+benchmark some predefined problems with different sizes and display the
+CPU time used in seconds. Each individual test should return a message
+@samp{passed}. This is mostly intended as a QA check in case something
+was broken during development, not as a sanity check of your system.
+Another intent is to allow people to fiddle around with optimization.
-Generally, the top-level Makefile runs recursively to the
-subdirectories. It is therfore safe to go into any subdirectory
+Generally, the top-level Makefile runs recursively through the
+subdirectories. It is therefore safe to go into any subdirectory
@cindex evaluation
The last line returns @code{cos(x)} if we don't know what else to do and
stops a potential recursive evaluation by saying @code{.hold()}, which
-sets a flag to the expression signalint that it has been evaluated. We
+sets a flag to the expression signaling that it has been evaluated. We
should also implement a method for numerical evaluation and since we are
lazy we sweep the problem under the rug by calling someone else's
function that does so, in this case the one in class @code{numeric}:
@end example
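Such a numerical evaluation hook might look roughly as follows (a hedged
sketch, not the tutorial's actual listing; the name @code{cos_evalf} and
the exact type-query calls are assumptions about GiNaC's API):

@example
static ex cos_evalf(const ex & x)
@{
    // delegate to class numeric if the argument is a number
    if (is_a<numeric>(x))
        return cos(ex_to<numeric>(x));
    // otherwise leave the expression unevaluated
    return cos(x).hold();
@}
@end example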
The first argument is the function's name used for calling it and for
-output. The second binds the corresponding methods as options to this
-object. Options are separated by a dot and can be given in an arbitrary
-order. GiNaC functions understand several more options which
-are always specified as @code{.option(params)}, for example a method
-for series expansion @code{.series_func(cos_series)}. If no series
-expansion method is given, GiNaC defaults to simple Taylor
-expansion, which is correct if there are no poles involved (as is
-the case for the @code{cos} function). The way
-GiNaC handles poles in case there are any is best understood by studying
-one of the examples, like the Gamma function for instance. In essence
-the function first checks if there is a pole at the evaluation point and
-falls back to Taylor expansion if there isn't. Then, the pole is
-regularized by some suitable transformation.) Also, the new function
-needs to be declared somewhere. This may also be done by a convenient
-preprocessor macro:
+output. The second binds the corresponding methods as options to this
+object. Options are separated by a dot and can be given in an arbitrary
+order. GiNaC functions understand several more options which are always
+specified as @code{.option(params)}, for example a method for series
+expansion @code{.series_func(cos_series)}. Again, if no series
+expansion method is given, GiNaC defaults to simple Taylor expansion,
+which is correct if there are no poles involved, as is the case for the
+@code{cos} function. The way GiNaC handles poles, in case there are any,
+is best understood by studying one of the examples, like the Gamma
+function for instance. (In essence the function first checks if there
+is a pole at the evaluation point and falls back to Taylor expansion if
+there isn't. Then, the pole is regularized by some suitable
+transformation.) Also, the new function needs to be declared somewhere.
+This may also be done by a convenient preprocessor macro:
@example
DECLARE_FUNCTION_1P(cos)
mechanisms. Please, have a look at the real implementation in GiNaC.
(By the way: in case you are worrying about all the macros above we can
assure you that functions are GiNaC's most macro-intense classes. We
-have done our best to avoid them where we can.)
+have done our best to avoid macros where we can.)
That's it. May the source be with you!