1/3
@end example
-All numbers occuring in GiNaC's expressions can be converted into floating
-point numbers with the @code{evalf} method, to arbitrary accuracy:
+Exact numbers are always retained as exact numbers and only evaluated
+as floating point numbers if requested. For instance, numeric radicals
+are dealt with pretty much like symbols: products of sums of them can
+be expanded:
+
+@example
+> expand((1+a^(1/5)-a^(2/5))^3);
+1+3*a+3*a^(1/5)-5*a^(3/5)-a^(6/5)
+> expand((1+3^(1/5)-3^(2/5))^3);
+10-5*3^(3/5)
+> evalf((1+3^(1/5)-3^(2/5))^3);
+0.33408977534118624228
+@end example
+
+The function @code{evalf} that was used above converts any number in
+GiNaC's expressions into a floating point number. This can be done to
+arbitrary predefined accuracy:
@example
> evalf(1/7);
0.14285714285714285714
> a=Pi^2+x;
x+Pi^2
> evalf(a);
-x+9.869604401089358619L0
+9.869604401089358619+x
> x=2;
2
> evalf(a);
-11.869604401089358619L0
+11.869604401089358619
@end example
Built-in functions evaluate immediately to exact numbers if
@example
> lsolve(a+x*y==z,x);
y^(-1)*(z-a)
-lsolve([3*x+5*y == 7, -2*x+10*y == -5], [x, y]);
+> lsolve([3*x+5*y == 7, -2*x+10*y == -5], [x, y]);
[x==19/8,y==-1/40]
> M = [[ [[1, 3]], [[-3, 2]] ]];
[[ [[1,3]], [[-3,2]] ]]
x^(-1)-EulerGamma+(1/12*Pi^2+1/2*EulerGamma^2)*x
+(-1/3*zeta(3)-1/12*Pi^2*EulerGamma-1/6*EulerGamma^3)*x^2+Order(x^3)
> evalf(");
-x^(-1.0)-0.5772156649015328606+(0.98905599532797255544)*x
--(0.90747907608088628905)*x^2+Order(x^(3.0))
+x^(-1)-0.5772156649015328606+(0.9890559953279725555)*x
+-(0.90747907608088628905)*x^2+Order(x^3)
> series(gamma(2*sin(x)-2),x,Pi/2,6);
-(x-1/2*Pi)^(-2)+(-1/12*Pi^2-1/2*EulerGamma^2-1/240)*(x-1/2*Pi)^2
-EulerGamma-1/12+Order((x-1/2*Pi)^3)
but also on other parameters, for instance what value for @env{CXXFLAGS}
you entered. Optimization may be very time-consuming.
-Just to make sure GiNaC works properly you may run a simple test
-suite by typing
+Just to make sure GiNaC works properly you may run a collection of
+regression tests by typing
@example
$ make check
@end example
-This will compile some sample programs, run them and compare the output
-to reference output. Each of the checks should return a message @samp{passed}
-together with the CPU time used for that particular test. If it does
-not, something went wrong. This is mostly intended to be a QA-check
-if something was broken during the development, not a sanity check
-of your system. Another intent is to allow people to fiddle around
-with optimization. If @acronym{CLN} was installed all right
-this step is unlikely to return any errors.
+This will compile some sample programs, run them and check the output
+for correctness. The regression tests fall into three categories. First,
+the so-called @emph{exams} are performed, simple tests where some
+predefined input is evaluated (like a pupil's exam). Second, the
+@emph{checks} test the consistency of results among each other, possibly
+with random input. Third, some @emph{timings} are performed, which
+benchmark some predefined problems of different sizes and display the
+CPU time used in seconds. Each individual test should return a message
+@samp{passed}. This is mostly intended as a QA check for whether
+something was broken during development, not as a sanity check of your
+system. Another intent is to allow people to fiddle around with
+optimization.
Generally, the top-level Makefile recurses into the
subdirectories. It is therefore safe to go into any subdirectory
int main()
@{
- numeric two(2); // exact integer 2
- numeric r(2,3); // exact fraction 2/3
- numeric e(2.71828); // floating point number
- numeric p("3.1415926535897932385"); // floating point number
-
+ numeric two(2); // exact integer 2
+ numeric r(2,3); // exact fraction 2/3
+ numeric e(2.71828); // floating point number
+ numeric p("3.1415926535897932385"); // floating point number
+ // Trott's constant in scientific notation:
+    numeric trott("1.0841015122311136151E-1");
+
cout << two*p << endl; // floating point 6.283...
// ...
@}
@cindex evaluation
The last line returns @code{cos(x)} if we don't know what else to do and
stops a potential recursive evaluation by saying @code{.hold()}, which
-sets a flag to the expression signalint that it has been evaluated. We
+sets a flag on the expression signaling that it has been evaluated. We
should also implement a method for numerical evaluation and since we are
lazy we sweep the problem under the rug by calling someone else's
function that does so, in this case the one in class @code{numeric}:
@code{ex::diff}):
@example
-static ex cos_derive(const ex & x, unsigned diff_param)
+static ex cos_deriv(const ex & x, unsigned diff_param)
@{
return -sin(x);
@}
are curious:
@example
-REGISTER_FUNCTION(cos, cos_eval, cos_evalf, cos_derive, NULL);
+REGISTER_FUNCTION(cos, eval_func(cos_eval).
+ evalf_func(cos_evalf).
+ derivative_func(cos_deriv));
@end example
The first argument is the function's name used for calling it and for
-output. The second, third and fourth bind the corresponding methods to
-this objects and the fifth is a slot for inserting a method for series
-expansion. (If set to @code{NULL} it defaults to simple Taylor
-expansion, which is correct if there are no poles involved. The way
-GiNaC handles poles in case there are any is best understood by studying
-one of the examples, like the Gamma function for instance. In essence
-the function first checks if there is a pole at the evaluation point and
-falls back to Taylor expansion if there isn't. Then, the pole is
-regularized by some suitable transformation.) Also, the new function
-needs to be declared somewhere. This may also be done by a convenient
-preprocessor macro:
+output. The second binds the corresponding methods as options to this
+object. Options are separated by a dot and can be given in an arbitrary
+order. GiNaC functions understand several more options which are always
+specified as @code{.option(params)}, for example a method for series
+expansion @code{.series_func(cos_series)}. If no series expansion
+method is given, GiNaC defaults to simple Taylor expansion, which is
+correct if there are no poles involved, as is the case for the
+@code{cos} function. The way GiNaC handles poles in case there are any
+is best understood by studying one of the examples, like the Gamma
+function for instance. (In essence the function first checks if there
+is a pole at the evaluation point and falls back to Taylor expansion if
+there isn't. Then, the pole is regularized by some suitable
+transformation.) Also, the new function needs to be declared somewhere.
+This may also be done by a convenient preprocessor macro:
@example
DECLARE_FUNCTION_1P(cos)
mechanisms. Please, have a look at the real implementation in GiNaC.
(By the way: in case you are worrying about all the macros above we can
assure you that functions are GiNaC's most macro-intense classes. We
-have done our best to avoid them where we can.)
+have done our best to avoid macros where we can.)
That's it. May the source be with you!
by the parser. In particular, it turns out to be almost impossible to
fix bugs in a traditional system.
+@item
+multiple interfaces: Though real GiNaC programs have to be written in
+some editor, then be compiled, linked and executed, there are more ways
+to work with the GiNaC engine. Many people want to play with
+expressions interactively, as in traditional CASs. Currently, two such
+windows into GiNaC have been implemented and many more are possible:
+first, the tiny @command{ginsh} that is part of the distribution exposes
+GiNaC's types to a command line, and second, as a more consistent
+approach, an interactive interface to the @acronym{Cint} C++ interpreter
+has been put together (called @acronym{GiNaC-cint}) that provides an
+interactive scripting environment consistent with the C++ language.
+
@item
seamless integration: it is somewhere between difficult and impossible
to call CAS functions from within a program written in C++ or any other
@itemize @bullet
-@item
-not interactive: GiNaC programs have to be written in an editor,
-compiled and executed. You cannot play with expressions interactively.
-However, such an extension is not inherently forbidden by design. In
-fact, two interactive interfaces are possible: First, a shell that
-exposes GiNaC's types to a command line can readily be written (the tiny
-@command{ginsh} that is part of the distribution being an example).
-Second, as a more consistent approach, an interactive interface to the
-@acronym{CINT} C++ interpreter is under development (called
-@acronym{GiNaC-cint}) that will allow an interactive interface
-consistent with the C++ language.
-
@item
advanced features: GiNaC cannot compete with a program like
@emph{Reduce} which has existed for more than 30 years now or @emph{Maple}