evalf()/Digits bugs

Richard B. Kreckel kreckel at thep.physik.uni-mainz.de
Tue Jan 8 16:41:14 CET 2002


On Tue, 8 Jan 2002, Phil Mendelsohn wrote:
> > Oh, if you worry about that, then throwing away digits in the output is
> > not going to help since it is merely cosmetic.  If we could have
> > two digits, then compared to 3.3, you are saying that 3.333 is better than
> > 3.3334.  Why bother?
> Sorry for sticking my oar in, but I think the reason the one inaccurate
> answer is better than the other is a human one, not a machine or a
> precision one.  Either way you should describe the output behavior
> concisely, and with the current behavior, the description that should be
> used is
> "GiNaC gives you meaningless garbage at the end of your output under <x>
> circumstances."

No, no, no!  This is not `meaningless garbage', although the human eye
might think it is.  This `garbage' is absolutely correct from the
computer's point of view.  See below...

> Not exactly something that sounds like a feature. ;)
> It's easier (I think) to spot that the number of digits isn't right, but
> nice to know that at least what you *do* see is correct.

Okay, we have two options, then:
1) Fix CLN, since this is where that output is really generated.
2) Apply a cosmetic patch to GiNaC that throws away the last digits.

Option 1)
  We first have to find an error here.  Considering the 1/3 issue, you
  may notice that the pure CLN output routine always seems to round up,
  except in those cases where the precision is lower than 17 decimal digits.
  Why?  The reason is that CLN uses whole "digit sequences" consisting of 
  arrays of machine-size words for the mantissa, once a machine-size word
  cannot handle sign, mantissa and exponent for the accuracy that we have
  ordered.  Such an array necessarily has a power of two binary digits.
  As such, 1/3 always looks like the infinitely repeating pattern
  0.010101... in binary notation, which has to be truncated by the
  machine to fit a power of two.  In other words, the truncation has to
  take place after a 0 binary digit.  But due to rounding, that last 0
  must be converted to a 1.  What we thus get, assuming 16-bit words, is
  a mantissa whose last word reads 1010101010101011 instead of
  1010101010101010.
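  As a sanity check of the 16-bit-word case, here is a sketch (not CLN
  itself; Python's exact rationals stand in for CLN's digit sequences):

```python
from fractions import Fraction

# Normalize 1/3 to a 16-bit mantissa with a leading 1 bit and round to
# nearest: 1/3 = (4/3) * 2^-2, so the mantissa is round(1/3 * 2^17).
mantissa = round(Fraction(1, 3) * 2**17)
print(format(mantissa, "016b"))  # 1010101010101011: the last 0 became a 1
```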
  Later, that internal representation is converted to decimal notation.
  Here are the first few cases.  For each, the first line gives the
  mantissa as it is stored internally by CLN in binary notation, followed
  by the "exact" value in decimal notation to two more digits, and the
  output in decimal as it is generated by CLN.  The last one is the one
  which seems to offend people when typing `evalf(1/3)' in ginsh.  The
  number of decimal digits corresponds to cln::cl_float_format_t.
  17 decimal Digits:
  27 decimal Digits:
  37 decimal Digits:
  46 decimal Digits:
  56 decimal Digits:
  66 decimal Digits:
  75 decimal Digits:
  The rounding is actually working wonderfully, in all these cases.
  Hence, there does not seem to be anything we must fix in CLN.
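  The round-up behaviour can be spot-checked outside CLN.  The following
  sketch normalizes 1/3 to a mantissa of w bits with a leading 1 and
  rounds to nearest; word widths that are multiples of 32 bits are an
  assumption made for illustration:

```python
from fractions import Fraction

exact = Fraction(1, 3)
# Normalized mantissa of 1/3 with a leading 1 bit: 1/3 = (4/3) * 2^-2,
# so a w-bit mantissa is round(1/3 * 2^(w+1)).
for words in range(1, 8):
    w = 32 * words  # assumed word widths, for illustration only
    stored = Fraction(round(exact * 2**(w + 1)), 2**(w + 1))
    print(w, "rounds up" if stored > exact else "rounds down")
```

  For every even mantissa width the last retained bit of the 0101...
  pattern is a 0 followed by a discarded 1, so round-to-nearest always
  bumps the stored value just above 1/3, and the decimal printout
  faithfully reports that excess.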

Option 2)
  I am not entirely sure, but to me this looks purely cosmetic and would
  only lull the user into a false sense of security.  I still fail to
  see what's so wrong about 0.33333333333333333334.  It gives you some
  valuable information, doesn't it?  After all, when you throw away the
  last digit, we could interpret the number 0.3333333333333333333 as
  0.33333333333333333327, but the original is much closer to reality.
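  One can check that claim numerically by comparing both candidate
  printouts against the internally stored value (a sketch assuming a
  64-bit mantissa, analogous to the cases above):

```python
from fractions import Fraction

# Stored value of 1/3 with a 64-bit mantissa, rounded to nearest
# (the 64-bit width is an assumption for illustration):
stored = Fraction(round(Fraction(1, 3) * 2**65), 2**65)

full      = Fraction(33333333333333333334, 10**20)  # all digits printed
truncated = Fraction(3333333333333333333, 10**19)   # last digit dropped

# The last digit, ugly as it looks, brings the printout closer to the
# value the machine actually holds:
print(abs(full - stored) < abs(truncated - stored))
```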

Best wishes
Richard B. Kreckel
<Richard.Kreckel at Uni-Mainz.DE>
