forums.silverfrost.com Welcome to the Silverfrost forums
eric_carwardine
Joined: 13 Jun 2009 Posts: 70 Location: Perth, Western Australia
Posted: Wed Jun 01, 2011 2:59 pm    Post subject: Anatomy of a Calculation
G'day, folks
Are there any general principles that might govern how a calculation should be structured? Any things that can be considered to apply independent of the data which the calculation is handling?
For example,
(a + b + c)/d is algebraically equivalent to a/d + b/d + c/d, but does that equivalence hold when the processing is done on a limited-precision machine? In other words, does structure matter when writing the code? I'm thinking of things that are independent of the data. If the 'ideal' structure is not independent of the data, then I guess we have a whole new kettle of porcupines!
Here's the actual segment of code that has got me thinking about this possibility -
Code:
C
C     here for isothermal calculation
C
      f0(i) = (10.0**(c2)) * tinc + f0(i)
      goto 400
C
C     here for non-isothermal calculation
C
  300 dm = c1 * aln10
      f0(i) = (10.0**(c1*tinc+c2))/dm + f0(i) - (10.0**c2)/dm
C
In particular, I wondered whether the 'accumulator' f0(i) should be real*8, thinking that the results of the exponentiation (p1 and p2 below) should be tested before they could distort the rest of the calculation.
Code:
C
C     here for isothermal calculation
C
      route = 'I'
      p1 = 10.0**c2
      f0 = dble(p1 * tinc) + f0
      goto 400
C
C     here for non-isothermal calculation
C
  300 route = 'N'
      dm = c1 * aln10
      p1 = 10.0**c2
      p2 = 10.0**(c1*tinc+c2)
      f0 = dble(p2/dm) + f0 - dble(p1/dm)
C
Eric
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2402 Location: Yateley, Hants, UK
Posted: Wed Jun 01, 2011 7:00 pm    Post subject:
Hi Eric,
You can't assume it will be correct, especially when using REAL*4. Imagine
( A(1) + A(2) + ... + A(N) ) / B, where A(1) is huge and A(2) ... A(N) are tiny: say A(1) = 10^6 and each A(i) = 10^-2, but N is really big, say 10^8, and B is another big number, say 10^9.
If N is big enough, then the sum divided by B will be different from just A(1) divided by B; but if you expand the expression, each A(i)/B will always round to 0, and the sum calculated that way will be the same as A(1)/B, whereas the answer should be twice as much.
As the precision of REAL*4 is about 1 part in 10^7 - and you are lucky if you get that - then this sort of roundoff happens very often. So much so that you have to be really careful about roundoff in almost every calculation you do.
The precision of REAL*8 is much, much better, so you get the roundoff error less often, but it is still there as a possibility - ditto REAL*10 (better than REAL*8, but still not perfect).
An interesting compromise was the 24-bit word, commonplace on early British-made computers, which gave 48-bit REALs (in effect, REAL*6) and made the roundoff problem rather rare. As DOUBLE PRECISION on such a machine equated to 96 bits (in effect, REAL*12), you could very nearly make the roundoff problem disappear entirely! Some Cray machines used 60-bit REALs, in effect REAL*7.5 (or DOUBLE PRECISION at REAL*15).
Eddie
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Thu Jun 02, 2011 2:38 am    Post subject:
You could rearrange the statement at label 300:
Code:
      f0(i) = (10.0**(c1*tinc+c2))/dm + f0(i) - (10.0**c2)/dm
            = f0(i) + ( (10.0**(c1*tinc+c2))/dm - (10.0**c2)/dm )
            = f0(i) + ( (10.0**(c1*tinc) * 10.0**c2 - 10.0**c2)/dm )
            = f0(i) + ( (10.0**(c1*tinc) - 1.0) * (10.0**c2)/dm )
If "c1*tinc" is < -10 or > +10 then there starts to be a problem, and if it is below about -38 or above +38 the result is outside the range of a REAL*4 number altogether. Certainly the use of **real can become very sensitive to precision.
With real finite precision calculations, if you know the order of magnitude of the variables, you can modify the precision slightly by changing the order or using brackets. Certainly:
"(a + b + c)/d" could be calculated differently to
"(a/d) + (b/d) + (c/d)"
While you can produce unique examples where this change works well, I'm not sure how stable this approach is if you are expecting problems with round-off for a range of examples.
What would "(big1 - big2) + very_small" mean if big1 is very close to big2? Consider how accurately you could estimate big1 and big2 as input to the program, rather than their precision within the program. A good example is in finite elements, where the values in the stiffness matrix are typically only as accurate as each element stiffness definition, based on E, the elastic modulus, which is usually known to only 2 or 3 figures of accuracy. If the calculation is that sensitive, then there is probably a problem with the basic algorithm, especially when you consider the variability of the real system properties.
Another problem is that if you select /opt, the compiler may change the order of the calculations.
Based on my past experience, the above approach is a bit like shuffling the deck chairs on the Titanic.
Better approaches are increasing precision, if there is any left, or trying to re-state the algorithm or equation to avoid ill conditioning.
I once modified my FE equation solver from REAL*8 to REAL*10 for a poorly defined problem (bad rounding in the result) without a significant improvement. The problem was more with the model definition than with the equation solver.
eric_carwardine
Joined: 13 Jun 2009 Posts: 70 Location: Perth, Western Australia
Posted: Fri Jun 03, 2011 6:07 am    Post subject:
That's a sobering example of roundoff, Eddie. Paraphrasing: "If A(i) is tiny and B is huge then A(i)/B will always round to zero."
Perhaps even more 'deadly' (since I'm dealing with a critical heat-sterilisation process) is the 'serial accumulation' I'm using. If -
Fo = A(1) + A(2) + A(3) + ..... + A(n)
then even a tiny error in A(1) could see the eventual total diverge way off-target. It's a bit like aiming a beam at a distant galactic body: misalign the beam at its source by even the tiniest fraction of a degree and it could strike the distant body many miles off-target. I recall reading that the Germans in World War II encountered this sort of cumulative error as they positioned electromagnetic beams to guide their bombers across the English Channel; names like 'Wurzburg' and 'Knickebein' come to mind.
John finished his post with -
"The problem was more with the model definition, than with the equation solver."
Good point, John. It's caused me to go right back to the original lethality equation -
[img]http://www.kurrattan.net/images/Fzero.bmp[/img]
(in plain text, matching the code above: Fo = SUM over the process of 10**((T - 121)/z) * DELTA t)
where:
Fo = equivalent minutes at a temperature of 121 degrees Celsius
T = temperature of the sterilized product at time t
DELTA t = time interval between two successive measurements of T
z = temperature coefficient, assumed to be equal to 10 °C
We can read the temperature at intervals as small as a second or two. I'm wondering if it's possible to use this [T]emperature/[t]ime data to obtain a smooth realtime estimate of the gradient dT/dt. A realtime estimate is desired so that the sterilisation process can be terminated as soon as a sufficient Fo has been accumulated.
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2402 Location: Yateley, Hants, UK
Posted: Fri Jun 03, 2011 4:30 pm    Post subject:
Hi Eric,
My recollection of Wurzburg is that it is a gun-laying, fighter-directing radar. Knickebein is the beam. X-Gerat and Y-Gerat were something similar, but more advanced.
(http://en.wikipedia.org/wiki/Battle_of_the_Beams). You might also be interested in Gee, Oboe, H2S and Loran!
John is right that the answer lies in the choice of algorithm, although REAL*4 is such rubbish that you can't expect anything to work well with it, so you might get a result with REAL*8 when you can't with REAL*4; going much further (*10 etc.) probably isn't worth it.
You might be able to sort your small elements and add them from smallest to largest, so that they accumulate into something reasonable before you add them to the bigger numbers.
Eddie
IanLambley
Joined: 17 Dec 2006 Posts: 506 Location: Sunderland
Posted: Sat Jun 04, 2011 11:27 am    Post subject:
Ah, VAX - real*16 - grand and huge precision options - Happy days!
Ian
PS Still got a couple of MicroVaxes lying around for old times' sake!
JohnHorspool
Joined: 26 Sep 2005 Posts: 270 Location: Gloucestershire UK
Posted: Sat Jun 04, 2011 11:41 am    Post subject:
Ian,
Those MicroVaxes must be museum pieces by now! Or are you waiting for the Antiques Roadshow to visit your area?
John
_________________
John Horspool
Roshaz Software Ltd.
Gloucestershire
IanLambley
Joined: 17 Dec 2006 Posts: 506 Location: Sunderland
Posted: Sat Jun 04, 2011 4:42 pm    Post subject:
I just used it a few weeks ago for a bit of ultra-slow FEA. I just wish I had some more up-to-date software - got any ideas?
Thin-shell and modal analysis to analyse pipework.
Ian
Powered by phpBB © 2001, 2005 phpBB Group