Silverfrost Forums


Different result in 64-bit application

27 Apr 2021 12:59 #27634

I have created a simple example which behaves correctly in the 32-bit version. The same code produces a different result in the 64-bit application. It looks as though the decimal places behave differently.

I’m using the FTN95 version 8.70.0.

      PROGRAM TEST
C
      REAL*4  R1, R2, R3
      LOGICAL*2 LOGRES
C
      R1=14.0541906
      R2=0.0015596
      R3=14.0526314
C
      WRITE(*,*) R1
      WRITE(*,*) R2
      WRITE(*,*) R3
C
      LOGRES=R1-R2.LT.R3
      WRITE(*,*) LOGRES
C
      END

https://i.imgur.com/0H4SuZx.png

27 Apr 2021 2:30 #27636

The difference is in the round-off error. You need double precision. By chance you get the right answer for 32 bits.

      PROGRAM TEST
      REAL*8  R1, R2, R3, R4
      LOGICAL LOGRES
      R1=14.0541906
      R2=0.0015596
      R3=14.0526314
      R4=R1-R2
      WRITE(*,*) R1
      WRITE(*,*) R2
      WRITE(*,*) R3
      WRITE(*,*) R4
      LOGRES=R4.LT.R3
      WRITE(*,*) R4, R3, LOGRES
      END

27 Apr 2021 2:47 #27637

Yes, I just have to change REAL*4 to REAL*8 and both applications behave the same. But that's not the point. This behaviour makes porting an application from 32-bit to 64-bit much more complicated. The question is why the round-off error behaves so differently.

27 Apr 2021 3:07 #27638

In Fortran, expressions, including constants, have a type. The number 0.0015596, for example, is of type REAL*4, regardless of the declarations of the named variables. That number, then, carries only 24 bits of significand (about 7 decimal digits) of precision. When that value is stored in a REAL*8 variable, the lost precision is not regained.

Related, but somewhat different: in 32 bits, FTN95 generates x87 instructions. Intermediate results may be kept in 80-bit registers (64 bits of mantissa), even for variables that are REAL*4.

In a 64-bit program, SSE2 instructions are used, and single-precision arithmetic is carried out at the declared 32-bit width (24-bit mantissa), with no extended-precision intermediates.

27 Apr 2021 3:15 #27639

It could be simply that 32 bits uses FPU (floating point unit) instructions whilst 64 bits uses CPU instructions. More generally it could be because the internal order of the calculations is different or the registers are used in a different way.

If round-off error is significant then you can get different results between compilers, between processors or even between different options for a given compiler.

28 Apr 2021 10:23 #27646

I think that for 32-bit, the calculation 'R1-R2.LT.R3' will compare values in the 80-bit FPU. As R1, R2 and R3 are all REAL*4, I wonder what values are used? Perhaps it is using the values of R1, R2 and R3 still in the registers from before the 32-bit truncation, rather than the REAL*4 memory values.

Either way, the use of 'R1-R2.LT.R3', where R1-R2 and R3 have the same stored REAL*4 value, is a poor (contrived) test. The following adaptation provides some more information, indicating that for the 32-bit calculation the register value may differ from the stored REAL*4 value.

      PROGRAM TEST
!
      REAL*4  R1, R2, R3, R4
      LOGICAL*2 LOGRES
!
      R1 = 14.0541906     ! 9 figures exceeds REAL*4 storage accuracy
      R2 =  0.0015596
      R3 = 14.0526314
      R4 = 14.0541906   &
          - 0.0015596
!          14.0526310
      WRITE (*,*) R1
      WRITE (*,*) R2
      WRITE (*,*) R3
      WRITE (*,*) R1-R2
      WRITE (*,*) (R1-R2)-R3
!
      LOGRES = R1-R2.LT.R3
      WRITE (*,*) LOGRES
!
      WRITE (*,1) R1
      WRITE (*,1) R2
      WRITE (*,1) R3, NEAREST(R3,-1.), NEAREST(R3,1.)
      WRITE (*,1) R1-R2
      WRITE (*,1) (R1-R2)-R3
      WRITE (*,2) (R1-R2)-R3
  1   FORMAT (3F20.8)
  2   FORMAT (ES20.8)
!
      END

Is this a contrived test or an actual problem ?

28 Apr 2021 12:21 #27648

I am not sure there is any value in exploring exactly why 32 bits gives the 'correct' result, whether by chance or because the calculation is done in a way that provides more precision than demanded. Either way, for 64 bits it is necessary to use double precision for this calculation.

28 Apr 2021 12:34 #27650

A full switch to double precision is not a solution for us. We are trying to detect the critical code parts. We will use the REAL function to handle the REAL*4 data as REAL*8. With that change, 32-bit and 64-bit produce the same results. Our first tests look promising.

28 Apr 2021 1:16 #27652

It is sometimes possible to use /DREAL locally (for some files or some subroutines) in order to fix issues of this kind.

28 Apr 2021 2:30 #27655

As an historical note: when I started my first programming course in 1969 (Fortran), one of the pieces of advice we were given was never to compare floating point numbers for equality alone, especially when one of the numbers was a constant. The reason stated: strict equality cannot be guaranteed when either the result or the constant has a fractional component, or when the integer portion exceeds the precision of the mantissa. I have found that if the equality only involves whole numbers calculated from whole numbers (i.e. only integer values used/saved), equality works flawlessly. Nevertheless, I make the appropriate accommodations in the code just in case someone messes with the raw data.

Even on the IBM 7040 (my first real computer) with 72-bit double precision, I saw a lot of software being terminated because a loop would not quit as a result of floating point equality checks. My 1970 work-study job was as a 'student consultant', helping students find their bugs and correct them. (BTW, I loved doing this.) Most errors were floating point equality, like calculating the square root of 2 (a typical entry-level course assignment) and expecting to get a precise answer. The students we were helping with these kinds of problems were almost exclusively in the Business Administration courses, but were on the part of campus closer to the Engineering center, so they used us rather than their helpers across campus.

When the student would ask 'Why doesn't it work?', the answer we would always give was 'There are rounding errors, and any bits that are lost fall into the bit bucket.'

Once we were asked 'What happens when the bit bucket fills up?' We had a field day with that one, and I spoke at length on the bucket filling up, and having to be emptied and replaced on a regular basis, interfering with the computer operations, having to replenish the bit reservoir and the expense that incurred, and on and on. I passed this tidbit on to the other student consultants.

Perhaps we were a bit harsh, yes?

28 Apr 2021 4:39 #27656

Bill

That's interesting. I learned Fortran in about the same year in order to teach it to undergraduate engineers. Before that I only knew Algol 60.

28 Apr 2021 9:10 #27657

Paul, I enjoyed Algol! I kind of twitched at the syntax requirements, though. I'd rather type less! I learned assembly by taking the assembly listing from the compiler and learning different techniques (like indexing) on the 7040. Then I was introduced to the IBM 360/40 assembly language used by the University accounting system. I love assembly!

Ah the days of an actual key punch machine. I can still recall the clunking of the punch cutters and the steppers.

Nice to reminisce! Bill

29 Apr 2021 4:12 #27659

When I learnt Algol and FORTRAN, most card punch machines were '026' and a few '029'. Algol was no fun with those card punches and I soon went back to FORTRAN.

The OP is ignoring the reality of floating point precision: the calculation 'R1-R2.LT.R3' is subject to round-off error, especially since R1-R2 has the same 32-bit binary representation as R3. The Fortran 90 Standard actually addresses this issue, and the example demonstrates one of the non-standard features of FTN95 which was not required of FTN77. I am one of those who objected to this Fortran 90 feature, but it shows its benefit in this example.

29 Apr 2021 9:45 #27661

Bill, John, Paul,

I would like to comment on your very interesting remarks.

I learnt ALGOL W in 1974 (approximately) and then changed to ALGOL 60 when implementing algorithms in numerical analysis (as part of my studies in mathematics). I liked both very much, especially ALGOL 60, because it was the programming language in which mathematical scientists published at that time.

But I was also very fond of C which I learnt when programming on the old UNIX environments/machines.

I know that one had better not compare real numbers for equality in mathematical algorithms. But in this particular situation I did not expect a round-off error.

One more reminiscence: I remember that in 1974 (and for some years after) we sometimes could not find a free card punch machine in the computing center rooms and had to wait. Well ... times have changed ...

Regards, Dietmar

29 Apr 2021 11:26 #27663

When I learnt ALGOL and FORTRAN in 1973, I did not know that language Standards existed. We learnt the language provided for the hardware. ALGOL was on the KDF9 and (McCracken) FORTRAN was on the 7040. ALGOL was a very short experience on the KDF9! After that year of FORTRAN, I look back at how little I learnt with one run submitted per day of term. I certainly did not fully understand static variables, or the accuracy of real variables on different hardware. Next year was CDC which improved things.

In the original post, providing REAL*4 constants to 9 significant figures indicates some misplaced optimism.

29 Apr 2021 1:23 #27664

John, I had forgotten the 026 versus 029 keypunch! Thanks for the reminder! I, too, had to wait for keypunch machines. Once I started working in the computing center, I had access to the ones behind the locked doors, so an added benefit!

Dietmar, I still have my Kernighan and Ritchie 'C' book from 1975. I do use it from time to time as a reference. Since part of what I'm doing now is 'C' interfacing with FTN95, I keep my toes in the water using 'C'. All PDF generation, a coordinate transformation library, and a DLL for handling character generation for bit-mapped devices are in 'C' or C++.
