Silverfrost Forums

No E format in Clearwin

27 Nov 2017 5:35 #20863

Leaving aside a hack with /DREAL or OPTIONS(DOUBLE PRECISION), which has little chance of becoming common practice in the future, I see no reason why, in the code

real*8 X, Y
X=3.4567890123456
Y=3

the value of X was cut to real*4 despite it being stated twice that it has to be real*8, while Y, when assigned an INTEGER value, got king's treatment and received real*8 precision: Y = 3.000000000000000E+00. Why not at least the same absurd real*4? It is good that the compiler now at least points at this contradiction. Again, I'd consider this specific case an error.

27 Nov 2017 10:19 #20864

Quoted from DanRRight ... Y, when assigned an INTEGER value, got king's treatment and received real*8 precision: Y = 3.000000000000000E+00. Why not at least the same absurd real*4?

If you set Y = an integer expression, and the value of that expression lies between -(2**24 - 1) and (2**24 - 1), 24 bits are sufficient to store the expression without error. Such numbers get 'king's treatment'. Integers larger in absolute value than these get 'queen's treatment', i.e., are subject to chopping or rounding.
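A minimal sketch (not from the original post) of that 24-bit boundary: REAL*4 has a 24-bit significand, so 2**24 is stored exactly, while 2**24 + 1 is rounded back down to it.

```fortran
! Integers up to 2**24 in magnitude are exact in REAL*4 ('king's
! treatment'); 2**24 + 1 needs 25 bits and is rounded ('queen's
! treatment') back to 2**24.
      program royal_treatment
      real*4 a, b
      a = 16777216             ! 2**24, representable exactly
      b = 16777217             ! 2**24 + 1, rounded on conversion
      print *, a, b            ! both print the same value, 2**24
      end program royal_treatment
```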

A slight variation of your code may help you see things better:

real*8 X 
X=3/4
print *, X

If you feel up to it, you may even try

real*8 X 
X='3/4'
print *, X
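For readers who prefer the answer spelled out: in the first variant, 3/4 is evaluated in INTEGER arithmetic before any conversion, so X receives zero; the second variant assigns a CHARACTER constant to a REAL variable, which is not standard-conforming. A minimal sketch (not from the original post):

```fortran
! 3/4 is an INTEGER expression: integer division truncates to 0,
! and only then is the result converted to REAL*8.
      program int_division
      real*8 x
      x = 3/4                  ! = 0, then converted: x is 0.0d0
      x = 3.0d0/4              ! one REAL*8 operand forces real division
      print *, x               ! 0.75
      end program int_division
```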

27 Nov 2017 5:34 #20875

Yes, while integers get royal treatment, the obvious real*8 number gets the unimaginable: highway-robbery treatment, stripped of its extra digits. I have no clue how this got into the Standard.

28 Nov 2017 8:31 #20880

I agree with Dan,

If you coded 'X=3.4567890123456', it is very 'harsh' of the compiler to strip it to a real*4 constant, but that is what was defined and caused a few unnecessary problems. There is the case of what to do with SIN ( 3.4567890123456 ) or my_function ( 3.4567890123456 ), which is more of a problem.
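A minimal sketch (not from the original posts) of that 'stripping' for the constant quoted above: under F90+ rules a literal without a D exponent is a default REAL, so only about seven significant digits survive the assignment.

```fortran
! The literal without a D exponent is a default (single precision)
! REAL constant, rounded to REAL*4 before being widened into x.
      program stripped_digits
      real*8 x, y
      x = 3.4567890123456      ! default REAL: ~7 digits survive
      y = 3.4567890123456d0    ! D exponent: full REAL*8 precision kept
      print *, x               ! agrees with the source text to ~7 digits only
      print *, y               ! all digits kept
      print *, y - x           ! nonzero: the digits that were stripped
      end program stripped_digits
```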

The example 'real*8 X ; X = 0.6' was also a problem for the unwary, as these types of constants often occurred in FE coding when converting from CDC to mini. F77 devilry !!

Then they introduced object oriented programming as that was supposed to lead to fewer coding errors !!

5 Nov 2018 10:41 #22737

A new option [fmt=<format_specifier>] has been added to %rf in order to provide program control of the displayed REAL value. The <format_specifier> is analogous to one of %wf, %we and %wg. For example %rf[fmt=0.4e] is equivalent to %0.4we. In other words, we simply omit the '%' and the 'w'. The result in this case is an exponent form with 4 decimal places and 5 significant figures. See the help file on %wf etc. for detailed information on how to write the <format_specifier>.

Here is a simple example:

program rf
integer k,iw,winio@
real*8 v
v = 1.234567
k = 6
iw = winio@('%co[check_on_focus_loss]&')
iw = winio@('%rf[fmt=0.4e]&',v)
iw = winio@('%ff%nl&')
iw = winio@('%rd&',k)
iw = winio@('%ff%nl&')
iw = winio@('%`rf',v)
end

And here is a link to new DLLs to try this out...

https://www.dropbox.com/s/xabtjc3cyshi922/newDLLs26.zip?dl=0

7 Nov 2018 4:15 #22740

Answer to John S:

!  Example to show that F90+ no longer provides REAL*8 precision
!   for real constants written without a D exponent;
!   a similar test for 0.1 shows errors,
!   but 0.5 or 0.25 give the same value for 0.5e0 or 0.5d0
!
      real*8 x,y,z,a
      x = 0.6     ! F77 would provide a r*8 constant, F90 provides r*4
      y = 0.6d0
      z = 0.6e0
      a = 0.1d0
      a = 6*a    ! round off difference
      write (*,*) 'x = 0.6  ',x
      write (*,*) 'y = 0.6d0',y
      write (*,*) 'z = 0.6e0',z
      write (*,*) 'x-y =    ',x-y
      write (*,*) 'a = 6*0.1d0',a
      write (*,*) 'y-a =    ',y-a
      end

What would %rf do with 0.6 ? I suspect 0.6d0.

7 Nov 2018 1:40 #22741

John C.: I realise that the comment on the first executable statement of your illustrative code may be based on a vague recollection, but I do not know if you actually had a Fortran-77 compiler that treated the constant as a double precision constant. If you can name that compiler, and that compiler did that without any options such as /DREAL, that would be interesting, because the Fortran 77 Standard says ( https://www.fortran.com/F77_std/rjcnf0001-sh-4.html#sh-4.2.1 ):

4.2.1 Data Type of a Constant. The form of the string representing a constant specifies both its value and data type.

We may critique the rules of the language, and many of us often are puzzled by why some such behaviour was built into the standard. Indeed, I have often wished for a codicil/commentary on the Fortran Standard that provides background and explanation of why and how such features were chosen. Nevertheless, an imperfect standard is far better to have than no standard.

The Salford/Silverfrost FTN77 compiler gives the same output as FTN95 for your test program.

8 Nov 2018 10:11 #22749

Lahey Fortran 95 documentation does refer to this problem. I recall that Lahey F77 did automatically upgrade such constants to double precision, as in ' x = 0.6 '.

Lahey/Fujitsu Fortran 95 Language Reference :Revision D, Appendix A : Fortran 77 Compatibility states:

'Standard Fortran 90 is a superset of standard Fortran 77 and a standard-conforming Fortran 77 program will compile properly under Fortran 90. There are, however, some situations in which the program’s interpretation may differ.
• Fortran 77 permitted a processor to supply more precision derived from a REAL constant than can be contained in a REAL datum when the constant is used to initialize a DOUBLE PRECISION data object in a DATA statement. Fortran 90 does not permit this option.'

I have experienced the changes to the precision of constants in old code. It has been a significant annoyance when checking code transferred to newer compilers. This problem was also mitigated when transferring from CDC to mini, with constants for REAL*8 being upgraded. The Pr1me FTN compiler also provided this support. (The other significant issue was the loss of 80-bit registers when moving to SSE vector instructions.)

8 Nov 2018 12:29 #22756

John, there is no DATA statement in your example code ( http://forums.silverfrost.com/posting.php?mode=quote&p=25703 ), so the Lahey extension (of considering a constant to be DOUBLE PRECISION based on the number of digits in the mantissa being sufficiently large, with or without a 'Dnn' exponent field) would not apply.

8 Nov 2018 12:47 #22757

My recollection is that the increased precision of constants extended to statements like ' x = 0.6'.

There was support from the mini computer compiler writers to address the precision problems when using double precision/real*8, as a lot of their market was from users of CDC developed software, which they needed to prove could run on their mini. This issue became more significant when moving to F95, which effectively banned this feature. I am surprised that the Silverfrost F77 doesn't show this feature.

I have tried unsuccessfully to find my 16-bit and 32-bit F77 compilers or documentation. ( which was probably paper copies only )

This is of historical interest only, as the Fortran standard has long been fixed in this approach (nearly 30 years!), requiring precision to now be appropriately addressed.

8 Nov 2018 2:43 #22764

John C., there were at least three Lahey Fortran 77 compilers that I used in the past:

  1. LFP77 ('personal'); small memory model (64 K code, 64 K data)

  2. LF77; 20-bit address range, 16 bit code

  3. EM32; uses a DOS extender (TNT? PharLap?) to access up to 16 MB data

I still probably have some of those on old floppies. Are you referring to one of these? Which?

Thanks.

10 Nov 2018 1:40 #22781

mecej4,

I certainly used LF77 (640k) and EM32 ( a few mb of memory ) This was after using Microsoft F77 (640k), with HUGE attribute for arrays larger than 64k. Somewhere in there I used an overlay linker with Microsoft F77 & LF77(?). After this I then moved from EM32 to FTN77 and better graphics. In parallel I was still using Pr1me FTN (not F77) then Apollo and Sparc. Not much Vax, as their Fortran was too different. (What a loss when Apollo went to HP). I recall that Pr1me FTN was very accommodating, as was Apollo, so I'd expect they would have provided higher precision constants.

I thought Lahey applied 80 bit precision to constants. Their documentation for F95 certainly recognises the issue.

I am surprised Salford FTN77 did not, although you have shown that it does not now. This wasn't the only precision hiccup: moving from x87 to SSE instructions certainly lost precision, which had to be accepted as vector instructions became more common.

The net result was that for testing new compilers, I had to write code to compare runs and report the different round-off errors to determine if the compiler switch had some errors. It now becomes even more interesting with multi-thread results.

I started moving code from CDC to Pr1me in 1975. Pr1me were very keen to help in those days with benchmarks. It is amazing to recall the computer costs in those days, and especially the frequency of service technicians being called to repair disks. We did partial disk backups each day and a full backup each week, and probably a rebuild every 2 months. We had one 600 MB, 2 x 300 MB and 1 x 80 MB drives for storage, with 32 MB for paging / virtual memory. (In the last year I have been cleaning out a lot of disk-to-memory based solvers and replacing them with memory-to-cache solvers. Interesting the similarity in now treating memory as we considered the paging disks of 40 years ago.)

My disappointment is that with AVX registers/instructions we don't have a hardware supported real12 or real16 ( no interest in real*12 ! )

10 Nov 2018 1:51 #22782

Quoted from JohnCampbell (In the last year I have been cleaning out a lot of disk-to-memory based solvers and replacing them with memory-to-cache solvers. Interesting the similarity in now treating memory as we considered the paging disks of 40 years ago.)

Up to a point, John, you wasted your time. I just put together a Ryzen system, with a 500 GB M.2 NVMe SSD to boot from. It then dawned on me that a second one (the MB only supports 2, although Ryzen Threadripper boards often support 3) could be used for 'scratch' files at really high response rates. (Even the OS SSD could be used, but I kept them separate.)

Eddie

10 Nov 2018 2:36 #22784

Quoted from JohnCampbell mecej4, I thought Lahey applied 80 bit precision to constants.

There are a number of options to control (in a non-standard-conforming way, naturally) the interpretation of real constants in Fortran source. Lahey provides a compiler option to generate SSE2 code for the P4.

Quoted from JohnCampbell My disappointment is that with AVX registers/instructions we don't have a hardware supported real12 or real16 ( no interest in real*12 ! )

No AVX instruction supports floating point computations with numbers other than FLOAT32/FLOAT64. The -128, -256 and -512 modifiers signify how many (4, 8 and 16 FLOAT32; 2, 4 and 8 FLOAT64) floats can be processed with a single instruction, not the precision/range of each component.

11 Nov 2018 1:14 #22786

Quoted from mecej4 No AVX instruction supports floating point computations with numbers other than FLOAT32/FLOAT64. The -128, -256 and -512 modifiers signify how many (4, 8 and 16 FLOAT32; 2, 4 and 8 FLOAT64) floats can be processed with a single instruction, not the precision/range of each component.

I wonder if FLOAT128 will ever be hardware supported with a single instruction. ( There was a time when 2gb memory was not even contemplated, such as when we used 2 platters of a CDC drive for paging )

12 Nov 2018 4:43 #22797

Quoted from LitusSaxonicum Up to a point, John, you wasted your time.

Eddie,

In many ways you are so correct. I am often looking at alternative algorithms that I think suit the modern hardware, especially vector instructions, cache and threads.

For some of my attempts, a 'pass' is if the new approach does not run slower! This especially applies to some of what I have done in the last year: removing disk I/O (which is memory cached) and moving private arrays from heap to stack are two recent examples that took a lot of work for an unidentifiable improvement, possibly a fail; certainly not the change I was hoping for.

John

12 Nov 2018 3:10 #22806

John,

Then I suggest trying the old version with disk I/O on a machine with the new generation of M.2 NVMe SSDs. You could get some idea of the benefit simply by using a SATA SSD for the temporary files (a simple trial, costing about £100), and if this gives sensible speed-ups, remember that an NVMe SSD is many times faster than a SATA SSD (but may require different hardware).

Eddie
