forums.silverfrost.com Welcome to the Silverfrost forums
JohnCampbell
Joined: 16 Feb 2006 Posts: 2503 Location: Sydney
Posted: Thu Nov 08, 2018 11:11 am Post subject:
Lahey Fortran 95 documentation does refer to this problem. I recall that Lahey F77 automatically promoted constants to double precision, even in statements such as " x = 0.6 ".
Lahey/Fujitsu Fortran 95 Language Reference, Revision D, Appendix A: Fortran 77 Compatibility states:
"Standard Fortran 90 is a superset of standard Fortran 77 and a standard-conforming Fortran 77 program will compile properly under Fortran 90. There are, however, some situations in which the program's interpretation may differ.
• Fortran 77 permitted a processor to supply more precision derived from a REAL constant than can be contained in a REAL datum when the constant is used to initialize a DOUBLE PRECISION data object in a DATA statement. Fortran 90 does not permit this option."
I have experienced these changes to the precision of constants in old code; it has been a significant annoyance when porting code to newer compilers and checking the results. The problem was also mitigated when transferring from CDC to minicomputers, with constants destined for REAL*8 variables being promoted. The Pr1me FTN compiler also provided this support.
(The other significant issue was the loss of the 80-bit x87 registers when moving to SSE vector instructions.)
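For anyone re-checking this today, a minimal sketch of the DATA-statement case from the quoted passage (standard Fortran 90 semantics; the program name is illustrative, and the historical compiler behaviour described above is from memory, not verified):

```fortran
program constant_precision
  implicit none
  double precision :: a, b
  ! Fortran 90 rules: 0.6 is a default REAL constant, so it is
  ! rounded to single precision before being converted to
  ! DOUBLE PRECISION for the DATA initialisation.
  data a /0.6/
  ! A D exponent makes the constant DOUBLE PRECISION from the start.
  data b /0.6d0/
  print *, 'a     =', a
  print *, 'b     =', b
  print *, 'a - b =', a - b   ! non-zero: the single-precision round-off
end program constant_precision
```

Under the old F77 extension, a and b would have printed the same value; under Fortran 90 rules, a carries the single-precision round-off.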
mecej4
Joined: 31 Oct 2006 Posts: 1839
Posted: Thu Nov 08, 2018 1:29 pm Post subject:
John, there is no DATA statement in your example code ( http://forums.silverfrost.com/posting.php?mode=quote&p=25703 ), so the Lahey extension (of treating a constant as DOUBLE PRECISION when the number of digits in the mantissa is sufficiently large, with or without a "Dnn" exponent field) would not apply.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2503 Location: Sydney
Posted: Thu Nov 08, 2018 1:47 pm Post subject:
My recollection is that the increased precision of constants extended to assignment statements like " x = 0.6 ".
Minicomputer compiler writers were willing to address the precision problems of double precision / REAL*8, as much of their market came from users of CDC-developed software, and they needed to prove that this software could run on their machines. The issue became more significant when moving to F95, which effectively banned this feature.
I am surprised that Silverfrost F77 doesn't show this behaviour. I have tried, unsuccessfully, to find my 16-bit and 32-bit F77 compilers or their documentation (which was probably available only as paper copies).
This is of historical interest only, as the Fortran standard has been fixed in its approach for a long time (nearly 30 years!), requiring precision to be addressed explicitly.
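The assignment form of the same issue can be sketched like this (standard-conforming behaviour, as in current compilers; under the old extensions recalled above, all three variables would have received the full-precision value):

```fortran
program assignment_constants
  implicit none
  double precision :: x, y, z
  x = 0.6                    ! default REAL constant: single-precision value
  y = 0.6d0                  ! DOUBLE PRECISION constant: full precision
  z = 0.59999999999999998    ! extra digits do not help: still default REAL
  print *, 'x =', x
  print *, 'y =', y
  print *, 'x - y =', x - y  ! non-zero under standard semantics
  print *, 'x == z:', x == z ! true: both constants are rounded to single first
end program assignment_constants
```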
mecej4
Joined: 31 Oct 2006 Posts: 1839
Posted: Thu Nov 08, 2018 3:43 pm Post subject:
John C., there were at least three Lahey Fortran 77 compilers that I used in the past:
1. LFP77 ("personal"); small memory model (64 K code, 64 K data)
2. LF77; 20-bit address range, 16-bit code
3. EM32; uses a DOS extender (TNT? Phar Lap?) to access up to 16 MB of data
I probably still have some of those on old floppies. Are you referring to one of these? Which?
Thanks.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2503 Location: Sydney
Posted: Sat Nov 10, 2018 2:40 pm Post subject:
mecej4,
I certainly used LF77 (640 KB) and EM32 (a few MB of memory). This was after using Microsoft F77 (640 KB), with the HUGE attribute for arrays larger than 64 KB. Somewhere in there I used an overlay linker with Microsoft F77 and LF77(?). After that I moved from EM32 to FTN77 and better graphics.
In parallel I was still using Pr1me FTN (not F77), then Apollo and Sparc. Not much VAX, as its Fortran was too different. (What a loss when Apollo went to HP.)
I recall that Pr1me FTN was very accommodating, as was Apollo, so I'd expect they would have provided higher-precision constants. I thought Lahey applied 80-bit precision to constants; their documentation for F95 certainly recognises the issue. I am surprised Salford FTN77 did not, although you are showing it does not now.
This wasn't the only precision hiccup: moving from x87 to SSE instructions certainly lost precision, which had to be accepted as vector instructions became more common. The net result was that, when testing new compilers, I had to write code to compare runs and report the differing round-off errors, to determine whether a compiler switch had introduced errors. It becomes even more interesting with multi-threaded results.
I started moving code from CDC to Pr1me in 1975. Pr1me were very keen to help with benchmarks in those days. It is amazing to recall the computer costs of that era, and especially how often service technicians were called to repair disks. We did partial disk backups each day, a full backup each week, and probably a rebuild every two months. We had one 600 MB, two 300 MB and one 80 MB drive for storage, with 32 MB for paging / virtual memory. (In the last year I have been cleaning out a lot of disk-to-memory based solvers and replacing them with memory-to-cache solvers. Interesting the similarity in now treating memory as we treated the paging disks of 40 years ago.)
My disappointment is that with AVX registers/instructions we don't have a hardware-supported REAL*12 or REAL*16 (no interest in REAL*12!).
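A run-comparison check of the kind described can be sketched as follows (a hypothetical routine, not the author's actual code; the array contents, sizes and tolerance are illustrative only):

```fortran
program check_roundoff
  implicit none
  double precision :: run1(4), run2(4)
  run1 = [1.0d0, 2.0d0, 3.0d0, 4.0d0]
  run2 = run1
  run2(3) = run2(3) * (1.0d0 + 1.0d-6)   ! inject a discrepancy
  call compare_runs(run1, run2, 4, 1.0d-9)
contains
  ! Report where two runs' results differ by more than a relative tolerance.
  subroutine compare_runs(a, b, n, tol)
    integer, intent(in) :: n
    double precision, intent(in) :: a(n), b(n), tol
    integer :: i, nbad
    double precision :: rel
    nbad = 0
    do i = 1, n
      rel = abs(a(i) - b(i)) / max(abs(a(i)), abs(b(i)), 1.0d-30)
      if (rel > tol) then
        nbad = nbad + 1
        print *, 'i =', i, '  a =', a(i), '  b =', b(i), '  rel =', rel
      end if
    end do
    print *, nbad, 'of', n, 'values exceed relative tolerance', tol
  end subroutine compare_runs
end program check_roundoff
```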
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2385 Location: Yateley, Hants, UK
Posted: Sat Nov 10, 2018 2:51 pm Post subject:
Quote: "(In the last year I have been cleaning out a lot of disk-to-memory based solvers and replacing them with memory-to-cache solvers. Interesting the similarity in now treating memory as we treated the paging disks of 40 years ago.)"
Up to a point, John, you wasted your time. I just put together a Ryzen system with a 500 GB M.2 NVMe SSD to boot from. It then dawned on me that a second one (the motherboard only supports two, although Ryzen Threadripper boards often support three) could be used for 'scratch' files at really high response rates. (Even the OS SSD could be used, but I kept them separate.)
Eddie
mecej4
Joined: 31 Oct 2006 Posts: 1839
Posted: Sat Nov 10, 2018 3:36 pm Post subject: Re:
JohnCampbell wrote: "mecej4, I thought Lahey applied 80-bit precision to constants."
There are a number of options to control (in a non-standard-conforming way, naturally) the interpretation of real constants in Fortran source. Lahey provides a compiler option to generate SSE2 code for the P4.
Quote: "My disappointment is that with AVX registers/instructions we don't have a hardware-supported REAL*12 or REAL*16 (no interest in REAL*12!)"
No AVX instruction supports floating-point computations with numbers other than FLOAT32/FLOAT64. The -128, -256 and -512 suffixes signify how many floats (4, 8 and 16 FLOAT32; 2, 4 and 8 FLOAT64) can be processed with a single instruction, not the precision/range of each component.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2503 Location: Sydney
Posted: Sun Nov 11, 2018 2:14 am Post subject: Re:
mecej4 wrote: "No AVX instruction supports floating-point computations with numbers other than FLOAT32/FLOAT64. The -128, -256 and -512 suffixes signify how many floats (4, 8 and 16 FLOAT32; 2, 4 and 8 FLOAT64) can be processed with a single instruction, not the precision/range of each component."
I wonder if FLOAT128 will ever be supported in hardware by a single instruction.
(There was a time when 2 GB of memory was not even contemplated, such as when we used two platters of a CDC drive for paging.)
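As an aside, quad precision is already available in software on several compilers: gfortran on x86, for example, provides a REAL128 kind via libquadmath. A small sketch (whether the REAL128 kind exists at all is compiler-dependent, so this may not compile everywhere):

```fortran
program quad_kinds
  use iso_fortran_env, only: real32, real64, real128
  implicit none
  ! precision() reports the number of significant decimal digits
  ! for each kind; for IEEE binary128 it is typically 33.
  print *, 'real32  digits:', precision(1.0_real32)
  print *, 'real64  digits:', precision(1.0_real64)
  print *, 'real128 digits:', precision(1.0_real128)
end program quad_kinds
```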
JohnCampbell
Joined: 16 Feb 2006 Posts: 2503 Location: Sydney
Posted: Mon Nov 12, 2018 5:43 am Post subject: Re:
LitusSaxonicum wrote: "Up to a point, John, you wasted your time."
Eddie,
In many ways you are so correct. I am often looking at alternative algorithms that I think suit modern hardware, especially vector instructions, cache and threads. For some of my attempts, a "pass" is if the new approach does not run slower!
This especially applies to some of what I have done in the last year: removing disk I/O (which is memory cached) and moving private arrays from heap to stack are two recent examples that took a lot of work for no identifiable improvement, possibly a fail; certainly not the change I was hoping for.
John
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2385 Location: Yateley, Hants, UK
Posted: Mon Nov 12, 2018 4:10 pm Post subject:
John,
Then I suggest trying the old version, with its disk I/O, on a machine with the new generation of M.2 NVMe SSDs. You could get some idea of the benefit simply by using a SATA SSD for the temporary files (a simple trial, costing about £100), and if this gives sensible speed-ups, remember that an NVMe SSD is many times faster than a SATA SSD (but may require different hardware).
Eddie
Powered by phpBB © 2001, 2005 phpBB Group