mecej4,
I certainly used LF77 (640k) and EM32 (a few MB of memory).
This was after using Microsoft F77 (640k), with the HUGE attribute for arrays larger than 64k.
Somewhere in there I used an overlay linker with Microsoft F77 & LF77(?).
After this I moved from EM32 to FTN77 and better graphics.
In parallel I was still using Pr1me FTN (not F77), then Apollo and SPARC.
Not much VAX, as their Fortran was too different. (What a loss when Apollo went to HP.)
I recall that Pr1me FTN was very accommodating, as was Apollo, so I'd expect they would have provided higher precision constants.
I thought Lahey applied 80-bit precision to constants. Their documentation for F95 certainly recognises the issue.
I am surprised Salford FTN77 did not, although your results show it does not even now.
This wasn't the only precision hiccup: moving from x87 to SSE instructions certainly lost precision, which had to be accepted as vector instructions became more common.
The net result was that, for testing new compilers, I had to write code to compare runs and report the differing round-off errors, to determine whether a compiler switch had introduced errors. It becomes even more interesting now with multi-threaded results.
I started moving code from CDC to Pr1me in 1975. Pr1me were very keen to help with benchmarks in those days. It is amazing to recall the computer costs then, and especially how often service technicians were called to repair disks. We did partial disk backups each day, a full backup each week, and probably a rebuild every two months. We had one 600 MB, two 300 MB and one 80 MB drive for storage, with 32 MB for paging / virtual memory.

(In the last year I have been cleaning out a lot of disk-to-memory based solvers and replacing them with memory-to-cache solvers. It is interesting how similar treating memory now is to how we treated the paging disks of 40 years ago.)
My disappointment is that with AVX registers/instructions we don't have hardware-supported real12 or real16 (no interest in real*12!).