JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Fri Dec 02, 2011 2:58 am Post subject: Support for 4gb available memory on 64-bit OS |
|
|
Paul,
I have developed a memory-mapping program, using FTN95, which identifies all available memory in the range 0 to 4GB.
With Windows 7 x64 (and XP x64), memory between 2GB and 4GB is available via ALLOCATE if the program is built as "ftn95 program /link". (This is a useful extension to FTN95 when running on a 64-bit OS.)
However, if I build with "ftn95 program /debug /link", only memory between 0 and 2GB is available.
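For reference, the probing idea is roughly as follows (a much simplified sketch, not my actual mapper; the 256MB step is arbitrary):

Code:
! Sketch of the probing idea: attempt progressively larger single
! ALLOCATEs with STAT= and report where they begin to fail.  The
! largest success maps the usable address space of the 32-bit .exe.
program probe_memory
  implicit none
  integer, parameter :: words_per_mb = 131072   ! 1MB as 8-byte words
  integer :: size_mb, stat
  real*8, allocatable :: block(:)
  do size_mb = 256, 4096, 256                   ! arbitrary 256MB steps
    allocate (block(size_mb*words_per_mb), stat=stat)
    if (stat /= 0) then
      write (*,*) 'ALLOCATE failed at ', size_mb, ' MB'
      stop
    end if
    write (*,*) 'ALLOCATE of ', size_mb, ' MB succeeded'
    deallocate (block)
  end do
end program probe_memory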
Can you indicate why /debug excludes the availability of this extended memory?
I use /debug extensively to provide better call-back error reports of code addresses (line numbers) if the program crashes.
Is there a possibility that the 2GB to 4GB memory on a 64-bit OS could remain available when using the /debug option to provide line numbers?
John
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8210 Location: Salford, UK
Posted: Fri Dec 02, 2011 9:32 am Post subject: |
|
|
/check and /debug use a different memory allocation process and this would explain the difference.
Given time and some investigation, it ought to be possible for me to provide you with an option that forces the alternative, but this may bypass some of the memory-checking features (particularly for /check).
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Fri Dec 02, 2011 10:50 am Post subject: |
|
|
Paul,
The difference I am reporting is between /debug and no option, not between /debug and /check. (I think /debug also cancelled out the /3gb option.)
I am looking for an option that includes code line number references but still allows the extended addressing beyond 2GB.
Does /debug provide more than code line number referencing for call-back reporting and SDBG?
I thought any code checking would have required at least /check.
I have not found /debug to impose a noticeable run-time penalty, which I took to mean there are minimal run-time checks associated with /debug.
My approach is to use /debug for the bulk of the code, then use /opt for the few areas of code (typically in static libraries) where the bulk of the computation takes place. Mixing the compilation options through .obj files turns off the extended memory access.
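For example, with hypothetical file names (SLINK syntax from memory, so check it against the documentation):

Code:
rem Compile the bulk of the code with /debug for line numbers,
rem the compute-intensive routines with /opt, then link the
rem mixed .obj files into one executable.
ftn95 main.f95 /debug
ftn95 solver.f95 /opt
slink main.obj solver.obj -file:program.exe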
It would still be good to have the extended memory addressing available, at least with the /debug option. I'm not sure why the restriction applies.
John
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8210 Location: Salford, UK
Posted: Fri Dec 02, 2011 2:03 pm Post subject: |
|
|
I have logged this for investigation. I cannot do anything right now, but hopefully soon.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Tue Dec 06, 2011 8:28 am Post subject: |
|
|
Paul,
I note you are unable to do anything right now, but I am puzzled as to why /debug should restrict ALLOCATE access to memory above 2GB.
My understanding of /debug is that it only provides an index structure for source files and code line numbers.
Access to memory above 2GB is not as good as 64-bit, but it can extend the life of 32-bit access to ClearWin+ graphics.
John
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8210 Location: Salford, UK
Posted: Tue Dec 06, 2011 9:30 am Post subject: |
|
|
Apparently /debug does restrict the memory access to 2GB, perhaps for no good reason, but I need to work through the relevant code before I can comment.
Robert

Joined: 29 Nov 2006 Posts: 457 Location: Manchester
Posted: Tue Dec 13, 2011 6:53 pm Post subject: |
|
|
One reason would be the top-bit NULL pointer checking. I must admit I didn't realise it restricted memory in /debug.
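In other words, in a signed 32-bit word every address at or above 2GB has its top bit set and therefore compares as negative, so a "top bit set means bad pointer" test rejects all of that memory. A small illustration (assumed logic, not the actual FTN95 check):

Code:
! Addresses at or above 2GB (Z'80000000') have the sign bit set in a
! 32-bit word, so a signed "negative means invalid" test rejects them.
program top_bit
  implicit none
  integer*4 :: addr_low, addr_high
  data addr_low  /Z'7FFFFFFF'/   ! last address below 2GB
  data addr_high /Z'80000000'/   ! first address at 2GB
  if (addr_low  < 0) write (*,*) 'below 2GB flagged as invalid'
  if (addr_high < 0) write (*,*) 'at/above 2GB flagged as invalid'  ! this prints
end program top_bit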
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Thu Dec 15, 2011 12:31 am Post subject: |
|
|
Robert,
/debug restricting memory was a problem with /3gb too, although I did not find /3gb very useful.
Now, with Windows x64, there is access to an extra 2GB of memory using FTN95, which allows me to develop 64-bit-scale solutions with FTN95, but limited to a 2GB ALLOCATE pool. As I much prefer the code checking of FTN95 to other compilers, I cannot use /check or /debug (in any .obj or library) when scaling up above 2GB. It would be good to at least have line number reports while developing this code.
For my applications, big memory problems typically involve only one large array, so it is the maximum ALLOCATE array size, not the total memory, that is important. /3gb did not increase the maximum array size. I had considered trying to use two ALLOCATE arrays with FTN95 and /3gb, above and below the 2GB address, but saw little future in that approach.
I would also recommend an x64 OS for FTN95 and other 32-bit applications, as the better memory management and disk buffering improve performance.
Thank you for your help.
John
DanRRight
Joined: 10 Mar 2008 Posts: 2923 Location: South Pole, Antarctica
Posted: Mon Dec 19, 2011 8:17 pm Post subject: |
|
|
Well, this has been the most painful problem with FTN95 for me for almost a decade. I actually stopped active code development because of it, pondering whether I have to switch compilers, change methods, or start a new code by completely rebuilding the old one (which may take another decade).
I expected this team to be the first to embrace 64-bit, as they were the first to make a 32-bit Fortran compiler that could virtually allocate all 2GB back in the '80s, when the maximum RAM in a PC was limited to the notorious 640K. Well... now even the free G77 is a 64-bit compiler, while PCs can take 50GB of RAM. Not to mention the hopes for something like the older "virtual common", to allocate 4 billion times more RAM than these 4GB in a snap.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Wed Dec 21, 2011 12:47 am Post subject: |
|
|
Dan,
I have looked at the Intel and Portland 64-bit compilers and neither of them allows COMMON above 2GB. I think the limit may be that Microsoft does not allow statically defined arrays above 2GB in the linking facility it provides for 64-bit .exe files.
The only way I can get more than 2GB is via ALLOCATE, which I have used successfully. It has required rewriting my memory management approach. I have developed a scheme that uses a memory pool sized to the physical memory available, as the Windows virtual memory approach does not work very well. Once you run out of memory, 32-bit is as good as 64-bit.
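The pool idea, in a much simplified sketch (names are hypothetical, and the physical-memory figure would come from an OS query that I have left out):

Code:
! Sketch of a memory pool: one big ALLOCATE sized from physical RAM,
! with slices handed out by offset instead of many small ALLOCATEs.
module mem_pool
  implicit none
  real*8, allocatable :: pool(:)
  integer :: next_free = 1
contains
  subroutine pool_init(phys_mb)
    integer, intent(in) :: phys_mb        ! physical RAM in MB, from an OS query
    allocate (pool((phys_mb/2)*131072))   ! use half of RAM, as 8-byte words
  end subroutine pool_init
  function pool_get(nwords) result(ipos)  ! offset of the next free slice
    integer, intent(in) :: nwords
    integer :: ipos
    ipos = next_free
    next_free = next_free + nwords
  end function pool_get
end module mem_pool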
I've also found that vector instructions can be a significant factor in improved run-time performance compared with FTN95. It is interesting to see the focus Intel has on the Polyhedron benchmark set, which appears to cover a fairly narrow range of Fortran computation.
John
DanRRight
Joined: 10 Mar 2008 Posts: 2923 Location: South Pole, Antarctica
Posted: Thu Dec 22, 2011 5:53 pm Post subject: |
|
|
It's like "640K is enough for anyone", LOL. The same mental shezzz over and over again.
Yes, for some generally quite unjustifiable reason 64-bit IVF does not allow static arrays >2GB, only allocatable ones. I hope that if FTN95 ever becomes 64-bit it will not follow them (after all, they have their own C compiler). I'm afraid that allocating and deallocating huge (sparse, by the way) arrays a million times per run in my code will slow it down tremendously.
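The best workaround I can see would be a grow-only work array that is reallocated only when a request exceeds its current size, so most of the ALLOCATE/DEALLOCATE traffic disappears. A sketch of that idea (not anyone's actual code):

Code:
! Grow-only reuse: keep one work array alive and reallocate only
! when a request exceeds its current size.
module work_space
  implicit none
  real*8, allocatable :: work(:)
contains
  subroutine need(n)                ! ensure WORK holds at least n elements
    integer, intent(in) :: n
    if (allocated(work)) then
      if (size(work) >= n) return   ! reuse the existing allocation
      deallocate (work)
    end if
    allocate (work(n))
  end subroutine need
end module work_space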
Sebastian
Joined: 20 Feb 2008 Posts: 177
Posted: Fri Feb 03, 2012 10:37 am Post subject: |
|
|
Quote: "Once you run out of memory, 32bit is as good as 64-bit."
No it's not, since standard 64-bit PCs have at *least* 8GB (workstations here have 24GB by default), which is significantly more than what 32-bit systems (FTN95) can use.
But the official statements regarding 64-bit support are plain depressing.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Fri Feb 03, 2012 11:44 am Post subject: |
|
|
I can't recall where I said that, but what I meant was that when my 64-bit program requires more memory than is installed, it goes to a disk-based solution. In that case both the 32-bit and 64-bit solutions have similar solution times. For the FE problems I solve, 1.5GB of addressable memory is adequate for my out-of-core solution.
Recently I have found that running programs requiring 2GB of memory, with 5GB used for disk caching, provides good buffering of disk I/O, with performance similar to the 64-bit solver.
My recent conclusion is that the 32-bit solver still has some life left.
My view of this has changed over the years.
In 2002, with only 2GB of physical memory, there was no significant amount of memory available for disk caching.
By about 2006, with processor improvement outpacing disk I/O, I did a lot of work to minimise disk I/O, which identified the attraction of 64-bit solutions.
Now, in 2012, improved disk caching and SSDs mean that 32-bit approaches are more effective. 64-bit still has an advantage: for a new type of analysis, it is easier to write an in-core 64-bit solution than to develop a 32-bit out-of-core solution.
I find that controlling the "out-of-core" solution myself works much better than OS paging, although I have yet to try a 64-bit paged solution using an SSD.
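The pattern of the controlled out-of-core solution is roughly as follows (a much simplified sketch: the file name and block size are hypothetical, and the units of RECL are compiler-dependent, with bytes assumed here):

Code:
! Sketch of explicit out-of-core paging: hold one block in memory and
! page the rest through a direct-access scratch file, rather than
! leaving the traffic to the OS virtual memory manager.
program out_of_core
  implicit none
  integer, parameter :: nblk = 1000000   ! 8-byte words per block (hypothetical)
  integer, parameter :: nrec = 64        ! number of blocks on disk
  real*8 :: block(nblk)
  integer :: irec
  open (unit=10, file='matrix.tmp', access='direct', &
        form='unformatted', recl=8*nblk, status='old')
  do irec = 1, nrec
    read (10, rec=irec) block            ! fetch one block
    call process(block, nblk)            ! hypothetical compute step
    write (10, rec=irec) block           ! write it back
  end do
  close (10)
contains
  subroutine process(a, n)               ! placeholder for the real work
    integer, intent(in) :: n
    real*8, intent(inout) :: a(n)
    a = 2.0d0*a
  end subroutine process
end program out_of_core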
I also think there needs to be a new definition of the 64-bit memory implementation for linking. Why can't we have COMMON > 2GB?
John
DanRRight
Joined: 10 Mar 2008 Posts: 2923 Location: South Pole, Antarctica
Posted: Mon Feb 06, 2012 8:28 pm Post subject: |
|
|
"Why can't we have COMMON > 2gb?"
Apple patented it
As to using SSD -- my advice is to use RAMdisk (or RAMdrive) instead with 64bit Windows. That's 10 times faster (6GB/sec) and allows to get a lot of RAM dedicated to RAMdrive. With 8GB SIMs on 6 memory slot motherboard it's 48GB. I use and like most QSoft RAMdisk which is speed king and is free, it's so great (i plan to pay the author anyway, when he will implement delayed write to backup almost in realtime RAMdrive to harddrive without RAMdrive's speed drop. Yes, you do not lose your RAMdrive content when you reboot computer and there is very little chance you lose anything besides the current write stream when computer crashes)
There also exist RAMdisk which allows >4GB allocation for RAMdisk with 32bit Windows but i did not try it. |
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Tue Feb 07, 2012 12:22 am Post subject: |
|
|
Dan,
Thanks very much for the advice. I am about to get an SSD and will be able to test this option.
I also looked for the QSoft RAMdisk, but was directed to a download site that McAfee did not like.
I am interested in understanding the relative read and write speeds of these options.
Windows 7 (and, to a lesser extent, XP) appears to provide good disk caching if you don't demand too much memory for your program, which is a simple option.
Last year I tested one of the cheap netbooks with an SSD, but its disk I/O performance was extremely bad. I am not sure why; I assumed the bandwidth of the disk I/O service was the problem.
We continue to learn!
John