PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8210 Location: Salford, UK
Posted: Sat Jul 27, 2019 7:16 am Post subject:
John
When you write that "the design of the stack" is lazy, is that directed at Microsoft? In what way might it have been designed differently? I wonder how you have formed this opinion.
Paul
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Sat Jul 27, 2019 9:10 am Post subject: Re:
PaulLaidler wrote: | When you write that "the design of the stack" is lazy, is that directed at Microsoft? |
Yes, definitely, although I think most operating systems have a similar problem.
PaulLaidler wrote: | In what way might it have been designed differently? |
The stack is limited to a single memory block, while the heap can use multiple memory blocks.
I do not understand why the stack was not designed to be extendable, rather than reporting a stack overflow error. While this error can be caused by the programmer using larger-than-expected array sizes, it is more commonly due to temporary memory allocation initiated by the system. The memory manager should deal with it, especially with 64-bit.
I have forgotten how FTN95 manages large temporary arrays allocated on the heap. Isn't this the default with /64?
I was also trying to point out that this is a well-known problem, which should be managed as follows:
# large local arrays should be defined using ALLOCATE;
# large local or temporary arrays should be allocated on the heap.
"Large" needs a definition, which I thought was 20k for FTN95. A sketch of the first point is below.
John
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8210 Location: Salford, UK
Posted: Sat Jul 27, 2019 12:31 pm Post subject:
John
The announcement for v8.50 http://forums.silverfrost.com/viewtopic.php?t=3994 includes Quote: | The size of the FTN95 stack is effectively no longer limited. |
It would be interesting to test this to see how far you can go.
The instructions in the help file include information on what to do with very large local arrays, but this will need to be revised in the light of the above announcement.
DanRRight
Joined: 10 Mar 2008 Posts: 2923 Location: South Pole, Antarctica
Posted: Sat Jul 27, 2019 12:36 pm Post subject:
Paul,
Is the size given to /stack=size measured differently in 32-bit and 64-bit?
In 32 bits the size is certainly in KB, but are you saying that in 64 bits it is in MB?
Do I understand correctly that the ini file addition to SLINK was done for convenience? That is, if you do a quick compilation like FTN95 prog.f95 /link, you will still be able to build when a large stack is needed? (The build does not go through if done this way: FTN95 prog.f95 /link /stack=1024.) Otherwise there is no problem adding /stack=value to a BAT file and forgetting about it.
John,
Doing things by declaring fixed arrays and avoiding allocation/deallocation is just laziness. Formally, with 64 bits we have what looks like an infinite ocean of RAM. Yet, expecting that near-infinity when you run the code, you abruptly step on a rake at 1/100 of what you have, and even below the 32-bit limit. The brain just refuses to accept the existence of such a small stack limit with 64 bits, and you keep expecting that this year, or next at the latest, the stack concept will be declared obsolete. If somebody likes cars with a manual shift, fine, take them and drive them, but most people just do not want to deal with something nobody knows what it is there for.
Last edited by DanRRight on Tue Jul 30, 2019 7:39 pm; edited 1 time in total
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 8210 Location: Salford, UK
Posted: Sat Jul 27, 2019 1:49 pm Post subject:
Dan
"stack_size" takes a hex value for the number of bytes.
"stack" takes a decimal value which SLINK64 multiplies by one million to get the number of bytes.
The default value is 32 MB but this can be set to some other value using Slink64.ini. This is for users who don't want to use "stack_size" or "stack" but would prefer a larger default for the stack size reserve.
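As an illustration of the arithmetic only (how the value is actually passed to SLINK64, or placed in Slink64.ini, I leave to the documentation), requesting roughly a 64 MB reserve:

      stack=64             ->  64 x 1,000,000 bytes  =  64,000,000 bytes
      stack_size=3D09000   ->  hex 3D09000 bytes     =  64,000,000 bytes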
I tested it by running our test suite with a default reserve of 16 GB (the size of my RAM) and it did not complain, but very large local arrays are not currently included in our tests.
This means that having a very large reserve does not appear to cause problems. The operating system commits memory from this reserve on demand, but otherwise the memory is still available for other purposes.
I don't know what will happen when it comes to creating very large local arrays, but I recommend having a fire extinguisher to hand.
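For anyone who wants to light that fire, a minimal test might look something like this (the sizes and names are only illustrative, and whether FTN95 /64 keeps the automatic array on the stack or moves it to the heap is exactly what the test would reveal):

      program stack_test
        integer :: n
        n = 500000000                    ! 500 million real*8 values, about 4 GB
        call big_local(n)
      end program stack_test

      subroutine big_local(n)
        integer, intent(in) :: n
        real*8 :: work(n)                ! automatic array: a candidate for the stack
        work(1) = 1.0d0
        work(n) = 2.0d0                  ! touch both ends so the pages really get committed
        print *, work(1) + work(n)
      end subroutine big_local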
DanRRight
Joined: 10 Mar 2008 Posts: 2923 Location: South Pole, Antarctica
Posted: Thu Jan 16, 2020 7:38 am Post subject:
I think the error I started this thread with was due either to a bug in a previous build of Windows or, specifically, to its incorrect handling of resources. A conflict with the NVIDIA driver is also possible, of course.
Since then I have only updated Windows twice, first to 1903 and then to the current build, and have not seen stack problems. But sometimes the graphics froze if the demand for memory was too high while plotting in OpenGL. The screen went black for a minute but mostly recovered, losing resolution, without bringing Windows to its knees.
After I switched Windows from automatic pagefile handling to a manual size as high as 200 GB (devoting a fast SSD to it), there is no problem plotting even 600 million rectangles in OpenGL, and 3-5 RAM-hungry programs can stay resident at the same time without having to shut them down to free memory. Before, I thought the crashes could also be caused by the relatively small addressable RAM limit of the processor itself, but no, everything works fine with such high RAM demands, as if the sky is the only limit.
And there are no longer any failures to allocate an array of any size because there is not enough memory.
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Tue Jan 21, 2020 2:59 am Post subject:
Quote: | is this the 'trick' to 'activate' the technique to use an SSD as quick 'RAM' | John, I am not sure if this is the case. I have never had virtual memory usage that I would describe as anything other than "too slow".
I am surprised by Dan's description, but it may be that the newer type of SSD interface being used provides the faster rate. Rates of 1 GByte/sec are fairly fast; a 20-second apparent stall while virtual memory is activated is not.
For this reason, I don't use virtual memory but rely on 32 GBytes of DDR4 memory, probably soon 64 GB. Dan's reported large memory usage is very impressive, but it would need contiguous/sequential access, otherwise there would be significant memory page access delays, even with DDR4.
At the high transfer rates being achieved, there are not that many processor cycles available per value to perform the calculations that would demand these memory rates.
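As a rough illustration (assuming 8-byte reals and a core clock of about 3 GHz, both just round numbers): 1 GByte/sec is about 125 million values per second, which leaves only around 3,000,000,000 / 125,000,000 = 24 clock cycles per value, so only quite simple per-value work could actually keep demanding that transfer rate.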