John-Silver
Joined: 30 Jul 2013 Posts: 1520 Location: Aerospace Valley
Posted: Thu Jul 25, 2019 7:21 pm Post subject:
Ideally, then, the FTN95 error dialog box would better reflect the source of the problem via a modified error message of the type ...
Quote: | Stack overflow ... occurred during an OpenGL call embedded within SALFLIBC64.DLL |
which would at least give a pointer to the problem area ... and which would score brownie points on the Dan-scale of error reporting.
Of course, that would mean adding a level of complexity to the error-message generation. _________________ ''Computers (HAL and MARVIN excepted) are incredibly rigid. They question nothing. Especially input data. Human beings are incredibly trusting of computers and don't check input data. Together cocking up even the simplest calculation ... "
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
Posted: Sat Jul 27, 2019 5:26 am Post subject:
Dan,
I certainly agree that the functionality of the stack is poor, and I would suggest that its design is lazy. The heap can extend across multiple memory locations.
I don't agree with your post, though, as the weakness of the stack has been described frequently and should be avoided.
The solution is to avoid using the stack, mainly by moving large arrays to the heap. This can be done explicitly using ALLOCATE, or by selecting compiler options that send large arrays to the heap. (I am not sure if this is the case with /64?)
Temporary arrays created for array sections and array syntax can also be an issue, which should likewise be avoided with past versions of FTN95.
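A minimal Fortran sketch of the point above (the routine and array names are invented for illustration; the exact stack behaviour depends on the FTN95 version and options in use):

```fortran
! Hedged sketch: "demo", "big_auto" and "big_heap" are invented names.
! An automatic array takes its storage from the stack, so a large n
! risks a stack overflow; an ALLOCATABLE array lives on the heap,
! which can grow across multiple memory blocks.
subroutine demo(n)
  implicit none
  integer, intent(in) :: n
  real :: big_auto(n)               ! automatic array -> stack
  real, allocatable :: big_heap(:)  ! allocatable array -> heap

  allocate (big_heap(n))            ! explicit heap allocation
  big_auto = 0.0
  big_heap = 0.0
  ! ... work with the arrays ...
  deallocate (big_heap)
end subroutine demo
```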
"Further information about 64 bit FTN95" provides some information about this.
John
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 7925 Location: Salford, UK
Posted: Sat Jul 27, 2019 7:16 am Post subject:
John
When you write that "the design of the stack" is lazy, is that directed at Microsoft? In what way might it have been designed differently? I wonder how you formed this opinion.
Paul
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
Posted: Sat Jul 27, 2019 9:10 am Post subject: Re:
PaulLaidler wrote: | When you write that "the design of the stack" is lazy, is that directed at Microsoft? |
Yes, definitely, although I think most operating systems have a similar problem.
PaulLaidler wrote: | In what way might it have been designed differently? |
The stack is limited to a single memory block, while the heap can use multiple memory blocks.
I do not understand why the stack was not designed to be extendable, rather than reporting a stack overflow error. While this error can be caused by the programmer using larger-than-expected array sizes, it is more commonly due to temporary memory allocation initiated by the system. The memory manager should fix this, especially with 64-bit.
I have forgotten how FTN95 manages large temporary arrays being allocated on the heap. Isn't this the default with /64?
I was also trying to point out that this is a well-known problem, which should be managed as follows:
# large local arrays should be defined using ALLOCATE
# large local or temporary arrays should be allocated on the heap.
"Large" needs a definition, which I thought was 20k for FTN95.
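As a hedged illustration of the temporary-array point (names invented; whether a temporary is actually created, and where it is placed, depends on the compiler version and options):

```fortran
! Hedged sketch: "temp_demo" and "takes_array" are invented names.
program temp_demo
  implicit none
  integer, parameter :: n = 1000000
  real :: a(n), b(n)
  a = 1.0
  ! The reversed section below usually forces the compiler to build a
  ! temporary array for the intermediate result:
  b = a(n:1:-1) + a
  ! Passing a strided section to an explicit-shape dummy argument forces
  ! copy-in to a contiguous temporary:
  call takes_array(a(1:n:2), n/2)
  print *, b(1)
contains
  subroutine takes_array(x, m)
    integer, intent(in) :: m
    real, intent(in) :: x(m)   ! explicit-shape dummy: needs contiguous actual
    print *, x(1)
  end subroutine takes_array
end program temp_demo
```

If such temporaries land on the stack rather than the heap, a large n can overflow the stack even though no large array is declared explicitly.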
John
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 7925 Location: Salford, UK
Posted: Sat Jul 27, 2019 12:31 pm Post subject:
John
The announcement for v8.50 http://forums.silverfrost.com/viewtopic.php?t=3994 includes Quote: | The size of the FTN95 stack is effectively no longer limited. |
It would be interesting to test this to see how far you can go.
The instructions in the help file include information on what to do with very large local arrays, but this will need to be revised in the light of the above announcement.
DanRRight
Joined: 10 Mar 2008 Posts: 2816 Location: South Pole, Antarctica
Posted: Sat Jul 27, 2019 12:36 pm Post subject:
Paul,
When using /stack=size, are the sizes in 32-bit and 64-bit measured differently? The size in 32 bits is certainly in KB, but you are saying that in 64 bits it is in MB?
Do I understand correctly that the ini-file addition to SLINK was done for convenience? Say, if you do a quick compilation like FTN95 prog.f95 /link, you will be able to compile it even if a large stack is needed? (Compilation will not go through if you do it this way: FTN95 prog.f95 /link /stack=1024.) Otherwise there is no problem adding /stack=value to a BAT file and forgetting about it.
John,
Declaring fixed arrays and avoiding allocation/deallocation is done purely out of laziness. Formally, with 64 bits we have what looks like an infinite ocean of RAM space. Expecting that near-infinity, you try to run the code and abruptly step on a rake at 1/100 of what you have, even below the 32-bit limit. The brain simply refuses to accept the existence of such a small stack limit with 64 bits, and you keep expecting that this year, or next year at the latest, the stack concept will be declared obsolete. If somebody likes cars with a manual shift, fine, take them and drive them, but most people just do not want to deal with something that no one knows the purpose of.
Last edited by DanRRight on Tue Jul 30, 2019 7:39 pm; edited 1 time in total
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 7925 Location: Salford, UK
Posted: Sat Jul 27, 2019 1:49 pm Post subject:
Dan
"stack_size" takes a hex value for the number of bytes.
"stack" takes a decimal value which SLINK64 multiplies by one million to get the number of bytes.
The default value is 32MB, but this can be set to some other value using Slink64.ini. This is for users who don't want to use "stack_size" or "stack" but would prefer a larger default for the stack-size reserve.
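A quick sketch contrasting the two units (values only; the option spellings /stack= and /stack_size= are taken from this thread, and the exact syntax may vary between releases):

```fortran
! Hedged illustration of the byte counts the two options imply:
!   /stack=64           -> decimal, multiplied by one million by SLINK64
!   /stack_size=4000000 -> hex byte count (Z'4000000')
program stack_units
  implicit none
  print *, 64 * 1000000      ! /stack=64           -> 64000000 bytes
  print *, 4 * 16**6         ! /stack_size=4000000 -> 67108864 bytes
end program stack_units
```

Note that the two conventions give different byte counts for superficially similar numbers, which is easy to trip over.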
I tested this by running our test suite with a default reserve of 16GB (the size of my RAM) and it did not complain, although very large local arrays are not currently included in our tests.
This means that having a very large reserve does not appear to cause problems. The operating system will commit memory from this reserve on demand, but otherwise the memory remains available for other purposes.
I don't know what will happen when it comes to creating very large local arrays, but I recommend having a fire extinguisher to hand.
DanRRight
Joined: 10 Mar 2008 Posts: 2816 Location: South Pole, Antarctica
Posted: Thu Jan 16, 2020 7:38 am Post subject:
I think the error with which I started this thread was due either to a bug in a previous build of Windows or, specifically, to its incorrect handling of resources. A conflict with the NVIDIA driver is also possible, of course.
Since then I have only updated Windows, first to 1903 and then to the current build, and have not seen stack problems. But sometimes the computer graphics froze if the demand for memory was too high while plotting in OpenGL. The screen went black for a minute but mostly recovered, losing resolution but without bringing Windows to its knees.
After I switched Windows from automatic pagefile handling to a manual size as high as 200 GB (devoting a fast SSD to it), there is no problem plotting even 600 million rectangles in OpenGL, and 3-5 high-RAM-demand programs can stay resident simultaneously without needing to be shut down to free memory. Previously I thought the crashes could also be due to the relatively small addressable RAM limit of the processor itself, but no, everything works fine with such high RAM demands, as if only the sky is the limit.
And there is no longer any problem allocating an array of any size for lack of memory.
John-Silver
Joined: 30 Jul 2013 Posts: 1520 Location: Aerospace Valley
Posted: Mon Jan 20, 2020 5:15 pm Post subject:
Quote: | After I manually set Windows from automatic handling pagefile to manual size as high as 200 GB (devoting to it fast SSD) there is no problem to plot in OpenGL |
Is this the 'trick' to 'activate' the technique of using an SSD as quick 'RAM' that I kept seeing bandied about when they first started commercialising SSDs, but which I never understood?
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
Posted: Tue Jan 21, 2020 2:59 am Post subject:
Quote: | is this the 'trick' to 'activate' the technique to use an SSD as quick 'RAM' |
John, I am not sure if this is the case. I have never had virtual-memory usage that I would describe as anything but "too slow".
I am surprised by Dan's description, but it may be that the newer type of SSD interface being used provides a faster rate. Rates of 1 GByte/sec are fairly fast, while a 20-second apparent stall while virtual memory is activated is not.
For this reason, I don't use virtual memory but rely on 32 GBytes of DDR4 memory, probably soon 64 GB. Dan's reported large memory usage is very impressive, but it would need contiguous/sequential accessing, otherwise there would be significant memory-page access delays, even with DDR4.
At the high transfer rates being achieved, there are not many processor cycles available to use this data in calculations that can keep up with these memory rates.
John-Silver
Joined: 30 Jul 2013 Posts: 1520 Location: Aerospace Valley
Posted: Wed Jan 22, 2020 4:17 pm Post subject:
Quote: | I don't use virtual memory but rely on 32Gbytes of DDR4 memory |
Lucky you!
But the whole world at the moment seems to be going barmy on upgrading computer specs, and in many cases it can only lead to tears, imo.
Last year I looked at the requirements to run this new-fangled MSc Apex wotsit (a poor man's MSc/XL replacement, lmao, if you know what I'm talking about). Anyway, I came across a table of hardware requirements which listed a number of compatible graphics cards to run it, without really summarising the necessary baseline characteristics so you could see whether any other graphics card would work. In other words: use one of this limited list or we won't guarantee it'll work. Fine if you're a one-off user, but larger companies are not only cash-strapped but technology-strapped too.
Just like the article I saw last week which 'reminded' us that Windows 7 support finished as of last Tuesday.
And the whole world and his mother are blabbing that they recommend updating to Windows 10 immediately or risk serious hacking/phishing of your accounts. The UK government is even recommending that NO ONE use a Windows 7 computer for online banking! What do they know that they're not telling us? Anyway, the point is that MS is pulling out all the stops to sell new computers with Windows 10, and conning everyone on board to help them out.
What could possibly go wrong?
Apart from people generating more numbers than they actually need, that is. I'm sure there are some valid applications (accurate weather forecasting, anyone?), but the vast majority don't need them.