forums.silverfrost.com Forum Index
Welcome to the Silverfrost forums

Size of all arrays
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Mon Sep 23, 2019 2:25 am

Dan,

Try the following with a variety of compile options:

32 bit
(default)
/debug
/checkmate

64 bit
/64
/64 /debug
/64 /checkmate

Code:
module nc_def
  integer*4, parameter :: nc = 14000
end module nc_def

  use nc_def
  common /a1/ a, aa(nc,nc), aaa(nc,nc)
  real*4 a, aa, aaa

  a = 1
  pause 1
  call sub
  end

  subroutine sub
  use nc_def
  common /a1/ a, aa(nc,nc), aaa(nc,nc)
  real*4 a, aa, aaa

  print *, a
  aa(:,:) = 2
  aaa(:,:) = 3
  pause 2
  end


This demonstrates that in release mode, COMMON is not given physical memory until its variables are initialised (used), whereas in checkmate mode the memory is allocated at the start.

My 32-bit release and /64 runs show that the memory is not allocated until after pause 1. This can be a useful way to interrogate your program, particularly when comparing working set against commit.

Also, FTN95 /64 supports COMMON blocks larger than 2 GB. Use nc = 30000 and repeat the 64-bit compiles. (Other 64-bit compilers do not support this, nor do they support PAUSE, which is a significant feature of FTN95.)

The following approach works for large COMMON with /64:
Code:
module nc_def
  integer*4, parameter :: nc = 30000
end module nc_def


The following approach fails when adapted to the program above:
Code:
module nc_def
  integer*4, parameter :: nc = 30000
  common /a1/ a, aa(nc,nc), aaa(nc,nc)
  real*4 a, aa, aaa
end module nc_def


John
DanRRight



Joined: 10 Mar 2008
Posts: 2106
Location: South Pole, Antarctica

PostPosted: Mon Sep 23, 2019 4:02 am

Unless I misunderstood you, John, you have just repeated what Bill and I said: arrays that are not used remain hidden from Task Manager until they are used, or they are exposed if the program is compiled with /undef.

And arrays larger than 2 GB in COMMON are already available, which makes things dangerous today. The only thing left to put in the last nail is to allow the stack to exceed 2 GB, as in this example, which does not work yet but which has reportedly already been fixed:
Code:
program aaa
  a = 1
  call sub
contains
  subroutine sub
    real aa(30000,30000)
    aa(:,:) = 2
    print *, a
    pause
  end subroutine
end


That means (returning to our sheep from the start of this thread) that Task Manager is a very confusing tool for hunting down the large arrays in a program. They may show up under one type of compilation (checkmate) and not under another (release) if they are not in use.

Much better would be the /dump /list option which, if Silverfrost modified it a little to list arrays in descending order of size (and also to show the total), would reveal the largest arrays in the program. Currently this option is also very confusing. And though we now understand a bit better where the devilry was hiding, completely hunting it out is still work in progress. Agree?
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Tue Sep 24, 2019 1:49 pm

DanRRight wrote:
Unless I misunderstood you, John, you have just repeated what Bill and I said: arrays that are not used remain hidden from Task Manager until they are used, or they are exposed if the program is compiled with /undef.


Dan,

They (COMMON blocks) are not hidden, as they are reported in "Commit" from the start.
Allocatable arrays (on the heap) are hidden until they are allocated (then shown in Commit), and then when used/initialised (shown in Working Set). The heap simply extends.
Automatic arrays (on the stack) are part of the pre-allocated stack? The stack would be in Commit initially and in Working Set as it is used. I think FTN95 /64 does not send large automatic arrays to the heap.

Using /undef to expose them could be an approach, but not for a production version of the program. (There is a problem when debugging: arrays are allocated differently under /undef, /check, /debug and release mode, which can change how bugs appear.)

Complaining about FTN95 allowing COMMON to be larger than 2GB doesn't look like a good idea to me !

Perhaps /statistics /list could be enhanced to report memory usage for each routine, say the sum of local arrays and the sum of common arrays for each routine. (I am not sure whether this is an easy statistic to report.)

Paul, has there been any thought of reporting /list as a .csv file, as is done with /timing? I know I would use this modification.

A previous compiler I used had options /xref and also /xrefs, the latter excluding COMMON variables that were not referenced. (It might also have identified arguments and declared variables that were not referenced.) Was that FTN77?

John


Last edited by JohnCampbell on Tue Sep 24, 2019 11:00 pm; edited 1 time in total
PaulLaidler
Site Admin


Joined: 21 Feb 2005
Posts: 6076
Location: Salford, UK

PostPosted: Tue Sep 24, 2019 2:09 pm

John

As far as I recall, the work on /TIMING was not done by Silverfrost/Salford Software, so to my knowledge no thought has been given to producing .csv output for /LIST.
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Wed Sep 25, 2019 1:02 pm

Paul,

Whoever did the /timing implementation should be invited back, as I am a fan of that analysis.
A .csv output file is easier to import into Excel, or simply to read, since the comma is an unambiguous field delimiter.

thanks,

John
DanRRight



Joined: 10 Mar 2008
Posts: 2106
Location: South Pole, Antarctica

PostPosted: Wed Sep 25, 2019 3:41 pm

JohnCampbell wrote:
Complaining about FTN95 allowing COMMON to be larger than 2GB doesn't look like a good idea to me !

I am not complaining about this, are you kidding? "Do not have any limits" was my request to Silverfrost when 64-bit was not even in the works! But with such a no-limit option comes a danger: a programmer may have declared huge arrays and have no clue about it.
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Thu Sep 26, 2019 6:13 am

My main problem with large arrays is when I forget to check the installed memory against the memory required. Windows pages a program to virtual memory very poorly when the memory demand exceeds physical memory. Everything just stops!
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Sat Sep 28, 2019 8:22 am

Dan,

This topic of large arrays is an issue I am dealing with at the moment in my consulting work.

Basically, I find large local arrays to be a very poor choice, as they go on the stack, which at some stage will be too small.
I would recommend against putting large arrays on the stack; focusing on the stack size and increasing it is a bad idea.
All large arrays should be ALLOCATEd, placing them on the heap, which is not restricted in size.

I have been running a large multi-threaded program in which I had tried to put thread-private arrays on the thread stack, to minimise memory conflicts between threads.
Unfortunately, as soon as I ran a larger problem, the program simply stopped. No error report, nothing, after 5 hours of running!

By changing these large thread-private local arrays to ALLOCATE, all is now working, and the performance penalty of shifting from local stack to heap is not noticeable.
My thread stacks are each 16 MB or 32 MB. My other heap arrays total about 17 GB, and I thought the thread-private arrays in question grew to about 20 MB per thread, but I was wrong.

My message for 64-bit usage is: if you want a robust program, use ALLOCATE, not local (stack) arrays.
If you are arguing about stack size, you are heading in the wrong direction.

John
DanRRight



Joined: 10 Mar 2008
Posts: 2106
Location: South Pole, Antarctica

PostPosted: Sun Sep 29, 2019 11:39 am

John,
You misunderstood my main principle: with an ideal compiler there should be no more talk about whether allocatable is better than the stack or anything else. I do not like this BS being the subject of any discussion; both have to be unlimited, and I see FTN95 moving in this direction.
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Sun Sep 29, 2019 12:06 pm

Dan,

My understanding is that the stack is a single allocation of memory; this is what Microsoft and the Windows OS provide, not Silverfrost. I can't see that changing, so I don't see it "moving in (your) direction".

Based on the limitations of this design, I recommend not using the stack wherever possible.
mecej4



Joined: 31 Oct 2006
Posts: 1200

PostPosted: Sun Sep 29, 2019 1:08 pm

Someone reading this thread may be startled to see recommendations not to use the stack! Both of them (John and Dan) are talking only about avoiding the allocation of large local arrays on the stack; holding such arrays was never an intended purpose of the stack when microprocessors (or the central processors of mainframes) were designed. There have even been mainframes that were entirely "stack machines".

There are many advantages to keeping scalar and small array variables on the stack, since garbage collection is almost automatic. A stack makes a traceback much easier to create and output when a program runs into trouble. A stack makes recursion easy to program. Stack-relative addressing keeps the code size smaller than it would need to be otherwise.

I don't think for a moment that either JC or DRR want to prohibit such routine uses of the stack.
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Sun Sep 29, 2019 2:09 pm

mecej4,

I have certainly been recommending against putting large arrays on the stack, as I am not aware of a way of using the stack safely in that manner.
I would be interested to know whether my claims about the stack's limitations are wrong, or whether it can be extended at run time.
I certainly use the stack for other purposes, as its limited design intends.

The heap is a much better place for large arrays, using ALLOCATE.

I have forgotten: what compile options are available with FTN95 /64 to help with this?
John-Silver



Joined: 30 Jul 2013
Posts: 1227
Location: Aerospace Valley

PostPosted: Tue Oct 01, 2019 10:00 pm

Quote:
Unfortunately, as soon as I ran a larger problem, the program simply stopped. No error report, nothing, after 5 hours of running!


... and voilà! the problem, my friends ...... people insisting on running damn enormous models that take hours to run, and then spending centuries processing the data, most of which is useless 'noise' anyway!

There should be ways for technical managers to limit the size of the models their engineers create!
FE vendors should be made by law to introduce a parameter limiting the maximum model size any particular user can create, thus stopping all this nonsense at its source.

Technical manager to young engineer ..... 'go and model that, and make sure you get the mesh size around the stress concentration no larger than 5 mm!'
.... said engineer goes away and meshes the whole flippin' model with a 5 mm mesh ....... just because he can ..... q.e.d.

resulting in ....... MILLIONS of damn d.o.f. !!!

(b.t.w. find and watch the film, it really is brill :) )
_________________
''Computers (HAL and MARVIN excepted) are incredibly rigid. They question nothing. Especially input data. Human beings are incredibly trusting of computers and don't check input data. Together cocking up even the simplest calculation ... :) ''
JohnCampbell



Joined: 16 Feb 2006
Posts: 2144
Location: Sydney

PostPosted: Wed Oct 02, 2019 12:22 pm

John S,

A bit off topic from the practical problems of using large arrays, but should I take from your comment that I should create a small model where the displacement field is too coarse to provide reasonable results, and then show that the vibration is not being transmitted?

Fortunately, since I am the modeller, manager and software developer, I don't have the external influences you describe. (It reads as a load of cynical BS to me!)
John-Silver



Joined: 30 Jul 2013
Posts: 1227
Location: Aerospace Valley

PostPosted: Wed Oct 02, 2019 3:41 pm

John C - I was of course talking generally.

It's not off-topic; it was just an alternative way of avoiding the problem of having to manipulate large matrices!

Sort the technique out, and then engineers will start making even bigger models, as I said, 'just because they can, and can run them'. It's human nature.

I don't know what you're analysing exactly, but it sounds like maybe shock? (based on your 'transmission of vibration' comment)
That opens up yet another can of worms regarding the need to make the mesh sufficiently small w.r.t. the wavelength of the shock transmission.
That would then imply the need to mesh finely all over initially, but still with the possibility of coarsening the mesh, and hence reducing the overall size, once the first results are in.

Shock (if it is that) is a very specialised subject indeed.

As is CFD, which is another area where you often don't really know what mesh size is needed, and where, until the first results pop out.
And of course nobody then ever thinks of revising the model mesh (downwards in complexity) in light of those first results, because their manager is on their back .... as indeed he is when they can't get the results out quickly enough because the jobs take so long to run! No, there's no winning really.

I've only ever crossed paths with one bloke who analysed anything shock-related with FE (related to pyro shocks at the Ariane launcher clampband release). Needless to say, he brought the department's computing facility to a standstill (well, it was the early 90s, but he knew how to hack the system parameters to take 100% of the bandwidth for his job - he wasn't a popular man when he was found out :O))

If it's not shock, then I can't see why the mesh needs to be that small all over (whatever 'that' might be).

Out of interest, maybe you could post a picture to illustrate the model you're dealing with?
Page 2 of 2

Powered by phpBB © 2001, 2005 phpBB Group