Salford FTN95 run time performance
LitusSaxonicum



Joined: 23 Aug 2005
Posts: 2388
Location: Yateley, Hants, UK

Posted: Sun Jan 23, 2011 4:55 pm

Hi John,

It just goes to show that there is a huge diversity of needs – and really, faster computers would address all of them. It isn't just what we do, it's also how we like to work.

My thoughts were driven by a problem I had to solve in the 1970s – it was a component that suffered tension fractures in service, and it was clear that the angle between two faces on this component and the radius between them affected the stress concentration. The problem was to run different combinations to find the optimum. The 32k-word computer (ICL 1900 with an 8 MB disk) I was using would allow me a maximum of 60 isoparametric elements, and I was using just about the maximum number of nodes. I could get one run overnight. Unlike Carl, I could debug the data errors in my deck of cards on a second (but slower) 32k-word computer (Elliot 4120 with tapes) in a few minutes – I just couldn't run the analysis there (in this case because the OS would stop every 30 minutes to ask if I wanted to continue, and I couldn't face that for several days; plus BACKSPACE was unreliable, as it stretched the tapes!). I was allowed one (overnight) run every day on the 1900. I knew I wanted at least six combinations, and it took over a week to get them done. I wanted enough "stress concentration" values, one from each run, to draw a graph, because I didn't much trust the absolute values of the individual results, but I was happy to accept that there should be an underlying trend from which I could "pick a winner". The real proof came when the components were manufactured (it worked!).

That was a case where multiple computers would have shortened the elapsed time. I agree that sometimes one has to lurch towards the solution step at a time, and then multiple computers don’t help.

The next time I did a similar job, not only was it on a VAX that solved it in a few minutes, but I could look at the results on a Tektronix screen then and there instead of drawing it all out on a slow drum plotter.

I was intrigued by "Dr Tip" Carl's three failed runs before he got one that worked. I found a long time ago that I had three kinds of errors: punch-card mistakes (today's typos); analysing the wrong problem; or giving the program a set of data it would choke on. I never completely avoided the first until I moved away from creating data files in a text editor, but I avoided the second by making sure I had a pictorial representation of whatever I was analysing well before embarking on a lengthy calculation process. Experience with ClearWin+ style interfaces and Windows generally makes both of these much simpler, and avoids the halting run-edit-run cycle that I grew up on. The third category sends you back into programming, sometimes for days ...

I now run something on my PC (not FE) that I first developed on the Elliot 4120 in 1973. (I had an Algol-60 version before that, but it wouldn't run on any other computer – that's the beauty of Fortran!) The run times now are simply not measurable in useful units (i.e. <<1 sec). However, in 1973, the run time started out at 3 hours. I got that down to 45 minutes by dint of careful re-coding. Then the run time dropped to less than 6 seconds on a CDC 6400, which taught me a lesson about the value of fast hardware. I was astonished later to discover that a very basic early PC was getting on for as fast as a VAX (or at least the percentage of it one was allowed), and now PCs are significantly faster than the CDC. Perhaps it is only a matter of waiting for the right machine to come along. The best part of 40 years might do it, if you can wait!

I hadn't included the time-wasting that comes from wrong data, but I was pleased that Carl noticed, in his own work, a pattern related to the one I theorized. Time passes in quanta, and they are of variable size and variable impact. As a result, getting 10% faster (at whatever cost) is excellent for some users, and pointless for others. Ten times faster is a desirable goal for all.

Dan mentions Z-buffering in OpenGL being a standard facility. I’m sure it
JohnHorspool



Joined: 26 Sep 2005
Posts: 270
Location: Gloucestershire UK

Posted: Sun Jan 23, 2011 9:20 pm

"Dan mentions Z-buffering in OpenGL being a standard facility"

- the beauty of that is that it is all done in hardware by the graphics card, fast enough for dynamic displays but requiring only relatively minimal programming effort. Something I really appreciate, since like Eddie I also wrote my first graphics program using a line printer and an ICL 1900 before discovering the "wonders" of the Tektronix terminal connected to a VAX!
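
To give a feel for how little code the hardware Z-buffer needs, here is a minimal sketch. The gl* calls and GL_* constants are standard OpenGL; the Fortran binding and the insert-file name are assumptions (take them from whatever OpenGL interface you link against, e.g. alongside a ClearWin+ OpenGL drawing region), and the window/context setup is assumed to exist already.

Code:
! Minimal hidden-surface setup: let the graphics card's Z-buffer do the work.
! The include name and binding are assumptions - the gl* calls themselves
! are standard OpenGL.
subroutine enable_depth_buffer()
  include <opengl.ins>              ! assumed FTN95-style OpenGL insert file
  call glEnable(GL_DEPTH_TEST)      ! hardware depth test on
  call glDepthFunc(GL_LESS)         ! keep the fragment nearest the eye
end subroutine enable_depth_buffer

subroutine start_frame()
  include <opengl.ins>
  call glClearColor(0.0, 0.0, 0.0, 1.0)
  ! clear the colour and depth planes together before drawing each frame
  call glClear(ior(GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT))
end subroutine start_frame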
JohnCampbell



Joined: 16 Feb 2006
Posts: 2554
Location: Sydney

Posted: Mon Jan 24, 2011 12:41 am

I've got work deadlines, but two quick replies,

Dan, with skyline solvers, the run times are not related to n^3, but to about n^2.3. However, storage is related to about n^2, which can quickly eat into the 64-bit size increase.
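
As a rough illustration of what those exponents mean (a back-of-envelope sketch only; the baseline size, time and storage figures below are invented for the example): doubling n costs roughly 2^2.3, i.e. about 4.9 times the solve time, and about 4 times the profile storage.

Code:
! Back-of-envelope scaling for a skyline solver, using the approximate
! exponents quoted above: solve time ~ n**2.3, profile storage ~ n**2.
! The baseline figures are invented purely for illustration.
program skyline_scaling
  implicit none
  real :: n1, n2, t1, s1, ratio
  n1 = 10000.0            ! baseline number of equations (hypothetical)
  n2 = 20000.0            ! proposed larger model
  t1 = 30.0 * 60.0        ! baseline solve time: say 30 minutes, in seconds
  s1 = 500.0              ! baseline profile storage: say 500 MB
  ratio = n2 / n1
  write(*,'(a,f8.1,a)') 'predicted solve time: ', t1*ratio**2.3/60.0, ' minutes'
  write(*,'(a,f8.1,a)') 'predicted storage:    ', s1*ratio**2.0, ' MB'
end program skyline_scaling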

Eddie, your tale of the 70s with isoparametric elements interests me. Were they 8-node? This was about the time we were learning about modelling crack singularities by moving the mid-side nodes. Without graphics screens, understanding the displacement fields in the elements certainly would have been challenging. Even when I did have a Tektronix, being able to correctly display the distorted fields wasn't much easier. There were a lot of bad rules about distorted elements, based on misunderstandings of the internal element fields.

I have (had) a standard benchmark run I used for a long time. It first ran in 6 days on a Pr1me 400 with a fixed-band solver. I got it down to 30 minutes on a Pr1me 750 with a skyline solver, with 95% of the time being the equation solution. It now runs in less than 0.5 seconds, with the equation solver taking only 0.03 seconds and result printing about 80%. Times have changed! (Couldn't resist a third comment.)

John
LitusSaxonicum



Joined: 23 Aug 2005
Posts: 2388
Location: Yateley, Hants, UK

Posted: Mon Jan 24, 2011 9:26 pm

As so often, I got chopped. I don't need to say that John H swears by it (OpenGL), but it wasn't John C's problem. I can't fathom the instructions for OpenGL, notwithstanding that I have a £1500 Quadro card in my PC that no doubt would run it beautifully.

My 60-element model was run as linear elastic, with 8-node isoparametric elements and 2x2 Gaussian integration, and I was looking for relative stress concentrations, not modelling plasticity or cracks (or even "no-tension", which would have been OK for the material involved). Eventually, the result was confirmed by manufacturing test specimens. All the graphics were done on a single-pen drum plotter! I modelled the whole component initially, then a part of it in more detail.
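
For readers who have not met them: "2x2 Gaussian integration" just means sampling the element integrals at the four points xi, eta = +/- 1/sqrt(3), each with unit weight, which is the rule usually paired with the 8-node isoparametric quadrilateral. A minimal sketch of the sampling loop follows (the shape-function and stiffness work at each point is application-specific and omitted; names are illustrative only):

Code:
! Loop over the 2x2 Gauss points of an 8-node isoparametric quadrilateral.
! Only the sampling pattern is shown; forming the Jacobian, B and the
! element stiffness at each point is application-specific and omitted.
program gauss_2x2_demo
  implicit none
  real, parameter :: g = 0.5773503   ! 1/sqrt(3), the Gauss abscissa
  real, parameter :: w = 1.0         ! each of the four points has weight 1
  real :: xi(2), eta(2)
  integer :: i, j
  xi  = (/ -g, g /)
  eta = (/ -g, g /)
  do i = 1, 2
    do j = 1, 2
      ! at (xi(i), eta(j)): evaluate shape-function derivatives, the Jacobian
      ! and B, then accumulate  w*w*detJ*(B'DB)  into the element stiffness
      write(*,'(a,2f9.5)') 'sampling at xi, eta = ', xi(i), eta(j)
    end do
  end do
end program gauss_2x2_demo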

Eddie