Hi John,
It just goes to show that there is a huge diversity of needs, and faster computers really would address all of them. It isn't just what we do, it's also how we like to work.
My thoughts were driven by a problem I had to solve in the 1970s - it was a component that suffered tension fractures in service, and it was clear that the angle between 2 faces on this component and the radius between them affected the stress concentration. The problem was to run different combinations to find the optimum. The 32k word computer (ICL 1900 with an 8 Mb disk) I was using would allow me a maximum of 60 isoparametric elements, and I was using just about the maximum number of nodes. I could get one run overnight. Unlike Carl, I could debug the data errors in my deck of cards on a second (but slower) 32k word computer (Elliot 4120 with tapes) in a few minutes - I just couldn't run the analysis there (in this case because the OS would stop every 30 minutes to ask if I wanted to continue, and I couldn't face that for several days; plus BACKSPACE was unreliable, as it stretched the tapes!). I was allowed one (overnight) run every day on the 1900. I knew I wanted at least 6 combinations, and it took over a week to get them done. I wanted enough 'stress concentration' values, one from each run, to draw a graph, because I didn't much trust the absolute values of the individual results, but I was happy to accept that there should be an underlying trend from which I could 'pick a winner'. The real proof came when the components were manufactured (it worked!).
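If it helps to see the shape of that exercise in present-day terms, here is a minimal Fortran sketch of the parameter sweep. Everything in it is made up for illustration - the angles, the radii, and the function, which simply stands in for what was then one overnight FE run; the point is only that the winner comes from the trend across runs, not from trusting any single result.

  program sweep
    implicit none
    integer, parameter :: n = 6
    real :: angle(n)  = [30.0, 35.0, 40.0, 45.0, 50.0, 55.0]   ! made-up angles (degrees)
    real :: radius(n) = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]         ! made-up radii
    real :: scf(n)
    integer :: i, best

    do i = 1, n
       ! each call stands in for one overnight FE run
       scf(i) = stress_concentration_factor(angle(i), radius(i))
    end do

    ! trust the trend across the runs, not any one absolute value
    best = minloc(scf, dim=1)
    print *, 'best combination: angle =', angle(best), '  radius =', radius(best)

  contains

    real function stress_concentration_factor(a, r)
      real, intent(in) :: a, r
      ! hypothetical placeholder: the real value came from the 60-element FE model
      stress_concentration_factor = 1.0 + 0.01*abs(a - 45.0) + 0.5/r
    end function stress_concentration_factor

  end program sweep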
That was a case where multiple computers would have shortened the elapsed time. I agree that sometimes one has to lurch towards the solution one step at a time, and then multiple computers don't help.
The next time I did a similar job, not only was it on a VAX that solved it in a few minutes, but I could look at the results on a Tektronix screen then and there instead of drawing it all out on a slow drum plotter.
I was intrigued by 'Dr Tip' Carl's 3 failed runs before he got one that worked. I found a long time ago that I had 3 kinds of errors: punch card mistakes (today's typos); analysing the wrong problem; or giving the program a set of data it would choke on. I never completely avoided the first until I moved away from creating datafiles in a text editor, but I avoided the second by making sure I had a pictorial representation of whatever I was analysing well before embarking on a lengthy calculation process. Experience with Clearwin-style interfaces and Windows generally makes both of these much simpler, and avoids the halting run-edit-run cycle that I grew up on. The last category sends you back into programming, sometimes for days ...
I now run something on my PC (not FE) that I first developed on the Elliot 4120 in 1973. (I had an Algol-60 version before that, but it wouldn't run on any other computer - that's the beauty of Fortran!). The run times now are simply not measurable in useful units (i.e. <<1 sec). In 1973, however, the run time started out as 3 hours. I got that down to 45 minutes by dint of careful re-coding. Then the run time dropped to less than 6 sec on a CDC 6400, which taught me a lesson about the value of fast hardware. I was astonished later to discover that a very basic early PC was getting on for as fast as a VAX (or at least the percentage of it one was allowed), and now they are significantly faster than the CDC. Perhaps it is only a matter of waiting for the right machine to come along. The best part of 40 years might do it, if you can wait!
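For what it's worth, when a run is too quick to measure, the only trick I know is to repeat it many times and divide. A minimal sketch using the standard SYSTEM_CLOCK intrinsic - the routine being timed is just a made-up stand-in for the old job:

  program timing
    implicit none
    integer, parameter :: nrep = 100000     ! repeat enough times to get a measurable interval
    integer :: count0, count1, rate, i
    real :: elapsed, dummy

    dummy = 0.0
    call system_clock(count0, rate)
    do i = 1, nrep
       dummy = dummy + one_run()            ! one_run() stands in for the 1973 calculation
    end do
    call system_clock(count1)

    elapsed = real(count1 - count0) / real(rate)
    print *, 'time per run (seconds):', elapsed / real(nrep)
    print *, 'checksum (stops the loop being optimised away):', dummy

  contains

    real function one_run()
      integer :: j
      one_run = 0.0
      do j = 1, 1000
         one_run = one_run + sin(real(j))   ! stand-in for the real work
      end do
    end function one_run

  end program timing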
I hadn't included the time-wasting that comes from wrong data, but I was pleased that Carl noticed, in his own work, a pattern related to the one I theorized. Time passes in quanta, and they are of variable size and variable impact. As a result, getting 10% faster (at whatever cost) is excellent for some users, and pointless for others. Ten times faster is a desirable goal for all.
Dan mentions Z-buffering in OpenGL being a standard facility. I'm sure it is, although I've never had the need, nor been able to fathom out the details of OpenGL. Several users on this forum swear by it. However, John, you did say that wasn't your problem, and even your code for that was fast. I agree with Dan that it is very easy to misunderstand where the time is taken. It never occurred to me in the past that it took so long (relatively) to work the line-printer, and while I printed as I progressed, the program was always going to be slow. However, you know that the solution is faster, as we used to say, 'in-core' - and that is a cheap gain if all it takes is being able to address RAM in a machine you've already got, without changing your Fortran code (much).
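For anyone who hasn't met the distinction, here is a minimal sketch of what 'in-core' buys you (nothing to do with John's actual code). The arithmetic is identical in both halves; only where the data lives changes. The matrix size and the scratch file are made up for illustration:

  program in_core_demo
    implicit none
    integer, parameter :: n = 2000
    real, allocatable :: a(:,:)          ! the whole matrix held in RAM ("in-core")
    real :: row(n), s
    integer :: i, lrec

    allocate(a(n, n))
    call random_number(a)

    ! Out-of-core style: the matrix lives in a direct-access scratch file,
    ! and every pass over it means disk traffic.
    inquire(iolength=lrec) row
    open(unit=10, status='scratch', form='unformatted', access='direct', recl=lrec)
    do i = 1, n
       write(10, rec=i) a(i, :)
    end do

    s = 0.0
    do i = 1, n
       read(10, rec=i) row               ! a disk read on every pass
       s = s + sum(row)
    end do
    close(10)
    print *, 'out-of-core sum:', s

    ! In-core style: the same arithmetic, but the data never leaves RAM.
    print *, 'in-core sum:    ', sum(a)

  end program in_core_demo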
RAMDISK is a way of using the code you've got on the machine you've got, substituting the faster RAM that you aren't using for the slower hard disk. For only the cost of the RAMDISK software, whatever that is, it also seems to me to be a cheap gain.
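In practice the change can be as small as this - a sketch only, and 'R:' is just my assumption for whatever drive letter your RAMDISK software presents; the rest of the Fortran is untouched:

  program ramdisk_demo
    implicit none
    ! 'R:\' is an assumption: point it at the drive your RAMDISK software creates.
    character(len=*), parameter :: workdir = 'R:\'      ! previously something like 'C:\work\'
    integer :: u, ios
    open(newunit=u, file=workdir//'solver.tmp', form='unformatted', status='replace', iostat=ios)
    if (ios /= 0) stop 'no RAM disk found - point workdir at the right drive letter'
    write(u) 42.0                                        ! the work file now lives in RAM
    close(u, status='delete')
  end program ramdisk_demo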
Eddie