In response to John Horspool as well as Nigel, here is my suggestion for client-server computing (effectively multi-threading) using FTN95 and Clearwin, not .NET. If the forum truncates my message, I'll post a 'second half'. I originally had FE analysis in mind. My idea was to have several server apps able to (a) produce stiffness matrices, perhaps for 100-element blocks, and (b) perform stress extraction, again perhaps in 100-element blocks. The client in any case orders the elements (perhaps in frontal solution order) for processing. If you reduce your structure into substructures, you could also have server applications that perform the substructure reduction, so that the client eventually only needs to solve an assemblage of substructures. John, you will have to read what follows with FE in mind.
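To make the idea more concrete, here is the sort of thing I imagine one stiffness-matrix server looking like. This is only a sketch of my own: the file names, the record layout, the 4-node elements and the fixed 100-element block size are all assumptions, and the element integration itself is left as a placeholder. The client would write one .job file per block, launch one copy of the server per block (passing the block number on the command line), and then read the .stf files back in its chosen (e.g. frontal) order for assembly.

! stiff_server.f95 - hypothetical worker: reads one block of up to 100
! elements from a job file, forms each element stiffness matrix, and
! writes the results back for the client to assemble.
program stiff_server
  implicit none
  integer, parameter :: maxel = 100          ! elements per block (assumed)
  integer :: iblock, nel, ie, ios
  character(len=32) :: arg, jobfile, resfile
  real :: coords(8,maxel)                    ! 4-node elements, x/y pairs (assumed layout)
  real :: ke(8,8)

  call get_command_argument(1, arg)          ! which block to process
  read (arg,*) iblock
  write (jobfile,'(a,i4.4,a)') 'block', iblock, '.job'
  write (resfile,'(a,i4.4,a)') 'block', iblock, '.stf'

  open (10, file=trim(jobfile), status='old', action='read', iostat=ios)
  if (ios /= 0) stop 'job file not found'
  open (11, file=trim(resfile), status='replace', form='unformatted', action='write')

  read (10,*) nel
  do ie = 1, nel
     read (10,*) coords(:,ie)
     call element_stiffness(coords(:,ie), ke)   ! the real FE work goes here
     write (11) ie, ke                          ! client reads these back in frontal order
  end do
  close (10)
  close (11)

contains

  subroutine element_stiffness(xy, ke)
    real, intent(in)  :: xy(8)
    real, intent(out) :: ke(8,8)
    ke = 0.0      ! placeholder: the actual element integration goes here
  end subroutine element_stiffness

end program stiff_server

The stress-extraction servers would follow the same pattern, just with a different job file layout and a different calculation in the loop.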
Nigel,
I read your posting on the Silverfrost Forum with interest, and was reminded of a comparable debate when CPUs first became clock-doubled (they are now all clock-multiplied). The problem is that the rest of the system stays the same – the RAM, the hard disk, and so on. This reply is too long for a single reply post on the Forum, so I have taken the liberty of communicating directly.
Imagine a café, running smoothly with one cook, one waitress and a few customers. The speed of service is how long it takes to cook a pizza. More cooks (dual-core cooks!) often don't help – they are just idle more of the time. When more customers arrive, the extra cooks come into their own. But two cooks can't cook twice as many pizzas – they have to share the same pizza oven and the same fridge – and they get in each other's way. The single waitress often can't cope with a rush … and in any case, the café isn't used only by paying customers: there is also a health inspector, the VAT inspector, and (assuming the café is in Naples!) a Mafioso checking on activity to see how much profit can be creamed off for protection.
For some situations, the extra cook is useful. In others, what is needed is more waitresses, more tables, a bigger pizza oven and so on.
I find that my dual-core machine helps when my virus checker does a scan – I can keep working, most of the time. Before, I couldn't. But nothing runs faster.
* If your problem is disk access, then as well as a faster CPU, you need either a faster hard disk or hard disks set up in a RAID array (a crude way of checking whether you are disk-bound or CPU-bound is sketched just after this list).
* If your problem is speed of generating complicated graphics, then you need a faster and more expensive graphics card.
* If the problem is RAM speed, you need faster RAM, lower-latency RAM (although this depends on the CPU type), or more cache (ditto).
* If the problem is lack of RAM (all the inspectors taking up tables in the café) – more RAM helps, although a faster hard disk helps too if the virtual memory sends things to disk often.
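As promised in the first bullet, a crude way to see which of these is actually hurting is simply to time the phases separately. The sketch below is only illustrative: the scratch-file name and array size are arbitrary, and for a fair disk test the file would need to be far larger than the operating system's file cache. It uses only the standard timing intrinsics.

! bottleneck_probe.f95 - crude check: is the time going on disk I/O or on
! arithmetic?  A large gap between wall-clock and CPU time in the read
! phase usually means you are waiting on the disk.
program bottleneck_probe
  implicit none
  integer, parameter :: n = 5000000          ! about 20 MB of reals (assumed size)
  real, allocatable :: a(:)
  integer :: c0, c1, rate, i
  real :: t0, t1, wall_read, cpu_read, wall_calc, cpu_calc

  allocate (a(n))
  a = 1.0

  ! make a test file (for a realistic disk test it should be far bigger
  ! than the file cache)
  open (10, file='probe.tmp', form='unformatted', status='replace')
  write (10) a
  close (10)

  ! --- phase 1: read it back, timing wall-clock and CPU time ---
  call system_clock(c0, rate);  call cpu_time(t0)
  open (10, file='probe.tmp', form='unformatted', status='old')
  read (10) a
  close (10)
  call system_clock(c1);  call cpu_time(t1)
  wall_read = real(c1-c0)/real(rate);  cpu_read = t1 - t0

  ! --- phase 2: arithmetic on the same data ---
  call system_clock(c0);  call cpu_time(t0)
  do i = 1, n
     a(i) = sqrt(a(i)) + 0.5
  end do
  call system_clock(c1);  call cpu_time(t1)
  wall_calc = real(c1-c0)/real(rate);  cpu_calc = t1 - t0

  write (*,'(a,2f8.2)') ' read phase: wall, cpu (s) =', wall_read, cpu_read
  write (*,'(a,2f8.2)') ' calc phase: wall, cpu (s) =', wall_calc, cpu_calc
  write (*,*) 'checksum =', sum(a)           ! stops the compiler discarding the loop
end program bottleneck_probe

If the read phase shows wall-clock time far in excess of CPU time, the disk is the culprit; if the calculation phase dominates and the two times are close, it is the processor and RAM.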
In the days before FTN for Windows (i.e. back when it was FTN77 with DBOS), the compiler offered multi-threading, but with a single CPU I could never see the point of it (except for the ease with which some things could be programmed).
I looked on your firm’s website to see what you do - so that I could write sensibly with an example. Here it is.
When your main application senses that it is going to have a busy, computationally intensive set of jobs to do – say, to superimpose a 10 km road layout on a digital ground model – the problem is probably solved in its entirety, and then one displays the part of the job that can be seen. If the problem is the amount of computation on the model, then there is a wait before the scene shows, but the scene renders quickly. If the problem is drawing the screen, then a faster graphics system is probably going to help; but if it is calculating the geometry of what is on the screen, then it is CPU and memory speed again that help most.
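If it helps, the pattern I have in mind looks roughly like the sketch below: do the heavy geometry once, keep it in memory, and on each repaint scan only for the segments that fall inside the current viewport. Everything here is illustrative; compute_whole_model and draw_segment are placeholders of my own invention, the latter standing in for whatever graphics call you actually make (for instance a ClearWin+ drawing routine inside a %gr region).

! display_visible.f95 - sketch: the whole 10 km alignment is computed once
! into (x,y) arrays; repainting the screen is then only a matter of
! scanning for segments that intersect the current viewport.
program display_visible
  implicit none
  integer, parameter :: npts = 100000        ! assumed size of the computed model
  real, allocatable :: x(:), y(:)
  real :: xmin, xmax, ymin, ymax             ! current viewport, in model coordinates
  integer :: i

  allocate (x(npts), y(npts))
  call compute_whole_model(x, y)             ! the expensive part, done once

  ! example viewport; in practice this changes as the user pans and zooms
  xmin = 0.0;  xmax = 500.0;  ymin = -50.0;  ymax = 50.0

  do i = 1, npts-1                           ! the cheap part, repeated on every repaint
     if (max(x(i),x(i+1)) < xmin .or. min(x(i),x(i+1)) > xmax) cycle
     if (max(y(i),y(i+1)) < ymin .or. min(y(i),y(i+1)) > ymax) cycle
     call draw_segment(x(i), y(i), x(i+1), y(i+1))
  end do

contains

  subroutine compute_whole_model(x, y)
    real, intent(out) :: x(:), y(:)
    integer :: i
    do i = 1, size(x)                        ! placeholder geometry only
       x(i) = 0.1*real(i)
       y(i) = 10.0*sin(0.01*real(i))
    end do
  end subroutine compute_whole_model

  subroutine draw_segment(x1, y1, x2, y2)
    real, intent(in) :: x1, y1, x2, y2
    ! stand-in for the real graphics call: map to pixels and draw here
    write (*,'(4f10.2)') x1, y1, x2, y2
  end subroutine draw_segment

end program display_visible

The point of the sketch is only that the expensive work is done once, up front; whether the repaint feels quick then depends on the graphics card and the viewport scan, not on how long the original computation took.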