|
forums.silverfrost.com Welcome to the Silverfrost forums
|
Author |
Message |
narayanamoorthy_k
Joined: 19 Jun 2014 Posts: 142 Location: Chennai, IN
|
Posted: Thu Mar 17, 2016 6:24 pm Post subject: Sparse Matrix Solution tools for matrix inversion |
|
|
Hi Paul,
Greetings. Are there any library tools or functions available with FTN95 for dealing with sparse matrices, for example for efficiently storing and retrieving the elements and computing the matrix inverse? This matters when we deal with a large matrix, say 1000x1000, where about 80% of the entries are zero, yet the inverse of such a matrix is full.
An efficient computational model for storing and retrieving the matrix elements and computing the inverse is therefore needed. Do you have any suggestions? Please let me know. _________________ Thanks and Regards
Moorthy |
|
|
|
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2388 Location: Yateley, Hants, UK
|
Posted: Thu Mar 17, 2016 6:49 pm Post subject: |
|
|
I suggest that you look in a book on the Finite Element method. Zienkiewicz's book contains a lot of references you could follow, as what you describe is central to the method. Once, I found his book, presumably pirated, online as a PDF. |
|
|
|
mecej4
Joined: 31 Oct 2006 Posts: 1886
|
Posted: Fri Mar 18, 2016 12:39 am Post subject: Re: Sparse Matrix Solution tools for matrix inversion |
|
|
A compiler vendor is seldom the source of numerical-methods libraries. FTN95 does not even provide pre-built BLAS and Lapack routines.
The usual answer to the question "how do I compute the inverse of a ... matrix" is "don't!". With few exceptions, the user only needs to solve a system of simultaneous linear equations, and in that case only the inexperienced insist on forming the inverse explicitly.
Search the Web for SuiteSparse, SuperLU, Mumps, Umfpack, Pardiso, DSS and WSMP.
The NA (Numerical Analysis) library packages from IMSL, NAG and HSL, among others, contain routines for working with sparse matrices. |
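mecej4's point, that one should solve the equations rather than form the inverse, can be sketched in a few lines. This is an illustrative sketch in plain Python (not FTN95 code, and the routine name is my own); production code would instead call a tuned library routine such as LAPACK's dgesv or one of the sparse solvers named above:

```python
# Illustrative sketch: solving A x = b directly is cheaper and more
# accurate than forming inv(A) and multiplying by it.  Plain Python,
# Gaussian elimination with partial pivoting.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies of the caller's data
    x = b[:]
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    for k in range(n - 1, -1, -1):   # back-substitution
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (x[k] - s) / A[k][k]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [5.0, 5.0, 3.0]
print(solve(A, b))   # close to [1.0, 1.0, 1.0]
```

Factorizing and back-substituting costs roughly a third of the arithmetic of forming the inverse, and for a sparse matrix the triangular factors can retain much of the sparsity, whereas the explicit inverse is generally full.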
|
|
|
narayanamoorthy_k
Joined: 19 Jun 2014 Posts: 142 Location: Chennai, IN
|
Posted: Fri Mar 18, 2016 7:49 am Post subject: Re: |
|
|
LitusSaxonicum wrote: | I suggest that you look in a book on the Finite Element method. Zienkiewicz's book contains a lot of references you could follow, as what you describe is central to the method. Once, I found his book, presumably pirated, online as a PDF. |
Thanks. Could you share the URL where the PDF can be downloaded? That would help. _________________ Thanks and Regards
Moorthy |
|
|
|
narayanamoorthy_k
Joined: 19 Jun 2014 Posts: 142 Location: Chennai, IN
|
Posted: Fri Mar 18, 2016 7:57 am Post subject: Re: Sparse Matrix Solution tools for matrix inversion |
|
|
mecej4 wrote: | A compiler vendor is seldom the source of numerical-methods libraries. FTN95 does not even provide pre-built BLAS and Lapack routines.
The usual answer to the question "how do I compute the inverse of a ... matrix" is "don't!". With few exceptions, the user only needs to solve a system of simultaneous linear equations, and in that case only the inexperienced insist on forming the inverse explicitly.
Search the Web for SuiteSparse, SuperLU, Mumps, Umfpack, Pardiso, DSS and WSMP.
The NA (Numerical Analysis) library packages from IMSL, NAG and HSL, among others, contain routines for working with sparse matrices. |
I agree, mecej4, and I am even prepared to develop some functions myself to handle it. But there are any number of methods available on the Web, so I am not sure which is best. I remember going through bi-factorisation methods at college about 25 years ago, but there will have been many advances in methodology since then, particularly from the reliability perspective. I had assumed that modern compilers would handle large, voluminous, sparse matrix data with built-in Fortran library functions, hence my trying to get help from the FTN95 compiler experts.
Thanks for your directions, mecej4. They are useful. _________________ Thanks and Regards
Moorthy |
|
|
|
PaulLaidler Site Admin
Joined: 21 Feb 2005 Posts: 7925 Location: Salford, UK
|
Posted: Fri Mar 18, 2016 8:22 am Post subject: |
|
|
Adding BLAS and Lapack routines to the FTN95 release is on my wish list with a fairly high priority. |
|
|
|
JohnCampbell
Joined: 16 Feb 2006 Posts: 2554 Location: Sydney
|
Posted: Fri Mar 18, 2016 9:29 am Post subject: |
|
|
Paul,
Please add SSE, and hopefully AVX, dot_product and vector addition (daxpy) routines before adding BLAS and Lapack routines. BLAS and Lapack would work much better with that support.
John |
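For context, the two kernels John asks for are tiny; this plain Python sketch shows what daxpy and a dot product compute. SSE/AVX builds perform the same arithmetic but on several doubles per instruction, which is why tuned BLAS and Lapack depend on them:

```python
# The two BLAS level-1 kernels mentioned above, written out in plain
# Python for illustration.  Vectorised (SSE/AVX) versions compute the
# same result, 2-8 doubles at a time.

def daxpy(a, x, y):
    """y := a*x + y  (the BLAS 'daxpy' operation)."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product sum(x[i]*y[i])  (the BLAS 'ddot' operation)."""
    return sum(xi * yi for xi, yi in zip(x, y))

print(daxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
print(ddot([1.0, 2.0], [3.0, 4.0]))          # 11.0
```

Dense factorizations spend almost all of their time in these two loops, so speeding them up speeds up everything built on top of them.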
|
|
|
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2388 Location: Yateley, Hants, UK
|
Posted: Fri Mar 18, 2016 2:35 pm Post subject: |
|
|
Hi Moorthy,
You'll have to search yourself for pirated material. As for me, I have several editions of the book, all paid for!
Regards
Eddie |
|
|
|
DanRRight
Joined: 10 Mar 2008 Posts: 2816 Location: South Pole, Antarctica
|
Posted: Sat Mar 19, 2016 5:06 am Post subject: |
|
|
Moorthy, even dense matrices of size 1000x1000 are not considered large nowadays; one may even fit into the L2/L3 cache of your processor. What type of sparsity does your matrix have?
As to inverting the matrix, BLAS is probably the best, but everything depends on the type of sparse matrix. For dense matrices, BLAS gives a speedup over the Crout method by a factor of about 1.3, for example. I have a Fortran example by DaveGemini called testfpu, with the Lapack and BLAS sources included, that compares different methods of matrix inversion; try searching for that name, and if you cannot find it, let me know.
If the matrix is larger than that, then parallel methods are recommended. Why not use the extra cores, when even cellphones are now multi-processor? You can see an example of a parallel library's speedup versus some other methods in my post today in another thread in "General"
PaulLaidler wrote: | Adding BLAS and Lapack routines to the FTN95 release is on my wish list with a fairly high priority. | I suspect it will be slow if the optimization work on FTN95 is not finished. Use the DaveGemini example to compare with Intel, which is also supplied, to see what the difference will be. I did that in the past with older versions of FTN95 and the results were not stellar. Unless this serious work has already been done, let's be realistic and not delay the 64-bit compiler even further. Better just to use DLLs of these libraries built by other, optimized compilers, with all their optimizations and SSE/AVX already implemented.
Instead of making a slow BLAS, I suggest adding the LAIPE parallel library, as Lahey did, and leapfrogging all the BLASes by light years. LAIPE exists for all the other compilers besides FTN95, with which it did not compile for some reason that remains unclear to me; the author wrote to me about it after communicating with Silverfrost a few years back. Or at least ask the LAIPE authors to make a DLL in addition to the LIB, and then it will work with any compiler on earth with no problem and no finger movement from Silverfrost's side. Adding SSE to LAIPE would be easy, like we have done here in the forum playing with Gauss methods for solving linear equations; I asked the LAIPE authors to do that, but it seems they are busy. If the library becomes compilable, we will do it ourselves in a day or two.
The advantage of LAIPE over Lapack/BLAS, or the OpenMP that John Campbell uses, is that you write just one line of Fortran code and, boom, your problem is solved in parallel, with no extra effort from the programmer and no learning curve, very much in the spirit of this compiler's ideology of doing complex things in a single line. For example, how do you solve the linear equations AX=B? You write "Call GaussMethod(A,X,B)" and get the solution in X. With LAIPE you would write "Call LaipeGaussMethod(A,X,B)", and those five letters are the only difference. |
|
|
|
narayanamoorthy_k
Joined: 19 Jun 2014 Posts: 142 Location: Chennai, IN
|
Posted: Tue Mar 22, 2016 9:47 am Post subject: Re: |
|
|
DanRRight wrote: | Moorthy, even dense matrices of size 1000x1000 are not considered large nowadays; one may even fit into the L2/L3 cache of your processor. What type of sparsity does your matrix have?
As to inverting the matrix, BLAS is probably the best, but everything depends on the type of sparse matrix. For dense matrices, BLAS gives a speedup over the Crout method by a factor of about 1.3, for example. I have a Fortran example by DaveGemini called testfpu, with the Lapack and BLAS sources included, that compares different methods of matrix inversion; try searching for that name, and if you cannot find it, let me know.
If the matrix is larger than that, then parallel methods are recommended. Why not use the extra cores, when even cellphones are now multi-processor? You can see an example of a parallel library's speedup versus some other methods in my post today in another thread in "General"
|
Hi Dan
Thanks for the detailed reply. Please send me testfpu; I could not download testfpu.f90 because DaveGemini's FTP site is not reachable. I will try it out.
In my case the matrix is sparse in the sense that only about 5% of the elements are non-zero, including the non-zero diagonal elements. Hence the sparse structure needs to be exploited in both storage and processing. In that situation, LU decomposition by triangular factorization, optimal ordering and direct solution methods make it possible to solve for the vector x without losing the advantage of sparsity. Otherwise, inverting the matrix in the conventional way produces a coefficient matrix with 100% of its elements filled in, which causes heavy performance deterioration and time delays. Faster methods based on the above techniques can speed up the computation without compromising the performance or accuracy of the results.
This is what I am looking at. I believe testfpu follows the conventional approach. Do you have any suggestions in the above directions, i.e. Fortran codes or methods that would help?
As you rightly said, parallel computation methods are very much needed for cases like the one I stated above; exploiting the cores and caches would be very useful. From your figures, LAIPE is the better approach to parallelising these computations, but it is not in a usable condition for matrix inversion here.
Your discussion is very useful. Thanks _________________ Thanks and Regards
Moorthy |
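The efficient storing and retrieving of matrix elements that Moorthy describes is usually done with compressed sparse row (CSR) storage: keep only the nonzero values, their column indices, and one pointer per row into those arrays. A minimal Python sketch for illustration (the function names are my own, not from any particular library):

```python
# Compressed Sparse Row (CSR) storage: for a matrix that is ~95% zero,
# store only the nonzero values, their column indices, and a row
# pointer array, instead of the full dense array.

def to_csr(dense):
    """Convert a dense list-of-lists matrix to (values, cols, rowptr)."""
    values, cols, rowptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                cols.append(j)
        rowptr.append(len(values))   # one pointer per finished row
    return values, cols, rowptr

def csr_matvec(values, cols, rowptr, x):
    """y = A @ x, touching only the stored nonzeros."""
    y = []
    for i in range(len(rowptr) - 1):
        y.append(sum(values[k] * x[cols[k]]
                     for k in range(rowptr[i], rowptr[i + 1])))
    return y

A = [[4.0, 0.0, 0.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 2.0]]
vals, cols, ptr = to_csr(A)
print(vals, cols, ptr)   # [4.0, 3.0, 1.0, 2.0] [0, 1, 2, 2] [0, 1, 3, 4]
print(csr_matvec(vals, cols, ptr, [1.0, 1.0, 1.0]))  # [4.0, 4.0, 2.0]
```

The sparse direct solvers mentioned earlier (SuperLU, Umfpack, Pardiso and so on) all accept the matrix in a format like this and keep the LU factors sparse through fill-reducing orderings.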
|
|
|
narayanamoorthy_k
Joined: 19 Jun 2014 Posts: 142 Location: Chennai, IN
|
Posted: Tue Mar 22, 2016 9:56 am Post subject: Re: |
|
|
LitusSaxonicum wrote: | I suggest that you look in a book on the Finite Element method. Zienkiewicz's book contains a lot of references you could follow, as what you describe is central to the method. Once, I found his book, presumably pirated, online as a PDF. |
Hi Eddie
I am looking at FEM books for electrical power system network analysis. But I could not find the one you mentioned; I can only see the three-volume FEM set on Solid Mechanics, Fluid Dynamics and The Basis, and I am not sure which one deals with this. Please suggest. Thanks _________________ Thanks and Regards
Moorthy |
|
|
|
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2388 Location: Yateley, Hants, UK
|
Posted: Tue Mar 22, 2016 5:18 pm Post subject: |
|
|
Hi Moorthy,
Your electrical networks are analogous to the finite element systems dealt with by Zienkiewicz, with components corresponding to elements and connections to nodes. In the early days of FEM, computers were small, and solution schemes were devised that avoid explicitly inverting the matrix. Zienkiewicz gives simple Fortran codes for this kind of solution, but you would need to work out the analogies with your own problem.
I will look to see what I have in my library.
Eddie |
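Eddie's analogy can be made concrete: in both FEM and network analysis, each element (component) between two nodes scatters a small block into the global matrix, which stays sparse because each component only touches its own nodes. A hypothetical resistor-network sketch in Python (names and values are invented for illustration):

```python
# Sketch of the FEM/network analogy: each component between two nodes
# contributes a 2x2 block to the global (nodal conductance) matrix.
# The result is sparse because each component touches only two nodes.

def assemble(n_nodes, components):
    """components: list of (node_i, node_j, conductance) tuples."""
    G = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, g in components:
        # Scatter the element's 2x2 contribution into the global matrix.
        G[i][i] += g
        G[j][j] += g
        G[i][j] -= g
        G[j][i] -= g
    return G

# Three conductances connecting three nodes:
G = assemble(3, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 1.0)])
print(G)
```

For a large network, most node pairs have no component between them, so most entries of G stay zero, which is exactly the structure the sparse solvers in this thread exploit.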
|
|
|
DanRRight
Joined: 10 Mar 2008 Posts: 2816 Location: South Pole, Antarctica
|
Posted: Thu Mar 24, 2016 11:27 pm Post subject: |
|
|
PM me your email address and I will send the source codes |
|
|
|
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2388 Location: Yateley, Hants, UK
|
Posted: Fri Mar 25, 2016 4:25 pm Post subject: |
|
|
I see that in the last (3-volume) version of Zienkiewicz's books he says that the computer codes have been moved to a website. I think you could find solution schemes in an older edition (e.g. the 2nd edition, my personal favourite).
You might have success with the codes in Smith & Griffiths, 'Programming the Finite Element Method', now in its 4th edition: get the codes from here: http://inside.mines.edu/~vgriffit/4th_ed/Software/
Eddie |
|
|
|
John-Silver
Joined: 30 Jul 2013 Posts: 1520 Location: Aerospace Valley
|
Posted: Sun Mar 27, 2016 2:08 am Post subject: |
|
|
1. I had a mooch around, as someone who doesn't have a clue what they're doing does, and dropped upon this resource, which may prove interesting/irresistible to the experts in the field (just like strawberries to a donkey?), as well as perhaps helping to answer the original question ......
http://www.netlib.org/utk/people/JackDongarra/la-sw.html
(last updated Apr 2015 - see bottom of page)
__________________________________________________
2. Apart from that, the great Irishman Zenk O'Vitch's book codes relate to a free 'personal' version of the FEAP package (FEAPpv, with the code in the public domain), which can be found here:-
http://www.ce.berkeley.edu/projects/feap/feappv/
(The link given in the 5th edition of the book is no longer active - well, it was in 2000, and things move on.)
Beware, though: it looks like you might have to spend considerable time delving into it to make head or tail of it (no support is offered). Apparently it is geared to academic learning use, so don't expect to have found an all-singing, all-dancing solution (sic). |
|
|
|
|
|
|
Powered by phpBB © 2001, 2005 phpBB Group
|