Silverfrost Forums

Sparse Matrix Solution tools for matrix inversion

17 Mar 2016 5:24 #17313

Hi Paul, greetings. Are there any library tools or functions available with FTN95 for sparse matrix solutions, i.e. for efficiently storing and retrieving a sparse matrix and computing its inverse? This applies when we deal with large matrices, say 1000x1000, where about 80% of the entries are zero, yet the inverse is a full matrix.

An efficient computational approach to storing and retrieving the matrix elements and computing the inverse is therefore needed. Do you have any suggestions? Please let me know.

17 Mar 2016 5:49 #17314

I suggest that you look in a book on the Finite Element method. Zienkiewicz's book contains a lot of references you could follow, as what you describe is central to the method. Once, I found his book, presumably pirated, online as a PDF.

17 Mar 2016 11:39 #17315

A compiler vendor is not often the source of libraries of routines of numerical methods. FTN95 does not even provide pre-built BLAS and Lapack routines.

The usual advice given for the question 'how do I compute the inverse of a ... matrix' is 'don't!'. With few exceptions, the user only needs to solve simultaneous linear equations, and in this case only the inexperienced person insists on forming the inverse explicitly.

Search the Web for SuiteSparse, SuperLU, Mumps, Umfpack, Pardiso, DSS and WSMP.

The NA (Numerical Analysis) library packages from IMSL, NAG and HSL, among others, contain routines for working with sparse matrices.
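To make the "don't form the inverse" advice concrete, here is a minimal self-contained sketch (illustrative only, not taken from any of the libraries above) that solves Ax = b directly by Gaussian elimination with partial pivoting, so no inverse is ever formed:

```fortran
! Sketch: solve A x = b directly instead of computing inv(A).
! Plain Gaussian elimination with partial pivoting; for production
! work a library routine (e.g. LAPACK's dgesv) would be preferred.
program solve_demo
  implicit none
  integer, parameter :: n = 3
  real(kind(1d0)) :: a(n,n), b(n)
  ! A small test system, built so the known solution is x = (1, 2, 3)
  a = reshape([2d0, 1d0, 0d0,  1d0, 3d0, 1d0,  0d0, 1d0, 2d0], [n,n])
  b = matmul(a, [1d0, 2d0, 3d0])
  call gauss_solve(a, b, n)       ! solution overwrites b
  print '(3f8.3)', b
contains
  subroutine gauss_solve(a, b, n)
    integer, intent(in) :: n
    real(kind(1d0)), intent(inout) :: a(n,n), b(n)
    integer :: i, j, k, p
    real(kind(1d0)) :: t
    do k = 1, n-1
      p = maxloc(abs(a(k:n,k)), 1) + k - 1    ! partial pivoting
      if (p /= k) then                        ! swap rows k and p
        do j = 1, n
          t = a(k,j); a(k,j) = a(p,j); a(p,j) = t
        end do
        t = b(k); b(k) = b(p); b(p) = t
      end if
      do i = k+1, n                           ! eliminate below the pivot
        t = a(i,k) / a(k,k)
        a(i,k+1:n) = a(i,k+1:n) - t * a(k,k+1:n)
        b(i) = b(i) - t * b(k)
      end do
    end do
    do i = n, 1, -1                           ! back substitution
      b(i) = (b(i) - dot_product(a(i,i+1:n), b(i+1:n))) / a(i,i)
    end do
  end subroutine gauss_solve
end program solve_demo
```

For a single right-hand side this costs about one third of the work of forming the inverse and then multiplying.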

18 Mar 2016 6:49 #17316

Quoted from LitusSaxonicum I suggest that you look in a book on the Finite Element method. Zienkiewicz's book contains a lot of references you could follow, as what you describe is central to the method. Once, I found his book, presumably pirated, online as a PDF.

Thanks. Can you share the URL for downloading the PDF? That would help.

18 Mar 2016 6:57 #17317

Quoted from mecej4 A compiler vendor is not often the source of libraries of routines of numerical methods. FTN95 does not even provide pre-built BLAS and Lapack routines.

The usual advice given for the question 'how do I compute the inverse of a ... matrix' is 'don't!'. With few exceptions, the user only needs to solve simultaneous linear equations, and in this case only the inexperienced person insists on forming the inverse explicitly.

Search the Web for SuiteSparse, SuperLU, Mumps, Umfpack, Pardiso, DSS and WSMP.

The NA (Numerical Analysis) library packages from IMSL, NAG and HSL, among others, contain routines for working with sparse matrices.

I agree, mecej4, but I am prepared to develop some functions to handle this myself. There are any number of methods described on the Web, so I am not sure which is best. I remember going through bi-factorisation methods at college about 25 years ago, but there will have been many advances in methodology since then, particularly from the reliability perspective. I had also assumed that compilers nowadays handle large, high-dimensional, sparse matrix data with built-in Fortran library functions, hence my attempt to get help from our FTN95 compiler product experts.

Thanks for your directions, mecej4. They are useful.

18 Mar 2016 7:22 #17319

Adding BLAS and Lapack routines to the FTN95 release is on my wish list with a fairly high priority.

18 Mar 2016 8:29 #17320

Paul,

Please add SSE (and hopefully AVX) dot_product and vector addition (daxpy) routines before adding the BLAS and Lapack routines. They would work much better with that support.

John

18 Mar 2016 1:35 #17322

Hi Moorthy,

You'll have to search yourself for pirated material. As for me, I have several editions of the book, all paid for!

Regards

Eddie

19 Mar 2016 4:06 #17325

Moorthy, even dense matrices of size 1000x1000 are not considered large nowadays; one may even fit into the L2/L3 cache of your processor. What type of sparsity does your matrix have?

As to inverting the matrix, BLAS is probably the best option, but everything depends on the type of sparse matrix. For dense matrices, BLAS gives a speedup over, for example, the Crout method by a factor of about 1.3. I have a Fortran example, with the LAPACK and BLAS sources, that exercises different methods of matrix inversion; it was made by DaveGemini and is called testfpu. Try searching for that name and you may find it; if not, let me know.

If the matrix is larger than that, then parallel methods are recommended. Why not use the extra cores? Even cellphones are multi-processor now. You can see an example of a parallel library's speedup versus some other methods in my post today in another thread in 'General'.

Quoted from PaulLaidler Adding BLAS and Lapack routines to the FTN95 release is on my wish list with a fairly high priority.

I suspect it will be slow if the optimisation work on FTN95 is not finished. Use DaveGemini's example to compare against Intel (which is also supplied) and see what the difference is. I did that in the past with older versions of FTN95, and the results were not stellar. Unless this serious work has already been done, let's be realistic and not delay the 64-bit compiler even further. Better to just use DLLs of these libraries built by other optimising compilers, with all their optimisations, SSE and AVX, already implemented.

Instead of making a slow BLAS, I suggest adding the LAIPE parallel library, as Lahey did, and overjumping all the BLASes by light years. LAIPE exists for all compilers besides FTN95, with which it failed to compile for some reason that is still unclear to me; the author wrote to me after communicating with Silverfrost a few years back. Or at least ask the LAIPE authors to make a DLL in addition to the LIB, and then it will work with any compiler on earth with no problem and no finger movement from Silverfrost's side. Adding SSE to LAIPE would be easy, like we have done here in the forum playing with Gauss methods for solving linear equations; I asked the LAIPE authors to do that, but it seems they are busy. If the library becomes compilable, we will do it ourselves in a day or two.

The advantage of LAIPE over Lapack/BLAS, or the OpenMP that John Campbell uses, is that you write just one line of Fortran code and - boooomm - your problem is solved in parallel, with no extra effort from the programmer and no learning curve, much in the spirit of this compiler's ideology of doing complex things in a single line. For example, how do you solve the linear equations AX=B? You write 'Call GaussMethod(A,X,B)' and get the solution in X. With LAIPE you write 'Call LaipeGaussMethod(A,X,B)', and those five letters are the only difference 😃

22 Mar 2016 8:47 #17328

Quoted from DanRRight Moorthy, even dense matrices of size 1000x1000 are not considered large nowadays; one may even fit into the L2/L3 cache of your processor. What type of sparsity does your matrix have?

As to inverting the matrix, BLAS is probably the best option, but everything depends on the type of sparse matrix. For dense matrices, BLAS gives a speedup over, for example, the Crout method by a factor of about 1.3. I have a Fortran example, with the LAPACK and BLAS sources, that exercises different methods of matrix inversion; it was made by DaveGemini and is called testfpu. Try searching for that name and you may find it; if not, let me know.

If the matrix is larger than that, then parallel methods are recommended. Why not use the extra cores? Even cellphones are multi-processor now. You can see an example of a parallel library's speedup versus some other methods in my post today in another thread in 'General'.

Hi Dan

Thanks for the detailed reply. Please send me testfpu; I could not download testfpu.f90 because DaveGemini's FTP site is not reachable. I will try it out.

By a sparse matrix I mean one in which the non-zero elements, including the non-zero diagonal, are only about 5% of the whole matrix. Hence, exploiting the sparse structure in storage and processing is required. In such a situation, to solve for the vector x in

Ax = B

an LU decomposition by triangular factorisation, optimal ordering and direct solution methods would help solve the system without losing the advantage of sparsity. Otherwise, conventional inversion yields a coefficient matrix with 100% of its elements filled in, causing heavy performance deterioration and time delays. Faster methods based on the above techniques can therefore speed up the computation without compromising the performance or accuracy of the results.

This is what I am looking at. I believe testfpu follows the conventional approach. Do you have any suggestions in the above direction, such as Fortran codes or methods that would help?

As you rightly said, parallel computation methods are very much needed in cases like the one I described. Exploiting the cores and caches would be very useful. Going by your statistics, LAIPE is the better approach for exploiting parallel processing in these computations, but it is not in a usable condition for matrix inversion here 😦

Your discussion is very useful. Thanks.
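The storage scheme described above can be sketched in compressed sparse row (CSR) form; the array names here are illustrative, not from any particular package. Only the non-zero entries are stored, together with their column indices and row pointers, and a matrix-vector product touches only those entries:

```fortran
! Sketch of compressed sparse row (CSR) storage: only the non-zero
! entries are kept, plus their column indices and row pointers.
program csr_demo
  implicit none
  ! Example 4x4 matrix with 7 non-zeros:
  !   [ 4 0 0 1 ]
  !   [ 0 3 0 0 ]
  !   [ 0 0 5 2 ]
  !   [ 1 0 0 6 ]
  integer, parameter :: n = 4, nnz = 7
  real(kind(1d0)) :: val(nnz), x(n), y(n)
  integer :: colind(nnz), rowptr(n+1), i, k
  val    = [4d0, 1d0, 3d0, 5d0, 2d0, 1d0, 6d0]
  colind = [1,   4,   2,   3,   4,   1,   4  ]
  rowptr = [1, 3, 4, 6, 8]   ! row i occupies val(rowptr(i):rowptr(i+1)-1)
  x = [1d0, 1d0, 1d0, 1d0]
  ! Sparse matrix-vector product y = A*x: only non-zeros are visited.
  do i = 1, n
    y(i) = 0d0
    do k = rowptr(i), rowptr(i+1)-1
      y(i) = y(i) + val(k) * x(colind(k))
    end do
  end do
  print '(4f8.3)', y
end program csr_demo
```

With 5% density, this brings storage and matrix-vector work down from n*n to roughly 0.05*n*n; the fill-in question then shifts to keeping the L and U factors sparse, which is what the ordering step is for.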

22 Mar 2016 8:56 #17329

Quoted from LitusSaxonicum I suggest that you look in a book on the Finite Element method. Zienkiewicz's book contains a lot of references you could follow, as what you describe is central to the method. Once, I found his book, presumably pirated, online as a PDF.

Hi Eddie, I am looking at the FEM book for electrical power system network analysis, but I could not find the one you mentioned. I can only see the three-volume FEM set: The Basis, Solid Mechanics and Fluid Dynamics. I am not sure which one deals with this. Please suggest. Thanks.

22 Mar 2016 4:18 #17331

Hi Moorthy,

Your electrical networks are analogous to the finite elements dealt with by Zienkiewicz, with components corresponding to elements and connections to nodes. In the origins of FEM, computers were small, and schemes were invented that avoided explicitly inverting the matrix. Zienkiewicz gives simple Fortran codes for this solution, but you would need to work out the analogies with your own problem.

I will look to see what I have in my library.

Eddie

24 Mar 2016 10:27 #17342

PM me your email address and I will send the source codes

25 Mar 2016 3:25 #17343

I see that in the latest (3-volume) edition of Zienkiewicz's books he says that the computer codes have been moved to a website. I think you could find solution schemes in an older edition (e.g. the 2nd edition, my personal favourite).

You might have success with the codes in Smith & Griffiths' 'Programming the Finite Element Method', now in its 4th edition; get the codes from here: http://inside.mines.edu/~vgriffit/4th_ed/Software/

Eddie

28 Mar 2016 6:02 #17347

Thank you all for your reply & suggestions.

29 Mar 2016 3:44 #17359

Quoted from DanRRight PM me your email address and I will send the source codes

Hi Dan, thanks for your reply. I sent my email address by PM yesterday. Did you receive it? I am not sure it arrived, as the message is still in my Outbox and has not been dispatched, so I am attaching my personal email address here: narayanamoorthy_k@yahoo.com

Thanks

30 Mar 2016 11:21 #17370

Moorthy,

You have been requesting information on the solution of linear equations with sparse matrices. The most common approach is the use of a skyline or profile solver, and there are many published examples of this. I suggest googling 'Taylor skyline solver'. Robert Taylor (UCB) provided the solver code in Zienkiewicz's book Ver ? and, I think, for the feap package, which is a widely known source. I first used a skyline solver based on a paper by Graham Powell in 1973, and I am still modifying it today, using vector instructions and OpenMP.

A couple of other points: It is rarely a good approach to invert the matrix, unless it is very small. Any sparsity is typically lost when the matrix is inverted. Skyline solvers are used for a class of sparse matrices, which are 'banded'. You did not describe the sparsity that your matrix has, but the type of sparsity is an important issue. Most finite element matrices have this type of sparsity. Most skyline solvers are for symmetric positive definite matrices, although the sparsity attribute these solvers utilise can be modified/applied to non-symmetric indefinite matrices.

I would expect that there is a Fortran version of a skyline solver available from someone in every university engineering school.

You should spend some more time reviewing the types of sparsity that are solved and where those matrices are used. There are maths packages (e.g. MKL and Pardiso) that support different types of sparsity, although they can require a greater understanding of the solution process to be used effectively. Others have more experience of these than I do.
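As a rough illustration of the skyline idea (a toy sketch, not Taylor's actual code): each column of the symmetric matrix is stored from its first non-zero row down to the diagonal, with a pointer array locating the diagonal entries, so zeros above the "skyline" are never stored:

```fortran
! Sketch of skyline (profile) storage for a symmetric matrix: each
! column j is stored from its first non-zero row down to the diagonal.
! Array names are illustrative only.
program skyline_demo
  implicit none
  ! Symmetric 4x4, upper triangle shown, column heights 1,2,1,3:
  !   [ 4 1 0 0 ]
  !   [ . 5 0 2 ]
  !   [ . . 6 3 ]
  !   [ . . . 7 ]
  integer, parameter :: n = 4
  real(kind(1d0)) :: s(7)
  integer :: jdiag(n)   ! position of each diagonal entry within s
  ! Column j stores rows (j - height + 1) .. j, ending at the diagonal:
  s     = [4d0, 1d0, 5d0, 6d0, 2d0, 3d0, 7d0]
  jdiag = [1, 3, 4, 7]
  print '(f8.3)', getel(2, 4)   ! fetch a(2,4) = 2 from inside the profile
contains
  function getel(i, j) result(v)  ! retrieve a(i,j), i <= j, from the skyline
    integer, intent(in) :: i, j
    real(kind(1d0)) :: v
    integer :: height, top
    if (j == 1) then
      height = jdiag(1)           ! number of stored entries in column j
    else
      height = jdiag(j) - jdiag(j-1)
    end if
    top = j - height + 1          ! first stored row of column j
    if (i < top) then
      v = 0d0                     ! above the skyline: implicitly zero
    else
      v = s(jdiag(j) - (j - i))
    end if
  end function getel
end program skyline_demo
```

A factorisation confined to the profile preserves this structure, which is why node ordering (minimising the column heights) matters so much for these solvers.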

John

31 Mar 2016 5:14 #17371

John, thank you very much for your wonderful suggestions. I will explore them.

I know that inverting a sparse matrix yields a full matrix, hence exploiting the sparsity is advantageous in terms of storage and performance. I am trying factorisation, reduction, and forward and backward substitution approaches to find the unknown variables. I will check your suggestions too. Thanks.

What is the title of the paper by Graham Powell who published in 1973?

31 Mar 2016 8:51 #17372

'Towards optimal in-core equation solving' by Digambar P. Mondkar and Graham H. Powell, Computers & Structures, Vol. 4, pp. 531-548, Pergamon Press, 1974.

See also Computers & Structures, Vol. 4, pp. 699-728, Pergamon Press, 1974, for the second paper.

It has a Fortran IV listing, which can be updated to F77/F90 DO-loop syntax. It has been the basis for many codes developed since then, and I suspect it was used by Taylor.
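That update is largely mechanical; for example, a labelled Fortran IV DO loop (the fragment below is illustrative, not taken from the paper) maps directly onto the modern end-do form:

```fortran
! Fortran IV style, as found in 1970s listings:
!       DO 10 I = 1,N
!       X(I) = X(I) + A*Y(I)
!    10 CONTINUE
! The same daxpy-style loop in free-form F90 syntax:
program loop_update_demo
  implicit none
  integer, parameter :: n = 3
  real(kind(1d0)) :: x(n), y(n), a
  integer :: i
  x = [1d0, 2d0, 3d0]
  y = [1d0, 1d0, 1d0]
  a = 2d0
  do i = 1, n
    x(i) = x(i) + a*y(i)
  end do
  print '(3f8.3)', x
end program loop_update_demo
```

Beyond the loop syntax, the main porting effort in such listings is usually replacing arithmetic IFs and Hollerith constants, which F90 compilers flag readily.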

1 Apr 2016 11:11 #17377

Thank you John, for the details of the paper.
