If you do not find the answer to your question here, please have a look at the MUMPS Users' mailing list archives.
Orderings
- How can MUMPS use the METIS ordering?
METIS is available at http://glaros.dtc.umn.edu/gkhome/metis/metis/download. After you have installed METIS in, say, /usr/local/metis/, you must modify the 'orderings' section of your file Makefile.inc in the following way (PORD is available by default):
LMETISDIR = /usr/local/metis/
IMETIS =
LMETIS = -L$(LMETISDIR) -lmetis
ORDERINGSF = -Dmetis -Dpord
ORDERINGSC = -Dmetis -Dpord
For IBM SP platforms (xlf* compilers), a different syntax is to be used for preprocessing:
ORDERINGSF = -WF,-Dmetis -WF,-Dpord
ORDERINGSC = -Dmetis -Dpord
After that, you will need to clean everything ('make clean') and recompile ('make' or 'make all'). To force the METIS ordering at execution time, set ICNTL(7) to 5, as in the sketch below.
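For illustration, a minimal sketch using the double precision C interface (dmumps_c); the function name is ours, and the instance 'id' is assumed to have already been initialized with JOB=-1:

#include "dmumps_c.h"

/* Assumes 'id' was already initialized (JOB=-1) and the matrix defined. */
void analyse_with_metis(DMUMPS_STRUC_C *id)
{
    id->icntl[7 - 1] = 5;   /* ICNTL(7)=5: force the METIS ordering */
    id->job = 1;            /* JOB=1: perform the analysis phase */
    dmumps_c(id);
}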
- How can MUMPS use the SCOTCH ordering?
SCOTCH is available at http://gforge.inria.fr/projects/scotch/. After you have installed SCOTCH in, say, /usr/local/scotch/, you must modify the 'orderings' section of your file Makefile.inc in the following way (PORD is available by default):
LSCOTCHDIR = /usr/local/scotch/
LSCOTCH = -L$(LSCOTCHDIR)/lib -lesmumps -lscotch -lscotcherr
ISCOTCH = -I$(LSCOTCHDIR)/include/
ORDERINGSF = -Dscotch -Dpord
ORDERINGSC = -Dscotch -Dpord
For IBM SP platforms (xlf* compilers), a different syntax is to be used for preprocessing:
ORDERINGSF = -WF,-Dscotch -WF,-Dpord
ORDERINGSC = -Dscotch -Dpord
After that, you will need to clean everything ('make clean') and recompile ('make' or 'make all'). To force the SCOTCH ordering at execution time, set ICNTL(7) to 3 (see the ICNTL(7) sketch above).
- I have a problem with the METIS/ParMETIS ordering during the link phase. The returned error is:
mumps_orderings.c(479): error: a value of type "void" cannot be assigned to an entity of type "int"
iierr=ParMETIS_V3_NodeND(first, vertloctab, edgeloctab, numflag, options, order, sizes, &int_comm);
Unfortunately, due to a change in the METIS interface between versions 4 and 5, MUMPS could not remain compatible with both METIS 4 and ParMETIS 3. MUMPS 5.0 now assumes that METIS 5.1.0 or ParMETIS 4.0.3 or later is used. It is possible to continue using METIS 4.0.3, ParMETIS 3.2.0 or earlier versions by forcing the compilation flag -Dmetis4 or -Dparmetis3, as sketched below. Note that METIS 5.0.1/5.0.2/5.0.3 and ParMETIS 4.0.1/4.0.2 are not supported in MUMPS.
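A sketch of the corresponding Makefile.inc entries, assuming METIS 4.0.3 is installed in /usr/local/metis4 (the path and the exact placement of the flag are assumptions; adapt them to your installation):

# force the old METIS 4 interface in the ordering code
LMETISDIR = /usr/local/metis4
LMETIS = -L$(LMETISDIR) -lmetis
ORDERINGSF = -Dmetis -Dpord
ORDERINGSC = -Dmetis -Dmetis4 -Dpord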
- I want to switch from SCOTCH 5.1.12 or earlier to SCOTCH 6.0.x. What do I need to do?
Since SCOTCH version 6.0.1, the enriched package for interfacing with MUMPS (esmumps and ptesmumps) is included in the standard SCOTCH distribution and can be compiled by issuing the command "make esmumps" and/or "make ptesmumps" when installing SCOTCH. You can simply download scotch_6.0.3.tar.gz to have access to all the features of SCOTCH.
Furthermore, in the SCOTCH 6.0.x series, the PT-SCOTCH library does not include the SCOTCH library. Therefore, when using PT-SCOTCH, the SCOTCH library must also be provided during the link phase. A typical example of LSCOTCH definition in your Makefile.inc will then look like:
LSCOTCH = -L$(LSCOTCHDIR)/lib -lptesmumps -lptscotch -lptscotcherr -lscotch
Finally, there is a problem in the SCOTCH 6.0.0 package which makes it unusable with MUMPS. You should update your version of SCOTCH to 6.0.1 or later; we recommend using the latest version, 6.0.3 at the time of writing.
- I have a problem with the SCOTCH/PT-SCOTCH ordering: the code crashes at analysis with the error:
(0): ERROR: stratParserParse: invalid method parameter name "type", before "h,rat=0.7,vert=100,low=h{pass=10},..."
(0): ERROR: SCOTCH_stratGraphOrder: error in ordering strategy
Unfortunately, there is a problem in the SCOTCH 6.0.0 package which makes it unusable with MUMPS. The SCOTCH development team has solved the problem in SCOTCH 6.0.1. You should update your version of SCOTCH to 6.0.1 or later.
NB: the complete error is:
(0): ERROR: stratParserParse: invalid method parameter name "type", before "h,rat=0.7,vert=100,low=h{pass=10},asc=b{width=3,bnd=f{bal=0.2}, org=(|h{pass=10})f{bal=0.2}}}|m{type=h,rat=0.7,vert=100,low=h{pass=10},asc=b{width=3,bnd=f{bal=0.2}, org=(|h{pass=10})f{bal=0.2}}};,ole=f{cmin=0,cmax=100000,frat=0.0},ose=g}, unc=n{sep=/(vert>120)?m{type=h,rat=0.7,vert=100,low=h{pass=10},asc=b{width=3,bnd=f{bal=0.2}, org=(|h{pass=10})f{bal=0.2}}}|m{type=h,rat=0.7,vert=100,low=h{pass=10},asc=b{width=3,bnd=f{bal=0.2}, org=(|h{pass=10})f{bal=0.2}}};,ole=f{cmin=15,cmax=100000,frat=0.0},ose=g}}"
(0): ERROR: SCOTCH_stratGraphOrder: error in ordering strategy
Installation & Execution
- I have problems with missing libraries or problems during the link phase.
Try 'make clean', then 'make'. It may be that you modified some options in Makefile.inc (e.g. sequential or parallel MUMPS library, more orderings available); in that case you need to recompile MUMPS.
If it still does not work, check the FORTRAN/C COMPATIBILITY options in the Makefile (see Make.inc/Makefile.inc.generic).
And if it still does not work, please check the rest of this FAQ section.
- Which libraries do I need to run the parallel version of MUMPS?
You need BLAS (basic linear algebra subroutines), ScaLAPACK, BLACS and MPI. Proper paths to those libraries should be included in the Makefile.
- Tuned versions of the BLAS are available from your platform vendor (e.g. MKL in Intel environments) or through ATLAS. Some users have reported Goto BLAS to be very efficient.
- Alternatively, (non-optimized) source files for the BLAS are available from http://www.netlib.org/blas/
- MPI should be available on your platform, or you will need to install MPICH, LAM/MPI or Open MPI.
- BLACS/ScaLAPACK are available from netlib: http://www.netlib.org/blacs/ and http://www.netlib.org/scalapack/.
- Which libraries do I need to run the sequential version of MUMPS?
You need the BLAS (see the links in the previous question). The library libseq/libmpiseq.a (provided with the MUMPS distribution) will then be used in place of MPI, BLACS and ScaLAPACK.
- Can I run both a sequential and a parallel version of MUMPS?
Those two libraries cannot coexist in the same application since they have the same interface. You need to decide at compilation time which library you want to install. If you install the parallel version after the sequential version (or vice versa), be sure to run 'make clean' in between.
If you plan to run MUMPS sequentially from a parallel MPI application, you need to install the parallel version of MUMPS and pass a communicator containing a single processor to the MUMPS library, as sketched below. The reason for this behaviour is that the sequential MUMPS uses a special library, libmpiseq.a, instead of the true MPI, BLACS, ScaLAPACK, etc. As this library implements all the symbols needed by MUMPS for a sequential environment, it cannot coexist with MPI/BLACS/ScaLAPACK.
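A minimal sketch using the double precision C interface (the function name is ours; MPI_Init is assumed to have been called): each MPI process initializes its own MUMPS instance on a single-process communicator.

#include <mpi.h>
#include "dmumps_c.h"

void init_sequential_instance(DMUMPS_STRUC_C *id)
{
    id->job = -1;   /* JOB=-1: initialize a MUMPS instance */
    id->par = 1;    /* the host process participates in the computation */
    id->sym = 0;    /* unsymmetric matrix */
    /* Pass a communicator containing this process only, converted to its
       Fortran handle as required by the MUMPS C interface. */
    id->comm_fortran = (MUMPS_INT) MPI_Comm_c2f(MPI_COMM_SELF);
    dmumps_c(id);
}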
- I have warnings about misalignment of items in derived datatypes when using MUMPS 5.0.2 or earlier versions.
If these warnings annoy you, or if you expect better performance by allowing better alignment, try suppressing the SEQUENCE statements in [sdcz]mumps_struc.h and [sdcz]mumps_root.h. After that, do a 'make clean' and recompile MUMPS. In that case, you must ensure that the same alignment options are passed to the Fortran compiler when compiling MUMPS and your application. Note that if your application is not written in Fortran and only uses the C interface to MUMPS, then you can safely suppress the SEQUENCE statements.
- I want to use MUMPS from a C program but I get the message "main: multiply defined" at the link phase.
Your Fortran runtime library (required by MUMPS) defines the symbol "main". Usually this symbol calls MAIN__, so instead of "int main()" use "int MAIN__()" in your C code, as in the sketch below. You should check how to avoid the definition of "main" by the Fortran runtime libraries (this is compiler-dependent). See also the driver c_example.c (in directory test), where the preprocessing option -DMAIN_COMP can be used.
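A minimal sketch of this workaround (the exact mechanism in c_example.c may differ slightly):

/* When -DMAIN_COMP is set, rename the entry point so it does not clash
   with the "main" defined by the Fortran runtime, which calls MAIN__. */
#if defined(MAIN_COMP)
int MAIN__(void)
#else
int main(void)
#endif
{
    /* ... set up and call MUMPS here ... */
    return 0;
}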
- I have missing symbols related to pthreads at the link phase, and pthreads (the POSIX threads library) are not available on my computer.
Please recompile MUMPS ('make clean; make') with the option -DWITHOUT_PTHREAD added to the OPTC entry of your Makefile.inc, for example:
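A sketch of the Makefile.inc entry (keep your existing C compiler flags and append the option):

OPTC = -O -DWITHOUT_PTHREAD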
- I want the parallel analysis phase based on SCOTCH to be deterministic.
For the parallel analysis to be deterministic, you need to compile the SCOTCH package with the option -DSCOTCH_DETERMINISTIC and to compile MUMPS ('make clean; make') with the option -DDETERMINISTIC_PARALLEL_GRAPH added to the OPTF entry of your Makefile.inc, for example:
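A sketch of the Makefile.inc entry (keep your existing Fortran compiler flags and append the option):

OPTF = -O -DDETERMINISTIC_PARALLEL_GRAPH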
- How can I use MUMPS on Windows, Mac OS X or in special environments?
Please refer to the "Contributions/interfacing done by MUMPS Users" section of the links page.
- I am using MUMPS 5.1.1 with opensolaris/f90 or a Cray compiler and I encounter the following issue:
"dlr_core.F", Line = 625, Column = 18: ERROR: Procedure "DMUMPS_TRUNCATED_RRQR" is referenced at line 361 (dlr_core.F). It must have an explicit interface specified.
If you use MUMPS 5.1.1 and encounter this problem, a patch is available here. This issue has also disappeared in MUMPS 5.1.2.
- I am using MUMPS 5.2.0 and I encounter the following issue:
"Internal error 1 in DMUMPS_DM_FREEALLDYNAMICCB F F"
The problem is due to an erroneous assertion. One can safely replace the following 5 lines in subroutines [SDCZ]MUMPS_DM_FREEALLDYNAMICCB of the files sfac_mem_dynamic.F, dfac_mem_dynamic.F, cfac_mem_dynamic.F and zfac_mem_dynamic.F:
      ELSE
        WRITE(*,*) "Internal error 1 in CMUMPS_DM_FREEALLDYNAMICCB"
     &             , IS_PTRAST, IS_PAMASTER
        CALL MUMPS_ABORT()
      ENDIF
by:
      ELSE
        ICURRENT = ICURRENT + IW(ICURRENT+XXI)
        CYCLE
      ENDIF
This problem is solved in MUMPS 5.2.1.
- MUMPS does not compile with gfortran-10?
Please add the '-fallow-argument-mismatch' option to the Fortran compiler options 'OPTF' in Makefile.inc, for example:
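A sketch of the Makefile.inc entry (keep your existing Fortran compiler flags and append the option):

OPTF = -O -fallow-argument-mismatch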
Remarkable behaviour
- I have a symmetric positive definite problem. The unsymmetric code (SYM=0) solves the problem correctly, but the symmetric positive definite code (SYM=1) gives an erroneous solution or returns with the error code -10.
You must provide only one triangular part of the matrix to the symmetric code; otherwise, entries (I, J) and (J, I) will be summed, resulting in a different matrix. Of course, this also applies to general symmetric matrices (SYM=2). See also Section 5.2.1 of the MUMPS Users' guide and the sketch below.
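A minimal sketch using the double precision C interface with centralized coordinate input (the function name is ours; the 'nnz' component assumes MUMPS 5.1.0 or later, earlier versions used 'nz'): for the SPD matrix A = [2 1; 1 3], only the lower triangular entries are passed when SYM=1, since providing both (2,1) and (1,2) would make MUMPS sum them.

#include "dmumps_c.h"

void fill_lower_triangle(DMUMPS_STRUC_C *id)
{
    static MUMPS_INT irn[] = {1, 2, 2};        /* row indices (1-based) */
    static MUMPS_INT jcn[] = {1, 1, 2};        /* column indices */
    static double    a[]   = {2.0, 1.0, 3.0};  /* A(1,1), A(2,1), A(2,2) */

    id->n   = 2;   /* order of the matrix */
    id->nnz = 3;   /* entries of one triangular part only */
    id->irn = irn;
    id->jcn = jcn;
    id->a   = a;
}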
- The computed inertia (the number of negative pivots encountered, returned in INFOG(12) when SYM=1 or 2) is correct in the sequential version but is too small in the parallel version.
In the multifrontal method, the last step of the factorization phase consists in the factorization of a dense matrix. MUMPS uses ScaLAPACK for this final node. Unfortunately, ScaLAPACK does not offer the possibility to compute the inertia of a dense matrix. (In fact, it does not offer an L D L^T factorization of a dense matrix either, so we use the ScaLAPACK L U factorization on that final node.)
Therefore, the number of negative pivots in INFOG(12) does not take into account the variables corresponding to the last dense matrix (whose number can vary depending on the ordering); you only obtain a lower bound.
If you want the correct inertia in parallel, you can avoid ScaLAPACK (the last dense matrix will then be treated sequentially, but the rest will still be parallel) by setting ICNTL(13)=1, as in the sketch below.
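A minimal sketch using the double precision C interface (the function name is ours; 'id' is assumed to be initialized with the matrix already defined). Note that ICNTL(13) is taken into account during the analysis phase, so it must be set before analysis:

#include <stdio.h>
#include "dmumps_c.h"

void factorize_with_exact_inertia(DMUMPS_STRUC_C *id)
{
    id->icntl[13 - 1] = 1;  /* ICNTL(13)=1: avoid ScaLAPACK on the root node */
    id->job = 4;            /* JOB=4: analysis followed by factorization */
    dmumps_c(id);
    printf("Negative pivots (INFOG(12)): %d\n", (int) id->infog[12 - 1]);
}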
- I get segmentation faults or wrong results with the 64-bit version of g95 (or I want to use the -i8 option for 64-bit integers).
Please recompile MUMPS using the -i4 option. If you want to use 64-bit integers (the -i8 option), then make sure that the C parts of MUMPS also use 64-bit integers (we use "int" by default, but 64-bit integers can be forced by adding the option -DINTSIZE64 to the OPTC entry of your Makefile.inc) and that all the libraries called by MUMPS rely on 64-bit integers (METIS, ScaLAPACK, BLACS, BLAS). A sketch of the corresponding Makefile.inc entries follows.
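A sketch of the Makefile.inc entries, assuming g95-style flags (with gfortran, -fdefault-integer-8 plays the role of -i8; adapt to your compiler):

# 64-bit integers on the Fortran side (-i8) must be matched on the C side
# (-DINTSIZE64) and in all linked libraries (BLAS, ScaLAPACK, ...)
OPTF = -O -i8
OPTC = -O -DINTSIZE64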
- I have problems with large matrices when using gfortran and/or g95 on 32-bit machines. Why does MUMPS crash during the factorization?
gfortran version 4.1 and g95 allow MUMPS to allocate an array larger than 2 GB even on 32-bit machines. Instead of an allocation error code being returned, the execution continues and a problem arises later when the array is accessed. The problem has been reported to the gfortran development team by one of our MUMPS users, Karl Meerbergen. Note that the Intel Fortran compiler (ifort) works correctly and returns an error.
- I have a problem with the Scilab interface: the main window crashes / is closed when MUMPS is called.
By default, MUMPS prints some statistics on Fortran unit 6. Depending on your compiler, you may need to switch all MUMPS printing off from Scilab by setting id.ICNTL(1:4)=-1. We have observed this behaviour with gfortran (version 4.1).
- MUMPS is slower in parallel than in sequential.
If your test problem is too small (a few seconds in sequential), no speed-up can be expected because of the overhead of parallelism.
It can also be related to your compiler. If you are using the Intel compiler, make sure that the directives starting with '!DEC$' (e.g. '!DEC$ NOOPTIMIZE') in mumps_static_mapping.F are correctly interpreted.
- I have a diagonally dominant matrix on which the factorization is very slow. Actually, the more diagonally dominant it is, the slower MUMPS is. What is happening?
You may be encountering subnormal numbers, on which floating-point operations are often very slow. You can have subnormal numbers rounded to zero by using special compiler flags (with gfortran, one possibility is to use -ffast-math), for example:
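A sketch of the Makefile.inc entry with gfortran (note that -ffast-math relaxes IEEE semantics beyond flushing subnormals to zero):

OPTF = -O -ffast-math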
- I get memory leaks inside BLAS routines when using Intel MKL.
When running, Intel MKL allocates and deallocates internal buffers to facilitate better performance. However, in some cases this behaviour may result in memory leaks. To avoid them, you can do either of the following:
- Set the MKL_DISABLE_FAST_MM environment variable to 1, or call the mkl_disable_fast_mm() function. Be aware that this change may negatively impact the performance of some Intel MKL functions, especially for small problem sizes.
- Call the mkl_free_buffers() function or the mkl_thread_free_buffers() function in the current thread.
A sketch of the second option follows.
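A minimal sketch of the second option in C (mkl_free_buffers() and mkl_thread_free_buffers() are part of the MKL service API; the function name is ours):

#include <mkl_service.h>

void release_mkl_buffers(void)
{
    mkl_free_buffers();  /* free MKL's internal buffers in all threads */
    /* or, for the calling thread only: mkl_thread_free_buffers(); */
}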
Miscellaneous questions
- I am a user of a previous version of MUMPS. What are the new features of MUMPS 5.6.2 and what should I do to use the new version?
Please refer to Section 2 of the MUMPS Users' guide.
- I want to share my matrix with the MUMPS developers.
- Generate your matrix file (see Section 5.2.3 of the MUMPS Users' guide).
- Upload your matrix to an FTP server or any cloud service such as Dropbox or Google Drive.
- Provide the link to the MUMPS team, specifying whether the matrix should be considered public or private.
- Are there Debian packages built from MUMPS?
You can find packages concerning MUMPS here.
- Which version of MUMPS should I use with Trilinos?
Since Trilinos 12.2, both the Amesos and Amesos2 packages support the latest versions of MUMPS (MUMPS 5.0.0 and later).