numpy.linalg.linalg — lite version of scipy.linalg

Notes
-----
This module is a lite version of the linalg.py module in SciPy which
contains high-level Python interface to the LAPACK library.  The lite
version only accesses the following LAPACK functions: dgesv, zgesv,
dgeev, zgeev, dgesdd, zgesdd, dgelsd, zgelsd, dsyevd, zheevd, dgetrf,
zgetrf, dpotrf, zpotrf, dgeqrf, zgeqrf, zungqr, dorgqr.

Public names (__all__): matrix_power, solve, tensorsolve, tensorinv,
inv, cholesky, eigvals, eigvalsh, pinv, slogdet, det, svd, eig, eigh,
lstsq, norm, qr, cond, matrix_rank, LinAlgError, multi_dot.

class LinAlgError(Exception)
    Generic Python-exception-derived object raised by linalg functions.

    General purpose exception class, derived from Python's Exception
    class, programmatically raised in linalg functions when a linear
    algebra-related condition would prevent further correct execution
    of the function.

    Examples
    --------
    >>> from numpy import linalg as LA
    >>> LA.inv(np.zeros((2,2)))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "...linalg.py", line 350, in inv
        return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
      File "...linalg.py", line 249, in solve
        raise LinAlgError('Singular matrix')
    numpy.linalg.LinAlgError: Singular matrix

Error-handling helpers
----------------------
_determine_error_states() builds the extended error object handed to the
linalg gufuncs (invalid='call', over='ignore', divide='ignore',
under='ignore'), and get_linalg_error_extobj(callback) installs one of
the following callbacks, each of which raises LinAlgError with a
routine-specific message:

    _raise_linalgerror_singular                   -> 'Singular matrix'
    _raise_linalgerror_nonposdef                  -> 'Matrix is not positive definite'
    _raise_linalgerror_eigenvalues_nonconvergence -> 'Eigenvalues did not converge'
    _raise_linalgerror_svd_nonconvergence         -> 'SVD did not converge'
Internal helper routines
------------------------
isComplexType(t)
    True if `t` is a subclass of numpy.complexfloating.
_realType(t, default=double) / _complexType(t, default=cdouble)
    Map a scalar type to its real or complex counterpart through
    _real_types_map / _complex_types_map (single <-> csingle,
    double <-> cdouble).
_linalgRealType(t)
    Cast the type t to either double or cdouble.
_commonType(*arrays)
    Determine the common computation type (real or complex, single or
    double) for a set of arrays; raises TypeError('array type %s is
    unsupported in linalg') for inexact dtypes with no LAPACK mapping.
_to_native_byte_order(*arrays)
    Return arrays converted to native byte order; arrays whose byteorder
    is already '=' or '|' are passed through unchanged.
_fastCopyAndTranspose(type, *arrays)
    Fast copy-and-transpose of each array, casting to `type` when the
    array's dtype differs.
_assertRank2(*arrays) / _assertRankAtLeast2(*arrays)
    Raise LinAlgError('%d-dimensional array given. Array must be
    (at least) two-dimensional') when the rank check fails.
_assertSquareness(*arrays) / _assertNdSquareness(*arrays)
    Raise LinAlgError('Array must be square') or LinAlgError('Last 2
    dimensions of the array must be square').
_assertFinite(*arrays)
    Raise LinAlgError('Array must not contain infs or NaNs').
_isEmpty2d(arr) / _assertNoEmpty2d(*arrays)
    Detect empty 2-d arrays; the assertion raises
    LinAlgError('Arrays cannot be empty').
tensorsolve(a, b, axes=None)
    Solve the tensor equation ``a x = b`` for x.

    It is assumed that all indices of `x` are summed over in the
    product, together with the rightmost indices of `a`, as is done in,
    for example, ``tensordot(a, x, axes=b.ndim)``.

    Parameters
    ----------
    a : array_like
        Coefficient tensor, of shape ``b.shape + Q``.  `Q`, a tuple,
        equals the shape of that sub-tensor of `a` consisting of the
        appropriate number of its rightmost indices, and must be such
        that ``prod(Q) == prod(b.shape)`` (in which sense `a` is said
        to be 'square').
    b : array_like
        Right-hand tensor, which can be of any shape.
    axes : tuple of ints, optional
        Axes in `a` to reorder to the right, before inversion.
        If None (default), no reordering is done.

    Returns
    -------
    x : ndarray, shape Q

    Raises
    ------
    LinAlgError
        If `a` is singular or not 'square' (in the above sense).

    See Also
    --------
    numpy.tensordot, tensorinv, numpy.einsum

    Examples
    --------
    >>> a = np.eye(2*3*4)
    >>> a.shape = (2*3, 4, 2, 3, 4)
    >>> b = np.random.randn(2*3, 4)
    >>> x = np.linalg.tensorsolve(a, b)
    >>> x.shape
    (2, 3, 4)
    >>> np.allclose(np.tensordot(a, x, axes=3), b)
    True

solve(a, b)
    Solve a linear matrix equation, or system of linear scalar equations.

    Computes the "exact" solution, `x`, of the well-determined, i.e.,
    full rank, linear matrix equation ``ax = b``.

    Parameters
    ----------
    a : (..., M, M) array_like
        Coefficient matrix.
    b : {(..., M,), (..., M, K)}, array_like
        Ordinate or "dependent variable" values.

    Returns
    -------
    x : {(..., M,), (..., M, K)} ndarray
        Solution to the system a x = b.  Returned shape is identical to `b`.

    Raises
    ------
    LinAlgError
        If `a` is singular or not square.

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.  The solutions are computed using LAPACK routine _gesv.

    `a` must be square and of full rank, i.e., all rows (or,
    equivalently, columns) must be linearly independent; if either is
    not true, use `lstsq` for the least-squares best "solution" of the
    system/equation.

    References
    ----------
    .. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed.,
           Orlando, FL, Academic Press, Inc., 1980, pg. 22.

    Examples
    --------
    Solve the system of equations ``3 * x0 + x1 = 9`` and
    ``x0 + 2 * x1 = 8``:

    >>> a = np.array([[3,1], [1,2]])
    >>> b = np.array([9,8])
    >>> x = np.linalg.solve(a, b)
    >>> x
    array([ 2.,  3.])

    Check that the solution is correct:

    >>> np.allclose(np.dot(a, x), b)
    True

tensorinv(a, ind=2)
    Compute the 'inverse' of an N-dimensional array.

    The result is an inverse for `a` relative to the tensordot operation
    ``tensordot(a, b, ind)``, i.e., up to floating-point accuracy,
    ``tensordot(tensorinv(a), a, ind)`` is the "identity" tensor for the
    tensordot operation.

    Parameters
    ----------
    a : array_like
        Tensor to 'invert'.  Its shape must be 'square', i.e.,
        ``prod(a.shape[:ind]) == prod(a.shape[ind:])``.
    ind : int, optional
        Number of first indices that are involved in the inverse sum.
        Must be a positive integer, default is 2; otherwise
        ValueError('Invalid ind argument.') is raised.

    Returns
    -------
    b : ndarray
        `a`'s tensordot inverse, shape ``a.shape[ind:] + a.shape[:ind]``.

    Raises
    ------
    LinAlgError
        If `a` is singular or not 'square' (in the above sense).

    See Also
    --------
    numpy.tensordot, tensorsolve

    Examples
    --------
    >>> a = np.eye(4*6)
    >>> a.shape = (4, 6, 8, 3)
    >>> ainv = np.linalg.tensorinv(a, ind=2)
    >>> ainv.shape
    (8, 3, 4, 6)
    >>> b = np.random.randn(4, 6)
    >>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b))
    True
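The `solve` example above can be run end to end as a small script; this is a minimal sketch, assuming NumPy is importable as `np`, and the variable names are illustrative only:

```python
import numpy as np

# System from the docstring: 3*x0 + x1 = 9 and x0 + 2*x1 = 8.
a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Exact solution of the full-rank square system (LAPACK _gesv underneath).
x = np.linalg.solve(a, b)

# Verify the residual instead of trusting the printed values.
residual_ok = np.allclose(np.dot(a, x), b)
```

Checking `np.allclose(np.dot(a, x), b)` rather than comparing `x` against hard-coded numbers is the more robust habit, since it tolerates floating-point round-off.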
inv(a)
    Compute the (multiplicative) inverse of a matrix.

    Given a square matrix `a`, return the matrix `ainv` satisfying
    ``dot(a, ainv) = dot(ainv, a) = eye(a.shape[0])``.

    Parameters
    ----------
    a : (..., M, M) array_like
        Matrix to be inverted.

    Returns
    -------
    ainv : (..., M, M) ndarray or matrix
        (Multiplicative) inverse of the matrix `a`.

    Raises
    ------
    LinAlgError
        If `a` is not square or inversion fails.

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.

    Examples
    --------
    >>> from numpy.linalg import inv
    >>> a = np.array([[1., 2.], [3., 4.]])
    >>> ainv = inv(a)
    >>> np.allclose(np.dot(a, ainv), np.eye(2))
    True
    >>> np.allclose(np.dot(ainv, a), np.eye(2))
    True

    If a is a matrix object, then the return value is a matrix as well:

    >>> ainv = inv(np.matrix(a))
    >>> ainv
    matrix([[-2. ,  1. ],
            [ 1.5, -0.5]])

    Inverses of several matrices can be computed at once:

    >>> a = np.array([[[1., 2.], [3., 4.]], [[1, 3], [3, 5]]])
    >>> inv(a)
    array([[[-2. ,  1. ],
            [ 1.5, -0.5]],
           [[-5. ,  2. ],
            [ 3. , -1. ]]])

cholesky(a)
    Cholesky decomposition.

    Return the Cholesky decomposition, ``L * L.H``, of the square matrix
    `a`, where `L` is lower-triangular and .H is the conjugate transpose
    operator (which is the ordinary transpose if `a` is real-valued).
    `a` must be Hermitian (symmetric if real-valued) and
    positive-definite.  Only `L` is actually returned.

    Parameters
    ----------
    a : (..., M, M) array_like
        Hermitian (symmetric if all elements are real), positive-definite
        input matrix.

    Returns
    -------
    L : (..., M, M) array_like
        Lower-triangular Cholesky factor of `a`.  Returns a matrix object
        if `a` is a matrix object.

    Raises
    ------
    LinAlgError
        If the decomposition fails, for example, if `a` is not
        positive-definite.

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.

    The Cholesky decomposition is often used as a fast way of solving

    .. math:: A \mathbf{x} = \mathbf{b}

    (when `A` is both Hermitian/symmetric and positive-definite).  First,
    we solve for :math:`\mathbf{y}` in

    .. math:: L \mathbf{y} = \mathbf{b},

    and then for :math:`\mathbf{x}` in

    .. math:: L.H \mathbf{x} = \mathbf{y}.

    Examples
    --------
    >>> A = np.array([[1,-2j],[2j,5]])
    >>> A
    array([[ 1.+0.j,  0.-2.j],
           [ 0.+2.j,  5.+0.j]])
    >>> L = np.linalg.cholesky(A)
    >>> L
    array([[ 1.+0.j,  0.+0.j],
           [ 0.+2.j,  1.+0.j]])
    >>> np.dot(L, L.T.conj())  # verify that L * L.H = A
    array([[ 1.+0.j,  0.-2.j],
           [ 0.+2.j,  5.+0.j]])
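The Hermitian example from the cholesky docstring can be verified programmatically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Hermitian positive-definite matrix from the docstring example.
A = np.array([[1, -2j], [2j, 5]])

# Lower-triangular factor L such that L @ L.H reconstructs A.
L = np.linalg.cholesky(A)

# Two checks: L is lower-triangular, and L * L.H equals A.
is_lower = np.allclose(L, np.tril(L))
reconstructed = np.dot(L, L.T.conj())
```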
qr(a, mode='reduced')
    Compute the qr factorization of a matrix.

    Factor the matrix `a` as *qr*, where `q` is orthonormal and `r` is
    upper-triangular.

    Parameters
    ----------
    a : array_like, shape (M, N)
        Matrix to be factored.
    mode : {'reduced', 'complete', 'r', 'raw', 'full', 'economic'}, optional
        If K = min(M, N), then

        'reduced'  : returns q, r with dimensions (M, K), (K, N) (default)
        'complete' : returns q, r with dimensions (M, M), (M, N)
        'r'        : returns r only with dimensions (K, N)
        'raw'      : returns h, tau with dimensions (N, M), (K,)
        'full'     : alias of 'reduced', deprecated
        'economic' : returns h from 'raw', deprecated.

        The options 'reduced', 'complete', and 'raw' are new in numpy
        1.8, see the notes for more information.  The default is
        'reduced', and to maintain backward compatibility with earlier
        versions of numpy both it and the old default 'full' can be
        omitted.  Note that array h returned in 'raw' mode is transposed
        for calling Fortran.  The 'economic' mode is deprecated.  The
        modes 'full' and 'economic' may be passed using only the first
        letter for backwards compatibility, but all others must be
        spelled out.

    Returns
    -------
    q : ndarray of float or complex, optional
        A matrix with orthonormal columns.  When mode = 'complete' the
        result is an orthogonal/unitary matrix depending on whether or
        not a is real/complex.  The determinant may be either +/- 1 in
        that case.
    r : ndarray of float or complex, optional
        The upper-triangular matrix.
    (h, tau) : ndarrays of np.double or np.cdouble, optional
        The array h contains the Householder reflectors that generate q
        along with r.  The tau array contains scaling factors for the
        reflectors.  In the deprecated 'economic' mode only h is
        returned.

    Raises
    ------
    LinAlgError
        If factoring fails.

    Notes
    -----
    This is an interface to the LAPACK routines dgeqrf, zgeqrf, dorgqr,
    and zungqr.  For more information on the qr factorization, see for
    example: http://en.wikipedia.org/wiki/QR_factorization

    Subclasses of `ndarray` are preserved except for the 'raw' mode, so
    if `a` is of type `matrix`, all the return values will be matrices
    too.

    New 'reduced', 'complete', and 'raw' options for mode were added in
    NumPy 1.8.0 and the old option 'full' was made an alias of
    'reduced'.  In addition the options 'full' and 'economic' were
    deprecated.  Because 'full' was the previous default and 'reduced'
    is the new default, backward compatibility can be maintained by
    letting `mode` default.  The 'raw' option was added so that LAPACK
    routines that can multiply arrays by q using the Householder
    reflectors can be used.  Note that in this case the returned arrays
    are of type np.double or np.cdouble and the h array is transposed
    to be FORTRAN compatible.  No routines using the 'raw' return are
    currently exposed by numpy, but some are available in lapack_lite
    and just await the necessary work.

    Examples
    --------
    >>> a = np.random.randn(9, 6)
    >>> q, r = np.linalg.qr(a)
    >>> np.allclose(a, np.dot(q, r))  # a does equal qr
    True
    >>> r2 = np.linalg.qr(a, mode='r')
    >>> r3 = np.linalg.qr(a, mode='economic')
    >>> np.allclose(r, r2)  # mode='r' returns the same r as mode='full'
    True
    >>> # But only triu parts are guaranteed equal when mode='economic'
    >>> np.allclose(r, np.triu(r3[:6,:6], k=0))
    True

    Example illustrating a common use of `qr`: solving of least squares
    problems.

    What are the least-squares-best `m` and `y0` in ``y = y0 + mx`` for
    the following data: {(0,1), (1,0), (1,2), (2,1)}?  (Graph the points
    and you'll see that it should be y0 = 0, m = 1.)  The answer is
    provided by solving the over-determined matrix equation ``Ax = b``,
    where::

        A = array([[0, 1], [1, 1], [1, 1], [2, 1]])
        x = array([[y0], [m]])
        b = array([[1], [0], [2], [1]])

    If A = qr such that q is orthonormal (which is always possible via
    Gram-Schmidt), then ``x = inv(r) * (q.T) * b``.  (In numpy practice,
    however, we simply use `lstsq`.)
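The shape contract of the default 'reduced' mode can be checked directly; a minimal sketch, assuming NumPy is available (the matrix is the docstring's least-squares design matrix):

```python
import numpy as np

# 4x2 design matrix from the least-squares example, so M=4, N=2, K=min(M,N)=2.
A = np.array([[0., 1.], [1., 1.], [1., 1.], [2., 1.]])

# Default mode='reduced': q is (M, K), r is (K, N).
q, r = np.linalg.qr(A)

# q has orthonormal columns (q.T @ q == I) and q @ r reproduces A.
orthonormal = np.allclose(np.dot(q.T, q), np.eye(2))
reproduces = np.allclose(np.dot(q, r), A)
```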
    >>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]])
    >>> A
    array([[0, 1],
           [1, 1],
           [1, 1],
           [2, 1]])
    >>> b = np.array([1, 0, 2, 1])
    >>> q, r = LA.qr(A)
    >>> p = np.dot(q.T, b)
    >>> np.dot(LA.inv(r), p)
    array([  1.1e-16,   1.0e+00])

eigvals(a)
    Compute the eigenvalues of a general matrix.

    Main difference between `eigvals` and `eig`: the eigenvectors aren't
    returned.

    Parameters
    ----------
    a : (..., M, M) array_like
        A complex- or real-valued matrix whose eigenvalues will be
        computed.

    Returns
    -------
    w : (..., M,) ndarray
        The eigenvalues, each repeated according to its multiplicity.
        They are not necessarily ordered, nor are they necessarily real
        for real matrices.

    Raises
    ------
    LinAlgError
        If the eigenvalue computation does not converge.

    See Also
    --------
    eig : eigenvalues and right eigenvectors of general arrays
    eigvalsh : eigenvalues of symmetric or Hermitian arrays
    eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.  This is implemented using the _geev LAPACK routines which
    compute the eigenvalues and eigenvectors of general square arrays.

    Examples
    --------
    Illustration, using the fact that the eigenvalues of a diagonal
    matrix are its diagonal elements, that multiplying a matrix on the
    left by an orthogonal matrix, `Q`, and on the right by `Q.T` (the
    transpose of `Q`), preserves the eigenvalues of the "middle" matrix.
    In other words, if `Q` is orthogonal, then ``Q * A * Q.T`` has the
    same eigenvalues as ``A``:

    >>> from numpy import linalg as LA
    >>> x = np.random.random()
    >>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]])
    >>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :], Q[1, :])
    (1.0, 1.0, 0.0)

    Now multiply a diagonal matrix by Q on one side and by Q.T on the
    other:

    >>> D = np.diag((-1,1))
    >>> LA.eigvals(D)
    array([-1.,  1.])
    >>> A = np.dot(Q, D)
    >>> A = np.dot(A, Q.T)
    >>> LA.eigvals(A)
    array([ 1., -1.])
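The similarity-transform example above can be made deterministic by fixing the rotation angle; a minimal sketch, assuming NumPy is available (the angle 0.5 is an arbitrary illustrative choice):

```python
import numpy as np

# A fixed rotation angle makes the check reproducible; any angle works.
x = 0.5
Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]])
D = np.diag((-1, 1))

# Orthogonal similarity transform: A = Q D Q.T has the same eigenvalues as D.
A = np.dot(np.dot(Q, D), Q.T)
w = np.linalg.eigvals(A)
```

Because `eigvals` does not order its results, comparisons should sort first rather than rely on the returned order.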
eigvalsh(a, UPLO='L')
    Compute the eigenvalues of a Hermitian or real symmetric matrix.

    Main difference from eigh: the eigenvectors are not computed.

    Parameters
    ----------
    a : (..., M, M) array_like
        A complex- or real-valued matrix whose eigenvalues are to be
        computed.
    UPLO : {'L', 'U'}, optional
        Specifies whether the calculation is done with the lower
        triangular part of `a` ('L', default) or the upper triangular
        part ('U').  Irrespective of this value only the real parts of
        the diagonal will be considered in the computation to preserve
        the notion of a Hermitian matrix.  It therefore follows that
        the imaginary part of the diagonal will always be treated as
        zero.  Any other value raises
        ValueError("UPLO argument must be 'L' or 'U'").

    Returns
    -------
    w : (..., M,) ndarray
        The eigenvalues in ascending order, each repeated according to
        its multiplicity.

    Raises
    ------
    LinAlgError
        If the eigenvalue computation does not converge.

    See Also
    --------
    eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays
    eigvals : eigenvalues of general real or complex arrays
    eig : eigenvalues and right eigenvectors of general real or complex
          arrays

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.  The eigenvalues are computed using the LAPACK routines
    _syevd, _heevd.

    Examples
    --------
    >>> from numpy import linalg as LA
    >>> a = np.array([[1, -2j], [2j, 5]])
    >>> LA.eigvalsh(a)
    array([ 0.17157288,  5.82842712])

eig(a)
    Compute the eigenvalues and right eigenvectors of a square array.

    Parameters
    ----------
    a : (..., M, M) array
        Matrices for which the eigenvalues and right eigenvectors will
        be computed.

    Returns
    -------
    w : (..., M) array
        The eigenvalues, each repeated according to its multiplicity.
        The eigenvalues are not necessarily ordered.  The resulting
        array will be of complex type, unless the imaginary part is
        zero in which case it will be cast to a real type.  When `a` is
        real the resulting eigenvalues will be real (0 imaginary part)
        or occur in conjugate pairs.
    v : (..., M, M) array
        The normalized (unit "length") eigenvectors, such that the
        column ``v[:,i]`` is the eigenvector corresponding to the
        eigenvalue ``w[i]``.

    Raises
    ------
    LinAlgError
        If the eigenvalue computation does not converge.

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.  This is implemented using the _geev LAPACK routines which
    compute the eigenvalues and eigenvectors of general square arrays.

    The number `w` is an eigenvalue of `a` if there exists a vector `v`
    such that ``dot(a,v) = w * v``.  Thus, the arrays `a`, `w`, and `v`
    satisfy the equations ``dot(a[:,:], v[:,i]) = w[i] * v[:,i]`` for
    :math:`i \in \{0,...,M-1\}`.

    The array `v` of eigenvectors may not be of maximum rank, that is,
    some of the columns may be linearly dependent, although round-off
    error may obscure that fact.  If the eigenvalues are all different,
    then theoretically the eigenvectors are linearly independent.
    Likewise, the (complex-valued) matrix of eigenvectors `v` is
    unitary if the matrix `a` is normal, i.e., if
    ``dot(a, a.H) = dot(a.H, a)``, where `a.H` denotes the conjugate
    transpose of `a`.

    Finally, it is emphasized that `v` consists of the *right* (as in
    right-hand side) eigenvectors of `a`.  A vector `y` satisfying
    ``dot(y.T, a) = z * y.T`` for some number `z` is called a *left*
    eigenvector of `a`, and, in general, the left and right
    eigenvectors of a matrix are not necessarily the (perhaps
    conjugate) transposes of each other.

    References
    ----------
    G. Strang, *Linear Algebra and Its Applications*, 2nd Ed., Orlando,
    FL, Academic Press, Inc., 1980, Various pp.
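The defining relation ``dot(a, v[:,i]) = w[i] * v[:,i]`` can be checked column by column; a minimal sketch, assuming NumPy is available, using the real matrix with conjugate-pair eigenvalues from the examples:

```python
import numpy as np

# Real matrix whose eigenvalues are the complex conjugates 1 + 1j and 1 - 1j.
a = np.array([[1.0, -1.0], [1.0, 1.0]])
w, v = np.linalg.eig(a)

# Each column of v must satisfy a @ v[:, i] == w[i] * v[:, i].
pairs_ok = all(np.allclose(np.dot(a, v[:, i]), w[i] * v[:, i])
               for i in range(2))
```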
    Examples
    --------
    >>> from numpy import linalg as LA

    (Almost) trivial example with real e-values and e-vectors:

    >>> w, v = LA.eig(np.diag((1, 2, 3)))
    >>> w; v
    array([ 1.,  2.,  3.])
    array([[ 1.,  0.,  0.],
           [ 0.,  1.,  0.],
           [ 0.,  0.,  1.]])

    Real matrix possessing complex e-values and e-vectors; note that
    the e-values are complex conjugates of each other:

    >>> w, v = LA.eig(np.array([[1, -1], [1, 1]]))
    >>> w; v
    array([ 1. + 1.j,  1. - 1.j])
    array([[ 0.70710678+0.j        ,  0.70710678+0.j        ],
           [ 0.00000000-0.70710678j,  0.00000000+0.70710678j]])

    Be careful about round-off error!

    >>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]])
    >>> # Theor. e-values are 1 +/- 1e-9
    >>> w, v = LA.eig(a)
    >>> w; v
    array([ 1.,  1.])
    array([[ 1.,  0.],
           [ 0.,  1.]])

eigh(a, UPLO='L')
    Return the eigenvalues and eigenvectors of a Hermitian or symmetric
    matrix.

    Returns two objects, a 1-D array containing the eigenvalues of `a`,
    and a 2-D square array or matrix (depending on the input type) of
    the corresponding eigenvectors (in columns).

    Parameters
    ----------
    a : (..., M, M) array
        Hermitian/symmetric matrices whose eigenvalues and eigenvectors
        are to be computed.
    UPLO : {'L', 'U'}, optional
        Specifies whether the calculation is done with the lower
        triangular part of `a` ('L', default) or the upper triangular
        part ('U').  Irrespective of this value only the real parts of
        the diagonal will be considered in the computation to preserve
        the notion of a Hermitian matrix.  It therefore follows that
        the imaginary part of the diagonal will always be treated as
        zero.

    Returns
    -------
    w : (..., M) ndarray
        The eigenvalues in ascending order, each repeated according to
        its multiplicity.
    v : {(..., M, M) ndarray, (..., M, M) matrix}
        The column ``v[:, i]`` is the normalized eigenvector
        corresponding to the eigenvalue ``w[i]``.  Will return a matrix
        object if `a` is a matrix object.

    Raises
    ------
    LinAlgError
        If the eigenvalue computation does not converge.

    See Also
    --------
    eigvalsh : eigenvalues of symmetric or Hermitian arrays
    eig : eigenvalues and right eigenvectors for non-symmetric arrays
    eigvals : eigenvalues of non-symmetric arrays

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.  The eigenvalues/eigenvectors are computed using the
    LAPACK routines _syevd, _heevd.

    The eigenvalues of real symmetric or complex Hermitian matrices are
    always real. [1]_  The array `v` of (column) eigenvectors is
    unitary and `a`, `w`, and `v` satisfy the equations
    ``dot(a, v[:, i]) = w[i] * v[:, i]``.

    References
    ----------
    .. [1] G. Strang, *Linear Algebra and Its Applications*, 2nd Ed.,
           Orlando, FL, Academic Press, Inc., 1980, pg. 222.

    Examples
    --------
    >>> from numpy import linalg as LA
    >>> a = np.array([[1, -2j], [2j, 5]])
    >>> w, v = LA.eigh(a)
    >>> w; v
    array([ 0.17157288,  5.82842712])
    array([[-0.92387953+0.j        , -0.38268343+0.j        ],
           [ 0.00000000+0.38268343j,  0.00000000-0.92387953j]])
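For the Hermitian example used throughout this section, the eigenvalues have a closed form (3 ± 2√2), so the ascending-order and eigen-equation guarantees of `eigh` can both be asserted; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Hermitian matrix from the docstring; its eigenvalues are 3 - 2*sqrt(2)
# and 3 + 2*sqrt(2) (trace 6, determinant 1).
a = np.array([[1, -2j], [2j, 5]])
w, v = np.linalg.eigh(a)

# eigh promises real eigenvalues in ascending order and unitary v.
first_ok = np.allclose(np.dot(a, v[:, 0]), w[0] * v[:, 0])
unitary = np.allclose(np.dot(v.conj().T, v), np.eye(2))
```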
    >>> np.dot(a, v[:, 0]) - w[0] * v[:, 0]  # verify 1st e-val/vec pair
    array([  2.77555756e-17+0.j,   0.00000000e+00+1.38777878e-16j])
    >>> np.dot(a, v[:, 1]) - w[1] * v[:, 1]  # verify 2nd e-val/vec pair
    array([ 0.+0.j,  0.+0.j])

    A matrix object is returned if the input is a matrix object:

    >>> A = np.matrix(a)
    >>> w, v = LA.eigh(A)
    >>> w; v
    array([ 0.17157288,  5.82842712])
    matrix([[-0.92387953+0.j        , -0.38268343+0.j        ],
            [ 0.00000000+0.38268343j,  0.00000000-0.92387953j]])

svd(a, full_matrices=True, compute_uv=True)
    Singular Value Decomposition.

    Factors the matrix `a` as ``u * np.diag(s) * v``, where `u` and `v`
    are unitary and `s` is a 1-d array of `a`'s singular values.

    Parameters
    ----------
    a : (..., M, N) array_like
        A real or complex matrix of shape (`M`, `N`).
    full_matrices : bool, optional
        If True (default), `u` and `v` have the shapes (`M`, `M`) and
        (`N`, `N`), respectively.  Otherwise, the shapes are (`M`, `K`)
        and (`K`, `N`), respectively, where `K` = min(`M`, `N`).
    compute_uv : bool, optional
        Whether or not to compute `u` and `v` in addition to `s`.  True
        by default.

    Returns
    -------
    u : { (..., M, M), (..., M, K) } array
        Unitary matrices.  The actual shape depends on the value of
        ``full_matrices``.  Only returned when ``compute_uv`` is True.
    s : (..., K) array
        The singular values for every matrix, sorted in descending
        order.
    v : { (..., N, N), (..., K, N) } array
        Unitary matrices.  The actual shape depends on the value of
        ``full_matrices``.  Only returned when ``compute_uv`` is True.

    Raises
    ------
    LinAlgError
        If SVD computation does not converge.

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.  The decomposition is performed using the LAPACK routine
    _gesdd.

    The SVD is commonly written as ``a = U S V.H``.  The `v` returned
    by this function is ``V.H`` and ``u = U``.  If ``U`` is a unitary
    matrix, it means that it satisfies ``U.H = inv(U)``.

    The rows of `v` are the eigenvectors of ``a.H a``.  The columns of
    `u` are the eigenvectors of ``a a.H``.  For row ``i`` in `v` and
    column ``i`` in `u`, the corresponding eigenvalue is ``s[i]**2``.

    If `a` is a `matrix` object (as opposed to an `ndarray`), then so
    are all the return values.

    Examples
    --------
    >>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6)

    Reconstruction based on full SVD:

    >>> U, s, V = np.linalg.svd(a, full_matrices=True)
    >>> U.shape, V.shape, s.shape
    ((9, 9), (6, 6), (6,))
    >>> S = np.zeros((9, 6), dtype=complex)
    >>> S[:6, :6] = np.diag(s)
    >>> np.allclose(a, np.dot(U, np.dot(S, V)))
    True

    Reconstruction based on reduced SVD:

    >>> U, s, V = np.linalg.svd(a, full_matrices=False)
    >>> U.shape, V.shape, s.shape
    ((9, 6), (6, 6), (6,))
    >>> S = np.diag(s)
    >>> np.allclose(a, np.dot(U, np.dot(S, V)))
    True
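The reduced-SVD reconstruction above can be made deterministic with a seeded generator; a minimal sketch, assuming NumPy is available (the seed 0 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.RandomState(0)   # fixed seed so the check is deterministic
a = rng.randn(9, 6)

# Reduced SVD: U is (9, 6), s is (6,), Vh is (6, 6).
U, s, Vh = np.linalg.svd(a, full_matrices=False)

# U * s scales the columns of U by the singular values, so this computes
# U @ diag(s) @ Vh without materialising the diagonal matrix.
reconstructed = np.dot(U * s, Vh)
```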
    This function is capable of returning the condition number using
    one of seven different norms, depending on the value of `p` (see
    Parameters below).

    Parameters
    ----------
    x : (..., M, N) array_like
        The matrix whose condition number is sought.
    p : {None, 1, -1, 2, -2, inf, -inf, 'fro'}, optional
        Order of the norm:

        =====  ============================
        p      norm for matrices
        =====  ============================
        None   2-norm, computed directly using the ``SVD``
        'fro'  Frobenius norm
        inf    max(sum(abs(x), axis=1))
        -inf   min(sum(abs(x), axis=1))
        1      max(sum(abs(x), axis=0))
        -1     min(sum(abs(x), axis=0))
        2      2-norm (largest sing. value)
        -2     smallest singular value
        =====  ============================

        inf means the numpy.inf object, and the Frobenius norm is
        the root-of-sum-of-squares norm.

    Returns
    -------
    c : {float, inf}
        The condition number of the matrix.  May be infinite.

    See Also
    --------
    numpy.linalg.norm

    Notes
    -----
    The condition number of `x` is defined as the norm of `x` times the
    norm of the inverse of `x` [1]_; the norm can be the usual L2-norm
    (root-of-sum-of-squares) or one of a number of other matrix norms.

    References
    ----------
    .. [1] G. Strang, *Linear Algebra and Its Applications*, Orlando, FL,
           Academic Press, Inc., 1980, pg. 285.
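The definition in the Notes (norm of `x` times norm of its inverse) can be verified deterministically; a small sketch with a diagonal matrix of our choosing:

```python
import numpy as np

# diag(1, 2, 4): singular values are 4, 2, 1, so the default
# (p=None) condition number is s_max / s_min = 4 / 1 = 4.
x = np.diag([1.0, 2.0, 4.0])
print(np.linalg.cond(x))     # 4.0

# Check the definition from the Notes section for the 1-norm:
# cond(x, 1) == norm(x, 1) * norm(inv(x), 1).
c1 = np.linalg.cond(x, 1)
c1_def = np.linalg.norm(x, 1) * np.linalg.norm(np.linalg.inv(x), 1)
print(np.isclose(c1, c1_def))   # True
```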
    Examples
    --------
    >>> from numpy import linalg as LA
    >>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]])
    >>> a
    array([[ 1,  0, -1],
           [ 0,  1,  0],
           [ 1,  0,  1]])
    >>> LA.cond(a)
    1.4142135623730951
    >>> LA.cond(a, 'fro')
    3.1622776601683795
    >>> LA.cond(a, np.inf)
    2.0
    >>> LA.cond(a, -np.inf)
    1.0
    >>> LA.cond(a, 1)
    2.0
    >>> LA.cond(a, -1)
    1.0
    >>> LA.cond(a, 2)
    1.4142135623730951
    >>> LA.cond(a, -2)
    0.70710678118654746
    >>> min(LA.svd(a, compute_uv=0))*min(LA.svd(LA.inv(a), compute_uv=0))
    0.70710678118654746

matrix_rank(M, tol=None)

    Return matrix rank of array using SVD method

    Rank of the array is the number of SVD singular values of the array
    that are greater than `tol`.

    Parameters
    ----------
    M : {(M,), (..., M, N)} array_like
        input vector or stack of matrices
    tol : {None, float}, optional
        threshold below which SVD values are considered zero.  If `tol`
        is None, and ``S`` is an array with singular values for `M`, and
        ``eps`` is the epsilon value for datatype of ``S``, then `tol`
        is set to ``S.max() * max(M.shape) * eps``.

    Notes
    -----
    The default threshold to detect rank deficiency is a test on the
    magnitude of the singular values of `M`.  By default, we identify
    singular values less than ``S.max() * max(M.shape) * eps`` as
    indicating rank deficiency (with the symbols defined above).  This
    is the algorithm MATLAB uses [1].  It also appears in *Numerical
    recipes* in the discussion of SVD solutions for linear least squares
    [2].

    This default threshold is designed to detect rank deficiency
    accounting for the numerical errors of the SVD computation.  Imagine
    that there is a column in `M` that is an exact (in floating point)
    linear combination of other columns in `M`.
    Computing the SVD on `M` will not produce a singular value exactly
    equal to 0 in general: any difference of the smallest SVD value from
    0 will be caused by numerical imprecision in the calculation of the
    SVD.  Our threshold for small SVD values takes this numerical
    imprecision into account, and the default threshold will detect such
    numerical rank deficiency.  The threshold may declare a matrix `M`
    rank deficient even if the linear combination of some columns of `M`
    is not exactly equal to another column of `M` but only numerically
    very close to another column of `M`.

    We chose our default threshold because it is in wide use.  Other
    thresholds are possible.  For example, elsewhere in the 2007 edition
    of *Numerical recipes* there is an alternative threshold of
    ``S.max() * np.finfo(M.dtype).eps / 2. * np.sqrt(m + n + 1.)``.
    The authors describe this threshold as being based on "expected
    roundoff error" (p 71).

    The thresholds above deal with floating point roundoff error in the
    calculation of the SVD.  However, you may have more information
    about the sources of error in `M` that would make you consider
    other tolerance values to detect *effective* rank deficiency.  The
    most useful measure of the tolerance depends on the operations you
    intend to use on your matrix.  For example, if your data come from
    uncertain measurements with uncertainties greater than floating
    point epsilon, choosing a tolerance near that uncertainty may be
    preferable.  The tolerance may be absolute if the uncertainties are
    absolute rather than relative.

    References
    ----------
    .. [1] MATLAB reference documentation, "Rank"
           http://www.mathworks.com/help/techdoc/ref/rank.html
    .. [2] W. H. Press, S. A. Teukolsky, W. T. Vetterling and
           B. P. Flannery, "Numerical Recipes (3rd edition)",
           Cambridge University Press, 2007, page 795.

    Examples
    --------
    >>> from numpy.linalg import matrix_rank
    >>> matrix_rank(np.eye(4))  # Full rank matrix
    4
    >>> I = np.eye(4); I[-1, -1] = 0.
    # rank deficient matrix
    >>> matrix_rank(I)
    3
    >>> matrix_rank(np.ones((4,)))  # 1 dimension - rank 1 unless all 0
    1
    >>> matrix_rank(np.zeros((4,)))
    0

pinv(a, rcond=1e-15)

    Compute the (Moore-Penrose) pseudo-inverse of a matrix.

    Examples
    --------
    >>> a = np.random.randn(9, 6)
    >>> B = np.linalg.pinv(a)
    >>> np.allclose(a, np.dot(a, np.dot(B, a)))
    True
    >>> np.allclose(B, np.dot(B, np.dot(a, B)))
    True

slogdet(a)

    Compute the sign and (natural) logarithm of the determinant of an
    array.

    If an array has a very small or very large determinant, then a call
    to `det` may overflow or underflow.  This routine is more robust
    against such issues, because it computes the logarithm of the
    determinant rather than the determinant itself.

    Parameters
    ----------
    a : (..., M, M) array_like
        Input array, has to be a square 2-D array.

    Returns
    -------
    sign : (...) array_like
        A number representing the sign of the determinant.  For a real
        matrix, this is 1, 0, or -1.  For a complex matrix, this is a
        complex number with absolute value 1 (i.e., it is on the unit
        circle), or else 0.
    logdet : (...) array_like
        The natural log of the absolute value of the determinant.

    If the determinant is zero, then `sign` will be 0 and `logdet` will
    be -Inf.  In all cases, the determinant is equal to
    ``sign * np.exp(logdet)``.

    See Also
    --------
    det

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.

    .. versionadded:: 1.6.0

    The determinant is computed via LU factorization using the LAPACK
    routine z/dgetrf.
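The robustness claim above (`det` may underflow while `slogdet` does not) can be checked directly; a runnable sketch with a matrix size of our choosing:

```python
import numpy as np

# det(0.1 * I_400) = 0.1**400, far below the smallest positive double,
# so det() underflows to 0.0 while slogdet() recovers the sign and
# log-magnitude exactly.
a = np.eye(400) * 0.1

print(np.linalg.det(a))      # 0.0 (underflow)

sign, logdet = np.linalg.slogdet(a)
print(sign)                  # 1.0
print(np.isclose(logdet, 400 * np.log(0.1)))   # True
```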
    Examples
    --------
    The determinant of a 2-D array ``[[a, b], [c, d]]`` is ``ad - bc``:

    >>> a = np.array([[1, 2], [3, 4]])
    >>> (sign, logdet) = np.linalg.slogdet(a)
    >>> (sign, logdet)
    (-1, 0.69314718055994529)
    >>> sign * np.exp(logdet)
    -2.0

    Computing log-determinants for a stack of matrices:

    >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ])
    >>> a.shape
    (3, 2, 2)
    >>> sign, logdet = np.linalg.slogdet(a)
    >>> (sign, logdet)
    (array([-1., -1., -1.]), array([ 0.69314718,  1.09861229,  2.07944154]))
    >>> sign * np.exp(logdet)
    array([-2., -3., -8.])

    This routine succeeds where ordinary `det` does not:

    >>> np.linalg.det(np.eye(500) * 0.1)
    0.0
    >>> np.linalg.slogdet(np.eye(500) * 0.1)
    (1, -1151.2925464970228)

det(a)

    Compute the determinant of an array.

    Parameters
    ----------
    a : (..., M, M) array_like
        Input array to compute determinants for.

    Returns
    -------
    det : (...) array_like
        Determinant of `a`.

    See Also
    --------
    slogdet : Another way of representing the determinant, more
        suitable for large matrices where underflow/overflow may occur.

    Notes
    -----
    .. versionadded:: 1.8.0

    Broadcasting rules apply, see the `numpy.linalg` documentation for
    details.

    The determinant is computed via LU factorization using the LAPACK
    routine z/dgetrf.
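As a sanity check on `det` beyond the docstring examples, one can verify the classical multiplicative identity det(AB) = det(A) det(B); a minimal sketch with seeded random matrices of our choosing:

```python
import numpy as np

# det(AB) = det(A) * det(B), a standard determinant identity that is
# handy for sanity-checking numerical results.
rng = np.random.RandomState(0)
A = rng.randn(4, 4)
B = rng.randn(4, 4)

lhs = np.linalg.det(np.dot(A, B))
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(lhs, rhs))   # True
```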
    Examples
    --------
    The determinant of a 2-D array [[a, b], [c, d]] is ad - bc:

    >>> a = np.array([[1, 2], [3, 4]])
    >>> np.linalg.det(a)
    -2.0

    Computing determinants for a stack of matrices:

    >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ])
    >>> a.shape
    (3, 2, 2)
    >>> np.linalg.det(a)
    array([-2., -3., -8.])

lstsq(a, b, rcond=-1)

    Return the least-squares solution to a linear matrix equation.

    Raises
    ------
    LinAlgError
        If computation does not converge ("SVD did not converge in
        Linear Least Squares").

    Examples
    --------
    Fit a line, ``y = mx + c``, through some noisy data-points:

    >>> x = np.array([0, 1, 2, 3])
    >>> y = np.array([-1, 0.2, 0.9, 2.1])

    By examining the coefficients, we see that the line should have a
    gradient of roughly 1 and cut the y-axis at, more or less, -1.

    We can rewrite the line equation as ``y = Ap``, where
    ``A = [[x 1]]`` and ``p = [[m], [c]]``.  Now use `lstsq` to solve
    for `p`:

    >>> A = np.vstack([x, np.ones(len(x))]).T
    >>> A
    array([[ 0.,  1.],
           [ 1.,  1.],
           [ 2.,  1.],
           [ 3.,  1.]])

    >>> m, c = np.linalg.lstsq(A, y)[0]
    >>> print(m, c)
    1.0 -0.95

    Plot the data along with the fitted line:

    >>> import matplotlib.pyplot as plt
    >>> plt.plot(x, y, 'o', label='Original data', markersize=10)
    >>> plt.plot(x, m*x + c, 'r', label='Fitted line')
    >>> plt.legend()
    >>> plt.show()

_multi_svd_norm(x, row_axis, col_axis, op)

    Compute a function of the singular values of the 2-D matrices in
    `x`.

    This is a private utility function used by numpy.linalg.norm().

    Parameters
    ----------
    x : ndarray
    row_axis, col_axis : int
        The axes of `x` that hold the 2-D matrices.
    op : callable
        This should be either numpy.amin or numpy.amax or numpy.sum.
    Returns
    -------
    result : float or ndarray
        If `x` is 2-D, the return value is a float.  Otherwise, it is
        an array with ``x.ndim - 2`` dimensions.  The return values are
        either the minimum or maximum or sum of the singular values of
        the matrices, depending on whether `op` is `numpy.amin` or
        `numpy.amax` or `numpy.sum`.

norm(x, ord=None, axis=None, keepdims=False)

    Matrix or vector norm.

    Raises ``ValueError`` for an invalid norm order ("Invalid norm
    order for vectors." / "Invalid norm order for matrices."), for
    duplicate axes, and when `axis` is not None, an integer or a tuple
    of integers.

    Examples
    --------
    >>> from numpy import linalg as LA
    >>> a = np.arange(9) - 4
    >>> a
    array([-4, -3, -2, -1,  0,  1,  2,  3,  4])
    >>> b = a.reshape((3, 3))
    >>> b
    array([[-4, -3, -2],
           [-1,  0,  1],
           [ 2,  3,  4]])

    >>> LA.norm(a)
    7.745966692414834
    >>> LA.norm(b)
    7.745966692414834
    >>> LA.norm(b, 'fro')
    7.745966692414834
    >>> LA.norm(a, np.inf)
    4.0
    >>> LA.norm(b, np.inf)
    9.0
    >>> LA.norm(a, -np.inf)
    0.0
    >>> LA.norm(b, -np.inf)
    2.0

    >>> LA.norm(a, 1)
    20.0
    >>> LA.norm(b, 1)
    7.0
    >>> LA.norm(a, -1)
    -4.6566128774142013e-010
    >>> LA.norm(b, -1)
    6.0
    >>> LA.norm(a, 2)
    7.745966692414834
    >>> LA.norm(b, 2)
    7.3484692283495345
    >>> LA.norm(a, -2)
    nan
    >>> LA.norm(b, -2)
    1.8570331885190563e-016
    >>> LA.norm(a, 3)
    5.8480354764257312
    >>> LA.norm(a, -3)
    nan

    Using the `axis` argument to compute vector norms:

    >>> c = np.array([[ 1, 2, 3],
    ...               [-1, 1, 4]])
    >>> LA.norm(c, axis=0)
    array([ 1.41421356,  2.23606798,  5.        ])
    >>> LA.norm(c, axis=1)
    array([ 3.74165739,  4.24264069])
    >>> LA.norm(c, ord=1, axis=1)
    array([ 6.,  6.])

    Using the `axis` argument to compute matrix norms:

    >>> m = np.arange(8).reshape(2,2,2)
    >>> LA.norm(m, axis=(1,2))
    array([  3.74165739,  11.22497216])
    >>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
    (3.7416573867739413, 11.224972160321824)

multi_dot(arrays)

    Compute the dot product of two or more arrays in a single function
    call, while automatically selecting the fastest evaluation order.
    Raises ``ValueError`` ("Expecting at least two arrays.") when
    fewer than two arrays are supplied.

    Examples
    --------
    >>> from numpy.linalg import multi_dot
    >>> # Prepare some data
    >>> A = np.random.random((10000, 100))
    >>> B = np.random.random((100, 1000))
    >>> C = np.random.random((1000, 5))
    >>> D = np.random.random((5, 333))
    >>> # the actual dot multiplication
    >>> multi_dot([A, B, C, D])

    instead of::

    >>> np.dot(np.dot(np.dot(A, B), C), D)
    >>> # or
    >>> A.dot(B).dot(C).dot(D)

    Notes
    -----
    The cost for a matrix multiplication can be calculated with the
    following function::

        def cost(A, B):
            return A.shape[0] * A.shape[1] * B.shape[1]

    Let's assume we have three matrices
    :math:`A_{10x100}, B_{100x5}, C_{5x50}`.

    The costs for the two different parenthesizations are as follows::

        cost((AB)C) = 10*100*5 + 10*5*50   = 5000 + 2500   = 7500
        cost(A(BC)) = 10*100*50 + 100*5*50 = 50000 + 25000 = 75000

_multi_dot_three(A, B, C)

    Find the best order for three arrays and do the multiplication.
    For three arguments `_multi_dot_three` is approximately 15 times
    faster than `_multi_dot_matrix_chain_order`.
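The chain-cost arithmetic from the `multi_dot` Notes can be reproduced and checked against the naive nested-dot evaluation; a minimal sketch (the `cost` helper mirrors the one quoted in the Notes, the shapes are the same 10x100, 100x5, 5x50 example):

```python
import numpy as np
from numpy.linalg import multi_dot

def cost(A, B):
    # scalar multiplications needed to form np.dot(A, B)
    return A.shape[0] * A.shape[1] * B.shape[1]

A = np.ones((10, 100))
B = np.ones((100, 5))
C = np.ones((5, 50))

cost_AB_C = cost(A, B) + cost(np.dot(A, B), C)   # (AB)C
cost_A_BC = cost(B, C) + cost(A, np.dot(B, C))   # A(BC)
print(cost_AB_C, cost_A_BC)                      # 7500 75000

# multi_dot picks the cheap order internally; the result matches the
# nested-dot evaluation up to floating-point error.
print(np.allclose(multi_dot([A, B, C]),
                  np.dot(np.dot(A, B), C)))      # True
```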