nnls
- scipy.optimize.nnls(A, b, *, maxiter=None)
Solve
argmin_x || Ax - b ||_2^2 for x >= 0.
This problem, often called Non-Negative Least Squares, is a convex optimization problem with convex constraints. It typically arises when x models quantities for which only nonnegative values are attainable; for example, weights of ingredients, component costs, and so on.
- Parameters:
- A : (m, n) ndarray
Coefficient array.
- b : (m,) ndarray, float
Right-hand side vector.
- maxiter : int, optional
Maximum number of iterations. Default value is 3 * n.
- Returns:
- x : ndarray
Solution vector.
- rnorm : float
The 2-norm of the residual, || Ax - b ||_2.
See also
lsq_linear : Linear least squares with bounds on the variables.
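As an informal illustration of that relationship (a sketch, not part of the official example set): the non-negativity constraint can also be expressed through lsq_linear by setting the lower bound of every variable to zero, in which case the two routines should agree.

>>> import numpy as np
>>> from scipy.optimize import nnls, lsq_linear
>>> A = np.array([[1., 0.], [1., 0.], [0., 1.]])
>>> b = np.array([2., 1., 1.])
>>> x, rnorm = nnls(A, b)
>>> res = lsq_linear(A, b, bounds=(0, np.inf))  # zero lower bound on every variable
>>> np.allclose(res.x, x, atol=1e-6)
True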
Notes
The code is based on the classical algorithm of [1]. It utilizes an active set method and solves the KKT (Karush-Kuhn-Tucker) conditions for the non-negative least squares problem.
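The outline below is a simplified, illustrative sketch of such an active set iteration in pure Python. The helper name nnls_active_set is hypothetical, and SciPy's actual implementation is compiled and treats degenerate and ill-conditioned cases more carefully; the sketch only shows the structure of the Lawson-Hanson style loop (free the most violated variable, re-solve on the passive set, back-track when a passive variable would go negative).

import numpy as np

def nnls_active_set(A, b, maxiter=None, tol=1e-10):
    # Hypothetical, simplified Lawson-Hanson style loop for illustration only.
    m, n = A.shape
    if maxiter is None:
        maxiter = 3 * n
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)          # passive (free) variables
    w = A.T @ (b - A @ x)                # KKT multipliers / negative gradient
    for _ in range(maxiter):
        if P.all() or w[~P].max() <= tol:
            break                        # KKT conditions satisfied
        # Free the variable whose non-negativity constraint is violated the most.
        j = np.argmax(np.where(~P, w, -np.inf))
        P[j] = True
        while True:
            # Unconstrained least squares restricted to the passive set.
            s = np.zeros(n)
            s[P] = np.linalg.lstsq(A[:, P], b, rcond=None)[0]
            if s[P].min() > 0:
                break
            # Back-track toward the previous feasible x until a passive
            # variable hits zero, then make it active again.
            mask = P & (s <= 0)
            alpha = np.min(x[mask] / (x[mask] - s[mask]))
            x = x + alpha * (s - x)
            P &= x > tol
        x = s
        w = A.T @ (b - A @ x)
    return x, np.linalg.norm(A @ x - b)

On the documented example (A = [[1, 0], [1, 0], [0, 1]], b = [2, 1, 1]) this sketch should reproduce the solution [1.5, 1.] with residual norm about 0.7071.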
References
[1] Lawson C., Hanson R.J., "Solving Least Squares Problems", SIAM, 1995, DOI:10.1137/1.9781611971217
Examples
>>> import numpy as np
>>> from scipy.optimize import nnls
>>> A = np.array([[1, 0], [1, 0], [0, 1]])
>>> b = np.array([2, 1, 1])
>>> nnls(A, b)
(array([1.5, 1. ]), 0.7071067811865475)
>>> b = np.array([-1, -1, -1])
>>> nnls(A, b)
(array([0., 0.]), 1.7320508075688772)
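As an informal cross-check of the formulation argmin_x || Ax - b ||_2^2 for x >= 0 (a sketch, not part of the upstream example set), the same solution should be recovered by a general bounded minimizer such as scipy.optimize.minimize with simple non-negativity bounds:

>>> from scipy.optimize import minimize
>>> b = np.array([2, 1, 1])
>>> x, rnorm = nnls(A, b)
>>> res = minimize(lambda z: np.sum((A @ z - b)**2), x0=np.zeros(A.shape[1]),
...                bounds=[(0, None)] * A.shape[1])
>>> np.allclose(res.x, x, atol=1e-4)
True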