Pure Python: Gauss-Jordan Solve Ax = b / invert A


Partly because I don’t know how to relax, I was doing some maths on a recent holiday.

My better half had persuaded me to leave my laptop at home, so I only had my iPad. This made doing real work difficult. Luckily(?), I have a Python install on my iPad, so I was able to satisfy my curiosity.

I did, however, hit a wall when it came to regression, as I was unable to invert matrices in the limited ‘pure Python’ install I had available.

This little problem has been solved a million times already, but I’ve solved it again. The code is below in case anyone faces the same limitations.

The function implements the Gauss-Jordan algorithm for solving Ax = b, or inverting A, in pure Python.

It works just like the solve() function in R.

If you call gj_Solve(A), i.e. on a matrix A alone, the function will return A^-1. If you call gj_Solve(A, b), it returns the augmented matrix [A|b] reduced to row echelon form, i.e. [I|x], so the solution sits in the last column.
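
To make that concrete, here is a minimal usage sketch (assuming the gj_Solve() defined below; the small 2x2 system is made up purely for illustration):

# a made-up 2x2 system, Ax = b
A = [[2, 1],
     [1, 3]]
b = [3, 5]

m = gj_Solve(A, b)            # the reduced augmented matrix, i.e. [I|x]
x = [row[-1] for row in m]    # the solution is the last column
print 'x is ', x              # expect roughly [0.8, 1.4]

Ainv = gj_Solve(A)            # the inverse of A on its own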

It was an interesting little task. One fascinating issue I bumped into was the problem of numerical stability. See ‘Don’t invert that matrix’ for an example.
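
The practical upshot of that article, sketched with the same small A and b as above and a throwaway matvec() helper (not part of the solver below): if you only need x, pass b to the solver rather than inverting first.

def matvec(M, v):
  # plain matrix-vector product for a list-of-lists matrix
  return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# preferred: one elimination pass straight to x
x_direct = [row[-1] for row in gj_Solve(A, b)]

# works, but does more arithmetic and accumulates more rounding error:
# eliminate over the wider [A|I] block, then multiply the inverse by b
x_via_inverse = matvec(gj_Solve(A), b)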


# some helpers

def idMatx(size):
  # identity matrix of the given size, as a list of rows
  idm = []
  for i in range(size):
    row = [0]*size
    row[i] = 1
    idm.append(row)
  return idm

def tranMtx(inMtx):
  # transpose of a (possibly rectangular) list-of-lists matrix
  tMtx = []
  for row in range(0, len(inMtx[0])):
    tRow = []
    for col in range(0, len(inMtx)):
      tRow.append(inMtx[col][row])
    tMtx.append(tRow)
  return tMtx

def matxRound(matx, decPts=4):
  # round every entry of matx in place
  for row in range(len(matx)):
    for col in range(len(matx[0])):
      matx[row][col] = round(matx[row][col], decPts)

# the solver ...

def gj_Solve(A, b=None, decPts=4):
  """ A Gauss-Jordan method to solve an augmented matrix for
      the unknown variables, x, in Ax = b.
      A is a list of rows; b is a flat list.
      The degree of rounding is tuned by altering decPts.
      If b is not supplied, A is augmented with the identity
      matrix instead, and the output is the inverse of A.
  """

  if b is not None:
    # first, a test to make sure that A and b are conformable
    if (len(A) != len(b)):
      print 'A and b are not conformable'
      return
    # build the augmented matrix [A|b]: transpose A, append b as a
    # row, then transpose back so that b becomes the last column
    Ab = tranMtx(A)
    Ab.append(b)
    m = tranMtx(Ab)
  else:
    # build [A|I] the same way (the identity is its own transpose)
    ii = idMatx(len(A))
    Aa = tranMtx(A)
    for col in range(len(ii)):
      Aa.append(ii[col])
    m = tranMtx(Aa)

  (eqns, colrange, augCol) = (len(A), len(A), len(m[0]))

  # permute the matrix -- get the largest leaders onto the diagonals
  # find the largest entry at or below the diagonal in each column,
  # then swap that row up once the search is finished
  for col in range(0, colrange):
    bigrow = col
    for row in range(col+1, colrange):
      if abs(m[row][col]) > abs(m[bigrow][col]):
        bigrow = row
    (m[col], m[bigrow]) = (m[bigrow], m[col])
  print 'm is ', m

  # reduce, such that the last row has at most one unknown
  for rrcol in range(0, colrange):
    for rr in range(rrcol+1, eqns):
      cc = -(float(m[rr][rrcol])/float(m[rrcol][rrcol]))
      for j in range(augCol):
        m[rr][j] = m[rr][j] + cc*m[rrcol][j]

  # final reduction -- the first test catches under-determined systems
  # these are characterised by some equations being all zero
  for rb in reversed(range(eqns)):
    if ( m[rb][rb] == 0):
      if m[rb][augCol-1] == 0:
        continue
      else:
        print 'system is inconsistent'
        return
    else:
      # you must loop back across to catch under-determined systems
      for backCol in reversed(range(rb, augCol)):
        m[rb][backCol] = float(m[rb][backCol]) / float(m[rb][rb])
      # knock-up (cancel the above to eliminate the knowns)
      # again, we must loop to catch under-determined systems
      if not (rb == 0):
        for kup in reversed(range(rb)):
          kk = -float(m[kup][rb]) / float(m[rb][rb])
          for kleft in reversed(range(rb, augCol)):
            m[kup][kleft] += kk*float(m[rb][kleft])

  matxRound(m, decPts)

  if b is not None:
    # return the whole reduced augmented matrix, i.e. [I|x]
    return m
  else:
    # strip off the left-hand identity block, leaving A^-1
    mOut = []
    for row in range(len(m)):
      rOut = []
      for col in range(augCol//2, augCol):
        rOut.append(m[row][col])
      mOut.append(rOut)
    return mOut

# test it!

A = [[1,2,4],
     [1,3,0],
     [1,5,5]]

b = [5,8,2]

sol = gj_Solve(A, b)
print 'sol is ', sol

inv = gj_Solve(A)
print 'inv is ', inv
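
As a quick sanity check on those results (a sketch; matmul() here is a throwaway helper written just for the check, not part of the solver):

def matmul(M, N):
  # plain matrix product of two lists-of-lists
  return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
           for j in range(len(N[0]))]
          for i in range(len(M))]

x = [row[-1] for row in sol]   # the solution vector is the last column of sol
print 'x is ', x
print 'A.x is ', [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]  # should round back to b
print 'A.A^-1 is ', matmul(A, inv)                                          # should be close to the identity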

For those who wish to nerd out on the iPad, I recommend Textastic for writing the code (it is light while still being useful, and has good syntax highlighting), Python 2.7 for iOS for testing it, and a Logitech Bluetooth keyboard case for your fingers.

4 comments

  1. This is how I did it

    import sys

    def solveMatrix(a, b):
      """
      loop 1
      divide all items in array a by first item in array a
      divide first item in b array by first item in a array
      more general;
      loop i only after j has looped each time
      a[i][j] / a[i][i]
      b[i]/a[i][i]
      loop 2
      subtract items in next rows by first item in row multiplied by first item in first row
      i and j loop this time
      i = 0
      j = 0
      a[i][j] = a[][] - a[i+1][j]*a[i][i]
      """

      i = 0; j = 0; n = 0; m = 0
      r = len(a); c = len(a[0])
      if r != c:
        print 'Not Square'
        sys.exit(0)
      j = 0; i = 0; n = 0; m = 0
      try:
        while j < len(a):
          try:
            aa = a[j][j]
            b[j] = b[j]/float(aa)
            while i < len(a):
              a[j][i] = float(a[j][i])/float(aa)
              i += 1
            i = 0

            while n < len(a):
              if n == j:
                n += 1
              aa = a[n][j]
              b[n] -= b[j]*aa
              while m < len(a):
                a[n][m] -= a[j][m]*aa
                m += 1
              m = 0
              n += 1
            n = 0
            j += 1
          except ZeroDivisionError:
            j += 1
      except IndexError:
        b = [round(i,14) for i in b]
        return b
