# Miroslav and the separating wand

In a huge graph it is hard to spot a bottleneck just by looking at it, or even by doing some kind of search. By a bottleneck, I mean a part of the graph that is only weakly connected to the rest. In different settings this can mean different things: for example, if your graph is a communication network, a bottleneck means information does not reach all of the nodes easily, and traffic piles up around it.

One of the celebrated results in algebraic graph theory is by Miroslav Fiedler, a Czech mathematician [1, 2], about the algebraic connectivity of a graph. It says that the second smallest eigenvalue of the Laplacian matrix of a graph measures how connected the graph is; in particular, it is 0 exactly when the graph is disconnected. Furthermore, the eigenvector corresponding to this eigenvalue distinguishes the vertices of the two connected components. When the graph is connected, this eigenvector finds bottlenecks of the graph. Below I first generate a random graph that has a visible bottleneck; then I'll use Fiedler's result to find it computationally.
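To see the eigenvalue statement in action on a tiny example, here is a quick check in plain numpy (rather than Sage): the second smallest Laplacian eigenvalue of a connected path is positive, while that of a disconnected graph is zero. The `laplacian` helper is mine, just for this sketch.

```python
import numpy as np

def laplacian(edges, n):
    """Build the graph Laplacian L = D - A for an n-vertex graph."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

# A connected path 0-1-2-3 versus two disjoint edges 0-1 and 2-3.
path = [(0, 1), (1, 2), (2, 3)]
disconnected = [(0, 1), (2, 3)]

lam_path = np.linalg.eigvalsh(laplacian(path, 4))       # ascending eigenvalues
lam_disc = np.linalg.eigvalsh(laplacian(disconnected, 4))

print(lam_path[1] > 1e-9)       # True: positive second eigenvalue, the path is connected
print(abs(lam_disc[1]) < 1e-9)  # True: zero second eigenvalue, the graph is disconnected
```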

First, let’s generate a random graph with a bottleneck. To do so, I generate two random graphs, take their disjoint union, and put a few random edges between them.

def random_bottleneck_graph(n1, n2, p1, p2, p):
    # n1 = number of vertices of first part
    # n2 = number of vertices of second part
    # p1 = probability of edges within first part
    # p2 = probability of edges within second part
    # p  = probability of edges between the two parts

    import random

    # generate the two parts randomly and put them together
    H1 = graphs.RandomGNP(n1, p1)  # a random GNP graph for the first part
    H2 = graphs.RandomGNP(n2, p2)  # a random GNP graph for the second part
    G = H1.disjoint_union(H2)      # the disjoint union labels vertices (0,i) and (1,j)

    for i in range(n1):                 # pick each vertex in H1
        for j in range(n2):             # pick each vertex in H2
            a = random.random()         # a uniform random number in [0,1)
            if a > 1 - p:               # with probability p add an edge between the two parts
                G.add_edge((0, i), (1, j))
    return G


And here is a sample run:

#initialization
n1 = 40 #number of vertices of first part
n2 = 60 #number of vertices of second part
p1 = .7 #probability of edges of first part
p2 = .6 #probability of edges of second part
p = .01 #probability of edges between two parts

#run
G = random_bottleneck_graph(n1,n2,p1,p2,p)
G.show(vertex_labels=False, figsize=[10,10])


Of course, when I look at the graph, Sage, which generated it, knows that the graph has two main parts, and the vertices are even numbered in a way that makes the two parts easy to recognize. The left picture is generated by the above ‘show’ command, and the right picture uses the following code.

def show_random_bottleneck_graph(G, layout=None):
    EdgeColors = {'red':[], 'blue':[], 'black':[]}

    for e in G.edges():
        if not e[0][0] == e[1][0]:   # edge between the two parts
            EdgeColors['black'].append(e)
        elif e[0][0] == 0:           # edge inside the first part
            EdgeColors['red'].append(e)
        else:                        # edge inside the second part
            EdgeColors['blue'].append(e)

    G.show(vertex_labels=False, figsize=[10,10], layout=layout, edge_colors=EdgeColors)


So, in order to really get a random graph with a bottleneck, I take the above graph and throw away the information I had about it.

def relable_graph(G, part=None):
    # input: a graph G
    # output: if part is not given, relabel the vertices of G randomly with integers;
    #         if part is given, label each vertex with a pair (i,j), where i is the
    #         index of the piece of part containing the vertex and j is its old label.

    V = G.vertices()
    E = G.edges()

    H = Graph({})

    map = {}
    if part == None:
        new_part = None
        import random
        random.shuffle(V)

        for i in range(len(V)):
            map[V[i]] = i
    else:
        new_part = [ [] for p in part ]
        for i in range(len(part)):
            for w in part[i]:
                map[w] = (i, w)
                new_part[i].append((i, w))

    H.add_vertices(map.values())        # keep isolated vertices too
    for v, w, l in E:
        H.add_edge(map[v], map[w], l)   # copy each edge over to the new labels

    return (H, new_part)

###############################
#run
H = relable_graph(G)[0]
H.show(layout='circular',vertex_labels=False, figsize=[10,10])

Now if I look at the graph I won’t be able to recognize the bottleneck:

Now that I truly have a random graph, which I know has a visible bottleneck, let’s see if we can find it. Below I use ‘networkx’ to compute the Fiedler vector, an eigenvector corresponding to the second smallest eigenvalue of the Laplacian matrix. Then I partition the vertices according to the signs of the corresponding entries of this vector:

def partition_with_fiedler(G):
    # input: a graph G
    # algorithm: using networkx, find the Fiedler vector of G
    # output: a partition of the vertices into two pieces according to the
    #         positive and negative entries of the Fiedler vector

    import networkx
    H = G.networkx_graph()
    f = networkx.fiedler_vector(H, method='lanczos')

    V = H.nodes()
    P = [[], []]
    for i in range(len(V)):
        if f[i] >= 0:
            P[0].append(V[i])
        else:
            P[1].append(V[i])

    return P


Let’s see how it is partitioned:

P = partition_with_fiedler(H)
H.show(partition=P, layout='circular', vertex_labels=True, figsize=[10,10])

Well, not much can be seen yet. Now let me put the red vertices in one side and the blue vertices in the other side:

def position_bottleneck(G):
    import random
    P = partition_with_fiedler(G)

    V = G.vertices()
    new_pos = {}

    for v in V:   # place the two pieces on opposite sides of the plane
        if v in P[0]:
            new_pos[v] = [6*(1 + random.random()), 6*(2*random.random() - 1)]
        else:
            new_pos[v] = [-6*(1 + random.random()), 6*(2*random.random() - 1)]

    return new_pos

pos = position_bottleneck(H)
H.show(partition=P, pos=pos, vertex_labels=True, figsize=[10,10])

Now it’s easy to see the bottleneck of the graph! We can even recover the original picture as follows:

(K,newP) = relable_graph(H,part=P)
K.show(partition=newP, layout='circular', vertex_labels=False, figsize=[10,10])

And here is the representation of it using the colored edges:

show_random_bottleneck_graph(K,layout='circular') #suggested layouts are "circular" and "spring"

UPDATE:

Since I posted this, I’ve been thinking about multiway Fiedler clustering. Here is what I have so far. First I define a list N of the number of vertices in each part, and a matrix A, basically a probability matrix whose entry (i,j) is the probability of an edge between part i and part j; hence it is symmetric. If you choose diagonal entries close to 1 and off-diagonal entries close to 0, you get a graph with multiple communities. I shuffle the vertices around after creating the multi-community graph.
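This construction is essentially a stochastic block model. For readers outside Sage, here is a minimal numpy sketch of the same sampling scheme (the function name `sbm_adjacency` is mine, not from the post):

```python
import numpy as np

def sbm_adjacency(N, A, rng=None):
    """Sample a symmetric 0/1 adjacency matrix: vertex u sits in part block[u],
    and an edge u-v appears with probability A[block[u], block[v]]."""
    rng = np.random.default_rng(rng)
    block = np.repeat(np.arange(len(N)), N)   # part index of each vertex
    n = len(block)
    P = A[np.ix_(block, block)]               # per-pair edge probabilities
    U = rng.random((n, n))
    adj = np.triu(U < P, k=1)                 # sample each unordered pair once
    return (adj | adj.T).astype(int), block

N = [10, 15, 20]
A = np.array([[0.9, 0.01, 0.01],
              [0.01, 0.9, 0.01],
              [0.01, 0.01, 0.9]])
adj, block = sbm_adjacency(N, A, rng=0)
print(adj.shape)  # (45, 45)
```

Sampling only the upper triangle and symmetrizing avoids considering each pair of vertices twice.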

def multi_bottleneck_graph(N, A):
    # N = list of size n of the number of vertices in each part
    # A = an nxn symmetric matrix where entry (i,j) is the probability of an edge
    #     between part i and part j of vertices, and n is the number of parts.

    import random

    n = len(N)
    G = Graph({})

    for i in range(n):
        for j in range(N[i]):
            G.add_vertex((i, j))    # label each vertex (part, index)

    V = G.vertices()

    for v in V:
        for w in V:
            if v < w:                   # consider each pair of vertices once
                a = random.random()     # a uniform random number in [0,1)
                p = A[v[0], w[0]]
                if a > 1 - p:           # with probability p add the edge
                    G.add_edge(v, w)

    return G

def relable_graph(G, part=None):
    # input: a graph G
    # output: if part is not given, relabel the vertices of G randomly with integers;
    #         if part is given, label each vertex with a pair (i,j), where i is the
    #         index of the piece of part containing the vertex and j is its old label.

    V = G.vertices()
    E = G.edges()

    H = Graph({})

    map = {}
    if part == None:
        new_part = None
        import random
        random.shuffle(V)

        for i in range(len(V)):
            map[V[i]] = i
    else:
        new_part = [ [] for p in part ]
        for i in range(len(part)):
            for w in part[i]:
                map[w] = (i, w)
                new_part[i].append((i, w))

    H.add_vertices(map.values())        # keep isolated vertices too
    for v, w, l in E:
        H.add_edge(map[v], map[w], l)   # copy each edge over to the new labels

    return (H, new_part)

def shuffled_multi_bottleneck_graph(N, A):
    G = multi_bottleneck_graph(N, A)
    H = relable_graph(G)[0]
    return H

# run
N = [10,15,20]
A = matrix([[.9,.01,.01],[.01,.9,.01],[.01,.01,.9]])
G = shuffled_multi_bottleneck_graph(N,A)
G.show(layout='circular',vertex_labels=False, vertex_size=30, figsize=[10,10])


Then I break the graph into two pieces using the Fiedler vector, compare the two pieces according to their algebraic connectivities (second smallest eigenvalue of the Laplacian, normalized by the number of vertices), break the less connected piece into two, and repeat this process until I get n communities.

def partition_with_fiedler(G):
    # input: a graph G
    # algorithm: using networkx, find the Fiedler vector of G
    # output: a partition of the vertices into two pieces according to the
    #         positive and negative entries of the Fiedler vector

    import networkx
    H = G.networkx_graph()
    f = networkx.fiedler_vector(H, method='lanczos')

    V = H.nodes()
    P = [[], []]
    for i in range(len(V)):
        if f[i] >= 0:
            P[0].append(V[i])
        else:
            P[1].append(V[i])

    return P

def multi_fiedler(G, n=2):
    if n == 2:
        P = partition_with_fiedler(G)
        return P

    else:
        import networkx

        P = partition_with_fiedler(G)
        H = [ G.subgraph(vertices=P[i]) for i in range(2) ]
        # normalized algebraic connectivity of each current piece
        C = [ networkx.algebraic_connectivity(H[i].networkx_graph(), method='lanczos')/H[i].order() for i in range(2) ]
        l = C.index(min(C))   # position of the least connected piece

        while n > 2:
            n = n - 1

            temP = partition_with_fiedler(H[l])
            P.pop(l)
            P.append(temP[0])
            P.append(temP[1])

            temH = [ G.subgraph(vertices=temP[i]) for i in range(2) ]
            H.pop(l)
            H.append(temH[0])
            H.append(temH[1])

            temC = [ networkx.algebraic_connectivity(temH[i].networkx_graph(), method='lanczos')/temH[i].order() for i in range(2) ]
            C.pop(l)
            C.append(temC[0])
            C.append(temC[1])
            l = C.index(min(C))   # recompute the position of the minimum

        return P
# run
K = multi_fiedler(G,3)
G.show(partition=K, layout='circular', figsize=[10,10])


Now I can separate same colour vertices to see the communities in the graph:

def show_communities(G, n):
    # G: a graph
    # n: number of communities, n > 0
    if n == 1:
        P = [G.vertices()]
    elif n > 1:
        P = multi_fiedler(G, n)
    else:
        raise ValueError('n > 0 is the number of communities.')
    L, newP = relable_graph(G, part=P)
    L.show(partition=newP, layout='circular', vertex_labels=False, figsize=[10,10])

# run
show_communities(G,3)


One important note about using networkx: the default method (tracemin) for algebraic_connectivity and for fiedler_vector seems to have a bug, so I used the method='lanczos' option throughout.

You might be interested to see what happens if you ask for more or fewer than 3 communities. Here are a couple of examples:

The next question is how to decide on the number of communities.
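One standard heuristic (not from this post, but common in spectral clustering) is the eigengap: sort the Laplacian eigenvalues and pick the k after which the gap between consecutive eigenvalues is largest. A numpy sketch (the helper `eigengap_k` is my name):

```python
import numpy as np

def eigengap_k(L, kmax=10):
    """Suggest a number of communities from the Laplacian spectrum:
    the k >= 1 maximizing the gap between consecutive eigenvalues."""
    lam = np.linalg.eigvalsh(L)[:kmax + 1]   # smallest eigenvalues, ascending
    gaps = np.diff(lam)                      # gaps[k-1] = lam_{k+1} - lam_k
    return int(np.argmax(gaps)) + 1

# Three disjoint triangles: the Laplacian eigenvalue 0 has multiplicity 3,
# so the largest gap sits after the third eigenvalue and the heuristic says k = 3.
tri = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]], dtype=float)
L = np.kron(np.eye(3), tri)                  # block-diagonal: 3 triangle Laplacians
print(eigengap_k(L))  # 3
```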

1. M. Fiedler, Algebraic connectivity of graphs, Czechoslovak Math. J. 23(98):298–305, 1973.
2. M. Fiedler, A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory, Czechoslovak Math. J. 25:619–637, 1975.

# Two New Movies

I recently went to a couple of screenings that made me really happy.

The first film was “Gifted“, the story of a first grader with a special talent in mathematics, and the people around her who struggle to decide between letting her be an ordinary kid and coaching her hard to become one of the greatest mathematicians in history. Though the story is somewhat beefed up into a Hollywood movie, it shows some aspects of such a life very nicely. The movie overall was well made, and very much recommended.

The second movie was “codegirl“, more of a documentary about groups of high school girls attacking real-life problems in their communities by developing mobile apps. The first scene is a quote from Justin Wolfers, an economist and public policy scholar:

## FEWER LARGE COMPANIES ARE RUN BY WOMEN THAN BY MEN NAMED JOHN

The movie opens in a small town in Moldova, showing the problems with clean drinking water. It goes on to show some of the teams’ early progress, their struggles, and the emotional moments when a team advances to the next level or gets eliminated. The contrast between the types of problems each team has to deal with, depending on where they’re from, is shown very clearly and beautifully. The final scenes announcing what some of the teams (eliminated or otherwise) are doing with their projects are particularly inspiring. Highly recommended.

Trailer: CODEGIRL

# Nonsymmetric SIEP paper is published in LAA

My first solo paper, “Existence of a Not Necessarily Symmetric Matrix with Given Distinct Eigenvalues and Graph”, was published in Linear Algebra and its Applications yesterday. It took exactly one year! I’ve written about it here: Losing the Symmetry.

The main idea is to use the implicit function theorem to perturb the following matrix so that the entries corresponding to the edges become nonzero, while adjusting the rest of the entries to get back to the original spectrum.

You can find the published version here (download for the first 50 days is free).

# Determinant and Jacobian

Today I started my linear algebra class by motivating the determinant with a little bit of history and applications. In particular, I talked about the determinant appearing in the change of variables in integration of multivariate functions in calculus. Then, after the class, I noticed that my picture of what happens when we linearize the image wasn’t as clear as I had drawn it. So I decided to look at an actual example. The following code takes a function $f$ that maps $\mathbb{R}^2$ to $\mathbb{R}^2$ and draws three regions:

• the unit cube in blue,
• the image of the unit cube under $f$ in red, and
• the linearization of the image by the Jacobian of $f$ at the origin, in green.

def show_linearization(f, n=100):
    # f is a function that maps R^2 to R^2, for example
    # f(x,y) = ((x^2+2*(x+2)*(y+1))/4-1, y^2-(x+1)*y)
    # The output is three areas:
    #   the unit cube,
    #   the image of the unit cube under f, and
    #   the linearization of the image using the Jacobian of f at (0,0)

    var('x y')
    A = jacobian(f, (x, y))(0, 0)
    p = f(0, 0)
    domxy = []
    imgxy = []
    jacxy = []
    for i in range(n+1):
        for j in range(n+1):
            domxy.append((i/n, j/n))
            imgxy.append(f(i/n, j/n))
            jacxy.append(p + A*vector([i/n, j/n]))

    P = points(domxy, color="blue", aspect_ratio=1, alpha=.3)
    Q = points(imgxy, color="red", aspect_ratio=1, alpha=.3)
    R = points(jacxy, color="green", aspect_ratio=1, alpha=.3)

    (P+Q+R).show()

Here is a sample run:

f(x,y) = ((x^2+2*(x+2)*(y+1))/4-1,y^2-(x+1)*y)
show_linearization(f)

and the output is:

I chose the function in a way that $f(0,0)=(0,0)$ for simplicity. Here it is on sage cell server for you to play around with it: link.
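To tie this back to the determinant, we can check numerically (plain numpy standing in for Sage here) that the Jacobian of the sample map above at the origin has determinant $-1/2$, so locally the map halves areas and flips orientation. The `numeric_jacobian` helper is mine, built from central differences:

```python
import numpy as np

def f(x, y):
    # the same sample map used above: f(x,y) = ((x^2+2(x+2)(y+1))/4 - 1, y^2 - (x+1)y)
    return np.array([(x**2 + 2*(x + 2)*(y + 1)) / 4 - 1,
                     y**2 - (x + 1)*y])

def numeric_jacobian(f, x, y, h=1e-6):
    # central-difference Jacobian of f : R^2 -> R^2 at (x, y)
    dfx = (f(x + h, y) - f(x - h, y)) / (2*h)
    dfy = (f(x, y + h) - f(x, y - h)) / (2*h)
    return np.column_stack([dfx, dfy])

J = numeric_jacobian(f, 0, 0)
print(round(float(np.linalg.det(J)), 4))  # -0.5: areas are halved locally, orientation flips
```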

What do you think? Any suggestions or comments?

# Step-by-step reduction

One of the things I always tell my students is to check their solutions when they are done solving a problem. That by itself can mean several things, depending on what the problem is. Of course, after solving a system of linear equations, one would plug the solutions back into the original equations to see if they are satisfied. The harder part is coming up with a way to check whether the row reduction was done correctly. One can easily use Sage to see if the reduced row echelon form is computed correctly: if A is your matrix, just input A.rref() to find its reduced row echelon form.
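The plug-it-back-in check itself can be scripted. Here is a tiny sketch in plain Python/numpy (standing in for Sage; the system is a hypothetical example of mine):

```python
import numpy as np

# a small 2x2 system, just for illustration
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])

x = np.linalg.solve(A, b)      # solve A x = b
print(np.allclose(A @ x, b))   # True: plugging the solution back in satisfies the system
```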

But still, when the answer is wrong, one might get frustrated trying to figure out in which step the error was made, or might simply get stuck. Considering that row reduction is a time-consuming process, it is plausible that someone would give up after a few tries. I found this nice Sage code, from here, that shows the Gauss–Jordan elimination process step by step and actually tells you what to do in each step. Then I did a little bit of customization on it. Here is the final version:

# Naive Gaussian reduction
def gauss_method(MATRIX):
    """Describe the reduction to echelon form of the given matrix of rationals.

    MATRIX  matrix of rationals   e.g., M = matrix(QQ, [[..], [..], ..])

    Returns: reduced form.  Side effect: prints steps of reduction.

    """
    M = copy(MATRIX)
    num_rows = M.nrows()
    num_cols = M.ncols()
    show(M.apply_map(lambda t: t.full_simplify()))

    col = 0   # all cols before this are already done
    for row in range(0, num_rows):
        # ?Need to swap in a nonzero entry from below
        while (col < num_cols
               and M[row][col] == 0):
            for i in M.nonzero_positions_in_column(col):
                if i > row:
                    print " swap row", row+1, "with row", i+1
                    M.swap_rows(row, i)
                    show(M.apply_map(lambda t: t.full_simplify()))
                    break
            else:
                col += 1

        if col >= num_cols:
            break

        # Now guaranteed M[row][col] != 0
        if (M[row][col] != 1):
            print " take", 1/M[row][col], "times row", row+1
            M.rescale_row(row, 1/M[row][col])
            show(M.apply_map(lambda t: t.full_simplify()))

        for changed_row in range(row+1, num_rows):
            if M[changed_row][col] != 0:
                factor = -1*M[changed_row][col]/M[row][col]
                print " take", factor, "times row", row+1, "plus row", changed_row+1
                M.add_multiple_of_row(changed_row, row, factor)
                show(M.apply_map(lambda t: t.full_simplify()))
        col += 1

    print "Above is a row echelon form, let's keep cruising to get the reduced row echelon form:\n"

    for i in range(num_rows):
        row = num_rows - i - 1
        if M[row] != 0:
            for col in range(num_cols):
                if M[row, col] != 0:
                    if M[row, col] != 1:
                        print " take", 1/M[row][col], "times row", row+1
                        M.rescale_row(row, 1/M[row][col])
                        show(M.apply_map(lambda t: t.full_simplify()))
                    break

            for changed_row in range(row):
                factor = -1 * M[row-1-changed_row, col]
                if factor != 0:
                    print " take", factor, "times row", row+1, "plus row", row-1-changed_row+1
                    M.add_multiple_of_row(row-1-changed_row, row, factor)
                    show(M.apply_map(lambda t: t.full_simplify()))
    return M.apply_map(lambda t: t.full_simplify())


And here is a sample run:

M = matrix(SR,[[3,-1,4,6],[0,1,8,0],[-2,1,0,-4]])
gauss_method(M)


And the output looks like this:

Click here to run it in the Sage cell server. Note that the code is implemented to work over the rational field, since it is being used for pedagogical purposes.

In the comments Jason Grout has added an interactive implementation of the first part of the code:

# Naive Gaussian reduction
@interact
def gauss_method(M=matrix(QQ, [[3,-1,4,6],[0,1,8,0],[-2,1,0,-4]])):  # default matrix is just a sample
    """Describe the reduction to echelon form of the given matrix of rationals.

    M  matrix of rationals   e.g., M = matrix(QQ, [[..], [..], ..])

    Returns: None.  Side effect: M is reduced.  Note: this is echelon form,
    not reduced echelon form; this routine does not end the same way as does
    M.echelon_form().

    """
    num_rows = M.nrows()
    num_cols = M.ncols()
    print M

    col = 0   # all cols before this are already done
    for row in range(0, num_rows):
        # ?Need to swap in a nonzero entry from below
        while (col < num_cols
               and M[row][col] == 0):
            for i in M.nonzero_positions_in_column(col):
                if i > row:
                    print " swap row", row+1, "with row", i+1
                    M.swap_rows(row, i)
                    print M
                    break
            else:
                col += 1

        if col >= num_cols:
            break

        # Now guaranteed M[row][col] != 0
        if (M[row][col] != 1):
            print " take", 1/M[row][col], "times row", row+1
            M.rescale_row(row, 1/M[row][col])
            print M
        change_flag = False
        for changed_row in range(row+1, num_rows):
            if M[changed_row][col] != 0:
                change_flag = True
                factor = -1*M[changed_row][col]/M[row][col]
                print " take", factor, "times row", row+1, "plus row", changed_row+1
                M.add_multiple_of_row(changed_row, row, factor)
        if change_flag:
            print M
        col += 1


One use of this is finding the inverse of a matrix step by step. For example:

var('a b c d')
A = matrix([[a,b],[c,d]])
R = gauss_method(A.augment(identity_matrix(A.ncols()),subdivide=True))

will give you the step-by-step computation. Or you can run it for a generic matrix of size 3 (for some reason it didn’t like the symbol i, presumably because i is the imaginary unit in Sage, so I used the letter k instead):

var('a b c d e f g h i j k l m n o p q r s t u v w x y z')
A = matrix([[a,b,c],[d,e,f],[g,h,k]])
R = gauss_method(A.augment(identity_matrix(A.ncols()),subdivide=True))

The steps are too long, so I’m not going to include a snapshot; if you are interested, look at the steps here. But you can check just the final result with

view(R.subdivision(0,1))

which will give you the inverse in terms of the entries. And to check whether the denominators and the determinant of the matrix are related to each other, you can multiply by the determinant and simplify:

view((det(A)*R.subdivision(0,1)).apply_map(lambda t:t.full_simplify()))

to get

Do you think this can help students?

# Generating not-so-much-random matrices

I’ve been using Sage for a while now. I’ve also been teaching linear algebra for a few years. One of the problems in teaching linear algebra is coming up with a handful of examples of matrices that row-reduce nicely. One way to do this is to start with your “nice” row-reduced matrix, take a few (or many) “nice” elementary matrices, and there is your example. One might want to code this in Sage, but I just recently found out that Sage actually has such a function built in. For example,

random_matrix(ZZ,4,5,algorithm='echelonizable',rank=3, upper_bound=7)


will create a random $4 \times 5$ matrix with rank $3$, integer entries (that’s the ZZ), and entries no bigger than $7$, which can “nicely” be put into reduced row echelon form. Here it is in the Sage cell server. I think this is a good thing to share with students so they can practice as many times as they want.

For the same reason, when generating a random matrix to find its inverse, we usually want a matrix with small-ish integer entries whose inverse also has relatively small integer entries. It is easy to see that for a matrix $A$ with integer entries to have an inverse with integer entries, it is necessary that its determinant be $\pm 1$. It turns out that, perhaps surprisingly, this condition is also sufficient, thanks to Cramer’s rule! Though it might defeat some pedagogical purposes, like relating the inverse to the determinant, here is code that generates random matrices with integer entries whose inverses also have integer entries:

random_matrix(ZZ,3,algorithm='unimodular',upper_bound=6)
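The Cramer’s-rule argument can be checked directly. The numpy sketch below (my own helper, not Sage’s algorithm) builds an integer matrix with determinant 1 as a product of integer row-addition operations, each of determinant 1, and then verifies that its inverse is again an integer matrix:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_unimodular(n, steps=6):
    """Build an integer matrix with det = 1 by applying random integer
    row-addition operations (each has determinant 1) to the identity."""
    M = np.eye(n, dtype=np.int64)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        M[i] += rng.integers(-2, 3) * M[j]   # add an integer multiple of another row
    return M

M = random_unimodular(3)
Minv = np.rint(np.linalg.inv(M)).astype(np.int64)   # round the float inverse to integers

print(round(np.linalg.det(M)))                        # 1
print((M @ Minv == np.eye(3, dtype=np.int64)).all())  # True: the inverse is integer
```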

The Sage documentation says:

Warning: Matrices generated are not uniformly distributed. For
unimodular matrices over finite field this function does not even
generate all of them: for example "Matrix.random(GF(3), 2,
algorithm='unimodular')" never generates "[[2,0],[0,2]]". This
function is made for teaching purposes.

And

Random matrices in echelon form.  The "algorithm='echelon_form'"
keyword, along with a requested number of non-zero rows
("num_pivots") will return a random matrix in echelon form.  When
the base ring is "QQ" the result has integer entries.  Other exact
rings may be also specified.

sage: A=random_matrix(QQ, 4, 8, algorithm='echelon_form', num_pivots=3); A # random
[ 1 -5  0 -2  0  1  1 -2]
[ 0  0  1 -5  0 -3 -1  0]
[ 0  0  0  0  1  2 -2  1]
[ 0  0  0  0  0  0  0  0]
sage: A.base_ring()
Rational Field
sage: (A.nrows(), A.ncols())
(4, 8)
sage: A in sage.matrix.matrix_space.MatrixSpace(ZZ, 4, 8)
True
sage: A.rank()
3
sage: A==A.rref()
True

For more, see the documentation of the "random_rref_matrix()"
function.  In the notebook or at the Sage command-line, first
execute the following to make this further documentation available:

from sage.matrix.constructor import random_rref_matrix

Random matrices with predictable echelon forms.  The
"algorithm='echelonizable'" keyword, along with a requested rank
("rank") and optional size control ("upper_bound") will return a
random matrix in echelon form.  When the base ring is "ZZ" or "QQ"
the result has integer entries, whose magnitudes can be limited by
the value of "upper_bound", and the echelon form of the matrix also
has integer entries.  Other exact rings may be also specified, but
there is no notion of controlling the size.  Square matrices of
full rank generated by this function always have determinant one,
and can be constructed with the "unimodular" keyword.

sage: A=random_matrix(QQ, 4, 8, algorithm='echelonizable', rank=3, upper_bound=60); A # random
sage: A.base_ring()
Rational Field
sage: (A.nrows(), A.ncols())
(4, 8)
sage: A in sage.matrix.matrix_space.MatrixSpace(ZZ, 4, 8)
True
sage: A.rank()
3
sage: all([abs(x)<60 for x in A.list()])
True
sage: A.rref() in sage.matrix.matrix_space.MatrixSpace(ZZ, 4, 8)
True

For more, see the documentation of the
"random_echelonizable_matrix()" function.  In the notebook or at
the Sage command-line, first execute the following to make this
further documentation available:

from sage.matrix.constructor import random_echelonizable_matrix

The "x" and "y" keywords can be used to distribute entries
uniformly. When both are used "x" is the minimum and "y" is one
greater than the maximum.

sage: random_matrix(ZZ, 4, 8, x=70, y=100)
[81 82 70 81 78 71 79 94]
[80 98 89 87 91 94 94 77]
[86 89 85 92 95 94 72 89]
[78 80 89 82 94 72 90 92]

If only "x" is given, then it is used as the upper bound of a range
starting at 0.

sage: random_matrix(ZZ, 5, 5, x=25)
[20 16  8  3  8]
[ 8  2  2 14  5]
[18 18 10 20 11]
[19 16 17 15  7]
[ 0 24  3 17 24]

To control the number of nonzero entries, use the "density" keyword
at a value strictly below the default of 1.0.  The "density"
keyword is used to compute the number of entries that will be
nonzero, but the same entry may be selected more than once.  So the
value provided will be an upper bound for the density of the
created matrix.  Note that for a square matrix it is only necessary
to set a single dimension.

sage: random_matrix(ZZ, 5, x=-10, y=10, density=0.75)
[-6  1  0  0  0]
[ 9  0  0  4  1]
[-6  0  0 -8  0]
[ 0  4  0  6  0]
[ 1 -9  0  0 -8]

For a matrix with low density it may be advisable to insist on a
sparse representation, as this representation is not selected
automatically.

sage: random_matrix(ZZ, 5, 5, density=0.3, sparse=True)
[ 4  0  0  0 -1]
[ 0  0  0  0 -7]
[ 0  0  2  0  0]
[ 0  0  1  0 -4]
[ 0  0  0  0  0]

Random rational matrices:

sage: random_matrix(QQ, 2, 8, num_bound=20, den_bound=4)
[ -1/2 6 13 -12 -2/3 -1/4 5 5]
[ -9/2 5/3 19 15/2 19/2 20/3 -13/4 0]
Random matrices over other rings.  Several classes of matrices have
specialized "randomize()" methods.  You can locate these with the
Sage command:

search_def('randomize')

# Gershgorin Disks

The Gershgorin circle theorem roughly states that the eigenvalues of an $n \times n$ matrix $A$ lie inside the union of $n$ disks, where the $i$-th disk is centred at $A_{i,i}$ and its radius is the sum of the absolute values of the off-diagonal entries of the $i$-th row of $A$. Applying the same theorem to $A^{\top}$ gives a second family of disks based on column sums, so in the code below each disk’s radius is taken to be the smaller of its row and column sums. The following Sage code draws the disks for a given matrix.
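As a quick numerical sanity check of the theorem, here is a plain numpy sketch (standing in for Sage) verifying that every eigenvalue of a random complex matrix lies in at least one row disk:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal row sums

evals = np.linalg.eigvals(A)
# each eigenvalue must lie in at least one of the row disks
in_some_disk = [any(abs(l - centers[i]) <= radii[i] + 1e-9 for i in range(4))
                for l in evals]
print(all(in_some_disk))  # True
```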

def Gershgorin(A, evals=False):
    # A is a square matrix
    # Output is the Gershgorin disks of A

    from sage.plot.circle import Circle
    from sage.plot.colors import rainbow

    n = A.ncols()
    Colors = rainbow(n, 'rgbtuple')
    E = point([])

    B = A.transpose()
    # for each i: (center, radius), radius being the smaller of the row and column sums
    R = [ (A[i-1,i-1], min(sum([abs(a) for a in (A[i-1:i]).list()]) - abs(A[i-1,i-1]),
                           sum([abs(a) for a in (B[i-1:i]).list()]) - abs(B[i-1,i-1])))
          for i in range(1, n+1) ]
    C = [ circle((real(R[i][0]), imaginary(R[i][0])), R[i][1], color="black") for i in range(n) ]
    if evals == True:
        E = point(A.eigenvalues(), color="black", size=30)
    CF = [ circle((real(R[i][0]), imaginary(R[i][0])), R[i][1], rgbcolor=Colors[i], fill=True, alpha=1/n) for i in range(n) ]

    (sum(C) + sum(CF) + E).show()


And here is a sample output:

A =  random_matrix(CDF,4)
Gershgorin(A)

Gershgorin disks for a randomly generated $4\times 4$ complex matrix

If you want to see the actual eigenvalues, call the function as below:

Gershgorin(A,evals=True)