CPS222 Lecture: Graphs Last Revised 3/17/2015
Objectives:
1. To introduce basic graph terminology (e.g. vertex, edge, directed vs
undirected graphs (digraphs), (in/out) degree, incident, head, tail,
adjacent (from/to), loop edge, path, path length, simple path, directed path,
cycle, acyclic, subgraph, connected (component), reachability, rooted,
network, spanning tree, back edge)
2. To introduce standard internal representations for graphs (e.g. adjacency
matrix, edge list, adjacency list, adjacency multilist).
3. To introduce standard graph algorithms (e.g. DFS, BFS, Minimum cost spanning
tree (Kruskal's algorithm), shortest path (Dijkstra's algorithm), transitive
closure, topological sort)
4. To introduce the use of graphs in planning problems (AOE, AOV).
5. To introduce network flow problems.
Materials: Handout with demonstration versions of the following:
1. Read a graph from a file into an adjacency matrix
2. Read a graph from a file into an adjacency list
3. DFS on an adjacency matrix
4. BFS on an adjacency list
5. Warshall's transitive closure algorithm
6. Topological sorting (without queue of vertices, with queue of vertices)
I. Introduction
- ------------
A. The general trend in our discussion has been to move from the simplest
most specific data structures to increasingly flexible and general kinds
of structures. Thus, we have moved from primitive structures through
sequential structures to a particular form of branching structure, the
tree. We now focus on the most general kind of branching structure,
the graph. So general is this structure that all of the others we have
studied turn out to be just special kinds of graphs. Even apart from
this consideration, graphs are probably the most widely used of all
mathematical structures.
B. Formally, a graph consists of a set of VERTICES (often denoted V) and a
set of EDGES (often denoted E) which connect the vertices. Each edge
is, in fact, a (possibly ordered) pair of vertices.
ex: A ----- B ---- C
\ \______ D _/
\ /
\_ E _/
V = { A, B, C, D, E }
E = { (A,B), (A,D), (A,E), (B,C), (C,D), (D,E) }
1. In an undirected graph, the order of the edges in the pairs does
not matter. The above example has been drawn as an undirected
graph - hence the edges could just as well be listed as (B, A) etc.
2. In a directed graph (digraph), the edges are ORDERED pairs. This
can be symbolized by drawing the edges with arrow heads, and by
enclosing the pairs in angle brackets rather than parentheses:
ex: The following is a digraph having the same general shape as
the graph we have been discussing:
A ----> B ------> C
^ \______> D <_/
\ /
\_ E <_/
V = { A, B, C, D, E }
E = { <A,B>, <A,D>, <B,C>, <C,D>, <D,E>, <E,A> }
3. In an edge <V1,V2> of a digraph, V1 is called the TAIL and V2 is
called the HEAD (cf. the way we draw the edge).
4. In either case, we say that an edge e is INCIDENT ON a vertex v if
v is either the tail or the head of the edge.
5. In either case, in some of our analyses of efficiency of various
graph algorithms we will let n stand for the cardinality of V and
e for the cardinality of E. (We will say, for example, that some
graph algorithms are O(some function of n), others are O(some function
of e), and some have behavior like O(n+e).)
6. An edge from a vertex to itself is sometimes called a LOOP edge.
Example: _
/ \
\ /
A
This edge would be represented by the unordered pair (A, A), or by
the ordered pair <A,A>. (Such an edge is relatively rare.)
C. Other terminology
1. In an undirected graph, we say that vertices V1, V2 are ADJACENT if
(V1,V2) or (V2,V1) is in E. In a digraph, we say that V1 is ADJACENT
TO V2 (note implicit direction) if <V1,V2> is in E, and we likewise
say that V2 is ADJACENT FROM V1.
2. In an undirected graph, the DEGREE of a vertex is the number of
vertices it is adjacent with. In a digraph, the OUTDEGREE of a vertex
is the number of vertices it is adjacent to, and the INDEGREE of a
vertex is the number of vertices adjacent to it.
ex: in the undirected graph above, A and B are adjacent, A and D are
adjacent, and A and E are adjacent, so the degree of A is 3.
in the digraph above, A is adjacent to B and A is adjacent to D,
so its outdegree is 2. E is adjacent to A, so A's indegree is 1.
3. In a graph, a PATH from vertex Vs to vertex Vf is a sequence of vertices
Vs, V1, V2 .. Vn, Vf s.t. (Vs,V1), (V1,V2) .. (Vn,Vf) are in E.
In a digraph, a DIRECTED PATH from vertex Vs to vertex Vf is a sequence
of vertices Vs, V1, V2 .. Vn, Vf s.t. <Vs,V1>, <V1,V2> .. <Vn,Vf> are
in E. (Note - if Vs is adjacent to Vf, then Vs,Vf is a path from
Vs to Vf).
4. The LENGTH of a path is the number of edges on it.
Note: For some algorithms, it is helpful to think of each vertex as
being connected to itself by a path of length 0. Of course, no edge is
explicitly involved in such a case. (In general, though, a vertex
is not regarded as being connected to itself unless there is an
explicit loop edge.) For other algorithms, it turns out to be
expedient to consider the length of the path from a vertex to itself
to be infinity!
5. A SIMPLE PATH is one in which all of the vertices (save possibly the
first and last) are unique.
(Some writers call such a path ELEMENTARY, and use the term simple
for a path in which all the edges, but not necessarily the nodes, are
unique.)
6. A CYCLE is a simple path of length at least 1 from some vertex to
itself. In an undirected graph, in addition to requiring that the path
be simple we also require that all of the edges be unique - otherwise
every edge in an undirected graph would give rise to a cycle between
the two nodes it connects! (Note: Weiss distinguishes between a
simple cycle (which is a simple path) and a cycle (which need not be
simple). We will use the term cycle to refer to what Weiss calls a
simple cycle.)
7. A graph that contains no cycles is ACYCLIC.
8. A subgraph of a graph G is a graph G' = (V', E') such that V' is a
subset of V and E' is a subset of E. (Of course, only vertices in V'
may appear in the pairs in E' if G' is to be a graph.)
9. A graph that contains a path connecting any pair of vertices V1,V2
(where V1 <> V2) is CONNECTED. A digraph that contains a directed
path from each vertex to each other vertex is STRONGLY CONNECTED.
ex: our graph is connected and our digraph is strongly connected.
a. If a digraph is not strongly connected, we sometimes say it is
WEAKLY CONNECTED if the corresponding undirected graph is connected.
This corresponding undirected graph is one that contains (V1,V2) in
its set of edges iff <V1,V2> and/or <V2,V1> is in the set of edges
of the digraph.
b. If a digraph is not strongly connected, we sometimes say it is
ROOTED if there exists at least one vertex R such that there is
a directed path from R to each other vertex in the graph. Note
that a strongly connected digraph is always rooted, but the reverse
is not necessarily so. However, if a digraph is rooted then the
corresponding undirected graph is always connected.
10. Whether or not a digraph is connected, we say that a vertex B is
REACHABLE from a vertex A if there is a directed path from A to B.
11. In an unconnected graph, a CONNECTED COMPONENT is a connected subgraph
of maximal size. In an unconnected digraph, a STRONGLY CONNECTED
COMPONENT is a strongly connected subgraph of maximal size.
ex: The graph A---B----C----D E----F----G
is not connected. The connected components are
A---B----C----D and E----F----G
A--B--C is not a connected component because it is not of maximal size.
D. Recall that we defined a graph in terms of a SET of edges, E. This
implies that there cannot be more than one edge connecting any pair
of vertices in a graph, or more than one edge connecting any pair of
vertices in the same direction in a digraph. A graph-like structure in
which this restriction is not met is called a MULTIGRAPH.
E. A graph/digraph in which each edge has a numerical value (weight or
cost) associated with it is called a NETWORK.
Example: Transportation network - edge costs are distances or fares
(In the example here, approximate distance from center of
town to center of town in miles.)
_____________ WENHAM
| / 5
| 4 BEVERLY
| / 3 |
DANVERS | 2
\ 3 |
SALEM
Note: sometimes a multigraph can be represented by a network in which
the weight assigned to each edge is the number of occurrences of
the corresponding edge in the multigraph.
F. Note that some familiar structures are in fact special kinds of graphs:
1. A list is an acyclic rooted digraph in which every vertex save the
root has indegree one and every vertex save one has outdegree one.
2. A tree is an acyclic rooted digraph. Alternately, if we are not
concerned about specifying the root explicitly, we can think of a
tree as a connected acyclic graph. Such a tree is sometimes called
a free tree, because any vertex can serve as the root.
II. External and Internal representations of graphs
-- -------- --- -------- --------------- -- ------
A. Because of the many applications of graphs, it turns out to be
advantageous to consider several different ways of representing a graph
in memory. Often, it will turn out that one of these representations
will be vastly superior to others for a given application.
B. For representing a graph in an external file (e.g. as input to a
program), a simple representation is as follows:
1. First line of the file: two integers - number of vertices (n), number
of edges (e).
2. Next n lines - information on each of the vertices. (Can be omitted
if vertices are simply labeled by some scheme such as 1, 2, 3 .. or
A, B, C...)
3. Next e lines - information on each of the edges:
a. Tail vertex
b. Head vertex
c. Weight and/or other information if needed.
ex: our four-town network:
4 5
BEVERLY
DANVERS
SALEM
WENHAM
BEVERLY DANVERS 3
BEVERLY SALEM 2
BEVERLY WENHAM 5
DANVERS SALEM 3
DANVERS WENHAM 4
(Note: order of listing towns when describing an edge is immaterial
unless the graph is regarded as directed - then we list the tail first,
then the head.)
C. An approach that provides very fast access is an ADJACENCY MATRIX.
If there are n vertices, then the matrix will have n rows and n columns.
The elements of the matrix may be of type boolean, or may be pointers
to nodes storing information about the edge or null if there is no edge.
We consider the use of boolean values here - the book gives an example
in which pointers to edge nodes are used.
1. For a graph, matrix elements [i, j] and [j, i] are both T iff (i, j) is
in E.
ex: A B C D E
A F T F T T
B T F T F F
C F T F T F
D T F T F T
E T F F T F
2. For a digraph, matrix element [i,j] will be T iff <i,j> is in E.
ex: A B C D E
A F T F T F
B F F T F F
C F F F T F
D F F F F T
E T F F F F
3. Note that for a graph, the adjacency matrix will be symmetrical
around the diagonal. Wasted space can be avoided by storing only
half the matrix. This is not an issue for a digraph, of course.
4. For a network, we can use pointers to edge nodes or a matrix in which
the elements are the weights or labels associated with the edges. If
no edge exists connecting a given pair of vertices, an empty string can
be stored as a label, or it may be expedient to store "infinity" as a
weight - i.e. the cost of going from one point to another along a
nonexistent path is infinite.
ex:
BEVERLY DANVERS SALEM WENHAM
BEVERLY "infinity" 3 2 5
DANVERS 3 "infinity" 3 4
SALEM 2 3 "infinity" "infinity"
WENHAM 5 4 "infinity" "infinity"
Note: in the above, it may seem reasonable to use a value of 0 for
distance from a town to itself. However, if the model is one of paths,
we may not wish our algorithms to explore the possibility of driving
around in circles! In practice, since we may not be able to represent
infinity per se (unless we are using IEEE floats or doubles), we use
an impossibly large value.
5. With an adjacency matrix, the following question is answered
easily (O(1)):
is x adjacent to y? (for a network): if so, what is the weight?
6. The following questions are O(n):
find all y that x is adjacent to (or that are adjacent to x)
degree of x in an undirected graph
indegree of x in a digraph
outdegree of x in a digraph
7. HANDOUT CODE: Create an adjacency matrix from a disk file
representation
Analysis?
ASK
Initially creating the representation is O(n^2) because we have to set
all elements of an n x n matrix to false! Therefore, overall cost
is O(n^2).
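The handout code itself is not reproduced in these notes, but the idea can be sketched in a few lines of Python (the function name and the file-contents-as-list-of-lines interface are assumptions of this sketch; "infinity" marks a missing edge, as discussed above):

```python
INF = float("inf")  # stands in for "infinity" (no edge)

def read_adjacency_matrix(lines):
    """Build a cost-adjacency matrix from the external format above:
    first line 'n e', then n vertex labels, then e lines 'tail head weight'."""
    header = lines[0].split()
    n, e = int(header[0]), int(header[1])
    labels = [lines[1 + i].strip() for i in range(n)]
    index = {label: i for i, label in enumerate(labels)}
    # O(n^2) initialization: every pair starts out unconnected
    matrix = [[INF] * n for _ in range(n)]
    for line in lines[1 + n : 1 + n + e]:
        tail, head, weight = line.split()
        i, j = index[tail], index[head]
        matrix[i][j] = float(weight)
        matrix[j][i] = float(weight)  # undirected: store both directions
    return labels, matrix
```

Note the O(n^2) initialization of the matrix, which dominates the overall cost whenever e is much smaller than n^2.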
D. The book discusses a representation that is not often used in practice -
the edge list.
1. Each edge is represented by an object that stores information about
the vertices it connects.
2. A single list of edge objects is kept.
3. While this is often more space efficient than an adjacency matrix
(if e<< n^2), it requires a scan of the edge list to determine what
edges are incident on a given vertex.
E. Adjacency list: A more efficient implementation results if we
represent each vertex by an object as well, and associate with each
vertex a linked list of edges incident to that vertex. The benefit of
this is that we can quickly find all the edges associated with a given
vertex by traversing the list, instead of having to look through possibly
hundreds of zero values to find a few ones in a row of an adjacency
matrix, or having to scan the edge list for the entire graph.
1. Normally what we do is use an array to represent the vertices. Each
array element contains the label on the vertex and possibly other
related information, plus a pointer to a linked list of nodes
describing edges of which the given vertex is the tail.
2. Each edge node contains the label on the tail and the head of the
edge, plus the weight if the graph is a network.
ex:
Beverly ------> Danvers ------> Salem --------> Wenham
3 2 5
Danvers ------> Beverly ------> Salem --------> Wenham
3 3 4
Salem --------> Beverly ------> Danvers
2 3
Wenham -------> Beverly ------> Danvers
5 4
3. Note that for a graph, each edge will appear in the adjacency list
twice - once for each of the vertices it is incident on. (cf the
symmetry of the adjacency matrix). This will not ordinarily happen
with a digraph, of course.
4. The following questions are now relatively easy. Though in the worst
case O(e), they tend toward O(e/n) if the number of edges incident
on a vertex does not vary too greatly for the graph:
find all y that x is adjacent to
degree of x in an undirected graph
outdegree of x in a digraph
5. However, the following question has become a bit harder (also O(e)
tending toward O(e/n)) - but it used to be O(1):
is x adjacent to y? (for a network): if so, what is the weight?
6. The following questions have become O(n+e) in a digraph - but not in
an undirected graph: (We have to follow the edge list for each
vertex and examine each edge node to see if our vertex is the head
of that edge.)
find all x that are adjacent to y
indegree of x
7. HANDOUT CODE: Create an adjacency list from a disk file
representation
Analysis?
ASK
O(n + e)
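Again, the handout is not reproduced here; a minimal Python sketch (representing the vertex array as a dict mapping each label to its list of edge nodes, which is an assumption about the representation) looks like this:

```python
def read_adjacency_list(lines):
    """Build an adjacency list: a dict mapping each vertex label to a
    list of (neighbor, weight) pairs.  Undirected, so each edge is
    entered twice - once on each incident vertex's list."""
    header = lines[0].split()
    n, e = int(header[0]), int(header[1])
    labels = [lines[1 + i].strip() for i in range(n)]
    adj = {label: [] for label in labels}   # O(n) initialization
    for line in lines[1 + n : 1 + n + e]:   # O(e) edge insertions
        tail, head, weight = line.split()
        adj[tail].append((head, float(weight)))
        adj[head].append((tail, float(weight)))
    return adj
```

Initialization is O(n) and each edge is inserted twice (once per incident vertex), giving the O(n + e) cost noted above.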
F. Adjacency multilists
1. With adjacency lists, each edge in an undirected graph appears twice
in the list. Also, there is an obvious asymmetry for digraphs - it
is easy to find the vertices a given vertex is adjacent to (simply
follow its adjacency list), but hard to find the vertices adjacent to
a given vertex (we must scan the adjacency lists of all vertices).
These can be rectified by a structure called an adjacency multilist.
2. An adjacency multilist is similar to adjacency lists, except that
each edge node appears on two linked lists - one for each of the
vertices it is incident on. In addition, in a digraph each vertex
has two lists associated with it - one of edges of which it is the
tail, and one of edges of which it is the head.
3. The following shows the adjacency multilist for our example
DIRECTED graph:
/--A B C D <--- E
/ | /| /| /| / |
/ ---+-------/ | / | / | / |
/ / | | / | / | / |
/ /-> A,B ---+-----/ | / | / |
/ | / | | / | / |
/ | /-->B,C | / + |
/ ----+-------------------+--/ /| |
/ / | | / | |
/ /---> A,D -------------> C,D / | |
/ / | |
/ /-> D,E |
/-----------------------------------------------------> E,A
III. Graph Searches and Spanning Trees
--- ----- -------- --- -------- -----
A. Introduction
1. When we discussed trees, we saw that one class of operations that was
very important was traversal - the systematic visiting of every node in
the tree. For graphs, the corresponding operations are called searches.
In a search, we systematically visit as many vertices as possible and
as many edges as possible, starting from a given starting vertex.
(Note: the term "search" is somewhat confusing. We can use a search
algorithm to try to find a vertex meeting a certain criterion, or
to try to find a path to a certain vertex; but often a "search" is
really just a traversal of the entire graph!)
2. There are two basic search orders: depth first search (DFS) and
breadth-first search (BFS).
a. In DFS, we start at a vertex and move as far as we can down one
path from the vertex before exploring the other paths.
ex: on our sample undirected graph, starting at A, we would visit
vertices in the order A,B,C,D,E
b. In BFS, we explore all of the paths emanating from our starting
vertex before progressing further.
ex: on our sample undirected graph, starting at A, we would visit
vertices in the order A,B,D,E,C.
c. Note that either method requires some method of marking vertices
so that we do not visit them more than once. (This can be done
by maintaining a boolean array indexed by vertex number, initialized
to false before the search and set to true when the node is
visited. Or, if the order of visitation is important, we can use
an array that records when each node was visited, initially set to
0.)
d. Note that pre-order traversal on a tree is a DFS, and level-order
traversal on a tree is a BFS. Not surprisingly, DFS algorithms
make use of a stack or recursion, and BFS algorithms use a queue.
e. Note that if a graph is not connected (strongly connected), then
a search will only visit some of the vertices.
B. Depth First Search
1. Preliminary remarks
a. We will do an example of a DFS on an adjacency matrix, and a
BFS on an adjacency list, to show both representations. There is
no particular inherent reason why one search is easier than
another on a given representation - I just want to show an
example of both searches and both representations with only two
code samples!
b. Note that our sample algorithms will work equally well on an
undirected graph or a digraph, since, for a graph, we represent each
edge twice, once for each direction; for a digraph, we represent
it only for the specified direction.
2. Code for DFS on an adjacency matrix - HANDOUT
Analysis?
ASK
O(n^2) because of the representation.
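The handout DFS can be sketched as follows (a hypothetical Python version operating on a boolean adjacency matrix, with vertices identified by row index):

```python
def dfs(matrix, start):
    """Depth-first search on a boolean adjacency matrix; returns the
    vertices (as indices) in the order visited.  O(n^2): for each of
    the n vertices visited we scan an entire row of n entries."""
    n = len(matrix)
    visited = [False] * n
    order = []
    def visit(v):
        visited[v] = True
        order.append(v)
        for w in range(n):               # scan row v for neighbors
            if matrix[v][w] and not visited[w]:
                visit(w)
    visit(start)
    return order
```

On the matrix for our sample undirected graph, starting at A this visits the vertices in the order A, B, C, D, E, matching the earlier example.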
C. Breadth First Search
1. Code for BFS on an adjacency list - HANDOUT
Analysis?
ASK
O(n + e) - each edge is visited once while processing its tail vertex
2. Note that DFS would also be O(n + e) on this representation, and
BFS would also be O(n^2) on an adjacency matrix
3. Searches on an adjacency multilist are similar to those on an
adjacency list. To be sure of handling digraphs correctly, we
must traverse the "tail" list for each vertex.
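A sketch of BFS on an adjacency list, in the same hypothetical Python style (here the adjacency list is simplified to a dict mapping each vertex label to a list of neighbor labels):

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search on an adjacency list.  O(n + e): each
    vertex is enqueued at most once, and each edge is examined once
    while processing its tail vertex."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                visited.add(w)       # mark when enqueued, not dequeued
                queue.append(w)
    return order
```

Starting at A on our sample undirected graph this visits the vertices in the order A, B, D, E, C, as in the earlier example.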
D. There are a number of important graph problems which are easily
solved by using either one of the searches:
1. Determining if an undirected graph is connected (could be important
in problems where a graph represents a communication or
transportation system):
a. Do a DFS or a BFS (either one will work) starting at any vertex.
b. Examine the visited array entries for all vertices
- if all are true, then the graph is connected
- if any is false, then the graph is not connected.
2. Finding connected components.
a. Example: the FORTRAN equivalence statement gives rise to equivalence
classes, which are connected components of a graph whose vertices
are all the variables occurring in the program.
- e.g. EQUIVALENCE (A,B,E), (D,F), (G,H), (A,I), (F,J), (J,G)
could be represented by the graph:
A--B--E
\
---I
D--F
\---J
/
/----/
/
G--H
yielding equivalence classes: (A,B,E,I), (D,F,G,H,J)
b. Method:
mark all vertices not visited
while not all vertices visited do
begin
pick any unvisited vertex v
do a DFS or BFS starting at v. All vertices visited
on this search form a connected component
end
Note: it might be convenient to store an array of component
numbers, one per vertex, initialized to zero. At any point in
the search, a zero component number means the vertex has not
been visited; non-zero means it has. Mark all vertices visited
on the first search as component 1; those visited on the second
search as component 2, etc. Or - you could just output vertex
names during each search, if all that was needed was a list of
vertices in each component.
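The component-numbering method just described might be sketched as follows (an illustrative Python version using an iterative DFS; the dict-based adjacency list is an assumption of the sketch):

```python
def connected_components(adj):
    """Label each vertex with a component number (1, 2, ...) by running
    a search from each not-yet-visited vertex.  A component number of
    0 means the vertex has not been visited yet."""
    comp = {v: 0 for v in adj}
    compnum = 0
    for start in adj:
        if comp[start] == 0:
            compnum += 1
            comp[start] = compnum
            stack = [start]          # iterative DFS from this vertex
            while stack:
                v = stack.pop()
                for w in adj[v]:
                    if comp[w] == 0:
                        comp[w] = compnum
                        stack.append(w)
    return comp
```

On the FORTRAN equivalence example, the vertices A, B, E, I end up with one component number and D, F, G, H, J with another.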
3. Spanning trees: A spanning tree of a connected graph G is an
acyclic connected subgraph of G, containing all the vertices of G.
(Often, when we speak of a spanning tree, we refer chiefly to
the edges comprising such a subgraph.)
a. Example: in designing a communication network, if one treats the
stations as vertices of a graph and the links as edges, then one
need only build the links needed to form a spanning tree in order
to have communication possible between all stations.
b. Method: Do a DFS or a BFS of G, starting at an arbitrary vertex.
include an edge in the spanning tree if it is followed in the
search (i.e. its head is not visited at the time it is
encountered.)
c. A note on terminology: the edges of a graph that are not included
in a given spanning tree are sometimes called back edges. Note
that adding any back edge to a spanning tree creates a cycle.
- Ex: an electrical circuit can be represented by a graph:
+ R1 + R2
O---/\/\---O---/\/\---O
| | + | +
+ | \ \
V / R3 / R4
| S \ \
| | |
O----------O----------O
if we obtain a spanning tree, then we can form a set of
independent cycles by adding one back edge at a time to the tree.
Each cycle gives rise to a circuit equation by using Kirchhoff's
voltage law (the sum of the voltages around a closed path is 0) -
and each of these circuit equations are independent.
In the above, we may take our spanning tree to be:
O--/\/\---O---/\/\---O
| | |
| \ \
V / /
| \ \
| | |
O O O
with two back edges. Adding the first gives us the equation:
- Vs + V1 + V3 = 0 or V1 + V3 = Vs
while the second gives us:
-V3 + V2 + V4 = 0 or V3 = V2 + V4
since we have four unknowns, two more equations are needed;
these can be obtained from Kirchhoff's current law at two of the
nodes which connect only to resistors:
V1/R1 - V3/R3 - V2/R2 = 0 and
V2/R2 - V4/R4 = 0
4. Biconnectivity and articulation points: We say that a connected
graph is BICONNECTED if there is no single vertex whose removal
would disconnect the graph. If a connected graph is not biconnected
then each vertex whose removal would disconnect the remainder of
the graph is called an ARTICULATION POINT.
Example: B-----F B---- F
/ \ / / \ /
A C-E A C-E
\ / \
D D
Biconnected Not biconnected - articulation points A, B
a. Biconnection is a desirable property for reliable systems like
computer networks. An articulation point represents a point of
maximum risk to the system if it fails.
b. An approach to finding articulation points is based on DFS spanning
trees and back edges. If we use this algorithm on a connected graph
and find no articulation points, it is biconnected.
i. Do a DFS traversal (starting anywhere) to construct a DFS spanning tree
• Number the vertices in the order visited. (For a vertex v, we will refer
to this value as Num(v).)
• Label the edges used with the direction they were followed. We will refer
to these edges as the "tree edges" and the edges that were not used as
"back edges". (Of course, back edges are not labeled with a direction.)
Note that a labeled edge defines a parent-child relationship between
the vertices it is incident on.
ii. Do a reverse DFS traversal using the same spanning tree.
• Assign a value Low(v) to each vertex. This value is the smallest of
the following
- Num(v)
- The Low value of any child in the DFS spanning tree
- The Num value of any node reachable by following a back edge
[ Note that the effect of the last two tests is to consider all the
neighbors of the vertex in question except its immediate parent ]
• Note that, by doing a reverse DFS after the original forward DFS, we can
be sure that all of the values needed to do this will have already been
assigned -
- all the vertices have Num values as a result of the forward DFS, and
- all of the children have low values as a result of traversing in reverse
iii. Determine whether each node is an articulation point as follows
If the node in question (we will call it v) is the root of the spanning
tree (start vertex of the DFS)
it is an articulation point iff it has two or more children in the
spanning tree
else
it is an articulation point iff it has a child w in the spanning tree
such that Low(w) >= Num(v)
iv. Note that, if you are clever, all three of the above can be done by a
single recursive traversal of the graph!
Examples: work out the algorithm for the two trees below:
Examples: 3/1 B-----F 4/1 3/3 B---- F 4/3
/ \ / / \ /
Vertices labelled 2/1 A C-E 5/1 2/2 A C-E 5/3
Num/Low. Assume \ / 6/1 \ 6/3
DFS starts with D D D
in each case 1/1 1/1
Edges go DA AB BF FE EC
in each case
No articulation A an articulation
points point due to B
B an articulation
point due to F
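Steps i-iii can indeed be folded into a single recursive traversal, as item iv suggests. A hypothetical Python sketch (dict-based adjacency list; num and low hold the Num/Low values described above):

```python
def articulation_points(adj):
    """Find the articulation points of a connected undirected graph by
    one recursive DFS that assigns Num(v) and Low(v) as described."""
    num, low, points = {}, {}, set()
    counter = [0]

    def visit(v, parent):
        counter[0] += 1
        num[v] = low[v] = counter[0]
        children = 0
        for w in adj[v]:
            if w not in num:                    # (v,w) is a tree edge
                children += 1
                visit(w, v)
                low[v] = min(low[v], low[w])
                if parent is not None and low[w] >= num[v]:
                    points.add(v)               # test for non-root vertices
            elif w != parent:                   # (v,w) is a back edge
                low[v] = min(low[v], num[w])
        if parent is None and children >= 2:
            points.add(v)                       # root test: 2+ tree children

    visit(next(iter(adj)), None)                # start anywhere
    return points
```

As a sanity check: in a simple path A-B-C the middle vertex is an articulation point, while a triangle (a cycle on three vertices) has none.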
E. Minimum-cost spanning tree: we have seen that we can use a DFS or a BFS
to find a spanning tree of any connected graph. Of course, there will
typically be many spanning trees possible for a given graph; and the
one we find will be dependent on where we start and which search (DFS or
BFS) we use.
1. If our graph is a network, a relevant problem is to find the minimal
cost spanning tree. This is a spanning tree for which the sum of the
weights of the edges included is minimal.
2. Such a tree is of interest in designing transportation and/or
communication networks. Given that we want to have a connection
between every pair of nodes at minimal total cost, we could create
a network in which each edge has as its weight the cost of building
a link between the two vertices on which it is incident. We then find
the minimal cost spanning tree.
3. The book discussed two algorithms for this - one by Prim and one by
Kruskal. We will consider only the latter here:
- construct a list of edges, E, in increasing order of cost
- initialize a set T of tree edges to []
while (# of edges in T < n - 1) /* A spanning tree has n-1 edges */
select the edge of minimum weight in E, and delete it from E
if this edge does not form a cycle with the edges already in T,
then add it to T
4. One critical step is the determination of whether a candidate edge
forms a cycle. This can be handled by associating a component number
(initially 0) with each vertex.
compnum = 0;
while (# of edges in T < n - 1)
select the edge of minimum weight in E, and delete it from E
if both vertices incident on this edge have component number 0 then
include this edge in T
compnum++
set the component number of each vertex to compnum
else if one vertex has component number 0 then
include this edge in T
set the component number of the 0 vertex to that of the other
else if the two vertices have different component numbers then
include this edge in T
let L be the lower and H the higher of the two component numbers
set the component number of all vertices currently marked H to L
else
this edge would form a cycle, so ignore it
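The component-number version of Kruskal's algorithm can be sketched directly in Python (vertex numbering from 0 and the (weight, v1, v2) edge tuples are assumptions of this sketch):

```python
def kruskal(n, edges):
    """Minimum-cost spanning tree by Kruskal's algorithm, using the
    component-number scheme above in place of a union-find structure.
    edges is a list of (weight, v1, v2); returns the tree edges."""
    comp = [0] * n            # 0 = not yet in any tree component
    compnum = 0
    tree = []
    for weight, v1, v2 in sorted(edges):        # increasing weight order
        if comp[v1] == 0 and comp[v2] == 0:
            compnum += 1                        # start a new component
            comp[v1] = comp[v2] = compnum
        elif comp[v1] == 0:
            comp[v1] = comp[v2]                 # join v1 to v2's component
        elif comp[v2] == 0:
            comp[v2] = comp[v1]
        elif comp[v1] != comp[v2]:
            lo, hi = sorted((comp[v1], comp[v2]))
            comp = [lo if c == hi else c for c in comp]  # merge components
        else:
            continue            # same component: edge would form a cycle
        tree.append((weight, v1, v2))
        if len(tree) == n - 1:  # a spanning tree has n-1 edges
            break
    return tree
```

On the four-town network this selects BEVERLY-SALEM (2), BEVERLY-DANVERS (3), and DANVERS-WENHAM (4), for a total cost of 9.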
IV. Transitive Closure and Shortest Path
-- ---------- ------- --- -------- ----
A. The transitive closure of a graph G is a graph G+, having the same
vertices as G, and having an edge from each vertex to
each other vertex that is reachable from it.
ex: G: A ---> B ---> C
\
\---> D
/---------> \
G+: A ---> B ---> C
\ \
\ \--> D
\-------> /
1. The transitive closure may be obtained directly from the adjacency
matrix by an iterative algorithm due to Warshall
HANDOUT CODE: Warshall's algorithm on an adjacency matrix
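The handout is not reproduced here, but Warshall's algorithm is short enough to sketch in full (a hypothetical Python version on a 0/1 adjacency matrix):

```python
def warshall(matrix):
    """Warshall's transitive-closure algorithm on an adjacency matrix:
    after the pass for intermediate vertex k, a[i][j] is true iff j is
    reachable from i using only intermediate vertices numbered <= k."""
    n = len(matrix)
    a = [row[:] for row in matrix]       # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                a[i][j] = a[i][j] or (a[i][k] and a[k][j])
    return a
```

Applied to the matrix for G above (edges A->B, B->C, B->D), it adds the entries for A->C and A->D, giving G+. The three nested loops make it O(n^3).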
2. Observe that if a graph is connected (strongly connected), then its
transitive closure will contain an edge from each node to each other
node. In such cases, a closely related question is that of shortest
path. (This can also be asked for a non-connected graph; but in some
cases the answer will be infinity.) If the graph is not a network,
"shortest" will be measured in terms of number of edges traversed;
if it is a network, "shortest" will be measured in terms of minimum
sum of weights of edges traversed. There are two questions that we
can ask based on this issue:
a. Given a pair of vertices, find the shortest path from one to the
other.
b. We can define a matrix dist[vertexno][vertexno] such that
dist[i][j] = the length of the shortest path from i to j, or
"infinity" if there is no path. Note that here we are only
concerned with the length of the shortest path, not with listing
the vertices comprising it, though determining the actual path
is a simple extension to the algorithm.
3. It turns out this problem is most easily solved by treating it as a
collection of n subproblems - one for each possible start vertex.
(Actually, in many cases, it turns out that we are only interested in
the solution for a particular start vertex, so this is fine.)
B. Single-Source All Destinations Shortest Path
1. The basic problem is this, then: given a cost-adjacency matrix for a
network, and a specified vertex v in the network, generate a matrix
dist[vertexno] such that dist[i] is the cost of the shortest path from
v to i.
a. Actually generating the paths is only slightly more complex.
b. If we want to solve the problem for all possible starting vertexes,
we just apply the solution to this subproblem repeatedly.
2. The difficulty of the problem depends on whether or not we require all
edges to have non-negative costs. (An edge with a negative cost
creates the possibility that ADDING an edge to the path makes the path
shorter.) We will consider here only the case where edge costs are
non-negative - which is the normal case.
3. The algorithm we will discuss is called Dijkstra's algorithm. It goes
like this: we will generate all the paths in increasing order
of length. The basic algorithm will involve a loop; on each
iteration we generate one new path. We will associate a flag with each
vertex called known, which will become true when the shortest path to
that vertex has been found. Initially, just the shortest path to the
start vertex is known.
/-------------11---------->\
/ \
a. Example: A ---5---> B ---3---> C ---1---> E
\ /
\--2---> D ---3->/
starting at A, the shortest paths are (in the order in which we
would find them):
Initially, just A is known
A .. B: 5 Set B to known
A .. D: 7 Set D to known
A .. C: 8 Set C to known
A .. E: 9 Set E to known
b. The way we will find the next shortest path is as follows: we will
keep in dist the cost of the shortest path to each vertex that we
have found thus far. (Initially, this will be either the cost of
a direct path from v or "infinity" if there is none). As we generate
each new path, we will look at all vertices to which its terminal
vertex is adjacent. If the sum of the length of the newly
generated path plus the cost of that edge is less than the cost
of the best path thus far, then we will update dist. Finally,
on each iteration we will choose the vertex whose shortest path is
not yet known, having the minimum dist value to be the terminal of
our new path.
Ex: dist[A] [B] [C] [D] [E]
0 k 5 infinity infinity 11
0 k 5 k 8 7 11
0 k 5 k 8 7 k 10
0 k 5 k 8 k 7 k 9
0 k 5 k 8 k 7 k 9
4. The algorithm is easily extended to record the paths as follows: with
each vertex, record its IMMEDIATE PREDECESSOR on the shortest path.
These form a linked list that can be used to generate the paths.
Example: for the above, the predecessors are
[A] -- none
[B] A
[C] B
[D] B
[E] C
The shortest path from A to E - in reverse order - is
E C B A
which we can now reverse to give the path
A B C E
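Putting the pieces together, Dijkstra's algorithm with predecessor recording might be sketched as follows (a hypothetical Python version; the adjacency list maps each vertex to (neighbor, cost) pairs):

```python
INF = float("inf")

def dijkstra(adj, start):
    """Dijkstra's single-source shortest-path algorithm as described
    above; returns (dist, pred), where pred records each vertex's
    immediate predecessor on its shortest path from start."""
    dist = {v: INF for v in adj}
    pred = {v: None for v in adj}
    known = set()
    dist[start] = 0
    while len(known) < len(adj):
        # choose the unknown vertex with minimum dist value
        v = min((u for u in adj if u not in known), key=lambda u: dist[u])
        if dist[v] == INF:
            break                     # remaining vertices are unreachable
        known.add(v)
        for w, cost in adj[v]:
            if w not in known and dist[v] + cost < dist[w]:
                dist[w] = dist[v] + cost
                pred[w] = v           # shorter path to w found through v
    return dist, pred

def path_to(pred, v):
    """Follow predecessor links back from v, then reverse the result."""
    path = []
    while v is not None:
        path.append(v)
        v = pred[v]
    return path[::-1]
```

On the example network above this produces dist values 0, 5, 8, 7, 9 for A..E and recovers the shortest path A B C E.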
V. Use of Graphs in Planning and Scheduling of Activities.
- --- -- ------ -- -------- --- ---------- -- ----------
A. Activity on Vertex Networks
1. Definition: an activity on vertex (AOV) network is a directed network
in which each vertex models some subtask that must be completed as
part of an overall task, and each edge models a prerequisite
relationship: the activity at its tail must be completed before the
activity at its head can begin.
2. Example: Consider the following subset of a CS curriculum:
COURSE PREREQUISITE COURSE(S)
Calculus -
MAT230 -
CPS121 -
CPS122 CPS121
CPS221 CPS122
CPS222 CPS122
CPS311 CPS122
CPS320 CPS122, MAT230
CPS403 -
CPS491 -
CPS492 CPS491
GRADUATION (All of the above plus electives)
This could be modelled by the following, where if both A and
B are prerequisites for graduation, but A is also a prerequisite for
B, then we only show a link from B to graduation, since the
dependence on A is implied by the link from A to B
(Electives) -----------------------------------------\
Calculus -------------------------------------------\ \
MAT230 ------------------------------------------\ \ \ \
\______________________ \ \ \ \
\ \ \ \ \
CPS121 ---> CPS122 --------------> CPS320 -------> GRADUATION
\ \--> CPS221 ------------------/ / / /
\ \----------> CPS311 ----------/ / /
---------------------------------> CPS403 -----------/ /
---------------------------------> CPS491 -> CPS492 --/
B. Topological Sorting
1. One question we might want to answer is "what is a permissible
sequence of courses, assuming we take one at a time?". The answer to
this is arrived at by a topological sort:
a. A topological sort is an ordering of the vertices in a digraph
such that no vertex precedes any vertex it is adjacent from - i.e.
no activity occurs before any of its prerequisites.
b. Of course, a topological sort is only possible in a digraph having
no cycles.
c. In general, a given digraph will have several topological sorts.
Ex: the above - one is CPS121, CPS122, Calculus, CPS221, MAT230,
CPS222, CPS311, CPS320, CPS403, CPS491,
CPS492, GRADUATION
but also OK is: Calculus, MAT230, CPS403, CPS121, CPS122,
CPS221, CPS222, CPS311, CPS320, CPS491,
CPS492, GRADUATION
and _many_ others
2. A method for topological sorting:
a. Associate with each vertex a count of the number of unsatisfied
prerequisites. Initially, this is the number of vertices adjacent
to it.
b. Repeat the following for i = 1 to n:
- Select a vertex not yet included in the sort whose prerequisite
count is 0. (If there is none, then the digraph has a cycle).
Include it in the sort, and decrement the prerequisite count of
each vertex it is adjacent to by 1.
3. Algorithm
a. HANDOUT CODE - Topological sort on an adjacency list WITHOUT Queue
Analysis:
ASK
First loop is O(n), second O(n+e); but third is O(n^2), so the
whole algorithm is O(n^2).
b. Here is a case where creative thinking can save us a lot of
trouble. Note that it is not necessary to examine all of the
vertices looking for a count of 0 on each iteration of the
main loop, if we maintain a queue of vertices with count of 0.
Initially, we can place vertices in this queue when we set up
the counts; and we can add a vertex to the queue whenever its
count is reduced to zero by the inner while p loop. This also
eliminates the need for a visited field.
4. Modified algorithm: CODE IN HANDOUT
What is the time complexity now?
ASK
O(n) + O(n+e) + O(n) + O(n+e) = O(n+e)!
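The queue-based method can be sketched as follows. This is a Python sketch, not the handout code; representing the adjacency list as a Python list of lists is an assumption.

```python
from collections import deque

def topological_sort(n, adj):
    """n vertices numbered 0..n-1; adj[v] lists the vertices v is
    adjacent to (i.e. v is a prerequisite of each vertex in adj[v])."""
    # Count of unsatisfied prerequisites (in-degree) for each vertex
    count = [0] * n
    for v in range(n):
        for w in adj[v]:
            count[w] += 1
    # Place vertices with no prerequisites in the queue as counts are set up
    queue = deque(v for v in range(n) if count[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            count[w] -= 1
            if count[w] == 0:      # last prerequisite just satisfied
                queue.append(w)
    if len(order) < n:
        raise ValueError("digraph has a cycle")
    return order

# 0 is a prerequisite of 1 and 2; 1 and 2 are prerequisites of 3
print(topological_sort(4, [[1, 2], [3], [3], []]))   # [0, 1, 2, 3]
```

Each vertex is enqueued and dequeued once, and each edge is examined once, giving the O(n+e) bound derived above.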
C. A further scheduling problem, using an Activity on Edge Network
1. Definition: an activity on edge (AOE) network is a directed network
in which each edge models an activity, with the weight of the edge
representing the time needed to complete the activity. The vertices
model significant events:
a. An event represented by a vertex only happens when all of the
activities modelled by edges leading into it have been completed.
b. No activity can start until the event modelled by the vertex at its
tail has occurred.
c. The events modelled by the vertices are often project milestones
such as "specifications accepted by user".
d. Normally, we include a start vertex with indegree 0 to model the
event "project begins".
ex:
A --1--> B --4--> E --2--> F
\ \--2-->\ /
\ \ /
\--1--> C --3--> D --2
2. A CRITICAL PATH is a path of maximal length through the network.
In the above, A,B,E,F is a critical path (of length 7).
3. A CRITICAL ACTIVITY is an edge that is part of a critical path:
a. Any increase in the time required for a critical activity will
delay the completion of the project.
b. The only way to speed the project up is by reducing the time for
one or more critical activities. (One may not be enough if there
are two or more critical paths.)
4. A question of interest: how can we find the critical activities of
an AOE network?
5. Further definition:
a. the earliest time for an event v is the length of the longest
path from the start vertex to v. The earliest completion time
for the project as a whole is, of course, the earliest time for
the final event.
b. The earliest start time for an activity is the earliest time for
the event at its tail.
c. The latest time for an event is the latest time it can occur
without delaying the completion of the project. In the above:
event earliest time latest time
A 0 0
B 1 1
C 1 2
D 4 5
E 5 5
F 7 7
Note: events on the critical paths have earliest time = latest
time.
d. The latest start time for an activity is the latest time it can
start without delaying project completion. This can be found
from (latest time for event at its head) - (time for activity).
Critical activities have earliest start time = latest start
time.
6. Methodology for critical path analysis:
a. First determine earliest and latest event times (ee and le) for
each vertex:
i. To find ee, examine the vertices in topological order.
ee[j] = max(ee[i] + cost[i,j]) over all i such that <i,j> is in E
(Note that calculating ee in topological order ensures that
all the ee[i] will have already been computed.)
ii. To find le, examine the nodes in reverse topological order.
le[j] = min(le[i] - cost[j,i]) over all i such that <j,i> is in E
b. Now determine earliest and latest start times for each
activity:
i. Early start = ee of its tail vertex.
ii. Late start = le of its head vertex minus its cost.
c. Critical activities are those with early start = late start.
d. To find all critical paths, delete all non-critical activities
from the network. All remaining paths through the network (and
there will be at least one) are critical.
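The methodology in 6. can be sketched as follows. This is an illustrative Python sketch (the function name and the dict from (tail, head) to cost are assumptions, not from the handouts), run on the AOE example from 1. with events A..F numbered 0..5.

```python
def critical_activities(n, edges, order):
    """n events; edges maps each activity (tail, head) to its time;
    order is a topological ordering of the events."""
    # Earliest event times, computed in topological order
    ee = [0] * n
    for j in order:
        for (i, k), cost in edges.items():
            if k == j:
                ee[j] = max(ee[j], ee[i] + cost)
    # Latest event times, computed in reverse topological order
    le = [ee[order[-1]]] * n
    for j in reversed(order):
        for (k, i), cost in edges.items():
            if k == j:
                le[j] = min(le[j], le[i] - cost)
    # An activity is critical when its early start equals its late start
    critical = [(i, j) for (i, j), cost in edges.items()
                if ee[i] == le[j] - cost]
    return critical, ee, le

# The AOE example, with events A..F numbered 0..5:
edges = {(0, 1): 1, (1, 4): 4, (4, 5): 2,    # A-1->B, B-4->E, E-2->F
         (1, 3): 2, (0, 2): 1, (2, 3): 3,    # B-2->D, A-1->C, C-3->D
         (3, 5): 2}                          # D-2->F
critical, ee, le = critical_activities(6, edges, [0, 1, 2, 3, 4, 5])
print(ee)        # [0, 1, 1, 4, 5, 7]
print(le)        # [0, 1, 2, 5, 5, 7]
print(critical)  # the critical path A,B,E,F: [(0, 1), (1, 4), (4, 5)]
```

Scanning the whole edge dict for each vertex is O(n*e); storing forward and reverse adjacency lists would reduce this to O(n+e).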
VI. Network Flow Problems
-- ------- ---- --------
A. Thus far, we have used networks in which the edge weights represent
a COST, and we've solved problems where the goal is to MINIMIZE some
total of these costs.
B. Another class of problems arises when edge weights are CAPACITIES - and
our goal is to MAXIMIZE some measurement based on them.
1. A system of pipes in which the edge weight is a capacity in gallons
per hour. (Origin of the name "network flow" for this class of
problem.)
2. A highway network - weight = cars / hour.
3. A communications network - weight = bits / second.
C. Such networks give rise to the NETWORK FLOW PROBLEM.
1. We use a directed graph (pipes / roads / links are assumed to be
unidirectional)
2. One vertex - the SOURCE - has only outward edges; and one vertex -
the SINK - has only inward edges.
3. We associate two numbers with each edge:
a. Capacity
b. Actual flow
Where: 0 <= actual flow <= capacity
4. At each vertex - except source and sink - we require
sum(flows in) = sum(flows out)
5. We also require
sum(flows out of source) = sum(flows in to sink)
We call this value the overall flow.
6. Our goal is to maximize the overall flow. We assume some sort
of valves that allow us to reduce flow on edges below capacity if
needed.
Example: (Edges are labelled current flow / capacity )
2/6
--> B ---------> C
0/8 / ^ / \ 2/4
/ 2/3 \_______/ \
A < _____/\ > F
\ 0/2 / \ /
2/2 \ v \ / 0/5
--> D ---------> E
2/5
Sum of flows out of A = 2
Sum of flows into B = 2; out of B = 2
Sum of flows into C = 2; out of C = 2
Sum of flows into D = 2; out of D = 2
Sum of flows into E = 2; out of E = 2
Sum of flows into F = 2
Overall flow = 2
D. The following technique can be used to maximize flow for any given
network.
1. Start with any legal flow (say all 0)
2. Repeatedly perform flow-improving operations until no further
improvement is possible.
E. There are two ways to improve the overall flow in a network:
1. Consider any directed path from source to sink, such that all edges
have some unused capacity (capacity - current flow).
a. Determine the smallest such unused capacity along the path.
b. Increase all current flows along the path by this amount (which
will result in some edge(s) having current flow = capacity)
c. Example: Starting with above
Path ADEBCF - AD has unused capacity 0, so no improvement possible
using this path
Path ABCDEF - AB has unused capacity 8
BC has unused capacity 4
CD has unused capacity 2
DE has unused capacity 3
EF has unused capacity 5
Increase all current flows along this path by 2
4/6
--> B ---------> C
2/8 / ^ / \ 2/4
/ 2/3 \_______/ \
A < _____/\ > F
\ 2/2 / \ /
2/2 \ v \ / 2/5
--> D ---------> E
4/5
Path ABCF - AB has unused capacity 6
BC has unused capacity 2
CF has unused capacity 2
Increase all current flows along this path by 2
6/6
--> B ---------> C
4/8 / ^ / \ 4/4
/ 2/3 \_______/ \
A < _____/\ > F
\ 2/2 / \ /
2/2 \ v \ / 2/5
--> D ---------> E
4/5
Path ADEF - AD has unused capacity 0, so no improvement possible
d. Once we have done this once for all possible directed paths, we can
make no further improvements this way.
2. A second way to make improvements is to consider arbitrary paths in
which we allow some edges to be followed the wrong way. We can make
an improvement along such a path if
a. All forward edges have unused capacity.
b. All backward edges have non-zero flow
c. We make improvements by taking the minimum of the least unused
capacity over any forward edge along the path and the least flow
over any backward edge along the path. We then increase the flow
on all forward edges on the path by this amount, and DECREASE the
flow on all backward edges along the path by this amount.
d. Example: Starting where we left off
Path ABEF - AB has unused capacity 4
BE has backward flow 2
EF has unused capacity 3
Increase forward flows by 2; decrease backward by 2
6/6
--> B ---------> C
6/8 / ^ / \ 4/4
/ 0/3 \_______/ \
A < _____/\ > F
\ 2/2 / \ /
2/2 \ v \ / 4/5
--> D ---------> E
4/5
We have maximized flow for this particular network!
F. Practical Considerations
1. Choice of order of considering paths for improvement is critical
Example: (starting with no flow)
0/1000 0/1000
----> B ---->
/ | \
A | 0/1 C
\ v /
----> D ---->
0/1000 0/1000
If we choose to improve along ABDC, we can get
1/1000 0/1000
----> B ---->
/ | \
A | 1/1 C
\ v /
----> D ---->
0/1000 1/1000
If we now use ADBC, we get
1/1000 1/1000
----> B ---->
/ | \
A | 0/1 C
\ v /
----> D ---->
1/1000 1/1000
Of course, we can repeat this cycle 999 more times before achieving
the optimal flow of 2000
However, if we considered paths in the order
ABC
ADC
we would get there in just two steps.
2. To arrive at maximum flow most quickly, it turns out that we should
consider the paths in the order they would be generated by a BFS
starting at the source.
a. Example: For our first network:
ABCF
ABEF (BE backwards)
ADCF (DC backwards)
ADEF
ABCDEF
ABEDCF (BE, ED, DC backward)
ADCBEF (DC, CB, BE backward)
ADEBCF
b. Example: For our second network:
ABC
ADC
ABDC
ADBC
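The improvement technique of sections D and E, with augmenting paths chosen in BFS order as recommended in F.2, can be sketched as follows. This is a Python sketch (essentially the Edmonds-Karp formulation), not code from the handouts; the dict-based edge representation is an illustrative choice. Both example networks are included.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """capacity: dict mapping each directed edge (u, v) to its capacity.
    Returns the maximum overall flow from source to sink."""
    flow = {e: 0 for e in capacity}
    vertices = {u for e in capacity for u in e}

    def residual(u, v):
        # Unused forward capacity, plus backward flow that could be cancelled
        return capacity.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)

    total = 0
    while True:
        # BFS from the source along edges with positive residual capacity,
        # so shorter improving paths are considered first (cf. section F.2)
        pred = {source: None}
        queue = deque([source])
        while queue and sink not in pred:
            u = queue.popleft()
            for v in vertices:
                if v not in pred and residual(u, v) > 0:
                    pred[v] = u
                    queue.append(v)
        if sink not in pred:
            return total           # no further improvement is possible
        # Recover the path and find the smallest residual capacity along it
        path, v = [], sink
        while pred[v] is not None:
            path.append((pred[v], v))
            v = pred[v]
        delta = min(residual(u, v) for u, v in path)
        # Increase forward flows; cancel (DECREASE) backward flows first
        for u, v in path:
            cancel = min(delta, flow.get((v, u), 0))
            if cancel:
                flow[(v, u)] -= cancel
            if delta - cancel:
                flow[(u, v)] += delta - cancel
        total += delta

# The first example network (maximum overall flow is 8)
capacity = {('A','B'): 8, ('A','D'): 2, ('B','C'): 6, ('C','D'): 2,
            ('C','F'): 4, ('D','E'): 5, ('E','B'): 3, ('E','F'): 5}
print(max_flow(capacity, 'A', 'F'))      # 8

# The second example network: BFS order avoids the 2000-iteration worst case
wide = {('A','B'): 1000, ('A','D'): 1000, ('B','C'): 1000,
        ('D','C'): 1000, ('B','D'): 1}
print(max_flow(wide, 'A', 'C'))          # 2000
```

On the second network, BFS finds the two length-2 paths (ABC and ADC) before ever considering the length-3 paths through the B-D edge, so only two improvement steps are needed.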