So far, much of this course has consisted of advancing definitions
and examples that try to capture the meaning of mathematical concepts
in abstract language. Let's try to see how some of these definitions
might be used to decide theoretical and practical questions.
If you are interested in the theoretical side of mathematics,
then you might like to consider the following proposition.
Proposition
Suppose that G is a graph with n > 0 vertices. Let d_1, ..., d_n denote the
degrees of the vertices of the graph G (see lesson 2 for the definition of the
degree of a vertex). Then the sum of the degrees is an even number, and
can be written in the form:

d_1 + d_2 + ... + d_n = 2|E(G)|,

where |E(G)| denotes the number of edges of G.
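If you would like to experiment with this proposition before trying to prove it, the following short Python sketch may help. The graph in it is invented purely for illustration; the sketch computes the degree of each vertex from a list of edges and checks that the degrees add up to twice the number of edges.

# A small, made-up graph, given as a list of edges (pairs of vertices).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

# The degree of a vertex is the number of edge-ends meeting it.
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# The proposition: d_1 + ... + d_n = 2 * (number of edges).
print(degree)
print(sum(degree.values()), "=", 2 * len(edges))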
If you are interested in applying mathematics to solve practical
problems, you might like to think about the following statement:
In a gathering of six people, either there are three people
who all know each other, or else there are three people none of
whom knows the other two.
We could represent the people at the gathering by vertices on
a graph. A gathering of six people would be represented by a
graph with six vertices. If two people at the gathering know
each other, then we could draw an edge between the two vertices.
(Here, the possibility of people knowing themselves is ignored.)
For example, a gathering might resemble:

At this gathering, Robin, Lisa and Denise all know each other.
Graphically, the edges linking Robin's, Lisa's and Denise's vertices
form a triangle.
Another gathering might look like:

Here, Ali, Eli and Elizabeth do not know each other. There is
a graph, based on the pink and green graph, that has an edge between
two vertices precisely when the two vertices don't have an edge between
them in the pink and green graph. This is called the conjugate
of the pink and green graph, and resembles:

(Here, both yellow and black edges are part of the conjugate graph).
In the conjugate graph, there is a triangle between
Ali, Eli, and Elizabeth (it has been highlighted in yellow).
There are other triangles in the conjugate graph, for example,
between Paula, Elizabeth and Naumann. There is another triangle
between Ali, Eli and Cheryl, and another between Ali, Naumann
and Cheryl. Each of these triangles represents a trio of people
at the gathering, none of whom knows the other two in the trio.
So, proving the statement:
In a gathering of six people, either there are three people
who all know each other, or else there are three people none of
whom knows the other two.
is equivalent to proving the statement:
On a simple graph with exactly six vertices, either the graph
has a triangle, or the conjugate of the graph has a triangle.
Incidentally, a collection of edges that form a triangle (or a
square, pentagon, or any other closed figure) is called a circuit,
one of the themes of the current lesson.
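If you want to experiment with the restated version, it can be checked by brute force on any particular gathering. The Python sketch below (the acquaintance list is invented, not taken from the pictures above) builds the conjugate of a six-vertex graph and looks for a triangle in the graph and in its conjugate.

from itertools import combinations

people = ["Robin", "Lisa", "Denise", "Ali", "Eli", "Cheryl"]

# An invented 'who knows whom' relation; each pair is listed at most once.
knows = {("Robin", "Lisa"), ("Robin", "Denise"), ("Lisa", "Denise"),
         ("Ali", "Cheryl"), ("Eli", "Cheryl")}

def has_edge(edge_set, a, b):
    return (a, b) in edge_set or (b, a) in edge_set

# The conjugate graph: an edge wherever the original graph has none.
conjugate = {(a, b) for a, b in combinations(people, 2)
             if not has_edge(knows, a, b)}

def find_triangle(edge_set):
    for a, b, c in combinations(people, 3):
        if has_edge(edge_set, a, b) and has_edge(edge_set, b, c) and has_edge(edge_set, a, c):
            return (a, b, c)
    return None

print("triangle of acquaintances:", find_triangle(knows))
print("triangle of strangers:   ", find_triangle(conjugate))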
Walks
One of the very first problems in graph theory was the "Seven
Bridges of Königsberg." The problem goes something
like this:
During the eighteenth century the city of Königsberg (in
East Prussia) was divided into four sections (including the island
of Kneiphof) by the Pregel river. Seven bridges connected these
regions, as shown in the picture below.

It was said that people spent their Sundays walking around, trying
to find a starting point so that they could walk about the city,
cross each bridge exactly once, and then return to their starting
point. Can you find a starting point and a path around the city that allow you to do this?
The bridges of the city of Königsberg could be represented
more abstractly as pairs of vertices joined by edges. For example:

(The vertices and edges of the graph are drawn in pink). Completing
the graph by drawing in the main streets of Königsberg might
give:

Making things more abstract, and dispensing with all but the vertices
and the edges,

(the edges representing the bridges are colored blue). Then the
problem of the Seven Bridges of Königsberg could be re-stated
as:
Is there a sequence of edges e_1, ..., e_m in the above pink and blue graph such that:
1. Consecutive edges e_j and e_{j+1} are incident to a common vertex, and,
2. Each of the blue edges belongs to the sequence, and,
3. Each of the blue edges occurs exactly once in the sequence?
Because this kind of sequence represents the journey that the
people of Königsberg could take while walking around
the city, a collection of edges from any graph with these
kinds of properties is called a walk. Precisely
and abstractly:
Definition Let G denote a graph. A walk in G is a finite sequence of edges
e_1, ..., e_m, with each edge e_j incident to vertices v_{j-1} and v_j, such that any two
consecutive edges e_j and e_{j+1} are incident to a common vertex v_j. The
vertex v_0 is called the initial vertex and the vertex v_m is called the final
vertex. The number of edges in the walk is called the length of the walk.
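To see the definition in action, here is a small Python sketch. It encodes the seven bridges as edges between four land regions (the labels A, B, C and D are invented for the sketch, not taken from the picture) and traverses a proposed sequence of edges from a chosen starting vertex, reporting the vertex sequence v_0, v_1, ..., v_m of the walk, or None if the sequence is not a walk from that starting vertex.

# The seven bridges, as edges between four land regions A, B, C, D.
# Some pairs of regions are joined by more than one bridge.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

def walk_vertices(edge_sequence, start):
    """Follow the edges in order from 'start'; return the vertex sequence
    v_0, v_1, ..., v_m, or None if some edge is not incident to the vertex
    reached so far."""
    current = start
    visited = [start]
    for u, v in edge_sequence:
        if current == u:
            current = v
        elif current == v:
            current = u
        else:
            return None
        visited.append(current)
    return visited

# A walk of length 4: A -> B -> D -> C -> A.
print(walk_vertices([("A", "B"), ("B", "D"), ("C", "D"), ("A", "C")], "A"))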
Although the spirit of the definition of a walk seems pretty clear
(it is supposed to represent the way that someone 'walks around'
a graph), there is a little ambiguity. The point is that, in
the definition of a walk, there is no requirement that the edges
in the walk are different from each other. So, it is possible
to have a walk that just consists of the very same edge repeated
over and over again. This might correspond to a person walking
up and down the same old street again and again. This is an absolutely
valid walk to go on (although the scenery might get repetitive
after a while!), but it can be a technically difficult situation
to handle. It is sometimes very helpful, when proving technical
results, to be able to assume that all of the edges in the walk
are different. This leads to the definition of a path:
Definition Let G
denote a graph. A path in G is a walk in
G in which all of the edges
are distinct. A path in which all of the vertices are
distinct is called a simple
path. A closed walk is a walk from a vertex to
itself (i.e. the initial and
final vertices are the same vertex). A circuit is a
closed, simple path.
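Because these terms are easy to mix up, the following Python sketch may help. Given the vertex sequence of a walk (as produced, for example, by the walk_vertices sketch a little earlier), it reports which of the definitions apply; the reading of 'circuit' used here is that the initial vertex is the only vertex allowed to repeat, and only as the final vertex.

def classify(vertices):
    """Classify the walk v_0, v_1, ..., v_m given by its vertex sequence."""
    edges = [frozenset(pair) for pair in zip(vertices, vertices[1:])]
    labels = ["walk"]
    if len(set(edges)) == len(edges):
        labels.append("path")                         # all edges are distinct
        if len(set(vertices)) == len(vertices):
            labels.append("simple path")              # all vertices are distinct
    if len(vertices) > 1 and vertices[0] == vertices[-1]:
        labels.append("closed walk")
        # circuit: closed, all edges distinct, and no vertex repeats except
        # that the initial vertex reappears as the final vertex
        if len(set(edges)) == len(edges) and len(set(vertices)) == len(vertices) - 1:
            labels.append("circuit")
    return labels

print(classify(["a", "b", "c", "a"]))   # ['walk', 'path', 'closed walk', 'circuit']
print(classify(["a", "b", "a"]))        # ['walk', 'closed walk']
print(classify(["a", "b", "c", "d"]))   # ['walk', 'path', 'simple path']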
Try to find an example of a walk, a path, a simple path, a closed walk and a circuit in the following graph (it will be helpful to label the edges):

Connected Graphs
In the supplement to lesson 2 ('Examples of Graphs'), the union
of two graphs was defined. When viewed from the perspective of
the edge and vertex sets, the idea of the union of two graphs
is just like the union of sets. To clarify, if G_1 and G_2 are two graphs, and a graph G_3
is defined to be their union, G_3 = G_1 ∪ G_2, then the vertex and edge sets of G_3
(V(G_3) and E(G_3) respectively) are given by:

V(G_3) = V(G_1) ∪ V(G_2) and E(G_3) = E(G_1) ∪ E(G_2).

However, in terms of pictorial representations of graphs, the
union of two graphs seems rather pointless.

To get the pictorial representation of the union, you just draw
the two graphs next to one another! From the perspective of pictorial
representations, then, the concept doesn't seem terribly useful. However,
the union of graphs allows for an elegant definition of what it means for
a graph to be 'connected.'
Definition A graph is said to be connected if it cannot be expressed as the
union of two non-empty graphs that have no vertices in common. Otherwise,
the graph is said to be disconnected.
Definition The connected component of a graph G containing a given vertex
v ∈ V(G) is the largest sub-graph of G that contains v and is a connected
graph.

The gathering graphs pictured above are connected graphs, because they can't
be expressed as two separate graphs drawn side-by-side. This sort of makes
sense - a person would have to know at least one other person to be invited to
the gathering in the first place. Can you think of other examples from everyday
life where connected or disconnected graphs arise because of the nature of the
situation being described?
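The definition of a connected component can also be turned into a small computation: grow the component of a starting vertex by repeatedly adding anything joined to it by an edge, and call the graph connected when one component contains every vertex. A hedged Python sketch of this idea, with an invented example graph:

def component(vertices, edges, start):
    """Return the set of vertices in the connected component containing 'start'."""
    reached = {start}
    frontier = [start]
    while frontier:
        current = frontier.pop()
        for u, v in edges:
            if current in (u, v):
                other = v if current == u else u
                if other not in reached:
                    reached.add(other)
                    frontier.append(other)
    return reached

def is_connected(vertices, edges):
    return component(vertices, edges, next(iter(vertices))) == set(vertices)

# An invented example: two 'gatherings' drawn side by side are disconnected.
vertices = {"a", "b", "c", "d", "e"}
edges = [("a", "b"), ("b", "c"), ("d", "e")]
print(is_connected(vertices, edges))      # False
print(component(vertices, edges, "a"))    # {'a', 'b', 'c'}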
If you are interested in the more theoretical side of graph theory,
you might like to consider the following proposition.
Proposition Let G be a simple graph on n > 0 vertices. If the graph G has more than

(n - 1)(n - 2)/2

edges, then the graph G is connected.
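To see why 'more than' is needed here (taking the bound to be (n - 1)(n - 2)/2), consider the graph made of a complete graph on n - 1 vertices together with one extra vertex joined to nothing. That graph is clearly disconnected, yet the short Python count below shows that it has exactly (n - 1)(n - 2)/2 edges, so the bound itself can be attained by a disconnected graph.

from itertools import combinations

# For n = 6: all possible edges among the first n - 1 vertices,
# plus one extra, isolated vertex with no edges at all.
n = 6
edges = list(combinations(range(n - 1), 2))

print(len(edges))                  # 10
print((n - 1) * (n - 2) // 2)      # also 10: the bound is attained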
Measures of Distance and Spread
Sometimes, it is helpful to have a notion of how 'far apart' two
vertices of a graph are, and how 'big' a graph is. For us, these
concepts are useful when considering a special kind of path that
a graph may have, called a Hamiltonian cycle (presumably
in honor of William Hamilton, who left the '4-color Problem' alone
for 25 years). The measures of distance and spread make the most
sense in the context of connected graphs (or for the connected
components of disconnected graphs), which is why they are defined
now.
The two graphs:

are very different sizes when size is meant in the normal, everyday
way. The graph on the left is clearly much bigger than the graph
on the right. However, if you look closely at the vertices and
edges present in each of these graphs, you can see that they are
exactly the same graph.
Definition The distance
between any two vertices v and w in the same connected
component of a graph is the length of the shortest simple path
from v to w,
and is denoted by d(v, w).
Remember that a simple path is a special case of a walk, and the
length of a walk between two vertices is the number of edges that
are 'walked across.'
Definition The diameter
of a connected graph is the maximum distance between two
of its vertices. The girth of a graph is the length
of the shortest circuit.
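For larger graphs these quantities are usually computed rather than read off a picture. The Python sketch below (the example graph is invented) uses a breadth-first search to find the distance from one vertex to every other vertex in its connected component, and then takes the maximum over all pairs to get the diameter.

from collections import deque

def distances_from(edges, start):
    """Breadth-first search: length of a shortest path from 'start' to each
    vertex in the same connected component."""
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        for nxt in neighbours.get(current, ()):
            if nxt not in dist:
                dist[nxt] = dist[current] + 1
                queue.append(nxt)
    return dist

def diameter(vertices, edges):
    """Maximum distance between two vertices of a connected graph."""
    return max(max(distances_from(edges, v).values()) for v in vertices)

# An invented example: a path a-b-c-d has diameter 3.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(distances_from(edges, "a"))   # {'a': 0, 'b': 1, 'c': 2, 'd': 3}
print(diameter(vertices, edges))    # 3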
Eulerian Graphs
Remember how the problem of the Seven Bridges of Königsberg
could be re-stated as:
Is there a sequence of edges e_1, ..., e_m in the pink and blue graph such that:
1. Consecutive edges e_j and e_{j+1} are incident to a common vertex, and,
2. Each of the blue edges belongs to the sequence, and,
3. Each of the blue edges occurs exactly once in the sequence?
By defining walks and paths, we have been able to characterize the first
requirement of the problem in abstract terms. To characterize
the next two requirements, we will study more specialized kinds
of paths, namely Eulerian paths (named for the famous Swiss
mathematician Leonhard Euler, who actually solved the problem
of the 'Seven Bridges') and Hamiltonian paths.
Definition An Eulerian
path in a connected graph G is a path in which every
edge in
G occurs exactly once. If such a path exists in G,
then the graph is called
semi-Eulerian.
Definition An Euler
tour in a connected graph G is a closed path
which includes
every edge of G exactly once. A graph that has an Euler
tour is called an
Eulerian graph.
If you are interested in the theoretical side of graph theory,
you might like to consider the following proposition:
Proposition If G
is a graph in which the degree of each vertex is at least two,
then G
must contain a circuit.
Remember the definition of a circuit: a circuit is a closed,
simple path.
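The usual proof of this proposition is constructive: start anywhere and keep walking, never immediately going back along the edge you just used (which is possible because every degree is at least two); since the graph is finite, some vertex must eventually repeat, and the stretch of the walk between its two appearances is a circuit. Here is a hedged Python sketch of that idea for a simple graph; the example graph is invented.

def find_circuit(edges):
    """Walk through a simple graph in which every vertex has degree at least
    two, never turning straight back along the edge just used, until a vertex
    repeats; return the circuit as a list of vertices."""
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, []).append(v)
        neighbours.setdefault(v, []).append(u)

    start = next(iter(neighbours))
    walk = [start]
    previous = None
    while True:
        current = walk[-1]
        # choose any neighbour other than the vertex we just came from
        nxt = next(w for w in neighbours[current] if w != previous)
        if nxt in walk:                       # a vertex has repeated
            return walk[walk.index(nxt):] + [nxt]
        walk.append(nxt)
        previous = current

# An invented example in which every vertex has degree at least two.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "e"), ("e", "c")]
print(find_circuit(edges))                    # e.g. ['a', 'b', 'c', 'a']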
You might have decided that not every connected graph is Eulerian.
An example of a connected graph that is not Eulerian is the following.

Eulerian graphs are quite special. If you took the time to check whether the
above example is Eulerian, you might have found that deciding this 'by hand'
is quite time-consuming and difficult. Fortunately, Eulerian graphs can be
characterized as follows.
Theorem A finite,
connected, loop-free graph is Eulerian if and only if the degree
of
each vertex in the graph is even.
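Applied to the Seven Bridges (with the same invented region labels A, B, C and D used in the earlier sketches), the theorem settles the original puzzle immediately: the little Python check below shows that every region has odd degree, so the graph is not Eulerian and the round trip the citizens were looking for does not exist.

# The seven bridges again, with invented region labels.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = {}
for u, v in bridges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

print(degree)                                    # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(all(d % 2 == 0 for d in degree.values()))  # False: the graph is not Eulerian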
Note The statement
of this theorem includes a phrase, "If and only if."
This
makes the theorem kind of 'double-barreled.' It means that
if all the
vertices of a finite, connected, loop-free graph, G,
have even degrees, then
G is Eulerian. The theorem also means that if
a finite, connected, loop-free graph, G, is Eulerian, then
all the vertices of G
have even degrees. Because the theorem is 'double-barreled,' the
proof has to be 'double-barreled'
as well.
Proof First to be
established is the 'barrel' of the theorem that says:
If a finite, connected, loop-free graph, G, is Eulerian,
then all of the vertices of G have even degrees.
To prove this, suppose that G is a finite, connected,
loop-free Eulerian
graph. Then G has an Euler tour. Each time that the Euler tour passes
through a vertex, it uses two edges incident to that vertex, and at the
initial vertex the first and last edges of the tour are paired up in the same
way, because the tour is closed. Each edge occurs exactly once in the Euler
tour, so the edges at each vertex fall into pairs, and every vertex has even
degree.
Next to be established is the 'barrel' of the theorem that says:
If all the vertices of a finite, connected, loop-free graph,
G, have even
degree, then G is Eulerian.
To prove this, suppose that G is a finite, connected, loop-free graph in
which every vertex has even degree. Firstly, dispense with the somewhat
pathological case where G has no edges (i.e. each vertex has degree zero).
Then the result holds, because if the graph has no edges, then the 'empty
path' consisting of no edges 'includes' every edge of the graph once.
This
sounds kind of nuts, but the reason that it works is that there
is no
requirement that a path actually contain any edges (check the
definition).
So, a path that contains no edges whatsoever is a mathematical
possibility,
although it might not occur as an intuitive possibility to everyone.
Next, consider the more intuitively obvious situation where the graph G
actually has some edges. Since G is connected and has an edge, every vertex
has non-zero degree; the degrees are even, so every vertex has degree at
least two. By the proposition that you might have proved in the last
'thinking opportunity,' the graph G therefore contains a circuit. Remove this
circuit (just the edges, not the vertices) from G, leaving a subgraph H that
is a union of connected components. Removing the circuit lowers the degree
of each vertex by an even number, so every vertex of H still has even degree,
and each component of H has fewer edges than G. By induction on the number
of edges, each of these connected components has an Euler tour. An Euler tour
in G can now be constructed by starting at any one of the vertices of the
original circuit, following the circuit around, and, every time a new
connected component of H is encountered, following an Euler tour of that
component before continuing around the circuit. This ends the proof.
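The proof just given is really an algorithm in disguise: peel off a circuit, handle the smaller pieces, and splice the tours together. The Python sketch below follows the same splicing idea in an iterative form (it is essentially what is usually called Hierholzer's algorithm; the example graph is invented), and it assumes the theorem's hypotheses hold.

def euler_tour(edges, start):
    """Return an Euler tour as a list of vertices, assuming the graph is
    connected, loop-free, and every vertex has even degree."""
    # For each vertex, the indices of the edges incident to it.
    incident = {}
    for i, (u, v) in enumerate(edges):
        incident.setdefault(u, []).append(i)
        incident.setdefault(v, []).append(i)
    used = [False] * len(edges)

    stack = [start]
    tour = []
    while stack:
        v = stack[-1]
        # Throw away edges at v that have already been traversed.
        while incident[v] and used[incident[v][-1]]:
            incident[v].pop()
        if incident[v]:
            i = incident[v].pop()        # traverse a fresh edge away from v
            used[i] = True
            a, b = edges[i]
            stack.append(b if v == a else a)
        else:
            tour.append(stack.pop())     # v is finished; back up and record it
    return tour[::-1]

# An invented graph in which every vertex has even degree.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "e"), ("e", "c")]
print(euler_tour(edges, "a"))            # e.g. ['a', 'c', 'e', 'd', 'c', 'b', 'a']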
The proof of the theorem referred to a mathematical technique
called 'Proof by Induction.'
You check into a motel. The desk clerk informs you that you
will be staying in room number 'N.' However, the desk clerk isn't
very helpful, and instead of giving you the key to room 'N,' he
gives you the key to room 1. You ask him what the deal is, and
he tells you that the key to open room 'k + 1' is located in room 'k.'
How can you get to your room?
If you haven't studied mathematical induction, you might like
to have a look at the proof of the following theorem (which uses
mathematical induction). Try to figure out why the argument works.
Theorem For any integer n > 0,

1 + 2 + ... + n = n(n + 1)/2.

Proof Firstly, do the n = 1 case. When n = 1, the left hand side of the formula gives 1,
while the right hand side of the formula gives 1(1 + 1)/2 = 1.
Since the left hand side and the right hand side of the formula
agree for
n = 1, we conclude that the theorem holds for the case
n = 1.
Next, the inductive step. Assume that the theorem holds true
for some
integer k > 0. We will show, on the basis of this
assumption, that the
theorem also holds true for the integer k + 1.
The assumption that the theorem holds for k means that we can assume that

1 + 2 + ... + k = k(k + 1)/2.

For the integer k + 1, the theorem predicts that the right hand side should be

(k + 1)((k + 1) + 1)/2 = (k + 1)(k + 2)/2.

For the integer k + 1, the left hand side is:

1 + 2 + ... + k + (k + 1) = k(k + 1)/2 + (k + 1).

More algebra gives that,

k(k + 1)/2 + (k + 1) = (k + 1)(k/2 + 1) = (k + 1)(k + 2)/2.
Having shown that the left and right hand sides of the formula agree for the
integer k + 1 whenever the theorem is assumed to hold for the integer k, the
proof is complete.
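Induction establishes the formula for every n at once, but if you like, you can also spot-check small cases by direct computation, as in the following Python loop, which simply compares the sum 1 + 2 + ... + n with n(n + 1)/2 for the first few values of n.

# Spot-check 1 + 2 + ... + n = n(n + 1)/2 for n = 1, ..., 10.
for n in range(1, 11):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula checked for n = 1, ..., 10")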
The following proposition can be proved using the theorem characterizing
Eulerian graphs. See if you can come up with an argument to explain
why it is a true statement. (Observe that this proposition is
'double-barreled' as it contains the 'if and only if' phrase.
Make sure your argument is 'double-barreled' too.)
Proposition A finite,
connected, loop-free graph is semi-Eulerian if and only if it
has at
most two vertices of odd degree.
Hamilton Cycles
Eulerian paths and Euler tours are mainly concerned with the edges
of a graph. An Eulerian path, for example, includes every edge
of the graph exactly once. Another special type of path focuses
more on the vertices of the graph. These are called Hamiltonian
paths.
Definition Any walk
in a connected graph G that passes through each vertex
of the
graph exactly once is called a Hamiltonian path. If
such a path exists,
then the graph is called semi-Hamiltonian.
Definition A Hamilton
cycle is a closed Hamiltonian path, and any graph that has
a
Hamilton cycle is said to be Hamiltonian.
What sort of criteria would a graph have to satisfy in order to
have a Hamiltonian cycle? What about a Hamiltonian path? Try
to give examples of graphs that show that your criteria are essential.

As with Eulerian graphs, checking graphs 'by hand' to see if they
are Hamiltonian or not is very time-consuming, and actually quite
difficult (especially for very large graphs with lots of vertices
and edges). It would be nice to develop another theorem that
would provide simpler criteria for deciding whether a graph was
Hamiltonian or not. There is a result (known as Ore's Theorem) that goes part
of the way towards this. Unfortunately, Ore's Theorem is not a
'double-barreled' theorem: it gives a sufficient condition for a graph to be
Hamiltonian, but it does not give a necessary condition; there are examples
of Hamiltonian graphs that do not fulfill all of the requirements of Ore's
Theorem.
Theorem If G is a finite, connected, loop-free graph with n ≥ 3 vertices,
with the property that the sum of the degrees of any two non-adjacent
vertices is greater than or equal to n, then G is Hamiltonian.
Note As mentioned, there are examples of Hamiltonian graphs that do not
satisfy the criteria of Ore's Theorem. For example, the cube graph is
Hamiltonian - you might like to try to find a Hamiltonian cycle.

However, the cube graph is a graph on 8 vertices (n = 8), and the degrees of
any two of its non-adjacent vertices sum to only 6, which is less than n.
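You can check both halves of this Note by computer. The Python sketch below builds one standard model of the cube graph (vertices are the strings of three bits, with an edge whenever two strings differ in exactly one position; this labelling is invented here, not taken from the picture), tests Ore's condition, and then finds a Hamiltonian cycle by brute force.

from itertools import permutations

# The cube graph: 3-bit strings, edges join strings differing in one position.
vertices = [format(i, "03b") for i in range(8)]
edges = {frozenset((u, v)) for u in vertices for v in vertices
         if sum(a != b for a, b in zip(u, v)) == 1}

degree = {v: sum(v in e for e in edges) for v in vertices}
n = len(vertices)

# Ore's condition: every pair of non-adjacent vertices has degree sum >= n.
ore = all(degree[u] + degree[v] >= n
          for u in vertices for v in vertices
          if u < v and frozenset((u, v)) not in edges)

def hamiltonian_cycle():
    """Brute-force search: try every ordering of the remaining vertices."""
    first = vertices[0]
    for order in permutations(vertices[1:]):
        cycle = (first,) + order + (first,)
        if all(frozenset(pair) in edges for pair in zip(cycle, cycle[1:])):
            return cycle
    return None

print("Ore's condition holds:", ore)      # False: every degree is 3, so sums are 6 < 8
print("Hamiltonian cycle:", hamiltonian_cycle())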
Proof Assume that
G has no Hamiltonian cycle. Add as few new edges to G
as
are necessary to produce a Hamiltonian cycle. (Note that the
complete
graph on n vertices has a Hamiltonian cycle, so eventually,
you will add
enough edges to produce a Hamiltonian cycle).
Suppose that the last edge that you added is between two vertices
v and w,
and that this is the final link in what becomes a Hamiltonian
cycle.
Schematically, the Hamiltonian cycle might look like this:

Label the vertices as indicated in the schematic. Now consider
the
'neighbors' of v and w in G (i.e. the vertices
adjacent to v and w in G.)
Firstly, v is not adjacent to itself and is not adjacent
to w in G (remember
that the edge joining v and w was the last edge
added to G), but is adjacent
to, say, k of the vertices v_2, v_3, ..., v_{n-1}. So, d(v) = degree of v in G = k.
Secondly, w is not adjacent to itself, and is not adjacent
to v in G, but is
adjacent to, say, l of the vertices v_2, v_3, ..., v_{n-1}. So, d(w) = degree of w in G = l.
Suppose that v is adjacent to v_j (2 ≤ j ≤ n - 1). Then w cannot be joined to
v_{j-1}, or else the closed path

v, v_j, v_{j+1}, ..., v_{n-1}, w, v_{j-1}, v_{j-2}, ..., v_2, v

would have been a Hamiltonian cycle in G before the edge
linking v and w
was added (remember that it has been assumed that the fewest
possible
edges have been added to the graph G in order to produce
a Hamiltonian
cycle).
Thus, for each of the k vertices from v_2, ..., v_{n-1} that v is adjacent to, there is one vertex that w is not adjacent to. So, of the n - 1 vertices other than w itself, at least k are not joined to w. Therefore,

d(w) = l ≤ (n - 1) - k, and so d(v) + d(w) = k + l ≤ n - 1 < n.
This contradicts the assumption that the sum of the degrees of any two
non-adjacent vertices must be at least n (remember that v and w are not
adjacent in G). So, G must have had a Hamiltonian cycle after all. This
concludes the proof.
Do you think that this technique seems logically valid? Can you
identify what premise(s) it relies upon?
Can you explain how the technique of 'Proof by Contradiction'
is used to prove Ore's Theorem? There is a second, less
explicit use of 'Proof by Contradiction' in the body of the proof.
Can you identify it?
There is another result called Dirac's Theorem. Dirac's Theorem
may be stated:
Theorem If G is a simple graph on n ≥ 3 vertices with the property that all
of its vertices have degree at least n/2, then G is Hamiltonian.