An adjacency matrix is a way of representing an $ \mathtt{n}$-vertex graph $ G=(V,E)$ by an $ \ensuremath{\mathtt{n}}\times\ensuremath{\mathtt{n}}$ matrix, $ \mathtt{a}$, whose entries are boolean values.
int n;
boolean[][] a;
AdjacencyMatrix(int n0) {
    n = n0;
    a = new boolean[n][n];
}
The matrix entry $ \mathtt{a[i][j]}$ is defined as
\begin{displaymath}
  \ensuremath{\mathtt{a[i][j]}} =
  \begin{cases}
    \ensuremath{\mathtt{true}} & \text{if $\mathtt{(i,j)}\in E$} \\
    \ensuremath{\mathtt{false}} & \text{otherwise}
  \end{cases}
\end{displaymath}
The adjacency matrix for the graph in Figure 12.1 is shown in Figure 12.2. With this representation, the $ \mathtt{addEdge(i,j)}$, $ \mathtt{removeEdge(i,j)}$, and $ \mathtt{hasEdge(i,j)}$ operations just involve setting or reading the matrix entry $ \mathtt{a[i][j]}$:
void addEdge(int i, int j) {
    a[i][j] = true;
}
void removeEdge(int i, int j) {
    a[i][j] = false;
}
boolean hasEdge(int i, int j) {
    return a[i][j];
}
These operations clearly take constant time per operation.
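To see these constant-time operations in action, the snippets above can be wrapped into a small self-contained class. The method names follow the text; the class name and demo graph are illustrative.

```java
// Illustrative: the constant-time edge operations described above,
// collected into one runnable class.
public class EdgeOpsDemo {
    int n;
    boolean[][] a;

    EdgeOpsDemo(int n0) {
        n = n0;
        a = new boolean[n][n];
    }

    void addEdge(int i, int j) { a[i][j] = true; }
    void removeEdge(int i, int j) { a[i][j] = false; }
    boolean hasEdge(int i, int j) { return a[i][j]; }

    public static void main(String[] args) {
        EdgeOpsDemo g = new EdgeOpsDemo(4);
        g.addEdge(1, 2);
        System.out.println(g.hasEdge(1, 2)); // true
        g.removeEdge(1, 2);
        System.out.println(g.hasEdge(1, 2)); // false
    }
}
```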
Where the adjacency matrix performs poorly is with the $ \mathtt{outEdges(i)}$ and $ \mathtt{inEdges(i)}$ operations. To implement these, we must scan all $ \mathtt{n}$ entries in the corresponding row or column of $ \mathtt{a}$ and gather up all the indices, $ \mathtt{j}$, where $ \mathtt{a[i][j]}$, respectively $ \mathtt{a[j][i]}$, is true.
List<Integer> outEdges(int i) {
    List<Integer> edges = new ArrayList<Integer>();
    for (int j = 0; j < n; j++)
        if (a[i][j]) edges.add(j);
    return edges;
}
List<Integer> inEdges(int i) {
    List<Integer> edges = new ArrayList<Integer>();
    for (int j = 0; j < n; j++)
        if (a[j][i]) edges.add(j);
    return edges;
}
These operations clearly take $ O(\ensuremath{\mathtt{n}})$ time per operation.
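The row and column scans can be exercised on a small example. The sketch below restates the two scans as static helpers; the class name and demo graph are illustrative, not from the text.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative: outEdges scans row i, inEdges scans column i.
public class ScanDemo {
    static List<Integer> outEdges(boolean[][] a, int i) {
        List<Integer> edges = new ArrayList<Integer>();
        for (int j = 0; j < a.length; j++)
            if (a[i][j]) edges.add(j);   // row scan: edges leaving i
        return edges;
    }

    static List<Integer> inEdges(boolean[][] a, int i) {
        List<Integer> edges = new ArrayList<Integer>();
        for (int j = 0; j < a.length; j++)
            if (a[j][i]) edges.add(j);   // column scan: edges entering i
        return edges;
    }

    public static void main(String[] args) {
        boolean[][] a = new boolean[3][3];
        a[0][1] = a[0][2] = a[2][0] = true;   // edges (0,1), (0,2), (2,0)
        System.out.println(outEdges(a, 0));   // [1, 2]
        System.out.println(inEdges(a, 0));    // [2]
    }
}
```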
Another drawback of the adjacency matrix representation is that it is big. It stores an $ \ensuremath{\mathtt{n}}\times\ensuremath{\mathtt{n}}$ boolean matrix, so it requires at least $ \ensuremath{\mathtt{n}}^2$ bits of memory. The implementation here uses a matrix of $ \mathtt{boolean}$ values, so it actually uses on the order of $ \ensuremath{\mathtt{n}}^2$ bytes of memory. A more careful implementation, which packs $ \mathtt{w}$ boolean values into each word of memory, could reduce this space usage to $ O(\ensuremath{\mathtt{n}}^2/\ensuremath{\mathtt{w}})$ words of memory.
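Such a packed implementation is not given in the text; the following is a minimal sketch of one way to do it, assuming 64-bit words ($ \mathtt{w}=64$) and a row-major bit layout. The class name and layout are illustrative.

```java
// Sketch: pack 64 boolean adjacency entries into each long, so the
// matrix uses roughly n^2/64 words instead of n^2 bytes.
public class PackedAdjacencyMatrix {
    int n;
    long[] bits;   // row-major: bit (i*n + j) encodes a[i][j]

    PackedAdjacencyMatrix(int n0) {
        n = n0;
        bits = new long[(n * n + 63) / 64];   // ceil(n^2 / 64) words
    }

    void addEdge(int i, int j) {
        int b = i * n + j;
        bits[b >> 6] |= 1L << (b & 63);       // set bit b
    }

    void removeEdge(int i, int j) {
        int b = i * n + j;
        bits[b >> 6] &= ~(1L << (b & 63));    // clear bit b
    }

    boolean hasEdge(int i, int j) {
        int b = i * n + j;
        return (bits[b >> 6] & (1L << (b & 63))) != 0;
    }
}
```

The three edge operations remain constant time; only the storage format changes.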
Despite the high memory usage and poor performance of the $ \mathtt{inEdges(i)}$ and $ \mathtt{outEdges(i)}$ operations, an AdjacencyMatrix can still be useful for some applications. In particular, when the graph $ G$ is dense, i.e., it has close to $ \ensuremath{\mathtt{n}}^2$ edges, then a memory usage of $ \ensuremath{\mathtt{n}}^2$ may be acceptable.
The AdjacencyMatrix data structure is also commonly used because algebraic operations on the matrix $ \mathtt{a}$ can be used to efficiently compute properties of the graph $ G$. This is a topic for a course on algorithms, but we point out one such property here: If we treat the entries of $ \mathtt{a}$ as integers (1 for $ \mathtt{true}$ and 0 for $ \mathtt{false}$) and multiply $ \mathtt{a}$ by itself using matrix multiplication then we get the matrix $ \ensuremath{\mathtt{a}}^2$. Recall, from the definition of matrix multiplication, that
\begin{displaymath}
  \ensuremath{\mathtt{a}}^2_{ij} = \sum_{k=0}^{\ensuremath{\mathtt{n}}-1} \ensuremath{\mathtt{a}}_{ik}\ensuremath{\mathtt{a}}_{kj} \enspace .
\end{displaymath}
Interpreting this sum in terms of the graph $ G$, this formula counts the number of vertices, $ \ensuremath{\mathtt{k}}$, such that $ G$ contains both edges $ \mathtt{(i,k)}$ and $ \mathtt{(k,j)}$. That is, it counts the number of paths from $ \ensuremath{\mathtt{i}}$ to $ \ensuremath{\mathtt{j}}$ (through intermediate vertices, $ \ensuremath{\mathtt{k}}$) that have length exactly 2. This observation is the foundation of an algorithm that computes the shortest paths between all pairs of vertices in $ G$ using only $ O(\log \ensuremath{\mathtt{n}})$ matrix multiplications.
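This length-2 path count can be checked directly by squaring a small 0/1 adjacency matrix. The sketch below is illustrative; the class, method, and demo graph are not from the text.

```java
// Sketch: squaring the 0/1 adjacency matrix counts length-2 paths.
public class MatrixPaths {
    // Returns m2 where m2[i][j] = number of length-2 paths from i to j,
    // i.e. the number of vertices k with edges (i,k) and (k,j).
    static int[][] square(boolean[][] a) {
        int n = a.length;
        int[][] m2 = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    if (a[i][k] && a[k][j]) m2[i][j]++;
        return m2;
    }

    public static void main(String[] args) {
        // Paths 0 -> 1 -> 2 and 0 -> 3 -> 2: two length-2 paths from 0 to 2.
        boolean[][] a = new boolean[4][4];
        a[0][1] = a[1][2] = a[0][3] = a[3][2] = true;
        System.out.println(square(a)[0][2]); // prints 2
    }
}
```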