Heap

The purpose of a heap is to perform extractMin/extractMax, deleteMin/deleteMax, and peek.

You cannot search for an element efficiently in a heap, i.e., a heap is NOT a search tree.

Representing a Binary Tree using an Array

We store each node at a position in an array. But the key question is: how do we know the structure of the binary tree? (i.e., how do we find a node’s parent and children?)

A common implementation is as follows:

If a node $u$ is at index $i$ of the array (using 1-based indexing), then its left child is at index $2i$, its right child is at index $2i + 1$, and its parent is at index $\lfloor \frac{i}{2} \rfloor$.

It is easy to visualize as follows: draw the binary tree; then, starting from the top and moving down, add the nodes at every level in left-to-right order.

But if you think hard about it, you realise that this only works for a complete binary tree (if nodes to the left of the current node on the same level have missing children, then the current node cannot have any children). You can represent non-complete binary trees too if you put a null value at the index of each missing node.
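
As a quick illustration, here is a minimal Java sketch (assuming the 1-indexed convention above, with slot 0 of the array left unused; the class and method names are mine, for illustration) showing that simple index arithmetic is all it takes to navigate the tree:

public class ArrayTreeIndex {
	static int leftChild(int i)  { return 2 * i; }     // left child of the node at index i
	static int rightChild(int i) { return 2 * i + 1; } // right child
	static int parent(int i)     { return i / 2; }     // integer division = floor, for i >= 1

	public static void main(String[] args) {
		// For the node at index 3: children at indices 6 and 7, parent at index 1.
		System.out.println(leftChild(3) + " " + rightChild(3) + " " + parent(3)); // 6 7 1
	}
}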

So, why don’t we use an array representation everywhere? How about an AVL tree? Just insert null wherever an element is missing, right? What’s the point of creating so many nodes and having pointers to depict parent-child relations?

Umm.. what about rotations then? Changing the parent-child relations would involve shifting a lot of elements to ensure that our parent-child relationship information is maintained correctly. It would take $O(n)$ in the worst case, which, needless to say, is terrible.

Full and Complete Binary Tree

A full binary tree of height $h$ has $2^{h+1} - 1$ nodes. That is, adding a node (it doesn’t matter where, since there is no space anywhere) increases the height of the tree (that’s why we call it “full” - there is no more space left).

A complete binary tree is a binary tree in which all the levels are completely filled except possibly the lowest one, which is filled from the left. A way to visualise this is to consider the array representation of a complete binary tree - it must not have any null values between 2 non-null values.

The height of a complete binary tree (and obviously also of a full binary tree) is $O(\log n)$ (since it is a full binary tree up to height $h - 1$). In other words, a complete binary tree is balanced.
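
To see why (a short derivation, not spelled out in the notes): if a complete binary tree with $n$ nodes has height $h$, then levels $0$ through $h - 1$ are completely filled and level $h$ contains at least one node, so

$$2^h \leq n \leq 2^{h+1} - 1 \implies h \leq \log_2 n < h + 1 \implies h = \lfloor \log_2 n \rfloor = O(\log n).$$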

Min-heap

We shall discuss only the min-heap here. A max-heap is exactly analogous.

A min-heap is a complete binary tree in which every node has a value no greater than that of its children (called the min-heap property). Observe that this is a local property, i.e., for any node, we only need to check its left and right children and not any other node. This makes it easy to update on insertion and deletion. (Think about what would happen if we had a property that depended on all other nodes in the tree and we performed an insertion: it would take $O(n)$ to traverse the tree. Here, we only check the nodes along the root-to-leaf path of the newly inserted node.)

Invariant: Formally, for any node u, u.value <= u.right.value && u.value <= u.left.value. It is crucial to be able to maintain this invariant (efficiently!) on insertion and deletion.

So, the root of a min-heap has the lowest value (this follows from the above property). Due to the recursive nature of the definition, a node has a value lower than all its descendants and higher than all its ancestors (but it bears no relation to an ancestor’s other subtree, i.e., it is possible for a node at depth $d$ from the root to have a larger value than a node at depth $d + 5$ from the root; obviously 5 is arbitrary here). So, the lower we go in the heap, the greater the values become. (Remember that there is no condition on the relation between the values of a left and right child.)
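
Because the property is local, verifying it is a single linear scan over the array. Here is a small illustrative sketch in Java (isMinHeap is an assumed helper name, not from the notes; it uses the 1-indexed convention with a[1..n] holding the heap):

static boolean isMinHeap(int[] a, int n) {
	for (int i = 1; i <= n; i++) {
		int l = 2 * i, r = 2 * i + 1;
		if (l <= n && a[i] > a[l]) return false; // node exceeds its left child
		if (r <= n && a[i] > a[r]) return false; // node exceeds its right child
	}
	return true; // every node satisfies the local min-heap property
}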

Insertion in Min-heap

Consider a min-heap represented by the array: $[2, 4, 3, 5, 6]$ (verify the parent-child relationships using the formulae described before, as an exercise). In tree form, 2 is the root with children 4 (left) and 3 (right), and 4 has children 5 and 6.

Say now we insert the element 1. We need to maintain two properties: the complete binary tree property and the min-heap property. Maintaining the complete binary tree property is more important since it is a structural property, i.e., it depends on the positions of the nodes in the tree. (The min-heap property can easily be maintained by performing swaps.)

The procedure is as follows:

  1. Add the element to the end of the array (this maintains the complete binary tree property as there are no gaps) (just a regular insert)

  2. If the heap property is violated, i.e., if the newly added node has a value lower than its parent, swap them. (maintain invariant of the min-heap)

  3. Keep performing step 2 until the heap property is no longer violated.

Since we perform at most $O(\log n)$ swaps, insertion takes $O(\log n)$ (since we know that the height of a heap is $O(\log n)$ and we are only looking at the nodes along the root-to-leaf path)

insert(u, heap):
	heap.add(u); // append at the end; heap property may be violated
	int index = heap.size() - 1; // 0-indexed array, so the parent of index i is at (i - 1) / 2
	while (index > 0 && heap.get(index).value < heap.get((index - 1) / 2).value) { // traverse leaf-to-root path
		swap(index, (index - 1) / 2, heap); // swap with parent to correct the heap property
		index = (index - 1) / 2;
	}

This paradigm of insertion/deletion is quite similar to a lot of other data structures involving trees. For example, consider deletion in an AVL tree. Once we delete a node, we travel up to the root, performing rotations as necessary to ensure that our invariant (height-balance) remains true.

Notice that the element bubbles upwards from the leaf to the root (if necessary) when inserted. So, the direction of adjustment is upwards.

In case of our above example, the final heap has 1 at the root with children 4 (left) and 2 (right); 4 keeps its children 5 and 6, while 2 gains 3 as its left child. The array representation would be $[1, 4, 2, 5, 6, 3]$.
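
Here is a self-contained Java sketch of the trace above (illustrative only; it uses a 0-indexed ArrayList, so the parent of index i is at (i - 1) / 2):

import java.util.*;

public class MinHeapInsertDemo {
	static void insert(List<Integer> heap, int u) {
		heap.add(u); // step 1: append at the end (keeps the tree complete)
		int i = heap.size() - 1;
		while (i > 0 && heap.get(i) < heap.get((i - 1) / 2)) { // step 2: smaller than parent?
			Collections.swap(heap, i, (i - 1) / 2); // swap with the parent
			i = (i - 1) / 2; // step 3: continue up the leaf-to-root path
		}
	}

	public static void main(String[] args) {
		List<Integer> heap = new ArrayList<>(List.of(2, 4, 3, 5, 6));
		insert(heap, 1);
		System.out.println(heap); // [1, 4, 2, 5, 6, 3]
	}
}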

Deletion in Min-heap

You are only allowed to delete the root of the min-heap. That’s the whole point of storing the minimum at the root: so that you can pop it out. Heaps are commonly used as implementations of a priority queue, and in a priority queue, you want to be able to remove the element with the lowest key (or whatever you specify as the basis for the total order).
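
For reference, Java’s built-in java.util.PriorityQueue is a binary min-heap, so the operations discussed here map directly onto it:

import java.util.PriorityQueue;

public class PqDemo {
	public static void main(String[] args) {
		PriorityQueue<Integer> pq = new PriorityQueue<>(); // min-heap by natural order
		pq.add(4); pq.add(1); pq.add(3);   // insert
		System.out.println(pq.peek()); // 1 (peek at the minimum without removing it)
		System.out.println(pq.poll()); // 1 (extractMin)
		System.out.println(pq.poll()); // 3 (next smallest)
	}
}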

The question is: after you delete the root, which node should become the root? A naive answer would be to promote the lower of the two children. But repeatedly promoting a child leaves a hole that moves down the tree, so the result may no longer be a complete binary tree. Remember that you need to maintain 2 key properties of a heap: it should be a complete binary tree, and the invariant (min-heap property) should be satisfied.

Here is the procedure (quite neat!) for deletion:

  1. Pop out the root from the heap

  2. Make the last element of the array (the right-most node on the bottom-most level) the new root (this preserves the complete binary tree property)

  3. Now send the root down towards the leaves by swapping it with the smaller of its two children as necessary (this maintains the invariant)

Observe that the direction of adjustment is downwards in this case (opposite to that of insertion) - we push the node from the root to the leaf.

pop():
	Node root = heap.get(0);
	heap.set(0, heap.get(heap.size() - 1)); // make the last element the new root
	heap.removeIndex(heap.size() - 1); // preserves complete binary tree property
	int index = 0;
	while (2 * index + 1 < heap.size()) { // 0-indexed: children of index i are at 2i + 1 and 2i + 2
		int left = 2 * index + 1, right = 2 * index + 2;
		int smaller = (right < heap.size() && heap.get(right).value < heap.get(left).value) ? right : left;
		if (heap.get(index).value <= heap.get(smaller).value) break; // no violation: done
		swap(index, smaller, heap); // swap with the smaller child to correct the violation
		index = smaller;
	}
	return root;

It should be obvious by now that the time for deletion is $O(\log n)$ since we perform at most $\log n$ swaps while traversing the root-to-leaf path.
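
Continuing the running example with a hedged Java sketch (0-indexed as before, so the children of index i are at 2i + 1 and 2i + 2): popping the root 1 moves the last element 3 to the top and sinks it back down, restoring the original heap:

import java.util.*;

public class MinHeapPopDemo {
	static int pop(List<Integer> heap) {
		int root = heap.get(0);
		heap.set(0, heap.get(heap.size() - 1)); // the last element becomes the new root
		heap.remove(heap.size() - 1);           // keeps the tree complete
		int i = 0;
		while (2 * i + 1 < heap.size()) {       // while a left child exists
			int l = 2 * i + 1, r = 2 * i + 2;
			int smaller = (r < heap.size() && heap.get(r) < heap.get(l)) ? r : l;
			if (heap.get(i) <= heap.get(smaller)) break; // invariant restored
			Collections.swap(heap, i, smaller); // sink towards the leaves
			i = smaller;
		}
		return root;
	}

	public static void main(String[] args) {
		List<Integer> heap = new ArrayList<>(List.of(1, 4, 2, 5, 6, 3));
		System.out.println(pop(heap)); // 1
		System.out.println(heap);      // [2, 4, 3, 5, 6]
	}
}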

If you want to delete a non-root node, you can use lazy deletion. When a delete operation is performed on a vertex, just flag it as “INVALID”. Then, if we encounter an INVALID vertex on an “extractMin/pop” call, we just remove it and call extractMin again (until we get a valid node). This notion of labelling nodes as DELETED/INVALID is quite popular (recall that we used a similar idea while deleting during open addressing). It ensures that our structural property remains intact (in this case, the complete binary tree; in the case of open addressing, no null elements between two non-null elements that were originally mapped to the same bucket, which would otherwise lead to a key-not-found error while searching even if the key exists).
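
A sketch of this idea (all names here are illustrative, and it assumes keys are distinct so a set can serve as the INVALID flag):

import java.util.*;

class LazyHeap {
	private final PriorityQueue<Integer> pq = new PriorityQueue<>();
	private final Set<Integer> invalid = new HashSet<>(); // keys flagged as INVALID

	void insert(int x) { pq.add(x); }
	void delete(int x) { invalid.add(x); } // O(1): just flag it, no restructuring

	Integer extractMin() {
		while (!pq.isEmpty() && invalid.contains(pq.peek())) {
			invalid.remove(pq.poll()); // discard INVALID nodes as they surface at the root
		}
		return pq.poll(); // null if the heap is empty
	}
}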

As an aside, it is super important to be able to spot such similarities in paradigms and common tips and tricks for data structures, and to use them while designing your own (ingenious and creative) data structures to solve problems. Joining the dots and seeing patterns between related (or even seemingly unrelated) topics is one of the surest ways to gain insight.

Heap Sort

The main purpose of a heap is to be able to get the item with the lowest value efficiently in a dynamic data structure (one that supports insertion and deletion). If no more insertions or deletions were going to be performed, we could just sort the elements once and return them in order.

But we can use a heap to sort the elements too. Given an array and an empty heap, just insert all the elements into the heap one by one (or use heapify if you want it to be slightly faster) and then delete them one by one (each deletion gives the lowest remaining element). The order in which the elements are obtained is the sorted order.

Since both insertions and deletions take $O(\log n)$ time, the total running time of HeapSort is $O(n \log n)$, as insertions and deletions are performed exactly $n$ times each (for an array of size $n$). Even if you use heapify, deleting $n$ elements would take $O(n \log n)$, and so the total running time would remain unchanged.

In fact, one cool feature is that you can use the freed space of the same array that represents the heap to store the sorted portion of the array while you are deleting the elements. (When you delete from a heap, the last slot becomes free; put the deleted element at that position.) So, it takes only $O(1)$ extra space. Compare this with MergeSort, which needs $O(n)$ additional space to store the intermediate stages of the array.
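
Putting the pieces together, here is a hedged sketch of in-place HeapSort in Java. It uses a max-heap so that each extracted maximum lands in the freed slot at the end, leaving the array in ascending order (the method names are mine, not from the notes):

public class HeapSortDemo {
	static void siftDown(int[] a, int n, int i) { // 0-indexed max-heap sift-down on a[0..n-1]
		while (2 * i + 1 < n) {
			int l = 2 * i + 1, r = 2 * i + 2;
			int larger = (r < n && a[r] > a[l]) ? r : l;
			if (a[i] >= a[larger]) return; // heap property holds
			int tmp = a[i]; a[i] = a[larger]; a[larger] = tmp;
			i = larger;
		}
	}

	static void heapSort(int[] a) {
		for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, a.length, i); // heapify: O(n)
		for (int end = a.length - 1; end > 0; end--) {
			int tmp = a[0]; a[0] = a[end]; a[end] = tmp; // extracted max fills the freed slot
			siftDown(a, end, 0); // restore the heap on the remaining prefix
		}
	}

	public static void main(String[] args) {
		int[] a = {5, 2, 6, 1, 4, 3};
		heapSort(a);
		System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
	}
}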

Heapify

Heapify is a method to create a heap from an unordered array. One way to create a heap is to perform an insertion for each element; then, the direction of adjustment is from the leaf to the root. This takes $O(n \log n)$. Heapify is based on a different procedure (and is faster!!!).

Given an array that represents a complete binary tree (which does not satisfy the max-heap property), we want to create a heap (say, a max-heap in this case).

We correct a single violation of the heap property at a subtree’s root at every step. So, heapify takes in a subtree whose root may violate the max-heap property but whose left and right subtrees are already valid heaps.

Heapify(array, size, i) #(sift the node at index i down; 0-indexed)
  largest = i
  leftChild = 2*i + 1
  rightChild = 2*i + 2

  if leftChild < size and array[leftChild] > array[largest]
    set leftChild as largest
  if rightChild < size and array[rightChild] > array[largest]
    set rightChild as largest

  if largest != i
    swap array[i] and array[largest]
    Heapify(array, size, largest) #(continue correcting below)

MaxHeap(array, size)
  loop i from the last non-leaf node (size/2 - 1) down to zero #(right to left in the array)
    Heapify(array, size, i)

The precondition is that if heapify is correcting a node at index $i$, its left and right children are roots of valid heaps. The postcondition is that the tree rooted at index $i$ is now a valid heap.

Recall that if $u$ is the root of a heap, u.left and u.right are also roots of heaps. The loop invariant in the case of heapify is that when heapify is correcting a node at index $i$, all the nodes after $i$ are part of some correct heap. Formally, after $k$ iterations of heapify, all subtrees stemming from the last $k$ items in the array are valid heaps. This means that by the time we finish $n$ iterations, all nodes in the tree are themselves roots of valid heaps and therefore the heap property is achieved.

We greedily build a heap using a bottom-up approach. In any complete binary tree, each node in the lowest level of the tree is a heap by itself. In the second lowest level, perform a swap if necessary.

Starting from the lowest level and moving upwards (leftwards in the array), ensure that each node is part of a heap.
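
A concrete Java sketch of this bottom-up build (the array values are chosen arbitrarily for illustration):

public class HeapifyDemo {
	static void heapify(int[] a, int size, int i) { // corrects one violation at index i (0-indexed)
		int largest = i, l = 2 * i + 1, r = 2 * i + 2;
		if (l < size && a[l] > a[largest]) largest = l;
		if (r < size && a[r] > a[largest]) largest = r;
		if (largest != i) {
			int tmp = a[i]; a[i] = a[largest]; a[largest] = tmp;
			heapify(a, size, largest); // keep sifting down
		}
	}

	public static void main(String[] args) {
		int[] a = {1, 5, 3, 2, 4};
		for (int i = a.length / 2 - 1; i >= 0; i--) heapify(a, a.length, i); // right to left
		System.out.println(java.util.Arrays.toString(a)); // [5, 4, 3, 2, 1], a valid max-heap
	}
}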

We perform at most $n$ swaps in total, and so the time complexity of heapify is $O(n)$ (think about the maximum number of swaps that can be performed for every node and sum them all up; don’t use $O(\log n)$ per node since that is a loose bound).

So, the minimum time to create a heap is $O(n)$, using heapify. Recall that creating a heap using insertions takes $O(n \log n)$.

An important difference is that when the direction of adjustment is upwards (bubbling the node upwards) and you start from the leaf, it takes $O(\log n)$ swaps since you may need to go all the way up to the root. But when the direction of adjustment is downwards (bubbling the node down) and you start at a node at height $h$, it takes only $O(h)$ time.

Proof that the time taken for heapify is $O(n)$:

The maximum number of swaps needed per node increases as we go higher up the heap, but the number of nodes decreases exponentially. In particular, with each level higher, the number of nodes halves while the number of swaps per node increases by 1. So, the two effects cancel out (neutralize each other).

Consider a full binary tree of height $h$ having $n$ nodes (so $n = 2^{h+1} - 1$). Then, we have about $n/2$ leaf nodes, on which no swaps need to be performed. For $n/4$ nodes, we need to perform at most 1 swap (there is only 1 level below such a node). For $n/8$ nodes, we need to perform a maximum of 2 swaps, and so on.

A more rigorous proof is as follows:

Observe that max_heapify takes $O(1)$ time for nodes that are one level above the leaves and, in general, $O(l)$ time for nodes that are $l$ levels above the leaves. Further, there are $n/4$ nodes at level 1, $n/8$ nodes at level 2, $\dots$, and 1 node at level $\log n$ (counting levels upwards from the leaves).

So, the total amount of work in the for loop can be summed as:

$$T(n) = \dfrac{n}{4}(1c) + \dfrac{n}{8}(2c) + \dfrac{n}{16}(3c) + \dfrac{n}{32}(4c) + \dots + 1(c \log n)$$

For ease of computation, set $n/4 = 2^k$. Then, we have

$$T(n) = c \, 2^k \left( \dfrac{1}{2^0} + \dfrac{2}{2^1} + \dfrac{3}{2^2} + \dots + \dfrac{k+1}{2^k} \right)$$

The convergent series in the brackets is bounded by a constant; in fact, it sums to at most 4, since it is $1 + \sum_{i \geq 1}\left(\dfrac{i}{2^i} + \dfrac{1}{2^i}\right) = 1 + 2 + 1 = 4$ (using the sum $S = 2$ derived below). So, we have $T(n) = O(2^k) = O(n)$.

Derivation of the sum of the arithmetico-geometric series:

Consider the series $S = \sum_{i = 0}^{\infty} \dfrac{i}{2^i}$.

$$\begin{aligned}
S &= \dfrac{1}{2} + \dfrac{2}{2^2} + \dfrac{3}{2^3} + \dots \\
&= \dfrac{1}{2} + \dfrac{1}{2}\left( \dfrac{2}{2} + \dfrac{3}{4} + \dfrac{4}{8} + \dots \right) \\
&= \dfrac{1}{2} + \dfrac{1}{2}\left( S + \sum_{i = 1}^{\infty} \dfrac{1}{2^i} \right) \text{ (expand the first few terms of each to verify)} \\
&= \dfrac{1}{2} + \dfrac{1}{2}(S + 1) \text{ (it is a common geometric series)} \\
\dfrac{S}{2} &= 1 \\
S &= 2
\end{aligned}$$

Another proof of heapify time complexity:

What is the height of the subtree rooted at item $j$ (0-indexed)?

Full height of the tree: $\log_2 n$.

Levels above the subtree at $j$: $\log_2(j+1)$.

Height of the subtree rooted at $j$: $\log_2 n - \log_2(j+1) = \log_2\left(\dfrac{n}{j+1}\right)$.

So, what is the maximum number of comparisons heapify requires on a subheap rooted at index $j$? It is $\log_2\left(\dfrac{n}{j+1}\right) + 1 \leq O\left(\log \dfrac{n}{j}\right)$: for every vertex touched during the bubbleDown routine, $O(1)$ comparisons are needed, and this includes the leaves. So, another analysis of heapify yields:

$$\sum_{j=0}^{n-1} O\left(\log \dfrac{n}{j+1}\right) = O\left(\log \dfrac{n^n}{n!}\right) = O(n),$$

since $n! \geq (n/e)^n$ by Stirling’s approximation, so $\dfrac{n^n}{n!} \leq e^n$ and its logarithm is $O(n)$.

Heap vs AVL Tree

The advantages of using an AVL tree are numerous: not only can it serve as both a max-PQ and a min-PQ at the same time, it also supports searching, predecessor, and successor queries. A heap is not without its merits either: it has similar asymptotic costs for the standard PQ operations, has faster real costs (lower constant factors), is simpler to implement, and enjoys slightly better concurrency.

Which implementation is preferred clearly depends on the problem context: what is the most frequent operation? Do we start off with a list of priorities, or do they come in one at a time? And so on.
