Time Complexity

Introduction

The time complexity of a program describes the order of growth of its running time as the input size grows.

It tells us nothing about the absolute/actual running time of the algorithm. So, it is not always a good idea to compare two algorithms simply based on their time complexities. For example, MergeSort runs in $O(n \log n)$ whereas InsertionSort runs in $O(n^2)$. But in practice, for small arrays, InsertionSort often runs faster than MergeSort!

Since constants are omitted while dealing with the time complexity of algorithms, two programs may have the same complexity but very different running times. For example, consider program A with $T(n) = 10^8 n = O(n)$ and program B with $T(n) = n = O(n)$. Obviously program B will run faster than program A, but this is not evident if you only look at their orders of growth.
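
To make this concrete, here is a small worked comparison at an arbitrarily chosen input size of $n = 10^6$ (the value of $n$ is just for illustration):

$T_A(10^6) = 10^8 \times 10^6 = 10^{14}, \qquad T_B(10^6) = 10^6$

Both are $O(n)$, yet program A does $10^8$ times more work than program B at this input size.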

Useful Mathematical Facts

It is useful to know the following properties of exponents and logarithms:

  1. $a^x = m \iff x = \log_a(m)$ → definition of logarithm

  2. $\log_a(mn) = \log_a(m) + \log_a(n)$ → product property

  3. $\log_a(m/n) = \log_a(m) - \log_a(n)$ → quotient property

  4. $\log(m^n) = n\log(m)$ → power property

  5. $\log_a(b) = 1/\log_b(a)$

  6. $\log_b(a) = \log_c(a) \times \log_b(c) = \log_c(a)/\log_c(b)$ → change of base property

  7. $a^{\log_a(x)} = x$ → number raised to log

  8. $\log_a(a) = 1$

  9. $\log_a(1) = 0$

  10. $\log_a(1/b) = -\log_a(b)$

  11. $a^{\log_b(x)} = x^{\log_b(a)}$ (when $x, a > 0$) → can take $\log$ on both sides to verify it's true

  12. $a^m a^n = a^{m+n}$

  13. $a^m/a^n = a^{m-n}$

  14. $1/a^m = a^{-m}$

  15. $(a^m)^n = a^{mn}$

  16. $(ab)^m = a^m b^m$
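
As a quick sanity check of property 11, take $\log_b$ of both sides (as the note in that item suggests); both sides reduce to the same expression:

$\log_b\left(a^{\log_b(x)}\right) = \log_b(x)\log_b(a) = \log_b\left(x^{\log_b(a)}\right) \implies a^{\log_b(x)} = x^{\log_b(a)}$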

$1 + 2 + 3 + \dots + n = \dfrac{n(n+1)}{2} = O(n^2)$

The above arithmetic progression can appear in various forms. The more general form is:

Given an arithmetic series, $a + (a + d) + (a + 2d) + \dots + (a + (n-1)d) = \dfrac{n}{2}(2a + (n-1)d) = \dfrac{n}{2}(a + l)$, where $l$ is the last term of the series and $n$ is the number of terms.

$1^2 + 2^2 + 3^2 + \dots + n^2 = \dfrac{n(n+1)(2n+1)}{6} = O(n^3)$
$1^3 + 2^3 + \dots + n^3 = \left(\dfrac{n(n+1)}{2}\right)^2 = O(n^4)$
$1 + r + r^2 + r^3 + \dots = \dfrac{1}{1-r}, \quad \text{for}\ |r| < 1$

More generally, the sum of the geometric progression $a + ar + ar^2 + \dots + ar^{n-1} = \dfrac{a(r^n - 1)}{r-1}$ (for $r \neq 1$)

$\dfrac{1}{1} + \dfrac{1}{2} + \dfrac{1}{3} + \dots + \dfrac{1}{n} = O(\log n)$

The above series (called the harmonic series) diverges as $n \to \infty$, but the sum of its first $n$ terms is $\Theta(\log n)$. This is because:

$\ln(n+1) = \int_1^{n+1} \dfrac{1}{x}\,dx < \sum_{k = 1}^{n} \dfrac{1}{k} \leq 1 + \int_1^{n} \dfrac{1}{x}\,dx = 1 + \ln n$

Both bounds are easy to see by comparing the unit-width rectangles of heights $1, \frac{1}{2}, \dots, \frac{1}{n}$ with the area under the graph of $y = 1/x$.

$1 + 2 + 4 + 8 + 16 + \dots + 2^m = 2^{m+1} - 1 = O(2^m)$
$1 + 2 + 4 + 8 + 16 + \dots + m = 2m - 1 = O(m)$

The following result is also very useful (e.g. for deriving the time complexity of heapify)

$\sum_{i = 1}^{\infty} \dfrac{i}{2^i} = \dfrac{1}{2} + \dfrac{2}{4} + \dfrac{3}{8} + \dots = 2$
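
One way to see why this sum converges to 2 is a standard rearrangement into tail geometric series:

$\sum_{i = 1}^{\infty} \dfrac{i}{2^i} = \sum_{j = 1}^{\infty} \sum_{i = j}^{\infty} \dfrac{1}{2^i} = \sum_{j = 1}^{\infty} \dfrac{1}{2^{j-1}} = 2$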

Common Recurrence Relations

$T(n) = 2T\left(\dfrac{n}{2}\right) + O(1) = O(n)$

In general, $T(n) = cT\left(\dfrac{n}{c}\right) + O(1) = O(n)$


$T(n) = 2T\left(\dfrac{n}{2}\right) + O(n) = O(n \log n)$

A common example of an algorithm whose running time has the above recurrence relation is MergeSort.

In general, $T(n) = cT\left(\dfrac{n}{c}\right) + O(n) = O(n\log n)$
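
For the $c = 2$ case (MergeSort, mentioned above), a minimal sketch makes the shape of the recurrence explicit: two recursive calls on halves plus an $O(n)$ merge. This is an illustrative implementation, not prescribed code.

// MergeSort sketch: T(n) = 2T(n/2) + O(n) = O(n log n).
public static void mergeSort(int[] a, int lo, int hi) { // sorts a[lo..hi)
	if (hi - lo <= 1) return;              // base case: O(1)
	int mid = lo + (hi - lo) / 2;
	mergeSort(a, lo, mid);                 // T(n/2)
	mergeSort(a, mid, hi);                 // T(n/2)
	merge(a, lo, mid, hi);                 // O(n) work to recombine
}

private static void merge(int[] a, int lo, int mid, int hi) {
	int[] tmp = new int[hi - lo];
	int i = lo, j = mid, k = 0;
	while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
	while (i < mid) tmp[k++] = a[i++];
	while (j < hi) tmp[k++] = a[j++];
	System.arraycopy(tmp, 0, a, lo, tmp.length);
}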


$T(n) = T(n-1) + T(n-2) + \dots + T(1) = O(2^n)$, where $T(1) = 1$

Proof

Let $T(1) = 1$. Then, $T(2) = T(1) = 1$.

$T(3) = T(2) + T(1) = 1 + 1 = 2$

$T(4) = T(3) + T(2) + T(1) = 2 + 1 + 1 = 4$

$T(5) = T(4) + T(3) + T(2) + T(1) = 4 + 2 + 1 + 1 = 8$

$T(6) = T(5) + T(4) + T(3) + T(2) + T(1) = 8 + 4 + 2 + 1 + 1 = 16$

It is clear from the pattern that the running time doubles when $n$ is increased by 1: for $n \geq 3$, $T(n+1) = T(n) + \big(T(n-1) + \dots + T(1)\big) = T(n) + T(n) = 2T(n)$.

Hence, the solution of this recurrence is given by $T(n) = \begin{cases} 1 & \text{if } n = 1 \text{ or } 2 \\ 2^{n-2} & \text{otherwise}\end{cases}$

Note that the solution depends on the exact value of $T(1)$ and cannot be determined simply by knowing that $T(1) = O(1)$, because when it comes to algorithmic time complexity, the base of the exponent matters!


$T(n) = T(n/c) + O(1) = O(\log n) \qquad \text{for some constant } c > 1$

An example of an algorithm that follows the above recurrence relation is BinarySearch (here, c = 2).
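
For reference, a minimal iterative BinarySearch sketch (illustrative, not the course's reference code); each iteration does $O(1)$ work and halves the search range:

// Binary search on a sorted array: T(n) = T(n/2) + O(1) = O(log n).
public static int binarySearch(int[] a, int key) {
	int lo = 0, hi = a.length - 1;
	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;        // O(1) work per step
		if (a[mid] == key) return mid;
		else if (a[mid] < key) lo = mid + 1; // discard the left half
		else hi = mid - 1;                   // discard the right half
	}
	return -1;                               // key not found
}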


$T(n) = T(n/c) + O(n) = O(n) \qquad \text{for some constant } c > 1$
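
Expanding this recurrence (taking the $O(n)$ term to be exactly $n$ for simplicity) gives a decreasing geometric series dominated by its first term:

$T(n) = n + \dfrac{n}{c} + \dfrac{n}{c^2} + \dots \leq n \sum_{i = 0}^{\infty} \dfrac{1}{c^i} = \dfrac{c}{c-1}\,n = O(n)$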

Q. Solve the following recurrence relation: $T(n) = 2T(n-1) + O(1)$

\begin{equation*} \begin{split} T(n) &= 2T(n-1) + O(1) \\ &= 2[2T(n-2) + O(1)] + O(1) \\ &= 4T(n-2) + 2O(1) + O(1) \\ &\quad \text{Assuming that } O(1) = 1, \\ &= 4T(n-2) + 2 + 1 \\ &= 8T(n-3) + 4 + 2 + 1 \\ &\;\;\vdots \\ &= 2^n + 2^{n-1} + \dots + 4 + 2 + 1 \\ &= 2^{n+1} - 1 \\ &= O(2^n) \end{split} \end{equation*}

Some Fun Time Complexity Analysis Questions

What is the order of growth of a program which has running time $T(n) = \left(\dfrac{n^2}{17}\right)\left(\dfrac{\sqrt{n}}{4}\right) + \dfrac{n^3}{n-7} + n^2\log n$?

Answer: There is no bound! Because, as $n$ approaches 7, $T(n)$ grows without any bound. Mathematically, $\lim_{n \to 7} T(n) = \infty$

What is the order of growth of the following code?

public static int loopy(int n) {
	int j = 1;
	int n2 = n;
	for (int i = 0; i < n; i++) {
		n2 *= 5.0/7.0;   // compound assignment narrows back to int
		for (int k = 0; k < n2; k++) {
			System.out.println("Hello");
		}
	}
	return j;
}

Answer: $O(n)$

It is easy to observe that the total running time is directly proportional to the number of times "Hello" is printed (i.e., the number of times the inner loop runs). This can be analysed as follows:

\begin{equation*} \begin{split} T(n) &= \dfrac{5}{7}n + \left(\dfrac{5}{7}\right)^2 n + \left(\dfrac{5}{7}\right)^3 n + \dots \\ &= n \left( \dfrac{5}{7} + \left(\dfrac{5}{7}\right)^2 + \left(\dfrac{5}{7}\right)^3 + \dots \right) \\ &= n \dfrac{\frac{5}{7}}{1-\frac{5}{7}} \\ &= 2.5n \\ &= O(n) \end{split} \end{equation*}

Note that even though the outer loop runs $n$ times, since the value of n2 falls below 1 in fewer than $n$ iterations, in the last few iterations of the outer loop the inner loop does not run at all!

This is also a good example of why you should not just assume $O(n^2)$ because there is a nested for loop, or assume $O(n\log n)$ because at each stage n2 is reduced by a constant fraction. Don't jump to conclusions!

$T(n)$ is the running time of a divide-and-conquer algorithm that divides the input of size $n$ into $\dfrac{n}{10}$ equal parts and recurses on all of them. It uses $O(n)$ work in dividing/recombining on all of them (and there's no other cost, i.e., no other work done). The base case for the recursion is when the input is less than size 20, which costs $O(1)$. What is the order of growth of $T(n)$?

Ans: $O(n)$ (Read the question carefully! IT IS NOT $O(n\log n)$) $T(n) = \dfrac{n}{10}T\left(\dfrac{n}{n/10}\right) + O(n) = \dfrac{n}{10}O(1) + O(n) = O(n)$

What is the running time of the following code, as a function of $n$:

public static int recursiveloopy(int n) {
	for (int i = 0; i < n; i++) {
		for (int j = 0; j < n; j++) {
			System.out.println("Hello");
		}
	}
	if (n <= 2) return 1;
	else if (n % 2 == 0) return recursiveloopy(n + 1);
	else return recursiveloopy(n - 2);
}

Ans: $O(n^3)$

It is obvious that the nested for loops run in $O(n^2)$ time. The slightly more interesting part is the recursive call. If $n$ is even, recursiveloopy(n+1) is called to make it odd. If $n$ is odd, recursiveloopy(n-2) is called (and $n$ remains odd). So, recursiveloopy(n+1) is called at most once; all other times, we can safely assume that recursiveloopy(n-2) is called. Now, the recurrence relation would be $T(n) = T(n-2) + O(n^2)$ (since we can ignore the one time that recursiveloopy(n+1) is called, if ever). Solving the recurrence relation:
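
Taking the $O(n^2)$ term to be exactly $n^2$ and unrolling,

$T(n) = n^2 + (n-2)^2 + (n-4)^2 + \dots \leq 1^2 + 2^2 + \dots + n^2 = \dfrac{n(n+1)(2n+1)}{6} = O(n^3)$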

It is clear that the sum of all even squares and the sum of all odd squares each contribute nearly equally to the sum of the squares of all integers from 1 through $n$. That is, we are estimating that the sum of squares of only the odd integers from 1 to $n$ is roughly $\dfrac{n(n+1)(2n+1)}{12}$, which is still $O(n^3)$.

What is the running time of the following code, as a function of $n$:

public int f(int n) {
	for (int i = 0; i < n; i++) {
		for (int j = 1; j < i; j *= 2) {
			System.out.println(".");
		}
	}
	return 0;
}

Ans: $O(n\log n)$. This is a classic example of 2 nested for loops that produce order $O(n\log n)$ when the inner loop variable grows by a constant factor. The detailed analysis is as follows:

During the $i^{th}$ iteration of the outer for loop, the inner loop runs for about $\log(i)$ iterations. So,

$T(n) = \sum_{i = 1}^{n} \log(i) = \log(1) + \log(2) + \dots + \log(n) = \log(1 \cdot 2 \cdots n) = \log(n!) < \log(n^n) = n\log n = O(n\log n)$

What is the running time of the following code, as a function of $n$:

public String f(int n) {
	String s = "";
	for (int i = 0; i < n; i++) {
		s += "?";
	}
	return s;
}

Ans: $O(n^2)$. This appears to be $O(n)$ at first glance since there is only 1 for loop that runs exactly $n$ times. However, string concatenation takes $O(l)$ time, where $l$ is the length of the string. During the $i^{th}$ iteration of the loop, the string is $i - 1$ characters long. So, adding all of them up gives us $T(n) = \sum_{i = 0}^{n-1} i = 0 + 1 + 2 + \dots + (n - 1) = \dfrac{n(n-1)}{2} = O(n^2)$

What is the asymptotic running time of the following code, as a function of $n$, when you execute loopyloop(n,n)?

public static void loopyloop(int a, int b) {
	if (b == 1) {
		for (int i = 1; i <= a; i ++) {
			System.out.println("Loopy!");
		}
	}
	else loopyloop(a - 1, b/2);
}

Ans: $O(n)$. By the time b equals 1, there have been $\log_2 n$ recursive calls made to loopyloop and so, the value of a would have reduced to $n - \log_2 n$. So, the for loop runs $n - \log_2 n$ times. This is still $O(n)$.

What is the asymptotic running time of the following code, as a function of $n$, when you execute doubleTwist(n)?

public static void doubleTwist(int a) {
	if (a == 1) return;
	twist(a/2);
	twist(a/2);
}
public static void twist(int b) {
	if (b == 1) return;
	doubleTwist(b/2);
	doubleTwist(b/2);
}

Ans: Let the running time of doubleTwist be $f(n)$ and that of twist be $g(n)$. Then, $f(n) = 2g(n/2) + O(1)$ and $g(n) = 2f(n/2) + O(1)$. We can draw out the recursion tree or we can solve the recurrence relation rigorously. Either way the answer is $O(n)$.

$f(n) = 2 \times g(n/2) = 4 \times f(n/4) = 16 f(n/16) = \dots = O(n)$

Find the running time of the following program as a function of $n$ (assume that isDivisible(x,y) takes $O(1)$ to run):

public static boolean isPrime(int n) {
	if (n < 2) return false;
	for (int i = 2; i <= Math.sqrt(n); i++) {
		if (isPrime(i) && isDivisible(n, i)) {
			return false;
		}
	}
	return true;
}

Ans: $O(n)$

Let us consider the recurrence relation of the function: $T(n) = \sqrt{n} + \sum_{i = 2}^{\sqrt{n}} T(i)$

This is because the for loop runs $O(\sqrt{n})$ times (and each call of isDivisible is $O(1)$). So, excluding the recursive calls, the time complexity of $T(n)$ would be $O(\sqrt{n})$. But it also makes a recursive call in every iteration of the loop, so we add those too.

Note that this is not a trivial recurrence relation to solve.

\begin{equation*} \begin{split} T(n) &= \sqrt{n} + \sum_{i = 2}^{\sqrt{n}} T(i) \\ &\leq \sqrt{n} + \sqrt{n} \times T(\sqrt{n}) \\ &= n^{1/2} + n^{1/2} \times T(n^{1/2}) \\ &= n^{1/2} + n^{1/2}\left(n^{1/4} + n^{1/4} \times T(n^{1/4})\right) \\ &= n^{1/2} + n^{3/4} + n^{3/4} T(n^{1/4}) \\ &= n^{1/2} + n^{3/4} + n^{3/4}\left(n^{1/8} + n^{1/8} \times T(n^{1/8})\right) \\ &= n^{1/2} + n^{3/4} + n^{7/8} + n^{7/8} \times T(n^{1/8}) \\ &\;\;\vdots \\ &= n^{1/2} + n^{3/4} + n^{7/8} + \dots + nT(1) \\ &= nT(1) + \dots + \dfrac{n}{n^{1/32}} + \dfrac{n}{n^{1/16}} + \dfrac{n}{n^{1/8}} + \dfrac{n}{n^{1/4}} + \dfrac{n}{n^{1/2}} \end{split} \end{equation*}

Let us assume that $T(1) = 1/2$. Then,

\begin{equation*} \begin{split} T(n) &= \dfrac{n}{2} + \dfrac{n}{2^2} + \dfrac{n}{2^4} + \dfrac{n}{2^8} + \dots \\ &= n \sum_{i = 0}^{\infty} \dfrac{1}{2^{2^i}} = O(n), \qquad \text{since } \sum_{i = 0}^{\infty} \dfrac{1}{2^{2^i}} \text{ is a constant approximately equal to } 0.816 \end{split} \end{equation*}

Big-O Notation Questions

Is $2^{2n} = O(2^n)$?

Ans: No! $2^{2n} = 2^n 2^n = 4^n \neq O(2^n)$ (The base of the exponent matters!)

Is $\log_2 n = O(\log_{10} n)$?

Ans: Yes!! Because they differ only by a constant ($\log_2 n = \log_{10} n \times \log_2 10$)

What is the best (tightest) asymptotic upper bound for the following?

$f(n) = 2^{4\log n} + n^5$

Ans: $O(n^5)$. Because, $2^{4\log n} = 2^{\log n^4} = n^4$ (obviously we assume that the base of the log is 2 since we are computer scientists :) )

$f(n) = 2^{2n^2 + 4n + 7}$

Ans: $O(2^{2n^2 + 4n})$. Only $2^7$ is a constant. All other terms are being multiplied (not added!) and hence cannot be ignored as being insignificant. In particular, the answer cannot be $O(2^{n^2})$ or $O(2^{2n^2})$ since you are forgetting to multiply a factor of $2^{4n}$, which is essentially $16^n$, not a constant!

Amortized Analysis

Amortized analysis is a common technique for analyzing the "average" cost per operation. We use it when most operations are cheap but, once in a while, we need to pay a lot. This is similar to how you pay rent: you don't pay rent every second or every day (although you could, technically). So you can think of it as living for free on 29 days of the month and paying a large amount on 1 day. But of course this does not give the true picture, so you find the cost you're paying per day.

Similarly, we use amortized analysis for data structures.

An operation is said to have amortized cost $T(n)$ if, for every integer $k$, the cost of $k$ operations is $\leq kT(n)$.

When we say $k$ operations, we mean $k$ consecutive operations starting from the first operation. You cannot pick $k$ arbitrary operations from the middle and use those to claim a high amortized cost.

In other words, for every prefix sequence, the total cost of that sequence of operations cannot exceed $T(n)$ times the length of that prefix sequence.

Amortized ≠ Average! Amortized is a much stronger constraint than average since it needs to hold for every value of $k$. In the case of average, the total cost of all operations should not exceed $T(n)$ times the total number of operations.

Example (hash table): Inserting $k$ elements into a hash table takes time $O(k)$. Therefore, the insert operation has amortized cost $O(1)$.

Accounting Method

Imagine a bank account $B$. Each operation performed adds money to the bank account. Every step of the algorithm spends money:

  • Immediate money: used to perform that particular operation

  • Deferred operations: paid for from the bank account

Total cost of execution = total money (Average time per operation = total money divided by number of operations)

For each operation, you are given $T(n)$ time. For most operations you need less than that. So, you're saving the time when performing cheap operations and using that saved time to perform expensive operations (that take $\geq T(n)$) once in a while.

Binary Counter Amortized Analysis

The Binary Counter ADT is a data structure that counts in base two, i.e. using 0s and 1s. It supports two operations:

  • increment() increases the counter by 1.

  • read() reads the current value of the binary counter

To make it clearer, suppose that we have a k-bit binary counter. Each bit is stored in an array A of size $k$, where A[i] denotes the i-th bit and A[0] is the most significant bit. For example, suppose A = [0, 1, 1, 0], which corresponds to the number 110 in binary. Calling increment() will yield A = [0, 1, 1, 1], i.e. 111. Calling increment() again will yield A = [1, 0, 0, 0], the number 1000 in binary.
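
Here is one possible (illustrative) implementation of this representation; the class and method names mirror the ADT above but are not a prescribed API. The cost of an increment is exactly the number of bits it flips.

// Binary counter sketch: A[0] is the most significant bit, A[k-1] the least.
public class BinaryCounter {
	private final int[] A;

	public BinaryCounter(int k) { A = new int[k]; }   // counter starts at 0

	public void increment() {
		int i = A.length - 1;              // start at the least significant bit
		while (i >= 0 && A[i] == 1) {      // flip trailing 1s to 0
			A[i] = 0;
			i--;
		}
		if (i >= 0) A[i] = 1;              // flip the first 0 to 1 (unless the counter overflowed)
	}

	public int read() {                    // current value of the counter
		int value = 0;
		for (int bit : A) value = value * 2 + bit;
		return value;
	}
}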

Suppose that the k-bit binary counter starts at 0, i.e. all the values in A are 0. A loose bound on the time complexity when increment() is called $n$ times is $O(nk)$ (since each operation flips at most $k$ bits). What is the amortized time complexity of the increment() operation if we call increment() $n$ times?

Let us look at a few operations to gain some insight into the amortized cost (the cost is essentially just the number of bits we need to flip):

| $n$ (increment index) | Binary Counter | Cost of operation | Total cost of $n$ operations | Total cost / $n$ |
| --- | --- | --- | --- | --- |
| 1 | [0,0,0,0,1] | 1 | 1 | 1 |
| 2 | [0,0,0,1,0] | 2 | 3 | 1.5 |
| 3 | [0,0,0,1,1] | 1 | 4 | 1.33 |
| 4 | [0,0,1,0,0] | 3 | 7 | 1.75 |
| 5 | [0,0,1,0,1] | 1 | 8 | 1.6 |
| 6 | [0,0,1,1,0] | 2 | 10 | 1.67 |
| 7 | [0,0,1,1,1] | 1 | 11 | $< 2$ |
| 8 | [0,1,0,0,0] | 4 | 15 | $< 2$ |

So, for each operation, if we assign a cost of 2, the “bank balance” will never be negative and we can pay for expensive operations using previously saved money.

We can solve this problem using the Accounting Method – having a “bank” of “time saved up” which pays for expensive operations. We know to use amortized analysis on this question because increment takes a variable number of flips (read: some operations are expensive and others are cheap), and we are looking for a tighter bound than $O(nk)$, where $n$ is the number of times increment is called and $k$ is the total number of digits.

We can also observe that in order for a bit to be flipped to 0, it must have been flipped to 1 first. By the accounting method, we want to “save up” for the more expensive operations, which occur when many 1s have to be flipped to 0s. We propose that each flip of a bit from 0 to 1 is charged 2 units of time: 1 unit to perform that flip, and 1 unit saved in the bank for when the bit must be flipped from 1 to 0 in the future. This maintains the invariant that the amount in the bank never dips below 0 (when you flip a bit from 0 to 1, you pay in advance for the future flip back to 0). Hence the amortised bound is 2 flips per increment: increment is $O(1)$ in amortized time, and $n$ increments are bounded by $O(n)$.

Observe that the least significant bit flips $n$ times when increment is called $n$ times, the next bit flips $n/2$ times, the bit after that $n/4$ times, and so on. Also observe that in a sequence of $n$ operations, the total number of flips is $n + n/2 + n/4 + \dots + 1 \leq 2n$. So, the total cost of $n$ operations is $\leq 2n$. (Notice that this is true for any value of $n$ by the nature of how binary counting works.)

Stack 2 Queue

It is possible to implement a queue using two stacks. But is it really efficient? (a) Design an algorithm to push and pop an element from the queue. (b) Determine the worst case and amortized runtime for each operation

(a) Let's call the two stacks $S_1$ and $S_2$. When we push an element, we push it onto stack $S_1$. When we want to pop an element, note that the first element pushed into $S_1$ should be popped first. So we pop the elements from $S_1$ one by one and push each of them onto $S_2$. Note that the ordering is reversed in $S_2$; hence, when we pop from $S_2$, we get the first element that was pushed into $S_1$. Therefore, when the pop operation is called, we pop the element from $S_2$. If $S_2$ is empty, then we first transfer the elements in $S_1$ to $S_2$.
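
A possible Java sketch of this idea, using java.util.ArrayDeque as the stack; the class and method names here are illustrative, not a required API:

import java.util.ArrayDeque;
import java.util.Deque;

// Queue built from two stacks: enqueue pushes onto s1; dequeue pops from s2,
// refilling s2 from s1 (which reverses the order) only when s2 is empty.
public class StackQueue<T> {
	private final Deque<T> s1 = new ArrayDeque<>();   // incoming elements
	private final Deque<T> s2 = new ArrayDeque<>();   // outgoing elements

	public void enqueue(T x) {
		s1.push(x);                       // O(1)
	}

	public T dequeue() {
		if (s2.isEmpty()) {               // transfer costs O(k), but each element
			while (!s1.isEmpty()) {       // is transferred at most once overall
				s2.push(s1.pop());
			}
		}
		return s2.pop();                  // throws NoSuchElementException if the queue is empty
	}
}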

(b) The worst case for one operation is $O(n)$, where $n$ is the number of inserted elements. However, the amortized cost is much smaller than that. We will prove that the amortized cost for each operation is $O(1)$ using the Accounting Method. Whenever we push a new element, we deposit $2 into the bank. When we transfer an element from $S_1$ to $S_2$, or pop an element from $S_2$, we pay $1. Note that when pop is called and $S_2$ is empty, we have at least $2k deposited in the bank, where $k$ is the number of elements currently in $S_1$. This is enough money to pay for the transfer, which takes $O(k)$ time. The remaining $2k − $k = $k can be used to pay for those elements when they are eventually popped from $S_2$.

Clarification

There is a very important distinction between worst-case analysis, big-O notation, average-case analysis, expected time analysis, big-theta notation, etc.

Worst-case analysis deals with the worst possible kind of input to a program. We carefully handpick the worst possible input by analysing how the algorithm works. When we say that insertion sort takes $\Theta(n^2)$ in the worst case, we mean that the algorithm's running time grows quadratically as the array size increases when the worst possible input (e.g. a reversed array) is fed to the program. It is not necessary to use $O$ while talking about worst-case analysis.

$O(f(n))$ simply represents the family of functions that grow no faster than $cf(n)$ for some positive constant $c$, i.e., $T(n) \in O(f(n)) \iff \exists\, n_0, c > 0 \quad \forall n > n_0: \; T(n) \leq cf(n)$. We often abuse notation and, for convenience, simply write $T(n) = O(f(n))$ to mean that the running time $T(n)$ is upper bounded by $f(n)$ (up to a constant factor). BUT this does not give us any information regarding what type of running time we are dealing with or what scenario we are analysing.
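
For example, to show that $3n + 5 \in O(n)$ using this definition, one valid choice of witnesses is $c = 4$ and $n_0 = 5$:

$\forall n > 5: \quad 3n + 5 < 3n + n = 4n = c\,n$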

Average-case analysis deals with how the program performs when a random input is fed to the algorithm. It is important to note that here, while the input is random, the program may still be deterministic.

A randomised (non-deterministic) program is one which makes random choices while the algorithm is running. When talking about randomised programs, the running time is a random variable (it depends on the random choices made by the algorithm). In fact, even for the same input, the running time may vary. For example, in the case of QuickSort, the running time depends heavily on the choice of pivot. So, it is more practical to talk about the expected running time of a randomised algorithm.

In short, big-O and big-theta notations only give us bounds of a function. They do not tell us anything about the scenario for which the function is being considered.

In CS2040S, we normally consider the worst-case scenario unless otherwise specified.
