Algorithm Analysis & Design (CST306) - Module 3 Study Guide
Part A: High Frequency Definitions (Module 3 Focus)
Topic: Control abstraction: Greedy Strategy
[HIGH PROB | ↻ REPEATED PYQ]
Section A: What is this? (The Concept)
Imagine you are at a buffet and want to eat as much high-protein food as possible before you get full. You don’t map out every possible combination of food on your plate. Instead, you look at the table, grab the item with the highest protein density right now, put it on your plate, and repeat until your plate is full.
The Analogy: The Greedy Strategy is the “buffet plate” approach. It makes the locally optimal choice at each step with the hope that these local optimums will lead to a global optimum. It never reconsiders a choice once it is made.
Section B: Exam-Ready Theory (The Rigor)
- Definition: A control abstraction is a procedure whose general flow is clear, but whose primary operations are left undefined. The Greedy method builds a solution piece by piece, always choosing the next piece that offers the most immediate benefit.
- Feasibility: A solution is feasible if it satisfies the problem’s constraints (e.g., doesn’t exceed the knapsack capacity).
- Optimality: An optimal solution is a feasible solution that maximizes or minimizes a given objective function.
- Algorithm Structure:
Greedy(a, n)
{
    solution = ∅;
    for i = 1 to n do
    {
        x = Select(a);
        if Feasible(solution, x) then
            solution = Union(solution, x);
    }
    return solution;
}
Section C: Step-by-Step Worked Example (The Application)
(Theoretical Trace)
- Input: A set of candidate items.
- Select(): Picks the “best” item from the candidates based on a greedy criterion (e.g., highest profit-to-weight ratio).
- Feasible(): Checks if adding this item violates constraints.
- Union(): Adds it to the final solution set.
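The control flow above can be sketched in Python. This is a minimal illustration, not a prescribed implementation: `feasible` and `select_key` are hypothetical stand-ins for the problem-specific Feasible and Select routines.

```python
def greedy(candidates, feasible, select_key):
    """Generic greedy control abstraction.

    Select is modelled by considering candidates best-first (sorted by
    select_key); Feasible is the constraint check; Union is an append.
    A choice, once made, is never reconsidered.
    """
    solution = []
    for x in sorted(candidates, key=select_key, reverse=True):  # Select
        if feasible(solution, x):   # Feasible: constraints still satisfied?
            solution.append(x)      # Union: commit to x permanently
    return solution

# Hypothetical use: pick weights greedily without exceeding capacity 10.
picked = greedy([6, 5, 4, 3],
                feasible=lambda sol, x: sum(sol) + x <= 10,
                select_key=lambda x: x)
print(picked)  # [6, 4]
```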
Section D: How to Write This in the Exam (The Strategy)
- Start With: Define the core philosophy: “Make the locally optimal choice at each stage.”
- Body: Write down the formal pseudo-code for the Greedy(a, n) control abstraction. Explain what the Select, Feasible, and Union functions do.
- Traps: Do not confuse Greedy with Dynamic Programming. Explicitly mention that Greedy never reconsiders previous choices.
- Close With: List standard applications (Fractional Knapsack, Kruskal’s, Dijkstra’s).
Topic: Control abstraction: Divide & Conquer
[HIGH PROB | ↻ REPEATED PYQ]
Section A: What is this? (The Concept)
Imagine you are asked to alphabetize a stack of 1,000 tests. Doing it all at once is overwhelming. Instead, you cut the stack in half, give 500 to a friend, and tell them to cut it in half again. Once everyone has just 1 or 2 tests, alphabetizing them takes a second. Then, you just merge the small, sorted stacks back together.
The Analogy: Divide and Conquer is exactly this “divide the labor” strategy. You break a massive, complex problem into identical, smaller mini-problems until they are so trivial you can solve them instantly, and then you merge the mini-solutions.
Section B: Exam-Ready Theory (The Rigor)
- Definition: An algorithm design paradigm that recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly.
- Three Steps:
- Divide: Break the problem into smaller instances P1, P2, ..., Pk.
- Conquer: Solve the sub-problems recursively.
- Combine: Merge the sub-solutions into a single solution for the original problem P.
- Algorithm Structure:
Algorithm DAndC(P)
{
    if Small(P) then
        return S(P);
    else
    {
        Divide P into smaller instances P1, P2, ..., Pk;
        Apply DAndC to each sub-problem;
        return Combine(DAndC(P1), ..., DAndC(Pk));
    }
}
Section C: Step-by-Step Worked Example (The Application)
(Theoretical Trace - Merge Sort Base Case)
- Divide: Array [38, 27, 43, 3] is divided into [38, 27] and [43, 3].
- Conquer: Sub-arrays are recursively divided until size 1: [38], [27], [43], [3]. (Small(P) is True.)
- Combine: Merge [38] and [27] into [27, 38]. Merge [43] and [3] into [3, 43]. Finally, merge [27, 38] and [3, 43] into [3, 27, 38, 43].
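As a minimal concrete instance of the DAndC control abstraction (an illustrative sketch, not an exam-prescribed example), finding the maximum of an array follows the same Small/Divide/Combine skeleton:

```python
def dandc_max(a, low, high):
    """Maximum of a[low..high] via divide and conquer."""
    if low == high:                      # Small(P): one element solves directly
        return a[low]
    mid = (low + high) // 2              # Divide P into P1 and P2
    left = dandc_max(a, low, mid)        # apply DAndC to each sub-problem
    right = dandc_max(a, mid + 1, high)
    return max(left, right)              # Combine the sub-solutions

print(dandc_max([38, 27, 43, 3], 0, 3))  # 43
```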
Section D: How to Write This in the Exam (The Strategy)
- Start With: Explicitly list the three phases: Divide, Conquer, Combine.
- Body: Write the standard DAndC(P) pseudo-code block. Define the general recurrence relation T(n) = aT(n/b) + f(n).
- Traps: Do not skip the Small(P) base-case check in your algorithm. Without a base case, the recursion is infinite.
- Close With: Provide 2 standard examples (e.g., Merge Sort, Binary Search).
Part B: Core Topics
Greedy: Fractional Knapsack
[SURE SHOT | ↻ REPEATED PYQ]
Section A: What is this? (The Concept)
Imagine you are a thief breaking into a spice shop with a bag that holds exactly 15 kg. You see sacks of Saffron, Pepper, and Salt. Saffron is incredibly valuable per kilogram, while Salt is cheap but heavy. Because it’s fractional, you can scoop out portions of a sack.
The Analogy: The Fractional Knapsack problem is the ultimate test of “value density”. You calculate the price-per-kilogram of every item, greedily load up on the most valuable dust until that sack is empty, then move to the second most valuable, and so on. If you run out of bag space on the last item, you just take a fraction of it.
Section B: Exam-Ready Theory (The Rigor)
- Problem Statement: Given n objects, each with a weight w_i and a profit p_i, and a knapsack of capacity m. Find fractions 0 ≤ x_i ≤ 1 for each object such that the total weight Σ w_i x_i ≤ m and the total profit Σ p_i x_i is maximized.
- Greedy Strategy:
- Calculate the profit-to-weight ratio for all items: p_i / w_i.
- Sort the items in descending order of this ratio.
- Iterate through the sorted items:
- If the item fits entirely, take it (x_i = 1) and reduce the remaining capacity.
- If it doesn’t fit entirely, take the fraction that fits (x_i = remaining capacity / w_i) and terminate.
- Time Complexity: O(n log n), due to the sorting step.
Section C: Step-by-Step Worked Example (The Application)
Example: Fractional Knapsack (April 2025 University Question). n = 7, Capacity m = 15. Profits P = (10, 5, 15, 7, 6, 18, 3), Weights W = (2, 3, 5, 7, 1, 4, 1).
- Calculate Ratios (p_i / w_i):
- I1: 5, I2: 1.66, I3: 3, I4: 1, I5: 6, I6: 4.5, I7: 3.
- Sort Items (Descending Ratio):
- I5(6), I1(5), I6(4.5), I3(3), I7(3), I2(1.66), I4(1).
- Fill Knapsack:
- I5: 1kg (Full). Profit: 6. Rem: 14.
- I1: 2kg (Full). Profit: 10. Rem: 12.
- I6: 4kg (Full). Profit: 18. Rem: 8.
- I3: 5kg (Full). Profit: 15. Rem: 3.
- I7: 1kg (Full). Profit: 3. Rem: 2.
- I2: 2/3 of 3kg. Profit: (2/3) × 5 ≈ 3.33. Rem: 0.
- Final Profit: 6 + 10 + 18 + 15 + 3 + 3.33 = 55.33.
- Solution Vector: X = (1, 2/3, 1, 0, 1, 1, 1).
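The trace above can be reproduced with a short sketch (values from the April 2025 question; the function name and structure are illustrative):

```python
def fractional_knapsack(profits, weights, capacity):
    # Consider items in descending profit-to-weight ratio (the greedy criterion).
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)             # solution vector, in original item order
    total = 0.0
    for i in order:
        if capacity <= 0:
            break
        if weights[i] <= capacity:       # fits entirely: take x_i = 1
            x[i] = 1.0
            capacity -= weights[i]
            total += profits[i]
        else:                            # take only the fraction that fits
            x[i] = capacity / weights[i]
            total += profits[i] * x[i]
            capacity = 0.0
    return total, x

profit, x = fractional_knapsack([10, 5, 15, 7, 6, 18, 3],
                                [2, 3, 5, 7, 1, 4, 1], 15)
print(round(profit, 2), x)  # 55.33, with x = (1, 2/3, 1, 0, 1, 1, 1)
```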
Section D: How to Write This in the Exam (The Strategy)
- Start With: State the objective function (Maximize Σ p_i x_i) and the constraint (Σ w_i x_i ≤ m).
- Body: You must create a table. Do not do the calculations inline. Create columns for Item, Profit, Weight, Ratio (P/W), and Fraction (X). Show the remaining capacity after every single item is added.
- Traps: A massive mistake students make is writing the final Solution Vector in the sorted order. You MUST write the final solution vector mapped back to the original item index order (e.g., X = (1, 2/3, 1, 0, 1, 1, 1)).
- Close With: Clearly write “Total Maximum Profit = 55.33”.
Greedy: Minimum Spanning Tree (Kruskal’s)
[SURE SHOT | ↻ REPEATED PYQ]
Section A: What is this? (The Concept)
Imagine you are building roads to connect a group of remote villages. Roads are expensive, and some terrains are harder to build on than others. Your goal is to connect all the villages using the absolute minimum amount of asphalt, ensuring every village can reach every other village, but with no redundant circular routes.
The Analogy: Kruskal’s algorithm is like sorting all possible road blueprints from cheapest to most expensive. You look at the cheapest blueprint. Does it connect two villages that were previously disconnected? If yes, build it. Does it create a useless loop between already-connected villages? If yes, throw the blueprint away. Repeat until all villages are linked.
Section B: Exam-Ready Theory (The Rigor)
- Spanning Tree: A subgraph of an undirected connected graph that includes all n vertices but has exactly n − 1 edges and no cycles.
- Minimum Spanning Tree (MST): A spanning tree with the minimum possible total edge weight.
- Kruskal’s Strategy:
- Treat every vertex as its own isolated set (forest of trees).
- Sort all edges in the graph in ascending order of their weight.
- Iterate through the sorted edges:
- If the edge connects two vertices in different sets (using Union-Find / Disjoint Sets), add it to the MST and Union the sets.
- If it connects vertices in the same set, adding it would form a cycle, so discard it.
- Stop when n − 1 edges have been added.
- Time Complexity: O(E log E), dominated by sorting the edges (or repeatedly extracting the minimum edge from a min-heap).
Section C: Step-by-Step Worked Example (The Application)
Example: Kruskal’s MST Trace (University Question). Consider vertices {1..7} with edges: (1,6): 10, (1,2): 28, (2,3): 16, (3,4): 12, (4,5): 22, (5,6): 25, (2,7): 14, (7,4): 18, (7,5): 24.
- Sorted Edges: (1,6): 10, (3,4): 12, (2,7): 14, (2,3): 16, (7,4): 18, (4,5): 22, (7,5): 24, (5,6): 25, (1,2): 28.
- Steps:
- Add (1,6): Cost 10.
- Add (3,4): Cost 12.
- Add (2,7): Cost 14.
- Add (2,3): Cost 16. (Sets {2,7} and {3,4} merged).
- Edge (7,4): Discard (Forms cycle 7-2-3-4-7).
- Add (4,5): Cost 22.
- Edge (7,5): Discard (Forms cycle).
- Add (5,6): Cost 25.
- Final MST Cost: 10 + 12 + 14 + 16 + 22 + 25 = 99.
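The trace can be checked with a compact Union-Find sketch (edge list taken from the question above; path-halving in `find` is one of several valid Union-Find variants):

```python
def kruskal(n, edges):
    """MST over vertices 1..n; edges are (u, v, weight) triples."""
    parent = list(range(n + 1))          # disjoint-set forest, one set per vertex

    def find(x):                         # set representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost, mst = 0, []
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # different sets: no cycle, accept edge
            parent[ru] = rv              # Union the two sets
            mst.append((u, v, w))
            cost += w
    return cost, mst

edges = [(1, 6, 10), (1, 2, 28), (2, 3, 16), (3, 4, 12), (4, 5, 22),
         (5, 6, 25), (2, 7, 14), (7, 4, 18), (7, 5, 24)]
cost, mst = kruskal(7, edges)
print(cost, len(mst))  # 99 6
```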
Section D: How to Write This in the Exam (The Strategy)
- Start With: Define Spanning Tree and MST. State that Kruskal’s uses a Greedy Approach and requires sorting edges.
- Body: For numericals, first list ALL edges sorted by weight in a column. Then, systematically write “Select Edge X -> Check Cycle -> Action (Include/Discard)”.
- Visuals: You must draw the graph being built step-by-step.
- Close With: Write the sum equation: “Min Cost = 10 + 12 + 14 + 16 + 22 + 25 = 99”.
Greedy: Single Source Shortest Path (Dijkstra’s)
[HIGH PROB | ↻ REPEATED PYQ]
Section A: What is this? (The Concept)
Imagine you are using a GPS navigation system to drive from your house to a specific restaurant. The GPS needs to find the absolute fastest route.
The Analogy: Dijkstra’s Algorithm works like a wave of water expanding outward from your house. It first finds the absolute closest intersection. From that intersection, it explores the next closest roads, constantly updating its internal map if it finds a “shortcut”. It “locks in” the fastest route to a location only when it is absolutely mathematically impossible to find a faster shortcut.
Section B: Exam-Ready Theory (The Rigor)
- Problem: Given a weighted graph with non-negative edge weights, find the shortest path from a single source vertex to all other vertices.
- Algorithm Strategy (Relaxation):
  - Initialize dist[v] = ∞ for every vertex v, and dist[S] = 0.
  - Maintain a set Q of unvisited nodes.
  - While Q is not empty:
    - Extract the vertex u in Q with minimum dist[u].
    - For each neighbor v of u: compute alt = dist[u] + weight(u, v); if alt < dist[v], update dist[v] = alt.
- Time Complexity: O(V²) with an array; O((V + E) log V) with a min-heap.
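The relaxation loop can be sketched with a min-heap. Note that the small graph below is hypothetical (the exam figure is not reproduced here; F and G are omitted for brevity), chosen only so the extraction order matches the trace in Section C:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict vertex -> list of (neighbor, weight); weights non-negative."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                   # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                  # stale heap entry: u already finalized
            continue
        for v, w in graph[u]:            # relax every edge leaving u
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(pq, (alt, v))
    return dist

# Hypothetical graph, NOT the actual exam figure.
g = {'S': [('A', 4), ('B', 3)], 'A': [('C', 1)], 'B': [('D', 4)],
     'C': [('E', 1)], 'D': [('T', 3)], 'E': [('T', 5)], 'T': []}
print(dijkstra(g, 'S'))  # dist['T'] comes out as 10
```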
Section C: Step-by-Step Worked Example (The Application)
Example: Dijkstra’s Trace (University Question) Source: S. Nodes: A, B, C, D, E, F, G, T.
- Iteration 1: Extract S. dist(S)=0. Neighbors: A(4), B(3).
- Iteration 2: Extract B. dist(B)=3. Neighbor: D. dist(D) = 3 + 4 = 7.
- Iteration 3: Extract A. dist(A)=4. Neighbor: C. dist(C) = 4 + 1 = 5.
- Iteration 4: Extract C. dist(C)=5. Neighbor: E. dist(E) = 5 + 1 = 6.
- Iteration 5: Extract E. dist(E)=6. Neighbors: G(8), T.
- Iteration 6: Extract D. dist(D)=7. Neighbors: E(8), T(10), F.
- Final Path to T: cost 10.
Section D: How to Write This in the Exam (The Strategy)
- Start With: Define the problem space (Single Source, Non-negative weights).
- Body: The university expects a standard routing table. Create a table where rows are Iterations, and columns are Vertices. Fill in the shortest known distance to every vertex at each step. Bold or circle the value when a vertex is extracted.
- Traps: Dijkstra’s algorithm fails with negative edge weights.
- Close With: Clearly write the final shortest path costs.
Divide and Conquer: Merge Sort
[HIGH PROB | ↻ REPEATED PYQ]
Section A: What is this? (The Concept)
Imagine someone drops a deck of 52 mixed cards on your desk and says “sort these.” Instead of frantically scanning the whole pile, you split the deck exactly in half (26 and 26). You tell your brain, “I’ll sort these two smaller piles later.” You split them again (13 and 13), and again, until you have 52 piles of exactly 1 card. A 1-card pile is inherently sorted.
The Analogy: Now the “Conquer” phase begins. You take two 1-card piles, look at both, and stack them in order. Now you have a sorted 2-card pile. You take another sorted 2-card pile, compare the top cards, and zip them together like a zipper into a sorted 4-card pile. You zip these piles upward until the whole deck is sorted. This is 2-Way Merge Sort.
Section B: Exam-Ready Theory (The Rigor)
- Algorithm Strategy:
- Divide: Find the midpoint mid = (low + high) / 2.
- Conquer: Recursively call MergeSort(low, mid) and MergeSort(mid+1, high).
- Combine: Call Merge(low, mid, high) to weave the two sorted halves into a single temporary array, then copy it back.
- Recurrence Relation:
- Dividing (computing the midpoint) takes O(1) time.
- Recursively sorting the two halves takes 2T(n/2).
- Merging two sorted arrays of size n/2 takes O(n) comparisons.
- Equation: T(n) = 2T(n/2) + cn, with T(1) = a for some constant a.
- Time Complexity: Solving the recurrence (via Master’s Theorem or Recursion Tree) yields exactly O(n log n) in the Best, Average, and Worst cases.
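The recurrence can be unrolled by repeated substitution (assuming n = 2^k for simplicity):

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + cn \\
     &= 2\bigl(2\,T(n/4) + c\,n/2\bigr) + cn = 4\,T(n/4) + 2cn \\
     &= 2^{k}\,T\!\bigl(n/2^{k}\bigr) + k\,cn \\
     &= n\,T(1) + cn\log_2 n \qquad (n = 2^{k},\; k = \log_2 n) \\
     &= O(n \log n).
\end{aligned}
```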
Section C: Step-by-Step Worked Example (The Application)
Trace: [38, 27, 43, 3, 9, 82, 10, 15]
- Divide Phase:
- L1: [38, 27, 43, 3] | [9, 82, 10, 15]
- L2: [38, 27] [43, 3] | [9, 82] [10, 15]
- L3: [38] [27] [43] [3] | [9] [82] [10] [15]
- Merge Phase:
- Merge [38], [27] → [27, 38]
- Merge [43], [3] → [3, 43]
- Merge [27, 38], [3, 43] → [3, 27, 38, 43]
- Merge [9], [82] → [9, 82]
- Merge [10], [15] → [10, 15]
- Merge [9, 82], [10, 15] → [9, 10, 15, 82]
- Final Merge:
- Merge [3, 27, 38, 43] and [9, 10, 15, 82]
- Compare 3 and 9 → [3]
- Compare 27 and 9 → [3, 9]
- Compare 27 and 10 → [3, 9, 10]
- Compare 27 and 15 → [3, 9, 10, 15]
- Compare 27 and 82 → [3, 9, 10, 15, 27]
- Continue…
- Sorted Array: [3, 9, 10, 15, 27, 38, 43, 82].
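The full trace above corresponds to this compact sketch (a top-down variant that returns new lists, rather than the in-place MergeSort(low, high) form used in the exam answer):

```python
def merge_sort(a):
    if len(a) <= 1:                      # base case: already sorted
        return a
    mid = len(a) // 2                    # Divide at the midpoint
    left = merge_sort(a[:mid])           # Conquer each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0              # Combine: zip the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # one side may still have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10, 15]))
# [3, 9, 10, 15, 27, 38, 43, 82]
```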
Section D: How to Write This in the Exam (The Strategy)
- Start With: Write the MergeSort(low, high) and Merge(low, mid, high) algorithms clearly.
- Body: If asked to illustrate, you MUST draw the recursion tree. Draw downward arrows showing the array splitting into individual boxes. Then draw upward merging arrows showing the sorted arrays combining.
- Traps: The complexity of Merge Sort is O(n log n) in all cases. Unlike QuickSort (which degrades to O(n²) in the worst case), Merge Sort’s exact halving guarantees consistent performance. Mention this property.
- Close With: Derive the time complexity by setting up the recurrence T(n) = 2T(n/2) + cn and explicitly solving it to prove O(n log n).