An algorithm is a finite sequence of well-defined instructions, typically used to solve a class of specific problems or to perform a computation. - Wikipedia
An algorithm refers to a set of step-by-step instructions to be followed to achieve a particular end or solve a specific problem.
Algorithmic Complexity
This refers to the amount of resources (time and memory space) required by an algorithm to solve a problem.
Since multiple algorithms can solve a single problem, how then do we compare the efficiency (algorithmic complexity) of one algorithm against another?
We simply cannot measure efficiency by how much time (in seconds) an algorithm takes, as this is subjective and depends on various factors like a computer's specification, or other applications running in the background, which could affect how fast or slow a program executes.
Instead, we need an objective measurement of the efficiency of an algorithm, for which we use asymptotic notation. Commonly used notations are Big O notation, Big Ω (Omega) notation, and Big Θ (Theta) notation. Asymptotic notation measures the efficiency of an algorithm by how its runtime grows as the input grows.
Big O measures the upper bound of the complexity, giving an approximation of the most work an algorithm does in the worst case; Omega notation measures the lower bound of the complexity, corresponding to the best case; and Theta notation gives both the lower and upper bound, describing a tight bound on the growth rate. For example, a linear search is Ω(1) in the best case (the target is the first element) and O(n) in the worst case (the target is absent or last).
So instead of measuring the exact time (in seconds), asymptotic notation measures the rate of growth of operations to be executed depending on the size of input to be processed, and this will remain the same irrespective of the environment or hardware configurations.
Thus, in algorithmic complexity, “time” refers to the number of steps or operations, while “space” refers to the amount of memory an algorithm needs to complete its operation. These needed resources are measured as a function of the size of the input to the program. So the more input, the more resources are required to complete a given algorithm.
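For instance, here is a minimal sketch (the function names are mine, for illustration) contrasting constant space with linear space:
// O(1) space: only one extra variable is used, no matter the input size.
function total(arr) {
  let sum = 0;
  for (const value of arr) {
    sum += value;
  }
  return sum;
}
// O(n) space: a whole new array the size of the input is created.
function doubledCopy(arr) {
  const result = [];
  for (const value of arr) {
    result.push(value * 2);
  }
  return result;
}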
In calculating the algorithmic complexity of an algorithm, note three things:
Growth is measured with respect to the size of the input;
Constants are dropped because they have negligible impact on performance compared to the fastest-growing term as the input increases (see the sketch after this list);
Worst-case scenarios are considered - this is what Big O notation expresses.
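To see why constants are dropped, consider this hypothetical sketch: the function below makes two full passes over the input, roughly 2n operations, yet its complexity is still written as O(n), not O(2n):
// Two separate passes over the input: roughly 2n operations in total.
// The constant factor 2 is dropped, so this is still O(n).
function sumAndMax(arr) {
  let sum = 0;
  for (const value of arr) { // first pass: n operations
    sum += value;
  }
  let max = -Infinity;
  for (const value of arr) { // second pass: another n operations
    max = Math.max(max, value);
  }
  return { sum, max };
}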
Common Runtimes
These measure the performance of an algorithm as the size of the input data increases. They are expressed in Big O notation as follows:
From bigocheatsheet.com - an illustration of various common time complexities
O(1): This is referred to as constant time complexity because, independent of the input size, an algorithm with this complexity runs the same number of steps or operations. This is the most desirable runtime for any algorithm.
O(n): This is linear time complexity, where the running time increases linearly with the size of the input data, such that for every additional input element, there is an additional operation to execute.
O(log n): A logarithmic time complexity is one where the running time increases logarithmically with the size of the input data set. With a logarithmic time complexity, the runtime increases at a much slower rate than the input does; doubling the input adds only a constant amount of extra work.
O(n log n): This is quasilinear time complexity, which grows slightly faster than linear time but significantly slower than quadratic time. An algorithm with this runtime processes the input in linear time (O(n)) and, for each processing step, performs a logarithmic amount of work (O(log n)), hence O(n log n), which is O(n) × O(log n).
O(n²): In quadratic time complexity, the running time of an algorithm grows proportionally to the square of the input size; doubling the input roughly quadruples the running time.
O(n³): Cubic time complexity describes an algorithm whose running time grows proportionally to the cube of the input size.
O(2^n), O(n!): Exponential and factorial time complexities, respectively, are running times that grow extremely quickly as the input size increases. Both are considered impractical for large input sizes, which is why optimizing algorithms to avoid them (when possible) is crucial for scalability. The sketch below contrasts a few of these runtimes in code.
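As a rough illustration (these helper functions are mine, written for this article, not taken from any library), here is how some of these runtimes typically look in code:
// O(1): a single step, no matter how large the array is.
function firstElement(arr) {
  return arr[0];
}
// O(n): one step per element.
function contains(arr, target) {
  for (const value of arr) {
    if (value === target) return true;
  }
  return false;
}
// O(log n): the remaining work is halved on every iteration.
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}
// O(n²): a nested pass over the input for every element.
function countPairs(arr) {
  let pairs = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      pairs++;
    }
  }
  return pairs;
}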
Classic Algorithms for Frequent Tasks
1. SORTING ALGORITHMS
Sorting algorithms are step-by-step sets of instructions used to rearrange the elements of a given data structure in a specific order, such as ascending, descending, or any other specified order.
Sorting algorithms are useful as prerequisites for other algorithms, like the binary search algorithm, which requires an ordered set of data to function. Sorting data also makes searching, retrieving, and analyzing information easier and reduces its time complexity.
Common sorting algorithms are Selection sort, Insertion sort, Bubble sort, Merge sort, Quick sort, Heap sort, and Counting sort.
Selection Sort: A selection sort algorithm sorts the elements of a data structure by repeatedly selecting the smallest (or largest) remaining value and placing it in order. It achieves this by maintaining a sorted portion and an unsorted portion of the data structure, along with a minimum value. As it goes through every element of the unsorted portion, it updates the minimum wherever it finds a smaller value, and at the end of the pass it swaps the newly found smallest value with the first element of the unsorted portion. This iteration continues until all elements are sorted. So, with every pass of a selection sort, the smallest value in the unsorted portion is placed in its correct position.
Selection sort has a time complexity of O(n²).
function selectionSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    // Assume the first unsorted element holds the minimum.
    let minIndex = i;
    // Scan the rest of the unsorted portion for a smaller value.
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[minIndex]) {
        minIndex = j;
      }
    }
    // Swap the smallest value found into its sorted position.
    if (minIndex !== i) {
      [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
    }
  }
  return arr;
}
console.log(selectionSort([7, 9, 78, 2, 1, 3]));
// OUTPUT = [1, 2, 3, 7, 9, 78]
Insertion Sort: In an insertion sort, we go through every item in a data structure and place it in its correct position by shifting larger elements to the right to make room for the current element being compared. With every iteration, we pick an item from the unsorted part of the array and insert it into its correct position within the sorted portion of the array (or list).
The first element is regarded as sorted since there is no element before it to compare it with, so iteration begins with the second element at index 1, and by the end of the iterations, the array is sorted.
Insertion sort has a time complexity of O(n²).
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    let currentItem = arr[i];
    let j = i - 1;
    // Shift elements of the sorted portion that are larger
    // than currentItem one position to the right.
    while (j >= 0 && arr[j] > currentItem) {
      arr[j + 1] = arr[j];
      j--;
    }
    // Insert currentItem into the gap that was opened up.
    arr[j + 1] = currentItem;
  }
  return arr;
}
console.log(insertionSort([7, 9, 78, 2, 1, 3]));
//OUTPUT = [1,2,3,7,9,78]
Bubble Sort: In a bubble sort, adjacent elements are compared and swapped into proper order, such that with each iteration the largest remaining element 'bubbles' up to its correct position, thereby reducing the number of elements to be compared in the next pass.
Bubble sort has a time complexity of O(n²).
function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    // The last i elements are already in place, so stop before them.
    for (let j = 0; j < arr.length - 1 - i; j++) {
      // Swap adjacent elements that are out of order.
      if (arr[j] > arr[j + 1]) {
        const temp = arr[j + 1];
        arr[j + 1] = arr[j];
        arr[j] = temp;
      }
    }
  }
  return arr;
}
console.log(bubbleSort([7, 9, 78, 2, 1, 3]));
// OUTPUT = [1, 2, 3, 7, 9, 78]
Merge Sort: The merge sort algorithm uses a 'divide and conquer' strategy to sort a given array or list. In the divide step, it recursively splits the array into halves until each half has just one element or is empty. It then merges the smaller sorted arrays into a single sorted array by repeatedly comparing the smallest remaining values of each array and placing the smaller one in the merged result.
Merge sort has a time complexity of O(n log n) in the best, average, and worst cases, making it efficient for large data sets.
// MERGE FUNCTION: merges two sorted halves back into the given array.
function mergeFunction(givenArr, leftArr, rightArr) {
  let i = 0; // index into leftArr
  let j = 0; // index into rightArr
  let k = 0; // index into givenArr
  // Repeatedly take the smaller front element of the two halves.
  while (i < leftArr.length && j < rightArr.length) {
    if (leftArr[i] < rightArr[j]) {
      givenArr[k] = leftArr[i];
      i++;
    } else {
      givenArr[k] = rightArr[j];
      j++;
    }
    k++;
  }
  // Copy over any leftovers from the left half...
  while (i < leftArr.length) {
    givenArr[k] = leftArr[i];
    i++;
    k++;
  }
  // ...and from the right half.
  while (j < rightArr.length) {
    givenArr[k] = rightArr[j];
    j++;
    k++;
  }
  return givenArr;
}
//////// MERGE SORT
function mergeSort(arr) {
  const arrLength = arr.length;
  // Base case: an array of 0 or 1 elements is already sorted.
  if (arrLength < 2) {
    return arr;
  }
  const midIndex = Math.floor(arrLength / 2);
  const leftHalf = arr.slice(0, midIndex);
  const rightHalf = arr.slice(midIndex);
  // console.log('LEFT HALF: ', leftHalf);
  // console.log('RIGHT HALF: ', rightHalf);
  // These will show the recursive division of the array.
  return mergeFunction(arr, mergeSort(leftHalf), mergeSort(rightHalf));
}
console.log(mergeSort([4, 30, 23, 2, 3, 4, 5, 6]));
// OUTPUT = [2, 3, 4, 4, 5, 6, 23, 30]
Quick Sort: A quick sort algorithm is implemented in 3 basic steps:
i. Pick a pivot (usually the last element of the data structure), although there are various ways to pick a pivot;
ii. Partition the data structure by moving the elements lower than the pivot to its left side and the elements greater than the pivot to its right side; in this position, the pivot is regarded as being in its sorted position, since all elements to its left are lower than it and all elements to its right are greater than it;
iii. Recursively quick sort the values in the left and right partitions, until a subarray has only one element, since a single element is already sorted.
Quick sort has an average time complexity of O(n log n), with a worst case of O(n²) when the chosen pivots repeatedly produce unbalanced partitions.
// QUICK SORT
// Places the pivot (last element) into its sorted position and
// returns its index; smaller elements end up to its left.
function partitionQuickSort(arr, l, h) {
  let pivot = arr[h];
  let i = l - 1; // boundary of the "smaller than pivot" region
  for (let j = l; j <= h - 1; j++) {
    if (arr[j] < pivot) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  // Move the pivot just after the last smaller element.
  [arr[i + 1], arr[h]] = [arr[h], arr[i + 1]];
  return i + 1;
}
function quickSort(arr, low = 0, high = arr.length - 1) {
  if (low < high) {
    let pivot = partitionQuickSort(arr, low, high);
    quickSort(arr, low, pivot - 1);  // sort the left partition
    quickSort(arr, pivot + 1, high); // sort the right partition
  }
  return arr;
}
console.log(quickSort([4, 30, 23, 2, 3, 5, 4, 6]));
// OUTPUT = [2, 3, 4, 4, 5, 6, 23, 30]
2. SEARCHING ALGORITHMS
Searching algorithms are instructions used to check for the existence of an element within a given data structure and to retrieve it where necessary. Therefore, in every search algorithm, there is a target element to be found within the data collection.
Practical use cases of search algorithms include e-commerce applications, where users can search for a specific product, and search engines like Google, which retrieve relevant information amidst vast amounts of data.
Common searching algorithms include Linear search, Binary search, Jump search, and Ternary search.
Linear Search: In a linear search, every element of a data set is checked until the target value is found or all the elements have been checked. This algorithm is less efficient than other search algorithms because it may have to go through every element of the data set, giving it a time complexity of O(n).
Despite its low performance, especially on large data sets, it is useful for searching unsorted data sets (a big advantage over binary search) and for data structures that do not support random access, such as linked lists.
A linear search can be implemented with a simple for loop:
// LINEAR SEARCH
function linearSearch(arr, targetValue) {
  // Check every element in turn until the target is found.
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === targetValue) {
      console.log(`Found my value of ${targetValue} at index ${i} :)`);
      return arr[i];
    }
  }
  console.log(`Sorry, couldn't find your search ):`);
  return -1;
}
console.log(linearSearch([56, 8, 9, 45, 20], 450));
// "Sorry, couldn't find your search ):"
// -1
Binary Search: A binary search finds the location of a target value in a sorted data structure. It uses the divide and conquer strategy to repeatedly narrow down the possible locations until only one remains.
The target value is compared against the middle element. If they are not equal, one half of the search space is discarded: since a binary search requires a sorted array as a precondition for its operation, we know which half cannot contain the target from whether the target is higher or lower than the middle element.
Where the middle element is lower than the target value, the elements to the left of the middle element (including the middle element itself) are discarded, and the same operation of dividing and comparing against the middle element continues on the remaining half. In this way, at every iteration half of the search space is discarded, drastically reducing the number of elements to be checked for the target value.
The time complexity of binary search is O(log n), since halving the search space repeatedly means an array of n elements is narrowed down to one in about log₂(n) steps.
// BINARY SEARCH
function binarySearch(arr, target) {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    let middleIndex = Math.floor((low + high) / 2);
    if (arr[middleIndex] === target) {
      return middleIndex; // found the target
    } else if (arr[middleIndex] > target) {
      high = middleIndex - 1; // discard the right half
    } else {
      low = middleIndex + 1; // discard the left half
    }
  }
  return -1; // target is not in the array
}
console.log(binarySearch([3, 4, 5, 7, 9, 10, 34], 10));
// OUTPUT = 5
I hope this article contributes to your understanding of data structures and algorithms on your journey as a developer. If it did, let me know in the comments.
Until the next one, I am rooting for you!!
NOTES
An in-place sorting algorithm is a type of algorithm that sorts the data without needing extra space for another copy of the array or list. It modifies the original data directly by rearranging the elements.
An out-of-place algorithm creates a new array or data structure for sorting. Example: Merge Sort (as shown earlier) uses additional memory to store the sorted halves before merging them.
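As a minimal sketch (the wrapper function below is hypothetical), an in-place sort can be made out-of-place by copying the input first:
// Out-of-place: copy the input (extra O(n) memory), sort the copy,
// and leave the original array untouched.
function sortedCopy(arr) {
  return [...arr].sort((a, b) => a - b);
}
const original = [7, 9, 78, 2, 1, 3];
console.log(sortedCopy(original)); // [1, 2, 3, 7, 9, 78]
console.log(original); // [7, 9, 78, 2, 1, 3] - unchanged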
Resources
https://frontendmasters.com/courses/algorithms/
https://medium.com/@tushar_patil/how-to-prepare-for-dsa-zero-to-hero-53ee4b1e1ebd
https://medium.com/swlh/data-structures-101-e18fc33579fa
https://roadmap.sh/datastructures-and-algorithms
https://www.youtube.com/watch?v=bOk35XmHPKs
https://www.hackerearth.com/practice/algorithms/sorting/merge-sort/visualize/