Understanding Time Complexity: A Guide for Beginners

Visualizing Time Complexity with an Easy-to-Follow Example

What is time complexity?

Time complexity is a measure of an algorithm's efficiency based on how its running time grows as the input size grows. Big O notation is typically used to describe this growth.

So, what is Big O notation?

Big O notation is a way to describe how quickly an algorithm's running time grows as the size of its input increases. It's like estimating how long it will take to sort a list of numbers based on how many numbers are in the list.

Assume you have a list of 100 numbers you wish to sort, and your sorting method takes 10 seconds. Now suppose you have a list of 1,000 numbers to sort. If the algorithm scales linearly, you might anticipate that it will take 100 seconds. Big O notation gives us a compact way to express this growth rate.
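
To make that estimate concrete, here is a tiny sketch (the estimateSeconds helper is my own illustration; it assumes the running time grows linearly with input size):

// Hypothetical helper: assumes running time grows linearly with input size
const estimateSeconds = (n, baseN, baseSeconds) => (n / baseN) * baseSeconds;

console.log(estimateSeconds(1_000, 100, 10)); // 100 seconds for 1,000 numbers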

Here are some common time complexities.

O(1): Constant time. The algorithm's running time is independent of the size of the input. This type of time complexity is the most efficient. It is typically associated with accessing an array element by index or retrieving a value from a dictionary by key.

O(log n): Logarithmic time. The algorithm's execution time grows slowly as the input size increases. This time complexity is often associated with binary search and other divide-and-conquer algorithms (see the sketch after this list).

O(n): Linear time. The algorithm's execution time increases linearly with the size of the input. An example of a linear time complexity algorithm is iterating over an array and operating on each element.

O(n log n): Linearithmic time. The execution time grows proportionally to the input size (n) multiplied by the logarithm of the input size (log n). This time complexity is often associated with efficient sorting algorithms.

O(n^2): Quadratic time. The running time grows with the square of the input size: doubling the input roughly quadruples the work. This time complexity is often associated with nested loop operations.

O(2^n): Exponential time. Even for inputs of moderate size, such algorithms can be extremely slow and inefficient. This time complexity is frequently associated with recursive algorithms that generate all possible combinations of a set of elements or search for a solution exhaustively.
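
To make the difference between these classes tangible, here is a small sketch of my own (not from any library) that counts the steps taken by a linear O(n) search and a binary O(log n) search over the same sorted array:

// Linear search: checks elements one by one, O(n) steps in the worst case
function linearSearch(arr, target) {
  let steps = 0;
  for (let i = 0; i < arr.length; i++) {
    steps++;
    if (arr[i] === target) return { index: i, steps };
  }
  return { index: -1, steps };
}

// Binary search: halves the search range on each step, O(log n) steps
function binarySearch(arr, target) {
  let steps = 0;
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    steps++;
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return { index: mid, steps };
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return { index: -1, steps };
}

const sorted = Array.from({ length: 1_000_000 }, (_, i) => i);
console.log(linearSearch(sorted, 999_999).steps); // 1,000,000 steps
console.log(binarySearch(sorted, 999_999).steps); // ~20 steps

On a million-element array, the linear search needs up to a million comparisons while the binary search needs about twenty. That is the practical difference between O(n) and O(log n).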

Explaining Time Complexity with Examples from the Real World

For the sake of simplicity, I will only use examples with O(1) and O(n) time complexity.

Understanding the challenge of searching big arrays, owing to O(n) complexity

As developers, we deal with filtering arrays almost every day. Everything works fine while the data size is small, but as the array grows larger, the filter operation becomes slow.

The array filter function scans every element in the array to determine which ones satisfy a given condition, so the filter function's time complexity is O(n).

Let's begin by performing a filter search over an array containing 10 million data points.

In the code snippet shown below, I first fill the array with 10 million fake records, and then create a second array of size 50 whose items we will search for in the large array.

Finding all 50 items in that big array takes around 4 to 5 seconds. To view it, open the CodePen link.

CodePen

// Populate Array size of 10 Million data 😱
let data = [];
const tenMillion = 10_000_000;
console.log("Preparing array");

for (let i = 0; i < tenMillion; i++) {
  data.push({ item: i, number: `Number_${i}` });
}

// Populate Array size of 50 having random numbers.
// We will search each item of this array in that big size of array 🤒
const itemsToSearch = Array.from(
  { length: 50 },
  () => ~~(Math.random() * 9_000_000)
);

console.log("Array search starts");
const startTime = new Date().getTime();
const timeLogs = itemsToSearch.map((searchItem) => {
  const start = new Date().getTime();
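  // data.filter() scans all 10 million entries for every lookup, so each search is O(n)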
  const item = data.filter((d) => d.item === searchItem);
  const end = new Date().getTime();
  return `Took ${end - start} Milliseconds to find ${searchItem}`;
});
const endTime = new Date().getTime();

console.log(
  `Searching the array took ${endTime - startTime} milliseconds for ${
    itemsToSearch.length
  } items over a 10-million-entry array.`
);

Improvement using a dictionary

Let us improve the process of finding an item in the array. To do this, we convert the array into a dictionary whose keys are the values we will be searching for, which in this case are unique numbers.

The time it takes to find all 50 entries in the dictionary is between 0 and 1 millisecond. We do introduce extra overhead to convert the array into a dictionary, but that one-time step takes only 300 to 400 milliseconds, which is quite acceptable given the overall speedup.

We reduced the per-lookup complexity from O(n) to O(1), at the cost of a one-time O(n) pass to build the dictionary.

CodePen

// Populate Array size of 10 Million data 😱
let data = [];
const tenMillion = 10_000_000;
console.log("starts");
console.time("prepare array");
for (let i = 0; i < tenMillion; i++) {
  data.push({ item: i, number: `Number_${i}` });
}
console.timeEnd("prepare array");

// Populate Array size of 50 having random numbers.
// We will search each item of this array in that big size of array 🤒
const itemsToSearch = Array.from(
  { length: 50 },
  () => ~~(Math.random() * 9_000_000)
);

console.time("transform_array_to_dictionary");
console.time("time_including_tranforming_array_to_dict");
let dataDict = {};
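// Build the dictionary in a single O(n) pass; every subsequent lookup is O(1)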
data.forEach((d) => (dataDict[d.item] = d));
console.timeEnd("transform_array_to_dictionary");

console.log("Dictionary search starts");
const startTime = new Date().getTime();
const timeLogs = itemsToSearch.map((searchItem) => {
  const start = new Date().getTime();
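  // Plain-object property access is effectively constant time: no array scan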
  const item = dataDict[searchItem];
  const end = new Date().getTime();
  return `Took ${end - start} Milliseconds to find ${searchItem}`;
});
const endTime = new Date().getTime();
// console.log(timeLogs);
console.timeEnd("time_including_tranforming_array_to_dict");

console.log(
  `Searching the dictionary took ${endTime - startTime} milliseconds for ${
    itemsToSearch.length
  } items over a 10-million-entry dictionary.`
);
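
As a side note (not part of the original CodePen), modern JavaScript also provides Map, which is purpose-built for keyed lookups and avoids plain-object pitfalls such as inherited prototype keys. A minimal sketch of the same idea, assuming the data and itemsToSearch arrays from above:

const dataMap = new Map(data.map((d) => [d.item, d])); // one O(n) pass to build
const found = dataMap.get(itemsToSearch[0]); // O(1) lookup per item
console.log(found);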

In summary

By understanding time complexity, we can design solutions that stay fast and efficient regardless of input size. As technology continues to advance and data sets grow larger, the importance of time complexity will only increase. Therefore, it's essential to keep learning and mastering this concept to become a successful programmer.