Understanding Big O Notation: A Beginner’s Guide

Jan 08, 2025

14 min read

Algorithms

When writing code to solve a problem, there are often multiple possible approaches — each with its own tradeoffs.

You might solve a challenge one way, while someone else solves it another way, and both solutions could still produce the correct result. However, not all solutions are created equal when it comes to performance.

That's why it's important to evaluate algorithms not just by whether they work, but also by how efficiently they run. This includes how much time they take to execute and how much memory they consume.

As a programmer, understanding these differences is critical. It helps you write code that scales, performs well, and avoids slowdowns or crashes in real-world applications.

This is where Big O Notation comes in. It gives you a standardised way to describe the efficiency of an algorithm. With Big O, you can:

  • Estimate how your code performs on different input sizes
  • Measure how well your solution scales
  • Compare different algorithms for the same problem

Let’s break it down in simple terms.

What Is Big O Notation? 

Big O notation is a mathematical way to represent an algorithm’s worst-case complexity. It uses algebraic expressions to describe how the performance of an algorithm changes based on the size of the input data.

Big O notation is a way to describe the performance of an algorithm in terms of:

  • Time Complexity – how fast it runs
  • Space Complexity – how much memory it uses

Instead of measuring actual seconds or bytes, Big O expresses how performance scales with input size. It focuses on the trend, not the exact numbers.

What Are Time and Space Complexity? 

When evaluating the performance of your code, it's easy to think that factors like your CPU, memory, or operating system determine how fast or efficient it is. While those things do matter in practice, they're not the focus when analysing algorithms.

Instead, we care about how an algorithm performs relative to the size of its input — regardless of hardware or environment.

  • Time complexity describes how long an algorithm takes to run as the size of the input grows.
  • Space complexity describes how much memory it requires as the input size increases.

These complexities are expressed using Big O notation. They're essential for choosing the right approach, especially when you're working with large datasets or performance-critical systems.

Common Big O Notations 

Take a look at this simple JavaScript function that calculates the sum of all numbers up to a given input n:

function calculateSum(n) {
  let sum = 0;          // Statement 1 → O(1)
  for (let i = 1; i <= n; i++) { // Loop runs 'n' times → O(n)
    sum += i;           // Statement 2 → runs 'n' times
  }
  return sum;           // Statement 3 → O(1)
}

This function runs in O(n) time because the number of operations grows linearly with the size of the input n. Every additional number you add means one more iteration in the loop.

Here’s how we evaluate this: Statement 1 and Statement 3 run once, so they’re constant. The loop (and everything inside it) runs n times. So, total operations = n + 2 → and in Big O, we focus on how it grows, not the exact number. That’s why we simplify to O(n).

This is what we mean when we say complexity is a function of input size.

We can apply this kind of analysis to all algorithms. Here’s a general chart showing common time complexities and how they scale:

Notation   | Name              | Example Scenario
O(1)       | Constant Time     | Accessing an array element: arr[0]
O(log n)   | Logarithmic Time  | Binary search in a sorted array
O(n)       | Linear Time       | Looping through an array
O(n log n) | Linearithmic Time | Merge sort, quicksort (average case)
O(n²)      | Quadratic Time    | Nested loops (e.g., bubble sort)
O(2ⁿ)      | Exponential Time  | Recursive solutions with branching
O(n!)      | Factorial Time    | Generating all permutations

Before we look at examples for each time complexity, let's understand the Big O time complexity chart.

Tip

A slower-growing Big O complexity means more scalable code!

Visualizing Big O 

The Big O chart, also known as the Big O graph, is a visual way of showing how an algorithm’s running time or memory use grows as a function of input size.

It helps programmers see at a glance an algorithm’s worst-case behaviour and how quickly its execution time or memory requirements increase.

[Figure: Big O Complexity Chart — visualizing algorithm time complexity growth rates]

The Big O chart above shows that O(1), constant time complexity, is the best: the algorithm performs a fixed number of operations no matter how large the input is. Then there’s O(log n), which is good, and the rest rank as shown below:

  • O(1) – Constant time: Excellent, most efficient.
  • O(log n) – Logarithmic time: Very good.
  • O(n) – Linear time: Decent for many real-world scenarios.
  • O(n log n) – Linearithmic time: Less ideal, common in efficient sorting.
  • O(n²) – Quadratic time: Usually slow, avoid with large inputs.
  • O(2ⁿ) and O(n!) – Exponential and factorial time: Extremely inefficient, only viable for small inputs.

How to recognize time complexities: 

  • O(1): No loops, single operation.
  • O(log n): Input size is divided (e.g., binary search).
  • O(n): One loop over the input.
  • O(n²): Nested loops.
  • O(2ⁿ): Branching recursion (e.g., Fibonacci).

We’ll now break these down with examples, mostly in JavaScript. The concepts apply in any language, so focus on the logic rather than the syntax.


Big O Time Complexity Examples 

Constant Time: O(1) 

A constant time algorithm is one that performs a fixed number of operations — no matter how large the input is. Its runtime does not grow with the input size.

Time Complexity: O(1) 

Here’s a simple example:

function getFirstItem(arr) {
  return arr[0];
}

This function retrieves the first element of an array, whether that array has 5 elements or 5 million. It performs a single operation — one array access — so its time complexity is O(1).

Space Complexity: O(1) 

Now let's look at space: does the function use more memory as the input size grows?

In this example — no. The function doesn't create any new data structures or store copies of the array. It simply accesses an existing element. This means the space complexity is also O(1).

Here's another example:

function isEven(num) {
  return num % 2 === 0;
}

One quick operation — and again, it doesn’t create any extra memory allocations. Constant time, constant space.

And one more:

function sayHello() {
  console.log("Hello, world!");
}

Just a fixed print statement: also O(1) time and O(1) space.

Why Constant Complexity Matters 

  • These are the most scalable kinds of operations.
  • Great for high-performance or low-latency systems.
  • Ideal in data structures like hash maps where direct access is key.
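
To illustrate the hash-map point above, here is a minimal sketch using JavaScript’s built-in Map (the function name and data are invented for this example):

function getUserRole(rolesByUser, userId) {
  // Map lookups take O(1) time on average, regardless of how many entries exist
  return rolesByUser.get(userId);
}

const rolesByUser = new Map([
  ["alice", "admin"],
  ["bob", "viewer"],
]);

console.log(getUserRole(rolesByUser, "alice")); // "admin"
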
Tip

Use O(1) operations wherever you can. They’re fast, predictable, and efficient.

Now let’s look at the next type: linear time.

Linear Time: O(n) 

A linear time algorithm performs a number of operations that grow directly in proportion to the input size. If the input doubles, the number of steps also doubles. This is one of the most common time complexities in everyday programming.

Time Complexity: O(n) 

Here’s a simple example:

function logItems(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
}

In this function, the loop executes once for each element in the input array arr. So if the array has 10 elements, it logs 10 times; if it has 1,000 elements, it logs 1,000 times. That’s why the time complexity is O(n) — it grows linearly with n, the number of elements.

Another example:

function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i];
    }
  }
  return max;
}

Even though we do some comparisons inside the loop, it still only runs once per item. So again — O(n) time complexity.

Space Complexity: O(1) and O(n) 

  • In the logItems example, we don’t store anything new — just read and print. So space complexity is O(1).
  • In contrast, if we create a new array based on the input, the space complexity can change. For example:
function doubleValues(arr) {
  let result = [];
  for (let i = 0; i < arr.length; i++) {
    result.push(arr[i] * 2);
  }
  return result;
}

Here, we’re building a new array with the same number of elements as the input. That means O(n) space complexity.

Why Linear Complexity Matters 

  • This is a reasonable and often acceptable complexity for many real-world problems.
  • It’s easy to understand and predict.
  • But for very large inputs, even O(n) can become costly, especially if done repeatedly.
Tip

O(n) is common and totally fine — just be mindful of how often you loop and what memory you allocate.

Next, let’s explore logarithmic time and why it’s considered faster than linear in many search algorithms.

Logarithmic Time: O(log n) 

A logarithmic time algorithm reduces the size of the problem with each step — often by half. These algorithms are incredibly efficient for large inputs because they don’t process every element.

Time Complexity: O(log n) 

Let’s look at a classic example — binary search:

function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    
    if (arr[mid] === target) {
      return mid; // Target found
    } else if (arr[mid] < target) {
      left = mid + 1; // Search right half
    } else {
      right = mid - 1; // Search left half
    }
  }
  
  return -1; // Target not found
}

This function works only if the array is sorted. On each iteration, it cuts the search range in half. So if you start with 1,000 items:

  • 1st iteration → 500 items left to search
  • 2nd → 250 items
  • 3rd → 125 items
  • … and so on

It only takes around log₂(n) steps to find the result. For 1,000 items, that’s roughly 10 steps. That’s way faster than O(n).

Space Complexity: O(1) 

In the implementation above, we’re not creating any additional arrays or storing anything new — we just use a few variables. So the space complexity is O(1).

If we were to implement this recursively, however, the space complexity could become O(log n) due to the call stack.
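
For comparison, here is one way the recursive version might look — a sketch only, not the canonical implementation:

function binarySearchRecursive(arr, target, left = 0, right = arr.length - 1) {
  if (left > right) return -1; // Target not found

  const mid = Math.floor((left + right) / 2);

  if (arr[mid] === target) return mid;                         // Target found
  if (arr[mid] < target) {
    return binarySearchRecursive(arr, target, mid + 1, right); // Search right half
  }
  return binarySearchRecursive(arr, target, left, mid - 1);    // Search left half
}

Each recursive call adds a frame to the call stack, and the recursion depth is about log₂(n) — which is exactly where the O(log n) space comes from.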

Why Logarithmic Complexity Matters 

  • Excellent for search operations on sorted data.
  • Often used in divide-and-conquer algorithms.
  • Scales extremely well with large inputs.
Tip

If you see a pattern where the input is repeatedly halved or divided — think O(log n).

Next, we’ll move into algorithms that are more costly: O(n log n) and beyond.

Linearithmic Time: O(n log n) 

This time complexity is common in efficient sorting algorithms, where the input is repeatedly divided (about log n levels) and every element is processed at each level (n operations per level).

Time Complexity: O(n log n) 

Let’s look at merge sort, a classic divide-and-conquer sorting algorithm:

function mergeSort(arr) {
  if (arr.length <= 1) return arr;

  const middle = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, middle));
  const right = mergeSort(arr.slice(middle));

  return merge(left, right);
}

function merge(left, right) {
  let result = [], i = 0, j = 0;

  while (i < left.length && j < right.length) {
    if (left[i] < right[j]) {
      result.push(left[i]);
      i++;
    } else {
      result.push(right[j]);
      j++;
    }
  }

  return result.concat(left.slice(i)).concat(right.slice(j));
}

Here’s what’s happening:

  • The array is split in half over and over (log n times)
  • Each level of recursion processes all elements (n operations)

So the total time complexity becomes O(n log n).

Space Complexity: O(n) 

Merge sort requires additional space to hold the subarrays during the merge phase. As a result, the space complexity is O(n).

This is one of the trade-offs of merge sort: it’s fast but not in-place.

Why Linearithmic Complexity Matters 

  • It’s the best you can do for general-purpose comparison-based sorting.
  • Found in algorithms like merge sort, quicksort (average case), heapsort.
  • Many built-in sort methods in modern languages use O(n log n) algorithms under the hood.
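
For example, JavaScript’s own Array.prototype.sort typically runs in O(n log n) time (V8 uses Timsort under the hood). One caveat worth knowing: with numbers you should pass a comparator, because the default sort compares string representations.

const numbers = [40, 1, 5, 200, 33];

// Without a comparator, sort() compares strings and yields [1, 200, 33, 40, 5]
numbers.sort((a, b) => a - b);

console.log(numbers); // [1, 5, 33, 40, 200] — sorted in O(n log n) time
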
Tip

When you hear “divide and conquer” with full input processing, it’s likely O(n log n).

Next, we’ll look at quadratic time — a common pitfall in nested loops and brute-force logic.

Quadratic Time: O(n²) 

Quadratic time algorithms occur when you perform a linear operation for every element in a collection — often seen in nested loops. If the input size doubles, the number of operations roughly quadruples.

Time Complexity: O(n²) 

Here’s a typical example: checking for duplicate pairs in an array.

function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      if (i !== j && arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
}

We loop over the array, and inside each loop, we loop over it again. This creates n × n operations — or O(n²). This grows very fast and can slow down dramatically with large inputs.

Another example — a naive implementation of bubble sort:

function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        let temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr;
}

This also runs in O(n²) time because of the nested loop structure.

Space Complexity: O(1) or O(n) 

  • The hasDuplicates function doesn’t allocate new memory — just uses existing elements. So O(1) space.
  • Bubble sort also sorts in-place, so it’s O(1) space as well.
  • But if you create an additional data structure (e.g., store all pairs), it may become O(n²) space.

Why Quadratic Complexity Is Tricky 

  • O(n²) algorithms often work for small inputs, but break down quickly as input grows.
  • Watch for nested loops or combinations.
  • Always try to optimise or find an alternative if input size can be large.
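
For instance, the hasDuplicates check above can be brought down to O(n) time by trading some memory for a Set — a sketch of that alternative:

function hasDuplicatesFast(arr) {
  const seen = new Set();
  for (const value of arr) {
    if (seen.has(value)) return true; // Set lookups are O(1) on average
    seen.add(value);
  }
  return false;
}

One pass over the array gives O(n) time, at the cost of O(n) extra space for the Set.
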
Tip

If you see two nested loops iterating over the same data — it's probably O(n²).

Next, we’ll explore exponential and factorial time complexities, where things get out of hand very quickly.

Exponential Time: O(2ⁿ) 

Exponential time algorithms are where things really start to get out of hand. These algorithms double the number of operations with every additional input element.

Time Complexity: O(2ⁿ) 

A classic example is the naive recursive Fibonacci function:

function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

This function makes two recursive calls per input — leading to a binary tree of calls. The number of calls grows exponentially:

  • fibonacci(5) → 15 calls
  • fibonacci(10) → 177 calls
  • fibonacci(30) → over a million calls

This makes O(2ⁿ) very inefficient for anything but small n.

Space Complexity: O(n) 

In the recursive version, each call is added to the call stack. So the space complexity is O(n) (equal to the depth of the recursion).

Why Exponential Complexity Is Dangerous 

  • Not scalable: the number of operations increases too fast.
  • Often appears in brute-force recursive solutions to problems like the traveling salesman, combinations, and subsets.
  • Look for opportunities to optimise with memoization or dynamic programming.
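
For example, memoizing the Fibonacci function above caches each result so it is computed only once, cutting the time from O(2ⁿ) down to roughly O(n) — a minimal sketch:

function fibonacciMemo(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n] !== undefined) return memo[n]; // Reuse a previously computed result

  memo[n] = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
  return memo[n];
}

console.log(fibonacciMemo(30)); // 832040 — roughly n calls instead of millions
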
Tip

Recursive algorithms with multiple branches per call are often exponential.

Factorial Time: O(n!) 

Factorial time complexity is even worse. It typically appears when you generate every possible ordering (permutation) of n items.

Time Complexity: O(n!) 

Here’s a function that generates all permutations of an array:

function permute(arr) {
  if (arr.length === 0) return [[]];
  const result = [];

  for (let i = 0; i < arr.length; i++) {
    const current = arr[i];
    const remaining = arr.slice(0, i).concat(arr.slice(i + 1));
    const remainingPermuted = permute(remaining);

    for (const perm of remainingPermuted) {
      result.push([current, ...perm]);
    }
  }

  return result;
}

This function explores every possible ordering of the input array. For 3 elements, that’s 6 permutations. For 10 elements — 3,628,800 permutations!

Space Complexity: O(n!) 

You also have to store all those permutations, so space usage grows just as fast.

Why Factorial Complexity Is Rare (and Dangerous) 

  • These algorithms are usually only used in theoretical or extremely small inputs.
  • Always look for ways to reduce the problem space.
  • Never use them in production without guarantees on input size.
Tip

If you're generating all combinations or permutations, expect O(n!).


Understanding Big O Notation helps you write better, faster, and more scalable code. It allows you to evaluate how your code performs as input size grows — which is essential in real-world applications and technical interviews.

Here’s a quick summary of what you’ve learned:

  • Big O describes how algorithms scale in terms of time and space.
  • Time complexity = how long it takes to run.
  • Space complexity = how much memory it uses.
  • Constant time (O(1)) is ideal.
  • Algorithms with nested loops or deep recursion often become inefficient quickly.

Finally, understanding the time and space complexity of JavaScript’s built-in array methods helps you write more efficient code — especially when working with large datasets or preparing for interviews.

Here’s a handy breakdown of the most commonly used array methods and their typical time complexities. These can vary slightly depending on the JavaScript engine (like V8 in Chrome), but the general behavior is consistent.

🧮 JavaScript Array Method Complexities 

Method           | Time Complexity | Description
push()           | O(1)            | Adds an element to the end of the array
pop()            | O(1)            | Removes the last element
shift()          | O(n)            | Removes the first element (requires shifting all other elements)
unshift()        | O(n)            | Adds element(s) to the start (also shifts existing elements)
concat()         | O(n)            | Combines two arrays (copies all elements)
slice()          | O(k)            | Extracts a section into a new array (k = number of elements sliced)
splice()         | O(n)            | Can add/remove elements at arbitrary positions — worst-case O(n)
forEach()        | O(n)            | Iterates through each element
map()            | O(n)            | Creates a new array by applying a function to each element
filter()         | O(n)            | Returns elements that match a condition
reduce()         | O(n)            | Reduces the array to a single value
find()           | O(n)            | Returns the first match
includes()       | O(n)            | Checks if a value exists in the array
indexOf()        | O(n)            | Finds the index of the first occurrence
sort()           | O(n log n)      | Default sorting algorithm is usually Timsort (a hybrid of merge sort and insertion sort)
reverse()        | O(n)            | Reverses elements in place
flat()           | O(n)            | Flattens nested arrays (depth = 1 by default)
every() / some() | O(n)            | Tests elements until the condition is false/true

💾 Space Complexity Considerations 

  • In-place methods like push(), pop(), reverse(), and sort() mutate the array directly and generally use O(1) extra space (the engine’s sort implementation may allocate some auxiliary memory internally).
  • Methods like map(), filter(), slice(), concat() return new arrays → usually O(n) space.

🧠 Tips 

  • Use push() over unshift() when possible.
  • Avoid chaining methods like filter().map().reduce() for huge arrays unless necessary — each creates a new array.
  • Know that sort() mutates the array — use slice().sort() if you need to keep the original.
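
For example, a quick sketch of that last tip — the copy is sorted while the original array stays intact:

const scores = [50, 10, 30];

const sorted = scores.slice().sort((a, b) => a - b);

console.log(sorted); // [10, 30, 50]
console.log(scores); // [50, 10, 30] — original untouched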
