Algorithm Complexity

Why do some algorithms slow down with large data? Big-O notation describes how execution time grows with input size. Learn O(1), O(n), O(n²), and O(log n).

Renato Freitas

Updated on May 5, 2026

Why efficiency matters

An algorithm that sorts 1,000 numbers in 1 millisecond may take around 16 minutes to sort 1 million numbers, if its running time grows quadratically. A more efficient algorithm can do the same job in a few seconds. This difference does not come from hardware, but from the mathematics of how execution time grows with input size.

For small inputs, any algorithm seems fast. Modern computers are so fast that differences are imperceptible. The problem appears at scale: databases with billions of records, images with millions of pixels, social networks with hundreds of millions of users. In these contexts, choosing the wrong algorithm can make a product unusable.

Complexity analysis provides a mathematical language for comparing algorithms independently of hardware. Instead of measuring seconds (which vary with the processor), we measure how the number of operations grows relative to the input size n.

Big-O notation

Big-O is a mathematical notation that describes the growth behavior of a function. O(f(n)) means 'in the worst case, the execution time grows at most proportionally to f(n), ignoring constants and lower-order terms'.

Why ignore constants? Because what matters is the growth trend. An algorithm that performs 100n operations and another that performs 5n operations belong to the same complexity class O(n). For n = 1 million, both perform between 5 and 100 million operations — the difference is a constant factor, not a difference in scale.

Big-O describes the worst case. Linear search in a list of n elements may find the element in the first position (1 operation) or the last (n operations). Big-O focuses on the worst case: O(n).

The four most common complexities

O(1) — constant: execution time does not depend on input size. Accessing an array element by index is O(1): grades[42] always takes the same time, whether the array has 10 or 10 million elements. It is the best possible complexity.
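
To illustrate, here is a minimal sketch in Python (the article itself shows no code, so the language choice is an assumption): accessing by index touches a single element, no matter how long the list is.

```python
# O(1): accessing an element by index takes the same time for any list length.
small = list(range(10))
large = list(range(1_000_000))

print(small[5])   # one operation
print(large[5])   # still one operation, even though this list is 100,000x larger
```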

O(n) — linear: time grows proportionally to input size. Linear search (traversing all elements until the target is found) is O(n). If n doubles, the maximum time doubles. For n = 1 million, up to 1 million comparisons.
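
A linear search sketch (the function name is just illustrative): in the worst case, when the target is missing or sits at the end, the loop runs n times.

```python
def linear_search(items, target):
    """Return the index of target, or -1 if it is not present. O(n) worst case."""
    for i, value in enumerate(items):
        if value == target:
            return i      # best case: found at the first position, 1 comparison
    return -1             # worst case: n comparisons and the target is absent

print(linear_search([7, 3, 9, 1], 9))   # 2
print(linear_search([7, 3, 9, 1], 5))   # -1
```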

O(n²) — quadratic: common in algorithms with two nested loops. Bubble Sort is O(n²): for each of the n elements, it makes up to n comparisons. If n doubles, the time quadruples. For n = 10,000, up to 100 million operations — it starts to slow down.
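
A straightforward Bubble Sort sketch; the two nested loops over the n elements are exactly what makes it O(n²).

```python
def bubble_sort(items):
    """Sort a list in place with two nested loops over n elements: O(n²)."""
    n = len(items)
    for i in range(n):                   # outer loop: n passes
        for j in range(n - 1 - i):       # inner loop: up to n comparisons per pass
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]   # swap neighbours

data = [5, 2, 9, 1, 7]
bubble_sort(data)
print(data)   # [1, 2, 5, 7, 9]
```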

O(log n) — logarithmic: time grows very slowly. Binary search in a sorted list is O(log n): at each step, it eliminates half of the remaining elements. For n = 1 billion, only about 30 comparisons are needed (log₂ of 1,000,000,000 ≈ 30). In practice, it behaves almost like constant time.
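
To check the "about 30" figure, here is a quick sketch that counts how many times 1 billion can be halved before a single element remains (a full binary search appears later in the article).

```python
import math

n = 1_000_000_000
steps = 0
while n > 1:
    n //= 2        # each step of binary search discards half of what remains
    steps += 1

print(steps)                      # 29 halvings from one billion down to one element
print(math.log2(1_000_000_000))   # ≈ 29.9, hence "about 30" comparisons in the worst case
```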

  • O(1): constant — index access in an array, insertion into a hash table (on average)
  • O(log n): logarithmic — binary search, operations on balanced trees
  • O(n): linear — linear search, traversing a list
  • O(n log n): log-linear — Merge Sort, Quicksort (average case)
  • O(n²): quadratic — Bubble Sort, two nested loops over n elements
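
To make the classes above concrete, a small sketch that prints rough operation counts for a few input sizes (constants are ignored, so read these as orders of magnitude).

```python
import math

for n in (10, 1_000, 1_000_000):
    print(f"n = {n:>9,}: "
          f"O(log n) ≈ {math.log2(n):.0f}, "
          f"O(n) = {n:,}, "
          f"O(n log n) ≈ {n * math.log2(n):,.0f}, "
          f"O(n²) = {n**2:,}")
```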

Linear search versus binary search

Linear search checks each element from start to end. It is O(n) and works on any list, sorted or not. Binary search requires the list to be sorted, but is much more efficient: O(log n).

How does binary search work? Take the middle element of the list. If it is the target, done. If it is greater than the target, discard the right half and repeat on the left half. If it is smaller, discard the left half and repeat on the right half. At each step, the search space is cut in half.
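
A binary search sketch following exactly the steps just described; it assumes the input list is already sorted.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent. O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid               # found the target at the middle position
        elif sorted_items[mid] > target:
            high = mid - 1           # target can only be in the left half
        else:
            low = mid + 1            # target can only be in the right half
    return -1                        # search space exhausted: not present

names = ["Ana", "Bruno", "Carla", "Diego", "Elisa"]
print(binary_search(names, "Carla"))   # 2
print(binary_search(names, "Zoe"))     # -1
```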

Practical comparison: searching for a name in a sorted list of 1 million people. Linear search: up to 1,000,000 comparisons. Binary search: up to 20 comparisons. This dramatic difference explains why databases sort their indexes and why sorting is so important.

Practical implications for real software

In day-to-day development, knowledge of complexity guides data structure choices. Hash tables (dictionaries) offer O(1) insertion and search on average — that is why they are ubiquitous. Balanced binary trees offer O(log n) for search, insertion, and deletion — used in databases and file systems.
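
A small sketch of that choice in practice: looking up a key in a dictionary hashes the key and jumps straight to it (O(1) on average), while finding the same record in a list of pairs requires a scan (O(n)).

```python
ages_list = [("Ana", 31), ("Bruno", 27), ("Carla", 45)]   # list of pairs
ages_dict = {"Ana": 31, "Bruno": 27, "Carla": 45}         # hash table (dict)

# O(n): may inspect every pair before finding (or failing to find) the name.
age_from_list = next((age for name, age in ages_list if name == "Carla"), None)

# O(1) on average: the key is hashed and located directly.
age_from_dict = ages_dict.get("Carla")

print(age_from_list, age_from_dict)   # 45 45
```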

A common mistake is calling an O(n) function inside an O(n) loop, inadvertently creating an O(n²) algorithm. This can go unnoticed in tests with little data and only surface in production with real data.
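
A common shape of that mistake, sketched below: `x in b` on a list is itself an O(n) scan, so calling it once per element of the outer loop produces O(n²); converting the inner list to a set restores O(n) overall.

```python
def shared_items_quadratic(a, b):
    """O(n²): 'x in b' scans the list b once for every element of a."""
    return [x for x in a if x in b]

def shared_items_linear(a, b):
    """O(n): building the set is O(n), and each membership test is O(1) on average."""
    b_set = set(b)
    return [x for x in a if x in b_set]

a = list(range(0, 10_000, 2))
b = list(range(0, 10_000, 3))
# Both return the same result; only the second stays fast as the lists grow.
print(shared_items_linear(a, b) == shared_items_quadratic(a, b))   # True
```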

Premature optimization is a real problem: do not try to optimize before identifying bottlenecks. But knowing the complexity of the algorithms you use is different from premature optimization — it is making informed choices from the beginning of design.

Frequently asked questions

Does Big-O measure time or memory?

Big-O can measure both. Time complexity describes how many operations the algorithm executes. Space complexity describes how much additional memory is needed as a function of input size. When Big-O is mentioned without qualification, it usually refers to time complexity.
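
A quick sketch of the space side: both functions below take O(n) time, but the first uses O(1) extra memory while the second allocates a new list as large as its input, O(n) extra memory.

```python
def total_in_place(numbers):
    """O(n) time, O(1) extra space: a single accumulator variable."""
    total = 0
    for x in numbers:
        total += x
    return total

def doubled_copy(numbers):
    """O(n) time, O(n) extra space: the result list grows with the input."""
    return [2 * x for x in numbers]

data = [1, 2, 3, 4]
print(total_in_place(data))   # 10
print(doubled_copy(data))     # [2, 4, 6, 8]
```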

What is average-case versus worst-case complexity?

The worst case is the most unfavorable scenario possible. The average case is the expectation over random inputs. Quicksort has a worst case of O(n²) (when the pivot is always the smallest or largest element) but an average case of O(n log n). In practice, the average case is usually more relevant, but the worst case matters in critical systems.
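
A simple Quicksort sketch that always takes the first element as the pivot; on random data it averages O(n log n), but on an already sorted list that pivot is always the smallest remaining element and the work degrades to O(n²).

```python
def quicksort(items):
    """Average case O(n log n); worst case O(n²) with an unlucky pivot."""
    if len(items) <= 1:
        return items
    pivot = items[0]                                   # naive pivot choice
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([8, 3, 5, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]
# On an already sorted input, 'smaller' is always empty, so the recursion
# depth reaches n and the total work becomes quadratic.
```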

Is an O(n²) algorithm always worse than an O(n) one?

For sufficiently large n, yes. But for small n, an O(n²) algorithm with a low constant may be faster than an O(n) algorithm with a high constant. Insertion Sort (O(n²)) is often faster than Merge Sort (O(n log n)) for lists with fewer than 20–30 elements, because it has less overhead.

How do I determine the complexity of an algorithm I wrote?

Count the loops and how they grow with n. A single loop over n elements is O(n). Two nested loops, each over n elements, is O(n²). If at each iteration you divide the problem in half (as in binary search), it is O(log n). Ignore constants and keep only the dominant term: O(n² + n) simplifies to O(n²).
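
Two small sketches annotated with that reasoning; the function names are just illustrative.

```python
def sum_all(items):
    """One loop over n elements: O(n)."""
    total = 0
    for x in items:               # runs n times
        total += x                # O(1) work per iteration
    return total

def count_duplicate_pairs(items):
    """Two nested loops over n elements: O(n²)."""
    count = 0
    for i in range(len(items)):               # n iterations
        for j in range(i + 1, len(items)):    # up to n iterations each
            if items[i] == items[j]:
                count += 1
    return count

print(sum_all([1, 2, 3]))                    # 6
print(count_duplicate_pairs([1, 2, 1, 2]))   # 2
```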

Does complexity matter for beginners?

Understanding the basic concepts, yes — knowing that two nested loops are potentially problematic and that searching an unsorted list is slower than a sorted one are immediately practical pieces of knowledge. Formal detailed analysis can be learned gradually as experience grows.
