# Big O notation

**Big O notation** describes how the work an algorithm does grows as its input grows. In technical terms, it is an upper bound on the algorithm's run time, usually stated for the worst case.

*O(1) — Constant run-time algorithm.*

This is an algorithm that does a fixed amount of work regardless of the size of the input. For instance, an algorithm that tests whether the first element of an array is equal to the second performs one comparison no matter how large the array is.
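The example above can be sketched in Python (the function name is illustrative):

```python
def first_two_equal(arr):
    # One comparison, regardless of how long the list is: O(1).
    return arr[0] == arr[1]
```

However long `arr` is, the run time never changes, which is what the constant in O(1) captures.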

*O(n) — Linear run-time algorithm*

For this algorithm, the number of iterations grows in proportion to the number of array elements. An algorithm that compares each adjacent pair of elements, for example, performs n - 1 comparisons, where n is the number of elements: if an array has 10 elements, it requires 9 comparisons.
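One linear-time algorithm that makes exactly n - 1 comparisons is a sortedness check; a minimal sketch, assuming the input is a Python list:

```python
def is_sorted(arr):
    # Compares each adjacent pair once: n - 1 comparisons
    # for n elements, so the run time is O(n).
    for i in range(len(arr) - 1):
        if arr[i] > arr[i + 1]:
            return False
    return True
```

Doubling the length of the array doubles the number of comparisons, the hallmark of linear time.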

*O(n²) — Quadratic run-time algorithm*

Here the number of iterations grows with the square of the input size. Suppose you have an algorithm that tests whether any array element is duplicated elsewhere in the array. The first element must be compared with every other element. The second must be compared with every element except the first, the third with every element except the first two, and so on. That adds up to n(n-1)/2 comparisons, which is O(n²).
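The duplicate-check described above can be written with two nested loops; this sketch returns whether any duplicate exists:

```python
def has_duplicate(arr):
    # Compare each element with every later element:
    # (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, i.e. O(n^2).
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False
```

Doubling the array length roughly quadruples the number of comparisons, which is why quadratic algorithms become slow on large inputs.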

*O(log n) — Logarithmic run-time algorithm*

The algorithm splits the data into smaller portions on each iteration, so the work grows only logarithmically with the input size. Binary search is the classic example: each step halves the remaining search range. This makes logarithmic algorithms well suited to very large data sets.
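A minimal binary search sketch, assuming the input list is already sorted:

```python
def binary_search(arr, target):
    # Each step halves the remaining range, so a sorted list of
    # n elements needs at most about log2(n) steps: O(log n).
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid       # found: return the index
        elif arr[mid] < target:
            lo = mid + 1     # discard the lower half
        else:
            hi = mid - 1     # discard the upper half
    return -1                # not present
```

A sorted array of a million elements needs at most about 20 steps, because 2²⁰ is just over a million.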

This has been a short article, but I hope it gives you some idea of how Big O notation works.

Thank you for reading.