Complexity Using Big-O Notation
By: wh0001 • 16/7/2014 • Abstract • 472 Words (2 Pages)
We express complexity using big-O notation.
For a problem of size N:
a constant-time algorithm is "order 1": O(1)
a linear-time algorithm is "order N": O(N)
a quadratic-time algorithm is "order N squared": O(N²)
Note that the big-O expressions do not have constants or low-order terms. This is
because, when N gets large enough, constants and low-order terms don't matter (a
constant-time algorithm will be faster than a linear-time algorithm, which will be faster
than a quadratic-time algorithm).
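To make the three orders of growth concrete, the following sketch (helper names are mine, not from the text) counts the number of basic "steps" each kind of algorithm performs on an input of size N:

```python
# Counting basic operations for algorithms of different orders of growth.
# Each helper returns the number of "steps" performed for a problem of size n.

def constant_steps(n):
    # O(1): the work does not depend on n (e.g., reading one array element).
    return 1

def linear_steps(n):
    # O(N): one pass over the input (e.g., summing a list).
    return sum(1 for _ in range(n))

def quadratic_steps(n):
    # O(N^2): a pass over the input for each element (e.g., comparing all pairs).
    return sum(1 for _ in range(n) for _ in range(n))

for n in (10, 100, 1000):
    print(n, constant_steps(n), linear_steps(n), quadratic_steps(n))
```

Doubling N leaves the constant count unchanged, doubles the linear count, and quadruples the quadratic count, which is why only the dominant term matters for large N.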
Formal definition:
A function T(N) is O(F(N)) if for some constant c and for values of N greater than some
value n0:
T(N) <= c * F(N)
The idea is that T(N) is the exact complexity of a procedure/function/algorithm as a
function of the problem size N, and that F(N) is an upper-bound on that complexity (i.e.,
the actual time/space or whatever for a problem of size N will be no worse than F(N)).
In practice, we want the smallest F(N) -- the least upper bound on the actual complexity.
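The definition can be spot-checked mechanically. The sketch below (my own helper, not from the text) tests the inequality for a finite range of N; a finite check can refute a big-O claim but cannot prove one. The example T(N) = 2N + 10, with witnesses c = 3 and n0 = 10, is a hypothetical illustration:

```python
# Finite spot-check of the big-O definition: T(N) <= c * F(N) for all N with
# n0 < N < n_max. This can only refute a claim, never prove it for all N.

def holds_for(T, F, c, n0, n_max=10_000):
    return all(T(n) <= c * F(n) for n in range(n0 + 1, n_max))

# Hypothetical example: T(N) = 2*N + 10 is O(N) with witnesses c = 3, n0 = 10,
# since 2*N + 10 <= 3*N exactly when N >= 10.
print(holds_for(lambda n: 2*n + 10, lambda n: n, c=3, n0=10))  # True
```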
For example, consider: T(N) = 3 * N² + 5.
We can show that T(N) is O(N²) by choosing c = 4 and n0 = 2.
This is because for all values of N greater than 2:
3 * N² + 5 <= 4 * N²
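This inequality can be checked numerically over a finite range (a sketch, not a proof; algebraically it reduces to 5 <= N², which holds for every integer N >= 3):

```python
# Spot-check of 3*N^2 + 5 <= 4*N^2 for integer N > 2.
# Subtracting 3*N^2 from both sides gives 5 <= N^2, true for all N >= 3.

ok = all(3*n*n + 5 <= 4*n*n for n in range(3, 1001))
print(ok)  # True
```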
T(N) is not O(N), because whatever constant c and value n0 you choose, there is always
a value of N > n0 such that (3 * N² + 5) > (c * N).
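To see this concretely: for any proposed c, picking N = max(n0 + 1, c + 1) already breaks the bound, because 3 * N² exceeds c * N as soon as 3 * N > c. A small sketch (helper name is mine):

```python
# For T(N) = 3*N^2 + 5, no constant c can make T(N) <= c*N hold for all large N.
# Given any c and n0, the witness N = max(n0 + 1, c + 1) satisfies N > n0 and
# 3*N^2 + 5 > c*N, since 3*N > c at that point.

def witness_violates(c, n0):
    n = max(n0 + 1, c + 1)   # a value of N > n0 with 3*N > c
    return 3*n*n + 5 > c * n

print(all(witness_violates(c, n0=2) for c in range(1, 1000)))  # True
```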
In general, how can you determine the running time of a piece of code? The answer is
that it depends on what kinds of statements are used.
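For instance, the following sketch (a hypothetical function, not from the text) annotates each kind of statement with its cost:

```python
# Hypothetical snippets annotated with running times, to illustrate how the
# kind of statement determines the cost for an input of size N = len(arr).

def example(arr):
    n = len(arr)

    x = arr[0]                # simple statement: O(1)

    total = 0
    for v in arr:             # single loop over N items...
        total += v            # ...with an O(1) body => O(N)

    pairs = 0
    for i in range(n):        # nested loops: N iterations...
        for j in range(n):    # ...each doing O(N) work => O(N^2)
            pairs += 1

    return x, total, pairs

print(example([1, 2, 3, 4]))  # (1, 10, 16)
```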
...