Algorithm complexity


Remarks

An algorithm is a list of steps that solve a problem. Each step depends on some set of previous steps, or on the start of the algorithm. A small problem might look like the following:

[Figure: a small dependency graph of steps]

This structure is called a directed acyclic graph, or DAG for short. The links between each node in the graph represent dependencies in the order of operations, and there are no cycles in the graph.

How do dependencies happen? Take for example the following code:

total = 0
for(i = 1; i < 10; i++)
    total = total + i

In this pseudocode, each iteration of the for loop depends on the result of the previous iteration, because the value calculated in the previous iteration is used in the next one. The DAG for this code might look like this:

[Figure: the DAG for this loop, a single chain of nodes, one per iteration]

If you understand this representation of algorithms, you can use it to understand algorithm complexity in terms of work and span.

Work

Work is the actual number of operations that need to be executed in order to achieve the goal of the algorithm for a given input size n.

Span

Span is sometimes referred to as the critical path: it is the fewest number of steps an algorithm must execute sequentially in order to achieve its goal, even with unlimited parallelism.

The following image highlights the graph to show the differences between work and span on our sample DAG.

[Figure: the sample DAG, with all nodes (the work) shown on the left and the critical path (the span) highlighted on the right]

The work is the total number of nodes in the graph. This is represented by the graph on the left above. The span is the critical path: the longest path from the start to the end. When work can be done in parallel, the highlighted nodes on the right represent the span, the fewest number of steps required. When work must be done serially, the span is the same as the work.
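The serial loop above has span equal to its work, since every addition waits on the previous one. The same sum can be restructured as a pairwise reduction, turning the DAG into a balanced tree. A minimal Python sketch (the function name and depth bookkeeping are illustrative, not part of the original example):

```python
def pairwise_sum(xs):
    """Sum a list by pairwise reduction, tracking the reduction depth.

    Returns (total, depth). The depth is the span of the reduction
    tree: with enough parallel workers, each level of the tree runs
    in one step, so n numbers are summed in about log2(n) steps,
    even though the work is still n - 1 additions.
    """
    if len(xs) == 1:
        return xs[0], 0
    mid = len(xs) // 2
    left_sum, left_depth = pairwise_sum(xs[:mid])
    right_sum, right_depth = pairwise_sum(xs[mid:])
    return left_sum + right_sum, 1 + max(left_depth, right_depth)
```

For eight numbers the serial loop needs seven dependent additions, while `pairwise_sum(list(range(1, 9)))` reaches the same total with a reduction depth of only three.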

Both work and span can be evaluated independently in terms of analysis. The speed of an algorithm is determined by the span. The amount of computational power required is determined by the work.
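Both measures can be read directly off the DAG. A minimal Python sketch, assuming the graph is stored as an adjacency list mapping each node to the nodes that depend on it (the dictionaries and helper names below are illustrative):

```python
def work(dag):
    # Work is simply the number of nodes (operations) in the graph.
    return len(dag)

def span(dag):
    # Span is the longest path length, in nodes, from any node to a
    # sink, computed with a memoized depth-first search.
    memo = {}
    def longest_from(node):
        if node not in memo:
            memo[node] = 1 + max(
                (longest_from(succ) for succ in dag[node]), default=0
            )
        return memo[node]
    return max(longest_from(node) for node in dag)

# The sample loop from earlier: a pure chain of four nodes.
chain = {"init": ["i1"], "i1": ["i2"], "i2": ["i3"], "i3": []}

# A diamond: the two middle steps can run in parallel.
diamond = {"start": ["a", "b"], "a": ["end"], "b": ["end"], "end": []}
```

For the chain, work and span are both 4; for the diamond, the work is still 4 but the span is only 3, because `a` and `b` can execute at the same time.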


Big-Theta notation

Unlike Big-O notation, which gives only an upper bound on the running time of an algorithm, Big-Theta is a tight bound: both an upper and a lower bound. A tight bound is more precise, but also more difficult to compute.

An example

If an algorithm takes 42n^2 + 25n + 4 operations to finish for an input of size n, we say it is O(n^2), but it is also O(n^3) and O(n^100). However, it is Ө(n^2), and it is not Ө(n^3), Ө(n^4), etc. An algorithm that is Ө(f(n)) is also O(f(n)), but not vice versa!

Formal mathematical definition

Let f and g be two functions defined on some subset of the real numbers. We write f(x) = Ө(g(x)) as x -> infinity if and only if there are positive constants K and L and a real number x0 such that the following holds:

K|g(x)| <= f(x) <= L|g(x)| for all x >= x0.
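For the example above the constants can be exhibited concretely: with f(n) = 42n^2 + 25n + 4 and g(n) = n^2, choosing K = 42, L = 71, and x0 = 1 works, since 25n + 4 <= 29n^2 for n >= 1. A quick Python check (these constants are one valid choice among many, picked for illustration):

```python
def f(n):
    return 42 * n**2 + 25 * n + 4

def g(n):
    return n**2

K, L, x0 = 42, 71, 1

# Verify K*g(n) <= f(n) <= L*g(n) for a range of inputs beyond x0.
for n in range(x0, 10_000):
    assert K * g(n) <= f(n) <= L * g(n)
```

No such sandwich is possible with g(n) = n^3: any fixed K > 0 eventually fails the lower bound, which is why f is O(n^3) but not Ө(n^3).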

Types of complexity

  • Worst case: the algorithm takes at most this many steps over all possible inputs and (if applicable) all possible generated random numbers.
  • Average: this is the complexity of the running time averaged over all possible inputs.
  • Expected: this is the complexity of the running time averaged over all possible generated random numbers. In practice, an algorithm with good expected complexity is as good as one with good worst-case complexity, except in extreme cases, e.g. when an adversary can deliberately trigger the bad cases and mount a DoS attack.
  • Amortized: if an operation has amortized complexity O(f(n)), then the total time of the first k consecutive operations has non-amortized complexity O(f(n)*k). Note that you cannot naively convert a data structure with amortized time bounds into a persistent data structure and expect the bounds to hold.
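The classic example of amortized complexity is appending to a dynamic array that doubles its capacity when full. A minimal Python sketch that counts element writes (the class name and bookkeeping are illustrative, not from the original text):

```python
class CountingArray:
    """Dynamic array that doubles when full, counting element writes."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.writes = 0  # total writes, including copies during resizes

    def append(self, value):
        if self.size == self.capacity:
            # Resizing copies every existing element once.
            self.writes += self.size
            self.capacity *= 2
        self.writes += 1  # write the new element itself
        self.size += 1

arr = CountingArray()
k = 1000
for i in range(k):
    arr.append(i)

# A single append is O(n) in the worst case (when it triggers a
# resize), but the resize copies over k appends sum to less than 2k,
# so the total stays within a constant multiple of k: each append is
# amortized O(1).
assert arr.writes <= 3 * k
```

This is exactly the definition above with f(n) = 1: the first k appends cost O(k) in total, even though individual appends occasionally cost much more.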
