Using a different algorithm depending on the size of the input

I recently finished a course on advanced algorithms, and another on complexity & computability theory, and in the past few days my mind has been somewhat preoccupied by this question.

Why don’t we just use a different algorithm based on the size of the input?

I’m asking this question because I’ve never seen this done in practice or heard of it, and I’m also simply curious about the answer. I also tried looking it up on StackExchange and Google with various queries but couldn’t come up with anything remotely related to my question.

I’ll take the example of sorting algorithms, as they’re quite common and there are so many, with different properties and runtime complexities.

Say I have three algorithms, SortA, SortB and SortC. SortA is incredibly efficient on inputs of size <= 100 but becomes very slow on anything bigger; SortB is more efficient than SortA on inputs of size > 100 but falls off quickly once the size exceeds 1000. Finally, SortC isn’t very fast on inputs of size < 1000, but is faster than both SortA and SortB on very large inputs.

Why shouldn’t/couldn’t I make a function like this (written in pseudo-C#-ish code for simplicity)? Or why isn’t it done in practice?

int[] Sort(int[] numbers) {
    if (numbers.Length <= 100) {
        return SortA(numbers);
    } 
    else if (numbers.Length <= 1000) {
        return SortB(numbers);
    } 
    else {
        return SortC(numbers);
    }
}

I’m assuming some of the potential reasons are that

  1. it’s more code to write,
  2. more potential bugs since there’s more code,
  3. it’s not necessarily easy to find the exact breakpoints at which one algorithm becomes faster than another, and it might take a lot of time to do so (e.g. running performance tests across many input sizes for every algorithm; a sketch of such a test follows this list),
  4. the breakpoints might only occur for small or medium-sized inputs, where the absolute time saved is too small to justify the additional implementation work,
  5. it just isn’t worth it in general, and is only done in applications where performance is crucial (similar to how some numerical algorithms use a different method depending on the properties of a matrix, like symmetry, tridiagonality, …),
  6. input size isn’t the only factor in an algorithm’s performance.
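
To make point 3 concrete, here is a minimal sketch (in the same C#-ish style as above) of the kind of measurement you would need to locate the breakpoints: time each candidate sort over a range of input sizes and look for the crossovers. SortA, SortB and SortC are the hypothetical sorts from above, stubbed out with Array.Sort just so the sketch runs; every detail here is an assumption for illustration, not a recommended methodology.

using System;
using System.Diagnostics;
using System.Linq;

class CrossoverBenchmark {
    // Placeholder implementations of the hypothetical sorts; swap in the real ones.
    static int[] SortA(int[] a) { var c = (int[])a.Clone(); Array.Sort(c); return c; }
    static int[] SortB(int[] a) { var c = (int[])a.Clone(); Array.Sort(c); return c; }
    static int[] SortC(int[] a) { var c = (int[])a.Clone(); Array.Sort(c); return c; }

    // Average wall-clock time of one call, in milliseconds.
    static double TimeMs(Func<int[], int[]> sort, int[] input, int reps = 20) {
        var sw = Stopwatch.StartNew();
        for (int r = 0; r < reps; r++) sort(input);
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds / reps;
    }

    static void Main() {
        var rng = new Random(42);
        foreach (int n in new[] { 10, 50, 100, 500, 1000, 5000, 10000 }) {
            int[] data = Enumerable.Range(0, n).Select(_ => rng.Next()).ToArray();
            Console.WriteLine($"n={n,6}  A={TimeMs(SortA, data):F3}ms  B={TimeMs(SortB, data):F3}ms  C={TimeMs(SortC, data):F3}ms");
        }
    }
}

Even this toy version hints at the cost: you need representative inputs, enough repetitions to smooth out noise, and whatever breakpoints you find are specific to one machine and one data distribution.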

I’m familiar with Landau/Big O notation, so feel free to use it in your answers.

Go to Source
Author: cliesens

Collaborative graph exploration algorithm

Given an undirected graph G with (10 .. 500) vertices and (vertex_count .. 1000) edges.
Each vertex has (1 .. 6) incident edges. All edges have a uniform cost of 1.

Given K agents, all starting from the same vertex START.

What would be the best way to explore the graph to visit all the vertices in as little time as possible?

I tried running a DFS from START, which gives me all the shortest paths to the “leaves” of the graph, and then sorting the paths by their length.

Unfortunately there are often more paths than there are agents, so I have to re-assign agents that have finished exploring the shortest paths. That’s where I’m stuck, and I would appreciate any kind of help (suggestions, ideas, algorithms, …).
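
For reference, here is a minimal sketch of the path-extraction step described above, with one change: it uses a BFS instead of a DFS, since on unit-cost edges a BFS tree is what guarantees shortest paths from START. The graph representation, names, and every other detail are assumptions made for illustration; the open problem of re-assigning freed agents is not addressed here.

using System;
using System.Collections.Generic;
using System.Linq;

class Exploration {
    // Adjacency list for the undirected, unit-cost graph (assumed representation).
    static List<int>[] BuildAdjacency(int vertexCount, IEnumerable<(int u, int v)> edges) {
        var adj = new List<int>[vertexCount];
        for (int i = 0; i < vertexCount; i++) adj[i] = new List<int>();
        foreach (var (u, v) in edges) { adj[u].Add(v); adj[v].Add(u); }
        return adj;
    }

    // BFS from START; with unit edge costs, the BFS tree holds a shortest path to every vertex.
    // Returns each vertex's parent in that tree (-1 for START and for unreachable vertices).
    static int[] BfsParents(List<int>[] adj, int start) {
        var parent = Enumerable.Repeat(-1, adj.Length).ToArray();
        var seen = new bool[adj.Length];
        var queue = new Queue<int>();
        seen[start] = true;
        queue.Enqueue(start);
        while (queue.Count > 0) {
            int u = queue.Dequeue();
            foreach (int v in adj[u])
                if (!seen[v]) { seen[v] = true; parent[v] = u; queue.Enqueue(v); }
        }
        return parent;
    }

    // Collect the START-to-leaf paths of the BFS tree and sort them by length,
    // which is the list the question builds before assigning the K agents.
    static List<List<int>> SortedLeafPaths(int[] parent, int start) {
        var hasChild = new bool[parent.Length];
        for (int v = 0; v < parent.Length; v++)
            if (parent[v] != -1) hasChild[parent[v]] = true;

        var paths = new List<List<int>>();
        for (int v = 0; v < parent.Length; v++) {
            if (v == start || parent[v] == -1 || hasChild[v]) continue; // keep reachable leaves only
            var path = new List<int>();
            for (int cur = v; cur != -1; cur = parent[cur]) path.Add(cur);
            path.Reverse(); // path now runs START -> leaf
            paths.Add(path);
        }
        return paths.OrderBy(p => p.Count).ToList();
    }
}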

Go to Source
Author: ZogStriP

Time complexity of a small piece of code

I’m trying to find the time complexity for the following code.

N = the number of elements in the array
D = a constant, D > 1
V = a constant, V > 1000

counter=1; //the maximum value of the counter is N/D.
for(i=0; i<N; i++)
{
    [OP1]   O1_Operation;        // O(1) operation.   [Total: N times]
    [OP2]   if(i%D!=0) continue; // O(1) operation.   [Total: N times]

    [OP3]   for(j=0;j<counter;j++) //                 [Total: {(N/D)*((N/D)+1)}/2 times] 
    [OP4]        for(s=0;s<V;s++)
    [OP5]            O1_Operation; // O(1) operation. [Total: (V*{(N/D)*((N/D)+1)}/2) times] 

    [OP6]   counter++;             // O(1) operation. [Total: N/D times]
 }

I annotated each operation with its time complexity and the total number of times it executes.
What confuses me in this code is the mod operation: it allows only N/D of the outer iterations to reach OP[3-6].

The first time [OP3] is reached, its loop executes 1 time; the second time, 2 times; and so on up to N/D times. Hence the total number of executions is [(N/D) * ((N/D) + 1)] / 2.
Dropping D and V because they are constants leads to a complexity of O(N^2) for the whole code.
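
Written out as a single count (OP1 and OP2 run N times each, OP5 runs V times per iteration of OP3, and OP6 runs N/D times), the total described above is:

$$ T(N) \;=\; \underbrace{2N}_{\text{OP1, OP2}} \;+\; V\sum_{k=1}^{N/D} k \;+\; \underbrace{\frac{N}{D}}_{\text{OP6}} \;=\; 2N \;+\; \frac{V}{2}\cdot\frac{N}{D}\left(\frac{N}{D}+1\right) \;+\; \frac{N}{D} \;=\; \Theta(N^2) \quad \text{for constant } D, V. $$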

Is this correct?

Go to Source
Author: Alice