I recently finished a course on advanced algorithms, and another on complexity & computability theory, and in the past few days my mind has been somewhat preoccupied by this question.

Why don’t we just use a different algorithm based on the size of the input?

I’m asking this question because I’ve never seen this done in practice or heard of it, and I’m also simply curious about the answer. I also tried looking it up on StackExchange and Google with various queries but couldn’t come up with anything remotely related to my question.

I’ll take the example of sorting algorithms, as they’re quite common and there are so many, with different properties and runtime complexities.

Say I have three algorithms, `SortA`, `SortB`, and `SortC`. `SortA` is incredibly efficient on inputs of size <= 100 but becomes very slow on anything bigger; `SortB` is more efficient than `SortA` on inputs of length > 100 but falls off quickly after a size of 1000. Finally, `SortC` isn't very fast on inputs of size < 1000, but is faster than `SortA` and `SortB` on very large inputs.

Why shouldn’t/couldn’t I make a function like this (written in pseudo-C#-ish code for simplicity)? Or why isn’t it done in practice?

```
int[] Sort(int[] numbers) {
    if (numbers.Length <= 100) {
        return SortA(numbers);
    } else if (numbers.Length <= 1000) {
        return SortB(numbers);
    } else {
        return SortC(numbers);
    }
}
```
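For concreteness, here is a runnable sketch of the same dispatch idea (in Java rather than C#), with insertion sort standing in for the small-input algorithm and the library sort standing in for the large-input one; the threshold of 100 is arbitrary and would really have to come from benchmarking:

```java
import java.util.Arrays;

public class HybridSort {
    // Hypothetical breakpoint; in practice it would be found empirically.
    static final int SMALL = 100;

    // Insertion sort: low constant-factor overhead, so it tends to win
    // on small inputs despite its O(n^2) worst case.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    // Dispatch on input size, mirroring the pseudocode above.
    static int[] sort(int[] numbers) {
        int[] copy = Arrays.copyOf(numbers, numbers.length);
        if (copy.length <= SMALL) {
            insertionSort(copy);  // plays the role of SortA
        } else {
            Arrays.sort(copy);    // library O(n log n) sort, role of SortC
        }
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 3, 1, 4, 2})));
    }
}
```

Note that the dispatch itself costs only one length comparison, so the overhead of the wrapper is negligible compared to the sort.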

I’m assuming some of the potential reasons are that

- it’s more code to write,
- more potential bugs since there’s more code,
- it’s not necessarily easy to find the exact breakpoints at which some algorithm becomes faster than another, or it might take a lot of time to do so (i.e. running performance tests on various input sizes for every algorithm),
- the breakpoints might only occur at small or medium input sizes, where there isn't a big enough performance gain to justify the additional implementation work,
- it just isn’t worth it in general, and is only used in applications where performance is crucial (similar to how some numerical algorithms use a different method to solve a problem based on the properties of a matrix, like symmetry, tridiagonality,…),
- input size isn’t the only factor affecting an algorithm’s performance.

I’m familiar with Landau/Big O notation, so feel free to use it in your answers.


Author: cliesens