cross-posted from: https://programming.dev/post/6660679

It’s about asking, “how does this algorithm behave as the number of elements grows large, and how does that behavior change when the number of elements is orders of magnitude larger?”

Big O notation is useless for smaller sets of data. Sometimes it’s worse than useless; it’s misleading. This is because Big O is only an estimate of asymptotic behavior. An algorithm that is O(n^2) can be faster than one that’s O(n log n) for smaller sets of data (which contradicts the graph below) if the O(n log n) algorithm has significant constant overhead and doesn’t start behaving as its Big O classification predicts until the input is large enough to amortize that overhead.
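To make that concrete, here is a quick sketch (not from the original post) that times a low-overhead O(n^2) insertion sort against an O(n log n) merge sort on a small input. On many machines the quadratic algorithm wins at this size, though exact timings vary, so no specific numbers are claimed:

```python
import random
import timeit

def insertion_sort(a):
    """O(n^2) worst case, but with very little constant overhead."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n), but recursion and list allocation add overhead."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

if __name__ == "__main__":
    small = [random.randint(0, 100) for _ in range(20)]
    for f in (insertion_sort, merge_sort):
        t = timeit.timeit(lambda: f(small), number=10_000)
        print(f"{f.__name__}: {t:.3f}s for 10,000 runs on 20 elements")
```

The crossover point depends on the implementation; this is why real standard libraries (e.g. Python’s Timsort) switch to insertion sort for small runs.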

#computerscience

Image Alt Text:

"A graph of Big O notation time complexity functions with Number of Elements on the x-axis and Operations (Time) on the y-axis.

Lines on the graph represent Big O functions which are overlaid onto color-coded regions, where colors represent quality from Excellent to Horrible.

Functions on the graph:
O(1): constant - Excellent/Best - Green
O(log n): logarithmic - Good/Excellent - Green
O(n): linear time - Fair - Yellow
O(n * log n): log linear - Bad - Orange
O(n^2): quadratic - Horrible - Red
O(n^3): cubic - Horrible (Not shown)
O(2^n): exponential - Horrible - Red
O(n!): factorial - Horrible/Worst - Red"
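The growth rates listed in the alt text can be made concrete by tabulating approximate operation counts for a few input sizes (a quick sketch, not part of the original image):

```python
import math

# Map each Big O class from the graph to its growth function.
funcs = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(n^3)":     lambda n: n ** 3,
    "O(2^n)":     lambda n: 2 ** n,
    "O(n!)":      lambda n: math.factorial(n),
}

# Print rounded operation counts for n = 4, 8, 16.
for name, f in funcs.items():
    print(f"{name:>10}:", [round(f(n)) for n in (4, 8, 16)])
```

Even at n = 16, the factorial and exponential rows dwarf everything else, which is what the red region of the graph is conveying.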

Source

  • HarkMahlberg@kbin.social · 11 months ago

    Aren’t the most commonly accepted sorting algorithms O(n log n)? Quicksort? Mergesort? Those are considered bad?

    • Cawifre@beehaw.org · 11 months ago

      I mean, it is entirely reasonable that “bad” is the best performance you can hope for while sorting an entire set of generally comparable items.

      If you can abuse special knowledge about the data being sorted then you can get better performance with things like radix sort, but in general it just takes a lot of work to compare all the items, even if you are clever about avoiding wasted effort.
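As an illustration of that "special knowledge" (a sketch, not from the comment itself): a least-significant-digit radix sort assumes the keys are non-negative integers of bounded width, and under that assumption it sidesteps the O(n log n) comparison bound:

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers.

    Runs in O(d * (n + base)), where d is the number of digits in
    the largest key -- effectively linear when d is bounded.
    """
    if not nums:
        return []
    nums = list(nums)
    max_val = max(nums)
    exp = 1
    while max_val // exp > 0:
        # Stable bucket pass on the current digit.
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // exp) % base].append(x)
        nums = [x for bucket in buckets for x in bucket]
        exp *= base
    return nums
```

The catch is exactly the one the comment describes: this only works because we assumed fixed-width integer keys, not "generally comparable items".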

    • Kajo [he/him] 🌈@beehaw.org · 11 months ago

      Yeah, you’re right, it doesn’t make sense to say that O(f(n)) is good or bad for any algorithm. It must be compared to the complexity of other algorithms which solve the same problem in the same conditions.

    • duncesplayed@lemmy.one · 11 months ago

      I mean…yeah. Just because something is provably the best possible thing, doesn’t mean it’s good. Sorting should be avoided if at all possible. (And in many cases, such as with numbers, you can do better than comparison-based sorts)
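For the "with numbers, you can do better" point, a counting sort is the simplest example (a sketch, assuming integer keys in a known small range):

```python
def counting_sort(nums, max_val):
    """O(n + k) sort for integers in [0, max_val].

    Beats the Omega(n log n) lower bound for comparison sorts by
    never comparing elements -- it only tallies occurrences.
    """
    counts = [0] * (max_val + 1)
    for x in nums:
        counts[x] += 1
    # Emit each value as many times as it was counted.
    return [v for v, c in enumerate(counts) for _ in range(c)]
```

The trade-off is the O(k) memory for the count array, which is why this only pays off when the key range k is comparable to n.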

    • ericjmorey@programming.dev (OP) · 11 months ago

      The labels are from the perspective of the space of all possible functions from input size to operation count, so they don’t apply to any particular problem an algorithm is attempting to solve (for a given problem, the space of achievable functions is often smaller).