There's a textbook waiting to be written at some point, with the working title *Data Structures, Algorithms, and Tradeoffs*. Almost every algorithm or data structure that you're likely to learn at the undergraduate level has some feature that makes it better suited to some applications than to others.

Let's take sorting as an example, since everyone is familiar with the standard sort algorithms.

First off, asymptotic complexity isn't the only concern. In practice, constant factors matter, which is why (say) quick sort tends to be used more than heap sort even though quick sort has a worse (quadratic) worst case.

Secondly, there's always the chance that you find yourself programming under strange constraints. I once had to do quantile extraction from a modest-sized collection of samples (1000 or so) as fast as possible, but it was on a small microcontroller with very little spare read-write memory, which ruled out most of the $O(n \log n)$ sort algorithms, since they need extra working space. Shell sort was the best tradeoff: it's sub-quadratic in practice and requires no additional memory.
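To make that concrete, here is a minimal sketch of the idea in Python (the microcontroller code was obviously not Python; the `quantile` helper and the simple gap-halving sequence are my own illustrative choices, not the original implementation):

```python
def shell_sort(a):
    """In-place Shell sort with a gap-halving sequence.

    Sub-quadratic in practice, and uses O(1) extra memory, which is
    what made it attractive on a RAM-starved microcontroller.
    """
    n = len(a)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: every gap-th element is kept sorted.
        for i in range(gap, n):
            tmp = a[i]
            j = i
            while j >= gap and a[j - gap] > tmp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = tmp
        gap //= 2
    return a


def quantile(samples, q):
    """Hypothetical helper: q-th quantile (0 <= q <= 1) of a sample
    buffer, by sorting it in place and indexing into it."""
    shell_sort(samples)
    return samples[int(q * (len(samples) - 1))]
```

The whole thing touches only the sample buffer itself, which is the point: no merge buffer, no recursion stack of unpredictable depth.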

In other cases, ideas from an algorithm or data structure might be applicable to a special-purpose problem. Bubble sort seems to always be slower than insertion sort on real hardware, but the idea of performing a single bubble pass is sometimes exactly what you need.

Consider, for example, a 3D visualisation or video game on a modern video card, where you'd like to draw objects in order from closest-to-the-camera to furthest-from-the-camera for performance reasons, but where the hardware will take care of correctness if the order isn't exact. If you're moving around the 3D environment, the relative order of objects won't change much between frames, so performing one bubble pass every frame can be a reasonable tradeoff. (Valve's Source engine does this for particle effects.)
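A single bubble pass is a few lines; this sketch (in Python, with a hypothetical `key` function standing in for distance-to-camera) shows why it suits the nearly-sorted, once-per-frame case:

```python
def bubble_pass(objs, key):
    """One bubble pass: swap adjacent out-of-order pairs, once.

    O(n) per call. A list that is already nearly sorted (as object
    depths are between consecutive frames) converges to sorted order
    over a few frames, without paying O(n log n) every frame.
    """
    for i in range(len(objs) - 1):
        if key(objs[i]) > key(objs[i + 1]):
            objs[i], objs[i + 1] = objs[i + 1], objs[i]
    return objs
```

Note that one pass does not guarantee a fully sorted list; each out-of-place element moves at most one slot per frame toward its correct position, which is exactly the "approximately right, hardware fixes the rest" tradeoff described above.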

There are persistence, concurrency, cache locality, scalability onto a cluster/cloud, and a host of other possible reasons why one data structure or algorithm may be more appropriate than another, even given the same computational complexity for the operations that you care about.

Having said that, it doesn't follow that you should memorise a bunch of algorithms and data structures just in case. Most of the battle is realising that there is a tradeoff to be exploited in the first place, and knowing where to look if you think there might be something appropriate.

As I always say: there (usually) is no "best". Once you define explicitly what you mean by "better", the answer becomes obvious. – Raphael – 2016-02-17T11:12:25.507

This is a good question, but it speaks to what I would consider a hole in your education that you might look into correcting: practical experience. If you haven't actually written these algorithms during your education, you might consider writing them now; I suspect the answer to this question would have quickly become obvious as you tried to find uses for them. – Sam – 2016-02-18T16:52:28.827

@Sam From my experience, lectures and textbooks are informative: they introduce many algorithms, their analysis, and so on, but not many practical cases or sample scenarios where A will outplay B. They may cover a genre of algorithms from A to Z, with some homework problems, but to me those can all be solved by A alone, or by Z alone, etc., hence the question. – shole – 2016-02-19T03:37:08.507

If you insist on leaving academic interest aside, the best practical reason to learn less-than-optimal algorithms is so you can recognize them for what they are and optimize them by refactoring to the optimal ones. You can't upgrade a bow and arrow to a gun if you don't know what a bow and arrow are even for. – CandiedOrange – 2016-02-21T14:26:40.363

We've actually proposed a StackExchange site to specifically help with CS education questions like this one. Come support us here: http://area51.stackexchange.com/proposals/92460/computer-science-educators?referrer=9Z3MnermjDx7JWcMHelYkQ2 – vk2015 – 2016-06-09T14:44:35.770