I play around quite a bit with location data and have found examples both where k-means works fine and where k-means is a poor representation and DBSCAN is a great fit.
If you've ever gone hiking or mountain climbing on a day with fog or low cloud cover, there are times when you get to the top of the peak and can only see the surrounding peaks poking up through the clouds. I like to use this analogy when I think of DBSCAN. The density filtering lets you select a threshold: points in regions dense enough to clear it are kept as clusters, and all of the remaining data is filtered out as noise.
Take a look at this Seattle crime incident data. Suppose I want to cluster the data by location to form pseudo-neighborhoods, i.e., neighborhoods roughly defined by the geography of where criminal events tend to occur. This is an example of k-means working just fine in location analytics:
Seattle crime data superimposing a primitive map with k-means clustering
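A minimal sketch of this kind of partitioning, using scikit-learn's KMeans on made-up (lat, lon) blobs standing in for incident locations (the coordinates, blob sizes, and k are all illustrative, not the actual Seattle data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake "incidents": three dense blobs standing in for neighborhoods.
# Coordinates are arbitrary values near Seattle, purely for illustration.
centers = np.array([[47.61, -122.33], [47.67, -122.38], [47.55, -122.30]])
points = np.vstack([c + rng.normal(scale=0.01, size=(200, 2)) for c in centers])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
labels = km.labels_          # every point gets a cluster -- k-means has no noise concept
print(np.bincount(labels))   # cluster sizes: k-means partitions the whole map
```

The key property for the neighborhood use case is that k-means assigns *every* point to some cluster, so the clusters tile the city.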
If you know Seattle at all, you can see that the clusters tend to pick out neighborhoods, and this works quite well for partitioning the city. Now suppose I want to pick out the high-crime areas of Seattle in order to identify the hot spots. No matter how I adjust k, the k-means clustering doesn't really provide any additional insight. But the density filtering in DBSCAN does a wonderful job of identifying high-crime areas:
Seattle crime data superimposing a primitive map with DBSCAN clustering
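And a matching sketch of the hot-spot case with DBSCAN, again on synthetic data: two dense blobs plus diffuse background noise. The `eps` and `min_samples` values are illustrative; for real lat/lon you'd typically convert to radians and use a haversine metric rather than raw degrees.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Two dense "hot spots" plus diffuse background noise (all values made up)
hot = np.vstack([
    [47.60, -122.33] + rng.normal(scale=0.002, size=(150, 2)),
    [47.66, -122.38] + rng.normal(scale=0.002, size=(150, 2)),
])
noise = np.column_stack([rng.uniform(47.5, 47.7, 100),
                         rng.uniform(-122.45, -122.25, 100)])
points = np.vstack([hot, noise])

db = DBSCAN(eps=0.005, min_samples=10).fit(points)
# Label -1 marks points below the density threshold: the "clouds" in the
# fog analogy. Only the dense peaks survive as clusters.
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(n_clusters, "clusters,", (db.labels_ == -1).sum(), "noise points")
```

Unlike k-means, DBSCAN is free to discard the sparse background entirely, which is exactly what you want when the goal is hot spots rather than a partition.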
I think this gives the gist of some strengths that can be exploited using the two algorithms. There's nothing special about the crime data. For what it's worth, I have a tutorial that goes through the analytics of identifying a user's home and work locations from their cell phone GPS pings. This is another case where DBSCAN was very useful, but the DBSCAN-specific parts are buried a ways down in the tutorial.
Hope this helps!