[Edit: I'm deprecating this answer, as I believe I now have a better one posted, but I'm leaving it here since it has attracted very useful comments. What's said here isn't quite wrong, but it has been considerably improved upon, and can be more so as the community gains experience.
In particular, Railgun has turned out to be more limited than I thought (at least as it can be configured at present), though for good reason: it is a great help with responsiveness out at the edge, but it can only be as fast as the origin server itself. That origin speed can be obtained in degrees, by using Craft's caching at the appropriate levels, and eventually by Varnish if you need the ultimate available. Each of these improvements will also proportionately reduce load (and expense) on the origin server, while naturally increasing the precision required of the configuration. Getting something basic in place first would give you room to work out such details on a staging server.]
My thought on the original question was to wonder how often Varnish would actually be needed. It turns out that each type of caching can answer a given situation on its own, but they can also contribute when stacked, up to the point where Varnish may be needed in addition.
First, we have Craft's controllable, auto-breaking template caching. Its database or filesystem storage is often effective enough on its own, especially with modern hosting's SSD or SSD-cached disks; if you need more than that, it can engage memory-store methods like Memcached or Redis.
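As a concrete sketch of that first layer, Craft's `{% cache %}` template tag wraps the expensive part of a template, and Craft invalidates ("auto-breaks") the cached fragment when any element rendered inside it is saved. The `news` section and the field output here are illustrative, not from the original question:

```twig
{# Cache the rendered list; Craft clears this fragment automatically
   when an entry inside it changes, so it stays safe to cache. #}
{% cache %}
    {% for entry in craft.entries.section('news').limit(10) %}
        <article>{{ entry.title }}</article>
    {% endfor %}
{% endcache %}
```

Which backing store this fragment lands in (file, database, Memcached, and so on) is governed by Craft's cache-method configuration, so the template itself doesn't change as you move between them.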
Beyond that, though, are the abilities of Railgun, a new addition at CloudFlare. This gives you edge caching even on pages that include dynamic changes. It does this by sending only deltas from your origin, and holding your cache on edge nodes near your clients, in parallel with the static edge caching CloudFlare began with. There are competitors, for example Incapsula.
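To make the delta idea concrete: since most of a dynamic page is identical between requests, only the changed spans need to cross the origin-to-edge link. This is not Railgun's actual protocol, just a minimal sketch of delta encoding using Python's standard `difflib`, with two hypothetical page bodies:

```python
import difflib

# Hypothetical page bodies: the copy cached at the edge and the fresh
# copy at the origin. Only the greeting differs between them.
cached = "<html><body><p>Hello, Alice</p><p>Unchanged footer</p></body></html>"
fresh = "<html><body><p>Hello, Bob</p><p>Unchanged footer</p></body></html>"

# Opcodes describe how to turn `cached` into `fresh`; the "equal" spans
# never need to be resent, only the replaced/inserted spans do.
matcher = difflib.SequenceMatcher(None, cached, fresh)
delta_bytes = sum(
    j2 - j1
    for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    if tag != "equal"
)
print(delta_bytes, "of", len(fresh), "bytes actually need to travel")
```

The win grows with page size: the bigger the unchanged boilerplate, the smaller the fraction of the page that has to make the slow origin round trip.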
Railgun has apparently been effective in small early trials here, but for high-traffic sites it will also need load reduction and response-speed improvements at the origin server, so let me conclude here and suggest discussing those in the fresh article.