Evaluating Utility of Incremental RAM


The NVIDIA RTX 2080 Ti commands roughly a $500 premium over the RTX 2080 for 3 GB of additional RAM (8 GB -> 11 GB).

What are the relevant questions, and what is the thought process, to determine the incremental improvement from moving from the 8 GB to the 11 GB GPU? Assume this context: image segmentation (people counting) in TensorFlow.

I am inclined to think that the platform affects the RAM evaluation; however, I cannot be certain.
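One way to frame the question is to budget the GPU memory a training run actually needs: weights, gradients, optimizer state, and per-batch activations. The sketch below is a rough back-of-the-envelope model with made-up numbers (the 60M-parameter model, 512x512 inputs, and 256 stored activation channels are all illustrative assumptions, not measurements of any real network):

```python
def training_memory_gb(params_m, batch, h, w, act_c, optimizer_copies=2):
    """Rough fp32 training-memory estimate in GB: weights + gradients
    + optimizer state + stored activations for one batch.
    All inputs are hypothetical placeholders."""
    bytes_per_float = 4
    weights = params_m * 1e6 * bytes_per_float
    grads = weights                                # one gradient per weight
    opt_state = optimizer_copies * weights         # e.g. Adam keeps ~2 extra copies
    activations = batch * h * w * act_c * bytes_per_float
    return (weights + grads + opt_state + activations) / 1e9

# Hypothetical 60M-parameter segmenter, batch of 8, 512x512 inputs:
est = training_memory_gb(params_m=60, batch=8, h=512, w=512, act_c=256)
print(f"{est:.1f} GB")
```

Whether estimates like this land under 8 GB or between 8 GB and 11 GB is what decides if the extra RAM buys anything for your specific model and batch size.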


Thanks for the good responses that sharpen the question. Updates to questions are provided below.

The plan is to train an image segmenter on recorded images and try different algorithms (YOLO, R-CNN, etc.). The end goal is to develop a model that will process a live stream. I am in the planning stage and expect to build (or buy) a GPU deep-learning server.
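For training, the practical effect of extra RAM is usually a larger feasible batch size. A toy model, assuming a fixed memory cost for the model plus optimizer and a per-image activation cost (both numbers below are placeholders, not measured values), shows that the batch size can grow by more than the raw 11/8 capacity ratio because the fixed cost is amortized:

```python
def max_batch(memory_gb, fixed_gb=1.0, per_image_gb=0.25):
    """Largest batch that fits a memory budget, under a toy model:
    a fixed model/optimizer cost plus a per-image activation cost.
    Both cost figures are hypothetical assumptions."""
    return int((memory_gb - fixed_gb) / per_image_gb)

b8 = max_batch(8.0)     # feasible batch on the 8 GB card
b11 = max_batch(11.0)   # feasible batch on the 11 GB card
print(b8, b11, b11 / b8)
```

You can also test this empirically on a single 11 GB card: TensorFlow lets you cap the memory visible to a process, so you can emulate an 8 GB card and see whether your chosen architecture and batch size still fit.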


Posted 2018-10-28T15:29:07.570

Reputation: 218

Are you training an image segmenter, or using an already trained one to process images? If the latter, will you be processing a single live stream, or batches of images? How do you envisage your production system will scale, and do you have any metrics for performance of your system at this stage using any GPU setup? Do you have a specific CNN design that you are working with? – Neil Slater – 2018-10-28T16:01:40.087

It means you can process things significantly faster, if we assume that the bulk of the time is spent moving data between host RAM and GPU memory (which is generally true). So you may get a performance improvement of at most 11/8 times, other things being equal. – DuttaA – 2018-10-28T17:00:02.423
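DuttaA's 11/8 figure is an upper bound; in practice the gain from a larger batch is usually smaller, because per-image compute dominates once batches are reasonably large. A toy throughput model (the 5 ms per image and 20 ms per-batch overhead below are made-up assumptions) illustrates why:

```python
def throughput(batch, per_image_ms=5.0, per_batch_overhead_ms=20.0):
    """Images/sec under a toy cost model: a fixed per-batch overhead
    plus per-image compute. Both timing constants are hypothetical."""
    return 1000.0 * batch / (per_batch_overhead_ms + batch * per_image_ms)

# Growing the batch from 28 to 40 (roughly the 8 GB -> 11 GB jump in the
# toy model above) improves throughput only modestly:
print(throughput(28), throughput(40))
```

Under these assumptions the larger batch gains only a few percent, well below 11/8, which is why measuring your actual workload matters more than the capacity ratio.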

I suggest DuttaA repost the comment as an answer so the community can evaluate and upvote it. I am interpreting Neil Slater's comment as questions to improve my question rather than as an answer; if it was intended as an answer, may I suggest reposting it as one. – gatorback – 2018-10-29T15:02:41.907

No answers