Deep Learning to estimate what is beyond the edge


I have an image of some data that is approximately 4,000 x 8,000 pixels. I am interested in finding out whether anyone has used a deep learning algorithm to predict what the image would contain if it extended 100 more pixels in each direction. I imagine the model could be trained on smaller rectangles, and the learned rules then used to extrapolate beyond the given image. Has anyone seen a problem like this (and is there a reference)? Even if not, which deep learning scheme would be best suited for it?


Posted 2017-10-19T17:07:24.217

Reputation: 245



I think the closest problem that has been addressed with deep learning is image inpainting, that is, filling in a blacked-out region of an image:


For instance, this paper: Semantic Image Inpainting with Perceptual and Contextual Losses.

So it is certainly possible to fill in missing information in an image with deep learning.
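To frame border extrapolation as inpainting, one could train on interior crops where a strip around the edge is hidden, so the network learns to fill a border-shaped hole, which is the same geometry as extending an image outward. Here is a minimal sketch of that data preparation, not from the paper; the function name, patch size, and border width are illustrative:

```python
import numpy as np

def make_inpainting_pairs(image, patch=64, border=8, n=16, seed=0):
    """Sample square patches and zero out their outer border.

    Returns (masked, target) arrays: a network would learn to
    reconstruct `target` from `masked`, i.e. to fill in a
    border-shaped hole of width `border` pixels.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    masked, target = [], []
    for _ in range(n):
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        crop = image[y:y + patch, x:x + patch].astype(np.float32)
        hole = crop.copy()
        hole[:border, :] = 0.0    # top strip
        hole[-border:, :] = 0.0   # bottom strip
        hole[:, :border] = 0.0    # left strip
        hole[:, -border:] = 0.0   # right strip
        masked.append(hole)
        target.append(crop)
    return np.stack(masked), np.stack(target)

# Toy image standing in for the 4,000 x 8,000 data.
img = np.arange(200 * 300, dtype=np.float32).reshape(200, 300)
X, Y = make_inpainting_pairs(img)
print(X.shape, Y.shape)  # (16, 64, 64) (16, 64, 64)
```

At inference time you would pad the real image by 100 pixels, mark the padding as the hole, and let the trained model fill it.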


Posted 2017-10-19T17:07:24.217

Reputation: 10 494

Ah, good catch. This looks close enough to what I was looking for. Thanks! – Paul – 2017-10-20T13:54:49.793


There are quite a few papers on predicting the next frame in a video sequence, so I would familiarize yourself with those first.

That being said, it is definitely possible to do this sort of thing using ML. There has been a lot of work on recurrent layers for convolutional neural networks; at a high level, these seem like a good candidate for your initial architectures.

Here is some info on RCNNs:

Example RCNN in keras: link

Papers on RCNNs: link link
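To connect next-frame prediction to this problem, one option is to treat the image as a "video" of vertical strips and train a recurrent model to predict the strip that follows a sequence. A sketch of how that sequence dataset could be built (strip width and sequence length are illustrative choices, not from the papers above):

```python
import numpy as np

def make_strip_sequences(image, strip=10, seq_len=5):
    """Slice the image into vertical strips and build
    (sequence of strips, next strip) training pairs -- the
    same shape of problem as next-frame prediction in video.
    """
    h, w = image.shape
    n_strips = w // strip
    strips = [image[:, i * strip:(i + 1) * strip] for i in range(n_strips)]
    X, y = [], []
    for i in range(n_strips - seq_len):
        X.append(np.stack(strips[i:i + seq_len]))  # input sequence
        y.append(strips[i + seq_len])              # strip to predict
    return np.stack(X), np.stack(y)

# Toy image; the real data would be the 4,000 x 8,000 image.
img = np.arange(40 * 120, dtype=np.float32).reshape(40, 120)
X, y = make_strip_sequences(img)
print(X.shape, y.shape)  # (7, 5, 40, 10) (7, 40, 10)
```

These pairs could feed a recurrent-convolutional model; extending the image by 100 pixels would then amount to predicting ten more 10-pixel strips autoregressively.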


Posted 2017-10-19T17:07:24.217

Reputation: 149

I've looked at algorithms for next-image prediction, and I don't think the continuity assumptions are quite the same. The inpainting in the other answer is much closer. The other references are definitely interesting, though. Thanks! – Paul – 2017-10-20T13:56:30.433