I have been exploring image/video compression using machine learning, and I discovered that autoencoders are used very frequently for this sort of thing. So I wanted to ask:
- How fast are autoencoders? I need something that can compress an image in milliseconds.
- How resource-intensive are they? I am not talking about training, but rather deployment (inference). Could one run fast enough to compress video on a Mi phone (a Note 8, maybe)?
- Do you know of any particularly new and interesting AI research that enables doing this quickly and efficiently?
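For context on the speed question, here is a minimal sketch of how one might measure per-image encode latency. This is NOT a trained or realistic compression model, just an untrained fully-connected autoencoder forward pass in NumPy; the image size (64x64) and bottleneck width (256) are illustrative assumptions:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64x64 grayscale image flattened to 4096 dims,
# compressed to a 256-dim bottleneck code. Weights are random (untrained).
W_enc = (rng.standard_normal((4096, 256)) * 0.01).astype(np.float32)
W_dec = (rng.standard_normal((256, 4096)) * 0.01).astype(np.float32)

def encode(x):
    # ReLU bottleneck: the 256 activations are the "compressed" code
    return np.maximum(x @ W_enc, 0.0)

def decode(z):
    # Linear reconstruction back to 4096 pixel values
    return z @ W_dec

x = rng.standard_normal(4096).astype(np.float32)

# Average the encode time over many runs for a stable latency estimate
t0 = time.perf_counter()
for _ in range(100):
    z = encode(x)
elapsed_ms = (time.perf_counter() - t0) / 100 * 1e3
print(f"avg encode latency: {elapsed_ms:.3f} ms, code size: {z.size} floats")
```

On a desktop CPU a tiny model like this runs well under a millisecond per image; real learned-compression networks are convolutional and much deeper, so the same measurement on the actual model and target hardware is what matters.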