Convolutional filters: create new ones


I'm studying for a Master's degree in Artificial Intelligence, and my final project is about Convolutional Neural Networks.

I was looking for information about the filters (or kernels) in the convolutional layers. I have found the article "Lode's Computer Graphics Tutorial - Image Filtering", but I need more.

Do you know of more resources about filters (ones that are known to work) and about how to create new ones?

In other words, I want to know how they work and how I can create new ones.

I've thought about writing a C++ program, or using Octave, to test the new kernels.

By the way, my research will focus on image segmentation for processing MRIs.

VansFannel

Posted 2019-11-20T07:50:02.870

Reputation: 523

Answers


I'd suggest you first get comfortable with classic edge detectors such as the Roberts or Sobel operators, to better understand how the convolution operation on images extracts features using fixed-value kernels.
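To make that concrete, here is a minimal C++ sketch (the image is just a hard-coded 2D array, and the function name is my own, not from any library) that slides the fixed 3x3 Sobel-x kernel over a tiny grayscale image, CNN-style (i.e. cross-correlation, without flipping the kernel), and prints the resulting feature map:

```cpp
#include <cstdio>
#include <vector>

using Image = std::vector<std::vector<float>>;

// "Valid" convolution (CNN-style cross-correlation) of a grayscale image
// with a 3x3 kernel: the output is 2 pixels smaller in each dimension.
Image convolve3x3(const Image& img, const float k[3][3]) {
    const int h = img.size(), w = img[0].size();
    Image out(h - 2, std::vector<float>(w - 2, 0.0f));
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            float acc = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += k[dy + 1][dx + 1] * img[y + dy][x + dx];
            out[y - 1][x - 1] = acc;
        }
    return out;
}

int main() {
    // Sobel-x kernel: responds strongly to vertical edges.
    const float sobelX[3][3] = {{-1, 0, 1},
                                {-2, 0, 2},
                                {-1, 0, 1}};
    // Toy 5x5 image: dark on the left, bright on the right (one vertical edge).
    Image img = {{0, 0, 1, 1, 1},
                 {0, 0, 1, 1, 1},
                 {0, 0, 1, 1, 1},
                 {0, 0, 1, 1, 1},
                 {0, 0, 1, 1, 1}};
    for (const auto& row : convolve3x3(img, sobelX)) {
        for (float v : row) std::printf("%5.1f ", v);
        std::printf("\n");
    }
    return 0;
}
```

The response is largest at the column where the dark-to-bright transition sits, which is exactly the vertical-edge feature this kernel is designed to pick up.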

I'd personally recommend Gonzalez and Woods (Digital Image Processing) for this, as it gives a purely mathematical explanation of how and why these features are extracted.

Essentially, the convolution kernels used in CNNs are the same kind of kernels, except that their values are not fixed by hand but learned.
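As a toy illustration of what "learned" means, here is a hypothetical C++ sketch (my own example, not how any real framework implements it): it fits the 9 values of a 3x3 kernel by plain gradient descent so that its output on random images matches the output of a fixed Sobel-x "teacher" kernel, under a squared-error loss. A real CNN performs the same kind of weight update via backpropagation, only through many layers and driven by the task loss (e.g. segmentation error) rather than a teacher kernel.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const int H = 8, W = 8;                // size of the toy training images
    // Fixed "teacher" kernel (Sobel-x); the learner never sees these values,
    // only the teacher's outputs.
    const float teacher[9] = {-1, 0, 1,
                              -2, 0, 2,
                              -1, 0, 1};
    float w[9] = {0};                      // learnable kernel weights, start at zero
    const float lr = 0.01f;                // learning rate
    std::srand(0);

    for (int step = 0; step < 2000; ++step) {
        // A random grayscale "training image" with pixel values in [0, 1].
        std::vector<float> img(H * W);
        for (float& p : img) p = std::rand() / static_cast<float>(RAND_MAX);

        float grad[9] = {0};
        float loss = 0.0f;
        // Slide both kernels over the image ("valid" positions only).
        for (int y = 1; y < H - 1; ++y)
            for (int x = 1; x < W - 1; ++x) {
                float patch[9], pred = 0.0f, target = 0.0f;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int i = (dy + 1) * 3 + (dx + 1);
                        patch[i] = img[(y + dy) * W + (x + dx)];
                        pred   += w[i] * patch[i];
                        target += teacher[i] * patch[i];
                    }
                float err = pred - target;       // d(0.5*err^2)/d(pred)
                loss += 0.5f * err * err;
                for (int i = 0; i < 9; ++i)
                    grad[i] += err * patch[i];   // d(loss)/d(w_i)
            }
        for (int i = 0; i < 9; ++i)              // gradient-descent update
            w[i] -= lr * grad[i];

        if (step % 500 == 0)
            std::printf("step %4d  loss %.4f\n", step, loss);
    }

    std::printf("learned kernel:\n");
    for (int i = 0; i < 9; ++i)
        std::printf("%6.2f%s", w[i], (i % 3 == 2) ? "\n" : " ");
    return 0;
}
```

After a couple of thousand steps the learned values come close to the Sobel-x coefficients, which shows there is nothing magical going on: the kernel values are just parameters pushed around by the gradient of the loss.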

For a better understanding of learned convolution kernels and, quite frankly, almost any idea in deep learning, I'd readily recommend Deep Learning by Goodfellow et al.

ashenoy

Posted 2019-11-20T07:50:02.870

Reputation: 1 194

Thanks for your answer. While the network is training, do the kernels change? – VansFannel – 2019-11-21T07:28:17.993

Yes. That's why they are so useful: a kernel can be learned (via backprop) that produces the feature maps most useful for the problem at hand. – ashenoy – 2019-11-21T08:12:33.723

And... is that a good idea? If I use useful kernels for image segmentation (because I've tested them in a C++ program), why would the network change them? By the way, how does the network learn new useful kernels? It seems like magic. – VansFannel – 2019-11-21T11:00:51.643

The kernels are just one part of the operations you'd perform to get a segmentation map. Why would the network change them? That is exactly what backpropagation does. For example, say you used a 3x3 kernel in a convolution operation: the 9 values that make up the kernel are weights $w_1, w_2, \dots, w_9$, which are learnable parameters of the model and are learned via backpropagation. – ashenoy – 2019-11-21T11:24:19.107