Archive.fm

The Bold Blueprint Podcast

The Bold Blueprint with Avideh Zakhor

Their encouragement can help you stay focused and uplifted, especially during challenging times.

Broadcast on:
09 Oct 2024
Audio Format:
other

There are a few announcements. Pick up your solutions on the way up, and pick up your graded homework. Then, on the web, look for homework number eight, which is really a lab; you can do it right after class. It's on restoration, and it's not due until April 21, not because it's a hard homework but because you're also doing projects and we can afford to do that. Okay, and the last announcement... I guess those are the two things I have to talk about. How did the lecture go on Wednesday? Any questions or comments? Just out of curiosity, how are the two multi-student project groups doing? I guess there's one group with Mary and Sergei; you guys are in one group. How are you doing? Making progress? Okay, and the other group, did you guys hook up with each other too?
Right, but did you team up with the other people in the group in the class, or are you going to be solo? Who's David? David Lane. Okay, so you and him are together in one team. I encourage teaming up, the simple reason being that you get more done and you don't each reinvent the wheel. So you're going to be meeting with Slav to get some material. You might also want to meet with Professor Malek, simply because he's very intent on participating in that competition and he might like to have some help with some aspect of it. It's perfectly legitimate, as far as I'm concerned, if out of the 39 classifiers (not features) that he wants to build, you choose five, or two, or some aspect of it; just try not to reinvent the wheel. If you can get a team together, that's good. The other team I don't see any of: Gailin is not here, Vijay is not here, and I forget the third person on that team. Oh, Eric Badenberg, right, and he's outside; they might be watching the video later.

Okay, so what we're going to talk about today is image restoration, and in particular deblurring. This is probably the last lecture on restoration, although I do want to talk about homomorphic restoration, which will probably take ten or twenty minutes next lecture; I'm pretty sure it won't fit into this one. Just so you know the lay of the land: after restoration we're going to switch to compression, initially talking a little about the basics of compression, and then about image compression using various techniques like DCTs and wavelets, as well as video. Are there any questions I should answer before we get going?
Okay, so just to recap where we are. This is image restoration, and what we have been talking about over the last few lectures has been the Wiener filter. In particular, we've been looking at the situation where an original signal F somehow gets corrupted by noise W and you observe G. This is what you see; it's, for example, the output images from your Hubble telescope. You want to process G with some sort of linear shift-invariant (LSI) system to get a new signal F-hat, in such a way that F and F-hat are as close as possible. In particular, we try to minimize the expected value of the square of the error between F and F-hat; mathematically, that's the expected value of (F minus F-hat) squared. What we showed is that the LSI filter that minimizes this mean squared error is the Wiener filter, of the form

H(omega1, omega2) = P_F(omega1, omega2) / (P_F(omega1, omega2) + P_W(omega1, omega2)),

where P_F is the power spectral density of the original signal F and P_W is the noise power spectral density; this H is the frequency response of our LSI system. In a nutshell, what we discovered is that if you apply the Wiener filter globally to the image, then because the image is not stationary you can get excessive blurring. On the other hand, as I hope Mingo showed in last time's lecture, if you do local processing, computing means, variances, and other local statistics of the signal, then you can avoid that excessive blurring; an adaptive Wiener filter minimizes excessive blurring.

Today I'm going to talk about a slight variation of that problem: the more general version, where you start with a signal F(n1, n2) and it passes, unwantedly, through some LSI system. This LSI system is a degradation. The filter above was our own artifact; we built it to process G in order to restore F. This one, by contrast, is a natural degradation out of our control. It could be atmospheric turbulence, it could be motion blur, it could be out-of-focus blur, it could be anything. It has an impulse response d(n1, n2), then some noise is added, and now we observe G. So the question we want to ask is: what do we do to G to get an F-hat that comes close to F? This is a slightly more general problem. For a moment, assume no noise was added: the signal has just gone through a system and you observe the output. I believe we briefly talked about how we undo this problem.
If a signal goes through a linear shift-invariant system and we observe G, how do we recover F? What's the best way of doing it? Yes, the inverse filter. We talked about that already, and we also talked about the fact that if the filter has zeros in the frequency domain, it annihilates the frequencies of F at which the filter goes to zero, and therefore G will have value zero at those same frequencies. There is no way on earth, by looking at G, to figure out what the values of F at those frequencies were. But there are ways to treat that; for example, you can make numerical attempts to minimize the effect of those zeros, and I'll talk about that a little bit today. So that's the gist of today's lecture.

One thing I want to say is that even though this linear shift-invariant modeling is reasonable and applies to a great many practical situations, it doesn't apply to a large number of other practical situations. LSI modeling doesn't always represent reality. For example, in lithography systems, when you project light through a mask onto a wafer for IC manufacturing, the relationship between the image you get on the wafer and the mask function you are using is a quadruple Hopkins integral. For the optics used in lithography, you can model the optics as completely coherent or as completely incoherent, and both of those use straightforward convolutions; however, in real life, the model that best describes what happens is partially coherent projection of light onto the mask. So here's a light source, here's our mask, here's the wafer, and there's some pattern. If the mask is, say, L-shaped, the pattern you get on the wafer could be a rounded, distorted version of it. The relationship between the image on the wafer and the mask, which I won't write down, is the Hopkins integral, and it's not a straightforward LSI convolution. Nevertheless, one of the big breakthroughs of the late 90s in this field, which Nick Cobb at Berkeley came up with, was to approximate this Hopkins equation with what he calls SOCS, a sum of coherent systems. Basically he said: I'm going to decompose the Hopkins equations, using singular value decomposition, into a series of filters h1, h2, and so on; convolve the mask with each of them; square the magnitude of each answer; and add the results up. Now the image can be related to the mask: if the mask is the input and this is the image on the wafer, it's not exactly a convolution but a sum of convolutions whose answers have been squared. When you do coherent imaging it's just a convolution; when you do incoherent imaging it's a convolution with one thing squared; but when you do partially coherent imaging you can approximate the quadruple integral using this SVD technique. In the SVD decomposition, only the singular vectors with large singular values are important, so only those filters go in, and it turns out that the image you get this way is very close to the image you get from the actual quadruple integral. It was this discovery that made the calculation of optical images on wafers fast, and it resulted in the whole field of optical proximity correction, which is now a big industry with Mentor Graphics and Cadence and Synopsys all working in it. This is something one of your fellow students, Alan Booth, is intimately familiar with to some extent, working with aspects of this and with SVD.

I don't want to get too distracted in this direction. Coming back to the problem we're trying to solve: even though I've already talked about blur a little, let me talk just a tiny bit more about possible blur functions, and then about the different techniques we can use to undo them. I want to talk about three kinds of blur: motion blur, out-of-focus blur, and atmospheric turbulence, which we have also touched on already. For motion blur, we assume the scene is translating at a constant velocity, which we call v_rel, at an angle of phi radians with respect to the horizontal axis; that's the relative velocity between the camera and the scene. We assume the exposure interval runs from 0 to T_exposure. Under all these assumptions, the length of motion, which we call L, is nothing but L = v_rel times T_exposure.
So from the time the camera shutter opened until it closed there was motion, and L is how much movement there was, in units of length. In this situation, the blur function, the point spread function d(x, y) in the continuous domain, is

d_{L,phi}(x, y) = 1/L  if sqrt(x^2 + y^2) <= L/2 and y/x = -tan(phi),
               = 0    otherwise.

This d depends on L and phi, so I put L and phi underneath to show that it depends on the relative speed, the exposure, and so on. Of course you can discretize this however you like; I won't spend time writing that down, since it's fairly trivial. What I would like to show you is the magnitude of the Fourier transform of this point spread function, D(u, v), for some values of L and phi: for example, L = 7.5 with phi = 0, and L = 7.5 with phi = pi/4. Here u and v are the spatial frequency symbols this author chose; by the way, this figure is from a chapter of the Handbook of Image and Video Processing that Al Bovik published many years ago. It's another hundred-and-fifty-dollar book, which is why I didn't want you to buy it, since we don't use it much, but this chapter in particular I think is very good. So this is the magnitude of the Fourier transform of D, with the u-axis and v-axis as the two spatial frequency axes. Because the motion is along phi = 0, pretty much along the x-axis, you can see that the lines of zeros are perpendicular to the u-axis; u is the spatial frequency corresponding to x, and v is the spatial frequency corresponding to y. If phi becomes pi/4, then in the magnitude of the frequency response the zero lines are at a 45-degree angle to the u and v directions. This might be a good point to show you something else. If you look at the image on the left, this is the observed blurred image you get if the motion is at a 45-degree angle with L = 7.5. Many times the only thing you have to look at is G, so if you didn't know the parameters of your motion blur, you would have to estimate them; by looking at G and seeing these angular features, you can estimate the motion parameters, both L and phi, just by observing G. I'll get to methods of estimating blur at the end of the lecture.
That's one sort of degradation. The other kind of blur, continuing this discussion, is out-of-focus blur. Here

d_R(x, y) = 1/(pi R^2)  if sqrt(x^2 + y^2) <= R,
          = 0           otherwise,

where R is a parameter of the point spread function describing how much out of focus you are. An example is shown here: this is the magnitude of the Fourier transform of that d. Because in the spatial domain it's a circularly symmetric disk, in the frequency domain it's a Bessel-type function, which we saw at the beginning of the semester. The discrete version, if you discretize it, is d_R(n1, n2) = 1/C for some constant C if sqrt(n1^2 + n2^2) <= R, and zero otherwise; that's relatively easy to discretize. Finally, the last blur function I want to talk about is atmospheric blur, which we already talked about and even showed some examples of. For that,

d(x, y) = C exp( -(x^2 + y^2) / (2 sigma_g^2) ),

where C is some constant; as you change the value of sigma_g, you change the amount of blur you introduce. So these are some common blur functions, and this last figure is the magnitude of the Fourier transform of the Gaussian blur function.
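The three blur kernels above (motion, out-of-focus disk, and Gaussian/atmospheric) are easy to prototype. Here is a minimal NumPy sketch, not from the lecture itself; the function names, grid choices, and discretization are mine:

```python
import numpy as np

def motion_blur_psf(size, L, phi):
    """Discrete sketch of the motion-blur PSF d_{L,phi}: a normalized
    line of length L (pixels) at angle phi (radians), following the
    y/x = -tan(phi) convention (image y-axis pointing down)."""
    psf = np.zeros((size, size))
    c = size // 2
    n = int(np.ceil(L))
    for t in np.linspace(-L / 2, L / 2, 4 * n + 1):
        col = int(np.rint(c + t * np.cos(phi)))
        row = int(np.rint(c - t * np.sin(phi)))
        psf[row, col] = 1.0
    return psf / psf.sum()

def defocus_psf(size, R):
    """Out-of-focus (disk) PSF: constant inside radius R, 0 outside,
    normalized to unit sum (the continuous value 1/(pi R^2))."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = (x**2 + y**2 <= R**2).astype(float)
    return psf / psf.sum()

def gaussian_psf(size, sigma):
    """Atmospheric-turbulence PSF: exp(-(x^2+y^2)/(2 sigma^2)), normalized."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()
```

Viewing `np.abs(np.fft.fft2(psf))` for each kernel reproduces the frequency-domain patterns discussed above: zero lines perpendicular to the motion direction, Bessel-like rings for the disk, and a smooth Gaussian for atmospheric blur.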
Okay, so now what I want to talk about is techniques for deconvolution. It's called deconvolution because the signal got convolved with something inadvertently and you are deconvolving it. By the way, deconvolution is a pretty old field; it got a huge boost in the 70s during the oil crisis, when oil companies were using deconvolution techniques to reconstruct the layers of the earth and figure out where to drill for oil. We're going to consider two cases: case one, where the blur function is known, and case two, where it is unknown; the latter is called blind deconvolution. Today I'll mostly talk about the case where we know the blur function. Given these two classes, I'm going to propose three classes of restoration/deconvolution algorithms: one is inverse filtering, which we've already talked about a little; the second is least squares filters, which covers the Wiener filter and constrained least squares (CLS); and the third is iterative filters.

Let me begin with inverse filtering. The basic idea is that your signal has gone through this blur function d, and you want to undo that. So you look for an inverse function such that convolving it with d gives a delta; in the frequency domain, the Fourier transform of the inverse filter is

H_inv(omega1, omega2) = 1 / D(omega1, omega2),

so that the product H_inv(omega1, omega2) D(omega1, omega2) = 1. There are really two problems with this method. One, as we discussed last time, is noise: because D is most of the time a low-pass filter, H_inv most of the time ends up being a high-pass filter, and as a result it tends to accentuate noise. Secondly, as we've discussed a few times already, at the places where D(omega1, omega2) tends to zero, everything goes bad; H_inv essentially blows up. One way to deal with that is to say

H_inv(omega1, omega2) = 1/D(omega1, omega2)  if |1/D(omega1, omega2)| < gamma,
                      = gamma                otherwise,

so 1/D is used only where it is not too large, and the response is capped at the threshold gamma elsewhere. This way H_inv doesn't blow up: if |D| has zeros, then before you even get to the points where |D| becomes too small, the cap kicks in and flattens the response at gamma. All the tight lobes are gone, and you're just doing a mild high-pass.

To show an example of inverse filtering not working: this is the typical cameraman picture we've seen in lots of examples (it's at MIT). Here an out-of-focus blur plus some noise has been applied to it, so you can't see the eyes very well; even the picture on the paper in front of me is blurred. This is the signal you get if you just apply inverse filtering: just garbage. What you see here is the magnitude of the Fourier transform of the restored signal, and this is the magnitude of the frequency response of the inverse filter. Without playing this game of capping at gamma, the inverse filter just blows up; these are very high values, and you see the same sort of thing in the restored signal. These lines, by the way, correspond to lines already in the image, so that's not the problem; the problem is all the zeros of D, and that's what completely messes up the inverse filter. Generally speaking, inverse filtering by itself, without fixing or massaging it, is just not going to work very well.

Let me show you again on the computer what I think I showed in a previous class. Here is an original picture that has been blurred with some noise added, and the same blur with different amounts of noise: the noise is highest here, a little lower here, and somewhat lower in this case. These are the three signals you get by just applying inverse filtering: garbage, garbage, and, well, you can almost see what the thing is, but it's mostly garbage. This just shows that inverse filtering, without taking care of the noise and the accentuation of high frequencies, is a disaster. The next scheme I'm going to show is Wiener filtering, which results in these images, and as a preview, the scheme after that, constrained least squares, results in these images, which are considerably better than even Wiener filtering. (I'm drawing here from Figure 5.29 of Gonzalez and Woods, which shows the inverse filtering process; it just doesn't work very well.)

So let's move on to something more sophisticated: least squares filters. The problem is that you start with F, it goes through the undesired distortion function D, noise is added, you get G, and you want to design an LSI filter that minimizes the expected value of (F minus F-hat) squared. I don't want to spend a lot of time going through the derivation, but if you do it like we did a few lectures ago, you find that the Wiener filter that minimizes this mean squared error is

H(omega1, omega2) = D*(omega1, omega2) / ( |D(omega1, omega2)|^2 + P_W(omega1, omega2) / P_F(omega1, omega2) ),

where D* is the complex conjugate of D, P_W is the power spectral density of the noise, and P_F is the power spectral density of F. We can analyze this to death. It has the same problem the traditional Wiener filter had, in the sense that we don't know the power spectral density of F; I'm going to talk about two techniques that help with that, but basically you have to estimate P_F by looking at P_G and making some assumptions about F, D, and the noise.

Let me make a few observations. Observation one: if there is no blurring, D(omega1, omega2) = 1, this reduces to the plain Wiener filter, and that's a good thing; you want the more general case to reduce to the special case. With D = 1, H becomes what we had before, P_F / (P_F + P_W), which should be familiar to all of you. Observation two: as the noise tends to zero, we get the inverse filtering answer,

H(omega1, omega2) = 1 / D(omega1, omega2)  for D(omega1, omega2) != 0,
                  = 0                      otherwise,

which is nothing but the inverse filter. You get that by plugging in P_W = 0: the noise term goes away, a factor of D* cancels, and you're left with 1/D. And there is a third observation, a little less trivial but still soothing and comforting, contrasting the case where the signal energy is higher than the noise energy with the case where the noise energy is higher than the signal energy.
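The Wiener deconvolution filter and the first two observations can be checked numerically. A minimal sketch, assuming per-frequency arrays; the function name is mine, and NumPy is not part of the lecture:

```python
import numpy as np

def wiener_deconv_filter(D, Pw, Pf):
    """Wiener deconvolution filter H = D* / (|D|^2 + Pw/Pf),
    evaluated per frequency bin.  D is the blur frequency response;
    Pw and Pf are the noise and signal power spectral densities
    (arrays or scalars of matching shape)."""
    D = np.asarray(D, dtype=complex)
    return D.conj() / (np.abs(D) ** 2 + np.asarray(Pw) / np.asarray(Pf))
```

With D = 1 this gives P_F / (P_F + P_W), the plain Wiener filter (observation one); as Pw tends to zero it tends to 1/D, the inverse filter (observation two).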
So observation three: for values of omega1 and omega2 where P_W(omega1, omega2) is much, much smaller than P_F(omega1, omega2), meaning the noise is much smaller than the signal, the Wiener filter approaches the inverse filter. Why is that? If P_W is much smaller than P_F, then in the limit the P_W/P_F term disappears, a factor of D* cancels, and you get 1/D, which is the inverse filter. Intuitively, does that make sense? Yes: at those frequencies there is very little noise, so all you have to do is undo the blurring that the LSI degradation filter D did to your signal. However, for omega1 and omega2 where P_W(omega1, omega2) is much larger than P_F(omega1, omega2), in other words where the noise energy is much higher than the signal energy, your focus has to be on getting rid of the noise, and in that case the Wiener filter tends to zero; it becomes a frequency-rejection filter. That also falls right out of the general expression we have. So basically it does all the right things, as we expected it to.
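Observation three says the Wiener filter approaches the inverse filter where the noise is negligible; the gamma-capped inverse filter from the inverse-filtering discussion is the practical form of that limit. A minimal sketch; the phase handling at capped bins is my own choice, since the lecture specifies only the magnitude cap:

```python
import numpy as np

def capped_inverse_filter(D, gamma):
    """Inverse filter 1/D, with the response capped at magnitude gamma
    wherever |1/D| would meet or exceed gamma (including zeros of D)."""
    D = np.asarray(D, dtype=complex)
    H = np.zeros_like(D)
    capped = np.abs(D) * gamma <= 1.0           # |1/D| >= gamma, or D == 0
    H[~capped] = 1.0 / D[~capped]
    nz = capped & (D != 0)
    H[nz] = gamma * D[nz].conj() / np.abs(D[nz])  # magnitude gamma, phase of 1/D
    H[capped & (D == 0)] = gamma
    return H
```

Near the zeros of D the response flattens at gamma instead of blowing up, which is exactly the fix sketched on the board.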
It's a good time not to shove under the rug how to estimate P_F, so let me spend a few minutes talking about a couple of different ways to estimate the power spectral density. The first approach is to say, okay, P_F is nothing but P_G minus P_W, because we're assuming the noise we added is independent of the signal. So if we can somehow come up with the power spectrum of G, and make some assumption about the power spectrum of the noise, we can tell what the power spectrum of F is. And why do we talk about the power spectrum of G? Because G is the thing we actually observe; it's in our hand. So for example you can model the noise W as white noise with variance σ²_W, and in that case

    P_F(ω1, ω2) = P_G(ω1, ω2) − σ²_W.

One way to estimate P_G is what's called the periodogram method. In this case we replace P_G with G(ω1, ω2) times its complex conjugate, so

    P_F(ω1, ω2) = G(ω1, ω2) G*(ω1, ω2) − σ²_W.

The periodogram literally uses the very signal you have in your hand as the estimate of P_G, and it's plagued with its own set of issues. For one thing, P_G is a power spectral density: we're assuming G is one waveform coming from a stationary random process whose autocorrelation function Fourier-transforms to P_G, and using that one sample to estimate P_G has its own limitations. What if that waveform is not representative of its class of signals? That's one argument you could use against it. On the plus side, you could say G is all I've got — it's the only thing I observed from this process.

Another way of doing this is to think about a set of representative images. If you're dealing with natural imagery, for example, you take 100 natural images from the Corel image database and average over them to estimate P_G. If you're dealing with MRI images, which some of you are for the term project, you get 100 MRI images; if you're dealing with PET images or astronomical images, you take 100 or 1,000 of those and average over them.

One of the other ways, besides these two somewhat mundane ones, is a model-based technique. For example, you model your images as a two-dimensional causal autoregressive model. What do I mean by that? You can say that F in general can be written as

    F(n1, n2) = a01 F(n1, n2−1) + a11 F(n1−1, n2−1) + a10 F(n1−1, n2) + V(n1, n2),

where V(n1, n2) is some noise — because if I left the noise term out, it would mean every pixel can be predicted perfectly from its causal neighbors. You have to make some assumption about this noise, so we call it white noise with some variance σ²_V. You might say, well, you still don't have access to F. My answer is: do the modeling on G, and then subtract P_W from P_G to get P_F. There are well-known techniques — the Yule-Walker equations, which I believe are covered in 225A — to estimate these coefficients. (By the way, in 225A did I give any image applications of this stuff? No, it's all one-dimensional.) Anyway, if I apply the Yule-Walker equations to the cameraman image, I get a01 = 0.709, a11 = −0.467, and a10 = 0.729, with σ²_V being 231.

Now, one question you might be asking yourself is: this is good, but how sensitive are these coefficients to different images? If instead of cameraman I had Lena, would these numbers be massively different? The answer is they will be different, but not so massively that you lose complete faith in this technique. So, just for the hell of it, I want to show you this table — zoom in as much as you possibly can to show the numbers. Okay, great. In this column you have the different images — cameraman, Lena, Trevor White, and just white noise — and these are a01, a11, a10, and σ²_V. First of all, let's start with white noise. If you have truly white noise, what do you expect the a's to be? Zero, right? There's no correlation between successive pixels; that's what white noise means. And what do you expect σ²_V to be? Huge — some number like, in this case, 5,470. If you look at cameraman, Lena, and Trevor, the a01's are kind of close, between 0.7 and 0.75 or so; the a11's are all negative, around minus a half or minus 0.3 to minus 0.4; the a10's are between 0.7 and 0.8; and the σ²_V's are between 33 and 230, so there's quite a bit more variation there. But the point I'm trying to make is that these numbers are not drastically different from one image to the other. You don't get a01 = 0.7 for cameraman and 10^8 for Lena; you get numbers that are in the same vicinity of each other. Zoom back, please.
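As a hedged sketch of what that coefficient estimation looks like in practice — this is a plain least-squares fit standing in for the Yule-Walker solver mentioned in lecture, and all the function names are my own:

```python
import numpy as np

def estimate_ar_coeffs(g):
    """Fit the 2-D causal AR model
        f(n1, n2) ~ a01*f(n1, n2-1) + a11*f(n1-1, n2-1) + a10*f(n1-1, n2)
    to an observed image g by ordinary least squares, and return
    the coefficients plus the residual (driving-noise) variance."""
    # Regress each interior pixel on its three causal neighbours.
    y = g[1:, 1:].ravel()
    X = np.stack([
        g[1:, :-1].ravel(),   # f(n1, n2-1)   -> a01
        g[:-1, :-1].ravel(),  # f(n1-1, n2-1) -> a11
        g[:-1, 1:].ravel(),   # f(n1-1, n2)   -> a10
    ], axis=1)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2_v = (y - X @ coeffs).var()
    return coeffs, sigma2_v

def ar_power_spectrum(coeffs, sigma2_v, shape):
    """Power spectrum implied by the fitted AR model:
        P_F = sigma_v^2 / |1 - a01 e^{-jw2} - a11 e^{-j(w1+w2)} - a10 e^{-jw1}|^2"""
    a01, a11, a10 = coeffs
    w1 = 2 * np.pi * np.fft.fftfreq(shape[0])[:, None]
    w2 = 2 * np.pi * np.fft.fftfreq(shape[1])[None, :]
    denom = np.abs(1 - a01 * np.exp(-1j * w2)
                     - a11 * np.exp(-1j * (w1 + w2))
                     - a10 * np.exp(-1j * w1)) ** 2
    return sigma2_v / denom
```

Run on a white-noise image, the fitted coefficients come out near zero and the residual variance near the image variance, exactly as the table in lecture shows.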
So you could use this 2-D autoregressive modeling to determine these parameters. Even if you don't have access to F, you can do it on G, get these coefficients, come up with the power spectral density P_G, and from that get P_F. So for the examples I'm going to show, I used this third technique — the autoregressive model — to estimate my P_F, and then applied the Wiener restoration technique I just talked about. Zoom in as much as you can. Okay, so what you see on the left up here is a Wiener restoration of the original. This picture was the blurred, noisy version of cameraman; here you apply Wiener restoration to it, assuming the autoregressive model for the image and some assumed level for the noise that was added to the signal. In the restoration process we assume three levels of noise and get three results: this one, this one, and this one. I'm pretty sure this is not coming across well — maybe it does a little bit — but I'll tell you what you're supposed to see, and at the end of class you're welcome to come look at it in my copy of the paper. Up here there's no ringing, so the assumed amount of noise is too big here, just the right amount here, and too small down there. What you see is that this one is kind of blurred but doesn't have ringing artifacts; here there's an optimal balance between blur and ringing; and down here it's not blurred at all — it's sharp — but it's got ringing. I didn't have access to the electronic version, so I couldn't put it on the screen, but I think it's still good for you to be aware of it. And this is the magnitude of the Fourier transform of the signal after it's been Wiener filtered; compare that to what we got for inverse filtering, which was garbage, right? This one here. In addition, the improvement in signal-to-noise ratio — and I'll write down what I mean by delta SNR in just one second; I forgot to write it before showing these images — is 3.7 dB here, 8.8 dB here, and 1.1 dB here. So you can actually see quite a bit of improvement, and this one not only looks the best but thankfully also has the highest improvement in signal-to-noise ratio. That's a good sign, because many times in image processing you ask whether signal-to-noise ratio really correlates with what we judge visually, and in this case it does. So before I forget, let me write down this delta SNR expression. What I showed you was figure 6 of the Biemond and Lagendijk paper. Actually — can you put the reference for this paper on the web for the kids? Do you know what it is? Why are you laughing — oh, I called you "the kid"?
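For concreteness, here is a sketch of the restoration step behind those figures — a generic frequency-domain Wiener filter, not the exact code behind the Biemond and Lagendijk results; `wiener_restore` and its arguments are my own naming, and the PSF is assumed zero-padded to the image size with circular boundary handling:

```python
import numpy as np

def wiener_restore(g, d, p_f, sigma2_w):
    """Wiener restoration in the frequency domain:
        H(w1, w2) = D*(w1, w2) P_F(w1, w2) / (|D(w1, w2)|^2 P_F(w1, w2) + sigma_w^2)
    g: degraded image; d: blur PSF zero-padded to g's shape;
    p_f: signal power spectrum (e.g. from the AR model); sigma2_w: assumed white-noise level."""
    G = np.fft.fft2(g)
    D = np.fft.fft2(d)
    H = np.conj(D) * p_f / (np.abs(D) ** 2 * p_f + sigma2_w)
    return np.real(np.fft.ifft2(H * G))
```

The `sigma2_w` argument is exactly the assumed noise level from the figure discussion: overestimate it and the result is smooth but blurry; underestimate it and the result is sharp but ringy.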
Okay — you're all really very young compared to me. Don't look at all the gray here; I forgot to dye my hair these last few months. It's Biemond and Lagendijk; send me an email and I'll send it to you. So: the signal-to-noise ratio for the observed signal G is

    SNR_G = 10 log10( var(F) / var(G − F) )  [dB],

where F is the ideal image. The signal-to-noise ratio of F̂ is much the same thing:

    SNR_F̂ = 10 log10( var(F̂) / var(F̂ − F) )  [dB].

And finally the delta SNR — the change in signal-to-noise ratio — is SNR_F̂ minus SNR_G, which ends up being

    ΔSNR = 10 log10( var(G − F) / var(F̂ − F) )  [dB].

The first is between G and F, the second between F̂ and F. So in figure 6, if you assume the right noise level, the delta SNR is also the highest — we improve the signal-to-noise ratio the most.

In the interest of time I'm going to move on — oh, actually, there's one other equation I forgot to write down. If I model F with the autoregressive form above, then you can show that

    P_F(ω1, ω2) = σ²_V / |1 − a01 e^{−jω2} − a11 e^{−j(ω1+ω2)} − a10 e^{−jω1}|².

And when I say "we assume the right noise level," I don't mean this σ²_V; I mean the additive noise W that the signal got corrupted with. That's the noise that was actually added, and it's the one we may mis-estimate in the reconstruction process. So if you come up with the autoregressive model, you can use this P_F to do things.

Before I leave Wiener filtering, I'd like to switch back to the computer and talk about figure 5.29 of Gonzalez and Woods. Can you switch to the computer, please? Okay, so this is just applying Wiener filtering to a motion-blurred noisy signal where the noise level is highest here, lower here, and lowest there. The thing to walk away with is the coupling between the noise and the blurring: if the noise level is really small, it's very easy to deblur and get rid of it, but when the noise level is high, that's where most deblurring algorithms fail — this is not a very acceptable picture.

Okay, so let me now move on to the constrained least squares filter. This is an alternative, if you will, to the Wiener filter. Once again, let's look at our picture: F passes through an LSI system with impulse response D, we add noise, we get G, and we want to pass G through something that gives us F̂. The basic idea behind the constrained least squares technique is to say: I want to design my F̂ such that if I convolve D with F̂ and subtract G from it, the norm — the energy in that signal — should be approximately equal to the energy of the noise, which we're assuming is white with variance σ²_W. Sorry — I knew I had dropped something: this has to be F̂, so let me track back and say why this is a good idea. The idea is that we want F̂ to be as close as possible to F, and if we plug F̂ back into the system, convolve it with D, and subtract that from G, the amount of energy left over has to equal σ²_W.
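The ΔSNR numbers quoted above come from exactly this last expression; a minimal helper (the function name is mine) would be:

```python
import numpy as np

def delta_snr_db(f, g, f_hat):
    """Change in signal-to-noise ratio from the degraded image g
    to the restoration f_hat, relative to the ideal image f:
        dSNR = SNR(f_hat) - SNR(g) = 10 log10( var(g - f) / var(f_hat - f) )  [dB]"""
    return 10.0 * np.log10(np.var(g - f) / np.var(f_hat - f))
```

Note that it requires the ideal image F, so — as the lecture points out later — it is only computable in synthetic experiments; in real life you tune by eye.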
Ideally, we'd like this to be true if F̂ is to approximate F properly. Now, if I were to just solve this problem — find the F̂ that does that — there aren't enough constraints; there's a multiplicity of solutions and I can't quite nail it down. Why is there a multiplicity of solutions, intuitively? Anybody? Because of D: D is annihilating certain frequencies, so at those frequencies F̂ can be anything, and there are many, many solutions that make this come true simply because D has zeros in the frequency domain. It's an ill-conditioned problem, and one way of solving an ill-conditioned problem is what's called regularization: you add an additional constraint and say, of all the signals that satisfy this, I want to find the best one — where I now define what I mean by "best." In many image processing applications "best" could be the smoothest signal, but that's a matter of major debate; if you look at a thousand signal processing papers you'll probably find a thousand models for what a good-looking image should mean. For the sake of this example, our approach is to find a smooth signal — no high frequencies in it, or the smoothest possible signal — such that this equation, call it (*), is satisfied. So how do I do that? Let me define C(n1, n2) as a high-pass filter. I want to minimize the norm of C convolved with F̂, subject to (*). Why do I want to minimize C convolved with F̂? Whatever F̂ I come out with, after I convolve it with C — which is a high-pass filter — this is the high-pass component of F̂, and I want to minimize the amount of high-pass components because I'm interested in a smooth F̂. If that's what I've decided a good-looking image is, that's what I'm going to do. So it becomes a constrained minimization problem — that's why it's called constrained least squares. I won't go into the mathematics, but if you follow the derivation, what you get — can you roll up, please — is the following filter in the frequency domain:

    H_CLS(ω1, ω2) = D*(ω1, ω2) / ( D*(ω1, ω2) D(ω1, ω2) + α C*(ω1, ω2) C(ω1, ω2) ).

This α is ideally supposed to be chosen to satisfy the (*) equation, but in practice it's called the regularization parameter, and you tune it. In many of these restoration algorithms the ultimate judge is the human being.
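That filter is easy to sketch. Here is a minimal version, assuming circular convolution and the Laplacian high-pass C described just below in lecture; the function name and argument layout are my own:

```python
import numpy as np

def cls_restore(g, d, alpha):
    """Constrained least squares restoration:
        H_CLS(w) = D*(w) / ( |D(w)|^2 + alpha |C(w)|^2 )
    with C the 2-D Laplacian operator (center tap 4, four -1 neighbours),
    zero-padded to the image size."""
    c = np.zeros(g.shape)
    c[0, 0] = 4.0
    c[0, 1] = c[1, 0] = c[0, -1] = c[-1, 0] = -1.0
    D = np.fft.fft2(d)
    C = np.fft.fft2(c)
    H = np.conj(D) / (np.abs(D) ** 2 + alpha * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(H * np.fft.fft2(g)))
```

With α = 0 this reduces to the inverse filter; increasing α penalizes high frequencies more, trading ringing for blur just as the figures show.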
You can just change α until the image looks good and you're happy with the outcome. The other question you might ask is what kind of high-pass filter C should be. It could be anything, but for example we can use the 2-D Laplacian operator we've talked about — a center tap of 4 surrounded by four −1's — which is a good high-pass filter. So let me now show you the effect this α has on the reconstruction quality. Can you zoom in, please, as much as you can? Okay, so from left to right I changed the value of α from 2×10⁻² to 10⁻⁴ and then 10⁻⁶ — zoom out just a tiny bit — all applied to the same blurry image we had before. As you can see, on the left, when α is on the order of 10⁻², the image is blurry but has no ringing; as I move to the right there's a lot more ringing, but the blur is kind of gone. So the same trade-off we saw before with the assumed noise level happens here with α. Furthermore, if you look at the delta SNR — the change in signal-to-noise ratio — it's 1.7 dB here, 6.9 dB here, and 0.8 dB here. So in all of these techniques you might have to do some fine tuning of α, either visually, to get the right balance between blur and ringing, or by computing the delta SNR. In real life you can't compute delta SNR, because you don't know what the real F is; all you've got is G, so you just change α until you look at it and it's a good compromise. That's kind of how you do it.

I'd like to wrap this up by moving to the computer and showing an example from Gonzalez and Woods — hold on one second — figure 5.30 from Gonzalez and Woods. So this is applying constrained least squares to that same problem with a little bit of noise, a little less noise, and a lot less noise. Unfortunately these two figures aren't quite parallel — the previous one was about the effect of α, this one is about the effect of the noise level, so it's a different thing — but the thing to compare is these three constrained least squares images against the three images the Wiener filter produces. I think that when the noise level is high or medium, the Wiener filter looks worse than constrained least squares — and you tell me if you agree. When the noise level is small they both do very well, but when the noise level is medium or high, in my judgment anyway, these two are better than those two. Let me pause for just a second for any questions before I get into the last technique, which is iterative filtering.

Okay, so let me move on to the last topic of today's class: the final method for fixing these things is iterative filtering. You might be asking why we'd want yet another technique. The motivation is really twofold. One is that when you iterate to solve a problem, you get an image at every iteration, so you can look at it and stop when the image looks good — when there's the right balance between blurring and ringing. So you can actively control the trade-off between ringing and blurring. Another advantage of iterative techniques is that they can incorporate a priori constraints or knowledge about your signal. We already did a bit of that at the beginning of the semester with 2-D Fourier transform reconstructions: you had constraints like "my signal is positive" and "it has this region of support," and — I don't know whether it was part of your homework or whether I just talked about it — you actually implemented
reconstruction algorithms that imposed those constraints, right? So in this case you can also impose them. You could say my image is always positive — for example because the pixel values count the number of photons that hit a sensor during a particular interval of time, and that number is always positive. So those are the positives about iterative algorithms. What's the negative, in general, about anything iterative? Convergence — it can iterate forever and never converge. Okay, so what's the gist of the iterative algorithm? In the (i+1)-st iteration, the recovered signal takes on the answer from the previous iteration plus some correction:

    F̂_{i+1}(n1, n2) = F̂_i(n1, n2) + β [ G(n1, n2) − (D ∗ F̂_i)(n1, n2) ].

Generally speaking, how do you decide how much correction to make? If you're far off from the answer, you make a big correction; if you're close, you don't want to rock the boat too much, so you make a small correction. And how do you decide how far off you are? Subtract D convolved with F̂_i from G: if, after you convolve, the result comes really close to what you observed, you're close — don't change things much. There's also a parameter β that you use to control the rate of convergence: you multiply the correction by this constant and add it to the previous estimate. So what can we say about convergence? Generally speaking, it's much better if you can prove convergence, for the simple reason that you can come up with conditions under which the thing converges. But if you can't, many times you just apply it, and if it converges and results in better images, you can say, well, it works. In this case it has been shown that you get convergence if

    | 1 − β D(ω1, ω2) | < 1  for all ω1, ω2.

If that happens, you get convergence. And regarding D(ω1, ω2), we're primarily concerned with its shape: at the end of the day you want the gain of D to be at most one, because otherwise it's just a scaling factor. So, assuming |D(ω1, ω2)| ≤ 1 for all ω1 and ω2 — a very simple assumption, which just says the filter your signal got degraded by didn't add energy at any frequency — we can say more.
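The update above can be sketched directly; this is a minimal frequency-domain version (my own function name), assuming circular convolution so that the per-frequency convergence condition applies exactly:

```python
import numpy as np

def iterative_restore(g, d, beta, n_iter):
    """Iterate  f_{i+1} = f_i + beta * (g - d * f_i)  (circular convolution),
    carried out per frequency bin. Converges when |1 - beta * D(w)| < 1
    for all frequencies w."""
    G = np.fft.fft2(g)
    D = np.fft.fft2(d)
    F = np.zeros_like(G)          # start from the zero image
    for _ in range(n_iter):
        F = F + beta * (G - D * F)
    return np.real(np.fft.ifft2(F))
```

When the convergence condition holds, the iterates approach the inverse-filter solution, which is why stopping early — before the noise blows up — is the whole point of the method.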
Then we can show that this convergence condition translates into β being between 0 and 2, provided D(ω1, ω2) is larger than zero — not its absolute value, but D itself, the Fourier transform. That condition doesn't necessarily hold in practice: when you have motion blur, or a sinc as your degradation, D becomes negative. But if it does hold, you get this kind of convergence, and in that case what you can show is that the limit as i tends to infinity of F̂_i is D-inverse convolved with G — the inverse filter we talked about earlier. It's a good point now to show figure 9 of the Biemond and Lagendijk paper; there's no example of this in Gonzalez and Woods — they just don't cover it. So you start here: this is after 10 iterations, this after 100 iterations, this after 500 iterations, and this after 5,000 iterations. In my copy of the paper you can really see the ringing that occurs here: this is blurry, this is much sharper, but there's a bunch of ringing, especially in the background. The beauty of the iterative algorithm is that you can stop it at the point where you think you've reached a good compromise between the two.

Let me wrap up today's lecture — even though we have just one minute left — with how we incorporate a priori knowledge into this iterative algorithm. Suppose we know the image is positive. Then you can apply what's called a projection operator, which enforces the positivity constraint:

    P[F̂](n1, n2) = F̂(n1, n2)  if F̂(n1, n2) > 0,  and 0 otherwise.

So if F̂ becomes negative, I just set it to zero. In that case my iterations become

    F̂_{i+1} = P[ F̂_i + β ( G − D ∗ F̂_i ) ].

And I can do multiple projections — P1 followed by P2 followed by P3, and so on, and then this whole correction. There's a famous result that I feel obligated to talk about, because it's one of the most important results in image processing, and it has to do with POCS — today we talked about SOCKS and POCS; that's history-making. POCS stands for projections onto convex sets. The question you might ask yourself is: under what conditions does this converge? Several people independently came up with conditions under which this iterative algorithm — a series of projections onto convex sets — converges; Youla and Webb at Polytechnic University in New York were the first to show it, in 1982. So here's the main result, and it relates to POCS. To begin with, I have to tell you what a convex set is. A convex set S is a set such that if a is an element of S and b is an element of S, then for all ε between 0 and 1,

    εa + (1 − ε)b

is also a member of S. It means that if I have a convex set, then for any two elements I pick, their linear combination in this way is also in the set. So, for example, is the set of all positive images a convex set? Yes: a positive image means an image in which all the pixels are positive, and if I take any two positive images, then ε times the first plus (1 − ε) times the other is also positive for all ε. So the set of positive images is convex. Now suppose I have a series of convex sets S1, S2, S3 — all convex — and I'm interested in finding an element in the intersection of all of them; we want to find any point in the intersection.
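The positivity projection just defined is a one-liner, and the convexity of the positive-image set can be checked numerically. A small sketch, with my own naming:

```python
import numpy as np

def project_positive(f):
    """Euclidean projection onto the convex set of nonnegative images:
    the closest nonnegative image is obtained by clipping negative
    pixels to zero, leaving everything else untouched."""
    return np.maximum(f, 0.0)
```

Clipping is the closest point because each pixel can be adjusted independently under the Euclidean norm, and the nearest nonnegative value to a negative pixel is zero.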
What the theory of projections onto convex sets says is that you can start from any arbitrary initial condition — anywhere, say here — and project onto S1. Projection here means the following: if I'm at a point x and I want to project onto S1, I find the element of S1 that comes closest to x, according to some metric. Then I go here; then I project onto S3 — I find the element of S3 closest to where I am — and I keep doing that, and eventually I converge to the intersection. So: start at an arbitrary point, keep projecting, and you eventually converge to the intersection. There's also a β parameter there that controls the convergence rate — it also has to be between 0 and 2, by the way — but I'm not writing down the details of the projection theorems; maybe in the next lecture I will. I think this is an important enough result that I'll write it down more formally in Wednesday's lecture. I think I should stop now. Okay — see you all on Wednesday.
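The alternating-projection procedure sketched on the board can be written in a few lines. Everything here — the names and the particular choice of convex sets (nonnegativity and a region-of-support mask) — is illustrative, not from the Youla and Webb paper:

```python
import numpy as np

def pocs(x0, projections, n_iter=50):
    """Projections onto convex sets: cycle through the projection
    operators; when the sets have a nonempty intersection, the
    iterates converge to a point in that intersection."""
    x = x0
    for _ in range(n_iter):
        for project in projections:
            x = project(x)
    return x

# Two convex sets: nonnegative images, and images supported on `mask`.
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0
proj_positive = lambda x: np.maximum(x, 0.0)   # clip negatives to zero
proj_support  = lambda x: x * mask             # zero outside the support
```

With these two particular sets the iterates land in the intersection almost immediately; for less compatible sets (the interesting case) the convergence is gradual, which is exactly what the theorem guarantees.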