Archive.fm

The Bold Blueprint Podcast

The Bold Blueprint Avideh Zakhor Take Action Consistently

Success is not about grand gestures; it’s about consistent

Broadcast on:
09 Oct 2024
Audio Format:
other

The restoration lab is due not this Friday but next Friday. This Friday I'm out of town, and a substitute — one of my graduate students, different from the one who gave the last lecture — will be giving the lecture. His name is Wei, and it's on the basics of image compression. And there is a possibility — I know I had announced that you'll all be giving presentations on the last day of the semester — there's a possibility that I might have to attend the department retreat on that Friday, in which case either I'll have a substitute teaching that class or I'll come up with a makeup lecture. So at this point it's not quite clear which one it will be, or how many of you will give the presentation, or if anybody will. We might just end up having you give me a ten-page PowerPoint presentation attached to your paper, just so that it's easier for me to go through it. It's a little bit iffy at this point; I don't quite know. On one hand the department doesn't want us to miss lectures, and on the other hand they say you really have to attend the retreat, so we're getting mixed messages, and I have to reconcile all of that to figure out the best strategy. One possibility would be to have a makeup lecture on a Monday, but because it's a presentation day all of you would have to physically be here, and it doesn't make much sense to have a makeup class that everybody has to attend outside the regularly scheduled time. So I'll tell you more about it as we go on. For now we stick to the deadline of having your term papers due on May 15th, which is what I announced in an earlier lecture. Are there any questions or comments? Okay.

So what I'm going to talk about today wraps up the entire discussion on image restoration. We've talked about the main three methods of restoring images when the signal has first been blurred — processed by a linear shift-invariant system — and then had noise added to it. The three methods of recovering it: one is the simple inverse filter; the second is some sort of least-squares filter, either Wiener or constrained least squares; and the third is the iterative techniques we talked about last time. What I'm going to talk about today is really two general topics.
It's a bit of an odds-and-ends lecture, in that we're cleaning up: I went over all my lecture notes and asked, of all the fun things, what have I not covered? This is the last lecture on restoration. So today we're going to talk about deconvolution, which is the same problem as before, but blind deconvolution: all the discussion we had last time assumed that you know the filter by which your signal was degraded, and today we're going to assume we don't know it and come up with ways of estimating it, so that we can then do the deconvolution. That's about half the lecture. For the other half, I'll talk a little bit about what's called homomorphic filtering, which in hindsight — you could debate this — is really an enhancement technique, and arguably we shouldn't be covering it under restoration, but it's six of one versus half a dozen of the other. I think it's an important enough topic that we should talk about it, and then apply it to multiplicative noise, which is the kind of noise you sometimes get with film-grain noise, so you'll see a little bit of that. And that's mostly it; if there's time I will motivate the next lecture for Wei, which is why we worry about compression and so on.

Actually, before I start, I'm curious what the status of the different teams is — we have two teams that are three persons or larger — can you give me a one-sentence progress report on where you are? We have all three representatives of team one here: Mary and Ming and Sergey, you were going to talk — okay, I'm all ears. And we have one representative of team two. Did you get an account on the Millennium cluster? Okay. Have you inverted any matrices yet? Oh, okay, all right, I look forward to it. And while I'm at it, how are you doing, Chris? You're trucking along — you're not doing the time-honored technique of doing it the day before? Okay. And you, Casey — yes, you actually made a lot of progress because of that symposium, that workshop — okay. And you, Robert — pretty well, you're okay. And you're the younger student, right? I forgot your name, but I remember you — okay. And Howard — okay, all right. And Alan I met with yesterday. Okay, I'm glad things are moving. If you need to talk to me, be aggressive — catch me at the end of class, et cetera.
Okay, so we're going to talk about image restoration (I don't like this pen — nor this one — in particular we're going to talk about two topics — oh, this is much better): one of them is deconvolution, in particular blind deconvolution, and the other topic today is homomorphic filtering.

Just to recap from last time's lecture: the problem we've been considering for the last few lectures is that you start with some original signal F; it gets blurred by some filter D, which we assume is a linear shift-invariant blurring function; then some noise W gets added on top of it, and we observe G. What we want to do is pass G through some processing step — this is the restoration filter we're after; restoration operations are algorithms to get F hat — and the goal is to minimize the expected value of the squared error between F and F hat: minimize E[(F − F hat)²]. That's our goal. In this setup, the three algorithms we proposed were, number one, the inverse filter, with all the associated problems it can have if you don't implement it properly, because of the noise issues, the high-frequency issues, the zeros of the blur filter, and so on; then we talked about least-squares filters; and then we talked about iterative techniques. I ended last time actually talking about POCS, projection onto convex sets. I'm not covering that today in the depth I thought I would — there's simply too much to cover and we're a bit behind on the material — so you can read about POCS on your own; the Youla and Webb paper in the IEEE Transactions on Medical Imaging is really the seminal paper to read on that. Regardless, we talked about least-squares filters, and in particular we talked about Wiener and about constrained least squares.

In all of those discussions — if you can roll up a little — the fundamental assumption was what? That we know D, the blurring function: we know what happened to our signal. The question I want to ask today is, how about if we don't know D? That leads to the problem referred to as blind deconvolution — not blind dates, but blind deconvolution.
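For reference, here is the setup just described written out compactly; the Wiener solution in the last line is the standard frequency-domain form, added here for convenience rather than rewritten on the board today.

\[
g(n_1,n_2) = d(n_1,n_2) * f(n_1,n_2) + w(n_1,n_2), \qquad \hat f = \arg\min_{\hat f}\; E\!\left[(f - \hat f)^2\right],
\]
\[
\hat F(\omega_1,\omega_2) = \frac{D^{*}(\omega_1,\omega_2)\,P_f(\omega_1,\omega_2)}{|D(\omega_1,\omega_2)|^{2}\,P_f(\omega_1,\omega_2) + P_w(\omega_1,\omega_2)}\; G(\omega_1,\omega_2),
\]
where \(P_f\) and \(P_w\) are the power spectra of the signal and the noise.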
Blind means you don't know what happened to your system. So there are basically a number of approaches one can use to estimate it. The easiest way of framing the problem is to say: let's first estimate this function D(n1, n2), and then solve the restoration using one of the three techniques — or however many techniques — we've learned before. So basically we do a two-step process: first estimate D, and second use classical (I call it classical, but whatever) restoration techniques, the ones we just talked about. The biggest question is how we do that first step — how do we estimate D? I'm going to talk about a few techniques today that shed some light on this. All of them, on the surface, look a little bit like hand-waving, not so rigorous, but they more or less work, to varying degrees — let's put it that way.

Approach number one: we've already talked about the possible blur functions we encounter — atmospheric blur, motion blur, out-of-focus blur — and all of those blurring functions have a few parameters associated with them. It's true that we might not know the exact D(n1, n2) when the Hubble telescope sends pictures back, but we know it's an out-of-focus type of blur; or if you took a picture while your car was moving, you don't know the exact transfer function or impulse response of the motion, but you know the blur is because of the motion. Once again, the more you exploit the physics or the domain knowledge of the problem, the easier time you have solving it. Because you know the nature of the degradation, you can parameterize it and then estimate the parameters of that degradation. For example, one way to do that is to look at the Fourier transform of G, the thing you observed, and from that try to estimate the parameters of the blur. So approach number one is to estimate the parameters of the blur function knowing what kind of blur we have: for example, atmospheric blur, where we wrote down equations that describe it; or out-of-focus blur, which has that parameter R; or motion blur, where last time we had the parameters L and φ and the velocity, and so on. In particular — and this assumes the noise level is small — if the Fourier transform of your blurring function D has strong zeros, like a sinc function that goes through zero, then you would expect the Fourier transform of G to also have zeros, and from those zeros you can say something about D. So, if you can roll up please: one possible approach is to look at the Fourier transform of G, in particular the zeros of G(ω1, ω2), and from those try to deduce the parameters of the blur function — for example, R in the case of out-of-focus blur, or L and φ in the case of motion blur.
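To make the zero-location idea concrete, here is a small synthetic sketch; the white-noise test image, the blur length, and the threshold rule are invented for illustration — only the principle that the zeros of D show up as near-zeros of G comes from the lecture.

```python
import numpy as np

# A horizontal motion blur of length L has spectral nulls at multiples of N / L,
# so the location of the first null in |G| reveals L.
rng = np.random.default_rng(0)
N, L_true = 256, 8

f = rng.standard_normal((N, N))          # stand-in "original" image
d = np.zeros((N, N))
d[0, :L_true] = 1.0 / L_true             # length-8 horizontal box blur

G = np.fft.fft2(f) * np.fft.fft2(d)      # g = d ** f (circular convolution)
profile = np.abs(G).mean(axis=0)         # average |G| over rows: 1-D profile along the blur axis

threshold = 0.1 * np.median(profile)
k_first = np.argmax(profile[1:N // 2] < threshold) + 1   # first near-zero frequency bin
print("first spectral null at bin", k_first, "-> estimated blur length ~", round(N / k_first))
```

With real images the nulls are shallower (the image spectrum is not flat and there is noise), so in practice one looks for the periodic dark lines in the log-magnitude spectrum, exactly as in the figure discussed next.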
Just so you have an idea about this, let me show you an example. This is the same picture we had last time, by Biemond and Lagendijk, who are both professors at Delft University of Technology in Holland — I don't have the electronic version. What you see here is |G(ω1, ω2)|, the Fourier transform of the observed degraded signal after it's gone through motion blur. Can you see the zeros — I think you can see them on your screen? Yes, it's these lines here; from the locations of these zeros you can try to estimate L and φ. And here is an out-of-focus one: you can see there's a dark ring here, and from this you can estimate the parameter R. Yet another way of doing this is to look at cepstrum techniques — I won't go into the details — but once again, just by observing G you might be able to estimate those parameters.

Another way of dealing with this problem is to apply maximum-likelihood blur estimation techniques, and I'll talk about this briefly. I tried very hard to find example images — there weren't any in the paper I found — so I won't be able to show you images, but I think it's still worth spending a few minutes on the basic technique. ML, maximum likelihood — how many of you have heard of it before, in 225 or 226? Excellent. ML is a technique used for parameter estimation: when you don't have a probability distribution for the parameter you're trying to estimate, you basically say, given the observation, what are the values of the parameters that would maximize the likelihood of that observation? So the basic idea is: given an observation, find the parameters of the model for that observation that maximize the likelihood — and likelihood means probability.

So how do we apply it in this case? What are the parameters we're dealing with? In our situation we observe G — that's definitely our observation — but what are the parameters affecting G that we're trying to find? Remember the problem we're solving: we observe G, we want an estimate of F, and we don't know D. One set of parameters affecting G is D itself, the degradation function. Think of it as a 2-D finite-impulse-response filter — a 2-D FIR filter, remember the beginning half of the course — and if you assume it's got nine taps, those nine taps are nine parameters we're after. And if you're trying to do Wiener filtering and you've come up with some autoregressive model for F, then the parameters of that model for F are also something we'd be interested in estimating. We wrote that down last time — I'll write it again now. We said, look, we can come up with AR models for F: it's one coefficient times the pixel above, plus another coefficient times the pixel to the left, plus another coefficient times the pixel to the upper left, plus some noise V whose variance is σ²_V; those coefficients are the parameters of the model for F, and another parameter is the amount of added noise.

So in our situation — this is for estimating the blurring function; it's not really the blind deconvolution itself yet — the parameters, call them θ, form a set: it consists of the noise variance σ²_W; of D(n1, n2), however many taps you assume; of σ²_V, the noise variance associated with the signal model; and of the a_ij. What do I mean by all of that? The a_ij and σ²_V relate to the autoregressive model for F. If you can roll back, please: we modeled F as

F(n1, n2) = a_01 F(n1, n2 − 1) + a_11 F(n1 − 1, n2 − 1) + a_10 F(n1 − 1, n2) + V(n1, n2),

so the a_ij are these guys — a_01, a_11, a_10 — σ²_V is the variance of V, σ²_W is the variance of the added noise W, and D collects the blur parameters.

So we can apply maximum likelihood to this, and in particular, if you apply the log-likelihood in the spectral domain — in other words in the frequency domain — rather than in the space domain, then after a few steps of derivation, which I'm going to skip, your goal becomes maximizing the log-likelihood function. By the way, one thing I forgot to say, and it's pretty important: we are assuming that V is Gaussian and that W is Gaussian, and the reason we talk about log-likelihoods is that Gaussians have probability density functions of the form e to the something, so when you take the log, the log and the e cancel out and you get expressions you can actually optimize. Because log is a monotonically increasing function, it doesn't matter whether you maximize the likelihood itself or its log; if log weren't monotonically increasing, maximizing the likelihood could give a different answer than maximizing the log-likelihood. So what do we get? We get the function

L(θ) = − Σ over (ω1, ω2) of [ log P(ω1, ω2) + |G(ω1, ω2)|² / P(ω1, ω2) ],

where P is defined by

P(ω1, ω2) = σ²_V |D(ω1, ω2)|² / |1 − A(ω1, ω2)|² + σ²_W.
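Collecting what was just written on the board into one place (the subscript convention for the three AR coefficients is the one used above):

\[
f(n_1,n_2) = a_{0,1}\, f(n_1, n_2-1) + a_{1,1}\, f(n_1-1, n_2-1) + a_{1,0}\, f(n_1-1, n_2) + v(n_1,n_2),
\qquad
\theta = \{\, \sigma_W^2,\; d(n_1,n_2),\; \sigma_V^2,\; a_{ij} \,\},
\]
\[
L(\theta) = -\sum_{\omega_1,\omega_2}\left[\log P(\omega_1,\omega_2) + \frac{|G(\omega_1,\omega_2)|^2}{P(\omega_1,\omega_2)}\right],
\qquad
P(\omega_1,\omega_2) = \frac{\sigma_V^2\,|D(\omega_1,\omega_2)|^2}{|1 - A(\omega_1,\omega_2)|^2} + \sigma_W^2,
\]
with \(A(\omega_1,\omega_2)\) the 2-D discrete-time Fourier transform of the coefficients \(a_{ij}\).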
And now I have to tell you what A is: A(ω1, ω2) is just the 2-D discrete-time Fourier transform of those coefficients a_ij that we had up there. It turns out that if you maximize that — if you find the parameters that maximize it, using the observation G — then you find the blur filter and, simultaneously, the parameters of your autoregressive model, and now you're ready to apply any of the restoration techniques we've talked about in the past. In the early nineties there were a ton of papers published about maximizing this, and fundamentally they all agreed this is what you've got to maximize; but from a numerical, practical point of view it's a difficult function to maximize, for the reasons I'm going to enumerate now.

One thing: in order to get a unique answer — this is an ill-posed problem, and there are a number of filters D that could satisfy it. Every time you have an ill-posed problem, as I said earlier, you can make it well posed — another word for it is to regularize it — by applying additional constraints. It's the same kind of thing as having a matrix equation Ax = b where the number of constraints, the number of rows of the matrix, is fewer than the number of unknowns: that's an underdetermined system of equations, and the only way you can uniquely solve for x is to add more constraints. It's the same story here — it's not literally a linear system of equations, but the problem is ill-conditioned, and the first thing you have to do to make it well-conditioned is to regularize it.

So, issues associated with solving this — call it (*). Number one: it must be regularized; you must apply regularization techniques to make it well-conditioned. What does that mean? It means you have to add constraints, because it doesn't have enough of them. So what constraints can we think of? One possibility is energy conservation. And by the way, where was another situation in this course where we applied additional constraints to make a problem better conditioned? Reconstruction from Fourier-transform magnitude, right: we kept applying more constraints — for example positivity of the image, or finite extent from 0 to N1 and 0 to N2 — because the more constraints you apply, the more you've nailed down where the solution lies and the easier it is to find. Generally speaking, numerical algorithms have a lot of trouble when there's a multiplicity of solutions. For example, in reconstruction from Fourier-transform magnitude we talked about how the solution can be f(n1, n2) or f(−n1, −n2), and when you run the actual iterative reconstruction algorithm you can get a superposition of f(n1, n2) and f(−n1, −n2) on top of each other, and your iterative solution can stagnate between those two answers and never really fully converge. By the way, I had a discussion about this with Dave Israel, the guy at MIT who did his thesis on reconstruction from Fourier-transform magnitude. I literally hadn't seen him for twenty years, and I just saw him about a week ago; we talked about his thesis, the stagnation problem, and all those things we used to worry about when we were graduate students. I published my very first technical paper with him — a little note in the Proceedings of the IEEE on counting the number of common zeros between two polynomials.

Anyway, long story short: you have to add constraints to make the problem well-conditioned. What do I mean by energy conservation? Let me first say it mathematically: the double summation of d(n1, n2) is one, which means the DC value of D — that is, D(ω1, ω2) evaluated at (ω1, ω2) = (0, 0) — is one. What does it mean physically? It means the degradation was a passive operation: it neither generated any additional energy in the signal that came out of it nor absorbed any. That's one nice constraint to apply. Another constraint you can apply is to say: I know my blur function has symmetry in the space domain, so that its Fourier transform is real — many blur functions have either circular symmetry or quadrantal symmetry — so another constraint is d(n1, n2) = d(−n1, −n2), i.e. the blur point-spread function (point spread function is the word used in optics for the impulse response) is symmetric. So that's one issue: we've got to regularize this, and different people have published different papers regularizing it in different ways. The other problem with this log-likelihood function, with maximizing this thing, is that it is highly, highly nonlinear.
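The two constraints just named, written out:

\[
\sum_{n_1}\sum_{n_2} d(n_1,n_2) = 1 \;\Longleftrightarrow\; D(0,0) = 1 \qquad \text{(energy conservation: the blur is passive),}
\]
\[
d(n_1,n_2) = d(-n_1,-n_2) \qquad \text{(symmetric point-spread function, so } D(\omega_1,\omega_2) \text{ is real).}
\]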
So how do you optimize a nonlinear function — let me come over here, if you can roll up so that we can see, great — how do you optimize nonlinear functions, anybody? What numerical technique do you apply? Exactly: steepest-descent or gradient-type techniques. You start somewhere and — think of the terrain you're optimizing over as mountains in some region — you want to find the peak; maximization and minimization are flip sides of the same thing. Say you do gradient descent to find the minimum: you find the direction in which you go down the fastest. It's like skiing: I'm at the top of the mountain — what's the fastest way down? You find the steepest slope and you just keep going. Or the opposite, steepest ascent: you want to reach the summit, so you keep stepping in the direction that goes up. What's the problem with that approach? Exactly. If you have a terrain like this, and this peak is higher than that one, then doing ascent from here you go up one step at a time, you get stuck here, and you miss the global maximum — you get stuck at a local minimum or maximum rather than the global one. So this is local, this is global.

Let me write some of this down. Issue number two has to do with the optimization of a nonlinear function. How many of you have actually had to implement code to optimize a nonlinear function? Okay. The most common techniques are steepest descent or ascent, and the problem is local minima: you get stuck in a local minimum — same thing here, I could get stuck in this local minimum if I started here. And what determines which local minimum you get stuck at? Exactly: the initial condition. And what are some techniques people developed over the last twenty or thirty years to combat this? You're probably too young to remember, but in the late eighties there was this craze for simulated annealing — have you heard of that term? No? It was a fad: there was simulated annealing, there were neural networks; these fads come and go. Simulated annealing — I forget exactly who invented it, but IBM people on the East Coast had something to do with it for sure — basically, there are two things you can do in order not to get stuck in a local minimum. One is to make random jumps once in a while, so that you get a chance to visit multiple initial conditions and therefore a chance to hit the global optimum: in the middle of the search, don't just do steepest ascent or descent; occasionally make a random jump, depending on some criterion, in order to visit more of the terrain. The second approach is that at each point you compute the direction of steepest descent, but you don't always go that way: you flip a coin with some probability p, where you adjust p according to some criterion, and sometimes you take the opposite of the steepest-descent direction, to give yourself a chance not to get stuck in a local minimum. Anyway, there's a whole slew of techniques people have developed to deal with these kinds of problems.
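A toy illustration of those two ideas — random proposal moves, plus occasionally accepting a "wrong-way" move with a shrinking probability. The terrain, the cooling schedule, and the constants are all invented for the example.

```python
import numpy as np

# A 1-D "terrain" with a local peak near x = -1 and a higher global peak near x = 2.
def terrain(x):
    return np.exp(-(x + 1.0) ** 2) + 2.0 * np.exp(-0.5 * (x - 2.0) ** 2)

rng = np.random.default_rng(1)
x, temperature = -1.5, 1.0                 # deliberately start in the basin of the local peak

for step in range(2000):
    proposal = x + rng.normal(scale=0.3)   # random jump instead of a pure gradient step
    gain = terrain(proposal) - terrain(x)
    # Always accept uphill moves; accept downhill moves with a probability that
    # shrinks as the temperature is lowered (the "annealing" part).
    if gain > 0 or rng.random() < np.exp(gain / temperature):
        x = proposal
    temperature *= 0.995                   # cooling schedule

print(f"search ended near x = {x:.2f}; the global peak is near x = 2")
```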
Now, one iterative technique that has found a lot of applications within signal processing is the EM algorithm — how many of you have heard of that? Expectation-maximization; it doesn't stand for electromagnetic. So besides POCS, which has found a lot of applications in image processing, one technique that's used a lot is EM, which stands for expectation-maximization. The way it works is very similar to almost all other iterative algorithms, and in the context of the problem we're looking at I'll just give you a brief overview. The basic idea is that it's an iterative technique: you go from step one to step two and back. Suppose in step one I assume I know θ — all the parameters: my point-spread function, the a_ij, σ²_V, and σ²_W. G is always given; the problem is that we need to estimate θ and we also need to estimate F hat simultaneously — we want to restore the image and determine the parameters. If I knew θ, then from θ (and G) I can compute F hat — and one thing I didn't tell you: if I know θ, how do I get F hat? That's done using Wiener filtering, or any of your favorite techniques. So I start with some initial guess of θ and I compute F hat. Well, if I know F hat, what can I estimate from it? If F hat is a good approximation to F, then from it I can estimate the a_ij, the parameters of the autoregressive model, and σ²_V. And what else can I estimate from F hat? I can apply the approximate relationship G(n1, n2) ≈ D convolved with F hat: I know G and I know F hat, so from there I can come up with an estimate of D(n1, n2) — these are all estimates — using any of the techniques we've talked about. And once you know D and the a_ij and σ²_V, you can come back up to the top: now I know θ again, and — forget about the "one" and "two," this just shows the flow — you keep going back and forth in this loop until it converges.
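Here is that back-and-forth loop written out as code. The two inner routines are deliberately simplistic stand-ins — a Wiener-flavoured image step that assumes a flat image spectrum, and a least-squares blur fit from G ≈ D·F̂ — so this shows only the alternating structure, not the actual EM update equations from the literature.

```python
import numpy as np

def wiener_step(G, D_hat, nsr=1e-2):
    """Estimate F_hat given the current blur estimate (flat image-spectrum assumption)."""
    return np.conj(D_hat) * G / (np.abs(D_hat) ** 2 + nsr)

def blur_step(G, F_hat, eps=1e-2):
    """Re-estimate D_hat given the current image estimate, from G ~ D * F_hat."""
    return G * np.conj(F_hat) / (np.abs(F_hat) ** 2 + eps)

def alternate(g, d_init, n_iter=10):
    """Alternate the two steps in the frequency domain, starting from an initial blur guess."""
    G = np.fft.fft2(g)
    D_hat = np.fft.fft2(d_init, s=g.shape)
    for _ in range(n_iter):
        F_hat = wiener_step(G, D_hat)
        D_hat = blur_step(G, F_hat)
    return np.real(np.fft.ifft2(F_hat)), np.real(np.fft.ifft2(D_hat))
```

Note that without the constraints discussed above (unit DC gain, limited support, symmetry) this bare alternation tends to drift toward the trivial answer d = δ, f̂ = g, which is exactly why the regularization step matters.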
A better diagram than the one I've drawn is in this paper, so I'll just show it instead of drawing it, in the interest of time — if you can zoom in here. Okay: you start with an initial estimate; this is the Wiener filter; using G and the initial estimate of the parameters you estimate F hat; from F hat you estimate both the image model and the blur, that is the a_ij and D — these hats just mean they're estimates — and then you feed G and these parameters back into the Wiener filter and estimate F hat again. So the two main calculations are the Wiener filter operation, to get F hat given those two sets of parameters, and then, given F hat, estimating both D and the model parameters a_ij and σ²_V. You iterate back and forth until convergence is achieved. In general, in image or signal processing applications these iterative techniques are very common: you have a chicken-and-egg problem, you make an initial guess of one quantity, estimate the other one from it, then re-estimate the first, and go back and forth hoping that you converge — and there are some powerful results showing that under certain conditions these techniques do converge. Any questions? Okay.

So I want to move on and talk about another technique for blind deconvolution — and this time we'll have a real example at the end of it. The example is completely non-fabricated: it's a real picture of a ship at the bottom of the ocean that people applied these image deblurring techniques to. In this situation, for simplicity of the derivations, I'm assuming there is no noise. You start with the original signal F, and it's been blurred by this function B(ω1, ω2) — I realize I called it D in the previous discussion, but let's stick with B for this blur function. The assumption we make here is that the magnitude of B is smooth. Then we go through a set of steps — can you zoom out just a tiny bit so we have the entire page in view? they're busy doing something else — great, thank you. If I write G(ω1, ω2) = F(ω1, ω2) times B(ω1, ω2), then I can put magnitudes around everything — I got tired of writing ω1s and ω2s — and what I'm assuming is that the magnitude of B(ω1, ω2), if I were to plot it as a function of ω1² + ω2², is smooth; in many situations that's quite the case. What I'm going to do next — you can roll up again — is decompose the magnitude of F(ω1, ω2) into two components: a slowly varying component and a fast-varying component. You can think of them as the low-frequency and high-frequency parts, except that F is already in the frequency domain.
Okay. So — the slowly varying component: pictorially, what I'm saying is that if I look at |F(ω1, ω2)| as a function of ω1² + ω2², it's going to do something like this; we all know that as a function of frequency, as you go away from DC, the energy decreases — I'm exaggerating a bit. The wiggly part is the fast-varying part of |F(ω1, ω2)| and this is the slowly varying part. So it's equal to two things: this guy, which we call |F(ω1, ω2)|_L (L for the low, slowly varying part), plus another component, |F(ω1, ω2)|_H, the fast-varying part, both as functions of ω1² + ω2². So mathematically, |F(ω1, ω2)| = |F(ω1, ω2)|_L + |F(ω1, ω2)|_H, the low-frequency and the high-frequency parts. Given that, the equation up here, F times B equals G, translates pictorially into: the magnitude of F, which looks like this, times the magnitude of B, which we assume is smooth, equals the magnitude of G, which is going to be wiggly like that. So |G(ω1, ω2)| = |B(ω1, ω2)| times — and now I replace the F with the decomposition — (|F(ω1, ω2)|_L + |F(ω1, ω2)|_H). I'm now getting really tired of writing ω1s and ω2s, so I'll drop them and you'll know what they mean: |G| = |B| |F|_L + |B| |F|_H. Now what I'm going to do is apply a smoothing operator — not the smooth operator.
What's the name of that singer — Sade, right? Not "Smooth Operator," but a smoothing operator. I've been teaching this course for many years and never thought of that joke; that's a new one I'm adding to the repository. So: apply a smoothing operator to both sides. Again, as I said, it sounds like hand-waving, but it really isn't — the darn thing actually works. Call the smoothing operator S. S applied to |G| — S is a smoothing operator, so what does it do? It removes the ripples, the high-frequency junk — equals S applied to |B| |F|_L plus S applied to |B| |F|_H. And what I'm going to argue is that this second bit is approximately zero. Why? Let's look at our various functions — come back down here. |F|_H is the fast-varying portion of |F|, which is this wiggly guy, and B we assumed is a smooth function like this, so |B| |F|_H is a function that — let me just draw it — oscillates about zero: it's large where |B| is large, so the oscillations are big here and small there. What happens if I apply S, the smoothing or averaging operator, to that? It makes it approximately zero: if the thing is swinging up and down fast and I smooth it out, I get roughly zero. And these are all for analysis purposes, so that we get an answer we can give a try; if at the end the answer looks crappy, then we know all these assumptions and hand-waves were terrible. A lot of times in research you have to make a bunch of assumptions to get an answer, and then you test how good the answer looks and therefore how valid your assumptions were. So, assuming that term is zero, what we get is: S applied to |G| is approximately S applied to |B| |F|_L. And what does S do to |B| |F|_L? |B| looks like this, smooth; |F|_L is smooth; |B| times |F|_L is smooth; apply a smoothing operator to something smooth and what do you get? Something smooth — essentially the original itself; it doesn't really change. So S{|B| |F|_L} is just |B| |F|_L. I'm going to make that approximation here and argue that it's just |B| times |F|_L.

So I can come down and rewrite the equation: S applied to |G| is approximately |B| |F|_L, which means |B| is approximately the smoothing operator applied to |G|, divided by |F|_L. And then we run into this terrible problem again: how do we estimate |F|_L? We don't know what F is. But look — we're talking about the Fourier-transform magnitude of the original signal, and we already went through this at the beginning of the class: the Fourier-transform magnitudes of all images look more or less the same, especially the smoothed versions — it's a mound that comes down. We even exchanged the Fourier-transform magnitude of one image with another's, kept the phase, and it didn't make much difference. So you can hand-wave and argue that if you take an ensemble of images F′ and compute F′_L over all of them, that's a good enough estimate for |F|_L. So: estimate |F|_L using an ensemble of images, F′_L. That gives |B(ω1, ω2)| approximately equal to the smoothing operator applied to |G(ω1, ω2)|, over |F′_L(ω1, ω2)| — I forgot to put the ω1, ω2 in, but there they are. And you might ask yourself, how about phase — what do I do with the phase of B? Remember, this is blind deconvolution; our whole goal was to estimate B. Well, we wave our hands and say let's just make it zero phase, because that's about the best thing you can ever do in image processing — at least it doesn't distort the edges, et cetera. So we set the phase of B(ω1, ω2) to zero; it's a zero-phase filter.

Now apply that to a signal. If you look at Jae Lim's book, figure 9.21: we used this technique to compute B, but once we have B it matters just as much which restoration technique we use, and here we applied the iterative technique — remember, F hat at iteration k+1 equals F hat at iteration k plus some constant λ times (G minus F hat at iteration k convolved with B). So figure 9.21 shows blind deconvolution where B is estimated with this technique, followed by the iterative technique; and by iterative I mean, to remind you from last time's lecture, f̂ at step k+1 equals f̂ at step k plus λ times the difference between g(n1, n2) and f̂ at step k convolved with the blur function — the estimate of the restored signal after k+1 iterations in terms of the estimate after k iterations, and all of that. So what does it look like? This is an underwater picture obtained by Jules Jaffe, who I think at the time this book was written was a scientist at the Woods Hole Oceanographic Institution in Cape Cod, Massachusetts — I think he used to spend the summers there or something like that — and he's now at the Scripps Institution in San Diego, which is also oceanographic. This is an underwater picture of a ship, and this is after the restoration technique has been applied: you can actually see the hull of the ship, you can see these straight lines — a lot of detail that's missing from the original picture. Any questions or comments?
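A compact sketch of the two pieces just described — estimate |B| by smoothing the magnitude spectrum of the blurred image and dividing by an "average image" spectrum, assume zero phase, then run the iterative restoration. The Gaussian smoother, the ensemble stand-in, and the step size λ are my own choices.

```python
import numpy as np

def smooth(M, sigma=5.0):
    """The smoothing operator S: a circular Gaussian low-pass applied to a 2-D array."""
    k1 = np.fft.fftfreq(M.shape[0])[:, None]
    k2 = np.fft.fftfreq(M.shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (k1 ** 2 + k2 ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(M) * H))

def estimate_blur_magnitude(g, ensemble):
    """|B| ~ S{|G|} / |F'_L|, with |F'_L| the smoothed average magnitude spectrum
    of an ensemble of 'typical' images (the hand-waving step from the lecture)."""
    Fp_L = smooth(np.mean([np.abs(np.fft.fft2(im)) for im in ensemble], axis=0))
    return smooth(np.abs(np.fft.fft2(g))) / (Fp_L + 1e-8)

def iterative_restore(g, B_mag, lam=0.5, n_iter=50):
    """Zero-phase assumption (B = |B|); iterate f_{k+1} = f_k + lam*(g - b ** f_k),
    written here in the frequency domain."""
    G = np.fft.fft2(g)
    F_hat = np.zeros_like(G)
    for _ in range(n_iter):
        F_hat = F_hat + lam * (G - B_mag * F_hat)
    return np.real(np.fft.ifft2(F_hat))
```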
Yes? Sure — actually, can you use the mic, so that I and the remote people can hear you? Thank you. "Sorry, this is just a minor question, but the absolute value of F you have drawn goes from positive to negative; it was a little bit worrisome, because I would think of the absolute value of F as always positive, so I wasn't sure if it was just me." Yeah — okay, I see your point; you're talking about this thing, right? Absolutely, they all have to be positive — it should really look like this; you're right. Sorry, I flipped past it too quickly: you start with the absolute value, and then you split it into the fast-varying component and the slowly varying component, and by the time you do that, a component by itself can become negative. Okay — that's a good point. The way I had it drawn — I had a professor at MIT, Bruce Musicus, who once spent a whole lecture deriving something and then said, "the answer also makes sense intuitively, because as this and this and this happen, the ratio goes down." A student said, "excuse me, professor, you made a mistake right there in the middle of the board — that term has to be in the numerator, not the denominator." He looked at it, said "you're right," fixed the answer, and then said, "and I can still explain it intuitively: as this and this and this happen, the thing goes up." So you can explain everything regardless of whether it was in the numerator or the denominator — it's almost the same thing here. Any other questions? That was a good question, and it could have been confusing. I totally agree it's a lot of hand-waving to get to this point, but it does work.

Okay, what I'd like to do in the last fifteen minutes or so is talk about homomorphic processing, because it's the one topic that fell through the cracks during the course, so I'd like to spend a little time on it, and I'd like to first talk about it in the context of restoration. Homomorphic processing — by the way, this was actually the topic of the PhD thesis of Al Oppenheim at MIT in 1963, so it's a pretty old topic, but it's still quite interesting. The basic reason I want to talk about homomorphic processing in the context of restoration is this: so far, we've added noise, observed G, and tried to estimate F. Now the question comes up: what about multiplicative noise — what if F is multiplied by some signal W to get G? Film-grain noise, for example, is multiplicative. What do you do then to estimate F hat? So in this case G can be modeled as F times W. What is one simple way of converting a multiplicative system into an additive one? Exactly: logarithms. I bet you know it because you come from the speech community, and you people deal with cepstra a lot, right? I thought so. If you take the log of G, it equals log of F plus log of W; call it G′ ≈ F′ + W′, where W′ is the log of W and F′ is the log of F, renamed. Now we have an additive system, and the approach is the same thing we've always been saying: if you don't know how to solve a problem, reduce it to one you do know how to solve. By applying the log to both sides you now have an additive system, and we have a huge bag of tools from the last four or five weeks for that. So now apply any kind of additive-noise restoration algorithm to restore the log-domain estimate — and then, once you've restored it, how do you get back to F? You pass it through exponentiation. So the flow for this kind of processing system is: start with G, take the log, do anything you like for additive-noise removal — any of the N techniques we've come up with — then exponentiate, and you get F hat. Let me see if I can show you a picture of that from Jae Lim's book — he might or might not have one, it might be a canned example — yes, he does have it. Here's an example of a system that does it: figure 9.27 in Jae Lim's book. On the top you have the original 512-by-512 image with no noise; on the bottom is the image with noise, lots of noise on the face — I don't know if it makes it over to the screen — and here is the processed image after removing the noise. I can see the difference to some extent in the book; honestly I don't see it on the screen, I don't know if you do, but if you want to look at it, it's figure 9.27 in Jae Lim's book.

Okay, so there's another application of a homomorphic system, which I'll talk about now, that does make a visible difference, and you can see examples of it in the book — plus one example that doesn't come from the book, which I have an electronic version of. So, another application of homomorphic filtering. When you take a picture and record it on some medium, there's a constant struggle, if you will, between dynamic range and local contrast. What do I mean by that? Suppose I'm taking a picture of a group of people standing in the sun — one subject I never did well in is drawing, okay? Actually, when I was a kid in elementary school, every time I drew something my teacher would say "try to complete it," and I would take it home and say, "Mom, what does she mean? I already completed it — that's it, it's done." And the teacher would say, no, no, no —
add more things you know more grass more more Sun but that I mean literally every time I used to get that come and we had this it wasn't a regular it was looking great and yet we had a different painting to your drawing teacher than the regular you know and that's all the comments I ever got anyway so so he so so let's say you know there's an array there's a ray of Sun that comes on like this and and if I were to draw kind of we might not even have needed to draw that picture if I draw as a function of space that's right if I draw as a function of space the intensity that I would get on average I get something like that but the suppose that the dynamic range of the film so this is if I had infinite dynamic range so suppose this is the dynamic range of the film okay and and the actual signal that I record has high high varying component here some more hey Amazon Prime members why pay more for groceries when you can save big on thousands of items at Amazon fresh shop prime exclusive deals and save up to 50% on weekly grocery favorites plus they've 10% on Amazon brands like our new brand Amazon saver 365 by Whole Foods Market a plenty and more come back for new deals rotating every week don't miss out on savings shop prime exclusive deals at Amazon fresh select varieties we wear our work day by day stitch by stitch a dickies we believe work is what we're made of so whether you're gearing up for a new project or looking to add some tried and true work where to your collection remember the dickies has been standing the test of time for a reason the work where isn't just about looking good it's about performing under pressure and lasting through the toughest jobs head over to dickies.com and use the promo code work where 20 at checkout to save 20% on your purchase it's the perfect time to experience the quality and reliability that has made dickies a trusted name for over a century they're in component here and some some high varying component here right and so essentially so the reason I drew this picture is because this this man this guy is this signal the child is this signal and this down here is this signal so I'm kind of taking a picture from this angle right basically there's two components here okay there's the reflectance component due to the objects in the scene and then there's the illumination component due to the to the sources of illumination to the the lighting for those of you who've taken computer graphics right that you always put light sources at different locations and then you apply what ray tracing algorithms to figure out what the image would look like right this is kind of the same thing the recorded signal if I take a picture F is is approximately R f of n one and n two in space domain is R of n one and n two which is the reflectance due to the objects times the illumination I of n one and n two this is illumination due to light sources essentially in this case it's the Sun okay and generally speaking our light sources are have very large low frequency component right they don't vary as a function of space they don't change very much I mean it's not that this part of the room is is bright bright and this part is dark dark it's it's smoothly varying I mean we have these light sources that that that illuminate the room so this is very slowly varying not as a function of time but as a function of space function of space and this guy is has varies very fast why because it's the object I mean it like if we take a picture of me it's white here then becomes black here then 
So you can imagine that if you take a picture like this, you are going to have a disaster, because this guy's face, which corresponds to this part of the signal, is just gone: it exceeded the dynamic range of the film. This guy is in the dark part of the signal, below the dynamic range, so he is lost too. The only person you can see well is the one in the middle. It's the classic situation: you take a picture of one person standing in the sun and one person standing in the shade; the one in the sun is too bright to make out his face, and the one in the shade is too dark. If you had the means, how would you modify that signal? Well, you compute the local mean. In the bright part of the picture you reduce the local mean so you can see the details of the face, where the details are the high-frequency part; for the person in the dark, where the local mean is low, you jack it up so it becomes normalized; and you also increase the local contrast, the high-frequency part, in those regions. So essentially, to rescue a signal that has been captured like this, you want to do two things: local contrast enhancement and dynamic range reduction. Ideally, instead of something like this, I would like the signal as a function of space to look like this: if this is my dynamic range, I want to raise this local mean up to here, keep this one more or less where it is, bring this one down to here, and make the small local variations bigger, all without exceeding the dynamic range.

How do I accomplish that? Because the model holds, the recorded signal is the product of the reflectance and the illumination, with the reflectance fast varying and the illumination slowly varying, we can play some games and apply homomorphic filtering to get this done. Take the log: log of f equals log of r plus log of i, where everything is a function of n1 and n2. (That's the one annoying thing about teaching two-dimensional signal processing instead of one-dimensional: in 1-D it's all n, and here you double the number of variables and keep writing n1, n2, omega1, omega2; it gets tiring.) Because r is rapidly varying, log of r is also rapidly varying, and log of i is slowly varying. So a reasonable system to design is this: start with f, take the log, low-pass filter it to get approximately log of i, and high-pass filter it to get approximately log of r. What do we want to do in this picture? The illumination is the low-frequency part, the local mean, and it is responsible for the large dynamic range of the recorded signal, so we want to reduce its dynamic range: multiply it by some alpha smaller than one. The reflectance is the part I'm actually interested in, the people in the picture, so I multiply it by some beta larger than one to increase the local contrast. Then I add the two branches back together, and since I took a log at the start I have to undo it, so I exponentiate, and out comes the output F hat.
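Here is a minimal sketch of that two-branch system, assuming a Gaussian low-pass from scipy.ndimage stands in for the low-pass filter and the residual serves as the high-pass branch; the values of alpha, beta, sigma, and eps are illustrative assumptions, not numbers from the lecture:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_enhance(f, alpha=0.5, beta=1.5, sigma=15.0, eps=1e-6):
    """Reduce dynamic range and boost local contrast via homomorphic filtering."""
    f = np.asarray(f, dtype=np.float64)
    f_log = np.log(f + eps)                        # log f = log r + log i
    illum_log = gaussian_filter(f_log, sigma)      # slowly varying part, roughly log i
    refl_log = f_log - illum_log                   # rapidly varying part, roughly log r
    out_log = alpha * illum_log + beta * refl_log  # compress illumination, boost reflectance
    return np.exp(out_log) - eps                   # undo the log
```

Defining the high-pass branch as f_log minus illum_log guarantees that with alpha = beta = 1 the two branches sum back to the original log image, so the filter reduces to the identity.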
If you don't like processing it as two separate branches, that block diagram is just conceptual, to help you understand what is going on; the system is equivalent to the following: take f(n1, n2), take the log, pass it through a single filter H(omega1, omega2) whose gain, as a function of omega1 squared plus omega2 squared, starts at alpha at low frequencies and rises to beta at high frequencies, and then exponentiate; out comes F hat. The whole two-branch picture collapses into that one filter.

Just to wrap up the lecture, and I realize I've gone over time, I apologize, let me show you an example. If you can zoom in here: this is an example of homomorphic filtering applied to the image on the left to get the image on the right; it is figure 8.11 in Oppenheim and Lim's book. On the left you can hardly make out the details inside here, and you can hardly see the pipes in the background; after the dynamic range has been reduced, this part is brighter and you can see the pipes quite a bit better. If you look here, the trees... oh, sorry, what is this picture? It's a radiator or something at MIT; I don't really know why he picked this example, but it makes the point, and it's a real example, not a canned one. By the way, this example is also in your other book, Gonzalez and Woods. All right, I'll see you all not on Friday but on Wednesday.