User:92anonymous92


Problem 1


<math>I(\lambda)=\int_1^{8} g(x)\exp(-\lambda f(x))\,dx,</math> where <math>g(x)=\frac{x^4+15}{x^7+10+\exp(x)}</math>, <math>f(x)=x^2-4x+6</math>.

Analytic Approximation


We begin the problem by searching for an analytic approximation to <math>I(\lambda)</math>. We note that this is a Laplace integral, and so it can be evaluated using the method of Laplace. The notes for this course derive a formula for such integrals in terms of <math>x_0</math>, the point where <math>f(x)</math> experiences its minimum (assuming that this minimum is within the limits of integration, as it is in our case). We will, however, proceed to derive such a formula ourselves. We note that if we define <math>I_\epsilon(\lambda)=\int_{x_0-\epsilon}^{x_0+\epsilon} g(x)\exp(-\lambda f(x))\,dx</math>, where <math>\epsilon</math> is a small, positive constant, then <math>I_\epsilon(\lambda)</math> approaches <math>I(\lambda)</math> in the limit of <math>\lambda\to\infty</math>. This is because the behavior of the integrand of our problem is only relevant near the minimum of <math>f(x)</math>, since the integrand decays exponentially as one moves away from <math>x_0</math>.
Since <math>\epsilon</math> is small, we can sensibly approximate both <math>g(x)</math> and <math>f(x)</math> by their Taylor series about <math>x_0</math>. For our purposes, we will at this point keep only the first nonzero term of each Taylor series. Also, we have so far been working with a general Laplace integral, although making some assumptions about the differentiability of the functions, the location of the minimum of <math>f(x)</math> away from either of the limits of integration, et cetera; however, it will be useful for us to note before working with the Taylor series that our <math>f(x)</math> is a polynomial of degree 2. As such, all derivatives of order 3 or higher are 0, and the first three terms (counting zero and nonzero terms) exactly describe the function. For the particular combination <math>f(x)-f(x_0)</math>, the value of the difference at <math>x_0</math> is 0 by the construction of the difference, and the value of the first derivative of the difference is also 0 because we are expanding the Taylor series around <math>x_0</math> and <math>x_0</math> is a minimum of <math>f(x)</math>. This all means that writing the first nonzero term of the Taylor series of the combination about <math>x_0</math> is doing nothing except rewriting the difference. By the standard notation for the Taylor series, we would just be factoring the difference <math>f(x)-f(x_0)</math> into <math>\tfrac{1}{2}f''(x_0)(x-x_0)^2=(x-2)^2</math>. Despite the fact that this Taylor expansion is trivial for our particular function, we will continue to work with the more general case for the clarity of our derivation.
We now expand our functions and make the aforementioned approximation, so that

<math>I_\epsilon(\lambda)\approx\int_{x_0-\epsilon}^{x_0+\epsilon} g(x_0)\,e^{-\lambda\left[f(x_0)+\frac{1}{2}f''(x_0)(x-x_0)^2\right]}\,dx=g(x_0)\,e^{-\lambda f(x_0)}\int_{x_0-\epsilon}^{x_0+\epsilon} e^{-\frac{1}{2}\lambda f''(x_0)(x-x_0)^2}\,dx</math>.

From our earlier limit as <math>\lambda\to\infty</math>, the following is true for large <math>\lambda</math>:

<math>I(\lambda)\approx g(x_0)\,e^{-\lambda f(x_0)}\int_{x_0-\epsilon}^{x_0+\epsilon} e^{-\frac{1}{2}\lambda f''(x_0)(x-x_0)^2}\,dx</math>.

For convenience, we will now define the expression <math>s=\sqrt{\lambda f''(x_0)/2}\,(x-x_0)</math>. We now perform a change of variables on the integral in our previous expression for <math>I(\lambda)</math>, changing it to

<math>I(\lambda)\approx g(x_0)\,e^{-\lambda f(x_0)}\sqrt{\frac{2}{\lambda f''(x_0)}}\int_{-\epsilon\sqrt{\lambda f''(x_0)/2}}^{\epsilon\sqrt{\lambda f''(x_0)/2}} e^{-s^2}\,ds</math>.

We see in this formula that as <math>\lambda\to\infty</math>, the limits of integration approach <math>\pm\infty</math>. Thus, for our approximation, we will change the limits of integration to <math>\pm\infty</math>. Our integral now becomes

<math>I(\lambda)\approx g(x_0)\,e^{-\lambda f(x_0)}\sqrt{\frac{2}{\lambda f''(x_0)}}\int_{-\infty}^{\infty} e^{-s^2}\,ds</math>.

Evaluating the Gaussian integral, <math>\int_{-\infty}^{\infty} e^{-s^2}\,ds=\sqrt{\pi}</math>, we get an approximation for <math>I(\lambda)</math>:

<math>I(\lambda)\approx g(x_0)\,e^{-\lambda f(x_0)}\sqrt{\frac{2\pi}{\lambda f''(x_0)}}</math>.

For our particular integral, with <math>x_0=2</math>, <math>f(2)=2</math>, <math>f''(2)=2</math>, and <math>g(2)=\frac{31}{138+e^2}</math>, this is

<math>I(\lambda)\approx\frac{31}{138+e^2}\,e^{-2\lambda}\sqrt{\frac{\pi}{\lambda}}</math>.
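The original numerical checks in this writeup were done in Mathematica; as a quick, independent sanity check, the following minimal sketch (assuming Python with NumPy and SciPy, which is a choice of tooling and not the authors' code) compares the leading-order formula above with direct quadrature for a few values of <math>\lambda</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def g(x):
    return (x**4 + 15.0) / (x**7 + 10.0 + np.exp(x))

def f(x):
    return x**2 - 4.0 * x + 6.0

def I_numeric(lam):
    # Quadrature of the Laplace integral over [1, 8]; the scale e^{-2*lam} is factored
    # out (f has minimum value 2 at x = 2) so that quad works with O(1) numbers.
    integrand = lambda x: g(x) * np.exp(-lam * (f(x) - 2.0))
    left, _ = quad(integrand, 1.0, 2.0)
    right, _ = quad(integrand, 2.0, 8.0)
    return np.exp(-2.0 * lam) * (left + right)

def I_laplace(lam):
    # Leading-order Laplace approximation derived above: g(2) e^{-2*lam} sqrt(pi/lam).
    return 31.0 / (138.0 + np.exp(2.0)) * np.exp(-2.0 * lam) * np.sqrt(np.pi / lam)

for lam in (1.0, 5.0, 20.0, 100.0):
    num, app = I_numeric(lam), I_laplace(lam)
    print(f"lambda = {lam:6.1f}   numeric = {num:.6e}   Laplace = {app:.6e}   rel. diff = {(app - num) / num:+.2e}")
</syntaxhighlight>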

This is almost identical to the form derived in the notes for this course, but is in fact greater by a constant factor. As will be displayed in the numerical evaluation of our solution, this factor accounts for much of the discrepancy between the form in the notes and the numerically integrated answer. Also, it should of course be noted that in the section of the notes immediately succeeding the one from which the initial form was taken, the authors proceed to more carefully derive the form including this factor, using the same method as our own. The authors also treat the case where <math>f(x)</math> does not have a minimum on the interval of integration, which we will not do here.
As will be displayed in our later numerical evaluation of the approximations for <math>I(\lambda)</math> above, our approximation fits the actual value of the integral quite closely. In another regard, however, it is quite a poor approximation, because we failed to derive any representation of the error of our approximation. It is possible for us to numerically evaluate our integral and hence determine the error, but an approximation is always better when one can deduce the error without resorting to numerical methods. For this reason, we will now proceed to derive the same approximation in a way that, while perhaps less clear and definitely consisting of more cluttered algebra, yields the order of the next term, and thus gives a sense of the error of our approximation. We will do this derivation for the general Laplace integral, up to such assumptions as differentiability that are necessary for our derivation, and then address the particulars of our specific integral. This derivation will hinge on Watson's lemma.
We begin this more careful derivation by splitting <math>I(\lambda)</math> into two separate definite integrals,

<math>I(\lambda)=I_1(\lambda)+I_2(\lambda)=\int_1^{2} g(x)\,e^{-\lambda f(x)}\,dx+\int_2^{8} g(x)\,e^{-\lambda f(x)}\,dx</math>.

This splitting has split the integral such that <math>f(x)</math> is monotonic over the ranges of integration (actually, over <math>[1,2]</math> and <math>[2,8]</math>). We'll now go ahead and Taylor expand <math>f(x)</math> and <math>g(x)</math> about <math>x_0</math>:

<math>f(x)=f(x_0)+\tfrac{1}{2}f''(x_0)(x-x_0)^2+\cdots,\qquad g(x)=g(x_0)+g'(x_0)(x-x_0)+\cdots</math>.

Note that the <math>f'(x_0)(x-x_0)</math> term is left out since <math>f(x)</math> has a minimum at <math>x_0</math> and so <math>f'(x_0)=0</math>.
Until otherwise noted, we will now work with <math>I_2(\lambda)</math>, although analogous logic will apply to <math>I_1(\lambda)</math>. On this integral we will perform a substitution to the variable <math>t</math>, where

<math>t=f(x)-f(x_0)</math>.

We will also define a function <math>F(t)</math>, with <math>x(t)</math> the inverse of the substitution on this monotonic piece, where

<math>F(t)=g\big(x(t)\big)\left|\frac{dx}{dt}\right|</math>.

Now, we can rewrite <math>I_2(\lambda)</math> as

<math>I_2(\lambda)=e^{-\lambda f(x_0)}\int_0^{f(8)-f(x_0)} F(t)\,e^{-\lambda t}\,dt</math>.

To apply Watson's lemma (which essentially states that replacing <math>F(t)</math> in this integral by its expansion and then integrating term by term yields an infinite asymptotic expansion of <math>I_2(\lambda)</math>), we now must determine an asymptotic power series expansion for <math>F(t)</math> as <math>t</math> approaches 0 from the right. Inverting the substitution with the Taylor expansion of <math>f(x)</math> gives <math>x(t)=x_0\pm\sqrt{2t/f''(x_0)}</math>, and so we can get that

<math>F(t)=\frac{g(x_0)}{\sqrt{2f''(x_0)\,t}}\pm\frac{g'(x_0)}{f''(x_0)}+O\!\left(\sqrt{t}\right)</math>.

To determine the sign, we must note that <math>x</math> increases with <math>t</math> over this range of integration, so we want the plus in the <math>\pm</math>. It then follows by Watson's lemma (integrating term by term with <math>\int_0^\infty t^{-1/2}e^{-\lambda t}\,dt=\sqrt{\pi/\lambda}</math> and <math>\int_0^\infty e^{-\lambda t}\,dt=1/\lambda</math>) that

<math>I_2(\lambda)\sim e^{-\lambda f(x_0)}\left[g(x_0)\sqrt{\frac{\pi}{2\lambda f''(x_0)}}+\frac{g'(x_0)}{\lambda f''(x_0)}+O\!\left(\lambda^{-3/2}\right)\right]</math>.
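As an aside, the two-term expansion for <math>I_2(\lambda)</math> can itself be checked numerically. The sketch below (again assuming Python, with SymPy used only to evaluate <math>g(2)</math> and <math>g'(2)</math>; this is illustrative tooling, not part of the original solution) compares the expansion with direct quadrature of <math>I_2(\lambda)</math>.

<syntaxhighlight lang="python">
import numpy as np
import sympy as sp
from scipy.integrate import quad

x = sp.symbols('x')
g_expr = (x**4 + 15) / (x**7 + 10 + sp.exp(x))
g = sp.lambdify(x, g_expr, 'numpy')
g0 = float(g_expr.subs(x, 2))               # g(2)
g1 = float(sp.diff(g_expr, x).subs(x, 2))   # g'(2)
fpp = 2.0                                   # f''(2) for f(x) = x^2 - 4x + 6

def I2_numeric(lam):
    # I_2(lambda) = integral from 2 to 8 of g(x) exp(-lam f(x)) dx,
    # with the overall scale e^{-2*lam} factored out before quadrature.
    f = lambda xx: xx**2 - 4.0 * xx + 6.0
    val, _ = quad(lambda xx: g(xx) * np.exp(-lam * (f(xx) - 2.0)), 2.0, 8.0)
    return np.exp(-2.0 * lam) * val

def I2_watson(lam):
    # Two-term Watson's-lemma expansion derived above.
    return np.exp(-2.0 * lam) * (g0 * np.sqrt(np.pi / (2.0 * lam * fpp)) + g1 / (lam * fpp))

for lam in (5.0, 20.0, 80.0):
    print(lam, I2_numeric(lam), I2_watson(lam))
</syntaxhighlight>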

We now switch from analyzing <math>I_2(\lambda)</math> to <math>I_1(\lambda)</math>. The logic is analogous, but we will walk through the steps briefly for clarity. First, we perform the same change of variable as above, to get

<math>I_1(\lambda)=e^{-\lambda f(x_0)}\int_0^{f(1)-f(x_0)} F(t)\,e^{-\lambda t}\,dt</math>.

Next, we expand <math>F(t)</math>, this time taking the minus in the <math>\pm</math> (since on this piece <math>x</math> lies below <math>x_0</math>):

<math>F(t)=\frac{g(x_0)}{\sqrt{2f''(x_0)\,t}}-\frac{g'(x_0)}{f''(x_0)}+O\!\left(\sqrt{t}\right)</math>.

Lastly, we use Watson's lemma to write <math>I_1(\lambda)</math>:

<math>I_1(\lambda)\sim e^{-\lambda f(x_0)}\left[g(x_0)\sqrt{\frac{\pi}{2\lambda f''(x_0)}}-\frac{g'(x_0)}{\lambda f''(x_0)}+O\!\left(\lambda^{-3/2}\right)\right]</math>.

We can now simply sum <math>I_1(\lambda)</math> and <math>I_2(\lambda)</math> and get that

<math>I(\lambda)\sim g(x_0)\,e^{-\lambda f(x_0)}\sqrt{\frac{2\pi}{\lambda f''(x_0)}}+O\!\left(e^{-\lambda f(x_0)}\lambda^{-3/2}\right)=\frac{31}{138+e^2}\,e^{-2\lambda}\sqrt{\frac{\pi}{\lambda}}+O\!\left(e^{-2\lambda}\lambda^{-3/2}\right)</math>.

We have now recovered our earlier approximation for <math>I(\lambda)</math>, along with the order of the error! Now we have a good approximation, because we can both approximate the behavior of the function and characterize the error in our approximation without the use of numerical methods. We will, of course, still use numerical methods to check our approximation as well as our characterization of the error, but the analytic characterization of the error provides useful information that we could not have gleaned simply from a plot of our approximation versus a numerical evaluation of the function, since it gives us the functional form of the error and its dependence on <math>\lambda</math>.
Briefly, we must examine our previous derivation and make sure that no pathologies of our particular integral derail the argument. Our functions have no interesting behavior that causes any concern, aside from one quantity that vanishes at <math>x_0</math>. This is at first a bit worrisome, since that quantity does in fact arise in the derivation, but it appears only in the numerators of quotients (so we need not worry about it blowing up a quotient) and only in terms that in fact cancel out when we sum <math>I_1(\lambda)</math> and <math>I_2(\lambda)</math>. Thus, we are confident that our approximation of the integral and our characterization of the error of our approximation are accurate.
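Before turning to the plots, here is a small sketch (assuming Python with SciPy rather than the Mathematica actually used for the figures) of how the error characterization can be tested: if the error of the leading-order approximation is really <math>O(e^{-2\lambda}\lambda^{-3/2})</math>, then the error multiplied by <math>e^{2\lambda}\lambda^{3/2}</math> should level off to a constant as <math>\lambda</math> grows.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

g = lambda x: (x**4 + 15.0) / (x**7 + 10.0 + np.exp(x))
f = lambda x: x**2 - 4.0 * x + 6.0
C = 31.0 / (138.0 + np.exp(2.0))   # g(2)

def I_scaled(lam):
    # I(lambda) multiplied by e^{2*lam}, so the quadrature deals only with O(1) numbers.
    integrand = lambda x: g(x) * np.exp(-lam * (f(x) - 2.0))
    left, _ = quad(integrand, 1.0, 2.0)
    right, _ = quad(integrand, 2.0, 8.0)
    return left + right

for lam in (10.0, 40.0, 160.0, 640.0):
    err_scaled = abs(I_scaled(lam) - C * np.sqrt(np.pi / lam))   # |error| * e^{2*lam}
    # If the error is O(lambda^{-3/2} e^{-2*lambda}), this product should level off.
    print(f"lambda = {lam:7.1f}   error * lambda^(3/2) * e^(2*lambda) = {err_scaled * lam**1.5:.6f}")
</syntaxhighlight>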


Numerical Evaluation


Various plots can yield a wide variety of interesting information about the approximations we have made. We present plots over a large domain that indicate the accuracy of our approximation and of the simpler approximation from the notes (without the additional constant factor), as well as over a segment where <math>\lambda</math> is quite large, to indicate the improved accuracy of our approximation over the one without the factor. We also present plots that indicate the accuracy of our characterization of the error.
These first two plots illustrate the accuracy of our approximation (dashed red) versus the simpler approximation from the notes of the course, without the additional factor (solid green), versus a numerically integrated value of the function (solid blue). The two plots have slightly different ranges, and both are log-log plots. On each plot, the vertical axis is <math>I(\lambda)</math> and the horizontal axis is <math>\lambda</math>. Click on the images to view them on the file page in a larger format.

File:Group12pset3problem1plot1.png
File:Group12pset3problem1plot2.png
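For reference, a comparison of this kind can be reproduced along the following lines (a sketch assuming Python with Matplotlib; the simpler approximation from the notes is omitted here because its exact form is not reproduced on this page).

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

g = lambda x: (x**4 + 15.0) / (x**7 + 10.0 + np.exp(x))
f = lambda x: x**2 - 4.0 * x + 6.0
C = 31.0 / (138.0 + np.exp(2.0))   # g(2)

def I_numeric(lam):
    # Quadrature with the exponential scale factored out; split at the peak x = 2.
    val, _ = quad(lambda x: g(x) * np.exp(-lam * (f(x) - 2.0)), 1.0, 8.0, points=[2.0])
    return np.exp(-2.0 * lam) * val

lams = np.logspace(0.0, 1.5, 40)                              # lambda from 1 to about 32
numeric = np.array([I_numeric(l) for l in lams])
laplace = C * np.exp(-2.0 * lams) * np.sqrt(np.pi / lams)     # our approximation

plt.loglog(lams, numeric, 'b-', label='numerical integration')
plt.loglog(lams, laplace, 'r--', label='our Laplace approximation')
plt.xlabel(r'$\lambda$')
plt.ylabel(r'$I(\lambda)$')
plt.legend()
plt.show()
</syntaxhighlight>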

On the plots above, both approximations appear quite accurate, and it seems like they tend very close to each other at large <math>\lambda</math>. However, this next plot illustrates that the additional factor definitely distinguishes the two approximations at large <math>\lambda</math>. It is clear that this additional factor is quite useful. This is a log-log plot with the same axes and color scheme, but a different range.

File:Group12pset3problem1plot3.png

Now that we are fairly convinced that our approximation is decently accurate and that our additional factor is justified, we examine the error of our approximation. Our next plot shows the error of our approximation (the absolute value of our approximation minus the numerical integration result) versus <math>\lambda</math> on a log-log plot.

File:Group12pset3problem1plot4.png

In order to test whether this error is properly characterized by our expression above, we plotted our error divided by our expression for the order of the error, on a log-log plot. If our characterization is correct, this new quantity should have no dependence on <math>\lambda</math> for large <math>\lambda</math>.

File:Group12pset3problem1plot5.png

Indeed, the error divided by our expression characterizing the error becomes essentially a flat line at large <math>\lambda</math>. This means that the error of our approximation at large <math>\lambda</math> is in fact our expression characterizing the error times some constant prefactor! For our final plot, we examine how high in <math>\lambda</math> we can check our error characterization by plotting the same quantity, but on a regular plot (no logs) and for a small range of large <math>\lambda</math>.

File:Group12pset3problem1plot6.png

It appears from this plot that our error divided by the characterization drops off immediately, but by evaluating the integral for a particular value above the drop-off, we find that the issue is just that Mathematica's numerical integrator "failed to converge to prescribed accuracy after 9 recursive bisections." Basically, the numerical integrator hit its precision limit and failed, so it returned 0. Our error characterization seems accurate after all!
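One way around such a precision limit is to raise the working precision of the quadrature. The sketch below does this with Python's mpmath library (an illustrative substitute for the Mathematica workflow above, not the tool actually used) and follows the ratio of the error to its predicted order out to larger <math>\lambda</math>.

<syntaxhighlight lang="python">
from mpmath import mp, mpf, exp, sqrt, pi, quad

mp.dps = 30   # 30 significant digits, well beyond ordinary double precision

g = lambda x: (x**4 + 15) / (x**7 + 10 + exp(x))
f = lambda x: x**2 - 4 * x + 6
C = mpf(31) / (138 + exp(2))   # g(2)

def error_over_order(lam):
    lam = mpf(lam)
    # I(lambda) * e^{2*lam}, computed by splitting the interval at the peak x = 2.
    scaled = quad(lambda x: g(x) * exp(-lam * (f(x) - 2)), [1, 2, 8])
    approx = C * sqrt(pi / lam)            # leading-order term, also times e^{2*lam}
    # Error divided by its predicted order lambda^{-3/2} e^{-2*lambda}; should level off.
    return abs(scaled - approx) * lam**mpf('1.5')

for lam in (50, 200, 800):
    print(lam, error_over_order(lam))
</syntaxhighlight>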





Problem 3


Problem 3: A Contour Integral. Carry out the following integral (using contour integration).

Now evaluate the integral using the Height × Width method described in class. How accurate an answer can you generate?

Contour Integration


To evaluate the above integral, we will first look at the following contour integral, which we will find to be very helpful:

where the contour <math>C</math> is the counterclockwise semicircle in the upper half of the complex plane of radius <math>R</math>, with <math>R</math> sufficiently large to encompass all of the poles of the integrand above the real axis. Since all of the poles lie on the unit circle, this condition is equivalent to the statement that <math>R</math> is greater than 1. We will later see that the reason we chose to examine this integral is that the integral over the curved part of the contour goes to 0 as the radius of the contour goes to infinity, which would not have been true of the initial integrand over the same contour; nevertheless, we will still be able to determine the value of the integral in the problem by examining this different integral.
The integrand has 4 poles, all of which lie on the unit circle.
We'd like to use the Cauchy residue theorem, so we need to evaluate the residues of our integrand at the poles enclosed by our contour <math>C</math>. We first note the general form for the residue of a function <math>F(z)</math> at a pole <math>z_0</math> of order <math>n</math>:

<math>\operatorname{Res}_{z=z_0}F(z)=\frac{1}{(n-1)!}\lim_{z\to z_0}\frac{d^{n-1}}{dz^{n-1}}\left[(z-z_0)^n F(z)\right]</math>.

Our poles are of order 1, which means that the above formula reduces to

<math>\operatorname{Res}_{z=z_0}F(z)=\lim_{z\to z_0}(z-z_0)F(z)</math>.

Also, it is well known that when <math>F(z)</math> can be expressed as a quotient of two other functions, <math>F(z)=P(z)/Q(z)</math>, where <math>Q</math> has a simple zero at the pole <math>z_0</math> and <math>P(z_0)\ne 0</math>, then the residue at that pole can be rewritten as

<math>\operatorname{Res}_{z=z_0}F(z)=\frac{P(z_0)}{Q'(z_0)}</math>.

For our purposes, we will work with this final expression. Thus, if we plug in our particular <math>P(z)</math> and <math>Q(z)</math> at each of the enclosed poles, we get

.
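Because the explicit integrand of this problem is not reproduced on this page, the following sketch (assuming Python with NumPy) illustrates the <math>P(z_0)/Q'(z_0)</math> recipe on a purely hypothetical integrand with four simple poles on the unit circle, <math>e^{iz}/(z^4+1)</math>, and checks one residue against a small numerical contour integral.

<syntaxhighlight lang="python">
import numpy as np

# A purely hypothetical integrand for illustration (NOT necessarily the problem's own):
# F(z) = P(z)/Q(z) with P(z) = exp(i z) and Q(z) = z^4 + 1, which has four simple
# poles on the unit circle, two of them above the real axis.
P = lambda z: np.exp(1j * z)
Q = lambda z: z**4 + 1
dQ = lambda z: 4 * z**3                      # Q'(z)
F = lambda z: P(z) / Q(z)

z0 = np.exp(1j * np.pi / 4)                  # one of the poles in the upper half plane
res_formula = P(z0) / dQ(z0)                 # residue via P(z0)/Q'(z0)

# Check: (1/(2*pi*i)) times the contour integral of F around a small circle about z0.
n = 4096
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = z0 + 0.05 * np.exp(1j * theta)
dz_dtheta = 1j * 0.05 * np.exp(1j * theta)
contour = np.sum(F(z) * dz_dtheta) * (2.0 * np.pi / n)
res_numeric = contour / (2j * np.pi)

print("residue from P(z0)/Q'(z0):", res_formula)
print("residue from contour     :", res_numeric)
</syntaxhighlight>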

We can now use the Cauchy residue theorem and rewrite our contour integral as <math>2\pi i</math> times the sum of the residues at the enclosed poles:

<math>\oint_C F(z)\,dz=2\pi i\sum_{\text{poles enclosed by }C}\operatorname{Res}F(z)</math>.

Using Euler's identity, we rewrite this as

.

We then expand and use some basic properties of the sine and cosine functions to rewrite this as

.

We have now successfully computed the integral of our function around our semicircular contour <math>C</math>! However, what we really want is the integral along the real axis from <math>-\infty</math> to <math>\infty</math>. Here we will split our contour integral into two parts, in the manner employed by Carrier for his integral (3-12): one part along the real axis and one along the curved part of the semicircular contour of radius <math>R</math> through the complex plane,

<math>\oint_C F(z)\,dz=\int_{-R}^{R}F(x)\,dx+\int_{\text{arc}}F(z)\,dz</math>,

where the second integral runs along the curved arc. This arc integral is bounded by the product of <math>\pi R</math> (the length of the arc) and the maximum absolute value of the integrand on the arc, and this bound tends to 0 as <math>R\to\infty</math>. Thus the arc integral vanishes as <math>R\to\infty</math>, and

<math>\int_{-\infty}^{\infty}F(x)\,dx=\oint_C F(z)\,dz</math>.

The limits of integration of both integrals are centered about zero, and sine is odd about zero, so the integral of the sine term is 0. Thus only the cosine term contributes, and therefore the integral we want is equal to the value of the contour integral computed above. We confirm the accuracy of this solution against a solution generated by numerical integration in the plot below.
File:Group12pset3problem3plot1.png
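For illustration only, and again using the hypothetical integrand <math>\cos(x)/(x^4+1)</math> (an assumption; it is not necessarily the integrand of this problem), one can verify numerically that the sine part integrates to essentially zero over symmetric limits while the cosine part matches <math>2\pi i</math> times the sum of the enclosed residues.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

# Hypothetical example (an assumption, since the problem's integrand is not reproduced
# above): integrate cos(x)/(x^4 + 1) over the real line by closing e^{iz}/(z^4 + 1)
# in the upper half plane and summing residues at the two enclosed simple poles.
poles_uhp = [np.exp(1j * np.pi / 4), np.exp(3j * np.pi / 4)]
residues = [np.exp(1j * z0) / (4 * z0**3) for z0 in poles_uhp]   # P(z0)/Q'(z0)
contour_value = 2j * np.pi * sum(residues)                        # residue theorem

cos_part, _ = quad(lambda x: np.cos(x) / (x**4 + 1), -50.0, 50.0, limit=200)
sin_part, _ = quad(lambda x: np.sin(x) / (x**4 + 1), -50.0, 50.0, limit=200)

print("2*pi*i * (sum of residues) :", contour_value)   # numerically real
print("integral of the cosine part:", cos_part)        # matches the real part above
print("integral of the sine part  :", sin_part)        # odd integrand, symmetric limits
</syntaxhighlight>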