Contents
Problem 1
1.1 Problem definition
1.2 Solution of the problem
1.2.1 Linear interpolation
1.2.2 Method of least squares interpolation
1.2.3 Lagrange interpolating polynomial
1.2.4 Cubic spline interpolation
1.3 Results and discussion
1.3.1 Lagrange polynomial
Problem 2
2.1 Problem definition
2.2 Problem solution
2.2.1 Rectangular method
2.2.2 Trapezoidal rule
2.2.3 Simpson's rule
2.2.4 Gauss-Legendre method and Gauss-Chebyshev method
Problem 3
3.1 Problem definition
3.2 Problem solution
Problem 4
4.1 Problem definition
4.2 Problem solution
Problem 1

1.1 Problem definition
For the following data set, please discuss the possibility of obtaining a reasonable interpolated value at />, />, and /> via at least 4 different interpolation formulas you have learned this semester.
/>
/>

1.2 Solution of the problem
Interpolation is a method of constructing new data points within the range of a discrete set of known data points.
In engineering and science one often has a number of data points, as obtained by sampling or experimentation, and tries to construct a function which closely fits those data points. This is called curve fitting or regression analysis. Interpolation is a specific case of curve fitting, in which the function must go exactly through the data points.
First we have to plot the data points: such a plot provides a better picture for analysis than raw data arrays.
The following four interpolation methods will be discussed in order to solve the problem:
· Linear interpolation
· Method of least squares interpolation
· Lagrange interpolating polynomial
· Cubic spline interpolation

Fig 1. Initial data points

1.2.1 Linear interpolation
One of the simplest methods is linear interpolation (sometimes known as lerp). Generally, linear interpolation takes two data points, say $(x_a, y_a)$ and $(x_b, y_b)$, and the interpolant is given by:

$y = y_a + (y_b - y_a)\dfrac{x - x_a}{x_b - x_a}$ at the point $(x, y)$.

Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the data points $x_k$.
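To make the formula concrete, here is a minimal Python sketch of the piecewise linear interpolant; the arrays xs and ys are hypothetical stand-ins for the tabulated data, which is not reproduced here.

    def linear_interp(xs, ys, x):
        # Piecewise linear interpolant through the points (xs[i], ys[i]);
        # xs is assumed to be sorted in increasing order.
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("x lies outside the data range")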
1.2.2 Method of least squares interpolation
The method of least squares is an alternative to interpolation for fitting a function to a set of points. Unlike interpolation, it does not require the fitted function to intersect each point. The method of least squares is probably best known for its use in statistical regression, but it is used in many contexts unrelated to statistics.
Fig 2. Plot of the data with linear interpolation superimposed
Generally, if we have $n$ data points, there is exactly one polynomial of degree at most $n-1$ going through all the data points. The interpolation error is proportional to the distance between the data points to the power $n$. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So we see that polynomial interpolation solves all the problems of linear interpolation.
However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive compared to linear interpolation. Furthermore, polynomial interpolation may not be so exact after all, especially at the end points. These disadvantages can be avoided by using spline interpolation.
Example of construction of a polynomial by the least squares method
The data are given by a table of values $(x_i, y_i)$, $i = 1, \dots, n$.
The polynomial is given by the model:

$P_m(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_m x^m$
In order to find the optimal parameters $a_0, a_1, \dots, a_m$, the following substitution is executed: />, />, …, />
Then: />
The error function:

$E(a_0, \dots, a_m) = \sum_{i=1}^{n} \left( P_m(x_i) - y_i \right)^2$
It is necessary to find the parameters $a_0, \dots, a_m$ which provide the minimum of the function $E$:

$\dfrac{\partial E}{\partial a_k} = 2 \sum_{i=1}^{n} \left( P_m(x_i) - y_i \right) x_i^{\,k} = 0, \quad k = 0, \dots, m$

This yields the system of normal equations

$\sum_{j=0}^{m} a_j \sum_{i=1}^{n} x_i^{\,j+k} = \sum_{i=1}^{n} y_i x_i^{\,k}, \quad k = 0, \dots, m,$

or, in matrix form, $(A^{T} A)\, a = A^{T} y$, where $A_{ij} = x_i^{\,j}$.

It should be noted that the matrix $A^{T} A$ must be nonsingular.
For the given data points the matrix $A^{T} A$ becomes singular, which makes it impossible to construct a polynomial of order $n-1$, where $n$ is the number of data points, so we will use a polynomial of order />.
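As a sketch of this procedure, the following Python fragment fits a degree-m polynomial by solving the normal equations $(A^{T} A)\, a = A^{T} y$; the data arrays and the degree are assumptions for illustration. (In practice numpy.polyfit does the same job with better numerical conditioning.)

    import numpy as np

    def least_squares_poly(xs, ys, m):
        # Vandermonde matrix A[i, j] = xs[i]**j; solve (A^T A) a = A^T y.
        A = np.vander(np.asarray(xs, dtype=float), m + 1, increasing=True)
        return np.linalg.solve(A.T @ A, A.T @ np.asarray(ys, dtype=float))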
Fig 3. Plot of the data with polynomial interpolation superimposed
Because the polynomial is forced to intersect every point, it weaves up and down.

1.2.3 Lagrange interpolating polynomial
The Lagrange interpolating polynomial is the polynomial $P(x)$ of degree $n-1$ that passes through the $n$ points $(x_1, y_1)$, $(x_2, y_2)$, …, $(x_n, y_n)$ and is given by:

$P(x) = \sum_{j=1}^{n} P_j(x)$,

where

$P_j(x) = y_j \prod_{\substack{k=1 \\ k \neq j}}^{n} \dfrac{x - x_k}{x_j - x_k}.$

Written explicitly:

$P(x) = \dfrac{(x - x_2)(x - x_3) \cdots (x - x_n)}{(x_1 - x_2)(x_1 - x_3) \cdots (x_1 - x_n)}\, y_1 + \dfrac{(x - x_1)(x - x_3) \cdots (x - x_n)}{(x_2 - x_1)(x_2 - x_3) \cdots (x_2 - x_n)}\, y_2 + \dots + \dfrac{(x - x_1)(x - x_2) \cdots (x - x_{n-1})}{(x_n - x_1)(x_n - x_2) \cdots (x_n - x_{n-1})}\, y_n.$
When constructing interpolating polynomials, there is a tradeoff between having a better fit and having a smooth, well-behaved fitting function. The more data points that are used in the interpolation, the higher the degree of the resulting polynomial, and therefore the greater oscillation it will exhibit between the data points. Therefore, a high-degree interpolation may be a poor predictor of the function between points, although the accuracy at the data points will be «perfect.»
Fig 4. Plot of the data with the Lagrange interpolating polynomial superimposed
One can see that the Lagrange polynomial exhibits many oscillations due to its high order.
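For illustration, a direct Python evaluation of the Lagrange form defined above; xs and ys again stand in for the tabulated data.

    def lagrange(xs, ys, x):
        # P(x) = sum_j y_j * prod_{k != j} (x - x_k) / (x_j - x_k)
        total = 0.0
        for j in range(len(xs)):
            basis = 1.0
            for k in range(len(xs)):
                if k != j:
                    basis *= (x - xs[k]) / (xs[j] - xs[k])
            total += ys[j] * basis
        return total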
1.2.4 Cubic spline interpolation
Remember that linear interpolation uses a linear function on each of the intervals $[x_k, x_{k+1}]$. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline. For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points.
Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation and the interpolant is smoother. However, the interpolant is easier to evaluate than the high-degree polynomials used in polynomial interpolation. It also does not suffer from Runge's phenomenon.
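A minimal sketch of natural cubic spline interpolation using SciPy; the node data here are hypothetical, since the original table is not reproduced.

    import numpy as np
    from scipy.interpolate import CubicSpline

    xs = np.array([0.0, 1.0, 2.0, 3.0])              # hypothetical nodes
    ys = np.array([0.0, 0.8, 0.9, 0.1])              # hypothetical values
    spline = CubicSpline(xs, ys, bc_type='natural')  # S'' = 0 at both ends
    print(spline(1.5))                               # interpolated value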
Fig 5. Plot of the data with cubic spline interpolation superimposed
It should be noted that the cubic spline curve behaves like a metal ruler fixed at the nodal points; one can see that such an interpolation method cannot be used for modeling sudden jumps in the data.

1.3 Results and discussion
The following results were obtained by employing the described interpolation methods at the points />; />; />:
Point   Linear interpolation   Least squares interpolation   Lagrange polynomial   Cubic spline   Root mean square
/>      0.148                  0.209                         0.015                 0.14           0.146
/>      0.678                  0.664                         0.612                 0.641          0.649
/>      1.569                  1.649                         1.479                 1.562          1.566

Table 1. Results of interpolation by different methods at the given points.
In order to determine the best interpolation method for the current case, a table of the deviations between the interpolation results and the root mean square should be constructed; as the number of interpolation methods increases, the RMS value becomes closer to the true value. For each of the three points the deviations of the linear interpolation, least squares interpolation, Lagrange polynomial, and cubic spline results from the RMS were computed, together with the average deviation from the RMS for each method.

Table 2. Deviations of the interpolation results from the RMS, with the average deviation for each method.
One can see that cubic spline interpolation gives the best results among the discussed methods, but it should be noted that a cubic spline sometimes gives a wrong interpolation, especially near sudden changes of the function. Good interpolation results are also provided by the linear interpolation method, although this method actually gives average values on each segment between the values at its boundaries.
Problem 2

2.1 Problem definition
For the above mentioned data set, suppose you are asked to give the integration of /> between the two ends /> and />. Please discuss the possible accuracies of all the numerical integration formulas you have learned this semester.

2.2 Problem solution
In numerical analysis, numerical integration constitutes a broad family of algorithms for calculating the numerical value of a definite integral.
There are several reasons for carrying out numerical integration. The integrand $f(x)$ may be known only at certain points, such as those obtained by sampling. Some embedded systems and other computer applications may need numerical integration for this reason.
A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative which is an elementary function. An example of such an integrand is $e^{-x^2}$, the antiderivative of which cannot be written in elementary form.
It may be possible to find an antiderivative symbolically, but it may be easier to compute a numerical approximation than to compute the antiderivative. That may be the case if the antiderivative is given as an infinite series or product, or if its evaluation requires a special function which is not available.
The following methods were described in this semester:
· Rectangular method
· Trapezoidal rule
· Simpson's rule
· Gauss-Legendre method
· Gauss-Chebyshev method

2.2.1 Rectangular method
The most straightforward way to approximate the area under a curve is to divide up the interval along the x-axis between $a$ and $b$ into a number of smaller intervals, each of the same length. For example, if we divide the interval into $n$ subintervals, then the width of each one will be given by:

$h = \dfrac{b - a}{n}$

The approximate area under the curve is then simply the sum of the areas of all the rectangles formed by our subintervals:

$\int_a^b f(x)\,dx \approx h \sum_{i=0}^{n-1} f(x_i)$

The summary approximation error for $n$ intervals with width $h$ is less than or equal to

$\dfrac{(b - a)\, h}{2} \max_{x \in [a,b]} \left| f'(x) \right|$
Since it is impossible to calculate the maximum of the derivative for a function known only at discrete points, we can estimate the integration error by the value:

/>
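A minimal Python sketch of the rectangular (left-endpoint) rule described above:

    def rectangle_rule(f, a, b, n):
        # Sum of n left-endpoint rectangles of width h = (b - a) / n.
        h = (b - a) / n
        return h * sum(f(a + i * h) for i in range(n))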
2.2.2 Trapezoidal rule
The trapezoidal rule is a way to approximately calculate a definite integral. The trapezium rule works by approximating the region under the graph of the function $f(x)$ by a trapezium and calculating its area. It follows that

$\int_a^b f(x)\,dx \approx (b - a)\, \dfrac{f(a) + f(b)}{2}$

To calculate this integral more accurately, one first splits the interval of integration $[a, b]$ into $n$ smaller subintervals, and then applies the trapezium rule on each of them. One obtains the composite trapezium rule:

$\int_a^b f(x)\,dx \approx \dfrac{h}{2} \left[ f(x_0) + 2 f(x_1) + \dots + 2 f(x_{n-1}) + f(x_n) \right]$
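A minimal Python sketch of the composite trapezium rule just stated:

    def trapezoid_rule(f, a, b, n):
        # Composite trapezium rule with n subintervals of width h.
        h = (b - a) / n
        interior = sum(f(a + i * h) for i in range(1, n))
        return h * (0.5 * (f(a) + f(b)) + interior)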
The summary approximation error for $n$ intervals with width $h$ is less than or equal to:

$\dfrac{(b - a)\, h^2}{12} \max_{x \in [a,b]} \left| f''(x) \right|$

2.2.3 Simpson's rule
Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:

$\int_a^b f(x)\,dx \approx \dfrac{b - a}{6} \left[ f(a) + 4 f\!\left( \dfrac{a + b}{2} \right) + f(b) \right]$
If the interval of integration $[a, b]$ is in some sense «small», then Simpson's rule will provide an adequate approximation to the exact integral. By small, what we really mean is that the function being integrated is relatively smooth over the interval $[a, b]$. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results.
However, it is often the case that the function we are trying to integrate is not smooth over the interval. Typically, this means that either the function is highly oscillatory, or it lacks derivatives at certain points. In these cases, Simpson's rule may give very poor results. One common way of handling this problem is by breaking up the interval $[a, b]$ into a number of small subintervals. Simpson's rule is then applied to each subinterval, with the results being summed to produce an approximation for the integral over the entire interval. This sort of approach is termed the composite Simpson's rule.
Suppose that the interval $[a, b]$ is split up into $n$ subintervals, with $n$ an even number. Then the composite Simpson's rule is given by

$\int_a^b f(x)\,dx \approx \dfrac{h}{3} \left[ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + \dots + 2 f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) \right]$
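A minimal Python sketch of the composite Simpson's rule just stated:

    def simpson_rule(f, a, b, n):
        # Composite Simpson's rule; n must be even.
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        odd = sum(f(a + i * h) for i in range(1, n, 2))
        even = sum(f(a + i * h) for i in range(2, n, 2))
        return (h / 3.0) * (f(a) + 4.0 * odd + 2.0 * even + f(b))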
The error committed by the composite Simpson's rule is bounded (in absolute value) by

$\dfrac{h^4}{180}\, (b - a) \max_{x \in [a,b]} \left| f^{(4)}(x) \right|$

2.2.4 Gauss-Legendre method and Gauss-Chebyshev method
Since the function values are given at fixed points, only the two-point Gauss-Legendre method can be applied. If $f$ is continuous on $[-1, 1]$, then

$\int_{-1}^{1} f(x)\,dx \approx G_2(f) = f\!\left( -\dfrac{1}{\sqrt{3}} \right) + f\!\left( \dfrac{1}{\sqrt{3}} \right)$

The Gauss-Legendre rule $G_2(f)$ has degree of precision $3$. If $f \in C^4[-1, 1]$, then

$\int_{-1}^{1} f(x)\,dx = f\!\left( -\dfrac{1}{\sqrt{3}} \right) + f\!\left( \dfrac{1}{\sqrt{3}} \right) + E_2(f)$,

where

$E_2(f) = \dfrac{f^{(4)}(c)}{135}$ for some $c \in (-1, 1)$.
It should be noted that even in the case of the two-point method we have to calculate values of the function at the nodes $\pm 1/\sqrt{3}$; these values can be evaluated by linear interpolation (because it is necessary to avoid oscillations), so estimation of the integration error becomes a very complicated process, but this error will be less than or equal to that of the trapezoidal rule.
The mechanism of the Gauss-Chebyshev method is almost the same as described above, and the integration error will be almost the same, so there is no reason to use such methods for the current problem.
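For completeness, a sketch of the two-point Gauss-Legendre rule, with the nodes $\pm 1/\sqrt{3}$ mapped from $[-1, 1]$ onto a general interval $[a, b]$:

    import math

    def gauss_legendre_2(f, a, b):
        # Map the nodes +-1/sqrt(3) from [-1, 1] onto [a, b].
        mid = 0.5 * (a + b)
        half = 0.5 * (b - a)
        t = 1.0 / math.sqrt(3.0)
        return half * (f(mid - half * t) + f(mid + half * t))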
Problem 3

3.1 Problem definition
It is well known that the third order Runge-Kutta method is of the following form
/>, />
/>
/>
Suppose that you are asked to derive a new third order Runge-Kutta method in the following form
/>, />
/>
/>
Determine the unknowns />, />, /> and /> so that your scheme is a third order Runge-Kutta method.

3.2 Problem solution
In general, the Runge-Kutta method looks like:

/>,

where the coefficients /> can be calculated by the scheme:
/>
/>
/>
The error function:
/>
The coefficients />, />, /> must be found to satisfy the conditions />; consequently, we can suppose that for each order of the Runge-Kutta scheme those coefficients are determined uniquely, meaning that there are no two different third order schemes with different coefficients. Now it is necessary to prove this statement.
For />, />:
/>
/>; />
/>; />
/>; />; />
/>
/>
/>
/>
/>
/>; />
/>
/>; />; />
Thus we have a system of equations:
/>
Some of the coefficients are already predefined:
/>; />; />; />; />; />; />; />
/>
/>
/>
/>
The obtained results show that the Runge-Kutta scheme for each order is unique.
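For illustration only, here is one standard third order Runge-Kutta step (Kutta's classical choice of coefficients) in Python; it shows the structure of such a scheme and is not necessarily the particular scheme derived above.

    def rk3_step(f, t, y, h):
        # Kutta's classical third order scheme.
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
        return y + h * (k1 + 4.0 * k2 + k3) / 6.0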
Problem 4

4.1 Problem definition
Discuss the stability problem of solving the ordinary differential equation />, /> via the Euler explicit scheme />, say />. If />, how would you apply your stability restriction?

4.2 Problem solution
The Euler method is first order accurate. The given scheme can be rewritten in the form:
/>
If /> has a magnitude greater than one, then /> will tend to grow with increasing /> and may eventually dominate over the required solution. Hence the Euler method is stable only if /> or:
/>
For the case />, the above mentioned inequality looks like:
/>
The last result shows that the integration step must be less than or equal to />.
For the case />, the iteration coefficient looks like
/>
/>
As the step /> is positive, the value of the function /> must be less than />. There are two ways to define the best value of the step />: the first one is to define the maximum value of the function /> on the integration area; another way is to use a different /> for each value />, where />. So the integration step strongly depends on the value of />.
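A minimal sketch of the stability restriction on the assumed test equation $y' = -\lambda y$ (the original equation is not reproduced here): explicit Euler gives $y_{n+1} = (1 - h\lambda)\, y_n$, which decays only while $h \le 2/\lambda$.

    def euler_decay(lam, h, steps, y0=1.0):
        # Explicit Euler for y' = -lam * y: each step multiplies y by (1 - h * lam).
        y = y0
        for _ in range(steps):
            y = (1.0 - h * lam) * y
        return y

    lam = 10.0                           # hypothetical decay rate
    print(euler_decay(lam, 0.19, 100))   # h < 2/lam = 0.2: stable, decays
    print(euler_decay(lam, 0.21, 100))   # h > 2/lam: unstable, grows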