Errors in Numerical Methods
Types of Error in Numerical Methods
1. True Error
True error is denoted by Et and is defined as the difference between the true value and the approximate value.
formula: True Error (Et) = True Value – Approximate Value
2. Relative Error
Relative error is denoted by Er and is defined as the ratio of the true error to the true value.
formula: Relative Error (Er) = True Error / True Value
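To make the first two definitions concrete, here is a minimal Python sketch (not from the original text) that approximates e with the first four terms of its Maclaurin series and computes the true error and relative error against math.e; the number of terms is an assumption chosen purely for illustration.

import math

# Illustrative values only: approximate e with the first four Maclaurin terms
true_value = math.e
approx_value = sum(1 / math.factorial(n) for n in range(4))   # 1 + 1 + 1/2! + 1/3!

true_error = true_value - approx_value        # Et = True Value - Approximate Value
relative_error = true_error / true_value      # Er = Et / True Value

print(f"Approximation       = {approx_value:.6f}")
print(f"True error (Et)     = {true_error:.6f}")
print(f"Relative error (Er) = {relative_error:.6f}")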
3. Approximate Error
Approximate error is denoted by Ea and is defined as the difference between the present approximation and the previous approximation.
formula: Approximate Error (Ea) = Present Approximation – Previous Approximation
4. Relative Approximate Error
Relative Approximate Error is denoted by Era and is defined as the ratio of the approximate error to the present approximation.
formula: Relative Approximate Error (Era) = Approximate Error / Present Approximation
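The hedged Python sketch below illustrates the approximate error and relative approximate error: it refines an estimate of e one series term at a time and compares each new approximation with the previous one. The tolerance used as a stopping threshold is an assumed value for illustration.

import math

tolerance = 1e-4        # assumed stopping threshold for |Era|
previous = 1.0          # first approximation of e: only the n = 0 term
for n in range(1, 20):
    present = previous + 1 / math.factorial(n)   # add one more series term
    ea = present - previous                      # Ea = Present Approximation - Previous Approximation
    era = ea / present                           # Era = Ea / Present Approximation
    print(f"iteration {n}: approx = {present:.8f}, Ea = {ea:.2e}, Era = {era:.2e}")
    if abs(era) < tolerance:                     # stop once the estimate has settled
        break
    previous = present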
Sources of Error in Numerical Methods
There are mainly three sources of error in numerical computation: truncation, round-off, and uncertainty in the data.
1. Truncation Errors (Discretization or Approximation Errors)
Truncation errors arise from using an approximation in place of an exact mathematical procedure; they result from truncating a numerical process. For example, we often use a finite number of terms to estimate the sum of an infinite series.
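As a small illustration of that last point (not taken from the original text), the Python sketch below truncates the infinite Maclaurin series for e^x after a chosen number of terms and compares the result with math.exp(x); the value of x and the term counts are assumptions.

import math

x = 0.5                                  # assumed evaluation point
for n_terms in (2, 4, 6, 8):             # assumed numbers of retained terms
    approx = sum(x**k / math.factorial(k) for k in range(n_terms))
    truncation_error = math.exp(x) - approx   # error from dropping the remaining terms
    print(f"{n_terms} terms: approx = {approx:.8f}, truncation error = {truncation_error:.2e}")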
2. Round-off Errors
Round-off errors occur when a fixed number of digits is used to represent an exact number. Because numbers are rounded and stored at every stage of a computation, a round-off error can be introduced after every arithmetic operation.
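A brief Python illustration of this idea: 0.1 cannot be represented exactly in binary floating point, so a small rounding error appears immediately and accumulates over repeated additions.

print(0.1 + 0.2 == 0.3)        # False: 0.1 and 0.2 are already rounded in binary

total = 0.0
for _ in range(10_000):
    total += 0.1               # a little round-off error at every addition
print(total)                   # not exactly 1000.0
print(abs(total - 1000.0))     # the accumulated round-off error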
3. Uncertainty Errors (Propagation of Errors)
Uncertainty in the input data may arise in several ways: from errors in measuring physical quantities, from errors in storing the data on the computer, or, if the data is itself the solution to another problem, from errors in an earlier computation.
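As a hedged sketch of how such uncertainty propagates, the Python example below takes an assumed measured radius with an assumed uncertainty and carries that uncertainty into the computed area of a circle using a first-order estimate.

import math

r = 2.5          # assumed measured radius
dr = 0.05        # assumed measurement uncertainty in the radius

area = math.pi * r ** 2
d_area = 2 * math.pi * r * dr   # first-order propagated uncertainty: |dA/dr| * dr

print(f"Area = {area:.4f} +/- {d_area:.4f}")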