📚Statistical Inference I Notes
The theory of estimation was developed by Prof. R. A. Fisher in a series of fundamental papers around 1930.
Statistical inference is a process of drawing conclusions about a population based on the information gathered from a sample. It involves using statistical techniques to analyse data, estimate parameters, test hypotheses, and quantify uncertainty. In essence, it allows us to make inferences about a larger group (i.e. population) based on the characteristics observed in a smaller subset (i.e. sample) of that group.
Notation of parameter: Let X be a random variable having distribution function F (or density function f) describing the population distribution. The constants appearing in the distribution function F are known as parameters. In general, a parameter is denoted by a Greek letter such as θ.
We now look at some basic terms:
i. Population: In statistics, the group of individuals under study is called the population. The population may be a group of objects, animate (such as persons) or inanimate (such as a group of cars). E.g., if we are interested in studying the economic condition of males in Sangli district, then all males in Sangli district constitute the population.
ii. Sample: A sub-group of a population is called a sample. A sample is a portion of the population which is examined in order to estimate the characteristics of the population, and the selected sample should be a true representative of the population.
iii. Parameter: A parameter is a constant based on the population observations. Parameters are usually denoted by Greek letters like θ, 𝝑, μ, σ. E.g., the binomial distribution has parameter p, and the normal distribution has parameters μ and σ. These are only conventional notations; we may use any symbol to represent the parameter of a distribution.
iv. Parameter Space: The set of all admissible values of the parameter θ is called the parameter space, and it is denoted by Θ (read as "script theta"). E.g., if X has a normal distribution with mean μ and variance σ², then the parameter space is Θ = { (μ, σ²) : -∞ < μ < ∞ ; σ² > 0 }.
In particular, if the variance σ² = 1, then the parameter space is Θ = { (μ, 1) : -∞ < μ < ∞ }.
🔖Point Estimation:
Let a random sample of size n be drawn from a distribution f(x), and let θ be an unknown parameter of that distribution. We are interested in finding the value of the parameter, i.e. an estimated value of the parameter. The problem of point estimation is to choose a statistic T(X₁, X₂, ..., Xₙ) that may be considered as an estimate of the parameter θ; the statistic T is said to be a point estimate of θ if it takes a single value in Θ. Thus an estimator of a parameter that gives a single value is called a point estimate of the parameter.
Definition: From a sample we obtain a single value as an estimate of the parameter; we call it a point estimate of the parameter, and the method used to find the estimator is called point estimation or a method of estimation.
Estimator:
An estimator is a function of the random variables (i.e. a function of the sample observations); it is used to estimate the parameter.
Estimate: The numerical value of an estimator, computed from the observed sample, is called an estimate.
Standard Error:
The standard error of an estimator is the positive square root of the variance of the estimator.
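As a small illustration (not part of the original notes), the following Python sketch computes the estimated standard error of the sample mean for one simulated sample; the Normal(10, 2) population and the use of NumPy are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# One sample of size n from an assumed population, here Normal(mean=10, sd=2).
n = 50
sample = rng.normal(loc=10, scale=2, size=n)

# Standard error of x̄ estimated from the sample: s / sqrt(n),
# i.e. the square root of the estimated variance of the estimator x̄.
se_mean = sample.std(ddof=1) / np.sqrt(n)
print("estimated standard error of the sample mean:", se_mean)
```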
Properties of estimators:
i. Unbiasedness
ii. Efficiency
iii. Consistency
iv. Sufficiency
We now look at these properties of an estimator one by one.
i. Unbiasedness:
If the estimated value of the parameter falls, on average, nearest to the true value of the parameter, then this property of the estimator is called unbiasedness.
Definition: An estimator T = T(X₁, X₂, ..., Xₙ) is said to be an unbiased estimator of the parameter θ if E(T) = θ, ∀ θ ∈ Θ.
Thus unbiasedness essentially means that the average value of the estimate equals the true parameter value: if we were to take repeated samples of size n and for each sample compute the observed value of T = T(X₁, X₂, ..., Xₙ),
then E(T) = θ, ∀ θ ∈ Θ.
a) Biased Estimator:
Definition: An estimator T = T(X₁, X₂, ..., Xₙ) is said to be a biased estimator of the parameter θ if E(T) ≠ θ for some θ ∈ Θ. The quantity
b(T, θ) = E(T - θ)
b(T, θ) = E(T) - θ is called the bias of T.
There are two types of bias:
i. Positive Bias and ii. Negative Bias
i. Positive Bias : If the Bias is greater than zero then this bias is called Positive Bias.
i.e. b(T, θ) = E(T- θ) > 0
i.e. E(T) > θ
ii. Negative Bias : If the bias is less than zero then this bias is called Negative Bias.
i.e. b(T, θ) = E(T- θ) < 0
i.e. E(T) < θ
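A small simulation of positive and negative bias (an illustrative example, not from the notes): it compares two estimators of the variance σ² = 4 of a Normal(0, 2) population, namely the sample variance with divisor n (negatively biased) and the unbiased sample variance shifted up by a constant (positively biased). Both estimators are chosen here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 10, 100_000     # true variance, sample size, repetitions

neg = np.empty(reps)                   # divisor n        -> E(T) < σ²  (negative bias)
pos = np.empty(reps)                   # divisor n-1 + c  -> E(T) > σ²  (positive bias)
for i in range(reps):
    x = rng.normal(0.0, 2.0, size=n)
    neg[i] = x.var(ddof=0)             # (1/n) Σ (xᵢ - x̄)²
    pos[i] = x.var(ddof=1) + 0.5       # unbiased estimator shifted upward by 0.5

print("negative bias:", neg.mean() - sigma2)   # ≈ -σ²/n = -0.4
print("positive bias:", pos.mean() - sigma2)   # ≈ +0.5
```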
Examples:
1. Let X₁, X₂, ..., Xₙ be a random sample of size n from a distribution with finite mean μ. Show that the sample mean is an unbiased estimator of μ.
Solution: Let X₁, X₂, ..., Xₙ be a random sample of size n from a distribution with finite mean μ.
Therefore E(Xᵢ) = μ for each i.
By the definition of an unbiased estimator, we must show that
E(T) = θ.
Consider T = sample mean = x̄ = (1/n) ∑Xᵢ
Now E(T) = E(x̄) = E((1/n) ∑Xᵢ)
= (1/n) × ∑E(Xᵢ)
= (1/n) × n × μ
= μ
i.e. E(x̄) = μ.
Therefore the sample mean x̄ is an unbiased estimator of the population mean μ.
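A short simulation sketch of Example 1 (the exponential population with mean μ = 3 is an arbitrary assumption; any distribution with finite mean works): the average of x̄ over many repeated samples comes out close to μ.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n, reps = 3.0, 25, 200_000

# Each row is one sample of size n; take the mean of every row.
xbars = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
print("average of x̄ over repeated samples:", xbars.mean())   # ≈ μ = 3.0
```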
2. Let X₁, X₂, ..., Xₙ be a random sample from a normal distribution with mean μ and variance 1. Show that T = (1/n) ∑Xᵢ² is an unbiased estimator of μ² + 1.
Solution:
Let X follow a normal distribution with mean μ and variance 1, i.e. N(μ, 1).
Then E(X) = μ and V(X) = 1.
By the definition of an unbiased estimator, we must show that
E(T) = θ.
Consider T = (1/n) ∑Xᵢ²
Now E(T) = E[(1/n) ∑Xᵢ²]
= (1/n) × n × E(Xᵢ²) ..................(1)
We know that V(X) = E(X²) - [E(X)]²,
therefore E(Xᵢ²) = V(X) + [E(X)]²
E(Xᵢ²) = 1 + μ²
Putting E(Xᵢ²) = 1 + μ² in equation (1):
E[(1/n) ∑Xᵢ²] = (1/n) × n × (1 + μ²)
= 1 + μ²
i.e. E[(1/n) ∑Xᵢ²] = 1 + μ²
Therefore T = (1/n) ∑Xᵢ² is an unbiased estimator of 1 + μ².
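A quick numerical check of Example 2 (a sketch only; μ = 2 is chosen arbitrarily): the average of T = (1/n) ∑Xᵢ² over many N(μ, 1) samples should be close to 1 + μ² = 5.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, n, reps = 2.0, 20, 200_000

samples = rng.normal(loc=mu, scale=1.0, size=(reps, n))   # variance 1
T = (samples**2).mean(axis=1)                             # (1/n) Σ Xᵢ² for each sample
print("average of T:", T.mean(), "   target 1 + μ²:", 1 + mu**2)
```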
Properties of Unbiasedness.
I) If T is an unbiased estimator of 𝛉 then Ø(T) is an Unbiased estimator of Ø(𝛉). Provided Ø(.) is a linear function.
Proof: It is given that T is an unbiased estimator of 𝛉,
i.e. E(T) = 𝛉,
and that Ø(·) is a linear function.
Let a and b be two constants; then
Ø(T) = aT +b is a linear function of T
therefore, E[Ø(T)] = E (aT +b)
= a E(T) +b
= a𝛉 + b
= Ø(𝛉)      (since Ø(𝛉) = a𝛉 + b)
E[Ø(T)] = Ø(𝛉)
Hence Ø(T) is an unbiased estimator of Ø(𝛉).
Note that this property does not hold when Ø(·) is a non-linear function, as Example 3 below shows.
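The contrast between the linear and non-linear cases can be checked numerically; in this sketch (the setup is an assumption for illustration) T = x̄ from N(μ = 2, σ = 1) samples, Ø₁(t) = 3t + 1 is the linear function and Ø₂(t) = t² the non-linear one.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, n, reps = 2.0, 10, 200_000

xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)   # T = x̄, unbiased for μ

# Linear Ø: expectation matches Ø(μ); non-linear Ø: expectation overshoots μ².
print("E[3T + 1] ≈", (3 * xbar + 1).mean(), "  vs 3μ + 1 =", 3 * mu + 1)
print("E[T²]     ≈", (xbar**2).mean(),      "  vs μ²     =", mu**2)
```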
II. Two distinct unbiased estimators of Ø(𝛉) give rise to infinitely many unbiased estimators of Ø(𝛉).
Proof:
Let T1 and T2 be two distinct unbiased estimators of the parametric function Ø(𝛉), based on a random sample X₁, X₂, ..., Xₙ,
i.e. E(T1) = E(T2) = Ø(𝛉); ∀ 𝛉 ∈ Θ.
Let us consider a linear combination of these two estimators of Ø(𝛉):
T𝛼 = 𝛼T1 + (1 - 𝛼)T2 for any real value 𝛼 ∈ ℝ.
Now E(T𝛼 ) = E [𝛼 T1 + (1- 𝛼) T2 ]
= 𝛼 E( T1) + (1- 𝛼) E( T2 )
= 𝛼 Ø(𝛉) + (1- 𝛼) Ø(𝛉)
= 𝛼 Ø(𝛉) + Ø(𝛉)- 𝛼 Ø(𝛉)
= Ø(𝛉)
E(T𝛼 ) = Ø(𝛉)
Therefore T𝛼 is an unbiased estimator of Ø(𝛉) for any real value of 𝛼.
Since we may take any real value for 𝛼, we obtain infinitely many unbiased estimators.
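To make the proof concrete, the sketch below (an illustration of my own, not from the notes) combines two different unbiased estimators of a normal mean μ, namely the sample mean x̄ and the single observation X₁, and checks that T𝛼 remains unbiased for several values of 𝛼.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, n, reps = 5.0, 15, 200_000

x  = rng.normal(mu, 2.0, size=(reps, n))
T1 = x.mean(axis=1)    # sample mean: unbiased for μ
T2 = x[:, 0]           # first observation alone: also unbiased for μ

for alpha in (-1.0, 0.3, 2.0):                       # any real value of α works
    T_alpha = alpha * T1 + (1 - alpha) * T2
    print(f"α = {alpha:5}:  E(Tα) ≈ {T_alpha.mean():.3f}   (μ = {mu})")
```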
Example: 3. If T is an unbiased estimator of 𝛉, then show that T² is a biased estimator of 𝛉².
Solution: Given that T is an unbiased estimator of 𝛉, we have E(T) = 𝛉.
we have variance of T = Var(T) = E(T²) - [E(T)]²
= E(T²) - [𝛉]²
therefore E(T²) = Var(T) + 𝛉². For any non-degenerate estimator the variance is strictly positive, i.e. Var(T) > 0,
so E(T²) > 𝛉²; that is, E(T²) is not equal to 𝛉² but exceeds it by Var(T),
i.e. E(T²) ≠ 𝛉², which is precisely the definition of a biased estimator.
Hence T² is a biased estimator of 𝛉² (with positive bias).
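Example 3 can also be checked by simulation; in this sketch (an assumed setup) T = x̄ from exponential samples with mean 𝛉 = 3 is unbiased for 𝛉, yet its square overshoots 𝛉² by exactly Var(T) = 𝛉²/n.

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 3.0, 10, 200_000

T = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)   # unbiased for θ
print("E(T²) ≈", (T**2).mean())                     # ≈ θ² + Var(T)
print("θ²    =", theta**2, "   Var(T) = θ²/n =", theta**2 / n)
```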
4. Let X₁, X₂, ..., Xₙ be a random sample from a Poisson distribution with parameter 𝛉. Show that T = 𝛼x̄ + (1 - 𝛼)s² is an unbiased estimator of 𝛉 for any real value of 𝛼, given that x̄ and s² are unbiased estimators of the parameter 𝛉.
Solution: We know that the sample mean x̄ and the sample variance s² are unbiased estimators of 𝛉,
i.e. E(x̄) = E(s²) = 𝛉.
Now T = 𝛼x̄ + (1 - 𝛼)s².
Taking expectations,
E(T) = E(𝛼x̄ + (1 - 𝛼)s²)
E(T) = 𝛼E(x̄) + (1 - 𝛼)E(s²)
E(T) = 𝛼𝛉 + (1 - 𝛼)𝛉
E(T) = 𝛼𝛉 + 𝛉 - 𝛼𝛉
E(T) = 𝛉
Hence T is an unbiased estimator of 𝛉 for any real value of 𝛼, and so we get infinitely many unbiased estimators of the parameter 𝛉.
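A numerical check of Example 4 (a sketch; 𝛉 = 4 and the values of 𝛼 are arbitrary choices): for Poisson data both x̄ and s² (with divisor n - 1) have expectation 𝛉, so the combination 𝛼x̄ + (1 - 𝛼)s² averages to 𝛉 as well.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, reps = 4.0, 30, 100_000

x    = rng.poisson(lam=theta, size=(reps, n))
xbar = x.mean(axis=1)
s2   = x.var(axis=1, ddof=1)          # sample variance with divisor n-1

for alpha in (0.0, 0.5, 1.5):
    T = alpha * xbar + (1 - alpha) * s2
    print(f"α = {alpha}:  E(T) ≈ {T.mean():.3f}   (θ = {theta})")
```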