Statistical Inference I:
Method of Moments:
Let X₁, X₂, …, Xₙ be a random sample from a population with probability density function (pdf) f(x, θ) or probability mass function (pmf) p(x, θ) with parameters θ₁, θ₂, …, θₖ.
If μᵣ' is the r-th raw moment about the origin, then μᵣ' = ∫₋∞^∞ xʳ f(x, θ) dx for r = 1, 2, …, k ......... Equation i
In general, μ₁', μ₂', …, μₖ' will be functions of the parameters θ₁, θ₂, …, θₖ.
The method of moments consists of equating μ₁', μ₂', …, μₖ' (from Equation i) with the corresponding sample moments m₁', m₂', …, mₖ' and solving the resulting k equations for θ₁, θ₂, …, θₖ,
where mᵣ' = r-th sample moment about the origin = (∑ xᵢʳ) / n.
That is, we set μ₁' = m₁', μ₂' = m₂', …, μₖ' = mₖ'.
Solving these k equations gives the k estimators θ̂₁, θ̂₂, …, θ̂ₖ for θ₁, θ₂, …, θₖ by the method of moments.
Procedure for Finding the Moment Estimators:
- First, equate the first sample moment about the origin m1' to the first theoretical moment, i.e., μ1' = m1'.
- Then, equate the second sample moment about the origin m2' to the second theoretical moment, i.e., μ2' = m2'.
- Continue equating sample moments about the origin to the corresponding theoretical moments until there are as many equations as unknown parameters (i.e., for 2 parameters, set up 2 equations; for 4 parameters, 4 equations).
- Solve the equations for the parameters. The resulting values are called method of moments estimators; a minimal computational sketch follows this list.
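As a quick illustration, here is a minimal Python sketch of the sample-moment computation mᵣ' = (∑ xᵢʳ)/n used in the procedure above. It assumes NumPy is available; the helper name sample_raw_moment and the data values are illustrative, not from the text.

```python
import numpy as np

def sample_raw_moment(x, r):
    """r-th sample moment about the origin: m_r' = (1/n) * sum(x_i ** r)."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** r)

# Example: first two sample raw moments of a small data set
data = [2.0, 3.0, 5.0, 7.0]
m1 = sample_raw_moment(data, 1)   # mean(x)    = 4.25
m2 = sample_raw_moment(data, 2)   # mean(x**2) = 21.75
print(m1, m2)
```

These mᵣ' values are then set equal to the theoretical moments μᵣ' and the resulting equations solved for the parameters.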
Example for Single Parameter:
Let X₁, X₂, …, Xₙ be a random sample from a Bernoulli distribution with parameter p. Find an estimator for p by the method of moments.
Answer:
Let X₁, X₂, …, Xₙ be a random sample from a Bernoulli distribution with parameter p.
Then P(X = x) = pˣ q¹⁻ˣ for x = 0, 1; 0 ≤ p ≤ 1, q = 1 − p,
and P(X = x) = 0 otherwise.
For the Bernoulli distribution, μ₁' = E(X) = ∑ x·P(X = x) = 0·q + 1·p
μ₁' = p ......... Equation i
Now, m₁' = first sample moment = (∑ xᵢ) / n = X̄ ......... Equation ii
Equating the sample moment to the theoretical moment, i.e., equating Equations i and ii:
μ1' = m1'
p = X̄
i.e., p̂ = X̄
Therefore, p̂ = X̄ is the moment estimator for parameter p.
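A quick numerical check of this result: the sketch below (assuming NumPy; the seed, sample size, and true value p = 0.3 are arbitrary choices for illustration) simulates a Bernoulli sample and shows the moment estimator p̂ = X̄ landing near the true p.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3
x = rng.binomial(n=1, p=p_true, size=10_000)  # Bernoulli(0.3) sample

p_hat = x.mean()   # moment estimator: p̂ = sample mean
print(p_hat)       # close to 0.3 for a large sample
```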
Moment Estimator for μ and σ²
Let x₁, x₂, …, xₙ be a random sample from a Normal distribution N(μ, σ²). Find the moment estimators for μ and σ².
Answer: We have a random sample of size n from a normal distribution, whose probability density function (pdf) is:
f(x; μ, σ²) = 1 ⁄ (σ√(2π)) · e^(−(x − μ)² / (2σ²)), −∞ < x < ∞; −∞ < μ < ∞; σ² > 0
We have E(X) = μ and V(X) = σ².
We then solve μ₁' = m₁' and μ₂' = m₂'.
The two equations are:
μ₁' = E(X) = μ (Equation 1)
m₁' = (∑ xᵢ) / n = x̄ (Equation 2)
Equating Equation 1 and Equation 2:
μ1' = m1'
μ̂ = x̄
Therefore, x̄ is the method of moment estimator for parameter μ.
Now, μ₂' = E(X²) = V(X) + (E(X))²
μ₂' = σ² + μ² (Equation 3)
m₂' = (∑ xᵢ²) / n (Equation 4)
Equating Equation 3 and Equation 4:
μ2' = m2'
σ² + μ² = (∑ xᵢ²) / n
We already have the estimator μ̂ = x̄ for μ, so substituting:
σ² + x̄² = (∑ xᵢ²) / n
σ̂² = (∑ xᵢ²) / n − x̄² = (1/n) ∑ (xᵢ − x̄)², which is the sample variance (with divisor n).
Therefore, x̄ is the method of moments estimator for μ, and the sample variance is the method of moments estimator for σ². A numerical check follows.
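The following minimal sketch (again assuming NumPy; the true values μ = 5 and σ² = 4 are arbitrary) checks both normal moment estimators numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma2_true = 5.0, 4.0
x = rng.normal(loc=mu_true, scale=np.sqrt(sigma2_true), size=10_000)

mu_hat = x.mean()                        # μ̂ = x̄
sigma2_hat = np.mean(x**2) - mu_hat**2   # σ̂² = m₂' − x̄²  (divisor n)
print(mu_hat, sigma2_hat)                # close to 5 and 4
```

Note that (∑ xᵢ²)/n − x̄² is algebraically identical to np.var(x) with its default divisor n (ddof=0).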
Maximum Likelihood Estimation (MLE)
The method of maximum likelihood estimation (MLE) was introduced by Prof. R. A. Fisher in 1912.
Likelihood Function:
If we have a random sample of size n from a population with density function f(x, θ) where θ ∈ Θ, then the likelihood function of the sample values x₁, x₂, ..., xₙ is given by:
L(θ) = ∏ᵢ₌₁ⁿ f(xᵢ, θ)
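To make the definition concrete, here is a small sketch (assuming NumPy and SciPy; the sample values, the candidate means, and the choice of a normal population with known σ = 1 are illustrative assumptions) that evaluates L(θ) as the product of the densities over the sample:

```python
import numpy as np
from scipy.stats import norm

def likelihood(mu, x, sigma=1.0):
    # L(mu) = product over the sample of the N(mu, sigma^2) densities
    return np.prod(norm.pdf(x, loc=mu, scale=sigma))

x = np.array([4.2, 5.1, 4.8, 5.5, 4.9])
for mu in (4.0, 4.9, 6.0):
    print(mu, likelihood(mu, x))  # largest value occurs near mu = x̄ = 4.9
```

In practice, a product of many densities underflows to zero for large n, which is one reason the log-likelihood is maximized instead (see the procedure below).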
Definition:
If x₁, x₂, …, xₙ is a random sample from any probability density function (pdf) or probability mass function (pmf) f(x, θ), then the value of θ that maximizes the likelihood function is called the Maximum Likelihood Estimator (MLE) of θ.
Maximum Likelihood Estimation Procedure:
The MLE is obtained by solving dL/dθ = 0 subject to the condition d²L/dθ² < 0. Because the logarithm is monotonically increasing, the same result is obtained by maximizing the log-likelihood: solve d(log L(θ))/dθ = 0 subject to d²(log L(θ))/dθ² < 0, which is usually simpler because the product in L(θ) becomes a sum. A numerical sketch of this procedure follows.
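As a minimal sketch of this procedure (assuming NumPy and SciPy; the data vector is illustrative), minimizing the negative Bernoulli log-likelihood numerically recovers the closed-form solution p̂ = x̄ that solving d(log L)/dp = 0 gives analytically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # illustrative Bernoulli sample

def neg_log_likelihood(p):
    # log L(p) = sum(x)*log(p) + (n - sum(x))*log(1 - p)
    s, n = x.sum(), x.size
    return -(s * np.log(p) + (n - s) * np.log(1 - p))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean())  # numeric MLE ≈ closed-form p̂ = x̄ = 0.7
```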
Properties:
- The maximum likelihood estimator may not be unique.
- If f(x, θ) is a probability density function (pdf) belonging to the exponential family, then the MLE of θ is a function of (1/n) ∑ T(xᵢ).
- If T is the Minimum Variance Bound Unbiased Estimator (MVBUE) of θ, then it is also the MLE of θ.
- Invariance: if T is the MLE of θ, then φ(T) is the MLE of φ(θ). For example, since x̄ is the MLE of p for a Bernoulli sample, x̄(1 − x̄) is the MLE of the variance p(1 − p).
- The MLE is a function of the sufficient statistic, when one exists.
- Asymptotic normality of the MLE: a consistent solution of the likelihood equation is asymptotically normally distributed about the true value of θ.
Remark:
Under standard regularity conditions, MLEs are consistent estimators, but they may not be unbiased; for example, the MLE of σ² for a normal sample is (1/n) ∑ (xᵢ − x̄)², which is biased (the unbiased version divides by n − 1).