
Method of Moments & Maximum Likelihood Estimator: Method, Properties and Examples.

 Statistical Inference I:

Method of Moments:


One of the oldest methods of finding estimators is the Method of Moments; it was introduced by Karl Pearson in 1894.


Method of Moments Estimator

Let X1, X2, …, Xn be a random sample from a population with probability density function (pdf) f(x, θ) or probability mass function (pmf) p(x, θ), where θ denotes the parameters θ1, θ2, …, θk.

If μr' denotes the r-th raw moment about the origin, then μr' = ∫₋∞^∞ x^r f(x, θ) dx for r = 1, 2, …, k ......... Equation i

In general, μ1', μ2', …, μk' will be functions of the parameters θ1, θ2, …, θk.

Let X1, X2, …, Xn be a random sample of size n from the population. The method of moments consists of equating μ1', μ2', …, μk' with the corresponding sample moments m1', m2', …, mk' and solving the resulting k equations (based on Equation i) for θ1, θ2, …, θk to obtain estimators of the parameters.

Where mr' = r-th sample raw moment = (∑ xᵢ^r) / n.

We get μ1' = m1', μ2' = m2', …, μk' = mk'.

Solving these k equations gives the estimators θ̂1, θ̂2, …, θ̂k for the parameters θ1, θ2, …, θk by the method of moments.
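
To make the sample-moment formula concrete, here is a minimal Python sketch (assuming NumPy is available; the function name sample_raw_moment and the data values are purely illustrative, not from the notes):

```python
import numpy as np

def sample_raw_moment(x, r):
    """The r-th sample raw moment about the origin: mr' = (1/n) * sum(xi^r)."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** r)

# Illustrative data (not from the notes)
x = [2.0, 3.0, 5.0, 7.0]
m1 = sample_raw_moment(x, 1)   # first sample moment = sample mean = 4.25
m2 = sample_raw_moment(x, 2)   # second sample moment = mean of squares = 21.75
print(m1, m2)
```

These mr' values are what get equated with the theoretical moments μr' in the procedure below.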

Procedure for finding moment estimators:

  1. First, equate the first sample moment about the origin m1' to the first theoretical moment, i.e., μ1' = m1'.
  2. Then, equate the second sample moment about the origin m2' to the second theoretical moment, i.e., μ2' = m2'.
  3. Continue equating the sample moments about the origin to the theoretical moments until you have as many equations as parameters (i.e., if there are 2 parameters, set up 2 equations; if there are 4 parameters, set up 4 equations).
  4. The equations are solved for the parameters. The resultant values are called method of moment estimators.

Example for Single Parameter:

Let X1, X2, …, Xn be a random sample from a Bernoulli distribution with parameter p. Find an estimator for p by the method of moments.

Answer:

Let X1, X2, …, Xn be a random sample from a Bernoulli distribution with parameter p.

Then P(X = x) = p^x q^(1−x) for x = 0, 1; 0 ≤ p ≤ 1, q = 1 − p

= 0; otherwise.

For the Bernoulli distribution, μ1' = E(X) = ∑ x·P(X = x) = 0·q + 1·p

μ1' = p ........ Equation i

Now, m1' = first sample moment = (∑ xᵢ) / n = X̄ ....... Equation ii

Equating sample moment to theoretical moment, i.e., equating Equation i and ii:

μ1' = m1'

p = X̄

i.e., p̂ = X̄

Therefore, p̂ = X̄ is the moment estimator for parameter p.
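
A quick numerical check of this estimator in Python (assuming NumPy; the seed, sample size, and the value p = 0.3 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p_true = 0.3                                  # illustrative "true" parameter
x = rng.binomial(n=1, p=p_true, size=1000)    # Bernoulli(p) sample of size n = 1000

p_hat = x.mean()                              # method of moments estimate: p-hat = X-bar
print(p_hat)                                  # close to 0.3 for a large sample
```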

Moment Estimators for μ and σ² of the Normal Distribution

Let x1, x2, …, xn be a random sample from a Normal distribution N(μ, σ²). Find the moment estimators for μ and σ².

Answer: We have a random sample of size n from a normal distribution, and its probability density function (pdf) is:

f(x; μ, σ²) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²)),   −∞ < x < ∞, −∞ < μ < ∞, σ² > 0

We have E(x) = μ and V(x) = σ².

Then we find μ1' = m1' and μ2' = m2'.

Two equations are:

μ1' = E(x) = μ (Equation 1)

m1' = (∑ xi) / n = x̄ (Equation 2)

Equating Equation 1 and Equation 2:

μ1' = m1'

μ̂ = x̄

Therefore, x̄ is the method of moment estimator for parameter μ.

Now, μ2' = E(x²) = V(X) + (E(X))²

μ2' = σ² + μ² (Equation 3)

m2' = (∑ xᵢ²) / n (Equation 4)

Equating Equation 3 and Equation 4:

μ2' = m2'

σ² + μ² = (∑ xᵢ²) / n

Substituting the estimator μ̂ = x̄ for μ:

σ̂² + x̄² = (∑ xᵢ²) / n

σ̂² = (∑ xᵢ²) / n − x̄² = (1/n) ∑ (xᵢ − x̄)²  (the sample variance with divisor n)

So, x̄ is the method of moments estimator for μ, and the sample variance is the method of moments estimator for σ².
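
The same computation can be sketched numerically (assuming NumPy; the parameter values, seed, and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
x = rng.normal(loc=10.0, scale=2.0, size=500)     # sample from N(mu = 10, sigma^2 = 4)

mu_hat = x.mean()                                 # mu-hat = x-bar
sigma2_hat = np.mean(x ** 2) - mu_hat ** 2        # sigma^2-hat = (1/n) * sum(xi^2) - x-bar^2

# sigma2_hat equals the divisor-n sample variance; np.var uses the 1/n divisor by default
print(mu_hat, sigma2_hat, np.var(x))
```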

Maximum Likelihood Estimation (MLE)

The method of Maximum Likelihood Estimation (MLE) was introduced by Prof. R. A. Fisher in 1912.

Likelihood Function:

If we have a random sample of size n from a population with density function f(x, θ) where θ ∈ Θ, then the likelihood function of the sample values x₁, x₂, ..., xₙ is given by:

L(θ) = ∏ᵢ₌₁ⁿ f(xᵢ, θ)

Definition:

If x₁, x₂, …, xₙ is a random sample from a distribution with probability density function (PDF) or probability mass function (PMF) f(x, θ), then the value of θ for which the likelihood function is maximized is called the Maximum Likelihood Estimator (MLE) of θ.

Maximum Likelihood Estimation Procedure:

The MLE is obtained by solving dL/dθ = 0, subject to the condition that d²L/dθ² < 0. The same result can be obtained by taking the logarithm of the likelihood function and solving d(log L(θ))/dθ = 0, subject to the condition d²(log L(θ))/dθ² < 0.
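
As an illustration of this procedure, the sketch below maximizes a Bernoulli log-likelihood numerically over a grid of p values (assuming NumPy; the grid search and the simulated data are purely illustrative) and compares the result with the closed-form MLE p̂ = x̄:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
x = rng.binomial(n=1, p=0.3, size=1000)        # illustrative Bernoulli(p) sample

def log_likelihood(p, x):
    """Bernoulli log-likelihood: log L(p) = sum[xi*log(p) + (1 - xi)*log(1 - p)]."""
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Evaluate log L(p) on a fine grid in (0, 1) and take the maximizer
grid = np.linspace(0.001, 0.999, 999)
log_liks = np.array([log_likelihood(p, x) for p in grid])
p_mle = grid[np.argmax(log_liks)]

print(p_mle, x.mean())   # grid maximizer is close to the closed-form MLE, x-bar
```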

Properties:

  • Maximum Likelihood estimator may not be unique.
  • If f(x, θ) is a probability density function (PDF) belonging to the exponential family, then the MLE of θ is a function of (1/n) ∑ T(xᵢ).
  • If T is the Minimum Variance Bound Unbiased Estimator (MVBUE) of θ, then it is also the MLE of θ.
  • If T is the MLE of θ, then any function φ(T) is the MLE of φ(θ).
  • MLE is a function of Sufficient Statistics.
  • Asymptotic Normality of MLE: A consistent solution of the likelihood equation is asymptotically normally distributed about the true value of θ.

Remark:

Under the usual regularity conditions, MLEs are consistent estimators, but they need not be unbiased.
