
Statistical Inference I (Unit 2): Cramer-Rao Inequality, Method of Moments, Maximum Likelihood Estimator

I. Introduction

We have seen that from two distinct unbiased estimators of θ we can construct infinitely many unbiased estimators of θ; among these estimators we find the best one by comparing their variances or mean square errors. But in some examples more than one natural estimator is available, as in the following case.

For the normal distribution:

If X1, X2, ..., Xn is a random sample from a normal distribution with mean μ and variance σ², then T1 = x̄ (the sample mean) and T2 = the sample median are both unbiased estimators of the parameter μ. Checking sufficiency, T1 is a sufficient estimator for μ, hence it is the best estimator of the parameter μ.

Thus, to find the best estimator, we check whether an estimator is sufficient or not.

We are also interested in how small the variance of the best estimator can be; this is discussed in this article.
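As a quick illustration of this claim (an added sketch, not part of the original notes), the following Python snippet simulates normal samples and compares the variances of the sample mean and the sample median; the values μ = 10, σ = 2, n = 25 are arbitrary choices for the demo.

```python
import numpy as np

# Monte Carlo comparison of two unbiased estimators of mu for normal samples:
# the sample mean and the sample median. Both should average to mu,
# but the mean should show the smaller variance.
rng = np.random.default_rng(5)
mu, sigma, n, reps = 10.0, 2.0, 25, 100_000

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("mean:   E ≈", means.mean(),   "  Var ≈", means.var())    # Var ≈ sigma^2 / n = 0.16
print("median: E ≈", medians.mean(), "  Var ≈", medians.var())  # Var ≈ (pi/2) * sigma^2 / n ≈ 0.25
```

Both estimators are centred at μ, while the sample mean shows the clearly smaller variance, in line with the comparison above.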

II. Properties of Probability Mass Function (p.m.f.) or Probability Density Function (p.d.f.)

If X1, X2, ..., Xn are observations from any p.d.f. or p.m.f. f(x, θ), θ ∈ Θ, then the following properties hold:

1. \( \int_{-\infty}^{\infty} f(x, \theta) \, dx = 1 \)

Proof: Since \(f(x, \theta)\) is a probability density function (for a p.m.f., replace the integral by a sum over the support), the total probability is one:

\( \int_{-\infty}^{\infty} f(x, \theta) \, dx = 1 \quad \ldots (1) \)

2. \( \frac{\partial}{\partial\theta} \int_{-\infty}^{\infty} f(x, \theta) \, dx = 0 \)

Proof: Differentiating both sides of (1) with respect to \(\theta\):

\( \frac{\partial}{\partial\theta} \int_{-\infty}^{\infty} f(x, \theta) \, dx = \frac{\partial}{\partial\theta}(1) = 0 \quad \ldots (2) \)

3. \( E\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right] = 0 \)

Proof: Interchanging differentiation and integration in (2) (permissible under regularity conditions) and writing \(\frac{\partial}{\partial\theta} f(x,\theta) = \left(\frac{\partial}{\partial\theta} \log f(x,\theta)\right) f(x,\theta)\):

\( \int_{-\infty}^{\infty} \frac{\partial}{\partial\theta} f(x, \theta) \, dx = \int_{-\infty}^{\infty} \left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right) f(x, \theta) \, dx = 0 \)

\( \therefore E\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right] = 0 \quad \ldots (3) \)

4. \( E\left[\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right] = -E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] \)

Proof: Differentiating \( \int_{-\infty}^{\infty} \left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right) f(x, \theta) \, dx = 0 \) from (3) once more with respect to \(\theta\):

\( \int_{-\infty}^{\infty} \left(\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right) f(x, \theta) \, dx + \int_{-\infty}^{\infty} \left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right) \frac{\partial}{\partial\theta} f(x, \theta) \, dx = 0 \)

Writing \(\frac{\partial}{\partial\theta} f(x,\theta) = \left(\frac{\partial}{\partial\theta} \log f(x,\theta)\right) f(x,\theta)\) in the second integral:

\( E\left[\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right] + E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] = 0 \)

\( \therefore E\left[\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right] = -E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] \quad \ldots (4) \)

5. \( \text{Var}\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right] = -E\left[\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right] = E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] \)

Proof: We have

Var\(\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right]\) = \( E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] - \left(E\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right]\right)^2 \)

Var\(\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right]\) = \( E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] - 0 \), from (3)

Var\(\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right]\) = \( E\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] = -E\left[\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right] \), from (4)
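To make properties 3, 4 and 5 concrete, here is a small Monte Carlo sketch (an illustration added here, not part of the original notes) using the exponential density \(f(x,\theta) = \frac{1}{\theta}e^{-x/\theta}\), which also appears in Example 1 below. The three estimated quantities should agree: approximately 0 for property 3 and approximately \(1/\theta^2\) for properties 4 and 5. The value θ = 2 is an arbitrary choice.

```python
import numpy as np

# Monte Carlo check of properties 3-5 for f(x, theta) = (1/theta) * exp(-x/theta):
#   score:             d/dtheta   log f = -1/theta   + x/theta**2
#   second derivative: d2/dtheta2 log f =  1/theta**2 - 2*x/theta**3
rng = np.random.default_rng(0)
theta = 2.0
x = rng.exponential(scale=theta, size=1_000_000)

score = -1.0 / theta + x / theta**2
second = 1.0 / theta**2 - 2.0 * x / theta**3

print("E[score]              ≈", score.mean())        # property 3: ~ 0
print("E[score^2]            ≈", (score**2).mean())   # property 5: ~ 1/theta^2 = 0.25
print("-E[second derivative] ≈", -second.mean())      # property 4: ~ 1/theta^2 = 0.25
```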

Fisher Information Function:

Definition:

  1. The Fisher information measure (or the amount of information) about the parameter \(\theta\) contained in a single random variable \(X\) is denoted by \(I(\theta)\) and is defined as
     \(I(\theta) = \text{Var}\left[\frac{\partial}{\partial\theta} \log f(x, \theta)\right] = \text{E}\left[\left(\frac{\partial}{\partial\theta} \log f(x, \theta)\right)^2\right] = -\text{E}\left[\frac{\partial^2}{\partial\theta^2} \log f(x, \theta)\right]\)

  2. The Fisher information measure (or the amount of information) about the parameter \(\theta\) contained in a random sample \(X_1, X_2, \ldots, X_n\) of size \(n\) is denoted by \(I_n(\theta)\) and is defined as
     \(I_n(\theta) = \text{Var}\left[\frac{\partial}{\partial\theta} \log L(\theta)\right] = \text{E}\left[\left(\frac{\partial}{\partial\theta} \log L(\theta)\right)^2\right] = -\text{E}\left[\frac{\partial^2}{\partial\theta^2} \log L(\theta)\right]\)

  3. Let \(X_1, X_2, \ldots, X_n\) be a random sample from the distribution of a random variable \(X\), let \(T = T(X_1, X_2, \ldots, X_n)\) be any statistic for the parameter \(\theta\), and let \(g(t, \theta)\) be its probability function. Then the Fisher information function (or the amount of information) about the parameter \(\theta\) contained in the statistic \(T\) is denoted by \(I_T(\theta)\) and is given as
     \(I_T(\theta) = \text{Var}\left[\frac{\partial}{\partial\theta} \log g(t, \theta)\right] = \text{E}\left[\left(\frac{\partial}{\partial\theta} \log g(t, \theta)\right)^2\right] = -\text{E}\left[\frac{\partial^2}{\partial\theta^2} \log g(t, \theta)\right]\)

Properties of Fisher Information Function

Result 1:

Let \(X_1, X_2, \ldots, X_n\) be a random sample from the distribution \(f(x,\theta)\), then \(I_n(\theta) = n I(\theta)\).

Proof:

Let \(X_1, X_2, \ldots, X_n\) be a random sample from the distribution \(f(x,\theta)\), \(\theta \in \Theta\).

The Fisher Information Measure (or the amount of information) about parameter \(\theta\) obtained in random variable \(x\) is denoted as \(I(\theta)\) and is defined as:

\[ I(\theta) = \text{Var}\left[\frac{\partial}{\partial \theta} \log f(x,\theta)\right] = \text{E}\left[\left(\frac{\partial}{\partial \theta} \log f(x,\theta)\right)^2\right] = -\text{E}\left[\frac{\partial^2}{\partial \theta^2} \log f(x,\theta)\right] \]

Now consider the joint probability function or the likelihood function of random variables \(X_1, X_2, \ldots, X_n\), then \(L(\theta) = f(X_1, X_2, \ldots, X_n,\theta)\).

\[ L(\theta) = \prod f(x,\theta) \]

Taking the logarithm on both sides:

\[ \log L(\theta) = \log\left(\prod f(x,\theta)\right) \]

\[ \log L(\theta) = \sum \log f(x,\theta) \]

Differentiating with respect to \(\theta\):

\[ \frac{\partial}{\partial \theta} \log L(\theta) = \sum \frac{\partial}{\partial \theta} \log f(x,\theta) \quad \text{(i)} \]

We have the Fisher information measure (or the amount of information) about the parameter \(\theta\) contained in the sample \(X_1, X_2, \ldots, X_n\), denoted by \(I_n(\theta)\) and defined as:

\[ I_n(\theta) = \text{Var}\left[\frac{\partial}{\partial \theta} \log L(\theta)\right] = \text{E}\left[\left(\frac{\partial}{\partial \theta} \log L(\theta)\right)^2\right] = -\text{E}\left[\frac{\partial^2}{\partial \theta^2} \log L(\theta)\right] \]

Taking the variance of both sides of (i), and using the independence of \(X_1, X_2, \ldots, X_n\) (so the variance of the sum is the sum of the variances of the identically distributed terms):

\[ \text{Var}\left[\frac{\partial}{\partial \theta} \log L(\theta)\right] = \sum \text{Var}\left[\frac{\partial}{\partial \theta} \log f(x,\theta)\right] = n \, \text{Var}\left[\frac{\partial}{\partial \theta} \log f(x,\theta)\right] = n I(\theta) \]

So, \(I_n(\theta) = n I(\theta)\), hence proved.
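As a numerical illustration of Result 1 (added here, not part of the original proof), the sketch below uses a normal sample with known σ, where the per-observation score is \((x-\mu)/\sigma^2\) and \(I(\mu) = 1/\sigma^2\); the variance of the total score of the sample should then be close to \(n I(\mu)\). The particular values are arbitrary demo choices.

```python
import numpy as np

# Result 1 check: Var[ d/dmu log L(mu) ] = n * I(mu) for i.i.d. N(mu, sigma^2) data
# with sigma known, where the per-observation score is (x - mu) / sigma**2.
rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 2.0, 10, 200_000

samples = rng.normal(loc=mu, scale=sigma, size=(reps, n))
total_score = ((samples - mu) / sigma**2).sum(axis=1)   # one total score per replicated sample

print("Var[total score] ≈", total_score.var())   # estimate of I_n(mu)
print("n * I(mu)        =", n / sigma**2)        # Result 1: n * I(theta) = 10 / 4 = 2.5
```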

Result 2:

Show that for any statistic \(T\), \(I_n(\theta) \geq I_T(\theta)\).

Proof:

Let \(X_1, X_2, \ldots, X_n\) be a random sample from the distribution of random variable \(x\), and \(T = T(X_1, X_2, \ldots, X_n)\) be any statistic for the parameter \(\theta\), and \(g(t,\theta)\) is the probability function.

The Fisher information function (or the amount of information) about parameter \(\theta\) contained in statistic \(T\) is given by \(I_T(\theta)\) and is given as:

\[ I_T(\theta) = \text{Var}\left[\frac{\partial}{\partial \theta} \log g(t,\theta)\right] = \text{E}\left[\left(\frac{\partial}{\partial \theta} \log g(t,\theta)\right)^2\right] = -\text{E}\left[\frac{\partial^2}{\partial \theta^2} \log g(t,\theta)\right] \]

Now, consider the joint probability function or the likelihood function of random variables \(X_1, X_2, \ldots, X_n\).

\[ L(\theta) = f(X_1, X_2, \ldots, X_n,\theta) \]

\[ L(\theta) = g(t,\theta) \cdot h(x \mid t, \theta) \]

where \(h(x \mid t, \theta)\) is the conditional distribution of the sample given \(T = t\).

Taking the logarithm of \(L(\theta)\):

\[ \log L(\theta) = \log g(t,\theta) + \log h(x \mid t, \theta) \]

Differentiating with respect to \(\theta\):

\[ \frac{\partial}{\partial \theta} \log L(\theta) = \frac{\partial}{\partial \theta} \log g(t,\theta) + \frac{\partial}{\partial \theta} \log h(x \mid t, \theta) \quad \text{(ii)} \]

By the same argument as property 3, applied to the conditional distribution, \(E\left[\frac{\partial}{\partial \theta} \log h(x \mid t, \theta) \,\middle|\, T = t\right] = 0\), so the two terms on the right-hand side of (ii) are uncorrelated. Taking variances on both sides of (ii):

\[ \text{Var}\left[\frac{\partial}{\partial \theta} \log L(\theta)\right] = \text{Var}\left[\frac{\partial}{\partial \theta} \log g(t,\theta)\right] + \text{E}\left[\text{Var}\left(\frac{\partial}{\partial \theta} \log h(x \mid t, \theta) \,\middle|\, T\right)\right] \geq \text{Var}\left[\frac{\partial}{\partial \theta} \log g(t,\theta)\right] \]

\[ I_n(\theta) \geq I_T(\theta) \]

Remark:

If \(T\) is a sufficient statistic for \(\theta\), then \(I_n(\theta) = I_T(\theta)\), since by the factorization theorem \(h(x \mid t)\) does not depend on \(\theta\) and the second term in (ii) vanishes.
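As a hedged numerical illustration of this remark (added here, not part of the original notes): for an exponential sample with mean θ, the statistic \(T = \sum x_i\) is sufficient, and \(T\) follows a gamma distribution with shape \(n\) and scale \(\theta\), so \(I_T(\theta)\) should match \(I_n(\theta) = n/\theta^2\) (derived in Example 1 below). The sketch estimates \(I_T(\theta)\) as the variance of a finite-difference score of the gamma log-density via scipy.stats; all numerical values are arbitrary demo choices.

```python
import numpy as np
from scipy.stats import gamma

# For exponential(mean theta) data, T = sum(x_i) is sufficient and T ~ Gamma(shape=n, scale=theta).
# Estimate I_T(theta) = Var[ d/dtheta log g(t, theta) ] using a central-difference score.
rng = np.random.default_rng(4)
theta, n, reps, eps = 2.0, 10, 200_000, 1e-4

t = rng.exponential(scale=theta, size=(reps, n)).sum(axis=1)
score_T = (gamma.logpdf(t, a=n, scale=theta + eps)
           - gamma.logpdf(t, a=n, scale=theta - eps)) / (2 * eps)

print("Var[score of T] ≈", score_T.var())   # estimate of I_T(theta)
print("n / theta^2     =", n / theta**2)    # I_n(theta): the full-sample information
```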

Example 1: Fisher Information Function for Exponential Distribution

Let \(X_1, X_2, \ldots, X_n\) be a random sample from the exponential distribution with mean \(\theta\) (rate parameter \(\frac{1}{\theta}\)). The probability density function is:

\[ f(x,\theta) = \begin{cases} \frac{1}{\theta} e^{-\frac{x}{\theta}}, & x \geq 0, \theta > 0 \\ 0, & \text{otherwise} \end{cases} \]

Likelihood Function

The likelihood function of the sample \(X_1, X_2, \ldots, X_n\) is given by:

\[ L(\theta) = \prod f(x,\theta) = \left(\frac{1}{\theta}\right)^n e^{-\sum \frac{x}{\theta}} \]

Taking the logarithm on both sides:

\[ \log L(\theta) = -n \log(\theta) - \sum \frac{x}{\theta} \]

Derivative with Respect to \(\theta\)

Differentiating with respect to \(\theta\):

\[ \frac{\partial}{\partial \theta} \log L(\theta) = -\frac{n}{\theta} + \frac{\sum x}{\theta^2} \]

Second derivative with respect to \(\theta\):

\[ \frac{\partial^2}{\partial \theta^2} \log L(\theta) = \frac{n}{\theta^2} - \frac{2 \sum x}{\theta^3} \]

Fisher Information Function

By definition of the Fisher information function:

\[ I_n(\theta) = -E\left[\frac{\partial^2}{\partial \theta^2} \log L(\theta)\right] = -E\left[\frac{n}{\theta^2} - \frac{2 \sum x}{\theta^3}\right] = -\frac{E(n)}{\theta^2} + \frac{2E(\sum x)}{\theta^3} \]

Since \(E(n) = n\) and \(E(\sum x) = n\theta\), we have:

\[ I_n(\theta) = -\frac{n}{\theta^2} + \frac{2n\theta}{\theta^3} = -\frac{n}{\theta^2} + \frac{2n}{\theta^2} = \frac{n}{\theta^2}\]

Answer:

So, \(I_n(\theta) = \frac{n}{\theta^2} \).
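As a quick cross-check of this answer (an added illustration, with arbitrary values of θ and n), the sketch below estimates \(-E\left[\frac{\partial^2}{\partial\theta^2}\log L(\theta)\right]\) by Monte Carlo and compares it with \(n/\theta^2\).

```python
import numpy as np

# Monte Carlo check of I_n(theta) = n / theta**2 for the exponential density with mean theta,
# using the second-derivative form derived above:
#   d2/dtheta2 log L = n/theta**2 - 2 * sum(x) / theta**3
rng = np.random.default_rng(2)
theta, n, reps = 3.0, 25, 100_000

x = rng.exponential(scale=theta, size=(reps, n))
second_derivative = n / theta**2 - 2.0 * x.sum(axis=1) / theta**3

print("-E[d2/dtheta2 log L] ≈", -second_derivative.mean())
print("n / theta^2          =", n / theta**2)   # 25 / 9 ≈ 2.78
```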

Example 2: Fisher Information Function for Poisson Distribution

Let \(X_1, X_2, \ldots, X_n\) be a random sample from the Poisson distribution with parameter \(\lambda\). The probability mass function is:

\[ f(x,\lambda) = \begin{cases} \frac{e^{-\lambda} \lambda^x}{x!}, & x = 0, 1, 2, \ldots;\ \lambda > 0 \\ 0, & \text{otherwise} \end{cases} \]

Likelihood Function

The likelihood function is given by:

\[ L(\lambda) = \prod f(x,\lambda) = \frac{e^{-n\lambda} \lambda^{\sum x}}{\prod x!} \]

Taking the logarithm on both sides:

\[ \log L(\lambda) = -n\lambda + \sum x \log \lambda - \sum \log (x!) \]

Derivative with Respect to \(\lambda\)

Differentiating with respect to \(\lambda\):

\[ \frac{\partial}{\partial \lambda} \log L(\lambda) = -n + \frac{\sum x}{\lambda} \]

Second derivative with respect to \(\lambda\):

\[ \frac{\partial^2}{\partial \lambda^2} \log L(\lambda) = -\frac{\sum x}{\lambda^2} \]

Fisher Information Function

By definition of the Fisher information function:

\[ I_n(\lambda) = -E\left[\frac{\partial^2}{\partial \lambda^2} \log L(\lambda)\right] = -E\left[-\frac{\sum x}{\lambda^2}\right] = \frac{E(\sum x)}{\lambda^2} \]

Since \(E(\sum x) = n\lambda\), we have:

\[ I_n(\lambda) = \frac{n\lambda}{\lambda^2} = \frac{n}{\lambda} \]

Answer:

So, \(I_n(\lambda) = \frac{n}{\lambda}\).
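A matching Monte Carlo check for the Poisson case (an added illustration with arbitrary λ and n): the score of the sample is \(-n + \frac{\sum x}{\lambda}\), and its variance should be close to \(n/\lambda\).

```python
import numpy as np

# Monte Carlo check of I_n(lambda) = n / lambda for the Poisson p.m.f.,
# using I_n(lambda) = Var[ d/dlambda log L(lambda) ] with score = -n + sum(x)/lambda.
rng = np.random.default_rng(3)
lam, n, reps = 4.0, 20, 200_000

x = rng.poisson(lam=lam, size=(reps, n))
score = -n + x.sum(axis=1) / lam

print("Var[score] ≈", score.var())   # estimate of I_n(lambda)
print("n / lambda =", n / lam)       # derived value: 20 / 4 = 5
```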

Example 3: Fisher Information Function for Normal Distribution

Let \(X_1, X_2, \ldots, X_n\) be a random sample from the normal distribution with mean \(\mu\) and variance \(\sigma^2\). The probability density function is:

\[ f(x, \theta) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2\sigma^2}(x-\mu)^2}, \quad -\infty < x < \infty,\ -\infty < \mu < \infty,\ \sigma^2 > 0 \]

1. \(I(\mu)\)

We have:

\[ \begin{align*} \log f(x, \theta) & = \log \left(\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2\sigma^2}(x-\mu)^2}\right) \\ & = -\log(\sqrt{2\pi}) - \log(\sigma) - \frac{1}{2\sigma^2}(x-\mu)^2 \quad \text{(i)} \end{align*} \]

Differentiating with respect to \(\mu\):

\[ \begin{align*} \frac{\partial}{\partial \mu} \log f(x, \theta) & = \frac{\partial}{\partial \mu} \left(-\log(\sqrt{2\pi}) - \log(\sigma) - \frac{1}{2\sigma^2}(x-\mu)^2\right) \\ & = 0 - 0 - \frac{1}{2\sigma^2}\cdot 2(x-\mu)(-1) \\ & = \frac{x-\mu}{\sigma^2} \end{align*} \]

Second derivative with respect to \(\mu\):

\[ \frac{\partial^2}{\partial \mu^2} \log f(x, \theta) = -\frac{1}{\sigma^2} \]

So, \(I(\mu) = -E\left[\frac{\partial^2}{\partial \mu^2} \log f(x, \theta)\right] = -E\left[-\frac{1}{\sigma^2}\right] = \frac{1}{\sigma^2}\).

2. \(I(\sigma)\)

Differentiating equation (i) with respect to \(\sigma\):

\[ \begin{align*} \frac{\partial}{\partial \sigma} \log f(x, \theta) & = \frac{\partial}{\partial \sigma} \left(-\log(\sqrt{2\pi}) - \log(\sigma) - \frac{1}{2\sigma^2}(x-\mu)^2\right) \\ & = 0 - \frac{1}{\sigma} - \left(\frac{-2}{2\sigma^3}\right)(x-\mu)^2 \\ & = -\frac{1}{\sigma} + \frac{(x-\mu)^2}{\sigma^3} \end{align*} \]

Second derivative with respect to \(\sigma\):

\[ \frac{\partial^2}{\partial \sigma^2} \log f(x, \theta) = \frac{1}{\sigma^2} - \frac{3(x-\mu)^2}{\sigma^4} \]

So, \(I(\sigma) = -E\left[\frac{\partial^2}{\partial \sigma^2} \log f(x, \theta)\right] = -E\left[\frac{1}{\sigma^2} - \frac{3(x-\mu)^2}{\sigma^4}\right] = \frac{2}{\sigma^2}\).

3. \(I(\sigma^2)\)

Let \(\theta = \sigma^2\), and rewrite equation (i):

\[ \log f(x, \theta) = -\log(\sqrt{2\pi}) - \frac{1}{2}\log(\theta) - \frac{1}{2\theta}(x-\mu)^2 \]

Differentiating with respect to \(\theta\):

\[ \begin{align*} \frac{\partial}{\partial \theta} \log f(x, \theta) & = \frac{\partial}{\partial \theta} \left(-\log(\sqrt{2\pi}) - \frac{1}{2}\log(\theta) - \frac{1}{2\theta}(x-\mu)^2\right) \\ & = 0 - \frac{1}{2\theta} - \left(\frac{-1}{2\theta^2}\right)(x-\mu)^2 \\ & = -\frac{1}{2\theta} + \frac{(x-\mu)^2}{2\theta^2} \end{align*} \]

Second derivative with respect to \(\theta\):

\[ \frac{\partial^2}{\partial \theta^2} \log f(x, \theta) = \frac{1}{2\theta^2} - \frac{(x-\mu)^2}{\theta^3} \]

So, \(I(\sigma^2) = -E\left[\frac{\partial^2}{\partial \theta^2} \log f(x, \theta)\right] = -E\left[\frac{1}{2\theta^2} - \frac{(x-\mu)^2}{\theta^3}\right] = \frac{1}{2\theta^2}\).

\[ I(\mu) = \frac{1}{\sigma^2} \]

\[ I(\sigma) = \frac{2}{\sigma^2} \]

\[ I(\sigma^2) = \frac{1}{2\sigma^4} \]
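The three results can also be checked numerically (an added illustration; μ and σ are arbitrary demo values), using the form \(I = E\left[\left(\text{score}\right)^2\right]\) for a single observation:

```python
import numpy as np

# Monte Carlo check for one normal observation:
#   I(mu) = 1/sigma^2, I(sigma) = 2/sigma^2, I(sigma^2) = 1/(2*sigma^4),
# using the scores derived above and I = E[(score)^2].
rng = np.random.default_rng(6)
mu, sigma, reps = 1.0, 1.5, 2_000_000
x = rng.normal(loc=mu, scale=sigma, size=reps)

score_mu     = (x - mu) / sigma**2                                    # d/dmu log f
score_sigma  = -1.0 / sigma + (x - mu)**2 / sigma**3                  # d/dsigma log f
score_sigma2 = -1.0 / (2 * sigma**2) + (x - mu)**2 / (2 * sigma**4)   # d/d(sigma^2) log f

print("E[score_mu^2]     ≈", (score_mu**2).mean(),     " vs 1/sigma^2     =", 1 / sigma**2)
print("E[score_sigma^2]  ≈", (score_sigma**2).mean(),  " vs 2/sigma^2     =", 2 / sigma**2)
print("E[score_sigma2^2] ≈", (score_sigma2**2).mean(), " vs 1/(2*sigma^4) =", 1 / (2 * sigma**4))
```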

Cramer-Rao Inequality

Regularity Conditions:

  • The parameter space \(\Theta\) is an open interval.
  • The support or range of the distribution is independent of \(\theta\).
  • For every \(x\) and \(\theta\), \(\frac{\partial}{\partial\theta} f(x,\theta)\) and \(\frac{\partial^2}{\partial\theta^2} f(x,\theta)\) exist and are finite.
  • The statistic \(T\) has finite mean and variance.
  • Differentiation and integration are permissible, i.e., \(\frac{\partial}{\partial\theta} \int T\, L(x,\theta) \, dx = \int T\, \frac{\partial}{\partial\theta} L(x,\theta) \, dx\).


Cramer-Rao Inequality Statement:

Let \(X_1, X_2, \ldots, X_n\) be a random sample from any probability density function (p.d.f.) or probability mass function (p.m.f.) \(f(x, \theta)\), where \(\theta \in \Theta\). If \(T = T(X_1, X_2, \ldots, X_n)\) is an unbiased estimator of \(\phi(\theta)\) and the above regularity conditions hold, then:

\[ \text{Var}(T) \geq \frac{(\frac{\partial}{\partial\theta}\phi(\theta))^2}{I_n(\theta)} \quad \text{or} \quad \text{Var}(T) \geq \frac{(\phi'(\theta))^2}{nI(\theta)} \]

Proof:

Let \(x\) be a random variable following the p.d.f. or p.m.f. \(f(x,\theta)\), \(\theta \in \Theta\), and \(L(\theta)\) is the likelihood function of a random sample \(X_1, X_2, \ldots, X_n\) from the distribution. Then:

\[L(\theta) = \prod f(x, \theta) = f(X_1, X_2, \ldots, X_n, \theta)\]

\(\int L(\theta) \, dx = 1\)

\(\frac{\partial}{\partial\theta} \int L(\theta) \, dx = 0\)

\(\int \frac{\partial}{\partial\theta} L(\theta) \, dx = 0\) (interchanging differentiation and integration under the regularity conditions)

\(\int \frac{1}{L} \cdot \frac{\partial}{\partial\theta} L(\theta) \, L \, dx = 0\)

\(\int \frac{\partial}{\partial\theta} \log L(\theta) \, L \, dx = 0\)

\[E\left[\frac{\partial}{\partial\theta}\log L(\theta)\right] = 0 \quad \ldots \text{(ii)}\]

And we know that \(T\) is an unbiased estimator of \(\phi(\theta)\), such that:

\[E(T(X)) = \phi(\theta)\]

\(\int T \cdot L(\theta) \, dx = \phi(\theta)\)

Now, differentiating with respect to \(\theta\):

\(\frac{\partial}{\partial\theta} \int T \cdot L(\theta) \, dx = \frac{\partial}{\partial\theta} \phi(\theta)\)

\(\int T \cdot \frac{\partial}{\partial\theta} L(\theta) \, dx = \phi'(\theta)\)

\(\int \frac{1}{L} \cdot \frac{\partial}{\partial\theta} L(\theta) \cdot T \, L \, dx = \phi'(\theta)\)

\(\int \frac{\partial}{\partial\theta} \log L(\theta) \cdot T \cdot L \, dx = \phi'(\theta)\)

\(E\left[\frac{\partial}{\partial\theta} \log L(\theta) \cdot T\right] = \phi'(\theta) \quad \ldots \text{(iii)}\)

And we have:

\(\text{Cov}\left(\frac{\partial}{\partial\theta} \log L(\theta),\, T\right) = E\left(\frac{\partial}{\partial\theta} \log L(\theta) \cdot T\right) - E\left(\frac{\partial}{\partial\theta} \log L(\theta)\right) \cdot E(T) = \phi'(\theta) - 0 \cdot E(T)\), using (ii) and (iii)

\(\text{Cov}\left(\frac{\partial}{\partial\theta} \log L(\theta),\, T\right) = \phi'(\theta) \quad \ldots \text{(iv)}\)

By Cauchy-Schwarz inequality for covariance, we have:

\[\left(\text{Cov}\left(\frac{\partial}{\partial\theta} \log L(\theta),\, T\right)\right)^2 \leq \text{Var}\left(\frac{\partial}{\partial\theta} \log L(\theta)\right) \cdot \text{Var}(T)\]

Using (iv) and \(\text{Var}\left[\frac{\partial}{\partial\theta} \log L(\theta)\right] = I_n(\theta)\):

\[\left(\phi'(\theta)\right)^2 \leq I_n(\theta) \cdot \text{Var}(T)\]

\[\text{Var}(T) \geq \frac{\left(\phi'(\theta)\right)^2}{I_n(\theta)}\]

This is the lower bound given by Cramer-Rao inequality, known as Cramer-Rao Lower Bound.

Remark: If \(T\) is an unbiased estimator of \(\theta\) itself, then \(\phi(\theta) = \theta\) and \(\phi'(\theta) = 1\). In this case, the Cramer-Rao Lower Bound is:

\[\text{Var}(T) \geq \frac{1}{I_n(\theta)}\]
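As a small illustration of the bound (added here with arbitrary values, not part of the original notes), consider again the exponential sample with mean θ from Example 1: x̄ is an unbiased estimator of θ and \(I_n(\theta) = n/\theta^2\), so the Cramer-Rao lower bound is \(\theta^2/n\), and in this particular case the sample mean attains it.

```python
import numpy as np

# Cramer-Rao lower bound check for exponential(mean theta) samples:
# x-bar is unbiased for theta and I_n(theta) = n / theta**2, so the bound is theta**2 / n.
rng = np.random.default_rng(7)
theta, n, reps = 2.0, 30, 200_000

xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
crlb = theta**2 / n   # 1 / I_n(theta), since here phi(theta) = theta and phi'(theta) = 1

print("Var[x-bar] ≈", xbar.var())   # ≈ theta^2 / n: the bound is attained for this estimator
print("CRLB       =", crlb)         # 4 / 30 ≈ 0.133
```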
