
Statistical Inference I (Theory of Estimation: Efficiency)


In this article we discuss the following terms:

I. Efficiency.

II. Mean Square Error.

III. Consistency.


📚Efficiency: 

We know that two unbiased estimators of a parameter give rise to infinitely many unbiased estimators, since for any weight α the combination αT1 + (1 − α)T2 is again unbiased. Therefore, if a parameter has two (or more) unbiased estimators, the problem is to choose the best one from the class of unbiased estimators, and we need some further criterion to do so. In that situation we examine the variability of the estimator: the measure of variability of an estimator T around its mean is Var(T). Hence, if T is an unbiased estimator of the parameter, its variance measures its precision: the smaller the variance, the greater the precision.
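To make the first claim concrete, here is a minimal simulation sketch (the normal setup and all numbers are my own illustration, not from the article): any weighted combination of two unbiased estimators is again unbiased, so one unbiased pair yields infinitely many unbiased estimators.

```python
# Minimal sketch (illustrative setup): for unbiased T1, T2 and any weight a,
# E[a*T1 + (1-a)*T2] = a*theta + (1-a)*theta = theta, so the combination
# is unbiased for every a.
import numpy as np

rng = np.random.default_rng(42)
theta, reps = 5.0, 100_000

t1 = rng.normal(theta, 2.0, reps)   # draws of an unbiased estimator T1
t2 = rng.normal(theta, 3.0, reps)   # draws of an unbiased estimator T2

for a in (0.0, 0.25, 0.5, 0.9):
    t = a * t1 + (1 - a) * t2       # combined estimator
    print(f"a = {a:4.2f}: empirical mean of T = {t.mean():.4f}")  # all near 5
```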


📑 i. Efficient estimator: An estimator T is said to be an efficient estimator of θ if T is an unbiased estimator of θ and its variance is smaller than that of any other unbiased estimator, i.e. Var(T) < Var(T*),

where T* is any other unbiased estimator of θ.

🔖 ii. Relative Efficiency: If T1 and T2 are two unbiased estimators of a parameter θ with E(T1²) < ∞ and E(T2²) < ∞, then the relative efficiency of T1 with respect to T2 is denoted by Efficiency(T1, T2) and defined as

Efficiency(T1, T2) = V(T1) / V(T2).

Remark: i. If the relative efficiency = 1, it means that T1 and T2 are equally efficient estimators.

ii. If Efficiency(T1, T2) < 1, it means that V(T1) < V(T2),

i.e. T1 is more efficient than T2.

iii. If Efficiency(T1, T2) > 1, it means that V(T1) > V(T2),

i.e. T2 is more efficient than T1.

Example 1: If X1 and X2 are two independent observations from a normal distribution with mean θ and variance σ², show that T1 = (X1 + X2)/2 and T2 = (X1 + 2X2)/3 are unbiased estimators of θ, and also find the relative efficiency Efficiency(T1, T2).

Solution: Since X1 and X2 are two independent observations from a normal distribution with mean θ and variance σ²,

E(X1) = E(X2) = θ.

We have T1 = (X1 + X2)/2 and T2 = (X1 + 2X2)/3, so

E(T1) = E[(X1 + X2)/2] , E(T2) = E[(X1 + 2X2)/3]. Using E(X1) = E(X2) = θ:

E(T1) = (θ + θ)/2 , E(T2) = (θ + 2θ)/3

E(T1) = 2θ/2 , E(T2) = 3θ/3

E(T1) = θ , E(T2) = θ

Thus T1 and T2 are unbiased estimators of the parameter θ.

Now we obtain the variances of T1 and T2. Since X1 and X2 are independent:

V(T1) = V[(X1 + X2)/2] , V(T2) = V[(X1 + 2X2)/3]

V(T1) = [V(X1) + V(X2)]/4 , V(T2) = [V(X1) + 4V(X2)]/9

V(T1) = [σ² + σ²]/4 , V(T2) = [σ² + 4σ²]/9

V(T1) = 2σ²/4 , V(T2) = 5σ²/9

V(T1) = σ²/2 , V(T2) = 5σ²/9

The relative efficiency of T1 relative to T2 is

Efficiency(T1, T2) = V(T1)/V(T2)

                   = (σ²/2) / (5σ²/9)

                   = 9/10

                   = 0.9 < 1

Therefore Efficiency(T1, T2) < 1, which means V(T1) = σ²/2 < V(T2) = 5σ²/9.

Hence T1 is more efficient than T2.
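The result can be checked by simulation. The sketch below (my own illustration; the values θ = 5 and σ = 2 are arbitrary) draws many pairs (X1, X2), computes both estimators, and compares the empirical variance ratio with the theoretical value 0.9.

```python
# Minimal simulation check of Example 1 (theta and sigma chosen arbitrarily):
# T1 = (X1 + X2)/2 and T2 = (X1 + 2*X2)/3 should both average to theta,
# with V(T1)/V(T2) close to 9/10.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, reps = 5.0, 2.0, 200_000

x1 = rng.normal(theta, sigma, reps)
x2 = rng.normal(theta, sigma, reps)

t1 = (x1 + x2) / 2
t2 = (x1 + 2 * x2) / 3

print("mean of T1:", t1.mean())              # ≈ theta = 5 (unbiased)
print("mean of T2:", t2.mean())              # ≈ theta = 5 (unbiased)
print("V(T1)/V(T2):", t1.var() / t2.var())   # ≈ 9/10 = 0.9
```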

 

Example 2: If X1, X2, ……, Xn is a random sample from a normal distribution with mean θ and variance σ², T1 = the sample mean x̄, and T2 = the sample median, find the efficiency of T1 relative to T2, given that the variance of the median (for large n) is V(T2) = πσ²/2n.

Solution: X1, X2, ……, Xn is a random sample from a normal distribution with mean θ and variance σ², so we have V(x̄) = V(T1) = σ²/n and V(T2) = πσ²/2n.

Therefore the efficiency of T1 relative to T2 is

Efficiency(T1, T2) = V(T1)/V(T2)

                   = [σ²/n] / [πσ²/2n]

                   = [σ²/n] × [2n/(πσ²)]

                   = 2/π

                   ≈ 0.6366 < 1

Therefore Efficiency(T1, T2) < 1, which means V(T1) = σ²/n < V(T2) = πσ²/2n.

Hence T1 is more efficient than T2,

i.e. the sample mean is more efficient than the sample median.
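Again this is easy to check numerically. The following sketch (my own illustration; n = 200 and the other constants are arbitrary) estimates the sampling variances of the mean and the median over repeated normal samples; their ratio should be near 2/π ≈ 0.6366.

```python
# Minimal simulation check of Example 2 (constants chosen arbitrarily):
# for normal samples, V(mean) ≈ sigma^2/n and V(median) ≈ pi*sigma^2/(2n),
# so the efficiency V(mean)/V(median) should be close to 2/pi.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, reps = 0.0, 1.0, 200, 20_000

samples = rng.normal(theta, sigma, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("V(mean):   ", means.var())                   # ≈ 1/200 = 0.005
print("V(median): ", medians.var())                 # ≈ pi/(2*200) ≈ 0.00785
print("efficiency:", means.var() / medians.var())   # ≈ 2/pi ≈ 0.6366
```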


II. Mean Square Error (M.S.E.)

Let T1 and T2 be two estimators of a parameter θ, where T1 is an unbiased estimator of θ and T2 is a biased estimator of θ. Suppose the possible values of T1 are spread widely around θ, while the values of T2 lie close to the parameter. In this case T2 may be preferred over T1.

We therefore need to study the variability of an estimator around the parameter itself. The variance measures the variability of an estimator around its mean, so if T1 is an unbiased estimator its variance is a good measure of precision. But if T2 is a biased estimator of the parameter, we take the variability of T2 around θ as the measure of its precision, and this is called the Mean Square Error (M.S.E.).

Mean Square Error: An estimator T is said to be a good estimator of θ if its mean square error is minimum,

i.e. E(T − θ)² ≤ E(T* − θ)²,

where T* is any other estimator of θ.

This criterion is known as the mean square error criterion.


Result: Show that the M.S.E. of T is MSE(T) = Var(T) + b²(T, θ), where b(T, θ) = E(T) − θ is the bias of T.

Proof: 

By definition, M.S.E.(T) = E(T − θ)²

                         = E[T − E(T) + E(T) − θ]²

                         = E{T − E(T)}² + E{E(T) − θ}² + 2E{[T − E(T)][E(T) − θ]}

The cross term vanishes, because E(T) − θ is a constant and E[T − E(T)] = 0. Hence

                         = E{T − E(T)}² + E{E(T) − θ}²

                         = Var(T) + b²(T, θ)

{since Var(T) = E{T − E(T)}² and b²(T, θ) = {E(T) − θ}²}

Therefore the M.S.E. of T is MSE(T) = Var(T) + b²(T, θ).


Definition: Mean Square Error (M.S.E.):

The M.S.E. of an estimator T of a parameter θ is defined as

MSE(T) = Var(T) + b²(T, θ)   if T is a biased estimator of θ;

MSE(T) = Var(T)   if T is an unbiased estimator of θ.

This is the definition of the M.S.E.
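The decomposition MSE(T) = Var(T) + b²(T, θ) can be verified numerically. In the sketch below (the estimator and constants are my own choice, not from the article) T is the biased 1/n sample variance, whose target is σ²; the empirical MSE and Var + bias² should agree up to simulation noise.

```python
# Minimal numeric check of MSE(T) = Var(T) + b^2(T, theta) (illustrative
# estimator): T = (1/n) * sum((x - xbar)^2) is a biased estimator of sigma^2.
import numpy as np

rng = np.random.default_rng(2)
sigma2, n, reps = 4.0, 10, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
t = samples.var(axis=1)                 # ddof=0: the biased 1/n estimator

mse = np.mean((t - sigma2) ** 2)        # E(T - theta)^2
var = t.var()                           # Var(T)
bias_sq = (t.mean() - sigma2) ** 2      # b^2(T, theta)

print("MSE:         ", mse)
print("Var + bias^2:", var + bias_sq)   # matches MSE up to noise
```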


Example 1: If T1 and T2 are two unbiased estimators of a parameter θ with variances σ1² and σ2² respectively, and correlation ρ, find the best linear unbiased combination of T1 and T2, and also find the expression for the variance of that combination.

Solution: We have two unbiased estimators T1 and T2 of the parameter θ with variances σ1² and σ2², i.e.

E(T1) = θ , E(T2) = θ,

V(T1) = σ1² , V(T2) = σ2²,

and the correlation between them is ρ.

Consider a linear combination of the estimators T1 and T2: T = αT1 + (1 − α)T2.

Now taking the expectation:
E(T) = E[αT1 + (1 − α)T2]
     = αE[T1] + (1 − α)E[T2]
     = αθ + (1 − α)θ
     = αθ + θ − αθ
     = θ
E(T) = θ,
therefore T is an unbiased estimator of θ for every α.
Now we find the variance of T:
Var(T) = V[αT1 + (1 − α)T2]
       = α²V(T1) + (1 − α)²V(T2) + 2α(1 − α)Cov(T1, T2)
V(T)   = α²σ1² + (1 − α)²σ2² + 2ρσ1σ2 α(1 − α)
We now find the value of α (and hence 1 − α) for which the variance of T is minimised.
For that we require [dV(T)/dα] = 0 and [d²V(T)/dα²] > 0.
Now set [dV(T)/dα] = 0, i.e. differentiate w.r.t. α:

dV(T)/dα = (d/dα)[α²σ1² + (1 − α)²σ2² + 2α(1 − α)ρσ1σ2]
=> 2ασ1² − 2(1 − α)σ2² + 2(1 − 2α)ρσ1σ2 = 0
   ασ1² − (1 − α)σ2² + (1 − 2α)ρσ1σ2 = 0
   ασ1² − σ2² + ασ2² + ρσ1σ2 − 2αρσ1σ2 = 0
   α(σ1² + σ2² − 2ρσ1σ2) − σ2² + ρσ1σ2 = 0
   α(σ1² + σ2² − 2ρσ1σ2) = σ2² − ρσ1σ2
   α = (σ2² − ρσ1σ2) / (σ1² + σ2² − 2ρσ1σ2)
therefore
1 − α = (σ1² − ρσ1σ2) / (σ1² + σ2² − 2ρσ1σ2)
and
d²V(T)/dα² = (d/dα)[2ασ1² − 2(1 − α)σ2² + 2(1 − 2α)ρσ1σ2]
           = 2σ1² + 2σ2² − 4ρσ1σ2
           = 2(σ1² + σ2² − 2ρσ1σ2)
           = 2[V(T1) + V(T2) − 2Cov(T1, T2)]
           = 2V(T1 − T2) > 0
Hence the variance of T is minimised at
α = (σ2² − ρσ1σ2) / (σ1² + σ2² − 2ρσ1σ2) ,
1 − α = (σ1² − ρσ1σ2) / (σ1² + σ2² − 2ρσ1σ2).
For the variance of T we substitute these values of α and 1 − α into
Var(T) = V[αT1 + (1 − α)T2]
       = α²V(T1) + (1 − α)²V(T2) + 2α(1 − α)Cov(T1, T2)
       = α²σ1² + (1 − α)²σ2² + 2α(1 − α)ρσ1σ2,
which on simplification gives the minimum variance
V(T) = σ1²σ2²(1 − ρ²) / (σ1² + σ2² − 2ρσ1σ2).
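A quick numeric check of both the optimal weight and the simplified minimum variance (the values σ1 = 1, σ2 = 2, ρ = 0.3 are arbitrary; the brute-force grid search is only for verification):

```python
# Minimal sketch verifying the best linear unbiased combination
# (sigma1, sigma2, rho chosen arbitrarily): the closed-form alpha should
# match a brute-force minimiser of V(T), and the minimum should equal
# sigma1^2*sigma2^2*(1-rho^2)/D with D = sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2.
import numpy as np

s1, s2, rho = 1.0, 2.0, 0.3

def var_T(a):
    # V(a*T1 + (1-a)*T2) for unbiased T1, T2 with correlation rho
    return a**2 * s1**2 + (1 - a)**2 * s2**2 + 2 * a * (1 - a) * rho * s1 * s2

D = s1**2 + s2**2 - 2 * rho * s1 * s2
alpha_star = (s2**2 - rho * s1 * s2) / D

grid = np.linspace(0, 1, 100_001)
alpha_grid = grid[np.argmin(var_T(grid))]

print("closed-form alpha:", alpha_star)                      # ≈ 0.8947
print("grid-search alpha:", alpha_grid)                      # ≈ 0.8947
print("V(T) at alpha*:   ", var_T(alpha_star))               # ≈ 0.9579
print("simplified form:  ", s1**2 * s2**2 * (1 - rho**2) / D)
```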





Note that a smaller M.S.E. (i.e. a smaller variance) means greater precision. Hence, while comparing two estimators T1 and T2 of θ, we choose the estimator with the smaller M.S.E. This modifies the formula for efficiency:

Efficiency(T1, T2) = MSE(T1) / MSE(T2)

If Efficiency(T1, T2) < 1,
then T1 is a more efficient estimator than T2,
i.e. MSE(T1) < MSE(T2).

Locally Minimum Variance Unbiased Estimator (LMVUE)
Definition: Let θ0 ∈ Θ and let U(θ0) be the class of all unbiased estimators T of θ such that E(T²) < ∞. Then T0 ∈ U(θ0) is called the locally minimum variance unbiased estimator (LMVUE) at θ0 if E(T0 − θ0)² ≤ E(T − θ0)² holds for all T ∈ U(θ0).

Uniformly Minimum Variance Unbiased Estimator (UMVUE)
Definition: Let U be the class of all unbiased estimators T of θ ∈ Θ such that E(T²) < ∞. An estimator T0 ∈ U is called the uniformly minimum variance unbiased estimator of θ if
E(T0 − θ)² ≤ E(T − θ)² for all θ ∈ Θ and every T ∈ U.

Now we see the definition of the Minimum Variance Unbiased Estimator (MVUE).
Definition: If a statistic T = T(X1, X2, ….., Xn) based on a random sample of size n is such that i) T is an unbiased estimator of γ(θ) for all θ ∈ Θ, and ii) it has the smallest variance among the class of all unbiased estimators of γ(θ), then T is called the Minimum Variance Unbiased Estimator (MVUE) of γ(θ).
OR
T is the MVUE of γ(θ) if
i) E(T) = γ(θ) for all θ ∈ Θ,
ii) V(T) ≤ V(T*),
where T* is any other unbiased estimator of γ(θ).


Result: If the UMVUE exists, then it is unique.
Proof: Suppose that T1 and T2 are two uniformly minimum variance unbiased estimators of the parameter θ. Then
we have E(T1) = θ, E(T2) = θ for all θ ∈ Θ,
and V(T1) = V(T2) for all θ ∈ Θ.          ............(1)
Now define a new estimator T = (T1 + T2)/2, which is unbiased,
since E(T) = E[(T1 + T2)/2]
           = (θ + θ)/2
           = 2θ/2
           = θ.
And the variance of T is
Var(T) = Var[(T1 + T2)/2]
       = {V(T1) + V(T2) + 2Cov(T1, T2)}/4
       = {2V(T1) + 2ρ√(V(T1) V(T2))}/4     [by (1)]
       = {V(T1) + ρV(T1)}/2
       = {V(T1)(1 + ρ)}/2,          ...............(2)
where ρ is the Karl Pearson coefficient of correlation between T1 and T2.
Since T1 is a minimum variance unbiased estimator of θ,
we must have
V(T1) ≤ V(T),
i.e. V(T1) ≤ {V(T1)(1 + ρ)}/2
     1 ≤ (1 + ρ)/2
     2 ≤ 1 + ρ
     1 ≤ ρ.
But |ρ| ≤ 1; therefore we must have ρ = 1,
i.e. T1 and T2 have a linear relationship of the form
T1 = α + βT2,          .........(3)
where α and β are constants not depending on the parameter θ.
Taking expectations:
E(T1) = E(α + βT2)
E(T1) = α + βE(T2)
θ = α + βθ.          ........(4)
Now the variances:

V(T1) = V(α + βT2)
V(T1) = V(βT2)
V(T1) = β²V(T2)
Since V(T1) = V(T2), therefore β² = 1,
β = ±1,
β = +1,          ................(5)
because ρ = 1, i.e. T1 and T2 are positively correlated.
From equations (4) and (5) we get
θ = α + θ,
so α = 0.
Putting β = 1 and α = 0 in equation (3):
T1 = 0 + 1 × T2
T1 = T2.
Thus if the UMVUE exists, it is unique.

III. Consistency:

 In statistical inference, estimators are used to approximate unknown population parameters based on sample data. For an estimator to be considered “good,” it should possess certain desirable properties such as unbiasedness, efficiency, and consistency.

While unbiasedness ensures that an estimator gives the correct value on average, and efficiency ensures that it has the smallest possible variance among unbiased estimators, these properties alone are not sufficient.
It is also essential that the estimator becomes more accurate as the sample size increases. This is where the property of consistency becomes important.

Consistency refers to the long-run reliability of an estimator. An estimator is said to be consistent if it converges to the true value of the population parameter as the sample size increases indefinitely.

Definition: Let X1, X2, ….., Xn be a random sample of size n from a distribution f(x, θ), where θ is an unknown parameter. The sequence of estimators Tn = t(X1, X2, ….., Xn) is said to be a consistent estimator of θ if, for every ε > 0, Tn converges to θ in probability for each θ, i.e. the following holds:

P{ |Tn − θ| ≤ ε } → 1 as n → ∞

Or

P{ |Tn − θ| > ε } → 0 as n → ∞

Note: Therefore a consistent estimator is close to θ with probability approaching one for large sample sizes, i.e. as n → ∞.

Here Tn is said to converge in probability to θ, denoted Tn → θ in probability.

Ex. 1. If X1, X2, ……, Xn is a random sample from a distribution having mean μ and finite variance σ², show that the sample mean is a consistent estimator of μ.

Answer: Here it is given that E(X) = μ and Var(X) = σ².

We know that the sample mean is x̄ = Σxi / n, so

E(x̄) = μ and Var(x̄) = σ²/n.

Now by Chebyshev's inequality,

P{ |x̄ − μ| ≤ ε } ≥ 1 − [Var(x̄)/ε²]

P{ |x̄ − μ| ≤ ε } ≥ 1 − [σ²/nε²]

Taking the limit as n → ∞:

lim(n→∞) P{ |x̄ − μ| ≤ ε } ≥ lim(n→∞) (1 − [σ²/nε²])

lim(n→∞) P{ |x̄ − μ| ≤ ε } ≥ 1 − lim(n→∞) [σ²/nε²]

lim(n→∞) P{ |x̄ − μ| ≤ ε } ≥ 1 − 0

lim(n→∞) P{ |x̄ − μ| ≤ ε } ≥ 1

Since the probability of any event cannot exceed 1,

we conclude lim(n→∞) P{ |x̄ − μ| ≤ ε } = 1,

i.e. x̄ → μ in probability.

Therefore the sample mean is a consistent estimator of μ.
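The convergence in this example can be visualised by simulation. The sketch below (μ, σ, ε and the sample sizes are arbitrary choices of mine) estimates P{ |x̄ − μ| ≤ ε } for growing n; the probability climbs towards 1.

```python
# Minimal sketch of consistency of the sample mean (constants arbitrary):
# P{|xbar - mu| <= eps} should approach 1 as n grows.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, eps, reps = 10.0, 3.0, 0.5, 10_000

for n in (10, 100, 1000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    prob = np.mean(np.abs(xbar - mu) <= eps)
    print(f"n = {n:>5}: P(|xbar - mu| <= {eps}) ≈ {prob:.4f}")
```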

 

Sufficient condition for consistency:

If Tn = t(X1, X2, ….., Xn) is a sequence of estimators such that

i. E(Tn) → θ as n → ∞, and

ii. V(Tn) → 0 as n → ∞,

then Tn is a consistent estimator of θ. A short numeric check is sketched below.
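As a worked check of the sufficient condition (the estimator Tn = Σxi/(n+1) is my own illustrative choice, not from the article): Tn is biased for μ at every finite n, yet E(Tn) = nμ/(n+1) → μ and V(Tn) = nσ²/(n+1)² → 0, so by the condition Tn is consistent.

```python
# Minimal sketch of the sufficient condition (illustrative estimator):
# T_n = sum(x_i)/(n+1) has E(T_n) = n*mu/(n+1) -> mu and
# V(T_n) = n*sigma^2/(n+1)^2 -> 0, so T_n is consistent for mu
# even though it is biased for every finite n.
mu, sigma2 = 10.0, 9.0

for n in (10, 100, 1000, 10_000):
    e_tn = n * mu / (n + 1)             # E(T_n): tends to mu = 10
    v_tn = n * sigma2 / (n + 1) ** 2    # V(T_n): tends to 0
    print(f"n = {n:>6}: E(T_n) = {e_tn:.4f}, V(T_n) = {v_tn:.6f}")
```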
