A Comparison of Methods for Measuring the Effect Size of Statistical Methods with Different Sample Sizes

Document Type : Original Article

Author

Abstract

The study's objective is to identify the difference between two values ( ) when using the one-factor repeated-measures ANOVA technique, and to identify the difference between the values of ( ) when using the two-factor (mixed) ANOVA with repeated measurements on one factor, across different sample sizes (small, medium, large). Two approaches were used: the comparative approach and the simulation method, using SPSS to generate data and enlarge the sample sizes enough to answer the study's questions.
 
The study population is virtual, created by simulating data for the probabilistic samples and a hypothetical test for a program developing practical skills of middle-school students (1st grade). The target population reached (140) students, and the assumptions required for the statistical methods used were verified. A random sample of (80) students was then drawn and resampled to yield the different study sample sizes of (20, 40, 80). The study's instrument was the students' skill scores on the hypothetical test, administered before and after the program and again two months later; the test covered (4) basic skills. The results showed no significant difference in effect size according to sample size; the difference for the medium sample (n = 40) was slight. For the other sample sizes, in all cases the (F) value of the one-factor repeated-measures ANOVA was statistically significant at a level below 0.001, with a substantial effect size according to the benchmarks of Cohen (1988) for both measures. The effect-size values of the two measures were equal across the different sample sizes for all four skills. In the three cases, the (F) value of the two-factor (mixed) ANOVA with repeated measurements on one factor was not statistically significant for any of the three sample sizes, and the effect size was very small according to Cohen (1988) for both measures. The results confirmed that effect size should be measured using both measures together, not ( ) alone, to ensure the accuracy of the results.
The results confirmed that effect-size measures are not affected by sample size, because they quantify the magnitude of a difference or association rather than being a function of sample size, unlike statistical significance tests, which are affected by it. The results also confirmed the need to verify group equivalence and homogeneity of variance when the one-factor repeated-measures ANOVA (F) test is used, to ensure the integrity of the results and to avoid exaggerated estimates of effect size. The study recommended obligating researchers in the various sciences (especially psychological, social, and educational) to measure the effect size for each research hypothesis using appropriate measures alongside statistical significance tests, and to interpret results in light of both; this would raise the quality of results in future research and encourage researchers to address effect size in every study, giving the concept greater attention in local and Arab educational research, since this issue has not attracted attention in the Arab world as it has in the Western world.
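The central claim above, that effect-size measures stay stable as the sample grows while test statistics and significance do not, can be sketched with a small simulation. The following Python snippet is a hypothetical illustration (not the study's SPSS procedure; the generated pre/post differences and the sizes 20, 40, 80 are assumptions): a fixed set of paired differences is replicated to enlarge the sample, Cohen's d stays essentially constant, and the paired t statistic grows with the square root of n.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated pre/post score differences for a base sample of 20 students
base = rng.normal(loc=1.0, scale=2.0, size=20)

def paired_stats(diff):
    """Cohen's d for paired data and the corresponding paired t statistic."""
    n = len(diff)
    d = diff.mean() / diff.std(ddof=1)  # effect size: independent of n
    t = d * np.sqrt(n)                  # test statistic: grows with sqrt(n)
    return d, t

# Enlarge the sample by replication, as the study did via simulation
for k in (1, 2, 4):                     # n = 20, 40, 80
    diff = np.tile(base, k)
    d, t = paired_stats(diff)
    print(f"n={len(diff):3d}  d={d:.3f}  t={t:.2f}")
```

Running this shows d nearly unchanged across n = 20, 40, 80 while t roughly doubles as n quadruples, so the p-value shrinks even though the effect itself has not changed.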

Main Subjects