Statistical Calculator
Descriptive Statistics Calculator
Computes multiple descriptive statistics for a data set including mean, median, mode, range, variance, and standard deviation. Provides a comprehensive overview of your data's distribution.
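For readers who want to reproduce these summaries outside the calculator, here is a minimal sketch using Python's standard statistics module; the sample data is purely illustrative:

    import statistics

    data = [4, 8, 6, 5, 3, 8, 7]

    print("Mean:", statistics.mean(data))
    print("Median:", statistics.median(data))
    print("Mode:", statistics.mode(data))
    print("Range:", max(data) - min(data))
    print("Variance (sample):", statistics.variance(data))
    print("Std dev (sample):", statistics.stdev(data))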
Average Calculator
Calculates the arithmetic mean of a set of numbers. Simply input your values to find their average. Useful for quickly determining the central tendency of a data set.
Standard Deviation Calculator
Calculates both sample and population standard deviation with detailed analysis, normal distribution curve visualization, and professional export options for researchers, students, and analysts.
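The sample/population distinction comes down to the divisor (n − 1 versus n). A short NumPy sketch of that difference, independent of the calculator's own implementation and using invented data:

    import numpy as np

    data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

    population_sd = data.std(ddof=0)  # divide by n
    sample_sd = data.std(ddof=1)      # divide by n - 1 (Bessel's correction)
    print(population_sd, sample_sd)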
Statistical Test Tools
Parametric Tests
One-sample t-test
Compares a sample mean to a known or hypothesized population mean. Used when population standard deviation is unknown. Assumes data is normally distributed.
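As an illustration only, the same test can be run with scipy.stats; the sample values and hypothesized mean below are made up:

    from scipy import stats

    sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
    t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
    print(t_stat, p_value)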
One-sample z-test
Compares a sample mean to a population mean when the population standard deviation is known. Requires larger sample size (n≥30) or normal distribution.
Independent samples t-test
(two-sample t-test or unpaired t-test)
Compares the means of two independent groups to determine if they are statistically different. Assumes equal variances and normal distribution in both groups.
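For illustration, scipy.stats.ttest_ind runs this comparison on hypothetical groups; setting equal_var=False gives the Welch variant described further down the list:

    from scipy import stats

    group_a = [23, 25, 28, 30, 27]
    group_b = [31, 33, 29, 35, 32]

    t_stat, p_value = stats.ttest_ind(group_a, group_b)                    # assumes equal variances
    t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test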
Paired samples t-test
Compares means from the same subjects at two different times or under two different conditions. Used for before-after studies or matched pairs designs.
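A short illustrative sketch with scipy.stats, using made-up before/after measurements from the same subjects:

    from scipy import stats

    before = [200, 195, 210, 188, 205]
    after = [192, 190, 204, 185, 199]
    t_stat, p_value = stats.ttest_rel(before, after)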
Welch's t-test
Modified t-test for comparing two groups with unequal variances. More robust than standard t-test when homogeneity of variance assumption is violated.
Two-sample z-test
Compares means of two independent groups when both population standard deviations are known. Typically used with large samples (n≥30 per group).
One-way ANOVA
Compares means across three or more independent groups for a single factor. Tests if at least one group mean differs from the others.
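For illustration, the same comparison can be run with scipy.stats.f_oneway; the three groups below are invented:

    from scipy import stats

    diet_a = [5.2, 4.8, 6.1, 5.5]
    diet_b = [6.8, 7.1, 6.5, 7.0]
    diet_c = [5.9, 6.2, 5.7, 6.0]
    f_stat, p_value = stats.f_oneway(diet_a, diet_b, diet_c)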
Two-way ANOVA
Analyzes the effect of two independent variables on a dependent variable, including their interaction effect. Used in factorial designs.
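One common way to run a two-way ANOVA in Python is statsmodels' formula interface; the column names and values below are purely hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        "growth": [4.1, 4.5, 5.0, 5.3, 6.2, 6.0, 7.1, 6.8],
        "fertilizer": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "watering": ["low", "low", "high", "high", "low", "low", "high", "high"],
    })

    model = smf.ols("growth ~ C(fertilizer) * C(watering)", data=df).fit()
    print(anova_lm(model, typ=2))  # main effects plus the interaction term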
Repeated measures ANOVA
Compares means when the same subjects are measured multiple times. Accounts for correlation between repeated measurements from the same subject.
MANOVA (Multivariate ANOVA)
Extension of ANOVA for analyzing multiple dependent variables simultaneously. Tests if groups differ on a combination of dependent variables.
ANCOVA (Analysis of Covariance)
Combines ANOVA with regression to control for continuous covariates. Increases statistical power by reducing error variance.
Pearson correlation test
Measures the strength and direction of linear relationship between two continuous variables. Values range from -1 to +1.
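A minimal sketch with scipy.stats.pearsonr on invented paired observations:

    from scipy import stats

    hours_studied = [1, 2, 3, 4, 5, 6]
    exam_score = [52, 55, 61, 64, 70, 74]
    r, p_value = stats.pearsonr(hours_studied, exam_score)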
t-tests for regression coefficients
Tests the significance of individual predictors in a regression model. Determines if each coefficient is significantly different from zero.
F-test
Compares the variances of two populations. Used to test the homogeneity-of-variance assumption before conducting t-tests.
Levene's test
Tests equality of variances across groups. More robust to departures from normality than Bartlett's test.
Bartlett's test
Tests homogeneity of variances across multiple groups. Sensitive to departures from normality, best used with normally distributed data.
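Both variance-homogeneity checks above have direct scipy.stats counterparts; a minimal sketch with invented groups:

    from scipy import stats

    g1 = [12.1, 11.8, 12.5, 12.0]
    g2 = [13.4, 12.9, 13.8, 13.1]
    g3 = [11.5, 11.9, 12.2, 11.7]

    lev_stat, lev_p = stats.levene(g1, g2, g3)      # robust to non-normality
    bart_stat, bart_p = stats.bartlett(g1, g2, g3)  # assumes normality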
Non-Parametric Tests
Kolmogorov–Smirnov Test (one-sample)
Compares a sample distribution to a reference probability distribution. Tests if data follows a specific theoretical distribution (e.g., normal, exponential).
Chi-Square Goodness-of-Fit Test
Tests whether observed categorical frequencies differ from expected frequencies. Used to determine if a sample matches a population distribution.
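For illustration, scipy.stats.chisquare tests observed counts against expected ones (equal frequencies by default); the die-roll counts below are made up:

    from scipy import stats

    observed = [18, 22, 20, 25, 15, 20]        # counts for each die face
    chi2, p_value = stats.chisquare(observed)  # expected defaults to uniform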
Runs Test (Wald–Wolfowitz Runs Test)
Tests the randomness of a sequence of binary data. Detects non-random patterns such as trends or clustering in data.
Binomial Test
Tests whether the observed proportion of successes differs from a hypothesized proportion. Used for binary outcome data.
Sign Test (one-sample)
Tests whether the median of a sample equals a hypothesized value. Simple test based only on the direction of differences from the median.
Mann–Whitney U Test (Wilcoxon Rank-Sum Test)
Non-parametric alternative to independent t-test. Compares distributions of two independent groups without assuming normality.
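A minimal sketch with scipy.stats.mannwhitneyu on hypothetical scores:

    from scipy import stats

    group_a = [14, 18, 20, 25, 27]
    group_b = [31, 33, 35, 38, 40]
    u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")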
Wilcoxon Signed-Rank Test (paired samples)
Non-parametric alternative to paired t-test. Compares paired observations using both the sign and magnitude of differences.
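Illustrative only, using scipy.stats.wilcoxon on invented paired data:

    from scipy import stats

    before = [85, 90, 78, 92, 88, 95]
    after = [80, 86, 80, 87, 84, 90]
    w_stat, p_value = stats.wilcoxon(before, after)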
Kolmogorov–Smirnov Test (two-sample)
Compares the distributions of two samples. Tests if two samples come from the same underlying distribution without specifying what that distribution is.
Chi-Square Test of Independence
Tests whether two categorical variables are independent. Analyzes contingency tables to determine if there's an association between variables.
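A short sketch with scipy.stats.chi2_contingency on a made-up 2x2 contingency table:

    from scipy import stats

    table = [[30, 10],   # e.g. treatment: improved / not improved
             [20, 40]]   # e.g. control:   improved / not improved
    chi2, p_value, dof, expected = stats.chi2_contingency(table)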
Fisher's Exact Test
Exact test for independence in 2x2 contingency tables. Used when sample sizes are small and chi-square test assumptions are violated.
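For small tables, scipy.stats.fisher_exact returns the exact p-value; the counts here are illustrative:

    from scipy import stats

    odds_ratio, p_value = stats.fisher_exact([[3, 1],
                                              [1, 3]])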
Median Test
Compares medians across two or more groups. Tests if samples come from populations with the same median using a contingency table approach.
Wald–Wolfowitz Runs Test
Tests if two independent samples come from populations with the same distribution. Based on the number of runs when observations are ranked.
Kruskal–Wallis H Test
Non-parametric alternative to one-way ANOVA. Compares medians across three or more independent groups without assuming normality.
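A minimal scipy.stats sketch with hypothetical groups:

    from scipy import stats

    g1 = [2.9, 3.0, 2.5, 2.6, 3.2]
    g2 = [3.8, 2.7, 4.0, 2.4]
    g3 = [2.8, 3.4, 3.7, 2.2, 2.0]
    h_stat, p_value = stats.kruskal(g1, g2, g3)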
Friedman Test
Non-parametric alternative to repeated measures ANOVA. Compares three or more paired groups or repeated measurements on the same subjects.
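Illustrative sketch with scipy.stats.friedmanchisquare; each list holds the same subjects measured under one condition, and the values are invented:

    from scipy import stats

    condition_1 = [7.0, 9.9, 8.5, 5.1, 10.3]
    condition_2 = [5.3, 5.7, 4.7, 3.5, 7.7]
    condition_3 = [4.9, 7.6, 5.5, 2.8, 8.4]
    stat, p_value = stats.friedmanchisquare(condition_1, condition_2, condition_3)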
Cochran's Q Test
Tests for differences in proportions across three or more matched groups with binary outcomes. Extension of McNemar's test for multiple groups.
Jonckheere–Terpstra Test
Tests for ordered differences among groups. More powerful than Kruskal–Wallis when groups have a natural ordering (e.g., dose-response).
Quade Test
Alternative to Friedman test that gives more weight to blocks with larger ranges. Used for randomized block designs with ranked data.
Spearman's Rank Correlation
Measures monotonic relationship between two variables using ranks. Non-parametric alternative to Pearson correlation for ordinal data or non-linear relationships.
Kendall's Tau
Measures ordinal association between two variables based on concordant and discordant pairs. More robust than Spearman's for small samples with ties.
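Both rank-based correlations above are available in scipy.stats; a minimal sketch with invented ordinal scores:

    from scipy import stats

    x = [1, 2, 3, 4, 5, 6, 7, 8]
    y = [2, 1, 4, 3, 6, 5, 8, 7]

    rho, p_rho = stats.spearmanr(x, y)
    tau, p_tau = stats.kendalltau(x, y)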
Goodman–Kruskal Gamma
Measures association between two ordinal variables. Ranges from -1 to +1, based on concordant and discordant pairs while ignoring ties.
Somers' D
Asymmetric measure of ordinal association that accounts for which variable is dependent. Useful when directional relationship is important.
McNemar's Test
Tests for differences in paired proportions for binary outcomes. Used for before-after studies with dichotomous variables or matched case-control studies.
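One way to run this test in Python is statsmodels' mcnemar function; the 2x2 table of paired counts below is hypothetical:

    from statsmodels.stats.contingency_tables import mcnemar

    # rows = before (yes/no), columns = after (yes/no)
    table = [[45, 5],
             [15, 35]]
    result = mcnemar(table, exact=True)
    print(result.statistic, result.pvalue)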
Bowker's Test of Symmetry
Extension of McNemar's test for tables larger than 2x2. Tests if a square contingency table is symmetric about its main diagonal.
Mantel–Haenszel Test
Tests association between two binary variables while controlling for a confounding variable. Combines multiple 2x2 tables across strata of the confounder.
Log-Rank Test
Compares survival distributions between two or more groups. Non-parametric test used in survival analysis to test if survival curves are identical.
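As an illustration, the third-party lifelines package exposes a logrank_test function (an assumption about tooling, not part of this site); the durations and event flags below are invented:

    from lifelines.statistics import logrank_test

    durations_a = [5, 8, 12, 20, 25]
    events_a = [1, 1, 0, 1, 1]       # 1 = event observed, 0 = censored
    durations_b = [4, 6, 9, 11, 15]
    events_b = [1, 1, 1, 0, 1]

    result = logrank_test(durations_a, durations_b,
                          event_observed_A=events_a, event_observed_B=events_b)
    print(result.test_statistic, result.p_value)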
