---
title: "Test and effect size details"
output:
  rmarkdown::html_vignette:
    toc: true
    toc_depth: 4
    keep_md: true
vignette: >
  %\VignetteIndexEntry{Test and effect size details}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r}
#| label = "setup",
#| message = FALSE,
#| warning = FALSE,
#| include = FALSE,
#| echo = FALSE
source("setup.R")
```

This vignette can be cited as:

```{r citation, echo=FALSE, comment = ""}
citation("statsExpressions")
```

## Introduction

This vignette provides a go-to summary of the statistical test carried out and
the effect size returned by each function. This should be useful if you need to
find out more about how an argument is resolved in the underlying package, or
if you wish to browse the source code. For example, to learn more about how the
one-way (between-subjects) ANOVA is carried out, you can run
`?stats::oneway.test` in your R console.

Abbreviations used: CI = Confidence Interval

## Data requirements

All functions expect data in **long (tidy) format** — one row per observation.
A few additional requirements are worth noting:

- **Within-subjects (repeated measures) designs**: The data must contain exactly
  *one* observation per subject per condition (a complete, balanced block design).
  If you have multiple trials per subject-condition cell, aggregate them first
  (e.g., by taking the mean) before passing the data.
  You can verify this with `table(data$subject, data$condition)` — every cell
  should equal `1`.

- **`subject.id` argument**: For within-subjects designs, always specify
  `subject.id` explicitly. If omitted, the function pairs observations by
  row order within each condition, so any data that is not already sorted
  identically within every condition level can produce silently incorrect
  paired tests — even with exactly two conditions and no missing values.

- **Missing data**: Missing values are handled internally by removing any subject
  who has `NA` in *any* condition, ensuring a balanced design is maintained.
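
The balance check and aggregation steps above can be sketched in base R. This
is only an illustration: the `subject`, `condition`, and `score` column names
are hypothetical, and your own data will differ.

```{r}
# hypothetical long-format data: 3 subjects x 2 conditions, 2 trials per cell
data <- data.frame(
  subject   = rep(c("s1", "s2", "s3"), each = 4),
  condition = rep(c("a", "a", "b", "b"), times = 3),
  score     = rnorm(12)
)

# multiple trials per subject-condition cell: every count is 2, not 1
table(data$subject, data$condition)

# aggregate to one observation (here, the mean) per subject per condition
data_aggregated <- aggregate(score ~ subject + condition, data = data, FUN = mean)

# now every cell equals 1, as required for within-subjects designs
all(table(data_aggregated$subject, data_aggregated$condition) == 1)
```

After aggregating, pass `data_aggregated` (with `subject.id` set explicitly) to
the relevant function.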

## Summary of functionality

```{r child="../man/rmd-fragments/functionality.Rmd"}
```

## Summary of tests and effect sizes

### `centrality_description()`

```{r child="../man/rmd-fragments/centrality_description.Rmd"}
```

### `oneway_anova()`

```{r child="../man/rmd-fragments/oneway_anova.Rmd"}
```

### `two_sample_test()`

```{r child="../man/rmd-fragments/two_sample_test.Rmd"}
```

### `one_sample_test()`

```{r child="../man/rmd-fragments/one_sample_test.Rmd"}
```

### `corr_test()`

```{r child="../man/rmd-fragments/corr_test.Rmd"}
```

### `contingency_table()`

```{r child="../man/rmd-fragments/contingency_table.Rmd"}
```

### `meta_analysis()`

```{r child="../man/rmd-fragments/meta_analysis.Rmd"}
```

## Effect size interpretation

See the interpretation functions in `{effectsize}` for the different
rules/conventions used to interpret effect sizes:

<https://easystats.github.io/effectsize/reference/index.html#section-interpretation>
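
As a small illustration, Cohen's (1988) conventions for *d* (just one of the
many rules `{effectsize}` supports) can be encoded as cut points. The helper
below is a hypothetical sketch for exposition, not the package's implementation:

```{r}
# hypothetical helper: interpret |d| using Cohen's (1988) thresholds
# (< 0.2 very small, < 0.5 small, < 0.8 medium, otherwise large)
interpret_d_cohen1988 <- function(d) {
  cut(
    abs(d),
    breaks = c(0, 0.2, 0.5, 0.8, Inf),
    labels = c("very small", "small", "medium", "large"),
    include.lowest = TRUE,
    right = FALSE
  )
}

interpret_d_cohen1988(c(0.1, 0.35, 0.6, 1.2))
```

In practice, prefer the `{effectsize}` functions linked above, which also cover
other effect sizes and alternative conventions.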

## References

  - For parametric and non-parametric effect sizes:
    <https://easystats.github.io/effectsize/articles/>

  - For robust effect sizes:
    <https://CRAN.R-project.org/package=WRS2/vignettes/WRS2.pdf>

  - For Bayesian posterior estimates:
    <https://easystats.github.io/bayestestR/articles/bayes_factors.html>

## Suggestions

If you find any bugs or have any suggestions/remarks, please file an issue on GitHub:
<https://github.com/IndrajeetPatil/statsExpressions/issues>
