CRAN Package Check Results for Package effectsize

Last updated on 2026-03-10 22:50:20 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  1.0.1        8.33  321.50  329.83  ERROR
r-devel-linux-x86_64-debian-gcc    1.0.1        5.95  217.95  223.90  ERROR
r-devel-linux-x86_64-fedora-clang  1.0.1       16.00  543.71  559.71  ERROR
r-devel-linux-x86_64-fedora-gcc    1.0.1       17.00  546.85  563.85  ERROR
r-devel-macos-arm64                1.0.1        4.00   71.00   75.00  OK
r-devel-windows-x86_64             1.0.1       18.00  283.00  301.00  ERROR
r-patched-linux-x86_64             1.0.1       10.84  300.07  310.91  OK
r-release-linux-x86_64             1.0.1        8.67  300.39  309.06  OK
r-release-macos-arm64              1.0.1                              OK
r-release-macos-x86_64             1.0.1        9.00  250.00  259.00  OK
r-release-windows-x86_64           1.0.1       14.00  285.00  299.00  OK
r-oldrel-macos-arm64               1.0.1                              OK
r-oldrel-macos-x86_64              1.0.1        9.00  240.00  249.00  OK
r-oldrel-windows-x86_64            1.0.1       19.00  375.00  394.00  OK

Check Details

Version: 1.0.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [81s/44s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(effectsize)
    >
    > test_check("effectsize")
    Starting 2 test processes.
    Saving _problems/test-effectsize-317.R
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-utils_validate_input_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    ══ Skipped tests (12) ══════════════════════════════════════════════════════════
    • On CRAN (12): 'test-convert_between.R:68:3', 'test-common_language.R:81:3',
      'test-effectsize.R:334:3', 'test-eta_squared_posterior.R:2:3',
      'test-interpret.R:139:3', 'test-interpret.R:233:3', 'test-print.R:110:3',
      'test-eta_squared.R:111:3', 'test-eta_squared.R:373:3',
      'test-eta_squared.R:668:3', 'test-eta_squared.R:686:3', 'test-eta_squared.R:700:3'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-effectsize.R:307:3'): htest | Get args from htest ────────────
    Expected `rank_biserial(ww2)` to equal `rank_biserial(...)`.
    Differences: actual vs expected
                         CI
    - actual[1, ]   0.8101379
    + expected[1, ] 0.8000000
    `actual[[2]]`: 0.810
    `expected[[2]]`: 0.800
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang

Version: 1.0.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [55s/30s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(effectsize)
    >
    > test_check("effectsize")
    Starting 2 test processes.
    Saving _problems/test-effectsize-317.R
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-utils_validate_input_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    ══ Skipped tests (12) ══════════════════════════════════════════════════════════
    • On CRAN (12): 'test-convert_between.R:68:3', 'test-common_language.R:81:3',
      'test-effectsize.R:334:3', 'test-eta_squared_posterior.R:2:3',
      'test-interpret.R:139:3', 'test-interpret.R:233:3', 'test-print.R:110:3',
      'test-eta_squared.R:111:3', 'test-eta_squared.R:373:3',
      'test-eta_squared.R:668:3', 'test-eta_squared.R:686:3', 'test-eta_squared.R:700:3'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-effectsize.R:307:3'): htest | Get args from htest ────────────
    Expected `rank_biserial(ww2)` to equal `rank_biserial(...)`.
    Differences: actual vs expected
                         CI
    - actual[1, ]   0.8101379
    + expected[1, ] 0.8000000
    `actual[[2]]`: 0.810
    `expected[[2]]`: 0.800
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 1.0.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [142s/111s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(effectsize)
    >
    > test_check("effectsize")
    Starting 2 test processes.
    Saving _problems/test-effectsize-317.R
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-utils_validate_input_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    ══ Skipped tests (12) ══════════════════════════════════════════════════════════
    • On CRAN (12): 'test-convert_between.R:68:3', 'test-common_language.R:81:3',
      'test-effectsize.R:334:3', 'test-eta_squared_posterior.R:2:3',
      'test-interpret.R:139:3', 'test-interpret.R:233:3', 'test-print.R:110:3',
      'test-eta_squared.R:111:3', 'test-eta_squared.R:373:3',
      'test-eta_squared.R:668:3', 'test-eta_squared.R:686:3', 'test-eta_squared.R:700:3'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-effectsize.R:307:3'): htest | Get args from htest ────────────
    Expected `rank_biserial(ww2)` to equal `rank_biserial(...)`.
    Differences: actual vs expected
                         CI
    - actual[1, ]   0.8101379
    + expected[1, ] 0.8000000
    `actual[[2]]`: 0.810
    `expected[[2]]`: 0.800
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 1.0.1
Check: tests
Result: ERROR
    Running ‘testthat.R’ [143s/184s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > library(testthat)
    > library(effectsize)
    >
    > test_check("effectsize")
    Starting 2 test processes.
    Saving _problems/test-effectsize-317.R
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-print.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-utils_validate_input_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    ══ Skipped tests (12) ══════════════════════════════════════════════════════════
    • On CRAN (12): 'test-convert_between.R:68:3', 'test-common_language.R:81:3',
      'test-effectsize.R:334:3', 'test-eta_squared_posterior.R:2:3',
      'test-interpret.R:139:3', 'test-interpret.R:233:3', 'test-print.R:110:3',
      'test-eta_squared.R:111:3', 'test-eta_squared.R:373:3',
      'test-eta_squared.R:668:3', 'test-eta_squared.R:686:3', 'test-eta_squared.R:700:3'
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure ('test-effectsize.R:307:3'): htest | Get args from htest ────────────
    Expected `rank_biserial(ww2)` to equal `rank_biserial(...)`.
    Differences: actual vs expected
                         CI
    - actual[1, ]   0.8101379
    + expected[1, ] 0.8000000
    `actual[[2]]`: 0.810
    `expected[[2]]`: 0.800
    [ FAIL 1 | WARN 0 | SKIP 12 | PASS 841 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 1.0.1
Check: tests
Result: ERROR
    Running 'testthat.R' [35s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > library(testthat)
    > library(effectsize)
    >
    > test_check("effectsize")
    Starting 2 test processes.
    Saving _problems/test-effectsize-317.R
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-htest_data.R: For paired samples, 'repeated_measures_d()' provides more options.
    > test-rankES.R: Error: ! testthat subprocess exited in file 'test-rankES.R'.
    Caused by error:
    ! R session crashed with exit code -1073741819
    Backtrace:
         ▆
      1. └─testthat::test_check("effectsize")
      2.   └─testthat::test_dir(...)
      3.     └─testthat:::test_files(...)
      4.       └─testthat:::test_files_parallel(...)
      5.         ├─withr::with_dir(...)
      6.         │ └─base::force(code)
      7.         ├─testthat::with_reporter(...)
      8.         │ └─base::tryCatch(...)
      9.         │   └─base (local) tryCatchList(expr, classes, parentenv, handlers)
     10.         │     └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
     11.         │       └─base (local) doTryCatch(return(expr), name, parentenv, handler)
     12.         └─testthat:::parallel_event_loop_chunky(queue, reporters, ".")
     13.           └─queue$poll(Inf)
     14.             └─base::lapply(...)
     15.               └─testthat (local) FUN(X[[i]], ...)
     16.                 └─private$handle_error(msg, i)
     17.                   └─cli::cli_abort(...)
     18.                     └─rlang::abort(...)
    Execution halted
Flavor: r-devel-windows-x86_64
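The recurring failure on the Linux flavors contrasts two call paths of effectsize's rank_biserial(): one that recovers the data from a stored htest object (as the failing expectation `rank_biserial(ww2)` does), and one that receives the data directly. The sketch below illustrates that pattern only; the vectors are made up for illustration, and the actual `ww2` object and expected values live in test-effectsize.R, which this log does not show.

```r
library(effectsize)

# Made-up paired data (the real test data are defined in test-effectsize.R).
x <- c(1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30)
y <- c(0.88, 0.65, 0.60, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29)

# Path 1: rank_biserial() extracts the arguments from the wilcox.test
# object's stored call ("Get args from htest" in the test name).
ww2 <- wilcox.test(x, y, paired = TRUE)
from_htest <- rank_biserial(ww2)

# Path 2: the same data passed explicitly.
direct <- rank_biserial(x, y, paired = TRUE)

# The failing test expects these two results (point estimate and CI) to
# be equal; on r-devel one CI bound differs (0.810 vs 0.800 in the log).
print(from_htest)
print(direct)
```

Note that this is only a reproduction sketch under the stated assumptions, not the package's actual test code.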