Quick Start with Random Data

library(multimedia)
library(ggplot2)
library(ggraph)

This vignette gives a brief introduction using simulated data that resemble a mediation analysis of the gut-brain axis. The basic question is this: we know that meditation can reduce depression and anxiety symptoms, so is it possible that shifts in the microbiome play a role? In the language of mediation analysis, does the microbiome mediate the effect of meditation on the Patient Health Questionnaire-9 (PHQ) score?

demo_joy()
#> class: SummarizedExperiment 
#> dim: 5 100 
#> metadata(0):
#> assays(1): counts
#> rownames(5): ASV1 ASV2 ASV3 ASV4 ASV5
#> rowData names(0):
#> colnames: NULL
#> colData names(2): treatment PHQ

For mediation analysis, we distinguish between the different variable types. This data structure defines the treatment, mediator, and outcome groups using tidyselect-style notation.

exper <- mediation_data(demo_joy(), "PHQ", "treatment", starts_with("ASV"))
exper
#> [Mediation Data] 
#> 100 samples with measurements for, 
#> 1 treatment: treatment 
#> 5 mediators: ASV1, ASV2, ... 
#> 1 outcome: PHQ

This is the main estimation step. By default, we fit a separate linear regression model for each mediator and each outcome variable.

model <- multimedia(exper) |>
    estimate(exper)

model
#> [Multimedia Analysis] 
#> Treatments: treatment 
#> Outcomes: PHQ 
#> Mediators: ASV1, ASV2, ... 
#>   
#> [Models] 
#> mediation: A fitted lm_model(). 
#> outcome: A fitted lm_model().

The edges slot tracks all the variable relationships, and it can be accessed using the edges method. For example, we can visualize the causal graph using the ggraph code below.

ggraph(edges(model)) +
    geom_edge_link(arrow = arrow()) +
    geom_node_label(aes(label = name, fill = node_type))

Now that we’ve coupled the mediation and outcome models, we can propagate predictions and samples through them. That is, we can define certain configurations of the treatment (and pretreatments, if we have them) and then use the fitted models to simulate new mediator and outcome samples. By default, sampling uses the template data that was used to fit the model, just like the predict method for lm.

sample(model)
#> [Mediation Data] 
#> 2 samples with measurements for, 
#> 1 treatment: treatment 
#> 5 mediators: ASV1, ASV2, ... 
#> 1 outcome: PHQ
predict(model)
#> $mediators
#> # A tibble: 2 × 5
#>       ASV1    ASV2    ASV3    ASV4   ASV5
#>      <dbl>   <dbl>   <dbl>   <dbl>  <dbl>
#> 1 -0.143   -0.0721 -0.104  -0.0499  0.377
#> 2 -0.00991 -0.176   0.0668  0.0888 -0.261
#> 
#> $outcomes
#> # A tibble: 2 × 1
#>       PHQ
#>     <dbl>
#> 1 -0.0264
#> 2 -0.222

Things get more interesting when we sample at new treatment and pretreatment configurations. We need to be careful with our accounting, because we want the flexibility to provide different combinations of treatments to different sets of edges. For example, we may want to imagine that the edges going into the mediators were set to treatment while the edge going into the outcome was left at control. The example below has one sample with exactly this configuration and three others that keep all edges at control.

t_mediator <- factor(c("Treatment", rep("Control", 3)))
t_outcome <- factor(rep("Control", 4), levels = c("Treatment", "Control"))

profile <- setup_profile(model, t_mediator, t_outcome)
sample(model, profile = profile)
#> [Mediation Data] 
#> 4 samples with measurements for, 
#> 1 treatment: treatment 
#> 5 mediators: ASV1, ASV2, ... 
#> 1 outcome: PHQ
predict(model, profile = profile)
#> $mediators
#> # A tibble: 4 × 5
#>       ASV1    ASV2    ASV3    ASV4   ASV5
#>      <dbl>   <dbl>   <dbl>   <dbl>  <dbl>
#> 1 -0.00991 -0.176   0.0668  0.0888 -0.261
#> 2 -0.143   -0.0721 -0.104  -0.0499  0.377
#> 3 -0.143   -0.0721 -0.104  -0.0499  0.377
#> 4 -0.143   -0.0721 -0.104  -0.0499  0.377
#> 
#> $outcomes
#> # A tibble: 4 × 1
#>       PHQ
#>     <dbl>
#> 1  0.0294
#> 2 -0.0264
#> 3 -0.0264
#> 4 -0.0264

setup_profile(model, t_mediator, t_outcome)
#> An object of class "treatment_profile"
#> Slot "t_mediator":
#> $ASV1
#>   treatment
#> 1 Treatment
#> 2   Control
#> 3   Control
#> 4   Control
#> 
#> $ASV2
#>   treatment
#> 1 Treatment
#> 2   Control
#> 3   Control
#> 4   Control
#> 
#> $ASV3
#>   treatment
#> 1 Treatment
#> 2   Control
#> 3   Control
#> 4   Control
#> 
#> $ASV4
#>   treatment
#> 1 Treatment
#> 2   Control
#> 3   Control
#> 4   Control
#> 
#> $ASV5
#>   treatment
#> 1 Treatment
#> 2   Control
#> 3   Control
#> 4   Control
#> 
#> 
#> Slot "t_outcome":
#> $PHQ
#>   treatment
#> 1   Control
#> 2   Control
#> 3   Control
#> 4   Control

We can also contrast the predictions and samples under different profiles.

profile_control <- setup_profile(model, t_outcome, t_outcome)
contrast_predictions(model, profile, profile_control)
#> $mediators
#>        ASV1      ASV2      ASV3      ASV4       ASV5
#> 1 0.1326707 -0.104133 0.1704462 0.1386598 -0.6375517
#> 2 0.0000000  0.000000 0.0000000 0.0000000  0.0000000
#> 3 0.0000000  0.000000 0.0000000 0.0000000  0.0000000
#> 4 0.0000000  0.000000 0.0000000 0.0000000  0.0000000
#> 
#> $outcomes
#>          PHQ
#> 1 0.05583207
#> 2 0.00000000
#> 3 0.00000000
#> 4 0.00000000
contrast_samples(model, profile, profile_control)
#> $mediators
#>         ASV1       ASV2       ASV3       ASV4       ASV5
#> 1  1.2366139  3.9634563 -1.7420684 -3.6615879 -0.7752020
#> 2 -1.4849323 -1.8546055  2.4367964  0.3051953 -0.9029887
#> 3 -0.7695921  1.1156105  3.2685340 -2.1033051 -0.1665528
#> 4 -0.2403387 -0.1611509  0.7345826 -2.6691310  3.1923258
#> 
#> $outcomes
#>          PHQ
#> 1  3.0880590
#> 2 -0.9816616
#> 3  1.0361014
#> 4 -0.6991271

Effect Estimates

It’s a small step from contrasting different configurations to asking for the direct and indirect treatment effects. The direct effect is defined as the average of Ŷ(M̂(t′), 1) - Ŷ(M̂(t′), 0) across mediator treatment settings t′. The hats mean that we use predicted values from the fitted mediation and outcome models. I’ve distinguished between “overall” and “pathwise” indirect effects because we’re working with high-dimensional mediators. In the overall indirect effect, we toggle treatment/control status for the incoming edges to all mediators at once. In pathwise indirect effects, we toggle only the treatment going into a single mediator.
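
For reference (this restates the usual definitions in the same hat notation; it is not output from the package), the indirect effects contrast predictions under different mediator treatment settings while the outcome treatment is held fixed:

overall:   Ŷ(M̂(1), t) - Ŷ(M̂(0), t), toggling the treatment into all mediators at once
pathwise:  Ŷ(M̂_j(1), M̂_{-j}(0), t) - Ŷ(M̂(0), t), toggling only the treatment into mediator j

Here t is the (fixed) outcome treatment setting; the tables below report each effect at both of its settings and use the Control - Treatment contrast direction, so signs may flip relative to this sketch.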

direct_effect(model, exper)
#>   outcome indirect_setting            contrast direct_effect
#> 1     PHQ          Control Control - Treatment     0.2511178
#> 2     PHQ        Treatment Control - Treatment     0.2511178
indirect_overall(model, exper)
#>   outcome direct_setting            contrast indirect_effect
#> 1     PHQ        Control Control - Treatment     -0.05583207
#> 2     PHQ      Treatment Control - Treatment     -0.05583207
indirect_pathwise(model, exper)
#>    outcome mediator direct_setting            contrast indirect_effect
#> 1      PHQ     ASV1        Control Control - Treatment    7.335676e-04
#> 2      PHQ     ASV2        Control Control - Treatment    3.315854e-03
#> 3      PHQ     ASV3        Control Control - Treatment   -1.269668e-02
#> 4      PHQ     ASV4        Control Control - Treatment   -1.456854e-05
#> 5      PHQ     ASV5        Control Control - Treatment   -4.717025e-02
#> 6      PHQ     ASV1      Treatment Control - Treatment    7.335676e-04
#> 7      PHQ     ASV2      Treatment Control - Treatment    3.315854e-03
#> 8      PHQ     ASV3      Treatment Control - Treatment   -1.269668e-02
#> 9      PHQ     ASV4      Treatment Control - Treatment   -1.456854e-05
#> 10     PHQ     ASV5      Treatment Control - Treatment   -4.717025e-02
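
Because both the mediation and outcome models here are plain additive linear regressions, the pathwise indirect effects should add up to the overall indirect effect reported above (-0.0558). A quick check, written as a sketch that assumes dplyr is available:

indirect_pathwise(model, exper) |>
    dplyr::filter(direct_setting == "Control") |>
    dplyr::summarise(total_indirect = sum(indirect_effect))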

So far, we’ve done everything using linear models, so we could actually have computed all of these effects just by reading off parameter estimates. What’s nice is that we can plug in many different kinds of mediation or outcome models. The package already includes interfaces to the logistic-normal multinomial, sparse regression with glmnet, random forests with ranger, and Bayesian models with brms. It’s also not too difficult to extend to new model types (we should add a vignette). Here’s an example of everything we did above, but for glmnet. The fact that the indirect effect estimates are all exactly 0 is a good thing: there are no real effects in these simulated data, and the lasso penalty shrinks the corresponding coefficients to zero.

model <- multimedia(exper, glmnet_model(lambda = .1)) |>
    estimate(exper)

direct_effect(model, exper)
#>   outcome indirect_setting            contrast direct_effect
#> 1     PHQ          Control Control - Treatment     0.1196578
#> 2     PHQ        Treatment Control - Treatment     0.1196578
indirect_overall(model, exper)
#>   outcome direct_setting            contrast indirect_effect
#> 1     PHQ        Control Control - Treatment               0
#> 2     PHQ      Treatment Control - Treatment               0
indirect_pathwise(model, exper)
#>    outcome mediator direct_setting            contrast indirect_effect
#> 1      PHQ     ASV1        Control Control - Treatment               0
#> 2      PHQ     ASV2        Control Control - Treatment               0
#> 3      PHQ     ASV3        Control Control - Treatment               0
#> 4      PHQ     ASV4        Control Control - Treatment               0
#> 5      PHQ     ASV5        Control Control - Treatment               0
#> 6      PHQ     ASV1      Treatment Control - Treatment               0
#> 7      PHQ     ASV2      Treatment Control - Treatment               0
#> 8      PHQ     ASV3      Treatment Control - Treatment               0
#> 9      PHQ     ASV4      Treatment Control - Treatment               0
#> 10     PHQ     ASV5      Treatment Control - Treatment               0
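
The other interfaces can be swapped in the same way. As a sketch (not run in this vignette), here is the analogous call using the package’s ranger-based wrapper; we assume it is exposed as rf_model(), mirroring the glmnet_model() call above:

# sketch: random forest mediation and outcome models via the ranger interface
model_rf <- multimedia(exper, rf_model()) |>
    estimate(exper)

direct_effect(model_rf, exper)
indirect_overall(model_rf, exper)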

Inference

Effect estimates are rarely enough on their own. We need some uncertainty assessments to set appropriate expectations. The most straightforward approach is to use the bootstrap. Each function in the third argument, fs, will get its own data.frame with the bootstrap distribution for that estimator.

bootstrap(model, exper, c(direct_effect = direct_effect))$direct_effect |>
    head(10)
#>    bootstrap outcome indirect_setting            contrast direct_effect
#> 1          1     PHQ          Control Control - Treatment   0.101538729
#> 2          1     PHQ        Treatment Control - Treatment   0.101538729
#> 3          2     PHQ          Control Control - Treatment   0.395802681
#> 4          2     PHQ        Treatment Control - Treatment   0.395802681
#> 5          3     PHQ          Control Control - Treatment   0.000000000
#> 6          3     PHQ        Treatment Control - Treatment   0.000000000
#> 7          4     PHQ          Control Control - Treatment   0.102523495
#> 8          4     PHQ        Treatment Control - Treatment   0.102523495
#> 9          5     PHQ          Control Control - Treatment   0.003701135
#> 10         5     PHQ        Treatment Control - Treatment   0.003701135
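
From here, standard bootstrap summaries apply. For example, a rough 95 percent interval for the direct effect can be computed from that data.frame; this is a sketch that assumes dplyr is available and reuses the columns printed above:

boots <- bootstrap(model, exper, c(direct_effect = direct_effect))
boots$direct_effect |>
    dplyr::group_by(outcome, indirect_setting) |>
    dplyr::summarise(
        lower = quantile(direct_effect, 0.025),
        upper = quantile(direct_effect, 0.975)
    )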

We can also generate synthetic nulls to calibrate selection sets. The third argument says which set of edges we want to remove under the null. In this case we will generate synthetic null data where there is known to be no relationship between the mediators and outcome. The fourth argument says which effect estimates we should evaluate. We then fit the full model on both the original and the synthetic null data. We can define false discovery rate thresholds by ranking estimates across the two data sets. If we see many null effects mixed in among the strong effects in real data, we know to trust only the very strongest real effects (if any).

contrast <- null_contrast(model, exper, "M->Y", indirect_pathwise)
fdr <- fdr_summary(contrast, "indirect_pathwise", 0.05)
fdr
#> # A tibble: 10 × 7
#>    source    outcome mediator indirect_effect  rank fdr_hat keep 
#>    <chr>     <chr>   <chr>              <dbl> <int>   <dbl> <lgl>
#>  1 synthetic PHQ     ASV3             0.0458      1   1     FALSE
#>  2 synthetic PHQ     ASV2            -0.0174      2   1     FALSE
#>  3 synthetic PHQ     ASV1            -0.00571     3   1     FALSE
#>  4 real      PHQ     ASV1             0           4   0.75  FALSE
#>  5 real      PHQ     ASV2             0           5   0.6   FALSE
#>  6 real      PHQ     ASV3             0           6   0.5   FALSE
#>  7 real      PHQ     ASV4             0           7   0.429 FALSE
#>  8 real      PHQ     ASV5             0           8   0.375 FALSE
#>  9 synthetic PHQ     ASV4             0           9   0.444 FALSE
#> 10 synthetic PHQ     ASV5             0          10   0.5   FALSE
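
To pull out only the discoveries made on the real data (none here, as expected), we can filter on the columns shown above; a sketch assuming dplyr:

fdr |>
    dplyr::filter(source == "real", keep)
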
sessionInfo()
#> R version 4.4.1 (2024-06-14)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#> 
#> Matrix products: default
#> BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0
#> 
#> locale:
#>  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
#>  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
#>  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
#>  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
#>  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
#> 
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#>  [1] ggraph_2.2.1       phyloseq_1.49.0    ggrepel_0.9.6      multimedia_0.2.0  
#>  [5] tidyselect_1.2.1   ranger_0.16.0      glmnetUtils_1.1.9  vroom_1.6.5       
#>  [9] lubridate_1.9.3    forcats_1.0.0      stringr_1.5.1      dplyr_1.1.4       
#> [13] purrr_1.0.2        readr_2.1.5        tidyr_1.3.1        tibble_3.2.1      
#> [17] tidyverse_2.0.0    patchwork_1.3.0    glue_1.7.0         ggdist_3.3.2      
#> [21] compositions_2.0-8 brms_2.21.0        Rcpp_1.0.13        ggplot2_3.5.1     
#> [25] rmarkdown_2.28    
#> 
#> loaded via a namespace (and not attached):
#>   [1] sys_3.4.2                   tensorA_0.36.2.1           
#>   [3] jsonlite_1.8.8              shape_1.4.6.1              
#>   [5] magrittr_2.0.3              farver_2.1.2               
#>   [7] zlibbioc_1.51.1             vctrs_0.6.5                
#>   [9] multtest_2.61.0             memoise_2.0.1              
#>  [11] htmltools_0.5.8.1           S4Arrays_1.5.7             
#>  [13] progress_1.2.3              distributional_0.5.0       
#>  [15] curl_5.2.2                  Rhdf5lib_1.27.0            
#>  [17] SparseArray_1.5.36          rhdf5_2.49.0               
#>  [19] sass_0.4.9                  StanHeaders_2.32.10        
#>  [21] bslib_0.8.0                 plyr_1.8.9                 
#>  [23] cachem_1.1.0                buildtools_1.0.0           
#>  [25] igraph_2.0.3                lifecycle_1.0.4            
#>  [27] iterators_1.0.14            pkgconfig_2.0.3            
#>  [29] Matrix_1.7-0                R6_2.5.1                   
#>  [31] fastmap_1.2.0               GenomeInfoDbData_1.2.12    
#>  [33] MatrixGenerics_1.17.0       digest_0.6.37              
#>  [35] colorspace_2.1-1            S4Vectors_0.43.2           
#>  [37] miniLNM_0.1.0               GenomicRanges_1.57.1       
#>  [39] vegan_2.6-8                 labeling_0.4.3             
#>  [41] timechange_0.3.0            fansi_1.0.6                
#>  [43] polyclip_1.10-7             httr_1.4.7                 
#>  [45] abind_1.4-8                 mgcv_1.9-1                 
#>  [47] compiler_4.4.1              bit64_4.0.5                
#>  [49] withr_3.0.1                 backports_1.5.0            
#>  [51] inline_0.3.19               viridis_0.6.5              
#>  [53] highr_0.11                  QuickJSR_1.3.1             
#>  [55] pkgbuild_1.4.4              ggforce_0.4.2              
#>  [57] MASS_7.3-61                 bayesm_3.1-6               
#>  [59] DelayedArray_0.31.11        biomformat_1.33.0          
#>  [61] loo_2.8.0                   permute_0.9-7              
#>  [63] tools_4.4.1                 ape_5.8                    
#>  [65] nlme_3.1-166                rhdf5filters_1.17.0        
#>  [67] grid_4.4.1                  checkmate_2.3.2            
#>  [69] cluster_2.1.6               reshape2_1.4.4             
#>  [71] ade4_1.7-22                 generics_0.1.3             
#>  [73] operator.tools_1.6.3        gtable_0.3.5               
#>  [75] tzdb_0.4.0                  formula.tools_1.7.1        
#>  [77] data.table_1.16.0           hms_1.1.3                  
#>  [79] tidygraph_1.3.1             utf8_1.2.4                 
#>  [81] XVector_0.45.0              BiocGenerics_0.51.1        
#>  [83] foreach_1.5.2               pillar_1.9.0               
#>  [85] posterior_1.6.0             robustbase_0.99-4          
#>  [87] splines_4.4.1               tweenr_2.0.3               
#>  [89] lattice_0.22-6              bit_4.0.5                  
#>  [91] survival_3.7-0              maketools_1.3.0            
#>  [93] Biostrings_2.73.1           knitr_1.48                 
#>  [95] gridExtra_2.3               V8_5.0.0                   
#>  [97] IRanges_2.39.2              SummarizedExperiment_1.35.1
#>  [99] stats4_4.4.1                xfun_0.47                  
#> [101] graphlayouts_1.1.1          bridgesampling_1.1-2       
#> [103] Biobase_2.65.1              matrixStats_1.4.1          
#> [105] DEoptimR_1.1-3              rstan_2.32.6               
#> [107] stringi_1.8.4               UCSC.utils_1.1.0           
#> [109] yaml_2.3.10                 evaluate_1.0.0             
#> [111] codetools_0.2-20            cli_3.6.3                  
#> [113] RcppParallel_5.1.9          munsell_0.5.1              
#> [115] jquerylib_0.1.4             GenomeInfoDb_1.41.1        
#> [117] coda_0.19-4.1               parallel_4.4.1             
#> [119] rstantools_2.4.0            prettyunits_1.2.0          
#> [121] bayesplot_1.11.1            Brobdingnag_1.2-9          
#> [123] glmnet_4.1-8                viridisLite_0.4.2          
#> [125] mvtnorm_1.3-1               scales_1.3.0               
#> [127] crayon_1.5.3                rlang_1.1.4