Complexities and considerations in conducting animal-assisted intervention research: A discussion of randomized controlled trials
Abstract
The field of human-animal interaction (HAI) has experienced prolific growth in the scope, breadth, and rigor of research conducted on animal-assisted interventions (AAIs). As knowledge regarding the preliminary efficacy of AAIs on outcomes of human health and wellbeing continues to accumulate, so has information regarding the feasibility, safety, and acceptability of AAIs. This progression, combined with an increase in funding opportunities, institutional resources, and growing recognition of the field from mental and medical health professionals, has led to more widespread implementation of randomized controlled trials (RCTs) in the field. While conducting RCTs in any field of study is an intensive and complex undertaking, researchers conducting RCTs to evaluate the efficacy of AAIs are faced with unique considerations. The goal of this manuscript is to discuss the complexities and considerations involved in conducting an RCT of an AAI program with regard to study planning, conceptualization, design, implementation, and dissemination. We highlight common confounders in HAI research and provide strategies for minimizing or ameliorating them. Recommendations pertain to such unique issues as ethical considerations, theory, control and comparison groups, sampling, implementation fidelity, and transparent reporting of findings. These considerations and recommendations seek to aid HAI researchers in the design, implementation, and dissemination of future RCTs to continue to advance the rigor of the field.
Introduction
Research quantifying outcomes from Animal-Assisted Interventions (AAIs) has increased exponentially in recent decades. This interdisciplinary field features researchers from a multitude of backgrounds, working with a variety of populations, examining the effects of human-animal interactions (HAI). A Web of Science citation report for the terms “animal-assisted intervention” or “animal-assisted therapy” yielded almost 1500 publications from 1990 to 2022, of which 50% have been published in the past five years (Figure 1). In line with this growth, an increasing number of systematic reviews and meta-analyses have emerged to summarize knowledge on the effects of AAIs for conditions ranging from dementia (Batubara et al., 2022), to post-traumatic stress disorder (Hediger et al., 2021), to depression (McFalls-Steger et al., 2021). As knowledge accumulates, so has information regarding the feasibility, safety, and patient acceptability of AAIs (Allen et al., 2022; Künzi et al., 2022). This progression, combined with an increase in funding (McCune et al., 2020), institutional resources (O’Haire et al., 2018), and an increased interest in AAIs from practitioners worldwide, has led to the growth of randomized controlled trials (RCTs) conducted in the field. Experts in the field have touted this growth as the beginning of a paradigm shift in which AAIs will incorporate more evidence-based research to promote advances in research, practice, and public perception (Fine et al., 2019).

Considered the “gold standard” research design for determining the effectiveness of interventions, RCTs permit researchers to establish a causal link between an intervention and its outcomes (Sibbald and Roland, 1998; Hariton and Locascio, 2018). By randomly assigning participants to treatment and control groups, the RCT design allows researchers to isolate a causal effect of the AAI on outcome variables. This step is an important distinction from the largely anecdotal and correlational findings that characterized the field’s early beginnings (Fine et al., 2019). Further, results from RCTs can be synthesized with meta-analytic methods leading to the development of evidence-based practice (Bhide et al., 2018). However, conducting RCTs in any field, let alone in the field of HAI, is not without challenges. In fact, current reviews of RCTs in the field indicate that the RCTs conducted have been of relatively low quality (Kamioka et al., 2014; Maujean et al., 2015).
In this commentary, we discuss both considerations and challenges for conducting RCTs on the efficacy of AAIs, including considerations during the planning and design stage, the intervention implementation and data collection, and the final stage of data analysis and dissemination. Although a host of best practice guidelines exist for conducting RCTs (Kendall, 2003; Bhide et al., 2018), our aim is to provide a considered discussion of conducting RCTs in light of the complexities involved in planning, implementing, and evaluating AAIs. Below, we have organized this discussion into five parts: Part I, Considerations During the Planning Stage; Part II, Research Questions, Theory, and Outcomes; Part III, Study Design; Part IV, Intervention Implementation and Data Collection; and Part V, Data Analysis and Dissemination.
Part I: Considerations during the planning stage
DEFINITIONS
AAIs are defined as goal-oriented and structured interventions that intentionally include or incorporate animals in health, education, and human services for the purpose of therapeutic gains in humans (Jegatheesan et al., 2014). Animals that may be integrated into AAIs include (but are not limited to) companion animals and farm animals (Ng et al., 2019). AAIs are often described as adjunct or complementary interventions to supplement evidence-based care (Nimer and Lundahl, 2007; Rossetti and King, 2010; Nepps et al., 2014). AAIs are often not meant to stand on their own, but instead might be considered a complementary intervention that renders individuals more open to seeking additional, formal support. Nevertheless, assessing AAI outcomes requires the same rigorous methodological standards as other psychological interventions.
ETHICAL CONSIDERATIONS
Successful AAI research requires careful ethical consideration for human and animal participants. Therefore, AAI RCTs must have ethical protocols reviewed and approved for both humans involved in the study (via an Institutional Review Board, or IRB) as well as participating therapy animals involved in the study (via an Institutional Animal Care and Use Committee, or IACUC). However, ensuring the ethical and humane treatment of therapy animals involved in AAI research (before, during, and after therapy work) has only recently received attention (Fine et al., 2019). In fact, a recent review of 139 AAI research studies found that only 14 (10%) reported attaining IACUC approval or exemption in the manuscript (Ng et al., 2019). Ng et al. (2019) stress that it is critical to clearly describe how animal welfare is protected in AAI studies for transparency and accountability. It is also highly encouraged that researchers document considerations of animal welfare to ensure that the animals participating in AAIs have provided consent and are given the autonomy to discontinue participation (VanFleet and Faa-Thompson, 2017). Lastly, researchers should consider referring to previously published best practices in relation to risk assessment, participant safeguarding, and animal welfare for large-scale RCTs (Lincoln Education Assistance with Dogs (LEAD) Risk Assessment Toolkit; Brelsford et al., 2020).
PERSONNEL AND MULTI-SITE COLLABORATIONS
A particular consideration for AAI RCTs is that they may feature many sites, collaborators, and partnering agencies. Researching AAIs is a complicated undertaking. Rodriguez and colleagues (Rodriguez et al., 2021, p. 1) describe human-animal interaction research: "… in its simplest form, it involves two complex organisms, a human and an animal, interacting in dynamic ways." Relatedly, delivering AAI research requires that the research team possess both human- and animal-centric expertise. That is, researchers must have in-depth knowledge of human development (or cognition, physiology, etc.) while concomitantly having an advanced understanding of the animals participating in their studies (e.g., disposition, behavior, signs of distress, etc.). Therefore, conducting RCTs in this field is often a complicated affair that challenges researchers to master human and animal fields of study within a context characterized by several moving parts.
Due to this complexity, RCTs in the field benefit from close collaborations between HAI organizations and academic and research institutions. When researchers from academic institutions partner with an agency or organization to implement AAIs, careful consideration is required in forming and maintaining this relationship. Leighton et al. (2022) describe these considerations thoroughly, providing a six-step model for developing a collaborative research-practitioner partnership. It is suggested that, when possible, researchers should consider a community participatory action model to allow stakeholders, including therapists, practitioners, managers, and directors, to contribute to the development of the study’s program design, intervention manual, and assessment tools (for examples of this in practice, see: Fields et al., 2021; Linder et al., 2021). This input from community partners can significantly improve a study’s effectiveness, validity, and quality (Leighton et al., 2022).
Most AAI RCTs, especially in the early stages, are done on a small scale and at a single site. However, as more resources and funding become available, there has been an increase in multi-site RCTs (e.g., Hauge et al., 2014; Saunders et al., 2017; McCullough et al., 2018). Multi-site studies are beneficial for the progression of the field as they not only increase and diversify the sample size but also improve the generalizability of findings. However, they also present a number of challenges (Flynn, 2009). First, investigators planning a multi-site study should allow ample time for ethical protocols to be reviewed and processed due to the complexity of assessing human subjects' protection across many settings (Diamond et al., 2019). Second, multi-site trials benefit from designating a coordinating center and maintaining consistent communication (Goodlett et al., 2020). A coordinating center oversees study document development, IRB approvals, communication between sites, and data management, while local sites focus on site-specific research activities such as participant recruitment. Third, multi-site trials may suffer from data integrity issues, such as inconsistencies in data entry or collection across sites. Strategies to maximize data integrity include study procedure manualization, site start-up visits, weekly meetings to resolve site-specific issues, and periodic inter-rater reliability checks.
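The periodic inter-rater reliability checks mentioned above can be quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a minimal, stdlib-only illustration; the coding categories and the two sites' ratings are invented for the example, and dedicated statistical software would typically be used in practice:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same set of sessions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical engagement codes from two sites' assessors rating
# the same ten videotaped AAI sessions
site_1 = ["engaged", "engaged", "neutral", "engaged", "withdrawn",
          "neutral", "engaged", "neutral", "engaged", "withdrawn"]
site_2 = ["engaged", "neutral", "neutral", "engaged", "withdrawn",
          "neutral", "engaged", "engaged", "engaged", "withdrawn"]

print(round(cohens_kappa(site_1, site_2), 2))  # → 0.68
```

Tracking kappa across data-collection waves lets the coordinating center flag sites whose coders have drifted and schedule retraining before inconsistencies accumulate.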
DEVELOPING A RESEARCH STRATEGY
Riva et al. (2012) describe how an RCT may be developed by using the PICOT format to map a research strategy. “P” refers to the population from which the study participants will be drawn. Here, the focus is on selecting a sample that is most likely to benefit while also allowing for the generalization of results. “I” refers to the intervention or treatment under investigation. “C” refers to the comparison or control group(s). “O” refers to outcomes to be measured to examine the efficacy of the intervention. “T” refers to the time or duration of data collection. While the PICOT format described by Riva et al. (2012) was described for clinicians, it can also be useful as a guide for HAI researchers.
Part II: Research questions, theory, and outcomes
DEVELOPING HYPOTHESES
Perhaps the most important aspect of scientific exploration is developing well-thought-out questions that guide research investigations. Reliance on theory for hypothesis development is a fundamental aspect of the scientific method as it provides an expectation of the mechanisms of change underpinning the success of the proposed intervention. Theoretical models to guide hypotheses and outcomes in AAI research will largely differ by species, population, and activity. For example, an RCT evaluating equine-assisted therapy for children with autism spectrum disorder will differ significantly from an RCT evaluating therapy dog interaction for hospitalized children. However, some unifying theories such as the biopsychosocial model, attachment theory, and social support theory may be applicable to many AAI studies (Beetz, 2017; Gee et al., 2021).
While it is best practice to form theory-driven hypotheses to drive the research design and outcomes, most published, peer-reviewed HAI research does not rely on theory to guide research questions (Gee and Mueller, 2019). A 2022 analysis of 174 AAI research studies found that 28 (16%) did not specify a hypothesis regarding the proposed mechanisms of AAIs in their introductions (Wagner et al., 2022). This may be largely due to the fact that the field is in continued need of improved theory development and application to guide hypotheses. Existing theories such as Cobb’s social support theory (Cobb, 1976) and Wilson’s biophilia hypothesis (Wilson 1984) encompass a wide range of human and animal behavior and inform current AAI practices, yet do not necessarily translate into testable hypotheses that drive change, clarify mechanisms underlying the human-animal bond, or show us when, for whom, and how people and animals can mutually benefit from one another. While the field has seen an increase in studies using theory application (e.g., Gee et al., 2019) and theory-driven research (e.g., Green et al., 2018), further legitimacy of the field requires expansion of these efforts and acceptance of the importance of the inclusion of theory in the research process, especially RCTs. As described by Holder et al. (2020), “overlooking theoretical considerations in the field of AAI perpetuates a lack of empirical evidence based on clear hypotheses and slows progress toward a general understanding of how best to employ this alternative treatment.”
OUTCOMES
When designing an RCT, a crucial step is defining the primary and secondary outcomes of interest that the researchers hypothesize the intervention to impact. As described by best practice guidelines, “the primary outcome measure represents how the researcher has operationalized a ‘successful outcome’; thus, the usefulness of the study as a contribution to clinical knowledge hinges on the adequacy of the measure for this purpose” (Coster, 2013). Secondary outcomes are then chosen to measure additional hypothesized effects of the intervention, which are useful to inform future research. Both primary and secondary outcomes inform many aspects of the study design, including the assessment methods, follow-up time points, and data analyses (Macefield et al., 2014).
PRE-REGISTRATION
To improve the quality and transparency of AAI RCTs, researchers should pre-register all hypotheses, outcomes, and planned analyses before data collection begins (Nosek et al., 2018). Not only does this clearly define the research questions at the outset of the study, but it also helps to distinguish between prediction and postdiction in research. While both approaches are valid, postdiction is more at risk of being influenced by ordinary biases in human reasoning, such as hindsight bias (Nosek et al., 2018). Pre-registration of hypotheses and analyses prior to the start of data collection acts as an effective solution for avoiding such biases. In addition, pre-registration of planned analyses is an essential step toward reducing publication and outcome reporting biases and improving transparency of AAI findings. Although many clinical trials are required to be pre-registered due to funding and regulatory reporting processes (e.g., RCTs funded by the NIH at clinicaltrials.gov), RCTs funded by other sources, or even non-funded, are also encouraged to engage in pre-registration.
Part III: Study design
RANDOMIZATION PROCEDURES
A main feature of an RCT design is the random assignment of participants to treatment or control conditions. In other fields of study (e.g., biomedicine or pharmacology), the use of placebos allows blinding, or masking, of assignment such that participants and/or researchers are unaware of the group to which the participant has been assigned. However, as AAIs involve live interaction between human and animal participants, it is not possible to mask participants to their assignment. While often unavoidable, this can create conscious or unconscious bias in both participants and researchers (Viera and Bangdiwala, 2007). Specifically, a participant's knowledge of their group assignment can result in significant bias, especially when the control group may be perceived as inferior. For example, consider the experience of a participant drawn to participate in a study involving therapy dog interaction who learns that they are assigned to a control condition void of any animal contact. This may result in expectancy biases which could subsequently affect their engagement in the study, their completion of pre- and post-test measures, and even their desire to remain in the study.
In order to minimize these biases, it is important to treat participants identically across groups in all respects except for the intervention and to conceal treatment assignment from those involved in participant recruitment (Kendall, 2003). However, a recent review of RCTs on the effectiveness of AAIs found that there was a “remarkable” lack of execution and description of concealing or masking procedures (Kamioka et al., 2014). While AAI RCTs often cannot mask participants to their assignment, other strategies are suggested to reduce bias. For example, the research team can mask key personnel including outcome assessors, data analysts, statisticians, and even data safety and monitoring committees (Juul et al., 2021).
A final consideration for randomization procedures in AAI RCTs is that in addition to random assignment to groups, participants assigned to the treatment group can also be randomly assigned to interact with a specific individual animal-handler team. This is especially appropriate in contexts where multiple animal-handler teams simultaneously participate in an AAI (e.g., multiple dog-handler teams participating in an on-campus canine-assisted intervention; Binfet et al., 2022a). Thus, randomization to condition (i.e., treatment or control) and within the treatment condition can help researchers in their efforts to establish claims that changes in outcome variables were due uniquely to the AAI itself.
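These two layers of randomization can be sketched as follows. This is a hedged illustration only: it uses permuted-block randomization to condition (which keeps group sizes balanced throughout enrollment), followed by random assignment of treatment participants to dog-handler teams. The participant IDs and team names are hypothetical, and real trials should use a statistician-generated, concealed allocation sequence rather than ad hoc scripts:

```python
import random

def permuted_block_randomization(participants, block_size=4, seed=42):
    """Assign participants to treatment or control in permuted blocks
    so that group sizes remain balanced throughout enrollment."""
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(participants), block_size):
        block = participants[start:start + block_size]
        labels = ["treatment", "control"] * (block_size // 2)
        rng.shuffle(labels)  # random order within each block
        for pid, label in zip(block, labels):
            assignments[pid] = label
    return assignments

def assign_handler_teams(treatment_ids, teams, seed=7):
    """Second layer: randomly assign each treatment participant
    to a specific animal-handler team."""
    rng = random.Random(seed)
    return {pid: rng.choice(teams) for pid in treatment_ids}

# Hypothetical participant IDs and dog-handler teams
participants = [f"P{i:02d}" for i in range(1, 13)]
groups = permuted_block_randomization(participants)
treated = sorted(pid for pid, g in groups.items() if g == "treatment")
teams = assign_handler_teams(treated, ["team_A", "team_B", "team_C"])
```

Fixing the random seed makes the allocation sequence reproducible for auditing, while concealment is maintained by keeping the sequence away from personnel who enroll participants.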
CONTROL OR COMPARISON GROUPS
The inclusion of a control or comparison group is vital in allowing inferences to be drawn about treatment efficacy (Kazdin, 2015). Using an inactive control group in AAI RCTs, in which participants do not receive any intervention or treatment, can not only result in bias but also may be unethical. A waitlist control group design is one way to overcome the ethical limitations of an inactive control group, allowing the control group access to the AAI intervention at the study's conclusion. However, the use of inactive control groups has been criticized in discussions of AAI research as they cannot account for nonspecific treatment effects, threatening the study's internal validity (Marino, 2012; López-Cepero, 2020). Specifically, to build the evidence base of AAIs, it is necessary to isolate the specific effects of the intervention itself, and control for the effects that are not specific to the intervention (Herzog, 2015). In a 2022 analysis of control and comparison groups across almost 200 AAI research studies, the nonspecific treatment effects most commonly controlled for included therapeutic aspects, social interaction, physical activity, education or training, and a plush or toy animal (Wagner et al., 2022).
In contrast, the use of active comparison groups can not only minimize participant biases but also can control for these nonspecific treatment effects (Kazdin, 2015; López-Cepero, 2020). For example, comparison groups can receive a dose-matched intervention or activity without the animal present (e.g., Hediger et al., 2019; Chen et al., 2022) or the AAI can be examined as an adjunct to an existing evidence-based intervention (e.g., Flynn et al., 2019). In this way, researchers can harness the knowledge and theoretical mechanisms already gathered in evidence-based treatments to assess AAI as a component of a program rather than a standalone program (López-Cepero, 2020). Most importantly, active control groups can also isolate specific treatment effects. In the same analysis of control and comparison groups in AAI research mentioned above, the most common specific treatment effects examined included animal presence, interaction with an animal, movement by the animal, physical contact, and taking care of an animal (Wagner et al., 2022).
MEASUREMENT
When operationalizing the primary and secondary outcomes, the choice of measurement strategy affects conclusions about the intervention effects. Quantitative methods are traditionally used in RCTs to assess change in self-report, observational, clinical, and physiological outcomes (Coster, 2013). Using standardized measurement tools allows for statistical analyses, hypothesis testing, and direct comparison across studies via meta-analytic methods. In addition, qualitative methods provide a rich source of information that can enhance understanding of results from AAI RCTs as well as aid in generating new hypotheses. Qualitative methods can also be combined with quantitative methods in a mixed-methods approach to explore outcomes from AAIs (Gilad, 2021). This approach is beneficial to AAI research as it can allow participants, therapists, and handlers to describe their experiences in their own words, which can ultimately be used to test theories and hypotheses in future studies (Kazdin, 2015).
When using quantitative methods in AAI RCTs, measures must be valid and reliable to maximize confidence in the study’s findings (Fitzner, 2007). Researchers should consider the strengths and limitations of different types of measures in terms of their reliability and validity as well as their cost and feasibility (Rodriguez et al., 2018). For example, AAI RCTs may incorporate outcomes that are clinical (e.g., depressive symptoms), patient-reported (e.g., self-esteem, loneliness), observational (e.g., social behavior), or physiological (e.g., heart rate, cortisol). Another consideration is that AAI researchers must choose measures that are sensitive to change following animal interaction (Vermeersch et al., 2000). Some strategies that maximize a measure’s sensitivity to change include: ensuring item comprehensibility and cultural appropriateness; covering a construct’s entire potential range; eliminating duplicate items; selecting response options that optimally reflect participant perception of the construct; and directly asking about change (for a detailed analysis, see Fok and Henry, 2015). Finally, researchers must consider the measurement time points, potentially administering both short-term and follow-up post-test measures to determine long-lasting effects (i.e., the extent to which the intervention “sticks”).
Although RCTs represent the gold standard in study designs, they remain vulnerable to validity threats. Unmeasured characteristics of study participants, therapy animals, handlers, and the intervention context can suppress or augment intervention effects. Common confounders in HAI research include selection bias, animal type and temperament, handler attributes, study personnel, the study setting, differential attrition, and interactions between these variables (Wilson and Barker, 2003; Serpell et al., 2017; Rodriguez et al., 2021). Reliable and valid measurement of these variables allows researchers to examine whether extraneous variables influence intervention outcomes. Further, it is important to keep in mind that if a human-animal bond outcome is explored, many of these measures have not undergone psychometric testing, which contributes to the heterogeneity of findings in HAI research (Wilson and Netting, 2012). In fields such as AAI, self-report measures can also be affected by social desirability bias, a tendency of participants to respond in ways they perceive as socially desirable. Social desirability responding exerts its greatest influence when study participants are identifiable (e.g., answering questions in front of a researcher) or when self-perception matters (Krumpal, 2013). Examples of measures that may be impacted by social desirability bias in AAI studies are attachment to pets, provision of pet care, and perceptions of a therapy animal interaction.
SAMPLE
In all RCTs, it is important to conduct a power analysis to estimate the number of participants needed in each group to detect a treatment effect on the primary outcome. While this practice may be widespread in current AAI RCTs, it is not well-reported. A systematic review of AAI RCTs on psychosocial outcomes found that not only were sample sizes relatively small, but only one of eight RCTs in the review reported a statistical power analysis to confirm that the sample size was sufficient to detect an effect on the primary outcome (Maujean et al., 2015). A second sample consideration specific to research on AAIs is the potential for a participation bias such that participants who choose to be in AAI studies disproportionally possess certain traits that make them unrepresentative of the broader population (Elston, 2021). For example, many AAI studies exclude participants with a fear of or allergy to the species incorporated into the intervention. Participants are also usually self-selected in that they are amenable to the AAI and have an affinity toward animals. Thus, researchers should be clear about how this bias limits the generalizability of their findings.
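For an RCT comparing an AAI group to a control group on a continuous primary outcome, the required sample size per group can be approximated from the target effect size, alpha level, and desired power. The sketch below uses the standard normal approximation for a two-sided, two-sample t-test; it slightly underestimates relative to exact t-based calculations, and the effect size shown is purely illustrative:

```python
import math

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-sample t-test,
    via the normal approximation n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_crit = {0.05: 1.9600, 0.01: 2.5758}[alpha]   # two-sided critical values
    z_power = {0.80: 0.8416, 0.90: 1.2816}[power]  # quantiles for target power
    n = 2 * ((z_crit + z_power) / effect_size) ** 2
    return math.ceil(n)

# A medium standardized effect (Cohen's d = 0.5) at alpha = .05 and 80% power
print(n_per_group(0.5))  # → 63 per group (exact t-based methods give ~64)
```

Because AAI effect sizes in meta-analyses are often small to moderate, and attrition in multi-session trials can be substantial, researchers may wish to inflate the computed n accordingly (e.g., by an anticipated dropout rate) before finalizing recruitment targets.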
Second, like other scientific fields, HAI research faces sampling challenges stemming from an oversampling of WEIRD (Western, educated, industrialized, rich, and democratic; Henrich et al., 2010) participants. Specifically, AAI studies continue to be conducted with homogeneous samples with poor representation of varying gender, racial, socioeconomic, and cultural identities. Further, in fields such as experimental psychology and cognitive science, researchers rely heavily on undergraduate student participants for a large majority of studies, which severely limits the generalizability of findings (Henrich et al., 2010). Aligning with this, many AAI RCTs (especially with therapy dogs) are also conducted on university campuses with undergraduate students, causing concern regarding the generalizability of findings from this unique population (see Peterson and Merunka, 2014; Hanel and Vione, 2016). In AAI research and across psychological studies, there remains a dependence on convenience sampling with minimal efforts toward drawing from diverse samples (Nielsen et al., 2017). Future efforts should be made to recruit more varied samples, including considerations of diversity, equity, inclusion and representation, and decolonization (Sheade and Chandler, 2014).
Part IV: Intervention implementation and data collection
IMPLEMENTATION FIDELITY
All RCTs require attention to implementation fidelity, that is, ensuring that the intervention was administered to participants as intended (Carroll et al., 2007). This is because factors other than an intervention can affect treatment outcomes, including interventionist skill and training, protocol adherence, and intervention complexity (Perepletchikova and Kazdin, 2005). Due to the complexities inherent in conducting AAI research, we might consider implementation fidelity as a sort of checkpoint that holds researchers accountable to report the extent to which the intervention was successfully executed. Reporting fidelity is also critical for replication efforts and comparing efficacy across studies. Further, without documentation of the intervention's delivery, we cannot determine whether null findings from an RCT are due to a lack of benefits from animal interaction itself or poor implementation of the intervention (An et al., 2020). Implementation fidelity is commonly conceptualized by measures of adherence, dosage, quality of intervention delivery, and participant responsiveness (An et al., 2020), which we describe in detail below.
Protocol adherence
Achieving high implementation fidelity in RCTs requires adherence to protocols to ensure that the intervention is delivered as designed (Carroll et al., 2007). Consistency of intervention delivery and fidelity to the treatment protocol is paramount. AAI research will benefit from continued efforts to develop intervention manualization to promote consistency and reproducibility (O’Haire, 2013). Intervention manuals specify the order and nature of intervention elements, the time spent on each treatment component, and guidelines for the language used in the sessions. Researchers should carefully document whether intervention personnel followed a semi-structured script (versus a free-flowing conversation) and whether any instructions were provided to participants encouraging or restricting their behavior or engagement with animals or other participants. Other considerations include clear descriptions of the duration of the intervention (i.e., the number of minutes participants spent interacting with animals), the ratio of the animal-to-human participants within a given session, and the number of sessions offered as part of the intervention. The use of intervention manuals can encourage fidelity by providing specific guidelines for each of these elements of implementation.
Dosage
Implementation fidelity also requires that the dosage of the intervention is received at the frequency and duration intended. However, even if an AAI is delivered with high adherence and with the intended number of sessions, participants will still vary in the degree to which they physically or socially interact with the animal (Friedmann et al., 2019). Therefore, an evaluation of dosage in AAI RCTs should ideally also contain measures of the actual interactions that occur between human and animal participants during the intervention. Doing so will help to determine the components of AAIs that are most critical for producing positive outcomes. For example, a recent study on the effects of AAI on assisted-living residents monitored treatment fidelity by quantifying interactions between residents and therapy dogs (e.g., looking at, talking to, brushing, petting) for each AAI session (Friedmann et al., 2019). Researchers not only found that these interactions varied both within and between residents over sessions, but also found a positive relationship between the types of interactions engaged in and improvements in residents’ positive affect and mood (Friedmann et al., 2019). Therefore, researchers should consider reporting interactions between participants and therapy animals in detail when possible.
Quality of intervention
Implementation fidelity also includes assessing the quality of the intervention, or the manner in which personnel deliver an intervention (Carroll et al., 2007). This requires researchers to describe, in depth, the preparation and training of research personnel, practitioners, therapists, animal handlers, and the animals involved in delivering treatment protocols (e.g., Kivlen et al., 2022). Specifically, researchers should describe the suitability of the animal-handler team for therapy work, and whether any welfare issues arose that compromised the team's participation (Santaniello et al., 2021). Because the collective experience of the animal, handler, and/or mental health professional involved in the AAI impacts the very nature of how the intervention is delivered and experienced by participants in the study, this information is crucial in documenting the quality of the intervention delivery. The key characteristics of the animals involved are also important to describe (e.g., species and breeds, sex, age, rearing/training history, and health status). This information is not only recommended as best practice, but also ensures that the study is replicable (Ng et al., 2019).
Participant engagement
Finally, implementation fidelity can also include an assessment of participant responsiveness within each session, which measures how participants respond to or engage with the intervention (Carroll et al., 2007). In AAI research, this can include evaluations of participants’ level of enjoyment, connection, and/or satisfaction with the AAI. For example, in a recent RCT examining the effects of virtual therapy dog visits for undergraduate students, researchers measured engagement with two questions: 1) “How engaged were you over the course of this session?” and 2) “How connected to the handler did you feel?” (Binfet et al., 2022b). Findings revealed that participants with higher self-ratings of engagement in the intervention had significantly better outcomes on anxiety, campus connectedness, positive and negative affect, and stress (Binfet et al., 2022b).
DATA SAFETY AND MONITORING
Data Safety and Monitoring Boards (DSMBs) or plans (DSMPs) are frequently required for clinical trials, including those evaluating AAIs. DSMBs are composed of scientific experts and biostatisticians who are not intimately involved in the trial. DSMBs monitor study safety outcomes, conduct interim data analyses, and request study termination according to pre-specified "stopping rules" (Slutsky and Lavery, 2004). Such boards are generally convened for RCTs when questions exist regarding patient safety and there is a need to guard against enrolling additional patients if there is evidence of substantial risk or lack of benefit. For AAI RCTs, this oversight is critical to ensuring participants’ safety. For example, DSMB elements include mechanisms for tracking and reporting adverse events. The need for a DSMB in an AAI RCT may arise, for instance, if suicidality is an outcome of interest. In this case, the role of the DSMB may be to conduct interim analyses to determine whether rates of suicide attempts differ between the control and AAI groups to such a degree that enrolling additional participants would not change the study conclusions.
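As an illustrative sketch of such an interim look, the Python snippet below compares adverse-event rates between two trial arms with a simple two-proportion z-test and applies a pre-specified stopping boundary. The counts, the `two_prop_ztest` helper, and the 0.001 boundary are all hypothetical; real DSMBs use formally derived group-sequential boundaries rather than a single fixed threshold:

```python
import math

def two_prop_ztest(events_a, n_a, events_b, n_b):
    """Two-proportion z-test (normal approximation) for an interim DSMB
    look at adverse-event rates in two trial arms."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

# Hypothetical interim look: 9/60 adverse events in control vs. 2/60 in AAI.
z, p = two_prop_ztest(9, 60, 2, 60)
STOP_BOUNDARY = 0.001  # conservative interim alpha, set a priori by the DSMB
stop_for_difference = p < STOP_BOUNDARY
```

The conservative interim boundary reflects the design choice that early looks should only trigger termination on overwhelming evidence, preserving the overall type I error rate of the trial.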
Part V: Data analysis and dissemination
STATISTICAL CONSIDERATIONS
The need to account for species- and animal-specific intervention effects is unique to RCTs conducted in AAI. This represents both a disadvantage (unintended influences on treatment outcomes) and an opportunity (gathering valuable information about human-animal interaction dynamics). Including interventionist and animal variables in patient outcome analyses allows researchers to account for this variability. Animals vary widely in demographic characteristics, genetics, temperament, and behavior; this variability likely influences AAI outcomes, with higher levels of participant-animal "fit" potentially leading to greater intervention effectiveness (for a thorough review on variability in human and animal participants in AAIs, see Rodriguez et al., 2021). Likewise, researchers and therapists vary in personality, training, and experience, which could influence outcomes (Johns et al., 2019). Statistical methods such as intraclass correlation coefficients (ICCs) and multilevel modeling offer ways to quantify these effects on study outcomes.
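As a minimal sketch of how an ICC quantifies such clustering, the Python function below computes a one-way random-effects ICC(1) from outcomes grouped by hypothetical handler-dog teams; the data and the `icc1` helper are invented for illustration, and real analyses would typically use multilevel modeling software:

```python
def icc1(clusters):
    """One-way random-effects ICC(1): share of outcome variance attributable
    to cluster membership (e.g., which handler-dog team delivered sessions).
    Assumes balanced clusters for simplicity."""
    k = len(clusters)                      # number of clusters
    n = len(clusters[0])                   # participants per cluster
    grand = sum(sum(c) for c in clusters) / (k * n)
    ms_between = n * sum((sum(c) / n - grand) ** 2 for c in clusters) / (k - 1)
    ms_within = sum(
        sum((x - sum(c) / n) ** 2 for x in c) for c in clusters
    ) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Hypothetical outcome scores grouped by three handler-dog teams.
scores_by_team = [[5, 6, 5, 6], [2, 3, 2, 3], [8, 7, 8, 7]]
icc = icc1(scores_by_team)  # high value: teams explain most of the variance
```

A high ICC would signal that which team delivered the intervention matters, and that team should enter the outcome model as a random effect rather than being ignored.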
Previous RCTs in the field have shown that the effects of AAIs on patient health outcomes can be highly variable, such that benefits may apply to some participants only under certain conditions (Kamioka et al., 2014; Maujean et al., 2015; Chen et al., 2022). Therefore, moderator analyses are fundamental to the development of the field to answer questions about when, under what conditions, and for whom interventions are effective (MacKinnon and Luecken, 2008). Additionally, mediators provide information regarding mechanisms of effect by articulating pathways through which an intervention influences an outcome (Chmura Kraemer et al., 2008). These approaches are helpful in exploring the role that individual differences, including past experiences with animals, personality characteristics, animal type and temperament, and person-animal “fit”, play in AAIs (Rodriguez et al., 2021).
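A hypothetical sketch of a first-pass moderator exploration in Python: the treatment-control difference is computed separately within each level of a candidate moderator (here an invented `pet_owner` flag). In practice, moderation would be tested formally with an interaction term in a regression model rather than by comparing subgroup means:

```python
from statistics import mean

def subgroup_effects(records, moderator):
    """Treatment-vs-control mean outcome difference within each level of a
    hypothesised moderator (e.g., prior pet ownership)."""
    effects = {}
    for level in sorted({r[moderator] for r in records}):
        treated = [r["y"] for r in records
                   if r[moderator] == level and r["arm"] == "AAI"]
        control = [r["y"] for r in records
                   if r[moderator] == level and r["arm"] == "control"]
        effects[level] = mean(treated) - mean(control)
    return effects

# Hypothetical data in which the AAI benefit is larger for prior pet owners.
data = [
    {"arm": "AAI", "pet_owner": True, "y": 9}, {"arm": "AAI", "pet_owner": True, "y": 8},
    {"arm": "control", "pet_owner": True, "y": 5}, {"arm": "control", "pet_owner": True, "y": 4},
    {"arm": "AAI", "pet_owner": False, "y": 6}, {"arm": "AAI", "pet_owner": False, "y": 5},
    {"arm": "control", "pet_owner": False, "y": 5}, {"arm": "control", "pet_owner": False, "y": 4},
]
effects = subgroup_effects(data, "pet_owner")
```

A large gap between subgroup effects, as in this toy example, is the pattern a formal moderation test would then evaluate for statistical reliability.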
INTER-RATER RELIABILITY
Inter-rater reliability (IRR) measures are used when researchers assign values to constructs of interest in study participants (Gwet, 2014). Despite wide variation in human and animal responses to their environments, most contain common elements. For example, humans and dogs each exhibit stress in characteristic ways: fearful expressions, trembling, and avoidance are hallmarks of anxiety in humans, while canine stress can be indicated by yawning or lip-licking (McGreevy et al., 2012). IRR measures capitalize upon these commonalities to increase the likelihood that researchers are measuring similar constructs when they rate study participants. The choice of IRR statistic depends upon the level of measurement, the likelihood of chance agreement between raters, and whether partial agreement is possible (Gwet, 2014). Several ethograms for evaluating behavior during AAIs have been developed in the literature and should be reused when possible. For example, Pendry et al. (2020) developed human and canine ethograms to evaluate behavior during an AAI and trained raters to a concordance standard. Such training involves reviewing scoring guidelines and behavioral anchors, operationalizing human-animal interactions, assigning ratings, then discussing scores to resolve discrepancies. Periodic IRR checks are recommended to guard against rater drift (Gwet, 2014).
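For two raters assigning categorical ethogram codes, chance-corrected agreement can be computed with Cohen's kappa. A minimal Python sketch with hypothetical behaviour codes:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters on categorical codes
    (e.g., ethogram behaviours), corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    categories = set(rater1) | set(rater2)
    # Expected chance agreement from each rater's marginal code frequencies.
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical session codes assigned independently by two trained raters.
r1 = ["pet", "look", "pet", "talk", "pet", "look"]
r2 = ["pet", "look", "talk", "talk", "pet", "look"]
kappa = cohens_kappa(r1, r2)  # kappa == 0.75
```

Raw percent agreement here is 5/6, but kappa is lower because some agreement would occur by chance alone; this is why chance-corrected statistics are preferred for IRR reporting.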
REPORTING FINDINGS
As with all RCTs, a detailed description of limitations is important for contextualizing the results and informing replication or extension. Reporting all data analyses and outcomes in thorough detail, including effect size estimates, is important for many reasons, but especially for future meta-analytic efforts and cross-study comparisons. Reflecting a pattern of poor reporting in AAI RCTs, reviews of RCTs in the field have found that many studies do not report outcomes in enough detail to be extracted for meta-analyses (Kamioka et al., 2014; Maujean et al., 2015; Chen et al., 2022). Other reporting biases may occur by omitting null findings from results or selectively highlighting only significant findings in abstracts (Herzog, 2015). Complicating matters further, it has been argued that AAI research is especially prone to researcher bias; notably, researchers conducting AAI studies tend to be animal lovers who, at their core, want to see the interventions they employ in their studies work (Griffin et al., 2011). Publication bias toward positive findings, known as the “file drawer” effect, should be minimized to encourage greater transparency in the field.
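As an example of the kind of effect size estimate that supports meta-analytic extraction, a standardized mean difference (Cohen's d with a pooled standard deviation) can be computed as below. The scores are hypothetical, and the helper name is our own:

```python
from statistics import mean, variance

def cohens_d(treatment, control):
    """Cohen's d with pooled standard deviation: a standardized effect size
    that should be reported alongside p-values so results can feed
    meta-analyses and cross-study comparisons."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = (((n1 - 1) * variance(treatment) + (n2 - 1) * variance(control))
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical post-intervention wellbeing scores for two small arms.
d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
```

Reporting d (or a comparable standardized estimate) together with group sizes, means, and standard deviations gives meta-analysts everything needed to pool the result, even when the primary test was nonsignificant.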
Discussion
Although this commentary has largely supported RCTs for their ability to examine causal effects, it is worth noting that RCTs are not without their flaws (Simon, 2001). RCTs are expensive to conduct, require both human and animal expertise on the research team, and require access to qualified human-animal handler teams. They can be limited in their generalizability in light of often-restricted participant inclusion/exclusion criteria, and they can be described inadequately or delivered in nuanced ways that constrain replicability. An RCT design will also not be universally effective at answering the large range of research questions in the field of AAI. For example, nonrandomized quasi-experimental designs are often more logistically feasible and may be more ethical to conduct. Crossover designs are also beneficial for observing changes in individual outcomes following AAIs. In addition, qualitative methods are needed alongside quantitative designs like RCTs to capture the lived experience of participants, so that researchers can understand how AAIs are experienced and received from the participant’s perspective. Although RCT designs will continue to be important for establishing efficacy in the field, AAI research requires a combination of many designs, approaches, and methods.
In conclusion, this commentary has provided a series of recommendations, considerations, and discussions of challenges involved in conducting RCTs on AAIs, with the goal of advocating for greater rigor in AAI research. Certainly, spurious claims of cause and effect have been made in AAI research, leading to misinformation in public and media perception (for detailed discussions, see Herzog, 2011; Rodriguez et al., 2021). As noted throughout this commentary, conducting RCTs to establish the causal efficacy of AAIs for human health and well-being is a complex undertaking. As AAI research continues to advance and we seek to address gaps in our knowledge about how people and animals benefit from interacting with one another, it is our goal that these considerations and recommendations aid researchers in the design, implementation, and dissemination of future RCTs in the field.
CONFLICT OF INTEREST
The fourth author’s partner receives or has received research support, acted as a consultant and/or has received honoraria from Acadia, Adamas, Aevi, Afecta, Akili, Alkermes, Allergan, American Academy of Child & Adolescent Psychiatry, American Psychiatric Press, Arbor, Axsome, Daiichi-Sankyo, Emelex, Gedeon Richter, Genentech, Idorsia, Intra-Cellular Therapies, Kempharm, Luminopia, Lundbeck, MedAvante-ProPhase, Merck, MJH Life Sciences, NIH, Neurim, Otsuka, PaxMedica, PCORI, Pfizer, Physicians Postgraduate Press, Q BioMed, Receptor Life Sciences, Roche, Sage, Signant Health, Sunovion, Supernus Pharmaceuticals, Syneos, Syneurx, Takeda, Teva, Tris, and Validus.
ETHICS STATEMENT
The authors confirm that the research meets any required ethical guidelines, including adherence to the legal requirements of the study country.
AUTHOR CONTRIBUTIONS
K.E.R: Conceptualization, Writing – Original Draft, Writing – Review & Editing, Supervision; F.L.L.G, J.T.B., L.T., and N.R.G.: Conceptualization, Writing – Original Draft, Writing – Review & Editing.
ACKNOWLEDGMENTS
This commentary was inspired by a discussion held at the 2021 Centers for the Human-Animal Bond Conference hosted by the Purdue University Center for the Human-Animal Bond. We acknowledge the contributions of the attendees of this conference as well as the Human-Animal Bond Research Institute (HABRI), Mars Petcare, and Purdue University for their financial support of the conference.
FUNDING STATEMENT
The authors have no funders to declare.
References
Allen, B., Shenk, C.E., Dreschel, N.E., Wang, M., Bucher, A.M. et al. (2022) Integrating animal-assisted therapy into TF-CBT for abused youth with PTSD: A randomized controlled feasibility trial. Child Maltreatment 27(3), 466–477.
An, M., Dusing, S.C., Harbourne, R.T., Sheridan, S.M. and Consortium, S.-P. (2020) What really works in intervention? Using fidelity measures to support optimal outcomes. Physical Therapy 100(5), 757–765.
Batubara, S.O., Tonapa, S.I., Saragih, I.D., Mulyadi, M. and Lee, B.-O. (2022) Effects of animal-assisted interventions for people with dementia: A systematic review and meta-analysis. Geriatric Nursing 43, 26–37.
Beetz, A.M. (2017) Theories and possible processes of action in animal assisted interventions. Applied Developmental Science 21(2), 139–149.
Bhide, A., Shah, P.S. and Acharya, G. (2018) A simplified guide to randomized controlled trials. Acta Obstetricia et Gynecologica Scandinavica 97(4), 380–387.
Binfet, J.T., Green, F.L.L. and Draper, Z.A. (2022a) The importance of client-canine contact in canine-assisted interventions: A randomized controlled trial. Anthrozoös 35(1), 1–22.
Binfet, J.T., Tardif-Williams, C., Draper, Z.A., Green, F.L.L., Singal, A. et al. (2022b) Virtual canine comfort: A randomized controlled trial of the effects of a canine-assisted intervention supporting undergraduate wellbeing. Anthrozoös 35(6), 809–832.
Brelsford, V.L., Dimolareva, M., Gee, N.R. and Meints, K. (2020) Best practice standards in animal-assisted interventions: How the LEAD risk assessment tool can help. Animals 10(6), 974.
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J. et al. (2007) A conceptual framework for implementation fidelity. Implementation Science 2(1), 1–9.
Chen, C.-R., Hung, C.-F., Lee, Y.-W., Tseng, W.-T., Chen, M.-L. et al. (2022) Functional outcomes in a randomized controlled trial of animal-assisted therapy on middle-aged and older adults with schizophrenia. International Journal of Environmental Research and Public Health 19(10), 6270.
Chmura Kraemer, H., Kiernan, M., Essex, M. and Kupfer, D.J. (2008) How and why criteria defining moderators and mediators differ between the Baron & Kenny and MacArthur approaches. Health Psychology 27(2S), S101.
Cobb, S. (1976) Social support as a moderator of life stress. Psychosomatic Medicine 38(5), 300–314.
Coster, W.J. (2013) Making the best match: Selecting outcome measures for clinical trials and outcome studies. The American Journal of Occupational Therapy 67(2), 162–170.
Diamond, M.P., Eisenberg, E., Huang, H., Coutifaris, C., Legro, R.S. et al. (2019) The efficiency of single institutional review board review in National Institute of Child Health and Human Development Cooperative Reproductive Medicine Network–initiated clinical trials. Clinical Trials 16(1), 3–10.
Elston, D.M. (2021) Participation bias, self-selection bias, and response bias. Journal of the American Academy of Dermatology.
Fields, B., Peters, B.C., Merritt, T. and Meyers, S. (2021) Advancing the science and practice of equine-assisted services through community-academic partnerships. Human-Animal Interaction Bulletin.
Fine, A.H., Beck, A.M. and Ng, Z. (2019) The state of animal-assisted interventions: Addressing the contemporary issues that will shape the future. International Journal of Environmental Research and Public Health 16(20), 3997.
Fitzner, K. (2007) Reliability and validity: A quick review. The Diabetes Educator 33(5), 775–780.
Flynn, E., Roguski, J., Wolf, J., Trujillo, K., Tedeschi, P. et al. (2019) A randomized controlled trial of animal-assisted therapy as an adjunct to intensive family preservation services. Child Maltreatment 24(2), 161–168.
Flynn, L. (2009) The benefits and challenges of multisite studies: Lessons learned. AACN Advanced Critical Care 20(4), 388–391.
Fok, C.C.T. and Henry, D. (2015) Increasing the sensitivity of measures to change. Prevention Science 16(7), 978–986.
Friedmann, E., Galik, E., Thomas, S.A., Hall, S., Cheon, J. et al. (2019) Relationship of behavioral interactions during an animal-assisted intervention in assisted living to health-related outcomes. Anthrozoös 32(2), 221–238.
Gee, N.R. and Mueller, M.K. (2019) A systematic review of research on pet ownership and animal interactions among older adults. Anthrozoös 32(2), 183–207.
Gee, N.R., Reed, T., Whiting, A., Friedmann, E., Snellgrove, D. et al. (2019) Observing live fish improves perceptions of mood, relaxation and anxiety, but does not consistently alter heart rate or heart rate variability. International Journal of Environmental Research and Public Health 16(17), 3113.
Gee, N.R., Rodriguez, K.E., Fine, A.H. and Trammell, J.P. (2021) Dogs supporting human health and well-being: A biopsychosocial approach [Conceptual Analysis]. Frontiers in Veterinary Science 8(246), 630465.
Gilad, S. (2021) Mixing qualitative and quantitative methods in pursuit of richer answers to real-world questions. Public Performance & Management Review 44(5), 1075–1099.
Goodlett, D., Hung, A., Feriozzi, A., Lu, H., Bekelman, J.E. et al. (2020) Site engagement for multi-site clinical trials. Contemporary Clinical Trials Communications 19, 100608.
Green, J.D., Coy, A.E. and Mathews, M.A. (2018) Attachment anxiety and avoidance influence pet choice and pet-directed behaviors. Anthrozoös 31(4), 475–494.
Griffin, J., McCune, S., Maholmes, V., Hurley, K., McCardle, P. et al. (2011) Scientific research on human-animal interaction. In: Animals in Our Lives: Human–Animal Interaction in Family, Community, and Therapeutic Settings, pp. 227–236.
Gwet, K.L. (2014) Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters. Advanced Analytics, LLC, Gaithersburg, MD.
Hanel, P.H. and Vione, K.C. (2016) Do student samples provide an accurate estimate of the general public? PLoS One 11(12), e0168354.
Hariton, E. and Locascio, J.J. (2018) Randomised controlled trials—The gold standard for effectiveness research. BJOG: An International Journal of Obstetrics and Gynaecology 125(13), 1716.
Hauge, H., Kvalem, I.L., Berget, B., Enders-Slegers, M.-J. and Braastad, B.O. (2014) Equine-assisted activities and the impact on perceived social support, self-esteem and self-efficacy among adolescents – an intervention study. International Journal of Adolescence and Youth 19(1), 1–21.
Hediger, K., Thommen, S., Wagner, C., Gaab, J. and Hund-Georgiadis, M. (2019) Effects of animal-assisted therapy on social behaviour in patients with acquired brain injury: A randomised controlled trial. Scientific Reports 9(1), 1–8.
Hediger, K., Wagner, J., Künzi, P., Haefeli, A., Theis, F. et al. (2021) Effectiveness of animal-assisted interventions for children and adults with post-traumatic stress disorder symptoms: A systematic review and meta-analysis. European Journal of Psychotraumatology 12(1), 1879713.
Henrich, J., Heine, S.J. and Norenzayan, A. (2010) The weirdest people in the world? Behavioral and Brain Sciences 33(2–3), 61–83.
Herzog, H. (2011) The impact of pets on human health and psychological well-being: Fact, fiction, or hypothesis? Current Directions in Psychological Science 20(4), 236–239.
Herzog, H. (2015) The research challenge: Threats to the validity of animal-assisted therapy studies and suggestions for improvement. In: Fine, A.H. (ed.) Handbook on Animal-Assisted Therapy: Foundations and Guidelines for Animal-Assisted Interventions, 4th edn, pp. 402–407. Academic Press.
Holder, T.R., Gruen, M.E., Roberts, D.L., Somers, T. and Bozkurt, A. (2020) A systematic literature review of animal-assisted interventions in oncology (Part II): Theoretical mechanisms and frameworks. Integrative Cancer Therapies 19, 1534735420943269.
Jegatheesan, B., Beetz, A., Ormerod, E., Johnson, R., Fine, A. et al. (2014) IAHAIO White Paper 2014 (updated for 2018). Available at: https://iahaio.org/wp/wp-content/uploads/2018/04/iahaio_wp_updated-2018-final.pdf.
Johns, R.G., Barkham, M., Kellett, S. and Saxon, D. (2019) A systematic review of therapist effects: A critical narrative update and refinement to review. Clinical Psychology Review 67, 78–93.
Juul, S., Gluud, C., Simonsen, S., Frandsen, F.W., Kirsch, I. et al. (2021) Blinding in randomised clinical trials of psychological interventions: A retrospective study of published trial reports. BMJ Evidence-Based Medicine 26(3), 109–109.
Kamioka, H., Okada, S., Tsutani, K., Park, H., Okuizumi, H. et al. (2014) Effectiveness of animal-assisted therapy: A systematic review of randomized controlled trials. Complementary Therapies in Medicine 22(2), 371–390.
Kazdin, A.E. (2015) Methodological standards and strategies for establishing the evidence base of animal-assisted therapies. In: Fine, A.H. (ed.) Handbook on Animal-Assisted Therapy: Foundations and Guidelines for Animal-Assisted Interventions, 4th edn, pp. 377–390. Academic Press.
Kendall, J. (2003) Designing a research project: Randomised controlled trials and their principles. Emergency Medicine Journal: EMJ 20(2), 164.
Kivlen, C., Winston, K., Mills, D., DiZazzo-Miller, R., Davenport, R. et al. (2022) Canine-assisted intervention effects on the well-being of health science graduate students: A randomized controlled trial. The American Journal of Occupational Therapy 76(6).
Krumpal, I. (2013) Determinants of social desirability bias in sensitive surveys: A literature review. Quality & Quantity 47(4), 2025–2047.
Künzi, P., Ackert, M., Hund-Georgiadis, M. and Hediger, K. (2022) Effects of animal-assisted psychotherapy incorporating mindfulness and self-compassion in neurorehabilitation: A randomized controlled feasibility trial. Scientific Reports 12(1), 1–14.
Leighton, S.C., Nieforth, L.O., Griffin, T.C. and O’Haire, M.E. (2022) Researcher-practitioner interaction: Collaboration to advance the field of human-animal interaction. Human-Animal Interactions 8.
Linder, D.E., Folta, S.C., Must, A., Mulé, C.M., Cash, S.B. et al. (2021) A stakeholder-engaged approach to development of an animal-assisted intervention for obesity prevention among youth with autism spectrum disorder and their pet dogs. Frontiers in Veterinary Science 8, 735432.
López-Cepero, J. (2020) Current status of animal-assisted interventions in scientific literature: A critical comment on their internal validity. Animals 10(6), 985.
Macefield, R.C., Boulind, C.E. and Blazeby, J.M. (2014) Selecting and measuring optimal outcomes for randomised controlled trials in surgery. Langenbeck’s Archives of Surgery 399(3), 263–272.
MacKinnon, D.P. and Luecken, L.J. (2008) How and for whom? Mediation and moderation in health psychology. Health Psychology 27(2S), S99.
Marino, L. (2012) Construct validity of animal-assisted therapy and activities: How important is the animal in AAT? Anthrozoös 25(3), 139–151.
Maujean, A., Pepping, C.A. and Kendall, E. (2015) A systematic review of randomized controlled trials of animal-assisted therapy on psychosocial outcomes. Anthrozoös 28(1), 23–36.
McCullough, A., Ruehrdanz, A., Jenkins, M.A., Gilmer, M.J., Olson, J. et al. (2018) Measuring the effects of an animal-assisted intervention for pediatric oncology patients and their parents: A multisite randomized controlled trial. Journal of Pediatric Oncology Nursing 35(3), 159–177.
McCune, S., McCardle, P., Griffin, J.A., Esposito, L., Hurley, K., Bures, R. and Kruger, K.A. (2020) Editorial: Human-animal interaction (HAI) research: A decade of progress [Editorial]. Frontiers in Veterinary Science 7(44).
McFalls-Steger, C., Patterson, D. and Thompson, P. (2021) Effectiveness of animal-assisted interventions (AAIs) in treatment of adults with depressive symptoms: A systematic review. Human-Animal Interaction Bulletin.
McGreevy, P.D., Starling, M., Branson, N., Cobb, M.L. and Calnon, D. (2012) An overview of the dog–human dyad and ethograms within it. Journal of Veterinary Behavior 7(2), 103–117.
Nepps, P., Stewart, C.N. and Bruckno, S.R. (2014) Animal-assisted activity: Effects of a complementary intervention program on psychological and physiological variables. Journal of Evidence-Based Complementary & Alternative Medicine 19(3), 211–215.
Ng, Z., Morse, L., Albright, J., Viera, A. and Souza, M. (2019) Describing the use of animals in animal-assisted intervention research. Journal of Applied Animal Welfare Science 22(4), 364–376.
Nielsen, M., Haun, D., Kärtner, J. and Legare, C.H. (2017) The persistent sampling bias in developmental psychology: A call to action. Journal of Experimental Child Psychology 162, 31–38.
Nimer, J. and Lundahl, B. (2007) Animal-assisted therapy: A meta-analysis. Anthrozoös 20(3), 225–238.
Nosek, B.A., Ebersole, C.R., DeHaven, A.C. and Mellor, D.T. (2018) The preregistration revolution. Proceedings of the National Academy of Sciences 115(11), 2600–2606.
O’Haire, M.E. (2013) Animal-assisted intervention for autism spectrum disorder: A systematic literature review. Journal of Autism and Developmental Disorders 43(7), 1606–1622.
O’Haire, M.E., Bibbo, J., Mueller, M.K., Ng, Z.Y., Buechner-Maxwell, V.A. et al. (2018) Overview of centers and institutes for human-animal interaction in the United States. Human-Animal Interaction Bulletin 6, 3–22.
Pendry, P., Kuzara, S. and Gee, N.R. (2020) Characteristics of student–dog interaction during a meet-and-greet activity in a university-based animal visitation program. Anthrozoös 33(1), 53–69.
Perepletchikova, F. and Kazdin, A.E. (2005) Treatment integrity and therapeutic change: Issues and research recommendations. Clinical Psychology: Science and Practice 12(4), 365.
Peterson, R.A. and Merunka, D.R. (2014) Convenience samples of college students and research reproducibility. Journal of Business Research 67(5), 1035–1041.
Riva, J.J., Malik, K.M., Burnie, S.J., Endicott, A.R. and Busse, J.W. (2012) What is your research question? An introduction to the PICOT format for clinicians. The Journal of the Canadian Chiropractic Association 56(3), 167.
Rodriguez, K.E., Guérin, N.A., Gabriels, R.L., Serpell, J.A., Schreiner, P.J. et al. (2018) The state of assessment in human-animal interaction research. Human-Animal Interactions 6, 63–81.
Rodriguez, K.E., Herzog, H. and Gee, N.R. (2021) Variability in human-animal interaction research [Conceptual Analysis]. Frontiers in Veterinary Science 7(1207).
Rossetti, J. and King, C. (2010) Use of animal-assisted therapy with psychiatric patients: A literature review. Journal of Psychosocial Nursing and Mental Health Services 48(11), 44–48.
Santaniello, A., Garzillo, S., Cristiano, S., Fioretti, A. and Menna, L.F. (2021) The research of standardized protocols for dog involvement in animal-assisted therapy: A systematic review. Animals 11(9), 2576.
Saunders, G.H., Biswas, K., Serpi, T., McGovern, S., Groer, S. et al. (2017) Design and challenges for a randomized, multi-site clinical trial comparing the use of service dogs and emotional support dogs in veterans with post-traumatic stress disorder (PTSD). Contemporary Clinical Trials 62, 105–113.
Serpell, J., McCune, S., Gee, N. and Griffin, J.A. (2017) Current challenges to research on animal-assisted interventions. Applied Developmental Science 21(3), 223–233.
Sheade, H.E. and Chandler, C.K. (2014) Cultural diversity considerations in animal assisted counseling. Ideas and Research You Can Use: VISTAS.
Sibbald, B. and Roland, M. (1998) Understanding controlled trials. Why are randomised controlled trials important? BMJ: British Medical Journal 316(7126), 201.
Simon, S.D. (2001) Is the randomized clinical trial the gold standard of research? Journal of Andrology 22(6), 938–943.
Slutsky, A.S. and Lavery, J.V. (2004) Data safety and monitoring boards. New England Journal of Medicine 350(11), 1143–1147.
VanFleet, R. and Faa-Thompson, T. (2017) Animal Assisted Play Therapy. Professional Resource Press, Sarasota, FL.
Vermeersch, D.A., Lambert, M.J. and Burlingame, G.M. (2000) Outcome questionnaire: Item sensitivity to change. Journal of Personality Assessment 74(2), 242–261.
Viera, A.J. and Bangdiwala, S.I. (2007) Eliminating bias in randomized controlled trials: Importance of allocation concealment and masking. Family Medicine 39(2), 132.
Wagner, C., Grob, C. and Hediger, K. (2022) Specific and nonspecific factors of animal-assisted interventions: A systematic review. Frontiers in Psychology 13, 931347.
Wilson, C.C. and Barker, S.B. (2003) Challenges in designing human-animal interaction research. American Behavioral Scientist 47(1), 16–28.
Wilson, C.C. and Netting, F.E. (2012) The status of instrument development in the human–animal interaction field. Anthrozoös 25(Sup 1), s11–s55.
Wilson, E.O. (1984) Biophilia. Harvard University Press, Cambridge, MA.
Copyright
© The Authors 2023. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long the use is non-commercial and you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
History
Issue publication date: 1 January 2023
Submitted: 24 January 2023
Accepted: 7 February 2023
Published online: 10 March 2023