Open access
Case Report
25 September 2024

Measurement of attachment in human-animal interaction research

Abstract

Researchers in the field of human-animal interaction often investigate the construct of the human-animal bond in order to capture contextual information about companion-animal relationships beyond simply living with a pet. This practice allows researchers to go beyond a simple manifest variable (e.g., dog-owning vs. cat-owning participants) to investigate how specific relationships with companion animals may impact outcomes. However, the current measures of the relationship between people and their pets (i.e., attachment) are flawed. Specifically, their initial validations were based on homogeneous samples, the original conceptualizations of the human-animal bond were fragmented, and some measures used improper statistical techniques to determine their underlying factor structure. The current study utilized confirmatory factor analysis (n = 589) to examine the factor structures and underlying psychometric properties of the Lexington Attachment to Pets Scale and the Short Attachment to Pets Scale. The factor analyses revealed that both the Lexington Attachment to Pets Scale and the Short Attachment to Pets Scale have issues with model fit in a sample of undergraduate students at a public southwestern university. This article offers recommendations for future research investigating the human-animal bond.

Introduction

Although significant progress has been made in human-animal interaction (HAI) research, the field is still hampered by inconsistent methodologies, outcomes, and measurement issues (Rodriguez et al., 2021). Between studies, there has been little standardization of measures, and even for widely used measures, validation procedures have not always utilized contemporary approaches (Wilson and Netting, 2015; Samet et al., 2023). Specifically, there is a problem in the measurement of the relationship between humans and their pets (i.e., attachment; Wilson and Netting, 2015). The origin of this measurement issue may be at least partially rooted in the original conceptualization of the construct, which began to emerge in the HAI literature in the late 1970s and early 1980s (e.g., Rynearson, 1978; Baun et al., 1984). From the outset, there was no consistent label for the construct intended to assess the strength of the relationship between a person and their companion animal. As a result, a jangle fallacy (i.e., multiple names given to the same construct) developed within the measurement of this bond (Kelley, 1927). A person’s relationship with their companion animal may be informed by their attachment style (e.g., secure, avoidant; Ainsworth et al., 1978), but that does not necessarily mean that they have a distinct attachment style with their pet. Except in studies wherein HAI researchers have investigated specific attachment styles as originally operationalized by Ainsworth et al. (1978) and how those styles relate to relationships with companion animals (e.g., Zilcha-Mano et al., 2011), it is more appropriate to refer to the strength of the relationship as the human-animal bond (HAB). Thus, in this article we will refer to the HAB construct when discussing prior research, even if those articles and measures reported an intention to measure the strength of attachment.
To be clear, this issue is not isolated to the field of HAI – measurement issues and problematic assumptions in research are widespread across psychological sciences (Ioannidis, 2005), and it is our ethical duty as scientists and researchers to correct these issues when we become aware of them.

THE HUMAN-ANIMAL BOND

At least some of the variability in HAI research is thought to be due to the specific bonds between humans and their companion animals – which, when incorrectly measured, or unmeasured, could partially moderate outcomes (Rodriguez et al., 2021). There are many different measures of the HAB, including the Pet Attachment and Life-Impact Scale (PALS; Cromer and Barlow, 2013), the Short Attachment to Pets Scale (SAPS; Marsa-Sambola et al., 2016), and most notably, the Lexington Attachment to Pets Scale (LAPS; Johnson et al., 1992).
The LAPS serves as a good encapsulation of the measurement problem as it relates to the bond between a person and a companion animal. The LAPS is, as of this writing in March 2024, the most cited measure of the HAB, with 590 citations (per Google Scholar). It has been translated into Spanish (Ramírez et al., 2014), Italian (Testoni et al., 2017; Uccheddu et al., 2019; Riggio et al., 2021), German (Hielscher et al., 2019), Portuguese (De Albuquerque et al., 2023), and Japanese (Volsche et al., 2023). Many of the articles citing the LAPS have used it as a measure of the HAB. However, there were a number of issues in the creation and initial validation of the LAPS.
The original study design of the LAPS (Johnson et al., 1992) was conducted via telephone survey in Fayette County, Kentucky. The sample was 94% White, with an age range of 18–83 years. Given that the HAB construct appears to vary across the lifespan (Poresky and Daniels, 1998; Zilcha-Mano et al., 2011; Hirschenhauser et al., 2017), and that the sample was not nationally representative, this initial validation may not generalize to the larger population. Sampling issues aside, some of the items in the LAPS have questionable face validity; for example, there is a conflation between attitudes and the HAB in Item C (“I believe that pets should have the same rights and privileges as family members”) and Item N (“Pets deserve as much respect as humans do”). The HAB construct is theoretically bound in terms of the relationship between a specific person and their companion animal, and ipso facto does not cover the concept of attitudes toward companion animals generally.
In addition to the face validity issue described above, the authors of the LAPS used flawed statistical methods in their initial validation. Specifically, they sought to investigate the factor structure of the LAPS but conducted a principal components analysis (PCA), which does not identify the factors underlying the covariation in a dataset but is instead intended to capture total variation for the purpose of reducing the number of variables (Lilienfeld et al., 2015). Essentially, the researchers claimed to have found distinct factors when in reality they reported components, which are not indicative of the underlying dimensions of the variables (Lilienfeld et al., 2015). This analytical error is not limited to the LAPS; it also features in a number of other HAB measures, such as the SAPS (Marsa-Sambola et al., 2016), a popular HAB measure for children and youth. There is additional value in survey research that investigates the more granular, species-specific relationships between people and particular companion animals (e.g., dogs, cats, rabbits). Such measures would allow for complex analyses that account for participants who live with multiple species of companion animals. However, these measures often suffer from the same issues described above (e.g., Dwyer et al., 2006; Coleman et al., 2016; Howell et al., 2017). Furthermore, a dissertation utilizing a large – albeit predominantly White (91.5%) – sample could not confirm the proposed factor structure of the LAPS due to poor model fit and concluded that the measure may not be capturing the HAB construct (Zaparanick, 2008). Taken together, these limitations indicate that the measures currently used to assess the HAB could be improved upon in future research. It is also important to note that we are not advocating for ignoring or invalidating all previous research that has relied upon measures like the LAPS. Instead, it is our hope that the field can take note of the significant limitations of existing measures and use this study as a guidepost for improvement going forward.
Therefore, the present study was designed to examine the factor structures of two commonly used measures of the HAB, the LAPS and the SAPS, in order to better inform recommendations regarding their use. The aims of this study were to confirm the originally proposed factor structures of the LAPS and SAPS and to suggest remedies to improve measurement of the HAB.

Methods

PARTICIPANTS

Participants were 589 undergraduate students who participated in the biannual “Mass Survey” of students at a large state university in West Texas. The students completed the measures online and received experimental credit for their time and effort.

MEASURES

The Lexington Attachment to Pets Scale (LAPS)

The LAPS is a 23-item scale designed to measure the emotional attachment of individuals to their pets (Johnson et al., 1992). It uses a 4-point Likert-type scale (0 = Strongly disagree to 3 = Strongly agree) and comprises three proposed components – general attachment, people substituting, and animal rights/welfare – which together accounted for 53.5% of the measured variance in the original validation (Johnson et al., 1992). Example items include: “I often talk to other people about my pet” and “Quite often my feelings towards people are affected by the way they react to my pet.” To score the LAPS, items “h” and “u” are reverse scored and the item scores are then summed. Total scores range from 0 to 69, with higher scores indicating a stronger bond with one’s pet. The LAPS has demonstrated acceptable internal reliability (α = 0.928; Johnson et al., 1992).
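For illustration, the scoring rule just described can be implemented in a few lines of code. The following is a minimal sketch only, assuming hypothetical column names laps_a through laps_w for the 23 items; it is not the scoring syntax used in the original validation or in the present study.

```python
# Minimal LAPS scoring sketch. Assumes hypothetical columns laps_a ... laps_w,
# each coded 0 = Strongly disagree to 3 = Strongly agree.
import string
import pandas as pd

def score_laps(responses: pd.DataFrame) -> pd.Series:
    """Return LAPS total scores (0-69); items 'h' and 'u' are reverse scored."""
    items = [f"laps_{letter}" for letter in string.ascii_lowercase[:23]]  # a through w
    scored = responses[items].copy()
    for rev in ("laps_h", "laps_u"):
        scored[rev] = 3 - scored[rev]   # reverse on the 0-3 response scale
    return scored.sum(axis=1)           # higher totals indicate a stronger reported bond
```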

The Short Attachment to Pets Scale (SAPS)

The SAPS is a 9-item scale that measures children’s and adolescents’ attachment to pets (Marsa-Sambola et al., 2016; Muldoon et al., 2019). In the original paper by Marsa-Sambola et al. (2016), the SAPS demonstrated acceptable internal reliability (α = 0.894). A PCA performed by the original authors resulted in a single component accounting for 67.78% of the variance (Marsa-Sambola et al., 2016). To date, a factor analysis confirming this single-component structure has not been conducted. The SAPS items use a five-point Likert-type scale (1 = Strongly agree to 5 = Strongly disagree) to assess love and interaction, joy of pet ownership, affectionate companionship, equal family member status, mutual physical activity, pet problems, and general attachment. Example items include: “I consider my pet to be a friend” and “My pet knows when I’m upset and tries to comfort me.” Scoring involves reverse scoring items 2–9 and summing the items. Total scores range from 9 to 45, with higher scores indicating a stronger bond with pets.
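An analogous sketch for the SAPS scoring rule is given below, again assuming hypothetical column names (saps_1 through saps_9) rather than the study’s actual variables.

```python
# Minimal SAPS scoring sketch. Assumes hypothetical columns saps_1 ... saps_9,
# each coded 1 = Strongly agree to 5 = Strongly disagree.
import pandas as pd

def score_saps(responses: pd.DataFrame) -> pd.Series:
    """Return SAPS total scores (9-45); items 2-9 are reverse scored."""
    items = [f"saps_{i}" for i in range(1, 10)]
    scored = responses[items].copy()
    for i in range(2, 10):
        scored[f"saps_{i}"] = 6 - scored[f"saps_{i}"]  # reverse on the 1-5 response scale
    return scored.sum(axis=1)                          # higher totals indicate a stronger reported bond
```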

Companion animals

Participants were asked basic information about their pets. Specifically, they were asked what kind of animal they have (excluding livestock), whether they live with their pet, whether they are the primary caretaker of that pet, whether they plan to obtain an animal in the next 6 months, whether they have acquired a pet in the last 6 months, and what role their pet serves (e.g., emotional support, service animal). These questions were intended to gather contextual information about the relationship between participants and their pets.

Demographic variables

Participant demographic variables included race and ethnicity, gender identity, trans identity, international student status, and age.

PROCEDURE

Incoming undergraduate students at a large, public, Hispanic-Serving Institution in West Texas were invited to complete a series of measures including, but not limited to, the survey questions used in these analyses. There were originally 962 responses to the Mass Survey questions. Any participants missing responses to the pet questions were dropped from the analytic group, resulting in 734 responses. Then, any non-pet-owning participants were removed, resulting in a final analytic sample of 589. The study was approved by the university’s institutional review board.
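For transparency, the exclusion steps described above can be expressed as a short data-cleaning script. The sketch below is illustrative only; the file and column names are assumptions rather than the study’s actual variables.

```python
# Hypothetical reconstruction of the exclusion steps; file and column names are assumptions.
import pandas as pd

raw = pd.read_csv("mass_survey.csv")                     # all Mass Survey responses (n = 962)
pet_items = ["pet_type", "lives_with_pet", "primary_caretaker"]
complete = raw.dropna(subset=pet_items)                  # drop missing pet responses (n = 734)
analytic = complete[complete["owns_pet"] == 1]           # retain pet owners only (n = 589)
```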

STATISTICAL ANALYSIS

Given that the factor structures and underlying psychometric properties of the LAPS and SAPS have received scant attention, analyses were conducted in an attempt to replicate the originally proposed components using confirmatory factor analysis (CFA). As mentioned above, CFA functions differently from other statistical techniques, such as PCA. For example, CFA assumes that indicators contain both common variance (resulting from the factor) and unique variance. By accounting for this unique variation, CFA provides a more accurate estimate of the variation attributable to the common factor(s) (Kim, 2008). Further, though PCA is often used in place of CFA, it does not assume an underlying factor structure and may provide biased results if used for this aim (Kim, 2008; de Winter and Dodou, 2016). Analyses were conducted in Mplus version 8.10 (Muthén and Muthén, 2017). Pet-owning participants were randomly partitioned into two distinct datasets (SAPS n = 295; LAPS n = 294). Monte Carlo power analyses indicated that these sample sizes were adequate for estimating the observed parameters (Muthén and Muthén, 2002).
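To make these analytic steps concrete, the sketch below illustrates the random partition of pet owners and a one-factor CFA for the SAPS, using the open-source semopy package as a stand-in for Mplus. This is an illustration under stated assumptions: the item and file names are hypothetical, and semopy’s default maximum-likelihood-based estimator differs from the WLSMV estimator used in the actual analyses.

```python
# Illustrative sketch only: random split of pet owners and a one-factor CFA for the SAPS.
# semopy is a stand-in for Mplus (default estimator is ML-based, not WLSMV), and the
# item and file names are assumptions.
import pandas as pd
import semopy

owners = pd.read_csv("pet_owners.csv")                   # hypothetical analytic sample
saps_half = owners.sample(frac=0.5, random_state=1)      # random assignment to the SAPS set
laps_half = owners.drop(saps_half.index)                 # remaining participants form the LAPS set

saps_desc = "general =~ " + " + ".join(f"saps_{i}" for i in range(1, 10))
saps_cfa = semopy.Model(saps_desc)                       # one latent factor, nine indicators
saps_cfa.fit(saps_half)
print(semopy.calc_stats(saps_cfa))                       # chi-square, CFI, TLI, RMSEA, etc.
print(saps_cfa.inspect())                                # parameter estimates and standard errors
```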
Given that participants answered questionnaire items on a Likert scale, indicators were modeled using weighted least-squares mean- and variance-adjusted (WLSMV) estimation (Muthén, 1993). Several model fit indices were used for each CFA: the chi-square, the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA). Adequate model fit was indicated by CFI and TLI values greater than 0.90 and an RMSEA value less than 0.06 (Hu and Bentler, 1999). Further, we assessed the internal consistency of the SAPS and LAPS using alpha coefficients. We also examined multicollinearity using the variance inflation factor (VIF) and tolerance in SPSS Version 29 (IBM Corporation, 2023); VIF values greater than 10 and tolerance values less than 0.10 were used as benchmarks for problematic multicollinearity.
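The internal consistency and multicollinearity checks can likewise be sketched in code. The example below computes coefficient alpha from its standard formula and item-level VIFs via statsmodels; it assumes a DataFrame of item responses and is not the SPSS syntax used in the study.

```python
# Illustrative reliability and multicollinearity checks on a DataFrame of item responses.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def item_vifs(items: pd.DataFrame) -> pd.Series:
    """VIF for each item regressed on the remaining items; tolerance is 1 / VIF."""
    X = sm.add_constant(items)
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=items.columns,
    )

# Benchmarks used above: VIF > 10 (equivalently, tolerance < 0.10) would flag multicollinearity.
```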

Results

DESCRIPTIVE STATISTICS

The demographic characteristics of the sample are included in Table 1. The SAPS and LAPS groups did not significantly differ on gender, trans identity, race/ethnicity (χ2[18] = 12.06, P = 0.844), or age (t[451.79] = −1.75, P = 0.081). Both the SAPS (α = 0.793) and LAPS (α = 0.939) demonstrated acceptable internal reliability (Tavakol and Dennick, 2011). VIF values for both the SAPS (1.383–2.781) and LAPS (1.387–3.506) were within the acceptable range, suggesting little evidence of multicollinearity (Hair et al., 1998).
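For readers who wish to reproduce such group-equivalence checks, a minimal sketch follows; the file and variable names are assumptions, and the non-integer degrees of freedom reported for age are consistent with a Welch-type t-test.

```python
# Illustrative group-equivalence checks between the SAPS and LAPS subsamples;
# the file and variable names are assumptions.
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind

sample = pd.read_csv("analytic_sample.csv")              # one row per participant

race_table = pd.crosstab(sample["race_ethnicity"], sample["scale_group"])
chi2, p_race, dof, _ = chi2_contingency(race_table)      # chi-square test of independence

t_age, p_age = ttest_ind(
    sample.loc[sample["scale_group"] == "SAPS", "age"],
    sample.loc[sample["scale_group"] == "LAPS", "age"],
    equal_var=False,                                      # Welch's t-test (unequal variances)
)
```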
Table 1. Demographic characteristics of participants.

                               SAPS              LAPS
                               n       %         n       %
Gender
  Gender fluid                 0       0.00      1       0.34
  Identity not listed          1       0.34      0       0.00
  Man                          54      18.31     59      20.07
  Non-binary/genderqueer       3       1.02      2       0.68
  Prefer not to answer         0       0.00      1       0.34
  Questioning/unsure           1       0.34      1       0.34
  Woman                        236     80.00     230     78.23
Trans identity
  Cis                          221     74.92     213     72.45
  N/A                          40      13.56     47      15.99
  Prefer not to say            32      10.85     33      11.22
  Trans                        2       0.68      1       0.34
Race/ethnicity
  Arab                         0       0.00      1       0.34
  Asian Indian                 2       0.68      3       1.02
  Eastern Asian                4       1.36      8       2.72
  Hispanic/Latine              82      27.80     83      28.23
  Multiracial                  48      16.27     60      20.41
  Native Hawaiian              0       0.00      1       0.34
  Other                        0       0.00      1       0.34
  White                        159     53.90     137     46.60
International students         3       1.02      1       0.34
                               m       sd        m       sd
Age                            18.57   2.19      19.03   4.03
Estimates for the SAPS model resulted in CFI and TLI values above the cutoff of 0.90. However, the estimated RMSEA was above the acceptable cutoff of 0.06. Several approaches to improving model fit were considered. First, factor loadings were examined to identify items that did not load adequately onto the factor (i.e., factor loadings less than 0.4; Matsunaga, 2010); however, all factor loadings were above this cutoff, the smallest being item seven (0.587). Second, modification indices were examined. Several indices were identified, but all were similar in their possible reduction of the chi-square value, and there was limited theoretical justification for applying modification indices or removing an item. Thus, the model was not altered to improve its fit.
Regarding the LAPS model, as with the SAPS model, acceptable CFI and TLI values were identified; however, the estimated RMSEA exceeded the acceptable cutoff. Again, estimates were evaluated to identify weak factor loadings. All loadings were above 0.4, the smallest being item 15 (0.527). Further, the estimated modification indices would have resulted in similar reductions to the chi-square value if incorporated, and limited theoretical evidence exists for removing an item or specifying a new model on that basis. Therefore, we did not attempt to alter the model to improve model fit. Fit indices and factor loadings for the SAPS and LAPS are included in Tables 2–4.
Table 2. Model fit indices for SAPS and LAPS estimates.

Model        χ2          df     CFI      TLI      RMSEA
A: SAPS^a    139.71^c    27     0.938    0.918    0.119
B: LAPS^b    789.34^c    227    0.929    0.921    0.092

Note: CFI = comparative fit index; TLI = Tucker–Lewis index; and RMSEA = root mean square error of approximation.
^a In Model A, all nine items were loaded onto one factor.
^b In Model B, the 11 items of general attachment were loaded onto one factor, the seven items of people substituting were loaded onto a second factor, and the five items of animal rights and welfare were loaded onto a third factor.
^c p < 0.001.
Table 3. Standardized factor loadings and standard errors for SAPS.

Item                                                                                  Factor loading (SE)   p
General attachment
  1. I don’t really like animals                                                      0.686 (0.059)         <0.001
  2. I spend time every day playing with my pet                                       0.605 (0.040)         <0.001
  3. I have sometimes talked to my pet and understood what it was trying to tell me   0.693 (0.038)         <0.001
  4. I love pets                                                                      0.895 (0.033)         <0.001
  5. I talk to my pet quite a lot                                                     0.730 (0.034)         <0.001
  6. My pet makes me feel happy                                                       0.887 (0.031)         <0.001
  7. I consider my pet to be a friend                                                 0.581 (0.054)         <0.001
  8. My pet knows when I’m upset and tries to comfort me                              0.627 (0.044)         <0.001
  9. There are times I’d be lonely except for my pet                                  0.628 (0.042)         <0.001
Table 4. Standardized factor loadings and standard errors for LAPS.

Item                                                                                  Factor loading (SE)   p
General attachment
  1. I play with my pet quite often.                                                  0.626 (0.039)         <0.001
  2. Owning a pet adds to my happiness.                                               0.857 (0.022)         <0.001
  3. My pet and I have a very close relationship.                                     0.889 (0.016)         <0.001
  4. My pet makes me feel happy.                                                      0.899 (0.023)         <0.001
  5. I consider my pet to be a great companion.                                       0.856 (0.021)         <0.001
  6. I am not very attached to my pet.                                                0.547 (0.049)         <0.001
  7. My pet knows when I’m feeling bad.                                               0.705 (0.034)         <0.001
  8. I often talk to other people about my pet.                                       0.625 (0.038)         <0.001
  9. I consider my pet to be a friend.                                                0.652 (0.045)         <0.001
  10. I believe that loving my pet helps me to stay healthy.                          0.832 (0.023)         <0.001
  11. My pet understands me.                                                          0.745 (0.029)         <0.001
People substituting
  12. I love my pet because he/she is more loyal to me than most people in my life.   0.790 (0.024)         <0.001
  13. My pet means more to me than any of my friends.                                 0.736 (0.028)         <0.001
  14. I love my pet because it never judges me.                                       0.773 (0.027)         <0.001
  15. Quite often, my feelings towards other people are impacted by the way they react to my pet.   0.527 (0.045)   <0.001
  16. I believe my pet is my best friend.                                             0.710 (0.035)         <0.001
  17. Quite often, I confide in my pet.                                               0.729 (0.031)         <0.001
  18. I enjoy showing other people pictures of my pet.                                0.747 (0.034)         <0.001
Animal rights and welfare
  19. Pets deserve as much respect as people do.                                      0.690 (0.039)         <0.001
  20. I believe that pets should have the same rights and privileges as family members.   0.638 (0.039)     <0.001
  21. I feel that my pet is a part of my family.                                      0.884 (0.034)         <0.001
  22. I think my pet is just a pet.                                                   0.670 (0.041)         <0.001
  23. I would do almost anything to take care of my pet.                              0.832 (0.026)         <0.001

Discussion

Although both the SAPS and LAPS demonstrated acceptable internal reliability, both also demonstrated inadequate model fit, as evidenced by elevated RMSEA values that do not support the proposed factor structures. Although the model fit was not wholly inadequate (i.e., CFI and TLI values were acceptable), the issues with face validity discussed earlier, as well as the atheoretical construction of the measures, raise concerns. The discrepancy could also result from the statistical methods used to identify the originally proposed components (PCA), in contrast to the latent factor modeling used in the current study to confirm the proposed factor structures. Finally, sampling considerations may have played a role in the outcomes of the current study. Thus, recommendations addressing three domains of HAB measurement issues are provided below, focusing on construct definition, statistical literacy, and sampling.
First, regarding construct definition, we suggest that future research build on the attachment framework proposed by Bowlby (1969), who drew on Harlow’s work with non-human primates in generalizing those findings to humans (Van Der Horst et al., 2008), to examine whether established attachment constructs, such as secure, avoidant, and anxious attachment, apply to relationships with companion animals. This will help promote theoretically defined questions when generating hypotheses related to these constructs.
Further, as discussed earlier, the HAB construct is poorly defined in the literature. One potential solution to this problem is to issue an authoritative public document that can serve as a guidepost for terminology, similar to the International Association of Human-Animal Interaction Organizations’ Definitions for Animal-Assisted Intervention (AAI) and Guidelines for Wellness of Animals Involved in AAI, last updated in 2018 (Jegatheesan et al., 2018). In line with the recommendations of Samet et al. (2023), the field would benefit from clearer definitions of the constructs being used in research. Additionally, the field is hampered by the proliferation of myriad measures of the same construct.
Second, the field of HAI is interdisciplinary, which means that researchers in the field have varying degrees of statistical training. Although this does not necessarily imply an inherent weakness in the field, it may contribute to inconsistencies in measurement. When it comes to measure development – wherein hundreds of studies may come to rely upon the construct and face validity of a measure – it is essential that the psychometric analyses underlying a measure are as robust as possible and that its properties are continually probed. It is important to remember that construct validation is never completed (Lilienfeld et al., 2015); it is an ongoing and iterative process, with each study adding to the empirical base of support for a measure.
Third, the reliability and validity of a measure are inherently tied to the samples in which it is studied. Measures that have accrued evidence for construct validity based on non-representative samples should therefore not be used to generalize findings to the national population (this is not to say that such samples are not representative of some population, for example, local sub-populations). Thus, the HAI field should take care to consider the representativeness of samples and how findings may generalize, especially when they are applied to national populations.
Taking these recommendations and ethical concerns into consideration may help the field of HAI put into perspective the importance of rigorous statistical analyses, proper conceptualization of constructs, and diversity and inclusion in our samples, and help move the field further toward capturing the true impact of our relationships with companion animals.

LIMITATIONS

The data used as the basis for these analyses were collected via convenience sampling of incoming students at a large state university in West Texas. Though data missingness was not a particular concern, it is possible that participants misunderstood some of the questions, as there were no study administrators in the immediate vicinity to assist; this is one limitation of the online survey format. Additionally, although this study draws from a more racially diverse sample than the original Johnson et al. (1992) article, our sample is still non-representative and is in fact less representative in terms of gender than the Johnson et al. (1992) sample. It is also possible that our sample is biased as a result of demand characteristics or the context from which it was drawn. We therefore re-emphasize that this article supports other research groups undertaking validation attempts of the LAPS and SAPS with nationally representative samples (as well as samples that are particularly relevant to their own theoretical conceptualization, i.e., local sub-populations), as construct validation is never complete (Lilienfeld et al., 2015) and it is possible that the validity of the LAPS and SAPS may yet bear out.

Conclusion

Our intention in writing this article is not to chastise research conducted in the past – we have relied upon these same measures in our own research. Instead, our aim is to provide recommendations for research on the HAB going forward. Specifically, we recommend that researchers base future questions upon an established theoretical framework, such as that generated by Bowlby (1969), who was inspired by Harlow (Van Der Horst et al., 2008). Next, when examining the underlying factor structure of a measure, we recommend using the appropriate method of factor analysis. Although both PCA and EFA are data reduction techniques, if the goal is to examine the common factor(s) among observed variables, EFA is more appropriate (Fabrigar et al., 1999). In addition, when confirming an EFA-derived factor structure, researchers should perform a CFA, not a PCA, on a separate sample. Further, we recommend careful consideration of sample representativeness when drawing conclusions that could reasonably be interpreted as universal. Even if a sample were nationally representative, it may not be representative of national sub-populations or of other nations, where contextual characteristics can change drastically.
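As a concrete, hypothetical illustration of these recommendations, the sketch below contrasts an exploratory common-factor analysis with a PCA on the same item-level data using the factor_analyzer and scikit-learn packages; the file name, item columns, and single-factor specification are assumptions for illustration only.

```python
# Illustrative contrast between a common-factor EFA and a PCA on the same items;
# the file name, item columns, and one-factor specification are assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from sklearn.decomposition import PCA

items = pd.read_csv("items.csv")                  # hypothetical item-level data

efa = FactorAnalyzer(n_factors=1, rotation=None)  # common-factor model: shared variance only
efa.fit(items)
print(efa.loadings_)                              # loadings on the latent common factor

pca = PCA(n_components=1).fit(items)              # decomposes total (common + unique) variance
print(pca.components_)                            # component weights, not factor loadings
print(pca.explained_variance_ratio_)              # proportion of total variance captured
```

A CFA on a held-out portion of the sample, as sketched in the Statistical Analysis section above, could then test whether the EFA-derived structure replicates.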
We are also not recommending that the field ignore or discard all previous research that has included measures of the HAB. Our only intention is to provide appropriate context for how research relying on those measures can be interpreted, so that researchers are better able to describe the limitations of the measures when they discuss and frame their findings. It is our hope that those interested in measuring the HAB view the recommendations laid out in this article as a helpful roadmap going forward. Finally, we hope that this article starts a discourse on the state of measurement broadly in the field of HAI.

CONFLICTS OF INTEREST

The authors have no financial or non-financial conflicts of interest to declare.

ETHICS STATEMENT

The study was approved by the university’s institutional review board. All relevant guidelines were followed by the research team.

AUTHOR CONTRIBUTIONS

The last author named is the lead and corresponding author. EH and TH contributed to writing – original draft; EH, TH, AH, and JV contributed to writing – review and editing; EH and TH contributed to conceptualization; EH, TH, AH, and JV contributed to investigation; EH, TH, and JV contributed to methodology; TH carried out the formal analysis; and JV carried out project administration.

FUNDING STATEMENT

No external funding was relied upon or received in support of this research project.

DATA AVAILABILITY

For privacy purposes, the data are stored in a password-protected file accessible only to the authors. A copy of the data may be requested by emailing the first author.

References

Ainsworth, M.D.S., Blehar, M.C., Waters, E. and Wall, S. (1978) Patterns of Attachment: A Psychological Study of the Strange Situation. Lawrence Erlbaum Associates, Hillsdale, NJ.
Baun, M.M., Bergstrom, N., Langston, N.F. and Thoma, L. (1984) Physiological effects of human/companion animal bonding. Nursing Research 33(3), 126–129.
Bowlby, J. (1969) Attachment and Loss: Vol. 1. Attachment. Basic Books, New York.
Coleman, J.A., Green, B., Garthe, R.C., Worthington, E.L., Barker, S.B. and Ingram, K.M. (2016) The Coleman Dog Attitude Scale (C-DAS): Development, refinement, validation, and reliability. Applied Animal Behaviour Science 176, 77–86.
Cromer, L.D. and Barlow, M.R. (2013) Factors and convergent validity of the Pet Attachment and Life Impact Scale (PALS). Human-Animal Interaction Bulletin 1(2), 34–56.
De Albuquerque, N.S., Costa, D.B., Dos Reis Rodrigues, G., Sessegolo, N.S., Moret-Tatay, C. and Irigaray, T.Q. (2023) Adaptation and psychometric properties of Lexington Attachment to Pets Scale: Brazilian version (LAPS-B). Journal of Veterinary Behavior 61, 50–56.
de Winter, J.C. and Dodou, D. (2016) Common factor analysis versus principal component analysis: A comparison of loadings by means of simulations. Communications in Statistics-Simulation and Computation 46(1), 299–321.
Dwyer, F., Bennett, P.C. and Coleman, G.J. (2006) Development of the Monash Dog Owner Relationship Scale (MDORS). Anthrozoös 19(3), 243–256.
Fabrigar, L.R., Wegener, D.T., MacCallum, R.C. and Strahan, E.J. (1999) Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods 4(3), 272–299.
Hair J.F. Jr., Anderson, R.E., Tatham, R.L. and Black, W.C. (1998) Multivariate Data Analysis. Prentice Hall, Upper Saddle River, NJ.
Hielscher, B., Gansloßer, U. and Froboese, I. (2019) Attachment to dogs and cats in Germany: translation of the Lexington Attachment to Pets Scale (LAPS) and description of the pet owning population in Germany. Human-Animal Interaction Bulletin, 2019.
Hirschenhauser, K., Meichel, Y., Schmalzer, S. and Beetz, A.M. (2017) Children love their pets: Do relationships between children and pets co-vary with taxonomic order, gender, and age? Anthrozoös 30(3), 441–456.
Howell, T.J., Bowen, J., Fatjó, J., Calvo, P., Holloway, A. and Bennett, P.C. (2017) Development of the cat-owner relationship scale (CORS). Behavioural Processes 141, 305–315.
Hu, L. and Bentler, P.M. (1999) Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6, 1–55.
IBM Corporation (2023) IBM SPSS Statistics for Windows, Version 29.0. IBM Corporation, Armonk, NY.
Ioannidis, J.P.A. (2005) Why most published research findings are false. PLOS Medicine 2(8), e124.
Jegatheesan, B., Beetz, A., Ormerod, E., Johnson, R., Fine, A.H. et al. (2018) The IAHAIO Definitions for Animal Assisted Intervention and Guidelines for Wellness of Animals Involved in AAI. International Association of Human-Animal Interaction Organizations, Seattle, WA. Available at: https://iahaio.org/wp/wp-content/uploads/2021/01/iahaio-white-paper-2018-english.pdf (accessed 26 July 2023).
Johnson, T.P., Garrity, T.F. and Stallones, L. (1992) Psychometric evaluation of the Lexington Attachment to Pets Scale (LAPS). Anthrozoös 5(3), 160–175.
Kelley, T.L. (1927) Interpretation of Educational Measurements. World Book Company. Available at: https://books.google.com/books?id=bcMIAQAAIAAJ (accessed 21 July 2023).
Kim, H.J. (2008) Common factor analysis versus principal component analysis: choice for symptom cluster research. Asian Nursing Research 2(1), 17–24.
Lilienfeld, S.O., Sauvigné, K.C., Lynn, S.J., Cautin, R.L., Latzman, R.D. and Waldman, I.D. (2015) Fifty psychological and psychiatric terms to avoid: A list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology 6.
Marsa-Sambola, F., Muldoon, J., Williams, J., Lawrence, A., Connor, M. and Currie, C. (2016) The Short Attachment to Pets Scale (SAPS) for children and young people: Development, psychometric qualities and demographic and health associations. Child Indicators Research 9(1), 111–131.
Matsunaga, M. (2010) How to factor-analyze your data right: Do’s, don’t and how-to’s. International Journal of Psychological Research 3(1), 97–110.
Muldoon, J.C., Williams, J.M. and Currie, C. (2019) Differences in boys’ and girls’ attachment to pets in early-mid adolescence. Journal of Applied Developmental Psychology 62, 50–58.
Muthén, B. (1993) Goodness of fit with categorical and other non-normal variables. In: Bollen, K.A. and Long, J.S. (eds) Testing Structural Equation Models. Sage, Thousand Oaks, CA, pp. 205–243.
Muthén, L.K. and Muthén, B.O. (2002) How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling: A Multidisciplinary Journal 9, 599–620.
Muthén, L.K. and Muthén, B.O. (2017) Mplus User’s Guide. 8th edn, Muthén and Muthén, Los Angeles, CA.
Poresky, R.H. and Daniels, A.M. (1998) Demographics of pet presence and attachment. Anthrozoös 11(4), 236–241.
Ramírez, M.T.G., Quezada Berumen, L.D.C. and Hernández, R.L. (2014) Psychometric properties of the Lexington Attachment to Pets Scale: Mexican version (LAPS-M). Anthrozoös 27(3), 351–359.
Riggio, G., Piotti, P., Diverio, S., Borrelli, C., Di Iacovo, F. et al. (2021) The dog–owner relationship: Refinement and validation of the Italian C/DORS for dog owners and correlation with the LAPS. Animals 11(8), 2166.
Rodriguez, K.E., Herzog, H. and Gee, N.R. (2021) Variability in human-animal interaction research. Frontiers in Veterinary Science 7, 619600.
Rynearson, E.K. (1978) Humans and pets and attachment. British Journal of Psychiatry 133(6), 550–555.
Samet, L., Vaterlaws-Whiteside, H., Upjohn, M. and Casey, R. (2023) Status of instrument development in the field of human-animal interactions and bonds: Ten years on. Society and Animals 1–21.
Tavakol, M. and Dennick, R. (2011) Making sense of Cronbach’s alpha. International Journal of Medical Education 2, 53–55.
Testoni, I., De Cataldo, L., Ronconi, L. and Zamperini, A. (2017) Pet loss and representations of death, attachment, depression, and euthanasia. Anthrozoös 30(1), 135–148.
Uccheddu, D.C., Albertini, C., Pereira, D.G., Haverbeke, M., Pierantoni, R., Ronconi, T. and Pirrone, F. (2019) Pet humanisation and related grief: Development and validation of a structured questionnaire instrument to evaluate grief in people who have lost a companion dog. Animals 9(11), 933.
Van Der Horst, F.C.P., LeRoy, H.A. and Van Der Veer, R. (2008) “When strangers meet”: John Bowlby and Harry Harlow on attachment behavior. Integrative Psychological and Behavioral Science 42(4), 370–388.
Volsche, S., Frailey, F., Ihara, N. and Nittono, H. (2023) Sex and parental status impacts human-to-pet attachment and caregiving attitudes and behaviors in a Japanese sample. Anthrozoös 36(3), 309–321.
Wilson, C.C. and Netting, F.E. (2015) The status of instrument development in the human–animal interaction field. Anthrozoös 25(sup1), s11–s55.
Zaparanick, T.L. (2008) A confirmatory factor analysis of the Lexington Attachment to Pets Scale [Doctoral dissertation, University of Tennessee]. TRACE: Tennessee Research and Creative Exchange. Available at: https://trace.tennessee.edu/utk_graddiss/359 (accessed 5 July 2023).
Zilcha-Mano, S., Mikulincer, M. and Shaver, P.R. (2011) An attachment perspective on human–pet relationships: Conceptualization and assessment of pet attachment orientations. Journal of Research in Personality 45(4), 345–357.


History

Issue publication date: 1 January 2024
Received: 15 April 2024
Accepted: 29 July 2024
Published online: 25 September 2024

Keywords:

  1. human-animal interaction
  2. human-animal bond
  3. attachment
  4. companion animals
  5. psychology


Authors

Affiliations

Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA

Notes

* Corresponding Author: Jason Van Allen. Email: [email protected]
