Partner Models for Speech
This is the most up-to-date version of this scale.
Paper
Doyle, P. R., Clark, L., & Cowan, B. R. (2021, May). What do we see in them? Identifying dimensions of partner models for speech interfaces using a psycholexical approach. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
Construct Summary
The authors state that the scale was developed to measure people’s partner models for speech agents (e.g., Alexa, Google Home, Amazon Echo). They define the term “partner model” as:
“The term partner model refers to an interlocutor’s cognitive representation of beliefs about their dialogue partner’s communicative ability. These perceptions are multidimensional and include judgements about cognitive, empathetic and/or functional capabilities of a dialogue partner. Initially informed by previous experience, assumptions and stereotypes, partner models are dynamically updated based on a dialogue partner’s behaviour and/or events during dialogue” (p. 5).
Final Scale Items (23 total):
Items use a semantic differential response scale; the pairs listed below are the endpoints of each item.
Partner Competence and Dependability: Competent/Incompetent, Dependable/Unreliable, Capable/Incapable, Consistent/Inconsistent, Reliable/Uncertain, Ambiguous/Clear, Meandering/Direct, Expert/Amateur, Efficient/Inefficient, Misleading/Honest, Precise/Vague, Cooperative/Uncooperative
Human-likeness: Human-like/Machine-like, Life-like/Tool-like, Warm/Cold, Empathetic/Apathetic, Personal/Generic, Authentic/Fake, Social/Transactional
Cognitive Flexibility: Flexible/Inflexible, Interactive/Start-stop, Interpretive/Literal, Spontaneous/Predetermined
Rating = 62%
| Check? | Guideline Item |
|---|---|
| ✓ | Is the construct defined? |
| ✓ | Does the final version of the items capture the construct as it has been defined by the authors? |
| ✓ | Is the item generation process discussed (e.g., literature review, Delphi method, crowd-sourcing)? |
| ✖ | Person-to-item ratio of at least 10:1 for the initial set of items? |
| ✓ | Did they perform an EFA, PCA, Rasch, or similar test to determine the item to factor relationship? |
| ✓ | Did they describe how they determined number of factors? |
| ✓ | Did they report the full initial set of items? |
| ✖ | Did they provide loadings (EFA) or item fits (Rasch) of all items? |
| ✓ | Is there a description of the item removal process (e.g., using infit/outfit, factor loading minimum value, or cross-loading values)? |
| ✓ | Did they list the final items included in the scale? |
| ✖ | Did they include a factor structure test (e.g., second EFA, CFA, DIF, test for unidimensionality when using Rasch, or similar)? |
| ✖ | Was a measure of reliability (e.g., Cronbach’s alpha, McDonald’s Omega_h or Omega_t, Tarkkonen’s Rho) reported? |
| ✖ | Was a test of validity (e.g., predictive, concurrent, convergent, discriminant) reported? |
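The rating above appears to be the share of guideline items satisfied: 8 of the 13 rows are checked, which rounds to 62%. A minimal sketch of this scoring rule (the rule itself is an assumption; the database entry does not state how the rating is computed):

```python
# Hypothetical scoring sketch: rating = checked rows / total rows,
# rounded to a whole percent. One boolean per guideline row, in order.
checks = [True, True, True, False, True, True, True,
          False, True, True, False, False, False]
rating = round(100 * sum(checks) / len(checks))
print(f"Rating = {rating}%")  # → Rating = 62%
```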
Comments
The full list of items is provided in the supplementary materials.
A PDF of the scale, as well as instructions for administration and scoring, is not readily available. Check the paper for more details, or email hriscaledatabase@gmail.com to submit this information if you are the author of this scale.