Work Description

Title: The Digitized Archival Document Trustworthiness Scale Study Dataset and Associated Files Open Access Deposited

http://creativecommons.org/licenses/by/4.0/
Methodology
  • Scale development was used to collect and analyze the data. Scale development involves four primary steps (DeVellis, 2012; Spector, 1992): Step 1—Construct Definition, Step 2—Generating an Item Pool, Step 3—Designing the Scale, and Step 4—Full Administration and Item Analysis.
Step 1, defining the construct, involved a review of the literature to identify the scope of trustworthiness for the purpose of empirical investigation. It also involved focus groups to understand how members of a designated community (i.e., genealogists) talk about trustworthiness. The findings from the focus groups are reported elsewhere (Donaldson & Conway, 2015).
Step 2, generating an item pool, involved identifying items for measuring trustworthiness from multiple sources, including the literature, subject matter experts, and focus group data (Donaldson & Conway, 2015).
Step 3, designing the scale, involved transforming the item pool resulting from Step 2 into a web survey for pretesting and refinement.
Step 4, full administration and item analysis, involved administering the final item pool, comprising items gathered from the earlier steps of scale development, to a large sample of designated community members for their evaluation. Each item described a circumstance one might encounter while using a digitized archival document. Participants rated whether the circumstance described by each item would cause them to perceive a digitized archival document as untrustworthy or trustworthy on a 7‐point scale: very untrustworthy, untrustworthy, slightly untrustworthy, neither untrustworthy nor trustworthy, slightly trustworthy, trustworthy, or very trustworthy.
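The study's analyses were performed in SPSS; purely as an illustrative sketch (not part of the deposited files), the 7-point response coding described above, with "Not Applicable" treated as missing, might look like this in Python. The numeric codes 1–7 are an assumption for illustration:

```python
# Hypothetical numeric coding of the 7-point trustworthiness scale.
# Label set matches the scale described above; the 1-7 codes are assumed.
SCALE = {
    "very untrustworthy": 1,
    "untrustworthy": 2,
    "slightly untrustworthy": 3,
    "neither untrustworthy nor trustworthy": 4,
    "slightly trustworthy": 5,
    "trustworthy": 6,
    "very trustworthy": 7,
}

def code_response(label):
    """Map a response label to its numeric code; 'Not Applicable' becomes missing (None)."""
    if label == "Not Applicable":
        return None
    return SCALE[label]
```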
An eighth option, “Not Applicable,” was included for participants to choose if the circumstance an item described was not relevant to their experience of using digitized archival documents. Step 4 also involved analyzing designated community members’ responses via factor analysis to identify the items that were most essential for measuring the trustworthiness of preserved information (in this case, digitized genealogical records). Two types of analysis were performed: item analysis and exploratory factor analysis.
Item analysis involved analysis of item variances, item‐total correlations, item means, and item standard deviations (DeVellis, 2012). To assess item variances, the range of responses (i.e., each item’s minimum and maximum) was inspected. To assess item‐total correlations, each item was examined to determine the extent to which it correlated with the collection of remaining items. Item means and standard deviations were examined to ensure that, for each item, the mean was near the midpoint of the 7‐point scale while the ratings still varied: a standard deviation above zero indicated that participants did not all give an item the same rating, but instead arrived at its mean through different ratings.
After performing item analysis, exploratory factor analysis (EFA) was conducted using SPSS Statistics 22.0, a software package for statistical analysis, to establish the factor structure of the trustworthiness items (Kline, 2013). EFA was used as a tool to help identify the most important items for measuring trustworthiness; “important” trustworthiness items were operationalized as items with high factor loadings on factors with large eigenvalues. To assign items to factors, factor loadings equal to or higher than .32 were considered (Tabachnick & Fidell, 2001).
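The item statistics described above (minimums, maximums, means, standard deviations, and item-total correlations) were computed in SPSS; a minimal NumPy sketch of the same computations, with a hypothetical `item_analysis` function, could look like this:

```python
import numpy as np

def item_analysis(X):
    """Item analysis for a participants-by-items matrix X of numeric codes (1-7).

    Returns per-item min, max, mean, standard deviation, and corrected
    item-total correlation (each item vs. the sum of the remaining items).
    """
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)           # each participant's total score
    stats = []
    for j in range(X.shape[1]):
        rest = total - X[:, j]      # total score excluding item j
        r = np.corrcoef(X[:, j], rest)[0, 1]
        stats.append({
            "min": X[:, j].min(),
            "max": X[:, j].max(),
            "mean": X[:, j].mean(),
            "sd": X[:, j].std(ddof=1),   # sample standard deviation
            "item_total_r": r,
        })
    return stats
```

Note this simple sketch assumes complete data; "Not Applicable" (missing) responses would need to be excluded or handled before computing the statistics.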
Two tests were performed to assess the appropriateness of the data collected during this study for EFA: the Kaiser‐Meyer‐Olkin Measure of Sampling Adequacy (Kaiser, 1970; Kaiser & Rice, 1974) and Bartlett’s Test of Sphericity (Bartlett, 1954). Afterwards, EFA was conducted using principal axis factoring with oblique rotation, which allows factors to correlate (Kline, 2013). Because the items were all trustworthiness items, the factors underlying them were expected to correlate, hence the decision to employ oblique rotation. Results of Cattell’s (1966) scree test were used to determine the number of factors to retain.
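These diagnostics were run in SPSS; as an illustrative sketch only, the two adequacy tests and the eigenvalues inspected in a scree test can be computed from a correlation matrix with NumPy and SciPy. The function names are hypothetical, and the formulas are the standard ones (Bartlett's chi-square from the determinant of R; KMO from partial correlations via the inverse of R):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's (1954) test of sphericity for correlation matrix R, n observations.

    Returns the chi-square statistic and its p-value; a significant result
    suggests the matrix is not an identity matrix, i.e., suitable for EFA.
    """
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(R):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy for R."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d              # partial correlations, controlling for other items
    off = ~np.eye(R.shape[0], dtype=bool)
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)           # closer to 1.0 means more adequate sampling

def scree_eigenvalues(R):
    """Eigenvalues of R in descending order, as plotted for Cattell's scree test."""
    return np.sort(np.linalg.eigvalsh(R))[::-1]
```

Principal axis factoring with oblique rotation itself is more involved and is omitted here; in practice it would be run in a statistics package, as it was in this study.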
Description
  • SPSS 22.0 and SPSS 23.0 were used to access the SPSS files (e.g., the .sav, .sps, .spv files). Microsoft Word 2011 for Mac was used to access the .doc file.
Creator
Depositor
  • marmclau@iu.edu
Contact information
Keyword
Date coverage
  • 2013-12-21 to 2014-03-31
Citations to related material
Publisher
Resource type
Last modified
  • 09/21/2020
License
To Cite this Work:
Donaldson, D. The Digitized Archival Document Trustworthiness Scale Study Dataset and Associated Files [Data set]. Indiana University - DataCORE.

Relationships

Files (Count: 1; Size: 135 KB)
