Learning Resources LSP0339-UK 5-in-1 Outdoor Measure-Mate

RRP: £39.00
Price: £19.50
FREE Shipping

In stock


Description

This versatile tool is the perfect way to introduce the concept of measurement to young learners in an outdoor setting. Young learners can have fun exploring and measuring the natural landscape using five different tools.

We have recently had fun using their Measure Mate to work on some maths and geography skills within the curriculum we are working on.

  • Fruugo ID: 258392218-563234582
  • EAN: 764486781913
  • Sold by: Fruugo

Delivery & Returns

Fruugo

Address: UK