Dear All,
The Philosophy Department of the Central European University, the
Institute Vienna Circle and the Unit for Applied Philosophy of Science
and Epistemology (of the Department of Philosophy of the University of
Vienna) are jointly organizing a series of talks this term:
https://wienerkreis.univie.ac.at/news-events/apse-ceu-ivc-talks-wintersemes…
The next talk will take place on Thursday, January 26th, 3-5pm CET.
The speaker will be Uljana Feest (University of Hannover).
The title of the talk is:
'Big Data and Machine Learning in the Measurement of Personality Traits'
(Abstract below)
This talk is online only. Online access (without registration):
univienna.zoom.us/j/61475205762
Meeting-ID: 614 7520 5762
Password: 264065
Abstract:
In the last 15 years or so, the use of big data and machine learning has
gained traction in some areas of personality psychology (Rauthmann
2020): while traditional personality research relies largely on
self-reports and third-person assessments, the new area of "personality
computing" (Vinciarelli & Mohammadi 2014) promises to be more
unobtrusive and deliver data from subjects' everyday behavior, such as
cell-phone use and "likes" on social media, which are processed by
machine learning algorithms to produce predictions about personality
traits and behaviors. Some commentators have hailed this method as a new
psychometric tool, which can compete with (and will perhaps replace)
old-fashioned questionnaire-based personality tests (Boyd et al 2020)
and which has the advantage of using naturally occurring behaviors (Furr
2009).
If we view personality computing as a tool of psychometric measurement,
the question is how it fares with regard to standard criteria of test
evaluation, such as validity (Harari et al 2020; Phan & Rauthmann 2021;
Bleidorn & Hopwood 2020). In my talk, I will pick up on some recent
discussions about the construct validity of PC-models. I will begin by
explaining the notion of construct validity as a property of both tests
and constructs. For example, if a test is claimed to measure the
purported personality trait of introversion, it has construct validity
if it in fact measures introversion, which in turn means that the
construct (=concept) _introversion_ has a legitimate referent. Within
psychology, it is, however, highly controversial what standards of
evidence have to be met in order for a test to have construct validity.
Two opposing sides focus on either correlational or experimental
evidence (Borsboom et al 2004). Advocates of the former approach look
for correlations between different measures of the same thing, whereas
advocates of the latter demand that the data produced by the test in
fact be caused by the phenomenon under investigation (Feest 2020).
I will argue that while the outputs of PC models appear to be correlated
with the outcomes of traditional personality measures, the precise
targets of those traditional personality measures remain contested.
Moreover, big data are typically "mobilized from a variety of sources"
(Leonelli 2020), which means that the material circumstances of their
production recede into the background and the data become
decontextualized. In turn, this means that their quality as evidence for
the phenomena in question (and thus the validity of the PC models that
utilize them) cannot easily be established (Feest 2022). I will conclude
that while all of this does not negate the potential heuristic
fruitfulness of PC models, it strongly suggests that these models need
to be supplemented with theoretical and experimental work, which should
(a) articulate and develop the relevant constructs, and (b) establish
the suitability of the data as evidence.
Everyone welcome!
On behalf of the organizers,
Iulian Toader
https://wienerkreis.univie.ac.at/institut/personal-detailansicht/user/toade…