Internationalization. Diversity. Inclusion.
Let’s face it, if your institution is anything like Rock City U’s, you’re starting to see increased internationalization and diversification of your student body. This is good… but if your IR office is anything like ours, you’re going to be asked to study these students in detail. One of our recent tasks was to see if scores on the Test of English as a Foreign Language (TOEFL) were related to retention, GPA, satisfaction, etc., etc. Easy, right? Throw together some correlations, a t-test or two, maybe even a logistic regression if you’re feeling fancy…
Oh but if it were that simple. If you’ve worked with TOEFL scores in the past, you know that ETS has created a wonderfully convoluted exam. Not only do they provide students with the option of taking the test in one of three distinct modes of administration… they score each mode on a separate scale. And what glorious scales: 0-120 for your internet test, 0-300 for the computer test, and… wait for it… 310 to 677 for the old-skool paper version. Beautiful.
If you search around the ETS website, you can find a comparison table that will help you figure out how to convert these three scales to one distinct measure. Or you could click here.
Thankfully, my administration was only interested in looking at the overall total score. You can find this comparison table on page 6. One thing should become clear pretty quickly: the computer-based test is the only one whose converted values are always single numbers rather than ranges. For this reason, and this reason alone, I chose to convert all scores to their computer-based equivalent.
Of course, it wouldn’t be IR without an additional wrinkle. Rock City’s home institution, instead of creating a separate variable for each type of test, lumps them all together in our admissions database under the variable TOEFL. Yup. All in the same variable. Soooooo…. I have to make some assumptions. Here they are:
- If a student’s TOEFL score fell between 0 and 120, they took the internet-based test and needed to be upconverted to the computer-based range.
- If a student’s TOEFL score fell between 121 and 300, they took the computer-based test and did not need conversion.
- If a student’s TOEFL score was greater than 300, they took the paper-based test and needed to be downconverted to the computer-based range.
Obvious problems here, right? It’s totally possible that a student did a *really* crappy job on the computer-based test (say, got a 110) and, in this conversion scheme, ended up looking quite good. Unfortunately, that’s the type of error imperfect data introduces into an analysis. Hopefully your institution makes it clear which test the student took so you don’t have to make this kind of assumption.
So… that’s the background of this little piece of SPSS code. I’ve recoded all the way down to a 49 on the internet test, and all the way down to a 463 on the paper test. At the very least, this should give you a good start. If, like me, your school puts everything under one variable, you can rename that variable to “testscore” and you should be able to run this as-is. You’ll get your converted scores in a variable called “testscore_R”. Enjoy… and please let me know if you know a more streamlined way to do this (or, heaven forbid, if you find any errors).
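Since the syntax file itself can’t be attached (see below), here’s a minimal sketch of the branching logic, assuming your mixed-scale variable is named “testscore” as described above. The RECODE value pairs shown are just a couple of commonly cited anchor points (e.g., paper 550 ≈ computer 213), not the full concordance; fill in the complete value lists from the ETS comparison table before using this for real.

```spss
* Sketch only: map a mixed-scale TOEFL variable onto the
* computer-based (0-300) scale. Replace the example RECODE
* pairs with the full lists from the ETS concordance table.
COMPUTE testscore_R = testscore.
DO IF (testscore LE 120).
* Internet-based test (0-120): upconvert to the computer scale.
RECODE testscore (100 = 250) (80 = 213) INTO testscore_R.
ELSE IF (testscore GT 300).
* Paper-based test (310-677): downconvert to the computer scale.
RECODE testscore (600 = 250) (550 = 213) INTO testscore_R.
END IF.
EXECUTE.
```

Because testscore_R is first computed as a straight copy, any score the RECODE lists don’t mention (including computer-based scores in the 121–300 range) passes through unchanged.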
p.s. – WordPress won’t let me upload a .sps file, so you’ll have to cut and paste this one.