Parsing the data on student success

Certainly a large part of understanding student success depends on the data one has available. In this case, I assume the more data the better: ASQ or NSSE data going back several years; admission, retention, and graduation rates; the growth of transfer students; student GPAs and other measures of success; along with good-quality outcomes assessment data from each university program, especially those related to mentoring or tutoring students from special populations. How many students are in these programs, and what happens to their grades, their feelings about the institution, and their retention rates after they participate?

All that seems pretty straightforward. But part of the problem I think “we” in academia have is a trick of data presentation: we rarely slice and dice the data as finely as we should. Wanting to know “hard numbers” and how they change over time, we end up with simple crosstabulations (retention rates for various years, say) rather than thinking more carefully about what control variables might help reveal the causality. We might get data parsed out by a third (demographic) variable, such as race or gender, but rarely by parental income or other class proxies such as first-generation or immigrant status. Even less likely are we to think about external economic conditions.
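To make the distinction concrete, here is a minimal sketch (in Python with pandas; the file and column names are hypothetical, not anyone’s actual data) of the difference between the usual year-by-year crosstab and the same rate parsed out by a class proxy such as first-generation status:

    # A sketch only: "retention.csv", "cohort_year", "retained", and
    # "first_gen" are hypothetical names used for illustration.
    import pandas as pd

    # Assume one row per student:
    #   cohort_year : entering cohort (e.g., 2005)
    #   retained    : 1 if retained to the second year, else 0
    #   first_gen   : 1 if first-generation, else 0 (a class proxy)
    df = pd.read_csv("retention.csv")

    # The usual presentation: retention rate by year only.
    print(df.groupby("cohort_year")["retained"].mean())

    # The finer slice: the same rate, parsed out by first-generation status.
    print(
        df.groupby(["cohort_year", "first_gen"])["retained"]
          .mean()
          .unstack("first_gen")
    )

The second table costs nothing extra to produce; the hard part is collecting the class-proxy variable in the first place.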

One simple example may suffice. A few years ago, on the enrollment management committee I sit on, we were shown six years of data on what are called completion rates: the percentage of students who complete a course with a C or better (as opposed to passing rates, which set a lower bar of a D or better). After a quick perusal of the table, something interesting jumped out at me: the rates in the second three years were exactly half those in the first three. Most changes don’t happen in such a stark way; they go up and down, or they shift gradually over time. What would cause such a huge drop?
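For concreteness, here is a minimal sketch (Python with pandas; again, the file and column names are hypothetical) of how such a completion-rate table gets computed, the kind of calculation in which a sudden year-over-year break stands out:

    # A sketch only: "course_grades.csv", "year", and "grade" are
    # hypothetical names used for illustration.
    import pandas as pd

    GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

    # Assume one row per enrollment, with a final letter grade.
    grades = pd.read_csv("course_grades.csv")

    # Completion = C or better; passing would be the looser D or better.
    grades["completed"] = grades["grade"].map(GRADE_POINTS) >= 2

    # The committee's table: share of enrollments completed, by year.
    print(grades.groupby("year")["completed"].mean())

An abrupt halving between adjacent years in that output is exactly the kind of discontinuity that begs for an external explanation.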

The answer seemed obvious to me (when I showed the table to my sociology colleagues, they all saw it as well). But around the table we went, each person with a different thought as to why. Finally I said: notice when this happened (2007-2008). Might not the recession and the downturn in the economic climate have something to do with it? It had to be some fairly singular event to create a change that quickly.

Of course, I’d need more data to know for sure, but it seems as likely as any of the other reasons people gave. “A change in General Education or other curriculum issues?” Possibly, except the changes to our General Education program took place AFTER this shift. “Stricter grading policies? Changes in the professors or the classes?” Possibly. But the mix of classes, although largely in the hard sciences, changes a bit from year to year; sometimes it includes courses in philosophy or a language. “The quality of students going down?” Sure, if you measure quality in terms of test scores, that is demonstrable. But test scores don’t drop suddenly; they decline gradually, almost imperceptibly. To claim that any professor could notice a difference in student quality from one year to the next is ridiculous. Nothing else changed (“A college name change?” Nope, that was 2004), except that the economy tanked.

So how does the economy explain it? Well, students who need to stretch their incomes by taking a new or an additional job, to supplement help they might once have gotten from their parents, don’t do as well in courses that are already difficult. The stress of wondering what will happen to the economy, and whether one will have a job at graduation, probably didn’t help with course success either. Students’ anxiety and mental health issues increased during this time as well (and not just because we are a Prozac nation; much like the homeless women studied by Elliot Liebow, who became “crazier” at the end of the month when the money ran out, students’ mental health issues are surely also partly situational).

I talked to one colleague about this, who felt that the economic downturn and a tougher labor market would encourage students to work harder rather than give up. Possibly for some, yes. But maybe more students who wouldn’t have taken those premed or other difficult gateway courses before are now enrolling in them, perhaps out of desperation, willing to gamble on a lucrative career. Maybe that explains lower completion rates as well. At any rate, data on individual income levels, or on whether or not a student is employed, seem quite relevant.

Next up: But what if they don’t have the resources to work as hard as they can?
