The Central Problem of Academic Hiring [Uncertain Principles]


A bunch of people in my social-media feeds are sharing this post by Alana Cattapan titled Time-sucking academic job applications don’t know enormity of what they ask. It describes an ad asking for two sample course syllabi “not merely syllabi for courses previously taught — but rather syllabi for specific courses in the hiring department,” and expresses outrage at the imposition on the time of people applying for the job. She argues that the burden falls particularly heavily on groups that are already disadvantaged, such as people currently in contingent faculty positions.

It’s a good argument, as far as it goes, and as someone who has been on the hiring side of more faculty searches than I care to think about, the thought of having to review sample syllabi for every applicant in a pool is… not exactly an appealing prospect. At the same time, though, I can see how a hiring committee would end up implementing this for the best of reasons.

Many of the standard materials used in academic hiring are famously rife with biases: letters of reference being the most obviously problematic, but even the use of CVs can create issues, as it lends itself to paper-counting and lazy credentialism ("They're from Bigname University, they must be good…"). Given these well-known problems, I can see a chain of reasoning leading to the sample-syllabus request as a measure to help avoid biases in the hiring process. A sample syllabus is much more concrete than the usual "teaching philosophy" (which tends to be met with boilerplate piffle), particularly if it's for a specific course familiar to the members of the hiring committee. It offers a relatively objective way to sort out who really understands what's involved in teaching, one that doesn't rely on name recognition or personal networking. I can even imagine some faculty earnestly arguing that this would give an advantage to people in contingent-faculty jobs, who have lots of teaching experience and would thus be better able to craft a good syllabus than some wet-behind-the-ears grad student from a prestigious university.

And yet, Cattapan’s “too much burden on the applicant” argument is a good one. Which is just another reminder that academic hiring is a lot like Churchill’s famous quip about democracy: whatever system you’re using is the worst possible one, except for all the others.

And, like most discussions of academic hiring, this is frustrating because it dances around what’s really the central problem with academic hiring, namely that the job market for faculty positions absolutely sucks, and has for decades. A single tenure-track opening will generally draw triple-digit numbers of applications, and maybe 40% of those will be obviously unqualified. Which leaves the people doing the hiring with literally dozens of applications that they have to cut down somehow. It’s a process that will necessarily leave large numbers of perfectly well qualified people shut out of jobs through no particular fault of their own, just because there aren’t nearly enough jobs to go around.

Given that market situation, most arguments about why this or that method of winnowing the field of candidates is Bad feel frustratingly pointless. We can drop some measures as too burdensome for applicants, and others as too riddled with bias, but none of that changes the fact that somehow, 149 of 150 applicants need to be disappointed at the end of the process. And it’s never really clear what should replace those problematic methods that would do a substantially better job of weeding out 99.3% of the applicants without introducing new problems.

At some level the fairest thing to do would be to make the easy cut of removing the obviously unqualified and then using a random number generator to pick who gets invited to campus for interviews. I doubt that would make anybody any happier, though.
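For what it's worth, that lottery procedure is simple enough to sketch in a few lines of Python. Everything here is invented for illustration: the applicant records, the `meets_minimum` flag standing in for the "easy cut," and the choice of four interview slots.

```python
import random

# Hypothetical applicant pool: a name plus a flag for whether the file
# clears the advertised minimum qualifications (the "easy cut").
applicants = [(f"applicant-{i:03d}", i % 3 != 0) for i in range(150)]

def lottery_shortlist(pool, n_interviews=4, seed=None):
    """Drop the obviously unqualified, then draw interviewees uniformly at random."""
    qualified = [name for name, meets_minimum in pool if meets_minimum]
    rng = random.Random(seed)  # a fixed seed lets a committee audit the draw
    return rng.sample(qualified, k=min(n_interviews, len(qualified)))

shortlist = lottery_shortlist(applicants, n_interviews=4, seed=42)
print(shortlist)  # four names drawn from the qualified pool
```

Seeding the generator is the one design choice worth noting: a recorded seed makes the draw reproducible, so "we picked at random" can actually be verified after the fact rather than taken on faith.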

Don’t get me wrong, this isn’t a throw-up-your-hands anti-measurement argument. I’d love it if somebody could find a relatively objective and reasonably efficient means of picking job candidates out of a large pool, and I certainly think it’s worth exploring new and different ways of measuring academic “quality,” like the sort of thing Bee at Backreaction talks about. (I’d settle for more essays and blog posts saying “This is what you should do,” rather than “This is what you shouldn’t do”…) But it’s also important to note that all of these things are small perturbations to the real central problem of academic hiring, namely that there are too few jobs for too many applicants.



from ScienceBlogs http://ift.tt/2mY7ovo