There’s a piece in Inside Higher Ed today on yet another study showing that student course evaluations don’t correlate with student learning. For a lot of academics, the basic reaction to this is summed up in the Chuck Pearson tweet that sent me to the story: “Haven’t we settled this already?”
The use of student course evaluations, though, is a perennial argument in academia, and not one likely to have its coffin nailed shut any time soon. It’s also a good example of a hard problem made intractable by a large number of assumptions and constraints that are never clearly spelled out.
As discussed in faculty lounges and on social media, the basic argument here (over)simplifies to a collection of administrators who like using student course evaluations as a way to measure faculty teaching, and a collection of faculty who hate the practice. If this were just an argument about the most accurate way to assess the quality of teaching in the abstract, studies like the one reported in IHE (and numerous past examples) would probably settle the question. But it’s not, because there’s a lot of other stuff going on, and because that other stuff is never clearly stated, much of what people wind up saying in the course of this argument is not actually helpful.
One source of fundamental conflict and miscommunication is over the need for evaluating teaching in the first place. On the faculty side, administrative mandates for some sort of teaching assessment are often derided as brainless corporatism– pointless hoop-jumping that is being pushed on academia by people who want everything to be run like a business. The preference of many faculty in these arguments would be for absolutely no teaching evaluation whatsoever.
That kind of suggestion, though, gives the people who are responsible for running institutions the howling fantods. Not because they’ve sold their souls to creeping corporatism, but because some kind of evaluation is just basic, common-sense due diligence. You’ve got to do something to keep tabs on what your teaching faculty are doing in the classroom, if nothing else in order to have a response when some helicopter parent calls in and rants about how Professor So-and-So is mistreating their precious little snowflake. Or, God forbid, so you get wind of any truly outrageous misconduct on the part of faculty before it becomes a giant splashy news story that makes you look terrible.
That helps explain why administrators want some sort of evaluation, but why are the student comment forms so ubiquitous in spite of their flaws? The big advantage that these have is that they’re cheap and easy. You just pass out bubble sheets or direct students to the right URL, and their feedback comes right to you in an easily digestible form.
And, again, this is something that’s often derided as corporatist penny-pinching, but it’s a very real concern. We know how to do teaching evaluation well– we do it when the stakes are highest– but it’s a very expensive and labor-intensive process. It’s not something that would be practical to do every year for every faculty member, and that’s not just because administrators are cheap– it’s because the level of work required from faculty would be seen as even more of an outrage than continuing to use the bubble-sheet student comment forms.
And that’s why the studies showing that student comments don’t accurately measure teaching quality don’t get much traction. Everybody already knows they’re a bad measure of teaching quality; the problem is that a good measure isn’t practical, and isn’t really the point anyway.
So, what’s to be done about this?
On the faculty side, one thing to do is to recognize that there’s a legitimate need for some sort of institutional oversight, and to look for practical alternatives that avoid the worst biases of student course comment forms without being unduly burdensome to implement. You’re not going to get a perfect measure of teaching quality, and “do nothing at all” is not an option, but maybe there’s some middle ground that can provide the necessary oversight without quintupling everybody’s workload. Regular classroom observations, say, though you’d need some safeguard against personal conflicts– maybe two different observers: one the dean/chair or their designee, the other a colleague chosen by the faculty member being evaluated. It’s more work than just passing out forms, but better and fairer evaluation might be worth the effort.
On the administrative side, one thing to do is to acknowledge that evaluation is less about assessing faculty “merit” in a meaningful way, and more about assuring some minimum level of quality for the institution as a whole. Student comments have some role to play in this, but it should be acknowledged that these are mostly customer satisfaction surveys, not serious assessments of faculty quality. In which case they shouldn’t be tied to faculty compensation, as is all too often the case– if there must be financial incentives tied to faculty evaluation, they need to be based on better information than that, and the sums involved should be commensurate with the level of effort required to make the system work.
I don’t really expect any of those to go anywhere, of course, but that’s my $0.02 on this issue. And though it should go without saying, let me emphasize that this is only my opinion as an individual academic. While I fervently hope that my employer agrees with me about the laws of physics, I don’t expect that they share my opinions on academic economics or politics, so don’t hold it against them.
from ScienceBlogs http://ift.tt/2cV7yPE