Knowing vs. Knowing How

By Charles Blaich and Kathleen Wise


For over a decade, we have been working with faculty, staff, and administrators from colleges and universities to translate evidence into improved student learning. We are not higher education researchers but are, in fact, college teachers from different areas who are interested in improving student learning.

Our success is mixed, at best. Early on, we assumed that evidence from well-designed research on the practices, conditions, and pedagogies that promote student learning would convince faculty to revise and improve their teaching. This turned out to be wrong. At most of the colleges and universities we worked with, evidence that variations in students’ learning environments drove changes in their learning had almost no impact on its own; change came only after years-long work at the institution to embed this knowledge in institutional governance and faculty development programs.

Of course, if we’d been good researchers, we would have started with a thorough lit review of the research on the extent to which knowledge about good pedagogy leads faculty to improve their teaching. There’s some terrific and wholly depressing research on this in science, technology, engineering, and mathematics (STEM). According to the 2010–2011 HERI Faculty Survey, more than half of all STEM faculty use extensive lecturing as their primary teaching tool (Hurtado, Eagan, Pryor, Whang, & Tran, 2012). Yet Freeman et al.’s (2014) recent meta-analysis of hundreds of studies found a benefit for student learning of about half a standard deviation in STEM classes in which faculty used some form of active learning instead of lecturing. This is the third such meta-analysis on this question, and all three have shown almost identical positive effects.

Freeman et al. (2014) were, perhaps, showing their exasperation at the slow uptake of empirically validated teaching techniques in the sciences when they said, “If the experiments analyzed here had been conducted as randomized controlled trials of medical interventions, they may have been stopped for benefit—meaning that enrolling patients in the control condition might be discontinued because the treatment being tested was clearly more beneficial” (p. 8413).

Even though we know about the benefit of active pedagogies, and even though they have received a great deal of attention at STEM conferences and in STEM journals, they have yet to become the norm in science classes. Why is this the case?

We know some of what blunts the spread of these more effective approaches to teaching. These approaches take more time—and time is in short supply at colleges and universities. More and more intro science courses are taught by adjuncts who don’t have the time, space, and support to implement and refine active learning techniques in their classes. Departments and institutions continue to push for greater research productivity, and spending more time on research leaves less time for other things. Many faculty still view teaching as an art steeped in disciplinary knowledge rather than in pedagogical skill. Furthermore, many faculty respect their intuitions and recollections about how they learned more than they do educational research on learning.

But there are other important reasons. The first has to do with how we train faculty to teach better. If the word “train” clanged against your ear, you understand the first problem. We don’t train faculty to teach better; we help them develop. Most faculty development efforts are light-touch affairs that rely on the belief that hearing about a successful teaching technique at a conference, reading about one in an article, or participating in a beginning-of-the-semester workshop will translate into improved teaching. In other words, give smart, dedicated people a little knowledge, and they will use it to become better teachers.

Unfortunately, not much evidence supports this belief. In a 2011 review of the impact of faculty development efforts in the STEM fields, Henderson, Beach, and Finkelstein reported that the least effective strategy was “developing and testing ‘best practice’ curricular materials and then making these materials available to other faculty” (p. 978). Henderson et al. cite another researcher, Elaine Seymour, who said that this strategy is based on the unproven theory that “good ideas, supported by convincing evidence of efficacy, will spread ‘naturally’—that, on learning about the success of particular initiatives, others will become convinced enough to try them” (as cited in Henderson et al., p. 972).

Effective faculty development strategies involved more work: (1) running coordinated, focused programs that last many weeks; (2) giving faculty performance evaluations and feedback; and (3) helping faculty change the conceptual frameworks they use to design their classes. Like any kind of deeper learning, improving one’s teaching requires sustained effort, feedback, and careful thinking.

Right now you might be thinking, “Sure, that makes sense, but shouldn’t knowing more about effective teaching strategies cause some improvement?” Perhaps, but the distance between hearing about good pedagogy and using it effectively may be greater than we think. The reason is that the kind of knowledge generated by good research on pedagogy is different from the kind of knowledge it takes to create a better classroom environment for learning. Just as reading a research methods text, listening to a lecture on statistics, or participating in a webinar about using NVivo doesn’t make you an excellent researcher, reading or hearing about research on effective pedagogies doesn’t make you an excellent teacher.

In the 2007 NSSE Annual Report, George Kuh says, “While high impact practices are appealing, to engage students at high levels, these practices must be done well” (p. 9). Sounds simple, but what exactly does a well-done first-year or senior seminar look like? Sure, it shouldn’t ask students just to memorize material, and the faculty member shouldn’t drone on with boring, back-turned-to-the-class lectures. But if we avoid everything that would make the class heinous, what are the qualities that would make it excellent? And are these qualities the same for students at an Ivy League school as they are for students at a regional state university? What about for women and for men? Or for women and men who come from different ethnic and socioeconomic backgrounds? Or women and men who vary in all of these ways at two different institutions and who are taking classes in different majors? Now we’re starting to talk about the complicated environments that teachers face every day.

Aristotle distinguished practical wisdom (phronesis) from theoretical wisdom (sophia). Gilbert Ryle (1946) described a contemporary version of these two ideas when he distinguished between “knowing that” and “knowing how.”


Effective possession of a piece of knowledge-that involves knowing how to use that knowledge, when required, for the solution of other theoretical or practical problems. There is a distinction between the museum-possession and the workshop-possession of knowledge. A silly person can be stocked with information, yet never know how to answer particular questions. (p. 16)


We can also see this distinction when looking at the obviously related, yet very different, fields of educational research and teaching. Researchers are after a different kind of knowledge than are teachers who seek to improve their effectiveness. Researchers want generalized knowledge; teachers want to apply their knowledge in a particular context with particular students. Researchers seek to generate knowledge that speaks to an abstracted reality in which they have controlled for variables and carefully measured things according to professional standards. Teachers don’t control for background variables; their classes are a brew of people who differ in gender identity, ethnicity, entering preparation, socioeconomic class, and other variables that may affect the impact of any pedagogy they use.

We suggest that when faculty learn about research on good pedagogy from reading an article, listening to a presentation at a conference, or participating in a workshop, they are just starting to “know that” about the pedagogy. They do not yet “know how” to do it well.

Imagine that you’re a faculty member in sociology who wants to engage a team of second-year students in an undergraduate research project. You don’t know much about their high school experiences, their SAT/ACT scores, or their socioeconomic backgrounds, and you only know a little about their ethnic identities. To help you figure out whether working on undergraduate research would be beneficial for these students, you read the following from a new study on the effects of high-impact practices:

There were also instances of nonsignificant general effects that masked significant effects of high impact/good practices for particular subgroups of students. For example, net of other factors working with a faculty member on a research project failed to have a significant general effect on fourth-year critical thinking skills. However, for students who entered postsecondary education with critical thinking skills in the lowest third of the distribution, working with a faculty member on a research project conferred a substantial and statistically significant advantage in fourth-year critical thinking skills. Similarly, we found no statistically significant relationship between cooperative learning experiences and fourth year positive attitude toward literacy in our general effects model but this masked a statistically significant negative interaction effect between cooperative learning experiences and precollege ACT score on the outcome. (Seifert, Gillig, Janson, Pascarella, & Blaich, 2014, pp. 556–557)

Based on these findings, should you ask the students to do the project? If you do, should the students work together? What lesson do you take from this research?

This kind of knowledge is not useless to a teacher, but neither is it immediately useful. Researchers abstract things. They engage in “systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalized knowledge” (Office for Human Research Protections, 2009). But when teachers, and other practitioners, use research findings to guide their work, they work in context and in the moment. Applying research findings to complicated and fluid classroom settings requires experimentation and persistence. It’s not impossible, but it takes time.

This is why faculty who learn about a new way of teaching from an article, conference, or workshop often fail in their first attempts to translate this knowledge into better learning for their students. Research findings on good practices don’t, by themselves, improve student learning—they are not recipes for creating more effective learning environments. They can, however, inspire and guide faculty who want to become more effective teachers. Good faculty will persist, tuning the ideas they get from research on good teaching techniques until they figure out what works for their students, and good institutions have ways of supporting faculty while they do this.

We interviewed Teagle Scholars Ty Buckman (Wittenberg University), Cindy Crimmins (York College of Pennsylvania), and Paul Sotherland (Kalamazoo College), who have led faculty development efforts on their campuses, about the principles that guide their work and the lessons they have learned about helping faculty become more effective at improving student learning. Listen to our conversation with them here.

If you have questions about this article, contact the authors Charles Blaich and Kathleen Wise.

References

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111, 8410–8415.

Henderson, C., Beach, A., & Finkelstein, N. (2011). Facilitating change in undergraduate STEM instructional practices: An analytic review of the literature. Journal of Research in Science Teaching, 48, 952–984.

Hurtado, S., Eagan, M. K., Pryor, J. H., Whang, H., & Tran, S. (2012). Undergraduate teaching faculty: The 2010–2011 HERI Faculty Survey. Los Angeles: Higher Education Research Institute, UCLA.

National Survey of Student Engagement. (2007). Experiences that matter: Enhancing student learning and success—Annual Report. Retrieved January 6, 2014, from http://nsse.indiana.edu/NSSE_2007_Annual_Report/docs/withhold/NSSE_2007_Annual_Report.pdf

Office for Human Research Protections. (2009). Human subjects research, Code of Federal Regulations, 45 CFR 46.102(d). Retrieved January 6, 2015, from the U.S. Department of Health and Human Services website, http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html#46.102

Ryle, G. (1946). Knowing how and knowing that: The presidential address. Proceedings of the Aristotelian Society, New Series, 46, 1–16. Retrieved January 6, 2015, from http://www.jstor.org/stable/4544405

Seifert, T. A., Gillig, B., Janson, J. M., Pascarella, E. T., & Blaich, C. F. (2014). The conditional nature of high impact/good practices on student learning outcomes. The Journal of Higher Education, 85, 531–564.