
Tuesday, 07 November 2006

Response Cards to Facilitate Active Learning in Lectures

Lectures are widely reviled for putting learners in a passive mode. On the other hand, lectures are relatively easy to implement, even with large numbers of learners. And regardless of the pluses and minuses, lectures are ubiquitous. While there aren't many lectures in kindergarten, by third grade teachers are talking a lot and learners are listening. The college classroom is dominated by lecture. So are corporate training sessions, conference presentations, church sermons, public meetings, elder hostels, and the local library's evening speaker series. Lectures aren't going away anytime soon, nor should they. Like all tools for learning, they provide certain unique advantages and have certain unique limitations.

Lectures can be modified in various ways to increase the amount of active learning: to ensure that learners are more fully engaged, develop a more robust understanding of the learning material, are more likely to remember what they learned, and are more likely to use the information at a later time.

One such method for increasing active learning is the "response card." Response cards are provided to students so that each one can respond to instructor questions. Two types of response cards are commonly available: (1) write-on cards, on which each learner writes his or her answer (for example, with a dry-erase marker), and (2) preprinted cards, which learners hold up to display an answer (True or False; or A, B, C, or D, for example).

Research

While not a lot of good research has been done on response cards, the research that exists suggests that, compared with the traditional method of having students raise their hands in response to questions, response cards improve learners' classroom engagement, the amount they learn, and the amount they retain after a delay (Marmolejo, Wilder, & Bradley, 2004; Gardner, Heward, & Grossi, 1994; Kellum, Carr, & Dozier, 2001; Narayan, Heward, Gardner, Courson, & Omness, 1990; Christle & Schuster, 2003). Learners generally prefer response cards to simple hand-raising. Most of the research has focused on K-12 classrooms, with some research done in community colleges. The research has tended to focus on relatively low-level information and has not tested the value of response cards on higher-order thinking skills.

Recommendations

Getting learners to actively respond in lectures is certainly a worthwhile goal. Research has been fairly conclusive that learners learn better when they are actively engaged in learning (Bransford, Brown, & Cocking, 1999). Response cards may be one tool in the arsenal of methods to generate learner engagement. Of course, electronic keypads can be used in a similar way, at a significantly increased cost, with perhaps some added benefits as well. Still, at less than $30 a classroom, response cards may be worth a try.

Personally, I'm skeptical that audiences in adult training situations would be open to response cards. While 87% of college students rated the cards highly (Marmolejo, Wilder, & Bradley, 2004), the corporate audiences I've worked with over the years might find them childish or unnecessary ("Hey, why can't we just raise our hands?"). On the other hand, electronic keypads are more likely to be accepted. Of course, such acceptance (whether we're talking about response cards or electronic keypads) really depends on the relevance of the material and the questions used. If the questions are low-level rote memorization, adult audiences are likely to reject the instruction regardless of the technology employed.

Making lectures interactive has to be done with care. Adding questions and student responses can have negative consequences as well. When we ask questions, we signal to learners what to pay attention to. If we push our learners to think about low-level trivia, they will do that to the detriment of focusing on more important high-level concepts.

Limitations of the Research

The research on response cards tends to focus on low-level questions delivered all too frequently throughout lectures. Learners who have to answer a question every two minutes are being conditioned to focus on trivia, facts, and knowledge. Future research on response cards should focus on higher-level material in situations where more peer discussion is enabled.

Most of the research on response cards suffered from minor methodological difficulties (e.g., weaker-than-preferred comparison designs and small numbers of learners actually tracked) and ambiguity (e.g., it was often difficult to tell from the research articles whether the in-class questions were repeated on the final quizzes used as dependent variables, and no inferential statistics were reported to test hypotheses).

References

Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37, 405-410.

Christle, C. A., & Schuster, J. W. (2003). The effects of using response cards on student participation, academic achievement, and on-task behavior during whole-class math instruction. Journal of Behavioral Education, 12(3), 147-165.

Gardner, R., Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis, 27, 63-71.

Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101-104.

Narayan, J. S., Heward, W. L., Gardner, R., Courson, F. H., & Omness, C. K. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483-490.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Wednesday, 31 May 2006

Research Brief: Animations versus Static Paper-based Diagrams

Which is better: (1) computer-based animations with audio narration, or (2) paper-based diagrams with text narratives?

Suppose that the animations and the diagrams equalized, as much as possible, the amount of visual information presented, and that the words in the narration and the text were identical. In other words, the comparison was fair.

Suppose also that the content areas in the learning materials dealt with dynamic spatial causation, utilizing topics seemingly appropriate for dynamic graphical displays. Specifically, the topic areas included:

  • How lightning forms.
  • How a toilet works.
  • How ocean waves work.
  • How a car’s braking system works.

Suppose also that the tests queried learners on retention and transfer. In other words, to gauge their memory, learners were asked questions such as, “Please write down an explanation of how lightning works,” and to assess transfer, they were asked, “What could you do to decrease the intensity of a lightning storm?”

Under those conditions, which would produce the best learning on the questions asked?

A. Animations with audio narration.
B. Paper-based diagrams with text narratives.
C. Both would produce equal learning benefits.

Richard Mayer, Mary Hegarty, Sarah Mayer, and Julie Campbell (all of the University of California at Santa Barbara) conducted four experiments to answer this question. Given that each experiment included two comparisons (retention and transfer), they ended up with eight comparisons.

The results were clear. In four of the eight comparisons, the static diagrams outperformed the animations; in the other four, the differences were not statistically significant. In not one case did the computer-based animations outperform the static paper-based depictions.

The average percentage difference (favoring the paper-based depictions over the animations) was 27%, with an average Cohen’s d effect size of 0.68, a moderately large difference.
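
For readers who want to interpret that statistic: Cohen’s d expresses the difference between two group means in standard-deviation units. A common formulation (my gloss, assuming equal group sizes, not a formula reported in the paper) is:

\[ d = \frac{M_{\text{static}} - M_{\text{animation}}}{SD_{\text{pooled}}}, \qquad SD_{\text{pooled}} = \sqrt{\frac{SD_{\text{static}}^2 + SD_{\text{animation}}^2}{2}} \]

By the usual rule of thumb, a d of about 0.2 is small, 0.5 is medium, and 0.8 is large, so 0.68 sits in the medium-to-large range.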

The Authors’ Explanations of these Remarkable Findings

For many of us, this result is counterintuitive. Why would paper-based diagrams outperform animations? Although the authors of the research paper make some conjectures, their experiments don’t really shed light on this question. The experiments simply compare animations to paper-based depictions.

The authors suggest that the paper-based depictions may have outperformed the animation-based depictions for the following reasons (described in detail on page 264):

  1. The paper-based depictions involve simultaneous presentation of the graphical illustrations, whereas the animation-based depictions presented the graphical content in a chronological flow with no simultaneity.
  2. The paper-based materials enable learner control through pacing and eye movements, whereas the animations do not.
  3. The paper-based materials are purposely segmented into meaningful units showing crucial states of the system, whereas the animation presents the diagrams in one continuous flow.
  4. The paper-based materials utilize printed words, whereas the animation condition uses audio narration.
  5. The paper-based materials are presented on paper, whereas the animation materials are presented on a computer screen.

In future experiments, these factors will need to be varied to isolate the actual cause of the differences. Specifically, it would be helpful for e-learning designers to know the relative effectiveness of animations that also show crucial states of the system and enable more learner control.

Other Caveats and Shortcomings

Skeptical instructional designers may wonder about the target audience. Could it be, for example, that these results aren’t relevant for young adults (those who have extensive experience using computers)? This worry seems misplaced. The learners in these experiments were all young college students, with an average age of about 19. On the other hand, 82% of the learners were women, suggesting that the results may not generalize to men.

I worry about the short retention interval. As in most of Mayer’s experiments, immediate tests of retention and transfer are used. In other words, the students encounter the learning material and then are immediately tested on it. This should make us wonder whether the differences between animation and static images would survive the vagaries of cognitive forgetting processes. It might be true, for example, that static images help for short retention intervals while animations help for longer, more realistic, retention intervals.

The experiments also use very short learning events (seven minutes or less, with some learning sessions lasting only a minute or two). This tends to limit the generalizability of the results. Real-world instructional designers are apt to question these results by noting that animations may energize learners to pay attention to e-learning courses that take, say, 30 minutes or more, whereas static graphics are less likely to produce this energizing effect. So while static graphics may work for five-minute snippets of learning, more authentic learning events may benefit from animations.

Despite these major limitations, the findings are compelling. They show, at the very least, that in micro-learning situations, animations may not be as obvious a choice as we might have believed.

The experimental results are also partly consistent with a recent review of the research literature, which found no difference in learning results between animations and paper-based depictions (Tversky, Morrison, & Betrancourt, 2002). Neither the current study nor the review of the literature found any advantage for animations.

Again, it could be that well-designed animations have a facilitative effect. On the other hand, it appears that more research is needed to uncover principles that outline effective animation design.

Will’s Recommendations for Instructional Designers/Developers:

  1. If possible, utilize evidence-based instructional-design practices to experiment with different animation designs (to see which work for your content, your learners, and your delivery methods). Specifically, compare static graphics to animations and compare different animation designs.
  2. As a first cut in designing animations, enable learners to control the movement from one crucial system state to the next (see the sketch after this list).
  3. As a first cut in designing animations, utilize audio narration, but also provide a text version that can be read separately (not simultaneously).
  4. Consider utilizing the spacing effect by presenting both a dynamic animation and a later static depiction with simultaneous text presentation. The second depiction, because it enables studying, could be utilized with some augmenting questions or exercises to get the learners to think deeply about the dynamic flow of events. Also consider alternating between dynamic and static depictions or presenting the static one before the dynamic.
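
To make recommendation 2 more concrete, here is a minimal sketch of learner-paced stepping through an animation's crucial system states. It is written in Python only as a stand-in for whatever authoring environment you actually use, and the state captions and function names are my own hypothetical illustrations, not materials from the study.

    # A hypothetical example: the learner, not the clock, advances the
    # presentation from one crucial system state to the next.
    LIGHTNING_STATES = [
        "1. Warm, moist air rises and cools, forming a cloud.",
        "2. Ice crystals and water droplets collide, separating electrical charge.",
        "3. Negative charge pools at the base of the cloud.",
        "4. A stepped leader descends and a return stroke flashes upward.",
    ]

    def run_learner_paced_sequence(states):
        """Show one crucial state at a time; the learner controls the pacing."""
        index = 0
        while index < len(states):
            print(states[index])
            command = input("[n]ext, [b]ack, or [q]uit: ").strip().lower()
            if command == "n":
                index += 1                  # advance to the next crucial state
            elif command == "b":
                index = max(0, index - 1)   # step back and re-study
            elif command == "q":
                break

    if __name__ == "__main__":
        run_learner_paced_sequence(LIGHTNING_STATES)

In a real course, the print call would render a frame or a short animation segment, and the next/back commands would be on-screen buttons. The design principle is the same either way: segment the flow into meaningful units and hand the pacing to the learner.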

Citations:

Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11, 256-265.

Tversky, B., Morrison, J. B., & Betrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57, 247-262.

Thursday, 01 December 2005

Conversational Writing is Better than Formal Writing

Research has shown that a conversational writing style is generally more effective at producing learning results than more formal writing.

See the blog post by Kathy Sierra to learn more. She did a nice job of reviewing a 2000 study by Moreno and Mayer from the Journal of Educational Psychology.

Also, check out all the comments after her blog post to see the research findings put into perspective. Some people loved the comments. Others went crazy with angst.


Some Minor Caveats (it might be best to read this after you read Kathy's post)

In the Moreno and Mayer study, the researchers found the following improvements due to a more personalized style.

  • Transfer Improvements: Experiment 1: 36%, Exp. 2: 116%, Exp. 3: 46%, Exp. 4: 20%, Exp. 5: 27%
  • Retention Improvements: Experiment 1: 3%, Exp. 2: 6%, Exp. 3: 22%, Exp. 4: 10%, Exp. 5: 12%
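
Averaging across the five experiments (my own quick arithmetic, not figures the researchers report):

  • Mean transfer improvement: (36 + 116 + 46 + 20 + 27) / 5 = 49%
  • Mean retention improvement: (3 + 6 + 22 + 10 + 12) / 5 ≈ 10.6%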

Transfer in this case meant the ability to answer questions regarding the topic that were not directly discussed in the text. So for example, one transfer question used was, "What could be done to decrease the intensity of a lightning storm?"

Retention was measured with the question, "Please write down an explanation of how lightning works."

You'll notice from the above numbers that transfer improved more than retention did. In fact, two of the five experiments did NOT show statistically significant improvements in retention. While transfer measures are generally considered more difficult to obtain and thus more important, the actual tests of transfer and retention in the five experiments cited are roughly equal in difficulty. Certainly, if we wanted learners to be able to explain how lightning works, the experiments do NOT show definitively that a more personalized writing style would guarantee such a result. On the other hand, personalization did not hurt the learning either.

Note further that in the two experiments where retention was not statistically improved (Experiments 1 and 2), the learners were observers and did not have to interact with the learning material. This is relevant to the other caveat I want to discuss.

The other caveat is that while conversational style is highlighted in Kathy's blog, the researchers are very careful to focus on personalizing the writing and drawing the readers (or listeners) into the dialogue. So, for example, it may be helpful to do the following in our instructional designs (these insights are not necessarily empirically tested, but they are consistent with the research results):

  1. When writing or speaking, use the word "you" instead of a more formal third-person style (for example, "you can see how lightning forms" rather than "one can observe how lightning forms").
  2. When writing or speaking, it may also be useful to use the word "I," as such a use may encourage your audience members to respond on a personal level.
  3. It may be best to address learners as participants not as observers.
  4. It may be best to relate the content to the learners' real world experiences.

Note that more research is needed in this area. There are not enough studies to predict this same effect with all learners, all learning materials, and all learning and performance situations.

Research Article Cited by Kathy: 

Moreno, R., & Mayer, R. E. (2000). Engaging students in active learning: The case for personalized multimedia messages. Journal of Educational Psychology, 92, 724-733.