Work-Learning Research


Thursday, 28 September 2006

Precisely False Industry Data

Recently, the New York Times Public Editor wrote an article on polling. The article skewered many current practices, and it was also educational for any of us who want to truly understand the polling data we hear in the news.

Sadly, the article made me think of the pathetic data that runs around our field---the training, learning, development, and e-learning field. I've already mentioned some of the data that poses as learning research.

But there is another wide swath of data that we should be very skeptical about---the data some "research" firms and trade organizations are peddling as industry data. The data is typically gathered by sending out surveys to an unrepresentative sample of companies, by having only a fraction of respondents complete the survey, by gathering opinions, and by boisterously proclaiming that the data tell us what the industry's best practices are. Here are some of the problems with this farce (a small simulation after the list illustrates the second one):

  1. Biased sampling of organizations.
  2. No control for the biasing effects of non-respondents.
  3. Assuming that opinion is fact.
  4. Relying on the averaging of opinions.
  5. Assuming that the average respondents have the best insights.
  6. The arrogance and lack of caveats in the reporting.
  7. Year-to-year comparisons with different companies in each year's sample.
  8. Additional biasing due to fraud and corruption, as when these "research" organizations tilt the best-practice results toward their paying customers.
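
To make the second problem concrete, here is a minimal sketch in Python (my own illustration, not data from any real industry survey) of how non-response bias can inflate a reported "industry average." Every number, and the assumption that bigger-budget companies are more likely to respond, is invented for illustration.

```python
# A minimal sketch of non-response bias (illustration only; all numbers invented).
import random

random.seed(42)

# Hypothetical population: 1,000 companies with training budgets ($ per employee).
population = [random.gauss(600, 200) for _ in range(1000)]

def responds(budget):
    """Hypothetical assumption: bigger-budget companies are more likely to answer."""
    return random.random() < min(0.9, max(0.05, budget / 1500))

respondents = [b for b in population if responds(b)]

true_mean = sum(population) / len(population)
reported_mean = sum(respondents) / len(respondents)

print(f"True average budget:     ${true_mean:,.0f}")
print(f"Survey-reported average: ${reported_mean:,.0f}")
print(f"Response rate:           {len(respondents) / len(population):.0%}")
# The reported average is inflated because the non-respondents (disproportionately
# smaller-budget companies) are silently dropped from the calculation.
```

The same mechanism operates whenever the companies that answer differ systematically from the companies that don't.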

Monday, 31 July 2006

Biasing Awards with Entrance Fees?

CLO magazine (Chief Learning Officer) has just finished accepting nominations for its 2006 "Learning In Practice Awards."

Awards are a great thing, of course, because theoretically they can reward outstanding achievements and highlight for the industry the leading-edge thinkers and product/service implementations.

But award selections are not easy to administer. They involve a great deal of investment by the sponsoring organization. Sponsoring organizations almost always get payback from giving awards because their name becomes synonymous with power and credibility.

Unfortunately, the cost of these awards programs almost always pushes award providers to ask for an entry fee of some sort. For example, the CLO awards ask for $149 per entry. The Brandon-Hall awards have an entry fee as well.

These entrance fees completely bias the results, and should make all of us suspicious about whether the award winners truly represent the best in our industry. All we need to do is ask ourselves this question, "Am I likely to spend $150 if I have no likelihood of personal, social, or financial gain?" Even allowing for the occasional radical altruist or rich benefactor, most of us want to get something in return for spending $20, let alone $150. So why do award nominations get submitted? For personal or business gain. And what kind of organizations are most likely to be nominated? Organizations with lots of money and marketing clout.

What kind of products/services and people get overlooked?

  • Innovators
  • Individual contributors
  • Small businesses
  • Those who worry about associating themselves with phony awards

One solution to this problem is to use a sliding scale for entrance fees or to offer scholarships for organizations and individuals that lack the financial resources of our largest vendors.

In the meantime, when we see an award, we ought to be aware that somebody else was probably more deserving.

Friday, 21 July 2006

The Long Tail: How it Applies to the Learning and Performance Field.

Chris Anderson, editor of Wired Magazine, has a new book—The Long Tail—another inspired insight ready to rear up like a tsunami and sweep indiscriminately over everything.

The insight from the book, and from the original article in a 2004 edition of Wired, is this: the low cost of infinite shelf space and the reach of the internet enable niche products to find an audience---and make that reach economically viable. The following chart from Slate Magazine captures the concept nicely.

[Chart from Slate Magazine illustrating the long-tail concept]

It’s a nice insight and Anderson musters compelling evidence for the increasing power of the long tail to transform our economy and our business infrastructure.
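
To make the idea concrete, here is a minimal sketch (my own illustration, not Anderson's data): under a Zipf-like popularity curve, the huge number of low-demand niche items can collectively account for a substantial share of total demand. The catalog size, the head/tail cutoff, and the exponent are all invented for illustration.

```python
# A minimal long-tail sketch (illustration only; catalog size and exponent invented).

def zipf_demand(n_items, exponent=1.0):
    """Relative demand for items ranked 1..n_items under a Zipf-like curve."""
    return [1.0 / rank ** exponent for rank in range(1, n_items + 1)]

demand = zipf_demand(n_items=100_000)
total = sum(demand)

head = sum(demand[:1_000])   # the 1,000 best-sellers a physical store might stock
tail = sum(demand[1_000:])   # the niche items viable only with "infinite shelf space"

print(f"Share of demand in the head (top 1,000 items): {head / total:.1%}")
print(f"Share of demand in the long tail:              {tail / total:.1%}")
```

With these made-up parameters the tail holds well over a third of total demand, which is the economic opening Anderson describes for niche products; with a shallower popularity curve the tail's share grows even larger.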

To hear Anderson being interviewed by the incomparable Tom Ashbrook, check out the On Point archive from July 18, 2006.

To read a critique of the concept—a warning about its boundary conditions—from Slate Magazine’s Tim Wu, check out the article entitled, "The Wrong Tail: How to turn a powerful idea into a dubious theory of everything."

To read how the concept might affect the publishing industry, check this out from the NY Times Book Review.

To read how the concept might apply to the healthcare field, read Jim Walker's thoughtful analysis.

To read a critique of the concept (and the actual economic data) from the Wall Street Journal, read Lee Gomes's excellent article, "It May Be a Long Time Before the Long Tail Is Wagging the Web."

To read more about the Long-Tail concept, check out The Long Tail blog.

How does the long tail relate to the learning-and-performance field? I offer some initial thoughts:

1. Course content: The long tail may enable more and more niche players to succeed in the marketplace, potentially hurting companies with large libraries of everything. Is this why Skillsoft’s stock has lost 75% of its value over the last four years (though it’s been inching up lately)?
2. Conferences: The long tail may kill, or significantly weaken, the mega conference, pushing attendees into smaller, niche-driven conferences. Simultaneously, vendors (the financial lifeblood of most conferences) may avoid mega conferences and spend their exhibit dollars on smaller conferences, especially industry-specific ones.
3. Industry organizations: Organizations like ASTD, ISPI, Masie Center, etc., may lose their influence to specialty organizations (like the elearningGuild) that are attracting an increasingly devoted membership.
4. Employment: Perhaps the long tail will push more and more individuals and small groups into external consulting, development, and delivery functions.
5. Publishing (including books, magazines, eBooks, and blogs): The long tail may make it easier and easier for authors and thought leaders to distribute their works online, putting pressure on book publishers like Pfeiffer and ASTD to compete, and on periodicals like T+D, PI, and CLO to maintain an audience in the face of a burgeoning swell of blogs, white papers, and webinars.
6. Industry Advertising: If we think of advertising placement opportunities as products, the long tail may push vendors in our industry to seek niche placements. One opportunity for this is the Google or Yahoo! online advertisements that are already changing the advertising game, but other niche opportunities are available as well.
7. Best practices: Until a seismic event occurs in our industry, professionals have too little impetus to create effective learning-and-performance interventions. Thus, sadly, we will continue to window-shop for fad-of-the-moment ideas emerging from the long tail; those ideas will enjoy a brief explosion into the head of the curve before gradually fading back into obscurity.

Where will the long tail not apply?

1. Credentialing: Although ISPI’s CPT (Certified Performance Technologist) and ASTD’s CPLP (Certified Professional in Learning and Performance) are both available (as are a few other credentials), multiple credentialing agencies weaken the meaningfulness of any one credential, which is likely to put a ceiling on the market power of these credentials.
2. Authoring Tools: Buyers tend to gravitate toward stable tools and systems, not wanting to deal with the uncertainty of new technologies.
3. LMS’s: Again, buyers tend to gravitate toward tools that are proven and vendors that can afford to invest in interface-diverse interoperability.
4. Work-Learning Research and Will Thalheimer: Though I am likely always to be hidden somewhere in the long tail, I have ambitions to be ubiquitous with my message of research-based practice. BIG TONGUE-IN-CHEEK SMILE.

Thursday, 13 July 2006

Book Review -- Wick, Pollock, Jefferson, & Flanagan (2006).

The book, The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results; by Calhoun Wick, Roy Pollock, Andrew Jefferson, and Richard Flanagan; is one of the most important books published in the training and development industry in a very long time.

There are three ways to access my book review:

  1. Listen online (Click on the play button)
  2. Listen on your computer or MP3 player (Click on the MP3 link)
  3. Read the review below.

The audio is about 20 minutes.


MP3 File

Book Review by Will Thalheimer
President of Work-Learning Research, Inc.

Book: The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results

Authors: Calhoun Wick, Roy Pollock, Andrew Jefferson, and Richard Flanagan

Publisher: Pfeiffer

Publication Date: April 2006

Introduction

The learning-and-performance field—of which I am a devoted member—hasn’t had a really big idea since the performance-improvement crusade began gathering momentum in the 1980’s. But now, thanks to the work of Cal Wick, Roy Pollock, Andrew Jefferson, Richard Flanagan and their colleagues at the Fort Hill Company, we finally have a new innovation—a systematic method for training follow-through.

It’s not a surprise that training can only be effective if learners put what they learn into practice. What Wick and company have done is demonstrate the feasibility of driving training transfer into the flow of work. Their book is really a culmination of years of exploration as they bravely embraced the exhausting and dangerous work of pioneers.

They’ve taken an evidence-based approach to learning design—grappling with real-world clients, making careful observations, gathering data, utilizing research findings, and fine-tuning their practices. Perhaps most importantly, they’ve created a breakthrough technology that enables training-and-development leaders to push learning results into the actual workplace.

E-learning pundits haven’t recognized it yet, but Fort Hill’s Friday5s training-follow-through software (along with competitive products like ZengerFolkman’s ActionPlan Mapper) may be the most disruptive e-learning technology yet devised. While web-meeting platforms, LMS’s, rapid authoring tools, and even Google may seem potent, they don’t change training effectiveness as much as a good training follow-through system.

Wick, Pollock, Jefferson, and Flanagan may enjoy promoting Fort Hill products, but they go out of their way to craft a broader message in their brilliant new book, The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results. The authors lay out a devastating analysis of the current state of training practice—not by being negative—but by illustrating with cases, examples, and research how to do training right.

The book is nothing short of revolutionary. Unfortunately, in our dysfunctional field not everyone will take up arms against their own ineffective practices, but the book provides solid guidance to the enlightened soldiers in our midst. If you want to improve on-the-job performance and business results, this book is a guiding light.

Changing the Paradigm and Technology of Learning

In the flow of our everyday lives, the world as we know it follows predictable patterns. Things change, but they change predictably. Every once in a while, however, something new appears—an innovation or idea so strange and yet so perfectly in tune with the cravings, resources, and zeitgeist of the time that it changes everything.

Disruptive technologies like electricity, phones, computers, and the internet have produced powerful ripples through the human fabric. Automobiles not only displaced the horse, they enabled the rise of the middle class, the building of suburbs, and intellectual and social freedom for young adults. Paradigm shifts and scientific discoveries create the same effects, changing the way we see the world—changing the possibilities. If not for the ideas of Jesus, Darwin, Gandhi, Confucius, Freud, Einstein, Watson and Crick, Kuhn, and others, we would live in a different world.

The last great disruptive innovation to arise within the learning-and-performance field was the move away from “training” and toward “performance improvement.” Unfortunately, that movement is not yet complete. The hard truth is that we talk more about on-the-job results than we achieve them.

In the move from training to performance improvement, something got lost. Performance gurus often badmouth training as inadequate, but they give short shrift to its strengths and are blind about how to design the complete training experience to make training work. This kind of blindness is endemic in our field for two reasons: (1) because we have so little understanding of the basics of human learning, and (2) because we rarely evaluate our performance.

Thankfully, Cal Wick and his team (as well as a few others) have tired of training’s big lie. They know that training can be powerful—if only the right processes and procedures are put into place. Because they understand learning, they can envision a systematic set of guidelines that work. Because they measured the performance of their learners, they have been able to fine-tune their recommendations.

The Six Disciplines is poised to become one of the most important books in the learning-and-performance field. Not since the publication of Dana Gaines Robinson and James C. Robinson’s book on performance consulting or the seminal work of Bob Mager on performance-based instructional design, has our field been offered a new system of thinking—a new way to do our jobs as learning-and-performance professionals.

The Book’s Overarching Message

The book proposes six disciplines and offers scores of recommendations, but its central message is that what happens after training is just as important as—and probably more important than—the training itself.

The six disciplines are:

1. Define Outcomes in Business Terms
2. Design the Complete Experience
3. Deliver for Application
4. Drive Follow-Through
5. Deploy Active Support
6. Document Results

Wick, Pollock, Jefferson, and Flanagan suggest that training ought to be conceptualized with a new finish line.

The “finish line” for learning and development has been redefined. It is no longer enough to deliver highly rated and well-attended programs; learning and development’s job is not complete until learning has been converted into results that matter to the business. (p. 13)

This new finish line enables us to see possibilities beyond the completion of smile sheets. A learner’s job—indeed an organization’s job—is not done when the classroom door swings shut.

The authors also emphasize the importance of visualizing training as something that occurs within an expanded timeline. Before-training efforts and after-training efforts are just as critical as the training efforts themselves. Particularly important are the after-training efforts because they focus learner attention on implementing the learning, reinforce fading memories, and transform the process of learning from an individual pursuit to an organizational responsibility. Learning changes from a love-it-and-leave-it experience to a system of reciprocal reinforcement where the results are measured in on-the-job performance.

The Book’s Evidence

The authors cite lots of organizational research to back up their claims, from thinkers and researchers like Broad, Brinkerhoff, and Newstrom. And the notion of a new finish line is entirely consistent with the research on fundamental learning factors—the kind of research I’ve been working with for almost a decade. For example, we know learners forget most of what they learn—unless that information is reinforced in the workplace. Each of the six disciplines pushes us to design an expanded learning experience, one that focuses on workplace implementation, not training per se.
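
As a rough illustration of why after-training reinforcement matters, here is a toy sketch of my own (not from the book and not Work-Learning Research data) using a simple exponential forgetting model; the decay rates and the size of the reinforcement boost are invented.

```python
# A toy forgetting-curve sketch (illustration only; all parameters invented).
import math

def retention(days, decay_rate):
    """Fraction of learned material retained after `days`, simple exponential decay."""
    return math.exp(-decay_rate * days)

DECAY = 0.05  # invented decay rate per day

# Scenario A: training only, no follow-through; check retention 30 days out.
no_followup = retention(30, DECAY)

# Scenario B: a workplace reinforcement at day 10 restores retention to 90% of the
# original level, and forgetting then proceeds more slowly because the material
# has now been practiced in context (again, an invented assumption).
reinforced = 0.90 * retention(20, DECAY * 0.5)

print(f"Retention at 30 days, no follow-through:  {no_followup:.0%}")   # ~22%
print(f"Retention at 30 days, with reinforcement: {reinforced:.0%}")    # ~55%
```

The specific numbers mean nothing; the shape of the comparison is the point: without deliberate follow-through, most of what was learned simply evaporates before it can affect on-the-job performance.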

Other forms of evidence are equally important. In addition to research from refereed journals, the book presents dozens of real-world learning executives describing their successes in broadening the conception of training and implementing the six disciplines. The authors relay wisdom from learning leaders at these and other organizations: Sony, Gap, 3M, Humana, BBC, Center for Creative Leadership, General Mills, Corning, Forum, University of Notre Dame, Honeywell, AstraZeneca, and Pfizer.

Evidence of the effectiveness of technology-based training follow-through is described using data from the powerful methodology of control-group designs. Graphs and text clearly illustrate the results. For example, page 128 conveys how “Use of a Follow-Through Management System Increased Managers’ Awareness of Their Direct Reports’ Development Goals from 40 Percent to 100 Percent.”

While the majority of books in our field fail to convey more than a few breadcrumbs of credible evidence, The Six Disciplines hits for the triple crown, utilizing refereed research, experience of real-world learning leaders, and data from control-group studies. In our field, it simply doesn’t get any better than this.

The Book’s Design

The book is well organized, with an introductory chapter, a summary chapter, and one chapter for each of the six disciplines described in the title. Each chapter ends with a nice twist: two lists of action points, one for “learning leaders” and one for “line leaders.” There are many design touches such as this that demonstrate that the authors are really serious about on-the-job performance. The book utilizes some valuable repetitions of key points. The text design makes reading a pleasure. Quotations are pithy and relevant. Examples are illustrative of the main points in the text.

I read every page of the book, so I can tell you with confidence that it is well written. There are hundreds of specific recommendations throughout the book. I found many insights that I hadn’t thought of—ideas that I will use in my work as a consultant, instructional-design strategist, and creator of training. The graphs and charts are clear and there are some very useful templates. For example, the first chapter concludes with the “Learning Transfer and Application Scorecard,” a 10-item questionnaire. It’s a powerful tool because—and this is my opinion not the authors’—most current training programs will fail miserably when measured by these questions. I’d bet that most training programs will have low scores on ALL 10 items of the scorecard.

I have two almost insignificant complaints about the book. First, the cover is uninspired. The book deserves better. Second, the six disciplines are shoehorned into starting with the letter “D” in a way that is more misleading than it should be. For example, the second “D” stands for Design the Complete Experience. The authors’ emphasis is on the complete experience, but the shorthand “Design” connotes the traditional instructional-design notion of design—a notion that is completely inadequate, as the authors argue persuasively in the actual text.

The Book’s Recommendations

The book is jam-packed with recommendations, so I’ll only convey a few of the specific recommendations here. You really ought to buy The Six Disciplines, read it, and share it with everyone you know who cares about doing training right. Here’s my short list:

  1. View training follow-up as part of every training intervention.
  2. Get learners’ managers involved before and after training.
  3. Evaluate your training programs to determine whether they’re working and to improve subsequent training.
  4. Before designing a training program, determine what learners will be doing better and differently after the program. Be clear about what evidence will be acceptable to determine success.
  5. Understand the business. Be proactive in suggesting training-and-development solutions. Check your understanding with line leaders.
  6. Utilize a technology-based training-follow-through system to drive learning application and accountability.
  7. Utilize evidence-based practices, including research-based instructional design and after-training evaluation.
  8. Avoid “dense-pack education—the tendency to cram every conceivable topic into a program of a few days.”
  9. Focus on creating transfer during all phases of training—while designing the training, while delivering it, and during follow-up.
  10. Consider using senior executives to teach leadership—it is one of the fastest-growing trends in executive education.
  11. During training, stop after each topic and ask participants questions that challenge them to think about applying what they know.
  12. Learners should develop “learning transfer objectives” and be prepared to work toward them while back on the job.
  13. Send learners’ objectives to the learners’ managers to increase follow-up application and accountability.
  14. Utilize Marshall Goldsmith’s “feedforward” techniques to help learners generate ideas for training application.
  15. Recognize that there are factors that decrease the likelihood that learners will put their learning into practice, and that the impact of these factors can be minimized only through a systematic follow-through process.
  16. Utilize reminders to facilitate memory and spur on-the-job application of training.
  17. Hold employees accountable for making effective use of the training they receive.
  18. Consider coaching as a complement to training, providing learners with coaches to increase the likelihood of energetic and appropriate application.
  19. Learning programs that “demonstrate sound, thorough, credible, and auditable evidence of results are able to garner additional investment; those that cannot are at risk.”
  20. Learning and development units within companies need to communicate their results to the organization using multiple communication attempts and various communication channels.

How do Your Learning Programs Rate?

As I mentioned earlier, the Fort Hill Company has developed a Learning Transfer and Application Scorecard (displayed on pages 10 and 11) that targets the most important and most leverageable characteristics that make training effective. Every training program ought to be measured with this scorecard. To give you an idea of how well your training stacks up, I’ve included three of the ten items. I changed the wording slightly to help you make sense of the items before you read the book. How well do your training programs do the following?

  • After the program, participants are reminded periodically of their post-learning objectives and of opportunities to apply what they learned.
  • Participants’ managers are actively engaged during the postprogram period. They review and agree on after-learning objectives, and expect and monitor the progress that learners are making in applying what they’ve learned.
  • The design of the learning program covers the entire process from initial invitation to attend, through the learning sessions, and through on-the-job application and measurement of results.

Summary

The Six Disciplines is the most important book written in our field in quite some time. It provides a comprehensive system to make training effective. Its radical new nugget of truth is its insistence on training follow-through. The book’s ideas are evidence-based and are consistent with the human learning system. The messages in the book have been tested and refined in the real world. Tools are available (for example, Fort Hill’s Friday5s follow-through management system) that make the recommendations actionable.

Training Follow-Through Systems

I am aware of two training follow-through systems, Fort Hill’s Friday5s, and ZengerFolkman’s ActionPlan Mapper. I have formally reviewed the ZengerFolkman product, but have yet to put my review of Friday5s on paper. Both are powerful programs. Friday5s may have an edge given its longer tenure in the marketplace and its ability to provide learning reminders, not just reminders about learning transfer objectives. My recommendation is that you test them for yourself.

Notes

My contact information is as follows. Will Thalheimer, PhD, is a learning consultant and researcher. He can be reached at 617-718-0067, and [email protected], and www.work-learning.com, and www.willatworklearning.com.

The Fort Hill Company is available at 302-651-9223, and [email protected], and www.forthillcompany.com.

Monday, 10 July 2006

Sexual Harassment Training Required in California

California has implemented a law that requires all managers to receive sexual harassment training at least once every two years, with new managers getting the training within six months of employment. An upcoming webinar on this issue, featuring the author of the amendment, is being offered.

While the law's requirements will create mediocre learning design (because people need more frequent reminders to maximize spontaneous remembering), the law is newsworthy as a potential omen for what may come in the training-and-development industry (and not just for sexual harassment training).

The law as written may have benefits because it is certainly better than nothing, but unfortunately the law repeats several mistakes endemic in our field:

  1. It utilizes a "butts in seats" standard.
  2. It assumes training will be sufficient.
  3. It doesn't provide for any testing (except seat butts).
  4. It doesn't assess performance follow-through at all.

The law does say:

The training and education required by this section is intended to establish a minimum threshold and should not discourage or relieve any employer from providing for longer, more frequent, or more elaborate training and education regarding workplace harassment or other forms of unlawful discrimination in order to meet its obligations to take all reasonable steps necessary to prevent and correct harassment and discrimination.

Employers who really care about minimizing sexual harassment will provide for "longer, more frequent, [and] more elaborate training."

Thursday, 25 May 2006

Biased Myers-Briggs (MBTI) Research Wanted

CPP, Inc., formerly known as Consulting Psychologists Press, has announced that it is offering grants for research on the Myers-Briggs Type Indicator.

This may seem commendable, but their research-grant program is biased. Here are the facts:

  1. CPP makes money by selling MBTI implementations, consulting, and paraphernalia.
  2. The MBTI (Myers-Briggs) is widely discredited by researchers. It is considered neither reliable nor valid. For example, see Pittenger, D. J. (2005). Cautionary Comments Regarding the Myers-Briggs Type Indicator. Consulting Psychology Journal: Practice and Research, 57, 210-221.
  3. The research grant program is biased toward research findings that support the MBTI. Here are some details:
    • CPP, a biased party, selects the grantees.
    • One of the criteria for selection is "advancement of the MBTI assessment."
    • Money is distributed only for research reports selected by CPP for the "Best Paper Awards."
  4. Instead of these regrettable procedures, CPP should form a body of unbiased reviewers, have criteria that don't push toward a confirmatory bias, distribute money for good proposals not "favorable" results, and form an unbiased committee to select the best papers.

This Research Grant Program (as outlined in the publicly available materials produced by CPP) is clearly designed to produce results that support CPP's financial interests and resurrect the flagging image of the MBTI. Statements in the proposal requiring researchers to "conform to the American Psychological Association's Ethical Principles of Psychologists" do little to overcome the biases built into the program. As the materials make clear, the intention is to provide comfort to CPP's clients. How else are we to interpret the following statement in CPP's research-grant announcement?

"Abstracts from the papers will be used by CPP to communicate results with its customers."

This type of biased research program is completely unacceptable. Not only does it have the potential to create biased information and lead to suboptimal or dangerous recommendations, but it also casts a shadow on fair-and-balanced research that might be used to guide learning-and-performance agendas.

If you'd like to share your thoughts with CPP, it appears that the person to write is available through this email address.

Monday, 13 March 2006

What Prevents the Use of Research

What prevents people in the learning-and-performance field from utilizing proven instructional-design knowledge?

This is an update to an old newsletter piece I wrote in 2002. Most of it is still relevant, but I've learned a thing or two in the last few years.

Back in 2002, I spoke with several very experienced learning-and-performance consultants who had each---in their own way---asked the question above. In our discussions, we considered several options, which I've flippantly labeled as follows:

  1. They don't know it. (They don't know what works to improve instruction.)
  2. They know it, but the market doesn't care.
  3. They know it, but they'd rather play.
  4. They know it, but don't have the resources to do it.
  5. They know it, but don't think it's important.

Argument 1.
They don't know it. (They don't know what works to improve instruction.)
Let me make this concrete. Do people in our field know that meaningful repetitions are probably our most powerful learning mechanism? Do they know that delayed feedback is usually better than immediate feedback? That spacing learning over time facilitates retention? That it's important to increase learning and decrease forgetting? That interactivity can be either good or bad, depending on what we're asking learners to retrieve from memory? One of my discussants suggested that "everyone knows this stuff and has known it since Gagne talked about it in the 1970's."

Argument 2.
They know it, but the market doesn't care.
The argument: Instructional designers, trainers, performance consultants and others know this stuff, but because the marketplace doesn't demand it, they don't implement what they know will really work. This argument has two variants: The learners don't want it or the clients don't want it.

Argument 3.
They know it, but they'd rather play.
The argument: Designers and developers know this stuff, but they're so focused on utilizing the latest technology or creating the snazziest interface, that they forget to implement what they know.

Argument 4.
They know it, but don't have the resources to use it.
The argument: Everybody knows this stuff, but they don't have the resources to implement it correctly. Either their clients won't pay for it or their organizations don't provide enough resources to do it right.

Argument 5.
They know it, but don't think it's important.
The argument: Everybody knows this stuff, but instructional-design knowledge isn't that important. Organizational, management, and cultural variables are much more important. We can instruct people all we want, but if managers don't reward the learned behaviors, the instruction doesn't matter.

My Thoughts In Brief

First, some data. On the Work-Learning Research website we provide a 15-item quiz that presents people with authentic instructional-design decisions. People in the field should be able to answer these questions with at least some level of proficiency. We might expect them to get at least 60 or 70% correct. Although web-based data-gathering is loaded with pitfalls (we don't really know who is answering the questions, for example), here's what we've found so far: On average, correct responses are running at about 30%. Random guessing would produce 20 to 25% correct. Yes, you've read that correctly---people are doing a little bit better than chance. The verdict: People don't seem to know what works and what doesn't in the way of instructional design.
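
As a back-of-the-envelope check on the chance-level comparison above (my own calculation, not part of the original quiz analysis), here is a short sketch. It treats a single person guessing on a 15-item quiz and assumes four or five answer options per item, which is where the 20 to 25% guessing baseline comes from.

```python
# Chance-level sketch for a 15-item quiz (illustration; option counts assumed).
from math import comb

N_ITEMS = 15

for options in (4, 5):
    p_guess = 1 / options
    # Probability that pure guessing yields at least 5 of 15 correct (about 30%+).
    p_score_30_plus = sum(
        comb(N_ITEMS, k) * p_guess**k * (1 - p_guess)**(N_ITEMS - k)
        for k in range(5, N_ITEMS + 1)
    )
    print(f"{options}-option items: chance level = {p_guess:.0%}; "
          f"a pure guesser scores 30% or better with probability {p_score_30_plus:.0%}")
```

In other words, an average score of about 30% sits uncomfortably close to what guessing alone can produce, which is exactly the point.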

Some additional data. Our research on learning and performance has revealed that learning can be improved through instruction by up to 220% by utilizing appropriate instructional-design methods. Many of the programs out there do not utilize these methods.

Should we now ignore the other arguments presented above? No, there is truth in them. Our learners and clients don't always know what will work best for them. Developers will always push the envelope and gravitate to new and provocative technologies. Our organizations and our clients will always try to keep costs down. Instruction will never be the only answer. It will never work without organizational supports.

What should we do?

We need to continue our own development and bolster our knowledge of instructional-design. We need to gently educate our learners, clients, and organizations about the benefits of good instructional design and good organizational practices. We need to remind technology's early adopters to remember our learning-and-performance goals. We need to understand instructional-design tradeoffs so that we can make them intelligently. We need to consider organizational realities in determining whether instruction is the most appropriate intervention. We need to develop instruction that will work where it is implemented. We need to build our profession so that we can have a greater impact. We need to keep an open mind and continue to learn from our learners, colleagues, and clients, and from the research on learning and performance.

New Thoughts in 2006

All the above suggestions are worthy, but I have two new answers as well. First, people like me need to do a much better job of communicating research-based ideas. We need to figure out where the current state of knowledge stands and work the new information into that tapestry in a way that makes sense to our audiences. We also have to avoid heavy-handedness in sharing research-based insights, because research is not the only means of moving us toward more effective learning interventions.

Second, I have come to believe that sharing research-based information like this is not enough. If the field doesn't build better feedback loops into our instructional-design-and-development systems, then nothing much will improve over time, even with the best information presented in the most effective ways.

Wednesday, 30 November 2005

Interview with Will Thalheimer

Recently, I had the honor of being interviewed by Karl Kapp, EdD, in a session sponsored by The E-Learning Guru, Kevin Kruse.

It was a fun interview, covering many wide-ranging issues in our industry and in learning research. Click to read more.

Tuesday, 29 November 2005

Why is Research Knowledge Not Utilized

This blurb is reprised from an earlier Work-Learning Research Newsletter, circa 2002. These "classic" pieces are offered again to make them available permanently on the web. Also, they're just good fun. I've added an epilogue to the piece below.

"What prevents people in the learning-and-performance field from utilizing proven instructional-design knowledge?"

Recently, I've spoken with several very experienced learning-and-performance consultants who have each---in their own way---asked the question above. In our discussions, we've considered several options, which I've flippantly labeled as follows:

  1. They don't know it. (They don't know what works to improve instruction.)
  2. They know it, but the market doesn't care.
  3. They know it, but they'd rather play.
  4. They know it, but don't have the resources to do it.
  5. They know it, but don't think it's important.

Argument 1.

They don't know it. (They don't know what works to improve instruction.)
Let me make this concrete. Do people in our field know that meaningful repetitions are probably our most powerful learning mechanism? Do they know that delayed feedback is usually better than immediate feedback? That spacing learning over time facilitates retention? That it's important to increase learning and decrease forgetting? That interactivity can be either good or bad, depending on what we're asking learners to retrieve from memory? One of my discussants suggested that "everyone knows this stuff and has known it since Gagne talked about it in the 1970's."

Argument 2.

They know it, but the market doesn't care.
The argument: Instructional designers, trainers, performance consultants and others know this stuff, but because the marketplace doesn't demand it, they don't implement what they know will really work. This argument has two variants: The learners don't want it or the clients don't want it.

Argument 3.

They know it, but they'd rather play.
The argument: Designers and developers know this stuff, but they're so focused on utilizing the latest technology or creating the snazziest interface, that they forget to implement what they know.

Argument 4.

They know it, but don't have the resources to use it.
The argument: Everybody knows this stuff, but they don't have the resources to implement it correctly. Either their clients won't pay for it or their organizations don't provide enough resources to do it right.

Argument 5.

They know it, but don't think it's important.
The argument: Everybody knows this stuff, but instructional-design knowledge isn't that important. Organizational, management, and cultural variables are much more important. We can instruct people all we want, but if managers don't reward the learned behaviors, the instruction doesn't matter.

My Thoughts In Brief

First, some data. On the Work-Learning Research website we provide a 15-item quiz that presents people with authentic instructional-design decisions. People in the field should be able to answer these questions with at least some level of proficiency. We might expect them to get at least 60 or 70% correct. Although web-based data-gathering is loaded with pitfalls (we don't really know who is answering the questions, for example), here's what we've found so far: On average, correct responses are running at about 30%. Random guessing would produce 20 to 25% correct. Yes, you've read that correctly---people are doing a little bit better than chance. The verdict: People don't seem to know what works and what doesn't in the way of instructional design.

Some additional data. Our research on learning and performance has revealed that learning can be improved through instruction by up to 220% by utilizing appropriate instructional-design methods. Many of the programs out there do not utilize these methods.

Should we now ignore the other arguments presented above? No, there is truth in them. Our learners and clients don't always know what will work best for them. Developers will always push the envelope and gravitate to new and provocative technologies. Our organizations and our clients will always try to keep costs down. Instruction will never be the only answer. It will never work without organizational supports.

What should we do?

We need to continue our own development and bolster our knowledge of instructional-design. We need to gently educate our learners, clients, and organizations about the benefits of good instructional design and good organizational practices. We need to remind technology's early adopters to remember our learning-and-performance goals. We need to understand instructional-design tradeoffs so that we can make them intelligently. We need to consider organizational realities in determining whether instruction is the most appropriate intervention. We need to develop instruction that will work where it is implemented. We need to build our profession so that we can have a greater impact. We need to keep an open mind and continue to learn from our learners, colleagues, and clients, and from the research on learning and performance.

Will's New Thoughts (November 2005)

I started Work-Learning Research in 1998 because I saw a need in the field to bridge the gap between research and practice. In these past seven years, I've made an effort to compile research and disseminate it, and though partly successful, I often lament my limited reach. Like most entrepreneurs, I have learned things the hard way. That's part of the fun, the angst, and the learning. 

In the past few years, the training and development field has gotten hungrier and hungrier for research. I've seen this at conferences where I speak: the research-based presentations are drawing the biggest crowds. I've seen it in the increasing number of vendors who are highlighting their research bona fides, whether they do good research or not. I've seen it recently in Elliott Masie's call for the field to do more research.

This hunger for research has little to do with my meager efforts at Work-Learning Research, though sometimes in my daydreams I like to think I have influenced at least some in the field---maybe even some opinion leaders. As a data-first empiricist, I have to admit the evidence is clear: my efforts often fly under the radar. Ultimately, this is unimportant. What is important is what gets done.

I'm optimistic. Our renewed taste for research-based practice provides an opportunity for all of us to keep learning and to keep sharing with one another. I've got some definite ideas about how to do this. I know many of you who read this do too. We may not---as a whole industry---use enough research-based practices, but there are certainly some individuals and organizations out there leading the way. They are the heroes, for it is they who are taking risks, asking for the organizational support most of us don't ask for, making a difference one mistake at a time.

One thing we need to spur this effort is a better feedback loop. If we don't go beyond the smile sheet, we're never going to improve our practices. We need feedback on whether our learning programs are really improving learning and long-term retrieval. Don't think that just because you are out there on the bleeding edge you're championing the revolution. You need to ensure that your efforts are really making things better---that your devotion is really improving learning and long-term retention. If you're not measuring it, you don't really know.

Let me end by saying that research from refereed journals and research-based white papers should not be the only arbiter of what is good. Research is useful as a guide---especially when our feedback loop is so enfeebled and organizational funds for on-the-job learning measurement are so impoverished.

It would be better, of course, if we could all test our instructional designs in their real-world contexts. Let us move toward this, drawing from all sources of wisdom, dipping our ladles into the rich research base and into the experiences of those who measure their learning efforts.