Work-Learning Research


Recommended Books

  • Turning Research into Results: A Guide to Selecting the Right Performance Solutions, by Richard E. Clark, Fred Estes
  • How People Learn: Brain, Mind, Experience, and School: Expanded Edition, by National Research Council, edited by John Bransford, Ann L. Brown, Rodney R. Cocking
  • Criterion-Referenced Test Development 2nd Edition, by Sharon Shrock, William Coscarelli, Patricia Eyres
  • Michael Allen's Guide to E-Learning, by Michael Allen
  • e-Learning and the Science of Instruction, by Ruth Colvin Clark, Richard E. Mayer
  • Efficiency in E-Learning by Ruth Colvin Clark, Frank Nguyen, John Sweller (2006)


Friday, 04 August 2006

Learning Styles Instructional-Design Challenge

I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

I've been suspicious about the learning-styles bandwagon for many years. The learning-style argument has gone something like this: If instructional designers know the learning style of their learners, they can develop material specifically to help those learners, and such extra efforts are worth the trouble.

I have my doubts, but am open to being proven wrong.

Here are the criteria for my Learning-Styles Instructional-Design Challenge:

  1. The learning program must diagnose learners' learning styles. It must then provide different learning materials/experiences to those who have different styles.
  2. The learning program must be compared against a similar program that does not differentiate the material based on learning styles.
  3. The programs must be of similar quality and provide similar information. The only thing that should vary is the learning-styles manipulation.
  4. The comparison between the two versions (the learning-style version and the non-learning-style version) must be fair, valid, and reliable. At least 70 learners must be randomly assigned to the two groups (with a minimum of 35 in each group completing the experience). The two programs must have approximately the same running time. For example, the time required by the learning-style program to diagnose learning styles can be used by the non-learning-styles program to deliver learning. The median learning time for the programs must be no shorter than 25 minutes.
  5. Learners must be adults involved in a formal workplace training program delivered through a computer program (e-learning or CBT) without a live instructor. This requirement is to ensure the reproducibility of the effects, as instructor-led training cannot be precisely reproduced.
  6. The learning-style program must be created in an instructional-development shop that is dedicated to creating learning programs for real-world use. Programs developed only for research purposes are excluded. My claim is that real-world instructional design is unlikely to be able to utilize learning styles to create learning gains.
  7. The results must be assessed in a manner that is relatively authentic: at a minimum, learners should be asked to make scenario-based decisions or perform activities that simulate the real-world performance the program teaches them to accomplish. Assessments that only ask for information at the knowledge level (e.g., definitions, terminology, labels) are NOT acceptable. The final assessment must be delayed at least one week after the end of the training. The same final assessment must be used for both groups. It must fairly assess the whole learning experience.
  8. The magnitude of the difference in results between the learning-style program and the non-learning-style program must be at least 10%. (In other words, the average of the learning-styles scores minus the average of the non-learning-styles scores must be at least 10% of the non-learning-styles average.) So, for example, if the non-learning-styles average is 50, then the learning-styles average must be 55 or more. This magnitude requirement ensures that the learning-styles program produces meaningful benefits. 10% is not too much to ask. (A sketch of how criteria 8 and 9 might be checked appears after this list.)
  9. The results must be statistically significant at the p<.05 level. Appropriate statistical procedures must be used to gauge the reliability of the results. Cohen's d effect size should be equal to .4 or more (a small to medium effect size according to Cohen, 1992).
  10. The learning-style program cannot cost more than twice as much as the non-learning-style program to develop, nor can it take more than twice as long to develop. I want to be generous here.
  11. The results must be documented by unbiased parties.
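
For concreteness, here is a minimal sketch, mine rather than part of the challenge itself, of how criteria 4, 8, and 9 might be checked once the two groups' final-assessment scores are in hand. It assumes Python with SciPy for the significance test; the function name and data layout are illustrative only.

```python
# Minimal sketch (illustrative only) of checking criteria 4, 8, and 9 of the
# Learning-Styles Instructional-Design Challenge against two sets of scores.
from statistics import mean, stdev

from scipy.stats import ttest_ind  # independent-samples t-test

def meets_challenge_criteria(styles_scores, control_scores):
    """Return True if the results satisfy the statistical criteria."""
    # Criterion 4: at least 35 learners completing in each group.
    if len(styles_scores) < 35 or len(control_scores) < 35:
        return False

    m_styles, m_control = mean(styles_scores), mean(control_scores)

    # Criterion 8: the learning-styles average must exceed the
    # non-learning-styles average by at least 10% of the latter
    # (e.g., a control average of 50 requires 55 or more).
    if (m_styles - m_control) / m_control < 0.10:
        return False

    # Criterion 9: statistically significant at p < .05 ...
    _, p_value = ttest_ind(styles_scores, control_scores)
    if p_value >= 0.05:
        return False

    # ... with Cohen's d of .4 or more (pooled standard deviation).
    n1, n2 = len(styles_scores), len(control_scores)
    pooled_sd = (((n1 - 1) * stdev(styles_scores) ** 2 +
                  (n2 - 1) * stdev(control_scores) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (m_styles - m_control) / pooled_sd >= 0.4
```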

To reiterate, the challenge is this:

Can an e-learning program that utilizes learning-style information outperform an e-learning program that doesn't utilize such information by 10% or more on a realistic test of learning, even if it is allowed to cost up to twice as much to build?

$1,000 says it just doesn't happen in the real world of instructional design. $1,000 says we ought to stop wasting millions trying to cater to this phantom curse.

Thursday, 01 June 2006

New Taxonomy for Learning Objectives

Let me propose a new taxonomy for learning objectives.

This taxonomy is needed to clear up the massive confusion we all have about the uses and benefits of learning objectives. I have tried to clarify this in the past in some of my conference presentations—but I have not been successful. When I get evaluation-sheet comments like, "Get real you idiot!" from more than a few people, I know I've missed the mark. SMILE

Because I don't give up easily—and because learning objectives are so vitally important—I'm going to give this another try. Your feedback is welcome.

The premise I'm working from is simple. Instructional professionals use learning objectives for different purposes—even for different audiences. Learning objectives are used to guide the attention of the learner toward critical learning messages. Learning objectives are used to tell the learner what's in the course. They are used by instructional designers to guide the design of the learning. They are used by evaluation designers to develop metrics and assessments.

Each use requires its own form of learning objective. Doesn't it seem silly to use the exact same wording regardless of the use or intended audience? Do we provide doctors and patients with the exact same information about a particular prescription drug? Do designers of computer software require the same set of goal statements as users of that software? Do creators of films need to have the same set of objectives as movie goers?

Until recently I have argued that we ought to delineate between objectives for learners and objectives for designers. This was a good idea in principle, but it still left people confused because it didn't cover all the uses of objectives. For example, learners can be presented with objectives to help guide their attention or simply to give them a sense of the on-the-job performance they'll be expected to demonstrate. Instructional designers can utilize objectives to guide the design process or to develop evaluations.

The New Taxonomy

  1. Focusing Objective
    A statement presented to learners before they encounter learning material—provided to help guide learner attention to the most important aspects of that learning material.
  2. Performance Objective
    A statement presented to learners before they encounter learning material—provided to help learners get a quick understanding of the competencies they will be expected to learn.
  3. Instructional-Design Objective
    A statement developed by and for instructional designers to guide the design and development of learning and instruction.
  4. Instructional-Evaluation Objective
    A statement developed by and for program evaluators (or instructional designers) to guide the evaluation of instruction.

I made a conscious decision not to include a "table-of-contents objective" despite the widespread use of this method for presenting learners with objectives. I can't decide whether this should be included. There's no direct research on this (that I've encountered), but there may be some benefit for learners in having an outline of the coming learning material. Your comments are welcome. I'm leaning toward incorporating this notion into the taxonomy because it is a strategy that I've seen in use. Maybe I'll call them "Content-Outlining Objectives" or "Outlining Objectives."

One of the clear benefits of this taxonomy is that it separates Focusing Objectives from the other objectives. These objectives—those presented to learners to help focus their attention—have been researched with the greatest vigor. And the results of that research are clear:

  1. Focusing objectives guide learner attention to the information in subsequent learning material that has been targeted by objectives, but they also take attention away from the information not targeted by objectives.
  2. Similarly, focusing objectives improve learning for the targeted information and hurt learning for the information not targeted.
  3. Prequestions are as powerful in creating this focusing effect as learning objectives, and they may be more powerful.
  4. The wording of the focusing objective or prequestion must specifically mirror the wording in the learning material. General or abstract wording doesn't cut it.
  5. Adding extra words, particularly words that specify the criteria of performance (à la Mager), will actually distract learners and hurt learning.

Thursday, 16 March 2006

March Madness and Productivity Loss

March Madness, three weeks of college basketball tournaments in the month of March, has been estimated to cost U.S. companies $3.8 billion in lost productivity.

Various estimates and commentary:

  • www.workfamily.com/MarchMadness.htm
  • www.stltoday.com/blogs/business-mound-city-money/2006/02/productivity-madness/
  • https://www.cpa2biz.com/Career/March+Madness+Workplace+Fun.htm

There can be upsides to such an energizing event as well, of course. What I wonder is whether any learning or organizational development initiatives might be wrapped around such events. Imagine the following:

  1. The company intranet portal offers March Madness links and also includes corporate advertising of key business initiatives, strategic messages, or even training opportunities.
  2. Maybe the company even makes the links unavailable except through this central March Madness portal.
  3. Managers initiate business-critical conversations in staff meetings after highlighting the latest results for the office pool.
  4. Managers utilize March Madness frenzy for reward and recognition.

Anyway, those are just some ideas off the top of my head. I'd love to get your more thoughtful ideas in the comments. Better yet, have you seen any real-world implementations? Have they been successful?

Bird Flu Epidemic and Learning

The Bird Flu epidemic is coming. Perhaps. If it does come, and if it's as bad as they say it might be, our lives, our families, our jobs, and our learning will all be disrupted. Here are some of the headlines:

  • The Red Cross says to stock 2 weeks of food.
  • Schools will be shut down for up to 3 months.
  • Employees will stay home from work.
  • Some businesses may lay off their workforces.
  • The food supply might be cut off temporarily.
  • Medical institutions may be overwhelmed.

NPR had a great radio segment on this. Check it out here.

The implications for business are almost incomprehensible: from world economic collapse, to laying off the workforce, to protecting employees, to enabling work from home, to utilizing training and development to limit the repercussions.

Training-Learning-and-Development could be vital in at least two ways: First, it could help mobilize and educate workers. Second, it could ramp up to provide extra learning services during the crisis. If workers can't work, they can learn. Certainly, businesses will want the workers to work, but if they can't, perhaps this is an opportunity for strategic, revolutionary organizational change. A time for reflection, learning, bonding, and helping others.

To learn more about the flu, you can check out Elliott Masie's compilation of sources.

Wednesday, 15 March 2006

Be Careful Using Experts as e-Trainers

I've been reading Richard E. Clark and Fred Estes' recently released book, Turning Research into Results: A Guide to Selecting the Right Performance Solutions. They recounted research that shows that an expert's knowledge is largely "unconscious and automatic" to them. In other words, experts have retrieved their knowledge from memory so many times that they've forgotten how they do this and how the information all fits together; the knowledge just comes into their thoughts when they need it. This is helpful to them as they use their expertise, but it makes it difficult for them to explain to other people what they know. They forget to tell others about important information and fail to describe the links that help it all make sense.

In the learning-and-performance field we often use experts as trainers. Clark and Estes suggest that when experts teach, their courses ought to be pilot tested to work out the kinks. In my experience as a trainer, I've found that the first few deliveries always need significant improvements. I learn by seeing blank stares, sensing confusion, receiving questions, and watching exercises veer off in the wrong direction. This has me thinking about synchronous instructor-led web-based training.

If it's hard to create fluent classes in face-to-face situations, it's going to be more difficult to do this over the web. We humans are hardwired to look in people's eyes and read their expressions. Should we avoid having experts teach our courses? Probably not. Only experts have the credibility and knowledge to teach best practices.

What does this insight say about using synchronous delivery for quick information transfer? It means that it may not be effective to hook our resident expert up to a microphone and have them start talking. If they're talking with other experts, they'll be fine. But we ought to be skeptical about our experts' ability to do ad-hoc sessions without practice.

How are our expert trainers going to get the practice and feedback they need to fix the problems their expertise creates? I'm sure you can create your own list of recommendations. Here's mine:

1. Teach the course in a face-to-face situation first to work out the major bugs. This should not be used as a substitute for realistic online pilot testing. A variation is to use focus-group rooms with the instructor behind a one-way mirror. The instructor will see how the audience reacts, but will have to use the e-learning platform to deliver the learning. If technology allows, perhaps a small number of learners can be the target of webcams that enable the instructor to see their progress.

2. Beta-test the course online (at least once or twice) with a small number of learners who are primed to give feedback. Make sure they are encouraged to register their confusion immediately and allow lots of extra time for the instructor, learners, and observers to write down their confusions, discomforts, and skepticisms.

3. Make sure the learning design provides lots of opportunities for the learners to register their confusion and ask for clarification, especially the first few times the course is run. This can involve some sort of questioning of the learners, but the questions should be meaningful, not just superficial regurgitations.

Thursday, 08 December 2005

Are Wikis Inherently Flawed?

Wikis are all the rage in the training and development industry, but are they really workable?

Wikipedia is the most popular wiki in the world. It compiles information when users add, modify, or delete entries. Wikipedia is intended to mimic an encyclopedia, but wikis have other uses. For example, the Learning 2005 conference used a wiki (and is still using a wiki) at www.learningwiki.com.

John Seigenthaler was recently wikied when someone edited his Wikipedia entry in a most unflattering way, describing him as involved in John F. Kennedy's and Robert Kennedy's assassinations. He was not. Now this wrong information has spread all over the web. Not only that, but "vicious, vindictive, almost violent stuff, homophobic, racist stuff" about him was later added to his entry. Seigenthaler has thoughtfully suggested that there are "incurable flaws in the Wikipedia method of doing things."

You can listen to Seigenthaler tell his own story along with the founder of Wikipedia, Jimmy Wales. It's a fascinating online interview by the host of NPR's "Talk of the Nation."

Wikipedia is changing its methods to minimize these types of issues, but the question is: will these methods be enough? Jimmy Wales states that "You should take Wikipedia with a grain of salt. I think you should take almost everything with a grain of salt, but in particular Wikipedia is definitely a work in process."

The underlying belief about wikis is that "all of us are smarter than a few of us." This is a comforting notion in theory, but it is just plain wrong in practice. The mediocre don't always understand enough to judge an expert's pronouncements. Groups of people often tend toward groupthink or mob psychosis. Powerful interests often control the public conversation and thus become the final arbiters of what is fact. Conspiracy theories often have ninety-nine lives.

Wikis, blogs, websites (indeed, all forms of communication) carry with them the possibility that the information conveyed is not true. The more widely some information is dispersed, the bigger the potential problems. The more our communication channels have validators who correct inaccuracies, the more we tend to move toward the truth. For example, the press has traditionally played a role in holding public officials to account and conveying the news to people. Competition, as between political parties, can sometimes surface truths. Peer policing, as academic researchers practice through refereeing mechanisms, offers a correcting mechanism. Credentialing standards and agencies control who gets into a field or who advances.

Sometimes having more people can bring more truth to light. There are recent cases where political bloggers have uncovered facts regarding scandalous actions that would otherwise have gone unnoticed. Reading a newspaper's letters to the editor is often quite enlightening, offering improvements and corrections to the regular writers' commentary.

In my work at Work-Learning Research, I have tried to track down myths that have led us astray in the learning-and-performance industry. By now you have probably seen my investigation of the notion that "people remember 10% of what they read, 20% of what they see, 30% of what they hear," etc. Read this and you'll see that it's not true.

In using wikis to promote learning and knowledge, consider doing the following:

  • Consider who will be able to add and/or edit the information. The higher the percentage of expertise in your population, the better. The lower the opportunities for personal gain, the less likely you'll get intentionally troublesome information.
  • Build in some validation methods. Build in some skepticism.
  • Consider not letting anyone post anonymously.
  • Consider forgoing the goal of knowledge creation or learning, and instead focusing on creating hypotheses, generating ideas for future consideration and judgment, and networking to increase informal-learning connections.
  • Consider building in some sort of assessment system on the value of entries, whether through community scoring, expert scoring, or openness about a person's posting history and background (a rough sketch follows this list).
  • Insist that each posting include a section entitled, "Why should anyone listen to me about this topic," or some such addendum.
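
To make that entry-assessment suggestion concrete, here is a rough sketch of one possible scoring scheme. Everything in it, the weights, the 0-to-5 scale, the class itself, is a hypothetical illustration, not a feature of any existing wiki platform.

```python
# Hypothetical sketch: score a wiki entry's credibility by blending community
# votes with expert ratings, weighting the experts more heavily. The weights
# and the 0-5 rating scale are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class WikiEntry:
    title: str
    community_ratings: list = field(default_factory=list)  # 0-5, any reader
    expert_ratings: list = field(default_factory=list)     # 0-5, vetted experts

    def credibility_score(self, expert_weight: float = 3.0) -> float:
        """Weighted average of all ratings; an unrated entry scores 0."""
        weighted_sum = (sum(self.community_ratings) +
                        expert_weight * sum(self.expert_ratings))
        total_weight = (len(self.community_ratings) +
                        expert_weight * len(self.expert_ratings))
        return weighted_sum / total_weight if total_weight else 0.0

# A popular but dubious entry: three enthusiastic readers, one skeptical expert.
entry = WikiEntry("Learning styles", community_ratings=[4, 5, 4],
                  expert_ratings=[1])
print(round(entry.credibility_score(), 2))  # 2.67: the expert pulls it down
```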

Wednesday, 07 December 2005

How Google Can Facilitate Learning

Google's mission is "to make the world's information universally accessible and useful."

Good for Google. But implied in this statement is that the world's information should be universally accessible and useful TO ACTUAL INDIVIDUAL HUMAN BEINGS.

This is a very important clarifier. Why? Because IF information is for the use of humans, it must be formulated and delivered in a way that aligns with the human learning system.

Here are some ideas for Google (and its competitors) to consider:

  • People store information in their heads (in their long-term memory systems).
  • People can sometimes access information in other people's heads. For example, my wife might spontaneously remind me of some romantic moment from when we first met, or I might ask her a question about sustainable-agriculture practices (one of her knowledge specialties) and she might tell me what she knows. Thus, there is (1) information from others' heads that is pushed to us and (2) information that we pull from their heads as well (don't visualize this).
  • People can store information intentionally in notes, documents, etc. Information can also be stored unintentionally. In either case, this type of storage has been referred to as "external memory" by research psychologists.
  • The information in each person's information storage system degrades with time and experience, and different items of information can degrade at different rates. This process is often called "forgetting." Forgetting is actually an adaptive mechanism because it enables us to access the information most critical to our current performances (in our day-to-day lives).
  • The internet is just one information storage system of importance to an individual person. In its present state, the internet is generally not as effective as an individual's personal storage system. At best, it is a different type of storage system.
  • For the internet and human memory, both storage AND retrieval are critical processes.
  • Information, no matter where it is stored, can be good information or bad. It can be attached to appropriate contextualizing information or inappropriate contextualizing information.
  • We might consider the following six information storage systems as critical to an individual's informational success:
    • their personal memory system
    • their external memory systems (intentional and unintentional)
    • the memory systems of their relatively-contiguous human associates
    • the internet
    • books, magazines, libraries (and all other formal knowledge not yet available on the internet)
    • their immediate surroundings and all the stimuli and cause-and-effect relationships inherent in that wonderful "stimulus swarm" (a term I first heard from the vocal vibrations of Ernie Rothkopf). Hidden in this reality is much information, if only we have the knowledge and experience to know how to parse it and make sense of it.

What Google (and its competitors) might do given the information above:

  • Help make the internet forget (or make the retrieval system mimic forgetting)
  • Create reminding systems (or individual learning-management systems, iLMSs) to help people maintain high-importance information in a highly accessible (easily retrievable) state, regardless of which storage system we're talking about. (A rough sketch of this idea appears after this list.)
  • Create a methodology to help people work with all these storage systems in a manner that is synergistic.
  • Develop powerful validation systems to help people test or vet their information so they can determine how valid and relevant it is.
  • Do all this in a way that is intuitively simple and easy to use.
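
To make the reminding-system idea a bit more concrete, here is a minimal sketch assuming a simple exponential (Ebbinghaus-style) forgetting curve. The decay rates and the 50% reminder threshold are arbitrary assumptions for illustration, not anything Google has built or published.

```python
# Minimal sketch of a reminding system built on an assumed exponential
# forgetting curve; decay rates and threshold are illustrative only.
import math

def retention(days_since_review: float, decay_rate: float) -> float:
    """Estimated probability that an item is still easily retrievable."""
    return math.exp(-decay_rate * days_since_review)

def items_to_remind(items, threshold=0.5):
    """Return the names of items whose estimated retention has dropped below
    the threshold. `items` holds (name, days_since_review, decay_rate)
    tuples; well-practiced, high-importance items get lower decay rates."""
    return [name for name, days, rate in items
            if retention(days, rate) < threshold]

inventory = [
    ("key client's name", 2.0, 0.05),     # practiced often: decays slowly
    ("emergency procedure", 10.0, 0.20),  # rarely used: decays quickly
]
print(items_to_remind(inventory))  # -> ['emergency procedure']
```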

Did I forget to mention that I am available to brainstorm ideas for a relatively modest fee? (I say modest because we're talking about the future of all human knowledge.) I do realize that this information (that I am available for a fee) is accessible on the internet. But it is better and more useful (for everyone, but especially for me) if this information is highly accessible in your long-term memory, and if you, particularly you folks at Google, utilize that information before you forget it.

Friday, 18 November 2005

Learning in the Citizenry

Learning is a many-splendored thing. Want evidence? Consider the overabundance of theories of learning. Greg Kearsley has a nice list. To me, this overabundance is evidence that the human learning system has not yet been lassoed and cataloged with any great precision. Ironic that DNA is easier to map than learning.

Being a political junkie, I'm fascinated with how a population of citizens learns about their government and the societal institutions of power. Democracy is rooted in the idea that we the citizenry have learned the right information to make good decisions. In theory this makes sense, while in practice imperfect knowledge is the norm. This discussion may relate to learning in the workplace as well.

Take one example from recent events. On September 11th, 2001, the United States was attacked by terrorists. The questions arose: Who were these terrorists? Who sent them? Who helped them? One particular question was asked: "Was Saddam Hussein (dictator of Iraq) involved?" I use this question because there is now generally accepted objective evidence that Saddam Hussein was not involved in the 9/11 attack in any way. Even President Bush has admitted this. On September 17th, 2003, Bush said, in answer to a question from a reporter, "No, we've had no evidence that Saddam Hussein was involved with September the 11th." Despite this direct piece of information, the Bush administration has repeatedly implied, before and after this statement, that the war in Iraq is a response to 9/11. We could discuss many specific instances of this, and we could argue about it, but I don't want to belabor the point. What I want to get at is how U.S. citizens learned the reality behind the question.

[Image: marked-up polling data from PollingReport.com]

Take a look at the polling data, which I found at PollingReport.com. I've marked it up to draw your eyes toward two interesting realities. First, look at the "Trend" data. It shows that we the citizens have changed our answer to the question over time. In September of 2002, 51% of Americans incorrectly believed that Saddam was personally involved in September 11th. Last month, in October of 2005, the number had dived to 33%. The flip side of this showed that 33% correctly denied any link between Saddam and 9/11 in October of 2002, while today the number is a more healthy 55% correct, but still a relatively low number. If we think in terms of school-like passing-grade cutoffs, our country gets a failing grade.

The second interesting reality is how different groups of people have "Different Realities" about what is true. You'll notice the difference in answering these questions between Republicans and Democrats.

These data encourage me to conclude or wonder about the following:

  1. Even well-established facts can engender wide gaps in what is considered true. Again, this highlights the human reality of "imperfect knowledge."
  2. Stating a fact (or a learning point) will not necessarily change everyone's mind. It is not clear from the data whether the problem is one of information exposure or information processing. Some people may not have heard the news. People who heard the news may not have understood it, they may have rejected it, or they may have subsequently forgotten it.
  3. Making implied connections between events can be more powerful than stating things explicitly. It is not clear whether this is also a function of the comparative differences in the number of repetitions people are exposed to. This implied-connection mechanism reminds me of the "false-memory" research findings of folks like Elizabeth Loftus. Are the Republicans better applied psychologists than the Democrats?
  4. Why is it that so many citizens are so ill-informed? Why don't (or why can't) our societal information-validators do their jobs? If the media, if our trusted friends, if our political leaders, if our religious leaders, if opinion leaders can't persuade us toward the truth, is something wrong with these folks, is something wrong with us, is there something about human cognitive processing that enables this disenfranchisement from objective reality? (Peter Berger be damned).
  5. I'm guessing that lots of the differences between groups depend upon which fishtank of stimuli we swim in. Anybody who has friends, coworkers, or family members in the opposing political encampment will recognize that the world the other half swims in looks completely different from the world we live in.
  6. It appears from the trend data that there was a back-and-forth movement. We didn't move inexorably toward the truth. What were the factors that pushed these swings?

These things are too big for me to understand. But lots of the same issues are relevant to learning in organizations---both formal training and informal learning.

  1. How can we better ensure that information flows smoothly to all?
  2. How can we ensure that information is processed by all?
  3. How can we ensure that information is understood in more-or-less the same way by all?
  4. How can we be sure that we are trusted purveyors of information?
  5. How can we speed the acceptance of true information?
  6. How can we prevent misinformation from influencing people?
  7. How can we use implied connections, as opposed to explicit presentations of learning points, to influence learning and behavior? Stories are one way, perhaps.
  8. Can we figure out a way to map our organizations and the fishtanks of information people swim in, and inject information into these various networks to ensure we reach everyone?
  9. What role can knowledge testing, performance testing, or management oversight (and the feedback mechanisms inherent in these practices) play in correcting misinformation?

Thursday, 17 November 2005

Priming the Learning Apparatus for Future Learning

Most of what we call "training" is designed with the intention of improving people's performance on the job. While it is true that much of training does not do this very well, it is still true that on-the-job performance is the singular stated goal of training.

But something is missing from this model. What's missing is that a learning intervention can also prepare learners for future on-the-job learning. Let's think this through a bit.

People on the job---people in any situation---are faced with a swarm of stimuli that they have to make sense of. Their mental models of how the world works will determine what they perceive. I've noticed this myself when I walk in the woods with experienced bird watchers. I hear birds, but can't see them, no matter how hard I look. Experienced bird watchers see birds where I see nothing. The same stimuli have different outcomes because the expert birders have superior mental models about where birds might locate themselves.

The same is true for many things. As a better-than-average chess player, I will understand the patterns of the pieces better than a novice will. Experienced computer programmers see things that inexperienced programmers do not. Experienced lawyers will understand the nuances in someone's testimony more than a novice lawyer.

Experience enables distinctions to be drawn between otherwise ambiguous stimuli. It enables people to perceive things that others don't perceive. It helps people notice what others ignore.

Learning can be designed to provide amazing-grace moments, helping those who were once blind to see. If we're serious about on-the-job learning, we ought to begin to build models of how to design formal learning to facilitate informal on-the-job learning.

Dan Schwartz, PhD (a learning psychologist at Stanford) has written recently about a concept called Preparation for Future Learning or PFL. Schwartz argues that generally poor transfer results may be due to the common practice of assessing what was learned but failing to assess what learners are able to learn. This makes a lot of sense given how complex the real world is, how learners forget stuff so quickly, and how much they learn on the job.

Schwartz and his colleagues are working on ways to improve future learning by using "contrasting cases" that enable learners to see distinctions they hadn't previously noticed. This concept might be used in formal training courses to prepare learners to see things they hadn't seen before when they return to the job. For example, a manager being trained on supervisory skills may be taught that some decisions require group input, whereas other decisions require managers to decide on their own. Cases of both types could be provided in training so that relevant distinctions will be better noticed on the job.

A different way to prepare learners for future learning is to prime them with questions. In my dissertation research, I included one experiment in which I asked college students questions about campus attractions. For example, I asked them what the statue "Alma Mater" was carrying. A week later, I surprised the students by asking them some of the same questions again. The results revealed that simply asking questions (even when no feedback was provided) improved how much attention learners paid to the items on which they were queried. Between the two sets of questions, learners apparently paid attention to the statue in ways they hadn't before. By being asked about an item, the learners were more likely to spend time learning about that item when they encountered it in their day-to-day walking around.

There are likely to be other similar learning opportunities, but the point is that we need ways to design our learning interventions to intentionally create these types of learning responses. I'm going to be thinking about this for a while. My hope is that you will too.

Perhaps these meager paragraphs have prepared you for future learning. SMILE.