Extension’s bias toward public value

Key to making the case for funding for Extension is our ability to explain why Extension–and not some other public or nonprofit organization–should provide programming aimed at improving conditions in the state. In other words, we need to answer the “Why Extension?” question. When I ask Extension professionals to name Extension’s strengths relative to other possible program providers, the first response is usually that Extension provides sound, unbiased, research-based programs. Case closed, right?

At a recent workshop for University of Wisconsin Extension’s Western Region, this question arose: Can we really say that Extension has no bias? We do not have a profit motive, like private sector service providers. And we do not have specific mandates, like many local government service providers. But can we say that our program content has no bias at all? Isn’t striving to improve conditions in the state a bias? Isn’t striving for public value a bias? Isn’t using scientific research as a base for programming a bias?

This discussion brought two things to the forefront for me. First, we need some language other than “unbiased” to describe Extension programming. “Motivated by the public good”? “Based on the best scientific knowledge”? “Designed to create public value”? I’m not yet sure what the answer is. Second, whatever descriptor we use, we need to ensure that Extension programming actually fits the descriptor. We need to be certain that we are doing whatever it is that separates Extension from other program providers.

Teaching public value to all types of learners

Last week I spoke at a workshop for grantees of the North Central Risk Management Education Center (NCRMEC). At the workshop, Karl Duley of University of Wisconsin Extension gave a presentation on meeting the needs of learners with different personality types, using the Myers-Briggs taxonomy. I wondered how well the “Building Extension’s Public Value” (BEPV) curriculum can be adapted for different kinds of learners. Below are a few of my own observations about how well the BEPV workshop–as I teach it–matches some of the learning preferences Karl described.

  • Extroverts (E-types) prefer thinking out loud, working with other people, and group activities. The BEPV curriculum includes many small group activities, so I think we may do a good job of reaching E-types.
  • Introverts (I-types) prefer quiet reflection and keeping their thoughts to themselves until they are comfortable. Karl demonstrated teaching to this preference by having us write down our answers to some of his questions, only sharing them later in the small group. In the BEPV workshop, when I introduce the different criteria for public sector action, I ask participants to think about the ways those criteria apply to their own programs. I can easily augment this by asking participants, after I explain each criterion, to write down the ways their program satisfies it. This would give I-types a chance to reflect before speaking up, and would give everyone a list of ideas to refer to during the small group exercise.
  • Sensing learners (S-types) prefer a practical approach to new material, providing skills and facts they can put to use right away. I think the worksheets, exercises, and examples in the BEPV workbook should serve this kind of learner well.
  • Intuitive learners (N-types) prefer seeing the big picture before details. Spending adequate time on the introductory BEPV material (the workshop learning objectives, what is public value?, what is the purpose of the public value approach?) and giving periodic reminders of that material throughout the workshop can help N-type learners keep track of the big picture.
  • Thinking learners (T-types) focus on objective truths, free from emotional distractions. Basing the public value approach on the (somewhat clinical) principles of public economics should be satisfying to these learners.
  • Feeling learners (F-types) feel comfortable taking into account people’s motives and personal values. One key objective of the BEPV workshop is to help learners see the value of their own program from the perspective of someone who is not a participant in that program. I ask them to “put themselves in the shoes of” that non-participant stakeholder and imagine what matters to that person. This exercise should be a cinch for the F-types!
  • Judging learners (J-types) want clear structure in the learning situation from the beginning. The BEPV workshop is carefully organized into modules, each with learning objectives and exercises. I think J-types will feel comfortable with the degree of organization in the curriculum.
  • Perceiving learners (P-types) prefer open exploration with limited structure. Hmm. Being a clear J-type myself, I may have designed a curriculum that doesn’t serve this type of learner very well. I need to think of ways to insert–into a highly structured workshop!–some unstructured time to allow for a more creative flow of ideas.

Have you ever tried to modify a curriculum to meet different learning styles? Did you use the Myers-Briggs taxonomy, or do you find a different approach more useful? Do you have suggestions for how to make the BEPV curriculum more learner-friendly?

Are we storytellers or statisticians?

A couple of weeks ago I gave a talk on “Building Public Value with Extension and Research” at the National Extension and Research Administrative Officers Conference in Madison, WI. I heard a question that echoed one that I once asked of an Extension legislative affairs officer: “When making the case for Extension funding to an elected official, is it more effective to tell personal stories about positive experiences with Extension, or to share statistics about the impact of Extension programs?”

The answer I got from the officer who visits regularly with state legislators was, “We need a lot of both!” He said that evidence of program impact is crucial for showing legislators that Extension is improving conditions in their districts. However, we make a stronger case when we can also “put a face” on those statistics with personal stories about Extension and, importantly, personal stories about how the improved community conditions have positively affected a constituent. So, it seems to me, our best case has three components: (1) evidence of program impact, (2) testimony from individuals whose lives were improved by their own participation in Extension programs, and (3) testimony from individuals who benefit from the improved conditions (environmental, social, economic, etc.) that Extension programs helped generate.

2010 National Extension and Research Administrative Officers’ Conference

On May 18 in Madison, WI, I will lead a breakout session at the National Extension and Research Administrative Officers’ Conference (NERAOC). I will present an overview–and the basic concepts–of the “Building Extension’s Public Value” workshop, and talk about how to make a case for funding for outreach, extension, and research. If you are planning to attend the conference, please join me at the 10:15 session.

Logically speaking about public value

Many of you use the University of Wisconsin Extension logic model to guide program development and evaluation. Below is my first attempt at mapping the elements of the logic model to a public value message.

[Figure: University of Wisconsin Extension logic model]

The “short-term” or “learning outcomes” in the logic model are a means to achieving the behavior changes and outcomes contained in the public value message. These learning outcomes lead the way to public value–and we must identify and measure them–but they are not the focus of the public value message. A skeptical stakeholder is unlikely to be persuaded of a program’s value by hearing that a participant learned or became aware of something. The stakeholder is concerned with what the participant actually did with that knowledge.

What I call “changes” in the public value message are called “intermediate” or “medium-term outcomes” in the logic model. What I call “outcomes” are the logic model’s “long-term outcomes,” or changes in conditions.

It seems to me that public value typically arises from a program’s long-term outcomes. In some cases, a program’s logic model will already include the outcomes that a stakeholder cares about (public value). In other cases, the public value exercise will tell us which additional outcomes we need to monitor–how we should extend the logic model–in order to substantiate our public value messages.

I believe that the public value approach must work hand in hand with program evaluation: it is through good program evaluation that we are able to make credible statements about our programs’ public value.