
Focus On Basics

Volume 3, Issue B ::: June 1999

Performance Accountability: For What? To Whom? And How?

by Juliet Merrifield
In everyday life, accountability means responsibility; it means being answerable to someone else for one's actions. We cannot, however, use the term without specifying accountability to whom and for what. In adult basic education (ABE), how we answer the question "to whom" depends a lot on our position in the system. Teachers may answer that they feel accountable to their students. Program directors may answer that they are accountable to their funders and staff as well as to students. State adult education offices may feel accountable to the governor, the legislature, other state agencies, and workforce development boards, as well as to taxpayers. In addition, no clear consensus exists about "for what" adult education is accountable. Where does the balance lie between providing services and delivering results? Is the main purpose increased literacy proficiency, or are more diffuse social outcomes the emphasis? Until recently, the focus has been on providing services, with little emphasis on the results or the impact of those services. In the last few years, a number of policy initiatives at state and federal levels have begun to shift the emphasis to delivering results, with services seen as the means to an end. But what the "end" should be is by no means clear.

I would like to suggest that developing performance accountability is not just technically challenging: it also challenges our values. The key issues do not have purely technical solutions. They require agreement on what is important to us, on what we want out of adult education. If they are to be resolved, they require involvement by the ABE field as a whole.

Adult basic education is facing serious demands from policy-makers and funders to be accountable for its performance. The 1998 Workforce Investment Act (WIA) requires that each state report on performance measures. The emphasis on results shifts attention from simple delivery of services to the outcomes of learning: learning gains measured on standardized tests or social and economic outcomes such as getting a job, getting off welfare, and children's school success.

The key issues in the development of performance accountability in adult education are:

  • What does good performance mean?
  • Do programs have the capacity to be accountable?
  • Are the tools commonly used for measuring and documenting performance adequate and useful?
  • Are accountability relationships in place to link ABE into a coherent system?

Good Performance

Accountability systems work best if stakeholders - those who have an interest in the outcomes of the system - agree on what success looks like. For adult basic educators, the heart of the matter is our concept of literacy. That concept has shifted over time from reading and writing text to functioning in society, from a simple dichotomy of illiterate/literate to multiliteracies. Brian Street characterizes two broad conceptual notions of literacy. The autonomous model conceives of literacy as a discrete and fixed set of skills, transferable from one context to another. The ideological model conceives of literacy as practices that are sensitive to social context and inherently associated with issues of power and access (Street, 1984).

Much recent research on multi-literacies suggests that there are multiple purposes for literacy and multiple goals and expectations for literacy education (Heath, 1983; Barton, 1994; Street, 1984, 1995; Lankshear, 1997; New London Group, 1996). In such an understanding, notions of success must also be multiple. A single definition of success - gaining the GED, for example, or getting a job - excludes learners who have different purposes.

Definitions of success should be negotiated among all the stakeholders: learners and practitioners as well as policymakers and funders. Although the legislative goals of the Workforce Investment Act reflect a majority among lawmakers, other stakeholders - including policymakers, program managers, teachers, and students - may focus on other purposes for adult education and look to other measures of good performance.

Next Steps: Agree on Performance

Practitioners can play a role in defining performance within their own states. The WIA requires that each state develop a plan of the performance measures it will use to track results, including but not limited to those required by the Act. Whether explicitly or implicitly, these measures will define what counts for the field. The challenge is to come to an agreement on performance that includes the full diversity of learner and societal purposes. Lessons from the literature and experience in education and other fields suggest states should:

  • Invest time and energy in agreeing on what performance means;
  • Involve stakeholders and seek consensus;
  • Reflect newer understandings of literacy and connect performance with real life; and
  • Acknowledge a variety of outcomes as acceptable performance, as a way of including the full diversity of learners and programs.

Capacity to be Accountable

Adult education is trying to develop a national accountability system without having developed the capacity of the service delivery system to document and report results (Moore & Stavrianos, 1995). Plenty of evidence documents the lack of valid, reliable, and useful data about performance (Young et al., 1995; GAO, 1995; Condelli, 1994). These studies suggest some of the most basic data are absent, incomplete, or of low quality.

When asked to report numbers, programs will indeed report numbers. But as the GAO report on adult education says, "the data the Department receives are of questionable value" (GAO, 1995, p. 33). This is not surprising, since staff in programs usually do not use the data, rarely see reports based on them, and see no one else placing any real value on them.

Performance accountability requires investment in the ability of local programs to collect, interpret, and use data to monitor how well they are doing. A number of states, such as Pennsylvania, Connecticut, and Arkansas, have already begun to develop their capacity for accountability. (For an overview of Pennsylvania's program, see Keenan's article.) They have consistently learned from their experiences that the key is to get buy-in from programs and practitioners from the beginning (Merrifield, 1998). They are also acutely aware of the problems of deciding what is counted, as well as how it is counted.

What is counted becomes what counts. Many examples exist of the hazards of counting the wrong things. A healthcare delivery system that emphasizes cutting the number of people on a waiting list for surgery ensures that people with minor needs get served first, because more operations for varicose veins than for heart bypasses can be performed in one day. The original performance standards of the Job Training Partnership Act (JTPA), an education and training program, emphasized the number of people placed in jobs within a specific time frame. This ensured that programs recruited the clients who were most qualified and therefore easiest to move into jobs quickly and cheaply (GAO, 1989).

Next Steps: Build Capacity

Two kinds of capacity - to perform and to be accountable - are linked (Merrifield, 1998). When programs take a learning organization approach with feedback loops, performance data can help them improve performance and increase accountability. Building the capacity to perform involves:

  • Increasing resources and focusing them on quality rather than quantity;
  • Providing staff development and training and technical support;
  • Using performance data for continuous improvement.

Building the capacity to be accountable involves ensuring that:
  • Accountability demands are commensurate with resources and capacity;
  • Users of measurement tools are engaged in their  development;
  • Staff training and support are provided;
  • Information is timely;
  • Improved performance is rewarded.

A variety of efforts are already underway to build capacity to perform and to be accountable. Teacher inquiry projects have involved individual teachers in examining their practice and identifying ways to change and develop (Smith & Lytle, 1993). Some programs, such as those described elsewhere in this issue, have been working on their capacity to use data for continuous improvement. Some states have begun efforts to build local program capacity for both performance and accountability. The National Reporting System (Condelli, 1998) will be providing training and support on how to use newly revised WIA-related performance measures. (See page 11 for more on the NRS.)

Accountability Tools

For accountability purposes, it is crucial that we collect data that are relevant, adequate, and important. To do so, we need tools - indicators and measures - that we believe in and use well. Indicators and measures are approximations of reality, not reality itself. They can be good, bad, or indifferent. An indicator that measures something unrelated to literacy learning - the number of brown-eyed learners, for example - is irrelevant. An indicator that measures something relevant - the prior learning that students bring, for example - but in an inadequate way, is dangerous. An inability to measure something important - affective changes in learners, for example - can be disastrous.

Some of our current accountability tools are inadequate: what we use to measure literacy gains is one example. Standardized tests are widely used. While such tests have their uses for placement purposes, their validity as measures of performance is questionable (Venezky, 1992). "The research literature raises questions about the validity of standardized tests... and local program staff have questioned the appropriateness of using these assessments to measure program results" (GAO, 1995, p. 24). As yet, however, few alternatives to standardized tests exist. Some programs are using various tools, such as portfolios, that allow learners to demonstrate their learning authentically (Literacy South, 1997), but so far these cannot compare learning between learners and across programs. Without external criteria or standards, authentic assessment will not meet the needs of accountability systems.

How we collect data for accountability is also important. Different approaches to data collection and analysis meet different purposes. A complete performance accountability system would include several approaches: monitoring, evaluation, and research would all have a place.

Monitoring can answer ongoing questions about day-to-day program operations. What kinds of students are being recruited? How long are they staying? What do they say they want from their learning experiences? How satisfied are they with the program? Monitoring is part of everyday management, providing a routine way for program staff to see how well the program is working.

Evaluation can answer particular questions about program operations at particular points of time. How are learners being served? Are they making progress on their learning goals? Is the program meeting quality standards? Evaluation may include a look at program-monitoring data. It may also involve gathering new data to answer specific questions. Surveys or focus groups are useful evaluation techniques.

Research can answer questions about associations, correlations, and meaning, and often takes a broader focus than one program. Research questions might examine: What are the benefits to individuals and society of participation in adult education? Which program designs are associated with different results? What kinds of resources are needed to support specific program designs? Research may be conducted by outside researchers or by practitioners themselves (Quigley & Kuhne, 1997).

Each of these accountability technologies illuminates different aspects of reality. They have different strengths and need to be used appropriately. Carrying them out involves scarce resources, so they should be applied carefully and economically to ensure that the data collected are both useful and used.

Next Steps: Develop New Measurement Tools

New approaches and tools for measurement are needed that are linked to performance. Performance assessment tools enable us to assess literacy practices. For accountability purposes, this more authentic assessment of literacy practices demands that we develop external standards or criteria against which individual student learning can be measured, and through which program performance can be assessed. Initiatives in performance assessment in countries such as Britain and Australia may provide useful models for measuring and assessing learning. We should use the full potential of research, evaluation, and monitoring technologies to meet the needs of different stakeholders.

Mutual Accountability

Underlying all the other issues in performance accountability for ABE is the question of accountability relationships. Traditional approaches to accountability echo Taylorist manufacturing systems, in which quality control checks at the end of the production line ensure that widgets meet product specifications and accountability runs only one way. Assessing outcomes at the end of the production process has its place in quality control systems, but increasingly businesses are turning to more participatory approaches to managing work processes and using production data for continuous improvement (Stagg, 1992).

High performance workplaces build in processes at each stage of production to monitor and improve performance. They involve workers in this monitoring. The business world is now utilizing concepts such as the learning organization: one that facilitates the learning of its members to transform itself continuously (Pedler et al., 1991). This approach is seen as a way of responding to changing environments and multiple demands. This kind of learning and transformation has to be shared and internalized: it cannot be imposed from the top (Stein, 1993). Accountability is shared or mutual.

In ABE, mutual accountability would engage members of the organization in creating a common vision, determining goals and customer expectations, and designing effective means of monitoring processes and results. Every member would be both accountable to others and held accountable by them. Learners, for example, would hold teachers accountable for providing learning opportunities that meet their needs. Teachers, in turn, would hold program directors and funders accountable for providing the resources they need to meet learner needs. These might include materials, space, training, and pay for lesson planning and assessment.

Spelling out relationships of mutual accountability reveals some that are overlooked in conventional accountability systems. Congress, for example, holds adult education programs accountable for providing effective and efficient services. But Congress should also be held accountable by programs, by learners, and by voters for identifying a social need, passing appropriate guiding legislation, and providing the resources needed to create a strong adult education system.

Learners should hold their teachers accountable. But programs should also hold learners accountable for taking learning seriously and for making an effort to participate fully.

Businesses that expect adult education to provide them with workers equipped with basic skills might be expected in turn to provide jobs for those workers, or to continue a workplace basic skills program when the grant runs out. Mutual accountability would require all the partners to honor their contracts.

An accountability system based in the concept of mutuality has several characteristics:

  • It is negotiated between the stakeholders in a process that engages all the players in clarifying expectations, designing indicators of success, negotiating information flows, and building capacity.
  • Each responsibility is matched with an equal, enabling right: the right to a program that meets one's learning needs with the responsibility to take learning seriously, for example.
  • Every player knows clearly and agrees to what is expected of them.
  • Every player has the capacity to be held and to hold others accountable.
  • Efficient and effective information flows enable all players to hold others accountable.

Inequalities of power and uneven access to information prevent the development of mutual accountability. Learners, for example, cannot become real stakeholders in mutual accountability until they have other ways to effect change beyond dropping out. They will only become part of the structure of accountability when they have real power to make choices. Some community-based programs encourage learner participation in management, with learner representatives sitting on boards and being involved in management decisions about the program. Many state-level adult learner organizations are working to address the inequalities in power and in access to information, and to strengthen the voice of adult learners in the system.

How information flows is also a central issue in mutual accountability. Without adequate access to information, stakeholders cannot hold others accountable. In traditional information flow designs, information is collected at the base and increasingly summarized for the purposes of different levels on the way up: from program to community, state, and national levels. In this simplistic model, information flows only one way: up the system to the state and national levels. Few people have either access to or the ability to use the data.

This model will not fit the needs of an accountability system that takes into account different performances and purposes and has mutuality as an underlying assumption. A more complex information model should allow information to be generated at all levels and to flow around the system, up, down, and across it, among and between different players who use it for specific purposes at specific times.

Next Steps: Develop Mutual Accountability

Reforming accountability requires moving from one-way, top-down lines of accountability to a mutual web of accountability relationships. To make this switch, we must:

  • Bring the full range of stakeholder groups into the process - including teachers and learners;
  • Provide support for stakeholders who have least access to information and power;
  • Increase information flows among and between all stakeholders and make the information transparent (accessible to all);
  • Develop learning organizations at the program and state levels that would emphasize learning and continuous improvement, shared responsibility, and engagement in monitoring results.

What Next?

To implement performance accountability well requires agreement on good performance, capacity both to perform and be accountable, new tools to measure performance, and a strong system of mutual accountability relationships. In the business world, high performance is associated with extensive changes in organizational practices, including a broadly understood vision and mission, flatter hierarchies with decision-making pushed as close to the shop floor as feasible, and participation at all levels of the organization in monitoring and improving performance. If ABE is to meet society's need for high performance, it too needs to change. But these changes cannot be implemented from the top alone. They will require federal and state government departments to consult with the field and with stakeholders. They need willingness to learn lessons from the past and from other countries. They demand a commitment of resources to building the capacity of the field. Above all, they call for the contributions of all players, practitioners and learners as well as policymakers and researchers.

References

Barton, D. (1994). Literacy: An Introduction to the Ecology of Written Language. Oxford: Blackwell.

Condelli, L. (1994). Implementing the Enhanced Evaluation Model: Lessons Learned from the Pilot Test. Prepared for the US Department of Education, Division of Adult Education and Literacy. Washington DC: Pelavin Research Institute.

General Accounting Office (GAO) (1989). Job Training Partnership Act: Services and Outcomes for Participants with Differing Needs. Report No. GAO/HRD-89-52. Gaithersburg, MD: US General Accounting Office.

General Accounting Office (GAO) (1995). Adult Education: Measuring Program Results Has Been Challenging. Report to Congressional Requesters. Washington DC: US General Accounting Office, September, GAO/HEHS-95-153.

Heath, S. B. (1983). Ways with Words: Language, Life and Work in Communities and Classrooms. Cambridge: Cambridge University Press.

Lankshear, C., with Gee, J. P., Knobel, M., & Searle, C. (1997). Changing Literacies. Buckingham, England: Open University Press.

Literacy South (1997). Phenomenal Changes: Stories of Participants in the Portfolio Project. Durham, NC: Literacy South.

Merrifield, J. M. (1998). Contested Ground: Performance Accountability in Adult Basic Education. NCSALL Reports #1. Boston: National Center for the Study of Adult Learning and Literacy.

Moore, M. T. & Stavrianos, M. (1995). Review of Adult Education Programs and Their Effectiveness: A Background Paper for Reauthorization of the Adult Education Act. Submitted to US Department of  Education. Washington DC: National Institute for Literacy.

New London Group (1996). "A pedagogy of multiliteracies: Designing social futures." Harvard Educational Review, Vol. 66 (1).

Pedler, M., Burgoyne, J., & Boydell, T. (1991). The Learning Company: A Strategy for Sustainable Development. London: McGraw-Hill Company.

Quigley, B. A. & Kuhne, G. W. (1997). "Creating practical knowledge through action research: posing problems, solving problems, and improving daily Practice." New Directions for Adult and Continuing Education, No. 73. San Francisco, CA: Jossey-Bass.

Smith, M. C. & Lytle, S. (1993). Inside/Outside: Teacher Research and Knowledge. New York: Teachers College Press.

Stagg, D. D. (1992). Alternative Approaches to Outcomes Assessment for Postsecondary Vocational Education. Berkeley, CA: National Center for Research in Vocational Education.

Stein, S. G. (1993). Framework for Assessing Program Quality, Washington DC: Association for Community Based Education (ACBE).

Street, B. (1984). Literacy in Theory and Practice. Cambridge: Cambridge University Press (reprinted 1995).

Street, B. (1995). Social Literacies: Critical Approaches to Literacy in Development, Ethnography and Education. London and New York: Longman.

Venezky, R. L. (1992). Matching Literacy Testing with Social Policy: What are the Alternatives? NCAL Policy Brief, Doc. No. PB92-1. Philadelphia, PA: National Center on Adult Literacy, University of Pennsylvania.

Young, M. B., Fleischman, H, Fitzgerald, N., & Morgan, M. A. (1995). National Evaluation of Adult Education Programs: Executive Summary. Prepared for US Department of Education, Office of the Under Secretary, by Development Associates Inc., Arlington, VA.

About the Author

Juliet Merrifield is now Director of the Learning from Experience Trust in England. She is an adult educator and researcher who worked in the United States for 20 years, and was the founding director of the Center for Literacy Studies in Tennessee.

Full Report Available

The research report upon which this article is based is available from NCSALL Reports for $10. For information on how to order this report, please click on NCSALL Reports.
