Office of Assessment
University of Wisconsin-Superior
Belknap and Catlin
P.O. Box 2000
Superior, WI 54880
Action Verbs are associated with Bloom's Taxonomy and are used in writing useful student learning outcomes. A list of examples is available.
Alignment: The process of linking content and performance standards to assessment, instruction, and learning in classrooms. One typical alignment strategy is the step-by-step development of (a) content standards, (b) performance standards, (c) assessments, and (d) instruction for classroom learning. Ideally, each step is informed by the previous step or steps, and the sequential process is represented as follows: Content Standards → Performance Standards → Assessments → Instruction for Learning. In practice, the steps of the alignment process overlap. The crucial question is whether classroom teaching and learning activities support the standards and assessments. System alignment also includes the link between other school, district, and state resources and the goals of the standards, i.e., whether professional development priorities and instructional materials are linked to what is necessary to achieve the standards (CRESST, n.d.).
Assessment: "an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. Assessment helps us create a shared academic culture dedicated to assuring and improving the quality of higher education" (Angelo, 1995, p. 7). "Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development" (Palomba & Banta, 1999).
Assessment for Accountability: Assessment of a unit (which could be a department, program, or entire institution) to satisfy stakeholders external to the unit itself. Examples: assessment to retain state approval or to renew accreditation by DPI, VSA, and others. Results are often compared across units. Always summative (Leskes, 2002, pp. 42-43).
Assessment for Improvement: Assessment that feeds directly, and often immediately, back into revising the course, program, or institution to improve student learning results. Can be formative or summative (Leskes, 2002, p. 42).
Assessment of General Education: Assessment that measures the institution-wide general education competencies agreed upon by the faculty. General education assessment is more holistic than program outcomes assessment because general education competencies are measured across disciplines rather than within a single discipline.
Assessment of Individual Students: Uses the individual student, and his or her learning, as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value-added, and used for improvement. Would need to be aggregated if used for accountability purposes. Examples: improvement in a student's knowledge of a subject during a single course; improved ability of a student to build cogent arguments over the course of an undergraduate career (Leskes, 2002, p. 43).
Assessment of Institutions: Uses the institution as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value-added, and used for improvement or for accountability. Ideally, institution-wide goals and objectives would serve as a basis for the assessment. Example: how well students across the institution can work in multicultural teams as sophomores and seniors (Leskes, 2002, p. 43).
Assessment of Programs: Uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value-added, and used for improvement or for accountability. Ideally, program goals and objectives would serve as a basis for the assessment. Example: how sophisticated a close reading of text senior English majors can accomplish (if used to determine value added, this would be compared to the ability of newly declared majors) (Leskes, 2002, p. 43).
Assessment Plan: "a written assessment plan is a statement of agreement about the measures and processes to be used in examining educational programs [and institutional performance]" (Palomba & Banta, 1999).
Baseline: A measurement taken before the introduction of an activity or intervention, compared with a measurement of the target behaviors taken on the same measure after the activity or intervention (Marlow, 2011).
Benchmark: A preset standard against which an outcome is measured. Example: 80% of the class will be able to demonstrate a competency at a satisfactory level.
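The arithmetic behind such a benchmark can be sketched in a short script. The scores, passing score, and 80% target below are invented for illustration, not drawn from this glossary:

```python
# Hypothetical example: check whether a class meets a preset benchmark.
# Benchmark: at least 80% of the class demonstrates the competency
# at a satisfactory level (here, a score of 70 or above).

def benchmark_met(scores, passing_score, target_rate=0.80):
    """Return True if the share of passing students meets the target rate."""
    passing = sum(1 for s in scores if s >= passing_score)
    return passing / len(scores) >= target_rate

class_scores = [72, 85, 90, 68, 77, 81, 95, 88, 79, 84]  # invented data
print(benchmark_met(class_scores, passing_score=70))      # 9 of 10 pass -> True
```

The benchmark itself is the fixed 80% target; the computed pass rate is simply compared against it.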
Bloom's Taxonomy: A multi-tiered model of classifying thinking according to six cognitive levels of complexity. Each level is associated with action verbs. In the context of student learning outcome assessment, these verbs help in writing SMART outcomes.
Competence: The individual's demonstrated capacity to perform, i.e., the possession of the knowledge, skills, and personal characteristics needed to satisfy the special demands or requirements of a particular situation.
Course-Embedded Assessment: Involves taking a second look at course assignments and activities that are primarily for grading students and using them also as a means of assessing student learning and program effectiveness (Palomba & Banta, 1999).
Criteria: Guidelines, rules, characteristics, or dimensions that are used to judge the quality of student performance. Criteria indicate what we value in student responses, products, or performances. They may be holistic, analytic, general, or specific. Scoring rubrics are based on criteria and define what the criteria mean and how they are used (CRESST, n.d.). Performance criteria help assessors maintain objectivity and provide students with important information about expectations, giving them a target or goal to strive for (New Horizons for Learning, 1995).
Data: Groups of information that represent the qualitative or quantitative attributes of a variable or set of variables; data can be viewed as the lowest level of abstraction from which information and knowledge are derived. In the context of student learning outcome assessment, data refer to the measurements of specific learning outcomes. Data can be used as evidence when applied to make and support a case.
Direct Assessment of Learning: Requires students to demonstrate their knowledge and skills as they respond to the assessment technique employed. Examples include classroom assignments, standardized tests, and portfolios. Direct assessment gathers evidence, based on student performance, that demonstrates the learning itself. Can be value-added, related to standards, qualitative or quantitative, embedded or not, using local or external criteria. Examples: most classroom testing for grades is direct assessment (in this instance within the confines of a course), as is the evaluation of a research paper in terms of the discriminating use of sources. The latter example could assess learning accomplished within a single course or, if part of a senior requirement, could also assess cumulative learning (Leskes, 2002, p. 43).
Evidence: In its broadest sense, anything that is used to determine or demonstrate the truth of an assertion. In the context of student learning outcome assessment, "evidence can embrace the results of both quantitative and qualitative approaches to gathering information, both of which may be useful in judging learning. At the same time, the term evidence suggests both the context of 'making and supporting a case' and the need to engage in consistent investigations that use multiple sources of information in a mutually reinforcing fashion. But to count as evidence of student learning outcomes, the information collected and presented should go beyond such things as surveys, interviews, and job placements to include the actual examination of student work or performance. As a consequence, assessment of student learning outcomes is most appropriately defined for accreditation purposes as the processes that an institution or program uses to gather direct evidence about the attainment of student learning outcomes, engaged in for purposes of judging (and improving) overall instructional performance" (Ewell, 2001).
Embedded Assessment: A means of gathering information about student learning that is built into, and a natural part of, the teaching-learning process. Can assess individual student performance or aggregate the information to provide information about the course or program. Can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style but is also assessed for advanced ability to locate and evaluate web-based information (as part of a college-wide outcome to demonstrate information literacy) (Leskes, 2002, p. 43).
External Assessment: Use of criteria (a rubric) or an instrument developed by an individual or organization external to the one being assessed. Usually summative, quantitative, and often high-stakes. Example: the CAPP and GRE exams (Leskes, 2002, p. 43).
Fact: In its broadest sense, verified information about past or present circumstances or events that is presented as objective reality. Interpretation of a fact in a particular context is necessary to give the fact meaning.
Feedback Loop: Assessment results provide feedback, leading to decisions that in turn lead to new goals and objectives for a given curricular or co-curricular program. The process is a continuous cycle: collecting assessment results, evaluating them, using the evaluations to identify actions that will improve student learning, implementing those actions, and then cycling back to collecting assessment results.
Formative Assessment: "is conducted during the life of a program (or performance) with the purpose of providing feedback that can be used to modify, shape, and improve the program (or performance)" (Palomba & Banta, 1999). The gathering of information about student learning during the progression of a course or program, usually repeatedly, to improve the learning of those students. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative (Leskes, 2002, p. 42).
Fundamental Questions: Useful in facilitating conversations about student learning and assessment activities. 1) How are your stated student learning outcomes appropriate to your mission, programs, degrees, and students? 2) What evidence do you have that students achieve your stated learning outcomes? 3) In what ways do you analyze and use evidence of student learning? 4) How do you ensure shared responsibility for student learning and for assessment of student learning? 5) How do you evaluate and improve the effectiveness of your efforts to assess and improve student learning? 6) In what ways do you inform the public and other stakeholders about what and how well your students are learning?
General Education: "General education is intended to impart common knowledge and intellectual concepts to students and develop them in the skills and attitudes that an organization's faculty believe every person should possess. From an organization's general education, a student acquires a breadth of knowledge in the areas and proficiency in the skills that the organization identifies as hallmarks of being college educated. Moreover, effective general education helps students gain competence in the exercise of independent intellectual inquiry and also stimulates their examination and understanding of personal, social, and civic values" (Higher Learning Commission, 2003). General education can be shaped to fit the organizational context.
Goals: "are used to express intended results in general terms. The term goal is used to describe broad learning concepts." Examples of learning concepts are effective writing skills; critical thinking; general knowledge of the natural world, the social world, and the world of arts and letters; and the knowledge and skills necessary to engage as an informed and involved citizen in a democratic society (Palomba & Banta, 1999).
Indicators: Identify more specifically how a student will demonstrate the knowledge and skills named in an outcome statement. Indicators are not means of assessment; they are the criteria established to assist in the specific measurement of learning. For example, for the outcome statement "Students will demonstrate the ability to write a critical essay," indicators (criteria) need to be established for what constitutes a "critical essay." Thus, the outcome statement would read: "Students will demonstrate the ability to write a critical essay by [identify specific indicators]" (Keene State College, 2003).
Indirect Assessment of Learning: "ask[s] students to reflect on their learning rather than to demonstrate it" (Palomba & Banta, 1999). Methods include surveys, interviews, and focus groups. An indirect assessment of learning gathers reflection about the learning or secondary evidence of its existence. Example: a student survey about whether a course or program helped develop a greater sensitivity to issues of diversity (Leskes, 2002, p. 42).
Learning Outcomes: Identify the knowledge and skills that students will be able to demonstrate. They answer the question: what knowledge and skills will students be expected to demonstrate? "To be most useful, outcome statements should describe, using action verbs, student learning or behavior rather than teacher behavior; use simple language; and describe an intended outcome rather than subject matter coverage. Care should be used to choose words that are not open to interpretation. Words like identify, solve, and construct are better than vague words such as understand and appreciate" (Palomba & Banta, 1999). Outcomes need to be SMART (i.e., specific, measurable, attainable, relevant, and time-bound). "Statements of intended educational (student) outcomes are descriptions of what academic departments intend for students to know (cognitive), think (attitudinal), or do (behavioral) when they have completed their degree programs, as well as their general education or 'core' curricula" (Nichols & Nichols, 2000).
"Liberal Education is an approach to learning that empowers individuals and prepares them to deal with complexity, diversity, and change. It provides students with broad knowledge of the wider world (e.g. science, culture, and society) as well as in-depth study in a specific area of interest. A liberal education helps students develop a sense of social responsibility, as well as strong and transferable intellectual and practical skills such as communication, analytical and problem-solving skills, and a demonstrated ability to apply knowledge and skills in real-world settings" (AAC&U, n.d.).
Five institutional learning goals adopted at UW-Superior in Spring 2010. They are the ability and inclination to think and make connections across academic disciplines; the ability and inclination to express oneself in multiple forms; the ability and inclination to analyze and reflect upon multiple perspectives to arrive at a perspective of one's own; the ability and inclination to think and engage as a global citizen; and the ability and inclination to engage in evidence-based problem solving.
Local Assessment: Means and methods that are developed by an institution's faculty based on their teaching approaches, students, and learning goals. Can fall under any of the definitions here except "external assessment," of which it is the antonym. Example: one college's use of nursing students' writing about "universal precautions" at multiple points in their undergraduate program as an assessment of the development of writing competence (Leskes, 2002, p. 43).
Measures: Broadly defined, the products and performances that can be used to assess student learning outcomes. Direct measures evaluate student work directly; exams, papers, projects, computer programs, musical performances, and interactions with a client are examples. Indirect measures include grades, graduation rates, the NSSE, and alumni surveys (Walvoord, 2004).
Objectives: "are used to express intended results in specific terms" (Palomba & Banta, 1999). Objectives identify what the program, course, and/or faculty will do. They answer the question: what should students learn?
Performance Assessment: "items or tasks that require students to apply knowledge to/in real world situations" (College of DuPage, 2000).
Portfolio: A collection of student work at different stages of development during a course or over a series of courses. It includes work from one course or from a discipline, with samples drawn from a variety of genres within that discipline, and typically includes the student's evaluation of his or her own work (College of DuPage, 2000).
Qualitative Assessment: Collects data that do not lend themselves to quantitative methods but rather to interpretive criteria (see the first example under "Standards") (Leskes, 2002, p. 43).
Quantitative Assessment: Collects data that can be analyzed using quantitative methods (see "Assessment for Accountability" for an example) (Leskes, 2002, p. 43).
Rubric: "a matrix that explicitly states the criteria and standards for student work. It identifies the traits that are important and describes levels of performance within each of the traits" (Walvoord, 2004, p. 81). "It translates informed professional judgment into numerical [or hierarchical] ratings on a scale" (Walvoord, 2004, p. 19). Examples are found in the AAC&U publication Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics (Rhodes, 2010).
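As a sketch of the "matrix" idea, a rubric can be represented as criteria (traits) mapped to described performance levels, with a rater's level choices translating into numerical ratings that sum to a score. The criteria, level descriptions, and ratings below are invented for illustration:

```python
# Hypothetical rubric for an essay: each criterion (trait) has described
# performance levels; a rater's chosen levels become numerical ratings.

rubric = {
    "thesis":       {1: "absent", 2: "present but vague", 3: "clear and arguable"},
    "evidence":     {1: "unsupported", 2: "some sources", 3: "discriminating use of sources"},
    "organization": {1: "disjointed", 2: "mostly logical", 3: "coherent throughout"},
}

def score(ratings):
    """Sum the rater's chosen level for each rubric criterion."""
    assert set(ratings) == set(rubric), "rate every criterion"
    return sum(ratings[c] for c in rubric)

ratings = {"thesis": 3, "evidence": 2, "organization": 3}  # one rater's judgment
print(score(ratings))  # 8 of a possible 9
```

The level descriptions are what keep the numerical ratings anchored to explicit criteria rather than to unstated impressions.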
SMART Outcomes: Specific, measurable, attainable, relevant, and time-bound outcomes.
Standards: Set a level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high-quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways. Examples: carrying on a conversation about daily activities in a foreign language using correct grammar and comprehensible pronunciation; achieving a certain score on a standardized test (Leskes, 2002, p. 42).
Summative Assessment: "is conducted after a program has been in operation for a while, or at its conclusion, to make judgments about its quality or worth compared to previously defined standards for performance" (Palomba & Banta, 1999). The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, it affects the next cohort of students taking the course or program. Examples: examining student final exams in a course to see whether certain areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines (Leskes, 2002, p. 42).
Value-Added Assessment: Conducted to determine the increase in learning that occurs during a course, program, or undergraduate education. Can focus either on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshman papers). Requires a baseline measurement for comparison (Leskes, 2002, p. 42).
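Because a value-added result is simply the change from a baseline measurement to a later measurement on the same measure, the comparison can be sketched in a few lines. The freshman and senior rubric scores below are invented for illustration:

```python
# Hypothetical value-added calculation: compare a cohort's scores on the
# same measure at baseline (e.g., freshman papers) and later (senior papers).

def value_added(baseline, followup):
    """Return the change in the cohort's mean score from baseline to follow-up."""
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(followup) - mean(baseline)

freshman_scores = [2.1, 2.4, 2.0, 2.5]  # invented baseline rubric scores
senior_scores   = [3.2, 3.5, 3.1, 3.4]  # invented follow-up scores
print(round(value_added(freshman_scores, senior_scores), 2))  # 1.05
```

Without the baseline measurement, only the absolute senior performance would be known, not the learning gained.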
Angelo, T. A. (1995). Reassessing (and defining) assessment. AAHE Bulletin, 48(3), 7-9.
Association of American Colleges and Universities (AAC&U). (n.d.). What is liberal education? Retrieved August 2, 2010, from http://www.aacu.org/leap/what_is_liberal_education.cfm
College of DuPage. (2000). Assessment at the College of DuPage. Retrieved August 2, 2010, from http://www.cod.edu/Dept/Outcomes/AssessmentBook.pdf
Ewell, P.T. (2001). Accreditation and Student Learning Outcomes: A Proposed Point of Departure (CHEA Occasional Paper). Washington, D.C.: Council for Higher Education Accreditation.
Higher Learning Commission. (2003). Commission Statement on General Education. Chicago, IL: Higher Learning Commission.
Leskes, A. (2002). Beyond confusion: An assessment glossary. AAC&U Peer Review, 4(2-3), 42-43.
Marlow, C.R. (2011). Research Methods for Generalist Social Work (5th ed.). Belmont, CA: Brooks/Cole.
National Center for Research on Evaluation, Standards, and Student Testing (CRESST). (n.d.). Glossary. Retrieved August 2, 2010, from http://www.cse.ucla.edu/products/glossary.html
New Horizons for Learning. (1995). Assessment Terminology: A Glossary of Useful Terms. Retrieved August 2, 2010, from http://www.newhorizons.org/strategies/assess/terminology.htm
Nichols, J.O. & Nichols, K.W. (2000). The Departmental Guide and Record Book for Student Outcomes Assessment and Institutional Effectiveness.
Palomba, C.A. & Banta, T.W. (1999). Assessment Essentials: Planning, Implementing, Improving. San Francisco, CA: Jossey-Bass.
Rhodes, T.L. (ed). (2010). Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics. Washington, D.C.: Association of American Colleges and Universities (AAC&U).
Walvoord, B.E. (2004). Assessment Clear and Simple. San Francisco, CA: Jossey-Bass.
Copyright © The Board of Regents of the University of Wisconsin System
University of Wisconsin-Superior is an equal opportunity educator and employer