Department and Program Assessment
Unit of Analysis
We seem to be bouncing back and forth between student-based, assignment-based, course-based, and program-based assessment. I can get my head around an assessment plan for my department that focuses on comps, but…. Help!
You are right. We bounce back and forth among several levels of assessment because all are important. It helps to clarify which level of assessment you intend while also understanding that these levels intertwine. You may find the following questions helpful in determining the focus of each level.
What are the different levels of assessment?
- In any given course, have the individual students fulfilled the learning goals that I have for this course?
- As a group, or on the whole, have the students in my course successfully met the student learning goals that are stated for this course?
- Does the structure of the assignments in my course help students display evidence that they have achieved the learning goals for this course?
- On the whole, as a group, do students completing the major/minor/program show evidence that they have achieved the program learning goals?
- Do graduating Carleton students display the characteristics that are embedded in our mission statement (including the six institutional learning outcomes)?
The Middle States Accrediting Agency has a useful table of example assessment measures, organized by level of assessment.
Jargon or Vocabulary
What is “ground-level” assessment?
Ground-level assessment refers to assessments embedded closest to where and when learning occurs; e.g., a classroom-based assessment used to determine whether students have achieved a particular skill level, such as in graph interpretation. At a program level, a yearly discussion of students' comps work would be an example of a ground-level assessment.
I thought I understood the difference between “formative” and “summative” assessment (that formative happens while students are learning within a course and summative happens at the end of the course) but now I’m confused about the use of the terms in program assessment.
You are right. For students, a formative assessment is designed to improve learning along the way (such as comments on a draft of a paper). The final grade on the paper is an example of a summative assessment at this level. You can think about a program evolving over several years in the same way a student "evolves" in learning throughout a single course. At the program level, sets of papers (or exam questions, projects, or presentations) from individual courses can be used as a formative assessment of how well students are achieving program-level goals. It is formative because it gives insight into student learning at the program level so faculty can discuss and modify curriculum and courses as deemed prudent. Please follow this link for more information about formative and summative assessment.
Remind me again of the difference between observable (direct) and inferable (indirect) forms of assessment.
Your department’s analysis of student comps projects relative to your learning outcomes is a direct measure: it is observable. When a student answers a survey or questionnaire asking for feedback on the comps process, the survey response is an indirect, or inferable, assessment. The best assessment programs contain, at a minimum, direct measures of learning. Indirect measures can augment your understanding of the learning that has occurred in valuable ways, but they do not take the place of direct assessment.
Intersection between institutional and program level assessment
Do we need to have learning outcomes tied to college goals? Do all of the college goals have to be reflected in our department outcomes?
We do have a set of institutional learning outcomes (six of them) and have begun ECC discussions on a plan to meet these goals. [link to Draft Institutional Assessment Plan] Because Carleton takes its liberal arts mission seriously, departments and programs are also engaged in a cooperative and complementary effort to fulfill these broad institutional learning goals (at the same time as they are teaching a narrower set of discipline- or topic-based habits of mind). As a result, we expect that many department/program learning outcomes will align with college-level learning outcomes. Of course, each department/program also has its own unique set of objectives, so not all department/program outcomes will map onto college-level outcomes.
Assessment processes and plans
How do the “elements of an assessment plan” differ from a list of learning outcomes?
The learning outcomes are the targets we are trying to hit; e.g., the English department may desire that its students develop skills of close reading of texts. An assessment plan describes the process we will use to a) find out whether we are achieving our goals and b) organize activities to reach that end. In the example above, the English department would want to look at evidence that could show whether students had developed close reading skills. Further, they would want to organize department conversations about that evidence, deciding whether it appeared the department was successfully reaching this goal and, if not, deciding on an appropriate curricular response.
What exactly does our department need to do? Is it better to start with a full list of learning goals or to pick one and get its assessment process really right?
The primary goal is to ensure that each department and program develops a deliberate and transparent process for reflecting on how well students are achieving department/program learning goals. The best way to move toward that end will depend on the department, but several points are worth remembering.
- Assessment is a process, not an end-point. It is valuable to move forward even if that process is imperfect. You can always do things differently the next time you examine a given learning goal, building on lessons learned the first time around.
- The whole point of assessment is to provide your department/program with information that can be used to improve student learning experiences. We expect that, over a 10-year period, each department will develop a sustainable process for assessing all of the learning outcomes it has articulated. At any given point in time, the department may pay more attention to one outcome and less to others.
- Ultimately, assessment is successful if it informs educational practice. Because you and your colleagues are unlikely to change teaching practice without solid evidence, it is often better to take on a smaller, more manageable task and do it well than to try to do everything all at once.
The suggested methods of assessment seemed too vague. What are some specific methods that we can apply?
The following are options, not a complete listing of assessment methods; your own creativity might be very useful in developing methods or measures that fit your needs. You may also find helpful information about assessment strategies, as well as general information about assessment, at this link.
- Rubrics (criteria-based rating scales):
- Rubrics for Papers
- Oral Presentation Rubrics
- Behavior Observation Rubrics (e.g., working in groups)
- For example: Design a rubric that identifies the dimensions you expect to see in a particular type of paper. (For instance, the History Department may expect students to be able to weave evidence drawn from primary sources into an argument.) For each dimension, describe what it looks like when students produce adequate, proficient, and exemplary work. Use the rubric to score a sample of papers written in several courses. (Note: As a variant, you might ask students to turn in a portfolio of papers as the basis of your evaluation.)
- For example rubrics, see the Draft Institutional Assessment Plan.
- Common Test Question or Question Type: Your department/program may agree that all professors who teach a given class (or some subset, if you teach many sections of the relevant course) will include a particular question, or a question of a given type, on the midterm in one year. (For instance, the Economics Department might expect students to be able to employ a basic model of supply and demand.) When the exams are turned in, photocopy student responses to the question. At the end of the year, the department/program can evaluate the student responses, noting what fraction are adequate, proficient, or exemplary.
- Standardized Tests: Some disciplines have designed standardized content tests that evaluate students’ content knowledge.
- Institutional Research Data: The Institutional Research and Assessment (IRA) office collects a wide range of data; examples include survey data indicating self-reported competency in various critical thinking tasks. IRA can also help pull information from the institutional database to evaluate success in a program based upon certain characteristics (e.g., test scores, prerequisites, course-taking patterns). It can also pull data from alumni surveys and from institutional and national databases to examine alumni employment and/or graduate school attendance. Depending on your department/program learning outcomes, some of these data may provide useful indirect evidence of success. See the IRA web pages for a growing set of resources that may support your needs.
- Student Interviews: Interviews of graduating seniors, new majors, or even students who took intro courses and then chose not to major can be rich sources of information. Most interviews constitute indirect assessment; however, interviews designed to reveal student learning (e.g., foreign-language oral exams) can be direct measures. Here are some suggested tips and readings for those interested in conducting student interviews.
Can we use group-based work as an element of program assessment?
Yes. But remember that our goal is to give every student a high-quality experience. Because some group-work assignments can be completed by a single member of the group, your department/program will want to take care in interpreting information based on group work.
If you have other questions, please email Andrea Nixon.