
Assessment Coordinator

Assessment of program learning outcomes is coordinated by the academic program or department. Some programs have identified an assessment coordinator to guide the process and/or complete the annual assessment report. The exact role and responsibilities are determined by the program. This webpage offers general support for this role.

The learning outcomes assessment process at U of I has three steps:

  • Step One: Faculty Contributions of Assessment Data. Deadlines: spring data due Labor Day; fall data due Jan. 31.
  • Step Two: Annual Student Learning Assessment Report. Deadline: Oct. 31.
  • Step Three: Internal Review and Feedback (Meta-Assessment). College review of the annual report due Dec. 31; institutional review conducted during the following spring semester.

The Assessment Coordinator role may include the following tasks:

Developing and Coordinating the Assessment Plan

A minimum of one program learning outcome should be assessed each year, with all program learning outcomes assessed over a three-year period. The assessment coordinator may be tasked with deciding what the assessment cycle looks like for the program. The schedule may be the same for all majors/degrees in the department or may vary by specific ‘program of study’ (major/degree level). Faculty will need to know which program learning outcome they will contribute data for, and communicating the assessment cycle in advance helps them plan.

Some programs are required to collect data on all outcomes for a programmatic accreditor or other requirement. However, for the purposes of the University of Idaho’s systematic assessment process, programs are afforded the flexibility to develop a plan that is manageable and encourages meaningful engagement with the data being collected. Collecting data is the first step in assessment. Assessment is complete when the program has also analyzed and discussed the data and incorporated the findings into action and curricular changes. The assessment cycle should allow time for faculty to engage in all aspects of assessment each year.

For each program learning outcome being assessed, the assessment plan should detail where in the curriculum or student experience students will receive instruction in support of the outcome and where in the curriculum they should have finally mastered the full outcome. These touchpoints are opportunities to assess learning of the outcome. A solid plan assesses students throughout the curriculum, giving faculty the opportunity to contribute data at multiple levels. The U of I Assessment Planning Worksheet can help you develop your plan.

  • Baseline/Diagnostic – assessing students’ knowledge or skill when they enter the program, before any instruction has occurred, or during a 100-level course open to non-majors. This could also serve as a pre-test, with a post-test administered later in the curriculum. This data helps a program understand what students already know and provides a basis for measuring growth.
  • Formative – assessing students’ knowledge or skill as they progress through the program to ensure they are making adequate progress toward mastery. For this data, faculty are not yet looking for mastery of the learning outcome; they are looking for evidence that students are developing or making suitable progress toward mastery. Students may “meet” the expectation with a lower score on an assignment or by achieving a “developing” rating on a rubric (see the sketch after this list).
  • Mastery/Summative – assessing student mastery of the learning outcome. For this data, faculty are looking for mastery. They may assess the full learning outcome (example: all forms of communication) or a specific component (example: written communication only). A senior capstone project may be evaluated for student achievement of the full outcome statement, or several upper-level courses may each evaluate a component of the outcome to collectively cover the whole statement. Students have achieved the outcome or they haven’t.
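
To make these levels concrete, the following minimal Python sketch shows how the same 1-4 rubric rating can “meet” the expectation at one level but not another. The thresholds here are hypothetical, chosen for illustration; they are not U of I policy, and each program sets its own.

    # Hypothetical level-specific expectations on a 1-4 rubric
    # (1 = beginning, 2 = developing, 3 = meeting, 4 = exceeding).
    # These thresholds are examples only; each program sets its own.
    LEVEL_THRESHOLDS = {
        "baseline/diagnostic": 1,  # any demonstrated starting point is recorded
        "formative": 2,            # a "developing" rating shows adequate progress
        "mastery/summative": 3,    # students must reach "meeting" or above
    }

    def met_expectation(rubric_rating: int, level: str) -> bool:
        """Return True if the rating satisfies the expectation for the level."""
        return rubric_rating >= LEVEL_THRESHOLDS[level]

    # A rating of 2 ("developing") meets the formative expectation
    # but does not yet demonstrate mastery.
    print(met_expectation(2, "formative"))          # True
    print(met_expectation(2, "mastery/summative"))  # False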

The following should be considered when developing this plan:

  1. Definitions of rigor
    Definitions of rigor help faculty judge student work or performance. Shared definitions mean faculty apply the same standards across the curriculum, and they provide transparency to students, faculty and external reviewers about what is expected at each level of learning: what students should demonstrate when they enter the program (baseline/diagnostic), as they progress through the program (formative/developing) and when they have finally “mastered” the program learning outcome (mastery/summative). When faculty assess students using the same definitions, the data will be more informative about trends and progress.
  2. Intentionally crafted and sequenced learning activities
    Students should have plenty of opportunity to learn and practice what is required of them before they are assessed for mastery of the program learning outcome.
  3. Mapping of learning outcomes
    Knowing how course learning outcomes support mastery of the program learning outcomes is important and helpful. Faculty already assess course learning outcomes as part of the course; if a course learning outcome has been identified as aligning with a program learning outcome, that course-level assessment can be used as evidence for the program learning outcome. This mapping also helps the program understand where students are learning the content and skills needed to master the program-level outcome. It may be helpful to regularly review the course learning outcomes on syllabi and map which courses introduce knowledge, reinforce knowledge, or assess mastery of the program learning outcomes. The resulting map can be used to plan for different levels of assessment and to identify any gaps.
  4. Multiple methods of assessment
    Each program learning outcome should have multiple methods of assessment. Methods are distinct from the levels of assessment described above. The assessment plan should include both direct and indirect methods of assessment.
    • Direct Measure – the student is evaluated for something they have produced or performed. This requires identifying an action the student will engage in which will be evaluated. Examples include assignments, projects, exams and performances. Faculty judge the student’s performance and assign the achievement level or score. Direct measures can be found at all levels of assessment.
    • Indirect Measure – the student is evaluated based on their opinion/satisfaction, participation, grades, or data not collected from a student’s demonstration of the program learning outcome specifically. Examples include surveys (how well did the student feel they learned something or how satisfied are they with the instruction of something), course grade, or number of publications. Indirect measures can be found at all levels of assessment.
    • External Measure – the student is evaluated on their performance of the program learning outcome outside of the program itself. Examples include a subject-specific exam, licensure exam or employer satisfaction/feedback of graduates. Someone external to the academic program judges the student’s performance and assigns the achievement level or score. External measures are generally at the mastery level as they assess concrete criteria. The student has achieved it or hasn’t.
  5. Clear linkage between learning outcomes and assessment measures
    Someone who looks at the assignment, project, or exam used to evaluate the student on the program learning outcome should understand why it was chosen and feel confident that the data collected is evidence the student accomplished the outcome. Programs need good data so they can make good decisions that support continuous improvement; the assessment should not seem random. Additionally, if the entire assignment score is being used, the entire score should reflect the program learning outcome only; otherwise, a sub-score might make more sense (see the sketch after this list). Just as faculty intentionally design methods for evaluating students on course learning outcomes, assessments for program learning outcomes should be intentionally designed and clearly linked.
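
To illustrate the sub-score idea in item 5, here is a brief Python sketch; the rubric criteria and point values are hypothetical:

    # Hypothetical rubric for a written assignment; only some criteria
    # align with the program learning outcome being assessed.
    criteria_scores = {
        "thesis and argument": 18,  # aligned with the written-communication outcome
        "organization": 17,         # aligned
        "citation formatting": 9,   # not aligned
        "timeliness": 5,            # not aligned
    }
    criteria_max = {
        "thesis and argument": 20,
        "organization": 20,
        "citation formatting": 10,
        "timeliness": 5,
    }
    aligned = ["thesis and argument", "organization"]

    # Report only the aligned sub-score as outcome evidence; the full
    # assignment score mixes in criteria unrelated to the outcome.
    sub_score = sum(criteria_scores[c] for c in aligned)
    sub_max = sum(criteria_max[c] for c in aligned)
    print(f"{sub_score}/{sub_max}")  # 35/40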

All faculty are asked to engage in assessment of program learning outcomes. In general, faculty are asked to contribute assessment data toward one learning outcome from one course each semester. However, the assessment coordinator may make some exceptions based on the assessment plan and/or ask a faculty member to contribute to the program’s assessment in another way (example: leading analysis or another task). General guidelines are intended to ensure U of I is engaged in a systematic process and to support faculty engagement in program assessment efforts. Programs with an established assessment plan may provide further guidelines to their faculty.

In addition to contributing assessment data, faculty should also be engaged in discussion of the program’s assessment findings. If a single person is tasked with analyzing the data collected, the findings should be shared with all stakeholders in an inclusive and respectful manner. Additionally, decision-makers should be aware of the findings and faculty recommendations so the data can be used to make improvements to the curriculum and program.

Beginning Fall 2021, data on program learning outcomes should be reported at the individual student level, similar to how each student receives an individual grade. A quantitative score for how well the outcome was performed should be collected for each student in the program (when possible). The score can be a percentage or points, and the total points are customizable. For example, an assignment or rubric could be worth 100 points, or performance could be rated on a 1-4 scale (1 = beginning, 2 = developing, 3 = meeting, 4 = exceeding). Prior to Fall 2021, each program had the option to report on overall performance of students (example: 80% met the outcome).

Information needed for each assessment (see the example of how this data is entered under “How should the data be reported?”):

  • List of students with their scores
  • Name of the assessment (assignment, exam, project name, etc.)
  • Level of assessment (baseline/diagnostic, formative, or mastery/summative)
  • Scale type (pass/fail or standard scale)
  • Scoring type (whether the scores are percentages or points)
  • Threshold (what score is considered to demonstrate the student has “met” the outcome)
  • Faculty comments on the assessment (optional)
  • Evidence or documentation such as a copy of the assignment, or an example of student work (optional)
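
As a rough sketch of how this information might be organized before entry, the Python below models a single assessment record and the percent-met summary programs reported before Fall 2021. All field names are hypothetical and do not reflect Anthology's actual data model:

    from dataclasses import dataclass, field

    # Hypothetical record mirroring the list above; the fields are
    # illustrative only and do not reflect Anthology's data model.
    @dataclass
    class AssessmentRecord:
        assessment_name: str   # assignment, exam, or project name
        level: str             # baseline/diagnostic, formative, or mastery/summative
        scale_type: str        # "pass/fail" or "standard scale"
        scoring_type: str      # "percentage" or "points"
        threshold: float       # score that counts as having "met" the outcome
        scores: dict[str, float] = field(default_factory=dict)  # student -> score
        comments: str = ""     # optional faculty comments

        def percent_met(self) -> float:
            """Share of students scoring at or above the threshold."""
            met = sum(1 for s in self.scores.values() if s >= self.threshold)
            return 100 * met / len(self.scores) if self.scores else 0.0

    # Example: three students scored on a 100-point assignment with a
    # threshold of 70; two of the three meet the outcome.
    record = AssessmentRecord(
        assessment_name="Capstone Research Paper",
        level="mastery/summative",
        scale_type="standard scale",
        scoring_type="points",
        threshold=70,
        scores={"student_a": 85, "student_b": 72, "student_c": 64},
    )
    print(round(record.percent_met(), 1))  # 66.7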

The University of Idaho’s assessment management system, Anthology, offers various tools for collecting assessment data from faculty because many programs are unique or have an established process that needs support. To reduce confusion, assessment coordinators should communicate to faculty how the data should be reported, including which Anthology tool and/or method to use. More information on options can be found on this webpage in the Anthology Assessment Tools section.

In most cases, assessment data is entered by completing the templates shown in the steps below (minimum sections/clicks are highlighted):

Entering Assessment Data - Step 1

Entering Assessment Data - Step 2

Entering Assessment Data - Step 3

Note: Most options require coordination by the program and will not work for faculty if they have not been set up. Additionally, once the assessment coordinator chooses the method for reporting, all contributors will use the same method for that semester and that program learning outcome. However, you can use a different method for each program learning outcome and/or each semester.

Try the Assessment Reporting Method Recommendation Tool below to learn which method is recommended for your program.

A general deadline is shared annually for all programs and faculty.

For Spring 2021, the deadlines are:

  • Step 1: Deadline for faculty contributions is Sept. 3, 2021.
  • Step 2: Deadline for assessment coordinators responsible for the annual assessment report, using the data faculty have provided, is Oct. 31, 2021. (For more information on this process, see “Completing the annual summary report on assessment for the major/degree level.”)

For questions about future deadlines, email assessment@uidaho.edu.

Assessment coordinators may be tasked with ensuring results of program learning outcomes assessment are reviewed by program faculty and used to inform program decisions. It can be helpful to establish practices for reviewing the curriculum, analyzing student learning and planning for instructional improvement. These practices should be well-documented and may include:

  • Ensuring the impacts of curricular decisions on programs of study and their outcomes are carefully reviewed
  • Consulting faculty from other disciplines when reviewing data and/or discussing opportunities for improvement
  • Consulting with learning support services – both specific to the program and institution-wide – on academic needs identified from assessment efforts (including indirect measures such as surveys or other student opinions/feedback)
  • Consulting with an advisory and/or alumni group
  • Engaging students in a focus group or other feedback session when reviewing the findings or identifying opportunities for improvements

Annual program assessment reporting is done in Anthology Planning and is one section of the academic program review process. Program review evaluates the overall “program,” often called a department, and includes a summary report on each “program of study” assessment plan. The assessment coordinator may be tasked with completing the program assessment report template for one or more “programs of study” each year. The data collected from faculty, along with other data points the program tracks, are analyzed and summarized in this report.

Anthology Assessment Tools

Anthology is the U of I’s assessment management system. Users select from the available assessment tools after logging into the system.

On the Anthology homepage you can access all Anthology assessment tools. All U of I employees and students have access to Anthology and can log in using U of I credentials (the same as VandalWeb). The homepage looks like this:

View of Anthology Homepage

The following tools are used for learning outcomes assessment:

  • Anthology Outcomes: Data collection tool used to collect data on a specific learning outcome. Supports: Learning Outcomes Assessment. General users: Faculty, Staff, Department Chairs/Program Heads.
  • Anthology Rubrics: Data collection tool used to collect individual student data using a rubric. Supports: Learning Outcomes Assessment. General users: Faculty.
  • Anthology Planning: Reporting tool used to produce annual reports at U of I. Supports: Learning Outcomes Assessment; Annual Program Review. General users: Department Chairs, Administrators.
  • Anthology Compliance Assist: Reporting tool used for accreditation report writing and student service program reviews. Supports: Accreditation; Annual Program Review (student services only). General users: Staff, Administrators.
  • Anthology Baseline: Data collection tool used to collect and share survey data. Supports: General Assessment; Accreditation. General users: Faculty, Staff, Department Chairs, Administrators.
  • Anthology Insight: Data visualization tool used to analyze data from other Anthology tools and Banner. Supports: General Assessment; Accreditation. General users: Department Chairs, Administrators.

You can access these tools by clicking on the corresponding tile from the homepage. You can switch between tools by clicking on the colorful bar chart icon in the upper left-hand corner of most screens to return to the homepage. Users can have multiple tabs and/or multiple Anthology tools open at once.

Accessing Anthology tools

Anthology Outcomes Instructions

Assessment Reporting Method Recommendation Tool

Please do not edit the program learning outcomes directly in Anthology. To make changes to the program learning outcomes, send an email to assessment@uidaho.edu with the changes you wish to make. Changes to the program learning outcomes must also be made in the U of I Catalog, and requesting the change via email ensures they match in both places. This change does not need to go through a committee at this time. Changes to the catalog may not be possible until the following year. You will receive an email confirming these changes.

Considerations: This method provides the Department Chair/Program Head with the greatest flexibility in the types of data that can be entered into the assessment plan. The downside is that the Department Chair/Program Head will have to enter the data from all contributors or share administrative access with faculty who will enter their own data directly at the program level. Using this method, everyone logs in to Anthology the same way to add data, and any data can be added (example: survey data not related to a course section). This method also works with Anthology Rubrics. If you enter data using this method for a program learning outcome, you will not be able to “assign” or “relate” that outcome in the same semester.

Directions: Entering Program Level Assessment in Anthology Outcomes

Considerations: This method provides a streamlined approach for the Department Chair/Program Head to assign whichever program learning outcomes are being assessed during the semester/year to specific course sections. When faculty log in to Anthology Outcomes, they will see the pending connection waiting for them, and once they accept it, the system will take them directly into the assessment reporting template. When they have completed the assessment process and added their results, the data will automatically show up at the program level. This method also allows faculty to import an Anthology Rubric they may have used. Using this method requires that all data come from course sections during the semester for this program learning outcome. Each program learning outcome can be assigned to multiple sections.

Directions: Assigning Program Learning Outcomes to a Course Section in Anthology Outcomes

Considerations: This method provides a way to reuse assessment data already collected for another learning outcome (either another program learning outcome or a course learning outcome). Using this method requires that all data for this semester/program learning outcome come from assessment of another learning outcome. Faculty can log in to Anthology Outcomes and “create a learning outcome” for their course section, then enter data on that learning outcome at any time. The Department Chair/Program Head can then “capture” that data at the program level. However, there must be communication with the faculty member, as the relationship is not automatic and the Department Chair/Program Head will need to know where the data is located. Connections can be made from multiple learning outcomes.

Directions: Relating Program Learning Outcomes to a Course Learning Outcome or another Program Learning Outcome in Anthology Outcomes 

Anthology Rubrics Instructions

Anthology Planning Instructions

Completing the Annual Assessment Report in Anthology Planning
