Program assessment helps determine the effectiveness of an institution's undergraduate and graduate programs in terms of measurable student outcomes. Assessment plans consist of program goals and student learning outcomes, the measures used to gather evidence of those outcomes, standards of performance, and a process for using results to improve the program.
Assessment is an integral part of the teaching and learning process. To be useful, assessment results need to be part of everyday processes, and the timing and scope of efforts should be logical and appropriate for individual programs. Because the purpose of assessment is to improve the quality of student learning through teaching, assessment must be ongoing. Do not try to assess every program goal and student learning outcome at once; focus on a small number of fundamental goals over a period of time.
Student learning outcomes are statements of what a student is expected to know, be able to do, or be disposed toward, following the completion of a course, an academic experience, or a degree program. Student learning outcomes are key indicators of what students have learned as a result of their experience. Student learning outcome statements are comprehensive and detailed and include knowledge, skills, and attitudes specific to the major. Outcomes should be both observable and measurable.
A variety of approaches may be used as evidence of student learning. The best approaches relate clearly and purposefully to the goals and outcomes they assess, maximize the use of existing data and information, and include direct evidence of whether a student has command of a specific subject or content area, can perform a certain task, exhibits a particular skill, demonstrates a certain quality in their work, or holds a particular value. Examples include writing samples, presentations, artistic performances, artwork, research projects, field work, and service learning. Grades alone do not always provide direct evidence, because grades often do not identify which student learning outcomes students have achieved, or at what levels. Some course grades also reflect student behaviors unrelated to student learning outcomes (e.g., attendance and participation). Assessment and grading differ, but they share one characteristic: both intend to identify what students have learned. Although grades are based on direct evidence of student learning, such as the evaluation of tests, papers, and projects, they must be clearly linked and aligned to learning goals and rubrics to suffice as direct evidence for assessment purposes.
There is a distinction between direct and indirect measures of student learning. Direct measures include course-embedded assessments, the capstone experience, portfolio assessment, standardized tests, certification and licensure exams, locally developed exams, essay exams blind-scored by multiple scorers, juried review of student performances and projects, and external evaluation of student performance in internships. Indirect measures include surveys, exit interviews, retention and transfer rates, length of time to degree, SAT and ACT scores, graduation rates, and placement and acceptance data. Grade point averages, grades in the major, faculty/student ratios, curriculum review documents, accreditation reports, demographic data, and other administrative data are not acceptable measures of student learning outcomes.
Standards constitute performance goals and should be defined in terms appropriate to the relevant method of measurement. Where comparative data are available, a department might define standards in terms of the percentage of students at or above a particular percentile. An individual department, however, might have good reasons to state that all of its students should score above the 50th (or 65th, 70th, etc.) percentile on a standardized test in the major, provided that this is a meaningful expression of standards. Departments with licensure exams might want to state that no fewer than 95% of their students will pass the exam on the first attempt. Departments are not required to use nationally normed tests. In fact, some nationally normed tests may not provide relevant information about student achievement in the major. One advantage of nationally normed tests is that they provide a comparative standard of performance; a disadvantage is that they often do not relate directly to a department's program objectives. Popular alternatives to the nationally normed exam are locally developed exams and performance-based assessments (such as a capstone project or a portfolio). Departments with a criterion-referenced capstone project (or internship evaluations based on specified criteria) might want to state that all students will receive at least a satisfactory score in each criterion area, with 30% performing at a level higher than satisfactory.
The final portion of an assessment plan should outline how results will be used to make program improvements and inform decisions. Ideally, faculty or a committee of faculty will produce an annual report of assessment results to share with the college dean, department or program faculty, and external or internal advisory boards as part of the program's continuous improvement process. At RIT, all annual program-level assessment reports will be reviewed by the Student Learning Outcomes Assessment Committee. Periodic reports will be submitted during the MSCHE review, Academic Program Review, and accreditation reviews, and selected results will be published in annual reports to the Provost and the Board of Trustees.
To the extent that assessment is incorporated into daily practice, it will not appear as an additional burden. Programs need to find creative ways to incorporate assessment into curriculum and instruction so that it becomes part of the normal workload. The burden will seem unbearable to a chairperson who tries to pull together disparate elements of an uncoordinated assessment program on the weekend before the Departmental Annual Report is due. For the chairperson who plans ahead and fully involves faculty in the collection, interpretation, and use of assessment data, the burden will be far less onerous.