Program Review & Analysis

March 2, 2015

The strategic planning process brought the RIT community together and generated considerable buzz as well as great ideas for our future. Skimming through the plan, you will see ideas for expanding research, adding more experiential learning, establishing co-curricular transcripts, and many more. But throughout the discussion, one question kept coming up: what are we going to stop doing?

Whenever I am in the room and this question comes up, people invariably turn to me and ask about program review, thinking that there are programs we may want to eliminate. It is a fair question given that we continue to add new programs to RIT's portfolio at a very healthy rate, yet we rarely decommission programs. For example, last year we gave the green light to consider 8 new programs, and in the past 3 months the President has approved 3 new programs.

One of the first tasks given to me when I became RIT's provost was to establish a healthy program review process. (In fact, I remember receiving this task in April 2008, a full 2 months before I officially started!) After working on it for my first 2 years, we established a framework in 2010. Of course, along came the decision to implement the semester conversion, so we postponed implementation of program review until 2015.

During those years of hiatus, we observed a change in thinking about program review. Increasingly, institutions have moved away from the burdensome and slow process of reviewing each program every 7 years. Instead, they have developed an annual program analysis process that empowers the faculty and college administration to review core data. The review is meant to instigate a discussion about what seems to work and what needs tweaking. If the data indicate real challenges, the review shifts to a deeper dive with more data available.

The traditional 7-year cycle has its issues. First, with more than 150 programs at the undergraduate level, the campus would need to review more than 20 programs a year. With an estimated cost of at least $5,000 per program review (for external evaluators), we would need $100,000 a year just to cover direct expenses. But the real cost comes in time: departments have to carefully assemble data, write a self-study, host an external team, respond to the external evaluator's report, and so on. Often this responsibility falls to the department chair.

Such traditional reviews do have their merits because the external evaluators can point to areas of improvement. But rarely do they suggest radical and meaningful changes. 

My philosophy is that the principle guiding the program review process is not discontinuance but continuous improvement. The guiding question is how a review process can help faculty make changes to the curriculum so that the program enjoys healthy student demand and excellent quality.

A recent success story at RIT illustrates the power of this principle. Enrollment data indicated that in GCCIS, the information technology program had become stagnant and the networking and systems administration program was experiencing significantly declining student demand. The information sciences and technologies faculty worked diligently over several months to recommend substantial changes to these programs. New program titles, along with new curricula, have been proposed and will make these programs more modern and more attractive to prospective students, likely increasing demand. For me, that's the best possible outcome and the way our program review should work.