
Student Learning and Effective Teaching

 

Introduction

Criterion 3: The organization provides evidence of student learning and teaching effectiveness that demonstrates it is fulfilling its educational mission.  

Examining teaching and learning at today’s research universities is often challenging because teaching is the hidden centerpiece of our collective work. Whereas the outcomes of our research efforts are inherently public, the outcomes of our teaching activities typically are not. With an enrollment of over 40,000 students, the learning environment at the University of Michigan is expansive, and the decentralized nature of the institution means that assessment reflects the range of our enterprise and is equally diverse.

Framework

To frame our examination, we use a simple evaluation schematic popularized in the higher education community by Alexander Astin [3]. Astin’s Input-Environment-Output model (figure below) underscores that fully evaluating an educational program requires understanding students’ qualities and characteristics upon entry into the program, the nature of the educational environments they encounter, and their qualities and characteristics as they exit the program.

Astin’s I-E-O model

In applying this scheme to the University’s learning environment, several key points became clear that serve to guide our examination:

  • The student learning environment at the University of Michigan is not confined to formal classroom or institutionally-sponsored educational activities, and any consideration of the student learning environment that does not explicitly recognize this fact will be limited.
  • It is important to have a common set of educational goals or competencies against which progress can be measured and achievement assured.
  • Individual faculty members and academic programs have unique and specialized outcomes that they may pursue in addition to the general institutional effort.

[3 Astin, A.W. (1991). Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. Washington, DC: American Council on Education/Oryx Press Series on Higher Education.]

Mapping Assessment

In the absence of a centralized, coordinated institutional effort to undertake assessment and evaluation activities, it would be wrong to conclude that there is little such activity at the University. Rather, we can identify and map a broad range of ongoing assessment and evaluation efforts that occur at various levels throughout the University; these are presented below. As indicated on the left axis, activities range in scale and scope from individual instructor/class-based, to programmatic, to institutional, to national.

Generalized map of assessment activities at the University of Michigan.
Abbreviations are: CIRP-Cooperative Institutional Research Program, ISL/SoTL-Investigating Student Learning/Scholarship of Teaching and Learning, MSS-Michigan Student Study, NSSE-National Survey of Student Engagement, SERU-Student Experience in the Research University, TQ-Teaching Questionnaire, US News-U.S. News and World Report.

At the institutional level we have centrally-coordinated efforts to collect and disseminate data based on information from both internal and external audiences. In addition to ongoing participation in externally-based activities such as the Cooperative Institutional Research Program (CIRP), the National Survey of Student Engagement (NSSE), and the new Student Experience in the Research University (SERU) project, the University develops its own student research and assessment initiatives. Most notable among these is the ongoing Michigan Student Study (MSS), as well as more targeted data collected from students on their evaluations of classroom experiences through the Teaching Questionnaire (TQ) system managed by the Office of Evaluations and Examinations (E&E), and surveys completed by graduating seniors and alumni (see recent reports under Resources). The University has also been involved in a number of centralized assessment studies, including the Collegiate Learning Assessment (CLA) pilot in 2007-08 and the FIPSE-supported inter-university learning assessment study that produced its Test Validity Study report in 2009.

Consistent with the decentralized organization of the University of Michigan, a good deal of assessment and evaluation activity occurs in the schools and colleges, as well as in many academic programs. These activities vary according to disciplinary norms, and are often aligned with specialized accreditation requirements for some of the professional schools, such as the College of Engineering and the Medical School. Ongoing campus programs, such as the Undergraduate Research Opportunity Program (UROP), have long-established assessment efforts, while new programmatic activities, such as the Instructional Development and Educational Assessment (IDEA) Institute, are poised to foster additional cross-unit collaboration. In addition, departments, schools, and colleges can draw on the Center for Research on Learning and Teaching (CRLT) to assist with data collection and to facilitate discussions around goal setting, data interpretation, and other aspects of assessment. A CRLT-hosted website on assessment offers resources and describes several campus practices.

At the individual level, assessment activities are harder to summarize, but they are generally robust and growing in number. For example, in September 2008 the Medical School, the School of Education, and CRLT co-hosted a “walking dinner” at which faculty members and research groups presented posters summarizing more than 60 research and assessment efforts related to teaching and learning. The College of Engineering and CRLT-North similarly hosted a lunch event in October 2008, at which about 20 posters summarized research on teaching and learning underway at the College of Engineering. Further supporting this range of individualized activities, the University offers an internal grants program, “Investigating Student Learning,” intended to foster additional activity in this area.

We also find assessment activities in a number of cross-cutting areas organized around specific themes. Examples include teaching improvement projects involving CRLT personnel, which typically incorporate evaluation components that assess student learning as well as other improvement-related outcomes. Similarly, activities exploring instructional technology innovations often incorporate evaluation and assessment measures to gauge their effectiveness. Broader-based research and assessment efforts, including work by the Office of Student Affairs Research within the Division of Student Affairs, examine the effects of the general educational environment on campus, including co-curricular engagement.
