Exploring the Teaching Evaluation Landscape
A proposal for a comprehensive approach to assessing teaching quality that will integrate 1) summative and formative feedback to individual teachers, coupled with educational development opportunities and 2) a periodic assessment of the institutional climate for teaching and learning.
To provide a comprehensive, evidence-based approach to collecting and using evidence of teaching effectiveness that focuses on:
enhancing the learning of students;
enabling the professional learning of teachers;
providing a trustworthy system for evaluating teaching that will inform important decisions with respect to academic career progression and recognition of teaching in academic work; and
enhancing our institutional environment to foster teaching and learning quality.
Consultations on the draft proposal are ongoing. The Framework will be made publicly available once it is approved by General Faculties Council, likely by June 2016.
Changing the current system, even for a more robust, evidence-based system, creates uncertainty.
Implementation of Online Student Evaluation of Instruction
Following an extensive and graduated 2-year pilot, MRU is fully and formally transitioning to electronic SEIs this fall. All sections of all courses will be evaluated. The implementation features a multi-faceted marketing and communications strategy with students and faculty, and the creation of an online proctoring video which will enable faculty to dedicate 10-15 minutes of in-class time for completion of the e-SEI after watching the video.
We hope to attain an 80% response rate this semester.
The marketing and communication effort is underway. The proctor video is complete and is available online. The response window for students this fall is Nov 24 through Dec 9.
Questions have arisen from contract faculty as to whether having all sections of all courses evaluated violates an article in the collective agreement. We are attempting to work through this with the Faculty Association.
Implementation of a Common Online Teaching Evaluation
Laurier is in the midst of implementing a new teaching evaluation to be used across the institution, and it is also to be administered online. The instrument will have mandatory questions and will also provide an opportunity for faculty to add optional questions that address effectiveness of their teaching methodologies, etc.
To successfully implement the new instrument and have the new software function properly. To provide better information and feedback on teaching effectiveness. To move to a greener, more efficient process that provides feedback faster than a manual process.
Current status: Pilots are underway.
Explaining the new system to faculty; working with new software that has been revised and is as yet untested; having a successful pilot; taking steps for full institutional rollout.
Inquiry into Teaching Development and Evaluation
Teaching evaluation provides a window through which larger and more systemic issues come into view. Through the lens of inquiry, we are exploring these broader issues, hoping to push our cultural and institutional boundaries.
Moving beyond myopia: from the rhetoric of despised and flawed questionnaires, poor participation rates and administrative imperatives to a culture that values assessment (vs. evaluation) as a critical part of continuous learning.
Meeting with diverse groups.
Time, expertise, commitment, risk-taking, and reinforcing poor practices.
Review of Evaluation Tool
In early 2016, Queen’s will be striking a subcommittee of our Joint Committee on the Administration of the Agreement to review the teaching evaluation tool. Based on current research in the field, best practice and recent adoptions by our comparator universities, Queen’s will either adopt an existing tool or develop its own.
The aim is to agree to a new instrument that reflects the changing nature of instruction, including online and blended learning as well as graduate-level courses.
Queen’s University and the Faculty Association are equal partners in managing the adoption and use of any tools that will be used to evaluate teaching. Consultation will be an important part of the process.
Review of Carleton University Teaching Evaluations for CUASA Members and CUPE 4600 Unit 2 Contract Instructors
Two committees are developing unique proposals to update and enhance the evaluation of teaching at Carleton University for: (1) Carleton University Academic Staff Association (CUASA) faculty and instructors and (2) CUPE 4600 Unit 2 – Contract Instructors. Both committees include representation from the university administration and the respective unions.
To replace the existing survey instrument which students complete for each course with a more effective and appropriate suite of evaluation procedures.
Address the biases existing within the current survey instrument
Address the delivery mechanism (paper versus online)
Provide a better opportunity for students to provide feedback on their experience of teaching
Provide effective and qualified feedback on teaching through multiple procedures that capture the diversity of learning experiences at Carleton University
Provide an opportunity for faculty members to demonstrate learning about their teaching
Better align the formative and summative aspects of teaching evaluation with the principles of the collective agreements between the university and the respective unions
Examine the differentiated impact of instruments used to evaluate teaching on the following designated groups: Aboriginal peoples, persons with disabilities, visible minorities, women, and LGBTTQUI-identified persons
Align proposed suite of evaluation procedures for CUASA members and CUPE 4600 Unit 2 Contract Instructors
The Online Evaluation of Courses: Impact on Participation Rates and Evaluation Scores
University of Ottawa
To examine the impact of a shift to an online system for the evaluation of courses.
An average decrease of 12-15% in participation rates was observed when using an online system. No significant differences in evaluation scores were observed.
Improving Student Participation: Strategies for the Evaluation of Online Courses
University of Ottawa
To assess the impact of various strategies designed to promote the completion of evaluations of online courses.
The integration of informal mid-term course evaluations along with targeted messaging to students by professors has the greatest impact.
Online Course Evaluations
A robust, centralized yet flexible, online system for the completion of questionnaires, data analyses, and communication of appropriate information to faculty, students, and administrators.
1) Allow all students to provide anonymous feedback on courses and instructors in a timely, meaningful fashion. 2) Encourage the intelligent use of results as part of both formative and summative efforts to assess and improve teaching at the University.
Mercury, the online course evaluation system, is the official system, and the default period is now after the exams, so in some ways, the project is complete. However, it is always a work in progress, as we continue to work on increasing response rates, and reducing skepticism about the validity of the results.
A significant percentage of the professoriate is still convinced that student feedback is not given in good faith, and a significant percentage of the student population is convinced that student feedback has no impact.
Formative Evaluation of Teaching
Develop a range of tools for students and professors to provide and receive formative feedback. Methods include formal methods of mid-course feedback such as surveys, discussion groups, and peer observation, as well as informal tools such as the Thank a Prof initiative.
To create a culture of continuous feedback and the improvement of teaching.
We continue to promote the mid-course evaluation strategies and other communication channels to professors.
Creating a set of guidelines that provides simple ways to do this, as professor time is the biggest challenge.
Review and Re-Development of the Student Evaluation of Instruction Instrument
Mount Royal University
MRU's current student evaluation of instruction instrument has been in use since 1998. A shift to the electronic administration of SEIs in fall 2015 created a timely opportunity to review and re-develop the instrument. A task force has been struck to lead this work. The task force is composed largely of tenured faculty and is co-chaired by the AVP-Teaching and Learning and the MRU Students' Association VP-Academic.
Develop a new Student Evaluation of Instruction instrument in consultation with faculty and students. Align the new instrument with principles of effective practice in undergraduate teaching and with institutional criteria for the evaluation of teaching.
The Task Force has been meeting since September. The goal is to pilot the new instrument in 2016-17 and aim for full implementation in 2017-18.
Striking the appropriate balance between adequacy of stakeholder consultation, adequacy of lit review and research, and the university's desire to have a new instrument piloted in 2016-17.
Survey of Institutional Practices Related to Student Evaluation of Teaching
Mount Royal University
An open-ended survey of university practices related to the administration, use, and reporting of SET and SET results. We would like to pilot the survey with AVPs of Teaching and Learning and then perhaps refine the survey and solicit responses from other institutions. The survey would be undertaken in the context of MRU's review of SET-related policies and procedures, currently in process.
Use survey results to inform MRU's review of its own SET-related policies and procedures.
Survey questions are written, and we would like to send the survey out very soon.
Narrowing the list of possible questions to a manageable number!
Building a shared framework of teaching quality at the University of Saskatchewan
This 2-year project will 1) bring together institutional policies and documentation that describes teaching quality (e.g. tenure and review criteria, foundational documents, learning charter) to build from these currently fragmented descriptions, a more comprehensive view of teaching quality; 2) gather and consider best practices from peer institutions’ practices; 3) consult with internal stakeholders on the emerging framework; and 4) map the sources of evidence we currently have/use to document teaching quality against the framework.
The project will:
Develop a framework of teaching quality for the institution that can be used as a common point of reference for processes that relate to quality teaching across the institution.
Allow for evaluation of our current sources of evidence for teaching quality in light of the framework and identify consistency, redundancies, and gaps.
Develop recommendations for considering how we collect evidence for teaching quality.
Document the process and methodology to inform ongoing review (e.g., 5-year cycles) and to ensure alignment as the institution continues to evolve.
Ultimately the framework and the processes built around it will allow us to fairly assess, reward, and continually enhance teaching practice
The project has launched, with a dedicated team in place and a clear project plan developed. The initial work on gathering and assessing institutional documentation is underway.
Pulling a common framework from what are now relatively fragmented institutional documents and processes will be no small undertaking but one we feel recognizes the extensive work done in various areas around aspects of teaching quality. We also recognize that this topic is one that many stakeholders will be directly impacted by and will therefore have a significant interest in. The consultation within the campus community will therefore be an essential element of the project, with external consultation (i.e., broader community) following in future phases.
Peer Collaboration Network (PCN)
The PCN utilizes a model of participation involving three meetings, the central one being a classroom observation. The primary characteristics of the PCN, which account for its uniqueness and participant appreciation, are that it is participant-driven, voluntary, non-evaluative, reciprocal, and confidential, and that its focus is on sharing ideas and experiences related to teaching and learning rather than on evaluation.
The overarching goal of the PCN is to provide faculty and staff a means by which they can develop their own teaching practices, which, when considered collectively, will enhance teaching practices across all academic units at the University of Windsor. It is also hoped that teachers will benefit from their participation in the network by being able to demonstrate their effectiveness and dedication to teaching in a more sophisticated way than currently available through student evaluations of teaching alone.
Expanding network and assessing network effectiveness.
Instructor vulnerability, overcoming perceptions that it will be evaluative and will affect promotion decisions.
Forum on Teaching Evaluation
Organize and host an interinstitutional forum for university instructors, staff, and administrators to explore how teaching is documented and evaluated, here and elsewhere. The forum is intended as a collective opportunity to take stock of our experiences, our beliefs and doubts, the challenges we face, and some of the research in the field, with a goal of fostering an ongoing exploration and dialogue about how to enhance teaching evaluation practices.
to begin the process of identifying fair and effective evaluation practices that legitimately contribute to teaching improvement and to a more comprehensive, sophisticated understanding of teaching quality; and
to explore how universities have successfully taken on the task of doing evaluation better.
Ongoing – sixteen universities are taking part.
Ensuring that the event is informative for individuals with diverse backgrounds, experience, and expertise. Balancing the opportunity for exchange of views based on experience on the ground with opportunities to explore the research. Responding to the high degree of interest in the topic and event.
The majority of the information used in this presentation was gathered by a team from Carleton University as part of the interinstitutional Productivity and Innovation Fund Project on teaching evaluation led by the University of Windsor in 2014. University administrators and teaching and learning centres were asked to provide information about their institution’s current practices, and to upload their SRI questionnaires for review. The team received information from more than 80% of Ontario universities. This was supplemented by review of institutional documents, with a goal of getting a snapshot of how teaching evaluation is being practiced in the Province. Pamela Gravestock’s 2011 dissertation on the role of teaching evaluation in tenure policies at Canadian universities provided additional evidence. Project descriptions and vignettes were gathered as part of the planning and preparation for the University of Windsor Forum on Teaching Evaluation.
Gravestock, P. (2011). Does teaching matter? The role of teaching evaluation in tenure policies at selected Canadian universities. (Unpublished doctoral dissertation). University of Toronto, Toronto, Ontario.
Wright, A., Hamilton, B., Mighty, J., Muirhead, B., & Scott, J. (2014). The Ontario Universities' Teaching Evaluation Toolkit: A Feasibility Study. Report to the Ministry of Training, Colleges and Universities – Productivity and Innovation Fund Program. University of Windsor: Windsor, ON.
What evidence is used in teaching evaluation…
Questions commonly asked about instructors (proportion of Ontario SRI instruments reviewed which contained these question types):
Enthusiasm for course 83%
Overall effectiveness 80%
Questions commonly asked about courses (proportion of Ontario SRI instruments reviewed which contained these question types):
Course difficulty 100%
Recommend to others 100%
Course activities 100%
Quality of materials 89%
Student self-assessment of learning 86%
What do SRI instruments look like across Ontario?
Number of questions ranges from 8 to 47
94% report use of a common SRI but many can be customized at faculty or department level
Most forms are designed with joint input from faculty, faculty associations, senate committees, and university administration
Teaching Dossiers in Ontario
Mandatory at 3 Ontario universities
Mandatory or optional at 54% of Ontario universities
May also be required for some types of faculty members as elements of hiring and review, but not for others.
Most Common Teaching Dossier Elements at Ontario Universities:
Some Current Teaching Evaluation Projects at Canadian Universities
Commonly regulated elements of teaching evaluation in Ontario
Approval and revision of instruments used for summative evaluative purposes
Procedures for the implementation of summative evaluation
Types of data that must be included in files for hiring, tenure and performance review
Functions the data can serve at the institution
Limitations to data access
Processes for identifying satisfactory performance
Ownership and rights to data
How is teaching evaluation used in Ontario?
Teaching Evaluation Vignettes
How are teaching evaluations used?
Proportion of Ontario universities reporting this use.
Who has access to the data?
Proportion of Ontario universities reporting this kind of access.
What SRI data do instructors receive?
Proportion of Ontario universities reporting this kind of data provision.
Challenges with Teaching Dossiers
Lack of consistency
Lack of understanding of what should be in them
Lack of understanding of how to read them
Need for more guidance for submitters
Need for a more efficient and engaging model
In our 2014 PIF survey, Ontario universities identified the following challenges with the two most common approaches to summative teaching evaluation.
Challenges with SRIs
Low response rates
Faculty perceptions of SRI
Resistance to changing instrument items
Transition to SRIs
Lack of standardization
Support & planning groups in institutions usually don’t have access to teaching evaluation data
82% of institutions do not use teaching evaluation data in the aggregate to examine student perceptions at the programmatic or departmental level
I am interested in improvements in the evaluation process and how we can implement new ways to improve teaching skills. I personally would like to see the "360 method" of evaluation where not only the student SET scores determine a teacher's effectiveness but also additional colleague assessments and clinical lead's evaluation for a more well-rounded perspective. In order to ensure confidentiality, ideally a third party should conduct the evaluation anonymously. This will help identify strengths and areas for improvement in order to create goals to continue to improve our teaching skills.
- Tracy Seguin, Faculty of Nursing
When submitting materials for my promotion to full professor in 2003, I included two separate lists of my weighted averages for teaching evaluation scores in each semester. One list showed the weighted averages for semesters in which my teaching load was made up mostly of required methodology and statistics courses, and the other list showed the weighted averages for semesters in which my teaching load was made up of only (non-methodology) content courses. As you can probably guess, my teaching scores were significantly lower in the semesters in which my teaching load included required stats & methods courses. I argued that I actually thought my teaching effectiveness was greatest in the methodology/stats courses, but that a simple analysis of the teaching evaluation results would not reflect this, and that factors such as student motivation to take courses (required vs. non-required, methods vs. content) should be considered as a contextual factor in evaluating teaching scores.
- Kathryn Lafreniere, Department of Psychology
There has been a long debate about the purpose of SET and issues with the use of SET data. SET is often viewed as an administrative tool to hold instructors accountable. Unfortunately, as we all know, SET can be affected by many factors, including student appreciation, the maturity of students, the nature of the subject/course, the availability of teaching resources, and, of course, the personality of the instructor and the quality of teaching. It might even be subject to discrimination. I would like to see SET more as a feedback tool for instructors. At the university where I worked before, the SET questionnaire included two parts. The first part consisted of universal questions that applied to the whole campus. The second part was adaptive: faculties, departments, or individual instructors could add their own questions. I feel that this adaptive part makes the SET carry more feedback value for instructors.
“Sh.it teacher, sh.it class, terrible pp slides, crappy tests, she talks like a 3 year old girl, can’t teach, she wears the gay purse every class like someone is going to steal her stupid little bag. She reads verbatum in class from the slides, the reviews for test are a waste of time. good luck in this class its so sh.it, don't go to class!!”
This student evaluation of my teaching is posted on a publicly accessible website. I found it after seeking mid-course formative student feedback in my large-group lecture class, my very first course. I had recently changed from a problem-based small-group to a lecture-based large-group learning environment. The formative feedback I received was almost the same, with less swearing. When I sought support from colleagues, several asked me why on earth I had asked for feedback before the end of the course.
I lived with the student grapevine effect of that feedback for the next two years.
I learned that I had misread my faculty's culture of teaching and learning. Remarkably, this experience led to merging my other scholarly work with that of teaching and learning, and it shaped my ongoing professional development. I enrolled in the UTC program. I am determining how to study the culture of teaching and learning in order to provide an evidence base for culture change.
While I will never be able to completely put this experience behind me, I have resolved it enough to have the courage and confidence to move forward and turn it into something positive for myself and, through sharing, for others.
The end-of-semester rankings have been valuable to me for gauging what does and does not work in my teaching, so I can focus on things to change (or not) the next time I teach the class, but this is of no real benefit to the students in my classroom at the time (inherent in the "end of semester" aspect, I know). To get around this, I also supplement the official evaluations to gather student input into the class while it is ongoing. To do this, I set aside a few minutes in the first class after the first exam to poll the students on how the class is going so far. The exact questions depend on the course itself and whether it has a lab component, but I always ask 1) was the test fair, 2) how is the lecture pace (too fast? too slow?), and 3) do you like the videos/demos, etc., and then I always end the survey with the question, "If you could change anything, what would you change?"
I then go through these responses and summarize them in the next class. If I do find things I can change, I promise to change them and make sure to follow up. I no longer see large structural changes requested, but there are often little things. While this is not really relevant to "evaluation," it is important because it is of value to students in the class to fix any issues and, equally importantly, it gives the students a sense of ownership in their education and really shows them that their profs do care about their experiences. I know some profs are hesitant to do this because they are afraid of the answers, but I strongly suggest it to all.
- Dennis Higgs, Department of Biological Sciences
There are so many issues with regard to the practice of SETs. The pendulum has swung way too far to the other extreme. Whereas previously some professors were able to get away with shoddy and dishonest practices, now, as a collective, we are being penalized and have to live under the so-called "SET sword of Damocles"! Although the SET tool has its place and could be seen as an important way to assess teaching practices, I believe that SETs have become more of a consumer report card or survey of service, much like at a restaurant. Further, our university's practice of stacking/ranking instructors on a scale based solely on their scores is deplorable. The SET tool has also become a bean-counting exercise when PTR committees review files, despite the platitudes that these scores are only one measure among a host of measures that should be employed to evaluate teaching effectiveness. This whole practice is flawed and needs substantial revision.
- Yvette Daniel, Faculty of Education