International Journal of Teaching and Learning in Higher Education 2017, Volume 29, Number 1, 47-60
http://www.isetl.org/ijtlhe/ ISSN 1812-9129
Strategies for Increasing Response Rates for
Online End-of-Course Evaluations
Diane D. Chapman and Jeffrey A. Joines
NC State University
Student Evaluations of Teaching (SETs) are used by nearly all public and private universities as one
means to evaluate teaching effectiveness. A majority of these universities have transitioned from the
traditional paper-based evaluations to online evaluations, resulting in a decline in overall response
rates. This has led to skepticism about the validity and reliability of the SETs. In this study, a large
US public university transitioned to online SETs in 2007 and saw its overall response
rates decline from 73% for paper-based evaluations in 2006 to a low of 43%. The aim of this study was
to determine successful strategies used by instructors to improve their own SET response rates. A
survey was conducted of faculty members who had high response rates, and the data were analyzed
to determine which strategies were being employed. The study found that when instructors show
students they care about evaluations, response rates tend to be higher. The results from the study
have been turned into an FAQ on myths and suggestions that has been distributed to the faculty at the
university to provide guidelines for increasing response rates on SETs.
Universities are facing increasing pressure to
assess educational outcomes. In this climate, one
concrete way to assess teaching effectiveness is through
end-of-course evaluations. Although several studies
have shown student evaluations to be reliable and
somewhat valid, end-of-course evaluations are not
without their problems (Aleamoni, 1999; Centra, 2003;
Hobson & Talbot, 2001). Individual faculty members
are often concerned with the validity, reliability, and
usefulness of the SETs in assessing their individual
teaching effectiveness. Owing to small sample sizes,
the data obtained from these evaluations can lack
statistical significance, and results can be biased.
Especially when response rates are low, instructors are
concerned that only dissatisfied or less successful
students respond to SETs. Research refutes this
common myth, as more successful and engaged
students tend to complete online evaluations (Adams &
Umbach, 2012). Obtaining a high response rate can
help alleviate some of these concerns. Since the
majority of institutions use SETs to inform decisions
about faculty salaries as well as reappointment,
promotion, and tenure, ensuring statistically significant
data through high response rates is a goal shared by
administrators and faculty alike (Education Advisory
Board, 2009; Haskell, 1997). For example, one study
showed that instructors with class sizes of 10 should
have at least a 75% response rate under liberal (10%
sampling error) conditions to obtain reliable feedback
and 100% under stringent (3% sampling error)
conditions (Nulty, 2008), while others question whether
even these rates suffice, noting that response rates under
100% may not be generalizable to the entire class,
especially for small class sizes (Kulik,
2009). Despite the importance of obtaining a high SET
response rate, research on best practices in increasing
evaluation response rates is relatively scarce (Misra,
Stokols, & Marino, 2013), and there have been calls by
researchers for more study on strategies for increasing
response rates (Adams & Umbach, 2012; Goodman,
Anson, & Belcheir, 2015).
According to the University Planning and Analysis
(UPA) and the Evaluation of Teaching Committee
(EOTC) at the university under study, response rates for
end-of-course evaluations have been gradually declining
since the instrument began being administered online. The
EOTC was considering recommending changes to the
current “no-incentives” policy by allowing incentives for
students who complete SETs as a potential way to boost
response rates. The EOTC knew different strategies were
being used by instructors to help increase response rates,
but it was not known which strategies were being
employed, which strategies worked, and which strategies
aligned with current university policy. Misra et al. (2013)
found, “Developing effective strategies for increasing
response rates can help reduce nonresponse biases in
survey data and improve the quality of research findings”
(p. 89). The purpose of this study was to determine which
strategies were being used by faculty members to
effectively increase SET response rates.
Review of the Literature
SETs are often the primary assessment of teaching
performance in institutions of higher education in the
U.S. (Pounder, 2007), but as with all types of evaluation,
they are inherently political (Russ-Eft & Preskill, 2009).
Student evaluation of teaching in higher education was
initially intended to help instructors improve their
teaching and/or student learning. It was only later that
the results were commonly used for promotion and
tenure purposes (Lindahl & Unger, 2010). Marzano
(2012) reported that teachers perceived evaluation in one
of two ways: for measurement and for development.
Most SETs used in higher education today are for
the purposes of measurement and not for development
and are typically summative since they are performed at
the end of the semester.

Figure 1. Overall response rates since moving to an online system at the university under study.
Instructors have long pointed to problems with
SETs, mainly because of their use for promotion and
tenure purposes. Critiques abound on the usefulness,
validity, and reliability of these traditional end-of-
term instructor evaluations. Lindahl and Unger (2010)
claimed that the situation itself leads to atypical
behavior: “The structure of the collection process itself,
involving a group situation, heightened emotional
arousal, and anonymity, encourages deindividuation
and may allow the mechanisms of moral disengagement
to operate, permitting behavior that students would
never engage in face-to-face” (p. 73).
There are additional reasons why end-of-course
evaluations at research-intensive universities rarely result
in instructional improvement. SETs habitually get
distilled down to a single quantitative number whether
high or low; they often tell one nothing about how to
improve teaching, and often ratings are based on a
consumerism model that is focused on entertainment
level or difficulty of the course (Wright, 2000). Courses
vary widely by discipline, class size, student
demographics, and outcomes, but end-of-course
evaluations are usually standardized and may not be
suitable across institutions (Richardson, 2005).
McCullough and Radson (2011) suggested that SETs are
often not calculated correctly because they are based on
ordinal data but analyzed as interval data. Add to this the
issue that students are not trained to rate any one
question in the same way. This leads to unreliable and
likely invalid results. When the stakes are high, the
pressure to make false or misleading statements
increases. Studies have shown that students lie on
faculty evaluations, especially in cases where the student
has an axe to grind (Clayson, 2008). However, some
studies show that the dissatisfied or poorer students are
less likely to fill out the SETs (Adams & Umbach, 2012;
Avery, Bryant, Mathios, Kang, & Bell, 2006; Fidelman,
2007; Sax, Gilmartin, & Bryant, 2003). Adams and
Umbach (2012) found students who have spent time
working to get a good grade are more likely to fill out
SETs and surmised that students with higher GPAs and
course grades have “the intellectual ability to evaluate
the course at a meaningful level” (p. 586).
Online SETs and Response Rates
To complicate matters more, most SETs are now
administered online. Potential advantages of performing
SETs online include standardization across the institution,
no loss of class time to perform SETs, reductions in cost
due to the absence of printing, distributing, and collecting
results (Bothell & Henderson, 2003), getting feedback to
instructors more efficiently, and reduction of errors for
partially or improperly filled out forms. Online SETs can
be argued to have more flexibility in the time and location
for completion (Cummings, Ballantyne, & Fowler, 2001),
which allows students to write more thoughtful comments
online than on paper (Adams & Umbach, 2012;
Ballantyne, 2003; Cummings et al., 2001; Hativa, 2013;
Kasiar, Schroeder, & Holstad, 2002; Stowell, Addison, &
Smith, 2012). In addition to the cost savings, Dommeyer,
Baum, Hanna, and Chapman (2004) pointed out that online
evaluations may help minimize the faculty influence over
in-class SETs (e.g., activities that happen prior to
evaluation, presence of the faculty, and peer influence) as
well as allow more students to complete them (i.e., if they
were absent on the day of the in-class evaluation). Online
administration provides for more anonymity, eliminating
potential handwriting recognition of paper-based SETs
(Avery et al., 2006).
Multiple studies reported that while response rates
for online SETs initially average near 60%, they soon
drop off to the 30% to 40% range (Avery et al.,
2006; Nulty, 2008; Sax et al., 2003). As seen in Figure
1, this phenomenon occurred at the university under
study when it moved to online evaluations in the spring
of 2007, reaching a low of 43% in the fall 2011 and
spring 2012 semesters. While these levels of response
rate may hold some statistical significance in large
courses, smaller classes are more problematic as 40%
of a class of 20 is only eight responses (see Table 1 for
recommended levels for validity). SETs with low
response rates may not be representative of the whole
and add to the argument against making instructional
changes or personnel decisions based upon such
feedback, although one study found that scoring
methods were similar for both forms of administration
(Fike, Doyle, & Connolly, 2010).
Low response rates for online SETs are partially
due to a lack of motivation for filling them out since
students are no longer in class. Students do not
necessarily benefit from SETs (Bullock, 2003) as they
are done at the end of the term, and thus can provide
only a snapshot of the instructional process at a point
when the current students will not experience
instructional improvements. Students perceive that
evaluations have no effect on an instructor’s teaching
effectiveness or performance review. Often they are
left with the notion that no one but the individual
instructors will see them or that the SET results are not
taken seriously (Spencer & Schmelkin, 2002). These
perceptions have some validity as research has shown
that faculty do not view student evaluations as valuable
for improving instruction and report not making
changes based on SETs (Beran & Rokosh, 2009; Gaillard,
Mitchell, & Kavota, 2006). SETs are fraught with
problems, and although only a sampling of the
criticisms is presented here, the literature is clear that
the low and declining response rates for online SETs
present fundamental problems as well as
misperceptions (Avery et al., 2006; Dommeyer et al.,
2004; Norris & Conn, 2005; Nowell, Gale, & Handley,
2010; Stowell et al., 2012).
Response/Non-response Rates
Low response rates for online SETs are a
recognized problem in higher education and have been
studied from a variety of perspectives. This problem
stems from the concern that low response rates have the
potential to create bias if the students filling out the
evaluations are not representative of the entire class
population. Adams and Umbach (2012) found that
non-response bias may actually double-bias SET results
as “not only are students with higher grades typically
awarding higher ratings, but they are also the ones who
are more likely to respond” (586). They also found that
engaged students were more likely to respond to
courses in their major, but the more SET requests sent
to a student, the more unlikely the student is to respond
(i.e., survey fatigue). It is no surprise that in an earlier
study some instructors were found to prefer the
traditional paper method because of their beliefs that
they can achieve higher response rates and a more
accurate representation of the population (Dommeyer et
al., 2004). But, as mentioned earlier, in-class
evaluations are not without their own issues (e.g.,
potential instructor and/or peer influence, students
filling out multiple evaluations, concerns about student
anonymity, etc.).
Table 1
Suggested Minimum Response Rates Required for Validity of Data (Adapted from Nulty, 2008)
Class Size | Recommended Rate under Liberal Conditions* | Recommended Rate under Stringent Conditions**
10  | 75% | 100%
30  | 48% |  96%
50  | 35% |  93%
70  | 28% |  91%
100 | 21% |  87%
200 | 12% |  77%
300 |  8% |  70%
500 |  5% |  58%
*10% sampling error; 80% confidence level. **3% sampling error; 95% confidence level.
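The rates in Table 1 follow from sample-size arithmetic for estimating a proportion with a finite-population correction. The short Python sketch below is an illustration only, not Nulty's (2008) own derivation: it assumes maximum variance (p = 0.5) and the usual normal approximation, so the figures it prints are in the same ballpark as Table 1 but do not reproduce it exactly.

    def required_response_rate(class_size, sampling_error, confidence):
        """Minimum fraction of a class that must respond so the observed
        proportion lies within `sampling_error` of the class value at the
        given confidence level (worst case p = 0.5, finite population)."""
        z = {0.80: 1.282, 0.95: 1.960}[confidence]      # two-sided z values
        n0 = (z ** 2) * 0.25 / sampling_error ** 2      # infinite-population sample size
        n = n0 / (1 + (n0 - 1) / class_size)            # finite-population correction
        return min(1.0, n / class_size)

    for size in (10, 30, 100, 300):
        liberal = required_response_rate(size, 0.10, 0.80)     # 10% error, 80% confidence
        stringent = required_response_rate(size, 0.03, 0.95)   # 3% error, 95% confidence
        print(f"N={size}: liberal {liberal:.0%}, stringent {stringent:.0%}")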
Incentives and Increasing Response Rates
Misra, Stokols, and Marino (2011) found that
social norm-based appeals for issues such as social
cooperation and social responsibility were effective
in increasing web-based response rates. A number
of researchers have noted that reminding students
about the evaluations as well as letting the students
know the importance of SETs has helped response
rates rise (Dommeyer et al., 2004; Goodman et al.,
2015; Johnson, 2002; Laubsch, 2006; Nulty, 2008;
University of British Columbia, 2010).
Additionally, researchers have shown that
instructors who performed a formative mid-
semester evaluation as part of their class gained
between 9% and 16% in response rates (Crews &
Curtis, 2011; Lewis, 2001b; McGowen &
Osgathorpe, 2011; Tucker, Jones, & Straker, 2008).
Students respond positively when they feel their
comments will make a difference in improving a
class. Students then become more engaged in the
course as well as better evaluators (Lewis, 2001b).
They are more motivated if they feel their voices
will be heard, and this can begin with simply stating
in the course syllabus how SET results are used
(Chen & Hoshower, 2003; Tucker et al., 2008).
Several studies have examined aspects of the use of
incentives to increase response rates in online surveys
(Crews & Curtis, 2011; Dommeyer et al., 2004;
Goodman et al., 2015; McGourty, Scoles, & Thorpe,
2002a, b). Cook, Heath, and Thompson (2000) found
that personalized correspondence is linked to higher
response rates in electronic surveys. Students are also
more likely to reply to surveys they find more relevant.
One study found that the best determinant of response
rate was issue salience. In other words, the more salient
the issue to the respondent, the more likely he or she is
to respond (Sheehan & McMillan, 1999). Interestingly,
Cook and colleagues (2000) found that the use of
incentives was negatively associated with response
rates and resulted in more homogeneous responses.
Several researchers have discussed the importance of
giving positive incentives such as extra credit or bonus
points in order to achieve high response rates
(Anderson, Cain, & Bird, 2005; Goodman et al., 2015)
or making SET completion an assignment for the class
(Ravenscroft & Enyeart, 2009). Another study found
that entering students into a random drawing for a cash
prize upon completing their evaluations worked as an
incentive option but was not highly effective
(Ballantyne, 2003). Some universities withhold early
access to grades unless the evaluations are filled out
(Anderson et al., 2005). Clearly, the research on which
incentives work to increase response rates in web-based
evaluations is mixed (Misra et al., 2011).
Methodology
Because of poor response rates for SETs (see
Figure 1), the EOTC at the university under study
wanted to know what could be done to improve them.
This study was designed to determine the following:

1. What strategies are instructors using to successfully improve response rates in SETs?
2. How do these strategies compare to the university policy?
3. What strategies should be recommended for use throughout the university?
The university under study is a large (over 33,000
students) research-intensive institution located in the
United States. The SET process and procedures are governed
by policy and administered by a centralized division
reporting to the university’s Provost. Prior to spring of
2007, when they began to be administered online, SETs
were administered in a face-to-face format. Since that
time, response rates have steadily declined.
The University’s SET is administered online
through a proprietary system and includes 12 Likert
scale questions and three open-ended questions to allow
for comments. Deans, department heads, and
instructors may add a limited number of their own
questions to this set of 15 common-core questions. The
system automatically sends out generic email reminders
several times to those students who have not filled out
their evaluations. Instructors cannot see their SET
results until after the last official day to post final
grades but can monitor the response rates online and in
real time (NCSU, 2013, para. 1).
The policies relating to strategies and/or incentives
for completion of SETs are clear and cover such topics
as the instrument, the scope, and the procedures.
Specifically, students are not required to fill out the
evaluation (NCSU, n.d., para. 31) and incentives to
increase response rate are forbidden (para. 33).
Population and Data Collection
The population under study consisted of 205
instructors (out of approximately 950 total faculty
members who taught at least one course in the previous
semester) who received an SET response rate of 70% or
higher. Because the objective of the study was to find
successful strategies for increasing response rate, the
decision was made to limit participants to only those who
taught at least one course in the semester that had a
70% or higher response rate. Seventy percent was
selected in order to find successful strategies, and 70%
covered most requirements for survey validity for class
sizes over ten in liberal conditions (see Table 1). The
survey was anonymous, was open for three weeks, and
used two follow-up reminders. Out of the population of
205, 120 participants completed the survey, resulting in
a response rate of 59%.

Table 2
List of Survey Strategies to Increase Response Rates
Instrumentation
A Web-based survey instrument was developed
that listed 15 different strategies (see Table 2) that
were either found in the literature as having been
associated with higher response rates for SETs or
that members of the EOTC heard were being used.
The list was reviewed for face validity by members
of the committee. It should be noted that the SET
instrument used at this institution is called
ClassEval. In addition, there were two text boxes in
which respondents could add alternative methods
that were not represented in the list. The survey
began with qualifying questions (see Table 3) that if
answered in a particular manner would disqualify a
participant. This was done to assure that
participants actually did teach at least one course in
the term that received a 70% or higher response
rate. Because of the university policy against
incentives, the study did not collect any identifying
characteristics that could be linked back to a
particular respondent, class, and/or set of
evaluations, so respondents could report prohibited
strategies without risk of identification.
The final part of the survey listed the 15 potential
strategies along with two spaces for respondents to add
strategies not represented, as seen in Table 2. The prompt
read: “In those courses that received a response
rate of 70% or higher, select all of the ways you or
someone else took action to increase response rates.”
Findings
The instrument included three demographic
questions. The first question asked respondents to
report the number of course sections they taught in the
prior semester (see Figure 2). Those that reported they
taught five or more sections were likely considering
labs, independent studies, and other course structures
that differ from the standard three-credit-hour course.
The second question asked respondents to specify the
number of sections that received an SET response rate of
70% or greater, as seen in Figure 3.

Table 3
Qualifying Questions
Qualifying Question | How Participant Was Disqualified
How many course sections did you teach in Fall 2012? | Participant disqualified if response was zero
Of these courses, how many of them received an end-of-course evaluation response rate of 70% or higher (an estimate is fine)? | Participant disqualified if response was zero

Figure 2. Number of Sections Taught in Semester.
The third question asked respondents to estimate
the number of students in the class for those with an
SET response rate of 70% or greater. Here the majority
of the classes had from 11 to 25 students enrolled (see
Figure 4).
Strategies
Respondents were asked to select strategies used to
increase SET response rate in their courses that had a 70%
or greater response rate in the previous semester. They
could select from the list of 15 options in Table 2, or they
could add additional strategies. They were allowed to
select more than one option. The list of strategies included
those that are considered incentives against the current
policy as well as non-incentive strategies. Figure 5 shows
the results of comparing instructors based on their use of
incentives. As shown, the number of faculty using no form
of incentive strategy is statistically significantly higher than
the number that used any form of incentive.
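A minimal sketch of this kind of proportion comparison is given below, assuming SciPy is available. The paper reports only Figure 5, not the underlying counts, so the split of the 120 respondents used here is a placeholder chosen purely to show how 95% error bars and a test against an even split could be computed.

    from math import sqrt
    from scipy.stats import binomtest, norm

    n_respondents = 120      # survey respondents reported in the study
    n_no_incentive = 95      # placeholder count of faculty using no incentive
    n_incentive = n_respondents - n_no_incentive

    def wald_ci(successes, n, confidence=0.95):
        """Normal-approximation confidence interval for a proportion
        (the kind of interval behind simple error bars)."""
        p = successes / n
        z = norm.ppf(0.5 + confidence / 2)
        half_width = z * sqrt(p * (1 - p) / n)
        return p - half_width, p + half_width

    print("No incentive 95% CI:", wald_ci(n_no_incentive, n_respondents))
    print("Any incentive 95% CI:", wald_ci(n_incentive, n_respondents))

    # Exact binomial test: does the share using no incentive differ from 50%?
    result = binomtest(n_no_incentive, n_respondents, p=0.5)
    print("p-value:", result.pvalue)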
Response frequencies for each strategy are listed in
Table 4. The most used strategies seen in Table 4 are
not ones associated with giving away bonus points or
altering assignments, but with the way in which the
instructors approached students about the SET process.
The most often used strategy was merely talking about
the importance of SETs in their classes, followed
closely by creating an environment of mutual respect in
the classroom. The assumption here is that mutual
respect creates an environment where students want to
fill out evaluations. The third most commonly used
strategy (and the only other strategy used by more than
half of the respondents) was telling
students how the instructor used evaluation results to modify
their courses. The next three most highly rated
strategies were used by 27% to 35% of the respondents
and were all related to the ways in which information
about the SET was communicated.
Figure 3. Number of Sections with SET Response Rate of 70% or Higher.

Figure 4. Number of Students in Sections with SET Response Rates of 70% or Higher.

Figure 5. Proportion Testing of Faculty Using Any Form of Incentive (95% confidence error bars).

During analysis, incentives were also categorized
by type of incentive, a category that classified each
strategy as “No Incentive,” a “Red Incentive,” or a
“Grey Incentive.” These categories were defined by
the EOTC: a red incentive was classified as being
totally against policy, while grey incentive strategies
were against the policy but not as egregious because
students were considered to be affected in the same
manner. Both types of strategies were considered
incentives currently prohibited by university policy.
This categorization is displayed in Table 4.
Table 4
Response Frequencies for Strategies to Increase Response Rates*
Strategy | N | % | Type of Incentive
1. Talked about the importance of ClassEval in my class. | 97 | 87% | No Incentive
2. Worked to create a climate in my class that reflects mutual respect between instructor and students. | 93 | 83% | No Incentive
3. Told my students how I use student evaluation feedback to modify my course. | 87 | 78% | No Incentive
4. Sent announcements through Moodle asking students to complete evaluations. (If so, how many announcements do you generally send?) | 39 | 35% | No Incentive
5. Sent personal e-mails to students asking them to complete evaluations. (If so, how many emails do you generally send?) | 36 | 32% | No Incentive
6. Included statements on the syllabus about ClassEval and its importance in my class. | 30 | 27% | No Incentive
7. Encouraged students to bring laptops/tablets/smartphones to class and allowed time for students to complete the evaluation while a moderator was there. | 26 | 23% | No Incentive
8. Offered a mid-semester evaluation where students could give feedback and then used that feedback to modify my course. | 25 | 22% | No Incentive
9. Added bonus points to students' test or assignments if certain course response rates were achieved. | 15 | 13% | Red Incentive
10. Held my course in (or took my class to) a computer lab and allowed time for students to complete the evaluation while a moderator was there. | 11 | 10% | No Incentive
11. Increased all students' grades if certain course response rates were achieved. | 8 | 7% | Red Incentive
12. Added a bonus/extra credit question or questions to the final if a certain course response rate was achieved. | 8 | 7% | Grey Incentive
13. Dropped a low assignment grade for all students if certain response rates were achieved. | 4 | 4% | Red Incentive
14. Forwarded an e-mail from a Department Head or Dean about the importance of course evaluations to my College or Department. | 2 | 2% | No Incentive
15. Offered to bring snacks to class or final if a particular response rate was achieved. | 2 | 2% | Grey Incentive
16. No actions were taken to increase ClassEval response rates in these courses. | 0 | 0% | No Incentive
*Respondents could choose more than one strategy.
The issue of grade influence only begins to show at the
ninth most often used strategy, where instructors added
bonus points to tests or assignments if a certain response rate
was achieved (13%); the strategies ranked 11, 12, and 13
also describe actions that would likely influence grades.
The total number of non-incentive strategies
employed by faculty who used at least one incentive
versus those faculty who did not was statistically the same,
as seen in Figure 6. Also, instructors who received
high response rates employed an average of 4.3 different
strategies. Even when a faculty member used a prohibited
incentive to increase their response rates, he or she still
employed an average of 4.5 non-prohibited strategies.
Because the group distributions of “No Incentive” and
“Incentive” were not normally distributed, a
Wilcoxon/Kruskal-Wallis test using JMP software was
employed to test the null hypothesis that the samples come
from the same distribution. Since the p-value is 0.61, the
null hypothesis cannot be rejected and it can be concluded
that the number of non-incentive strategies employed by
faculty who use at least one incentive is the same as
faculty who do not employ incentives.
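The test described above was run in JMP; the sketch below shows an equivalent Kruskal-Wallis comparison in Python with SciPy. Per-respondent counts of non-incentive strategies are not published, so the two samples are invented solely to illustrate the mechanics of the test.

    from scipy.stats import kruskal

    # Illustrative data only: numbers of non-incentive strategies reported by
    # faculty who used no incentive versus faculty who used at least one.
    no_incentive_group = [3, 4, 5, 4, 6, 4, 5, 3, 4, 5]
    incentive_group = [4, 5, 4, 6, 3, 5, 4, 5, 4]

    statistic, p_value = kruskal(no_incentive_group, incentive_group)
    print(f"H = {statistic:.3f}, p = {p_value:.3f}")
    # A large p-value (such as the 0.61 reported in the study) means the null
    # hypothesis that both samples come from the same distribution is not rejected.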
Figure 5 showed that, statistically, more
instructors are employing strategies in alignment
with university policy than are using incentives
prohibited by policy. However,
class size seems to impact those decisions. Figure 7
shows the contingency analysis when doing the
same comparison with regard to class size: small (5
to 25), medium (26 to 75), and large (greater than
75). The class sizes from Figure 4 had to be
merged to ensure at least five observations of each class
size occurred for each category (i.e., no incentive
and incentive) to make the analysis valid. The null
hypothesis (the proportion of faculty employing
incentives is the same for all three class sizes) is
rejected because the p-value for the chi-square test
is less than 0.0001. The larger the class size,
the more likely a faculty member was to use a
prohibited incentive to help increase response rates.
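A comparable chi-square contingency analysis can be sketched as follows. The study reports only the p-value (less than 0.0001), not the raw cell counts, so the 3 x 2 table below uses placeholder numbers chosen only to demonstrate the test.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: small (5-25), medium (26-75), large (>75) classes.
    # Columns: no incentive, incentive. Counts are placeholders.
    table = np.array([
        [55,  5],
        [25, 10],
        [10, 15],
    ])

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4g}")
    # A p-value below 0.0001 would reject the null hypothesis that the
    # proportion of faculty using incentives is the same across class sizes.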
Figure 6. Comparing the Number of Non-Incentive Strategies Employed by Each Respondent.

Figure 7. Comparing the Use of Incentives Based on Class Size.
Additional Strategies
Thirty respondents submitted strategies they felt
were not represented in Table 4, but after closer
inspection, only 10 were considered additional
strategies (Table 5). The first strategy was related to
evoking student responsibility where instructors would
not only talk about the SET in class, but would also
imply the student had a social responsibility in helping
to create better learning environments and providing
input affecting the career of the instructor. Some
instructors told students that evaluation was a privilege
that was fought for decades ago and others described it
as a responsibility. This can clearly be seen in one
instructor’s comment: “I emphasize that I worked hard
to deliver their course and if they respect that fact, I am
entitled to feedback, positive or negative.”
Another instructor described her strategy: “I
explain that low response rates mean that the evals,
whether positive or negative, are somewhat suspect.”
The next most often mentioned additional strategy was
giving students time off: “I let students leave early or
not have class if a certain response rate was achieved,”
explained an instructor. Note that all mention of time
off related to the last day of class, whether it was part of
the day or the entire day.
Discussion
This study sought to determine the types of
strategies that are successful in increasing response
rates to SETs. Although the findings are limited due to
the self-reporting nature of the study, there are still
valuable findings and implications for policy,
instructors, and administrators. While instructors can
employ a myriad of methods, three strategies were used
by more than 75% of the respondents in this study.
These strategies were:
1. Talked about the importance of class
evaluations in my class;
2. Worked to create a climate in my class that
reflects mutual respect between instructor and
students; and
3. Told my students how I use student evaluation
feedback to modify my course.
These results clearly show that at this institution,
high SET response rates are more associated with
course climate and instructor-student communication
than with incentives given to students. In fact, the top
eight strategies did not include incentives; incentives
first appear in the results at 13% usage
(adding points to tests or assignments). This
contradicts the findings of Goodman and colleagues
(2015), who determined that grade incentives were the
most effective way of increasing response rates.
Policy and Standards
When reviewing the usage of strategies that are
acceptable to the institution and incentives that are not,
the results have clear policy implications. The policy at
the university under study states, “There is no penalty to
students who decline to submit evaluations,” and, “No
form of incentive should be provided to increase
response rate.” While the great majority of instructors
achieving a 70% or higher response rate used strategies
that would not be considered incentives, there were
instructors using incentives that are opposed to the
institutional policy. Table 6 displays the strategies
instructors used that may be considered incentive-based.
Implications for Instructors
The clearest implication from this study for
instructors is to talk about student evaluations of
teaching with their students. This not only includes
explaining their purposes, but also focusing on how the
instructor uses the information and who benefits from
the information that is submitted via an SET (Lewis,
2001a). Results of this study support the case for
creating a climate of mutual respect, one where student
opinions are respected and addressed and instructor
needs are taken into consideration. This can be
accomplished through class discussion and by modeling
behaviors such as using formative evaluations of
teaching and pointing out to students the changes that
result from analysis of the data. The key information
here is that incentives are not only against policy, likely
to bias data, and ethically questionable,
but they also do not work as well as simply reinforcing
the importance of participating in the process and making
students feel their voices make a difference.
Implications for Policy and Administrators
The results of this study in no way support the use
of incentives to raise SET response rates. Policy
makers should focus on rules and processes that enable
faculty members to conduct productive evaluation
discussions in all classes. Steps should be taken to
reduce the conflict between the use of SET results for
course improvement and the use for promotion and
tenure purposes. When an institution places high
importance on SET data for promotion and tenure, it
may also increase the likelihood that an instructor will use
incentives to increase response rates. Should SETs be
primarily used to improve instruction, response rates
and validity become less of a high-stakes issue, and the
pressure to increase response rate somewhat
diminishes. The goal for policy makers should be to
reduce the impetus for participating in activities that
would bias results or be considered unethical. As
echoed by the American Evaluation Association (AEA)
evaluation standards (AEA, 2015), SET policy should
protect and guard against unintended consequences,
such as extreme urgency in inflating SET response
rates, as well as avoid conflicts of interest between the
formative and summative uses of the SET. In order for
SETs to be valid and reliable, policy makers should
decide their primary purpose (i.e., course improvement
or faculty promotion and tenure).
Table 5
Additional Strategies via Open-ended Responses
Strategy | N | %
1. Evoked Student Responsibility or Guilt | 4 | 4%
2. Made Learning about Statistical Significance a Part of Class Content | 4 | 4%
3. Gave Students Time Off | 3 | 3%
4. Gave Bonus Attached to Honesty Attestation | 2 | 2%
5. Commanded Students to Complete Evaluation | 1 | 1%
6. Appealed from the Student Perspective | 1 | 1%
7. Withheld Final Grades | 1 | 1%
8. Created Competition Among Sections | 1 | 1%
9. Altered Final Exam | 1 | 1%
10. Withheld Study Aids | 1 | 1%
Table 6
Strategies that May Be Construed as Incentives
Strategy | N | %
1. Increased all students' grades if certain course response rates were achieved. | 15 | 13%
2. Added a bonus/extra credit question or questions to the final if a certain course response rate was achieved. | 11 | 10%
3. Dropped a low assignment grade for all students if certain response rates were achieved. | 8 | 7%
4. Gave Bonus Attached to Honesty Attestation | 2 | 2%
5. Offered to bring snacks to class or final if a particular response rate was achieved. | 2 | 2%
6. Withheld Final Grades | 1 | 1%
7. Altered Final Exam | 1 | 1%
8. Withheld Study Aids | 1 | 1%
Conclusion
This study examined practices among
instructors who had high SET response rates in
order to determine best practices in increasing end-of-course
evaluation response rates. Findings
indicated that the most common strategies to
successfully increase SET response rates were:
a. Discussing the importance of evaluation feedback
and how it will be used to inform future courses
b. Working to create a classroom culture that reflects
mutual respect between instructor and students.
Showing students “that their input is important in the
collaborative venture of teaching and learning” is mutually
beneficial to instructor and student (Keutzer, 1993, p. 240).
Incentives were not employed as widely as the
investigators expected. Based on the results, an FAQ
document was created to assist faculty in increasing
response rates without the use of incentives (NCSU, 2014).
The FAQ document was distributed through multiple
channels, and there is some anecdotal evidence that it is
making a difference, as the response rates have risen back to
the upper 40% range over the past few semesters.
References
Adams, M. J. D., & Umbach, P. D. (2012).
Nonresponse and online student evaluations of
teaching: Understanding the influence of salience,
fatigue, and academic environments. Research in
Higher Education, 53, 576-591. doi:
10.1007/s11162-011-9240-5
AEA. (2015). Guiding principles for evaluators.
Retrieved from http://www.eval.org/p/cm/ld/fid=51
Aleamoni, L. M. (1999). Student rating myths versus
research facts from 1924 to 1998. Journal of
Personnel Evaluation in Education, 13, 153-166.
Anderson, H. M., Cain, J., & Bird, E. (2005). Online
student course evaluations: Review of literature
and pilot study. American Journal of
Pharmaceutical Education, 69(1), 34-43.
Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., &
Bell, D. (2006). Electronic SETs: Does an online
delivery system influence student evaluations?
Journal of Economic Education, 37, 21-37.
Ballantyne, C. (2003). Online evaluations of teaching:
An examination of current practice and
considerations for the future. New Directions for
Teaching and Learning, 96, 103-112.
Beran, T. N., & Rokosh, J. L. (2009). Instructors’
perspectives on the utility of student ratings of
instruction. Instructional Science, 37, 171-184.
Bothell, T. W., & Henderson, T. (2003). Do online
ratings of instruction make $ense? New Directions
for Teaching and Learning, 96, 69-80.
doi: 10.1002/tl.124
Bullock, C. D. (2003). Online collection of midterm
student feedback. New Directions for Teaching and
Learning, 96, 95-102. doi: 10.1002/tl.126
Centra, J. A. (2003). Will teachers receive higher
student evaluations by giving higher grades and
less course work? Research in Higher Education,
44(5), 495-518.
Chen, Y., & Hoshower, L. (2003). Student evaluation
of teaching effectiveness: An assessment of student
perception and motivation. Assessment &
Evaluation in Higher Education, 28(1), 71-88.
Clayson, D. E. (2008). Student evaluations of teaching:
Are they related to what students learn? A meta-
analysis and review of the literature. Journal of
Marketing Education, 31(1), 16-30. doi:
10.1177/0273475308324086
Cook, C., Heath, F., & Thompson, R. L. (2000). A
meta-analysis of response rates in web- or internet-
based surveys. Educational and Psychological
Measurement, 60(6), 821-836.
Crews, T. B., & Curtis, D. F. (2011). Online course
evaluations: Faculty perspective and strategies for
improved response rates. Assessment & Evaluation
in Higher Education, 36(7), 865-878.
Cummings, R., Ballantyne, C., & Fowler, L. (2001).
Online student feedback surveys: Encouraging
staff and student use. In E. Santhanam (Ed.),
Student feedback on teaching: Reflections and
projections. Perth, AU: The University of
Western Australia.
Dommeyer, C. J., Baum, P., Hanna, R. W., &
Chapman, K. S. (2004). Gathering faculty teaching
evaluations by in-class and online surveys: Their
effects on response rates and evaluations.
Assessment & Evaluation in Higher Education,
29(5), 611-623.
Education Advisory Board. (2009). Online student
course evaluations: Strategies for increasing
student participation rates. Retrieved from
http://www.aims.edu/about/departments/iea/course
-evaluations/response-rates.pdf
Fidelman, C. G. (2007). Course evaluation surveys: In-
class paper surveys versus voluntary online
surveys (Unpublished doctoral dissertation).
Boston College, Boston, MA.
Fike, D. S., Doyle, D. J., & Connelly, R. J. (2010).
Online vs. paper evaluations of faculty: When less
is just as good. Journal of Effective Teaching,
10(2), 42-54.
Gaillard, F., Mitchell, S., & Kavota, V. (2006).
Students, faculty, and administrators’ perception of
students’ evaluations of faculty in higher education
business schools. Journal of College Teaching &
Learning, 3(8), 77-90.
Goodman, J., Anson, R., & Belcheir, M. (2015). The
effect of incentives and other instructor-driven
strategies to increase online student evaluation
response rates. Assessment & Evaluation in Higher
Education, 40(7), 958-970. doi:
10.1080/02602938.2014.960364
Haskell, R. E. (1997). Academic freedom, promotion,
reappointment, tenure and the administrative use of
student evaluation of faculty (SEF): Part IV
analysis and implications of views from the court
in relation to academic freedom, standards, and
quality instruction. Education Policy Analysis
Archives, 5(21). Retrieved from
http://epaa.asu.edu/ojs/article/viewFile/622/744
Hativa, N. (2013). Student ratings of instruction: A
practical approach to designing, operating, and
reporting. CreateSpace Independent Publishing
Platform: Oron Publications.
Hobson, S. M., & Talbot, D. M. (2001). Understanding
student evaluations: What all faculty should know.
College Teaching, 49(1), 26-31.
Johnson, T. D. (2002). Online student ratings: Will
students respond? New Directions for Teaching
and Learning, 96, 49-59.
Kasiar, J. B., Schroeder, S. L., & Holstad, S. G. (2002).
Comparison of traditional and web-based course
evaluation processes in a required, team-taught
pharmacotherapy course. American Journal of
Pharmaceutical Education, 66, 268-270.
Keutzer, C. S. (1993). Midterm evaluation of teaching
provides helpful feedback to instructors. Teaching
of Psychology, 20(4), 238-240.
Kulik, J. A. (2009). Response rates in online
teaching evaluation systems. Ann Arbor, MI:
Office of Evaluations and Examinations,
University of Michigan. Retrieved from
https://www.wku.edu/senate/archives/archives_2
015/e-4-l-response-rates-research.pdf
Laubsch, P. (2006). Online and in-person
evaluations: A literature review and exploratory
comparison. Journal of Online Learning and
Teaching, 2(2). Retrieved from
http://jolt.merlot.org/Vol2_No2_Laubsch.htm
Lewis, K. G. (2001a). Making sense of student written
comments. New Directions for Teaching and
Learning, 87, 25-32.
Lewis, K. G. (2001b). Using midsemester student
feedback and responding to it. New Directions for
Teaching and Learning, 87, 33-44.
Lindahl, M. W., & Unger, M. L. (2010). Cruelty in
student teaching evaluations. College Teaching, 58,
71-76. doi:10.1080/87567550903253643
Marzano, R. (2012). Two purposes of teacher
evaluation. Educational Leadership, 70(3), 14-19.
McCullough, B. D., & Radson, D. (2011). Analysing
student evaluations of teaching: Comparing means
and proportions. Evaluation & Research in
Education, 24(3), 183-202.
McGourty, J., Scoles, K., & Thorpe, S. (2002a). Web-
based student evaluation of instruction: Promises
and pitfalls. Paper presented at the 42nd Annual
Forum of the Association for Institutional
Research, Toronto, Canada.
McGourty, J., Scoles, K., & Thorpe, S. (2002b). Web-
based course evaluation: Comparing the
experience at two universities. Paper presented at
the 32nd ASEE/IEEE Frontiers in Education
Conference, Boston, MA.
McGowen, W. R., & Osgathorpe, R. T. (2011). Student
and faculty perceptions of effects of midcourse
evaluation. To Improve the Academy, 29, 160-172.
Misra, S., Stokols, D., & Marino, H. A. (2011). Using
norm-based appeal to increase response rates in
evaluation research: A field experiment. American
Journal of Evaluation, 33(1), 88-98.
Misra, S., Stokols, D., & Marino, H. A. (2013).
Descriptive, but not injunctive, normative
appeals increase response rates in web-based
survey. Journal of Multidisciplinary
Evaluation, 9(21), 1-10. Retrieved from
http://journals.sfu.ca/jmde/index.php/jmde_1/ar
ticle/view/381
NCSU. (2013). NC State ClassEval dashboard.
Retrieved from
http://classeval.ncsu.edu/cedashboard/index.cfm
NCSU. (2014). NC State ClassEval concerns and
suggestions. Retrieved from http://ofd.ncsu.edu/wp-
content/uploads/2013/07/ClassEvalFAQ.pdf.
NCSU. (n.d.). Reg. 05.20.10 Evaluation of teaching.
Retrieved from http://policies.ncsu.edu/regulation/reg-
05-20-10
Norris, J., & Conn, C. (2005). Investigating strategies
for increasing student response rates to online-
delivered course evaluations. Quarterly Review of
Distance Education, 6, 13-29.
Nowell, C., Gale L. R., & Handley B. (2010).
Assessing faculty performance using student
evaluations of teaching in an uncontrolled setting.
Assessment & Evaluation in Higher Education,
35(4), 463-475.
Nulty, D. D. (2008). The adequacy of response rates to
online and paper surveys: What can be done?
Assessment & Evaluation in Higher Education,
33(3), 301-314.
Pounder, J. S. (2007). Is student evaluation of teaching
worthwhile? An analytical framework for answering
the question. Quality Assurance in Education, 15(2),
178-191. doi: 10.1108/09684880710748938
Ravenscroft, M., & Enyeart, C. (2009). Online
student course evaluations: Strategies for
increasing student participation rates.
Washington, DC: Custom Research Brief,
Education Advisory Board. Retrieved from
http://tcuespot.wikispaces.com/file/view/Online
+Student+Course+Evaluations+-
+Strategies+for+Increasing+Student+Participati
on+Rates.pdf
Richardson, T. E. (2005). Instruments for obtaining
student feedback: A review of the literature.
Assessment & Evaluation in Higher Education,
30(4), 387-415.
Russ-Eft, D., & Preskill, H. (2009). Evaluation in
organizations: A systematic approach to enhancing
learning, performance, and change (2nd ed.). New
York, NY: Basic Books.
Sax, L. J., Gilmartin, S. K., & Bryant, A. N. (2003).
Assessing response rates and nonresponse bias in
web and paper surveys. Research in Higher
Education, 44(4), 409-432.
Sheehan, K., & McMillan, S. (1999). Response
variation in e-mail surveys: An exploration.
Journal of Advertising Research, 39, 45-54.
Spencer, K. J., & Schmelkin, P. (2002). Student
perspectives on teaching and its evaluation.
Assessment & Evaluation in Higher Education,
27(5), 397-409.
Stowell, J., Addison, W. E., & Smith, J. L. (2012).
Comparison of online and classroom-based student
evaluations of instruction. Assessment &
Evaluation in Higher Education, 37(4), 465-473.
Tucker, B., Jones, S., & Straker, L. (2008). Online student
evaluation improves course experience questionnaire
results in a physiotherapy program. Higher Education
Research and Development, 27, 281-296.
University of British Columbia. (2010). Student
evaluations of teaching: Response rates. Retrieved
from http://teacheval.ubc.ca/files/2010/05/Student-
Evaluations-of-Teaching-Report-Apr-15-2010.pdf
Wright, R. E. (2000). Student evaluations and consumer
orientation of universities. Journal of Nonprofit
and Public Sector Marketing, 8, 33-40.
____________________________
DIANE D. CHAPMAN serves as Director of the Office
of Faculty Development at NC State University in
addition to her role as Teaching Professor in the
Department of Educational Leadership, Policy, and
Human Development. She received her B.B.A. from
Western Michigan University, M.B.A. from Western
Carolina University and Ed.D. from NC State
University. Her current research interests include
faculty development, program evaluation, and issues
surrounding the roles of non-tenure track faculty. She is
a recipient of awards in teaching and learning and
community engagement. She has previously worked at
UNC Chapel Hill and in positions in the private sector.
JEFFREY A. JOINES is Associate Professor and
Department Head in the Textile Engineering,
Chemistry, and Science Department at NC State
University and the recipient of the 2016 UNC
System’s Board of Governors Award for Excellence in
Teaching. He received a B.S. in Electrical Engineering
and a B.S. in Industrial Engineering in 1990, an M.S. in
Industrial Engineering in 1990, and a Ph.D. in Industrial
Engineering in 1996, all from NC State University. He
was awarded the 2006 NC State University
Outstanding Teaching Award and in 2012, he was
awarded the Alumni Distinguished Undergraduate
Professor award for outstanding teaching. He chaired
the University’s Evaluation of Teaching Committee
between 2012 and 2015.