Assessing What Students Learn



Assessing student learning outcomes in the advertising campaigns course:
What do students learn and how can we measure it?
The call for assessment is not new, nor is it a passing academic fad (Boyer, 1990; Maki, 2004; Rowntree, 1987). Accrediting bodies demand to know what our students are learning, while institutions of higher learning face growing political pressure to account for student learning. Evidence of this pressure comes in the form of the Spellings Report, an attempt to provide a comprehensive national strategy for postsecondary education in the United States (Chronicle Review, 2006).

What do we think students in advertising programs should learn in our classes? How should we measure what they learn? How can we translate what we discover to improve student-learning opportunities? These are questions that the accrediting bodies for journalism and mass communications, the Accrediting Council for Education in Journalism and Mass Communication (ACEJMC), and for business, the Association to Advance Collegiate Schools of Business (AACSB), ask programs to answer in the accreditation process. Those questions are also asked at the institutional level by regional accrediting bodies and by national associations such as the American Association for Higher Education and the Association of American Colleges and Universities.

In 2004 ACEJMC began requiring programs that ask to be accredited to provide not only a plan for assessment, but evidence demonstrating that results are used to improve curriculum and learning (ACEJMC Web site). Likewise, AACSB requires that “the school uses well documented systematic processes to develop, monitor, evaluate and revise the substance and delivery of the curricula of degree programs and to assess the impact of the curricula on learning” (AACSB International Eligibility Procedures and Accreditation Standards for Business Accreditation, 2006, p. 15).

Maki (2002) and Hersh (2005) make strong arguments that the commitment to assessment should come from within the institution, not from politicians, businessmen or consultants outside the academy. They contend that by taking ownership of accountability through assessment, faculty may be able to avoid the one-size-fits-all, No Child Left Behind-style testing regime. More importantly, they add, academicians need to work toward good assessment practices not because they are being forced to do so by institutions and accrediting bodies as an act of accountability, but because it reflects a commitment to learning about learning (personal correspondence, Maki, 2006).

Assessing student learning shifts the focus from what instructors teach to what students learn. While this concept has been a topic of conversation in the academy for a while, exactly how to do this and how to measure what students learn have not been fully explored with respect to advertising. Jeremy Cohen, former editor of Journalism and Mass Communication Educator, issued a challenge to the academy: “If assessment is to play a meaningful role . . . it is time to create depth and breadth of understanding of assessment theory and implementation through increased availability of professional quality development” (Cohen, 2004, p. 6). Assessment is particularly challenging for advertising programs because they encompass different disciplines – communication studies, journalism and business – that have different sets of assessment/accreditation criteria.

An appropriate place to start with assessment in journalism and mass communications is the capstone course, such as the advertising campaigns class (Rosenberry & Vicker, 2006). However, literature regarding assessment of student learning in such courses is sparse. In fact, most studies that have dealt with issues in the advertising campaigns course have focused on what skills the industry values (Ganahl, 2003; Benigni & Cameron, 1999), students’ attitudes toward grades (Umphrey & Fullerton, 2004), and the importance of teamwork (Ahles & Bosworth, 2004). No studies that we are aware of have measured direct and indirect indicators of learning, as well as other outcomes of the campaigns course that may benefit students as they develop into advertising professionals. This study investigates learning outcomes identified by professors and looks at student comments about those course objectives. The findings can help advertising educators develop tools for assessing student learning related to competence in understanding and applying the skills and tools of the profession, beyond the grades students receive.

The purpose of this paper is to investigate how educators might define measurable outcomes for advertising campaigns courses to help faculty develop a sustainable plan to evaluate student learning. This paper argues that the appropriate place to start assessing learning for advertising students is the capstone advertising course. It offers a model for teaching and learning that employs several components of an assessment framework offered by Shulman (2007).


Literature Review
Grading and Assessment
Some faculty might argue that they are already “doing” assessment because they are grading student work. To some extent this is true. However, faculty need to make some important distinctions (Cohen, 2004). First, grading student work such as papers and exams produces summative evaluations: the work is measured against specific outcomes designed for the course rather than yielding information about the learning process. Second, according to Cohen, students, not instructors, are held accountable for their learning. The power of assessment lies in the feedback that instructors receive to improve learning opportunities. Rather than a summative emphasis, he contends, the focus should be formative, because grades are not a valid indicator of the process of learning that occurs in a course.

How, then, can faculty systematically evaluate and improve student learning? Do faculty grade what students actually learn or only the assignments they submit? This question is especially perplexing, given the complex, collaborative enterprise that is part of the campaigns course, and will be explored in more depth later in this paper. Professors are often not entirely comfortable with grading (Barnes, 1985), but many use it as their main evaluation tool. Pollio and Beck (2000) found that “students wished professors were less grade-oriented, while professors wished students were more learning-oriented” (p. 45). In fact, students often “confuse grades with learning and do not view grades as a snapshot” (Giese, 2005, p. 255). Grades can be important in the advertising/public relations job search after graduation, although employers often look at other factors when making hiring decisions, including experience, level of confidence, and the quality of the applicant’s portfolio. While professors appear to believe that grades in advertising and public relations courses generally reflect the quality of a student’s course work, they also think that grades are not necessarily an adequate predictor of a student’s potential as an employee (Ganahl, 2003).

Still, grades can be an important motivating factor, especially for younger, less experienced students (Umphrey & Fullerton, 2004). In their study of advertising majors’ attitudes toward grades, Umphrey and Fullerton theorize that older and more experienced students may be less motivated by grades than younger students because they “notice less of a relationship between the time they spend studying and resultant grades” (p. 45). At the same time, they found that students who held creative positions on campaigns teams were not motivated by grades. Given that the course has a strong creative focus and almost all students enroll in it during their senior year, what other factors might motivate students to succeed in the course, if not the grade? A review of existing literature uncovers some factors that have been considered in the past.

While grades are an important assessment tool for summative evaluations such as papers and exams, Cohen (2004) states that they are not a valid indicator of the process of learning that occurs in a course. Shulman (2007) suggests that instead of measuring the learning that occurs in our classrooms by computing a grade based on how well students meet specific course objectives, learning should be measured via multiple measures such as course-embedded assessment.

Why Begin Assessment with the Capstone Course?
The Association of American Colleges and Universities (AAC&U) National Leadership Council (2007) notes that the culminating experience “can be structured to show how well students can integrate their knowledge and apply it to complex problems, and students’ level of performance on them can be aggregated and made public.”

Rosenberry and Vicker (2006) suggest that it is appropriate to begin assessment with the capstone course in mass communications programs because the culminating experience requires that students integrate and apply knowledge from their majors. The products they generate offer opportunities to reflect on the adequacy of students’ preparation in the program. The culminating experience gives students a chance to synthesize what they have learned during their academic careers and bring coherence and closure to their experience in the major. They suggest further that the capstone experience provides not only a sense of closure, but also one of exploration. Capstone courses push students to extend beyond their present knowledge. Rosenberry and Vicker offer three major themes that emerged from their research on capstone courses: (1) capstones reflect an integration and synthesis of knowledge, (2) they require students to apply knowledge to real-world situations, and (3) they help students make a transition from the classroom to their careers. Other topics they identified include: “extension of knowledge, opportunities for in-depth study, reinforcement or extension of basic communication competencies, and development of higher-order or critical thinking skills” (Rosenberry & Vicker, 2006, p. 270). Interestingly, they identify outcomes that are traditionally thought of as intellectual skills and abilities. Interpersonal skills do not appear on their list.

What should be taught in a campaigns course? A review of literature suggests students learn practical, professional and interpersonal skills (Benigni & Cameron, 1999; Ahles & Bosworth, 2004). The capstone course in most advertising and public relations programs has a “real-world” focus, meaning that students work on a campaign that will solve a real-life client’s communication problem strategically and creatively. Benigni and Cameron argued for the importance of real-world application: “perhaps the most important function of capstone courses in journalism and mass communications is to prepare students for the real world” (1999, p. 50). They suggest that these “real world” skills should include communication and planning skills, as well as an ability to base strategic decisions on sound research and theory. The goal of the capstone course, they state, is to synthesize skills learned from prerequisite courses in a collaborative learning environment in which students work as a team to create a campaign. In their study, Benigni and Cameron investigated the role that interpersonal dynamics play in a student campaign both internally (within the team structure) and externally (with the client). They found that two-thirds of campaigns classes used a team approach as the class format, with 94% of all presenters being graded on individual as well as team performance. Seventy-three percent of campaigns courses used peer evaluations, while 60% indicated that peer evaluations of other students were reflected in those students’ final grades. They conclude that teamwork is therefore an important component of the campaigns course.

Benigni and Cameron further found that some programs teach about teamwork, team building, problem solving, and consensus building, but teamwork is not covered in much detail in undergraduate advertising and public relations programs (Ahles & Bosworth, 2004). Ahles and Bosworth suggest that effective teams generally earn higher grades and produce a better-quality campaign for the client. They found that after students complete the campaigns course, they often have a “shared vision” of effective teams, characterized by strong work habits and human relations skills, but not necessarily professional skills. Human relations skills included reliability, dedication to the project, and a teamwork attitude. Ahles and Bosworth conclude that students may rank human relations skills so high because they think those skills will help them achieve a better grade; the desire to have these skills is thus ultimately self-interested. In the same study, students ranked professional skills, such as advertising and public relations tactical skills, computer skills, and problem-solving skills, lowest.

Principles of Assessment
Understanding why assessment is needed and how assessment occurs in a cyclical process prepares us to explore what it is that faculty should assess. Lee S. Shulman, president of The Carnegie Foundation for the Advancement of Teaching, argues that assessment should be viewed essentially as a form of narrative (Shulman, 2007). He states that the story told by assessment is a function of the measurements taken, and those dimensions determine the possible directions the narrative might take. In other words, faculty need to make clear their rationale for telling a particular narrative rather than alternative stories. What is it that advertising educators want to tell in their assessment story? Which key indicators should be measured to show that those outcomes are being met?

Shulman (2007) offers “Seven Pillars of Assessment for Accountability” that can be used as a framework for developing an assessment plan. Four of them guided our work.

1. Become explicit about the story you need to tell and the rationale for choosing it.
The story is driven by accrediting bodies to some extent. ACEJMC stipulates competencies that all graduates of accredited programs should know (see Appendix A for the list). Another part of the story could be shaped by national standards for the discipline. The National Academic Committee of the American Advertising Federation (AAF), which comprises advertising practitioners and faculty, has identified a coherent set of goals for advertising education (personal correspondence, Fullerton, 2007; see Appendix B for the list). These competency lists give faculty tools to set the outcomes they want their students to achieve – to create the story they want to tell about their programs.

Another option for setting outcomes involves researching academic journals to see what other scholars have identified as important curricular areas for advertising. One direction the narrative could take is to reveal how the goals of advertising education might link with the goals of the institution’s general or liberal education, as some educators call for examining how learning and research are integrated across disciplinary boundaries (e.g., Gilbert, Schilt & Ekland-Olson, 2005; AAC&U National Leadership Council, 2007). Ganahl (2003), for example, surveyed alumni and faculty about the advertising/PR curriculum, finding that faculty were more supportive than professionals of a strong liberal arts education.

Trying to assess every possible aspect of the course at once is clearly overwhelming, if not impossible. Faculty need not tell an epic story with their assessment. Rather, they might conceptualize it as incremental steps over a period of time. This way assessment becomes an ongoing activity and an evolving story, rather than a snapshot taken just to have something to show an accrediting team.

2. Do not think that there is a “bottom line.”
Once an instrument has been selected to assess a learning outcome (or competency), it is important to recognize what it measures and what it does not. Assessment results should be examined in the context of the particular narrative that is being told. Assessment focused on answering a specific question cannot be generalized, and no single instrument makes assessment complete or successful. Rather, assessment is an ongoing process of discovery.

3. Design multiple measures.
An array of instruments will help provide a variety of assessment evidence from which to make informed pedagogical decisions. ACEJMC guidelines stipulate that these should include direct and indirect methods. Direct methods require students to demonstrate their learning or produce work that lets others judge whether or not outcomes have been achieved. Examples of direct assessment include a paper, a test, or an evaluation of the campaign by a professional expert. Indirect measures involve asking students their perceptions about what they learned. Asking students, alumni and employers about their satisfaction with a program and measuring job placement rates are examples of indirect assessment. Direct and indirect measures can be complementary, and each tells a different part of the assessment story (Maki, 2004; Walvoord & Anderson, 1998).

4. Embed assessment into ongoing instruction.
The key here, according to Shulman, is to assess early and often. He suggests that assessment employed late in a course or program yields pedagogical information when it is too late to be of much use to students in that course. He says that assessment should be “a regular physical exam rather than a public autopsy” (p. 6). This calls for what Shulman refers to as bilateral transparency: progress toward learning outcomes should be accessible to both faculty and students.

Shulman’s Seven Pillars of Assessment for Accountability invite a challenge for advertising educators. These pillars can be used to guide assessment that is multiple-method, embedded, intentional, and iterative. To summarize, there is a need for strategic, intentional learning that improves as a result of evaluation of the curriculum. Based on the assessment literature and specifically Shulman’s recommendations, we developed an assessment model called IDEA (Identification, Development, Evaluation and Application). This model illustrates the Teaching and Learning Cycle that is essential to the development of the overall plan for assessment, while providing a manageable “roadmap” for faculty to measure and improve student learning. The model suggests that faculty begin the assessment process by identifying and aligning a set of interconnected goals, including institutional, college, departmental, major, and course goals. Then, a qualitative and/or quantitative instrument should be developed to measure specific learning outcomes that will assess the extent to which these goals were achieved. Next, evidence of student learning should be collected, analyzed, and evaluated based on the specific goals that were identified at the beginning. The most important step is to apply the findings to improve student learning. Finally, to complete the cycle, the various goals should be identified and aligned again and the assessment process continued.

[Figure: IDEA Model of Teaching and Learning]

Taking Shulman’s advice to create bilateral transparency, this study focuses the assessment lens on a perspective that is often neglected: the students’. What is it that they report having learned? How does that match what faculty think they are grading and the course objectives they set? This exploratory study will show how assessment of the campaigns course has been attempted by one advertising program by implementing the IDEA model. In this case, the research questions for this study helped us to assess student learning at the course level. We wanted to investigate whether we teach what we are grading and grade what students are learning:

RQ1: In what ways do student comments related to what they learned in the capstone course match the instructor’s learning objectives for the course?
RQ2: How do students rate the extent to which they achieved learning outcomes that include professional guidelines (AAF Principles), accrediting competencies (ACEJMC) and personal expectations?
RQ3: What types of learning outcomes do students identify as most important?



Method

Based on our experience working with faculty in our college to help them understand what assessment is and how it can help improve their teaching, two questions arise in almost every discussion: How is assessment different from grading? And what is the difference between a teaching-objectives approach and a student-centered learning approach?

As a result, an assessment method was developed that might offer an example of how faculty members could move from a teaching-objectives approach to a student learning focus, and in the process learn what assessment offers beyond meeting the course objectives. The capstone course, Advertising and Public Relations Campaigns, was the target course for this study as recommended by Rosenberry and Vicker (2006). It is an example of a course where cumulative learning and various other types of learning might be assessed.

The purpose of this exploratory sequential mixed-methods design was to explore student reflections on learning in the campaigns course with the intent of developing and testing a survey instrument that measures a variety of learning outcomes. The first phase of the study was a qualitative exploration of how students who had completed the advertising campaigns course at a large Midwestern university reflected on what they had learned in the course and how the course objectives matched what they said they learned. These initial course objectives had been developed by the course instructors based on the ACEJMC assessment levels of learning (awareness, understanding, and application). After completing the campaigns course, students were asked to reflect on their development throughout the semester, using the course objectives as a guide. A total of 40 written student reflections from three sections were collected and categorized as pertaining either to learning outcomes identified in the course objectives listed in the syllabus or to additional learning outcomes that emerged from student responses. Students mentioned some of the course objectives and also listed things not on the list of objectives in the syllabus.

The reason for collecting qualitative data initially was that an instrument needed to be developed that included more than the course objectives developed by the instructors. In the second, quantitative phase of the study, students’ statements and/or quotes from the qualitative data were used to develop an instrument to measure a more accurate list of learning outcomes among students in the campaigns course. In addition, profession-related learning goals offered by the American Advertising Federation (AAF) and professional values and competencies listed by the Accrediting Council for Education in Journalism and Mass Communication (ACEJMC) were added to complete the list. In short, the revised instrument included learning expectations from an accrediting body and a professional group, as well as from students themselves.

For the quantitative phase of the study a new group of students from three different sections of the next semester’s campaigns course was asked to complete the revised list of learning outcomes by evaluating whether their team achieved the original course objectives and to what extent they, as individuals, achieved the revised list of learning objectives. Students were then asked to list the top three things they learned from the campaigns course. This ranking was added to provide a measure of the importance of the learning objectives. The revised assessment forms were completed by 51 students from three different sections and three different instructors of the campaigns course at the same institution.


Results

A qualitative inventory of the responses to the initial form indicated that students commented on their learning at all three levels — awareness, understanding and application — and in roughly equal proportions. However, some learning objectives in each of the categories were not mentioned in the open-ended comments. In addition, there were comments about learning that had not been identified by the instructors. These additional comments were related to group learning as well as individual learning that had taken place.

This finding was deemed by the researchers to be an example of how the professor’s view of what should be learned in a course may not match the students’ view of what they learned in the course. The traditional approach in higher education has been that the professor outlines the learning objectives, teaches to those expectations, and indicates in the grading how well the student has met those objectives. This first step in our investigation offered a reminder that what we teach may not be what students learn, but also made us cognizant that what students learn may be beyond the teaching objectives we set. The student comments related to individual and group learning were incorporated in the next version of the course evaluation form.

The consistency of the student responses on the team and individual sections of the revised evaluation form offers evidence of content validity and reliability related to student-oriented learning outcomes. Almost every student indicated “yes” for each of the team achievements on the list. Only one person answered “no” on four of the learning objective statements. There was greater diversity in the individual than in the team evaluations, but most of the students gave themselves a 4 or 5 on the learning outcome statements. On only four of the individual learning objectives did fewer than half of the students give themselves a “5”. The following four statements had the lowest means and the highest standard deviations of this group of questions: “ability to speak in public” (M = 3.53, SD = 1.689), “developed leadership skills” (M = 4.18, SD = 1.090), “played mediator between group members” (M = 3.84, SD = 1.405), and “learned how to write clearly and concisely” (M = 3.96, SD = 1.371). It is possible that those who did not express high agreement with these statements may not have been presenters for the campaign presentation to the client, did not take a leadership role on the team, or did not think that this course was the place they learned to write clearly and concisely. It is also possible that students did not realize that they reinforced their writing skills in the course, even though those skills were not specifically taught in campaigns. We will need to investigate these learning outcome differences to determine whether the statement wording needs revision or whether the course does not provide everyone with the opportunity to advance in these areas.

Qualitative analysis of student comments about the three top things learned in the campaigns course produced three skills categories identified by the students: professional, personal development and interpersonal. Those three categories were developed from 157 items identified by the 51 students in the three sections of the course. Of those, 43% (67) were related to professional skills, 34% (53) to interpersonal relationships, and 23% (37) to personal development. The differences among the three groups of students in the number of items in each category were minimal (professional 23, 23, 21; interpersonal 20, 19, 14; personal development 15, 14, 8). Personal development and interpersonal skills comments were categorized based on whether the comment was given in a group or team context or was offered as a statement about self-development.

Student comments about professional skills included such things as: presentation skills, evaluation and application of research, strategy development, understanding of entire process, technology needed to produce what was needed, importance of attention to details, how to build a cohesive campaign, near perfection needed for client, and understanding of what it takes to develop a plans book.

Interpersonal skills included such items as: teamwork, client communication, need to share ideas, put differences aside for the welfare of the group, learn to rely on others and let them rely on you, need to compromise in order to succeed, and group communication is important.

Statements about personal development items included such things as: learn to accept criticism, keep an open mind, master multi-tasking, think outside the box, learn to take responsibility, don’t take it personally, learn to compromise, learn to handle frustration, time management, patience is key, my ability may be more than I thought, and this experience confirmed that I do love the ad business.

The richness of the student comments provided evidence that they can delineate different types of learning emanating from the campaigns experience. One item, time management, was difficult to categorize because it was not always part of a statement related to self or the group. It was included in the personal development category for this study because it was often used in a personal-trait context. However, this is an item that needs to be investigated in more depth. It may be that time management skills could be considered important for all three categories.

What We Learned From Conducting This Exploratory Study
Research question 1 asked, “In what ways do student comments related to what they learned in the capstone course match the instructor’s learning objectives for the course?” We discovered that student comments indicate the instructor’s learning objectives do match their views of what they learned in many cases, but students also report that there are additional types of learning that go beyond what is traditionally measured with grades that are tied to requirements or expectations listed in the syllabus.

Research question 2 asked, “How do students rate the extent to which they achieved learning outcomes that include professional guidelines, accrediting competencies, and personal expectations?” Our findings indicate that students say they generally have met the professional and accrediting expectations in the capstone course as well as the personal expectations. However, we also were able to identify four skill areas where some students indicated the course did not help them meet the learning expectations. This provides information that could help develop changes in the course materials or assignments that might improve learning in those areas.

Research question 3 asked, “What types of learning outcomes do students identify as most important?” Students in the campaigns course were able to help identify three types of learning that came from their class experience. The first type related to professional skills, which are often part of the grading rubric for courses offered in a professional program. Discovery of the other two types of learning – personal development and interpersonal skills – expands our knowledge of what students learn in a course beyond what was listed as learning objectives in the syllabus.

It is our hope that the findings of this study might help faculty members understand the importance of developing assessment techniques that measure learning experiences outside and beyond the class assignments that are part of the grading rubric. Perhaps faculty will see that student input and feedback related to learning objectives can help make course instruction more student-centered.

Our plan is to refine this method in the campaigns course and then apply it to other courses in the advertising program, as well as the capstone courses for the other majors offered in our college.

The purpose of this paper was to investigate how educators might define measurable outcomes for a capstone course and to help faculty and administrators develop tools to build a sustainable plan to evaluate student learning. Based on current conventional practices in assessment, accrediting bodies demand that faculty complete the transition from teacher-centered education to learning-outcome accountability. The accrediting process requires that institutions not only create plans and assess student learning, but that they use the information from their activities to demonstrate how learning opportunities are improved as a result.

One place to start assessing learning in the advertising major is the culminating experience students get in the capstone campaigns course. This paper offers the IDEA model for assessment of teaching and learning, which starts by identifying and aligning institutional, departmental and course goals. The goal in this example was to assess whether we teach what we are grading and whether we grade what we are teaching. We then developed an instrument to measure evidence of student learning pertaining to this goal, collected and analyzed qualitative and quantitative data, and demonstrated how we used it to improve student learning. The last step of the IDEA model is to go back and start the cycle again by identifying and aligning goals.

This study followed Shulman’s (2007) recommendation that assessment should be bilaterally transparent. Missing from the existing literature is the notion that students have input or give feedback in creating the learning objectives. This investigation focused the assessment effort narrowly, exploring how students reflect on whether the course objectives matched their learning and asking how they rate learning outcomes that include professional and personal expectations.

Findings indicated what was expected to a certain extent. Students reported that they believed they learned what the faculty had established as course objectives. Most interesting, however, was that the students themselves believed they learned more than the instructors expected. Three themes emerged in the qualitative portion of the study, which categorized students’ responses: professional skills, interpersonal skills and personal development. To the literature about campaigns courses, this study adds the notion that the personal component, as identified by students, is an important learning outcome of the campaigns course.

Overall, this study demonstrates that it is not necessary to assess every element in a program to be informed about certain parts of it. Incremental assessment conducted over time with multiple measures helps give a fuller picture of the learning experience. While this study is an example of an indirect measure of assessment, faculty need to add to the assessment story with other evaluations of the course, such as critiques by professional panels and reviews by clients, which measure student learning directly.

Completing the assessment cycle, it is important to implement changes based on evidence generated in the process. Faculty have several options for making such improvements. To implement what was learned in this particular case, faculty could revise the learning outcomes and reflect that revision in the grading. One way to apply the findings of this study to the campaigns course would be to incorporate a “personal development” component into the grading rubric and share it with students at the beginning of the semester. Grading rubrics are an effective way to articulate expectations to students (Lattuca, 2005); building one involves establishing and defining the standards that must be met. In this study, the first step is to define “personal development,” which can be accomplished in a variety of ways. For example, the instructor could supply a definition drawn from the literature. A much more student-centered approach, however, would be to involve students and ask them to define and reflect on what “personal development” means in the campaigns course. This could be achieved by implementing an assessment plan similar to the one we presented here. The definition could then become part of the grading rubric and be measured by indicating to what extent each student achieved “personal development.” The percentage of the grade that “personal development” accounts for would be at the instructor’s discretion. The grading rubric should be re-evaluated each semester as part of the continuous assessment process. Alternatively, if the instructor does not believe that “personal development” should be graded, she could simply ask students to reflect on this particular learning outcome and again measure it against an operational definition as another way to assess what students learn in the course.
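If the component is graded, the weighting arithmetic is straightforward. The sketch below is a hypothetical illustration only: the component names and weights are our assumptions for the sake of example, not a rubric drawn from the course. It shows how rubric scores and instructor-chosen weights might combine into a single course grade.

```python
# Hypothetical rubric sketch: component names and weights are
# illustrative assumptions; the instructor sets the actual weights.

def weighted_grade(scores, weights):
    """Combine rubric scores (0-100) into one course grade.

    scores and weights are dicts keyed by rubric component;
    the weights must sum to 1.0.
    """
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    return sum(scores[c] * weights[c] for c in weights)

# Example rubric with a "personal development" component added.
weights = {
    "campaign plan": 0.40,
    "client presentation": 0.25,
    "team evaluation": 0.20,
    "personal development": 0.15,
}
scores = {
    "campaign plan": 90,
    "client presentation": 85,
    "team evaluation": 95,
    "personal development": 80,
}

print(round(weighted_grade(scores, weights), 2))  # 88.25
```

The design point is simply that adding a graded “personal development” component means re-balancing the other weights, which is why re-evaluating the rubric each semester matters.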

This study demonstrates that grading is not the same as assessment. We learned that students report learning personal development skills that are not taken into account when grades are given. We now know that an important part of the course includes that dimension. We, as faculty, can choose to integrate it into the grade with rubrics that reflect this component or keep it as a separate, ungraded component of the course. The instructors teaching the campaigns course at our institution have incorporated the “personal development” component in different ways. One has included it in the grading rubric, while another chose to discuss it with students at various points throughout the semester. In both cases, students will be asked at the end of this semester to reflect on this important component, which will again become part of the assessment cycle.

The primary limitation of this study is the relatively small sample size. The findings are meant to be an illustration of how one institution has assessed a specific component of the advertising campaigns course.
Future Directions
As indicated in the IDEA model of teaching and learning, the most important piece of assessment is to view it as an ongoing process. We need to assess student learning systematically over time so that we can continuously improve it. The next step for us is to assess what advertising alumni say they have learned in the capstone course. This will add another chapter to our assessment story because students who have just completed the course but haven’t had any professional experience may not know what they have learned until they are somewhat established in the professional world.
• AAC&U National Leadership Council for Liberal Education & America’s Promise. (2007). College Learning for the New Global Century. Washington, DC: Association of American Colleges and Universities.

• Accrediting Council for Education in Journalism and Mass Communication (ACEJMC). Retrieved March 17, 2007 (assessment standard; competency list).
• Ahles, C. B. & Bosworth, C. C. (2004). The perception and reality of students and workplace teams. Journalism and Mass Communication Educator, 59 (1), 42-59.
• Association to Advance Collegiate Schools of Business (AACSB). Eligibility procedures and accreditation standards for business accreditation. Retrieved March 21, 2007.
• Barnes, S. (1985). A study of classroom pupil evaluation: The missing link in teacher education, Journal of Teacher Education, 36, 46-49.
• Benigni, V. L., & Cameron, G. T. (1999). Teaching PR campaigns: The current state of the art. Journalism and Mass Communication Educator, 59 (3), 50-60.
• Boyer, E.L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.
• Chronicle Review. (2006, Sept. 1). The Spellings Report, warts and all. Chronicle of Higher Education, 53 (2).
• Cohen, J. (2004). Assessment . . . yours, mine, ours. Journalism and Mass Communication Educator, 59 (1), 3-6.

• Creswell, J. W., & Plano Clark, V. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

• Ganahl, D. (2003). Evaluating a professional advertising/PR curriculum: Aligning the liberal arts curriculum with professional expectations. Journal of Advertising Education, 7 (2), 24-32.

• Gilbert, L. A., Schilt, P.E., & Ekland-Olson, S. (2005). Engaging students: Integrated learning and research across disciplinary boundaries. Liberal Education (Summer/Fall 2005), 44-49.

• Giese, M. (2005). An educator’s journal: Evaluating and evaluated. Journalism & Mass Communication Educator, 60 (3), 252-256.

• Hersh, R. H. (2005). What does college teach? It’s time to put an end to “faith-based” acceptance of higher education’s quality. The Atlantic Monthly, November 2005 , 140-143.

• Lattuca, L. R. (2005). Making learning visible: Student and peer evaluation. Journalism and Mass Communication Educator, 60 (3), 247-251.

• Maki, P. L. (2002). Moving from paperwork to pedagogy: Channeling intellectual curiosity into a commitment to assessment. AAHE Bulletin, May 2002. Retrieved Feb. 14, 2007.

• Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: American Association for Higher Education.

• Pollio, H. R., Humphries, W. L., & Milton, O. (1989). Components of contemporary college grade meanings. Contemporary Educational Psychology, 14, 77-91.

• Rosenberry, J. & Vicker, L.A. (2006). Capstone courses in mass communication programs. Journalism and Mass Communication Educator, 61 (3), 267-283.

• Rowntree, D. (1987). Assessing students: How shall we know them? (2nd ed.) London: Kogan Page.

• Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for accountability. Change, 39 (1).

• Umphrey, D., & Fullerton, J. (2004). Attitudes toward grades among advertising majors. Journal of Advertising Education, 8 (1), 39-47.

• Walvoord, B. E., & Anderson, V. J. (1998). Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass.

Appendix A
Professional values and competencies
(adopted Sept. 16, 2000)
Individual professions in journalism and mass communication may require certain specialized values and competencies. Irrespective of their particular specialization, all graduates should be aware of certain core values and competencies and be able to:
• Understand and apply First Amendment principles and the law appropriate to professional practice;
• Demonstrate an understanding of the history and role of professionals and institutions in shaping communications;
• Demonstrate an understanding of the diversity of groups in a global society in relationship to communications;
• Understand concepts and apply theories in the use and presentation of images and information;
• Work ethically in pursuit of truth, accuracy, fairness and diversity;
• Think critically, creatively and independently;
• Conduct research and evaluate information by methods appropriate to the communications professions in which they work;
• Write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve;
• Critically evaluate their own work and that of others for accuracy and fairness, clarity, appropriate style and grammatical correctness;
• Apply basic numerical and statistical concepts;
• Apply tools and technologies appropriate for the communications professions in which they work.

Appendix B
A Statement of Principles for Advertising Education Programs,
National Academic Committee
American Advertising Federation (July 2006)

1. Advertising students should know the following:
A. The institutions of advertising, their history, and how they relate to each other.
B. How advertising is coordinated with marketing and other aspects of a company or organization’s activities.
C. Management of the advertising function and personnel in agencies and client organizations.
D. A wide range of alternatives for delivering advertising messages and how to use those delivery vehicles.
E. The conceptual basis for crafting advertising messages.
F. How advertising is regulated.
G. Ethical principles for advertising practices.
H. Research methodologies appropriate to guiding advertising strategy and evaluating its results.
I. An appreciation for the diversity of markets and audiences for whom advertisers create campaigns and messages.
J. Critical thinking, written, oral and visual communication, and presentation skills.
K. The ability to work with others to solve problems creatively.
2. Instruction in advertising courses should include both theory and practical application, such as the National Student Advertising Competition (Relevant to ACEJMC Standard 2).
3. Advertising faculty members should have professional experience relevant to the courses they teach (Relevant to ACEJMC Standard 4).
4. Advertising students should be strongly encouraged to gain work experience before graduation through campus media and internships (Relevant to ACEJMC Standard 2).
5. Advertising students should be proficient in using equipment and technology they will use in their careers (Relevant to ACEJMC Standard 2).
6. Advertising programs should be assessed using multiple measures, which could include:
• Participation in regional and national competitions, such as the National Student Advertising Competition, ADDYs, and competitive internship programs
• Capstone papers
• Journals and reflection pieces
• Focus groups
• Benchmark measurements (pre-tests/post-tests of courses and senior year)
• Portfolios of student work