Dissertation Overview

Doctoral Examinations as Curricular Infrastructure:
An Institutional Ethnography of Disciplinary Pluralities and Professional Transfer

Abstract

Widely recognized as bridging graduate coursework and independent research, doctoral examinations also (re)produce disciplinary norms and map trajectories for graduate student professionalization. This institutional ethnography investigates doctoral exam processes as a component of curricular infrastructure. While all PhD programs include a doctoral exam process, these exams take a multitude of forms, including reading lists, written exams, and oral exams. The study begins with a discipline-wide survey reaching 81 PhD programs in Rhetoric and Composition, building on previous programmatic research in the field (Estrem & Lucas, 2003) to identify constellations of exam formats and their stated purposes. From this broad view, institutional ethnography re-orients analysis toward the standpoints of individual stakeholders (graduate students and faculty) and the ways that work processes point toward broader structural tendencies and assumptions (LaFrance, 2019). Moving beyond bird's-eye analysis of macroscopic trends and written policy, this institutional ethnography centers individual perspectives through interviews and focus groups to identify work practices and the institutional and disciplinary factors that direct them. This project yields an empirically grounded description of current graduate pedagogical practices while also theorizing how those practices interface with educational institutions and academic disciplines, a question particularly important for emergent or interdisciplinary fields.

Summary Project Narrative

Along with the dissertation, the doctoral exam process is one of the few elements of curricular infrastructure common to all PhD programs; however, the content, format, and purposes of doctoral exams vary widely. I use the term "doctoral exams" to refer to any examination process that PhD students complete between coursework and candidacy (e.g. preliminary, comprehensive, and qualifying exams). Motivated by the primary research question, "how do doctoral exams promote independent research skills, disciplinary expertise, and professionalization for the graduate students who take them?," this project works from a grounding in institutional ethnography to investigate how and why doctoral exam processes take various forms among Rhetoric and Composition PhD programs.

Nearly two decades ago, Heidi Estrem and Brad Lucas published the results of their survey of comprehensive exams in Rhetoric and Composition (2003). Replicating their categories of analysis, this project expands Estrem and Lucas' preliminary investigation into a formal survey that serves as a springboard for more granular engagement with faculty and graduate students. Structurally, my dissertation is composed of six chapters: an introductory chapter with a literature review, a chapter situating institutional ethnography as the methodology, three chapters each dedicated to one of the three major phases of data collection, and a conclusion. Phase I of data collection is a broadly distributed survey of PhD programs, from which categorical themes were identified for deeper investigation through Phase II interviews with faculty members at various institutions and Phase III focus groups with graduate students. Seeking to draw directly on lived experience and to highlight the different standpoints of faculty and students, my methods center people as scholars in addition to institutional texts.

Beginning with a broad analytical lens, I developed a survey through several iterations of user testing and IRB review (#2020-780) before distributing it to all 81 PhD programs listed in the Doctoral Consortium in Rhetoric and Composition. The survey was completed in October 2020, garnered a 51% response rate, and provided valuable information about the purposes and formats of doctoral exams in each program (see Table 1 in Appendix), as well as changes underway or being considered. The survey results quantified themes initially identified by Estrem and Lucas (2003) (see Table 2 in Appendix), including the link between doctoral exams and professionalization, and the notion of exams simultaneously testing students' mastery of content while also challenging them to articulate their own disciplinary views. In terms of exam format, the survey revealed new challenges faculty face in developing accessible exam structures and in accommodating multilingual students who test under timed conditions. While the chapter reporting the full survey results is being prepared as a research article for submission to Rhetoric Review in Spring 2021, highlighted survey findings also serve as focal points for mixed methods triangulation in the subsequent phases of data collection (Denzin, 2012, p. 82).

From the survey, I recruited 14 program directors who indicated they would be willing to participate in faculty interviews. These interviews have two broad goals: first, to learn more about programmatic approaches to doctoral exams, and second, to talk with faculty about their own personal experiences with doctoral exams. Recognizing that every professor took some form of doctoral examination during their own graduate school career, I ask how professors remember their own exams and how those exams helped them develop professionally. Interviews are semi-structured, and I share the list of interview questions with faculty members in advance. The IRB protocol for Phases II and III has been reviewed (#2021-340), and data collection was completed in June 2021.

Since institutional ethnography emphasizes standpoint, or the valuation of differences in perspective based on social position (particularly race, gender, class, and power relations), the perspectives of current graduate students must also be centered (LaFrance, 2019, p. 38). Looking beyond Purdue, the third phase of data collection consults graduate students across the field in virtual focus groups to discuss their work in a semi-structured format. The focus group format is designed to allow latitude for each group to talk about exams in general and about how they perceive their exams as scaffolding their progress toward graduation and professionalization. The focus groups intentionally draw together students from different programs who may not know each other in order to identify and discuss the plurality of exam processes. When colleagues in the same program have experienced the same exam process, it is easier for them to overlook details because of assumed common knowledge. Graduate student focus groups seek to bring the plurality of programmatic exam experiences into the space of a single conversation.

The underlying research methodology for my dissertation is institutional ethnography, a theory of investigation first developed in sociology by Dorothy Smith (1999) as a person-centered methodology in contrast to strictly structural views. In Rhetoric and Composition, it has been used most recently to examine the perspectives of administrators and staff in a writing center (LaFrance & Nicolas, 2012), as well as the origin of undergraduate curriculum and the role of institutional documentation in recording labor and expertise among faculty and administrative staff (LaFrance, 2019). As a methodology, institutional ethnography fundamentally seeks to learn how institutional phenomena come to be through people-centered investigation, attending closely to standpoint, work processes, and the ways in which textual discourses mediate ruling relations. As with ethnography more broadly, institutional ethnography maintains the emic/etic awareness that has long been a consideration of ethnographic studies, acknowledging the potential biases inherent in studying one's own academic culture and seeking to situate these practices within broader academic communities and their professional intersections.

While offering a descriptive account of current exam practices, I concurrently theorize my findings in light of a decades-long question regarding the disciplinary status of Rhetoric and Composition. This question has been posed and engaged by many, including Janice Lauer (1984), Louise Wetherbee Phelps (1988), Kathleen Blake Yancey (2014), and Bruce Horner (2014). My formulation of the perennial question asks, "what role do doctoral examinations play in developing and defining fields like Rhetoric and Composition, whose disciplinary status is disputed or emergent?" While doctoral exams are not the only factor in determining academic disciplinarity, they are poised at the critical hinge point in graduate curriculum between coursework and independent research. At the level of theory-building, I argue doctoral exams function as a unique component in graduate curricula by asking students to simultaneously demonstrate both command of the academic discipline and capacity for independent research.

Additionally, my semi-grounded analysis protocol incorporates multiple cycles of qualitative coding to identify themes in verbal discourse, leading me to theorize doctoral exams as epistemic and functional infrastructure within graduate curriculum. Building on Sarah Read's relational theory of infrastructure for writing studies (2019, p. 246), I argue exams are epistemic in the sense that they test for mastery of knowledge and the ability to produce new knowledge through independent research, and functional in the sense that they serve as tools for scaffolding research agendas and forming dissertation committees. This infrastructural conceptualization accounts for the simultaneous plurality of form and institutional stability of exams, which strongly resemble boundary objects (Star, 1989, 2010). If skills and content learned in doctoral exams transfer to future applications, this infrastructural view invites a pedagogical question: can the transferred effects of doctoral exams be traced and described as a person advances through a scholarly career? In response, my dissertation extends the infrastructural paradigm of boundary objects to include temporal analysis of knowledge transfer. While my dissertation leads to actionable insights for the field of Rhetoric and Composition, it also invites cross-disciplinary attention. Estrem and Lucas' 2003 survey of doctoral exams has subsequently been cited in publications across several disciplines, including counselor education (Kostohryz, 2011), nursing (Oehrtman et al., 2010; Mawn & Goldberg, 2012), criminal justice (Schafer & Giblin, 2008), and computer science (Straub, 2014), indicating ways in which empirical research in Rhetoric and Composition can lead to tangible results in other fields.
This project deeply engages theoretical, empirical, and pedagogical questions in Rhetoric and Composition, and to the extent that Rhetoric and Composition continues to define itself disciplinarily within the university, this project offers a fresh perspective on the intellectual contributions and methodological nimbleness of the field so that it may become better understood and recognized. More importantly, my infrastructural account of graduate curricula offers a new theory and lexicon for describing and justifying the various components that compose graduate curriculum amid a rapidly changing landscape in higher education.

Appendix of Tables

Doctoral Exam Formats                                             %        Count
Written exam covering a student-selected reading list             29.70%   30
Oral exam/defense                                                 28.71%   29
Other (please specify)                                            16.83%   17
Prospectus for Dissertation                                       11.88%   12
Written exam covering a reading list determined by the program    6.93%    7
Pedagogical development (e.g. lesson plans, teaching demo,
course proposal)                                                  5.94%    6
Total responses                                                   100%     101
Table 1. Formats of Doctoral Exams in Rhetoric and Composition PhD Programs.
Note that several respondents selected multiple formats.

#   Question            Rank 1        Rank 2        Rank 3        Rank 4        Total
1   Critical Thinking   37.84% (14)   29.73% (11)   27.03% (10)   5.41% (2)     37
2   Expert Knowledge    45.95% (17)   27.03% (10)   27.03% (10)   0.00% (0)     37
3   Research Ability    24.32% (9)    37.84% (14)   32.43% (12)   5.41% (2)     37
4   Teaching Ability    5.56% (2)     5.56% (2)     8.33% (3)     80.56% (29)   36
Table 2. Four Goals of Doctoral Exams (originally identified by Estrem and Lucas), organized by respondents' ranked choices. Critical Thinking, Expert Knowledge, and Research Ability cluster closely as roughly equal in importance, while Teaching Ability was consistently ranked least important.

References

Denzin, N. K. (2012). Triangulation 2.0. Journal of Mixed Methods Research, 6(2), 80–88. https://doi.org/10.1177/1558689812437186

Estrem, H., & Lucas, B. E. (2003). Embedded Traditions, Uneven Reform: The Place of the Comprehensive Exam in Composition and Rhetoric PhD Programs. Rhetoric Review, 22(4), 396–416. https://doi.org/10.1207/S15327981RR2204_4

Horner, B. (2014). Grounding Responsivity. JAC, 34(1/2), 49–61.

Kostohryz, K. (2011). The Comprehensive Examination in Counselor Education Doctoral Programs: A Study of Faculty's Perceived Purposes [Doctoral dissertation, The Ohio State University].

LaFrance, M. (2019). Institutional Ethnography: A Theory of Practice for Writing Studies Researchers. Utah State University Press.

LaFrance, M., & Nicolas, M. (2012). Institutional Ethnography as Materialist Framework for Writing Program Research and the Faculty-Staff Work Standpoints Project. College Composition and Communication, 66(1), 21.

Lauer, J. M. (1984). Composition Studies: Dappled Discipline. Rhetoric Review, 3(1), 20–29. https://doi.org/10.1080/07350198409359074

Mawn, B. E., & Goldberg, S. (2012). Trends in the Nursing Doctoral Comprehensive Examination Process: A National Survey. Journal of Professional Nursing, 28(3), 156–162. https://doi.org/10.1016/j.profnurs.2011.11.013

Oehrtman, S. J., Smolen, D., Hoblet, K., & Phillips, K. A. (2010). The Comprehensive Examination: A Viable Master’s of Science in Nursing Capstone Course. Journal of Professional Nursing, 26(6), 360–365. https://doi.org/10.1016/j.profnurs.2010.08.003

Phelps, L. W. (1988). Composition as a Human Science: Contributions to the Self-Understanding of a Discipline. Oxford University Press. http://site.ebrary.com/id/10142119

Read, S. (2019). The Infrastructural Function: A Relational Theory of Infrastructure for Writing Studies. Journal of Business and Technical Communication, 33(3), 233–267. https://doi.org/10.1177/1050651919834980

Schafer, J. A., & Giblin, M. J. (2008). Doctoral Comprehensive Exams: Standardization, Customization, and Everywhere in Between. Journal of Criminal Justice Education, 19(2), 275–289. https://doi.org/10.1080/10511250802137648

Smith, D. E. (1999). Writing the Social: Critique, Theory, and Investigations. University of Toronto Press.

Star, S. L. (1989). The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving. In Distributed Artificial Intelligence (Vol. 2, pp. 37–54). Pitman Publishing. https://books.google.com/books?id=azyjBQAAQBAJ&printsec=frontcover&source=gbs_ViewAPI#v=onepage&q&f=false

Star, S. L. (2010). This is Not a Boundary Object: Reflections on the Origin of a Concept. Science, Technology, & Human Values, 35(5), 601–617. https://doi.org/10.1177/0162243910377624

Straub, J. (2014). Assessment of Examinations in Computer Science Doctoral Education. Computer Science Education, 24(1), 25–70. https://doi.org/10.1080/08993408.2014.890792

Yancey, K. B., Robertson, L., & Taczak, K. (2014). The Content of Composition, Reflective Practice, and the Transfer of Knowledge and Practice in Composition. In Writing across contexts: Transfer, composition, and sites of writing (pp. 1–36). Utah State University Press.