Tuesday, August 10, 2021

The method section as conceptual epicenter in constructing social science research reports

Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication, 25(3), 389-411.

In this position paper, Smagorinsky argues for renewed attention to the methods section. He bases this argument on many years of reviewing qualitative manuscripts both officially and unofficially, both on and off the clock. He points out that not everyone appreciates the methods section to the extent that he does, and he doesn't understand why, because to him the results are unintelligible without a proper rendering of the methods. He says,

Increasing attention to the social complexity of research begat a greater need to implicate method in results, presenting authors with new obligations as they wrote their articles. ... The Method section, then, has evolved to the point where, in order for results to be credible, the methods of collection, reduction, and analysis need to be highly explicit. Further, the methods need to be clearly aligned with the framing theory and the rendering of the results. (392)

The key word there, I think, is "implicate." In his history of the methods section, he reviews how the research article in writing studies used to be positivistic until the social turn, which he points to as occurring earlier than you might think. Hmmm, I'll walk that back. The methods section became more important around that time because people like Flower and Hayes (1981) were importing methods like protocol analysis from cognitive psychology.

But the same thing happens now. And in fact, Spinuzzi is really passionate about this idea and even wrote part of an article on it, that is, the idea that you can't just import a theory from elsewhere. Where it comes from has consequences. Smagorinsky says something similar about coding. Don't just take someone's coding scheme.

In particular, the outline of the analytic approach—for me, usually the articulation of a coding system—sets the terms for what I need to talk about elsewhere in the manuscript. If my codes reflect a sociocultural orientation to the data, then I need to frame the study from this theoretical perspective, and the same goes for information-processing theorists, postcolonialists, phenomenologists, and everyone else. Ultimately, I need to ensure that if I claim this perspective, the language that I employ for naming my categories needs to be grounded in the terminology and constructs of the framing theory. For this reason, borrowed coding systems can be highly problematic because they were developed by someone else for, in all likelihood, other purposes and certainly for other data. Rather, codes need to be developed in a dialectic relation among the data, the theoretical framework, and whatever else a researcher brings to the analytic process. (See Bracewell & Breuleux, 1994, for a counterperspective on the value of universal coding systems.) (405-06)

You could probably critique rhetoric's importing of philosophy as a theoretical framework with this logic, and you know of several places in which Scott G certainly has. Same idea. The framework needs to be modified or changed or related to differently... There needs to be a sifting or selection that takes place, which is to say, there needs to be activity on the importer's part. 

Which gets to a different point. A lot of this is simply about co-creation. So there needs to be selection and sifting on the importer's part. But part of the reason for going to all this trouble to be transparent about coding--which is to say, transcending the "I read, I coded, I found themes" (407) method of reporting on coding--is so the reader can participate in the activity. For example, there was this part:

Describing a data collection is probably the most straightforward part of accounting for method. Generally, this section includes a description of the data sources and how they were collected: field notes, interviews, audio recordings of discussions, ancillary artifacts, samples of writing, and so on. But merely listing sources in a general way is typically insufficient. As Chin (1994) has argued, simply announcing that data are composed of "interviews" overlooks the fact that interviews may be conducted in many ways, obligating the researcher to be explicit about who conducted the interviews, whether or not multiple interviewers were involved and if so, how consistency across interviewers was achieved (e.g., relying on a uniform interview protocol or set of prompts and providing the text of such scripts), and other factors that help to reveal the specific nature of the data collection. I use interviews here for illustrative purposes; virtually any qualitative research method benefits from explication of this sort.

Limitations and cautions about the data collection procedures also merit attention. Interviews, to return to this example, are not benign but rather involve interaction effects. Rosenthal (1966) examined researcher effects in behavioral research and identified a myriad of characteristics that can affect the relationship between a researcher and participant, in turn helping to shape the data that emerge from the collection process. For instance, female participants tend to be treated more attentively and considerately than men, female researchers tend to smile more often than their male counterparts, male and female researchers behave more warmly toward female participants than they do toward men (with male researchers the warmer of the two), White participants are more likely to reveal racial prejudice to a White researcher than to a Black one, gentile subjects are more likely to reveal anti-Semitic attitudes to a gentile researcher than to one whom they perceive as Jewish...the list seems endless. Making some effort to account for these phenomena helps to explain the social construction of data in studies involving researcher-participant interactions. (394-95)

If you know that women conducted the interviews rather than me, then you have something to talk about, a point of critique. This is not to say that you'd be giving reviewers a foothold to reject your article. It's about dialogue. As Davida says, there's no single-elimination, sudden-death knockout when it comes to reviews. I think. Weaknesses make you relatable.

Just before I forget, Clay uses the phrase "chain of custody," and I think that's what Smagorinsky is talking about. You need a chain of custody from the methods to the results to the implications, and so on. Everything has to relate. Like that Chekhov quote with the gun, which Spinuzzi has actually used btw...

Oh, there's the part about inter-rater reliability, which is not a big surprise, since Smagorinsky is so into the socio-cultural stuff. If you value activity and co-creation so highly, then it's going to follow that you're not going to like having one rater corroborate another, since that silences one of the conversational partners. It's bidirectional. "My grad students and I code collaboratively," he says in effect, "since it's not merely the case that they learn from me; I also learn from them."

Is this a form of holism? I am related to or in conversation with my grad student, the two of us are in conversation with the method, which is in conversation with the results, etc. Or is it just a network or networking?

Do methods first: everyone says that, I feel like. But I liked how he was saying (or implying?) that he just starts coding and then figures out later which framework it is. Oh, because I Xed and Yed and Zed when I was coding, I should probably use framework A to explain it....

Trust and credibility came up a lot, but that's what I was talking about via co-creation.  Faith too...

I also see this connecting to McNely's interview with Spinuzzi, in which the latter was bemoaning how people always think qual research is subjective and quant objective. Smagorinsky is saying, if we do these things I am talking about, people will start to think of qual research as more objective.

But it also seems like something was lost in the move away from the experimental article in writing studies. Back then, it was common to reference the method from the results. We've lost that (407). Now, of course, we've come a long way, and we're lucky to have realized all of these social facets of research. Reporting is stronger now, more objective? more accurate? more reliable? Smagorinsky doesn't say this, but I take him to mean that it would be great if we acted like those old experimentalists in just this one way, while continuing to do all of the stuff that we're currently doing.

When reading this, I got the impression that data reduction could mean going from, say, recorded interviews to transcripts. Interview transcripts. Like typed out. But then again, how were they typed out? Because that says something. Just typed out, no applied linguistics or linguistic markings, no marking for intonation or pauses or whatever. We just coded for meaning. That says something about your assumptions. Is this a meaningful detail? How does that bear on the results? Why do we care about knowing that? Is it just that it gives the reviewer the opportunity to say, yes, that was meaningful, thank you for including that? Is this about empowering the reviewer and maximizing the opportunity for publication?

You were also thinking about the relationship of coding to the research questions, and how that would be a good idea for papers going forward. You always have to be able to ask yourself: how is the coding pass related to the research questions? Is that relationship simple enough for the reader to follow? And how have you gone out of your way to make it visible through design?
