IS246 Week 3 Class Notes


Ethnographic project

For my project, consider how:

  • emotion
  • unfamiliarity (location, language, situation)
  • intimidation
  • stress
    affect how information is sought and delivered, compared with how it might go under other circumstances

MT: explain how I've always been the translator for medical and technical language to laypeople and my sensitivity to mismatches in expectations and intentions (i.e., practitioners think they are delivering information and also have a completely different "worry" threshold; plus, they are under pressure, too.)

talk about privacy concerns and the guy whose rectal surgery was discussed in detail loudly

MT: perhaps I could organize it by user experience: what the "some term for waiting people" expect (and this is contingent upon what they've been told prior to arriving in the surgical lounge; can look at the website and show the info presented about the surgical lounge), how they imagine things will go, their emotional states (a spectrum of possible and sometimes unanticipated feelings); what the staff's orientation and experience seemed to be, along with how desensitized they may be; the doctors' situations (hierarchy of interns, residents, surgeons, nurses, phlebotomists and other specialized staff, custodial and serving staff, ombudsmen, etc.)

this case tends to be different from other types of situations because it is rare that one is waiting for someone to come out of surgery. maybe I could find some sort of statistics about number of times someone will sit in a surgical lounge in one's life.

http://www.wisn.com/news/health/new-technology-eases-anxiety-in-hospital-waiting-rooms/-/9373180/20810286/-/s0i1cuz/-/index.html

types of waiting people: need to know, info avoidance, work/be productive (knitting, computing, writing, conducting an ethnographic study), thinking about what could go wrong, thinking about how great things will be after

each of these indicates an expectation for the outcome and the information about how the patient is doing during the course of the surgery

uncertainty manifests in multiple ways, not just for the waiter

Case Ch 8
reliability and validity
tension between these: consistency and generalizability
inductive and deductive theory making
how pressure plays a role in information-seeking behavior

the most important thing to do when reading a study/article is to ask what question they are asking and how they are going about trying to answer it

First thing is to ask what the question is: Intro/Problem/Hypothesis
Can be stated in a very exploratory, general way, or in the classical way as a hypothesis to be tested; there are different views as to what "counts" as good knowledge
and a lot of this has to do with the nature of reality, how we access it, and how we observe it

Nature of knowledge schools of thought
the two viewpoints revolve around either
the way we know things is by observing an external reality that exists independent of us; it's out there in nature; this is positivism-->you can have positive knowledge through direct observation
the challenge is to eliminate your own bias and have direct, objective, unbiased access to whatever that reality/phenomenon is; go out and get the data and then apply an analytic procedure to your data
there is a criterion: you provide a very detailed account of exactly how you conducted your observation and measurement so someone else can replicate the procedure and get similar results
a big "hoo ha" going on now about reproducibility and its unlikelihood bc journals tend to publish confirmatory articles; the likelihood is that the confirmatory ones get published and the others are ignored=80% chance that the politics are crafting the areas of study

if we adopt this objective view of knowledge--that it's out there independent of our observation--this mode is very much about discovery, based on the idea of the possibility of objective knowledge
a different philosophy (often epistemologically opposed): actually, as humans, we're not just recording devices-->we don't just get sensory data and objectively report it

empiricists know that, too, so they devise techniques and tools to try to hold on to the objectivity
the subjectivist (relativist/constructivist/phenomenological) viewpoint says the only way we know is thru our experience, culture, language, socialization; no data can stand free of our ability to perceive it; even in the act of recording, we've decided in advance what to record and how to measure it

for objectivists/realists, that's not a problem, and we want to ensure we have this reliability and some validity (we limit the scope--we can say sthg about this particular phenomenon, bc we keep seeing it again and again)

humanists: this is a very strongly culturalist viewpoint-->humanities scholars (setting aside even the scientific worldview) are interested in a theory of knowledge based on interpretation

humanists do critiques-->you're willing to do extensive reading, etc., abt the same info; they know everyone has a different interpretation

the spectrum goes from unrealistic ignoring of one's biases to solipsism

"YES, absolutely right!"MQ: would you say that objectivists are pragmatists who know there is subjectivity but realize the debate can't be settled?

intersubjectivity-->we have enough intersubjective agreement that we don't even have to get into the theory debate

anthro and soc were deeply subjectivist in the early days (Chicago-->almost entirely subjectivist techniques borrowed from anthropology)
late 19th early 20th century--Chicago School sociology was qualitative

trying to make sense of the culture through long-term, invested observation of American social problems; concerned abt urbanization, industrialization, social pathologies that were arising in the wake of mobility, work changes--taxi dancers, prostitution, etc.; funded by Rockefeller

a lot of those individual studies--each can tell you sthg very detailed, rich, valid--but you have to be a good interpreter to see the patterns across them; by the end of the 1930s and 40s, American social scientists (the time when info science became a term) asked: can't we find sthg more generalizable about the mass society in which we live? turn toward quantification in the social sciences and a reversion to objectivism; have to see it across multitudes of cases-->example: Census
they need the volume-->most Censuses have had interviews conducted on a sample so that they could stay representative
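A quick illustration of why a well-drawn sample can stand in for a full count (the population and proportion below are hypothetical):

```python
import random

# Hypothetical population of 1,000,000 households, 32% of them renters.
# A random sample of a few thousand recovers that proportion closely,
# which is why interviewing a well-drawn sample can deliver the volume
# a census-style study needs while staying representative.
random.seed(0)

population = [1] * 320_000 + [0] * 680_000   # 1 = renter, 0 = owner
sample = random.sample(population, 3_000)

estimate = sum(sample) / len(sample)
print(f"true proportion: 0.320, sample estimate: {estimate:.3f}")
```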

Articulation of the problem; significance, framing, justification in the problem statement

Literature Review: Then you have to find out what's already out there

what are the best, tidiest, most parsimonious literatures that relate to the problem? if we look across these, what do they tell us? maybe they suggest some things; how do they contribute to finding an answer?

in real life, you can't know the problem without knowing the literature; the first two steps really inform each other, and you adjust the way you start describing your problem

you've learned sthg about the problem from this process, so the next step is that you have a section on methods, which come out of the problem and what's already known (maybe their techniques were the problem, or there's an approach to the problem that hasn't yet been done)

this is how we develop theory-->your problem comes out of what you think might be an explanation of how something observed happens

in old-school fieldwork, you put your theory building on hold until after you go into the field--different from this classic social science approach (in our project, we started with the method)

if the kind of ? you're asking requires that you know abt or get data on a phenomenon that's widespread or common in a culture, you have to rule out certain techniques (like this type of fieldwork)

middle of the 20th century was the heyday of the survey-->need volume, can't experience first-hand, so ask questions

there are problems with survey research and interviewing, but if you need a lot of data that's fairly reliable and fairly valid, surveys work bc you need the volume

philosophies of knowledge
positivist/objectivist----------phenomenological/subjectivist

humanists often are subjectivist, but some are not (history researchers, for example)

if the kind of ? is one of experience, of being in the setting, of ppl's understanding of how things work--one that doesn't depend on external generalizability--fieldwork helps you try to see it from their point of view

ethnography-->the signature technique of the subjectivist viewpoint

we've swung hard this way in IS since the 1980s (remember my comment abt the pendulum)

MT: so yes, I'm still a pragmatist!

filmmaking/media making is a mode of doing ethnography, as well

the other side: prolly the most high-status technique from the objectivist, positivist, realist side is the controlled experiment; most of the time validity is what is criticized

ethnographic methods are enormously strong on validity

validity: external and internal
are you really measuring/observing what you think you're measuring/observing??
the first thing to think about is, are they really getting what they think they are getting?

one of the classic controversies in early anthro: Margaret Mead, Benedict & Boas-->the apogee of the period of going to exotic places and observing

she came back and wrote her book acct of kinship, violence, etc.
later, others accused her of being duped-->she'd been lied to and the ppl there had misrepresented themselves; the debate continues about this-->this is a real risk--are you really observing what you think you're observing?

just as true on the quant side-->semantic differential scale: how would you rate this thing on a scale of 5, 7, etc., or the telephone interviewer: very likely, likely, etc.-->those measurements are taken as evidence for behavior and opinions abt how you lead your life--the question always has to be asked: is this measuring what you think it's measuring?
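A sketch of how those scale answers become numbers (the response labels and coding below are hypothetical); turning words into a mean is the easy part--whether that mean measures what we think it measures is the validity question.

```python
# Hypothetical 5-point responses to a telephone survey item.
# Coding the labels as numbers is mechanical; validity asks whether the
# resulting score really reflects the behavior/opinion it's taken as
# evidence for.
SCALE = {"very unlikely": 1, "unlikely": 2, "neutral": 3,
         "likely": 4, "very likely": 5}

responses = ["likely", "very likely", "neutral", "unlikely", "likely"]
scores = [SCALE[r] for r in responses]

mean_score = sum(scores) / len(scores)
print(f"mean rating: {mean_score:.2f} on a 1-5 scale")
```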

internal: have you constructed your study coherently? haven't overlapped too much; your design will get you what you need to get-->triangulated methods are a good way to determine this; harder, but makes more sense

external: whether or not what you observe is representative of the world out there; even with different techniques, you'd find the same phenomenon--so you want your study to be internally coherent, but also externally representative of reality

phrasing in surveys--ordering, wording, etc.; the design of a survey instrument is critical

phone surveys: we're just beginning to randomize on wireless

MT: exit polls and my childhood skepticism

looking at how diff researchers frame questions differently is important

MT: deception/misinformation in polling would be a fun topic

Nate Silver: The Signal and the Noise

the ideal research design has some sort of balance between validity and reliability
if I see the same thing again, what do I think is likely to happen?-->if it predicts well enough, then it's considered generalizable

we can't ever measure every variable
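A toy version of that "predicts well enough" idea (data entirely made up): summarize one batch of observations, then check how well that summary holds on a fresh batch.

```python
import random
import statistics

# Made-up example: does a summary learned from one sample predict a new
# sample reasonably well? If yes, we treat the finding as provisionally
# generalizable--knowing we can never measure every variable.
random.seed(1)

def draw_sample(n):
    # Some underlying process we only observe noisily.
    return [50 + random.gauss(0, 10) for _ in range(n)]

training = draw_sample(200)
fresh = draw_sample(200)

prediction = statistics.mean(training)          # the "pattern" we saw
error = statistics.mean(abs(x - prediction) for x in fresh)

print(f"prediction from first sample: {prediction:.1f}")
print(f"mean absolute error on new sample: {error:.1f}")
```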

 

MT: for methods presentation: patterns over time-->if i want to see this pattern, how can i? if i see this pattern, what can i predict will happen?

levels of analysis: is the question about this scale or that scale? how is the population defined (macro to micro)?

can be global networks (Davos, telecom, trade)
can be middle range, can be micro

Tracy Kidder-->The Soul of a New Machine, an ethnographic study of building a new computer; abt his experience of being embedded
even at the micro level, it also says something abt engineering; corporate climate; technology; fan culture with technology; but he said my level of analysis is going to be small

when you think abt problem and method: where am I going to have to look to get the data to answer the ?

units of analysis are the particular entities you observe; mostly this is individual people-->can be indivs, orgs, families, teens, any sort of stratum

if you want to compare, your unit of analysis is the thing you're comparing; compare the data by unit

in network analysis, this takes a different view, esp social network analysis-->that viewpoint says individuals don't really explain things-->it only gets interesting at the unit of the dyad-->the smallest unit of analysis is the dyad (two)-->you can't learn anything about an environment by piling up bricks
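A small sketch of what shifting the unit of analysis to the dyad looks like (actors and ties are hypothetical): the cases become pairs and their relations, not attributes of individuals.

```python
from itertools import combinations

# Hypothetical actors and who-talks-to-whom ties. With the dyad as the
# unit of analysis, each row of data is a pair, and the variable of
# interest is the relation between them, not any single person.
people = ["A", "B", "C", "D"]
ties = {("A", "B"), ("B", "C"), ("A", "C")}

for pair in combinations(people, 2):
    connected = pair in ties or tuple(reversed(pair)) in ties
    print(pair, "tie" if connected else "no tie")

print("number of dyads:", len(list(combinations(people, 2))))
```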

MT: relates to the study or presentation

there is a big move away from the analysis of just individuals

this is why we're all upset about NSA metadata bc the patterns are in the relations

you can treat a network as a case (Barry Wellman, neighborhoods in Toronto)

see what people really did in both mediated/online and face-to-face interactions

the methods correspond to the problem, your belief abt what kind of data will answer the problem; given that data, what level?; devise methods that you think are going to get you the most robust answers you can get

a controlled experiment compares two groups at two points in time, trying to find out if the intervention alone caused the change in outcome between the two groups; you need strong evidence that you can attribute the change to the intervention

hard to do: natural or quasi-experiments instead-->example: give all students a test for a baseline; at the end, the final exam should reflect learning

comparing two different teaching techniques would give more info
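A sketch of that baseline/final-exam comparison across two teaching techniques (scores invented), using simple gain scores:

```python
# Invented pre/post test scores for two classes taught with different
# techniques. In a quasi-experiment we compare average gains, while
# remembering we can't fully rule out other differences between groups.
technique_a = [(55, 78), (60, 82), (48, 70), (62, 80)]   # (baseline, final)
technique_b = [(57, 68), (59, 72), (50, 61), (63, 74)]

def mean_gain(pairs):
    return sum(post - pre for pre, post in pairs) / len(pairs)

print(f"mean gain, technique A: {mean_gain(technique_a):.1f}")
print(f"mean gain, technique B: {mean_gain(technique_b):.1f}")
```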

MT: this is why I like fMRI

the objective is to pull the experimenter out of the situation, so it can be repeated

well-constructed experiments and surveys have extraordinary reliability

ex: put kids into a lab in two sets: show one group happy cartoons, innocuous and fun--count how many times the kids hit each other and the Bozo doll; then there's another group of kids, matched as much as possible, who are shown Tom & Jerry, and see whether there's different behavior
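A sketch of how the outcome in a study like that gets compared (hit counts invented): compute the difference in average hits between the groups and, via a simple permutation shuffle, ask how often a difference that large would appear by chance.

```python
import random
import statistics

# Invented hit counts per child for the two matched groups.
control = [2, 1, 3, 0, 2, 1]        # innocuous cartoons
treatment = [4, 3, 5, 2, 4, 3]      # Tom & Jerry

observed = statistics.mean(treatment) - statistics.mean(control)

# Crude permutation test: shuffle group labels many times and see how
# often a difference this large appears by chance alone.
random.seed(7)
pooled = control + treatment
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = (statistics.mean(pooled[len(control):])
            - statistics.mean(pooled[:len(control)]))
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f} hits")
print(f"approx. p-value: {extreme / trials:.3f}")
```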

 

lots of early media effects studies were constructed like that; "being exposed to this kind of content causes violent behavior"; is that reliable? different culture, kids, town, etc.

the study's internal validity might miss sthg

you want to match up with already existing techniques; typically you don't pioneer a whole new way bc you're introducing another source of variance into the study

MT: bc techniques are so common, are survey respondents contemplating less? I think we're so used to the questions that we don't contemplate them over time, even though our beliefs/experiences evolve

findings: a report of what i got

discussion/interpretation: confirms or denies, supports or doesn't support; did I do it the wrong way? should I have considered sthg else?

for our abstracts we can use this, but we don't need to

MT: theory papers often are done differently. how?

make sure you get around to the "so what" of the article: what did they find, and does it stand? do you go for it?

ethics: the main principle is first, do no harm; whatever you come up with as a technique for capturing data, you have to be sure it's not going to have negative consequences for the subject--you have to, in good faith, minimize the risk of injury (which can include psychological harm, as well)

UCLA IRB training goes thru the rationale for why we have rules abt human subjects
a few years ago, there was a little revolt abt the UCLA IRB; they were ruling out all sorts of methods; most of the violations had happened at the VA or medical campus, and the rest of campus was paying the price for it-->now it's split into a medical campus IRB and a north campus IRB (also rigorous, but more reasonable); super helpful to students-->OHRPP

Milgram: "banality of evil"

MT: like the 23yo beaten outside the nightclub--instantly on social media networks, yet no one called the police

these are studies on why ppl don't behave as they believe they would behave, but the IRB issues make it really difficult to study today

every time you observe in a situation, you're changing it; it wouldn't be the same were you not there. okay, you're not polluting it, but from an ethnographic perspective, you want to make as few ripples as possible

for the abstracts (first one due next week): pick what you think will be relevant to the topic of the week: