Tuesday, July 17, 2012

Can We Trust Psychological Research?

Time Magazine: Psychologist Dirk Smeesters does fascinating work, the kind that lends itself to practical, and particularly business, applications. Earlier this year, the Dutch researcher published a study showing that varying the perspective of advertisements from the third person to the first person, such as making it seem as if we were looking out at the scene through our own eyes, makes people weigh certain information more heavily in their consumer choices. The results appeared in the Journal of Personality and Social Psychology, a top American Psychological Association (APA) journal. Last year, Smeesters published a different study in the Journal of Experimental Psychology suggesting that even manipulating colors such as blue and red can make us bend one way or another.

Except that apparently none of it is true. Last month, after being exposed by Uri Simonsohn at the University of Pennsylvania, Dr. Smeesters acknowledged manipulating his data, an admission that has been the subject of fervent discussion in the scientific community. Dr. Smeesters has resigned from his position, and his university has asked that the respective papers be retracted from the journals. The whole affair might be written off as one unfortunate case, except that, as Smeesters himself pointed out in his defense in Discover Magazine, the academic atmosphere in the social sciences, and particularly in psychology, effectively encourages such data manipulation to produce “statistically significant” outcomes.[...]

Many psychologists are aware of these issues and very concerned about them; in fact, most of the concern about this problem has been raised from within the scholarly community itself. This is how science works: by identifying problems and trying to correct them. Our field needs to change a culture in which null results are undervalued, and scholars should submit their data along with their manuscripts for statistical peer review when trying to get published. And we need to continue to look for ways of moving past “statistical significance” into more sophisticated discussions of how our results may or may not have real-world impact. These are problems that can be fixed with greater rigor and open discussion. Without any attempt to do so, however, our field risks becoming little more than opinions with numbers.
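
To make that last point concrete, here is a minimal sketch (my own illustration, not anything from the Time piece) of why "statistically significant" need not mean practically important: with a large enough sample, even a trivially small true difference between two groups produces a tiny p-value, which is exactly why effect sizes deserve more attention. All numbers below are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                                   # very large sample per group
a = rng.normal(loc=0.00, scale=1.0, size=n)   # "control" group
b = rng.normal(loc=0.03, scale=1.0, size=n)   # true difference: only 0.03 SD

t, p = stats.ttest_ind(a, b)                  # standard two-sample t-test
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

print(f"p = {p:.2e}")          # far below the usual 0.05 threshold
print(f"Cohen's d = {d:.3f}")  # yet the effect size is negligible

The test comes back emphatically "significant" while Cohen's d sits near 0.03, a difference almost no one would notice in the real world.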

9 comments:

  1. Can we trust poskim? They too are supposed to operate by certain moral and ethical standards. They too are human and capable of skewing the data in their favor.

  2. The issue of research being fabricated is as old as time. Kayin was the first liar. Today's professional world buys very little of what is proposed unless it is supported by empirical data. In science, everything must be presented in a format that permits replication, so that findings and conclusions are based on something more reliable than a single study. Most professional journals in the health field (as well as others) are focused on reports of research. This makes it tempting to create one's own facts, in the hope that no one will notice and no one will bother to replicate. Throughout recent history, there have been well-published researchers and professionals who violated this basic ethic. Many were exposed, losing the awards that had been bestowed on them and entering the halls of shame. This should not lead to concluding that the bulk of published science is also fiction. That is as silly as fabricating facts.

    Having noted this, it is wise to look at the markets in health care, for example. Look at what sells, whether it has supportive research, or whether the trends are based on sheer marketing. In the long run, those marketing tactics that work will be reused and popular; those that are unproductive will fade into history.

Henrich et al., “The Weirdest People in the World?” (Behavioral and Brain Sciences, 2010)

    ABSTRACT:

    Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

    PDF (75pp) via: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.187.3268

  4. I agree with Nemo to some extent.

    However, having interacted with secular academics, I find many of them are actually very dishonest intellectually. The difference is that a frum scholar, in general, will have some yiras Shamayim (there may be exceptions to the rule). So even with those to whom I am diametrically opposed, we can always find some halacha or moral obligation to agree on. Secular academics, by contrast, have no yirah whatsoever, and very often pursue their own self-interest at almost any cost. Again, this is not a general rule, but it is sadly a fact.

  5. I interact with secular academics as part of my job. I have no doubt that Rubam K'Kulam are completely honest and would not engage in such practices. I understand the pressure to publish or perish; we all do. But certainly in my area, most results require that the data be made available for others to re-test and affirm, should they wish. Eventually, anyone who cheats will get caught out. This is old news, and it simply affirms the fallibility of man; yes, every man.

  6. How Reliable are Research Studies?
    From Karen Lee Richards, former About.com Guide
    Created: July 6, 2006
    About.com Health's Disease and Condition content is reviewed by the Medical Review Board
    Every few days, new research studies come across my desk. While it is exciting when one of those studies reports on a drug or therapy that holds promise for effectively treating fibromyalgia (FM) or chronic fatigue syndrome (CFS), history has taught me to be skeptical.
    A History of Deception
    A couple of years ago the pharmaceutical giant GlaxoSmithKline found itself in hot water when it was discovered they had withheld some of the results of research studies done on their antidepressant Paxil. Apparently they had released the studies with positive results and withheld those that revealed serious negative effects.
    A similar situation occurred when Merck and Pfizer waited until last year to reveal clinical trial results showing that the Cox-2 inhibitors Vioxx, Celebrex and Bextra could pose cardiovascular risks. I wish I could say these were isolated incidents. Unfortunately, we don’t know what we don’t know.
    A Newer Study
    Recently I awoke to a TV newscast reporting that a study had shown Tylenol could cause liver damage even when taken according to the maximum dosage recommended on the package. I was naturally concerned, since many people with FM and CFS take Tylenol (acetaminophen), either alone or in prescriptions that combine acetaminophen with another medication (e.g., Ultracet). However, my internal caution light was flashing, so I began to dig for more information.
    I discovered that the reported study was conducted using a very small number of participants for a very short period of time, and that previous studies conducted by the manufacturer had yielded quite different results. Although I have no reason to question the validity of the study itself, it is a small study and doesn’t give the whole picture. I don’t think it’s sufficient information on which to make a decision.
    Paid Endorsements
    We all know that pharmaceutical companies pay for much of the medical research that is done. We also know they pay physicians to write professional articles promoting the positive aspects of their various drugs. What you may be surprised to learn is that now, according to a May 9 article in The New York Times, they are also paying doctors to write professional articles pointing out the negative aspects of their competitors’ drugs.
    What Is Being Done?
    Concerning the “scientific” articles, many doctors and health care advocates are calling for full disclosure of the link between pharmaceutical companies and medical authors, up to and including the amount of the fee paid to the doctor writing the article. As for the pharmaceutical companies revealing all of their research results, after the Paxil incident, drug makers promised to provide more research information. While some drug companies have significantly increased their reporting, others are dragging their feet.
    Until we have regulations requiring full disclosure, the best we can do as patients is to check every research study to see who conducted it, where it was conducted and, most importantly, who funded it. We should look at any medical article with a skeptical eye. If there seems to be an obvious bias pro or con, I would be hesitant to give it much credibility.
    Sources: Watkins, P.B., et al. Journal of the American Medical Association, July 5, 2006; vol. 296, pp. 87-93. “Study suggests possible risk from Tylenol.” MSNBC, July 4, 2006. Carlat, Daniel. “Generic Smear Campaign.” The New York Times, May 9, 2006. Berenson, Alex. “Despite Vow, Drug Makers Still Withhold Data.” The New York Times, May 31, 2005.

  7. Fraud Case Seen as a Red Flag for Psychology Research
    By Benedict Carey, The New York Times
    A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.
    The psychologist, Diederik Stapel, of Tilburg University, committed academic fraud in “several dozen” published papers, many accepted in respected journals and reported in the news media, according to a report released on Monday by the three Dutch institutions where he has worked: the University of Groningen, the University of Amsterdam, and Tilburg. The journal Science, which published one of Dr. Stapel’s papers in April, posted an “editorial expression of concern” about the research online on Tuesday.

  8. The scandal, involving about a decade of work, is the latest in a string of embarrassments in a field that critics and statisticians say badly needs to overhaul how it treats research results. In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.
    “The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”
    In a prolific career, Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.
    In a statement posted Monday on Tilburg University’s Web site, Dr. Stapel apologized to his colleagues. “I have failed as a scientist and researcher,” it read, in part. “I feel ashamed for it and have great regret.”
    More than a dozen doctoral theses that he oversaw are also questionable, the investigators concluded, after interviewing former students, co-authors and colleagues. Dr. Stapel has published about 150 papers, many of which, like the advertising study, seem devised to make a splash in the media. The study published in Science this year claimed that white people became more likely to “stereotype and discriminate” against black people when they were in a messy environment, versus an organized one. Another study, published in 2009, claimed that people judged job applicants as more competent if they had a male voice. The investigating committee did not post a list of papers that it had found fraudulent.
    Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.
    In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had admitted, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as having been predicted from the start, and about 1 percent admitted to falsifying data.
    Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
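
    As an aside on how audits like this can work in practice, the sketch below is my own illustration (not Wicherts and Bakker's actual procedure) of the simplest kind of consistency check: recompute the two-tailed p-value implied by a reported t statistic and its degrees of freedom, and flag results where it disagrees with the p-value the paper reports. The function name, the reported numbers, and the tolerance are all hypothetical.

    from scipy import stats

    def check_reported_t(t_value, df, reported_p, tol=0.005):
        """Recompute a two-tailed p from t and df; True means consistent."""
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)
        return recomputed_p, abs(recomputed_p - reported_p) <= tol

    # Hypothetical reported result: t(28) = 2.10, p = .02
    recomputed, ok = check_reported_t(2.10, 28, 0.02)
    print(f"recomputed p = {recomputed:.3f}, consistent: {ok}")
    # prints roughly 0.045: the reported .02 would overstate the evidence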

  9. The American Psychological Association, the field’s largest and most influential publisher of results, “is very concerned about scientific ethics and having only reliable and valid research findings within the literature,” said Kim I. Mills, a spokeswoman. “We will move to retract any invalid research as such articles are clearly identified.”
    Researchers in psychology are certainly aware of the issue. In recent years, some have mocked studies showing correlations between activity on brain images and personality measures as “voodoo” science, and a controversy over statistics erupted in January after The Journal of Personality and Social Psychology accepted a paper purporting to show evidence of extrasensory perception. In cases like these, the authors being challenged are often reluctant to share their raw data. But an analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar, found that the more reluctant scientists were to share their data, the more likely it was that the evidence contradicted their reported findings.
    “We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’ ”
    But reviewers working for psychology journals rarely take this into account in any rigorous way. Neither do they typically ask to see the original data. While many psychologists shade and spin, Dr. Stapel went ahead and drew any conclusion he wanted.
    “We have the technology to share data and publish our initial hypotheses, and now’s the time,” Dr. Schooler said. “It would clean up the field’s act in a very big way.”


ANONYMOUS COMMENTS WILL NOT BE POSTED!
Please use either your real name or a pseudonym.