Posts tagged ethics

US brain project puts focus on ethics

…memories are surprisingly pliable. In the past few years, researchers have shown that drugs can erase fearful memories or disrupt alcoholic cravings in rodents. Some scientists have even shown that they can introduce rudimentary forms of learning during sleep in humans. Giordano says that dystopian fears of complete human mind control are overblown. But more limited manipulations may not be far off: the US Defense Advanced Research Projects Agency (DARPA), one of three government partners in the BRAIN Initiative, is working towards ‘memory prosthetic’ devices to help soldiers with brain injuries to regain lost cognitive skills. [via]

…or other manipulations to the brain that control the mind.

“Monstrous Crimes, Framing, and the Preventive State: The Moral Failure of Forensic Psychiatry” - International Library of Ethics, Law, and the New Medicine

Monsters and predators frighten, entertain, and disgust us. The idea of a creature that is a volatile mixture of human and animal parts (the monster) triggers our visual and visceral imagination perhaps more than any other image. The fear of predation – literally, eating another’s flesh – disgusts and repels, but like rubberneckers who slow down to witness accidents, our voyeurism seems unconstrained by shame. The monster and the predator threaten us by threatening to rend the social fabric and bring about a state of nature in which, as Hobbes famously wrote, we are engaged in a war of all against all, and life is nasty, brutish and short. We demand that the government and its legal process protect us from the monsters and predators in our midst, which has resulted in a quest for security at the expense of the protection of the rights of citizens that runs parallel with the quest for protection from “terrorists,” as reflected in the epigram to this book. The referent of the “terrorist,” however, is often simply somebody who looks, acts, or talks in a way that is vaguely Middle-Eastern. Similarly, people who look, act, or talk like our vaguely sketched stereotype of what constitutes a sex offender, an image that has come to constitute a monstrous predator, trigger a panic as well (Lancaster 2011a, b). [via, img]

The idea being that if dehumanizing terminology were eliminated, it would change our perception of offenders and how treatment or punishment is delivered, creating a more ethical justice system. Changing language so we see humans (not monsters) rather than educating about why certain monstrous acts are performed by some humans…there might be a term for that already, and a pair of tall boots to get you through it. 

Lying, for free

themoralperspective:

Can we “radically simplify our lives and improve society by merely telling the truth in situations where others often lie”? Author Sam Harris seems to think so — he defended that proposal in his recent 48-page e-book, Lying.

But while Lying has been on sale for months at a cost of $1.99, Harris is now offering the e-book for free. Why, you ask?

Because of Jonah Lehrer, the now-disgraced journalist who was caught fabricating quotes:

I consistently meet smart, well-intentioned, and otherwise ethical people who do not seem to realize how quickly and needlessly lying can destroy their relationships and reputations. This is why I wrote a short ebook on the subject. Since it contains more or less everything I want to say in response to the Lehrer debacle, I’m offering the full text of LYING as a free download for the rest of the week.

Better get reading. 

scienceofthekgb:

In light of a recent Wired article about psychopharma use during Gitmo interrogations, we revisit this topic:

In your experience, what are the types of techniques of psychological torture used?

xKGB:  Forcible narcotics addiction - here you can also use depressants, stimulants, opiates, or hallucinogens (psychedelics): depressants (alcohol, barbiturates, antianxiety drugs) with effects of euphoria, tension reduction, disinhibition, muscle relaxation, drowsiness; stimulants (cocaine, amphetamine, methamphetamine/crystal meth).

Once you’ve made an addict, information can be easily obtained, the drug has now become more important than the protection silence offered… if you are not mad by then.

***

According to reports, Haldol is the “sedative” of choice in Gitmo. A footnote in the Pentagon’s inspector general report (p.4) explains:

Haldol is an antipsychotic used in the treatment of schizophrenia and, more acutely, in the treatment of acute psychotic states and delirium. Side-effects of Haldol include anxiety, dysphoria, and an inability to remain motionless. 

What we know:

“Prisoners inside the U.S. military’s detention center at Guantanamo Bay were forcibly given ‘mind altering drugs,’ including being injected with a powerful anti-psychotic sedative used in psychiatric hospitals. Prisoners were often not told what medications they received, and were tricked into believing routine flu shots were truth serums.” 

“A patient on Haldol can develop long-term movement disorders and life-threatening neurological disorders. […] But did they consent? (No.) Did the medics consult the prisoners’ medical background before administering drugs? Were prisoners still under the effect of the drugs during interrogation? The report concludes: very likely.” [via]

I then asked my contact, “How reliable is the information?”

Not so secretly fascinated by robots these days, I found this golden nugget that was presented this week at the AISB/IACAP World Congress 2012 in Birmingham, UK, about robots, theory of mind and empathy.

MORAL COGNITION & THEORY OF MIND

The dangers inherent in autonomous systems initiating kill orders are a central concern for critics of military robots. They commonly point out the fact that present day robots lack situational awareness and are unable to distinguish combatants from non-combatants.  Nor will robots be likely to have the necessary capabilities to perform these tasks in the near future. Distinguishing friend from foe, for example, is also a difficult challenge for humans, but we bring cognitive resources to bear on the problem that are unavailable to robots. [via]

img: Nico, who exhibits “primitive self awareness,” as he is able to recognize himself in a mirror - from MIT’s Kevin Gold, a collaborator of Yale’s Brian Scassellati.

The Economist, on the ethical case for using robots, urges us to develop ways to deal with the dilemmas associated with robotics: “As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency.”
This is especially relevant in military use.

Campaign groups such as the International Committee for Robot Arms Control have been formed in opposition to the growing use of drones. But autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. [via] [img]

H/T themoralperspective

The Moral Perspective: The more you think, the less you cheat

themoralperspective:

A new study in the journal Psychological Science suggests that the human tendency to cheat is a natural impulse, and that given some time for reflection, humans are less likely to cheat.

The research experiment — conducted by Shaul Shalvi, a psychologist at the University of Amsterdam, and his…

Which has always amazed me, neuroanatomically speaking. In 2002, the first fMRI research on lying was published, and it found that the distribution of deception-related activation in the brain suggests that lying involves both conflict and suppression of the truth. So even if lying and cheating are instinctual, that doesn’t make them any less neurally demanding.  

"Paved with Good Intentions: Sentencing Alternatives from Neuroscience and the Policy of Problem-Solving Courts"

Abstract:      
Advances in basic and clinical neuroscience will soon present novel options for prediction, treatment, and prevention of antisocial behavior, particularly drug addiction. These hard-won advances have significant potential to improve public health and safety and increase efficiency in delivery of treatment and rehabilitation. Such therapies will undoubtedly find a large portion of their target population in the criminal justice system as long as drug possession remains criminalized.

Improvements, however, are not without risks. The risks stem not only from the safety and side effect profile of such treatments, but also their insertion into a specialized criminal justice and sentencing system of “problem-solving courts” that may be overburdened, overpoliticized, undertheorized, and lacking sufficient checks and balances on institutional competency. While offering substantial therapeutic benefits, such developments might also short-circuit a critical policy discussion about the nature of drug use and its criminalization. 

 - Emily R. Murphy, Stanford Law School [via]

"Emotion and Morality in Psychopathy and Paraphilias"

Many sex offenders suffer from a paraphilia. Paraphilias are disorders characterized by recurrent and intrusive deviant sexual impulses. One paraphilia that shares some characteristics with psychopathy is sexual sadism.

Sadism, like psychopathy, is characterized by callousness, anger, and low empathy. Sadists derive sexual gratification from inflicting physical or emotional pain and suffering on others, and may thus represent the extreme end of the “moral sensitivity spectrum” ranging from compassion to callousness. They show increased arousal (measured by penile plethysmograph responses) when perceiving people in pain, in sexual or nonsexual situations.

While this clearly represents profound moral insensitivity, the capacity for “normal” moral judgment has not been directly investigated in this disorder. Sadists may be less likely than other sex offenders to show cognitive distortions that justify moral transgressions, since an understanding of the immorality of their actions (causing harm) is precisely what facilitates sexual gratification. Thus, like psychopaths they appear to understand the wrongness of their actions. [via]

Unlike psychopaths, who know right from wrong but just don’t care, I suggest that sadists, who also enjoy inflicting pain/suffering, would show increased activation in the domain-specific frontoinsular (FI) cortex, hinting at a higher sense of a certain type of empathy (comparatively) and regulation of moral judgment, depending on the amount of emotional processing exercised. Pleasure and reward centers should show similar activation. wah-psh.

Harenski, C., & Kiehl, K. (2011). Emotion and Morality in Psychopathy and Paraphilias. Emotion Review, 3(3), 299–301. DOI: 10.1177/1754073911402378

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108. PMID: 11557895

Greene’s “dual-process theory” of moral decision-making posits that rationality and emotion are recruited according to the circumstances, with each offering its own advantages and disadvantages. He likens the moral brain to a camera that comes with manufactured presets, such as “portrait” or “landscape,” along with a manual mode that requires photographers to make adjustments on their own. Emotional responses, which are influenced by humans’ biological makeup and social experiences, are like the presets: fast and efficient, but also mindless and inflexible. Rationality is like manual mode: adaptable to all kinds of unique scenarios, but time-consuming and cumbersome.

“The nice thing about the overall design of the camera is that it gives you the best of both worlds: efficiency in point-and-shoot mechanisms and flexibility in manual mode,” Greene explains. “The trick is to know when to point and shoot and when to use manual mode. I think that this basic design is really the design of the human brain.”

The Biology of Right and Wrong (via theatlantic)

— I was hoping the Atlantic actually did a piece on Joshua Greene.  The Greene/Cohen paper, “For the law, neuroscience changes nothing and everything,” from 2004, is one of the best-known neurolaw papers out there. It definitely got me started in this area.
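
Greene’s camera analogy maps neatly onto a toy decision loop. Here is a minimal sketch of the dual-process idea in Python (entirely my own illustration: the preset table, the cost comparison, and the time-pressure switch are invented stand-ins, not anything from Greene’s papers):

```python
# Toy illustration of Greene's "camera" analogy for dual-process moral judgment.
# Everything here is hypothetical; it models the *shape* of the theory
# (fast/inflexible presets vs. slow/flexible deliberation), not a real cognitive model.

# Emotional "presets": fast, mindless, inflexible pattern -> response mappings.
PRESETS = {
    "stranger_in_pain": "help",
    "direct_physical_harm": "condemn",
    "fair_exchange": "approve",
}

def deliberate(situation, context):
    """'Manual mode': slow, effortful weighing of costs and benefits."""
    return "approve" if context.get("benefit", 0) > context.get("harm", 0) else "condemn"

def moral_judgment(situation, context=None, time_pressure=True):
    # Point-and-shoot: under time pressure, a matching preset wins.
    if time_pressure and situation in PRESETS:
        return PRESETS[situation], "fast/preset"
    # Otherwise switch to manual mode and reason it out.
    return deliberate(situation, context or {}), "slow/deliberate"

print(moral_judgment("direct_physical_harm"))        # ('condemn', 'fast/preset')
print(moral_judgment("trolley_switch",
                     {"benefit": 5, "harm": 1},
                     time_pressure=False))           # ('approve', 'slow/deliberate')
```

The trade-off Greene describes is visible in the sketch: the preset path is cheap but only covers situations it was built for, while the deliberate path handles novel cases at the cost of explicit weighing.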

"Are Doing Harm and Allowing Harm Equivalent? Ask fMRI"
Most people, as well as the law, recognize that doing harm is morally worse than not doing anything when you know there to be a risk, thereby allowing harm to just happen. To that end, you would assume that judging the former to be worse is cognitively more demanding than the latter. Well, so did Fiery Cushman. He does research surrounding neuroethics up at Brown to see “…how the brain has evolved to process moral dilemmas and make moral judgments.”

People typically say they are invoking an ethical principle when they judge acts that cause harm more harshly than willful inaction that allows that same harm to occur. That difference is even codified in criminal law. A new study based on brain scans, however, shows that people make that moral distinction automatically. Researchers found that it requires conscious reasoning to decide that active and passive behaviors that are equally harmful are equally wrong. Via

This is interesting because you would think that in order to decide that doing harm is worse than non-acting that results in harm, it would require lots of conscious reasoning to arrive at that point, like most moral dilemmas. But it turns out, that’s the easy part for our brains requiring less activity…the hard part for our dorsolateral prefrontal cortex, which via the fMRI scans show evidence of using more “careful deliberative controlled thinking” is after weighing the two, deciding that they are both as bad. Now that makes even more sense, huh.
I’ve read enough of his fascinating work to contact Dr. Cushman about an idea I had a couple months ago to see if collaborations with him are possible. He is very accessible, but it appears my new lab- AKA the hardest lab to get into ever, is taking me away from this.  Yes, I’m still waiting on that confirmation… fingers crossed so hard it hurts. 

Above: Looking at a moral choice Test subjects who feel that doing active harm is morally the same as allowing harm to occur will show more brain activity. The notion that active harm is worse appears to be automatic, a psychological default requiring less thought. (Credit: Cushman Lab/Brown University)

"Are Doing Harm and Allowing Harm Equivalent? Ask fMRI"

Most people, as well as the law, recognize that doing harm is morally worse than not doing anything when you know there to be a risk, thereby allowing harm to just happen. To that end, you would assume that judging the former to be worse is cognitively more demanding than the latter. Well, so did Fiery Cushman. He does research surrounding neuroethics up at Brown to see “…how the brain has evolved to process moral dilemmas and make moral judgments.”

People typically say they are invoking an ethical principle when they judge acts that cause harm more harshly than willful inaction that allows that same harm to occur. That difference is even codified in criminal law. A new study based on brain scans, however, shows that people make that moral distinction automatically. Researchers found that it requires conscious reasoning to decide that active and passive behaviors that are equally harmful are equally wrong. Via

This is interesting because you would think that deciding that doing harm is worse than not acting (when inaction results in harm) would require lots of conscious reasoning, like most moral dilemmas. But it turns out that’s the easy part for our brains, requiring less activity…the hard part for our dorsolateral prefrontal cortex, which the fMRI scans show engaging in more “careful deliberative controlled thinking,” is deciding, after weighing the two, that they are both as bad. Now that makes even more sense, huh.

I’ve read enough of his fascinating work to contact Dr. Cushman about an idea I had a couple months ago to see if collaborations with him are possible. He is very accessible, but it appears my new lab- AKA the hardest lab to get into ever, is taking me away from this.  Yes, I’m still waiting on that confirmation… fingers crossed so hard it hurts. 

Above: Looking at a moral choice. Test subjects who feel that doing active harm is morally the same as allowing harm to occur will show more brain activity. The notion that active harm is worse appears to be automatic, a psychological default requiring less thought. (Credit: Cushman Lab/Brown University)

So today, an Italian court reduced the sentence of a murderer after the defence team used neuroimaging and genetic tests that, they argued, “proved the partial mental illness of the defendant,” effectively mitigating a life sentence to just 20 years.

This makes some neuro-people go bananas, since only a handful of people believe brain scan technology is appropriate for the courts at this time (especially if it will help mentally ill people get treatment)…notwithstanding, we use sketchy evidence that hasn’t been through the rigors and scrutiny of academic/scientific research all. the. time.

“The decision was made not only on the basis of psychiatric assessments, but also morphological analysis and neuroscience on the brain and its genetic heritage.” via

Questions:

1. Is this a correct translation of the quote? *paging SciPsy* Cause if it is, it’s really not entirely justifiable to focus our attention solely on the brain scan, is it? What type of psychiatric assessments were used? What did they find? What type of expert did the prosecution counter with? And the MAOA gene tests, amirite?

2. Is it customary in Italy for an offender to receive psychiatric treatment in a case where psychiatric/neurological evidence was successful in showing diminished capacity? Or do they receive just incarceration?

3. Bueller?

I get that the last thing we need is another type of unreliable evidence allowed in court, but guess what neuroscientists?

Taking your kickball, going home, and leaving the lawyers to play by themselves won’t make it go away. To be clear, what I mean is: discussing the flaws of applying fMRI scans to criminal behavior or aggression among scientists is super…but you’re not accomplishing the moratorium you so desperately want. They aren’t reading your articles, huh. So why aren’t more scientists who are concerned with this area reaching out to educate lawyers on why certain brain scans are not appropriate? You know what’s easier than waiting for research to happen, be published, and then be accepted into social consciousness decades later? (I’m looking at you, eyewitness identification.) Writing a note and publishing it in a law journal. 

Here is a forum where you have the attention of lawyers, legal scholars and law students. Here is a place where you can continue the dialogue with those who intend to use these tools, instead of huffing at the absurdity of it on twitter. Here is a place where lawyers/judges can get an intro on how to digest/interpret this type of evidence when it is introduced…or at the least, find a godforsaken expert to explain it. 

Yes- this has been done to an extent, and this suggestion is probably moot, but if setting a dangerous precedent is the main concern (and not just protecting your own academic integrity), then clearly it hasn’t been done enough.

In “Tinkering With Our Ethical Chemistry”, Guy Kahane, deputy director of the Oxford Centre for Neuroethics, writes:

Humans are born with the capacity to be moral, but it is a limited capacity which is ill equipped to deal with the ethical complexities of the modern world. For thousands of years, humans have relied on education, persuasion, social institutions, and the threat of real (or supernatural) punishment to make people behave decently. We could all be morally better, but it is clear that this traditional approach cannot take us much further. It is not as if people would suddenly begin to behave better if we just gave them more facts and statistics, or better arguments. 

So we shouldn’t be too quick to dismiss the suggestion that science might help—in the first instance, by helping us design more effective institutions, more inspiring moral education, or more persuasive ethical arguments. But science might also offer more direct ways of influencing our brains.

These are, of course, hypothetical questions. We don’t yet know what is possible. But it is better to begin the ethical discussion too early than too late. And even if “moral pills” are just science fiction, they raise deep questions. Will we want to take them if they ever become available? And what does it say about us if we won’t?   

Via.  Image: D. Sharon Pruitt (CC).   

When are you dead?

Little more than 40 years ago, a partially functioning brain would not have gotten in the way of organ donation; irreversible cardiopulmonary failure was still the only standard for determining death. But during the 1970s, that began to change, and by the early 1980s, the cessation of all brain activity — brain death — had become a widely accepted standard. In the transplant community, brain death was attractive for one particular reason: The bodies of such donors could remain on respirators to keep their organs healthy, even during much of the organ-removal surgery. Today, the medical establishment, facing a huge shortage of organs, needs new sources for transplantation. One solution has been a return to procuring organs from patients who die of heart failure. Before dying, these patients are likely to have been in a coma, sustained by a ventilator, with very minimal brain function — a hopeless distance from what we mean by consciousness. Still, many people, including some physicians, consider this type of organ donation, known as “donation after cardiac death” or DCD, as akin to murder.

This becomes especially interesting now that neurological tests have identified very specific brain activity in comatose patients, showing that they can hear questions and respond using their thoughts.

….they tested a young woman diagnosed as being in a vegetative state following a car accident. Although she was unresponsive and apparently unaware of her surroundings, she exhibited distinct patterns of brain activity when asked to imagine herself playing tennis or walking through the rooms of her house. As in healthy volunteers, imagining tennis activated motor planning regions in the woman’s brain, whereas picturing her house activated a brain region involved in recognizing familiar scenes. VIA

So, she is picturing going through the rooms of her house when you ask her to, and thinking about playing tennis just the same. And she wasn’t the only one. More patients were found to be able to communicate yes or no by activating the motor or mapping/memory parts of their brains, which are in two different locations.

These results show that a small proportion of patients in a vegetative or minimally conscious state have brain activation reflecting some awareness and cognition. Via
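
To make the yes/no scheme concrete, here is a minimal sketch of the decoding logic (my illustration, not the study’s actual pipeline: real analyses fit a general linear model to the BOLD time course, and the scores, threshold, and function below are assumptions for the example). “Imagine tennis” drives motor-planning regions and codes for yes; “imagine your house” drives place-recognition regions and codes for no; the answer is whichever signal is reliably stronger.

```python
# Hypothetical sketch of yes/no decoding from motor-imagery fMRI, in the spirit of
# Monti et al. (2010). Each "scan" is reduced here to two region-of-interest scores.

def decode_answer(motor_score, place_score, margin=0.5):
    """Map imagery-locked activation to an answer.

    motor_score: motor-planning activity ("imagine tennis" -> yes)
    place_score: scene/place-area activity ("imagine your house" -> no)
    margin: minimum separation before the classification is trusted.
    """
    if motor_score - place_score > margin:
        return "yes"
    if place_score - motor_score > margin:
        return "no"
    return "indeterminate"  # too close to call; repeat the question

print(decode_answer(motor_score=2.1, place_score=0.4))  # yes
print(decode_answer(motor_score=0.9, place_score=0.8))  # indeterminate
```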

We still pull plugs on coma patients and can’t even decide when a fetus is a human.  What counts as brain dead, and if we ever pull the plug, is that murder?  

Monti, M. M., Vanhaudenhuyse, A., Coleman, M. R., Boly, M., Pickard, J. D., Tshibanda, L., Owen, A. M., & Laureys, S. (2010). Willful Modulation of Brain Activity in Disorders of Consciousness. N Engl J Med, 362, 579–589.

Brain scans: too soon for the courts, but not interrogations.

Don’t get me wrong, this is my jam…and either way, we are working with memory here (short- vs. long-term); I’m just not sure how we can differentiate (and establish reliability) between the types of memories within both categories. I have not seen that explained yet. How can we tell the difference between a memory of an actual plan with intent and a related memory of something similar (a movie/story/hearsay with a similar plot)? If someone with expertise on memory would chime in, that would be nice. 

For the first time, the Northwestern researchers used the P300 testing in a mock terrorism scenario in which the subjects are planning, rather than perpetrating, a crime. The P300 brain waves were measured by electrodes attached to the scalp of the make-believe “persons of interest” in the lab.

The most intriguing part of the study in terms of real-word implications, Rosenfeld said, is that even when the researchers had no advance details about mock terrorism plans, the technology was still accurate in identifying critical concealed information.

“Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime-related details,” Rosenfeld said. “The test was 83 percent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity.”

Rosenfeld is a leading scholar in the study of P300 testing to reveal concealed information. Basically, electrodes are attached to the scalp to record P300 brain activity — or brief electrical patterns in the cortex — that occur, according to the research, when meaningful information is presented to a person with “guilty knowledge.” via
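
For a feel of the mechanics, here is a simplified sketch of the concealed-information logic (a toy under loud assumptions: the amplitudes and threshold are invented, and Rosenfeld’s actual “complex trial protocol” uses bootstrapped single-trial statistics that are not reproduced here). Probe items, details only someone with guilty knowledge would recognize, should evoke a larger P300 than matched irrelevant items; the reported 83 percent is simply 10 of 12 subjects correctly flagged.

```python
# Simplified concealed-information test: compare mean P300 amplitude for "probe"
# items (crime-relevant details) vs. "irrelevant" foils. Amplitudes and the
# threshold are made up; real analyses bootstrap single-trial ERP differences.
from statistics import mean

def knows_detail(probe_uv, irrelevant_uv, threshold_uv=2.0):
    """Flag concealed knowledge if probes evoke a reliably larger P300 (microvolts)."""
    return mean(probe_uv) - mean(irrelevant_uv) > threshold_uv

# Hypothetical per-trial P300 peak amplitudes (microvolts) at a parietal electrode.
probes      = [8.1, 7.4, 9.0, 8.6, 7.9]  # e.g., the city actually targeted in the mock plot
irrelevants = [4.2, 5.0, 4.8, 4.5, 5.1]  # foil cities the subject never planned against

print(knows_detail(probes, irrelevants))  # True -> "guilty knowledge" on this detail
print(round(10 / 12, 2))                  # 0.83 -> the "83 percent" headline figure
```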