Posts tagged neuroethics

fMRI as a lie detector: and another thing…

Aside from the common problems of reliability and general acceptance in the scientific and legal fields (issues with any new technology), there is another problem with using fMRI as a lie detector in the courts, one often ignored by brain-porn skeptics and perhaps the easiest to explain:

Defendants cannot be forced to testify against themselves — the Fifth Amendment. So the legal and ethical question here is: If the police put you into a machine that’s reading your mind, are you being forced to testify against yourself? At present, a person can be forced to surrender DNA. Is an f.M.R.I. scan the same thing? - Dr. Matthew Liao, neuroethicist. [via]

[Giving] rise to the question of the free will to do evil: Advances in neuroscience may result in a device that makes it impossible for a person to complete a particular action, say murder or rape. The use of such a device would require careful thought and ethical review. Do we have the right to remove the ability to make a choice, even a wrong choice, from another person?
Arguing that We Have No Free Will [via]. The irony being that you may not have control over your actions, but someone else could. 
WARning to Neuroscientists
Britain’s Royal Society recently released a report on the “possible benefits of neuroscience to military and law enforcement”. Areas of concentration include military training, performance enhancement, neuropharmacology or “Botox for the brain” (to combat fatigue or erase painful memories), and using fMRI for screening, recruiting, and other types of task training. This isn’t new, and it’s proposed in a positive light: improving military efficiency, which translates into a big budget win.
But in nearly the same breath as the talk of neural processing research to help rehabilitate wounded soldiers (e.g., trauma or prosthetic limbs), using these applications against the enemy is ever so briefly mentioned. An example would be the development of neuro-weaponry: chemical or biological “anesthetic agents” that would modify or incapacitate the enemy’s central nervous system, or that could be used in riot control. The report also mentions “the use of devices known as brain-machine interfaces (BMIs), which connects soldiers’ brains directly to military technology, such as drone aircraft and weapons.” [via]
So what’s the big deal?

"As a scientist I dislike that someone might be hurt by my work," Vince Clark, a cognitive neuroscientist at the University of New Mexico, told the British newspaper The Daily Telegraph. "I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don’t know how to deal with that." [via]

To which the Royal Society says, buck up (basically), stressing that researchers should just “be aware of the potential uses that your work may be put to in the future.” [via]

Prof. Rod Flower, one of the members who chaired the report, suggests these investigations are similar to GPS, which was first used by the military and is now everywhere: we are each basically a walking GPS, via our cars and cell phones. The idea that some of the applications governments are looking into might one day be just as commonplace is remarkable, partly inevitable, and possibly deplorable.
The full report with recommendations - here.

Adam Kolber’s “Unintentional Punishment”


Abstract:
Theorists overwhelmingly agree that in order for some conduct to constitute punishment, it must be imposed intentionally. Some have argued that a theory of punishment need not address unintentional aspects of punishment, like the bad experiences associated with incarceration, because such side effects are not imposed intentionally and are, therefore, not punishment.

In this essay, I explain why we must measure and justify the unintended hardships associated with punishment. I argue that our intuitions about punishment severity are largely indifferent as to whether a hardship was inflicted purposely or was merely foreseen. Moreover, under what I call the “justification symmetry principle,” the state must be able to justify the imposition of the side effects of punishment because you or I would have to justify the same kind of conduct. Therefore, any justification of punishment that is limited to intentional inflictions cannot justify a punishment practice like incarceration that almost always causes side effect harms. via

It’s no secret Kolber is one of my favs… and in this area, I know of no one who compares. His work is packed with brilliant ideas directly related to my interests in neurolaw and ethics. His latest piece re: punishment is no exception. If we believe that punishment only exists when it is intentional, we are naive and mistaken. He provides examples of the unintentional side effects of punishment, such as when solitary confinement for protection from other inmates has the same effects (extreme isolation, light/movement deprivation, etc.) as if it were imposed as a “purposeful infliction of punishment”. Other examples are forced celibacy, severe depression, complete denial of family visitation, and the way the length and harshness of a prison term fall on individual sensitivities or biological differences.
Even when two people commit the same crime and receive the same punishment, no two experiences or perceptions of it are the same. Nonetheless, there should be an attempt to measure the projected experience. An interesting bit he brings up surrounds the story “If a Siamese Twin Commits Murder, Does His Brother Get Punished Too?” Via.

One option is to incarcerate the pair but compensate the innocent twin, just as we would pay a prison guard. But if the innocent twin is confined for a very long time, he may not have good opportunities to spend money, and it is not obvious how we would determine an appropriate level of compensation.

Kolber’s “justification symmetry” principle says “that if you or I must have a justification for risking or causing some harm, then so must any person who risks or causes the same kind of harm in the name of punishment. In other words, a complete justification of punishment will tell us why, by virtue of being just punishment, some ordinarily impermissible behavior is made permissible (…) even a state actor like a prison guard or the police.” 
It all begs: retributivists.  How do they work?

Memory Erasing Drugs: An Update.

As you may know, one of our favorites, Neuroskeptic, posted today about Adam Kolber’s recent comment piece in Nature about memory-erasing/dampening drugs. You remember Kolber from the Neuroethics and Law Blog.

Kolber starts us off, though his comment is behind Nature’s paywall… so we have this:

A comment I wrote for Nature entitled, “Give Memory-Altering Drugs a Chance,” was published today. I fought hard to have the paper available free on the Internet, but in the end, I lost the battle. You can find my other papers on the topic here, here, and here. via

So give those a look. Then we have Neuroskeptic (who I’m glad wrote about this), who questions the example used in the comment of how memory-erasing drugs could benefit a rescue worker and offers that a trauma victim would have been a more apt candidate, on the basis that:

A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity. via

Great point, though I question to what extent forgetfulness drugs affect identity/personality and future choices; wouldn’t that depend on whether we’re talking about actually erasing memories or merely weakening them? Anyhoo, Kolber acknowledged, answered, and expanded upon the critique here:

…just as one can choose to be a rescue worker and make it part of one’s identity, can’t one change his mind?  Can’t someone decide that even though he thought he was cut out for this line of work, in fact, he failed to anticipate its psychological toll in some instance?

If you’ve read this far, you should really read the entire exchange. It’s a fascinating topic that opens up interesting divides on many levels (science, law, ethics, philosophy/self, medicine, policy, etc.), and having two of my favorites discuss it today doesn’t hurt either.