On Policy, Morality, Duty, and Consequence
A not-very-secret: cash “taxi” items on expense reports sometimes don’t quite correspond to actual taxi fares.
Sometimes the traveler simply didn't remember the exact fare, which amounts to little more than an oversight or an honest mistake. But sometimes the traveler bought, say, a drink for someone, and while the expense-reporting policy requires no receipts for taxi fares under $25 or so, it doesn't allow buying drinks.
Another not-very-secret: Sometimes IT help-desk or computer-repair staff actually notice things that appear on the screens of users’ computers in the course of assistance or repairs. Most policies within help desks and computer-repair groups state clearly that staff are not to read messages or look at documents on users’ computers, and that if they happen to do so accidentally they are not to disclose what they’ve seen to anyone else.
And so to a scenario. A repair technician, A, is fixing a faculty member's computer. The repair succeeds, the computer comes back to life, and on the screen is a message the faculty member had received from a colleague: "Had a great time drinking with my friends last night, and better still, I wrote it all off to the University as taxi fares."
A has unintentionally become aware of a faculty member's claim to have violated University policy. Does A tell, or not? If A tells, that's a violation of the don't-disclose-content rule. If A doesn't tell, that's concealing, and therefore abetting, an apparent violation of University policy.
Most of us have a pretty easy time dealing with this one: privacy trumps expense reporting, A should say nothing, and that’s what the IT organization wants. But what if A tells? Should he or she be punished for violating the don’t-disclose rule, rewarded for helping to unearth fraud, both, or neither? Most of us, I expect, would mildly rebuke A but nevertheless thank him or her, keep the whole thing quiet, and move on.
A more nuanced scenario: same story, but a different technician, B, who is known to routinely look at material on repaired computers, and to tell stories — without identifying individuals — about what he’s seen (“You should have seen some of the pictures on a computer I fixed yesterday — I’m not going to say who it is, but I wish I got that kind of action”).
If B tells about the faculty member's alleged expense fraud, it's likely that B will be admonished or punished publicly even if his disclosure helps redress fraud. B presumably knows this, and so is very unlikely to say anything about the expense-fraud message. Again, most of us know how to think about this one: we really don't expect B to tell, but we also think that B's comeuppance is nigh given his history of peeking. The important point is that B's disincentives to tell are stronger than A's, even though the policy environment is identical.
Next scenario: same situation as before, except the technician is now C, and the message C sees on the repaired computer isn't about expense fraud. Rather, it's the faculty member's detailed discussion of plans to use a recently purchased handgun to murder the individual whose whistle-blowing about certain research improprieties ended the faculty member's tenure, job, and probably career.
To me, the answer to this scenario is clear: C must tell. That means C violates policy in exactly the way we decided A and B needn't, but that's what we want C to do. Why? Because threats to life morally trump threats to property. When C tells, the University will take protective, disciplinary, and legal action as appropriate. If the faculty member turns out to have been writing hypothetically rather than stating actual intent, there will probably be a complaint against C, and C may well lose his or her job as a result. C nevertheless must tell.
In the process of guiding A and B, we send staffers like C mixed signals. We tell IT staff that privacy is paramount, and that violations will be prosecuted. This doesn't stop staff from seeing things they shouldn't, but it usually keeps them from discussing what they see. When the issue is privacy versus minor fraud, or even research malfeasance, that may be the result we want. When the issue is life versus privacy, however, we want staff to speak up. To encourage this, we probably want to hold them harmless, or at least let them know that we'll judge each case on its circumstances.
Our challenge here — I’m channeling Larry Kohlberg and other moral-stage researchers, who were active at Harvard during my graduate-student years there — is to help staff understand that sometimes personal risk must give way to moral imperative — that is, for example, that trying to save someone’s life should outweigh risking one’s job. We need to hire and educate staff to understand this kind of moral hierarchy. We need to frame our policies to recognize and address the moral dilemmas that may arise. Most important, we need to behave sensibly and pragmatically when that’s the right thing to do, and make sure our staff understand that’s what we’ll do.
Cases to think about:
http://www.nytimes.com/1999/05/20/us/pornography-cited-in-ouster-at-harvard.html
http://www.nytimes.com/1998/11/13/nyregion/inquiry-on-child-pornography-prompts-a-resignation-at-yale.html
For further reading:
http://en.wikipedia.org/wiki/Kohlberg%27s_stages_of_moral_development