Ed Fenimore, Scientist

Ed Fenimore is probably not a household name, unless you’re a high-energy astrophysicist, in which case he’s ED FENIMORE: scientist, instrument builder extraordinaire, theorist, a rare case of an honest and able data analyst in a field of charlatans, and mentor of dozens of front-rank students.  Ed was a key figure in the process of turning the field of Gamma-Ray Burst studies from a freak show into an actual science.  He was also a key member of the High Energy Transient Explorer (HETE) team, of which I also was a member.

We had a scientific meeting in September 2009 to celebrate Ed’s retirement from Los Alamos National Lab (actually he’s not really retiring, just taking evasive action to escape management responsibilities.  Rumor has it he’s still in his office 6 days a week, to his wife Sue’s chagrin).  There were talks all day, mostly on GRBs and on Ed’s influence on the subject.  I gave the final talk of the day, titled “HETE-WXM: Fenimorean GRB Localization On A Shoestring”.  The PDF of my presentation is here.

At the end of the presentation, there’s a largely blank slide, entitled “Why Does Ed Always Think He Might Be Wrong (Even When He’s Right)?”. The slide has a single bullet, “A meditation on Ed’s unyielding commitment to scientific truth.” I showed that slide, then talked for about 15 minutes about some fairly deep things about science that I had learned from watching Ed at work. None of it was written down, which I’ve since come to regret, because I’m pretty sure it was worth preserving (and several other attendees told me they felt the same way). So here, now, as best I can reconstruct them (and cleaned up a bit, so as to be closer to what I would have said were I a better extemporaneous speaker), are those remarks.


As I alluded to above, and as others have mentioned in previous talks today, Ed has always been supremely suspicious of his own results, and has always insisted on redundant validations of any idea or measurement or analysis. Further, in scientific arguments, Ed has always been ready to concede that his results or ideas might in fact be wrong, and whenever a problem has arisen in a collaboration, he has always been the first to offer up his own work for re-examination. For example, when HETE was not observing the rate of GRBs that we had anticipated prior to launch, Ed put his flight trigger software under a microscope, trying to divine how it might be missing bursts. The software was in perfect working order, and the problem turned out to be that we had misunderstood the underlying distribution of GRBs in the (new) regime where HETE operated. But for a while, Ed was sure that the missing GRBs were his fault.

As another example: when I was a graduate student, I worked with Ed on the notorious “cyclotron lines” in GRB spectra. We had two examples of GRBs with really clear pairs of spectral absorption lines at certain times. The line spacing was characteristic of cyclotron absorption. It all fit, it was beautiful, it “proved” (wrongly, as it turned out) that GRBs were emitted from neutron stars in our own galaxy. When BATSE was launched and failed to confirm the presence of cyclotron lines in GRB spectra, I was sure the problem was with the BATSE SD responses (which did in fact have some problems). Ed, on the other hand, would occasionally joke that perhaps the lines occurred at the energy where we observed them because that was the “SPANAL constant” — the joke being that SPANAL was Ed’s “SPectrum ANALysis” code, which we had used to analyze the spectra when we found the lines. He was saying that the lines might very well be artifacts of some sort.

I thought he was just whistling past the graveyard, making grim jokes for the fun of shocking his student. As it turned out, however, the lines were probably an instrumental artifact due to a poorly-calibrated subtlety in the GINGA GBD response. When the realization dawned on me, I was shocked and disappointed. Ed was also disappointed, but he was not shocked, or even surprised. He’d always been prepared for the possibility that he’d been wrong.

In my early days of knowing Ed, when I was a graduate student, I found his attitude puzzling. Here was a guy who, by virtue of his talent and intellect and experience, could have simply run over anyone he was arguing with without even slowing down. But he never did. Instead, he’d start by conceding that he was probably wrong, even though he rarely was. This was weird.  Scientists are, generally speaking, a pretty opinionated bunch, who place a high premium on robust debate.  Ed could sometimes seem as if he wasn’t holding up his end of those debates.  I wondered briefly whether he suffered from some kind of self-esteem problem — “briefly” because nobody can spend any time with him without realizing that self-esteem is simply a non-issue for him.  So what was wrong with the guy?  Didn’t he care about being right?

In the years since, I’ve come to understand Ed’s attitude towards “being right”, and that process has taught me a great deal about the operations of science, and about the psychological attitudes of great scientists.  Here’s what I believe I’ve learned:

Those of us in this room are members of a very exclusive club, a tiny sliver of humanity.  What distinguishes us scientists from the vast majority of other people is not how smart or knowledgeable or geeky we are — there are plenty of non-scientists who rank near the top in any of those categories.  No, what distinguishes scientists as a category from most other people is our attitude towards truth.

It’s a funny thing, but for a bunch of people so concerned with finding out the truth about things in the world, we don’t often ask ourselves what we mean by “truth”.  What is truth?  What do we mean when we say that a belief is true?  How do we ascertain the truth of a belief, and how should we order our reasoning so as to increase our stock of true beliefs, and reduce the supply of false ones?

“Truth” is a slippery concept, which different people use to mean different things at different times, usually without realizing it.  In general terms, truth is an attribute of belief — some beliefs are “true”, or “truer” than others, although how one ascribes a truth value to a belief is subjective and disputable.  This is in no small measure due to the fact that there are several significantly different meanings of the term “truth” in common use.  Among the most common are:

  • Emotional/Intuitive truth — what we know, requiring no justification or evidence;
  • Revealed truth — a variant of emotional truth, but divinely sourced;
  • Rhetorical truth — what we can persuade others of. Analyzed in hilarious detail by Harry Frankfurt in his gem of an essay, “On Bullshit”.  Politicians and postmodernists tend to shelter under this one;
  • Deductive truth — logical implication, the truth of mathematicians;
  • Inferential/Evidentiary truth — a generalization of deductive truth, in which the direction of logical implication is reversed, so as to get from evidence to conclusions.

The “truth” of science is that last one, evidentiary truth.  It is the payload at the end of a chain whose links are evidence.  This variety of truth is the framework for all discussion and debate in science.  Everyone making any kind of serious scientific claim is appealing to this notion of truth, and everyone who isn’t using this standard of truth in a scientific argument is just blowing CO2 bubbles into the air.

One can imagine justifying the use of different variants of truth in different contexts.  After all, emotional truth is more useful than evidentiary truth in literary criticism, and evidentiary truth can actually be dangerous to political careers (rhetorical truth is much safer).  However, in those cases of beliefs that we regard as universally, soundly, indisputably knowable, the standard that is applied is invariably that of evidentiary truth.  For the securing of sound objective knowledge, it’s the only game in town.

If we accept for a moment the radical view that the only valid (or at least objectively knowable) meaning of the word “truth” is in the sense of evidentiary truth, then something really interesting happens when we use this notion of truth to analyze the structure of belief.  We find that beliefs have two chief attributes:  One is truth (in the sense of evidentiary support).  The other is passion:  the emotional attachment that we have to a belief.  Truth and passion are the main characteristics of any belief.  Each is logically independent of the other — the presence of one says nothing about the extent of the other.  We all hold dispassionately to some beliefs universally agreed to be true.  And I’m sure we can all think of one or two examples of false beliefs that various people, at various times, have held passionately enough to kill for.

Most people don’t trouble themselves with the distinction between truth and passion — and deeply religious people would probably reject it outright, as a menace to their world view.  And because most folks won’t distinguish between the two, they effectively wind up valuing passion over truth.

But to us scientists, truth is the highest value.  It’s not that we don’t hold passionate beliefs — we’re human, so of course we do.  But if anyone were to write a charter for us to sign onto, that document would say right at the top, near the beginning of the preamble, something along the following lines:

We will not be bound by passion to any belief so tightly that, if presented with incontrovertible evidence for the falsity of that belief, we would reject the evidence, rather than the belief.

That is who we are.

Or at least, that is who we aspire to be.  This is an ideal, and a rigorous, exacting ideal at that.  As with all such ideals, it is not an easy one to live up to perfectly at all times.  We’re a prideful bunch — we take pride in our results, in our understanding, in our accomplishments.  It’s justifiable pride, for the most part.  But pride is a passion.  I can think of one or two occasions when my pride in my scientific understanding misled me about the significance of some evidence.  Probably some of you can conjure up examples from your own work.

Nonetheless, the ideal is important, and all of us who think of ourselves as scientists subscribe to it, value it, and attempt to live up to it to the best of our merely human ability.

Which brings me back to Ed.

Of all the scientists that I have met in the course of my scientific career, Ed is the person who most perfectly embodies that ideal.  He lives it.  It informs his outlook so completely that he’s probably no longer even entirely conscious of it.  He has his share of (perfectly justified) pride in his work.  But being known as the guy who figured something out, who “got it right”, is not his highest priority.  Quite simply, what Ed wants, above everything else, is to know the right answer.  That suspicion of his own rightness, that doubt which appeared to me in my graduate school days as self-doubt, was nothing of the sort.  It was rather nothing less than the outer manifestation of an ideal scientist’s untiring, relentless, uncompromising commitment to the truth.

That’s what I learned from watching you work, Ed.  It’s probably the most important lesson I picked up in my scientific career.

So, thanks.


3 Responses to “Ed Fenimore, Scientist”

  1. Greg Roelofs Says:

    Well said. Acknowledgment of one’s own fallibility–preferably early and often–is one of the hardest things to learn, in my experience. After multiple decades of techno-professional online and offline life, I’m _still_ working on it. Yet crow tastes a whole lot better when it’s cooked, seasoned, and hummingbird-sized, so you’d think it would be easier…

    The other side of the coin, of course, is that it’s an excellent tactic in almost any personal interaction. A little humility on one’s own part goes a long way in “taking the edge off” and getting others to lower their shields, or at least to reduce their intensity a bit. One frequently makes a lot more progress when both sides don’t start out all prideful and prickly.

  2. Carlo Graziani Says:

    Hi Greg.

    Humility does take the edge off, as you say. The trouble is that to some extent it is at cross-purposes with scientific debate, which is necessarily adversarial at times. This is the source of two problems: (a) people can take the disagreement personally, as a form of attack; and (b) people can become too attached to their ideas, and too willing to overlook cogent criticism. I’m pretty good about (a), occasionally less than perfect about (b) (I usually come around, though).

    What Ed is perfect about is that he never loses sight of the fact that it is really ideas that are jousting, rather than people. And while he may champion some idea or another, he is ruthless about abandoning the ones that don’t measure up, and totally open to the dispassionate weighing of evidence.

  3. Can anyone believe your results? (reproducibility) | programmingforresearch Says:

    […] Carlo Graziani’s article on Ed Fenimore and honesty in science (in particular his link on the Ginga lines) […]
