Friday, December 08, 2006

luck of the draw

Sometimes you can carefully tend field-deployed data loggers, checking on them every month or so, being vigilant about recording site conditions and collecting supplementary data, frequently looking over the collected data for potential problems, etc. And that doesn't guarantee that the data you get out of the data loggers will be of a decent (usable) quality. Such is the problem with the datasets that I am working with now - ones that were supposed to be part of my dissertation and that took months and months of my time over the past four years.

Other times you can throw something together ad hoc in a few days, deploy data loggers remotely for a few months, make several careless errors after retrieving them, and still recover good quality data. That's the auspicious result of my recent adventure and my pre-doc post-doc work.

Which one of these datasets will, in the end, yield a compelling scientific story that gets written up for publication? It's way too early to tell. I don't want to give up on the first dataset because I've put too much blood, sweat, and tears into it, but if the second dataset reveals its mysteries easily, it will make a much bigger contribution to my research portfolio (and my CV). Sometimes I guess it all boils down to luck.

3 comments:

Anonymous said...

Is "ability to make a good story" really the best way of judging the quality of a data set?

ScienceWoman said...

Lab Lemming: A good story is vastly preferable to "no story" (i.e., indecipherable results). It's kind of hard to write a paper saying "We did this experiment and we got this data back, but we have no idea what it means." :)

Lab Lemming said...

Unfortunately that is true. However, it makes you wonder how much really great, high quality data is rotting away in drawers somewhere because people couldn't figure out what it means...
I know I'm such a culprit.