I happened across a nice graphic over on codeproject.com that helpfully illustrates the types of development processes we use for GeoQuery. Enter “how much engineering does my software need”:
Different elements of GeoQuery are engineered to different standards depending on two key factors: (1) whether we believe a component will be a long-term element of the program, and (2) the nature of the code itself. For example, changing how boundary searches work in the front-end is a very quick activity, so we tend not to worry too much about “doing it right” (and if we get it wrong, the core goal of the software – getting users good data – isn’t critically compromised). At the other extreme, the extraction scripts we use go through round after round of engineering to ensure both academic and technical rigor; these scripts may not be as complex as enterprise-grade software, but it’s critical to our mission that we get them right.
One of the things that slows down new feature development is the importance of GeoQuery’s long-term sustainability. A great example is the ability to upload custom user boundaries: technically, we could implement such a feature tomorrow, but it would be a mess of code that would likely break in the future and inhibit future functionality. Doing it right is hard; doing it quickly is possible, but would constrain us significantly down the road (and, if we did it wrong, could potentially break the whole system!).
The short of it: we’re always balancing the importance of new feature requests against the challenge of engineering a system that is sustainable in the long term. We wish we could just throw quick hacks at every problem, but with research replicability as a core goal, we have to be very careful about implementing any code that might put our sustainability at risk – even when we want the feature too!