Subversion server on Snow Leopard server

As I already bragged about, I got me one of those delicious little OSX Mini Snow Leopard Server boxes. So sweet you could kiss it. I just got everything together to make it run a subversion server through Apache, too, and as a way to document that process, I could just as well make a post out of it. Then I can find it again later for my own needs.

First of all, the Subversion server is already part of the OSX Snow Leopard distribution, so there is no need to go get it anywhere. Mine seems to be version 1.6.5, according to svnadmin --version. Out of the box, however, Apache is not set up to connect to Subversion, so that needs to be fixed.

We’ll start by editing the httpd.conf for apache to load the SVN module. You’ll find the file at:

/etc/apache2/httpd.conf

Uncomment the line:

#LoadModule dav_svn_module libexec/apache2/mod_dav_svn.so
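If you prefer doing the edit from the terminal, a sed one-liner can handle the uncommenting. This is just a sketch; back up the file first, and note that the commented line may look slightly different on your system:

```shell
# Back up httpd.conf, then strip the leading # from the mod_dav_svn line
sudo cp /etc/apache2/httpd.conf /etc/apache2/httpd.conf.bak
sudo sed -i '' 's|^#LoadModule dav_svn_module|LoadModule dav_svn_module|' /etc/apache2/httpd.conf
```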

Somewhere close to the end of the file, add the following line:

Include "/private/etc/apache2/extra/httpd-svn.conf"

Now we need to create that httpd-svn.conf file. If you don’t have the “extra” directory, create it, then open the empty file and add:

<Location /svn>
  DAV svn
  SVNParentPath /usr/local/svn
  AuthType Basic
  AuthName "Subversion Repository"
  AuthUserFile /private/etc/apache2/extra/svn-auth-file
  Require valid-user
</Location>

Save and exit. Then create the password file (at the same path you gave in AuthUserFile above) and add the first user:

sudo htpasswd -c /private/etc/apache2/extra/svn-auth-file username

…where “username” is your username, of course. You’ll be prompted for the desired password. You can add more users with the same command, dropping the -c switch; -c creates (or overwrites) the file, so use it only the first time.
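For example, with the AuthUserFile path from the config above and two hypothetical users, alice and bob:

```shell
# First user: -c creates the password file
sudo htpasswd -c /private/etc/apache2/extra/svn-auth-file alice
# Subsequent users: no -c, or you'd wipe the file
sudo htpasswd /private/etc/apache2/extra/svn-auth-file bob
```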

Time to create the SVN folder and a repository. Create /usr/local/svn, then, from inside that directory, create your first repository:

svnadmin create firstrep
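The whole sequence from the terminal might look like this (a sketch, assuming the /usr/local/svn parent path that SVNParentPath points at above):

```shell
sudo mkdir -p /usr/local/svn     # parent dir matching SVNParentPath
cd /usr/local/svn
sudo svnadmin create firstrep    # creates the repository structure
```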

Since Apache is going to access this, the repository should be owned by the Apache user (www, which resolves to _www on OSX). Do that with:

sudo chown -R www /usr/local/svn/firstrep

Through Server Admin, stop and restart the Web service, and check that no errors appear. Then use your fav SVN client to check that things work. Normally, you’d be able to address your Subversion repository at:
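If you’d rather restart Apache from the terminal than through Server Admin, apachectl can do it, and it can sanity-check the config first:

```shell
sudo apachectl configtest   # verify httpd.conf and the included files parse
sudo apachectl graceful     # restart without dropping active connections
```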

http://yourserver/svn/firstrep

Finally, don’t forget to use your SVN client to create two folders in the repository, namely “trunk” and “tags” (the conventional layout also includes a “branches” folder). Your project should end up under “trunk”.
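You can also create those folders straight from the command line with svn mkdir, which commits them in one go (substituting your real server name for yourserver):

```shell
svn mkdir -m "Create initial repository layout" \
  http://yourserver/svn/firstrep/trunk \
  http://yourserver/svn/firstrep/tags
```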

Once up and running, this repository works perfectly with Panic’s Coda, which can put an entire website under source control. If you don’t know Coda, it’s a website editor of the text-editor kind, without many fancy graphical tools, but it does help with stylesheets and such. It’s for the hands-on developer, you could say.

The way you manage a site in Coda is that you keep a local copy of your site, typically a load of PHP files, which is version controlled against the Subversion repository; you then upload the files to the production server. Coda keeps track of both the repository server and the production server for each site. The one feature missing is a simple way of handling staging servers, that is, uploading to a test server and only now and then copying everything up to the production server. But that can be considered a bit outside the primary task of the Coda editor, of course.

You could say that if your site isn’t mission critical, but more of the 200-visitors-a-month kind, you can work directly against the production server, especially since rolling back and undoing changes is pretty slick with the Coda/Subversion combo. But it does require good discipline, good nerves, and a site you don’t really, truly need for your livelihood. You can break it pretty badly and jumble up your versions, I expect. Plus, don’t forget, the database structure and contents aren’t part of your version control unless you take special steps to accomplish that.

Coda doesn’t let you access all the functionality of Subversion. As far as I can determine, it has no provisions for tagging and branching, for instance. But it does have comparisons, rollbacks, and most of the rest. The easiest way to do tagging would be through the command line, or possibly by using a GUI SVN client; there are several for OSX. I’m in the process of testing the SynchroSVN client. It looks pretty capable, but it’s not all that cheap.
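For the record, a tag in Subversion is just a cheap copy, so command-line tagging is a one-liner. A sketch, with hypothetical URLs matching the repository above:

```shell
svn copy -m "Tag the 1.0 release" \
  http://yourserver/svn/firstrep/trunk \
  http://yourserver/svn/firstrep/tags/1.0
```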

The cutest little muscle machine ever

I got me that brand new Apple Mini with Snow Leopard OSX Server unlimited edition included. This is such an adorable machine, you wouldn’t believe it. It has everything you could wish for in a server, as far as I can make out after just a couple of hours with it. It’s super easy to set up and to monitor. It’s small, it’s beautiful, it’s almost totally noiseless, and it seems to use hardly any power. When you feel the case, it’s just barely warmer than its surroundings, and the same goes for the power supply. When I switch off everything else in the room, I can only just hear the server running from less than a meter’s distance. It seems to produce about the same noise level as my 13″ white MacBook does when it’s just started and perfectly cool. In other words, practically inaudible. Still, it’s running two 500 GB drives in there, which I’ve set up as a mirrored (RAID 1) set.

I’ll probably brag about this system some more once I get to know it better. But meanwhile, it’s the nicest computer purchasing experience I’ve ever had. Except for the Mac Pro. And the MacBook. And the iMac, of course. And the iPhone. And Apple TV.


More on evidence based

This is a continuation of my previous post, “Evidence based vs anecdotal”.

I wrote an email to the main author of the chapter in “4th Paradigm”, Michael Gillam, and he graciously responded to my criticism by agreeing with everything I said and emphasizing that this is what they wanted to say in that chapter. He suggests that it may not have been clear enough in that regard, and I agree. Anyhow, it’s great to know that smart people do have the right idea about how to handle the knowledge that appears in IT systems, often as a side benefit of the extensive amounts of data in them.

What Michael stresses instead is the benefit of real-time monitoring of the performance of treatments in the population. He points to the Vioxx debacle and how far fewer people would have been subjected to the increased risk of myocardial infarction, had the systems been able to signal the pattern in large data sets. And in this he’s entirely correct, too.

So, in conclusion, we’re in total agreement. A problem remains, however, in that even I, the archetypal skeptic, was easily misled into reading the chapter in question as promoting the discovery of new treatment regimes from dynamic electronic health care data. And I think that is exactly what is happening when some new ill-conceived projects are started in health care. I’ve seen an increasing tendency to dream up projects based on just this: the idea that large sets of health care data will allow our electronic health care record systems to recommend for or against treatments based on the accumulated experience data in the system. And I think the reason is that people like us, who reason about how to handle that data, what to use it for and what not to use it for, don’t realize that snippets of our conversations, taken out of context, may lead decision makers to take catastrophically wrong turns when investing in new projects. At least, that’s what seems to happen. Time for an anecdote from real life. (Note the strangely ironic twist, whereby I use an anecdote to illustrate why anecdotal knowledge is a bad thing.)

This is an entirely true story; it happened to me about 30 years ago, while I was doing my residency in surgery. A trauma patient, comatose, with multiple rib fractures and abdominal trauma, in respiratory distress, was wheeled into the emergency room. I asked the male nurse to administer oxygen through a mask and bladder while the blood gases got done. As the blood gas results came back, I stood a couple of meters away and quietly explained them to an intern, saying something like “see how the oxygen saturation is way down, he’s shunting, while the carbon dioxide is below the normal value, which you may find strange, but it’s because he’s compensating with hyperventilation”, and so on. After a minute of explanations, I look up and see that the nurse is holding the rubber mask over the patient as I ordered, but with no oxygen line connected, so I tell him it fell off. He says: “No, I took it off. You said he’s hyperventilating, so he should re-breathe without any oxygen.” OMG… this guy was actively suffocating the patient after overhearing one word of a half-whispered conversation and applying the only piece of knowledge he possessed that was associated with that word. Which was entirely wrong, as it turns out.

Admittedly, this particular nurse wasn’t the sharpest knife in the drawer; he did this kind of thing with frightening regularity. But still, it illustrates quite perfectly, in my opinion, what politicians and technicians are doing with health care related projects. They catch snippets of conversations, apply some wishful thinking, and formulate a thoroughly sexy project that in their opinion will revolutionize medicine. Except it’s all based on a fundamental misunderstanding. We have to become much clearer in our discussions about exactly what we can use electronic health care record data for, and what we absolutely must not use it for. Yes, we can use it to provide warning signals to epidemiologists and pharmacologists, and ideas for future studies on new phenomena, but we definitely cannot use it to make direct recommendations for or against treatments to doctors while they handle patients. The only recommendations that should be presented to them are recommendations based on thoroughly and correctly performed studies, nothing else.

It’s up to us to see to it that the people in power get the entire conversation, and understand it, before we let them start projects that have the potential to destroy the advances in medical knowledge we have today. They’re entirely capable of suffocating this particular patient in the name of sexy IT health care projects.