The real iPhone conspiracy

So I’ve used a Mac for a while and I’m just starting on iPhone development and a blinding flash of the almost-obvious strikes me. This is not the Blackberry killer or the Palm killer, it’s the long-fuse Microsoft killer.

Remember the monkey dance? Ballmer yelling “Developers, developers, developers!” while jumping around like a man possessed, sweating profusely (one could be excused for suspecting some cholinergic poison, but he lived through it, so that is not the answer). Right. I mean, he’s right. Developers are what makes or breaks a platform, but now he’s losing them, so he really has no reason to celebrate.

When Apple designed the iPhone, they could have created a special development system and language for it, but even though it might have been easier, they didn’t. They chose to tweak the OSX development environment to include the iPhone, which by necessity also meant putting OSX on the iPhone. The result is that if you want to develop for the iPhone, you have to get a Mac (strike 1), learn OSX (strike 2), learn Objective-C (strike 3), and learn Cocoa (strike 4), and by then you’re so deeply immersed in the Mac environment that you won’t find your way out again. Since you can run your Windows stuff, including Visual Studio, just fine under Parallels or Fusion, you don’t need that Dell or HP machine for anything anymore, and you’re not sorry to see it go. In other words, you’ve got a developer that clearly isn’t going to like going back to .NET development again. I mean, once you’ve used both environments (Xcode/Cocoa/Objective-C vs .NET/Visual Studio), it’s practically impossible to enjoy .NET anymore. It’s so far behind and so clunky in comparison that it’s almost a joke.

So, every developer you task with iPhone development is almost certainly lost to the .NET camp forever. This I can’t prove, but I’m convinced of it. But now the question is: who are these developers? Do they already develop for the Mac, or do they come from the “other” side? Again, by the seat of my pants, I’m convinced that a very large and increasing proportion come from large enterprise .NET development organisations that need to add an iPhone client for their large systems. See where this is going?

It’s only just begun.

Update: I suddenly realized that I had fused two unrelated events together in my mind. Steve Ballmer did the monkey dance and yelled “Developers, developers…!” on two different, equally traumatizing, occasions. I’m not sure that’s any better, though. It’s all very disturbing.

A feature?

Had to use the Directory.GetFiles() method in .NET, so I read the description. Now, take a moment and read the following about how an asterisk wildcard character works in the search pattern parameter. Then tell me if this description is of a feature or of a bug. Windows, largely due to legacy, is full of this crap.

When using the asterisk wildcard character in a searchPattern, such as “*.txt”, the matching behavior when the extension is exactly three characters long is different than when the extension is more or less than three characters long. A searchPattern with a file extension of exactly three characters returns files having an extension of three or more characters, where the first three characters match the file extension specified in the searchPattern. A searchPattern with a file extension of one, two, or more than three characters returns only files having extensions of exactly that length that match the file extension specified in the searchPattern. When using the question mark wildcard character, this method returns only files that match the specified file extension. For example, given two files, “file1.txt” and “file1.txtother”, in a directory, a search pattern of “file?.txt” returns just the first file, while a search pattern of “file*.txt” returns both files.

and:

Because this method checks against file names with both the 8.3 file name format and the long file name format, a search pattern similar to “*1*.txt” may return unexpected file names. For example, using a search pattern of “*1*.txt” returns “longfilename.txt” because the equivalent 8.3 file format is “LONGFI~1.TXT”.

The conclusion must be that this method is worse than useless and bound to cause excruciating bugs in your apps. Better to use the GetFiles() method without any search pattern and then filter the result with a regex.
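For what it’s worth, a minimal sketch of that workaround could look like this. GetFilesByRegex is my name for a hypothetical helper, not a framework method:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class SafeFileSearch
{
    // List files whose names match a regex, sidestepping the 8.3
    // and three-character-extension quirks of the searchPattern
    // parameter documented above.
    static string[] GetFilesByRegex(string directory, string pattern)
    {
        Regex regex = new Regex(pattern, RegexOptions.IgnoreCase);
        return Directory.GetFiles(directory)
                        .Where(path => regex.IsMatch(Path.GetFileName(path)))
                        .ToArray();
    }

    static void Main()
    {
        // Matches file1.txt but not file1.txtother,
        // unlike the search pattern "file*.txt".
        foreach (string file in GetFilesByRegex(".", @"^file\d+\.txt$"))
            Console.WriteLine(file);
    }
}
```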

The end of .NET? I can’t wait.

OK, I admit that title is a bit over the edge, but still, that is how I feel. Developing for .NET is increasingly becoming no fun and far too expensive. The only reason to do it is that customers expect products for .NET, but under slowly increasing pressure from developers, that is going to change. It may take a while, but it will happen. There are a number of reasons for this.

.NET development is single platform. Admittedly the largest platform, but one that increasingly has to share the market with other platforms. And already, according to some, there’s more sales potential for small developers in the OSX market than in the Windows market, due to a number of factors: customers who are more willing to buy and pay for software, less competition in each market segment, and so on.

.NET development is also entirely dependent on Microsoft’s development tools, and those are increasingly expensive. For reasonable development, you need an IDE, a good compiler, version control, a bug tracker, coverage analysis, profiling, and a few more things. We used to have most of that in the regular Visual Studio, but recently MS has removed all the goodies and plugged them into the Team System only, which carries an obscene price tag (in Sweden, around USD 13,000 + VAT for the first year…). This means that a regular one-man development shop can barely afford the crippled Visual Studio Professional at USD 1,500 for the first year. Sadly, there aren’t even any decent and affordable third-party products to complement VS Pro so it becomes a “real” development suite. And with every version of Visual Studio this only gets worse. More and more features are added to the Team suite and removed from the Pro. This is not the way to breed a happy following.

Meanwhile, OSX comes with Xcode, which is almost as good as Visual Studio Pro, and is free. Objective-C is also a much more modern language with more depth than any .NET language, even though it is actually older. But, sadly, it’s not cross-platform either, and I don’t see how you can get the Windows fanboys of the Scandinavian healthcare scene to even consider another platform. The same probably goes for most other industries.

I’m no fan of Java, but on the other hand I’ve never worked much with it, so that opinion doesn’t count for much. Eclipse, the IDE often used for Java development, is cross-platform, very capable, and open to other languages such as Python, Flex, and many more. Yes, I know, in theory so is Visual Studio, but how many real languages do you have there? You’ve got one: Basic, masquerading as C#, J#, and, um, Basic.

Using Eclipse on any platform, you’ve got a really good chance of covering the range of tools you need (profilers, coverage, version control) without much pain and without breaking the bank. And you can write cross-platform, integrated, larger systems.

So, I guess it’s time to bite the bullet. I really like Xcode and OSX, and I really know C# and .NET, but I really only believe in Java, Flex, Python, Perl, and C++ under Eclipse for enterprise development in vertical markets. And in Xcode under OSX for regular shrink-wrapped desktop apps.

Not even Silverlight is very attractive, and that is largely due to the marketing and pricing of the tools for it. A small developer organisation can’t afford it. Flex and AIR look like serious contenders, though.

x2c source

I finally got around to putting up the source code for x2c under the GPL. No, you haven’t heard of this thing, and it may not seem immediately useful, but when it is useful, it’s incredibly useful. The hardest thing is coming up with full samples of what it can do, so I’ll just outline it right here.

x2c stands for “XML to Code”, and it’s an interpreter for a little language I made with built-in commands to handle XML documents and write to plain text output files.

It started life as a tool to create VB and C# source code for data access layer classes, based on XML descriptions of an Oracle database. Another possibility is generating language tables from Excel spreadsheets, and I’ll tell you how:

Imagine an Excel spreadsheet with one sentence per row. In each column, the same sentence is written in a different language: Swedish, English, French, etc. Save the spreadsheet as an XML document. Now you can write a pretty short x2c script that reads these languages, column by column, and produces a C++ header file with the right strings declared as constants. Great for products you want to recompile for a number of human languages.
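The x2c script syntax itself isn’t shown here, but the transformation it performs can be sketched in plain C#. This is only an illustration of the idea; it assumes a simplified XML layout, not the full SpreadsheetML that Excel actually emits:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Xml.Linq;

class LangTableGenerator
{
    // Sketch only: assumes one <Row> element per sentence and one
    // <Cell> element per language inside it.
    static void Main()
    {
        XDocument doc = XDocument.Load("strings.xml");
        string[][] rows = doc.Descendants("Row")
            .Select(r => r.Elements("Cell").Select(c => (string)c).ToArray())
            .ToArray();

        // Column 0 is Swedish, column 1 is English, and so on.
        string[] languages = { "sv", "en" };
        for (int col = 0; col < languages.Length; col++)
        {
            StringBuilder header = new StringBuilder();
            for (int row = 0; row < rows.Length; row++)
                header.AppendFormat("static const wchar_t *STR_{0} = L\"{1}\";{2}",
                                    row, rows[row][col], Environment.NewLine);
            // Pick the encoding per output file; x2c can use any
            // installed codepage, e.g. 1251 for Russian.
            File.WriteAllText("strings_" + languages[col] + ".h",
                              header.ToString(), Encoding.Unicode);
        }
    }
}
```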

Especially for this last use, I recently adapted the text output file command in x2c to allow output in ASCII, Unicode (the default), or any codepage you have installed on the Windows system you’re running it on. For this example, codepage 1251 was used for Russian. That was necessary since the C++ compiler used (Borland) couldn’t handle Unicode header files. The script runs under US or Swedish XP and Vista, as long as codepage 1251 is also installed on the system, and produces the right MBCS file for Borland C++, resulting in binaries that look right to Russians running Russian versions of Windows. The complete script that converts the Excel spreadsheet to four different C++ header files is quite short and can easily be run from a build script.

The source is C++ in a VS 2008 solution. Have a go at it.

Meatloaf code

“Meatloaf code” is code that has been there for a long time; nobody remembers why it’s there, but everyone still respects it and keeps writing things that way. I call it “meatloaf code” based on the following anecdote that I read in one of my books, except I can’t remember which one, so I apologize for not attributing it correctly… oh, now I do: it must have been one of the Richard Feynman books.

“My mother often made meatloaf for us kids. One day I was watching her rolling the meatloaf, cutting off both ends and placing it in the pan. I asked her why she cut off the ends and she said that her mother taught her to do it that way, but she didn’t know why. After a bit of arguing back and forth, we called her mother and asked, and got the same story. Her mother in turn taught her to do that but she didn’t know why. A week passes and we go visit my mother’s grandmother and then we ask her why she taught her daughter to cut off the ends of the meatloaf, and she said: because my pan was too small back then. Don’t tell me you’re still cutting off the ends, are you?!”

Today I saw this: a text file that serves as a printer template begins with a tag like “[START]” and ends with a tag like “[END]”. All templates have to have them. Before sending the file to the printer, you have to strip them out. Nobody knows why, but everyone does it.

Now that is meatloaf code.

Real developers…

… can read and understand several books with contradictory or complementary content without having their heads explode.

… and thus fear the one-book-religion as much as it deserves being feared.

… can understand, appreciate, and follow more than one methodology at the same time.

… know that no single book or methodology or language or tool or great tip will ever improve quality or output by any more than a couple of percentage points.

… know that any single book or methodology or language or tool or great tip used to the exclusion of common sense will reduce quality and output by close to 100%.

… truly understand the saying “A foolish consistency is the hobgoblin of little minds”.

… know the Orders of Ignorance and apply that knowledge.

… love writing lists of what real developers do and know.

The flip side of TDD

There is a problem with Test Driven Development (TDD) and security. Even though I’m a strong proponent of TDD and do my own development (largely) that way, I notice a real conflict between good architecture and TDD. I’ve also seen this effect mentioned in the journals lately, so I’m not alone in this.

What happens is that TDD promotes doing an early and minimal implementation, then iterating over it until you get everything to work. Fine, everyone loves that. But early “ready-to-run” code usually implies a simplistic architecture. Not necessarily, but usually, please note.

Now, you start out writing all these tests, ostensibly free from architecture and design assumptions, only specifying the actual requirements. But you aren’t as free from assumptions as you’d like, since just by writing the test in a particular place, you’ve already made an architectural decision. Once the tests are in place and your code runs fine, you’re very free to refactor and improve your code safely, in a localized way, class by class, method by method. But as soon as you make serious changes to the architecture, your fine unit tests are usually blown away and have to be refactored or even rewritten. That hurts, and humans try to avoid things that hurt.
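A contrived illustration of the point, with invented class names: the moment you write the first test, you’ve committed to a shape.

```csharp
using System.Collections.Generic;

// The test below assumes ReportBuilder reads straight from a
// database class it creates itself: an architectural decision
// in disguise.
class Database
{
    public List<string> ReadRows()
    {
        return new List<string> { "a", "b" };
    }
}

class ReportBuilder
{
    public string Build()
    {
        Database db = new Database();      // hard-wired dependency
        return string.Join(",", db.ReadRows().ToArray());
    }
}

class ReportBuilderTests
{
    // If ReportBuilder is later refactored to take a repository
    // interface through its constructor, this test no longer
    // compiles, even though the behavior under test is unchanged.
    public static bool BuildJoinsRows()
    {
        return new ReportBuilder().Build() == "a,b";
    }
}
```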

After a few incidents like that, you get gun-shy and tend not to change your architecture unless you really and truly have to. And there are very few instances where you really have to do that just to make your system work (which is the only criterion your stakeholders care about). So the architecture of TDD-developed systems tends to be monolithic, or at least simplistic and kinda smacked together, and guess what: there’s nothing more important than good architecture for secure systems. Forget about buffer overruns and unsafe APIs. It’s the architecture that makes your system fragile or resilient. The rest is just dust and filling. You can make systems run fine and bug-free without a solid architecture, but you can never make them really robust.

Personally, I refactor my architecture anyway, over and over, but only because I’m obsessive-compulsive, and people tend not to appreciate it (until years later, that is). Every external incentive there is tells me not to do it. Timeboxing also increases the pressure to leave the architecture unchanged.

So I think we’re in the process of discovering the Achilles heel of TDD: even if the code is great, it much too easily leads to a poor and insecure architecture. I think we need to take that seriously and try to come up with answers to this problem.

And, no, BDUF isn’t the answer.

Strongly typed constant parameters in C#

After a bit of searching, I found a way to have strongly typed constant parameters for C# functions. You know the situation: you need to pass one of a limited set of strings or chars or other values to a function, and you want to make sure somebody doesn’t just go and pass any old thing they find lying around the place. Enums are pretty good for this kind of thing, but it gets hairy if you need to translate the value to anything else, like a string or a char.

Any solution also needs to cater to intellisense, making it easy to use and kinda idiot-safe (I’m talking about myself a couple of hours after defining any constant, which usually leads to me behaving like the idiot user I had a hard time envisioning just hours earlier).

I think I found a good system for doing this, and as an example, I’ll invent a function that takes a string parameter, but it has to be just the right kind of string. To do that, I first declare the constant strings in a separate module this way:

Then I write my function, the fictional “Rechandler” that takes a parameter of the ConstRecTypeValue kind. And then I write a function that calls it. Now, while writing the caller, I want intellisense to do its thing, and it does:

As you can see, it obediently pops up a tooltip to tell me only a ConstRecTypeValue is accepted here. As soon as I start to type that, it recognizes the ConstRecType static class name and it intellisensively lets me choose which constant member I want:

…which I complete the usual way:

The callee (Rechandler) then easily recovers the string hiding inside the passed value (in this case “DELETED”) and continues on its merry way.

Naturally, you can use chars, doubles or entire collections of values instead of the string value in this example and still achieve the same effect.
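The original listings and intellisense screenshots aren’t reproduced here, so here is a sketch of the whole pattern as I read it from the prose. The member names (Deleted, Active) are my guesses; only ConstRecType, ConstRecTypeValue, Rechandler, and “DELETED” come from the text:

```csharp
using System;

// A value type that can only be created inside this assembly, so
// the only instances that exist are the named constants below.
public sealed class ConstRecTypeValue
{
    private readonly string value;
    internal ConstRecTypeValue(string value) { this.value = value; }
    public override string ToString() { return value; }
}

public static class ConstRecType
{
    public static readonly ConstRecTypeValue Deleted = new ConstRecTypeValue("DELETED");
    public static readonly ConstRecTypeValue Active  = new ConstRecTypeValue("ACTIVE");
}

class Program
{
    // Callers can only pass one of the ConstRecType members;
    // any old string is rejected at compile time.
    static void Rechandler(ConstRecTypeValue recType)
    {
        string s = recType.ToString();   // recovers e.g. "DELETED"
        Console.WriteLine(s);
    }

    static void Main()
    {
        Rechandler(ConstRecType.Deleted);
        // Rechandler("DELETED");        // does not compile
    }
}
```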

You can also take it one step further along the path to universality, by using a generic base class for the value type:

If you have this guy in reach in your project somewhere, you can now simplify the definition of the value class like so:

…while everything else stays just the same.
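Again, the original listing is missing, so this is one plausible shape for the generic base class and the simplified value class, with invented names for everything except ConstRecTypeValue and ConstRecType:

```csharp
// A generic base that carries the wrapped value, so each new
// constant value type only has to declare a constructor.
public abstract class ConstValue<T>
{
    private readonly T value;
    protected ConstValue(T value) { this.value = value; }
    public T Value { get { return value; } }
    public override string ToString() { return value.ToString(); }
}

// The value class now shrinks to a one-line constructor...
public sealed class ConstRecTypeValue : ConstValue<string>
{
    internal ConstRecTypeValue(string value) : base(value) { }
}

// ...while the class holding the named constants stays the same.
public static class ConstRecType
{
    public static readonly ConstRecTypeValue Deleted = new ConstRecTypeValue("DELETED");
    public static readonly ConstRecTypeValue Active  = new ConstRecTypeValue("ACTIVE");
}
```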

I love it.

Reflect on those constants

This falls in the category of “neat tricks” and definitely under DRY (Don’t Repeat Yourself). When you have a list of constants that you need to save or retrieve, typically settings, you easily get into a situation where you have, say, 20 constant strings defining the names of your constants, and then a block of code going through the same 20 variables to retrieve or save them. When you add a constant, you’ve got at least three places to add code, and then I’m not even counting the places where you actually use the setting’s value.

But using reflection in C#, you can easily make the system retrieve all your constants and their values into a dictionary at runtime, and save them back, using nothing but the declaration of the string constants.

This is an example of a declaration of the names of the values we want to save and restore:

And this is code that then gets the values of those constants and sticks them into a dictionary at runtime. The rest of the code is trivial and not worth reproducing here.
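Since neither listing is included here, this is my hedged reconstruction of the trick; the SettingsNames class and its members are made up for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

static class SettingsNames
{
    // Add a constant here and it is picked up automatically;
    // no other code needs to change.
    public const string WindowLeft = "WindowLeft";
    public const string WindowTop  = "WindowTop";
    public const string UserName   = "UserName";
}

class Program
{
    // Collect every public const string of SettingsNames into a
    // dictionary keyed by the constant's declared name.
    static Dictionary<string, string> GetConstants()
    {
        Dictionary<string, string> result = new Dictionary<string, string>();
        foreach (FieldInfo field in typeof(SettingsNames)
                     .GetFields(BindingFlags.Public | BindingFlags.Static))
        {
            if (field.IsLiteral && field.FieldType == typeof(string))
                result[field.Name] = (string)field.GetRawConstantValue();
        }
        return result;
    }

    static void Main()
    {
        foreach (KeyValuePair<string, string> pair in GetConstants())
            Console.WriteLine(pair.Key + " = " + pair.Value);
    }
}
```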