“App translocation”

When applications are in “quarantine” on OS X after being downloaded, they run in a kind of sandbox; they’re “translocated”. You don’t really see this, but weird things happen as a result. For instance, Little Snitch won’t let you create “forever” rules on the fly, claiming your app isn’t in “/Applications”, even though Finder clearly shows that it is.

The problem is that the com.apple.quarantine extended attribute is set and needs to be removed (at least if you trust the application). Too bad Apple didn’t provide a GUI way of doing that, so here goes the magic incantation (assuming WebStorm is the problem app in this example):

First check if the attribute is set:

xattr /Applications/WebStorm.app/

Then if you see that it is, reset it:

xattr -d com.apple.quarantine /Applications/WebStorm.app/

…and there you go. Life is good again.


Server-side Swift

This presentation from WWDC 2016 boggles the mind. It completely overturned my assumptions about server-side Node.js and JavaScript in general. If you’re into Docker containers or anything of the kind, and you develop in Swift client-side, this must be seen.

Let’s hope the project doesn’t die. Let’s hope I didn’t overestimate this.

Swift, missing idea #1?

Going through “properties”, I’m not finding anything about private and public properties, or protected ones. I’m also not seeing anything about header files and class files, so at first blush it seems we can’t hide properties from other classes. How do we stop people from using the wrong properties?

That can’t be good. I must be missing something.

Swift, good idea #1

There’s a lot of good stuff in Swift, of course, but adopting Ruby’s block syntax seems a really nice idea. It’s called “trailing closures” in Swift, but it’s the same thing, as far as I can see. An example from the text:

let strings = numbers.map {
    (var number) -> String in
    var output = ""
    while number > 0 {
        output = digitNames[number % 10]! + output
        number /= 10
    }
    return output
}
Everything between the braces is the block, um… trailing closure.

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks.
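For reference, here’s a version that compiles as-is with current Swift, where `var` parameters are gone, so the shadowing line is my own addition; the `digitNames` and `numbers` setup is the book’s:

```swift
let digitNames = [0: "Zero", 1: "One", 2: "Two", 3: "Three", 4: "Four",
                  5: "Five", 6: "Six", 7: "Seven", 8: "Eight", 9: "Nine"]
let numbers = [16, 58, 510]

// The trailing closure sits after map's argument list; since map takes
// no other arguments here, the parentheses can be dropped entirely.
let strings = numbers.map { (number) -> String in
    var number = number        // shadow the (now constant) parameter
    var output = ""
    while number > 0 {
        output = digitNames[number % 10]! + output
        number /= 10
    }
    return output
}
// strings == ["OneSix", "FiveEight", "FiveOneZero"]
```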

Swift, bad idea #3

Function types are declared like:

var mathFunction: (Int, Int) -> Int

In this example, mathFunction is a variable that can hold any function that takes two Int parameters and returns an Int. Fine, so far.
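A minimal sketch of such a variable in use (`addTwoInts` is the book’s own example name; the exact signature here is mine):

```swift
func addTwoInts(_ a: Int, _ b: Int) -> Int {
    return a + b
}

// mathFunction can hold any function of type (Int, Int) -> Int.
var mathFunction: (Int, Int) -> Int = addTwoInts
let sum = mathFunction(2, 3)   // sum == 5
```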

Functions can take such functions as parameters and also return them. For instance, a function taking a function like the above as a parameter would be declared as:

func printMathResult(mathFunction: (Int, Int) -> Int) 

A function taking a Bool as a parameter and returning a function from Int to Int could look like:

func chooseFunction(choice: Bool) -> (Int) -> Int

Notice how the two arrow operators (->) mean two entirely different things. The first indicates the return value; the second is part of the type of the returned function.
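Filling in a body makes the two roles visible; `stepForward` and `stepBackward` are my stand-ins, modeled on the book’s chooseStepFunction example:

```swift
func stepForward(_ input: Int) -> Int { return input + 1 }
func stepBackward(_ input: Int) -> Int { return input - 1 }

// First arrow: chooseFunction's own return type starts here.
// Second arrow: part of the returned function's type, (Int) -> Int.
func chooseFunction(choice: Bool) -> (Int) -> Int {
    return choice ? stepForward : stepBackward
}

let move = chooseFunction(choice: true)
let result = move(3)   // result == 4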

Let’s imagine what a function would look like if it takes a function from Int to Int as its parameter, and returns a similar function:

func confusingFunction(number: Int) -> (Int) -> (Int) -> Int

I may very well have written that wrong, but can you tell? This is totally different from the old-school C declaration of function prototypes, but I’m far from sure it’s any easier to understand. Maybe judicious use of “function types” (or “typedefs”, as we used to call them 30 years ago…) could make this clearer.
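A sketch of that typedef-style cleanup; the typealias and all the names here are mine, and `@escaping` is what current Swift demands when a function parameter is captured by a returned closure:

```swift
// Name the function type once, typedef-style.
typealias IntTransform = (Int) -> Int

// Now the intent reads directly: take a transform, return a transform.
func lessConfusingFunction(_ f: @escaping IntTransform) -> IntTransform {
    return { n in f(n) + 1 }
}

let double: IntTransform = { $0 * 2 }
let doublePlusOne = lessConfusingFunction(double)
let value = doublePlusOne(4)   // value == 9
```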

Swift, bad idea #2

Function parameters now have distinct “internal” and “external” parameter names. The simplest form does away with named parameters at the call site; that is, we can now do:

mysteriousFunction(15.2, "yeah, right", "only on a sunday", -1)

…just like in the good(?) old days of plain C/C++. Yes, you can force naming of parameters on the caller’s side, but it’s more work than the sloppy old way. Guess how often we’ll see that now? So the simple, inscrutable, bug-prone style is the new default.
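To show what forcing the names looks like, here’s the explicit external-name form; the function and all its names are hypothetical:

```swift
// Writing "delay seconds" gives the parameter an external name ("delay")
// that callers must spell out, and an internal name ("seconds") for the body.
func schedule(delay seconds: Double) -> String {
    return "scheduled in \(seconds)s"
}

let status = schedule(delay: 15.2)   // the label is now required at the call site
```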

Swift, bad idea #1

Looking over the “Swift” language Apple presented during the WWDC keynote. First off, declarations using “var” and “let” made me think of Basic, and I had to stifle a gag reflex.

I’m reading the iBooks book on Swift. When I got to closed and half-open ranges on page 100, I thought this was a big mistake. It’s very hard to see at a glance which is which. We’ve been trained for so many years to see ranges in for loops as closed, and to react immediately if the expected “-1”, as in “(0 to n-1)” or equivalent, is missing.

Sure enough, go to page 135 in the same book, and the example given is clearly wrong: the author of the book has confused the two. Two dots make a half-open interval, three dots a closed interval, and the example is:

“shoppingList[4...6] = ["Bananas", "Apples"]”

(Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks.)

This is not going to end well.

Update: I was wrong. See the comments.
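For the record, a sketch with today’s operators, where `..<` has since replaced the plain two-dot form; the shoppingList is an approximation of the book’s:

```swift
let closed = Array(1...3)     // [1, 2, 3]  -- three dots include the upper bound
let halfOpen = Array(1..<3)   // [1, 2]     -- ..< excludes it

// The book's example: replace the three items at indices 4...6 with two,
// which legitimately shrinks the array.
var shoppingList = ["Eggs", "Milk", "Flour", "Baking Powder",
                    "Chocolate", "Cheese", "Butter"]
shoppingList[4...6] = ["Bananas", "Apples"]
// shoppingList now has six items and ends with "Bananas", "Apples"
```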

Mountain Lion for free?

I’ve downloaded and installed Mountain Lion (10.8) on several machines now, but I never paid for it. No, I didn’t pirate it; I got it from the App Store, but it never gave me a chance to pay. Looking up the transaction in the App Store via iTunes, I see this:

[Screenshot: the OS X Mountain Lion purchase in my App Store purchase history]

In other words, I did “buy” it, but got it for nothing. Officially. What I don’t get is why. I’m just guessing here, but since I’m a registered developer on the same account and I’ve run the developer previews, maybe that earns me a free release version as well. (Note that the 10.8 purchase above is not a developer preview, but the released public version.)

Nice gesture, Apple. Unless it’s an error. If so, I really don’t mind paying for it; it’s not exactly expensive.

3.3.1 with a twist

The by now famous paragraph 3.3.1 in the iPhone Developer Program License Agreement for iPhone OS 4.0 says that “Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited”. Which, of course, ruins the day for Adobe and Flash CS5. The idea was to have Flash scripts run on the iPhone on just such a compatibility layer.

The theories as to the reason why are, generally speaking: f*ck Adobe, preserve performance on the iPhone and iPad, and/or make multiprocessing efficient on these devices. With regard to that last, the theory goes that the OS figures out how the app works and hooks into the app and the app framework, but if there’s a compatibility layer in between, that becomes very difficult and inefficient. Actually, purely technically, without any fanboyism, it does make sense to me.

In that case, and reading 3.3.1 literally, nothing stops me, or Adobe, from implementing a translation from our own specific languages using a precompiler, as long as you end up compiling actual Objective-C code into the app using Xcode. That’s what I would do, and I find it a better solution anyway.

But the anti-Adobe conspiracy theorists may claim Apple doesn’t want you to do this, either. I don’t know if they do, but let’s assume.

Now it gets interesting. There is no way Apple can detect from the runtime code, or even the source code, that it was produced by a precompiler, if that precompiler does a decent job. If they want to stop that from happening, they’ll have to monitor the user’s machine for precompilers and editing tools, like World of Warcraft monitors for bots. What a fascinating circus that would be.

iPad: the lowest common denominator

After watching Apple vs Predator, a short YouTube video, I had a blinding flash of the somewhat obvious, and this is it: no other interface but the iPhone/iPad interface can seamlessly transfer to a virtual surface and gestures. Let’s expand on this.

If you’ve seen “Minority Report”, the movie, you must remember the interface Tom Cruise uses to access files. He pulls on gloves, then works the displays as if he touches a virtual surface in space. There are a number of projects doing gloves like this, such as the AcceleGlove by AnthroTronix.

It’s obvious, to me at least, that you can’t usefully move just any graphical interface to a virtual surface like in “Minority Report”. There are UI elements that work and others that don’t work. Obviously, you can’t use a mouse, there’s nowhere to let it rest, there’s just air. You can’t use a pen. The only thing you can use is your fingers. In other words, it’s a multi-touch interface, albeit virtually and in the middle of the air.

Could you imagine if you developed a useful virtual surface like this and wanted to use the same user interface on a hard, real surface device? How would that look? Surprise, surprise: it would look exactly like the iPad. Not like Windows for Tablets, not like any other smartphone UI I’ve ever seen, but exactly like the iPhone and iPad UI.

I don’t think this is accidental. I think this is the fundamental reason that the iPhone and iPad have never had, and never will have, a pen or other pointing device. As long as they are entirely usable with only one or more fingers, the UI translates seamlessly to a virtual surface in the air.

There are signs one can make using a glove and a virtual surface that aren’t usable on a real surface with multi-touch. Example: making the “ok” sign with your thumb and index finger could work with a glove, but not with an iPad. On the other hand, such signs seem to be rarely used even in science fiction movies, and I think there’s a fundamental reason why not: they are simply less suitable for an intuitive command interface. This leads to the rule that one should probably not introduce any visual signs in virtual surfaces that cannot be translated to gestures on a hardware device surface.

For medicine, all this is great news. It means that if you develop a medical records interface, or the interface to any other medical system, on an iPad, it will automatically be just right for a virtual interface, such as those we will need in operating theatres and at the bedside.

That makes the iPad user interface the lowest common denominator. If you develop for this UI, your medical app is future proof. MS Windows based medical apps, on the other hand, are living on borrowed time.