Recently I migrated an old project of mine from Objective-C to Swift. It was not that difficult, it worked out fine, and I was eager to see how the performance of the migrated system would turn out. How big was my disappointment when I saw that the Swift version of my system was, with unchanged functionality, at least a factor of 10 slower than the Objective-C version. That can’t be true, I thought, and debugged a little bit.
The system in question is a Call Center Simulator which makes heavy use of random numbers drawn from specific distributions. It turned out that the biggest performance difference seemed to occur exactly at these distributed random numbers. Here is the code of this little piece of software in Swift (part of a struct) and in Objective-C:
```swift
/* getRandomx: Get a random x-position according to distribution
 *
 * return: Int random x
 *
 * Ver: 1.1 21.09.1997 fpp
 */
func getRandomx() -> Int {
    let test = Utilities.shared.rndm() * sum    // our test random value
    var tsum = 0.0
    for i in 0 ..< numxv {                      // loop over all bins
        tsum += array[i].value
        if tsum >= test {                       // found the x-position
            array[i].num += 1                   // update num member
            entries += 1                        // and total number of random requests
            return minxv + i                    // and return the x-position
        }
    }
    return minxv + numxv
}
```
```objc
/* GetRandomx: Get a random x-position according to distribution
 *
 * Ver: 1.1 21.09.1997 fpp
 */
- (int) getRandomx
{
    double  test, tsum;
    int     i, x;
    FValue  *fvalue;

    test = [rndm rndm] * sum;                   /* our test random value */
    tsum = 0.;
    x = minxv + numxv;
    for (i = 0; i < numxv; i++) {
        fvalue = [array objectAtIndex:i];       /* get the array content */
        tsum += [fvalue value];
        if (tsum >= test) {                     /* found the x-position */
            [fvalue setNum:[fvalue num] + 1];   /* update num member */
            entries++;                          /* and total number of random requests */
            x = minxv + i;
            break;
        }
    }
    return (x);
}
```
Well, that doesn’t look too dangerous, does it? I isolated that code in a little test. For that I used the same algorithm for the random number generation (`Utilities.shared.rndm()`) and generated a Maxwell-Boltzmann distribution with 400 bins. I then generated 10,000,000 random numbers with that distribution and measured the time needed on my iMac 2019. Here are the results:
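For readers who want to reproduce the test, here is a minimal, self-contained sketch of such a harness. The `Bin` and `Distribution` types, the Maxwell-Boltzmann parameters, and `Double.random` standing in for `Utilities.shared.rndm()` are simplifications for this sketch, not my original project code; only `getRandomx()` follows the code shown above:

```swift
import Foundation

// Simplified stand-ins for the original project types.
struct Bin {
    var value: Double   // weight of this bin
    var num = 0         // how often this bin was drawn
}

struct Distribution {
    var array: [Bin]
    let sum: Double     // total weight of all bins
    let minxv = 0       // x-position of the first bin
    var entries = 0     // total number of random requests
    var numxv: Int { array.count }

    init(weights: [Double]) {
        array = weights.map { Bin(value: $0) }
        sum = weights.reduce(0, +)
    }

    mutating func getRandomx() -> Int {
        let test = Double.random(in: 0 ..< 1) * sum  // stand-in for Utilities.shared.rndm()
        var tsum = 0.0
        for i in 0 ..< numxv {
            tsum += array[i].value
            if tsum >= test {
                array[i].num += 1
                entries += 1
                return minxv + i
            }
        }
        return minxv + numxv
    }
}

// Maxwell-Boltzmann-like weights for 400 bins: f(x) ∝ x² · exp(-x² / (2a²)).
let a = 50.0
let weights = (0 ..< 400).map { (x: Int) -> Double in
    let x = Double(x)
    return x * x * exp(-x * x / (2.0 * a * a))
}

var dist = Distribution(weights: weights)

let start = Date()
for _ in 0 ..< 10_000_000 {
    _ = dist.getRandomx()
}
print("Elapsed: \(Date().timeIntervalSince(start)) s, entries: \(dist.entries)")
```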
| Language    | Time (debug) |
|-------------|--------------|
| Objective-C | 9.67 s       |
| Swift       | 590.81 s     |
OK, that explains my disappointment: in Swift this little piece of code seems to be a factor of 61 slower than in Objective-C. Too bad. Well, let’s double-check that result with the release version of my test project (debug off).
And, surprise, I got the following results:
| Language    | Time (release) |
|-------------|----------------|
| Objective-C | 6.50 s         |
| Swift       | 1.87 s         |
The trend has reversed: now the Swift version is a factor of 3.5 faster than the Objective-C version. The debugger seems to have a much bigger influence on performance in the Swift world than in the Objective-C world. Going back to my original project, I observed a similar behaviour when turning off the debugger. That’s fine.
So don’t forget to turn off the debugger when measuring Swift performance.
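And when testing from the command line, optimization matters just as much: a plain `swiftc` invocation defaults to `-Onone` and behaves much like a debug build. Here is a small timing helper I find handy for such checks; it is a sketch of my own, not part of the project above, and the file name in the comment is a placeholder:

```swift
import Foundation

// Build with optimizations before timing, e.g.
//   swiftc -O Benchmark.swift -o benchmark && ./benchmark
// Without -O (the default is -Onone) the numbers resemble a debug build.

/// Runs a closure and prints the elapsed wall-clock time in seconds.
func measure(_ label: String, _ block: () -> Void) {
    let start = DispatchTime.now()
    block()
    let elapsed = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1e9
    print("\(label): \(elapsed) s")
}

measure("10,000,000 random draws") {
    var total = 0
    for _ in 0 ..< 10_000_000 {
        total &+= Int.random(in: 0 ..< 400)     // stand-in workload
    }
    print(total)    // use the result so the loop is not optimized away
}
```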