Title text: Although the Markov chain-style text model is still rudimentary; it recently gave me "Massachusetts Institute of America". Although I have to admit it sounds prestigious.
SwiftKey is a predictive-keyboard app that, at the time of the comic, was installable only on Android-based phones and tablets. SwiftKey noticed its inclusion in xkcd and created a blog post inviting users to comment with the default phrase they get when they repeatedly hit the "central prediction key". The results are pretty funny.
In the title text, the Markov chain is a reference to: (via Wikipedia) "A Markov chain, named after Andrey Markov, is a mathematical system that undergoes transitions from one state to another, between a finite or countable number of possible states. It is a random process characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of 'memorylessness' is called the Markov property. Markov chains have many applications as statistical models of real-world processes."
That matches SwiftKey's behavior: its next suggestion depends only on the most recent word (or last few words), not on the whole sequence of words that preceded it.
The phrase "I am so sorry - that's never happened before" is a sexual reference: it is something a man might say after a particularly unsatisfying sexual encounter, hoping to convince his partner to sleep with him again despite the disappointing experience. The humor is that, since this is Cueball's "typical" sentence, he must be texting that apology over and over again - which means it has, in fact, happened many times before.
Massachusetts Institute of America is an unlikely name because it mixes two place names: it is an amalgamation of the Massachusetts Institute of Technology and some [field] Institute of America (e.g. Mining). However, this is exactly the kind of result a Markov chain produces. Although the output may look as though the model is "working towards" a certain phrase, in reality it has no memory apart from its current state and the database of transition probabilities it draws from. More specifically, the "state" usually consists of a fixed number of words or letters preceding the word currently being generated. So after producing "Massachusetts Institute of," the word "Massachusetts" dropped out of this pseudo-memory; looking only at "Institute of," the model saw "America" as a likely follow-up.
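The forgetting effect described above can be sketched with a tiny word-level Markov model. This is an illustration only, not SwiftKey's actual implementation; the two-word window and the toy training corpus (including the invented "Mining Institute of America") are assumptions chosen to reproduce the joke.

```python
from collections import defaultdict, Counter

def build_model(corpus, order=2):
    """Count which word follows each `order`-word state."""
    words = corpus.split()
    model = defaultdict(Counter)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state][words[i + order]] += 1
    return model

def most_likely_next(model, state):
    """Return the most frequent follower of `state`, or None if unseen."""
    followers = model.get(tuple(state))
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus (invented for illustration): the model only ever sees
# two words of context, so once "Massachusetts" slides out of the
# window it cannot know how the phrase began.
corpus = ("Massachusetts Institute of Technology "
          "Mining Institute of America "
          "Mining Institute of America")

model = build_model(corpus, order=2)
output = ["Massachusetts", "Institute"]
for _ in range(2):
    nxt = most_likely_next(model, output[-2:])
    if nxt is None:
        break
    output.append(nxt)

# After emitting "of", the state is ("Institute", "of"), and "America"
# is its most common follower in the corpus.
print(" ".join(output))  # Massachusetts Institute of America
```

Picking the single most frequent follower mimics mashing the spacebar to accept the top suggestion every time; a real model would sample from the full probability distribution instead.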
- [Cueball shows off phone to Megan]
- Cueball: Have you tried SwiftKey? It's got the first decent language model I've seen. It learns from your SMS/Email archives what words you use together most often.
- Cueball: Spacebar inserts its best guess. So if I type "The Empi" and hit space three times, it types "The Empire Strikes Back."
- Megan: What if you mash space in a blank message?
- Cueball: I guess it fills in your most likely first word, then the word that usually follows it.
- Megan: So it builds up your "typical" sentence. Cool! Let's see yours!
- Cueball: Uh -
- SwiftKey: I
- SwiftKey: Am
- SwiftKey: So
- SwiftKey: Sorry
- SwiftKey: That's
- SwiftKey: Never
- SwiftKey: Happened
- SwiftKey: Before.