We’ve covered many aspects of Meteor on this blog: reactivity, publications and subscriptions, security, and more.

Looking back, it seems like a recurring theme of all these articles is client-server interactions. And it makes sense: after all, one of Meteor’s major innovations was bundling up the client and server side of things in a single package.

But precisely because they break with the norm, these client-server interactions also tend to trip up a lot of beginners when they first start working with Meteor.

Today, we’ll look at one of these potential pitfalls: latency compensation.

Latency What?

“Latency Compensation” is one of the Seven Holy Principles of Meteor, which – as the legend goes – were handed down to the Meteor Development Group engraved on a burning Meteorite in the year Two Thousand And Eleven.

Here’s how the documentation defines the term:

Latency Compensation. On the client, Meteor prefetches data and simulates models to make it look like server method calls return instantly.

OK, so it seems like latency compensation is some kind of trick Meteor does to make apps seem faster. But what is it exactly? And what does “simulates models” even mean?

The Old Way

Let’s step back and pick a concrete example: inserting a post into a database, then adding it to a list of posts. The typical pre-Meteor flow would look something like:

  1. User submits form.
  2. Send a post to the server via AJAX.
  3. Wait for response from the server.
  4. Add the new post to the list.

Nothing wrong with that, but we did just introduce a bunch of latency (i.e. waiting time) in our user experience.
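To make that latency concrete, here’s a rough sketch of the classic flow in plain JavaScript. The `ajaxPost` helper is a made-up stand-in for a real AJAX call (the endpoint, the 50 ms delay, and the fake server reply are all illustrative):

```javascript
// A stand-in for an AJAX request: resolves with the "saved" post
// after some simulated network latency.
function ajaxPost(url, post) {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve(Object.assign({ _id: 'server-id' }, post)); // fake server reply
    }, 50); // simulated round trip
  });
}

const postList = [];

function onSubmit(post) {
  // 2.–3. send the post to the server and wait for its response…
  return ajaxPost('/posts', post).then(function (saved) {
    // 4. …and only then add it to the list the user sees
    postList.push(saved);
    return saved;
  });
}
```

Notice that `postList` stays empty until the server replies: the user stares at an unchanged screen for the whole round trip.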

The Meteor Way

Instead, here’s how Meteor does it:

  1. User submits form.
  2. Call Posts.insert().
  3. Simulate the results of that insertion on the client.
  4. Use result of the simulation to add the new post right away.
  5. Get response from the server.
  6. If we got the simulation wrong, correct the mistake (more on that later).

Let’s pause a second here. What exactly gives Meteor the ability to perform this kind of simulation on the client, when frameworks like Angular or Ember can do no such thing?

It all boils down to another of the Seven Principles.

If you’re trying to insert a post in a database, you need, well, a database. This is where the database everywhere principle comes in: because Meteor can store a subset of the database on the client, it can perform operations against it and get a fairly good idea of what the result will look like on the server.
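As a hypothetical miniature of that idea, here’s a toy client-side collection that keeps a local array of documents and can run simple queries against it — a vastly simplified sketch of what a client-side database subset enables, not Minimongo’s real API surface:

```javascript
// A toy in-memory collection: enough to insert documents and
// query them with an exact-match selector.
function LocalCollection() {
  this.docs = [];
}

LocalCollection.prototype.insert = function (doc) {
  const copy = Object.assign({ _id: String(this.docs.length + 1) }, doc);
  this.docs.push(copy);
  return copy._id; // the client can predict the result without the server
};

LocalCollection.prototype.find = function (selector) {
  return this.docs.filter(function (doc) {
    return Object.keys(selector).every(function (key) {
      return doc[key] === selector[key];
    });
  });
};
```

Because the client holds its own copy of (a subset of) the data, it can run an insert or a query locally and make a good guess at what the server will eventually say.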

A Practical Example

To make things a bit clearer, let’s look at a practical example.

Let’s suppose we have a form for submitting a post. Note that we’ll use the anti:fake package to generate dummy content instead of bothering with an actual form (“I hope we get to code a form!”, said nobody ever).

Here’s what its event handler could look like:

Template.submitForm.events({
  'click .submit': function (event, template) {
    event.preventDefault();
    Posts.insert({title: Fake.sentence(6), body: Fake.paragraph(3)});
    alert("Post inserted!");
  }
});

Because Posts.insert() is latency compensated, it will not hold up the execution of the handler. Client-side, the simulation will run, the post will appear, and our alert() will trigger, all without having to wait for a response from the server.

Latency Compensation & Meteor Methods

Now I know what you’re thinking. This is all very well, but didn’t we recommend using Meteor methods instead of calling Collection.insert() directly? What happens then?

This is where yet another Meteor Principle comes in handy: not only does Meteor use one language (JavaScript, for those of you who aren’t paying attention) on both client and server, but it also lets you share the same code.

This makes it possible for the client to simulate server operations by executing the same instructions in both places!

This means we can rewrite our previous example as follows:

Template.submitForm.events({
  'click .submit': function (event, template) {
    event.preventDefault();
    Meteor.call('insertPost', {title: Fake.sentence(6), body: Fake.paragraph(3)});
    alert("Post inserted!");
  }
});

Now we need to write this insertPost method, and we’ll use a little trick to highlight latency compensation in action.

As you may know, Meteor gives you two mechanisms to control where code will run. One is simply where you put your file: files in /server run on the server, files in /client run on the client, and anything else runs on both environments.

But you also get finer-grained control through Meteor.isServer and Meteor.isClient. These boolean variables let you mix client- and server-exclusive blocks of code right in the same function.

So we’ll simply tell Meteor to wait for 5 seconds if we’re on the server (using the handy Meteor._sleepForMs() function):

Meteor.methods({
  insertPost: function (post) {
    if (Meteor.isServer) {
      Meteor._sleepForMs(5000); // wait for 5 seconds
    }
    Posts.insert(post);
  }
});

But wait, where do we put this? First, let’s see what happens when we put it in our /server directory:

No latency compensation

We see our alert() immediately, but as expected the actual post doesn’t show up for five seconds.

If you remember what we said a few paragraphs earlier, this should make sense: by putting our code in the /server directory, we’re effectively hiding it from the client. The direct consequence is that the client has no way of executing that code to simulate its effects!

Things are different if we put the exact same method code in a shared file (such as /common.js at the root of our project):

With latency compensation

This time latency compensation does kick in, and our post appears instantly, no matter how long the server is waiting behind the scenes.

Recap

Let’s go over what we’ve learned up to now:

  • Native methods like Collection.insert() and Collection.update() are automatically latency compensated.
  • Custom methods can be latency compensated if you make sure their code is accessible by the client.

So far so good. But out there in the real world, things can get a lot trickier than that. So stay tuned for part two of this article, where we’ll dig deeper and learn how to deal with a few common latency compensation issues.