Pusher-js 2.0.0 released: Cutting edge WebSockets with comprehensive legacy support

  - Announcement


We’re very excited to announce the release of a massive new update to our JavaScript client library. The new release adds a smarter connection strategy with additional fallback mechanisms such as HTTP-based transports. These changes further improve our support for legacy devices and network complications such as corporate firewalls.

The JavaScript client library is our most widely used library. Its aim is to quickly and reliably connect browser-based clients to Pusher. Connectivity is a fundamental aspect of our service, and we’ve put a lot of thought into making this the most comprehensive and considered connection manager available.

We’ll be releasing updates to other client libraries including the iOS and Android ones in the coming months.

The changes in brief

The new version of pusher-js introduces a completely re-worked connection strategy, and adds several HTTP transports to our range of fallbacks. The result is much more comprehensive device compatibility, reduced time to connect, and a host of other improvements.

As well as showing the benefits of the changes, this post attempts to show how we approached the problem and some of our design decisions.

Why we’ve rethought the transport mechanism

It’s important to point out that the new transports are an extension of our fallback options. This release should in no way detract from our ongoing philosophy that WebSockets are the most awesome way of communicating with a centralised server. This version of the pusher-js client is an attempt to make the world a more bearable place for those people who, for whatever reason, are unable to use the best transport available.

Complementing our existing fallback

Pusher has always worked across a wide variety of devices via a Flash-based fallback mechanism, and we haven’t replaced this option in the new version. Since the Flash fallback uses the WebSocket endpoint, it gets all the low-latency, low-overhead advantages of WebSockets. The drawbacks of this approach are that it involves downloading extra files, Flash isn’t installed in every browser, and certain ports need to be open on the client’s network.

Why we didn’t do it earlier

Adding additional transports is time-consuming, and doing it right was very important. We needed to do it in a way that preserved the experience for the thousands of developers who use Pusher on a regular basis.

In addition to this, Pusher was initially started on the back of our excitement about WebSockets, and our goal that developers should be able to use this wondrous technology without having to throw out their existing infrastructure.

This release marks a firm commitment on our part to providing the perfect balance: cutting-edge technology that works comprehensively across all devices.

The challenges of creating universal connectivity

In an ideal world, WebSockets would always be used. However, in some cases an alternative transport is needed: for example, when WebSocket and Flash are not available in the browser, or when the network contains a meddlesome proxy or firewall (especially common on mobile networks).

The new connection strategy tries alternatives in parallel, and provides the user with a transport that works.
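The racing idea can be sketched with promises. This is a hypothetical illustration, not the actual pusher-js internals: each transport attempt runs concurrently, and the first one to connect wins.

```javascript
// Hypothetical sketch of racing transports in parallel; not the actual
// pusher-js implementation. Each attempt runs concurrently and the first
// successful connection wins.
function attempt(name, connect) {
  return connect().then(function (conn) {
    return { transport: name, connection: conn };
  });
}

function connectWithFallback(transports) {
  // Promise.any resolves with the first transport that connects,
  // and rejects only if every transport fails.
  return Promise.any(transports.map(function (t) {
    return attempt(t.name, t.connect);
  }));
}

// Simulated environment: WebSocket is blocked (e.g. by a proxy),
// so an HTTP-based transport wins the race.
connectWithFallback([
  { name: 'ws', connect: function () { return Promise.reject(new Error('blocked')); } },
  { name: 'xhr_streaming', connect: function () { return Promise.resolve('ok'); } }
]).then(function (result) {
  console.log(result.transport); // 'xhr_streaming'
});
```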

Remembering successful strategies

We’ve also added the ability for clients to remember successful strategies in local storage. This should result in a much snappier user experience, since the client can connect quickly on subsequent page loads.
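As a rough sketch of the caching idea (hypothetical names, not the actual pusher-js code): the last transport that connected successfully is promoted to the front of the fallback order on the next load.

```javascript
// Hypothetical sketch: cache the last transport that connected successfully,
// so the next page load can try it first instead of starting from scratch.
// Falls back to an in-memory store where localStorage is unavailable.
var store = typeof localStorage !== 'undefined' ? localStorage : (function () {
  var data = {};
  return {
    getItem: function (k) { return data.hasOwnProperty(k) ? data[k] : null; },
    setItem: function (k, v) { data[k] = String(v); }
  };
})();

function rememberTransport(name) {
  store.setItem('pusherTransport', name);
}

function preferredTransport(fallbackOrder) {
  var cached = store.getItem('pusherTransport');
  // Try the cached transport first, then the usual fallback order.
  return cached ? [cached].concat(fallbackOrder.filter(function (t) {
    return t !== cached;
  })) : fallbackOrder;
}

rememberTransport('ws');
console.log(preferredTransport(['ws', 'flash', 'xhr_streaming']));
// 'ws' is promoted to the front of the list
```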

A metric driven approach

In our alpha tests we tried new versions of the library with willing participants, gathering detailed data about time to connect and which transports were used. This has evolved our approach to a point we are very happy with. We’ll write about our methodology in a future post.

The library will continue to collect anonymised metrics about connections, and we will use this data to make improvements. If you’d prefer we didn’t collect stats in your application, you can disable collection via a configuration option:
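A sketch of what this looks like — assuming the option is named `disableStats`; check the pusher-js configuration docs for the exact name in your version:

```javascript
// Sketch — assumes the option is named `disableStats`; check the
// pusher-js configuration docs for your version.
var pusher = new Pusher('YOUR_APP_KEY', {
  disableStats: true  // opt out of anonymised connection metrics
});
```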

Getting started

To use the new library, all you need to do is to change the URL:

  • http://js.pusher.com/2.0/pusher.min.js
  • https://d3dy5gmtp8yhk7.cloudfront.net/2.0/pusher.min.js (for SSL sites)
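Once the new script is on the page, usage is unchanged. A minimal sketch, with `YOUR_APP_KEY` and the channel and event names as placeholders:

```javascript
// Minimal usage sketch once the 2.0 script is loaded on the page.
// 'YOUR_APP_KEY', 'my-channel' and 'my-event' are placeholders.
var pusher = new Pusher('YOUR_APP_KEY');
var channel = pusher.subscribe('my-channel');

channel.bind('my-event', function (data) {
  console.log('received:', data);
});
```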

The changelog is available here.

For most apps, the switch itself will be almost unnoticeable. Behind the scenes, though, your users will be getting a dramatically improved experience.

As usual, contact support@pusher.com if you have any questions or encounter issues with the new release.

Pusher is the easiest way to add realtime features to your application.

  • Ted McMannus

    Not very interesting nor is it very important news.

    • bren101

      This is huge news. It was the #1 reason I switched back to Pusher last week.

    • http://twitter.com/mattwatson81 Matt Watson

      This is very important news for people using Pusher. This is a great enhancement that has been needed for a long time. It also brings them on par with Pubnub, SignalR, etc.

    • Jeremy Haile

      This is very important news. I switched away from Pusher a while back specifically because of the lack of HTTP fallback support. I’ll be looking at testing the new version and switching back now!

    • http://markmakes.com/ Mark Lancaster

      Disagree. Important and interesting. Thanks guys @pusher!

  • Gabe

    Would be nice if gzip compression was used for serving the js library so that the file size is smaller. Any reason you guys don’t do this?

    • http://www.leggetter.co.uk/ Phil Leggetter

      We use Amazon Cloudfront as our CDN right now and achieving this isn’t as easy as we’d like it to be. It is on our backlog.

      • Freddywang

        I second that. It will involve handling the Vary: Accept-Encoding header properly and applying deflate/gzip accordingly. Sounds complicated.

        I haven’t tried this before; maybe you could try another CDN provider that comes with gzip by default. Cloudfront would point to that CDN provider as its origin, and that CDN provider would point to your host as the origin. In other words, you could outsource the gzip compression to a third party.

      • Gabriel Dibble

        +1

      • Gabe

        What I do is have a cloudfront distribution that pulls directly from my own server, which handles the gzipping. Then I just set a far future expires so that requests to my server are infrequent.

      • Etienne

        Gzip on Cloudfront is easy: instead of serving from an S3 bucket, serve it from a server that does gzip output compression!