It’s been a while since we’ve posted anything technical related to Teamwork Chat. I’m starting off what will likely be a series of posts about bringing our desktop client to the Web. It wasn’t quite as easy as Adam described in A peek under the hood of Teamwork Chat, but it was very close. We had to sort out some cross-browser compatibility issues, add Web notifications, and so on. The CSS side of things was pretty painless thanks to Autoprefixer. An added bonus of releasing the Web client was that it let people access Teamwork Chat on their phones while we worked on finishing up our mobile apps.
Since then, we’ve made a lot of changes to the app to speed up the time it takes you to get to your messages: inlining critical CSS, parallelizing API calls, loading less important JS on demand, and reducing the amount of time and effort needed to render messages and conversations. Today I’ll be talking specifically about how we publish, store, and serve our static assets.
Publishing a new version
To publish a new release of the Web version, we bump the version number and merge the changes onto our master branch. Our continuous integration and deployment (CI/CD) server, CodeShip, then compiles and tests the merge commit. If the tests pass, CodeShip goes on to send the new files up to an Amazon S3 bucket.
Our bucket is in Virginia, so for us (located in Ireland) there will always be a 100ms round-trip time added onto every request. To reduce this for users not on the east coast of the US, we hooked our S3 bucket up as an origin to a CloudFront distribution and request all of our content from there.
How CloudFront works
CloudFront is a content delivery network (CDN) service that lets you distribute static content with very low latency. CDNs use geo-routing to serve content for a single domain from many different locations (currently ~35) around the world. When a request is made to a CloudFront domain, it is routed to the edge location closest to the requester’s DNS server. For us this automatically cut 80ms off request times!
When the first request is made for a file on CloudFront, it grabs the file from your origin (S3, in our case), caches the response, and forwards it on to the requester. Every subsequent request for that file is served the same cached response unless the request headers differ.
Adding Min/Max TTLs is very important: this gets CloudFront to add Cache-Control headers with your specified values. When a browser receives these headers, it stores the file in its cache until the expiration date is reached. We set Max TTL to the maximum possible value (31536000 seconds, i.e. one year) because we never need to invalidate files, thanks to the way we store them. The TTL values also tell CloudFront how long to keep objects in its own cache before requesting a fresh copy from the origin.
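As a sketch, the Cache-Control values this setup produces look something like the following. The helper name is hypothetical; the max-age matches the Max TTL described above, and the special case for the entry file reflects the versioned layout described further down.

```javascript
// Cache-Control values for our two kinds of file (helper name hypothetical).
var ONE_YEAR = 31536000; // seconds — the value we use for Max TTL

function cacheControlFor(path) {
  // the entry file must be revalidated so users pick up new releases
  if (path === 'index.html') return 'no-cache';
  // versioned assets never change, so they can be cached for the full year
  return 'public, max-age=' + ONE_YEAR;
}
```

With this split, the long-lived assets never hit the network twice, while the entry point stays cheap to revalidate.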
TTLs can be set in a variety of different ways; the best place to read up on this is CloudFront’s official docs.
Gzipping your content on CloudFront is as easy as ticking a checkbox, so this is a bit of a moot point, but because of the hoops we had to jump through a few months ago I’m going to mention it anyway. We had a HAProxy server set up for a variety of different reasons, so instead of using S3 as the origin in CloudFront, we pointed it at the HAProxy server, which rewrote the URL and proxied the request on to S3. Once this was sorted, all we had to do was add a few lines to the S3 backend in HAProxy and, hey presto, Content-Encoding: gzip!
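Our exact configuration isn’t shown here, but a minimal HAProxy backend along these lines would do it — treat this as a hedged reconstruction using standard HAProxy directives (`compression`, `http-request set-header`); the backend and server names are illustrative:

```haproxy
backend s3_assets
    # S3 rejects proxied requests unless the Host header matches its
    # endpoint, so rewrite it before forwarding (see below)
    http-request set-header Host s3.amazonaws.com
    # compress text assets on the way through
    compression algo gzip
    compression type text/css application/javascript application/json
    server s3 s3.amazonaws.com:443 ssl verify none
```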
As I mentioned above, if you choose to forward particular request headers on to the origin, CloudFront takes them into consideration when deciding whether or not to serve a cached response. We tell CloudFront to forward the Accept-Encoding header to the origin; if a request arrives without Accept-Encoding: gzip, CloudFront forwards it on to the origin, fetches an uncompressed version of the file, and serves (and caches) that instead. This way you don’t need to worry about supporting consumers that can’t handle gzipped content.
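A conceptual model of that behaviour: the cache key is effectively the URL plus the values of any forwarded headers, so two requests that differ only in a forwarded header get separate cached copies. This is illustrative only, not CloudFront’s internals:

```javascript
// Illustrative model of a CDN cache key when headers are whitelisted
// for forwarding — not CloudFront's actual implementation.
function cacheKey(url, headers, forwardedHeaders) {
  var parts = [url];
  forwardedHeaders.forEach(function (name) {
    // a different value in a forwarded header means a separate cached copy
    parts.push(name + '=' + (headers[name] || ''));
  });
  return parts.join('|');
}
```

Headers you don’t forward are ignored entirely, which is why forwarding as few headers as possible keeps your cache hit ratio high.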
One thing to note is that S3 will only accept requests if the request’s Host header is s3.amazonaws.com. So if you are proxying requests to S3, make sure you replace the Host header. This one absolutely killed me.
CloudFront makes it extremely easy to shard your CDN across domains, letting you get around browsers’ limits on concurrent requests to a single domain. We haven’t used sharded domains yet because there isn’t enough of a gain: domain sharding has diminishing returns, since the browser needs to do a DNS lookup and a TCP three-way handshake for each extra domain. Anyway, we technically do it right now because we’re using a CDN; we have the x.teamwork.com domain (mostly for API calls) and the cdn.teamwork.chat domain for assets.
That said, we have set up multiple cdnX.teamwork.chat domains pointed at the same origin; they’re ready and waiting to be used. Hopefully soon (once HTTP/2 is used everywhere) domain sharding will be little more than a memory.
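If we do switch those domains on, shard selection needs to be deterministic so the same file always comes from the same host and browser caches stay warm. A sketch of one way to do it (the cdnX.teamwork.chat hostnames are real; the hashing scheme here is just an illustration):

```javascript
// Pick a shard deterministically from the asset path (illustrative scheme).
function shardHost(path, shardCount) {
  var hash = 0;
  for (var i = 0; i < path.length; i++) {
    // simple 31-based rolling hash, kept in unsigned 32-bit range
    hash = (hash * 31 + path.charCodeAt(i)) >>> 0;
  }
  // the same path always maps to the same shard
  return 'cdn' + ((hash % shardCount) + 1) + '.teamwork.chat';
}
```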
To avoid having to manually invalidate objects on CloudFront every time we make a new release (invalidation can take an hour or sometimes much longer, and also costs money), we put every file apart from the entry HTML file (index.html) in a folder named after the version number. It looks something like this:
0.13.1/
    scripts/
        app.js
        ...
    styles/
        ...
0.13.2/
    ...
index.html
The index.html file is the only file that is replaced when a new update is rolled out. This structure is great because it lets people who still have an old version of the app loaded request files they hadn’t fetched before the update was published. The next time the user reloads, they get the updated index.html, which then loads its assets from the new locations. S3 sets an ETag on index.html, so the user doesn’t need to re-download its content unless it actually changes.
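The URL scheme above can be sketched as a small helper — the helper name is hypothetical, but the domain and layout are the ones described in this post:

```javascript
// Derive an asset URL from the release version and relative path.
var CDN_ROOT = 'https://cdn.teamwork.chat/';

function assetUrl(version, path) {
  // index.html is the only un-versioned file: it is replaced on every
  // release and revalidated by the browser via its ETag
  if (path === 'index.html') return CDN_ROOT + 'index.html';
  // everything else lives under an immutable version-number prefix
  return CDN_ROOT + version + '/' + path;
}
```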
Speeding up time to first screen
Before we moved to the Web, the size of our index.html wasn’t an issue: it was loaded from your file system, so we had ended up filling it with everything needed to run the app, from inline SVG elements to CSS. For the Web we needed to reduce its size by a significant amount; in the end we got it from 400KB down to 39KB. To achieve this, we moved out anything that wasn’t needed for showing the first screen or for error handling, and we load it on demand once it’s required.
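On-demand loading of a script boils down to injecting a script tag and waiting for it. A minimal sketch along those lines (our real loader differs; the document object is passed in here purely to keep the sketch testable outside a browser):

```javascript
// Load a script on demand, resolving once it has executed.
function loadScript(src, doc) {
  return new Promise(function (resolve, reject) {
    var el = doc.createElement('script');
    el.src = src;
    el.async = true; // don't block parsing of anything else
    el.onload = function () { resolve(src); };
    el.onerror = function () { reject(new Error('Failed to load ' + src)); };
    doc.head.appendChild(el);
  });
}
```

In a browser you would call it as `loadScript('0.13.2/scripts/feature.js', document)` the first time the feature is needed.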
How does this affect the desktop version?
Well, it doesn’t. It would actually slow it down, but only by ~1ms. We simply request the files in 0.13.2, for example, directly. Local files take a negligible amount of time to load, and there is no limit on the number of concurrent requests you can make to file:// URLs. If we used a CDN, our desktop apps would need a network connection just to start up; an NW.js app is offline-first by default.
We decide at build time whether the app should use relative or CDN-based links for assets, injecting the asset root URL into our templates and scripts. We use relative links during development, of course. During the build, CDN usage is enabled automatically on CodeShip by checking which branch is being built.
We also publish multiple other branches online on every push, for testing and the like. We keep an object mapping branches to CDN addresses; in some cases, CDN usage is disabled altogether.
Think you could do better? We’re always looking for great people to join the Teamwork Chat crew. As well as working on interesting stuff like this every day, you won’t believe some of the perks and benefits of working with us.