One of my most recent and interesting challenges was building a module where you can scroll, search and filter through a really long list of horse racing-related bets. These can quickly add up to more than a thousand entries, meaning a lot of information must be shown on a small device with a slow network connection and sparse processing power.
The first idea that comes to mind is probably pagination. It solves most network issues by making only small requests, scales really well, and since you're only processing small chunks of data at a time, you probably won't suffer from performance issues.
But when you want to scroll through all those horses and not feel like you're using a website, pagination – even if performed automatically as you scroll down – just can't deliver the responsive feel that most native applications offer. Furthermore, in such a scenario all searches must be done server-side and those results also need pagination. So we aimed to handle 10,000 list items with the best possible experience and fast rendering on my four-year-old iPhone 4S.
A quick search on the web shows that the new trend is virtual scrolling and since we have an AngularJS app, we immediately considered angular-vs-repeat, which is quite an amazing directive that wraps your good old ng-repeat and gives it a huge performance boost by rendering only a visible subset of a potentially huge list. Unfortunately this out of the box solution couldn’t work for us since we needed some very specific fine tuning in order to save some extra requests.
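For reference, angular-vs-repeat really is as simple as wrapping your existing ng-repeat. A sketch of its documented usage (the bets model here is illustrative, not our actual markup):

```html
<!-- vs-repeat goes on the scrollable container; only the rows that
     fit the viewport are actually rendered and the rest are faked
     with filler space. "bet in bets" is an illustrative model. -->
<div vs-repeat class="bets-list">
  <div ng-repeat="bet in bets">{{ bet.horse }} / {{ bet.odds }}</div>
</div>
```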
So we opted for angular-inview, a simpler AngularJS directive that just informs you when an element becomes visible or hidden. This way we could easily build a custom solution, so we tried 4 different approaches:
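Its usage is equally minimal: the in-view expression is evaluated whenever the element enters or leaves the viewport, with `$inview` available as a boolean local (the callback name below is hypothetical):

```html
<!-- angular-inview evaluates the expression on visibility changes;
     rowVisibilityChanged is an illustrative scope function. -->
<div ng-repeat="row in rows"
     in-view="rowVisibilityChanged($index, $inview)">
  ...
</div>
```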
| Approach | Loading Time MBP | Loading Time iPhone 4S |
|----------|------------------|------------------------|
| (a) full list rendered | ~2 min | crashed |
| (b) in-view on every row | slightly better | n/a |
| (c) in-view on 30-row buckets | 8 s | 100 s |
| (d) placeholders only near the viewport | slightly better than (c) | 30 s |
(a) First we started by measuring how much it would cost us to render a full list of 10,000 elements, each of them containing a lot of code (directives, filters, etc.). The results were not surprising: my MacBook Pro took around 2 minutes to get everything ready and my iPhone's Safari just crashed.
(b) Then we applied the angular-inview directive to every row. If the row was in the viewport, we'd render the content; if it was outside the viewport, we'd render a placeholder with no data. My laptop timing improved a little bit, but a quick analysis of what was happening showed that the browser was losing a ton of time inside angular-inview.js, in a handler bound to the scroll event. Every row had its own scroll handler, and that's a very, very bad idea.
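For contrast, virtual-scrolling implementations typically bind a single scroll handler and derive the visible range from the scroll position. Assuming fixed-height rows, the core computation is tiny (a sketch; names and numbers are ours, not from the app):

```javascript
// Single delegated handler: given the scroll position, compute which
// rows are visible, instead of letting every row listen to scroll.
// Assumes every row has the same fixed height.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  var first = Math.max(0, Math.floor(scrollTop / rowHeight));
  var last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return { first: first, last: last };
}
```

One `scroll` listener calling something like `visibleRange` replaces 10,000 per-row listeners, so the per-scroll work no longer grows with the list size.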
(c) So instead of attaching the in-view directive to every element, we tried bucketing the data in groups of 30 rows, leaving us with 1/30 of the handlers. The results were quite surprising: Chrome was now taking only 8 seconds to render everything, and the iPhone finally started showing us our list with no problems… in 100 seconds. A module that takes 100 seconds to render is as good as nothing, so we still needed a better solution.
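The bucketing step itself is straightforward; a minimal sketch (the helper name is hypothetical):

```javascript
// Split a flat list of rows into buckets of a fixed size, so that
// one in-view directive can watch a whole bucket instead of one row.
function bucketRows(rows, size) {
  var buckets = [];
  for (var i = 0; i < rows.length; i += size) {
    buckets.push(rows.slice(i, i + size));
  }
  return buckets;
}
```

With a bucket size of 30, 10,000 rows become 334 buckets, and therefore 334 scroll handlers instead of 10,000.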
(d) It turns out the iPhone was really slow at rendering lots of HTML. It was no longer a matter of scripting: we were rendering almost 10,000 placeholders, each containing some simply styled HTML. As soon as we understood this, we started rendering placeholders only for the buckets close to the viewport; the rest just got an empty placeholder of the appropriate height. We got slightly better numbers for Chrome, but a major improvement on the iPhone.
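The per-bucket decision we ended up with can be sketched as a three-state function (illustrative names; our actual directive code differed):

```javascript
// 'content':     bucket is in the viewport, render the real rows.
// 'placeholder': bucket is within `margin` buckets of the viewport,
//                render cheap placeholder rows.
// 'spacer':      bucket is far away, render one empty element whose
//                height matches the bucket so the scrollbar stays right.
function bucketState(index, firstVisible, lastVisible, margin) {
  if (index >= firstVisible && index <= lastVisible) {
    return 'content';
  }
  if (index >= firstVisible - margin && index <= lastVisible + margin) {
    return 'placeholder';
  }
  return 'spacer';
}
```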
Is 30 seconds still a lot? Sure. Could we improve this solution even more? Probably. But if premature optimization is the root of all evil, and testing this solution in a real scenario gives us a snappy application even on older devices, maybe that's a sign we can take the rest of the day off. 🙂