JavaScript can save your day or it can cause you nightmares. JavaScript and XHR (XMLHttpRequest) enable what the industry considers to be Web 2.0 – meaning highly interactive web sites where some application logic is pushed down to the client into the browser's JavaScript engine. As with any application code – regardless of the language and runtime environment – it is easy to ignore Best Practices, which ultimately hurts the end-user experience of the site.

dynaTrace AJAX Edition has a unique capability to trace all JavaScript execution on the web page. It also traces calls into the Browser DOM (Document Object Model) and is able to capture method arguments and return values. The following illustration shows a JavaScript trace of a script execution in the PurePath view of the dynaTrace AJAX Edition:

By getting this level of detail on JavaScript execution it is easy to identify slow-running JavaScript handlers, slow custom JavaScript code, slow access to the DOM and expensive or inefficient calls into 3rd party frameworks such as jQuery.

Blocking and long running script tags

When scripts get downloaded and then executed by the browser, the browser typically stops all other downloads. This behaviour can easily be observed by looking at the network breakdown view, as in the following illustration:

The dynaTrace AJAX Edition Timeline View shows exactly what happened during these periods:

Delay Loading JavaScript Files

The goal of a page must be to download all resources as fast as possible in order to improve the Time to First Impression and Time to onLoad. Yahoo and Google suggest delaying or deferring the loading of JavaScript files. Putting JavaScript files at the bottom of the page or using the DEFER attribute is one option, as described in Yahoo's Best Practice Document. Google's Best Practices describe delay loading of JavaScript in the onLoad Event Handler by dynamically adding script tags to the DOM.
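A minimal sketch of that technique – the file name lazy-features.js is just a placeholder for any script that is not required for the initial rendering:

    // download and execute a script only after the page has finished loading
    function loadDeferredScript() {
      var script = document.createElement("script");
      script.type = "text/javascript";
      script.src = "lazy-features.js"; // hypothetical, non-critical script
      document.getElementsByTagName("head")[0].appendChild(script);
    }

    // register the loader for the onLoad event (attachEvent covers older IE)
    if (window.addEventListener) {
      window.addEventListener("load", loadDeferredScript, false);
    } else if (window.attachEvent) {
      window.attachEvent("onload", loadDeferredScript);
    }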

Optimizing JavaScript Execution

Besides optimizing the load sequence of JavaScript files, the JavaScript code itself can often be sped up. The dynaTrace AJAX Edition provides two views to analyze JavaScript execution:

The JavaScript/AJAX Tab on the Performance Report shows data similar to the HotSpot View. It analyzes all JavaScript executions on a page and provides an aggregated list of all methods with their overall performance contribution. The list is filtered to script blocks and calls to external libraries such as jQuery. This list gives you a good starting point for performance improvement efforts. The view also shows who called these problematic methods (Back Traces), which methods they called (Forward Traces) and the actual JavaScript source code:

The next interesting view is the PurePath view. It shows the full execution trace of every individual script tag (and of all other script event handlers). By double-clicking a script execution block in the Timeline view or by drilling down from a method identified in the HotSpot view we get to the PurePath view, which shows the exact trace:

With these in-depth execution traces it is easy to identify where time is spent. The PurePath view also shows the actual JavaScript code. Bad performance often comes from excessive string manipulation, manipulation of the DOM, DOM object lookups using CSS Selectors, problematic 3rd party JavaScript libraries and too many or long-running XHR calls. The following sections give more detail on these problems.

Slow CSS Selectors with jQuery/Prototype

The #1 problem we have seen since we released dynaTrace AJAX Edition is the use of class-name-based CSS Selectors with frameworks like jQuery or Prototype.

What are CSS Selectors

The jQuery Docs provide a detailed description of all available CSS Selectors. Objects in the HTML DOM can be identified in multiple ways, and JavaScript code needs access to those objects to manipulate them in order to achieve a more interactive web site. Developers can either use the low-level functions that the browser provides, such as getElementById or getElementsByTagName, or iterate through the whole DOM tree and try to find an element based on other criteria. In order to make this easier, frameworks such as jQuery or Prototype provide a very convenient way to look up elements using CSS Selector expressions. In a JavaScript file we can therefore find code like the following (a simplified sketch – the hide() call only stands in for whatever the code actually does with the result):
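    // look up all DIV elements that carry the CSS class vt_tabPanel
    var tabPanels = $("div.vt_tabPanel");
    tabPanels.hide();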

The query $("div.vt_tabPanel") returns all DIV tags that have the CSS class vt_tabPanel assigned. jQuery needs to use the underlying capabilities of the browser to find the elements that match this criterion. Different browsers provide different types of query methods. Internet Explorer, for instance, does not provide a method to look up elements by class name. Other browsers provide the method getElementsByClassName. As this method is missing in IE, jQuery simulates the functionality by iterating through the whole DOM and checking every single DOM element as to whether it matches the class name or not. Depending on the size of the DOM this can become a very lengthy operation.

For a detailed description about the actual performance impact of using CSS Selectors like this please read 101 on jQuery Select Performance and 101 on Prototype CSS Selectors.

How to improve CSS Selector performance

There are several ways to improve CSS Selector performance especially on Internet Explorer but also on other browsers.

Use Unique ID when possible

The fastest way to look up a single element is to use its unique id. If your query only returns a single element you should give this element a unique id and then look it up by ID.

Instead of $("div.vt_tabPanel") we can use $("#myTabPanel"). On average, looking up elements by ID is about 95% faster than looking up elements by class name.
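A small sketch of the difference – it assumes the panel element has been given the id myTabPanel in the markup:

    // assumes markup like <div id="myTabPanel" class="vt_tabPanel"> ... </div>
    var tabPanel = $("#myTabPanel");       // fast: maps to the native document.getElementById
    var tabPanels = $("div.vt_tabPanel");  // slow on IE: may have to scan large parts of the DOM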

Specify a Tag name if you have to use the Class Name

When looking up elements by class name and these elements are all of the same tag type, e.g. DIV, use the tag name as part of the query.

Instead of $(".vrtc_activetabdiv") we can use $("div.vrtc_activetabdiv"). The latest versions of jQuery are smarter when resolving these queries than some older ones. If you specify a tag name, jQuery will first resolve these elements using getElementsByTagName (which is natively supported by all browsers). It then iterates through all these elements and matches the class name. This approach saves a lot of execution time as jQuery doesn't need to iterate through ALL DOM elements but only those of a specific tag.
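As a sketch (the class name is taken from the example above):

    // slow: a bare class selector gives jQuery no tag name to narrow the search
    var activeByClass = $(".vrtc_activetabdiv");

    // faster: jQuery can first collect candidates via getElementsByTagName("div")
    // and only then match the class name
    var activeByTag = $("div.vrtc_activetabdiv");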

Specify a parent context

By default jQuery queries the whole DOM to return the elements specified by the selector query. It also allows you to pass in a context object as the second parameter. This context parameter causes jQuery to query only the elements underneath the passed context object instead of the whole document. If you are looking for certain objects and they all share a common parent, you are better off passing that common parent as the context object.

Instead of $("div.vt_tabPanel") we could use $("div.vt_tabPanel", $("#tabArea")). This approach saves a lot of execution time as jQuery doesn't need to iterate through ALL DOM elements but only through the child elements of the parent context object.
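A short sketch – it assumes the tab panels all live underneath an element with the id tabArea:

    // look up the shared parent once via its id (a cheap lookup) ...
    var tabArea = $("#tabArea");

    // ... and restrict the class-name query to that parent's descendants
    var tabPanels = $("div.vt_tabPanel", tabArea);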

Cache Lookup Results

As we have learned, lookups can be very expensive. Therefore we should avoid unnecessary lookups. The result of a lookup can be saved in a variable and reused at a later stage. A perfect example is a loop in which the same object is looked up over and over again.

Instead of code like this (an illustrative sketch – the list of items and the selector are hypothetical):
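    // the same selector is evaluated again on every loop iteration
    var items = ["Item 1", "Item 2", "Item 3"]; // hypothetical data
    for (var i = 0; i < items.length; i++) {
      $("div.vt_tabPanel").append("<p>" + items[i] + "</p>");
    }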

it is better to do this:
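    // the lookup result is cached in a variable and reused inside the loop
    var items = ["Item 1", "Item 2", "Item 3"]; // hypothetical data
    var tabPanel = $("div.vt_tabPanel");
    for (var i = 0; i < items.length; i++) {
      tabPanel.append("<p>" + items[i] + "</p>");
    }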

Reduce the DOM Size

What makes queries slow is the number of DOM elements that need to be iterated – either natively by the browser or via JavaScript when jQuery tries to mimic functionality the browser is missing. The fewer DOM elements, the fewer iterations are necessary. Bringing down the number of DOM elements on the page therefore has a positive impact on element lookups.

Too many XHR calls

JavaScript and XmlHttpRequests are the basis for what is generally called AJAX. Frameworks like jQuery make it very easy to make AJAX calls in order to retrieve additional content from the server. An example would be the implementation of a paging mechanism: instead of downloading all pages at once, only the first page is downloaded. When the user navigates to the next page we request it via an AJAX call and refresh the DOM. This avoids a full round trip for the whole document and keeps the browser from reloading the entire page.

A mistake that is often made is that too much information is fetched dynamically, with too many calls. One example is a product page with 10 products where the developer decides to use AJAX to load the detailed product information for each product individually. This means 10 XHR calls for the 10 products that are displayed. It will of course work, but it means 10 round trips to the server that make the user wait for the final result, and the server needs to handle 10 additional requests, which puts additional pressure on the server infrastructure.

Instead of making 10 individual requests it is recommended to combine these calls into a single batched call that requests the product details for all 10 products on the page.
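A sketch of the difference – the /productDetails URL, its parameters and the updateProductTile function are hypothetical; the point is the number of round trips:

    // hypothetical render function for a single product's details
    function updateProductTile(details) { /* update the product's DOM section */ }

    var productIds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

    // too chatty: one XHR round trip per product
    for (var i = 0; i < productIds.length; i++) {
      $.get("/productDetails", { id: productIds[i] }, updateProductTile);
    }

    // better: a single batched XHR call for all products on the page
    $.get("/productDetails", { ids: productIds.join(",") }, function (detailsList) {
      for (var i = 0; i < detailsList.length; i++) {
        updateProductTile(detailsList[i]);
      }
    });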

The dynaTrace AJAX Edition shows all XHR/AJAX calls on the JavaScript/AJAX Tab of the Performance Report:

A Drill-Down to the PurePath shows you where these AJAX calls are executed, giving you a better understanding of which component and which JavaScript handler triggers those requests:

A click on the details of the Network Request shows you the request/response details including the retrieved content. Rich JavaScript UI frameworks – such as GWT – use AJAX as part of updating UI components. Depending on the framework's configuration settings you may end up with more XHR calls than is good for the overall web site performance.

Manipulating the DOM

The DOM (Document Object Model) is the representation of the current web page and is accessible to the JavaScript code that gets executed. Elements in the DOM can be queried, property values can be accessed and methods can be called. Manipulating the DOM can, however, be very expensive in terms of its impact on end-user performance.

The dynaTrace AJAX Edition provides the HotSpot View to analyze not only JavaScript but also DOM access. From there it is easy to identify the JavaScript handlers that accessed the DOM and what the performance impact of these calls is:

The view can be filtered to only show calls into the DOM. From there we can easily see which calls into the DOM are very expensive, e.g. changing the class name of DIV tags or the BODY tag. The Back Traces view shows us which JavaScript code actually made the call – it could be our custom code or a call made through a framework such as jQuery.

Manipulating the DOM can be very expensive. Changing the class name – especially on the body tag – causes the browser to re-evaluate all elements on the page.
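An illustrative sketch (the class names are hypothetical) – when several changes to the body's class are needed, it is cheaper to apply them in one go instead of one by one:

    // each call looks up <body> again and changes its class attribute separately;
    // every change can force the browser to re-evaluate styles for the whole page
    $("body").addClass("theme-dark");
    $("body").addClass("compact-layout");
    $("body").addClass("logged-in");

    // a single lookup and a single combined class change
    $("body").addClass("theme-dark compact-layout logged-in");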

Performance Savings, Recommendations and Rank Calculation

The goal is to avoid JavaScript execution in the early stages of the page load in order to achieve a fast overall page load. When the user interacts with the web site it is important to keep the site as responsive as possible. Optimize your JavaScript code so that the user does not perceive the page as slow.

Recommendations and Savings

Reduce the amount of JavaScript code and the number of JavaScript files on the page. This is also explained in the Best Practices on Network. Follow the Best Practices on jQuery Selector Performance and the general jQuery Performance Rules.

When using 3rd Party frameworks make sure you understand how they work on your web page and optimize the framework usage by comparing different configuration settings. As an example, read the blog about the Performance Impact of 3rdParty JavaScript Menus.

Focus on long running JavaScript blocks and long running methods. These are the best hotspots to start optimizing JavaScript execution.

Rank Calculations

dynaTrace AJAX Edition calculates a Rank based on the number of JavaScript files and on long-running script blocks. We consider 2 JavaScript files as good but penalize the Rank for every additional file, as we believe these files can be merged, thereby reducing roundtrips and script parsing.

Script blocks that execute for longer than 20ms are considered to have potential for improvement. The longer a script block executes, the more impact it has on the overall performance and the lower the resulting Rank. For every block that runs longer than 20ms we take its execution time above that threshold; every 50ms of this time reduces the Page Rank by 1 point.

Some script blocks use JavaScript timers and may end up running for a very long time. The ranking for these blocks might therefore seem unfair and inaccurate, so please treat this Rank with caution if your application makes heavy use of timers. To avoid overly unfair ranking, we only account for 2s of any script block that takes longer than 2s to execute.

We also take XHR calls into account in the Rank calculation. We consider more than 5 XHR calls as too many and therefore penalize the page for more than 5 calls. Most pages that we have analyzed show only a handful of calls during the load phase. On pages where there was a problem we usually see far more than 5 calls, which is often the result of an incorrect implementation that does not follow best practices (too chatty).

Note: On interactive sites this approach will lower the Page Rank, as XHR is often used in mouse and keyboard event handlers to download additional content. This Rank might therefore be misleading on pages with heavy interaction.

Example

Take a page that has a total of 5 JavaScript files. Of its script blocks, 4 execute faster than 20ms, 2 execute in 500ms, one takes 700ms and the last one takes 1s. The page also makes 4 XHR calls.

The Rank gets degraded by 3 points because of too many JavaScript files (5 files, 3 more than the 2 we consider good). The script blocks exceeding the 20ms threshold contribute a total of 2620ms above that threshold (480 + 480 + 680 + 980), which leads to a Rank reduction of 52 (2620/50). We do not penalize for XHR as the page stays below the 5-call threshold.
We end up with a Page Rank of 45 (100 - 3 - 52), which corresponds to an F Grade.
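The rules above can be summarized in a small sketch – it is only an illustration of the description in this section, not the actual implementation of dynaTrace AJAX Edition, and the penalty of 1 point per XHR call above 5 is an assumption:

    function calculateJavaScriptRank(jsFileCount, scriptBlockTimesMs, xhrCallCount) {
      var rank = 100;

      // 2 JavaScript files are considered good, every additional file costs 1 point
      rank -= Math.max(0, jsFileCount - 2);

      // for every block slower than 20ms, count its time above the threshold
      // (capped at 2s per block); every 50ms of that time costs 1 point
      var slowTimeMs = 0;
      for (var i = 0; i < scriptBlockTimesMs.length; i++) {
        var timeMs = Math.min(scriptBlockTimesMs[i], 2000);
        if (timeMs > 20) {
          slowTimeMs += timeMs - 20;
        }
      }
      rank -= Math.floor(slowTimeMs / 50);

      // assumption: every XHR call beyond the 5-call threshold costs 1 point
      rank -= Math.max(0, xhrCallCount - 5);

      return rank;
    }

    // the example from above: 5 files, script blocks of 10/10/10/10/500/500/700/1000 ms, 4 XHR calls
    var rank = calculateJavaScriptRank(5, [10, 10, 10, 10, 500, 500, 700, 1000], 4);
    // rank === 45, i.e. an F Grade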

Further Readings

Here are further reads and a detailed explanation on Cache Settings:
