As far as I understand, if the data contains an object with an id that coincides with the id (or part of it) of an element, that object's data is placed inside it, right?
So here goes:
Loop through the elements, not the data.
Currently you are looping through the 1000-item data. Why not loop through the 100 elements instead? That way you skip the other 900 data entries, which may not have a corresponding container on the page at all.
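To make the difference concrete, here's a minimal sketch with plain objects standing in for the DOM (the ids and data shape are hypothetical, matching the question's setup): with the data keyed by id, looping over the 100 "elements" costs 100 constant-time lookups instead of 1000 iterations over the data.

```javascript
// Simulated data: 1000 entries keyed by id.
var data = {};
for (var i = 1; i <= 1000; i++) data[i] = ['a' + i, 'b' + i];

// Stand-in for the 100 divs actually on the page.
var elementIds = [];
for (var j = 1; j <= 100; j++) elementIds.push(String(j));

// Loop over the elements, not the data.
var lookups = 0;
elementIds.forEach(function (id) {
    lookups++;
    var entry = data[id]; // O(1) lookup; no pass over the other 900 entries
});

console.log(lookups); // 100, not 1000
```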
Narrow down your sets
In your code, you did this:
$('div').each(function(){...});
The problem with this is that it gets ALL divs and checks each one's id. I repeat: ALL the divs, including the ones you don't even need. On a fully packed page you'd end up with a thousand of them, wasting cycles on the useless ones.
Instead, try narrowing your set by targeting specific divs. Here's one example, targeting divs whose id starts with test_. At least this time it's not all divs, but only a fraction: the ones with an id starting with test_.
$('div[id^="test_"]').each(function(){...});
As Nikola Vukovic pointed out, my "ids starting with test_" example is slow because of how these selectors are parsed and how elements are fetched. You can read more about right-to-left CSS selector parsing here. You might say that modern browsers have querySelectorAll - but that method is also slow, and older browsers don't have qSA at all. What jQuery does is use a combination of available methods like getElementsByClassName, getElementById and others, which are fast by themselves, but slow when selectors get complex and result sets are huge.
Anyway, with that explanation done, here's another way to do it: add a class to your elements and fetch them by class.
// Classes, assuming your divs have the class test
$('.test').each(function(){...});
Alter the data structure
I'm no stranger to 1000 items in JSON. One thing we did to optimize access and file size was to flatten and simplify the data. You can do this instead:
{
"1" : ["dataAvalue","dataBValue"],
"2" : ["dataAvalue","dataBValue"],
"3" : ["dataAvalue","dataBValue"],
...
}
This way, you don't need to loop through the data at all. You can use hasOwnProperty to check whether an entry exists in the data.
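A minimal sketch of the flattened structure and the O(1) existence check (the values here are just the placeholder strings from the example above):

```javascript
// Flattened data: id -> [dataA, dataB]
var data = {
    "1": ["dataAvalue", "dataBValue"],
    "2": ["dataAvalue", "dataBValue"]
};

// No loop needed: check for an entry directly by key.
console.log(data.hasOwnProperty("1"));   // true
console.log(data.hasOwnProperty("999")); // false
console.log(data["1"][0]);               // "dataAvalue"
```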
Combining the suggestions
The code might look like this, using the suggested data structure of course. Further optimizations in place as well:
//Get your 100 target divs
$('.test').each(function(){
//You can directly get the id from the object this way
var id = this.id.replace('test_','');
//If the data does not have an entry for this id, skip
if(!data.hasOwnProperty(id)) return;
//Otherwise, add the HTML using innerHTML
this.innerHTML = 'dataA : ' + data[id][0] + ' , dataB : ' + data[id][1];
});
Native vs jQuery
Hands down, native wins in speed. But the problem with native JS is that code gets tangled easily, and you'd often be re-creating code that libraries have already implemented.
However, libraries like jQuery gain the upper hand when it comes to readability and simplicity. Libraries tend to normalize the API across browsers, keeping code simple and consistent.
But it's up to you to balance it out, whichever makes you comfortable. I usually prefer building code with libraries first, then optimizing afterwards by replacing routines with native ones where needed and possible. For example, using the native forEach on arrays instead of a library's each, or accessing element properties directly instead of using attr.
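As a small sketch of that last point, here is native Array.prototype.forEach doing the job a library's each would do (the array and its contents are made up for illustration):

```javascript
// Native forEach instead of $.each / .each from a library.
var values = [1, 2, 3];
var doubled = [];
values.forEach(function (v) {
    doubled.push(v * 2);
});
console.log(doubled.join(',')); // "2,4,6"

// Similarly, inside a DOM loop, reading this.id directly skips the
// jQuery wrapper and attribute machinery that $(this).attr('id') goes
// through, which is why direct property access tends to be faster.
```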