
I have JSON data containing about 1000 objects:

JSON

{
                "list": [
                            {
                                "id": 1,
                                "dataA": true,
                                "dataB": false
                            },
                            {
                                "id": 2,
                                "dataA": true,
                                "dataB": true
                            }
                            // ...and 1000+ more objects like this
                        ]
}

My jQuery script checks elements on the page like this:

HTML:

<div id="test_1"></div>
<div id="test_2"></div>
<div id="test_3"></div>
<div id="test_4"></div>
<div id="test_5"></div>
<!-- Each page has about 100 elements like this -->

And I will check each element like this:

$('div').each(function () {
    var $this = $(this);
    var id = $this.attr('id').replace('test_', '');
    $.each(data.list, function (i, v) {
        if (v.id == id) {
            $this.html("dataA : " + v.dataA + " , dataB : " + v.dataB);
        }
    });
});

What do you think will happen once the JSON contains 1000+ objects? Is there a better or faster way to do this? Is JSON the right choice in this case?

My script will do this:

  • Show information (from JSON data)

  • Save/change data (JSON values)

Demo


closed as off-topic by Jamal Dec 7 '14 at 18:46


3 Answers


As far as I understand, if the data contains an object whose id matches (part of) an element's id, that object's data is rendered inside the element, right?

So here goes:

Loop through the elements, not the data.

Currently you loop through the 1000-item data set for every element. Loop through the 100 elements instead and look up each one's data; that way you never touch the ~900 data items that have no corresponding container on the page at all.
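One way to make the element-driven loop cheap is to index the data by id up front. This is a minimal sketch, assuming `data` has the shape shown in the question:

```javascript
// Sample data in the question's shape (assumed)
var data = {
    list: [
        { id: 1, dataA: true, dataB: false },
        { id: 2, dataA: true, dataB: true }
    ]
};

// Build a lookup table once: O(n) total,
// instead of scanning the whole list for every element
var byId = {};
data.list.forEach(function (item) {
    byId[item.id] = item;
});

// Each element's data is now a constant-time lookup
var entry = byId[2];
```

With `byId` in place, the per-element work inside `.each()` drops to a single property access.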

Narrow down your sets

In your code, you did this:

$('div').each(function(){...});

The problem with this is that it gets ALL divs and checks each one's id. I repeat: ALL the divs, including the ones you don't even need. On a fully packed page you could end up with a thousand of them, wasting cycles on the useless ones.

Instead, try narrowing your set by targeting specific divs. Here's one example, targeting divs whose id starts with test_. At least this time it's not all divs, but only a fraction: the ones with an id starting with test_.

$('div[id^="test_"]').each(function () {...});

As Nikola Vukovic pointed out, my "ids starting with test_" example is slow because of how these selectors are parsed and how elements are fetched. You can read more about right-to-left CSS selector parsing here. You might say that modern browsers have querySelectorAll, but that method is also slow, and older browsers don't have it at all. What jQuery does is use a combination of available methods like getElementsByClassName, getElementById and others, which are fast by themselves, but slow down when selectors get complex and result sets are huge.

Anyways, with that explanation done, here's another way to do it, by appending a class on your elements, and fetch them by class.

// Classes, assuming your divs have the class "test"
$('.test').each(function () {...});

Alter the data structure

I'm no stranger to 1000 items in JSON. One thing we did to optimize access and file size was to flatten and simplify the data. You can do this instead:

{
  "1" : ["dataAvalue","dataBValue"],
  "2" : ["dataAvalue","dataBValue"],
  "3" : ["dataAvalue","dataBValue"],
  ...
}

This way, you don't need to loop through the data at all. You can use hasOwnProperty to check whether an entry exists in the data.
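A minimal sketch of a constant-time lookup against the flattened structure (the `lookup` helper is hypothetical, just for illustration):

```javascript
// Flattened data as suggested above: id -> [dataA, dataB]
var data = {
    "1": [true, false],
    "2": [true, true]
};

// Hypothetical helper: returns the entry, or null if the id is unknown
function lookup(id) {
    // hasOwnProperty ignores inherited keys like "toString"
    if (!data.hasOwnProperty(id)) {
        return null;
    }
    return { dataA: data[id][0], dataB: data[id][1] };
}
```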

Combining the suggestions

The code might look like this, using the suggested data structure of course. Further optimizations in place as well:

//Get your 100 target divs
$('.test').each(function(){

  //You can directly get the id from the object this way
  var id = this.id.replace('test_','');

  //If the data does not have an entry for this id, skip
  if(!data.hasOwnProperty(id)) return;

  //Otherwise, add the HTML using innerHTML
  this.innerHTML = 'dataA : ' + data[id][0] + ' , dataB : ' + data[id][1];

});

Native vs jQuery

Hands down, native wins in speed. But the problem with native JS is that the code easily gets tangled, and you end up writing code that libraries have already implemented.

However, libraries like jQuery gain the upper hand when it comes to readability and simplicity. Libraries tend to normalize the API across browsers, making code simple and expressive.

But it's up to you to balance it out, whichever makes you feel comfortable. I usually prefer building code using libraries first, then optimize afterwards by replacing routines with native ones where needed or possible. For example, using the native forEach on arrays instead of a library's each, or accessing attributes directly instead of going through attr.
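For instance, the array-iteration swap mentioned above might look like this (a sketch; note the callback argument order differs between the two styles):

```javascript
var items = [{ id: 1 }, { id: 2 }, { id: 3 }];

// Library style (jQuery): $.each(items, function (index, value) { ... });
// Native style: the callback receives (value, index) instead
items.forEach(function (value, index) {
    value.label = 'test_' + value.id;
});
```

Similarly, `element.id` reads the property directly, where `$(element).attr('id')` goes through jQuery's attribute layer.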

Thanks so much, will read your solution tonight :D –  l2aelba Sep 20 '13 at 6:52

jQuery doesn't really help you here (as far as performance is concerned). If you want to boost the script's speed, avoid jQuery completely! Functions like $(), .each(), .attr(), and .html() do additional processing that drags down performance when manipulating huge amounts of data. Try reverting to the native API, something like this maybe:

data.list.forEach(
   function ( o ) {
      var div;
      ( div = document.getElementById( "test_" + o.id ) )
      && ( div.innerHTML = "dataA : " + o.dataA + " , dataB : " + o.dataB );
   }
);

Native methods are a lot faster than any library's counterparts, and the two DOM APIs used (.getElementById() and .innerHTML) are very well supported. By the way, the snippet you presented is poorly composed: for each selected div you iterate the whole data set top to bottom (!), over and over. Given 100 divs and a data set of 1000+ elements, that's 100,000+ iterations! Break out of the iteration once you've found what you're looking for: return false; inside that (unnecessary) if statement.
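The native analogue of returning false from $.each is Array.prototype.some, which stops as soon as the callback returns true. A minimal sketch, using sample data in the question's shape:

```javascript
var list = [
    { id: 1, dataA: true, dataB: false },
    { id: 2, dataA: true, dataB: true }
];

var targetId = 1;
var found = null;
var checked = 0;

// some() short-circuits on the first truthy return,
// so the remaining items are never visited
list.some(function (v) {
    checked += 1;
    if (v.id === targetId) {
        found = v;
        return true; // stop iterating
    }
    return false;
});
```

Here only one item is visited before the loop ends, instead of the whole list.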


Joseph the Dreamer's answer is almost there. The problem I find with it is the $("div[id^=test_]") selector: jQuery uses regular expressions (which are quite CPU-demanding) to match the divs. This approach would be a lot faster for getting hold of the divs you want to process:

Array.prototype.filter.call(
    document.getElementsByTagName("div"),
    function ( o ) {
        return o.id.indexOf('test_') === 0; // id starts with "test_"
    }
);

I've just tested it against jQuery's $("div[id^=test_]") on 10k divs. It took less than a second to collect the divs, while the jQuery approach took ~11 seconds. And again, I suggest you use .forEach() on the div collection; it's lightning fast. So a possible 'combo' solution might be:

Array.prototype.filter.call(
    document.getElementsByTagName("div"),
    function ( o ) {
        return o.id.indexOf('test_') === 0; // id starts with "test_"
    }
)
.forEach(
    function ( div ) {
        var id = div.id.substr( 5 ); // strip the "test_" prefix
        data.hasOwnProperty( id )
        && (
            div.innerHTML = 'dataA : ' + data[id][0] + ' , dataB : ' + data[id][1]
        );
    }
);

Choose your weapon ;)

Thanks for this prototype filter :D so nice ! –  l2aelba Sep 20 '13 at 6:53
