I have cried blood finally getting this script to work (I'm new to Google Apps Script and JSON, relatively familiar with JavaScript). It works great with a small data set, but as soon as I jump to the large, 20,000-item data set, it times out. I can't think of another way to make this more efficient. These prices update every 3 minutes, so I only want to cache them for 10 minutes; I feel like ScriptDb is not appropriate for this. Essentially I had to store the information in the cache because I had ~500 formulas calling a simple, non-cache version of testing(), and Google/the API didn't allow that many calls. I figured one call plus ~500 reads from the cache would be better. Is there some way to make it more efficient that I am not thinking of?

This is run every 10 minutes to store the information. This piece times out.

function updateCache() {
  var cache = CacheService.getPublicCache();
  var myUrl = "http://www.gw2spidy.com/api/v0.9/json/all-items/all";
  var jsonData = UrlFetchApp.fetch(myUrl);
  var jsonArray = JSON.parse(jsonData.getContentText()).results;
  for (var i = 0; i < jsonArray.length; i++) {
    cache.put(jsonArray[i].data_id + 'sell', jsonArray[i].min_sale_unit_price, 1500);
  }
}

This is the function that uses the cache and returns the data.

function testing(itemID) {
  var cache = CacheService.getPublicCache();
  return cache.get(itemID + 'sell');
}

1 Answer

You could have one routine (on a 10-minute timer) that downloads the data and dumps it as a flat text file in Google Drive.

A second timer function (every 2 minutes, say, or whatever doesn't trip the single-execution limit) could then process chunks of that flat file into the cache/ScriptDb.

So if you have a 2-minute timer caching 20% of the flat file, any entry on recall will be a maximum of 13 minutes stale and a minimum of 2 minutes.
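
A rough sketch of that idea (untested, and the file name 'gw2-prices.json', the 5-chunk split, and the function names are just examples; it reuses the data_id/min_sale_unit_price fields from your updateCache()):

// Timer 1 (every 10 minutes): fetch everything once and dump it to Drive.
function dumpPricesToDrive() {
  var jsonText = UrlFetchApp.fetch("http://www.gw2spidy.com/api/v0.9/json/all-items/all").getContentText();
  // Trash any previous dump so only one copy of the file exists.
  var existing = DriveApp.getFilesByName('gw2-prices.json');
  while (existing.hasNext()) {
    existing.next().setTrashed(true);
  }
  DriveApp.createFile('gw2-prices.json', jsonText);
}

// Timer 2 (every 2 minutes): cache one fifth of the dump per run.
function cacheNextChunk() {
  var cache = CacheService.getPublicCache();
  var file = DriveApp.getFilesByName('gw2-prices.json').next();
  var items = JSON.parse(file.getBlob().getDataAsString()).results;

  var chunkCount = 5;
  var chunkIndex = Number(ScriptProperties.getProperty('chunkIndex')) || 0;
  var chunkSize = Math.ceil(items.length / chunkCount);
  var start = chunkIndex * chunkSize;
  var end = Math.min(start + chunkSize, items.length);

  for (var i = start; i < end; i++) {
    cache.put(items[i].data_id + 'sell', String(items[i].min_sale_unit_price), 1500);
  }

  // Remember which chunk to process on the next run.
  ScriptProperties.setProperty('chunkIndex', (chunkIndex + 1) % chunkCount);
}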


Alternatively, could you not query the API based on the items list? That is, query and cache smaller chunks at a time, more frequently; each query on the timer could hit either a sequential or a random selection of the available item categories.

Update

https://github.com/rubensayshi/gw2spidy/wiki/API-v0.9#wiki-item-list

So if you use the /items/all/{page} option in the query, you can work out how many queries you can safely make, retrieve, and store in one execution. Let's say it is 10:

function cachePagedPrices() {
  // getProperty() returns a string (or null on the first run), so coerce it to a number.
  var lastPageChecked = Number(ScriptProperties.getProperty('lastPageChecked') || 0) + 1,
      checkEnd = lastPageChecked + 10,
      myUrl,
      jsonData,
      json,
      page;

  for (page = lastPageChecked; page < checkEnd; page += 1) {

    myUrl = "http://www.gw2spidy.com/api/v0.9/json/items/all/" + page;
    jsonData = UrlFetchApp.fetch(myUrl);
    json = JSON.parse(jsonData.getContentText());

    if (page <= json.last_page) {

      // [snip] process however you like

    } else {
      // Ran past the final page, so wrap around and start over on the next run.
      page = 0;
      break;
    }
  }

  // Store the last page actually fetched (0 if we wrapped around).
  ScriptProperties.setProperty('lastPageChecked', page > 0 ? page - 1 : 0);
}

Run that function as often as you like. Every query formatted in that way is cached by the API for 15 minutes, so you have 15 minutes to make the remaining 20 page calls. Obviously, if you can make 20 calls in one execution (2,000 objects) then you only need to run the script 10 times, one every minute, and you have time to spare :D

Warning: I've written that in Stack Overflow's editor and not tested it.

Obviously, if you instead run:

"http://www.gw2spidy.com/api/v0.9/json/items/0/" + page 

(all Armor, say) and the same for each type, you'd be running a lot more triggers, but every category might in theory be more 'current'.
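
If you do go the run-it-every-minute route, a one-off helper along these lines (untested sketch; installTrigger and cachePagedPrices are just the example names used above) installs the time-based trigger programmatically instead of setting it up by hand in the script editor:

// One-off setup: install a time-based trigger that runs the paging
// function above (cachePagedPrices) once a minute. Run this once by hand.
function installTrigger() {
  ScriptApp.newTrigger('cachePagedPrices')
      .timeBased()
      .everyMinutes(1)
      .create();
}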

Unfortunately, it is hard to break up the chunks. One of the chunks of data is 10,000 items large. Do you have an opinion regarding using the cache or the scriptDb? – Jacob Bolda 20 hours ago
just checked - yup there's a lot of items :D but the api allows for paging, so that could help also no? – Jonathon 19 hours ago
I honestly have no idea what paging is. To google! I am accessing the API for like 22k items and using the data in about ~500 separate formulas. I suspect that will grow as we find more uses. I totally understand why I am hitting limits with those numbers, haha. – Jacob Bolda 19 hours ago
Here is a link to the API if that will help at all. github.com/rubensayshi/gw2spidy/wiki/API-v0.9 – Jacob Bolda 19 hours ago
i've amended my answer to explain paging – Jonathon 19 hours ago
