I am working with large JSON files (essentially tree-like structures) in a ClojureScript app. I iterate over all elements in that tree structure, which amounts to quite a lot of operations. Now I wonder how much overhead the persistent hash-map treatment causes.
Basically I:

- Load the JSON file via AJAX
- Convert it to a JS object using the browser's `JSON.parse`
- Use `js->clj` with `:keywordize-keys true` to convert it to a Clojure data structure
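The steps above can be sketched roughly like this (using `goog.net.XhrIo` from the Google Closure library for the AJAX call; the function and URL handling are illustrative, any HTTP client would do):

```clojure
(ns example.load
  (:require [goog.net.XhrIo :as xhr]))

(defn load-tree [url callback]
  ;; Fetch the JSON file, parse it with the native JSON.parse,
  ;; then convert the JS object into ClojureScript data.
  (xhr/send url
            (fn [event]
              (let [text   (.getResponseText (.-target event))
                    js-obj (js/JSON.parse text)
                    data   (js->clj js-obj :keywordize-keys true)]
                (callback data)))))
```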
The JSON's structure consists of nested vectors and hash-maps. Something like:
```clojure
{:A-key-a [{:B-key-a 34 :B-key-b [213.0 23.4]}
           {:B-key-a 34 :B-key-b [213.0 23.4]}
           ...]
 :A-key-b [{:someother-a 30 ...}
           ...]
 ...}
```
Now I wonder if I should fall back to direct JS object usage to gain speed. Intuitively I would think that this is faster than the ClojureScript data structures; on the other hand, I do not want to optimize prematurely, and I do not know enough about the internals of ClojureScript to judge the overhead introduced by lazy evaluation.
I kind of figure that I could use the `.-mykey` accessor syntax and the Google Closure forEach functions in order to rewrite that specific source code. What do you think?
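A rough sketch of that interop approach, assuming the key names from the sample above (note that keys containing hyphens, like `"A-key-a"`, cannot be read with the `.-key` accessor, since that compiles to an invalid JS property reference; `goog.object/get` or `aget` works instead):

```clojure
(ns example.raw
  (:require [goog.object :as gobj]))

(defn sum-b-key-a
  "Sums the :B-key-a values directly on the parsed JS object,
   without converting to persistent data structures first."
  [js-tree]
  (let [arr (gobj/get js-tree "A-key-a")]
    ;; areduce loops over the native JS array with an index,
    ;; avoiding any seq allocation.
    (areduce arr i total 0
             (+ total (gobj/get (aget arr i) "B-key-a")))))
```

For keys without hyphens, `(.-mykey obj)` works as you describe and compiles to a plain property access.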
I have seen [Improve performance of a ClojureScript program](https://stackoverflow.com/q/...) on a similar optimization topic, and I think it also implies that `loop .. recur` is a viable option for looping. Is that right?
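For reference, this is the kind of `loop`/`recur` traversal I have in mind, written against the ClojureScript data (function name and data shape follow the sample above):

```clojure
(defn sum-b-key-a
  "Sums :B-key-a over a vector of maps using an explicit
   indexed loop instead of a lazy sequence."
  [items]
  (let [n (count items)]
    (loop [i 0, total 0]
      (if (< i n)
        (recur (inc i) (+ total (:B-key-a (nth items i))))
        total))))
```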