You could do the initial load in the simple recovery model (as long as the load itself isn't a multi-step process where you want to be able to do a point-in-time restore).
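As a rough sketch (I'm using MyDatabase and the backup path purely as placeholders), that switch could look like this. Keep in mind that after switching back to FULL you need a full or differential backup before log backups work again:

    ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;

    -- ... run the initial load here ...

    ALTER DATABASE [MyDatabase] SET RECOVERY FULL;

    -- a full (or differential) backup restarts the log backup chain
    BACKUP DATABASE [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_after_load.bak';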
Now after the load, your log file may have grown quite a bit if you had a long-running transaction, even in the simple recovery model. You can then shrink the log file.
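Something along these lines (the logical file name MyDatabase_log and the 1024 MB target are just examples; check yours in sys.database_files):

    USE [MyDatabase];

    -- shrink the log file back down to a 1024 MB target (size in MB)
    DBCC SHRINKFILE (N'MyDatabase_log', 1024);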
Then you need to size the log file. A good starting point is your largest table. If you're in the full recovery model, you'll need at least as much transaction log space as your largest table, because when regular index maintenance does a rebuild, the transaction log generated by the rebuild is roughly the size of the table. So sizing for your biggest table is a good start.
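A rough way to find that figure and size the log accordingly (again, the database name and the 8192 MB value are only placeholders for your own numbers):

    -- reserved size of the largest table, in MB
    SELECT TOP (1)
           t.name AS table_name,
           SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables AS t ON t.object_id = ps.object_id
    GROUP BY t.name
    ORDER BY reserved_mb DESC;

    -- size the log up front to at least that figure
    ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8192MB);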
However, over time you'll see whether your rate of log record generation, relative to your log backup frequency, is making the log file grow or not. You can then either adjust the size of your log file or the frequency of your log backups.
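To keep an eye on that, DBCC SQLPERF(LOGSPACE) shows the log size and the percentage in use for each database; if the used percentage is regularly high just before a log backup runs, grow the file or back up more often:

    -- log file size and percentage used, one row per database
    DBCC SQLPERF (LOGSPACE);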
Sizing up front is important. Having the file autogrow is a huge performance penalty, since log file space always has to be zero-initialised (instant file initialization does not apply to log files). Remember, it always autogrows at the moment you are running transactions, and transaction log writes are synchronous, so waiting for the autogrowth to finish is a direct wait on all pending transactions.
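Even with a properly sized log, leave autogrowth on as a safety net, but set it to a fixed increment rather than a percentage (the 512 MB value here is just an example):

    ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 512MB);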
The autogrow option is also bad for the internal layout of your log file (the number and size of the VLFs); see:
http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
and:
http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
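You can check how many VLFs the log currently has with DBCC LOGINFO (one row per VLF); if it comes back with hundreds or thousands of rows, shrink the log and regrow it in a few large steps as Kimberly describes:

    USE [MyDatabase];

    -- one row per virtual log file (VLF)
    DBCC LOGINFO;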
So properly sizing it up front is best.