Let's go through some of the points. You mentioned:
I understand that the tlogs are written to sequentially (the reason it
is recommended moving them).
It's also for Recoverability
Moving transaction log files to a different array isn't just for performance; it's also for recoverability. Assume you take a transaction log backup at 3:45pm and your FusionIO disk goes bad at 3:55pm, before your next transaction log backup. If your data and log files are on the same drive, you cannot go in after the crash and take a final tail-of-the-log backup, which would have ensured almost no data loss (in-flight transactions would still be lost, of course).
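As a concrete sketch of that scenario (database name and backup path are hypothetical), a tail-of-the-log backup after the data drive dies would look something like this:

```sql
-- Data drive is gone, but the log drive survived: grab the tail of the log.
-- NO_TRUNCATE lets SQL Server back up the log even though the data files
-- are inaccessible; NORECOVERY leaves the database in a restoring state.
BACKUP LOG MyDB
TO DISK = N'E:\Backups\MyDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;

-- Then restore in sequence: full backup, any log backups, and finally the tail:
-- RESTORE DATABASE MyDB FROM DISK = N'...' WITH NORECOVERY;
-- RESTORE LOG MyDB FROM DISK = N'E:\Backups\MyDB_tail.trn' WITH RECOVERY;
```

If data and log share the dead drive, that `BACKUP LOG ... WITH NO_TRUNCATE` is simply impossible, which is the whole point above.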
Performance
I understand that the tlogs are written to sequentially (the reason it
is recommended moving them). But if IO difference is significant
between the two drives as it is between traditional drives and a
FusionIO card, will I still see a benefit from moving the tlogs to a
different drive? Or will performance be worse?
Without proper benchmarking, you could very well lose performance by isolating your log files from a lightning-fast PCI-E FusionIO card and moving them to a SAS6 RAID array with magnetic disks. Moving log files to a different array typically increases performance, but in your case it might decrease it.
You should measure the following perfmon counters after moving the log files over:
Avg. Disk sec/Write
Avg. Disk sec/Read
If Avg. Disk sec/Write on the log drive goes over 5 ms, it's above what Microsoft recommends. However, if your app doesn't have performance issues with writes, just keep an eye on that counter so you're not caught off guard.
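Besides perfmon, you can sanity-check write latency per log file from inside SQL Server with the `sys.dm_io_virtual_file_stats` DMV (note these figures are cumulative averages since the last restart, not a live counter):

```sql
-- Average write latency per transaction log file, in milliseconds.
-- Numbers are cumulative since the SQL Server service last started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
WHERE mf.type_desc = 'LOG'
ORDER BY avg_write_latency_ms DESC;
```

Run it before and after moving the log files and compare against the 5 ms guideline above.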
Also,
Transaction logs are sequential in nature, but the second you RAID them or put more than one actively accessed file on the drive, the physical IO is no longer purely sequential, so you lose performance on traditional disks. All the FusionIO drives I benchmarked did perform better on sequential reads/writes, but their random IO was more than sufficient as well.
Next you mention:
The database is a typical web-based workload with much more reading
going on than writing
Typically most apps are much more read-intensive than write-intensive, which is why common SQL Server benchmarking tools use 8K IOs at roughly 70% read / 30% write, so that part of your setup is fairly common.
For example, users can upload multiple files with 10,000 or more
records in each of them.
If your writes aren't slowed down by moving to the new disk, great; but if you're having scaling issues, you should look into minimally logged bulk loads (e.g. BULK INSERT). This will minimize log activity.
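A minimal sketch of what that could look like, assuming a hypothetical staging table `dbo.UploadStaging` and file path; note that minimal logging also requires the database to be in the SIMPLE or BULK_LOGGED recovery model and the target to qualify (e.g. an empty heap):

```sql
-- Hypothetical example: load one user-uploaded CSV into a staging table.
-- TABLOCK is one of the prerequisites for the insert to be minimally logged.
BULK INSERT dbo.UploadStaging
FROM 'C:\Uploads\batch_20130101.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK,
    BATCHSIZE = 10000   -- commit in chunks to keep log growth in check
);
```

With 10,000-row uploads like yours, one bulk load per file generates far less log volume than 10,000 individually logged INSERTs.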
Real-world benchmarking on a test system is the only way I'd go at it personally. I'd reload a test system with a production workload, see how the drives handle it by recording SQL Server wait stats and perfmon metrics, then make a decision. Good luck.