You are suffering from a common problem among engineers: over-optimization along a single axis. The two classical constraints of computation are time and space, and they are generally opposed; you cannot conserve one without "spending" the other. The Y2K bug was an example of this: space constraints led programmers to "save" two digits by storing only the last two digits of the year, and prepending the "19" later was a computation (i.e., time). Likewise, you are suggesting we save space by compressing the file, spending computation time to save space on disk.
Sometimes this is valid. In fact, a significant share of web traffic is already transparently gzip-compressed in transit via HTTP content encoding. The time spent compressing and decompressing it is negligible compared to the network resources required to transfer an uncompressed HTML document.
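If you want to see this trade in action, here is a minimal sketch using Python's standard library: the client advertises gzip support, and if the server chooses to compress the response (whether it does depends entirely on the server's configuration), the smaller body is inflated locally. The URL is just a placeholder.

```python
import gzip
import urllib.request

# Placeholder URL; any server configured for gzip content encoding will do.
req = urllib.request.Request(
    "https://example.com/",
    headers={"Accept-Encoding": "gzip"},
)

with urllib.request.urlopen(req) as resp:
    encoding = resp.headers.get("Content-Encoding", "identity")
    body = resp.read()

# If the server gzipped the body, we spend a little CPU to undo it,
# in exchange for fewer bytes having crossed the wire.
if encoding == "gzip":
    body = gzip.decompress(body)

print(encoding, len(body), "bytes after decoding")
```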
In your case, it isn't, for the following reasons:
First, data storage is cheap compared to computation.
Think of your computer today vs. your first computer. My first computer (the first I bought for myself) had a 120 MHz processor, 16 MB of RAM, and a 1.2 GB drive. My current computer (somewhat aged) has a 3.6 GHz CPU, 32 GB of RAM, and about 16 TB of storage. The CPU is 30x faster, but the RAM is 2000x greater (not to mention waaaay faster) and the storage is roughly 13,000x greater. Rainbow-table attacks are an example of leveraging space to compensate for our computational deficits. We've gained much more room than we have time.
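To make that space-for-time trade concrete, here is a toy sketch in Python. It is a plain precomputed lookup table rather than a true rainbow table (real rainbow tables chain hashes to shrink the table further), but the principle is the same: burn storage once so each later query costs almost no computation. The 4-digit PIN scenario is purely illustrative.

```python
import hashlib

# Hypothetical candidate set: all 4-digit PINs.
candidates = [f"{n:04d}" for n in range(10_000)]

# Space cost, paid once: a table mapping hash -> plaintext for every candidate.
table = {hashlib.sha256(pin.encode()).hexdigest(): pin for pin in candidates}

# Time saved: recovering a PIN from its hash is now a single dictionary
# lookup instead of re-hashing every candidate per query.
target = hashlib.sha256(b"4729").hexdigest()
print(table.get(target))  # '4729'
```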
Second, and more practically:
If you compress the text in situ, you lose the ability to search it. Depending on the compression algorithm and the surrounding data, the string "WKRP in Cincinnati" might be encoded as a completely different sequence of bytes in each document, and the similar string "WKRP/Cincinnati" would almost certainly come out different again. To search your compressed database, the user would need to decompress (or, god forbid, download) the entire thing.
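You can see this with any off-the-shelf compressor. The sketch below uses Python's zlib (chosen here just as a representative DEFLATE implementation) to compress two documents that both contain the phrase; the literal bytes of the phrase survive in neither compressed stream, so a plain substring search over the stored data finds nothing.

```python
import zlib

PHRASE = b"WKRP in Cincinnati"

# Two documents containing the same phrase in different surrounding text.
doc_a = (b"Episode guide: " + PHRASE + b" ran on CBS from 1978 to 1982. ") * 4
doc_b = (b"Late-night reruns of " + PHRASE + b" air on local stations. ") * 4

comp_a = zlib.compress(doc_a)
comp_b = zlib.compress(doc_b)

# DEFLATE re-encodes the phrase as Huffman-coded literals and back-references
# that depend on everything seen before it, so the raw bytes of the phrase
# do not appear in either compressed stream -- and the bits that stand in
# for it differ between the two documents.
print(PHRASE in comp_a)  # False
print(PHRASE in comp_b)  # False

# The only way to run a substring search is to decompress first.
print(PHRASE in zlib.decompress(comp_a))  # True
print(PHRASE in zlib.decompress(comp_b))  # True
```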