
I understand how integer and floating point data types are stored, and I am guessing that the variable length of decimal data types means it is stored more like a string.

Does that imply a performance overhead when using a decimal data type and searching against them?

Just never use floats for money or currency, ok? –  Neil McGuigan 16 hours ago

4 Answers


The impact of the decimal type (numeric in Postgres) depends on usage. For typical OLTP workloads the impact is usually not significant; for OLAP it can be relatively high. In our application, aggregation over large numeric columns is several times slower than over double precision columns.

Although current CPUs are fast, the rule still holds: use numeric only when you need exact values or a very large range. Otherwise use the float or double precision types.
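
If you want to see this on your own hardware, a rough sketch for PostgreSQL could look like this (hypothetical throwaway table, arbitrary row count; timings will vary):

    -- Build a test table with the same values stored both ways.
    CREATE TABLE bench AS
    SELECT x::numeric          AS n_val,
           x::double precision AS d_val
    FROM generate_series(1, 5000000) AS x;

    \timing on
    SELECT sum(n_val) FROM bench;   -- aggregation over numeric: typically slower
    SELECT sum(d_val) FROM bench;   -- aggregation over double precision: typically faster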


You are correct: fixed-point data is stored as a (packed BCD) string.

To what extent this impacts performance depends on a range of factors, which include:

  1. Do queries utilise an index upon the column? (See the EXPLAIN sketch below.)

  2. Can the CPU perform BCD operations in hardware, such as through Intel's BCD opcodes?

  3. Does the OS harness hardware support through library functions?

Overall, any performance impact is likely to be pretty negligible relative to other factors that you may face: so don't worry about it. Remember Knuth's maxim, "premature optimisation is the root of all evil".
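
For point 1, the quickest check is to look at the query plan; a minimal sketch (hypothetical prices table and amount column), using PostgreSQL's EXPLAIN:

    -- Hypothetical table/column names; plan node names shown are PostgreSQL's.
    CREATE INDEX prices_amount_idx ON prices (amount);
    EXPLAIN SELECT * FROM prices WHERE amount = 19.99;
    -- An "Index Scan" (or "Index Only Scan") node means the decimal column's index
    -- is being used; a "Seq Scan" means every row is compared, which is where the
    -- per-value comparison cost matters more.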

I am trying to prove that Postgres will be as capable as MongoDB for a task, so it will make sense to optimize at least somewhat. We seem to be throwing away a lot of relational goodness for the sake of using some hipster technology stack. –  wobbily_col 19 hours ago
@wobbily_col: in that case send your colleagues to thebuild.com/presentations/pg-as-nosql-pgday-fosdem-2013.pdf and obartunov.livejournal.com/175235.html –  Denis 19 hours ago
 
@Denis I want to keep a schema! The other guys have done some benchmarks to prove MongoDB is the only way to go, but the data is relational, and it seems like a big mistake to me to denormalize it before we even have some solid requirements. –  wobbily_col 19 hours ago
 
@wobbily_col ... and MongoDB has some magic super-fast exact, high-range numeric (BCD or IEEE 754 decimal floating point) type? Or are you not really comparing the same thing? –  Craig Ringer 18 hours ago

"I am guessing that the variable length of decimal data types means it is stored more like a string."

Taken from the MySQL documentation, which says:

As of MySQL 5.0.3, values for DECIMAL columns are no longer represented as strings that require 1 byte per digit or sign character. Instead, a binary format is used that packs nine decimal digits into 4 bytes. This change to DECIMAL storage format changes the storage requirements as well. The storage requirements for the integer and fractional parts of each value are determined separately. Each multiple of nine digits requires 4 bytes, and any remaining digits require some fraction of 4 bytes.
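
Applying the quoted rule to a concrete (hypothetical) column definition:

    -- DECIMAL(18,9): 9 integer digits and 9 fractional digits.
    -- Per the rule above, each group of nine digits costs 4 bytes, and the
    -- integer and fractional parts are counted separately:
    --   integer part:    9 digits -> 4 bytes
    --   fractional part: 9 digits -> 4 bytes
    --   total:                       8 bytes per value, versus roughly 19 bytes
    --   (18 digits plus a sign) under the old one-byte-per-character format.
    CREATE TABLE price_example (
        amount DECIMAL(18,9)
    );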

Why is the answer NO? From the quote you've cited, I would have said that the answer is very much YES. –  eggyal 20 hours ago
 
@eggyal, HOW? I don't understand. –  Rahul 20 hours ago
It is variable length, so it's not going to be handled in one CPU operation unless it is one digit. –  wobbily_col 19 hours ago

Pavel has it quite right; I'd just like to explain a little.

Presuming that you mean a performance impact as compared to floating point, or fixed-point-offset integer (i.e. storing thousandths of a cent as an integer): Yes, there is very much a performance impact. PostgreSQL, and by the sounds of things MySQL, store DECIMAL / NUMERIC in binary-coded decimal. This format is more compact than storing the digits as text, but it's still not very efficient to work with.

If you're not doing many calculations in the database, the impact is limited to the greater storage space required for BCD as compared to integer or floating point, and thus the wider rows, slower scans, bigger indexes, etc. Comparison operations in b-tree index searches are also slower, but not enough to matter unless you're already CPU-bound for some other reason.
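
To get a feel for the per-value storage difference in PostgreSQL, pg_column_size() can be compared directly; a minimal sketch (exact byte counts vary with the value and version):

    -- double precision is a fixed 8 bytes; numeric is variable-length and
    -- grows with the number of digits stored.
    SELECT pg_column_size(123456.789012345::double precision) AS float8_bytes,
           pg_column_size(123456.789012345::numeric)          AS numeric_bytes;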

If you're doing lots of calculations with the DECIMAL / NUMERIC values in the database, then performance can really suffer. This is particularly noticeable, at least in PostgreSQL, because Pg can't use more than one CPU for any given query. If you're doing a huge bunch of division and multiplication, more complex maths, aggregation, etc. on numerics, you can start to find yourself CPU-bound in situations where you would never be when using a float or integer data type. This is particularly noticeable in OLAP-like (analytics) workloads, and in reporting or data transformation during loading or extraction (ETL).

Despite the fact that there is a performance impact (which varies based on workload from negligible to quite big) you should generally use numeric / decimal when it is the most appropriate type for your task - i.e. when very high range values must be stored and/or rounding error isn't acceptable.

Occasionally it's worth the hassle of using a bigint and fixed-point offset, but that is clumsy and inflexible. Using floating point instead is very rarely the right answer due to all the challenges of working reliably with floating point values for things like currency.
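
For completeness, the fixed-point-offset approach mentioned above might look like this (hypothetical schema, storing thousandths of a cent in a bigint, as in the example earlier):

    -- 1 dollar = 100 cents * 1000 = 100,000 units; all arithmetic stays integer.
    CREATE TABLE orders (
        id           bigserial PRIMARY KEY,
        amount_mcent bigint NOT NULL
    );

    INSERT INTO orders (amount_mcent) VALUES (1999 * 1000);        -- $19.99
    SELECT id, amount_mcent / 100000.0 AS dollars FROM orders;     -- convert only for display

The clumsiness mentioned above is visible immediately: every query that touches the column has to know about the implicit scale factor.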

(BTW, I'm quite excited that some new Intel CPUs, and IBM's Power 7 range of CPUs, include hardware support for IEEE 754 decimal floating point. If this ever becomes available in lower end CPUs it'll be a huge win for databases.)

