This question is perhaps slightly left of field, but I'm curious about the logic behind why Java's BigInteger/BigDecimal classes feel the need to define a constant for TEN.
Example taken from java.math.BigInteger:
/**
* The BigInteger constant zero.
*
* @since 1.2
*/
public static final BigInteger ZERO = new BigInteger(new int[0], 0);
/**
* The BigInteger constant one.
*
* @since 1.2
*/
public static final BigInteger ONE = valueOf(1);
/**
* The BigInteger constant two. (Not exported.)
*/
private static final BigInteger TWO = valueOf(2);
/**
* The BigInteger constant ten.
*
* @since 1.5
*/
public static final BigInteger TEN = valueOf(10);
Now, I'm assuming it might have something to do with use in scaling functions or the like (a sketch of the sort of use I have in mind follows below), but it did make me curious, especially since TEN first appears in 1.5 whereas ONE and ZERO existed much earlier. (I can also see why 1 and 0 would be more immediately useful than 10, too.)
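To illustrate what I mean by "scaling functions": decimal arithmetic constantly needs powers of ten, e.g. to align two unscaled values to a common scale before adding or comparing them. Something roughly like the following hypothetical helper (my own sketch, not the actual JDK code; the rescale name and its signature are made up) is the kind of internal use where a ready-made TEN constant would be handy:

import java.math.BigInteger;

public class TenDemo {
    // Hypothetical helper: align an unscaled value from oldScale to a
    // larger newScale by multiplying by 10^(newScale - oldScale).
    static BigInteger rescale(BigInteger unscaledValue, int oldScale, int newScale) {
        return unscaledValue.multiply(BigInteger.TEN.pow(newScale - oldScale));
    }

    public static void main(String[] args) {
        // 1.5 represented as unscaled 15 with scale 1; rescaled to scale 3 it becomes 1500
        System.out.println(rescale(BigInteger.valueOf(15), 1, 3)); // prints 1500
    }
}

Without the constant, every such call site would need its own valueOf(10) (or a privately cached copy), so a shared TEN would at least avoid that, though that alone doesn't explain why it was promoted to a public constant while TWO stayed private.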
Anyone know?