https://people.mozilla.org/~jorendorff/es6-draft.html#sec-number.epsilon
uses base 10, but the max/min values are base-2 quantities. Base 10 is fine as a comment, but someone verifying an implementation has to do the conversion themselves and may get it wrong.
Base 10 seems to be commonly used when talking about floating-point epsilon. For example, see http://en.wikipedia.org/wiki/Machine_epsilon
Since this value is a very small fractional floating-point value, it isn't clear that base 2 would make things much clearer. It sounds like what you would really like is an IEEE Binary64 encoding, but even that (in theory) could differ among processors, so we generally avoid talking about encoding at that level.
I agree with comment 0 that 2**-52 (I assume this is what he meant) would be a much clearer way to write this than base 10. For comparison, Number.MAX_SAFE_INTEGER is defined with both a decimal number and (parenthetically) 2**53 - 1. The latter is definitely the more useful number, and the more useful definition, in my estimation.
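For what it's worth, a quick console sketch (assuming an engine that already exposes the ES6 Number properties) confirms that both constants match their base-two forms exactly:

  // Number.EPSILON is exactly 2^-52, the gap between 1 and the next representable double.
  Math.pow(2, -52) === Number.EPSILON;             // true
  // Number.MAX_SAFE_INTEGER is exactly 2^53 - 1.
  Math.pow(2, 53) - 1 === Number.MAX_SAFE_INTEGER; // true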
I'd actually get rid of all the decimal definitions of such IEEE-754 special numbers and have only base-two definitions, if it were up to me and it were easy to do. Unfortunately I'm not sure that
2**1023 + 2**1022 + ... + 2**972 + 2**971
is a more elegant definition for Number.MAX_VALUE than a decimal expansion, so I don't know that there's an equally readable way to define that one in binary terms.
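(Sanity-checking that sum in a console, as a sketch assuming only Math.pow and the ES6 Number.MAX_VALUE: each partial sum has at most 53 significant bits, so the loop below is exact and lands on MAX_VALUE.)

  var sum = 0;
  for (var e = 971; e <= 1023; e++) { // 53 terms: 2^971 through 2^1023
    sum += Math.pow(2, e);            // every partial sum is exactly representable
  }
  sum === Number.MAX_VALUE;           // true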
Actually, on second thought, 2**1024 - 2**971 would be a perfectly elegant way to define Number.MAX_VALUE. This would rely on the spec convention that the math is done on actual mathematical values, not on IEEE-754 numbers (since 2**1024 becomes Infinity in that system), but otherwise it's much nicer than a decimal definition.
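To illustrate why that mathematical-values caveat matters, here's a console sketch (assuming Math.pow): evaluated directly in IEEE-754 doubles the subtraction overflows, but an algebraically equivalent rearrangement that keeps every intermediate value finite hits MAX_VALUE exactly.

  Math.pow(2, 1024) - Math.pow(2, 971);  // Infinity, since 2^1024 already overflows to Infinity
  // Same mathematical value, rewritten so every intermediate result is finite and exact:
  Math.pow(2, 1023) + (Math.pow(2, 1023) - Math.pow(2, 971)) === Number.MAX_VALUE; // true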
deferring for ES7