The bug is that uint_fast32_t is 64 bits wide on this platform, so the
test suite was failing because maxrand() returned a bad value. This
commit fixes that.
The usual case is that the number passed in is less than or equal to
vm->max. In that case, all we have to do is generate one number with
either bc_rand_bounded() or bc_rand_int(). If the bound is equal to
vm->max or is a power of 2, we call bc_rand_int() and mask bits
appropriately. If the bound is *not* a power of 2, we call
bc_rand_bounded().
This change means that only in the case of the bound being greater than
vm->max do we enter the expensive arbitrary-precision generation code.
This optimization was actually a bit of low-hanging fruit after adding
the RNG code. You see, I had to add a BcNum that stores the RNG max,
which is also the BcBigDig max. After a lot of testing, I found that
many functions already knew they would not run into problems with the
old bc_num_bigdig(), so I split the actual conversion into a new
function called bc_num_bigdig2() and changed the error checking in the
old one to just compare against rng->max, which was moved into vm.
The code I replaced getopt_long() with comes from
https://github.com/skeeto/optparse, and it is in the public domain.
I replaced getopt_long() for several reasons:
1. getopt_long() is not guaranteed to exist.
2. getopt_long() has different behavior on different platforms.
3. glibc's getopt_long() is broken.
4. Replacing it lets me standardize the error messages.