Refactor the limits
There's quite a lot of repetition of std::numeric_limits<T>::max() and std::numeric_limits<T>::min(). It would make sense to define
static const T min_val = std::numeric_limits<T>::min();
static const T max_val = std::numeric_limits<T>::max();
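A minimal sketch of how the named constants tidy things up (the class body here is illustrative, not the full implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

template <typename T>
class xint_sat_t
{
public:
    // Named once, used everywhere the limits are needed.
    static constexpr T min_val = std::numeric_limits<T>::min();
    static constexpr T max_val = std::numeric_limits<T>::max();

    // Example use: the clamping logic reads without the long
    // numeric_limits spellings.
    template <typename U>
    static constexpr T clamp(U val)
    {
        return val < min_val ? min_val
             : val > max_val ? max_val
             : static_cast<T>(val);
    }
};
```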
Or, we could make those limits be template parameters instead, which allows us to define saturating types with non-default limits:
/** A saturating integer value. */
template <typename T,
          std::enable_if_t<std::is_integral_v<T>, T> min = std::numeric_limits<T>::min(),
          std::enable_if_t<std::is_integral_v<T>, T> max = std::numeric_limits<T>::max()>
class xint_sat_t
{
public:
// We don't need return_type - just use xint_sat_t directly
// (T, min and max will be inferred).
static const T min_val = min;
static const T max_val = max;
If we take this approach, we'll need an operator= that accepts a T alone, otherwise assignments would need values constructed with matching min and max.
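A sketch of such an operator= (the member names and the simple clamping here are assumptions, and the enable_if constraint is omitted for brevity):

```cpp
#include <cassert>
#include <limits>

template <typename T,
          T min = std::numeric_limits<T>::min(),
          T max = std::numeric_limits<T>::max()>
class xint_sat_t
{
    T value {};

    static constexpr T clamp(T v)
    {
        return v < min ? min : v > max ? max : v;
    }

public:
    constexpr xint_sat_t() = default;
    constexpr xint_sat_t(T v) : value{clamp(v)} {}

    // Assignment from a plain T: no need to spell out matching min/max.
    constexpr xint_sat_t &operator=(T v)
    {
        value = clamp(v);
        return *this;
    }

    constexpr T get() const { return value; }
};
```

With this, `xint_sat_t<int, 0, 100> x; x = 300;` assigns the saturated value 100 rather than requiring a matching xint_sat_t on the right-hand side.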
Specialize type traits
If we want our new type to behave as a normal value type, it's worthwhile to specialize the type traits:
namespace std
{
    template <typename T, T min, T max>
    struct is_unsigned<xint_sat_t<T, min, max>>
        : is_unsigned<T>
    {};
}
Other candidates for specialization include std::is_signed, std::is_integral, std::is_arithmetic and (if we also specialize std::numeric_limits) its is_exact member, and so on - the pattern is mostly to forward to the specialization for T, as above.
A cast is required in clamp()
If I try to assign a uint_sat8_t to a uint_sat32_t variable, I get an error from mismatched ?: arguments. The fix is to cast to T:
return
val < min_val ? min_val
: val > max_val ? max_val
: static_cast<T>(val);
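A stripped-down illustration of the mixed-width case (the converting constructor and the type aliases are assumptions about the original code):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

template <typename T>
struct xint_sat_t
{
    T value {};

    constexpr xint_sat_t() = default;

    // Construct from any integral value, clamping into T's range.
    template <typename U>
    constexpr xint_sat_t(U val)
    {
        constexpr T min_val = std::numeric_limits<T>::min();
        constexpr T max_val = std::numeric_limits<T>::max();
        // Without the static_cast, the ?: arms can have different
        // types when U is narrower or wider than T.
        value = val < min_val ? min_val
              : val > max_val ? max_val
              : static_cast<T>(val);
    }

    // Converting constructor between saturating widths.
    template <typename U>
    constexpr xint_sat_t(xint_sat_t<U> other) : xint_sat_t(other.value) {}
};

using uint_sat8_t  = xint_sat_t<std::uint8_t>;
using uint_sat32_t = xint_sat_t<std::uint32_t>;
```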
Don't move return values
Instead of this:
constexpr xint_sat_t operator++(int) {
xint_sat_t temp { value };
if (value < max_val - 1) ++value;
return std::move(temp);
}
We should simply return by value, and trust in return value optimization (which the std::move() may inhibit):
constexpr xint_sat_t operator++(int) {
xint_sat_t temp { value };
if (value < max_val - 1) ++value;
return temp;
}
I think the test should be value < max_val there, given that the limit seems to be inclusive everywhere else.
Consider using compiler builtins
With GCC, we can test for overflow without having to promote to a wider type:
// pass by value should be as efficient as passing T by value
constexpr xint_sat_t __attribute__((pure)) add(xint_sat_t other) const
{
T result;
    if (std::is_signed_v<T> && other.value < 0) {
return (__builtin_add_overflow(value, other.value, &result) || result < min_val)
? min_val
: result;
} else {
return (__builtin_add_overflow(value, other.value, &result) || result > max_val)
? max_val
: result;
}
}
It's probably not too hard to make implementations that use these builtins (and similar functions for other compilers) where available, and hand-crafted code otherwise. It might be worth standardizing on one interface, add_with_overflow(), and implementing that per-compiler. That would solve the problem that the public methods add() and subtract() sound like mutators. (And I noticed a wee typo - substract() should be subtract() - worth fixing before that modification.)
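A sketch of what that single interface might look like (the fallback shown is valid for unsigned types only; a real version would need a signed branch too):

```cpp
#include <cassert>
#include <climits>

template <typename T>
bool add_with_overflow(T a, T b, T *result)
{
#if defined(__GNUC__) || defined(__clang__)
    // GCC/Clang builtin: stores the wrapped sum, returns true on overflow.
    return __builtin_add_overflow(a, b, result);
#else
    // Hand-crafted fallback, correct for unsigned T: on wraparound
    // the sum is smaller than either operand.
    *result = static_cast<T>(a + b);
    return *result < a;
#endif
}
```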
Make the main() a proper self-test
The main function can be a much more comprehensive test - and instead of commenting the expectations, it should actively test them, returning non-zero if any fail.
Negative infinity
Should division of a negative quantity by zero result in the max value? Perhaps it should saturate at the min value instead? (That's an open question - you get to decide the interface, but at least make it clear that you've thought about this.)
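For instance, a divide() might adopt the policy that a negative dividend over zero saturates low (the name and the policy here are illustrative, not a recommendation):

```cpp
#include <cassert>

// One possible policy: x/0 saturates toward the sign of x.
template <typename T>
constexpr T sat_divide(T val, T divisor, T min_val, T max_val)
{
    if (divisor == 0)
        return val < 0 ? min_val : max_val;
    return val / divisor;  // clamping of e.g. INT_MIN / -1 elided
}
```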