I disagree, and I think a number of practitioners of these methods
would also. See
Dubois, D. and Prade, H. (1990). Rough fuzzy sets and fuzzy rough sets.
International Journal of General Systems, 17(2), pp. 191-209.
Rough set methods may look superficially like a special case of a
discontinuous fuzzy membership function, but rough sets and fuzzy sets
were designed to deal with different kinds of uncertainty. To perform
classification under both kinds of uncertainty at once, these authors
developed rough fuzzy sets and fuzzy rough sets.
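To make the distinction concrete, here is a toy Python sketch (my own
illustration, not code from the paper): a rough set describes a crisp
set only up to the granularity of an equivalence relation, via lower
and upper approximations, while a fuzzy set instead attaches a graded
membership to each element.

    def rough_approximations(equivalence_classes, target):
        # Lower/upper approximations of a crisp set `target`,
        # given a partition of the universe into equivalence classes.
        lower, upper = set(), set()
        for block in equivalence_classes:
            if block <= target:      # block lies entirely inside the target
                lower |= block
            if block & target:       # block overlaps the target
                upper |= block
        return lower, upper

    partition = [{0, 1}, {2, 3, 4}, {5, 6}, {7, 8, 9}]
    X = {2, 3, 4, 5}

    lower, upper = rough_approximations(partition, X)
    print(lower)   # {2, 3, 4}        -- certainly in X at this granularity
    print(upper)   # {2, 3, 4, 5, 6}  -- possibly in X

The boundary region upper - lower = {5, 6} expresses coarseness of the
description, not graded membership; a fuzzy set would simply assign each
element a degree mu(x) in [0, 1].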
Their abstract says it well:
"The notion of a rough set introduced by Pawlak has often been compared
to that of a fuzzy set, sometimes with a view to prove that one is more
general, or, more useful than the other. In this paper we argue that
both notions aim to different purposes. Seen this way, it is more
natural to try to combine the two models of uncertainty (vagueness and
coarseness) rather than to have them compete on the same problems.
First, one may think of deriving the upper and lower approximations of
a fuzzy set, when a reference scale is coarsened by means of an
equivalence relation. We then come close to Caianiello's C-calculus.
Shafer's concept of coarsened belief functions also belongs to the same
line of thought. Another idea is to turn the equivalence relation into
a fuzzy similarity relation, for the modeling of coarseness, as already
proposed by Farinas del Cerro and Prade. Instead of using a similarity
relation, we can start with fuzzy granules which make a fuzzy partition
of the reference scale. The main contribution of the paper is to
clarify the difference between fuzzy sets and rough sets, and unify
several independent works which deal with similar ideas in different
settings or notations."
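To illustrate the first idea in that abstract (approximating a fuzzy set
when the reference scale is coarsened by an equivalence relation), here
is another toy sketch in the same spirit. The inf/sup construction below
is how I understand the usual rough-fuzzy-set definition, so take it as
an illustration rather than the authors' exact formulation.

    def rough_fuzzy(mu, equivalence_classes):
        # For each equivalence class, the lower (inf) and upper (sup)
        # membership of the fuzzy set mu over that class.
        return {
            frozenset(block): (min(mu[x] for x in block),
                               max(mu[x] for x in block))
            for block in equivalence_classes
        }

    mu = {0: 0.0, 1: 0.1, 2: 0.9, 3: 1.0, 4: 0.8,
          5: 0.4, 6: 0.2, 7: 0.0, 8: 0.0, 9: 0.1}
    partition = [{0, 1}, {2, 3, 4}, {5, 6}, {7, 8, 9}]

    for block, (lo, hi) in rough_fuzzy(mu, partition).items():
        print(sorted(block), lo, hi)

Each class ends up with an interval [inf mu, sup mu], so coarseness (the
partition) and vagueness (the graded membership) are modelled jointly
rather than one being reduced to the other.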
According to Google Scholar, this paper has been cited 734 times, so
I'd say quite a few folks found this view worthwhile.
Tara