I posted the following as a comment on the webword post:
I think there is actually a lot of commonality in the usability industry as far as methodology goes. Practitioners agree (granted, at a high level) on exploratory, assessment, comparative, and validation testing. They agree on many of the inspection methods used, such as heuristic evaluations and cognitive walkthroughs.
What no one seems to be able to agree on is: what is a usability issue? What is an error?
The way we go about finding usability issues seems to be settling down, but how we define what we observe during testing or inspection is still up in the air.
The CIF (Common Industry Format) is a good start. At least some commonality in what is reported might help smooth out some of the differences among reports on the same system by different usability groups. Many of the groups (some, IIRC, in the Molich research) still do not adequately define, if at all, measurable and relevant usability objectives.