Detection of undesirable phrases in NLU
When the training dataset contains only a few intents, the NLU engine tends to force phrases into intents they do not actually belong to. Conversely, when there are many intents to choose from, the engine's confidence drops, which increases the likelihood of misclassification.
What can be done in such cases?
One solution is to add example phrases that convey no specific intent and categorize them as phrases without an assigned intent. Alternatively, you can create a "virtual" intent, such as no_intent, which is not associated with any process but serves as a logically coherent set of phrases, for example general or off-topic statements.
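For illustration, here is a minimal sketch of how such a virtual intent might sit alongside regular intents in training data. The format, intent names, phrases, and the routing logic are hypothetical assumptions and will differ depending on your NLU engine:

```python
# Hypothetical training data layout: each intent maps to example phrases.
# "no_intent" is the virtual intent that collects general / off-topic
# statements not tied to any business process.
training_data = {
    "order_status": [
        "Where is my package?",
        "When will my order arrive?",
    ],
    "cancel_order": [
        "I want to cancel my order",
        "Please stop my delivery",
    ],
    # Virtual intent: never triggers a process, only absorbs off-topic input.
    "no_intent": [
        "What's the weather like today?",
        "Tell me a joke",
        "I like pizza",
    ],
}

def route(intent: str, confidence: float, threshold: float = 0.6) -> str:
    """Decide what to do with an NLU prediction (illustrative logic only)."""
    if intent == "no_intent" or confidence < threshold:
        # Off-topic or low-confidence input: fall back instead of
        # starting the wrong process.
        return "fallback"
    return intent
```

In this sketch, phrases classified as no_intent (or predicted with low confidence) are routed to a fallback rather than starting a process, so off-topic input no longer distorts the remaining intents.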