Training practices & errors
Note
The model must be retrained to account for newly added phrases, intents, and entities in the NLU and flow.
The size of the dataset determines the duration of training. Larger-scale implementations take longer: those with around 10k phrases may need 15 minutes or more, and those with 50+ intents may need thirty minutes or more.
It is strongly advised NOT to work on the platform while training is in progress. Schedule training sessions so that they do not disrupt ongoing work on the platform.
The model and the dataset behave like a living organism: each time you change something and retrain, the results will differ slightly.
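If your deployment exposes a training API, a small script can start training outside working hours and poll until it finishes, so nobody needs to use the platform in the meantime. The sketch below is an assumption-heavy example: the base URL, bearer token, and endpoints (`/nlu/{id}/train`, `/nlu/{id}/training-status`) are hypothetical, so check your own installation's API reference for the real paths.

```python
import time
import requests

# Hypothetical base URL, token, and endpoints -- adjust to your own deployment.
BASE_URL = "https://automate.example.com/api"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}


def train_and_wait(nlu_id: str, poll_seconds: int = 30, timeout_seconds: int = 3600) -> str:
    """Start a training run and poll until it completes, fails, or times out."""
    # Kick off training for the given NLU model (hypothetical endpoint).
    resp = requests.post(f"{BASE_URL}/nlu/{nlu_id}/train", headers=HEADERS, timeout=30)
    resp.raise_for_status()

    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = requests.get(
            f"{BASE_URL}/nlu/{nlu_id}/training-status", headers=HEADERS, timeout=30
        ).json().get("status")
        if status in ("completed", "failed"):
            return status
        # Avoid editing intents or flows on the platform while this loop is running.
        time.sleep(poll_seconds)
    return "timeout"


if __name__ == "__main__":
    print(train_and_wait("my-nlu-model"))
```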
For advanced improvements in NLU training, see Training Analytics.
Training errors
Unusual recognition behaviors
If you test a phrase after training and notice recognition errors (even though you know the phrase was included in training), the first step is to retrain the model and test again. In most cases, this resolves the problem.
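One way to make the retrain-and-test step repeatable is to keep a small set of known phrases with the intents you expect them to resolve to, and re-check them after every training run. The sketch below assumes a hypothetical `/recognize` endpoint that returns the matched intent as JSON; the URL, auth, phrases, and response fields are placeholders to adapt to your deployment.

```python
import requests

# Hypothetical recognition endpoint -- substitute your deployment's URL and token.
BASE_URL = "https://automate.example.com/api"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Trained phrases paired with the intent each one is expected to resolve to.
EXPECTED = {
    "I want to reset my password": "password_reset",
    "Where is my order?": "order_status",
}


def verify_recognition(nlu_id: str) -> None:
    """Send each test phrase to the NLU and flag mismatches after retraining."""
    for phrase, expected_intent in EXPECTED.items():
        resp = requests.post(
            f"{BASE_URL}/nlu/{nlu_id}/recognize",
            json={"text": phrase},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        got = resp.json().get("intent")
        marker = "OK" if got == expected_intent else "MISMATCH"
        print(f"{marker}: '{phrase}' -> {got} (expected {expected_intent})")


if __name__ == "__main__":
    verify_recognition("my-nlu-model")
```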
Server limitations
Although uncommon, training may become unstable or fail outright, depending on how the platform is deployed. This is commonly caused by a lack of server disk space or memory. In particular, on-premise installations of Automate (on the client side) may run into errors when saving multiple versions of an NLU, and a shortage of server space can cause instability.
For more information about Automate installations, go here.
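For on-premise installations, a quick preflight check of free disk space and available memory before starting a training run can help avoid the instability described above. The thresholds, the data path, and the use of the third-party `psutil` package in this sketch are illustrative assumptions, not published platform requirements.

```python
import shutil

import psutil  # third-party: pip install psutil

# Illustrative thresholds only -- tune them to your own on-premise sizing.
MIN_FREE_DISK_GB = 10
MIN_FREE_MEM_GB = 4
DATA_PATH = "/"  # point this at the disk that holds the platform's data


def preflight_check() -> bool:
    """Warn if free disk space or available memory looks too low to train safely."""
    free_disk_gb = shutil.disk_usage(DATA_PATH).free / 1024**3
    free_mem_gb = psutil.virtual_memory().available / 1024**3

    ok = True
    if free_disk_gb < MIN_FREE_DISK_GB:
        print(f"Low disk space: {free_disk_gb:.1f} GB free at {DATA_PATH}")
        ok = False
    if free_mem_gb < MIN_FREE_MEM_GB:
        print(f"Low memory: {free_mem_gb:.1f} GB available")
        ok = False
    return ok


if __name__ == "__main__":
    print("Safe to train" if preflight_check() else "Free up resources before training")
```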