L - Large environment

Supports up to 800 concurrent conversations with high availability (HA) enabled; suitable for large production environments.

Kubernetes Pod Resources

| Component | Instances | CPU req | CPU limit | RAM req (GB) | RAM limit (GB) |
|---|---|---|---|---|---|
| **Automate** | 2 | - | - | - | - |
| admin | 2 | 4 | 8 | 6 | 6 |
| dialogs | 2 | 4 | 8 | 18 | 18 |
| nlu-pipeline | 2 | 1 | 4 | 2 | 2 |
| nlu-facade | 2 | 1 | 4 | 6 | 6 |
| gateway | 2 | 4 | 8 | 2 | 2 |
| analytics | 2 | 1 | 4 | 2 | 2 |
| cron-orchestrator | 2 | 1 | 4 | 2 | 2 |
| web-chat | 2 | 1 | 4 | 2 | 2 |
| sso-server | 2 | 1 | 4 | 2 | 2 |
| storage | 2 | 2 | 2 | 2 | 2 |
| channels-connector | 2 | 1 | 4 | 2 | 2 |
| thread-coordinator | 2 | 1 | 4 | 2 | 2 |
| bot-integration | 2 | 1 | 2 | 2 | 2 |
| refinery | 2 | 1 | 2 | 8 | 8 |
| uploader | 2 | 1 | 2 | 2 | 2 |
| **Listen&React\*** | 2 | - | - | - | - |
| new-web | 2 | 1 | 2 | 4 | 4 |
| analyser | 2 | 1 | 2 | 7 | 7 |
| html-converter | 2 | 0.2 | 2 | 8 | 8 |
| redis (single) | 2 | 0.2 | 2 | 2 | 2 |
| hooks-server | 2 | 1 | 2 | 2 | 2 |
| **Infrastructure** | 2 | - | - | - | - |
| ElasticSearch\* | 3 | 1 | 4 | 16 | 16 |
| RabbitMQ\* | 3 | 1 | 2 | 6 | 6 |
| PostgreSQL\* | 3 | 2 | 4 | 8 | 8 |
| Kubernetes - master node | 3 | 8 | 8 | 16 | 16 |
| NFS | 3 | 1 | 2 | 2 | 2 |
| Redis AI | 1 | 18 | 36 | 36 | 36 |
| **Voice systems\*** | 2 | - | - | - | - |
| Voice Gateway | 2 | 48 | 48 | 64 | 64 |
| **NLU** | 2 | - | - | - | - |
| duckling | 2 | 2 | 4 | 1 | 1 |
| inferrer | 2 | 1 | 2 | 1 | 1 |
| intentizer-multi | 2 | 2 | 2 | 0.2 | 0.2 |
| intentizer-fitter | 2 | 9 | 16 | 16 | 16 |
| intentizer-llm | 2 | 0.1 | 2 | 1 | 1 |
| keywords | 2 | 1 | 2 | 0.25 | 0.25 |
| name-service | 2 | 0.5 | 1 | 0.25 | 0.25 |
| ner-pl | 2 | 4 | 8 | 2 | 2 |
| pattern | 2 | 2 | 2 | 2 | 2 |
| pcre | 2 | 1 | 2 | 0.5 | 0.5 |
| sentiduck | 2 | 4 | 8 | 1 | 1 |
| tokenizer-pl | 2 | 3 | 6 | 2 | 2 |
| tagger-de | 2 | 2 | 2 | 2 | 2 |
| tagger-en | 2 | 2 | 2 | 2 | 2 |
| tagger-multi | 2 | 2 | 2 | 2 | 2 |
| ner-multi | 2 | 2 | 10 | 10 | 10 |
| nlu-data-migrator | 1 | 1 | 1 | 6 | 6 |
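Each row above translates directly into Kubernetes resource requests and limits on the corresponding workload. As an illustration only (the manifest and container names below are hypothetical, not the product's actual chart values), the `dialogs` row maps onto a spec like this:

```yaml
# Illustrative sketch: how one table row (dialogs: 2 instances,
# CPU 4 req / 8 limit, RAM 18/18 GB) becomes Kubernetes resource settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dialogs              # hypothetical name, for illustration
spec:
  replicas: 2                # "Instances" column
  template:
    spec:
      containers:
        - name: dialogs
          resources:
            requests:
              cpu: "4"       # CPU req
              memory: 18Gi   # RAM req (GB)
            limits:
              cpu: "8"       # CPU limit
              memory: 18Gi   # RAM limit (GB)
```

The scheduler places pods based on the `requests` values, while `limits` caps what a container may consume, so the worker nodes must be sized against the request totals below.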

Resources summary

| Component | Instances | CPU req | CPU limit | RAM req (GB) | RAM limit (GB) |
|---|---|---|---|---|---|
| Automate | 2 | 48 | 132 | 118 | 118 |
| Listen&React | 2 | 6.3 | 20 | 46 | 46 |
| Infrastructure | 3 | 57 | 96 | 180 | 180 |
| Voice systems | 2 | 48 | 48 | 128 | 128 |
| NLU | 2 | 79.2 | 166 | 95 | 95 |
| **TOTAL** | | 239 | 462 | 567 | 567 |

Recommended VM configuration

| | qty. | NVMe [GB] | vCPU | RAM [GB] |
|---|---|---|---|---|
| Kubernetes Master | 3 | 75 | 8 | 16 |
| Kubernetes Worker | 6 | 200 | 48 | 128 |
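Assuming pods are scheduled by their resource requests, the worker pool above covers the cluster-wide totals from the resources summary; a quick arithmetic check (values copied from the two tables):

```python
# Sanity check: worker pool capacity vs. cluster-wide requests.
# Figures are taken from the VM table and the resources summary above.
workers = 6
vcpu_per_worker = 48
ram_per_worker_gb = 128

total_cpu_req = 239      # TOTAL row, CPU req
total_ram_req_gb = 567   # TOTAL row, RAM req (GB)

# 6 x 48 = 288 vCPU and 6 x 128 = 768 GB of worker capacity
assert workers * vcpu_per_worker >= total_cpu_req
assert workers * ram_per_worker_gb >= total_ram_req_gb
```

The headroom (288 vCPU vs. 239 requested, 768 GB vs. 567 GB requested) leaves room for bursting toward limits and for node-level daemons.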

\* - Optional applications

⚠️ When planning a deployment, please take into account the requirements described in Deployment Assumptions and contact us for further details.

ℹ️ The RedisAI sizing assumes 35 active simple NLU models. If more active NLU models are needed, add 0.8 GB of RAM per additional simple model; if complex models are used, add 4 GB per model. More information about complex models can be found in Comparison of available intentizer types.
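The rule above can be expressed as a small helper. This is a sketch of the stated arithmetic only (the function name and baseline parameter are illustrative, not part of the product):

```python
def redisai_extra_ram_gb(simple_models: int, complex_models: int = 0,
                         baseline_simple: int = 35) -> float:
    """Estimate extra RedisAI RAM (GB) beyond the baseline sizing.

    The table's RedisAI figure already covers `baseline_simple` (35)
    active simple NLU models; each additional simple model needs
    0.8 GB and each complex model 4 GB.
    """
    extra_simple = max(0, simple_models - baseline_simple)
    return extra_simple * 0.8 + complex_models * 4

# e.g. 50 simple + 2 complex models:
# (50 - 35) * 0.8 + 2 * 4 = 12 + 8 = 20 GB on top of the baseline
```

Add the result to the 36 GB RedisAI row when sizing the cluster for larger model counts.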