M - Medium environment

Up to 550 concurrent conversations, HA enabled, suitable for medium-size production environments.

Kubernetes Pod Resources

| Component | Instances | CPU req (cores) | CPU limit (cores) | RAM req (GB) | RAM limit (GB) |
| --- | --- | --- | --- | --- | --- |
| Automate | 2 | - | - | - | - |
| admin | 2 | 2 | 4 | 6 | 6 |
| dialogs | 2 | 2 | 4 | 12 | 12 |
| nlu-pipeline | 2 | 1 | 2 | 2 | 2 |
| nlu-facade | 2 | 1 | 2 | 5 | 5 |
| gateway | 2 | 2 | 4 | 2 | 2 |
| analytics | 2 | 1 | 2 | 2 | 2 |
| cron-orchestrator | 2 | 1 | 2 | 2 | 2 |
| web-chat | 2 | 1 | 2 | 2 | 2 |
| storage | 1 | 1 | 2 | 2 | 2 |
| sso-server | 2 | 1 | 2 | 2 | 2 |
| channels-connector | 2 | 1 | 2 | 2 | 2 |
| thread-coordinator | 2 | 1 | 2 | 2 | 2 |
| bot-integration | 2 | 1 | 2 | 2 | 2 |
| refinery | 2 | 1 | 2 | 8 | 8 |
| uploader | 2 | 1 | 2 | 2 | 2 |
| Listen&React* | 2 | - | - | - | - |
| new-web | 2 | 1 | 2 | 4 | 4 |
| analyser | 2 | 1 | 2 | 7 | 7 |
| html-converter | 2 | 0.2 | 2 | 8 | 8 |
| redis (single) | 2 | 0.2 | 2 | 2 | 2 |
| hooks-server | 2 | 1 | 2 | 2 | 2 |
| Infrastructure | 2 | - | - | - | - |
| ElasticSearch* | 3 | 1 | 2 | 16 | 16 |
| RabbitMQ* | 3 | 1 | 2 | 6 | 6 |
| PostgreSQL* | 3 | 1 | 2 | 6 | 6 |
| Kubernetes - master node | 3 | 4 | 4 | 8 | 8 |
| NFS | 3 | 1 | 2 | 2 | 2 |
| Redis AI | 1 | 12 | 24 | 24 | 24 |
| Voice systems* | 2 | - | - | - | - |
| Voice Gateway | 2 | 16 | 16 | 32 | 32 |
| NLU | 2 | - | - | - | - |
| duckling | 2 | 2 | 2 | 1 | 1 |
| inferrer | 2 | 1 | 2 | 1 | 1 |
| intentizer-multi | 2 | 2 | 2 | 0.1 | 0.1 |
| intentizer-fitter | 2 | 2 | 16 | 16 | 16 |
| intentizer-llm | 2 | 0.1 | 2 | 1 | 1 |
| keywords | 2 | 1 | 2 | 0.25 | 0.25 |
| name-service | 2 | 0.5 | 1 | 0.25 | 0.25 |
| ner-pl | 2 | 4 | 8 | 2 | 2 |
| pattern | 2 | 2 | 2 | 2 | 2 |
| pcre | 2 | 1 | 2 | 0.2 | 0.2 |
| sentiduck | 2 | 2 | 4 | 1 | 1 |
| tokenizer-pl | 2 | 2 | 4 | 2 | 2 |
| tagger-de | 2 | 1 | 2 | 2 | 2 |
| tagger-en | 2 | 1 | 2 | 2 | 2 |
| tagger-multi | 2 | 1 | 2 | 2 | 2 |
| ner-multi | 2 | 2 | 6 | 6 | 6 |
| nlu-data-migrator | 1 | 1 | 1 | 6 | 6 |
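
Each group total in the Resources summary below is obtained by summing instances × per-pod values over that group's rows. A minimal sketch of that arithmetic for the Listen&React group, using the figures from the table above (the dictionary and helper function are illustrative only, not part of the product):

```python
# Illustrative only: reproduce the Listen&React group totals from the pod table above.
listen_and_react = {
    # component: (instances, cpu_req, cpu_limit, ram_req_gb, ram_limit_gb)
    "new-web":        (2, 1.0, 2, 4, 4),
    "analyser":       (2, 1.0, 2, 7, 7),
    "html-converter": (2, 0.2, 2, 8, 8),
    "redis (single)": (2, 0.2, 2, 2, 2),
    "hooks-server":   (2, 1.0, 2, 2, 2),
}

def group_totals(components):
    """Sum instances x per-pod requests/limits over one component group."""
    cpu_req = sum(n * v for n, v, _, _, _ in components.values())
    cpu_lim = sum(n * v for n, _, v, _, _ in components.values())
    ram_req = sum(n * v for n, _, _, v, _ in components.values())
    ram_lim = sum(n * v for n, _, _, _, v in components.values())
    return round(cpu_req, 1), cpu_lim, ram_req, ram_lim

print(group_totals(listen_and_react))  # (6.8, 20, 46, 46) -- matches the Listen&React row below
```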

Resources summary

| Component | Instances | CPU req (cores) | CPU limit (cores) | RAM req (GB) | RAM limit (GB) |
| --- | --- | --- | --- | --- | --- |
| Automate | 2 | 36 | 72 | 106 | 106 |
| Listen&React* | 2 | 6.8 | 20 | 46 | 46 |
| Infrastructure | 3 | 36 | 60 | 138 | 138 |
| Voice systems* | 2 | 32 | 32 | 64 | 64 |
| NLU | 2 | 59.2 | 142 | 86.7 | 86.7 |
| TOTAL | | 170 | 326 | 440.7 | 440.7 |
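
The TOTAL row is the column-wise sum of the five group rows. A quick cross-check (illustrative only):

```python
# Illustrative cross-check of the TOTAL row (group order: Automate, Listen&React,
# Infrastructure, Voice systems, NLU).
cpu_req = [36, 6.8, 36, 32, 59.2]
cpu_lim = [72, 20, 60, 32, 142]
ram_gb  = [106, 46, 138, 64, 86.7]

assert round(sum(cpu_req), 1) == 170
assert sum(cpu_lim) == 326
assert round(sum(ram_gb), 1) == 440.7
```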

Recommended VM configuration

| | qty. | NVMe [GB] | vCPU | RAM [GB] |
| --- | --- | --- | --- | --- |
| Kubernetes Master | 3 | 75 | 4 | 8 |
| Kubernetes Worker | 5 | 200 | 40 | 96 |

* - Optional applications
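
As an illustrative sanity check (not an official sizing formula), the recommended worker pool provides 5 × 40 = 200 vCPU and 5 × 96 = 480 GB RAM, which covers the 170 vCPU and 440.7 GB of requests in the Resources summary; the CPU limits (326 vCPU) exceed the pool's 200 vCPU, i.e. limits are overcommitted:

```python
# Illustrative only: compare the recommended worker VMs with the request totals above.
workers, vcpu_each, ram_each_gb = 5, 40, 96

cluster_vcpu = workers * vcpu_each        # 200 vCPU
cluster_ram  = workers * ram_each_gb      # 480 GB

total_cpu_req, total_ram_req = 170, 440.7  # TOTAL row of the Resources summary

assert total_cpu_req <= cluster_vcpu      # 170 <= 200, CPU requests fit
assert total_ram_req <= cluster_ram       # 440.7 <= 480, RAM requests fit
```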

⚠️ When planning a deployment, please take into account the requirements described in Deployment Assumptions and contact us for further details.

ℹ️ The RedisAI sizing assumes 35 active simple NLU models. If more active NLU models are needed, calculate 0.8 GB per simple NLU model. If complex models are used, calculate 4 GB per model; more information about complex models can be found in Comparison of available intentizer types.
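
A minimal sketch of that rule of thumb, assuming the 0.8 GB / 4 GB figures apply to models beyond the 35-model baseline already covered by the default Redis AI sizing (the function name and that baseline interpretation are illustrative assumptions, not an official formula):

```python
# Illustrative helper; treating the 35 simple models as already covered by the
# default Redis AI sizing is an assumption, not an official rule.
BASELINE_SIMPLE_MODELS = 35
GB_PER_SIMPLE_MODEL = 0.8
GB_PER_COMPLEX_MODEL = 4

def extra_redisai_ram_gb(simple_models: int, complex_models: int = 0) -> float:
    """Estimate additional RedisAI RAM beyond the default sizing."""
    extra_simple = max(0, simple_models - BASELINE_SIMPLE_MODELS)
    return extra_simple * GB_PER_SIMPLE_MODEL + complex_models * GB_PER_COMPLEX_MODEL

print(extra_redisai_ram_gb(40, complex_models=2))  # (40 - 35) * 0.8 + 2 * 4 = 12.0 GB
```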