M - Medium environment

Up to 550 concurrent conversations, with high availability (HA) enabled; suitable for medium-sized production environments.

Kubernetes Pod Resources

| Component | Instances | CPU req | CPU limit | RAM req (GB) | RAM limit (GB) |
|---|---|---|---|---|---|
| Automate | 2 | - | - | - | - |
| admin | 2 | 2 | 4 | 6 | 6 |
| dialogs | 2 | 2 | 4 | 12 | 12 |
| nlu-pipeline | 2 | 1 | 2 | 2 | 2 |
| nlu-facade | 2 | 1 | 2 | 4 | 4 |
| gateway | 2 | 2 | 4 | 2 | 2 |
| analytics | 2 | 1 | 2 | 2 | 2 |
| cron-orchestrator | 2 | 1 | 2 | 2 | 2 |
| web-chat | 2 | 1 | 2 | 2 | 2 |
| storage | 1 | 1 | 2 | 2 | 2 |
| sso-server* | 2 | 1 | 2 | 2 | 2 |
| channels-connector* | 2 | 1 | 2 | 2 | 2 |
| thread-coordinator* | 2 | 1 | 2 | 2 | 2 |
| bot-integration | 2 | 1 | 2 | 2 | 2 |
| refinery | 2 | 1 | 2 | 8 | 8 |
| uploader | 2 | 1 | 2 | 2 | 2 |
| Listen&React* | 2 |  |  |  |  |
| new-web | 2 | 1 | 2 | 4 | 4 |
| analyser | 2 | 1 | 2 | 7 | 7 |
| Infrastructure | 2 | - | - | - | - |
| ElasticSearch | 3 | 1 | 2 | 16 | 16 |
| RabbitMQ | 3 | 1 | 2 | 6 | 6 |
| PostgreSQL | 3 | 1 | 2 | 6 | 6 |
| Kubernetes - master node | 3 | 4 | 4 | 8 | 8 |
| NFS | 3 | 1 | 2 | 2 | 2 |
| Redis AI | 1 | 12 | 24 | 24 | 24 |
| Voice systems* | 2 | - | - | - | - |
| Voice Gateway | 2 | 36 | 36 | 384 | 384 |
| NLU | 2 | - | - | - | - |
| duckling | 2 | 2 | 2 | 1 | 1 |
| inferrer | 2 | 1 | 2 | 1 | 1 |
| intentizer-multi | 2 | 2 | 2 | 0.1 | 0.1 |
| intentizer-multi-fitter | 2 | 2 | 16 | 16 | 16 |
| keywords | 2 | 1 | 2 | 0.25 | 0.25 |
| name-service | 2 | 0.5 | 1 | 0.25 | 0.25 |
| ner-pl | 2 | 4 | 8 | 2 | 2 |
| pattern | 2 | 2 | 2 | 2 | 2 |
| pcre | 2 | 1 | 2 | 0.2 | 0.2 |
| sentiduck | 2 | 2 | 4 | 1 | 1 |
| tokenizer-pl | 2 | 2 | 4 | 2 | 2 |
| tagger-de | 2 | 1 | 2 | 2 | 2 |
| tagger-en | 2 | 1 | 2 | 2 | 2 |
| tagger-multi | 2 | 1 | 2 | 2 | 2 |
| ner-multi | 2 | 2 | 6 | 6 | 6 |

Resources summary

| Component | Instances | CPU req | CPU limit | RAM req (GB) | RAM limit (GB) |
|---|---|---|---|---|---|
| Automate | 2 | 35 | 70 | 98 (101**) | 98 (101**) |
| Listen&React* | 2 | 4 | 8 | 22 | 22 |
| Infrastructure | 3 | 36 | 60 | 138 | 138 |
| Voice systems* | 2 | 72 | 72 | 768 | 768 |
| NLU | 2 | 51 | 114 | 76.7 | 76.7 |
| TOTAL |  | 198 | 324 | 1102.7 | 1102.7 |
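
The group rows above roll up the per-instance figures from the pod resources table (instances × per-instance requests/limits). Below is a minimal sketch of that roll-up for the Infrastructure group; the component names and figures are copied from the table above, while the `PodSpec` helper and `group_totals` function are illustrative only and not part of the product.

```python
from dataclasses import dataclass

@dataclass
class PodSpec:
    name: str
    instances: int
    cpu_req: float
    cpu_limit: float
    ram_req_gb: float
    ram_limit_gb: float

# Infrastructure rows copied from the "Kubernetes Pod Resources" table above.
INFRASTRUCTURE = [
    PodSpec("ElasticSearch", 3, 1, 2, 16, 16),
    PodSpec("RabbitMQ", 3, 1, 2, 6, 6),
    PodSpec("PostgreSQL", 3, 1, 2, 6, 6),
    PodSpec("Kubernetes - master node", 3, 4, 4, 8, 8),
    PodSpec("NFS", 3, 1, 2, 2, 2),
    PodSpec("Redis AI", 1, 12, 24, 24, 24),
]

def group_totals(pods):
    """Sum instances x per-instance resources for one component group."""
    return {
        "cpu_req": sum(p.instances * p.cpu_req for p in pods),
        "cpu_limit": sum(p.instances * p.cpu_limit for p in pods),
        "ram_req_gb": sum(p.instances * p.ram_req_gb for p in pods),
        "ram_limit_gb": sum(p.instances * p.ram_limit_gb for p in pods),
    }

print(group_totals(INFRASTRUCTURE))
# -> {'cpu_req': 36, 'cpu_limit': 60, 'ram_req_gb': 138, 'ram_limit_gb': 138}
#    i.e. the "Infrastructure" row of the summary table.
```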

Recommended VM configuration

|  | qty. | HDD [GB] | vCPU | RAM [GB] |
|---|---|---|---|---|
| Kubernetes Master | 3 | 75 | 4 | 8 |
| Kubernetes Worker | 4 | 200 | 40 | 96 |
| Voice Gateway* | 2 | 200 | 36 | 384 |

* - Optional applications
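
As a rough sanity check of the worker sizing, the sketch below compares the requested resources from the summary table with the recommended worker pool. It assumes that the Voice Gateway pods run on the dedicated Voice Gateway VMs and that the master-node figures correspond to the Kubernetes Master VMs; only the remaining workloads are scheduled onto the workers.

```python
# Requested resources from the summary table above.
total_cpu_req, total_ram_req_gb = 198, 1102.7

# Workloads assumed not to land on the Kubernetes workers.
voice_cpu, voice_ram = 2 * 36, 2 * 384        # Voice Gateway: 2 instances
master_cpu, master_ram = 3 * 4, 3 * 8         # Kubernetes master: 3 instances

worker_cpu_needed = total_cpu_req - voice_cpu - master_cpu       # 114 vCPU
worker_ram_needed = total_ram_req_gb - voice_ram - master_ram    # ~310.7 GB

# Capacity of the recommended worker VMs: 4 x 40 vCPU / 96 GB RAM.
worker_cpu_capacity = 4 * 40   # 160 vCPU
worker_ram_capacity = 4 * 96   # 384 GB

assert worker_cpu_needed <= worker_cpu_capacity
assert worker_ram_needed <= worker_ram_capacity
```

Under those assumptions the resource requests fit within the recommended worker pool with headroom left for bursting towards the (overcommitted) CPU limits.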

⚠️ When planning a deployment, please take into account the requirements described in Deployment Assumptions and contact us for further details.

ℹ️ The RedisAI sizing assumes 35 active simple NLU models. If more active NLU models are needed, calculate 0.8 GB per simple NLU model. If complex models are used, calculate 4 GB per model; more information about complex models can be found in Comparison of available intentizer types.
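
To illustrate that rule, here is a small sketch of the per-model arithmetic. The function name, the treatment of the 35-model baseline, and the assumption that the extra memory is added on top of the RedisAI figures in the tables are illustrative choices, not part of the official sizing.

```python
GB_PER_SIMPLE_MODEL = 0.8    # per additional simple NLU model
GB_PER_COMPLEX_MODEL = 4.0   # per complex NLU model
BASELINE_SIMPLE_MODELS = 35  # already covered by the RedisAI sizing above

def extra_redisai_memory_gb(simple_models: int, complex_models: int = 0) -> float:
    """Estimate extra RedisAI memory beyond the sizing in the tables above.

    `simple_models` is the total number of active simple models;
    only models above the 35-model baseline add memory.
    """
    extra_simple = max(0, simple_models - BASELINE_SIMPLE_MODELS)
    return extra_simple * GB_PER_SIMPLE_MODEL + complex_models * GB_PER_COMPLEX_MODEL

# Example: 50 simple models and 2 complex models
# -> (50 - 35) * 0.8 + 2 * 4 = 12 + 8 = 20 GB on top of the listed RedisAI RAM.
print(extra_redisai_memory_gb(50, 2))   # 20.0
```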