S - Small environment

Up to 500 concurrent conversations, no HA - suitable for non-critical production environments

Kubernetes Pod Resources

| Component | Instances | CPU req | CPU limit | RAM req (GB) | RAM limit (GB) |
|---|---|---|---|---|---|
| Automate | 1 | - | - | - | - |
| admin | 1 | 2 | 4 | 2 | 2 |
| dialogs | 1 | 2 | 4 | 6 | 6 |
| nlu-pipeline | 1 | 1 | 2 | 2 | 2 |
| nlu-facade | 1 | 1 | 2 | 2 (5**) | 2 (5**) |
| gateway | 1 | 2 | 4 | 2 | 2 |
| analytics | 1 | 1 | 2 | 2 | 2 |
| cron-orchestrator | 1 | 1 | 2 | 2 | 2 |
| web-chat | 1 | 1 | 2 | 2 | 2 |
| storage | 1 | 1 | 1 | 1 | 1 |
| sso-server* | 1 | 1 | 2 | 2 | 2 |
| channels-connector* | 1 | 1 | 2 | 2 | 2 |
| thread-coordinator* | 1 | 1 | 2 | 2 | 2 |
| bot-integration | 1 | 1 | 2 | 2 | 2 |
| refinery | 1 | 1 | 2 | 8 | 8 |
| uploader | 1 | 1 | 2 | 2 | 2 |
| Listen&React* | 1 | | | | |
| new-web | 1 | 1 | 2 | 4 | 4 |
| analyser | 1 | 1 | 2 | 7 | 7 |
| Infrastructure | 1 | - | - | - | - |
| ElasticSearch | 1 | 1 | 2 | 8 | 8 |
| RabbitMQ | 1 | 1 | 1 | 6 | 6 |
| PostgreSQL | 1 | 1 | 2 | 4 | 4 |
| Kubernetes - master node | 1 | 1 | 2 | 4 | 4 |
| NFS | 1 | 1 | 2 | 2 | 2 |
| Redis AI | 1 | 6 | 12 | 24 | 24 |
| Voice systems* | 1 | - | - | - | - |
| Voice Gateway | 1 | 32 | 32 | 384 | 384 |
| NLU | 1 | - | - | - | - |
| duckling | 1 | 2 | 2 | 1 | 1 |
| inferrer | 1 | 0.1 | 2 | 1 | 1 |
| intentizer-multi | 1 | 0.2 | 2 | 0.1 | 0.1 |
| intentizer-multi-fitter | 1 | 2 | 16 | 16 | 16 |
| keywords | 1 | 0.2 | 1 | 0.25 | 0.25 |
| name-service | 1 | 0.1 | 1 | 0.25 | 0.25 |
| ner-pl | 1 | 4 | 8 | 2 | 2 |
| pattern | 1 | 0.1 | 2 | 1 | 1 |
| pcre | 1 | 0.2 | 1 | 0.2 | 0.2 |
| sentiduck | 1 | 1 | 2 | 1 | 1 |
| tokenizer-pl | 1 | 1 | 4 | 2 | 2 |
| tagger-de | 1 | 1 | 1 | 2 | 2 |
| tagger-en | 1 | 1 | 1 | 2 | 2 |
| tagger-multi | 1 | 1 | 1 | 2 | 2 |
| ner-multi | 1 | 2 | 4 | 6 | 6 |
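
The CPU/RAM request and limit columns above correspond directly to Kubernetes container resource requests and limits. A minimal sketch of how a single row (dialogs) would be expressed follows; the dictionary form is illustrative only and not taken from the actual Automate manifests, and whether the table's GB values are meant as decimal GB or GiB should be confirmed.

```python
# Illustrative sketch only: how the "dialogs" row of the table above translates
# into Kubernetes container resource requests and limits. The surrounding
# manifest (image, namespace, etc.) is not part of this document.
dialogs_resources = {
    "requests": {"cpu": "2", "memory": "6Gi"},  # CPU req = 2, RAM req = 6 GB
    "limits":   {"cpu": "4", "memory": "6Gi"},  # CPU limit = 4, RAM limit = 6 GB
}
# Note: the table lists GB; "6Gi" (GiB) is used here as an assumption.
```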

Resources summary

| Component | Instances | CPU req | CPU limit | RAM req (GB) | RAM limit (GB) |
|---|---|---|---|---|---|
| Automate | 1 | 18 | 35 | 39 (42**) | 39 (42**) |
| Listen&React* | 1 | 2 | 4 | 11 | 11 |
| Infrastructure | 1 | 11 | 21 | 48 | 48 |
| Voice systems* | 1 | 32 | 32 | 384 | 384 |
| NLU | 1 | 16.9 | 47 | 36.85 | 36.85 |
| TOTAL | | 79.9 | 139 | 518.85 | 518.85 |
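
Each summary row is the column-wise sum of its components in the pod resources table, and the TOTAL row is the sum of the group rows. A quick sanity-check snippet (values copied verbatim from the summary table; this is illustrative only, not part of the product):

```python
# (cpu_req, cpu_limit, ram_gb) per group, copied from the Resources summary table
groups = {
    "Automate":       (18.0, 35.0, 39.0),
    "Listen&React":   (2.0,  4.0,  11.0),
    "Infrastructure": (11.0, 21.0, 48.0),
    "Voice systems":  (32.0, 32.0, 384.0),
    "NLU":            (16.9, 47.0, 36.85),
}

# Sum each column across the groups to reproduce the TOTAL row.
total = [round(sum(col), 2) for col in zip(*groups.values())]
print(total)  # [79.9, 139.0, 518.85] - matches the TOTAL row
```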

Recommended VM configuration

| VM | qty. | HDD [GB] | vCPU | RAM [GB] |
|---|---|---|---|---|
| Kubernetes Master | 1 | 75 | 4 | 8 |
| Kubernetes Worker | 4 | 200 | 20 | 64 |
| Voice Gateway* | 1 | 200 | 32 | 384 |
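
As a rough cross-check of the worker sizing (an illustrative calculation that ignores OS/kubelet overhead and the master node, which is sized separately), the four workers provide more capacity than the summed resource requests once the Voice Gateway, which runs on its own VM, is excluded:

```python
# Rough headroom check for the recommended worker nodes (illustrative only).
workers, worker_vcpu, worker_ram_gb = 4, 20, 64

# Requests from the Resources summary, minus Voice systems (dedicated VM).
requested_cpu = 79.9 - 32        # 47.9 vCPU
requested_ram_gb = 518.85 - 384  # 134.85 GB

print(workers * worker_vcpu)    # 80 vCPU available >= 47.9 vCPU requested
print(workers * worker_ram_gb)  # 256 GB available  >= 134.85 GB requested
```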

* - Optional applications

** - If autocorrect is enabled

⚠️ When planning a deployment, please take into account the requirements described in Deployment Assumptions and contact us for further details.

ℹ️ The Redis AI sizing assumes 35 active simple NLU models. If more active NLU models are needed, calculate 0.8 GB per simple NLU model. If complex models are used, calculate 4 GB per model - more information about complex models can be found in Comparison of available intentizer types.
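
A small sketch of that rule of thumb, assuming the per-model figures are added on top of the 24 GB Redis AI allocation from the pod resources table and that the 0.8 GB applies to simple models beyond the 35 already covered; please confirm this interpretation for your model mix.

```python
# Illustrative Redis AI memory estimate (figures from the note above; the exact
# interpretation of the baseline should be confirmed for your deployment).
BASELINE_SIMPLE_MODELS = 35
GB_PER_SIMPLE_MODEL = 0.8
GB_PER_COMPLEX_MODEL = 4.0
REDIS_AI_BASE_GB = 24  # Redis AI RAM from the pod resources table

def redis_ai_ram_gb(simple_models: int, complex_models: int = 0) -> float:
    extra_simple = max(0, simple_models - BASELINE_SIMPLE_MODELS)
    return (REDIS_AI_BASE_GB
            + extra_simple * GB_PER_SIMPLE_MODEL
            + complex_models * GB_PER_COMPLEX_MODEL)

# e.g. 50 simple models and 2 complex models:
print(redis_ai_ram_gb(50, 2))  # 24 + 15*0.8 + 2*4 = 44.0 GB
```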