Deployment Assumptions
Below you can find the default on-premises deployment configuration and assumptions for SentiOne Automate.
Area | SentiOne's deployment assumptions |
---|---|
Installation platform | SentiOne Automate can be installed in the customer's environment (on-premises) or in the cloud; in both cases the customer provides the infrastructure. |
Hardware | 1. For performance reasons, the CPUs must support AVX-512 Vector Neural Network Instructions (VNNI), a.k.a. “Intel® Deep Learning Boost (Intel® DL Boost)”; these instructions must also be accessible from within the VM (a quick check is sketched below the table). 2. The virtual machine configuration should not assume CPU overcommitment (overcommitment is managed at the Kubernetes level). 3. The Automate application is under intensive development, so resource requirements may change in the future (up or down); we recommend virtual machines that leave room for such changes. |
Internet access | ⚠️ The customer provides Internet access: 1. from the Kubernetes nodes to the Docker image repository provided by SentiOne (for downloading Docker images); 2. from the administrator's system to the Helm chart repository provided by SentiOne; 3. from the administrator's system to the Git repository provided by SentiOne. SentiOne provides login credentials for all of these services, and the customer provides the IP addresses (Kubernetes nodes, administrator's system) based on which SentiOne whitelists the traffic. |
Kubernetes infrastructure | The customer provides the infrastructure, i.e. a Kubernetes cluster, provisioned either on the customer's own infrastructure or as an existing PaaS solution (e.g. AKS in Azure). SentiOne uses the LTS version of Ubuntu Server for the Kubernetes infrastructure (control plane and nodes). |
Kubernetes node configuration | It is possible to change the number of workers, e.g. to increase their number at the expense of per-node resources, but the following should be considered: 1. When allocating pods to nodes, Kubernetes solves a knapsack-style packing problem - the larger the nodes, the better the resource utilization. 2. Some components have markedly higher CPU requirements; with small workers the VM size is always a limiting factor, and individual nodes may end up heavily loaded while others sit idle (e.g. during NLU training). |
Kubernetes infrastructure (HA) | The Kubernetes cluster may be provisioned in an HA configuration. The installation and configuration are prepared by the customer based on the official Kubernetes documentation. |
Infrastructure outside Kubernetes | The following external services can be provided outside the Kubernetes cluster (either as dedicated instances or as PaaS): RabbitMQ, ElasticSearch, PostgreSQL, RedisAI. By default, SentiOne provides these services inside the Kubernetes cluster. |
PostgreSQL database | It is possible to use an existing PostgreSQL database managed by the customer. In this case, SentiOne provides information on which databases should be created and which permissions the database user must have. The PostgreSQL version must meet the minimum version outlined in the Automate documentation (a version check is sketched below the table). |
PostgreSQL database (HA) | ⚠️ The customer provides an HA mechanism for the PostgreSQL database. |
RabbitMQ | The message queuing (MQ) system is installed within the Kubernetes cluster. |
RabbitMQ (HA) | It is possible to run RabbitMQ with HA; for this purpose, create a RabbitMQ cluster consisting of at least 3 nodes (in a sample production configuration) and configure the queues to be highly available (see the sketch below the table). |
ElasticSearch | By default, the ElasticSearch database is installed within the Kubernetes cluster. |
ElasticSearch (HA) | It is possible to run ElasticSearch with HA; for this purpose, create an ElasticSearch cluster consisting of at least 3 nodes (in a sample production configuration). |
Git repository | SentiOne provides the customer with a dedicated Git repository with read and write (RW) rights, where the currently used configuration files related to the customer's environment are stored. For easier debugging, configuration files should be edited and stored by the customer in this repository. This is not mandatory; the customer can instead store the files in its internal Git repository, but doing so hinders SentiOne support (SentiOne's access to the configuration repository allows the SentiOne Support team to track configuration changes made by the customer). The customer provides the details of the users on the customer side who are to be granted access, and SentiOne creates the accounts. |
Docker images | SentiOne provides pre-built Docker images via a private image repository. The Docker images provided by SentiOne are based on Oracle Linux. Applications run in the containers as a non-root user. |
Ingress | Services in the Kubernetes cluster that must be available outside the cluster via HTTP/HTTPS are exposed through the Kubernetes Ingress API object. |
SSL Certificate | ⚠️ Some Automate functionality requires the use of HTTPS (e.g. the Clipboard API). The default configuration assumes the use of HTTP. The customer is responsible for configuring TLS certificates and terminating HTTPS traffic. |
Kubernetes Namespace | All Kubernetes manifests must be deployed in a single namespace (a simple status check against that namespace is sketched below the table). |
External Reverse Proxy | ⚠️ The customer provides an external Edge Proxy service. In the default configuration, services are made available to the user via Kubernetes Ingress. |
Documentation | The Product documentation of the system in English is available here. |
Version updates | Version updates are performed by the customer based on the instructions provided by SentiOne. We recommend a bi-monthly update cycle for Automate to keep up with the latest bug fixes and features. |
Support / SLA | Technical issues are reported via a Service Desk system. Named accounts in the Service Desk system will be created by SentiOne based on the list of persons authorized to report errors, provided by the customer. |
Helm | Installation and updates of Automate and SentiOne are performed using Helm. |
Access to K8S for SentiOne | To streamline the support process, SentiOne recommends that the customer provide SentiOne with administrator access to both the Kubernetes cluster (e.g. a kubeconfig file) and the servers hosting dependent services (root or sudo access). The customer is responsible for providing network connectivity in a form agreed with SentiOne. |
Staging Environments | SentiOne recommends a minimum of one test environment to verify both the update process and the state of the system after the update, before the update is deployed to the production environment. |
Customer's Intranet | When Automate and SentiOne are installed in a customer environment with environment-specific configurations (e.g. an HTTP(S) proxy in the absence of direct Internet access) that may affect system correctness or cause problems during upgrades, the customer will adjust its environment to ensure that the SentiOne-supplied systems work correctly. |
System performance | Depending on the amount of traffic in the customer's environment, SentiOne provides three versions of the required infrastructure (see Pod configuration). The customer agrees to provide resources (RAM/CPU/disk) as recommended by SentiOne. |
Backups | ⚠️ The customer is responsible for the process of regular backups of files and databases. The documentation for PostgreSQL database backups can be found in the Database Maintenance section. |
Monitoring | ⚠️ The customer is responsible for the monitoring system (e.g. Grafana, Prometheus) as part of the implementation. Based on the Automate Monitoring documentation, the customer can independently monitor the status of individual applications. |
Collecting application log files | Access to logs is available through the Grafana UI, which uses the Loki backend. Grafana is configured within the Kubernetes cluster and is accessible only to cluster admins. |
CPU Requests vs Limits and VM sizing | Our configurations assume CPU overcommitment relative to the actual number of CPUs available on the VMs: the sum of real CPUs should be approximately 66% of the sum of CPU limits in the Kubernetes configuration (a worked example is sketched below the table). |
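
As a quick way to verify the assumption from the Hardware row (AVX-512 VNNI / Intel® DL Boost must be accessible from within the VM), the CPU flags reported by the Linux kernel can be inspected. The snippet below is a minimal sketch, assuming a Linux guest where the kernel reports the instruction set as the `avx512_vnni` flag; from a shell, `grep avx512_vnni /proc/cpuinfo` gives the same answer.

```python
# Minimal sketch: check from inside the (virtual) machine that the Linux kernel
# exposes the AVX-512 VNNI instructions required by SentiOne Automate.
# The flag name "avx512_vnni" is how the kernel reports Intel DL Boost support.

def has_avx512_vnni(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if a 'flags' line in /proc/cpuinfo lists avx512_vnni."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "avx512_vnni" in line.split()
    return False

if __name__ == "__main__":
    if has_avx512_vnni():
        print("OK: avx512_vnni is visible to this machine.")
    else:
        print("WARNING: avx512_vnni not found; check the CPU model and the VM's CPU passthrough settings.")
```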
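
For a customer-managed PostgreSQL instance (PostgreSQL database row), it is worth confirming the server version before installation. The sketch below is illustrative only: the connection parameters and the minimum version constant are hypothetical placeholders, and the actual minimum version must be taken from the Automate documentation.

```python
# Minimal sketch: verify that an existing, customer-managed PostgreSQL instance
# meets the minimum version required by the Automate documentation.
# Connection parameters and MIN_VERSION_NUM are hypothetical placeholders.
import psycopg2  # pip install psycopg2-binary

MIN_VERSION_NUM = 130000  # e.g. 13.0 expressed as server_version_num (placeholder)

conn = psycopg2.connect(host="db.example.internal", dbname="postgres",
                        user="automate", password="change-me")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW server_version_num")
        version_num = int(cur.fetchone()[0])
    if version_num < MIN_VERSION_NUM:
        raise SystemExit(f"PostgreSQL too old: {version_num} < {MIN_VERSION_NUM}")
    print(f"PostgreSQL version OK: {version_num}")
finally:
    conn.close()
```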
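
For the RabbitMQ (HA) row, one common way to make queues highly available on a 3-node cluster is to declare them as quorum queues (classic mirrored queues configured via policies are an alternative on older RabbitMQ versions); which mechanism SentiOne's configuration actually uses is not specified here. The sketch below uses the pika client with placeholder host, credentials, and queue name.

```python
# Minimal sketch: declare a replicated ("quorum") queue so that messages survive
# the loss of a single node in a 3-node RabbitMQ cluster.
# Host, credentials and queue name are illustrative placeholders.
import pika  # pip install pika

params = pika.ConnectionParameters(
    host="rabbitmq.example.internal",
    credentials=pika.PlainCredentials("automate", "change-me"),
)
with pika.BlockingConnection(params) as connection:
    channel = connection.channel()
    channel.queue_declare(
        queue="automate.example-queue",        # placeholder queue name
        durable=True,                          # quorum queues must be durable
        arguments={"x-queue-type": "quorum"},  # replicated across cluster nodes
    )
```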
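
For the Kubernetes Namespace row, a simple read-only check that all Automate workloads are running in the single agreed namespace can be done with the official Kubernetes Python client. The namespace name below is a hypothetical placeholder.

```python
# Minimal sketch: list the Automate workloads in the single agreed namespace and
# report their status. Requires the official Kubernetes Python client
# ("pip install kubernetes") and a kubeconfig with read access to the cluster.
from kubernetes import client, config

NAMESPACE = "sentione-automate"  # hypothetical namespace name

config.load_kube_config()        # uses the administrator's kubeconfig
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace=NAMESPACE).items:
    print(f"{pod.metadata.name}: {pod.status.phase}")
```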
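
To make the sizing rule from the CPU Requests vs Limits row concrete, the arithmetic is simply: real CPUs ≈ 0.66 × sum of CPU limits. The pod names and limit values below are invented placeholders; the actual limits come from the Pod configuration recommended by SentiOne.

```python
# Minimal sketch of the VM-sizing rule: the total number of real CPUs on the
# worker VMs should be roughly 66% of the sum of CPU limits declared in the
# Kubernetes configuration. The pod names and limits are placeholders.
import math

cpu_limits_per_pod = {   # hypothetical CPU limits, in cores
    "bot-engine": 4.0,
    "nlu-training": 8.0,
    "api-gateway": 2.0,
}

total_limits = sum(cpu_limits_per_pod.values())          # 14.0 cores of limits
recommended_real_cpus = math.ceil(total_limits * 0.66)   # ceil(9.24) = 10 cores

print(f"Sum of CPU limits: {total_limits} cores")
print(f"Recommended real CPUs across worker VMs: ~{recommended_real_cpus} cores")
```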