
Private Cloud Setup Guide


βœ”πŸŸ’ Step 1. Configure your local kubeconfig file to access the cluster.


To execute kubectl commands against a Kubernetes cluster, authentication is required. The kubeconfig file is usually named 'config' and is located by default in the following folder:

    πŸ”΄πŸ’» Windows:

C:\Users\YOUR_USER_NAME\.kube\

    πŸ”΄πŸ’» Ubuntu & Mac:

~/.kube/

The config file has to be present before you can execute kubectl commands against the cluster!
Here are guides for some cloud providers that explain how to get or generate your config file.
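As a quick sanity check before following any provider guide, the helper below (a sketch, not part of the official tooling) reports whether a kubeconfig already exists at the default location; pass a different path on Windows:

```shell
# Hypothetical helper: check whether a kubeconfig exists at the default location.
kubeconfig_present() {
    path="${1:-$HOME/.kube/config}"
    if [ -f "$path" ]; then
        echo "kubeconfig found at $path"
    else
        echo "no kubeconfig at $path - follow a provider guide below" >&2
        return 1
    fi
}

# Example: only talk to the cluster once the file is in place
# kubeconfig_present && kubectl cluster-info
```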

| 🟑 Google config/authorization Setup

https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl?hl=de#gcloud

| 🟑 Azure config/authorization Setup

https://learn.microsoft.com/de-de/azure/aks/control-kubeconfig-access

| 🟑 STACK IT config/authorization Setup

Once your STACKIT Kubernetes cluster has been created with the recommended resources mentioned under Cloud System Requirements, you can download the kubeconfig file directly from STACKIT.
Reference: https://docs.stackit.cloud/stackit/de/zugriff-auf-ein-kubernetes-cluster-10125618.html

1. 'Open' the STACKIT Portal
2. 'Choose' your project
3. 'Click' Kubernetes within the Runtime section
4. 'Click' the three dots on the right side of the cluster tile and choose Download configuration to download the kubeconfig
5. 'Rename' the downloaded kubeconfig file to 'config' and place it in your default kubeconfig folder (defaults to '~/.kube/'). If the folder does not exist, create it.
6. 'Verify' your authorization by typing the command 'kubectl cluster-info'

More information about kubectl and the kubeconfig file at: https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
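Steps 5 and 6 above can be sketched as a small helper for Linux/macOS (the download path and file name are assumptions; adjust them to wherever your browser saved the file):

```shell
# Hypothetical helper: copy a downloaded kubeconfig into the default location.
install_kubeconfig() {
    src="$1"
    dest_dir="${2:-$HOME/.kube}"
    mkdir -p "$dest_dir"          # create ~/.kube if it does not exist yet
    cp "$src" "$dest_dir/config"  # kubectl looks for the file name 'config'
    chmod 600 "$dest_dir/config"  # keep the credentials private
    echo "kubeconfig installed at $dest_dir/config"
}

# Example (assumed download location):
# install_kubeconfig "$HOME/Downloads/kubeconfig.yaml"
# kubectl cluster-info
```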

| 🟑 TANZU config/authorization Setup

Tanzu offers different authorization methods to access the cluster. This section covers basic vSphere authentication; for more information about vSphere, see https://docs.vmware.com/de/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-70CAF0BB-1722-4526-9CE7-D5C92C15D7D0.html

1. What you need:

1. 'kubectl-vsphere' (https://docs.vmware.com/de/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-0F6E45C4-3CB1-4562-9370-686668519FCA.html)
2. 'kubectl'  (https://kubernetes.io/de/docs/tasks/tools/)
3. 'tanzu cluster being setup'
4. 'tanzu/cluster server ip'
5. 'tanzu cluster name'
6. 'tanzu account'

For more information: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-93B29112-4492-431F-958A-12323540C38D.html

2. How to Authorize

If kubectl-vsphere is installed correctly and all requirements from point 1 are met, execute the following commands in a bash terminal:

export KUBECTL_VSPHERE_PASSWORD=YOUR_PASSWORD  '(please replace the text YOUR_PASSWORD with your actual password)'

./kubectl-vsphere login --vsphere-username YOUR_ACCOUNT@tanzu.local --server=https://123.123.123.11 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name MY_CLUSTER_NAME --tanzu-kubernetes-cluster-namespace MY_CLUSTER_NAMESPACE

In the command above, please replace the placeholders:
1. 'YOUR_ACCOUNT@tanzu.local' with your account username
2. 'https://123.123.123.11' with your cluster server IP
3. 'MY_CLUSTER_NAME' with your cluster name
4. 'MY_CLUSTER_NAMESPACE' with your cluster namespace

If authorization was successful, you should be able to execute kubectl commands. Now switch to the correct context:

kubectl config use-context MY_CLUSTER_NAME  (like above, replace MY_CLUSTER_NAME with your cluster name)
kubectl get pods
kubectl get nodes

βœ”πŸŸ’ Step 2. Verify helm installation.


Once you have access to the cluster, the next step is to verify that you have all the access rights needed to deploy applications / services / pods via helm. Try to deploy a simple nginx application to your cluster:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n default

If the deployment was successful, helm should print a release summary containing 'STATUS: deployed'. Now let's test whether the pods are available:

kubectl get pods -n default

Output should be similar to:

NAME                                     READY   STATUS    RESTARTS   AGE
ingress-nginx-ingress-5d55d8b9dc-v46bg   1/1     Running   0          1m14s

⚑ Last to-do: clean up our deployment.

helm uninstall ingress-nginx -n default

If some of the commands were not successful, please contact your cloud administrator for access.
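The verification above can also be run as one small script that aborts at the first failing command, which makes it obvious at which step a missing permission bites (a sketch using the same release name and namespace as the commands above; it requires a working cluster connection):

```shell
#!/bin/sh
set -e  # abort on the first failing command

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n default
kubectl get pods -n default
helm uninstall ingress-nginx -n default
echo "helm access verified"
```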


βœ”πŸŸ’ Step 3. Verify GIT installation / authorization & Clone infrastructure repository


The entire mybusiness-ai stack is assembled into helm charts. The helm charts require some environment variables to be present in order to deploy the application individually. Those environment variables can be set directly in your bash session (https://devconnected.com/set-environment-variable-bash-how-to/). But first, let's clone the infrastructure repository locally:
Cloning commands:

git clone git@git.mobile2b.de:mybusiness-ai/infrastructure.git
git checkout master

If the cloning was not successful, please contact your administrator for access.


βœ”πŸŸ’ Step 4. Create the environment file which contains all the environment values for your individual mybusiness-ai deployment.


Once you have cloned the infrastructure repository locally, go into the folder /kubernetes and create a new file.
Use a file name following the pattern .env_MY_COMPANY_production, replacing the text MY_COMPANY with your company name.

Once you have created the file, paste the content below into it, then check the individual description of each environment variable and adjust it accordingly.

## Cluster features
export CLUSTER_PLATFORM=kubernetes  # possible values: 'kubernetes' or 'openshift'. If your cloud provider is OpenShift, use 'openshift'; this value applies additional configuration to the pod's security context
export HAS_MONITORING=false   # MORE INFORMATION check Grafana Deployment (Optional Steps below) 
export HAS_CERTMANAGER=false  # MORE INFORMATION check Cert Manager Deployment (Optional Steps below)
export HAS_DB_BACKUPS=false   # Guide is in progress, keep this value false

## Databases Monitoring
export MONGODB_SERVICEMONITOR=false # Enable the ServiceMonitor for mongodb? (INFO: this only works if the prometheus / grafana monitoring stack is configured as well)
export MARIADB_SERVICEMONITOR=false # Enable the ServiceMonitor for mariadb? (INFO: this only works if the prometheus / grafana monitoring stack is configured as well)

## Hosts
export CLUSTER_ISSUER=letsencrypt  # MORE INFORMATION check Cert Manager Deployment (Optional Steps below)
export MASTER_SERVICE_BASE_URL=https://master.mybusiness.ai # never change this value
export FRONTEND_HOSTNAME=test.mydomain.com  # hostname for the frontend (MANDATORY variable)
export FRONTEND_HOSTNAME_2=unknown.mydomain.com # second hostname for the frontend (NOT MANDATORY)
export API_GATEWAY_HOSTNAME=api.mydomain.com # hostname for the backend / api gateway (MANDATORY variable)
export API_GATEWAY_HOSTNAME_2=apigatewayhostnameunknown # second hostname for the backend / api gateway (NOT MANDATORY)
export CHAT_SERVICE_HOSTNAME=chat.mydomain.com # hostname for the chat-service (used for refresh events at flows / uis and other stuff) (MANDATORY variable if you want to use flows/uis)
export CHAT_SERVICE_HOSTNAME_2=chatservicehostnameunknown # second hostname for the chat-service (used for refresh events at flows / uis and other stuff) (NOT MANDATORY)

## CORS configuration
export CORS_ALLOW_ORIGIN="test.mydomain.com" # use the same value as the environment_variable FRONTEND_HOSTNAME

## Static IP for ingress
export LB_IP=12321.13131.3131 # the public IP address for ingress (load balancer IP address) # MORE INFORMATION check Cert Manager Deployment (Optional Steps below)

## Monitoring / Logging
export ES_DISK_SIZE=50

## Credentials for accessing private registry
export PRIVATE_REGISTRY_HOSTNAME=git.mobile2b.de:4567
export PRIVATE_REGISTRY_USERNAME=Deployment-User
export PRIVATE_REGISTRY_PASSWORD='***************' # omitting password
export GCR_HOSTNAME=eu.gcr.io
export GCR_USERNAME=_json_key
export GCR_PASSWORD='***************' # omitting password

## Credentials for backup cron jobs (Guide is in progress)
export AWSACCESSKEYID=none
export AWSSECRETACCESSKEY=none
export AWSREGION=eu-central-1
export AWSBUCKET=none

## Root passwords of DBs
export MYSQL_ROOT_PASSWORD='***************' # omitting password
export INFLUXDB_ROOT_PASSWORD='***************' # omitting password

## Cluster names
export CLUSTER_NAME_LONG=myclustername-production # this value will be passed on each pod as an environment variable, replace myclustername with something individual
export CLUSTER_NAME_SHORT=myclustername-prod # this value will be passed on each pod as an environment variable, replace myclustername with something individual

## JMX configuration for all the pods which are Java services
export JMX_PORT=8686
export JMX_USER=m2badmin
export JMX_PASSWORD=q4YVML91iR8td9b9rx0
export XMX=512m
export XMS=512m

## Pod Resource Limits / Requests | (The values below are set individually for each pod)
export API_GATEWAY_XMX=512m  # The max heap size for API_GATEWAY (java service)
export API_GATEWAY_XMS=512m  # The initial heap size for API_GATEWAY (java service)
export API_GATEWAY_REQUESTS_CPU=500m # The cpu request for this pod
export API_GATEWAY_REQUESTS_MEMORY=512Mi # The memory request for this pod
export API_GATEWAY_LIMITS_CPU=2 # The cpu limit for this pod
export API_GATEWAY_LIMITS_MEMORY=2Gi # The memory limit for this pod

export BO_XMX=2048m # The max heap size for this pod (java service)
export BO_XMS=2048m # The initial heap size for this pod (java service)
export BO_REQUESTS_CPU=500m
export BO_REQUESTS_MEMORY=2Gi
export BO_LIMITS_CPU=2
export BO_LIMITS_MEMORY=4Gi

export DOCUMENT_SERVICE_XMX=256m # The max heap size for this pod (java service)
export DOCUMENT_SERVICE_XMS=256m # The initial heap size for this pod (java service)
export DOCUMENT_SERVICE_REQUESTS_CPU=100m
export DOCUMENT_SERVICE_REQUESTS_MEMORY=256Mi
export DOCUMENT_SERVICE_LIMITS_CPU=1
export DOCUMENT_SERVICE_LIMITS_MEMORY=1Gi

export FLOW_SERVICE_REQUESTS_MEMORY=2Gi
export FLOW_SERVICE_REQUESTS_CPU=1
export FLOW_SERVICE_LIMITS_MEMORY=3Gi
export FLOW_SERVICE_LIMITS_CPU=4

export FRONTEND_REQUESTS_CPU=100m
export FRONTEND_REQUESTS_MEMORY=16Mi
export FRONTEND_LIMITS_CPU=200m
export FRONTEND_LIMITS_MEMORY=32Mi

export INVENTORY_SERVICE_REQUESTS_CPU=100m
export INVENTORY_SERVICE_REQUESTS_MEMORY=256Mi
export INVENTORY_SERVICE_LIMITS_CPU=300m
export INVENTORY_SERVICE_LIMITS_MEMORY=512Mi

export IOT_SERVICE_REQUESTS_CPU=100m
export IOT_SERVICE_REQUESTS_MEMORY=256Mi
export IOT_SERVICE_LIMITS_CPU=300m
export IOT_SERVICE_LIMITS_MEMORY=512Mi

export MAIN_SERVICE_XMX=768m
export MAIN_SERVICE_XMS=768m
export MAIN_SERVICE_REQUESTS_CPU=1
export MAIN_SERVICE_REQUESTS_MEMORY=768Mi
export MAIN_SERVICE_LIMITS_CPU=4
export MAIN_SERVICE_LIMITS_MEMORY=3Gi

export NOTIFICATION_SERVICE_REQUESTS_CPU=100m
export NOTIFICATION_SERVICE_REQUESTS_MEMORY=256Mi
export NOTIFICATION_SERVICE_LIMITS_CPU=300m
export NOTIFICATION_SERVICE_LIMITS_MEMORY=512Mi

export STORAGE_SERVICE_XMX=768m
export STORAGE_SERVICE_XMS=768m
export STORAGE_SERVICE_REQUESTS_CPU=200m
export STORAGE_SERVICE_REQUESTS_MEMORY=768Mi
export STORAGE_SERVICE_LIMITS_CPU=500m
export STORAGE_SERVICE_LIMITS_MEMORY=1536Mi

export USER_SERVICE_REQUESTS_CPU=100m
export USER_SERVICE_REQUESTS_MEMORY=1Gi
export USER_SERVICE_LIMITS_CPU=2
export USER_SERVICE_LIMITS_MEMORY=2Gi

export CHAT_SERVICE_REQUESTS_CPU=50m
export CHAT_SERVICE_REQUESTS_MEMORY=64Mi
export CHAT_SERVICE_LIMITS_CPU=100m
export CHAT_SERVICE_LIMITS_MEMORY=128Mi

export REPORT_PRINT_SERVICE_REQUESTS_CPU=100m
export REPORT_PRINT_SERVICE_REQUESTS_MEMORY=1Gi
export REPORT_PRINT_SERVICE_LIMITS_CPU=1
export REPORT_PRINT_SERVICE_LIMITS_MEMORY=2Gi

export INFLUXDB_REQUESTS_CPU=100m
export INFLUXDB_REQUESTS_MEMORY=512Mi
export INFLUXDB_LIMITS_CPU=500m
export INFLUXDB_LIMITS_MEMORY=1Gi

export MONGODB_REQUESTS_CPU=1
export MONGODB_REQUESTS_MEMORY=2Gi
export MONGODB_LIMITS_CPU=3
export MONGODB_LIMITS_MEMORY=4Gi

export MARIADB_REQUESTS_CPU=200m
export MARIADB_REQUESTS_MEMORY=512Mi
export MARIADB_LIMITS_CPU=3
export MARIADB_LIMITS_MEMORY=2Gi
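A small pre-flight check (a sketch, not part of the official tooling): source the env file and verify that the mandatory variables from the listing above are actually set before running any deployment. The list of checked variables is an assumption based on the comments above; extend it as needed.

```shell
# Hypothetical helper: source an env file and complain about missing mandatory values.
check_env_file() {
    file="$1"
    . "$file"   # equivalent to 'source' in bash
    missing=""
    for var in CLUSTER_PLATFORM FRONTEND_HOSTNAME API_GATEWAY_HOSTNAME CORS_ALLOW_ORIGIN; do
        eval "val=\${$var}"
        if [ -z "$val" ]; then
            missing="$missing $var"
        fi
    done
    if [ -n "$missing" ]; then
        echo "missing mandatory variables:$missing" >&2
        return 1
    fi
    echo "env file looks complete"
}

# Example:
# check_env_file .env_MY_COMPANY_production
```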

βœ”πŸŸ’ Step 5. Deploy the software to your cloud


1. Prerequisites / Infos:

  • kubectl context setup
  • Finish Step 4
  • Open bash in the folder /kubernetes at the infrastructure repository
  • Execute the following bash commands:
KUBE_CONTEXT_NAME=YOUR_CONTEXT_NAME
kubectl config use-context $KUBE_CONTEXT_NAME
ENVIRONMENT_FILE_PATH=.env_YOUR_ENVIRONMENT_FILE_NAME_FROM_STEP_4
source $ENVIRONMENT_FILE_PATH
helmfile -e mybusinessai -f helmfile.yaml apply --skip-deps --kube-context $KUBE_CONTEXT_NAME

If everything went well, helmfile should report success. If errors occurred, please contact your administrator for help.
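After 'helmfile ... apply' returns, the pods may still be starting up. A quick way to keep an eye on the rollout (a sketch; both commands are plain kubectl and require your cluster context):

```shell
# Watch all pods across namespaces until everything settles into 'Running':
kubectl get pods -A --watch

# Or get a one-shot list of anything that is not Running yet:
kubectl get pods -A --field-selector=status.phase!=Running
```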


βœ”πŸ”΅ Optional Step | Ingress Deployment (NGINX)


1. Prerequisites / Infos:

  • Kubectl context setup
  • Finish Step 4
  • Be in the folder /kubernetes at the infrastructure repository
  • The commands below (at 2.) will deploy an ingress controller (nginx) to your cluster; this ingress controller will act as a load balancer and handle all incoming requests
  • You need to set these environment variables in the .env file created at Step 4
export LB_IP=123.123.123.123 # set your load balancer IP address
... all the other environment variables from step 4.....

2. Deployment commands (shell commands):

KUBE_CONTEXT_NAME=YOUR_CONTEXT_NAME
kubectl config use-context $KUBE_CONTEXT_NAME
ENVIRONMENT_FILE_PATH=.env_YOUR_ENVIRONMENT_FILE
source $ENVIRONMENT_FILE_PATH
helmfile -e services -f helmfile.yaml apply --skip-deps --kube-context $KUBE_CONTEXT_NAME -i

Follow all the instructions from the helmfile
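To confirm the ingress controller is up and was assigned your load balancer IP, you can inspect its service (a sketch; the exact service name depends on the chart):

```shell
# The EXTERNAL-IP column should show the value of LB_IP once provisioning is done;
# '<pending>' means the cloud provider has not assigned the address yet.
kubectl get svc -A | grep -i ingress
```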


βœ”πŸ”΅ Optional Step | Cert Manager Deployment


Important: the Cert Manager deployment has to be configured individually; for simplicity, we have not included all the required steps in this guide.

1. Prerequisites / Infos:

  • Important: this cert-manager has to be set up individually, you cannot just execute the commands below!
  • Kubectl context setup
  • Finish Step 4
  • Be in the folder /kubernetes at the infrastructure repository
  • The commands below will deploy a certificate manager; the certificate manager will automatically renew certificates for your ingress hostnames, using letsencrypt as the issuer for all the DNS challenges
  • You need to set these environment variables in the .env file created at Step 4
export HAS_CERTMANAGER=true
export CLUSTER_ISSUER=letsencrypt
... all the other environment variables from step 4.....

2. Example of a letsencrypt ClusterIssuer file:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
    name: letsencrypt
    namespace: {{ .Release.Namespace }}
    annotations:
        cert-manager.io/cluster-issuer: letsencrypt
        acme.cert-manager.io/http01-edit-in-place: "true"
    labels:
        app.kubernetes.io/name: {{ include "certificates.name" . }}
        helm.sh/chart: {{ include "certificates.chart" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
    acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: {{ .Values.global.issuerEmail }}
        privateKeySecretRef:
            name: letsencrypt
        solvers:
            -   http01:
                    ingress:
                        class: nginx
                selector:
                    dnsNames:
                        - 'mysubdomain.domain.de'
                        - 'mysubdomaintwo.domain.de'
                        - 'api.domain.de'
                        - 'chat.domain.de'
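Once the issuer is deployed, you can verify that cert-manager accepted it and is issuing certificates (a sketch; the resource name matches the manifest above):

```shell
# READY should be 'True' once letsencrypt has validated the ACME account:
kubectl get clusterissuer letsencrypt

# Inspect certificate objects and their issuance events across namespaces:
kubectl get certificate -A
kubectl describe certificate -A
```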



βœ”πŸ”΅ Optional Step | Deploy Grafana


1. Prerequisites / Infos:

  • kubectl context setup
  • Be in the folder /kubernetes at the infrastructure repository
  • You need to set those environment variables in your .env file created at Step 4
export HAS_MONITORING=true
export GRAFANA_PASSWORD=your_grafana_password # this will be your initial password for the user 'admin'
export GRAFANA_INGRESS=false       # if you want to enable ingress / dns for grafana (if enabled, keep in mind you need an ingress controller, a grafana hostname & certificates configured, read more below)
export CLUSTER_ISSUER=letsencrypt # cluster issuer for the grafana ingress settings, default: letsencrypt
export GRAFANA_HOSTNAME=mysubdomain.mydomain.com
export PROMETHEUS_DISK_SIZE=50  # how much space (gigabyte) do you want to use for prometheus data (recommended: 50)
export GRAFANA_DISK_SIZE=50     # how much space (gigabyte) do you want to use for grafana data (recommended: 50)
... other environment variables....

2. Execute the deployment commands in bash (open bash in the folder /kubernetes):

KUBE_CONTEXT_NAME=YOUR_KUBECTL_CONTEXT_NAME
kubectl config use-context $KUBE_CONTEXT_NAME

ENVIRONMENT_FILE_PATH=.env_YOUR_ENVIRONMENT_FILE_NAME_FROM_STEP_4
source $ENVIRONMENT_FILE_PATH
helmfile -e services -f helmfile.yaml apply --skip-deps --kube-context $KUBE_CONTEXT_NAME -i
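If GRAFANA_INGRESS=false, you can still reach the dashboard through a local port-forward (a sketch; the service name and port are assumptions, check 'kubectl get svc' for the real ones):

```shell
# Forward local port 3000 to the grafana service inside the cluster
# (service name 'grafana' is an assumption - verify with 'kubectl get svc'):
kubectl port-forward svc/grafana 3000:80

# Then open http://localhost:3000 and log in as 'admin'
# with the value of GRAFANA_PASSWORD from your .env file.
```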

βœ”πŸ”΅ Troubleshooting for Tanzu Deployment


After deployment, some errors may occur on a Tanzu Kubernetes cluster, for example:

https://www.unknownfault.com/posts/podsecuritypolicy-unable-to-admit-pod/

To solve the issue, you have to deploy this .yaml file:

#  https://www.unknownfault.com/posts/podsecuritypolicy-unable-to-admit-pod/
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
    name: rolebinding-cluster-user-administrator
    namespace: default
roleRef:
    kind: ClusterRole
    name: edit                             #Default ClusterRole
    apiGroup: rbac.authorization.k8s.io
subjects:
    -   kind: User
        name: sso:ddumitru@tanzu.local            #sso:<username>@<domain>
        apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
    name: administrator-cluster-role-binding
roleRef:
    kind: ClusterRole
    name: psp:vmware-system-privileged
    apiGroup: rbac.authorization.k8s.io
subjects:
    -   kind: Group
        name: system:authenticated
        apiGroup: rbac.authorization.k8s.io
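To apply the fix, save the manifest above to a file (the name psp-fix.yaml is just an example) and create the bindings, then verify they exist:

```shell
kubectl apply -f psp-fix.yaml

# Verify both bindings were created:
kubectl get rolebinding rolebinding-cluster-user-administrator -n default
kubectl get clusterrolebinding administrator-cluster-role-binding
```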
