
Prerequisites

  • Install Kustomize v3+

    go install sigs.k8s.io/kustomize/kustomize/v3@v3.8.7
    ~/go/bin/kustomize version

    Note: Kustomize has been integrated into kubectl since v1.14 and does not need to be installed separately; a quick verification sketch follows this list.

  • Download MogDB Operator examples

    First, fork the MogDB Stack examples repository on GitHub at

    https://github.com/enmotech/mogdb-stack-examples/fork

    After forking the repository, clone it locally with a command like the following.

    YOUR_GITHUB_UN="<your GitHub username>"
    git clone --depth 1 "git@github.com:${YOUR_GITHUB_UN}/mogdb-stack-examples.git"
    cd mogdb-stack-examples

    The MogDB Operator installation manifests are in the kustomize directory.

  • Components to be installed

    • mogdb-operator

    • mogha

    • mogdb-monitor

    • mogdb-apiserver

    • mgo-client

      The mgo-client runs directly on the physical machine; the remaining components run inside Kubernetes.
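
To confirm the prerequisites are in place before moving on, you can verify that a usable Kustomize is available (either the standalone binary installed above or the version built into kubectl) and that the example manifests were cloned. A minimal sketch, run from the repository root:

# Standalone binary installed via `go install` (path assumes the default GOPATH).
~/go/bin/kustomize version

# kubectl v1.14+ bundles Kustomize; recent kubectl releases also print the bundled Kustomize version here.
kubectl version --client

# The installation manifests referenced in the steps below.
ls kustomize/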

Configuration

The default Kustomize configuration works in most Kubernetes environments and can be customized to meet your specific needs.

For example, to customize the image path for MogDB Operator, modify the images section of the kustomize/mogdb-operator/default/kustomization.yaml file:

images:
- name: controller
  newName: swr.cn-north-4.myhuaweicloud.com/mogdb-cloud/mogdb-operator
  newTag: v2.0.0

To change the namespace, modify the following setting in the same kustomize/mogdb-operator/default/kustomization.yaml file:

namespace: custom-namespace
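
As an alternative to editing the file by hand, the standalone kustomize binary can apply the same two changes, and kubectl can render the result so you can review it before installing. A sketch, assuming you run it from the repository root and that the binary from the prerequisites is at ~/go/bin/kustomize:

cd kustomize/mogdb-operator/default
# Point the controller image at a custom registry and tag.
~/go/bin/kustomize edit set image controller=swr.cn-north-4.myhuaweicloud.com/mogdb-cloud/mogdb-operator:v2.0.0
# Switch the target namespace.
~/go/bin/kustomize edit set namespace custom-namespace
cd -
# Render the customized manifests without applying them.
kubectl kustomize kustomize/mogdb-operator/default/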

Installation

Step 1: Install MogHA

kubectl apply -k kustomize/mogha

Expected output:

namespace/mogha created
serviceaccount/mogdb-ha created
clusterrole.rbac.authorization.k8s.io/mogdb-ha-role created
clusterrolebinding.rbac.authorization.k8s.io/mogdb-ha-rolebinding created
secret/huawei-registry created
service/mogdb-ha created
deployment.apps/mogdb-ha created

Check that the relevant component is working properly.

kubectl get pods -n mogha

Expected output:

NAME                        READY   STATUS    RESTARTS   AGE
mogdb-ha-5cf86cc667-r45vm   2/2     Running   0          66s

When the Pod is in the Running state, continue to the next step.
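
Instead of polling kubectl get pods, you can block until the MogHA Deployment rollout finishes. A minimal sketch using the Deployment name from the output above:

# Waits until deployment/mogdb-ha reports all replicas ready, or fails after the timeout.
kubectl rollout status deployment/mogdb-ha -n mogha --timeout=120s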

Step 2: Install MogDB Operator

kubectl apply -k kustomize/mogdb-operator/default/

Expected output:

namespace/mogdb-operator-system created
customresourcedefinition.apiextensions.k8s.io/mogdbbackups.mogdb.enmotech.io configured
customresourcedefinition.apiextensions.k8s.io/mogdbclusters.mogdb.enmotech.io configured
serviceaccount/mogdb-operator-controller-manager created
role.rbac.authorization.k8s.io/mogdb-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/mogdb-operator-manager-role created
rolebinding.rbac.authorization.k8s.io/mogdb-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/mogdb-operator-manager-rolebinding created
configmap/mogdb-operator-global-config created
configmap/mogdb-operator-manager-config created
secret/mogdb-operator-huawei-registry created
deployment.apps/mogdb-operator-controller-manager created

Check that the relevant component is working properly.

kubectl get pods -n mogdb-operator-system

Expected output:

NAME                                                READY   STATUS    RESTARTS   AGE
mogdb-operator-controller-manager-fcf875446-ngknd   1/1     Running   0          45s

When the Pod is in the Running state, continue to the next step.
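
Besides the Pod status, you can confirm that the operator's CRDs were registered and that its Deployment has fully rolled out. A quick sketch based on the resource names in the output above:

# The two CRDs installed with the operator.
kubectl get crd mogdbclusters.mogdb.enmotech.io mogdbbackups.mogdb.enmotech.io
# Wait for the controller manager rollout to complete.
kubectl rollout status deployment/mogdb-operator-controller-manager -n mogdb-operator-system --timeout=120s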

Step 3: Install MogDB Apiserver

kubectl apply -k kustomize/mogdb-apiserver/

Expected output:

namespace/mogdb-operator-system unchanged
serviceaccount/mogdb-apiserver created
clusterrole.rbac.authorization.k8s.io/mgo-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/mgo-cluster-role created
secret/mgorole-admin created
secret/mgouser-admin created
service/mogdb-apiserver created
deployment.apps/mogdb-apiserver created

Check that the relevant components are working properly.

kubectl get pods -n mogdb-operator-system

Expected output:

NAME                                                READY   STATUS    RESTARTS   AGE
mogdb-apiserver-699c855d9b-zx4sm                    1/1     Running   0          71s
mogdb-operator-controller-manager-fcf875446-ngknd   1/1     Running   0          18m

When the Pod is in the Running state, continue to the next step.
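
You can also wait for the apiserver rollout and look up its Service. A sketch, assuming the default manifests expose the NodePort (32444) that the mgo client's MGO_APISERVER_URL points at in Step 5:

# Wait for the apiserver Deployment to finish rolling out.
kubectl rollout status deployment/mogdb-apiserver -n mogdb-operator-system --timeout=120s
# Show the Service and its NodePort (expected to be 32444 with the default manifests).
kubectl get svc mogdb-apiserver -n mogdb-operator-system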

Step 4: Install Monitoring and Alerting

  • Edit kustomize/mogdb-monitor/alertmanager/alertmanager-conf.yaml and update the alerting configuration:

    smtp_smarthost: 'smtp.163.com:25'       # SMTP server host of the alert mailbox
    smtp_from: 'xxxx@163.com'               # Sender email address
    smtp_auth_username: 'xxxx@163.com'      # SMTP username
    smtp_auth_password: '<mailbox password>'         # Password (the authorization code generated when enabling the SMTP service)
    
    - name: 'email'
      email_configs:
      - to: 'xxx@qq.com'                    # Email addresses that receive alerts, separated by commas
  • Install the monitoring and alerting components

    kubectl apply -k kustomize/mogdb-monitor

    Expected output:

    namespace/monitor created
    serviceaccount/kube-state-metrics created
    serviceaccount/prometheus created
    clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
    clusterrole.rbac.authorization.k8s.io/prometheus created
    clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
    clusterrolebinding.rbac.authorization.k8s.io/prometheus created
    configmap/prometheus-config created
    service/grafana created
    service/kube-state-metrics created
    service/node-exporter created
    service/prometheus created
    deployment.apps/grafana created
    deployment.apps/kube-state-metrics created
    deployment.apps/prometheus created
    daemonset.apps/node-exporter created

Check that the relevant components are working properly.

kubectl get pods -n monitor

Expected output:

NAME                                  READY   STATUS    RESTARTS   AGE
grafana-5548d85c77-m69fc              1/1     Running   0          8s
kube-state-metrics-77f9d6d895-hw5t5   0/1     Running   0          8s
node-exporter-8j78l                   1/1     Running   0          7s
node-exporter-9ctt7                   1/1     Running   0          7s
node-exporter-chlg5                   1/1     Running   0          7s
prometheus-645fcdf654-d956p           2/2     Running   0          7s

When all Pods are in the Running state, continue to the next step.
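
If you prefer to script the check, you can wait for each workload's rollout and, optionally, port-forward to Grafana. A sketch, assuming the Grafana Service exposes Grafana's default port 3000 (the port is an assumption, not confirmed by the manifests shown here):

kubectl rollout status deployment/grafana -n monitor --timeout=120s
kubectl rollout status deployment/kube-state-metrics -n monitor --timeout=120s
kubectl rollout status deployment/prometheus -n monitor --timeout=120s
kubectl rollout status daemonset/node-exporter -n monitor --timeout=120s
# Optional: reach the Grafana UI at http://localhost:3000 while this command runs.
kubectl port-forward -n monitor svc/grafana 3000:3000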

Step 5: Install the mgo Client

wget https://cdn-mogdb.enmotech.com/mogdb-stack/v2.0.0/client-setup.sh
chmod +x client-setup.sh
./client-setup.sh

This downloads the mgo client and prompts you to set several environment variables in your session, which you can do with the following commands.

export MGOUSER="${HOME?}/.mgo/mgouser"
export MGO_CA_CERT="${HOME?}/.mgo/client.crt"
export MGO_CLIENT_CERT="${HOME?}/.mgo/client.crt"
export MGO_CLIENT_KEY="${HOME?}/.mgo/client.key"
export MGO_APISERVER_URL='https://127.0.0.1:32444'
export MGO_NAMESPACE=mogdb-operator-system

If you wish to add these variables to your environment permanently, run the following commands.

cat <<EOF >> ~/.bashrc
export PATH="${HOME?}/.mgo:$PATH"
export MGOUSER="${HOME?}/.mgo/mgouser"
export MGO_CA_CERT="${HOME?}/.mgo/client.crt"
export MGO_CLIENT_CERT="${HOME?}/.mgo/client.crt"
export MGO_CLIENT_KEY="${HOME?}/.mgo/client.key"
export MGO_APISERVER_URL='https://127.0.0.1:32444'
export MGO_NAMESPACE=mogdb-operator-system
EOF

source ~/.bashrc

Note: On macOS, use ~/.bash_profile instead of ~/.bashrc.
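
Before creating a cluster, you can confirm that the client files were installed and that the MGO_* variables are set in the current session. A quick check:

# Certificates, key, and mgouser file referenced by the variables above.
ls -l ~/.mgo/
# The environment variables exported above.
env | grep '^MGO'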

Step 6: Create MogDB Cluster

There are two ways to create a MogDB cluster:

  • Create the cluster with the mgo command line

mgo create cluster cluster1

Expected output:

created cluster: cluster1

Check whether the cluster was created successfully.

mgo show cluster cluster1

Expected output:

cluster : cluster1
 pod : cluster1-ib7zq (Running) on mogdb-k8s-001 (3/3) (primary)
 pod : cluster1-rtwdz (Running) on mogdb-k8s-002 (3/3) (standby)
 service : cluster1-svc-master - ClusterIP (10.1.149.4) - Ports (5432:30013/TCP)
 service : cluster1-svc-replicas - ClusterIP (10.1.175.46) - Ports (5432:30012/TCP)

As you can see, the cluster creates two Pods (one primary and one standby) and two Services.

  • Create the cluster with Kustomize

kubectl apply -k kustomize/mogdb-cluster/

Expected output:

mogdbcluster.mogdb.enmotech.io/cluster1 created

Check whether the cluster Pods were created successfully.

kubectl get pods -n mogdb-operator-system

Expected output:

NAME                                                READY   STATUS    RESTARTS   AGE
cluster1-m4sr3                                      2/2     Running   0          7m13s
cluster1-nebx9                                      2/2     Running   0          8m8s
mogdb-operator-controller-manager-fcf875446-gmmqp   1/1     Running   0          9m9s

At this time, all cluster-related Pods have been successfully created.
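
Whichever method you used, the cluster is represented by a MogDBCluster custom resource (the mogdbclusters.mogdb.enmotech.io CRD installed in Step 2), which you can inspect directly with kubectl. For example:

# List the clusters in the operator namespace.
kubectl get mogdbclusters.mogdb.enmotech.io -n mogdb-operator-system
# Show the full spec and status of cluster1.
kubectl describe mogdbclusters.mogdb.enmotech.io cluster1 -n mogdb-operator-system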

Uninstall

Note: Before uninstalling, make sure all MogDB clusters on your system have been completely deleted; otherwise the uninstall cannot complete.

Delete MogDB Cluster

kubectl delete -k kustomize/mogdb-cluster/

Expected output:

mogdbcluster.mogdb.enmotech.io "cluster1" deleted

Check whether all cluster Pods have been deleted.

kubectl get pods -n mogdb-operator-system

Expected output:

NAME                                                READY   STATUS    RESTARTS   AGE
mogdb-operator-controller-manager-fcf875446-gmmqp   1/1     Running   0          13m

You can see that all Pods in the cluster have been successfully deleted.

Note: There may be multiple clusters in the system; follow this procedure to delete them one by one. A sketch for listing any that remain follows.
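
To confirm that no clusters remain before continuing with the uninstall, list the MogDBCluster resources across all namespaces; the command should return nothing:

kubectl get mogdbclusters.mogdb.enmotech.io --all-namespaces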

Uninstall MogHA

kubectl delete -k kustomize/mogha

Expected output:

namespace "mogha" deleted
serviceaccount "mogdb-ha" deleted
clusterrole.rbac.authorization.k8s.io "mogdb-ha-role" deleted
clusterrolebinding.rbac.authorization.k8s.io "mogdb-ha-rolebinding" deleted
secret "huawei-registry" deleted
service "mogdb-ha" deleted
deployment.apps "mogdb-ha" deleted

Check whether the relevant Pod has been deleted.

kubectl get pods -n mogha

Expected output:

No resources found in mogha namespace.

When all related Pods have been successfully deleted, continue to the next step.

Uninstall Monitor

kubectl delete -k kustomize/mogdb-monitor

Expected output:

namespace "monitor" deleted
serviceaccount "kube-state-metrics" deleted
serviceaccount "prometheus" deleted
clusterrole.rbac.authorization.k8s.io "kube-state-metrics" deleted
clusterrole.rbac.authorization.k8s.io "prometheus" deleted
clusterrolebinding.rbac.authorization.k8s.io "kube-state-metrics" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted
configmap "alert-config" deleted
configmap "prometheus-config" deleted
service "grafana" deleted
service "kube-state-metrics" deleted
service "node-exporter" deleted
service "prometheus" deleted
deployment.apps "grafana" deleted
deployment.apps "kube-state-metrics" deleted
deployment.apps "prometheus" deleted
daemonset.apps "node-exporter" deleted

Check whether all related Pods have been deleted.

kubectl get pods -n monitor

Expected output:

No resources found in monitor namespace.

When all related Pods have been successfully deleted, continue to the next step.

Uninstall MogDB Apiserver

kubectl delete -k kustomize/mogdb-apiserver/

Expected output:

namespace "mogdb-operator-system" deleted
serviceaccount "mogdb-apiserver" deleted
clusterrole.rbac.authorization.k8s.io "mgo-cluster-role" deleted
clusterrolebinding.rbac.authorization.k8s.io "mgo-cluster-role" deleted
secret "mgorole-admin" deleted
secret "mgouser-admin" deleted
service "mogdb-apiserver" deleted
deployment.apps "mogdb-apiserver" deleted

Check whether all related Pods have been deleted.

kubectl get pods -n mogdb-operator-system

Expected output:

NAME                                                READY   STATUS    RESTARTS   AGE
mogdb-operator-controller-manager-fcf875446-2d525   1/1     Running   0          28s

When the MogDB Apiserver Pod has been deleted and only the operator's controller-manager Pod remains, continue to the next step.

Uninstall MogDB Operator

kubectl delete -k kustomize/mogdb-operator/default

Expected output:

namespace "mogdb-operator-system" deleted
customresourcedefinition.apiextensions.k8s.io "mogdbbackups.mogdb.enmotech.io" deleted
customresourcedefinition.apiextensions.k8s.io "mogdbclusters.mogdb.enmotech.io" deleted
serviceaccount "mogdb-operator-controller-manager" deleted
role.rbac.authorization.k8s.io "mogdb-operator-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "mogdb-operator-manager-role" deleted
rolebinding.rbac.authorization.k8s.io "mogdb-operator-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "mogdb-operator-manager-rolebinding" deleted
configmap "mogdb-operator-global-config" deleted
configmap "mogdb-operator-manager-config" deleted
secret "mogdb-operator-huawei-registry" deleted
deployment.apps "mogdb-operator-controller-manager" deleted

Check whether all related Pods have been deleted.

kubectl get pods -n mogdb-operator-system

Expected output:

No resources found in mogdb-operator-system namespace.

At this time, all components have been uninstalled.
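
As a final check, you can confirm that the MogDB CRDs and the namespaces created during installation are gone; both commands should report that nothing was found:

kubectl get crd | grep mogdb
kubectl get namespace mogha monitor mogdb-operator-system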
