Saturday, December 12, 2020

UPGRADING POSTGRESQL TO A MAJOR VERSION

 


To do the upgrade, we have several options:


pg_dump & pg_restore 

https://www.enterprisedb.com/postgres-tutorials/how-use-pgdump-and-pgrestore-multi-host-enviorment
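A minimal sketch of this route, assuming the old cluster listens on 5432, the new one on 5433, and a database named appdb (all of these are assumptions; adjust to your setup). The commands are built as strings and printed rather than executed:

```shell
# Old and new cluster ports and the database to migrate (assumed values).
OLD_PORT=5432
NEW_PORT=5433
DB=appdb

# Custom-format dump (-Fc) lets pg_restore parallelize the restore with -j.
dump_cmd="pg_dump -p ${OLD_PORT} -Fc -f /tmp/${DB}.dump ${DB}"
restore_cmd="pg_restore -p ${NEW_PORT} -j 4 -d ${DB} /tmp/${DB}.dump"

# Dry run: show the commands that would be executed.
echo "$dump_cmd"
echo "$restore_cmd"
```

When running the printed commands, it is generally recommended to use the newer major version's pg_dump against the old server; a dump taken that way restores cleanly into the new cluster.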


pg_upgrade : copies the data at the binary level


pg_upgrade is not destructive. 

If things go wrong, you can always delete the new data directory and start from scratch.


To upgrade a database, two steps are needed:

initdb -D /data/db_new

pg_upgrade -d /data/db_old/ -D /data/db_new -b /path/to/pg_old/bin/ -B /path/to/pg_new/bin/  



pg_upgrade --link : In-place upgrades

To reduce downtime, we can use the --link option:

pg_upgrade -d /data/db_old/ -D /data/db_new -b /path/to/pg_old/bin/ -B /path/to/pg_new/bin/  --link
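Before the real run, pg_upgrade's --check flag validates the two clusters without modifying anything, so a dry run is cheap. Shown here as a printed command string, using the same example paths as above:

```shell
# Pre-flight check: --check only reports incompatibilities between the two
# clusters, it does not modify either of them.
check_cmd="pg_upgrade -d /data/db_old/ -D /data/db_new -b /path/to/pg_old/bin/ -B /path/to/pg_new/bin/ --check"
echo "$check_cmd"
```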


https://www.cybertec-postgresql.com/en/upgrading-and-updating-postgresql/

Tuesday, September 22, 2020

MySQL 8 Router - High Availability - with Pacemaker on CentOS 8

 
https://github.com/auntaru/vagrantfiles
https://www.youtube.com/watch?v=EVb_YhO3c7U
https://raw.githubusercontent.com/auntaru/vagrantfiles/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/provision.sh
https://github.com/auntaru/vagrantfiles/blob/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/innodb-cluster-status.js
https://github.com/auntaru/vagrantfiles/blob/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/Vagrantfile



Goal of this article :
   MySQL Router shouldn't be a single point of failure between the application and a MySQL Cluster

On all nodes, install Pacemaker and configure some settings.
If the [HighAvailability] repository is not available on your CentOS 8, run [# dnf upgrade centos-repos] first.

The first step is to install pacemaker on all the machines we will use for our “MySQL Router Cluster”:

# # enable [HighAvailability] repo and install (it's disabled by default)
[root@hyperv-primary-cos8 ~]# dnf --enablerepo=HighAvailability -y install pacemaker pcs

# Now we need to start the pcsd service and enable it at boot (on all machines):
[root@hyperv-primary-cos8 ~]# systemctl enable --now pcsd

Set up authentication: this operation is again executed on all machines that are part of the MySQL Router Cluster:
# # set cluster admin password
[root@hyperv-primary-cos8 ~]# echo MyStrongPassw0rd | passwd --stdin hacluster

And on one of the nodes, we can now execute the following command:

On CentOS7 : 

[root@mysql1 ~]# pcs cluster auth mysql1 mysql2 mysql3 -u hacluster -p MyStrongPassw0rd --force
mysql1: Authorized
mysql2: Authorized
mysql3: Authorized

On CentOS8 :
On a Node, Configure basic Cluster settings.
# authorize among nodes
[root@hyperv-primary-cos8 ~]# pcs host auth mysql01 mysql02 mysql03 -u hacluster -p MyStrongPassw0rd
mysql02: Authorized
mysql01: Authorized
mysql03: Authorized
[root@hyperv-primary-cos8 ~]#


# configure / create pacemaker-corosync cluster :
[root@hyperv-primary-cos8 ~]# pcs cluster setup routercluster mysql01 mysql02 mysql03
No addresses specified for host 'mysql01', using 'mysql01'
No addresses specified for host 'mysql02', using 'mysql02'
No addresses specified for host 'mysql03', using 'mysql03'
Destroying cluster on hosts: 'mysql01', 'mysql02', 'mysql03'...
mysql02: Successfully destroyed cluster
mysql03: Successfully destroyed cluster
mysql01: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'mysql01', 'mysql02', 'mysql03'
mysql01: successful removal of the file 'pcsd settings'
mysql02: successful removal of the file 'pcsd settings'
mysql03: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'mysql01', 'mysql02', 'mysql03'
mysql01: successful distribution of the file 'corosync authkey'
mysql01: successful distribution of the file 'pacemaker authkey'
mysql02: successful distribution of the file 'corosync authkey'
mysql02: successful distribution of the file 'pacemaker authkey'
mysql03: successful distribution of the file 'corosync authkey'
mysql03: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'mysql01', 'mysql02', 'mysql03'
mysql01: successful distribution of the file 'corosync.conf'
mysql02: successful distribution of the file 'corosync.conf'
mysql03: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@hyperv-primary-cos8 ~]#


We can now start the cluster (running the following command on one node):
[root@hyperv-primary-cos8 ~]# pcs cluster start --all
mysql02: Starting Cluster...
mysql01: Starting Cluster...
mysql03: Starting Cluster...
[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]# pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: mysql01 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
   * Last updated: Wed Sep 16 18:34:28 2020
   * Last change:  Wed Sep 16 18:19:10 2020 by hacluster via crmd on mysql01
   * 3 nodes configured
   * 0 resource instances configured
 Node List:
   * Online: [ mysql01 mysql02 mysql03 ]

PCSD Status:
  mysql01: Online
  mysql02: Online
  mysql03: Online
[root@hyperv-primary-cos8 ~]#


Now we can set some properties to our cluster:
[root@hyperv-primary-cos8 ~]# pcs property set stonith-enabled=false
[root@hyperv-primary-cos8 ~]# pcs property set no-quorum-policy=ignore
[root@hyperv-primary-cos8 ~]# pcs resource defaults migration-threshold=1


Adding resources

VIP

The first resource we will use is the VIP (Virtual IP) that the application will use to connect to MySQL Router:

[root@hyperv-primary-cos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:38:01:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fc00::1:122/64 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::77b3:112:420b:3363/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hyperv-primary-cos8 ~]#




[root@hyperv-primary-cos8 ~]# pcs resource create Router_VIP ocf:heartbeat:IPaddr2 \
>     ip=192.168.0.11 cidr_netmask=24 op monitor interval=5s


[root@hyperv-primary-cos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:38:01:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.0.11/24 brd 192.168.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fc00::1:122/64 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::77b3:112:420b:3363/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hyperv-primary-cos8 ~]#


yum install -y mysql-router



Before adding MySQL Router as a resource, we need to be sure that the router is installed and bootstrapped on each node. As a reminder, this is how to proceed:


MySQL Router : Bootstrap

password="Cluster^123"
mysql -e "create user 'mysqlrouter' identified by '$password'"

[root@hyperv-proxy-cos8 ~]# echo $password | mysqlrouter --bootstrap mycluster@mysql01 --user=mysqlrouter

mysql --host=192.168.0.22 --port=3306 --user=mycluster -p"Cluster^123"


Now we can add the resource that will manage MySQL Router in our pacemaker cluster. We will use systemd.

You can verify if systemd service for mysqlrouter is available using the following command:

[root@mysql1 ~]# pcs resource list | grep router
service:mysqlrouter - systemd unit file for mysqlrouter
systemd:mysqlrouter - systemd unit file for mysqlrouter


OK, so let's add the resource and let it run on all nodes (as a clone); that way there is no need to start mysqlrouter elsewhere only when the running one has an issue:

[root@hyperv-primary-cos8 ~]# pcs resource create mysqlrouter systemd:mysqlrouter clone


Let’s verify if the resource is now added:

[root@hyperv-primary-cos8 ~]# crm_mon -1
Cluster Summary:
  * Stack: corosync
  * Current DC: mysql01 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
  * Last updated: Wed Sep 16 19:00:31 2020
  * Last change:  Wed Sep 16 19:00:27 2020 by root via cibadmin on mysql01
  * 3 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ mysql01 mysql02 mysql03 ]

Active Resources:
  * Router_VIP  (ocf::heartbeat:IPaddr2):       Started mysql01
  * Clone Set: mysqlrouter-clone [mysqlrouter]:
    * Started: [ mysql01 mysql02 mysql03 ]

[root@hyperv-primary-cos8 ~]#



Now we need to tell the cluster that the VIP needs to run where a router instance is also running:


[root@hyperv-primary-cos8 ~]# pcs constraint colocation add Router_VIP \
>                  with mysqlrouter-clone score=INFINITY
[root@hyperv-primary-cos8 ~]#


Now we will kill the mysqlrouter process on mysql01 and see what the cluster does:
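A sketch of that failure test (to be run on mysql01; the commands are guarded with fallbacks so the snippet is safe to run anywhere):

```shell
# Simulate a router failure on this node.
kill_router() {
    pkill -f mysqlrouter || echo "no mysqlrouter process found"
}
kill_router

# With migration-threshold=1, pacemaker should restart the clone member here
# (or move the VIP if the node keeps failing); crm_mon shows the new state.
crm_mon -1 2>/dev/null || echo "crm_mon not available on this host"
```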


https://lefred.be/content/mysql-router-ha-with-pacemaker/
https://www.server-world.info/en/note?os=CentOS_8&p=pacemaker&f=1
https://raw.githubusercontent.com/behemius/gcp01/master/node1_install.sh
https://raw.githubusercontent.com/behemius/gcp01/master/router_install.sh


[root@hyperv-primary-cos8 ~]# mysqlsh --sql -e "select * from performance_schema.replication_group_members\G" --user=mycluster -p"Cluster^123"
WARNING: Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 3d17fb2b-f143-11ea-9c6b-00155d380107
   MEMBER_HOST: hyperv-primary-cos8
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.17
*************************** 2. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 6a766bf2-f184-11ea-a7ad-00155d380109
   MEMBER_HOST: hyperv-proxy-cos8
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.17
*************************** 3. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: ef479c0a-f183-11ea-b35a-00155d380108
   MEMBER_HOST: hyperv-standby-cos8
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.17
[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]# mysqlsh --sql -e "SELECT @@global.read_only, @@global.super_read_only, @@global.group_replication_group_seeds\G" --user=mycluster -p"Cluster^123"
WARNING: Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
                    @@global.read_only: 0
              @@global.super_read_only: 0
@@global.group_replication_group_seeds: 192.168.0.22:33061,192.168.0.24:33061,192.168.0.28:33061,hyperv-proxy-cos8:33061
[root@hyperv-primary-cos8 ~]#




[root@hyperv-primary-cos8 ~]# cat ./innodb-cluster-status.js
try {
   shell.connect('mycluster@hyperv-primary-cos8', "Cluster^123");
   print(dba.getCluster().status());
   print(session.runSql("SHOW DATABASES;"));
} catch(e) {
     print('\nThe InnoDB cluster could not be created.\n\nError: ' +
           e.message + '\n');
}
[root@hyperv-primary-cos8 ~]#
[root@hyperv-primary-cos8 ~]# mysqlsh -f innodb-cluster-status.js
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "hyperv-primary-cos8:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "hyperv-primary-cos8:3306": {
                "address": "hyperv-primary-cos8:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "hyperv-proxy-cos8:3306": {
                "address": "hyperv-proxy-cos8:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "hyperv-standby-cos8:3306": {
                "address": "hyperv-standby-cos8:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "hyperv-primary-cos8:3306"
}
<SqlResult>
[root@hyperv-primary-cos8 ~]#


Thursday, August 20, 2020

How to generate self-signed SSL certificates

 Generate a root private key (rootCA.key):

openssl genrsa -out rootCA.key 2048

Generate a self-signed root certificate (rootCA.pem):

openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem

Create a private key for the final certificate (dovecot.key):

openssl genrsa -out dovecot.key 2048

Create a certificate sign request (dovecot.csr):

openssl req -new -key dovecot.key -out dovecot.csr

Create a certificate based on the root CA certificate and the root private key (dovecot.crt):

openssl x509 -req -in dovecot.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out dovecot.crt -days 500

Copy the private key and the certificate to the /etc/dovecot/private/ directory:

cp dovecot.crt /etc/dovecot/private/dovecot.crt
cp dovecot.key /etc/dovecot/private/dovecot.key

Set the required permissions on the files:

chmod 400 /etc/dovecot/private/dovecot.crt
chmod 400 /etc/dovecot/private/dovecot.key

Update paths to the key and the certificate in the Dovecot configuration file /etc/dovecot/dovecot.conf:

ssl_cert = </etc/dovecot/private/dovecot.crt
ssl_key = </etc/dovecot/private/dovecot.key

Restart the Dovecot service to apply the changes:

service dovecot restart
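The whole flow can also be exercised non-interactively with -subj (the CN values below are placeholders), which is handy for scripting and for sanity-checking that the issued certificate really chains to the root CA:

```shell
# Run the full signing flow in a throwaway directory and verify the chain.
tmp=$(mktemp -d)
cd "$tmp"

openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 \
    -subj "/CN=Test Root CA" -out rootCA.pem

openssl genrsa -out dovecot.key 2048
openssl req -new -key dovecot.key -subj "/CN=mail.example.com" -out dovecot.csr

openssl x509 -req -in dovecot.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -out dovecot.crt -days 500

# The issued certificate should verify against the root CA.
result=$(openssl verify -CAfile rootCA.pem dovecot.crt)
echo "$result"   # prints: dovecot.crt: OK
```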

Sunday, August 9, 2020

Systemd - systemctl - Services Management


systemctl list-units
This will show you a list of all of the units that systemd currently has active on the system.

systemctl list-units --all
This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.

systemctl list-units --all --state=inactive
systemctl list-units --type=service

systemctl list-unit-files
Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.

systemctl cat atd.service
systemctl list-dependencies sshd.service
systemctl show sshd.service

sudo systemctl edit nginx.service
This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the nginx.service, a directory called nginx.service.d will be created.
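For instance, a drop-in at /etc/systemd/system/nginx.service.d/override.conf (a hypothetical override; Restart= and RestartSec= are standard [Service] directives) could make the service restart automatically on failure:

```ini
[Service]
Restart=on-failure
RestartSec=5s
```

systemctl edit reloads the daemon for you; if you create the file by hand instead, run sudo systemctl daemon-reload afterwards so systemd picks up the change.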

https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units

Saturday, July 25, 2020

AWS EC2 : Dockerize Java App & Deploy Container



https://youtu.be/ToqoJ8YAkRg

Dockerize app and deploy in EC2 instance
Push the docker image to Docker Hub public repository

Steps:
1. Setting up docker engine

sudo yum update -y

sudo amazon-linux-extras install docker

sudo service docker start

sudo usermod -a -G docker ec2-user


2. Dockerize your application

-- Dockerfile
FROM java:8
WORKDIR /
COPY myapp.jar myapp.jar
COPY application.properties application.properties
EXPOSE 5000
# -D options must come before -jar, otherwise they are passed to the app as program arguments
CMD java -Dspring.config.location=application.properties -jar myapp.jar

3. Building docker image of application:

docker build -t myapp .


4. Tagging image

docker tag myapp nirajtechi/myapp


5. Overriding port while launching docker container

docker run -p 8080:5000 nirajtechi/myapp


6. Push image to repository (Docker Hub)
docker push nirajtechi/myapp

7. Pull image from repository (Docker Hub)
docker pull nirajtechi/myapp

Java Microservices : Quarkus vs. Spring Boot


Nowadays, with the Microservice Architecture, it perhaps no longer makes sense, nor brings any advantage, to build something multi-platform (interpreted) when it will always run in the same place and on the same platform (a Docker container, i.e. a Linux environment). Portability is now less relevant (maybe more than ever), and that extra level of abstraction is not important.

Having said that, let's make a simple and raw comparison between two alternatives for generating Microservices in Java: the very well-known Spring Boot and the not-so-well-known (yet) Quarkus.


Thorntail Community Announcement on Quarkus
The team will continue contributing to SmallRye and Eclipse MicroProfile, while also shifting work towards Quarkus in the future . . . 


Thorntail has announced end of life. Everything else on this site is now outdated. We recommend using Quarkus or WildFly.

Thorntail offers an innovative approach to packaging and running Java EE applications by packaging them with just enough of the server runtime to "java -jar" your application. It's MicroProfile compatible, too. And, it's all much, much cooler than that ...



Getting started
In order to help you start using the org.wildfly.plugins:wildfly-jar-maven-plugin Maven plugin, we have defined a set of examples that cover common use-cases.

To retrieve the examples:

git clone -b 2.0.0.Alpha4 http://github.com/wildfly-extras/wildfly-jar-maven-plugin
cd wildfly-jar-maven-plugin/examples

A good example to start with is the jaxrs example. To build and run it:

cd jaxrs
mvn package
java -jar target/jaxrs-wildfly.jar

The plugin documentation (currently an index.html file to download) can be found here. It contains a comprehensive list of the options you can use to fine-tune the Maven build and create a bootable JAR.

Be sure to read the examples/README, which contains the information required to run the examples in an OpenShift context.

Saturday, July 18, 2020

Using Kubernetes to Deploy PostgreSQL



Config Maps :
===============================
File: postgres-configmap.yaml
---------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123


Create Postgres config maps resource
---------------------------
$ kubectl create -f postgres-configmap.yaml 
configmap "postgres-config" created


Persistent Storage Volume
===============================
File: postgres-storage.yaml
---------------------------
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi


Create storage related deployments
---------------------------
$ kubectl create -f postgres-storage.yaml 
persistentvolume "postgres-pv-volume" created
persistentvolumeclaim "postgres-pv-claim" created

PostgreSQL Deployment
===============================
File: postgres-deployment.yaml
--------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim

Create Postgres deployment
--------------------------------
$ kubectl create -f postgres-deployment.yaml 
deployment "postgres" created


PostgreSQL Service
===============================
File: postgres-service.yaml
--------------------------------
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
   - port: 5432
  selector:
   app: postgres

Create Postgres Service
$ kubectl create -f postgres-service.yaml 
service "postgres" created


Connect to PostgreSQL
$ kubectl get svc postgres
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.107.71.253   <none>        5432:31070/TCP   5m

psql -h localhost -U postgresadmin --password -p 31070 postgresdb


Delete PostgreSQL Deployments
# kubectl delete service postgres 
# kubectl delete deployment postgres
# kubectl delete configmap postgres-config
# kubectl delete persistentvolumeclaim postgres-pv-claim
# kubectl delete persistentvolume postgres-pv-volume


Other useful Kubernetes links :
Running Galera Cluster on Kubernetes
https://severalnines.com/database-blog/running-galera-cluster-kubernetes

Example: Deploying WordPress and MySQL with Persistent Volumes
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

MicroK8s, Part 3: How To Deploy a Pod in Kubernetes
https://virtualizationreview.com/articles/2019/02/01/microk8s-part-3-how-to-deploy-a-pod-in-kubernetes.aspx

Using Kubernetes to Deploy PostgreSQL
https://severalnines.com/database-blog/using-kubernetes-deploy-postgresql

Setting up PostgreSQL Database on Kubernetes
https://medium.com/@suyashmohan/setting-up-postgresql-database-on-kubernetes-24a2a192e962

Getting started with Docker and Kubernetes: a beginners guide
https://www.educative.io/blog/docker-kubernetes-beginners-guide
https://github.com/vfarcic/k8s-specs/blob/master/pod/db.yml

Sunday, July 12, 2020

MariaDB in the penguin container of ChromeOS



I've got a Chromebook and installed MariaDB in the penguin container:

apt install mysql-server
apt install mysql-workbench
systemctl status mariadb



https://discuss.linuxcontainers.org/t/using-lxd-on-your-chromebook/3823

https://www.maketecheasier.com/run-ubuntu-container-chrome-os/

Monday, July 6, 2020

How can Docker help a MariaDB cluster for Disaster/Recovery


https://blog.dbi-services.com/how-can-docker-help-a-mariadb-cluster-for-disaster-recovery/

This post is a work in progress; the link above is the original idea.

Mistakes or accidental data deletions can sometimes happen on a production MariaDB Galera Cluster, and this can be disastrous.

There are many cases I have heard from customers; here are some of the most common:
– dropping one column of a table
– dropping one table
– updating a big table without a WHERE clause
What if it was possible to restore a subset of data online, without downtime?



MariaDB preparation

We need some help from Docker.
Using a delayed node (the helper), we can deploy a container with a delayed replication of 15 minutes, but of course you can choose your own lag.
MariaDB > create user rpl_user@'192.168.56.%' IDENTIFIED BY 'manager';
MariaDB > GRANT REPLICATION SLAVE ON *.* TO rpl_user@'192.168.56.%';
MariaDB > show grants for 'rpl_user'@'192.168.56.%';
+--------------------------------------------------------------+
| Grants for rpl_user@192.168.56.%                             |
+--------------------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.56.%'  |
+--------------------------------------------------------------+

$ vi /storage/mariadb-slave-15m/mariadb.conf.d/my.cnf
[mysqld]
server_id=10015
binlog_format=ROW
log_bin=binlog
log_slave_updates=1
relay_log=relay-bin
expire_logs_days=7
read_only=ON

The most important parameter is MASTER_DELAY, as it determines the amount of time in seconds the slave should lag behind the master.
MariaDB > CHANGE MASTER TO
    ->   MASTER_HOST='192.168.56.203',
    ->   MASTER_USER='rpl_user',
    ->   MASTER_PASSWORD='manager',
    ->   MASTER_DELAY=900;
MariaDB > START SLAVE;
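To actually run the helper, a container can be started from the official mariadb image with the prepared configuration directory mounted in. The image tag, container name and root password below are assumptions for illustration; the command is built as a string and printed rather than executed:

```shell
# docker run command for the 15-minute delayed replica (printed, not run).
run_cmd="docker run -d --name mariadb-slave-15m \
-v /storage/mariadb-slave-15m/mariadb.conf.d:/etc/mysql/mariadb.conf.d \
-e MYSQL_ROOT_PASSWORD=manager \
mariadb:10.4"
echo "$run_cmd"
```

Once the container is up, point it at the primary with the CHANGE MASTER TO ... MASTER_DELAY=900 statement shown above.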

How to use table partitioning to scale PostgreSQL


https://www.enterprisedb.com/postgres-tutorials/how-use-table-partitioning-scale-postgresql

Benefits of partitioning 

  • PostgreSQL declarative partitioning is highly flexible and provides good control to users. Users can create any level of partitioning based on need and can modify, use constraints, triggers, and indexes on each partition separately as well as on all partitions together.
  • Query performance can be increased significantly compared to selecting from a single large table.
  • Partition-wise-join and partition-wise-aggregate features increase complex query computation performance as well.
  • Bulk loads and data deletion can be much faster, as based on user requirements these operations can be performed on individual partitions.
  • Each partition can contain data based on its frequency of use and so can be stored on media that may be cheaper or slower for low-use data.

When to use partitioning 

Most benefits of partitioning appear where a single table cannot provide them. So we can say that if a lot of data is going to be written to a single table at some point, users need partitioning. Apart from data volume, there are other factors users should consider, like the update frequency of the data, use of the data over a time period, how small a range the data can be divided into, etc. With good planning and all factors taken into consideration, table partitioning can give a great performance boost and scale your PostgreSQL to larger datasets.

How to use partitioning 

As of the PostgreSQL 12 release, List, Range, Hash and combinations of these partition methods at different levels are supported.
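As a sketch, declarative range partitioning looks like this (table and column names are made up for illustration):

```sql
-- Parent table: rows are routed to partitions by the logdate range.
CREATE TABLE measurements (
    city_id  int  NOT NULL,
    logdate  date NOT NULL,
    peaktemp int
) PARTITION BY RANGE (logdate);

-- One partition per year; each can have its own indexes and storage.
CREATE TABLE measurements_2020 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
CREATE TABLE measurements_2021 PARTITION OF measurements
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');
```

A SELECT filtered on logdate then scans only the matching partitions (partition pruning), and an old year can be dropped instantly with DROP TABLE measurements_2020 instead of a slow bulk DELETE.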

Sunday, July 5, 2020

MySQL other commands

MySQL default port = 3306

MySQL USER

show grants for 'SUPER'@'%';
CREATE USER 'SUPER'@'%' IDENTIFIED BY 'toor';
GRANT SELECT ON *.* TO 'SUPER'@'127.0.0.1';
ALTER USER 'SUPER'@'%' PASSWORD EXPIRE;
flush privileges;

GRANT ALL PRIVILEGES ON *.* TO 'SUPER'@'%' WITH GRANT OPTION;


mysql -u SUPER -P 3306 -h 127.0.0.1 -ptoor    # no space after -p, or 'toor' is parsed as the database name
show grants for 'SUPER'@'%';

select @@port , @@hostname;

show global variables\G
select @@version , @@port , @@hostname , @@datadir , @@socket , @@log_error , @@pid_file , @@max_connections , @@log_bin , @@expire_logs_days;

show global variables like "datadir";
show global variables like "%port%";
SHOW GLOBAL VARIABLES LIKE '%bin%';
SHOW GLOBAL VARIABLES LIKE '%log%';
show slave status\G
show master status\G

select  host,user,authentication_string from mysql.user where user like 'R%';


sfx=$(ps -efa | grep mysql | grep 'suffix')
if [ "x${sfx}" != "x" ]
  then
    instlist=$(ps -ef | grep 'mysqld '| grep -o '@.*' | cut -c2- | awk '{print $1}' | sort );
  else
    instlist=$(ps -efa | grep 'mysqld ' | grep -o 'port=.*' | cut -c6- | sort)
fi
echo $instlist
for i in $instlist; do echo $i ; done

select * from information_schema.processlist where state like '%lock%';


su ansible -c "mysqladmin --login-path=labelPORT status processlist shutdown"
instlist="3366 3367 3368 3369";for INSTANCE in $instlist; do echo ${INSTANCE};mysql --login-path=label${INSTANCE} -e "\s" | grep Up ; done

NO COLOR
echo $PS1
export PS1="[\u@\h \d \@ \w]$ "


mysql --login-path=label${INSTANCE} -e 'SELECT table_schema "database", sum(data_length + index_length)/1024/1024/1024 "size in GB" FROM information_schema.TABLES GROUP BY table_schema;';


   instlist=$( ps -efa | grep 'mysqld ' | grep -o 'pid-file=.*' | awk '{print $1}' | cut -d'.' -f1 | rev | cut -c1-4 | rev | sort );
   for INSTANCE in $instlist; do echo ${INSTANCE};mysql --login-path=label${INSTANCE} -e "\s" | grep "Server version" ; done


5.6=>

instlist56=$(ps -efa | grep 'mysqld ' | grep -o 'port=.*' | cut -c6- | sort )
for INSTANCE in $instlist56; do echo ${INSTANCE};/home/usr/bin/mysql --login-path=label${INSTANCE} -e "\s" | grep "Server version"; done

instlist57=$(ps -ef | grep 'mysqld '| grep -o '@.*' | cut -c2- | awk '{print $1}' | sort );
for INSTANCE in $instlist57; do echo ${INSTANCE};mysql --login-path=label${INSTANCE} -e "\s" | grep "Server version"; done

<=5.7

echo $instlist


sfx=$(ps -efa | grep mysql | grep 'suffix')
if [ "x${sfx}" != "x" ]
  then
    instlist=$(ps -ef | grep 'mysqld '| grep -o '@.*' | cut -c2- | awk '{print $1}' | sort );
  else
    instlist=$(ps -efa | grep 'mysqld ' | grep -o 'port=.*' | cut -c6- | sort)
fi


echo $instlist
for INSTANCE in $instlist;
do
echo ${INSTANCE};
mysql --login-path=label${INSTANCE} -e 'SELECT table_schema "database", sum(data_length + index_length)/1024/1024/1024 "size in GB" FROM information_schema.TABLES GROUP BY table_schema ORDER BY 2 DESC;';
done

mysql_config_editor set --login-path=label3306 --host=127.0.0.1 --user=USER --port=3306 --password
mysql_config_editor print --all

mysql -u db_user -S/home/mysql/socket/my.sock -p
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "%expire_logs_days%"'
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "%log%"'
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "general_log_file"'
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "log_error"'
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "%log_bin%"'
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "%relay_log%"'
/home/usr/bin/mysql --login-path=label3306 -e 'show global variables like "%slow_query_log%"'
CLUSTER RHEL7 : sudo pcs status
sudo su - ansible + root on labelJUmper
mysql -h 127.0.0.1 -u USER -p -P port
mysql -u USER -P 3306 -h 127.0.0.1 -p
systemctl status mysqld@port
mysql --login-path=labelPORT


/opt/rh/mysql55/root/usr/bin/mysql -uroot -p pact -e "SELECT * FROM table WHERE field IN ('3520','3535','3540') INTO OUTFILE 'autorisatie.csv' FIELDS TERMINATED BY ','"


How To Copy a Folder
https://www.cyberciti.biz/faq/copy-folder-linux-command-line/
cp -var /home/source/folder /backup/copy/destination

Where,

-a : Preserve attributes such as file ownership, timestamps
-v : Verbose output.
-r  : Copy directories recursively.

Use Linux rsync Command to copy a folder
rsync -av /path/to/source/ /path/to/destination/

To backup my home directory, which consists of large files and mail folders to /media/backup, enter:
$ rsync -avz /home/vivek /media/backup

copy a folder to remote machine called server1.cyberciti.biz as follows:
$ rsync -avz /home/vivek/ server1.cyberciti.biz:/home/backups/vivek/

Where,

-a : Archive mode i.e. copy a folder with all its permission and other information including recursive copy.
-v : Verbose mode.
-z : With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted; this is useful over a slow connection.
You can show progress during transfer using the --progress or -P option:
$ rsync -av --progress /path/to/source/ /path/to/dest
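One rsync detail worth knowing, and implicitly used in the examples above: a trailing slash on the source copies the directory's contents, while omitting it copies the directory itself into the destination. A minimal sketch with throwaway directories, assuming rsync is installed:

```shell
# Trailing slash on the source: copy the *contents* of $src into $dst.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "data" > "$src/notes.txt"

rsync -av "$src/" "$dst/"

# notes.txt now sits at the top level of $dst; without the trailing
# slash it would end up one level deeper, inside a copy of $src.
ls "$dst"
```

This is why the remote example above writes `/home/vivek/ server1.cyberciti.biz:/home/backups/vivek/` with trailing slashes on both sides.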


Tuesday, February 25, 2020

Important MySQL Commands


To log in (from the unix shell), use -h only if needed.
# [mysql dir]/bin/mysql -h hostname -u username -ppassword
To log in (from Windows):
mysql dir/bin/mysql.exe -h hostname -u username -ppassword

Create a database.
mysql> create database [databasename];

List all databases on the server.
mysql> show databases;

Switch to a database.
mysql> use [db name];

To see all the tables in the db.
mysql> show tables;

To see table's field formats.
mysql> describe [table name];

To delete a db.
mysql> drop database [database name];

To delete a table.
mysql> drop table [table name];

Show all data from a table.
mysql> SELECT * FROM [table name];

To return columns and column information.
mysql> show columns from [table name];

Show particular rows with the given value.
mysql> SELECT * FROM [table name] WHERE [field name] = "value";

Show all records containing the name "Something" AND the phone number '0123456789'.
mysql> SELECT * FROM [table name] WHERE name = "Something" AND phone_number = '0123456789';

Show all records not containing the name "Something" AND the phone number '0123456789' order by the phone_number field.
mysql> SELECT * FROM [table name] WHERE name != "Something" AND phone_number = '0123456789' order by phone_number;

Show all records starting with the letters 'Something' AND the phone number '0123456789'.
mysql> SELECT * FROM [table name] WHERE name like "Something%" AND phone_number = '0123456789';

Show all records starting with letters 'Something' AND the phone number '0123456789' limit to records 1 through 5.
mysql> SELECT * FROM [table name] WHERE name like "Something%" AND phone_number = '0123456789' limit 1,5;

Use a regular expression to find records. Use "REGEXP BINARY" to force case-sensitivity. This finds any record beginning with a.
mysql> SELECT * FROM [table name] WHERE rec RLIKE "^a";

Show unique records.
mysql> SELECT DISTINCT [column name] FROM [table name];

Show selected records sorted in ascending (asc) or descending (desc) order.
mysql> SELECT [col1],[col2] FROM [table name] ORDER BY [col2] DESC;

Return number of rows.
mysql> SELECT COUNT(*) FROM [table name];

Sum a column.
mysql> SELECT SUM([column name]) FROM [table name];

Creating a new user. Log in as root. Switch to the MySQL db. Make the user. Update privs. (Note: on MySQL 5.7 and later the user table has no Password column; use CREATE USER ... IDENTIFIED BY ... instead.)
# mysql -u root -p
mysql> use mysql;
mysql> INSERT INTO user (Host,User,Password) VALUES('%','username',PASSWORD('password'));
mysql> flush privileges;

Change a user's password from the unix shell.
# [mysql dir]/bin/mysqladmin -u username -h hostname -p password 'new-password'

Change a user's password from the MySQL prompt. Log in as root. Set the password. Update privileges.
# mysql -u root -p
mysql> SET PASSWORD FOR 'user'@'hostname' = PASSWORD('password');
mysql> flush privileges;

Recover a MySQL root password. Stop the MySQL server process. Start again with no grant tables. Login to MySQL as root. Set new password. Exit MySQL and restart MySQL server.
# /etc/init.d/mysql stop
# mysqld_safe --skip-grant-tables
# mysql -u root
mysql> use mysql;
mysql> update user set password=PASSWORD("newpassword") where User='root';
mysql> flush privileges;
mysql> quit
# /etc/init.d/mysql stop
# /etc/init.d/mysql start

Set a root password if there is no root password.
# mysqladmin -u root password newpassword

Update a root password.
# mysqladmin -u root -p'oldpassword' password 'newpassword'

Allow the user "Someone" to connect to the server from localhost using the password "passwd". Login as root. Switch to the MySQL db. Give privs. Update privs.
# mysql -u root -p
mysql> use mysql;
mysql> grant usage on *.* to Someone@localhost identified by 'passwd';
mysql> flush privileges;

Give user privileges for a db. Log in as root. Switch to the MySQL db. Grant privs. Update privs.
# mysql -u root -p
mysql> use mysql;
mysql> INSERT INTO db (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv)
 VALUES ('%','databasename','username','Y','Y','Y','Y','Y','N');
mysql> flush privileges;

or

mysql> grant all privileges on databasename.* to username@localhost;
mysql> flush privileges;

To update info already in a table.
mysql> UPDATE [table name] SET Select_priv = 'Y',Insert_priv = 'Y',Update_priv = 'Y' where [field name] = 'user';

Delete a row(s) from a table.
mysql> DELETE from [table name] where [field name] = 'fieldvalue';

Update database permissions/privileges.
mysql> flush privileges;

Delete a column.
mysql> alter table [table name] drop column [column name];

Add a new column to a table.
mysql> alter table [table name] add column [new column name] varchar (20);

Change column name.
mysql> alter table [table name] change [old column name] [new column name] varchar (50);

Make a unique column so you get no dupes.
mysql> alter table [table name] add unique ([column name]);

Make a column bigger.
mysql> alter table [table name] modify [column name] VARCHAR(255);

Remove a unique constraint (drop its index) from a table.
mysql> alter table [table name] drop index [column name];

Load a CSV file into a table.
mysql> LOAD DATA INFILE '/tmp/filename.csv' replace INTO TABLE [table name] FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (field1,field2,field3);

Dump all databases for backup. The backup file contains the SQL commands needed to recreate all DBs.
# mysqldump -u username -ppassword --opt --all-databases > /tmp/alldatabases.sql

Dump one database for backup.
# mysqldump -u username -ppassword --databases databasename > /tmp/databasename.sql

Dump a table from a database.
# mysqldump -u username -ppassword databasename tablename > /tmp/databasename.tablename.sql

Restore database (or database table) from backup.
# mysql -u username -ppassword databasename < /tmp/databasename.sql

Create Table Example 1.
mysql> CREATE TABLE [table name] (name VARCHAR(20));

Create Table Example 2.
mysql> create table [table name] (personid int(50) not null auto_increment primary key, firstname varchar(35), middlename varchar(50), lastname varchar(50) default 'something');

Saturday, February 22, 2020

C# .NET REST client


https://docs.microsoft.com/en-gb/dotnet/core/install/linux-package-manager-ubuntu-1910
https://docs.microsoft.com/en-gb/dotnet/core/install/linux-package-manager-centos7
https://docs.microsoft.com/en-gb/dotnet/core/install/linux-package-manager-rhel81

https://dotnet.microsoft.com/download
https://dotnet.microsoft.com/download/dotnet-core
https://dotnet.microsoft.com/platform/tools
https://code.visualstudio.com/Download


https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro

https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/create

wget -q https://packages.microsoft.com/config/ubuntu/19.10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb


sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install dotnet-sdk-3.1
sudo apt-get install aspnetcore-runtime-3.1
sudo apt-get install dotnet-runtime-3.1

https://docs.microsoft.com/en-gb/dotnet/csharp/tutorials/intro-to-csharp/branches-and-loops-local


REST client

https://docs.microsoft.com/en-gb/dotnet/csharp/tutorials/console-webapiclient

This tutorial teaches you a number of features in .NET Core and the C# language. You'll learn:
  • The basics of the .NET Core CLI
  • An overview of C# language features
  • Managing dependencies with NuGet
  • HTTP communications
  • Processing JSON information
  • Managing configuration with attributes


    cat /etc/lsb-release 
    wget -q https://packages.microsoft.com/config/ubuntu/19.10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
   sudo dpkg -i packages-microsoft-prod.deb
   sudo apt-get update
   sudo apt-get install apt-transport-https
   sudo apt-get update
   sudo apt-get install dotnet-sdk-3.1
   sudo apt-get update
   sudo apt-get install apt-transport-https
   sudo apt-get update
   sudo apt-get install aspnetcore-runtime-3.1
   sudo apt-get install dotnet-runtime-3.1
   dotnet
   dotnet --info
   powershell --version
   dotnet --list-sdks
   dotnet --list-runtimes
   cd /home/learn/dotnet/HelloWorld
   dotnet new console -o myApp
   cd myApp
  
/home/learn/dotnet/HelloWorld/myApp# dotnet run
Hello World!
The current time is 2/22/2020 11:46:12 PM