Sunday, February 28, 2021

MS SQLS port 1433


PowersheLL commands for Microsoft SQL Server :  


Get-Service | Where-Object {$_.Name -like "*SQL*"}
gsv | where {$_.Name -like "*SQL*"}



Get-NetTCPConnection | Where-Object {$_.LocalPort -like "143*"}
Test-NetConnection -ComputerName localhost -Port 1433
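
To map the listener back to its process, the socket's owning PID can be fed to Get-Process (a sketch; assumes a default instance listening on 1433):

Get-Process -Id (Get-NetTCPConnection -LocalPort 1433 -State Listen).OwningProcess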




Monday, February 15, 2021

Install Microsoft SQL Server with PowersheLL DSC

 

Get-Host
Install-Module -Name SqlServerDsc
Get-DscResource -Module SqlServerDsc
New-Item -Path C:\SQL2017 -ItemType Directory
$mountResult = Mount-DiskImage -ImagePath 'C:\vagrant\SQLServer2017-x64-ENU.iso' -PassThru
$volumeInfo = $mountResult | Get-Volume
echo $volumeInfo
$driveInfo = Get-PSDrive -Name $volumeInfo.DriveLetter
echo $driveInfo
Copy-Item -Path ( Join-Path -Path $driveInfo.Root -ChildPath '*' ) -Destination C:\SQL2017\ -Recurse
Dismount-DiskImage -ImagePath 'C:\vagrant\SQLServer2017-x64-ENU.iso'


cd C:\Scripts\MSSQLS\
PS C:\Scripts\MSSQLS> dir
    Directory: C:\Scripts\MSSQLS
Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        2/14/2021   5:20 AM            627 SQLInstallConfiguration.ps1
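
The contents of SQLInstallConfiguration.ps1 are not shown in these notes. Based on the Microsoft DSC article linked below, it is most likely a minimal configuration along these lines (instance name, feature list, and admin accounts are assumptions):

Configuration SQLInstall
{
    Import-DscResource -ModuleName SqlServerDsc

    node localhost
    {
        SqlSetup 'InstallDefaultInstance'
        {
            InstanceName        = 'MSSQLSERVER'       # assumed default instance
            Features            = 'SQLENGINE'
            SourcePath          = 'C:\SQL2017'        # media copied from the ISO above
            SQLSysAdminAccounts = @('Administrators')
        }
    }
}

Dot-sourcing the file and calling SQLInstall (below) compiles the configuration into a MOF file under .\SQLInstall, which Start-DscConfiguration then applies.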


. .\SQLInstallConfiguration.ps1
SQLInstall

Start-DscConfiguration -Path C:\Scripts\MSSQLS\SQLInstall -Wait -Force -Verbose
Test-DscConfiguration
Get-Service -Name *SQL*

Get-Command -ModuleName sqlserver


sqlcmd
SELECT @@SERVERNAME
go

SELECT @@VERSION
go
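
With the sqlserver module loaded, the same checks can be scripted instead of typed into an interactive sqlcmd session (a sketch; assumes Windows authentication against the local default instance):

Invoke-Sqlcmd -Query "SELECT @@SERVERNAME AS ServerName"
Invoke-Sqlcmd -Query "SELECT @@VERSION AS Version"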


Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "StefanScherer/windows_2019"
  config.vm.network "private_network", ip: "192.168.20.19"
end


SQLServer2017-x64-ENU.iso
   - downloaded from : 
https://www.microsoft.com/en-us/sql-server/sql-server-downloads
https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019

https://www.google.com/search?q=download+Microsoft+Command+Line+Utilities+for+SQL+Server
   Microsoft Command Line Utilities 14.0 for SQL Server
   https://www.microsoft.com/en-us/download/details.aspx?id=53591

About sqlcmd Utility
https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-2017

Download the latest version of sqlcmd Utility
https://go.microsoft.com/fwlink/?linkid=2142258

Download ODBC Driver for SQL Server
https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-2017

Install SQL Server with PowerShell Desired State Configuration
https://docs.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server-with-powershell-desired-state-configuration?view=sql-server-ver15



Sunday, January 17, 2021

The rise of the time-series database



Which new startups are emerging?

New companies see an opportunity in adding just the right amount of indexing and post-processing to make time-series queries fast and effective.


InfluxDB began as an open source project and is now available as either a standalone installation or an elastic serverless option in InfluxDB Cloud. The company's Flux query language simplifies tasks like computing the moving average of a data stream. The language is functional and designed to be easily composable, so queries can be built up from other queries.
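
A taste of Flux, composing a moving average over a stream (bucket and measurement names are made up):

from(bucket: "metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> movingAverage(n: 5)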


TimescaleDB is a separate engine fully integrated with PostgreSQL, aimed at tasks that need both traditional relational tables and time-series data. The company's benchmarks claim a 20x speedup for data ingestion, and queries that search the data or identify significant values such as maxima can be thousands of times faster.
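
The PostgreSQL integration is visible at the SQL level: a regular table becomes a time-partitioned hypertable with one call (table and column names are illustrative):

CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device      TEXT,
  temperature DOUBLE PRECISION
);
SELECT create_hypertable('conditions', 'time');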


Prometheus automatically stores all data with a timestamp and provides a set of standard queries for analyzing changes in the data. Its PromQL query language bears some resemblance to GraphQL, the emerging format for data queries. This makes it simple for developers to set up alerts triggered by data anomalies.
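
A sketch of a Prometheus alerting rule (metric name and threshold are hypothetical) that fires when an anomaly persists:

groups:
  - name: example
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status="500"}[5m]) > 0.05
        for: 10m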


Redis created a special module for ingesting rapid data flows into the database. The indexing routines build a set of summary statistics about the data's evolution. To save memory, the module can also downsample or aggregate the elements.
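
With the RedisTimeSeries module, downsampling is expressed as a compaction rule between two keys (key names are illustrative; the bucket here is 60000 ms, i.e. one minute):

TS.CREATE temperature:raw
TS.CREATE temperature:avg:1m
TS.CREATERULE temperature:raw temperature:avg:1m AGGREGATION avg 60000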



The cloud companies are also adding data storage services for this market. AWS, for example, launched its Timestream service, a tool optimized for IoT data.



https://venturebeat.com/2021/01/15/database-trends-the-rise-of-the-time-series-database/


Saturday, December 12, 2020

UPGRADING POSTGRESQL TO A NEW MAJOR VERSION

 


To do the upgrade, we have several options:


pg_dump & pg_restore 

https://www.enterprisedb.com/postgres-tutorials/how-use-pgdump-and-pgrestore-multi-host-enviorment


pg_upgrade : copies data at the binary level


pg_upgrade is not destructive: if things go wrong, you can always delete the new data directory and start from scratch.


To upgrade a database cluster, two steps are needed:

initdb -D /data/db_new

pg_upgrade -d /data/db_old/ -D /data/db_new -b /path/to/pg_old/bin/ -B /path/to/pg_new/bin/  
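
Before the real run, the same invocation with --check performs a dry run that reports incompatibilities between the two clusters without changing anything:

pg_upgrade --check -d /data/db_old/ -D /data/db_new -b /path/to/pg_old/bin/ -B /path/to/pg_new/bin/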



pg_upgrade --link : In-place upgrades

To reduce downtime, we can use the --link option, which hard-links the data files instead of copying them:

pg_upgrade -d /data/db_old/ -D /data/db_new -b /path/to/pg_old/bin/ -B /path/to/pg_new/bin/  --link


https://www.cybertec-postgresql.com/en/upgrading-and-updating-postgresql/

Tuesday, September 22, 2020

MySQL 8 Router - High Availability - with Pacemaker on CentOS 8

 
https://github.com/auntaru/vagrantfiles
https://www.youtube.com/watch?v=EVb_YhO3c7U
https://raw.githubusercontent.com/auntaru/vagrantfiles/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/provision.sh
https://github.com/auntaru/vagrantfiles/blob/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/innodb-cluster-status.js
https://github.com/auntaru/vagrantfiles/blob/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/Vagrantfile



GoaL of this article :
   MySQL Router shouldn't be the single point of failure between the application and a MySQL CLuSTeR.

On all nodes, install Pacemaker and configure some settings.
If the [HighAvailability] repository is not available on your CentOS 8 system, run [# dnf upgrade centos-repos] first.

The first step is to install pacemaker on all the machines we will use for our “MySQL Router Cluster”:

# enable [HighAvailability] repo and install (it's disabled by default)
[root@hyperv-primary-cos8 ~]# dnf --enablerepo=HighAvailability -y install pacemaker pcs

# Now we need to start the pcsd service and enable it at boot (on all machines):
[root@hyperv-primary-cos8 ~]# systemctl enable --now pcsd

Set up authentication; the operation is again executed on all machines that are part of the MySQL Router Cluster:
# set cluster admin password
[root@hyperv-primary-cos8 ~]# echo MyStrongPassw0rd | passwd --stdin hacluster

And on one of the nodes, we can now execute the following command:

On CentOS7 : 

[root@mysql1 ~]# pcs cluster auth mysql1 mysql2 mysql3 -u hacluster -p MyStrongPassw0rd --force
mysql1: Authorized
mysql2: Authorized
mysql3: Authorized

On CentOS8 :
On one node, configure the basic cluster settings.
# authorize among nodes
[root@hyperv-primary-cos8 ~]# pcs host auth mysql01 mysql02 mysql03 -u hacluster -p MyStrongPassw0rd
mysql02: Authorized
mysql01: Authorized
mysql03: Authorized
[root@hyperv-primary-cos8 ~]#


# configure / create pacemaker-corosync cluster :
[root@hyperv-primary-cos8 ~]# pcs cluster setup routercluster mysql01 mysql02 mysql03
No addresses specified for host 'mysql01', using 'mysql01'
No addresses specified for host 'mysql02', using 'mysql02'
No addresses specified for host 'mysql03', using 'mysql03'
Destroying cluster on hosts: 'mysql01', 'mysql02', 'mysql03'...
mysql02: Successfully destroyed cluster
mysql03: Successfully destroyed cluster
mysql01: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'mysql01', 'mysql02', 'mysql03'
mysql01: successful removal of the file 'pcsd settings'
mysql02: successful removal of the file 'pcsd settings'
mysql03: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'mysql01', 'mysql02', 'mysql03'
mysql01: successful distribution of the file 'corosync authkey'
mysql01: successful distribution of the file 'pacemaker authkey'
mysql02: successful distribution of the file 'corosync authkey'
mysql02: successful distribution of the file 'pacemaker authkey'
mysql03: successful distribution of the file 'corosync authkey'
mysql03: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'mysql01', 'mysql02', 'mysql03'
mysql01: successful distribution of the file 'corosync.conf'
mysql02: successful distribution of the file 'corosync.conf'
mysql03: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@hyperv-primary-cos8 ~]#


We can now start the cluster (running the following command on one node):
[root@hyperv-primary-cos8 ~]# pcs cluster start --all
mysql02: Starting Cluster...
mysql01: Starting Cluster...
mysql03: Starting Cluster...
[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]# pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: mysql01 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
   * Last updated: Wed Sep 16 18:34:28 2020
   * Last change:  Wed Sep 16 18:19:10 2020 by hacluster via crmd on mysql01
   * 3 nodes configured
   * 0 resource instances configured
 Node List:
   * Online: [ mysql01 mysql02 mysql03 ]

PCSD Status:
  mysql01: Online
  mysql02: Online
  mysql03: Online
[root@hyperv-primary-cos8 ~]#


Now we can set some properties on our cluster:
[root@hyperv-primary-cos8 ~]# pcs property set stonith-enabled=false
[root@hyperv-primary-cos8 ~]# pcs property set no-quorum-policy=ignore
[root@hyperv-primary-cos8 ~]# pcs resource defaults migration-threshold=1


Adding resources

VIP

The first resource we will use is the VIP (Virtual IP) that the application will use to connect to MySQL Router:

[root@hyperv-primary-cos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:38:01:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fc00::1:122/64 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::77b3:112:420b:3363/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hyperv-primary-cos8 ~]#




[root@hyperv-primary-cos8 ~]# pcs resource create Router_VIP ocf:heartbeat:IPaddr2 \
>     ip=192.168.0.11 cidr_netmask=24 op monitor interval=5s


[root@hyperv-primary-cos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:38:01:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.0.11/24 brd 192.168.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fc00::1:122/64 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::77b3:112:420b:3363/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hyperv-primary-cos8 ~]#


yum install -y mysql-router


pcs resource list | grep -i router
pcs resource list | grep -i mysql


[root@hyperv-primary-cos8 ~]# pcs resource list | grep -i postgres
ocf:heartbeat:pgsql - Manages a PostgreSQL database instance
[root@hyperv-primary-cos8 ~]#


Before adding MySQL Router as a resource, we need to be sure that the router is installed and bootstrapped on each node. As a reminder, this is how to proceed:


MySQL-RouteR : BooTsTraP

password="Cluster^123"
mysql -e "create user 'mysqlrouter' identified by '$password'"

[root@hyperv-proxy-cos8 ~]# echo $password | mysqlrouter --bootstrap mycluster@mysql01 --user=mysqlrouter

mysql --host=192.168.0.22 --port=3306 --user=mycluster -p"Cluster^123"


Now we can add the resource that will manage MySQL Router in our pacemaker cluster. We will use systemd.

You can verify that a systemd service for mysqlrouter is available with the following command:

[root@mysql1 ~]# pcs resource list | grep router
service:mysqlrouter - systemd unit file for mysqlrouter
systemd:mysqlrouter - systemd unit file for mysqlrouter


OK, so let's add the resource and let it run on all nodes (clone), so there is no need to start mysqlrouter elsewhere only when the running one has an issue:

[root@hyperv-primary-cos8 ~]# pcs resource create mysqlrouter systemd:mysqlrouter clone


Let’s verify if the resource is now added:

[root@hyperv-primary-cos8 ~]# crm_mon -1
Cluster Summary:
  * Stack: corosync
  * Current DC: mysql01 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
  * Last updated: Wed Sep 16 19:00:31 2020
  * Last change:  Wed Sep 16 19:00:27 2020 by root via cibadmin on mysql01
  * 3 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ mysql01 mysql02 mysql03 ]

Active Resources:
  * Router_VIP  (ocf::heartbeat:IPaddr2):       Started mysql01
  * Clone Set: mysqlrouter-clone [mysqlrouter]:
    * Started: [ mysql01 mysql02 mysql03 ]

[root@hyperv-primary-cos8 ~]#



Now we need to tell the cluster that the VIP must run where a router instance is also running:


[root@hyperv-primary-cos8 ~]# pcs constraint colocation add Router_VIP \
>                  with mysqlrouter-clone score=INFINITY
[root@hyperv-primary-cos8 ~]#


Now we will kill the mysqlrouter process on mysql01 and see what the cluster does:
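
A minimal sketch of the failover test (assuming the router runs as the mysqlrouter systemd service):

[root@mysql01 ~]# killall -9 mysqlrouter
[root@mysql01 ~]# crm_mon -1

Pacemaker restarts the clone instance, and with migration-threshold=1 a single failure moves resources away from the node; the colocation constraint then keeps Router_VIP on a node that still runs a healthy mysqlrouter instance.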


https://lefred.be/content/mysql-router-ha-with-pacemaker/
https://www.server-world.info/en/note?os=CentOS_8&p=pacemaker&f=1
https://raw.githubusercontent.com/behemius/gcp01/master/node1_install.sh
https://raw.githubusercontent.com/behemius/gcp01/master/router_install.sh


[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]# mysqlsh --sql -e "select * from performance_schema.replication_group_members\G" --user=mycluster -p"Cluster^123"

WARNING: Using a password on the command line interface can be insecure.

*************************** 1. row ***************************

  CHANNEL_NAME: group_replication_applier

     MEMBER_ID: 3d17fb2b-f143-11ea-9c6b-00155d380107

   MEMBER_HOST: hyperv-primary-cos8

   MEMBER_PORT: 3306

  MEMBER_STATE: ONLINE

   MEMBER_ROLE: PRIMARY

MEMBER_VERSION: 8.0.17

*************************** 2. row ***************************

  CHANNEL_NAME: group_replication_applier

     MEMBER_ID: 6a766bf2-f184-11ea-a7ad-00155d380109

   MEMBER_HOST: hyperv-proxy-cos8

   MEMBER_PORT: 3306

  MEMBER_STATE: ONLINE

   MEMBER_ROLE: SECONDARY

MEMBER_VERSION: 8.0.17

*************************** 3. row ***************************

  CHANNEL_NAME: group_replication_applier

     MEMBER_ID: ef479c0a-f183-11ea-b35a-00155d380108

   MEMBER_HOST: hyperv-standby-cos8

   MEMBER_PORT: 3306

  MEMBER_STATE: ONLINE

   MEMBER_ROLE: SECONDARY

MEMBER_VERSION: 8.0.17

[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]# mysqlsh --sql -e "SELECT @@global.read_only, @@global.super_read_only, @@global.group_replication_group_seeds\G" --user=mycluster -p"Cluster^123"

WARNING: Using a password on the command line interface can be insecure.

*************************** 1. row ***************************

                    @@global.read_only: 0

              @@global.super_read_only: 0

@@global.group_replication_group_seeds: 192.168.0.22:33061,192.168.0.24:33061,192.168.0.28:33061,hyperv-proxy-cos8:33061

[root@hyperv-primary-cos8 ~]#




[root@hyperv-primary-cos8 ~]# cat ./innodb-cluster-status.js
try {
   shell.connect('mycluster@hyperv-primary-cos8', "Cluster^123");
   print(dba.getCluster().status());
   print(session.runSql("SHOW DATABASES;"));
} catch(e) {
     print('\nThe InnoDB cluster status could not be retrieved.\n\nError: ' +
       e.message + '\n');
}
[root@hyperv-primary-cos8 ~]#
[root@hyperv-primary-cos8 ~]#
[root@hyperv-primary-cos8 ~]#
[root@hyperv-primary-cos8 ~]# mysqlsh -f innodb-cluster-status.js
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "hyperv-primary-cos8:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "hyperv-primary-cos8:3306": {
                "address": "hyperv-primary-cos8:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "hyperv-proxy-cos8:3306": {
                "address": "hyperv-proxy-cos8:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "hyperv-standby-cos8:3306": {
                "address": "hyperv-standby-cos8:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "hyperv-primary-cos8:3306"
}
<SqlResult>
[root@hyperv-primary-cos8 ~]#


Thursday, August 20, 2020

How to generate self-signed SSL certificates

 Generate a root private key (rootCA.key):

openssl genrsa -out rootCA.key 2048

Generate a self-signed root certificate (rootCA.pem):

openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem

Create a private key for the final certificate (dovecot.key):

openssl genrsa -out dovecot.key 2048

Create a certificate sign request (dovecot.csr):

openssl req -new -key dovecot.key -out dovecot.csr

Create a certificate based on the root CA certificate and the root private key (dovecot.crt):

openssl x509 -req -in dovecot.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out dovecot.crt -days 500
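
Optionally, verify that the new certificate chains back to the root CA:

openssl verify -CAfile rootCA.pem dovecot.crt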

Copy the private key and the certificate to the /etc/dovecot/private/ directory:

cp dovecot.crt /etc/dovecot/private/dovecot.crt
cp dovecot.key /etc/dovecot/private/dovecot.key

Set the required permissions on the files:

chmod 400 /etc/dovecot/private/dovecot.crt
chmod 400 /etc/dovecot/private/dovecot.key

Update paths to the key and the certificate in the Dovecot configuration file /etc/dovecot/dovecot.conf:

ssl_cert = </etc/dovecot/private/dovecot.crt
ssl_key = </etc/dovecot/private/dovecot.key

Restart the Dovecot service to apply the changes:

service dovecot restart

Sunday, August 9, 2020

Systemd - systemctl - Services Management


systemctl list-units
This will show you a list of all of the units that systemd currently has active on the system.

systemctl list-units --all
This will show any unit that systemd loaded or attempted to load, regardless of its current state on the system. Some units become inactive after running, and some units that systemd attempted to load may not have been found on disk.

systemctl list-units --all --state=inactive
systemctl list-units --type=service

systemctl list-unit-files
Units are representations of resources that systemd knows about. Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves. The output has two columns: the unit file and the state.

systemctl cat atd.service
systemctl list-dependencies sshd.service
systemctl show sshd.service

sudo systemctl edit nginx.service
This will be a blank file that can be used to override or add directives to the unit definition. A directory will be created within the /etc/systemd/system directory which contains the name of the unit with .d appended. For instance, for the nginx.service, a directory called nginx.service.d will be created.
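
A typical drop-in first clears ExecStart and then redefines it (the nginx path and flags here are illustrative). systemctl edit reloads systemd when you save; if you create the override file by hand instead, run sudo systemctl daemon-reload afterwards:

[Service]
ExecStart=
ExecStart=/usr/sbin/nginx -c /etc/nginx/custom.conf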

https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units