Sunday, March 21, 2021

PostgreSQL High Availability with Patroni and ETCD


Building an Enterprise-Grade PostgreSQL Setup Using Open Source Tools
https://www.percona.com/blog/2018/10/19/postgresql-building-enterprise-grade-setup-with-open-source/
https://www.veritas.com/content/support/en_US/doc/129277259-129955710-0/v129957708-129955710 

Set Up a Highly Available PostgreSQL 12 Cluster on CentOS, RHEL 8
https://www.techsupportpk.com/2020/02/how-to-create-highly-available-postgresql-cluster-centos-rhel-8.html   

Set Up a Highly Available PostgreSQL 12 Cluster on CentOS, RHEL 7
https://www.techsupportpk.com/2020/02/how-to-create-highly-available-postgresql-cluster-using-patroni-haproxy-centos-rhel-7.html   


The simplest Patroni PostgreSQL HA setup uses one etcd instance and two PostgreSQL instances:
https://www.cybertec-postgresql.com/en/patroni-setting-up-a-highly-available-postgresql-cluster/
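
A minimal sketch of the patroni.yml for such a setup, assuming a single etcd node and two PostgreSQL nodes; all names, addresses, passwords and paths are placeholders (the second node uses the same file with its own name and connect_address values):

scope: pg_ha_cluster
name: node1

restapi:
  listen: 0.0.0.0:8008
  connect_address: 192.168.0.11:8008

etcd:
  host: 192.168.0.30:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.0.11:5432
  data_dir: /var/lib/pgsql/12/data
  authentication:
    superuser:
      username: postgres
      password: postgres_password
    replication:
      username: replicator
      password: replicator_password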


My favourite Patroni introduction :
https://www.cybertec-postgresql.com/en/postgresql-high-availability-and-patroni-an-introduction/


A nice article about etcd clusters for Patroni:
https://www.cybertec-postgresql.com/en/introduction-and-how-to-etcd-clusters-for-patroni/


Other useful links with more information on the subject:
https://www.youtube.com/watch?v=FiWZuEVlSLc
https://digitalis.io/blog/postgresql/part1-postgresql-ha-patroni-etcd-haproxy/

https://www.programmersought.com/article/4066235751/


https://www.alibabacloud.com/blog/how-to-set-up-a-highly-available-postgresql-cluster-using-patroni-on-ubuntu-16-04_594477

https://blog.dbi-services.com/postgresql-high-availabilty-patroni-ectd-haproxy-keepalived/


PostgreSQL Cluster Patroni ETCD
https://www.youtube.com/watch?v=SBRo0l7fc_Q
https://github.com/jamalshahverdiev/vagrant-codes-in-practice/tree/master/vagrant-jenkins-gitlab
   by Jamal Shahverdiev


How to Set Up a Highly Available PostgreSQL Cluster Using Patroni
https://miro.medium.com/max/1400/1*MKVdFmARn20kBlKcnSx_9w.png
https://medium.com/@neslisah.demirci/how-to-set-up-a-high-available-postgresql-cluster-using-patroni-d7044a754d2f

   - Load balancing across primary and replica with HAProxy; example Patroni node configs from the Patroni repository (a sketch of the matching haproxy.cfg follows the links):
https://github.com/zalando/patroni/blob/master/postgres1.yml
https://github.com/zalando/patroni/blob/master/postgres0.yml
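
A sketch of the HAProxy side, modelled on the haproxy.cfg template shipped with Patroni: HAProxy health-checks each node's Patroni REST API on port 8008, which answers 200 only on the current primary, so clients connecting to port 5000 always reach the primary (the addresses are placeholders):

defaults
    mode tcp
    timeout connect 4s
    timeout client 30m
    timeout server 30m

listen primary
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.0.11:5432 maxconn 100 check port 8008
    server node2 192.168.0.12:5432 maxconn 100 check port 8008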

https://dzone.com/articles/managing-high-availability-in-postgresql-part-iii



---

Articles from books : 

---

 - Preventing Split Brain
https://subscription.packtpub.com/book/data/9781838984854/1/ch01lvl1sec09/preventing-split-brain
 - Installing and configuring pgBackRest
https://subscription.packtpub.com/book/data/9781838984854/8/ch08lvl1sec98/installing-and-configuring-pgbackrest

 (PostgreSQL 12 High Availability Cookbook, Third Edition, by Shaun Thomas, February 2020)

---


PITR on a patroni-pgbackrest stack
https://medium.com/@Gowtham_Raj_Elangovan/pitr-on-a-patroni-pgbackrest-stack-9472846d8559

How to configure a Patroni Cluster to use pgbackrest
https://community.pivotal.io/s/article/How-to-Configure-a-Patroni-Cluster-to-use-pgbackrest?language=en_US

Point-in-time-recovery (PITR) with Patroni
https://community.pivotal.io/s/article/Point-in-time-recovery-PITR-with-Patroni

Q: Is it possible to perform PITR with Patroni using pgBackRest?
A: As of pgBackRest 2.23 and Patroni 1.6.4, it is only possible to perform a PITR by bootstrapping the cluster.
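
A rough sketch of what the bootstrap section of patroni.yml might look like for such a PITR bootstrap, using Patroni's custom bootstrap method support; the stanza name and recovery target below are placeholders, see the linked articles for the exact procedure:

bootstrap:
  method: pgbackrest_pitr
  pgbackrest_pitr:
    # restore the data directory from the pgBackRest repository up to the target time
    command: pgbackrest --stanza=demo --delta --type=time "--target=2021-03-01 12:00:00+00" --target-action=promote restore
    no_params: true
    # keep the recovery settings written by pgBackRest instead of letting Patroni overwrite them
    keep_existing_recovery_conf: true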

How to use pgbackrest to bootstrap or add replica to HA Patroni
https://community.pivotal.io/s/article/How-to-Use-pgbackrest-to-bootstrap-or-add-replica-to-HA-Patroni



An ultimate guide to upgrading your PostgreSQL installation
https://www.youtube.com/watch?v=CzhVkdnbQBU
https://youtu.be/CzhVkdnbQBU?t=2528
by Ilya Kosmodemiansky

Shared PostgreSQL cluster: Sigma's technical choices (article in French)
https://www.sigma.fr/2020/05/26/2-cluster-postgresql-mutualise-les-choix-techniques-sigma/


Sunday, February 28, 2021

PostgreSQL Data Model

 

A PostgreSQL data model for customer orders:


CREATE TABLE customer (
    c_since timestamp with time zone NOT NULL,
    c_id integer NOT NULL,
    c_w_id integer NOT NULL,
    c_d_id smallint NOT NULL,
    c_payment_cnt smallint NOT NULL,
    c_delivery_cnt smallint NOT NULL,
    c_first character varying(16) NOT NULL,
    c_middle character(2) NOT NULL,
    c_last character varying(16) NOT NULL,
    c_street_1 character varying(20) NOT NULL,
    c_street_2 character varying(20) NOT NULL,
    c_city character varying(20) NOT NULL,
    c_state character(2) NOT NULL,
    c_zip character(9) NOT NULL,
    c_phone character(16) NOT NULL,
    c_credit character(2) NOT NULL,
    c_credit_lim numeric(12,2) NOT NULL,
    c_discount numeric(4,4) NOT NULL,
    c_balance numeric(12,2) NOT NULL,
    c_ytd_payment numeric(12,2) NOT NULL,
    c_data character varying(500) NOT NULL,

    CONSTRAINT customer_i1 PRIMARY KEY (c_w_id, c_d_id, c_id)
);
CREATE UNIQUE INDEX customer_i2 ON customer USING btree (c_w_id, c_d_id, c_last, c_first, c_id);

CREATE TABLE district (
    d_w_id integer NOT NULL,
    d_next_o_id integer NOT NULL,
    d_id smallint NOT NULL,
    d_ytd numeric(12,2) NOT NULL,
    d_tax numeric(4,4) NOT NULL,
    d_name character varying(10) NOT NULL,
    d_street_1 character varying(20) NOT NULL,
    d_street_2 character varying(20) NOT NULL,
    d_city character varying(20) NOT NULL,
    d_state character(2) NOT NULL,
    d_zip character(9) NOT NULL,

    CONSTRAINT district_i1 PRIMARY KEY (d_w_id, d_id)
);

CREATE TABLE history (
    h_date timestamp with time zone NOT NULL,
    h_c_id integer,
    h_c_w_id integer NOT NULL,
    h_w_id integer NOT NULL,
    h_c_d_id smallint NOT NULL,
    h_d_id smallint NOT NULL,
    h_amount numeric(6,2) NOT NULL,
    h_data character varying(24) NOT NULL
);

CREATE TABLE item (
    i_id integer NOT NULL,
    i_im_id integer NOT NULL,
    i_name character varying(24) NOT NULL,
    i_price numeric(5,2) NOT NULL,
    i_data character varying(50) NOT NULL,

    CONSTRAINT item_i1 PRIMARY KEY (i_id)
);

CREATE TABLE new_order (
    no_w_id integer NOT NULL,
    no_o_id integer NOT NULL,
    no_d_id smallint NOT NULL,

    CONSTRAINT new_order_i1 PRIMARY KEY (no_w_id, no_d_id, no_o_id)
);

CREATE TABLE order_line (
    ol_delivery_d timestamp with time zone,
    ol_o_id integer NOT NULL,
    ol_w_id integer NOT NULL,
    ol_i_id integer NOT NULL,
    ol_supply_w_id integer NOT NULL,
    ol_d_id smallint NOT NULL,
    ol_number smallint NOT NULL,
    ol_quantity smallint NOT NULL,
    ol_amount numeric(6,2),
    ol_dist_info character(24),

    CONSTRAINT order_line_i1 PRIMARY KEY (ol_w_id, ol_d_id, ol_o_id, ol_number)
);

CREATE TABLE orders (
    o_entry_d timestamp with time zone NOT NULL,
    o_id integer NOT NULL,
    o_w_id integer NOT NULL,
    o_c_id integer NOT NULL,
    o_d_id smallint NOT NULL,
    o_carrier_id smallint,
    o_ol_cnt smallint NOT NULL,
    o_all_local smallint NOT NULL,

    CONSTRAINT orders_i1 PRIMARY KEY (o_w_id, o_d_id, o_id)
);
CREATE UNIQUE INDEX orders_i2 ON orders USING btree (o_w_id, o_d_id, o_c_id, o_id);

CREATE TABLE stock (
    s_i_id integer NOT NULL,
    s_w_id integer NOT NULL,
    s_ytd integer NOT NULL,
    s_quantity smallint NOT NULL,
    s_order_cnt smallint NOT NULL,
    s_remote_cnt smallint NOT NULL,
    s_dist_01 character(24) NOT NULL,
    s_dist_02 character(24) NOT NULL,
    s_dist_03 character(24) NOT NULL,
    s_dist_04 character(24) NOT NULL,
    s_dist_05 character(24) NOT NULL,
    s_dist_06 character(24) NOT NULL,
    s_dist_07 character(24) NOT NULL,
    s_dist_08 character(24) NOT NULL,
    s_dist_09 character(24) NOT NULL,
    s_dist_10 character(24) NOT NULL,
    s_data character varying(50) NOT NULL,

    CONSTRAINT stock_i1 PRIMARY KEY (s_w_id, s_i_id)
);

CREATE TABLE warehouse (
    w_id integer NOT NULL,
    w_name character varying(10) NOT NULL,
    w_street_1 character varying(20) NOT NULL,
    w_street_2 character varying(20) NOT NULL,
    w_city character varying(20) NOT NULL,
    w_state character(2) NOT NULL,
    w_zip character(9) NOT NULL,
    w_tax numeric(4,4) NOT NULL,
    w_ytd numeric(12,2) NOT NULL,

    CONSTRAINT warehouse_i1 PRIMARY KEY (w_id)
);
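
As a quick example of how this model is queried, the following lists one customer's orders with their total line amounts, joining on the composite keys defined above (the warehouse, district and customer ids are arbitrary sample values):

-- Orders of customer 42 in warehouse 1 / district 1, newest first
SELECT o.o_id,
       o.o_entry_d,
       sum(ol.ol_amount) AS order_total
FROM   orders o
JOIN   order_line ol
       ON  ol.ol_w_id = o.o_w_id
       AND ol.ol_d_id = o.o_d_id
       AND ol.ol_o_id = o.o_id
WHERE  o.o_w_id = 1
  AND  o.o_d_id = 1
  AND  o.o_c_id = 42
GROUP  BY o.o_id, o.o_entry_d
ORDER  BY o.o_entry_d DESC;

The orders_i2 index on (o_w_id, o_d_id, o_c_id, o_id) is what keeps this kind of per-customer lookup cheap.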

https://www.enterprisedb.com/blog/postgresql-tproc-c-benchmarks-postgresql-12-vs-postgresql-13-performance


Microsoft SQL Server default TCP port: 1433


PowerShell commands for Microsoft SQL Server:


# List services whose name contains "SQL"
Get-Service | Where-Object {$_.Name -like "*SQL*"}
# The same, using the gsv / where aliases
gsv | where {$_.Name -like "*SQL*"}



# Show TCP connections on ports starting with 143 (e.g. 1433)
Get-NetTCPConnection | Where-Object {$_.LocalPort -like "143*"}
# Check that the default SQL Server port is reachable
Test-NetConnection -ComputerName localhost -Port 1433




Monday, February 15, 2021

Install Microsoft SQL Server with PowerShell DSC

 

# Check the PowerShell host, install the SqlServerDsc module and list its DSC resources
Get-Host
Install-Module -Name SqlServerDsc
Get-DscResource -Module SqlServerDsc

# Create a folder for the installation media
New-Item -Path C:\SQL2017 -ItemType Directory

# Mount the SQL Server ISO and copy its contents to C:\SQL2017
$mountResult = Mount-DiskImage -ImagePath 'C:\vagrant\SQLServer2017-x64-ENU.iso' -PassThru
$volumeInfo = $mountResult | Get-Volume
echo $volumeInfo
$driveInfo = Get-PSDrive -Name $volumeInfo.DriveLetter
echo $driveInfo
Copy-Item -Path ( Join-Path -Path $driveInfo.Root -ChildPath '*' ) -Destination C:\SQL2017\ -Recurse

# Detach the ISO once the media has been copied
Dismount-DiskImage -ImagePath 'C:\vagrant\SQLServer2017-x64-ENU.iso'


cd C:\vagrant\scripts\
PS C:\vagrant\scripts\> dir
    Directory: C:\Scripts\MSSQLS
Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        2/14/2021   5:20 AM            627 SQLInstallConfiguration.ps1
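
The contents of SQLInstallConfiguration.ps1 are not reproduced here; a minimal sketch of such a configuration, modelled on the SqlServerDsc SqlSetup example from the Microsoft documentation linked below (instance name, features and admin accounts are assumptions), might look like:

Configuration SQLInstall
{
    # Requires the SqlServerDsc module installed above
    Import-DscResource -ModuleName SqlServerDsc

    Node 'localhost'
    {
        SqlSetup 'InstallDefaultInstance'
        {
            InstanceName        = 'MSSQLSERVER'
            Features            = 'SQLENGINE'
            SourcePath          = 'C:\SQL2017'
            SQLSysAdminAccounts = @('Administrators')
        }
    }
}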


# Dot-source the configuration script and run the configuration to generate the MOF
. .\SQLInstallConfiguration.ps1
SQLInstall

# Apply the configuration, then verify it and check the SQL services
Start-DscConfiguration -Path C:\Scripts\MSSQLS\SQLInstall -Wait -Force -Verbose
Test-DscConfiguration
Get-Service -Name *SQL*

Get-Command -ModuleName sqlserver


sqlcmd
SELECT @@SERVERNAME
go

SELECT @@VERSION
go


Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "StefanScherer/windows_2019"
  config.vm.network "private_network", ip: "192.168.20.19"
end


SQLServer2017-x64-ENU.iso
   - downloaded from : 
https://www.microsoft.com/en-us/sql-server/sql-server-downloads
https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019

https://www.google.com/search?q=download+Microsoft+Command+Line+Utilities+for+SQL+Server
   Microsoft Command Line Utilities 14.0 for SQL Server
   https://www.microsoft.com/en-us/download/details.aspx?id=53591

About sqlcmd Utility
https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-2017

Download the latest version of sqlcmd Utility
https://go.microsoft.com/fwlink/?linkid=2142258

Download ODBC Driver for SQL Server
https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-2017

Install SQL Server with PowerShell Desired State Configuration
https://docs.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server-with-powershell-desired-state-configuration?view=sql-server-ver15



Sunday, January 17, 2021

The rise of the time-series database



Which new startups are emerging?

New companies see an opportunity in focusing on adding the right amount of indexing and post-processing to make queries fast and effective.


InfluxDB began as an open source project and is now available as either a standalone installation or an elastic serverless option from the InfluxDB Cloud. The company’s Flux query language simplifies tasks like computing the moving averages of the data stream. The language is functional and designed to be easily composable so queries can be built up from other queries.


TimescaleDB is a separate engine that is fully integrated with PostgreSQL for tasks that might need traditional relational tables and time-series data. The company’s benchmarks boast of speeding up ingesting data by a factor of 20. The queries for searching the data or identifying significant values like maxima can be thousands of times faster.
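
A small illustration of that PostgreSQL integration, using two TimescaleDB functions on top of plain SQL (the table, columns and bucket size are invented for the example):

-- Turn an ordinary table into a hypertable partitioned by time
CREATE TABLE conditions (
    time        timestamptz      NOT NULL,
    device_id   integer          NOT NULL,
    temperature double precision NOT NULL
);
SELECT create_hypertable('conditions', 'time');

-- Roll the raw readings up into 5-minute buckets
SELECT time_bucket('5 minutes', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM   conditions
GROUP  BY bucket, device_id
ORDER  BY bucket;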


Prometheus stores all data with a timestamp automatically and provides a set of standard queries for analyzing changes in the data. Its PromQL bears some resemblance to the emerging data format for queries, GraphQL. This makes it simple for developers to set up alerts that could be triggered by data anomalies.


Redis created a special module for ingesting the rapid data flows into the database. The indexing routines build a set of average statistics about the data’s evolution. To save memory, it can also downsample or aggregate the elements.



The cloud companies are also adding data storage services for this market. AWS, for example, launched its Timestream service, a tool optimized for IoT data.



https://venturebeat.com/2021/01/15/database-trends-the-rise-of-the-time-series-database/