
MySQL InnoDB Cluster & Group Replication in a Nutshell: Hands-On Tutorial

Percona Live 2017 - Santa Clara

Frédéric Descamps - MySQL Community Manager - Oracle
Kenny Gryp - MySQL Practice Manager - Percona

3 / 284

 

Safe Harbor Statement

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle´s products remains at the sole discretion of Oracle.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

4 / 284

Who are we ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

5 / 284

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

6 / 284

Frédéric Descamps
@lefred
MySQL Evangelist
Managing MySQL since 3.23
devops believer
http://about.me/lefred

 

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

7 / 284

Kenny Gryp
@gryp
MySQL Practice Manager

 

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

8 / 284

Matt Lord
@mattalord
Senior MySQL Product Manager

 

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

9 / 284

get more at the conference

MySQL Group Replication

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

10 / 284

Other sessions

Everything You Need to Know About MySQL Group Replication
Luís Soares & Nuno Carvalho
Thursday April 27th, 1:50PM - 2:40PM - Ballroom F

MySQL Group Replication, Percona XtraDB Cluster, Galera Cluster
Ramesh Sivaraman & Kenny Gryp
Thursday April 27th, 3:00PM - 3:50PM - Ballroom E

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

11 / 284

Agenda
Prepare your workstation

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

12 / 284

Agenda
Prepare your workstation
MySQL InnoDB Cluster & Group Replication concepts

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

13 / 284

Agenda
Prepare your workstation
MySQL InnoDB Cluster & Group Replication concepts
Migration from Master-Slave to GR

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

14 / 284

Agenda
Prepare your workstation
MySQL InnoDB Cluster & Group Replication concepts
Migration from Master-Slave to GR
How to monitor ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

15 / 284

Agenda
Prepare your workstation
MySQL InnoDB Cluster & Group Replication concepts
Migration from Master-Slave to GR
How to monitor ?
Application interaction

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

16 / 284

VirtualBox

Setup your workstation

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

17 / 284

Setup your workstation

Install VirtualBox 5
On the USB key, copy PL17_GR.ova on your laptop and double click on it
Ensure you have the vboxnet2 network interface (VirtualBox Preferences -> Network -> Host-Only Networks -> +)
Start all virtual machines (mysql1, mysql2, mysql3 & mysql4)
Install putty if you are using Windows

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

18 / 284

Setup your workstation

Install VirtualBox 5
On the USB key, copy PL17_GR.ova on your laptop and double click on it
Ensure you have the vboxnet2 network interface (VirtualBox Preferences -> Network -> Host-Only Networks -> +)
Start all virtual machines (mysql1, mysql2, mysql3 & mysql4)
Install putty if you are using Windows
Try to connect to all VMs from your terminal or putty (root password is X):

ssh -p 8821 root@127.0.0.1 to mysql1
ssh -p 8822 root@127.0.0.1 to mysql2
ssh -p 8823 root@127.0.0.1 to mysql3
ssh -p 8824 root@127.0.0.1 to mysql4

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

19 / 284

LAB1: Current situation

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

20 / 284

Launch run_app.sh on mysql1 in a screen session.
Verify that mysql2 is a running slave.
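For example (a sketch; it assumes run_app.sh is in the PATH on mysql1, adapt the path if needed):

[mysql1 ~]# screen -dmS app run_app.sh     # start the workload detached in a screen session named "app"
[mysql1 ~]# screen -r app                  # re-attach later to watch it
mysql2> SHOW SLAVE STATUS\G                -- Slave_IO_Running and Slave_SQL_Running should both be Yes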

LAB1: Current situation

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

21 / 284

Summary 

+--------+--------+----------+---------------+
|        | ROLE   | SSH PORT | INTERNAL IP   |
+--------+--------+----------+---------------+
| mysql1 | master | 8821     | 192.168.56.11 |
| mysql2 | slave  | 8822     | 192.168.56.12 |
| mysql3 | n/a    | 8823     | 192.168.56.13 |
| mysql4 | n/a    | 8824     | 192.168.56.14 |
+--------+--------+----------+---------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

22 / 284

Easy High Availability

MySQL InnoDB Cluster

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

23 / 284

InnoDB

cluster

Ease-of-Use

Extreme Scale-Out

Out-of-Box Solution

Built-in HA

High Performance

Everything Integrated

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

24 / 284

Our vision in 4 steps

1. MySQL Document Store: Relational & Document Models
2. MySQL HA: Out-Of-Box HA
3. Read Scale-Out: Async Replication + Auto Failover
4. Write Scale-Out: Sharding

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

25 / 284

Step 2´s Architecture

[Diagram: applications use a MySQL Connector and connect through MySQL Router to an InnoDB cluster of three MySQL servers (one primary, two secondaries); the cluster is set up and managed with MySQL Shell]

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

26 / 284

Step 3´s Architecture

[Diagram: as in Step 2, applications with MySQL Connector and MySQL Router in front of an InnoDB cluster managed with MySQL Shell, extended with asynchronous read replicas S1, S2, S3, S4, S... for read scale-out]

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

27 / 284

Step 4´s Architecture

[Diagram: several applications, each with a MySQL Connector and MySQL Router, in front of three replicasets (replicaset 1, replicaset 2, replicaset 3) for write scale-out / sharding; each replicaset is an InnoDB cluster with its own read replicas S1, S2, S3, S4, S..., all managed with MySQL Shell]

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

28 / 284

the magic explained

Group Replication Concept

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

29 / 284

Group Replication: heart of MySQL InnoDB Cluster

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

30 / 284

Group Replication: heart of MySQL InnoDB Cluster

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

31 / 284

MySQL Group Replication

but what is it ?!?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

32 / 284

MySQL Group Replication

but what is it ?!?

GR is a plugin for MySQL, made by MySQL and packaged with MySQL

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

33 / 284

MySQL Group Replication

but what is it ?!?

GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

34 / 284

MySQL Group Replication

but what is it ?!?

GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory
GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

35 / 284

MySQL Group Replication

but what is it ?!?

GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory
GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency
GR implements conflict detection and resolution

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

36 / 284

MySQL Group Replication

but what is it ?!?

GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory
GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency
GR implements conflict detection and resolution
GR allows automatic distributed recovery

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

37 / 284

MySQL Group Replication

but what is it ?!?

GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory
GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency
GR implements conflict detection and resolution
GR allows automatic distributed recovery
Supported on all MySQL platforms !!

Linux, Windows, Solaris, OSX, FreeBSD
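As a reminder of what "plugin" means here (the labs below let MySQL Shell handle all of this for you), loading it manually on Linux is the usual INSTALL PLUGIN one-liner; on other platforms the library file name differs:

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
mysql> SHOW PLUGINS;    -- group_replication should now be listed as ACTIVE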

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

38 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

39 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol
Replicated Database State Machine

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

40 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol
Replicated Database State Machine
Paxos based protocol

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

41 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol
Replicated Database State Machine
Paxos based protocol
its task: deliver messages across the distributed system:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

42 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol
Replicated Database State Machine
Paxos based protocol
its task: deliver messages across the distributed system:

atomically

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

43 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol
Replicated Database State Machine
Paxos based protocol
its task: deliver messages across the distributed system:

atomically
in Total Order

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

44 / 284

MySQL Group Communication System (GCS)
MySQL Xcom protocol
Replicated Database State Machine
Paxos based protocol
its task: deliver messages across the distributed system:

atomically
in Total Order

MySQL Group Replication receives the Ordered 'tickets' from this GCS subsystem.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

45 / 284

And for users ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

46 / 284

And for users ?

no longer necessary to handle server fail-over manually or with a complicated script

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

47 / 284

And for users ?

no longer necessary to handle server fail-over manually or with a complicated script
GR provides fault tolerance

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

48 / 284

And for users ?

no longer necessary to handle server fail-over manually or with a complicated script
GR provides fault tolerance
GR enables update-everywhere setups

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

49 / 284

And for users ?

no longer necessary to handle server fail-over manually or with a complicated script
GR provides fault tolerance
GR enables update-everywhere setups
GR handles crashes, failures, re-connects automatically

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

50 / 284

And for users ?

no longer necessary to handle server fail-over manually or with a complicated script
GR provides fault tolerance
GR enables update-everywhere setups
GR handles crashes, failures, re-connects automatically
Allows an easy setup of a highly available MySQL service!

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

51 / 284

OK, but how does it work ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

52 / 284

OK, but how does it work ?

it´s just magic !

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

53 / 284

OK, but how does it work ?

it´s just magic !

... no, in fact the writeset delivery is synchronous, and then certification and apply of the changes are local to each node and happen asynchronously.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

54 / 284

OK, but how does it work ?

it´s just magic !

... no, in fact the writeset delivery is synchronous, and then certification and apply of the changes are local to each node and happen asynchronously.

not that easy to understand... right ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

55 / 284

OK, but how does it work ?

it´s just magic !

... no, in fact the writeset delivery is synchronous, and then certification and apply of the changes are local to each node and happen asynchronously.

not that easy to understand... right ?

As a picture is worth a 1000 words, let´s illustrate this...

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

56 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

57 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

58 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

59 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

60 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

61 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

62 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

63 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

64 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

65 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

66 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

67 / 284

MySQL Group Replication (autocommit)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

68 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

69 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

70 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

71 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

72 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

73 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

74 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

75 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

76 / 284

MySQL Group Replication (full transaction)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

77 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

78 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

79 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

80 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

81 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

82 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

83 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

84 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

85 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

86 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

87 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

88 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

89 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

90 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

91 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

92 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

93 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

94 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

95 / 284

Group Replication : Total Order Delivery - GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

96 / 284

Group Replication: return from commit (1)

Asynchronous Replication:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

97 / 284

Group Replication: return from commit (2)

Semi-Sync Replication:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

98 / 284

Group Replication: return from commit (3)

Group Replication:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

99 / 284

Group Replication : Optimistic Locking

Group Replication uses optimistic locking:

during a transaction, local (InnoDB) locking happens
it optimistically assumes there will be no conflicts across nodes (no communication between nodes necessary)
cluster-wide conflict resolution happens only at COMMIT, during certification

Let´s first have a look at traditional locking to compare.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

100 / 284

Traditional locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

101 / 284

Traditional locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

102 / 284

Traditional locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

103 / 284

Traditional locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

104 / 284

Traditional locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

105 / 284

Traditional locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

106 / 284

Optimistic Locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

107 / 284

Optimistic Locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

108 / 284

Optimistic Locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

109 / 284

Optimistic Locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

110 / 284

Optimistic Locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

111 / 284

Optimistic Locking

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

112 / 284

Optimistic Locking

The system returns error 149 as certification failed:

ERROR 1180 (HY000): Got error 149 during COMMIT
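A minimal way to provoke this kind of certification failure (only a sketch: it assumes a hypothetical table test.t1 with primary key id, and that the two commits overlap before either change has been applied on the other member):

-- on member A
mysql> BEGIN; UPDATE test.t1 SET val = val + 1  WHERE id = 1; COMMIT;   -- first committer wins
-- on member B, at (almost) the same time
mysql> BEGIN; UPDATE test.t1 SET val = val + 10 WHERE id = 1; COMMIT;
ERROR 1180 (HY000): Got error 149 during COMMIT                         -- certified second, rolled back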

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

113 / 284

Certification

Certification is the process that only needs to answer the following unique question:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

114 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

115 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

116 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

117 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

118 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation
certification individually happens on every member/node

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

119 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation
certification individually happens on every member/node
communication with other members is not needed for certification

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

120 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation
certification individually happens on every member/node
communication with other members is not needed for certification

pass: enter in the apply queue

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

121 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation
certification individually happens on every member/node
communication with other members is not needed for certification

pass: enter in the apply queue
fail: drop the transaction

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

122 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation
certification individually happens on every member/node
communication with other members is not needed for certification

pass: enter in the apply queue
fail: drop the transaction

serialized by the total order in GCS/Xcom + GTID

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

123 / 284

Certification

Certification is the process that only needs to answer the following unique question:

can the write (transaction) be applied ?
based on unapplied earlier transactions
such conflicts must come from other members/nodes

certification is a deterministic operation
certification individually happens on every member/node
communication with other members is not needed for certification

pass: enter in the apply queue
fail: drop the transaction

serialized by the total order in GCS/Xcom + GTID
first committer wins rule

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

124 / 284

Drawbacks of optimistic locking

having a first-committer-wins system means conflicts will more likely happen with:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

125 / 284

Drawbacks of optimistic locking

having a first-committer-wins system means conflicts will more likely happen with:

large transactions

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

126 / 284

Drawbacks of optimistic locking

having a first-committer-wins system means conflicts will more likely happen with:

large transactions
long running transactions

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

127 / 284

GTID

GTIDs are the same as those used by asynchronous replication.

mysql> SELECT * FROM performance_schema.replication_connection_status\G ************************** 1. row *************************** CHANNEL_NAME: group_replication_applier GROUP_NAME: afb80f36-2bff-11e6-84e0-0800277dd3bf SOURCE_UUID: afb80f36-2bff-11e6-84e0-0800277dd3bf THREAD_ID: NULL SERVICE_STATE: ON COUNT_RECEIVED_HEARTBEATS: 0 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00 RECEIVED_TRANSACTION_SET: afb80f36-2bff-11e6-84e0-0800277dd3bf:1-57, f037578b-46b1-11e6-8005-08002774c31b:1-48937 LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

128 / 284

GTID

but transactions use the Group Name / UUID in the GTIDs

mysql> show master status\G ************************** 1. row *************************** File: mysql4-bin.000001 Position: 1501 Binlog_Do_DB: Binlog_Ignore_DB: Executed_Gtid_Set: afb80f36-2bff-11e6-84e0-0800277dd3bf:1-57, f037578b-46b1-11e6-8005-08002774c31b:1-48937

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

129 / 284

Requirements

exclusively works with InnoDB tables

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

130 / 284

Requirements

exclusively works with InnoDB tables
every table must have a PK defined

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

131 / 284

Requirements

exclusively works with InnoDB tables
every table must have a PK defined
only IPV4 is supported

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

132 / 284

Requirements

exclusively works with InnoDB tables
every table must have a PK defined
only IPV4 is supported
a good network with low latency is important

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

133 / 284

Requirements

exclusively works with InnoDB tables
every table must have a PK defined
only IPV4 is supported
a good network with low latency is important
maximum of 9 members per group

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

134 / 284

Requirements

exclusively works with InnoDB tables
every table must have a PK defined
only IPV4 is supported
a good network with low latency is important
maximum of 9 members per group
log-bin must be enabled and only binlog_format=ROW is supported

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

135 / 284

Requirements (2)

enable GTIDs

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

136 / 284

Requirements (2)

enable GTIDs
replication meta-data must be stored in system tables

--master-info-repository=TABLE --relay-log-info-repository=TABLE

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

137 / 284

Requirements (2)

enable GTIDs
replication meta-data must be stored in system tables

--master-info-repository=TABLE --relay-log-info-repository=TABLE

writesets extraction must be enabled

--transaction-write-set-extraction=XXHASH64

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

138 / 284

Requirements (2)

enable GTIDs
replication meta-data must be stored in system tables

--master-info-repository=TABLE --relay-log-info-repository=TABLE

writesets extraction must be enabled

--transaction-write-set-extraction=XXHASH64

log-slave-updates must also be enabled
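Put together, a minimal my.cnf fragment satisfying these requirements could look like this (a sketch; server_id and the binlog base name differ per node, and binlog_checksum=NONE anticipates the limitation on the next slides):

[mysqld]
server_id                        = 3
log_bin                          = mysql-bin
binlog_format                    = ROW
log_slave_updates                = ON
gtid_mode                        = ON
enforce_gtid_consistency         = ON
master_info_repository           = TABLE
relay_log_info_repository        = TABLE
transaction_write_set_extraction = XXHASH64
binlog_checksum                  = NONE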

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

139 / 284

Limitations

binlog checksum is not supported

--binlog-checksum=NONE

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

140 / 284

Limitations

binlog checksum is not supported

--binlog-checksum=NONE

savepoints were not supported before 5.7.19 & 8.0.1

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

141 / 284

Limitations

binlog checksum is not supported

--binlog-checksum=NONE

savepoints were not supported before 5.7.19 & 8.0.1
SERIALIZABLE is not supported as transaction isolation level

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

142 / 284

Limitations

binlog checksum is not supported

--binlog-checksum=NONE

savepoints were not supported before 5.7.19 & 8.0.1
SERIALIZABLE is not supported as transaction isolation level

http://lefred.be/content/mysql-group-replication-limitations-savepoints/
http://lefred.be/content/mysql-group-replication-and-table-design/

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

143 / 284

Is my workload ready for Group Replication ?

As the writesets (transactions) are replicated to all available nodes on commit, and as they are certified on every node, a very large writeset could increase the amount of certification errors.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

144 / 284

Is my workload ready for Group Replication ?

As the writesets (transactions) are replicated to all available nodes on commit, and as they are certified on every node, a very large writeset could increase the amount of certification errors.

Additionally, changing the same record on all the nodes (hotspot) concurrently will also cause problems.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

145 / 284

Is my workload ready for Group Replication ?

As the writesets (transactions) are replicated to all available nodes on commit, and as they are certified on every node, a very large writeset could increase the amount of certification errors.

Additionally, changing the same record on all the nodes (hotspot) concurrently will also cause problems.

And finally, the certification uses the primary key of the tables; a table without a PK is also a problem.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

146 / 284

Is my workload ready for Group Replication ?

Therefore, when using Group Replication, we should pay attention to these points:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

147 / 284

Is my workload ready for Group Replication ?

Therefore, when using Group Replication, we should pay attention to these points:

PK is mandatory (and a good one is better)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

148 / 284

Is my workload ready for Group Replication ?

Therefore, when using Group Replication, we should pay attention to these points:

PK is mandatory (and a good one is better)

avoid large transactions

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

149 / 284

Is my workload ready for Group Replication ?

Therefore, when using Group Replication, we should pay attention to these points:

PK is mandatory (and a good one is better)

avoid large transactions

avoid hotspot
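For the first point, a quick way to spot offending tables is a query against information_schema (a sketch):

mysql> SELECT t.table_schema, t.table_name
       FROM information_schema.tables t
       LEFT JOIN information_schema.table_constraints c
              ON c.table_schema = t.table_schema
             AND c.table_name = t.table_name
             AND c.constraint_type = 'PRIMARY KEY'
       WHERE t.table_type = 'BASE TABLE'
         AND t.table_schema NOT IN ('mysql','sys','information_schema','performance_schema')
         AND c.constraint_type IS NULL;     -- lists tables without a primary key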

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

150 / 284

ready ?

Migration from Master-Slave to GR

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

151 / 284

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

152 / 284

1) We install and set up MySQL InnoDB Cluster on one of the new servers

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

153 / 284

2) We restore a backup

3) We set up asynchronous replication on the new server.

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

154 / 284

4) We add a new instance to our group

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

155 / 284

5) We point the application to one of our new nodes.

6) We wait and check that asynchronous replication is caught up

7) We stop those asynchronous slaves

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

156 / 284

8) We attach the mysql2 slave to the group

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

157 / 284

9) Use MySQL Router for directing traffic

The plan

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

158 / 284

Latest MySQL 5.7 is already installed on mysql3.

Let´s take a backup on mysql1:

[mysql1 ~]# xtrabackup --backup \
              --target-dir=/tmp/backup \
              --user=root \
              --password=X --host=127.0.0.1

[mysql1 ~]# xtrabackup --prepare \
              --target-dir=/tmp/backup

LAB2: Prepare mysql3
Asynchronous slave

159 / 284

LAB2: Prepare mysql3 (2)
Asynchronous slave

Copy the backup from mysql1 to mysql3:

[mysql1 ~]# scp -r /tmp/backup mysql3:/tmp

And restore it:

[mysql3 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql3 ~]# chown -R mysql. /var/lib/mysql

160 / 284

LAB3: mysql3 as asynchronous slave (2)
Asynchronous slave

Configure /etc/my.cnf with the minimal requirements:

[mysqld]
...
server_id=3
enforce_gtid_consistency = on
gtid_mode = on
log_bin
log_slave_updates

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

161 / 284

LAB2: Prepare mysql3 (3)
Asynchronous slave

Let´s start MySQL on mysql3:

[mysql3 ~]# systemctl start mysqld

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

162 / 284

find the GTIDs purged
change MASTER
set the purged GTIDs
start replication

LAB3: mysql3 as asynchronous slave (1) 

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

163 / 284

LAB3: mysql3 as asynchronous slave (2)

Find the latest purged GTIDs:

[mysql3 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002    167646328    b346474c-8601-11e6-9b39-08002718d305:1-771

Connect to mysql3 and setup replication:

mysql> CHANGE MASTER TO MASTER_HOST="mysql1", MASTER_USER="repl_async", MASTER_PASSWORD='Xslave', MASTER_AUTO_POSITION=1;

mysql> RESET MASTER;
mysql> SET global gtid_purged="VALUE FOUND PREVIOUSLY";

mysql> START SLAVE;

Check that you receive the application´s traffic
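For example (a sketch; the schema written by run_app.sh is an assumption here, sbtest is only a placeholder name):

mysql3> SHOW SLAVE STATUS\G                     -- Slave_IO_Running / Slave_SQL_Running: Yes
mysql3> SELECT COUNT(*) FROM sbtest.sbtest1;    -- hypothetical application table: the count keeps growing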

164 / 284

Administration made easy and more...

MySQL-Shell

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

165 / 284

MySQL Shell

The MySQL Shell is an interactive Javascript, Python, or SQL interface supporting development and administration for MySQL. MySQL Shell includes the AdminAPI (available in JavaScript and Python) which enables you to set up and manage InnoDB clusters. It provides a modern and fluent API which hides the complexity associated with configuring, provisioning, and managing an InnoDB cluster, without sacrificing power, flexibility, or security.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

166 / 284

MySQL Shell (2)

The MySQL Shell provides:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

167 / 284

MySQL Shell (2)

The MySQL Shell provides:

Both Interactive and Batch operations

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

168 / 284

MySQL Shell (2)

The MySQL Shell provides:

Both Interactive and Batch operations
Document and Relational Models

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

169 / 284

MySQL Shell (2)

The MySQL Shell provides:

Both Interactive and Batch operations
Document and Relational Models
CRUD Document and Relational APIs via scripting

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

170 / 284

MySQL Shell (2)

The MySQL Shell provides:

Both Interactive and Batch operations
Document and Relational Models
CRUD Document and Relational APIs via scripting
Traditional Table, JSON, Tab Separated output results formats

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

171 / 284

MySQL Shell (2)

The MySQL Shell provides:

Both Interactive and Batch operations
Document and Relational Models
CRUD Document and Relational APIs via scripting
Traditional Table, JSON, Tab Separated output results formats
MySQL Standard and X Protocols

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

172 / 284

MySQL Shell (2)

The MySQL Shell provides:

Both Interactive and Batch operations
Document and Relational Models
CRUD Document and Relational APIs via scripting
Traditional Table, JSON, Tab Separated output results formats
MySQL Standard and X Protocols
and more...

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

173 / 284

LAB4: MySQL InnoDB Cluster
Create a single instance cluster

Time to use the new MySQL Shell !

[mysql3 ~]# mysqlsh

Let´s verify if our server is ready to become a member of a new cluster:

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

174 / 284

LAB4: MySQL InnoDB Cluster
Create a single instance cluster

Time to use the new MySQL Shell !

[mysql3 ~]# mysqlsh

Let´s verify if our server is ready to become a member of a new cluster:

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')

Change the configuration !

mysql-js> dba.configureLocalInstance()

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

175 / 284

LAB4: MySQL InnoDB Cluster (2)

Restart mysqld to use the new configuration:

[mysql3 ~]# systemctl restart mysqld

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

176 / 284

LAB4: MySQL InnoDB Cluster (2)

Restart mysqld to use the new configuration:

[mysql3 ~]# systemctl restart mysqld

Create a single instance cluster

[mysql3 ~]# mysqlsh

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

177 / 284

LAB4: MySQL InnoDB Cluster (2)

Restart mysqld to use the new configuration:

[mysql3 ~]# systemctl restart mysqld

Create a single instance cluster

[mysql3 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')

mysql-js> \c root@mysql3:3306

mysql-js> cluster = dba.createCluster('perconalive')

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

178 / 284

LAB4: Cluster Status

mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

179 / 284

Add mysql4 to the Group:

restore the backup
set the purged GTIDs
use MySQL shell

LAB5: add mysql4 to the cluster (1) 

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

180 / 284

LAB5: add mysql4 to the cluster (2)

Copy the backup from mysql1 to mysql4:

[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp

And restore it:

[mysql4 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql4 ~]# chown -R mysql. /var/lib/mysql

Start MySQL on mysql4:

[mysql4 ~]# systemctl start mysqld

181 / 284

LAB5: MySQL shell to add an instance (3)

[mysql4 ~]# mysqlsh

Let´s verify the config:

mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

182 / 284

LAB5: MySQL shell to add an instance (3)

[mysql4 ~]# mysqlsh

Let´s verify the config:

mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')

And change the configuration:

mysql-js> dba.configureLocalInstance()

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

183 / 284

LAB5: MySQL shell to add an instance (3)

[mysql4 ~]# mysqlsh

Let´s verify the config:

mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')

And change the configuration:

mysql-js> dba.configureLocalInstance()

Restart the service to enable the changes:

[mysql4 ~]# systemctl restart mysqld

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

184 / 284

LAB5: MySQL InnoDB Cluster (4)
Group of 2 instances

Find the latest purged GTIDs:

[mysql4 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002    167646328    b346474c-8601-11e6-9b39-08002718d305:1-77177

Connect to mysql4 and set GTID_PURGED

[mysql4 ~]# mysqlsh

mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET global gtid_purged="VALUE FOUND PREVIOUSLY";

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

185 / 284

LAB5: MySQL InnoDB Cluster (5)

mysql-sql> \js

mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')

mysql-js> \c root@mysql3:3306

mysql-js> cluster = dba.getCluster()

mysql-js> cluster.checkInstanceState('root@mysql4:3306')

mysql-js> cluster.addInstance("root@mysql4:3306")

mysql-js> cluster.status()

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

186 / 284

Cluster Status

mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "RECOVERING"
            }
        }
    }
}

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

187 / 284

Recovering progress

On standard MySQL, monitor the group_replication_recovery channel to see the progress:

mysql4> show slave status for channel 'group_replication_recovery'\G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: mysql3 Master_User: mysql_innodb_cluster_rpl_user ... Slave_IO_Running: Yes Slave_SQL_Running: Yes ... Retrieved_Gtid_Set: 6e7d7848-860f-11e6-92e4-08002718d305:1-6,7c1f0c2d-860d-11e6-9df7-08002718d305:1-15,b346474c-8601-11e6-9b39-08002718d305:1964-77177,e8c524df-860d-11e6-9df7-08002718d305:1-2 Executed_Gtid_Set: 7c1f0c2d-860d-11e6-9df7-08002718d305:1-7,b346474c-8601-11e6-9b39-08002718d305:1-45408,e8c524df-860d-11e6-9df7-08002718d305:1-2 ...

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

188 / 284

point the application to the cluster

Migrate the application

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

189 / 284

LAB6: Migrate the application

Now we need to point the application to mysql3; this is the only downtime!

...[ 21257s] threads: 4, tps: 12.00, reads: 167.94, writes: 47.98, response time: 18[ 21258s] threads: 4, tps: 6.00, reads: 83.96, writes: 23.99, response time: 14[ 21259s] threads: 4, tps: 7.00, reads: 98.05, writes: 28.01, response time: 16[ 31250s] threads: 4, tps: 8.00, reads: 111.95, writes: 31.99, response time: 30[ 31251s] threads: 4, tps: 11.00, reads: 154.01, writes: 44.00, response time: 13[ 31252s] threads: 4, tps: 11.00, reads: 153.94, writes: 43.98, response time: 12[ 31253s] threads: 4, tps: 10.01, reads: 140.07, writes: 40.02, response time: 17^C[mysql1 ~]# run_app.sh mysql3

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

190 / 284

LAB6: Migrate the application

Stop asynchronous replication on mysql2 and mysql3:

mysql2> stop slave;
mysql3> stop slave;

Make sure the gtid_executed range on mysql2 is lower than or equal to the one on mysql3

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

191 / 284

LAB6: Migrate the application

Stop asynchronous replication on mysql2 and mysql3:

mysql2> stop slave;
mysql3> stop slave;

Make sure the gtid_executed range on mysql2 is lower than or equal to the one on mysql3

mysql[2-3]> show global variables like 'gtid_executed'\G
mysql[2-3]> reset slave all;

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

192 / 284

previous slave (mysql2) can now be part of the cluster

Add a third instance

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

193 / 284

LAB7: Add mysql2 to the group

We first validate the instance using MySQL Shell and we configure it.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

194 / 284

LAB7: Add mysql2 to the group

We first validate the instance using MySQL Shell and we configure it.

[mysql2 ~]# mysqlsh

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

195 / 284

LAB7: Add mysql2 to the group

We first validate the instance using MySQL Shell and we configure it.

[mysql2 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')

mysql-js> dba.configureLocalInstance()

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

196 / 284

LAB7: Add mysql2 to the group

We first validate the instance using MySQL Shell and we configure it.

[mysql2 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')

mysql-js> dba.configureLocalInstance()

We also need to remove super_read_only from my.cnf to be able to use the shell to add the node to the cluster.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

197 / 284

LAB7: Add mysql2 to the group

We first validate the instance using MySQL Shell and we configure it.

[mysql2 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')

mysql-js> dba.configureLocalInstance()

We also need to remove super_read_only from my.cnf to be able to use the shell to add the node to the cluster.

[mysql2 ~]# systemctl restart mysqld

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

198 / 284

LAB7: Add mysql2 to the group (2)

Back in MySQL shell we add the new instance:

[mysql2 ~]# mysqlsh

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

199 / 284

LAB7: Add mysql2 to the group (2)

Back in MySQL shell we add the new instance:

[mysql2 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')

mysql-js> \c root@mysql3:3306

mysql-js> cluster = dba.getCluster()

mysql-js> cluster.addInstance("root@mysql2:3306")

mysql-js> cluster.status()

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

200 / 284

LAB7: Add mysql2 to the group (3)

{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

201 / 284

writing to a single server

Single Primary Mode

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

202 / 284

Default = Single Primary Mode

By default, MySQL InnoDB Cluster enables Single Primary Mode.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

203 / 284

Default = Single Primary Mode

By default, MySQL InnoDB Cluster enables Single Primary Mode.

mysql> show global variables like 'group_replication_single_primary_mode';
+----------------------------------------+-------+
| Variable_name                          | Value |
+----------------------------------------+-------+
| group_replication_single_primary_mode  | ON    |
+----------------------------------------+-------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

204 / 284

Default = Single Primary Mode

By default, MySQL InnoDB Cluster enables Single Primary Mode.

mysql> show global variables like 'group_replication_single_primary_mode';
+----------------------------------------+-------+
| Variable_name                          | Value |
+----------------------------------------+-------+
| group_replication_single_primary_mode  | ON    |
+----------------------------------------+-------+

In Single Primary Mode, a single member acts as the writable master (PRIMARY) and the rest of the members act as hot-standbys (SECONDARY).

The group itself coordinates and configures itself automatically to determine which member will act as the PRIMARY, through a leader election mechanism.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

205 / 284

Who´s the Primary Master ?

As the Primary Master is elected, all nodes that are part of the group know which one was elected. This value is exposed in status variables:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

206 / 284

Who´s the Primary Master ?

As the Primary Master is elected, all nodes that are part of the group know which one was elected. This value is exposed in status variables:

mysql> show status like 'group_replication_primary_member';
+-----------------------------------+--------------------------------------+
| Variable_name                     | Value                                |
+-----------------------------------+--------------------------------------+
| group_replication_primary_member  | 28a4e51f-860e-11e6-bdc4-08002718d305 |
+-----------------------------------+--------------------------------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

207 / 284

Who´s the Primary Master ?

As the Primary Master is elected, all nodes that are part of the group know which one was elected. This value is exposed in status variables:

mysql> show status like 'group_replication_primary_member';
+-----------------------------------+--------------------------------------+
| Variable_name                     | Value                                |
+-----------------------------------+--------------------------------------+
| group_replication_primary_member  | 28a4e51f-860e-11e6-bdc4-08002718d305 |
+-----------------------------------+--------------------------------------+

mysql> select member_host as "primary master"
       from performance_schema.global_status
       join performance_schema.replication_group_members
       where variable_name = 'group_replication_primary_member'
       and member_id=variable_value;
+----------------+
| primary master |
+----------------+
| mysql3         |
+----------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

208 / 284

Create a Multi-Primary Cluster:

It´s also possible to create a Multi-Primary Cluster using the Shell:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

209 / 284

Create a Multi-Primary Cluster:

It´s also possible to create a Multi-Primary Cluster using the Shell:

mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

210 / 284

Create a Multi-Primary Cluster:

It´s also possible to create a Multi-Primary Cluster using the Shell:

mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})

A new InnoDB cluster will be created on instance 'root@mysql3:3306'.

The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding.

I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode.
Confirm [y|N]:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

211 / 284

Create a Multi-Primary Cluster:

It´s also possible to create a Multi-Primary Cluster using the Shell:

mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})

A new InnoDB cluster will be created on instance 'root@mysql3:3306'.

The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding.

I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode.
Confirm [y|N]:

Or you can force it to avoid interaction (for automation) :

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

212 / 284

Create a Multi-Primary Cluster:

It´s also possible to create a Multi-Primary Cluster using the Shell:

mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})

A new InnoDB cluster will be created on instance 'root@mysql3:3306'.

The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding.

I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode.
Confirm [y|N]:

Or you can force it to avoid interaction (for automation) :

> cluster=dba.createCluster('perconalive',{multiMaster: true, force: true})

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

213 / 284

get more info

Monitoring

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

214 / 284

Performance Schema

Group Replication uses Performance_Schema to expose status

mysql3> SELECT * FROM performance_schema.replication_group_members\G *************************** 1. row *************************** CHANNEL_NAME: group_replication_applier MEMBER_ID: 00db47c7-3e23-11e6-afd4-08002774c31b MEMBER_HOST: mysql3.localdomain MEMBER_PORT: 3306 MEMBER_STATE: ONLINE

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

215 / 284

Performance Schema

Group Replication uses Performance_Schema to expose status

mysql3> SELECT * FROM performance_schema.replication_group_members\G *************************** 1. row *************************** CHANNEL_NAME: group_replication_applier MEMBER_ID: 00db47c7-3e23-11e6-afd4-08002774c31b MEMBER_HOST: mysql3.localdomain MEMBER_PORT: 3306 MEMBER_STATE: ONLINE

mysql3> SELECT * FROM performance_schema.replication_connection_status\G *************************** 1. row *************************** CHANNEL_NAME: group_replication_applier GROUP_NAME: afb80f36-2bff-11e6-84e0-0800277dd3bf SOURCE_UUID: afb80f36-2bff-11e6-84e0-0800277dd3bf THREAD_ID: NULL SERVICE_STATE: ON COUNT_RECEIVED_HEARTBEATS: 0 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00 RECEIVED_TRANSACTION_SET: afb80f36-2bff-11e6-84e0-0800277dd3bf:1-2 LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

216 / 284

Member State

These are the different possible states for a node member:

ONLINE

OFFLINE

RECOVERING

ERROR: when a node is leaving but the plugin was not instructed to stop
UNREACHABLE
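They can be checked at any time from performance_schema, e.g.:

mysql> SELECT member_host, member_port, member_state
       FROM performance_schema.replication_group_members;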

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

217 / 284

Status information & metrics

Members

mysql> SELECT * FROM performance_schema.replication_group_members\G

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

218 / 284

Status information & metrics

Members

mysql> SELECT * FROM performance_schema.replication_group_members\G

*************************** 1. row *************************** CHANNEL_NAME: group_replication_applier MEMBER_ID: 00db47c7-3e23-11e6-afd4-08002774c31b MEMBER_HOST: mysql3.localdomain MEMBER_PORT: 3306 MEMBER_STATE: ONLINE *************************** 2. row *************************** CHANNEL_NAME: group_replication_applier MEMBER_ID: e1544c9d-4451-11e6-9f5a-08002774c31b MEMBER_HOST: mysql4.localdomain.localdomain MEMBER_PORT: 3306 MEMBER_STATE: ONLINE

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

219 / 284

Status information & metrics

Connections
mysql> SELECT * FROM performance_schema.replication_connection_status\G

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

220 / 284

Status information & metrics

Connections
mysql> SELECT * FROM performance_schema.replication_connection_status\G

*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: afb80f36-2bff-11e6-84e0-0800277dd3bf
              SOURCE_UUID: afb80f36-2bff-11e6-84e0-0800277dd3bf
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089,
                           afb80f36-2bff-11e6-84e0-0800277dd3bf:1-2834
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
             CHANNEL_NAME: group_replication_recovery
               GROUP_NAME:
              SOURCE_UUID:
                THREAD_ID: NULL
            SERVICE_STATE: OFF
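To get a rough idea of whether a member has applied everything it has received (a hedged sketch; it only compares the set shown above with the member's executed GTIDs, for example with the built-in GTID_SUBSET() function):

mysql> SELECT RECEIVED_TRANSACTION_SET
       FROM performance_schema.replication_connection_status
       WHERE CHANNEL_NAME = 'group_replication_applier';
mysql> SELECT @@GLOBAL.gtid_executed;
-- when the received set is a subset of gtid_executed, the applier has no backlog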

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

221 / 284

Status information & metrics

Local node status
mysql> select * from performance_schema.replication_group_member_stats\G

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

222 / 284

Status information & metrics

Local node status
mysql> select * from performance_schema.replication_group_member_stats\G

*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14679667214442885:4
                         MEMBER_ID: e1544c9d-4451-11e6-9f5a-08002774c31b
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 5961
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089,
                                    afb80f36-2bff-11e6-84e0-0800277dd3bf:1-5718
    LAST_CONFLICT_FREE_TRANSACTION: afb80f36-2bff-11e6-84e0-0800277dd3bf:5718

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

223 / 284

Performance_Schema
You can find GR information in the following Performance_Schema tables (a quick way to list them follows this list):

replication_applier_configuration

replication_applier_status

replication_applier_status_by_worker

replication_connection_configuration

replication_connection_status

replication_group_member_stats

replication_group_members
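A quick way to list them all on a member (plain SQL, nothing specific to this tutorial):

mysql> SHOW TABLES FROM performance_schema LIKE 'replication%';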

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

224 / 284

Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

225 / 284

Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G

*************************** 1. row ***************************
      Slave_IO_State:
         Master_Host: <NULL>
         Master_User: gr_repl
         Master_Port: 0
                 ...
      Relay_Log_File: mysql4-relay-bin-group_replication_recovery.000001
                 ...
    Slave_IO_Running: No
   Slave_SQL_Running: No
                 ...
   Executed_Gtid_Set: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089,
                      afb80f36-2bff-11e6-84e0-0800277dd3bf:1-5718
                 ...
        Channel_Name: group_replication_recovery

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

226 / 284

Sys Schema
The easiest way to detect if a node is a member of the primary component (when your nodes are partitioned due to network issues, for example), and therefore a valid candidate for routing queries to it, is to use the sys table.

Additional information for sys can be downloaded at https://github.com/lefred/mysql_gr_routing_check/blob/master/addition_to_sys.sql

On the primary node:

[mysql? ~]# mysql < /tmp/addition_to_sys.sql

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

227 / 284

Sys Schema
Is this node part of PRIMARY Partition:

mysql3> SELECT sys.gr_member_in_primary_partition();
+------------------------------------+
| sys.gr_node_in_primary_partition() |
+------------------------------------+
| YES                                |
+------------------------------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

228 / 284

Sys Schema
Is this node part of PRIMARY Partition:

mysql3> SELECT sys.gr_member_in_primary_partition();
+------------------------------------+
| sys.gr_node_in_primary_partition() |
+------------------------------------+
| YES                                |
+------------------------------------+

To use as healthcheck:

mysql3> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 0                   | 0                    |
+------------------+-----------+---------------------+----------------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

229 / 284

LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:

mysql-sql> flush tables with read lock;

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

230 / 284

LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:

mysql-sql> flush tables with read lock;

Now you can verify what the healthcheck exposes to you:

mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 950                 | 0                    |
+------------------+-----------+---------------------+----------------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

231 / 284

LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:

mysql-sql> flush tables with read lock;

Now you can verify what the healthcheck exposes to you:

mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 950                 | 0                    |
+------------------+-----------+---------------------+----------------------+

mysql-sql> UNLOCK TABLES;

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

232 / 284

application interaction

MySQL Router

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

233 / 284

MySQL Router
MySQL Router is lightweight middleware that provides transparent routing between your application and backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

234 / 284

MySQL Router
MySQL Router is lightweight middleware that provides transparent routing between your application and backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.

MySQL Router doesn't require any specific configuration. It configures itself automatically (bootstrap) using MySQL InnoDB Cluster's metadata.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

235 / 284

LAB9: MySQL Router
We will now use mysqlrouter between our application and the cluster.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

236 / 284

LAB9: MySQL Router (2)
Configure MySQL Router that will run on the app server (mysql1). We bootstrap it using the Primary-Master:

[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root:
WARNING: The MySQL server does not have SSL configured and metadata used by the router may be transmitted unencrypted.

Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'perconalive'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

[root@mysql1 ~]# chown -R mysqlrouter. /var/lib/mysqlrouter

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

237 / 284

LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

238 / 284

LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:

in /etc/mysqlrouter/mysqlrouter.conf:

[routing:perconalive_default_rw]
-bind_port=6446
+bind_port=3306

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

239 / 284

LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:

in /etc/mysqlrouter/mysqlrouter.conf:

[routing:perconalive_default_rw]
-bind_port=6446
+bind_port=3306

We can stop mysqld on mysql1 and start mysqlrouter in a screen session:

[mysql1 ~]# systemctl stop mysqld
[mysql1 ~]# systemctl start mysqlrouter

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

240 / 284

LAB9: MySQL Router (4)
Before killing a member we will change systemd's default behavior that restarts mysqld immediately:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

241 / 284

LAB9: MySQL Router (4)
Before killing a member we will change systemd's default behavior that restarts mysqld immediately:

in /usr/lib/systemd/system/mysqld.service add the following under [Service]

RestartSec=30

[mysql3 ~]# systemctl daemon-reload

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

242 / 284

LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):

[mysql1 ~]# run_app.sh

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

243 / 284

LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):

[mysql1 ~]# run_app.sh

Check app and kill mysqld on mysql3 (the Primary Master R/W node) !

[mysql3 ~]# kill -9 $(pidof mysqld)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

244 / 284

LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):

[mysql1 ~]# run_app.sh

Check app and kill mysqld on mysql3 (the Primary Master R/W node) !

[mysql3 ~]# kill -9 $(pidof mysqld)

mysql2> select member_host as "primary"
          from performance_schema.global_status
          join performance_schema.replication_group_members
         where variable_name = 'group_replication_primary_member'
           and member_id=variable_value;
+---------+
| primary |
+---------+
| mysql4  |
+---------+
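A shorter check of the same thing (a hedged sketch; the status variable exists in MySQL 5.7 single-primary Group Replication, but it returns the member UUID rather than the hostname, which is why the query above joins against replication_group_members):

mysql> SHOW GLOBAL STATUS LIKE 'group_replication_primary_member';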

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

245 / 284

ProxySQL / HAProxy / F5 / ...

3rd party router/proxy

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

246 / 284

3rd party router/proxy
MySQL InnoDB Cluster can also work with third party router / proxy.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

247 / 284

3rd party router/proxy
MySQL InnoDB Cluster can also work with third party router / proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

248 / 284

3rd party router/proxy
MySQL InnoDB Cluster can also work with third party router / proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

The important part of such an implementation is to use a good health check to verify if the MySQL server you plan to route the traffic to is in a valid state.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

249 / 284

3rd party router/proxy
MySQL InnoDB Cluster can also work with third party router / proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

The important part of such an implementation is to use a good health check to verify if the MySQL server you plan to route the traffic to is in a valid state.

MySQL Router implements that natively, and it's very easy to deploy.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

250 / 284

3rd party router/proxy
MySQL InnoDB Cluster can also work with third party router / proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

The important part of such an implementation is to use a good health check to verify if the MySQL server you plan to route the traffic to is in a valid state (a sketch follows below).

MySQL Router implements that natively, and it's very easy to deploy.

ProxySQL also has native support for Group Replication, which makes it maybe the best choice for advanced users.
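A hedged sketch of such a health check (it assumes the addition_to_sys.sql views installed earlier in LAB8; the expression is illustrative, not a fixed recipe):

mysql> SELECT viable_candidate = 'YES' AND read_only = 'NO' AS can_accept_writes
       FROM sys.gr_member_routing_candidate_status;
-- a proxy would route writes only to members returning 1, and could also
-- keep an eye on transactions_behind before sending reads there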

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

251 / 284

operational tasks

Recovering Node

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

252 / 284

Recovering Nodes/Members
The old master (mysql3) got killed.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

253 / 284

Recovering Nodes/Members
The old master (mysql3) got killed.

MySQL got restarted automatically by systemd

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

254 / 284

Recovering Nodes/Members
The old master (mysql3) got killed.

MySQL got restarted automatically by systemd

Let´s add mysql3 back to the cluster

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

255 / 284

LAB10: Recovering Nodes/Members
[mysql3 ~]# mysqlsh

mysql-js> \c root@mysql4:3306 # The current master

mysql-js> cluster = dba.getCluster()

mysql-js> cluster.status()

mysql-js> cluster.rejoinInstance("root@mysql3:3306")

Rejoining the instance to the InnoDB cluster. Depending on the original problem that made the instance unavailable, the rejoin operation might not be successful and further manual steps will be needed to fix the underlying problem.

Please monitor the output of the rejoin operation and take necessary action if the instance cannot rejoin.

Please provide the password for 'root@mysql3:3306':
Rejoining instance to the cluster ...

The instance 'root@mysql3:3306' was successfully rejoined on the cluster.

The instance 'mysql3:3306' was successfully added to the MySQL Cluster.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

256 / 284

mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql4:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

257 / 284

Recovering Nodes/Members (automatically)
This time before killing a member of the group, we will persist the configuration on disk in my.cnf.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

258 / 284

Recovering Nodes/Members (automatically)
This time before killing a member of the group, we will persist the configuration on disk in my.cnf.

We will again use the same MySQL Shell command as previously, dba.configureLocalInstance(), but this time when all nodes are already part of the Group.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

259 / 284

LAB10: Recovering Nodes/Members (2)
Verify that all nodes are ONLINE.

...
mysql-js> cluster.status()

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

260 / 284

LAB10: Recovering Nodes/Members (2)
Verify that all nodes are ONLINE.

...
mysql-js> cluster.status()

Then on all nodes run:

mysql-js> dba.configureLocalInstance()
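To convince yourself that the persisted settings will let the member rejoin automatically after a restart (a hedged check; the variable name is as in MySQL 5.7 Group Replication), you can look at:

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication_start_on_boot';
-- ON means the plugin will try to rejoin the group when mysqld starts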

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

261 / 284

LAB10: Recovering Nodes/Members (3)
Kill one node again:

[mysql3 ~]# kill -9 $(pidof mysqld)

systemd will restart mysqld; verify that the node rejoined the group.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

262 / 284

understanding

Flow Control

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

263 / 284

Flow Control
When using MySQL Group Replication, it's possible that some members are lagging behind the group, due to load, hardware limitations, etc. This lag can become problematic for keeping good certification performance and keeping the possible certification failures as low as possible.

More problems can occur in multi-primary/write clusters: when the applying queue grows, the risk of having conflicts with those not yet applied transactions increases.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

264 / 284

Flow Control (2)
Within MySQL Group Replication's FC implementation :

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

265 / 284

Flow Control (2)
Within MySQL Group Replication's FC implementation :

the Group is never totally stalled

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

266 / 284

Flow Control (2)
Within MySQL Group Replication's FC implementation :

the Group is never totally stalled
the node having issues doesn't send flow control messages to the rest of the group asking to slow down

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

267 / 284

Flow Control (3)
Every member of the Group sends some statistics about its queues (applier queue and certification queue) to the other members.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

268 / 284

Flow Control (3)
Every member of the Group sends some statistics about its queues (applier queue and certification queue) to the other members.

Then every node decides whether to slow down or not if it realizes that one node reached the threshold for one of the queues:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

269 / 284

Flow Control (3)
Every member of the Group sends some statistics about its queues (applier queue and certification queue) to the other members.

Then every node decides whether to slow down or not if it realizes that one node reached the threshold for one of the queues:

group_replication_flow_control_applier_threshold (default is 25k)

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

270 / 284

Flow Control (3)
Every member of the Group sends some statistics about its queues (applier queue and certification queue) to the other members.

Then every node decides whether to slow down or not if it realizes that one node reached the threshold for one of the queues (a query to watch those queues follows the list):

group_replication_flow_control_applier_threshold (default is 25k)
group_replication_flow_control_certifier_threshold (default is 25k)
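A minimal sketch to watch the queues that flow control reacts to (column names as already shown in replication_group_member_stats earlier in this tutorial):

mysql> SELECT MEMBER_ID, COUNT_TRANSACTIONS_IN_QUEUE, COUNT_TRANSACTIONS_CHECKED
       FROM performance_schema.replication_group_member_stats;
-- COUNT_TRANSACTIONS_IN_QUEUE is the local member's certification queue;
-- when it stays above the threshold, flow control kicks in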

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

271 / 284

Flow Control (4)
So when group_replication_flow_control_mode is set to QUOTA on the node seeing that one of the other members of the cluster is lagging behind (threshold reached), it will throttle the write operations to the minimum quota.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

272 / 284

Flow Control (4)
So when group_replication_flow_control_mode is set to QUOTA on the node seeing that one of the other members of the cluster is lagging behind (threshold reached), it will throttle the write operations to the minimum quota.

This quota is calculated based on the number of transactions applied in the last second, and then it is reduced below that by subtracting the "over the quota" messages from the last period.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

273 / 284

LAB10: Flow Control
During this last lab, we will reduce the flow control threshold on the Primary Master:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

274 / 284

LAB10: Flow Control
During this last lab, we will reduce the flow control threshold on the Primary Master:

mysql> show global variables like '%flow%';
+-----------------------------------------------------+-------+
| Variable_name                                        | Value |
+-----------------------------------------------------+-------+
| group_replication_flow_control_applier_threshold     | 25000 |
| group_replication_flow_control_certifier_threshold   | 25000 |
| group_replication_flow_control_mode                  | QUOTA |
+-----------------------------------------------------+-------+
3 rows in set (0.08 sec)

mysql> set global group_replication_flow_control_applier_threshold=100;

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

275 / 284

LAB10: Flow Control (1)
And now we block all writes on one of the Secondary-Masters:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

276 / 284

LAB10: Flow Control (1)
And now we block all writes on one of the Secondary-Masters:

mysql> flush tables with read lock;

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

277 / 284

LAB10: Flow Control (1)
And now we block all writes on one of the Secondary-Masters:

mysql> flush tables with read lock;

And we check how the queue is growing:

mysql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        | 487                 | 0                    |
+------------------+-----------+---------------------+----------------------+

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

278 / 284

LAB10: Flow Control (1)
And now we block all writes on one of the Secondary-Masters:

mysql> flush tables with read lock;

And we check how the queue is growing:

mysql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        | 487                 | 0                    |
+------------------+-----------+---------------------+----------------------+

Did you notice something on the application when the threshold was reached ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

279 / 284

LAB10: Flow Control (2)
If nothing happened, please increase the trx rate:

[root@mysql1 ~]# run_app.sh mysql1 --tx-rate=500

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

280 / 284

LAB10: Flow Control (2)
If nothing happened, please increase the trx rate:

[root@mysql1 ~]# run_app.sh mysql1 --tx-rate=500

When the application's writes are low, you can just remove the lock and see the queue and the effect on the application:

mysql> UNLOCK TABLES;

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

281 / 284

LAB10: Flow Control (2)
If nothing happened, please increase the trx rate:

[root@mysql1 ~]# run_app.sh mysql1 --tx-rate=500

When the application's writes are low, you can just remove the lock and see the queue and the effect on the application:

mysql> UNLOCK TABLES;

Create flow control again, and when you see the application writing just a few transactions, disable the flow control mode on the Primary-Master:

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

282 / 284

LAB10: Flow Control (2)
If nothing happened, please increase the trx rate:

[root@mysql1 ~]# run_app.sh mysql1 --tx-rate=500

When the application's writes are low, you can just remove the lock and see the queue and the effect on the application:

mysql> UNLOCK TABLES;

Create flow control again, and when you see the application writing just a few transactions, disable the flow control mode on the Primary-Master:

mysql> set global group_replication_flow_control_mode='DISABLED';
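Once you are done observing the effect, you will probably want to put things back as they were (a hedged cleanup sketch; the values are simply the defaults shown earlier in this lab):

mysql> set global group_replication_flow_control_mode='QUOTA';
mysql> set global group_replication_flow_control_applier_threshold=25000;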

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

283 / 284

Thank you !

Any Questions ?

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.

284 / 284
