This section discusses known problems or issues when using replication with MySQL Cluster in MySQL 5.1 (including MySQL Cluster NDB 6.X and 7.X through 7.1).
Loss of master-slave connection.
A loss of connection can occur either between the replication master SQL node and the replication slave SQL node, or between the replication master SQL node and the data nodes in the master cluster. In the latter case, this can occur not only as a result of loss of physical connection (for example, a broken network cable), but due to the overflow of data node event buffers; if the SQL node is too slow to respond, it may be dropped by the cluster (this is controllable to some degree by adjusting the MaxBufferedEpochs and TimeBetweenEpochs configuration parameters). If this occurs, it is entirely possible for new data to be inserted into the master cluster without being recorded in the replication master's binary log. For this reason, to guarantee high availability, it is extremely important to maintain a backup replication channel, to monitor the primary channel, and to fail over to the secondary replication channel when necessary to keep the slave cluster synchronized with the master. MySQL Cluster is not designed to perform such monitoring on its own; for this, an external application is required.
Prior to MySQL 5.1.18, a MySQL Cluster replication slave mysqld had no way of detecting that the connection from the master had been interrupted. For this reason, it was possible for the slave to become inconsistent with the master.
Beginning with MySQL 5.1.18, the replication master issues a “gap” event when connecting or reconnecting to the master cluster. (A gap event is a type of “incident event,” which indicates an incident that occurred affecting the contents of the database but that cannot easily be represented as a set of changes. Examples of incidents are server crashes, database resynchronization, (some) software updates, and (some) hardware changes.) When the slave encounters a gap in the replication log, it stops with an error message. This message is available in the output of SHOW SLAVE STATUS, and indicates that the SQL thread has stopped due to an incident registered in the replication stream, and that manual intervention is required. See Section 17.6.8, “Implementing Failover with MySQL Cluster Replication”, for more information about what to do in such circumstances.
Because MySQL Cluster is not designed on its own to monitor replication status or provide failover, if high availability is a requirement for the slave server or cluster, then you must set up multiple replication lines, monitor the master mysqld on the primary replication line, and be prepared to fail over to a secondary line if and as necessary. This must be done manually, or possibly by means of a third-party application. For information about implementing this type of setup, see Section 17.6.7, “Using Two Replication Channels for MySQL Cluster Replication”, and Section 17.6.8, “Implementing Failover with MySQL Cluster Replication”.
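A rough sketch of such a manual channel switch follows, based on the ndb_apply_status and ndb_binlog_index tables described in the sections just cited. The epoch value found on the slave must be carried by hand (or by a monitoring script) to the secondary master, and the host name shown is purely illustrative:

```sql
-- On the slave SQL node: find the most recent epoch applied from the
-- failed primary channel.
SELECT MAX(epoch) AS latest FROM mysql.ndb_apply_status;

-- On a master SQL node of the secondary channel: find the binary log
-- file and position of the first epoch not yet applied on the slave.
-- (Substitute the value of `latest` obtained above for @latest.)
SELECT SUBSTRING_INDEX(File, '/', -1) AS log_file, Position AS log_pos
    FROM mysql.ndb_binlog_index
    WHERE epoch > @latest
    ORDER BY epoch ASC
    LIMIT 1;

-- Back on the slave SQL node: point replication at the secondary
-- master, using the file name and position just found.
CHANGE MASTER TO
    MASTER_HOST     = 'secondary_master',  -- hypothetical host name
    MASTER_LOG_FILE = 'binlog.000042',     -- value of log_file above
    MASTER_LOG_POS  = 4;                   -- value of log_pos above
START SLAVE;
```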
However, if you are replicating from a standalone MySQL server to a MySQL Cluster, one channel is usually sufficient.
Multi-byte character sets. Previously, there were several known issues with regard to the use of multi-byte characters sets with MySQL Cluster Replication. These were resolved in MySQL 5.1.21, MySQL Cluster NDB 6.2.14, and MySQL Cluster NDB 6.3.11. (See Bug #27404 and Bug #29562.)
Circular replication. Prior to MySQL 5.1.18, circular replication was not supported with MySQL Cluster replication, due to the fact that all log events created in a particular MySQL Cluster were wrongly tagged with the server ID of the MySQL server used as master and not with the server ID of the originating server.
Beginning with MySQL 5.1.18, this limitation is lifted, as discussed in the next few paragraphs, in which we consider the example of a replication setup involving three MySQL Clusters numbered 1, 2, and 3, in which Cluster 1 acts as the replication master for Cluster 2, Cluster 2 acts as the master for Cluster 3, and Cluster 3 acts as the master for Cluster 1. Each cluster has two SQL nodes, with SQL nodes A and B belonging to Cluster 1, SQL nodes C and D belonging to Cluster 2, and SQL nodes E and F belonging to Cluster 3.
Circular replication using these clusters is supported as long as the following conditions are met:
- The SQL nodes on all masters and slaves are the same.

- All SQL nodes acting as replication masters and slaves are started with the --log-slave-updates option.
This type of circular replication setup is shown in the following diagram:
In this scenario, SQL node A in Cluster 1 replicates to SQL node C in Cluster 2; SQL node C replicates to SQL node E in Cluster 3; SQL node E replicates to SQL node A. In other words, the replication line (indicated by the red arrows in the diagram) directly connects all SQL nodes used as replication masters and slaves.
It should also be possible to set up circular replication in which not all master SQL nodes are also slaves, as shown here:
In this case, different SQL nodes in each cluster are used as replication masters and slaves. However, you must not start any of the SQL nodes with --log-slave-updates. This type of circular replication scheme for MySQL Cluster, in which the line of replication (again indicated by the red arrows in the diagram) is discontinuous, should be possible, but it should be noted that it has not yet been thoroughly tested and must therefore still be considered experimental.
Beginning with MySQL 5.1.24, the NDB storage engine uses idempotent execution mode, which suppresses duplicate-key and other errors that otherwise break circular replication of MySQL Cluster. This is equivalent to setting the global slave_exec_mode system variable to IDEMPOTENT. This is also required for multi-master replication when using MySQL Cluster. (Bug #31609)

It is not necessary to set slave_exec_mode in MySQL Cluster replication; MySQL Cluster does this automatically for all NDB tables and ignores any attempts to set this variable explicitly.
MySQL Cluster replication and primary keys. In MySQL 5.1.6, only those NDB tables having explicit primary keys could be replicated. This limitation was lifted in MySQL 5.1.7. However, in the event of a node failure, errors in replication of NDB tables without primary keys can still occur, due to the possibility of duplicate rows being inserted in such cases. For this reason, it is highly recommended that all NDB tables being replicated have primary keys.
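For example (the table name and columns here are illustrative only), declaring an explicit primary key when the table is created avoids the duplicate-row problem just described:

```sql
-- Recommended: an explicit primary key on every replicated NDB table
CREATE TABLE t1 (
    id   INT NOT NULL PRIMARY KEY,
    data VARCHAR(32)
) ENGINE=NDB;
```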
MySQL Cluster Replication and Unique Keys. Prior to MySQL Cluster NDB 7.0.25 and MySQL Cluster NDB 7.1.14, operations that updated values of unique key columns of NDB tables could result in duplicate-key errors when replicated. This was due to the ordering of row events in the binary log according to the partitioning of the base table, which meant that they could appear in a different order from that in which they were originally executed. (Bug #11756082)
In MySQL Cluster NDB 7.0.25, MySQL Cluster NDB 7.1.14, and later, this issue is solved for replication between NDB tables by deferring unique key checks until after all table row updates have been performed. Deferring constraints in this way is currently supported only by NDB. Thus, updates of unique keys when replicating from NDB to a different storage engine such as MyISAM or InnoDB are still not supported.
The problem encountered when replicating without deferred checking of unique key updates can be illustrated using an NDB table such as t, which is created and populated on the master (and replicated to a slave that does not support deferred unique key updates) as shown here:
CREATE TABLE t (
    p INT PRIMARY KEY,
    c INT,
    UNIQUE KEY u (c)
) ENGINE NDB;

INSERT INTO t VALUES (1,1), (2,2), (3,3), (4,4), (5,5);
The following UPDATE statement on t succeeded on the master, since the rows affected are processed in the order determined by the ORDER BY option, performed over the entire table:
UPDATE t SET c = c - 1 ORDER BY p;
However, the same statement failed with a duplicate key error or other constraint violation on the slave, because the ordering of the row updates was done for one partition at a time, rather than for the table as a whole.
In MySQL 5.1.6 and later, every NDB table is implicitly partitioned by key when it is created. This also applies to all MySQL Cluster NDB 6.x and MySQL Cluster NDB 7.x releases. See Section 18.2.4, “KEY Partitioning”, for more information.
Restarting with --initial. Restarting the cluster with the --initial option causes the sequence of GCI and epoch numbers to start over from 0. (This is generally true of MySQL Cluster and not limited to replication scenarios involving Cluster.) The MySQL servers involved in replication should in this case be restarted. After this, you should use the RESET MASTER and RESET SLAVE statements to clear the invalid ndb_binlog_index and ndb_apply_status tables, respectively.
auto_increment_offset and auto_increment_increment variables. The use of the auto_increment_offset and auto_increment_increment server system variables is supported beginning with MySQL 5.1.20. Previously, these produced unpredictable results when used with NDB tables or MySQL Cluster replication.
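For example, in a two-master setup (the choice of offsets here is illustrative), these variables can be set so that each server generates a disjoint, interleaved sequence of auto-increment values, avoiding collisions:

```sql
-- On the first master SQL node:
SET GLOBAL auto_increment_increment = 2;  -- step between generated values
SET GLOBAL auto_increment_offset    = 1;  -- generates 1, 3, 5, ...

-- On the second master SQL node:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;  -- generates 2, 4, 6, ...
```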
Replication from NDB to other storage engines. It is possible to replicate an NDB table on the master to a table using a different storage engine on the slave, taking into account the restrictions listed here:

- Multi-master and circular replication are not supported (tables on both the master and the slave must use the NDB storage engine for these to work).

- Using a storage engine that does not perform binary logging for slave tables requires special handling.

- Use of a nontransactional storage engine for slave tables also requires special handling.
The next few paragraphs provide additional information about each of the issues just described.
Multiple masters not supported when replicating NDB to other storage engines. For replication from NDB to a different storage engine, the relationship between the two databases must be a simple master-slave one. This means that circular or master-master replication is not supported between MySQL Cluster and other storage engines.

In addition, it is not possible to configure more than one replication channel when replicating between NDB and a different storage engine. (However, a MySQL Cluster database can simultaneously replicate to multiple slave MySQL Cluster databases.) If the master uses NDB tables, it is still possible to have more than one MySQL Server maintain a binary log of all changes; however, for the slave to change masters (fail over), the new master-slave relationship must be explicitly defined on the slave.
Replicating NDB to a slave storage engine that does not perform binary logging. If you attempt to replicate from a MySQL Cluster to a slave that uses a storage engine that does not handle its own binary logging, the replication process aborts with the error Binary logging not possible ... Statement cannot be written atomically since more than one engine involved and at least one engine is self-logging (Error 1595). It is possible to work around this issue in one of the following ways:
- Turn off binary logging on the slave. This can be accomplished by setting sql_log_bin = 0.

- Change the storage engine used for the mysql.ndb_apply_status table. Causing this table to use an engine that does not handle its own binary logging can also eliminate the conflict. This can be done by issuing a statement such as ALTER TABLE mysql.ndb_apply_status ENGINE=MyISAM on the slave. It is safe to do this when using a non-NDB storage engine on the slave, since you do not then need to worry about keeping multiple slave SQL nodes synchronized.

- Filter out changes to the mysql.ndb_apply_status table on the slave. This can be done by starting the slave SQL node with --replicate-ignore-table=mysql.ndb_apply_status. If you need other tables to be ignored by replication, you might wish to use an appropriate --replicate-wild-ignore-table option instead.
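The first two workarounds can be sketched as follows; which one is appropriate depends on whether other binary logging is still needed on the slave:

```sql
-- Workaround 1: disable binary logging for the current session
-- on the slave.
SET sql_log_bin = 0;

-- Workaround 2: move ndb_apply_status to an engine that does not
-- perform its own binary logging (safe only on non-NDB slaves).
ALTER TABLE mysql.ndb_apply_status ENGINE=MyISAM;
```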
You should not disable replication or binary logging of mysql.ndb_apply_status or change the storage engine used for this table when replicating from one MySQL Cluster to another. See Replication and binary log filtering rules with replication between MySQL Clusters, for details.
Replication from NDB to a nontransactional storage engine. When replicating from NDB to a nontransactional storage engine such as MyISAM, you may encounter unnecessary duplicate key errors when replicating INSERT ... ON DUPLICATE KEY UPDATE statements. You can suppress these in MySQL Cluster NDB 6.2 and later MySQL Cluster releases by using --ndb-log-update-as-write=0, which forces updates to be logged as updates (rather than as writes).
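For example, the option can be set in the master SQL node's option file (a sketch; the file location and other settings vary by installation):

```ini
[mysqld]
# Log updates as updates rather than as writes, avoiding spurious
# duplicate-key errors on nontransactional slave tables
ndb-log-update-as-write = 0
```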
In addition, when replicating from NDB to a storage engine that does not implement transactions, if the slave fails to apply any row changes from a given transaction, it does not roll back the rest of the transaction. (This is true when replicating tables using any transactional storage engine—not only NDB—to a nontransactional storage engine.) Because of this, it cannot be guaranteed that transactional consistency will be maintained on the slave in such cases.
Replication and binary log filtering rules with replication between MySQL Clusters. If you are using any of the options --replicate-do-*, --replicate-ignore-*, --binlog-do-db, or --binlog-ignore-db to filter databases or tables being replicated, care must be taken not to block replication or binary logging of the mysql.ndb_apply_status table, which is required for replication between MySQL Clusters to operate properly. In particular, you must keep in mind the following:
- Using --replicate-do-db=db_name (and no other --replicate-do-* or --replicate-ignore-* options) means that only tables in database db_name are replicated. In this case, you should also use --replicate-do-db=mysql, --binlog-do-db=mysql, or --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is populated on slaves.
- Using --binlog-do-db=db_name (and no other --binlog-do-db options) means that changes only to tables in database db_name are written to the binary log. In this case, you should also use --replicate-do-db=mysql, --binlog-do-db=mysql, or --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is populated on slaves.
- Using --replicate-ignore-db=mysql means that no tables in the mysql database are replicated. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.
- Using --binlog-ignore-db=mysql means that no changes to tables in the mysql database are written to the binary log. In this case, you should also use --replicate-do-table=mysql.ndb_apply_status to ensure that mysql.ndb_apply_status is replicated.
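As an illustration of the first case above (db1 is a hypothetical application database), the slave mysqld's options might look like this:

```ini
[mysqld]
# Replicate only tables in db1 ...
replicate-do-db    = db1
# ... while still keeping mysql.ndb_apply_status populated on the slave
replicate-do-table = mysql.ndb_apply_status
```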
You should also remember that each replication rule requires the following:

- Its own --replicate-do-* or --replicate-ignore-* option; multiple rules cannot be expressed in a single replication filtering option. For information about these rules, see Section 16.1.3, “Replication and Binary Logging Options and Variables”.

- Its own --binlog-do-db or --binlog-ignore-db option; multiple rules cannot be expressed in a single binary log filtering option. For information about these rules, see Section 5.2.4, “The Binary Log”.
If you are replicating a MySQL Cluster to a slave that uses a storage engine other than NDB, the considerations just given may not apply, as discussed elsewhere in this section.
MySQL Cluster Replication and IPv6. Currently, the NDB API and MGM API do not support IPv6. However, beginning with MySQL Cluster NDB 6.4.1, MySQL Servers—including those acting as SQL nodes in a MySQL Cluster—can use IPv6 to contact other MySQL Servers. This means that you can replicate between MySQL Clusters using IPv6 to connect the master and slave SQL nodes as shown by the dotted arrow in the following diagram:
However, all connections originating within the MySQL Cluster—represented in the preceding diagram by solid arrows—must use IPv4. In other words, all MySQL Cluster data nodes, management servers, and management clients must be accessible from one another using IPv4. In addition, SQL nodes must use IPv4 to communicate with the cluster.
Since there is currently no support in the NDB and MGM APIs for IPv6, any applications written using these APIs must also make all connections using IPv4.
Attribute promotion and demotion. Formerly, support in MySQL Cluster for type conversions between columns of similar but different types on the master and the slave was extremely limited. However, starting with MySQL Cluster NDB 6.3.33, 7.0.14, and 7.1.3, MySQL Cluster Replication includes support for attribute promotion and demotion. The implementation of the latter distinguishes between lossy and non-lossy type conversions, and their use on the slave can be controlled by setting the slave_type_conversions global server system variable.
For more information about attribute promotion and demotion in MySQL Cluster, see Attribute promotion and demotion (MySQL Cluster).