15.4. MySQL Cluster Configuration


A MySQL server that is part of a MySQL Cluster differs in only one respect from a normal (non-clustered) MySQL server: it employs the NDBCLUSTER storage engine. This engine is also referred to simply as NDB, and the two forms of the name are synonymous.

To avoid unnecessary allocation of resources, the server is configured by default with the NDBCLUSTER storage engine disabled. To enable NDBCLUSTER, you must modify the server's my.cnf configuration file, or start the server with the --ndbcluster option.

Because the MySQL server is part of the cluster, it also must know how to access an MGM node to obtain the cluster configuration data. The default behavior is to look for the MGM node on localhost. However, should you need to specify that its location is elsewhere, this can be done in my.cnf or on the MySQL server command line. Before the NDBCLUSTER storage engine can be used, at least one MGM node must be operational, as well as any desired data nodes.
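As a minimal sketch (the host name mgmhost is an assumption; the full options are covered in Section 15.4.4.1, “Basic Example Configuration”), the relevant my.cnf lines might look like this:

[mysqld]
ndbcluster
ndb-connectstring=mgmhost:1186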

15.4.1. Building MySQL Cluster from Source Code

NDBCLUSTER, the Cluster storage engine, is available in binary distributions for Linux, Mac OS X, and Solaris. We are working to make Cluster run on all operating systems supported by MySQL, including Windows.

If you choose to build from a source tarball or the MySQL 5.0 BitKeeper tree, be sure to use the --with-ndbcluster option when running configure. You can also use the BUILD/compile-pentium-max build script. Note that this script includes OpenSSL, so you must either have or obtain OpenSSL to build successfully, or else modify compile-pentium-max to exclude this requirement. Of course, you can also just follow the standard instructions for compiling your own binaries, and then perform the usual tests and installation procedure. See Section 2.9.3, “Installing from the Development Source Tree”.
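For reference, a typical source build might proceed as follows. This is a sketch only: the installation prefix is an assumption, and your platform may require additional configure options.

shell> ./configure --with-ndbcluster --prefix=/usr/local/mysql
shell> make
shell> make install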

15.4.2. Installing the Software

In the next few sections, we assume that you are already familiar with installing MySQL, and here we cover only the differences between configuring MySQL Cluster and configuring MySQL without clustering. (See Chapter 2, Installing and Upgrading MySQL, if you require more information about the latter.)

You will find Cluster configuration easiest if you get all management and data nodes running first; this is likely to be the most time-consuming part of the configuration. Editing the my.cnf file is fairly straightforward, and this section covers only the differences from configuring MySQL without clustering.

15.4.3. Quick Test Setup of MySQL Cluster

To familiarize you with the basics, we will describe the simplest possible configuration for a functional MySQL Cluster. After this, you should be able to design your desired setup from the information provided in the other relevant sections of this chapter.

First, you need to create a configuration directory such as /var/lib/mysql-cluster, by executing the following command as the system root user:

shell> mkdir /var/lib/mysql-cluster

In this directory, create a file named config.ini that contains the following information. Substitute appropriate values for HostName and DataDir as necessary for your system.

# file "config.ini" - showing minimal setup consisting of 1 data node,
# 1 management server, and 3 MySQL servers.
# The empty default sections are not required, and are shown only for
# the sake of completeness.
# Data nodes must provide a hostname but MySQL Servers are not required
# to do so.
# If you don't know the hostname for your machine, use localhost.
# The DataDir parameter also has a default value, but it is recommended to
# set it explicitly.
# Note: DB, API, and MGM are aliases for NDBD, MYSQLD, and NDB_MGMD
# respectively. DB and API are deprecated and should not be used in new
# installations.
[NDBD DEFAULT]
NoOfReplicas= 1

[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]

[NDB_MGMD]
HostName= myhost.example.com

[NDBD]
HostName= myhost.example.com
DataDir= /var/lib/mysql-cluster

[MYSQLD]
[MYSQLD]
[MYSQLD]

You can now start the ndb_mgmd management server. By default, it attempts to read the config.ini file in its current working directory, so change location into the directory where the file is located and then invoke ndb_mgmd:

shell> cd /var/lib/mysql-cluster
shell> ndb_mgmd

Then start a single data node by running ndbd. When starting ndbd for a given data node for the very first time, you should use the --initial option as shown here:

shell> ndbd --initial

For subsequent ndbd starts, you will generally want to omit the --initial option:

shell> ndbd

The reason for omitting --initial on subsequent restarts is that this option causes ndbd to delete and re-create all existing data and log files (as well as all table metadata) for this data node. The one exception to this rule about not using --initial except for the first ndbd invocation is that you use it when restarting the cluster and restoring from backup after adding new data nodes.

By default, ndbd looks for the management server at localhost on port 1186.

Note: If you have installed MySQL from a binary tarball, you will need to specify the path of the ndb_mgmd and ndbd servers explicitly. (Normally, these will be found in /usr/local/mysql/bin.)

Finally, change location to the MySQL data directory (usually /usr/local/mysql/data or /var/lib/mysql), and make sure that the my.cnf file contains the option necessary to enable the NDB storage engine:

[mysqld]
ndbcluster

You can now start the MySQL server as usual:

shell> mysqld_safe --user=mysql &

Wait a moment to make sure the MySQL server is running properly. If you see the notice mysql ended, check the server's .err file to find out what went wrong.

If all has gone well so far, you now can start using the cluster. Connect to the server and verify that the NDBCLUSTER storage engine is enabled:

shell> mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.0.25-Max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW ENGINES\G
...
*************************** 12. row ***************************
Engine: NDBCLUSTER
Support: YES
Comment: Clustered, fault-tolerant, memory-based tables
*************************** 13. row ***************************
Engine: NDB
Support: YES
Comment: Alias for NDBCLUSTER
...

The row numbers shown in the preceding example output may be different from those shown on your system, depending upon how your server is configured.

Try to create an NDBCLUSTER table:

shell> mysql
mysql> USE test;
Database changed

mysql> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.09 sec)

mysql> SHOW CREATE TABLE ctest \G
*************************** 1. row ***************************
       Table: ctest
Create Table: CREATE TABLE `ctest` (
  `i` int(11) default NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

To check that your nodes were set up properly, start the management client:

shell> ndb_mgm

Use the SHOW command from within the management client to obtain a report on the cluster's status:

NDB> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)]     1 node(s)
id=2    @127.0.0.1  (Version: 3.5.3, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (Version: 3.5.3)

[mysqld(API)]   3 node(s)
id=3    @127.0.0.1  (Version: 3.5.3)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)

At this point, you have successfully set up a working MySQL Cluster. You can now store data in the cluster by using any table created with ENGINE=NDBCLUSTER or its alias ENGINE=NDB.

15.4.4. Configuration File

Configuring MySQL Cluster requires working with two files:

  • my.cnf: Specifies options for all MySQL Cluster executables. This file, with which you should be familiar from previous work with MySQL, must be accessible by each executable running in the cluster.

  • config.ini: This file is read only by the MySQL Cluster management server, which then distributes the information contained therein to all processes participating in the cluster. config.ini contains a description of each node involved in the cluster. This includes configuration parameters for data nodes and configuration parameters for connections between all nodes in the cluster. For a quick reference to the sections that can appear in this file, and what sorts of configuration parameters may be placed in each section, see Sections of the config.ini File.

We are continuously making improvements in Cluster configuration and attempting to simplify this process. Although we strive to maintain backward compatibility, there may be times when we introduce an incompatible change. In such cases we will try to let Cluster users know in advance if a change is not backward compatible. If you find such a change and we have not documented it, please report it in the MySQL bugs database using the instructions given in Section 1.8, “How to Report Bugs or Problems”.

15.4.4.1. Basic Example Configuration

To support MySQL Cluster, you will need to update my.cnf as shown in the following example. Note that the options shown here should not be confused with those that are used in config.ini files. You may also specify these parameters on the command line when invoking the executables.

# my.cnf
# example additions to my.cnf for MySQL Cluster
# (valid in MySQL 5.0)

# enable ndbcluster storage engine, and provide connectstring for
# management server host (default port is 1186)
[mysqld]
ndbcluster
ndb-connectstring=ndb_mgmd.mysql.com


# provide connectstring for management server host (default port: 1186)
[ndbd]
connect-string=ndb_mgmd.mysql.com

# provide connectstring for management server host (default port: 1186)
[ndb_mgm]
connect-string=ndb_mgmd.mysql.com

# provide location of cluster configuration file
[ndb_mgmd]
config-file=/etc/config.ini

(For more information on connectstrings, see Section 15.4.4.2, “The Cluster Connectstring”.)

# my.cnf
# example additions to my.cnf for MySQL Cluster
# (will work on all versions)

# enable ndbcluster storage engine, and provide connectstring for management
# server host to the default port 1186
[mysqld]
ndbcluster
ndb-connectstring=ndb_mgmd.mysql.com:1186

Important: Once you have started a mysqld process with the ndbcluster and ndb-connectstring parameters in the [mysqld] section of the my.cnf file as shown previously, you cannot execute any CREATE TABLE or ALTER TABLE statements without having actually started the cluster. Otherwise, these statements will fail with an error. This is by design.

You may also use a separate [mysql_cluster] section in the cluster my.cnf file for settings to be read and used by all executables:

# cluster-specific settings
[mysql_cluster]
ndb-connectstring=ndb_mgmd.mysql.com:1186

For additional variables that can be set in the my.cnf file, see Section 5.2.2, “Server System Variables”.

The configuration file is named config.ini by default. It is read by ndb_mgmd at startup and can be placed anywhere. Its location and name are specified by using --config-file=path_name on the ndb_mgmd command line. If the configuration file is not specified, ndb_mgmd by default tries to read a file named config.ini located in the current working directory.

Currently, the configuration file is in INI format, which consists of sections preceded by section headings (surrounded by square brackets), followed by the appropriate parameter names and values. One deviation from the standard INI format is that the parameter name and value can be separated by a colon (‘:’) as well as the equals sign (‘=’). Another deviation is that sections are not uniquely identified by section name. Instead, unique sections (such as two different nodes of the same type) are identified by a unique ID specified as a parameter within the section.

Default values are defined for most parameters, and can also be specified in config.ini. To create a default value section, simply add the word DEFAULT to the section name. For example, an [NDBD] section contains parameters that apply to a particular data node, whereas an [NDBD DEFAULT] section contains parameters that apply to all data nodes. Suppose that all data nodes should use the same data memory size. To configure them all, create an [NDBD DEFAULT] section that contains a DataMemory line to specify the data memory size.
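For instance, such a default section might look like this (the 512MB figure is purely illustrative):

[NDBD DEFAULT]
DataMemory=512M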

At a minimum, the configuration file must define the computers and nodes involved in the cluster and on which computers these nodes are located. An example of a simple configuration file for a cluster consisting of one management server, two data nodes and two MySQL servers is shown here:

# file "config.ini" - 2 data nodes and 2 SQL nodes
# This file is placed in the startup directory of ndb_mgmd (the
# management server)
# The first MySQL Server can be started from any host. The second
# can be started only on the host mysqld_5.mysql.com

[NDBD DEFAULT]
NoOfReplicas= 2
DataDir= /var/lib/mysql-cluster

[NDB_MGMD]
Hostname= ndb_mgmd.mysql.com
DataDir= /var/lib/mysql-cluster

[NDBD]
HostName= ndbd_2.mysql.com

[NDBD]
HostName= ndbd_3.mysql.com

[MYSQLD]
[MYSQLD]
HostName= mysqld_5.mysql.com

Note that each node has its own section in the config.ini file. For instance, this cluster has two data nodes, so the preceding configuration file contains two [NDBD] sections defining these nodes.

Sections of the config.ini File

There are six different sections that you can use in the config.ini configuration file, as described in the following list:

  • [COMPUTER]: Defines the cluster's computers.

  • [NDBD]: Defines the cluster's data nodes.

  • [MYSQLD]: Defines the cluster's MySQL server (SQL) nodes.

  • [NDB_MGMD]: Defines the cluster's management server node.

  • [TCP]: Defines TCP/IP connections between nodes in the cluster, TCP/IP being the default connection protocol.

  • [SHM]: Defines shared-memory connections between nodes.

You can define DEFAULT values for each section. All Cluster parameter names are case-insensitive, which differs from parameters specified in my.cnf or my.ini files.

15.4.4.2. The Cluster Connectstring

With the exception of the MySQL Cluster management server (ndb_mgmd), each node that is part of a MySQL Cluster requires a connectstring that points to the management server's location. This connectstring is used in establishing a connection to the management server as well as in performing other tasks depending on the node's role in the cluster. The syntax for a connectstring is as follows:

<connectstring> :=
    [<nodeid-specification>,]<host-specification>[,<host-specification>]

<nodeid-specification> := nodeid=node_id

<host-specification> := host_name[:port_num]

node_id is an integer larger than 1 which identifies a node in config.ini. host_name is a string representing a valid Internet host name or IP address. port_num is an integer referring to a TCP/IP port number.

example 1 (long):    "nodeid=2,myhost1:1100,myhost2:1100,192.168.0.3:1200"
example 2 (short):   "myhost1"

All nodes will use localhost:1186 as the default connectstring value if none is provided. If port_num is omitted from the connectstring, the default port is 1186. This port should always be available on the network because it has been assigned by IANA for this purpose (see http://www.iana.org/assignments/port-numbers for details).

By listing multiple <host-specification> values, it is possible to designate several redundant management servers. A cluster node will attempt to contact successive management servers on each host in the order specified, until a successful connection has been established.
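For example, a data node might be pointed at two redundant management servers on the command line (the host names here are hypothetical):

shell> ndbd --ndb-connectstring="mgm1.example.com:1186,mgm2.example.com:1186"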

There are a number of different ways to specify the connectstring:

  • Each executable has its own command-line option which enables specifying the management server at startup. (See the documentation for the respective executable.)

  • It is also possible to set the connectstring for all nodes in the cluster at once by placing it in a [mysql_cluster] section in the management server's my.cnf file.

  • For backward compatibility, two other options are available, using the same syntax:

    1. Set the NDB_CONNECTSTRING environment variable to contain the connectstring.

    2. Write the connectstring for each executable into a text file named Ndb.cfg and place this file in the executable's startup directory.

    However, these are now deprecated and should not be used for new installations.

The recommended method for specifying the connectstring is to set it on the command line or in the my.cnf file for each executable.

15.4.4.3. Defining Cluster Computers

The [COMPUTER] section has no real significance other than serving as a way to avoid the need of defining host names for each node in the system. All parameters mentioned here are required.

  • Id: This is an integer value, used to refer to the host computer elsewhere in the configuration file. This is not the same as the node ID.

  • HostName: This is the computer's hostname or IP address.
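A brief sketch of how a [COMPUTER] entry is referenced from a node section (the host name is hypothetical; the node parameters themselves are covered in the sections that follow):

[COMPUTER]
Id=1
HostName=ndb1.example.com

[NDBD]
ExecuteOnComputer=1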

15.4.4.4. Defining the Management Server

The [NDB_MGMD] section is used to configure the behavior of the management server. [MGM] can be used as an alias; the two section names are equivalent. All parameters in the following list are optional and assume their default values if omitted. Note: If neither the ExecuteOnComputer nor the HostName parameter is present, the default value localhost will be assumed for both.

  • Id: Each node in the cluster has a unique identity, which is represented by an integer value in the range 1 to 63 inclusive. This ID is used by all internal cluster messages for addressing the node.

  • ExecuteOnComputer: This refers to the Id set for one of the computers defined in a [COMPUTER] section of the config.ini file.

  • PortNumber: This is the port number on which the management server listens for configuration requests and management commands.

  • HostName: Specifying this parameter defines the hostname of the computer on which the management node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • LogDestination: This parameter specifies where to send cluster logging information. There are three options in this regard: CONSOLE, SYSLOG, and FILE:

    • CONSOLE outputs the log to stdout:

      CONSOLE
      
    • SYSLOG sends the log to a syslog facility, possible values being one of auth, authpriv, cron, daemon, ftp, kern, lpr, mail, news, syslog, user, uucp, local0, local1, local2, local3, local4, local5, local6, or local7.

      Note: Not every facility is necessarily supported by every operating system.

      SYSLOG:facility=syslog
      
    • FILE pipes the cluster log output to a regular file on the same machine. The following values can be specified:

      • filename: The name of the logfile.

      • maxsize: The maximum size (in bytes) to which the file can grow before logging rolls over to a new file. When this occurs, the old logfile is renamed by appending .N to the filename, where N is the next number not yet used with this name.

      • maxfiles: The maximum number of logfiles.

      FILE:filename=cluster.log,maxsize=1000000,maxfiles=6
      

      It is possible to specify multiple log destinations separated by semicolons as shown here:

      CONSOLE;SYSLOG:facility=local0;FILE:filename=/var/log/mgmd
      

      The default value for the FILE parameter is FILE:filename=ndb_node_id_cluster.log,maxsize=1000000,maxfiles=6, where node_id is the ID of the node.

  • ArbitrationRank: This parameter is used to define which nodes can act as arbitrators. Only MGM nodes and SQL nodes can be arbitrators. ArbitrationRank can take one of the following values:

    • 0: The node will never be used as an arbitrator.

    • 1: The node has high priority; that is, it will be preferred as an arbitrator over low-priority nodes.

    • 2: Indicates a low-priority node which will be used as an arbitrator only if a node with a higher priority is not available for that purpose.

    Normally, the management server should be configured as an arbitrator by setting its ArbitrationRank to 1 (the default value) and that of all SQL nodes to 0.

  • ArbitrationDelay: An integer value which causes the management server's responses to arbitration requests to be delayed by that number of milliseconds. By default, this value is 0; it is normally not necessary to change it.

  • DataDir: This specifies the directory where output files from the management server will be placed. These files include cluster log files, process output files, and the daemon's process ID (PID) file. (For log files, this location can be overridden by setting the FILE parameter for LogDestination as discussed previously in this section.)
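Pulling these parameters together, a management server section might be sketched as follows (the host name and log file path are assumptions, not recommendations):

[NDB_MGMD]
Id=1
HostName=mgm.example.com
PortNumber=1186
ArbitrationRank=1
DataDir=/var/lib/mysql-cluster
LogDestination=FILE:filename=/var/log/mgmd_cluster.log,maxsize=1000000,maxfiles=6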

15.4.4.5. Defining Data Nodes

The [NDBD] and [NDBD DEFAULT] sections are used to configure the behavior of the cluster's data nodes. There are many parameters which control buffer sizes, pool sizes, timeouts, and so forth. The only mandatory parameters are:

  • Either ExecuteOnComputer or HostName, which must be defined in the local [NDBD] section.

  • The parameter NoOfReplicas, which must be defined in the [NDBD DEFAULT] section, as it is common to all Cluster data nodes.

Most data node parameters are set in the [NDBD DEFAULT] section. Only those parameters explicitly stated as being able to set local values are allowed to be changed in the [NDBD] section. Where present, HostName, Id, and ExecuteOnComputer must be defined in the local [NDBD] section, and not in any other section of config.ini. In other words, settings for these parameters are specific to one data node.

For those parameters affecting memory usage or buffer sizes, it is possible to use K, M, or G as a suffix to indicate units of 1024, 1024×1024, or 1024×1024×1024. (For example, 100K means 100 × 1024 = 102400.) Parameter names and values are currently case-sensitive.
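For example, all three suffixes might be used in a single section as follows (the values are illustrative assumptions only):

[NDBD DEFAULT]
DataMemory=2G
IndexMemory=512M
UndoIndexBuffer=4096K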

Identifying Data Nodes

The Id value (that is, the data node identifier) can be allocated on the command line when the node is started or in the configuration file.

  • Id: This is the node ID used as the address of the node for all cluster internal messages. This is an integer in the range 1 to 63 inclusive. Each node in the cluster must have a unique identity.

  • ExecuteOnComputer: This refers to the Id set for one of the computers defined in a [COMPUTER] section.

  • HostName: Specifying this parameter defines the hostname of the computer on which the data node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • ServerPort (OBSOLETE)

    Each node in the cluster uses a port to connect to other nodes. This port is used also for non-TCP transporters in the connection setup phase. The default port is allocated dynamically in such a way as to ensure that no two nodes on the same computer receive the same port number, so it should not normally be necessary to specify a value for this parameter.

  • NoOfReplicas: This global parameter can be set only in the [NDBD DEFAULT] section, and defines the number of replicas for each table stored in the cluster. This parameter also specifies the size of node groups. A node group is a set of nodes all storing the same information.

    Node groups are formed implicitly. The first node group is formed by the set of data nodes with the lowest node IDs, the next node group by the set of the next lowest node identities, and so on. By way of example, assume that we have 4 data nodes and that NoOfReplicas is set to 2. The four data nodes have node IDs 2, 3, 4 and 5. Then the first node group is formed from nodes 2 and 3, and the second node group by nodes 4 and 5. It is important to configure the cluster in such a manner that nodes in the same node groups are not placed on the same computer because a single hardware failure would cause the entire cluster to crash.

    If no node IDs are provided, the order of the data nodes will be the determining factor for the node group. Whether or not explicit assignments are made, they can be viewed in the output of the management client's SHOW command.

    There is no default value for NoOfReplicas; the maximum possible value is 4.

  • DataDir: This parameter specifies the directory where trace files, log files, pid files and error logs are placed.

  • FileSystemPath: This parameter specifies the directory where all files created for metadata, REDO logs, UNDO logs and data files are placed. The default is the directory specified by DataDir. Note: This directory must exist before the ndbd process is initiated.

    The recommended directory hierarchy for MySQL Cluster includes /var/lib/mysql-cluster, under which a directory for the node's filesystem is created. The name of this subdirectory contains the node ID. For example, if the node ID is 2, this subdirectory is named ndb_2_fs.

  • BackupDataDir: This parameter specifies the directory in which backups are placed. If omitted, the default backup location is the directory named BACKUP under the location specified by the FileSystemPath parameter. (See above.)

Data Memory, Index Memory, and String Memory

DataMemory and IndexMemory are parameters specifying the size of memory segments used to store the actual records and their indexes. In setting values for these, it is important to understand how DataMemory and IndexMemory are used, as they usually need to be updated to reflect actual usage by the cluster:

  • DataMemory: This parameter defines the amount of space (in bytes) available for storing database records. The entire amount specified by this value is allocated in memory, so it is extremely important that the machine has sufficient physical memory to accommodate it.

    The memory allocated by DataMemory is used to store both the actual records and indexes. Each record is currently of fixed size. (Even VARCHAR columns are stored as fixed-width columns.) There is a 16-byte overhead on each record; an additional amount for each record is incurred because it is stored in a 32KB page with 128 bytes of page overhead (see below). There is also a small amount wasted per page due to the fact that each record is stored in only one page. The maximum record size is currently 8052 bytes.

    The memory space defined by DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Each table row is represented in the ordered index. A common error among users is to assume that all indexes are stored in the memory allocated by IndexMemory, but this is not the case: Only primary key and unique hash indexes use this memory; ordered indexes use the memory allocated by DataMemory. However, creating a primary key or unique hash index also creates an ordered index on the same keys, unless you specify USING HASH in the index creation statement. This can be verified by running ndb_desc -d db_name table_name in the management client.
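    As a hedged illustration, the following statement creates a table whose primary key has only a hash index and no ordered index (the table and column names are hypothetical):

      CREATE TABLE t1 (
          a INT NOT NULL,
          PRIMARY KEY (a) USING HASH
      ) ENGINE=NDBCLUSTER;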

    The memory space allocated by DataMemory consists of 32KB pages, which are allocated to table fragments. Each table is normally partitioned into the same number of fragments as there are data nodes in the cluster. Thus, for each node, there are the same number of fragments as are set in NoOfReplicas.

    Once a page has been allocated, it is currently not possible to return it to the pool of free pages, except by deleting the table. (This also means that pages, once allocated to a given table, cannot be used by other tables.) Performing a node recovery also compresses the partition because all records are inserted into empty partitions from other live nodes.

    The memory space also contains UNDO information: For each update, a copy of the unaltered record is allocated in the DataMemory. There is also a reference to each copy in the ordered table indexes. Unique hash indexes are updated only when the unique index columns are updated, in which case a new entry in the index table is inserted and the old entry is deleted upon commit. For this reason, it is also necessary to allocate enough memory to handle the largest transactions performed by applications using the cluster. In any case, performing a few large transactions holds no advantage over using many smaller ones, for the following reasons:

    • Large transactions are not any faster than smaller ones

    • Large transactions increase the number of operations that are lost and must be repeated in event of transaction failure

    • Large transactions use more memory

    The default value for DataMemory is 80MB; the minimum is 1MB. There is no maximum size, but in reality the maximum size has to be adapted so that the process does not start swapping when the limit is reached. This limit is determined by the amount of physical RAM available on the machine and by the amount of memory that the operating system may commit to any one process. 32-bit operating systems are generally limited to 2–4GB per process; 64-bit operating systems can use more. For large databases, it may be preferable to use a 64-bit operating system for this reason. In addition, it is also possible to run more than one ndbd process per machine, and this may prove advantageous on machines with multiple CPUs.

  • IndexMemory: This parameter controls the amount of storage used for hash indexes in MySQL Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. Note that when defining a primary key and a unique index, two indexes will be created, one of which is a hash index used for all tuple accesses as well as lock handling. It is also used to enforce unique constraints.

    The size of the hash index is 25 bytes per record, plus the size of the primary key. For primary keys larger than 32 bytes another 8 bytes is added.

    The default value for IndexMemory is 18MB. The minimum is 1MB.

  • StringMemory: This parameter determines how much memory is allocated for strings such as table names, and is specified in an [NDBD] or [NDBD DEFAULT] section of the config.ini file. A value between 0 and 100 inclusive is interpreted as a percent of the maximum default value, which is calculated based on a number of factors including the number of tables, maximum table name size, maximum size of .FRM files, MaxNoOfTriggers, maximum column name size, and maximum default column value. In general it is safe to assume that the maximum default value is approximately 5 MB for a MySQL Cluster having 1000 tables.

    A value greater than 100 is interpreted as a number of bytes.

    In MySQL 5.0, the default value is 100 — that is, 100 percent of the default maximum, or roughly 5 MB. It is possible to reduce this value safely, but it should never be less than 5 percent. If you encounter Error 773 Out of string memory, please modify StringMemory config parameter: Permanent error: Schema error, this means that you have set the StringMemory value too low. 25 (25 percent) is not excessive, and should prevent this error from recurring in all but the most extreme conditions, as when there are hundreds or thousands of tables with names whose lengths and columns whose number approach their permitted maximums.
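    Following the guidance above, a conservative setting might be sketched like this:

      [NDBD DEFAULT]
      StringMemory=25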

The following example illustrates how memory is used for a table. Consider this table definition:

CREATE TABLE example (
  a INT NOT NULL,
  b INT NOT NULL,
  c INT NOT NULL,
  PRIMARY KEY(a),
  UNIQUE(b)
) ENGINE=NDBCLUSTER;

For each record, there are 12 bytes of data plus 12 bytes overhead. Having no nullable columns saves 4 bytes of overhead. In addition, we have two ordered indexes on columns a and b consuming roughly 10 bytes each per record. There is a primary key hash index on the base table using roughly 29 bytes per record. The unique constraint is implemented by a separate table with b as primary key and a as a column. This other table consumes an additional 29 bytes of index memory per record in the example table, as well as 8 bytes of record data plus 12 bytes of overhead.

Thus, for one million records, we need 58MB for index memory to handle the hash indexes for the primary key and the unique constraint. We also need 64MB for the records of the base table and the unique index table, plus the two ordered index tables.
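The arithmetic behind those totals, per record and then for one million records, can be sketched as follows (using the rough per-record sizes quoted above):

IndexMemory:  29 (base table PK hash) + 29 (unique index table) = 58 bytes
              1,000,000 records × 58 bytes ≈ 58MB

DataMemory:   (12 + 12) base table data plus overhead
            + ( 8 + 12) unique index table data plus overhead
            + 2 × 10    ordered indexes on a and b
            = 64 bytes
              1,000,000 records × 64 bytes ≈ 64MB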

You can see that hash indexes take up a fair amount of memory space; however, they provide very fast access to the data in return. They are also used in MySQL Cluster to handle uniqueness constraints.

Currently, the only partitioning algorithm is hashing and ordered indexes are local to each node. Thus, ordered indexes cannot be used to handle uniqueness constraints in the general case.

An important point for both DataMemory and IndexMemory is that the total database size is the sum of all data memory and all index memory for each node group. Each node group is used to store replicated information, so if there are four nodes with two replicas, there will be two node groups. Thus, the total data memory available is 2 × DataMemory for each data node.

It is highly recommended that DataMemory and IndexMemory be set to the same values for all nodes. Data distribution is even over all nodes in the cluster, so the maximum amount of space available for any node can be no greater than that of the smallest node in the cluster.

DataMemory and IndexMemory can be changed, but decreasing either of these can be risky; doing so can easily lead to a node or even an entire MySQL Cluster that is unable to restart due to there being insufficient memory space. Increasing these values should be acceptable, but it is recommended that such upgrades are performed in the same manner as a software upgrade, beginning with an update of the configuration file, and then restarting the management server followed by restarting each data node in turn.

Updates do not increase the amount of index memory used. Inserts take effect immediately; however, rows are not actually deleted until the transaction is committed.

Transaction Parameters

The next three parameters that we discuss are important because they affect the number of parallel transactions and the sizes of transactions that can be handled by the system. MaxNoOfConcurrentTransactions sets the number of parallel transactions possible in a node. MaxNoOfConcurrentOperations sets the number of records that can be in update phase or locked simultaneously.

Both of these parameters (especially MaxNoOfConcurrentOperations) are likely targets for users setting specific values and not using the default value. The default value is set for systems using small transactions, to ensure that these do not use excessive memory.

  • MaxNoOfConcurrentTransactions: For each active transaction in the cluster there must be a record in one of the cluster nodes. The task of coordinating transactions is spread among the nodes. The total number of transaction records in the cluster is the number of transactions in any given node times the number of nodes in the cluster.

    Transaction records are allocated to individual MySQL servers. Normally, there is at least one transaction record allocated per connection that uses any table in the cluster. For this reason, one should ensure that there are more transaction records in the cluster than there are concurrent connections to all MySQL servers in the cluster.

    This parameter must be set to the same value for all cluster nodes.

    Changing this parameter is never safe and doing so can cause a cluster to crash. When a node crashes, one of the nodes (actually the oldest surviving node) will build up the transaction state of all transactions ongoing in the crashed node at the time of the crash. It is thus important that this node has as many transaction records as the failed node.

    The default value is 4096.

  • MaxNoOfConcurrentOperations: It is a good idea to adjust the value of this parameter according to the size and number of transactions. When performing transactions of only a few operations each and not involving a great many records, there is no need to set this parameter very high. When performing large transactions involving many records, it is necessary to set this parameter higher.

    Records are kept for each transaction updating cluster data, both in the transaction coordinator and in the nodes where the actual updates are performed. These records contain state information needed to find UNDO records for rollback, lock queues, and other purposes.

    This parameter should be set to the number of records to be updated simultaneously in transactions, divided by the number of cluster data nodes. For example, in a cluster which has four data nodes and which is expected to handle 1,000,000 concurrent updates using transactions, you should set this value to 1000000 / 4 = 250000.

    Read queries which set locks also cause operation records to be created. Some extra space is allocated within individual nodes to accommodate cases where the distribution is not perfect over the nodes.

    When queries make use of the unique hash index, there are actually two operation records used per record in the transaction. The first record represents the read in the index table and the second handles the operation on the base table.

    The default value is 32768.

    This parameter actually handles two values that can be configured separately. The first of these specifies how many operation records are to be placed with the transaction coordinator. The second part specifies how many operation records are to be local to the database.

    A very large transaction performed on an eight-node cluster requires as many operation records in the transaction coordinator as there are reads, updates, and deletes involved in the transaction. However, the operation records of the local database part are spread over all eight nodes. Thus, if it is necessary to configure the system for one very large transaction, it is a good idea to configure the two parts separately. MaxNoOfConcurrentOperations will always be used to calculate the number of operation records in the transaction coordinator portion of the node.

    It is also important to have an idea of the memory requirements for operation records. These consume about 1KB per record.

  • MaxNoOfLocalOperations: By default, this parameter is calculated as 1.1 × MaxNoOfConcurrentOperations. This fits systems with many simultaneous transactions, none of them being very large. If there is a need to handle one very large transaction at a time and there are many nodes, it is a good idea to override the default value by explicitly specifying this parameter.
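Applying the sizing rule quoted above (a four-node cluster expected to handle 1,000,000 concurrent updates), the resulting settings might be sketched as follows; the figures are illustrative, not recommendations:

[NDBD DEFAULT]
MaxNoOfConcurrentOperations=250000
# explicit override of the default 1.1 × MaxNoOfConcurrentOperations
MaxNoOfLocalOperations=275000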

Transaction Temporary Storage

The next set of parameters is used to determine temporary storage when executing a statement that is part of a Cluster transaction. All records are released when the statement is completed and the cluster is waiting for the commit or rollback.

The default values for these parameters are adequate for most situations. However, users with a need to support transactions involving large numbers of rows or operations may need to increase these values to enable better parallelism in the system, whereas users whose applications require relatively small transactions can decrease the values to save memory.

  • MaxNoOfConcurrentIndexOperations: For queries using a unique hash index, another temporary set of operation records is used during a query's execution phase. This parameter sets the size of that pool of records. Thus, this record is allocated only while executing a part of a query. As soon as this part has been executed, the record is released. The state needed to handle aborts and commits is handled by the normal operation records, where the pool size is set by the parameter MaxNoOfConcurrentOperations.

    The default value of this parameter is 8192. Only in rare cases of extremely high parallelism using unique hash indexes should it be necessary to increase this value. Using a smaller value is possible and can save memory if the DBA is certain that a high degree of parallelism is not required for the cluster.

  • MaxNoOfFiredTriggers: The default value of MaxNoOfFiredTriggers is 4000, which is sufficient for most situations. In some cases it can even be decreased if the DBA feels certain the need for parallelism in the cluster is not high.

    A record is created when an operation is performed that affects a unique hash index. Inserting or deleting a record in a table with unique hash indexes or updating a column that is part of a unique hash index fires an insert or a delete in the index table. The resulting record is used to represent this index table operation while waiting for the original operation that fired it to complete. This operation is short-lived but can still require a large number of records in its pool for situations with many parallel write operations on a base table containing a set of unique hash indexes.

  • TransactionBufferMemory: The memory affected by this parameter is used for tracking operations fired when updating index tables and reading unique indexes. This memory is used to store the key and column information for these operations. It is only very rarely that the value for this parameter needs to be altered from the default.

    The default value for TransactionBufferMemory is 1MB.

    Normal read and write operations use a similar buffer, whose usage is even more short-lived. The compile-time parameter ZATTRBUF_FILESIZE (found in ndb/src/kernel/blocks/Dbtc/Dbtc.hpp) is set to 4000 × 128 bytes (500KB). A similar buffer for key information, ZDATABUF_FILESIZE (also in Dbtc.hpp), contains 4000 × 16 = 62.5KB of buffer space. Dbtc is the module that handles transaction coordination.

Scans and Buffering

There are additional parameters in the Dblqh module (in ndb/src/kernel/blocks/Dblqh/Dblqh.hpp) that affect reads and updates. These include ZATTRINBUF_FILESIZE, set by default to 10000 × 128 bytes (1250KB), and ZDATABUF_FILE_SIZE, set by default to 10000 × 16 bytes (roughly 156KB) of buffer space. To date, there have been neither any reports from users nor any results from our own extensive tests suggesting that either of these compile-time limits should be increased.

  • MaxNoOfConcurrentScans: This parameter is used to control the number of parallel scans that can be performed in the cluster. Each transaction coordinator can handle the number of parallel scans defined for this parameter. Each scan query is performed by scanning all partitions in parallel. Each partition scan uses a scan record in the node where the partition is located, the number of records being the value of this parameter times the number of nodes. The cluster should be able to sustain MaxNoOfConcurrentScans scans concurrently from all nodes in the cluster.

    Scans are actually performed in two cases. The first of these cases occurs when no hash or ordered indexes exists to handle the query, in which case the query is executed by performing a full table scan. The second case is encountered when there is no hash index to support the query but there is an ordered index. Using the ordered index means executing a parallel range scan. The order is kept on the local partitions only, so it is necessary to perform the index scan on all partitions.

    The default value of MaxNoOfConcurrentScans is 256. The maximum value is 500.

    This parameter specifies the number of scans possible in the transaction coordinator. If the number of local scan records is not provided, it is calculated as the product of MaxNoOfConcurrentScans and the number of data nodes in the system.

  • MaxNoOfLocalScans: Specifies the number of local scan records if many scans are not fully parallelized.

  • BatchSizePerLocalScan: This parameter is used to calculate the number of lock records which must be present to handle many concurrent scan operations.

    The default value is 64; this value has a strong connection to the ScanBatchSize defined in the SQL nodes.

  • LongMessageBuffer: This is an internal buffer used for passing messages within individual nodes and between nodes. Although it is highly unlikely that this would need to be changed, it is configurable. By default, it is set to 1MB.

Logging and Checkpointing

These parameters control log and checkpoint behavior.

  • NoOfFragmentLogFiles: This parameter sets the size of the node's REDO log files. REDO log files are organized in a ring. It is extremely important that the first and last log files (sometimes referred to as the “head” and “tail” log files, respectively) do not meet. When these approach one another too closely, the node begins aborting all transactions encompassing updates due to a lack of room for new log records.

    A log record is not removed until three local checkpoints have been completed since that log record was inserted. Checkpointing frequency is determined by its own set of configuration parameters discussed elsewhere in this chapter.

    How these parameters interact and proposals for how to configure them are discussed in Section 15.4.6, “Configuring Parameters for Local Checkpoints”.

    The default parameter value is 8, which means 8 sets of 4 16MB files for a total of 512MB. In other words, REDO log space must be allocated in blocks of 64MB. In scenarios requiring a great many updates, the value for NoOfFragmentLogFiles may need to be set as high as 300 or even higher to provide sufficient space for REDO logs.

    If the checkpointing is slow and there are so many writes to the database that the log files are full and the log tail cannot be cut without jeopardizing recovery, all updating transactions are aborted with internal error code 410 (Out of log file space temporarily). This condition prevails until a checkpoint has completed and the log tail can be moved forward.

    Important: This parameter cannot be changed “on the fly”; you must restart the node using --initial. If you wish to change this value for a running cluster, you can do so via a rolling node restart.

  • MaxNoOfSavedMessages: This parameter sets the maximum number of trace files that are kept before overwriting old ones. Trace files are generated when, for whatever reason, the node crashes.

    The default is 25 trace files.

Metadata Objects

The next set of parameters defines pool sizes for metadata objects, used to define the maximum number of attributes, tables, indexes, and trigger objects used by indexes, events, and replication between clusters. Note that these act merely as “suggestions” to the cluster, and any that are not specified revert to the default values shown.

  • MaxNoOfAttributes: Defines the number of attributes that can be defined in the cluster.

    The default value is 1000, with the minimum possible value being 32. The maximum is 4294967039. Each attribute consumes around 200 bytes of storage per node due to the fact that all metadata is fully replicated on the servers.

    When setting MaxNoOfAttributes, it is important to prepare in advance for any ALTER TABLE statements that you might want to perform in the future. This is due to the fact that, during the execution of ALTER TABLE on a Cluster table, 3 times the number of attributes as in the original table are used. For example, if a table requires 100 attributes, and you want to be able to alter it later, you need to set the value of MaxNoOfAttributes to 300. Assuming that you can create all desired tables without any problems, a good rule of thumb is to add two times the number of attributes in the largest table to MaxNoOfAttributes to be sure. You should also verify that this number is sufficient by trying an actual ALTER TABLE after configuring the parameter. If this is not successful, increase MaxNoOfAttributes by another multiple of the original value and test it again.

  • MaxNoOfTables: A table object is allocated for each table, unique hash index, and ordered index. This parameter sets the maximum number of table objects for the cluster as a whole.

    For each attribute that has a BLOB data type an extra table is used to store most of the BLOB data. These tables also must be taken into account when defining the total number of tables.

    The default value of this parameter is 128. The minimum is 8 and the maximum is 1600. Each table object consumes approximately 20KB per node.

  • MaxNoOfOrderedIndexes: For each ordered index in the cluster, an object is allocated describing what is being indexed and its storage segments. By default, each index so defined also defines an ordered index. Each unique index and primary key has both an ordered index and a hash index.

    The default value of this parameter is 128. Each object consumes approximately 10KB of data per node.

  • MaxNoOfUniqueHashIndexes: For each unique index that is not a primary key, a special table is allocated that maps the unique key to the primary key of the indexed table. By default, an ordered index is also defined for each unique index. To prevent this, you must specify the USING HASH option when defining the unique index.

    The default value is 64. Each index consumes approximately 15KB per node.

  • MaxNoOfTriggers: Internal update, insert, and delete triggers are allocated for each unique hash index. (This means that three triggers are created for each unique hash index.) However, an ordered index requires only a single trigger object. Backups also use three trigger objects for each normal table in the cluster.

    This parameter sets the maximum number of trigger objects in the cluster.

    The default value is 768.

  • MaxNoOfIndexes: This parameter is deprecated in MySQL 5.0; you should use MaxNoOfOrderedIndexes and MaxNoOfUniqueHashIndexes instead.

    This parameter is used only by unique hash indexes. There needs to be one record in this pool for each unique hash index defined in the cluster.

    The default value of this parameter is 128.
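As a sketch, these pool sizes might be raised together for a schema with many tables and indexes (the values are illustrative assumptions only):

[NDBD DEFAULT]
MaxNoOfAttributes=4096
MaxNoOfTables=256
MaxNoOfOrderedIndexes=256
MaxNoOfUniqueHashIndexes=128
MaxNoOfTriggers=1536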

Boolean Parameters

The behavior of data nodes is also affected by a set of parameters taking on boolean values. These parameters can each be specified as TRUE by setting them equal to 1 or Y, and as FALSE by setting them equal to 0 or N.

  • LockPagesInMainMemory: For a number of operating systems, including Solaris and Linux, it is possible to lock a process into memory and so avoid any swapping to disk. This can be used to help guarantee the cluster's real-time characteristics.

    This feature is disabled by default.

  • StopOnError: This parameter specifies whether an ndbd process should exit or perform an automatic restart when an error condition is encountered.

    This feature is enabled by default.

  • Diskless: It is possible to specify MySQL Cluster tables as diskless, meaning that tables are not checkpointed to disk and that no logging occurs. Such tables exist only in main memory. A consequence of using diskless tables is that neither the tables nor the records in those tables survive a crash. However, when operating in diskless mode, it is possible to run ndbd on a diskless computer.

    Important: This feature causes the entire cluster to operate in diskless mode.

    When this feature is enabled, Cluster online backup is disabled. In addition, a partial start of the cluster is not possible.

    Diskless is disabled by default.

  • RestartOnErrorInsert: This feature is accessible only when building the debug version where it is possible to insert errors in the execution of individual blocks of code as part of testing.

    This feature is disabled by default.
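For example, boolean parameters can be switched on and off as follows (a sketch only; recall that enabling Diskless affects the entire cluster):

[NDBD DEFAULT]
LockPagesInMainMemory=1
StopOnError=Y
Diskless=0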

Controlling Timeouts, Intervals, and Disk Paging

There are a number of parameters specifying timeouts and intervals between various actions in Cluster data nodes. Most of the timeout values are specified in milliseconds. Any exceptions to this are mentioned where applicable.

  • TimeBetweenWatchDogCheck: To prevent the main thread from getting stuck in an endless loop at some point, a “watchdog” thread checks the main thread. This parameter specifies the number of milliseconds between checks. If the process remains in the same state after three checks, the watchdog thread terminates it.

    This parameter can easily be changed for purposes of experimentation or to adapt to local conditions. It can be specified on a per-node basis although there seems to be little reason for doing so.

    The default timeout is 4000 milliseconds (4 seconds).

  • StartPartialTimeout: This parameter specifies how long the Cluster waits for all data nodes to come up before the cluster initialization routine is invoked. This timeout is used to avoid a partial Cluster startup whenever possible.

    The default value is 30000 milliseconds (30 seconds). 0 disables the timeout. In other words, the cluster may start only if all nodes are available.

  • StartPartitionedTimeout: If the cluster is ready to start after waiting for StartPartialTimeout milliseconds but is still possibly in a partitioned state, the cluster waits until this timeout has also passed.

    The default timeout is 60000 milliseconds (60 seconds).

  • StartFailureTimeout: If a data node has not completed its startup sequence within the time specified by this parameter, the node startup fails. Setting this parameter to 0 means that no data node timeout is applied.

    The default value is 60000 milliseconds (60 seconds). For data nodes containing extremely large amounts of data, this parameter should be increased. For example, in the case of a data node containing several gigabytes of data, a period as long as 10–15 minutes (that is, 600,000 to 1,000,000 milliseconds) might be required to perform a node restart.

  • HeartbeatIntervalDbDb: One of the primary methods of discovering failed nodes is by the use of heartbeats. This parameter states how often heartbeat signals are sent and how often to expect to receive them. After missing three heartbeat intervals in a row, the node is declared dead. Thus, the maximum time for discovering a failure through the heartbeat mechanism is four times the heartbeat interval.

    The default heartbeat interval is 1500 milliseconds (1.5 seconds). This parameter must not be changed drastically and should not vary widely between nodes. If one node uses 5000 milliseconds and the node watching it uses 1000 milliseconds, obviously the node will be declared dead very quickly. This parameter can be changed during an online software upgrade, but only in small increments.

  • HeartbeatIntervalDbApi: Each data node sends heartbeat signals to each MySQL server (SQL node) to ensure that it remains in contact. If a MySQL server fails to send a heartbeat in time, it is declared “dead,” in which case all ongoing transactions are completed and all resources released. The SQL node cannot reconnect until all activities initiated by the previous MySQL instance have been completed. The three-heartbeat criteria for this determination are the same as described for HeartbeatIntervalDbDb.

    The default interval is 1500 milliseconds (1.5 seconds). This interval can vary between individual data nodes because each data node watches the MySQL servers connected to it, independently of all other data nodes.

  • TimeBetweenLocalCheckpoints: This parameter is an exception in that it does not specify a time to wait before starting a new local checkpoint; rather, it is used to ensure that local checkpoints are not performed in a cluster where relatively few updates are taking place. In most clusters with high update rates, it is likely that a new local checkpoint is started immediately after the previous one has been completed.

    The size of all write operations executed since the start of the previous local checkpoints is added. This parameter is also exceptional in that it is specified as the base-2 logarithm of the number of 4-byte words, so that the default value 20 means 4MB (4 × 2^20 bytes) of write operations, 21 would mean 8MB, and so on up to a maximum value of 31, which equates to 8GB of write operations.

    All the write operations in the cluster are added together. Setting TimeBetweenLocalCheckpoints to 6 or less means that local checkpoints will be executed continuously without pause, independent of the cluster's workload.
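    Spelling out the logarithmic encoding as a quick worked check of the values quoted above:

      threshold in bytes = 4 × 2^value
        value 20:  4 × 2^20 = 4MB  (the default)
        value 21:  4 × 2^21 = 8MB
        value 31:  4 × 2^31 = 8GB  (the maximum)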

  • TimeBetweenGlobalCheckpoints: When a transaction is committed, it is committed in main memory in all nodes on which the data is mirrored. However, transaction log records are not flushed to disk as part of the commit. The reasoning behind this behavior is that having the transaction safely committed on at least two autonomous host machines should meet reasonable standards for durability.

    It is also important to ensure that even the worst of cases — a complete crash of the cluster — is handled properly. To guarantee that this happens, all transactions taking place within a given interval are put into a global checkpoint, which can be thought of as a set of committed transactions that has been flushed to disk. In other words, as part of the commit process, a transaction is placed in a global checkpoint group. Later, this group's log records are flushed to disk, and then the entire group of transactions is safely committed to disk on all computers in the cluster.

    This parameter defines the interval between global checkpoints. The default is 2000 milliseconds.

  • TimeBetweenInactiveTransactionAbortCheck: Timeout handling is performed by checking a timer on each transaction once for every interval specified by this parameter. Thus, if this parameter is set to 1000 milliseconds, every transaction will be checked for timing out once per second.

    The default value is 1000 milliseconds (1 second).

  • TransactionInactiveTimeout: This parameter states the maximum time that is permitted to lapse between operations in the same transaction before the transaction is aborted.

    The default for this parameter is zero (no timeout). For a real-time database that needs to ensure that no transaction keeps locks for too long, this parameter should be set to a much smaller value. The unit is milliseconds.

  • TransactionDeadlockDetectionTimeout: When a node executes a query involving a transaction, the node waits for the other nodes in the cluster to respond before continuing. A failure to respond can occur for any of the following reasons:

    • The node is “dead”

    • The operation has entered a lock queue

    • The node requested to perform the action could be heavily overloaded.

    This timeout parameter states how long the transaction coordinator waits for query execution by another node before aborting the transaction, and is important for both node failure handling and deadlock detection. Setting it too high can cause undesirable behavior in situations involving deadlocks and node failure.

    The default timeout value is 1200 milliseconds (1.2 seconds).

  • NoOfDiskPagesToDiskAfterRestartTUP: When executing a local checkpoint, the algorithm flushes all data pages to disk. Merely doing so as quickly as possible without any moderation is likely to impose excessive loads on processors, networks, and disks. To control the write speed, this parameter specifies how many pages per 100 milliseconds are to be written. In this context, a “page” is defined as 8KB. This parameter is specified in units of 80KB per second, so setting NoOfDiskPagesToDiskAfterRestartTUP to a value of 20 entails writing 1.6MB in data pages to disk each second during a local checkpoint. This value includes the writing of UNDO log records for data pages. That is, this parameter handles the limitation of writes from data memory. UNDO log records for index pages are handled by the parameter NoOfDiskPagesToDiskAfterRestartACC. (See the entry for IndexMemory for information about index pages.)

    In short, this parameter specifies how quickly to execute local checkpoints. It operates in conjunction with NoOfFragmentLogFiles, DataMemory, and IndexMemory.

    For more information about the interaction between these parameters and possible strategies for choosing appropriate values for them, see Section 15.4.6, “Configuring Parameters for Local Checkpoints”.

    The default value is 40 (3.2MB of data pages per second).

  • NoOfDiskPagesToDiskAfterRestartACC: This parameter uses the same units as NoOfDiskPagesToDiskAfterRestartTUP and acts in a similar fashion, but limits the speed of writing index pages from index memory.

    The default value of this parameter is 20 (1.6MB of index memory pages per second).

  • NoOfDiskPagesToDiskDuringRestartTUP: This parameter is used in a fashion similar to NoOfDiskPagesToDiskAfterRestartTUP and NoOfDiskPagesToDiskAfterRestartACC, only it does so with regard to local checkpoints executed in the node when a node is restarting. A local checkpoint is always performed as part of all node restarts. During a node restart it is possible to write to disk at a higher speed than at other times, because fewer activities are being performed in the node.

    This parameter covers pages written from data memory.

    The default value is 40 (3.2MB per second).

  • NoOfDiskPagesToDiskDuringRestartACC: Controls the number of index memory pages that can be written to disk during the local checkpoint phase of a node restart.

    As with NoOfDiskPagesToDiskAfterRestartTUP and NoOfDiskPagesToDiskAfterRestartACC, values for this parameter are expressed in terms of 8KB pages written per 100 milliseconds (80KB/second).

    The default value is 20 (1.6MB per second).

  • ArbitrationTimeout: This parameter specifies how long data nodes wait for a response from the arbitrator to an arbitration message. If this is exceeded, the network is assumed to have split.

    The default value is 1000 milliseconds (1 second).
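
The timeout and checkpoint write-speed parameters just described might be set in a config.ini file as shown in the following sketch. The values given here are simply the defaults discussed above, shown only to make the units concrete; appropriate values depend entirely on your application:

# Illustrative [NDBD DEFAULT] fragment; all values shown are the defaults.
[NDBD DEFAULT]
# Time permitted between operations in a transaction (0 = no timeout)
TransactionInactiveTimeout=0
# How long the transaction coordinator waits before aborting (deadlock detection)
TransactionDeadlockDetectionTimeout=1200
# Data pages written per 100 milliseconds during an LCP (40 * 80KB = 3.2MB/second)
NoOfDiskPagesToDiskAfterRestartTUP=40
# Index pages written per 100 milliseconds during an LCP (20 * 80KB = 1.6MB/second)
NoOfDiskPagesToDiskAfterRestartACC=20
# Time data nodes wait for a response from the arbitrator
ArbitrationTimeout=1000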

Buffering and Logging

Several configuration parameters corresponding to former compile-time parameters are also available. These enable the advanced user to have more control over the resources used by node processes and to adjust various buffer sizes as needed.

These buffers are used as front ends to the file system when writing log records to disk. If the node is running in diskless mode, these parameters can be set to their minimum values without penalty due to the fact that disk writes are “faked” by the storage engine's filesystem abstraction layer.

  • UndoIndexBuffer: The UNDO index buffer, whose size is set by this parameter, is used during local checkpoints. The NDB storage engine uses a recovery scheme based on checkpoint consistency in conjunction with an operational REDO log. To produce a consistent checkpoint without blocking the entire system for writes, UNDO logging is done while performing the local checkpoint. UNDO logging is activated on a single table fragment at a time. This optimization is possible because tables are stored entirely in main memory.

    The UNDO index buffer is used for the updates on the primary key hash index. Inserts and deletes rearrange the hash index; the NDB storage engine writes UNDO log records that map all physical changes to an index page so that they can be undone at system restart. It also logs all active insert operations for each fragment at the start of a local checkpoint.

    Reads and updates set lock bits and update a header in the hash index entry. These changes are handled by the page-writing algorithm to ensure that these operations need no UNDO logging.

    This buffer is 2MB by default. The minimum value is 1MB, which is sufficient for most applications. For applications doing extremely large or numerous inserts and deletes together with large transactions and large primary keys, it may be necessary to increase the size of this buffer. If this buffer is too small, the NDB storage engine issues internal error code 677 (Index UNDO buffers overloaded).

    Important: It is not safe to decrease the value of this parameter during a rolling restart.

  • UndoDataBuffer: This parameter sets the size of the UNDO data buffer, which performs a function similar to that of the UNDO index buffer, except the UNDO data buffer is used with regard to data memory rather than index memory. This buffer is used during the local checkpoint phase of a fragment for inserts, deletes, and updates.

    Because UNDO log entries tend to grow larger as more operations are logged, this buffer is also larger than its index memory counterpart, with a default value of 16MB.

    This amount of memory may be unnecessarily large for some applications. In such cases, it is possible to decrease this size to a minimum of 1MB.

    It is rarely necessary to increase the size of this buffer. If there is such a need, it is a good idea to check whether the disks can actually handle the load caused by database update activity. A lack of sufficient disk space cannot be overcome by increasing the size of this buffer.

    If this buffer is too small and gets congested, the NDB storage engine issues internal error code 891 (Data UNDO buffers overloaded).

    Important: It is not safe to decrease the value of this parameter during a rolling restart.

  • RedoBuffer: All update activities also need to be logged. The REDO log makes it possible to replay these updates whenever the system is restarted. The NDB recovery algorithm uses a “fuzzy” checkpoint of the data together with the UNDO log, and then applies the REDO log to play back all changes up to the restoration point.

    RedoBuffer sets the size of the buffer in which the REDO log is written, and is 8MB by default. The minimum value is 1MB.

    If this buffer is too small, the NDB storage engine issues error code 1221 (REDO log buffers overloaded).

    Important: It is not safe to decrease the value of this parameter during a rolling restart.
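
As a sketch, a config.ini fragment setting all three of these buffers explicitly to their default sizes follows. There is rarely a reason to change these values unless you encounter the error codes noted above:

# Illustrative [NDBD DEFAULT] fragment; all values shown are the defaults.
[NDBD DEFAULT]
# UNDO logging for the primary key hash index (error 677 if too small)
UndoIndexBuffer=2M
# UNDO logging for data memory (error 891 if too small)
UndoDataBuffer=16M
# Buffer for REDO log writes (error 1221 if too small)
RedoBuffer=8M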

Controlling Log Messages

In managing the cluster, it is very important to be able to control the number of log messages sent for various event types to stdout. For each event category, there are 16 possible event levels (numbered 0 through 15). Setting event reporting for a given event category to level 15 means all event reports in that category are sent to stdout; setting it to 0 means that there will be no event reports made in that category.

By default, only the startup message is sent to stdout, with the remaining event reporting level defaults being set to 0. The reason for this is that these messages are also sent to the management server's cluster log.

An analogous set of levels can be set for the management client to determine which event levels to record in the cluster log.

  • LogLevelStartup: The reporting level for events generated during startup of the process.

    The default level is 1.

  • LogLevelShutdown: The reporting level for events generated as part of graceful shutdown of a node.

    The default level is 0.

  • LogLevelStatistic: The reporting level for statistical events such as number of primary key reads, number of updates, number of inserts, information relating to buffer usage, and so on.

    The default level is 0.

  • LogLevelCheckpoint: The reporting level for events generated by local and global checkpoints.

    The default level is 0.

  • LogLevelNodeRestart: The reporting level for events generated during node restart.

    The default level is 0.

  • LogLevelConnection: The reporting level for events generated by connections between cluster nodes.

    The default level is 0.

  • LogLevelError: The reporting level for events generated by errors and warnings by the cluster as a whole. These errors do not cause any node failure but are still considered worth reporting.

    The default level is 0.

  • LogLevelInfo: The reporting level for events generated for information about the general state of the cluster.

    The default level is 0.
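
For example, a data node section that reports startup events in full detail and raises statistical and error reporting above their defaults might look like the following; the level values here are purely illustrative:

# Illustrative event-reporting levels for a data node.
[NDBD]
# Send all startup event reports to stdout
LogLevelStartup=15
# Report some statistical events (key reads, updates, buffer usage, and so on)
LogLevelStatistic=8
# Report errors and warnings at a moderate level
LogLevelError=7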

Backup Parameters

The parameters discussed in this section define memory buffers set aside for execution of online backups.

  • BackupDataBufferSize: In creating a backup, there are two buffers used for sending data to the disk. The backup data buffer is used to fill in data recorded by scanning a node's tables. Once this buffer has been filled to the level specified as BackupWriteSize (see below), the pages are sent to disk. While flushing data to disk, the backup process can continue filling this buffer until it runs out of space. When this happens, the backup process pauses the scan and waits until some disk writes have completed and have thus freed up memory, so that scanning may continue.

    The default value is 2MB.

  • BackupLogBufferSize: The backup log buffer fulfills a role similar to that played by the backup data buffer, except that it is used for generating a log of all table writes made during execution of the backup. The same principles apply for writing these pages as with the backup data buffer, except that when there is no more space in the backup log buffer, the backup fails. For that reason, the size of the backup log buffer must be large enough to handle the load caused by write activities while the backup is being made. See Section 15.8.4, “Configuration for Cluster Backup”.

    The default value for this parameter should be sufficient for most applications. In fact, it is more likely for a backup failure to be caused by insufficient disk write speed than it is for the backup log buffer to become full. If the disk subsystem is not configured for the write load caused by applications, the cluster is unlikely to be able to perform the desired operations.

    It is preferable to configure cluster nodes in such a manner that the processor becomes the bottleneck rather than the disks or the network connections.

    The default value is 2MB.

  • BackupMemory: This parameter is simply the sum of BackupDataBufferSize and BackupLogBufferSize.

    The default value is 2MB + 2MB = 4MB.

    Important: If BackupDataBufferSize and BackupLogBufferSize taken together exceed 4MB, then this parameter must be set explicitly in the config.ini file to their sum.

  • BackupWriteSize: This parameter specifies the size of messages written to disk by the backup log and backup data buffers.

    The default value is 32KB.
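
The following fragment shows one way these backup parameters might be set for an application with a heavy write load during backups. The enlarged log buffer size is illustrative only; note that BackupMemory is then set explicitly to the sum of the two buffer sizes, as required when that sum exceeds 4MB:

# Illustrative backup buffer settings.
[NDBD DEFAULT]
BackupDataBufferSize=2M
# Enlarged (default is 2M) to absorb heavy write activity during backups
BackupLogBufferSize=4M
# Must equal BackupDataBufferSize + BackupLogBufferSize (2M + 4M)
BackupMemory=6M
BackupWriteSize=32K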

15.4.4.6. Defining SQL Nodes

The [MYSQLD] sections in the config.ini file define the behavior of the MySQL servers (SQL nodes) used to access cluster data. None of the parameters shown is required. If no computer or host name is provided, any host can use this SQL node.

  • Id: The Id value is used to identify the node in all cluster internal messages. It must be an integer in the range 1 to 63 inclusive, and must be unique among all node IDs within the cluster.

  • ExecuteOnComputer: This refers to the Id set for one of the computers (hosts) defined in a [COMPUTER] section of the configuration file.

  • HostName: Specifying this parameter defines the hostname of the computer on which the SQL node (API node) is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • ArbitrationRank: This parameter defines which nodes can act as arbitrators. Both MGM nodes and SQL nodes can be arbitrators. A value of 0 means that the given node is never used as an arbitrator, a value of 1 gives the node high priority as an arbitrator, and a value of 2 gives it low priority. A normal configuration uses the management server as arbitrator, setting its ArbitrationRank to 1 (the default) and those for all SQL nodes to 0.

  • ArbitrationDelay: Setting this parameter to any value other than 0 (the default) means that responses by the arbitrator to arbitration requests will be delayed by the stated number of milliseconds. It is usually not necessary to change this value.

  • BatchByteSize: For queries that are translated into full table scans or range scans on indexes, it is important for best performance to fetch records in properly sized batches. It is possible to set the proper size both in terms of number of records (BatchSize) and in terms of bytes (BatchByteSize). The actual batch size is limited by both parameters.

    The speed at which queries are performed can vary by more than 40% depending upon how this parameter is set. In future releases, MySQL Server will make educated guesses on how to set parameters relating to batch size, based on the query type.

    This parameter is measured in bytes and by default is equal to 32KB.

  • BatchSize: This parameter is measured in number of records and is by default set to 64. The maximum size is 992.

  • MaxScanBatchSize: The batch size is the size of each batch sent from each data node. Most scans are performed in parallel; to protect the MySQL Server from receiving too much data from many nodes at once, this parameter sets a limit to the total batch size over all nodes.

    The default value of this parameter is set to 256KB. Its maximum size is 16MB.
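
Putting these parameters together, a config.ini entry for an SQL node might look like the following sketch. The Id and HostName values are hypothetical, and the batch parameters are shown at their defaults:

# Illustrative SQL node definition.
[MYSQLD]
Id=5
HostName=192.168.0.112
# SQL nodes normally do not act as arbitrator
ArbitrationRank=0
# Batch sizes for full table scans and range scans (defaults shown)
BatchSize=64
BatchByteSize=32K
# Limit on the total batch size over all data nodes
MaxScanBatchSize=256K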

You can obtain some information from a MySQL server running as a Cluster SQL node using SHOW STATUS in the mysql client, as shown here:

mysql> SHOW STATUS LIKE 'NDB%';
+-----------------------------+---------------+
| Variable_name               | Value         |
+-----------------------------+---------------+
| Ndb_cluster_node_id         | 5             | 
| Ndb_config_from_host        | 192.168.0.112 | 
| Ndb_config_from_port        | 1186          | 
| Ndb_number_of_storage_nodes | 4             | 
+-----------------------------+---------------+
4 rows in set (0.02 sec)

For information about these Cluster system status variables, see Section 5.2.4, “Server Status Variables”.

15.4.4.7. Cluster TCP/IP Connections

TCP/IP is the default transport mechanism for establishing connections in MySQL Cluster. It is normally not necessary to define connections, because Cluster automatically sets up a connection between each of the data nodes, between each data node and all MySQL server nodes, and between each data node and the management server. (For one exception to this rule, see Section 15.4.4.8, “TCP/IP Connections Using Direct Connections”.) [TCP] sections in the config.ini file explicitly define TCP/IP connections between nodes in the cluster.

It is only necessary to define a connection to override the default connection parameters. In that case, it is necessary to define at least NodeId1, NodeId2, and the parameters to change.

It is also possible to change the default values for these parameters by setting them in the [TCP DEFAULT] section.

  • NodeId1, NodeId2

    To identify a connection between two nodes it is necessary to provide their node IDs, as NodeId1 and NodeId2, in the [TCP] section of the configuration file. These are the same unique Id values for each of these nodes as described in Section 15.4.4.6, “Defining SQL Nodes”.

  • SendBufferMemory: TCP transporters use a buffer to store all messages before performing the send call to the operating system. When this buffer reaches 64KB, its contents are sent; these are also sent when a round of messages has been executed. To handle temporary overload situations, it is also possible to define a bigger send buffer. The default size of the send buffer is 256KB.

  • SendSignalId: To be able to retrace a distributed message datagram, it is necessary to identify each message. When this parameter is set to Y, message IDs are transported over the network. This feature is disabled by default.

  • Checksum: This parameter is a boolean parameter (enabled by setting it to Y or 1, disabled by setting it to N or 0). It is disabled by default. When it is enabled, checksums for all messages are calculated before they are placed in the send buffer. This feature ensures that messages are not corrupted while waiting in the send buffer, or by the transport mechanism.

  • PortNumber (OBSOLETE)

    This formerly specified the port number to be used for listening for connections from other nodes. This parameter should no longer be used.

  • ReceiveBufferMemory: Specifies the size of the buffer used when receiving data from the TCP/IP socket. There is seldom any need to change this parameter from its default value of 64KB, except possibly to save memory.
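
For example, to enlarge the send buffer and enable checksums on the connection between two data nodes, while leaving all other settings at their defaults, you might add a section such as the following. The node IDs here are hypothetical:

# Illustrative override of the default TCP connection parameters.
[TCP]
NodeId1=3
NodeId2=4
# Larger send buffer to ride out temporary overload (default is 256KB)
SendBufferMemory=2M
# Verify messages while they wait in the send buffer
Checksum=Y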

15.4.4.8. TCP/IP Connections Using Direct Connections

Setting up a cluster using direct connections between data nodes requires specifying explicitly the crossover IP addresses of the data nodes so connected in the [TCP] section of the cluster config.ini file.

In the following example, we envision a cluster with at least four hosts, one each for a management server, an SQL node, and two data nodes. The cluster as a whole resides on the 172.23.72.* subnet of a LAN. In addition to the usual network connections, the two data nodes are connected directly using a standard crossover cable, and communicate with one another directly using IP addresses in the 1.1.0.* address range, as shown:

# Management Server
[NDB_MGMD]
Id=1
HostName=172.23.72.20

# SQL Node
[MYSQLD]
Id=2
HostName=172.23.72.21

# Data Nodes
[NDBD]
Id=3
HostName=172.23.72.22

[NDBD]
Id=4
HostName=172.23.72.23

# TCP/IP Connections
[TCP]
NodeId1=3
NodeId2=4
HostName1=1.1.0.1
HostName2=1.1.0.2

The HostNameN parameter, where N is an integer, is used only when specifying direct TCP/IP connections.

The use of direct connections between data nodes can improve the cluster's overall efficiency by allowing the data nodes to bypass an Ethernet device such as a switch, hub, or router, thus cutting down on the cluster's latency. It is important to note that to take the best advantage of direct connections in this fashion with more than two data nodes, you must have a direct connection between each data node and every other data node in the same node group.

15.4.4.9. Shared-Memory Connections

MySQL Cluster attempts to use the shared memory transporter and configure it automatically where possible, chiefly where more than one node runs concurrently on the same cluster host. (In very early versions of MySQL Cluster, shared memory segments functioned only when the server binary was built using --with-ndb-shm.) [SHM] sections in the config.ini file explicitly define shared-memory connections between nodes in the cluster. When explicitly defining shared memory as the connection method, it is necessary to define at least NodeId1, NodeId2, and ShmKey. All other parameters have default values that should work well in most cases.

Important: SHM functionality is considered experimental only. It is not officially supported in any MySQL release series up to and including 5.0. This means that you must determine for yourself or by using our free resources (forums, mailing lists) whether it can be made to work correctly in your specific case.

  • NodeId1, NodeId2

    To identify a connection between two nodes it is necessary to provide node identifiers for each of them, as NodeId1 and NodeId2.

  • ShmKey: When setting up shared memory segments, a node ID, expressed as an integer, is used to identify uniquely the shared memory segment to use for the communication. There is no default value.

  • ShmSize: Each SHM connection has a shared memory segment where messages between nodes are placed by the sender and read by the reader. The size of this segment is defined by ShmSize. The default value is 1MB.

  • SendSignalId: To retrace the path of a distributed message, it is necessary to provide each message with a unique identifier. Setting this parameter to Y causes these message IDs to be transported over the network as well. This feature is disabled by default.

  • Checksum: This parameter is a boolean (Y/N) parameter which is disabled by default. When it is enabled, checksums for all messages are calculated before being placed in the send buffer.

    This feature prevents messages from being corrupted while waiting in the send buffer. It also serves as a check against data being corrupted during transport.
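
A minimal explicit shared-memory connection between two data nodes running on the same host might be defined as follows. The node IDs and the ShmKey value are hypothetical, and the segment size is enlarged from its 1MB default purely for illustration:

# Illustrative shared-memory connection definition.
[SHM]
NodeId1=3
NodeId2=4
# Integer key identifying the shared memory segment to use
ShmKey=324
# Size of the shared memory segment (default is 1MB)
ShmSize=2M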

15.4.4.10. SCI Transport Connections

[SCI] sections in the config.ini file explicitly define SCI (Scalable Coherent Interface) connections between cluster nodes. Using SCI transporters in MySQL Cluster is supported only when the MySQL-Max binaries are built using --with-ndb-sci=/your/path/to/SCI. The path should point to a directory that contains at a minimum lib and include directories containing SISCI libraries and header files. (See Section 15.9, “Using High-Speed Interconnects with MySQL Cluster” for more information about SCI.)

In addition, SCI requires specialized hardware.

It is strongly recommended to use SCI Transporters only for communication between ndbd processes. Note also that using SCI Transporters means that the ndbd processes never sleep. For this reason, SCI Transporters should be used only on machines having at least two CPUs dedicated for use by ndbd processes. There should be at least one CPU per ndbd process, with at least one CPU left in reserve to handle operating system activities.

  • NodeId1, NodeId2

    To identify a connection between two nodes it is necessary to provide node identifiers for each of them, as NodeId1 and NodeId2.

  • Host1SciId0: This identifies the SCI node ID on the first Cluster node (identified by NodeId1).

  • Host1SciId1: It is possible to set up SCI transporters for failover between two SCI cards, which then should use separate networks between the nodes. This identifies the node ID and the second SCI card to be used on the first node.

  • Host2SciId0: This identifies the SCI node ID on the second Cluster node (identified by NodeId2).

  • Host2SciId1: When using two SCI cards to provide failover, this parameter identifies the second SCI card to be used on the second node.

  • SharedBufferSize: Each SCI transporter has a shared memory segment used for communication between the two nodes. Setting the size of this segment to the default value of 1MB should be sufficient for most applications. Using a smaller value can lead to problems when performing many parallel inserts; if the shared buffer is too small, this can also result in a crash of the ndbd process.

  • SendLimit: A small buffer in front of the SCI media stores messages before transmitting them over the SCI network. By default, this is set to 8KB. Our benchmarks show that performance is best at 64KB, but 16KB reaches within a few percent of this, and there was little if any advantage to increasing it beyond 8KB.

  • SendSignalId: To trace a distributed message it is necessary to identify each message uniquely. When this parameter is set to Y, message IDs are transported over the network. This feature is disabled by default.

  • Checksum: This parameter is a boolean value, and is disabled by default. When Checksum is enabled, checksums are calculated for all messages before they are placed in the send buffer. This feature prevents messages from being corrupted while waiting in the send buffer. It also serves as a check against data being corrupted during transport.
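
An [SCI] section defining a direct SCI connection between two data nodes might look like the following sketch. The node IDs and SCI node IDs shown are hypothetical; the correct SCI node IDs depend on your hardware configuration:

# Illustrative SCI connection definition.
[SCI]
NodeId1=3
NodeId2=4
# SCI node IDs of the cards on the first and second Cluster nodes
Host1SciId0=8
Host2SciId0=12
# Shared memory segment for this transporter (1MB default suffices for most uses)
SharedBufferSize=1M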

15.4.5. Overview of Cluster Configuration Parameters

The next three sections provide summary tables of MySQL Cluster configuration parameters used in the config.ini file to govern the cluster's functioning. Each table lists the parameters for one of the Cluster node process types (ndbd, ndb_mgmd, and mysqld), and includes the parameter's type as well as its default, minimum, and maximum values as applicable.

It is also stated what type of restart is required (node restart or system restart) — and whether the restart must be done with the --initial option — to change the value of a given configuration parameter. This information is provided in each table's Restart Type column, which contains one of the values shown in this list:

  • N: Node Restart

  • IN: Initial Node Restart

  • S: System Restart

  • IS: Initial System Restart

When performing a node restart or an initial node restart, all of the cluster's data nodes must be restarted in turn (also referred to as a rolling restart). It is possible to update cluster configuration parameters marked N or IN online — that is, without shutting down the cluster — in this fashion. An initial node restart requires restarting each ndbd process with the --initial option.

A system restart requires a complete shutdown and restart of the entire cluster. An initial system restart requires taking a backup of the cluster, wiping the cluster filesystem after shutdown, and then restoring from the backup following the restart.

In any cluster restart, all of the cluster's management servers must be restarted in order for them to read the updated configuration parameter values.

Important: Values for numeric cluster parameters can generally be increased without any problems, although it is advisable to do so progressively, making such adjustments in relatively small increments. However, decreasing the values of such parameters — particularly those relating to memory usage and disk space — is not to be undertaken lightly, and it is recommended that you do so only following careful planning and testing. In addition, it is generally the case that parameters relating to memory and disk usage which can be raised using a simple node restart require an initial node restart to be lowered.

Because some of these parameters can be used for configuring more than one type of cluster node, they may appear in more than one of the tables.

(Note that 4294967039 — which often appears as a maximum value in these tables — is equal to 2^32 − 257.)

15.4.5.1. Data Node Configuration Parameters

The following table provides information about parameters used in the [NDBD] or [NDBD DEFAULT] sections of a config.ini file for configuring MySQL Cluster data nodes. For detailed descriptions and other additional information about each of these parameters, see Section 15.4.4.5, “Defining Data Nodes”.

Restart Type Column Values

  • N: Node Restart

  • IN: Initial Node Restart

  • S: System Restart

  • IS: Initial System Restart

See Section 15.4.5, “Overview of Cluster Configuration Parameters”, for additional explanations of these abbreviations.

Parameter Name Type/Units Default Value Minimum Value Maximum Value Restart Type
ArbitrationTimeout milliseconds 1000 10 4294967039 N
BackupDataBufferSize bytes 2M 0 4294967039 N
BackupDataDir string N/A N/A IN
BackupLogBufferSize bytes 2M 0 4294967039 N
BackupMemory bytes 4M 0 4294967039 N
BackupWriteSize bytes 32K 2K 4294967039 N
BatchSizePerLocalScan integer 64 1 992 N
DataDir string N/A N/A IN
DataMemory bytes 80M 1M 1024G (subject to available system RAM and size of IndexMemory) N
Diskless true|false (1|0) 0 0 1 IS
ExecuteOnComputer integer        
FileSystemPath string value specified for DataDir N/A N/A IN
HeartbeatIntervalDbApi milliseconds 1500 100 4294967039 N
HeartbeatIntervalDbDb milliseconds 1500 10 4294967039 N
HostName string N/A N/A S
Id integer None 1 63 N
IndexMemory bytes 18M 1M 1024G (subject to available system RAM and size of DataMemory) N
LockPagesInMainMemory true|false (1|0) 0 0 1 N
LogLevelCheckpoint integer 0 0 15 IN
LogLevelConnection integer 0 0 15 N
LogLevelError integer 0 0 15 N
LogLevelInfo integer 0 0 15 N
LogLevelNodeRestart integer 0 0 15 N
LogLevelShutdown integer 0 0 15 N
LogLevelStartup integer 1 0 15 N
LogLevelStatistic integer 0 0 15 N
LongMessageBuffer bytes 1M 512K 4294967039 N
MaxNoOfAttributes integer 1000 32 4294967039 N
MaxNoOfConcurrentIndexOperations integer 8K 0 4294967039 N
MaxNoOfConcurrentOperations integer 32768 32 4294967039 N
MaxNoOfConcurrentScans integer 256 2 500 N
MaxNoOfConcurrentTransactions integer 4096 32 4294967039 N
MaxNoOfFiredTriggers integer 4000 0 4294967039 N
MaxNoOfIndexes (DEPRECATED — use MaxNoOfOrderedIndexes or MaxNoOfUniqueHashIndexes instead) integer 128 0 4294967039 N
MaxNoOfLocalOperations integer 32 4294967039 N
MaxNoOfLocalScans integer 32 4294967039 N
MaxNoOfOrderedIndexes integer 128 0 4294967039 N
MaxNoOfSavedMessages integer 25 0 4294967039 N
MaxNoOfTables integer 128 8 4294967039 N
MaxNoOfTriggers integer 768 0 4294967039 N
MaxNoOfUniqueHashIndexes integer 64 0 4294967039 N
NoOfDiskPagesToDiskAfterRestartACC integer (number of 8KB pages per 100 milliseconds) 20 (= 20 * 80KB = 1.6MB/second) 1 4294967039 N
NoOfDiskPagesToDiskAfterRestartTUP integer (number of 8KB pages per 100 milliseconds) 40 (= 40 * 80KB = 3.2MB/second) 1 4294967039 N
NoOfDiskPagesToDiskDuringRestartACC integer (number of 8KB pages per 100 milliseconds) 20 (= 20 * 80KB = 1.6MB/second) 1 4294967039 N
NoOfDiskPagesToDiskDuringRestartTUP integer (number of 8KB pages per 100 milliseconds) 40 (= 40 * 80KB = 3.2MB/second) 1 4294967039 N
NoOfFragmentLogFiles integer 8 1 4294967039 IN
NoOfReplicas integer None 1 4 IS
RedoBuffer bytes 8M 1M 4294967039 N
RestartOnErrorInsert (DEBUG BUILDS ONLY) true|false (1|0) 0 0 1 N
ServerPort (OBSOLETE) integer 1186 0 4294967039 N
StartFailureTimeout milliseconds 60000 0 4294967039 N
StartPartialTimeout milliseconds 30000 0 4294967039 N
StartPartitionedTimeout milliseconds 60000 0 4294967039 N
StopOnError true|false (1|0) 1 0 1 N
TimeBetweenGlobalCheckpoints milliseconds 2000 10 32000 N
TimeBetweenInactiveTransactionAbortCheck milliseconds 1000 1000 4294967039 N
TimeBetweenLocalCheckpoints integer (number of 4-byte words as a base-2 logarithm) 20 (= 4 * 2^20 = 4MB write operations) 0 31 N
TimeBetweenWatchDogCheck milliseconds 4000 70 4294967039 N
TransactionBufferMemory bytes 1M 1K 4294967039 N
TransactionDeadlockDetectionTimeout milliseconds 1200 50 4294967039 N
TransactionInactiveTimeout milliseconds 0 0 4294967039 N
UndoDataBuffer bytes 16M 1M 4294967039 N
UndoIndexBuffer bytes 2M 1M 4294967039 N

15.4.5.2. Management Node Configuration Parameters

The following table provides information about parameters used in the [NDB_MGMD] or [MGM] sections of a config.ini file for configuring MySQL Cluster management nodes. For detailed descriptions and other additional information about each of these parameters, see Section 15.4.4.4, “Defining the Management Server”.

Restart Type Column Values

  • N: Node Restart

  • IN: Initial Node Restart

  • S: System Restart

  • IS: Initial System Restart

See Section 15.4.5, “Overview of Cluster Configuration Parameters”, for additional explanations of these abbreviations.

Parameter Name Type/Units Default Value Minimum Value Maximum Value Restart Type
ArbitrationDelay milliseconds 0 0 4294967039 N
ArbitrationRank integer 1 0 2 N
DataDir string N/A N/A N/A IN
ExecuteOnComputer integer        
HostName string N/A N/A IN
Id integer None 1 63 IN
LogDestination CONSOLE, SYSLOG, or FILE N/A N/A N

15.4.5.3. SQL Node Configuration Parameters

The following table provides information about parameters used in the [MYSQLD] sections of a config.ini file for configuring MySQL Cluster SQL nodes. For detailed descriptions and other additional information about each of these parameters, see Section 15.4.4.6, “Defining SQL Nodes”.

Restart Type Column Values

  • N: Node Restart

  • IN: Initial Node Restart

  • S: System Restart

  • IS: Initial System Restart

See Section 15.4.5, “Overview of Cluster Configuration Parameters”, for additional explanations of these abbreviations.

Parameter Name Type/Units Default Value Minimum Value Maximum Value Restart Type
ArbitrationDelay milliseconds 0 0 4294967039 N
ArbitrationRank integer 1 0 2 N
BatchByteSize bytes 32K 1K 1M N
BatchSize integer 64 1 992 N
ExecuteOnComputer integer        
HostName string N/A N/A IN
Id integer None 1 63 IN
MaxScanBatchSize bytes 256K 32K 16M N

15.4.6. Configuring Parameters for Local Checkpoints

The parameters discussed in Logging and Checkpointing and in Data Memory, Index Memory, and String Memory that are used to configure local checkpoints for a MySQL Cluster do not exist in isolation, but rather are highly interdependent on each other. In this section, we illustrate how these parameters — including DataMemory, IndexMemory, NoOfDiskPagesToDiskAfterRestartTUP, NoOfDiskPagesToDiskAfterRestartACC, and NoOfFragmentLogFiles — relate to one another in a working Cluster.

In this example, we assume that our application performs the following numbers of types of operations per hour:

  • 50000 selects

  • 15000 inserts

  • 15000 updates

  • 15000 deletes

We also make the following assumptions about the data used in the application:

  • We are working with a single table having 40 columns.

  • Each column can hold up to 32 bytes of data.

  • A typical update run by the application affects the values of 5 columns.

  • No NULL values are inserted by the application.

A good starting point is to determine the amount of time that should elapse between local checkpoints (LCPs). It is worth noting that, in the event of a system restart, it takes 40 to 60 percent of this interval to execute the REDO log — for example, if the time between LCPs is 5 minutes (300 seconds), then it should take 2 to 3 minutes (120 to 180 seconds) for the REDO log to be read.

The maximum amount of data per node can be assumed to be the size of the DataMemory parameter. In this example, we assume that this is 2 GB. The NoOfDiskPagesToDiskAfterRestartTUP parameter represents the amount of data to be checkpointed per unit time — however, this parameter is actually expressed as the number of 8K memory pages to be checkpointed per 100 milliseconds. 2 GB per 300 seconds is approximately 6.8 MB per second, or 700 KB per 100 milliseconds, which works out to roughly 85 pages per 100 milliseconds.

Similarly, we can calculate NoOfDiskPagesToDiskAfterRestartACC in terms of the time for local checkpoints and the amount of memory required for indexes — that is, the IndexMemory. Assuming that we allow 512 MB for indexes, this works out to approximately 20 8-KB pages per 100 milliseconds for this parameter.

Next, we need to determine the number of REDO logfiles required — that is, fragment log files — the corresponding parameter being NoOfFragmentLogFiles. We need to make sure that there are sufficient REDO logfiles for keeping records for at least 3 local checkpoints. In a production setting, there are always uncertainties — for instance, we cannot be sure that disks always operate at top speed or with maximum throughput. For this reason, it is best to err on the side of caution, so we double our requirement and calculate a number of fragment logfiles which should be enough to keep records covering 6 local checkpoints.

It is also important to remember that the disk also handles writes to the REDO log and UNDO log, so if you find that the amount of data being written to disk as determined by the values of NoOfDiskPagesToDiskAfterRestartACC and NoOfDiskPagesToDiskAfterRestartTUP is approaching the amount of disk bandwidth available, you may wish to increase the time between local checkpoints.

Given 5 minutes (300 seconds) per local checkpoint, this means that we need to support writing log records at maximum speed for 6 * 300 = 1800 seconds. The size of a REDO log record is 72 bytes plus 4 bytes per updated column value plus the maximum size of the updated column, and there is one REDO log record for each table record updated in a transaction, on each node where the data reside. Using the numbers of operations set out previously in this section, we derive the following:

  • 50000 select operations per hour yields 0 log records (and thus 0 bytes), since SELECT statements are not recorded in the REDO log.

  • 15000 DELETE statements per hour is approximately 5 delete operations per second. (Since we wish to be conservative in our estimate, we round up here and in the following calculations.) No columns are updated by deletes, so these statements consume only 5 operations * 72 bytes per operation = 360 bytes per second.

  • 15000 UPDATE statements per hour is roughly the same as 5 updates per second. Each update uses 72 bytes, plus 4 bytes per column * 5 columns updated, plus 32 bytes per column * 5 columns — this works out to 72 + 20 + 160 = 252 bytes per operation, and multiplying this by 5 operations per second yields 1260 bytes per second.

  • 15000 INSERT statements per hour is equivalent to 5 insert operations per second. Each insert requires REDO log space of 72 bytes, plus 4 bytes per column * 40 columns, plus 32 bytes per column * 40 columns, which is 72 + 160 + 1280 = 1512 bytes per operation. This times 5 operations per second yields 7560 bytes per second.

So the total number of REDO log bytes being written per second is approximately 0 + 360 + 1260 + 7560 = 9180 bytes. Multiplied by 1800 seconds, this yields 16524000 bytes required for REDO logging, or approximately 15.75 MB. The unit used for NoOfFragmentLogFiles represents a set of 4 16-MB logfiles — that is, 64 MB. Thus, the minimum value (3) for this parameter is sufficient for the scenario envisioned in this example, since 3 times 64 = 192 MB, or about 12 times what is required; the default value of 8 (or 512 MB) is more than ample in this case.

A copy of each altered table record is kept in the UNDO log. In the scenario discussed above, the UNDO log would not require any more space than what is provided by the default settings. However, given the size of disks, it is sensible to allocate at least 1 GB for it.
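
To summarize, the values derived in this example could be expressed in the config.ini file as follows. These settings follow directly from the assumptions made in this section — 2 GB of data memory, 512 MB of index memory, and 300 seconds between local checkpoints — and are shown for illustration only, not as general-purpose recommendations:

# Values derived from the worked example in this section (illustrative only).
[NDBD DEFAULT]
DataMemory=2G
IndexMemory=512M
# Roughly 85 8KB data pages per 100 milliseconds (about 6.8MB per second)
NoOfDiskPagesToDiskAfterRestartTUP=85
# Roughly 20 8KB index pages per 100 milliseconds (about 1.6MB per second)
NoOfDiskPagesToDiskAfterRestartACC=20
# The minimum value: 3 sets of 4 16MB files = 192MB of REDO log,
# about 12 times the 15.75MB this example requires
NoOfFragmentLogFiles=3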