Tuesday, July 31, 2018

High Availability with MySQL Cluster, Setup From Command Line (II)

In the first installment of this series of posts, written for those who are interested in understanding the basics of MySQL Cluster "by example", I wrote about installing MySQL Cluster with a Python utility called ndb_setup.py, which offers a nice graphical web interface to define and start our brand new cluster.

In this post I will instead share an example of doing everything from scratch and manually. Doing things manually is always the best way to learn about a process's life cycle through its:
  • Initialization
  • Administration (start/stop/reconfigure)
  • Monitoring (logs/counters/status)
  • Troubleshooting 
The resulting topology I'd like to set up is composed of 2 data nodes, 1 management node and 1 SQL node, as in the following picture:

Let's get down to business and deal today with process initialization; I will discuss setting up a MySQL Cluster environment from scratch, step by step, on a Linux machine.
  1. Download generic binaries release for Linux
  2. File system preparation
  3. Initialize and start the management node
  4. Initialize and start the data nodes
  5. Initialize and start the SQL node
  6. Monitor the status 
In order to run a MySQL Cluster deployment, I will need to create:
  • A cluster configuration file, usually named config.ini. There is a single one, shared by all cluster processes.
  • The SQL node configuration file, usually named my.cnf. There must be one per SQL node.

1. Download generic binaries release for Linux

My favorite MySQL Cluster install method is using the generic binaries: for the purpose of learning how to install and administer a cluster, there is no need to install RPMs and their related scripts that start/stop a process as a service. Hence I recommend downloading the binaries by choosing "Linux - Generic" from the related download page.
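For instance, assuming the tarball matches the 7.6.6 basedir used later in this post and was downloaded under my software folder (paths are just illustrative), unpacking it and adding its bin directory to the PATH is enough:

cd /export/home/mortensi/software
tar xzf mysql-cluster-gpl-7.6.6-linux-glibc2.12-x86_64.tar.gz
# make ndb_mgmd, ndbmtd, ndb_mgm and mysqld reachable from the shell
export PATH=$PATH:/export/home/mortensi/software/mysql-cluster-gpl-7.6.6-linux-glibc2.12-x86_64/bin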

2. File system preparation

Processes need to store stuff on disk: not only to achieve durability, but also to store logs and other support files necessary to perform their duties. So every process will have a data directory. In addition, every process must define a nodeid, an integer used to identify the process. Therefore, for simplicity, I recommend creating 4 folders under the desired path and naming them after the nodeids I will set later in the configuration file (the exact commands follow the list below). Important note: to make things easier, I will set up the whole cluster on my machine, although this is not the recommended topology in a real production environment, where data nodes and the management node should all run on dedicated hosts.
  • 49. This folder will store the data directory for the management node
  • 1. This folder will store the data directory for data node 1
  • 2. This folder will store the data directory for data node 2 (replica of data node 1)
  • 50. This folder will store the data directory for the SQL node
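Assuming the base path /home/mortensi/cluster used throughout the configuration files below, the four folders can be created in one shot:

# one folder per nodeid: 49 (management node), 1 and 2 (data nodes), 50 (SQL node)
mkdir -p /home/mortensi/cluster/{49,1,2,50}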
Now that the four directories are created, let the fun begin!

3. Initialize and start the management node

Before starting the management node for the first time, I need to set up a config.ini file. Let's use this bare-bones one.

[NDB_MGMD]
HostName=127.0.0.1
DataDir=/home/mortensi/cluster/49
NodeId=49

[NDBD]
NodeId=1
HostName=127.0.0.1
DataDir=/home/mortensi/cluster/1

[NDBD]
NodeId=2
HostName=127.0.0.1
DataDir=/home/mortensi/cluster/2

[MYSQLD]
HostName=127.0.0.1
NodeId=50

This configuration file, the simplest possible (all other configuration parameters keep their default values), tells us that:
  • Node ids are 49 for the management node, 1 and 2 for the data nodes, and 50 for the SQL node (the mysqld instance)
  • We're setting up a topology where all processes are co-located on the same machine (localhost)
  • We will use the folders we created as data directories.
Let's start the management node now:

ndb_mgmd --config-file=/home/mortensi/cluster/config.ini --configdir=/home/mortensi/cluster/49 --initial &

If no error is reported, we can check whether the management process is running with the ndb_mgm command-line client tool, like this:

ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1 (not connected, accepting connect from 127.0.0.1)
id=2 (not connected, accepting connect from 127.0.0.1)

[ndb_mgmd(MGM)] 1 node(s)
id=49 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6)

[mysqld(API)]   1 node(s)
id=50 (not connected, accepting connect from 127.0.0.1)

This output tells us that the 2 data nodes defined in config.ini have not been started yet, while the management node was started successfully. The SQL node is not connected yet either.
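If something looks wrong instead, the cluster log maintained by the management node is the first place to check; with the DataDir configured above, and assuming the default log naming, it can be followed like this:

# the management node writes the cluster-wide event log into its DataDir
tail -f /home/mortensi/cluster/49/ndb_49_cluster.log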

4. Initialize and start the data nodes

Now it's time to start the core processes for any cluster setup: data nodes! Data nodes can only be started after the management node, as they need to connect to it to retrieve the configuration of the cluster. Run the following:

ndbmtd --connect-string=localhost --initial --ndb-nodeid=1 &
ndbmtd --connect-string=localhost --initial --ndb-nodeid=2 &

Check again status with ndb_mgm:

ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=1 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6, Nodegroup: 0, *)
id=2 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=49 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6)

[mysqld(API)] 1 node(s)
id=50 (not connected, accepting connect from 127.0.0.1)

We're done! Now that the cluster is up and running, we only need to connect through the SQL node and play! Remember that future data node restarts must be done without the --initial option, otherwise the data nodes' data directories will be erased!
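For reference, a regular (non-initial) restart of a data node is just the same command without that option; a sketch for data node 1:

# normal restart: the on-disk data directory is preserved (no --initial!)
ndbmtd --connect-string=localhost --ndb-nodeid=1 &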

5. Initialize and start the SQL node

To finalize this cluster installation, we need to initialize the SQL node data directory as follows (indicating the path to the binary release and to the brand new data directory):

mysqld --initialize-insecure --datadir="/home/mortensi/cluster/50" --basedir="/export/home/mortensi/software/mysql-cluster-gpl-7.6.6-linux-glibc2.12-x86_64/" --user=mortensi

Now that the SQL node has been initialized, let's use this simple my.cnf configuration file to start it:

[mysqld]
ndbcluster=on
ndb_nodeid=50
datadir=/home/mortensi/cluster/50
basedir=/export/home/mortensi/software/mysql-cluster-gpl-7.6.6-linux-glibc2.12-x86_64/
socket=/home/mortensi/cluster/50/mysqld.sock
log_error=/home/mortensi/cluster/50/mysqld.log

Save the file, and let's go:

mysqld --defaults-file=/home/mortensi/cluster/my.cnf &
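Before connecting, it doesn't hurt to peek at the error log configured in my.cnf and confirm that the SQL node started without errors:

tail -n 20 /home/mortensi/cluster/50/mysqld.log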

You will now be able to connect:

mysql -h127.0.0.1 -uroot

And check your brand new cluster status!

mysql> select * from ndbinfo.nodes;
+---------+--------+---------+-------------+-------------------+
| node_id | uptime | status  | start_phase | config_generation |
+---------+--------+---------+-------------+-------------------+
|       1 |   1691 | STARTED |           0 |                 1 |
|       2 |   1690 | STARTED |           0 |                 1 |
+---------+--------+---------+-------------+-------------------+
2 rows in set (0.02 sec)

Cool, isn't it?
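As a quick smoke test, we can also create a table on the NDB storage engine through the SQL node and query it back; the schema and table names here are purely illustrative:

mysql> CREATE DATABASE clusterdb;
mysql> CREATE TABLE clusterdb.t1 (id INT PRIMARY KEY, msg VARCHAR(64)) ENGINE=NDBCLUSTER;
mysql> INSERT INTO clusterdb.t1 VALUES (1, 'hello cluster');
mysql> SELECT * FROM clusterdb.t1;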

6. Monitor status

To complete this mini tutorial, let's just check the status of the cluster again, as we did before:

ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=1 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6, Nodegroup: 0, *)
id=2 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=49 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6)

[mysqld(API)] 1 node(s)
id=50 @127.0.0.1 (mysql-5.7.22 ndb-7.6.6)

All the processes declared as belonging to the cluster (the topology defined in the config.ini file) are now up and running. Mission accomplished!
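Beyond ndb_mgm -e show, a couple of other quick checks I find handy are the memory usage report from the management client and the ndbinfo tables we already queried above:

# data and index memory usage for each data node
ndb_mgm -e "all report memoryusage"

# similar information, fetched through the SQL node
mysql -h127.0.0.1 -uroot -e "SELECT node_id, memory_type, used, total FROM ndbinfo.memoryusage"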

Wednesday, July 25, 2018

Top 5 Things to Consider to Get Started with MySQL Performance Tuning

Today I'll share a few slides I prepared last year for a presentation delivered at Oracle Open World. It is a quick and easy hands-on lab for fresh MySQL Server DBAs. I chose 5 of the most relevant topics for tuning and scaling a MySQL Server that uses InnoDB tables.

In particular, in this hands-on, I will talk about:
  • Scaling connections
  • The threads model
  • InnoDB REDO log
  • InnoDB Buffer Pool
  • The Execution plan


MySQL Performance Tuning 101 from Mirko Ortensi

The rate at which MySQL is delivering new features and improvements is impressive; in fact MySQL Server 8 boosts performance in many areas, especially InnoDB REDO logging. Hence, while the rest of the recommendations still apply to MySQL Server 8, tuning the REDO log flushing strategy is no longer mandatory to achieve improved throughput starting from MySQL 8. Improvements in this area are described in this blog post by Dimitri Kravtchuk, MySQL Performance Architect at Oracle MySQL (Twitter).
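To make the REDO log point concrete: the classic knobs on MySQL 5.7 are innodb_log_file_size (sizing, which requires a restart) and innodb_flush_log_at_trx_commit (flushing strategy). As a sketch, checking and relaxing the latter looks like this, keeping in mind that the value 2 trades a bit of durability for throughput:

SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- 1 (the default) flushes the REDO log at every commit (full durability);
-- 2 flushes roughly once per second, so an OS crash can lose about the last second of transactions
SET GLOBAL innodb_flush_log_at_trx_commit = 2;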

For an overview of optimization techniques, I recommend having a look at the official documentation.

Tuesday, July 24, 2018

High Availability with MySQL Cluster, a Quick How-To Guide for Dummies (I)

I have been playing with MySQL Cluster for some years now, and today I'd like to start writing a bit about it: how to set it up, configure it, back it up, and also how to use it, as there are plenty of ways to drive operations towards the cluster for brutal speed and concurrency. MySQL Cluster, the open source in-memory database from Oracle MySQL, is available for free from the MySQL Cluster download page (the Community version has a GPL license).

But before starting with an overview of installation and setup, if you're new to MySQL Cluster, I would strongly recommend having a look at this video.



What I find most interesting about MySQL Cluster is that it is possible to have a setup running on commodity hardware (the bare laptop), as it can be configured to have a minimal footprint in terms of memory and storage requirements. As for high availability, MySQL Cluster offers unique features to resist any single failure (it has no single point of failure: any process, communication link or hardware component may crash/fail and the cluster will still be available) while offering a consistent view of the data (synchronous replication between data nodes, a feature unique among MySQL databases).

But.. what is MySQL Cluster?

MySQL Cluster is a cluster of processes; they are:
  1. ndb_mgmd management process. A MySQL Cluster setup must have at least one (it can have more, for redundancy). It is recommended to execute it on a host where ndbmtd processes are not running, but it does not need to be on a dedicated host. It consumes few resources, acts as an arbitrator to rule out potential split-brain situations, and serves a number of other functions.
  2. ndbmtd data node process. The default configuration includes 2 data node processes; they implement the NDB storage engine (NDB stands for Network Database) and store data in memory. The recommendation is to execute one ndbmtd on its own host. Data nodes are replicated synchronously, hence any one of them can crash and the cluster will survive and remain serviceable.
  3. mysqld instances, aka SQL nodes. These are the classical MySQL Servers delivered with the MySQL Cluster distribution (they differ from the standard standalone distribution in that they are compiled to make use of the NDB storage engine). They can be used to retrieve MySQL Cluster metrics, to execute SQL queries against the cluster (as we'll see, this is not the only way to fetch data from it), and also to implement geographical replication (among other uses). SQL nodes can run on dedicated hosts but can also share a host with data nodes or the ndb_mgmd node. The best option is always away from a host running ndbmtd data nodes: the resources needed by SQL nodes can vary depending on operations, so it's better not to add a variable load that may affect data node performance (data node footprint, at least, is quite stable over time).
A full, detailed summary is available on the official documentation page.

How to install MySQL Cluster?

There are two main distributions: the RPM/DEB packages and the compressed binary package. I usually prefer the binary one, as it is more flexible and also easier to manage for new users. With the binary distribution, everything is contained within a single folder.

To get started quickly with MySQL Cluster, I would recommend deploying a cluster on a single host using the ndb_setup.py utility, available under the binaries folder (bin). This command opens a graphical interface in a browser, served by a minimalistic Python web server. It is a really simple tool and is better suited to deploying all cluster processes on localhost (not the recommended production topology, but a good way to start up quickly). This video shows a basic installation on multiple hosts (which can be virtual machines as well).
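For the record, launching the utility is just a matter of running the script from the bin folder of the extracted package (the path below is only an example):

cd mysql-cluster-gpl-7.6.6-linux-glibc2.12-x86_64/bin
# starts a small local web server and opens the configuration wizard in the browser
./ndb_setup.py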



The MySQL Cluster documentation is pretty good at explaining the basics of the database, its specifications and its tuning options, but I would also like to point to a couple of books:
  • Pro MySQL NDB Cluster, written by Jesper Wisborg Krogh and Mikiya Okuno, Support Engineers at Oracle MySQL. The book covers the basics and much more, to get started with and administer a MySQL Cluster database.
  • MySQL Cluster 7.5 Inside and Out, written by Mikael Ronstrom, NDB creator and Senior Architect at Oracle MySQL. It offers an in-depth description of MySQL Cluster's guts, together with the story of the product.
In the next chapter, I will share some basic scripts to start the cluster from command line. Stay tuned!