Thursday, November 15, 2018

MySQL NDB Cluster row level locks and write scalability

MySQL NDB Cluster uses row level locks instead of a single shared commit lock in order to prevent inconsistency in simultaneous distributed transactions. This gives NDB a great advantage over all other MySQL clustering solutions and is one reason behind cluster’s unmatched ability to scale both reads and writes. 

NDB is a transactional data store. The only transaction isolation level it offers is Read Committed: there are no dirty reads in NDB, and only committed rows can be read by other transactions.

All write transactions in NDB take exclusive row locks on the individual rows they change. Any other transaction is still allowed to read the last committed version of those rows, independent of their lock status: committed reads are lock-free.

The great advantage is that committed reads in NDB never block during writes to the same data, and always return the latest committed changes. A select doesn't block concurrent writes, and vice versa.

This is extremely beneficial for write scalability. No shared global commit synchronization step is needed to ensure transaction consistency across distributed data store instances. Each instance instead handles its own row locks - usually only locking a few out of many rows. Due to NDB’s highly parallel and asynchronous design many rows can be committed in parallel within a distributed instance and across multiple instances. 

As a side effect interleaved reading transactions can read committed rows of write transactions before all rows of that writing transaction are committed. The set of rows returned may represent a partially committed transaction and not a snapshot of a single point in time. Pending transactions never change the state of the data before they are committed. All rows of committed transactions are atomically guaranteed to be network durable and consistent in all distributed instances of the data.

If more consistent reads are needed, then the read locks taken by SELECT ... LOCK IN SHARE MODE or SELECT ... FOR UPDATE can be used to get a serialized view of a set of rows.
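As a small sketch of such locking reads (the table and column names here are invented for illustration):

```sql
-- shared lock: other transactions can still read the row,
-- but cannot modify it until this transaction ends
BEGIN;
SELECT balance FROM accounts WHERE id = 42 LOCK IN SHARE MODE;
COMMIT;

-- exclusive lock: reserves the row for a read-modify-write cycle;
-- conflicting locking reads and writes block until COMMIT
BEGIN;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;
```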

Thursday, December 03, 2015

"Data" and "Performance" is where MySQL Cluster's heart is.

In-memory performance and always-up drive our agenda. The Percona Live Data Performance Conference is coming up, with two submitted sessions about Cluster.

If you want to learn from MySQL Cluster how to build high performance solutions yourself then you should probably place a vote on Developing for extreme performance. If you want to know about the latest developments and how to build a highly elastic real-time service based on MySQL Cluster then Elastic real-time services with MySQL Cluster is your session.

In the first technical session we will be taking a deep dive into how to scale to extreme levels of real-time performance. You will learn a lot from Cluster for your own solution. Discover how network, CPU and interrupt handling affect the performance of a distributed system, and how delays and bottlenecks are sometimes not that bad after all.
The second session will show you how to adapt to capacity needs in seconds rather than hours or days. The MySQL Cluster architecture allows adapting to capacity demands, making resources available instantly when needed: nodes can be added or removed in seconds without interrupting service. Its native node.js platform and connectors for Java make it easy to write real-time web applications in the cloud. And obviously there is always news to talk about.

MySQL Cluster is the solution that brings mobile phone services to billions of people, runs the most popular gaming back-ends and powers innovative banking solutions.

Tuesday, April 19, 2011

O’Reilly Conference MySQL Cluster slides available

I finally uploaded my MySQL Cluster session slides to the O'Reilly conference page. They are available here:

I shared the "Session Management" session with my colleague Mat from our product marketing. It's an introduction to session management and describes the advantages of using MySQL Cluster in this special use case. It also shows examples of how to set up session management with PHP, and how to configure memcached on top of cluster for the easiest set-up and highest availability.

The "Sometimes SQL" session highlights all the different access methods to the cluster data storage node layer: C++, LDAP, Java, our new memcached layer and sometimes even SQL. A completely new feature is available in our development milestone cluster 7.2: by pushing joins down to the storage layer and executing them distributed and in parallel, network traffic is reduced dramatically. We have seen performance gains of 40x and more for some queries. If you found cluster too slow for your web workload, you should give it another try now. Combined with cluster index statistics, which are still waiting for you in the labs, this will make a unique combination.
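As a rough sketch of how this looks from the SQL side (table names are invented, and the exact EXPLAIN wording may differ between 7.2 builds):

```sql
-- join pushdown can be toggled per session in the 7.2 milestone
SET ndb_join_pushdown = ON;

EXPLAIN
SELECT *
FROM t1
JOIN t2 ON t2.t1_id = t1.id;
-- rows whose Extra column shows something like
-- "Child of 't1' in pushed join@1" are executed
-- inside the data nodes, distributed and in parallel
```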

Btw, in 7.2 we are not only making the web developer's life easier. Our data store admins will also be glad to hear that MySQL grants can now be stored centrally in cluster, which distributes them to all connected MySQL Servers.
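A sketch of how that is set up, assuming the ndb_dist_priv.sql helper script and procedure name shipped with the 7.2 milestone:

```sql
-- load the helper stored routines into the mysql schema, then
-- convert the privilege tables to the NDB engine so grants are
-- shared by all connected MySQL Servers
SOURCE share/mysql/ndb_dist_priv.sql;
CALL mysql.mysql_cluster_move_privileges();
```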

Tuesday, April 20, 2010

Around the volcano and home from US in 20 hours

"Sir, we advise you to not board this aircraft". The plane was ready for departure, but the empty gate seemed to underline what the ground staff had just said. All other flights to Europe were cancelled. And there are certainly worse places to be stranded than San Francisco in spring. After speaking at the MySQL Users Conference, my already re-scheduled flight via Frankfurt two days later was likely to be cancelled again. I had to try something to avoid being stuck in the US for days, if not weeks.

7 hours later, cold air streaming in through the front door welcomed me to Reykjavik. At departure time in the US, no onward flights were scheduled from there. Eyjafjallajökull (see picture) was still erupting, sending its ashes to Europe. Rather than being stuck in the US, I could easily be stuck on Iceland. But I wanted to see my family. And I wanted to see the volcano.

By the time of my arrival on Iceland, individual transfers to Northern Europe had opened up. I got a seat to Oslo and arrived at a deserted ghost airport, from which the only flight of the day would bring me to my hometown Stockholm. The ground crew in the US had wished me "Good Luck" at boarding. 20 hours later I had dinner with my family, 2 days ahead of schedule and with no extra net travel time.

Monday, February 01, 2010

MySQL Cluster 7.1.1 is there

A new version of MySQL Cluster 7.1 beta has been released and is available from our ftp directory at


The Cluster 7.1 beta code base is identical to Cluster 7.0 GA. While maintaining our GA quality throughout all core functionalities, we added two of the most requested enhancements as independent features to 7.1 beta:
  • ndb$info with SQL level real-time monitoring of Cluster
  • an easy-to-use and high performance native Java interface and OpenJPA plug-in

ndb$info makes cluster status and statistics available on SQL level. Log into the MySQL Server and simply use SQL to retrieve configuration details, memory status or the node status and uptime:
mysql> SELECT * FROM ndbinfo.nodes;
+---------+--------+---------+-------------+
| node_id | uptime | status  | start_phase |
+---------+--------+---------+-------------+
|       2 |  45678 | STARTED |           0 |
|       3 |  45676 | STARTED |           0 |
|       4 |  45678 | STARTED |           0 |
|       5 |    312 | STARTED |           0 |
+---------+--------+---------+-------------+
4 rows in set (0.03 sec)

ndb$info is designed to add little to no overhead even for extensive monitoring of Cluster.
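Memory usage can be monitored the same way; a quick sketch, assuming the ndbinfo.memoryusage table as shipped in 7.1:

```sql
-- DataMemory and IndexMemory consumption per data node
SELECT node_id, memory_type, used, total
FROM ndbinfo.memoryusage;
```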

MySQL Cluster Connector for Java

Designed for Java developers, the MySQL Cluster Connector for Java implements an easy-to-use and high performance native Java interface and OpenJPA plug-in. Using the Java Persistence Interface of Cluster/J a primary key lookup is as simple as
Fish e = session.find(Fish.class, 4711);

The OpenJPA for Cluster implementation plugs into Apache OpenJPA and provides native access to Cluster. Many transactions can now be directly executed on Cluster, without the MySQL Server being involved. This allows much faster query execution and higher throughput.

You can read more about Cluster/J and how to use it here:

To learn more, and to have the chance to meet the architect and developer behind Cluster/J, you might want to attend the webinar. More info here:

Monday, December 14, 2009

Java and OpenJPA for MySQL Cluster

MySQL Cluster Connector for Java is a new feature in the upcoming Cluster 7.1.

It is an easy-to-use, high-performance Java and JPA interface for MySQL Cluster. We currently call it simply Cluster/J and Cluster/JPA. It is an Object-Relational bridge that maps Java classes to tables stored in MySQL Cluster, and implements an OpenJPA interface.

How does it work?

The new MySQL Cluster Connector for Java and JPA implementation consists of two parts: an OpenJPA plugin and the standalone Cluster/J API.

The standalone Cluster/J API allows you to write very simple Java programs for MySQL Cluster. A read by primary key is then as easy as

Fish e = session.find(Fish.class, 4711);

A query like the one above is executed without using the MySQL Server. The Cluster/J API is a native, MySQL Cluster-specific API: a direct access path to Cluster that is very simple to use, and that serves as the fast track in OpenJPA.

The OpenJPA for Cluster implementation plugs into Apache OpenJPA and provides native access to Cluster. Previous JPA implementations provided Cluster support via JDBC and the MySQL Server. JDBC is also used in this implementation; however, many transactions can now be directly executed on Cluster, without the MySQL Server being involved. This allows much faster query execution and higher throughput. We thus refer to it as "the fast track". More complex transactions still will be sent through JDBC where they will be optimized by the MySQL Server's query optimizer.

A third part is an implementation detail called NDB JTie; this is needed to actually use MySQL Cluster Connector/J with Cluster. NDB JTie contains a Java-to-C/C++ mapper based on JNI. It consists of a dynamic library and the two JAR files, ndbjtie and jtie, which wrap the existing NDB API while adding very little overhead. You need this part, but you are not supposed to program against it.

The native Cluster/J implementation already allows complex queries, but it is limited to queries on single tables and does not support relations or inheritance. Those are the domain of JPA.

Where to download?

The sources can be found here:


A Linux 64-bit tarball can be found here:


Cluster/J is part of Cluster 7.1 but should work with current GA 7.0 versions of Cluster as well. It can be used for writing OpenJPA based or Cluster/J based applications. Even simple "drop-in" migrations for OpenJPA from any other database to Cluster may work, although we have not yet extensively tested such migrations ourselves.

In the download you will find the following new files and directories that make up Cluster/J:
  1. share/mysql/java directory containing all JAR files needed
  2. the lib/mysql or lib directory with the new library libndbjtie
  3. storage/ndb/clusterj directory containing the Cluster/J sub-packages and sources
  4. storage/ndb/ndbjtie directory with the new JNI interface
  5. Cluster/J pom.xml and automake files, if you would like to work with Maven or automake for compiling Cluster/J and JPA yourself.
  6. Cluster/J is integrated with automake into the cluster build system.
The naming of JAR files follows the typical convention of <package>-<version>.jar. There are also unversioned links to each JAR file. In addition, many of the files exist only for testing purposes.


Download and unpack mysql-cluster-connector-j-7.1.0-135.tar.gz, then configure and compile; JTie and Cluster/J are enabled by default whenever the configure scripts can find a Java compiler.

./configure --with-plugins=ndbcluster

You will find the two JTie jars under storage/ndb/ndbjtie/ in the sub dirs ./src and ./jtie/src.

If you want to use OpenJPA and want to compile cluster yourself, download OpenJPA from Apache. We have tested with apache-openjpa-1.2.1. Then let configure know where to find the OpenJPA JAR file and that OpenJPA is supposed to be compiled into the MySQL Cluster Connector for Java:

--with-openjpa --with-classpath=/Users/bo/prg/apache-openjpa-1.2.1/openjpa-1.2.1.jar

Start Cluster as usual.

Where to start?

First, we will develop a simple Cluster/J standalone application which reads from and writes to MySQL Cluster, requiring no use of JDBC or OpenJPA. While the application itself doesn't need a MySQL Server, since it connects directly to the Cluster data nodes, a MySQL Server is still necessary to create a Cluster table.

Check the end of this tutorial for a link to the example files.

A simple Cluster/J program

Three components are needed to implement a Cluster/J program:
  1. An annotated entity interface
  2. A table created in MySQL Cluster
  3. A configuration pointing to a running MySQL Cluster instance
A simple interface for a new entity looks like this:

@PersistenceCapable(table = "t_fish_food")
public interface Fish {

    @PrimaryKey
    int getId();
    void setId(int id);

    String getName();
    void setName(String name);

    String getWine();
    void setWine(String wine);

    @Index(name = "idx_taste")
    @Column(name = "taste")
    Integer getTastiness();
    void setTastiness(Integer taste);
}


The annotations need to be imported from the Cluster/J API. They can be found in the annotation package from the cluster API and are imported like this:

import com.mysql.clusterj.annotation.Column;
import com.mysql.clusterj.annotation.Index;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;

As Cluster/J's main purpose is to write, read and modify data in a database, it is simpler to create the table that matches this interface using MySQL (more specifically the mysql client):

CREATE TABLE `t_fish_food` (
`id` int(11) NOT NULL,
`name` varchar(128) DEFAULT NULL,
`wine` varchar(55) DEFAULT NULL,
`taste` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idx_unique_hash_name` (`name`) USING HASH,
KEY `idx_taste` (`taste`)
) ENGINE=ndbcluster;

In Cluster/J as well as in JPA, annotations are used to describe how the interface is mapped to tables in a database. The annotation @PersistenceCapable(table="t_fish_food") is used to let Cluster/J know which database table to map the Fish to: the t_fish_food table in cluster. The annotations @PrimaryKey and @Index inform Cluster/J about indexes in the database table.

Whereas JPA requires extra annotations for every attribute, this is not necessary in Cluster/J.

Note that Cluster/J does not yet know which database the actual table is located in. This is part of writing the class which uses the Fish interface. Now that the interface and the table in cluster are complete, the actual program for writing data can be coded. Cluster/J requires the use of sessions and transactions. In order to obtain a session, a session factory needs to be created first. Properties are used to describe the database environment. The easiest way to do this is by reading a properties file:

Since we are accessing Cluster data directly (without going through a MySQL Server), two properties are sufficient: the connect string of the management server and the database to use (the values here are examples):

com.mysql.clusterj.connectstring=localhost:1186
com.mysql.clusterj.database=test
Once the properties are loaded, we can create a session factory:

sessionFactory = ClusterJHelper.getSessionFactory(props);

The session itself is created like this

session = sessionFactory.getSession();

Once these preparations are complete, the actual reading and writing of data is easy. First create a new instance of Fish, like this:

Fish fish = session.newInstance(Fish.class);
fish.setId(4711 + i);

Next, store it in the cluster via the native interface:

Transaction tx = session.currentTransaction();
tx.begin();
session.makePersistent(fish);
tx.commit();

That's all there is to it. Compile and run the program, then check the database. The attached source example shows how to store an entire ArrayList of Fish in a single transaction.
To use the native Cluster/J API, compile your application with the clusterj-api.jar that is included. The files clusterj-core.jar and clusterj-tie.jar, as well as the ndbjtie/jtie JAR files, are needed to run your Cluster/J application.

java -classpath /usr/local/mysql/share/mysql/java/clusterj-api.jar:...

When starting the JVM, set the java.library.path variable to point to the directory containing the Cluster client and ndbjtie libraries.

java -Djava.library.path=/usr/local/mysql/lib/mysql

To use OpenJPA you must add the same clusterj and ndbjtie JAR files as well as the MySQL Server JDBC connector (mysql-connector-j.jar) to your project. In addition, you must also update your persistence.xml file.

Alternatively, you can also use the individual clusterj packages. Using the native Cluster/J API, you can compile using only the api package; however, if you do so, you must also run your application using the -core and -tie packages as well as the ndbjtie packages.

Whether you are using either native Cluster/J or OpenJPA, the JVM needs to be made aware of the location of the dynamic JTie library. This can be done by adding the directory containing the library to the java.library.path.

If you are wondering why you must use different JAR files at compile time and run time: the Cluster/J API is meant to change less often, which means that you can trust applications written against it to continue to run without having to adapt them to each new release. The implementation layers, the Cluster/J core and the clusterj-tie Cluster plugin, can change at any time.

Cluster/J is also intended to be independent of the Cluster version, while the NDB JTie layer is version specific.

Selects and updates with Cluster/J

The simplest and at the same time most efficient way to retrieve data from Cluster via Cluster/J is the find() function, as shown here:

Fish e = session.find(Fish.class, 4711);

An update on the retrieved object is as simple as this:
e.setTastiness(e.getTastiness() * 2);

This all should then be embedded in a transaction.

WHERE clauses

With Cluster/J, it is possible to write more complex queries. Data in Cluster/J is represented as domain objects which are separate from business logic. Domain objects are mapped to database tables. As a first step, you should fetch a query builder which provides the domain object.

QueryBuilder qb = session.getQueryBuilder();

QueryDomainType<Fish> dobj = qb.createQueryDefinition(Fish.class);

Query<Fish> query = session.createQuery(dobj);

List<Fish> resultList = query.getResultList();
for (Fish result : resultList) {
    String name = result.getName();
    // ... use name ...
}
The query above maps to a full table scan. In order to introduce the desired conditions, the domain object is modified. PredicateOperand objects represent the operands of the condition. A string is used as a placeholder for parameters, and the property name from the persistence-capable interface's getters and setters is used as the other operand.

PredicateOperand param;
PredicateOperand propertyPredicate;
Predicate greaterThan;

param = dobj.param("tasteparam");
propertyPredicate = dobj.get("tastiness");

greaterThan = propertyPredicate.greaterThan(param);
dobj.where(greaterThan);

This part of the code could be also written in pseudo-SQL as WHERE tastiness > ?. The actual query is then created with the domain object, and parameters set for query execution:

 Query query = session.createQuery(dobj);
query.setParameter("tasteparam", new Integer(12));

A simple JPA app

There are excellent tutorials and examples available to help you quickly ramp up on OpenJPA, but this might still be helpful to you in writing a first OpenJPA application for cluster.

OpenJPA requires a running MySQL Server and the MySQL JDBC Connector/J. OpenJPA is built on top of Cluster/J but it routes more complex queries through MySQL Server.

The approach for writing an OpenJPA app using the native Java connector to Cluster is quite similar to that for writing a Cluster/J app. Instead of instantiating a SessionFactory with the needed properties, OpenJPA uses a persistence.xml file. JPA also requires writing an entity class instead of describing the persistable entity with an interface.

Entities mapped to Cluster tables need to implement the Serializable interface. If the table name is the same as the class name, then name = "t_basic" can be omitted. Unlike Cluster/J, JPA requires you to annotate the actual member variables and not the setter and getter functions.

@Entity(name = "t_fish_food") //Name of the entity
public class Fish implements Serializable {

@Id //signifies the primary key
@Column(name = "id", nullable = false)
@GeneratedValue(strategy = GenerationType.AUTO)
private int id;

@Column(name = "name", length = 128)
private String name;

@Column(name = "wine", length = 55)
private String wine;

@Column(name = "taste")
private Integer tastiness;
}


The persistence.xml file needs to be in the META-INF directory, which needs to be in the Java classpath when executing the JPA program:

<persistence xmlns=""
    xmlns:xsi="" version="1.0">

  <persistence-unit name="ndbjpa" transaction-type="RESOURCE_LOCAL">
    <properties>
      <!-- driver class and connection URL are the standard MySQL JDBC
           settings; adjust host, port and database to your setup -->
      <property name="openjpa.ConnectionDriverName" value="com.mysql.jdbc.Driver"/>
      <property name="openjpa.ConnectionURL" value="jdbc:mysql://localhost:3306/test"/>
      <property name="openjpa.ConnectionUserName" value="root"/>
      <property name="openjpa.ConnectionPassword" value=""/>
      <property name="openjpa.BrokerFactory" value="ndb"/>
      <property name="openjpa.ndb.connectString" value="localhost:1186"/>
      <property name="openjpa.ndb.database" value="test"/>
    </properties>
  </persistence-unit>
</persistence>

The openjpa.BrokerFactory property advises JPA to enable the native NDB fast track.

openjpa.ndb.connectString uses the usual syntax for Cluster connection strings, pointing to the Cluster management server. The Cluster/J fast track does not extract the database name from the JDBC driver URL; you need to configure the fast track database using openjpa.ndb.database.

One of the special characteristics of OpenJPA is the implementation of auto-increment values via a sequence table. This table can be created automatically by enabling synchronized schemas. However, we recommend creating the schema manually with the following CREATE TABLE statement:

CREATE TABLE `OPENJPA_SEQUENCE_TABLE` (
`ID` tinyint(4) NOT NULL,
`SEQUENCE_VALUE` bigint(20) DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=ndbcluster;

If openjpa.jdbc.SynchronizeMappings is used, you need to be careful: the MySQL Server needs to be configured to use NDB (or NDBCLUSTER) as the default storage engine. In addition, you must keep in mind that indexes are not created from the Java class.

In order to query the t_fish_food table that we created in the previous section, the following steps are necessary:

EntityManagerFactory entityManagerFactory =
    Persistence.createEntityManagerFactory("ndbjpa");
EntityManager em = entityManagerFactory.createEntityManager();
EntityTransaction userTransaction = em.getTransaction();

Please note that the entity manager factory is created with the name pointing to the persistence unit defined in the XML file.

Query q = em.createQuery("select x from Fish x where x.tastiness > 12");

for (Fish m : (List<Fish>) q.getResultList()) {
    // process each matching Fish here
}


You should also note that the names in the WHERE clause are the entity's class members, not the column names of the table they are mapped to. Now you've run your first OpenJPA program on Cluster. You can find more complex examples online. The examples enclosed in the OpenJPA package currently don't work with Cluster or any other MySQL storage engine.


You can find the examples used in this basic tutorial as part of the source package. To compile the Cluster/J example clusterj_example:

javac -classpath clusterj-api.jar

Start cluster and set host and port in the properties file to point to the cluster management process. Then run the example with

java \
-Djava.library.path=<cluster-inst-dir>/lib \
-classpath mysql-cluster-connector-java.jar:ndbjtie.jar:jtie.jar:. Main

The needed libraries are all found in the cluster install directory: under lib if you use the pre-compiled cluster, or under lib/mysql if you compiled cluster yourself. JAR files are found in the share/mysql/java sub-directory of your Cluster 7.1 installation.

Compiling the OpenJPA example is as simple as

javac -classpath openjpa-1.2.1.jar:geronimo-jpa_3.0_spec-1.0.jar \

And you typically run with the following jar files:

java -Djava.library.path=<cluster-inst-dir>/lib \
-classpath openjpa-1.2.1.jar:\
ndbjtie.jar:../.:. Main

The MySQL JDBC driver can be downloaded from The jar files are either part of Cluster 7.1 or the Apache OpenJPA distribution.

FAQ & hints

32-bit Java VM and 64-bit libraries This is an easy trap to get caught in. A 64-bit Java VM and 32-bit Cluster shared libraries will not work together (and vice versa). The error message you get when you try to run the program with mixed 64-bit and 32-bit binaries is quite clear:

failed loading library 'ndbjtie'; ... wrong ELF class: ELFCLASS64 (Possible cause: architecture word width mismatch)

Missing -Djava.library.path The Java VM needs the path to the cluster client libraries and the ndbjtie libraries. If that path is not provided, or the libraries are not found in the path provided, then you get this error message:

failed loading library 'ndbjtie'; java.library.path='...' Exception in thread "main" java.lang.UnsatisfiedLinkError: ... / cannot open shared object file: No such file or directory