Firebird 3 protocol benchmark

At the 12th Firebird Developers Day, I gave a talk about using Firebird in high-latency networks (a.k.a. the internet). Below are two slides from my presentation, where you can see the improvements in the Firebird 3 wire protocol compared to FB 2.5 and to MySQL.

Enjoy!

Note: the values on the left axis are expressed in seconds. The test server was hosted on Amazon (USA) and the client accessing it was located in Brazil; ping reported a latency of 219 ms. The smaller the bar, the better.

[Graph 1: fetch times in FB 2.5 and FB 3, with and without wire compression]

The graph above shows the results of fetching 10,000 records from a real table used to store customer data. Red bars represent fetches of records with all fields filled (i.e., no fields containing nulls), and blue bars represent fetches of records where some of the fields were null. Tests were done with and without wire compression.
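
If you want to try this yourself: wire compression in Firebird 3 is controlled by the WireCompression setting in firebird.conf. Below is a minimal sketch of the relevant line (it requires zlib to be available and, if I recall correctly, also needs to be enabled on the client side for it to be used):

    # firebird.conf
    WireCompression = true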

[Graph 2: fetch times in MySQL (InnoDB), with and without compression]

The same table used in the previous graph was created in MySQL (InnoDB engine) with the same data. Blue bars mean that wire compression was disabled, and red bars mean it was enabled. The graphs on the left side have all fields filled (i.e., no null fields), while in the graphs on the right side some records have null fields.

As you can see, FB 3 won 😉

I should mention that there were no blob fields in the table, and this makes a big difference. Fetching non-null blobs makes the fetch slower in Firebird, since more round trips are needed.

PS: The improvements in the FB 3 wire protocol were sponsored by donations collected at the 9th edition of the FDD conference, and were implemented by Dmitry Yemanov. Compression was implemented by Alex Peshkov.


Testing the Firebird 3 protocol enhancements

At the 9th Firebird Developers Day, we collected donations to sponsor enhancements to the Firebird wire protocol, aimed at optimizing the speed of communication over high-latency networks (a.k.a. the internet). Dmitry Yemanov implemented the optimizations, which finally became available for public testing with the release of Firebird 3 Beta 1 a few days ago.

So, I decided to test the improvements. I set up a remote Windows server running FB 2.5 and 3.0 (Beta 1 and Beta 2), and used a database with a single “customers” table containing real-life data (7,000 records and 61 fields). For the tests, I also created a second table with the same data, but in this one the fields containing nulls were filled with random characters and numbers, up to each field's size limit.

The test itself is very simple: retrieve all the fields from the first 5,000 rows of the tables using isql (directing the output to disk, since stdout is too “slow” and skews the results), and measure the time taken to fetch them all. Each test was run at least twice (in sequence, filling the cache, etc.), and the lowest value obtained was used for the comparison.
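
For reference, here is a minimal sketch of how such a run can be reproduced with isql. The server name, database path and credentials below are placeholders (not the ones from my test), and the table name “customers” is assumed:

    -- fetch_test.sql (hypothetical script name)
    OUTPUT fetch_result.txt;              -- write the rows to disk instead of stdout
    SELECT FIRST 5000 * FROM customers;   -- fetch all fields of the first 5,000 rows
    OUTPUT;                               -- back to stdout
    QUIT;

    isql myserver:C:\data\customers.fdb -user SYSDBA -password masterkey -i fetch_test.sql

The elapsed time can be measured with an external timer; isql can also print per-statement statistics (including elapsed time) if SET STATS ON; is added at the top of the script.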

You can see the results below, and they are very promising! Thanks to Dmitry, and also to Alex Peshkov (who implemented the zlib compression).

PS: There is one weird case where FB 3 was slower than FB 2.5. I have already reported this to Dmitry, and he is investigating.

A full article (in Portuguese) about the tests is available at FireBase. Thanks to Fernando Pimenta, who “donated” the remote server for my use.

[Graph: protocol test results, FB 2.5 vs FB 3 Beta]

Update: Dmitry just sent me more information about the case where FB 2.5 got better performance than FB 3:

Actually, the problem is in the default batch size, not the new code itself. With all fields filled up to their max length, the protocol message size is quite similar between v2.5 and v3; the difference is less than 5%. But v3 always sends 8 packets at once, while v2.5 may send 8 to 16 packets at once, depending on the message size. In your particular case, the batch size should be ~12-13 packets. This explains the better performance of v2.5.

I need to find a way to adapt the new batching algorithm to better match the old one in such border cases.

Update 2 (21-Jan-15): In a recent email exchange, Dmitry told me that he was able to fix the “problem” that caused FB 2.5 to have better performance in that specific case.