Testing the Firebird 3 protocol enhancements

At the 9th Firebird Developers Day, we collected donations to sponsor enhancements to the Firebird wire protocol, aimed at optimizing communication speed over high-latency networks (i.e. the internet). Dmitry Yemanov implemented the optimizations, which finally became available for public testing with the release of Firebird 3 Beta 1 a few days ago.

So, I decided to test the improvements. I set up a remote Windows server running FB 2.5 and FB 3.0 (Beta 1 and Beta 2), and used a database with a single “customers” table containing real-life data (7,000 records and 61 fields). For the tests, I also created a second table with the same data, but in this one the fields containing NULLs were filled with random characters and numbers up to their size limits.

The test itself is very simple: retrieve all the fields of the first 5,000 rows of each table using isql (directing the output to disk, since writing to stdout is too “slow” and would distort the results), and measure the time taken to fetch all the rows. Each test was run at least twice in sequence (so the cache was already filled, etc.), and the lowest time obtained was used for the comparison.
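For reference, here is a minimal sketch of how such a test can be scripted with isql; the table, file and connection names are illustrative, not the exact ones I used:

    /* test.sql */
    SET STATS ON;                         -- report elapsed time and I/O statistics
    OUTPUT C:\temp\result.txt;            -- send the rows to disk instead of stdout
    SELECT FIRST 5000 * FROM customers;
    OUTPUT;                               -- restore output to the console
    EXIT;

The script can then be run against the remote database with something like “isql -user sysdba -password masterkey host:C:\data\test.fdb -i test.sql”.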

You can see the results below, and they are very promising! Thanks to Dmitry, and also to Alex Peshkov (who implemented the zlib wire compression).
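For those who want to try the zlib compression, it is enabled through a setting in firebird.conf. A minimal sketch, assuming the parameter name and default documented for Firebird 3 (check the release notes for the exact spelling in your build):

    # firebird.conf (Firebird 3) - wire compression is off by default
    WireCompression = true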

PS: There is one weird case where FB 3 was slower than FB 2.5. I have already reported this to Dmitry, and he is investigating.

A full article (in Portuguese) about the tests is available at FireBase. Thanks to Fernando Pimenta, who “donated” the remote server for my use.

Protocol Graph

Update: Dmitry just sent me more information about the case where FB 2.5 got better performance than FB 3:

Actually, the problem is in the default batch size, not the new code itself. With all fields filled up to their max length, the protocol message size is quite similar between v2.5 and v3, the difference is less than 5%. But v3 always sends 8 packets at once while v2.5 may send 8 to 16 packets at once, depending on the message size. In your particular case, the batch size should be ~12-13 packets. This explains better performance of v2.5.

I need to find a way to adapt the new batching algorithm to better match the old one in such border cases.

Update 2 (21-Jan-15): In a recent email exchange, Dmitry told me that he was able to fix the “problem” that caused FB 2.5 to perform better in that specific case.

