Itaú Asset Management’s Christian Zimmer and Hellinton Hatsuo Takada drill down into the usage of FIX in Brazil, isolating the areas where FIX is developing and where there is room to grow.
The BM&FBOVESPA (BVMF) initiative to provide market data over FIX is just the first step in moving past basic usage of FIX in Brazil. FIX is being implemented under the Unified Market Data Feed (UMDF) banner, with the objective of integrating the traditional FIX market data stream and the Multimedia Multiplexing Transport Protocol (MMTP) stream. Communication efficiency between the two needs to improve considerably, because the Brazilian trading community is starting to move beyond the simple use cases for FIX.
Beyond these FIX implementations, one sign of this development is the FPL initiative, to which local FIX engineers are contributing, to create a Portuguese version of FIXimate. At the last FPL meeting in Brazil, the local audience seemed noticeably more aware of FIX, and the uptake of the Portuguese FIXimate points to a growing development of FIX solutions in Brazil.
Currently, some brokers provide simple execution algos for use in the Brazilian market. However, these are delivered not via FIXatdl, but via an algo number carried in a general-purpose FIX tag. The algos on offer are very simple: mainly VWAP, TWAP, Iceberg, and POV. More sophisticated, alpha-seeking algos are present too, but they are not originally created by Brazilian market participants. These kinds of algos are normally developed by global brokerage firms at their headquarters in the US or Europe and then applied or adapted to the local market (what we call tropicalization).
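To make the current mechanism concrete, the sketch below builds a minimal FIX 4.4 NewOrderSingle in which the algo selection travels as a number in a general-purpose custom tag, as described above. The tag number (9999) and the algo-number mapping are hypothetical illustrations; in practice each broker publishes its own custom tag and values.

```python
# Sketch: selecting a broker algo via a number in a custom FIX tag (not FIXatdl).
# Tag 9999 and ALGO_NUMBERS are hypothetical; real brokers define their own.
SOH = "\x01"  # FIX field delimiter

ALGO_NUMBERS = {"VWAP": 1, "TWAP": 2, "ICEBERG": 3, "POV": 4}  # hypothetical mapping

def new_order_single(symbol: str, side: str, qty: int, algo: str) -> str:
    """Build a minimal FIX 4.4 NewOrderSingle carrying an algo number in tag 9999."""
    body_fields = [
        ("35", "D"),                             # MsgType = NewOrderSingle
        ("55", symbol),                          # Symbol
        ("54", "1" if side == "BUY" else "2"),   # Side: 1=Buy, 2=Sell
        ("38", str(qty)),                        # OrderQty
        ("40", "1"),                             # OrdType = Market
        ("9999", str(ALGO_NUMBERS[algo])),       # custom tag: algo selector (hypothetical)
    ]
    body = SOH.join(f"{t}={v}" for t, v in body_fields) + SOH
    header = f"8=FIX.4.4{SOH}9={len(body)}{SOH}"  # BeginString + BodyLength
    msg = header + body
    checksum = sum(msg.encode()) % 256            # CheckSum per the FIX spec
    return msg + f"10={checksum:03d}{SOH}"

msg = new_order_single("PETR4", "BUY", 1000, "VWAP")
print("9999=1" in msg)  # the algo rides in an opaque numeric field
```

The buy-side OMS must hard-code what "9999=1" means for each broker, which is exactly the interoperability gap FIXatdl was designed to close.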
Even where the international firms have customized the algos to fit local market data, we doubt that many buy-side traders on the actual trading floor are using these advanced methods. There are two main reasons for this. First, there is a lack of confidence that the international teams understand the local Brazilian market well. Second, the big buy-side firms usually have mandates to achieve a 100% fill rate – something the alpha-seeking algos cannot guarantee. This demand stems from the way the big asset management firms work in Brazil: they are more fundamental in approach, focused on allocation rather than trading.
The adoption of FIXatdl could improve algo usage through its standardization, but it is still hard to move forward on this issue. The sell-side does not seem enthusiastic, and thus does not provide the buy-side with this efficient alternative; the buy-side, in turn, is not demanding it, which means little progress can be expected.
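The standardization benefit works roughly as follows: the broker publishes an XML document describing each strategy and its parameters, and any FIXatdl-aware OMS/EMS can render an order ticket from it generically. The snippet below is a deliberately simplified sketch of that idea; it omits the real FIXatdl namespaces and layout/validation sections, and the parameter names and tag numbers (6062 onward) are invented for illustration.

```python
# Simplified sketch of the FIXatdl idea: a broker-published XML description of
# strategies that a buy-side system can consume generically. Element names are
# modeled on FIXatdl but this is NOT a schema-valid FIXatdl document; the
# fixTag values shown are hypothetical custom tags.
import xml.etree.ElementTree as ET

ATDL = """
<Strategies>
  <Strategy name="VWAP" wireValue="1">
    <Parameter name="StartTime" fixTag="6062" type="UTCTimestamp"/>
    <Parameter name="EndTime"   fixTag="6063" type="UTCTimestamp"/>
    <Parameter name="MaxPctVol" fixTag="6064" type="Percentage"/>
  </Strategy>
  <Strategy name="POV" wireValue="4">
    <Parameter name="TargetPctVol" fixTag="6065" type="Percentage"/>
  </Strategy>
</Strategies>
"""

root = ET.fromstring(ATDL)
for strat in root.findall("Strategy"):
    params = [p.get("name") for p in strat.findall("Parameter")]
    # A FIXatdl-aware front end would build the order ticket from this metadata
    print(strat.get("name"), strat.get("wireValue"), params)
```

The point is that the buy-side tool needs no per-broker code: a new strategy or parameter appears on the ticket as soon as the broker ships an updated definition file.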
In addition to FIXatdl, we expect the efforts of the FPL High Performance Interfaces Working Group to become applicable in the Brazilian market. Success obviously depends on whether the exchange permits separate access to its matching engine via this protocol dialect, but as there is constant demand for lower latency, the outlook for this initiative is positive. The same may be true for the FPL Inter-Party Latency Working Group. Although hardware solutions to this problem exist and may add less latency of their own, it seems much easier for a mid-sized firm to use FIX-based latency analysis than to buy an expensive system just for this purpose.
At the recent FPL Americas Conference, Bill Hebert, FPL Americas Education and Marketing Committee Co-chair, moderated a panel of industry experts on “Latency limbo: How low can you get?” The panelists’ insightful observations were so well received by the delegates that we decided to bring them to you. Here, two of the panelists, FlexTrade Systems’ Vijay Kedia and Corvil’s Donal Byrne, share their insights in response to Bill’s questions.
Bill Hebert: How do you best define ‘latency’ as it pertains to the electronic trading world and the issues different firms such as yours are facing?
Vijay Kedia (FlexTrade): In the world of electronic trading, latency is the delay between receiving knowledge of a change in the market and acting upon it. During this time, information travels through both software and hardware, and each element along the path introduces a measurable delay before a message reaches its destination. Latency is inevitable. The biggest challenge for vendors of high-frequency algorithmic trading platforms is balancing rich functionality against the processing cost it incurs. Everything comes at a cost.
Donal Byrne (Corvil): While the term ‘latency’ has a specific technical definition, it is important to remember that in electronic trading it is used as a proxy for the question “How fast am I trading?” While you might think this is a simple enough question, it is actually quite complex, and traders are finding it very difficult to get consistent and useful answers. There are three main reasons for this:
LAW OF LATENCY RELATIVITY – knowing absolute latency is necessary, but not sufficient to determine if you will be successful in high frequency trading. Knowing latency relative to your competition is the key. This is often difficult to achieve.
LATENCY DESCRIPTION – today, latency is not usefully described in our industry. A single published number is insufficient and often misleading as a description of the latency performance of an electronic trading infrastructure. In addition, latency should be measured under load conditions that represent intended use. What is needed is the measurement and publication of latency distributions, measured during busy trading periods, e.g. during the busiest millisecond of the trading day.
INTER-PARTY LATENCY – latency measurement is required on an end-to-end basis for the electronic trading loop. This includes both market data paths and order execution paths. Unfortunately, the measurement of end-to-end latency in a trading loop involves two significant issues: How to measure latency across infrastructure that is owned and controlled by multiple parties, i.e. trader, venue and service provider? How to achieve microsecond accuracy latency measurement across the wide area?
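The two points above – distributions rather than single numbers, and measurement across infrastructure owned by different parties – can be sketched together. The example below matches order IDs between two synthetic capture logs (one at the trader's edge, one at the venue's edge) and reports percentiles of the one-way latency. It assumes the two capture points share a synchronized clock (e.g. via PTP, which is how microsecond accuracy across sites is typically approached); all data and field names here are invented for illustration.

```python
# Sketch: inter-party latency from two capture logs, reported as a distribution.
# Assumes both capture points are clock-synchronized (e.g. PTP) and each record
# is (order_id, timestamp_in_microseconds). All data below is synthetic.

def latency_distribution(send_log, recv_log):
    """Match order IDs across the two capture points; return sorted latencies (us)."""
    recv = dict(recv_log)
    return sorted(recv[oid] - t for oid, t in send_log if oid in recv)

def percentile(sorted_lat, p):
    """Nearest-rank percentile over an already-sorted list."""
    idx = min(len(sorted_lat) - 1, int(p / 100 * len(sorted_lat)))
    return sorted_lat[idx]

# Trader-side capture: 100 orders, one every 50us
send = [(f"ORD{i}", 1_000_000 + i * 50) for i in range(100)]
# Venue-side capture: mostly ~80us later, with a tail during a microburst
recv = [(f"ORD{i}", t + (80 if i % 10 else 450)) for i, (_, t) in enumerate(send)]

lat = latency_distribution(send, recv)
print(f"p50={percentile(lat, 50)}us p99={percentile(lat, 99)}us")
# → p50=80us p99=450us: the median looks healthy while the tail is 5x worse
```

A single "average latency" over this data would hide exactly the tail behavior that matters during busy periods.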
Bill Hebert: What are some latency myths and misperceptions? How are firms using latency management/measurement as a “sales” tool and/or strategy?
Donal Byrne (Corvil): The single biggest myth and misperception is that ‘Latency’ is a single constant number. We are all familiar with the typical claims:
Technology Vendor – “The feed handler is benchmarked at 10us”
Market Center – “We can execute an order within 350us”
Data Provider – “The distribution latency of our direct feed is 2ms”
Telecom Provider – “Our latency is less than 55ms transatlantic”
Advertised latency numbers have taken on major commercial significance in the world of high frequency trading, as many in the industry publish numbers in an attempt to show their service in a favorable light and to demonstrate superior performance over the competition. Unfortunately, this method of describing latency does little to help end-users understand the true performance of the underlying low-latency service. Market pressures are such that few dare to offer latency information that brings real insight and transparency to latency performance, for fear of “appearing slower” than the competition. As a result, most informed customers of these services are forced to ignore the published latency claims and to measure and benchmark latency service levels independently.
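A small worked example shows why a single benchmarked figure can mislead. Suppose a feed handler processes most updates in about 10us but occasionally stalls (all numbers here are synthetic): the average stays close to the headline figure while the 99th percentile is an order of magnitude worse.

```python
# Illustration of the "single number" myth: a synthetic feed handler that is
# fast on average but stalls on 1% of updates. Numbers are invented.
samples = [10] * 990 + [250] * 10   # microseconds; 1% of updates hit a stall

mean = sum(samples) / len(samples)
p99 = sorted(samples)[int(0.99 * len(samples))]  # nearest-rank 99th percentile

print(f"mean={mean:.1f}us p99={p99}us")
# → mean=12.4us p99=250us: the advertised average hides a 25x tail
```

This is why the informed customers mentioned above benchmark independently, under their own load, and look at the whole distribution.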