The FPL Americas Electronic Trading Conference is always a year-end highlight for those in electronic trading, and this year was no exception. Sara Brady, Program Manager, FPL Americas Conference, Jordan & Jordan, thanks all the sponsors, exhibitors and speakers who made this year’s conference a huge success.
The 6th Annual FPL Americas Electronic Trading Conference took place at the New York Marriott Marquis in Times Square on November 4th and 5th, 2009. John Goeller, Co-Chair of the FPL Americas Regional Committee, aptly set the tone for the event in his opening remarks: “We’ve lived through a number of challenging times… and we still have quite a bit of change in front of us.” After a difficult year marked by economic turmoil, the remarkable turnout at the event was proof that the industry is back on its feet and ready to move forward with the changes to the electronic trading space set forth in 2009.
Market Structure and Liquidity
Two topics clearly stood out as key issues that colored many of the discussions at the conference: regulatory impact on market structure and liquidity, and high frequency trading. An overview of industry trends demonstrated that the current challenges facing the marketplace are dominated by these two elements. Market players are still trying to digest the events of 2008 and early 2009, adjusting to the new landscape and assessing the changing pockets of liquidity amidst constrained resources and regulatory scrutiny. The consistent prescription for dealing with this confluence of events is to take things slowly and to understand any proposed change holistically before acting on it and encountering unintended consequences.
The need for a prudent approach towards change and reform was expressed by many panelists, including Owain Self of UBS. According to Self, “Everyone talks about reform. I think ‘reform’ may be the wrong word. Reform would imply that everything is now bad, but I think that we’re looking at a marketplace which has worked extremely efficiently over this period.”
What the industry needs is not an overhaul but perhaps more of a fine-tuning. Liquidity is one such area that needs carefully considered fine-tuning. Any impulsive regulatory change to a pool of liquidity could negatively impact the industry. The problem is not necessarily how liquidity is accessed, but the lack of liquidity that results in the downward price movements that marked a nightmarish 2008. Regulations against dark liquidity and the threshold for display sizes are important issues requiring serious discussion.
Rather than moving forward with regulatory measures that may sound politically correct, there needs to be a better understanding of why this liquidity is trading dark. While there is encouraging dialogue occurring between industry players and regulatory bodies, two things are for sure. We can be certain that the evolution of new liquidity venues is evidence that the old market was not working and that participants are actively seeking new venues. We can also be assured that the market as a messaging mechanism will continue to be as compelling a force as it has been over the last two decades.
Risk
One of the messages that the market seems to be sending is that sponsored access, particularly naked access, is an undesirable practice. Presenting the broker-dealer perspective on the issue, Rishi Nangalia of Goldman Sachs noted that while many agree that naked sponsored access is not a desirable practice, it still occurs within the industry. A panel on systemic risk and sponsored access identified four types of the latter: naked access, exchange sponsored access, sponsored access via broker-managed risk systems (also referred to as SDMA or enhanced DMA) and broker-to-broker sponsored access.
According to the U.S. Securities and Exchange Commission (SEC), the commission’s agenda includes a look specifically into the practice of naked access. David Shillman of the SEC weighed in on the commission’s concern over naked access by noting, “The concern is, are there appropriate controls being imposed by the broker or anyone else with respect to the customer’s activity, both to protect against financial risk to the sponsored broker and regulatory risk, compliance with various rules?” Panelists agreed that the “appropriate” controls will necessarily adapt existing rules to catch up with the progress made by technology.
On October 23, NASDAQ filed what it believes to be the final amendment to the sponsored access proposal it submitted last year. The proposal addresses the unacceptable risks of naked access and the questions of obligations with respect to DMA and sponsored access. The common element of both approaches is that both systems have to meet the same standards of providing financial and regulatory controls. Jeffrey Davis of NASDAQ commented on his suggested approach: “There are rules on the books now; we think that they leave the firms free to make a risk assessment. The new rules are designed to impose minimum standards to substitute for these risk assessments. This is a very good start for addressing the systemic risk identified.”
These steps may be headed in the right direction, but are they moving fast enough? Shillman added that since sponsored access has grown in usage there are increasing concerns and a growing sense of urgency to ensure a commission level rule for the future, hopefully by early next year. This commission proposal would address two key issues – should controls be pre-trade (as opposed to post-trade) and an answer to the very important question, “Who controls the controls?”
Getting to the bottom of naked sponsorship and high-frequency trades.
FIX: What does the buy-side want from Direct Market Access (DMA)?
David Polen: There are two distinct market segments that use DMA - the human trader and the black box. I like to call these “Human DMA” and “High-frequency Trading (HFT) DMA”.
With Human DMA, the extreme is a buy-side that has traders manually executing trades and looking at market data over the Internet; with HFT DMA, the extreme is a black box co-located at the exchange. One market segment is sub-millisecond and the other is more than tens of milliseconds - sometimes hundreds of milliseconds.
The human trader manually clicking around on a front-end is more interested in the full range of services a broker can provide than he is in latency. Although speed is always important, he’s keen on being able to access all his applications via one front-end versus having to go to different windows. He’s looking for his broker to be a one-stop-shop, providing all the necessary services, such as algorithms and options and basket trading, in one easy and convenient bundle. He wants clean and compliant clearing and settlement.
The high-frequency trader is different. He has his own algorithms and smart order routers (SORs). He wants to get to the market as quickly as possible and needs credit and also memberships to the various execution venues.
FIX: What is the controversy around naked sponsorship for high-frequency traders?
DP: With naked sponsorship, the HFT is trading directly on the exchange, and the broker is only seeing the orders and trades afterwards. To help with this flow, exchanges have built in risk checks, so the broker can rely on the exchanges for pre-trade risk management.
To get a view across the exchanges, the broker consolidates the post-trade information through drops of the orders and trades. Although the broker has a reasonably good view of the risk at all times, it can take as long as a minute to turn off a buy-side that has exceeded its pre-set risk parameters. This is often exaggerated into a doomsday scenario where a buy-side trades up to $2 billion of stock in those 60 seconds, but that ignores the exchange’s own controls, which would not be set to $2 billion. It is a lot more likely for a buy-side to barely stay within its risk limits at each exchange, but exceed the overall allotted risk by multiples. Brokers need to have measurements in place to prevent that.
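The aggregation problem Polen describes can be sketched in a few lines. This is an illustrative toy, not any broker's actual system; the class, the limit figures and the venue names are all hypothetical. The point is that each per-venue check can pass while the firm-wide sum breaches the overall limit.

```python
# Hypothetical sketch: consolidating drop-copy fills across venues so a
# client who stays within each per-venue limit can still trip the
# broker's aggregate notional limit.

class AggregateRiskMonitor:
    def __init__(self, overall_limit, venue_limits):
        self.overall_limit = overall_limit      # firm-wide notional cap
        self.venue_limits = venue_limits        # per-venue notional caps
        self.exposure = {v: 0.0 for v in venue_limits}

    def on_drop_copy(self, venue, notional):
        """Apply a fill reported on a venue's drop-copy feed."""
        self.exposure[venue] += notional
        if self.exposure[venue] > self.venue_limits[venue]:
            return "venue limit breached"
        if sum(self.exposure.values()) > self.overall_limit:
            return "aggregate limit breached"   # each venue alone looks fine
        return "ok"

monitor = AggregateRiskMonitor(
    overall_limit=100e6,
    venue_limits={"VENUE_A": 60e6, "VENUE_B": 60e6},
)
print(monitor.on_drop_copy("VENUE_A", 55e6))  # ok: within both limits
print(monitor.on_drop_copy("VENUE_B", 55e6))  # aggregate limit breached
```

Each fill is within its venue's cap, yet the second one pushes the total to $110m against a $100m firm-wide limit, which is exactly the "exceed the overall allotted risk by multiples" scenario.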
FIX: What are the key concerns with latency?
DP: The best way to lower latency is to get rid of as many message hops as possible. Co-locating at an exchange is an obvious step, as it eliminates network hops. Although co-location is important, it comes with infrastructure costs that not all high-frequency traders are willing to bear; for example, they may need to co-locate at each exchange.
Some buy-sides or brokers may co-locate at only one exchange and use that venue’s network to access others. Co-location also depends on the buy-side’s trading strategy. High-frequency traders need to understand where they want to trade. They can’t think of the market as a montage when they’re trying to achieve the lowest execution latency. There’s no time to sew together the fragmented marketplace if you’re also trying to be incredibly reactive to each and every exchange.
It’s also important to focus on latency within each exchange. Shaving another 100 microseconds off your DMA solution may not matter much if you are hitting an exchange port that is using old hardware or if you are overloading a port at the exchange and not load-balancing to another port. You also have to be aware of the protocol you are using: some exchanges have created legacy FIX sessions that are wrappers around internal technology and can be quite slow converters.
They are now creating “next generation” APIs that are native FIX and much faster, but these sessions may only offer a subset of available messages, so you have to consider routers that send the supported subset down the fast FIX pipe.
Working on a Porsche analogy: To say that all Porsches are slow is, clearly, about as ridiculous as saying that FIX is slow! Kevin Houston explains.
All early Porsches have their roots in the design of the Volkswagen Beetle, designed by Ferry Porsche’s father, Ferdinand. The VW Beetle is only capable of about 80 mph, and even that would be at the cost of a terrified driver. Early Porsches share a lot of their design elements with the Volkswagen; however, that does not mean that Porsches are slow. Many of us will have been on track days where we have driven Porsches around a race track, hitting speeds of well over 80 mph, without inducing any great feelings of fear.
The early FIX engines, designed in an era of simply routing care orders between the buy-side and sales traders, are also slow; if used for modern high speed trading they would leave traders nervously guessing whether each message would be one of the lucky ones that went through quickly or, more probably, one of the unlucky ones that took several seconds to arrive at its destination. Again, some of us can testify that these early engines do not represent the state of the art; equally, however, high velocity trading houses today are using FIX to trade in around 250 microseconds, and a small number of FIX engine vendors are currently capable of consistently beating 10 microseconds for message processing, delivering throughput of around 100,000 messages per second.
To say that all Porsches are slow is, clearly, about as ridiculous as saying that FIX is slow. The remainder of this article examines the second myth in more detail.
First, a bit of history
FIX started as a pilot between Salomon Brothers and Fidelity Investments to automate the placement of orders, the reporting of executions and the distribution of advertisements and IOIs. Indeed, many early FIX implementations did not even place the order electronically, but only reported the executions after order placement. At that time there was only one FIX engine in the marketplace; its price was extremely high and its performance, by today’s standards, extremely low. FIX adoption was driven by error reduction and the like. The arrival of the early commercial FIX engines did the community a great service by creating cheaper alternatives, but the performance bar was not very high. Often when people now refer to FIX as being slow, they are using early FIX engine performance as the yardstick. Since then, and particularly over the last 5 to 10 years, increasing emphasis has been placed on the performance of FIX, driven by a number of trends, and the FIX engine vendor community has responded. Some have accepted their current performance levels as adequate for order routing but not DMA and focused on that market; others, often new entrants, have engineered FIX engines from the ground up to focus on future-proofing performance.
The drivers behind this need for speed are worth noting:
• Increased market data
• Increased order volume
• Exchange adoption of FIX
• Large percentage of trades going electronic
• Rise and rise of algorithmic trading
These drivers lead to two separate performance needs, high throughput and low latency. Whilst these are related needs there are optimisations that can be made that favour either. For example, FIX communicates over TCP/IP, typically a FIX message uses a few hundred bytes, but an IP packet has space for around 1,400 bytes of information. A setting on the IP communication layer allows you to select whether packets should be held back to wait for additional information that can be sent in the same packet.
This obviously improves throughput, as the receiving application has to process fewer packets, but it has the potential drawback of increasing latency for at least the first message, which has to wait for subsequent messages. So, whether you allow this hold-back or not depends on your performance needs, or profile. This mechanism was introduced by John Nagle and is named after him; see http://en.wikipedia.org/wiki/Nagle’s_algorithm for more details. OK, so FIX can be fast, but a lot of the implementations out there are dated and therefore can be slow; so, are there other things FPL is working on that will help with performance?
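The hold-back setting described above is a single socket option. A minimal sketch, assuming a standard TCP socket: the Nagle algorithm is on by default, and a latency-sensitive FIX session typically disables it with TCP_NODELAY, while a throughput-oriented session may leave it enabled to coalesce small messages into one packet.

```python
# Minimal sketch of the latency-vs-throughput choice: toggling Nagle's
# algorithm on a TCP socket via TCP_NODELAY.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Latency profile: push each small FIX message onto the wire immediately.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# A throughput profile would instead leave Nagle enabled (the default),
# letting the stack hold packets back to fill them closer to capacity:
#   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)

print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```

The same option exists in essentially every language's socket API, so the choice of profile is portable across FIX engine implementations.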
Where are we today, how long does it take?
Let’s take a look at some of the time costs in sending a FIX message. Obviously this is a very rough estimate and there are a lot of variables, but here are some general timings that are worth covering. First, there is constructing the message, which typically takes something like 10 microseconds; saving a copy before sending, 50 microseconds; sending it to the wire, typically 10 microseconds; transmission time, a function of distance but easily worked out as distance divided by two-thirds the speed of light; switching time, a function of the number of routers, switches and their ilk; passing via the operating system into user space, say 10 microseconds; and finally parsing in user space, 10 microseconds. There are a number of ways you can improve these, such as saving the copy of the message asynchronously, but that still leaves a lot of time spent building the FIX message and its tag=value syntax.
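The rough budget above can be added up directly. This is a sketch using the article's own per-step estimates; the 50 km distance is an illustrative assumption, and switching time is omitted since it depends entirely on the path.

```python
# Back-of-the-envelope tally of the FIX message time costs listed above.
# Transmission time = distance divided by two-thirds the speed of light
# (the approximate propagation speed of a signal in fibre).

C = 299_792_458  # speed of light in vacuum, m/s

def transmission_us(distance_m):
    """Propagation delay in microseconds over fibre."""
    return distance_m / (2 / 3 * C) * 1e6

budget_us = {
    "construct message":     10,
    "save copy before send": 50,
    "send to the wire":      10,
    "transmission (50 km)":  transmission_us(50_000),
    "OS into user space":    10,
    "parse in user space":   10,
}
total = sum(budget_us.values())
for step, t in budget_us.items():
    print(f"{step:22s} {t:7.1f} us")
print(f"{'total':22s} {total:7.1f} us")
```

Even over a modest 50 km, propagation dwarfs every processing step except the message copy, which is why co-location and asynchronous copying are the first two optimisations people reach for.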
Andres Araya Falcone of the Santiago Stock Exchange explains how FIX is increasing the range of services available to traders in Chile and throughout Latin America.
How is FIX facilitating DMA into the Santiago Stock Exchange?
The first concept of DMA in Chile began with what we call “direct traders” (buy-side traders), allowing these specially authorized institutional clients to send orders directly to the market via a “broker sponsor”. Thus, pension and mutual funds, insurance companies and other institutions, using trading terminals provided by the Stock Exchange, can trade directly in our market. The next natural step was the incorporation of electronic networks to attract order flow from the U.S., Europe and neighboring countries in Latin America, especially Brazil.
In 2006, we built the first FIX interface using version 4.0 to connect to the Marcopolo Network, to attract the order flow of our local equities market. After that, the Santiago Stock Exchange launched its initiative to modernize the equities electronic trading system and developed Telepregón HT, jointly with IBM, which went live in June 2010. This system is ready for algorithmic trading flow since it supports a throughput of over 3,000+ orders per second with sub-millisecond latency. In designing the system, we decided to use FIX 4.4 to enable easier connection via DMA with other exchanges, sell- and buy-side firms and market information vendors. This has greatly facilitated the connection to different networks, such as Bloomberg, Fidessa and SunGard, among others. For all these initiatives, FIX has been crucial in facilitating the integration with these listed networks. During 2011 we will announce new network agreements.
Currently, in the equity market, 11% of order flow comes from DMA, which represents an average increase of 27% over the last six months; 19% on average comes from Internet retail order flow, and the rest comes from traditional OMS and trading workstations.
As foreign investment into Chile and the Chilean market continues, how will the Santiago Stock Exchange upgrade its platforms to meet increased investor and trader demands?
In 2010, the Selective Share Price Index (IPSA), the country’s main stock market indicator, gained 37.6% in Chilean pesos (equivalent to some 46% in dollars). Share trading on the Santiago Stock Exchange rose to US$60 billion in 2010, up 30.5% from 2009, setting a new annual record. Trading was particularly strong in the second half of the year, which accounted for almost 60% of the annual total, reflecting strong demand from both local and international investors.
At the same time, by the end of 2010, the Santiago Stock Exchange had signed a linkage agreement with Brazil’s stock exchange, BM&FBOVESPA, heralding the latest in a series of cooperative projects being run between Latin American bourses. The agreement, signed on December 13th, will enable connectivity between both exchanges for order routing and market data dissemination. It also includes separate initiatives for further development of the Santiago Stock Exchange’s derivatives market, the establishment of joint initiatives related to settlement, clearing and central counterparty services, as well as access to the BM&FBOVESPA/CME trading platform from Chile.
Market participants in both countries will be able to route orders for stocks, stock options and related derivatives listed on the other’s exchange. Both exchanges will also be able to receive and distribute each other’s market data. Clearing and settlement of orders will be done according to local market rules of listed instruments. These kinds of initiatives imply that the Santiago Stock Exchange’s IT platform has to be prepared to manage more than 6 million orders per day.
What plans does the Santiago Stock Exchange have to accommodate High Frequency Trading and algorithmic order flow?
We are working as an integrator of a state-of-the-art product for algorithmic trading. In conjunction with Streambase, FIXFlyer and IBM WFO, we are creating a product we will call “Broker in a Box”. The idea is to provide a framework for capital markets, including a set of algorithmic order execution strategies designed to achieve best execution, access liquidity, minimize slippage and maximize profits for trading operations. These algorithmic trading strategies (like VWAP, TWAP, Arrival Price / Implementation Shortfall, etc.) are provided as fully customizable EventFlow modules which can be used in conjunction with the framework. Trading firms will be able to modify each algorithm to reflect their own “secret sauce” and to differentiate their trading strategies in the market. The Santiago Stock Exchange will provide an “all in one” solution: integrated markets, market data (from the Integrated Latin American Market (MILA), NYSE and NASDAQ), co-location, monitoring, local support, etc.
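To make the strategy names above concrete, here is the simplest of them in miniature: a TWAP slicer, which divides a parent order evenly across the slices of a trading interval. This is a generic textbook sketch, not the exchange's EventFlow implementation, and the function name is hypothetical.

```python
# Toy TWAP (time-weighted average price) scheduler: split a parent order
# into n even child slices, spreading any remainder over the first slices
# so the quantities sum exactly to the parent quantity.

def twap_schedule(total_qty, n_slices):
    base, rem = divmod(total_qty, n_slices)
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

print(twap_schedule(10_000, 6))  # -> [1667, 1667, 1667, 1667, 1666, 1666]
```

A VWAP strategy replaces the even split with weights drawn from the stock's historical intraday volume curve; the customization hooks the answer describes are precisely where firms substitute their own weighting logic.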
Timothy Furey, Goldman Sachs, Neal Goldstein, Nomura and John Goeller, Bank of America Merrill Lynch, shed light on the process of managing risk in electronic trading.
At the start of this year, FPL announced the completion of an initial set of guidelines, which recommends risk management best practices in electronic trading for institutional market participants. In the third quarter of 2010, FPL launched a group to raise awareness regarding the implications of electronic trading on risk management and to develop standardized best practices for industry consideration. Over the last few months, the group, which consists of a number of senior leaders in electronic trading from the major sell-side firms, has been working on developing this set of guidelines to encourage broker-dealers to incorporate a baseline set of standardized risk controls.
The objective of the guidelines is to provide information around risk management and encourage firms to incorporate best practices in support of their electronic trading platforms. In today’s volatile marketplace, the automation of complex electronic trading strategies increasingly demands a rational set of pre-trade, intra-day and pattern risk controls to protect the interests of the buy-side client, the sell-side broker and the integrity of the market. The objective of applying electronic order risk controls is to prevent situations where a client, the broker and/or the market can be adversely impacted by flawed electronic orders.
The scope of the particular set of risk controls included in the guidelines covers electronic orders delivered directly to an algorithmic trading product or to a Direct Market Access (DMA) trading destination. The recommended risk controls provide the financial services community with a set of suggested guidelines that will systemically minimize the inherent risk of executing electronic algorithmic and DMA orders.
In what area are sell-side and buy-side firms’ risk controls most in need of improvement?
Timothy Furey, Managing Director, Goldman Sachs and FPL Risk Management Committee Co-Chair: One of the observations coming from the FPL risk sessions was that the buy-side and sell-side had really given considerable thought to their own individual firm’s risk controls. That said, both the sell-side and the buy-side should continue to focus on pulling together a standard, consistent base set of controls that their respective firms can reasonably implement. Therefore, it is more a question of standardization than a need for specific improvement.
John Goeller, FPL Americas Regional Committee Co-Chair and Managing Director, Global Execution Services, Bank of America Merrill Lynch: This effort was not necessarily to address an apparent deficiency in how the buy-side or the sell-side handles risk management, but to codify a set of best practices for all firms to use. It was generally accepted when we started this process that all firms implement some level of risk controls around their business. Our goal was to identify the most common ones and ensure that we have a base set of controls that all firms can implement.
Neal Goldstein, Managing Director, Nomura Securities International and FPL Risk Management Committee Co-Chair: It is important for the buy-side community to recognize that their efforts to implement risk management controls for electronic trading will be more effective when a collaborative effort is made with their sell-side executing brokers. For algorithmic and conventional (low frequency) DMA orders, the first line of defense should be the risk controls incorporated within the buy-side OMS/EMS. The most effective risk control is to prevent a questionable order from leaving the buy-side OMS/EMS. A specific factor that the buy-side should be looking at more closely is the impact a given order has on available liquidity. While the order validation employed by many buy-side clients accounts for notional value and order quantity, another factor that needs more consideration is the Average Daily Volume (ADV) during the trading interval. Creating an order to trade, where the volume participation rate may exceed ADV for a given interval, can have significant adverse impact on execution price and algorithmic performance, particularly for illiquid names.
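Goldstein's point about participation rate can be illustrated with a small sketch. This is a hypothetical order validator, not any firm's actual OMS/EMS logic; the limit values and function names are illustrative assumptions. It shows how an ADV-based check catches an order that passes the usual notional and quantity limits.

```python
# Hedged sketch of pre-trade order validation in a buy-side OMS/EMS:
# notional and quantity checks, plus the participation-rate check against
# Average Daily Volume (ADV) for the trading interval described above.

def validate_order(qty, price, interval_adv,
                   max_notional=50e6, max_qty=1_000_000,
                   max_participation=0.25):
    """Return a list of warnings; an empty list means the order passes."""
    warnings = []
    if qty * price > max_notional:
        warnings.append("notional limit exceeded")
    if qty > max_qty:
        warnings.append("quantity limit exceeded")
    if interval_adv > 0 and qty / interval_adv > max_participation:
        warnings.append("participation rate above ADV threshold")
    return warnings

# An illiquid name: tiny notional, but 62.5% of the interval's volume.
print(validate_order(qty=50_000, price=4.0, interval_adv=80_000))
```

The example order is only $200,000 notional, so the conventional checks stay silent; only the participation-rate check flags the adverse-impact risk for the illiquid name.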
What role, if any, should the exchanges play in implementing risk controls?
John Goeller: Most exchanges have technology solutions around risk management (in certain situations their use is mandatory). In some cases, these tools are optional and only work when accessing a particular exchange. Regardless of whether a firm is utilizing exchange-provided tools, home-grown systems, or vendor-supplied solutions, it can still leverage our efforts to understand whether those tools implement industry best practices.
OLMA Investment Company’s Alena Melnikova and Valerian Zamolotskikh assess the Russian exchanges and comment on DMA and algorithmic trading.
Are algorithms widely used by Russian trading firms, and what asset classes are they used for (options, futures, ForEx, bonds, commodities)?
Algorithmic trading on the Russian market is relatively well developed. According to average estimates, about 60% of trading volume in our market is created by algorithmic trading systems. We use strategies similar to those used in Western markets: market making, arbitrage, pair trading, stochastic methods. However, algorithms for minimizing impact costs are not very popular, because there are few participants who can sufficiently affect prices. Algorithms to minimize market impact will be more popular with the advent of large participants and new funds. Trading firms widely use market making and arbitrage strategies, while stochastic methods are widespread amongst common traders.
Algorithmic trading in general, and stochastic methods in particular, are widespread amongst traders because the share of traders with strong mathematical skills is relatively high. These traders use stochastic methods for developing strategies and program algorithms for them. On the other hand, brokers meet traders’ requirements by developing trading platforms that allow traders’ applications to connect via an API, an internal programming language and so on. Algorithms are mostly used for futures, and slightly less for options; the most liquid instruments in Russia are futures on the RTS Index, and the majority of algorithms use this asset class.
Do you see an increase in DMA to Russian markets, and if so, will it come through equities, derivatives, ETF’s or other products?
It is difficult to say if DMA is developing in some specific products. Often, it depends on the exchange that provides the DMA. DMA is developing in equities on MICEX and in derivatives on RTS through the RTS Standard (equities T+4). DMA will develop proportionally alongside current liquidity. Futures on the RTS Index and Sberbank shares (on MICEX) remain the most interesting for algorithmic traders. As for ETFs, they are a new product on our market and are not very popular at this time, although we do expect development of DMA in this asset class.
How will data centers and increased connectivity, within Russia and out to London and international exchanges, affect high frequency and algorithmic traders?
We think data centers and increased connectivity will have a positive impact on our financial system generally, and on HFT in particular. First of all, we expect increasing liquidity in commodity, currency and index futures. Second, narrowing spreads will result in an increase in trading volume and more arbitrage strategies, for example ADR-equity arbitrage. Third, with the advent of foreign HF traders, local traders will have to improve their technologies, equipment and strategies, all of which certainly will have a positive effect.
Are Russian exchanges’ trading architectures ‘fast enough’ and what do RTS or MICEX need to do to meet Russian traders’ demands?
From our point of view, the Russian exchanges’ trading architectures are considered fast enough. An average round trip is 10-20 microseconds for both RTS and MICEX. What is more, RTS makes continual efforts to improve the trading environment. Last year RTS launched a new protocol Plaza II, which aims to improve and accelerate trading conditions. RTS is much more mobile, determined and creative, fast to meet investors’ requirements and ready to discuss improvements to their current work. Technical failures, however, can happen on RTS. MICEX is stable, but tends to develop slowly, which limits its appeal for HFT clients. HF traders seem to choose RTS, which offers halved fees for intraday trading and allows them to place their equipment in RTS’ data center on more attractive terms.
What is your impression of Moscow’s future as a financial center?
I would have to say that much remains to be done. However, in recent years, market professionals with the state’s support have made important steps in this direction. For example, the taxation of derivatives adopted last year. Of course, it is difficult to compare Moscow with other financial centers now, but the prospects are very encouraging.
Stephanie Lawton reports on the latest from the Face2Face Forums in Mumbai and Kuala Lumpur.
Few exchanges have seen such dramatic transformations as those in India. Technology looks set to play a major role in meeting market demands, with the BSE announcing its adoption of FIX 5.0 and the NSE using FIX 4.2, with plans to upgrade to 5.0 as needed. Both the NSE and BSE seem determined to not only meet, but exceed their members’ expectations and have aggressive plans to build on existing capabilities and develop new products.
Bringing together the Exchanges
Three exchanges (NSE, BSE and MCX) came together to debate the role of technology, regulators and, of course, competition.
Jim Shapiro, head of market development for the Bombay Stock Exchange (BSE), stated that the ability of an exchange to innovate and stay ahead of the market would be the key to its success. Correctly reading how the regulators may react to situations and how regulations in India evolve would also be key, he added. Vidhu Shekhar, vice president of new products for the National Stock Exchange of India (NSE), agreed that keeping pace with market growth was essential. “You need to keep your eye on the ball,” he urged. “We need to recognise what’s going on outside India and decide how we, as an exchange, respond to the challenges and opportunities of globalization.”
Latika Kundu, head of market operations for the MCX-SX, focused on the role of technology. “It’s about awareness of products on the market and how we ensure maximum accessibility to these new products,” she argued.
Looking at the progress of DMA and automated trading, the BSE felt the process was still in its infancy, with DMA still showing market constraints. However, algorithms were attracting a lot of interest from most market participants. New players, in particular, were ramping up this aspect of their technology and product offerings, with the BSE keen to attract these new market entrants.
On the subject of regulatory changes, all the exchanges agreed that the regulators had come a long way in engaging with the market and the exchanges. The main concern centered on systemic risk and on better understanding clients’ requirements. On the idea of a MiFID-style system, the exchanges said that though the issue of best execution was being actively discussed, it still remained a complex one. According to Shapiro, dark pools were not high on the regulators’ priority list; block trading provoked more interest.
The Keynote – High Frequency Trading
High Frequency Trading as the New Market Makers was addressed by Ronald Gould, Chief Executive Officer, Asia- Pacific, Chi-X Global.
To start his presentation, Ron questioned whether High Frequency Trading is ‘bad’ or just ‘badly understood’. He gradually unfolded the story by looking at the development of HFT in the US and Europe in terms of regulatory evolution and the technology arms race. He also illustrated that an Alternative Trading System (ATS) has a positive impact on trading volume, reflected in the explosion of trading activity in Europe and the U.S. He predicted that Asia-Pacific markets will undergo many of the same changes as the U.S. and Europe, with HFT playing a critical role in many existing Asia-Pacific markets with relatively low liquidity.
What are the major issues for electronic trading in India?
The major drivers were still the foreign institutional investors, who were showing a strong appetite for algorithms, explained Murat Atamer, vice president, equities, at Credit Suisse AES. “FIXatdl would be attractive to our clients,” said Atamer, adding that India was not a market that should be traded without algos.