AFME’s Securities Trading Committee Chairman Stephen McGoldrick unpacks the latest MiFID proposals and examines the rules for Organized Trading Facilities, algo trading and a consolidated tape.
Organized Trading Facilities (OTFs) The OTF regime began life as a specific regulatory wrapper to put around broker crossing systems, which are a new mechanism for delivering an existing service. Crossing, which is almost the definition of broking, has become highly automated. Whilst most crossing activity has not changed, other aspects of the industry were seen to require regulation – namely increased automation and the greater scope of crossing. The initial proposals outlined an umbrella category of systems called OTFs, with one category created to hold broker crossing systems and another to hold the systems for G20 commitments around derivatives trading.
When the MiFID II proposals came out at the end of 2011, the ‘umbrella’ aspect had been simplified into a structure intended to be ‘all things to all people’, which is where it has come undone. MiFID II has created a regulatory receptacle for a practice and the two things differ in shape. The broker crossing system does not fit into the receptacle that has been created for it because much of the trading is against the books of the system’s operators, which is prohibited under the current proposals.
The regulators do not want speculative, proprietary trading within these systems, but unwinding risk created by clients is both useful and risk-reducing. An opt-in mechanism for compliance, allowing traders to decide whether they want their orders traded this way, may be a solution. Conflict management of this sort is common in the financial sector, as it ensures that any discretion is not exercised against the interests of the client. Certainly, when it comes to weighing the client’s interests against those of the operator of an OTF, it is absolutely unambiguous that the client’s interests must come first. Therefore, any exercise of discretion that disadvantages the client relative to the operator is already prohibited. A formal, documented process to ensure that segregation stays in place is good, but to effectively prohibit the vast majority of trading on broker crossing systems seems to abandon the regulators’ objectives – to increase transparency and protect clients.
Furthermore, trades allowed into a broker crossing system would be instantly reported, creating post-trade transparency. The current proposals call for OTFs to be treated in the same way as Multilateral Trading Facilities (MTFs), which fosters uncertainty about the waivers for pre-trade transparency. Currently, there are clear criteria for granting a waiver to a platform: one is that orders are large in size; the other is taking reference prices from a third party platform. The Commission will not, however, be making the decisions about waivers; they have been handed to the European Securities and Markets Authority (ESMA) to determine. There is a danger in setting limits for these waivers that are too stringent, which would create a very different landscape from that explicitly envisaged by MiFID I.
Systematic Internalisers (SIs) Our understanding is that regulators did not want to split activity that was in an OTF into two, but rather to regulate the broker crossing systems and to remove the subjectivity of SIs. The current SI proposal is aimed at regulating automated market making by banks, so that institutions make markets by reference to market conditions, not by reference to their clients. In MiFID I, the SI regime was introduced to protect retail investors, but subsequently this seems to have changed. When the European Commission (EC) was asked by the Committee of European Securities Regulators (CESR) to clarify the rationale for an SI regime, it declined to do so. As a result there is a distinct lack of clarity regarding the intent of the SI rules. If we had a clearer vision of the direction in which the regulators wished to take the market, it would be far easier to assess whether the regulations were moving us in the right direction – or not.
Wendy Rudd of the Investment Industry Regulatory Organization of Canada (IIROC) describes the Canadian approach to circuit breakers, minimum size and increment requirements and the role of dark liquidity.
What is currently driving the regulatory policy agenda with regard to circuit breakers? Globally, and Canada is no exception, we have seen the introduction of new rules in several areas related to the mitigation of volatility. Circuit breakers are just one of those areas. While some reforms may have been in the works already, the Flash Crash of May 2010 certainly served as a catalyst for a broader debate about market structure, trading activity and the reliability and stability of our equity trading venues.
Volatility is inevitable, so when does it become a regulatory concern? From our perspective – and we regulate all trading activity on Canada’s three equity exchanges and eight alternative trading systems – we see it as a priority to mitigate the kind of short-term volatility that interrupts a fair and orderly market. We do not expect to handle this role alone; it is a shared responsibility that includes appropriate order handling by industry participants and consistent volatility controls at the exchange/ATS level.
What are the benefits of harmonizing circuit breaker rules with US markets? One main advantage to a shared or complementary approach is that it limits the potential for certain kinds of regulatory arbitrage in markets that operate in the same time zone. Many Canadian-listed stocks also trade in the US, and roughly half of the dollar value traded in those shares takes place on US markets each day.
Which approaches are you considering taking for market-wide circuit breakers? We are monitoring developments in the US, where regulators have proposed changes which include lower trigger thresholds calculated daily, using the S&P 500 (instead of the Dow Jones Industrial Average), and shorter pauses when those thresholds are triggered. We are currently exploring options for market-wide circuit breakers which include continuing our existing policy of harmonizing with the US, pursuing a ‘made-in-Canada’ alternative or identifying a hybrid approach that does a little bit of both. At this stage, we are soliciting industry feedback on the merits of these three approaches. With the help of that feedback, we expect to be able to choose the appropriate path soon. It is important to note that these kinds of circuit breakers are an important control but have traditionally acted more as insurance – they have only been tripped once in the US and Canada since being introduced in 1988.
How similar is IIROC’s new Single-Stock Circuit Breaker (SSCB) rule to the US rules? Single-stock circuit breakers are relatively new for both jurisdictions. The US and Canada have implemented SSCBs which are similar in that a five-minute halt is triggered when a stock swings 10% within a five-minute period. Otherwise, the Canadian approach differs in several ways. For example, our SSCB does not trigger on a large price swing if a stock is trading on widely disseminated news following a formal regulatory halt.
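The trigger rule described above is simple enough to sketch in code. The following is an illustrative model only, not IIROC’s or the US implementation: the data structures, and the interpretation of a ‘10% swing’ as the high-to-low range within a rolling window, are assumptions.

```python
# Illustrative single-stock circuit breaker: a 10% price swing within a
# five-minute window triggers a five-minute halt. Hypothetical sketch only.
from collections import deque

WINDOW_SECS = 5 * 60   # look-back window for the swing test
HALT_SECS = 5 * 60     # length of the trading halt once tripped
SWING_PCT = 0.10       # trigger threshold

class SingleStockCircuitBreaker:
    def __init__(self):
        self.prices = deque()      # (timestamp, price) pairs inside the window
        self.halted_until = None

    def on_trade(self, ts, price):
        """Record a trade; return True if this trade trips the breaker."""
        if self.halted_until is not None and ts < self.halted_until:
            return False           # stock is already halted
        # drop prices that have aged out of the rolling window
        while self.prices and ts - self.prices[0][0] > WINDOW_SECS:
            self.prices.popleft()
        self.prices.append((ts, price))
        lo = min(p for _, p in self.prices)
        hi = max(p for _, p in self.prices)
        if lo > 0 and (hi - lo) / lo >= SWING_PCT:
            self.halted_until = ts + HALT_SECS
            self.prices.clear()    # start fresh after the halt
            return True
        return False
```

In practice the real rules carve out exceptions (such as the post-regulatory-halt news scenario described above), which this sketch deliberately omits.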
Do you believe circuit breakers, market-wide or single-stock, have a deterrent effect on momentum trading? We did not set out with a prescriptive approach to influence or change trading behaviour or strategy. IIROC’s circuit breaker policies were developed to provide added insurance against extraordinary short-term volatility. We intend to study the impact of any changes, which may teach us more about how policy changes affect trading behaviour.
Brian Ross of FIX Flyer talks to buy- and sell-side participants, presenting the latest lessons on high frequency trading and algorithms from the Indian market.
India’s capital markets are experiencing increased interest from local and global firms and new rules are set to attract high frequency trading (HFT).
The capital markets regulator, the Securities and Exchange Board of India (SEBI), the exchanges, brokers and many investors are in favor of abolishing the Securities Transaction Tax (STT). Eliminating STT should boost market turnover and help high frequency traders be more profitable, while the resulting narrower spreads should drive up trading volumes.
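A back-of-envelope calculation shows why a per-transaction tax matters so much to thin-margin strategies. All figures below (notional, gross edge, tax rate) are hypothetical illustration values, not the actual Indian STT rates.

```python
# Back-of-envelope: a transaction tax of the same order as an HFT strategy's
# gross edge can consume that edge entirely. All numbers are hypothetical.
notional = 1_000_000             # INR traded per round trip
gross_edge_bps = 2               # hypothetical gross edge: 2 bps per round trip
stt_bps_round_trip = 2           # hypothetical tax: 1 bp per side, 2 bps total

gross_edge = notional * gross_edge_bps // 10_000       # 200 INR
stt_cost = notional * stt_bps_round_trip // 10_000     # 200 INR
net_with_stt = gross_edge - stt_cost                   # the tax wipes out the edge
net_without_stt = gross_edge                           # the strategy is viable
```

Under these (assumed) numbers, the strategy earns nothing with the tax in place and 200 INR per round trip without it, which is the mechanism behind the expectation of higher turnover.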
STT has been levied since 2004 on all transactions, domestic or foreign, in both the equities and derivatives markets. At the time, the purpose was to generate tax revenue and to protect market integrity by slowing the pace of technological advancement by a few well-funded players. Revenue generated by STT amounted to around USD 1.5bn in 2011.
It is widely expected that STT will be eliminated this spring, bringing new opportunities for HFT in one of the world’s biggest and fastest growing capital markets.
To better understand the situation, we asked five panelists who are leading the charge in HFT in India to share their insights with us.
You never forget your first algo. When you first got involved in algorithmic trading, what problem were you trying to solve? What was your decision process, and what technologies did you use?
Sanjay Rawal, Open Futures: We started off using algos for trading purposes and the first one we built was for a specific type of arbitrage that was getting difficult to run using manual input. We used third party software for the exchange connectivity and wrote our algo in C#.
Vishal Rana, IIFL Capital: My first experience with HFT was trying to create a straight-arb model on a real-time basis. Although it was a simple model, the most difficult thing was to clean the data. We got the data dumps and it took a lot of effort to clean it. Most of the coding was done using C++.
Rohit Dhundele, Edelweiss: At the onset of the project, the easiest yet most important task was gathering the business intelligence to be subsequently converted to algorithms. Some of the more intricate decisions were the selection of order, execution and risk management systems to ensure a stable back-bone to the platform. Other equally important criteria were a flexible programming environment and a friendly interface for users. To achieve these objectives, we had to decide whether to build or buy this technology.
At Edelweiss, we realized relatively quickly that there is a sweet spot between the two extremes of in-house vs. outsourced solutions. We have since been following this model – combining the best of both worlds, which has helped us deliver customized solutions within acceptable turnaround times, whilst still protecting our IP.
Sanjay Awasthi, Eastspring Investments (Singapore) Limited: In the Indian markets, propelled as they are by rapid information dissemination systems, anonymity becomes a key factor in determining efficient trading. It was this need for anonymity that propelled us towards algorithmic trading. Continued use and familiarity lead to further benefits by way of better execution control. Algorithmic trading has thus become an important part of our execution arsenal.
Chetan Pandya, Kotak Securities: The first algo I worked on and put into production was calendar rolls for derivatives. Our trading desk had huge positions to roll from the current month to the next, and manual execution was leading to slippages and, at times, erroneous executions. Using the two-legged order on NSE, we created a simple algorithm which would roll the position at the desired spread.
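The roll logic described here can be sketched roughly as follows. This is a hypothetical reconstruction, not Kotak’s actual algorithm; `submit_two_leg_order` is a stand-in for the exchange’s spread order facility, and the long-position convention (sell near month, buy far month) is an assumption.

```python
# Hypothetical calendar-roll sketch: watch the spread between the expiring
# (near) contract and the next (far) contract, and fire a two-legged roll
# order when the spread reaches the desired level.
def maybe_roll(position_qty, near_bid, far_ask, desired_spread, submit_two_leg_order):
    """Roll a long position as one spread order; return True if submitted."""
    spread = far_ask - near_bid          # cost of rolling one lot
    if spread <= desired_spread:
        # Both legs go in a single exchange spread order, so there is no
        # legging risk between the sell and the buy.
        submit_two_leg_order(sell_qty=position_qty,
                             buy_qty=position_qty,
                             limit_spread=spread)
        return True
    return False
```

The single two-legged order is the key design choice: executing both legs atomically at a known spread is what removes the slippage that manual, leg-by-leg execution suffered from.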
My first observation regarding algorithmic trading was to appreciate the difference between an individual trading manually and a machine trading automatically. There are so many things that come naturally to a human being but need to be told to the machine. Sometimes I wonder whether an algorithm can ever fully replace a human being. There are nuances of the market, and events that lead to erratic market behaviour, that cannot be fully programmed for.
Also, I had to ensure there was no room for error when trading on an algo platform, primarily because of the sheer number of orders it can process in a single second, and because the naked eye cannot spot something going awry at that speed. Hence, I also had to think about the risk management capabilities of the algorithmic platform, while ensuring that risk management did not lead to inefficient execution through added latency.
In terms of technology, we were limited to applications that conformed to our market regulations. Once we had the base framework and architecture ready, we integrated it rapidly with our existing applications for order routing and downstream workflows.
Lakeview Capital Market Services’ Peter van Kleef relates the state of high frequency trading (HFT) in Europe including which trades are overcrowded and where the next breakthrough will come from.
Is high frequency order flow in Europe coming from Tier 1 banks or prop desks?
High frequency order flow in Europe comes mainly from proprietary trading firms and hedge funds as well as bank proprietary trading desks.
How is MiFID II changing the mood for HFT? In particular, how will a consolidated order tape affect HFT traders?
High frequency traders were already using a consolidated order tape for their strategies, so the only difference is that MiFID II might make that data cheaper and more readily available. Also, having a consolidated order tape will improve transparency, but that may indirectly cause problems for prime brokers. For example, if a prime broker’s client sees a price in the market data, their execution partner might not be in that market or might not be fast enough to get the price that their client has seen.
What are the most popular instruments for HFT in Europe? Are there any favorite HFT trades that are becoming potentially too ‘crowded’?
Most people who are new to HFT trade the most common instruments such as the Eurostoxx, DAX, CAC, AEX, FTSE, and the Bund, Bobl and Schatz futures. This is counterintuitive, however, as the new traders are entering the most crowded trades and most competitive products. There are crowded trades around the Eurostoxx, for example, and as a result there will always be mini flash crashes and disruptions of that kind. The answer is not to keep people out of these trades but to set up better systems at the exchange to maintain liquidity.
People at buy-side institutions are often uncomfortable with HFT in markets because they want to trade a large amount, yet they do so in a way that is evident to the market and especially to high frequency traders. If there is an impression that there is a buildup of pressure to sell, then traders will lower their price. Some may complain about this process, but it is not the fault of the high frequency trader. Buy-side institutions need to learn more about interacting with HFT in the market. Institutional investors will find that they enjoy more liquidity when they become more sophisticated in terms of how they interact with high frequency traders.
It is incorrect to view HFT as artificial liquidity. Volume is liquidity. It might not always be liquidity in the direction you want, but it is liquidity. It makes it easier to trade, but people are unfamiliar with how to interact, so they simply need to become more familiar with it.
What are exchanges and MTFs doing to attract HFT order flow?
Many exchanges are supporting volume discounts. Many of the new MTFs want to attract volume, so they offer volume discounts for HFT. If you provide liquidity, you are paid for that liquidity; if you take liquidity, you pay. This model is common in all industries. If you buy more cars, cars become cheaper; if you buy more shirts, they get cheaper. In addition, many new exchanges claim to be faster than their rivals.
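The maker-taker model described here reduces to a simple fee schedule: a rebate for posting liquidity and a charge for taking it. The rates below are hypothetical and purely for illustration.

```python
# Illustrative maker-taker fee schedule. Rates are hypothetical, in currency
# units per share; real venues quote rates per share or per contract.
MAKER_REBATE = 0.0020   # paid to the liquidity provider, per share
TAKER_FEE = 0.0030      # charged to the liquidity taker, per share

def venue_fee(shares, is_maker):
    """Negative = the venue pays you; positive = you pay the venue."""
    return -shares * MAKER_REBATE if is_maker else shares * TAKER_FEE
```

Under these assumed rates, posting 10,000 shares earns a 20-unit rebate while taking the same size costs 30 units; the spread between the two funds the venue, which is why high-volume liquidity providers are courted with discounts.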
On the other hand, the older, more traditional exchanges have restrictions on liquidity providers and naked access, which is a disadvantage for market makers who wish to interact with institutions or directly with exchange members. An unintended consequence of these restrictions is that by banning naked access, they disadvantage the very people they want to protect, i.e. the non-members.
ITG’s Clare Rowsell and Rob Boardman outline the best practices for liquidity management across multiple regions, focusing on Asia Pacific, North America and Europe.
In an increasingly global and fragmented trading environment, finding and managing liquidity is the top priority for buy-side traders. The practicalities of doing so are complex, and are underpinned by the trade-off between the time taken to find liquidity – which can result in delay costs as the price moves away – and the quality of that liquidity, since trading against certain counterparties can increase market impact costs. Meanwhile, the global liquidity environment is changing rapidly due to evolving regulation, market structure and the trading tools available. What follows is a short summary of some of the most significant developments affecting liquidity management in different regions around the world.
Often cited as having a ‘last mover advantage’, having come last to the world of dark pools and alternative trading venues, Asia is now catching up rapidly. Growing awareness of the region’s higher trading costs (approximately one third higher than those of the US and UK) is creating market demand for both new lit and dark liquidity sources. Japan is the only major market that currently allows ‘lit’ or quote-publishing venues to compete directly with the exchanges, and in the past year market share on these venues (including SBI Japannext, Chi-X and Kabu.com) has risen, although they still average around 2-3% of total turnover.
Australia will be next, now that the launch of Chi-X to challenge the ASX exchange’s monopoly has been confirmed for early in the fourth quarter of 2011. As alternative lit venues develop, the importance of smart order routing grows; in Australia this has been a core component of a consultation that will result in regulatory changes affecting brokers and exchanges, mandating Smart Order Routing (SOR) as a mechanism to achieve best price in a multi-market environment. In other Asian markets, buy-side traders have been turning to dark pools as a way of managing trading costs and finding quality liquidity.
Most of the large banks and brokers now offer a dark pool or internalization engine in markets including Hong Kong, Japan and Australia; but given Asia’s already-fragmented market structures, adding more broker liquidity pools threatens to complicate the buy-side trader’s life. This is where liquidity management, and specifically the aggregation of dark pools, is coming to the fore. Increasingly the buy-side are turning to dark pool aggregating algorithms to connect into multiple sources of liquidity through one access point.
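A dark pool aggregator of the kind described can be sketched as a parent order sprayed across several pools through one access point, with the fills collected back centrally. The pool names and fill behaviour below are hypothetical; real aggregating algorithms also weigh fill quality, anti-gaming logic and minimum execution sizes.

```python
# Illustrative dark pool aggregation: one parent order, multiple pools,
# one access point. Pools and fill behaviour are hypothetical.
def aggregate(order_qty, pools):
    """pools: list of (name, try_fill), where try_fill(qty) returns the
    quantity that pool filled. Returns ({pool: filled_qty}, leftover)."""
    remaining = order_qty
    fills = {}
    for name, try_fill in pools:
        if remaining <= 0:
            break
        filled = try_fill(remaining)
        if filled:
            fills[name] = filled
            remaining -= filled
    return fills, remaining
```

The value to the buy-side trader is exactly the single entry point: one parent order reaches many pools without the desk managing each venue relationship by hand.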
Canada has long benefited from trading in an auction market supported by a highly visible electronic book. Even though it was not until the latter half of the decade that ATSs began to spring up in Canada, they quickly gained traction and in 2010 ATSs represented 34% of volume. As these changes have taken place, Canadian regulators have continually reviewed emerging regulation in other regions as Canada continues to parallel more mature markets. With the proliferation of alternative trading venues came an emphasis on the consolidation of data to ensure market integrity. In addressing the need for a consolidated tape, the CSA accepted RFPs and appointed the TMX Group to the role of Information Processor.
Also arising from the multiple-market trading environment are Reg NMS-style regulations to protect against trade-throughs. February’s Order Protection Rule shifted the best price responsibility to marketplaces and also requires full depth-of-book protection (unlike the US’s top-of-book protection). About 3% of Canada’s equity trading is done in dark pools, and although Canada has only two dark pools (Liquidnet Canada and ITG’s MATCH NowSM), Instinet plans to open two this year and Canadian stock exchanges are making moves to offer dark order types.
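Full depth-of-book protection can be illustrated with a minimal trade-through check: before a buy executes on one marketplace, every displayed offer at every price level on the other marketplaces is checked for a better price (top-of-book protection, by contrast, would only check the best offer). The data shapes below are assumptions for illustration.

```python
# Illustrative trade-through check under full depth-of-book protection:
# a buy may not execute at a price worse than any better-priced displayed
# offer on another marketplace, at any depth level.
def would_trade_through(buy_exec_price, other_books):
    """other_books: {venue: list of displayed ask prices, full depth}.
    Returns (traded_through?, offending_venue, better_price)."""
    for venue, asks in other_books.items():
        for ask in asks:
            if ask < buy_exec_price:
                return True, venue, ask   # a better offer exists elsewhere
    return False, None, None
```

A marketplace (or a broker’s smart order router) would reroute or reprice the order whenever this check fires, which is how best-price responsibility is enforced across venues.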
The SEC’s proposed Consolidated Audit Trail System seeks to capture in real time details of all orders and trades on US equity markets. Instead of using a custom regulatory data protocol within this new system, Martin Koopman asks why don’t we just use FIX? After all, we are using FIX for nearly all of these orders and trades already.
The Financial Information eXchange (FIX) Protocol has been one of the biggest success stories within our industry in the last 15 years. FIX is used in equities, foreign exchange, fixed income, commodities, and derivatives markets. FIX is used by the buy-side, sell-side and exchanges. It is a global standard, being the dominant protocol in all major European and most Asian markets.
The advantages of using FIX for the Consolidated Audit Trail System are:
A. Lower Costs to the Industry
Buy-side, sell-side and exchanges already communicate and store messages using FIX. Costs to develop systems to store and communicate trading information to the SEC or Self-Regulatory Organizations (SROs) would be lower using FIX as the data already exists in a suitable format today.
B. Less Error and Easier Auditing
As records are currently kept using the FIX Protocol, if any other protocol is used a translation is required to transform data into a different protocol. This introduces error and offers the potential for manipulation of the data. Using FIX means the SEC is looking at the original format of the data.
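Because FIX is a plain tag=value format delimited by the SOH character, the original records are directly machine-readable without any translation step. A minimal sketch (the message below is simplified and omits the standard header, trailer and checksum fields a real FIX message carries):

```python
# Parse a simplified FIX message straight from its original wire format.
# FIX fields are tag=value pairs separated by SOH (0x01).
SOH = "\x01"

def parse_fix(raw):
    """Split a raw FIX string into a {tag: value} dictionary."""
    return dict(field.split("=", 1) for field in raw.strip(SOH).split(SOH))

# 35=D is a New Order Single; 49/56 are sender/target firm IDs;
# 55 is the symbol, 54=1 means buy, 38 is the order quantity.
msg = SOH.join(["35=D", "49=BUYSIDE", "56=BROKER", "55=IBM", "54=1", "38=100"]) + SOH
fields = parse_fix(msg)
```

Auditing the data in this native form is what avoids the translation errors the paragraph above warns about: the regulator inspects exactly the bytes the firms exchanged.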
C. Real Time Reporting
All FIX messages are generated in real time for trading. The SEC could more easily attain a real time reporting system by using FIX.
D. Ability to Coordinate Across Derivative Markets and Globally
Market events such as the May 6 flash crash require looking at not just historical equity data, but also data from equity derivative markets and international markets. As the FIX Protocol is dominant in other asset classes and derivative markets, including equity index futures, equity options and global markets, regulators can more easily gain a broader view.
Quod Financial CEO Ali Pichvai advocates a re-examination of speed relative to risk.
The oversimplified debate on latency, which holds that ‘trading is all about speed’, does not represent the true situation. Latency is primarily a consequence of the market participant’s business model and goals. A liquidity provider sees latency competitiveness as vital, whilst a price taker considers it of less importance in the overall list of success factors. This article focuses on processing efficiency, since distance/co-location has long been debated.
The processing efficiency is determined by:
*Number of processes:
The number of processes, and the time an instruction spends in a given process, give a good measure of latency. As a general rule of thumb, the fewer the processes, the lower the latency. An arbitrage system will most likely consist of as few processes as possible, with a limited objective. For instance, a single-instrument arbitrage between two exchanges can be built around three processes – two market gateways and one arbitrage calculator/order generator. An agency broker system will host more processes, with pre-trade risk management, order management and routing, intelligence for dealing with multi-listing, and the gateway as the minimum set. The trend of latency reduction has sometimes come at the expense of critical processing; for instance, in the pursuit of attracting HFT houses, some brokerage houses provide naked direct market access, which removes pre-trade risk management from the processing chain. An initial conclusion is that it is very hard to reconcile a simplistic, limited-in-scope liquidity taker system with the more onerous requirements of a price taker system.
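The rule of thumb above can be made concrete with a toy model: end-to-end latency is roughly the sum of the time an order spends in each process in the chain. The per-process figures below are hypothetical, chosen only to contrast the two architectures described.

```python
# Toy latency model: total latency is the sum of per-process latencies,
# so more processes in the chain means more end-to-end latency.
# All per-hop figures (in microseconds) are hypothetical.
ARB_CHAIN = {                     # minimal arbitrage system: 3 processes
    "market_gateway_in": 40,
    "arb_calculator": 15,
    "market_gateway_out": 40,
}
AGENCY_CHAIN = {                  # agency broker system: more processes
    "market_gateway_in": 40,
    "pretrade_risk": 60,
    "order_management": 80,
    "smart_router": 50,
    "market_gateway_out": 40,
}

def total_latency(chain):
    """Sum the time spent in each process along the chain."""
    return sum(chain.values())
```

The model also shows the trade-off the paragraph warns about: dropping `pretrade_risk` from the agency chain would cut its latency, which is exactly what naked direct market access does, at the cost of removing a critical control.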
*Process flow efficiency:
This is where the process flow between different processing points is as efficient as possible, with minimal loops between processes, waiting time and bottlenecks. It also considers the comprehensive view of the architecture between the network and the application.
*Single process efficiency:
Two important areas must be reviewed:
There is an on-going debate about the best language for trading applications. On one side are the Java/.NET proponents, who cite the ease of coding and maintaining a high-level development language (at the expense of needing to re-engineer large parts of the Java JVM). On the other side are the C++ evangelists, who point to better control of resources – such as persistence, I/O and physical access to the different hardware devices – as a demonstration of better performance. The migration of major exchanges and trading applications away from Java to C++ seems to indicate that the second church is in the ascendancy. Beyond the coding language, building good parallelism in processing information within the same component, also called multithreading, has been a critical element in increasing capacity and reducing overall system latency (but not unit latency).
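The multithreading point can be illustrated with a minimal sketch: sharding work across threads raises capacity and overall throughput, while each individual message still takes the same time to process (unit latency is unchanged). The tick handler below is a trivial stand-in; in a production C++ system these would be true parallel worker threads rather than Python’s GIL-bound threads.

```python
# Minimal multithreading sketch: process independent ticks concurrently
# within one component. handle_tick is a hypothetical stand-in for real
# per-message processing.
from concurrent.futures import ThreadPoolExecutor

def handle_tick(tick):
    symbol, price = tick
    return symbol, price * 2        # placeholder computation per tick

ticks = [("AAA", 10.0), ("BBB", 20.0), ("CCC", 30.0)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map distributes ticks across worker threads but preserves input order
    results = list(pool.map(handle_tick, ticks))
```

Each tick still costs one `handle_tick` invocation (unit latency), but four workers can drain a deep queue roughly four times faster (capacity), which is the distinction the paragraph draws.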
Finally, there are attempts to put trading applications, or components of them, in hardware – often referred to as hardware acceleration. The current technology can be very useful for latency-sensitive firms in accelerating the most commoditised components of the overall architecture. For instance, vendors provide specific solutions for market data feed handlers (operating in single-digit microseconds), resulting in market-data-to-arbitrage-signal detection in tens of microseconds. Yet trading is not standard enough to be easily implemented on such architecture. Another approach is to accelerate some of the messaging flow, through accelerated middleware and network-level content management. This goes hand in hand with attempts by leading network providers to move more application programming lower into the network stack.