AFME’s Securities Trading Committee Chairman Stephen McGoldrick unpacks the latest MiFID proposals and looks at the rules for Organized Trading Facilities, algo trading and a consolidated tape.
Organized Trading Facilities (OTFs)
The OTF regime began life as a specific regulatory wrapper to put around broker crossing systems, which are a new mechanism for delivering an existing service. Crossing, which is almost the definition of a broker, has become highly automated. While most crossing activities have not changed, other aspects of the industry were seen to require regulation – namely increased automation and the greater scope of crossing. The initial proposals outlined an umbrella category of systems called OTFs, with one sub-category created to hold broker crossing systems and another to hold the systems for the G20 commitments around derivatives trading.
When the MiFID II proposals came out at the end of 2011, the ‘umbrella’ aspect had been simplified into a structure intended to be ‘all things to all people’, which is where it has come undone. MiFID II has created a regulatory receptacle for a practice, but the receptacle and the practice differ in shape. The broker crossing system does not fit into the receptacle created for it, because much of the trading is against the books of the system’s operators, which is prohibited under the current proposals.
The regulators do not want speculative, proprietary trading within these systems, but unwinding risk created by clients is both useful and risk-reducing. An opt-in mechanism, allowing traders to decide whether they want their orders traded this way, may be a solution. Conflict management of this sort is common in the financial sector, as it ensures that any discretion is not exercised against the interests of the client. Certainly, when weighing the client’s interests against those of the OTF operator, it is absolutely unambiguous that the client’s interests must come first; any exercise of discretion that disadvantages the client relative to the operator is therefore already prohibited. A formal, documented process to ensure that segregation stays in place is good, but effectively prohibiting the vast majority of trading on broker crossing systems seems to abandon the regulators’ objectives – to increase transparency and protect clients.
Furthermore, trades allowed into a broker crossing system would be instantly reported, creating post-trade transparency. The current proposals call for OTFs to be treated in the same way as Multilateral Trading Facilities (MTFs), which fosters uncertainty about the waivers for pre-trade transparency. Currently, there are clear criteria for granting a waiver to a platform: one is that orders are large in size; another is taking reference prices from a third-party platform. The Commission will not, however, be making the decisions about waivers; they have been handed to the European Securities and Markets Authority (ESMA) to determine. There is a danger in setting overly stringent limits on these waivers, which would create a very different landscape from that explicitly envisaged by MiFID I.
Systematic Internalisers (SIs)
Our understanding is that regulators did not want to split activity that was in an OTF into two, but rather to regulate the broker crossing systems and to remove the subjectivity of SIs. The current SI proposal is aimed at regulating automated market making by banks, so that institutions make markets by reference to market conditions, not by reference to their clients. In MiFID I, the SI regime was introduced to protect retail investors, but subsequently this seems to have changed. When the European Commission (EC) was asked by the Committee of European Securities Regulators (CESR) to clarify the rationale for an SI regime, it declined to do so. As a result, there is a distinct lack of clarity regarding the intent of the SI rules. If we had a clearer vision of the direction in which the regulators wished to take the market, it would be far easier to assess whether the regulations were moving us in the right direction – or not.
Feargal O’Sullivan and Jamie Hill of NYSE Technologies discuss OpenMAMA, the open source Middleware Agnostic Messaging API they hope will expedite innovation in services, reduce vendor lock-in and minimize implementation time and cost.
Solving a Problem
Choosing a market data vendor on the strength of its API alone is not sound practice, yet the industry has long struggled to come up with a standard way of accessing market data that lets clients select a vendor for any range of reasons other than the API that the vendor happens to offer. Something that should be low on any decision-making tree has unfortunately tended to be much more important. There are a number of consolidated market data vendors, including obvious names like Thomson Reuters and Bloomberg, and there is also a range of direct feed or ticker plant vendors, where instead of going through a consolidator, feeds are accessed directly from an individual exchange.
Having selected a vendor, users must write all their code to suit that vendor’s particular way of accessing the data. Changing to a different vendor requires opening up the source code and altering everything to match how the new vendor accesses market data. With a consolidated feed for broad international access and a direct feed for low-latency algo trading in US equities, for example, many users have to write to between two and four different APIs. This has been a significant problem for the industry, and with OpenMAMA we are trying to drive the industry towards a standard.
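The pattern OpenMAMA standardizes can be illustrated with a small, hypothetical sketch: the application codes against a single abstract interface, and each vendor feed sits behind an adapter, so swapping vendors no longer means rewriting application code. The class and method names below are invented for illustration; they are not OpenMAMA’s actual API, which is C-based with C++ and Java bindings.

```python
# Hypothetical sketch of middleware-agnostic market data access.
from abc import ABC, abstractmethod
from typing import Callable

class MarketDataSource(ABC):
    """The one interface the application is written against."""
    @abstractmethod
    def subscribe(self, symbol: str, on_tick: Callable[[dict], None]) -> None: ...

class ConsolidatedFeedAdapter(MarketDataSource):
    """Would wrap a consolidated vendor feed (broad international access)."""
    def subscribe(self, symbol, on_tick):
        ...  # translate to the consolidator's proprietary API here

class DirectFeedAdapter(MarketDataSource):
    """Would wrap a direct exchange feed (low-latency US equities)."""
    def subscribe(self, symbol, on_tick):
        ...  # translate to the exchange's native protocol here

def run_strategy(source: MarketDataSource) -> None:
    # Application code is unchanged whichever adapter is passed in.
    source.subscribe("IBM", lambda tick: print(tick))
```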
User Base
This API is an eight-year-old standard, initially developed by NYSE Technologies as the Middleware Agnostic Messaging API (MAMA), and it is quite heavily deployed in the financial services industry; close to 200 clients already use the API in their custom applications, so today it has an established installed base. We have opened that up and made it a standard by taking the source code for the APIs these firms use today and providing it to The Linux Foundation, which physically hosts the code as a neutral body.
During this process we worked with multiple parties that would not ordinarily use our API. Since the launch of OpenMAMA on 31 October 2011, one of the key factors in its being taken seriously as an open standard has been getting the right level of adoption. Before we launched, we approached a number of customers, other vendors and competitors, and from among them established our launch partners: J.P. Morgan, Bank of America Merrill Lynch, Exegy, Fixnetix and EMC. These launch partners, along with NYSE Technologies, formed a steering committee to drive the direction and future of OpenMAMA.
From that point forth, each of the organizations on that committee has had a stake in OpenMAMA. The API is open source under the LGPL 2.1 licence, so it is now owned by the open source community. With participation from Interactive Data, Dealing Object Technologies and TS-Associates as well, we now have a group ten strong, a global mix drawn from different parts of the industry. Whereas before the API was driven largely by NYSE Technologies and our commercial use cases, it is now being driven forward as an industry standard. The more people who adopt and participate, the higher the likelihood of achieving that.
Daniel Ciment of J.P. Morgan details the development of Brazilian algos and outlines the most effective strategies for trading in Brazil.
Using Algos in Brazil
International buy-side traders are already accustomed to trading with algorithms, or using algorithms to trade strategies, in markets around the world, so as they look to Brazil they want to trade there the same way they have traded elsewhere. Even though having just one exchange makes the data feed more streamlined, the low liquidity profile of certain Brazilian stocks means algorithms cannot be used to trade every stock electronically. For the more liquid names, many traders are using benchmark algorithmic strategies such as VWAP, percentage of volume or arrival price. Most algorithmic strategies are based on benchmarks for now, as buy-side traders seek to replicate the methods they use elsewhere while taking into account the intricacies of the local market structure. In the end, if they trade with algorithms in the US, Europe and Asia, they want to trade with algorithms in Brazil as well.
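To make those benchmarks concrete, here is a minimal sketch of how an execution might be measured against VWAP and arrival price. The prints, arrival price and fill price are all made up for illustration; this is not J.P. Morgan's methodology.

```python
# Hypothetical market prints over the order's horizon: (price, size).
market_prints = [(25.10, 5_000), (25.12, 8_000), (25.08, 3_000)]
arrival_price = 25.10    # mid-price when the parent order arrived
avg_fill = 25.11         # what the algo actually achieved (made up)

# VWAP: total traded notional divided by total traded volume.
vwap = (sum(p * s for p, s in market_prints)
        / sum(s for _, s in market_prints))

print(f"VWAP benchmark:      {vwap:.4f}")
print(f"Slippage vs VWAP:    {(avg_fill - vwap) / vwap * 10_000:+.1f} bps")
print(f"Slippage vs arrival: {(avg_fill - arrival_price) / arrival_price * 10_000:+.1f} bps")
```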
Infrastructure and Volume Spikes
This is one of the challenges that we face as an industry. As you build electronic infrastructure, you have to build for growth, not just for where we are today. When we look at a market, whether it is Brazil or more developed markets like the US, Europe or Asia, we know what we are trading today, but we have to build to accommodate what we will trade in a year, in two years, and what we think the peak might be. Just because a market trades a couple of hundred million shares a day – or, in the US, 8 billion shares a day – does not mean you build your platform to support only that volume, because a year from now the figure might be 20% higher.
Moreover, if a major event happens next week, that figure might double, so you need to build in sufficient headroom. Right now, we can handle far more than we manage on a daily basis, but that is deliberate, to make sure that at times of stress we are there for our clients and that they can trade through us with full confidence.
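As a toy version of that headroom arithmetic, multiplying the figures quoted above – today's volume, a year of 20% growth and a stress event that doubles volume:

```python
# The passage's own multipliers, applied to the quoted US volume.
current = 8_000_000_000    # shares/day traded today
growth = 1.20              # "a year from now, that figure might be 20% higher"
stress = 2.0               # "a major event... that figure might double"

design_capacity = current * growth * stress
print(f"Plan for ~{design_capacity / 1e9:.1f}B shares/day, not today's 8B")
# Plan for ~19.2B shares/day, not today's 8B
```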
DMA or Boots-on-the-Ground?
To be successful in a market like Brazil, brokers need people on-site who know the local investor and financial communities. J.P. Morgan has a major trading presence in São Paulo, and that is just one piece of the offering in Brazil. For small firms that want access, outsourcing is a realistic option, but if you are going to be big in a market, especially one like Brazil, an in-country trading team is required.
Technical Challenges
Reliable trading requires market data and telecommunications systems, which are present in Brazil, along with data center space and algorithms tuned to the local market and market structure. This tuning covers the liquidity profiles of the stocks as well as the rules and regulations of the exchange; you cannot apply the same algorithms from one region to another and expect them to work. We spend a lot of time and effort fine-tuning our algorithms, testing them on our desk and then rolling them out to clients. It is not just copy-and-paste.
CME Group’s Fred Malabre and Don Mendelson chart the history of electronic commodities trading and discuss the recent improvements in FIX for commodities, including fractional pricing, trading listed strategies and faster market data.
Adoption of FIX for Commodity Trading
Products traded on CME Group exchanges span many underlying asset classes, including commodities, interest rate instruments, foreign exchange and equity indexes. Although the largest share of products traded today on our platforms is financial futures, the history of our markets began with agricultural and livestock commodities. The underlying physical commodities include the petroleum complex, agricultural products such as soybeans, wheat and corn, and metals such as gold and copper.
Listed contracts in all of our markets include futures and options on futures. Futures contracts can be physically settled, meaning that the seller has an obligation to physically deliver the commodity to the buyer when the contract expires, at a delivery point specified by the contract, or cash settled, meaning the contract settles to a reference price or index rather than through physical delivery.
Back in 2001, we were at a crossroads. Futures contracts were then primarily traded via open outcry – brokers shouting and waving arms in trading pits. It was then that CME launched its first FIX-compliant interface to electronically match orders for a wide range of asset classes, ultimately including equity indexes, FX, interest rates, real estate, weather, economic events, energy, metals and agriculture. The question arose: how do we represent orders and execution reports and transmit them between firms and the exchange? We had earlier put out our own order routing API, but it had drawbacks, including the high level of software developer support required; firms were running several different computing platforms, for example.
At that time, the FIX Protocol had already taken hold in the equities world. It was attractive because it was not an API but a standard for message exchange. From the exchange’s perspective, this was an attractive proposition: we would develop our side of the conversation, and firms would develop theirs. From the firms’ perspective, their software developers could achieve high performance, limited only by their own imagination and skills.
It seemed that with a few adjustments, FIX could be adapted for futures and options trading. In version 4.2, fields were added to support derivatives, including expiration and strike price. We participated in working groups to standardize those changes, and later helped spawn FIX-based solutions for market data and FIXML for clearing.
An example of the problems in adapting an equities standard to commodities is fractional pricing. Traditionally, agricultural prices were stated in fractions; to this day, soybean futures are quoted in increments of ¼ of one cent per bushel. US equities made the leap from fractional to decimal pricing in 2001. Although trade pricing stuck with tradition, we decided to follow FIX conventions and use decimal pricing in messages.
We added some custom indicators within our FIX interface to facilitate conversion to a fractional display. One indicator represents the main fraction and another represents the sub-fraction. For example, for a product ticking in ½ of a 64th, we would send the main fraction as 1/64 and the sub-fraction as ½. These indicators can then be used to convert a decimal price sent over our FIX interface, such as 105.0390625, into a screen display of 105 2.5/64ths. We found the FIX Protocol easy to extend with custom features for legacy needs such as these, with negligible impact on customers not using the new tags.
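A minimal sketch of that display conversion, assuming the main-fraction and sub-fraction indicators just described (the actual tag numbers and wire format are omitted):

```python
from fractions import Fraction

def to_fractional_display(price: float,
                          main_frac: Fraction = Fraction(1, 64),
                          sub_frac: Fraction = Fraction(1, 2)) -> str:
    """Render a decimal FIX price as a fractional display string.

    main_frac is the main fraction indicator (1/64 here) and sub_frac the
    sub-fraction (1/2), i.e. the product ticks in halves of a 64th.
    """
    whole = int(price)
    # Recover the fractional part exactly; its denominator is bounded by
    # the product of the two indicators (64 * 2 = 128).
    rem = Fraction(price - whole).limit_denominator(
        main_frac.denominator * sub_frac.denominator)
    numerator = rem / main_frac   # e.g. 5/2, i.e. 2.5 sixty-fourths
    return f"{whole} {float(numerator):g}/{main_frac.denominator}"

# The example above: 105.0390625 displays as 105 2.5/64
assert to_fractional_display(105.0390625) == "105 2.5/64"
```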
Raymond Russell of the FIX Inter-Party Latency (FIXIPL) Working Group and Corvil lays out the use cases for the FIX Inter-Party Latency standard and the functionality of Version 1.0.
Goals for FIXIPL
The principal goal of the Inter-Party Latency Working Group is to ensure interoperability between different latency monitoring vendors. Interoperability is essential because latency monitoring is vital to running a low-latency service; the people building systems therefore need confidence that they can start with one vendor and still migrate to another. What we have seen through the proliferation of latency monitoring systems across the trading world – whether DMA providers, market data providers or trading desks – is that the problems in managing latency often fall between the cracks. Most firms have a good handle on latency in their own environment because they have engineered it well, but when they connect to a counterparty, it gets tricky.
A trader who sees a slowdown in response time will want to understand why they have missed trades or why their fill rates are low, but there are multiple places where that latency could have occurred. One is the exchange matching engine, which in some respects is unavoidable: if there is considerable interest and activity in a symbol at the same time, orders will have to queue in the matching engine purely as a result of market activity. The latency might also have occurred in the exchange gateway; it is common practice for exchanges to load balance across multiple gateways to accommodate high volumes, and you might have hit a slow one. Or the service provider you connect through may have oversubscribed their network, leaving you caught in cross traffic unrelated to trading. We have seen all these things happen, so seeing where the latency occurs requires a consistent set of time stamps across the architecture.
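A minimal sketch of the diagnosis such shared time stamps enable: differencing consecutive stamps along an order's path attributes latency to each hop. The measurement point names and numbers below are invented for illustration; the FIXIPL standard defines its own identifiers and message formats.

```python
# Hypothetical time stamps (nanoseconds) collected along one order's path.
order_path = [
    ("trader_out",   1_000_000_000),
    ("provider_in",  1_000_250_000),   # network hop to the service provider
    ("gateway_in",   1_000_400_000),   # provider network to exchange gateway
    ("match_engine", 1_002_900_000),   # gateway processing + matching queue
]

for (a, t_a), (b, t_b) in zip(order_path, order_path[1:]):
    print(f"{a} -> {b}: {(t_b - t_a) / 1_000:.0f} µs")
# The gateway -> matching engine hop dominates here, pointing at either a
# slow gateway or queuing in the matching engine.
```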
Most exchanges already employ latency monitoring in their own environment, and while inter-party latency and the sharing of time stamps are less important within the exchange itself, they enable exchanges to work with their members to identify where latency arises. The benefits unlocked by inter-party latency are somewhat biased towards the end traders, but they also extend to brokers and market data providers, who gain better-quality execution feeds and market data speeds, respectively.
For exchanges, latency transparency is becoming a standard requirement as latency becomes a competitive differentiator. To the extent that exchanges are comfortable with their own infrastructure and ready to compete on latency, they will want to share their latency measurements with members. In my experience, venues and brokers are no longer as reluctant to share their latency figures as they once were.
Version 1.0 Rollout
Much of the work on Version 1.0 involved deciding how to produce a standard that is, on the one hand, simple enough to be easily implemented while, on the other, still covering all the basic use cases. Version 1.0, due out in December 2011, is clean and simple and emphasizes the core capability of publishing time stamps. We have agreed on the technical scope, and the standard is now going through the formal review procedures required for standardization by FPL, including a public review. The other important step before it becomes real is producing two independent implementations. A number of features will be ready in a few months’ time, such as distribution through multicast and the ability to automatically group several measurements together across a trade; these we will include in the next version later next year.
Simo Puhakka, Head of Trading for Pohjola Asset Management, shares his experience trading in the Nordic markets, giving his opinions on interacting with HFT, using TCA and knowing whether you can trust your broker.
The prospects for High Frequency Trading (HFT) are really up to regulators. It will be a free market, but as we all know, regulatory changes affect the whole trading landscape. For example, we can see what is happening in France and the debate that is going on in Sweden, both of which are quite hostile towards HFT.
Personally, I think that HFT is a good thing for the market, as long as you have the proper tools to deal with it. A number of small firms have been suffering from HFT since MiFID I because they lack the proper technology and tools to measure and deal with it. We have not suffered in our dealings with HFT; in many cases, I would actually say the opposite. HFT firms seem to add liquidity, and when you have the proper tools to deal with them, you can take advantage of it.
Speaking of tools, we started building our own Smart Order Router (SOR) a year and a half ago. The goal was to create an unconflicted way to interact with the aggregated liquidity. In the process we went quite deep into the data and turned our processes upside-down, with the result that we have full control over how we interact with the market.
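As a deliberately simplified, hypothetical illustration of one decision such a router makes – Pohjola's actual routing logic is not described here – a child order can be split across lit venues pro rata to displayed size:

```python
def route_order(quantity: int, displayed: dict[str, int]) -> dict[str, int]:
    """Split an order across venues pro rata to displayed size at the touch,
    sending the integer-rounding remainder to the deepest venue."""
    total = sum(displayed.values())
    alloc = {venue: quantity * size // total for venue, size in displayed.items()}
    deepest = max(displayed, key=displayed.get)
    alloc[deepest] += quantity - sum(alloc.values())  # rounding remainder
    return alloc

# Hypothetical displayed sizes on three venues trading a Nordic name.
print(route_order(10_000, {"XHEL": 6_000, "CHIX": 3_000, "TRQX": 1_500}))
# {'XHEL': 5715, 'CHIX': 2857, 'TRQX': 1428}
```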
On the other hand, I welcome technological innovation from the sell-side; for example, brokers now disclose annually the venues where they execute trades. The surveillance responsibilities that brokers carry are also beneficial. Many of the small, local brokers and buy-sides, however, are now finding it challenging to upgrade their technology.
Trusting your Broker
Our approach was to take control of our order flow and only use our brokers for sponsored access. We chose full control because, in some cases, I could not trust the broker to deliver what I am asking. These questions first arose a few years ago, and we realized we needed to create a transparent, fully controlled, non-conflicted path to the market. How you interact with different venues – even lit venues, where you have more transparency – will affect your choice of strategy. In most cases, you are better off without brokers making decisions for you. The root of the problem is: when you send an order to a broker, what happens before it goes to the venue? What control do we have over the broker’s infrastructure, including their proprietary flow, internalization, market making and crossing, not to mention the routing logic?
When we dug into the data, we were quite surprised to see that, although a broker was connected to all the dark liquidity, many of the fills were coming from that particular broker’s own dark pool, suggesting there are preferences in the routing logic. Brokers want to internalize flow, which is not a problem if you are aware of the potentially higher opportunity costs. With dark liquidity the problem is even bigger, since our trades were often routed to the broker’s own dark pool or to pools it has arrangements with.
Corwin Yu, Director of Trading at PhaseCapital, sits down with FIXGlobal to discuss his trading architecture, the proliferation of Complex Event Processing (CEP) and why he would rather his brokers just not call.
FIXGlobal: What instruments does your system cover? Corwin Yu: At the moment we trade the S&P 500, and we have expanded that to include the Russell 2000 – not as individual instruments, but as an index. We also trade the E-mini futures on both the Russell 2000 and the S&P 500. We have investigated doing the same type of trading with Treasuries, using the TIPS indices, TIPS ETFs and a few similar futures. We are not looking at expanding the equity side except to consider adding ETFs, indices, or futures on indices.
FG: Anything you would not add to your list? CY: We gravitate towards liquid instruments with substantial historical market data, because we do not enter a trading strategy unless there is enough market data for sufficient back-testing. Equities was a great fit because it has history behind it and great market data technology; likewise futures, where market data coverage has recently expanded. Options is a possibility, but the one asset class that is liquid yet not a good fit for us is commodities. We shy away from emerging markets that are not completely electronic and do not have good market data. While we have not made moves into the emerging markets, we know that some other systematic traders have found opportunities there.
FG: How much of your architecture is original and how often do you review it for upgrades? CY: In terms of hardware, we maintain a two-year end-of-life cycle: whatever we have that is two years old is retired to the back-test pool and replaced with new hardware. We are just past the four-year mark, so we have been through two hardware migrations. This process is usually a wake-up call as to how technology has changed. When we bought our first servers, they were expensive four-core machines with a maximum of 64 GB of memory; we just bought a system that handles 256 GB with six-core processors. We are researching a one-year end-of-life cycle, because two years was a big leap in terms of technology and we could have leveraged some of it a year ago.