Liquidnet’s Seth Merrin shares how exchanges can develop a global strategy to compete today.
Following a year of failed cross-border mergers, exchanges are at a crossroads. They have worked in silos within their respective countries but now have to create global strategies. To move forward, exchanges can learn from another industry that followed a very similar trajectory more than a decade ago: the airline industry.
Airlines share many parallels with exchanges: a strong nationalistic sentiment, a highly competitive environment driven by the entrance of low-cost carriers, and a record of unsuccessful M&A activity. So what steps did the winners in the airline industry take to beat out the low-cost competition, how did they achieve global scale, and what can exchanges learn from this?
Airlines tackled the fundamental evolution of their industry by focusing on three key areas: diversification of revenue by selling more to their existing client base, differentiation of their offering by focusing on a premium customer, and development of global alliances to expand their geographic reach.
Let’s first take a look at revenue diversification and how exchanges can take a similar approach. Airlines realised they had a captive audience with their customers and, once they had these customers in their seats, they could sell them more products. As a result, the airlines introduced paid-for services in coach and new premium products and services for all customers. Who hasn’t been on an aeroplane and paid for food, extra space, or picked up an ever-expanding catalogue of duty free items?
Historically, exchanges have had two primary streams of revenue: company listings and trading. Today, these revenue streams constitute only a minor component of total revenue as exchanges have placed more emphasis on their ‘premium offerings’. The NYSE Euronext and Nasdaq, both of which have faced significant competition sooner than many of their peers, recognised that they had a captive audience in their listed companies and expanded their offering by selling premium services such as new technology offerings and premium data products and services. Today, both of these exchanges have multiple revenue streams and no single business comprises more than 20% of their overall revenue. What they have left to do—and what virtually no other exchange has done—is to develop a premium class of customer.
The entrance of low-cost providers, such as EasyJet and Ryanair, in the airline industry commoditised the price of an airline seat. As a consequence, airlines (particularly the established players) could no longer compete on price alone and needed to diversify their offering. So they went upscale, choosing instead to focus on high margins and higher value offerings, which their discount counterparts couldn’t match. While discount carriers charged for pillows, winning airlines created a premium offering and experience for their business and first class travellers. It’s not surprising that these premium passengers were willing to pay significantly more for steak, champagne, and lie-flat beds because of the ultimate experience these airlines provided.
Ian Hoenisch of ITG lays out the perils of current test symbol regimes and describes the work to reduce the inherent risk in testing.
How has testing been done previously (i.e. ZVZZT) and what have been the drawbacks or risks? The initial production testing was typically done with real ticker symbols and orders far out of the money, like ‘buy IBM for a dollar’, which has some limitations and fat finger problems. If you accidentally sell for a dollar, your order is going to get filled, and this can create substantive risk. Some exchanges then implemented test symbols, like NASDAQ’s ZVZZT, but not all exchanges have agreed to use them. Production testing is difficult at the best of times, and it is not just about getting a fill on a test symbol; all the other issues – getting market data to trigger risk limits, determining if you are 10% off the last fill price to trigger warnings for bad fills, flagging trade-throughs, creating testing for locked or crossed markets, etc. – are difficult without a truly live environment.
What has emerged now is a parallel environment using real stock symbols and real market data (although probably delayed in most cases), but without trading in the live environment. This has emerged not just on the exchange side but also in back office systems like Omgeo. The other element of test symbology that we are trying to work out is a true end-to-end solution covering quotes, routing, executions, allocations and settlement. In the past, you could not use the Internet, so you had to get a whole new set of circuits to NASDAQ’s test system.
Technically speaking, how does the test symbol fit into the normal FIX message? The FIX part was easy because a test symbol looks and behaves like a regular symbol. Getting everyone to understand that it is a test symbol is hard. For example, when we test in production with ZVZZT, there are many systems downstream from order routing that need to ignore test orders: compliance, dollar amount checks, OATS reporting, and other SEC and FINRA reports all need to exclude test symbols. Buy-side traders who want to test their systems often cannot create an order without initiation from the portfolio manager, thereby restricting them from creating a ZVZZT test order. Technically speaking, adapting the FIX message for test symbology is simple, but the surrounding elements will take more work to align.
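To make the point concrete, here is a minimal sketch of why the FIX side is easy and the downstream side is not: a test symbol rides in the ordinary Symbol field (tag 55), so the message itself is unremarkable, and every downstream system needs its own filter. The message construction is deliberately simplified (no BodyLength, hypothetical field values), and the set of test symbols shown is illustrative.

```python
# Simplified FIX 4.2 NewOrderSingle carrying the test symbol ZVZZT,
# plus the kind of filter that compliance/reporting systems need.
# NOTE: this omits tag 9 (BodyLength) and other required fields; it is
# an illustration, not a conformant FIX implementation.
SOH = "\x01"
TEST_SYMBOLS = {"ZVZZT", "ZWZZT", "ZXZZT"}  # illustrative test-symbol set

def fix_message(fields):
    """Join tag=value pairs with SOH and append a checksum-style tag 10."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    checksum = sum(body.encode()) % 256
    return body + f"10={checksum:03d}" + SOH

order = fix_message([
    (8, "FIX.4.2"), (35, "D"),   # NewOrderSingle
    (55, "ZVZZT"),               # test symbol -- looks like any other symbol
    (54, "1"), (38, "100"), (40, "2"), (44, "1.00"),
])

def is_test_order(msg):
    """Downstream systems (OATS, risk, compliance) must skip test symbols."""
    tags = dict(field.split("=", 1) for field in msg.strip(SOH).split(SOH))
    return tags.get("55") in TEST_SYMBOLS
```

The asymmetry the interview describes is visible here: generating the order takes one extra constant, but every consumer of the message needs its own `is_test_order`-style check.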
What changes need to be made to broker systems to incorporate test symbols? At the exchange? The buy-side will probably rely on their OMS vendors, or alternatively they can create a dummy account with dummy cash that they do not have to report to accounting. While the buy-side can handle that more easily, on the sell-side we are required to modify a number of systems to handle test symbology, and exchanges have to adjust their systems to not report OATS and executions if they have a test symbol. Exchanges also pay for quotes on filled orders as part of a revenue-sharing deal for market data dissemination. This would have to be disabled for ZVZZT orders; otherwise everyone would build their own matching system and send in endless test orders.
Everybody has to make the business logic changes. NASDAQ initially had 30 different test symbols beyond ZVZZT, and NYSE had TESTa and TESTe, so any time the symbols changed, a considerable amount of code had to be updated.
The Capital Markets Cooperative Research Centre (CMCRC)’s Alex Frino talks about his research over the past 18 months and the conclusions as to the truth about high-frequency trading.
What inspired you to focus your research on High Frequency Trading (HFT)?
There is a very poor understanding of the impact of HFTs on the market place. There is a lot of ill-informed opinion in circulation about the impact of HFT on price volatility, and their contribution to liquidity. I wanted to provide some hard data to help markets move forward and inform sensible evidence-based policy decisions.
There was also considerable interest in the idea of conducting HFT research from our regulator partners, including the FSA and ASIC.
What were your views on HFT at the outset of your research program?
When we first set about doing the research 18 months ago, I began by speaking to the investment management community to gather their views and insights into HFT and its impact on their trading. The feedback I got was overwhelmingly negative. One comment sums it up best – an investment manager said to me that “liquidity provided by the HFT community is like fog – you can see it, but when you reach out to grab it, it is not there.” So I began the program expecting to confirm these dominant views. To my surprise, we discovered that the realities of HFT are almost exactly the opposite of what the investment managers were telling me.
HFT liquidity has been described as ephemeral by many on the buy-side. What does your research suggest about the ability of the buy-side to interact with HFT liquidity?
We have done research with data from the LSE, ASX, SGX, NASDAQ and NYSE Euronext on exactly this subject. The exchanges furnished us with data that identifies when HFTs are present in the market place. We then looked at the make-take decision. HFTs make liquidity when they put up a quote that gets hit by someone on the other side of the trade. They take liquidity when they hit someone else’s quote. The data clearly showed that HFTs are net makers of liquidity.
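The make/take tally described above can be sketched in a few lines. The records and the HFT-identification rule below are hypothetical stand-ins for the flags the exchanges supplied; the point is simply the arithmetic: count trades where an HFT was the passive (making) side, subtract those where it was the aggressive (taking) side.

```python
# Hypothetical trade records: each trade names the passive (maker) and
# aggressive (taker) participant. In the research, exchange data flagged
# which participants were HFTs; here a naming convention stands in for that.
trades = [
    {"maker": "HFT_A", "taker": "FUND_1"},
    {"maker": "HFT_A", "taker": "FUND_2"},
    {"maker": "FUND_1", "taker": "HFT_B"},
]

def net_liquidity(trades, is_hft):
    """Net make/take count for HFTs: positive => net maker of liquidity."""
    made = sum(1 for t in trades if is_hft(t["maker"]))
    taken = sum(1 for t in trades if is_hft(t["taker"]))
    return made - taken

print(net_liquidity(trades, lambda p: p.startswith("HFT")))  # 1
```

A real study would weight by traded value rather than trade count, but the net-maker conclusion is a statement about exactly this kind of difference.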
Interestingly some of our data also included information about when firms are trading through co-located servers within the exchanges. This data too showed that co-lo HFT activity was also a net provider of liquidity in those markets.
Co-location is described by some as an ‘unfair advantage’. What is your take on that given your research into the area?
My view is that if the advantage is being put to good use in providing liquidity, then it is not being misused. That pool of co-located flow is providing liquidity that would not be there otherwise, so I cannot see how that is a negative for markets.
Many market participants – including recent widely-quoted comments by Andrew Haldane of the Bank of England – are critical of the speed and sophistication of markets generally, using HFT as their example. They argue the playing field is not level and that markets should be slowed to take away perceived unfair advantages. What is your view?
I was frankly amazed by Haldane’s suggestion that markets should be slowed [by introducing speed limits and resting periods]. What he is in effect suggesting is that we should take markets backwards by a decade. That is astonishing to me because I just do not see the arguments. Market participants who do not have the technology to compete with other players can easily access brokers with algorithmic trading engines to help them execute their trades. If you cannot or do not want to build the technology yourself, you can outsource it fairly cheaply and very efficiently.
From an HFT perspective, our research demonstrates emphatically that the liquidity they provide is real and other participants interact with it constantly, so I cannot see a problem there either.
Otkritie’s Tim Bevan describes the intricacies and idiosyncrasies of the Russian markets, and offers suggestions on how to effectively access the deep liquidity there.
How would you profile the firms that are interested in DMA to Russia?
There is an interest in DMA to Russia from prime brokerage desks because many of the hedge funds that use the global prime brokers have expressed interest in Russia, now that the liquidity has reached the point it has. It is worth pointing out that the liquidity in the local equity market is approximately $2.5 billion a day, and the derivatives market turnover is $10 billion notional a day. These are very significant and deep pools of liquidity. We are certainly seeing client pressure from different areas hitting Tier 1 banks, which in turn is reflected onto us. We are also seeing the big global electronic brokers looking to add Russia to their coverage.
There is sustained sell-side interest, but the other big pocket of interest we are seeing is from the low-latency, high frequency funds that utilize proximity hosting and co-location, who want to place hardware in Moscow and run their strategies in the electronic order books that are available there. There are many more of these types of participants now and they are often in London, New York, Chicago, Amsterdam, Paris and other parts of Europe.
How extensively are algos utilized in Russian DMA?
Obviously for a high frequency fund, the algo is the strategy. This is clearly different from execution algos, like VWAP, which are used to execute orders in a certain manner. Most Russian brokers have the most basic execution algos like VWAP, TWAP, icebergs, etc. It is a relatively new trend (i.e. 6-9 months old) for the big sell-sides to enter Russia, and many have not yet deployed their more sophisticated suites of algos into the Russian market.
Additionally, the Russian market itself is quite unusual in that there is a lot of programming skill in Russia. The average Russian retail trader is quite often running an algo through an Excel spreadsheet with $10-20,000 worth of capital, so as regards alpha strategies, there is a lot of algo activity in the Russian market. Execution algos, however, have not penetrated this segment yet. As the sell-sides continue to move into the electronic market, the second phase will be to deploy their own execution algos and offer them to their main clients, but we are at the beginning of that part of the process.
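For readers unfamiliar with the basic execution algos named above, a TWAP slicer is the simplest of them: split a parent order evenly across the trading window. The sketch below is purely illustrative; real execution algos add randomisation, limit-price logic and volume tracking.

```python
# Toy TWAP slicer: divide a parent order into n near-equal child orders,
# one per time bucket, distributing any remainder across the first slices.
def twap_slices(total_qty, n_slices):
    base, rem = divmod(total_qty, n_slices)
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

print(twap_slices(10_000, 6))  # [1667, 1667, 1667, 1667, 1666, 1666]
```

Each child order would then be sent at evenly spaced intervals across the window, which is what distinguishes TWAP from the volume-weighted VWAP.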
With the majority of liquidity isolated in a dozen stocks, how would Russian DMA fit into a firm’s overall trading/investment strategy?
Liquidity is very concentrated in Russia. The top ten names account for the vast majority of liquidity, and even the top two or three probably make up 50% of the market. DMA is possible beyond the top 15 or 20, but it drops off fairly quickly thereafter. Obviously the big blue chip companies are where most of the interest is. Taking Sberbank as an example, there is no liquid Depository Receipt (DR); an unsponsored DR trades about $2 million a day in Germany. If you want to trade that stock, you have to trade the local market, where it trades between half a billion and a billion dollars a day notional, so there are some very deeply liquid companies that are only available in the local market.
What other asset classes are being attracted or will attract DMA interest?
The biggest interest is in the RTS Index futures, which is an incredibly powerful product. It trades over $5 billion a day notional, more than all Russian equity instruments combined (both DR and local), sometimes by a factor of two. RTS Index futures trade from 0700 UK time right through to the US close and are among the top ten most liquid equity index futures in the world. This instrument has generated the majority of interest from the quant funds, but interest is increasingly coming from more standard hedge funds and buy-sides, where they are allowed to trade futures, as it provides an instant hedge or leverage tool with an almost bottomless pool of liquidity for any one player.
In the last 18 months, the credit crunch has distorted equity market conditions significantly, but there are some trends that appear to have continued, despite recent shortfalls in liquidity. To provide some context on these trends, we analyze and quantify the mechanics of trading over a significant period of time, across a wide range of different markets. We confine our study to looking at trading patterns of the most liquid stocks in the countries studied.
One of the clear advantages of this cross sectional and historical approach is that it allows the identification of outliers. For equity trading, the clear and consistent outlier is still the US, where trading in the most liquid assets is still faster, and smaller, than the busiest non-US assets.
We attempt to show how significant differences in trading environments are, by looking in detail at the month of October in 2008. We compare all venues analyzed and attempt to place them on the evolutionary line travelled by the NYSE. While interesting, we note that such a comparison implicitly assumes that exchanges will travel the same line. At the very least, the regulatory regimes in different regions should make us question this assumption.
Lastly, we look at bid-offer spreads and how they have evolved. These spreads are an important part of transaction cost. We look in particular at how they have evolved over the ‘Crunch’ period and beyond.
The Evolution of Size and Speed in Global Equity Markets
Over the past ten years, with the growth of program trading, algorithmic trading hubs have moved from novelty to ubiquity. They now manage a sizeable portion of trading activity in major markets. In doing so, they have transformed what was in effect a paper driven process, where traders would calculate what to execute when, into a fully automatic one. Transaction costs have been pushed down to the limits imposed by profitability. What has this process looked like in terms of trade speed and size? We consider speed first.
We measure speed in terms of typical intertrade duration; that is, the typical time, in seconds, you would expect to wait for one trade to follow another. Results are aggregated over the countries, for the top ten most liquid stocks, per year. We concentrate on continuous, automated trading.
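The measure just defined is straightforward to compute from a list of trade timestamps. The sketch below takes "typical" to mean the median gap, one reasonable choice of aggregate; the source does not specify which summary statistic was used.

```python
# Intertrade duration: the typical time, in seconds, between one trade
# and the next. Computed here as the median of consecutive gaps.
from statistics import median

def intertrade_duration(timestamps):
    ts = sorted(timestamps)                       # trade times in seconds
    gaps = [b - a for a, b in zip(ts, ts[1:])]    # consecutive waits
    return median(gaps)

print(intertrade_duration([0, 1, 3, 4, 10]))  # 1.5
```

Aggregating this per stock, per year, over the ten most liquid names in each country gives exactly the kind of cross-sectional comparison the text goes on to draw.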
For the US, typical trading duration has compressed steadily. Intertrade times for liquid assets are typically less than 1 second. Indeed, for NASDAQ, intertrade duration for liquid assets is significantly less than this. In terms of the greatest increase in trading speed – and possibly the greatest increase in automation – Europe stands out.
The FPL Americas Electronic Trading Conference, for those in electronic trading, is always a year-end highlight and this year was no exception. Sara Brady, Program Manager, FPL Americas Conference, Jordan & Jordan thanks all the sponsors, exhibitors and speakers who made this year’s conference a huge success.
The 6th Annual FPL Americas Electronic Trading Conference took place at the New York Marriott Marquis in Times Square on November 4th and 5th, 2009. John Goeller, Co-Chair of the FPL Americas Regional Committee, aptly set the tone for the event in his opening remarks: “We’ve lived through a number of challenging times… and we still have quite a bit of change in front of us.” After a difficult year marked by economic turmoil, the remarkable turnout at the event was proof that the industry is back on its feet and ready to move forward with the changes to the electronic trading space set forth in 2009.
Market Structure and Liquidity
Two topics clearly stood out as key issues that colored many of the discussions at the conference – regulatory impact on the industry and market structure as influenced by liquidity, and high frequency trading. An overview of industry trends demonstrated that the current challenges facing the marketplace are dominated by these two elements. Market players are still trying to digest the events of 2008 and early 2009, adjusting to the new landscape and assessing the changing pockets of liquidity amidst constrained resources and regulatory scrutiny. The consistent prescription for dealing with this confluence of events is to take things slow and understand any proposed changes holistically before acting on them and encountering unintended consequences.
The need for a prudent approach towards change and reform was expressed by many panelists, including Owain Self of UBS. According to Self, “Everyone talks about reform. I think ‘reform’ may be the wrong word. Reform would imply that everything is now bad, but I think that we’re looking at a marketplace which has worked extremely efficiently over this period.”
What the industry needs is not an overhaul but perhaps more of a fine-tuning. Liquidity is one such area that needs carefully considered fine-tuning. Any impulsive regulatory changes to a pool of liquidity could negatively impact the industry. The problem is not necessarily with how liquidity is accessed, but the lack of liquidity that results in the downward price movements that marked a nightmarish 2008. Regulations against dark liquidity and the threshold for display sizes are important issues requiring serious discussion.
Rather than moving forward with regulatory measures that may sound politically correct, there needs be a better understanding of why this liquidity is trading dark. While there is encouraging dialogue occurring between industry players and regulatory bodies, two things are for sure. We can be certain that the evolution of new liquidity venues is evidence that the old market was not working and that participants are actively seeking new venues. We can also be assured that the market as a messaging mechanism will continue to be as compelling a force as it has been over the last two decades.
Risk
One of the messages that the market seems to be sending is that sponsored access, particularly naked access, is an undesirable practice. Presenting the broker dealer perspective on the issue, Rishi Nangalia of Goldman Sachs noted that while many agree that naked sponsored access is not a desirable practice, it still occurs within the industry. A panel on systemic risk and sponsored access identified four types of the latter: naked access, exchange sponsored access, sponsored access via broker-managed risk systems (also referred to as SDMA or enhanced DMA) and broker-to-broker sponsored access.
According to the U.S. Securities and Exchange Commission (SEC), the commission’s agenda includes a look specifically into the practice of naked access. David Shillman of the SEC weighed in on the commission’s concern over naked access by noting, “The concern is, are there appropriate controls being imposed by the broker or anyone else with respect to the customer’s activity, both to protect against financial risk to the sponsored broker and regulatory risk, compliance with various rules?” Panelists agreed that the “appropriate” controls will necessarily adapt existing rules to catch up with the progress made by technology.
On October 23, NASDAQ filed what they believe to be the final amendment to the sponsored access proposal they submitted last year. The proposal addresses the unacceptable risks of naked access, and the questions of obligations with respect to DMA and sponsored access. The common element of both of these approaches is that both systems have to meet the same standards of providing financial and regulatory controls. Jeffrey Davis of NASDAQ commented on his suggested approach: “There are rules on the books now; we think that they leave the firms free to make a risk assessment. The new rules are designed to impose minimum standards to substitute for these risk assessments. This is a very good start for addressing the systemic risk identified.”
These steps may be headed in the right direction, but are they moving fast enough? Shillman added that since sponsored access has grown in usage, there are increasing concerns and a growing sense of urgency to ensure a commission-level rule for the future, hopefully by early next year. This commission proposal would address two key issues – whether controls should be pre-trade (as opposed to post-trade), and the very important question, “Who controls the controls?”
Andres Araya Falcone of the Santiago Stock Exchange explains how FIX is increasing the range of services available to traders in Chile and throughout Latin America.
How is FIX facilitating DMA into the Santiago Stock Exchange?
The first concept of DMA in Chile began with what we call “direct traders” (buy-side traders): specially authorized institutional clients permitted to send orders directly to the market via a “broker sponsor”. Thus, pension and mutual funds, insurance companies and other institutions, using trading terminals provided by the Stock Exchange, can trade directly in our market. The next natural step was the incorporation of electronic networks to attract order flow from the U.S., Europe and neighboring countries in Latin America, especially Brazil.
In 2006, we built the first FIX interface using version 4.0 to connect to the Marcopolo Network, to attract order flow to our local equities market. After that, the Santiago Stock Exchange launched its initiative to modernize the equities electronic trading system and developed Telepregón HT, jointly with IBM, which went live in June 2010. This system is ready for algorithmic trading flow since it supports a throughput of over 3,000 orders per second with sub-millisecond latency. In designing the system, we decided to use FIX 4.4 to enable easier connection via DMA with other exchanges, sell- and buy-side firms and market information vendors. This has greatly facilitated the connection to different networks, such as Bloomberg, Fidessa and SunGard, among others. For all these initiatives, FIX has been crucial in facilitating the integration with these networks. During 2011 we will announce new network agreements.
Currently, in the equity market, 11% of order flow comes via DMA (an average increase of 27% over the last six months), 19% on average comes from Internet retail order flow, and the rest comes from traditional OMSs and trade workstations.
As foreign investment into Chile and the Chilean market continues, how will the Santiago Stock Exchange upgrade its platforms to meet increased investor and trader demands?
In 2010, the Selective Share Price Index (IPSA), the country’s main stock market indicator, gained 37.6% in Chilean pesos (equivalent to some 46% in dollars). Share trading on the Santiago Stock Exchange rose to US$60 billion in 2010, up 30.5% from 2009, setting a new annual record. Trading was particularly strong in the second half of the year, which accounted for almost 60% of the annual total, reflecting strong demand from both local and international investors.
At the same time, by the end of 2010, the Santiago Stock Exchange had signed a linkage agreement with Brazil’s stock exchange, BM&FBOVESPA, heralding the latest in a series of cooperative projects being run between Latin American bourses. The agreement, signed on December 13th, will enable connectivity between both exchanges for order routing and market data dissemination. It also includes separate initiatives for further development of the Santiago Stock Exchange’s derivatives market, the establishment of joint initiatives related to settlement, clearing and central counterparty services, as well as access to the BM&FBOVESPA/CME trading platform from Chile.
Market participants in both countries will be able to route orders for stocks, stock options and related derivatives listed on the other’s exchange. Both exchanges will also be able to receive and distribute each other’s market data. Clearing and settlement of orders will be done according to local market rules of listed instruments. These kinds of initiatives imply that the Santiago Stock Exchange’s IT platform has to be prepared to manage more than 6 million orders per day.
What plans does the Santiago Stock Exchange have to accommodate High Frequency Trading and algorithmic order flow?
We are working as an integrator of a state-of-the-art product for algorithmic trading. In conjunction with Streambase, FIXFlyer and IBM WFO, we are creating a product we will call “Broker in a Box”. The idea is to provide a framework for capital markets, including a set of algorithmic order execution strategies designed to achieve best execution, access liquidity, minimize slippage and maximize profits for trading operations. These algorithmic trading strategies (like VWAP, TWAP, Arrival Price / Implementation Shortfall, etc.) are provided as fully customizable EventFlow modules which can be used in conjunction with the framework. Trading firms will be able to modify each algorithm to reflect their own “secret sauce” and to differentiate their trading strategies in the market. The Santiago Stock Exchange will provide an “all in one” solution: integrated markets, market data (from the Integrated Latin American Market (MILA), NYSE and NASDAQ), co-location, monitoring, local support, etc.
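The "fully customizable" nature of the VWAP-style modules described above can be illustrated with the core of any VWAP strategy: child-order sizes follow a historical intraday volume curve rather than being spread evenly. The volume profile below is a hypothetical U-shaped curve, not data from any exchange.

```python
# Volume-profile slicing behind a VWAP strategy: allocate the parent
# order across time buckets in proportion to expected traded volume.
def vwap_slices(total_qty, volume_profile):
    total_vol = sum(volume_profile)
    slices = [round(total_qty * v / total_vol) for v in volume_profile]
    slices[-1] += total_qty - sum(slices)  # absorb rounding in last slice
    return slices

profile = [30, 15, 10, 15, 30]  # hypothetical U-shaped intraday volume, %
print(vwap_slices(10_000, profile))  # [3000, 1500, 1000, 1500, 3000]
```

A firm's "secret sauce" would typically live in the profile itself (forecasting today's volume curve) and in how each slice is worked, which is exactly the layer the customizable modules expose.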
Pantor Engineering's Rolf Andersson examines the data behind the speed of FIX for low latency market data, settling the question of whether FIX is fast enough.
The FIX Protocol, as well as software implementing the protocol (aka “FIX engines”), has from time to time been said to offer performance too low for use where very high throughput and/or very low latency is required. Performance characteristics of the protocol and of engine implementations have been discussed in previous articles (Kevin Houston’s article “?” and John Cameron’s “Evolution of the FIX Engine”, Vol 2, Issue 7, September 2008). Granted, there are slow implementations in use, and the classic FIX tag=value syntax is too verbose for some use cases, e.g. market data.
Recent developments within FPL such as the release of the FAST Protocol and efforts within the FIX 5.0 usage sub-committee to support alternative recovery state models will enable FIX to be used in high throughput, low latency scenarios. This article reviews the various sources of latency and demonstrates that a FIX over FAST implementation can be used in place of a proprietary protocol to provide very high performance and low latency.
An overview of latency sources
There are a number of latency sources that contribute to the total latency for producing, transferring and consuming market data. The impact of different sources varies widely between implementations. The following sources will be discussed:
Message processing overhead – the encoding and decoding between the transfer format and the internal representation suitable for the processing required in a specific application, as well as safe-storing messages to support recovery of lost messages;
Communication processing overhead – the network processing associated with sending and receiving messages;
Scheduling delay – the delay in reacting to a request to send a message or to a notification that a message has been received;
Transfer delay – the time elapsed between the start and the end of a packet containing one or more messages;
Propagation delay – a function of the physical distance between two communicating parties and the speed of light in the medium used to communicate.
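Of the sources listed, propagation delay is the one that can be bounded from first principles: distance divided by the speed of light in the medium, roughly two-thirds of c for optical fibre. The distance and velocity factor below are approximate illustrative figures.

```python
# Lower bound on one-way propagation delay over optical fibre.
C_VACUUM = 299_792_458   # speed of light in vacuum, m/s
FIBRE_FACTOR = 0.67      # approximate velocity factor for optical fibre

def one_way_delay_ms(distance_km):
    """Milliseconds for light to traverse distance_km of fibre."""
    return distance_km * 1_000 / (C_VACUUM * FIBRE_FACTOR) * 1_000

# ~5,570 km is an approximate New York - London great-circle distance;
# real fibre routes are longer, so actual delay exceeds this bound.
print(round(one_way_delay_ms(5_570), 1))
```

No protocol choice can reduce this term, which is why it sets the floor under any cross-continental market data feed and motivates co-location for latency-sensitive strategies.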
These latency sources are to a large extent similar in behavior irrespective of the choice of external protocol, but there are differences as discussed below.
Message processing overhead
The overhead of processing a traditional FIX message is negatively affected by a number of aspects:
Message content – FIX messages contain a host of information for the benefit of different communicating parties.
Message format – the FIX message format contains redundant information about the message structure. Text is used to represent all data.
Recovery semantics – the FIX recovery mechanism is based on message sequencing per session and a contract that a receiver can re-request messages.
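The message-format point can be made concrete by comparing the same market-data content as FIX tag=value text against a fixed binary layout, a rough stand-in for what FAST-style encoding achieves through templates, field operators and stop-bit encoding. The field values are hypothetical.

```python
# Size of the same content as tag=value text vs. a packed binary layout.
import struct

seq, price, qty = 184467, 101.25, 500
tagvalue = f"8=FIX.4.4\x0135=X\x0134={seq}\x01270={price}\x01271={qty}\x01"
binary = struct.pack("<IdI", seq, price, qty)  # 4 + 8 + 4 = 16 bytes

print(len(tagvalue.encode()), len(binary))  # 44 16
```

The text form carries the tag numbers, separators and decimal digits in every message; a template-driven encoding sends the structure once and only the (often delta-compressed) values thereafter, which is where most of FAST's bandwidth and processing savings come from.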
Communication processing overhead
The communication overhead depends on the number and size of network packets. Each transferred packet incurs some processing both at the sending and receiving end. One or more messages are transferred in each network packet. Smaller messages mean that less data has to be copied and that more messages can be transferred in each network packet.
What key features do successful exchanges share that encourage liquidity; how automated trading drives growth and why markets will attract incremental liquidity with the advent of global CSAs. Robert Barnes, Managing Director, Equities of UBS Investment Bank explains.
The execution arms race continues. The prize is order flow that concentrates to those most capable, particularly in navigating market structures.
Market structures comprise the rules and institutions that determine competition and the framework of interaction, including Exchange fees, which ultimately shape order execution strategies. The focus includes external factors that impact business and operating models, driving opportunities to grow revenues and reduce costs.
Exchanges rebuilding liquidity is a priority market-wide theme in 2010 in the context of competition, transparency, and investor choice at trading and clearing layers. From a User perspective, we wish to work in a spirit of partnership with Exchanges and Regulators to promote liquidity and new business, and we thank the Authorities as they provide a framework within which we can behave as entrepreneurs.
Macro trends include a rising number of trades, coincident with automated electronic trading. Regulation promotes competition, transparency and investor protection. This leads to a better result for clients via competitive execution policies. Competition, and thus fragmentation, makes the world more complex. Not all brokers, however, can keep up with the technological arms race. Direct Execution models of electronic trading are evolving to address this. Latency reduction is increasingly sought for competitive advantage.
There is increasing awareness of a positive dynamic involving non-displayed pools and high frequency trading. The key insights are that markets allowing discretionary non-displayed broker crossing processes and non-discretionary dark pools effectively speed net liquidity onto order books. The benefits are lower market impact, greater efficiency, and a better result for end investors.
These benefits multiply if statistical traders are active. When orderbook liquidity increases, so too does the proportion of trading opportunities; and these stimulate further orders to the orderbook from automated strategies. This incremental liquidity, aggressive and passive, narrows spreads.
The world’s markets are split into those that support and benefit from high levels of automation, and those with the opportunity to encourage more. Investors’ current focus includes global macro trends and emerging markets, which means that moving toward more consistent electronic access models will help markets take advantage of this burgeoning liquidity. A good start is to implement and enhance FIX specifications to offer advanced electronic flexibility. This adoption of standardisation can aid emerging markets in growing their scale of business.
One of the more “seismic” changes to equity markets in recent years is the proliferation of commission unbundling and Commission Sharing Agreements, “CSAs”, or Client Commission Agreements, “CCAs”, in the USA. Initiated by UK regulators in 2006, this commission unbundling initiative spread across Europe (at the end of 2007) with the arrival of the Markets in Financial Instruments Directive, or “MiFID”. Global clients, preferring one consistent process worldwide, have led the demand for CSAs to become a market convention. With many CSAs established on a global basis, it can be easier than ever before for a newly automated market joining a broker’s network to attract liquidity.