Max Colas of CameronTec looks at smarter approaches to information overload and explains how improved management of data convergence can result in greater business insight and edge.
Every day Twitter delivers 300 million messages, 4% of which are actual news. Every 20 minutes, 3 million messages are published on Facebook and 10 million comments are added. Such mind-blowing numbers would be anecdotal if they did not highlight a trend – perhaps even a threat – that is also relevant in the trading world: information overload.
As usage of FIX grows globally and firms increasingly rely on their trading platform to contribute to their business edge, the risk for FIX users is that they focus on the wrong snippets of information, or miss the truly relevant trends. Addressing those challenges becomes a differentiator for FIX technology providers.
Previous generations of monitoring systems focused on displaying information, for instance by adding value in the shaping of data or user-friendliness of the interface like displaying logs with FIX tag/value expansion or showing “conversation views” that gathered together relevant messages. The mostly static log formats even allowed vendors to claim some degree of compatibility across FIX engines. Although useful, such systems are inherently flawed for two reasons:
1. They assume that FIX operators should approach information linearly, and
2. They expect all information that is relevant to a business to be contained in the logs.
Neither assumption proves true in today’s environment.
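The tag/value expansion mentioned above is a simple but useful transformation. As a rough illustration (the tag dictionary below is a tiny, illustrative subset, not a complete FIX data dictionary, and the separator is a readable stand-in for the SOH delimiter):

```python
# Minimal sketch of FIX tag/value expansion: map numeric tags in a raw
# log line to human-readable names. TAG_NAMES is an illustrative subset
# of the FIX specification, not a full dictionary.
TAG_NAMES = {
    "8": "BeginString", "35": "MsgType", "49": "SenderCompID",
    "56": "TargetCompID", "55": "Symbol", "54": "Side", "38": "OrderQty",
}

def expand_fix(raw: str, sep: str = "|") -> dict:
    """Split a raw tag=value message and expand known tag numbers."""
    fields = {}
    for pair in raw.strip(sep).split(sep):
        tag, _, value = pair.partition("=")
        fields[TAG_NAMES.get(tag, tag)] = value  # unknown tags keep their number
    return fields

msg = "8=FIX.4.2|35=D|49=BUYSIDE|56=BROKER|55=VOD.L|54=1|38=5000"
print(expand_fix(msg))
```

A "conversation view" then becomes a matter of grouping such expanded messages by order identifiers rather than reading the raw log line by line.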
When algorithmic trading is involved, it is not unusual for FIX logs to grow by 10,000 lines per second for each session. When data flows converge from a number of FIX nodes across a pan-European topology, the dataset size can increase by multiple orders of magnitude. We are way past the display of logs on a screen. Gone is the linear approach to FIX data; gone is the time of perusing pages of logs one after another, of X-term windows scrolling slowly on a screen.
In fact, the only approach that remains at this point is to expect monitoring systems to deliver on two channels: “I tell you in advance what I am interested in and you notify me when it occurs” and “I tell you what I am interested in and you bring me the relevant results”. These approaches are not new: in the outside world, they are called Google Alerts and Google Queries. Technologies developed to implement this paradigm in the financial industry, such as those from California-based Splunk, have been in use for a few years. They all tend to gravitate around the convergence of data into one central repository to broaden the breadth of searches. This, too, is an industry trend that is highly relevant to the FIX world, with a peculiar edge that is worth analyzing.
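The two channels described above, standing alerts and ad-hoc queries over a central repository, can be sketched in a few lines. All names here are illustrative, not any vendor's API:

```python
# Sketch of the two monitoring channels: standing alerts ("notify me
# when this occurs") and ad-hoc queries ("bring me matching results"),
# both running against one central store of parsed FIX messages.
class FixMonitor:
    def __init__(self):
        self.store = []   # central repository of parsed messages
        self.alerts = []  # registered (name, predicate, callback) triples

    def subscribe(self, name, predicate, callback):
        """Channel 1: tell the system in advance what you care about."""
        self.alerts.append((name, predicate, callback))

    def ingest(self, msg: dict):
        """Store every message and fire any matching alerts."""
        self.store.append(msg)
        for name, predicate, callback in self.alerts:
            if predicate(msg):
                callback(name, msg)

    def query(self, predicate):
        """Channel 2: ad-hoc search over everything seen so far."""
        return [m for m in self.store if predicate(m)]

hits = []
mon = FixMonitor()
# Alert on execution reports (MsgType 8) carrying a Rejected status (OrdStatus 8)
mon.subscribe("rejects",
              lambda m: m.get("MsgType") == "8" and m.get("OrdStatus") == "8",
              lambda name, m: hits.append(m))
mon.ingest({"MsgType": "D", "Symbol": "VOD.L"})
mon.ingest({"MsgType": "8", "OrdStatus": "8", "Symbol": "VOD.L"})
print(len(hits), len(mon.query(lambda m: m.get("Symbol") == "VOD.L")))
```

The point of the sketch is the shape of the interface: predicates registered up front versus predicates evaluated on demand, both over the same converged dataset.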
Guosen Securities’ Shen Tao reveals the latest trends in algo usage by Chinese asset managers, domestic mutual funds and Qualified Foreign Institutional Investors (QFIIs).
Who are the primary customers for algorithmic products in China? Algorithmic trading started in the Chinese A share market some time in 2007. In 2005, the first commercial FIX engine had gone live to accommodate the A share execution needs of Qualified Foreign Institutional Investors, or QFIIs, as part of the Chinese government’s plan to allow regulated capital market investment by foreign investors. After an initial experimental phase of FIX connectivity with global trading networks, the local FIX trading platform became solid enough to interface with a real algo engine. In 2007, some leading global investment banks (predominantly sell-side QFIIs) began to offer algorithmic trading facilities to their clients and their own proprietary trading desks. Most of these facilities were located offshore (e.g. in Hong Kong) and connected to the Chinese brokers’ FIX gateways via a financial trading network such as Bloomberg.
The earliest providers and users of algo trading in the Chinese market were solely QFIIs and their clients. In 2008, although the global market was in turmoil and many infrastructure budgets were cut across the international financial community, there were still some firms seeking expansion opportunities for the future. Among them, some global banks with local brokerage joint venture subsidiaries began to build their onshore algo facilities. At about the same time, some leading purely local brokers also started their efforts in algo development, Guosen among them. We started in March 2008 and also targeted QFII investors for algorithmic trading, however, we understood the future of algorithmic trading in the Chinese market would rest on the domestic mutual fund industry. In late 2009, the Guosen algo platform was almost ready and the aforementioned onshore algo facilities run by the sell-side joint ventures of global banks also went live. The day of the algo had finally arrived for China.
In 2010, with support from Hundsun, a leading buy-side OMS vendor, Guosen and UBS began their efforts by offering an algo solution to local mutual fund companies. In November 2010, UBS won its first success with two Beijing-based mutual fund companies, with Guosen securing a third six months later. Since then, more than a dozen mutual fund companies have started using algorithms from UBS and Guosen. From a local perspective, 2010 was the first year of the algo. The momentum of mutual fund companies adopting algo platforms continues: we estimate that by the end of 2011, over 40% of the local mutual fund industry, in terms of assets under management, could be covered by broker-provided algo services.
In retrospect, QFII investors were the founders of the market, but soon, the local mutual fund industry will become the primary user of algos. In addition, we foresee insurance companies adopting algo trading soon.
CIBC’s Thomas Kalafatis maps out the new CSA rules regarding direct electronic access and suggests its potential effects on brokers and institutional traders.
Are the updated Direct Electronic Access (DEA) requirements a response to patterns endemic to Canada or are they a response to patterns observed elsewhere? Given the existing Investment Industry Regulatory Organization of Canada (IIROC) rules and the timing of the Canadian Securities Administrators (CSA)’s DEA rule proposal, it is fair to say that the rules proposed by our regulators are intended to maintain consistency with changes in other jurisdictions and prevent regulatory arbitrage. We do not believe that the rules are the result of a specific effort to solve a localized Canadian problem, but rather a preventative measure to ensure structural issues that have arisen elsewhere will not take root in Canada.
The issues around direct electronic access raised in the United States (who is accessing marketplaces directly, and how they are ensuring automated systems will not malfunction) are less of a concern in Canada. TMX rule 2-501 limits who is eligible to receive DEA access, restricting DEA to well-capitalized firms, or firms that are registered and regulated in certain other jurisdictions.
IIROC Notice 09-0081 addresses how automated systems should be managed to mitigate the risk of malfunctions. It requires brokers to manage the risk of electronic trading by clients in the same way that they manage the risk of their own electronic trading. This includes ensuring that automated risk filters are in place, that order flow from an automated system can be interrupted/switched off by the broker, and that strategies are tested prior to being deployed to market. These basic, principles-based protections have been effective at mitigating risk in Canada since well before the wave of automation hit our markets in 2008.
The proposed DEA rules are a movement away from the IOSCO principles-based approach that has traditionally been taken in Canada, towards a more prescriptive regime resembling the Rule 15c3-5 requirements introduced by the SEC in the United States this year. This builds consistency between the Canadian and American jurisdictions, which are so closely intertwined.
Automated pre-trade risk filters are in place for many broker-dealers. How difficult will this regulation be to implement? Broker-dealers will need to monitor the proposed rules closely, particularly with regard to their Sponsored Direct Market Access (SDMA) clients. These clients have their own sophisticated automated risk management systems in place – as required by UMIR rules and, more importantly, as a result of their own risk aversion. They connect directly to exchanges to minimize latency. The DEA rule proposes to change this, in parallel with Rule 15c3-5 in the US, in that brokers will need to have “direct and exclusive control” over the risk filters on client flow; this means that a duplicative set of filters operated by the broker will have to be put in place.
In this case, Canadian brokers benefit from the earlier adoption of Rule 15c3-5 in the United States, where various technologies have been developed to meet the SEC rules that went into effect in the summer of 2011. Depending on the needs of its client base, a Canadian broker can choose between several types of risk filter offerings operating in a latency range from the low milliseconds to the low microseconds. The only differentiator is cost, with a significant premium on the single-digit microsecond lowest latency offerings.
Generally, it is not economical for a Canadian broker to develop ultra-low latency solutions in-house, and the Canadian broker community benefits from the availability of third-party technologies developed to meet the US rules that came into effect earlier this year.
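The broker-operated filter layer described above amounts to a chain of checks every client order must pass before reaching the exchange. A minimal sketch, with hypothetical limits chosen purely for illustration (not regulatory values):

```python
# Illustrative sketch of a broker-side pre-trade risk filter of the kind
# the market access rules require: the broker's own checks sit between
# the sponsored client and the exchange. All limits are hypothetical.
MAX_ORDER_QTY = 100_000       # per-order share limit
MAX_NOTIONAL = 5_000_000.0    # per-order notional limit in dollars
RESTRICTED = {"HALTED.XYZ"}   # symbols the broker will not route

def pre_trade_check(order: dict) -> tuple:
    """Return (accepted, reason) for a single client order."""
    if order["symbol"] in RESTRICTED:
        return False, "restricted symbol"
    if order["qty"] > MAX_ORDER_QTY:
        return False, "order quantity exceeds limit"
    if order["qty"] * order["price"] > MAX_NOTIONAL:
        return False, "notional exceeds limit"
    return True, "accepted"

print(pre_trade_check({"symbol": "RY.TO", "qty": 500, "price": 60.0}))
print(pre_trade_check({"symbol": "RY.TO", "qty": 200_000, "price": 60.0}))
```

The latency cost the answer above refers to is precisely the cost of running such checks in the order path, which is why microsecond-class implementations command a premium.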
Raymond Russell of the FIX Inter-Party Latency (FIXIPL) Working Group and Corvil lays out the use cases for the FIX Inter-Party Latency standard and the functionality of Version 1.0.
Goals for FIXIPL
The principal goal of the Inter-Party Latency Working Group is to ensure interoperability between different latency monitoring vendors. Interoperability is essential because latency monitoring is vital to running a low-latency service; the people building these systems therefore need confidence that they can start with one vendor and still migrate to another. What we have seen through the proliferation of latency monitoring systems across the trading world, whether DMA providers, market data providers or trading desks, is that often the problems in managing latency within an environment happen between the cracks. Most firms have a good handle on latency in their own environment because they have engineered it well, but when they connect into a counterparty, it gets tricky.
A trader who sees a slowdown in response time will want to understand why they have missed trades or why their fill rates are low, but there are multiple places where that latency could have occurred. One place is in the exchange matching engine, which in some respects is unavoidable. If there is considerable interest and activity in a symbol at the same time, those orders will have to queue in the matching engine, purely as a result of market activity. The latency might also have occurred in the exchange gateway. It is common practice for exchanges to load balance across multiple gateways to accommodate high volumes, and you might have hit a slow gateway. Perhaps the service provider you connect through may have oversubscribed their network and you could be caught in cross traffic unrelated to trading. We have seen all these things happen, so the ability to see where the latency is occurring requires a consistent set of time stamps across the architecture.
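Given the consistent set of time stamps the passage above calls for, attributing latency to a hop is simple arithmetic. A minimal sketch, with hop names and timestamp values that are purely illustrative:

```python
# Sketch of latency attribution from a consistent set of time stamps
# taken at each hop of an order's path (trader gateway, service
# provider, exchange gateway, matching engine). Values are illustrative.
def hop_latencies(stamps: list) -> dict:
    """Given (hop_name, epoch_seconds) pairs in path order,
    return per-hop deltas in microseconds."""
    deltas = {}
    for (a, t0), (b, t1) in zip(stamps, stamps[1:]):
        deltas[f"{a}->{b}"] = round((t1 - t0) * 1e6)
    return deltas

stamps = [
    ("trader_out",   0.000000),
    ("provider_in",  0.000180),
    ("exchange_gw",  0.000950),  # large jump: a slow or congested hop
    ("match_engine", 0.001010),
]
lat = hop_latencies(stamps)
print(max(lat, key=lat.get))  # the hop contributing the most latency
```

Without a shared timestamp format across parties, the subtraction above is meaningless; that is the gap the FIXIPL standard is designed to close.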
Most exchanges already employ latency monitoring in their own environment; inter-party latency and the sharing of time stamps, while less important within the exchange, enable them to work with their members to identify areas of latency. The benefits unlocked through inter-party latency are somewhat biased towards the end traders, but they also extend to brokers and market data providers, who receive better-quality execution feeds and market data speeds, respectively.
For exchanges, the need for latency transparency is becoming a standard requirement as latency has become a competitive differentiator. To the extent that exchanges are comfortable with their own infrastructure and are ready to compete on their latency, they will want to share their latency measurements with members. In my experience, venues and brokers are no longer as reticent to share their latency figures as they were before.
Version 1.0 Rollout
Much of the work that we have done with Version 1.0 involved deciding how to produce a standard that on one hand is simple enough to be easily implemented, while ensuring it can still perform in all the basic use cases. Version 1.0, due out in December 2011, is clean and simple and emphasizes the core capability to publish time stamps. We have agreed on the technical scope, and the specification is now going through the formal review procedures required for standardization by FPL, including a public review. The other important step before the standard is real is to have two independent implementations. A number of things will be ready in a few months’ time, such as distribution through multicast and the ability to automatically group several measurements together across a trade; these we will include in the next version later next year.
Otkritie’s Tim Bevan describes the intricacies and idiosyncrasies of the Russian markets, and offers suggestions on how to effectively access the deep liquidity there.
How would you profile the firms that are interested in DMA to Russia?
There is an interest in DMA to Russia from prime brokerage desks because many of the hedge funds that use the global prime brokers have expressed interest in Russia, now that the liquidity has reached the point it has. It is worth pointing out that the liquidity in the local equity market is approximately $2.5 billion a day, and the derivatives market turnover is $10 billion notional a day. These are very significant and deep pools of liquidity. We are certainly seeing client pressure from different areas hitting Tier 1 banks, which in turn is reflected onto us. We are also seeing the big global electronic brokers looking to add Russia to their coverage.
There is sustained sell-side interest, but the other big pocket of interest we are seeing is from the low-latency, high frequency funds that utilize proximity hosting and co-location, who want to place hardware in Moscow and run their strategies in the electronic order books that are available there. There are many more of these types of participants now and they are often in London, New York, Chicago, Amsterdam, Paris and other parts of Europe.
How extensively are algos utilized in Russian DMA?
Obviously for a high frequency fund, the algo is the strategy. This is clearly different from execution algos, like VWAP, which are used to execute orders in a certain manner. Most Russian brokers have the most basic execution algos like VWAP, TWAP, icebergs, etc. It is a relatively new trend (i.e. 6-9 months old) for the big sell-sides to enter Russia, and many have not yet deployed their more sophisticated suites of algos into the Russian market.
Additionally, the Russian market itself is quite unusual in that there is a lot of programming skill in Russia. The average Russian retail trader is quite often running an algo through an Excel spreadsheet with $10-20,000 worth of capital, so as far as alpha strategies are concerned, there is a lot of algo activity in the Russian market. Execution algos, however, have not yet penetrated this segment. As the sell-sides continue to move into the electronic market, the second phase will be to deploy their own execution algos and offer them to their main clients, but we are at the beginning of that part of the process.
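The basic execution algos mentioned above (VWAP, TWAP, icebergs) all reduce to scheduling a parent order into child slices. A minimal sketch of a TWAP-style schedule, splitting an order into equal slices over a trading window (a real execution algo would adapt to volume and market conditions):

```python
# Minimal sketch of a TWAP-style schedule: split a parent order into
# near-equal child slices, one per time interval. Real implementations
# randomize sizes and timings and adapt to live market conditions.
def twap_schedule(total_qty: int, n_slices: int) -> list:
    """Return n_slices child quantities summing exactly to total_qty."""
    base, rem = divmod(total_qty, n_slices)
    # spread the remainder one share at a time over the first slices
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

print(twap_schedule(10_000, 6))
```

A VWAP algo replaces the equal slices with weights drawn from a historical intraday volume profile; the scheduling skeleton is the same.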
With the majority of liquidity isolated in a dozen stocks, how would Russian DMA fit into a firm’s overall trading/investment strategy?
Liquidity is very concentrated in Russia. The top ten names account for the vast majority of liquidity, and even the top two or three probably make up 50% of the market. DMA is possible beyond the top 15 or 20, but liquidity drops off fairly quickly thereafter. Obviously the big blue chip companies are where most of the interest is. Taking Sberbank as an example, there is no liquid Depository Receipt (DR); an unsponsored DR trades about $2 million a day in Germany. If you want to trade that stock, you have to trade the local market, where it trades between half a billion and a billion dollars a day notional, so there are some very deeply liquid companies that are only available in the local market.
What other asset classes are being attracted or will attract DMA interest?
The biggest interest is in the RTS Index futures, which is an incredibly powerful product. It trades over $5 billion a day notional, more than all Russian equity instruments combined (both DR and local), sometimes by a factor of two. RTS Index futures trade from 0700 UK time right through to the US close and are among the top ten most liquid equity index futures in the world. This instrument has generated the majority of interest from the quant funds, but interest is increasingly coming from more standard hedge funds and buy-sides where they are allowed to trade futures, as it provides an instant hedge or leverage tool with an almost bottomless liquidity pool for any one player.
Can there possibly be a silver lining in the current financial meltdown? John Knuff of Equinix argues that now is the time to upgrade your investment, allowing the Asia Pacific region to catch up with its US and European peers.
While the global financial crisis has inevitably had an impact on investment, most commentators seem to agree that the Asia Pacific market will see on-going development, particularly as it continues to invest in the infrastructure and technologies that will allow it to match its US and European counterparts in key areas such as execution speed, easier market access and direct data feeds.
Analysts such as Celent see the current downturn as a significant opportunity for Asian exchanges, even suggesting in a recent report that they have the potential to overtake their US counterparts in the near future. Before this can happen, however, there needs to be sustained investment in the technology, skills and processes that will enable lower latency, easier access and faster data feeds across the region.
One of the key challenges remains the diversity of the region. While the geographical diversity and vast distances involved will always make it hard for traders to gain low latency access to multiple market centers, the added complexity of local regulations and last mile access makes region-wide performance goals even more difficult to achieve. Nevertheless, factors such as direct market access, the increasing presence of alternative trading systems and the introduction of crossing networks will have a tremendous impact shortly after local regulations ease.
Investing to close the gap with other global markets
To assume that the different markets in the Asia Pacific region will progress seamlessly together towards a more deregulated and open environment would be unrealistic. The global financial crisis is already leading some Asia Pacific exchanges and regulators to be more defensive in their outlook. However, it also provides an opportunity for more traditional venues to develop and implement their own alternative trading strategies to compete more favourably with new market entrants as conditions improve.
It is this imperative to remedy the handicap of limited bandwidth and slower trading platforms that is driving Asia Pacific financial institutions to continue to update their technology infrastructure. As many of the incumbent exchanges re-tool their matching engines and foster technology partnerships with global leaders like NASDAQ OMX and NYSE Euronext, the broker/dealer communities are quickly positioning themselves to be the partner of choice for many of their US and European counterparts.
Given this background, we believe it’s important for Asian market participants to ensure they are making the right infrastructure and connectivity choices today, to allow them to compete more effectively tomorrow.
A world of more end points and more trading venues
In an Asia Pacific market driven by the continued growth of automated and algorithmic trading, the emergence of new liquidity opportunities and increasing numbers of order destinations and market data sources, we’re increasingly going to see financial firms trading a much wider range of asset classes and instruments across broader geographies.
In those countries where the incumbent exchanges still handle the majority of trading, these new market developments will have a significant impact as local traders, limited by their current choices, increasingly send order flows to more accessible and transparent electronic markets. All this translates into more end points and execution venues, and is driving demand among financial services firms for a greater choice of networks with low latency/high bandwidth capabilities to enable higher message rates and optimise throughput.
With the landscape of the Asia Pacific market's different trading centers evolving so quickly, it’s becoming increasingly apparent that a strong element of foresight, and much broader connectivity options, will play as important a role as proximity when it comes to making location decisions across the region.
Making the right technology decisions
This is important given the highly volatile and competitive nature of today’s financial markets. Trading volumes are shifting dramatically, new entrants are changing the market opportunities, and there’s a continued growth in automated, algorithmic and alternative trading strategies. At the same time, many markets are fragmenting away from their traditional single venue exchange-based structure, while major players continue to join forces in line with the trend of globalization and consolidation.
Whatever your line of business, there’s a pressing requirement for a stable, global infrastructure that assists you in achieving your market goals. Whether you’re an asset management firm or a hedge fund, you depend on the ability to access continuous streams of global market data, messaging, news, history and analysis to ensure successful execution of your trading strategy. Or you could be an exchange that needs to be able to quickly connect to participants or clients, receive orders over a range of different financial extranets and broker connections, post or match the orders almost instantly, and respond in milliseconds (or increasingly microseconds).