What’s an Exchange to Do? The Role of the Exchange in Evaluating Algorithms
NYSE Euronext’s Joe Mecane responds to calls for pre-trade certification of algos and describes NYSE’s efforts to manage risk.
What is the exchanges’ burden in terms of regulating High Frequency Trading (HFT)?
There are multiple components to that answer. There is no regulation specific to High Frequency Traders as a separate type of participant, but clearly many regulations apply to HFT because of the nature of their business. Sponsored access, for example, is clearly an area that affects high frequency traders who might not be members of exchanges. Recently, there has been a lot of press and discussion around regulating algorithms, and that discussion applies broadly, not only to HFT but also to customer-facing algorithms that firms develop and deploy to their customers or use to execute customer orders.
At the same time, high frequency traders can develop their own proprietary algorithms. In those cases, a general supervisory responsibility falls on any of those types of participants to ensure that their algorithms are tested and working properly before they actually deploy them to the public. There has been discussion around whether that should be a more formal, stringent rule, but it is complicated because everyone has some level of responsibility and oversight with regard to developing and deploying algorithms. One thing that we have talked about is creating a 'best practices' standard for people to follow, as there have been cases of particular firms being fined for releasing algorithms that had damaging effects on the market.
When does the exchange’s supervisory responsibility apply? Should exchanges evaluate algorithms in real-time as they trade, or should algorithms be evaluated beforehand?
The problem with evaluating algorithms pre-trade is that it is not practical to create an infrastructure that would certify an algorithm before it is deployed. While it sounds good in theory, the reality is that regulators do not currently have the resources or skill sets to sit down and review lines of code. What works instead is a structure in which firms have policies and procedures around how they develop, test and deploy algorithms, and those policies and procedures can then be reviewed by the regulators. It is not practical to demand regulatory sign-off on an algorithm before it is deployed.
What would you recommend for firms as best practice for testing algorithms before deploying them?
It is up to each individual firm to define and deploy the practices they think are most prudent. One thing the industry could do is develop best practices or standards for algo development that firms could adhere to. Some of the trader groups and technology trade groups may have those types of standards already or could put them together quite easily. It is not our place as an exchange to define those standards, but most firms already have something like them internally, and the industry as a whole could assemble them from a technology development and deployment perspective.
What are some actions NYSE takes to supervise algorithms as they are deployed?
NYSE has a number of elements in place to oversee algorithms and mitigate their risks. On our markets we have a number of circuit breaker-type mechanisms to detect what might be unintended behaviors on the exchanges, ranging from Liquidity Replenishment Points to some of the circuit breakers the SEC has mandated. We also have market order collars and limit order collars on Arca. So at one level, we have limits on our market designed to mitigate big price moves when an algorithm may be acting in an unintended manner.
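The collar mechanisms described above can be illustrated with a minimal sketch. This is a hypothetical example, not NYSE's actual implementation: the function names, the 10% band width, and the accept/reject outcomes are all assumptions chosen for illustration.

```python
# Hypothetical sketch of a price-collar check, loosely modeled on the
# market/limit order collars described above. The 10% band and the
# function names are illustrative assumptions, not exchange parameters.

def within_collar(order_price: float, reference_price: float,
                  band_pct: float = 0.10) -> bool:
    """Return True if the order price falls inside the allowed band
    around the current reference price."""
    lower = reference_price * (1 - band_pct)
    upper = reference_price * (1 + band_pct)
    return lower <= order_price <= upper

def check_order(order_price: float, reference_price: float,
                band_pct: float = 0.10) -> str:
    """Accept orders priced inside the collar; reject the rest."""
    if within_collar(order_price, reference_price, band_pct):
        return "accept"
    # Outside the collar: block the potentially errant order rather
    # than let it sweep the book and cause a large price move.
    return "reject"
```

In practice a real collar might reprice or pause an order rather than reject it outright; the point of the sketch is only that a simple band around a reference price can stop an algorithm gone bad from moving the market.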
At another level, we also have safeguards that monitor message traffic, so we have the ability to moderate or throttle traffic if we start to see excessive message volume coming down a particular connection. Third, and this is market-wide, there are clearly defined rules in place to deal with erroneous trades that could happen as a result of an algorithm gone bad. The last thing is that we have outsourced the market regulation component of our responsibilities to the Financial Industry Regulatory Authority (FINRA). As part of FINRA's reviews, they look at a lot of the supervisory procedures and oversight that firms utilize in deploying their algorithms, as well as in development and testing.
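The per-connection throttling mentioned above can be sketched with a standard token-bucket rate limiter. This is an assumption-laden illustration: the class name, rate, and burst parameters are invented here, and actual exchange throttling mechanisms are not public.

```python
# Hypothetical per-connection message throttle using a token bucket.
# All names and parameters are illustrative assumptions; this is not
# a description of any exchange's actual mechanism.
import time


class MessageThrottle:
    """Allow at most `rate` messages per second, with bursts up to `burst`."""

    def __init__(self, rate: float, burst: int, clock=time.monotonic):
        self.rate = rate              # tokens replenished per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)    # start with a full bucket
        self.clock = clock            # injectable clock for testing
        self.last = clock()

    def allow(self) -> bool:
        """Return True if this message may pass, False if it is throttled."""
        now = self.clock()
        # Replenish tokens for the time elapsed since the last message.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        # Bucket empty: excessive traffic on this connection, throttle it.
        return False
```

A gateway would call `allow()` on each inbound message for a connection and drop or queue messages once the bucket runs dry, which caps the damage a runaway algorithm can do with a flood of orders or cancels.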
What type of change would you like to see from regulators?
Probably the best thing that could be done, whether by the regulators or customer trade groups, would be to establish some disclosure or best-practices standards in the marketplace. That would ensure a consistent, minimum level of oversight for algorithmic development and deployment. Many firms have standards they are subject to and hold themselves to internally, but creating a standard that everyone could use as a base level would be a very good thing.
To what degree is that complicated by the fact that there are such a large number of firms developing algorithmic products, relative to other countries?
That is why it is not practical to have pre-clearance of every algorithm that is going to be developed or deployed. The better way to structure things is to organize more procedural, policy-type requirements for all participants to follow. Considering the amount of time it would take to look at one firm's algorithms and understand their code and what they were trying to do, it is clear that the resources are just not there.