Capital Group’s Brian Lees is driving efforts to ask more questions of brokers, and for more data on where an order is shown before it executes, but can the buy-side handle the resulting deluge?
The current work you are doing on venue reporting analysis
Our first push was simply to collect information about ‘where’ we were executing and a little bit about ‘how’ we were executing, namely, did we post or did we take liquidity. Having done that, the question was where do we go from there? The topic of requesting more data on where we didn’t execute and what order types were used was then raised by some representatives on the FPL Americas Buy-Side Working Group. Some participants had already started down this road with brokers, asking post-trade for information about where their orders were sprayed out by the algorithms, what types of orders were placed, and which exchanges they went to. So that’s where the conversation began, and that’s why we reached out to Jeff Alexander and Linda Giordano, because Barclays had already spearheaded this conversation.
What we are looking to achieve, either in real time or post-trade, is a standardised format for brokers to tell us how our order interacted with the market, including when the order was placed, what order types were used, where it was placed in the markets and whether or not we got hits. The concern is not so much whether we can get it, because if we sign enough non-disclosure agreements we can get the information from the brokers. Some brokers worry about that information getting out and somebody reverse-engineering their algorithms, but from the buy-side perspective, I think the biggest concern is whether we can manage the volume of data that we would get.
The resources to store and analyse data and make some sort of good use of it
With the original data that we were getting, on where the execution took place, we talked a lot about this with smaller firms who were using TCA vendors to help them analyse this information. If we went a step further with this type of information, the brokers would not want us sending it out to TCA firms, because it shows how their algorithms behave. I was in New York several weeks ago and took the opportunity to meet up with Jeff and Linda. We invited Jeff to join one of our conference calls for the buy-side committee, which he did, and he talked about what they’ve been proposing. He showed proposals for the real-time collection of data via FIX messages, actually proposing a whole new FIX message to be created for this purpose, which could then be sent in real time; alternatively, we could standardise a format for collecting the information post-trade which, as a spreadsheet, would then tell us what we want to see. We’re trying to standardise how you ask for the data and what format it is going to be in, by creating best practices for how to get the data from the brokers. That way the brokers don’t have to keep coming up with a different format for every client that asks. The best practices do specify that ISO MIC codes would be the standard for identifying the exchange that you executed on, but we said nothing about what you should do with the data once you get it.
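As a concrete illustration, here is a minimal sketch of what one row of such a standardised post-trade routing file might look like. The column names are hypothetical and not part of the best practices; the only element taken from the text is the use of ISO MIC codes as the venue identifier.

```python
import csv
import io

# Hypothetical columns for a standardised post-trade routing report.
# Only the ISO 10383 MIC code as the venue identifier comes from the
# best practices described above; every other field is illustrative.
COLUMNS = ["order_id", "timestamp_utc", "venue_mic", "order_type",
           "posted_or_taken", "filled_qty", "unfilled_qty"]

rows = [
    {"order_id": "A123", "timestamp_utc": "2013-05-01T14:30:00.125Z",
     "venue_mic": "XNAS", "order_type": "limit",
     "posted_or_taken": "posted", "filled_qty": 300, "unfilled_qty": 200},
]

def to_csv(records):
    """Serialise routing records into one uniform CSV layout, so every
    broker can answer the same request in the same shape."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

report = to_csv(rows)
```

The point of the single agreed layout is exactly the one made above: the broker produces one format for every client, rather than fifty bespoke ones.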
Exchange involvement in the conversation
We did talk to some exchanges when we were first trying to standardise how to identify them, because when we first standardised on MIC codes, they did not cover all the exchanges; not all of them had registered with the ISO organisation, and we wanted them to.
We had a little bit of trouble differentiating the dark order books from the lit order books on some of the exchanges that have both. These exchanges consider themselves a hybrid book, and they didn’t want to be known as two different things, but we didn’t have a way to differentiate the dark and the lit flow without introducing yet another FIX tag. That back and forth fed into the registration authority’s decision to come out with the new market segment concept, which says you can have an exchange defined and have child MIC codes that differentiate different segments of the market. We’re beginning conversations with exchanges about this topic, but that’s the extent to which we’ve had any discussion with them.
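The market segment concept amounts to a simple parent/child lookup: each segment MIC rolls up to one operating MIC. The codes below are hypothetical placeholders, not real assignments; the authoritative list is the ISO 10383 registry.

```python
# Illustrative segment-to-operating MIC mapping. "EXCH", "EXLT" and
# "EXDK" are invented placeholders, not registered MIC codes.
SEGMENT_TO_OPERATING = {
    "EXLT": "EXCH",  # lit book segment of the hybrid exchange "EXCH"
    "EXDK": "EXCH",  # dark book segment of the same exchange
}

def operating_mic(mic: str) -> str:
    """Roll a segment MIC up to its operating (parent) MIC.
    An operating MIC simply maps to itself."""
    return SEGMENT_TO_OPERATING.get(mic, mic)
```

This lets a hybrid venue stay one exchange at the parent level while its dark and lit flow remain distinguishable at the segment level, without any new FIX tag.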
Broker willingness to participate in the process
With the first half of this, just getting the information about where you executed, the brokers didn’t have any problem, because it’s public record once it executes. When we started talking about the more detailed reporting, they did raise a concern about the information being sent out, and about NDAs to ensure that you, as a client, are not going to send the data out to a third party. But because other firms had already started down this road, we talked about the purpose of this, which is just to have someone looking over their shoulder to make sure that they are acting in the best interest of the client and not potentially favouring rebates over best execution; they can’t really argue with that logic. Somebody should have some oversight as to whether or not the right decisions are being made.
We haven’t talked to a large number of the sell-side yet; this will be done prior to publishing the best practices, as they will be shared globally with the FPL membership for review. It’s certainly possible we could meet more resistance on this. However, because these are our orders and the data is generated by our activity, we shouldn’t face too many challenges, as long as we can come to an agreement about how it is disclosed and how it will be used internally.
Critical mass when enough brokers are on board
If we get some firms saying, “Yes, we’ll show you this data,” and some that say, “No, we won’t,” the ones who won’t are likely to get phone calls from heads of trading saying, “Well, if you won’t share the data, then we may not be able to trade with you, because we need to be able to monitor the quality of what you’re doing.”
In this situation, if there are some holdouts and the rest of the industry is going forward with it, I would not imagine that the holdouts would be able to continue to do so. There’s always going to be a balance between the proprietary data and the need for transparency about how trades are executing.
Touching base with European counterparts
We released an updated version of the guidelines last summer and worked closely with the EMEA Business Practices Subcommittee. We have also spoken with the EMEA Investment Managers Working Group, with phone calls to bring them up to speed on what we’re doing, and there’s definitely interest on their side in supporting this. A lot of the firms that are part of the US Buy-Side Working Group are global firms, so we have a vested interest in getting Europe, and at some point Asia as well, involved in helping us carry this forward.
The burden of extra data on smaller buy-sides
It will be up to each firm to come up with the resources to do something with the data. The challenge will emerge if it turns out that the information can’t be disclosed to a third party to analyse, as there are a lot of buy-sides that simply won’t have the resources to do this themselves.
We are already seeing vendors incorporate information about where you execute into their EMS systems. A few weeks ago I attended a conference where I saw firms that had taken that information and shown a real-time pie chart of where an order was executing, and it was using MIC codes, pretty much the standard that we had set up with FIX, reading them from the FIX messages. So you can already get a certain degree of information about how you’re executing, but taking this extra step, seeing where you didn’t execute and what that means, such as ‘is a broker trying to get rebates at my expense?’, will require you to deploy a certain amount of resources if you’re not allowed to send the data out to a vendor.
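The aggregation behind such a real-time venue pie chart can be sketched in a few lines, assuming fills arrive as already-parsed FIX execution reports carrying the venue MIC in LastMkt (tag 30) and the fill size in LastQty (tag 32); the sample fills below are invented.

```python
from collections import Counter

# Each fill is a parsed FIX execution report; only the two fields the
# chart needs are kept: LastMkt (tag 30, venue MIC) and LastQty (tag 32).
fills = [
    {"LastMkt": "XNAS", "LastQty": 300},
    {"LastMkt": "XNYS", "LastQty": 200},
    {"LastMkt": "XNAS", "LastQty": 500},
]

def venue_breakdown(executions):
    """Share of executed quantity per venue MIC; these fractions are
    the slices of the real-time pie chart."""
    qty = Counter()
    for fill in executions:
        qty[fill["LastMkt"]] += fill["LastQty"]
    total = sum(qty.values())
    return {mic: q / total for mic, q in qty.items()}

breakdown = venue_breakdown(fills)
```

Note that this only covers executions; the harder question raised above, where the order was shown but did not execute, needs data that never appears in execution reports at all.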
Extra burden on the brokers
Some of the smaller brokers might have resource issues in trying to produce the data. We’re starting out focusing on the larger firms. But of the ones that we have talked to, it’s not a matter of, “Do I have the data?” Their biggest concern is, “Am I going to be asked for this in 50 different formats for 50 different clients?” And that’s the problem we’re trying to address. Let’s get this standardised in a uniform way to minimise the work involved, and then you can set up whatever processes on the sell-side you need in order to generate reports in that format.
A lot of the smaller firms do rely on vendor EMSs as well, and they may be able to ask their vendors to give them a way to generate the data; having a standard format would help in this regard too. We want to start with the bigger brokers and see how useful we’re finding the data before we try to push it further. We’re still just at the discussion phase about how much data we are going to generate. We are trying to get our arms around just how much data we’re talking about here; we’re not quite sure.
We don’t currently know whether the broker is sending 20 quotes per second to one exchange and doing that with 14 exchanges all at once, or something less. We’re going to do some case studies just to get an idea: if we were to do this in real time, here is a spreadsheet showing how many FIX messages it would have generated. Then we can go back and say whether or not we’d be able to handle that volume. There are a lot of questions that we have to answer.
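A back-of-envelope version of that case study takes only a few lines. The 20 messages per second and 14 exchanges are the worst case quoted above; the ten-minute order lifetime and the daily order count are purely illustrative assumptions.

```python
def message_estimate(msgs_per_sec_per_venue, venues,
                     order_life_sec, orders_per_day):
    """Rough upper bound on real-time FIX routing-report traffic:
    messages generated by one order, and by a full day of orders."""
    per_order = msgs_per_sec_per_venue * venues * order_life_sec
    return per_order, per_order * orders_per_day

# Worst case from the text (20 msgs/sec, 14 exchanges) with assumed
# values: a 10-minute order lifetime and 1,000 orders per day.
per_order, per_day = message_estimate(20, 14, 600, 1_000)
```

Even under these crude assumptions the per-order figure runs into the hundreds of thousands of messages, which is exactly why the volume question has to be answered before committing to the real-time FIX approach.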
This initiative has taken time, as we have sought to collect the data about where we were executing and the make-or-take flag, in addition to the principal and agency information. Achieving just those three items, in addition to composing the standard for gathering this information and then popularising it, has taken time. We have also had a lot of meetings with the brokerages, saying to each firm, “now we have the standard, we expect you to adopt it.” We have then been monitoring the data that they sent, which has been an iterative process because we got some poor-quality data at first. It’s been a constant feedback loop.
This next phase is likely to be similar as well. We haven’t set a deadline specifically for this. We’re taking it one meeting at a time and coming up with some action items and then following up. I would imagine it’s another year or two before we start to see adoption being carried forward.
This is very new territory for everybody. It has always been about actual transactions taking place, and now we’re talking about generating data for the transactions that ‘didn’t’ take place. That’s a whole new arena, and whether or not we, as the buy-side, can actually handle what we’re requesting is a big question mark.