CFTC- Hail Mary?

It has been a busy seven days in the world of regulatory position limits. First a new MiFID II position limits RTS was released, as reported here, and then the CFTC re-proposed their own ‘Dodd-Frank’ limits regime. Most international firms will have to comply with both. So why has the CFTC limits announcement been getting so much more attention than the EU announcement? CFTC limits are barely a speed bump compared to the MiFID II mountain.

Like others, when I first heard about the CFTC trying to get Dodd-Frank limits out the door, I thought to myself, “Holy Cow! Regulatory Hail Mary!”

This knee-jerk reaction is premised on the new Dodd-Frank limits being killed on the inauguration of the new President.  But then I thought again. There are only 3 commissioners, and it will be impossible to get the open slots filled in time to kill it outright.  Then I re-convinced myself that it truly is a Hail Mary and nothing but a symbolic gesture being left to an unknown commission.

Then I took a step back and realized that none of that mattered.  Here’s why:

Limits Is Already the Priority One Surveillance Activity

Busting any limit, irrespective of whether it’s an exchange or other regulatory limit, is seriously bad news. The stakes today are higher than they have ever been.  If you take a big step back and look…really look at what the sum total of all regulations and enforcement actions in the US and Europe means, the takeaway is very, very clear:

Don’t do anything that brings attention to your firm.

The reasons are simple:

Whatever happened, no matter how accidental or well intentioned, will absolutely, positively and unquestionably be viewed in the worst possible light. Any open door invites tough scrutiny.  I’ll give you an example.  In a recent manipulation case a trader called a trade a “sucker’s bet.”  The judge in the case said something to the effect of, “So you think your clients are suckers.”  There is a big difference between calling something a sucker’s bet and calling someone a sucker.  But this kind of interpreting everything in the worst possible light is de rigueur in compliance actions.

Another thing you can be sure of: if you bring attention to yourself in one region, every other regulatory body will see you as easy meat for a settlement (or worse).

Above all else, the memory of an investigation lingers. Everyone remembers BP and Shell being raided for alleged price fixing. It was the lead story on the evening news and the front page of every newspaper. Even David Cameron was discussing it. But how many people remember the outcome? (Turns out…no charges or fines.)

Personally, I’ve been kicking around the trading space since 1996.  In that time I’ve seen countless trading firms make half-hearted attempts at cobbling together Total Real Time Positions. Whether your justification is Dodd-Frank and MiFID or just overall better risk and reputation management, now is the time to do more than cobble and go after it as a priority. See How Limits Work

MiFID II- Data Validation

Validation. There is a reason everyone is talking about it.  If you do it wrong, your compliance personnel start pulling their hair out.  As far as ESMA, Dodd-Frank and MiFID II trade reporting go, we are clearly moving from a world of low validation to one that is highly validated.


Com-Sci 101:  Four Categories of Validation

Data Type Validation: “Did you send a properly formatted date? Because you can’t just send me any date…I require a properly formatted date.”

Range Validation: “Did you send me a value that is reasonable?  Because I hate it when you send me a market price of $50 billion.”


List Validation: “Did you send me an exact value that matches my list of acceptable values?  I swear, if you miss a single character I will bounce your message.”

Structured Validation: “Did you send me your LEI? Because I’m going to check that it’s both in the right format and that it matches your company name.”
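To make the four categories concrete, here is a minimal Python sketch of each check. The field rules, venue codes and accepted-value list are illustrative assumptions, not any repository’s actual schema, and a real LEI check would also verify the ISO 17442 checksum and the registry entry:

```python
import re
from datetime import datetime

def validate_date(value):
    """Data type validation: require a properly formatted (ISO-8601) date."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

def validate_price(value, low=0, high=1_000_000):
    """Range validation: reject absurd values like a $50 billion market price."""
    return low < value < high

def validate_venue(value, accepted=("XNYM", "IFEU", "XCME")):
    """List validation: exact match against a list of acceptable values.
    Miss a single character and the message bounces."""
    return value in accepted

def validate_lei(value):
    """Structured validation: 20 characters, alphanumeric body, 2-digit tail.
    (A full check would also run the ISO 17442 checksum and match the
    LEI to the company name in the GLEIF registry.)"""
    return bool(re.fullmatch(r"[A-Z0-9]{18}[0-9]{2}", value))
```

Pre-validating with checks like these before submission is exactly the "option 2" strategy discussed below: catch the bounce locally instead of waiting hours for the repository to reject you.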

So What’s the Big Deal?  

Latency.  That’s the big deal.  In some cases it takes repositories literally HOURS to validate the messages sent to them.  The more validation regulators throw onto their plate, the longer it will take.  The result?  Your compliance teams have to wait and wait and wait to see if their submission was accepted.  Did any of your trades get rejected? Yes? OK…start again…correct it…and wait and wait and wait.

So the takeaway, as we move into MiFID II, is that you have two choices: 1) use a trade repository that is architected for speed (read: a great API with speedy validation); or 2) pre-validate everything to absolutely minimize the possibility of having an error that you will have to wait and wait on.

K3, as an integration platform to trade repositories, does all forms of validation prior to going to the TR.  Likewise, once a message is sent, we show the acknowledgement (or rejection) immediately upon receipt.

Considering that we have connected to just about everyone, I’d be remiss if I didn’t mention that the response time from destinations (ARMs, TRs, etc.) is what we call “highly variable”.  There is a huge technical disparity between trade destinations.  It makes a difference because, for example, there is one repository that is so fast and comprehensive in its validation that we really don’t bother pre-validating (pre-validating takes just as long as validating against the actual repository; read another way, spam the API to your heart’s content).  Even across thousands of trades, I doubt we’ve waited longer than 2 seconds for a response.  On the other end of the spectrum we have waited as much as 12 hours.  If you’ve got to wait that long, pre-validation is mandatory.  So that is what all the validation hubbub is really about.

The Demise of Delegated Reporting- MiFID II

Trade Reporting.  Ask anyone on the sell side or in commodities or derivatives about MiFID II and you will probably get a pretty solid “been there, done that”.  After Dodd-Frank, EMIR, REMIT and Canada, MiFID II does not look so bad.  Sure, a few wrinkles like T+0 transparency reporting, but other than that we are probably good to go.  The thing is, we’ve been quite busy meeting with a large number of firms that have not been in the transaction reporting game before: the Buy Side.

Delegated Reporting

For better or worse, the Buy Side has been able to avoid most of the reporting headaches under MiFID I, EMIR and the like through delegated reporting. It’s been a really good run actually.  The Buy Side has been able to avoid nearly 4 years of tough reporting work.  The bad news is that sell side firms are starting to announce that they are withdrawing delegated reporting under MiFID II.  We expect more to come.

Why is Delegated Reporting Going Away?

Nothing official, but there are some fairly obvious reasons.  The first is that delegated reporting is seen as a non-monetary benefit. A core principle of MiFID II is that best execution has to be independent of “ancillary services.”  Research is squarely in the crosshairs…and very likely delegated reporting is seen in the same light.  Second, for some entities it’s going to be difficult to pull off delegated reporting for some of the MiFID II reporting fields, especially when they include proprietary client data and the like.  While not impossible, it makes things tricky for the sell side reporter figuring out how to collect and maintain what is very, very sensitive information.

Tips for the Buy Side

We have been delivering K3 for trade reporting for the past 5 years.  We’ve seen just about every regime and make it a point to talk straight with our customers.  So, here we go.

Right Now-Be Skeptical…But Don’t Let That Stop You.

Be skeptical about where your firm is going to report.  In trade reporting there is a big difference between signaling intent and actually showing up on game day.  Right now the ARM/APA side of things looks like the back pages of a British tabloid on Premier League transfer deadline day.  Gossip, good intentions, disinformation, partnerships, alliances, intended integration points and plenty of back office chatter…none of which has solidified yet.  But this will settle down. Closely inspect what you expect.

But don’t let that stop you.  We are expecting a literal crush of buy side firms who were late to the game figuring out they have reporting obligations under MiFID II.  We saw lots of late sign-ups under EMIR and…well…let’s just say there are plenty of battlefield stories.

Be Ready to Pivot, Integrate and Adapt

Right now we are looking at 65 core reportable fields, with an additional 18 or so for transparency reporting.  Experience tells us that this is a meaningless metric.  This set of fields will expand quickly, both from regulators and from the ARMs/APAs.  Our experience also tells us that despite having the same data set, each ARM/APA will do things very, very differently.  Likewise, familiarize yourself with the concept of “data validation.”  There is a whole blog post coming on this topic shortly.

There is also the careful ballet of connecting all your trading systems to meet the reporting deadlines.  I can practically guarantee that someone somewhere in your firm is going to say the words, “We need a data warehouse.”  I’m not going to disagree, but at the same time there are huge considerations that go into this, especially with the MiFID II timetable.  We’ve done some great things with K3 on this front, both on the internal data warehouse side and the reporting side. Always happy to talk.

Finally, the last piece of advice is this: know thy trades.  Every trading firm that has had to undergo a trade reporting project has necessarily been forced to inventory and “clean up” where its trades are kept and how they are kept.  Starting a project to clean up, tag, add metadata to and systematize trading data is a key feature of reporting success.  This is doubly true where the landscape of delegated reporting is changing.

The 4 Biggest Trading Limits Mistakes

It’s been wisely said before that “Little details have special talents at causing big problems”.   In time for Dodd-Frank and MiFID II limits we have just rolled out a new version of K3 limits.  [LOOK BELOW FOR FREE ACCESS TO ATLAS (Spot Start, Complete Limits Info, Open Interest, REST API)]

As companies consider how they are going to handle these new position limit regimes, coming up at the end of the year, we thought it a good time to share the common details that many companies miss.

1. Spot Start Miscalculation

Believe it or not, exchanges don’t actually publish the date when a product enters the spot period.  What they publish instead is text describing when the spot period starts: something like, “spot starts x days prior to the maturity date, except on Saturdays, Sundays and holidays,” etc.  Here’s the problem: this forces companies to manually calculate the spot date.

Never fond of manual processes, we solved this with our cloud app ATLAS.  ATLAS regularly downloads maturity dates and then, through a little coding magic, calculates spot start dates.  It’s all easily accessible through a web interface or as a simple REST API call.
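Under the hood, the calculation is essentially a walk backwards from the maturity date. Here is a hedged Python sketch, assuming the rule is “x business days prior to maturity” with a supplied holiday calendar; the actual rule text varies by exchange and product:

```python
from datetime import date, timedelta

def spot_start(maturity, business_days_prior, holidays=()):
    """Walk back N business days from the maturity date, skipping
    weekends and exchange holidays (the rule exchanges publish as prose)."""
    d = maturity
    remaining = business_days_prior
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:  # Mon-Fri, not a holiday
            remaining -= 1
    return d
```

For a Monday 2016-12-19 maturity with a three-business-day rule, the walk skips the weekend and lands on Wednesday 2016-12-14; add a holiday on the Friday and it shifts back one more business day.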

2. Missing the Diminishing Positions

I remember a risk officer asking what our position was at end of month on some New York Harbor contracts.  The answer was: zero.  Confused, he said, “Wait, weren’t we long X lots at the beginning of the month?”  A: Yes.  “Did we close the position?”  A: No.  Enter the diminishing product.  If the exchange indicates that a product is diminishing, your limits position in the spot month will ratably decrease over the course of that month.  At the beginning of the month you have a full position, and by the end of the month you are at zero.

The diminishing indicator is always available in ATLAS and of course K3 Limits always diminishes products in spot when indicated.
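For illustration, the simple ratable case can be sketched as straight-line decay; actual diminishing schedules are exchange-specific, so treat this as the shape of the calculation rather than any exchange’s rule:

```python
def diminished_position(full_lots, days_elapsed, days_in_period):
    """Linear pro-rata decay of a diminishing spot-month position:
    full position at day 0, zero by the end of the period."""
    if days_elapsed >= days_in_period:
        return 0.0
    return full_lots * (1 - days_elapsed / days_in_period)
```

This is why the risk officer’s month-end answer was zero despite the position never being closed: halfway through the period the same lots count as half a position, and at period end they count as nothing.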

3. Parent / Child Roll Error

A trader once remarked that nothing she traded had position limits.  True, they didn’t have explicit limits, but they rolled into a parent product which did have limits.  This is the parent-child relationship.  That roll-up or “aggregation” may even be at a particular ratio: for example, 1 lot of a child position may only be equal to half of a parent lot.

It gets even more complicated when maturity dates don’t match.  This one is easy to miss.  Let’s say your traders are trading CS Crude Contracts.  These are child positions to the NYMEX future 26; they roll up on a 1:1 basis.  But even though one CS contract is equal to one 26 contract, these products roll/mature on different dates.  If you go long Sept CS, part of your position rolls into Sept 26 and part into Oct 26 from a limits perspective.

K3’s limits calculation does this automatically.  ATLAS also indicates whether a product is a parent/child and the aggregation ratio.
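A rough sketch of the roll-up logic follows. The maturity weights are placeholders: how a child position actually splits between parent contract months (the Sept/Oct 26 example above) is defined by the exchange calendar, not by a fixed ratio:

```python
def parent_equivalent(child_lots, ratio):
    """Convert child lots into parent-equivalent lots (e.g. ratio=0.5
    when one child lot counts as half a parent lot)."""
    return child_lots * ratio

def split_across_maturities(child_lots, ratio, weights):
    """Allocate a child position across parent maturities.
    `weights` (e.g. {"SEP": 0.75, "OCT": 0.25}) is illustrative only;
    the real split between parent months comes from the exchange's
    roll/maturity calendar."""
    parent_lots = parent_equivalent(child_lots, ratio)
    return {month: parent_lots * w for month, w in weights.items()}
```

The point of the sketch is the trap itself: even a 1:1 ratio does not mean a 1:1 month mapping when the child and parent mature on different dates.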

4. No Exchange Delta

Trading options?  The rule is that you need to use an exchange-published delta to determine your position.  These can be a bit tricky to get.  But ATLAS downloads them into a nice usable format behind a gorgeous API that can be called from any application.
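The conversion itself is just multiplication, but the delta has to be the exchange-published figure rather than your own model’s. A minimal sketch:

```python
def futures_equivalent(option_lots, exchange_delta):
    """Convert an options position into futures-equivalent lots using
    the exchange-published delta (the figure regulators expect,
    not your internal model's delta)."""
    return option_lots * exchange_delta

def net_position(legs):
    """Net futures-equivalent position across (lots, delta) legs,
    with short positions expressed as negative lots."""
    return sum(futures_equivalent(lots, delta) for lots, delta in legs)
```

So 100 long option lots at a published delta of 0.5 and 40 short lots at 0.25 net out to 40 futures-equivalent lots against the limit.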
As we gear up for Dodd-Frank and MiFID II limits we will, of course, have all the appropriate data in ATLAS and be ready to go.  As always, if you’d like a free preview of ATLAS to spot-check your spot starts, or just to see how it works, we’d be happy to accommodate.  Drop Tom Eisner a line at +1 (646) 461-3820 or tje@broadpeakpartners.com


 

Building Server Empathy – Broadpeak’s Guide to WebSockets

While building an application, I recently encountered a situation where I needed to use WebSockets. Among other things, the application fits a row from the database (43 columns) into a row in Facebook’s FixedDataTable (FDT), a JavaScript library for displaying tabular data.

A problem occurred when the client requested too many records and the server could not hold them all in memory (RAM). Depending on the server heap size, at about 75,000 records the server ran out of memory and became unresponsive. Given our large data sets and the possibility of hundreds of concurrent connections, the memory issue had to be rectified. I had to implement data streaming to allow the server to run in constant memory. I decided to use WebSockets because I wanted to implement a cooperative streaming protocol that uses bidirectional communication.  This was the start of my journey towards Server Empathy.

The original implementation failed because of a memory overload, but for posterity it is outlined in these three steps:

1) Client sends HTTP request to server asking for all trades fitting a certain criteria from the database
2) Server opens a database connection and reads all relevant rows into memory
3) Server does some work and passes data as response back to client

In order to implement a solution I modified the way the database returns records, and created a communication protocol over WebSockets for client/server communication.

First let’s discuss the database connection. In Java JDBC, when a query is executed, an object of class ResultSet is returned. The ResultSet is a cursor into the query results prepared by the database server.  The original implementation was fetching the entirety of the result rows, bringing them into memory and sending them off to the client.

In the new implementation the server uses the ResultSet cursor to manage the flow of data from the database to the server.  In a series of iterations, the server fetches only a small batch of records from the ResultSet.  Next the server enriches the data and drops it on the WebSocket for the client, before proceeding on to the next batch. The previous batch automatically gets garbage collected, allowing the server to run in constant memory.  Next, I’ll outline how I built a client-server communication protocol to minimize the memory burden on the server.

When the client gets some data, it renders the data into the FDT and then sends a message to the server over the WebSocket. The message tells the server whether it should stop or continue sending records. The client has the option to request the termination of the connection at any time for any reason. The server will terminate the connection either when it has received a stop signal from the client, or when the query has completed.
The interaction is detailed in the diagram below (bi-directional arrows represent a WebSocket):

WebSockets communication model

In this model the server only needs to keep (at maximum) a predetermined number n records in memory at a given time. This alleviates the server load and allows the client to terminate a connection if the application becomes slow or the user navigates away from the page. The chatty relationship between client and server ensures that both sides are always responsive, and any problems will be rectified immediately.
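The batch-and-acknowledge loop above can be sketched in a few lines. The original stack is Clojure/ClojureScript; this Python version is an illustrative stand-in, where `cursor`, `send` and `recv` substitute for the JDBC ResultSet and the WebSocket endpoints:

```python
def stream_query(cursor, send, recv, batch_size=500):
    """Cooperative streaming: fetch one small batch from the DB cursor,
    push it to the client, then block until the client says 'continue'
    or 'stop'. At most one batch is ever held in server memory."""
    while True:
        batch = cursor.fetchmany(batch_size)   # standard DB-API cursor call
        if not batch:
            send({"type": "done"})             # query exhausted: tell client
            return
        send({"type": "rows", "rows": batch})
        if recv() == "stop":                   # client may bail at any time
            return
```

The previous batch becomes garbage as soon as the next `fetchmany` call rebinds `batch`, which is what keeps the server in constant memory regardless of result-set size.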

WebSockets may be more difficult to implement than a standard HTTP request, but they provide a lot of flexibility in the client/server interaction. They are particularly effective for communicating large amounts of data, because WebSockets allow for batch streaming while lightening memory loads on both client and server.  So if you “care” about server memory, speed, or network bandwidth, there is a strong possibility WebSockets may be useful!

Technical Information
If you are interested, here is some information on the technology stack used for this application:

Core.async – The asynchronous library we use on both client and server. Processes messages entering and leaving the web socket.

Re-Frame – A small templating library built on top of ReactJS. It helps to manage the state complexity, while providing a way to program functionally.

Clojure/ClojureScript – Respectively, server and client programming languages we use.

Chord – WebSocket library, built on top of http-kit.

Fixed Data Table – JavaScript library used to render the trade data in a tabular form.

For People Who Love Bad News…

It’s been all good news these days. Makes me think back to high school when I learned about the Stoics. Something to the effect of…destructive emotions are the result of an error in judgement.  I suppose it could also be said that errors in judgement are the result of destructive emotions. Either way, these are the days to set aside the hyperbole and keep one’s head.

Dislocation and Disruption

Everyone (especially in tech) loves to talk about disruption. Mostly these companies are not disruptive…they are dislocative. There is a big difference, and that difference is “the big plan.” True disruption has a “plan of being” after the dislocative event.  Remember “Occupy Wall Street”?  Lots of attention, plenty of media, a voice heard ’round the world.  But nary a path forward that people could get behind.  Dislocation gets a lot of attention but eventually goes quietly into the night.  Compare that to, I dunno, Alexander the Great (keeping with the high school classics).  War and pillage had been happening since the beginning of time.  Alexander was different: he had a plan to instill laws, commerce and governance that forever changed the face of the world.  That’s disruption.  So I’ll ask what you think about Brexit.  Major dislocation or disruption?  It’s a whole lot of hype until the UK files for an Article 50 exit.  My guess is that could take a long, long time.

The Trouble With the Spot Month

The CFTC may or may not roll out with Dodd-Frank Limits this year.  At least that’s the opinion of everyone we talk to.  Signs point to yes, but experience says no.

The exciting news is that this month we are rolling out K3 ATLAS as a standalone product.  ATLAS is a data server that assembles everything you need to correctly calculate limits across the world.

The Problem ATLAS Overcomes

Correct reference data.  When we started building limits we found limits reference data to be dead wrong or entirely non-existent.  Based on the commercially available data out there, we quickly realized there is a good chance that just about everyone’s calculation is probably wrong.

What Kind of Data You Need

  • Current and updated limits from the exchange
  • Correct exchange deltas for your options
  • Aggregation ratios to roll products together
  • Current maturity dates of all your products
  • And more than anything…a correctly calculated spot start date

That’s what ATLAS provides.  We are now downloading over 1 million pieces of data a day, and we are the first service to offer the spot start date for all products.  You can tell your IT team it’s all delivered via a nice REST API.  They will be happy as clams because they can fold it into your existing limits process in no time.
Seriously, if you are looking to clean up your limits process, give us a call. You can get real-time updates to all of these data points for less than you are paying some provider to give you only limits in a CSV.  Always happy to give you a free trial to try it out.  info@broadpeakpartners.com

EU-MAR – More Insight

A really good article here from Baringa.  The gist: focus on policy and plan.  There’s a reason for that.  Sure, you want all your order information from all exchanges and marketplaces.  The bad news is that not all of them are remotely ready to go.  There’s a little bit of a dark art to getting your order data.  To make a long story short, it pays to plan for the long haul!

Corporate Tuberculosis

So, I’m sitting in this meeting at an enormous Fortune 50 company. The general theme of the meeting is “digital transformation” and creating better insight into data for the business. Operations wants something really straightforward: direct access to data for faster turnaround.  We’re reviewing the most powerful data technologies in the market and how they all fit together to deliver exactly what the business wants.

And then a guy on the phone starts ranting. He’s got a fiefdom of 30 people running a 20 year old ETL tool that serves as the single gateway to the business. They’ve made enormous investment and he was having none of this meeting.

Furrowing my brow, I realized: “Aarrgh! – They’ve got a Data Hoarder.”

Data Hoarders are the corporate equivalent of Tuberculosis.  It won’t kill you right away, but they didn’t call it consumption for nothing.  Data Hoarders are a wasting disease from which, without significant intervention, you will eventually die.

Let’s set some context here. Worldwide corporate data use is cranking away at something like 8 Zettabytes per year.  That’s up from about 0.5 Zettabytes in 2009.  What’s a Zettabyte?  Well, if a Gigabyte were one second, one Zettabyte would equal 31,688 years.
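The arithmetic behind that figure (one zettabyte is 10^12 gigabytes):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds per year
GB_PER_ZB = 10**12                      # one zettabyte = 10^12 gigabytes

# If a gigabyte were one second, a zettabyte would be 10^12 seconds:
years = GB_PER_ZB / SECONDS_PER_YEAR    # → roughly 31,688 years
```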

Here is the thing: in any given company somewhere between a quarter and half of the data falls into what we consider “critical corporate data.” This is the kind of transactional and metadata that has a clear and material impact on the company.  Exposing this data is critical to building meaningful insight.

There is this saying: “Do what you’ve always done | Get what you’ve always got”.  It’s a peppy motivational quote.  But in the corporate world, doing what you’ve always done has a feeling of genuine safety.  The institutionalization of this is: “Better the devil you know (than the devil you don’t).”  But when it comes to data there is something afoot that upends the apple cart: data is an ever-changing deluge. Old technology scales miserably.  It requires larger and larger fiefdoms just to reach the starting line. In other words, if you do what you’ve always done, you will get far, far less than what you’ve always got.

So, You’ve Got Corporate Tuberculosis-Now What?

There is good news. Remember that 20 year old ETL tool?  It cost a fortune and requires dozens of skilled personnel to run.  There are plenty of amazing tools out there that completely change the game.  K3 just happens to be one of them.  At so many of our clients we have liberated data from monolithic applications so the business can get at that data and do things with it.  We are talking about delivering streaming and cross-functional data in weeks, not years, and at a fraction of the cost.

I know.  Data Hoarders are a tough nut to crack.  Smart CEOs and CIOs have an open door policy when it comes to dismantling this type of fiefdom.  But even more important is getting Data Hoarders to let go of the fear leading them to hold on to job security for dear life.  If you really want a safe job, be the person that enables democratization of enterprise data.  Trust me, your career will be far more rewarding when you become a key stakeholder in delivering spectacular insight.

How To MAR- A Commodities Primer

We’ve recently had some very interesting conversations with companies about MAR.  Despite it going live in July 2016, it’s clear that companies are falling into two camps: those that are doing their best to ignore it, and those that are really grappling with some of the finer points of surveilling manipulative behavior.  Things like:

  • When does a Spoof become a Spoof, instead of just plain old bidding behavior?
  • How would we detect manipulation in the spot market via the futures market and vice versa?
  • Is there a scenario where MAR might reach into REMIT’s territory?
  • What constitutes a “reasonable suspicion” such that we have to submit a STOR under MAR?

These are all really important questions.  But even as a software vendor we are really encouraging companies to focus efforts on starting their POLICY & PROCEDURE (P&P).  Just for context, I am a guy who sells data connections & surveillance software and I’m suggesting that you don’t buy a single thing… until P&P is in place.

Why?  Well, when it comes to surveillance it pays for companies to take the long view.  The potential surveillance universe is large and the data required is complicated. There is no possible way to take this down in a big bang.

Food for Thought

Your MAR P&P boils down to three things:

  1. How will we surveil for known manipulative practices?
  2. How do we close gaps in our surveillance to cover known about but un-monitored activities?
  3. How do we stay current with manipulative activities that we hadn’t thought about?

For example, most compliance officers know about wash trading as a manipulative behavior.  We can capture data and generate reports that detect this behavior.  But rogue traders are a clever bunch and will certainly come up with manipulative ideas that no one ever thought of.  Likewise, when they do come up with something, it’s going to take some time to close the new surveillance gap.

A surveillance program at any given point in time will have 3 universes.

  • Behaviors that we don’t know about yet (bottom sphere);
  • Behaviors that we know about but don’t monitor yet (big sphere);
  • Behaviors that we know about and monitor (widening sphere)


Now here is the rub.  It may take any given company a long while to move into the known-and-monitored state.

Why?  Well, the biggest risk is that the data is just not available, and may not be for a long time.  In other cases, we can get the data but implementation of surveillance over that behavior will take “x” months.   

So the objective of your MAR P&P is to create a control framework that tracks, plans and expands the scope of known and monitored activity.  This directly leads to an actionable scope for buying software, data connections and the like. This will relieve a lot of pain in the process and prevent your surveillance program from spiraling out of control.

If you would like to kickstart this process, we have a template surveillance policy that covers this and more.  Contact Tom Eisner directly at +1-646-461-3820 or tje@broadpeakpartners.com to get a copy.  And as always, if you have any questions or comments, please don’t hesitate to contact us.

Commodities Surveillance | You’ve Been “MAR’d”

You know what?  Change is like poetry…

….and most people absolutely hate poetry*.

 

This is going to be one of those “Difficult Conversations”.  You know, the kind they teach you about in Business School. We’ve got to talk about some difficult things and find common ground toward a way forward.  So here’s the situation:

MAR (Market Abuse Regulation) is coming in July, and there are some things about it that are going to be really, really difficult. It’s a sizeable reg, but I’m going to jump right to the section which is getting the most attention: MAR requires that EU commodity firms “establish and maintain effective arrangements, systems and procedures to be able to detect suspicious orders and transactions.”

It sounds innocuous enough, but before I get to all the ins and outs of actually executing this mandate, I need to put forth the centerpiece of this difficult conversation:

Some traders will be forced to change how they execute trades.  

I’m also going to say that the search for the ultimate surveillance tool is foolhardy until you’ve got your data strategy thoroughly thought through.

The Devil of Surveillance is in the Details

Let’s talk about what we are up against.

  • Most European companies have begun to look for some type of software solution based on the complexity of their business.  These range from simple to complicated pieces of software and process.  Some solutions have very cool artificial intelligence and other big data features that are a huge leap forward, technologically speaking, for both firm and industry alike.
  • But there are known challenges with surveillance systems. Ask anyone in equities surveillance, which has been around for years, and their complaint is that surveillance solutions take a long time to configure and calibrate. What worries market officers is not actual market manipulation; it’s false positives.  Every time your compliance system beeps and tells you it sees suspicious activity, compliance officers are obligated to investigate and potentially report.  It is the proverbial boy who cried wolf.
  • Can firms realistically get something up and running by July?
  • But there is one issue that crowns all: Where are commodity companies going to get the data to run any surveillance solution?  In other words, where exactly, do we get all of our order and trade execution data?

I’m not going to kid you.  The answer to where we get all this data is nuanced and nasty enough to drive any compliance officer through the roof.  So here we go:

Take this project we just completed with a pretty typical trading firm: about 1000 physical and financial trades per day across three exchanges and a smattering of OTC trades.

How Do We Get Our Order & Execution Data?

Getting orders is like running a gauntlet with traps on all sides. The primary reason is that the major exchanges and FCMs are pretty new at it and there are a lot of bumps in the road.  Let me give you an idea of what it looks like:

Let’s start with ICE.  The ICE Private Order Feed will get you all of our traders’ ICE order data.  It’s pretty straightforward.  But only so long as the traders are using WebICE.  Yeesh, looks like our traders are using a bunch of other tools.

Got some traders using TT?

Yep, we’re going to need a connection into TT to get those orders and fold them together with the ICE POF.  Totally doable.  But it’s not quite as simple as that.  You know the TT instance the traders are using?  It’s hosted by the FCM.  We’re going to have to wrestle with them for a while to allow you access to your own order book.  This is not as easy as it sounds.

Done yet?  

Nope, not even close.  The orders are flowing fine from ICE, but from CME we are only seeing the ones that come through TT.  Must be our CME Direct trades.  Let’s get them through the drop file.  Anyone know a really good VPN guy?  Because it’s taking forever and ever to set up what was supposed to be a simple connection to basically get a file.  And what do you mean I have to pay for a 500KB connection?

Whoops, our trading operations never set up segregated session IDs. Going to have to shell out for those and wrestle a bit more with the FCM.  As always, this is not as straightforward as it seems.

Cross your fingers; we think we have a full footprint of CME and ICE orders.

How about OTC trades?

OK, yes, over to OTC trades.  It’s all physical now and we can aggregate all of the ACER XML order data from our OMPs.  Wait! What do you mean the XML is different between these brokers??  OK, we will do some mapping as the data comes in and we should be good to go.

Wait, one more thing! Let’s bring in the full order book so we can get tick values and compare our orders to the market.  Phew, at least that’s a well established integration.  But it costs how much?  WOW.  Wait just one second.  Tick data is coming in…and there’s an ABSOLUTE TON of data.  We estimate we need…whoa, that’s a lot of storage!  How soon can we start procuring servers? Geeze, our server team is going to gouge our eyes out on this one.

With that done we’ve made it pretty far… except….

Except that:

  • We still have 3 traders using some trade execution tool that just has no way to capture orders at all.
  • We still have 5 traders using a third exchange that offers absolutely no orders at all.
  • It’s March.  We go live in July.  We haven’t even started configuring our surveillance solution yet.
  • The above orders & execution process only scratches the surface…for an article I had to leave so much out.

As an integration software company, this is our world.  We secretly nerd out on it.  But I don’t think regulators really understand that, as far as market surveillance data goes, this is the kind of cobble, beg, borrow and plead process we have to go through just to get orders.

And what’s a compliance officer to do with the exceptions?  Are we really ready to tell traders they can’t use their favorite execution platform because it is entirely “un-surveillable”?  Can we shut off trading on exchanges that don’t offer order data at all?

The crux of this difficult conversation is exactly that: off-reservation traders are going to have to change how they execute, and it’s just not going to be popular.  But until these execution tools and exchanges improve their order flow, we’re going to have to curtail.

There is also a lot of sales pressure to “get into a system fast.”  I’m sorry, but without a really well thought out data strategy there is a risk of buying a system that has nothing but partial and half-broken data to analyze.  That is going to result in a crushing blow of possibly reportable nonsense.

If you are trying to get your data for MAR, we’d be happy to talk. It’s all doable and we’ve learned many tricks along the way. For a July go-live date, including getting a surveillance system up and running, we’re going to need to get moving.

OK, I’m glad we had this talk. 

*Adapted from a Michael Lewis quote in “The Big Short.”
