Conference for ESMA Updates on MiFID/MiFIR/EMIR/SFTR Reporting

BroadPeak will be exhibiting at the March 28th Infoline conference in London.  Come to learn from and speak with regulators, repositories, end-users, and service providers.

Get a 30% discount by using promo code FKM63482BPK, or click directly below.

Sign up »

Specific topics we will be discussing include:

  • Difference between trade reporting and transaction reporting
  • Ensuring non-duplication of trade reporting
  • Validating submissions and keeping a permanent record of acknowledgements
  • MiFID II Position Limits – Where is the reference data?
  • How to handle changing requirements

BroadPeak Partners with UnaVista for MiFID II Trade Reporting

BroadPeak Partners announces a partnership agreement with the London Stock Exchange’s UnaVista for MiFID II. A recognized leader in software integration, BroadPeak Partners is the developer of K3, an integration platform that seamlessly moves data from one application to another, named by TechCrunch in 2012 as one of the most disruptive applications of the year. Since then, K3 has spread across a happy roster of Fortune 500 clients ranging from bulge bracket banks to midsize industrials striving to harness their most valuable asset: data.

MiFID II significantly extends transaction reporting to a wide array of financial instruments and derivatives, with the aim of improving the integrity of European capital markets. When it takes effect in January 2018, MiFID II will re-engineer market infrastructure with far-reaching effects on everyone engaged in the dealing and processing of financial instruments, forcing brokers, dealers, trading venues, hedge funds, asset managers, and global corporations to re-assess trade reporting. MiFID II will also have a global impact, forcing Asian and American firms to comply when doing business with European customers. Buy side or sell side, large or small, confidence in meeting MiFID II requirements hinges on nimble data reporting capabilities.

According to Gordon Allott, President & CEO of BroadPeak Partners, “We are very excited to partner with UnaVista, as both of us are highly experienced in reporting and prepared to meet the challenges of MiFID II.” Allott continued, “With over 250 million trades reported under Dodd-Frank, EMIR, Canada and other jurisdictions, K3 has become a force in powerful and easy-to-use integration technology. K3 is designed for use by non-technical users. Everyone knows the IT team is too busy and Operations is too weary for yet another call to ‘update the interface’ or trace a data flow problem. K3 provides an intuitive user interface where non-technical people can truly manage data transformation from source to sink.”

Wendy Collins, Global Head of Partners UnaVista commented, “We are delighted to be partnering with BroadPeak to allow their clients’ connectivity to UnaVista for MIFID II reporting. The partnership will enable clients to utilize leading technologies from both organizations to fulfill regulatory reporting in a timely and efficient way.”

About BroadPeak
Founded in 2007, BroadPeak Partners is a NYC based software company and the developer of K3. Named as one of the most disruptive technologies by TechCrunch, K3 has brought the power of agile data integrations to Fortune 1000 companies around the globe.
While K3 could be categorized under archaic terms like middleware, ESB, ETL and SOA, at the end of the day it’s brilliant data plumbing. K3 is a quick-to-deploy and easy-to-manage application that makes enterprise application integrations more efficient and substantially reduces maintenance workloads. K3’s unrivaled technology is ready to meet MiFID II challenges and builds on existing regulatory reporting (Dodd-Frank, EMIR, Canada, etc.) for a suite of global trading customers.

For more information email us.

Also check us out on Twitter or LinkedIn.

Enterprise Technology | Stop | Clear Your Mind

Name 3 technologies (software, open source, other)  that made a significant positive impact for the enterprise in 2016.

Corporate IT departments are a tough place to be these days. On the one hand, IT departments are constantly hammered by the business to execute, execute, execute.  On the other hand, they are handcuffed by some “Cargo Cult” barriers that sound a lot like: “I know it’s senseless, but this is how it’s done here.”

Having said all that, we are in a golden era of technology.  Some technologies are so striking, you’d have to be a damn fool not to adopt them.  Which brings us back to the original question:

Name 3 technologies (software, open source, other)  that made a significant and positive impact for the enterprise in 2016.

We debated this over lunch last week.  Some of these might evoke a kind of “Pfffffft, Duh!” response. If that is the case, this article might not be for you.  Count yourself lucky.  In a typical corporate IT department, even the most positively impactful technology can easily get left on the roadside.

Some notes before you get angry:  1. We named the market leader.  If you are using a peer, we are not saying it’s bad, just that it’s not the market leader. 2. This is the result of a lunch debate.  There is no element of “Pay Some Money and Get An Award” here, just the result of a good discussion with very knowledgeable enterprise people.

For the Business

Tableau.  Tableau changed a lot.  Used to be that beautiful business intelligence took ages to produce. Now?  Give a smart junior employee Tableau and watch the insight flow.  Ok, so you are using Spotfire or Qlik.  No objections here.  The trick is that you are getting insightful reports ASAP.  Caution!  The second that Tableau hits town, IT departments start getting asked for “Just the raw data, please.”  Some people get threatened by this, so if you are an executive, it’s important to clear the road for your analysts.

Slack.  If you are in a Windows/Outlook-heavy world, as most corporate departments are, Slack is a Godsend.  Seriously, email is the worst means of communication… ever.  Slack plows the communication road by providing a seamless environment for chat, file sharing and expedited process.  We’ve seen Slack make a real impact, and as a company it’s probably far enough ahead that it will be hard for competitors to catch up.

Salesforce.  We hotly debated putting this here, but SF is still the vanguard for true SaaS.  Like some of the others it’s not “fresh out of the box new,” but it continued to make a big impact for the enterprise in 2016.  Note: We hear so many people touting enterprise SaaS.  But until you have clean, well-documented APIs like Salesforce (and can run multi-tenant), YOU ARE NOT A SaaS APP!

For IT Departments

Amazon Web Services (VPC):  AWS Virtual Private Cloud is nearly single-handedly responsible for blurring the line between corporate “on premise” servers and “the cloud.”  The oft-heard mantra from corporate IT departments is: “We keep everything in house.”  This has always meant long wait times to get servers spun up, and project delays.  VPC is “software-defined networking” that lets you take an Amazon server and place it entirely behind your firewall.  Unless you are on your corporate network, it’s impossible to access.  The beauty of this is that corporate IT departments can literally spin up dozens or hundreds of servers in minutes.  VPC really hit its stride in 2016.

Docker:  If your IT department is not using Docker, I’m going to flat out say it: They are a bunch of damn fools. This is the world of software containers.  To make a long story short, a lot of IT time is spent installing, uninstalling, and moving applications.  Every time you do this there are heaps and heaps of configuration, libraries, files, etc. to move.  Docker eliminates all this nonsense by putting your apps (whatever they are) into a tight container that can be managed and moved.  It’s a plain old time saver and makes IT far more nimble for the business.

Splunk:  It’s a bit nerdy but if you are sitting in an IT department, most of your troubleshooting has to do with dreaded log files.  Every server, every application cranks out gigabytes and gigabytes of log files.   Splunk is a cool tool that helps IT departments stay on top of all these files and render important metadata about what’s happening across the enterprise.  It really gets IT departments out of the trees to see across the forest.

Who Didn’t Make the List?

Hadoop (Mongo, Cassandra, Other):  Almost on the list.  Some very significant debate here. Hadoop/NoSQL made big inroads in 2016.  But we had difficulty pointing to a “significantly positive impact.”

Distributed Processing (Spark, Kafka etc).  Yes, we love, love, love it too.  Sooo close.  2017 list almost for sure.

Blockchain:  Ooooh blockchain!  So disruptive.  So theoretically possible to make a positive impact.  But we are looking at this for our 2018/2019 list.

JIRA:  If you are in dev you’d probably say JIRA.  We’d agree.  Impact: YES.  Significant positive impact for the enterprise?  Nominal.

AWS Lambda:  If you said AWS Lambda….oh we really like where your head’s at.   This could be a number one in 2018.

Graph DBs (Datomic, Neo4j):  Wow, you know about this?  Go you!  We love it too.  But it’s a 2019 impact.

Wanna argue about this some more? Give us a call or send me an email …


CME Group & BroadPeak Offer Cloud Access to CME STP

K3 has been making it easy for firms to get a full electronic record of trading activity for years now.  Now working with CME Group, K3 is taking another giant step…to your doorstep.

You get a taxi or dinner delivered at the press of a button. Now experience the same for trade data.

No more IT projects.  No more FCM files.

Simply trade and receive the data you need to manage risk…instantly.

We bring you the CME STP API in the cloud, powered by K3.

No infrastructure required.  Sign up and consume data.



Forget about fetching trades via FIX.  K3 in the cloud will serve your trade data in a format you can use to manage risk internally.

  • XML, CSV, JSON, etc.
  • Business friendly values
  • Custom fields based on mappings and rules
  • Support desk for questions

Email us to get started.


Death, Taxes, and MiFID II

We haven’t even got everyone back in the BroadPeak offices after the holidays but we are already slammed with MiFID II requests and enquiries. To try and make everyone happy we are doing a variety of different events for which you are welcome to sign up:

  1. London dedicated week of K3 demos – Jan 9 – 13. We have a few slots left for in-depth presentations
  2. Drinks reception London – Jan 11. A more informal way for you to meet with us and others in the industry to strategize on MiFID II
  3. Web demos – As always we are happy to do a web demo at any time.
  4. Exhibiting and Sponsoring at the ETRC Conference London

Check out our blog for more regulatory resources.

Email for more information or to arrange a time to meet.


CFTC- Hail Mary?

It has been a busy seven days in the world of regulatory position limits. First a new MiFID II position limits RTS was released, as reported here, and then the CFTC re-proposed its own ‘Dodd-Frank’ limits regime. Most international firms will have to comply with both.  So why has the CFTC limits announcement been getting so much more attention than the EU announcement? CFTC limits are barely a speed bump compared to the MiFID II mountain.

Like others, when I first heard about the CFTC trying to get Dodd-Frank limits out the door, I thought to myself, “Holy Cow! Regulatory Hail Mary!”

This knee-jerk reaction is premised on the new Dodd-Frank limits being killed upon the inauguration of the new President.  But then I thought again: there are only 3 commissioners, and it will be impossible to get the open slots filled in time to kill it outright.  Then I re-convinced myself that it truly is a Hail Mary, nothing but a symbolic gesture being left to an unknown commission.

Then I took a step back and realized that none of that mattered.  Here’s why:

Limits Is Already the Priority One Surveillance Activity

Busting any limit, irrespective of whether it’s an exchange or other regulatory limit, is seriously bad news. The consequences today carry much higher stakes than they ever have.  If you take a big step back and look… really look at what the sum total of all regulations and enforcement actions in the US and Europe means, the takeaway is very, very clear:

Don’t do anything that brings attention to your firm.

The reasons are simple:

Whatever happened, no matter how accidental or well intentioned, will absolutely, positively and unquestionably be viewed in the worst possible light. Any open door invites tough scrutiny.  I’ll give you an example.  In a recent manipulation case a trader called a trade a “sucker’s bet.”  The judge in the case said something to the effect of, “So you think your clients are suckers.”  There is a big difference between calling something a sucker’s bet and calling someone a sucker.  But this kind of interpreting everything in the worst possible light is de rigueur in compliance actions.

Another thing you can be sure of is that if you bring attention to yourself in one region, every other regulatory body will see you as easy meat for a settlement (or worse).

Above all else,  the memory of an investigation lingers. Everyone remembers BP and Shell being raided for alleged price fixing. It was the lead story on the evening news and front page of every newspaper. Even David Cameron was discussing it. But how many people remember the outcome? (Turns out…no charges or fines).  

Personally, I’ve been kicking around the trading space since 1996.  In that time I’ve seen countless trading firms make half-hearted attempts at cobbling together Total Real Time Positions. Whether your justification is Dodd-Frank and MiFID or just overall better risk and reputation management, now is the absolute time to do more than cobble and to go after it as a priority. See How Limits Work

MiFID II- Data Validation

Validation. There is a reason everyone is talking about it.  If you do it wrong, your compliance personnel start pulling their hair out.  As far as ESMA, Dodd-Frank and MiFID II trade reporting go, we are clearly moving from a world of low validation to one that is highly validated.

Com-Sci 101:  Four Categories of Validation

Data Type Validation: “Did you send a properly formatted date? Because you can’t just send me any date… I require a properly formatted date.”

Range Validation: “Did you send me a value that is reasonable?  Because I hate it when you send me a market price of $50 billion.”


List Validation: “Did you send me an exact value that matches my list of acceptable values?  I swear, if you miss a single character I will bounce your message.”

Structured Validation: “Did you send me your LEI? Because I’m going to check that it’s both in the right format and that it matches your company name.”
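To make the four categories concrete, here is a minimal Python sketch. The date format, price ceiling, venue list and LEI check are all illustrative, not taken from any actual repository spec (a real LEI check would also verify the ISO 7064 check digits against the body and match the registered entity name):

```python
import re
from datetime import datetime

def validate_type(value: str) -> bool:
    # Data type validation: the date must parse in the expected format.
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

def validate_range(price: float, low: float = 0.0, high: float = 1_000_000.0) -> bool:
    # Range validation: reject implausible values like a $50bn market price.
    return low < price < high

def validate_list(venue: str, allowed=("XLON", "XNYM", "XOFF")) -> bool:
    # List validation: the value must match an allowed value exactly;
    # miss a single character and the message bounces.
    return venue in allowed

def validate_structure(lei: str) -> bool:
    # Structured validation: an LEI is 18 alphanumerics plus 2 check digits.
    # (Format check only; a real validator also verifies the checksum.)
    return re.fullmatch(r"[A-Z0-9]{18}[0-9]{2}", lei) is not None
```

Each check is cheap to run locally, which is the whole argument for pre-validating before a submission ever leaves your shop.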

So What’s the Big Deal?  

Latency.  That’s the big deal.  In some cases it takes repositories literally HOURS to validate the messages sent to them.  The more validation regulators throw onto their plate, the longer it will take.  The result?  Your compliance teams have to wait and wait and wait to see if their submission was accepted.  Did any of your trades get rejected? Yes? OK… start again… correct it… and wait and wait and wait.

So the takeaway, as we move into MiFID II, is that you have two choices: 1) use a trade repository that is architected for speed (read: great API with speedy validation); or 2) pre-validate everything to absolutely minimize the possibility of having an error that you will have to wait and wait on.

K3, as an integration platform to trade repositories, performs all forms of validation prior to sending to the TR.  Likewise, once a message is sent, we show the acknowledgement (or rejection) immediately upon receipt.

Considering that we have connected to just about everyone, I’d be remiss if I didn’t mention that the response time from destinations (ARMs, TRs, etc.) is what we call “highly variable.”  There is a huge technical disparity between trade destinations.  It does make a difference. For example, there is one repository so fast and comprehensive in its validation that we really don’t bother pre-validating: pre-validating takes just as long as validating against the actual repository (read another way: spam the API to your heart’s content).  Even across thousands of trades, I doubt we’ve waited longer than 2 seconds for a response.  On the other end of the spectrum, we have waited as much as 12 hours for a response.  If you’ve got to wait this long, pre-validation is mandatory.  So that is what all the validation hubbub is really about.

The Demise of Delegated Reporting- MiFID II

Trade Reporting.  Ask anyone on the sell side or in commodities or derivatives about MiFID II and you will probably get a pretty solid “been there, done that.”  After Dodd-Frank, EMIR, REMIT, and Canada, MiFID II does not look so bad.  Sure, a few wrinkles like T+0 transparency reporting, but other than that we are probably good to go.  Thing is, we’ve been quite busy meeting with a large number of firms that have not been in the transaction reporting game before: the Buy Side.

Delegated Reporting

For better or worse, the Buy Side has been able to avoid most of the reporting headaches under MiFID I, EMIR and the like through delegated reporting. It’s been a really good run actually.  The Buy Side has been able to avoid nearly 4 years of tough reporting work.  The bad news is that sell side firms are starting to announce that they are withdrawing delegated reporting under MiFID II.  We expect more to come.

Why is Delegated Reporting Going Away?

Nothing official, but there are some fairly obvious reasons.  The first is that delegated reporting is seen as a non-monetary benefit. A core principle of MiFID II is that best execution has to be independent of “ancillary services.” Research is squarely in the crosshairs… and very likely delegated reporting is seen in the same light.  Second, for some entities it’s going to be difficult to pull off delegated reporting for some of the MiFID II reporting fields, especially when they include proprietary client data and the like.  While not impossible, it makes things tricky for the sell side reporter figuring out how to collect and maintain what is very, very sensitive information.

Tips for the Buy Side

We have been delivering K3 for trade reporting for the past 5 years.  We’ve seen just about every regime and make it a point to talk straight with our customers.  So, here we go.

Right Now-Be Skeptical…But Don’t Let That Stop You.

Be skeptical about where your firm is going to report.  In trade reporting there is a big difference between signaling intent and actually showing up on game day.  Right now the ARM/APA side of things looks like the back pages of a British tabloid on Premier League transfer deadline day.  Gossip, good intentions, dis-information, partnerships, alliances, intended integration points and plenty of back office chatter… none of which has solidified yet.  But this will settle down. Closely inspect what you expect.

But don’t let that stop you.  We are expecting a literal crush of buy side firms who were late to the game figuring out they have reporting obligations under MiFID II.  We saw lots of late sign-ups under EMIR and… well… let’s just say there are plenty of battlefield stories.

Be Ready to Pivot, Integrate and Adapt

Right now we are looking at 65 core reportable fields with an additional 18 or so for transparency reporting.  Experience tells us that this count is a meaningless metric.  The set of fields will expand quickly, both from regulators and from the ARMs/APAs.  And despite having the same data set, each ARM/APA will do things very, very differently.  Likewise, familiarize yourself with the concept of “data validation.”  There is a whole blog post coming on this topic shortly.

There is also the careful ballet of connecting all your trading systems to meet the reporting deadlines.  I can practically guarantee that someone somewhere in your firm is going to say the words, “We need a data warehouse.”  I’m not going to disagree, but at the same time there are huge considerations that go into this, especially with the MiFID II timetable.  We’ve done some great things with K3 on this front, both on the internal data warehouse side and the reporting side. Always happy to talk.

Finally, the last piece of advice is this.  Know thy trades.  Every trading firm that has had to undergo a trade reporting project has necessarily been forced to inventory and “clean up” where the trades are being kept and how they are being kept.  Starting a project to clean up, tag, add metadata and systematize trading is a key feature of reporting success.  This is doubly true where the landscape of delegated reporting is changing.

The 4 Biggest Trading Limits Mistakes

It’s been wisely said before that “Little details have special talents at causing big problems.”  Just in time for Dodd-Frank and MiFID II limits, we have rolled out a new version of K3 Limits.  [LOOK BELOW FOR FREE ACCESS TO ATLAS (Spot Start, Complete Limits Info, Open Interest, REST API)]

As companies consider how they are going to handle the new position limit regimes coming at EOY, we thought it a good time to share the common details that many companies miss.

1. Spot Start Miscalculation

Believe it or not, exchanges don’t actually publish the date when a product enters the spot period.  What they publish instead is some text describing when the spot period starts.  Something like: spot starts x days prior to the maturity date, except on Saturdays, Sundays, holidays, etc.  Here’s the problem: this forces companies to manually calculate the spot date.

Never fond of manual processes, we solved this with our cloud app ATLAS.  ATLAS regularly downloads maturity dates and then, through a little coding magic, calculates spot start dates.  It’s all easily accessible through a web interface or as a simple REST API call.
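For illustration, here is a rough Python sketch of that calculation, assuming a rule of the form “spot starts N business days prior to maturity, skipping weekends.” Real exchange rules also skip holidays, which a production version would need to subtract as well:

```python
from datetime import date, timedelta

def spot_start(maturity: date, business_days_prior: int = 3) -> date:
    # Walk backwards from the maturity date, counting only weekdays, to
    # turn the exchange's textual rule into a concrete calendar date.
    # (A real version would also skip an exchange holiday calendar.)
    d = maturity
    remaining = business_days_prior
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 … Friday=4
            remaining -= 1
    return d
```

For a Wednesday maturity, three business days back lands on the prior Friday, which is exactly the kind of weekend hop that manual calculations tend to get wrong.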

2. Missing the Diminishing Positions

I remember a risk officer asking what our position was at end of month on some New York Harbor contracts.  The answer was: zero.  Confused, he said, “Wait, weren’t we long X lots at the beginning of the month?”  A: Yes.  “Did we close the position?”  A: No.  Enter the diminishing product.  If the exchange indicates that the product is diminishing, this means your limits position in the spot month will ratably decrease over the course of that month.  At the beginning of the month you have a full position, and by end of month you are at zero.

The diminishing indicator is always available in ATLAS and of course K3 Limits always diminishes products in spot when indicated.
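As a rough sketch of the idea (assuming a simple linear schedule; the exchange publishes the actual one), the spot-month limits position of a diminishing product can be computed like this:

```python
def diminished_position(opening_lots: float, days_elapsed: int,
                        days_in_month: int) -> float:
    # A diminishing product's spot-month limits position shrinks ratably:
    # full position on day 0, zero by the end of the month.
    # (Assumes a linear schedule; use the exchange's published one.)
    remaining_fraction = (days_in_month - days_elapsed) / days_in_month
    return opening_lots * remaining_fraction
```

So a 100-lot position at the start of a 30-day spot month counts as only 50 lots halfway through, which is how the risk officer’s “weren’t we long?” position was already zero at month end.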

3. Parent/Child Roll Error

A trader once remarked that nothing she traded had position limits.  True they didn’t have explicit limits, but they rolled into a parent product which did have limits.  This is the parent-child relationship.  That roll up or “aggregation” may even be at a particular ratio. So for example, 1 lot of a child position may only be equal to half of a parent lot.

It gets even more complicated when maturity dates don’t match.  This one is easy to miss.  Let’s say your traders are trading CS Crude Contracts.  These are child positions to the NYMEX future 26; they roll up on a 1:1 basis.  But even though one CS contract is equal to one 26 contract, these products roll/mature on different dates.  If you go long Sept CS, part of your position rolls into Sept 26 and part into Oct 26 from a limits perspective.

K3’s limits calculation does this automatically.  ATLAS also indicates whether the product is a parent/child and the aggregation ratio.
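A toy Python illustration of the roll-up, where the ratio and the fraction rolling into the next parent maturity are hypothetical inputs, not real exchange parameters:

```python
def parent_positions(child_lots: float, ratio: float = 1.0,
                     next_month_fraction: float = 0.0) -> dict:
    # Roll a child position up to its parent product for limits purposes.
    # ratio: parent lots per child lot (e.g. 0.5 if 1 child lot counts as
    # half a parent lot).
    # next_month_fraction: share of the rolled-up position counting toward
    # the *next* parent maturity when maturity dates don't line up
    # (the Sept CS -> Sept/Oct 26 situation described above).
    parent_lots = child_lots * ratio
    to_next = parent_lots * next_month_fraction
    return {"this_month": parent_lots - to_next, "next_month": to_next}
```

For example, 10 lots of a 1:1 child with half the position rolling forward would count as 5 lots against the current parent maturity and 5 against the next.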

4. No Exchange Delta

Trading options?  The rule is that you need to use an exchange-published delta to determine position.  These can be a bit tricky to get.  But ATLAS downloads them into a nice usable format behind a gorgeous API that can be called from any application.
As we gear up for Dodd-Frank limits and MiFID II limits we will, of course, have all the appropriate data in ATLAS, and we are ready to go.  As always, if you’d like a free preview of ATLAS to spot check your spot starts or just see how it works, we’d be happy to accommodate.  Drop Tom Eisner a line at +1 (646) 461-3820 or email us.
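The futures-equivalent arithmetic itself is simple once you have the exchange deltas; a sketch with made-up numbers (pass short positions as negative lot counts):

```python
def limits_position(futures_lots: float, option_positions) -> float:
    # Options count toward position limits on a futures-equivalent basis,
    # weighted by the exchange-published delta (not your own model's delta).
    # option_positions: iterable of (lots, exchange_delta) pairs.
    return futures_lots + sum(lots * delta for lots, delta in option_positions)
```

The hard part is not the sum; it is sourcing the official deltas every day, which is what the download step above is for.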



Building Server Empathy – BroadPeak’s Guide to WebSockets

While building an application, I recently encountered a situation where I needed to use WebSockets. Among other things, the application fits a row from the database (43 columns) into a row in the Facebook FixedDataTable (FDT), which is a Javascript library for displaying tabular data.

A problem occurred when the client requested too many records and the server could not hold the records in memory (RAM). Depending on the server heap size, at about 75,000 records the server ran out of memory and became unresponsive. Given our large data sets and the possibility of hundreds of concurrent connections, the memory issue had to be rectified. I had to implement data streaming to allow the server to run in constant memory. I decided to use WebSockets because I wanted to implement a cooperative streaming protocol that uses bidirectional communication.  This was the start of my journey towards Server Empathy.

The original implementation failed because of a memory overload, but for posterity it is outlined in these three steps:

1) Client sends HTTP request to server asking for all trades fitting a certain criteria from the database
2) Server opens a database connection and reads all relevant rows into memory
3) Server does some work and passes data as response back to client

In order to implement a solution I modified the way the database returns records, and created a communication protocol over WebSockets for client/server communication.

First let’s discuss the database connection. In Java JDBC, when a query is executed, an object of class ResultSet is returned. The ResultSet is a cursor into the query results prepared by the database server.  The original implementation was fetching the entirety of the result rows, bringing them into memory and sending them off to the client.

In the new implementation the server uses the ResultSet to manage the flow of data from the database to the server.  In a series of iterations, the server fetches only a small batch of records from the ResultSet.  Next the server enriches the data and drops it on the WebSocket for the client before proceeding to the next batch. The previous batch automatically gets garbage collected, allowing the server to run in constant memory.  Next, I’ll outline how I built a client-server communication protocol to minimize the memory burden on the server.
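Our implementation is in Clojure on the JVM, but the batching idea is language-agnostic; a Python-flavored sketch, using DB-API `fetchmany` semantics and a placeholder `enrich` step, looks like this:

```python
def enrich(row):
    # Stand-in for the server-side enrichment step (lookups, formatting).
    return row

def stream_batches(cursor, batch_size=500):
    # Fetch a small batch from the database cursor, enrich it, and yield
    # it to be written to the WebSocket. Once the client has consumed a
    # batch it becomes garbage, so the server runs in constant memory
    # regardless of how many rows the query matches.
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            break
        yield [enrich(row) for row in batch]
```

The generator never holds more than one batch, which is the whole point: memory use is bounded by `batch_size`, not by the result set.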

When the client gets some data, it renders the data into the FDT and then sends a message to the server over the WebSocket. The message tells the server whether it should stop or continue sending records. The client has the option to request the termination of the connection at any time for any reason. The server will terminate the connection either when it has received a stop signal from the client, or when the query has completed.
The interaction is detailed in the diagram below (bi-directional arrows represent a web socket):

WebSockets communication model

In this model the server only needs to keep (at maximum) a predetermined number n records in memory at a given time. This alleviates the server load and allows the client to terminate a connection if the application becomes slow or the user navigates away from the page. The chatty relationship between client and server ensures that both sides are always responsive, and any problems will be rectified immediately.
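The stop/continue handshake can be sketched like this (the message names and callback shape are illustrative; the actual implementation runs over core.async channels and Chord, per the stack notes below):

```python
def serve(batches, recv, send):
    # Cooperative streaming: after sending each batch, wait for the
    # client's verdict before fetching more, so the server never buffers
    # ahead of what the client can render.
    for batch in batches:
        send(batch)
        if recv() == "STOP":  # the client may abort at any time
            break
    send("DONE")  # end-of-stream marker: query finished or stop honored
```

Here `recv` blocks on the next client message and `send` writes a frame to the socket; in the real application both halves of the loop run over the WebSocket.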

WebSockets may be more difficult to implement than a standard HTTP request, but they provide a lot of flexibility in the client/server interaction. They are particularly effective for communicating large amounts of data, because WebSockets allow for batch streaming while lightening memory loads on both client and server.  So if you “care” about server memory, speed, or network bandwidth, there is a strong possibility WebSockets may be useful!

Technical Information
If you are interested, here is some information on the technology stack used for this application:

Core.async – The asynchronous library we use on both client and server. Processes messages entering and leaving the web socket.

Re-Frame – A small templating library built on top of ReactJS. It helps to manage the state complexity, while providing a way to program functionally.

Clojure/ClojureScript – Respectively, server and client programming languages we use.

Chord – WebSocket library, built on top of http-kit.

Fixed Data Table – Javascript library used to render the trade data in a tabular form.

(877) 738-0470