Trade Reporting Under REMIT

Here’s a quick video explaining how REMIT works and how to generate ACER XML for trade reporting. If you are having trouble with the video below, click this link.

Enjoy!

Excellent Commentary by Aviv Handler

New Q&A from ACER. Commentary by Aviv Handler:

http://energytradingregulation.com/2015/07/01/examining-the-new-remit-quarterly-and-qa-documents/

REMIT Regulation | Game of Thrones

Alliances and stratagems. It turns out the political twists and turns of REMIT, the EU gas and power mandate, are “Game of Thrones complicated.” There’s a very full field of new RRMs (HERE). Most are absolutely new to regulatory reporting. But that’s not what’s keeping people up at night.

OMP Fragmentation & Uncertainty = Compliance :(

Exchanges

Market Participants’ data is going to be all over the place. We count 34 physical gas and power exchanges subject to REMIT regulation. Most have either created their own RRM or paired off with a “preferred RRM” that they will pump data to automatically. CME will go to their own RRM, ICE will go to their own RRM, Belpex will go to EEX, etc., etc. It’s all well and good, but if you need to make sure all your orders and trades are being reported correctly, it’s a nightmare.

Brokers

A lot of physical gas and power is brokered, and it’s the brokers who have to communicate orders and executions. The big question is: where to? A lot of the big brokers, like ICAP, have paired off with a “preferred RRM.” A second group have fallen back on, “we are going to pump out ACER XML and send it where you want.” A third group, who shall remain nameless, ask: “What is REMIT?”

But Here’s The Real Uncertainty

Not everyone is on board with communicating all this data via ACER XML. In particular, a lot of exchanges, and possibly some brokers, are not likely to give that data up as ACER XML in the near term. They might send you a flat file, but it won’t be ACER XML.

Second, there are so many RRMs that it’s a setup for significant instability. We have already heard rumors of serious price cutting to gain customers. This is great for Market Participants, right up to the point that their RRM decides to call it a day. Truth be told, running a repository is a very expensive proposition. Demands for high security, painfully long legal approvals and hurdles, not to mention the technology, make for very thin margins.

So It Comes to This

Every market participant, every exchange, and every broker has to develop a core competency around ACER XML: taking flat data and turning it into ACER XML, and vice versa. The overall REMIT regulation is complicated enough, but this is one thing regulators may have gotten right: a single standard to communicate trades.
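The core of that competency is simple to sketch. Here’s a minimal Python example that turns flat CSV rows into XML; the element and column names are simplified placeholders, since the real ACER schema (REMITTable1) is far more involved:

```python
import csv
import xml.etree.ElementTree as ET

def csv_to_xml(csv_path, xml_path):
    """Turn flat CSV trade rows into a simplified, ACER-style XML document."""
    root = ET.Element("TradeReport")  # placeholder root, not the real schema
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            trade = ET.SubElement(root, "Trade")
            for column, value in row.items():
                ET.SubElement(trade, column).text = value  # one element per field
    ET.ElementTree(root).write(xml_path, xml_declaration=True, encoding="UTF-8")
```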

Here’s Where We Can Help

K3 is up to the challenge. We’ve set up a drop service for REMIT: drop your CSV into a folder and it is automatically converted into ACER XML… instantly. More? We’re an integration company! So we’ve integrated with Dropbox to make it even easier. Drop a file from anywhere and *poof*, ACER XML wherever you want it.
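Under the hood, a drop service like this can be as simple as a polling loop over a folder. A minimal sketch, with hypothetical folder names, reusing the `csv_to_xml` function sketched above:

```python
import time
from pathlib import Path

DROP = Path("drop")  # hypothetical inbox folder
OUT = Path("out")    # hypothetical output folder

def watch(poll_seconds=2):
    """Poll the drop folder and convert each new CSV exactly once."""
    OUT.mkdir(exist_ok=True)
    seen = set()
    while True:
        for csv_file in DROP.glob("*.csv"):
            if csv_file not in seen:
                csv_to_xml(csv_file, OUT / (csv_file.stem + ".xml"))
                seen.add(csv_file)
        time.sleep(poll_seconds)
```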

We’ve sliced and diced the ACER XML. There are a handful of curveballs in there.

  • We will be hosting a webinar and trying to kick off a discussion group on ACER XML.
  • We’ve set up a LinkedIn Group to share info.
  • Drop a note to Tom Eisner to get an invite to the webinar or the LinkedIn Group, or to get a view of the REMIT drop service (tje@broadpeakpartners.com or info@broadpeakpartners.com).

Looking for Artists of Code…

We are growing strong at BroadPeak and looking to hire some more key team members. Do you have experience integrating systems? Are you a code Jedi? Do you love Clojure or other functional languages?

Besides the obvious analytical skills that a good developer must have, we particularly look for folks who like having impact on big business, solving tough problems, and thinking artistically.

Yeah, I said it.  Gone are the days of the “code monkey”.  If you get heart warming feelings when you see an “elegant” technical solution, you’re an artist…and we want you.

Not All RRMs Are Created Equal

It’s just a matter of time…

With all this global trade reporting going on and all sorts of new players entering the market, it’s just a matter of time before someone’s data gets hacked. Being able to predict trading patterns and how counterparties affect the market is an advantage akin to knowing the other players’ cards at a poker table. A massive database of all trades is thus a treasure chest for the unscrupulous.

With Dodd-Frank and EMIR this really wasn’t much of a concern. But for REMIT, a growing number of new entrants are applying to be RRMs, making the decision tough for participants. Here’s the real question when deciding…

Does your TR/RRM have a NOC to prevent malicious attacks from hackers?

Players like ICE and CME, among others, have spent tons of money protecting their systems, so you can rest assured when sending data to them. Not all RRMs are created equal…

Docker | A $416 Million Loss Too Late?

Seriously, one has to wonder whether Docker could have saved Knight Capital from bankruptcy and a $416 million one-day loss.

The cause of the Knight bankruptcy is pretty straightforward. Knight was one of the biggest market makers in equities; they were the guys always providing liquidity on exchanges like the NYSE. A developer moved an upgrade of their market-making software from UAT into production, but he or she forgot to install one little thing. That one little thing caused market orders to be duplicated again and again. Before they could trace the problem, Knight was out $416 million and finished.

Moving an application from a development environment to production has always been a tedious headache. When I move an application from one server to the next, I have to make sure that all the underlying and supporting libraries, settings, and configurations are the same. It’s run-of-the-mill pain-in-the-neck work, but it demands attention to detail. Why? Because if that one little thing is not the way it should be, the new instance will misfire.

The beauty of Docker, and the reason enterprise software guys like me are bla-bla-bla-ing about it, is that it eliminates a ton of that pain-in-the-neck work and streamlines application migration. You simply deploy an application inside a Docker container and you can move it from one server or VM to the next. All the settings, libraries, etc. go right along with it; the application and its dependencies are all bundled nicely into a tidy container. One server to the next, it all goes along swimmingly.
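As a minimal sketch, here’s what that bundling looks like for a hypothetical Python app (the file names are invented for illustration):

```dockerfile
# Hypothetical Python app packaged with everything it needs to run.
# Pin the runtime inside the image itself.
FROM python:3
# Declare and install the supporting libraries.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Ship the application code inside the image.
COPY app.py .
# Same entry point on every host.
CMD ["python", "app.py"]
```

Build it once with `docker build -t myapp .`, and `docker run myapp` behaves identically on any host with Docker installed; there is nothing left behind to forget.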

Lessons:

1. We are in a new golden age of technology. Amazing enterprise technology is rolling out ALL THE TIME. At a minimum, it really pays to adopt a little skunkworks mindset, give these new things a go, and see how they can make an impact on your business. We are recommending that all our clients at least try out Docker. We are already using it internally.

2. When new technology rolls out, it is often not “enterprise ready.” What this means is that the bulk of the product is there, but it’s not yet prime time for the security, features, and usability that fit into an enterprise operating model. For example, Docker is only supported on Linux. If your company is running Linux servers, you are good to go. Microsoft servers? Not ready yet. But disruptive technology like this is entirely worth staying current on. It won’t take Docker any time to close the gap.

3. When skunkworking new technology, knowledge distribution is king. We like to use the surgeon model: see one, do one, teach one. The last thing you want is one person being the only one who knows how to use the new technology.

4. The hallmark attribute of this new technological age is absolute “torpedo-ability.” Modern architecture is thoroughly Darwinian and assumes that any part that becomes less than ideal can be replaced (see my previous blog post on this here). And when new, cool stuff is rolling out all the time, we actually have smart evolution at our disposal. This is just one of the reasons your old legacy systems don’t have much of a future: they can’t evolve (see blog post here).


Y2.02K- The Enterprise Software User Revolt

What is Y2.02K?* Don’t bother hiring McKinsey; it doesn’t take a rocket surgeon to figure it out. All you have to do is poke around your company to identify the one thing that people absolutely HATE about their job, the thing that causes grey hair and unnecessary conflict: you’ve got enterprise software that sucks. Maybe it’s an old system, maybe it’s a legacy system inherited from an acquisition, maybe it was one “sold” to you that never came close to expectations. Whatever the cause, these clunkers are doing really important things… just doing them badly.

First Things First: It’s Not Your Fault

These enterprise software systems were “state of the art” when they were built. Have you ever even heard of an AS/400 mainframe crashing? Never! That was the state of the enterprise software art at the time. It doesn’t make them any friendlier, but in a way they weren’t designed to be friendly. Other systems may have “all the right buttons and functionality,” but the underlying architecture is a dog’s breakfast. In the data-centric competitive economy, this is the root of really exorbitant maintenance costs and more and more grey hair.

We also can’t discount that our perceptions of software have really changed. Apple, Google, and others drive “iPhoneification,” and their gorgeous interfaces highlight the value of great user design. Against this backdrop, old software just looks and feels horrible.

And Then This Happened.

Data! Data analysis! Predictive data analytics! Listen, I’m not going to mince words here. The data gauntlet is far, far out of the gate. Managers and the C-level are demanding analytics, and they want it yesterday. But to deliver, we have to grapple with our old, crappy enterprise software systems, which hide data behind a labyrinth of spaghetti architecture.

When we first built K3, one of our key fundamentals was that it had to be the WD-40 for data. Half of this is a simple grappling hook into older enterprise software systems (the K3 adapter library); the other half is seamless, user-driven transformation. What do I mean by that? It’s getting hooks into systems and then enabling users to sort, blend, and harmonize the data so you can make sense of it and do new and cool things with it. Call it a ring-fence, call it closing a digital divide, but in the end it’s all about closing the holes in monolith software.
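To make “sort, blend, harmonize” concrete, here’s a minimal Python sketch; the file and column names are hypothetical stand-ins for two systems that describe the same trades differently:

```python
import csv

# Map each source system's column labels onto one canonical layout.
FIELD_MAP = {
    "legacy_system.csv": {"TradeRef": "trade_id", "Qty": "quantity"},
    "new_system.csv":    {"id": "trade_id", "volume": "quantity"},
}

def harmonize(path):
    """Yield rows from one source, renamed into the canonical layout."""
    mapping = FIELD_MAP[path]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {canonical: row[source] for source, canonical in mapping.items()}

# Blend both sources into a single, uniform stream of records.
blended = [row for path in FIELD_MAP for row in harmonize(path)]
```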

Y2.02K is the User Revolt, and It’s Already Underway

Technology people with eyes wide open have been reading their news feeds for the last few years with their mouths agape. Why? The new technology rolling out is staggering in simplicity and absolutely amazing. The early majority of large companies are just now putting this technology to work and decoupling key pieces out of their monoliths. The fact is, these technological leaps and bounds, coupled with the absolutely fed-up disposition of users, make old monolith systems a lost cause. At a minimum, it will drive a powerful replacement cycle in favor of applications built on newer technology; indeed, users are already reducing the footprint of monoliths in favor of external technology.

By 2020 the monolith will be only for absolute laggards; most large enterprises will have gone far, far beyond it. The next five years are the most exciting the software business has ever seen. Fasten your seat belts.

*In case you didn’t get it: Y2.02K is the year 2020.

What is an API???

When the Harvard Business Review, a wonderful publication that rarely gets deep into technology “solution-eering,” starts talking about adopting an API strategy, it’s time to lend an ear. If you have not read it, here it is in full: https://hbr.org/2013/08/move-beyond-enterprise-it-to-a/.

I’m going to skip over the smart-sounding techno-speak of Cambridge for a moment and head Southside to talk about APIs and why they are so important, in basic terms.

Let’s say you want a pizza, but there is no such thing as a pizza parlor. To get a pizza you need a big oven, you need ingredients, and you need some pizza skills to put it all together and cook it to perfection. Pizza after pizza, sooner or later you get tired of spending all this time in the kitchen. Pizza making is a time sink on your day, and you’ve pretty much boiled it down to a regular set of steps for a reliable pie. Maybe you can get someone to do it for you?

Enter the invention of the pizza parlor. There is a person to manage and assemble the ingredients, and another to work the oven and cook pies to perfection. The pizzas come out great. The staff is great at their assigned tasks (but pretty bad at anything else). But there is one more important person: the person at the counter. No more fooling with ingredients, assembling, or cooking. All you have to do is go to the counter and say, “pepperoni pizza.” Voila, 30 minutes later a pepperoni pizza appears.

The guy at the counter? That’s your API. APIs abstract us from the rigors of what happens behind the technological counter of a piece of software. With an API, I don’t have to touch or do anything behind the technological counter, ever. As far as I am concerned, as an abstracted consumer, the dirty work stays a nice mystery. And just like a pizza menu, I can order up any number of special items. Just talk to the guy behind the counter. No messing with ingredients (data), assembly (“ETL”), or cooking (processing).
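In code, ordering from the counter is just one simple call. Here’s a minimal Python sketch against a hypothetical pizza-parlor REST API; the URL and fields are invented for illustration:

```python
import requests

# Place an order at the "counter" -- one call, no kitchen access required.
response = requests.post(
    "https://pizza.example.com/api/orders",        # hypothetical endpoint
    json={"pizza": "pepperoni", "size": "large"},  # the whole "order" request
)
order = response.json()
print(order["status"])  # e.g. "baking" -- the oven stays a nice mystery
```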

So why is an API strategy so important?

  • 90% of the world’s data was created in the last 2 years.

Using the pizza analogy, we now have a TON of ingredients in the fridge. Spoilage is a real problem (lost and hidden data). We have neither the time nor the person-power to have people knocking about behind the counter trying to make their own pizzas. Think I’m kidding? Just give a thought to all the CSV files a typical corporation deals with on a day-to-day basis. These are useful little devils: useful in the sense that we really, really need the data within; devils in the sense that to use that data we have to get behind the technological counter and VLOOKUP and code ourselves into a knot.

  • APIs have evolved beautifully.

Back in the day, ordering a pizza from the guy behind the technological counter was a bit cumbersome. You had to order in a very specific and lengthy way, kind of like giving the pizza counter guy a secret password, a special handshake, an awkward exchange of pizza-ordering language, and a salutation to the sun and moon. Some APIs are still like this, but they have evolved a lot. These days it’s all Internet speak (Web Services) and has been simplified to the Nth degree (REST). At BroadPeak we joke about companies that talk about “integrating” with REST APIs. If you think about it, saying you have integrated to a REST API is like saying, “We integrate into the thing that is absolutely and thoroughly designed for integration.” Well, good for you!

Yes, APIs Have Gotten This Good

How good have APIs gotten? Good enough that there is a bit of a movement pulling away from traditional notions of SOA (service-oriented architecture). The truth is, we don’t really care what applications are doing in the pizza kitchen, so long as we have a single place to call. Deployed across the enterprise, this eliminates the need for clunky service buses and ESBs. So, rather than spending millions of dollars on a TIBCO implementation with a high failure rate, we proliferate APIs, and our developers spend their time building really cool applications instead of doing costly data marshaling.

Oh, so much more to come. Stay tuned.


New Buzzwords You Will Hate by 2016


Freshly returned from yet another conference. Since it’s nearing the end of the year, I thought I would put out a semi-prediction about what everyone is going to be buzz-wording about in 2015.

“NO ESB” – The agenda behind the NO ESB movement is this: stop thinking about message buses and start thinking about APIs. It’s not bad advice. Large enterprise has been crippled by old legacy systems without APIs for so long. In the old days we used to dream about connecting all systems with a ubiquitous message bus; 90% of the time these projects failed miserably. The alternative? API everything. Of course, the NO ESB camp has already schism’d, saying it’s not NO ESB but NOT ONLY ESB.

I’d say use what is right for the purpose, but yes, 2015 is the year of the API.  Like a daytime soap opera we will be spending hours transfixed on “The Old and the REST-less.”  Get it?

“Microservices” – This one is already getting old right out of the gate, but I guarantee you are going to hear more and more about Microservices! Microservices! Microservices! What is it? Not much. It’s basically the same old service-oriented architecture, except a bit more granular. Like SOA, it’s all about componentization, but with microservices you break things up even further, with some nice API (hopefully REST) exposure. Think of it as SOA for Millennials, as the sketch below shows.
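A microservice really can be this small. Here’s a minimal Python sketch using Flask; the service name, hubs, and prices are all hypothetical:

```python
from flask import Flask, jsonify

# One narrow capability (prices), exposed over REST, deployed on its own.
app = Flask("price-service")

PRICES = {"TTF": 24.15, "NBP": 41.80}  # illustrative gas hub prices

@app.route("/prices/<hub>")
def get_price(hub):
    return jsonify(hub=hub, price=PRICES.get(hub))

if __name__ == "__main__":
    app.run(port=5001)  # each microservice runs and scales independently
```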

“NO IT” – This one is actually kind of neat and long overdue. Back in the old days, IT (“Eye Tee”) sat in its ivory tower dictating how operations would run. This morphed into running IT as an internal business, where business operations are treated as clients. According to NO IT, that’s all over. Remember Neo in The Matrix: “There is no spoon.” With NO IT, there is no IT. According to Gartner, 40% of technology spend is coming directly out of the business, not IT. What it really comes down to is that the pace of business just can’t wait a week for a technical adjustment. NO IT simply states that business operations are not your minions and they are not your clients; they are your peers. NO IT dictates that technologists get out of “Eye Tee” and fold themselves directly into operations.

Of course it’s only a matter of time before someone re-defines NO IT to mean Not Only IT.

“DIY BI” – DIY BI is freaking awesome. Back in the olden days (2001) we had the same need for slick, customizable reports as we do now. The trouble is that the data we need is all over the place. Someone came up with the clever idea of an “OLAP cube” that batch-runs, summarizes, and dimensions the data. Cool, but ornery to set up and impossible for real-time reports. With the advent of Tableau, Spotfire, and QlikView, the cube era is OVER, gone the way of MySpace. Any regular business analyst can park Tableau on top of data sets and create really cool reports and dashboards. As always, 90% of the job is data marshaling. (See Citizen Integrator below.)

“Citizen Integrator” – The Citizen Integrator is a real milestone in technology. Integration tools, like our very own K3, have been designed to let regular, non-technical people create and run integrations. Compared to old integration tools, it’s a major step in abstracting users away from coding. In a nutshell, the Citizen Integrator takes your average Joe or Jane and turns them into a data-marshaling powerhouse. How? (See Data Scientist below.)

“Data Scientist” – It’s like a job title from the future. Meeting someone with the title of Data Scientist is akin to meeting the President of the United States: somewhat within the realm of possibility, but a rare event. My gut tells me we are going to hear more and more about Data Scientists in 2015. So what is it? In truth, a data scientist is 70% data wrangler. To see what I mean, ask yourself: does a snapshot of your morning sound something like “VLOOKUP, VLOOKUP, IF, IF, IF, THEN, TRIM, HLOOKUP, CTRL+D, IF VLOOKUP | SELECT *, JOIN”? You might be a Data Scientist! The balance of the workload is analysis and reporting. Remember DIY BI above? Well, Citizen Integrator + Smart Analysis + DIY BI = Data Scientist, 2015 and beyond.

“IOT” –  Don’t worry, not yet another acronym to learn.  It’s just “Internet of Things.”  You know how cartoons late in their run will do a mashup with another cartoon to squeeze a bit more mileage out of it?  This is pretty much the same thing.  The powers that be crunch it down to an acronym to get another year of buzzword out of it.  Yes.  I’m sorry.  It looks like another year of vapid conversations about how our toothbrush and toaster can talk over WiFi.

Last but not least:

“DevOps” – If you think Microservices might get on your nerves, prepare for DevOps. It’s already being badly abused. DevOps is nothing more than how you conduct “development.”

These are all good attempts to describe the frenetic pace of change in the tech world…but buzzwords…please stop…please just stop.

Designers will be Data Artists

Here is a great article discussing future transformations for designers. The future is all about data. So if you aren’t yet tired of hearing about Big Data, you may soon hit your breaking point. If this article is correct, it’s only going to be a growing phenomenon.

The question for leaders is…

How are you empowering your data artists?
