Development Philosophy

K3 is a different kind of integration platform. If you are reading this, there is a decent chance you have a technical background. This page is intended to impart a bit about K3's architecture and why we built it that way. We love technology. K3 stands on the shoulders of all the integration ESBs, message buses, and ETL tools that have come before us. However, it's our responsibility to advance the art: more data, faster flows, and designs that flex with tough business problems.

It is incumbent upon us as a software company to go far beyond just cool tech. It's our responsibility to deliver a cohesive application that solves data flow needs while also providing outstanding support.

Routes

A route is how data moves from A to Z, but it's a higher-level concept than something like a "queue." A route is like Dorothy meeting characters on the Yellow Brick Road. The queue is the road, Dorothy is the data payload, and in K3 that payload meets a lot of interesting characters (components) along the way, each of which can affect it. Note that a route is not always linear; it is actually a data-flow graph, an orchestration of data meeting components across topics. Reflecting the distributed architecture, a series of routes in K3 usually looks less like a straight line and more like a branching graph of topics and components, as the sketch below illustrates.
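
To make the graph idea concrete, here is a minimal sketch in Python. The names (Route, Component, the topic strings) are illustrative only, not K3's API; the point is simply that payloads hop from topic to topic through transforming components, fanning out where a topic has more than one edge.

```python
# Minimal sketch of a route as a data-flow graph. Names are
# illustrative only -- this is not K3's API.
from typing import Callable, Dict, List, Tuple

Payload = dict
Component = Callable[[Payload], Payload]

class Route:
    """Payloads enter at a source topic and flow through the
    components wired to each topic, possibly branching."""
    def __init__(self) -> None:
        # topic -> [(component, next_topic), ...]
        self.edges: Dict[str, List[Tuple[Component, str]]] = {}

    def wire(self, topic: str, component: Component, next_topic: str) -> None:
        self.edges.setdefault(topic, []).append((component, next_topic))

    def publish(self, topic: str, payload: Payload) -> None:
        # Each component transforms the payload and re-publishes it
        # downstream; several edges on one topic fan the data out.
        for component, next_topic in self.edges.get(topic, []):
            self.publish(next_topic, component(payload))

route = Route()
route.wire("orders.raw", lambda p: {**p, "qty": int(p["qty"])}, "orders.typed")
route.wire("orders.typed", lambda p: print("sink got", p) or p, "orders.done")
route.publish("orders.raw", {"symbol": "ABC", "qty": "100"})
```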

The K3 Core

K3 is built on an entirely distributed architecture. We designed it to give customers user-driven data management, horizontally scalable data streams, and the ability to replay historical data on demand.

Persistence

In a data-hungry enterprise, it's imperative to store data cohesively. K3 takes persistence a step further by providing two layers. The first is K3 Core persistence: everything that flows through K3 is automatically saved in distributed files. That enables the second layer, K3 Projected DB persistence. This is where users gain actionable leverage: they can normalize multiple feeds to a single format, filter, and then iteratively project the data to a data store of their choice. Need additional fields? Forgot to add some rules? Drop and reflow the data until you get it right. It works with any data store: MS SQL Server, MySQL, Redshift, Oracle, Postgres, MongoDB, Hadoop... it does not matter.
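
Here is a small sketch of the "project, drop, reflow" idea, using SQLite as a stand-in target store. The raw log and the normalize() rule are invented for illustration; K3's core persistence is its own distributed file layer, not a Python list. What matters is that the raw data is immutable, so the projection can be dropped and rebuilt at will.

```python
# Sketch of "project, drop, reflow" against a stand-in raw log.
import sqlite3

RAW_LOG = [  # stand-in for K3's immutable core persistence
    {"src": "feedA", "sym": "abc", "px": "10.5"},
    {"src": "feedB", "ticker": "XYZ", "price": 20.25},
]

def normalize(event: dict) -> dict:
    # One rule set mapping several feed formats onto a single schema.
    return {
        "symbol": (event.get("sym") or event.get("ticker")).upper(),
        "price": float(event.get("px") or event.get("price")),
    }

def reflow(db: sqlite3.Connection) -> None:
    # Drop the projection and rebuild it from the raw log. Because the
    # raw data never mutates, changing normalize() and reflowing is safe.
    db.execute("DROP TABLE IF EXISTS prices")
    db.execute("CREATE TABLE prices (symbol TEXT, price REAL)")
    for event in RAW_LOG:
        row = normalize(event)
        db.execute("INSERT INTO prices VALUES (?, ?)",
                   (row["symbol"], row["price"]))
    db.commit()

db = sqlite3.connect(":memory:")
reflow(db)
print(db.execute("SELECT * FROM prices").fetchall())
```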

Adapters

"Adapter" is a general term for K3 components that ingest data from data producers and outgest data to data consumers. Sure, we'll play the "let us show you all the logos and you will be impressed" game. But if you have been on the data battlefield, you know it takes more than a logo to connect to an enterprise app. Sometimes an integration project will require several different types of adapters to connect to the same application.

Filesystem/FTP Adapters:

A filesystem adapter is the most basic type of adapter. Any structured file (CSV, XML, JSON, etc.) placed in a designated folder is automatically pulled into K3. Similarly, output folders can be configured for any route. Users can designate any folder, such as an FTP/SFTP folder, Dropbox, or another network folder.
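
The idea is simple enough that a toy version fits in a few lines. This is a generic polling watcher, not K3's implementation; the folder path and handler are placeholders, and a production adapter would also archive or move files rather than just remembering their names.

```python
# Toy polling filesystem adapter: any new file dropped into the
# watched folder is read once and handed to the route.
import time
from pathlib import Path

def watch(inbox: Path, handle, poll_seconds: float = 5.0) -> None:
    seen = set()
    while True:
        for path in inbox.iterdir():
            if path.is_file() and path.name not in seen:
                handle(path.read_text())  # push the payload into the route
                seen.add(path.name)       # real adapters archive the file
        time.sleep(poll_seconds)

# watch(Path("/data/ftp/inbound"), handle=print)  # placeholder folder
```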

FIX Adapters:

K3 FIX adapters are designed specifically to interact with the FIX protocol and FIX message format for connecting to the world's financial institutions. K3 has connections to over 30 exchanges, and like all other adapters, FIX adapters are built for low latency and reliability.
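
For readers who haven't handled FIX: a message is a string of tag=value pairs separated by the SOH control character. The sketch below parses one into a dict and reads a few well-known tags; it deliberately ignores the session mechanics (sequence numbers, BodyLength, CheckSum, heartbeats) that a real FIX adapter must handle.

```python
# FIX messages are SOH-delimited tag=value pairs. Tag 35 is MsgType,
# 55 is Symbol, 54 is Side, 38 is OrderQty.
SOH = "\x01"

def parse_fix(raw: str) -> dict:
    pairs = (field.split("=", 1) for field in raw.strip(SOH).split(SOH))
    return {tag: value for tag, value in pairs}

msg = SOH.join(["8=FIX.4.2", "35=D", "55=ABC", "54=1", "38=100"]) + SOH
fields = parse_fix(msg)
print(fields["35"], fields["55"], fields["38"])  # D ABC 100
```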

Database Adapters:

K3's database adapter allows users to execute custom SQL queries to retrieve data from, or emit data to, any type of relational database. If enabled, the adapter can use a delta function to retrieve only the data added since the last poll, rather than re-reading the entire table on every poll.
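
Delta polling is a standard watermark pattern, sketched below against an in-memory SQLite table. The table and column names are illustrative; in K3 the query itself is user-supplied.

```python
# Delta polling with a watermark: only fetch rows newer than the
# last id we saw, then advance the watermark.
import sqlite3

def poll_delta(db, watermark):
    rows = db.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id",
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][0] if rows else watermark
    return rows, new_watermark

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
db.executemany("INSERT INTO events (payload) VALUES (?)", [("a",), ("b",)])

rows, mark = poll_delta(db, watermark=0)   # first poll: both rows
db.execute("INSERT INTO events (payload) VALUES (?)", ("c",))
rows, mark = poll_delta(db, mark)          # next poll: only the new row
print(rows)  # [(3, 'c')]
```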

Payload Specification:

For any of the adapters above, the payload can be any structured format (CSV, XLS, XML, JSON, etc.). Users can configure how K3 parses input formats, as well as how the outbound payload should be structured, using easy-to-configure templates.
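
To illustrate the template idea, here is a toy mapping that reshapes a JSON record into a CSV row. The template syntax is invented for this sketch; K3's actual templates are configured through its own tooling.

```python
# Template-driven payload shaping: a mapping describes how inbound
# fields become outbound fields (out_field -> in_field).
import csv, io, json

TEMPLATE = {"symbol": "sym", "quantity": "qty"}

def project(record: dict, template: dict) -> dict:
    return {out: record[src] for out, src in template.items()}

inbound = json.loads('{"sym": "ABC", "qty": 100, "extra": "ignored"}')
outbound = project(inbound, TEMPLATE)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(TEMPLATE))
writer.writeheader()
writer.writerow(outbound)
print(buf.getvalue())  # symbol,quantity / ABC,100
```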

API:

API adapters are designed to work with REST and SOAP endpoints and are leveraged heavily with major enterprise software systems and SaaS vendors. These adapters expose all the key configuration elements that determine how the adapter interacts with the endpoint.
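
As a rough sketch of what "exposing the key configuration elements" means, here is a configurable REST poller using only the Python standard library. The endpoint URL and config keys are placeholders; a real API adapter would also handle authentication, retries, pagination, and rate limits.

```python
# Configurable REST poller sketch (standard library only).
import json
import urllib.request

CONFIG = {
    "url": "https://example.com/api/orders",  # placeholder endpoint
    "method": "GET",
    "headers": {"Accept": "application/json"},
    "timeout_seconds": 10,
}

def fetch(config: dict) -> list:
    req = urllib.request.Request(
        config["url"], method=config["method"], headers=config["headers"]
    )
    with urllib.request.urlopen(req, timeout=config["timeout_seconds"]) as resp:
        return json.load(resp)

# records = fetch(CONFIG)  # each record is then published onto a route
```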

UI and Workflow

We love a lot about what other integration platforms do. But there is one thing we almost universally dislike: their user interfaces. Now, you might be an "all emacs" or "all eclipse" all day kind of person. Us too. So when it comes to tasks like creating mappings, rules, or other day-to-day functions, you are just as happy, if not happier, to do it in code. But there is something really important to consider: that can become an impossible bottleneck.

We have yet to meet an IT department that isn't under incredible pressure to deliver. So what if we could let business users safely administer data flows without intervention by IT? Businesses change, and there is always data entropy to handle: new customers, products, markets, and so on. Why not abstract these simple day-to-day functions into a UI? No code needed.

Empowering Constrained Teams

Large teams need workflow. The reality of enterprise integrations is that we can't always expect our most technical people to do everything. K3 is designed for self-sufficient integration and ETL: once a route is set up, roughly 90% of its day-to-day maintenance can be handled by non-coding business analysts. This is a huge shift. It allows our most technical users to focus on the most technical issues rather than plodding away at data janitorial work.