
Cloud Event Processing - Analyze, Sense, Respond

Colin Clark





Cloud Event Processing: CEP in the Cloud

Observations & Next Steps

Over the past few weeks, I’ve implemented map/reduce using techniques commonly found in Complex Event Processing.  Here’s a summary of what was involved, and what tools would make such a deployment easier.

Getting the Data
One of the first tasks was the creation of an OnRamp – we use OnRamps to get data into our cloud for processing. The OnRamp used in this learning exercise subscribed to Twitter and fed the resulting JSON objects onto the service bus, RabbitMQ in this case. We had to configure RabbitMQ correctly for this, and the OnRamp had to be aware of, and implement, the semantics required to publish on that bus. It would be easier and more portable if this were abstracted in some type of OnRamp API; we had abstracted this at Kaskad. In Korrelera, the bus didn't matter – we could just as easily use direct sockets, JMS, Tibco, or 29West. The OnRamp didn't know, and didn't care. In our TwitYourl example, there's no way to monitor or manage the OnRamp other than tailing its output and inspecting it visually. There is no central management or operations console.
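The kind of OnRamp abstraction described above might look something like this minimal Python sketch. The names Bus, InMemoryBus, and TwitterOnRamp are my own illustrations, not anything from TwitYourl; the commented-out RabbitBus shows roughly where a pika-backed RabbitMQ implementation could plug in.

```python
import json
from abc import ABC, abstractmethod

class Bus(ABC):
    """Abstract transport, so OnRamps stay ignorant of the wire protocol."""
    @abstractmethod
    def publish(self, topic: str, message: dict) -> None: ...

class InMemoryBus(Bus):
    """Trivial in-process bus, handy for testing OnRamps without a broker."""
    def __init__(self):
        self.messages = []
    def publish(self, topic, message):
        self.messages.append((topic, json.dumps(message)))

class TwitterOnRamp:
    """Feeds incoming tweet JSON onto whatever bus it was handed."""
    def __init__(self, bus: Bus, topic: str = "tweets"):
        self.bus = bus
        self.topic = topic
    def on_tweet(self, raw_json: str) -> None:
        self.bus.publish(self.topic, json.loads(raw_json))

# A RabbitMQ-backed Bus would wrap pika; sketched here, not exercised:
# class RabbitBus(Bus):
#     def __init__(self, host="localhost", exchange="events"):
#         import pika
#         self._conn = pika.BlockingConnection(pika.ConnectionParameters(host))
#         self._ch = self._conn.channel()
#         self._ch.exchange_declare(exchange=exchange, exchange_type="topic")
#         self._exchange = exchange
#     def publish(self, topic, message):
#         self._ch.basic_publish(exchange=self._exchange,
#                                routing_key=topic,
#                                body=json.dumps(message))
```

With an interface like this, swapping RabbitMQ for JMS or direct sockets means writing a new Bus, not touching the OnRamp.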

Definition of Services
Although we used Map/Reduce as our first example, the topology doesn't really matter. What matters is that we created a number of services and then deployed them. In our small example, we wrote a RuleBot that performed the Map function in Map/Reduce. This RuleBot listened for tweet JSON objects, pulled them apart, found the information we were interested in, chunked it, and then fed it back onto the service bus. Another RuleBot performed the Reduce function – events were pumped into the Esper open source CEP engine, where they could then be queried. Again, the RuleBots had to be aware of the underlying bus's semantics and could not be managed or monitored in our TwitYourl example.
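As a rough illustration of the two RuleBots above: the Map step pulls a tweet apart and emits the chunks of interest, and the Reduce step aggregates them. The field names are assumptions, and the plain-Python Counter here only stands in for Esper, which actually performed the Reduce side via a continuous query.

```python
import json
from collections import Counter

def map_tweet(raw_json: str) -> list:
    """Map RuleBot: pull a tweet apart and emit the chunks we care about.
    Here we extract hashtags; a real RuleBot would emit richer events."""
    tweet = json.loads(raw_json)
    return [word.lower() for word in tweet.get("text", "").split()
            if word.startswith("#")]

def reduce_tags(tag_events) -> Counter:
    """Reduce RuleBot: aggregate mapped events into counts.
    Stands in for an Esper EPL query along the lines of:
        select tag, count(*) from TagEvent group by tag"""
    counts = Counter()
    for tags in tag_events:
        counts.update(tags)
    return counts
```

In the real deployment both functions would sit behind the service bus, consuming and republishing events rather than being called directly.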

Deployment to the Cloud
All of this then had to be deployed to the cloud – there are two main components to this. First, we assumed that each node in the cloud was configured correctly. This had to be done by hand – it would have been much easier to have an image containing everything we needed from an infrastructure, or plumbing, point of view that could have been deployed to any number of servers via point and click. Second, the services themselves needed to be deployed, and as I've already pointed out, those services had to be aware of the bus and could not be managed or monitored. All of this had to be done by hand. And log files or console windows had to be watched, both operationally and to examine the fruits of our labors.

How to Make This Easier
First, we need a tool that will configure and provision any number of nodes in our cloud. Several vendors have products in this space, and I'm not going to talk about them here (yet). Second, and more importantly, we need an architecture layered on top of the hardware/operating system/ESB/etc. that can accept and deploy services dynamically – an implementation that can be monitored and managed remotely and that allows us to manage our solution both physically and at some abstracted level.

Another Layer of Abstraction

It would be very handy indeed if we could define what goes on in our Event Processing Cloud and then push it out to the cloud. We need the ability to iteratively develop services, test them with live data, and deploy each service to a service pool. Service pools define some chunk of work that must be done; RuleBots can join service pools and then be automagically managed by our CEP-based load balancing tool. OnRamps can be managed. And everything going on can be examined, both physically and from a services point of view. For example, TwitYourl may be running on 100 machines, but the business user really only cares about whether the service is available and whether the results can be viewed and utilized.
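A minimal sketch of the service-pool idea might look like the following. The names are hypothetical, and simple round-robin dispatch stands in for the CEP-based load balancing described above; the `available` flag reflects the service-level view the business user actually cares about.

```python
import itertools

class ServicePool:
    """A named chunk of work. RuleBots join the pool, and the pool
    balances events across them (round-robin here, as a placeholder
    for CEP-based load balancing)."""
    def __init__(self, name: str):
        self.name = name
        self.members = []
        self._cycle = None

    def join(self, rulebot) -> None:
        """A RuleBot joins the pool and becomes eligible for work."""
        self.members.append(rulebot)
        self._cycle = itertools.cycle(self.members)

    def dispatch(self, event):
        """Hand the next event to the next RuleBot in rotation."""
        if self._cycle is None:
            raise RuntimeError(f"no RuleBots in pool {self.name!r}")
        return next(self._cycle)(event)

    @property
    def available(self) -> bool:
        """The service-level view: is anyone able to do this work?"""
        return bool(self.members)
```

Whether TwitYourl's Map pool has two members or a hundred, the caller only sees `available` and `dispatch` – which is the abstraction the business user needs.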

What’s Next?

I’m going to outline, at a high level, the requirements of this command and control architecture, and we’re going to re-deploy TwitYourl using the new approach. By doing this, we will be able to compare the ‘old’ way of deploying 1st-generation CEP-based solutions, which are designed to scale vertically on single multiprocessor machines, with our new Cloud Event Processing approach, which is designed to scale not only vertically but also horizontally, running on many more machines in a public, private, or hybrid cloud. And then we’ll talk about a much better way to look at output than monitoring a console or tailing a log file!

Thanks for following along!



Colin Clark is the CTO for Cloud Event Processing, Inc. and is widely regarded as a thought leader and pioneer in both Complex Event Processing and its application within Capital Markets.

Follow Colin on Twitter at http://twitter.com/EventCloudPro to learn more about cloud-based event processing using map/reduce, complex event processing, and event-driven pattern matching agents. You can also send topic suggestions or questions to colin@cloudeventprocessing.com.
