
Cloud Event Processing - Analyze, Sense, Respond

Colin Clark





Data Mining in Streaming Data

Symbolic Representation, Dimension Reduction, Clustering...

Lately, I’ve been working on some interesting projects involving not just the usual suspects of stream processing, but data mining within high-velocity time series.  In conjunction with that effort, I’ve been doing a lot of research in the areas of symbolic representation, dimension reduction, clustering, indexing, classification, and anomaly detection.  A prolific researcher in this area is Dr. Eamonn Keogh – I’ll be applying some of his team’s ideas to some interesting customer problems and telling you all about it here.  Let’s get started!

TOO MUCH DATA

When dealing with real-time streaming numerical data, sometimes there is simply too much of it to do anything meaningful with in real time.  For example, in pattern recognition, trying to compute nearest neighbors over continuous, highly dimensional data is a compute nightmare.  And once you’ve identified a pattern of interest, finding similar patterns either in historical data or in streaming data is extremely compute intensive, and until recently it was outside the scope of streaming engines.  That’s because if you need to go outside of main memory, even if you’re distributed like we are, say, “Hello!” to my friend, Latency!
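To make that cost concrete, here is a minimal sketch of the naive approach in plain Python (not DarkStar code; the function names and buffer handling are my own illustration): slide a query pattern across a buffered stream and compute the full Euclidean distance at every offset.

import math

def euclidean(a, b):
    # full-resolution distance: every point of both windows gets touched
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def naive_nearest_neighbor(stream_buffer, query):
    # slide the query across the buffer, computing the full distance at every offset
    w = len(query)
    best_at, best_dist = -1, float("inf")
    for i in range(len(stream_buffer) - w + 1):
        d = euclidean(stream_buffer[i:i + w], query)
        if d < best_dist:
            best_at, best_dist = i, d
    return best_at, best_dist

# every arriving tick repeats O(len(buffer) * len(query)) work against raw data

Run that on every new event against a buffer that doesn’t fit in main memory and you can see where the latency comes from.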

NUMERICAL TECHNIQUES

There are several numerical techniques one can employ to summarize streaming numerical data.  The problem with these representations is that they are all continuous, or real-valued.  Another large problem, according to Dr. Keogh, is that none of the popular techniques offers a distance measure that lower bounds the true distance measure on the underlying data.  This means that once you’ve conflated your data, any analysis on that representation might not be accurate, or representative of the underlying data stream.  Also, because the resulting values are not discrete, we can’t use algorithms like hashing or search.  Well, that’s no good!  So what to do?
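To see why lower bounding matters, here is a hedged sketch (my own illustration, not SAX or DarkStar code): if the cheap distance on the reduced representation never exceeds the true distance on the raw data, we can discard candidates using only their summaries and still be guaranteed never to throw away the true best match.  The names Series, lower_bound, and true_dist are placeholders.

from collections import namedtuple

Series = namedtuple("Series", ["raw", "reduced"])  # full series plus its summary

def prune_with_lower_bound(candidates, query, lower_bound, true_dist):
    # exact nearest neighbor as long as lower_bound(a, b) <= true_dist(a, b)
    best, best_d = None, float("inf")
    for c in candidates:
        # cheap test on the reduced representation
        if lower_bound(c.reduced, query.reduced) >= best_d:
            continue  # safe to skip: the true distance can only be larger
        d = true_dist(c.raw, query.raw)  # expensive test on the raw data
        if d < best_d:
            best, best_d = c, d
    return best, best_d

Without that guarantee, pruning on the summary can silently discard the real answer, which is exactly the accuracy problem described above.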

HOT SAX – GETTING DOWN TO THE GIST

Symbolic Aggregate approXimation (SAX) allows data to be conflated and discretized, and distances to be calculated between observations.  That means we can use all of the wholesome goodness out there in the areas of clustering, indexing (search), classification, and anomaly detection while also dramatically reducing the amount of data we need to crunch.  That gets us closer to integrating streaming events and historical data.  Nirvana.  SAX is the result of much work done, and still being done, by Dr. Keogh and his team at the University of California, Riverside, and lots of information about that work can be found here.
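As a rough sketch of the idea, based on my reading of the papers rather than on DarkStar internals: a z-normalized series is reduced to a handful of frame averages (the PAA step we’ll prep below), and each average is mapped to a letter using breakpoints that cut the standard normal curve into equiprobable regions.  The breakpoint values here are the commonly quoted ones for an alphabet of size 4; the function names and the mindist helper are my own illustration of the paper’s lookup-table distance.

import bisect, math

# breakpoints that cut N(0,1) into 4 equiprobable regions (alphabet size 4)
BREAKPOINTS = [-0.67, 0.0, 0.67]
ALPHABET = "abcd"

def sax_word(paa_values):
    # map each PAA average to the letter of the region it falls into
    return "".join(ALPHABET[bisect.bisect_left(BREAKPOINTS, v)] for v in paa_values)

def symbol_dist(a, b):
    # one cell of the SAX lookup table: adjacent letters cost 0, otherwise
    # the gap between the breakpoints that separate them
    i, j = ALPHABET.index(a), ALPHABET.index(b)
    if abs(i - j) <= 1:
        return 0.0
    return BREAKPOINTS[max(i, j) - 1] - BREAKPOINTS[min(i, j)]

def mindist(word_a, word_b, n):
    # distance between two SAX words that lower-bounds the Euclidean
    # distance between the original n-point series
    w = len(word_a)
    return math.sqrt(n / w) * math.sqrt(
        sum(symbol_dist(a, b) ** 2 for a, b in zip(word_a, word_b)))

# e.g. sax_word([-1.1, -0.2, 0.3, 1.4]) -> "abcd"

Because the output is a short string, we get hashing, indexing, and search back, and because mindist never overshoots the true distance, pruning with it stays exact.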

CALL THE PREP CHEFS

First, we need to do some prep work, and I recommend reading the papers – they’re informative and there’s really not too much math either.  As a precursor to SAX encoding, we’ve got some work to do.  We’ll use Piecewise Aggregate Approximation (PAA) as an intermediate step, and before applying PAA, we’ll normalize the data.  In my next post, we’ll show some spiffy charts and graphs as we implement SAX within DarkStar (our distributed event processing system that incorporates streaming map/reduce and CEP functionality).  Go read the papers and then come back for some fun.
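Until then, here is roughly what the prep looks like in plain Python (a sketch of the papers’ recipe, not the DarkStar implementation; the frame count of 8 in the usage comment is just an example): z-normalize each window so series are compared by shape rather than scale, then average the window down to a few PAA frames.

import math

def z_normalize(window):
    # rescale so the window has mean 0 and standard deviation 1
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    if std == 0:
        return [0.0] * n  # a flat window has no shape to compare
    return [(x - mean) / std for x in window]

def paa(window, frames):
    # Piecewise Aggregate Approximation: one average per (roughly) equal-sized frame
    n = len(window)
    return [sum(window[i * n // frames:(i + 1) * n // frames]) /
            (((i + 1) * n // frames) - (i * n // frames))
            for i in range(frames)]

# prep pipeline: normalize first, then reduce, then hand the averages to SAX
# paa(z_normalize(raw_window), frames=8)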

THANKS FOR READING!


More Stories By Colin Clark

Colin Clark is the CTO for Cloud Event Processing, Inc. and is widely regarded as a thought leader and pioneer in both Complex Event Processing and its application within Capital Markets.

Follow Colin on Twitter at http://twitter.com/EventCloudPro to learn more about cloud-based event processing using map/reduce, complex event processing, and event-driven pattern matching agents. You can also send topic suggestions or questions to colin@cloudeventprocessing.com.