Saturday, June 25, 2016
Weird Spark bug?
On Spark 1.5.0-cdh5.5.0, filtering on both columns at once (or chaining the two filters) returns no rows, but inserting an orderBy between the same two filters makes the matching row appear:

scala> df.filter("ad_market_id = 4 and event_date = '2016-05-23'").show
+----------+------------+
|event_date|ad_market_id|
+----------+------------+
+----------+------------+

scala> df.filter("ad_market_id = 4").filter("event_date = '2016-05-23'").show
+----------+------------+
|event_date|ad_market_id|
+----------+------------+
+----------+------------+

scala> df.filter("ad_market_id = 4").orderBy("event_date").filter("event_date = '2016-05-23'").show
+----------+------------+
|event_date|ad_market_id|
+----------+------------+
|2016-05-23|           4|
+----------+------------+
Tuesday, March 22, 2016
Home Depot Kaggle competition started
Running some cleaning, spell-checking, and initial feature generation on my AWS Spark cluster with 33 nodes.
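For reference, a first cleaning pass along these lines might look like the sketch below (the column name search_term is a hypothetical placeholder, not necessarily the competition schema):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lower, regexp_replace, trim}

// Lowercase a text column, replace non-alphanumeric characters with
// spaces, and trim -- a typical first step before spell-checking and
// feature generation.
def cleanColumn(df: DataFrame, column: String): DataFrame =
  df.withColumn(column + "_clean",
    trim(regexp_replace(lower(col(column)), "[^a-z0-9 ]", " ")))

val cleaned = cleanColumn(df, "search_term")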

Friday, June 5, 2015
Spark MLlib Review
Iterative methods are at the core of Spark MLlib. Given a problem, we guess an answer and then iteratively improve the guess until some condition is met (as in Krylov subspace methods, for example). Improving the answer typically involves passing through all of the distributed data and aggregating some partial result on the driver node. This partial result is some model, for instance an array of numbers. The condition can be convergence of the sequence of guesses or reaching the maximum number of allowed iterations.
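To make the pattern concrete, here is a minimal sketch of that loop: batch gradient descent for least squares on an RDD. The names are illustrative, not MLlib internals; MLlib's own implementations use more efficient aggregation (e.g. treeAggregate) and proper vector types.

import org.apache.spark.rdd.RDD

// Each iteration makes one full pass over the distributed data; the
// gradient (the small "partial result") is aggregated on the driver,
// which updates the model.
def gradientDescent(data: RDD[(Double, Array[Double])],
                    numFeatures: Int,
                    stepSize: Double = 0.1,
                    maxIter: Int = 100,
                    tol: Double = 1e-6): Array[Double] = {
  val n = data.count().toDouble
  var w = new Array[Double](numFeatures)   // current guess for the model
  var iter = 0
  var converged = false
  while (iter < maxIter && !converged) {
    val wCur = w  // snapshot shipped to the executors with the closure
    // One pass over the data: per-example gradients are summed on the
    // executors; only the aggregate comes back to the driver.
    val grad = data
      .map { case (y, x) =>
        val pred = x.zip(wCur).map { case (xi, wi) => xi * wi }.sum
        x.map(_ * (pred - y))
      }
      .reduce((a, b) => a.zip(b).map { case (u, v) => u + v })
      .map(_ / n)
    val wNew = wCur.zip(grad).map { case (wi, gi) => wi - stepSize * gi }
    // Stop on convergence of successive guesses or after maxIter passes.
    converged = wCur.zip(wNew).map { case (a, b) => math.abs(a - b) }.max < tol
    w = wNew
    iter += 1
  }
  w
}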
Tuesday, May 5, 2015
Digit recognition with Multiclass SVM on Spark MLlib
To test this multi-class classifier, we can try it on the handwritten digit recognition problem. Get the handwritten digits data from here. Accuracy is only 74% with 100 iterations. Maybe it can't get much better with this construction. A different way of constructing a multi-class classifier from binary SVMs is to use pairwise (one-vs-one) schemes with some adjustments, as described here, and also another method described here. The scikit-learn SVM classifier performs better out of the box (with an RBF kernel, accuracy is in the high 90s), but the sklearn implementation is not scalable. Hopefully Spark MLlib will be able to beat this in the future, when more sophisticated (high-level abstraction) ML pipeline API features come online.
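For reference, a one-vs-rest construction on top of MLlib's binary SVMWithSGD might look like the sketch below. This shows the general approach, not necessarily the exact code behind the 74% result; the function names are illustrative.

import org.apache.spark.mllib.classification.{SVMModel, SVMWithSGD}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Train one binary SVM per class (class k vs. the rest).
def trainOneVsRest(data: RDD[LabeledPoint],
                   numClasses: Int,
                   numIterations: Int = 100): Array[SVMModel] =
  (0 until numClasses).map { k =>
    val binary = data.map(p =>
      LabeledPoint(if (p.label == k.toDouble) 1.0 else 0.0, p.features))
    val model = SVMWithSGD.train(binary, numIterations)
    model.clearThreshold()  // return raw margins instead of 0/1 labels
    model
  }.toArray

// Predict the class whose binary model reports the largest margin.
def predictOneVsRest(models: Array[SVMModel], features: Vector): Double =
  models.zipWithIndex.maxBy { case (m, _) => m.predict(features) }._2.toDouble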
For comparison, here are some results with tree classifiers. With RandomForest (30 trees, Gini impurity, depth 7) accuracy goes up to 93%. Adding second-order feature interactions (Spark doesn't support kernels in classification yet, but a simple feature transformation that adds second-order interactions serves a similar purpose) and increasing the allowed tree depth to 15 brings accuracy to 97%. So, there is a lot of room for improvement in the multiclass-to-binary-classifier reduction.
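A sketch of that interaction transformation, plus the corresponding RandomForest call (tree counts, impurity, and depth taken from the numbers above; everything else is illustrative):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.rdd.RDD

// Append all pairwise products x_i * x_j (i <= j) to the feature vector,
// a crude stand-in for a second-order polynomial kernel. Note that the
// expansion is quadratic in the number of input features.
def addInteractions(p: LabeledPoint): LabeledPoint = {
  val x = p.features.toArray
  val pairs = for {
    i <- x.indices
    j <- i until x.length
  } yield x(i) * x(j)
  LabeledPoint(p.label, Vectors.dense(x ++ pairs))
}

// 10 digit classes, no categorical features, 30 trees, Gini, depth 15.
def trainForest(data: RDD[LabeledPoint]) =
  RandomForest.trainClassifier(data.map(addInteractions),
    numClasses = 10, categoricalFeaturesInfo = Map[Int, Int](),
    numTrees = 30, featureSubsetStrategy = "auto",
    impurity = "gini", maxDepth = 15, maxBins = 32)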