Saturday, June 25, 2016

Weird Spark bug?


Spark 1.5.0-cdh5.5.0:

scala> df.filter("ad_market_id = 4 and event_date = '2016-05-23'").show
+----------+------------+
|event_date|ad_market_id|
+----------+------------+
+----------+------------+


scala> df.filter("ad_market_id = 4").filter("event_date = '2016-05-23'").show
+----------+------------+
|event_date|ad_market_id|
+----------+------------+
+----------+------------+


scala> df.filter("ad_market_id = 4").orderBy("event_date").filter("event_date = '2016-05-23'").show
+----------+------------+
|event_date|ad_market_id|
+----------+------------+
|2016-05-23|           4|
+----------+------------+
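
The combined filter and the chained filters both come back empty, yet sticking an orderBy between the two filters makes the matching row appear. Adding the sort changes the physical plan, so my guess is that the pushed-down combined predicate is what goes wrong, but that is only a guess. A cross-check worth trying is the same predicate through the Column API (not part of the original session):

scala> df.filter($"ad_market_id" === 4 && $"event_date" === "2016-05-23").show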

Tuesday, March 22, 2016

Home Depot Kaggle competition started

Started working on the Home Depot Kaggle competition. This competition requires a lot of text cleaning before any significant improvement over the benchmark can be achieved.
Running cleaning, spell-checking, and initial feature generation on my AWS Spark cluster with 33 nodes.
I might not be able to put a lot of effort into it, but I will make sure I make at least one submission with basic features.
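
To give a flavor of the cleaning step, a minimal sketch (not the actual pipeline; the unit-spelling rule is just one example of the kind of fix needed):

// Lowercase, unify unit spellings like "4 in." vs "4 inches",
// strip punctuation, and collapse whitespace.
def cleanText(s: String): String =
  s.toLowerCase
    .replaceAll("""(\d+)\s*(inches|inch|in\.)""", "$1 inch")
    .replaceAll("""[^a-z0-9 ]""", " ")
    .replaceAll("""\s+""", " ")
    .trim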

Thursday, August 20, 2015

Google Deep Dream Generator

You have all heard of Google Deep Dream by now, so try this online image generator: http://deepdreamgenerator.com/

I thought this image might work well:


Friday, June 5, 2015

Spark MLlib Review

I wrote up a little review of Spark MLlib - it can be found here (PDF).
Iterative methods are at the core of Spark MLlib. Given a problem, we guess an answer, then iteratively improve the guess until some condition is met (e.g. Krylov subspace methods). Improving an answer typically involves a pass through all of the distributed data that aggregates some partial result on the driver node. The partial result is some model, for instance an array of numbers. The stopping condition can be convergence of the sequence of guesses, or reaching the maximum number of allowed iterations.
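
Schematically, that loop looks something like the sketch below (gradient descent for least squares; the names and the fixed step size are illustrative, not MLlib internals):

import org.apache.spark.rdd.RDD

// Guess a model, then repeatedly: pass over the distributed data,
// aggregate a partial result (here, a gradient) on the driver, and
// improve the guess until it converges or we hit maxIter.
def fit(data: RDD[(Array[Double], Double)], dims: Int,
        maxIter: Int, tol: Double, step: Double = 0.01): Array[Double] = {
  var w = Array.fill(dims)(0.0)                        // initial guess
  var iter = 0
  var delta = Double.MaxValue
  while (iter < maxIter && delta > tol) {              // stopping condition
    val wb = w                                         // stable copy for the closure
    // One pass through all of the data, aggregated on the driver.
    val grad = data.map { case (x, y) =>
      val err = x.zip(wb).map { case (xi, wi) => xi * wi }.sum - y
      x.map(_ * err)                                   // per-point gradient
    }.reduce((a, b) => a.zip(b).map { case (u, v) => u + v })
    val next = w.zip(grad).map { case (wi, gi) => wi - step * gi }
    delta = next.zip(w).map { case (a, b) => math.abs(a - b) }.max
    w = next
    iter += 1
  }
  w
}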


Thursday, May 7, 2015

Batcher's odd-even merging network

Calculating a node's partner in a sorting network based on Batcher's odd-even merge.

I couldn't find a closed-form formula for odd-even network node partner calculation; the only available implementations, such as the code provided on Wikipedia, were recursive and not very elegant.

So I decided to work out a simpler and more intuitive solution to odd-even merge-based sorting network partner calculation.
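
In outline, the non-recursive formulation looks something like this (a sketch, assuming the input length is a power of two):

// Sort in place by walking Batcher's odd-even merge network iteratively.
def oddEvenMergeSort(a: Array[Int]): Unit = {
  val n = a.length                  // must be a power of two
  var p = 1
  while (p < n) {
    var k = p
    while (k >= 1) {
      var j = k % p
      while (j + k < n) {
        var i = 0
        while (i <= math.min(k - 1, n - j - k - 1)) {
          // A node's partner is k positions away; compare-exchange
          // only when both ends fall in the same 2p-block.
          if ((i + j) / (2 * p) == (i + j + k) / (2 * p) &&
              a(i + j) > a(i + j + k)) {
            val t = a(i + j); a(i + j) = a(i + j + k); a(i + j + k) = t
          }
          i += 1
        }
        j += 2 * k
      }
      k /= 2
    }
    p *= 2
  }
}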

Also, here I put up a little interactive sorting network generator. Of course, I updated that Wikipedia article to make it easier for learners :)

Here is the best performance analysis of this network that I could find.

Tuesday, May 5, 2015

Digit recognition with Multiclass SVM on Spark MLlib

The current version of Spark MLlib doesn't have multi-class classification with SVM, but it is possible to build multi-class classifiers out of binary ones. One easy way of doing it is the one-vs-all scheme. It is not as accurate as more sophisticated schemes, but it is relatively easy to implement and gives decent results. Here is my implementation.
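
In outline, the construction over MLlib's binary SVMWithSGD looks roughly like this (a sketch; the function names are mine):

import org.apache.spark.mllib.classification.{SVMModel, SVMWithSGD}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Train one binary SVM per class: class c becomes the positive label,
// every other class the negative label.
def trainOneVsAll(data: RDD[LabeledPoint], numClasses: Int,
                  numIterations: Int): Array[SVMModel] =
  (0 until numClasses).map { c =>
    val binary = data.map(p =>
      LabeledPoint(if (p.label == c.toDouble) 1.0 else 0.0, p.features))
    val model = SVMWithSGD.train(binary, numIterations)
    model.clearThreshold()  // keep raw margins, comparable across classes
    model
  }.toArray

// Predict the class whose binary model yields the largest raw margin.
def predictOneVsAll(models: Array[SVMModel], features: Vector): Double =
  models.zipWithIndex.maxBy { case (m, _) => m.predict(features) }._2.toDouble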

To test this multi-class classifier, we can try it on the handwritten digit recognition problem. Get the handwritten digits data from here. Accuracy is only 74% with 100 iterations; maybe it can't get much better with this construction. A different way of constructing multi-class classifiers from binary SVMs is to use pairwise (one-vs-one) schemes with some adjustments, as described here, and also another method described here. The scikit-learn SVM classifier performs better out of the box (with an RBF kernel, accuracy is in the high 90s), but the sklearn implementation is not scalable. Hopefully Spark MLlib will be able to beat this in the future, when more sophisticated (high-level abstraction) ML pipeline API features come online.

For comparison, here are some results with tree classifiers. With RandomForest (30 trees, Gini, depth 7) accuracy goes up to 93%. Adding second-order interactions (Spark doesn't support kernels in classification yet, but here is a simple feature transformation that adds second-order feature interactions, sketched below) and increasing the allowed tree depth to 15 brings accuracy to 97%. So there is a lot of room for improvement in the multiclass-to-binary reduction.
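
The transformation itself is tiny; something along these lines (a sketch, names are mine):

import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Append every pairwise product x_i * x_j (i <= j) to the original
// features, giving tree learners access to second-order interactions.
def addSecondOrderInteractions(v: Vector): Vector = {
  val x = v.toArray
  val pairs = for {
    i <- x.indices
    j <- i until x.length
  } yield x(i) * x(j)
  Vectors.dense(x ++ pairs)
}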

Saturday, March 23, 2013

specialized memory

Who is smarter: a person or an ape? Well, it depends on the task. Consider Ayumu, a young male chimpanzee at Kyoto University who, in a 2007 study, put human memory to shame. Trained on a touch screen, Ayumu could recall a random series of nine numbers, from 1 to 9, and tap them in the right order, even though the numbers had been displayed for just a fraction of a second and then replaced with white squares.
I tried the task myself and could not keep track of more than five numbers—and I was given much more time than the brainy ape. In the study, Ayumu outperformed a group of university students by a wide margin. The next year, he took on the British memory champion Ben Pridmore and emerged the "chimpion."
The Brains of the Animal Kingdom http://online.wsj.com/article/SB10001424127887323869604578370574285382756.html

Sunday, December 3, 2006

Our galaxy: 1 out of ~125,000,000,000.

The Milky Way Galaxy probably looks like this.
There are hundreds of billions of stars in a galaxy, and there are hundreds of billions of galaxies out there.
Some estimate that there are ~40,000,000,000,000,000,000,000 stars in total, which lines up: ~125,000,000,000 galaxies times a few hundred billion stars each is on the order of 10^22. I don't know how to even comprehend such a huge number.
Others have even estimated that there are ~10 stars for every grain of sand on all of Earth's beaches.
Can I just say that the Universe is mind-bogglingly huge and we are so insignificant on that scale?