Machine Learning

Reproducible Research: Valuable course! Learning about R Markdown was very useful; I am already using it in one of my books to make sure that several of my examples are reproducible, by providing an RMD script that produces all of the charts from the book.

Statistical Inference: This was an odd class. I already knew statistical inference and did quite well despite hardly watching any lectures. I don't believe this course made many people happy. Either you already knew the topic and were bored, or you were completely lost trying to learn statistics for the first time. There are several Khan Academy videos that cover all of the material in this course. Why does Hopkins need to reproduce this? Isn't this the point of a MOOC? Why not link to the Khan Academy videos and test the students? Best of both worlds! Also, 90% of the material was not used in the rest of the specialization, so I suspect many students were left wondering what this course was for.

Regression Models: Great course; this is the explainable counterpart to machine learning. You are introduced to linear regression and GLMs. This course was set up as the perfect counterpart to #8. My only beef with this course was that I got screwed by peer review. More on this later.
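For readers who haven't seen simple linear regression before, here is a minimal sketch of what the course's first model actually computes. The course itself works in R (with `lm()`); this is an illustrative Python stand-in, not course material, fitting a least-squares line from scratch.

```python
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    # the fitted line passes through the point of means
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = fit_line([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
print(slope, intercept)  # 2.0 1.0
```

GLMs generalize this same idea by passing the linear predictor through a link function (e.g. logistic regression for binary outcomes), which is where the course goes next.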