Carrot and Stick - Part 2 - Q Learning

From Theory to Practice

When I think of Reinforcement Learning I usually think of an agent or robot traveling through a maze, avoiding traps and collecting supplies. At each step it observes its state and tries to estimate the best action to take based on all the experience it has gained. The way I visualize it, in each state the robot scans through a database, looking up all the valid actions it can take in that state, and picks the one with the best chance of being optimal. Q Learning is a fundamental Reinforcement Learning algorithm that works much like this. This post is dedicated to the Q Learning algorithm; by the end of it you will be able to write your own Q Learning agent and test it in an interactive environment.
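To make that "scan the database of actions" picture concrete, here is a minimal tabular Q Learning sketch in Python. The environment interface (`reset`, `step`, and an `actions(state)` helper listing the valid actions in a state) is an assumption for illustration, not the post's own code:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q Learning over a discrete environment (hypothetical interface)."""
    q = defaultdict(float)  # maps (state, action) -> estimated return

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: explore with probability epsilon,
            # otherwise take the best-looking action in this state.
            actions = env.actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # Q Learning update: nudge Q(s, a) toward the observed reward
            # plus the discounted value of the best action in the next state.
            best_next = 0.0 if done else max(
                q[(next_state, a)] for a in env.actions(next_state)
            )
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

The `defaultdict` plays the role of the robot's database: every (state, action) pair starts at an estimated value of 0.0 and is refined as experience accumulates.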

Carrot and Stick

A Framework to Learn Reinforcement Learning

A while ago I went to a Meetup about Reinforcement Learning (RL) and got into a conversation with someone sitting next to me. He asked me several questions about the subject: What is the difference between RL and supervised/unsupervised learning? What is the difference between the various types of algorithms? When would you choose one framework over another?

A Means to an End

Choosing the right mean for an estimator.

As data scientists we often need to estimate a value from a sample data set in order to answer a business question about the whole population. In most cases we wish to estimate the mean of some value, and the sample mean is a good choice of estimator.
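As a quick illustration (the population below is a hypothetical lognormal distribution, not data from the post), the sample mean of even a modest sample tracks the population mean reasonably well:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical skewed population of 100,000 values.
population = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Estimate the population mean from a sample of 500.
sample = rng.choice(population, size=500, replace=False)
print(f"population mean: {population.mean():.3f}")
print(f"sample mean:     {sample.mean():.3f}")
```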

Gouge Away

Incorporating SQOOP in your Data Pipeline

You have just finished setting up your new and shiny EMR cluster and want to unleash the full power of Spark on the nearest data source.
