## GSoC 2019: A Sampling Notebook

**Published:**

OK, I will keep this one short and sweet, no math. We have a sampling notebook.

**Published:**

A little while ago, I wrote a post on doing gradient descent for ODEs. In that post, I used `autograd` to do the automatic differentiation. While neat, it was really a way for me to get familiar with some math that I was to use for GSoC. After taking some time to learn more about `theano`, I’ve reimplemented the blog post, this time using `theano` to perform the automatic differentiation. If you’ve read the previous post, skip right to the code.

**Published:**

Gradient descent usually isn’t used to fit Ordinary Differential Equations (ODEs) to data (at least, that isn’t how the Applied Mathematics departments of which I have been a part have done it). Nevertheless, that doesn’t mean it can’t be done. For some of my recent GSoC work, I’ve been investigating how to compute gradients of solutions to ODEs without access to the solution’s analytical form. In this blog post, I describe how these gradients can be computed and how they can be used to fit ODEs to synthetic data with gradient descent.
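As a minimal illustration of the idea (not the method derived in the post, which computes the gradient analytically rather than numerically), here is a toy numpy sketch: fit the decay rate `k` of the model `y' = -k*y` by gradient descent, using a forward-Euler solver and a central finite difference in place of the analytical gradient. The model, constants, and learning rate are all made up for illustration.

```python
import numpy as np

# Toy model: y'(t) = -k * y(t). The "data" is the solver output at the
# true k, so gradient descent should recover k almost exactly.
def solve(k, y0=1.0, t_end=2.0, n=200):
    # Forward-Euler integration (a stand-in for a proper ODE solver).
    dt = t_end / n
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y[i + 1] = y[i] - dt * k * y[i]
    return y

k_true = 1.5
data = solve(k_true)

def loss(k):
    return np.mean((solve(k) - data) ** 2)

# Gradient descent. A central finite difference stands in for the
# analytically computed gradient to keep the sketch short.
k, lr, eps = 0.5, 2.0, 1e-6
for _ in range(300):
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)
    k -= lr * grad

print(k)  # ≈ 1.5
```

The finite-difference gradient costs two full ODE solves per step, which is exactly why the post's analytical approach is worth the extra math.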

**Published:**

Let’s take stock of exactly where we are in this journey to implement HMC for differential equations.

**Published:**

Toronto started a pilot project shutting down King Street to private vehicles in an attempt to ease congestion and increase TTC ridership. I’ve obtained some data from the city and have begun analyzing it. Shown here is an initial plot of the change in travel times in certain sections of the city; cooler colors mean travel times have decreased.

**Published:**

I’ll cut right to it. Consider the set $S = \{49, 8, 48, 15, 47, 4, 16, 23, 43, 44, 42, 45, 46\}$. What is the expected value for the minimum of 6 samples from this set?
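A brute-force check is straightforward. Assuming the six samples are drawn without replacement (the puzzle statement leaves this open), one can enumerate every 6-element subset directly:

```python
from itertools import combinations

S = [49, 8, 48, 15, 47, 4, 16, 23, 43, 44, 42, 45, 46]

# Enumerate all C(13, 6) = 1716 subsets and average their minima.
mins = [min(c) for c in combinations(S, 6)]
expected_min = sum(mins) / len(mins)
print(expected_min)  # ≈ 8.82
```

Equivalently, summing $x \cdot \binom{|\{y \in S : y > x\}|}{5} / \binom{13}{6}$ over $x \in S$ gives the exact value $15132/1716$.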

**Published:**

I’m very proud to say I have contributed this example to PyMC3’s documentation. It details how to compute posterior means for Gelman’s rat tumour example in BDA3.

**Published:**

I was recently asked to make 4 plots for a collaborator. The plots are all the same, just a scatter plot and a non-linear trend line. Every time I have to do something repetitive, I wince, *especially with respect to plots*. I thought I would take this opportunity to write a short blog post on how to use functional programming in R to make the same plot for similar yet different data.

**Published:**

Let me ask you a question: considering logistic regression can be performed without the use of a penalty parameter, why does sklearn include a penalty in its implementation of logistic regression? I think most people would reply with something about overfitting, which I suppose is a reasonable answer, but it isn’t very satisfying, especially since the documentation for `sklearn.linear_model.LogisticRegression()` is awash with optimization terminology and never mentions overfitting.

**Published:**

Today was a fairly easy challenge. Part one provides us with a 2-D array of integers and asks us to find the sum of the differences between the largest and smallest numbers in each row. It’s super easy to do without loops if you know how to use numpy.
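A sketch of the numpy approach, on a small made-up array rather than the actual puzzle input:

```python
import numpy as np

# Small illustrative input (the real puzzle input is much larger).
rows = np.array([[5, 1, 9, 5],
                 [7, 5, 3, 1],
                 [2, 4, 6, 8]])

# Per-row spread (max minus min), then the total -- no explicit loops.
checksum = (rows.max(axis=1) - rows.min(axis=1)).sum()
print(checksum)  # row spreads 8, 6, 6 -> 20
```

`axis=1` makes both reductions act along each row, so the whole computation vectorizes.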

**Published:**

When I was doing my Masters, I had to generate a lot of plots, which meant I had to generate a lot of data. Usually, the data would depend on a parameter (maybe something like a rolling-window length, or the bandwidth of some smoothing function), and I would have to try a whirlwind of combinations. What I would end up doing is writing the code to generate the data once, then looping over that code for different values of the parameters.
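The pattern can be sketched in a few lines of Python; the function and parameter values here are made up for illustration:

```python
from itertools import product

def generate_data(window, bandwidth):
    """Stand-in for the real (expensive) data-generating code."""
    return [bandwidth * i for i in range(window)]

# Write the generating code once, then sweep every combination
# of parameters with one loop.
results = {
    (w, b): generate_data(w, b)
    for w, b in product([2, 5, 10], [0.1, 0.5])
}
print(len(results))  # 3 window lengths x 2 bandwidths = 6 runs
```

Keying the results by the parameter tuple makes it easy to pull out whichever combination a given plot needs.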

**Published:**

Back in September 2017, I was really tired of learning about Maximum Likelihood and yearned to do some more machine learning. For the longest time, I had wanted to scrape my gym’s Twitter account to get information about when the gym was busiest.

**Published:**

I love FiveThirtyEight’s Riddler column. Usually, I can solve the problem with computation, but on some rare occasions I can do some interesting math to get the solution without having to code. Here is the first puzzle I ever solved. It is a simple puzzle, yet it has an elegant computational and analytic solution. Let’s take a look.

**Published:**

This is my humble website where I post math/stats/data science related stuff. Keep on the lookout as I keep updating it with cool projects, questions, and thoughts.