Hello, my name is Norman.
I am a PhD student in Canada. My research focuses on reinforcement learning. The primary objectives of my work are to improve agent performance, data efficiency, and task transferability in complex environments. My research applies to financial markets (algorithmic trading), control systems, and robotics.
In my spare time, I enjoy creating and maintaining profitable trading algorithms. Currently, I am applying reinforcement learning and statistical models to cryptocurrency markets.
Noisy Importance Sampling Actor-Critic: We show that an on-policy algorithm can be trained in an off-policy manner by injecting noise; the injected noise fundamentally changes how off-policy samples are weighted.
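The weighting idea above rests on standard importance sampling: samples drawn from a behaviour policy are reweighted by the ratio of target to behaviour probabilities so that their average estimates an expectation under the target policy. The sketch below illustrates only this generic mechanism on a toy discrete action space; it is not the NISAC method itself, and all policies and quantities are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Illustrative behaviour (data-collecting) and target (learned) policies
# over 3 discrete actions.
behaviour = softmax(np.array([1.0, 0.5, 0.2]))
target = softmax(np.array([0.8, 0.9, 0.1]))

# Actions are sampled from the behaviour policy, as in off-policy training.
actions = rng.choice(3, size=1000, p=behaviour)

# Importance weights correct for the policy mismatch:
# w(a) = pi_target(a) / pi_behaviour(a).
weights = target[actions] / behaviour[actions]

# A weighted average over behaviour-policy samples estimates
# the expectation of f under the target policy.
f = np.array([1.0, 2.0, 3.0])          # arbitrary per-action quantity
estimate = np.mean(weights * f[actions])
true_value = np.sum(target * f)
```

With enough samples, `estimate` converges to `true_value`; the variance of the weights is exactly what noise injection in the paper's setting would alter.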
Dynamic Planning Networks: A model that learns to dynamically construct plans using a forward dynamics model. We show that it is better to learn how to plan.
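To make the role of a forward dynamics model concrete, here is a minimal, generic sketch of model-based planning: candidate action sequences are rolled out through the model and scored, and the first action of the best sequence is executed. This is deliberately naive exhaustive search on a toy 1-D task, not the DPN architecture, which learns both the dynamics model and the planning procedure.

```python
import numpy as np

# Toy 1-D environment: the state is a position; the goal is position 5.
GOAL = 5

def dynamics(state, action):
    # Forward model: predicts the next state given an action (-1, 0, or +1).
    # Here it is the true dynamics; in DPN this model is learned.
    return state + action

def reward(state):
    return -abs(state - GOAL)  # closer to the goal is better

def plan(state, horizon=3, actions=(-1, 0, 1)):
    """Exhaustively roll out action sequences through the forward model
    and return the first action of the best-scoring sequence."""
    best_action, best_return = None, -np.inf

    def rollout(s, depth, first, ret):
        nonlocal best_action, best_return
        if depth == horizon:
            if ret > best_return:
                best_return, best_action = ret, first
            return
        for a in actions:
            ns = dynamics(s, a)
            rollout(ns, depth + 1, a if first is None else first, ret + reward(ns))

    rollout(state, 0, None, 0.0)
    return best_action
```

For example, `plan(0)` moves toward the goal and `plan(5)` stays put. Exhaustive rollout costs |A|^horizon model calls, which is exactly why learning where and how deep to plan, as in the paper, matters.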
I worked full-time at Scaled Inference¹ in Palo Alto, CA. My work focused on distributed systems and machine learning.
I am the author of PLE, a reinforcement learning environment for Python.
Interned at Scaled Inference¹ in Palo Alto, CA. My internship focused on combining Bayesian models with deep neural networks. Additionally, I spent time improving modeling speed by porting code to run on GPUs.
Interned at Flipboard, where I created a method for image super-resolution. While there, I built a way to optimize model parameters using Bayesian optimization techniques over clusters of GPUs.
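The Flipboard tuning work itself isn't public; the sketch below shows only the general pattern of sequential Bayesian optimization: fit a Gaussian-process surrogate to the evaluations seen so far, then pick the next evaluation by maximising expected improvement. The objective, kernel length scale, and grid are all illustrative stand-ins for an expensive training run; the cluster part would simply evaluate several suggested points in parallel.

```python
import numpy as np
from math import erf, sqrt

def objective(x):
    # Stand-in for an expensive run (e.g. validation loss of a trained model).
    return np.sin(3 * x) + 0.5 * x**2

def rbf(a, b, length=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and std-dev at query points Xs given data (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI acquisition for minimisation: how much we expect to beat `best`.
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.array([erf(v / sqrt(2)) for v in z]))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

grid = np.linspace(-2, 2, 401)
X = np.array([-1.5, 0.0, 1.5])      # initial evaluations
y = objective(X)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmin(y)]            # best parameter value found
```

The loop trades off exploring uncertain regions against exploiting promising ones, which is why it needs far fewer evaluations than grid search when each evaluation means training a model.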
Performed research at the University of Western Ontario on anomaly detection in streaming electrical data using machine learning methods.
@normantasfi or email (n plus tasfi at google email)
¹ Ceased operations in 2019.