Hands-on Deep Learning: TensorFlow Coding Sessions

October 16, 2018

Hello and welcome to the hands-on deep learning with TensorFlow coding sessions. Unlike the Deep Learning Crash Course series that I produced last semester, this series will focus only on coding.

Read on

Computer Vision & Image Processing

September 15, 2018

The terms computer vision and image processing are used almost interchangeably in many contexts. They both involve doing some computations on images. But are they really the same thing? Let's talk about what they are, how they are different, and how they are linked to each other.

Read on

How Video Compression Works

August 23, 2018

Have you ever thought about how video streaming is possible? Let's think about how big a typical 1080p video is: 1920x1080 pixels, 24 bits each, 30 frames per second... That's almost 1.5 gigabits per second. How can you transmit that much data, over the air, in real time? The answer is video compression.
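That 1.5-gigabit figure is easy to verify with back-of-the-envelope arithmetic, using only the numbers in the paragraph above:

```python
# Raw bitrate of uncompressed 1080p video, using the figures above.
width, height = 1920, 1080
bits_per_pixel = 24            # 8 bits each for red, green, and blue
frames_per_second = 30

bits_per_second = width * height * bits_per_pixel * frames_per_second
print(bits_per_second / 1e9)   # ~1.49 -- "almost 1.5 gigabits per second"
```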

Read on

How Digital Images are Represented, Compressed, and Stored

July 31, 2018

What's up, everybody! Today we're talking about how digital images are represented, compressed, and stored on your devices. Let's get started!

Read on

How Digital Cameras Process Images

July 7, 2018

Today we're talking about how digital cameras convert raw images into natural-looking pictures. It's natural to think that what we see on a display is what the camera actually sees right at the sensor. That's not really the case. You wouldn't want to look at a picture that comes directly from the sensor. Several steps of processing need to be done before it looks natural to us.

Read on

Farewell Chicago | Timelapse Video

May 11, 2018

This is my last week in Chicago. I made this timelapse video as a farewell to this beautiful city, before I head to the Golden State. Here's how I made this video...

Read on

Deep Learning Crash Course: Introduction

April 29, 2018

What if we could teach computers by example? Instead of providing them with a comprehensive set of rules, we could show them some examples so that they can understand how the world works. That's what machine learning does. In this series, we will learn the fundamentals of machine learning, with a focus on deep learning. We will talk about where to find data, how to build models that can process it, and even how to generate new data.

Read on

Practical Methodology in Deep Learning

April 21, 2018

It's certainly useful to know the fundamental ideas and the math behind deep learning, but in many cases, you rarely need a lot of math to build a machine learning model. In this video, we are going to focus on practical information and go through a basic recipe for machine learning, which can be used to tackle many types of machine learning problems.

Read on

Generative Adversarial Networks

April 13, 2018

Most of the examples we have seen in this series so far have been discriminative models. Sometimes we need models that can not only recognize what's in the input but also generate new samples. Such models are called generative models. A recently proposed approach, called generative adversarial networks, has shown great success in building models that can generate samples similar to the ones in a given dataset.

Read on

Deep Unsupervised Learning

April 5, 2018

In supervised learning, the learning algorithm tries to learn a mapping between inputs and known outputs. Creating these outputs usually involves some sort of human supervision, such as labeling the inputs by hand or using data that are already annotated by humans in different ways. Unsupervised learning, on the other hand, aims to find some structure in the data without having labels.

Read on

Recurrent Neural Networks

March 28, 2018

So far we have discussed only feedforward neural networks in this series. Feedforward neural networks have no feedback loops: data flows in one direction, from the input layer to the outputs. Recurrent neural networks, or RNNs for short, use inputs from previous steps to help a model remember its past. This is usually shown as a feedback loop on the hidden units. These types of models are particularly useful for processing sequential data.
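The feedback loop can be sketched in a few lines of NumPy (an illustrative toy with made-up sizes and random weights, not code from the video):

```python
import numpy as np

# A single recurrent layer: the hidden state h is fed back into itself,
# so at every step h summarizes everything the model has seen so far.
rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(4, 3))   # input -> hidden weights
W_hh = 0.1 * rng.normal(size=(4, 4))   # hidden -> hidden (the feedback loop)

h = np.zeros(4)                        # initial hidden state ("memory")
for x in rng.normal(size=(5, 3)):      # a sequence of 5 input vectors
    h = np.tanh(W_xh @ x + W_hh @ h)   # new state mixes input with the past
print(h.shape)                         # the state keeps a fixed size: (4,)
```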

Read on

Optimization Tricks: momentum, adaptive methods, batch-norm, and more...

March 20, 2018

Deep learning is closely related to mathematical optimization. What people usually mean by optimization is to find a set of parameters that minimize or maximize a function. In the context of neural networks, this usually means minimizing a cost function by iteratively tuning the trainable parameters. Perhaps the biggest difference between pure mathematical optimization and optimization in deep learning is that...
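Iteratively tuning parameters to minimize a cost usually means some form of gradient descent; here is a minimal one-dimensional sketch (an illustration on a toy function, not the video's code):

```python
# Plain gradient descent on f(x) = (x - 3)^2, whose minimum sits at x = 3.
def grad(x):
    return 2 * (x - 3)             # derivative of (x - 3)^2

x, learning_rate = 0.0, 0.1
for _ in range(100):
    x -= learning_rate * grad(x)   # step downhill, against the gradient
print(round(x, 4))                 # approaches the minimizer, 3.0
```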

Read on

Transfer Learning

February 27, 2018

Let's talk about the fastest and easiest way you can build a deep learning model, without worrying too much about how much data you have. Training a deep model may require a lot of data and computational resources, but luckily there's transfer learning.

Read on

How to Design a Convolutional Neural Network

February 20, 2018

Designing a good model usually involves a lot of trial and error. It is still more of an art than science, and people have their own ways of designing models. So the tricks and design patterns that I will be presenting in this video will be mostly based on 'folk wisdom', my personal experience with designing models, and ideas that come from successful model architectures.

Read on

Convolutional Neural Networks

February 10, 2018

Let's talk about Convolutional Neural Networks, which are a specialized kind of neural network that has been very successful at computer vision tasks, such as recognizing objects, scenes, and faces, among many other applications.
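The convolution operation at the heart of these networks fits in a few lines (a generic 2-D cross-correlation sketch with a toy edge-detection kernel; names and sizes are illustrative):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation of a 2-D image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.array([[1.0, -1.0]])            # responds to horizontal edges
image = np.array([[0.0, 0.0, 1.0, 1.0]])
print(conv2d(image, kernel))                # fires only at the 0 -> 1 edge
```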

Read on

Data Collection and Preprocessing

January 28, 2018

Accio Data! Data collection is one of the most important parts of building machine learning models, because no matter how well designed our model is, it won't learn anything useful if the training data is invalid. It's garbage in, garbage out: invalid data leads to invalid results.

Read on

Regularization

January 21, 2018

Occam's razor states that when you have two competing hypotheses that make the same predictions, the simpler one is better. This is not an unquestionable statement, but it is a useful principle in many contexts. In the context of machine learning, we can rephrase it as: given two models with similar performance, it's better to choose the simpler one.
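In practice, "choose the simpler one" is often baked right into training as a penalty on the weights. Here is a generic L2 (weight decay) sketch, not code from the video; the function name and numbers are illustrative:

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam=0.01):
    """Total loss = fit-to-data term + lam * sum of squared weights."""
    return data_loss + lam * np.sum(weights ** 2)

# Larger weights mean a more complex model, so they cost more:
w = np.array([1.0, -2.0, 3.0])
print(l2_regularized_loss(0.5, w, lam=0.1))  # data loss 0.5 plus penalty 0.1 * 14
```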

Read on

Overfitting, Underfitting, and Model Capacity

January 14, 2018

Can a machine learning model predict a lottery? Given the lottery is fair and truly random, the answer must be no, right? What if I told you that it is indeed possible to fit a model to historical lottery data? Sounds awesome! Then why don't we go ahead and train such a model to predict the lottery? Let's find out!

Read on

Artificial Neural Networks: going deeper

December 23, 2017

Welcome back! In the previous video, we talked about what artificial neural networks are and how to train a single neuron. If you haven't watched the previous video yet, find the link in the description below. In this video, we will pick up where we left off and talk about how to train deeper and more complex networks.

Read on

Artificial Neural Networks: demystified

December 16, 2017

The first time I heard about neural networks I was 11 or so. I saw an article in a popular tech magazine that said scanners now used neural networks to recognize characters. Naively, I thought they used actual biological neurons. When I told my mom about it, she said that if they use anything biological, then you need to feed it. Does the scanner consume sugar or something? I mean, she was right. But we don't feed our scanners sugar, do we? Many years later I figured out that what they used was nothing but a mathematical model. So what's so neural about them?

Read on

PaperPlane: a simple static blog generator

February 23, 2015

You like this website? You can easily create your own using PaperPlane, which is a very simple, flat-file, static blog generator. No server-side querying is required to display the pages. The data is stored in a folder containing a text file for each blog entry...

Read on

UTimelapse: a tool for creating high-quality timelapse videos

May 4, 2014

UTimelapse is a tool that generates timelapse videos from a series of still images. The main goal of this tool is to create high-quality videos without requiring high-end equipment. The tool addresses three main problems that are encountered in timelapse photography: camera shake, flickering, and single-frame artifacts.

Read on