Releasing Docker Container and Binder for using Xeus-Cling, Libtorch and OpenCV in C++

Today, I am elated to share a Docker image for OpenCV, Libtorch and Xeus-Cling. We’ll discuss how to use the dockerfile and binder. Before I move on, the credit for creating and maintaining the Docker image goes to Vishwesh Ravi Shrimali. He has been working on some cool stuff; please do get in touch with him if you’re interested to know more. The first question on your mind would be: why use Docker or Binder at all? The answer lies in how often questions about installing Libtorch with OpenCV on Windows/Linux/macOS come up on the PyTorch discussion forum and Stack Overflow. I’ve had nightmares setting up Libtorch on Windows myself, and nothing could be better than using Docker. Read on to know why. ...

September 15, 2020 · 3 min · Kushashwa Ravi Shrimali

[Training and Results] Deep Convolutional Generative Adversarial Networks on CelebA Dataset using PyTorch C++ API

It’s been around 5 months since I released my last blog on DCGAN Review and Implementation using the PyTorch C++ API, and I’ve missed writing blogs badly! Straight to the point: I’m back! But before we start, the PyTorch C++ Frontend has gone through several changes, and thanks to the awesome contributors around the world, it resembles the Python API more than it ever did! Since a lot of things have changed, I have also updated my previous blogs (tested on the 1.4 stable build). ...

February 23, 2020 · 3 min · Kushashwa Ravi Shrimali

Deep Convolutional Generative Adversarial Networks: Review and Implementation using PyTorch C++ API

I’m pleased to start a series of blogs on GANs and their implementation with the PyTorch C++ API. We’ll be starting with one of the earliest GAN architectures - DCGANs (Deep Convolutional Generative Adversarial Networks). In this seminal paper on DCGANs, the authors (Alec Radford, Luke Metz and Soumith Chintala) introduced them to the world like this: We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. ...
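To make those “architectural constraints” concrete, here is a minimal, hypothetical generator sketch in LibTorch (not the exact model from the post): strided transposed convolutions upsample a noise vector into an image, with batch norm and ReLU in the hidden layers and tanh at the output.

```cpp
#include <torch/torch.h>

// Minimal DCGAN-style generator sketch: nz-dim noise -> nc x 32 x 32 image.
struct GeneratorImpl : torch::nn::Module {
  GeneratorImpl(int64_t nz = 100, int64_t ngf = 64, int64_t nc = 3) {
    net = torch::nn::Sequential(
        // nz x 1 x 1 -> (ngf*4) x 4 x 4
        torch::nn::ConvTranspose2d(
            torch::nn::ConvTranspose2dOptions(nz, ngf * 4, 4).bias(false)),
        torch::nn::BatchNorm2d(ngf * 4), torch::nn::ReLU(),
        // (ngf*4) x 4 x 4 -> (ngf*2) x 8 x 8
        torch::nn::ConvTranspose2d(
            torch::nn::ConvTranspose2dOptions(ngf * 4, ngf * 2, 4).stride(2).padding(1).bias(false)),
        torch::nn::BatchNorm2d(ngf * 2), torch::nn::ReLU(),
        // (ngf*2) x 8 x 8 -> ngf x 16 x 16
        torch::nn::ConvTranspose2d(
            torch::nn::ConvTranspose2dOptions(ngf * 2, ngf, 4).stride(2).padding(1).bias(false)),
        torch::nn::BatchNorm2d(ngf), torch::nn::ReLU(),
        // ngf x 16 x 16 -> nc x 32 x 32, squashed into [-1, 1]
        torch::nn::ConvTranspose2d(
            torch::nn::ConvTranspose2dOptions(ngf, nc, 4).stride(2).padding(1).bias(false)),
        torch::nn::Tanh());
    register_module("net", net);
  }
  torch::Tensor forward(torch::Tensor z) { return net->forward(z); }
  torch::nn::Sequential net;
};
TORCH_MODULE(Generator);
```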

September 15, 2019 · 10 min · Kushashwa Ravi Shrimali

Setting up Jupyter Notebook (Xeus Cling) for Libtorch and OpenCV Libraries

Introduction to Xeus Cling: Today, we are going to run our C++ code in the Jupyter Notebook. Sounds ambitious? Not really. Let’s see how we do it using Xeus Cling. I’ll quote the definition of xeus-cling from the official documentation: xeus-cling is a Jupyter kernel for C++ based on the C++ interpreter cling and the native implementation of the Jupyter protocol xeus. Just like we use the Python kernel in a Jupyter Notebook, we can combine the C++ interpreter cling with xeus, a native implementation of the Jupyter protocol, to get close to running C++ code interactively in a notebook. ...
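As a flavour of what that looks like in practice, here is a hypothetical first cell of a xeus-cling notebook (the include and library paths below are placeholders, not the ones used in this post): cling pragmas point the interpreter at OpenCV, and statements run directly in the cell with no main() needed.

```cpp
// Hypothetical xeus-cling notebook cell; paths are placeholders.
#pragma cling add_include_path("/usr/local/include/opencv4")
#pragma cling add_library_path("/usr/local/lib")
#pragma cling load("opencv_core")

#include <opencv2/core.hpp>
#include <iostream>

// Runs interactively, just like a Python notebook cell.
cv::Mat m = cv::Mat::eye(3, 3, CV_32F);
std::cout << m << std::endl;
```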

August 28, 2019 · 7 min · Kushashwa Ravi Shrimali

Applying Transfer Learning on Dogs vs Cats Dataset (ResNet18) using PyTorch C++ API

Transfer Learning – Before we go ahead and discuss why we use Transfer Learning, let’s have a look at what Transfer Learning is. Let’s have a look at the notes from CS231n on Transfer Learning: In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest. ...
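As a rough illustration of the “fixed feature extractor” option in LibTorch (not the code from this post): assume a pretrained ResNet18 backbone, with its final FC layer removed, has been traced in Python and saved to a hypothetical file resnet18_features.pt; we freeze it and train only a small classification head for the two Dogs vs Cats classes.

```cpp
#include <torch/script.h>
#include <torch/torch.h>

int main() {
  // Load the traced, pretrained backbone and freeze its parameters.
  torch::jit::script::Module backbone = torch::jit::load("resnet18_features.pt");
  for (auto p : backbone.parameters()) {
    p.set_requires_grad(false);
  }

  // New classification head for the 2-class Dogs vs Cats task.
  torch::nn::Linear head(512, 2);
  torch::optim::SGD optimizer(head->parameters(), /*lr=*/0.01);

  // One illustrative training step on a dummy batch of four 224x224 RGB images.
  auto images = torch::rand({4, 3, 224, 224});
  auto labels = torch::randint(0, 2, {4}, torch::kLong);

  auto features = backbone.forward({images}).toTensor().view({4, 512});
  auto loss = torch::nn::functional::cross_entropy(head->forward(features), labels);

  optimizer.zero_grad();
  loss.backward();
  optimizer.step();
}
```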

August 16, 2019 · 8 min · Kushashwa Ravi Shrimali

Classifying Dogs vs Cats using PyTorch C++: Part 2

In the last blog, we discussed everything except training and results of our custom CNN on the Dogs vs Cats dataset. Today, we’ll be making some small changes to the network and discussing training and results for the task. I’ll start with the network overview again: we used a network similar to VGG-16 (with one extra fully connected layer at the end). While there is nothing wrong with that network, the dataset contains a lot of images (25,000 in the training set) and we were feeding the network a 200x200x3 input shape (120,000 floating-point numbers per image), which leads to high memory consumption. In short, I ran out of RAM trying to store that many images during program execution. ...
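For a sense of scale: 25,000 images x 200 x 200 x 3 floats x 4 bytes comes to roughly 12 GB if every image were held in memory as a float tensor at once, which is why the input size matters so much here.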

July 31, 2019 · 7 min · Kushashwa Ravi Shrimali

Classifying Dogs vs Cats using PyTorch C++ API: Part-1

Hi Everyone! So excited to be back with another blog in the series of PyTorch C++ blogs. Today, we are going to see a practical example of applying a CNN to a custom dataset - Dogs vs Cats. This is going to be a short post showing results and discussing hyperparameters and loss functions for the task, as code snippets and explanations have been provided here, here and here. ...

July 23, 2019 · 7 min · Kushashwa Ravi Shrimali

Training a Network on Custom Dataset using PyTorch C++ API

Recap of the last blog: Before we move on, it’s important to recap what we covered in the last blog. We’ll be moving forward from loading a custom dataset to now using that dataset to train our VGG-16 network. Previously, we were able to load our custom dataset using the following template: ...
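The template itself is cut off in this excerpt; as a stand-in, here is a generic, hypothetical sketch of what a custom dataset looks like in the C++ API: subclass torch::data::Dataset and implement get() and size() (the file paths and image loading below are placeholders).

```cpp
#include <torch/torch.h>
#include <string>
#include <utility>
#include <vector>

class CustomDataset : public torch::data::Dataset<CustomDataset> {
 public:
  CustomDataset(std::vector<std::string> paths, std::vector<int64_t> labels)
      : paths_(std::move(paths)), labels_(std::move(labels)) {}

  // Return one (image, label) pair as tensors. The image is stubbed out here;
  // a real loader would read and decode paths_[index] at this point.
  torch::data::Example<> get(size_t index) override {
    torch::Tensor image = torch::zeros({3, 200, 200});  // placeholder pixels
    torch::Tensor label = torch::tensor(labels_[index]);
    return {image, label};
  }

  torch::optional<size_t> size() const override { return paths_.size(); }

 private:
  std::vector<std::string> paths_;
  std::vector<int64_t> labels_;
};
```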

July 5, 2019 · 4 min · Kushashwa Ravi Shrimali

Announcing a series of blogs on PyTorch C++ API

I’m happy to announce a series of blog posts on the PyTorch C++ API. Check out the blogs in the series here. Happy Reading!

July 4, 2019 · 1 min · Kushashwa Ravi Shrimali

Custom Data Loading using PyTorch C++ API

Overview: How does the C++ API load data? In the last blog, we discussed applying a VGG-16 network to the MNIST data. For those who are reading this blog for the first time, here is how we loaded the MNIST data: auto data_loader = torch::data::make_data_loader<torch::data::samplers::SequentialSampler>( std::move(torch::data::datasets::MNIST("../../data").map(torch::data::transforms::Normalize<>(0.13707, 0.3081))).map( torch::data::transforms::Stack<>()), 64); Let’s break this down piece by piece, as it may be unclear for beginners. First, we ask the C++ API to load the data (images and labels) into tensors. ...
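Here is the same pipeline written out step by step as a minimal, self-contained sketch (assuming LibTorch is installed and the MNIST files live under ../../data), so that each stage of the chained call is visible:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // 1. Load the raw MNIST images and labels as tensors.
  auto dataset = torch::data::datasets::MNIST("../../data")
                     // 2. Normalize pixels with the dataset mean and stddev.
                     .map(torch::data::transforms::Normalize<>(0.13707, 0.3081))
                     // 3. Stack individual examples into batched tensors.
                     .map(torch::data::transforms::Stack<>());

  // 4. Wrap the dataset in a loader that yields batches of 64 in order.
  auto data_loader =
      torch::data::make_data_loader<torch::data::samplers::SequentialSampler>(
          std::move(dataset), /*batch_size=*/64);

  for (auto& batch : *data_loader) {
    std::cout << batch.data.sizes() << " " << batch.target.sizes() << "\n";
    break;  // just peek at the first batch
  }
}
```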

July 2, 2019 · 8 min · Kushashwa Ravi Shrimali