
Machine Learning @ Quinnox | Love to write about Deep Learning for NLP and Computer Vision, Model Deployment, and ReactJS.

Natural Language Processing

T5 — A poor man’s GPT-3

Photo by Tech Daily on Unsplash

Introduction to the T5 Transformer

The folks at Google AI published the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", presenting an empirical study of which pre-training approaches and transfer learning techniques work best, and then used that study to create a new model: the Text-To-Text Transfer Transformer (T5). This model was pre-trained on a much cleaner version of the Common Crawl corpus, which Google named the Colossal Clean Crawled Corpus (C4). Sounds cool right 😎. …
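To get a feel for the text-to-text framing, here is a minimal sketch (not from the paper; the checkpoint and prompt are illustrative) that loads a pre-trained T5 from Hugging Face transformers and runs a summarization-style prompt:

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# "t5-small" is an illustrative choice; any T5 checkpoint follows the same pattern
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 casts every task as text-in, text-out, so the task name lives in the prompt
text = "summarize: T5 frames every NLP task as feeding in text and getting text back out."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The same model handles translation or classification simply by changing the prefix in the prompt.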


Natural Language Processing

Build and Train a Transformer from scratch

Source: Pixabay

Continuing from the previous part, in this part we will look at two different techniques to train a Self-Attention Transformer network to classify a piece of text (a question, in our case) into two different categories, each category containing some number of classes. We will reuse the Encoder-Decoder modules from the previous part to code a transformer network and then train it.

Technique 1: N-Class Head Classification Transformer

Our end goal is to assign two different class names to a given question. Here, we can pass the features extracted from the Encoder-Decoder layers of the Self-Attention Transformer to two fully connected…
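As a rough sketch of that idea (the feature dimension and class counts below are hypothetical, not the article's), the shared transformer features feed two separate fully connected heads, one per category:

import torch
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    """Shared transformer features feeding two classification heads."""
    def __init__(self, encoder, d_model=512, n_classes_a=6, n_classes_b=47):
        super().__init__()
        self.encoder = encoder                         # the transformer built in the previous part
        self.head_a = nn.Linear(d_model, n_classes_a)  # classes of the first category
        self.head_b = nn.Linear(d_model, n_classes_b)  # classes of the second category

    def forward(self, x):
        features = self.encoder(x)      # assumed shape: (batch, seq_len, d_model)
        pooled = features.mean(dim=1)   # simple mean pooling over tokens
        return self.head_a(pooled), self.head_b(pooled)

At training time each head gets its own cross-entropy loss, and the two losses are summed before backpropagation.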


Natural Language Processing

Develop a text generation API

Photo by Tech Daily on Unsplash

In the blog Generating storylines using a T5 Transformer, we saw how to fine-tune a Sequence2Sequence (Text-To-Text) Transformer (T5) to generate storylines/plots from inputs like genre, director, cast, and ethnicity. In this blog, we will check out how we can use that trained T5 model for inference. Later, we will also see how we can deploy it using gunicorn and flask.

How to do Model Inference?

  • Let’s set up the script with the imports
# general-purpose utilities
import os
import re
import math
import random
import pickle

# data handling and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook, tnrange
from sklearn.utils import shuffle

# PyTorch and the Hugging Face transformers library
import torch
import torch.nn.functional as F
from transformers import T5Tokenizer…
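With the imports in place, a minimal inference sketch looks like the following; the checkpoint directory is a placeholder for wherever the fine-tuned model from the previous blog was saved, and the input format is illustrative:

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# load the fine-tuned checkpoint (the path here is hypothetical)
tokenizer = T5Tokenizer.from_pretrained("./t5-storyline-checkpoint")
model = T5ForConditionalGeneration.from_pretrained("./t5-storyline-checkpoint")
model.eval()

prompt = "genre: comedy director: John Doe cast: Jane Doe"  # illustrative input format
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=128,
        do_sample=True,  # sampling gives more varied storylines than greedy decoding
        top_k=50,
        top_p=0.95,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))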


Kafka made easy with Flask

Photo by Birmingham Museums Trust on Unsplash

What is Kafka?

Apache Kafka is a highly fault-tolerant event streaming platform. In event streaming, data is captured in real time from different event sources; it can be your web analytics data, data from a thermostat, or even a database. Along with data capture, Kafka provides a horde of resources to manipulate and process that data, and to divide resources efficiently by prioritising highly critical processes over moderately impactful ones. That's Kafka and event streaming in short. To learn more about Kafka, check out this video.
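To make this concrete, here is a minimal producer/consumer sketch using the kafka-python client; the topic name and broker address are illustrative:

import json
from kafka import KafkaProducer, KafkaConsumer

# producer: push an event onto a topic
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("model-inferences", {"text": "some input", "label": "positive"})
producer.flush()

# consumer: read events back off the same topic
consumer = KafkaConsumer(
    "model-inferences",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)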

How can Kafka help with streaming model inferences?

Most Deep Learning models are deployed via Flask over REST…
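For reference, the typical Flask-over-REST setup looks roughly like this sketch (the endpoint name and the model_inference stub are placeholders, not the article's code), usually served in production with gunicorn:

from flask import Flask, request, jsonify

app = Flask(__name__)

def model_inference(text):
    # stand-in for the real model call
    return "positive"

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    result = model_inference(payload["text"])
    return jsonify({"prediction": result})

# in production: gunicorn -w 2 -b 0.0.0.0:8000 app:app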


Computer Vision

SimCLRv2 explained

Source: Shutterstock

Last year, the Google Brain team presented a new iteration of their previous state-of-the-art self-supervised approach for image classification, called SimCLRv2. The key ingredient of the approach is using a big (deep and wide) network during pre-training and fine-tuning. This task-agnostic approach to self-supervised learning is more fruitful when the subsequent supervised fine-tuning is performed with fewer labels.

The Semi-Supervised learning algorithm can be summarised in three steps:

  • Unsupervised pre-training of a big ResNet model using the SimCLRv2 methodology (a sketch of its contrastive loss follows this list)
  • Supervised fine-tuning on a few labelled examples
  • Distillation with unlabelled data examples for refining and transferring task-specific knowledge or…
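The contrastive objective behind that first step is SimCLR's NT-Xent loss; a minimal sketch of it (my illustration in PyTorch, not the paper's code) is:

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, d) projection-head outputs for two augmented views of N images
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d) unit vectors
    sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a view is never its own positive
    # the positive for row i is the other augmented view of the same image
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)]).to(z.device)
    return F.cross_entropy(sim, targets)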


Natural Language Processing

Build a Transformer from scratch

Source: Pixabay

1. Coding a Transformer network in PyTorch

In this part, we will try to understand the Encoder-Decoder architecture of the Multi-Head Self-Attention Transformer network with some code in PyTorch. There won't be any theory involved (a better theoretical version can be found here), just the barebones of the network and how one can write it on one's own in PyTorch.

The Transformer architecture is divided into two parts: the Encoder part and the Decoder part. Several other components combine to form the Encoder and Decoder parts. Let's start with the Encoder.

Encoder

The Encoder part is quite simple compared to the Decoder part. The Encoder…
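To preview where this is headed, a single encoder block can be sketched with PyTorch's built-in multi-head attention as follows; the hyperparameter defaults are the usual ones from the paper, not necessarily the values used later in this series:

import torch.nn as nn

class EncoderBlock(nn.Module):
    """Self-attention plus feed-forward, each with a residual connection and layer norm."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, pad_mask=None):
        attn_out, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        x = self.norm1(x + self.drop(attn_out))     # residual + layer norm
        x = self.norm2(x + self.drop(self.ff(x)))   # residual + layer norm
        return x

The full Encoder simply stacks several of these blocks on top of the token and positional embeddings.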


Computer Vision, Deep Learning

A CNN-free GAN network

Photo by Daniel McCullough on Unsplash

Most NLP tasks are currently solved using the Transformer network or a variation of it. Transformers have become an integral part of the NLP ecosystem over the past few years because of their reusability. Some multi-modal tasks use a transformer network somewhere, but they still aren't CNN-free: any Computer Vision task coupled with Transformers also employs a CNN as the backbone for feature extraction. But with TransGAN, a pure transformer-based architecture is developed to train a GAN for image synthesis. …


Computer Vision

Deep Learning way to search images

Photo by Maria Teneva on Unsplash

Recently, the researchers at OpenAI published a multi-modal architecture that can be used for 30 different tasks once pre-trained on around 400 million image-text pairs. This methodology isn't that new; previously, many other researchers have tried to use a combination of a Text Transformer and a pre-trained CNN model to pre-train on image-text pairs and then use the result on different downstream tasks. But for a variety of reasons, those approaches weren't that successful, as discussed in the paper. A variety of pre-training approaches were tried, both predictive and contrastive, to achieve SOTA-level accuracy on different downstream tasks. In the predictive…
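The contrastive variant is the one that won out: the image and text encoders are trained so that matching pairs score highest within a batch. A minimal sketch of that symmetric objective (my illustration, not OpenAI's code) is:

import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    # image_emb, text_emb: (N, d) embeddings of N matching image-text pairs
    img = F.normalize(image_emb, dim=1)
    txt = F.normalize(text_emb, dim=1)
    logits = img @ txt.t() / temperature  # (N, N) pairwise similarities
    targets = torch.arange(img.size(0), device=img.device)
    # each image should match its own caption, and each caption its own image
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))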


A hands-on guide on

A hassle-free approach to deploy Image models


In this blog, we will try to deploy a Multi-Label Image Classifier using Streamlit. Every Deep Learning practitioner knows it's very tedious to deploy a Deep Learning model that takes an image as input. With text it's easy, as the input text can be passed in JSON via the API call, but with images there are a few extra steps involved. When passing an image as input via an API request, it must first be converted to a base64 string, or it should be uploaded to a bucket directly from the UI and the link of that image should then…
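Streamlit sidesteps all of that with its file uploader widget; a minimal sketch of the pattern (predict_labels is a stand-in for the actual classifier) looks like:

import streamlit as st
from PIL import Image

def predict_labels(image):
    # stand-in for the real multi-label classifier
    return ["label_a", "label_b"]

st.title("Multi-Label Image Classifier")

uploaded = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
if uploaded is not None:
    image = Image.open(uploaded)  # no base64 encoding or bucket upload needed
    st.image(image, caption="Input image")
    st.write("Predicted labels:", predict_labels(image))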


Deep Learning, Programming

An effortless way to publish data apps to the internet.

Source: freepik.com

Deep Learning and Machine Learning models trained by many data professionals either end up in an inference.ipynb notebook or an app.py file 😅. Those meticulous model architectures, capable of creating awe in the real world, never see the light of day. Those models just sit in the background processing requests via an API gateway, doing their job silently and making the system more intelligent.

People using those intelligent systems don’t always credit the Data Professionals who spent hours or weeks or months collecting data, cleaning the collected data, formatting the data to use it correctly, writing the model architecture…

Vatsal Saglani
