2 October 2019
BERT word embeddings. What are they? How do they work? What do they have to offer? And most of all, how effective are they compared to what we used before...
"BERT Word Embeddings"
25 September 2019
An overview describing the landscape of transfer learning and what it means for machine learning:
"Transfer Learning - Machine Learning's Next Frontier"
20 September 2019
An easy-to-grasp blog post for getting started with serverless architectures. It also includes a tutorial for setting up a service on Amazon AWS using the Serverless framework.
"A Serverless Function Example: Why & How to Get Started"
19 September 2019
An interesting blog post to get started with tmux (a terminal multiplexer):
"A Quick and Easy Guide to tmux" by Ham Vocke. It should be followed by
"Making tmux Pretty and Usable - A Guide to Customizing your tmux.conf"
28 July 2019
A webcomic of romance, sarcasm, math and language, called
xkcd
2 June 2019
About lightweight, order-preserving dictionary string compression in column stores. Lexicographic order of the strings is preserved by assigning each string in the dictionary an integer code in the same order, which keeps comparisons cheap and delays the actual materialization of strings from the column.
Link to the paper
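A toy Python sketch of the core idea as I understand it (my own illustration, not the paper's actual encoding scheme): sort the distinct strings once, hand out codes in that order, and evaluate range predicates on the codes alone:

```python
# Toy order-preserving dictionary encoding: code order == lexicographic string order.
values = ["delta", "alpha", "charlie", "alpha", "bravo"]

dictionary = {s: code for code, s in enumerate(sorted(set(values)))}
encoded_column = [dictionary[s] for s in values]        # store integers, not strings

# Range predicates can be evaluated on codes, without materializing the strings:
low, high = dictionary["alpha"], dictionary["charlie"]
matches = [i for i, code in enumerate(encoded_column) if low <= code <= high]
print(matches)  # rows whose value lies between "alpha" and "charlie"
```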
28 March 2019
A how-to article on big data processing with the MapReduce paradigm on Amazon AWS infrastructure, using S3 and Lambda functions.
Ad Hoc Big Data Processing Made Simple with Serverless MapReduce
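Roughly, the "map" side boils down to a Lambda like the sketch below. The bucket and key names are placeholders of mine, not the article's, and the reduce step is omitted:

```python
# A hypothetical mapper Lambda: count words in one S3 object and write the
# partial result back to S3. Bucket/key names are placeholders.
import json
from collections import Counter

import boto3

s3 = boto3.client("s3")
RESULT_BUCKET = "my-mapreduce-results"  # placeholder bucket

def handler(event, context):
    bucket, key = event["bucket"], event["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    counts = Counter(body.split())          # the "map" step: per-object word counts
    s3.put_object(Bucket=RESULT_BUCKET,
                  Key=f"partial/{key}.json",
                  Body=json.dumps(counts))
    return {"statusCode": 200, "partial_key": f"partial/{key}.json"}
```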
15 January 2019
Truth be told, webhooks are just the good old observer design pattern... (only over the web)! The what and why of webhooks:
What are Webhooks? Easy Explanation & Tutorial
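To see the observer analogy, here is a toy webhook receiver in Flask (my own sketch, not the tutorial's code): the provider "notifies the observer" by POSTing JSON to a URL we register with it:

```python
# Toy webhook receiver: the provider POSTs JSON events to /webhook.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(silent=True) or {}
    print("Event received:", payload.get("event", "unknown"))  # react to the notification
    return "", 204  # acknowledge receipt

if __name__ == "__main__":
    app.run(port=5000)
```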
25 July 2018
Graduated from BITS Pilani! 5 years.
The irony is, I am still working on my bachelor's thesis :p
9 March 2018
Life Update: Accepted at the University of Waterloo for the Master's in Computer Science (Thesis) program. I will be working in the Data Systems Group with Prof. Semih Salihoglu on graph database management systems. Excited!! Also, shifting to systems (yeah, the core engineering!) after exploring information retrieval, NLP and computer vision for around 2 years.
9 March 2018
The original paper on what everyone is talking about: the attention mechanism for machine translation of text.
Link to the paper
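The gist, in a generic numpy sketch (dot-product scoring for brevity; the paper's exact scoring function may differ): score each encoder state against the current decoder state, softmax the scores, and take the weighted sum as the context vector:

```python
# Generic attention sketch: attention weights over encoder states + context vector.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states):
    # encoder_states: (num_positions, hidden); decoder_state: (hidden,)
    scores = encoder_states @ decoder_state   # one score per source position
    weights = softmax(scores)                 # attention distribution
    return weights @ encoder_states, weights  # weighted sum = context vector

enc = np.random.rand(5, 8)   # 5 source positions, hidden size 8
dec = np.random.rand(8)
context, attn = attention_context(dec, enc)
print(attn.sum())  # 1.0
```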
5 March 2018
IBM's abstractive summarization model, based on an encoder-decoder architecture with a rich, multi-dimensional encoder, an attention mechanism, hierarchical attention for capturing document structure, and a switch mechanism to model OOV words.
Link to the paper
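The hierarchical attention part, as I read it, re-weights word-level attention by the attention on each word's sentence and renormalizes. A rough numpy sketch (my own illustration, not the paper's equations):

```python
# Rough sketch of hierarchical attention: word attention re-weighted by sentence attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hierarchical_attention(word_scores, sent_scores, word_to_sent):
    # word_scores: (num_words,) raw word-level scores
    # sent_scores: (num_sents,) raw sentence-level scores
    # word_to_sent: (num_words,) index of the sentence each word belongs to
    word_attn = softmax(word_scores)
    sent_attn = softmax(sent_scores)
    combined = word_attn * sent_attn[word_to_sent]  # boost words in important sentences
    return combined / combined.sum()                # renormalize to a distribution
```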
20 February 2018
A deep learning model for abstractive summarization using a bidirectional LSTM encoder, a simple decoder, an attention mechanism for localized focus on the input, a coverage mechanism to avoid repetition, and a pointer-generator to deal with OOV words.
Link to the paper
Code on GitHub
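A rough numpy sketch of two of the pieces, as I understand them (not the authors' code): the coverage penalty that discourages re-attending to already-covered source positions, and the mixing of the generator and copy distributions:

```python
# Rough sketch of the coverage penalty and the pointer-generator mixture.
import numpy as np

def coverage_penalty(attention_history):
    # attention_history: one attention distribution over source positions per decoder step.
    coverage = np.zeros_like(attention_history[0])
    penalty = 0.0
    for attn in attention_history:
        penalty += np.minimum(attn, coverage).sum()  # punish re-attending to covered positions
        coverage += attn
    return penalty

def final_distribution(p_gen, vocab_dist, attention, source_ids):
    # p_gen in [0, 1] decides how much to generate from the vocabulary vs. copy from the source.
    # (Handling of OOVs via an extended vocabulary is omitted here for brevity.)
    mixed = p_gen * vocab_dist
    np.add.at(mixed, source_ids, (1.0 - p_gen) * attention)  # scatter copy probs onto vocab ids
    return mixed
```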