Ashish Kumar Sinha
Machine Learning Engineer and Researcher.
Tags

#gnn
Node2Vec - Word2Vec in disguise (?)
Graph Convolutional Nets - Basics

#algorithm
Node2Vec - Word2Vec in disguise (?)
Graph Convolutional Nets - Basics
Self Attention Mechanism - Learning Contextualized Embeddings
Frequent Itemset Mining

#llm
huberLLM - Retrieval Augmented LLM for Huberman Podcast

#project
huberLLM - Retrieval Augmented LLM for Huberman Podcast

#nlp
Self Attention Mechanism - Learning Contextualized Embeddings

#theory
Self Attention Mechanism - Learning Contextualized Embeddings
Frequent Itemset Mining
Imitation Learning - How to make it work?

#recsys
Frequent Itemset Mining

#survey
Imitation Learning - How to make it work?

#rl
Imitation Learning - How to make it work?

#PEFT
Parameter Efficient Fine Tuning (PEFT) – Adapting Large Models at Scale

#Machine Learning
Parameter Efficient Fine Tuning (PEFT) – Adapting Large Models at Scale
How Low-Rank Adaptation (LoRA) Changes the Geometry of Self-Attention
Democratizing LLM Inference with Quantization Techniques
Supervised Fine-Tuning (SFT) for LLMs
Supervised Fine-Tuning (SFT) on a Tiny Language Model
Low-Rank Adaptation (LoRA) for BERT Classification

#Deep Learning
Parameter Efficient Fine Tuning (PEFT) – Adapting Large Models at Scale
How Low-Rank Adaptation (LoRA) Changes the Geometry of Self-Attention
Democratizing LLM Inference with Quantization Techniques
Supervised Fine-Tuning (SFT) for LLMs
Supervised Fine-Tuning (SFT) on a Tiny Language Model
Low-Rank Adaptation (LoRA) for BERT Classification

#Transformers
Parameter Efficient Fine Tuning (PEFT) – Adapting Large Models at Scale
How Low-Rank Adaptation (LoRA) Changes the Geometry of Self-Attention
Democratizing LLM Inference with Quantization Techniques
Supervised Fine-Tuning (SFT) for LLMs
Supervised Fine-Tuning (SFT) on a Tiny Language Model
Low-Rank Adaptation (LoRA) for BERT Classification

#LoRA
How Low-Rank Adaptation (LoRA) Changes the Geometry of Self-Attention
Low-Rank Adaptation (LoRA) for BERT Classification

#Attention
How Low-Rank Adaptation (LoRA) Changes the Geometry of Self-Attention

#LLM
Democratizing LLM Inference with Quantization Techniques
Supervised Fine-Tuning (SFT) for LLMs
Supervised Fine-Tuning (SFT) on a Tiny Language Model
Low-Rank Adaptation (LoRA) for BERT Classification

#Quantization
Democratizing LLM Inference with Quantization Techniques

#SFT
Supervised Fine-Tuning (SFT) for LLMs
Supervised Fine-Tuning (SFT) on a Tiny Language Model