Transformers — Brand Summaries

AI-powered summaries of 9 videos about Transformers.

Generative AI Fine Tuning LLM Models Crash Course

Krish Naik · 3 min read

Fine-tuning large language models becomes practical on limited hardware when three ideas work together: quantization to shrink model weights,...

Quantization · LoRA · QLoRA
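The interplay the summary describes, a frozen quantized base plus small trainable low-rank factors, can be sketched numerically. This is a toy illustration with made-up sizes, using uniform rounding as a stand-in for the 4-bit NF4 quantization QLoRA actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # toy hidden size and LoRA rank
W = rng.normal(size=(d, d)).astype(np.float32)   # pretrained base weight

def fake_quantize(w, bits=4):
    """Round-trip a tensor through a uniform low-bit grid
    (toy stand-in for QLoRA's NF4 quantization)."""
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    q = np.round((w - lo) / (hi - lo) * levels)
    return (q * (hi - lo) / levels + lo).astype(np.float32)

W_q = fake_quantize(W)           # low-precision base, kept frozen
A = rng.normal(scale=0.01, size=(r, d)).astype(np.float32)  # trainable
B = np.zeros((d, r), dtype=np.float32)   # trainable, zero-init so the
                                         # adapter starts as a no-op
W_eff = W_q + B @ A              # weight seen by the forward pass

# only the low-rank factors receive gradients, not the d*d base
trainable = A.size + B.size      # 2*d*r = 32 values vs. 64 in the base
```

With `B` zero-initialized, `W_eff` equals the quantized base at the start of fine-tuning, and training only ever touches the `2*d*r` adapter values, which is why the approach fits on limited hardware.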

Getting Started With Meta Llama 3.2 And its Variants With Groq And Huggingface

Krish Naik · 2 min read

Meta’s Llama 3.2 arrives as a new open-source family built for both on-device deployment and multimodal reasoning, with variants spanning 1B, 3B,...

Llama 3.2 Variants · On-Device Inference · Vision Reasoning

SmolDocling - The SmolOCR Solution?

Sam Witteveen · 2 min read

SmolDocling—an IBM-partnered document understanding model on Hugging Face—aims to do more than “plain OCR” by converting documents into a structured,...

Document Conversion · Structured OCR · Vision-Language Models

DeepSeek Coder: AI Writes Code | Free LLM For Code Generation Beats ChatGPT, ChatDev & Code Llama

Venelin Valkov · 3 min read

DeepSeek Coder is an open-source code-focused language model from DeepSeek AI that’s trained heavily on programming data and tuned to follow coding...

DeepSeek Coder · Code Generation · LeetCode

Build a custom dataset with LightningDataModule in PyTorch Lightning

Venelin Valkov · 2 min read

A practical path to text classification in PyTorch Lightning starts with turning the multi-annotator GoEmotions dataset into one clean label per...

GoEmotions Labeling · Electra Tokenization · PyTorch Dataset
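The labeling step the summary leads with, collapsing multiple annotators' votes into one clean label per example, can be sketched with the standard library. The example rows and the tie-breaking policy here are assumptions for illustration, not taken from the video:

```python
from collections import Counter

def majority_label(annotations):
    """Return the label chosen by a strict majority of annotators,
    or None when no label clears 50% (such rows get dropped)."""
    label, n = Counter(annotations).most_common(1)[0]
    return label if n > len(annotations) / 2 else None

# hypothetical multi-annotator rows in the GoEmotions style
rows = [
    ("that made my day!", ["joy", "joy", "gratitude"]),
    ("hmm, not sure", ["confusion", "neutral"]),  # no majority -> dropped
]
clean = [(text, majority_label(labels))
         for text, labels in rows
         if majority_label(labels) is not None]
```

The resulting `(text, label)` pairs are what a `torch.utils.data.Dataset` would wrap before tokenization and batching inside a `LightningDataModule`.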

Gemma 3n: Open Multimodal Model by Google (Image, Audio, Video & Text) | Install and Test

Venelin Valkov · 3 min read

Google’s Gemma 3n (rendered as “Geometry N” in the video transcript) is positioned as an open, mobile-targeted multimodal model that can take in text plus images, audio,...

Gemma 3n · Multimodal Inference · Hugging Face Transformers

Mixtral - Mixture of Experts (MoE) Free LLM that Rivals ChatGPT (3.5) by Mistral | Overview & Demo

Venelin Valkov · 2 min read

Mistral AI’s Mixtral 8×7B (an open-weight sparse Mixture of Experts model) is positioned as a practical alternative to much larger LLMs by routing...

Mixture of Experts · Sparse Routing · Instruction Tuning
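The routing idea behind Mixtral, a gate that activates only a few experts per token so most parameters sit idle on each forward pass, can be shown in miniature. This is a toy NumPy sketch with invented dimensions, not Mixtral's actual layer:

```python
import numpy as np

rng = np.random.default_rng(1)
n_experts, d, k = 8, 4, 2    # Mixtral-style: 8 experts, top-2 routing

gate = rng.normal(size=(d, n_experts)).astype(np.float32)
experts = [rng.normal(size=(d, d)).astype(np.float32)
           for _ in range(n_experts)]

def moe_forward(x):
    logits = x @ gate                       # score every expert
    top = np.argsort(logits)[-k:]           # keep only the k best
    w = np.exp(logits[top])
    w /= w.sum()                            # softmax over the selected experts
    # only k of the n_experts matmuls actually run for this token
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.normal(size=d).astype(np.float32))
```

Per token, only 2 of the 8 expert matrices are multiplied, which is how a sparse MoE keeps inference cost close to a much smaller dense model while storing far more parameters.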

XGen-7B: Long Sequence Modeling with (up to) 8K Tokens. Overview, Dataset & Google Colab Code.

Venelin Valkov · 3 min read

Salesforce’s XGen-7B is positioned as an open 7-billion-parameter language model built for long-context work, with an input sequence length that...

Long Context · Model Training · Multilingual Data

Deploying Local LLM but It Is Slow? Here's How to Fix It (Hopefully) | LLMOps with vLLM

Venelin Valkov · 2 min read

Deploying a local LLM can feel painfully slow when using the default Hugging Face Transformers inference pipeline, but switching to vLLM can cut...

Local LLM Latency · vLLM vs Transformers · Paged Attention
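The paged-attention idea that powers vLLM's speedup, managing the KV cache in fixed-size blocks allocated on demand instead of one contiguous over-provisioned buffer per request, can be sketched as a toy allocator. This is a conceptual illustration only; class and method names are invented and none of this mirrors vLLM's internals:

```python
class PagedCache:
    """Toy paged KV-cache allocator: each sequence grows a page table
    of fixed-size blocks, and finished sequences return their pages."""

    def __init__(self, n_blocks, block=16):
        self.block = block                 # tokens stored per block
        self.free = list(range(n_blocks))  # pool of unused block ids
        self.tables = {}                   # seq id -> (block ids, token count)

    def append(self, seq):
        blocks, n = self.tables.get(seq, ([], 0))
        if n % self.block == 0:            # current block full: grab a new one
            blocks = blocks + [self.free.pop()]
        self.tables[seq] = (blocks, n + 1)

    def release(self, seq):
        blocks, _ = self.tables.pop(seq)
        self.free.extend(blocks)           # pages become reusable immediately

cache = PagedCache(n_blocks=4)
for _ in range(17):                        # 17 tokens -> 2 blocks of 16
    cache.append("req-0")
```

Because memory is granted one small block at a time, many concurrent sequences can share the GPU without each reserving worst-case space, which is what makes continuous batching effective.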