
In 2025 What Should You Learn In AI ?

Krish Naik
4 min read

Based on Krish Naik's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

RAG is the dominant practical approach in 2025, with about 70% of surveyed teams using it in some form.

Briefing

A June 2025 “AI engineering report” based on surveys of hundreds of engineers working in AI points to a clear 2025 learning priority: build practical generative AI systems—especially Retrieval-Augmented Generation (RAG)—and learn how teams operationalize them in real products.

Customer-facing deployments are already shaping model choices. Among the most popular models used for customer-facing applications, OpenAI dominates: three of the top five and half of the top 10 most popular models come from OpenAI. Anthropic also shows strong adoption. The report also highlights that RAG is no longer experimental. About 70% of respondents use RAG in some form, and RAG sits at the center of how companies are automating workflows—often by inserting retrieval into agent-like systems that can generate answers, take actions, or coordinate tasks.

RAG work spans multiple variants, from traditional RAG to agentic RAG. The survey suggests companies are moving beyond “answering with context” toward more autonomous patterns such as autonomous RAG, self-RAG, and adaptive RAG. That shift matters for job seekers because interviews increasingly demand end-to-end RAG project knowledge: how to build a RAG application, what components to use, and the full lifecycle from data ingestion and parsing through deployment.
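The lifecycle described above—ingestion and parsing, retrieval, then grounded generation—can be sketched in plain Python. This is a toy illustration, not anything from the report: the keyword-overlap scorer stands in for a real embedding model, and every function name here is illustrative.

```python
# Toy RAG pipeline: parse a document into chunks, retrieve relevant ones,
# and build a prompt that grounds the model's answer in that context.

def chunk(document: str, size: int = 8) -> list[str]:
    """Parsing/ingestion step: split a document into fixed-size word chunks."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, text: str) -> float:
    """Toy relevance score: fraction of query words present in the chunk.
    A production system would use embeddings and vector similarity instead."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split())) / len(q) if q else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Retrieval step: return the top-k chunks by relevance."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Generation step: ground the model's answer in the retrieved context."""
    return f"Answer using only this context:\n{chr(10).join(context)}\n\nQuestion: {query}"

docs = chunk("RAG grounds model outputs in retrieved documents. "
             "Agentic RAG adds tool use and planning on top of retrieval.")
prompt = build_prompt("What does RAG ground outputs in?",
                      retrieve("RAG grounds outputs", docs))
```

Agentic variants wrap this same loop in planning and tool-use steps, but the retrieve-then-generate core stays the same.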

Model and prompt iteration is frequent. More than half of respondents update their models at least monthly, which the report ties to fine-tuning activity. Prompt updates are also common: roughly 40% update prompts monthly, while about 1 in 10 do it daily. This emphasis on iteration signals that success in AI engineering isn’t just picking a model—it’s continuously improving system behavior.

The report also flags human oversight as a production reality. Most agents in production have “human in the loop,” indicating that companies are still managing risk and correctness by combining automation with review workflows.

Beyond text, multimodal capabilities are approaching mainstream planning. Audio is poised for a major adoption wave, with 30%–37% of respondents planning to use it soon. The survey also lists common production and near-term use cases: code intelligence and generation, writing assistants, content generation, text summarization, structured data extraction, workflow and app automation, search and recommendation, customer support, metadata generation, sentiment analysis, and fraud/threat detection.

On tooling, the report surfaces a practical stack: LangChain and LangGraph rank among the top app-building frameworks, while LlamaIndex, Guardrails, and DSPy are also used. Monitoring and observability show up as key operational concerns, alongside guardrails for safer outputs.

Overall, the report’s message for 2025 is pragmatic: start integrating generative and agentic AI into day-to-day work, focus on RAG-centric system building, and treat deployment as an ongoing engineering loop—models, prompts, monitoring, and human review all evolve together.

Cornell Notes

The June 2025 AI engineering report surveyed hundreds of AI practitioners and points to one dominant 2025 priority: build generative AI systems that work in production, with RAG at the center. OpenAI models lead customer-facing usage, while Anthropic is also widely adopted. About 70% of respondents use RAG, and many teams are moving toward agentic RAG variants that automate workflows. Teams iterate constantly—over half update models at least monthly, and prompt updates often happen monthly or even daily. Production agents typically rely on “human in the loop,” and teams use frameworks like LangChain/LangGraph plus monitoring, guardrails, and observability to keep systems reliable.

Why does RAG appear to be the main skill to prioritize for 2025 AI engineering roles?

RAG is reported as the most common practical approach: about 70% of respondents use RAG in some form. Companies use it to automate workflows by grounding model outputs in retrieved information, and RAG work spans traditional RAG as well as agentic variants (autonomous RAG, self-RAG, adaptive RAG). The report also notes that interview projects frequently revolve around RAG—covering the full lifecycle from data ingestion/parsing to deployment.

How do model choices in customer-facing applications influence what job candidates should learn?

For customer-facing applications, OpenAI models dominate popularity: three of the top five and half of the top 10 most popular models are from OpenAI. Anthropic is also used extensively. That pattern implies candidates should be comfortable building RAG and agentic systems with these model families and understand how retrieval and prompting are tuned around them.

What does the report suggest about how often AI systems need updating in real teams?

Iteration is frequent. More than half of respondents update models at least monthly, which the report links to fine-tuning activity. Prompt updates are even more regular: roughly 40% update prompts monthly, and about 1 in 10 update prompts daily. The practical takeaway is that AI engineering is an ongoing improvement loop, not a one-time build.

What production constraint shows up repeatedly in agent deployments?

Most agents in production use “human in the loop.” That means automation is paired with human review or approval steps to manage correctness and risk. For learners, it signals that agent design should include escalation, review workflows, and guardrails rather than assuming fully autonomous operation.
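One way to picture that design principle is a simple approval gate: the agent proposes an action, and anything risky or low-confidence is escalated to a human before it executes. This is a generic sketch under assumed names (`Action`, `needs_review`, the `approve` callback), not a pattern taken from the report.

```python
# Human-in-the-loop gate: auto-execute only safe, high-confidence actions;
# escalate everything else for human review before running it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    confidence: float   # agent's self-reported confidence, 0..1
    high_risk: bool     # e.g. sends email, issues refunds, deletes data

def needs_review(action: Action, threshold: float = 0.8) -> bool:
    """Escalate anything risky or uncertain instead of auto-executing."""
    return action.high_risk or action.confidence < threshold

def run(action: Action, approve: Callable[[Action], bool]) -> str:
    """Execute directly, or only after the human approval callback says yes."""
    if needs_review(action):
        return "executed" if approve(action) else "rejected"
    return "executed"

# Usage: an auto-decline stub stands in for a real review UI.
safe = Action("summarize ticket", confidence=0.95, high_risk=False)
risky = Action("refund customer", confidence=0.95, high_risk=True)
status_safe = run(safe, approve=lambda a: False)    # "executed": no review needed
status_risky = run(risky, approve=lambda a: False)  # "rejected": reviewer declined
```

The key design choice is that the gate sits between proposal and execution, so automation handles the routine path while humans retain control of the risky one.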

Which frameworks and operational practices are highlighted as common in building AI apps?

LangChain and LangGraph are listed among the top app-building frameworks. Other tools mentioned include LlamaIndex, Guardrails, and DSPy. Operationally, monitoring and observability are emphasized, alongside guardrails—reflecting the need to track system behavior and reduce unsafe or incorrect outputs after deployment.

How does the report treat multimodal AI beyond text?

Audio is flagged as the next major adoption wave, with 30%–37% of respondents planning to use it soon. The report also references other modalities like image and video, suggesting multimodal integration will take time but is already on teams’ roadmaps.

Review Questions

  1. If 70% of teams use RAG, what components and lifecycle steps would you prioritize to demonstrate end-to-end RAG competence in an interview?
  2. How would you design an agentic RAG system to include “human in the loop” while still automating workflows?
  3. What evidence from the report suggests that prompt and model updates are part of routine AI engineering rather than occasional maintenance?

Key Points

  1. RAG is the dominant practical approach in 2025, with about 70% of surveyed teams using it in some form.

  2. OpenAI models lead customer-facing adoption, while Anthropic is also widely used.

  3. Agentic RAG variants (autonomous, self, adaptive) are increasingly relevant because teams want workflow automation, not just answers.

  4. Model and prompt iteration are frequent: over half update models monthly or more, and a meaningful share updates prompts daily.

  5. Most production agents rely on “human in the loop,” so reliable agent design includes review and escalation paths.

  6. LangChain/LangGraph are top app-building frameworks, with LlamaIndex, Guardrails, and DSPy also in active use.

  7. Monitoring, observability, and guardrails are treated as core engineering requirements for production systems.

Highlights

OpenAI accounts for three of the top five and half of the top 10 most popular models used in customer-facing applications.
RAG adoption is widespread—about 70% of respondents use it—and interview projects often center on building RAG end to end.
More than half of teams update models at least monthly, while prompt updates range from monthly to daily.
Most agents in production include “human in the loop,” reflecting risk-managed automation.
LangChain and LangGraph rank among the leading frameworks, alongside monitoring, observability, and guardrails.
