
2024’s Must-Know AI Upgrades—The Tools Academics Can’t Live Without!

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT Playground can generate detailed system instructions that include step-by-step feedback criteria and output formatting, making prompts reusable across different large language models.

Briefing

OpenAI’s ChatGPT Playground is emerging as a practical research upgrade because it turns vague ideas into reusable, highly structured prompts—and even lets users package those prompts into their own GPTs. Instead of starting from scratch in a chat window, users can open OpenAI’s Playground (platform.openai.com/playground) and use the Chat panel’s prompt generator to create “perfect prompts” with detailed system instructions. Those instructions break feedback tasks into clear steps—summary, understanding, strengths, areas for improvement, clarity, data and methods, argument support, and actionable suggestions—then specify an output format. The key advantage is portability: the generated instructions can be copied into other large language models (including Perplexity, Claude, ChatGPT, and Bing) or used to build custom GPTs. Users can also create assistants inside the Playground by saving generated system instructions, enabling specialized helpers for writing, feedback, or idea generation. The workflow is designed for researchers who repeatedly do the same tasks—like requesting constructive criticism on peer-reviewed papers—while keeping the prompts detailed enough to drive consistent results across tools.
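As a sketch of what that portability looks like in practice, a generated system instruction can be stored once and wrapped into a chat-completions-style request for any compatible API. The instruction wording, model names, and helper function below are illustrative assumptions, not the exact output of the Playground’s generator:

```python
# An illustrative reusable system instruction modeled on the feedback
# steps described above. The Playground's generator produces its own
# (typically longer) wording -- this is a hand-written stand-in.
FEEDBACK_SYSTEM_PROMPT = """You are an academic reviewer. For the paper \
excerpt provided, respond with these sections, in order:
1. Summary -- restate the main argument in 2-3 sentences.
2. Understanding -- note what the paper assumes the reader knows.
3. Strengths -- list what works well.
4. Areas for improvement -- list concrete weaknesses.
5. Clarity -- comment on writing and structure.
6. Data and methods -- assess the evidence used.
7. Argument support -- check whether claims are backed by evidence.
8. Actionable suggestions -- give numbered next steps.
Format each section as a heading followed by bullet points."""

def build_request(model: str, paper_text: str) -> dict:
    """Build a chat-completions-style payload. The same structure is
    accepted, with minor per-vendor variations (e.g. Anthropic takes
    the system prompt as a top-level field), by several providers."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": FEEDBACK_SYSTEM_PROMPT},
            {"role": "user", "content": paper_text},
        ],
    }

# The same instruction drives different models unchanged:
req_a = build_request("gpt-4o", "...paper excerpt...")
req_b = build_request("sonar-pro", "...paper excerpt...")
```

The point of the pattern is that the system prompt, not the model, carries the workflow: swapping the `model` string is the only change needed to move the feedback routine between tools.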

Perplexity’s “Spaces” adds a second layer of utility by turning those prompt workflows into dedicated, task-specific environments. After signing in, users can create a space with a title and optional description, then set custom instructions—often by pasting the system instructions generated in ChatGPT Playground. Spaces can be tailored for narrow academic jobs such as paper writing, reviewing, or editing. With Pro, users can also choose the underlying AI model, but the core feature is customization: each space can be given uploaded files and examples. For instance, a “supervisor” space can be supplied with writing samples (“write like me”) or successful grant applications so the assistant can mirror the style and structure that worked before. The result is a more consistent academic assistant that behaves like a role-based reviewer rather than a generic chatbot.

Claude’s upgrade focuses on experimental features aimed at research workflows, particularly analysis and LaTeX rendering. Users can enable a “feature preview” panel (via a purple tab or the feature preview setting) and toggle options like analysis and LaTeX rendering for new chats. In practice, Claude can ingest data from a CSV file and perform analysis, generate code, and produce visualizations such as bar charts and box plots. The transcript notes a limitation: Excel documents aren’t accepted, so they must be converted to CSV first. Even when a rendered chart doesn’t perfectly match specific plotted values, Claude’s underlying analysis still reports the correct numbers, and it provides interactive controls (like dropdowns to switch chart types). The broader takeaway is that Claude is increasingly suited to research tasks that combine computation, interpretation, and presentation—turning messy datasets into structured tables and visuals that can be exported into materials like PowerPoint for supervisor meetings and symposia.
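The kind of CSV-based analysis described here can be reproduced locally to sanity-check Claude’s numbers. A minimal pandas/matplotlib sketch, using made-up experiment data (the column names and values are hypothetical):

```python
import io
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Inline stand-in for an uploaded CSV. An Excel file would first need
# pd.read_excel(...).to_csv(...), matching the limitation noted above.
csv_data = io.StringIO("""condition,yield
control,4.1
control,3.8
control,4.4
treated,5.2
treated,5.6
treated,4.9
""")
df = pd.read_csv(csv_data)

# Summary statistics per condition -- the "correct numbers" part.
means = df.groupby("condition")["yield"].mean()

# Bar chart of means and box plot of distributions, the two chart
# types the transcript mentions Claude producing.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
means.plot.bar(ax=ax1, title="Mean yield")
df.boxplot(column="yield", by="condition", ax=ax2)
fig.savefig("yield_charts.png")  # e.g. for pasting into PowerPoint
```

Running the analysis yourself alongside Claude’s output is a cheap way to verify that a rendered chart’s occasional visual glitches don’t reflect errors in the underlying statistics.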

Cornell Notes

ChatGPT Playground helps researchers generate detailed, step-by-step system instructions and prompts that can be reused across multiple large language models. Those instructions can be copied into other tools (including Perplexity, Claude, ChatGPT, and Bing) or used to create custom GPTs and saved assistants for recurring tasks like peer-review feedback. Perplexity’s Spaces then packages those workflows into role-based, task-specific environments where custom instructions and uploaded files (e.g., “write like me” samples or successful grant applications) shape the assistant’s behavior. Claude’s experimental features add analysis and LaTeX rendering, enabling CSV-based data analysis, code generation, and chart creation—useful for turning raw data into presentation-ready outputs. Together, these upgrades shift AI use from one-off chats toward repeatable research pipelines.

How does ChatGPT Playground turn a simple research request into something reusable across models?

It uses a prompt generator inside the Chat panel to create structured prompts with system instructions. Instead of only asking for feedback, the generated instructions include explicit steps such as summary, understanding, strengths, areas for improvement, clarity, data and methods, argument support, and actionable suggestions, along with an output format. Those system instructions can then be copied into other large language models (Perplexity, Claude, ChatGPT, Bing) or used to create your own GPTs.

What makes Perplexity “Spaces” different from a standard chat?

Spaces are task-specific helper environments. Users create a space with a title and optional description, then add custom instructions (often pasted from ChatGPT Playground). Spaces can also be customized with uploaded files—such as writing samples to enforce a “write like me” style or successful grant applications to guide future grant writing. Each space behaves like a specialized assistant rather than a general-purpose chatbot.

Why does the transcript emphasize CSV over Excel when using Claude’s analysis features?

Claude’s analysis workflow in the transcript accepts a CSV file but rejects Excel documents. The user had to convert Excel to CSV before uploading. After conversion, Claude could analyze the data, generate code, and create visualizations like bar charts and box plots.

What experimental Claude features are highlighted for research workflows?

The transcript highlights “analysis” and “LaTeX rendering” under a feature preview. When enabled, new chats gain access to these capabilities—supporting data analysis plus LaTeX-friendly output for thesis or paper writing workflows.

How can these tools feed directly into academic deliverables like slides?

Claude’s outputs can be used to create tables and visuals suitable for supervisor meetings and symposia. The transcript specifically notes that the generated tables and chart outputs can be put into a PowerPoint presentation, turning analysis into presentation-ready material.

Review Questions

  1. Which parts of ChatGPT Playground’s generated system instructions are designed to make feedback more consistent (name at least three steps)?
  2. How do Perplexity Spaces use uploaded files to change the assistant’s behavior, and what are two examples given?
  3. What limitations or setup steps are mentioned for Claude’s analysis workflow before uploading data?

Key Points

  1. ChatGPT Playground can generate detailed system instructions that include step-by-step feedback criteria and output formatting, making prompts reusable across different large language models.

  2. Generated ChatGPT Playground instructions can be copied into other tools (Perplexity, Claude, ChatGPT, Bing) or used to create custom GPTs and saved assistants.

  3. Perplexity Spaces turn prompt workflows into role-based, task-specific environments that can be customized with custom instructions and uploaded files.

  4. Perplexity Spaces can be tailored for academic tasks like paper writing, reviewing, and editing, including style transfer using “write like me” samples.

  5. Claude’s feature preview includes experimental analysis and LaTeX rendering, enabling CSV-based analysis, code generation, and chart creation.

  6. Claude’s analysis workflow in the transcript requires converting Excel to CSV because Excel files aren’t accepted.

  7. These upgrades support research pipelines that move from raw data and drafts to structured feedback, visuals, and presentation-ready outputs.

Highlights

  • ChatGPT Playground’s prompt generator produces system instructions with a full feedback checklist—summary, strengths, clarity, data/methods, argument support, and actionable suggestions—then formats the output for reuse.
  • Perplexity Spaces let users create a dedicated “supervisor” assistant by combining custom instructions with uploaded writing samples or successful grant applications.
  • Claude’s experimental analysis and LaTeX rendering can turn a CSV into code, summaries, and visualizations like bar charts and box plots—after converting Excel to CSV.
  • The workflow trend across tools is clear: AI becomes more useful when prompts and roles are saved, customized, and reused rather than recreated each session.

Topics

  • ChatGPT Playground
  • Perplexity Spaces
  • Claude Feature Preview
  • Prompt Engineering
  • Academic Research Workflows