
Systematic Literature Review using Dimensions Ai || Bibliometric Analysis and Visualization || Hindi

eSupport for Research
5 min read

Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Dimensions AI supports keyword-driven identification of literature, then narrowing via time range, publication type, SDG/topic alignment, and classification filters to reach an analyzable dataset size.

Briefing

Dimensions AI is positioned as a research database platform for running bibliometric analysis and systematic literature reviews (SLRs) by turning keyword searches into filtered, exportable datasets that can then be analyzed through built-in analytics and downstream visualization tools. The core workflow starts with searching for a research theme—such as “ECG signal analysis”—and then narrowing the results with time windows, publication types, and other filters until the record set is small enough for screening and meta-level synthesis.

After logging in and landing on the profile area, the process emphasizes search-driven identification of relevant literature. A keyword query can initially return very large result counts (the transcript cites figures in the hundreds of thousands to millions), but the platform supports tightening the scope by selecting a recent time range (e.g., the last five years), applying topic/SDG-related constraints (the example filters toward “Good health and well-being”), and limiting by publication type (e.g., articles only). Additional narrowing can target specific classification groupings (the transcript references a “UGC CARE Group 2” filter) and exclude certain record categories (the transcript mentions open access records being excluded), ultimately producing a dataset size that is practical for analysis—around 11,000 documents in one filtered scenario and about 1,300 documents in the main export example.
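The narrowing described above can be sketched as a simple filter pipeline. This is an illustrative pure-Python sketch, not Dimensions AI's actual data model—the record fields (`year`, `type`, `sdgs`, `classifications`) are assumptions chosen to mirror the filters shown in the video:

```python
def narrow(records, current_year, years=5, sdg=None, pub_type=None, group=None):
    """Apply the time / SDG / publication-type / classification filters in sequence.

    Each step shrinks the record list, mirroring the screen-like reduction
    from hundreds of thousands of hits down to an analyzable set.
    """
    cutoff = current_year - years
    out = [r for r in records if r["year"] >= cutoff]        # e.g. last 5 years
    if sdg:
        out = [r for r in out if sdg in r.get("sdgs", [])]   # SDG/topic constraint
    if pub_type:
        out = [r for r in out if r["type"] == pub_type]      # e.g. articles only
    if group:
        out = [r for r in out if group in r.get("classifications", [])]
    return out
```

Each filter composes with the others, so the order shown in the video (time, then SDG, then type, then classification) is a convenience rather than a requirement.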

With the filtered set ready, the platform’s export function becomes the bridge to bibliometric and SLR-style workflows. Export is capped at a maximum number of records per download (the transcript mentions a limit of 2,500), and exported files can be saved for later use in visualization and analysis environments. The transcript then demonstrates staying within Dimensions AI’s analytics view: selecting an “analytical view” for the exported dataset and generating charts and summaries.
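The 2,500-record cap implies that a larger result set (such as the ~11,000-document scenario) must be exported in batches. A minimal sketch of that batching logic, assuming only the cap value quoted in the transcript:

```python
EXPORT_CAP = 2500  # per-download limit cited in the transcript

def export_batches(record_ids, cap=EXPORT_CAP):
    """Yield successive slices of record IDs, each no larger than the cap."""
    for start in range(0, len(record_ids), cap):
        yield record_ids[start:start + cap]
```

For an 11,000-record set this yields four full batches of 2,500 plus a final batch of 1,000, i.e. five separate downloads.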

Key analytics shown include research category aggregation (presented as bar charts), publication overview metrics (total publications within the chosen time window, with counts and citation-based indicators), and downloadable outputs such as PNG images and data files (CSV and PDF are also mentioned). The platform also supports author-level exploration through a “researcher” view, where contributors tied to the dataset can be listed and examined.

Beyond descriptive statistics, the transcript highlights network-style analysis. An author network view reveals how researchers are connected within the selected literature set, and the same dataset can be re-filtered by institution or author. For instance, typing an institution name (e.g., “Sikkim Manipal University”) is described as enabling direct navigation to the relevant analytical results, while author-specific filtering can isolate output associated with a particular researcher affiliation (the transcript cites an example tied to “AMU”).

Overall, Dimensions AI is presented as a systematic, filter-first pipeline: search → screen-like filtering → exportable dataset → analytical dashboards and network views. The transcript also signals that deeper bibliometric techniques—such as visualization tools and additional analyses—can be handled in subsequent steps or companion videos, with PRISMA 2020 referenced as a framework for structuring the identification, screening, selection, and final analysis stages of an SLR.

Cornell Notes

Dimensions AI can be used to build an SLR-ready bibliometric dataset by starting with a keyword search (e.g., “ECG signal analysis”) and then applying filters such as last-N-years, publication type (articles only), SDG alignment (e.g., “Good health and well-being”), and classification constraints (e.g., UGC CARE Group 2). After narrowing to a workable number of records (the example uses ~1,300), the results can be exported (with a per-export cap of 2,500) and analyzed inside Dimensions AI’s analytical views. The platform provides category-level aggregation, publication overview metrics, downloadable chart/data outputs, and researcher/institution views. Author network analysis helps show how contributors connect within the selected literature set.

How does the workflow in Dimensions AI move from a broad keyword search to an SLR-style dataset?

It begins with a keyword query, which can return very large counts. The next step is narrowing via filters: selecting a time window (e.g., last five years), restricting publication type (e.g., articles only), and applying thematic/SDG constraints (the example filters toward “Good health and well-being”). Additional constraints can include classification filters such as “UGC CARE Group 2” and exclusions related to record categories (the transcript mentions open access being excluded for a specific reason). The goal is to reduce the dataset to a manageable size suitable for screening and downstream analysis—about 1,300 records in the main demonstration.

What kinds of filters are demonstrated, and what effect do they have on the dataset size?

The transcript demonstrates multiple filter layers. A time filter (last five years) reduces results from very large totals to a smaller set (e.g., from millions down to roughly 1.2 million in one intermediate step). Adding SDG/topic constraints (Good health and well-being) further reduces the count (to around 20,600 in that example). Restricting publication type to articles and applying UGC CARE Group 2 reduces the set again (to about 11,000). Finally, excluding or limiting certain record categories can bring the dataset down to around 1,300 for the main export and analysis.
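For readers who prefer scripting over the web UI, Dimensions also exposes its data through an Analytics API with a query language. The sketch below only composes a DSL-style query string for the filter sequence above; the exact syntax, field names, and availability depend on API access and should be checked against the official documentation before use:

```python
def build_query(phrase, year_from, pub_type="article", limit=100):
    """Compose an approximate Dimensions-DSL query string.

    The syntax here mimics the Dimensions Search Language but is an
    unverified approximation, shown only to illustrate how the UI filters
    (time window, publication type) map onto a scripted query.
    """
    return (
        f'search publications for "\\"{phrase}\\"" '
        f"where year >= {year_from} "
        f'and type = "{pub_type}" '
        f"return publications limit {limit}"
    )

query = build_query("ECG signal analysis", 2020)
```

Running such a query would additionally require an authenticated API client (the `dimcli` Python package is the commonly used one), which is outside the scope of this sketch.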

What does exporting accomplish, and what limits are mentioned?

Export turns the filtered records into a downloadable file for bibliometric analysis and visualization outside the immediate dashboard. The transcript notes an export cap of 2,500 records per export. After export, files appear in an export center as ready-to-download, with progress shown while the file is generated. The exported dataset can then be used for further analysis in visualization tools (the transcript previews this as part of a multi-video series).

What analytical outputs are shown inside Dimensions AI after exporting?

Inside the platform’s analytical view, the transcript shows chart-based summaries such as bar charts for research categories and an overview panel with total publication counts for the selected time period. It also mentions the ability to switch indicators (e.g., from publication counts to citation-based metrics). Outputs can be downloaded as PNG images and as data formats like CSV and PDF. A researcher view lists authors associated with the dataset and supports further exploration.

How is network analysis used, and how can it be customized?

An author network view is used to show how researchers are connected within the selected literature set. The transcript also describes re-filtering by institution and author: searching for an institution name (example: “Sikkim Manipal University”) can lead directly to institution-specific analytical results, while author-specific filtering can isolate records linked to a particular researcher affiliation (example: an author associated with “AMU”). These filters change the network and the linked publication set shown in the analytical views.
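The idea behind the author network view—linking researchers who co-author papers within the filtered set—can be illustrated with a stdlib-only co-authorship edge count. This is a conceptual sketch, not how Dimensions computes its networks, and the data format (one author list per paper) is an assumption:

```python
from itertools import combinations
from collections import defaultdict

def coauthor_edges(papers):
    """Count how often each pair of authors appears on the same paper.

    `papers` is an iterable of author-name lists, one list per paper.
    Returns a dict mapping sorted (author_a, author_b) pairs to edge weights.
    """
    edges = defaultdict(int)
    for authors in papers:
        # Every unordered pair of co-authors on a paper adds one edge weight.
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1
    return dict(edges)
```

Filtering the input papers by institution or author before calling this function mimics how re-filtering in the UI reshapes the displayed network.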

Review Questions

  1. If a keyword search returns hundreds of thousands of records, which specific filter sequence (time window, SDG/topic, publication type, classification) would most directly reduce it to an SLR-manageable dataset?
  2. What is the purpose of exporting records from Dimensions AI, and what per-export record limit is mentioned?
  3. How do author network and researcher/institution views complement category and publication-overview charts in bibliometric analysis?

Key Points

  1. Dimensions AI supports keyword-driven identification of literature, then narrowing via time range, publication type, SDG/topic alignment, and classification filters to reach an analyzable dataset size.
  2. A practical SLR pipeline in the transcript follows: search → filter/screen-like reduction → select final records → export for bibliometric analysis.
  3. Exported datasets can be downloaded for further work, with a stated maximum of 2,500 records per export.
  4. Dimensions AI’s analytical view provides category-level aggregation (bar charts), publication overview metrics (counts across the selected period), and downloadable outputs (PNG/CSV/PDF).
  5. Researcher and author network views reveal contributor-level patterns and collaboration structures within the selected literature set.
  6. Institution and author-specific filtering (via search) enables targeted bibliometric analysis and changes the resulting network and publication summaries.

Highlights

Filtering a broad keyword search down to an SLR-ready dataset is the central move: time window + SDG/topic + publication type + classification constraints.
Export is capped at 2,500 records per download, turning a filtered result set into a reusable bibliometric dataset.
Dimensions AI’s analytics include both descriptive charts (categories, totals) and relational views (author networks and researcher/institution breakdowns).
Author networks can be reshaped by filtering for specific institutions or authors, changing which collaborations appear.
