
Analyzing Dimensions Ai Data with VOSviewer and Biblioshiny || Bibliometric Analysis || Hindi

eSupport for Research
5 min read

Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Apply SLR screening filters in Dimensions AI first, then export only the included records for bibliometric mapping.

Briefing

The workflow centers on turning Dimensions AI bibliographic exports into visual, filter-aware bibliometric maps using VOSviewer and Biblioshiny, then carrying the resulting tables, plots, and networks directly into a systematic literature review (SLR) write-up. After applying screening filters in Dimensions AI (e.g., time window, journal selection, and SDG 3 “Good Health and Well-being”), the selected dataset is exported and converted into a format the mapping tools can read. The practical payoff is a reproducible pipeline: the same inclusion/exclusion logic used for SLR screening becomes the basis for author, keyword, and country-level visualizations.

The process begins with downloading the filtered Dimensions AI results (the transcript mentions receiving CSV/Excel-style outputs) and loading them into the analysis tool. The transcript names VOSviewer here, though the install-a-package, convert, and save-report steps it describes are characteristic of Biblioshiny, the web interface of the bibliometrix R package. The tool performs a conversion step, checking for missing data and producing a readable bibliometric dataset. Once the conversion succeeds, the user saves the generated report so the input data and derived analysis outputs remain attached to the run.
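
The missing-data check described above can be approximated outside the GUI. The sketch below uses pandas on a toy table; the column names (`Title`, `Authors`, `PubYear`, `Source title`) are assumptions standing in for whatever headers a real Dimensions export uses, not the tool's actual schema.

```python
import pandas as pd

# Hypothetical stand-in for a Dimensions-style CSV export;
# real column names vary by export option.
records = pd.DataFrame({
    "Title": ["Paper A", "Paper B", "Paper C"],
    "Authors": ["Smith; Lee", None, "Khan"],
    "PubYear": [2020, 2021, None],
    "Source title": ["J. Health", "J. Health", None],
})

# Report which fields are incomplete before any conversion step,
# mirroring the missing-data check the tool performs on import.
missing = records.isna().sum()
incomplete_fields = missing[missing > 0].to_dict()
print(incomplete_fields)
```

Running such a check before import makes it easier to decide whether gaps (e.g., missing abstracts or years) will break a particular map type.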

From there, the analysis outputs are organized into multiple views and exportable artifacts. The transcript highlights that filters can be applied again inside VOSviewer, but the key point is that the dataset already reflects the Dimension AI screening choices—so the user keeps those settings aligned. Outputs include tabular summaries and plots such as annual scientific production, source-related views, and network structures. The workflow also supports exporting images for direct insertion into a paper, plus exporting data tables (CSV) for further reporting.
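
One of the tabular summaries mentioned above, annual scientific production, is just a count of documents per publication year. A minimal sketch, assuming a `PubYear` column in the export (the years below are illustrative):

```python
import pandas as pd

# Toy dataset standing in for the filtered export.
records = pd.DataFrame({"PubYear": [2019, 2020, 2020, 2021, 2021, 2021]})

# Annual scientific production: one row per year with the document count,
# the same kind of table the tools export for the write-up.
annual = (records.groupby("PubYear").size()
          .rename("Documents").reset_index())
annual.to_csv("annual_production.csv", index=False)  # CSV table for the report
print(annual)
```

The same CSV can then be re-plotted or inserted into the paper alongside the tool's own figures.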

A major emphasis is on interpreting and customizing the maps. The transcript points to author-related networks (e.g., “author collaboration” style views), country-wise results, and keyword-based clustering. It also notes that some plots may fail if the underlying data is insufficient, and that using alternative data sources (e.g., “Scopus data”) can improve completeness when errors occur.

The second half shifts to VOSviewer’s map creation modes. One mode builds maps from bibliographic database fields such as co-authorship or bibliographic coupling, with adjustable thresholds such as minimum documents per author. Another mode creates text-based keyword maps from title and abstract fields, with options such as ignoring structured abstract labels. The transcript describes setting parameters such as full counting vs. binary counting and a minimum term occurrence threshold (e.g., 5), then generating overlays that color-code density and recency (e.g., newer publications appearing in brighter/yellow tones). The end result is a set of keyword clusters and co-occurrence patterns that can be used to justify research gaps and thematic trends in an SLR.
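
The counting distinction above can be made concrete with a toy example. This sketch uses naive whitespace tokenization, unlike VOSviewer’s actual noun-phrase extraction, but the full vs. binary counting logic is the same idea: full counting adds one per occurrence, binary counting adds at most one per document.

```python
from collections import Counter

# Toy title/abstract texts (term extraction here is naive splitting).
docs = [
    "health policy policy review",
    "health systems review",
    "policy networks",
]

full = Counter()    # full counting: every occurrence of a term adds 1
binary = Counter()  # binary counting: a term adds at most 1 per document
for doc in docs:
    terms = doc.split()
    full.update(terms)
    binary.update(set(terms))

# A threshold like VOSviewer's "minimum number of occurrences of a term".
min_occurrences = 2
kept = sorted(t for t, n in binary.items() if n >= min_occurrences)
print(kept)
```

Note how “policy” counts 3 under full counting but only 2 under binary counting, because it appears twice in one document.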

Overall, the core insight is that bibliometric mapping becomes much more defensible when it is anchored to the same filtered inclusion set used for SLR screening, so the visuals (networks, clusters, and trend plots) are not generic, but tied to the exact dataset selected from Dimensions AI.

Cornell Notes

The workflow links Dimensions AI screening to bibliometric visualization by exporting the filtered dataset and importing it into VOSviewer. VOSviewer converts the raw bibliographic file into a readable bibliometric dataset, flags missing fields, and then generates exportable outputs such as tables, plots, and network maps. Users can keep analysis settings consistent with their Dimensions AI filters or adjust thresholds inside VOSviewer (e.g., minimum documents per author or minimum term occurrences). The method supports multiple map types—authorship/collaboration, bibliographic coupling, country views, and text-based keyword mapping from title and abstract—often with recency/density overlays. These outputs can be saved as images and CSV tables for direct use in an SLR write-up.

How does the workflow keep bibliometric maps consistent with an SLR’s inclusion/exclusion decisions?

Filtering happens first in Dimensions AI (time span, journal selection, and SDG 3 “Good Health and Well-being” are mentioned). The user then exports only the screened/included records and imports that same dataset into VOSviewer. Inside VOSviewer, filters can be changed, but the transcript emphasizes leaving them aligned with the already-filtered Dimensions dataset so the maps reflect the same corpus used for the SLR.
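
For reproducibility, the same inclusion logic can also be recorded as code applied to the export. A minimal sketch with pandas; the year window and journal list are illustrative stand-ins, not the criteria from the video, and the column names are assumptions:

```python
import pandas as pd

# Toy export; replace the window and journal set with the SLR's actual criteria.
records = pd.DataFrame({
    "Title": ["A", "B", "C", "D"],
    "PubYear": [2015, 2019, 2021, 2022],
    "Source title": ["J. Health", "J. Econ", "J. Health", "J. Health"],
})

included_journals = {"J. Health"}
screened = records[
    records["PubYear"].between(2018, 2022)
    & records["Source title"].isin(included_journals)
]
screened.to_csv("included_records.csv", index=False)  # the corpus the maps should use
print(len(screened))
```

Keeping this script alongside the export documents exactly which records fed the maps.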

What role does VOSviewer’s conversion step play after importing Dimension data?

After selecting the Dimensions bibliographic file, the tool reads it and converts it into a readable bibliometric format. During this step it checks for missing data fields and reports what is absent. Once conversion completes, the user can save the report so both the input and derived analysis outputs are retained for later export.

Which types of bibliometric outputs are produced, and how are they used in writing?

The transcript highlights tabular outputs and plots such as annual scientific production and source-related views, plus network visualizations like author collaboration and keyword/co-occurrence structures. It also notes exporting images (for insertion into a paper) and exporting data tables (CSV) for reporting. The same outputs can be added to the final report in both tabular and graphical form.

How do threshold settings affect author and keyword maps?

Thresholds control which entities appear in the map. For author-related views, the transcript mentions setting a minimum number of documents per author (e.g., starting at 5, with the ability to adjust). For keyword text-based mapping, it describes setting a minimum term occurrence threshold (e.g., 5) and then generating a list of terms that meet that cutoff. Raising thresholds typically reduces noise; lowering them can reveal more but may increase clutter or missing/insufficient-data issues.
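
The author-threshold effect described above is easy to demonstrate. This sketch counts documents per author from hypothetical semicolon-separated author fields (a common export convention, assumed here) and shows how raising the minimum prunes the map:

```python
from collections import Counter

# Hypothetical author lists, one string per document.
author_fields = ["Smith; Lee", "Smith; Khan", "Smith", "Lee", "Patel"]

docs_per_author = Counter()
for field in author_fields:
    docs_per_author.update(a.strip() for a in field.split(";"))

def authors_at_threshold(min_docs):
    # Only authors meeting the minimum-documents threshold enter the map.
    return sorted(a for a, n in docs_per_author.items() if n >= min_docs)

print(authors_at_threshold(2))  # raising the threshold prunes the map
print(authors_at_threshold(1))  # lowering it shows everyone, with more clutter
```

The same trade-off applies to the minimum term occurrence setting for keyword maps.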

What’s the difference between bibliographic-based mapping and text-based keyword mapping in this workflow?

Bibliographic-based mapping uses fields from the bibliographic database (e.g., authorship, bibliographic coupling, sources, organizations, and countries). Text-based mapping uses title and abstract text to extract terms, then builds a keyword map based on term co-occurrence. The transcript also mentions options such as ignoring (or keeping) structured abstract labels and choosing a counting method (binary vs. full counting).

How do recency and density overlays help interpret keyword clusters?

The transcript describes overlay visualizations where color indicates recency—newer publications appear in brighter/yellow tones—while density indicates where terms are concentrated. This lets users see not only which keywords are common, but also which themes are emerging more recently, supporting trend and gap analysis for an SLR.
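
The recency coloring boils down to a simple score: the average publication year of the documents each term appears in. A minimal sketch under that assumption, with toy (keyword, year) pairs:

```python
# Toy (keyword, year) occurrence pairs; an overlay colors each term by the
# average publication year of the documents it appears in.
occurrences = [
    ("telemedicine", 2021), ("telemedicine", 2022),
    ("mortality", 2015), ("mortality", 2017), ("mortality", 2019),
]

years_by_term = {}
for term, year in occurrences:
    years_by_term.setdefault(term, []).append(year)

avg_year = {t: sum(ys) / len(ys) for t, ys in years_by_term.items()}
# Higher averages map to the brighter/yellow end of the overlay scale.
print(avg_year)
```

Here “telemedicine” would sit at the yellow (recent) end and “mortality” toward the older end, which is exactly the emerging-vs-established reading used for gap analysis.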

Review Questions

  1. When converting Dimensions AI exports in VOSviewer, what kinds of missing-data issues should be checked before generating final maps?
  2. Which threshold parameters (minimum documents per author, minimum term occurrences) would you adjust first if a keyword map looks too sparse or too cluttered?
  3. How would you justify in an SLR that your bibliometric visuals reflect your screening criteria rather than an arbitrary dataset?

Key Points

  1. Apply SLR screening filters in Dimensions AI first, then export only the included records for bibliometric mapping.

  2. Use VOSviewer’s import and conversion step to create a readable bibliometric dataset and check for missing fields before analysis.

  3. Save VOSviewer reports so both input data and derived outputs (tables/plots/networks) remain tied to the run.

  4. Export images for direct insertion into a paper and export CSV tables for structured reporting.

  5. Adjust analysis thresholds inside VOSviewer (e.g., minimum documents per author, minimum term occurrences) to balance coverage and noise.

  6. Use different VOSviewer map modes depending on the question: authorship/collaboration and bibliographic coupling for structure; title/abstract text for keyword trend mapping.

  7. Interpret keyword maps using overlays (recency and density) to identify both established and emerging themes.

Highlights

Dimensions AI screening filters (time span, journal set, SDG 3) are treated as the starting corpus for every subsequent bibliometric visualization.
VOSviewer converts imported bibliographic files into a readable dataset, with explicit checks for missing data fields before producing maps.
Keyword mapping can be built from title and abstract text, with adjustable minimum term thresholds and overlay colors for recency/density.
Network outputs (author collaboration, keyword clusters, country-wise views) are exported as images and CSV tables for direct SLR reporting.
