Analyzing Dimensions AI Data with VOSviewer and Biblioshiny || Bibliometric Analysis || Hindi
Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
The workflow centers on turning Dimensions AI bibliographic exports into visual, filter-aware bibliometric maps using VOSviewer and Biblioshiny, then carrying the resulting tables, plots, and networks directly into a systematic literature review (SLR) write-up. After applying screening filters in Dimensions AI (e.g., time window, journal selection, and SDG 3 “Good Health and Well-being”), the selected dataset is exported and converted into a format the analysis tools can read. The practical payoff is a reproducible pipeline: the same inclusion/exclusion logic used for SLR screening becomes the basis for author, keyword, and country-level visualizations.
The process begins with downloading the filtered Dimensions AI results (the transcript mentions receiving CSV/Excel-style outputs) and importing them for analysis. The steps described next, opening a bibliometric analysis package, loading the raw Dimensions file, and running a conversion that checks for missing data and produces a “readable” bibliometric dataset, correspond to Biblioshiny, the web interface of the bibliometrix R package (the transcript names “VOSviewer and Biblioshiny” jointly as the target tools). Once the conversion succeeds, the user saves the generated report so that the input data and derived analysis outputs remain attached to the run.
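Before conversion, it is worth checking the export for the missing-data issues the conversion step flags. A minimal sketch with pandas, assuming the export is a CSV; the column names below are assumptions, so verify them against the header of your actual Dimensions export:

```python
import pandas as pd

# Hypothetical set of fields bibliometric tools rely on; adjust to
# match the columns actually present in your Dimensions export.
REQUIRED = ["Title", "Authors", "Abstract", "PubYear", "Source title"]

def check_export(path: str) -> pd.DataFrame:
    """Load a Dimensions CSV export and report missing fields."""
    df = pd.read_csv(path)
    for col in REQUIRED:
        if col not in df.columns:
            print(f"column missing entirely: {col}")
        else:
            n_empty = df[col].isna().sum()
            if n_empty:
                print(f"{col}: {n_empty} of {len(df)} records empty")
    return df
```

Records with empty abstracts or years are the usual culprits when downstream plots fail, so it helps to know about them before importing.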
From there, the analysis outputs are organized into multiple views and exportable artifacts. The transcript notes that filters can be applied again inside the analysis tool, but the key point is that the dataset already reflects the Dimensions AI screening choices, so the user keeps those settings aligned. Outputs include tabular summaries and plots such as annual scientific production (a standard Biblioshiny view), source-related views, and network structures. The workflow also supports exporting images for direct insertion into a paper, plus exporting data tables (CSV) for further reporting.
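As a rough illustration of what sits behind an “annual scientific production” table, publications per year can be tallied directly from the export; the `PubYear` column name is an assumption about the export format:

```python
import pandas as pd

def annual_production(df: pd.DataFrame, year_col: str = "PubYear") -> pd.DataFrame:
    """Count documents per publication year, sorted chronologically."""
    return (
        df[year_col]
        .dropna()            # drop records with no year
        .astype(int)
        .value_counts()      # documents per year
        .sort_index()
        .rename_axis("year")
        .reset_index(name="articles")
    )

# The resulting table can be exported for the SLR write-up, e.g.:
# annual_production(df).to_csv("annual_production.csv", index=False)
```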
A major emphasis is on interpreting and customizing the maps. The transcript points to author-related networks (e.g., “author collaboration” style views), country-wise results, and keyword-based clustering. It also notes that some plots may fail if the underlying data is insufficient, and that using alternative data sources (e.g., “Scopus data”) can improve completeness when errors occur.
The second half shifts to VOSviewer’s map creation modes. One mode builds maps from bibliographic data, such as co-authorship or bibliographic coupling, with adjustable thresholds such as the minimum number of documents per author. Another mode creates text-based keyword maps from title and abstract fields, with options such as ignoring structured-abstract labels and choosing between full counting and binary counting. The transcript describes setting a minimum number of term occurrences (e.g., a threshold of 5) and then generating overlay visualizations that color-code density and recency (newer publications appearing in brighter/yellow tones). The end result is a set of keyword clusters and co-occurrence patterns that can be used to justify research gaps and thematic trends in an SLR.
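A minimal sketch of the co-occurrence logic behind such keyword maps, using binary counting (each term contributes at most once per document) and a minimum-occurrence cutoff. Whitespace tokenization here is a crude stand-in for VOSviewer’s noun-phrase extraction, and the threshold is lowered for the toy data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs, min_occ=2):
    """Binary-counted term occurrences and pairwise co-occurrences.

    docs: iterable of title/abstract strings.
    min_occ: minimum occurrences for a term to be kept (VOSviewer-style).
    """
    # Binary counting: deduplicate terms within each document.
    term_sets = [set(d.lower().split()) for d in docs]
    occ = Counter(t for s in term_sets for t in s)
    kept = {t for t, c in occ.items() if c >= min_occ}
    pairs = Counter()
    for s in term_sets:
        for a, b in combinations(sorted(s & kept), 2):
            pairs[(a, b)] += 1   # one co-occurrence per document
    return occ, pairs
```

Raising `min_occ` prunes rare terms and declutters the map; lowering it increases coverage at the cost of noise, which is the trade-off the threshold settings control.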
Overall, the core insight is that bibliometric mapping becomes much more defensible when it is anchored to the same filtered inclusion set used for SLR screening, so the visuals (networks, clusters, and trend plots) are not generic but tied to the exact dataset selected from Dimensions AI.
Cornell Notes
The workflow links Dimensions AI screening to bibliometric visualization by exporting the filtered dataset and importing it into the analysis tools. The conversion step (handled by Biblioshiny’s import function in the workflow shown) turns the raw bibliographic file into a readable bibliometric dataset, flags missing fields, and then generates exportable outputs such as tables, plots, and network maps. Users can keep analysis settings consistent with their Dimensions AI filters or adjust thresholds inside VOSviewer (e.g., minimum documents per author or minimum term occurrences). The method supports multiple map types: authorship/collaboration, bibliographic coupling, country views, and text-based keyword mapping from title and abstract, often with recency/density overlays. These outputs can be saved as images and CSV tables for direct use in an SLR write-up.
How does the workflow keep bibliometric maps consistent with an SLR’s inclusion/exclusion decisions?
What role does the conversion step play after importing Dimensions data?
Which types of bibliometric outputs are produced, and how are they used in writing?
How do threshold settings affect author and keyword maps?
What’s the difference between bibliographic-based mapping and text-based keyword mapping in this workflow?
How do recency and density overlays help interpret keyword clusters?
Review Questions
- When converting Dimensions AI exports, what kinds of missing-data issues should be checked before generating final maps?
- Which threshold parameters (minimum documents per author, minimum term occurrences) would you adjust first if a keyword map looks too sparse or too cluttered?
- How would you justify in an SLR that your bibliometric visuals reflect your screening criteria rather than an arbitrary dataset?
Key Points
1. Apply SLR screening filters in Dimensions AI first, then export only the included records for bibliometric mapping.
2. Use the import and conversion step (Biblioshiny’s convert function in this workflow) to create a readable bibliometric dataset and check for missing fields before analysis.
3. Save the generated reports so both input data and derived outputs (tables/plots/networks) remain tied to the run.
4. Export images for direct insertion into a paper and export CSV tables for structured reporting.
5. Adjust analysis thresholds inside VOSviewer (e.g., minimum documents per author, minimum term occurrences) to balance coverage and noise.
6. Use different VOSviewer map modes depending on the question: authorship/collaboration and bibliographic coupling for structure; title/abstract text for keyword trend mapping.
7. Interpret keyword maps using overlays (recency and density) to identify both established and emerging themes.
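The threshold tuning in point 5 can be sketched as a simple author-frequency filter over the export; the semicolon-separated `Authors` column is an assumption about the export format, so verify the delimiter in your own file:

```python
import pandas as pd
from collections import Counter

def authors_meeting_threshold(df, min_docs=5, col="Authors", sep=";"):
    """Return authors with at least min_docs documents in the dataset.

    Assumes one delimiter-separated author string per record; the
    column name and delimiter are assumptions about the export.
    """
    counts = Counter(
        a.strip()
        for cell in df[col].dropna()
        for a in cell.split(sep)
        if a.strip()
    )
    return {a for a, c in counts.items() if c >= min_docs}
```

Sweeping `min_docs` up shrinks the co-authorship map to its most prolific core; sweeping it down broadens coverage, mirroring the trade-off VOSviewer exposes in its threshold dialog.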