
Lecture 7: Search Engines, by Dr. Ebthal Dongol

Qena Medical Student Research Unit
6 min read

Based on Qena medical student research unit's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Search engines are built from crawling, indexing, and ranking steps; they search an indexed database rather than the entire internet.

Briefing

Search engines are portrayed as data systems—not the internet itself—built to collect web content, index it, and rank results so researchers can find reliable, up-to-date information with less effort. The core workflow starts with “crawlers” (spiders) that scan websites, then an indexing step that stores keywords and metadata in large databases. When a user searches for a term, the engine queries its index and returns ranked results based on a ranking system (described as partly secret and not fully transparent), while noting that search engines typically display only a fraction of what exists online.

Ranking can follow different principles. One approach emphasizes popularity and visibility: pages that are more widely linked or more frequently referenced tend to appear higher. Another approach relies on subject organization—using directories or human-curated classification—where experts group content by topic, making it easier to find material aligned with a specific field. The lecture also highlights the practical tradeoff between breadth and precision: broad, general queries can produce huge result counts, while adding specificity reduces the number of results but improves relevance and quality.
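The popularity-based ranking idea (pages that are more widely linked rank higher) can be illustrated with a minimal PageRank-style iteration. The link graph and damping value below are invented for illustration, not taken from the lecture:

```python
# Minimal PageRank-style sketch: a page's score grows with the scores of
# the pages that link to it. The three-page link graph is illustrative only.
links = {
    "A": ["B", "C"],  # A links out to B and C
    "B": ["C"],
    "C": ["A"],
}

damping = 0.85  # standard PageRank damping factor
scores = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until scores roughly stabilize
    new = {}
    for page in links:
        # Sum contributions from every page that links to this one,
        # each divided by how many outgoing links the source has.
        incoming = sum(
            scores[src] / len(out)
            for src, out in links.items()
            if page in out
        )
        new[page] = (1 - damping) / len(links) + damping * incoming
    scores = new

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # C is linked by both A and B, so it ranks first
```

Here page C ends up highest because two pages link to it, matching the intuition that visibility and inbound references drive popularity-based ranking.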

To get better outcomes, the session recommends using “advanced” search behaviors rather than typing vague phrases. Key tactics include placing important keywords at the start of the query, using quotation marks to force exact phrase matching (e.g., searching for two words together), and using operators to include or exclude terms (such as excluding a word with a minus sign, or narrowing by specifying a field). File-type filters are also emphasized for efficiency—for example, searching specifically for PDFs when the goal is to download documents rather than browse web pages. The lecture further distinguishes between general search and specialized engines: academic-focused search is preferred when the goal is research-grade sources.

Google Scholar is presented as the main academic search tool, with a clear benefit: it surfaces papers and citation-related context, including how often a work has been cited and citation tools for generating references in different styles. Still, it comes with a weakness—some items may not be indexed in major databases (like Scopus or Science Citation Index), which can lead to later problems when verifying whether a journal is legitimate or properly indexed. The advice is to treat Scholar as a starting point, then verify journal quality using indexing and impact-factor indicators, and to build a Google Scholar profile so new publications can be tracked and updated.

Beyond Scholar, the lecture introduces Microsoft Academic (including its semantic research component) as an additional academic option, and then shifts to Egypt's "Bank El-Ma'refa" (the Egyptian Knowledge Bank) as a major access route to subscription databases. The Knowledge Bank requires account creation with a national ID and an academic email, and it provides access to resources like Web of Science, Scopus, ScienceDirect, Springer-related content, and more—often unlocking full-text and advanced search features that would otherwise be paywalled. It also offers tools for journal discovery (including open-access identification, impact-factor information, and indexing details), plus training and online activities.

Finally, the lecture covers ResearchGate as a research-focused platform where authors share papers and where questions can receive community responses. It closes with a practical reminder: for academic evaluation and promotion systems, the recognized metrics and indexing typically prioritize databases such as Scopus and Web of Science, not necessarily citation counts on ResearchGate or general web visibility. Overall, the message is that better searching—using precision operators, academic tools, and verified databases—directly improves the reliability of the sources used in medical research and writing.

Cornell Notes

Search engines work by crawling websites, indexing content into large databases, and ranking results when a user enters keywords. Because search engines only return a portion of the web, researchers should use precise queries—starting with key terms, using quotation marks for exact phrases, and applying operators to include/exclude words or restrict file types. For academic work, Google Scholar is highlighted for citation context and reference tools, but it can surface journals that may not be indexed in major databases, so verification is essential. The Knowledge Bank is presented as a practical gateway to subscription databases (e.g., ScienceDirect, Scopus, Web of Science) using an academic account, enabling full-text access and journal-quality checks. ResearchGate is treated as a community platform for papers and Q&A, but its metrics are not always what academic promotion systems rely on.

How does a search engine turn a keyword query into ranked results?

The process starts with crawlers/spiders that scan websites and collect data. That data is stored in a database during an indexing step (keywords and related fields are organized for fast retrieval). When a user searches, the engine looks up the query terms in its index and returns results ordered by a ranking system. The ranking logic is described as partly secret, but it generally reflects signals like relevance and popularity/visibility (e.g., how widely a page is referenced or linked).
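The crawl-then-index-then-retrieve pipeline above can be sketched with a toy inverted index. The pages and their text here are invented stand-ins for crawled content, not any real engine's data:

```python
from collections import defaultdict

# Toy "crawled" pages: in a real engine these would come from spiders
# scanning websites, as the lecture describes.
pages = {
    "page1": "search engines index the web",
    "page2": "medical research uses academic search engines",
    "page3": "the web is larger than any index",
}

# Indexing step: map each keyword to the set of pages that contain it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return pages containing every query term (unranked toy lookup)."""
    results = set(pages)
    for term in query.lower().split():
        results &= index.get(term, set())
    return sorted(results)

print(search("search engines"))  # only pages containing both terms
```

The engine answers queries from this precomputed index, never by rescanning the live web, which is why results reflect only what has been crawled and indexed; a real system would add the ranking step on top of this lookup.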

Why does adding specificity to a query usually improve result quality?

General queries can produce enormous result counts (the lecture gives examples like “what is the …” style searches yielding very large numbers). When the query becomes more specific—by adding the most relevant keywords and structuring the phrase—results drop sharply, but the remaining matches are more likely to be directly relevant. The lecture’s takeaway is to know what you’re searching for and make the query reflect that, rather than relying on broad terms.

What practical operators can narrow results on Google-style search?

Quotation marks force exact phrase matching (two words must appear together). Exclusion can be done with a minus sign to remove unwanted terms (e.g., search for one concept while excluding another). The lecture also mentions using field/structure options (like specifying where terms should appear) and file-type filters to target downloadable documents such as PDFs instead of generic web pages.
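These operators combine naturally in a single query. The medical topic below is an invented example; the operator syntax (`"…"`, `-`, `filetype:`, `intitle:`) follows common Google search conventions:

```
"diabetic retinopathy" screening        exact-phrase match on the quoted words
"diabetic retinopathy" -surgery         minus sign excludes an unwanted term
"diabetic retinopathy" filetype:pdf     restricts results to PDF documents
intitle:"diabetic retinopathy" review   field operator: phrase must appear in the title
```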

What are Google Scholar’s strengths and its main risk?

Google Scholar is useful for academic discovery and citation context: it shows papers, citation information, and provides tools to generate references in different styles via its options. The risk is that some papers/journals may not be indexed in major databases (the lecture notes cases where items appear in Scholar but later fail verification in systems like Scopus/Science Citation Index). The solution is to verify journal legitimacy and indexing before relying on it for academic writing.

How does the Knowledge Bank improve access compared with searching databases directly?

The Knowledge Bank acts as an access layer tied to subscription agreements. Instead of entering sites like ScienceDirect or other databases directly (where many items are paywalled), users access them through the Knowledge Bank account, which unlocks more content and advanced options. It also supports journal discovery workflows—checking indexing, impact-factor-related indicators, open-access status, and other publication details—before submitting or citing work.

Which platforms’ metrics are most likely to matter for academic evaluation?

The lecture emphasizes that recognized evaluation systems typically rely on indexing and metrics from major databases such as Scopus and Web of Science (and their associated journal coverage). ResearchGate and Google Scholar can be helpful for discovery and community interaction, but their counts are not always what promotion/assessment bodies use. The practical advice is to publish in journals that are indexed in the recognized databases.

Review Questions

  1. When should a researcher switch from general web search to academic search tools, and what changes in query strategy should follow?
  2. What verification steps should be taken after finding a paper in Google Scholar to ensure the journal is properly indexed?
  3. Which search operators (e.g., quotes, exclusion, file-type filters) are most useful for turning a vague question into a precise literature query?

Key Points

  1. Search engines are built from crawling, indexing, and ranking steps; they search an indexed database rather than the entire internet.

  2. More specific queries typically reduce result volume while improving relevance, so keyword choice and query structure matter.

  3. Quotation marks enforce exact phrase matching, while exclusion operators (like minus signs) help remove irrelevant topics.

  4. Google Scholar is strong for academic discovery and citation tooling, but journals must be verified for indexing in major databases.

  5. The Knowledge Bank provides subscription access to major databases and full-text, using an account tied to national ID and an academic email.

  6. Journal selection should prioritize recognized indexing/quality signals (e.g., Scopus/Web of Science coverage) over community metrics alone.

  7. ResearchGate can support discovery and Q&A, but its metrics are not the primary basis for many formal academic evaluations.

Highlights

Search engines return results from their own indexed databases, not from the whole internet, and they usually cover only a fraction of what exists online.
Precision beats volume: adding the right keywords and using phrase matching can cut results dramatically while increasing relevance.
Google Scholar is a powerful starting point for citations and references, but journal indexing must be checked to avoid later problems.
The Knowledge Bank is positioned as the practical route to subscription databases and full-text access in Egypt.
Academic recognition is tied more to major indexing systems (like Scopus/Web of Science) than to ResearchGate or general web visibility.

Topics

Mentioned

  • Dr. Ebthal Dongol
  • Nazar
  • AI
  • PDF
  • SEO
  • ID
  • CV
  • DOI
  • Scopus
  • Web of Science (WOS)