Original Paper
Retrieving Clinical Evidence: A Comparison of PubMed and
Google Scholar for Quick Clinical Searches
Salimah Z Shariff(1,2), BMath, PhD; Shayna AD Bejaimal(1), BMedSc; Jessica M Sontrop(1,2), PhD; Arthur V Iansavichus(1), MLIS; R Brian Haynes(3,4), MD, PhD; Matthew A Weir(1), MD; Amit X Garg(1,2,3), MD, PhD
(1) Kidney Clinical Research Unit, Division of Nephrology, Western University, London, ON, Canada
(2) Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
(3) McMaster University, Department of Clinical Epidemiology and Biostatistics, Hamilton, ON, Canada
(4) Department of Medicine, McMaster University, Hamilton, ON, Canada
Corresponding Author:
Salimah Z Shariff, BMath, PhD
Kidney Clinical Research Unit
Division of Nephrology
Western University
800 Commissioners Rd E. Rm ELL-108
London, ON, N6A 4G5
Canada
Phone: 1 519 685 8500 ext 56555
Fax: 1 519 685 8072
Email: salimah.sharif[email protected]
Abstract
Background: Physicians frequently search PubMed for information to guide patient care. More recently, Google Scholar has
gained popularity as another freely accessible bibliographic database.
Objective: To compare the performance of searches in PubMed and Google Scholar.
Methods: We surveyed nephrologists (kidney specialists) and provided each with a unique clinical question derived from 100
renal therapy systematic reviews. Each physician provided the search terms they would type into a bibliographic database to
locate evidence to answer the clinical question. We executed each of these searches in PubMed and Google Scholar and compared
results for the first 40 records retrieved (equivalent to 2 default search pages in PubMed). We evaluated the recall (proportion of
relevant articles found) and precision (ratio of relevant to nonrelevant articles) of the searches performed in PubMed and Google
Scholar. Primary studies included in the systematic reviews served as the reference standard for relevant articles. We further
documented whether relevant articles were available as free full-texts.
Results: Compared with PubMed, the average search in Google Scholar retrieved twice as many relevant articles (PubMed:
11%; Google Scholar: 22%; P<.001). Precision was similar in both databases (PubMed: 6%; Google Scholar: 8%; P=.07). Google
Scholar provided significantly greater access to free full-text publications (PubMed: 5%; Google Scholar: 14%; P<.001).
Conclusions: For quick clinical searches, Google Scholar returns twice as many relevant articles as PubMed and provides
greater access to free full-text articles.
(J Med Internet Res 2013;15(8):e164) doi: 10.2196/jmir.2624
KEYWORDS
information dissemination/methods; information storage and retrieval; medical; library science; PubMed; Google Scholar;
nephrology
Introduction
With the explosion of available health information, physicians
increasingly rely on bibliographic databases for health
information to guide the care of their patients. Unfortunately,
physicians face challenges when trying to find the information
they need. They lack the time to develop efficient search
strategies and often retrieve large numbers of nonrelevant
articles [1-9]. Moreover, many bibliographic resources require
paid subscriptions, which further limit access to current best
evidence.
Two online resources that are freely accessible around the world
are PubMed and Google Scholar. PubMed remains the most
widely used resource for medical literature [10]. More recently,
Google Scholar has gained popularity as an alternative online
bibliographic search resource [11-21]. Available search features
in Google Scholar and PubMed are contrasted in Table 1.
Whereas PubMed indexes only peer reviewed biomedical
literature, Google Scholar also indexes articles, theses, books,
abstracts, and court opinions from a variety of disciplines and
sources including academic publishers, professional societies,
online repositories, universities, and other websites [22]. While
PubMed orders articles in roughly reverse chronological order,
Google Scholar aims to order articles by relevance using a
proprietary algorithm that weighs information from the full text
of each article, author, and journal information, and the number
of times the article has been cited in other scholarly literature.
Only a small fraction of the 21 million records in PubMed are
available as free full-text publications via PubMed Central or
specific journals. In contrast, due to its expanded search
capabilities, Google Scholar may provide greater access to free
full-text publications. To date, the utility of Google Scholar
compared with PubMed for retrieving relevant primary literature
to answer clinical questions has not been sufficiently tested.
In this study, we compare the ability of PubMed and Google
Scholar to retrieve relevant renal literature for searches created
by nephrologists to address questions of renal therapy. Renal
literature is dispersed across more than 400 multidisciplinary
journals and is difficult for nephrologists to track down; thus,
this discipline provides an ideal model for this type of evaluation
[23].
Table 1. Search features available in PubMed and Google Scholar.

Feature | PubMed | Google Scholar

Searching
Allows use of Boolean terms (AND, OR, NOT) | Yes | Yes
Provides search limits (eg, age, publication type, date) | Yes (extensive) | Yes (very limited)
Provides search filters that limit search results to a specific clinical study category or subject matter (eg, Clinical Queries, Topic-Specific Queries) | Yes | No
Allows use of truncation (inclusion of multiple beginnings or endings achieved by typing an asterisk "*" in PubMed, eg, cardio*) | Yes | No (automatically searches for variants of words)
Allows use of controlled vocabulary (eg, MeSH terminology) | Yes | No
Provides spell checking for search terms | Yes | Yes
Stores search history | Yes | No
Sorts results by relevance | No | Yes

Access to articles
Indicates whether articles are available as free full-texts | Yes | Yes
Allows linking to institutions for subscription access (eg, link to university library) | Yes | Yes

Other services
Allows article citations to be imported into bibliography managers (eg, Reference Manager) | Yes (can import multiple selected citations) | Yes (can import only one citation at a time)
Tracks the number of times articles are cited by other publications | Yes (only for journals in PubMed Central) | Yes
When searching, algorithm also searches the full text of publications | No | Yes
Provides email alerts for prespecified searches | Yes | Yes (introduced in 2010)
Allows users to view related articles for an article of interest | Yes | Yes
Source lists all journals, with their publication dates, that are included in its data holdings | Yes | No
Methods
Clinical Questions and Articles of Interest
We derived 100 clinical questions from the objectives statements
of 100 high-quality systematic reviews on renal therapy
published between 2001 and 2009. We identified the systematic
reviews from the EvidenceUpdates service in November 2009,
by selecting the option to view all reviews for the discipline of
nephrology; our search yielded 207 systematic reviews. This
service prescreens and identifies systematic reviews and
meta-analyses that meet strict methodological criteria and have
a high potential for clinical relevance [24,25]. Two nephrologists
used a previously developed checklist to independently confirm
that each review targeted a single clinical question relevant to
adult nephrology care (kappa=0.98) [26] and included at least
2 primary studies. Discrepancies were resolved through
discussion; 100 reviews met the inclusion criteria (see Figure
1 for the process of selecting reviews). We transformed the
objectives statement from each review into a clinical question
(see Multimedia Appendix 1 for a sample of the questions posed
and search queries received). For example, the objective of one
review was “to assess the effectiveness of normal saline versus
sodium bicarbonate for prevention of contrast-induced
nephropathy”. We transformed this statement into the clinical
question: “How effective is normal saline versus sodium
bicarbonate for the prevention of contrast-induced
nephropathy?”[27]. We extracted the citations to the primary
studies referenced in each review that met the eligibility criteria
for inclusion. These citations acted as a set of relevant articles
for the corresponding clinical question (also referred to as the
“reference standard”). The reviews cited a mean of 19 articles,
totaling 1574 unique citations across all reviews.
Figure 1. Process of selecting systematic reviews.
Data Collection and Measurements
We surveyed a simple random sample of nephrologists
practicing in Canada (response rate 75%). Survey details are
available elsewhere [28,29]. Briefly, we asked nephrologists
about their information-gathering practices. In addition, we
provided each nephrologist with a unique, randomly selected
therapy-focused clinical question generated from a renal
systematic review. The nephrologists provided the search terms
they would type into a bibliographic resource to retrieve relevant
studies to address the clinical question (known as a “search
query”). The survey was designed and administered using the
Dillman tailored design method [30]. The sampling frame
consisted of nephrologists practicing in Canada and included
both academic (practicing in a center with a fellowship training
program) and community-based nephrologists. Nephrologists
were selected from the sampling frame using a random number
generator; one nephrologist was selected at a time and randomly
assigned a clinical question. Once a selected nephrologist was
deemed a nonresponder, the same clinical question was assigned
to another nephrologist. In addition, upon receipt of a completed
survey, if a respondent did not provide a search query to the
clinical question, the same survey was re-administered to a new
participant. Survey administration continued until 1 valid search
query for each of the 100 questions was received. Overall, 115
survey responses were received and 15 were excluded from
further analysis because of missing or illegible search queries
(n=8) or because the survey was received after a physician was
deemed a nonresponder (n=7).
To compare the performance of PubMed and Google Scholar
for use by practicing physicians, we executed each
physician-provided search query in PubMed and Google Scholar
using all default settings. Occasionally, physicians provided
misspelled search terms, acronyms, or other discrepancies. To
address this, the syntax of the search was modified slightly using
prespecified rules (listed in Multimedia Appendix 2). This was
done in duplicate and differences were resolved by consensus.
All searches were conducted between May and July 2010. We
restricted each search to the search dates provided in the
methods section of each systematic review. For each search
result, we calculated the total number of records retrieved, the
number of relevant articles retrieved, and the position of the
relevant records in the search results. For each relevant article,
we followed all links for full-text access and documented
whether the full-text version could be retrieved for free. We did
not use any paid-access privileges. To ensure that we did
not inadvertently make use of our institution’s licensing when
searching, all searches were conducted on a computer with
Internet access provided by a local service provider and not our
institution. We tested and validated our search methodology in
a pilot phase. Two assessors (graduate students with expertise
in computer science and biomedical science) independently
conducted 10 searches in PubMed and Google Scholar and
achieved a percent agreement of 99%.
Content Coverage
To assess the potential for bias due to the absence of articles in
one source over the other, we evaluated the content coverage
for each database. A content coverage analysis determines
whether pertinent literature is contained within a specific
bibliographic database [31]. There are two potential reasons for
not finding an important article when searching a database such
as PubMed: either the article of interest is not included in the
content holdings of the database (referred to as a lack of content
coverage), or the article is present, but the search mechanism
fails to retrieve it when a search phrase is typed into the
database. To determine content coverage, we searched for each
primary article using advanced search strategies as outlined in
other coverage studies [32,33]. This involved various
combinations of the manuscript’s title (both English and
nonEnglish), the authors’ names, journal title, page numbers,
and the year published. We selected all links to candidate
matches to confirm a match. In Google Scholar, the option to
view all versions for a candidate article was always selected
and all links were attempted. If a primary article was not found
in one of the resources, further searches were performed by
another rater to confirm its absence. We previously published
a more comprehensive content coverage analysis of renal
literature that applied the same methods [34].
General Statistical Analytic Strategy and Sample Size
Primary Analysis
The two most prominent performance metrics of searching are
recall and precision (Table 2). Results from our survey indicated
that 80% of nephrologists do not review beyond 40 search
results, which is the equivalent of 2 default search pages in
PubMed [28]. Thus, for the primary analysis, we calculated the
recall and precision for the first 40 retrieved records in each
search. We used a 2-sided paired t test to compare search
outcomes between PubMed and Google Scholar. To reduce the
risk of type I error, we used a conservative P value of .025 to
interpret significance for all comparisons. We used SAS,
Version 9.2 for all statistical analyses.
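As a rough sketch, the paired comparison described above can be expressed in code. The recall values below are hypothetical (the study's analyses were run in SAS, not Python); the t statistic would then be compared against the t distribution with n-1 degrees of freedom at the .025 significance threshold.

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Two-sided paired t statistic for per-question search outcomes.

    x, y: outcome values (eg, recall %) for the same searches run in
    two databases. Returns the t statistic and degrees of freedom.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical recall values (%) for 5 searches in two databases
google_scholar = [25, 10, 30, 20, 15]
pubmed = [10, 5, 20, 10, 10]
t, df = paired_t(google_scholar, pubmed)
```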
Secondary Analysis
We repeated the calculation for recall while only considering
relevant articles that are freely accessible. For each
physician-generated search, we also calculated the recall and
precision for all retrieved records (not just the first 40).
Table 2. Formulas for calculating search recall(a) and precision(b).

Search in PubMed or Google Scholar for a clinical question | Relevant articles(c) | Nonrelevant articles
Articles found | TP | FP
Articles not found | FN | TN

(a) Search recall = TP/(TP + FN): the number of relevant articles found as a proportion of the total number of relevant articles.
(b) Search precision = TP/(TP + FP) (also referred to as the positive predictive value in diagnostic test terminology): the number of relevant articles found as a proportion of the total number of articles found.
(c) For each search, the set of relevant articles was the collection of primary studies included in the original systematic review from which the clinical question was derived.
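The Table 2 formulas are straightforward to express in code. This minimal sketch uses hypothetical article IDs; the reference standard plays the role of the primary studies cited by the systematic review, and the retrieved set would be, for example, the first 40 records of a search.

```python
def recall_precision(retrieved, relevant):
    """Compute search recall and precision as defined in Table 2.

    retrieved: IDs returned by the search (eg, the first 40 records)
    relevant:  IDs of the reference-standard articles (primary studies
               cited by the systematic review)
    """
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)   # relevant articles found
    fn = len(relevant - retrieved)   # relevant articles not found
    fp = len(retrieved - relevant)   # nonrelevant articles found
    recall = tp / (tp + fn) if relevant else 0.0
    precision = tp / (tp + fp) if retrieved else 0.0
    return recall, precision

# Hypothetical example: the search returns 5 records; the review cites 4 studies
retrieved = ["a1", "a2", "a3", "a4", "a5"]
relevant = ["a2", "a4", "x1", "x2"]
print(recall_precision(retrieved, relevant))  # (0.5, 0.4)
```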
Results
Nephrologist and Search Characteristics
Participating nephrologists were an average of 48 years old and
had practiced nephrology for an average of 15 years. All
respondents had used an online resource to guide the treatment
of their patients in the previous year. Approximately 90% used
PubMed to search, while 40% used Google Scholar; 32%
indicated using both bibliographic resources. Searches provided
by the nephrologists contained an average of three concept
terms, with each term embodying a single concept, for example,
myocardial infarction. Forty-eight percent of nephrologists used
Boolean terms such as AND, OR, and NOT in their searches.
Seven percent of searches included advanced search features
such as search limits, search filters, and truncation (inclusion
of multiple endings achieved by typing in an asterisk “*” in
PubMed, eg, nephr*). No substantive differences were observed
in searches provided by older versus younger nephrologists,
males versus females, or by those practicing in an academic
versus community setting.
Content Coverage
PubMed and Google Scholar contained similar proportions of
tested articles in their database holdings: each contained 78%
of the 1574 unique citations collected. Google Scholar contained
an additional 5% of the articles not included in PubMed and
PubMed contained an additional 2% of the articles not included
in Google Scholar; 15% of the articles were missing in both
sources.
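The coverage partition described above (articles in both sources, in one source only, and in neither) can be sketched with set operations. The citation IDs here are hypothetical, standing in for the 1574 tested citations.

```python
# Hypothetical coverage check: which reference-standard citations each
# database holds (the study tested 1574 unique citations; the IDs below
# are made up for illustration).
citations = {"c%d" % i for i in range(1, 11)}   # 10 test citations
in_pubmed = {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"}
in_scholar = {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c9"}

both = in_pubmed & in_scholar                   # covered by both sources
scholar_only = in_scholar - in_pubmed           # Google Scholar only
pubmed_only = in_pubmed - in_scholar            # PubMed only
neither = citations - in_pubmed - in_scholar    # missing from both

# The four groups partition the citation set
assert len(both) + len(scholar_only) + len(pubmed_only) + len(neither) == len(citations)
print(len(both) / len(citations))               # proportion covered by both
```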
Primary Analysis
Google Scholar retrieved twice as many relevant articles as
PubMed within the first 40 records (average recall: 21.9% vs
10.9%; Table 3). Precision was similar in the two databases.
When we considered both metrics together, Google Scholar
demonstrated better recall and similar precision in 77% of
searches.
Secondary Analysis
Google Scholar retrieved three times as many relevant articles
with free full-text access compared with PubMed (average
recall: 15% vs 5%; P<.001; Table 3). When examining all
records (not just the first 40 records), PubMed and Google
Scholar retrieved a similar number of relevant articles, although
Google Scholar continued to provide increased free full-text
access to publications. Overall, searches in Google Scholar
retrieved more records per search compared with PubMed
(median: 1000 records vs 148 records, respectively). This
resulted in lower search precision in Google Scholar when all
retrieved articles were considered.
Table 3. Recall and precision of physician searches tested in PubMed and Google Scholar (within the first 40 citations, PubMed found no relevant
citations for 54% of the searches and Google Scholar found no relevant citations for 21% of the searches).

Measure(a) | Within first 40 citations: mean %(b) (SD(c)); P value(d) | All citations: mean %(b) (SD(c)); P value(d)
Recall | P<.001 | P=.10
  PubMed | 10.9 (20) | 38.0 (33)
  Google Scholar | 21.9 (24) | 43.2 (29)
Free full-text recall | P<.001 | P<.001
  PubMed | 4.7 (11) | 16.4 (20)
  Google Scholar | 14.6 (20) | 25.1 (23)
Precision | P=.07 | P<.001
  PubMed | 5.6 (11) | 6.0 (11)
  Google Scholar | 7.6 (7) | 0.8 (0.8)

(a) Formulas for measures: (1) Recall: (number of relevant articles retrieved) / (total number of relevant articles available); (2) Free full-text recall: (number of relevant articles retrieved that are available for free full-text viewing) / (total number of relevant articles available); and (3) Precision: (number of relevant articles retrieved) / (total number of citations retrieved).
(b) Values represent the mean of results from 100 searches.
(c) SD=standard deviation.
(d) P values compare PubMed with Google Scholar using a paired t test; significance values remained similar when using the nonparametric Wilcoxon signed rank test.
Discussion
Principal Findings
Nephrologists increasingly rely on online bibliographic
databases to guide the care of their patients. Because most
nephrologists view fewer than 40 search results, important
literature will be missed if not contained within this narrow
window [28]. For our primary objective, we compared the ability
of PubMed and Google Scholar to retrieve relevant renal
literature within the first 40 records. We found that Google
Scholar retrieved twice as many relevant articles as
PubMed—and three times as many relevant articles with free
full-text access. These results are not attributable to differences
in content coverage, as 78% of the tested articles were available
in both databases [34]. Instead, the improved performance of
Google Scholar may result from its method of ranking results
based on relevance. However, when considering all search
results (not just the first 40 records), recall was similar between
Google Scholar and PubMed, while precision favored PubMed.
While many academics see the value in Google Scholar, its
uptake has been slow among physicians [11-21,35-40]. Unlike
Google Scholar, PubMed provides indexed content that is
directly relevant to physicians, including clinical controlled
vocabulary (MeSH [medical subject headings]), search limits
(such as limiting articles by age or study type), and access to
discipline-specific and methods search filters [24,41-43]. These
advanced features have the potential to reduce the number of
nonrelevant articles that are retrieved. However, only 7% of
respondents used these features in their searches for this study.
While 77% of nephrologists reported previous use of search
limits, only 37% used controlled vocabularies, and only 20%
used filters such as the Clinical Queries feature in PubMed
[28,29]. Whereas PubMed searches retrieve published literature
from biomedical journals, Google Scholar searches retrieve
both published and unpublished literature from a range of
disciplines. This may explain the greater overall number of
records found per search (median of 1000 for Google Scholar
and 148 for PubMed).
Google Scholar provided significantly greater access to free
full-text articles. This is notable given that physicians and
institutions in developing nations may lack the resources needed
to maintain subscriptions to journals. Even in developed
countries, the burden of paying for knowledge is felt. Some
academic databases and journals have raised their fees for
university subscriptions by up to 400%. This prompted one
Canadian university library to cancel subscription access to the
Web of Science bibliographic database, citing “a challenging
fiscal climate” as a primary reason [44-46].
Our findings are consistent with those of previous studies
[12,14,15,20,21]. In preliminary testing within targeted areas
of respiratory care, sarcoma, pharmacotherapy, and family
medicine, Google Scholar provided better comprehensiveness
(recall) but worse efficiency (precision) compared with PubMed.
Similar results were seen in our study when we considered all
records that were retrieved and not just the first 40. However,
previous studies tested only a small number of searches (range:
1-22), compared with the 100 searches in the current study. In
addition, the search queries used in previous studies were created
and tested by researchers in idealized settings, which may not
generalize as well to searches generated by physicians in busy
clinical settings.
We followed recommendations of search database evaluations
from the field of information retrieval and designed our study
to improve on limitations of previous studies [47,48]. To ensure
that the clinical questions tested were relevant to practicing
nephrologists, we gathered questions using renal systematic
reviews that targeted questions in patient care where uncertainty
exists. To ensure that all articles in the review were relevant for
the clinical question, we selected systematic reviews that
specified only one objective. Finally, to maximize external
validity, we used a survey to obtain searches created by
practicing nephrologists. Our survey achieved a response rate
of 75% with responses from both newer and more seasoned
nephrologists practicing in both community and academic
settings [28].
Limitations
Our study has some limitations. We did not directly observe
the nephrologists as they searched. There may be a discrepancy
between what search terms busy clinicians report in a survey
and what they actually type in practice [37]. As recommended,
we used primary studies included in high-quality systematic
reviews to define relevance [14,20,49-54]. By using this method,
we were unable to include other articles that some physicians
may find relevant (eg, studies of lower methodological quality,
narrative reviews, case reports, commentaries). However, our
approach engages widely accepted principles of the hierarchy
of evidence to identify the most important primary articles to
retrieve in a search. For reasons of feasibility, our study focused
on questions of therapy. As more systematic reviews for
diagnosis, prognosis, and etiology are published, we will be
able to expand this study to test searches for these types of
studies as well. Our study tests the searches provided by the
physicians, which are likely initial searches; however, in
practice, an unsatisfactory search may be attempted again or
modified based on the results obtained. Yet, our results indicate
that Google Scholar can improve on the nephrologists’ initial
search, which can save valuable clinical time. Given the nature
of the survey, we are uncertain about how many steps
nephrologists take to refine their search and future research
should explore this. Although Google Scholar retrieved twice
as many relevant articles as PubMed (in the first 40 citations),
80% of relevant articles were not retrieved by either source.
Improved methods to efficiently retrieve relevant articles for
clinical decision making require further development and
testing. Future research might also evaluate the effectiveness
of strategies that apply relevance-based ranking to PubMed
results on physician searches [55].
Conclusions
In conclusion, for quick clinical searches, Google Scholar
returns twice as many relevant articles as PubMed and provides
greater access to free full-texts. Improved searching by clinicians
has the potential to enhance the transfer of research into practice
and improve patient care. Future studies should confirm these
results for other medical disciplines.
Acknowledgments
Funding for this study came from "The Physicians' Services Inc. Foundation". Dr Shariff was supported by a Canadian Institutes
of Health Research (CIHR) Doctoral Research Award and the Schulich Scholarship for Medical Research from Western University.
Dr Garg was supported by a Clinician Scientist Award from the CIHR.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Sample of systematic reviews selected and search queries received by respondents.
[PDF File (Adobe PDF File), 185KB-Multimedia Appendix 1]
Multimedia Appendix 2
Rules used for syntactically improving physician searches in PubMed and Google Scholar.
[PDF File (Adobe PDF File), 257KB-Multimedia Appendix 2]
References
1. Ely JW, Osheroff JA, Ebell MH, Chambliss ML, Vinson DC, Stevermer JJ, et al. Obstacles to answering doctors' questions
about patient care with evidence: qualitative study. BMJ 2002 Mar 23;324(7339):710 [FREE Full text] [Medline: 11909789]
2. Chambliss ML, Conley J. Answering clinical questions. J Fam Pract 1996 Aug;43(2):140-144. [Medline: 8708623]
3. Norlin C, Sharp AL, Firth SD. Unanswered questions prompted during pediatric primary care visits. Ambul Pediatr
2007;7(5):396-400. [doi: 10.1016/j.ambp.2007.05.008] [Medline: 17870649]
4. Davies K, Harrison J. The information-seeking behaviour of doctors: a review of the evidence. Health Info Libr J 2007
Jun;24(2):78-94. [doi: 10.1111/j.1471-1842.2007.00713.x] [Medline: 17584211]
5. Gorman P. Does the medical literature contain the evidence to answer the questions of primary care physicians? Preliminary
findings of a study. Proc Annu Symp Comput Appl Med Care 1993:571-575 [FREE Full text] [Medline: 8130538]
6. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue
and which to leave unanswered. Med Decis Making 1995 Apr;15(2):113-119. [Medline: 7783571]
7. Ely JW, Osheroff JA, Ebell MH, Bergus GR, Levy BT, Chambliss ML, et al. Analysis of questions asked by family doctors
regarding patient care. BMJ 1999 Aug 7;319(7206):358-361 [FREE Full text] [Medline: 10435959]
8. González-González AI, Dawes M, Sánchez-Mateos J, Riesgo-Fuertes R, Escortell-Mayor E, Sanz-Cuesta T, et al. Information
needs and information-seeking behavior of primary care physicians. Ann Fam Med 2007 Jul;5(4):345-352 [FREE Full text]
[doi: 10.1370/afm.681] [Medline: 17664501]
9. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME. Answering physicians' clinical questions: obstacles and
potential solutions. J Am Med Inform Assoc 2005 Mar;12(2):217-224 [FREE Full text] [doi: 10.1197/jamia.M1608]
[Medline: 15561792]
10. National Library of Medicine (US). Key MEDLINE Indicators. URL: http://www.nlm.nih.gov/bsd/bsd_key.html [accessed
2013-03-17] [WebCite Cache ID 6FCUenHlf]
11. Younger P. Using Google Scholar to conduct a literature search. Nurs Stand 2010;24(45):40-6; quiz 48. [Medline: 20701052]
12. Mastrangelo G, Fadda E, Rossi CR, Zamprogno E, Buja A, Cegolon L. Literature search on risk factors for sarcoma:
PubMed and Google Scholar may be complementary sources. BMC Res Notes 2010;3:131 [FREE Full text] [doi:
10.1186/1756-0500-3-131] [Medline: 20459746]
13. Kulkarni AV, Aziz B, Shams I, Busse JW. Comparisons of citations in Web of Science, Scopus, and Google Scholar for
articles published in general medical journals. JAMA 2009 Sep 9;302(10):1092-1096. [doi: 10.1001/jama.2009.1307]
[Medline: 19738094]
14. Freeman MK, Lauderdale SA, Kendrach MG, Woolley TW. Google Scholar versus PubMed in locating primary literature
to answer drug-related questions. Ann Pharmacother 2009 Mar;43(3):478-484. [doi: 10.1345/aph.1L223] [Medline:
19261965]
15. Shultz M. Comparing test searches in PubMed and Google Scholar. J Med Libr Assoc 2007 Oct;95(4):442-445 [FREE Full
text] [doi: 10.3163/1536-5050.95.4.442] [Medline: 17971893]
16. Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar:
strengths and weaknesses. FASEB J 2008 Feb;22(2):338-342 [FREE Full text] [doi: 10.1096/fj.07-9492LSF] [Medline:
17884971]
17. Henderson J. Google Scholar: A source for clinicians? CMAJ 2005 Jun 7;172(12):1549-1550 [FREE Full text] [doi:
10.1503/cmaj.050404] [Medline: 15939908]
18. Giustini D, Barsky E. A look at Google Scholar, PubMed, and Scirus: comparisons and recommendations. Journal of the
Canadian Health Libraries Association 2005;26(3):85-89.
19. Vine R. Google Scholar electronic resources review. Journal of the Medical Library Association 2006;94(1):---.
20. Anders ME, Evans DP. Comparison of PubMed and Google Scholar literature searches. Respir Care 2010 May;55(5):578-583
[FREE Full text] [Medline: 20420728]
21. Nourbakhsh E, Nugent R, Wang H, Cevik C, Nugent K. Medical literature searches: a comparison of PubMed and Google
Scholar. Health Info Libr J 2012 Sep;29(3):214-222. [doi: 10.1111/j.1471-1842.2012.00992.x] [Medline: 22925384]
22. Google Scholar Beta: About Google Scholar. URL: http://scholar.google.com/scholar/about.html [accessed 2013-03-17]
[WebCite Cache ID 6FCUrUKMS]
23. Garg AX, Iansavichus AV, Kastner M, Walters LA, Wilczynski N, McKibbon KA, et al. Lost in publication: Half of all
renal practice evidence is published in non-renal journals. Kidney Int 2006 Dec;70(11):1995-2005. [doi:
10.1038/sj.ki.5001896] [Medline: 17035946]
24. Haynes RB, McKibbon KA, Wilczynski NL, Walter SD, Werre SR, Hedges Team. Optimal search strategies for retrieving
scientifically strong studies of treatment from Medline: analytical survey. BMJ 2005 May 21;330(7501):1179 [FREE Full
text] [doi: 10.1136/bmj.38446.498542.8F] [Medline: 15894554]
25. Haynes RB. bmjupdates+, a new free service for evidence-based clinical practice. Evid Based Nurs 2005 Apr;8(2):39
[FREE Full text] [Medline: 15830413]
J Med Internet Res 2013 | vol. 15 | iss. 8 | e164 | p. 7 | http://www.jmir.org/2013/8/e164/
(page number not for citation purposes)
26. Garg AX, Iansavichus AV, Wilczynski NL, Kastner M, Baier LA, Shariff SZ, et al. Filtering Medline for a clinical discipline:
diagnostic test assessment framework. BMJ 2009;339:b3435 [FREE Full text] [Medline: 19767336]
27. Meier P, Ko DT, Tamura A, Tamhane U, Gurm HS. Sodium bicarbonate-based hydration prevents contrast-induced
nephropathy: a meta-analysis. BMC Med 2009;7:23 [FREE Full text] [doi: 10.1186/1741-7015-7-23] [Medline: 19439062]
28. Shariff SZ, Bejaimal SA, Sontrop JM, Iansavichus AV, Weir MA, Haynes RB, et al. Searching for medical information
online: a survey of Canadian nephrologists. J Nephrol 2011;24(6):723-732. [doi: 10.5301/JN.2011.6373] [Medline: 21360475]
29. Shariff SZ, Sontrop JM, Haynes RB, Iansavichus AV, McKibbon KA, Wilczynski NL, et al. Impact of PubMed search
filters on the retrieval of evidence by physicians. CMAJ 2012 Feb 21;184(3):E184-E190 [FREE Full text] [doi:
10.1503/cmaj.101661] [Medline: 22249990]
30. Dillman DA. Mail and internet surveys: The tailored design method. New Jersey: John Wiley & Sons Inc; 2007.
31. Lancaster FW. The evaluation of published indexes and abstract journals: criteria and possible procedures. Bull Med Libr
Assoc 1971 Jul;59(3):479-494 [FREE Full text] [Medline: 5146770]
32. Christianson M. Ecology articles in Google Scholar: levels of access to articles in core journals. Issues in Science and
Technology Librarianship 2007;49:---. [doi: 10.5062/F4MS3QPD]
33. Neuhaus C, Neuhaus E, Asher A, Wrede C. The depth and breadth of Google Scholar: an empirical study. portal: Libraries
and the Academy 2006;6(2):127-141. [doi: 10.1353/PLA]
34. Shariff SZ, Sontrop JM, Iansavichus AV, Haynes RB, Weir MA, Gandhi S, et al. Availability of renal literature in six
bibliographic databases. Clin Kidney J 2012 Dec;5(6):610-617 [FREE Full text] [doi: 10.1093/ckj/sfs152] [Medline:
23185693]
35. De Leo G, LeRouge C, Ceriani C, Niederman F. Websites most frequently used by physicians for gathering medical
information. AMIA Annu Symp Proc 2006:902 [FREE Full text] [Medline: 17238521]
36. Somal K, Lam WC, Tam E. Computer and internet use by ophthalmologists and trainees in an academic centre. Can J
Ophthalmol 2009 Jun;44(3):265-268. [doi: 10.3129/i09-057] [Medline: 19491979]
37. Chiu YW, Weng YH, Lo HL, Ting HW, Hsu CC, Shih YH, et al. Physicians' characteristics in the usage of online database:
a representative nationwide survey of regional hospitals in Taiwan. Inform Health Soc Care 2009 Sep;34(3):127-135. [doi:
10.1080/17538150903102372] [Medline: 19670003]
38. Kitchin DR, Applegate KE. Learning radiology: a survey investigating radiology resident use of textbooks, journals, and
the internet. Acad Radiol 2007 Sep;14(9):1113-1120. [doi: 10.1016/j.acra.2007.06.002] [Medline: 17707320]
39. Hider PN, Griffin G, Walker M, Coughlan E. The information-seeking behavior of clinical staff in a large health care
organization. J Med Libr Assoc 2009 Jan;97(1):47-50 [FREE Full text] [doi: 10.3163/1536-5050.97.1.009] [Medline:
19159006]
40. Giustini D. How Google is changing medicine. BMJ 2005 Dec 24;331(7531):1487-1488 [FREE Full text] [doi:
10.1136/bmj.331.7531.1487] [Medline: 16373722]
41. Haynes RB, Wilczynski NL. Optimal search strategies for retrieving scientifically strong studies of diagnosis from Medline:
analytical survey. BMJ 2004 May 1;328(7447):1040 [FREE Full text] [doi: 10.1136/bmj.38068.557998.EE] [Medline:
15073027]
42. Wilczynski NL, Haynes RB, Hedges Team. Developing optimal search strategies for detecting clinically sound prognostic
studies in MEDLINE: an analytic survey. BMC Med 2004 Jun 9;2:23 [FREE Full text] [doi: 10.1186/1741-7015-2-23]
[Medline: 15189561]
43. Wilczynski NL, Haynes RB, Hedges Team. Developing optimal search strategies for detecting clinically sound causation
studies in MEDLINE. AMIA Annu Symp Proc 2003:719-723 [FREE Full text] [Medline: 14728267]
44. Howard J. Canadian University Hopes to Lead Fight Against High Subscription Prices. 2010. URL: https://chronicle.com/
article/Canadian-University-Hopes-to/66095/ [accessed 2013-03-17] [WebCite Cache ID 6FCVHnUW0]
45. Taylor L. Canadian librarian leads worldwide digital revolt for free knowledge. 2010. URL: http://www.thestar.com/life/
2010/08/10/canadian_librarian_leads_worldwide_digital_revolt_for_free_knowledge.html [accessed 2013-03-17] [WebCite
Cache ID 6FCWjRIXK]
46. Howard J. University of California Tries Just Saying No to Rising Journal Costs. 2010. URL: http://chronicle.com/article/
U-of-California-Tries-Just/65823/ [accessed 2013-03-17] [WebCite Cache ID 6FCWe0rCE]
47. Gordon M, Pathak P. Finding information on the World Wide Web: the retrieval effectiveness of search engines. Information
Processing & Management 1999 Mar;35(2):141-180. [doi: 10.1016/S0306-4573(98)00041-7]
48. Hersh WR. Information retrieval: a health and biomedical perspective. New York: Springer Verlag; 2008.
49. Slobogean GP, Verma A, Giustini D, Slobogean BL, Mulpuri K. MEDLINE, EMBASE, and Cochrane index most primary
studies but not abstracts included in orthopedic meta-analyses. J Clin Epidemiol 2009 Dec;62(12):1261-1267. [doi:
10.1016/j.jclinepi.2009.01.013] [Medline: 19364634]
50. Wieland S, Dickersin K. Selective exposure reporting and Medline indexing limited the search sensitivity for observational
studies of the adverse effects of oral contraceptives. J Clin Epidemiol 2005 Jun;58(6):560-567. [doi:
10.1016/j.jclinepi.2004.11.018] [Medline: 15878469]
51. Hopewell S, Clarke M, Lefebvre C, Scherer R. Handsearching versus electronic searching to identify reports of randomized
trials. Cochrane Database Syst Rev 2007(2):MR000001. [doi: 10.1002/14651858.MR000001.pub2] [Medline: 17443625]
52. Sampson M, Zhang L, Morrison A, Barrowman NJ, Clifford TJ, Platt RW, et al. An alternative to the hand searching gold
standard: validating methodological search filters using relative recall. BMC Med Res Methodol 2006;6:33 [FREE Full
text] [doi: 10.1186/1471-2288-6-33] [Medline: 16848895]
53. Yousefi-Nooraie R, Irani S, Mortaz-Hedjri S, Shakiba B. Comparison of the efficacy of three PubMed search filters in
finding randomized controlled trials to answer clinical questions. J Eval Clin Pract 2010 Sep 16:---. [doi:
10.1111/j.1365-2753.2010.01554.x] [Medline: 20846321]
54. Agoritsas T, Merglen A, Courvoisier DS, Combescure C, Garin N, Perrier A, et al. Sensitivity and predictive value of 15
PubMed search strategies to answer clinical questions rated against full systematic reviews. J Med Internet Res 2012;14(3):e85
[FREE Full text] [doi: 10.2196/jmir.2021] [Medline: 22693047]
55. Lu Z, Kim W, Wilbur WJ. Evaluating relevance ranking strategies for MEDLINE retrieval. J Am Med Inform Assoc 2009
Feb;16(1):32-36 [FREE Full text] [doi: 10.1197/jamia.M2935] [Medline: 18952932]
Edited by G Eysenbach; submitted 18.03.13; peer-reviewed by N Allen, D Perez-Rey, A Manconi; comments to author 04.05.13;
revised version received 16.05.13; accepted 11.06.13; published 15.08.13
Please cite as:
Shariff SZ, Bejaimal SAD, Sontrop JM, Iansavichus AV, Haynes RB, Weir MA, Garg AX
Retrieving Clinical Evidence: A Comparison of PubMed and Google Scholar for Quick Clinical Searches
J Med Internet Res 2013;15(8):e164
URL: http://www.jmir.org/2013/8/e164/
doi: 10.2196/jmir.2624
PMID: 23948488
©Salimah Z Shariff, Shayna AD Bejaimal, Jessica M Sontrop, Arthur V Iansavichus, R Brian Haynes, Matthew A Weir, Amit
X Garg. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.08.2013. This is an open-access
article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/),
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the
Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication
on http://www.jmir.org/, as well as this copyright and license information must be included.