1. Abbasi Dashtaki, N., & Cheshmeh Sohrabi, M. (2019). Google, Yahoo and Bing Search Engines' Performance in the Persian Information Retrieval: A Fuzzy and Classical Evaluation. Journal of National Studies on Librarianship and Information Organization, 30(2), 96-111. (Persian)
2. Al-Maskari, A., Sanderson, M., & Clough, P. (2007, July). The relationship between IR effectiveness measures and user satisfaction. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval (pp. 773-774). ACM. [DOI:10.1145/1277741.1277902]
3. Asadi Qadikolaei, O., Asadi, S., Noroozi, A., & Ehsani, R. (2014). A Comparison of Precision in General Search Engines and Specialized Databases for Radiology Image Retrieval. Journal of Health Management, 5(2), 77-87. (Persian)
4. Azadi, Gh. (2005). The scale of web search engines precision in information retrieval of library and information science discipline. National Studies on Librarianship and Information Organization, 16(3), 111. (Persian)
5. Babaei, E., & Sajedi, M. (2013). A Comparative Study on Efficiency of Medical Specialized Search Engines in Retrieving Information Related to Gynecology and Obstetrics. Health Information Management, 10(2), 234. (Persian)
6. Bama, S. S., Ahmed, M. I., & Saravanan, A. (2015). A survey on performance evaluation measures for information retrieval system. International Research Journal of Engineering and Technology, 2(2), 1015-1020.
7. Bar-Ilan, J. (1998). On the overlap, the precision and estimated recall of search engines. A case study of the query "Erdos". Scientometrics, 42(2), 207-228. [DOI:10.1007/BF02458356]
8. Bayanvand, A. (2012). The Basics of Computer and Internet. Tehran: Chapar. (Persian)
9. Bilal, D. (2012). Ranking, relevance judgment, and precision of information retrieval on children's queries: Evaluation of Google, Yahoo!, Bing, Yahoo! Kids, and Ask Kids. Journal of the American Society for Information Science and Technology, 63(9), 1879-1896. [DOI:10.1002/asi.22675]
10. Borlund, P., & Ingwersen, P. (1998). Measures of relative relevance and ranked half-life: performance indicators for interactive IR. In Croft, B. W., Moffat, A., van Rijsbergen, C. J., Wilkinson, R., & Zobel, J. (Eds.), Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM. [DOI:10.1145/290941.291019]
11. Borlund, P. (2003). The IIR evaluation model: a framework for evaluation of interactive information retrieval systems. Information Research, 8(3). [Available at: http://informationr.net/ir/8-3/paper152.html]
12. Buckley, C., & Voorhees, E. M. (2004, July). Retrieval evaluation with incomplete information. In Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval (pp. 25-32). ACM. [DOI:10.1145/1008992.1009000]
13. Budd, J. M. (2004). Relevance: Language, semantics, philosophy. Library Trends, 52(3).
14. Büttcher, S., Clarke, C. L., & Cormack, G. V. (2016). Information retrieval: Implementing and evaluating search engines. MIT Press.
15. Clarke, S. J., & Willett, P. (1997). Estimating the recall performance of Web search engines. Aslib Proceedings, 49(7), 184-189. [DOI:10.1108/eb051463]
16. Comprehensive list of search engines. (2017). In The Search Engine List website. Retrieved 9 Dec. 2017 from http://www.thesearchenginelist.com/
17. Croft, W. B., Metzler, D., & Strohman, T. (2015). Search engines: Information retrieval in Practice. London: Pearson Education.
18. Daverpanah, M. R. (2004). Paradigm and information retrieval. Iranian Journal of Library and Information Science, 7(3), 2-14. (Persian)
19. Della Mea, V., Demartini, G., Di Gaspero, L., & Mizzaro, S. (2006). Measuring retrieval effectiveness with average distance measure (ADM). Information Wissenschaft und Praxis, 57(8), 433-443.
20. Demirci, R. G., Kismir, V., & Bitirim, Y. (2007). An evaluation of popular search engines on finding Turkish documents. Second International Conference on Internet and Web Applications and Services (ICIW'07), IEEE, Turkey, pp. 1-5. [DOI:10.1109/ICIW.2007.15]
21. Thornley, C., & Gibb, F. (2007). A dialectical approach to information retrieval. Journal of Documentation, 63(5), 755-764. [DOI:10.1108/00220410710827781]
22. Fidel, R. (2008). Are we there yet? Mixed methods research in library and information science. Library & Information Science Research, 30(4), 265-272. [DOI:10.1016/j.lisr.2008.04.001]
23. Frické, M. (1998). Measuring recall. Journal of Information Science, 24(6), 409-417. [DOI:10.1177/016555159802400604]
24. Garoufallou, E. (2012). Evaluating search engines: A comparative study between international and Greek SE by Greek librarians. Program: electronic library & information systems, 46(2), 182-198. [DOI:10.1108/00330331211221837]
25. Ghiasi, M., Daliri, S., Kouchakinejad, L., & Abbasian Joushghani, A. (2015). A Comparison of Accuracy in Specialized Medical Search and General Search Engines for Retrieving Medical Image. Educational Development of Jundishapur, 6(2), 131-138. (Persian)
26. Goel, S., & Yadav, S. (2012). An Overview of Search Engine Evaluation Strategies. International Journal of Applied Information Systems, 1, 7-10. [DOI:10.5120/ijais12-450156]
27. Hariri, N., Babalhavaeji, F., Farzandipour, M., & Nadi Ravandi, S. (2014). Evaluation Criteria of Information Retrieval Systems: What We Know and What We Do Not Know. Iranian Research Institute for Information Science and Technology, 30(1), 199-221. (Persian)
28. Hariri, N., & Vakili Mofrad, H. (2014). A Comparison of the Precision of General and Specialized Medical Search Engines in Medical Images Retrieval. Health Information Management, 10(6), 830-839. (Persian)
29. Hariri, N. (2011). Relevance ranking on Google. Online Information Review, 35(4), 598-610. [DOI:10.1108/14684521111161954]
30. Henzinger, M. (2007). Search technologies for the Internet. Science, 317(5837), 468-471. [DOI:10.1126/science.1126557]
31. Hjørland, B. (2010). The foundation of the concept of relevance. Journal of the American Society for Information Science and Technology, 61(2), 217-237. [DOI:10.1002/asi.21261]
32. Ingwersen, P. (2010). Information retrieval interaction. Translated by Hajar Setodeh. Tehran: Ketabdar.
33. Järvelin, K., & Kekäläinen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422-446. [DOI:10.1145/582415.582418]
34. Kent, A., Berry, M. M., Luehrs Jr., F. U., & Perry, J. W. (1955). Operational criteria for designing information retrieval systems. American Documentation, 6(2), 93-101. [DOI:10.1002/asi.5090060209]
35. Kosha, K. (2002). Internet Exploration Tools: Search Principles, Skills, and Features. Tehran: Ketabdar. (Persian)
36. Kumar, B. S., & Prakash, J. N. (2009). Precision and relative recall of search engines: A comparative study of Google and Yahoo. Singapore Journal of Library & Information Management, 38(1), 124-13
37. Kumar, B. T., & Sampath Pavithra, S. M. (2010). Evaluating the searching capabilities of search engines and metasearch engines: a comparative study. Annals of Library and Information Studies, 57(2), 87-97.
38. Kumar, K., & Bhadu, V. (2013). A comparative study of BYG search engines. American Journal of Engineering Research, 2(4), 39-43.
39. Lancaster, F. W. (1979). Information retrieval systems: Characteristics, testing, and evaluation (2nd ed.). New York: Wiley.
40. Landoni, M., & Bell, S. (2000). Information retrieval techniques for evaluating search engines: a critical overview. Aslib Proceedings, 52(3), 124-129. [DOI:10.1108/EUM0000000007006]
41. Lewandowski, D. (Ed.). (2012). Web search engine research. Emerald Group Publishing Limited. [DOI:10.1108/S1876-0562(2012)4]
42. Lopes, C. T., & Ribeiro, C. (2011). Comparative evaluation of web search engines in health information retrieval. Online Information Review, 35(6), 869-892. [DOI:10.1108/14684521111193175]
43. Mea, V. D., & Mizzaro, S. (2004). Measuring retrieval effectiveness: A new proposal and a first experimental validation. Journal of the American Society for Information Science and Technology, 55(6), 530-543. [DOI:10.1002/asi.10408]
44. Mizzaro, S. (2001, September). A new measure of retrieval effectiveness (or: What's wrong with precision and recall). In International workshop on information retrieval (IR'2001) (pp. 43-52). Infotech Oulu.
45. Mohammad Esmaeil, S., & Mansour Kiaie, R. (2012). A Comparison between Search Engines and Meta-search Engines in Retrieving Information Related to Physics and the Extent of their Overlap. National Studies on Librarianship and Information Organization, 22(3), 130. (Persian)
46. Mohammadesmaeil, S., Lafzighazi, E., & Gilvari, A. (2008). Comparing Search Engines and Meta-search Engines in Pharmaceutics Information Retrieval. Health Information Management, 5(2), 121-129. (Persian)
47. Mohammadesmaeil, S., & Naraghian, N. (2017). Comparing Search Engines and Meta Search Engines in Dentistry Information Retrieval. Journal of Research in Dental Sciences, 14(2), 118-127. (Persian)
48. Nowkarizi, M., & Zeynali Tazehkandi, M. (2019). Rethinking the Recall Measure in Appraising Information Retrieval Systems and Providing a New Measure by Using Persian Search Engines. International Journal of Information Science and Management, 17(1), 1-16.
49. Nowkarizi, M., & Zeynali Tazehkandi, M. (2017). The overlap and coverage of 4 local search engines: Parsijoo, Yooz, Parseek and Rismoun. Human Information Interaction, 4(3), 48-59. (Persian)
50. Pao, M. L. (2000). Concepts of information retrieval. Translated by Asad Olah Azad and Rahmattolah Fattahi. Mashhad: Ferdowsi University of Mashhad. (Persian)
51. Powers, D. M. W. (2011). Evaluation: from Precision, Recall and F-measure to ROC, Informedness, Markedness and Correlation. Journal of Machine Learning Technologies, 2(1), 37-63.
52. Rajabi, M., & Norouzi, Y. (2015). Persian Search Engines: Evaluating Search Features, Information Retrieval, Precision and Recall and Their Overlaps. National Studies on Librarianship and Information Organization, 26(3), 133-150. (Persian)
53. Riahinia, N., Rahimi, F., & AllahBakhshian, L. (2015). Matching Scores of System Relevance and User-Oriented Relevance in SID, ISC and Google Scholar. Human Information Interaction, 2(1), 1-11. (Persian)
54. Sakai, T. (2007, July). Alternatives to bpref. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval (pp. 71-78). ACM. [DOI:10.1145/1277741.1277756]
55. Sakai, T. (2012, April). Evaluation with informational and navigational intents. In Proceedings of the 21st international conference on World Wide Web (pp. 499-508). ACM. [DOI:10.1145/2187836.2187904]
56. Sakai, T., & Kando, N. (2008). On information retrieval metrics designed for evaluation with incomplete relevance assessments. Information Retrieval, 11(5), 447-470. [DOI:10.1007/s10791-008-9059-7]
57. Sakai, T., & Song, R. (2011, July). Evaluating diversified search results using per-intent graded relevance. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval (pp. 1043-1052). ACM. [DOI:10.1145/2009916.2010055]
58. Saracevic, T. (2007). Relevance: A Review of the Literature and a Framework for Thinking on the Notion in Information Science: Nature and Manifestations of Relevance. Journal of the American Society for Information Science and Technology, 58(13), 1915-1933. [DOI:10.1002/asi.20682]
59. Saracevic, T. (2015). Why is relevance still the basic notion in information science? In Re:inventing Information Science in the Networked Society: Proceedings of the 14th International Symposium on Information Science (ISI 2015) (pp. 26-36).
60. Shafi, S., & Rather, R. A. (2005). Precision and recall of five search engines for retrieval of scholarly information in the field of biotechnology. Webology, 2(2), 42-47.
61. Shang, Y., & Li, L. (2002). Precision evaluation of search engines. World Wide Web, 5(2), 159-173. [DOI:10.1023/A:1019679624079]
62. Sirotkin, P. (2012). On Search Engine Evaluation Metrics. arXiv preprint arXiv:1302.2318.
63. Smith, A. G. (2003). Think local, search global? Comparing search engines for searching geographically specific information. Online Information Review, 27(2), 102-109. [DOI:10.1108/14684520310471716]
64. Soleymani, H. (2009). Web search and database training. Tehran: Hojatolah Soleymani.
65. Vaughan, L. (2004). New measurements for search engine evaluation proposed and tested. Information Processing & Management, 40(4), 677-691. [DOI:10.1016/S0306-4573(03)00043-8]
66. Yilmaz, E., Carterette, B., & Kanoulas, E. (2012). Evaluating Web Retrieval Effectiveness. In D. Lewandowski (Ed.), Web search engine research. Bingley, West Yorkshire: Emerald Group Publishing Limited. [DOI:10.1108/S1876-0562(2012)002012a007]
67. Yosefi, A. (1997). False drop in information storage and retrieval. Iranian Journal of Scientific Information and Documentation Center, 13(1), 1-9. (Persian)
68. Zeynali Tazehkandi, M., & Nowkarizi, M. (2020). Evaluating the effectiveness of Google, Parsijoo, Rismoon, and Yooz to retrieve Persian documents. Library Hi Tech. [DOI:10.1108/LHT-11-2019-0229]
69. Zhou, B., & Yao, Y. (2010). Evaluating information retrieval system performance based on user preference. Journal of Intelligent Information Systems, 34(3), 227-248. [DOI:10.1007/s10844-009-0096-5]
70. Zuva, K., & Zuva, T. (2012). Evaluation of information retrieval systems. International Journal of Computer Science & Information Technology, 4(3), 35-43. [DOI:10.5121/ijcsit.2012.4304]