ChatGPT and Library Users: The Risks of AI Hallucination and Misinformation
DOI: https://doi.org/10.70000/cj.2025.76.642
Keywords: artificial intelligence, ChatGPT, hallucination, misinformation
Abstract
The analysis reveals that library users' growing reliance on AI tools such as ChatGPT poses significant risks to research integrity and quality. While these tools offer notable advantages in efficiency and speed of access to information, they carry fundamental flaws, most notably the phenomenon of "AI hallucinations": the model's tendency to generate information that appears plausible but is incorrect or entirely fabricated.
The principal risks include the spread of misinformation, the erosion of users' critical-thinking skills, and heightened ethical concerns over data bias and intellectual property rights. Unchecked reliance on ChatGPT may therefore degrade the quality of academic output and undermine librarians' role as trusted guides in the research process.
The paper emphasizes the pivotal role that libraries and librarians must play in confronting these challenges. This includes proactively educating users about the limits of AI, fostering a culture of critical evaluation of AI-generated content, and stressing the need to verify information against reliable sources. Adopting mitigation strategies, such as improving training data and maintaining human oversight, is crucial to ensuring the responsible use of these powerful technologies.
الرخصة
Copyright (c) 2025 Bolaji Oladokun, Vivien Emmanuel, Onome Queen Osagie, Adefunke Alabi

This work is licensed under a Creative Commons Attribution 4.0 International License.








