Boundaries and Future Trends of ChatGPT Based on AI and Security Perspectives

Albandari Alsumayt, Zeyad M. Alfawaer, Nahla El-Haggar, Majid Alshammari, Fatemah H. Alghamedy, Sumayh S. Aljameel, Dina A. Alabbad, May Issa Aldossary

Abstract


In recent decades, technology and artificial intelligence have significantly impacted many aspects of life. One noteworthy development is ChatGPT, an AI-based model that has sparked a revolution and, in a short period of time, attracted attention from researchers, academia, and organizations. Experts predict that ChatGPT will continue advancing, bringing about a leap in artificial intelligence. This technology is believed to hold the potential to address cybersecurity concerns, protect against threats and attacks, and overcome challenges associated with our increasing reliance on technology and the internet. It may change our lives in productive and helpful ways, from interaction with other AI technologies to enhanced personalization and customization and the continuing improvement of language model performance. While these new developments have the potential to enhance our lives, it is our responsibility as a society to thoroughly examine and confront their ethical and societal impacts. This research delves into the current state of ChatGPT and its developments in the fields of artificial intelligence and security. It also explores the challenges ChatGPT faces regarding privacy, data security, and potential misuse, and it highlights emerging trends that could influence the direction of ChatGPT's progress. Finally, this paper offers insights into the implications of using ChatGPT in security contexts and provides recommendations for addressing these issues. The goal is to leverage the capabilities of AI-powered conversational systems while mitigating the associated risks.

 

DOI: 10.28991/HIJ-2024-05-01-010

Full Text: PDF


Keywords


ChatGPT; Artificial Intelligence; Security; Privacy; Cyber Security; Attacks; LLMs.




Copyright (c) 2024 Albandari Alsumayt, Zeyad M. Alfawaer, Nahla El-Haggar, Majid Alshammari, Fatemah H. Alghamedy, Sumayh S. Aljameel, Dina A. Alabbad, May Issa Aldossary