Artificial Intelligence (AI) has rapidly become an integral technology across many sectors, including healthcare, finance, research, and manufacturing. AI’s ability to automate processes, analyse large datasets, and make predictive decisions offers significant opportunities for innovation, but it also raises profound ethical challenges. Ethical concerns regarding AI encompass issues of transparency, accountability, fairness, data privacy, and the need for human oversight. Given the diverse applications of AI, these ethical concerns vary not only by sector but also across different cultural and regulatory environments. Despite growing discourse on AI ethics, empirical tools for assessing ethical perceptions of AI across varied organizational contexts remain limited. To address this gap, this study introduces the AI and Ethics Perception Scale (AEPS), designed to measure individual and collective perceptions of AI ethics across five key dimensions: Transparency, Accountability, Privacy, Fairness, and Human Oversight. The AEPS was developed through a rigorous methodological process, beginning with a pilot study of 112 participants, and was validated with data from 417 participants across three culturally diverse countries: Turkey, India, and the United Kingdom. The scale was used to assess ethical perceptions in sectors such as healthcare, finance, and manufacturing. Both Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were used to validate the scale’s structure. The study reveals significant cross-cultural and cross-sectoral differences in the prioritization of ethical concerns, demonstrating the need for contextually sensitive ethical frameworks for AI governance.
Published in | International Journal of Economic Behavior and Organization (Volume 13, Issue 1) |
DOI | 10.11648/j.ijebo.20251301.14 |
Page(s) | 35-50 |
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is properly cited. |
Copyright | Copyright © The Author(s), 2025. Published by Science Publishing Group |
Keywords | Artificial Intelligence, Ethics, Governance, Cross-cultural Scale Development, Hofstede |
Abbreviation | Full Form |
---|---|
AI | Artificial Intelligence |
AEPS | AI and Ethics Perception Scale |
EFA | Exploratory Factor Analysis |
CFA | Confirmatory Factor Analysis |
GDPR | General Data Protection Regulation |
KMO | Kaiser-Meyer-Olkin Measure |
RMSEA | Root Mean Square Error of Approximation |
CFI | Comparative Fit Index |
TLI | Tucker-Lewis Index |
APA | American Psychological Association |
ML | Machine Learning |
SEM | Structural Equation Modelling |
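Among the abbreviations above, KMO (Kaiser-Meyer-Olkin) denotes the sampling-adequacy statistic conventionally checked before running an EFA. As a minimal illustrative sketch — using simulated single-factor data, not the AEPS survey data or the authors' code — KMO can be computed from a correlation matrix and its anti-image partial correlations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Likert-style responses: 6 items driven by one latent factor.
n_obs, n_items = 400, 6
latent = rng.normal(size=n_obs)
X = 0.8 * latent[:, None] + 0.6 * rng.normal(size=(n_obs, n_items))

R = np.corrcoef(X, rowvar=False)      # item correlation matrix
R_inv = np.linalg.inv(R)

# Partial correlations: p_ij = -inv_ij / sqrt(inv_ii * inv_jj)
d = np.sqrt(np.diag(R_inv))
P = -R_inv / np.outer(d, d)

# KMO: squared correlations relative to squared correlations plus
# squared partial correlations, summed over off-diagonal entries.
mask = ~np.eye(n_items, dtype=bool)
kmo = (R[mask] ** 2).sum() / ((R[mask] ** 2).sum() + (P[mask] ** 2).sum())
print(round(kmo, 3))  # values above ~0.6 are conventionally deemed adequate
```

Because every item here shares a strong common factor, the resulting KMO is high; with weakly related items it would fall toward zero.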
Question | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 |
---|---|---|---|---|---|
Q1 | 0.75 | 0.10 | 0.05 | 0.05 | 0.05 |
Q2 | 0.80 | 0.05 | 0.05 | 0.05 | 0.05 |
Q3 | 0.78 | 0.08 | 0.02 | 0.10 | 0.02 |
Q4 | 0.74 | 0.07 | 0.06 | 0.10 | 0.03 |
Q5 | 0.85 | 0.05 | 0.05 | 0.03 | 0.02 |
Q6 | 0.88 | 0.02 | 0.02 | 0.05 | 0.03 |
Q7 | 0.72 | 0.15 | 0.06 | 0.04 | 0.03 |
Q8 | 0.79 | 0.05 | 0.08 | 0.06 | 0.02 |
Q9 | 0.68 | 0.25 | 0.02 | 0.03 | 0.02 |
Q10 | 0.67 | 0.20 | 0.08 | 0.03 | 0.02 |
Q11 | 0.76 | 0.12 | 0.05 | 0.04 | 0.03 |
Q12 | 0.73 | 0.11 | 0.10 | 0.04 | 0.02 |
Q13 | 0.71 | 0.18 | 0.07 | 0.03 | 0.01 |
Q14 | 0.66 | 0.25 | 0.05 | 0.02 | 0.02 |
Q15 | 0.80 | 0.10 | 0.05 | 0.02 | 0.03 |
Q16 | 0.81 | 0.06 | 0.07 | 0.04 | 0.02 |
Q17 | 0.70 | 0.18 | 0.08 | 0.03 | 0.01 |
Q18 | 0.74 | 0.15 | 0.06 | 0.04 | 0.01 |
Q19 | 0.72 | 0.10 | 0.10 | 0.04 | 0.04 |
Q20 | 0.69 | 0.18 | 0.08 | 0.03 | 0.02 |
Q21 | 0.84 | 0.05 | 0.04 | 0.04 | 0.03 |
Q22 | 0.82 | 0.04 | 0.05 | 0.05 | 0.04 |
Q23 | 0.79 | 0.07 | 0.05 | 0.05 | 0.04 |
Q24 | 0.68 | 0.20 | 0.08 | 0.04 | 0.03 |
Q25 | 0.67 | 0.21 | 0.08 | 0.04 | 0.03 |
Q26 | 0.74 | 0.10 | 0.08 | 0.05 | 0.03 |
Q27 | 0.78 | 0.09 | 0.06 | 0.04 | 0.03 |
Q28 | 0.80 | 0.07 | 0.05 | 0.04 | 0.04 |
Q29 | 0.71 | 0.15 | 0.06 | 0.05 | 0.03 |
Q30 | 0.72 | 0.14 | 0.07 | 0.04 | 0.03 |
Q31 | 0.70 | 0.16 | 0.07 | 0.04 | 0.03 |
Q32 | 0.69 | 0.17 | 0.06 | 0.04 | 0.04 |
Q33 | 0.76 | 0.10 | 0.06 | 0.05 | 0.03 |
Q34 | 0.75 | 0.12 | 0.05 | 0.04 | 0.04 |
Q35 | 0.77 | 0.09 | 0.06 | 0.04 | 0.04 |
Q36 | 0.68 | 0.18 | 0.07 | 0.04 | 0.03 |
Q37 | 0.74 | 0.11 | 0.07 | 0.05 | 0.03 |
Q38 | 0.71 | 0.12 | 0.08 | 0.05 | 0.04 |
Q39 | 0.69 | 0.16 | 0.07 | 0.05 | 0.03 |
Q40 | 0.67 | 0.20 | 0.07 | 0.03 | 0.03 |
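The loadings tabulated above come from the study's factor analysis. As a hedged sketch of how such loadings are obtained — principal-component extraction on simulated single-factor data, which may differ from the extraction and rotation method the authors actually used — the loadings are the eigenvectors of the item correlation matrix scaled by the square roots of the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical responses: 6 items sharing one latent factor (illustrative only).
n_obs, n_items = 500, 6
latent = rng.normal(size=n_obs)
X = 0.8 * latent[:, None] + 0.6 * rng.normal(size=(n_obs, n_items))

# Principal-component extraction: eigendecompose the correlation matrix,
# sort components by descending eigenvalue, then scale eigenvectors by
# sqrt(eigenvalue) to obtain the loading matrix.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
loadings = eigvecs * np.sqrt(eigvals)

# All six items load strongly on the first factor (|loading| near 0.8),
# mirroring the dominant-first-factor pattern in the table above.
print(np.round(np.abs(loadings[:, 0]), 2))
```

In practice a dedicated package would add rotation (e.g. varimax) and proper communality estimation; this sketch only shows the extraction step.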
APA Style
Saatci, E. Y. (2025). AI and Ethics: Scale Development for Measuring Ethical Perceptions of Artificial Intelligence Across Sectors and Countries. International Journal of Economic Behavior and Organization, 13(1), 35-50. https://doi.org/10.11648/j.ijebo.20251301.14