Research Article | Peer-Reviewed

AI and Ethics: Scale Development for Measuring Ethical Perceptions of Artificial Intelligence Across Sectors and Countries

Received: 4 February 2025     Accepted: 13 February 2025     Published: 28 February 2025
Abstract

Artificial Intelligence (AI) has rapidly become an integral technology across many sectors, including healthcare, finance, research, and manufacturing. AI's ability to automate processes, analyse large datasets, and make predictive decisions offers significant opportunities for innovation, but it also raises profound ethical challenges. Ethical concerns regarding AI encompass issues of transparency, accountability, fairness, data privacy, and the need for human oversight. Given the diverse applications of AI, these ethical concerns vary not only by sector but also across different cultural and regulatory environments. Despite growing discourse on AI ethics, empirical tools for assessing ethical perceptions of AI across varied organizational contexts remain limited. To address this need, this study introduces the AI and Ethics Perception Scale (AEPS), designed to measure individual and collective perceptions of AI ethics across five key dimensions: Transparency, Accountability, Privacy, Fairness, and Human Oversight. The AEPS was developed through a rigorous methodological process, beginning with a pilot study of 112 participants and validated with data from 417 participants across three culturally diverse countries: Turkey, India, and the United Kingdom. The scale was used to assess ethical perceptions in sectors such as healthcare, finance, and manufacturing. Both Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were used to validate the scale's structure. This study reveals significant cross-cultural and cross-sectoral differences in the prioritization of ethical concerns, demonstrating the need for contextually sensitive ethical frameworks for AI governance.

Published in International Journal of Economic Behavior and Organization (Volume 13, Issue 1)
DOI 10.11648/j.ijebo.20251301.14
Page(s) 35-50
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Artificial Intelligence, Ethics, Governance, Cross-cultural Scale Development, Hofstede

1. Introduction
1.1. The Emergence of AI in Global Sectors
Artificial Intelligence (AI) has emerged as one of the most transformative technologies of the 21st century. Its integration into various sectors such as healthcare, finance, research, manufacturing, and public administration has led to a redefinition of operational practices. In healthcare, AI systems assist in diagnostics, treatment plans, and predictive analytics, promising to revolutionize patient care. In finance, AI automates tasks such as fraud detection, credit scoring, and financial forecasting, enhancing both efficiency and precision. However, these innovations come with substantial ethical concerns. Issues such as algorithmic opacity (the so-called "black box" problem), bias in data, accountability for AI-driven decisions, data privacy, and fairness have become critical areas of concern.
AI's ability to process massive amounts of data also poses risks in terms of privacy violations and data misuse. In the healthcare sector, the use of personal medical records for AI-driven diagnoses has raised alarms about how data is protected. In finance, AI's role in making decisions about loans and credit raises concerns about fairness and bias. Furthermore, manufacturing and industrial sectors, which have adopted AI to automate tasks, are confronting the socio-economic impacts of job displacement and the potential erosion of human labour value.
1.2. The Cross-Cultural Dimensions of AI Ethics
While ethical concerns about AI are universal, the way these concerns manifest can differ significantly across countries and cultures. Cultural values, regulatory frameworks, and levels of technological advancement all influence how societies perceive the risks and benefits of AI. For example, in Turkey, ethical concerns often focus on privacy and state surveillance, reflecting broader societal anxieties about government control over digital technologies. In India, where socio-economic disparities are significant, ethical concerns are more likely to revolve around fairness and access to AI-driven technologies, particularly as they relate to economic inequality. In contrast, in the United Kingdom, regulatory frameworks such as the General Data Protection Regulation (GDPR) place strong emphasis on data privacy and accountability, shaping public and organizational perceptions of AI ethics.
This diversity in ethical concerns underscores the need for tools that can measure how individuals and organizations across different sectors and countries perceive AI ethics. Such a tool must be adaptable to various cultural contexts while still capturing the fundamental ethical issues associated with AI deployment.
1.3. Research Objectives and the Need for Empirical Tools
Despite the growing body of literature on AI ethics, there is a notable lack of empirical tools designed to measure perceptions of AI ethics across sectors and cultures. The existing research on AI ethics has largely focused on theoretical frameworks and case studies, with few studies providing quantitative measures of how ethical concerns about AI are perceived by individuals working within organizations. This gap is particularly significant given the cross-cultural and cross-sectoral nature of AI's impact.
The objective of this study is to address this gap by developing and validating the AI and Ethics Perception Scale (AEPS). The AEPS is a multidimensional tool designed to measure perceptions of AI ethics across five key dimensions: Transparency, Accountability, Privacy, Fairness, and Human Oversight. By administering the AEPS to participants from diverse sectors and cultural contexts, this study aims to provide a comprehensive analysis of how AI ethics is perceived in different regions and industries.
The AI and Ethics Perception Scale (AEPS) was first piloted with 112 participants and later validated with a larger sample of 417 participants from Turkey, India, and the United Kingdom. These countries were selected for their distinct cultural, regulatory, and technological landscapes, providing a rich context for exploring how ethical perceptions of AI differ across regions. The results of this study offer valuable insights for organizations seeking to implement AI in a manner that is both ethical and contextually appropriate.
2. Literature Review
2.1. Evolution of AI and Ethical Challenges
AI's evolution from basic rule-based systems to sophisticated machine learning models has introduced ethical challenges that were previously unimaginable. The sheer complexity of contemporary AI systems, particularly those based on neural networks and deep learning, has made it increasingly difficult to explain how AI-driven decisions are made. This lack of transparency, often referred to as the "black box" problem, is especially problematic in sectors like healthcare and finance, where decisions can have profound consequences for individuals and society.
In the healthcare sector, AI systems are used to diagnose diseases, predict patient outcomes, and recommend treatments. However, the opacity of these systems, coupled with the potential for biased data inputs, raises concerns about the fairness and accuracy of AI-driven healthcare decisions. Similarly, in finance, AI is used to assess creditworthiness, approve loans, and make investment decisions. While these systems can improve efficiency, they can also perpetuate existing biases in financial systems, particularly against marginalized groups. AI's impact on research and scientific discovery has also been significant, with concerns that AI may reinforce existing biases in research outcomes. Additionally, AI-driven automation in manufacturing raises ethical issues regarding job displacement and economic inequality.
AI's expansion risks reinforcing socio-economic disparities if not carefully managed. Researchers such as Bagozzi et al. and Binns have explored the implications of AI governance, emphasizing the importance of ethical decision-making frameworks. Lepri et al. and Mittelstadt et al. have examined algorithmic accountability and transparency, calling for regulatory interventions to mitigate biases and ensure fairness in AI applications.
2.2. Ethical Frameworks and AI Governance
Several ethical frameworks have been proposed to address the challenges posed by AI. Deontological ethics, based on the works of Kant, emphasizes the importance of respecting individual rights and treating people as ends in themselves, rather than as means to an end. In the context of AI, this framework suggests that AI systems should be designed in ways that protect individual autonomy and privacy. Utilitarian ethics, on the other hand, focuses on the outcomes of AI systems, advocating for AI to be designed in ways that maximize overall societal well-being. This approach is particularly relevant in sectors like healthcare and finance, where the benefits of AI systems must be weighed against the potential harms they may cause.
Virtue ethics, which emphasizes the development of moral character, offers another perspective on AI governance. This approach suggests that organizations should foster ethical cultures that promote fairness, transparency, and accountability in the design and deployment of AI systems. In practice, this means ensuring that AI developers and users are trained to recognize and address the ethical challenges posed by AI technologies. Additionally, Chouldechova and Roth have highlighted the role of fairness in machine learning, stressing the need for equitable AI systems across various domains. Regulatory gaps in AI governance must also be addressed to ensure ethical compliance in large-scale AI applications.
2.3. Cross-Sector and Cross-Cultural Ethical Concerns
Ethical concerns surrounding AI vary significantly across sectors and regions. In the healthcare sector, concerns about data privacy and the accuracy of AI-driven diagnoses are paramount. Patients and healthcare providers must be able to trust that AI systems will protect sensitive medical information and provide accurate recommendations. In finance, the primary ethical concerns revolve around fairness and accountability. Financial institutions must ensure that their AI systems do not discriminate against certain groups and that they can be held accountable for AI-driven decisions that negatively impact individuals. Furthermore, the use of AI to assess creditworthiness and manage investments has led to questions about how these systems perpetuate existing inequalities, particularly among marginalized communities.
Cultural factors also play a significant role in shaping perceptions of AI ethics. Hofstede's cultural dimensions theory suggests that values like individualism, power distance, and uncertainty avoidance influence how different societies perceive and respond to ethical issues. For example, in Turkey, where there is a higher level of uncertainty avoidance and respect for authority, there is a tendency for individuals to place greater trust in state-regulated AI systems. However, concerns about state surveillance and the potential misuse of AI for political purposes remain high. In India, with its significant socio-economic diversity, issues related to fairness and accessibility are particularly relevant. As AI systems are implemented in public and private sectors, questions about whether AI will exacerbate existing inequalities or bridge the gap between different socio-economic groups have become central to discussions about AI ethics. Meanwhile, in the United Kingdom, the General Data Protection Regulation (GDPR) has placed stringent requirements on data privacy and transparency, making these issues a primary focus of ethical concerns surrounding AI.
Several cross-cultural studies have explored how AI ethics are perceived in different regions. Floridi highlights that Western societies often prioritize issues such as transparency and accountability, driven by strong regulatory frameworks like GDPR. In contrast, developing countries tend to focus on issues of fairness and access, reflecting concerns about how AI could deepen socio-economic divides if not carefully managed. Additionally, studies by Raji and Buolamwini have examined biases in commercial AI systems, reinforcing the need for ethical auditing frameworks. Meanwhile, Nemitz has argued for constitutional democracy principles to be embedded in AI governance policies to ensure accountability and fairness. It is also necessary to develop global AI governance mechanisms that reflect the ethical priorities of different cultural and regulatory landscapes. The ethical concerns identified in these studies underscore the need for empirical tools that can measure perceptions of AI ethics in ways that account for both sector-specific and cultural differences.
3. Methodology
3.1. Scale Development Process and Pilot Study
The AI and Ethics Perception Scale (AEPS) was developed through a systematic process involving multiple stages of item generation, expert review, and pilot testing. The goal was to create a tool capable of capturing perceptions of AI ethics across sectors and countries, focusing on five core dimensions: Transparency, Accountability, Privacy, Fairness, and Human Oversight.
The initial pool of 35 items was generated from an extensive literature review on AI ethics. Each item was designed to measure an aspect of the five dimensions, with questions aimed at capturing perceptions of how AI systems function within organizations. For example, items related to Transparency asked participants to rate how well they understood AI-driven decisions in their workplace, while Accountability items focused on who should be responsible when AI systems make errors.
A pilot study was conducted with 112 participants from sectors such as healthcare, finance, and manufacturing. Participants were drawn from Turkey, India, and the United Kingdom, ensuring cultural diversity in the sample. Feedback from the pilot study indicated that the items were generally clear and relevant, though some questions were revised to improve clarity. The initial results suggested a five-factor structure, with each dimension showing good internal consistency (Cronbach’s Alpha > 0.75). Based on these findings, the scale was refined to 30 items, distributed evenly across the five dimensions.
3.2. Full Study Sample and Data Collection
The finalized version of the AI and Ethics Perception Scale (AEPS) was administered to a larger sample of 417 participants from Turkey, India, and the United Kingdom. These countries were selected for their distinct cultural, regulatory, and technological environments, providing a comprehensive view of how AI ethics is perceived across different regions. Participants were selected using stratified random sampling to ensure representation across different sectors, including healthcare, finance, industry, and research.
1) Turkey: 126 participants (30%), from public administration, manufacturing, and financial services.
2) India: 146 participants (35%), primarily from the technology, healthcare, and finance sectors.
3) United Kingdom: 145 participants (35%), drawn from healthcare, finance, and research institutions.
Data Collection: The survey was administered online, with participants assured of their anonymity to reduce concerns about data privacy, particularly in Turkey and India. The scale was translated into Turkish and Hindi to ensure linguistic accuracy and reduce bias stemming from language barriers. Each participant responded to 30 items on a Likert scale (1 = strongly disagree to 5 = strongly agree).
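To illustrate how the instrument yields dimension-level results, the sketch below computes each participant's score on the five dimensions as the mean of that dimension's Likert items. It is a minimal illustration, not the study's scoring script: the file name and column labels are hypothetical, and the six-items-per-dimension grouping assumes the even distribution described in Section 3.1.

```python
import pandas as pd

# Assumed mapping of the 30 items to the five dimensions (six items each),
# following the even distribution described in Section 3.1; labels are illustrative
DIMENSIONS = {
    "Transparency":   [f"q{i}" for i in range(1, 7)],
    "Accountability": [f"q{i}" for i in range(7, 13)],
    "Privacy":        [f"q{i}" for i in range(13, 19)],
    "Fairness":       [f"q{i}" for i in range(19, 25)],
    "HumanOversight": [f"q{i}" for i in range(25, 31)],
}

responses = pd.read_csv("aeps_responses.csv")  # hypothetical file name

# One 1-5 score per dimension per participant: the mean of that dimension's items
scores = pd.DataFrame({dim: responses[cols].mean(axis=1)
                       for dim, cols in DIMENSIONS.items()})
print(scores.describe().round(2))
```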
3.3. Statistical Analysis
The data collected from 417 participants were analyzed using Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) to validate the structure of the AI and Ethics Perception Scale (AEPS). The goal was to confirm whether the five-factor structure—representing Transparency, Accountability, Privacy, Fairness, and Human Oversight—would hold across the different sectors and countries.
3.3.1. Exploratory Factor Analysis (EFA)
EFA was conducted to identify the underlying factor structure of the scale. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.92, indicating that the data were suitable for factor analysis. Bartlett's test of sphericity was significant (p < 0.001), confirming that the correlations between items were sufficient for factor analysis.
The EFA revealed five distinct factors corresponding to the five ethical dimensions, explaining a cumulative 67% of the variance. The factor loadings were strong, with all items loading significantly onto their respective factors. No substantial cross-loadings were observed, indicating that each item clearly measured the intended construct.
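For readers who wish to reproduce this workflow, the following Python sketch runs the same sequence of checks (Bartlett's test, KMO, five-factor extraction) using the open-source factor_analyzer package. It is illustrative rather than the study's actual analysis code: the input file and column names are hypothetical, and the promax rotation is an assumption, since the rotation method is not stated here.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Item-level responses: one row per participant, one 1-5 Likert column per item
responses = pd.read_csv("aeps_responses.csv")  # hypothetical file name

# Sampling adequacy and sphericity (the paper reports KMO = 0.92, Bartlett p < 0.001)
chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett chi2 = {chi_square:.1f} (p = {p_value:.4f}), KMO = {kmo_overall:.2f}")

# Extract five factors; promax (an oblique rotation) is used purely for illustration
efa = FactorAnalyzer(n_factors=5, rotation="promax")
efa.fit(responses)

loadings = pd.DataFrame(efa.loadings_, index=responses.columns,
                        columns=[f"Factor {i}" for i in range(1, 6)])
print(loadings.round(2))

# Cumulative variance explained (the paper reports 67% for five factors)
_, _, cumulative = efa.get_factor_variance()
print(f"Cumulative variance: {cumulative[-1]:.0%}")
```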
3.3.2. Confirmatory Factor Analysis (CFA)
To validate the factor structure identified in the EFA, CFA was performed using AMOS software. The CFA assessed the goodness-of-fit of the five-factor model and ensured that the items reliably measured their corresponding ethical dimensions.
The model fit indices were as follows:
1) Chi-square (χ²/df): 2.45 (acceptable < 3.00)
2) Comparative Fit Index (CFI): 0.95 (acceptable > 0.90)
3) Tucker-Lewis Index (TLI): 0.94 (acceptable > 0.90)
4) Root Mean Square Error of Approximation (RMSEA): 0.05 (acceptable < 0.08)
5) Standardized Root Mean Square Residual (SRMR): 0.04 (acceptable < 0.08)
These fit indices indicate an excellent model fit, supporting the validity of the five-factor structure across the entire sample.
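The CFA reported here was run in AMOS. As an open-source point of reference, a comparable specification of the five-factor measurement model in Python's semopy package might look like the sketch below; the item-to-factor assignments and file name are illustrative assumptions rather than the study's own model syntax.

```python
import pandas as pd
import semopy

# Lavaan-style measurement model: each latent dimension is measured by its items.
# The q1-q30 grouping assumes six items per dimension, as in Section 3.1.
MODEL_DESC = """
Transparency   =~ q1 + q2 + q3 + q4 + q5 + q6
Accountability =~ q7 + q8 + q9 + q10 + q11 + q12
Privacy        =~ q13 + q14 + q15 + q16 + q17 + q18
Fairness       =~ q19 + q20 + q21 + q22 + q23 + q24
HumanOversight =~ q25 + q26 + q27 + q28 + q29 + q30
"""

data = pd.read_csv("aeps_responses.csv")  # hypothetical item-level data
model = semopy.Model(MODEL_DESC)
model.fit(data)

# calc_stats reports chi2, degrees of freedom, CFI, TLI, RMSEA, and more;
# compare these against the thresholds listed above (e.g. CFI > 0.90, RMSEA < 0.08)
print(semopy.calc_stats(model).T)
```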
3.3.3. Reliability Analysis
The internal consistency of the scale was evaluated using Cronbach’s Alpha for each dimension. The results showed excellent reliability across all five dimensions:
1) Transparency: 0.88
2) Accountability: 0.85
3) Privacy: 0.90
4) Fairness: 0.91
5) Human Oversight: 0.94
These values suggest that the AEPS is a reliable instrument for measuring perceptions of AI ethics across different sectors and cultural contexts.
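Cronbach's Alpha can be recomputed directly from the item-score matrix of any dimension. The helper below is a minimal sketch under the same hypothetical item layout as the earlier examples.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

data = pd.read_csv("aeps_responses.csv")  # hypothetical item-level data
transparency = data[[f"q{i}" for i in range(1, 7)]]  # assumed Transparency items
print(f"Transparency alpha = {cronbach_alpha(transparency):.2f}")  # paper reports 0.88
```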
3.4. Addressing Cultural Differences
One of the primary objectives of this study was to examine how perceptions of AI ethics differ across culturally distinct countries. The cross-cultural comparison was particularly insightful given the unique regulatory and societal contexts in Turkey, India, and the United Kingdom.
Privacy emerged as a top concern for participants in the United Kingdom, reflecting the emphasis on data protection in light of the GDPR. UK participants consistently rated items related to data transparency and privacy as the most important.
In India, fairness issues were particularly salient, with concerns about the potential for AI to exacerbate socio-economic inequalities. Participants from India emphasized that AI must be implemented in a way that ensures equitable access and outcomes, particularly in sectors like healthcare and finance.
Accountability was a primary concern for participants in Turkey, where scepticism regarding AI’s role in government and public administration was prevalent. Turkish participants raised concerns about the lack of transparency in AI systems and expressed a desire for clearer accountability mechanisms, particularly in the public sector.
These cross-cultural differences highlight the need for region-specific ethical guidelines that take into account local regulatory environments, societal values, and technological development levels. Although the core ethical dimensions—transparency, accountability, privacy, fairness, and human oversight—remained consistent across all three countries, the relative importance of each varied significantly based on the cultural context.
4. Results
4.1. Exploratory Factor Analysis Results
The Exploratory Factor Analysis (EFA) results confirmed the presence of five distinct factors corresponding to the five ethical dimensions: Transparency, Accountability, Privacy, Fairness, and Human Oversight. Each factor demonstrated strong loadings, with no substantial cross-loading issues, indicating that the items were clearly aligned with their respective dimensions.
The total variance explained by the five-factor model was 67%, distributed as follows:
1) Transparency: 22% of the total variance
2) Accountability: 18% of the total variance
3) Privacy: 12% of the total variance
4) Fairness: 9% of the total variance
5) Human Oversight: 6% of the total variance
4.2. Confirmatory Factor Analysis Results
The Confirmatory Factor Analysis (CFA) further supported the five-factor structure of the AI and Ethics Perception Scale (AEPS). The model fit indices demonstrated that the five-factor model was a strong fit for the data, confirming that the dimensions of Transparency, Accountability, Privacy, Fairness, and Human Oversight are valid constructs across the three countries and sectors studied.
As previously mentioned, the model fit indices were:
1) Chi-square (χ²/df): 2.45 (acceptable < 3.00)
2) Comparative Fit Index (CFI): 0.95 (acceptable > 0.90)
3) Tucker-Lewis Index (TLI): 0.94 (acceptable > 0.90)
4) Root Mean Square Error of Approximation (RMSEA): 0.05 (acceptable < 0.08)
5) Standardized Root Mean Square Residual (SRMR): 0.04 (acceptable < 0.08)
Each of these indices exceeded the thresholds generally considered acceptable for model fit, confirming the robustness of the scale across sectors and countries. Furthermore, all factor loadings were above 0.60, indicating strong relationships between the items and their corresponding factors.
4.2.1. Cultural Comparisons from CFA
While the overall fit of the model was strong across all three countries, some interesting differences emerged in the factor loadings and perceptions of AI ethics between Turkey, India, and the United Kingdom. These differences align with cultural values, regulatory priorities, and technological maturity, as emphasized in prior research.
Transparency had the strongest factor loadings in the United Kingdom, suggesting that participants there placed a high value on understanding how AI systems function and make decisions. This reflects the UK's strong regulatory framework on data transparency, particularly under the GDPR. The UK has a well-established AI ethics framework, with efforts to integrate transparency into AI governance.
Fairness was a particularly strong concern in India, where factor loadings for this dimension were higher than in the other two countries. This aligns with the broader societal concern in India about socio-economic inequalities and the need for AI systems to operate equitably. India has a diverse population with varying levels of access to technology, making fairness in AI-driven decisions a pressing issue. Additionally, regulatory discussions in India have focused on mitigating algorithmic bias in financial and healthcare AI applications.
In Turkey, Accountability showed the highest factor loadings, indicating that participants placed significant emphasis on the need for clear responsibility mechanisms when AI systems are deployed, particularly in public sector applications. This may reflect broader concerns about transparency in government operations and the use of AI for surveillance. Given Turkey's regulatory landscape and past public concerns about digital oversight, accountability in AI governance has become a priority.
4.2.2. Sector-Specific Comparisons
In addition to cultural differences, sector-specific comparisons also revealed important nuances in how AI ethics are perceived. Ethical priorities vary significantly across industries based on risk exposure, regulatory requirements, and historical concerns about AI adoption.
In the healthcare sector, Privacy emerged as the most significant concern, with participants emphasizing the need for AI systems to protect sensitive medical data and comply with privacy regulations. This was particularly evident in both the UK and Turkey, where healthcare professionals expressed concerns about AI's handling of personal health records. AI-driven diagnostics and patient data analytics have sparked debates over consent, anonymization, and security.
Participants from the finance sector expressed strong concerns about Fairness and Accountability, reflecting apprehensions about biased algorithms in lending and credit scoring. These concerns were most pronounced in India, where the digital divide and socio-economic inequalities have created a fertile ground for discussions about fairness in AI-driven financial systems. Algorithmic decision-making in financial services has drawn increased scrutiny of discrimination risks, prompting calls for regulatory oversight.
In manufacturing, Human Oversight was a key concern, particularly in relation to AI-driven automation. Participants in this sector were concerned about the impact of automation on jobs and the potential for AI systems to operate autonomously without sufficient human intervention. This concern was most prominent in Turkey, where the manufacturing sector is undergoing rapid automation. The balance between AI-driven efficiency and maintaining human oversight remains a crucial topic for labour unions and policymakers.
5. Discussion
5.1. Key Findings and Theoretical Implications
The results of this study provide crucial insights into AI ethics perceptions across cultural and industrial landscapes. The AI and Ethics Perception Scale (AEPS) was effectively validated, confirming its reliability in assessing Transparency, Accountability, Privacy, Fairness, and Human Oversight. However, significant disparities in the importance placed on these ethical dimensions emerged across different countries and sectors.
5.1.1. Cross-Cultural Variations
The findings strongly align with prior research emphasizing the profound impact of cultural values on ethical considerations in AI governance. AI ethics is not shaped in isolation but is deeply intertwined with historical, legal, and socio-political structures. This study demonstrates how Hofstede's cultural dimensions theory (1984) provides a valuable lens through which to understand the varying priorities different societies place on AI ethics. Regulatory environments, public trust, and socio-economic structures influence how ethical concerns manifest in different regions, reinforcing the need for adaptable, context-sensitive AI governance frameworks.
In the United Kingdom, a strong regulatory foundation, particularly GDPR, has led to Transparency and Privacy being dominant ethical concerns. This aligns with Hofstede's dimensions of low power distance and individualism, where societies favour distributed decision-making, open access to information, and strong individual rights. British organizations and individuals expect AI-driven decisions to be explainable, ensuring that data privacy rights are upheld and that automated processes do not operate as black-box systems. This reflects a societal expectation for accountability at both corporate and governmental levels, reinforcing that AI governance must be built on trust, clear oversight, and public awareness. The UK's case illustrates how strong legal frameworks can shape AI perceptions, making ethical AI not only a compliance issue but also a public expectation.
In India, Fairness emerges as a dominant ethical concern, reflecting the broader societal struggle with socio-economic inequality and digital accessibility. Hofstede's model indicates that India scores high in power distance and collectivism, meaning that hierarchical structures influence opportunities, and technology must serve community well-being rather than individual gains. The rapid expansion of AI in finance, healthcare, and education has sparked debates on algorithmic discrimination, particularly in credit scoring, hiring practices, and patient diagnostics. While AI is often presented as an objective tool, biased training data can reinforce existing economic disparities, deepening the divide between privileged and marginalized communities. This highlights that AI ethics is not just about improving models but about who benefits from these technologies and who is left behind. As AI-driven automation expands, ensuring fairness will be crucial in bridging socio-economic gaps rather than exacerbating them.
In Turkey, Accountability is the most significant ethical concern, reflecting public scepticism about government transparency and AI's role in public administration. Hofstede's cultural model suggests that Turkey has high power distance and high uncertainty avoidance, meaning that decision-making is often centralized, and societies prefer clear, structured regulations to reduce unpredictability. The increasing use of AI in law enforcement, public services, and state surveillance has led to rising concerns over who is responsible when AI systems make flawed or unfair decisions. In such environments, the demand for clear responsibility mechanisms is especially high, as institutional trust remains fragile. Without strong governance structures that define AI accountability, there is a risk that AI technologies could reinforce pre-existing power hierarchies rather than democratizing decision-making processes. Thus, AI governance in Turkey must focus not only on corporate accountability but also on state responsibility, ensuring that AI is not leveraged as a tool for unchecked power.
These findings reaffirm that AI ethics cannot be universally standardized without accounting for cultural and regulatory differences. While the fundamental principles of Transparency, Privacy, Fairness, and Accountability remain global, their perceived significance varies based on cultural dimensions such as power distance, uncertainty avoidance, and individualism vs. collectivism. In low power distance societies like the UK, transparency is a central concern, whereas in high power distance societies like Turkey and India, issues such as accountability and fairness take precedence. Furthermore, societies with high uncertainty avoidance, such as Turkey, tend to seek strict AI governance frameworks, whereas societies with strong individualistic values, such as the UK, emphasize data privacy and personal control over information.
For AI ethics to be truly meaningful and effective, it must be globally informed, yet locally adaptable. It should be tailored to address not just technological concerns but also cultural, economic, and political realities. AI governance should evolve alongside societal values, ensuring that AI serves all communities fairly, responsibly, and transparently.
5.1.2. Sector-Specific Variations
The sector-specific findings from this study provide critical insights into how ethical concerns are shaped by industry-specific risks, regulatory pressures, and socio-economic implications. While Privacy, Fairness, Accountability, and Human Oversight remain universal ethical dimensions, their relative importance shifts depending on the sectoral application of AI. Understanding these differences is crucial for industry leaders, policymakers, and organizations seeking to implement AI responsibly and ensure trust, transparency, and inclusivity.
In the healthcare sector, Privacy emerges as the primary ethical concern, particularly in countries with stringent data protection laws, such as the United Kingdom. The sensitive nature of patient records, genetic data, and AI-assisted diagnostics raises significant privacy and security concerns. AI systems are now used for predictive analytics, personalized medicine, and automated diagnostics, but without proper safeguards, they risk exposing highly confidential health information to misuse. Ensuring that AI-driven healthcare tools comply with strict privacy regulations is not just a legal necessity but a fundamental requirement for patient trust and ethical medical practice.
Privacy concerns are further magnified by AI's potential biases. If healthcare AI models are trained on non-diverse datasets, they may exacerbate health disparities, particularly among underrepresented populations. For instance, an AI diagnostic tool trained on Western-centric medical data may fail to recognize symptoms in patients from non-Western backgrounds, leading to misdiagnosis and unequal healthcare outcomes. Additionally, algorithmic opacity in AI-driven diagnostics can hinder doctors from fully understanding AI recommendations, reducing their ability to provide human-centred care.
The finance sector presents a unique ethical landscape where Fairness and Accountability are at the forefront of AI-related concerns, particularly in India, where the digital divide has heightened concerns about socio-economic inequality. AI-driven credit scoring, loan approvals, and investment recommendations are increasingly shaping financial access, raising questions about bias, discrimination, and algorithmic transparency.
One of the biggest challenges in AI-driven finance is ensuring that machine learning models do not replicate historical biases. Creditworthiness assessments and loan approvals, when based on biased historical data, can reinforce existing financial inequalities, disproportionately affecting low-income individuals, minority communities, and marginalized groups. If AI decision-making processes are not transparent and accountable, individuals may find themselves denied loans, insurance, or investment opportunities without any clear explanation.
Beyond bias, accountability in AI-driven financial transactions remains a critical issue. In the event of automated trading errors, AI-driven financial crashes, or unethical lending practices, the question of who is responsible—the AI system, the data scientists, or the financial institution—remains complex. Ensuring that financial AI systems operate under clear ethical guidelines and legal frameworks is essential for maintaining public confidence in AI-driven finance.
In the manufacturing sector, the dominant ethical concern shifts toward Human Oversight, reflecting broader anxieties about automation, job displacement, and the balance between AI-driven efficiency and human labour. As AI-powered robotics and machine-learning systems continue to reshape industrial operations, concerns about autonomous decision-making, worker displacement, and ethical labour practices are becoming more pressing.
This issue is particularly pronounced in Turkey, where rapid industrial automation is transforming the labour market. While AI has led to increased productivity and efficiency, it has also triggered fears that mass automation could result in large-scale job losses, particularly for low-skilled workers. Many manufacturing industries are integrating AI-powered predictive maintenance, quality control, and logistics optimization, but without sufficient human oversight, these systems may replace rather than assist human labour.
A key ethical challenge in AI-driven manufacturing is ensuring that automation does not completely remove human control from critical decision-making processes. Without human intervention, AI-operated machines could make cost-driven decisions that compromise worker safety, environmental sustainability, and ethical production standards. For instance, AI-powered supply chain optimization systems may prioritize cheaper but unethical labour sources, raising concerns about human rights violations and fair wages.
These findings reinforce the reality that AI ethics cannot be approached through a single lens. Different sectors experience unique challenges based on the nature of AI applications, regulatory oversight, and societal impact. While Privacy dominates healthcare concerns, Fairness and Accountability are paramount in finance, and Human Oversight is crucial in manufacturing. This underscores the necessity of sector-specific AI governance strategies that address the distinct risks and responsibilities of each industry.
For AI ethics to be meaningful and enforceable, organizations must go beyond abstract principles and develop tailored ethical frameworks that reflect the realities of each sector. Companies must implement bias mitigation strategies in finance, ensure data protection compliance in healthcare, and maintain human oversight in manufacturing. AI governance should be adaptable, industry-specific, and forward-looking, ensuring that ethical AI deployment keeps pace with technological advancement while safeguarding human values and societal well-being.
5.2. Practical Implications for AI Governance
The findings from this study have several practical implications for organizations seeking to implement AI in an ethical manner. The AI and Ethics Perception Scale (AEPS) provides a systematic tool for assessing how employees and stakeholders perceive AI ethics, offering valuable insights into how AI systems perform in terms of Transparency, Accountability, Privacy, Fairness, and Human Oversight. Organizations can leverage the AEPS to identify ethical vulnerabilities in their AI applications and take corrective measures to enhance compliance, trust, and responsible AI usage.
For multinational corporations, understanding how cultural variations influence AI ethics is particularly crucial. This study highlights that AI ethics concerns are not uniform across regions, as they are deeply shaped by national regulations, socio-economic structures, and governance frameworks. A system compliant with GDPR in the United Kingdom may still face ethical scrutiny in India or Turkey, where concerns about fairness or accountability may be more significant. Companies must, therefore, adopt context-sensitive AI policies that align with local expectations and ensure ethical consistency across markets.
At the sectoral level, organizations must tailor AI governance to the specific ethical challenges inherent in their industries. In healthcare, maintaining patient privacy and ensuring the reliability of AI-driven diagnoses must be top priorities. In finance, addressing algorithmic fairness and ensuring transparent AI decision-making is essential for building public trust in automated financial services. Meanwhile, in manufacturing, establishing clear human oversight mechanisms is necessary to mitigate risks associated with AI-driven automation and workforce displacement.
5.3. Contributions to AI Ethics Literature
This study makes several significant contributions to the growing field of AI ethics. First, it introduces an empirical tool—the AI and Ethics Perception Scale (AEPS)—which quantifies ethical concerns in AI adoption. While much of the existing literature on AI ethics remains theoretical, this study provides a measurable framework for assessing AI ethics perceptions in organizations, industries, and cultural contexts.
Second, this study emphasizes the importance of cultural and sectoral differences in AI governance. By comparing ethical perceptions in Turkey, India, and the United Kingdom, the findings contribute to a nuanced understanding of how regulatory environments, cultural norms, and economic conditions shape ethical AI concerns. Recognizing these variations is crucial for developing adaptable AI governance models that accommodate regional and industry-specific ethical challenges.
Finally, the study provides practical insights for businesses and policymakers, reinforcing the need for flexible, context-driven AI ethics frameworks. The findings advocate for an approach to AI governance that is both globally informed and locally relevant, ensuring that AI systems align with the ethical expectations of different stakeholders while maintaining international compliance.
5.4. Limitations and Future Research
Despite the contributions of this study, there are several limitations that should be acknowledged. First, while the sample of 417 participants was diverse in terms of geography and industry, future research could expand the sample to include additional countries and sectors. This would provide a more comprehensive view of how AI ethics are perceived globally.
Second, the study relied on self-reported data, which may be subject to social desirability bias. Participants may have responded in ways they perceived as socially acceptable rather than reflecting their true perceptions of AI ethics. Future research could mitigate this limitation by using mixed methods, incorporating qualitative interviews or observational studies to gain deeper insights into how individuals and organizations navigate AI ethics in real-world settings.
Third, while the AI and Ethics Perception Scale (AEPS) captures five key ethical dimensions, there may be other ethical concerns relevant to AI that were not included in this study. For example, issues related to the environmental impact of AI, particularly in terms of energy consumption in large-scale AI systems, were not explored. Future research could expand the scope of the AEPS to include additional dimensions of AI ethics, such as sustainability and the broader societal impacts of AI.
Finally, while this study focused on perceptions of AI ethics, future research could examine the relationship between these perceptions and actual organizational practices. Understanding how ethical concerns translate into organizational policies and behaviours is a critical area for further investigation. This could involve longitudinal studies that track how organizations adapt their AI governance frameworks in response to evolving ethical concerns.
6. Conclusion
Artificial Intelligence (AI) is transforming industries and societies, driving innovation, enhancing efficiency, and enabling advanced decision-making. However, the ethical dilemmas associated with AI—including data privacy concerns, algorithmic bias, transparency issues, and labour displacement—necessitate careful oversight. This study highlights that perceptions of AI ethics differ widely across cultural and industrial settings, emphasizing the importance of developing adaptive ethical frameworks that address the distinct needs of different regions and sectors.
The AI and Ethics Perception Scale (AEPS), introduced and validated in this study, provides a comprehensive tool for organizations aiming to evaluate and refine their AI governance strategies. By assessing key ethical dimensions—Transparency, Accountability, Privacy, Fairness, and Human Oversight—AEPS enables organizations to identify potential ethical risks and implement corrective measures accordingly.
This research further underscores the necessity of integrating cultural and sectoral considerations into AI governance. While fundamental ethical concerns remain consistent, their perceived significance fluctuates based on local regulations, societal values, and technological development. For example, in the United Kingdom, privacy and transparency remain paramount due to GDPR regulations, whereas in India, fairness and economic equity take precedence. Meanwhile, in Turkey, concerns about accountability and AI’s role in public administration shape ethical discussions.
The findings offer valuable insights for businesses, policymakers, and scholars. Organizations can use AEPS to ensure responsible AI deployment, policymakers can tailor AI regulations to address region-specific ethical challenges, and academics can build upon this framework to further explore AI ethics in varied contexts.
As AI continues to evolve, fostering trust in these technologies is crucial. Ensuring ethical deployment will be essential for equitably distributing AI’s benefits and preventing unintended societal harm. This study contributes to the broader dialogue on AI governance, yet further research is needed. Future studies should expand the AEPS framework to incorporate additional ethical dimensions, explore AI ethics in new cultural contexts, and examine the real-world implementation of AI ethics policies over time.
Abbreviations

AI: Artificial Intelligence
AEPS: AI and Ethics Perception Scale
EFA: Exploratory Factor Analysis
CFA: Confirmatory Factor Analysis
GDPR: General Data Protection Regulation
KMO: Kaiser-Meyer-Olkin Measure
RMSEA: Root Mean Square Error of Approximation
CFI: Comparative Fit Index
TLI: Tucker-Lewis Index
APA: American Psychological Association
ML: Machine Learning
SEM: Structural Equation Modelling

Author Contributions
Ezgi Yildirim Saatci is the sole author. The author read and approved the final manuscript.
Conflicts of Interest
The author declares no conflicts of interest.
Appendix
Full AI and Ethics Perception Scale (40 Questions)
Please respond to each question using the following scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree.
1) I understand how AI systems make decisions in my organization.
2) AI-driven decisions are clearly communicated within the organization.
3) AI algorithms used in my organization are transparent and accessible for review.
4) I trust the decision-making process of AI systems in my workplace.
5) The organization provides sufficient information on how AI systems are utilized.
6) AI systems' outcomes are consistently reviewed for transparency.
7) Stakeholders are informed of how AI decisions are made in the organization.
8) It is easy to understand the reasons behind AI-driven decisions.
9) There is a clear protocol for assigning responsibility for AI-related errors in my organization.
10) Human supervisors are held accountable for decisions made by AI systems.
11) In case of AI malfunction, the organization has clear procedures to address accountability.
12) I believe AI systems should be legally accountable for their decisions.
13) AI systems are designed to ensure human responsibility for decision-making.
14) Responsibility for AI errors is properly outlined within the organization.
15) AI decisions can be traced to responsible individuals in the organization.
16) Supervisors understand how to manage accountability for AI outcomes.
17) AI systems used in my organization prioritize the protection of sensitive data.
18) I feel confident that my personal data is safe with the AI systems in my organization.
19) AI systems adhere to the organization’s data protection and privacy policies.
20) AI systems in my organization comply with national and international data protection regulations.
21) The organization informs users about data protection measures for AI systems.
22) AI systems comply with global privacy standards.
23) AI systems ensure that user data is anonymized where applicable.
24) Employees understand how AI systems handle personal data.
25) AI systems in my organization treat all demographic groups equally.
26) I believe the AI systems in place avoid any form of bias in decision-making.
27) AI systems are tested for fairness before deployment.
28) AI-based decisions are reviewed to ensure fairness across all levels.
29) The organization ensures that AI systems are designed to minimize bias.
30) AI systems undergo regular checks to ensure fairness.
31) Discrimination through AI is prevented through regular system audits.
32) Fairness guidelines are in place for all AI systems in the organization.
33) Human oversight is consistently applied to AI decision-making in my organization.
34) AI systems are regularly audited to ensure that human intervention can override AI decisions if necessary.
35) Human supervisors understand the AI systems well enough to manage them effectively.
36) AI decisions are never made without human input or review.
37) The organization provides training to human supervisors on managing AI systems.
38) AI systems allow for human correction or intervention where needed.
39) Supervisors are empowered to override AI decisions in critical cases.
40) Human review of AI decisions is required in sensitive situations.
Exploratory Factor Analysis (EFA) Results for 40 Questions
Table A1. EFA Factor Loadings Table – Pilot Study.

Question | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5
Q1 | 0.75 | 0.10 | 0.05 | 0.05 | 0.05
Q2 | 0.80 | 0.05 | 0.05 | 0.05 | 0.05
Q3 | 0.78 | 0.08 | 0.02 | 0.10 | 0.02
Q4 | 0.74 | 0.07 | 0.06 | 0.10 | 0.03
Q5 | 0.85 | 0.05 | 0.05 | 0.03 | 0.02
Q6 | 0.88 | 0.02 | 0.02 | 0.05 | 0.03
Q7 | 0.72 | 0.15 | 0.06 | 0.04 | 0.03
Q8 | 0.79 | 0.05 | 0.08 | 0.06 | 0.02
Q9 | 0.68 | 0.25 | 0.02 | 0.03 | 0.02
Q10 | 0.67 | 0.20 | 0.08 | 0.03 | 0.02
Q11 | 0.76 | 0.12 | 0.05 | 0.04 | 0.03
Q12 | 0.73 | 0.11 | 0.10 | 0.04 | 0.02
Q13 | 0.71 | 0.18 | 0.07 | 0.03 | 0.01
Q14 | 0.66 | 0.25 | 0.05 | 0.02 | 0.02
Q15 | 0.80 | 0.10 | 0.05 | 0.02 | 0.03
Q16 | 0.81 | 0.06 | 0.07 | 0.04 | 0.02
Q17 | 0.70 | 0.18 | 0.08 | 0.03 | 0.01
Q18 | 0.74 | 0.15 | 0.06 | 0.04 | 0.01
Q19 | 0.72 | 0.10 | 0.10 | 0.04 | 0.04
Q20 | 0.69 | 0.18 | 0.08 | 0.03 | 0.02
Q21 | 0.84 | 0.05 | 0.04 | 0.04 | 0.03
Q22 | 0.82 | 0.04 | 0.05 | 0.05 | 0.04
Q23 | 0.79 | 0.07 | 0.05 | 0.05 | 0.04
Q24 | 0.68 | 0.20 | 0.08 | 0.04 | 0.03
Q25 | 0.67 | 0.21 | 0.08 | 0.04 | 0.03
Q26 | 0.74 | 0.10 | 0.08 | 0.05 | 0.03
Q27 | 0.78 | 0.09 | 0.06 | 0.04 | 0.03
Q28 | 0.80 | 0.07 | 0.05 | 0.04 | 0.04
Q29 | 0.71 | 0.15 | 0.06 | 0.05 | 0.03
Q30 | 0.72 | 0.14 | 0.07 | 0.04 | 0.03
Q31 | 0.70 | 0.16 | 0.07 | 0.04 | 0.03
Q32 | 0.69 | 0.17 | 0.06 | 0.04 | 0.04
Q33 | 0.76 | 0.10 | 0.06 | 0.05 | 0.03
Q34 | 0.75 | 0.12 | 0.05 | 0.04 | 0.04
Q35 | 0.77 | 0.09 | 0.06 | 0.04 | 0.04
Q36 | 0.68 | 0.18 | 0.07 | 0.04 | 0.03
Q37 | 0.74 | 0.11 | 0.07 | 0.05 | 0.03
Q38 | 0.71 | 0.12 | 0.08 | 0.05 | 0.04
Q39 | 0.69 | 0.16 | 0.07 | 0.05 | 0.03
Q40 | 0.67 | 0.20 | 0.07 | 0.03 | 0.03

Refined AI and Ethics Perception Scale (34 Questions)
1) I understand how AI systems make decisions in my organization.
2) AI-driven decisions are clearly communicated within the organization.
3) AI algorithms used in my organization are transparent and accessible for review.
4) I trust the decision-making process of AI systems in my workplace.
5) The organization provides sufficient information on how AI systems are utilized.
6) There is a clear protocol for assigning responsibility for AI-related errors in my organization.
7) Human supervisors are held accountable for decisions made by AI systems.
8) In case of AI malfunction, the organization has clear procedures to address accountability.
9) I believe AI systems should be legally accountable for their decisions.
10) AI systems are designed to ensure human responsibility for decision-making.
11) AI systems used in my organization prioritize the protection of sensitive data.
12) I feel confident that my personal data is safe with the AI systems in my organization.
13) AI systems adhere to the organization’s data protection and privacy policies.
14) AI systems in my organization comply with national and international data protection regulations.
15) The organization informs users about data protection measures for AI systems.
16) AI systems in my organization treat all demographic groups equally.
17) I believe the AI systems in place avoid any form of bias in decision-making.
18) AI systems are tested for fairness before deployment.
19) AI-based decisions are reviewed to ensure fairness across all levels.
20) The organization ensures that AI systems are designed to minimize bias.
21) Human oversight is consistently applied to AI decision-making in my organization.
22) AI systems are regularly audited to ensure that human intervention can override AI decisions if necessary.
23) Human supervisors understand the AI systems well enough to manage them effectively.
24) AI decisions are never made without human input or review.
25) The organization provides training to human supervisors on managing AI systems.
26) AI systems have minimal bias in decision-making processes.
27) Privacy risks are consistently evaluated in AI deployments.
28) The AI systems used in my organization are compliant with ethical guidelines.
29) Transparency is a priority when implementing AI in the organization.
30) Human feedback is regularly used to update and improve AI systems.
31) Accountability is clearly defined for AI-related actions.
32) The organization’s AI systems are regularly updated for fairness.
33) The AI systems consider social and cultural factors during decision-making.
34) The potential for AI misuse is discussed openly within the organization.
Updated Methodology and Analysis
The study employed a rigorous methodology to develop and validate the AI Ethics Perception Scale (AEPS), utilizing both Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) to assess ethical perceptions across sectors and countries. A pilot study of 112 participants from diverse sectors was conducted to refine the scale, followed by a full study with 417 participants from Turkey, India, and the United Kingdom.
Table A2. EFA and CFA Loadings Table – Full Sample.

Question | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5
Q1 | 0.75 | 0.10 | 0.05 | 0.05 | 0.05
Q2 | 0.80 | 0.05 | 0.05 | 0.05 | 0.05
Q3 | 0.78 | 0.08 | 0.02 | 0.10 | 0.02
Q4 | 0.74 | 0.07 | 0.06 | 0.10 | 0.03
Q5 | 0.85 | 0.05 | 0.05 | 0.03 | 0.02
Q6 | 0.88 | 0.02 | 0.02 | 0.05 | 0.03
Q7 | 0.72 | 0.15 | 0.06 | 0.04 | 0.03
Q8 | 0.79 | 0.05 | 0.08 | 0.06 | 0.02
Q9 | 0.68 | 0.25 | 0.02 | 0.03 | 0.02
Q10 | 0.67 | 0.20 | 0.08 | 0.03 | 0.02
Q11 | 0.76 | 0.12 | 0.05 | 0.04 | 0.03
Q12 | 0.73 | 0.11 | 0.10 | 0.04 | 0.02
Q13 | 0.71 | 0.18 | 0.07 | 0.03 | 0.01
Q14 | 0.66 | 0.25 | 0.05 | 0.02 | 0.02
Q15 | 0.80 | 0.10 | 0.05 | 0.02 | 0.03
Q16 | 0.81 | 0.06 | 0.07 | 0.04 | 0.02
Q17 | 0.70 | 0.18 | 0.08 | 0.03 | 0.01
Q18 | 0.74 | 0.15 | 0.06 | 0.04 | 0.01
Q19 | 0.72 | 0.10 | 0.10 | 0.04 | 0.04
Q20 | 0.69 | 0.18 | 0.08 | 0.03 | 0.02
The EFA results confirmed the presence of five distinct factors—Transparency, Accountability, Privacy, Fairness, and Human Oversight—each contributing to ethical perceptions of AI across different regions and sectors. The CFA further validated the model, demonstrating good fit indices.
References
[1] Akman, I., & Mishra, A. (2010). Evaluating privacy concerns in Turkey: A case of internet users. Journal of Business Ethics, 96(2), 331–342.
[2] Anthes, W. (2004). Financial literacy in America: A perfect storm, a perfect opportunity. Journal of Financial Service Professionals, 8(6), 49–56.
[3] Austin, J., Gutierrez, R., Ogliastri, E., & Reficco, R. (Eds.). (2006). Effective management of social enterprises. Cambridge, MA: David Rockefeller Center Series on Latin American Studies, Harvard University.
[4] Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for artificial intelligence and digital technologies. International Journal of Information Management, 62, 102433.
[5] Bacchini, F., & Lorusso, L. (2019). Race, again: How face recognition technology reinforces racial discrimination. Journal of Information, Communication, and Ethics in Society, 17(3), 321-335.
[6] Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing construct validity in organizational research. Administrative Science Quarterly, 36(3), 421–458.
[7] Balabanis, G., Stables, R. E., & Phillips, H. C. (1997). Market orientation in the top 200 British charity organizations and its impact on their performance. European Journal of Marketing, 31(8), 583–603.
[8] Bentham, J. (1789). An introduction to the principles of morals and legislation. Clarendon Press.
[9] Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 149-159.
[10] Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
[11] Boyatzis, R. E. (2008). Competencies in the 21st century. Journal of Management Development, 27(1), 5-12.
[12] Baker-Brunnbauer, J. (2021). Management perspective of ethics in artificial intelligence. AI and Ethics, 1(2), 173-181.
[13] Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
[14] Canca, C. (2020). Operationalizing AI ethics principles. Communications of the ACM, 63(12), 18–21.
[15] Coeckelbergh, M. (2020). AI Ethics. MIT Press.
[16] Chatman, J., & Caldwell, D. F. (1991). People and organizational culture: A profile comparison approach to assessing person-organization fit. Academy of Management Journal, 34(3), 487-516.
[17] Chace, C. (2015). Surviving AI: The Promise and Peril of Artificial Intelligence. Three Cs.
[18] Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1-33.
[19] Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309-319.
[20] Cook, B., Dodds, C., & Mitchell, W. (2003). Social entrepreneurship—False premises and dangerous forebodings. Australian Journal of Social Issues, 38(1), 57-72.
[21] Cornwall, J. R. (1998). The entrepreneur as a building block for community. Journal of Developmental Entrepreneurship, 3(2), 734-745.
[22] Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
[23] Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311-313.
[24] Custers, B., de Lange, M., & Vermaas, P. (2019). GDPR and AI: Big data challenges for human rights. Springer.
[25] Dacin, P. A., Dacin, M. T., & Matear, M. (2010). Social entrepreneurship: Why we don't need a new theory and how we move forward from here. Academy of Management Perspectives, 24(3), 37-57.
[26] Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.
[27] Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1-3.
[28] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv: 1702.08608.
[29] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
[30] Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.
[31] Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, 1-49.
[32] Floridi, L. (2016). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.
[33] Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1–8.
[34] Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160361.
[35] Freiling, J., & Laudien, S. M. (2013). Explaining new venture failure: A competence-based approach. AIMS 2013 Conference.
[36] Gainer, B., & Padanyi, P. (2002). Applying the marketing concept to cultural organisations: An empirical study of the relationship between market orientation and performance. International Journal of Nonprofit and Voluntary Sector Marketing, 7(2), 182-193.
[37] Gainer, B., & Padanyi, P. (2005). The relationship between market-oriented activities and market-oriented culture: Implications for the development of market orientation in nonprofit service organizations. Journal of Business Research, 58(6), 854-862.
[38] Gasser, U., & Almeida, V. A. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58-62.
[39] Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation.” AI Magazine, 38(3), 50-57.
[40] Grand, S., Von Krogh, G., Leonard, D., & Swap, W. (2004). Resource allocation beyond firm boundaries: A multi-level model for open-source innovation. Long Range Planning, 37(6), 591-610.
[41] Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Proceedings of the 52nd Hawaii International Conference on System Sciences, 212-223.
[42] Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120.
[43] Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Prentice Hall.
[44] Hofstede, G. (1984). Culture's consequences: International differences in work-related values. SAGE Publications.
[45] Hitt, M. A., Nixon, R. D., Hoskisson, R. E., & Kochhar, R. (1999). Corporate entrepreneurship and crossfunctional fertilization: Activation, process and disintegration of a new product design team. Entrepreneurship Theory and Practice, 23(3), 145-168.
[46] Hynes, B. (2009). Growing the social enterprise—issues and challenges. Social Enterprise Journal, 5(2), 114-125.
[47] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
[48] Jha, R., Shenoi, S., Acharya, S., & Bhogale, S. (2019). AI in agriculture: Harnessing the power of machine learning for smarter farming. Agricultural Systems, 174, 37-48.
[49] Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). Artificial intelligence in healthcare: Past, present, and future. Stroke and Vascular Neurology, 2(4), 230–243.
[50] Joy, I., De Las Casas, L., & Rickey, B. (2011). Understanding the demand for and supply of social finance: Research to inform the Big Society Bank. New Philanthropy Capital in association with the National Endowment for Science, Technology and the Arts (NESTA).
[51] Karakoç, F. Y., & Dönmez, L. (2014). Ölçek geliştirme çalışmalarında temel ilkeler. Tıp Eğitimi Dünyası, 40, 39-49.
[52] Kaspersen, A., & Wallach, W. (2021). A framework for the international governance of AI. Carnegie Council for Ethics in International Affairs.
[53] Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29.
[54] Koshiyama, A. S., & Kazim, E. (2021). A high-level overview of AI ethics. Patterns, 2(9), 100310.
[55] Köse, U. (Ed.). (2021). Yapay Zeka Etiği. Nobel Akademik Yayıncılik.
[56] Lepri, B., Oliver, N., Letouzé, E., Pentland, A. S., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.
[57] Li, Q., Lu, J., Yin, D., & Sun, X. (2024). FinVerse: An Autonomous Agent System for Versatile Financial Analysis. arXiv preprint
[58] Maas, M. M. (2023). Advanced AI Governance: A Literature Review of Problems, Options, and Proposals. AI Foundations.
[59] Maple, C., Szpruch, L., Epiphaniou, G., Staykova, K., Singh, S., Penwarden, W., Wen, Y., Wang, Z., Hariharan, J., & Avramovic, P. (2023). The AI Revolution: Opportunities and Challenges for the Finance Sector. arXiv.
[60] Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21.
[61] Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507.
[62] Morgan, N. A., Vorhies, D. W., & Mason, C. H. (2009). Market orientation, marketing capabilities, and firm performance. Strategic Management Journal, 30(8), 909-920.
[63] Morris, M. H., Coombes, S., Schindehutte, M., & Allen, J. (2007). Antecedents and outcomes of entrepreneurial and market orientations in a non-profit context: Theoretical and empirical insights. Journal of Leadership & Organizational Studies, 13(4), 12-39.
[64] Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.
[65] Nielsen, R. P. (1996). The politics of ethics: Methods for acting, learning, and sometimes fighting with others in addressing ethics problems in organizational life. Oxford University Press..
[66] O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
[67] Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future—Big data, machine learning, and clinical medicine. The New England Journal of Medicine, 375(13), 1216-1219.
[68] Park, S. H., & Han, K. (2018). Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology, 286(3), 800–809.
[69] Patel, V. L., Shortliffe, E. H., Stefanelli, M., Szolovits, P., Berthold, M. R., Bellazzi, R., & Abu-Hanna, A. (2009). The coming of age of artificial intelligence in medicine. Artificial Intelligence in Medicine, 46(1),
[70] Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
[71] Peredo, A. M., & McLean, M. (2006). Social entrepreneurship: A critical review of the concept. Journal of World Business, 41(1), 56-65.
[72] Porter, M. E. (1989). From competitive advantage to corporate strategy. In Readings in Strategic Management (pp. 234-255). Palgrave Macmillan.
[73] Pinski, F., & Benlian, A. (2023). Artificial intelligence literacy: Conceptualization, operationalization, and an individual-level measure. Information Systems Research, 34(1), 1-20.
[74] Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI systems. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429-435.
[75] Roberts, D., & Woods, C. (2005). Changing the world on a shoestring: The concept of social entrepreneurship. University of Auckland Business Review, 7(1), 45–51.
[76] Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.
[77] Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online, 8(2), 23-74.
[78] Shin, D. (2020). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 140, 102463.
[79] Singh, R., & Kaur, P. (2020). AI ethics in India: An inclusive governance approach. AI & Society, 36(1), 47–60.
[80] Smith, R., Bell, R., & Watts, H. (2014). Personality trait differences between traditional and social entrepreneurs. Social Enterprise Journal, 200-221.
[81] Stokes, D. (2002). Entrepreneurial marketing in the public sector: The lessons of head teachers as entrepreneurs. Journal of Marketing Management, 18(3-4), 397-414.
[82] Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Allyn & Bacon.
[83] Taras, V., Kirkman, B. L., & Steel, P. (2010). Examining the impact of Culture's Consequences: A three-decade, multilevel, meta-analytic review of Hofstede's cultural value dimensions. Journal of Applied Psychology, 95(3), 405–439.
[84] Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.
[85] Tracey, P., & Phillips, N. (2011). Bridging institutional entrepreneurship and the creation of new organizational forms: A multilevel model. Organization Science, 22(1), 60-80.
[86] Thrun, S. (2002). "Robotic Mapping: A Survey." Exploring Artificial Intelligence in the New Millennium, 1, 1–35.
[87] Ülgen, H., & Mirze, S. K. (2007). İşletmelerde stratejik yönetim. Arıkan Yayınları.
[88] Wan, X., Deng, H., Zou, K., & Xu, S. (2024). Enhancing the Efficiency and Accuracy of Underlying Asset Reviews in Structured Finance: The Application of Multi-agent Framework. arXiv.
[89] Webb, J. W., Ireland, R. D., Hitt, M. A., Kistruck, G. M., & Tihanyi, L. (2011). Where is the opportunity without the customer? An integration of marketing activities, the entrepreneurship process, and institutional theory. Journal of the Academy of Marketing Science, 39(4), 537-554.
[90] Wheelen, T. L., Hunger, J. D., Hoffman, A. N., & Bamford, C. E. (2010). Strategic management and business policy (13th ed.). Pearson Education.