
Abstract

The past decade has witnessed an evolutionary process that has revolutionized the industry and social norms surrounding Artificial Intelligence (AI). This evolution, nonetheless, has led to a complex web of ethical, legal, and societal issues that necessitate the creation of a robust regulatory system. This article takes a critical look at the progress, issues, and regulation of AI over the last decade. The narrative is built through a multifaceted journey, giving an urgent call for regulation amid the AI revolution. This study highlights specific areas that demand oversight and embodies a global perspective on AI regulation by examining the US, EU, and Indian regulatory landscapes. Insights are further reinforced by an analysis of regulatory approaches, the forces that propel AI regulation discussions, and the roles of stakeholders, legal authorities, enterprises, and individuals in shaping AI governance. The article concludes with a roadmap, identifying the challenges and suggesting mitigation strategies, and stresses that balancing innovation with ethical concerns in AI regulation is complex but necessary. This work is intended to provide policymakers with helpful information, stimulate international cooperation, and help stakeholders navigate the complicated world of AI governance.

Keywords

AI, Artificial, Development, Challenges

Introduction

In the last ten years, Artificial Intelligence (AI) has witnessed a spectacular increase in the pace of development, with remarkable breakthroughs in many subfields and applications. AI has evolved from Turing's thought experiment in the 1950s to the current era of self-driving cars. Early rule-based systems gave way to data-driven learning in the 1990s, propelled by exponential growth in computing power and the availability of digital data. The 2000s then saw an expansion in Natural Language Processing (NLP), computer vision, and robotics, giving rise to the current AI revolution. The most recent acceleration, however, is primarily due to the emergence of deep learning and large neural networks. Notably, the launch of Generative Pre-trained Transformers (GPT), including GPT-3 with 175 billion parameters in 2020, demonstrated powerful text generation and natural language understanding capabilities. GPT-3 and its successors, such as GPT-4, continue to push the boundaries of AI. Over the past decade, we have witnessed an era of incredible innovation and breakthroughs in AI technology. Machine learning, especially deep learning, catapulted AI applications to greater heights, enabling machines to perform tasks previously exclusive to humans. AI has proved transformative in natural language processing, computer vision, autonomous vehicles, personalized recommendation systems, and medical diagnostics. These developments improved efficiency across sectors and digitized daily routines. Considering these significant advancements, it becomes evident that establishing regulatory frameworks of equal caliber is imperative. Such frameworks serve as crucial safeguards, ensuring ethical advancement, mitigating risks, and maximizing the societal benefits of AI technology. With AI permeating daily life, regulation becomes necessary to balance innovation with ethical concerns. The latest advancements, such as GPT-4, highlight the pressing need for comprehensive regulations to steer the responsible development and deployment of AI technologies and to address potential ethical and societal issues effectively.

II. NEED FOR REGULATING AI – BEYOND DEEP FAKES, GAI/LLMs, AND ETHICS

The significant potential of Artificial Intelligence brings with it an increasing number of ethical issues and risks. Perhaps most importantly, issues such as deep fakes, which challenge information integrity and trust, and the prospect of general artificial intelligence (GAI) exceeding human-level intelligence, raise ethical questions and existential dilemmas. The urgent requirement for regulatory intervention is further highlighted by other concerns, such as privacy violations, algorithmic bias, employment displacement, and the use of autonomous weapons, all of which underscore that AI must be developed and deployed responsibly. AI has enormous potential across many tasks, but it can also be put to dangerous use; privacy breaches and algorithmic biases may result in disastrous consequences unless regulated. Enforcing responsive regulatory policies is critical to mitigating these risks so that AI benefits humankind rather than harming it.

Privacy concerns arise because AI algorithms consume vast amounts of personal data [14], which may be collected, stored, or used without consent and proper safeguards. While regulations do not yet prevent every unauthorized collection of personal data by AI applications, data protection rules and informed consent for the use of such data in AI algorithms remain a high priority.

Bias in AI systems is becoming a major problem as AI solutions see extensive use, since they can ingest potentially biased information and unleash bias at scale. This risk highlights companies' responsibility and therefore requires maintaining records of data sources, model architecture, and decision-making algorithms. Audits and continuous monitoring are needed to account for biases and their consequences.

Misinformation and fake content, enabled by Generative AI's ability to create realistic yet fraudulent material, lead to the spread of unregulated misleading information. The increased production of deep fakes that are indistinguishable from real content has raised questions about what is true, necessitating regulatory frameworks as a safeguard against widespread misuse that could be catastrophic in societal, political, and international settings. In a recent incident from February 2023 [12], a clip that seemingly showed U.S. President Joe Biden making transphobic remarks was shared on social media as if authentic, although it was fabricated. In another case, the DeSantis campaign posted fake images of former U.S. President Donald Trump hugging Anthony Fauci in a social media video [13] that went viral.

The overall security risks of AI tools include breaches exposing sensitive information, data poisoning [15], and malware introduced during training, making models vulnerable to corruption. Robust security arrangements, data encryption, and ethical standards for AI development are required to mitigate these risks, safeguard responsible AI practices, and guard against privacy violations, national security threats, and cyberattacks.

III. WHAT IS TO BE REGULATED?

A. Regulating Technical Advancements in the Field:

In governing the technical advancement of AI, a multifaceted approach becomes critical. Algorithmic transparency, fairness, and accountability are the foundations that secure the ethical utilization of AI systems. This encompasses the formation of strict standards as well as the clarification of AI decision-making processes with transparent tracing. Alongside these processes, it is important to have fairness measures in place to reduce the biases introduced in AI algorithms and ensure equity among different user groups. Furthermore, the implementation of accountability tools is indispensable; it is through these tools that AI developers are held responsible for the ethical consequences and inherent flaws of the systems they create. Along with these initiatives, responsible research practices are an essential part of constructing the ethical framework of artificial intelligence technologies. This includes promoting ethical behavior across the research lifecycle and placing emphasis on integrity and societal principles. Transparency in the research process is promoted by making methodologies, datasets, and results available for reproduction and peer review, which helps ensure confidence among the scientific community. Addressing algorithmic biases and errors brings deeper reliability and integrity to regulation. The initiatives include developing mechanisms to find and correct biases throughout both the training and deployment phases of AI systems. Implementing continuous monitoring and evaluation processes is paramount, since it means being vigilant in detecting and rectifying errors in AI technologies along their operational lifespan. This concert of techniques stresses a holistic paradigm for regulating technical progress, one that mixes transparency, fairness, and accountability with responsible research practices and proactive prevention of bias and error. This collaborative venture is geared towards establishing an ethical roadmap for the development and application of AI systems.
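As a minimal sketch of what such a fairness audit could look like in practice, the snippet below computes per-group selection rates and a disparate impact ratio for a set of binary model decisions. The function names, the sample data, and the four-fifths (0.8) threshold are illustrative assumptions, not requirements drawn from any specific regulation.

```python
# Minimal fairness-audit sketch, assuming binary predictions and a single
# protected attribute; the 0.8 cut-off follows the common "four-fifths rule"
# and is used here only for illustration.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: model decisions and the applicants' group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(preds, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential adverse impact: flag for review and record in the audit log.")
```

In a continuous-monitoring setting, a check like this would be rerun on each new batch of decisions and its output recorded alongside data-source and model-version records.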

B. Regulating Data Collection, Processing, and Analysis:

To regulate AI developments that are data-driven, data collection, processing, and analysis must be carefully overseen. Oversight mechanisms are indispensable, striving to protect individual privacy through the implementation of strong data controls. This implies not only enforcing protocols for data privacy but also ensuring that users are adequately informed about how data is collected and applied, thus creating a culture of transparency and user empowerment. Within the data governance realm, informed consent is the leading principle. Consent procedures, which are mandatory for data collection, emphasize the vital role that consent plays in the data-sharing process. In parallel, anonymization techniques are actively promoted; these privacy techniques constitute an important tool for managing privacy risks and shielding sensitive information from unauthorized exposure, supporting the broader goal of ethical and responsible AI development. Within this regulatory area, joint efforts demonstrate a consensus around user privacy and informed decision-making. The regulatory framework sets up a robust monitoring system, mandates informed consent, and proactively stimulates data anonymization to strike a fine balance between technological innovation and the safeguarding of individual rights, creating confidence and accountability in the developing landscape of AI.
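As a minimal sketch of one such anonymization step, the snippet below pseudonymizes direct identifiers before records enter a training pipeline. The field names, the salt handling, and the dropped "notes" field are assumptions for illustration, not a compliance recipe.

```python
# Minimal pseudonymization sketch, assuming a simple list-of-dicts dataset;
# real deployments need key management, re-identification risk assessment,
# and legal review beyond what is shown here.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # in practice the salt/key must be stored separately

def pseudonymize(record, identifier_fields=("name", "email")):
    """Replace direct identifiers with salted hashes and drop free-text notes."""
    safe = dict(record)
    for field in identifier_fields:
        if field in safe:
            digest = hashlib.sha256((SALT + str(safe[field])).encode()).hexdigest()
            safe[field] = digest[:16]  # truncated token stands in for the identity
    safe.pop("notes", None)  # remove fields too risky to keep at all
    return safe

raw = {"name": "Jane Doe", "email": "jane@example.com", "age": 41, "notes": "..."}
print(pseudonymize(raw))
```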

C. Regulating Inferencing and Actions:

Regulating the decision-making processes and actions of AI requires a complex and integrated approach. This entails defining very clear boundaries for autonomous decision-making, a crucial step toward ensuring that AI systems do not overstep ethical or legal limits. It includes delineating separate scopes and boundaries for AI decision-making within different domains, with high-risk sectors taking the lead where the effects of AI activity are the most severe. At the same time, mechanisms for accountability and redress are equally important in the regulatory framework. Robust mechanisms should be created to hold AI actors accountable for errors in algorithms or harm incurred by AI systems. Providing routes of redress to individuals harmed by adverse outcomes, along with a system that values fairness and transparency, emphasizes user rights and addresses unintended consequences of AI decision-making. Adhering to ethical and legal norms in the AI decision-making process is another crucial element of the regulatory framework. Compliance with ethical guidelines and legal frameworks should be enforced so that AI operates in accordance with society's values and laws. The creation of frameworks to guide and supervise AI actions signals a commitment to ethical AI deployment, which enhances public trust in AI and ensures that AI contributes positively to society. This integrated concept within the legal framework for decision-making highlights the importance of well-defined limits, responsibility, adherence to ethical norms, and alignment with legal standards. The objective is to strike a balance between encouraging innovation and preventing the ethical and legal issues that evolve as AI technology changes over time.
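As a minimal sketch of how decision boundaries and redress might be operationalized, the snippet below gates automated decisions by a confidence threshold and a high-risk flag, and writes every outcome to an audit trail. The thresholds, field names, and example cases are hypothetical assumptions, not drawn from any specific regulation.

```python
# Minimal decision-gate and audit-trail sketch, assuming a scalar model
# confidence and a domain-specific "high risk" flag supplied by the caller.
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in production this would be an append-only, tamper-evident store

def decide(case_id, model_score, high_risk_domain, threshold=0.9):
    """Automate only confident, low-risk decisions; escalate the rest to a human."""
    automated = (not high_risk_domain) and model_score >= threshold
    entry = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_score": model_score,
        "high_risk_domain": high_risk_domain,
        "decision_path": "automated" if automated else "human_review",
    }
    AUDIT_LOG.append(entry)  # every decision stays traceable for redress requests
    return entry["decision_path"]

print(decide("loan-001", 0.95, high_risk_domain=False))      # automated
print(decide("diagnosis-007", 0.97, high_risk_domain=True))  # human_review
print(json.dumps(AUDIT_LOG, indent=2))
```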

D. Regulating Ownership:

In the AI ownership regulatory framework, the required comprehensive approach includes intellectual property rights, data possession, and related aspects. Resolving intellectual property rights issues, establishing who owns AI-generated outputs, and attributing authorship rights is primary. It involves defining legal frameworks that cover AI systems, datasets, and the intellectual property exploited in AI development and governance. Algorithmic transparency, explainability, and ownership also gain central importance in the broader regulatory context. Ensuring transparency and explainability in AI algorithms is critical, as users then better understand and trust AI. Concurrently, the design of effective ownership and accountability systems, assigning responsibility and liability for AI models and applications, strengthens the AI ecosystem. Legal frameworks and liability considerations are crucial here as well, highlighting the necessity for comprehensive solutions. This involves holding parties accountable for any unethical misuse of AI-based systems. Establishing and implementing regulations with clear guidelines and legal liability regarding AI ensures that ethics and responsibility are upheld within the AI environment. The regulatory approach also fosters open dialogue, collaboration, and continual examination. Continuous interaction between regulators, developers, and stakeholders aims at refining and adapting regulations. Periodic assessment and revision of regulations address the emerging problems and opportunities that become more prominent in the dynamic environment of AI development, keeping the regulatory frameworks flexible and result-oriented. This diverse and expansive regulatory approach aims to comprehensively tackle ownership problems, driving responsible AI implementation while accounting for the evolving nature of AI.

IV. ISSUES / SCENARIOS THAT ARE DRIVING THE DISCUSSIONS IN AI REGULATIONS

The debate over ethical AI regulation is being stimulated by a multitude of critical issues and situations, each corresponding to challenges inherent in the fast growth of AI. These include:

A. Bias:

Algorithmic Bias: A core problem with AI is latent bias, which can lead to discriminatory and unjust outcomes. A recent incident [20] shows facial recognition misidentifying Black people and leading to a wrongful arrest. Researchers have identified several types of bias: bias in online recruiting tools, bias in word associations, bias in online ads, bias in facial recognition technology, and bias in criminal justice algorithms [21].

Discrimination in AI Systems: Issues of inequality also arise when AI systems inadvertently introduce or aggravate existing inequities, with biased outcomes often the result. Reporting in Scientific American shows, for example, that racial bias in a major health care risk algorithm caused severe harm through wrong outcomes [22].

B. Privacy:

Protection of Personal Data: In the privacy discussion of AI, the focus is primarily on the collection, storage, and utilization of personal data. The emphasis is on the implementation of effective privacy protection mechanisms and ethical and responsible user data controls with due account of the applicable data protection rules.

Prevention of Unauthorized Access or Misuse: The discussion extends to preventing unauthorized access to personal data and reducing the risk of abuse. Regulation in this regard means setting up definitive protocols to stop data breaches and guard individuals from harm caused by unauthorized access.

C. Safety:

Safety concerns primarily revolve around ensuring the reliability of AI systems, spanning critical domains such as healthcare, transportation, military, public services, and government operations. Discussions center on developing standards and protocols to ensure the security of AI applications in these diverse sectors, minimizing the risk of errors or malfunctions that could have severe consequences.

D. Accountability:

The accountability conversation is rooted in establishing systems that hold the people who create or deploy AI responsible for how those systems behave. This involves a division of roles in which the parties involved are legally liable for upholding ethical guidelines and ensuring that artificial intelligence operates within ethical norms and standards.

E. Transparency:

Transparency is an indispensable element of the dialogue, with the emphasis on requiring algorithms and AI decision-making processes to be transparent. This is achieved by giving users and stakeholders a view of how AI systems work, the factors that determine their decisions, and the outcomes. Transparent AI enhances users' confidence in the system and improves awareness of the black-box nature of certain advanced AI systems. Above all, these exchanges reflect a common desire to work collectively on the intricate issues of AI, formulating regulatory frameworks that can address questions of bias, privacy, safety, accountability, and transparency. Stakeholders are grappling with these complexities as they seek to ensure that AI technologies are developed and deployed responsibly, in line with societal values and ethical principles.
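As a minimal sketch of per-decision transparency, the snippet below reports how much each feature contributed to a linear scoring model's output, ranked by impact. The feature names and weights are hypothetical; real systems with non-linear models would typically need model-specific or post-hoc explanation methods instead.

```python
# Minimal transparency sketch for a linear scoring model, assuming the
# weights are known and the score is a simple weighted sum of features.
WEIGHTS = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}

def explain(applicant):
    """Return the score and each feature's contribution, largest impact first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, reasons = explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 4})
print(f"score = {score:.2f}")
for feature, contribution in reasons:
    print(f"  {feature:>15}: {contribution:+.2f}")
```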

V. AI REGULATIONS ACROSS THE GLOBE:

A. Regulations in the United States

In the US, the supervision regime for AI is sector-specific and includes bodies like the Food and Drug Administration (FDA) and the Federal Aviation Administration (FAA). The FDA sets rules, standards, and ethical expectations for healthcare AI, while the FAA does the same for aviation. Through this sectoral approach, rules can be tailored to the particular characteristics and demands of each industry, each of which works with AI in different ways. Additionally, the technology is still evolving, and the extensive use of AI in daily life generates the need for federal law that is comprehensive in scope. A clearer grasp of the wider ethical and societal implications of AI is inspiring the creation of such comprehensive regulatory frameworks. The campaign to enact federal law signifies the will to unify standards, make them consistent, and jointly manage the multidisciplinary and cross-sectoral nature of AI applications. Federal legislation discussions are now gathering steam, with proposals like the Algorithmic Accountability Act (H.R. 6580) and the American AI Initiative Act poised to contribute to a more holistic structure.

B. Regulations in the European Union

In the European Union (EU), AI has been actively regulated to ensure data privacy and ethical choices. The General Data Protection Regulation (GDPR) acts as a key guiding framework, with extensive provisions on data protection and privacy when applying AI, illustrating the EU's dedication to human rights and responsible AI principles. Alongside the GDPR, the EU is about to implement the AI Act, an important step towards an EU-wide harmonized regulatory framework for AI. This legislative proposal emphasizes a risk-based approach, focusing regulation on high-risk AI applications. Conformity assessment processes are embedded at all stages of the AI life cycle, ensuring that AI systems align with established standards and ethical norms and endorsing the EU's stance on responsible and accountable AI deployment.

C. Regulations in India

India, though in the early stages of AI regulation, has demonstrated a positive attitude through measures like the National AI Strategy. This strategy is not limited to regulatory issues; it also promotes AI research and development. Along the way, ethical and regulatory hurdles are acknowledged, demonstrating an attitude of responsible AI innovation. Alongside the National AI Strategy, other noteworthy initiatives include the Personal Data Protection Bill and the draft National Data Governance Framework. The regulatory structure in India is moving toward the promotion of AI innovation and entrepreneurship, with one of the main objectives being to strike a balance between encouraging technological progress and ensuring the ethical use of AI. Because India is in the initial phase of AI regulation, its policy-making efforts are aimed at adhering to international best practices and ethical standards, creating an enabling environment for the sustainable and beneficial development of AI.

VI. ANALYSIS OF THE REGULATIONS / INSIGHTS

The regulation of Artificial Intelligence (AI) presents a multifaceted, rapidly evolving landscape with varying approaches worldwide. Understanding the current state, extracting insights, and identifying areas for improvement are vital steps toward a more cohesive global strategy. Currently, major countries shaping AI and related fields employ diverse regulatory frameworks. The US relies on sector-specific regulations, while the EU adopts a risk-based approach with the impending AI Act. India is in the nascent stages, emphasizing AI development alongside ethical considerations. Primary regulatory focuses across these nations include privacy, data collection, processing, and AI model accountability. However, the lack of harmonization creates confusion and uncertainty, hampering developers and users, particularly in regions with limited resources and capacity for implementation. International collaboration is thus imperative for crafting an effective framework. A global strategy for AI regulation necessitates cooperation among governments, international organizations, industry leaders, and civil society. Global leaders can facilitate convergence between national regulations while respecting regional nuances. Developing international standards for critical areas such as data privacy, bias mitigation, and ethical principles is essential, as AI systems often transcend borders, necessitating coordinated regulations to prevent regulatory gaps and inconsistencies.

VII. BEYOND REGULATIONS

Creating ethical Artificial Intelligence (AI) requires a multifaceted approach, with regulatory frameworks forming a cornerstone. However, other measures are equally vital:

  • Raise Awareness and Understanding: Public education campaigns are crucial to inform stakeholders about AI's capabilities, limitations, and risks. Encouraging open dialogue and collaboration among researchers, developers, policymakers, civil society, and the public is essential.
  • Promote Ethical AI Principles: Establish guidelines and principles for developing and deploying AI systems that prioritize fairness, transparency, accountability, and human-centric values. Encourage AI developers and organizations to adopt these principles as guiding frameworks for their work.
  • Encourage Responsible AI Development: Incentivize the creation of AI systems that align with ethical considerations. Support research and development efforts focused on building ethical AI technologies.
  • Foster Regulatory Compliance: Ensure that AI systems comply with existing regulations and legal frameworks relevant to ethics and safety. Advocate for the development of clear and enforceable regulations specifically tailored to address AI-related ethical concerns.
  • Provide Ethical AI Tools and Resources: Develop toolkits, guidelines, and frameworks that assist AI developers in incorporating ethical considerations into their work. Offer training and resources to help organizations implement ethical AI practices within their operations.
  • Continuously Evaluate and Adapt: Regularly review and update ethical considerations and frameworks as AI technologies evolve. Monitor the societal and ethical impacts of AI systems to identify areas for improvement and necessary adaptations.

By addressing ethics across these dimensions, we can strive towards building AI systems that align with human values, promote fairness and inclusivity, and contribute positively to society.

VIII. ROLE OF GOVERNMENT / LEGAL AUTHORITIES/ ENTERPRISES / PEOPLE IN REGULATING AI

Governing Artificial Intelligence (AI) presents a multifaceted challenge that demands a coordinated approach across various entities. While governments play a vital role, collaboration among leaders in the field, policymakers, and enforcement authorities is crucial for creating effective frameworks. These frameworks enable organizations, legal entities, and enforcement agencies to ensure compliance in AI development and application. The authors of [16] propose two areas of focus: first, the capabilities and impacts of AI systems already deployed in society, and second, the development and deployment of new AI capabilities; they also discuss concrete examples in their article. European nations have published AI regulations, and an analysis of the EU AI Act was recently published by Stanford HAI [19]. Policymakers play a key role in translating broader governmental frameworks into concrete regulations. As AI seeps into our lives, policymakers and regulators are navigating the challenges of regulating the technology, aiming to strike a balance between encouraging its benefits and mitigating potential risks. Countries have approached this task differently, mirroring their unique legal systems, cultures, and traditions, and policies are advancing at a fast pace, as noted in [17]. Enterprises play a crucial role in protecting the data used in AI training models; companies like EY are coming forward and sharing their roles and responsibilities in governing AI responsibly [18]. The public plays a vital role in ensuring that AI technologies are developed and used in line with societal values and ethical principles. By involving the public in the regulatory process, we can address concerns about safety, privacy, bias, and accountability in AI systems. This participation also leads to more inclusive, transparent, and responsible development and deployment of AI systems.

IX. ROADMAP, CHALLENGES, AND MITIGATION STRATEGIES

A. Developing Your AI Adoption Roadmap

Crafting an AI adoption roadmap is essential for businesses aiming to successfully integrate artificial intelligence into their processes. Here is a concise guide to developing your AI adoption roadmap through the stages of strategic planning, implementation, and evaluation:

1. Strategic Planning:

Identify the specific business goals and objectives that AI will address. Analyze your existing resources, data, and infrastructure to determine your readiness for AI adoption. Select the most important and valuable use cases for AI implementation that align with business objectives.

2. Execution (Infrastructure and Implementation):

Formulate a detailed plan outlining the specific AI technologies, tools, and platforms you intend to adopt. Establish the necessary infrastructure, including data storage, processing, and security measures, to support your AI initiatives. Start with controlled pilot projects to test your AI solutions, refine your approach, and gather valuable insights.

3. Evaluation:

Continuously monitor the performance of your AI systems to ensure they are meeting the desired outcomes. Assess the return on investment (ROI) and business value generated by your AI implementations. Based on insights gained, refine your AI strategy, adjust use cases, and adapt your roadmap to optimize results.
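As a minimal sketch of the evaluation stage, the snippet below computes a simple ROI figure and flags when monitored model accuracy drifts past an agreed tolerance. The figures, function names, and the accuracy-drop threshold are placeholder assumptions for illustration only.

```python
# Minimal evaluation-stage sketch, assuming simple annualized benefit/cost
# figures and a single monitored accuracy metric.
def roi(annual_benefit, annual_cost):
    """Return on investment as a fraction of cost: (benefit - cost) / cost."""
    return (annual_benefit - annual_cost) / annual_cost

def needs_retraining(baseline_accuracy, current_accuracy, max_drop=0.05):
    """Flag the model when monitored accuracy falls below the agreed tolerance."""
    return (baseline_accuracy - current_accuracy) > max_drop

print(f"ROI: {roi(annual_benefit=450_000, annual_cost=300_000):.0%}")                # 50%
print("Retrain?", needs_retraining(baseline_accuracy=0.91, current_accuracy=0.84))   # True
```

Insights from checks like these feed back into the strategy, prompting adjustments to use cases and the roadmap itself.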




B. Technology Adoption Roadmap for Implementing AI

To successfully integrate AI into an organization, a well-defined technology adoption roadmap is essential. This roadmap should act as a guiding framework, outlining a phased approach for the integration of AI-based technologies. By creating such a roadmap, the organization can enhance understanding, preparation, and effective implementation of AI across various operational areas.  Here are some additional points to consider when creating a technology adoption roadmap for AI integration:

1. Define the AI vision and goals:
  • Determine what the organization aims to achieve by integrating AI.
  • Identify how AI can enhance products, services, or operations.
2. Identify relevant AI technologies:
  • Assess which AI technologies offer the greatest value.
  • Understand the risks and limitations associated with these technologies.
3. Develop a phased plan for AI integration:
  • Outline the necessary steps for integrating AI into operations, including data collection, model development, and deployment.
4. Allocate resources:
  • Ensure adequate allocation of financial, human, and infrastructural resources to support AI integration efforts.
5. Monitor and evaluate progress:
  • Track progress against the roadmap and make adjustments as needed to stay aligned with goals and objectives.



C. Challenges

AI systems require extensive amounts of high-quality data to train and operate effectively. The lack of sufficient and representative data can result in biased outcomes and inaccurate results. Organizations need to prioritize ensuring access to relevant and comprehensive data. This involves addressing data silos, privacy concerns, and data-sharing regulations. Allocating resources towards robust data collection and management processes is essential to ensure the availability of high-quality data for AI systems. There is a significant shortage of skilled AI professionals, including researchers, engineers, and data scientists. This scarcity hinders the development and implementation of AI solutions. Organizations need to invest heavily in training initiatives and cultivate internal expertise to bridge the talent gap and foster a workforce capable of driving AI advancements. Moreover, the integration of AI raises ethical concerns regarding fairness, transparency, and accountability. Organizations need to address these issues to ensure the responsible and ethical use of AI technologies. Navigating the complex regulatory landscape and ensuring compliance with data privacy and security regulations is a significant challenge for AI integration. These challenges underscore the need for a comprehensive and collaborative framework for AI development and implementation. In conclusion, organizations must prioritize data quality, invest in talent development, and address ethical and regulatory considerations to fully harness the potential of AI.

D. Mitigation Strategies

Successfully implementing AI necessitates a comprehensive strategy to overcome inherent challenges:
  • Ensure high-quality, diverse data for AI model training.
  • Implement robust data collection processes.
  • Leverage representative datasets to minimize bias.
  • Establish mechanisms for secure data access and management.
  • Address the shortage of skilled AI professionals.
  • Invest in talent acquisition strategies.
  • Promote the development of in-house AI expertise.

X. CONCLUSION

Not only in the United States but also globally, there is widespread discussion on ensuring AI adheres to regulations. This presents a challenge, as current laws were not designed with AI in mind. The American Bar Association notes the difficulty for companies, lawyers, and courts in comprehending the technical aspects and applying the law fairly in business disputes. AI possesses remarkable capabilities, such as aiding in early cancer detection or combating climate change. However, it can also be exploited for nefarious purposes, assisting criminals or scammers. Responsibility lies with the individuals and companies that utilize AI, as the technology is not inherently at fault for misuse. Lawmakers must establish clear guidelines delineating acceptable and unacceptable uses, and those involved in AI operations must adhere to these regulations consistently. The U.S. government is working on laws for AI at the national level; different agencies are involved, and some states are also making their own rules, with an emphasis on applying existing laws to AI rather than making entirely new ones. In Europe, the EU AI Act has been enacted to ensure the safety, fairness, and environmental sustainability of AI; AI systems are categorized by risk level, each with corresponding regulations. Globally, countries are developing their own AI legislation to keep pace with technological advancements. Some nations are drafting comprehensive laws, while others concentrate on specific AI applications. Additionally, there are international endeavors, such as those led by organizations like the OECD, aimed at coordinating these laws on a global scale.

REFERENCES

  1. https://iapp.org/resources/article/us-federal-ai-governance/
  2. https://www.brookings.edu/articles/the-future-of-the-world-is-intelligent-insights-from-the-world-economic-forums-ai-governance-summit/
  3. US: White House Office of Science and Technology Policy (OSTP), "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (February 2023)
  4. Buiten, M. C. (2019). Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation, 10(1), 41-59.
  5. Ellul, J., Pace, G., McCarthy, S., Sammut, T., Brockdorff, J., & Scerri, M. (2021, June). Regulating artificial intelligence: a technology regulator's perspective. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law (pp. 190-194).
  6. Reed, C. (2018). How should we regulate artificial intelligence?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170360.
  7. Wischmeyer, T., & Rademacher, T. (Eds.). (2020). Regulating artificial intelligence (Vol. 1, No. 1, pp. 307-321). Heidelberg: Springer.
  8. https://www.euaiact.com/
  9. https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives
  10. https://www.linkedin.com/pulse/developing-your-ai-adoption-roadmap-technology-implementing-rajoo-jha-tcktc/?midToken=AQHi9t1ZXieqKA&midSig=0XVP9zLnwuzH81&trk=eml-email_series_follow_newsletter_01-newsletter_content_preview-0-headline_&trkEmail=eml-email_series_follow_newsletter_01-newsletter_content_preview-0-headline_-null-6sk2h8~lszfwzzc~7-null-null&eid=6sk2h8-lszfwzzc-7&otpToken=MTYwNDFlZTcxNDJjY2ZjN2IyMjQwNGVkNDYxNmUwYjA4ZmNhZDA0MDljYTc4NjYxNzBjNDAxNmM0OTUzNWZmMWZlZDNkZjliNjVmMGViZjQ1MWJjZDU5M2RiNTkzM2VmZTdhMjE5ZTU5Yjk1ZWY3YmQ4MTA4ZiwxLDE=
  11. https://www.linkedin.com/pulse/overcoming-challenges-ai-implementation-iain-munro-m-sc-mba-2mync/
  12. https://www.reuters.com/article/factcheck-biden-transphobic-remarks/fact-check-video-does-not-show-joe-biden-making-transphobic-remarks-idUSL1N34Q1IW/
  13. https://www.cnn.com/2023/06/08/politics/desantis-campaign-video-fake-ai-image/index.html
  14. https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/
  15. https://www.techradar.com/pro/unravelling-the-threat-of-data-poisoning-to-generative-ai
  16. https://doi.org/10.48550/arXiv.2108.12427
  17. https://iapp.org/resources/article/us-federal-ai-governance/
  18. https://www.ey.com/en_us/ai/principles-for-ethical-and-responsible-ai
  19. https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement
  20. https://www.aclu.org/news/privacy-technology/wrongfully-arrested-because-face-recognition-cant-tell-black-people-apart
  21. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
  22. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/


Durga Chavali
Corresponding author

Trinity Health, Trinity Information Services 20555 Victor Parkway, Livonia, USA

Biju Baburajan
Co-author

5307 Quarter Horse Ln, Moseley, VA 23120

Ashokkumar Gurusamy
Co-author

1054 Kingston Grove Dr, Cary, NC 27519

Vinod Kumar Dhiman
Co-author

Vice President Information Technology, GAVS Technologies, Charlotte NC, USA

Siri Chandana Katari
Co-author

Dept. Computer Science & Engineering (IoT), Vasireddy Venkatadri Institute of Technology, Nambur, India

Durga Chavali, Biju Baburajan, Ashokkumar Gurusamy, Vinod Kumar Dhiman, Siri Chandana Katari, Regulating Artificial Intelligence: Developments And Challenges, Int. J. of Pharm. Sci., 2024, Vol 2, Issue 3, 1250-1261. https://doi.org/10.5281/zenodo.10898480
