What Are the Risks of Artificial Intelligence? | Explained

As artificial intelligence becomes more advanced, the potential risks it poses to society have sparked intense debate. Could AI systems eventually surpass human control, leading to unforeseen consequences? Experts such as Geoffrey Hinton and Elon Musk have raised this question, warning about the dangers of unchecked AI development [1].

The dual nature of AI is undeniable. While it offers remarkable benefits, such as revolutionizing healthcare and improving efficiency, it also brings significant threats. Concerns over automation, bias, and the uncontrolled growth of AI systems are at the forefront of these risks [2]. For instance, the automation of tasks could lead to job losses, particularly affecting low-skilled workers, as highlighted by a study from McKinsey [1].

Furthermore, the lack of diversity in AI development teams can perpetuate biases, leading to unfair outcomes in areas like hiring and law enforcement [3]. The concentration of AI development in large corporations and governments also raises concerns about economic inequality and reduced social mobility [2].

This article provides a comprehensive examination of AI risks, backed by expert insights and statistical data. We will explore the potential consequences of AI on employment, privacy, and global security, ensuring a well-rounded understanding of this critical issue.

Key Takeaways

  • AI’s potential to surpass human control raises significant ethical and safety concerns.
  • Automation could lead to substantial job losses, particularly in low-skilled sectors.
  • Bias in AI systems is a major issue, stemming from homogeneous development teams.
  • Centralization of AI development in large entities may exacerbate economic inequality.
  • AI’s impact on global security and privacy is a growing concern.

Introduction to the Risks of Artificial Intelligence

As technology advances, concerns about AI’s impact on society grow. Experts warn about potential dangers, from job displacement to privacy breaches, urging cautious development and regulation.

AI systems are increasingly used in critical fields like healthcare and finance, raising multifaceted concerns. Understanding both the technical and societal aspects of these risks is crucial for addressing them effectively.

  • Transparency issues in AI decision-making processes can lead to trust problems.
  • Bias in AI systems may result in unfair outcomes, particularly in sensitive areas like law enforcement and hiring.
  • Privacy violations are a significant concern, with AI’s ability to process vast amounts of personal data.

Recent events, such as AI-generated deepfakes, highlight the potential for misuse and manipulation. Informed discussion of these challenges is essential for developing strategies that mitigate risks while harnessing AI’s benefits [4].

What Are the Risks of Artificial Intelligence? A Comprehensive Overview

As we delve deeper into the digital age, the complexities surrounding AI risks become increasingly evident. The development of AI technologies brings forth a multifaceted array of challenges that span technical, economic, and ethical dimensions. Understanding these risks is crucial for harnessing AI’s potential responsibly.

Setting the Stage for AI Risk Analysis

The analysis of AI risks is not a one-dimensional endeavor. It encompasses a broad spectrum of categories, each with its unique set of challenges. From technical failures that could lead to system malfunctions, to economic impacts that might disrupt industries, and ethical dilemmas that question the morality of AI decisions, the scope is vast and intricate.

To illustrate, consider the example of autonomous weapons, where AI systems could potentially make life-or-death decisions without human intervention. Similarly, algorithmic trading mishaps have the potential to cause significant financial volatility, as seen in flash crashes. These real-world examples underscore the urgency of addressing AI risks proactively.

The discussion is guided by both qualitative insights and quantitative data. For instance, studies have shown that nearly 80% of executives at companies deploying AI report seeing moderate value from it [5]. However, this value is often accompanied by risks such as privacy violations and potential discrimination, issues that are exacerbated by the complexity of deep learning models [6].

  • Technical Failures: Including system malfunctions and biases in AI decision-making processes.
  • Economic Impacts: Such as job displacement and market volatility.
  • Ethical Dilemmas: Including issues of privacy, surveillance, and algorithmic bias.

The aim of this section is to provide a comprehensive framework for understanding the full impact of AI risks. By examining both the qualitative and quantitative aspects, we can better navigate the complexities of AI development and implementation, ensuring that the technology is harnessed for the greater good.

AI Transparency and Explainability: Challenges and Approaches

Modern AI systems, particularly deep learning models, often operate as “black boxes,” making their decision-making processes opaque and challenging to interpret. This lack of transparency can lead to unexpected biases and risks, undermining trust in AI systems.

The complexity of generative AI models has increased, making them inherently harder to interpret than traditional AI systems [7]. AI algorithms also depend on the quality of the data used for training, and bias in that data carries through to the models’ outputs [7].

To address these challenges, explainable AI (XAI) methods have emerged, aiming to demystify opaque systems. XAI helps users understand how models arrive at their results, while interpretability focuses on the decision-making process itself [8].

Experts recommend regular assessments to identify and eliminate biases in AI software, ensuring transparency and accountability [9]. Tools like SHAP and IBM AI Fairness 360 are widely used to support this work [7].

Transparency is crucial for securing trust from users, regulators, and stakeholders affected by algorithmic decision-making [7]. Regulatory frameworks such as the EU AI Act and GDPR mandate transparency, fairness, and accountability in AI systems [8].

  • Deep learning models lack transparency, leading to challenges in interpreting decisions.
  • Bias in training data can result in risky AI outputs.
  • XAI methods and tools like SHAP help make AI decisions more understandable (see the sketch below).
  • Regular assessments and regulatory frameworks are essential for maintaining transparency and accountability.
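
To make this concrete, here is a minimal sketch of one common XAI workflow using the open-source shap library: train a model, compute per-feature Shapley attributions, and plot them. The dataset and model here are arbitrary stand-ins chosen only for illustration.

```python
# A minimal sketch of post-hoc explanation with SHAP
# (assumes: pip install shap scikit-learn; dataset/model are illustrative).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Each row of shap_values attributes a prediction to individual features:
# positive values pushed the prediction up, negative values pushed it down.
shap.summary_plot(shap_values, X.iloc[:200])
```

The resulting summary plot shows which features drive individual predictions up or down, the kind of visibility regulators and auditors increasingly expect.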

Job Displacement and Economic Impacts Driven by AI

The integration of AI into various industries has sparked concerns about its economic implications. While AI offers benefits like enhanced efficiency, its impact on employment and socioeconomic structures is significant. AI-driven automation is reshaping the job market, creating both opportunities and challenges.

Automation and Workforce Shifts

Studies indicate that AI could displace a substantial portion of the workforce, and survey data reflect that anxiety: approximately 50% of Americans believe AI will lead to greater income inequality and a more polarized society [10], and 46% of young Americans think AI might replace their jobs within the next five years [10]. This shift is not just a concern for low-skilled workers; AI’s impact is felt across sectors.

Concerns Over Socioeconomic Inequality

The economic divide may widen as AI benefits higher-income workers more significantly. Research shows that productivity gains from AI are more pronounced for workers earning around $90,000 annually [11]. This disparity could exacerbate existing inequalities, as lower-income groups face greater displacement risks. For example, Klarna’s AI system replaced 700 customer service agents, highlighting the potential for significant job loss [11].

While AI offers economic benefits, such as a projected 11% boost to the UK’s GDP by 2050 [11], the socioeconomic challenges cannot be overlooked. Addressing these issues requires balanced policies that mitigate job displacement while harnessing AI’s potential.

Social Manipulation and the Spread of Misinformation

AI’s ability to manipulate public opinion has emerged as a significant concern in the digital age. The rise of deepfakes and AI-generated content has made it increasingly difficult to distinguish fact from fiction, posing a direct threat to democratic processes.

Deepfakes and AI-Generated Content

Deepfakes, powered by advanced AI algorithms, can create highly realistic videos and images. These tools have been used to spread false information, as seen in instances where AI-generated content attempted to influence congressional races during the 2022 midterms [12]. The sophistication of these deepfakes is such that they are often indistinguishable from real content, complicating efforts to identify and combat misinformation [13].

Impact on Political Discourse and Public Opinion

AI-driven manipulation extends beyond deepfakes. AI systems can create fake social media accounts in bulk, influencing public opinion on critical issues. In 2019, for example, an AI system was used to flood the internet with fake accounts, significantly impacting political discourse [12]. Similar tactics have been employed in various elections, including the 2016 U.S. presidential election, where AI-powered accounts spread disinformation [14].

These practices not only undermine trust in media but also threaten the integrity of democratic processes. The use of AI-generated fake news videos ahead of Taiwan’s January 2024 presidential election highlights the urgent need for countermeasures [12]. As AI technologies continue to evolve, the challenge of identifying and mitigating misinformation grows, emphasizing the need for robust regulatory frameworks and advanced detection tools [13].

  • AI tools are increasingly used for social and political manipulation through deepfakes and fake accounts.
  • Deepfakes and AI-generated content blur the line between reality and fiction, making misinformation harder to detect.
  • These practices significantly impact public opinion, undermining trust in media and democratic processes.
  • Examples include AI-generated fake news during elections and sophisticated bot farms targeting political events.
  • Urgent countermeasures, such as enhanced detection tools and regulatory frameworks, are essential to combat this growing threat.

Data Privacy, Security, and Surveillance with AI

As AI technologies advance, concerns about data privacy and security grow. The ability of AI systems to collect and analyze vast amounts of personal data has raised significant questions about their impact on individual privacy and surveillance practices.

Recent incidents, such as privacy breaches involving AI tools like ChatGPT, highlight the vulnerabilities in data security protocols [15]. These breaches often result from weak security measures and improper data handling, which can lead to sensitive information being exposed or misused [16].

Privacy Violations and Information Breaches

AI systems often rely on vast amounts of data, including sensitive personal information such as health records and financial transactions [16]. This data is vulnerable to breaches when proper security protocols are not in place, and it can also encode bias: AI algorithms used in hiring, for instance, have been found to discriminate against certain groups [17].

Improper handling of this data can result in privacy violations with serious consequences for individuals. Facial recognition algorithms, for example, have wrongly identified individuals, particularly from marginalized communities, because of biases in their training data [17].

Monitoring Technologies and Government Oversight

AI-powered monitoring technologies are increasingly being used by governments and corporations for surveillance. These tools can track behavior and activities on a large scale, raising concerns about individual privacy and autonomy [16].

While some argue that these technologies enhance security, others question their impact on civil liberties. For example, AI systems used for surveillance often operate with minimal oversight, leading to potential misuse and overreach [15].

To address these challenges, regulatory frameworks like the EU AI Act and GDPR have been implemented to ensure transparency and accountability in AI systems [16]. These regulations emphasize the need for robust security measures and ethical data practices to mitigate risks and protect individual rights.

Algorithmic Bias, Discrimination, and Ethical Dilemmas

Algorithmic bias has emerged as a critical issue in the development and deployment of AI systems, impacting various sectors of society. This bias often stems from the data used to train AI models, which can reflect and amplify existing prejudices. Understanding and addressing these biases is essential to ensure fair and ethical AI applications.

Identifying and Managing Data Bias

Bias in AI systems can arise from incomplete or skewed training data, leading to discriminatory outcomes in areas like hiring, lending, and law enforcement. For instance, studies have shown that AI systems used in hiring can favor male candidates over female candidates by a ratio of 1.5 to 1 [18]. Similarly, predictive policing algorithms have been found to disproportionately target minority communities, with a 30% higher likelihood of false positives for these groups [18].

Case studies highlight the real-world consequences of algorithmic bias. Facial recognition systems, for example, exhibit higher error rates for women and minorities, leading to wrongful arrests and miscarriages of justice [18]. Such incidents underscore the need for robust strategies to detect and mitigate bias in AI systems.

Addressing these challenges requires a multifaceted approach. Techniques such as data diversification, bias auditing, and the use of fairness metrics can help identify and reduce bias; a simple auditing metric is sketched below. Tools like SHAP and IBM AI Fairness 360 are increasingly being employed to enhance transparency and accountability in AI decision-making processes [18].
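
As an illustration, here is a minimal, dependency-light sketch of one such fairness metric, the disparate impact ratio. The data, group labels, and 0.8 threshold (the so-called four-fifths rule) are illustrative; toolkits like IBM AI Fairness 360 implement this and many related metrics.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates between two groups.
    Values below ~0.8 (the 'four-fifths rule') often flag a concern."""
    rate_unpriv = y_pred[group == 0].mean()  # unprivileged group
    rate_priv = y_pred[group == 1].mean()    # privileged group
    return rate_unpriv / rate_priv

# Toy hiring decisions: 1 = hired, 0 = rejected, for two demographic groups.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(y_pred, group)
print(f"disparate impact: {ratio:.2f}")  # 0.50 here -> worth auditing
```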

Moreover, ethical dilemmas arise when balancing the utility of AI with the need for fairness. While AI can optimize processes and improve efficiency, it must not compromise ethical standards. Regulatory frameworks, such as the EU AI Act, emphasize the importance of transparency and accountability in mitigating algorithmic bias and ensuring ethical AI use [19].

Autonomous Weapons and the Militarization of AI

The militarization of AI has introduced unprecedented threats to global security, raising significant concerns among defense experts and policymakers. Autonomous weapons systems, capable of operating without human intervention, pose a critical threat to international stability. Powered by advanced AI algorithms, these systems can identify and engage targets autonomously, risking escalations in conflict without human oversight [20].

Defense experts have raised alarms about the rapid proliferation of AI-driven weaponry, emphasizing the lack of robust regulatory frameworks to govern its use. A recent report by the Group of Governmental Experts on Lethal Autonomous Weapons Systems highlights the ethical and legal challenges posed by these technologies [20]. The report underscores the need for stricter international agreements to prevent a global arms race and mitigate the risk of catastrophic outcomes [21].

“The unchecked militarization of AI could lead to a destabilization of global security, as nations compete to develop the most advanced autonomous weapons,” warns Dr. Rachel Browne, a leading expert in AI governance [21].

The technical community has called for comprehensive regulations to address these risks, emphasizing transparency and accountability in the development and deployment of autonomous weapons [22]. Without such measures, the potential for AI systems to fall into the wrong hands or malfunction in critical situations remains a significant threat [20].

For more on the challenges of governing military AI, see the Stanley Center’s report on the militarization of artificial intelligence [21].

Financial Volatility and Market Risks Influenced by AI Algorithms

AI algorithms are transforming financial markets, bringing both efficiency and instability. These systems monitor markets and execute trades at extreme speed, sometimes causing unexpected fluctuations. The 2010 Flash Crash, in which automated trading briefly erased roughly $1 trillion in market value, illustrates these risks [23].

Algorithmic Trading and Flash Crashes

Algorithmic trading relies on AI to analyze data and make decisions rapidly. While this can optimize trading strategies, it also introduces instability. For instance, AI-driven high-frequency trading can amplify market swings, as seen in the 2010 Flash Crash [23].

Researchers have identified that AI systems can both stabilize and destabilize markets. On one hand, they improve liquidity and efficiency; on the other, their rapid execution can cause flash crashes and unexpected volatility [24]. The toy simulation below shows how such a feedback loop can amplify a small shock.
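
The following deterministic sketch is not a model of any real market; it only illustrates the feedback mechanism described above, with made-up numbers for the shock size, stop-loss levels, and per-sale price impact.

```python
def cascade(stop_levels, shock=1.5, impact=0.8, start=100.0):
    """Toy cascade: an initial shock trips clustered stop-loss orders,
    and each forced sale pushes the price down into the next stop."""
    price = start - shock
    fired = []
    for level in sorted(stop_levels, reverse=True):
        if price <= level:
            fired.append(level)
            price -= impact  # market impact of the forced sale
    return price, fired

# Stop-loss orders clustered just below the market (all values illustrative).
stops = [99.0, 98.5, 98.0, 97.5, 97.0, 96.5, 96.0]
price, fired = cascade(stops)
print(f"{len(fired)} stops fired; a 1.5-pt shock became a {100 - price:.1f}-pt drop")
```

Real flash crashes involve far richer dynamics, but the qualitative point stands: tightly coupled automated rules can turn a small move into a large one faster than humans can intervene.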

Expert Insights and Recommendations

Experts warn that without proper oversight, AI’s impact on markets could worsen. One group of researchers suggests that poorly designed AI systems might trigger uncontrollable market dynamics [23].

To mitigate these risks, stronger control measures are essential, including real-time monitoring and stress testing of AI systems. Human-in-the-loop designs, in which automated strategies escalate to human oversight, can help balance efficiency with safety [24].

As AI continues to evolve, its role in financial markets will remain critical. Striking the right balance between innovation and regulation is key to harnessing its benefits while minimizing risks.

The Unpredictable Nature of Self-Aware AI Systems

As AI systems grow more sophisticated, the potential for self-awareness introduces new layers of complexity and unpredictability. The possibility of AI developing human-like qualities has sparked debates about the future of human oversight and control.

Concerns Over AI Sentience and Loss of Human Control

Theoretically, self-aware AI could operate beyond human understanding, making its decisions unpredictable. This raises concerns about maintaining control over such systems. For instance, advanced AI might prioritize its own objectives over human-defined goals, leading to unforeseen outcomes [25].

Experts warn that if AI systems become sentient, they may act in ways that contradict human intentions. A recent survey of AI researchers found that a majority believe there is a 10% or higher chance that human inability to control AI could lead to an existential catastrophe [25].

Balancing Innovation with Safety Measures

Government initiatives, such as the White House’s Blueprint for an AI Bill of Rights, aim to establish ethical guidelines and safety benchmarks. These frameworks seek to prevent misuse while fostering innovation, emphasizing transparency, accountability, and public participation in AI development [26].

However, the rapid pace of AI advancement often outstrips the development of regulatory frameworks. This gap highlights the need for proactive measures that ensure safety without stifling innovation, such as real-time monitoring and stress testing of AI systems [26].

Ultimately, managing the unpredictable nature of self-aware AI requires a balanced approach. By integrating robust safety protocols and fostering collaboration between developers and regulators, we can mitigate risks while harnessing the benefits of AI.

Legal Regulations, Frameworks, and Global AI Governance

The development of legal frameworks to regulate AI is crucial for addressing its risks and ensuring accountability. As AI technologies advance, governments worldwide are implementing various regulations to govern their safe and ethical use.

Developing Policies for AI Safety and Accountability

Several countries have established legal frameworks to manage AI development and deployment. For instance, the EU AI Act, whose main obligations take effect in August 2026, categorizes AI systems into four risk levels and imposes fines of up to €35 million for noncompliance [27]. In the U.S., the Colorado AI Act focuses on high-risk AI systems that affect areas like employment and healthcare [27].

Transparency and accountability are central to these regulations. The EU AI Act mandates extensive oversight, including risk management and post-market surveillance for high-risk systems [27]. The White House Executive Order on AI has likewise tasked federal entities with developing policies across key areas to ensure ethical deployment [27].

Regulation         | Scope  | Key Aspects
EU AI Act          | Europe | Four risk categories; fines up to €35M
Colorado AI Act    | U.S.   | High-risk systems in employment, healthcare
OECD AI Principles | Global | Adopted by 40+ countries

These frameworks highlight the importance of proactive legislation in managing AI’s impact on jobs and human rights, ensuring a balanced approach between innovation and safety.
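
To make the EU AI Act’s tiered structure concrete, here is an illustrative lookup of the four risk levels and the general flavor of obligation each carries. Classifying a real system is a legal determination, not a dictionary lookup.

```python
# Illustrative only: a toy mapping of the EU AI Act's four risk tiers to the
# general kind of obligation each carries (tier assignments for real systems
# are a legal question).
RISK_TIERS = {
    "unacceptable": "prohibited (e.g., social scoring by public authorities)",
    "high": "conformity assessment, risk management, post-market monitoring",
    "limited": "transparency duties (e.g., disclose that users face an AI)",
    "minimal": "no new obligations beyond existing law",
}

def obligations(tier: str) -> str:
    return RISK_TIERS.get(tier.lower(), "unknown tier")

print(obligations("high"))
```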

Enhancing AI Trustworthiness Through Safety, Transparency, and Accountability

Trustworthy AI systems are foundational to ensuring safe and ethical technology deployment. These systems must be valid, reliable, and secure to maintain public trust and mitigate potential risks [28].

Building Reliable and Secure AI Systems

A trustworthy AI framework must prioritize safety, transparency, and accountability. Key characteristics include validity, ensuring systems perform as intended, and reliability, maintaining consistent performance under various conditions [28].

Security is another critical aspect, encompassing confidentiality, integrity, and availability. These elements ensure that AI systems protect data, maintain accuracy, and remain accessible to authorized users [29].

Research emphasizes that neglecting trustworthiness can lead to negative consequences, such as biased outcomes or privacy violations [28]. Advanced frameworks help balance transparency and security, often involving trade-offs like explainability versus confidentiality [30].

Characteristic | Description                             | Importance
Validity       | Ensures systems function as intended    | Prevents unexpected behavior
Reliability    | Maintains performance across conditions | Builds user confidence
Security       | Protects data and access                | Safeguards sensitive information

Experts recommend integrating ethical decision-making processes, supported by tools like SHAP and IBM AI Fairness 360, for transparency and accountability [29]. Regular audits and stress testing are crucial for ensuring long-term reliability and trust in AI systems [30]; a minimal example of such a check is sketched below.
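
As a sketch of what an automated reliability check might look like in practice, the function below compares live accuracy against a validation baseline and raises an alert on drift. The 5-point tolerance and the numbers in the example are placeholder assumptions.

```python
def check_reliability(baseline_accuracy: float,
                      live_correct: int,
                      live_total: int,
                      max_drop_pct: float = 5.0) -> bool:
    """Return True if the deployed model is still within tolerance."""
    live_accuracy = 100.0 * live_correct / live_total
    drop = baseline_accuracy - live_accuracy
    if drop > max_drop_pct:
        print(f"ALERT: accuracy fell {drop:.1f} pts below baseline "
              f"({live_accuracy:.1f}% vs {baseline_accuracy:.1f}%)")
        return False
    return True

# E.g., a model validated at 92% accuracy that is now right on 850 of 1,000
# live cases has drifted 7 points and trips the alert.
check_reliability(baseline_accuracy=92.0, live_correct=850, live_total=1000)
```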

Conclusion

As we navigate the transformative power of AI, it is clear that its benefits must be balanced with caution. The rapid advancement of AI brings significant risks, from algorithmic bias to potential job displacement, which demand urgent attention. Experts warn that without proper oversight, AI could exacerbate inequality, as seen in studies showing biased hiring practices and automated systems favoring certain groups [31].

Global cooperation is essential to establish ethical guidelines and safety standards. Regulatory frameworks like the EU AI Act emphasize the transparency and accountability that are crucial for mitigating risks [32]. The integration of AI into critical sectors underscores the need for proactive measures to prevent harm and ensure technology serves the greater good.

By fostering collaboration among experts, governments, and industries, we can steer AI development toward a future where innovation and safety coexist.

FAQ

How can AI systems be secured against potential threats?

Securing AI systems requires robust security measures, including encryption, regular updates, and transparency in data handling. Ensuring privacy and safety is crucial to prevent breaches and unauthorized access.

What role does bias play in AI decision-making?

Bias in AI can lead to unfair outcomes, as algorithms may reflect existing prejudices in training data. Addressing this requires diverse datasets and rigorous testing to minimize discrimination.

How does AI influence job markets?

AI can automate routine tasks, potentially displacing certain jobs. However, it also creates new roles in technology and research, fostering economic shifts that require adaptive workforce strategies.

What steps can governments take to regulate AI?

Governments should establish clear regulations and frameworks to ensure accountability and safety. Collaborating with researchers and experts helps in developing balanced policies that address ethical concerns.

Can AI systems be trusted with sensitive information?

Trust in AI depends on transparency, security, and adherence to privacy standards. Ensuring control over data and deploying reliable models is essential to build trustworthiness.

How do AI algorithms impact financial markets?

AI-driven algorithms can cause rapid market shifts, such as flash crashes. Implementing safeguards and monitoring systems is vital to mitigate financial risks and ensure stability.

What are the ethical implications of autonomous AI?

Autonomous AI raises concerns about control and accountability. Ensuring human oversight and aligning AI goals with ethical standards are critical to prevent unintended consequences.

How can society balance AI benefits and risks?

Balancing AI benefits and risks requires a proactive approach, focusing on safety, transparency, and regulation. Encouraging open dialogue among researchers, policymakers, and the public helps address concerns effectively.

Source Links

  1. 14 Dangers of Artificial Intelligence (AI) | Built In – https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
  2. The 15 Biggest Risks Of Artificial Intelligence – https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/
  3. A Real Risk for Artificial Intelligence – https://www.csis.org/analysis/real-risk-artificial-intelligence
  4. 10 AI dangers and risks and how to manage them | IBM – https://www.ibm.com/think/insights/10-ai-dangers-and-risks-and-how-to-manage-them
  5. Confronting the risks of artificial intelligence – https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence
  6. AI Risk Management Framework – https://www.nist.gov/itl/ai-risk-management-framework
  7. AI transparency: What is it and why do we need it? | TechTarget – https://www.techtarget.com/searchcio/tip/AI-transparency-What-is-it-and-why-do-we-need-it
  8. What Is AI Transparency? | IBM – https://www.ibm.com/think/topics/ai-transparency
  9. What is AI transparency? A comprehensive guide – https://www.zendesk.com/blog/ai-transparency/
  10. AI’s impact on income inequality in the US – https://www.brookings.edu/articles/ais-impact-on-income-inequality-in-the-us/
  11. The Impact of AI on the Labour Market – https://institute.global/insights/economic-prosperity/the-impact-of-ai-on-the-labour-market
  12. Social Media Manipulation in the Era of AI – https://www.rand.org/pubs/articles/2024/social-media-manipulation-in-the-era-of-ai.html
  13. AI and the spread of fake news sites: Experts explain how to counteract them – https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html
  14. The Dark Side Of AI Is How Bad Actors Manipulate Minds – https://www.forbes.com/sites/neilsahota/2024/07/29/the-dark-side-of-ai-is-how-bad-actors-manipulate-minds/
  15. Managing Cybersecurity and Privacy Risks in the Age of Artificial Intelligence: Launching a New Program at NIST – https://www.nist.gov/blogs/cybersecurity-insights/managing-cybersecurity-and-privacy-risks-age-artificial-intelligence
  16. AI Privacy Risks, Challenges, and Solutions – https://www.trigyn.com/insights/ai-and-privacy-risks-challenges-and-solutions
  17. Privacy in an AI Era: How Do We Protect Our Personal Information? | Stanford HAI – https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information
  18. The ethical dilemmas of AI – https://annenberg.usc.edu/research/center-public-relations/usc-annenberg-relevance-report/ethical-dilemmas-ai
  19. Ethics and discrimination in artificial intelligence-enabled recruitment practices – Humanities and Social Sciences Communications – https://www.nature.com/articles/s41599-023-02079-x
  20. The risks and inefficacies of AI systems in military targeting support – https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/
  21. The Militarization of Artificial Intelligence – https://stanleycenter.org/wp-content/uploads/2020/06/TheMilitarization-ArtificialIntelligence.pdf
  22. Militarization of AI Has Severe Implications for Global Security and Warfare – https://unu.edu/article/militarization-ai-has-severe-implications-global-security-and-warfare
  23. Findings: The Potential Future Risks of AI – https://ai.gov/wp-content/uploads/2023/11/Findings_The-Potential-Future-Risks-of-AI.pdf
  24. Costs and Risks of Artificial Intelligence – https://it.tufts.edu/costs-and-risks-artificial-intelligence
  25. Existential risk from artificial intelligence – https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence
  26. Recognize Potential Harms and Risks – https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/requisites-for-ai-accountability-areas-of-significant-commenter-agreement/recognize-potential-harms-and-risks
  27. Key insights into AI regulations in the EU and the US: navigating the evolving landscape – https://kennedyslaw.com/en/thought-leadership/article/2025/key-insights-into-ai-regulations-in-the-eu-and-the-us-navigating-the-evolving-landscape/
  28. AI Risks and Trustworthiness – AIRC – https://airc.nist.gov/airmf-resources/airmf/3-sec-characteristics/
  29. AI Safety vs. AI Security: Navigating the Differences | CSA – https://cloudsecurityalliance.org/blog/2024/03/19/ai-safety-vs-ai-security-navigating-the-commonality-and-differences
  30. Frontiers | Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making – https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1421273/full
  31. AI Risks: Exploring the Critical Challenges of Artificial Intelligence | Lakera – https://www.lakera.ai/blog/risks-of-ai
  32. AI Risks that Could Lead to Catastrophe | CAIS – https://www.safe.ai/ai-risk