
05 Jun, 2024 - About 3 minutes

Risks of AI

Intro

AI risks and the aggressive strategies Big Tech companies are following

Context

Mentions of the risks associated with AI development are nothing new. What concerns me the most are the strategies these major companies are using to silence experts in the field (e.g., here), experts who are resigning from these companies not for better opportunities, but over differences on the ethics of how business is done and how the field is advancing.

As humanity advances toward AGI (Artificial General Intelligence), it is important to listen to the experts in the field and understand that the potential risks may surpass the value it would bring us all, up to and including a potential risk of extinction.

When one of these companies (NVIDIA) is worth more than all German stocks combined, it makes me wonder how the general public can stay aware of the advancements and what level of oversight is actually in place.

The following site is signed by several experts in the field who agree with it. Pay attention to the list of signatories, as this is not fiction.

Malicious use

People could intentionally harness powerful AIs to cause widespread harm. AI could be used to engineer new pandemics or for propaganda, censorship, and surveillance, or released to autonomously pursue harmful goals. To reduce these risks, we suggest improving biosecurity, restricting access to dangerous AI models, and holding AI developers liable for harms.

AI race

Competition could push nations and corporations to rush AI development, relinquishing control to these systems. Conflicts could spiral out of control with autonomous weapons and AI-enabled cyberwarfare. Corporations will face incentives to automate human labor, potentially leading to mass unemployment and dependence on AI systems. As AI systems proliferate, evolutionary dynamics suggest they will become harder to control. We recommend safety regulations, international coordination, and public control of general-purpose AIs.

Organizational risks

There are risks that organizations developing advanced AI cause catastrophic accidents, particularly if they prioritize profits over safety. AIs could be accidentally leaked to the public or stolen by malicious actors, and organizations could fail to properly invest in safety research. We suggest fostering a safety-oriented organizational culture and implementing rigorous audits, multi-layered risk defenses, and state-of-the-art information security.

Rogue AIs

We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe. We also recommend advancing AI safety research in areas such as adversarial robustness, model honesty, transparency, and removing undesired capabilities.

Right to Warn about AI

Although the risks are identified, AI companies possess substantial non-public information about the capabilities and limitations of their systems and the adequacy of their protective measures, and they are preventing ex-employees from disclosing that information through NDAs and other aggressive tactics.

The following site, https://righttowarn.ai, is signed by former employees of such companies, in the hope that these companies change their policies and commit to principles that would guarantee:

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.

Kudos to all of you who are making this effort to inform the public.

