Regulations for Artificial Intelligence: Necessity or Just Control?

Washington has secured "voluntary commitments" from seven major technology companies, under guidelines set by the White House, to ensure the safety of the AI products they offer before releasing them to users. These commitments are a step toward regulating AI systems, aiming to balance their advantages and valuable services against their significant risks, which have been described as "threatening human civilization and security if left unchecked." Some commitments require third-party oversight of how commercial AI systems operate, though it remains unclear who will review the technology or hold companies accountable.

The list of companies that announced their voluntary commitments includes "Amazon," "Google," "Meta," "Microsoft," and "OpenAI," the developer of the "ChatGPT" application, as well as the two emerging firms "Anthropic" and "Inflection AI."

According to the White House statement, the companies committed to the following:

- Conducting security tests, carried out in part by independent experts, to guard against major risks in areas such as biosecurity and cybersecurity.

- Establishing methods for reporting vulnerabilities and risks in their systems.

- Using digital watermarks to help distinguish between real images and those generated by AI.

The voluntary commitments aim to "facilitate risk management until Congress can be persuaded to enact laws regulating the technology." Although the White House did not specify who will monitor these systems, a unit led by the White House and technology experts is expected to perform this task in the short term, with a dedicated body anticipated to be established for this purpose in the future.

Earlier, the Chinese government established a set of rules governing the operation of AI technologies, which will take effect, with modifications, on August 15. Notably, among the rules drafted in April, AI providers are required to conduct security reviews and register their algorithms with the government.