How Biden's executive order will impact the future of AI

President Joe Biden has issued an executive order that directly regulates artificial intelligence. The new order will establish new standards for AI safety and security.

How Biden's executive order will impact the future of AI

INCREASE YOUR SALES WITH NGN1,000 TODAY!

Advertise on doacWeb

WhatsApp: 09031633831

To reach more people from NGN1,000 now!

INCREASE YOUR SALES WITH NGN1,000 TODAY!

Advertise on doacWeb

WhatsApp: 09031633831

To reach more people from NGN1,000 now!

INCREASE YOUR SALES WITH NGN1,000 TODAY!

Advertise on doacWeb

WhatsApp: 09031633831

To reach more people from NGN1,000 now!

Artificial intelligence (AI) is a powerful technology that can bring many benefits to society but also poses many challenges and risks. To ensure that the United States leads the way in seizing the opportunity and managing the risks of AI, President Biden issued a landmark executive order on October 30, 2023.

The Biden administration is using the Defense Production Act, a 75-year-old law that gives the White House wide authority to regulate industries related to national security, to compel companies to tell the federal government about potential national security risks related to their AI work.

This is the first executive order from the federal government that directly regulates AI, and it follows the voluntary commitments of 15 major AI companies, such as Google, Microsoft and OpenAI.

The executive order aims to accomplish four major things. It establishes new standards for AI safety and security; protects Americans’ privacy, civil rights and consumer rights; supports workers and innovation; and advances American leadership worldwide. While those focuses are addressed, many critics say it is watered down and likely not enough to install much-needed safety rails for this rapidly progressing technology that is capable of outperforming many human minds.

CLICK TO GET KURT’S FREE CYBERGUY NEWSLETTER WITH SECURITY ALERTS, QUICK VIDEO TIPS, TECH REVIEWS, AND EASY HOW-TO’S TO MAKE YOU SMARTER

MORE: WHY IS GOOGLE'S AI CHATBOT DODGING ISRAEL-HAMAS CONFLICT QUESTIONS? 

One of the key components of the executive order is to create new standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy.

The executive order requires developers of the most powerful AI systems, such as foundation models that pose a serious risk to national security or public health and safety, to share their safety test results and other critical information with the U.S. government before making them public. This will allow the government to assess the potential risks and benefits of these systems and prevent any harmful or malicious use.

The executive order also directs the National Institute of Standards and Technology (NIST) to set rigorous standards for extensive red-team testing to ensure safety before public release. Red-team testing is a method of evaluating the security and robustness of a system by simulating attacks from adversaries.

CHATGPT CHIEF WARNS OF SOME 'SUPERHUMAN' SKILLS AI COULD DEVELOP

The Department of Homeland Security (DHS) will apply these standards to critical infrastructure sectors and establish the AI Safety and Security Board.

The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.

Additionally, the executive order aims to protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.

Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

These actions are the most significant steps ever taken by any government to advance the field of AI safety and security. They will help ensure that AI systems are aligned with human values and will not cause harm or damage to people or the environment.

MORE: PREVENTING MASS SHOOTINGS WITH A DETECTION: NAVY SEALS-INSPIRED INTERVENTION 

Another important aspect of the executive order is to protect Americans' privacy, civil rights and consumer rights in the age of AI.

The executive order creates guidelines that agencies can use to evaluate privacy techniques used in AI, such as differential privacy or federated learning. These techniques are designed to protect sensitive or personal data from unauthorized access or disclosure while still allowing useful analysis or learning.

The executive order also advances equity and civil rights by providing guidance to landlords and federal contractors to help avoid AI algorithms furthering discrimination, such as in housing or employment decisions. It also creates best practices on the appropriate role of AI in the justice system, such as when it is used in sentencing, risk assessments or crime forecasting. These actions will help prevent bias and unfairness in AI applications that affect people’s lives and opportunities.

Furthermore, the executive order protects consumers by directing the Department of Health and Human Services (HHS) to create a program to evaluate potentially harmful AI-related health care practices, such as misdiagnosis or over-treatment. It also creates resources on how educators can responsibly use AI tools, such as personalized learning or adaptive testing. These measures will help ensure that AI systems are used in ways that benefit people’s health, education and well-being. 

EXPERT WARNS BIDEN'S AI ORDER HAS 'WRONG PRIORITIES' DESPITE SOME POSITIVE REVIEWS

MORE: THIS DATING APP USES AI TO FIND YOUR SOUL MATE BY YOUR FACE 

The executive order also supports workers and innovation in the AI sector to help foster a vibrant and diverse AI ecosystem that drives scientific discovery and economic prosperity.

The executive order produces a report on the potential labor market implications of AI and studies the ways the federal government could support workers who are affected by a disruption to the labor market. This will help prepare workers for the changing nature of work and provide them with opportunities for reskilling or upskilling.

The executive order also promotes innovation and competition by directing agencies to increase their investments in AI research and development (R&D), especially in areas that have a high potential for social impact or economic growth. It also encourages agencies to collaborate with industry, academia, civil society and international partners on advancing responsible AI innovation.

MORE: META CONFESSES IT'S USING WHAT YOU POST TO TRAIN ITS AI 

Finally, the executive order advances American leadership around the world by directing agencies to engage with allies and partners in developing common norms and principles for responsible AI use.

It also directs agencies to promote human rights and democratic values in their AI-related activities and to oppose any attempts by authoritarian regimes to misuse AI for repression or surveillance.

These actions will help ensure that the United States remains a global leader in shaping the future of AI in a way that reflects its values and interests.

MORE: HOW TO USE AI TO HELP YOU GET A BETTER JOB INSTEAD OF IT STEALING ONE 

The order omits some rules that have been part of this year’s public debates. For instance, there is no regulation for licensing the most advanced models, a proposal endorsed by OpenAI CEO Sam Altman, and there are no restrictions on the most risky uses of the technology.

Also, the order does not compel the release of details about training data and model size, which many experts and critics argue is essential for understanding the technology and anticipating its potential harms.

In addition, there is no guidance on how intellectual property law will apply to works created with or by AI — that is now left to courts to decide.

GET MORE OF MY SECURITY ALERTS, QUICK TIPS & EASY VIDEO TUTORIALS WITH THE FREE CYBERGUY NEWSLETTER - CLICK HERE

Biden’s executive order on AI is a step in the right direction, even though it might not be enough or permanent. Some argue that the executive order is not legally binding, and it can be changed or revoked by future presidents.

They also suggest that more specific and enforceable regulations are needed to address the complex and evolving challenges and opportunities of AI. Without the executive order, AI could still be regulated by existing laws and voluntary standards, but they might not be sufficient or consistent enough to ensure that AI is responsible, ethical, and beneficial for all.

Biden's executive order on AI is a significant government action that hopefully will have a positive impact on the future of AI. It attempts to ensure that AI systems are safe, secure, trustworthy and beneficial for all Americans and the world but ignored core advice from AI leaders like OpenAI’s CEO Sam Altman. It leaves a number of serious unresolved questions to be answered. 

How do you feel about the Biden executive order? Do you think it will help the U.S. lead the way in AI innovation and safety? Let us know by writing us at Cyberguy.com/Contact.

For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter

Ask Kurt a question or let us know what stories you'd like us to cover

Answers to the most asked CyberGuy questions:

What is the best way to protect your Mac, Windows, iPhone and Android devices from getting hacked?

What is the best way to stay private, secure and anonymous while browsing the web?

How can I get rid of robocalls with apps and data removal services?

Copyright 2023 CyberGuy.com. All rights reserved.

What's Your Reaction?

like

dislike

love

funny

angry

sad

wow