By Bobby Carlton
The White House is working to address the various concerns raised by the public about the security of AI, particularly in areas such as cybersecurity and safety.
The Biden-Harris administration today unveiled several new initiatives aimed at promoting responsible artificial intelligence (AI) development and protecting individuals' safety and rights. These actions build on the administration's record of engagement in addressing the challenges the technology poses, including a comprehensive strategy to address the risks and opportunities AI presents to society.
Despite the rapid advances AI has made, it remains important to take steps to mitigate its risks. The president has said that for the technology to benefit society, people must be placed at the center of this innovation, and that companies must take the necessary steps to ensure their products are safe.
Top executives from some of the leading companies in the field of AI are set to meet with Vice President Harris and other officials to discuss the importance of fostering ethical, responsible, and trustworthy innovations. These companies include Alphabet, OpenAI, Microsoft, and Anthropic.
“I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent and privacy and licensing,” Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, said in an interview Tuesday, adding, “From what I know of tech culture, that just isn’t done.”
The meeting is part of the administration’s ongoing efforts to engage with researchers, organizations, companies, and other groups on critical issues related to AI.
According to Mitchell, the decision will ultimately rest with governments, which will have to decide whether to discard work already done with the technology. Doing so would significantly disrupt companies' operations, as they would have to start over.
This initiative builds on the considerable steps the Biden-Harris administration has already taken to promote responsible AI development, including the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.
In February, President Biden signed an executive order aimed at preventing federal agencies from discriminating against individuals using new technologies, such as AI. It also ordered them to investigate and eliminate bias in the design and use of such innovations.
Following the signing of the executive order, several federal agencies, including the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division, issued a joint statement to highlight their commitment to protecting American citizens from the harmful effects of AI.
“The Blueprint for an AI Bill of Rights is for everyone who interacts daily with these powerful technologies — and every person whose life has been altered by unaccountable algorithms,” said Alondra Nelson, OSTP’s deputy director for science and society, in a release. “The practices laid out in the Blueprint for an AI Bill of Rights aren’t just aspirational; they are achievable and urgently necessary to build technologies and a society that works for all of us.”
To address public concerns about AI security, particularly cybersecurity and safety, the White House has also partnered with some of the country’s leading cybersecurity experts to ensure that companies have access to the best practices related to AI.
What Does This Mean?
In one of its latest steps to promote responsible AI research and development, the administration announced that the National Science Foundation is investing $140 million to establish seven new AI research institutes. This investment brings the total number of institutes to 25 across the country and extends the network of participating organizations into nearly every state.
The Blueprint for an AI Bill of Rights, meanwhile, lays out five principles:
- Safe and Effective Systems: As its name suggests, this principle aims to protect individuals against unsafe or ineffective systems.
- Algorithmic Discrimination Protections: This concept would shield against discrimination by algorithms and ensure that systems are used and designed equitably.
- Data Privacy: This addresses abusive data practices through built-in protections and provides users with information about how others use their data.
- Notice and Explanation: Alerts individuals when an automated system is being used and how it impacts them.
- Alternative Options: This focuses on individuals opting out, where appropriate, and having access to someone who can quickly consider and remedy problems they might encounter during this process.
These new institutes are designed to catalyze collaborations across academic institutions, government agencies, and industry to develop transformative AI technologies that are ethical, responsible, and trustworthy and that serve the public good. In addition to promoting responsible innovation, the institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce.
These new institutes will help accelerate the development of AI technology by focusing on key areas such as agriculture, cybersecurity, education, and climate change.
The administration also announced that some of the world’s leading AI developers will participate in a public evaluation of their systems at the DEF CON 31 conference, to ensure that they are following proper disclosure standards.
The evaluation, carried out by thousands of experts and community partners, will allow the government and the private sector to assess the effects of these AI models on the public and society, and to identify areas where operations can be improved. Such independent assessment is a crucial part of any evaluation, as it can surface potential issues and provide valuable information to both the public and researchers about AI’s potential impacts.
To ensure that the US government leads the way in addressing concerns about the use of AI, the Office of Management and Budget is preparing to release draft policy guidance for federal agencies and departments on managing the risks and opportunities associated with the technology.
The goal of the guidance is to help federal agencies and departments make informed decisions about using AI and develop effective strategies and procedures to manage it. The draft will be released for public comment this summer so that stakeholder groups can participate in shaping the final version.
These guidelines could be used to help tackle different forms of discrimination and harm associated with AI technology, according to White House Domestic Policy Adviser Susan Rice. “Taken together, these actions will help tackle algorithmic discrimination and address the harms of automated systems on underserved communities,” said Rice.