Department of Defense publishes AI ethical guidelines for technology contractors

The purpose of the guidelines is to ensure that technology contractors adhere to the Department of Defense's existing ethical principles for AI, says Goodman. The Department of Defense announced these principles last year, after a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople formed in 2016 to bring the spark of Silicon Valley to the U.S. military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and current members include Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory.

Still, some critics question whether the work promises meaningful reform.

During the study, the board consulted a range of experts, including vocal critics of the military's use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.

Whittaker, who is now faculty director at New York University's AI Now Institute, was not available for comment. But according to institute spokesperson Courtney Holsworth, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. "She was never consulted in any meaningful way," Holsworth says. "Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a particular outcome has broad acceptance from relevant stakeholders."

If the Department of Defense's guidelines lack broad buy-in, can they still help build trust? "There will be people who will never be satisfied with any set of ethical codes the DoD produces, because they find the idea paradoxical," Goodman says. "It's important to be realistic about what the guidelines can and cannot do."

For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that the regulations governing such technologies are decided further up the chain of command. The purpose of the guidelines is to make it easier to build artificial intelligence that complies with those regulations, and part of that process is making explicit the concerns that third-party developers have. "A valid application of these guidelines is deciding not to pursue a particular system," says DIU's Jared Dunnmon. "You may decide that it's not a good idea."
