
Want ethical AI? Hand the keys to middle schoolers.

When they returned to the Zoom matrix of digital faces and told one another about their constructions, they realized something: Each of them had made a slightly different sandwich, favoring the characteristics they held dear. Not necessarily good, not necessarily bad, but definitely not neutral. Their sandwiches were biased. Because the kids themselves were biased, and they had built the recipe.

The activity was called Best PB&J Algorithm, and Zhang and more than 30 other Boston-area kids between the ages of 10 and 15 were embarking on a two-week initiation into artificial intelligence—the ability of machines to display smarts typically associated with the human brain. Over the course of 18 lessons, they would focus on the ethics embedded in the algorithms that snake through their lives, influencing their entertainment, their social lives, and, to a large degree, their view of the world. Also, in this case, their sandwiches.

“Everybody’s version of ‘best’ is different,” says Daniella DiPaola, a graduate student at Massachusetts Institute of Technology who helped develop the series of lessons, which is called Everyday AI. “Some can be the most sugary, or they’re optimizing for an allergy, or they don’t want crust.” Zhang put her food in the oven for a warm snack. A parent’s code might take cost into account.

A pricey PB&J is low on the world’s list of concerns. But given a familiar, nutrient-rich example, the campers could squint at bias and discern how it might creep into other algorithms. Take, for example, facial recognition software, which Boston banned in 2020: This code, which the city’s police department potentially could have deployed, matches anyone caught on camera to databases of known faces. But such software in general is notoriously inaccurate at identifying people of color, and it performs worse on women’s faces than on men’s; both failings lead to false matches. A 2019 study by the National Institute of Standards and Technology evaluated 189 algorithms from 99 developers on images of 8.49 million people worldwide. The report found that false positives were uniformly more common for women and up to 100 times more likely among West and East African and East Asian people than among Eastern Europeans, who had the lowest rate. Within a domestic database of mug shots, the rate was highest for American Indians and elevated for Black and Asian populations.

The kids’ algorithms showed how preference creeps in, even in benign ways. “Our values are embedded in our peanut butter and jelly sandwiches,” DiPaola says.
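To make that point concrete, here is a minimal sketch of how an optimization criterion quietly encodes its author's values. It is not part of the MIT curriculum; the sandwich attributes, weights, and names below are invented for illustration. Change the weights and the "best" sandwich changes with them.

```python
# Hypothetical illustration: a "best sandwich" score depends entirely on
# which qualities the author of the algorithm decides to reward.

# Each candidate sandwich is described by a few simple attributes.
sandwiches = [
    {"name": "classic",   "sweetness": 6, "cost": 1.50, "has_crust": True,  "warm": False},
    {"name": "toasted",   "sweetness": 5, "cost": 1.75, "has_crust": True,  "warm": True},
    {"name": "crustless", "sweetness": 7, "cost": 2.00, "has_crust": False, "warm": False},
]

def score(sandwich, weights):
    """Return a 'best-ness' score; the weights encode the builder's values."""
    return (
        weights["sweetness"] * sandwich["sweetness"]
        - weights["cost"] * sandwich["cost"]
        + weights["no_crust"] * (0 if sandwich["has_crust"] else 1)
        + weights["warm"] * (1 if sandwich["warm"] else 0)
    )

# A kid optimizing for sugar, no crust, and warmth versus a parent optimizing for price.
kid_values    = {"sweetness": 2.0, "cost": 0.0, "no_crust": 3.0, "warm": 4.0}
parent_values = {"sweetness": 0.5, "cost": 5.0, "no_crust": 0.0, "warm": 0.0}

for values, label in [(kid_values, "kid"), (parent_values, "parent")]:
    best = max(sandwiches, key=lambda s: score(s, values))
    print(f"The {label}'s algorithm picks: {best['name']}")
```

Neither answer is wrong, but neither is neutral either; the ranking simply reflects whoever wrote the scoring function.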

The camp doesn’t aim to depress students with the realization that AI isn’t all-knowing and neutral. Instead, it gives them the tools to understand, and perhaps change, the technology’s influence—as the AI creators, consumers, voters, and regulators of the future.

To accomplish that, instructors based their lessons on an initiative called DAILy (Developing AI Literacy), shaped over the past few years by MIT educators, grad students, and researchers, including DiPaola. It introduces middle schoolers to the technical, creative, and ethical implications of AI, taking them from building PB&Js to totally redesigning YouTube’s recommendation algorithm. For the project, MIT partnered with an organization called STEAM Ahead, a nonprofit whose mission is to create educational opportunities for Boston-area kids from groups traditionally underrepresented in scientific, technical, and artistic fields. They did a trial run in 2020, then repeated the curriculum in 2021 for Everyday AI, expanding the camp to include middle-school teachers. The goal is for educators across the country to be able to easily download the course and implement it.

Source: https://www.popsci.com/science/mit-camp-teaches-ethical-ai/
