AI and well-being—there's a concept
Each of us has to find our way into the investigation of AI. [Note: it will be an ongoing adventure.] Though it might seem relatively new, given the ChatGPT and generative AI media deluge, the congressional hearings, and the public conversation, lots of people have been doing some very deep thinking about AI for years…I especially appreciate the individuals, teams, and organizations thinking deeply about AI and young people.
I’ve been impressed with what the 75-year-old organization UNICEF (United Nations Children’s Fund) has to say about how to protect the rights of every child, everywhere…and in this case, with regard to artificial intelligence.
Here is a description of UNICEF in their own words on their website:
“UNICEF works in the world’s toughest places to reach the most disadvantaged children and adolescents – and to protect the rights of every child, everywhere. Across more than 190 countries and territories, we do whatever it takes to help children survive, thrive and fulfill their potential, from early childhood through adolescence. And we never give up.”
Every technology challenges our human values which is a good thing because it causes us to reflect on what these values are.
—Sherry Turkle
I’ve been rereading the document, “Policy guidance on AI for children,” produced by UNICEF’s Office of Global Insight and Policy in collaboration with several organizations including Berkman Klein Center for Internet and Society at Harvard, and the 5Rights Foundation, with funding and technical support from Finland’s Ministry of Foreign Affairs.
In this document, the authors recommend nine requirements for child-centered AI.
1. Support children’s development and well-being
2. Ensure inclusion of and for children
3. Prioritize fairness and non-discrimination for children
4. Protect children’s data and privacy
5. Ensure safety for children
6. Provide transparency, explainability, and accountability for children
7. Empower governments and businesses with knowledge of AI and children’s rights
8. Prepare children for present and future developments in AI
9. Create an enabling environment
Often I see people begin with requirements 4 and 5, which are certainly critical. But the choice to start with children’s development and well-being is why I think this document is so essential to read.
Introducing requirement 1 on page 32 of the policy doc, the authors write: “When applied appropriately, AI systems can support the realization of every child’s right to develop into adulthood and contribute to his or her well-being, which involves being healthy and flourishing across mental, physical, social and environmental spheres of life.”
The section continues with a specific set of requirements for #1, including guidance on metrics:
Integrate metrics and processes to support children’s well-being in the use of AI.(91) Since children will increasingly spend a large part of their lives interacting with or being impacted by AI systems, developers of AI systems should tie their designs to well-being frameworks and metrics – ideally ones focused on and tested with children specifically(92) – and adopt some measure of improved child well-being as a primary success criterion for system quality. Such a framework must integrate a holistic understanding of children’s experiences, and should include material, physical, psychological and social factors, among others. Governments, policymakers, businesses and developers should work with child well-being experts to identify appropriate metrics and indicators, and design processes that account for the changes of children's well-being. This includes efforts towards increasing awareness of the importance of well-being, and developing processes for integrating well-being considerations into design parameters, data collection, decision-making, roles and responsibilities, and risk management.
Footnotes 91 and 92 lead to several important frameworks on well-being including OECD’s child well-being measurement framework.
What’s especially great about this work: UNICEF consulted with ~250 adolescents in workshops held in Brazil, Chile, South Africa, Sweden, and the U.S. True to their own set of requirements, these 250 youth voices influenced and helped shape the policy document.
In addition, they published a separate 30-page report, “Adolescent Perspectives on Artificial Intelligence,” in February 2021. It offers a high-level summary of the workshops and reports on youth responses to several questions about their thoughts on, and relationships to, AI. Make sure to read this in tandem with the policy doc—they are a set.
[Note: In “Policy guidance on AI for children,” the authors include a link on page 32 to a Google doc for: Workshop Manual: Child and Youth Consultations on AI—A child consultation methodology with accompanying materials, developed by the Young and Resilient Research Centre at Western Sydney University, in partnership with UNICEF, used for the AI for Children project. The templates can be tailored to suit various local contexts.]
I think of this document as a central hub for exploring AI and youth, and for guiding your own conversations and investigations with young people. I recommend not only reading the document, but taking the time to explore all the links included. This important work helps to nudge us out of our personal/local AI bubbles to see and hear a diverse group of young people in their own words.
UNICEF released V1 in September 2020 and gathered feedback. V2, released in November 2021, incorporates that feedback and adds resources; it is full of them. This is not a read-once-and-move-on document. Bookmark it and come back to it. It will surprise you with what you can find if you invest the time in looking.
Of note: “... in March 2021 the Government of Scotland launched its national AI strategy and announced its formal adoption of the policy guidance. It is the first country to do so and signals the validity and growing recognition of the guidance.”
From the MDL archives
Citizen Science—hands-on & participatory
Elsewhere on Substack
Hands down, the best post about Barbie and Oppenheimer month—by Anya Kamenetz on The Golden Hour. Disclaimer: I haven’t seen either film.
In the Wilds
I really appreciated this opinion piece by writer Rebecca Solnit in the Guardian’s Climate Crisis reporting: We can’t afford to be climate doomers
Related to the piece above from the archives on citizen science and Scistarter.org: Citizen science motivates Girl Scouts to tackle problems
"We've found that after participating in citizen science, students do not just learn more science content or the process of science, or have better attitudes or trust in science. It can be the basis for motivating action." —Caren Cooper, study coauthor
Thanks for reading. Be well.