Impact of GenAI on Social and Individual Well-being
🆕 The symposium program is available here.
Aims and New Challenges
The emergence of generative AI (GenAI) has created profound intersections between technology, society, and human well-being. While GenAI's potential to enhance our daily lives is immense, it also presents unique challenges. As we incorporate GenAI further into societal frameworks, the emphasis should not rest solely on technological prowess or economic benefit. It is equally crucial to address ethical considerations such as fairness, transparency, accountability, and the protection of privacy and security.
Take GenAI's potential in healthcare, for example. Generative models used in diagnostics need to be both precise and interpretable. The data these models operate on must be comprehensive, representing various cultures, ages, genders, and geographical areas to accurately mirror the complexities of our diverse societies. GenAI's impact and potential in the creative arts, education, and journalism are expected to be equally profound and challenging.
Given GenAI’s significant influence, it’s essential to establish ethical boundaries in this era. This symposium seeks to explore this multifaceted topic from two primary perspectives.
The first perspective, "Individual Impact of GenAI on Well-being," aims to elucidate the mechanisms and issues to consider when designing AI and GenAI for personal well-being. In this context, the focus excludes societal aspects. Topics might include efficiency gains in individual work, personalized medical care, support for learning and education, new forms of entertainment, and privacy concerns. The discussion should center on how AI and GenAI together can enhance opportunities for individual well-being, with an emphasis on understanding the emotional and quality-of-life implications of these technologies.
The second perspective, "Social Impact of GenAI on Well-being," intends to highlight the mechanisms and issues to consider when incorporating societal aspects into GenAI for well-being. Topics may involve changes in employment structures due to AI automation, preventing the worsening of social inequalities, the potential to enhance the quality of health care and medical treatment, the risk of misinformation spreading, and ethical debates on AI's judgment criteria and values. Exploring the social impact of GenAI on well-being is expected to shed light on both the potential benefits and risks of AI and GenAI. We must also explore ways to prevent machines from adopting human biases, ensuring fairness and producing socially responsible outcomes.
We welcome both technical and philosophical discussions on the individual and social impacts of GenAI on well-being, particularly (though not exclusively) in the realms of ethical design, machine learning software, robotics, and social media. Topics such as interpretable forecasts, responsible social media, beneficial robotics, combating loneliness with AR/VR, and promoting personal health are pivotal to our discussions. This symposium aims to share the latest advancements, current challenges, and potential applications related to social responsibility for well-being. Evaluations of digital experiences and insights into human health and well-being are also encouraged.
Scope of Interests
We address the following technical and ethical challenges concerning the impact of GenAI on well-being. Technical research that clarifies the possibilities and limitations of GenAI for well-being is welcome. Topics of interest include, but are not limited to:
● Impact on Individual Well-being
- Positive Impacts:
Personalized Learning: Catering to individual learning styles and paces. GenAI can adapt educational materials and support to fit each learner, making education more effective and enjoyable.
Efficient Daily Life: With AI support, everyday tasks and decisions become more streamlined. From scheduling to content recommendations, GenAI can enhance individual productivity and leisure.
Creative Assistance: Generating novel ideas or suggestions to bolster human creativity. GenAI can be a tool for artists, writers, and other creatives, offering inspiration or direct content generation.
We call for theoretical and empirical research that clarifies the possibilities and limitations of current AI/ML technologies with respect to these positive impacts on individual well-being.
- Negative Impacts:
Over-reliance: A potential decrease in independent decision-making and thinking abilities due to excessive reliance on AI. Overusing AI for decisions can hinder personal growth and critical thinking skills.
Privacy Concerns: Potential misuse or mishandling of personal data. With more data feeding into AI systems, there’s increased risk to individual privacy.
Mental Health Issues: Increased stress or deterioration of human relationships due to over-interaction with AI. Reduced human contact and over-reliance on virtual interactions might affect emotional well-being.
Ethical Dilemmas: Questions about the extent to which one should rely on AI suggestions or decisions. Drawing the line between AI assistance and personal judgment becomes a crucial ethical consideration.
We call for theoretical and empirical research that clarifies the possibilities and limitations of current AI/ML technologies with respect to these negative impacts on individual well-being.
- Research Challenges for Individual Well-being
Data Safety: Research on technologies and policies to bolster the protection of personal data and privacy.
Mental Health Assessment: Evaluating the impact of AI interactions on psychological health and well-being.
Ethical Boundaries: Investigating the ethical standards and guidelines for using AI in daily life.
Maintaining Autonomy: Exploring ways to balance AI support with personal judgment and autonomy.
The integration of generative AI brings profound impacts at both the societal and individual levels. Deep understanding and appropriate research are essential to harness its benefits and address its challenges.
● Impact on Social Well-being
- Positive Impacts:
Educational Revolution: GenAI has the potential to personalize learning environments, enhancing both the efficacy and accessibility of education. Tailored educational content can cater to individual needs, maximizing the learning potential for students from diverse backgrounds.
Content Abundance: Rapid generation of diverse content can elevate the quality and speed of entertainment and information dissemination. GenAI can produce vast amounts of content, from news articles to digital artwork, making it easier for creators and media outlets to meet audience demands.
Equalizing Information Access: Catering information to diverse populations promotes equal access to knowledge. GenAI can break down barriers by customizing content to different linguistic and cultural groups, ensuring equitable information distribution.
Discussions on the possibilities and limitations of current AI/ML technologies with respect to these positive impacts on social well-being are welcome.
- Negative Impacts:
Spread of Misinformation: There is an increased risk of quickly generating and disseminating false information, such as deepfakes. Enhanced generation capabilities can be misused, leading to misinformation campaigns or fraudulent activities.
Amplification of Bias: If trained on biased data, there’s a possibility of reinforcing existing societal prejudices. Algorithms may unintentionally propagate and even amplify societal biases present in training data.
Job Displacement: Certain professions or tasks might become redundant due to AI automation, raising unemployment concerns. As GenAI takes over creative and information-based roles, traditional jobs in these sectors might decline.
Philosophical Concerns: The blurred boundary between human and AI-generated content raises questions about truth and values. Discerning human intent and originality from AI-generated content becomes a profound philosophical debate.
Discussions on the possibilities and limitations of current AI/ML technologies with respect to these negative impacts on social well-being are welcome.
- Research Challenges for Social Well-being
Counteracting Misinformation: Research on technologies and policies to mitigate the negative impact of AI-generated false information.
Bias Rectification: Exploring methods to design and train AI without inheriting societal biases.
Role in Education: Investigating the optimal role and boundaries of AI in educational settings.
Societal Assessment: Developing methods to evaluate and predict the broader societal impacts of AI.
To encourage people to accept the results of AI, we need to develop desirable human-AI partnerships, and we welcome ethical and philosophical discussions on this issue. Topics include “Machine Intelligence vs. Human Intelligence”, “How AI affects human society and our ways of thinking”, infodemic (fake news) issues on social media, personal identity, and related questions.
A preliminary proposed schedule for the symposium (keynote speakers to be confirmed)
The symposium will include invited talks, technical paper presentations, demonstrations, and discussion sessions. Around 20-25 authors of accepted papers will give 20-minute presentations; two or three keynote speakers will give 70-minute talks, and three or four guest speakers will give 45-minute talks. On the third day, awards for best papers and best presentations will be decided by reviewers' and participants' votes. Poster and demonstration sessions will be held in the middle of the symposium. The following is a tentative preliminary schedule.
● 1st day, March 25th
[Introduction] Welcome and Self-introduction
[Session 1] GenAI Impacts for Individual and Social Well-being
[Invited Talk and Guest Talk 1]
[Invited Talk 2]
[Session 2] Issues on Wellbeing AI
[Reception]
● 2nd day, March 26th
[Introduction] Wrap-up of the first day.
[Session 3] GenAI Challenges for Individual and Social Well-being
[Invited Talk and Guest Talk 2]
[Poster & Demonstration session]
[Session 4] AI and Humanity
[Plenary Session]
● 3rd day, March 27th
[Introduction] Wrap-up of the first and second days.
[Session 5] Issues on Ethics and others
[Symposium wrap-up] Award selection, summary of new insights and questions
Format
The symposium is organized around invited talks, presentations, posters, and interactive demonstrations.
Submission Information
Interested participants should submit either full papers (8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state the preferred presentation type (long paper (6-8 pages), short paper (1-2 pages), demonstration, or poster presentation). All submissions should be uploaded to AAAI's EasyChair site at https://easychair.org/conferences/?conf=sss24, and, in addition, emailed to
sss2024-genai[at]cas.lab.uec.ac.jp by January 12th, 2024 (extended from December 22nd, 2023).
The format templates (LaTeX and Word) are available here for the review submission and here for the camera-ready submission. We encourage you to use these templates for your submission.
[*** NOTE ***] Since we plan to publish our proceedings through AAAI, please prepare the camera-ready paper with the AAAI style file.
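For reference, here is a minimal sketch of a camera-ready LaTeX skeleton along the lines of the AAAI-24 author kit; the style-file name (assumed here to be aaai24.sty) and the exact required packages may differ in the kit linked above, so please follow the organizers' templates where they disagree.

    \documentclass[letterpaper]{article}
    % Assumed AAAI-24 author kit; use the style file distributed with the templates above if it differs.
    \usepackage{aaai24}
    \usepackage{times}    % font packages expected by the AAAI format
    \usepackage{helvet}
    \usepackage{courier}
    \frenchspacing
    \setlength{\pdfpagewidth}{8.5in}
    \setlength{\pdfpageheight}{11in}
    \title{Your Paper Title}
    \author{First Author, Second Author}
    \affiliations{Your Institution \\ your.email@example.org}  % hypothetical placeholders
    \begin{document}
    \maketitle
    \begin{abstract}
    A one-paragraph abstract.
    \end{abstract}
    \section{Introduction}
    Body text follows the AAAI two-column format.
    \end{document}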
Important Dates
Submission deadline: January 12th, 2024 (extended from December 22nd, 2023)
Author notification: January 19th, 2024 (revised from January 5th, 2024 and January 26th, 2024)
Camera-ready papers: February 5th, 2024 (revised from February 4th, 2024 and February 12th, 2024; the new deadline was set so that papers can be included in the AAAI proceedings)
Registration deadline: February 29th, 2024
Symposium: March 25th-27th, 2024
Publication of online proceedings: October 31st, 2024 (subject to change)
Invited Speakers
We plan to invite keynote speakers from Stanford University and international communities working on interpretable AI and well-being computing. Invited speakers will be announced later.
Organizing Committee
Co-Chairs
Takashi Kido (Teikyo University, Japan)
Keiki Takadama (The University of Electro-Communications, Japan)
Program Committee
Amy Ding (Carnegie Mellon University, U.S.A.)
Melanie Swan (DIYgenomics, U.S.A.)
Katarzyna Wac (Stanford University, U.S.A. and University of Geneva, Switzerland)
Ikuko Eguchi Yairi (Sophia University, Japan)
Fumiko Kano (Copenhagen Business School, Denmark)
Takashi Maruyama (Stanford University, U.S.A.)
Chirag Patel (Harvard University, U.S.A.)
Rui Chen (Stanford University, U.S.A.)
Ryota Kanai (University of Sussex, U.K.)
Yoni Donner (Stanford University, U.S.A.)
Yutaka Matsuo (University of Tokyo, Japan)
Eiji Aramaki (Nara Institute of Science and Technology, Japan)
Pamela Day (Stanford University, U.S.A.)
Tomohiro Hoshi (Stanford University, U.S.A.)
Miho Otake (RIKEN, Japan)
Yotam Hineberg (Stanford University, U.S.A.)
Yukiko Shiki (Kansai University, Japan)
Yuichi Yoda (Ritsumeikan University, Japan)
Robert Reynolds (Wayne State University, U.S.A.)
Dragutin Petkovic (San Francisco State University, U.S.A.)
Advisory Committee
Atul J. Butte (University of California San Francisco, U.S.A.)
Seiji Nishino (Stanford University, U.S.A.)
Katsunori Shimohara (Doshisha University, Japan)
Takashi Maeno (Keio University, Japan)
Note: Since we are contacting other researchers, more program committee members will be added to the above list.
Contact Information
Takashi Kido
Email: kido.takashi@gmail.com
Institution: Teikyo University, Advanced Comprehensive Research Organization
Institutional address: Kaga 2-11-1, Itabashi-ku, Tokyo, Japan, 173-8605