Human-Compatible AI for Well-being: Harnessing the Potential of GenAI for AI-Powered Science



Aims and New Challenges

The rapid evolution of Artificial Intelligence (AI), particularly Generative AI (GenAI), holds immense potential for improving individual and societal well-being, especially in sectors such as healthcare, education, and creativity. Notably, the 2024 Nobel Prizes in Physics and Chemistry were awarded to research that made significant use of AI, underscoring the transformative impact of these technologies on scientific advancement.


Such AI has the power to positively influence well-being, which encompasses not only physical health but also mental fulfillment, autonomy, and social cohesion. However, these advancements also raise significant philosophical and technical challenges. For example, over-reliance on AI for personal decision-making in healthcare can diminish individuals’ critical thinking skills, potentially leading to poor health outcomes when users blindly trust AI recommendations. Additionally, AI-driven recommendation systems can create echo chambers that limit users’ exposure to diverse viewpoints, leading to social polarization and reduced community cohesion.


To frame these challenges, Stuart Russell’s work on the “problem of control” emphasizes the need for AI systems that align with human values and remain under human control, so as to prevent unintended and harmful consequences. As AI systems become more complex and autonomous, the risk of misalignment with human intentions increases.


To navigate these issues effectively, we must maximize the potential of recent AI technologies while maintaining control over their applications. Achieving this balance requires a dual approach: Human-Compatible AI to safeguard human values, and AI-Powered Science to leverage real-world applications. Integrating the two allows us to navigate the complexities of AI effectively.



Human-Compatible AI: This approach focuses on creating AI systems that operate within human-defined boundaries, ensuring that AI does not erode autonomy, reinforce biases, or manipulate behavior in ways that could lead to social fragmentation. By emphasizing controllability, fairness, and interpretability, we can develop AI that enhances individual agency and fosters trust among users.

AI-Powered Science: This approach leverages advanced technologies such as GenAI to provide personalized services and innovative solutions in areas such as healthcare and education. However, it is crucial to navigate the ethical concerns associated with these technologies, including issues of privacy, data security, and potential biases, to maximize their positive impact on well-being.


By integrating Human-Compatible AI and AI-Powered Science, we can comprehensively address the challenges posed by AI technologies. The two approaches complement each other: Human-Compatible AI ensures that AI systems remain aligned with human values, while AI-Powered Science explores the practical applications and benefits of these technologies in real-world settings. This integrated strategy allows us to harness the full potential of AI while safeguarding against its risks, making it indispensable for the enhancement of well-being.

Since well-being has not only an individual aspect but also a social one, this symposium explores how Human-Compatible AI and AI-Powered Science can enhance individual and social well-being while realizing their potential. The discussion will be guided by two key perspectives:

  1. Individual Well-Being: Understanding how these technologies affect individual well-being, including autonomy, mental health, and personal growth, is essential for fostering a supportive environment in which users can thrive.


  2. Social Well-Being: The broader implications of AI, such as those for equity, misinformation, and job automation, highlight the need for responsible approaches to technology implementation. Addressing these challenges is crucial for creating a fair and inclusive society.

By addressing these dimensions, the symposium aims to foster a comprehensive dialogue on how AI can be designed to promote well-being while remaining safe, interpretable, and aligned with human values.


Symposium Perspectives

Because both Human-Compatible AI and AI-Powered Science play pivotal roles in shaping individual and societal well-being, this symposium examines how innovative applications of the two should be designed in areas such as healthcare, education, and social systems, and how these technologies can enhance well-being while realizing their potential.


Scope of Interests

We invite papers on Human-Compatible AI and AI-Powered Science that focus on the intersection of technical solutions and philosophical challenges. Topics of interest include, but are not limited to, the following:



Human-Compatible AI to Safeguard Human Values

  • Responsible AI for Personalized Healthcare, Education, and Mental Health
    • Exploring approaches to designing AI systems, including Large Language Models (LLMs), that uphold human values such as privacy, autonomy, and fairness in personalized services.
  • Interpretable AI for Personal Decision-Making
    • Developing transparent AI models that allow individuals to understand and trust AI’s decision-making processes, especially in critical domains such as healthcare and finance (an illustrative sketch follows this list).
  • AI-Augmented Creativity and Personal Growth
    • Investigating how AI can support human creativity and self-fulfillment without undermining individual autonomy, focusing on tools and systems that enhance creative processes.
  • Ethical Design Principles for GenAI and LLMs
    • Establishing guidelines for designing ethical AI, emphasizing transparency, accountability, and alignment with human values to avoid manipulation or bias reinforcement.
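
As a concrete illustration of the interpretable-AI topic above, the sketch below shows a fully transparent additive scoring model for a hypothetical personal-health recommendation, where each feature’s exact contribution to the decision can be shown to the user. This is a minimal sketch in Python; the feature names, weights, and threshold are illustrative assumptions, not part of this call or any prescribed method.

```python
# Minimal sketch of a transparent additive scoring model for a hypothetical
# personal-health recommendation. Feature names, weights, and the threshold
# are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Contribution:
    feature: str
    value: float
    weight: float

    @property
    def score(self) -> float:
        # Each feature's exact contribution to the total decision score.
        return self.value * self.weight


def explain_recommendation(features: dict[str, float],
                           weights: dict[str, float],
                           threshold: float = 1.0) -> str:
    """Return a decision plus a per-feature breakdown the user can inspect."""
    contributions = [Contribution(name, features[name], weight)
                     for name, weight in weights.items() if name in features]
    total = sum(c.score for c in contributions)
    decision = "suggest an earlier bedtime" if total >= threshold else "no change suggested"
    lines = [f"Decision: {decision} (total score {total:.2f}, threshold {threshold:.2f})"]
    for c in sorted(contributions, key=lambda c: abs(c.score), reverse=True):
        lines.append(f"  {c.feature}: value {c.value:.2f} x weight {c.weight:+.2f} -> {c.score:+.2f}")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical self-tracked inputs: sleep deficit (hours), caffeine (cups), stress (0-10).
    user = {"sleep_deficit_h": 1.5, "caffeine_cups": 3.0, "stress_0_10": 6.0}
    weights = {"sleep_deficit_h": 0.50, "caffeine_cups": 0.10, "stress_0_10": 0.05}
    print(explain_recommendation(user, weights))
```

The point of the sketch is only that an additive model’s decision decomposes exactly into per-feature terms, which is one simple form of the transparency this topic calls for in critical domains.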

AI-Powered Science to Leverage Real-World Applications

  • Advancements in Bias Detection and Fairness in Machine Learning
    • Researching cutting-edge methodologies to detect, mitigate, and prevent bias in AI systems, supporting fairness and equity in real-world applications (see the sketch after this list).
  • AI-Driven Computational Sociology and Public Discourse
    • Examining how AI shapes information diffusion and public discourse in social networks, and exploring innovative strategies to address challenges such as echo chambers and filter bubbles.
  • AI-Driven Societal Transformations and Workforce Transitions
    • Analyzing the implications of AI-driven changes in industries such as healthcare, education, and the creative sectors, and designing systems to facilitate equitable workforce transitions.
  • Breakthrough Applications in Healthcare and Beyond
    • Investigating innovative uses of AI, such as Generative AI (GenAI) and LLMs, in sectors such as healthcare, education, and the creative industries, highlighting their potential to drive transformative advances and societal impact.
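
As a concrete illustration of the bias-detection topic referenced above, the minimal Python sketch below computes two standard group-fairness gaps (demographic parity difference and equal-opportunity difference) on synthetic predictions. The labels, predictions, and group names are invented for illustration; a real bias audit requires far more than these two numbers.

```python
# Minimal sketch of two common group-fairness checks: demographic parity
# difference and equal-opportunity (true-positive-rate) difference.
# The labels, predictions, and groups below are synthetic and illustrative.

from collections import defaultdict


def rate(flags: list) -> float:
    """Fraction of 1s in a list of 0/1 flags (0.0 if empty)."""
    return sum(flags) / len(flags) if flags else 0.0


def fairness_report(y_true: list, y_pred: list, group: list) -> dict:
    """Compare positive-prediction rates and true-positive rates across groups."""
    pred_by_group = defaultdict(list)   # all predictions, per group
    tp_by_group = defaultdict(list)     # predictions on actual positives, per group
    for yt, yp, g in zip(y_true, y_pred, group):
        pred_by_group[g].append(yp)
        if yt == 1:
            tp_by_group[g].append(yp)
    pos_rates = {g: rate(v) for g, v in pred_by_group.items()}
    tprs = {g: rate(v) for g, v in tp_by_group.items()}
    return {
        "positive_rate_by_group": pos_rates,
        "tpr_by_group": tprs,
        "demographic_parity_diff": max(pos_rates.values()) - min(pos_rates.values()),
        "equal_opportunity_diff": max(tprs.values()) - min(tprs.values()),
    }


if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(fairness_report(y_true, y_pred, group))
```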

    We encourage researchers to submit proposals that delve into the practical applications and implications of Human-Compatible AI and AI-Powered Science, examining their potential to foster well-being in various domains. By addressing the ethical considerations and societal impacts of these technologies, we can work toward ensuring that AI enhances not only individual well-being but also social well-being in a fair and inclusive society.


    Preliminary proposed schedule for the symposium (keynote speakers to be confirmed)

    The symposium will include invited talks, technical paper presentations, demonstrations, and discussion sessions. Approximately 20–25 authors of accepted papers will give 20-minute presentations; two or three keynote speakers will give 70-minute talks, and three or four guest speakers will give 45-minute talks. Awards for the best papers and best presentations, decided by reviewers’ and participants’ votes, will be presented on the third day. Poster and demonstration sessions will be held in the middle of the symposium. The following is a tentative preliminary schedule.



    1st day (March 31)

    [Introduction] Welcome and Self-introduction

    [Session 1] Human-Compatible AI for Well-being

    [Invited Talk and Guest Talk 1]

    [Invited Talk 2]

    [Session 2] Issues on Well-being AI

    [Reception]


    2nd day (April 1)

    [Introduction] Wrap-up of the first day.

    [Session 3] Individual Impact of GenAI and AI-Powered Science on Well-being

    [Invited Talk and Guest Talk 2]

    [Poster & Demonstration session]

    [Session 4] Social Impact of GenAI and AI-Powered Science on Well-being

    [Plenary Session]


    3rd day (April 2)

    [Introduction] Wrap-up of the first and second days.

    [Session 5] Issues on Ethics and Other Topics

    [Symposium wrap-up] Award selection, summary of new insights and questions


    Format

    The symposium will consist of invited talks, paper presentations, posters, and interactive demonstrations.


    Submission Information

    Interested participants should submit either full papers (8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state the preferred presentation type: long paper (6-8 pages), short paper (1-2 pages), demonstration, or poster presentation. All submissions should be uploaded to AAAI’s EasyChair site at https://easychair.org/conferences/?conf=sss25 and, in addition, emailed to
    sss2025-hcai[at]cas.lab.uec.ac.jp by January 17th, 2025 (extended from December 30th, 2024, and January 10th, 2025). Format templates (LaTeX and Word) for submitted papers are available here; we encourage authors to use them.
    [*** NOTE ***] Since we are planning to publish the proceedings through AAAI, please prepare the camera-ready paper with the AAAI style file.


    Important Dates

    Submission deadline: January 17th, 2025 (extended from December 30th, 2024, and January 10th, 2025)

    Author notification: January 17th, 2025

    Camera-ready paper: January 31st, 2025

    Registration deadline: February 17th, 2025

    Symposium: March 31st-Apr 2nd, 2025

    Publication of online proceedings: October 31st, 2025 (subject to change)



    Organizing Committee

    Co-Chairs

    Takashi Kido (Teikyo University, Japan)

    Keiki Takadama (The University of Tokyo, Japan)


    Program Committee

    Hong Qin (Old Dominion University, U.S.A)

    Amy Ding (Carnegie Mellon University, U.S.A)

    Melanie Swan (DIYgenomics, U.S.A.)

    Katarzyna Wac (Stanford University, U.S.A and University of Geneva, Switzerland)

    Ikuko Eguchi Yairi (Sophia University, Japan)

    Fumiko Kano (Copenhagen Business School, Denmark)

    Takashi Maruyama (University of Occupational and Environmental Health, Japan)

    Chirag Patel (Harvard University, U.S.A)

    Rui Chen (Stanford University, U.S.A)

    Ryota Kanai (University of Sussex, UK)

    Yoni Donner (Stanford, U.S.A)

    Yutaka Matsuo (University of Tokyo, Japan)

    Eiji Aramaki (Nara Institute of Science and Technology, Japan)

    Pamela Day (Stanford, U.S.A)

    Tomohiro Hoshi (Stanford, U.S.A)

    Miho Otake (RIKEN, Japan)

    Yotam Hineberg (Stanford, U.S.A)

    Yukiko Shiki (Kansai University, Japan)

    Yuichi Yoda (Ritsumeikan University, Japan)

    Robert Reynolds (Wayne State University, U.S.A)

    Dragutin Petkovic (San Francisco State University, U.S.A)


    Advisory Committee

    Atul J. Butte (University of California San Francisco, U.S.A.)

    Seiji Nishino (Stanford University, U.S.A.)

    Katsunori Shimohara (Doshisha University, Japan)

    Takashi Maeno (Keio University, Japan)

    Robert Reynolds (Wayne State University, U.S.A)


    Potential Participants

    We anticipate 30-50 participants, drawn from past AAAI symposium attendees and new interest groups. We expect interdisciplinary participation from artificial intelligence, computer science, human-computer interaction, psychology, neuroscience, proactive bio-citizens, collaborative healthcare communities, and quantified-self communities, sharing a global interest in understanding and enhancing human health and cognition.


    Contact Information

    Takashi Kido

    Email: kido.takashi@gmail.com

    Institution: Advanced Comprehensive Research Organization, Teikyo University (Professor)

    Institutional address: 2-21 1-1 Kaga, Itabashi-ku, Tokyo, Japan, 173-0003