Socially Responsible AI for Well-being

The tentative time schedule is available here.

Description of the Symposium

Aims and New Challenges

AI has incredible potential to make humans happy, but it also carries the risk of causing unintentional harm. For our happiness (and to avoid harm), it is not enough for AI to be productive in exponential growth or economic/financial supremacy; it should also be socially responsible from the viewpoints of fairness, transparency, accountability, reliability, safety, privacy, and security. This means that AI technology aimed solely at performance improvement (e.g., the classification accuracy of camera images) or optimization (e.g., the minimization of traffic jams) is limited in how far it can improve our happiness, whereas AI technology that is not only responsible but also socially accepted has the potential to improve our quality of life. For example, an AI diagnosis system should provide responsible results (e.g., highly accurate diagnoses with understandable explanations), and those results should also be socially accepted (e.g., the data used for machine learning should not be biased; that is, the amount of training data should be balanced across races and/or locations). As this example shows, the decisions of AI affect our well-being, which suggests the importance of discussing “What is socially responsible?” in the various situations of well-being that will arise in the coming AI age. To address this issue, this symposium focuses on two perspectives.


The first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms and issues should be taken into consideration when designing Responsible AI for well-being (in this perspective, social aspects need not be included). One of the goals of Responsible AI for well-being is to provide responsible results concerning our health, whose condition may change every day. This means that an AI result for one day may not be useful on other days; that is, the result is not responsible over the long term. Since such changes in health condition are often caused by environments that escalate stress, provide unlimited caffeine, distribute nutrition-free “fast” food, and encourage unhealthy sleep behavior, Responsible AI for well-being is expected to provide responsible results by understanding how our digital experience affects our emotions and our quality of life.


The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms and issues should be taken into consideration when implementing social aspects in Responsible AI for well-being. One aspect of social responsibility is fair decision-making, which means that the results of AI should be equally useful for all people. To achieve fairness, we need to tackle the “bias” problem in AI (and in humans). Another aspect of social responsibility is the applicability of knowledge across people. For example, health-related knowledge found by AI for one person (e.g., tips for good sleep) may not be useful for other people, in which case such knowledge is not socially responsible. To address these issues, we need to find ways to keep machines from absorbing human biases, by understanding how fair is fair, and to provide socially responsible results.


We welcome technical and philosophical discussions on “Socially Responsible AI for Well-being” in the design and implementation of ethics, machine learning software, robotics, social media, and beyond. For example, interpretable forecasts, sound social media, helpful robotics, fighting loneliness with AI/VR, and promoting good health are all within the scope of our discussions.

This symposium aims to share the latest progress, current challenges, and potential applications related to social responsibility for well-being. The evaluation of digital experience and the understanding of human health and well-being are also welcome.


Scope of Interests

We will address the following technical and ethical challenges of “Socially Responsible AI for Well-being”. Technical research clarifying the possibilities and limitations of “Socially Responsible AI” is welcome. Topics of interest include, but are not limited to:



(Individually) Responsible AI

  1. Interpretable AI

    Interpretable AI aims at understanding the decisions (actions) of AI, a technology we need in order to understand how the results of AI are responsible for well-being. To this end, we need to develop powerful tools for understanding what exactly deep neural networks and other quantitative methods are doing; a minimal sketch of one such tool appears after this list. We call for theoretical and empirical research on the possibilities and limitations of current AI/ML technologies for interpretable AI for well-being. Topics include (but are not limited to) interpretability of machine learning systems, accountability of black-box prediction models, interpretable AI for precision medicine, interpretability in human/robot communication, trusting AI, social computing for trusting humans in human-in-the-loop computational systems, etc.


  2. How can we define and measure well-being for humans?

    To provide responsible results on well-being, we have to start by defining and measuring well-being, which gives us inspiration for new success metrics for (Individually) Responsible AI. Interdisciplinary research is welcome, such as positive psychology, positive computing, predictive medicine, human well-being, the neuroscience of happiness and pleasure, cultural algorithms, flourishing environments, and cross-cultural analyses of well-being values.


  3. Dynamic changes in well-being

    To cope with dynamic changes in human health conditions, we need to explore advanced machine learning technologies and integrate them into (Individually) Responsible AI. Topics include (but are not limited to) deep learning, data mining and knowledge modeling for wellness, accuracy and efficiency changes in health, collective intelligence/knowledge, life-log analysis (e.g., vital data analysis, Twitter-based analysis), data visualization, human computation, biomedical informatics, and personalized medicine. Discussions evaluating the possibilities and limitations of current technologies are welcome.
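As an illustration of the interpretability tools mentioned in item 1 above, the following is a minimal sketch (in Python, using scikit-learn) of permutation feature importance: each input feature is scored by how much shuffling its values degrades a trained model's predictions. The data, the “well-being score”, and the feature names below are entirely synthetic and chosen for illustration only; this is one possible tool, not a prescribed method of the symposium.

    # Permutation feature importance: score each feature by how much the
    # model's predictive quality drops when that feature's values are shuffled.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    # Synthetic daily-life features: sleep hours, caffeine intake, step count.
    X = rng.normal(size=(500, 3))
    # A made-up "well-being score" that depends mostly on sleep (column 0).
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(["sleep", "caffeine", "steps"], result.importances_mean):
        print(f"{name}: {score:.3f}")

Sleep should dominate the ranking here, mirroring how such a tool could surface which daily-life factors a well-being model actually relies on.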



Socially Responsible AI

  1. How can we define and measure Fairness?

    AI should provide fair results from the viewpoint of well-being. To this end, we have to start by defining “fairness” in well-being, which gives us inspiration for new success metrics for Socially Responsible AI. Interdisciplinary research is welcome, including (but not limited to) criteria and metrics for fairness, fairness in robotics, fairness in machine learning, fairness in social media, fairness in human-in-the-loop systems, fairness in collective systems, causal inference to reason about fairness, multi-agent simulations of fairness, game-theoretic analyses of fairness, human bias vs. computational (data) bias, bias analysis on social media, and political orientation analyses. A minimal sketch of one such fairness metric appears after this list.


  2. Knowledge applicability for well-being

    To explore health-related knowledge that can be applied to many people, we welcome empirical and technical research on this issue. Topics include (but are not limited to) social data analysis and social relation design, mood analysis, healthcare communication systems, natural language dialogue systems, personal behavior discovery, Kansei, “zone” and creativity, compassion, calming technology, Kansei engineering, gamification, assistive technologies, Ambient Assisted Living (AAL) technology, medical recommendation systems, care support systems for elderly people, web services for personal wellness, games for health and happiness, life-log applications, disease improvement experiments (e.g., metabolic syndrome, diabetes), sleep improvement experiments, healthcare/disability support systems, and community computing platforms.


  3. Ethical Issues in “AI and Humanity”: desirable human-AI partnerships

    To encourage people to accept the results of AI, we need to develop desirable human-AI partnerships. We welcome ethical and philosophical discussions on this issue. Topics include “Machine Intelligence vs. Human Intelligence”, “How does AI affect our human society and ways of thinking?”, issues of infodemics (fake news) on social media, personal identity, etc.
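As an illustration of the fairness metrics mentioned in item 1 above, the following is a minimal Python sketch of one common criterion, demographic parity: the gap in positive-prediction rates between two groups. The predictions and group labels are synthetic and purely illustrative; in practice, the choice of criterion (demographic parity, equalized odds, etc.) is itself part of the “how fair is fair” question.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-outcome rates between two groups (0 = parity)."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Example: a model recommends a health intervention (1) or not (0)
    # for members of two demographic groups (labeled 0 and 1).
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_difference(y_pred, group))  # 0.5: far from parity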


Format

The symposium will consist of invited talks, presentations, posters, and interactive demonstrations. It will be conducted in a hybrid style (on Zoom and in person).


Submission Information

Submission Format and Guideline


Authors should submit either full papers of up to 8 pages (minimum 6 pages) or extended abstracts of up to 2 pages. Extended abstracts should state the presentation type (short paper (1–2 pages), demonstration, or poster presentation). Please note that full and short paper presentations will be held both onsite and online, while demonstrations and poster presentations will be held onsite only. However, short presentations of demonstrations and posters will be held both onsite and online, in addition to the onsite demonstration and poster sessions, so that authors can share their work with online participants. All submissions should be uploaded to AAAI’s EasyChair site at https://easychair.org/conferences/?conf=sss23 and, in addition, emailed to aaai2023-srai@cas.lab.uec.ac.jp by January 22, 2023.
The format templates (LaTeX and Word) for submitted papers are available here. We encourage you to use these templates for your submission.


Important Dates

Submission deadline: January 22, 2023

Author notification: January 31, 2023

Camera-ready papers: February 28, 2023
The deadline has been extended to March 15, 2023

Registration deadline: March 4, 2023

Symposium: March 27–29, 2023


Organizing Committee

Co-Chairs


Takashi Kido (Teikyo University, Japan)

Keiki Takadama (The University of Electro-Communications, Japan)

Program Committee


Amy Ding (Carnegie Mellon University, USA)

Melanie Swan (DIYgenomics, USA)

Katarzyna Wac (Stanford University, USA and University of Geneva, Switzerland)

Ikuko Eguchi Yairi (Sophia University, Japan)

Fumiko Kano Glückstad (Copenhagen Business School, Denmark)

Takashi Maruyama (Stanford, USA)

Chirag Patel (Harvard University, USA)

Rui Chen (Stanford University, USA)

Ryota Kanai (University of Sussex, UK)

Yoni Donner (Stanford, USA)

Yutaka Matsuo (University of Tokyo, Japan)

Eiji Aramaki (Nara Institute of Science and Technology, Japan)

Pamela Day (Stanford, USA)

Tomohiro Hoshi (Stanford, USA)

Miho Otake (RIKEN, Japan)

Yotam Hineberg (Stanford, USA)

Yukiko Shiki (Kansai University, Japan)

Yuichi Yoda (Ritsumeikan University, Japan)

Maki Sugimoto (Teikyo University, Japan)

Mitsuhiro Ogawa (Teikyo University, Japan)


Contact Information

Takashi Kido

Email: kido.takashi@gmail.com

Institution: Teikyo University, Advanced Comprehensive Research Organization

Institutional address: Kaga 2-11-1, Itabashi-ku, Tokyo, Japan, 173-8605