About the Symposium

In popular discourse, contemplation of intelligence or agency among machines often settles on a future technological dystopia or, conversely, a machine-led utopia. In the medium term, however, software and autonomous systems will require legal regulation, changes to business practices, and embedded ethics engines.

Basic and applied research on AI ethics is gaining importance and attention from a variety of stakeholders. AI practitioners and their organizations will have to build ethics into their code, their processes, and combined human-computer systems long before AI approaches broad human capacities.

New AI applications are generating more, and more complex, opportunities for abuse. Without attention to AI ethics, subtle openings for repeated ethical violations could seriously harm users and reflect poorly on AI as an engineering practice.

Societal awareness of the risks of AI is increasing. Organizations across sectors and industries are introducing new governance mechanisms, or expanding existing ones, to build AI responsibly. Legislators and agency personnel are discussing a range of legal and regulatory approaches. Algorithmic bias and fairness have drawn considerable attention of late, with many organizations seeking concrete solutions. Meanwhile, as robots, self-driving cars, chatbots, and autonomous agents are allowed to act on their own, they will be expected to act according to both ethical standards and the norms of their environment.

AI ethics has been an active area of academic and policy discussion over the past few years. A number of workshops, conferences, and academic bodies (e.g., the AAAI/ACM Conference on AI, Ethics, and Society; Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)) have produced a rich synthesis of ethical principles for AI. Policy makers and professional institutions (e.g., IEEE's Ethically Aligned Design, the EU's Ethics Guidelines for Trustworthy AI, the G20, and UNESCO) are drafting documents for consultation and potential regulation. Members of AAAI will be called upon to present both technology solutions and frameworks for the ethical and responsible use of technology. The AI community must be prepared to play a constructive role, or risk being bypassed by decision-makers.

This symposium will facilitate a deeper discussion of how intelligence, agency, and ethics may intermingle in organizations and in software implementations. For example, ethical behavior can be formulated as rules, as values, as quantitative measures, or in a number of other ways. What are the consequences of implementing each approach? How can AI ethics take advantage of technology for explainable AI? Can AI ethics help people act more ethically in the choices they already make regularly?

Practitioners, companies, policy makers, professional bodies, technology providers, junior researchers, and senior academics with an interest in the implications of concrete implementations of machine ethics are welcome to participate. We particularly encourage speakers from diverse fields, since we view the ethics of AI as a collaborative, bottom-up exercise rather than a top-down 'sky-hook' approach. We aim to be a venue for disparate approaches (technical, legal, and sociological) rather than a closed venue where only optimization with respect to specific functions is discussed.
