Superalignment and interdisciplinary collaboration: Arts and values for inclusive AI

In AI policy discussions, advocacy for ethical superalignment plays an increasingly important role, uniting stakeholders from diverse backgrounds to champion fair and transparent policies.

In our adventures so far exploring topics like digital arts and artificial intelligence (AI), there is one concept we have really taken an interest in. It’s called superalignment. The term comes from AI safety research (notably popularized by OpenAI), where it refers to the challenge of keeping highly capable AI systems aligned with human values and intent. More broadly, we’ve come to think of it as the convergence of diverse stakeholders around a shared vision or set of policies.

In the context of artificial intelligence, the ideas behind superalignment hold immense promise for enhancing collaboration, especially around the ethical development and responsible deployment of AI technologies.

It can be a lot to unpack! From interdisciplinary collaboration to ethical AI governance and data sharing, achieving superalignment among researchers, developers, policymakers, and other stakeholders is a key goal for realizing the full potential of AI while addressing its ethical, societal, and regulatory implications. As we start to explore a few different aspects of AI for organizational development, understanding and applying principles of superalignment can help guide our efforts.

Many AI projects rely heavily on interdisciplinary collaboration. Technology development, especially in AI, often involves teams with expertise in computer science, mathematics, statistics, psychology, neuroscience, and various other fields. These diverse disciplines each contribute unique insights and methodologies that are essential for tackling the challenges inherent in AI research and development. For example, computer scientists bring expertise in algorithm design and software engineering; mathematicians offer frameworks for modeling and optimization; statisticians provide tools for data analysis and inference; psychologists contribute insights into human cognition and behavior; and neuroscientists offer knowledge about the underlying mechanisms of the brain.

And it’s not just about the technical development of AI systems. It’s also about incubating the kinds of inclusive environments needed for intersectoral and interdisciplinary collaboration. Many northern communities face pressing societal challenges, including poverty and unemployment, systemic inequality, food insecurity, gaps in education, climate adaptation, and more.

As AI systems become more advanced and autonomous, there is growing concern about the risks associated with them, including unintended consequences, bias, and the possibility of AI systems acting in ways that are harmful to humans or society. Programs like OpenAI’s Superalignment Fast Grants support research into designing AI systems and algorithms so that they not only achieve their specified objectives but do so in a manner that aligns with human values and goals. This includes ensuring that AI systems respect ethical principles, comply with legal regulations, and operate transparently and accountably. For us, it should also be about values, not just technical safeguards.

Traditional Indigenous knowledge also has an important role in the pursuit of superalignment in AI research. Indigenous knowledge systems often prioritize holistic perspectives, sustainability, and interconnectedness with the natural world. We feel these principles can provide valuable insights into designing AI systems that are more aligned with broader human values and goals. One area we really want to explore is what the “intersection” of technical superalignment research and traditional values looks like.

Interdisciplinary collaboration also supports a culture of creativity and cross-pollination of ideas, where researchers (including artists!) are encouraged to think outside the boundaries of their own disciplines and explore new approaches and methodologies. Artists bring unique perspectives to the table, leveraging their creativity, imagination, and aesthetic sensibilities to inspire new ways of conceptualizing and communicating complex ideas. Whether it’s through data visualizations, storytelling, or interactive experiences, artists have a lot to contribute to the development of AI systems. In particular, artists play key roles in humanizing technology and making it more accessible and engaging to broader audiences. These interdisciplinary synergies can also lead to breakthrough innovations that would not have been possible within the confines of a single discipline. As technologies continue to advance, there are many opportunities for the next generation of arts and culture workers to infuse these kinds of projects with fresh perspectives and imaginative solutions that challenge conventional thinking and push the boundaries of what is possible.

As we reflect on our exploration of concepts like superalignment, we acknowledge that there is much more to learn. However, one thing is clear: aligning the efforts of AI researchers, developers, philanthropists, non-profit organizations, and other stakeholders is increasingly vital. These tools aren’t “coming.” They’re here. And we need to start leveraging them to their full potential, while encouraging the next generation to be part of development processes.


Jamie Bell

Jamie Bell is a skilled media and interdisciplinary arts professional with extensive experience in journalism, public affairs and media. A long-time arts administrator, Jamie is a founding member of the @1860 Winnipeg Arts Program.

Our program began as a pilot aimed at building organizational capacity for digital arts administration, skills development, and training. It is supported by the non-profit organization Niriqatiginnga.
