Oasis Labs partners with Meta: Building a privacy-preserving platform to support the fair development of AI models

Oasis Chinese
2022-08-02 15:07:36
The secure multi-party computation method developed by Meta in collaboration with Oasis Labs is a privacy-centric approach that enables critical fairness measurements while safeguarding people's privacy through mature privacy-protection techniques.

Oasis Labs has announced a partnership with Meta under which Oasis Labs will launch a platform to assess the fairness of Meta's products while protecting user privacy, a pioneering initiative for inclusivity and fairness.

Recently, Meta ran a survey on Instagram in which users were asked to voluntarily share their racial or ethnic information. As a key technical partner of Meta, Oasis Labs built a platform that uses Secure Multi-Party Computation (SMPC) to protect users' private information.

This collaboration will advance the measurement of fairness in artificial intelligence models, positively impacting people's lives and benefiting society as a whole. The platform launched by Oasis Labs marks an important step toward determining whether AI models are fair and toward enabling appropriate mitigation.

Let's take a look at the exciting details of the collaboration.

Collaboration Details: How Oasis Labs Platform Will Assess AI Model Fairness

Meta's Responsible AI, Instagram Equity, and Civil Rights teams are launching a survey in which Instagram users are invited to voluntarily share their racial or ethnic information.

In this survey, a third-party research organization collects the responses and distributes them to third-party collaborators as secret shares. As a result, neither the third-party collaborators nor Meta can learn any individual user's survey response.
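The secret-sharing step described above can be sketched with additive secret sharing, one standard SMPC building block: a response is split into random-looking shares, any subset of which reveals nothing, while sums of shares can still be combined. The field modulus, party count, and integer encoding below are illustrative assumptions, not details of the actual system.

```python
import secrets

# A minimal sketch of additive secret sharing over a prime field.
# PRIME is an illustrative field modulus (a Mersenne prime), not a
# parameter of the real deployment.
PRIME = 2**61 - 1

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """Only the sum of all shares recovers the original value."""
    return sum(shares) % PRIME

# A hypothetical survey response, encoded as an integer category.
response = 3
assert reconstruct(share(response, n_parties=3)) == response

# Additivity: each party can sum its shares locally, so only the
# aggregate over many users is ever reconstructed, never one response.
s1, s2 = share(10, 3), share(7, 3)
agg = [(a + b) % PRIME for a, b in zip(s1, s2)]
assert reconstruct(agg) == 17
```

This additive property is what lets the collaborators compute aggregate statistics without any party, including Meta, seeing an individual answer.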

The third-party collaborators then combine their shares with encrypted prediction data from Meta's AI models to compute partial measurement results. Meta re-aggregates the de-identified results from each third-party collaborator into an overall fairness measurement.

The encryption technology used by the Oasis Labs platform enables Meta to measure bias and fairness while providing strong privacy protection for the individuals who contribute sensitive data.
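The article does not specify which fairness metric is computed. As one plausible illustration, a demographic-parity gap can be derived purely from aggregated, de-identified per-group counts, the only values that would ever leave the secure computation. The group names and counts below are hypothetical.

```python
# Hypothetical example: computing a demographic-parity gap from the
# aggregated, de-identified counts produced by the SMPC step. This is
# an assumed metric for illustration, not Meta's disclosed methodology.
def parity_gap(counts: dict[str, tuple[int, int]]) -> float:
    """counts maps group -> (positive model predictions, group size).
    Returns the largest difference in positive-prediction rate."""
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

aggregated = {                # hypothetical aggregate outputs
    "group_a": (120, 400),    # rate 0.30
    "group_b": (90, 300),     # rate 0.30
    "group_c": (50, 250),     # rate 0.20
}
print(round(parity_gap(aggregated), 2))  # → 0.1
```

Because only these group-level totals are revealed, the metric can be published and audited without exposing any individual's survey response or model prediction.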

Shared Vision: Benefiting Millions with Global Inclusivity and Fairness of AI Models

The Meta and Oasis teams share a common vision of Responsible AI and responsible data usage. The encryption technology adopted by the Oasis Labs platform is an unprecedented innovation at this scale of use, marking the beginning of a new journey.

Esteban Arcaute, Director of Responsible AI at Meta, stated:

We want to ensure that Meta's AI benefits people and society, which requires deep communication and collaboration across teams, both internally and externally. The Secure Multi-Party Computation method developed with Oasis Labs is a privacy-centric approach that enables critical fairness measurements while employing mature privacy-protection methods that put people's privacy first.

With this partnership in place, Oasis Labs will explore further privacy-protection methods with Meta for more complex bias studies. Because we want privacy protection to reach billions of people around the world, we hope to explore new uses of emerging, blockchain-based Web3 technologies through deeper collaboration. Our goal is to provide greater global accessibility, auditability, and transparency in how surveys are conducted, how survey data is collected, and how that data is used in computed measurements.

Professor Dawn Song, Founder of Oasis Labs, stated:

We are excited to be a technical partner of Meta in this groundbreaking initiative to assess the fairness of AI models while using cutting-edge encryption technology to protect user privacy. This is an unprecedented, real-world use of these technologies for large-scale measurement of AI model fairness. We look forward to working with Meta to advance Responsible AI and responsible data usage and to build a fairer, more inclusive society.

Oasis Labs Research Work and Development Mission

Responsible data usage and ownership have always been a top priority in Oasis Labs' development vision.

We believe that in the Web3 world, no entity can claim user data as its own or take it for granted. We are developing privacy-protection technologies that keep data ownership and control in the hands of individuals.

By leveraging blockchain, privacy-preserving computation, and related privacy-protection technologies, our vision is to build platforms and products that further individual privacy protection, data governance, and responsible data usage. Oasis's technology focuses on making it easier for developers to integrate privacy-protecting data storage, governance, and computation.

The power of decentralization and Web3 can reach people around the world. Combined with data privacy, it enables companies to reach global audiences and to collect and use sensitive data while preserving privacy, building better products and fostering a fairer environment.
