Ocean, an open-source data-exchange protocol, and Raven, a decentralized and distributed deep-learning training protocol, are partnering. Raven will become a Compute Provider on Ocean Market and publish machine-learning algorithms for Federated Analytics and Federated Learning.
Raven resolves the trade-off between the benefits of using private data and the risk of exposing it. On Ocean Market, the data remains on-premise while third parties run algorithms against it to obtain valuable results, such as aggregate statistics or a trained AI model.
Democratizing Private Data And Training AI Models
Ocean provides access to datasets and AI in a democratized manner. Raven’s decentralized algorithms enable cost-effective training of neural networks, and Raven prioritizes ethical handling of the privacy and security of AI training data. It has been working to integrate various encryption techniques into its protocol.
Compute-to-Data on Ocean Market is a perfect fit here. As a first step, data publishers approve the AI algorithms allowed to run on their data; Raven Protocol or other third parties can publish those algorithms. Compute-to-Data then trains AI models by orchestrating remote computation and execution on the required data.
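The approve-then-orchestrate flow described above can be sketched as follows. This is a minimal illustration, not Ocean's actual API; the class and function names (`DataAsset`, `run_compute_job`, the algorithm ID) are hypothetical.

```python
# Hypothetical sketch of a Compute-to-Data approval/orchestration flow.
# All names here are illustrative, not Ocean Protocol's real API.

class DataAsset:
    """A published dataset whose owner controls which algorithms may run on it."""

    def __init__(self, owner):
        self.owner = owner
        self.trusted_algorithms = set()

    def approve_algorithm(self, algorithm_id):
        # Step 1: the data publisher whitelists an algorithm.
        self.trusted_algorithms.add(algorithm_id)

def run_compute_job(asset, algorithm_id):
    # Step 2: computation is orchestrated remotely, next to the data,
    # only if the publisher has approved this algorithm.
    if algorithm_id not in asset.trusted_algorithms:
        raise PermissionError("algorithm not approved by the data publisher")
    return f"job started: {algorithm_id} on {asset.owner}'s data"

asset = DataAsset(owner="publisher")
asset.approve_algorithm("raven-federated-avg")
print(run_compute_job(asset, "raven-federated-avg"))
```

The key design point is that the raw data never reaches the algorithm publisher; only the job's results leave the compute environment.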
The remote computation can be handled either by the Raven Network or by other Compute Providers approved by the data publisher. Razvan Olteanu, Chief Operating Officer at Ocean Protocol, is excited about the partnership, as AI is at the heart of what the protocol does. He states,
“Ocean’s mission is to democratize access to data sets and AI capabilities; this aligns perfectly with Raven’s mission to provide cost-efficient and faster training of deep neural networks using a decentralized and distributed network of compute nodes. Together we are one step closer to unlocking the Open Data Economy.”
Upholding The Spirit Of Decentralization
Ocean has set up its Compute-to-Data infrastructure as a Kubernetes (K8s) cluster backed by AWS or Azure. The cluster runs the actual compute jobs away from the eyes of end-users and clients.
Raven Protocol gives data publishers and users an alternative Compute Provider to approve, keeping with the spirit of decentralization. Since private data is involved, users are more likely to treat decentralization as a strict requirement.
An Additional Layer Of Security
Ocean’s Compute-to-Data gains an additional layer of privacy thanks to Raven Protocol. When Raven publishes or runs a Federated Learning algorithm, a neural network is randomly initialized, and the required weight updates are computed next to the data itself, inside a data silo.
The updated weights are then sent across the data network, and the process repeats in the subsequent data silos (data silo #1, data silo #2, data silo #3, and so on). The neural network is thus trained on all of these data silos without the data ever leaving the premises of its respective silo. The Raven Distribution Framework enables this within Compute-to-Data.
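The loop above — random initialization, per-silo weight updates, aggregation of updates rather than data — can be sketched as a toy federated-averaging example. The silo data, the one-parameter "model," and the plain averaging scheme are illustrative assumptions, not Raven's actual framework.

```python
# Toy sketch of federated training across data silos.
# Only weight updates leave each silo; the raw data never does.
import random

def local_update(weights, silo_data, lr=0.1):
    # Gradient step computed next to the data (toy task: fit the mean).
    grad = sum(weights[0] - x for x in silo_data) / len(silo_data)
    return [weights[0] - lr * grad]

def federated_train(silos, rounds=50):
    weights = [random.random()]  # random initialization, as described above
    for _ in range(rounds):
        # Each silo computes an update locally on its own data...
        updates = [local_update(weights, silo) for silo in silos]
        # ...and only the updates are aggregated into the shared model.
        weights = [sum(u[0] for u in updates) / len(updates)]
    return weights

silos = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # data stays in these silos
print(federated_train(silos))  # approaches the average of the silo means (3.5)
```

Real federated systems layer secure aggregation and encryption on top of this pattern, which is where Raven's work on encryption techniques comes in.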
Kailash Ahirwar, Co-founder of Raven Protocol, says that at Raven, they believe that distributed computing is the future of computing. He further adds,
“Our framework RDF is a distributed and decentralised compute engine, consisting of various libraries like RavOp, RavML and RavDL. As we are inching towards our goal, our partnership with Ocean is going to be a powerful collaboration providing value to our contributors, Ocean’s data-providers and clients. We are super excited to be working with Ocean’s team.”
Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.