Features
Federated Learning
The AI model is initially split and distributed to all participating nodes. Each node receives a copy of the model, or a portion of it as appropriate, and trains it locally on its own dataset, in parallel with every other node. Local training runs for multiple iterations (epochs), during which the model learns patterns, features, and relationships from the data.

After local training, instead of sending raw data to a central server, each node sends only its model updates (e.g., gradients or weights) back to a central aggregation server or directly to the network's orchestration layer. The aggregation server, or in Orbitalx's case a decentralized aggregation protocol, collects the updates from all participating nodes and combines them to update the global model. This is typically done by averaging the weights, or by more sophisticated methods such as secure aggregation that preserve privacy and integrity.

The updated global model is then redistributed to all nodes and the process repeats. Over time, the model becomes more accurate as it learns from the diverse data spread across the network.
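The train-locally-then-average loop can be made concrete with a short sketch. The code below is a minimal illustration of the standard FedAvg weighted-averaging rule on a toy linear model, not Orbitalx's implementation; the function names (local_update, federated_average) and the training setup are assumptions made for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Train a linear model on one node's private data via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Combine per-node weight updates, weighted by local dataset size (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# One training round: each node trains locally; only weights leave the node.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
nodes = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in nodes]
    global_w = federated_average(updates, [len(y) for _, y in nodes])
```

Note that the aggregator only ever sees the weight vectors returned by local_update, never the raw (X, y) data held by each node.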
Secure Multi-Party Computation (MPC)
Before any computation begins, each node encrypts its data using a cryptographic scheme such as homomorphic encryption or secret sharing, which keeps the data confidential throughout the computation. The nodes then engage in a joint computation, collaboratively performing the calculations needed to train the AI model. During this process they exchange only encrypted values, so no single node ever has access to the complete dataset or to any individual data points from other nodes.

Intermediate results are also kept encrypted throughout, preventing any leakage of information and ensuring that only the final result needed for model training is revealed. Once the computation is complete, the nodes decrypt the final model update. This update reflects the aggregated knowledge of all nodes without exposing the underlying data used to produce it. As with Federated Learning, the updated model parameters are then aggregated across all participating nodes to refine the global model, which is shared back with the network.
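To build intuition for how nodes can compute on values they never see in the clear, here is a minimal sketch of additive secret sharing, one of the schemes mentioned above. It is an illustrative toy, not Orbitalx's protocol; the field modulus and function names are assumptions.

```python
import secrets

PRIME = 2**61 - 1  # field modulus; any single share looks uniformly random

def share(secret, n):
    """Split a secret into n additive shares; any n-1 of them reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Three nodes jointly compute the sum of their private values
# while no node ever sees another node's value.
private_values = [42, 17, 99]
n = len(private_values)
all_shares = [share(v, n) for v in private_values]
# Node i receives the i-th share of every value and sums locally.
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
assert reconstruct(partial_sums) == sum(private_values)
```

Because addition commutes with sharing, nodes can aggregate model updates (a sum of weight vectors) while each individual update stays hidden; only the final aggregate is ever reconstructed.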
Zero-Knowledge Proofs (ZKPs)
After a node has completed its assigned AI training task, it generates a Zero-Knowledge Proof. The proof demonstrates that the node performed the computation correctly and according to the protocol, without revealing the specifics of the data or the computation itself. The proof is then submitted to the network's verification layer, where a verifier (which could be a smart contract or a decentralized protocol within Orbitalx) checks its validity. The key property of ZKPs is that they allow the verifier to confirm the computation was done correctly without needing access to the data itself, so the network can trust the results submitted by nodes without compromising privacy.

Once the ZKP is validated, the node is rewarded with $ORB tokens; if the proof fails, the node may be penalized and its stake slashed. ZKPs are integrated with Orbitalx's Proof-of-Training (PoT) consensus mechanism, ensuring that only nodes that have correctly followed the protocol and produced valid results are rewarded, which maintains the integrity and security of the network.
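As a concrete (if greatly simplified) picture of the prove/verify interaction, the sketch below implements a classic zero-knowledge proof: a non-interactive Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir transform. It is a textbook toy standing in for the far more complex proofs of training; it is not Orbitalx's PoT circuit, and the group parameters are assumptions chosen for readability.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; real systems use standardized groups
G = 3           # group element serving as the base

def prove(x):
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)  # commitment
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    s = (r + c * x) % (P - 1)  # response bound to the hash challenge
    return y, t, s

def verify(y, t, s):
    """Check G^s == t * y^c mod P; the verifier never learns x."""
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret_x = secrets.randbelow(P - 1)
y, t, s = prove(secret_x)
assert verify(y, t, s)
```

In Orbitalx's setting the statement being proven is far richer ("this training computation was executed correctly"), but the shape is the same: the prover publishes a short proof, and the verifier checks it without any access to the private data.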