How it works
Nodes
Each node in the Orbitalx network runs dedicated software that lets it communicate with the network, receive tasks, and perform computations on its GPU. The software includes modules for task management, data encryption, and secure communication with other nodes.

Nodes handle data locally: the raw data used for training AI models is never uploaded to a central server. Instead, the data remains on the user's device, where the node extracts features and trains the model. Training occurs entirely within the node, which uses its GPU for the computationally intensive work, such as running deep learning algorithms, optimizing model parameters, and generating gradients.

After training, the node produces the essential outputs, such as updated model parameters or gradients. These outputs are encrypted and securely shared with the network's orchestration layer for aggregation, so no raw data is exposed in the process. Nodes operate autonomously and do not need to trust other nodes or a central authority. They also incorporate security features such as secure boot, encrypted storage, and network-level encryption to protect data and computation integrity.
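The lifecycle described above can be pictured with a minimal Python sketch: a node trains on data that never leaves the device, computes a gradient update, and encrypts the result before sharing it with the orchestration layer. The class and method names are hypothetical (they are not an Orbitalx SDK), and the Fernet cipher from the cryptography package merely stands in for whatever encryption scheme the node software actually uses.

```python
# Hypothetical sketch of a node's local training lifecycle; names are illustrative.
import json
import numpy as np
from cryptography.fernet import Fernet  # assumes the third-party 'cryptography' package

class OrbitalxNode:
    def __init__(self, local_data, local_labels):
        self.x = local_data                          # raw data never leaves this object
        self.y = local_labels
        self.cipher = Fernet(Fernet.generate_key())  # in practice, keys come from the network

    def local_training_step(self, weights, lr=0.01):
        """One gradient step of a simple linear model on the node's private data."""
        error = self.x @ weights - self.y
        grads = self.x.T @ error / len(self.y)       # computed locally (on the GPU in practice)
        return weights - lr * grads, grads

    def package_update(self, grads):
        """Encrypt the gradient update before it is shared for aggregation."""
        payload = json.dumps({"grads": grads.tolist()}).encode()
        return self.cipher.encrypt(payload)          # only ciphertext leaves the node

# Usage: the raw data stays on the node; only the encrypted update is shared.
node = OrbitalxNode(np.random.randn(32, 4), np.random.randn(32))
weights, grads = node.local_training_step(np.zeros(4))
encrypted_update = node.package_update(grads)
```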
Privacy Protocols
Orbitalx allows each node to train AI models locally on its own data; only the resulting model updates are aggregated across nodes, and the underlying data is never shared. This is achieved through:
Local Training: Nodes perform training on their local data and generate updates to the model's parameters.
Secure Aggregation: The updates are encrypted and sent to the orchestration layer, where they are securely aggregated to improve the global model.
Multi-Party Computation (MPC): In scenarios where multiple nodes need to collaborate on a computation, MPC ensures that this can be done without exposing individual datasets (a simplified sketch follows this list). The MPC protocol involves:
Each node encrypts its data using techniques like secret sharing or homomorphic encryption.
Nodes collaboratively perform the required computation on encrypted data, exchanging encrypted results.
The final result, which is needed for model training, is decrypted and used without ever revealing the underlying data.
Data communications within the Orbitalx network are encrypted with strong ciphers such as AES-256, so data in transit cannot be intercepted or tampered with. Orbitalx may also apply differential privacy, adding calibrated noise to data or model updates before they are shared, so that individual data points cannot be reverse-engineered from the aggregated model updates.
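As a rough illustration, the sketch below shows how secure aggregation could work with additive secret sharing plus optional differential-privacy noise: each node splits its (noised) update into random-looking shares, each aggregator sums the shares it receives, and only the final combination reveals the aggregate. The share counts, noise scale, and function names are assumptions for illustration, not the protocol Orbitalx actually ships.

```python
# Illustrative secure aggregation via additive secret sharing, with optional DP noise.
import numpy as np

def make_shares(update, n_shares, rng):
    """Split an update into n additive shares that individually look random."""
    shares = [rng.normal(size=update.shape) for _ in range(n_shares - 1)]
    shares.append(update - sum(shares))          # shares sum exactly to the original update
    return shares

def add_dp_noise(update, sigma, rng):
    """Blur an update so individual data points cannot be reverse-engineered."""
    return update + rng.normal(scale=sigma, size=update.shape)

rng = np.random.default_rng(0)
node_updates = [rng.normal(size=4) for _ in range(3)]    # one gradient update per node

# Each node adds noise, splits its update, and sends one share to each aggregator.
all_shares = [make_shares(add_dp_noise(u, sigma=0.01, rng=rng), 3, rng) for u in node_updates]

# Each aggregator sums the shares it received; no single aggregator learns any node's update.
partial_sums = [sum(shares[i] for shares in all_shares) for i in range(3)]

# Only the final combination reveals the aggregate used to refine the global model.
global_update = sum(partial_sums) / len(node_updates)
```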
Orchestration Layer
The orchestration layer dynamically assigns tasks to nodes based on their current capacity, GPU availability, and performance history. This involves:
The system evaluates the computational power, memory, and bandwidth of each node to assign appropriate tasks. For instance, a high-powered node might be tasked with training a large neural network, while a lower-powered node might handle simpler models or data preprocessing.
The orchestration layer balances work evenly across the network to avoid overloading some nodes while underutilizing others (a simplified scoring sketch follows below).
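A minimal sketch of capacity-aware assignment is shown below; the node and task fields, scoring rule, and function names are assumptions for illustration, not Orbitalx's actual scheduler.

```python
# Hypothetical capacity-aware task assignment; fields and weights are illustrative.
from dataclasses import dataclass

@dataclass
class NodeProfile:
    node_id: str
    gpu_tflops: float        # raw compute capability
    free_memory_gb: float
    reliability: float       # 0..1, derived from performance history
    active_tasks: int = 0

@dataclass
class Task:
    task_id: str
    min_memory_gb: float

def score(node: NodeProfile, task: Task) -> float:
    """Higher is better: prefer capable, reliable, lightly loaded nodes."""
    if node.free_memory_gb < task.min_memory_gb:
        return float("-inf")                     # node cannot host this task at all
    load_penalty = 1.0 / (1 + node.active_tasks) # spread work to avoid overloading nodes
    return node.gpu_tflops * node.reliability * load_penalty

def assign(task: Task, nodes: list[NodeProfile]) -> NodeProfile:
    best = max(nodes, key=lambda n: score(n, task))
    best.active_tasks += 1                       # track load for subsequent assignments
    return best

nodes = [NodeProfile("a", 80.0, 24.0, 0.99), NodeProfile("b", 20.0, 8.0, 0.95)]
chosen = assign(Task("train-large-model", min_memory_gb=16.0), nodes)   # picks node "a"
```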
The orchestration layer tracks the progress of each task, managing dependencies and ensuring that all components of the AI model training are completed efficiently. It handles:
Prioritizing and scheduling tasks based on the overall network demand and node availability.
If a node fails or becomes unavailable, the orchestration layer reallocates the task to another suitable node.
Secure Communication: The orchestration layer uses secure communication protocols (such as TLS or custom encrypted channels) to send and receive data from nodes. This ensures that task assignments and results are transmitted safely.
Aggregation and Integration: Once model updates are received from the various nodes, the orchestration layer aggregates these updates to refine the global model. This aggregation is done securely, often involving techniques like secure aggregation to maintain privacy.
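The aggregation and integration step can be pictured as a weighted average of node updates in the style of federated averaging, as in the sketch below; the weighting rule and learning rate are assumptions, since Orbitalx's exact integration rule is not specified here.

```python
# Sketch of aggregation and integration, assuming a FedAvg-style weighted average.
import numpy as np

def aggregate(updates, sample_counts):
    """Combine per-node updates, weighting each node by how much data it trained on."""
    total = sum(sample_counts)
    return np.sum([u * (n / total) for u, n in zip(updates, sample_counts)], axis=0)

def integrate(global_weights, aggregated_update, lr=1.0):
    """Apply the aggregated update to refine the global model."""
    return global_weights - lr * aggregated_update

updates = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([0.1, -0.3])]
new_global = integrate(np.zeros(2), aggregate(updates, sample_counts=[100, 300, 50]))
```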
Consensus Mechanism: Proof-of-Training (PoT)
When a node completes a training task, it submits the result to the network, along with a proof that it has followed the training protocol correctly. This proof may include:
Evidence that verifies the correctness of the computations without revealing the underlying data.
Validation results showing that the trained model or updates meet accuracy thresholds when checked against a validation dataset.
The network uses the PoT mechanism to collectively verify the contributions of all nodes. This process includes:
Other nodes in the network may be tasked with verifying the results of a given node. This peer review process adds an additional layer of security and accuracy.
Special nodes (or a decentralized committee) may be responsible for the final validation of proofs and model updates.
Once the work is verified, nodes are rewarded with $ORB tokens based on the quality and quantity of their contributions; the PoT mechanism ensures that rewards are proportional to a node's effort and the accuracy of its results. If a node submits invalid results or is found to be acting maliciously, its staked $ORB tokens may be slashed as a penalty, which deters bad behavior and incentivizes nodes to act honestly.
PoT ensures that only accurate and reliable model updates are integrated into the global model, maintaining the integrity of the AI training process and ensuring that the final model is trustworthy and robust.
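The accounting side of PoT can be sketched as below: a submission is checked against an accuracy threshold, rewarded in proportion to verified work and result quality, or penalized by slashing part of the stake. The threshold, tolerance, reward rate, and slash fraction are illustrative placeholders, not published Orbitalx parameters.

```python
# Hypothetical Proof-of-Training settlement logic; all constants are illustrative.
from dataclasses import dataclass

@dataclass
class Submission:
    node_id: str
    claimed_accuracy: float
    validated_accuracy: float   # re-measured by verifier nodes on a validation dataset
    compute_units: float        # amount of verified training work

@dataclass
class NodeAccount:
    staked_orb: float
    reward_orb: float = 0.0

ACCURACY_THRESHOLD = 0.90       # minimum validated accuracy to accept an update
TOLERANCE = 0.02                # allowed gap between claimed and validated accuracy
REWARD_PER_UNIT = 0.5           # $ORB per verified compute unit
SLASH_FRACTION = 0.10           # share of stake slashed for invalid or dishonest results

def settle(sub: Submission, account: NodeAccount) -> bool:
    """Reward honest, accurate work; slash the stake otherwise. Returns acceptance."""
    honest = abs(sub.claimed_accuracy - sub.validated_accuracy) <= TOLERANCE
    accurate = sub.validated_accuracy >= ACCURACY_THRESHOLD
    if honest and accurate:
        # Rewards scale with both the amount of work and the quality of the result.
        account.reward_orb += REWARD_PER_UNIT * sub.compute_units * sub.validated_accuracy
        return True
    account.staked_orb *= (1 - SLASH_FRACTION)   # penalize invalid or malicious submissions
    return False

account = NodeAccount(staked_orb=1_000.0)
accepted = settle(Submission("node-7", 0.95, 0.94, compute_units=12), account)
```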