What are MMC and the POW + POS Mechanism?

MMC Network

MMC is a blockchain computing power network designed to serve AI applications. Unlike traditional blockchains, MMC incorporates computing power consensus for completing "AI tasks".

The MMC network consists of validator nodes, smart router nodes, and computing power nodes. For more information, please refer to the whitepaper.

POW + POS (Proof of AI Tasks)

Node providers run the node and AI services, execute the node provider command, and register and bind the node's revenue wallet address.

Use the registered wallet to connect to the MicroMatrix Node DApp for node staking.

Every 1-2 hours, MMC network validator nodes check the status of currently online nodes (nodes that reported their status within the last 30 minutes) and send an API detection request to MMC Hub.

MMC Hub records every request from validators or API users, saving the call results (including the executing node, execution status, and API credit consumption).

Every 2 hours, MMC Hub performs a power settlement round based on the API call records, recording summarized data for each node successfully called through the API. This data includes the credit points consumed and the reward weights (the algorithm is described in the whitepaper).

Node providers connect to the MicroMatrix Node DApp with their registered wallet to query settled rewards in real time, initiate a reward extraction request, wait for the node reward contract to synchronize the data, and then claim their rewards.
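The settlement round described above can be sketched in code. This is a minimal illustration only: the record fields and the proportional-weight formula below are assumptions, and the actual reward algorithm is defined in the whitepaper.

```python
from collections import defaultdict

# Hypothetical shape of an MMC Hub call record (field names are assumptions,
# not the actual MMC Hub schema).
call_records = [
    {"node": "node-a", "status": "success", "credits": 10},
    {"node": "node-b", "status": "success", "credits": 30},
    {"node": "node-a", "status": "failed",  "credits": 0},
    {"node": "node-a", "status": "success", "credits": 20},
]

def settle(records):
    """One settlement round: sum credits per successfully called node,
    then derive a proportional reward weight (illustrative formula)."""
    credits = defaultdict(int)
    for r in records:
        if r["status"] == "success":
            credits[r["node"]] += r["credits"]
    total = sum(credits.values())
    return {node: {"credits": c, "weight": c / total}
            for node, c in credits.items()}

summary = settle(call_records)
print(summary["node-a"])  # {'credits': 30, 'weight': 0.5}
```

The sketch captures the shape of the flow: only successful calls contribute, per-node credits are summarized, and each node's share of the round's rewards follows from its relative contribution.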

How to Join MMC and Earn Rewards? (Testnet)

1. Join the Network

Computing nodes must join the MMC computing power network and become part of it. Once joined, a node can connect to and communicate with the other nodes in the network.

2. How to Participate

This guide provides essential information for participants, outlining the current priority focus on collaborating with partners through the MicroMatrix Partner Network (EPN) and how individual node operators can contribute to the testnet.

3. EPN (MicroMatrix Partner Network) Program

The MMC testnet prioritizes collaboration with partners within the MicroMatrix Partner Network (EPN). If you belong to a global Internet Data Center (IDC) or computing power center and can provide stable computing power, consider joining the MMC EPN program. For inquiries, please reach out to us on Discord.

4. Individual Nodes

If you are an individual node operator capable of contributing stable computing power to support global AI applications and clients, you can also contact us to join the testnet whitelist on Discord.

How to Deploy an MMC Node

Requirements

The MMC testnet currently prioritizes GPU providers from EPN (MicroMatrix Partner Network) partners.

If you represent a global IDC or computing power center, consider joining the EPN program.

If you are an individual node operator capable of providing stable computing power for global AI applications and clients, feel free to contact us to join the testnet whitelist. Please reach out to the administrators on Discord for further details before following the instructions below or staking $MMC to your node.

System Requirements

System: Ubuntu 22.04

NVIDIA driver: 535; GPU memory: >= 12 GB

1. Install Docker and NVIDIA Container Virtualization


# 1.install dependencies

sudo apt-get update

sudo apt-get install ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

sudo chmod a+r /etc/apt/keyrings/docker.gpg

# 2.config source

echo \

"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \

$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \

sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# 3.install Docker

sudo apt update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# 4.install NVIDIA GPU Docker Virtualization

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \

&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \

&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \

sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \

sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update

sudo apt-get install -y nvidia-container-toolkit

sudo systemctl restart docker

sudo nvidia-ctk runtime configure --runtime=docker

sudo systemctl restart docker

# 5.validate

sudo docker run --rm --gpus=all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi

2. Deploy MMC and MMC Cloud


# Linux

curl -s https://install.Micromatrix.pro/setup_linux.sh | sh  # Initialize MMC

curl -s https://install.Micromatrix.pro/setup_cloud_linux.sh | sh  # Initialize MMC Cloud

# start node

cd MMC

./start.sh

# start MMC CLOUD

cd ../cloud

./cloud_client.sh

3. Register Wallet for Rewards


# replace 'xxxxxx' with your own Solana (SOL) wallet address

cd ../MMC

./Micro-matrix node register --commit set --node computing --owner xxxxxx

4. Bind Wallet Address and Stake

First, ensure that the node is running normally; otherwise, you cannot register the SOL wallet address (the wallet address used for staking and receiving rewards on the node).

Please reach out to the administrators on Discord for further details on the whitelist before staking $MMC to your node.

URL: https://dashboard.Micromatrix.pro/#/nodes/xxxxxx

Note: replace 'xxxxxx' with your Node ID (the Node ID is displayed when MMC Cloud starts).

5. Validation and Query Commands


# show version

./Micro-matrix version

# show node ID

./Micro-matrix secrets output --data-dir Micro_data

# show node status

./Micro-matrix node status

Validator nodes and routing nodes must stake MMC tokens, so most ordinary users register as computing power nodes. If you need to register as a validator node or a routing node, please contact us.

Reinstallation

Back up the MMC/Micro_data folder before reinstalling; this ensures the node keeps the same node ID and wallet address.


# 1.kill the MMC process

sudo pkill Micro-matrix

sudo pkill nomad

# 2.backup the MMC/Micro_data folder

cd MMC

sudo cp -r Micro_data ../

# 3.remove the MMC and cloud folder, remove MMC_cloud_linux_64.tgz and MMC_linux_64.tgz file

cd ..

sudo rm -rf MMC cloud MMC_cloud_linux_64.tgz MMC_linux_64.tgz

# 4.go to the first page of How to Deploy MMC node and reinstall

# 5.copy the Micro_data folder back when the reinstall finishes

sudo cp -rf Micro_data MMC/

If you have any questions, you can leave a message on Telegram or Discord.

AI Training and Inference

The process of utilizing MMC for AI training and AI inference is as follows:

Resource Preparation: MMC integrates idle GPUs from abandoned Ethereum (ETH) miners and Filecoin (FIL) miners to build a distributed GPU computing network. Before using MMC, it is necessary to ensure that the computing nodes have the appropriate hardware and the capability to connect to the MMC network.

Task Scheduling: The MMC protocol handles task scheduling and task distribution. During AI training, the task scheduling module divides the tasks into appropriate computing units and distributes them to participating computing nodes in the MMC network. This ensures that tasks can be executed efficiently and in parallel across the distributed GPU computing nodes.
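The splitting-and-distribution step might look roughly like the sketch below; the fixed unit size and the round-robin assignment are illustrative assumptions, not MMC's actual scheduling algorithm.

```python
def schedule(task_items, nodes, unit_size=2):
    """Split a task into fixed-size computing units and assign them
    round-robin across the available computing nodes (illustrative only)."""
    units = [task_items[i:i + unit_size]
             for i in range(0, len(task_items), unit_size)]
    assignment = {node: [] for node in nodes}
    for i, unit in enumerate(units):
        assignment[nodes[i % len(nodes)]].append(unit)
    return assignment

# Six work items spread across two hypothetical GPU nodes.
plan = schedule(list(range(6)), ["gpu-node-1", "gpu-node-2"])
print(plan)  # {'gpu-node-1': [[0, 1], [4, 5]], 'gpu-node-2': [[2, 3]]}
```

Each node then processes its assigned units in parallel with the others, which is what allows the network to accelerate large tasks.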

AI Model Training: When using MMC for AI model training, the task scheduling module distributes the training tasks to available computing nodes. These nodes utilize their GPU computing power to execute the training tasks, performing backpropagation and parameter optimization using datasets, ultimately training high-performance AI models.

AI Inference: Once the AI models are trained, MMC can be used for AI inference. Inference tasks involve passing input data to the trained models and generating corresponding output results. The task scheduling module distributes the inference tasks to computing nodes, which leverage their GPU computing power to perform real-time processing of input data and generate inference results.
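In the same illustrative spirit, the inference path can be sketched as routing each input to a node and invoking that node's model; the round-robin routing rule and the stub models are assumptions for the example, not MMC's actual dispatch logic.

```python
def route_inference(inputs, nodes, models):
    """Distribute inference requests round-robin across nodes, run each
    node's model on its input, and collect results (illustrative only)."""
    results = []
    for i, x in enumerate(inputs):
        node = nodes[i % len(nodes)]
        results.append({"node": node, "output": models[node](x)})
    return results

# Stub "models": in a real deployment each node serves a trained model.
models = {"gpu-node-1": lambda x: x * 2, "gpu-node-2": lambda x: x * 2}
out = route_inference([1, 2, 3], ["gpu-node-1", "gpu-node-2"], models)
print([r["output"] for r in out])  # [2, 4, 6]
```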

By distributing AI training and inference tasks to the distributed GPU computing network of MMC, task processing speed and computational efficiency can be accelerated. MMC's distributed architecture provides greater computational power and flexibility, enabling more efficient and scalable AI model training and inference.

It is important to note that utilizing MMC for AI training and inference requires appropriate technical configurations and resource management to ensure smooth task execution. Additionally, MMC provides economic incentives to encourage participation and contribution from computing nodes, further promoting the development and application of AI training and inference.


2024-2025 MMC FOUNDATION.

All Rights Reserved.