Earn HLM tokens with your GPU

Contribute your GPU and AI model to the OpenHLM network and earn 90% of the fee for every job you serve. Works behind NAT. No port forwarding needed.

Revenue Estimator

Monthly Jobs | Avg GAS | Avg Tokens | Monthly Revenue
1,000        | 10      | 500        | 4,500,000 HLM
10,000       | 10      | 500        | 45,000,000 HLM
100,000      | 10      | 500        | 450,000,000 HLM

Revenue = Jobs × GAS × Tokens × 0.90 (90% goes to node operator)
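
If you want to plug in your own numbers, the estimator is easy to reproduce. Below is a minimal sketch in Go, not an official tool, reading Avg GAS as the per-token price and Avg Tokens as tokens per job, which is the interpretation that makes the table rows come out:

package main

import "fmt"

// estimateMonthlyHLM applies the estimator formula above:
// revenue = jobs × gasPerToken × tokensPerJob × 0.90 (the node operator's share).
func estimateMonthlyHLM(jobs, gasPerToken, tokensPerJob float64) float64 {
	const operatorShare = 0.90
	return jobs * gasPerToken * tokensPerJob * operatorShare
}

func main() {
	// First row of the table: 1,000 jobs × 10 GAS × 500 tokens × 0.90 = 4,500,000 HLM.
	fmt.Printf("%.0f HLM\n", estimateMonthlyHLM(1000, 10, 500))
}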

Get Started in 3 Steps

1. Install the Agent

curl -fsSL https://openhlm.com/install.sh | bash

2. Onboard Your Model

openhlm-agent onboard \
  --pool llama-70b \
  --endpoint https://openhlm.com \
  --capacity 2 \
  --default-gas 10 \
  --badges solar-powered,carbon-free

This generates your wallet (12-word mnemonic — save it!), registers your node, and declares which model pools you serve.

Popular Model Pools

llama-70b, llama-7b, mixtral, deepseek-coder, phi-3, gemma-7b, mistral-7b, qwen-72b

You can create new pools too! Just use any model name.

3. Start Serving & Earning

openhlm-agent start

Your node connects to the OpenHLM network via gRPC and starts receiving inference jobs. You earn 90% of every fee automatically.

🌐 Works Behind NAT
Outbound gRPC connection. No port forwarding, no dynamic DNS needed.

🔒 Ed25519 Identity
Your node has a unique cryptographic identity. All messages are signed.

📊 Live Dashboard
Monitor your node, earnings, and reputation from the web dashboard.

🌿 Eco Badges
Declare your green credentials. Users can choose eco-friendly nodes.

🧠 Multi-Model
Serve multiple model pools from a single node. Ollama, llama.cpp, and vLLM support.

Reputation System
Higher reputation means more jobs and more earnings, based on your node's performance metrics.

🔗 Your Direct Chat Link
Get a unique shareable URL (openhlm.com/m/your-id). Share it anywhere; 100% of the GAS goes directly to you. Build your own customer base.

🎲 Fair Random Selection
Normal chat routes jobs by weighted random selection, not "best node wins", so even new nodes start earning from day one (see the sketch below).
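
The exact weighting isn't specified here, but conceptually it is reputation-weighted random selection. Here is a minimal sketch in Go; the Node type, its fields, and the numbers are invented for illustration and are not the network's actual scheduler:

package main

import (
	"fmt"
	"math/rand"
)

// Node is a hypothetical view of a candidate node; only the weight matters here.
type Node struct {
	ID         string
	Reputation float64 // higher reputation means a larger share of jobs
}

// pickWeighted chooses a node with probability proportional to its reputation,
// so low-reputation newcomers still receive some traffic instead of none.
func pickWeighted(nodes []Node) Node {
	total := 0.0
	for _, n := range nodes {
		total += n.Reputation
	}
	r := rand.Float64() * total
	for _, n := range nodes {
		r -= n.Reputation
		if r <= 0 {
			return n
		}
	}
	return nodes[len(nodes)-1] // floating-point fallback
}

func main() {
	nodes := []Node{{ID: "veteran", Reputation: 9}, {ID: "newcomer", Reputation: 1}}
	counts := map[string]int{}
	for i := 0; i < 10_000; i++ {
		counts[pickWeighted(nodes).ID]++
	}
	fmt.Println(counts) // roughly 9:1, but the newcomer earns from day one
}

The point is the distribution: a high-reputation node wins more often, but every registered node keeps a non-zero chance of being selected.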

Supported Model Runners

Ollama (Recommended): Easiest setup. Just install Ollama and pull a model.
llama.cpp (Advanced): Optimized C++ inference. Best for performance.
vLLM (Coming Soon): High-throughput serving. Multi-GPU support.
Custom Runner (DIY): Implement the Runner interface for any backend; a sketch of what such an interface might look like follows below.
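
The Runner interface itself isn't shown here, so this is only a hypothetical sketch of what a minimal backend contract could look like in Go; the method signatures and the Job/Result types are assumptions, not the project's actual API:

package runner

import "context"

// Job and Result are hypothetical stand-ins for whatever the agent actually
// passes between the network and the backend.
type Job struct {
	Model  string // pool name, e.g. "llama-70b"
	Prompt string
}

type Result struct {
	Text   string
	Tokens int // completion length, e.g. for GAS accounting
}

// Runner is the hypothetical backend contract: anything that can turn a Job
// into a Result (Ollama, llama.cpp, vLLM, or your own server) could implement it.
type Runner interface {
	// Models reports which model pools this backend can serve.
	Models(ctx context.Context) ([]string, error)
	// Run executes one inference job and returns the completion.
	Run(ctx context.Context, job Job) (Result, error)
}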

Ready to contribute?

Join the network and start earning HLM tokens today.