# Installation & configuration

This guide covers deploying the Asset Tokenization Kit to Kubernetes environments using the included Helm charts.
## Overview
The deployment architecture consists of:
- Blockchain network (Hyperledger Besu validators and RPC nodes)
- RPC gateway (ERPC for load balancing and caching)
- Indexing layer (TheGraph node and Blockscout explorer)
- Application layer (DApp frontend, Portal IAM, Hasura GraphQL)
- Support services (PostgreSQL, Redis, MinIO, NGINX Ingress)
- Observability stack (Grafana, Loki, VictoriaMetrics, Tempo)
## Helm chart structure

The Asset Tokenization Kit uses an umbrella chart architecture located in `kit/charts/atk/`.
### Chart dependencies

The main chart (`kit/charts/atk/Chart.yaml`) orchestrates 11 dependent subcharts:

```yaml
dependencies:
  - name: support        # Infrastructure (NGINX, Redis, PostgreSQL, MinIO)
  - name: observability  # Metrics, logs, traces (Grafana, Loki, VictoriaMetrics)
  - name: network        # Blockchain network (Besu nodes)
  - name: erpc           # RPC gateway with caching
  - name: ipfs           # IPFS cluster for distributed storage
  - name: blockscout     # Blockchain explorer
  - name: graph-node     # TheGraph indexing protocol
  - name: portal         # Identity and access management
  - name: hasura         # GraphQL engine for database
  - name: txsigner       # Transaction signing service
  - name: dapp           # Frontend application
```

Each subchart can be enabled or disabled via its `enabled` flag in `values.yaml`.
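To confirm which subchart versions the umbrella chart resolves, you can list its declared dependencies (run from the repository root):

```sh
# Show each dependency from Chart.yaml and whether it has been downloaded
helm dependency list kit/charts/atk
```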
### Directory layout

```
kit/charts/atk/
├── Chart.yaml              # Chart metadata and dependencies
├── values.yaml             # Default configuration values
├── values-openshift.yaml   # OpenShift-specific overrides
├── templates/              # Kubernetes resource templates
│   ├── _helpers.tpl        # Template helpers
│   ├── _common-helpers.tpl # Shared helper functions
│   └── image-pull-secrets.yaml
└── charts/                 # Subchart definitions
```

The chart sources are on GitHub: [Chart.yaml](https://github.com/settlemint/asset-tokenization-kit/blob/main/kit/charts/atk/Chart.yaml), [values.yaml](https://github.com/settlemint/asset-tokenization-kit/blob/main/kit/charts/atk/values.yaml), and the [charts/ directory](https://github.com/settlemint/asset-tokenization-kit/tree/main/kit/charts/atk/charts).

## Prerequisites
Before deploying, ensure you have:

- Kubernetes cluster (v1.27+)
  - Minimum 8 CPU cores, 32GB RAM for full deployment
  - Storage provisioner with dynamic PVC support
  - LoadBalancer or Ingress controller support
- Required tools:
  - `kubectl` (v1.27+)
  - `helm` (v3.13+)
  - `bun` (for chart documentation generation)
- Container registry access:
  - GitHub Container Registry (`ghcr.io`)
  - Docker Hub (`docker.io`)
  - Kubernetes Registry (`registry.k8s.io`)
- DNS configuration:
  - Wildcard DNS or individual records for each service hostname
  - Default hostnames use `.k8s.orb.local` (customize for your environment)
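A quick preflight check of the required tools and storage setup (a sketch; compare the output against the minimums above):

```sh
# Client tool versions
kubectl version --client   # expect v1.27 or newer
helm version --short       # expect v3.13 or newer

# Dynamic provisioning requires a default StorageClass
kubectl get storageclass
```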
## Installation steps

### 1. Build chart dependencies

Navigate to the charts directory and update dependencies:

```sh
cd kit/charts/atk
helm dependency update
```

This downloads all subchart dependencies into the `charts/` directory.
### 2. Configure values

Create a custom values file for your environment. Start by copying the default:

```sh
cp values.yaml values-production.yaml
```

### 3. Customize configuration

Edit `values-production.yaml` to match your environment.
#### Update hostnames

Replace all `.k8s.orb.local` hostnames with your domain:

```yaml
# Use .k8s.orb.local (default) or .localhost
dapp:
  ingress:
    hosts:
      - host: dapp.k8s.orb.local
erpc:
  ingress:
    hostname: rpc.k8s.orb.local
blockscout:
  blockscout:
    ingress:
      hostname: explorer.k8s.orb.local
```

```yaml
# Staging environment with subdomain
dapp:
  ingress:
    hosts:
      - host: dapp-staging.example.com
erpc:
  ingress:
    hostname: rpc-staging.example.com
blockscout:
  blockscout:
    ingress:
      hostname: explorer-staging.example.com
graph-node:
  ingress:
    hostname: graph-staging.example.com
hasura:
  ingress:
    hostName: hasura-staging.example.com
portal:
  ingress:
    hostname: portal-staging.example.com
observability:
  grafana:
    ingress:
      hosts:
        - grafana-staging.example.com
```

```yaml
# Production with custom domain
dapp:
  ingress:
    hosts:
      - host: dapp.example.com
erpc:
  ingress:
    hostname: rpc.example.com
blockscout:
  blockscout:
    ingress:
      hostname: explorer.example.com
graph-node:
  ingress:
    hostname: graph.example.com
hasura:
  ingress:
    hostName: hasura.example.com
portal:
  ingress:
    hostname: portal.example.com
observability:
  grafana:
    ingress:
      hosts:
        - grafana.example.com
```
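Before deploying, you can confirm the new hostnames resolve (assuming the wildcard or individual DNS records from the prerequisites are already in place):

```sh
# Each name should resolve to your ingress load balancer address
dig +short dapp.example.com rpc.example.com explorer.example.com
```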
#### Update authentication URLs

The DApp requires `BETTER_AUTH_URL` to match the ingress hostname:

```yaml
dapp:
  secretEnv:
    BETTER_AUTH_URL: "https://dapp.example.com"
```
#### Update database and storage passwords

**CRITICAL:** Change all default passwords before production deployment:

```yaml
global:
  datastores:
    default:
      redis:
        password: "YOUR_REDIS_PASSWORD"
      postgresql:
        password: "YOUR_PG_PASSWORD"
    portal:
      postgresql:
        password: "YOUR_PORTAL_DB_PASSWORD"
    txsigner:
      postgresql:
        password: "YOUR_TXSIGNER_DB_PASSWORD"
    graphNode:
      postgresql:
        password: "YOUR_GRAPH_DB_PASSWORD"
    blockscout:
      postgresql:
        password: "YOUR_BLOCKSCOUT_DB_PASSWORD"
    hasura:
      postgresql:
        password: "YOUR_HASURA_DB_PASSWORD"

# Redis auth
support:
  redis:
    auth:
      password: "YOUR_REDIS_PASSWORD"

# Grafana admin credentials
observability:
  grafana:
    adminUser: admin
    adminPassword: "YOUR_GRAFANA_PASSWORD"
```
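Rather than committing real secrets to the values file, you can generate them and pass overrides at install time (a sketch; the `--set` paths mirror the YAML keys above):

```sh
# Generate a random password and inject it at deploy time
PG_PASSWORD="$(openssl rand -base64 24)"

helm upgrade --install atk . \
  --namespace atk \
  --values values-production.yaml \
  --set global.datastores.default.postgresql.password="$PG_PASSWORD"
```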
#### Update transaction signer mnemonic

**CRITICAL:** Generate a new mnemonic for production:

```yaml
txsigner:
  config:
    mnemonic: "YOUR_PRODUCTION_MNEMONIC_HERE"
    derivationPath: "m/44'/60'/0'/0/0"
```

Use a secure mnemonic generator or BIP39 tool. Never commit this to version control.
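One option for generating a fresh BIP39 mnemonic is Foundry's `cast` (an assumption that Foundry is installed; any audited BIP39 tool works equally well):

```sh
# Generates a new 12-word BIP39 mnemonic locally
cast wallet new-mnemonic
```

Store the phrase in a secrets manager and inject it at deploy time (for example via `--set txsigner.config.mnemonic=...`) rather than writing it into a tracked file.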
#### Configure resource limits

Adjust resource requests and limits based on your cluster capacity:

```yaml
# Minimal resources for local testing
network:
  network-nodes:
    validatorReplicaCount: 1
    rpcReplicaCount: 1
    resources:
      requests:
        cpu: "60m"
        memory: "512Mi"
      limits:
        cpu: "360m"
        memory: "1024Mi"
dapp:
  resources:
    requests:
      cpu: "100m"
      memory: "512Mi"
    limits:
      cpu: "500m"
      memory: "1024Mi"
```

```yaml
# Moderate resources for testing
network:
  network-nodes:
    validatorReplicaCount: 2
    rpcReplicaCount: 2
    resources:
      requests:
        cpu: "200m"
        memory: "1024Mi"
      limits:
        cpu: "1000m"
        memory: "2048Mi"
dapp:
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "2000m"
      memory: "2048Mi"
```

```yaml
# Full resources for production workloads
network:
  network-nodes:
    validatorReplicaCount: 4
    rpcReplicaCount: 3
    resources:
      requests:
        cpu: "500m"
        memory: "2048Mi"
      limits:
        cpu: "2000m"
        memory: "4096Mi"
dapp:
  resources:
    requests:
      cpu: "500m"
      memory: "2048Mi"
    limits:
      cpu: "4000m"
      memory: "4096Mi"
```

See the Resource summary section below for default allocations.
#### Configure storage sizes

Adjust persistent volume sizes for your data retention requirements:

```yaml
# Blockchain data
network:
  network-nodes:
    persistence:
      size: 100Gi  # Scale based on expected chain growth

# Metrics and log retention
observability:
  victoria-metrics-single:
    server:
      persistentVolume:
        size: 50Gi
  loki:
    singleBinary:
      persistence:
        size: 50Gi
```
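If you expect data growth, check whether your StorageClass supports online volume expansion so these PVCs can be enlarged later without redeploying:

```sh
# List StorageClasses and whether they allow volume expansion
kubectl get storageclass -o custom-columns='NAME:.metadata.name,EXPANSION:.allowVolumeExpansion'
```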
### 4. Deploy the chart

Install the chart with your custom values:

```sh
helm install atk . \
  --namespace atk \
  --create-namespace \
  --values values-production.yaml \
  --timeout 20m
```

Or upgrade an existing deployment:

```sh
helm upgrade atk . \
  --namespace atk \
  --values values-production.yaml \
  --timeout 20m
```
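Before touching a live environment, you can render the manifests locally or run a server-side dry run (optional, but cheap insurance; `--dry-run=server` requires Helm 3.13+, the minimum version listed above):

```sh
# Render manifests locally without contacting the cluster
helm template atk . --values values-production.yaml > /tmp/atk-manifests.yaml

# Validate the upgrade against the API server without persisting changes
helm upgrade atk . \
  --namespace atk \
  --values values-production.yaml \
  --dry-run=server
```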
### 5. Verify deployment

Check pod status:

```sh
kubectl get pods -n atk
```

All pods should reach `Running` or `Completed` state. The deployment includes:

- Init jobs: `network-bootstrapper` (generates the genesis file)
- StatefulSets: Besu nodes, Graph Node, PostgreSQL, Redis
- Deployments: DApp, Portal, Hasura, ERPC, Blockscout
- DaemonSets: Node exporters, log collectors

Check service endpoints:

```sh
kubectl get ingress -n atk
```

Verify each ingress has an external address assigned.
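To block until the workloads are actually ready (useful in CI; the timeout is an arbitrary choice):

```sh
# Wait for every Deployment in the namespace to become Available (up to 15 minutes)
kubectl wait --for=condition=Available deployment --all -n atk --timeout=15m

# StatefulSets have no Available condition; check their rollout state instead
kubectl get statefulsets -n atk
```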
### 6. Access applications

Once deployed, access the services via the configured hostnames:

- DApp: `https://dapp.example.com`
- Blockchain Explorer: `https://explorer.example.com`
- Grafana Dashboards: `https://grafana.example.com`
- Hasura Console: `https://hasura.example.com/console`
- RPC Endpoint: `https://rpc.example.com`
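A quick smoke test against the RPC endpoint confirms the chain is live and serving the expected chain ID (the hostname is the example above; `0xc8505182b` is 53771311147, the default `global.chainId`):

```sh
# Query the chain ID over JSON-RPC; expect {"result":"0xc8505182b", ...}
curl -s https://rpc.example.com \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```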
## Configuration reference

### Global settings

All subcharts inherit global configuration:

```yaml
global:
  # Blockchain network identity
  chainId: "53771311147"
  chainName: "ATK"

  # Labels applied to all resources
  labels:
    environment: production
    team: platform

  # Centralized datastore configuration
  datastores:
    # Shared default settings
    default:
      redis:
        host: "redis"
        port: 6379
        username: "default"
        password: "atk"
      postgresql:
        host: "postgresql"
        port: 5432
        username: "postgres"
        password: "atk"
    # Service-specific overrides
    portal:
      postgresql:
        database: "portal"
        username: "portal"
      redis:
        db: 4
    hasura:
      redis:
        cacheDb: 2
        rateLimitDb: 3
```
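To verify the shared PostgreSQL instance is reachable with the configured credentials, a throwaway client pod works (a sketch; `postgresql`, `postgres`, and `atk` are the defaults shown above and should match your overrides):

```sh
# Run a one-off psql client inside the cluster and list databases
kubectl run psql-check --rm -it --restart=Never -n atk \
  --image=postgres:16 -- \
  psql "postgresql://postgres:atk@postgresql:5432/postgres" -c '\l'
```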
### Network configuration

Control blockchain network topology:

```yaml
network:
  enabled: true
  network-bootstrapper:
    settings:
      validators: 1            # Number of validator identities to generate
  network-nodes:
    validatorReplicaCount: 1   # Validator pods (consensus)
    rpcReplicaCount: 1         # RPC pods (queries)
    persistence:
      size: 20Gi               # Per-node storage
    resources:
      requests:
        cpu: "60m"
        memory: "512Mi"
      limits:
        cpu: "360m"
        memory: "1024Mi"
```
### ERPC gateway configuration

Configure RPC load balancing and caching:

```yaml
erpc:
  enabled: true
  ingress:
    enabled: true
    ingressClassName: "atk-nginx"
    hostname: rpc.k8s.orb.local
  resources:
    requests:
      cpu: "60m"
      memory: "256Mi"
    limits:
      cpu: "360m"
      memory: "512Mi"
```

ERPC caches responses in the Redis databases configured under `global.datastores.erpc`.
### DApp configuration

Frontend application settings:

```yaml
dapp:
  enabled: true
  image:
    repository: ghcr.io/settlemint/asset-tokenization-kit
    # tag: defaults to chart appVersion
  ingress:
    enabled: true
    hosts:
      - host: dapp.k8s.orb.local
        paths:
          - path: /
            pathType: ImplementationSpecific
  # Environment variables (stored as secrets)
  secretEnv:
    BETTER_AUTH_URL: "https://dapp.k8s.orb.local"
    SETTLEMINT_BLOCKSCOUT_UI_ENDPOINT: "https://explorer.k8s.orb.local/"
    SETTLEMINT_MINIO_ENDPOINT: "http://minio:9000"
    SETTLEMINT_MINIO_ACCESS_KEY: "console"
    SETTLEMINT_MINIO_SECRET_KEY: "console123"
  resources:
    requests:
      cpu: "100m"
      memory: "1024Mi"
    limits:
      cpu: "3000m"
      memory: "2048Mi"
```

**Operational note:** You can override `SETTLEMINT_MINIO_BUCKET` to point at a different object store bucket without changing any frontend URLs. The `/atk/*` asset route now resolves files using the configured bucket name, so branding uploads keep working even when the bucket is not literally `atk`.
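For example, pointing the asset route at a custom bucket (the bucket name here is hypothetical; shown under `secretEnv` alongside the other MinIO settings above):

```yaml
dapp:
  secretEnv:
    SETTLEMINT_MINIO_BUCKET: "branding-assets"
```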
### Selective component deployment

Disable components not required for your environment:

```yaml
# Minimal deployment (no observability, no IPFS)
observability:
  enabled: false
ipfs:
  enabled: false

# Keep core services
network:
  enabled: true
erpc:
  enabled: true
graph-node:
  enabled: true
hasura:
  enabled: true
dapp:
  enabled: true
support:
  enabled: true
```
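To sanity-check what a trimmed-down configuration actually renders, you can count the resource kinds Helm emits:

```sh
# Summarize rendered resource kinds for the current values
helm template atk . --values values-production.yaml | grep '^kind:' | sort | uniq -c
```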
### Resource summary

Default resource allocations with the full stack enabled:
| Component | Replicas | Request CPU | Limit CPU | Request Memory | Limit Memory | Storage |
|---|---|---|---|---|---|---|
| network.network-nodes | 2 | 60m (total 120m) | 360m (total 720m) | 512Mi (total 1024Mi) | 1024Mi (total 2048Mi) | 20Gi (total 40Gi) |
| erpc | 1 | 60m | 360m | 256Mi | 512Mi | - |
| blockscout.blockscout | 1 | 100m | 600m | 640Mi | 1280Mi | - |
| blockscout.frontend | 1 | 60m | 360m | 320Mi | 640Mi | - |
| graph-node | 1 | 60m | 360m | 512Mi | 1024Mi | - |
| hasura | 1 | 80m | 480m | 384Mi | 768Mi | - |
| portal | 1 | 60m | 360m | 256Mi | 512Mi | - |
| txsigner | 1 | 60m | 360m | 192Mi | 384Mi | - |
| dapp | 1 | 100m | 3000m | 1024Mi | 2048Mi | - |
| support.ingress-nginx | 1 | 120m | 720m | 256Mi | 512Mi | - |
| support.redis | 1 | 40m | 240m | 64Mi | 128Mi | 1Gi |
| support.postgresql | 1 | 80m | 480m | 256Mi | 512Mi | 8Gi |
| support.minio | 1 | 50m | 300m | 256Mi | 512Mi | - |
| observability.grafana | 1 | 60m | 360m | 256Mi | 512Mi | - |
| observability.loki | 1 | 200m | 1200m | 512Mi | 1024Mi | 10Gi |
| observability.victoria-metrics-single | 1 | 60m | 360m | 256Mi | 512Mi | 10Gi |
| observability.alloy | 1 | 120m | 720m | 512Mi | 1024Mi | - |
| Totals | - | 1.43 cores | 11.0 cores | 6.8Gi | 13.6Gi | 69Gi |
These totals represent the minimum cluster capacity required. Add overhead for system components (kube-system, DNS, etc.).
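To compare these totals with what your cluster can actually schedule:

```sh
# Show allocatable CPU and memory per node
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'
```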
## See also
- Production operations - Production best practices, monitoring, and troubleshooting
- Testing and QA - Running tests against deployed environments
- Development FAQ - Common deployment issues