
## Building on SettleMint
EVM chains development guide
Create An Application
Add Network And Nodes
Add Private Keys
Setup Code Studio
Deploy Smart Contracts
Setup smart contract portal
Setup Graph Middleware
Setup Offchain Database
Setup Storage
Deploy Custom Services
Integration Studio
Attestation Indexer
Audit Logs
Hyperledger fabric development guide
Create An Application
Add Network And Nodes
Setup Code Studio
Deploy Chain Code
Setup Fabconnect Middleware
Setup Offchain Database
Setup Storage
Deploy Custom Services
Integration Studio
Audit Logs
Platform components
SettleMint offers the most complete and easiest-to-use blockchain development platform, purpose-built to accelerate enterprise adoption. Its modular architecture covers every layer of the stack, from infrastructure and blockchain networks to smart contract development, data indexing, API generation, and integration tooling.
Each platform component is designed for rapid deployment, seamless scalability, and full lifecycle management. Developers benefit from well-tested tools, a built-in IDE, SDKs, and pre-built application kits, while IT teams get robust governance, observability, and DevOps automation.
SettleMint Application Kits are designed to dramatically accelerate the
development of enterprise blockchain applications by providing pre-packaged,
full-stack solutions including both the smart contracts and the dApp UI for
common use cases.
Read more: [Application kits](/application-kits/introduction)
file: ./content/docs/about-settlemint/introduction.mdx
meta: {
"title": "Platform overview",
"icon": "House"
}
## The SettleMint platform: what it is and what it does
SettleMint is a full-stack blockchain infrastructure and application development
platform designed to accelerate the creation, deployment, and management of
enterprise-grade decentralized applications. It streamlines blockchain adoption
by combining essential infrastructure services, such as network setup, node
configuration, smart contract development, middleware, off-chain integrations,
and front-end deployments, into a unified environment. SettleMint supports both
**permissioned networks** (Hyperledger Besu, Quorum, and Hyperledger Fabric) and
**public networks** (Ethereum, Polygon, Optimism, Arbitrum, Fantom, Soneium and
Hedera Hashgraph), significantly reducing complexity and accelerating your
time-to-market.
Acting as a Swiss Army knife for blockchain developers, SettleMint provides
comprehensive, pre-configured tooling to simplify every stage of your blockchain
development journey. The platform includes built-in IDEs for smart contract
development, automatically generated REST and GraphQL APIs, real-time data
indexing middleware, enterprise-grade integrations, and secure off-chain storage
and database options. Whether deploying applications via a Managed SaaS or
Self-Managed (on-premises) model, SettleMint's integrated approach ensures
robust security, seamless scalability, and simplified operational management for
enterprise-grade decentralized applications.

***
# SettleMint components
SettleMint's platform encompasses a **comprehensive ecosystem** of services that
can be configured for diverse blockchain scenarios. Below is a **high-level
summary table**, followed by **detailed component descriptions** (including
specifics about private permissioned networks, Layer 1 and Layer 2 public
blockchains, participant management, node configuration, transaction signing,
off-chain integrations, and more).
## Components overview
| **Category** | **Component** | **Usage & Ecosystem Fit** | **Docs** |
| ----------------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| **1. Blockchain Infrastructure** | **Blockchain Network Manager** | Launch and manage both public and private networks. Configure nodes, choose consensus mechanisms, and handle chain settings to lay the foundation of your application. | [Network Management](/platform-components/blockchain-infrastructure/network-manager) |
| | **Consortium Manager** | Oversee participant onboarding and permissioning in private or consortium-based blockchains, enforcing governance rules in multi-organization projects. | [Consortium Setup](/platform-components/blockchain-infrastructure/consortium-manager) |
| | **Blockchain Nodes** | Deploy validating or non-validating nodes on EVM networks. Validating nodes participate in consensus, while non-validating nodes manage load distribution and serve read requests. On Fabric, deploy peers and orderers to respectively validate transactions and order them into blocks. | [Node Management](/platform-components/blockchain-infrastructure/blockchain-nodes) |
| | **Blockchain Load Balancer** | On EVM networks, distribute transaction requests across multiple nodes, improving throughput, fault tolerance, and overall network resilience. | [Load Balancer](/platform-components/blockchain-infrastructure/load-balancer) |
| | **Transaction Signer** | Provides a secure environment for signing transactions before they are broadcast to the network, minimizing the exposure of private keys at the application layer. | [Transaction Signer](/platform-components/blockchain-infrastructure/transaction-signer) |
| | **Blockchain Explorer** | Inspect transactions, blocks, and contracts through a graphical interface or API. This is essential for diagnostics and confirming on-chain activity. | [Blockchain Explorer](/platform-components/blockchain-infrastructure/insights) |
| **2. Smart Contract Development** | **Code Studio (IDE)** | A web-based IDE for writing, compiling, and deploying smart contracts (e.g., in Solidity for EVM networks or Typescript or Go for Fabric). It integrates with the rest of the platform for a seamless dev experience. | [Code Studio Guide](/platform-components/dev-tools/code-studio) |
| | **SDK** | A software development kit for programmatically interacting with your blockchain workflows directly from your codebase or CI/CD pipeline. | [SDK Guide](/platform-components/dev-tools/sdk) |
| | **CLI** | A command-line toolkit that supports automated workflows, including contract compilation, deployment, and versioning, all from within your terminal or CI/CD pipeline. | [CLI Guide](/platform-components/dev-tools/cli) |
| **3. Middleware & API Layer** | **Smart Contract API Portal** | Automatically generates REST or GraphQL endpoints for your deployed contracts, eliminating manual integration code and simplifying front-end or third-party access. Available for EVM networks only. | [Smart Contract API Portal](/platform-components/middleware-and-api-layer/smart-contract-api-portal) |
| | **Graph Middleware** | Indexes on-chain events so you can query them in real time using GraphQL. Suited for analytics, marketplace applications, or any scenario involving data-intensive queries. Available for EVM networks only. | [Graph Middleware](/platform-components/middleware-and-api-layer/graph-middleware) |
| | **Ethereum Attestation Indexer** | Focused on indexing attestations from the Ethereum Attestation Service (EAS), facilitating credential management, compliance checks, and advanced auditing. Available for EVM networks only. | [Attestation Indexer](/platform-components/middleware-and-api-layer/attestation-indexer) |
| | **Blockchain Explorer API** | Grants programmatic access to the data shown in the Explorer, which is useful for automated monitoring, scripting, or custom analytics integrations. | [Blockchain Explorer API](/platform-components/blockchain-infrastructure/insights) |
| | **Integration Studio** | A drag-and-drop interface for building workflows that connect on-chain events to external systems (e.g., ERP, CRM, HR). Reduces custom coding for routine integrations. | [Integration Studio](/platform-components/middleware-and-api-layer/integration-studio) |
| | **Firefly Fabconnect** | FireFly FabConnect is an API middleware that enables seamless integration between enterprise systems and Fabric networks for secure and scalable digital asset and workflow automation. | [Firefly Fabconnect](/platform-components/middleware-and-api-layer/fabconnect) |
| **4. Database, Storage & App Deployment** | **S3 Storage (MinIO)** | An S3-compatible object storage ideal for large files and logs that don't require on-chain immutability. Can be used for user-generated content or enterprise documents. | [S3 Storage](/platform-components/database-and-storage/s3-storage) |
| | **IPFS Storage** | Decentralized and tamper-proof file storage for documents, certificates, and other sensitive data. Ideal for publicly verifiable artifacts like NFT metadata. | [IPFS Storage](/platform-components/database-and-storage/ipfs-storage) |
| | **Hasura GraphQL Engine** | A real-time GraphQL API atop a PostgreSQL database for off-chain data. Simplifies data handling by providing instant schema-based queries and updates. | [Hasura GraphQL](/platform-components/database-and-storage/hasura-backend-as-a-service) |
| | **Custom Deployments** | Containerize and deploy both front-end and back-end components. This approach makes it straightforward to scale each component independently and roll out updates efficiently. | [Custom Deployments](/platform-components/custom-deployments/custom-deployment) |
| **5. Security & Authentication** | **Private Key Management** | Various options, from software-based storage to Hardware Security Modules (HSMs), for safeguarding cryptographic keys and ensuring secure transactions. | [Key Management](/platform-components/security-and-authentication/private-keys) |
| | **Access Tokens (PAT/AAT)** | Control access to the platform and its APIs using token-based authentication. Enables role-based permissions for both user and machine (app) identities. | [Access Tokens](/platform-components/security-and-authentication/personal-access-tokens) |
| **6. Application Kits** | **Asset Tokenization Kit** | A full-stack accelerator for tokenizing assets, including prebuilt smart contracts and a ready-to-use dApp codebase to jump-start tokenization projects. | [Asset Tokenization Kit](/application-kits/introduction) |
***
## Platform components
Below is an **in-depth** look at each major component, covering private
permissioned networks (Hyperledger Besu, Quorum, Hyperledger Fabric), Layer 1
and Layer 2 public blockchains, participant management, node configuration,
transaction signing, code development, and more, giving you the full context
needed for enterprise blockchain projects.
## **Private permissioned networks**
* **Hyperledger Besu** A highly popular permissioned blockchain framework
offering enterprise-grade security, private transactions, and governance
control with QBFT consensus.
* **Quorum** A private Ethereum fork incorporating encrypted transactions and
privacy features. Suitable for enterprises that want Ethereum smart contract
compatibility without exposing sensitive data.
* **Hyperledger Fabric** A modular blockchain allowing pluggable consensus.
Widely used in business settings that require robust security, customizable
endorsement policies, and efficient performance.
**Consortium Manager & Participant Permissions**
In SettleMint, the **Consortium Manager** helps you manage participants for
private networks. Each participant can have **granular permissions** (e.g.,
ability to add validating nodes, invite members, or manage governance), ensuring
enterprise-class security and **decentralized decision-making**.
**Network Manager: Genesis Files & External Nodes**
The **Network Manager** allows you to create or join external blockchain
networks by configuring genesis files (defining chain parameters) and specifying
bootnodes. This fosters **interoperability** and **consortium formation**,
letting you align all nodes under a shared initial state while securely
integrating additional participants.
## **Layer 1 (L1) public networks**
* **Ethereum** A decentralized blockchain that transitioned to Proof of Stake
(PoS), known for its extensive developer community and smart contract
capabilities.
* **Avalanche** High-speed chain with subnet support and a PoS approach,
delivering low-cost and near-instant finality.
* **Hedera Hashgraph** A scalable public ledger offering enterprise-level
security and low fees, relying on asynchronous Byzantine Fault Tolerance.
* **Sonic** Sonic, originally launched as Fantom, is a high-performance public
blockchain network that leverages a unique consensus mechanism called
Lachesis.
By selecting an L1 network within the **Network Manager**, you can deploy and
manage nodes, handle load balancing, and integrate with SettleMint's transaction
signing, making it easier to develop or migrate dApps onto top-tier public
blockchains.
## **Layer 2 (L2) public networks**
* **Polygon PoS** A sidechain for Ethereum that offers faster transactions and
lower fees, connected to mainnet for added security.
* **Polygon zkEVM** A zero-knowledge rollup solution providing even greater
efficiency, bundling transactions off-chain while preserving Ethereum's
security.
* **Optimism** Uses optimistic rollups to group off-chain transactions into
batches verified on Ethereum.
* **Arbitrum** Another leading optimistic rollup-based approach to improve
Ethereum's scalability and reduce fees.
* **Soneium** Soneium operates as a layer 2 solution built atop Ethereum,
emphasizing high throughput and seamless cross-chain connectivity.
Layer 2s are favored for high-volume applications, as they ease congestion on
mainnet Ethereum while retaining EVM compatibility. Through SettleMint, you can
deploy or connect to these networks, benefiting from the platform's end-to-end
infrastructure and dev tooling.
## **Blockchain nodes**
The **Nodes** panel in SettleMint's Network Manager provides a holistic view of
the network, whether it's private or public. You can:
* **Add Validating Nodes:** Nodes that participate in consensus, securing the
network.
* **Add Non-Validating Nodes:** Handle data queries and reduce validator load.
* **Configure Load Balancers:** Improve performance by routing requests across
multiple nodes.
* **Add Peers:** On Fabric, peers are responsible for maintaining the ledger and
executing chaincode. You can add or configure peers to support transaction
endorsement and data consistency.
* **Add Orderers:** On Fabric, orderers handle transaction ordering and ensure
consistent block creation across the network. Configure orderers to maintain
consensus and streamline block propagation.
* **Check Live Logs:** Monitor node statuses in real time, tracking identity,
enode URLs, and configuration details.
This granular management ensures your **network remains stable** and
**scalable**, even under heavy workloads.
## **Transaction signer**
The **Transaction Signer** is a critical piece in SettleMint that securely signs
and broadcasts transactions. By integrating with nodes via JSON-RPC or
WebSockets, it provides:
* **Key Management Services:** Including HSM support, ensuring sensitive private
keys remain protected.
* **API Access & Audit Logging:** Allowing you to monitor transaction flows and
enforce role-based control.
* **Automated Transaction Execution:** Suitable for workflows requiring
consistent, programmatic on-chain updates.
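As an illustration, the snippet below is a minimal sketch of delegating signing
to a node-attached signer over standard JSON-RPC, assuming a placeholder RPC
URL and addresses; `eth_sendTransaction` asks the connected signer to sign with
a managed key, so the private key never touches the application.
```typescript
// Minimal sketch: submit a transaction through a node whose attached signer
// holds the key. URL and addresses are placeholders.
const RPC_URL = "https://your-node.example.com"; // hypothetical endpoint

async function sendViaSigner(): Promise<string> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendTransaction", // signing happens on the signer, not here
      params: [{
        from: "0x0000000000000000000000000000000000000001", // managed key
        to: "0x0000000000000000000000000000000000000002",
        value: "0x2386f26fc10000", // 0.01 ether, in hex wei
      }],
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result; // transaction hash
}
```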
## **Blockchain load balancer**
To maintain **high availability** and resource efficiency, SettleMint includes a
dedicated load balancer. It distributes JSON-RPC calls, GraphQL queries, and
transaction submissions across multiple nodes, minimizing downtime if one node
fails and preventing any single node from becoming a bottleneck. This is
especially vital for enterprise-scale applications with large user bases or
transaction volumes.
## **Blockchain explorer**
The **Blockchain Explorer** offers real-time insights into:
* **Transactions:** See if they've been mined or validated.
* **Blocks:** Examine block production, verifying chain integrity.
* **Smart Contracts:** Inspect states, method calls, and event logs.
* **Network Participants:** Track node identities and governance roles in
private networks.
The Explorer relies on fast JSON-RPC and GraphQL queries, making it a
cornerstone for auditing, diagnostic checks, and compliance reporting.
## **Code studio IDE**
A **browser-based IDE** that streamlines contract development for various
networks:
* **Foundry/Hardhat Integration:** Preconfigured setups to compile, test, and
deploy Solidity contracts for EVM-based chains like Hyperledger Besu or
Quorum.
* **Chaincode Support:** For Hyperledger Fabric networks, enabling
enterprise-grade business logic.
* **Templates & Custom Libraries:** Jump-start new projects or adapt existing
ERC20, ERC721 and ERC1155 standards easily.
* **Terminal & GitHub Integration:** Enables collaboration, version control, and
quick dependency management.
Because it's fully hosted in SettleMint, you don't need a local environment,
resulting in a frictionless dev experience.
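For instance, a Hardhat-based workspace in Code Studio lets you write tests
like the sketch below; the `MyToken` contract name is hypothetical and stands
in for whatever contract your project defines.
```typescript
import { expect } from "chai";
import { ethers } from "hardhat";

// Sketch of a Hardhat test, assuming a hypothetical ERC20 contract named
// "MyToken" exists in the workspace's contracts/ folder.
describe("MyToken", () => {
  it("mints the initial supply to the deployer", async () => {
    const [deployer] = await ethers.getSigners();
    const token = await ethers.deployContract("MyToken");
    // The deployer should hold the entire initial supply.
    expect(await token.balanceOf(deployer.address)).to.equal(
      await token.totalSupply(),
    );
  });
});
```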
## **Smart contract api portal**
After deployment, the **Smart Contract API Portal** translates your contract
ABIs into **REST** and **GraphQL** endpoints—often termed "write middleware"
because they allow writing data on-chain through these automatically generated
APIs. It includes:
* **OpenAPI Documentation:** So you can test endpoints directly in the browser.
* **Interactive Interface:** Easily check function parameters and event outputs.
* **Hundreds of Endpoints per Contract:** Eliminating the need to manually code
them.
This shortens the time from contract deployment to integration with front ends
or third-party services.
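As a rough illustration, calling one of the generated REST endpoints might look
like the sketch below; the route shape and the `x-auth-token` header are
assumptions to be checked against the OpenAPI documentation your own portal
instance generates.
```typescript
// Sketch only: endpoint path and auth header are assumptions; consult your
// portal's generated OpenAPI docs for the exact routes.
const PORTAL_URL = "https://your-portal.example.com"; // hypothetical

async function readTokenBalance(holder: string): Promise<string> {
  const res = await fetch(
    `${PORTAL_URL}/api/erc-20/balance-of?account=${holder}`, // hypothetical route
    { headers: { "x-auth-token": process.env.PORTAL_TOKEN ?? "" } },
  );
  if (!res.ok) throw new Error(`Portal returned ${res.status}`);
  const body = await res.json();
  return body.result; // response shape depends on the generated API
}
```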
## **Graph middleware**
**Graph Middleware** accelerates read operations by indexing specified on-chain
data in real time. Developers define subgraphs that specify which events,
transactions, and states to monitor, enabling quick data retrieval via GraphQL.
Popular for:
* **DeFi Dashboards**
* **NFT Marketplaces**
* **Real-time Analytics**
By removing the need to scan entire blockchains manually, Graph Middleware
makes complex queries simple and efficient.
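Once a subgraph is deployed, clients query it with plain GraphQL over HTTP. The
sketch below assumes a hypothetical subgraph URL and a `transfers` entity
defined in your own subgraph schema.
```typescript
// Sketch: query a deployed subgraph. The URL and the "transfers" entity are
// assumptions; substitute your own subgraph name and schema.
const SUBGRAPH_URL =
  "https://your-graph-node.example.com/subgraphs/name/my-subgraph";

const query = `{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    value
  }
}`;

async function latestTransfers() {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  return data.transfers;
}
```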
## **Ethereum attestation indexer**
Enterprise solutions often require **verifiable credentials** (e.g., identity
attestations or compliance confirmations). The **Ethereum Attestation Indexer**
monitors and indexes data produced by the **Ethereum Attestation Service
(EAS)**. It then presents these attestations through a GraphQL API, allowing:
* **Identity Verification**
* **Reputation Systems**
* **Regulatory Tracking**
This specialized middleware simplifies trust-based interactions, reducing
custom code for indexing or auditing attestations.
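A query against the indexer's GraphQL API might look like the sketch below; the
endpoint and field names are assumptions, so check them against the schema your
own indexer instance exposes.
```typescript
// Sketch: fetch recent attestations for a recipient. Endpoint and schema
// fields are assumptions; verify against your indexer's GraphQL schema.
const INDEXER_URL = "https://your-attestation-indexer.example.com/graphql";

async function attestationsFor(recipient: string) {
  const res = await fetch(INDEXER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `query ($recipient: String!) {
        attestations(where: { recipient: { equals: $recipient } }, take: 5) {
          id
          attester
          revoked
        }
      }`,
      variables: { recipient },
    }),
  });
  return (await res.json()).data.attestations;
}
```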
## **Integration studio**
The **Integration Studio** is a **low-code, Node-RED-based** environment for
orchestrating cross-system workflows:
* **4,000+ Pre-Built Connectors:** Link blockchains to ERP, CRM, HR, AI/ML, and
other external systems.
* **Event-Driven Processes:** React to on-chain activities by triggering
off-chain actions, such as sending emails or updating databases.
* **API Management:** Expose blockchain functions as RESTful endpoints or
incorporate external APIs into on-chain processes.
This reduces the need for heavy custom coding when bridging decentralized and
centralized systems.
## **Hasura graphql engine**
**Hasura** seamlessly manages **off-chain** data—often user details,
authentication, or large volumes of records that don't need to reside on-chain.
Paired with a PostgreSQL database, Hasura automatically generates a **real-time
GraphQL schema**, offering:
* **Instant Queries & Mutations**
* **Role-Based Access Control**
* **Real-Time Updates** for dashboards and front ends
By decoupling large or frequently changing data from blockchain storage, you
optimize both performance and cost while retaining cryptographic proof
references on-chain as needed.
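For example, a hypothetical `users` table becomes instantly queryable, as in
the sketch below; `x-hasura-admin-secret` is Hasura's standard admin header,
though role-scoped credentials are preferable in production.
```typescript
// Sketch: query a hypothetical "users" table through Hasura's generated
// GraphQL API. Prefer role-scoped tokens over the admin secret in production.
const HASURA_URL = "https://your-hasura.example.com/v1/graphql";

async function listUsers() {
  const res = await fetch(HASURA_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET ?? "",
    },
    body: JSON.stringify({
      query: `{ users(limit: 10, order_by: { created_at: desc }) { id email } }`,
    }),
  });
  const { data } = await res.json();
  return data.users;
}
```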
## **S3 storage (minio)**
For storing large files such as logs, digital certificates, and transaction
receipts, SettleMint offers an **S3-compatible MinIO** service. You can:
* **Upload & Retrieve Files via Standard S3 APIs**
* **Control Access Permissions**
* **Benefit from High-Performance Object Storage**
It's ideal for data that doesn't require on-chain immutability or public
distribution (unlike IPFS). Typical use cases include operational logs,
user-generated content, and archives that must remain accessible and secure.
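Because the service is S3-compatible, any standard S3 client works. The sketch
below uses the AWS SDK v3, with the endpoint, bucket name, and credentials as
placeholders.
```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Sketch: upload a JSON log to a MinIO bucket via the standard S3 API.
// Endpoint, bucket, and credentials are placeholders.
const s3 = new S3Client({
  endpoint: "https://your-minio.example.com", // hypothetical endpoint
  region: "us-east-1",    // MinIO accepts any region value
  forcePathStyle: true,   // MinIO typically uses path-style URLs
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY ?? "",
    secretAccessKey: process.env.S3_SECRET_KEY ?? "",
  },
});

await s3.send(
  new PutObjectCommand({
    Bucket: "audit-logs", // hypothetical bucket
    Key: `logs/${Date.now()}.json`,
    Body: JSON.stringify({ event: "contract-deployed" }),
    ContentType: "application/json",
  }),
);
```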
## **Ipfs storage**
SettleMint integrates **IPFS** for **decentralized**, **tamper-proof** file
storage. A unique hash (CID) identifies each file, enabling:
* **Verified Authenticity:** Hash-based references confirm file content hasn't
changed.
* **Permanent Distribution:** Files remain online as long as peers host them,
removing dependency on a single provider.
* **Ideal for NFTs, Public Certificates, and Audit Logs** that require trustless
verification.
By offloading large files to IPFS and storing only the hash on-chain, you can
preserve blockchain efficiency while retaining provable data integrity.
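The hash-on-chain pattern above can be sketched as follows, assuming a
placeholder IPFS API endpoint: the file is added to IPFS, and only its CID (or
a keccak256 digest of the content) would be anchored on-chain.
```typescript
import { create } from "ipfs-http-client";
import { ethers } from "ethers";

// Sketch: pin a document to IPFS and derive the values you would anchor
// on-chain. The IPFS API endpoint is a placeholder.
const ipfs = create({ url: "https://your-ipfs-node.example.com/api/v0" });

const content = JSON.stringify({ certificate: "ISO-9001", issued: "2024-01-01" });
const { cid } = await ipfs.add(content); // content-addressed identifier

// A keccak256 digest is convenient for bytes32 storage in a contract.
const digest = ethers.keccak256(ethers.toUtf8Bytes(content));
console.log("CID:", cid.toString(), "digest:", digest);
```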
## **Private key management**
Depending on risk, compliance requirements, and scale, SettleMint supports
multiple approaches:
* **Accessible ECDSA P-256:** Straightforward software-based storage.
* **Hierarchical Deterministic (HD) ECDSA P-256:** Generate multiple child keys
from a master seed for structured backups.
* **Hardware Security Modules (HSMs):** Tamper-resistant devices ensuring
maximum security for enterprise or regulated use cases.
Each approach integrates with the **Transaction Signer**, guaranteeing seamless
and secure execution of on-chain operations.
## **Access tokens (pat/aat)**
Two forms of token-based authentication and authorization:
* **Personal Access Tokens (PATs):** Tied to individual users for tasks like
contract deployment, node setup, or platform configuration.
* **Application Access Tokens (AATs):** For machine-to-machine interactions,
often used by microservices or scripts that require secure blockchain access.
Admins can create, rotate, or revoke tokens, applying granular role-based
controls to ensure only authorized entities interact with the network.
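In practice, a token accompanies each API request, roughly as in the sketch
below; the `x-auth-token` header name is an assumption here, so confirm the
exact header against your platform's API reference.
```typescript
// Sketch: authenticate a platform API call with a PAT or AAT. The header
// name is an assumption; check your instance's API documentation.
async function callWithToken<T>(url: string, token: string): Promise<T> {
  const res = await fetch(url, { headers: { "x-auth-token": token } });
  if (res.status === 401) {
    throw new Error("Token rejected: rotate it or review its role scope");
  }
  return res.json() as Promise<T>;
}
```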
## **Custom deployments**
SettleMint enables developers to containerize custom applications, front-end
dashboards, specialized microservices, or custom oracles, and host them within
the platform. You can:
* **Define Container Images & Environments**
* **Configure Domains & SSL**
* **Scale Resources** based on user traffic
This integrated approach eliminates the need for separate hosting services,
simplifying operational overhead and unifying observability.
## **Asset tokenization kit**
The Asset Tokenization Kit is a **full-stack accelerator** designed to simplify
and speed up the development of tokenized asset platforms. It provides
**pre-built smart contracts and a ready-to-use dApp UI**, enabling businesses to
launch tokenized assets quickly and efficiently.
* **Pre-configured Smart Contracts:** Based on the ERC20 standard for various
asset types, including stablecoins, bonds, equity, funds, and more.
* **Meta Transactions & Account Abstraction:** Enables gasless transactions and
wallet abstraction for a seamless user experience.
* **Compliance & Access Control:** User management, KYC, whitelisting, and
role-based access control for better governance.
This plug-and-play solution accelerates blockchain adoption, allowing
enterprises to tokenize assets securely while ensuring flexibility and
scalability.
***
file: ./content/docs/application-kits/introduction.mdx
meta: {
"title": "Introduction"
}

## SettleMint Application Kits
SettleMint Application Kits are designed to dramatically accelerate the
development of enterprise blockchain applications by providing pre-packaged,
full-stack solutions for common use cases. These kits bundle essential
components such as smart contract templates, pre-built decentralized application
(dApp) UIs, integration tools, and deployment configurations into ready-to-use
modules that can be launched in minutes and customized to fit specific business
needs.
Each Application Kit addresses a particular industry requirement or blockchain
pattern, such as asset tokenization, NFT issuance, supply chain traceability, or
secure data exchange. With built-in support for smart contract deployment,
on-chain/off-chain data flows, and secure API integrations, these kits reduce
the complexity of blockchain development while ensuring compliance, scalability,
and performance.
Developers benefit from low-code tooling, customizable open-source templates,
and real-time dashboards, while business users gain access to robust governance
features, analytics, and user management capabilities. Whether deployed in cloud
environments or on self-managed infrastructure, SettleMint’s Application Kits
offer a seamless path from idea to production, empowering teams to focus on
business innovation rather than technical implementation.
Read more about Asset Tokenization Kit here - [Asset Tokenization Kit](/application-kits/asset-tokenization/introduction)
file: ./content/docs/blockchain-and-ai/ai-code-assistant.mdx
meta: {
"title": "AI code assistant",
"description": "RooCode Assistant"
}
## AI code assistant
RooCode is an AI-powered coding assistant integrated into SettleMint's Code
Studio, replacing the former "AI Genie". It enhances Code Studio by introducing
a more versatile and powerful AI engine directly in your development
environment. With RooCode, you can generate and improve code using natural
language, leverage multiple AI models for different tasks, and even integrate
custom or local AI instances to meet your project's needs. This guide will walk
you through what RooCode is, how to set it up, and how to make the most of its
features.

### What is roocode and how does it enhance code studio?
RooCode is a next-generation AI assistant that lives in your Code Studio editor.
Think of it as your intelligent pair programmer: you can ask it to write code,
explain code, suggest improvements, or even create new project files – all
through simple prompts. Unlike the previous AI Genie (which was tied to a single
AI model), RooCode is built to be provider-agnostic and highly extensible. This
means it can connect to a range of AI models and services:
* Multiple AI Providers: Out of the box, RooCode supports popular AI providers
like OpenAI (GPT models), Anthropic (Claude), Google Vertex AI, AWS Bedrock,
and more. You're not limited to one AI engine; you can choose the model that
best fits your task for better results.
* Advanced Context Awareness: RooCode can handle larger context windows and
smarter context management than before. It "remembers" more of your codebase
and conversation history, which helps it generate responses that consider your
entire project scope. In practice, you'll notice more coherent help even as
your files grow or you switch between different parts of your project.
* Extensibility via MCP: RooCode supports the Model Context Protocol (MCP), a
framework that lets the AI assistant use external tools and services,
including the [SettleMint MCP server](/platform-components/dev-tools/mcp).
This is a big enhancement for Code Studio – it means the AI can potentially
perform complex operations like looking up information in a knowledge base,
running test suites, or controlling a web browser for web-related tasks, all
from within the coding session. (By default, you'll only use these features if
you choose to enable or add them, so the environment stays straightforward
unless you need the extra power.)
* Seamless Code Studio Integration: RooCode is fully embedded in SettleMint's
Code Studio interface. You can access it through the familiar chat or prompt
panel. It
works alongside your code in real-time – for example, you can highlight a
piece of code and ask RooCode to explain or refactor it, and it will provide
the answer or suggestion in seconds. This tight integration means your
development workflow is smoother and more efficient, with AI help always at
your fingertips.
In summary, RooCode enhances Code Studio by making the AI assistance more
powerful, flexible, and context-aware. Whether you're a developer looking for
quick code generation or an enterprise user needing compliance-friendly AI,
RooCode adapts to provide the best experience.
### Step-by-step setup and configuration
Getting started with RooCode in Code Studio is straightforward. Here's how to
set up and configure it for your needs:
1. Open Code Studio: Log in to the SettleMint Console and open your Code Studio
environment. Ensure you have the latest version of the Code Studio where
RooCode is available (if SettleMint releases updates, make sure your
environment is updated). You should notice references to RooCode or AI
Assistant in the IDE interface.
2. Access RooCode Settings: In Code Studio, locate the RooCode settings panel.
It is accessible via a rocket icon in the Code Studio toolbar. Click it to
open the configuration settings.
3. Choose an AI Provider: In the RooCode settings, you'll see an option to
select your AI provider or model. RooCode supports many providers; common
options include OpenAI, Anthropic, Google Vertex AI, AWS Bedrock, etc. Decide
which AI service you want to use for generating suggestions. For instance, if
you have an OpenAI API key and want to use GPT-4, select "OpenAI." If you
prefer Anthropic's Claude, choose "Anthropic" from the dropdown. (You can
change this later or even set up multiple profiles for different providers.)
4. Enter API Keys/Credentials: After selecting a provider, you'll need to
provide the API key or credentials for that service:
* For cloud providers like OpenAI or Anthropic: Enter your API key in the
provided field. You might also need to specify any additional info (for
example, an OpenAI Organization ID if applicable, or select the model
variant from a list). RooCode's Anthropic integration, for example, will
have a field for the Anthropic API Key and a dropdown to pick which Claude
model to use.
* If you choose OpenAI Compatible or custom endpoints (for instance, via a
service like OpenRouter or Requesty that aggregates models), input the base
URL or choose the service name, and then provide the corresponding API key.
* For Azure OpenAI or enterprise-specific endpoints: you'll typically enter
an endpoint URL and an API key (and possibly a deployment name) as required
by that service. RooCode allows configuring a custom base URL for providers
like Anthropic or OpenAI if needed, which is useful for enterprise proxies
or Azure endpoints.
5. Configure Model and Settings: Once your key is in place, select the exact
model or version you want to use. For example, choose "GPT-4" or a specific
Claude variant from the model dropdown. You can also adjust any optional
settings here:
* Context Limit or Mode Settings: Some providers/models allow adjusting the
maximum tokens or response length. RooCode might expose these or just
manage them automatically. (By default, it optimizes context usage for
you.)
* MCP and Tools: If you plan to use advanced features, ensure that MCP
servers are enabled in settings (this might be on by default). There may be
an option like "Enable MCP Tools" or similar. If you don't need these, you
can leave it as is. (Advanced users can add specific MCP server
configurations later, this is optional and not required for basic usage.)
* Profiles (Optional): RooCode supports multiple configuration profiles. You
might see an option to create or switch "API Profiles." This is useful if
you want to quickly switch between different providers or keys (say one
profile for OpenAI, another for a local model). For now, using the default
or a single profile is fine.
6. Save and Test: Save your settings (there might be a "Save" button or it may
apply changes immediately). Now test RooCode to confirm it's working:
* Look for the RooCode chat panel or command input in Code Studio. It might
be a sidebar or bottom panel where you can type a prompt.
* Try a simple prompt like: "Hello RooCode" or ask it to write a snippet,
e.g., "// Prompt: write a Solidity function to add two numbers".
* RooCode should respond with a code suggestion or answer. If it prompts for
any permissions (like file access, since RooCode can write to files),
approve it to allow the AI to assist with coding tasks.
* If you get an error (e.g., unauthorized or no response), double-check your
API key and internet connectivity, or see if the provider might have usage
limits. Adjust keys or settings as needed.
With setup complete, you can now fully leverage RooCode in your development
workflow. Use natural language to ask for code, explanations, or improvements.
For example:
* "Create a unit test for the above function." – RooCode will generate test
code.
* "I'm getting a validation error in this contract, can you help find the
bug?" – RooCode can analyze your code and point out potential issues.
* "Document this function." – RooCode will write documentation comments
explaining the code.
You can interact with it as you code, and it will utilize the configured AI
model to assist you. Feel free to adjust the provider or model as you see what
works best for your project.
## Roo Code interface
The chat interface consists of the following main elements:
1. Chat History: This area displays the conversation history between you and Roo
Code. It shows your requests, Roo Code's responses, and any actions taken
(like file edits or command executions).
2. Input Field: This is where you type your tasks and questions for Roo Code.
You can use plain English to communicate.
3. Action Buttons: These buttons appear below the input field and allow you to
approve or reject Roo Code's proposed actions. The available buttons change
depending on the context.
4. Send Button: This looks like a small paper plane and is located to the far
right of the input field. It sends your messages to Roo after you've typed them.
5. Plus Button: The plus button is located at the top in the header, and it
resets the current session.
6. Settings Button: The settings button is a gear, and it's used for opening the
settings to customize features or behavior.
7. Mode Selector: The mode selector is a dropdown located to the left of the
chat input field. It is used for selecting which mode Roo should use for your
tasks.

### Key features and benefits of roocode
RooCode brings a rich set of features to improve your development experience in
Code Studio. Here are some of the highlights:
* Multiple AI Models & Providers: Connect RooCode to various AI backends. You're
not locked into one AI engine – choose from OpenAI's GPT series, Anthropic's
Claude, Google's PaLM/Gemini (via Vertex AI), or even open-source models
through services like Ollama or LM Studio. This flexibility means you can
leverage the strengths of different models (e.g., one might be better at code
completion, another at explaining concepts) as needed.
* 📚 Advanced Context Management: RooCode is designed to handle large codebases
and lengthy conversations more gracefully. It uses intelligent context
management to include relevant parts of your project when generating answers.
For you, this means less time spent copy-pasting code to show the AI – RooCode
will automatically consider the files you're working on and recent
interactions. The result is more informed suggestions that truly understand
your project's context.
* 🤖 MCP (Model Context Protocol) Support: One of the standout advanced features
is RooCode's ability to use MCP. This allows the AI assistant to interface
with external tools and services in a standardized way. For example, with an
appropriate MCP server configured, RooCode could perform a task like searching
your company's knowledge base, querying a database for a value, or running a
custom script – all triggered by an AI command. This extends what the AI can
do beyond text generation, turning it into a mini agent that can act on your
behalf. (This is an optional power-user feature; you can use Code Studio and
RooCode fully without ever touching MCP, but it's there for those who need to
integrate with other systems.)
* 🛠 In-Editor Tools & Actions: RooCode comes with a variety of built-in
capabilities accessible directly in the editor. It can read from and write to
files in your project (with your permission), meaning it can create new code
files or modify existing ones when you accept its suggestions. It can execute
terminal commands in the Code Studio environment – useful for running tests or
compiling code to verify solutions. It even has the ability to control a
browser or other tools via MCP, as mentioned. These actions help automate
routine tasks: imagine generating code and then automatically running your
test suite to verify it, all through AI assistance.
* 🔒 Customization & Control: Despite its power, RooCode gives you control over
the AI's behavior. You can set custom instructions (for example, telling the
AI about project-specific guidelines or coding style preferences). You can
also adjust approval settings – e.g., require manual approval every time
RooCode tries to write to a file or run a command, or relax this for trusted
actions to speed up your workflow. For enterprise scenarios, features like
disabling MCP entirely or restricting certain actions are available for
compliance (administrators can centrally manage these policies). This balance
ensures you get helpful automation without sacrificing oversight.
* 🚀 Continuous Improvement: RooCode is regularly updated with performance
improvements and new features. Being a part of the SettleMint platform means
it's tested for our specific use cases (like blockchain and smart contract
development) and tuned for reliability. Expect faster responses and new
capabilities over time – for instance, support for the latest AI models as
they become available, improved prompt handling, and more. All these benefits
come to you automatically through platform updates.
Together, these features make RooCode a robust AI co-developer. You'll find that
repetitive tasks get easier, complex tasks become more approachable with AI
guidance, and your team's overall development speed and quality can increase.
### Integrating personal api keys and enterprise/local instances
One of the great advantages of RooCode is its flexibility in how it connects to
AI models. Depending on your needs, you can either use personal API keys for
public AI services, or leverage local/enterprise instances for more control.
Here's how to manage those scenarios:
* Using Your Own API Keys: If you have your own accounts with AI providers (such
as an OpenAI API subscription or access to Anthropic's Claude), you can plug
those credentials into RooCode. In the RooCode settings profile, select the
provider and enter your API key (as described in the setup steps). This will
make Code Studio use your allotment of that AI service for all AI completions
and chats. The benefit is that you can tailor which model and version you use
(and often get the newest models immediately), and you have full visibility
into your usage via the provider's dashboard. For instance, you might use your
OpenAI key to get GPT-4's latest features. RooCode will respect any rate
limits or quotas on your key, and you'll be billed by the provider according
to your plan with them (if applicable). This approach is ideal for individual
power users or teams who want the best models and are okay managing their own
API costs.
* Enterprise API Integrations: Enterprises often have special arrangements or
requirements for AI usage – such as using Azure OpenAI Service, deploying
models via AWS Bedrock, or using a private endpoint hosted in a secure
environment. RooCode supports these cases. You can configure a custom base URL
and API key to point RooCode to your enterprise's AI endpoint. For example, if
your company uses Azure OpenAI, you'd select "OpenAI Compatible" and provide
the Azure endpoint URI and key. Similarly, for AWS Bedrock, choose the Bedrock
option and enter the necessary credentials. By doing so, all AI requests from
Code Studio will route through those enterprise channels, ensuring compliance
with your org's data policies (no data leaves your approved environment). This
is crucial for sectors with strict data governance – you get the convenience
of AI coding assistance while keeping data management in line with internal
rules.
* Local Instances (Offline/On-Premises Use): RooCode can also work with local AI
models running on your own hardware. This is a powerful feature if you need
full offline capability or extra privacy. Using a tool like Ollama or LM
Studio, you can host language models on a local server that mimics the
OpenAI API. In RooCode's settings, you would choose a "Local" provider option
(for instance, LM Studio appears as an option) and set the base URL to your
local server (often something like [http://localhost:PORT](http://localhost:PORT) with no API key
needed or a token if the local server requires one). Once configured, RooCode
will send all requests to the local model, meaning your code and queries never
leave your machine. Keep in mind, running local models may require a powerful
computer, and the AI's performance depends on the model you use (some
open-source models are smaller than the big cloud ones). Still, this option is
fantastic for experimentation, working offline, or ensuring absolute
confidentiality for sensitive code.
* Switching and Managing Configurations: Through RooCode's configuration
profiles feature, you can maintain multiple setups. For instance, you might
have one profile called "Personal-OpenAI" with your OpenAI key and GPT-4,
another called "Enterprise-Internal" for your company's endpoint, and a third
called "Local-LLM" for a model on your machine. In Code Studio, you can
quickly switch between these depending on the project or context. This
flexibility means you're never locked in – you can always choose the best
route for AI assistance on a case-by-case basis.
> Tip: Always ensure that when using external API keys or services, you follow
> the provider's usage policies and secure your keys. Never commit API keys into
> your code repositories. Set them via the Code Studio interface or environment
> variables if supported. SettleMint's platform will store any keys you enter in
> a secure way, but it's good practice to manage and rotate keys periodically.
> For enterprise setups, work with your system administrators to obtain the
> correct endpoints and credentials.
By integrating your own keys or instances with RooCode, you essentially bring
your preferred AI brain into SettleMint's Code Studio. This empowers you to use
the AI on your terms – whether prioritizing cost, performance, or compliance.
It's all about giving you the choice.
### Conclusion and next steps
RooCode dramatically expands the AI capabilities of SettleMint Code Studio,
making it a versatile assistant for blockchain development and beyond. We've
covered what RooCode is, how to get it up and running, its key features, and how
to tailor it to your environment. As you start using RooCode, you may discover
new ways it can help in your daily coding tasks – don't hesitate to explore
features like custom modes or ask RooCode itself for tips on how it can assist
you best!
For more detailed technical information, troubleshooting, and advanced tips,
check out the [official RooCode documentation](https://docs.roocode.com). The
RooCode community is also active – you can find resources like FAQ pages or
community forums (e.g., RooCode's Discord or subreddit) via the documentation
site if you're interested in deep dives or sharing experiences.
file: ./content/docs/blockchain-and-ai/blockchain-and-ai.mdx
meta: {
"title": "Blockchain and AI",
"description": "Using the Model Context Protocol (MCP) to connect LLM to blockchain"
}
## Blockchain and AI: Convergence and Complementarity
## Introduction
Blockchain and Artificial Intelligence (AI) are two transformative technologies
that, when combined, promise more than the sum of their parts. Blockchain
provides a decentralized, tamper-proof ledger for recording transactions or
data, while AI offers intelligent algorithms capable of learning from data and
automating complex decisions. Industry leaders, executives, and developers are
increasingly interested in how these technologies can reinforce each other.
## Blockchain as a Foundation for AI
Blockchain technology can act as a foundational layer for AI systems by ensuring
the integrity, security, and availability of the data and processes that AI
relies on. Key characteristics of blockchains – immutability, distributed trust,
and smart contracts – directly address several challenges faced in AI
deployment. Below, we examine how blockchain supports AI in terms of data
integrity, access control, auditability, and decentralization.
### Data Integrity and Provenance
AI algorithms are only as reliable as the data they consume. Blockchain's
immutable ledger guarantees that once data is recorded, it cannot be tampered
with or altered without detection. This assures AI models of consistent,
trustworthy input data. By leveraging blockchain as an immutable record-keeping
system, AI decision-making can be tied to verifiable data lineage, improving
overall trust in the system. For example, in a supply chain scenario, sensor
readings (e.g., temperature, location) can be logged to a blockchain at each
step.
This creates a permanent data provenance trail that an AI model can later use to
trace back anomalies or confirm the origin and quality of training data.
Blockchain's digital records thus provide insight into the provenance of data
used by AI, addressing one aspect of the AI "black box" problem and improving
confidence in AI-driven recommendations. In essence, blockchain ensures data
integrity for AI – the data feeding AI models remains accurate, untampered, and
traceable.
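A minimal sketch of that pattern, assuming a hypothetical registry contract
with an `anchor(bytes32)` function: the reading is hashed off-chain and only
the digest is written to the ledger, giving later AI consumers a verifiable
provenance anchor.
```typescript
import { ethers } from "ethers";

// Sketch: hash a sensor reading and anchor the digest on-chain. The RPC URL,
// key, contract address, and anchor(bytes32) method are hypothetical.
const provider = new ethers.JsonRpcProvider("https://your-node.example.com");
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY ?? "", provider);
const registry = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // your registry address
  ["function anchor(bytes32 digest) external"],
  wallet,
);

const reading = { shipmentId: "SHP-001", tempC: 4.2, ts: Date.now() };
const digest = ethers.keccak256(ethers.toUtf8Bytes(JSON.stringify(reading)));
await (await registry.anchor(digest)).wait(); // immutable provenance record
```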
### Secure Access Control and Privacy
Blockchains, especially permissioned or consortium blockchains, include
mechanisms for access control through cryptographic keys and smart contracts.
This means AI systems built on such a blockchain can enforce fine-grained data
access policies: only authorized parties (nodes or users with the correct
keys/permissions) can read or contribute certain data.
Such decentralized access control is managed by code rather than by a central
administrator, reducing single points of failure. For instance, patient
healthcare data could be stored on a blockchain in encrypted form; only
hospitals or AI diagnostic agents with the proper cryptographic credentials can
access the data, and every access is recorded on-chain. Smart contracts can
automate the enforcement of consent and usage policies, giving data providers
and users real-time control over data access with a transparent log of who
accessed what.
This not only secures sensitive data for AI applications but also builds trust
that privacy is preserved. Moreover, because the ledger is distributed, there is
no central database vulnerable to breaches – data remains distributed across
nodes, aligning with principles of secure multi-party data sharing and helping
to preserve privacy.
### Auditability and Transparency
One of the biggest challenges with advanced AI models (especially deep learning
systems) is the lack of transparency in their decision-making. Blockchain can
help alleviate this by providing an audit trail for AI processes. Every input
fed into an AI model, every model update, or even every key decision or
prediction made by an AI could be logged as a transaction on the blockchain.
This creates an immutable history that auditors or stakeholders can later review
to understand how a conclusion was reached. In regulated industries, such an
audit trail is invaluable for compliance and accountability. Blockchain's
transparent and tamper-evident log of events makes AI operations more
interpretable and trustworthy to outsiders.
For example, consider an AI system in finance that approves loans: each step
(input data attributes, intermediate risk scores, final decision) can be hashed
or recorded on-chain. Later, if a decision is contested, the bank can prove
exactly what data the AI saw and how the decision was derived, thanks to the
verifiable on-chain record. Researchers have noted that blockchain's
decentralized, immutable, and transparent characteristics present a promising
solution to enhance AI transparency and auditability.
By improving decision traceability, data provenance, and model accountability,
blockchain can make AI's "black-box" decisions more open to scrutiny. In
summary, blockchain adds a layer of auditability to AI systems: all transactions
and decisions are chronologically recorded and cannot be hidden or tampered
with, thus fostering greater trust and explainability.
### Decentralization and Resilience
Decentralization is at the core of blockchain's design. For AI, decentralization
means an AI system or application can be run collaboratively by many parties
without requiring a single, controlling authority. This has several benefits.
First, it increases resilience: with blockchain, the AI ecosystem has no single
point of failure. If one node or participant in the network goes offline or
attempts malicious behavior, the overall system can continue functioning
correctly based on consensus from other nodes. This is crucial for
mission-critical AI applications (e.g., autonomous vehicles or smart grids) that
cannot rely on one central server. Second, decentralization enables
multi-stakeholder collaboration in AI.
Multiple organizations can contribute data or algorithms to a shared AI model
via blockchain, knowing that the rules of interaction are enforced by the
protocol rather than by one party's goodwill. Blockchain's consensus mechanisms
and distributed trust allow untrusted participants to cooperate in AI tasks
securely without a central broker. For instance, in a decentralized medical
research effort, different hospitals might each analyze local patient data with
AI and then share only the model insights or updates via blockchain. No single
hospital "owns" the process, but the blockchain ensures each contribution is
recorded and the overall model evolves reliably. Additionally, the immutable
history and consensus help detect and reject any corrupted inputs, thereby
defending the distributed AI system against data poisoning or unauthorized
interventions. Overall, blockchain's decentralization aligns well with emerging
AI paradigms that require distributed computing and collaboration, enabling
robust and democratic AI systems rather than siloed, centralized ones.
## AI Enhancing Blockchain Capabilities
Just as blockchain strengthens AI's foundation, AI can significantly enhance
blockchain networks and applications. Blockchains generate and rely on vast
amounts of data and complex operations, and here AI's strengths in pattern
recognition, prediction, and automation can be leveraged. We discuss several
dimensions of how AI augments blockchain: through intelligent automation of
processes, anomaly detection for security, data analysis and classification for
insight, and smart contract management for more robust autonomous code.
### Intelligent Automation in Blockchain Workflows
Blockchains often underpin multi-party business processes (for example, supply
chain workflows or inter-bank settlements). While smart contracts can automate
simple if-then logic, integrating AI allows more complex, adaptive automation.
AI systems can be embedded alongside smart contracts to make on-chain workflows
smarter and more responsive. For instance, an AI model could be used to monitor
real-time data (from IoT sensors or external feeds) and then trigger on-chain
actions through smart contracts based on learned patterns or predictions. IBM
researchers describe scenarios where AI models are integrated into smart
contracts on a blockchain to automate decision-making across a business network
– recalling expired products, reordering inventory, executing payments,
resolving disputes, or selecting optimal logistics – all without manual
intervention.
In a food supply chain context, imagine a blockchain that tracks shipments and
storage conditions. An AI embedded in this system could predict if a certain
batch of food is likely to spoil based on temperature readings. Upon a high-risk
prediction, the AI could automatically invoke a smart contract to initiate a
product recall or reroute the shipment, with all parties immediately notified
via the blockchain. Such AI-driven automation adds a layer of intelligence to
the autonomous execution already offered by smart contracts. It helps blockchain
systems move from static rule execution to dynamic decision-making, greatly
increasing efficiency in processes that involve uncertainty or large data
inputs. The net effect is a streamlining of multi-party workflows – removing
friction and delay – as AI makes quick complex judgments and the blockchain
enforces those judgments transparently.
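The food-recall example could be wired up roughly as sketched below; the
contract interface, addresses, and the stand-in scoring function are all
hypothetical, with a real deployment using a trained model instead.
```typescript
import { ethers } from "ethers";

// Sketch of the recall scenario: an off-chain score triggers an on-chain
// action past a threshold. Contract details and the model are hypothetical.
const provider = new ethers.JsonRpcProvider("https://your-node.example.com");
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY ?? "", provider);
const supplyChain = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // your contract address
  ["function initiateRecall(string batchId) external"],
  wallet,
);

// Stand-in for a trained spoilage model: fraction of readings above 8°C.
function spoilageRisk(tempsC: number[]): number {
  return tempsC.filter((t) => t > 8).length / tempsC.length;
}

const readings = [4.1, 4.5, 9.3, 10.2, 9.8];
if (spoilageRisk(readings) > 0.5) {
  // All parties see the recall on-chain as soon as it is mined.
  await (await supplyChain.initiateRecall("BATCH-42")).wait();
}
```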
### Anomaly Detection and Security Enhancement
Blockchain networks, especially public ones, must contend with security issues
like fraudulent transactions, cyber-attacks, or network anomalies. AI excels at
analyzing patterns and can detect outliers far more effectively than manual
monitoring or simple static rules. By applying machine learning models to
blockchain data (e.g., transaction histories, user behavior patterns, network
traffic), one can identify suspicious activities or inefficiencies in real time.
Anomaly detection AI agents can run either on-chain (if lightweight) or
off-chain in blockchain analytics systems, flagging issues for further action.
For example, in cryptocurrency networks an AI might analyze transaction graph
data to detect money laundering patterns or unusual spikes in activity that
could indicate a theft or hack. Successfully detecting anomalies in blockchain
transaction data is essential for bolstering trust in digital payment systems,
as noted by researchers.
If an AI model flags a transaction as likely fraudulent or a smart contract as
behaving abnormally, the blockchain network or validators could automatically
put that transaction on hold or trigger an alert, preventing potential damage.
Similarly, AI can help secure blockchain consensus itself – by predicting and
mitigating DDoS attacks on nodes, optimizing node communications, or even
adjusting consensus parameters based on network conditions. Beyond security,
anomaly detection also means performance tuning: AI could spot congestion
patterns and recommend protocol tuning or sharding to improve scalability. In
summary, AI provides a form of intelligent surveillance over blockchain systems,
enhancing security through continuous learning. It can adapt to new threat
patterns (such as emerging fraud tactics) much faster than human-defined rules,
thus protecting the integrity of blockchain networks in an automated way.
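As a toy illustration of the idea (not a production detector), the sketch below
flags transactions whose value deviates sharply from the sample mean; real
systems would use trained models over much richer features.
```typescript
// Toy anomaly flagger: z-score over transaction values. Real deployments
// would use trained models over richer features (graph structure, timing).
function flagAnomalies(values: number[], threshold = 2): number[] {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance) || 1; // avoid division by zero
  return values
    .map((v, i) => ({ i, z: Math.abs(v - mean) / std }))
    .filter(({ z }) => z > threshold)
    .map(({ i }) => i); // indices of suspicious transactions
}

// The 1_000_000 transfer stands out against routine activity: logs [4].
console.log(flagAnomalies([10, 12, 9, 11, 1_000_000, 10, 13]));
```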
### Data Classification and Insight Extraction from Blockchain Data
Every blockchain, by design, accumulates a growing ledger of transactions or
records. In networks with rich data (for instance, blockchains that handle
supply chain events, identity credentials, or IoT readings), there is a trove of
information that could be mined for value. AI brings advanced analytics to this
domain: it can parse through large volumes of on-chain and off-chain associated
data to classify information, discover patterns, and extract actionable
insights.
For example, AI might categorize transactions into different types (normal,
microtransactions, suspicious, etc.), or classify addresses/wallets by usage
patterns (exchange, individual, smart contract, bot), which is useful for network
analytics. Natural Language Processing (NLP) AI could even read unstructured
data stored or referenced on blockchains (like contract source code or metadata
in transactions) and classify or summarize it. One clear complementary pattern
is using blockchain as the trusted data layer and AI as the analytical layer on
top.
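A toy version of such classification might look like the following; the thresholds and categories are invented for illustration, whereas a real system would learn them from labeled on-chain behavior.

```ts
// Illustrative rule-based address classifier over simple activity features.
interface AddressStats {
  txPerDay: number;     // average transactions per day
  uniquePeers: number;  // distinct counterparties observed
  isContract: boolean;  // whether the address holds deployed code
}

type Category = "exchange" | "bot" | "smart contract" | "individual";

function classify(stats: AddressStats): Category {
  if (stats.isContract) return "smart contract";
  if (stats.uniquePeers > 1000) return "exchange"; // many counterparties
  if (stats.txPerDay > 100) return "bot";          // high-frequency activity
  return "individual";
}

console.log(classify({ txPerDay: 3, uniquePeers: 12, isContract: false }));
// => "individual"
```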
Because blockchain ensures data reliability and consistency, AI analytics on
that data can produce trustworthy insights for decision-makers. Conversely, by
analyzing blockchain data, AI can help identify inefficiencies or opportunities
in business processes, which can then be codified back into new smart contracts
or governance rules. An industry example is advanced auditing: a blockchain
might record every step in a financial audit trail, and an AI tool can sift
through these records to identify anomalies, categorize expense types, or
predict compliance issues.
The AI effectively turns raw, immutable ledger data into higher-level knowledge.
As one guide noted, by analyzing large amounts of blockchain data, AI can detect
patterns and extract meaningful insights that would enable better
decision-making and pattern recognition for businesses. In essence, AI unlocks
the value in blockchain data, providing comprehension and foresight (through
predictions or classifications) from what would otherwise be just extensive
logs. This synergy transforms a passive ledger into an active intelligence
source for organizations.
### AI for Smart Contract Development and Management
Smart contracts are self-executing programs on the blockchain that enforce
agreements. However, they come with challenges: they are hard to change once
deployed, prone to bugs if not written carefully, and limited in their ability
to handle complex logic or adapt over time. AI can assist at multiple stages of
the smart contract lifecycle to overcome these limitations. During development,
AI techniques (like program synthesis or code generation models) can help write
or optimize smart contract code.
Researchers have even proposed AI-powered blockchain frameworks that include
auto-coding features for smart contracts – essentially creating "intelligent
contracts" that can improve themselves. In practice, an AI assistant could
suggest safer code patterns to a developer or even automatically generate parts
of a contract based on high-level specifications, reducing human error. AI can
also be used to verify and validate smart contracts.
Machine learning models might learn from past vulnerabilities to predict if a
new contract has a security flaw or inefficiency, complementing formal
verification by quickly scanning for likely bug patterns. Once contracts are
deployed, AI can help manage them by monitoring their performance and usage. For
example, an AI system could monitor how often certain functions of a contract
are called and dynamically suggest optimizations (or even autonomously trigger
an upgrade via a governance mechanism if one exists). In terms of contract
operations, AI can be integrated to handle exceptions or complex decision
branches that are difficult to hard-code.
For instance, an insurance smart contract might use an AI oracle to decide claim
approvals (evaluating evidence like photos or sensor data) rather than a fixed
rule – thus the contract "adapts" its behavior intelligently within allowed
bounds. AI can also assist in predictive maintenance of blockchain networks,
forecasting when a contract might run out of funds or when a network might
congest, allowing preemptive actions (like raising gas limits or deploying a new
instance). In summary, AI makes smart contracts more robust and user-friendly by
automating code creation, improving security audits, and introducing adaptive
logic. This convergence is steering us toward a future in which AI-driven smart
contracts are a cornerstone of Web3, making decentralized applications more
intelligent, secure, and efficient.
## Architectural Complementarities
Beyond individual benefits, blockchain and AI can be woven together into unified
system architectures that leverage the strengths of each. In such designs,
blockchain often serves as the backbone for trust, data integrity, and
coordination, while AI provides the brain for data processing, decision-making,
and pattern recognition. We highlight a few key architectural complementarities
and patterns that illustrate this symbiosis:
* **Data Provenance on Blockchain, Analytics by AI**: Perhaps the most
straightforward complementary architecture is to use blockchain for recording
provenance of data and processes, and use AI to perform analytics on that
  data. In this pattern, all critical data events (e.g., creation of a dataset,
updates to a model, results of an AI inference) are time-stamped and stored on
a blockchain. This yields an immutable timeline that is extremely useful for
verifying where data came from and how it has been used. AI systems then
operate on this verified data to generate insights.
For example, consider a pharmaceutical supply chain: a blockchain logs each
handoff of a drug shipment (maintaining provenance), and an AI model uses this
log data to predict supply bottlenecks or detect counterfeit products by
spotting irregularities in handoff patterns. The blockchain guarantees the AI
is using authentic data, while the AI extracts meaning from the data.
  In practice, this addresses a critical issue for AI, the "garbage in, garbage
  out" problem, by ensuring the input data quality is high (thanks to blockchain
integrity) and well-understood in origin. It also addresses trust:
stakeholders are more likely to trust AI-driven insights or decisions if they
can independently verify the underlying data trail on a public or consortium
ledger. Thus, this architecture marries blockchain's strength in data fidelity
  with AI's strength in data interpretation.
* **AI Oracles for Smart Contracts**: Blockchains are inherently self-contained
and cannot directly fetch external information without oracles. AI can serve
as an advanced kind of oracle that not only provides external data to smart
contracts but also interprets it. In this complementary setup, an AI system
sits off-chain, ingesting data from the outside world (such as market prices,
weather reports, news feeds, sensor readings) and making sense of it.
  It could perform tasks like image recognition (e.g., verify an insurance claim
  photo), NLP on news (e.g., detect a relevant event), or aggregate and analyze
IoT sensor streams. The AI then sends a distilled, verifiable piece of
information or decision to the blockchain via a cryptographic proof or signed
message. The blockchain's smart contract logic can trust this input because it
comes from a known, authenticated AI oracle service. This pattern effectively
extends smart contracts' capabilities – they can react to complex real-world
situations by outsourcing interpretation to AI.
For instance, a crop insurance contract on blockchain might rely on an AI
oracle to analyze satellite images and weather data to determine if a drought
occurred, then trigger payouts accordingly. The combination creates a
closed-loop system: blockchain enforces rules and transactions, AI expands the
scope of what those rules can cover by bringing in intelligent judgments from
real-world data. Importantly, the blockchain can also record the input and
output of the AI oracle for transparency and later auditing (so one could see
  which image was used and why the AI decided a drought happened). This
  architectural interplay ensures that even when AI is used for complex logic,
  the accountability and determinism of blockchain systems are not lost. (A
  minimal sketch of this oracle flow appears after this list.)
* **On-Chain Governance and Off-Chain AI Computation**: Another complementary
  design splits heavy computation and governance between AI and blockchain.
Training sophisticated AI models or performing large-scale data analytics is
computationally intensive and not feasible directly on most blockchain
platforms. Instead, these tasks are done off-chain (for example, in cloud
servers or edge devices running AI), but orchestrated and verified via
blockchain.
One pattern is to use blockchain for coordinating a network of AI workers:
imagine a decentralized network where many participants train parts of a model
(or compute parts of a task). A smart contract can coordinate the assignment
of tasks, aggregation of results, and reward distribution. The actual AI
computation happens off-chain for efficiency, but whenever a result is
produced, a hash or digital signature of the result is posted to the
blockchain.
The blockchain thus maintains end-to-end oversight: it knows which data was
assigned, which model version was used, and it can even require multiple
independent AI agents to submit results for cross-verification (majority vote,
for instance) before accepting an outcome. This approach is used in some
decentralized machine learning platforms where blockchain tracks contributions
and ensures fairness, while AI does the heavy lifting externally. The result
is an architecture where blockchain handles orchestration, trust, and reward
mechanisms, and AI handles computation and learning. Both pieces work in
lockstep: the blockchain never blindly trusts a result without consensus or
validation, and the AI participants rely on the blockchain for fair
  coordination.
* **Secure Data Exchange with Encryption and AI**: In scenarios where data
privacy is paramount (such as multi-organization AI collaborations),
blockchain and AI can be combined with cryptographic techniques to enable
secure insight without data leakage. Here, blockchain can store encrypted data
or model parameters, or even homomorphic encryption commitments, and only
share them under certain conditions.
AI models (like federated learning models or encrypted AI inference) operate
on this data in encrypted form or distributed form. The blockchain might use
smart contracts to enforce that, for example, only aggregates of data are
revealed and not individual private data. One concrete architectural example
is using secure multi-party computation (MPC) or federated learning (discussed
in the next section) where each party's data stays local, but a blockchain
smart contract coordinates the process of combining results.
Blockchain provides an immutable log of the computation and a platform for
agreement on results, while cryptographic AI techniques ensure the actual raw
data is never exposed. In effect, blockchain contributes transparency to the
process (everyone can see that steps X, Y, Z happened in sequence and who
contributed) and AI/cryptography ensures confidentiality of the inputs.
This complementary architecture is powerful for enterprises that want to
collectively benefit from AI on shared data (for better models or insights)
without compromising privacy or trust. It shows how blockchain's transparency
and AI's privacy-preserving algorithms can be configured to work together,
rather than being at odds.
For instance, if banks want to jointly build an AI model for fraud detection
across all their transaction data, they can employ MPC-based training and use
a blockchain to record each training round's parameter updates. The blockchain
acts as a neutral ground that all banks trust for logging updates and
enforcing protocol (ensuring each bank followed the agreed process), while the
sensitive customer data never leaves each bank's servers. This pattern
exemplifies a secure and trustworthy AI workflow enabled by blockchain
  integration.
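The sketch below illustrates the AI-oracle pattern from the second bullet: an off-chain model produces a judgment, the evidence is hashed for later auditing, and a signed transaction delivers the verdict on-chain. The contract interface, environment variables, and `analyzeImage` stub are hypothetical.

```ts
import { ethers } from "ethers";

// Hypothetical oracle contract interface: stores the AI's verdict together
// with a hash of the evidence it examined, for later auditing.
const oracleAbi = [
  "function reportDrought(bytes32 evidenceHash, bool droughtOccurred) external",
];

async function runDroughtOracle(satelliteImage: Uint8Array) {
  // Off-chain AI judgment (stubbed here), e.g., an image model over satellite data.
  const droughtOccurred = await analyzeImage(satelliteImage);
  // Hash the evidence so anyone can later verify which input was used.
  const evidenceHash = ethers.keccak256(satelliteImage);
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const oracleSigner = new ethers.Wallet(process.env.ORACLE_KEY!, provider);
  const oracle = new ethers.Contract(process.env.ORACLE_ADDR!, oracleAbi, oracleSigner);
  // The contract trusts this call because it checks msg.sender against the
  // registered oracle address; both input hash and verdict are now on-chain.
  await (await oracle.reportDrought(evidenceHash, droughtOccurred)).wait();
}

async function analyzeImage(_img: Uint8Array): Promise<boolean> {
  return true; // placeholder for a real model inference
}
```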
## Decentralized AI Networks and Collaborative Learning
One of the frontier areas at the intersection of blockchain and AI is the
creation of decentralized AI networks, where AI agents, models, or data are
distributed across participants rather than centralized in one entity.
Blockchain plays a critical role in enabling such networks by providing the
trust, incentive, and coordination layer.
Here we explore three important themes: decentralized AI agent networks,
blockchain-based federated learning, and secure multi-party computation, all of
which aim to harness multiple AI participants in a trustworthy manner.
### Blockchain for Decentralized AI Agents
In a decentralized AI agent network, many autonomous agents (which could be AI
software bots or intelligent IoT devices) interact and collaborate without a
central server. These agents might trade services, share data, or jointly make
decisions. Blockchain serves as the communication and agreement platform for
these interactions. Each agent is typically associated with a blockchain
identity (e.g., an address or public key) and can execute smart contract
transactions.
By doing so, agents can enter into agreements, exchange value, or vote on
decisions in a secure and transparent way. The blockchain ensures that all
agents see a consistent view of the "world state" and that no single agent can
manipulate shared facts to its advantage (thanks to consensus). This is crucial
for trust among autonomous entities.
For example, imagine a network of autonomous economic agents that manage power
distribution in a smart grid. Each agent (perhaps controlling a home battery or
an EV charger with AI that learns when to buy/sell power) uses the blockchain to
post its offers and agreements. A smart contract could automatically match
supply and demand between these AI agents. The blockchain records each
transaction (energy bought, sold, at what price) immutably, preventing disputes.
In this setup, blockchain provides the marketplace and arbitration layer, while
the agents' AI handles local decision-making (like predicting when electricity
prices will be high or when their device needs charging). Over time, agents
could even adapt their strategies (reinforcement learning) based on the outcomes
recorded on-chain. This concept extends to many domains: fleets of self-driving
cars negotiating rights-of-way or traffic optimization via blockchain, AI bots
in finance forming a decentralized exchange, or autonomous supply chain agents
negotiating contracts.
The decentralization of AI through blockchain leads to more democratic and
robust systems, preventing any single party from having undue control over the
AI ecosystem. It addresses concerns that today's AI is too centralized in the
hands of a few tech giants by spreading computation and decision power across a
community, anchored by a blockchain for transparency, security, and fairness.
### Federated Learning Coordination via Blockchain
Federated Learning (FL) is a collaborative AI training approach where multiple
parties (clients) train a shared model together without directly sharing their
raw data. Traditionally, FL relies on a central server to coordinate rounds of
training: the server sends the current model to clients, they train on local
data and send updates back, and the server averages these updates into a new
global model. Blockchain can decentralize this process, removing the need for a
central server and adding more trust to the collaboration.
In a blockchain-based federated learning system, a smart contract can take on
the role of coordinator: it can store the current model parameters (or a hash of
them) on-chain, solicit updates from participants, and even perform aggregation
if the logic is simple or verify an off-chain aggregation. Each participant's
update (e-g-, encrypted gradients or model weights) could be submitted as a
transaction to the blockchain. This creates an immutable record of
contributions, which is useful for auditing and also for incentive mechanisms
(like rewarding participants for useful updates).
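A minimal sketch of such a submission follows, assuming a hypothetical coordinator contract that records one update hash per participant per round (the full weights would travel off-chain, e.g. via IPFS). All contract and variable names are illustrative.

```ts
import { ethers } from "ethers";

// Hypothetical coordinator contract: it records a hash of each participant's
// model update per round; the full weights travel off-chain (e.g., via IPFS).
const flAbi = [
  "function submitUpdate(uint256 round, bytes32 updateHash) external",
];

async function submitLocalUpdate(round: number, weights: Float32Array) {
  // Serialize and hash the locally computed update; the hash alone is enough
  // to immutably attribute this round's contribution to this participant.
  const updateHash = ethers.keccak256(new Uint8Array(weights.buffer));
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const signer = new ethers.Wallet(process.env.PARTICIPANT_KEY!, provider);
  const coordinator = new ethers.Contract(process.env.FL_ADDR!, flAbi, signer);
  await (await coordinator.submitUpdate(round, updateHash)).wait();
}
```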
More importantly, using blockchain in FL addresses key vulnerabilities: it
allows untrusted or unknown participants to safely collaborate because the
protocol rules are enforced by code, and it can deter or detect malicious
behavior. For example, a dishonest client might try to poison the model by
submitting bad updates; on a blockchain, such an update could be spotted by
outlier detection logic in a smart contract or by other clients validating
updates.
Researchers have proposed using smart contracts to identify and exclude
unreliable or malicious contributors in federated learning, thereby defending
against poisoning attacks and improving overall model quality. Blockchain also
inherently provides an audit trail of all model updates, which enhances
accountability – one can trace which participant contributed which update, and
how the model evolved, which is valuable in sensitive applications (e.g., a
consortium of banks jointly training a fraud detection model needs to ensure no
participant is sabotaging it).
Another benefit is improved fault tolerance: if one participant or even several
drop out, the others can continue the training round, and new participants can
join by reading the latest model state from the blockchain, all without a
central orchestrator. In short, blockchain empowers federated learning by
providing distributed trust, security, and continuity. It transforms FL into a
more open, yet secure, process – sometimes called Blockchain-Based Federated
Learning (BFL).
Studies have shown that integrating blockchain's decentralization and
tamper-proof logging with FL can overcome single points of failure and even
manage participant reputation in a decentralized manner to ensure high-quality
contributions. This paves the way for large-scale AI model training across
organizations that do not fully trust each other, using blockchain as the glue
that binds their cooperation.
### Secure Multi-Party Computation with Blockchain
Secure Multi-Party Computation (MPC) refers to techniques that allow multiple
parties to jointly compute a function over their inputs while keeping those
inputs private. It's highly relevant when several entities want to contribute
data to an AI computation (training or inference) without revealing sensitive
information to one another.
MPC alone provides privacy, but it doesn't inherently provide a public record or
easy way to enforce the correct sequence of steps beyond cryptographic proofs.
Here, blockchain and MPC can work hand-in-hand to enable privacy-preserving yet
transparent AI computations. In such an architecture, participants use MPC
protocols (or related methods like homomorphic encryption) to do the actual AI
computation (for instance, computing an aggregate statistic or a machine
learning inference) such that no individual's data is exposed.
The blockchain operates in parallel as a coordination and verification layer: it
can outline the steps of the MPC (which all parties must follow), log
commitments or hashes of intermediate results, and ultimately record the final
output of the computation. Because all parties can inspect the blockchain, they
gain confidence that everyone followed the agreed protocol (e.g., certain
commitments were posted before revealing a result, etc.), and any deviation
would be caught.
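A commit-reveal scheme is the simplest instance of this idea; the sketch below shows the two halves in plain TypeScript. On a real network the commitment would be posted as a transaction, and the reveal would be checked by a smart contract rather than a local function.

```ts
import { createHash, randomBytes } from "node:crypto";

// Commit-reveal in miniature: each party first posts a hash commitment of
// its output share, then reveals; the ledger's ordering proves no one
// changed their answer after seeing others'.
function commit(value: string): { commitment: string; nonce: string } {
  const nonce = randomBytes(32).toString("hex");
  const commitment = createHash("sha256").update(value + nonce).digest("hex");
  return { commitment, nonce }; // post `commitment` on-chain, keep `nonce` secret
}

function verifyReveal(commitment: string, value: string, nonce: string): boolean {
  return createHash("sha256").update(value + nonce).digest("hex") === commitment;
}

const { commitment, nonce } = commit("share-42");
console.log(verifyReveal(commitment, "share-42", nonce)); // true
```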
Blockchain provides MPC with an immutable timeline and audit trail, bringing
transparency and order to an otherwise opaque joint computation. Conversely, MPC
enhances blockchain-based systems by adding capabilities for handling private
data that blockchain alone cannot process (since on-chain data is usually
visible to all). A practical example could be a consortium of hospitals
computing an AI prediction on combined patient data (like predicting outbreak
risks) via MPC.
The blockchain would record that each hospital provided an encrypted input
(without revealing the data itself), then record the encrypted intermediate
calculations, and finally store the AI prediction result once the MPC protocol
finishes. All hospitals see the final result and the proof that the computation
was done correctly, but none learns any other hospital's raw data. In finance,
MPC is used for things like jointly training risk models or even managing shared
crypto wallets; with blockchain, every MPC operation (like each signing step in
a multi-signature wallet managed via MPC) can be logged for audit.
In summary, blockchain + MPC yields systems that are both highly
secure/privacy-preserving and transparent. The blockchain ensures an immutable
representation of the MPC transactions and results, which is key for trust,
while MPC ensures sensitive inputs to AI computations remain confidential.
Together, they allow multi-party AI-driven computations that no single party
could trust to do alone, opening the door to broader cooperation (for example,
competitors jointly benefiting from AI on combined data, without giving away
business secrets). This synergy exemplifies the complementarity of blockchain
and AI-driven cryptographic methods in creating new possibilities for secure,
distributed intelligence.
## Towards Transparent, Secure, and Autonomous Systems at Scale
As we have seen, blockchain and AI complement each other in fundamental ways:
blockchain brings transparency, trust, and decentralization to AI systems, and
AI brings automation, intelligence, and adaptability to blockchain systems.
Together, they form the building blocks of next-generation digital platforms
that can operate autonomously at scale while remaining secure and auditable.
**Transparency**: By integrating blockchain, AI-driven processes can be made
transparent and explainable to stakeholders. Every critical action taken by an
AI, whether it's a data transformation, a decision output, or a model update,
can be traced on an immutable ledger. This level of transparency helps overcome
the lack of trust that often plagues AI ("why should we trust the algorithm?")
because there is a verifiable record backing it.
When AI models, data, and decisions are registered on a blockchain, we enable
independent verification and explainability. For instance, an autonomous
vehicle's decisions could be logged to a blockchain for later analysis in case
of an accident, contributing to public trust in such AI systems.
On the flip side, AI can enhance the transparency of blockchain by making sense
of the vast data on-chain and presenting it in human-understandable forms (e.g.,
anomaly reports, trend analyses), thereby illuminating what's happening inside
decentralized systems. The outcome is systems that are not opaque black boxes,
but glass boxes – open to inspection at multiple layers.
**Security**: Both blockchain and AI offer unique security benefits, and
together they cover more ground. Blockchain provides security through
cryptography (signatures, hashes) and consensus, ensuring data integrity and
resistance to tampering.
AI enhances security by proactively monitoring and reacting to threats (like
detecting fraud, intrusions, or system failures as discussed). Additionally, AI
can manage the scale of security – as systems grow to millions of transactions
or events, AI is necessary to filter signal from noise and prioritize threats.
By building AI agents into blockchain networks (for tasks like fraud detection,
network optimization, and user behavior analytics), the security of the overall
system is markedly improved, as it becomes feasible to handle security events in
real time and even predict them. Moreover, blockchain can secure AI models
themselves: for example, a model's parameters or hashes might be stored on-chain
to ensure they haven't been maliciously altered, and only authorized updates
(with proper signatures or proofs) are accepted.
This prevents attackers from subtly corrupting an AI model (a real concern in ML
known as a model-integrity attack) because any unauthorized change wouldn't match
the chain record. Thus, the integrated design supports end-to-end security: from
data input, to model, to decision output, every component is guarded either by
blockchain's cryptographic guarantees or AI's vigilance, or both.
**Autonomy**: The fusion of AI and blockchain is a key enabler of truly
autonomous systems and organizations. Blockchains allow for decentralized
governance – using smart contracts and consensus rules, one can create
applications that run without human intervention (often termed decentralized
autonomous organizations, or DAOs).
However, traditional DAOs and smart contracts can only follow pre-defined rules;
they lack the ability to adapt or improve themselves over time. By incorporating
AI, these autonomous blockchain systems gain the ability to learn from
experience, optimize strategies, and handle novel situations. The result is
self-driving operations in a business or network. Consider an autonomous supply
chain network: blockchain smart contracts handle the enforcement of rules and
financial transactions between parties, while AI components handle demand
forecasting, inventory optimization, and exception management.
The combined system could run with minimal human input, automatically adjusting
to supply shocks or demand changes and negotiating actions among participants.
Importantly, such autonomy scales with the system – adding more AI agents or
more nodes doesn't require a linear increase in central oversight, because
coordination is handled algorithmically.
The scalability of these systems comes from their decentralized nature (adding
more nodes can even strengthen a blockchain network up to a point) and AI's
capability to manage large amounts of data and decision complexity. As one
analysis put it, decentralized AI systems leveraging blockchain can pave the way
for a more inclusive and resilient digital future, democratizing access to AI
and distributing its benefits across society.
In large-scale scenarios (smart cities, global supply chains, planetary-scale
sensor networks), a combination of blockchain for inter-entity coordination and
AI for local intelligence is likely the only feasible way to achieve autonomy
with reliability.
In summary, the convergence of blockchain and AI supports the creation of
systems that are at once transparent, secure, and autonomous, even at large
scale. Blockchain ensures that as these systems scale to more users and more
devices, the integrity and trust in the system does not degrade – everyone sees
a single source of truth and can verify rules are followed.
AI ensures that as complexity grows, the system can handle complexity
intelligently – automating decisions and optimizing resources without constant
human oversight. This powerful synergy is driving innovation toward
infrastructures that operate with the trust of blockchain and the intelligence
of AI.
## Integrated Design Patterns and Examples
To concretize the interaction of blockchain and AI, we now present several
integrated design patterns and example systems. These examples illustrate how
the technologies interlock in practical scenarios, highlighting system
interactions step by step.
### Pattern 1: Trusted Data Pipeline for AI Insights
**Scenario**: A food supply chain involving farmers, distributors, retailers,
and regulators wants to ensure product quality and predict issues like spoilage
or contamination. They also want an audit trail for food safety compliance.
**Design**: Every time food changes hands or conditions (e.g., temperature,
humidity) are measured, the event is logged to a consortium blockchain shared by
all stakeholders. IoT sensors attached to shipments write data (temperature
readings, location updates) to the blockchain via transactions, perhaps through
gateway nodes.
This establishes a trusted data pipeline – any data an AI will use is first
committed to an immutable ledger where it's timestamped and signed by the
source. On the analytics side, an AI system aggregates and analyzes this
blockchain-recorded data. For instance, an AI model might continuously read the
latest temperatures and logistics records from the blockchain and use them to
predict if a given shipment is at risk of spoilage (perhaps using a predictive
model trained on historical data).
If the AI detects an anomaly (say a cooler malfunction leading to rising
temperature), it flags it. Here's where integration tightens: upon a
high-confidence prediction of spoilage, the AI (which could be running as a
trusted oracle service) triggers a smart contract on the blockchain to execute a
predefined action – for example, issuing a recall order for the affected batch
or notifying all relevant parties.
The smart contract might automatically release an insurance payout to the
retailer for the spoiled goods and initiate an order for replacement stock. All
these actions (the AI's alert, the contract's execution, notifications) are
recorded on the blockchain as well.
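On the analytics side, the polling loop might look like the sketch below, assuming a hypothetical shipment-log contract that sensors append to; the spoilage heuristic stands in for a trained model, and the recall trigger would follow the pattern sketched earlier in this document.

```ts
import { ethers } from "ethers";

// Hypothetical shipment-log contract: sensors append readings, anyone reads.
const shipmentAbi = [
  "function logReading(bytes32 shipmentId, int16 tempCelsius) external",
  "function readingsOf(bytes32 shipmentId) view returns (int16[])",
];

const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const shipmentLog = new ethers.Contract(
  process.env.SHIPMENT_ADDR!, shipmentAbi, provider);

// The AI side: poll the trusted on-chain readings and score spoilage risk.
async function checkShipment(shipmentId: string): Promise<boolean> {
  const readings: bigint[] = await shipmentLog.readingsOf(ethers.id(shipmentId));
  const temps = readings.map((r) => Number(r));
  // Stand-in for a trained model: sustained readings above 8 °C flag risk.
  const recentHigh = temps.slice(-5).every((t) => t > 8);
  return recentHigh; // true => invoke the recall contract
}
```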
**Why Blockchain**: Blockchain guarantees the integrity of the supply data. No
distributor can falsify the temperature logs (to hide negligence) because the
data is secured once on-chain. Regulators auditing this system can always
retrieve the full history and trust its accuracy. Also, the recall and payouts
triggered are executed via smart contract, ensuring transparency and fairness
(no delays or bias in who gets compensated).
**Why AI**: AI provides the intelligent insight that something is wrong or needs
attention – a role traditional rule-based monitoring might miss. It can consider
multiple sensor streams and learn patterns (perhaps certain combinations of
humidity and temperature spikes predict bacterial growth) that static thresholds
would not catch. The AI essentially turns the raw data into a decision ("this
batch is likely spoiled") which then the blockchain mechanisms act upon.
**Outcome**: This pattern results in a secure, automated supply chain quality
control system. It is autonomous in responding to issues (thanks to AI-driven
contracts), transparent to all stakeholders (thanks to the blockchain log of
data and actions), and trust-minimized (parties trust the system, not
necessarily each other, since the blockchain mediates).
It also scales across many products and shipments because adding more sensors or
participants simply means more blockchain transactions and more data for the AI
to learn from – which modern systems can handle with proper engineering. The
example aligns with IBM's vision of combining AI and blockchain in supply chains
to remove friction and respond swiftly to events (e.g., recalling expired
products via AI-triggered contracts).
### Pattern 2: Decentralized Collaborative Learning
**Scenario**: A group of hospitals wants to build a powerful AI model (say, for
predicting disease outbreaks or assisting in diagnosis) using their combined
patient data. Due to privacy laws and competitive concerns, they cannot pool all
the raw data in one place.
They need a way to collaborate without a central authority and without exposing
sensitive data.
**Design**: The hospitals employ a federated learning approach with blockchain
coordination. Initially, a base AI model (which could be as simple as an initial
guess at a neural network) is posted as a reference on the blockchain (perhaps
stored on IPFS with the hash on-chain for integrity). Each hospital in each
training round downloads the latest model state from the blockchain and then
trains that model locally on its own patient data (e.g., medical images, health
records). Instead of sending their private data, they compute model weight
updates (gradients) from their local training.
They then submit these updates as transactions to a smart contract on the
blockchain. Each update might be encrypted or signed to ensure authenticity. The
smart contract collects updates from multiple hospitals. To combine them, either
the smart contract performs a simple aggregation (like averaging the weights, if
feasible on-chain through a Solidity loop), or a designated round leader (which
could be one of the hospitals or a consortium server) aggregates off-chain and
submits the aggregated result back to the blockchain.
The new global model parameters are then updated on the blockchain for the next
round. The blockchain thus holds the canonical model state at all times.
Importantly, the smart contract can include logic to evaluate contributions –
for example, it might reject an update that is too far off from others
(potentially malicious) or weigh updates by the size of the contributing
dataset. It could also maintain a reputation score for each participant based on
past contributions.
If a hospital consistently submits outlier gradients (which could be an attempt
to poison the model), the contract could flag or exclude those contributions in
future rounds. All of this happens in a decentralized manner: no single hospital
or central server is in charge; the blockchain's consensus ensures each step
(posting model, collecting updates, updating model) is executed correctly and
transparently.
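To illustrate the outlier resistance described above, the sketch below aggregates updates with a coordinate-wise median rather than a plain mean, which blunts a single poisoned contribution; whether this runs in the contract or in a designated leader's off-chain step is a deployment choice.

```ts
// Robust aggregation for federated rounds: the coordinate-wise median is far
// less sensitive to one poisoned update than a plain average.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const m = Math.floor(s.length / 2);
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

function aggregate(updates: number[][]): number[] {
  const dims = updates[0].length;
  return Array.from({ length: dims }, (_, i) =>
    median(updates.map((u) => u[i]))
  );
}

// Three honest updates and one attempted poisoning in the last row:
console.log(aggregate([[0.1, 0.2], [0.12, 0.19], [0.11, 0.21], [9.9, -9.9]]));
// => [0.115, 0.195] (the outlier barely moves the result)
```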
**Why Blockchain**: Blockchain removes the need for a trusted central aggregator
in the federated learning setup – the coordination is handled by code that all
hospitals trust to execute fairly. It ensures an immutable audit trail of the
training process: later, anyone can verify what data (in aggregate) influenced
the model by examining the sequence of updates on-chain, adding credibility to
the model's integrity.
It also can tokenize the process – for example, automatically reward hospitals
(perhaps with cryptocurrency or just a reputation metric) for participating,
based on the contributions recorded, incentivizing collaboration. And crucially,
by having a shared ledger, new hospitals can join the effort by syncing the
chain and don't have to trust a central authority to catch up with the model
state.
**Why AI**: Here, AI (specifically the federated learning algorithm) is the
whole point of the exercise – the blockchain is supporting it. The AI model
benefits from far more data (spread across institutions) than any single
hospital alone could provide, leading to better accuracy. And by training in a
distributed way, it preserves patient privacy (raw data stays in the hospital)
which might make the difference between having a model or not (as otherwise
data-sharing agreements would block it).
Furthermore, the AI can be enhanced with techniques like differential privacy or
secure MPC so that even the model updates reveal minimal information, and those
techniques can dovetail with blockchain (e.g., postings are encrypted). The
intelligence gained (e-g-, an outbreak prediction model) is shared by all
hospitals for the common good, illustrating how AI can be done collaboratively
when bolstered by the right trust framework.
**Outcome**: This pattern demonstrates a decentralized AI training system that
is privacy-preserving, trustless, and robust. It turns what is normally a
centralized workflow into a distributed one without sacrificing performance.
Each hospital has confidence in the model because they can verify the training
sequence. Patients' data privacy is respected, yet the whole network benefits
from a more data-rich AI model.
This example highlights blockchain's role in enabling multi-party AI projects
that would otherwise be impossible due to trust barriers. It could be applied to
other domains too – banks jointly training fraud detection, manufacturers
jointly training predictive maintenance models – any case where data is siloed
but insights are needed globally.
### Pattern 3: Autonomous Decentralized Agent Network
**Scenario**: Consider a smart city deployment where hundreds of AI-powered
devices and services – traffic lights, autonomous drones, public transport,
ride-sharing cars, energy grids, and environmental sensors – need to coordinate
actions for efficiency and safety. No single entity controls all devices; they
belong to different organizations or stakeholders. The goal is to enable these
disparate AI agents to cooperate and make real-time decisions (like traffic
routing, energy distribution, emergency responses) in a reliable, leaderless
way.
**Design**: The city deploys a permissioned blockchain as an underlying
coordination layer for all these systems. Every device or service runs an AI
agent that makes local decisions (e-g-, a traffic light controller with AI that
optimizes green/red times based on sensor input). These agents communicate and
coordinate via posting transactions to the blockchain or reading data from it.
For example, a self-driving car's AI might publish a transaction announcing it's
about to enter a particular intersection.
The traffic light's AI agent, seeing this on the blockchain, could adjust its
schedule or negotiate right-of-way in a transparent, verifiable manner. Perhaps
multiple cars and lights participate in a smart contract that fairly assigns
crossing priority based on rules (emergency vehicles get highest priority,
etc.). Because all events are on blockchain, malicious agents (or malfunctioning
ones) cannot lie about the state (a car can't secretly claim priority without
others seeing it).
Additionally, the blockchain could hold shared global state that all agents use
– for instance, an up-to-date city-wide traffic congestion map built from inputs
of all sensors, or a ledger of energy credits for each building. AI agents use
this shared data to make decisions that optimize overall system performance, not
just local goals. They could also form ad-hoc contracts: e.g., a building's HVAC
AI agent might buy excess solar power from a neighbor's AI agent via an on-chain
auction if it predicts a cooling need, with the blockchain settling the
micropayment instantly.
The entire network operates autonomously: agents sense, decide (with AI), act
(through blockchain transactions), and effect changes in the real world.
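As a small illustration, a vehicle agent's interaction with a hypothetical intersection-coordination contract might look like this sketch; the contract interface, priority encoding, and environment variables are invented for the example.

```ts
import { ethers } from "ethers";

// Hypothetical intersection-coordination contract: vehicles announce intent,
// and the contract assigns crossing slots by the encoded priority rules.
const intersectionAbi = [
  "function requestCrossing(bytes32 intersectionId, uint8 priority) external",
];

async function approachIntersection(agentKey: string, intersectionId: string) {
  const provider = new ethers.JsonRpcProvider(process.env.CITY_RPC_URL);
  const signer = new ethers.Wallet(agentKey, provider);
  const intersection = new ethers.Contract(
    process.env.INTERSECTION_ADDR!, intersectionAbi, signer);
  // The agent's local AI decides its priority class (stubbed here); the
  // chain-side rules enforce fairness across all participants.
  const priority = 1; // e.g., emergency vehicles would use a higher class
  const tx = await intersection.requestCrossing(ethers.id(intersectionId), priority);
  await tx.wait(); // the assigned slot is now part of the shared, auditable state
}
```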
**Why Blockchain**: Blockchain provides the common communication fabric in a
trustless environment. It's crucial that all these devices and stakeholders have
a shared source of truth for the city's state and a way to enforce agreements.
Blockchain's immutable log and consensus ensure that if there's a dispute (say
two cars claim the same right-of-way), there is a clear record of messages and
timing to resolve it or assign fault.
It also provides security – messages are signed, so a rogue device can't
impersonate another. Smart contracts on the blockchain encode the rules of the
city (like traffic protocols, energy trading rules, etc.) in a way that everyone
must abide by, which prevents chaos. In short, blockchain is the city's
decentralized control hub, without needing a central traffic control center or
energy management center, thereby eliminating single failure points and giving
each stakeholder equal footing in governance.
**Why AI**: AI is necessary because the environment is complex and dynamic. No
simple algorithm can optimize city traffic in real-time or balance a smart grid
perfectly; these require learning from data, predicting future states, and
handling uncertainties – which is AI's domain.
Each agent uses AI to operate its device optimally (e.g., a drone's AI to avoid
collisions and plan routes, a traffic AI to reduce jams, a power grid AI to
predict demand surges). They can also improve over time (learning from
historical data which is also available via blockchain logs). In such a large
system, AI acts as the distributed intelligence, making sense of local sensor
inputs and deciding on best actions, while blockchain ensures those actions are
coordinated and mutually consistent with others.
**Outcome**: This pattern yields a secure autonomous multi-agent ecosystem. It
is secure because blockchain and cryptography guard the interactions, and any
misbehaving agent can be identified or overridden by consensus of others. It is
autonomous because once set up, the network of AIs and smart contracts can
manage city operations with minimal human intervention, adapting to conditions
like accidents or power outages on the fly.
And it is scalable: new devices or services can join the network (by getting
appropriate credentials) and will immediately start cooperating by following
on-chain protocols; the system's decentralized nature means it doesn't
bottleneck easily, and AI helps in optimizing performance as the network grows.
While this example is ambitious, we already see early forms of it in
decentralized energy grids and transportation projects. It underscores how
combining AI decision-makers with a blockchain coordination substrate can
realize complex cyber-physical systems that are resilient and efficient at large
scale.
The convergence of blockchain and AI represents a paradigm shift toward building
systems that are at once intelligent and trustworthy. Blockchain provides the
qualities of integrity, transparency, and decentralized trust that AI systems
need in order to be widely accepted in mission-critical roles. It acts as a
foundational layer that ensures data and processes cannot be maliciously altered
and that all actions are accountable.
AI, on the other hand, injects adaptivity, learning, and automation into
blockchain-based processes, overcoming the rigidity of predefined rules and
handling complexity at scale. Specific complementarities, such as using
blockchain for data provenance and AI for extracting insights, or using
blockchain to coordinate distributed agents and AI to optimize their behavior,
demonstrate that each technology fills gaps in the other.
Blockchain's strengths in providing an auditable shared truth directly bolster
AI's weaknesses in explainability and trust, making AI decisions more traceable
and verifiable. Conversely, AI's strengths in pattern recognition and
decision-making address blockchain's challenges in automation and analysis,
making blockchain networks more efficient and insightful.
Crucially, this synthesis enables systems that can operate securely and
autonomously at scale – from decentralized finance platforms using AI to detect
fraud and manage risk in real-time, to smart manufacturing plants where
blockchain logs every transaction and AI optimizes production without human
input. Both technologies support a vision of autonomous agents and organizations
that are self-governing yet accountable.
A blockchain-backed AI agent is not a black box operating in isolation; it is an
agent whose actions are recorded on an immutable ledger, providing confidence to
users and regulators that it's functioning correctly. Meanwhile, a blockchain
network infused with AI is not a passive ledger; it becomes an active, learning
system that can adjust to new conditions and improve over time.
It is important to note that realizing this convergent potential is not without
challenges. Issues of scalability (blockchains can be slow or
resource-intensive, and AI models can be large), integration complexity (making
AI and smart contracts work together seamlessly), and computational overhead
(e.g., running heavy AI computations in a decentralized way) need continued
innovation.
Solutions are emerging: Layer-2 scaling and more efficient consensus algorithms
for blockchains, model compression and federated learning for AI, and hybrid
architectures (off-chain computing with on-chain verification) are helping
bridge these gaps. As these challenges are addressed, we expect to see more
patterns of blockchain-AI integration in real-world systems.
file: ./content/docs/blockchain-and-ai/mcp.mdx
meta: {
"title": "MCP",
"description": "Using the Model Context Protocol (MCP) to connect LLM to blockchain"
}
## Introduction to model context protocol MCP
The Model Context Protocol (MCP) is a framework designed to enhance the
capabilities of AI agents and large language models (LLMs) by providing
structured, contextual access to external data. It acts as a bridge between AI
models and a variety of data sources such as blockchain networks, external APIs,
databases, and developer environments. In essence, MCP allows an AI model to
pull in relevant context from the outside world, enabling more informed
reasoning and interaction.

MCP is not a single tool but a standardized protocol. This means it defines how
an AI should request information and how external systems should respond. By
following this standard, different tools and systems can communicate with AI
agents in a consistent way. The result is that AI models can go beyond their
trained knowledge and interact with live data and real-world applications
seamlessly.
### Why does MCP matter?
Modern AI models are powerful but traditionally operate as closed systems - they
generate responses based on patterns learned from training data, without
awareness of the current state of external systems. This lack of live context
can be a limitation. MCP matters because it bridges that gap, allowing AI to
become context-aware and action-oriented in real time.
Here are a few reasons MCP is important:
* Dynamic Data Access: MCP allows AI models to interact seamlessly with external
ecosystems (e.g., blockchain networks or web APIs). This means an AI agent can
query a database or blockchain ledger at runtime to get the latest
information, rather than relying solely on stale training data.
* Real-Time Context: By providing structured, real-time access to data (such as
smart contract states or application status), MCP ensures that the AI's
decisions and responses are informed by the current state of the world. This
contextual awareness leads to more accurate and relevant outcomes.
* Extended Capabilities: With MCP, AI agents can execute actions, not just
retrieve data. For example, an AI might use MCP to trigger a blockchain
transaction or update a record. This enhances the agent's decision-making
ability with precise, domain-specific context and the power to act on it.
* Reduced Complexity: Developers benefit from MCP because it offers a unified
interface to various data sources. Instead of writing custom integration code
for each external system, an AI agent can use MCP as a single conduit for many
sources. This streamlines development and reduces errors.
Overall, MCP makes AI more aware, adaptable, and useful by connecting it to live
data and enabling it to perform tasks in external systems. It's a significant
step toward AI that can truly understand and interact with the world around it.
### Key features and benefits
MCP introduces several key features that offer significant benefits to both AI
developers and end-users:
* Contextual Awareness: AI models gain the ability to access live information
and context on demand. Instead of operating in isolation, an AI agent can ask
for specific data (like "What's the latest block on the blockchain?" or "Fetch
the user profile from the database") and use that context to tailor its
responses. This results in more accurate and situationally appropriate
outcomes.
* Blockchain Integration: MCP provides a direct connection to on-chain data and
smart contract functionality. An AI agent can query blockchain state (for
example, checking a token balance or reading a contract's variable) and even
invoke contract methods via MCP. This opens up possibilities for AI-managed
blockchain operations, DeFi automation, and more, all through a standardized
interface.
* Automation Capabilities: With structured access to external systems, AI agents
can not only read data but also take actions. For instance, an AI could
automatically adjust parameters of a smart contract, initiate a transaction,
or update a configuration file in a repository. These automation capabilities
allow the creation of intelligent agents that manage infrastructure or
applications autonomously, under specified guidelines.
* Security and Control: MCP is designed with security in mind (covered in more
detail later). It provides a controlled environment where access to external
data and operations can be monitored and sandboxed. This ensures that an AI
agent only performs allowed actions, and sensitive data can be protected
through authentication and permissioning within the MCP framework.
By combining these features, MCP greatly expands what AI agents can do. It
transforms passive models into active participants that can sense and influence
external systems - all in a safe, structured manner.
## How MCP works
### The core concept
At its core, MCP acts as middleware between an AI model and external data
sources. Rather than embedding all possible knowledge and tools inside the AI,
MCP keeps the AI model lean and offloads the data fetching and execution tasks
to external services. The AI and the MCP communicate through a defined protocol:
1. AI Agent (Client): The AI agent (e.g., an LLM or any AI-driven application)
formulates a request for information or an action. This request is expressed
in a standard format understood by MCP. For example, the AI might ask, "Get
the value of variable X from smart contract Y on blockchain Z," or "Fetch the
contents of file ABC from the project directory."
2. MCP Server (Mediator): The MCP server receives the request and interprets it.
It acts as a mediator that knows how to connect to various external systems.
The server will determine which external source is needed for the request
(blockchain, API, file system, etc.) and use the appropriate connector or
handler to fulfill the query.
3. External Data Source: This can be a blockchain node, an API endpoint, a
database, or even a local development environment. The MCP server
communicates with the external source, for example by making an API call,
querying a blockchain node, or reading a file from disk.
4. Contextual Response: The external source returns the requested data (or the
result of an action). The MCP server then formats this information into a
structured response that the AI agent can easily understand. This might
involve converting raw data into a simpler JSON structure or text format.
5. Return to AI: The MCP server sends the formatted data back to the AI agent.
The AI can then incorporate this data into its reasoning or continue its
workflow with this new context. From the perspective of the AI model, it's as
if it just extended its knowledge or took an external action successfully.
The beauty of MCP is that it abstracts away the differences between various data
sources. The AI agent doesn't need to know how to call a blockchain or how to
query a database; it simply makes a generic request and MCP handles the rest.
This modular approach means new connectors can be added to MCP for additional
data sources without changing how the AI formulates requests.
### Technical workflow
Let's walk through a typical technical workflow with MCP step by step:
1. AI Makes a Request: The AI agent uses an MCP SDK or API to send a request.
For example, in code it might call something like mcp.fetch("settlemint",
"getContractState", params) - where "settlemint" could specify a target MCP
server or context.
2. MCP Parses the Request: The MCP server (in this case, perhaps the SettleMint
MCP server) receives the request. The request will include an identifier of
the desired operation and any necessary parameters (like which blockchain
network, contract address, or file path is needed).
3. Connector Activation: Based on the request type, MCP selects the appropriate
connector or module. For a blockchain query, it might use a blockchain
connector configured with network access and credentials. For a file system
query, it would use a file connector with the specified path.
4. Data Retrieval/Action Execution: MCP executes the action. If it's a data
retrieval, it fetches the data: e.g., calls a blockchain node's API to get
contract state, or reads from a local file. If it's an action (like executing
a transaction or writing to a file), it will perform that operation using the
credentials and context it has.
5. Data Formatting: The raw result is often in a format specific to the source
(JSON from a web API, binary from a file, etc.). MCP will format or serialize
this result into a standard format (commonly JSON or a text representation)
that can be easily consumed by the AI model. It may also include metadata,
like timestamps or success/failure status.
6. Response to AI: MCP sends the formatted response back to the AI agent. In
practice, this could be a return value from an SDK function call or a message
sent over a websocket or HTTP if using a networked setup.
7. AI Continues Processing: With the new data, the AI can adjust its plan,
generate a more informed answer, or trigger further actions. For example, if
   the AI was asked a question about a user's blockchain balance, it now has the
balance from MCP and can include it in its answer. If the AI was autonomously
managing something, it might decide the next step based on the data.
This workflow happens quickly and often behind the scenes. From a high-level
perspective, MCP extends the AI's capabilities on-the-fly. The AI remains
focused on decision-making and language generation, while MCP handles the grunt
work of fetching data and executing commands in external systems.
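To ground this workflow, here is a hypothetical client-side sketch in TypeScript. The endpoint, payload shape, and function names are illustrative only; a real MCP SDK defines its own API, as described in the next section.

```ts
// Hypothetical MCP client interaction; the endpoint, payload shape, and
// names are illustrative, not a specific SDK's API.
interface McpRequest {
  server: string;                   // which MCP server/context to target
  operation: string;                // e.g., "getContractState"
  params: Record<string, unknown>;  // operation-specific parameters
}

async function mcpFetch(req: McpRequest): Promise<unknown> {
  // Steps 1-2: the agent sends a structured request to the MCP server.
  const res = await fetch(`https://mcp.example.com/${req.server}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ operation: req.operation, params: req.params }),
  });
  // Steps 5-6: the server returns formatted, structured data.
  return res.json();
}

async function main() {
  // Step 7: the AI folds the result into its reasoning or next action.
  const state = await mcpFetch({
    server: "settlemint",
    operation: "getContractState",
    params: { network: "sepolia", address: "0x0000000000000000000000000000000000000000" },
  });
  console.log(state);
}

main();
```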
### Key components
MCP consists of a few core components that work together to make the above
workflow possible:
```mermaid
flowchart LR
A[AI Agent / LLM] --(1) request--> B{{MCP Server}}
subgraph MCP Server
B --> C1[Blockchain Connector]
B --> C2[API Connector]
B --> C3[File System Connector]
end
C1 -- fetch/query --> D[(Blockchain Network)]
C2 -- API call --> E[(External API/Data Source)]
C3 -- read/write --> F[(Local File System)]
D -- data --> C1
E -- data --> C2
F -- file data --> C3
B{{MCP Server}} --(2) formatted data--> A[AI Agent / LLM]
```
* MCP Server: This is the central service or daemon that runs and listens for
requests from AI agents. It can be thought of as the brain of MCP that
coordinates everything. The MCP server is configured to know about various
data sources and how to connect to them. In practice, you might run an MCP
server process locally or on a server, and your AI agent will communicate with
it via an API (like HTTP requests, RPC calls, or through an SDK).
* MCP SDK / Client Library: To simplify usage, MCP provides SDKs in different
programming languages. Developers include these in their AI agent code. The
SDK handles the communication details with the MCP server, so a developer can
simply call functions or methods (like mcp.getData(...)) without manually
constructing network calls. The SDK ensures requests are properly formatted
and sends them to the MCP server, then receives the response and hands it to
the AI program.
* Connectors / Adapters: These are modules or plugins within the MCP server that
know how to talk to specific types of external systems. One connector might
handle blockchain interactions (with sub-modules for Ethereum, Hyperledger,
etc.), another might handle web APIs (performing HTTP calls), another might
manage local OS operations (file system access, running shell commands). Each
connector understands a set of actions and data formats for its domain.
Connectors make MCP extensible - new connectors can be added to support new
systems or protocols.
* Configuration Files: MCP often uses configuration (like JSON or YAML) to know
which connectors to activate and how to reach external services. For example,
you might configure an MCP instance with the URL of your blockchain node, API
keys for external services, or file path permissions. The configuration
  ensures that at runtime the MCP server has the info it needs to carry out
  requests safely and correctly (an illustrative configuration sketch follows
  this list).
* Security Layer: Since MCP can access sensitive data and perform actions, it
includes a security layer. This may involve API keys (like the --pat personal
access token in the example) or authentication for connecting to blockchains
and databases. The security layer also enforces permissions: it can restrict
what an AI agent is allowed to do via MCP, preventing misuse. For instance,
you might allow read-only access to some data but not allow any write or
state-changing operations without additional approval.
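For illustration, such a configuration might look like the following, expressed here as a typed TypeScript object (deployments typically use JSON or YAML files); every key name is hypothetical and depends on the MCP server implementation.

```ts
// An illustrative MCP server configuration; field names are hypothetical.
const mcpConfig = {
  connectors: {
    blockchain: {
      network: "ethereum-mainnet",
      rpcUrl: process.env.RPC_URL,       // node endpoint the connector uses
      allowWrites: false,                // read-only unless explicitly enabled
    },
    api: {
      baseUrl: "https://api.example.com",
      apiKey: process.env.API_KEY,       // secrets injected, never hard-coded
    },
    filesystem: {
      roots: ["/workspace/project"],     // path allow-list for file access
    },
  },
  auth: {
    personalAccessToken: process.env.PAT, // e.g., the --pat token mentioned above
  },
};
```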
These components together make MCP robust and flexible. The separation of
concerns (AI vs MCP vs Connectors) means each part can evolve or be maintained
independently. For example, if a new blockchain is introduced, you can add a
connector for it without changing how the AI asks for data. Or if the AI model
is updated, it can still use the same MCP server and connectors as before.
## Settlemint's implementation of MCP
SettleMint is a leading blockchain integration platform that has adopted and
implemented MCP to empower AI agents with blockchain intelligence and
infrastructure control. In SettleMint's implementation, MCP serves as a bridge
between AI-driven applications and blockchain environments managed or monitored
by SettleMint's platform. This means AI agents can interact deeply not only
with blockchain resources (like smart contracts, transactions, and network
data) but also with the underlying infrastructure (nodes, middleware) through a
standardized interface.
By leveraging MCP, SettleMint enables scenarios where:
* An AI assistant can query on-chain data in real time, such as retrieving the
state of a smart contract or the latest block information.
* Autonomous agents can manage blockchain infrastructure tasks (deploying
contracts, adjusting configurations) without human intervention, guided by AI
decision-making.
* Developers using SettleMint can integrate advanced AI functionalities into
their blockchain applications with relatively little effort, because MCP
handles the heavy lifting of connecting the two worlds.
```mermaid
sequenceDiagram
participant AI as AI Model (Agent)
participant MCP as MCP Server
participant Chain as The Graph / Portal / Node
participant API as External API
AI->>MCP: (1) Query request (e.g., get contract state)
Note over AI,MCP: AI asks MCP for on-chain data
MCP-->>AI: (2) Acknowledgement & processing
MCP->>Chain: (3) Fetch data from blockchain
Chain-->>MCP: (4) Return contract state
MCP->>API: (5) [Optional] Fetch related off-chain data
API-->>MCP: (6) Return external data
MCP-->>AI: (7) Send combined response
Note over AI,MCP: AI receives on-chain data (and any other context)
AI->>MCP: (8) Action request (e.g., execute transaction)
MCP->>Chain: (9) Submit transaction to blockchain
Chain-->>MCP: (10) Return tx result/receipt
MCP-->>AI: (11) Confirm action result
```
In summary, SettleMint's version of MCP extends the platform's capabilities,
allowing for AI-driven blockchain operations. This combination brings together
the trust and transparency of blockchain with the adaptability and intelligence
of AI.
### Capabilities and features
SettleMint's MCP implementation comes with a rich set of capabilities tailored
for blockchain-AI integration:
* Seamless IDE Integration: SettleMint's tools work within common developer
environments, meaning you can use MCP in the context of your development
workflow. For example, if you're coding a smart contract or an application, an
AI agent (like a code assistant) can use MCP to fetch blockchain state or
deploy contracts right from your IDE. This streamlines development by giving
real-time blockchain feedback and actions as you code.
* Automated Contract Management: AI agents can interact with and even modify
smart contracts autonomously through MCP. This includes deploying new
contracts, calling functions on existing contracts, or listening to events.
For instance, an AI ops agent could detect an anomaly in a DeFi contract and
use MCP via SettleMint to trigger a safeguard function on that contract, all
automatically.
* AI-Driven Analytics: Through MCP, AI models can analyze blockchain data for
insights and predictions. SettleMint's platform might feed transaction
histories, token movements, or network metrics via MCP to an AI model
specialized in analytics. The AI could then, say, identify patterns of
fraudulent transactions or predict network congestion and feed those insights
back into the blockchain application or to administrators.
These features demonstrate how SettleMint's integration of MCP isn't just a
basic link to blockchain, but a comprehensive suite that makes blockchain data
and control accessible to AI in a meaningful way. It effectively makes
blockchain networks intelligent by allowing AI to continuously monitor and react
to on-chain events.
### Usage in AI and blockchain
By combining the strengths of AI and blockchain via MCP, SettleMint unlocks
several powerful use cases:
* AI-Powered Smart Contract Management: Smart contracts often need tuning or
updates based on external conditions (like market prices or usage load). An AI
agent can use MCP to monitor these conditions and proactively adjust smart
contract parameters (or advise humans to do so) through SettleMint's tools.
This creates more adaptive and resilient blockchain applications.
* Real-time Blockchain Monitoring: Instead of static dashboards, imagine an AI
that watches blockchain transactions and alerts you to important events. With
MCP, an AI can continuously query the chain for specific patterns (like large
transfers, or certain contract events) and then analyze and explain these to a
user or trigger automated responses.
* Autonomous Governance: In blockchain governance (e.g., DAOs), proposals and
decisions could be informed by AI insights. Using MCP, an AI agent could
gather all relevant on-chain data about a proposal's impact, simulate
different outcomes, and even cast votes or execute approved decisions
automatically on the blockchain. This merges AI decision support with
blockchain's execution capabilities.
* Cross-System Orchestration: SettleMint's MCP doesn't have to be limited to
blockchain data. AI can use it to orchestrate actions that span blockchain and
off-chain systems. For example, an AI agent might detect that a supply chain
shipment (tracked on a blockchain) is delayed, and then through MCP, update an
off-chain database or send a notification to a logistics system. The AI acts
as an intelligent middleware, using MCP to ensure both blockchain and
traditional systems stay in sync.
In practice, using MCP with SettleMint's SDK (discussed next) makes implementing
these scenarios much easier. Developers can focus on the high-level logic of
what the AI should do, while the MCP layer (managed by SettleMint's platform)
deals with the complexity of connecting to the blockchain and other services.
## Practical examples
To solidify the understanding, let's look at some concrete examples of how MCP
can be used in a development workflow and in applications, especially with
SettleMint's tooling.
### Implementing AI in a development workflow
Suppose you are a developer working on a blockchain project, and you want to use
an AI assistant to help manage your smart contracts. You can integrate MCP into
your workflow so that the AI assistant has direct access to your project's
context (code, files) and the blockchain environment.
For instance, you might use a command (via a CLI or an npm script) to start an
MCP server that is pointed at your project directory and connected to the
SettleMint platform. An example command could be:
```sh
npx -y @settlemint/sdk-mcp@latest --path=/Users/llm/asset-tokenization-kit/ --pat=sm_pat_xxx
```
Here's what this command does:
* npx is used to execute the latest version of the @settlemint/sdk-mcp package
without needing a separate install.
* \--path=/Users/llm/asset-tokenization-kit/ specifies the local project
directory that the MCP server will have context about. This allows the AI to
query files or code in that directory through MCP and gives it access to the
environment settings created by `settlemint connect`.
* \--pat=sm\_pat\_xxx provides a Personal Access Token (PAT) for authenticating
with SettleMint's services. This token (masked here as xxx) is required for
the MCP server to connect to the SettleMint platform on your behalf.
After running this command, you would have a local MCP server up and running,
connected to both your local project and the SettleMint platform. Your AI
assistant (say a specialized Claude Sonnet-based agent) could then do things
like:
* Ask MCP to write forms and lists based on the data you indexed in, for
example, The Graph.
* Query the live blockchain to get the current state of a contract you're
working on, to verify something or test changes.
* Deploy an extra node in your network.
* List, and later mint, new tokens in your stablecoin contract.
This greatly enhances a development workflow by making the AI an active
participant that can fetch and act on real information, rather than just being a
passive code suggestion tool.
#### Using the SettleMint MCP server in Cursor
Cursor (0.47.0 and up) provides a global `~/.cursor/mcp.json` file where you can
configure the SettleMint MCP server. Point the path to the folder of your
program, and set your personal access token.
> We use the global MCP configuration file because your personal access token
> should never, ever be committed into git. Putting the configuration in the
> project folder, which Cursor also supports, opens up that possibility.
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Open Cursor and navigate to Settings/MCP. You should see a green active status
after the server is successfully connected.
#### Using the SettleMint MCP server in Claude Desktop
Open Claude Desktop and navigate to Settings. Under the Developer tab, tap Edit
Config to open the configuration file and add the following configuration:
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Save the configuration file and restart Claude Desktop. From the new chat
screen, you should see a hammer (MCP) icon appear with the new MCP server
available.
#### Using the SettleMint MCP server in Cline
Open the Cline extension in VS Code and tap the MCP Servers icon. Tap Configure
MCP Servers to open the configuration file and add the following configuration:
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Save the configuration file. Cline should automatically reload the
configuration. You should see a green active status after the server is
successfully connected.
#### Using the SettleMint MCP server in Windsurf
Open Windsurf and navigate to the Cascade assistant. Tap on the hammer (MCP)
icon, then Configure to open the configuration file and add the following
configuration:
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Save the configuration file and reload by tapping Refresh in the Cascade
assistant. You should see a green active status after the server is successfully
connected.
### AI-driven blockchain application or agent
To illustrate a real-world scenario, consider an AI-driven Decentralized Finance
(DeFi) application. In DeFi, conditions change rapidly (prices, liquidity, user
activity), and it's critical to respond quickly.
Scenario: You have a smart contract that manages an automatic liquidity pool.
You want to ensure it remains balanced - if one asset's price drops or the pool
becomes unbalanced, you'd like to adjust fees or parameters automatically.
Using MCP in this scenario (a condensed code sketch follows these steps):
1. An AI agent monitors the liquidity pool via MCP. Every few minutes, it
requests the latest pool balances and external price data (from on-chain or
off-chain oracles) through the MCP server.
2. MCP fetches the latest state from the blockchain (pool reserves, recent
trades) and maybe calls an external price API for current market prices, then
returns that data to the AI.
3. The AI analyzes the data. Suppose it finds that Asset A's proportion in the
pool has drastically increased relative to Asset B (perhaps because Asset A's
price fell sharply).
4. The AI decides that to protect the pool, it should increase the swap fee
temporarily (a common measure to discourage arbitrage draining the pool).
5. Through MCP, the AI calls a function on the smart contract to update the fee
parameter. The MCP's blockchain connector handles creating and sending the
transaction to the network via SettleMint's infrastructure.
6. The transaction is executed on-chain, adjusting the fee. MCP catches the
success response and any relevant event (like an event that the contract
might emit for a fee change).
7. The AI receives confirmation and can log the change or inform administrators
that it took action.
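The same loop can be condensed into code. The sketch below reuses the
hypothetical `McpClient` shape from earlier; the contract address, method
names, and threshold are all assumptions for illustration.
```ts
// Steps 1-7 above, compressed; every name here is an illustrative assumption.
async function guardPool(mcp: McpClient): Promise<void> {
  // Steps 1-2: fetch pool state and market prices through MCP.
  const pool = (await mcp.getData("ethereum", {
    contract: "0xLiquidityPool",
    method: "getReserves",
  })) as { reserveA: bigint; reserveB: bigint };
  const prices = await mcp.getData("price-api", { symbols: ["A", "B"] });

  // Steps 3-4: analyze and decide (simplified ratio check).
  const ratio = Number(pool.reserveA) / Number(pool.reserveB);
  if (ratio > 1.5) {
    // Steps 5-7: act through MCP, which signs and submits the transaction.
    const receipt = await mcp.invoke("sendTransaction", {
      contract: "0xLiquidityPool",
      method: "setSwapFee",
      args: [50], // new fee in basis points
    });
    console.log("fee updated", { receipt, prices });
  }
}
```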
In this use case, MCP enabled the AI to be a real-time guardian of the DeFi
contract. Without MCP, the AI would not have access to the live on-chain state
or the ability to execute a change. With MCP, the AI becomes a powerful
autonomous agent that ensures the blockchain application adapts to current
conditions.
This is just one example. AI-driven blockchain applications could range from
automatic NFT marketplace management, to AI moderators for DAO proposals, to
intelligent supply chain contracts that react to sensor data. MCP provides the
pathway for these AI agents to communicate and act where it matters - on the
blockchain and connected systems.
file: ./content/docs/blockchain-and-ai/open-ai-nodes-and-pg-vector.mdx
meta: {
"title": "Open AI nodes and pgvector",
"description": "A Guide to Building an AI-Powered Workflow with OpenAI Nodes and Vector Storage in Hasura",
"sidebar_position": 2,
"keywords": [
"integration studio",
"OpenAI",
"Hasura",
"pgvector",
"AI",
"SettleMint"
]
}
This guide will demonstrate how to use the **SettleMint Integration Studio** to
create a flow that incorporates OpenAI nodes for vectorization and utilizes the
`pgvector` plugin in Hasura for similarity searches. If you are new to
SettleMint, check out the
[Getting Started Guide](/building-with-settlemint/getting-started).
In this guide, you will learn to create workflows that:
* Use **OpenAI nodes** to vectorize data.
* Store vectorized data in **Hasura** using `pgvector`.
* Conduct similarity searches to find relevant matches for new queries.
### Prerequisites
* A SettleMint Platform account with **Integration Studio** and **Hasura**
deployed
* Access to the Integration Studio and Hasura consoles in your SettleMint
environment
* An OpenAI API key for using the OpenAI nodes
* A data source to vectorize (e.g., Graph Node, Attestation Indexer, or external
API endpoint)
### Example Flow Available
The Integration Studio includes a pre-built AI example flow that demonstrates
these concepts. The flow uses the SettleMint Platform's attestation indexer as a
data source, showing how to:
* Fetch attestation data via HTTP endpoint
* Process and vectorize the attestation content
* Store vectors in Hasura
* Perform similarity searches
You can use this flow as a reference while building your own implementation.
Each step described in this guide can be found in the example flow.
***
## Part 1: Creating a Workflow to Fetch, Vectorize, and Store Data
### Step 1: Set Up Vector Storage in Hasura
1. Access your SettleMint Hasura instance through the admin console.
2. Create a new table called `document_embeddings` with the following columns:
* `id` (type: UUID, primary key)
* `embedding` (type: vector(1536)) - For storing OpenAI embeddings
### Step 2: Set Up the Integration Studio Flow
1. **Open Integration Studio** in SettleMint and click on **Create Flow** to
start a new workflow.
### Step 3: Fetch Data from an External API
1. **Add an HTTP Request Node** to retrieve data from an external API, such as a
document or product listing service.
2. Configure the **API endpoint** and any necessary authentication settings.
3. **Add a JSON Node** to parse the response data, focusing on fields like `id`
and `content` for further processing.
### Step 4: Vectorize Data with OpenAI Node
1. **Insert an OpenAI Node** in the workflow:
* Use this node to generate vector embeddings for the text data using
OpenAI's Embedding API.
* Configure the OpenAI node to use the appropriate model and input data, such
as `text-embedding-ada-002`.

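Under the hood, the OpenAI node calls OpenAI's embeddings endpoint. If you want
to reproduce this step outside Integration Studio, a minimal sketch using the
official `openai` npm package (an assumed dependency) looks like this:
```ts
import OpenAI from "openai";

// Turn text into a 1536-dimension embedding, as the OpenAI node does.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.embeddings.create({
  model: "text-embedding-ada-002",
  input: "content to vectorize",
});

const embedding = response.data[0].embedding; // number[] of length 1536
console.log(embedding.length);
```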
### Step 5: Store Vectors in Hasura with pgvector
1. **Add a GraphQL Node** to save the vector embeddings and data `id` in Hasura.
2. Set up a **GraphQL Mutation** to store the vectors and associated IDs in a
table enabled with `pgvector`.
Example Mutation:
```graphql
mutation insertEmbedding($id: uuid!, $embedding: [Float!]!) {
  insert_document_embeddings(objects: { id: $id, embedding: $embedding }) {
    affected_rows
  }
}
```
3. Ensure correct data mapping from the fetched data and vectorized output.
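The GraphQL node is performing an ordinary HTTP call to Hasura. For reference,
a hand-rolled version of the same mutation might look like the sketch below;
the endpoint URL and admin secret are placeholders for your own Hasura
instance:
```ts
// Sketch of the call the GraphQL node makes; URL and secret are placeholders.
const HASURA_URL = "https://<your-hasura-instance>/v1/graphql";

async function storeEmbedding(id: string, embedding: number[]) {
  const res = await fetch(HASURA_URL, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET ?? "",
    },
    body: JSON.stringify({
      query: `mutation insertEmbedding($id: uuid!, $embedding: [Float!]!) {
        insert_document_embeddings(objects: { id: $id, embedding: $embedding }) {
          affected_rows
        }
      }`,
      variables: { id, embedding },
    }),
  });
  return res.json();
}
```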
### Step 6: Deploy and Test the Workflow
1. **Deploy the Flow** within Integration Studio and **run it** to confirm that
data is fetched, vectorized, and stored in Hasura.
2. **Verify Hasura Data** by checking the table to ensure vectorized entries and
corresponding IDs are stored correctly.
***
## Part 2: Setting Up a Similarity Search Endpoint
### Step 1: Create a POST Endpoint
1. **Add an HTTP POST Node** to accept a JSON payload with a `query` string to
be vectorized and compared to stored data.
Payload Example:
```json
{
"query": "input string for similarity search"
}
```
2. **Parse the Request** by adding a JSON node to extract the `query` field from
the incoming POST request.
### Step 2: Vectorize the Input Query
1. **Add an OpenAI Node** to convert the incoming `query` string into a vector
representation.
Example Configuration:
```text
Model: text-embedding-ada-002
Input: {{msg.payload.query}}
```
### Step 3: Perform a Similarity Search with Hasura
1. **Add a GraphQL Node** to perform a vector similarity search within Hasura
using the `pgvector` plugin.
2. Use a **GraphQL Query** to order results by similarity, returning the top 5
most similar records.
Example Query:
```graphql
query searchVectors($vector: [Float!]!) {
  document_embeddings(
    order_by: { embedding: { _vector_distance: $vector } }
    limit: 5
  ) {
    id
    embedding
  }
}
```
3. Map the vector from the OpenAI node output as the `vector` input for the
Hasura query.
### Step 4: Format and Return the Results
1. **Add a Function Node** to format the response, listing the top 5 matches in
a structured JSON format.
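The Function node is plain JavaScript operating on `msg.payload`. A minimal
sketch of the formatting logic, assuming the `document_embeddings` table from
Part 1:
```ts
// Shape Hasura's response into a compact, ranked match list.
interface EmbeddingRow {
  id: string;
}

function formatMatches(payload: {
  data: { document_embeddings: EmbeddingRow[] };
}) {
  return {
    matches: payload.data.document_embeddings.map((row, i) => ({
      rank: i + 1,
      id: row.id,
    })),
  };
}
```
Inside the Function node, the equivalent body is
`msg.payload = formatMatches(msg.payload); return msg;`.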
### Step 5: Test the Flow
1. **Deploy the Flow** and send a POST request to confirm the similarity search
functionality.
2. **Verify Response** to ensure that the flow accurately returns the top 5
matches from the vectorized data in Hasura.
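To test end to end, POST a query to the endpoint you exposed in Step 1. The URL
below is a placeholder for wherever your Integration Studio flow listens:
```ts
// Quick end-to-end test of the similarity-search flow (placeholder URL).
const res = await fetch("https://<your-integration-studio>/similarity-search", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ query: "input string for similarity search" }),
});
console.log(await res.json()); // expect a ranked list of the top 5 matches
```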
***
## Next Steps
Now that you have built an AI-powered workflow, here are some
blockchain-specific applications you can explore:
### Vectorize On-Chain Data
* Index and vectorize smart contract events for similarity-based event
monitoring
* Create embeddings from transaction data to detect patterns or anomalies
* Vectorize NFT metadata for content-based recommendations
* Build semantic search for on-chain attestations
### Advanced Use Cases
* Combine transaction data with natural language descriptions for enhanced
search
* Create AI-powered analytics dashboards using vectorized blockchain metrics
* Implement fraud detection by vectorizing transaction patterns
* Build a semantic search engine for smart contract code and documentation
### Integration Ideas
* Connect to multiple blockchain indexers to vectorize data across networks
* Combine off-chain and on-chain data vectors for comprehensive analysis
* Set up automated alerts based on similarity to known patterns
* Create a knowledge base from vectorized blockchain documentation
For further resources, check out:
* [SettleMint Integration Studio Documentation](/building-with-settlemint/integration-studio/)
* [Node-RED Documentation](https://nodered.org/docs/)
* [OpenAI API Documentation](https://openai.com/docs/)
* [Hasura pgvector Documentation](https://hasura.io/docs/3.0/connectors/postgresql/native-operations/vector-search/)
***
This guide should enable you to build AI-powered workflows with SettleMint's new
OpenAI nodes and `pgvector` support in Hasura for efficient similarity searches.
file: ./content/docs/building-with-settlemint/getting-started.mdx
meta: {
"title": "Getting started",
"description": "Overview of blockchain development process"
}
## Select between EVM and Fabric chains, or start with pre-built application kits

### EVM chains

For Besu, Ethereum, Polygon, Optimism, and other EVM-compatible blockchains

✓ Step-by-step development workflow
✓ Solidity smart contract development and deployment

| Step | Action | Link |
| ---- | ------ | ---- |
| 1 | Sign up at console.settlemint.com using a corporate email. | [Sign Up](https://console.settlemint.com/) |
| 2 | Once logged in, create a new organization | [Create organization](/documentation/building-with-settlemint/setup-account-and-billing) |
| 3 | Invite collaborators and assign them roles such as Admin or User. | [Add team members](/documentation/platform-components/blockchain-infrastructure/consortium-manager) |
| 4 | Within the organization, create an application | [Create Application](/documentation/building-with-settlemint/evm-chains-guide/create-an-application) |

#### Smart contract development & deployment

| Step | Action | Link |
| ---- | ------ | ---- |
| 9 | Setup private keys and attach them to a node for Tx Signer | [Add private keys](/documentation/building-with-settlemint/evm-chains-guide/add-private-keys) |
| 10 | Add Code Studio IDE to create development environment | [Setup code studio](/documentation/building-with-settlemint/evm-chains-guide/setup-code-studio) |
| 11 | Develop your smart contract code or use one of the templates | [Develop contract](/documentation/building-with-settlemint/evm-chains-guide/deploy-smart-contracts#1-lets-start-with-the-solidity-smart-contract-code) |
| 12 | Write test scripts and test your smart contract | [Test contract](/documentation/building-with-settlemint/evm-chains-guide/deploy-smart-contracts#5-test-the-smart-contract) |
| 13 | Compile smart contract and get the ABI | [Compile contract](/documentation/building-with-settlemint/evm-chains-guide/deploy-smart-contracts#4-compile-the-smart-contract-code) |
| 14 | Deploy contract to the network | [Deploy contract](/documentation/building-with-settlemint/evm-chains-guide/deploy-smart-contracts#6-deploy-the-smart-contract-to-platform-network) |
| 15 | Note the deployed contract address | [Get contract address](/documentation/building-with-settlemint/evm-chains-guide/deploy-smart-contracts#deployed-contract-address) |

#### Setup middlewares and get APIs

| Step | Action | Link |
| ---- | ------ | ---- |
| 16 | Add smart contract portal Middleware and get write APIs for your contract | [Smart contract portal](/documentation/building-with-settlemint/evm-chains-guide/setup-api-portal) |
| 17 | Add Graph Middleware and write subgraph files in IDE | [Setup subgraph](/documentation/building-with-settlemint/evm-chains-guide/setup-graph-middleware#subgraph-deployment-process) |
| 18 | Build and deploy sub-graphs to setup indexing | [Deploy subgraph](/documentation/building-with-settlemint/evm-chains-guide/setup-graph-middleware#codegen-build-and-deploy-subgraph) |
| 19 | Do a transaction from API Portal | [Write data on chain](/documentation/building-with-settlemint/evm-chains-guide/setup-api-portal#4-how-to-configure-rest-api-requests-in-the-portal) |
| 20 | Read indexed data from Graph middleware | [Read data from chain](/documentation/building-with-settlemint/evm-chains-guide/setup-graph-middleware#graph-middleware---querying-data) |

#### Deploy frontend and other services

| Step | Action | Link |
| ---- | ------ | ---- |
| 24 | Use custom deployment module to deploy frontend or other services | [Deploy frontend](/documentation/building-with-settlemint/evm-chains-guide/deploy-custom-services) |
| 25 | Monitor RAM, CPU, and disk usage or apply upgrades. | [Monitoring dashboards](/documentation/platform-components/usage-and-logs/monitoring-tools) |
| 26 | Reach out to us for further assistance or technical support | [Get support](/documentation/support/support) |

### Hyperledger Fabric

| Step | Action | Link |
| ---- | ------ | ---- |
| 1 | Sign up at console.settlemint.com using a corporate email. | [Sign Up](https://console.settlemint.com/) |
| 2 | Once logged in, create a new organization | [Create organization](/documentation/building-with-settlemint/setup-account-and-billing) |
| 3 | Invite collaborators and assign them roles such as Admin or User. | [Add team members](/documentation/platform-components/blockchain-infrastructure/consortium-manager) |
| 4 | Within the organization, create an application | [Create Application](/documentation/building-with-settlemint/hyperledger-fabric-guide/create-an-application) |

#### Deploy frontend and other services

| Step | Action | Link |
| ---- | ------ | ---- |
| 15 | Use Custom Deployment module to deploy frontend or other services | [Deploy frontend](/documentation/building-with-settlemint/hyperledger-fabric-guide/deploy-custom-services) |
| 16 | Monitor RAM, CPU, and disk usage or apply upgrades. | [Monitoring dashboards](/documentation/platform-components/usage-and-logs/monitoring-tools) |
| 17 | Reach out to us for further assistance or technical support | [Get support](/documentation/support/support) |
To get started, enter your work email. No password is required! Simply enter your email and you'll receive a magic link that lets you sign up instantly. After entering your email, click the "Send Magic Link" button. A secure link will be sent to your inbox, allowing you to log in effortlessly.
If you prefer, you can sign up using Google, GitHub, or Auth0 as well. A 250 euro credit is available for first-time users, enabling them to try out the platform.
Enter a name for your organization to serve as the primary identifier for managing projects and collaboration within SettleMint. At a later stage, you can invite members to the organization for better collaboration.
Provide your billing details securely via Stripe, with support for Visa, Mastercard, and Amex, to activate your organization. Follow the prompts to complete the setup and gain full access to SettleMint's blockchain development tools. Ensure all details are accurate to enable a smooth onboarding experience. Your organization is billed monthly, with invoices issued on the 1st of every month.
file: ./content/docs/knowledge-bank/art-gaming-usecases.mdx
meta: {
"title": "Media and gaming use cases",
"description": "A comprehensive guide to blockchain-powered transformations in digital art, content distribution, interactive media, and virtual economies"
}
## Introduction to blockchain in creative and entertainment industries
The convergence of blockchain technology with art, media, gaming, and digital
collectibles has created a new paradigm for ownership, participation, and
monetization in the creative economy. These sectors, once dependent on
centralized platforms, are undergoing a transformation as creators seek direct
engagement with audiences, transparent revenue models, and lasting digital
value.
Traditional models for creators often rely on intermediaries such as galleries,
record labels, publishers, and distributors who control access, dictate terms,
and retain a significant share of revenues. Audiences, on the other hand,
typically receive limited rights to the digital content they consume and have no
verifiable stake in its success or authenticity.
Blockchain redefines this dynamic by introducing digital scarcity, decentralized
ownership, programmable royalties, and tokenized interaction between creators
and fans. It empowers artists, game developers, musicians, streamers, and
content creators to distribute work independently, engage communities directly,
and monetize value beyond attention or ad-based revenue.
From NFTs and DAO-governed content platforms to play-to-earn economies and
digital rights marketplaces, blockchain is reshaping how value is created,
shared, and preserved in digital culture.
## Digital ownership and non-fungible tokens (NFTs)
NFTs are cryptographic tokens that represent unique digital items on a
blockchain. Unlike cryptocurrencies such as Bitcoin or Ether, which are
interchangeable (fungible), each NFT has a distinct identity, metadata, and
ownership record.
Core features of NFTs:
* Unique token ID and metadata recorded on-chain
* Ownership transfers logged immutably
* Provenance tracking including creator, transaction history, and authenticity
* Compatibility with decentralized marketplaces and wallets
In creative industries, NFTs allow creators to issue verifiable, tradable
versions of digital art, music tracks, video clips, 3D assets, or written works.
Buyers receive a blockchain-verified certificate of ownership, even if the
underlying file remains publicly viewable.
Example:
* A digital artist mints a limited edition series of 10 animated artworks
* Each is assigned a unique NFT with a cryptographic signature and IPFS-hosted
media
* Collectors buy, trade, and showcase the pieces on NFT platforms, while the
creator receives royalties from each resale
NFTs restore scarcity and collectibility to digital content, allowing creators
to monetize directly while giving buyers ownership, resale value, and status
within communities.
## Royalties and programmable revenue sharing
One of the most powerful applications of blockchain in media and arts is the
automation of royalty payments. Traditional royalty systems rely on manual
tracking, opaque accounting, and long payment cycles. Smart contracts enable
real-time, rule-based distribution of income to multiple stakeholders.
Features of programmable royalties:
* Automatic revenue splits upon each sale or resale
* Recurring payments to creators, collaborators, and agents
* Transparent, immutable tracking of who earns what and when
* Compatibility with platforms and wallets without centralized control
Example:
* A musician mints a song as an NFT with a 10 percent creator royalty
* Every time the NFT is sold, a smart contract routes 10 percent of the
transaction to the musician’s wallet
* If there are featured artists or producers, their wallets are also linked and
receive a predefined share
This model aligns incentives across creators, makes income streams predictable,
and reduces dependency on record labels or publishing houses to collect and
distribute funds.
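The split logic itself is simple arithmetic, which is why it is easy to encode
on-chain. A minimal sketch follows; all addresses and percentages are
illustrative assumptions, not any specific platform's scheme.
```ts
// Rule-based royalty splitting of the kind a smart contract encodes on-chain.
interface RoyaltySplit {
  wallet: string; // recipient address
  shareBps: number; // share in basis points (100 bps = 1%)
}

function computePayouts(
  salePriceWei: bigint,
  splits: RoyaltySplit[],
): Map<string, bigint> {
  const totalBps = splits.reduce((sum, s) => sum + s.shareBps, 0);
  if (totalBps > 10_000) throw new Error("shares exceed 100%");
  const payouts = new Map<string, bigint>();
  for (const s of splits) {
    payouts.set(s.wallet, (salePriceWei * BigInt(s.shareBps)) / 10_000n);
  }
  return payouts;
}

// A 10% creator royalty on a 1 ETH sale, split 80/20 with a producer:
const payouts = computePayouts(10n ** 18n, [
  { wallet: "0xMusician", shareBps: 800 }, // 8% of the sale
  { wallet: "0xProducer", shareBps: 200 }, // 2% of the sale
]);
console.log(payouts);
```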
## Creator-owned platforms and direct monetization
Blockchain enables creators to bypass centralized distribution platforms such as
YouTube, Spotify, or app stores by building or joining decentralized
alternatives. These platforms use smart contracts, NFTs, and tokens to allow
creators to monetize directly, set their own terms, and engage with audiences as
stakeholders.
Key elements include:
* Token-gated content access and fan subscriptions
* NFT-based ticketing or exclusive merch sales
* Creator DAOs for collaborative decision-making
* Transparent analytics and community rewards
Example:
* A filmmaker releases a short film on a blockchain streaming platform
* Viewers purchase access using platform tokens or NFTs
* Supporters who hold a certain number of tokens can vote on the creator’s next
project or receive exclusive behind-the-scenes content
These models reward loyalty, encourage experimentation, and create durable value
networks between creators and their fans.
## Provenance, authentication, and forgery prevention
Provenance is a critical issue in the art world and luxury media markets.
Without a reliable method to verify the origin and ownership of a digital work,
creators face plagiarism, and collectors face fraud. Blockchain solves this by
creating a permanent, tamper-proof record of creation and transfer.
Benefits include:
* Timestamped registration of original works on-chain
* Public verification of artist wallets and signature authenticity
* Transparent ownership trails for galleries, auction houses, and collectors
* Reduction in legal disputes and insurance claims due to forgery
Example:
* A digital painter registers a new series by minting NFTs immediately upon
creation
* A gallery verifies these on-chain records before listing the work for auction
* Buyers can confirm authenticity by checking that the NFT came from the
verified artist’s wallet and has not been altered or duplicated
This creates trust in digital art markets and extends similar authentication to
photography, music, digital fashion, and literature.
## Tokenized fan engagement and community building
Creators can now involve their audiences not just as consumers but as
participants, collaborators, and investors. Blockchain introduces fan tokens,
governance rights, and revenue-sharing models that allow audiences to shape
content and share in its success.
Use cases include:
* Fans buying creator tokens that grant voting rights or access
* Crowdfunding future projects using NFTs with future revenue rights
* Discord or Telegram communities gated by token ownership
* Leaderboards and on-chain reputation scores for early supporters
Example:
* A podcaster issues a limited number of membership NFTs that grant early access
to episodes, voting on guest lists, and discounts on merch
* As the podcast grows in popularity, these NFTs become collector items with
rising secondary market value
* Fans feel a sense of ownership, which fuels viral promotion and retention
Tokenized engagement shifts value creation from centralized platforms to
creators and their most loyal communities.
## Interoperability and digital identity in metaverse environments
With the rise of virtual worlds and metaverse platforms, creators are building
persistent digital personas, avatars, and assets. Blockchain provides a
cross-platform identity layer and asset ownership framework that allows users to
port NFTs, wearables, and achievements between environments.
Features include:
* NFTs representing avatars, skins, or in-game items usable across metaverses
* Verifiable creator identities for collaboration and attribution
* Social graphs linked to wallet activity, reputation, and past contributions
* Wallet-based access to events, games, or private spaces
Example:
* A 3D artist creates a line of virtual sneakers as NFTs
* These can be used by holders in Decentraland, The Sandbox, and other metaverse
platforms
* Owners can also showcase them in wallet-based galleries or use them to unlock
exclusive chat channels and games
This interoperability gives creators new markets, increases asset utility, and
encourages collaboration across ecosystems.
## Gaming economies and play-to-earn models
Blockchain gaming introduces real digital ownership of in-game assets,
player-driven marketplaces, and income opportunities through gameplay. Players
no longer simply consume game content but earn, trade, and invest within
decentralized gaming economies.
Blockchain gaming features:
* NFTs for characters, weapons, skins, and land with provable rarity
* Play-to-earn models where players earn tokens for achievements or
participation
* DAO governance of game rules, development priorities, or treasury funds
* Secondary markets where assets are traded peer-to-peer
Example:
* A strategy game issues limited edition NFT spaceships with specific
capabilities
* Players who win battles or complete missions earn game tokens
* These tokens can be used to buy new assets or traded for other
cryptocurrencies
* Top players vote on game balance updates or expansions using their token
holdings
This model empowers players as co-creators, blurs the line between gaming and
work, and enables sustainable game-based economies.
## Virtual land, real estate, and immersive experience design
Digital land and 3D experiences are becoming assets with real-world value.
Platforms like Decentraland, Voxels, and Otherside allow users to own parcels of
virtual space and monetize them through content, advertising, or commerce.
Blockchain ensures verifiable ownership and enables secondary trading.
Use cases include:
* NFTs representing parcels of land or buildings in virtual environments
* Smart contract-based leases, events, and rentals
* Galleries, concerts, brand activations, or storefronts hosted in virtual
spaces
* In-world assets such as art, wearables, and soundtracks tied to NFTs
Example:
* A fashion designer purchases a plot in a virtual world and builds an immersive
boutique
* Visitors can walk through the space, try on digital outfits, and mint them as
wearable NFTs
* Events hosted in the space are ticketed via NFTs and reward participants with
airdrops
Virtual land creates new revenue channels and experiential storytelling formats
for creators, brands, and curators.
## Streaming, licensing, and fair-use automation
Streaming platforms face growing tension between user access, creator
compensation, and legal compliance. Blockchain introduces programmable content
rights that automate licensing terms and provide transparent revenue flows.
Streaming use cases:
* Tokenized media access where viewers pay per stream or subscription
* Embedded licenses that define how media can be used, remixed, or distributed
* Streaming royalties distributed to all contributors via smart contracts
* Auditable play counts and view logs recorded immutably
Example:
* An indie filmmaker releases a short film on a decentralized streaming platform
* Each view is tracked on-chain and generates a micro-payment to the creator’s
wallet
* Distributors and co-producers receive their share based on the smart contract
* Viewers can remix the film under a creative license by purchasing a tokenized
derivative right
This model creates transparency, rewards creativity, and eliminates friction
between distribution, licensing, and payment.
## Tokenized storytelling and interactive content
Storytelling in digital media is evolving into interactive, multi-perspective
formats where audiences contribute to narrative development. Blockchain enables
collaborative storytelling through tokenized participation, on-chain narrative
branches, and shared ownership of characters or universes.
Use cases include:
* Story NFTs representing plotlines, characters, or chapters
* Readers voting on story directions through DAO proposals
* Writer royalties encoded in derivative works and spin-offs
* Licensing of in-world assets via token-based permissions
Example:
* A science fiction author releases the first three chapters of a story as NFTs
* Holders of the NFTs propose and vote on how the story continues
* Selected writers contribute new arcs, which are minted and added to the series
* A marketplace allows publishing houses or filmmakers to license characters,
with smart contracts routing royalties to all co-creators
This transforms readers into stakeholders, enables open-world storytelling, and
fosters new economic models for serialized fiction, comics, and transmedia
franchises.
## DAOs for creative collaboration and funding
Decentralized Autonomous Organizations (DAOs) enable artists, developers,
curators, and fans to coordinate around shared creative goals. These groups
manage treasuries, select projects, and govern ecosystems through on-chain
voting and proposal mechanisms.
Creative DAOs can support:
* Collective funding and commissioning of artworks, games, or films
* Community-driven curation, exhibition, or programming
* Shared revenue from NFT drops, royalties, or events
* Transparent governance and conflict resolution processes
Example:
* An art DAO is formed by collectors, curators, and emerging artists
* Members vote on which creators to commission and how to distribute funds
* Completed works are auctioned as NFTs, with sales funding the next cycle
* DAO tokens represent voting power and claim on collective revenues
DAOs create trust-minimized structures for global creative collaboration,
enabling projects that traditional institutions might overlook due to risk or
lack of commercial viability.
## NFT infrastructure and platform design
The NFT ecosystem relies on a complex stack of technologies and standards to
ensure interoperability, security, and scalability. Creators and developers
benefit from understanding the layers involved in building NFT-based products.
Key infrastructure components:
* NFT standards such as ERC-721 and ERC-1155 defining token structure and
metadata
* Decentralized file storage systems like IPFS, Arweave, or Filecoin for hosting
media
* Marketplaces for minting, listing, and trading NFTs
* Smart contract libraries for royalties, airdrops, and auction mechanics
Example:
* A photography platform builds an NFT minting portal for verified creators
* Metadata is stored on-chain, while high-resolution media is pinned to IPFS and
mirrored via Arweave
* Sales are processed using Dutch auctions with smart contract-enforced
royalties
* Buyers can display their collections in virtual galleries or export them to
other platforms
Building with these modular components allows creators to launch and scale NFT
projects while preserving decentralization and ownership.
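Because these standards are uniform, reading an NFT's ownership and metadata
takes only a few lines. A sketch using the `ethers` library (an assumed
dependency; the RPC URL and contract address are placeholders):
```ts
import { Contract, JsonRpcProvider } from "ethers";

// Read ERC-721 ownership and metadata straight from a node.
const provider = new JsonRpcProvider("https://<your-evm-node-rpc>");
const erc721 = new Contract(
  "0xYourCollectionAddress",
  [
    "function ownerOf(uint256 tokenId) view returns (address)",
    "function tokenURI(uint256 tokenId) view returns (string)",
  ],
  provider,
);

const owner = await erc721.ownerOf(1n); // current holder of token #1
const uri = await erc721.tokenURI(1n); // often an ipfs:// link to JSON metadata
console.log({ owner, uri });
```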
## On-chain provenance and metadata integrity
Metadata defines the meaning, context, and value of NFTs. Whether it is the
traits of a generative art piece, the credits of a music track, or the license
type of a film, preserving metadata integrity is essential for long-term trust
and utility.
Blockchain ensures that:
* Metadata cannot be altered without detection
* Ownership and edit permissions are clearly defined
* Version control is maintained for iterative content
* Third parties can query, index, and verify NFT attributes
Example:
* A motion designer releases a short video NFT with embedded sound design,
resolution specs, and frame count
* This metadata is recorded on-chain and linked to verifiable sources
* If the work is remixed or used in an ad campaign, the original creator is
credited via smart contract logic
Accurate metadata builds trust in digital marketplaces, supports creator
attribution, and enables NFT composability across ecosystems.
## Wallets, identity, and cross-platform reputation
Wallets are more than just transaction tools — they are identity containers in
web3. Artists, fans, and developers use wallets to access content, sign
contributions, prove reputation, and receive earnings. Blockchain allows users
to carry their identity and history across platforms.
Wallet-linked identity includes:
* Verification of creator credentials, curation history, or voting activity
* On-chain badges, POAPs (Proof of Attendance Protocol), and certifications
* Social graphs based on mutual NFT ownership or DAO participation
* Pseudonymous reputations backed by creative outputs
Example:
* A 3D artist builds a wallet-linked resume that shows which NFT collections
they contributed to, DAO proposals they passed, and conferences they attended
* Curators vet collaborators by viewing on-chain credentials and previous works
* As the artist’s reputation grows, they gain access to exclusive drops and
funding pools
Decentralized identity empowers creators and users to build long-term
credibility without relying on centralized accounts or institutional
endorsements.
## Cross-platform composability of media assets
Blockchain makes it possible for media assets to be reused, remixed, and
reinterpreted across platforms and applications. Composability refers to the
ability of NFTs and tokens to function in multiple contexts, enhancing their
utility and value.
Examples of composable media:
* A music track NFT used as a game soundtrack, live performance token, or
ambient layer in an art installation
* A character NFT playable in multiple games or displayed in various metaverse
environments
* A 3D asset NFT used in both VR galleries and AR filters
Example:
* A visual artist mints a creature design as an NFT with metadata for pose,
rigging, and file format
* Game developers import the asset as a playable avatar
* VR world builders integrate the asset as an NPC or boss enemy
* The NFT holder earns a share of revenue each time the asset is used
commercially
This composability enables creators to build persistent digital universes and
unlock new revenue streams through inter-platform collaborations.
## Legal frameworks and intellectual property management
While blockchain records ownership of tokens, intellectual property (IP) law
governs rights to use, distribute, or modify the underlying content. Bridging
the gap between digital assets and legal enforceability requires careful design
of licenses, terms, and jurisdictional awareness.
NFT legal considerations include:
* Licensing terms embedded in metadata (e.g., personal use, commercial rights,
CC0)
* Transferability and sublicensing rights during resale
* Smart contracts that enforce payment, royalties, or usage boundaries
* Tools to register NFTs with legal registries or notarize off-chain contracts
Example:
* A digital sculptor releases a work under a Creative Commons license, with
clear terms recorded in the NFT metadata
* Buyers can use the asset in derivative works but cannot sell merchandise
without upgrading to a commercial license token
* Disputes are resolved via arbitration integrated into the marketplace or
referenced through an on-chain notary service
Combining legal frameworks with smart contract enforcement ensures that digital
rights are respected and disputes are minimized in the web3 creator economy.
## Generative art, randomness, and algorithmic creation
Generative art is one of the most celebrated NFT categories. Artists use
algorithms to create large collections of visual pieces, each with unique
traits. Blockchain enables provable randomness, scarcity, and distribution
mechanisms that reward discovery and curation.
Generative NFT projects include:
* Procedural creation of thousands of artwork variations at mint time
* On-chain randomness using oracles or commit-reveal schemes
* Trait rarity, visual layering, and metadata encoding
* Whitelists, pre-mints, and gamified minting experiences
Example:
* A generative art project creates 10,000 abstract compositions using
mathematical functions
* Each mint triggers a random seed that determines color palette, symmetry, and
movement pattern
* Some traits are extremely rare, creating collector demand and social
engagement
* Smart contracts assign each piece an edition number and royalty structure
This artform explores the fusion of code, creativity, and community, while
leveraging blockchain for fair distribution and verification.
## Art curation, exhibition, and fractional ownership
Blockchain enables decentralized curation of art collections, collaborative
exhibitions, and shared ownership models. These innovations expand access to
art, engage global audiences, and create new collector communities.
Applications include:
* NFT-based gallery curation where holders vote on featured pieces
* Virtual exhibitions in metaverse spaces with ticketing and merch
* Fractionalized NFTs representing partial ownership of high-value works
* Collaborative collections managed by DAOs or cultural institutions
Example:
* A collective of art patrons purchases a rare NFT from a blue-chip artist
* The token is fractionalized and each member receives a share
* The group showcases the artwork in a virtual museum and licenses it to
exhibitions
* Profits from ticket sales and merch are distributed to fractional owners
These models create inclusive, scalable, and borderless art ecosystems powered
by decentralized governance.
## Game studios, indie developers, and token economies
Blockchain gives both large and independent game studios the ability to rethink
monetization, community engagement, and content ownership. Developers can issue
native tokens, sell in-game assets as NFTs, and establish DAOs for roadmap
decisions or development bounties.
Applications in game development:
* Pre-selling assets, characters, or land as NFTs before launch
* Rewarding beta testers or early adopters with tokens that hold future utility
* Creating player-owned marketplaces where item value is determined by demand
* Building decentralized game guilds that pool resources and share earnings
Example:
* An indie developer releases a beta version of a fantasy game and sells NFT
weapons to fund development
* Buyers receive game tokens that can be used to craft items or trade with other
players
* Token holders vote on future quests, expansions, and balance changes
* A guild forms around collecting rare items and enters competitions for token
prizes
This model builds passionate communities from day one and turns players into
long-term contributors to game economies.
## Digital fashion, wearables, and avatar customization
Digital fashion refers to clothing, accessories, and design elements created for
use in virtual environments. These items, often minted as NFTs, can be worn by
avatars, used in games, or displayed in metaverse spaces. Designers are creating
entire collections of digital fashion that function as identity tools and
investment assets.
Key elements include:
* NFT garments that can be worn across multiple platforms
* Time-limited or edition-based releases to create scarcity
* Interactivity, animation, and augmented reality functionality
* Secondary markets for resale and customization
Example:
* A digital fashion house releases a seasonal collection of 3D jackets and
glasses as NFTs
* Users dress their avatars in these pieces for events, livestreams, and VR
meetups
* Each item carries metadata for brand, designer, rarity, and compatible
platforms
* Collectors showcase their looks in galleries, AR apps, or social feeds
Digital fashion enables sustainable design, cross-platform identity, and new
forms of brand loyalty beyond physical apparel.
## Cultural preservation and digital heritage
Blockchain provides a permanent, tamper-proof ledger for recording, archiving,
and distributing cultural content. From indigenous knowledge to ancient
manuscripts, digital preservation of heritage materials can benefit from
decentralized storage, provenance tracking, and community governance.
Applications in cultural preservation:
* Archiving language, music, or ritual recordings using decentralized file
systems
* Creating NFTs of digitized artifacts for educational or funding purposes
* Issuing access tokens for scholars, museums, or diaspora communities
* Preventing manipulation or erasure of culturally significant content
Example:
* A cultural preservation project partners with local communities to digitize
ancestral songs
* Each recording is uploaded to IPFS, with accompanying history and attribution
stored on blockchain
* NFTs are issued to supporters, granting access to educational resources and
curation rights
* Funds raised go back to the community through transparent smart contracts
Blockchain ensures that cultural memory is preserved for future generations in
ways that respect ownership, participation, and authenticity.
## Creator tooling, SDKs, and launch platforms
A new generation of creator tools is emerging to simplify the blockchain
experience. These tools allow artists, musicians, and developers to mint NFTs,
build interactive projects, and deploy smart contracts without writing code.
Tooling and platform trends:
* No-code NFT minting interfaces with metadata customization
* APIs and SDKs for integrating wallet-based access into games or websites
* Templates for generative collections, auctions, and token drops
* Cross-chain deployment tools for reaching diverse audiences
Example:
* A musician uses a no-code platform to mint a series of concert ticket NFTs
* They choose royalty percentages, media hosting options, and gating rules
* Fans purchase the NFTs using fiat or crypto, and gain backstage access via a
connected mobile app
* The artist tracks sales and royalties through a dashboard without technical
complexity
These tools lower the barrier to entry, empower experimentation, and foster
creative independence across domains.
## Digital twins and virtual asset mirroring
Digital twins are virtual representations of physical objects or environments,
often used in engineering and manufacturing. In art and fashion, digital twins
allow creators to issue NFTs that correspond to real-world items, enabling
verification, resale, and interaction in the digital realm.
Digital twin applications:
* Minting NFTs linked to physical artworks, sculptures, or garments
* Augmented reality filters that animate the digital twin for social use
* Ownership transfer mechanisms tied to physical handover or redemption
* Display and storage solutions for bridging physical and digital spaces
Example:
* A sculptor creates a bronze statue and mints an NFT version that carries a
video of the making process and a 3D model
* The collector receives both the physical piece and a digital twin
* When the artwork is sold, the NFT is transferred and used to track provenance
* The digital twin is also showcased in online exhibitions or used in virtual
reality
Digital twins create new opportunities for authenticity, interaction, and
archival of creative works.
## Copyright management and DRM replacement
Digital rights management (DRM) has long been used to protect content from
unauthorized copying or use, but it often limits fair use and inconveniences
legitimate buyers. Blockchain introduces a new model of rights management that
is flexible, programmable, and user-respecting.
Features of blockchain DRM:
* NFTs granting time-based, region-based, or usage-limited access to content
* Smart contracts defining sharing rules, revocation, and renewals
* Audit logs showing exactly how and when content was accessed
* Creator-controlled permissions for collaboration or licensing
Example:
* An e-book publisher distributes reading access via tokenized content licenses
* Each token allows five reads within 30 days, with metadata recording usage
* If the token is resold, the license resets and royalties are routed to the
original author
* Libraries or institutions purchase bundles of tokens for lending purposes
This approach replaces rigid DRM with adaptable, enforceable rules that preserve
creator rights and reader convenience.
## Cross-chain NFT ecosystems and multichain strategy
As the NFT space matures, creators and collectors are exploring multiple
blockchains for different use cases. Ethereum remains dominant for high-value
collectibles, but other chains offer lower fees, faster transactions, and unique
communities.
Multichain strategy considerations:
* Launching on Ethereum for provenance and recognition
* Using Polygon or Solana for accessibility and gas efficiency
* Interoperability tools to bridge NFTs or sync metadata
* Cross-chain marketplaces for unified trading and discovery
Example:
* A generative art project launches its premium editions on Ethereum and its
open edition on Avalanche
* The artist uses a cross-chain indexer to display all works in one gallery
* Collectors can bridge NFTs between chains to access different experiences or
liquidity pools
* Smart contracts synchronize royalties across ecosystems
Multichain deployments increase reach, optimize performance, and build
resilience into digital collections.
## Blockchain monetization risks and regulatory issues
Despite the potential of blockchain in creative sectors, monetization brings
legal, financial, and ethical challenges. Creators must navigate fluctuating
token markets, evolving regulations, and community expectations.
Risks and concerns:
* Securities classification of NFTs or tokens triggering regulatory oversight
* Tax implications of sales, royalties, and airdrops across jurisdictions
* Market volatility affecting long-term project viability
* Community backlash from perceived cash grabs or misaligned incentives
Example:
* A gaming DAO issues tokens for early supporters, but fails to clearly define
governance utility
* Regulators question whether the tokens constitute unregistered securities
* Treasury mismanagement and community discontent lead to project decline
Mitigating these risks requires legal counsel, transparent communication, and
alignment between economic design and creative integrity.
## Educational content and onboarding experiences
To bring millions of creators and users into web3, education and user experience
must be prioritized. Onboarding involves more than setting up a wallet — it
includes understanding blockchain principles, risks, and opportunities.
Educational efforts can include:
* Interactive tutorials that reward learners with NFTs or tokens
* Artist residencies and incubators focused on blockchain skills
* Documentation, templates, and case studies for each use case
* Web3 education DAOs, open source courses, and peer-to-peer mentoring
Example:
* A music NFT platform runs a six-week cohort for emerging artists
* Participants mint practice NFTs, join DAO calls, and receive feedback from
industry mentors
* Upon graduation, each artist launches a curated drop with built-in support
* Community members vote to feature top-performing graduates in showcases
Education accelerates adoption, reduces fraud, and cultivates a more diverse and
capable creative ecosystem.
## Immersive performances and virtual event monetization
Blockchain enables creators to produce, host, and monetize virtual performances
in ways that are secure, permissioned, and transparent. From live concerts in
metaverse arenas to interactive poetry readings, artists can token-gate
experiences, reward participants, and archive performances immutably.
Applications include:
* NFT tickets for virtual concerts and streamed events
* Token-based access to backstage content, meetups, or replays
* Smart contracts handling split payments for performers and organizers
* Proof-of-attendance collectibles and engagement rewards
Example:
* A DJ hosts a virtual performance in a voxel-based metaverse space
* Fans purchase NFT tickets, which also unlock exclusive tracks and merchandise
* The event is streamed and recorded, with blockchain metadata capturing
attendance
* Royalties from ticket sales and replay streams are split automatically among
collaborators
This model allows for global reach, minimal overhead, and long-tail monetization
from community-driven experiences.
## Rights management for performing arts and public installations
Theater productions, public murals, sound installations, and live art can all
benefit from blockchain-enabled rights management. Artists can define licensing
conditions, usage permissions, and archival rules through smart contracts and
tokenized rights.
Features include:
* On-chain documentation of contributor credits and performance rights
* Licensing models for derivative works, restaging, or media adaptation
* Community governance of installation maintenance or location changes
* Timestamped records of public feedback, engagement, and impact
Example:
* A choreographer tokenizes the staging rights for a contemporary dance piece
* Cultural institutions or universities can purchase licenses and restage the
work with attribution
* Smart contracts track royalties and assign roles to dancers, lighting
designers, and composers
* All performance data is logged for future reference and research
This supports cultural institutions in honoring creative attribution,
simplifying administrative processes, and ensuring that performance legacies
endure across time and place.
## Decentralized funding for creative projects
Blockchain provides tools for creators to fund their work through direct
community support rather than centralized grants or sponsors. Funding can be
structured through token sales, NFT campaigns, or decentralized autonomous grant
pools.
Models include:
* Crowdsourced treasury governed by community voting (e.g., MolochDAO, Gitcoin)
* NFT pre-sales to fund writing, animation, or album production
* Matching donation models based on community preference
* Social tokens linked to creator milestones and content access
Example:
* A documentary filmmaker launches a campaign using collectible frame NFTs
* Supporters mint frames that represent moments from the production timeline
* Funds are released in tranches tied to completion of key phases, such as
filming or post-production
* NFT holders are credited in the final cut and receive revenue shares from
distribution
Decentralized funding redistributes power in creative ecosystems, enabling more
diverse voices and community-driven cultural expression.
## AI-generated content and creative attribution
The rise of AI-generated images, music, and text introduces new challenges for
attribution, provenance, and intellectual property. Blockchain offers solutions
for tracking contributions, verifying originality, and structuring collaborative
compensation models.
Blockchain + AI applications:
* Tokenizing AI models and prompt parameters to track inputs and training
* Recording attribution logs for co-created works between humans and machines
* Auditable provenance for AI-generated media published or sold
* Licensing models that define AI usage rights and output restrictions
Example:
* An artist creates a collection of AI-assisted abstract works using
custom-trained models
* Each piece is minted with metadata including prompt hashes, model versions,
and human edits
* The NFTs specify whether the output can be reused, remixed, or displayed
commercially
* Smart contracts route royalties to the model’s creator, prompt designer, and
visual editor
Blockchain provides clarity, fairness, and provenance in a rapidly evolving
landscape where creativity and computation are deeply intertwined.
## Digital preservation and archival integrity
Digital art, media, and games require long-term preservation strategies.
Centralized hosting is vulnerable to censorship, decay, or shutdowns. Blockchain
and decentralized storage systems ensure that cultural works remain accessible
and verifiable across decades.
Preservation tools:
* Decentralized file storage using IPFS, Arweave, or Filecoin
* Content-addressable hashes stored immutably on-chain
* Archive DAOs that fund the long-term storage of works across formats
* Cross-institutional redundancy with timestamped metadata
Example:
* A university partners with an NFT archive to preserve the digital works of a
contemporary poet
* Each poem is stored using content-addressable storage and linked to NFTs
* Metadata includes publication context, critical essays, and reading
performances
* Future researchers access a tamper-proof, accessible archive without relying
on centralized publishers
This secures cultural memory, ensures creator visibility over time, and supports
historical analysis of digital creativity.
## Blockchain-native storytelling platforms
New platforms are emerging where narrative structure, character arcs, and media
releases are tied directly to blockchain. These platforms integrate NFTs,
tokens, and smart contracts into the fabric of narrative development.
Features include:
* On-chain world-building, where each character or location is tokenized
* Reader governance over plot development or character survival
* Dynamic NFTs that evolve based on community interaction or data feeds
* Episodic content gated by ownership or subscription tokens
Example:
* A fantasy series is developed with each character represented by a token
* Readers who own a character vote on its fate during climactic story events
* Side quests, lore, and media expansions are unlocked based on community
milestones
* The entire world is built collaboratively by writers, artists, and players
Blockchain-native stories redefine authorship, encourage interactive creativity,
and foster community ownership of fictional universes.
## Market trends, sustainability, and ethical considerations
As adoption grows, the creative blockchain space faces important questions
around environmental impact, accessibility, and cultural ethics. Projects are
responding with innovation in protocol design, governance, and social
responsibility.
Key developments:
* Migration to energy-efficient blockchains such as Polygon, Tezos, and Solana
* Offset protocols for NFT minting and trading emissions
* DAOs focused on indigenous rights, anti-plagiarism, and creator diversity
* Open source tools for transparency, reproducibility, and education
Example:
* A collective of environmental artists releases a zero-emission NFT series
* Minting is done on a proof-of-stake chain and bundled with verified carbon
credits
* The project funds local restoration efforts and supports creators from
frontline climate communities
* Smart contracts ensure transparency of donation flows and allow community
oversight
This convergence of technology, culture, and ethics pushes the creative sector
toward a regenerative model of growth, innovation, and justice.
Blockchain is redefining the landscape of art, media, gaming, and creative
collaboration. It introduces technical primitives for trust, ownership, and
automation — but its real impact lies in how these tools empower creators to
imagine new models of cultural production.
Its influence includes:
* Shifting control from intermediaries to creators and communities
* Unlocking new formats of storytelling, interaction, and immersion
* Enabling sustainable, global funding models for creative work
* Preserving digital heritage and validating artistic innovation
This is not simply a technological shift — it is a cultural movement. As
blockchain becomes increasingly embedded in the tools and platforms of everyday
creativity, it will expand who gets to create, how value is measured, and what
stories are told.
The future of creative expression will be more open, participatory, and
programmable — and blockchain is playing a foundational role in that
transformation.
file: ./content/docs/knowledge-bank/besu-transaction-flow.mdx
meta: {
"title": "Besu transaction cycle",
"description": "Hyperledger besu transaction cycle"
}
## Ethereum Virtual Machine (EVM) Transaction Lifecycle
### Key Generation and Account Creation
The transaction lifecycle begins with cryptographic key generation using the
secp256k1 elliptic curve. A randomly generated 256-bit private key produces a
corresponding public key through elliptic curve multiplication. The Ethereum
address is derived as the last 20 bytes of the Keccak-256 hash of the
uncompressed public key. This address serves as the account identifier,
analogous to a bank account number in traditional finance.
### Smart Contract Compilation and Encoding
For smart contract interactions, Solidity code undergoes compilation into two
critical components:
1. **Bytecode**: The EVM-executable machine code containing initialization and
runtime segments
2. **ABI**: A JSON interface specifying function signatures and parameter
   types

Constructor arguments are ABI-encoded and appended to the deployment
bytecode; RLP encoding is used later, when the transaction itself is
serialized. Dynamic types like strings include length prefixes and offsets
in their encoding scheme.
### Transaction Construction and Signing
Transactions contain several critical fields:
* `nonce`: Sequence number preventing replay attacks
* `gasPrice`/`gasLimit`: Fee market parameters
* `chainId`: Network identifier (EIP-155)
* `data`: For contract calls, ABI-encoded function selectors and arguments
These are signed using ECDSA with the sender's private key, producing three
signature components:
* `v`: Recovery identifier + chainId×2 + 35
* `r`, `s`: Components of the elliptic curve signature
### EVM Execution Mechanics
The EVM processes transactions through deterministic opcode execution:
1. **Calldata Decoding**: Extracts function selectors and parameters using ABI
specifications
2. **Storage Computation**: Simple state variables occupy sequential slots
   (0, 1, 2, ...); in the storage trie, each slot's key is
   `keccak256(pad32(slot_index))`
3. **State Modification**: `SSTORE` updates contract storage, while `SLOAD`
reads values
4. **Memory Management**: Temporary data stored in linear memory during
execution
### State Trie Architecture
Ethereum maintains three Merkle Patricia Tries (MPT):
1. **State Trie**: Maps addresses to account states (nonce, balance,
storageRoot, codeHash)
2. **Storage Trie**: Contract-specific key-value storage (updated per
transaction)
3. **Receipts Trie**: Transaction execution metadata
Each storage slot update modifies the trie structure, with branch nodes (17-item
arrays) and leaf nodes (compact-encoded paths) forming cryptographic proofs of
state transitions.
### Layer 2 Scaling Solutions
#### zkEVMs
* **Validity Proofs**: Generate cryptographic proofs of correct execution
* **On-chain Verification**: Posts state roots + SNARK/STARK proofs to L1
* **Full EVM Equivalence**: Maintains identical storage layouts and ABI encoding
#### Optimistic Rollups
* **Fraud Proofs**: Challenges invalid state transitions during dispute windows
* **Data Availability**: Batches transaction data via calldata compression
* **Delayed Finality**: 7-day challenge periods for state finalization
### Deterministic Execution Guarantees
The system enforces consistency through:
* **RLP Encoding**: Standardized serialization for all persistent data
* **Keccak-256 Hashing**: Uniform slot computation across execution environments
* **Gas Accounting**: Precise opcode cost tracking preventing infinite loops
This architecture demonstrates how Ethereum combines cryptographic primitives,
optimized data structures, and distributed consensus to achieve secure,
verifiable computation in a decentralized environment.
## EVM Transaction Lifecycle
## 1. Key Pair & Account Creation
Ethereum accounts are generated using the elliptic curve secp256k1. The private
key is a randomly generated 256-bit number, the public key is computed via
elliptic curve multiplication, and the address is the last 20 bytes of the
Keccak256 hash of the uncompressed public key.
```javascript
const { ethers } = require("ethers");
const wallet = ethers.Wallet.createRandom();
console.log("Private Key:", wallet.privateKey);
console.log("Public Key:", wallet.publicKey);
console.log("Address:", wallet.address);
```
Example Output:
```
Private Key: 0x9c332b1492d2d9ccdbb4b4628d8695095ad2c22b86c5ef79a2173e0c6f877c22
Public Key: 0x04535b2d2a6c9c44c1791f26791ed5ed1e50481f79cf6bdb238a5d4ae54fe65d74a57e72a2ef5e22a0f8bb006e6f85ea552d4c4c30df5c841b43f9cd1493acfb80
Address: 0xd8cD4DAfD4e581dE9e69fB9588b6E547C206Efd1
```
Layman Explanation: Your Ethereum identity is just a pair of cryptographic keys.
The private key is like your password. The address is like your bank account
number, derived from the public key using a hashing function.
***
## 2. Smart Contract: HelloWorld.sol
We write a basic smart contract with a message variable and a setter method.
```solidity
pragma solidity ^0.8.0;
contract HelloWorld {
    string public message;

    constructor(string memory _msg) {
        message = _msg;
    }

    function updateMessage(string memory _msg) public {
        message = _msg;
    }
}
```
Layman Explanation: This contract is a small program that stores a message. When
deployed, it sets the message to something like "Hello Ethereum!" and anyone can
later update it.
***
## 3. Compilation → ABI & Bytecode
We compile the contract using solc and extract:
* ABI, a JSON description of the contract interface
* Bytecode, the raw EVM machine code
ABI:
```json
[
{
"inputs": [{ "internalType": "string", "name": "_msg", "type": "string" }],
"stateMutability": "nonpayable",
"type": "constructor"
},
{
"inputs": [],
"name": "message",
"outputs": [{ "internalType": "string", "name": "", "type": "string" }],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [{ "internalType": "string", "name": "_msg", "type": "string" }],
"name": "updateMessage",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
}
]
```
Bytecode (full, no truncation):
```
0x608060405234801561001057600080fd5b5060405161011b38038061011b83398101604081905261002f9161003b565b806000819055506100db565b600080fd5b6000819050919050565b61005781610044565b811461006257600080fd5b50565b6000813590506100748161004e565b92915050565b6000602082840312156100905761008f61003f565b5b600061009e84828501610065565b91505092915050565b6100b281610044565b82525050565b60006020820190506100cd60008301846100a9565b92915050565b6000819050919050565b6100e7816100d4565b81146100f257600080fd5b50565b600081359050610104816100de565b92915050565b6000602082840312156101205761011f61003f565b5b600061012e848285016100f5565b9150509291505056fea2646970667358221220bd485cd0e3e06eeb6eac6e324b8e121b6fba8332faafbe3e60ad7fdfaf0b649264736f6c634300080c0033
```
Layman Explanation: The ABI acts like a menu of available functions in the
contract. The bytecode is the actual machine-readable code that the EVM will
run.
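A minimal sketch of this compilation step using solc-js, assuming `solc` is
installed from npm and `HelloWorld.sol` sits in the working directory:

```javascript
// Compile HelloWorld.sol with solc-js via the standard JSON interface.
const fs = require("fs");
const solc = require("solc");

const input = {
  language: "Solidity",
  sources: {
    "HelloWorld.sol": { content: fs.readFileSync("HelloWorld.sol", "utf8") },
  },
  settings: {
    outputSelection: { "*": { "*": ["abi", "evm.bytecode.object"] } },
  },
};

const output = JSON.parse(solc.compile(JSON.stringify(input)));
const contract = output.contracts["HelloWorld.sol"].HelloWorld;

console.log("ABI:", JSON.stringify(contract.abi, null, 2));
console.log("Bytecode: 0x" + contract.evm.bytecode.object);
```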
***
## 4. Constructor Arguments Encoding
We encode constructor arguments to include with the deployment bytecode.
```javascript
const ethers = require("ethers");
const encodedArgs = ethers.utils.defaultAbiCoder.encode(
["string"],
["Hello Ethereum!"]
);
```
Encoded Constructor Args (hex):
```
0x0000000000000000000000000000000000000000000000000000000000000020
000000000000000000000000000000000000000000000000000000000000000f
48656c6c6f20457468657265756d210000000000000000000000000000000000
```
* 0x20: offset to string data
* 0x0f: length of string (15 bytes)
* "Hello Ethereum!" = 0x48656c6c6f20457468657265756d21 padded to 32 bytes
Final Full Deployment Bytecode = bytecode + encoded args:
```
0x608060405234801561001057600080fd5b5060405161011b38038061011b83398101604081905261002f9161003b565b806000819055506100db565b600080fd5b6000819050919050565b61005781610044565b811461006257600080fd5b50565b6000813590506100748161004e565b92915050565b6000602082840312156100905761008f61003f565b5b600061009e84828501610065565b91505092915050565b6100b281610044565b82525050565b60006020820190506100cd60008301846100a9565b92915050565b6000819050919050565b6100e7816100d4565b81146100f257600080fd5b50565b600081359050610104816100de565b92915050565b6000602082840312156101205761011f61003f565b5b600061012e848285016100f5565b9150509291505056fea2646970667358221220bd485cd0e3e06eeb6eac6e324b8e121b6fba8332faafbe3e60ad7fdfaf0b649264736f6c634300080c00330000000000000000000000000000000000000000000000000000000000000020
000000000000000000000000000000000000000000000000000000000000000f
48656c6c6f20457468657265756d210000000000000000000000000000000000
```
Layman Explanation: We attach the initial message ("Hello Ethereum!") to the
bytecode during deployment. The encoded version includes length and position
info so the EVM can read it correctly when deploying.
***
## 5. Raw Deployment Transaction: RLP Encoding and ECDSA Signature
We will now create a raw transaction to deploy the HelloWorld contract, and
generate its signature using ECDSA over the RLP-encoded payload.
Transaction Object (Pre-Signature):
```json
{
"nonce": "0x00",
"gasPrice": "0x04a817c800", // 20 gwei
"gasLimit": "0x2dc6c0", // 3000000
"to": null, // contract creation
"value": "0x00",
"data": "",
"chainId": 1
}
```
Step 1: RLP Encoding (pre-signature)
RLP of the transaction (pre-signature) includes:
```
[
nonce,
gasPrice,
gasLimit,
to (null → 0x),
value,
data,
chainId,
0,
0
]
```
We use:
```
nonce = 0x00
gasPrice = 0x04a817c800 (20,000,000,000 wei)
gasLimit = 0x2dc6c0 (3000000)
to = null (for contract deployment)
value = 0x00
data = (as in Point 4)
chainId = 1
```
Full RLP-Encoded Unsigned TX (Hex):
```
0xf9012a808504a817c800832dc6c080b90124608060405234801561001057600080fd5b5060405161011b38038061011b83398101604081905261002f9161003b565b806000819055506100db565b600080fd5b6000819050919050565b61005781610044565b811461006257600080fd5b50565b6000813590506100748161004e565b92915050565b6000602082840312156100905761008f61003f565b5b600061009e84828501610065565b91505092915050565b6100b281610044565b82525050565b60006020820190506100cd60008301846100a9565b92915050565b6000819050919050565b6100e7816100d4565b81146100f257600080fd5b50565b600081359050610104816100de565b92915050565b6000602082840312156101205761011f61003f565b5b600061012e848285016100f5565b9150509291505056fea2646970667358221220bd485cd0e3e06eeb6eac6e324b8e121b6fba8332faafbe3e60ad7fdfaf0b649264736f6c634300080c00330000000000000000000000000000000000000000000000000000000000000020
000000000000000000000000000000000000000000000000000000000000000f
48656c6c6f20457468657265756d210000000000000000000000000000000000
018080
```
***
Step 2: Sign the Keccak256 Hash of Above
We now hash the RLP-encoded transaction (excluding v, r, s) and sign it using
the private key.
```javascript
const txHash = keccak256(rlpEncodedUnsignedTx);
const signature = ecsign(txHash, privateKey);
```
Example:
```json
{
"v": 0x25,
"r": "0x3aeec3c3a7eb1a13c6d408419816f6bb5563a9cf4263a6b9d170e9bb5b88e5bb",
"s": "0x275d3d113e2f06d90d3dc9e16ff3387ff145f1fe9d62c1e421693d6d24eaa598"
}
```
* v = 37 = 1 \* 2 + 35 for chain ID 1 (EIP-155)
* r, s are ECDSA signature components (from secp256k1)
***
Final Signed Raw Transaction (RLP w/ Signature):
```
0xf9015a808504a817c800832dc6c080b90124608060405234801561001057600080fd5b5060405161011b38038061011b83398101604081905261002f9161003b565b806000819055506100db565b600080fd5b6000819050919050565b61005781610044565b811461006257600080fd5b50565b6000813590506100748161004e565b92915050565b6000602082840312156100905761008f61003f565b5b600061009e84828501610065565b91505092915050565b6100b281610044565b82525050565b60006020820190506100cd60008301846100a9565b92915050565b6000819050919050565b6100e7816100d4565b81146100f257600080fd5b50565b600081359050610104816100de565b92915050565b6000602082840312156101205761011f61003f565b5b600061012e848285016100f5565b9150509291505056fea2646970667358221220bd485cd0e3e06eeb6eac6e324b8e121b6fba8332faafbe3e60ad7fdfaf0b649264736f6c634300080c00330000000000000000000000000000000000000000000000000000000000000020
000000000000000000000000000000000000000000000000000000000000000f
48656c6c6f20457468657265756d210000000000000000000000000000000000
25
3aeec3c3a7eb1a13c6d408419816f6bb5563a9cf4263a6b9d170e9bb5b88e5bb
275d3d113e2f06d90d3dc9e16ff3387ff145f1fe9d62c1e421693d6d24eaa598
```
Layman Explanation: This is like digitally signing a message that says "Deploy
this program with this code." The r and s values prove it's your signature. The
v value tells the network which chain you're sending it to.
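The same serialize → hash → sign sequence can be reproduced with ethers v5,
which this guide uses elsewhere; `deploymentBytecode` and `wallet` are assumed
to come from the earlier steps:

```javascript
// Serialize (RLP), hash (Keccak256), and sign (ECDSA) a legacy transaction.
const { ethers } = require("ethers");

const tx = {
  nonce: 0,
  gasPrice: ethers.utils.parseUnits("20", "gwei"),
  gasLimit: 3000000,
  value: 0,
  data: deploymentBytecode, // bytecode + encoded constructor args (step 4)
  chainId: 1,
};

// RLP([nonce, gasPrice, gasLimit, to, value, data, chainId, 0, 0])
const unsigned = ethers.utils.serializeTransaction(tx);
const digest = ethers.utils.keccak256(unsigned); // this digest is what gets signed
const signingKey = new ethers.utils.SigningKey(wallet.privateKey);
const signature = signingKey.signDigest(digest); // yields { r, s, v }
const signedRaw = ethers.utils.serializeTransaction(tx, signature);
```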
***
## 6. Send Transaction to Ethereum Network
The signed raw transaction is sent via:
```javascript
await provider.sendTransaction(signedTx);
```
A node will verify:
* Signature is valid (recover sender address)
* Nonce is correct
* Sender has enough ETH to cover gasLimit × gasPrice
If valid, the transaction is broadcast into the mempool and included in the next
block.
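Those node-side checks can be approximated locally with ethers v5 before
broadcasting; a sketch, assuming `provider` and `signedTx` from the previous
steps:

```javascript
// Recover the sender from the signature and verify it can fund the transaction.
const parsed = ethers.utils.parseTransaction(signedTx); // decodes RLP + recovers `from`
console.log("Recovered sender:", parsed.from);

const balance = await provider.getBalance(parsed.from);
const maxCost = parsed.gasLimit.mul(parsed.gasPrice).add(parsed.value);
console.log("Nonce on chain:", await provider.getTransactionCount(parsed.from));
console.log("Can cover gasLimit × gasPrice:", balance.gte(maxCost));
```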
***
## 7. Contract Address Calculation
The contract address is computed before deployment completes using:
```javascript
const contractAddress = ethers.utils.getContractAddress({
from: "0xd8cD4DAfD4e581dE9e69fB9588b6E547C206Efd1",
nonce: 0,
});
```
Internally:
```
contractAddress = keccak256(rlp([sender, nonce]))[12:]
```
Step-by-step:
1. RLP(\[0xd8cD4DAfD4e581dE9e69fB9588b6E547C206Efd1, 0]) → 0xd6... (RLP-encoded)
2. keccak256(RLP) → a 32-byte hash (illustrative value)
3. Contract Address = last 20 bytes of that hash =
   0x5cbd38cc74f924b1ef5eb86d9b54f9931f75d7e3
Layman Explanation: Ethereum pre-computes the future address of the contract
using your address and how many transactions you've sent before (the nonce).
***
## 8. Storage Trie Initialization
Ethereum contracts store all state variables in a Merkle Patricia Trie. Each
storage slot is addressed by:
```javascript
slotKey = keccak256(padded_slot_index);
```
For variable message at slot 0x00:
```javascript
const slot = ethers.utils.keccak256(ethers.utils.hexZeroPad("0x00", 32));
// slot = "0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563"
```
Layman Explanation: The contract's internal variables are stored in a key-value
database where keys are hashed. Slot 0x00 refers to the first declared state
variable, which is message.
***
## 9. Submit updateMessage("Goodbye Ethereum!")
We now send a second transaction that calls the smart contract's updateMessage()
function with the new string "Goodbye Ethereum!".
Encode Calldata with ABI
```javascript
const iface = new ethers.utils.Interface(abi);
const data = iface.encodeFunctionData("updateMessage", ["Goodbye Ethereum!"]);
```
***
Full Calldata (Untruncated):
```
0xc47f00270000000000000000000000000000000000000000000000000000000000000020
0000000000000000000000000000000000000000000000000000000000000011
476f6f6462796520457468657265756d2100000000000000000000000000000000
```
Breakdown:
| Bytes Range | Value | Meaning |
| ----------- | ------------------------------------- | ------------------------------------------------------------------ |
| 0x00–0x04 | 0xc47f0027 | Function selector = keccak256("updateMessage(string)").slice(0, 4) |
| 0x04–0x24 | 0000000000...0000020 | Offset to start of string data (32 bytes) |
| 0x24–0x44 | 0000000000...0000011 | Length of the string = 17 bytes |
| 0x44–0x64 | 476f6f6462796520457468657265756d21... | ASCII "Goodbye Ethereum!" padded to 32 bytes |
Layman Explanation: This is the binary version of: "Hey smart contract, call
updateMessage() with the new string 'Goodbye Ethereum!'" The EVM uses this exact
layout to read parameters.
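As a sanity check, the same `Interface` object can decode the calldata back
into its arguments:

```javascript
// Round-trip: decode the ABI-encoded calldata back into readable values.
const [newMessage] = iface.decodeFunctionData("updateMessage", data);
console.log(newMessage); // "Goodbye Ethereum!"
```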
***
Construct and Sign Transaction
```javascript
const tx = {
nonce: 1,
to: "0x5cbd38cc74f924b1ef5eb86d9b54f9931f75d7e3", // deployed contract address
gasPrice: ethers.utils.parseUnits("20", "gwei"),
gasLimit: 100000,
value: 0,
data:
"0xc47f00270000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000011" +
"476f6f6462796520457468657265756d2100000000000000000000000000000000",
chainId: 1,
};
const signedTx = await wallet.signTransaction(tx);
```
***
## 10. EVM Execution Trace for updateMessage()
Let's now simulate the internal EVM execution step-by-step.
The EVM receives the transaction, parses the calldata, and executes opcodes that
perform:
* Decoding the dynamic string argument
* Computing the storage slot
* Writing the new string to that slot using SSTORE
Instruction-Level Breakdown (Simplified):
```
CALLDATALOAD → push offset (0x20) → stack: [0x20]
ADD → string pointer = 0x04 + 0x20 = 0x24
CALLDATALOAD → string length (0x11)
[... memory allocation and copy string ...]
SHA3 → keccak256(0x00) = storage slot
SSTORE → write to slot
```
***
Storage Slot Computation
```javascript
const slotKey = ethers.utils.keccak256(ethers.utils.hexZeroPad("0x00", 32));
// slotKey = "0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563"
```
***
Storage Value (UTF-8 String → Hex)
"Goodbye Ethereum!" (17 bytes) → 0x476f6f6462796520457468657265756d21
Padded to 32 bytes (for EVM):
```
0x476f6f6462796520457468657265756d2100000000000000000000000000000000
```
Layman Explanation: The EVM copies the string to memory, calculates the exact
key to store it under, and then saves the new message in the contract's
database.
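On a Besu or Geth node that exposes the `debug` RPC API, this trace can be
inspected directly; a sketch, assuming `txHash` holds the hash of the
updateMessage transaction:

```javascript
// Fetch the opcode-level execution trace and list every SSTORE.
const trace = await provider.send("debug_traceTransaction", [txHash, {}]);
for (const step of trace.structLogs) {
  if (step.op === "SSTORE") {
    console.log("SSTORE at pc", step.pc, "| gasCost:", step.gasCost);
  }
}
```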
***
## 11. State Trie and Storage Trie Update
The Ethereum state trie now reflects this update.
Account Object:
```json
{
"nonce": 2,
"balance": 0,
"storageRoot": "0xa1c9f3d17704e632bb58bb85e332e0bcbcc181c1cce6dd13a6adca048f2e94f3",
"codeHash": "0x1b449b7a3f5b631d5fa963dfba2dfc19a7d62a9a79e0f6828aee5f785dcfd94a"
}
```
* nonce: 2 (the account has sent two transactions: the deployment at nonce 0
  and this update at nonce 1)
* storageRoot: Merkle root of contract's key-value store (after update)
* codeHash: unchanged unless contract self-destructs or is overwritten
***
Storage Trie Node:
Key (slot):
```
0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563
```
Value (RLP encoded string):
```
0x91476f6f6462796520457468657265756d21
```
Explanation:
* RLP prefix 0x91 = 0x80 + 0x11, announcing that a 17-byte string follows
* The trie stores this RLP-encoded value under the hashed slot key
Layman Explanation: The new message replaces the old one in a secure data
structure called the Merkle Patricia Trie. The root hash of this trie proves
that the value was updated, even without seeing the full database.
***
## 12. Transaction Receipt, Logs, and Bloom Filter
If an event was emitted, logs would be added to the receipt. Even though we
didn't emit an event, here's what a standard receipt might include.
Example Receipt:
```json
{
"transactionHash": "0x9e81fbb3b8fd95f81c0b4161d8ef25824e64920bca134a9b469ec72f4db3cf61",
"blockNumber": 18465123,
"from": "0xd8cD4DAfD4e581dE9e69fB9588b6E547C206Efd1",
"to": "0x5cbd38cc74f924b1ef5eb86d9b54f9931f75d7e3",
"gasUsed": "0x4a38", // 19000+ gas
"status": "0x1", // success
"logs": [],
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
}
```
Layman Explanation: This receipt is like a receipt you get from a store: it
proves the transaction happened. The Bloom filter is a searchable fingerprint of
all logs in the block, so you can find your transaction without scanning
everything.
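A sketch of fetching this receipt with ethers v5, assuming the same `provider`
and `txHash` as in the earlier steps:

```javascript
// Wait for inclusion and read back the receipt fields shown above.
const receipt = await provider.waitForTransaction(txHash);
console.log("status:", receipt.status); // 1 = success, 0 = reverted
console.log("gasUsed:", receipt.gasUsed.toString());
console.log("logs emitted:", receipt.logs.length);
```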
***
## 13. Merkle Patricia Trie Proofs: Structure, Path, Nodes
Ethereum uses a Merkle Patricia Trie (MPT) for three major tries:
| Trie | Purpose | Root Hash Stored In |
| ------------ | -------------------------- | -------------------- |
| State Trie | All EOAs and contracts | Block header |
| Storage Trie | Contract key-value storage | Per-contract account |
| Receipt Trie | All tx receipts in a block | Block header |
Each key is hashed with Keccak256 and converted to hex-nibbles (base-16) for
trie traversal. Nodes are:
* Branch node: 17 slots (16 for hex chars + 1 for value)
* Extension/Leaf node: \[prefix, value] with compact encoding
***
Example: Storage Trie Proof Path for Slot 0x00
Slot key (from earlier):
```
key = 0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563
```
Hex nibble path:
```
[2, 9, 0, d, e, c, d, 9, 5, 4, 8, b, 6, 2, a, 8, d, 6, 0, 3, 4, 5, a, 9, 8, 8, 3, 8, 6, f, c, 8, 4, b, a, 6, b, c, 9, 5, 4, 8, 4, 0, 0, 8, f, 6, 3, 6, 2, f, 9, 3, 1, 6, 0, e, f, 3, e, 5, 6, 3]
```
The path guides the trie down nodes (branch → extension → leaf).
Each proof includes:
* All nodes from root → leaf
* Sibling hashes
* RLP encoding of node contents
***
Example Leaf Node:
```json
[
"0x35ab3c...", // Compact-encoded key
"0x83476f6f6462796520457468657265756d21"
]
```
Layman Explanation: The Ethereum state is like a massive tree of keys and
values. To prove something exists in it, you can walk the exact path and show
only the parts needed, without revealing the entire tree. This enables
efficient proof of data inclusion.
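Such a proof can be requested from any node that supports the EIP-1186
`eth_getProof` method; a sketch using the contract address and storage slot
from the earlier steps:

```javascript
// Request a Merkle proof for one storage slot of the deployed contract.
const proof = await provider.send("eth_getProof", [
  "0x5cbd38cc74f924b1ef5eb86d9b54f9931f75d7e3", // contract address
  ["0x0000000000000000000000000000000000000000000000000000000000000000"], // slot 0
  "latest",
]);
console.log("storageRoot:", proof.storageHash);
console.log("proof nodes (root → leaf):", proof.storageProof[0].proof.length);
console.log("slot value:", proof.storageProof[0].value);
```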
***
## 14. Final Contract Account Object (Post-Execution)
After two transactions, the account object in the state trie looks like this:
```json
{
"address": "0x5cbd38cc74f924b1ef5eb86d9b54f9931f75d7e3",
"nonce": 1,
"balance": "0x00",
"codeHash": "0x1b449b7a3f5b631d5fa963dfba2dfc19a7d62a9a79e0f6828aee5f785dcfd94a",
"storageRoot": "0xa1c9f3d17704e632bb58bb85e332e0bcbcc181c1cce6dd13a6adca048f2e94f3"
}
```
Details:
* nonce: For contract accounts, incremented only when the contract itself
  creates another contract (starts at 1 under EIP-161)
* codeHash: keccak256(contract bytecode)
* storageRoot: Root hash of contract's internal storage trie
***
## 15. ZK-Rollup (zkEVM) Transaction Lifecycle Differences
zkEVM-compatible rollups (e.g. Polygon zkEVM, Scroll, Taiko) execute
transactions differently:
Execution Flow:
1. Transaction is submitted to the zk-rollup (off-chain)
2. EVM is simulated inside a zk-circuit
3. A validity proof is generated for:
* Execution trace
* State changes
* Storage writes
4. L1 receives:
* Final stateRoot
* SNARK/STARK proof
* Compressed calldata
***
Compatible With:
| Feature | zkEVM Behavior |
| -------------------- | ------------------- |
| Bytecode | Fully supported |
| Storage slot hashing | Identical |
| Calldata encoding | Same ABI format |
| Gas metering | Emulated in circuit |
| Logs/events | Recorded for proof |
Layman Explanation: Instead of making every Ethereum node re-execute your
transaction, zkEVM rollups do it once, then prove mathematically to everyone
else that the result is valid. The proof is posted to Ethereum, and no one needs
to trust the prover; the math guarantees correctness.
***
## 16. Optimistic Rollups (Arbitrum, Optimism)
Optimistic Rollups take a different approach.
Execution Model:
1. Transactions are sent to the rollup chain
2. Execution is done off-chain, but results are considered valid by default
3. After a delay window (e.g. 7 days), L1 finalizes the result
4. Anyone can submit a fraud proof if they detect an invalid state root
***
L1 Calldata Format
Rollup batches are posted to Ethereum as calldata in a single tx:
```
0x0000...
|-- tx1 calldata
|-- tx2 calldata
|-- ...
```
Calldata compression techniques:
* Zlib/snappy
* Merkle batching
* Prefix compression
***
Rollup vs zkEVM Differences
| Feature | Optimistic Rollup | zkEVM |
| ----------------- | ----------------------- | ---------------------------- |
| Trust model | Fraud-proof (challenge) | Validity-proof (math) |
| Finality | Delayed (7 days) | Near-instant |
| L1 data cost      | Higher (calldata heavy) | Lower (proof cost amortized) |
| EVM compatibility | High (native EVM) | High (EVM circuits) |
Layman Explanation: Optimistic rollups assume everything is okay unless someone
proves otherwise. zkEVMs prove everything is correct from the start. Both post
data to Ethereum, but the trust assumptions and confirmation times are
different.
***
Final World State Snapshot
```json
{
"ContractAddress": "0x5cbd38cc74f924b1ef5eb86d9b54f9931f75d7e3",
"Storage": {
"SlotKey": "0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563",
"SlotValue": "0x476f6f6462796520457468657265756d2100000000000000000000000000000000"
},
"AccountObject": {
"nonce": 1,
"codeHash": "0x1b449b7a3f5b631d5fa963dfba2dfc19a7d62a9a79e0f6828aee5f785dcfd94a",
"storageRoot": "0xa1c9f3d17704e632bb58bb85e332e0bcbcc181c1cce6dd13a6adca048f2e94f3"
},
"LastTransaction": {
"Function": "updateMessage(string)",
"Calldata": "0xc47f00270000000000000000000000000000000000000000000000000000000000000020" +
"0000000000000000000000000000000000000000000000000000000000000011" +
"476f6f6462796520457468657265756d2100000000000000000000000000000000",
"v": "0x25",
"r": "0x3aeec3c3a7eb1a13c6d408419816f6bb5563a9cf4263a6b9d170e9bb5b88e5bb",
"s": "0x275d3d113e2f06d90d3dc9e16ff3387ff145f1fe9d62c1e421693d6d24eaa598"
},
"ExecutionContext": {
"L1": "Ethereum Mainnet",
"Rollups": {
"zkEVM": "Validity proof enforces correctness",
"Optimistic": "Post and challenge model with 7d delay"
}
}
}
```
***
## Ethereum vs Hyperledger Fabric - Comparison
## Technical Comparison Table
| Category | Ethereum (EVM-Based Chains) | Hyperledger Fabric |
| ---------------------------------- | ------------------------------------------------------------ | --------------------------------------------------------------- |
| **1. Identity Model** | ECDSA secp256k1 key pair; address = Keccak256(pubkey)\[12:] | X.509 certificates issued by Membership Service Providers (MSP) |
| **2. Network Type** | Public or permissioned P2P (Ethereum Mainnet, Polygon, BSC) | Fully permissioned consortium network |
| **3. Ledger Architecture** | Global state stored in Merkle Patricia Trie (MPT) | Channel-based key-value store (LevelDB/CouchDB) |
| **4. State Model** | Account-based: balances and storage in accounts | Key-value database with versioned keys per channel |
| **5. Smart Contract Format** | EVM bytecode; written in Solidity/Vyper/Yul | Chaincode packages in Go, JavaScript, or Java |
| **6. Contract Execution** | Executed in deterministic sandbox (EVM) | Executed in Docker containers as chaincode |
| **7. Contract Invocation** | `eth_sendTransaction`: ABI-encoded calldata | SDK submits proposals → endorsers simulate |
| **8. Transaction Structure** | Nonce, to, value, gas, calldata, signature | Proposal + RW Set + endorsements + signature |
| **9. Signing Mechanism** | ECDSA (v, r, s) signature from sender | X.509-based MSP identities; multiple endorsements |
| **10. Endorsement Model** | No built-in multi-party endorsement (unless multisig logic) | Explicit endorsement policy per chaincode |
| **11. Consensus Mechanism** | PoS (Ethereum 2.0), PoW (legacy), rollup validators | Ordering service (Raft, BFT) + validation per org |
| **12. Ordering Layer** | Implicit in block mining / validator proposal | Dedicated ordering nodes create canonical blocks |
| **13. State Change Process** | Contract executes → SSTORE updates global state | Endorsers simulate → Orderer orders → Peers validate/commit |
| **14. Double-Spend Prevention** | State root update + nonce per account | MVCC: Version check of key during commit phase |
| **15. Finality Model** | Probabilistic (PoW), deterministic (PoS/finality gadget) | Deterministic finality after commit |
| **16. Privacy Model** | Fully public by default; private txs via rollups/middleware | Channel-based segregation + Private Data Collections (PDCs) |
| **17. Data Visibility** | All nodes hold all state (global visibility) | Per-channel; only authorized peers see data |
| **18. Data Storage Format** | MPT for state; key-value in trie; Keccak256 slots | Simple key-value in LevelDB/CouchDB |
| **19. Transaction Validation** | EVM bytecode + gas + opcode checks | Validation system chaincode enforces endorsement policy + MVCC |
| **20. Gas / Resource Metering** | Gas metering for all computation and storage | No gas model; logic must guard resource consumption |
| **21. Events and Logs** | LOGn opcode emits indexed events | Chaincode emits named events; clients can subscribe |
| **22. Query Capability** | JSON-RPC, The Graph, GraphQL, custom RPC | CouchDB rich queries, GetHistoryForKey, ad hoc queries |
| **23. Time Constraints** | Optional `block.timestamp` checks in contract logic; no native tx expiry | Custom fields in chaincode; no native tx expiry |
| **24. Execution Environment** | Global EVM sandbox; each node runs all txs | Isolated Docker container per chaincode; endorsers simulate |
| **25. Deployment Flow** | Deploy via signed transaction containing bytecode | Lifecycle: package → install → approve → commit |
| **26. Smart Contract Upgrade** | Manual via proxy pattern or CREATE2 | Controlled upgrade via chaincode lifecycle & endorsement policy |
| **27. Programming Languages** | Solidity (primary), Vyper, Yul | Go (primary), also JavaScript and Java |
| **28. Auditability & History** | Full block-by-block transaction trace, Merkle proof of state | Immutable ledger + key history queries |
| **29. Hashing Functions** | Keccak256 (SHA-3 variant) | SHA-256, SHA-512 (standard cryptographic primitives) |
| **30. zk / Confidentiality Tools** | zkRollups, zkEVM, Tornado Cash, Aztec | External ZKP libraries; no native zero-knowledge integration |
***
## Execution Lifecycle Comparison
| Stage | Ethereum (EVM) | Hyperledger Fabric |
| ----------------- | -------------------------------------------- | -------------------------------------------------------- |
| **1. Initiation** | User signs tx with ECDSA and submits to node | Client sends proposal to endorsing peers via SDK |
| **2. Simulation** | EVM runs the tx using opcode interpreter | Endorsing peers simulate chaincode, generate RW set |
| **3. Signing** | Sender signs tx (v, r, s) | Each endorser signs the proposal response |
| **4. Ordering** | Block produced by validator | Ordering service batches txs into blocks |
| **5. Validation** | Gas limit, signature, nonce, storage check | Validation system checks endorsement + MVCC versioning |
| **6. Commit** | State trie updated, new root in block header | Valid txs update state in DB; invalid txs marked as such |
| **7. Finality** | Final after sufficient blocks (PoW/PoS) | Final immediately after block commit |
***
## Summary Insights
* **Ethereum** offers a globally synchronized, public execution model with gas
metering and strong ecosystem tooling. It emphasizes decentralization,
programmability, and composability.
* **Fabric** is a modular enterprise-grade DLT with configurable privacy,
endorsement policies, and deterministic execution. It separates simulation
from ordering, enabling fine-grained control.
file: ./content/docs/knowledge-bank/bfsi-blockchain-usecases.mdx
meta: {
"title": "BFSI use cases",
"description": "Use case guide for blockchain applications in BFSI"
}
The BFSI sector has long relied on legacy systems, manual workflows, and siloed
databases to manage highly sensitive operations. From international remittances
and loan processing to fraud detection and claims settlement, the industry must
balance security, speed, trust, and compliance.
Blockchain technology introduces a shared, immutable ledger that enables secure,
transparent, and auditable transactions between parties without the need for
intermediaries. Its adoption within BFSI brings the potential to drastically
reduce operational friction, lower costs, and improve customer trust.
Banks, financial institutions, and insurance providers are increasingly piloting
and deploying blockchain-based solutions to streamline payments, digitize
assets, improve regulatory reporting, automate claims processing, and prevent
fraud.
Unlike traditional centralized databases, blockchain offers real-time
settlement, consensus-based data integrity, and cryptographic proof of records,
making it highly suitable for use cases that require trust, traceability, and
automation.
## Core benefits of blockchain
Blockchain platforms provide specific benefits that directly address
longstanding inefficiencies in financial and insurance systems:
* Real-time transaction finality without reconciliation delays
* Cryptographic immutability that ensures tamper-evident audit trails
* Decentralized access and trustless execution via smart contracts
* Permissioned data sharing across regulated entities
* Tokenization of financial instruments for faster settlement
* Transparent, on-chain identities for AML/KYC verification
These benefits enable use cases that range from programmable payments to
automated reinsurance and interbank settlements. Whether in public or
permissioned blockchains, these principles can modernize core workflows across
BFSI ecosystems.
## Cross-border payments and remittances
Cross-border transfers suffer from multiple inefficiencies: high fees, slow
settlement times, limited visibility, and heavy reliance on correspondent
banking networks.
Blockchain-based payment networks remove intermediaries and offer near-instant
settlement with reduced costs.
In a typical setup, banks or remittance providers integrate with a blockchain
ledger where stablecoins or central bank digital currencies (CBDCs) are used to
represent fiat. Users can initiate payments across jurisdictions with real-time
FX conversion, on-chain confirmation, and smart contract-enforced compliance
checks.
Example workflow:
* A customer initiates a payment in USD from the United States to a recipient in
India
* USD is converted into a stablecoin or CBDC and recorded on-chain
* The blockchain transaction is finalized and visible to both sending and
receiving institutions
* The recipient receives the equivalent INR, settled in local currency and
credited directly
Key advantages:
* Settlement in seconds instead of 2–5 days
* Dramatically reduced foreign exchange spread and wire fees
* Full transparency of transaction status and audit trail
* Reduced dependency on SWIFT or correspondent banking infrastructure
Blockchain-based payment networks like RippleNet and Stellar have shown strong
results in this domain, partnering with hundreds of banks and remittance
corridors.
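As a rough illustration of the on-chain leg, here is a minimal ethers v5
sketch of a stablecoin transfer; the token address, decimals,
`recipientAddress`, and RPC endpoint are hypothetical placeholders:

```javascript
// Transfer a fiat-backed stablecoin between institutions on an EVM chain.
const { ethers } = require("ethers");

const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.org");
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY, provider);

const stablecoin = new ethers.Contract(
  "0x0000000000000000000000000000000000000001", // placeholder token address
  ["function transfer(address to, uint256 amount) returns (bool)"],
  wallet
);

// Send the USD-denominated amount; this assumes a 6-decimal token.
const tx = await stablecoin.transfer(
  recipientAddress,
  ethers.utils.parseUnits("250.00", 6)
);
const receipt = await tx.wait(); // settlement visible to both institutions
console.log("settled in block", receipt.blockNumber);
```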
## Trade finance and supply chain finance
Trade finance involves multiple stakeholders, document exchanges, and settlement
workflows, often delayed by manual verifications, fraud risk, and jurisdictional
complexity.
Blockchain can digitize trade documents, automate terms via smart contracts, and
provide a shared, tamper-proof ledger that all parties agree upon.
Use cases include:
* Letter of credit automation
* Bill of lading digitization
* Proof-of-origin tracking
* Invoice financing with tokenized invoices
* Milestone-based disbursement via smart contracts
Example scenario:
* A buyer and seller agree on a contract where goods are shipped from China to
Germany
* A smart contract encodes the delivery terms, payment conditions, and timelines
* Shipping updates are submitted on-chain via IoT or logistics APIs
* Upon delivery confirmation and customs clearance, the payment is automatically
released to the seller
This process reduces counterparty risk, lowers trade finance barriers for SMEs,
and ensures real-time compliance reporting for regulators and banks involved in
trade facilitation.
Consortium platforms like we.trade, Marco Polo, and Contour are building on R3
Corda and Hyperledger Fabric to deliver these solutions to global financial
institutions.
## Asset tokenization and capital markets
Tokenization refers to the process of creating blockchain-based representations
of real-world assets. These tokens can represent equity, bonds, real estate,
commodities, or fund shares, enabling programmable ownership, fractional access,
and 24/7 trading.
In capital markets, this enables faster issuance, improved liquidity, and
automated compliance.
Types of tokenized assets:
* Tokenized equity and shares for private companies
* Digitally issued debt instruments (bonds, debentures)
* Tokenized REITs or real estate portfolios
* Gold and commodity-backed tokens
* Asset-backed security tokens for capital raising
Blockchain improves capital market infrastructure by:
* Automating cap table management
* Enforcing transfer restrictions via smart contracts
* Providing real-time investor registries
* Enabling peer-to-peer secondary trading
Example: A company issues tokenized bonds via a permissioned blockchain.
Investors can subscribe directly through digital wallets, receive interest
payouts via smart contracts, and trade the tokens on regulated secondary
marketplaces. Custody, KYC, and audit trails are all maintained on-chain.
Institutions like SIX Digital Exchange, JPMorgan Onyx, and Deutsche Börse are
actively exploring and launching tokenization platforms.
## Digital identity and KYC
Banks and insurers must perform Know Your Customer (KYC), Anti-Money Laundering
(AML), and other due diligence checks for each new customer. This results in
redundant checks, slow onboarding, and fragmented records.
Blockchain enables self-sovereign identities and shared KYC registries that
reduce duplication and protect privacy.
Use case model:
* A user completes KYC once with a trusted institution and receives a digital
identity token or credential on-chain
* The token contains zero-knowledge proofs or signed attestations from the
verifier
* When opening an account with another institution, the user can share their KYC
credential, which is cryptographically verified without resubmitting all
documents
Advantages:
* Reduced onboarding time and friction
* Shared trust between financial institutions
* User-controlled privacy and selective disclosure
* Real-time regulator audit capabilities
Hyperledger Indy and Sovrin are examples of identity-focused networks enabling
decentralized KYC. Several central banks and consortiums are building private
networks for verifiable credential exchange.
## Credit scoring and lending automation
Traditional credit scoring relies on outdated models and opaque datasets, often
excluding individuals without formal banking history.
Blockchain introduces new methods for creditworthiness evaluation, especially in
underbanked markets.
Use cases:
* Blockchain-based micro-lending platforms where users build reputation through
repayment history stored on-chain
* Collateralized lending using tokenized assets or NFTs as security
* Peer-to-peer lending marketplaces governed by smart contracts
* Open credit registries where borrower behavior is immutably recorded
A blockchain lending workflow:
* A user pledges tokenized assets into a lending pool
* A smart contract verifies asset type and risk level
* Funds are disbursed based on predefined ratios
* Interest accrues and is paid out on-chain
* Collateral is liquidated automatically upon default
This model removes intermediaries, reduces operational costs, and enables global
access to credit markets. On-chain data like wallet activity, DAO participation,
or DeFi history can supplement or replace traditional credit bureaus.
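A minimal sketch of the collateral-ratio check behind such automatic
liquidation, using ethers BigNumber math; the 150% threshold and dollar values
are illustrative assumptions:

```javascript
// Check whether a loan position is under-collateralized and due for liquidation.
const { BigNumber } = require("ethers");

function isLiquidatable(collateralValueUsd, debtValueUsd, minRatioBps) {
  // collateralization ratio in basis points = collateral / debt × 10,000
  const ratioBps = collateralValueUsd.mul(10000).div(debtValueUsd);
  return ratioBps.lt(minRatioBps);
}

// $1,200 of collateral against $1,000 of debt, 150% (15,000 bps) minimum:
console.log(isLiquidatable(BigNumber.from(1200), BigNumber.from(1000), 15000));
// true: 120% < 150%, so the collateral can be seized
```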
## Insurance claims and policy management
Insurance processes are often burdened by manual claims processing, delayed
validations, document fraud, and lack of transparency for policyholders.
Blockchain introduces an efficient alternative through shared ledgers, smart
contracts, and automated oracles.
Use cases in insurance:
* Automated claims approval via smart contracts
* Fraud-resistant policy records with digital proofs
* Shared risk pools with on-chain contribution tracking
* Parametric insurance that triggers payouts based on predefined events
Parametric models are especially impactful in agricultural or travel insurance.
A parametric contract might pay out when rainfall drops below a defined level or
when a flight is delayed beyond a threshold. With oracles feeding real-world
data, claims are processed instantly and without human involvement.
Example:
* A farmer purchases a crop insurance policy recorded on-chain
* Weather APIs feed rainfall data into the blockchain through a trusted oracle
* A drought condition is detected and meets the payout threshold
* The smart contract automatically disburses the insured amount to the farmer’s
wallet
Benefits for insurers:
* Lower administrative overhead
* Higher customer satisfaction due to faster settlements
* Greater transparency in premium calculation and claim handling
* Immutable audit trails for regulators
Insurtech startups like Lemonade and Etherisc are already piloting such systems
for flight delay insurance, weather protection, and more.
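A rough sketch of the drought trigger from the example above, written in the
same ethers v5 style as this knowledge bank's other snippets; the oracle
contract, its `latestRainfallMm()` method, the policy contract, and the 20 mm
threshold are all hypothetical:

```javascript
// Poll a weather oracle and trigger the parametric payout when the threshold is met.
const oracle = new ethers.Contract(
  oracleAddress, // hypothetical on-chain weather oracle
  ["function latestRainfallMm() view returns (uint256)"],
  provider
);
const policy = new ethers.Contract(
  policyAddress, // hypothetical parametric policy contract
  ["function claimDrought() returns (bool)"],
  wallet
);

const rainfallMm = await oracle.latestRainfallMm();
if (rainfallMm.lt(20)) {
  // Below the insured 20 mm threshold: the policy disburses to the farmer.
  await (await policy.claimDrought()).wait();
}
```

In production the oracle would push its readings on-chain and the policy
contract itself would enforce the trigger; this off-chain poll only
illustrates the flow.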
## Regulatory compliance and audit automation
Compliance is a non-negotiable aspect of banking and financial services.
Institutions must report to central banks, tax agencies, financial intelligence
units, and auditing firms. Traditional compliance workflows are reactive,
expensive, and prone to human error.
Blockchain turns compliance into a real-time, traceable, and proactive process.
Use cases:
* Automated generation of audit logs for transactions
* Smart contract enforcement of regulatory thresholds (e.g., transaction limits)
* AML pattern monitoring on-chain using transparent analytics
* Tamper-evident timestamping of disclosures and approvals
Example:
* A digital bank integrates a smart contract that flags transactions over
$10,000
* When a flagged transaction occurs, the details are shared with a registered
regulatory node
* The compliance officer views the data instantly and approves or freezes the
transaction in near real-time
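A sketch of the monitoring side of this example, assuming a hypothetical
`LargeTransfer` event emitted by the bank's payment contract when the
threshold is crossed:

```javascript
// Subscribe to threshold-crossing events and route them to the compliance team.
const complianceContract = new ethers.Contract(
  contractAddress, // hypothetical compliance wrapper around the payment flow
  ["event LargeTransfer(address indexed from, address indexed to, uint256 amount)"],
  provider
);

complianceContract.on("LargeTransfer", (from, to, amount, event) => {
  console.log(`Flagged: ${ethers.utils.formatUnits(amount, 6)} USD from ${from} to ${to}`);
  console.log("tx:", event.transactionHash); // auditable reference for the regulator node
});
```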
The benefits are substantial:
* Reduction in regulatory fines due to non-compliance
* Faster turnaround on audits and reconciliation
* Greater accountability in process governance
* Trusted reporting to tax and enforcement agencies
Permissioned blockchains like Hyperledger Fabric or Corda allow fine-grained
access control, ensuring that only authorized regulators can see sensitive data
while maintaining transparency.
## Fraud detection and prevention
Financial fraud takes many forms: identity theft, money laundering, account
manipulation, synthetic identities, insider collusion. Traditional fraud
detection tools rely on pattern recognition in siloed systems.
Blockchain changes this paradigm by providing a unified, tamper-evident, and
verifiable ledger across entities.
Fraud prevention use cases:
* Detecting duplicate or forged loan applications by checking hashes of
submitted data
* Verifying transaction history using zero-knowledge proofs
* Preventing multiple claims on the same insured asset using NFT-based ownership
* Cross-institutional fraud investigation via shared analytics on a consortium
ledger
How it works:
* Each customer interaction, transaction, or submitted document is hashed and
recorded
* Fraud detection models query the ledger for patterns of reuse, suspicious
timing, or high-risk counterparties
* Alerts are triggered and routed to the risk management team with auditable
metadata
By design, blockchain makes backdating, overwriting, or double-spending nearly
impossible. This drastically raises the bar for attackers while making
legitimate operations easier to monitor.
Banks that adopt blockchain-based audit layers report higher detection rates and
faster resolution cycles.
## Blockchain in reinsurance
Reinsurance involves the transfer of risk from an insurance company to a
reinsurer. The process often involves multi-party contracts, delayed
settlements, and complex reconciliations.
Blockchain brings transparency and shared truth to reinsurance treaties and
settlement workflows.
Use cases:
* Recording reinsurance contracts as smart contracts with embedded payout logic
* Automating premium calculation and claim apportionment
* On-chain risk transfer between insurers and reinsurers
* Shared claims history to reduce disputes
Example:
* An insurer writes a group life insurance policy
* A reinsurance smart contract is signed and recorded on-chain with automated
payout triggers
* When a claim is validated and paid, the reinsurance contract allocates the
appropriate reimbursement to the reinsurer
* All records, documents, and funds are tracked with full transparency
Benefits:
* Reduced operational friction between insurer and reinsurer
* Real-time visibility into portfolio exposure and liabilities
* Automated, trusted settlement of reinsurance claims
* Elimination of spreadsheet-based reconciliations
Blockchain platforms like B3i and RiskStream Collaborative are working to
digitize global reinsurance networks using distributed ledger infrastructure.
## Blockchain in capital adequacy and liquidity tracking
Banks are required to maintain sufficient liquidity and capital under Basel III
regulations. Monitoring these ratios requires real-time awareness of
obligations, exposures, and market positions.
Blockchain enables transparent, real-time monitoring and automated triggers
based on compliance thresholds.
Use cases:
* Real-time exposure tracking across clearing networks
* Instant visibility into pledged collateral or reserve assets
* Automated capital requirement testing based on smart contract rules
* Stress test modeling using shared on-chain simulations
With tokenized assets and digital balance sheets, blockchain allows regulators
to run stress tests or compliance checks directly from the ledger, improving
visibility and response time.
Banks can also build internal dashboards that pull on-chain collateral positions
and simulate liquidity thresholds without manual inputs or spreadsheets.
This model supports proactive compliance and smoother communication with central
banks and auditors.
## Blockchain in wealth and asset management
Wealth management involves portfolio balancing, client onboarding, investor
reporting, and regulatory alignment. Asset managers must coordinate multiple
parties, custodians, and asset classes across global jurisdictions.
Blockchain simplifies this process through tokenization, automated reporting,
and smart contract-based advisory services.
Use cases:
* Tokenized funds with programmable compliance and fractional ownership
* Blockchain-based investor onboarding and KYC checks
* Digital audit logs for every portfolio rebalancing event
* Peer-to-peer reallocation of assets with embedded rules
For example:
* A mutual fund tokenizes its shares using an ERC1400-compliant smart contract
* Each investor receives tokenized fund units in a secure wallet
* The fund manager can update NAV daily on-chain and issue redemptions directly
through the contract
* All investor actions are recorded and accessible for regulatory audit
This not only reduces fund administration costs but also enables the creation of
next-generation digital investment platforms, opening the door to robo-advisory,
DeFi-native funds, and 24/7 investment access.
## Blockchain in insurance fraud prevention
Insurance fraud often includes staged accidents, inflated claims, or identity
misuse. These are hard to detect in traditional systems due to data
fragmentation.
Blockchain offers real-time cross-verification and immutable claims history for
both insurers and regulators.
Use cases:
* On-chain claim submission with encrypted document hashes
* Shared fraud detection database among insurers with privacy-preserving
analytics
* NFT-based vehicle or property identity to prevent multiple claims on the same
item
* Integration of IoT data (e.g., dashcam, GPS) with on-chain verification
Example:
* A car accident claim is submitted along with timestamped dashcam footage
* The footage hash is registered on-chain and linked to the claim
* Another insurer receives a similar claim from the same VIN and sees the
duplicate in real-time
* The fraud is flagged and halted before payout
Blockchain’s immutable design and shared access model dramatically reduce the
opportunity for fraudulent behaviors and facilitate collaborative anti-fraud
strategies.
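The hash-anchoring step from this example can be sketched in a few lines; the
registry contract, its `register` function, and the claim ID are hypothetical:

```javascript
// Hash the evidence file locally and anchor the digest on-chain.
const fs = require("fs");
const footageHash = ethers.utils.keccak256(fs.readFileSync("dashcam.mp4"));

const registry = new ethers.Contract(
  registryAddress, // hypothetical claims-evidence registry
  ["function register(bytes32 digest, string calldata claimId)"],
  wallet
);
await (await registry.register(footageHash, "CLAIM-2024-0042")).wait();
// Any insurer can now recompute the hash and detect duplicate submissions.
```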
## Smart contract auditability in financial agreements
In the BFSI sector, contractual obligations between parties require
transparency, enforceability, and historical traceability. Smart contracts offer
a programmable way to encode these terms on-chain with automatic execution and
audit trails.
However, for adoption at scale, these contracts must be auditable by legal
teams, regulators, and counterparties.
Use cases:
* Encoding loan terms, covenants, and repayment logic in smart contracts
* Automated escrow releases based on multi-party agreement
* Modular compliance layers for jurisdictional alignment
* Real-time reporting of contract events for audits
Example:
* A syndicated loan agreement is recorded as a smart contract
* Each lender’s portion, repayment schedule, and interest accrual logic is
visible and enforced by code
* Regulators can access a read-only view of the contract lifecycle
* Auditors can verify that each clause executed correctly and that no party
modified or bypassed terms
Smart contracts provide a single source of truth, eliminating disputes and
removing ambiguity. With logging, versioning, and structured storage, audit
readiness becomes continuous rather than periodic.
Standards like ACTUS and ISDA’s Common Domain Model are being integrated into
blockchain contracts to enable interoperable financial logic.
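
The sketch below shows how the syndicated-loan example might encode pro-rata
distribution. The `SyndicatedLoan` contract and its basis-point share model are
assumptions for illustration, not an ACTUS or CDM implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of a syndicated loan: each lender's share is fixed at issuance and
/// every repayment is split pro rata by code. Events give regulators a
/// read-only view of the full lifecycle.
contract SyndicatedLoan {
    address public immutable borrower;
    address[] public lenders;
    mapping(address => uint256) public shareBps; // lender share in basis points

    event Repayment(address indexed borrower, uint256 amount);
    event Distribution(address indexed lender, uint256 amount);

    constructor(address[] memory _lenders, uint256[] memory _shareBps) {
        require(_lenders.length == _shareBps.length, "length mismatch");
        uint256 total;
        for (uint256 i = 0; i < _lenders.length; i++) {
            lenders.push(_lenders[i]);
            shareBps[_lenders[i]] = _shareBps[i];
            total += _shareBps[i];
        }
        require(total == 10_000, "shares must sum to 100%");
        borrower = msg.sender;
    }

    /// The borrower repays in the chain's native asset and the contract
    /// distributes each lender's portion according to the recorded shares.
    function repay() external payable {
        require(msg.sender == borrower, "borrower only");
        emit Repayment(msg.sender, msg.value);
        for (uint256 i = 0; i < lenders.length; i++) {
            uint256 portion = (msg.value * shareBps[lenders[i]]) / 10_000;
            emit Distribution(lenders[i], portion);
            (bool ok, ) = lenders[i].call{value: portion}("");
            require(ok, "transfer failed");
        }
    }
}
```

A production version would favor pull-payments over the push loop shown here,
so that one failing lender address cannot block repayment for the others.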
## Risk scoring and blockchain-based rating models
Credit risk and counterparty risk are central to financial decision-making.
Traditional models rely on centralized credit bureaus, proprietary data, and
delayed updates.
Blockchain-based scoring models offer real-time, transparent, and decentralized
alternatives.
Use cases:
* On-chain credit scoring based on wallet behavior, loan repayment, or DAO
participation
* Open-source risk models using oracles and DeFi reputation metrics
* Federated scoring systems where multiple banks contribute data while
preserving customer privacy
Example:
* A borrower participates in three DeFi lending pools, repaying on time for six
months
* A DAO-based credit protocol aggregates on-chain behavior, collateral ratios,
and staking history
* A score is generated and shared as a verifiable credential
* A bank uses the score to offer a microloan without redoing the entire KYC
process
Risk scoring models can evolve continuously with on-chain behavior and remain
portable across platforms. Combining blockchain analytics with zero-knowledge
proofs enables scoring without exposing sensitive user data.
## ESG reporting and sustainability tracking
Environmental, Social, and Governance (ESG) compliance is a growing requirement
for banks, insurers, and institutional investors.
Blockchain enhances ESG initiatives by offering transparent, tamper-proof
tracking of sustainability metrics and carbon-related disclosures.
Use cases:
* Tokenized carbon credits with traceable issuance and retirement
* Real-time ESG impact logging in lending and investment portfolios
* Supply chain tracking for ethically sourced goods
* On-chain emissions and energy data for green financing
Example:
* A bank issues a green bond on a public-permissioned blockchain
* The bond terms include a clause that funds must be used for renewable energy
deployment
* Solar panel installations are tracked via IoT devices and registered on-chain
* Each energy milestone triggers a smart contract report to regulators and ESG
dashboards
Blockchain provides both accountability and automation. Issuers, verifiers, and
regulators can collaborate on shared networks to ensure that sustainability
claims are verifiable and traceable.
Institutions like the World Bank and IFC have piloted blockchain-based green
bonds and carbon registries to improve transparency in ESG-linked instruments.
## Central bank digital currencies and interbank settlement
Central Bank Digital Currencies (CBDCs) represent a transformative application
of blockchain in the financial sector.
They offer programmable, state-issued digital money that can be used for retail
payments, wholesale banking, and government services.
CBDC use cases:
* Real-time gross settlement between banks without intermediaries
* Tokenized cash collateral in derivative clearing
* Instant payroll and government subsidy disbursement
* Cross-border CBDC interoperability for FX efficiency
Example:
* The central bank issues a wholesale CBDC to commercial banks as digital tokens
* When Bank A wants to settle an interbank transfer with Bank B, it sends a CBDC
token on-chain
* The smart contract checks settlement logic and confirms instantly
* No intermediaries, no reconciliation, no delays
Benefits:
* Reduced systemic risk via 24/7 settlement finality
* Automated monetary policy tools (e.g., interest-bearing tokens)
* Enhanced auditability and anti-money laundering controls
* Frictionless cross-border settlement between central banks
Pilot programs are active in several jurisdictions, including India (e₹), China
(e-CNY), the euro area (Digital Euro), and Singapore. Platforms like mBridge,
Project Dunbar, and the BIS Innovation Hub are shaping global CBDC
infrastructure.
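
A simplified sketch of the wholesale flow described above, assuming a single
central-bank operator and pre-enrolled commercial banks; the `WholesaleCBDC`
contract and its function names are illustrative only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of a wholesale CBDC: only the central bank mints, and interbank
/// transfers settle with finality in a single transaction.
contract WholesaleCBDC {
    address public immutable centralBank;
    mapping(address => bool) public isCommercialBank;
    mapping(address => uint256) public balanceOf;

    event Settled(address indexed from, address indexed to, uint256 amount);

    constructor() {
        centralBank = msg.sender;
    }

    function enrollBank(address bank) external {
        require(msg.sender == centralBank, "central bank only");
        isCommercialBank[bank] = true;
    }

    function issue(address bank, uint256 amount) external {
        require(msg.sender == centralBank, "central bank only");
        require(isCommercialBank[bank], "not enrolled");
        balanceOf[bank] += amount;
    }

    /// Settlement logic runs inside the contract itself: no intermediary,
    /// no reconciliation step, and finality in one block.
    function settle(address to, uint256 amount) external {
        require(isCommercialBank[msg.sender] && isCommercialBank[to], "banks only");
        require(balanceOf[msg.sender] >= amount, "insufficient balance");
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
        emit Settled(msg.sender, to, amount);
    }
}
```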
## Derivatives and structured product automation
Derivatives trading relies on complex legal documentation, manual clearing
processes, and fragmented settlement systems. Blockchain allows these
instruments to be modeled, issued, and settled automatically using smart
contracts.
Use cases:
* On-chain options and futures contracts with programmable strike logic
* Collateral management using tokenized assets
* Real-time margin tracking and liquidation
* Structured note issuance with embedded payout calculations
Example:
* A structured note is issued with a payoff tied to the performance of a crypto
index
* The smart contract calculates the value daily and updates investor balances
* If a stop-loss trigger is hit, the product unwinds automatically and funds are
returned
* Regulatory reports are generated in real time and sent to the compliance node
This approach eliminates settlement delays, reduces counterparty risk, and
allows for instant customization and automation of derivative products.
Projects like ISDA’s CDM on DLT and derivatives protocols on platforms like
Corda and Hyperledger are advancing this domain.
## Regulatory sandboxes and blockchain test networks
As innovation accelerates, regulators need secure ways to observe, experiment
with, and understand new financial technologies.
Blockchain-based sandboxes allow regulated testing of new digital products with
full visibility and control.
Use cases:
* Deploying pilot versions of CBDCs or digital securities in isolated
environments
* Sharing testnet analytics with regulators and oversight boards
* Real-time rule enforcement simulation for AML/KYC logic
* Stress-testing financial instruments on synthetic chains
Example:
* A startup launches a tokenized investment platform inside a regulatory sandbox
* The regulator node is granted access to transaction data and contract logic
* The sandbox simulates investor flows, stress scenarios, and compliance
breaches
* Insights are shared transparently without impacting real users
These sandboxes help both startups and incumbents validate concepts, while
giving regulators the tools to shape future rules with hands-on data.
## Blockchain-based treasury management
For corporations and banks, treasury functions like cash positioning, liquidity
monitoring, and inter-entity settlements are mission-critical.
Blockchain simplifies these functions through tokenization, smart workflows, and
global asset visibility.
Use cases:
* Tokenized cash and intra-company transfers across jurisdictions
* Real-time cash pooling and visibility dashboards
* Treasury collateral backed by tokenized assets
* Automated FX hedging and conversions
Example:
* A multinational enterprise tokenizes its cash across five subsidiaries
* Each treasury operation (e.g., funding, FX, reconciliation) is performed
on-chain using smart contracts
* Daily cash positions and liquidity buffers are visible to HQ instantly
Benefits:
* Faster liquidity management and funding decisions
* Compliance with capital controls and internal audit policies
* Elimination of interbank friction for internal settlements
Treasury digitization using blockchain transforms a traditionally opaque
function into a real-time, transparent, and auditable system.
## Financial inclusion and microfinance
Traditional banking infrastructure often fails to reach underserved populations,
especially in rural or developing areas. A lack of physical branches, credit
history, and identification prevents individuals from accessing savings, credit,
or insurance.
Blockchain reduces onboarding friction and expands reach via mobile-first,
peer-to-peer financial services.
Use cases:
* Wallet-based micro-savings accounts accessible with a phone
* On-chain credit history and loan tracking for unbanked individuals
* Community lending pools governed by smart contracts
* Blockchain identity tokens linked to social or behavioral data
Example:
* A rural cooperative uses blockchain wallets to collect savings from members
* The funds are pooled into a smart contract-managed treasury
* Members can apply for loans, which are approved by consensus or voting
* Repayments are tracked on-chain, and successful borrowers build a
decentralized reputation
Benefits:
* Lower operational costs for service providers
* Trust-building in communities without intermediaries
* Reduced corruption or mismanagement in fund allocation
* Inclusion of populations without formal documentation
Projects like Celo, Moeda, and Kotani Pay use mobile-first blockchain tools to
deliver microfinance and community banking solutions globally.
## Cross-jurisdictional compliance automation
BFSI institutions often operate across multiple regulatory jurisdictions, each
with its own AML, tax, capital, and reporting rules. Ensuring compliance in this
fragmented landscape requires constant coordination and adaptation.
Blockchain offers a standardized yet flexible foundation for building regulatory
logic into transactions themselves.
Use cases:
* Automated jurisdiction checks before trade execution
* On-chain withholding tax calculation and remittance
* Smart contract enforcement of transfer restrictions by region
* Role-based access for regulators to specific transaction types
Example:
* A securities exchange tokenizes debt instruments accessible to both EU and
APAC investors
* The smart contract includes jurisdiction filters that validate the user’s
region before allowing a purchase
* Tax is calculated and logged automatically based on regional rules
* The local regulator can query regional transaction summaries via API
Benefits:
* Lower cost of multi-region compliance
* Fewer errors due to built-in rule enforcement
* Transparent and real-time reporting to regulators
* Easier entry for fintechs operating in multiple countries
This model turns regulation into an API, improving trust and reducing legal
risk.
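
To sketch how such a filter can live inside the token itself, consider the
following; the `JurisdictionGatedToken` contract, the ISO 3166-1 region codes,
and the single compliance-officer role are illustrative assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of a jurisdiction-restricted security token: a compliance officer
/// assigns each investor a region code, and transfers are validated against
/// an allowlist of permitted regions before they execute.
contract JurisdictionGatedToken {
    address public immutable complianceOfficer;
    mapping(address => bytes2) public regionOf; // ISO 3166-1 alpha-2, e.g. "DE"
    mapping(bytes2 => bool) public regionPermitted;
    mapping(address => uint256) public balanceOf;

    event Transfer(address indexed from, address indexed to, uint256 amount);

    constructor() {
        complianceOfficer = msg.sender;
    }

    function setRegion(address investor, bytes2 region) external {
        require(msg.sender == complianceOfficer, "compliance only");
        regionOf[investor] = region;
    }

    function permitRegion(bytes2 region, bool allowed) external {
        require(msg.sender == complianceOfficer, "compliance only");
        regionPermitted[region] = allowed;
    }

    function mint(address to, uint256 amount) external {
        require(msg.sender == complianceOfficer, "compliance only");
        balanceOf[to] += amount;
    }

    /// The jurisdiction check runs inside the transfer itself, so no front
    /// end or intermediary can bypass the rule.
    function transfer(address to, uint256 amount) external {
        require(regionPermitted[regionOf[to]], "recipient region not permitted");
        require(balanceOf[msg.sender] >= amount, "insufficient balance");
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
        emit Transfer(msg.sender, to, amount);
    }
}
```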
## Blockchain integration with decentralized finance (DeFi)
DeFi is a rapidly growing ecosystem of permissionless financial applications
that run on public blockchains. Traditional BFSI players are increasingly
exploring integration opportunities with DeFi protocols to unlock liquidity,
reach new markets, and automate services.
Bridging these worlds requires risk controls, compliance mechanisms, and robust
custody.
Use cases:
* Tokenized assets from banks used as collateral in DeFi lending protocols
* DeFi vault strategies embedded within traditional fund products
* CeFi (centralized finance) APIs for users to access DeFi under a regulated
wrapper
* Regulated stablecoins issued by banks and used in liquidity pools
Example:
* A bank creates a tokenized treasury bond product
* Users can deposit the bond token into a DeFi protocol to earn yield
* A risk layer ensures only approved tokens or verified addresses can interact
with the protocol
* Custody and reporting are managed through the bank’s regulated interface
Benefits:
* Hybrid offerings combining security and innovation
* Greater capital efficiency and 24/7 liquidity
* Access to programmable yield strategies
* Participation in open financial infrastructure
Platforms like Aave Arc, Compound Treasury, and Fireblocks are building
institutional-grade interfaces for DeFi-BFSI convergence.
## Customer identity, consent, and privacy management
Managing customer identity is fundamental to BFSI operations. However,
centralized identity systems create silos, increase data leakage risk, and
impose high friction on the user experience.
Blockchain enables a new model of decentralized, user-controlled identity and
consent.
Use cases:
* Self-sovereign identities with reusable KYC credentials
* Selective disclosure using zero-knowledge proofs
* Time-bound or context-specific access tokens
* Consent receipts recorded immutably on-chain
Example:
* A user completes KYC once with a trusted bank and receives a digital identity
credential
* When accessing another bank or insurer, the user shares a proof without
exposing underlying documents
* The institution verifies authenticity and timestamp without storing user data
* The user can revoke or audit consent from a wallet-based dashboard
Benefits:
* Lower onboarding and compliance costs
* Increased user privacy and control
* Shared trust across financial institutions
* Verifiable, real-time identity auditability
Standards like W3C Verifiable Credentials, Decentralized Identifiers (DIDs),
and zkSNARK-based privacy schemes are enabling compliant yet private financial
identity ecosystems.
## Governance and internal audit automation
Internal governance and audit processes are often manual, retrospective, and
disconnected across departments.
Blockchain enables proactive, continuous governance with structured logic, audit
trails, and programmable access controls.
Use cases:
* Multi-signature workflows for approvals (e.g., fund transfers, vendor
onboarding)
* On-chain documentation of board decisions and change logs
* Tamper-evident internal audit journals
* Event-based compliance rule triggers (e.g., time-locked decisions)
Example:
* An insurance firm encodes its claim approval policy into a governance smart
contract
* Each claim over $100,000 requires digital signatures from two managers and one
compliance officer
* Once approved, funds are released and a record is posted to a compliance-only
channel
* Internal audit can trace the entire process in real time
This approach:
* Reduces risk of non-compliance or fraud
* Enables faster resolution of governance events
* Creates transparent alignment between teams and regulators
* Makes internal processes more efficient and enforceable
Blockchain’s deterministic nature ensures that what is written is what executes,
minimizing ambiguity and enhancing accountability.
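
A minimal sketch of the claim-approval policy above; the
`GovernedClaimApproval` contract, the threshold encoding, and the two-manager,
one-compliance-officer rule mirror the example but are otherwise hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of governed approvals: claims over a threshold need two manager
/// signatures plus one compliance signature before release.
contract GovernedClaimApproval {
    uint256 public constant THRESHOLD = 100_000e18; // $100,000 in 18-decimal stablecoin units

    mapping(address => bool) public isManager;
    mapping(address => bool) public isCompliance;

    struct Approval {
        uint8 managerSignatures;
        uint8 complianceSignatures;
        mapping(address => bool) signed;
        bool released;
    }
    mapping(bytes32 => Approval) private approvals; // claimId => approval state

    event Approved(bytes32 indexed claimId, address indexed signer);
    event Released(bytes32 indexed claimId, uint256 amount);

    constructor(address[] memory managers, address[] memory compliance) {
        for (uint256 i = 0; i < managers.length; i++) isManager[managers[i]] = true;
        for (uint256 i = 0; i < compliance.length; i++) isCompliance[compliance[i]] = true;
    }

    function approve(bytes32 claimId) external {
        require(isManager[msg.sender] || isCompliance[msg.sender], "not authorized");
        Approval storage a = approvals[claimId];
        require(!a.signed[msg.sender], "already signed");
        a.signed[msg.sender] = true;
        if (isManager[msg.sender]) a.managerSignatures++;
        if (isCompliance[msg.sender]) a.complianceSignatures++;
        emit Approved(claimId, msg.sender);
    }

    /// Release succeeds only once the encoded policy is satisfied; the event
    /// trail lets internal audit replay the whole process in real time.
    function release(bytes32 claimId, uint256 amount) external {
        Approval storage a = approvals[claimId];
        require(!a.released, "already released");
        if (amount > THRESHOLD) {
            require(
                a.managerSignatures >= 2 && a.complianceSignatures >= 1,
                "policy not met"
            );
        }
        a.released = true;
        emit Released(claimId, amount);
        // A full system would transfer funds to the claimant here.
    }
}
```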
## Blockchain for financial product traceability
Many financial products — especially in insurance and investment — pass through
multiple hands before reaching the end user. Each transfer creates legal,
financial, and regulatory obligations that are often poorly tracked.
Blockchain provides granular, real-time traceability of asset creation,
ownership, transfer, and redemption.
Use cases:
* Tracking of structured product ownership from issuance to redemption
* Provenance of insurance-backed financial guarantees
* Mapping product flows across multiple financial intermediaries
* Detection of unauthorized product reselling or re-wrapping
Example:
* A mortgage-backed security is tokenized, with each mortgage traced to its
original loan agreement
* Each transfer, tranche, and investor action is logged on-chain
* When regulators or auditors review the instrument, they see full traceability
from origin to current holder
Benefits:
* Enhanced investor protection
* Clear proof of ownership and entitlement
* Compliance with distribution and suitability rules
* Reduced complexity in downstream servicing and auditing
Product traceability powered by blockchain builds a digital thread from origin
to delivery, fostering trust and reducing opacity.
## Digital custody and asset safekeeping
As banks and asset managers enter the world of tokenized assets and digital
securities, the need for institutional-grade custody becomes paramount.
Digital custody refers to the secure storage and management of private keys and
on-chain assets using regulated, auditable infrastructure.
Use cases:
* Custody of tokenized bonds, equities, and funds
* Secure storage for corporate treasury crypto holdings
* Delegated access and transaction signing for institutional accounts
* Integration with trading, compliance, and risk systems
Example:
* A bank launches a digital asset desk to serve clients interested in investing
in tokenized products
* Private keys are held inside a certified Hardware Security Module (HSM) with
multi-factor access
* Client requests are signed by an approval workflow and broadcast via a secure,
compliant node
* The system logs every action and connects to internal risk, reconciliation,
and client reporting tools
Benefits:
* Meets regulatory requirements around safekeeping and segregation
* Supports operational controls like transaction limits, role separation, and
alerts
* Enables DeFi participation while protecting private key exposure
* Bridges traditional custody models with on-chain capabilities
Leading players like Anchorage, Fireblocks, Metaco, and BitGo offer
custody-as-a-service to banks, while many incumbents are launching in-house
digital vaults.
## Blockchain for clearing and settlement
Clearing and settlement infrastructure in capital markets is built on layers of
intermediaries, each introducing latency and cost. Trades settle in T+1 or T+2
cycles, and reconciliation can take days.
Blockchain reduces this friction by enabling real-time atomic settlement of
trades, fully transparent to both parties.
Use cases:
* Tokenized securities and cash settled instantly via Delivery versus Payment
(DvP)
* Post-trade reconciliation eliminated via shared ledger
* Real-time clearing with automated netting logic
* Instant asset transfer across accounts without custodial lag
Example:
* Two institutions trade a tokenized bond and stablecoin via a smart contract
* The DvP logic confirms both assets are available and locks them until mutual
execution
* The swap executes atomically, and both institutions receive their new holdings
* Regulators view the transaction on a read-only node with full metadata
Benefits:
* Elimination of counterparty and settlement risk
* Reduced reliance on clearinghouses and central depositories
* 24/7 trade finality and reporting
* Lower cost of transaction infrastructure
Platforms like SDX (SIX Digital Exchange), Fnality, and Deutsche Börse’s D7 use
blockchain-based settlement rails for digitized financial products.
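
The core DvP mechanism is small enough to sketch directly. The `DvPSettlement`
contract below assumes both legs are ERC20 tokens and that each party has
approved the contract beforehand; atomicity comes from the EVM's all-or-nothing
transaction semantics, so if either leg fails, both revert.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IERC20 {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

/// Sketch of atomic Delivery versus Payment: both legs of the trade execute
/// in a single transaction, or neither does.
contract DvPSettlement {
    event Settled(address indexed seller, address indexed buyer, address security, address cash);

    function settle(
        address seller,
        address buyer,
        IERC20 security,
        uint256 securityAmount,
        IERC20 cash,
        uint256 cashAmount
    ) external {
        // If either transfer fails, the whole transaction reverts, so there
        // is never a state in which only one party has delivered.
        require(security.transferFrom(seller, buyer, securityAmount), "security leg failed");
        require(cash.transferFrom(buyer, seller, cashAmount), "cash leg failed");
        emit Settled(seller, buyer, address(security), address(cash));
    }
}
```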
## Financial messaging and interbank workflows
Banks and financial institutions depend heavily on messaging networks like
SWIFT, FIX, and ISO 20022 for routing payments, orders, and confirmations.
These networks are limited by format rigidities, message latency, and
reconciliation gaps. Blockchain introduces a shared messaging and settlement
layer that unifies both instruction and execution.
Use cases:
* On-chain settlement instructions replacing SWIFT messages
* Tokenized representations of ISO 20022 messages
* Real-time messaging between correspondent banks
* Transparent workflow tracking for trades, FX, and asset servicing
Example:
* A bank sends a message to initiate a cross-border payment with embedded
compliance metadata
* The message is tokenized and signed on-chain, with built-in approval logic
* The receiver bank validates and confirms, and funds are released by a smart
contract
* The audit trail includes timestamps, sender identity, and message content hash
Benefits:
* Secure, verified, and standardized instruction layer
* End-to-end visibility into transaction processing
* Faster dispute resolution and exception handling
* Integration with digital asset payment rails
Blockchain-based messaging reduces opacity, harmonizes data, and links action
with outcome for real-time workflows.
## Real-world deployment case studies
### Case: JP Morgan Onyx and JPM Coin
JP Morgan launched its Onyx platform to digitize wholesale payments and
settlement using blockchain. JPM Coin is used by institutional clients to settle
USD payments instantly and with finality.
Clients onboard through a permissioned system and can send programmable payments
between entities without waiting for ACH or SWIFT cycles.
Results:
* Billions settled through JPM Coin with full compliance oversight
* Tokenized repo trades executed with intraday maturity
* Pilot of multicurrency settlement across the Onyx network
### Case: DBS Bank and Project Guardian
DBS Bank partnered with MAS and other institutions to tokenize government
securities, forex, and liquidity pools under Project Guardian.
The pilot demonstrated end-to-end execution of real-world financial instruments
in a composable, on-chain environment.
Results:
* Instant settlement of tokenized bonds and FX swaps
* Asset managers could compose and execute DeFi strategies with full KYC
* MAS confirmed regulatory frameworks for future pilots
### Case: ICICI and blockchain trade finance
ICICI Bank deployed a blockchain-based trade finance solution to process
import-export documentation between Indian firms and international banks.
The platform digitized letters of credit, shipping data, and invoices with
participant-specific visibility.
Outcomes:
* Reduced trade cycle from weeks to days
* 100 percent visibility into transaction progression
* Lower document error rates and fraud risk
### Case: AXA’s parametric flight insurance
AXA piloted a smart contract-based flight delay insurance product. Users
purchased policies via an app, and the blockchain oracle tracked flight status.
If a flight was delayed beyond two hours, the smart contract paid the insured
party automatically.
Benefits:
* Elimination of claims submission process
* Fully automated payout in hours
* Enhanced transparency and customer trust
These examples showcase how blockchain is evolving from a theoretical construct
to an operational backend across various BFSI verticals.
## Blockchain’s role in the future of banking and finance
As blockchain infrastructure matures, BFSI institutions are undergoing a
structural shift toward programmable, transparent, and collaborative systems.
The core value drivers include:
* Shared truth among participants
* Immutable yet flexible digital infrastructure
* Smart contract automation of financial logic
* Decentralized control without operational chaos
The convergence of blockchain with AI, IoT, and privacy technologies will power
the next wave of intelligent finance. Smart assets will self-report status,
compliance will be machine-enforced, and customer onboarding will happen in
seconds — all anchored to tamper-evident ledgers.
Financial firms that adopt blockchain thoughtfully can:
* Reduce back-office costs by up to 50 percent
* Open new markets with 24/7 tokenized offerings
* Enhance trust through cryptographic audit trails
* Innovate faster with composable infrastructure
Regulators are evolving alongside, building legal frameworks, innovation hubs,
and oversight mechanisms that support safe experimentation.
While challenges remain — from scalability to interoperability — the BFSI
sector’s early investments are maturing into high-impact deployments.
Blockchain is not a silver bullet, but in BFSI, it solves real structural
problems: trust gaps, data fragmentation, manual workflows, fraud risk, and
legacy costs.
Its success lies not in replacing core systems outright, but in augmenting them
with secure, verifiable, and programmable layers.
As institutions pilot and scale their blockchain strategies, collaboration
becomes essential — across banks, regulators, startups, and technology
providers.
The future of finance is decentralized in design, centralized in standards, and
distributed in execution.
file: ./content/docs/knowledge-bank/blockchain-ai-usecases.mdx
meta: {
"title": "Blockchain & AI use cases",
"description": "Exploring the intersection of artificial intelligence and blockchain for secure, decentralized, and intelligent systems across industries"
}
## Introduction to blockchain and AI convergence
Artificial intelligence and blockchain represent two of the most transformative
technologies of the 21st century. AI brings capabilities such as predictive
analytics, natural language understanding, image recognition, and autonomous
decision-making. Blockchain, on the other hand, provides tamper-proof data
storage, decentralized consensus, and programmable transactions.
When combined, these technologies offer unique synergies that unlock trust,
auditability, and intelligence at the edge of digital ecosystems. AI needs
high-quality, reliable, and often distributed data — while blockchain ensures
data integrity, traceability, and verifiability. Blockchain systems benefit from
adaptive and efficient algorithms — which AI provides through optimization,
pattern detection, and autonomous logic.
Together, blockchain and AI empower new architectures for decision automation,
decentralized data marketplaces, model provenance, autonomous agents, and
verifiable insight delivery. These use cases cut across sectors including
healthcare, finance, logistics, cybersecurity, education, insurance, smart
cities, agriculture, and supply chains.
This documentation explores joint applications of blockchain and AI,
highlighting their combined potential to deliver systems that are intelligent,
decentralized, explainable, and secure.
## Verifiable AI models and on-chain provenance
AI models are only as trustworthy as their training data and evolution history.
In high-stakes environments such as finance, medicine, and critical
infrastructure, it is essential to prove how models were built, what data they
used, and who owns them. Blockchain provides a mechanism to record and verify
every step of an AI model's lifecycle.
Applications include:
* Storing hashes of training datasets on-chain to prove their integrity
* Logging model versions, retraining events, and hyperparameter changes
* Registering intellectual property claims for proprietary AI models
* Creating audit trails for compliance and regulatory purposes
Example:
* A bank develops a credit scoring model and stores a cryptographic hash of the
training dataset on blockchain
* Each retraining session is logged with timestamp, data source identifier, and
performance benchmarks
* If regulators audit the system, they can verify that the model was trained
fairly, with documented bias mitigation steps
This ensures that AI models deployed in production are explainable, traceable,
and auditable — reducing legal risk and building institutional trust.
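
A minimal sketch of such a provenance registry, assuming models are identified
by a `bytes32` ID and that only hashes and benchmark summaries go on-chain; the
`ModelProvenance` contract and its fields are illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of an AI model provenance registry: each training run is logged
/// with its dataset hash, model version, and a benchmark score, producing a
/// tamper-evident lifecycle that auditors can replay.
contract ModelProvenance {
    struct TrainingRun {
        bytes32 datasetHash;  // keccak256 of the training dataset
        string modelVersion;
        uint256 benchmarkBps; // e.g., accuracy in basis points
        uint256 timestamp;
    }

    mapping(bytes32 => TrainingRun[]) public history; // modelId => runs
    mapping(bytes32 => address) public owner;

    event RunLogged(bytes32 indexed modelId, bytes32 datasetHash, string modelVersion);

    function register(bytes32 modelId) external {
        require(owner[modelId] == address(0), "already registered");
        owner[modelId] = msg.sender;
    }

    function logRun(
        bytes32 modelId,
        bytes32 datasetHash,
        string calldata version,
        uint256 benchmarkBps
    ) external {
        require(msg.sender == owner[modelId], "owner only");
        history[modelId].push(TrainingRun(datasetHash, version, benchmarkBps, block.timestamp));
        emit RunLogged(modelId, datasetHash, version);
    }
}
```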
## Decentralized data marketplaces and federated learning
AI systems thrive on large, diverse datasets. However, in industries like
healthcare or finance, data sharing is restricted by privacy regulations,
competitive interests, and security concerns. Blockchain enables decentralized
data marketplaces where organizations can contribute, access, and monetize data
without relinquishing control.
Combined with federated learning, AI models can be trained across decentralized
nodes without exposing raw data. Blockchain coordinates trust, payment, and
access control across participants.
Applications include:
* Token-based data exchange platforms with provenance and access rules
* Incentive models for contributing anonymized, high-quality datasets
* Federated learning smart contracts that log model performance per data node
* Monetization of underutilized datasets for AI researchers and startups
Example:
* Multiple hospitals participate in a federated learning initiative to build a
cancer prediction model
* Each hospital trains the model locally on their data and submits encrypted
updates to a central aggregator
* Blockchain records each contribution, assigns weights based on data volume and
quality, and distributes rewards accordingly
* No raw patient data is ever shared, preserving HIPAA and GDPR compliance
This approach accelerates AI development in regulated environments while
maintaining control, privacy, and fairness.
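
The coordination layer can be sketched as a small contract that records hashed
model updates with quality weights and pays rewards pro rata. The
`FederatedLearningRewards` contract, the trusted-aggregator role, and the
native-coin reward pool are simplifying assumptions; only hashes ever reach
the chain.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of federated-learning coordination: the aggregator logs each
/// node's encrypted update by hash with a quality weight, and participants
/// later claim rewards in proportion to accumulated weight.
contract FederatedLearningRewards {
    address public immutable aggregator;
    mapping(address => uint256) public weightOf; // accumulated contribution weight
    uint256 public totalWeight;

    event ContributionLogged(address indexed node, bytes32 updateHash, uint256 weight);
    event RewardPaid(address indexed node, uint256 amount);

    constructor() payable {
        aggregator = msg.sender;
    }

    function logContribution(address node, bytes32 updateHash, uint256 weight) external {
        require(msg.sender == aggregator, "aggregator only");
        weightOf[node] += weight;
        totalWeight += weight;
        emit ContributionLogged(node, updateHash, weight);
    }

    /// Each participant claims its pro-rata share of the reward pool.
    function claim() external {
        uint256 weight = weightOf[msg.sender];
        require(weight > 0, "nothing to claim");
        uint256 share = (address(this).balance * weight) / totalWeight;
        totalWeight -= weight;
        weightOf[msg.sender] = 0; // prevent double claims
        emit RewardPaid(msg.sender, share);
        (bool ok, ) = msg.sender.call{value: share}("");
        require(ok, "payout failed");
    }

    receive() external payable {} // fund the reward pool
}
```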
## AI for smart contract risk analysis and testing
Smart contracts are the core execution layer of blockchain applications.
However, they are vulnerable to bugs, exploits, and logic flaws — which can
result in irreversible financial loss. AI systems can analyze smart contracts to
detect potential vulnerabilities, test for attack vectors, and optimize gas
usage.
Key use cases:
* AI-based static code analysis for smart contract logic and dependencies
* Natural language processing (NLP) to match smart contract behavior with legal
terms
* Reinforcement learning to generate adversarial transaction sequences for
stress testing
* Automated report generation for auditors and protocol maintainers
Example:
* A DeFi protocol integrates an AI engine that reads newly deployed smart
contracts and flags anomalies in fund locking logic
* Developers receive alerts about integer overflows, unprotected upgrade paths,
or faulty access controls
* A dashboard displays risk scores and recommended patches, improving platform
resilience
AI acts as a real-time co-auditor, significantly reducing the time and effort
required to secure decentralized applications.
## Blockchain for AI model sharing and incentivization
Training AI models is computationally expensive and often requires
infrastructure that many developers lack. Blockchain supports ecosystems where
model developers can publish, license, and monetize their AI models securely and
transparently.
Features of blockchain-enabled AI sharing:
* Tokenized model access rights and subscriptions
* Usage-based royalties enforced via smart contracts
* Provenance tracking of model versions and forks
* Distributed inference markets where developers earn for API calls
Example:
* An NLP researcher publishes a sentiment analysis model on a blockchain AI
marketplace
* Each time the model is queried via API, a micropayment is triggered through a
smart contract
* Derivative models built on the base version are linked to it, and royalties
  are routed through a shared economic model
This democratizes AI access, aligns incentives between creators and users, and
encourages innovation through composable model ecosystems.
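
A sketch of the micropayment mechanics, assuming a trusted off-chain gateway
meters API calls and reports them on-chain. The `InferencePayments` contract
and the flat per-query fee are illustrative; a production marketplace would
batch queries or use payment channels to keep gas costs negligible.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of usage-based model royalties: consumers prepay a balance, and a
/// metering gateway deducts the per-query fee, crediting the model creator.
contract InferencePayments {
    address public immutable creator;
    address public immutable meter; // trusted off-chain API gateway
    uint256 public immutable feePerQuery;

    mapping(address => uint256) public deposits;
    uint256 public earned;

    event QueryPaid(address indexed consumer, uint256 fee);

    constructor(address _meter, uint256 _feePerQuery) {
        creator = msg.sender;
        meter = _meter;
        feePerQuery = _feePerQuery;
    }

    function deposit() external payable {
        deposits[msg.sender] += msg.value;
    }

    /// Called by the gateway each time the consumer's API key hits the model.
    function recordQuery(address consumer) external {
        require(msg.sender == meter, "meter only");
        require(deposits[consumer] >= feePerQuery, "balance exhausted");
        deposits[consumer] -= feePerQuery;
        earned += feePerQuery;
        emit QueryPaid(consumer, feePerQuery);
    }

    function withdrawEarnings() external {
        require(msg.sender == creator, "creator only");
        uint256 amount = earned;
        earned = 0;
        (bool ok, ) = creator.call{value: amount}("");
        require(ok, "withdraw failed");
    }
}
```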
## AI-powered identity verification and fraud detection
Identity verification is a foundational challenge in digital systems. AI models
excel at biometric analysis, pattern recognition, and behavioral profiling. When
combined with blockchain-based identity frameworks, they enable secure,
privacy-preserving identity verification systems.
Use cases:
* AI-based facial recognition paired with self-sovereign identity wallets
* Behavioral authentication through typing patterns, device signals, or voice
* On-chain identity scoring to detect bots, sybils, or social engineering
attempts
* AI-trained fraud detection models for KYC, AML, and credit scoring workflows
Example:
* A decentralized exchange integrates an AI model that detects fraudulent
behavior based on wallet activity and transaction timing
* The user’s self-sovereign identity is linked to a dynamic risk score recorded
on-chain
* If a transaction crosses a fraud threshold, the platform requests multi-factor
authentication or flags it for review
Combining AI and blockchain delivers both intelligence and accountability in
identity management and access control.
## Governance automation and AI-assisted DAOs
Decentralized autonomous organizations (DAOs) coordinate resources, make
collective decisions, and manage treasuries. AI enhances DAO functionality by
providing analytics, forecasting, and decision recommendations to voters.
Blockchain ensures that proposals, votes, and outcomes are tamper-proof.
Applications:
* AI models analyzing voting history and proposing policy simulations
* Automated treasury management with risk-adjusted investment strategies
* NLP interfaces that summarize proposals and translate governance documents
* Reinforcement learning agents optimizing DAO efficiency and participation
Example:
* A climate impact DAO uses an AI agent to score grant proposals based on carbon
impact, feasibility, and regional needs
* The scoring algorithm is recorded on-chain for transparency
* DAO voters review AI recommendations alongside human-curated comments before
casting votes
This augments human governance with machine intelligence, leading to faster,
more informed decision-making without sacrificing transparency.
## AI-enhanced oracles for real-world data verification
Oracles connect blockchain systems to external data sources. AI enhances oracle
reliability by filtering, validating, and scoring incoming data before it is
injected into smart contracts. This improves decision accuracy in DeFi,
insurance, gaming, and prediction markets.
AI-integrated oracle functions:
* Outlier detection and data sanity checks before transmission
* Confidence scoring and reliability reputation for data providers
* Real-time sentiment or event detection from web, media, and sensors
* AI prediction models that translate raw data into actionable insights
Example:
* A decentralized insurance protocol uses an AI oracle to detect natural
disaster events by analyzing satellite imagery and news reports
* The AI model confirms the likelihood of a flood event, assigns a confidence
score, and transmits the result to the smart contract
* If the score exceeds a threshold, claims are paid automatically without manual
investigation
AI-powered oracles bridge the gap between raw information and verified
decision-ready signals.
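
A minimal sketch of the threshold-payout pattern, assuming a single oracle
address pushes confidence scores in basis points; the `ParametricPayout`
contract, its 90 percent threshold, and the underwriter role are illustrative.
Score aggregation and model inference stay off-chain.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of an AI-scored parametric payout: an oracle posts an event
/// confidence score, and claims pay automatically once it clears the
/// policy threshold.
contract ParametricPayout {
    address public immutable oracle;
    address public immutable underwriter;
    uint256 public constant CONFIDENCE_THRESHOLD = 9_000; // 90.00% in basis points

    mapping(bytes32 => uint256) public eventScore; // eventId => confidence
    mapping(bytes32 => mapping(address => uint256)) public coverage;

    event Payout(bytes32 indexed eventId, address indexed insured, uint256 amount);

    constructor(address _oracle) payable {
        oracle = _oracle;
        underwriter = msg.sender;
    }

    /// Coverage would be written at policy issuance in a full system.
    function underwrite(bytes32 eventId, address insured, uint256 amount) external {
        require(msg.sender == underwriter, "underwriter only");
        coverage[eventId][insured] = amount;
    }

    function reportScore(bytes32 eventId, uint256 confidenceBps) external {
        require(msg.sender == oracle, "oracle only");
        eventScore[eventId] = confidenceBps;
    }

    /// No manual investigation: payout is automatic once the AI oracle's
    /// confidence clears the threshold.
    function claim(bytes32 eventId) external {
        require(eventScore[eventId] >= CONFIDENCE_THRESHOLD, "event not confirmed");
        uint256 amount = coverage[eventId][msg.sender];
        require(amount > 0, "no coverage");
        coverage[eventId][msg.sender] = 0;
        emit Payout(eventId, msg.sender, amount);
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "payout failed");
    }
}
```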
## Generative AI and NFT ecosystem integration
Generative AI models can create unique digital art, music, video, and code. When
paired with blockchain, each creation can be minted as an NFT, attributed to its
creator, and monetized through programmable royalties.
Applications include:
* On-demand generative content tied to ownership or subscription NFTs
* Co-creation platforms where users guide AI models and mint outputs
* Generative collectibles where traits are AI-generated at mint time
* Metadata linking prompts, model versions, and provenance to each NFT
Example:
* A creator uses a generative AI model to produce one-of-a-kind digital
sculptures
* Buyers mint these pieces as NFTs, each containing the prompt, algorithmic
seed, and rendering data
* Royalties are routed to the creator each time the NFT is resold
* Some NFTs grant remix rights, allowing new artworks to be created and
monetized collaboratively
This opens up a new frontier of programmable, AI-generated art governed by
blockchain-based intellectual property frameworks.
## Autonomous agents and blockchain coordination
Autonomous agents are software entities that operate independently to perform
tasks such as negotiation, data collection, and transaction execution. When
powered by AI, these agents can make context-aware decisions. Blockchain allows
them to interact in a trustless environment with accountability, persistence,
and value transfer.
Key features of blockchain-coordinated agents:
* Identity and reputation management through on-chain logs
* Smart contract-based payment for services or data exchange
* Agent-to-agent negotiation for logistics, bandwidth, or compute resources
* Multi-agent systems for collaborative supply chain or market tasks
Example:
* A fleet of delivery drones operates in a city to transport medical supplies
* Each drone is an autonomous agent that uses AI to plan routes, avoid
obstacles, and respond to weather
* Drones negotiate drop-offs, pickups, and recharges using blockchain-based
tokens and smart contracts
* The entire fleet operates transparently, and each drone’s actions are logged
for compliance and optimization
Combining AI and blockchain in multi-agent systems leads to autonomous economies
capable of self-organization, resilience, and distributed negotiation.
## Healthcare diagnostics and decentralized AI validation
AI is rapidly transforming healthcare diagnostics, helping detect anomalies in
medical images, predict patient outcomes, and personalize treatments. However,
clinical environments require strict auditability, transparency, and data
protection. Blockchain complements AI by preserving data provenance, enforcing
model transparency, and supporting decentralized clinical collaboration.
Healthcare applications include:
* Blockchain-logged diagnosis reports with verified AI inputs and outputs
* Secure sharing of diagnostic models between hospitals and research labs
* Training models across decentralized medical datasets using federated learning
* Clinical trial coordination with real-time data logging and audit support
Example:
* A consortium of hospitals trains an AI model to detect diabetic retinopathy
from eye scans
* Each diagnosis is recorded on blockchain with an encrypted reference to the
scan, AI decision, and physician override
* Patients can grant revocable access to their health record for second opinions
or follow-up
* Regulators audit the model's accuracy and fairness using blockchain-anchored
training documentation
This architecture strengthens confidence in AI-assisted care while preserving
patient rights, transparency, and medical ethics.
## Regulatory compliance and algorithmic accountability
As AI systems become integral to decision-making in sectors like banking,
insurance, and hiring, regulators are demanding more transparency and
auditability. Blockchain provides immutable logs of how, when, and why AI models
made certain decisions — creating a verifiable trail for regulators, users, and
stakeholders.
Key benefits of blockchain for AI compliance:
* Recording model scores, thresholds, and decision criteria
* Timestamped logs of AI decisions, overrides, and exceptions
* Storage of bias mitigation steps and retraining triggers
* Secure evidence repositories for audits and investigations
Example:
* A fintech platform uses an AI model to evaluate loan applications
* Each decision is accompanied by an explanation token and logged on-chain with
the applicant’s consent
* If an applicant is rejected, they can retrieve the reason and request a manual
review
* Regulators receive monthly compliance digests with model changes and
performance audits
This ensures that AI decisions comply with legal frameworks such as GDPR, the EU
AI Act, and the US Equal Credit Opportunity Act, while preserving fairness and
recourse for users.
## Explainability and interpretability of AI via blockchain records
AI explainability refers to the ability to understand how models reach their
conclusions. In high-risk domains such as finance, defense, and law,
explainability is crucial. Blockchain enables traceable recording of decision
flows, feature importance rankings, and post-hoc explanations.
Use cases include:
* On-chain logs of LIME, SHAP, or other explanation algorithm outputs
* Storage of attention maps, saliency maps, or causal graphs tied to model
output
* Verifiable model audit logs showing who accessed what, when, and how
* User-facing explanation tokens embedded in digital transactions or content
Example:
* A digital hiring platform uses an AI model to screen resumes
* For each decision, it generates a SHAP explanation that identifies the
features that contributed to the ranking
* This explanation is stored as a hash on blockchain and linked to the
candidate’s application record
* Hiring managers and candidates can view the reasoning and flag inaccuracies
for redress
Explainability builds user trust, supports audits, and helps organizations prove
compliance while reducing reputational risk.
## AI training transparency and dataset bias monitoring
One of the biggest risks in AI is bias in training data. Bias can result in
discriminatory behavior by models, especially in domains such as criminal
justice, credit scoring, or hiring. Blockchain offers mechanisms to log dataset
composition, model behavior on protected attributes, and community-verified
fairness audits.
Applications include:
* Dataset registration with attribute distribution and source metadata
* Logs of model performance across demographic slices (e.g., age, race, income)
* Community-driven model probing, challenge-response testing, and crowd audits
* Smart contracts that trigger retraining when bias thresholds are exceeded
Example:
* A government uses AI to screen grant applications
* The training dataset is logged on-chain with metadata about gender and
geographic representation
* A fairness watchdog DAO conducts periodic audits and submits probes to test
model outcomes
* If bias is detected, the smart contract alerts administrators and freezes
further deployment
This framework promotes responsible AI development and encourages transparency
by design.
## AI-enhanced legal contracts and dispute resolution
Smart contracts are deterministic and efficient but often lack nuance in
interpreting real-world ambiguity. AI systems can assist in translating natural
language contracts into code, resolving disputes through semantic analysis, and
interpreting context in decentralized arbitration.
Features of blockchain + AI in law:
* NLP models parsing legal text to generate contract logic or flags
* Machine-assisted arbitration through case summarization and similarity
matching
* Predictive models estimating outcomes based on prior case history
* Blockchain records storing claims, arguments, and rulings immutably
Example:
* A decentralized freelance platform uses smart contracts for payments and
delivery conditions
* If a dispute arises over quality, an AI system reviews previous interactions,
checks for keyword compliance in the deliverable, and summarizes arguments
* Arbitrators receive AI-generated digests and make decisions recorded on-chain
* The smart contract then executes the payout or refund based on the decision
Combining AI’s analytical power with blockchain’s trust layer transforms how
contracts are created, interpreted, and enforced globally.
## AI in decentralized finance and algorithmic portfolio management
Decentralized finance (DeFi) protocols automate financial services using smart
contracts. AI enhances DeFi by enabling dynamic risk analysis, portfolio
optimization, market prediction, and yield strategy selection.
AI + blockchain in DeFi supports:
* Portfolio rebalancing based on risk profiles and market signals
* Detection of arbitrage, rug pulls, or suspicious trading activity
* AI-generated DeFi strategies encoded as DAO proposals or automation scripts
* Reputation scores for wallets based on past trades, interactions, and strategy
quality
Example:
* An investment DAO uses an AI engine that tracks liquidity pools, token
volatility, and macroeconomic data
* Based on these inputs, it recommends staking in low-risk stablecoin pairs for
a two-week window
* The DAO votes on the strategy, and if approved, a smart contract executes the
allocation
* Performance is tracked, logged on-chain, and used to refine future
recommendations
This fusion of data-driven intelligence and automated execution creates adaptive
financial ecosystems without centralized control.
## Content generation and IP attribution in synthetic media
AI models like large language models (LLMs) and generative adversarial networks
(GANs) are capable of producing text, images, music, and video at scale.
Blockchain ensures that each piece of synthetic content can be traced to its
origin, model, prompt, and usage rights.
Applications in synthetic media:
* Tokenizing AI-generated content with embedded attribution and license terms
* Registering prompts and model configuration as part of NFT metadata
* Revenue sharing among prompt engineers, model creators, and remix artists
* Storing hashes of generated content for plagiarism detection
Example:
* A marketer uses an AI model to generate taglines for a product campaign
* The chosen content is minted as an NFT containing the prompt and model version
used
* As the campaign succeeds, the prompt designer receives a bonus through a smart
contract split
* If disputes arise over originality, the blockchain record is used to verify
authorship
This architecture supports synthetic creativity while maintaining intellectual
integrity, transparency, and legal clarity.
## AI agents for compliance monitoring and reporting
Regulatory compliance requires continuous monitoring, accurate reporting, and
audit-readiness. AI agents can scan on-chain data, evaluate contracts, and
detect violations in real time. Blockchain provides the substrate for evidence
collection, report generation, and tamper-proof storage.
Examples include:
* AML and KYC enforcement using AI flagging of transaction behavior
* Monitoring emissions or sustainability KPIs in tokenized carbon markets
* Smart contract evaluation for blacklisted wallet interaction or slippage
* Dashboards for regulators linked to AI-generated compliance metrics
Example:
* A green finance protocol tokenizes verified carbon credits
* An AI model monitors transaction flows and compares them to emissions targets,
reporting anomalies
* Dashboards used by regulators receive alerts if trading exceeds pre-set
thresholds or bypasses audit triggers
* Every report is hashed and timestamped on blockchain for future accountability
This combination creates real-time compliance systems that are data-rich,
automated, and trustworthy by default.
## AI-assisted DAO governance and treasury forecasting
DAOs rely on collective decision-making to allocate funds, vote on upgrades, and
manage ecosystems. However, coordinating thousands of members with different
preferences and technical backgrounds can lead to inefficiencies. AI systems
help by modeling decision outcomes, summarizing proposals, and forecasting
treasury health.
Capabilities include:
* Budget simulations based on historical DAO spending and market data
* Clustering of proposals by theme, urgency, or category
* NLP-based summaries of governance discussions or proposal descriptions
* Predictive modeling of vote outcomes and stakeholder alignment
Example:
* A grants DAO receives 200 funding proposals in a month
* An AI assistant tags each proposal based on content, filters out duplicates,
and highlights strategic relevance
* Treasury models show that funding 70 percent of them would reduce runway to
five months
* The AI ranks proposals based on alignment, budget impact, and contributor
history
This use of AI improves governance quality and scalability while keeping the
decision process transparent and explainable through blockchain logs.
## Predictive supply chain intelligence and blockchain provenance
Supply chains increasingly rely on predictive analytics to manage risk, forecast
demand, and optimize inventory. When paired with blockchain, AI models can use
trusted, real-time data across suppliers, logistics providers, and regulators —
creating predictive intelligence ecosystems.
Applications include:
* Predicting shortages, delays, or compliance failures using blockchain-verified
data
* AI models trained on real-time events like customs clearances or IoT logs
* Smart contract responses to risk events based on AI thresholds
* Model versioning and performance tracking logged on-chain
Example:
* A global food supplier tracks shipments using blockchain-based logistics
records
* An AI model monitors port congestion, weather, and customs clearance rates to
forecast delivery delays
* If risks are detected, a smart contract reroutes orders to secondary suppliers
or triggers contract renegotiation clauses
This ensures that decisions are made on trusted data, with audit trails for
every predictive action and automated contingency handling via blockchain
workflows.
## Dynamic token economics and AI-guided parameter tuning
Designing token economies requires complex trade-offs between incentives, supply
dynamics, staking rewards, and inflation. AI models simulate different token
configurations and forecast economic behaviors. Blockchain smart contracts
enforce these rules on-chain.
Use cases include:
* Agent-based modeling to simulate user behavior under different reward curves
* AI optimization of staking multipliers and liquidity incentives
* On-chain governance adjusting token parameters based on predictive models
* Transparent logs of token economic changes and their justifications
Example:
* A play-to-earn game suffers from token oversupply and falling engagement
* AI models test several inflation reduction curves and staking bonuses
* Community votes on the best model, and a smart contract implements the new
parameters
* Treasury and user behavior are monitored for rebound indicators, all logged
on-chain
This approach results in adaptive, data-driven token economies that evolve with
ecosystem needs and remain accountable to stakeholders.
## AI-powered education platforms with on-chain credentials
Education platforms benefit from adaptive learning algorithms that personalize
content, assess mastery, and guide students. Blockchain enhances this by issuing
verifiable, portable credentials that reflect progress, reputation, and skill
ownership.
Key applications:
* On-chain credentials tied to AI-assessed knowledge milestones
* Tokenized incentives for peer mentoring, quiz completion, or content creation
* AI-generated learning paths with real-time adjustment
* DAO-based governance of learning content and certification standards
Example:
* A decentralized coding school issues badges to students as they complete
AI-curated modules
* Tests are proctored by biometric AI tools, and time-locked certification
  tokens are issued on completion
* Top students earn tokens they can use for mentorship, DAO voting, or fee
waivers
* Institutions verify graduates by querying blockchain for course history and
assessment provenance
This system makes learning more accessible, verifiable, and globally
interoperable without centralized gatekeepers.
## AI in climate and energy optimization with blockchain tracking
AI models are essential for optimizing energy use, predicting emissions, and
simulating climate risks. Blockchain enables transparent, decentralized systems
for reporting emissions, tracking credits, and enforcing sustainability goals.
Applications include:
* AI models forecasting energy demand and grid usage
* Blockchain-recorded emissions data for carbon credits and ESG compliance
* Smart contracts adjusting resource pricing based on AI forecasts
* Distributed oracles for environmental data verified through multi-party
sources
Example:
* A city uses AI to predict peak energy demand across districts based on
weather, usage patterns, and historical data
* Smart meters upload data to blockchain, and credits are adjusted in real time
through automated contracts
* Companies that exceed emission limits purchase verified offsets tracked via
tokenized carbon credits
* Dashboards provide real-time reporting to regulators and community
stakeholders
Combining AI’s foresight with blockchain’s verifiability helps build more
sustainable and responsive energy systems.
## Behavioral economics and gamification in blockchain systems
AI models can simulate user psychology, preferences, and motivation in
decentralized platforms. When combined with blockchain’s programmable
incentives, these insights guide the design of effective gamification and
nudges.
Applications include:
* Predicting user churn and adjusting incentive structures accordingly
* Modeling the effect of reward frequency, randomness, or tiering
* Adaptive leaderboards and engagement tiers tied to wallet activity
* AI-suggested quests, challenges, or missions based on user profile clustering
Example:
* A decentralized learning platform uses AI to detect drops in engagement for
intermediate learners
* It launches a “streak challenge” with NFT rewards personalized to individual
goals
* Completion data is stored on-chain, and social sharing triggers bonus airdrops
* New users are matched with peer mentors based on profile similarity
This approach helps DAOs, DApps, and ecosystems retain users, reward loyalty,
and sustain long-term value through intelligent incentive design.
## Ethical alignment and value modeling for autonomous systems
As autonomous AI systems make increasingly complex decisions, ensuring ethical
alignment becomes critical. Blockchain allows the encoding, tracking, and
collective shaping of AI values through governance, audits, and verifiable
behavior logs.
Applications:
* Embedding ethical rules into autonomous agent policies
* Blockchain-anchored decisions showing why an action was taken
* Stakeholder votes on ethical trade-offs or value conflicts
* Penalties and corrections enforced through DAO-mediated redress systems
Example:
* A self-driving logistics company trains an AI fleet to prioritize safety,
efficiency, and eco-friendliness
* Each delivery decision logs its path, trade-offs, and rationale using
AI-generated summaries stored on-chain
* If an incident occurs, stakeholders review the blockchain record to assess
alignment with declared values
* Public feedback guides model retraining and the adjustment of priorities
This model ensures that autonomous decisions remain transparent, improvable, and
aligned with evolving human norms.
## Cross-chain AI agents and interoperability
As blockchain ecosystems fragment across multiple chains, AI agents act as
intelligent routers, translators, and coordinators of logic across platforms.
These agents can abstract away complexity and enable seamless multichain user
experiences.
Capabilities include:
* AI-powered bridges that choose optimal chains for transactions
* Cross-chain arbitration of disputes based on policy prediction
* AI summarization of multichain identity profiles for dApps
* Unified dashboard interfaces driven by AI indexing across chains
Example:
* A wallet AI scans user assets and gas fees across Ethereum, Avalanche, and
Arbitrum
* It recommends bridging funds to the most cost-effective chain for a DeFi
strategy
* Transactions are signed, routed, and logged on blockchain with traceable agent
IDs
* Portfolio performance is summarized with AI-powered alerts and yield tips
AI improves the usability and intelligence of the multichain future while
blockchain guarantees security and transaction consistency.
## Final remarks on emerging directions
The convergence of blockchain and AI is still in its early stages, but momentum
is growing rapidly. This dual stack of decentralized infrastructure and
intelligent computation is driving the evolution of:
* Autonomous markets and machine-to-machine coordination
* Verifiable intelligence pipelines and trustless analytics
* Privacy-preserving AI training and secure multi-party learning
* AI-native governance interfaces for DAOs and digital nations
Projects that integrate both technologies will benefit from transparency,
decentralization, and optimization — opening the door to a new class of
applications where systems are not only decentralized but adaptive, explainable,
and aligned with stakeholder interests.
## Edge AI and blockchain for decentralized infrastructure
Edge computing refers to processing data closer to its source — such as on
mobile devices, IoT sensors, or autonomous drones — rather than in centralized
cloud systems. AI deployed at the edge enables real-time decision-making, while
blockchain ensures secure data exchange, usage verification, and tamper-proof
audit trails.
Key joint capabilities:
* Logging model inputs and decisions at the device level using lightweight
blockchains
* Authenticating edge devices using decentralized identity and access control
* Triggering smart contracts based on edge AI outcomes (e.g., anomaly detection)
* Federated edge learning with blockchain-coordinated updates and incentives
Example:
* A network of agricultural sensors uses AI to monitor soil moisture and crop
health
* When drought conditions are detected, smart contracts trigger alerts to
irrigation DAOs
* Farmers receive recommended actions and funding for intervention
* Sensor data and actions are logged on-chain to build trust and track
environmental impact
This architecture allows for privacy-preserving, scalable intelligence on
distributed hardware with secure coordination across stakeholders.
## Blockchain-AI synergy in decentralized science (DeSci)
DeSci refers to decentralized science ecosystems where researchers,
institutions, and citizen scientists collaborate openly on research, publishing,
funding, and data sharing. AI helps automate research workflows, while
blockchain ensures that data, models, and credit are verifiable, transparent,
and resistant to censorship.
Use cases include:
* Open-access scientific datasets with AI-assisted metadata tagging and indexing
* Blockchain records of peer review, model training, and publication edits
* Tokenized reputation scores for contributors, reviewers, and AI-assisted
analysis
* On-chain lab notebooks and time-stamped research provenance
Example:
* A cancer research group publishes datasets on a decentralized registry
* An AI model helps identify correlations between gene expression and treatment
outcomes
* Results, model versions, and citations are registered on blockchain
* Contributors receive token rewards based on reproducibility metrics and
community validation
This ecosystem promotes reproducibility, transparency, and equitable
participation in global research efforts.
## Zero-knowledge proofs and AI: verifiable privacy
Zero-knowledge proofs (ZKPs) allow parties to prove that a statement is true
without revealing the underlying data. When combined with AI, ZKPs enable models
to operate on private data while still proving correctness. This is essential
for use cases involving sensitive information.
Applications include:
* Verifying that an AI made a decision using valid rules without revealing input
data
* Proving fairness or bias checks were run correctly before deploying a model
* Enabling private inference: showing a model output is valid without exposing
the model weights
* Protecting trade secrets or proprietary logic during multi-party computation
Example:
* A credit scoring model runs locally on a user’s device and returns an approval
decision
* A zero-knowledge proof is generated showing that the result was computed using
a regulatory-approved model
* The score and proof are recorded on-chain without exposing income, history, or
other private features
* Auditors can confirm validity using public smart contracts
This unlocks AI-powered services in finance, health, and defense where
confidentiality is non-negotiable.
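The end-to-end flow can be sketched in TypeScript as follows. Proof generation
is deliberately stubbed, in practice it would come from a proving system such as
Groth16 or PLONK, and the verifier contract interface shown here is an
assumption for illustration only:

```ts
import { ethers } from "ethers";

// Assumed verifier interface; real verifiers are generated by the proving toolchain.
const VERIFIER_ABI = [
  "function verifyProof(bytes proof, uint256[] publicInputs) view returns (bool)",
];

// Stub: a real implementation would run the approved scoring model locally and
// produce a zero-knowledge proof that the model (identified by a public hash)
// was evaluated correctly on the private inputs.
async function proveScore(privateFeatures: number[]): Promise<{ proof: string; publicInputs: bigint[] }> {
  return { proof: "0x", publicInputs: [1n /* approved decision */] }; // placeholder only
}

async function submitDecision(verifierAddress: string, provider: ethers.Provider) {
  const { proof, publicInputs } = await proveScore([/* income, history, ... stays local */]);
  const verifier = new ethers.Contract(verifierAddress, VERIFIER_ABI, provider);
  const ok: boolean = await verifier.verifyProof(proof, publicInputs);
  console.log("decision verified on-chain without revealing inputs:", ok);
}
```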
## AI pattern detection in blockchain analytics
Blockchain datasets are publicly accessible, vast, and rapidly growing. AI
models are uniquely suited to analyzing on-chain behavior, transaction flows,
and ecosystem dynamics. These insights can be used for fraud detection,
investment research, compliance, and market intelligence.
Key applications:
* Graph neural networks for identifying clusters, mixers, or bot networks
* Sequence modeling of wallet activity to detect Ponzi schemes or insider
trading
* Topic modeling and NLP analysis of governance forums and DAO chats
* Predictive analytics for token velocity, DeFi positions, or NFT trends
Example:
* A compliance firm uses a machine learning model to analyze transaction graphs
across multiple chains
* It flags wallets that interact with sanctioned entities or show signs of
front-running
* Results are embedded in smart contract risk scores used by DeFi aggregators
* DAOs use this data to exclude high-risk actors from participating in votes or
rewards
AI transforms raw blockchain data into structured insight, while blockchain
ensures that the models and alerts remain accountable and tamper-resistant.
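As a small illustration of on-chain pattern mining, the TypeScript sketch below
(ethers.js against an assumed RPC endpoint) scans ERC-20 `Transfer` logs and
flags senders with unusually high fan-out, a crude precursor to the graph models
described above:

```ts
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://rpc.example.org"); // assumed endpoint
const TRANSFER_TOPIC = ethers.id("Transfer(address,address,uint256)");

// Flags addresses that sent tokens to more than `threshold` distinct receivers.
async function highFanOutSenders(token: string, fromBlock: number, toBlock: number, threshold = 50) {
  const logs = await provider.getLogs({ address: token, topics: [TRANSFER_TOPIC], fromBlock, toBlock });
  const receivers = new Map<string, Set<string>>();
  for (const log of logs) {
    // Indexed address topics are 32 bytes; the address is the last 20 bytes.
    const from = ethers.getAddress(ethers.dataSlice(log.topics[1], 12));
    const to = ethers.getAddress(ethers.dataSlice(log.topics[2], 12));
    if (!receivers.has(from)) receivers.set(from, new Set());
    receivers.get(from)!.add(to);
  }
  return [...receivers].filter(([, tos]) => tos.size > threshold).map(([addr]) => addr);
}
```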
## Reinforcement learning in autonomous financial agents
Reinforcement learning (RL) is a type of AI that learns optimal behavior through
trial and error in dynamic environments. Blockchain allows RL agents to interact
with real financial systems in a secure, trackable manner — creating intelligent
strategies for trading, liquidity provision, and hedging.
Applications:
* AI agents that autonomously stake, borrow, lend, or rebalance portfolios
* Smart contracts defining environments, rewards, and penalties for RL agents
* On-chain validation of RL performance and behavior constraints
* Governance frameworks for agent registration, oversight, and improvement
Example:
* A decentralized hedge fund deploys multiple RL agents across lending protocols
* Each agent competes to maximize return while maintaining a target risk level
* Performance and model updates are published periodically and recorded
immutably
* Token holders vote on which agents to fund, scale, or sunset
This model creates financial ecosystems where strategies evolve autonomously,
but transparently, under shared governance.
## AI-enabled copyright enforcement and creative provenance
Creative industries face growing challenges around content theft, plagiarism,
and unauthorized use of generative AI outputs. Blockchain and AI together
provide tools to enforce copyright, track creative lineage, and preserve
attribution.
Use cases:
* AI detection of duplicate media or model-generated derivatives
* Blockchain anchoring of original content hashes and licensing terms
* Smart contract enforcement of royalty splits and resale rights
* IP registries that index AI-generated assets with human attribution logs
Example:
* An artist mints a generative video piece as an NFT
* An AI crawler detects a copy used without permission on a centralized platform
* The violation is flagged and proof is recorded on-chain
* A smart contract automates the claim process or engages a DAO for arbitration
This protects creative ecosystems, ensures AI compliance with original licenses,
and deters unauthorized appropriation at scale.
## Collaborative AI agents in creative DAOs
AI systems can participate in DAOs not just as tools, but as creative
collaborators. From generating visual ideas to suggesting storylines, these
agents operate within rules, track their outputs, and receive attribution and
compensation. Blockchain tracks contributions, manages payments, and enables
remix licensing.
Examples:
* AI tools writing base melodies that human artists refine
* DAO-licensed AI avatars acting as NPCs in games or virtual stories
* Generative poetry bots trained by DAO members and voted on for publishing
* Shared revenue pools that reward both code and content contributors
Example:
* A visual art DAO uses a collective AI model trained on their style
* Each new piece is minted with dual attribution: DAO and the human editor
* Royalties are split, and the AI’s training logs and source weights are
verifiable on-chain
* Holders of creative contribution tokens can propose new styles or curation
themes
This unlocks collaborative workflows where creativity is distributed,
documented, and governed transparently.
## Autonomous NFT behavior and on-chain AI triggers
NFTs are evolving from static digital representations into dynamic, interactive
agents. AI allows NFTs to adapt, evolve, or respond based on context. Blockchain
defines the logic and execution triggers behind these behaviors.
Capabilities:
* NFTs that change appearance based on real-world data (weather, location,
events)
* AI-generated evolutions or narrative updates embedded in NFT metadata
* On-chain inputs driving state changes such as rarity, traits, or utility
* Smart contract-controlled interactions with games, social platforms, or
marketplaces
Example:
* A story-based NFT evolves through chapters unlocked via wallet interaction
* Each new chapter is generated with AI assistance, and token holders vote on
which plotline is accepted
* Evolution logs, prompt metadata, and user choices are recorded on-chain
* The NFT becomes a living artifact, responsive to both machine logic and human
community
This redefines what digital ownership means — from owning a file to co-creating
an evolving digital identity.
## Intelligent robotics and blockchain-based coordination
Robots that act autonomously in real-world environments require trust,
coordination, and verifiable interaction logs. AI gives robots perception and
decision-making capabilities. Blockchain ensures that robots authenticate their
actions, resolve tasks collaboratively, and exchange value securely.
Key integrations:
* On-chain registration of robots as agents with unique identities and
capabilities
* AI models for navigation, manipulation, and interaction
* Blockchain-based task assignment, contract fulfillment, and payment
* Decentralized logs of incidents, maintenance, and updates
Example:
* A smart factory deploys robots that handle manufacturing, inspection, and
packaging
* Each robot is linked to a blockchain profile recording uptime, tasks, and
upgrades
* When products are damaged, AI logs the cause and records the event immutably
* Robots can bid on task assignments or share status through a coordination
smart contract
This setup makes robotics infrastructure transparent, interoperable, and
accountable across manufacturers, regulators, and service providers.
## Synthetic identity and AI-persona ecosystems
Synthetic identity systems use AI to generate personas, agents, or avatars that
interact with users across platforms. Blockchain provides the identity layer for
anchoring these agents to wallets, contracts, or reputational histories.
Capabilities include:
* AI-generated personas (voice, image, behavior) with unique token-bound IDs
* On-chain reputation tied to agent conduct, task completion, or content quality
* Privacy-preserving attestation of skills, access rights, or certifications
* Marketplaces for agent leasing, licensing, or delegation
Example:
* A news platform uses synthetic presenters generated via voice and image
synthesis
* Each AI persona is tied to a blockchain credential showing training data, bias
testing, and ownership
* Advertisers can verify that content delivery met tone and demographic targets
using on-chain logs
* If an AI persona violates terms or receives negative engagement scores, it is
paused or retrained via governance
Synthetic identity systems create programmable, accountable agents that operate
transparently in regulated or user-facing environments.
## AI and blockchain in metaverse experience design
The metaverse is an emerging domain where users interact via immersive digital
environments. AI drives behavior, narrative, and simulation. Blockchain enables
ownership, transactions, and persistence of digital identities and assets.
Integrated metaverse use cases:
* AI agents as NPCs or guides trained by DAO-curated content
* NFTs linked to adaptive in-game assets powered by machine learning
* Smart contracts enforcing experience triggers, progression, or access
* Behavioral analytics for user experience personalization stored on-chain
Example:
* A museum in the metaverse features interactive AI docents trained on cultural
archives
* Visitors earn badges (NFTs) based on completed tours, which grant deeper
access
* Conversations are anonymized, indexed, and used to improve AI knowledge via
on-chain reputation scores
* Curators propose new exhibits, and AI suggests layouts based on visitor
behavior
This enables metaverse platforms to deliver highly interactive, personalized,
and verifiable digital worlds governed by creative communities and intelligent
agents.
## Long-context LLMs as DAO tools and co-creators
Large language models (LLMs) like GPT can act as summarizers, editors, debaters,
and decision-support systems within DAO environments. With blockchain
integration, their outputs, prompts, and roles can be tracked, governed, and
monetized.
Use cases:
* LLMs generating DAO proposal summaries or explaining governance processes
* Verified chain-of-prompt records to ensure output provenance and transparency
* Reward systems for helpful prompts, evaluated through DAO voting or activity
logs
* LLM moderation of community channels with recordable intervention logic
Example:
* A public goods DAO uses an LLM to analyze grant proposals and produce summary
dashboards
* Each summary links to the original prompt, user, and model checkpoint ID
on-chain
* Token holders can upvote useful summaries, triggering a smart contract reward
* The DAO also funds model tuning for domain specificity, with updates versioned
on blockchain
These language models become trusted members of decentralized ecosystems with
defined responsibilities and transparent influence.
## Blockchain-based AI risk governance frameworks
As AI systems take on greater autonomy, society needs mechanisms to assess,
approve, and govern their deployment. Blockchain enables decentralized risk
registers, audit logs, and enforcement contracts for AI models.
Applications:
* On-chain declarations of AI risk category, training methods, and guardrails
* Decentralized peer review of model behavior and edge case testing
* Smart contracts enforcing risk thresholds or operational constraints
* Public dashboards visualizing exposure, coverage, and performance over time
Example:
* An autonomous drone AI is registered as a high-risk system under an EU-aligned
taxonomy
* It logs test flights, anomalies, and retraining efforts to a blockchain risk
registry
* DAO-based ethics reviewers flag behavior outliers, triggering a vote on usage
restrictions
* Smart contracts automatically ground the drone if certain violation scores are
breached
This ensures that powerful AI systems are deployed with accountability,
verifiability, and shared oversight — not just centralized risk reporting.
## AI-native token design and behavioral feedback loops
Designing token systems that incentivize positive behavior and sustainable
growth is a complex challenge. AI models can analyze wallet behaviors, protocol
interactions, and community health to guide token supply and governance
decisions.
Features include:
* Behavioral analytics for token holders and DApp users
* AI-generated recommendations for inflation schedules, staking rates, or
airdrop eligibility
* Blockchain-enforced adoption of updated parameters through DAO vote
* Real-time feedback loops between user behavior, reward curves, and protocol
health
Example:
* A contributor DAO uses AI to detect periods of low morale, inactivity, or
whale influence
* Token issuance slows automatically, and bonus pools are redirected to
reputation growth events
* The AI also suggests modified voting quorums or review incentives based on
activity levels
* The full rationale is published on-chain with links to model inputs and
expected outcomes
This keeps token ecosystems adaptive, healthy, and aligned with community
contribution dynamics.
## Distributed AI inference and blockchain monetization
As more models run on decentralized infrastructure, blockchain enables metering,
access control, and revenue sharing for AI inference. This approach reduces
reliance on centralized API providers and promotes open infrastructure.
Applications:
* On-chain payments for inference queries processed on decentralized hardware
* Proof-of-inference attestations recorded by nodes for transparency
* Model routing optimization to reduce latency and compute cost
* Token reward systems for GPU contributors in distributed model networks
Example:
* An open-source image model is hosted across decentralized compute nodes
* Users pay small fees in stablecoins or protocol tokens to run image
stylization tasks
* Each node logs proof of service, verified by zk-SNARKs or cross-checking peers
* Earnings are split among node operators, model authors, and prompt engineers
This powers AI-as-a-service ecosystems that are censorship-resistant,
cost-efficient, and equitably monetized.
## Decentralized large language models and model ownership
Large language models (LLMs) are currently hosted by centralized providers,
limiting transparency, usage control, and monetization options for independent
developers. Blockchain offers the foundation for decentralized LLM ecosystems
where training, hosting, and revenue can be distributed fairly.
Key capabilities:
* Tokenized ownership of model checkpoints and training weights
* Federated hosting of model shards with incentive structures for node operators
* Governance of training data policies, fine-tuning directions, and access
pricing
* On-chain usage metering for API calls and query responses
Example:
* A language model trained on open-source legal texts is split into segments
across hosting nodes
* Each node receives micropayments for query execution via smart contracts
* Token holders vote on which datasets should be added for fine-tuning
* Researchers build wrappers on top of the core model and receive licensing
royalties through programmable attribution
Decentralized LLMs align with the goals of transparency, interoperability, and
sovereignty in knowledge systems.
## AI-powered DAO recruitment and skill matching
As decentralized organizations grow, hiring and contributor engagement become
difficult to manage through manual processes. AI can assist by parsing
proposals, analyzing contributions, and recommending roles. Blockchain ensures
that reputation, verification, and incentives are handled securely and
transparently.
Applications:
* AI parsing of GitHub, forum, and wallet data to identify active contributors
* Natural language extraction of skills and past work from public profiles
* Matching between proposal requirements and contributor availability
* On-chain verification of completed tasks and issuance of skill credentials
Example:
* A DeFi protocol launches a bounty for a new smart contract module
* AI recommends developers based on their past Solidity work, DAO participation,
and timezone availability
* Accepted contributors receive tokens, which convert to credentials visible in
future DAO hiring rounds
* Community votes adjust role criteria and signal demand for specific skills
This enhances the agility, inclusiveness, and quality of contributor onboarding
across the decentralized economy.
## Blockchain-powered digital twin modeling with AI integration
Digital twins are virtual representations of real-world systems such as
factories, vehicles, or cities. They rely on sensor data, predictive models, and
simulation engines. Blockchain provides the shared data backbone and
tamper-proof logging necessary for collaboration, versioning, and compliance.
Capabilities:
* AI models predict behavior, degradation, or failure of real-world assets
* Blockchain stores lifecycle logs, update history, and control permissions
* Stakeholders verify simulation parameters and update rights
* Smart contracts automate reconfiguration, billing, or intervention triggers
Example:
* A wind farm maintains digital twins of each turbine using telemetry data and
AI forecasts
* Maintenance DAOs receive alerts when components exceed vibration thresholds
* Spare part logistics, crew scheduling, and token-based accountability are
coordinated on-chain
* Regulators and insurance providers audit operational logs in real time
This fusion of real-time AI and blockchain ensures high trust, uptime, and
auditability for cyber-physical infrastructure.
## Security co-design between AI and blockchain layers
Blockchain applications require robust security at multiple levels — from smart
contract logic to network behavior. AI models assist in detecting anomalies,
defending against attacks, and predicting exploits. In return, blockchain logs
are used to train and evaluate these security systems.
Joint security use cases:
* Anomaly detection for smart contract interactions and DApp behavior
* Machine learning models for phishing, MEV, or transaction front-running
identification
* Real-time mitigation or alerts triggered via smart contracts
* Blockchain-verified training and test data for AI red-teaming
Example:
* A Web3 wallet integrates an AI assistant that flags suspicious transactions or
approvals
* If an approval looks risky, the user is prompted to verify with a second
wallet or biometric check
* The AI model learns from on-chain feedback (was it fraud or not?) and improves
over time
* Model versions, incident hashes, and retraining events are stored immutably
This system increases security while minimizing false positives, empowering
users and developers alike.
## Generative AI in decentralized entertainment and gameplay
Entertainment experiences increasingly integrate generative AI to produce
real-time dialogue, characters, or visuals. With blockchain, these experiences
can be owned, traded, and governed — forming the foundation for participatory
digital storytelling and economies.
Applications include:
* Procedural narrative engines where AI co-authors plotlines and quests
* Dynamic NFT content that adapts to player behavior or in-game choices
* Token-gated prompts and co-creation layers for fan-fiction and story arcs
* Royalty flows and remix rights enforced via smart contracts and generative
fingerprints
Example:
* A fantasy game allows players to summon AI-generated characters whose
backstories evolve over time
* Players mint episodes of their journey as NFT story arcs that other players
can adopt or remix
* Popular characters are licensed by other creators, with all attribution and
income streams handled on-chain
* A meta-AI tracks lore consistency across thousands of parallel stories
This infrastructure supports co-created entertainment that is dynamic, owned,
and endlessly generative.
## Federated governance of AI ethics and compliance
AI governance faces global fragmentation and value pluralism. Blockchain enables
federated, transparent mechanisms where organizations, communities, and
regulators can collaboratively shape AI behavior — even in the absence of a
single authority.
Governance capabilities:
* Voting protocols for ethical rule selection, weighting, or overrides
* Registry of regulatory compliance templates and audited outcomes
* Incentivized red-teaming and bug bounty protocols for model behavior
* Composability between different jurisdictional constraints and model versions
Example:
* An international group develops an AI model for news recommendation
* Each jurisdiction submits policy constraints, like source diversity or
misinformation thresholds
* The model is audited by both AI agents and human reviewers, with results
logged on-chain
* Deployments in different regions use distinct configurations, all traceable to
a shared governance base
This creates trust across borders and encourages compliance with evolving
expectations of fairness, inclusiveness, and accountability.
## Future outlook for AI and blockchain convergence
The integration of artificial intelligence and blockchain will define the
infrastructure layer for tomorrow’s economy, society, and governance systems.
Their joint application is evolving along several major trajectories:
* Autonomous value networks: Machines transact, coordinate, and optimize without
centralized control
* Programmable trust: Smart contracts and AI agents dynamically evaluate and
enforce social, legal, and economic rules
* Decentralized intelligence ecosystems: Communities train, audit, and own
models collaboratively
* On-chain analytics: Blockchain is no longer just a ledger, but a knowledge
substrate updated by AI in real time
In this landscape:
* Models will be born and live in public
* Knowledge will be composable, licensed, and monetized through transparent
rails
* Governance will shift from binary votes to nuanced, contextual decision
support powered by explainable systems
* Safety, creativity, and alignment will become provably auditable properties
As blockchain provides the secure substrate and AI supplies the adaptive logic,
together they will shape a world where intelligence is not centralized, opaque,
or proprietary — but shared, programmable, and decentralized.
file: ./content/docs/knowledge-bank/blockchain-app-design.mdx
meta: {
"title": "Blockchain application design",
"description": "Guide to designing blockchain applications"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
# Blockchain application design
Effective blockchain application design requires careful consideration of
architecture, security, and scalability.

## Architecture patterns
### On-chain components
* Smart contracts
* State management
* Access control
* Business logic
### Off-chain components
* User interface
* Data storage
* API integration
* Business services
In traditional application architectures, the application code and the data it
processes are typically managed as two distinct components. The application
code, written in languages like Java, Python, or JavaScript, resides on a
central server or in a containerized cloud environment and is responsible for
handling business logic, user sessions, and orchestrating data flows. In
parallel, the actual storage of transactional data, user profiles, orders, logs,
etc. is delegated to a separate database layer, such as PostgreSQL, MySQL, or
MongoDB. The application code interacts with the database through APIs or query
languages, and both components are independently developed, scaled, and
maintained. This separation provides flexibility in system design but also
introduces dependencies on central operators and trust in the integrity and
availability of the database.
In blockchain-based applications, this separation collapses into a single,
unified execution environment. Both the application logic and the transactional
data reside on the blockchain itself. Smart contracts, typically written in
languages like Solidity or Vyper, are deployed directly onto the blockchain
network and serve as immutable programs that execute predefined business logic.
When users interact with the application, they submit transactions that trigger
functions within these smart contracts. These transactions, including the input
parameters and resulting state changes, are recorded permanently on the
blockchain ledger and are independently validated by all participating nodes.
This convergence of logic and data on a shared decentralized layer introduces
several key properties. First, it ensures that the execution of application
logic is transparent and verifiable by all parties, since both the contract code
and the input/output of each transaction are publicly accessible. Second, it
eliminates the reliance on a single trusted database provider, replacing it with
consensus-based trust. Every piece of data written to the ledger has been
validated by the network and is cryptographically linked to previous
transactions, providing tamper-evident auditability.
In blockchain-based systems, the application code, deployed as smart contracts,
is inherently tamper-proof once published to the network. Unlike traditional
applications where backend code can be modified or patched by system
administrators at any time, smart contracts are immutable by default. Once
deployed on the blockchain, the code is stored across all nodes and executed
identically by every participant. This ensures that no single party can alter
the logic or behavior of the application unilaterally, providing strong
guarantees of integrity, consistency, and trustless execution.
The integrated nature of code and data on the blockchain also imposes
constraints. Unlike traditional applications that can easily modify database
records or iterate on business logic by updating backend services, smart
contracts are immutable once deployed unless they are explicitly designed to be
upgradeable. Additionally, since blockchain networks maintain global state
across distributed nodes, every write operation consumes resources and incurs
transaction fees, making optimization of both logic and storage essential.
Nonetheless, this architecture provides unparalleled security, traceability, and
consistency, particularly in multi-party applications where trust boundaries are
complex.
By collapsing the application tier and data tier into a single,
consensus-governed layer, blockchain shifts the paradigm from “you trust my
backend and my database” to “we all trust the same code and data on-chain.” This
creates a powerful foundation for building systems that are not only resilient
and secure but also provably fair and transparent to all participants.
Blockchain application development requires a fundamentally different approach
than traditional software engineering. It introduces decentralized state
management, cryptographically enforced rules, and distributed consensus to the
application architecture. At its core, the design of a blockchain application is
rooted in a few foundational principles: decentralization, security, and
scalability, all of which influence the choice of technologies, development
patterns, and system boundaries.
Decentralization lies at the heart of blockchain systems and must be
thoughtfully applied across application layers. This includes distributing data
storage across nodes, ensuring no single point of failure or control exists, and
relying on consensus mechanisms such as Proof of Authority (PoA), IBFT2, or QBFT
to validate transactions. Network topology must be designed to accommodate
validator nodes, light clients, and external observers while maintaining
synchronization and performance. The application architecture should aim to
minimize trust assumptions between parties by delegating critical workflows to
smart contracts, ensuring that execution is deterministic and transparently
verifiable on-chain.
Security is a non-negotiable aspect of blockchain application design. Smart
contracts must undergo rigorous review and testing to prevent vulnerabilities
such as reentrancy, integer overflows, and improper access control. Every
interaction must be governed by robust access control policies, often
implemented using role-based patterns. Key management must be enforced across
both client and infrastructure layers, ensuring that private keys used for
transaction signing are never exposed or misused. Moreover, blockchain systems
provide a natural audit trail through their immutable transaction history, which
can be leveraged to ensure accountability and compliance with regulatory
standards.
Scalability must be considered from both a technical and user experience
perspective. While Layer 1 blockchains offer security and decentralization, they
often face throughput limitations. Therefore, developers may choose to integrate
Layer 2 solutions such as sidechains, rollups, or state channels to offload
transaction volume. On the data side, efficient storage patterns, like
separating on-chain references from off-chain payloads, and leveraging caching
strategies can significantly enhance application responsiveness. Load balancing
across API services and indexers also ensures that the system remains performant
under real-world usage conditions.
The blockchain application stack typically consists of three main layers:
frontend, middleware, and the blockchain itself. The frontend is the user’s
point of interaction and includes Web3 integration libraries such as ethers.js
or web3.js, modern UI frameworks like React or Vue, and robust state management
solutions like Redux or Zustand. Frontends connect to wallets, request
transaction signatures, and present real-time blockchain state to users. The user
experience must account for asynchronous transaction finality, network
confirmation delays, and error feedback to guide users through actions like
signing or waiting for a block to be mined.
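A minimal sketch of that frontend flow with ethers.js, assuming a browser wallet
that injects `window.ethereum` (the recipient and amount are caller-supplied):

```ts
import { ethers } from "ethers";

// Connects to an injected browser wallet (e.g., MetaMask) and sends a payment.
async function sendPayment(to: string, amountEth: string) {
  const provider = new ethers.BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner(); // prompts the user to connect

  try {
    const tx = await signer.sendTransaction({ to, value: ethers.parseEther(amountEth) });
    console.log("submitted, waiting for confirmation:", tx.hash);
    const receipt = await tx.wait(); // resolves once the transaction is mined
    console.log("confirmed in block", receipt?.blockNumber);
  } catch (err) {
    // Surface rejection / out-of-gas / replacement errors back to the UI.
    console.error("transaction failed or was rejected", err);
  }
}
```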
The middleware layer serves as a bridge between the frontend and blockchain. It
includes event listeners that subscribe to smart contract events, transform them
into structured data, and store them in off-chain databases. Middleware may also
include cache layers to accelerate queries, API gateways for routing and
authentication, and custom logic for enforcing workflows that span both on-chain
and off-chain systems. This layer is crucial for supporting backend integration,
indexing, alerting, and analytics.
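A minimal listener sketch in TypeScript: it subscribes to an ERC-20 `Transfer`
event over WebSocket and hands each event to an off-chain store. The endpoint
and token address are placeholders, and the database client is stubbed:

```ts
import { ethers } from "ethers";

const provider = new ethers.WebSocketProvider("wss://rpc.example.org"); // placeholder
const ABI = ["event Transfer(address indexed from, address indexed to, uint256 value)"];
const token = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder token address
  ABI,
  provider,
);

// Stand-in for a real database client (PostgreSQL, MongoDB, ...).
const db = { async insert(row: object) { console.log("stored", row); } };

token.on("Transfer", async (from: string, to: string, value: bigint, event) => {
  // Transform the raw event into a structured off-chain record.
  await db.insert({
    from,
    to,
    value: value.toString(),
    txHash: event.log.transactionHash,
    blockNumber: event.log.blockNumber,
  });
});
```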
At the blockchain layer, the smart contracts govern the core business rules of
the application. These contracts are deployed on networks selected based on the
project’s performance, cost, and decentralization requirements. Developers must
carefully design contract logic to be modular, upgradeable, and optimized for
gas consumption. Storage patterns such as mapping-based structures and
event-based tracking are preferred to reduce state bloat and execution cost. Gas
efficiency and deterministic behavior are essential not only for performance but
also for ensuring user affordability and network stability.
Smart contract development should follow a few established best practices.
Contracts should be designed in a modular way, separating core logic, access
control, and storage. Where upgradability is required, proxy patterns such as
UUPS or Transparent Proxy should be used to allow future extension without
compromising the initial deployment. Security checks must be embedded at every
function entry point to validate sender roles, parameter ranges, and external
call risks. Testing suites must simulate edge cases and validate all logic under
both normal and adversarial conditions.
Data management also plays a key role in blockchain-based systems. Developers
must decide what data is stored on-chain versus off-chain. Typically, hashes of
documents, references to IPFS files, or key-value mappings are stored on-chain,
while the actual content lives in IPFS, cloud storage, or SQL/NoSQL databases.
This separation allows for efficient querying, large data handling, and
regulatory compliance. Caching layers such as Redis or ElasticSearch may be
introduced to improve responsiveness, especially for dashboards or frequently
accessed metadata.
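For example, anchoring a document hash on-chain while the payload lives in IPFS
might look like the following sketch; the `anchor` registry function and its
address are assumed interfaces for illustration:

```ts
import { ethers } from "ethers";

// Assumed minimal registry interface for illustration.
const REGISTRY_ABI = ["function anchor(bytes32 docHash, string ipfsCid)"];

async function anchorDocument(content: string, ipfsCid: string, signer: ethers.Signer) {
  // Only the 32-byte fingerprint goes on-chain; the content stays in IPFS.
  const docHash = ethers.keccak256(ethers.toUtf8Bytes(content));

  const registry = new ethers.Contract(
    "0x0000000000000000000000000000000000000000", // placeholder address
    REGISTRY_ABI,
    signer,
  );
  const tx = await registry.anchor(docHash, ipfsCid);
  await tx.wait();
  return docHash; // later, re-hash the retrieved file and compare to verify integrity
}
```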
Integration patterns are essential to bridge smart contract logic with the rest
of the digital ecosystem. Events emitted from smart contracts are captured by
event listeners and passed to downstream processes, whether for updating UI
state, triggering business workflows, or invoking external APIs. REST and
GraphQL APIs must be designed to abstract the blockchain complexity while
exposing key application functions securely and efficiently. Error handling in
blockchain applications is critical due to the probabilistic nature of block
confirmations and potential gas price volatility. Transaction management
components must handle nonce tracking, confirmation polling, and user feedback
loops to ensure a smooth experience.
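A compact sketch of that transaction-management layer with ethers.js, explicitly
pinning the nonce and waiting for a configurable number of confirmations:

```ts
import { ethers } from "ethers";

// Sends a transaction with an explicit nonce and waits for N confirmations,
// so the caller can drive user feedback ("pending", "confirmed", "failed").
async function sendTracked(
  signer: ethers.Signer,
  to: string,
  valueEth: string,
  confirmations = 3,
) {
  const provider = signer.provider!;
  const from = await signer.getAddress();

  // "pending" includes queued transactions, avoiding nonce collisions
  // when several transactions are sent in quick succession.
  const nonce = await provider.getTransactionCount(from, "pending");

  const tx = await signer.sendTransaction({ to, value: ethers.parseEther(valueEth), nonce });
  const receipt = await tx.wait(confirmations); // polls until N blocks deep
  if (receipt?.status !== 1) throw new Error(`transaction ${tx.hash} reverted`);
  return receipt;
}
```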
file: ./content/docs/knowledge-bank/blockchain-introduction.mdx
meta: {
"title": "Blockchain introduction",
"description": "A comprehensive overview of blockchain technology"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
Blockchain is a decentralized, tamper-resistant technology that enables secure data sharing and transaction recording without relying on a central authority. It works by storing information in blocks that are cryptographically linked, creating an immutable audit trail.
This architecture enhances transparency, trust, and accountability across multi-party systems. Blockchain is widely used in finance, supply chain, healthcare, and government to digitize workflows and automate trust. Its programmability through smart contracts further enables the creation of decentralized applications and digital assets.
## Cryptographic foundations
Blockchain systems rely heavily on cryptographic techniques to ensure security
and integrity of data:
> **Hashing**: A cryptographic hash function takes an input (such as a
> transaction or an entire block) and produces a fixed-size digest that is
> effectively unique to that input. Even a small change in the input produces a
> drastically different hash. Blockchain uses hashing to chain blocks together
> by including the previous block's hash in each new block.
Hashing also helps in creating digital fingerprints for data (for example,
transaction IDs or block IDs) and contributes to the formation of Merkle trees
for efficient verification.
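A two-line demonstration of the avalanche effect described above, using Node's
built-in crypto module:

```ts
import { createHash } from "node:crypto";

const sha256 = (input: string) => createHash("sha256").update(input).digest("hex");

console.log(sha256("hello")); // 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
console.log(sha256("hellp")); // one character changed: a completely unrelated digest
```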
> **Nonce**: A nonce ("number used once") is a value that is varied to influence
> the outcome of a cryptographic process. In proof-of-work blockchains, the nonce is
> a field in the block header that miners adjust when hashing the block. Miners
> repeatedly hash the block header with different nonce values until a hash is
> produced that meets the network's difficulty target (typically a hash with a
> certain number of leading zeros).
The correct nonce makes the block's hash valid under the consensus rules,
allowing the block to be added. Nonces ensure that each mining attempt produces
a different hash output. (In other contexts, such as transactions, the term
"nonce" can also refer to a one-time number used to prevent replay attacks or
track transaction order, as seen in account-based blockchains.)
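The mining loop can be illustrated in a few lines of TypeScript: keep
incrementing the nonce until the block header hashes below the target,
approximated here as a required number of leading zero hex digits:

```ts
import { createHash } from "node:crypto";

// Toy proof-of-work: find a nonce so the header hash starts with `difficulty` zeros.
function mine(headerData: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const hash = createHash("sha256").update(headerData + nonce).digest("hex");
    if (hash.startsWith(target)) return { nonce, hash };
  }
}

// Each extra zero makes the search roughly 16x harder - the adjustable "difficulty".
console.log(mine("prevHash|merkleRoot|timestamp", 4));
```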
> **Public/private key pairs**: Blockchain uses asymmetric cryptography for
> identity and authentication. Each participant has a private key (kept secret)
> and a corresponding public key. The private key can be used to generate
> digital signatures on transactions, and the public key allows others to verify
> those signatures. This ensures that only the holder of the private key could
> have authorized a given transaction.
Public keys (or their hashes) often serve as addresses or identifiers on the
network. The cryptography (typically elliptic curve cryptography) is designed
such that it is computationally infeasible to derive the private key from the
public key, providing strong security for users' funds and data.
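Signing and verification with an elliptic-curve key pair, sketched with
ethers.js:

```ts
import { ethers } from "ethers";

const wallet = ethers.Wallet.createRandom(); // new private/public key pair

const message = "transfer 10 tokens to 0xabc...";
const signature = await wallet.signMessage(message); // signed with the private key

// Anyone can recover the signer's address from the message + signature alone.
const recovered = ethers.verifyMessage(message, signature);
console.log(recovered === wallet.address); // true: only the key holder could sign
```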
## Block structure
A blockchain is composed of a sequence of blocks, each containing a batch of
transactions and a header linking it to the previous block. The structure of a
block typically includes:
> **Block header**: The header contains metadata about the block. Crucially, it
> holds the previous block's hash, linking the block to the chain and ensuring
> continuity. It also includes a Merkle root (a single hash representing all
> transactions in the block, explained further in the Merkle Trees section) that
> commits to the contents of the block.
Additional header fields commonly include a timestamp (when the block was
created), a difficulty target and nonce (in proof-of-work systems, as described
above), and a version or protocol indicator. In some blockchains, the header may
also contain other fields; for example, Ethereum's header contains the root of
the global state and other metadata.
> **Block body**: The body of the block contains the list of transactions that
> are included in this block. Each transaction in the body is fully detailed
> (including information like sender, receiver, amount, signatures, etc.,
> covered in the Transactions section).
Typically, the block body begins with a special transaction called the coinbase
transaction (in cryptocurrencies like Bitcoin) or miner reward transaction,
which awards the block creator (miner or validator) any newly minted coins and
fees from included transactions. The rest of the body is the series of validated
transactions that this block is adding to the ledger.
> **Metadata**: Beyond header fields, some blockchains include additional
> metadata or auxiliary structures. For instance, a block may contain a block
> height (the sequence number of the block in the chain) or references to
> alternate chains (like "uncles" or "ommers" in Ethereum). However, these
> details vary by blockchain implementation.
The key aspect is that any metadata included is also summarized by the block's
hash (directly or indirectly), so that the block's identity reflects all of its
content.
Every block's header (especially the previous hash link and Merkle root) ensures
that blocks are tamper-evident. If anything in an earlier block's content were
altered, the change would propagate to that block's hash and invalidate all
subsequent links, breaking the chain's continuity unless massive recomputation
or a consensus override occurs.
## Transactions
Transactions are the fundamental operations that update the ledger state within
a blockchain. Each transaction represents a transfer of value or an execution of
some logic (in the case of smart contracts). Key points about transactions
include:
> **Transaction structure**: While specifics differ between blockchain
> platforms, a transaction generally includes fields such as the source
> (implicitly or explicitly indicated by a signature or an input reference),
> destination address(es), the amount of value to transfer, and other
> parameters.
In a UTXO-based system (Unspent Transaction Output model, used by Bitcoin), a
transaction has one or more inputs (references to unspent outputs from previous
transactions that the sender owns) and one or more outputs (newly created
outputs assigning value to new owners). Each output can be locked by a
cryptographic condition (e.g., "only unlockable by a signature from X's key").
In an account-based system (used by Ethereum and others), a transaction
explicitly contains the sender's address, the receiver's address, the transfer
amount, and a unique sequence number (nonce) for the sender's account. It may
also include a payload data field (for carrying arbitrary data or contract
commands) and a gas limit or fee information (especially in systems that charge
computational fees).
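The structural difference between the two models can be summarized as
illustrative TypeScript type sketches (field names here are descriptive, not any
particular chain's wire format):

```ts
// UTXO model (Bitcoin-style): value flows from prior outputs to new outputs.
interface TxInput {
  prevTxId: string;     // transaction that created the output being spent
  outputIndex: number;  // which output of that transaction
  unlockScript: string; // signature/script satisfying the output's condition
}
interface TxOutput {
  value: bigint;        // amount assigned to the new owner
  lockScript: string;   // condition for spending this output later
}
interface UtxoTransaction {
  inputs: TxInput[];
  outputs: TxOutput[];  // sum(inputs) - sum(outputs) = fee
}

// Account model (Ethereum-style): explicit sender, receiver, and sequence number.
interface AccountTransaction {
  from: string;
  to: string;
  value: bigint;
  nonce: number;        // per-sender sequence number, prevents replay
  data?: string;        // optional payload or contract call
  gasLimit?: bigint;    // cap on computational fees
  signature: string;
}
```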
> **Transaction validation**: When a node receives a new transaction, it will
> validate it before accepting it into the local transaction pool. Validation
> includes checking that the transaction is properly formed and that the sender
> has sufficient rights to spend the funds.
For UTXO transactions, this means verifying that each input refers to an
existing unspent output and that the sum of input values matches or exceeds the
sum of outputs (the difference being the transaction fee). For account-based
transactions, validation involves checking that the sender's account balance is
sufficient and that the nonce (sequence number) is correct (to prevent replay or
out-of-order execution).
In all cases, the transaction's digital signature(s) must be verified using the
associated public key(s) to ensure authenticity. If any part of validation
fails, the transaction is rejected by the node.
> **Transaction signing**: A transaction must be authorized by the owner of the
> funds or resources it is spending. This is achieved through digital
> signatures. The creator of a transaction uses their private key to sign the
> transaction's data (often the transaction hash or a structured message derived
> from the transaction fields). This signature is then attached to the
> transaction.
Nodes will use the corresponding public key (usually derivable from information
in the transaction, such as an included public key or an implicit address
reference) to verify the signature. A valid signature proves that the
transaction was approved by the holder of the private key associated with the
source address.
Modern blockchains use secure signature schemes (like ECDSA or EdDSA on elliptic
curves) to ensure that forging a signature without the private key is
computationally infeasible. Once signed and validated, transactions are
broadcast to the network for inclusion in a block.
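The signing flow can be demonstrated offline with ethers.js: the transaction is
signed locally, and anyone can later recover the sender's address from the raw
signed bytes (the recipient and fee values below are placeholders):

```ts
import { ethers } from "ethers";

const wallet = ethers.Wallet.createRandom(); // holder of the private key

// Sign a transaction entirely offline - no node or network required.
const rawSigned = await wallet.signTransaction({
  to: "0x0000000000000000000000000000000000000000", // placeholder recipient
  value: ethers.parseEther("0.01"),
  nonce: 0,
  gasLimit: 21000n,
  chainId: 1n,
  maxFeePerGas: ethers.parseUnits("20", "gwei"),
  maxPriorityFeePerGas: ethers.parseUnits("1", "gwei"),
});

// Verification: the sender address is recovered from the signature alone.
const parsed = ethers.Transaction.from(rawSigned);
console.log(parsed.from === wallet.address); // true
```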
## Wallets
A blockchain wallet is a software or hardware component that manages a user's
key pairs and facilitates the creation of transactions. Importantly, wallets
store keys, not coins; the actual assets remain recorded on the blockchain
ledger. Key points about wallets include:
> **Types of wallets**: There are several forms of wallets, each with different
> security and usability trade-offs. Software wallets are applications (on a
> desktop, web, or mobile) that store private keys on the user's device, often
> encrypted with a password. Hardware wallets are dedicated physical devices
> that securely store private keys in a protected hardware module and sign
> transactions internally (the private key never leaves the device).
Paper wallets are an offline approach, where the key information (often a seed
phrase or a QR code of the private key) is printed or written on paper and kept
physically secure. Wallets can also be categorized as hot (connected to the
internet, e.g., a mobile app wallet for daily use) or cold (completely offline,
e.g., hardware or paper wallets) depending on how they are stored and used.
> **Key storage and usage**: Wallets generate or import a private key (or a set
> of keys). Modern wallets often use a single master seed (a random secret
> usually represented as a 12-24 word mnemonic phrase) from which they derive
> multiple key pairs (this is the hierarchical deterministic wallet approach,
> allowing one backup to secure many addresses).
The wallet stores the private key(s) securely, typically encrypted with a user
passphrase if it's a software wallet, or in secure hardware in the case of
hardware wallets. When the user wants to send a transaction, the wallet software
will assemble the transaction data (recipient, amount, etc.), then use the
appropriate private key to produce a digital signature on that transaction.
The signed transaction is then broadcast to the blockchain network via a node.
Wallets also manage addresses (which are often derived from the public keys) and
will track the user's balances by monitoring the blockchain (either by running a
full node internally, or by querying external nodes).
In summary, wallets abstract the cryptographic key management for users,
ensuring private keys are safely stored and used to sign transactions when
needed.
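Hierarchical deterministic derivation, sketched with ethers.js: one mnemonic
(the single backup) deterministically yields any number of key pairs under the
standard BIP-44 path:

```ts
import { ethers } from "ethers";

// One random mnemonic backs the whole wallet.
const phrase = ethers.Wallet.createRandom().mnemonic!.phrase;

// Derive sibling accounts under the standard Ethereum path m/44'/60'/0'/0.
const account = ethers.HDNodeWallet.fromPhrase(phrase, undefined, "m/44'/60'/0'/0");
for (let i = 0; i < 3; i++) {
  console.log(account.deriveChild(i).address); // address #0, #1, #2
}
```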
## Nodes and networking
Nodes are the computers that participate in the blockchain network, maintaining
and updating the ledger. The network of nodes is peer-to-peer, meaning there is
no central server; instead, each node connects to other nodes, forming a
resilient mesh that shares data. There are different kinds of nodes and various
responsibilities they hold:
> **Node types and roles**: In a typical blockchain, a full node downloads and
> stores the entire blockchain (all blocks and transactions) and independently
> verifies all transactions and blocks against the consensus rules. Full nodes
> are the backbone of the network's security and decentralization, as they do
> not trust others for validation.
A light node (SPV node), by contrast, downloads only block headers or a subset
of data and relies on full nodes to provide proofs of transactions (using
techniques like Merkle proofs). Light nodes verify that a transaction is
included in the chain without storing everything, trading some trust and
completeness for efficiency.
Miner/Validator nodes are specialized full nodes that, in addition to validating
blocks, also create new blocks. In proof-of-work, these are miners that compete
to find valid blocks, and in proof-of-stake or BFT systems, these are validators
that are selected or rotate to add blocks.
Some networks may further differentiate roles (for example, in certain protocols
there are "archival nodes" that keep full history vs. pruning nodes, or
dedicated witness/masternodes for special tasks), but fundamentally all nodes
share the goal of maintaining consensus on the blockchain state.
> **Peer-to-peer communication**: Nodes communicate through a peer-to-peer (P2P)
> network protocol. When a node starts up, it will discover and connect to a set
> of peer nodes (using discovery protocols or a list of known bootstrap peers).
> Once connected, nodes exchange information in a gossip-like fashion: if a node
> finds out about a new transaction or block (either because it created it or
> received it from a peer), it will verify it and then forward it to its other
> peers.
In this way, new transactions propagate through the network, and newly
mined/validated blocks are quickly distributed to all nodes. The P2P network is
typically unstructured and robust: each node connects to a random sampling of
peers, ensuring redundancy. There is no single point of failure; even if many
nodes drop offline, others can still maintain the network.
Nodes continuously maintain connections, update each other about the latest
block (the tip of the chain), and request data (for example, if a node is
syncing from scratch, it will ask peers for blocks sequentially from the genesis
up to the latest block). This decentralized networking ensures that all copies
of the ledger held by honest nodes eventually converge to the same state.
## Transaction pool (mempool)
Before transactions are confirmed and added to a block, they reside in what's
commonly called the transaction pool or memory pool (mempool) of each node. The
mempool is a staging area for all pending transactions that have been propagated
to the network but not yet included in a block. Key aspects of the mempool
include:
> **Collection of pending transactions**: When a valid transaction is broadcast,
> each node that receives it and validates it will place it into its mempool (an
> in-memory list of unconfirmed transactions). These transactions remain queued
> in the mempool until a miner or validator picks them up to include in a new
> block.
Each node's mempool might not be exactly identical at all times (due to network
propagation delays or node-specific policies), but in general, popular
transactions will quickly be seen by most nodes' mempools.
> **Prioritization and fees**: Because blocks have a limited capacity (either a
> maximum size in bytes, or a gas limit in systems like Ethereum that limits
> computational work per block), not all pending transactions can be included
> immediately. Transactions in the mempool are typically prioritized by the fee
> they offer to the miner/validator.
For example, in Bitcoin, transactions offering higher satoshis per byte (fee
density) will be preferred, and in Ethereum, transactions with higher gas price
(or effective tip, under the EIP-1559 fee mechanism) take priority. Miners will
sort the mempool to select the highest paying transactions that fit in the next
block.
This market mechanism encourages users to attach sufficient fees during busy
periods to have their transactions confirmed faster. Low-fee transactions might
remain in the mempool for an extended time if the network is congested.
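Fee ordering can be sketched in a few lines of TypeScript: compute each pending
transaction's effective tip under EIP-1559, sort by it, and greedily fill the
block's gas limit:

```ts
interface PendingTx {
  hash: string;
  gasLimit: bigint;
  maxFeePerGas: bigint;
  maxPriorityFeePerGas: bigint;
}

// Effective miner tip per gas under EIP-1559: min(priorityFee, maxFee - baseFee).
function effectiveTip(tx: PendingTx, baseFee: bigint): bigint {
  const headroom = tx.maxFeePerGas - baseFee;
  return headroom < tx.maxPriorityFeePerGas ? headroom : tx.maxPriorityFeePerGas;
}

function selectForBlock(mempool: PendingTx[], baseFee: bigint, blockGasLimit: bigint): PendingTx[] {
  const chosen: PendingTx[] = [];
  let gasUsed = 0n;
  const sorted = mempool
    .filter((tx) => tx.maxFeePerGas >= baseFee) // ineligible txs stay in the pool
    .sort((a, b) => (effectiveTip(b, baseFee) > effectiveTip(a, baseFee) ? 1 : -1));
  for (const tx of sorted) {
    if (gasUsed + tx.gasLimit <= blockGasLimit) {
      chosen.push(tx);
      gasUsed += tx.gasLimit;
    }
  }
  return chosen;
}
```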
> **Mempool management**: Nodes typically impose limits on their mempool size
> (in memory), and may drop old or low-fee transactions if the pool is full.
> There are also rules to prevent spam, such as not relaying transactions with
> absurdly low fees or invalid transactions that could never be mined.
Some networks support transaction replacement policies (for instance,
"Replace-By-Fee" in Bitcoin allows a transaction in the mempool to be replaced
with a new version that pays a higher fee). In general, the mempool ensures that
the network has a reservoir of ready-to-include transactions and that no valid
transaction is forgotten.
Once a transaction is included in a new block and that block is accepted, nodes
will remove that transaction from their mempools (since it's now confirmed
on-chain). The mempool serves as the buffer and waiting room for the transaction
throughput of the network.
## Consensus mechanisms
Consensus mechanisms are protocols that enable distributed nodes to agree on the
contents of the blockchain (which block comes next) in the presence of potential
faults or malicious actors. Different blockchains use different consensus
algorithms, each with its own trade-offs:
> **Proof of work (PoW)**: PoW is a consensus mechanism where miners compete to
> solve a cryptographic puzzle. The puzzle involves finding a hash value for the
> next block that is below a target threshold (the target is adjusted
> periodically to control the block production rate). Miners achieve this by
> varying the nonce in the block header and hashing repeatedly until a valid
> hash is found.
The first miner to find a valid block broadcasts it, and the network verifies
it. PoW makes it extremely costly to rewrite history, because an attacker would
need to redo the cumulative work (hashing computations) of the chain. It thus
leverages computational difficulty to secure the chain.
PoW is robust and fully decentralized (anyone with hardware can attempt to
mine), but it consumes significant energy and has relatively long times to
finality (several blocks must typically accumulate on top of a block before it
is considered settled). Bitcoin pioneered PoW, and Ethereum used PoW until it transitioned to
PoS; many public blockchains still use PoW for its proven security.
> **Proof of stake (PoS)**: PoS is a class of consensus algorithms where the
> ability to create new blocks and secure the network is based on ownership of
> the blockchain's native asset (the stake) rather than computational work.
> Validators in a PoS system must lock up a certain amount of cryptocurrency as
> stake.
The protocol then pseudo-randomly selects a validator (or a committee of
validators) to propose the next block, with probability often weighted by the
amount of stake. Other validators will then validate the proposed block, and
depending on the protocol, may vote or sign to finalize it.
Honest behavior is encouraged by economic incentives: a validator who produces a
fraudulent block or contradicts consensus can be penalized, typically by
slashing (losing a portion of their staked funds). PoS can significantly reduce
the resource usage compared to PoW and often achieves faster consensus (for
instance, by finalizing blocks in a few network rounds).
There are many PoS variants: some operate in rounds with designated leaders,
others use random beacon mechanisms; examples include Casper-style finality,
Ouroboros, Tendermint, and more. The security of PoS relies on the assumption
that a majority (by stake weight) of validators act honestly, and that the cost
of acquiring a majority stake is prohibitive.
> **Byzantine fault tolerant protocols (e.g., PBFT)**: In permissioned or
> consortium blockchains where participants are known, more traditional
> Byzantine Fault Tolerance (BFT) algorithms can be used for consensus.
> Practical Byzantine Fault Tolerance (PBFT) is a classic algorithm that allows
> a network to reach agreement even if some fraction (typically up to 1/3) of
> nodes are faulty or malicious.
In a PBFT-like consensus, a block (or transaction batch) is proposed by a leader
node, and then a series of voting rounds occur: other validator nodes will vote
to accept the block in a prepare phase and a commit phase. If a sufficient
supermajority (usually ≥2/3 of nodes) agree on the block, it is finalized and
becomes part of the ledger.
BFT protocols provide immediate finality: once a block is agreed on, it will
not be reversed as long as the assumptions hold. They tend to have higher
communication overhead (each node often needs to communicate with all others)
and thus are used in networks with dozens of validators rather than thousands.
Variants of BFT are used in various contexts: for example, Istanbul BFT (IBFT
2.0) and QBFT are used in some Ethereum-based consortium chains, and Tendermint
BFT is used in Cosmos. In addition to PBFT variants, some blockchains use
simplified consensus for known validators (like proof-of-authority schemes, or
Raft/Kafka-based ordering in Fabric) which are not fully Byzantine fault
tolerant but can be suitable when a higher level of trust exists among
participants.
All these consensus mechanisms aim to ensure that all honest nodes eventually
agree on the same sequence of blocks, preserving a single authoritative ledger.
They deter double-spending and conflicting histories through different means
(economic cost, computational cost, or reliance on trust among a group), but the
end result is a tamper-resistant chain agreed upon by the network.
## Forks and protocol upgrades
In blockchain terminology, a "fork" can refer to a divergence in the chain's
history or an update to the rules governing the system. Here we discuss protocol
forks (rule changes) and their implications:
> **Hard fork**: A hard fork is a change to the blockchain protocol that is not
> backward compatible. This means that blocks created under the new rules are
> considered invalid by nodes running the old software. To avoid a permanent
> split, all participants must upgrade their software to follow the new rules.
If there is disagreement or incomplete upgrade, a blockchain can split into two
separate chains at the fork point: one following the old rules and one following
the new rules. Each chain will continue independently, and they will not
reconverge since their consensus rules differ.
Hard forks are used for major upgrades or changes (for example, altering block
size limits, changing consensus rules, or repairing a severe security flaw), and
require coordination. After a hard fork, nodes that have not upgraded will
either stop at the fork point or continue on an incompatible chain.
> **Soft fork**: A soft fork is a protocol change that is backward compatible
> with older nodes, typically achieved by making the new rules a strict subset
> of the old rules. In a soft fork, blocks that follow the new rules also appear
> valid to old nodes (because they don't violate the old rules), though the
> reverse is not necessarily true (old nodes might accept some transactions that
> new rules would reject).
Soft forks usually rely on a majority of miners/validators enforcing the new
rules; once the majority does so, the network as a whole will reject any blocks
that don't conform to the new rules, effectively bringing all participants onto
a single upgraded chain even if some nodes haven't updated their software.
Because old nodes still accept the new blocks, a chain split is less likely (as
long as the majority enforces the new rules). Soft forks have been used to add
features or restrictions without splitting the network (for example, Bitcoin's
Segregated Witness was deployed as a soft fork). They require careful
coordination to succeed, as an unsuccessful soft fork (without sufficient
support) could lead to temporary confusion or orphaned blocks.
In summary, a hard fork mandates an update by all and can result in a permanent
chain split if consensus isn't reached among the community, whereas a soft fork
is an incremental change that can be adopted more gradually and usually
maintains one chain (given sufficient support). Both mechanisms are ways a
blockchain protocol can evolve over time.
## Blockchain network models
Blockchain systems can be categorized by how they are governed and who is
allowed to participate in the network:
> **Public blockchains**: These are open, permissionless networks where anyone
> can join as a node, submit transactions, and participate in the consensus
> process (e.g., mining or validating). Public blockchains prioritize
> decentralization and trustlessness – they assume no central authority, and
> consensus mechanisms (PoW, PoS, etc.) are used to secure the network against
> Sybil attacks and malicious actors.
All transaction data on public chains is generally visible to any observer
(though participants are usually pseudonymous, identified only by their
addresses or public keys). Public networks often have a native cryptocurrency
used as an incentive for participants and as a way to prevent spam (transaction
fees paid in the native coin).
> **Private blockchains**: A private blockchain is a closed network where write
> permissions (and sometimes read permissions) are restricted to one
> organization or a specific group of participants. These are permissioned
> ledgers often used internally within a company or organization.
In a private blockchain, nodes are known and controlled by the organization, so
the consensus mechanism can be simpler (since there is a higher degree of trust
internally; some private chains even use a single authority node or a basic
majority vote to confirm blocks).
Private chains trade decentralization for speed and control – they can achieve
high transaction throughput and can enforce strict privacy on the data (since
access is limited). However, they rely on the trustworthiness of the controlling
entity and do not have the censorship resistance or open participation features
of public chains.
> **Consortium blockchains**: Consortium chains (also known as federated
> blockchains) are a hybrid model where the network is permissioned, but instead
> of a single organization, a group of independent organizations collaboratively
> maintain the blockchain. Only approved participants (the consortium members)
> run nodes and validate blocks.
This model is common in enterprise scenarios where multiple organizations (for
example, a group of banks or supply chain partners) want to share a distributed
ledger without any single party having sole control. Consensus in consortium
chains might use Byzantine fault tolerant algorithms or rotating leadership,
since participants are known entities (e.g., a group of validators where 2/3
agreement finalizes a block).
Consortium blockchains strike a balance between decentralization and controlled
access: they are more decentralized than a purely private single-owner chain,
but more controlled than a completely public network. Data can be kept private
to the consortium members, and performance can be optimized for the relatively
smaller number of nodes.
Each model has its design considerations: public blockchains for trust-minimized
environments with open participation, private blockchains for fully internal use
with trusted nodes, and consortium blockchains for collaborative applications
among multiple organizations. Technically, the underlying blockchain data
structures may be similar; the differences lie in how nodes are managed, how
consensus is achieved, and what trust assumptions are made.
## Data immutability and append-only design
One of the defining features of blockchain technology is the immutability of the
ledger. The data structure is effectively append-only: new transactions can be
added (through new blocks), but once a block is confirmed and part of the chain,
its contents cannot be altered or deleted without breaking the chain's
consensus. This immutability is achieved through cryptographic linking and the
consensus process:
> **Hash linking for integrity**: As described in the block structure, each
> block header contains the hash of the previous block. This creates a chain of
> hashes from the latest block back to the first block (genesis). If an
> adversary attempted to change a transaction in an old block, that block's hash
> would change.
Consequently, the next block (which contains the previous hash) would no longer
be consistent, and every subsequent block's hash would be invalid as well. The
only way to make such a change "stick" would be to recompute all the subsequent
blocks' hashes and, in a proof-of-work system, also redo all the computational
work (and catch up and surpass the current chain length to convince others).
In a well-secured blockchain, this is computationally infeasible without
controlling the majority of the network's hashing power or validation power.
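To see why tampering is detectable, consider the following minimal Go sketch. It is illustrative only (real headers also carry timestamps, Merkle roots, and difficulty targets), and the `Block` and `verifyChain` names are invented for this example:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Block is a drastically simplified header: just the previous block's
// hash and some payload standing in for the transaction set.
type Block struct {
	PrevHash []byte
	Data     []byte
}

// Hash commits to both the payload and the predecessor's hash,
// which is what links the chain together.
func (b Block) Hash() []byte {
	h := sha256.Sum256(append(b.PrevHash, b.Data...))
	return h[:]
}

// verifyChain checks that every block commits to the hash of its
// predecessor; altering any historical block breaks a link.
func verifyChain(chain []Block) bool {
	for i := 1; i < len(chain); i++ {
		if !bytes.Equal(chain[i].PrevHash, chain[i-1].Hash()) {
			return false
		}
	}
	return true
}

func main() {
	genesis := Block{Data: []byte("genesis")}
	b1 := Block{PrevHash: genesis.Hash(), Data: []byte("tx set 1")}
	chain := []Block{genesis, b1}
	fmt.Println("chain valid:", verifyChain(chain)) // true

	chain[0].Data = []byte("tampered")               // rewrite history...
	fmt.Println("after tampering:", verifyChain(chain)) // ...link breaks: false
}
```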
> **Consensus and finality**: Consensus algorithms reinforce immutability by
> making it extremely difficult to replace a confirmed block with an
> alternative. In PoW, a malicious chain reorganization requires creating an
> alternate chain with more total work – an almost impossible task if honest
> miners control most of the hash power.
In PoS or BFT systems, once a block is finalized by a supermajority, protocol
rules will not allow reverting it without significant collusion or violation of
assumptions (and in many PoS protocols, such collusion would result in the
offenders' stakes being slashed).
Thus, after a certain point (a number of confirmations or a finality
checkpoint), a block and its transactions can be considered permanent.
Blockchain's append-only nature means that to correct errors or compensate for
fraud, new transactions must be issued (for example, a reversing transaction)
rather than rewriting history.
> **Auditability**: The immutable ledger provides a verifiable history of all
> transactions. Anyone can audit the chain from the beginning and be confident
> that what's recorded is exactly what occurred, since any tampering would be
> evident in the broken hash links or invalid signatures.
In scenarios where a blockchain is permissioned, immutability still holds within
the trust assumptions of that network (the operators agree via consensus not to
rewrite history arbitrarily, and the software enforces that by cryptographic
checks).
Some blockchains add checkpoints or use cryptographic commitments to further
cement history (for example, periodically snapshotting or notarizing the
blockchain state elsewhere), but the core operation remains that the ledger
grows by appending blocks and previous records remain indelible.
Immutability can be thought of as a spectrum depending on the threat model
(e.g., a private chain controlled by one entity could technically alter history
if that entity chose to, but cryptographic proofs would reveal the change to any
observers with the original data). In public decentralized chains, immutability
is one of the strongest guarantees, upheld by economic and computational
security measures.
## Merkle trees and block integrity
Merkle trees are a fundamental data structure used in blockchain to ensure the
integrity of large sets of data (such as all transactions in a block) in an
efficient manner:
> **Merkle tree structure**: A Merkle tree is a binary tree of hashes built from
> the bottom up. For a given block, each transaction is hashed (typically using
> the same hash function as the block hash, e.g., SHA-256) to produce a leaf
> node. These transaction hashes form the leaves of the Merkle tree.
Pairs of hashes are then concatenated and hashed together to form parent nodes.
This process repeats layer by layer until a single hash remains at the top: this
is the Merkle root. The Merkle root, as mentioned in the block structure
section, is placed in the block header.
The tree structure means that the Merkle root is a cryptographic summary of all
transactions in the block. If any single transaction were different, its leaf
hash would change, which would change its parent hash, and so on up to the root,
yielding a completely different Merkle root. Thus, the Merkle root in the header
effectively seals the content of the block.
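As a concrete illustration, the sketch below builds a Merkle root bottom-up in Go, assuming SHA-256 and the duplicate-last-node convention Bitcoin uses when a level has an odd number of hashes (other chains handle odd counts differently); all names are invented for the example:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashPair concatenates two child hashes and hashes the result,
// mirroring how parent nodes are formed in a Merkle tree.
func hashPair(left, right []byte) []byte {
	h := sha256.Sum256(append(left, right...))
	return h[:]
}

// merkleRoot folds the leaf hashes upward level by level until a
// single root remains. Odd levels duplicate their last hash.
func merkleRoot(leaves [][]byte) []byte {
	if len(leaves) == 0 {
		return nil
	}
	level := leaves
	for len(level) > 1 {
		if len(level)%2 == 1 {
			level = append(level, level[len(level)-1])
		}
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			next = append(next, hashPair(level[i], level[i+1]))
		}
		level = next
	}
	return level[0]
}

func main() {
	var leaves [][]byte
	for _, tx := range []string{"tx1", "tx2", "tx3"} {
		h := sha256.Sum256([]byte(tx))
		leaves = append(leaves, h[:])
	}
	fmt.Printf("merkle root: %x\n", merkleRoot(leaves))
}
```

Changing a single byte of any transaction changes its leaf hash and, through every parent, the root, which is exactly the sealing property described above.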
> **Efficient verification (Merkle proofs)**: Merkle trees enable a feature
> called Merkle proofs or Merkle paths. Suppose a node (like a light client)
> wants to verify that a particular transaction is included in a block without
> downloading the entire block.
The node can obtain the transaction itself and a set of sibling hashes that form
the path from that transaction's leaf up to the known Merkle root. By
iteratively hashing the transaction with its sibling hash, then hashing that
result with the next sibling, and so forth, the node can reproduce the Merkle
root.
If the computed Merkle root matches the one in the block header (which the light
client trusts as part of the known chain), then the transaction's inclusion is
verified. This way, a client does not need the full list of transactions, only a
small logarithmic number of hashes relative to the total number of transactions
in the block.
Merkle proofs are crucial for scalability features like Simplified Payment
Verification (SPV) in Bitcoin, where lightweight wallets verify transactions by
relying on Merkle proofs and block headers instead of all transaction data.
Additionally, Merkle trees allow quick comparisons of entire datasets: if two
Merkle roots differ, the underlying data must differ, and by comparing branches
one can pinpoint which transaction(s) diverge. Overall, Merkle trees contribute
to blockchain integrity by making verification of contents both
cryptographically secure and efficient.
## Chain reorganization and finality
In a distributed system where multiple blocks can be proposed (especially in PoW
or certain PoS chains), temporary forks in the chain may occur. A chain
reorganization (reorg) is the process of the network abandoning one branch of
the chain in favor of a longer or more "correct" branch. Additionally, the
concept of finality relates to how confident participants can be that a given
block will not be reversed.
> **Chain reorganization**: A reorg typically happens when two miners find a
> block at nearly the same time, causing a short-term divergence (two competing
> "tips"). Nodes might temporarily disagree on the latest block. When the next
> block is found, if it attaches to one of these branches, that branch becomes
> longer; the network will adopt the longer chain as the canonical one (this is
> part of the longest-chain rule in PoW).
The transactions in the orphaned block (the block that lost the race) are not
lost – if they weren't included in the winning block, they go back to the
mempool to be retried in a later block. Reorgs in normal operation are usually
only one or two blocks deep and resolve quickly.
Longer reorgs can happen in the event of major network delays or attacks (for
example, if a malicious actor had significant mining power and privately mined a
hidden chain and then released it, overtaking the public chain). Reorganizations
ensure that the network eventually converges on a single chain, but they imply
that blockchain confirmations are not absolutely final until a block has several
blocks on top of it.
During a reorg, no protocol rules are violated – it's a natural outcome of
decentralized block production and the rule of adopting the chain with the most
work (or highest stake weight, in some PoS cases).
> **Finality**: Finality is the guarantee that a block (and the transactions in
> it) will not be reverted or dropped from the chain. Different consensus
> mechanisms provide different notions of finality. In PoW systems, finality is
> probabilistic – a block becomes more secure the more blocks are mined on top
> of it.
For example, after 6 confirmations in Bitcoin, the probability of a block being
reversed is vanishingly small (because an attacker would need to redo the
proof-of-work faster than the rest of the network). However, it's never
absolute; there's always a theoretical chance if someone controls enough hashing
power.
In PoS systems, especially those with explicit voting and checkpoints, finality
can be deterministic (or economic finality). Many modern PoS protocols have
validators vote to finalize checkpoints; once a block or epoch is finalized
(e.g., by 2/3 of validators voting for it), reverting it would require an
extremely large collusion and typically results in severe penalties (slashing of
staked funds).
Thus, users can consider finalized blocks practically immutable. In pure BFT
consensus used in private/consortium chains, finality is immediate – once
validators reach agreement in a round and commit a block, that block is
irrevocably part of the ledger (assuming less than the fault-tolerance threshold
of nodes are malicious).
In summary, finality means that after a certain point in time or number of
blocks, participants can trust that the ledger's history will remain fixed.
Blockchain designs try to minimize the uncertainty window; for instance, by
aiming for fast block times and quick finality (as in many PoS networks) or by
advising waiting for several confirmations (as in PoW networks) to achieve
practical finality.
## Smart contracts
Smart contracts are programs that run on the blockchain network, enabling
automated and complex transactions beyond simple value transfers. They are an
integral part of blockchain architecture on platforms that support them (like
Ethereum, Hyperledger Fabric, and others), and they operate as follows:
> **Embedded code execution**: A smart contract is essentially code stored on
> the blockchain that executes in response to transactions. When a transaction
> invokes a smart contract (for example, calling a function of a contract with
> certain parameters), every node in the network will execute that code as part
> of block processing.
All nodes must arrive at the same result (since the code is deterministic),
which then is used to update the ledger's state. This means the blockchain not
only stores data but also enforces the logic defined by the contracts. In
effect, the ledger becomes a state machine that is advanced by executing
contract code included in transactions, with every node verifying the outcome.
> **Deterministic, sandboxed environment**: Smart contract execution happens in a
> controlled environment (such as the Ethereum Virtual Machine for Ethereum's
> contracts, or Docker containers for Fabric's chaincode). The code cannot
> perform disallowed or non-deterministic operations (for example, it generally
> cannot make external network requests or generate truly random numbers without
> consensus) because all nodes need to replicate the execution exactly.
Instead, contracts are limited to the data on the blockchain and the input
provided. The environment ensures that the execution is deterministic and
sandboxed from the node's host system. In public blockchain settings, gas or
execution fees are used to meter the computation and storage usage of contracts;
the sender of the transaction must pay for the operations their contract call
performs.
This not only prevents abuse (infinite loops or excessive computation) but also
ties the execution cost to economic incentives.
> **State and immutability of contracts**: Once deployed, smart contract code is
> usually immutable (the code becomes part of the blockchain record). Contracts
> often live at a specific address on the blockchain, and they maintain their
> own persistent state (for example, a token contract keeps track of balances
> mapping addresses to numbers).
Every time the contract is invoked and modifies its state, those changes are
recorded on the blockchain as part of the transaction results. There are
patterns to introduce upgradability (such as proxy contracts that can delegate
calls to a new implementation), but these must be designed intentionally;
otherwise, a bug in a contract is permanent.
The immutability of the code and the transparency of its logic mean that anyone
can inspect how the contract will behave. The contract's state is also
transparent (though it may be encoded), and all changes to it are a matter of
public record on the ledger. This combination ensures that the rules set by the
contract are enforced exactly and predictably, which is crucial in trustless
environments.
> **Trustless automation**: Smart contracts remove the need for a central or
> trusted party to execute agreements or business logic. Instead of relying on a
> server or an authority, the blockchain network itself enforces the execution
> of the contract code. For example, a simple smart contract for an escrow will
> automatically release funds to the seller when conditions are met, without
> needing a bank or escrow agent.
The participants trust the code and the consensus of the network rather than
each other. However, this also means that errors or exploits in the code can
have serious consequences, since there is no easy way to intervene once the
contract is deployed (short of all nodes agreeing to a fork or upgrade, which is
rare and controversial).
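As a Fabric-flavoured sketch of this idea (the `Escrow` record and `ConfirmDelivery` function are invented for illustration, assuming the usual chaincode imports, not a prescribed pattern), the release condition lives in code rather than with an intermediary:

```go
// Escrow is an illustrative ledger record: funds are marked released
// only when the buyer recorded in the escrow confirms delivery.
type Escrow struct {
	ID       string `json:"id"`
	Buyer    string `json:"buyer"`
	Seller   string `json:"seller"`
	Amount   int    `json:"amount"`
	Released bool   `json:"released"`
}

func (s *SmartContract) ConfirmDelivery(ctx contractapi.TransactionContextInterface, id string) error {
	data, err := ctx.GetStub().GetState(id)
	if err != nil {
		return err
	}
	if data == nil {
		return fmt.Errorf("escrow %s not found", id)
	}
	var e Escrow
	if err := json.Unmarshal(data, &e); err != nil {
		return err
	}
	caller, err := ctx.GetClientIdentity().GetID()
	if err != nil {
		return err
	}
	// The condition is enforced by code: only the recorded buyer can
	// trigger the release; no escrow agent is involved.
	if caller != e.Buyer {
		return fmt.Errorf("only the buyer may confirm delivery")
	}
	e.Released = true
	updated, err := json.Marshal(e)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(id, updated)
}
```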
For a technical team, it's important to follow secure coding practices and
thorough testing when developing smart contracts. Nonetheless, the ability to
encode arbitrary rules that will execute automatically and consistently across
the network is a powerful feature that turns the blockchain into a platform for
decentralized applications and protocols, not just a ledger of coin transfers.
## Enterprise blockchain platforms
Not all blockchain frameworks follow the exact same design as public
cryptocurrencies. Two notable enterprise-focused blockchain platforms are
Hyperledger Fabric and Hyperledger Besu. These platforms incorporate the core
ideas of blockchain (distributed ledger, cryptography, consensus) but make
different architectural choices to suit enterprise needs.
### Hyperledger fabric
Hyperledger Fabric is a permissioned blockchain framework under the Linux
Foundation's Hyperledger project. It is designed for enterprise use, with a
focus on modularity and flexibility.
> **Architecture and roles**: Fabric's architecture divides the transaction
> workflow into distinct phases and assigns different roles to different node
> types. It introduces the concept of peers (nodes that maintain the ledger and
> can execute smart contract code) and orderers (nodes that provide ordering
> service for transactions).
When a transaction proposal is initiated (for example, a user invokes a
chaincode function), it is first sent to designated endorsing peers. These peers
simulate the transaction by executing the chaincode with the provided input, but
they do not update the ledger at this stage. Instead, each endorsing peer
returns a cryptographic endorsement (essentially the proposed transaction's
output and a signature) to the client.
The client then collects these endorsements and sends the transaction to the
ordering service. The ordering service (which can consist of multiple ordering
nodes) is responsible for establishing a total order of transactions across the
network. It takes endorsed transactions from clients and packages them into
blocks in a sequential order.
The ordered block is then disseminated to all peers. Finally, each peer will
validate the block: it checks that the transactions in the block have the
required endorsements (per the network's endorsement policy) and that there are
no conflicts, such as double-spending or version conflicts in the state.
Transactions that fail validation are marked invalid in the block.
After validation, each peer appends the block to its copy of the ledger and
updates the world state database with the results of the valid transactions.
This design (execute first on endorsers, then order, then validate/commit on all
peers) improves performance and scalability and allows for certain privacy
features, such as channels and private data collections.
file: ./content/docs/knowledge-bank/chaincode.mdx
meta: {
"title": "Chaincode development",
"description": "A complete guide to writing, deploying, and managing chaincode for Hyperledger Fabric networks"
}
## Introduction to chaincode
Chaincode is the smart contract implementation in Hyperledger Fabric. It defines
the business logic that runs on a Fabric network and is responsible for reading
and writing data to the distributed ledger.
Unlike Ethereum-based smart contracts, which run on a global public chain,
chaincode runs in a permissioned network and is executed by selected endorsing
peers. It is deployed in isolated Docker containers and communicates with the
Fabric peer nodes through well-defined interfaces.
Chaincode allows organizations in a consortium to define rules for asset
exchange, access control, regulatory checks, and other workflows using trusted
code. It is executed deterministically and only changes the ledger when
transaction endorsement policies are met.
## Language support
Chaincode can be written in several programming languages, each offering the
same functionality through different SDKs.
Currently supported languages include:
* Go
* JavaScript (Node.js)
* Java
The Go language is most commonly used for production-grade chaincode due to its
performance and concurrency features. Node.js is preferred for rapid prototyping
or when integrating with existing JavaScript-based applications. Java is used in
regulated environments where strict typing and object modeling are beneficial.
## Chaincode lifecycle overview
The chaincode lifecycle defines the steps required to install, approve, and
commit chaincode to a Fabric channel.
The lifecycle process is decentralized and allows each organization to
participate in chaincode governance. The high-level steps are:
* Package the chaincode
* Install the chaincode on peers
* Approve the chaincode definition for the channel
* Commit the chaincode definition to the channel
* Initialize the chaincode (optional)
Each of these steps is executed using the peer CLI or Fabric SDKs. All actions
are recorded on the blockchain and can be audited by members of the consortium.
## Project structure
A chaincode project typically consists of:
* Source code files (`.go`, `.js`, or `.java`)
* A `go.mod` file (for Go chaincode) or `package.json` (for Node.js)
* Dependency modules or imports
* A defined `Init` or `InitLedger` function
* Business logic functions for create, read, update, and delete operations
* Utility and helper functions for serialization and validation
For Go-based chaincode, the standard layout includes a `main.go` or
`chaincode.go` entry point. This registers the chaincode and invokes the `shim`
interface.
Node.js chaincode has an entry file like `index.js` or `chaincode.js`, which
sets up the contract classes using the Fabric contract API.
## Key interfaces
In Go, chaincode implements the `Chaincode` interface provided by the Fabric
shim package. This interface includes two methods:
* `Init` for initialization when the chaincode is instantiated
* `Invoke` for handling all other function calls
In newer chaincode implementations using the contract API, developers define
contract classes with named transaction functions. This approach supports
modularity and multiple logical contracts in one chaincode.
```go
type SmartContract struct {
	contractapi.Contract
}

func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
	// initialization logic
	return nil
}

func (s *SmartContract) CreateAsset(ctx contractapi.TransactionContextInterface, id string, value string) error {
	// asset creation logic
	return nil
}
```
This structure improves clarity, testing, and integration with Fabric’s access
control and endorsement systems.
## Writing chaincode functions
Chaincode functions define how a Fabric network processes input data, verifies
conditions, and updates the ledger state.
Each function receives a transaction context, which provides access to APIs for
reading and writing the world state, retrieving transaction metadata, and
verifying identities.
A typical chaincode function follows this flow:
* Read input parameters using the function signature
* Perform validation on inputs
* Query or modify the world state using key-value operations
* Return success or error based on logic outcomes
The function must be deterministic and must not depend on external state, time,
or randomness. All peers must reach the same result independently for
endorsement to succeed.
```go
func (s *SmartContract) CreateItem(ctx contractapi.TransactionContextInterface, id string, name string) error {
exists, err := s.ItemExists(ctx, id)
if err != nil {
return err
}
if exists {
return fmt.Errorf("item %s already exists", id)
}
item := Item{
ID: id,
Name: name,
}
itemJSON, err := json.Marshal(item)
if err != nil {
return err
}
return ctx.GetStub().PutState(id, itemJSON)
}
```
In this example, the function checks for duplicates, constructs a new item,
marshals it into JSON, and writes it to the ledger.
## Reading and writing world state
Fabric maintains a key-value database known as the world state. Each chaincode
function can read and write to this store using the `stub` interface.
Common operations include:
* `GetState(key)` to retrieve a value by key
* `PutState(key, value)` to write or update a key-value pair
* `DelState(key)` to delete a key
* `GetStateByRange(start, end)` to iterate over a key range
* `GetQueryResult(query)` for CouchDB rich queries
Data is stored as byte arrays and usually encoded in JSON for compatibility.
Developers should define clear entity structures and handle serialization
explicitly.
```go
itemJSON, err := ctx.GetStub().GetState("item1")
if err != nil || itemJSON == nil {
return fmt.Errorf("item not found")
}
var item Item
err = json.Unmarshal(itemJSON, &item)
if err != nil {
return err
}
```
All writes to the ledger are recorded in the transaction log, and the world
state reflects the latest version of each key after transaction validation.
## Using client identity and attributes
Fabric supports identity-aware chaincode execution. The client identity object
provides access to the invoker’s certificate, MSP ID, and attributes.
This enables use cases such as:
* Role-based access control
* Certificate-based ownership validation
* Organization-specific business logic
To access the client identity:
* Use `ctx.GetClientIdentity()` in Go
* Use `ctx.clientIdentity` in Node.js
Examples of identity operations:
* `GetID()` returns the subject of the client certificate
* `GetMSPID()` returns the organization MSP
* `GetAttributeValue(name)` retrieves an attribute set in the certificate
```go
cid, err := ctx.GetClientIdentity().GetID()
if err != nil {
return err
}
mspid, _ := ctx.GetClientIdentity().GetMSPID()
if mspid != "Org1MSP" {
return fmt.Errorf("caller %s is from an unauthorized organization", cid)
}
```
These identity checks can be combined with endorsement policies to enforce
multi-organization consensus.
## Error handling and validation
Chaincode must return errors for invalid transactions. Errors prevent the
proposal from being endorsed or committed and maintain data integrity.
Typical validation checks include:
* Verifying that required input parameters are present
* Ensuring keys do not already exist before creating entities
* Confirming keys exist before reading or updating
* Validating that caller has permission to modify a record
Use structured error messages and proper formatting. Avoid panics or uncaught
exceptions. All error messages should be deterministic and consistent across all
endorsing peers.
The best practice is to define helper functions for common checks and reuse them
across transaction handlers.
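A sketch of such a helper and how a transaction function might reuse it (the `requireNonEmpty` and `UpdateItemName` names are illustrative):

```go
// requireNonEmpty centralizes a common input check and keeps error
// messages deterministic across all endorsing peers.
func requireNonEmpty(name, value string) error {
	if value == "" {
		return fmt.Errorf("parameter %q must not be empty", name)
	}
	return nil
}

func (s *SmartContract) UpdateItemName(ctx contractapi.TransactionContextInterface, id string, name string) error {
	if err := requireNonEmpty("id", id); err != nil {
		return err
	}
	if err := requireNonEmpty("name", name); err != nil {
		return err
	}
	itemJSON, err := ctx.GetStub().GetState(id)
	if err != nil {
		return err
	}
	if itemJSON == nil {
		return fmt.Errorf("item %s does not exist", id)
	}
	var item Item
	if err := json.Unmarshal(itemJSON, &item); err != nil {
		return err
	}
	item.Name = name
	updated, err := json.Marshal(item)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(id, updated)
}
```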
## Emitting chaincode events
Chaincode can emit events that are captured by client applications or monitoring
tools.
Events are useful for triggering off-chain workflows, synchronizing UI
components, or indexing ledger activity for analytics.
An event is emitted using the `SetEvent` method on the chaincode stub. It
includes:
* A name string that identifies the event type
* A payload in bytes, typically a serialized JSON object
```go
eventPayload := map[string]string{"itemId": "123", "status": "created"}
eventJSON, _ := json.Marshal(eventPayload)
ctx.GetStub().SetEvent("ItemCreated", eventJSON)
```
Applications can subscribe to events using the Fabric SDK and filter by event
name. Events are recorded in the block that commits the transaction and are part
of the transaction receipt.
Events do not modify ledger state and should not be used as the sole source of
truth. Their purpose is to notify off-chain systems, not to enforce logic.
## Chaincode initialization
Chaincode may include an optional initialization function that is invoked once
when the chaincode is committed to a channel.
This function can perform setup tasks such as:
* Seeding initial records
* Setting ownership
* Registering system-level settings
Initialization must be explicitly requested during chaincode invocation using
the `--isInit` flag or its SDK equivalent.
Example initialization function:
```go
func (s *SmartContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
items := []Item{
{ID: "item1", Name: "Pen"},
{ID: "item2", Name: "Notebook"},
}
for _, item := range items {
itemJSON, _ := json.Marshal(item)
ctx.GetStub().PutState(item.ID, itemJSON)
}
return nil
}
```
This method is called only once and is not part of regular transaction flow. If
initialization is skipped or fails, the chaincode remains inactive.
## Endorsement policies
An endorsement policy defines which peers must approve a transaction before it
can be committed to the ledger.
Chaincode logic enforces application-level rules, while endorsement policies
enforce organizational-level trust and validation.
Policies are configured during the chaincode definition phase and use logical
conditions like:
* `OR('Org1MSP.peer','Org2MSP.peer')`
* `AND('Org1MSP.peer','Org2MSP.peer')`
* Custom signature policies with nested conditions
These rules determine which endorsing peers must sign off on a proposal. If the
required number of signatures is not collected, the transaction fails
endorsement.
The endorsement policy ensures that no single organization can unilaterally
update the ledger. It also enables multi-party workflows where different
participants must validate the action.
## Working with private data
Hyperledger Fabric allows chaincode to read and write private data collections.
Private data is not stored on the public ledger. Instead, it is distributed only
to authorized peers and stored in a separate private database.
This feature supports use cases where sensitive information must be hidden from
certain members of the network while still being verifiable.
Key methods for private data:
* `GetPrivateData(collection, key)`
* `PutPrivateData(collection, key, value)`
* `DelPrivateData(collection, key)`
```go
order := Order{ID: "order1", Total: 100}
orderJSON, _ := json.Marshal(order)
ctx.GetStub().PutPrivateData("OrderCollection", "order1", orderJSON)
```
Collections are defined in the chaincode configuration file
`collections-config.json` and include:
* Collection name
* Member organizations
* Endorsement policy
* Required and maximum peer counts
Private data can also be used with hashed reads and transient data inputs,
enabling zero-knowledge-style logic and selective disclosure.
Access to private data is enforced at the peer level. Unauthorized peers do not
receive the data and cannot query it through chaincode.
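Because regular proposal arguments end up in the transaction that every channel member can read, sensitive inputs are normally passed through the transient map instead. A minimal sketch (the `CreatePrivateOrder` function and `order` key are illustrative):

```go
func (s *SmartContract) CreatePrivateOrder(ctx contractapi.TransactionContextInterface) error {
	// Transient data travels with the proposal but is never written to
	// the channel ledger, so it is the usual carrier for private inputs.
	transient, err := ctx.GetStub().GetTransient()
	if err != nil {
		return err
	}
	orderBytes, ok := transient["order"]
	if !ok {
		return fmt.Errorf("order must be supplied in the transient map")
	}
	// Key is hard-coded for brevity; a real function would derive it
	// from the payload.
	return ctx.GetStub().PutPrivateData("OrderCollection", "order1", orderBytes)
}
```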
## Testing chaincode
Testing chaincode is critical for ensuring correctness, security, and
reliability before deployment.
Tests can be written using standard unit testing frameworks for the target
language. In Go, the `testing` package is used to simulate chaincode
transactions and verify expected behavior.
Key testing strategies include:
* Unit tests for transaction functions using mocked contexts
* Integration tests using Fabric test networks
* End-to-end scenario tests with CLI or SDK interactions
Mock objects simulate the chaincode stub and transaction context. This allows
developers to control inputs and check function outputs without running a full
Fabric network.
Example test in Go:
```go
func TestCreateItem(t *testing.T) {
ctx := new(MockTransactionContext)
stub := new(MockChaincodeStub)
ctx.On("GetStub").Return(stub)
cc := new(SmartContract)
err := cc.CreateItem(ctx, "item1", "Laptop")
assert.NoError(t, err)
}
```
Fabric also provides sample test networks using Docker Compose and scripts to
simulate channel creation, peer joining, and chaincode deployment.
## Packaging chaincode
Before deployment, chaincode must be packaged into a compressed archive format.
Packaging involves:
* Creating a folder with the chaincode source and dependencies
* Using the peer CLI to generate a `.tar.gz` archive
* Assigning a label that includes version and metadata
Packaging command:
```
peer lifecycle chaincode package mycc.tar.gz --path ./chaincode/ --lang golang --label mycc_1
```
The label must be unique for each version and is used to identify the chaincode
package during installation and approval.
## Installing and approving chaincode
Once packaged, the chaincode must be installed on all endorsing peers and
approved by all required organizations.
Installation command:
```
peer lifecycle chaincode install mycc.tar.gz
```
After installation, each peer returns a package ID that will be used during
approval.
Approval command:
```
peer lifecycle chaincode approveformyorg --channelID mychannel --name mycc --version 1 --sequence 1 --package-id <PACKAGE_ID> --init-required
```
Each organization must run this command and commit the approval to the channel.
## Committing chaincode
After all required approvals, the chaincode is committed to the channel using
the following command:
```
peer lifecycle chaincode commit --channelID mychannel --name mycc --version 1 --sequence 1 --init-required
```
This step activates the chaincode and allows it to begin processing
transactions.
If the chaincode includes an initialization function, it must be invoked with
the `--isInit` flag:
```
peer chaincode invoke --channelID mychannel --name mycc -c '{"function":"InitLedger","Args":[]}' --isInit
```
Committing the chaincode broadcasts the definition to all peers in the channel
and enables consistent execution.
## Upgrading chaincode
Chaincode upgrades are handled by repeating the lifecycle steps with a higher
sequence number.
To upgrade:
* Modify the source code
* Repackage the chaincode with a new label
* Install the new package on all peers
* Approve the new definition with `--sequence` incremented
* Commit the new definition to the channel
This enables version-controlled deployment and supports backward-compatible
changes.
Upgrade scenarios may include:
* Adding new functions
* Changing endorsement policy
* Modifying access control logic
* Migrating state formats
Developers must preserve storage layout and state compatibility across upgrades.
It is also recommended to document all changes and test thoroughly in a staging
environment.
## Chaincode deployment strategies
In production networks, chaincode should be deployed using controlled CI/CD
pipelines.
Best practices for deployment include:
* Automating package generation and installation steps
* Using version control to track chaincode changes
* Storing deployment artifacts and configurations securely
* Performing dry runs on test channels
* Applying environment-specific parameters for each organization
Multi-org deployment requires coordination to ensure that all approvals are
collected and that no inconsistent versions exist in the network.
Deployment logs, peer responses, and chaincode events should be monitored to
verify successful rollout.
## Multi-contract chaincode design
Chaincode can contain multiple logical contracts within a single package.
This is useful when building complex applications where multiple domains or
entities must be managed independently, such as in a marketplace with users,
products, and transactions.
Each contract is defined as a separate class and registered using the Fabric
contract API. Contracts share the same chaincode but have isolated namespaces
for better modularity.
Example:
```go
type UserContract struct {
contractapi.Contract
}
type ProductContract struct {
contractapi.Contract
}
func main() {
chaincode, err := contractapi.NewChaincode(new(UserContract), new(ProductContract))
if err != nil {
panic(err)
}
if err := chaincode.Start(); err != nil {
panic(err)
}
}
```
Clients invoke specific contracts using the format `ContractName:FunctionName`.
This pattern enables structured development and simplifies logic segregation
across modules.
## Ledger state migration
When upgrading chaincode or modifying data structures, state migration may be
required.
This process involves reading old data formats, transforming them to the new
schema, and saving updated versions to the ledger.
Migration can be performed:
* Automatically during initialization of the new chaincode version
* Manually using a migration function triggered by an admin
Best practices for migration:
* Maintain backward compatibility for a defined period
* Validate data before overwriting
* Log migrated keys and results
* Use a dry-run mode before full execution
```go
func (s *SmartContract) MigrateState(ctx contractapi.TransactionContextInterface) error {
resultsIterator, err := ctx.GetStub().GetStateByRange("", "")
if err != nil {
return err
}
defer resultsIterator.Close()
for resultsIterator.HasNext() {
response, err := resultsIterator.Next()
if err != nil {
return err
}
var oldRecord OldItem
err = json.Unmarshal(response.Value, &oldRecord)
if err != nil {
return err
}
newRecord := NewItem{ID: oldRecord.ID, Label: oldRecord.Name}
newJSON, _ := json.Marshal(newRecord)
ctx.GetStub().PutState(newRecord.ID, newJSON)
}
return nil
}
```
State migration must be tested extensively to prevent corruption or data loss.
## Performance optimization
Efficient chaincode execution ensures faster transaction endorsement and lower
peer load.
To improve performance:
* Use simple and direct key-value access patterns
* Minimize writes and avoid unnecessary `PutState` calls
* Cache intermediate results in memory where possible
* Avoid large objects and excessive JSON nesting
* Use indexed keys for fast range queries (see the composite-key sketch at the end of this section)
* Avoid heavy use of private data unless needed
Complex filtering should be done in the client application. Chaincode should
serve as a deterministic validator and not as a data processing layer.
For CouchDB-based networks, rich queries should be tested for index coverage and
speed. Index definitions can be added to the collection configuration for better
performance.
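As referenced in the list above, composite keys are the standard way to keep range scans cheap. This sketch (hypothetical `owner~item` index and function names) stores items under a composite key and retrieves them with a partial-key query instead of a full state scan:

```go
func (s *SmartContract) CreateOwnedItem(ctx contractapi.TransactionContextInterface, owner string, itemID string, itemJSON string) error {
	// A composite key of the form owner~item groups all of one owner's
	// items together in the state database's key ordering.
	key, err := ctx.GetStub().CreateCompositeKey("owner~item", []string{owner, itemID})
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(key, []byte(itemJSON))
}

func (s *SmartContract) ItemsByOwner(ctx contractapi.TransactionContextInterface, owner string) error {
	// Partial-key query: fetch every item for one owner without a
	// whole-namespace range scan.
	iter, err := ctx.GetStub().GetStateByPartialCompositeKey("owner~item", []string{owner})
	if err != nil {
		return err
	}
	defer iter.Close()
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return err
		}
		fmt.Printf("found key %s\n", kv.Key)
	}
	return nil
}
```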
## Chaincode logging and auditability
Chaincode logs help with debugging, compliance, and transaction tracing.
Logging is supported through standard output and is captured by peer containers.
Use descriptive logs to trace function entry, key operations, and errors. Avoid
logging sensitive data or large payloads in production.
In Go:
```go
fmt.Printf("Creating item: %s\n", item.ID)
```
In Node.js:
```js
console.log(`Creating item: ${itemID}`);
```
Chaincode operations are also recorded in transaction logs and can be queried
using:
* Block explorer tools
* SDK query APIs
* Peer CLI for history inspection
Audit trails include:
* Proposal identities
* Endorsing organizations
* Read and write sets
* Time of transaction
* Chaincode version used
These features allow organizations to verify compliance, trace business
activity, and investigate disputes.
## Chaincode development summary
Chaincode enables secure, decentralized business logic in Hyperledger Fabric
networks.
Its deterministic nature, access control capabilities, and modular architecture
make it ideal for enterprise applications in finance, supply chain, healthcare,
and more.
Throughout this guide, we have covered:
* Core concepts and interfaces
* Writing and testing transaction logic
* World state management and identity enforcement
* Event emission and chaincode lifecycle operations
* Deployment, upgrades, and migration
* Performance tuning and audit mechanisms
Successful chaincode projects follow a disciplined approach, including version
control, peer review, CI pipelines, and thorough testing.
With the right patterns and tooling, chaincode becomes a powerful foundation for
trusted workflows and collaborative networks.
file: ./content/docs/knowledge-bank/fabric-transaction-flow.mdx
meta: {
"title": "Fabric transaction cycle",
"description": "Hyperledger fabric transaction cycle"
}
## Hyperledger Fabric Transaction Lifecycle
### Identity and Membership Setup
The transaction lifecycle in Hyperledger Fabric begins with identity management through X.509 certificates issued by a Certificate Authority (CA). Each network participant - whether an organization, peer node, or user - receives a unique digital identity. These identities are managed through Membership Service Providers (MSPs), which define the rules for authentication and authorization within each organization. The MSP contains cryptographic materials including CA certificates, admin certificates, and node-specific signing keys that enable secure participation in the network.
### Network Architecture Components
Fabric's modular architecture consists of several key components working together. Organizations join the network with their own peers (which maintain the ledger), CAs (for identity management), and client applications. The ordering service (comprised of orderer nodes) forms the backbone of the network, responsible for creating the immutable sequence of transaction blocks. This separation of concerns between execution (peers), ordering (orderers), and identity (CAs) provides Fabric with its flexible and scalable architecture.
### Chaincode Development Process
Smart contracts in Fabric, called chaincode, contain the business logic that governs transactions. Developers write chaincode in general-purpose languages like Go or JavaScript, defining functions that interact with the ledger state. The chaincode specifies how assets are created, modified, or queried, with functions like InitLedger for initialization and custom functions like TransferAsset for business operations. Chaincode is versioned and can be upgraded without losing the existing ledger state.
### Chaincode Deployment Lifecycle
Before execution, chaincode goes through a rigorous four-stage deployment process:
1. **Packaging** - The code is bundled with its dependencies into a deployable package
2. **Installation** - The package is installed on all endorsing peers across organizations
3. **Approval** - Required organizations approve the chaincode definition based on policy
4. **Commitment** - The chaincode becomes active on the channel after orderer verification
### Transaction Endorsement Flow
When a client submits a transaction, it first creates a proposal that is sent to endorsing peers. These peers simulate the transaction by executing the chaincode against their current state, generating a read/write set that shows what would change. The peers then cryptographically sign these results if the simulation meets policy requirements. This endorsement process ensures transactions are validated before being committed to the ledger.
### Ordering and Finalization
Endorsed transactions are sent to ordering service nodes, which:
* Gather transactions from across the network
* Arrange them into blocks
* Establish the definitive order of transactions
* Distribute blocks to all peers
Peers then validate each transaction in the block against endorsement policies and current state before appending it to their copy of the ledger and updating the world state.
### Advanced Features
Fabric supports sophisticated enterprise requirements through:
* **Private Data Collections**: Enables confidential transactions between specific organizations
* **Access Control**: Attribute-based rules govern who can invoke chaincode functions
* **Versioning**: Chaincode can be upgraded while preserving ledger state
* **Pluggable Components**: Supports different consensus mechanisms and databases
### Consensus and Finality
Fabric achieves finality through execute-order-validate architecture:
1. Transactions are first executed and endorsed
2. Then ordered into blocks with deterministic sequencing
3. Finally validated against current state before commitment
This three-phase approach provides high throughput while preventing double-spending and other inconsistencies.
### World State Management
The ledger maintains two components:
* **Blockchain**: Immutable sequence of transaction blocks
* **World State**: Current value database (LevelDB/CouchDB) for efficient queries
This separation allows for efficient access to current values while maintaining complete transaction history.
### Security Model
Fabric's security derives from:
* PKI-based identity with certificate revocation
* Configurable endorsement policies
* Channel-based data isolation
* Cryptographic hashing of all transactions
* Byzantine fault tolerant ordering
These features combine to create an enterprise-grade permissioned blockchain suitable for business networks.
## Hyperledger Fabric Transaction Lifecycle: Detailed Walkthrough
## 1. Identity Creation and MSP Configuration
**What is happening?**
Fabric uses X.509 certificates for identity. These certs are issued by a Certificate Authority (CA) and represent different users, peers, and orderers in the network. Each organization defines a Membership Service Provider (MSP) to manage these identities.
**Technical Components:**
* MSP folder structure (per org):
```
msp/
├── cacerts/ # CA root cert
├── keystore/ # Private ECDSA key
├── signcerts/ # X.509 signing cert
├── admincerts/ # Org admins
```
**Layman Explanation:**
Think of MSP like your "organizational passport system". The CA is like a passport office, and your certificates are digital passports proving who you are in the network.
***
## 2. Network Map and Actor Roles
**What is happening?**
Fabric has a modular architecture. Every participant plays a role:
| Org | Peers | CA | Orderer | MSP |
| ---------- | ---------- | ---------- | ------------------- | ---------- |
| Org1 | peer0.org1 | ca.org1 | - | Org1MSP |
| Org2 | peer0.org2 | ca.org2 | - | Org2MSP |
| OrdererOrg | - | ca-orderer | orderer.example.com | OrdererMSP |
**Layman Explanation:**
Imagine Org1 and Org2 are banks. Each has a "teller" (peer), a "notary" (CA), and a way to validate and store transactions. The orderer is like a shared accountant who logs all entries in a common ledger.
***
## 3. Chaincode (Smart Contract) Development
**hello.go (written in Go):**
```go
type HelloWorldContract struct {
contractapi.Contract
}
type Message struct {
Text string `json:"text"`
}
func (c *HelloWorldContract) InitLedger(ctx contractapi.TransactionContextInterface) error {
return ctx.GetStub().PutState("message", []byte(`{"text":"Hello Fabric!"}`))
}
func (c *HelloWorldContract) UpdateMessage(ctx contractapi.TransactionContextInterface, newMsg string) error {
msg := Message{Text: newMsg}
data, _ := json.Marshal(msg)
return ctx.GetStub().PutState("message", data)
}
```
**Layman Explanation:**
This is the program logic deployed to the blockchain. It initializes a message and lets users update it.
***
## 4. Package, Install, Approve, Commit (Chaincode Lifecycle)
**What is happening?**
Fabric separates chaincode deployment into lifecycle phases.
### 4.1 Package Chaincode
```bash
peer lifecycle chaincode package hello.tar.gz \
--path ./hello --lang golang --label hello_1
```
### 4.2 Install on Peers
```bash
peer lifecycle chaincode install hello.tar.gz
```
### 4.3 Approve by Each Org
```bash
peer lifecycle chaincode approveformyorg ...
```
### 4.4 Commit Chaincode Definition
```bash
peer lifecycle chaincode commit ...
```
**Layman Explanation:**
Imagine writing a company policy, getting department heads to sign off (approve), and then publishing it to everyone (commit).
***
## 5. Endorsement Policy (Who Must Approve Transactions?)
**Examples:**
```sh
OR('Org1MSP.peer','Org2MSP.peer') # Allow either org
AND('Org1MSP.peer','Org2MSP.peer') # Require both orgs
```
**Layman Explanation:**
It's like saying: "For payments, either the manager or finance must sign" vs "Both manager and finance must sign."
***
## 6. InitLedger Transaction
```bash
peer chaincode invoke -C mychannel -n hello -c '{"function":"InitLedger","Args":[]}' ...
```
***
## 7. Submit UpdateMessage("Goodbye Fabric!")
**Proposal Payload:**
```json
{
"txID": "f9d7...",
"args": ["UpdateMessage", "Goodbye Fabric!"],
"creator": "user1@Org1MSP",
"endorsers": ["peer0.org1", "peer0.org2"]
}
```
***
## 8. World State Update
**Before:**
```json
{"message": {"text": "Hello Fabric!"}}
```
**After:**
```json
{"message": {"text": "Goodbye Fabric!"}}
```
***
## 9. Block Structure
**Example:**
```json
{
"number": 7,
"data": {
"transactions": [
{
"txID": "f9d7...",
"chaincode": "hello",
"rwSet": {
"writes": [
{"key": "message", "value": "{\"text\":\"Goodbye Fabric!\"}"}
]
},
"status": "VALID"
}
]
}
}
```
***
## 10. Chaincode Upgrade (v1 → v2)
```bash
peer lifecycle chaincode package hello_v2.tar.gz ...
peer lifecycle chaincode install hello_v2.tar.gz
peer lifecycle chaincode approveformyorg --version 2 --sequence 2
peer lifecycle chaincode commit --version 2 --sequence 2 ...
```
***
## 11. Private Data Collection (PDC)
**collections\_config.json:**
```json
[
{
"name": "msgCollection",
"policy": "OR('Org1MSP.member')",
"requiredPeerCount": 1,
"maxPeerCount": 2,
"blockToLive": 100,
"memberOnlyRead": true
}
]
```
***
## 12. Access Control via Attributes (ABAC)
**Chaincode Example:**
```go
val, ok, err := ctx.GetClientIdentity().GetAttributeValue("role")
if err != nil || !ok || val != "auditor" {
	return fmt.Errorf("unauthorized")
}
```
***
## Fabric Ledger Internals
| Component | Description |
| -------------- | ---------------------------------------- |
| Ledger | Immutable sequence of blocks |
| Block Store | Stores header, data, metadata |
| State Database | Current values only (LevelDB or CouchDB) |
***
## Summary Table
| Category | Details |
| ------------------- | ------------------------------------ |
| Identity & MSP | X.509 + CA + ECDSA keys |
| Chaincode Lifecycle | Package → Install → Approve → Commit |
***
## Ethereum vs Hyperledger Fabric - Comparison
## Technical Comparison Table
| Category | Ethereum (EVM-Based Chains) | Hyperledger Fabric |
| ---------------------------------- | ------------------------------------------------------------ | --------------------------------------------------------------- |
| **1. Identity Model** | ECDSA secp256k1 key pair; address = Keccak256(pubkey)\[12:] | X.509 certificates issued by Membership Service Providers (MSP) |
| **2. Network Type** | Public or permissioned P2P (Ethereum Mainnet, Polygon, BSC) | Fully permissioned consortium network |
| **3. Ledger Architecture** | Global state stored in Merkle Patricia Trie (MPT) | Channel-based key-value store (LevelDB/CouchDB) |
| **4. State Model** | Account-based: balances and storage in accounts | Key-value database with versioned keys per channel |
| **5. Smart Contract Format** | EVM bytecode; written in Solidity/Vyper/Yul | Chaincode packages in Go, JavaScript, or Java |
| **6. Contract Execution** | Executed in deterministic sandbox (EVM) | Executed in Docker containers as chaincode |
| **7. Contract Invocation** | `eth_sendTransaction`: ABI-encoded calldata | SDK submits proposals → endorsers simulate |
| **8. Transaction Structure** | Nonce, to, value, gas, calldata, signature | Proposal + RW Set + endorsements + signature |
| **9. Signing Mechanism** | ECDSA (v, r, s) signature from sender | X.509-based MSP identities; multiple endorsements |
| **10. Endorsement Model** | No built-in multi-party endorsement (unless multisig logic) | Explicit endorsement policy per chaincode |
| **11. Consensus Mechanism** | PoS (Ethereum 2.0), PoW (legacy), rollup validators | Ordering service (Raft, BFT) + validation per org |
| **12. Ordering Layer** | Implicit in block mining / validator proposal | Dedicated ordering nodes create canonical blocks |
| **13. State Change Process** | Contract executes → SSTORE updates global state | Endorsers simulate → Orderer orders → Peers validate/commit |
| **14. Double-Spend Prevention** | State root update + nonce per account | MVCC: Version check of key during commit phase |
| **15. Finality Model** | Probabilistic (PoW), deterministic (PoS/finality gadget) | Deterministic finality after commit |
| **16. Privacy Model** | Fully public by default; private txs via rollups/middleware | Channel-based segregation + Private Data Collections (PDCs) |
| **17. Data Visibility** | All nodes hold all state (global visibility) | Per-channel; only authorized peers see data |
| **18. Data Storage Format** | MPT for state; key-value in trie; Keccak256 slots | Simple key-value in LevelDB/CouchDB |
| **19. Transaction Validation** | EVM bytecode + gas + opcode checks | Validation system chaincode enforces endorsement policy + MVCC |
| **20. Gas / Resource Metering** | Gas metering for all computation and storage | No gas model; logic must guard resource consumption |
| **21. Events and Logs** | LOGn opcode emits indexed events | Chaincode emits named events; clients can subscribe |
| **22. Query Capability** | JSON-RPC, The Graph, GraphQL, custom RPC | CouchDB rich queries, GetHistoryForKey, ad hoc queries |
| **23. Time Constraints** | Optional checks via `block.timestamp`; no native transaction expiry | Custom fields in chaincode; no native tx expiry |
| **24. Execution Environment** | Global EVM sandbox; each node runs all txs | Isolated Docker container per chaincode; endorsers simulate |
| **25. Deployment Flow** | Deploy via signed transaction containing bytecode | Lifecycle: package → install → approve → commit |
| **26. Smart Contract Upgrade** | Manual via proxy pattern or CREATE2 | Controlled upgrade via chaincode lifecycle & endorsement policy |
| **27. Programming Languages** | Solidity (primary), Vyper, Yul | Go (primary), also JavaScript and Java |
| **28. Auditability & History** | Full block-by-block transaction trace, Merkle proof of state | Immutable ledger + key history queries |
| **29. Hashing Functions** | Keccak256 (SHA-3 variant) | SHA-256, SHA-512 (standard cryptographic primitives) |
| **30. zk / Confidentiality Tools** | zkRollups, zkEVM, TornadoCash, Aztec | External ZKP libraries; no native zero-knowledge integration |
***
## Execution Lifecycle Comparison
| Stage | Ethereum (EVM) | Hyperledger Fabric |
| ----------------- | -------------------------------------------- | -------------------------------------------------------- |
| **1. Initiation** | User signs tx with ECDSA and submits to node | Client sends proposal to endorsing peers via SDK |
| **2. Simulation** | EVM runs the tx using opcode interpreter | Endorsing peers simulate chaincode, generate RW set |
| **3. Signing** | Sender signs tx (v, r, s) | Each endorser signs the proposal response |
| **4. Ordering** | Block produced by validator | Ordering service batches txs into blocks |
| **5. Validation** | Gas limit, signature, nonce, storage check | Validation system checks endorsement + MVCC versioning |
| **6. Commit** | State trie updated, new root in block header | Valid txs update state in DB; invalid txs marked as such |
| **7. Finality** | Final after sufficient blocks (PoW/PoS) | Final immediately after block commit |
***
## Summary Insights
* **Ethereum** offers a globally synchronized, public execution model with gas metering and strong ecosystem tooling. It emphasizes decentralization, programmability, and composability.
* **Fabric** is a modular enterprise-grade DLT with configurable privacy, endorsement policies, and deterministic execution. It separates simulation from ordering, enabling fine-grained control.
file: ./content/docs/knowledge-bank/industrial-usecases.mdx
meta: {
"title": "Industrial use cases",
"description": "Comprehensive guide to blockchain applications across manufacturing, logistics, energy, and industrial supply chains"
}
## Introduction to blockchain in the industrial sector
The industrial sector spans a wide array of activities including manufacturing,
logistics, energy production, industrial equipment, aerospace, and raw material
sourcing. These processes rely on multi-tier supply chains, coordinated
operations, extensive compliance requirements, and large-scale data management.
Industrial systems face ongoing challenges such as fraud in procurement, limited
traceability, inefficient maintenance, counterfeiting, and siloed data.
Blockchain introduces a shared, decentralized infrastructure that enhances
trust, transparency, and efficiency across industrial ecosystems. By enabling a
single source of truth that is tamper-evident, blockchain allows multiple
stakeholders—including suppliers, manufacturers, regulators, logistics
providers, and end customers—to coordinate processes, verify records, and
enforce rules without relying on a central authority.
As industrial systems evolve toward smart manufacturing and Industry 4.0,
blockchain acts as a complementary layer to IoT, automation, and AI. It
strengthens data provenance, streamlines compliance, and automates workflows in
environments that demand precision, auditability, and resilience.
## Key benefits of blockchain for industrial applications
Blockchain enables industrial ecosystems to become more secure, auditable, and
digitally synchronized. Its core value propositions include:
* Immutable, timestamped records of material flow, production events, and
inspections
* Distributed visibility across multi-tier suppliers and logistics partners
* Automation of trust-based processes through smart contracts
* Secure integration with industrial IoT sensors and devices
* Authenticity verification for components, documents, and certifications
* Real-time tracking of asset condition, ownership, and maintenance status
These capabilities help industrial firms reduce operational risks, prevent
fraud, improve compliance, and increase agility in fast-moving production
environments.
## Supply chain traceability and material provenance
Modern industrial supply chains involve multiple layers of sourcing, processing,
assembly, and distribution. Tracking the provenance of materials and finished
goods is critical for quality control, compliance, and sustainability. However,
traditional supply chain systems are fragmented and opaque.
Blockchain offers a unified, verifiable record of product history from raw
material origin to final delivery. Each event—such as shipment, inspection, or
transformation—is recorded on-chain, linked to specific parties, timestamps, and
documents.
Example scenario:
* A manufacturer sources titanium from a certified mining firm
* The shipment is registered on a blockchain with certificate of origin and
environmental compliance data
* As the titanium is transformed into components, each processing step is logged
and linked to machines, operators, and quality data
* The final aerospace part carries a digital passport that buyers and regulators
can verify on-chain
Benefits include:
* Instant authentication of product origin, quality, and sustainability claims
* Easier compliance with industry regulations and audits
* Improved recall and defect tracing capabilities
* Better visibility for downstream partners and customers
Industries such as aerospace, automotive, defense, and semiconductors are
adopting blockchain to manage complex, high-stakes supply chains with
zero-defect tolerance.
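As an illustration of how such a provenance event might be anchored on-chain, the sketch below uses ethers.js v6 against a hypothetical `ProvenanceRegistry` contract; the contract, its `recordEvent` function, the RPC endpoint, and the addresses are all assumptions for demonstration, not part of any specific platform.
```js
import { Contract, JsonRpcProvider, Wallet, id, keccak256, toUtf8Bytes } from "ethers";

const provider = new JsonRpcProvider("https://rpc.example.com"); // placeholder endpoint
const signer = new Wallet(process.env.PRIVATE_KEY, provider);

// Hypothetical registry contract with a single event-logging function
const registry = new Contract(
  "0x0000000000000000000000000000000000000001",
  ["function recordEvent(bytes32 batchId, bytes32 docHash, string eventType)"],
  signer
);

const certificate = { origin: "Certified Mine Ltd", material: "titanium", audit: "passed" };
const batchId = id("TI-BATCH-2024-001"); // keccak256 of a human-readable batch label
const docHash = keccak256(toUtf8Bytes(JSON.stringify(certificate))); // fingerprint only

await registry.recordEvent(batchId, docHash, "SHIPMENT_REGISTERED");
```
Only the certificate's hash goes on-chain; the document itself stays in off-chain storage, so tampering remains detectable without exposing sensitive data.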
## Digital twins and asset lifecycle management
Industrial assets such as turbines, vehicles, machines, and equipment have
lifecycles that span decades. Managing the maintenance, usage, upgrades, and
ownership of these assets requires a secure, verifiable record system.
Blockchain supports the concept of a digital twin—a digital representation of a
physical asset that evolves over time with usage and service data.
Blockchain-powered digital twins can store:
* Initial manufacturing specifications and certifications
* Operating hours, performance logs, and sensor data
* Maintenance schedules and service history
* Ownership transfers, inspections, and incidents
These records are accessible to asset owners, service providers, insurers, and
regulators, creating a reliable audit trail for the full asset lifespan.
Example:
* A power plant turbine is manufactured and its serial number registered
on-chain
* During each maintenance cycle, the service team logs parts replaced,
technician ID, and test results
* If the asset is sold or relocated, the transaction is recorded as an ownership
transfer
* In case of a failure, stakeholders can analyze the complete lifecycle without
relying on fragmented paper records
This approach reduces downtime, supports predictive maintenance, simplifies
audits, and increases resale value through verified maintenance history.
## Counterfeit prevention and product authentication
Counterfeiting is a major challenge across industrial domains, particularly for
spare parts, pharmaceuticals, electronics, and branded components. Blockchain
allows manufacturers to uniquely identify and track every unit produced,
ensuring that only genuine items are recognized and accepted in the market.
Blockchain-based anti-counterfeit solutions include:
* Serializing each item with a tamper-evident QR code or RFID tag linked to a
blockchain record
* Verifying each scan or checkpoint with geolocation and timestamp
* Allowing customers and partners to verify authenticity via mobile apps
* Detecting duplicate or suspicious items through anomaly detection on the
ledger
Example:
* A machinery manufacturer issues digital certificates for each gear unit it
produces
* Each unit is scanned and verified at installation, service, and return
* Any attempt to insert counterfeit units into the supply chain is detected due
to missing or mismatched blockchain entries
This protects brand reputation, ensures warranty enforcement, and reduces safety
risks posed by inferior counterfeit parts.
## Industrial automation and smart contracts
Industrial environments are increasingly reliant on automation—from robotics to
machine-to-machine communication. However, many automation workflows still
depend on centralized logic and manual verification. Blockchain enables
decentralized automation through smart contracts that enforce rules and
interactions between machines, sensors, and enterprise systems.
Examples of smart contract-based automation:
* Automatically triggering procurement orders when inventory falls below a
threshold
* Releasing payments upon verified delivery and quality inspection
* Locking down equipment until required safety checks are confirmed
* Updating compliance status when calibrated sensors transmit verified readings
A typical workflow could involve:
* A smart sensor detects that a part is nearing end-of-life
* The sensor emits a signal recorded on the blockchain
* A smart contract evaluates warranty status and initiates a service request
* Upon technician confirmation, a replacement part is ordered and payment
scheduled
Smart contracts reduce delays, eliminate redundant approvals, and ensure
consistent enforcement of rules across distributed sites.
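A minimal sketch of such a decentralized automation loop is shown below, written with ethers.js v6. The sensor contract emitting an `EndOfLifeSignal` event and the maintenance contract are hypothetical, as are the endpoint and addresses.
```js
import { Contract, WebSocketProvider, Wallet } from "ethers";

const provider = new WebSocketProvider("wss://rpc.example.com"); // placeholder endpoint
const signer = new Wallet(process.env.PRIVATE_KEY, provider);

// Hypothetical contracts: a sensor feed that emits wear events, and a
// maintenance registry that opens service requests
const sensorFeed = new Contract(
  "0x0000000000000000000000000000000000000010",
  ["event EndOfLifeSignal(bytes32 indexed partId, uint256 wearLevel)"],
  provider
);
const maintenance = new Contract(
  "0x0000000000000000000000000000000000000011",
  ["function openServiceRequest(bytes32 partId)"],
  signer
);

// React to on-chain sensor signals: above 90% wear, open a service request
sensorFeed.on("EndOfLifeSignal", async (partId, wearLevel) => {
  if (wearLevel > 90n) {
    await maintenance.openServiceRequest(partId);
  }
});
```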
## Logistics, shipping, and freight management
Logistics and shipping are integral to industrial operations, but they often
involve paper documents, handoffs, and delays. Blockchain introduces
transparency and automation into freight tracking, customs clearance, warehouse
transfers, and cross-border shipments.
Blockchain use cases in logistics include:
* Digital bills of lading that are tamper-proof and instantly transferrable
* Smart contract coordination between carriers, ports, customs, and buyers
* Real-time shipment status with geofencing and condition data
* Dispute resolution through shared access to delivery records and timestamps
Example:
* A container carrying raw materials is loaded at the port of origin
* Its bill of lading is recorded on blockchain, accessible to the shipper, port
authority, and receiving manufacturer
* If the container is delayed or rerouted, the smart contract adjusts delivery
deadlines and penalty clauses automatically
Projects such as TradeLens (a Maersk and IBM venture, since discontinued), GSBN
(Global Shipping Business Network), and CargoSmart have demonstrated how
blockchain-enabled logistics platforms can transform global trade.
## Collaborative manufacturing and industrial consortia
Industrial production increasingly involves collaboration across multiple
entities—contract manufacturers, OEMs, component suppliers, and logistics
partners. Blockchain provides a shared data layer that facilitates secure
collaboration without exposing proprietary data.
Blockchain-based collaboration features:
* Shared bills of materials (BOMs) with component tracking
* Access-controlled data exchange between supply chain participants
* Automated fulfillment verification and payment settlement
* Joint IP registration and licensing enforcement
Example:
* A group of suppliers works together to produce parts for an electric vehicle
* Each supplier logs their contribution and quality certification on-chain
* The OEM receives the final assembly with verified provenance, pricing, and
terms
* Royalties or incentives are distributed automatically via smart contracts
Blockchain enhances trust in multi-party workflows, supports co-innovation, and
aligns incentives through transparent logic and immutable logs.
## Decentralized energy grids and industrial utilities
Energy production, distribution, and consumption play a critical role in
industrial operations. Traditional energy grids are centralized and limited in
flexibility, especially as demand increases for renewable energy, prosumer
participation, and real-time monitoring. Blockchain supports decentralized
energy markets and transparent grid management.
Key applications include:
* Tokenized energy units for peer-to-peer electricity trading
* Smart contract-based billing based on actual usage
* Transparent recording of generation, storage, and grid balancing
* Verifiable carbon offset credits and emissions tracking
* Integration with industrial IoT devices and smart meters
For example, an industrial park using solar panels can generate excess energy
and sell it to neighboring facilities or contribute to the grid. Each
transaction is recorded on a blockchain, and payments are automatically routed
through smart contracts. A regulator or utility company can audit generation and
consumption in real time.
Projects like Power Ledger, Energy Web Foundation, and LO3 Energy are enabling
blockchain-powered microgrids and energy marketplaces that reduce dependence on
centralized utilities and promote sustainable energy management in industrial
zones.
## Mining, metallurgy, and raw material provenance
Mining operations face scrutiny around environmental impact, labor practices,
and conflict minerals. Downstream industries in automotive, electronics, and
aerospace must validate the source of critical raw materials. Blockchain enables
transparent tracking of mined materials from extraction to refinement and
manufacturing.
Use cases in mining include:
* Digital records of extraction permits, inspections, and environmental audits
* Blockchain-tagged containers for ore and refined metal shipments
* Verification of certifications for conflict-free or ethically sourced
materials
* Integration with trade documents and customs declarations
For example:
* A lithium mining company registers its extraction permits and environmental
impact assessments on-chain
* Each batch of extracted ore is tagged and logged with location, time, and
handler details
* Refiners, transporters, and battery manufacturers access this data to verify
supply chain ethics and compliance
Blockchain ensures that sustainability claims are verifiable, prevents
greenwashing, and builds trust with regulators, investors, and global buyers.
## Predictive maintenance and asset reliability
Industrial machinery requires regular maintenance to prevent unplanned downtime
and safety risks. Maintenance schedules are often based on fixed intervals or
reactive monitoring, leading to inefficiencies. By combining blockchain with IoT
and analytics, predictive maintenance can be logged, verified, and coordinated
across stakeholders.
Blockchain use cases in predictive maintenance:
* Immutable logs of vibration, temperature, or load anomalies
* Smart contract rules to trigger alerts or work orders based on thresholds
* Equipment maintenance history shared across departments or vendors
* Automated part ordering and technician scheduling
Example:
* A wind turbine’s sensor detects irregular blade vibration
* Data is recorded on the blockchain with timestamp and location
* A smart contract evaluates the reading, matches it against past patterns, and
issues a maintenance request
* Upon resolution, the event is closed with digital signature and part
verification
Blockchain creates a shared memory of all maintenance actions, helping reduce
mean time to repair (MTTR), increase uptime, and enable insurance or warranty
integration based on verified asset history.
## Industrial financing and invoice tokenization
Manufacturing and logistics firms depend on working capital financing, often
delayed by slow invoice processing or lack of visibility into order fulfillment.
Blockchain supports invoice tokenization and supply chain finance by enabling
real-time proof of delivery, service, and acceptance.
Applications include:
* Tokenized invoices that can be traded or financed on marketplaces
* Smart contracts that release payments upon confirmed milestones
* Real-time visibility for lenders and auditors into invoice status
* Embedded insurance and factoring linked to verified supply data
Example:
* A parts supplier delivers equipment to a factory and receives confirmation via
a blockchain-registered RFID scan
* The delivery smart contract marks the invoice as eligible for early payment
* The supplier lists the tokenized invoice on a financing platform
* A fund advances capital at a discount and receives repayment when the OEM pays
the invoice
This reduces financing friction, supports MSMEs, and eliminates disputes over
invoice authenticity or terms.
## Environmental, social, and governance (ESG) compliance
Industrial firms are under growing pressure to demonstrate ESG compliance across
their operations and supply chains. Traditional sustainability reporting relies
on self-disclosed data and unauditable declarations. Blockchain enables
trustworthy ESG tracking, verification, and reporting.
Blockchain in ESG compliance includes:
* Tamper-proof logs of emissions, energy usage, and waste disposal
* Smart contract-based enforcement of emission caps or offset purchase
* Third-party audit trails linked to certification events
* Supply chain labor and sourcing practices recorded on-chain
Example:
* A factory publishes monthly energy consumption and waste output on blockchain
* Carbon offset purchases are recorded with verified credits and retirement
proof
* External auditors review compliance via shared dashboards, eliminating
document submissions
Blockchain improves transparency, allows real-time ESG scoring, and enables
data-driven investment and procurement decisions based on actual impact, not
just claims.
## Circular economy and recycling systems
Industrial manufacturing creates waste, obsolete parts, and scrap materials that
can be reclaimed or reused. Implementing circular economy principles requires
traceability and verification of reused components, remanufacturing cycles, and
recycling outcomes. Blockchain supports closed-loop systems through digital
records and incentives.
Key use cases:
* Registering parts with lifecycle and material composition metadata
* Logging disassembly, refurbishing, and recycling events
* Smart incentives for customers who return used equipment
* Marketplace coordination for recycled raw materials
For example, a consumer electronics company can track returned devices using
blockchain-registered IDs. Each disassembled component is logged and routed to
approved recyclers. Materials that meet quality standards are reintroduced into
the production supply chain. Carbon credits or product discounts are issued
automatically.
This ensures compliance with extended producer responsibility laws and supports
sustainability goals while building trust among consumers and regulators.
## Industrial certification and compliance documentation
Certifications are critical in industrial sectors for safety, quality, and legal
compliance. These include ISO standards, equipment testing, safety audits, and
regulatory approvals. Paper-based certificates are easy to forge or lose.
Blockchain enables permanent, verifiable certification records.
Blockchain-powered certification registries support:
* Issuance of digital certificates linked to verified credentials
* Public and permissioned access to certification history
* Expiry tracking and renewal workflows
* Cross-border verification for international operations
Example:
* A factory installs a new high-pressure boiler
* The installation certificate, safety test results, and operator training logs
are registered on-chain
* Inspectors and buyers verify these credentials before approving production or
insurance
Blockchain reduces certificate fraud, simplifies compliance audits, and supports
long-term traceability for sensitive equipment and processes.
## Industrial intellectual property and R\&D protection
Innovation in the industrial sector often involves patented designs, proprietary
formulas, and confidential specifications. Protecting these assets requires
secure timestamping, controlled disclosure, and licensing transparency.
Blockchain helps organizations manage IP across collaborative and competitive
environments.
Applications:
* Proof of invention timestamps for patent protection
* Smart contract licensing with automated royalty tracking
* Secure sharing of technical documents with audit logs
* IP provenance for compliance with trade or export controls
Example:
* An engineering firm develops a new process for aluminum alloy treatment
* The initial idea, test data, and process specifications are hashed and
recorded on blockchain
* Collaborators access a redacted version under a usage license governed by
smart contract
* If the patent is later contested, the timestamped blockchain records serve as
legal evidence
This approach supports open innovation while preserving IP rights, managing
risk, and enabling monetization of R\&D outputs.
## Auditability and industrial insurance
Insurers in the industrial sector require extensive documentation of assets,
processes, risk controls, and loss history. Blockchain enables insurers and
clients to share a common view of asset status, incidents, and mitigation
efforts, reducing delays and disputes.
Blockchain enables:
* Real-time risk profiles based on verified data
* Smart contracts for parametric insurance (e.g., temperature, downtime)
* Instant claim filing with verified incident records
* Secure access for underwriters, adjusters, and reinsurers
Example:
* A fire suppression system fails in a warehouse, triggering a sensor
* The incident is logged with temperature data, camera footage, and inspection
records
* The insurance contract validates the conditions and releases a payout based on
predefined criteria
* The insurer audits all inputs via blockchain without needing physical
inspection
Blockchain lowers claims processing time, improves fraud detection, and provides
transparent risk modeling for actuarial analysis.
## Global standards and industrial interoperability
Industries operate globally, but regulatory frameworks, certifications, and data
formats often differ between countries and regions. Blockchain helps bridge this
gap by standardizing how data, contracts, and credentials are exchanged across
borders.
Examples of cross-border industrial interoperability include:
* Shared supply chain compliance records accessible to customs, regulators, and
buyers
* Recognition of safety or quality certifications across jurisdictions
* Data-sharing agreements that enforce legal and privacy requirements using
smart contracts
For instance, an EU-based automotive OEM sourcing parts from multiple countries
can use a blockchain-based compliance ledger to verify that each component meets
emissions and safety standards. Suppliers, customs authorities, and logistics
providers all access the same real-time data, reducing errors and shipment
delays.
This supports smoother trade, faster product launches, and better collaboration
in complex, multi-national supply networks.
## Aerospace manufacturing and component integrity
The aerospace sector involves some of the most demanding engineering standards
in any industry. Aircraft components require detailed documentation,
traceability, and regulatory approval throughout their lifecycle. A single
non-compliant or counterfeit part can compromise safety and lead to substantial
liability.
Blockchain is well suited to address these concerns through:
* Immutable records of part manufacturing, certification, and test results
* Chain-of-custody logging during transportation and storage
* Maintenance and retrofit history linked to specific components
* Shared compliance access for manufacturers, regulators, and airlines
Example:
* A turbine blade manufactured in Germany is registered with production batch,
quality assurance results, and engineer sign-off
* During global transportation, temperature and vibration sensors log
conditions, submitting data to the blockchain
* Upon installation on an aircraft, its maintenance record is updated and shared
with the aviation authority
This system eliminates manual reconciliation of maintenance logs, improves
regulatory oversight, and prevents the introduction of faulty or untraceable
components into high-risk machinery.
## Construction, infrastructure, and modular builds
The construction industry faces challenges such as delays, material
mismanagement, and lack of documentation around inspections and certifications.
Blockchain provides transparency, auditability, and automation across the
lifecycle of construction and infrastructure projects.
Blockchain-enabled construction platforms support:
* On-chain issuance of permits, licenses, and inspection results
* Smart contracts for subcontractor payments tied to milestone completions
* Inventory tracking for modular parts, concrete usage, or steel placement
* Equipment rental history and usage validation for billing
A large-scale project such as a hospital or airport could benefit from:
* Digital contracts with contractors where payment is released only after
certified inspections are submitted to blockchain
* A shared ledger of project milestones, site activities, and budget
expenditures
* Auditable logs of safety checks, worker access control, and resource
consumption
Governments and construction consortia are increasingly exploring these models
to reduce corruption, eliminate disputes, and enable more reliable delivery of
public infrastructure.
## Manufacturing-as-a-service platforms
Digital manufacturing platforms now offer distributed access to 3D printing, CNC
machining, and tooling services across geographies. These platforms coordinate
orders, specifications, delivery, and quality assurance among independent
manufacturers. Blockchain helps validate each transaction, ensure fair
compensation, and provide traceability.
Key blockchain roles in distributed manufacturing include:
* Upload and hash of CAD files with access logs
* Smart contracts for job assignment, completion validation, and payment
* Certification of output quality and machine performance
* Record of who produced what, when, and using which materials
Example:
* An automotive company uploads a design file for a custom metal part
* The file is hashed and permissions granted to an approved workshop
* Once printed and quality-tested, the results are recorded on-chain and payment
is released
* All interactions are stored immutably to support dispute resolution or
warranty claims
Blockchain adds trust and structure to decentralized, on-demand production
ecosystems, enabling global scale while preserving traceability and IP
integrity.
## Worker safety and compliance monitoring
In industrial environments such as factories, mines, and construction sites,
worker safety is a top priority. Ensuring that safety protocols are followed,
certifications are updated, and incidents are reported accurately is essential
for legal compliance and operational reliability. Blockchain provides secure
logging and real-time verification.
Examples of blockchain in worker safety:
* Digital certificates for safety training, licenses, and hazard briefings
* Smart PPE (personal protective equipment) integration with access control
systems
* Incident reporting and resolution timelines with immutable records
* Incentive programs for safety compliance linked to verifiable behavior
For example, a mining company may require all personnel entering a site to scan
their digital ID. A smart contract verifies whether the individual has completed
necessary training and logged equipment checks. If any requirements are missing,
access is denied and a record is created.
In case of accidents, blockchain-logged sequences provide trusted data for
analysis, legal inquiry, or insurance evaluation. Worker unions and safety
regulators also benefit from shared access to verified safety compliance data.
## Labor certification and ethical sourcing
Global industrial supply chains often face scrutiny over ethical labor
practices, including forced labor, underage workers, and poor workplace
conditions. Regulatory and corporate social responsibility frameworks require
proof of ethical sourcing and labor certification.
Blockchain supports ethical labor tracking through:
* Verification of worker identities, contracts, and training logs
* On-chain audit reports from third-party certifiers
* Smart contract enforcement of fair wage payment terms
* Incident reports and whistleblower protections with anonymous proofs
Example:
* A textile supplier in Southeast Asia registers employees on a blockchain-based
labor compliance registry
* Auditors upload periodic reviews and findings
* Brands sourcing from the supplier access this data to ensure alignment with
ethical trade requirements
* Payments are structured to ensure no deduction below agreed wages, verified by
blockchain entries
This builds consumer trust, supports international labor laws, and protects
vulnerable workers by establishing accountability across industrial supply
networks.
## Industrial IoT and sensor integration
Industrial automation depends heavily on sensors, actuators, and edge devices
that monitor everything from temperature and pressure to machine vibration and
location. Integrating these IoT systems with blockchain ensures that sensor data
is tamper-evident, shareable, and usable in smart contract workflows.
Blockchain + IoT capabilities include:
* Real-time telemetry that triggers alerts or contract execution
* Verifiable data feeds to oracles for compliance and process validation
* Audit logs for calibration, signal loss, or device errors
* Maintenance optimization using predictive analytics and on-chain diagnostics
Example:
* A refrigerated shipping container logs internal temperature at one-minute
intervals
* This data is hashed and submitted to the blockchain during transit
* If a reading crosses the allowed threshold, a smart contract triggers an alert
and possible route adjustment
* If a shipment is rejected due to spoilage, the blockchain record helps assign
liability based on exact failure time and conditions
Combining IoT and blockchain improves data integrity, enables distributed
control, and strengthens accountability across automated industrial systems.
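The following sketch shows how one such reading could be hashed and anchored on-chain with ethers.js v6; the `TelemetryLog` contract, its ABI, the endpoint, and the addresses are hypothetical placeholders.
```js
import { Contract, JsonRpcProvider, Wallet, id, keccak256, toUtf8Bytes } from "ethers";

const provider = new JsonRpcProvider("https://rpc.example.com"); // placeholder endpoint
const signer = new Wallet(process.env.PRIVATE_KEY, provider);

// Hypothetical log contract; a real deployment would also enforce thresholds on-chain
const telemetry = new Contract(
  "0x0000000000000000000000000000000000000002",
  ["function logReading(bytes32 containerId, bytes32 readingHash, int16 tempCelsius)"],
  signer
);

const reading = { container: "RC-4411", tempCelsius: -19, takenAt: Date.now() };
const readingHash = keccak256(toUtf8Bytes(JSON.stringify(reading))); // tamper-evident fingerprint

await telemetry.logReading(id(reading.container), readingHash, reading.tempCelsius);
```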
## Warehouse automation and inventory reconciliation
Warehouses play a crucial role in industrial distribution, housing raw
materials, spare parts, and finished goods. Managing stock levels,
reconciliation, and routing requires integration between sensors, ERP systems,
and logistics networks. Blockchain helps streamline warehouse operations through
immutable tracking and cross-stakeholder access.
Warehouse blockchain solutions may include:
* Real-time inventory visibility for suppliers, buyers, and auditors
* QR or RFID-based asset movement tracking with timestamped events
* Condition monitoring of sensitive goods such as chemicals or electronics
* Smart restocking logic based on blockchain-validated levels
Example:
* A warehouse receives 500 units of precision components and logs receipt
on-chain
* As parts are picked for delivery, each movement is recorded with location,
handler, and destination
* A smart contract reconciles available stock with expected demand and places
automatic reorders
* Auditors verify movement logs without physical inventory checks
Blockchain ensures that warehouse records match physical reality, improves
transparency for stakeholders, and reduces disputes over missing or damaged
items.
## Construction equipment and fleet management
Industrial equipment and vehicles such as cranes, loaders, and transport fleets
are high-value assets with intensive usage, maintenance, and rental records.
Blockchain enables trusted tracking of equipment condition, availability, and
service history for better planning and ROI.
Applications in fleet and equipment management:
* Digital logbooks of usage hours, fuel consumption, and job assignments
* Verification of inspection reports and operator certification
* Rental contracts with usage-based billing and insurance integration
* Predictive replacement scheduling and downtime tracking
Example:
* A construction company rents heavy equipment from a third-party vendor
* The rental contract is encoded in a smart contract, with daily logs uploaded
from onboard telematics
* Fuel usage, work hours, and location are tracked and shared with all parties
* Any damage, late returns, or usage anomalies trigger automated clauses
Blockchain reduces administrative load, provides precise billing, and protects
both owners and renters from disputes or hidden liabilities.
## Industrial auctions and asset liquidation
Surplus industrial equipment, materials, or production capacity are often resold
through auctions or liquidation platforms. Blockchain ensures that auctions are
transparent, fair, and tamper-resistant. It also supports traceability of
ownership and conditions for sensitive or regulated assets.
Blockchain-enabled auctions provide:
* Timestamped, irreversible bids with verified user identities
* Smart contract resolution of winners, pricing, and payment deadlines
* Asset condition metadata linked to inspections and prior usage
* On-chain transfer of ownership and delivery confirmation
Example:
* A steel manufacturer lists surplus rolling stock on a blockchain auction site
* Interested buyers submit sealed bids before a deadline
* The auction smart contract reveals bids, determines the winner, and issues
payment instructions
* Once confirmed, the ownership record is updated and delivery logistics are initiated
This approach prevents last-minute manipulation, reduces administrative costs,
and increases buyer trust in industrial resale markets.
file: ./content/docs/knowledge-bank/keys-and-security.mdx
meta: {
"title": "Private keys and security",
"description": "A comprehensive guide to private key management in blockchain"
}
## Introduction to private keys
Private keys are the cornerstone of blockchain security. They serve as proof of
ownership and control over digital assets and smart contract interactions.
A private key is a randomly generated number that allows its holder to
sign transactions, access wallets, and interact with the network. Without the
correct private key, no one can move funds or authorize changes tied to a
blockchain address.
Every blockchain account is derived from a key pair. The private key is kept
secret, while the public key or derived address is used for receiving assets or
verifying signatures.
If a private key is lost, access to the associated funds is permanently lost. If
it is stolen, the attacker gains full control. This makes private key handling a
critical responsibility in any blockchain-based system.
## Cryptographic foundations
Private keys rely on public-key cryptography, also known as asymmetric
encryption.
In this system, each user generates a key pair consisting of:
* A private key, which is kept secret
* A public key, which is shared openly
Blockchain systems such as Ethereum and Bitcoin use elliptic curve cryptography
to generate keys and validate transactions. The commonly used curve is
`secp256k1`, which offers strong security with efficient computation.
The core principles include:
* Only the holder of the private key can generate a valid signature
* Anyone with the public key can verify the signature’s authenticity
* The key pair ensures non-repudiation, integrity, and authentication
Private keys are never transmitted during a transaction. Instead, they are used
to generate a signature, which is included in the transaction payload and
verified by network validators.
## Generating private keys
Private keys are 256-bit numbers that must be chosen with high entropy. They can
be generated using secure cryptographic libraries or hardware devices.
Key generation approaches include:
* Cryptographically secure pseudorandom number generators (CSPRNGs)
* Hardware wallets with built-in secure elements
* Operating system entropy pools (e.g., `/dev/random`)
* Browser-based generators with added caution
Generated keys are typically encoded in hexadecimal, WIF (Wallet Import Format),
or Base58 for ease of storage and transport.
Example Ethereum private key (hex):
```
0x4c0883a69102937d6231471b5dbb6204fe512961708279f2a41e2eaed2931c0e
```
A good key generation tool ensures randomness, prevents key reuse, and never
exposes the key to insecure memory or external APIs.
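As a minimal illustration, a library such as ethers.js (v6) wraps a CSPRNG-backed generation flow in a single call:
```js
import { Wallet } from "ethers";

const wallet = Wallet.createRandom(); // entropy from the platform CSPRNG
console.log(wallet.address);          // public address, safe to share
console.log(wallet.privateKey);       // 0x-prefixed 32-byte hex, keep secret
// Printing a key like this is only acceptable in throwaway local experiments.
```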
## Storing private keys
Storage is the most vulnerable aspect of private key management.
If keys are stored improperly, they can be leaked, corrupted, or lost. Secure
storage methods are essential for both individual users and enterprise systems.
Key storage options include:
* Hardware wallets (e.g., Ledger, Trezor) for physical isolation
* Encrypted keystore files (e.g., JSON-V3 for Ethereum)
* Secure elements in mobile devices (e.g., iOS Secure Enclave, Android Keystore)
* Custodial wallets with trusted third-party key management
* HSMs (Hardware Security Modules) in enterprise infrastructures
* Cold storage using air-gapped systems or paper wallets
Best practices for key storage:
* Use hardware devices where possible
* Encrypt keys at rest using strong passphrases
* Backup keys securely in multiple locations
* Avoid storing plain-text keys on disk or in source code
* Rotate keys periodically if applicable
Loss of private keys leads to irreversible loss of funds. Multiple layers of
protection and redundancy should always be considered.
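For example, encrypted keystore files of the JSON-V3 kind mentioned above can be produced and restored with ethers.js v6; the passphrase below is a placeholder and should come from a secret manager in practice.
```js
import { Wallet } from "ethers";

const wallet = Wallet.createRandom();

// Scrypt-based encryption into the standard JSON-V3 keystore format
const keystoreJson = await wallet.encrypt("use-a-strong-passphrase-here");

// Later, the key can be restored from the encrypted file
const restored = await Wallet.fromEncryptedJson(keystoreJson, "use-a-strong-passphrase-here");
console.log(restored.address === wallet.address); // true
```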
## Using private keys to sign data
Signing is the main operation performed with private keys in blockchain systems.
A digital signature proves that a transaction or message originated
from the private key holder and has not been tampered with.
The signature process includes:
* Hashing the transaction data using a secure hash function (e.g., Keccak256)
* Signing the hash with the private key using ECDSA
* Producing a signature composed of values `(r, s, v)` for Ethereum or `(r, s)`
for Bitcoin
Signature verification is done by nodes using the corresponding public key or
address. If the signature is invalid, the transaction is rejected.
Example in Ethereum, sketched here with ethers.js v6:
```js
import { keccak256, toUtf8Bytes, SigningKey } from "ethers";

const privateKey = "0x4c0883a69102937d6231471b5dbb6204fe512961708279f2a41e2eaed2931c0e"; // sample key from above
const message = "Transfer 100 tokens";
const hash = keccak256(toUtf8Bytes(message)); // Keccak256 digest of the message
const { r, s, v } = new SigningKey(privateKey).sign(hash); // ECDSA over secp256k1
```
Signatures are also used in off-chain authentication, multisig wallets, permit
functions (EIP-2612), and decentralized identity systems.
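A short off-chain verification sketch using ethers.js v6 is shown below; note that `signMessage` applies the EIP-191 personal-message prefix before hashing and signing, so verification must use the matching helper.
```js
import { Wallet, verifyMessage } from "ethers";

const wallet = Wallet.createRandom();
// signMessage prefixes per EIP-191, hashes with Keccak256, then signs with ECDSA
const signature = await wallet.signMessage("Transfer 100 tokens");

const recovered = verifyMessage("Transfer 100 tokens", signature);
console.log(recovered === wallet.address); // true only for a genuine signature
```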
## Key recovery and backups
Key recovery is essential to protect against accidental loss or device failure.
A well-designed recovery strategy ensures that keys can be restored
without compromising their secrecy or availability.
Common key recovery methods include:
* Mnemonic phrases based on BIP-39 (12 or 24 words)
* Shamir’s Secret Sharing to split a key into multiple parts
* Encrypted backups stored in separate secure locations
* Hardware wallet seed recovery using offline procedures
Mnemonic phrases encode binary entropy as a list of easily written words, from
which a seed and key pair are deterministically derived. The same phrase always
produces the same keys, so it must be protected exactly like the key itself.
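A quick sketch with ethers.js v6 demonstrates this determinism; the phrase used here is a well-known development mnemonic and must never hold real funds.
```js
import { HDNodeWallet } from "ethers";

// Well-known development phrase, never use it to hold real funds
const phrase = "test test test test test test test test test test test junk";

const restored = HDNodeWallet.fromPhrase(phrase);
console.log(restored.address); // identical on every run: the phrase fully determines the keys
```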
Best practices for recovery:
* Write seed phrases on physical paper or metal backups
* Store in fireproof and waterproof containers
* Do not store recovery data online or in cloud services
* Test recovery procedures before going live
For organizations, backup keys may be held by compliance officers, escrow
providers, or board members under strict policies.
## Threats and attack vectors
Private keys are targeted by a range of threats. Understanding these helps
define stronger defenses.
Key threats include:
* Malware and keyloggers on infected devices
* Phishing attacks that trick users into revealing keys
* Memory dumps or side-channel attacks on hot wallets
* Insider threats within organizations
* Compromised browser extensions or dApps
* Insecure random number generators or reused entropy
* Clipboard hijacking or exposed keystrokes
Even small mistakes can lead to total loss. Attackers often automate discovery
of leaked private keys across GitHub, cloud logs, or system files.
Mitigation strategies:
* Use hardware wallets that isolate key operations
* Run key-handling apps in sandboxed environments
* Monitor processes and file access for anomalies
* Apply least-privilege access to signing systems
* Educate users against phishing and social engineering
Security posture must evolve continuously, especially in high-value
environments.
## Multisignature and threshold schemes
Multisignature schemes offer a powerful way to secure private key usage.
Instead of relying on a single key, multisig requires multiple parties to
approve an action. This reduces the risk of compromise and supports distributed
governance.
In Ethereum, multisig is implemented through smart contracts such as Gnosis
Safe. In Bitcoin, native multisig is supported via `m-of-n` scripts.
Common use cases:
* Treasury and fund control
* DAO governance approvals
* Enterprise key custody
* Shared wallets for partnerships
Multisig types:
* Standard multisig (e.g., 2-of-3)
* Threshold signatures (e.g., BLS or FROST)
* Hierarchical structures (e.g., role-based access)
Benefits of multisig:
* Reduced single point of failure
* Transparent approval flows
* Configurable access control and time delays
Multisig setups require clear policies, signer coordination, and robust
auditing. The key principle is that no single party can act unilaterally.
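The flow below sketches a 2-of-3 proposal-and-confirmation cycle against a hypothetical multisig wallet contract using ethers.js v6; the ABI is an assumption for illustration and is not the Gnosis Safe interface.
```js
import { Contract, JsonRpcProvider, Wallet } from "ethers";

const provider = new JsonRpcProvider("https://rpc.example.com"); // placeholder endpoint

// Assumed 2-of-3 wallet interface: one owner proposes, others confirm
const abi = [
  "function submitTransaction(address to, uint256 value, bytes data) returns (uint256 txId)",
  "function confirmTransaction(uint256 txId)",
];
const address = "0x0000000000000000000000000000000000000003"; // placeholder wallet address

// Owner 1 proposes a transfer; it executes only after enough confirmations
const owner1 = new Contract(address, abi, new Wallet(process.env.OWNER1_KEY, provider));
const proposal = await owner1.submitTransaction(
  "0x0000000000000000000000000000000000000004", // recipient (placeholder)
  0n,
  "0x"
);
await proposal.wait();

// Owner 2 independently confirms the proposal (txId assumed to be 0 here)
const owner2 = new Contract(address, abi, new Wallet(process.env.OWNER2_KEY, provider));
await owner2.confirmTransaction(0n);
```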
## Enterprise key management strategies
Enterprises managing digital assets need rigorous key management architectures.
Enterprise solutions may include:
* Hardware security modules (HSMs) for isolated key signing
* Multi-party computation (MPC) for collaborative key operations
* Key management services integrated with compliance controls
* Role-based access and transaction approval workflows
* Audit trails, policy engines, and emergency lockdowns
MPC allows parties to sign a transaction without any party ever having the full
private key. This approach is gaining popularity among custodians and exchanges.
Integration with existing security systems such as LDAP, HSMs, or SIEM tools
enables seamless control and visibility.
Enterprises must enforce:
* Segregation of duties
* Key rotation policies
* Incident response for key exposure
* Regular audits and pen-testing
Institutional-grade security is critical in contexts such as fund custody, token
issuance, or regulated DeFi platforms.
## Secure user onboarding
User onboarding is the first point of contact where private keys are generated
or introduced.
A secure onboarding flow must ensure that users understand their
responsibility and that no third party intercepts the key or recovery material.
Methods for onboarding include:
* Generating keys locally in the browser with no network exposure
* Allowing users to bring their own keys via hardware devices
* Presenting mnemonic phrases with forced manual backup
* Integrating with secure authentication modules on mobile
Usability should never compromise security. Developers must:
* Explain what the key or phrase means
* Warn that recovery is not possible without a backup
* Block screenshots or clipboard access during key display
* Offer guided verification by asking users to re-enter selected words
The onboarding design directly affects user retention and security posture. A
poor experience leads to either user drop-off or mismanaged keys.
## Wallet management best practices
Wallets are interfaces to private keys and blockchain interactions. They can be
hot, cold, custodial, or self-managed.
Best practices for wallet management include:
* Using separate wallets for savings and daily use
* Keeping large balances in cold wallets disconnected from the internet
* Using multisig wallets for organizational funds
* Avoiding browser extensions for sensitive storage
* Setting transaction limits, alerts, and withdrawal delays
Hardware wallets offer the best balance of usability and security for
individuals. They support signing without revealing the private key to the host
device.
Mobile wallets benefit from secure enclaves but are exposed to more threats.
They should use biometric locks, OS-level key storage, and encrypted local
backups.
Custodial wallets shift the key responsibility to a third party. This may be
acceptable for regulated exchanges or financial institutions but should come
with SLAs, audits, and transparency.
## Biometric login and passkey systems
Modern devices support biometric authentication, which can replace traditional
key management for consumer dApps.
Biometrics include:
* Face ID or fingerprint readers
* Device-level passkeys
* WebAuthn and FIDO2 standards
Instead of storing private keys directly, wallets can wrap the key using a
secure enclave and decrypt it only with biometric confirmation.
Passkeys allow cross-device login without revealing credentials. They bind the
user to the device and browser, offering phishing resistance and ease of use.
Benefits:
* No need to remember or store seed phrases
* Fast and seamless login experience
* Compatible with mobile-first dApps
Challenges:
* Recovery is tied to device backup or platform ecosystem
* May not offer true self-custody
* Limited support across decentralized systems
Biometric and passkey-based flows are ideal for onboarding new users who are not
yet familiar with Web3 but want a secure experience.
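For reference, passkey registration in the browser goes through the standard WebAuthn API; the relying-party and user values below are placeholders.
```js
// Runs in the browser; the relying-party and user values are placeholders
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // issued by the server in practice
    rp: { name: "Example dApp", id: "example.com" },
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: { userVerification: "required" }, // biometric or PIN gate
  },
});
// Only the resulting public key leaves the device; the private key stays in
// secure hardware and is unlocked per-use by the biometric check.
```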
## Trends in private keyless cryptography
Private keyless systems are an emerging class of identity models where users
don’t need to manage cryptographic keys directly.
Approaches include:
* Social recovery wallets (e.g., Argent)
* Session-based ephemeral keys (e.g., Lit Protocol)
* Delegated signer protocols (e.g., Biconomy, Account Abstraction)
* Zero-knowledge login using zk-proof of identity
* Encrypted key fragments managed by guardians
Account abstraction in Ethereum (EIP-4337) decouples private key signatures from
transaction authorization. This opens the door to:
* Smart contract wallets that define custom access logic
* Recovery methods based on biometrics or guardians
* Bundled transactions and gasless operations
Private keyless systems aim to solve Web3’s largest UX barrier: secure key
handling. By abstracting keys away from users, these systems offer convenience
without sacrificing control.
Private keys define access, control, and value in the blockchain world. Managing
them properly is critical to protect both individual assets and institutional
trust.
A secure key strategy includes:
* Strong cryptographic generation
* Encrypted and redundant backups
* Segmented usage for different roles or balances
* Education on threat models and phishing
* Use of hardware devices or secure computation
As the ecosystem matures, key handling will become safer, smarter, and more
user-friendly. From multisig to MPC and passkeys, the future of blockchain
security will balance cryptographic rigor with human usability.
file: ./content/docs/knowledge-bank/private-blockchains.mdx
meta: {
"title": "Private blockchains",
"description": "Understanding private and permissioned blockchain networks"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
# Private blockchain networks
Private blockchains are permissioned networks where participation is
controlled and restricted to authorized participants.
## Technical deep dive
Blockchain technology, at its core, provides a decentralized digital ledger that
records transactions in a secure and transparent manner. While public
blockchains offer open access and broad participation, private permissioned
blockchains represent a distinct category tailored for controlled environments,
often operating within or between organizations. The increasing interest from
businesses in private permissioned blockchains stems from their potential to
offer the benefits of distributed ledger technology while addressing specific
enterprise needs for data privacy, access control, and regulatory compliance.
This emergence signifies a recognition that a one-size-fits-all approach to
blockchain may not suit every organizational context.
## Defining Characteristics of Private Permissioned Blockchains
A private blockchain, frequently termed a "trusted" or "permissioned"
blockchain, operates as a closed network accessible exclusively to authorized or
select verified users. This fundamental characteristic is underpinned by an
additional access control layer, ensuring that only users with explicit
permissions can interact with the blockchain. Furthermore, the actions that
permitted users can perform are strictly defined and granted by the
administrators of the ledger. To gain access and execute these authorized
operations, participants are typically required to authenticate themselves
through methods such as digital certificates or other digital identifiers.
Participation in a private permissioned blockchain network is restricted, with
network administrators holding the authority to determine who can join. Access
often involves a formal invitation process where the identity and other
pertinent information of potential participants are authenticated and verified
by the network operator(s). Moreover, the system allows for the assignment of
different levels of user permissions or roles, providing granular control over
network interactions.
These blockchains are frequently owned and operated by specific companies or
organizations for the purpose of managing sensitive data and internal
information. In some cases, a single private organization may wield complete
authority over the network, dictating the participants and their roles. The
owner or operator may also retain the privilege to override, edit, add, or even
delete records on the blockchain, depending on the network's governance model.
The degree of decentralization in a private permissioned blockchain is not a
fixed attribute and can vary significantly, ranging from highly centralized
systems controlled by a single entity to partially decentralized networks
operating among a consortium of authorized participants. The network members
typically decide on the level of decentralization and the specific mechanisms
used for achieving consensus.
Transparency, a hallmark of many public blockchains, is not a mandatory feature
of private permissioned blockchains and is often considered optional to enhance
security. The level of transparency is usually determined by the objectives of
the organization managing the blockchain network. However, regardless of the
chosen level of transparency for general users, the ledger itself maintains a
comprehensive record of every transaction along with the identities of the
participating parties.
In contrast to the anonymity often associated with public blockchains, private
permissioned blockchains generally lack anonymity. Access to the identity of
each participant involved in a transaction is frequently a critical requirement
for private entities seeking accountability and a verifiable chain of custody.
Every modification or transaction is linked to a specific user, enabling network
administrators to have immediate insight into who made a change and when.
The fundamental aspect that distinguishes these blockchains is the controlled
access and the presence of an entity or group responsible for managing
permissions. This fundamentally alters the trust model compared to public
blockchains, where trust is distributed across a large, anonymous network. The
flexibility in decentralization and transparency allows private permissioned
blockchains to be adapted to specific organizational needs and regulatory
requirements, offering a key advantage over the more standardized structures of
public blockchains. The capability of a central authority to potentially modify
the ledger introduces a trade-off between immutability and control, a balance
that must be carefully considered based on the intended application.
## Private Permissioned vs. Public Blockchains: A Detailed Comparison
The fundamental difference between private permissioned and public blockchains
lies in their approach to access control. Private permissioned blockchains
restrict participation to authorized entities who have been granted permission
by a central authority or through a predefined protocol. Conversely, public
blockchains are permissionless, allowing anyone to join and participate in the
network's core activities.
In terms of anonymity, private permissioned blockchains generally do not offer
it, as participants' identities are known and tracked to ensure accountability.
Public blockchains, on the other hand, provide a degree of anonymity through the
use of pseudonymous addresses, although the transactions themselves are publicly
viewable.
Governance also differs significantly. In private permissioned blockchains,
decisions are authorized by a specific group or the network owners through a
centralized, predefined structure. Governance in these networks is often
customizable. Public blockchains operate under a decentralized governance
model, where no single entity controls the network or its protocols, and changes
typically require consensus from the community.
The level of decentralization varies considerably. Private permissioned
blockchains can range from centralized systems controlled by a single
organization to partially decentralized networks managed by a consortium of
authorized participants. Public blockchains are inherently decentralized,
distributed across a vast network of nodes, which makes them highly resilient to
single points of failure or control.
Transparency is another key differentiator. In private permissioned blockchains,
transparency is optional and often limited to authorized participants, with the
level being customizable. Public blockchains are highly transparent, with all
transactions recorded and publicly accessible on the blockchain.
Security approaches also differ. Private permissioned blockchains rely on access
control mechanisms, encryption, and potentially consensus protocols. However,
they can be vulnerable if the controlling entity's systems are compromised or
due to a limited number of validators. Public blockchains derive their security
from the large number of participants, cryptographic hashing, and the
distributed nature of the network, making them highly resistant to attacks,
although this can sometimes impact speed.
Transaction speed and throughput are generally higher in private permissioned
blockchains due to the smaller number of participants and the use of potentially
more efficient consensus mechanisms. These networks can often be configured for
high transaction throughput and even zero transaction fees. In contrast,
transaction processing in public blockchains can be slower due to network
congestion and the need for broad consensus among numerous participants, often
involving transaction fees.
Use cases for each type of blockchain also vary. Private permissioned
blockchains are well-suited for enterprise applications requiring data privacy,
accountability, and controlled access, such as supply chain management, internal
financial systems, healthcare data management, and collaborations between
businesses. Public blockchains are ideal for applications that demand
transparency, trustless environments, and broad participation, such as
cryptocurrencies, decentralized finance (DeFi), and open-source projects.
Identity management is typically built into private permissioned blockchains,
allowing for the definition of roles and permissions for participants.
Authentication often occurs through certificates or digital identifiers. Public
blockchains generally lack built-in identity management, with transactions being
linked to pseudonymous wallet addresses.
Scalability in terms of transaction throughput is generally better in private
permissioned blockchains compared to public blockchains due to the limited
number of participants. Public blockchains can face significant scalability
challenges when dealing with a high volume of transactions.
The decision of whether to use a private permissioned or a public blockchain is
fundamentally driven by the specific requirements of the application,
particularly the desired balance between control, privacy, transparency, and
trust. Organizations must carefully assess their needs and priorities to
determine which type of blockchain aligns best with their objectives.
### Table 1: Comparison of Private Permissioned and Public Blockchains
| Feature | Private Permissioned Blockchain | Public Blockchain |
| ------------------- | ---------------------------------------------------------- | ------------------------------------------------------ |
| Access Control | Restricted, permissioned | Open, permissionless |
| Anonymity | Generally lacks anonymity | Offers pseudonymity |
| Governance | Centralized or controlled by authorized group | Decentralized, community-driven |
| Decentralization | Variable, can be centralized or partially decentralized | Inherently decentralized |
| Transparency | Optional, customizable, often limited to participants | High, all transactions publicly viewable |
| Security | Relies on access control, encryption, fewer validators | Relies on a large number of participants, cryptography |
| Transaction Speed | Fast, high throughput potential | Can be slower, lower throughput potential |
| Use Cases | Enterprise applications, supply chain, internal systems | Cryptocurrencies, DeFi, open-source projects |
| Identity Management | Built-in, role-based access control | Typically lacks built-in identity management |
| Scalability | Generally more scalable in terms of transaction throughput | Can face scalability challenges |
## Architecture of Private Permissioned Blockchain Networks
The architecture of a private permissioned blockchain network comprises several
key components working in concert. Nodes are the participants in the network,
each typically holding a copy of the ledger. In this controlled environment,
these nodes are usually known and authorized entities. It's common to find
different types of nodes within the network, each with specific roles and
permissions, such as validator nodes responsible for confirming the validity of
transactions.
Clients serve as the applications or interfaces that participants use to
interact with the blockchain network. These clients enable users to submit
transactions, query the data stored on the ledger, and potentially execute smart
contracts.
The ledger is the foundational element – a distributed, immutable record that
chronologically captures all transactions that have occurred on the blockchain.
In private permissioned blockchains, access to view or modify the ledger is
strictly controlled based on the permissions assigned to each user.
Smart contracts, which are self-executing agreements with the terms directly
encoded in the program, play a crucial role in automating processes, enforcing
predefined rules, and managing assets within the permissioned environment.
Platforms like Hyperledger Fabric and Quorum provide robust support for the
development and deployment of smart contracts.
The network structure, or topology, that connects the various nodes can vary
depending on the specific design of the blockchain. Common structures include
peer-to-peer networks and hub-and-spoke models. In many private permissioned
blockchains, a "trusted intermediary" or a consortium of organizations might
manage the core network infrastructure, overseeing the operation and governance
of the blockchain. Some architectural designs involve a distinction between
validator nodes, operated by the trusted intermediary, and participant nodes
that may have more limited capabilities.
A critical component for managing participation and access is the identity
management layer. This layer is responsible for verifying the identities of
participants and managing their associated permissions within the network. It
handles authentication processes, determines authorization levels for various
actions, and may also include mechanisms for revoking access when necessary.
The architecture of these networks is carefully crafted to strike a balance
between security, control, efficiency, and performance, leading to diverse
implementations based on the specific use case and the governing entity. Unlike
the more standardized architecture observed in public blockchains, private
permissioned blockchains offer greater flexibility in their design to cater to
the unique needs of organizations. The central role of the "trusted
intermediary" or the governing consortium significantly shapes the architecture,
particularly concerning the distribution of responsibilities for transaction
validation and overall network maintenance. This central entity introduces a
degree of centralization but also establishes a clear point of accountability
and control within the network.
## Consensus Mechanisms in Private Permissioned Blockchains
While highly centralized private blockchains might forgo traditional consensus
mechanisms, most distributed private permissioned networks rely on them to
ensure agreement among authorized participants regarding the state of the
ledger. Several consensus algorithms are commonly employed in these settings,
each with its own technical details and trade-offs.
**Raft** is a consensus algorithm favored for its understandability and
performance, making it suitable for permissioned environments. It operates
through a leader election process where one node is chosen as the leader,
responsible for proposing new blocks to the network. Follower nodes then
replicate these proposals, and a block is committed to the ledger only when a
majority of followers agree. Raft's primary focus is on maintaining consistency
of the transaction log across all participating nodes.
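To make the commit rule concrete, here is a minimal TypeScript sketch of a
Raft-style leader counting follower acknowledgements; an entry counts as
committed once a majority of the cluster (leader included) stores it. The class
and field names are illustrative rather than taken from any particular
implementation.

```ts
// Minimal sketch of Raft's commit rule: an entry is committed once the leader
// sees it replicated on a majority of the cluster (itself included).
type NodeId = string;

interface LogEntry {
  term: number;
  command: string;
}

class RaftLeader {
  private log: LogEntry[] = [];
  // Highest log index known to be replicated on each follower.
  private matchIndex = new Map<NodeId, number>();

  constructor(private followers: NodeId[]) {
    for (const f of followers) this.matchIndex.set(f, -1);
  }

  append(command: string, term: number): number {
    this.log.push({ term, command });
    return this.log.length - 1; // index of the new entry
  }

  // Called when a follower acknowledges replication up to `index`.
  onAck(follower: NodeId, index: number): void {
    const prev = this.matchIndex.get(follower) ?? -1;
    this.matchIndex.set(follower, Math.max(prev, index));
  }

  // An index is committed when a majority of the cluster stores it.
  isCommitted(index: number): boolean {
    const clusterSize = this.followers.length + 1; // followers + leader
    let replicas = 1; // the leader itself
    for (const m of this.matchIndex.values()) if (m >= index) replicas++;
    return replicas > clusterSize / 2;
  }
}

const leader = new RaftLeader(["n2", "n3", "n4", "n5"]);
const idx = leader.append("SET x=1", 1);
leader.onAck("n2", idx);
console.log(leader.isCommitted(idx)); // false: 2 of 5 replicas
leader.onAck("n3", idx);
console.log(leader.isCommitted(idx)); // true: 3 of 5 replicas
```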
**Paxos** represents a family of consensus algorithms renowned for their
robustness and fault tolerance, even in asynchronous networks where message
delivery times are not guaranteed. While more complex to understand and
implement than Raft, Paxos involves distinct roles of proposers, acceptors, and
learners to achieve agreement on a specific value, such as a transaction or a
block. It is designed to tolerate a certain number of faulty processes within
the network.
The **Practical Byzantine Fault Tolerance (PBFT)** algorithm is specifically
engineered to tolerate Byzantine faults, where nodes can exhibit arbitrary
behavior, including malicious actions. In PBFT, rounds of communication between
a primary node and backup nodes establish consensus. The system can guarantee
safety and liveness as long as a supermajority of nodes behaves honestly: at
least 2f + 1 honest nodes out of a total of 3f + 1, where f is the number of
faulty nodes the network is sized to tolerate. PBFT is frequently used in
permissioned blockchains where the participants might not all be fully trusted.
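The sizing arithmetic follows directly from those bounds; a small sketch,
assuming only the standard n = 3f + 1 relation:

```ts
// Back-of-envelope PBFT sizing: with n = 3f + 1 replicas, the network
// tolerates f Byzantine nodes, and each phase needs a quorum of 2f + 1.
function pbftTolerance(n: number): { f: number; quorum: number } {
  const f = Math.floor((n - 1) / 3); // max faulty replicas tolerated
  return { f, quorum: 2 * f + 1 };   // matching prepare/commit messages needed
}

for (const n of [4, 7, 10]) {
  const { f, quorum } = pbftTolerance(n);
  console.log(`n=${n}: tolerates f=${f}, quorum=${quorum}`);
}
// n=4:  tolerates f=1, quorum=3
// n=7:  tolerates f=2, quorum=5
// n=10: tolerates f=3, quorum=7
```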
**Federated Byzantine Fault Tolerance (FBFT)** is a variation of the BFT
approach in which each node in the blockchain designates a set of trusted
transaction validators who receive and order transactions. Consensus is
achieved when a predefined minimum number of these validators reach agreement.
FBFT offers a compromise between full decentralization and trust by relying on
a federation of known and trusted validators.
**Round-Robin Consensus** presents a simpler approach where nodes take turns
proposing and validating new blocks. This mechanism is particularly well-suited
for highly controlled environments where all participants are considered
trustworthy. While it can be very efficient in such settings, it typically
offers less fault tolerance compared to BFT-based algorithms.
Some private blockchain platforms also utilize **multi-party voting schemes**
to achieve consensus. In these systems, authorized participants cast votes on
proposed transactions or blocks, and consensus is reached when a predefined
threshold of votes is met. The specific voting rules and thresholds can be
customized based on the network's requirements.
The selection of a particular consensus mechanism is largely dictated by the
level of trust that exists among the participants and the desired degree of
fault tolerance and performance for the network. In environments where
participants are known and trusted, simpler and more efficient algorithms like
Raft or Round-Robin may be sufficient. However, in scenarios involving
potentially less trusted entities, more robust mechanisms such as PBFT or FBFT
are often preferred. The emphasis on efficiency and reduced computational
overhead in private permissioned blockchains often leads to the adoption of
consensus mechanisms that are less resource-intensive compared to the
Proof-of-Work (PoW) algorithm commonly used in many public blockchains. This
contributes to the faster transaction speeds and potentially lower energy
consumption observed in private networks.
## Protocols in Private Permissioned Blockchains
Protocols form the backbone of any blockchain network, defining the rules and
procedures that govern how participants interact and how the system operates. In
private permissioned blockchains, these protocols are often tailored to meet
specific organizational requirements and security considerations.
**Communication protocols** dictate how nodes within the network exchange
information. This includes the dissemination of details about new transactions,
newly formed blocks, and updates to the overall state of the ledger. While
fundamental networking protocols like TCP/IP provide the underlying
infrastructure, specific blockchain platforms may implement their own optimized
communication protocols to enhance efficiency and security within their
particular architecture and consensus framework. These protocols ensure that
message passing between nodes is both secure and reliable.
**Transaction processing protocols** outline the precise steps involved in
submitting, verifying, and ultimately committing transactions to the blockchain.
This encompasses the format in which transactions are structured, the methods
used for digitally signing them to ensure authenticity, and how they are
propagated across the network to other participating nodes. These protocols also
establish the rules for validating transactions, which may include verifying
digital signatures, ensuring sufficient account balances, and confirming
adherence to the logic defined within smart contracts.
**Data sharing protocols** are particularly important in private permissioned
blockchains, where controlling access to information is a primary concern. These
protocols govern how data stored on the ledger is shared among authorized
participants. They can enforce granular access control policies at the level of
individual data elements, ensuring that only users with the appropriate
permissions can view specific pieces of information. Techniques such as state
channels or private data collections might be employed to facilitate
confidential data sharing within the network while still leveraging the benefits
of a shared ledger.
**Smart contract interaction protocols** define how users and external
applications can interact with smart contracts that have been deployed on the
blockchain. This includes the protocols for invoking specific functions within a
contract, passing the necessary parameters, and receiving the results of the
contract's execution. Standardized APIs and interfaces are often used to
simplify and streamline the process of interacting with smart contracts.
The protocols employed in private permissioned blockchains are carefully
selected and often customized to prioritize efficiency, maintain security within
a controlled environment, and ensure strict adherence to predefined access
policies. Unlike the more open and standardized protocols found in public
blockchains, private networks have the flexibility to implement bespoke
protocols that are finely tuned to their specific use cases and the
characteristics of their participants. The emphasis on data sharing protocols
underscores the critical importance of granular control over information access
in enterprise settings, where confidentiality and compliance with regulations
are paramount. These protocols enable organizations to harness the advantages of
a shared, distributed ledger while simultaneously maintaining the necessary
levels of data privacy and security.
## Identity Management and Access Control
In the realm of private permissioned blockchains, robust identity management
and access control mechanisms are paramount for ensuring the security,
integrity, and proper governance of the network. These systems control who can
participate in the network and precisely define the actions each participant is
authorized to perform. This is crucial for establishing accountability and
maintaining a clear audit trail of all activities within the blockchain.
Permissions within these networks are typically granted by the network
administrators or through the enforcement of predefined rules that are often
embedded within smart contracts. A common approach involves defining different
roles, each associated with a specific set of access privileges and
capabilities. Access can be granted based on various criteria, including the
participant's identity, their organizational affiliation, or other relevant
attributes that align with the network's policies.
The enforcement of these permissions occurs at multiple layers within the
blockchain infrastructure. This includes controlling access to the network
itself, regulating the submission of transactions, restricting the visibility of
certain data on the ledger, and governing the execution of smart contracts.
Authentication mechanisms, such as digital certificates and API keys, are
employed to verify the identity of each participant attempting to interact with
the network. Once a user is authenticated, authorization policies are then
applied to determine whether that user possesses the necessary permissions to
perform the specific action they are attempting.
Private permissioned blockchains often integrate with existing
enterprise-level identity management systems, allowing organizations to
leverage their current infrastructure and processes for managing user
identities. Additionally, some blockchain platforms offer built-in identity
management features that can be configured to meet the specific needs of the
network. The modular nature of many blockchain architectures also facilitates
the integration of various third-party identity management solutions, providing
flexibility and customization options.
Commonly used mechanisms for managing permissions within these networks include
Access Control Lists (ACLs) and Role-Based Access Control (RBAC). ACLs
explicitly specify which users or groups have access to particular resources
within the blockchain. RBAC, on the other hand, assigns permissions to
predefined roles, and users are then assigned to these roles based on their
responsibilities and requirements within the network. This approach simplifies
permission management and ensures consistency across the network.
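As a rough illustration of the RBAC pattern, the sketch below resolves a user
to roles and roles to permissions; all user, role, and permission names are
hypothetical, not taken from any platform.

```ts
// Minimal RBAC sketch: permissions attach to roles, users attach to roles,
// and an authorization check resolves user -> roles -> permissions.
type Permission = "submit-tx" | "read-ledger" | "deploy-contract";

const rolePermissions: Record<string, Permission[]> = {
  auditor: ["read-ledger"],
  member: ["read-ledger", "submit-tx"],
  admin: ["read-ledger", "submit-tx", "deploy-contract"],
};

const userRoles: Record<string, string[]> = {
  alice: ["admin"],
  bob: ["member"],
  carol: ["auditor"],
};

function isAuthorized(user: string, perm: Permission): boolean {
  return (userRoles[user] ?? []).some((role) =>
    (rolePermissions[role] ?? []).includes(perm),
  );
}

console.log(isAuthorized("bob", "submit-tx"));         // true
console.log(isAuthorized("carol", "deploy-contract")); // false
```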
The presence of strong identity management and access control is a fundamental
aspect of private permissioned blockchains, distinguishing them from their
public counterparts. This controlled environment ensures that the network
operates according to its intended design and that sensitive data is protected
from unauthorized access or modification. The ability to precisely define and
enforce who can do what within the blockchain network is a key factor driving
the adoption of this technology by enterprises seeking secure and auditable
solutions. Furthermore, the seamless integration with existing identity
management systems can significantly streamline the process of onboarding and
managing users for organizations deploying private permissioned blockchains,
reducing administrative overhead and leveraging existing expertise.
## Scalability and Performance Considerations
Private permissioned blockchains generally exhibit higher transaction
throughput and lower network latency compared to public blockchains, primarily
due to the limited number of participants and the potential for employing more
efficient consensus mechanisms. The absence of open competition for transaction
validation and the utilization of voting-based or leader-based consensus
protocols can significantly enhance processing speeds. Moreover, because these
networks typically involve a smaller and often geographically localized set of
participants, the time it takes for information to propagate across the
network, known as network latency, tends to be lower.
Despite these inherent advantages, private permissioned blockchains are not
immune to scalability challenges. As the number of participants and the volume
of transactions increase, these networks can still encounter limitations. The
specific consensus mechanism employed and the underlying network architecture
play a crucial role in determining the scalability of a given private
blockchain. For instance, some consensus algorithms, like PBFT, can experience
performance degradation as the number of participating nodes grows
significantly.
When compared directly with public blockchains, the differences in scalability
and performance become more pronounced. Public blockchains often face
scalability bottlenecks due to the sheer number of participants and the
computationally intensive nature of some of their consensus mechanisms, such as
Proof-of-Work. In contrast, private permissioned blockchains prioritize
efficiency and immutability within a controlled environment, often at the
expense of the high degree of decentralization found in public chains. This
trade-off typically results in superior performance in enterprise-focused
applications.
Several factors can influence the overall performance of a private permissioned
blockchain. The choice of consensus algorithm is critical, as different
algorithms have varying performance characteristics under different network
conditions. The underlying network infrastructure, including the bandwidth and
connectivity between nodes, also plays a significant role. The complexity of the
smart contracts being executed on the blockchain can impact processing times, as
can the hardware and software resources available to each node in the network.
The generally better scalability and performance characteristics of private
permissioned blockchains make them particularly attractive for enterprise use
cases where high transaction volumes and low latency are often critical
requirements. This makes them well-suited for applications such as supply chain
tracking, real-time payment processing, and efficient asset management within
organizations or among trusted consortia. However, while generally more scalable
than public chains, careful design and ongoing optimization are still essential
to ensure that private permissioned blockchains can effectively handle the
anticipated workload as adoption expands. Factors such as the selection of an
appropriate consensus mechanism and the design of an efficient network
architecture must be carefully considered to avoid potential performance
bottlenecks as the network evolves.
## Real-World Use Cases of Private Permissioned Blockchains
Private permissioned blockchains are finding increasing adoption across various
industries, demonstrating their versatility and suitability for specific
enterprise needs. In supply chain management, these blockchains enable the
tracking of goods and their provenance throughout the supply chain, fostering
transparency and accountability among all participating organizations. This
can lead to improved efficiency, reduced instances of fraud, and enhanced
visibility into complex supply networks.
The financial services sector is exploring and implementing private permissioned
blockchains for several applications, including facilitating secure and
efficient interbank payments and settlements. They are also being used to
streamline trade finance processes, reducing the reliance on cumbersome
paperwork, and for managing digital assets and tokens within a regulated
framework.
In healthcare, private permissioned blockchains offer a secure and auditable way
to store and share patient data among authorized healthcare providers, ensuring
both privacy and interoperability. They can also be used to track the
provenance of pharmaceuticals, helping to combat the issue of counterfeit drugs.
For identity management, these blockchains can be used to create secure and
verifiable digital identities for both individuals and organizations,
simplifying processes that require identity verification and facilitating secure
access to various services and data.
Organizations are also leveraging private permissioned blockchains for internal
voting systems, providing a transparent and auditable platform for
decision-making within the enterprise. Similarly, they are being integrated
into Enterprise Resource Planning (ERP) systems to enhance data integrity and
automate various business processes.
Beyond these specific examples, private permissioned blockchains are proving
valuable in logistics and accounting, improving efficiency and transparency in
logistics operations and automating accounting processes while ensuring data
immutability. They are also being used for securing and streamlining payroll
and internal financial transactions within organizations. The ability to track
the movement and ownership of various assets beyond just supply chains makes
them ideal for a wide range of track-and-trace applications.
The suitability of private permissioned blockchains for these diverse
applications stems from their fundamental ability to provide a shared, auditable
ledger with strictly controlled access and robust identity management
capabilities. This addresses key challenges related to data security,
transparency, and operational efficiency within and between organizations. The
capacity to tailor these blockchain solutions to the specific requirements of
different industries makes them a highly adaptable technology for enterprise
adoption.
Private permissioned blockchains offer a compelling solution for organizations
seeking to leverage the benefits of distributed ledger technology within a
controlled and secure environment. Their defining characteristics, including
restricted access, variable decentralization, and customizable transparency,
make them distinct from public blockchains and well-suited for a wide range of
enterprise applications. The ability to precisely manage participant identities
and permissions ensures accountability and data privacy, while the selection of
efficient consensus mechanisms contributes to high transaction throughput and
low latency.
These blockchain networks are particularly advantageous in scenarios where
control, privacy, and performance are paramount, such as supply chain
management, financial services, healthcare, and internal enterprise systems.
Their real-world applications continue to expand as organizations recognize
their potential to enhance efficiency, security, and transparency in various
operational aspects.
However, it is important to acknowledge the trade-offs associated with deploying
private permissioned blockchains. The reliance on a trusted intermediary or
consortium introduces a degree of centralization, and the security of the
network is heavily dependent on the robustness of the access control mechanisms
and the integrity of the participating nodes. Improper implementation can lead
to security vulnerabilities.
file: ./content/docs/knowledge-bank/public-blockchains.mdx
meta: {
"title": "Public blockchains",
"description": "Understanding public blockchain networks and their characteristics"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
# Public blockchain networks
Public blockchains are permissionless networks where anyone can participate in
network operations.
## Major public networks
### Layer 1 Networks
* **Ethereum**
* Smart contract platform
* EVM compatibility
* Large developer ecosystem
* **Bitcoin**
* First blockchain
* Store of value
* Limited programmability
### Layer 2 Solutions
* **Polygon PoS**
* Ethereum sidechain
* Fast transactions
* Low fees
* **Optimism & Arbitrum**
* Optimistic rollups
* EVM compatible
* Scalability focused
## Public blockchain architecture deep dive: bitcoin, ethereum, and polygon
## Bitcoin: architecture and core components
Bitcoin is the original public blockchain, designed as a decentralized ledger of
transactions. Its architecture is relatively simple and highly robust, optimized
for security and censorship-resistance. Bitcoin uses a UTXO (Unspent Transaction
Output) model and Nakamoto Proof-of-Work (PoW) consensus to append new blocks to
its chain. Key technical components of Bitcoin include the block structure,
transaction format, mining mechanism, and peer-to-peer networking.
## Block structure and composition in bitcoin
Each Bitcoin block consists of a block header and a list of transactions (the
block body). The header is 80 bytes and contains several fields critical to
linking blocks and proving work:
* Version: A 4-byte field indicating the software/protocol version and consensus
rule set used by the miner.
* Previous Block Hash: A 32-byte hash pointer referencing the prior block in the
chain, establishing the chain continuity.
* Merkle Root: A 32-byte hash of the root of the Merkle tree of all transactions
in this block. Every transaction's hash is combined pairwise up the tree to
produce this single root, which allows efficient verification of any
transaction's inclusion.
* Timestamp: A 4-byte timestamp (Unix epoch format) roughly indicating when the
miner created the block (to the nearest second). It helps in ordering blocks
and is used in difficulty adjustment calculations.
* Difficulty Target (nBits): A 4-byte encoded target threshold that the block's
hash must be below for the PoW to be valid. This represents the mining
difficulty for that block.
* Nonce: A 4-byte arbitrary number that miners vary to find a hash below the
target. Together with other fields (and extra nonce data in the coinbase
transaction), the nonce is what miners adjust in brute-force to produce a
valid block hash.
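A minimal sketch of how these six fields serialize into the 80-byte layout and
hash (double SHA-256), using only Node's built-in crypto module; plugging in
the genesis block's well-known field values reproduces its famous hash.

```ts
import { createHash } from "node:crypto";

// Bitcoin stores hashes little-endian internally, so the explorer-style hex
// strings are byte-reversed before copying in, and the final digest is
// reversed again for display.
const sha256d = (b: Buffer) =>
  createHash("sha256").update(createHash("sha256").update(b).digest()).digest();

interface Header {
  version: number;
  prevHash: string;   // 32-byte hash, big-endian hex as explorers show it
  merkleRoot: string; // 32-byte hash, big-endian hex
  time: number;       // Unix seconds
  bits: number;       // compact-encoded difficulty target (nBits)
  nonce: number;
}

function blockHash(h: Header): string {
  const buf = Buffer.alloc(80);
  buf.writeUInt32LE(h.version, 0);
  Buffer.from(h.prevHash, "hex").reverse().copy(buf, 4);
  Buffer.from(h.merkleRoot, "hex").reverse().copy(buf, 36);
  buf.writeUInt32LE(h.time, 68);
  buf.writeUInt32LE(h.bits, 72);
  buf.writeUInt32LE(h.nonce, 76);
  return sha256d(buf).reverse().toString("hex");
}

console.log(blockHash({
  version: 1,
  prevHash: "00".repeat(32),
  merkleRoot: "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
  time: 1231006505,
  bits: 0x1d00ffff,
  nonce: 2083236893,
}));
// -> 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```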
Following the header, a block includes a variable number of transactions. The
first transaction is always the coinbase transaction, which has no inputs and
creates new bitcoins (the block reward) to pay the miner. The coinbase also
often contains extra data (like the miner's signature or signal flags for
upgrades) and, since Segregated Witness, commits to an additional witness Merkle
root for SegWit data. All other transactions are user-generated transfers of
bitcoins.
Bitcoin's use of a Merkle tree for transactions means that one can prove a
particular transaction is in a block by supplying an authentication path (the
neighboring hashes up the tree). The block header alone (which is just 80 bytes)
is enough for light clients (SPV clients) to verify chain proof-of-work and
transaction inclusion via Merkle proofs, without downloading full transactions.
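A compact sketch of that verification step, with an illustrative proof format
of one sibling hash plus a side flag per tree level:

```ts
import { createHash } from "node:crypto";

// SPV-style check: fold the transaction hash up the tree with the supplied
// sibling hashes and compare against the header's Merkle root.
const sha256d = (b: Buffer) =>
  createHash("sha256").update(createHash("sha256").update(b).digest()).digest();

interface ProofStep {
  sibling: Buffer;        // the neighboring hash at this tree level
  siblingOnLeft: boolean; // whether the sibling is the left operand
}

function verifyMerkleProof(txHash: Buffer, proof: ProofStep[], root: Buffer): boolean {
  let node = txHash;
  for (const { sibling, siblingOnLeft } of proof) {
    node = siblingOnLeft
      ? sha256d(Buffer.concat([sibling, node]))
      : sha256d(Buffer.concat([node, sibling]));
  }
  return node.equals(root); // true iff the tx is committed under this root
}
```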
Block Size and Weight: In Bitcoin's original design, blocks were limited to 1 MB
in size. The Segregated Witness (SegWit) upgrade in 2017 introduced the concept
of block weight, allowing up to 4 million weight units (WU), roughly equating to
4 MB of data when counting witness (signature) data separately. This increased
throughput modestly while maintaining compatibility. The block header itself
remains constant in size; the number of transactions per block depends on their
size and the current block weight limit.
## Transaction format and utxo model
Bitcoin transactions are structured around the UTXO model. Each transaction
consumes some existing unspent outputs as inputs and creates new outputs:
* Inputs: Each input references a previous transaction's output by txid and
output index, and provides an unlocking script (scriptSig) that satisfies the
conditions set by that previous output's locking script. Typically, the
previous output's script requires a signature from a certain public key; the
input therefore contains a digital signature (and public key) proving the
spender's authorization. If the input is from a SegWit output, part of the
unlocking script is instead provided in a separate witness field.
* Outputs: Each output contains a value (amount of BTC in satoshis) and a
locking script (scriptPubKey) that specifies the conditions required to spend
this output in the future. The most common locking script is a public key hash
(Pay-to-Pubkey-Hash, or P2PKH) which means the output can only be spent by
presenting a corresponding signature and public key. Other types include P2SH
(Pay-to-Script-Hash), multisig, or newer ones like P2WPKH (native SegWit) and
Taproot outputs.
* Transaction Metadata: Bitcoin transactions also include a version number, a
locktime (which can specify the earliest time or block height when it can be
included in the chain), and sequence numbers on inputs (used for relative
timelocks or to signal replacement policies like RBF).
When a Bitcoin transaction is created, it must obey the rule that the sum of
inputs ≥ sum of outputs. The difference (inputs minus outputs) is the
transaction fee paid to the miner. Because Bitcoin uses UTXO, each output can
only be spent once; once consumed as an input in a new transaction, that UTXO is
considered spent and is no longer valid. The set of all unspent outputs in the
system forms the UTXO set, which is the core of Bitcoin's state. Unlike an
account model, there are no balances stored for addresses – only UTXOs that any
given address can spend.
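A small sketch of this accounting model, assuming an in-memory UTXO set keyed
by `txid:vout` as a stand-in for a node's UTXO database; the fee falls out as
inputs minus outputs.

```ts
// UTXO accounting rule: fee = sum(inputs) - sum(outputs), and every input
// must reference an output that is still unspent.
interface Utxo {
  txid: string;
  vout: number;      // index of the output in its creating transaction
  valueSats: number; // amount in satoshis
}

const key = (txid: string, vout: number) => `${txid}:${vout}`;

function applyTransaction(
  utxoSet: Map<string, Utxo>,
  inputs: { txid: string; vout: number }[],
  outputs: { valueSats: number }[],
): number {
  let inSum = 0;
  for (const { txid, vout } of inputs) {
    const prev = utxoSet.get(key(txid, vout));
    if (!prev) throw new Error(`${key(txid, vout)} is missing or already spent`);
    inSum += prev.valueSats;
    utxoSet.delete(key(txid, vout)); // an output is spendable exactly once
  }
  const outSum = outputs.reduce((sum, o) => sum + o.valueSats, 0);
  if (outSum > inSum) throw new Error("outputs exceed inputs");
  // (the new outputs would be inserted here, keyed by the new transaction's id)
  return inSum - outSum; // the difference is the miner's fee
}
```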
Bitcoin's scripting language is deliberately simple and not Turing-complete.
It's a stack-based bytecode that enables basic conditions (hash locks, signature
checks, timelocks, multisignature, etc.). This simplicity enhances security and
predictability. Scripts execute during transaction validation: each input's
unlocking script is combined with the referenced output's locking script to form
a complete script which the Bitcoin node executes. If the script returns true
(valid signature, etc.), the input is considered valid. If any input's script
fails, the entire transaction is invalid.
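To illustrate the execution model (this is a toy, not Bitcoin's actual
interpreter), the sketch below runs a P2PKH-shaped opcode subset over a stack.
A caller would push the signature and public key first, then run the locking
script's opcodes; signature checking is stubbed via a callback, and ripemd160
support depends on the Node/OpenSSL build.

```ts
import { createHash } from "node:crypto";

// Toy stack machine over a tiny opcode subset. HASH160 = RIPEMD160(SHA256(x)).
type Op =
  | { push: Buffer }
  | { op: "DUP" | "HASH160" | "EQUALVERIFY" | "CHECKSIG" };

const hash160 = (b: Buffer) =>
  createHash("ripemd160").update(createHash("sha256").update(b).digest()).digest();

function runScript(ops: Op[], checkSig: (sig: Buffer, pubKey: Buffer) => boolean): boolean {
  const stack: Buffer[] = [];
  for (const item of ops) {
    if ("push" in item) { stack.push(item.push); continue; }
    switch (item.op) {
      case "DUP": stack.push(stack[stack.length - 1]); break;
      case "HASH160": stack.push(hash160(stack.pop()!)); break;
      case "EQUALVERIFY": // fail the whole script if the top two items differ
        if (!stack.pop()!.equals(stack.pop()!)) return false;
        break;
      case "CHECKSIG": {
        const pubKey = stack.pop()!;
        const sig = stack.pop()!;
        stack.push(Buffer.from([checkSig(sig, pubKey) ? 1 : 0]));
        break;
      }
    }
  }
  // The spend is valid if execution ends with a truthy (non-zero) top element.
  return stack.length > 0 && stack[stack.length - 1].some((byte) => byte !== 0);
}
```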
Taproot and Upgrades: In recent upgrades (like Taproot in 2021), Bitcoin has
improved its script capabilities and privacy. Taproot outputs allow complex
spending conditions (multi-signatures, alternative scripts) to remain hidden
unless used, and use Schnorr signatures which enable batching and more flexible
scripting (MAST – Merklized Abstract Syntax Trees). These upgrades are part of
Bitcoin's slow but steady evolution while preserving the fundamental
architecture.
## Transaction lifecycle: from creation to finality in bitcoin
Bitcoin transactions pass through several stages from the moment a user
initiates a payment to final settlement:
1. Creation and Signing: A user's wallet application selects one or more UTXOs
that the user controls (has keys for) as inputs, specifies one or more
outputs (addresses and amounts to pay, plus change back to themselves if
any), and then signs the inputs. The result is a complete, serialized
transaction ready for broadcast. Each input is signed with the owner's
private key, and the signature proves authorization to spend the referenced
UTXO. The wallet will also calculate an appropriate fee to include, based on
the transaction size in bytes and current fee rates needed for timely mining.
2. Broadcast to Network: The signed transaction is sent to a nearby Bitcoin node
(often the user's own full node or a connected node). That node will validate
the transaction: checking signatures, ensuring inputs exist and are unspent,
and that it abides by consensus rules (no overspending, proper format, etc.).
If valid, the node accepts it into its mempool (the in-memory pool of valid
but unconfirmed transactions) and then propagates it to its peers. Bitcoin's
peer-to-peer network uses a gossip protocol – each node relays new
transactions to other nodes, spreading quickly across the global network.
Nodes announce transactions by their hash (inv messages), and peers request
full details (via getdata) if they haven't seen it.
3. Mempool and Waiting: Once in the mempool, the transaction waits to be
included in a block. Each node's mempool might hold thousands of
transactions. Miners (which are specialized full nodes) are constantly
looking at their mempool to select transactions for the next block.
Typically, miners prioritize by fee rate (satoshis per byte) to maximize
their revenue. Users can increase fees to get faster confirmation, especially
in times of congestion.
4. Mining and Inclusion in a Block: A miner assembles a candidate block: it
picks a set of transactions from its mempool (up to the block weight limit,
and usually maximizing total fees), and then builds the Merkle tree of
transactions to set the Merkle root in the block header. It sets the other
header fields (pointing to the tip of the chain the miner is extending,
current timestamp, the target difficulty from the network, etc.), and puts
the coinbase transaction as the first transaction (paying themselves the
block subsidy plus the sum of selected transaction fees). Now the miner
begins the PoW hashing process: varying the nonce (and if needed, modifying
extra data in the coinbase to extend the search space) and hashing the header
to find a hash below the target. This is essentially a brute-force race
performed by mining hardware (ASICs) across the network.
5. Block Propagation: When a miner finally finds a valid hash meeting the
difficulty target, it has successfully mined a new block. The miner
immediately broadcasts this new block to its peers. Just like transactions,
blocks propagate via gossip: nodes announce the new block hash to peers, who
then request the block if they don't have it. Efficient relay protocols (like
Compact Blocks and Graphene) compress the data by assuming peers have most
transactions already, further speeding up propagation. The goal is to spread
a new block to the majority of nodes (and miners) within a few seconds, so
the network can start building the next block on top of it.
6. Validation and Chain Update: Each node that receives the new block will
validate it thoroughly. This includes verifying the block header's PoW (hash
meets target), checking that the block's transactions are all valid (no
double spends, signatures correct, scripts run to true, no inflation beyond
block reward, etc.), and that the block follows consensus rules (size/weight
limits, correct coinbase reward, valid Merkle root, etc.). If everything
checks out, the node links the block to its existing chain. This extends the
main chain (the node's best chain tip).
7. Confirmations and Finality: The user's transaction is now confirmed in that
block. The block that contains it becomes part of the blockchain. However, at
this point, the confirmation is still probabilistic – there is a chance
(albeit small) that another competing block could appear (a chain fork) and
override this block if it gets more PoW work. Nakamoto consensus, which
Bitcoin uses, prioritizes the longest (heaviest) chain. Finality in Bitcoin
is not instant; instead, the probability of reversal decreases as more blocks
are added on top. A common best practice is waiting for 6 confirmations (6
additional blocks) for high-value transactions, which takes on average \~60
minutes. Six blocks deep, a transaction is extremely unlikely to be reversed
barring an immense and infeasible reorganization attack. Practically, for
lower-value payments or everyday use, fewer confirmations (or even one
confirmation) are an acceptable risk in most cases, given Bitcoin's hashrate and
security.
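The value of waiting for confirmations can be quantified with the simplified
gambler's-ruin estimate from the Bitcoin whitepaper, sketched below.

```ts
// An attacker controlling share q of the hash power (q < 0.5) eventually
// overtakes a chain z blocks ahead with probability (q / (1 - q))^z. The full
// analysis also models the attacker's progress as a Poisson process, but the
// exponential decay in z is the essential point.
function catchUpProbability(q: number, z: number): number {
  if (q >= 0.5) return 1; // a majority attacker succeeds eventually
  return Math.pow(q / (1 - q), z);
}

for (const z of [1, 3, 6]) {
  console.log(`q=10%, z=${z}: ${catchUpProbability(0.1, z).toExponential(2)}`);
}
// q=10%, z=1: 1.11e-1
// q=10%, z=3: 1.37e-3
// q=10%, z=6: 1.88e-6
```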
Bitcoin's consensus mechanism, Nakamoto Consensus, relies on this probabilistic
finality and economic incentives. Miners are incentivized by block rewards and
fees to follow the rules and extend the longest valid chain. If they try to
cheat (e.g., double spend or create an invalid block), honest nodes will reject
those blocks and they will have wasted their energy. Approximately every 10
minutes a new block is mined on average, by design. The network automatically
adjusts the difficulty every 2016 blocks (\~ every 2 weeks) to maintain that
cadence, increasing difficulty if blocks came in too fast (hash power increased)
or decreasing it if blocks were too slow (hash power lost).
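The retarget rule itself is simple proportional scaling; a sketch follows,
including the protocol's clamp to at most a 4x change per adjustment.

```ts
// Every 2016 blocks, scale difficulty by how far the actual elapsed time
// missed the expected two weeks, clamped to a factor of 4 either way.
const EXPECTED_SECONDS = 2016 * 10 * 60; // 2016 blocks at 10 minutes each

function retarget(oldDifficulty: number, actualSeconds: number): number {
  const clamped = Math.min(
    Math.max(actualSeconds, EXPECTED_SECONDS / 4),
    EXPECTED_SECONDS * 4,
  );
  return oldDifficulty * (EXPECTED_SECONDS / clamped);
}

// Blocks arrived 20% faster than intended, so difficulty rises by 25%:
console.log(retarget(100, EXPECTED_SECONDS * 0.8)); // 125
```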
## Mining and proof-of-work consensus
Proof-of-Work is the heartbeat of Bitcoin's security. In PoW, miners compete to
solve a computationally difficult puzzle: find a block header whose SHA-256 hash
is below a target value. This target is adjusted so that, statistically, the
entire network will find a valid block about every 10 minutes. The puzzle's
difficulty ensures that no single party can dominate block creation without
commanding enormous computational resources, and it ties the creation of blocks
to a real-world cost (energy expenditure).
Mining Process Details: Bitcoin mining today is performed by specialized
hardware (ASICs) that can compute SHA-256 hashes trillions of times per second.
Miners typically join mining pools, where many miners share work and split
rewards, smoothing out the variance of finding blocks. Within a pool or solo,
the process is:
* Construct the block header (as described earlier), including the Merkle root
of chosen transactions and the reference to the previous block.
* Set the nonce to an initial value (and adjust extraNonce in the coinbase if
needed for more range).
* Hash the block header (essentially performing double SHA-256 per attempt).
* Check if the resulting 256-bit hash interpreted as a number is less than the
target (which is stored in the block header as nBits).
* If not, modify the nonce (or extraNonce) and hash again. Repeat rapidly.
This is a brute force search in a vast space. The target is inversely related to
difficulty: a lower target means fewer acceptable hashes and thus more work on
average to find one. The current Bitcoin difficulty makes the target so low that
miners must perform on the order of 2^70 or more hashes on average to find a
valid block. This enormous number is what secures the chain: an attacker would
need at least 51% of the global hash power to consistently outcompete honest
miners,
which is economically and physically prohibitive at Bitcoin's scale.
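A toy version of that brute-force loop, with an artificially easy target so it
terminates in milliseconds rather than exahashes; Bitcoin's byte-order
conventions are ignored for brevity.

```ts
import { createHash } from "node:crypto";

// Append a 4-byte nonce to some fixed header bytes and rehash until the
// digest, read as a big integer, falls below the target.
const sha256d = (b: Buffer) =>
  createHash("sha256").update(createHash("sha256").update(b).digest()).digest();

function mine(headerPrefix: Buffer, target: bigint): { nonce: number; hash: string } {
  const header = Buffer.alloc(headerPrefix.length + 4);
  headerPrefix.copy(header, 0);
  for (let nonce = 0; ; nonce++) {
    header.writeUInt32LE(nonce, headerPrefix.length);
    const digest = sha256d(header);
    if (BigInt("0x" + digest.toString("hex")) < target) {
      return { nonce, hash: digest.toString("hex") };
    }
  }
}

// With a 2^240 target, roughly 1 in 65,536 hashes qualifies, so this is quick;
// Bitcoin's real target makes the same loop take ~2^70+ attempts network-wide.
console.log(mine(Buffer.from("candidate header bytes"), 1n << 240n));
```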
Chain Reorganization: If two miners happen to find a block at nearly the same
time (a race condition), the network could temporarily see a fork (split brain)
where some nodes have one block as tip and others have the competing block. This
is resolved when the next block is found: whichever chain becomes longer (i.e.,
gains the next block) will be accepted as the main chain, and the other block
becomes an "orphaned" block. Bitcoin's consensus dictates that all miners should
switch to mining on the longest valid chain. This mechanism, simple but
effective, eventually converges all honest nodes on a single chain. Orphaned
blocks are rare and transactions in them return to the mempool to await
inclusion in a later block.
## Network topology and message propagation
Bitcoin's network is a peer-to-peer unstructured mesh. Nodes in the network
connect to a random set of peers (by default, up to 8 outbound connections for a
full node, and accepting inbound connections from others). There is no
centralized node; any node can join and leave, and discovery is done through a
mix of DNS seed servers and peer exchanges. The design goal for the P2P layer is
to reliably broadcast transactions and blocks to all participants in a timely
manner, despite the network's decentralized nature and latency.
Propagation Mechanisms: Bitcoin nodes use an "inv" (inventory) message system to
announce new objects (transactions or blocks) by their hashes. Peers that don't
have the object can request it with a "getdata" message. To avoid flooding the
network with large data, Bitcoin employs strategies like:
* Gossip with random delays: Nodes will announce new transactions to a subset of
peers with a slight delay and not to everyone at once, to reduce redundant
traffic.
* Relay Policies: A transaction must pass certain checks (minimal fees, standard
script forms, etc.) for a node to relay it (this prevents spam and malicious
data from propagating).
* Compact Block Relay: When propagating new blocks, instead of sending full
blocks (which might be large), nodes often send a "compact" block message
which contains the block header and short hashes of transactions. Peers
reconstruct the block from their mempool for any known transactions, and only
ask for missing ones. This dramatically cuts down block propagation time and
bandwidth.
Latency and Throughput: Bitcoin's design prioritizes decentralization over
performance. The 10-minute block interval helps ensure that propagation and
validation of blocks (which could be up to \~4MB of data with SegWit) is easily
done within that time by nodes globally, even with modest network connections.
The trade-off is higher latency (it takes minutes to confirm transactions).
However, this is an acceptable cost to achieve a permissionless system with
thousands of nodes reaching eventual agreement.
## State management: utxo set
Bitcoin's global state at any point in time can be thought of as the set of all
unspent transaction outputs (UTXOs). Maintaining this UTXO set is crucial for
validating new transactions (to check if inputs are unspent and amount
balances). Full nodes keep an indexed database of UTXOs in memory or on disk for
quick lookup. Each new block updates the UTXO set by removing spent outputs and
adding new outputs from transactions in that block.
This model has implications:
* Scalability: The UTXO set grows over time as more transactions create outputs.
Nodes need to manage this state efficiently. Pruning spent outputs is
straightforward (they're removed once spent), but the set can still grow
large. Bitcoin full nodes currently handle a UTXO set containing many millions
of entries.
* Parallelization: Transactions that spend distinct UTXOs can theoretically be
processed in parallel, since there are no global balances to update – just
individual outputs being consumed. In practice, Bitcoin validates transactions
mostly sequentially within a block, but the UTXO model lends itself to easier
sharding or parallel processing attempts because state is fragmented among
outputs.
* Simplicity: There is no notion of accounts or contract storage – just discrete
coins moving around. This makes Bitcoin's state model simpler but also limits
expressiveness for complex applications (hence why Bitcoin's on-chain
scripting is intentionally limited).
## Smart contract capabilities (or lack thereof)
Bitcoin does not have a general-purpose smart contract platform akin to
Ethereum's EVM. Its Script language enables only rudimentary smart contract-like
functionality (conditional spending). Examples include multi-signature wallets,
hash-time locked contracts (HTLCs) for payment channels (the basis of the
Lightning Network), and other simple constructs. Scripts are not Turing-complete
(no loops, for instance), which means you cannot implement arbitrary logic or
complex decentralized applications directly on Bitcoin's base layer. This is by
design, focusing Bitcoin on being sound digital cash and leaving more expressive
smart contracts to layer-2 solutions or other blockchains.
That said, off-chain or layer-2 protocols (like the Lightning Network for
micropayments, sidechains like Rootstock or Liquid, etc.) extend Bitcoin's
functionality by using on-chain scripts as anchors or adjudication mechanisms,
while doing more complex logic off-chain. This preserves Bitcoin's base layer
stability and simplicity.
## Summary (bitcoin)
Bitcoin's architecture emphasizes security, consistency, and decentralization.
Blocks link via hashes and PoW, transactions rely on UTXOs and simple scripts,
and consensus is maintained through miners expending real-world resources. Its
limitations in throughput and expressiveness are a trade-off for being the most
battle-tested, decentralized value settlement layer.
## Ethereum: architecture and innovations
Ethereum is a public blockchain designed not only for cryptocurrency
transactions but also for general-purpose computation via smart contracts.
Launched in 2015, Ethereum introduced an account-based model and the Ethereum
Virtual Machine (EVM), enabling Turing-complete scripts on-chain. Over time,
Ethereum's architecture has evolved, most notably transitioning from
Proof-of-Work to Proof-of-Stake (PoS) in 2022 (the event known as The Merge).
Ethereum's design is more complex than Bitcoin's, featuring a richer transaction
and state model, gas metering for computation, and different block structure and
consensus details.
## Account model and global state
Unlike Bitcoin's UTXOs, Ethereum uses an account-based state model. The global
state is a mapping of accounts (identified by 20-byte addresses) to their
current state. There are two types of accounts:
* Externally Owned Accounts (EOAs): Regular user accounts controlled by private
keys. They have a balance of Ether and a nonce (transaction count), but no
associated code.
* Contract Accounts: Accounts that have associated smart contract code and
persistent storage. Contracts also have balance (they can hold Ether) and a
nonce (number of contract-creations from that account), and importantly, code
that executes when they receive a transaction or message call.
The entire world state (all account balances, nonces, contract code and storage)
is stored in a data structure called the Merkle-Patricia Trie, which is a
cryptographic trie (prefix tree) that is also a Merkle tree. Ethereum's state
trie root hash is part of each block header, meaning that each block commits to
a specific world state after applying that block's transactions. This allows a
client to verify any account's state with a Merkle proof against the state root
in a trusted block header. In fact, Ethereum uses three interrelated Merkle
tries:
* State Trie: Mapping account addresses to account state objects (balance,
nonce, code hash, storage root).
* Storage Trie: For each contract account, its storage (a key-value store) is
itself stored as a Merkle trie, the root of which is stored in the account's
state object.
* Transaction Trie and Receipt Trie: Each block has a trie of transactions
included and a trie of receipts (outcomes of each transaction). The roots of
these tries are in the block header as well (transactionsRoot and
receiptsRoot).
This trie structure makes verifying parts of the state possible without having
the entire state (useful for light clients). However, maintaining these tries is
heavy for full nodes, as the state can grow large and changes every transaction.
In Ethereum, each transaction directly updates the global state by debiting one
account and crediting another (for value transfers) or modifying contract
storage and code (for contract calls). This is a more fluid model than
Bitcoin's; it's easier to query an account balance or send funds from one
account to another without managing multiple UTXOs. But it also means that
validating transactions requires knowing and updating shared global state, which
can be more complex to scale.
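A minimal sketch of the account model, for contrast with UTXOs: one mapping
from address to account state, with a value transfer mutating two balances in
place and bumping the sender's nonce. The addresses here are illustrative.

```ts
interface Account {
  nonce: bigint;       // txs sent (EOAs) / contracts created (contracts)
  balance: bigint;     // in wei
  codeHash?: string;   // set only for contract accounts
  storageRoot?: string;
}

const world = new Map<string, Account>([
  ["0xa1ice", { nonce: 0n, balance: 10n ** 18n }], // 1 ETH
  ["0xb0b",   { nonce: 0n, balance: 0n }],
]);

function transfer(from: string, to: string, value: bigint): void {
  const sender = world.get(from);
  const recipient = world.get(to);
  if (!sender || !recipient) throw new Error("unknown account");
  if (sender.balance < value) throw new Error("insufficient balance");
  sender.balance -= value;
  recipient.balance += value;
  sender.nonce += 1n; // each tx bumps the sender's nonce (replay protection)
}

transfer("0xa1ice", "0xb0b", 10n ** 17n); // send 0.1 ETH
console.log(world.get("0xb0b")!.balance); // 100000000000000000n
```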
## Ethereum block structure
Ethereum blocks contain a header, a list of transactions, and a list of ommers
(uncles). The block header in Ethereum (pre-Merge, in PoW) includes:
* Parent Hash: Hash of the previous block's header.
* Ommers Hash: Hash of the list of ommer headers (ommer is Ethereum's term for a
stale block, analogous to Bitcoin's orphan, that can be included for a minor
reward).
* Miner (Beneficiary) Address: The Ethereum address of the miner (block
proposer) to receive block rewards (mining reward and gas fees).
* State Root: The Merkle root of the state trie after all transactions in the
block are executed.
* Transactions Root: Merkle root of the transactions list.
* Receipts Root: Merkle root of the receipts (each transaction's execution
result: logs, status, gas used, etc.).
* Logs Bloom: A 256-byte bloom filter aggregating all logs (events) generated by
transactions in the block. This allows quick filtering for particular log
topics without scanning every transaction.
* Difficulty: The PoW difficulty level for this block (makes sense only in PoW
era).
* Number: Block number (height in the chain).
* Gas Limit: The maximum amount of gas that all transactions in the block
combined can consume. This parameter is set by the miner within protocol
constraints and can adjust slowly over time.
* Gas Used: Total gas consumed by all transactions in this block.
* Timestamp: When the block was mined (seconds since Unix epoch).
* Extra Data: An optional 0–32 byte field for arbitrary data (mining pools often
used this for tagging blocks with an identifier).
* MixHash: A field from PoW (Ethash) mining, part of the proof that a sufficient
amount of work was done (it's a hash output from the mining algorithm).
* Nonce: An 8-byte PoW nonce (combined with MixHash and block header, proves the
miner did enough work).
This block header is much larger than Bitcoin's (over 500 bytes due to trie
roots and the bloom filter). After The Merge (when Ethereum moved to PoS), some
fields lost significance:
* Difficulty is no longer used (replaced internally by a 'terminal total
difficulty' check at the merge transition).
* Nonce and MixHash are no longer updated by mining, so they became essentially
constant placeholders (Nonce is now fixed at 0x0000000000000000 in PoS blocks,
and MixHash (renamed in code to "prevrandao") now contains a random value
contributed by the beacon chain for randomness in contracts).
* Ommers/Uncles no longer exist in PoS (because block proposals are not
competing like in PoW, so no stale blocks are produced under normal
conditions).
The block body in Ethereum contains:
* Transactions: Each transaction (details below) is executed in order. The
results of these executions update the state trie. By the end of processing
all txs, the final state root must match the header's stateRoot. If a block's
state root doesn't match the result of executing its transactions on the
parent state, the block is invalid.
* Ommers (Uncle) List: In PoW Ethereum (pre-Merge), miners could include up to 2
uncle blocks – these are headers of blocks that were mined almost concurrently
but did not make it into the main chain (maybe because another miner's block
at the same height was chosen). Including them gives a small reward to the
miner of the uncle and the including miner, and helps decentralization by
compensating miners with slightly slower block propagation. Uncles had to be
recent (within 6 blocks or so) and valid but not in the main chain. In PoS
Ethereum, the concept of uncles is obsolete.
One notable aspect of Ethereum blocks is the Gas Limit. Unlike Bitcoin which has
a block size limit, Ethereum limits computational work per block via gas. Miners
(now validators) can slightly adjust this gas limit target, voting it up or down
by a bounded amount each block, which allows the network to adapt throughput
based on capacity. Historically, the gas limit has grown from \~5 million in
early days to about 15 million, and after EIP-1559 it's somewhat elastic around
a target (with a hard cap at 2x target for temporary spikes). This gas limit
translates to a variable number of transactions per block because some
transactions use more gas (complex smart contract calls) and some use less
(simple ETH transfers).
Block Time: Ethereum blocks were targeted at \~15 seconds during the PoW era.
Under PoS, blocks are produced in fixed 12-second slots. Generally, one block
per slot (some slots can be empty if a validator misses their turn). This
regularizes block times a bit more. A 12-second block time means Ethereum
confirms transactions much faster than Bitcoin's 10 minutes, but it also means
more potential forks in PoW (which was mitigated by the uncle mechanism). Under
PoS, the protocol assigns a unique validator to propose each block, reducing
collision.
## Transaction lifecycle in ethereum
Ethereum transactions are more complex than Bitcoin's, as they can encode not
just value transfers but also contract calls and creation of new contracts. A
transaction in Ethereum includes:
* Nonce: A sequence number for the sender account, which ensures each
transaction can be processed once and in order. The first transaction from an
account has nonce 0, then 1, and so on. This prevents replay and double-spend
by ordering the transactions from an account.
* Gas Price (or Max Fee): In the legacy model, each transaction specified a gas
price (in gwei per gas unit) that the sender is willing to pay. After EIP-1559
(August 2021), the model changed: now each transaction includes a max fee per
gas and a max priority fee. The protocol sets a base fee per gas (which rises
and falls with congestion), and the user can add a tip (priority fee) to
incentivize inclusion. The effective gas price paid is base fee + priority
(capped by the max fee); a fee-resolution sketch follows this list.
* Gas Limit (per tx): The maximum gas the sender allows this transaction to
consume. This protects against buggy or malicious contracts running infinitely
– if gas runs out, the transaction is reverted (but still fees are paid for
gas used).
* To: The recipient address (20 bytes), which could be an EOA for a simple
payment or a contract address to invoke, or empty if the transaction is
creating a new contract.
* Value: Amount of Ether (in wei) to send to the recipient (can be zero for pure
contract calls).
* Data: An arbitrary-length byte field. For contract calls, this holds the
function signature and parameters; for contract creation, it contains the
compiled bytecode of the contract; for simple ETH transfers, data can be empty
or any message.
* v, r, s (Signature): The Elliptic Curve digital signature components proving
the transaction is authorized by the private key of the sender's address.
Ethereum uses secp256k1 like Bitcoin, but signs over the transaction data
(including the chain ID for replay protection since EIP-155).
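To make the fee fields above concrete, here is a small sketch of post-EIP-1559
fee resolution; amounts are per gas and denominated in gwei purely for
readability.

```ts
// The protocol burns the base fee; the sender's actual tip is the smaller of
// the declared priority fee and the headroom left under the max fee.
const minBig = (a: bigint, b: bigint): bigint => (a < b ? a : b);

function resolveFees(baseFee: bigint, maxFee: bigint, maxPriorityFee: bigint) {
  if (maxFee < baseFee) throw new Error("not includable: maxFee below baseFee");
  const tip = minBig(maxPriorityFee, maxFee - baseFee);
  return {
    paidPerGas: baseFee + tip, // what the sender is charged per unit of gas
    burnedPerGas: baseFee,     // destroyed, reducing net ETH issuance
    tipPerGas: tip,            // paid to the block producer
  };
}

// Base fee 20 gwei, user caps total at 30 gwei with a 2 gwei priority fee:
console.log(resolveFees(20n, 30n, 2n));
// -> { paidPerGas: 22n, burnedPerGas: 20n, tipPerGas: 2n }
```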
When a user sends an Ethereum transaction:
1. Creation and Signing: The user's wallet (or dApp via web3) constructs the
transaction object with the fields above. It must get the current nonce for
the sending account (by querying a node) and decide on fee parameters (base
fee is known from the last block, and a tip is chosen). The user signs the
transaction with their private key, producing v, r, s.
2. Broadcast to Network: The signed transaction (commonly RLP encoded) is sent
to an Ethereum node. The node verifies the signature (recovers the sender
address and checks it matches the nonce/account), checks that the sender has
enough Ether balance to cover the value + gas\_limit \* max\_fee, and that the
nonce is correct (next one in sequence). If valid, the node adds it to its
local mempool.
3. Mempool and Propagation: Similar to Bitcoin, Ethereum nodes gossip
transactions to peers. There isn't a global "mempool" but each node maintains
its own set of pending txns. The transactions are sorted mainly by fee
priority (especially post EIP-1559, miners choose transactions giving the
highest priority fee first, since base fee is fixed per block). Under high
load, users must pay higher tips to get priority.
4. Block Inclusion (Mining/Validation): In PoW Ethereum (before Merge), miners
would pick the highest-paying transactions fitting in the block's gas limit.
In PoS Ethereum, the chosen validator of the slot will do similarly.
Transactions are executed sequentially in the block – the state is updated
for each. If a transaction runs out of gas or otherwise fails (reverts), it
still consumes the gas (the block includes it and the failure is recorded in
the receipt, and the state is unchanged by that tx except gas deduction).
Because failed transactions waste gas, miners usually still include them if
they pay fees, since the miner gets the gas fee even for reverted tx. Once
the block is filled up to the gas limit (or the available transactions are
exhausted or not worth the lower fees), the block is sealed.
5. Consensus and Execution: The new block, once proposed, is broadcast and
validated by other nodes. Each node will re-execute every transaction in the
block to ensure the resulting state matches the block header's stateRoot and
that no rules were broken (correct gas computation, no invalid opcodes,
sender had enough balance, etc.). Ethereum's consensus rules are essentially
"the canonical chain is the one with valid blocks that the consensus
mechanism (PoW longest chain or PoS fork choice) dictates".
6. Confirmation and Finality: Under PoW, Ethereum's block confirmations were
probabilistic like Bitcoin (though with faster blocks, forks occurred more
often, which is why even 12 confirmations (\~3 minutes) were often considered
safe for Ethereum transactions). The uncle mechanism reduced the risk for
miners but from a user perspective finality was still probabilistic – albeit
on the order of minutes instead of an hour. Under PoS (after The Merge),
Ethereum now has a notion of finality at the protocol level: validators vote
on checkpoints (every 32-slot epoch) using Casper FFG (Friendly Finality
Gadget). When two-thirds of validators attest to a checkpoint and then again
to a subsequent one, the earlier checkpoint is finalized. In practice, this
means Ethereum blocks reach absolute finality typically within 2 epochs (64
slots, which is about 12–13 minutes). In normal operation, finality happens
regularly and automatically; if the network is partitioned or many validators
are offline, finality could delay, but the design strongly incentivizes
liveness. Thus, Ethereum offers faster confirmation and deterministic
finality within minutes – a major improvement for high-value settlements,
where waiting an hour on Bitcoin might be impractical. Until finality,
Ethereum blocks are still somewhat tentative, but the chain uses a
fork-choice rule (LMD-GHOST) that makes reorgs after even a few blocks deep
extremely rare barring an attack.
## Proof-of-work (ethash) to proof-of-stake transition
Ethereum originally used a PoW algorithm called Ethash. Ethash was a memory-hard
hash algorithm (based on DAG lookups and Keccak hashing) designed to be
ASIC-resistant (though ASICs were eventually developed). Block time \~15s, and
difficulty adjusted with each block to target that interval. Ethereum's PoW had
one twist: the difficulty bomb, a mechanism intended to exponentially increase
difficulty at a certain block number to "freeze" PoW and force the transition to
PoS (this bomb was postponed several times until the Merge).
Ethash Mining: Similar to Bitcoin's mining, Ethash miners would assemble blocks
and vary a nonce (with the resulting mix-hash recorded in the header as
evidence of the search) to find a hash below the target. Ethash required computing a
pseudo-random dataset (the DAG, about 4+ GB in size) each epoch and using it in
hashing, making memory bandwidth the bottleneck (to discourage pure ASIC
advantages). The mining reward in Ethereum included a static block reward (which
changed over time, e.g., 2 ETH per block in recent years) plus all gas fees from
transactions (minus the portion burned by EIP-1559 base fee after that upgrade –
post EIP-1559, the base fee is destroyed, only the tip goes to miner).
By 2022, Ethereum developers launched the Beacon Chain (a PoS chain running in
parallel) and then merged it with the main chain, turning off PoW entirely. Now
Ethereum's consensus is pure PoS with no mining at all.
Proof-of-Stake (Casper and Beacon Chain): Ethereum's PoS is implemented via the
Beacon Chain, which manages validators and coordinates block production and
finality:
* Validators join by staking 32 ETH into a deposit contract on Ethereum (this
was done on the PoW chain and continues on the PoS chain for new entrants).
* Validators are pseudo-randomly assigned to propose blocks or attest (vote) on
blocks. Every 12-second slot, one validator is the proposer who creates a
block (now just an "execution payload" plus consensus info) and others are
attesters.
* Attesters are organized into committees per slot that vote on the block of
that slot and also on checkpoint epochs. If a validator misses their turn or
votes contrary to the majority, they get minor penalties; if they try to
attack (e.g., double sign or surround votes), they can be slashed (losing a
portion of their stake and being ejected).
* Finality via Casper FFG means once supermajority votes checkpoint, it's
irreversible unless 1/3 of validators are slashed (which is extremely costly
for an attacker, in the billions of USD at today's stake).
* The fork-choice rule is "latest message driven GHOST" (LMD-GHOST), which means
nodes consider the chain with the most aggregated weight of attestations
supporting it, favoring the heaviest attested chain head between finality
checkpoints.
Under PoS, Ethereum's block time remains \~12s, but the variance is nearly zero
(no more block time variability due to PoW luck). Transactions are still
processed by each block's proposer in the execution layer (the EVM chain as
before), so from the user perspective, nothing changed in how transactions look
or what the block contains; only how the block creator is chosen and how
consensus is reached has changed.
The removal of mining cut Ethereum's energy usage drastically (an over 99%
reduction) and changed the economics of issuance: large block rewards are gone,
only modest issuance goes to validators, and the EIP-1559 fee burn often
exceeds issuance, making ETH deflationary at times.
## Smart contract execution: the ethereum virtual machine - EVM
One of Ethereum's core innovations is the Ethereum Virtual Machine. The EVM is a
stack-based virtual CPU that executes contract bytecode. Every Ethereum full
node runs the EVM as part of transaction processing, to determine the outcome of
contract calls. Key aspects of the EVM and execution environment:
* Smart Contracts: Contracts are stored on-chain as deployed bytecode (a series
of EVM opcodes). Each contract has its own persistent storage (a key-value
store mapping 256-bit keys to 256-bit values), which is part of the global
state trie. When a contract's code executes, it can read and write its
storage, send internal transactions (calls) to other contracts or accounts,
perform arithmetic, logic, control flow, etc., subject to gas limits.
* Gas and Fees: To prevent infinite loops and resource hogging, Ethereum
  introduces gas, a unit of computation. Every EVM instruction has a fixed gas
  cost (e.g., an ADD costs 3 gas, while an SSTORE (storing to contract storage)
  costs 20,000 gas or more). When a transaction is sent, the sender must
  provide a gas limit and pays for each gas unit consumed. If execution
  exhausts the gas before finishing, it is halted and reverted. If it finishes
  with gas left, the unused gas is refunded (the sender isn't charged for it).
  Gas ensures Turing-completeness doesn't run afoul of the halting problem:
  because every step must be paid for, execution is guaranteed to terminate.
  (A fee-settlement sketch follows this list.)
* EVM Model: The EVM is stack-based (with a 1024-slot-deep stack) and operates
  on 256-bit words for all operations (which suits cryptographic arithmetic
  but is somewhat inefficient for typical 32-bit/64-bit tasks). It has a memory
  (volatile, not persisted, used for holding data during execution) and the
  aforementioned storage (persisted between calls for that contract). Contracts
  can call other contracts or create new contracts; these actions consume
  additional gas (and form an internal call stack).
* Messages and Calls: A contract invocation (either from an EOA or
contract-to-contract) is called a message call. It's like a transaction
initiated internally. The EVM handles these calls by creating a new execution
context for the callee, with its own gas allotment (which can be limited by
the caller). This is how contracts interact – they call functions of other
contracts.
* Deterministic Execution: All nodes execute the same code with the same
  initial state, so they all arrive at the same result and state root.
  Non-deterministic inputs (current time, randomness, etc.) are supplied either
  via special opcodes that draw from known values (the block timestamp, or
  beacon-chain randomness via the PREVRANDAO opcode) or via oracles (external
  data fed on-chain) – the EVM itself is deterministic.
* Logs: Contracts can emit log events (which do not affect state but are
recorded in transaction receipts and indexed by the bloom filter in block
header). These logs are not used by the consensus, but they're useful for
off-chain listeners (dApps) to watch for events.
* Reentrancy and Security: Because contracts can call each other, care must be
taken (the infamous DAO hack was due to reentrant calls). Ethereum's execution
model allows complex interactions, which also opens up a surface for bugs.
Over time, best practices and patterns (and new features like reentrancy
guards or shifts to languages like Vyper or use of the
Checks-Effects-Interactions pattern) have evolved to mitigate common pitfalls.
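The fee-settlement sketch referenced in the gas bullet above: a hedged
TypeScript model of post-EIP-1559 accounting. The numbers are illustrative,
and real costs depend on the opcodes actually executed:

```ts
// Sketch: post-EIP-1559 fee settlement for one transaction (illustrative values).
const min = (a: bigint, b: bigint) => (a < b ? a : b);

interface Tx {
  gasLimit: bigint;             // maximum gas the sender allows
  maxFeePerGas: bigint;         // cap on total price per gas unit (wei)
  maxPriorityFeePerGas: bigint; // tip offered to the block proposer (wei)
}

function settle(tx: Tx, gasUsed: bigint, baseFeePerGas: bigint) {
  if (gasUsed > tx.gasLimit) throw new Error("out of gas: state changes revert");
  // The tip is whatever fits between the base fee and the sender's cap.
  const tip = min(tx.maxPriorityFeePerGas, tx.maxFeePerGas - baseFeePerGas);
  const burned = baseFeePerGas * gasUsed;   // base fee is destroyed
  const toProposer = tip * gasUsed;         // tip goes to the block producer
  // Sender escrows gasLimit * maxFeePerGas up front; the rest is refunded.
  const refunded = tx.gasLimit * tx.maxFeePerGas - (burned + toProposer);
  return { burned, toProposer, refunded };
}

// Example: a 21,000-gas transfer at a 20 gwei base fee with a 2 gwei tip.
const gwei = 1_000_000_000n;
console.log(settle(
  { gasLimit: 30_000n, maxFeePerGas: 40n * gwei, maxPriorityFeePerGas: 2n * gwei },
  21_000n,
  20n * gwei,
));
```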
EVM Compatibility: Ethereum's EVM became a de facto standard adopted by many
other blockchains (Binance Smart Chain, Avalanche C-Chain, Polygon, etc.),
because it allows reuse of the vast ecosystem of developer tools and contract
code. The downside is that the EVM wasn't designed for extreme throughput –
it's single-threaded and all nodes execute all transactions, which can be a
bottleneck. Efforts to evolve the EVM or replace it (e.g., Ethereum's planned
move to eWASM, later deprioritized, or other chains using WebAssembly VMs) stem
from the need for more performance. Still, as of 2025 Ethereum's main execution
engine remains the EVM, now running under PoS consensus.
## Networking and propagation in ethereum
Ethereum's peer-to-peer network is similar in spirit to Bitcoin's but has its
own protocol (devp2p with the ETH subprotocol). Key points:
* Ethereum nodes gossip transactions and blocks across the network. Because of
  the faster block cadence, block propagation had to be optimized early:
  clients announce new block hashes to most peers and push full blocks only to
  a subset, letting the rest fetch block bodies on demand, which keeps
  bandwidth manageable at high transaction volume.
* The network also must propagate attestations and consensus votes in the PoS
era. That is handled by the beacon chain's networking (often using libp2p
gossipsub topics for different message types like blocks, attestations, sync
committee signatures, etc.).
* Uncle (Ommer) propagation: In PoW, nodes would also propagate uncle blocks.
  Under PoS there is essentially no such concept; a missed slot simply leaves a
  gap, and any resulting short fork is resolved quickly by the fork-choice
  rule.
* Transaction Propagation: Ethereum historically had large mempools and needed
to propagate lots of transactions. The concept of gossip with certain rules
(don't spam low-fee tx to everyone, etc.) and possibly filtering by min gas
price are used to manage propagation.
Given Ethereum's higher TX volume, its networking layer is designed to handle
many more messages per second than Bitcoin's. It achieves this in part by the
lighter weight of messages (Ethereum uses a binary protocol over TCP, with RLP
encoding), and in part by allowing nodes to specialize (some might not keep full
tx gossip if they're archival nodes, etc.). Protocol upgrades like EIP-4844
(proto-danksharding, activated in the March 2024 Dencun upgrade) introduce new
message types (blobs for data availability) and rely on the P2P layer to
broadcast large blobs efficiently.
## Scalability approaches: layer 2 and sharding
While Ethereum is not yet sharded at the base layer (original Ethereum 2.0 plans
for execution sharding have shifted toward a rollup-centric roadmap), it heavily
relies on Layer-2 scaling solutions. These include:
* Rollups: Both Optimistic Rollups (like Optimism, Arbitrum) and ZK-Rollups
(like zkSync, StarkNet, Polygon zkEVM) that execute transactions off-chain (or
off-mainchain) and post succinct proofs or summaries on Ethereum. Ethereum's
base layer is evolving to support these via data sharding (eventually
providing lots of space for rollup data).
* State Channels and Payment Channels: Generalized state channels or specific
payment channels (e.g., Raiden Network, similar to Bitcoin's Lightning) allow
users to transact off-chain with only occasional settlements on-chain.
* Sidechains: Independent chains like Polygon's PoS chain (discussed below) or
xDai/Gnosis Chain, which use their own validators but connect to Ethereum, are
another approach to scaling out transactions without burdening L1.
Ethereum's ethos is now to keep the L1 as a secure, decentralized base (with
moderate capacity) and let most transactions happen on L2, inheriting security
from L1 but not congesting it. This contrasts with some other chains that try to
scale on the base layer via different consensus or architecture choices, which
we'll explore.
## Polygon (matic pos chain): hybrid layer-2 architecture
Polygon (formerly Matic Network) is a platform aimed at scaling Ethereum. The
Polygon PoS chain is a prominent public blockchain that operates as a
commit-chain (often considered a sidechain) to Ethereum. It uses a
Proof-of-Stake based consensus with a large set of validators, while
periodically committing checkpoints to Ethereum for finality and security.
Polygon's design is a hybrid of a sidechain and a plasma-like framework,
combining the speed of a separate chain with the security assurances of Ethereum
as a base layer. The architecture is tiered, with a dual consensus mechanism
(Bor and Heimdall layers) and interoperability with Ethereum.
## Architecture overview
Polygon's PoS chain architecture can be thought of in three layers:
* Ethereum Layer (Mainchain): Polygon relies on Ethereum as the ultimate source
of truth. A set of smart contracts on Ethereum manages the validator staking,
checkpoint submission, and dispute resolution (for plasma exits). Validators
stake Polygon's native token (originally MATIC, since migrated to POL) on
Ethereum to secure the PoS chain. This means Polygon's validator set
and root of trust is anchored in Ethereum – if something goes wrong on the
Polygon sidechain, transactions can potentially be settled or exited via
Ethereum.
* Heimdall (Consensus) Layer: Heimdall is the layer of validators running a
consensus protocol (based on Tendermint, a BFT consensus engine) to manage the
PoS mechanism and handle periodic checkpointing of sidechain state to
Ethereum. Heimdall nodes track the state of the sidechain, collect signatures
from validators, and produce a checkpoint (basically a Merkle root of all
blocks produced in a span) that is then submitted to the Ethereum contracts.
This provides finality for batches of Polygon blocks once a checkpoint is
accepted on Ethereum. Heimdall is also responsible for validator set
management (updating who is active, based on stake and Ethereum contract info)
and slashing misbehaving validators.
* Bor (Block Producer) Layer: Bor nodes are the block producers that actually
create the blocks on the Polygon sidechain. Bor is essentially a modified
Ethereum client (a fork of Geth) that is optimized for fast block production
and uses a simpler consensus, relying on validator selection from Heimdall. A
subset of the validators (the block producer set) is selected in rounds to
create blocks using a lightweight consensus (which is often a simpler
authority or committee-based protocol, since the security is backed by the
higher-level BFT checkpointing). Bor layer runs an EVM-compatible chain –
meaning it functions much like Ethereum (same transaction format, uses gas,
runs EVM smart contracts), so developers can deploy solidity contracts on
Polygon just as they would on Ethereum, but with faster and cheaper
transactions.
This dual-layer approach allows Polygon to have rapid block times (on the order
of 2 seconds) and high throughput on the Bor chain, while Heimdall's periodic
checkpoints (for example, every few minutes or after a certain number of blocks)
anchor the sidechain state to Ethereum. If an invalid state were somehow
introduced on the sidechain (e.g., through a malicious majority on Polygon),
users could potentially challenge or exit via the Ethereum contracts (this is
the Plasma aspect: the ability to exit funds from the sidechain by providing
proof of their coins in the last valid checkpointed state).
## Consensus mechanism: tendermint-based pos and plasma checkpoints
Polygon's consensus on the Heimdall layer uses a BFT algorithm derived from
Tendermint. Tendermint provides instant finality assuming a supermajority of
validators are honest. In Polygon:
* Validators stake tokens on Ethereum and run Heimdall nodes.
* Heimdall (Tendermint) organizes validators in a rotating leader schedule
(Tendermint's round-robin). For each checkpoint interval, one validator is the
proposer to initiate the checkpoint, and others sign off on it. If the
proposer fails or a checkpoint submission doesn't succeed, Tendermint rounds
handle a new proposer.
* A checkpoint consists of the Merkle root of all blocks since the last
  checkpoint plus metadata (e.g., the range of block numbers covered). The
  selected proposer packages this and sends a transaction to the Ethereum
  contract with that data, along with aggregated validator signatures proving
  consensus. (A sketch of such a root computation follows this list.)
* The Ethereum contract verifies the signatures and the included state root.
Once accepted, that batch of Polygon blocks is considered finalized with the
security of Ethereum – it would require a fraudulent checkpoint (which would
require a large share of validators to sign, who would then get slashed by
Ethereum if proven invalid) to undo it.
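As referenced in the checkpoint bullet above, here is a hedged TypeScript
sketch of computing a Merkle root over a span of block hashes. It is
deliberately simplified (SHA-256 and naive odd-node duplication); Polygon's
actual checkpoint encoding and hashing rules differ in detail:

```ts
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer => createHash("sha256").update(data).digest();

// Sketch: fold a span of sidechain block hashes into one checkpoint root.
function merkleRoot(leaves: Buffer[]): Buffer {
  if (leaves.length === 0) throw new Error("empty checkpoint span");
  let level = leaves;
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left; // duplicate the last node on odd levels
      next.push(sha256(Buffer.concat([left, right])));
    }
    level = next;
  }
  return level[0];
}

// Example: a checkpoint covering four (fake) Bor block hashes.
const span = ["block-1", "block-2", "block-3", "block-4"]
  .map((s) => sha256(Buffer.from(s)));
console.log("checkpoint root:", merkleRoot(span).toString("hex"));
```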
Between checkpoints, the Polygon chain's blocks are not finalized in the BFT
sense (depending on the exact implementation). However, because the same
validators are typically following a consensus on the sidechain blocks as well,
they usually won't revert unless there's a serious issue. In practice,
Polygon's Bor chain uses a simpler Proof-of-Stake scheme in which a single
block producer from the validator set creates blocks in sequence (a
round-robin-style leader rotation, sometimes with a small committee). The
producer set is periodically shuffled using on-chain randomness to prevent
predictability. This resembles delegated PoS or round-robin PoA at the block
layer: very fast, but not highly decentralized on its own. The
decentralization and security come from the larger Heimdall validator set
overseeing and checkpointing it.
Hybrid PoS+Plasma Design: The term "Plasma" in Polygon's context refers to the
ability to fall back to mainchain security. Plasma is a design for child chains
that rely on mainchain fraud proofs to secure funds. Polygon's chain borrows
some Plasma concepts:
* Users can choose to use the Plasma bridge for certain assets, which means
their withdrawals from Polygon require a waiting period and proof (in case of
fraud). Plasma mode is more secure (robust against even some sidechain
failures) but has restrictions (only simple transfers of assets, no
generalized state).
* Or users can use the PoS bridge, which trusts the validator signatures on
checkpoints for faster withdrawals and supports arbitrary state (like NFTs,
smart contracts interactions). The PoS bridge assumes >2/3 of validators are
honest to be secure (just like the sidechain itself).
This flexibility allows developers to pick stronger security or more
functionality as needed.
In summary, Polygon's consensus is effectively Proof-of-Stake with 100+
validators (anyone can stake and become a validator, though often delegated
staking occurs), running a BFT consensus (instant finality) for checkpoints and
governance, and a faster block producer sub-protocol for block-by-block
production. It's a layered consensus: fast blocks on Bor, periodic BFT finality
on Heimdall. This contrasts with Ethereum's single-layer PoS where every slot is
finalized later, or Bitcoin's PoW where finality is probabilistic.
## Block production and structure on polygon
Blocks on the Polygon PoS chain (Bor chain) look much like Ethereum blocks.
Since Bor is a fork of Geth, a Polygon block contains:
* A header (with parent hash, state root, tx root, receipts root, etc.),
  though the consensus fields differ because Polygon doesn't do PoW: the
  difficulty and nonce fields carry no meaningful values, and producer
  information is conveyed through the consensus layer rather than mined into
  the header.
* A list of transactions (which are Ethereum-format transactions, using gas,
  etc.).
* No per-block consensus votes: unlike Ethereum's attestations, validator
  approval lives in off-chain Tendermint signatures and is anchored later via
  checkpoints, not recorded in each block header.
The block time on Polygon is much shorter than Ethereum mainnet's; 2 seconds
per block is the commonly cited figure. Each block therefore carries fewer
transactions than an Ethereum block might, but overall throughput can be
higher given many more blocks per minute. The block gas limit on Polygon is
also high (similar to or higher than Ethereum's, since the chain aims for high
throughput).
Because the Bor chain is permissioned to a set of known validators (even if open
to join via staking, at any epoch the set is fixed), block propagation and
validation can be faster, each node might connect more directly to all block
producers or have optimized gossip.
Finality of Blocks: Within the Polygon chain, the Bor layer might not have
immediate finality (if it's just one producer after another, a rogue producer
could equivocate and cause a fork). However, since the producers are validators
under watch, and every few minutes the checkpoint locks in the history, the
chain is generally run as if finalized by social consensus unless a serious
issue arises. The Tendermint consensus on Heimdall could, in theory, also sign
off on each block for instant finality, but that would be slower for block
production. Instead, they trade off some temporary forkability for speed,
knowing that finality comes with checkpoints.
## Transaction lifecycle on polygon pos chain
From a user's perspective, using Polygon's PoS chain is similar to using
Ethereum, with some additional steps for bridging:
1. Moving Assets to Polygon: Typically, a user locks tokens (like ERC-20 or
ERC-721 assets, or ETH) in a smart contract on Ethereum and an equivalent
amount is minted or made available on Polygon (via the PoS bridge or Plasma
bridge). This initial deposit and final withdrawal are where the hybrid
security comes into play.
2. Transacting on Polygon: Once funds are on Polygon, the user can send
transactions on the Polygon network just like on Ethereum: they have a
Polygon address (same keys as their Ethereum address), send transactions with
nonce, gas price (on Polygon paid in MATIC/POL token as gas), etc. The
transaction gets broadcast to Polygon nodes and lands in a Bor block usually
within a few seconds. Gas fees on Polygon are very low due to lower demand
and higher throughput (plus their token value differences).
3. Block Confirmation: Within a couple of seconds the transaction is in a block.
Polygon's chain may have confirmations akin to Ethereum's (next blocks on
top). But soon, a checkpoint will include this block hash. Checkpoints might
be created, say, every 30 minutes or when 100–200 blocks have been produced
(specific parameters can vary). When the checkpoint that covers this block is
submitted and finalized on Ethereum, that transaction effectively has the
security of Ethereum backing it.
4. Withdrawing / Finalizing back to Ethereum: If the user wants to withdraw
assets back to Ethereum, if using the PoS bridge, they rely on validator
signatures (which are assumed honest after checkpoint finality) to unlock
funds after a short delay. If using the Plasma bridge, they might wait a
challenge period (e.g., 7 days) to be sure no invalid state was pushed.
During normal operation, users simply see near-instant transactions and a degree
of finality after maybe a minute or so (once a checkpoint is created and signed,
even before it's submitted, validators consider those blocks final). The
experience is high-speed, leveraging the trust that validators will behave (due
to stake at risk).
## State management and EVM compatibility
The Polygon PoS chain is fully EVM-compatible. It maintains an account-based
state model nearly identical to Ethereum's:
* Accounts (EOA and contracts) exist with balances in MATIC, storage for
contracts, etc.
* It has its own set of ERC-20 tokens, NFTs, etc., which often mirror Ethereum
assets via bridges.
* The state is managed in a trie (since it's a fork of Geth, it likely uses
similar data structures for state).
* It supports the same JSON-RPC APIs as Ethereum, so Ethereum tooling (Metamask,
Truffle, Hardhat) works on Polygon with just a network config change.
This compatibility was a huge factor in Polygon's adoption: developers can
deploy existing Ethereum contracts with minimal changes to get much better
performance for their dApps.
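For example, pointing an existing Hardhat project at Polygon usually needs
nothing more than a new network entry. This is a hedged sketch: the RPC URL is
a placeholder and the environment-variable names are arbitrary choices, not
Polygon requirements:

```ts
// hardhat.config.ts - minimal sketch of adding Polygon as a deploy target.
import { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    polygon: {
      // Placeholder endpoint; use your own node or RPC provider here.
      url: process.env.POLYGON_RPC_URL ?? "https://polygon-rpc.example",
      chainId: 137, // Polygon PoS mainnet chain ID
      accounts: process.env.DEPLOYER_KEY ? [process.env.DEPLOYER_KEY] : [],
    },
  },
};

export default config;
```

The same contracts, scripts, and tests then run unchanged; only the gas token
(MATIC/POL) and fee levels differ.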
One difference is scale: because Polygon can push more transactions, its state
can grow faster than Ethereum's, but since the network is less decentralized
(in hardware requirements and number of full nodes), it can absorb higher
state growth at the cost of centralization pressure. There may also be
differences in chain parameters (like the block gas limit), but logically it
functions the same as Ethereum's execution layer.
Data Availability: One risk in sidechains is data availability – if the chain
validators went rogue and withheld blocks, users could have difficulty proving
things to exit. Polygon's design, by checkpointing only the Merkle root, doesn't
put all transaction data on Ethereum (unlike a rollup). So it does rely on the
assumption that the majority of validators keep the data available and honest.
If a situation occurred where a bad block was checkpointed, users would need
those block details to prove fraud (which is why the Plasma bridge only works
for limited transactions where proofs are easier). The trade-off is that Polygon
can have cheaper transactions since it doesn't publish all data to expensive L1,
but it introduces a bit more trust in validators for data availability. Newer
solutions (like Validiums or some sidechains) focus on this distinction, but
Polygon's approach is to lean on economic incentives and Ethereum anchoring to
strike a balance.
## Network topology and cross-chain bridge
The Polygon network's P2P layer is similar to an Ethereum-like network, with
nodes gossiping blocks and transactions. However, since it is effectively
permissioned (only validators produce blocks), many nodes in the network are
either validators or observer nodes. In practice, many users rely on public RPC
endpoints (hosted by services) to interact, rather than running full nodes,
given it's semi-centralized.
Bridge: The bridge between Polygon and Ethereum is essentially a set of smart
contracts:
* On Ethereum: contracts for staking (managing validators), deposit/withdraw for
assets, and checkpoint management.
* On Polygon: corresponding logic to manage incoming deposits (mint tokens or
release funds) and to freeze assets when moving back.
Validators play a role in the bridge: for the PoS bridge, a quorum of them signs
off on withdrawals. For the Plasma bridge, fraud proofs could be submitted if
needed.
## Polygon's proof-of-stake (pos): a hybrid approach
Polygon's consensus is sometimes described as a hybrid model, because it
combines Proof-of-Stake with Plasma-style anchoring: validators lock stake on
Ethereum, and the sidechain's security ultimately derives from that locked
stake plus the periodic checkpoints. In practice:
* Polygon uses staked tokens (locked on Ethereum) to determine validators (so
  security comes from a proof-of-stake system).
* It leverages Ethereum's finality by writing checkpoints (so finality is
  ensured by Ethereum's security, formerly proof-of-work, now proof-of-stake).
* It also inherits some Plasma characteristics for the security of funds
  (users can exit with proof of funds if validators misbehave).
This hybrid model is different from a pure L1 PoS chain that doesn't rely on any
external chain. Polygon sacrificed some decentralization (smaller validator set
than Ethereum, and reliant on Ethereum itself) to gain immediate scalability and
to bootstrap security via Ethereum.
## Summing up polygon
It achieves fast block times and high throughput via a sidechain that's run by
its own set of validators under a PoS consensus, yet it periodically defers to
Ethereum for final checkpoints. It's an interesting middle ground between a pure
sidechain and a full L2 rollup. Developers and users liked it because it offered
the Ethereum experience (same technology stack) with much better performance,
suitable for gaming, NFTs, DeFi without worrying about mainnet gas fees. The
cost is a bit more trust in validators (though that trust is economically
reinforced by their stake and Ethereum's oversight).
Polygon has since expanded beyond the PoS chain, working on true layer-2s like
Polygon zkEVM (a ZK-rollup) and others, but the PoS chain remains a major hub
and a good example of a public blockchain with a novel consensus design.
## Comparisons with other public blockchains
Beyond Bitcoin, Ethereum, and Polygon, there are several other prominent public
blockchains, each taking different approaches to consensus, finality, and
scalability. We will compare a few: Solana, Avalanche, Cardano, and Polkadot,
focusing on their consensus mechanisms, block times, finality guarantees, and
scaling strategies. These networks illustrate the spectrum of design trade-offs
in the blockchain space.
## Solana: high-throughput via proof of history and tower bft
Consensus Mechanism: Solana is a high-performance blockchain that uses a unique
combination of Proof of Stake (PoS) with an innovation called Proof of History
(PoH) and a consensus algorithm named Tower BFT (a variant of Practical
Byzantine Fault Tolerance tuned for PoH). In Solana:
* Proof of History serves as a cryptographic clock. It's essentially a
  continuously running verifiable delay function (a sequence of SHA-256
  hashes) that all nodes follow. These hashes act as a ledger of elapsed time,
  letting nodes agree on an ordering of events (transactions, votes) without
  constantly communicating about time – they trust the timeline encoded by the
  PoH sequence. (A hash-chain sketch follows this list.)
* Tower BFT builds on a PBFT-like consensus but leverages the PoH clock to
reduce the communication overhead. Validators can vote on blocks and use the
PoH ticks to impose timeouts and leader rotations deterministically. Each
validator has a vote locking mechanism: once they vote on a version of the
ledger, they can't easily revert without waiting out an exponentially
increasing delay. This mechanism prefers liveness – the chain continues
producing blocks rapidly, and finalization of a block grows more certain as
more votes are stacked on top of it with increasing lockouts.
* Leader Rotation: Solana elects leaders (block producers) for short intervals
(a slot). Because of PoH, each slot is a fixed number of PoH ticks (e.g.,
400ms worth of hashing). The schedule of which validator is leader for which
slot is decided in advance (pseudo-randomly, based on stake weight and a VRF),
so each validator knows when it's their turn to produce a block. Leaders
produce blocks in rapid succession.
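The hash-chain sketch referenced in the Proof of History bullet above, in
TypeScript. It is a toy model: real PoH runs SHA-256 at hardware speed with
many ticks per slot, but the verification idea is the same:

```ts
import { createHash } from "node:crypto";

const sha256 = (b: Buffer): Buffer => createHash("sha256").update(b).digest();

// Sketch: each tick hashes the previous state, so the sequence itself proves
// elapsed "time"; mixing an event into the chain timestamps it verifiably.
let state = sha256(Buffer.from("genesis"));
const ledger: { tick: number; state: string; event?: string }[] = [];

for (let tick = 1; tick <= 5; tick++) {
  state = sha256(state); // one unit of verifiable delay
  let event: string | undefined;
  if (tick === 3) {
    event = "tx: alice -> bob";
    state = sha256(Buffer.concat([state, Buffer.from(event)])); // event is now ordered
  }
  ledger.push({ tick, state: state.toString("hex").slice(0, 16), event });
}

// Any verifier can replay the hashes and confirm the event's position.
console.table(ledger);
```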
Block Time and Throughput: Solana's block time is extremely fast – on the order
of 400 milliseconds per block (one slot). This is much lower than Ethereum's 12s
or Bitcoin's 10min. With such a short block time, Solana can process a
continuous stream of transactions. The network has demonstrated high throughput,
theoretically up to 50,000+ TPS in optimal conditions (and often thousands of
TPS in practice), thanks to optimizations like parallel transaction processing
(Solana's runtime can process non-conflicting transactions in parallel, using a
runtime called Sealevel that identifies which accounts are read/written by each
transaction).
Finality: Solana's finality is not instant, but it is fast. Typically, within a
couple of seconds a block can be considered settled for practical purposes. The
protocol doesn't mark an explicit "final" state the way Casper does, but
because of the vote lockouts in Tower BFT, the probability of a fork beyond a
certain depth becomes negligible after some slots. Solana's RPC layer exposes
commitment levels, with "confirmed" typically reached within \~1-2 seconds and
"finalized" after \~32 slots (\~12.8 seconds). Even one confirmation (\~0.4s)
may suffice for low-value interactions, but most integrations wait for a higher
commitment level. In essence, Solana sacrifices some decentralization (it
requires powerful hardware and a limited, though growing, validator set) to
achieve this speed.
Scalability Approach: Solana's approach is to scale vertically and in parallel
on a single global state:
* No sharding: Solana keeps one giant state and one ledger, avoiding the
complexities of cross-shard communication. Instead, it demands validators to
run beefy hardware (high-end CPUs, GPUs for signature verification, lots of
RAM and fast SSDs for the ledger).
* Parallel processing: By carefully planning which transactions can run together
(transactions must specify which state (accounts) they will read/write),
Solana's runtime can execute many transactions at the same time on different
threads or GPU cores, maximizing throughput on modern hardware.
* Network optimizations: Solana introduced concepts like Turbine, a UDP-based
block propagation protocol that breaks blocks into small pieces and
scatter-gathers them across the network (similar to erasure coding), and Gulf
Stream, a mempool-less forwarding protocol where validators send upcoming
transactions to the expected leader in advance, smoothing block production.
* These innovations allow Solana to reduce latency throughout the system: from
networking to consensus to execution.
Smart Contract Environment: Solana does not use the EVM. Instead, it uses eBPF
(extended Berkeley Packet Filter) bytecode as the execution format for on-chain
programs.
Developers typically write smart contracts in Rust (or C, C++) and compile to
BPF bytecode. Solana's model is different: contracts are not autonomous accounts
with storage as in Ethereum; rather, state is held in designated accounts and
passed into programs. A program on Solana can be thought of as a deployed
contract code (identified by a program ID), and it operates on provided account
data. This model is more explicit about what data is touched by each call (which
enables the parallelism). It also means the contract logic and contract data are
separate.
Use Cases: Solana's speed and low fees (fractions of a cent per tx) make it
attractive for high-frequency trading, gaming, and other use cases that demand
throughput. The trade-off is that running a Solana validator is
resource-intensive, so the network tends to be more "heavy" and may centralize
in data centers. Nonetheless, it represents one extreme of the design space:
maximize performance by leveraging current hardware and clever protocol design.
## Avalanche: sub-second finality with avalanche consensus and subnets
Consensus Mechanism: Avalanche introduced a novel consensus family often
referred to as the Avalanche consensus (also "Snowball"/"Snowflake" algorithms).
It's neither classical BFT nor Nakamoto PoW, but a metastable consensus achieved
by repeated random subsampling of validators:
* In Avalanche, when a validator sees a transaction or block, it queries a
  small random subset of other validators about their preference (which of two
  conflicting options do you prefer, A or B?). It then adjusts its own
  preference based on the majority of responses. This query process is
  repeated in rounds (with different random samples) until the network
  gravitates to a unanimous decision. The process leverages probability and
  randomness to achieve consensus quickly with far lower communication
  overhead than PBFT (not every validator talks to every other, only random
  subsets). (A toy simulation follows this list.)
* The result is a consensus that is leaderless (no single proposer that everyone
follows each round) and highly robust. It can achieve consensus with high
probability in just a couple of network round trips.
* Avalanche consensus is used to decide which transactions (or blocks) are
accepted. It's fast – finality on the order of one second or even sub-second
is common because after a few polling rounds, confidence is very high that the
decision won't change.
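The toy simulation referenced above: a single node repeatedly subsamples the
network, Snowflake/Snowball-style, until its confidence counter crosses a
threshold. The parameters (sample size k, quorum alpha, decision threshold
beta) are illustrative, not Avalanche's production values:

```ts
// Sketch: metastable consensus by repeated random subsampling (toy model).
type Choice = "A" | "B";

function decide(prefs: Choice[], k = 5, alpha = 4, beta = 10): Choice {
  let confidence = 0;
  while (confidence < beta) {
    // Query a random sample of k validators about their current preference.
    const sample = Array.from(
      { length: k },
      () => prefs[Math.floor(Math.random() * prefs.length)],
    );
    const votesA = sample.filter((c) => c === "A").length;
    const winner: Choice | null =
      votesA >= alpha ? "A" : k - votesA >= alpha ? "B" : null;
    if (winner === null) continue;              // no alpha-majority this round
    if (winner === prefs[0]) confidence++;      // agreement: grow confidence
    else { prefs[0] = winner; confidence = 1; } // flip preference, restart count
  }
  return prefs[0]; // node 0's final, locked-in decision
}

// 70% of 100 validators initially prefer "A"; node 0 converges to the majority.
const network: Choice[] = Array.from({ length: 100 }, (_, i) => (i < 70 ? "A" : "B"));
console.log("decided:", decide(network));
```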
Avalanche's platform actually consists of multiple chains:
* The X-Chain (Exchange chain) which uses a DAG ledger (directed acyclic graph
of transactions) and Avalanche consensus to manage asset transfers
(UTXO-based, used for native asset management).
* The C-Chain (Contract chain) which is an instance of the EVM (account-based)
and uses a modified Avalanche consensus (called Snowman) that is optimized for
totally ordered blocks (Snowman is basically Avalanche consensus but with
linear block production, suitable for smart contract execution). C-Chain is
where Ethereum-compatible dApps run, so it behaves much like an Ethereum clone
but using Avalanche consensus rather than PoW/PoS.
* The P-Chain (Platform chain) which handles staking, validator membership, and
coordination of subnets (it also uses Snowman consensus).
Block Time and Finality: Avalanche blocks (particularly on the C-Chain) are
quite fast. The network commonly achieves block times of around 1 second, and
importantly finality is typically achieved within \~1-2 seconds. This means that
when a transaction is included in a block, within a second or two it is
irreversible with extremely high confidence. There is no concept of a long
confirmation wait; Avalanche offers near-immediate finality akin to classical
BFT systems, but with a much larger validator set (hundreds or thousands of
validators) due to its efficient consensus. In practice, Avalanche's
time-to-finality is one of the best among major chains – often cited as
sub-second in ideal conditions and around 1-2 seconds under load.
Scalability Approach: Avalanche's approach to scaling is two-fold:
* Efficient Consensus: Its consensus can accommodate a high number of validators
without a massive performance penalty. Communication complexity is low
(probabilistic gossip), so it can maintain decentralization (anyone can be a
validator by staking a modest amount of AVAX and running a node) while still
achieving high throughput and low latency. This is in contrast to Solana which
restricts validator count by hardware demands, or to Ethereum which restricts
throughput to maintain decentralization; Avalanche tries to get both via
algorithmic efficiency.
* Subnets: Avalanche is built as a platform for launching interoperable
blockchains. The default set (X, P, C chains) is known as the Primary Network,
which all validators validate. But Avalanche allows the creation of subnets –
a set of validators that can run one or more custom blockchains with their own
rules (could be permissioned chains, or chains optimized for specific
applications, possibly using different virtual machines). This is a
sharding-like approach: each subnet can be considered an independent shard
with its own state and execution, and subnets can be heterogeneous (not all
have to run EVM; one could run a different VM or application-specific chain).
* Subnets can communicate via the Primary Network or via bridges, though native
interoperability is still evolving.
* This approach means Avalanche can scale by adding more subnets to handle new
workloads, rather than piling everything on one chain. However, the default
C-Chain itself can handle a significant load (several thousand TPS) given the
consensus performance.
* Avalanche essentially offers an infrastructure where many blockchains (even
with different designs) share a common security model if they are validated by
a common validator set. It's up to the creators whether to require all
Avalanche validators or a subset.
Smart Contract Environment: The primary smart contract platform on Avalanche is
the C-Chain, which is EVM-compatible. It mirrors Ethereum's capabilities
(solidity contracts, same API). This was a strategic choice to attract Ethereum
developers to easily deploy on Avalanche. The Avalanche C-Chain benefits from
Avalanche consensus, so you get Ethereum-like smart contracts with much faster
finality and higher throughput. The downside might be slightly less mature
tooling or the need to use the Avalanche-specific endpoints, but generally it's
very close to Ethereum.
Avalanche also supports other VMs via subnets (for example, there is a subnet
running a Bitcoin-like UTXO chain, and others planned with native Rust or Move
VMs).
Finality Guarantees: Because Avalanche's consensus doesn't rely on chain depth
and probabilistic confirmation, once a transaction is confirmed and finalized,
it's done. Avalanche provides deterministic finality. The probability of
reversal after finality is essentially zero unless an attacker controls a
majority of validators (and even then the consensus protocol doesn't create
typical forks; an attacker would likely have to pause consensus or break it
rather than secretly create a conflicting history).
Comparative Notes: Avalanche's block time (\~1s) and finality (1-2s) are much
faster than Ethereum's (\~12s, \~6-12min finality) and Bitcoin's (10min, 60min+
finality). It's closer to Solana's in speed, though using a very different
approach (gossip vs leader-based). Avalanche doesn't reach the raw TPS of Solana
in one chain (Solana's claimed 50k vs Avalanche maybe a few thousand on
C-Chain), but Avalanche can scale out with subnets and keep adding more chains
if needed. Avalanche is also lighter on hardware than Solana; running an
Avalanche validator is more feasible on consumer hardware (though it still
benefits from good networking and CPU for cryptographic operations).
## Cardano: ouroboros proof-of-stake and eutxo model
Consensus Mechanism: Cardano is a blockchain platform that emphasizes academic
research and formally verified security. Its consensus algorithm is a family of
PoS protocols named Ouroboros. Unlike Ethereum's Casper FFG or Avalanche's BFT,
Ouroboros is a chain-based Proof-of-Stake similar in spirit to Nakamoto
consensus but using stake-weighted lottery for block leaders. Key points:
* Ouroboros Praos (current version): Time is divided into epochs (about 5 days
  long on mainnet) and each epoch is subdivided into 1-second slots. For each
  slot, the protocol randomly selects a stakeholder (usually a stake pool
  representative) to be the block producer for that slot, with probability
  proportional to the amount of stake they control (their own plus delegated
  stake).
* If a slot has a leader, that leader can produce a block. Many slots have no
  leader (and thus no block), which creates expected gaps between blocks. In
  practice, roughly 5% of slots produce a block (the active slot coefficient),
  so Cardano's average block time is about 20 seconds.
* Slot leader election uses a VRF (Verifiable Random Function): each potential
  leader privately checks whether they won a slot by evaluating the VRF over a
  seed, yielding a proof of eligibility if so. (A sketch of the lottery math
  follows this list.)
* Ouroboros, being chain-based, means forks can occur if two leaders are elected
close or network delays cause two different blocks for the same slot or
adjacent slots. The chain selection rule in Ouroboros is similar to Bitcoin's
longest chain (or rather the chain with highest accumulated stake-signed
blocks), albeit with tweaks to ensure honest majority of stake leads to
eventual convergence.
* Cardano evolves Ouroboros with versions like Ouroboros Genesis, Ouroboros
Omega, each improving aspects like flexibility in offline periods or better
random selection. But importantly, it's not instant finality. It inherits a
probabilistic finality like Bitcoin: the deeper a block is in the chain, the
more secure it is considered.
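The lottery-math sketch referenced in the VRF bullet above. It follows the
Praos leader-election formula phi\_f(alpha) = 1 - (1 - f)^alpha, with
Math.random() standing in for the VRF output (a real implementation produces a
verifiable proof alongside the value):

```ts
// Sketch: Ouroboros Praos-style private slot-leader check (toy model).
const f = 0.05; // active slot coefficient: ~5% of 1s slots carry a block (~20s average)

// Probability that a stakeholder with stake fraction `alpha` leads a given slot.
const phi = (alpha: number): number => 1 - Math.pow(1 - f, alpha);

function isSlotLeader(stakeFraction: number): boolean {
  const vrfOutput = Math.random(); // placeholder for VRF(epochSeed, slot, secretKey)
  return vrfOutput < phi(stakeFraction);
}

// A pool holding 2% of total stake wins ~0.1% of slots, i.e. roughly one
// block every ~1000 seconds on average.
let wins = 0;
const trials = 1_000_000;
for (let i = 0; i < trials; i++) if (isSlotLeader(0.02)) wins++;
console.log(`led ${wins} of ${trials} slots (expected ~${Math.round(phi(0.02) * trials)})`);
```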
Finality: As a result of the above, Cardano's transactions have probabilistic
finality. The network does not yet have a finality gadget (future variants such
as Ouroboros Leios and Chronos target throughput and time-synchronization
improvements, and a finality gadget may be incorporated eventually). A
transaction on Cardano is commonly considered final after about 10-15 blocks
(a few minutes at \~20s per block) for practical security, but extreme
certainty (say, 99.999%) may require on the order of 100 blocks or more.
Cardano's documentation even suggests that, in a theoretical sense, absolute
finality "cannot happen in less than one day" – implying the chain is only
fully settled after an epoch boundary. This is far slower finality than
BFT-style chains achieve, and slower than Ethereum's. However, significant
rollbacks on Cardano are extremely unlikely unless someone controls a majority
of stake and can orchestrate a deep reorg.
Scalability Approach: Cardano's base layer scalability relies on protocol
refinements and on-chain parameter increases:
* It uses eUTXO (Extended UTXO) as its transaction model, not accounts. eUTXO is
like Bitcoin's UTXO but with the ability for outputs to carry attached data
and scripts (Plutus scripts) that must be satisfied to spend them. This model
enables local verification of contract logic and more parallelism (since
independent UTXOs can be processed in parallel), but it also means something
like a single contract state is more cumbersome to update (it's broken into
UTXOs).
* Cardano has been gradually increasing parameters like block size, script
memory limits, etc., to allow more transactions per block. However, on-chain
throughput remains moderate (in the order of a few dozen transactions per
second at most currently). They haven't pushed base layer throughput to
extremes yet.
* The major scalability plans for Cardano involve layer 2 solutions and
sidechains:
* Hydra Head Protocol: State channels that allow a group of users to do fast
off-chain transactions and only settle the net result to the chain. Hydra
could allow many local off-chain ledgers operating for quick interaction
(e.g., gaming or fast payments) and leveraging Cardano for security when
closing the channel.
* Sidechains: Cardano is developing sidechains that connect to the main chain
  and use ADA for staking but have different parameters (for example, a
  sidechain for EVM compatibility or one optimized for privacy). Midnight
  (privacy-focused) is a recently discussed example, and Milkomeda (an EVM
  sidechain) already operates connected to Cardano.
* Input Endorsers: A future upgrade in Ouroboros might separate transaction
propagation from block confirmation by introducing input endorsers that
pre-validate transactions and then include references in blocks, increasing
throughput.
* Cardano's approach is often to research and slowly deploy upgrades,
prioritizing correctness. It may not be the fastest to scale, but it aims to
do so methodically.
Smart Contract Environment: Cardano's smart contracts run on a platform called
Plutus, which uses the eUTXO model. Contracts are written in a Haskell-based
language (or another high-level language that compiles to Plutus Core). The
model is quite different from Ethereum's:
* Because of eUTXO, a contract state is represented as UTXOs that a script can
spend and produce new UTXOs. All conditions must be satisfied in one
transaction, which encourages a style of contracts where logic is applied in
the transaction construction and the chain simply verifies it.
* This makes certain things efficient (parallelism, since independent UTXOs =
independent transactions, no global mutex on a contract's storage) but others
more complex (composing contracts or doing something like "all participants
agree" might require more careful orchestration).
* Cardano also focuses on formal verification; the Plutus language and the
overall design aim to reduce smart contract vulnerabilities (though it's still
possible to write bad logic, of course).
Comparative Notes: Cardano tends to have longer latency (20s blocks, no quick
finality) compared to others. Its throughput has been lower, but with
improvements and Hydra, it may increase. It trades off raw performance in favor
of a conservative, research-driven approach. Where Solana and Avalanche push the
envelope on raw TPS and finality, Cardano emphasizes security proofs and novel
L2 scaling. In a sense, Cardano aligns closer to Bitcoin's philosophy among
these, but with PoS and smart contracts.
## Polkadot: heterogeneous sharding with npos and grandpa finality
Consensus Mechanism: Polkadot is a sharded multi-chain network designed to
connect multiple specialized blockchains (parachains) under one security
umbrella. Its consensus has two layers:
* Block Production – BABE: Polkadot uses a variant of Ouroboros called BABE
(Blind Assignment for Blockchain Extension) for selecting block authors on the
relay chain (the main chain). Similar to Cardano, validators are randomly
assigned slots to produce relay chain blocks, in a decentralized lottery
fashion. BABE runs continuously creating blocks (Polkadot's block time is
about 6 seconds).
* Finality – GRANDPA: Complementing BABE, Polkadot has a finality gadget called
  GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement). GRANDPA
  is a BFT algorithm where validators vote on the chain's state. It doesn't run
  per block; when it runs, it can finalize many blocks in one round, finalizing
  the highest chain prefix backed by 2/3 of validators' votes. In practice,
  GRANDPA finalizes every few seconds or every few rounds depending on network
  conditions, so Polkadot blocks become irreversible typically within half a
  minute or less – often a batch of recent blocks is finalized together. (A
  toy vote-tally sketch follows this list.)
* Because Polkadot separates block production from finality, it achieves both
good throughput (continuous 6s blocks even if finality lags a bit) and
deterministic finality eventually. If the network is under heavy load, blocks
might still be produced but finality might catch up with a slight delay; if
finality is working faster than production, it might finalize every block
almost immediately as they come.
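The vote-tally sketch referenced in the GRANDPA bullet above: a toy model on a
single un-forked chain, where a vote for block height h implicitly supports
every ancestor of h, so the highest height backed by 2/3 of validators
finalizes the whole prefix. Real GRANDPA votes on chain prefixes across forks:

```ts
// Sketch: GRANDPA-style batch finality on an un-forked chain (toy model).
function finalizedHeight(bestVotes: number[], validatorCount: number): number {
  const threshold = Math.ceil((2 * validatorCount) / 3);
  if (bestVotes.length < threshold) return 0; // not enough votes this round
  const sorted = [...bestVotes].sort((a, b) => b - a);
  // sorted[threshold - 1] is the highest height supported by >= 2/3 of
  // validators, because a vote for h also endorses every block below h.
  return sorted[threshold - 1];
}

// 7 validators report their best heights; 5 of 7 (>= 2/3) sit at >= 104,
// so blocks up to height 104 finalize in one round while production continues.
console.log(finalizedHeight([106, 105, 104, 104, 104, 101, 99], 7)); // -> 104
```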
Nominated Proof-of-Stake (NPoS): Polkadot's PoS system involves nominators (who
stake DOT tokens and back certain validators) and validators (who actually run
nodes and produce/validate blocks). This is an iteration on Delegated PoS,
with differences such as a nominator's stake being split among several
validators, and an election algorithm that chooses a diverse validator set
maximizing
stake decentralization. Polkadot typically has on the order of a few hundred
validators (perhaps \~300–1000) in its active set, and many nominators who stake
behind them.
Sharding via Parachains: Polkadot's big scalability approach is parallel chains
(parachains). The relay chain (Polkadot main chain) itself doesn't do much in
terms of smart contracts or heavy transactions; its job is to coordinate and
finalize states of parachains. Each parachain is a blockchain with its own state
transition function (it could be a smart contract platform, a runtime for
identity, a DeFi chain, an IoT chain, etc.). Validators in Polkadot are grouped
into rotating subsets assigned to validate parachain blocks (collators on each
parachain assemble candidate blocks, and the assigned validators check the
collators' work).
* Each parachain produces blocks in parallel, and those blocks are checked by a
subset of validators, then the results (state transitions) are posted to the
relay chain as candidates.
* The relay chain block includes the certified parachain blocks' state roots.
GRANDPA finality then finalizes the relay chain block, which means all
parachain states in that block are finalized.
* This architecture allows Polkadot to process many chains' transactions at
once, theoretically scaling linearly with the number of parachains. Initially,
Polkadot might support e.g. 100 parachains, effectively meaning 100 parallel
throughput lanes.
* Parachains can even have their own consensus if they want (but they rely on
Polkadot validators for final approval). Polkadot ensures security via shared
staking – an attack on one parachain would require attacking the whole
network's validator set.
Block Time and Throughput: The relay chain's 6-second block time means the
system is fairly responsive. Parachains also effectively follow that tempo (each
parachain might produce a block each relay chain block or at least have the
opportunity to). Polkadot's design goal is high aggregate throughput through
parallelism, although any single parachain might still have limits (depending on
its own config, e.g., Moonbeam parachain (an Ethereum-like chain on Polkadot)
might have a block time of 12s and certain gas limit).
Finality: With GRANDPA, Polkadot achieves finality in roughly 1-2 relay chain
blocks in many cases. For example, it might finalize every second block, or
finalize a batch after 4 blocks if the network is slower. Empirically, Polkadot
often reaches finality within \~12 to 30 seconds; community demonstrations have
shown blocks across dozens of parachains finalizing together within 30 seconds.
This is far quicker than probabilistic finality and comparable to other
BFT-style chains, with the advantage that finality covers the entire sharded
system at once.
Scalability and Upgrades: Polkadot can increase its throughput by:
* adding more parachains (there is a mechanism to auction parachain slots,
etc.),
* using parathreads (pay-as-you-go parachains for lower throughput chains),
* or future upgrades like asynchronous backing, which pipeline parachain block
production more efficiently. Polkadot's architecture is forward-looking; it
intends to incorporate further optimizations (for instance, there's work on
increasing the number of parallel threads or improving how parachains hand off
data).
Smart Contract Environment: Polkadot itself doesn't have a native smart contract
VM on the relay chain (no user contracts on the relay chain). Instead, smart
contracts live on parachains. Polkadot provides a framework called Substrate to
build parachains. Substrate is very flexible; you can compose pallets (modules)
for governance, balances, etc., and also include a smart contract pallet if you
want your chain to support contracts. Many parachains exist:
* Moonbeam/Moonriver: EVM-compatible parachains (so essentially an Ethereum-like
environment on Polkadot/Kusama).
* Acala: DeFi focused with its own stablecoin and also EVM compatibility.
* Parallel, Astar, etc.: Some support the EVM, some support WebAssembly smart
  contracts (Substrate's pallet-contracts module executes Wasm contracts,
  typically written in the ink! language).
* Unique Network: NFT-focused chain with custom logic.
This heterogeneous approach means Polkadot doesn't enforce one execution
environment – each chain can optimize for its use case. However, one downside is
that achieving cross-chain interoperability (beyond what Polkadot provides via
XCMP – cross-chain message passing – among parachains) is more complex for
developers, and liquidity or state is fragmented across chains. Polkadot's
protocol handles cross-chain messages trustlessly, which is powerful (an asset
can move from one parachain to another under the same security, unlike bridging
across totally separate L1s which require external trust). This is one of its
selling points: a foundation for a multi-chain ecosystem with shared security
and trust-minimized interoperability.
Comparative Notes: Polkadot stands out for its sharding (multiple parallel
chains) which neither of the others (Solana, Avalanche, Cardano) do in the same
unified way (Avalanche has subnets but they are not as tightly coupled; Ethereum
is planning data sharding but currently relies on L2; Solana is monolithic,
Cardano primarily monolithic + L2). Polkadot's 6s blocks and finality typically
under a minute put it in a similar league as Avalanche in terms of
user-experience quickness (though Avalanche is a bit faster).
robust validator set and the slashing of misbehavior like any PoS, but it hasn't
faced major attacks. Also noteworthy is Polkadot's on-chain governance which can
upgrade the protocol quite flexibly (the network has self-amendment features).
Finally, Polkadot's model means if one parachain congests itself, others are not
directly slowed (except if it saturates shared resources on the relay chain, but
they're isolated to a degree). This is a different approach than scaling a
single chain to handle everything; it aligns with the idea that different
applications may be better on different specialized chains, but all tied
together.
***
Each of these platforms – Solana, Avalanche, Cardano, Polkadot – showcases
different design philosophies:
* Solana: maximize performance on one chain, hardware-scale, at the cost of high
requirements and more complex networking.
* Avalanche: invent new consensus to get both speed and decentralization, allow
many chains but keep default one chain easy to use (with EVM).
* Cardano: prioritize security proofs and gradual decentralization, use novel
PoS, accept slower finality, and scale through off-chain means.
* Polkadot: embrace multi-chain from the start, with strong finality and the
ability to run many types of blockchains under one network.
These trade-offs reflect the blockchain trilemma (decentralization, security,
scalability). No single approach is definitively "best" – each is optimizing for
certain use cases and assumptions.
file: ./content/docs/knowledge-bank/public-sector-usecases.mdx
meta: {
"title": "Public sector use cases",
"description": "A comprehensive guide to blockchain applications across government, infrastructure, citizen services, and public sector modernization"
}
## Introduction to blockchain in the public sector
Governments and public institutions around the world are under increasing
pressure to deliver efficient, transparent, and citizen-friendly services. These
organizations manage a vast array of responsibilities including public records,
identity management, taxation, procurement, welfare programs, infrastructure,
elections, and more. However, many of these systems are built on outdated
technologies and siloed processes that limit interoperability, trust, and
real-time decision-making.
Blockchain presents an opportunity to reimagine how public services are
delivered, by enabling secure, decentralized, and auditable infrastructure that
supports data integrity, automation, and collaboration. At its core, blockchain
offers a shared ledger across multiple parties where records are tamper-evident,
smart contracts enforce logic without intermediaries, and identities can be
verified without centralized databases.
In the public sector, blockchain is not a replacement for existing systems but a
foundational layer that can connect departments, institutions, and citizens in a
more trusted and efficient digital ecosystem. This guide explores practical and
emerging use cases across a wide range of public domains where blockchain has
the potential to transform processes and improve outcomes for citizens and
governments alike.
## Benefits of blockchain for public institutions
Public sector entities face unique challenges including accountability to
taxpayers, the need for transparency, compliance with legal frameworks, and the
expectation of universal service access. Blockchain provides benefits that align
directly with these imperatives:
* Tamper-evident records that improve accountability and reduce fraud
* Shared data layers that break down silos between departments and agencies
* Cryptographic proof of actions and approvals for audit and compliance
* Automation of multi-party workflows through smart contracts
* Permissioned access control and data sharing with privacy protections
* Immutable registries for assets, entitlements, and rights
* Real-time traceability for funds, goods, and documents
These benefits translate into faster service delivery, lower administrative
costs, improved trust in government, and better visibility into how public
resources are managed.
## Land and property registration
Property ownership is one of the most fundamental public services provided by
governments. Traditional land records are prone to forgery, manual error,
missing documents, and disputes over ownership. Blockchain enables secure,
digital property registries where every transaction involving a land parcel is
recorded immutably and transparently.
In a blockchain-based land registry, each plot of land is assigned a unique
digital identifier that maps to its physical location and legal metadata.
Ownership transfers are executed through smart contracts that validate the
identity of the parties, update the registry, and store proof of transaction.
Benefits of this approach include:
* Elimination of duplicate titles and fraudulent claims
* Transparent historical record of all ownership changes
* Instant verification of ownership and encumbrances
* Reduction in legal disputes and due diligence time for banks
* Integrated workflows between land departments, municipal agencies, and
financial institutions
Several countries including Sweden, Georgia, and India have piloted
blockchain-based land registries to simplify title management and increase
public confidence in property rights.
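To make the transfer flow concrete, here is a hedged TypeScript sketch of the
registry logic described above. The parcel identifier, identity checks, and
in-memory map are placeholders; a production system would implement this as a
smart contract with government-controlled issuance and verifiable-credential
checks:

```ts
// Sketch: a toy land registry with an append-only ownership history.
interface Parcel {
  owner: string;
  encumbered: boolean; // e.g., an outstanding mortgage or legal hold
  history: string[];   // tamper-evident trail (on-chain in a real registry)
}

const registry = new Map<string, Parcel>();
registry.set("PARCEL-001", {
  owner: "alice",
  encumbered: false,
  history: ["registered -> alice"],
});

function transfer(parcelId: string, from: string, to: string): void {
  const parcel = registry.get(parcelId);
  if (!parcel) throw new Error("unknown parcel");
  if (parcel.owner !== from) throw new Error("seller is not the registered owner");
  if (parcel.encumbered) throw new Error("parcel has an open encumbrance");
  parcel.owner = to;
  parcel.history.push(`${from} -> ${to} @ ${new Date().toISOString()}`);
}

transfer("PARCEL-001", "alice", "bob");
console.log(registry.get("PARCEL-001")); // full ownership trail, instantly verifiable
```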
## Public procurement and government tenders
Public procurement is a critical function for governments but is often marred by
lack of transparency, corruption, and inefficiencies. Blockchain enables a
transparent and auditable procurement system where every step—from tender
creation to bid evaluation and contract fulfillment—is recorded on-chain and
accessible for review.
Key features of blockchain in procurement include:
* Timestamped publishing of tenders with immutable parameters
* Confidential, encrypted bid submissions that can only be opened after the
deadline
* Smart contract logic for evaluating bids against predefined criteria
* Automatic selection of the winning bidder and generation of contract terms
* On-chain performance monitoring and milestone-based payments
This system increases trust among vendors, reduces bid manipulation, ensures
compliance with procurement rules, and enables real-time oversight by
anti-corruption bodies and auditors.
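One way to realize confidential submissions is a commit-reveal scheme: before
the deadline the contract accepts only a hash of each bid, and afterwards the
plaintext bid is checked against that hash. A minimal sketch, omitting the
encryption, deposits, and evaluation logic a real tender system would add:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Commit-reveal sealed bidding sketch: bids stay hidden until the
/// deadline, then are revealed and checked against their commitments.
contract SealedTender {
    uint256 public immutable biddingDeadline;
    mapping(address => bytes32) public commitments;
    mapping(address => uint256) public revealedBids;

    event BidCommitted(address indexed bidder);
    event BidRevealed(address indexed bidder, uint256 amount);

    constructor(uint256 biddingPeriodSeconds) {
        biddingDeadline = block.timestamp + biddingPeriodSeconds;
    }

    /// Before the deadline: submit keccak256(amount, salt) only.
    function commitBid(bytes32 commitment) external {
        require(block.timestamp < biddingDeadline, "bidding closed");
        commitments[msg.sender] = commitment;
        emit BidCommitted(msg.sender);
    }

    /// After the deadline: reveal the bid; it must match the commitment.
    function revealBid(uint256 amount, bytes32 salt) external {
        require(block.timestamp >= biddingDeadline, "bidding still open");
        require(
            keccak256(abi.encodePacked(amount, salt)) == commitments[msg.sender],
            "reveal does not match commitment"
        );
        revealedBids[msg.sender] = amount;
        emit BidRevealed(msg.sender, amount);
    }
}
```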
Countries like Colombia and Chile have experimented with blockchain in public
procurement to enhance transparency, lower corruption risk, and improve
competitiveness in public bidding.
## Digital identity and citizen credentials
A foundational identity is required for citizens to access nearly every public
service—healthcare, education, taxation, social benefits, and voting.
Traditional identity systems are centralized, fragmented, and lack portability
across institutions. Blockchain introduces self-sovereign identity models where
citizens own and control their credentials while government entities issue
verifiable proofs.
In this model:
* Citizens receive a digital wallet that stores verifiable credentials issued by
various government agencies
* Each credential is anchored on a blockchain with cryptographic signatures
* Citizens can present zero-knowledge proofs to verify claims (e.g., age,
residency, qualification) without revealing sensitive information
* Agencies and private service providers can instantly verify the validity of
credentials without accessing underlying databases
The benefits include streamlined onboarding for services, reduced identity
fraud, simplified interagency coordination, and enhanced citizen privacy. The
European Union’s EBSI initiative and projects in Canada and Singapore are
advancing blockchain-based digital identity ecosystems for public use.
## Education credentials and academic records
Educational certificates and degrees are prone to forgery and often difficult to
verify across institutions, employers, or borders. Blockchain provides a
trusted, digital credential registry where academic achievements are issued as
verifiable credentials by authorized institutions.
Once recorded on-chain:
* Degrees and diplomas can be validated instantly by employers or government
bodies
* Students can carry their credentials in a digital wallet and share only when
needed
* Verifiers can confirm authenticity without contacting the issuing authority
* Records remain intact, even if the issuing institution shuts down or loses
data
Governments can use this model for national credential registries, professional
licensing boards, or skill development programs. Countries such as Malta,
Singapore, and India have implemented blockchain for issuing diplomas,
vocational certificates, and academic transcripts.
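A minimal sketch of such a registry, assuming institutions store only a hash
of each credential on-chain while the signed document itself stays with the
student (contract and field names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Credential registry sketch: accredited issuers anchor a hash of each
/// diploma on-chain; anyone can verify a presented document instantly.
contract CredentialRegistry {
    address public admin;
    mapping(address => bool) public isIssuer;    // accredited institutions
    mapping(bytes32 => address) public issuerOf; // credentialHash => issuer
    mapping(bytes32 => bool) public revoked;

    event CredentialIssued(bytes32 indexed credentialHash, address indexed issuer);
    event CredentialRevoked(bytes32 indexed credentialHash);

    constructor() { admin = msg.sender; }

    function addIssuer(address issuer) external {
        require(msg.sender == admin, "only admin");
        isIssuer[issuer] = true;
    }

    function issue(bytes32 credentialHash) external {
        require(isIssuer[msg.sender], "not an accredited issuer");
        require(issuerOf[credentialHash] == address(0), "already issued");
        issuerOf[credentialHash] = msg.sender;
        emit CredentialIssued(credentialHash, msg.sender);
    }

    function revoke(bytes32 credentialHash) external {
        require(issuerOf[credentialHash] == msg.sender, "only issuer");
        revoked[credentialHash] = true;
        emit CredentialRevoked(credentialHash);
    }

    /// A verifier hashes the presented document off-chain and checks here.
    function isValid(bytes32 credentialHash) external view returns (bool) {
        return issuerOf[credentialHash] != address(0) && !revoked[credentialHash];
    }
}
```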
## Social welfare and benefit disbursement
Governments deliver a wide range of subsidies and benefits to citizens,
including food rations, pensions, housing support, unemployment insurance, and
disaster relief. These programs often struggle with inefficiencies, delays,
leakage, and fraud. Blockchain enables conditional, transparent, and automated
disbursement of welfare benefits through smart contracts.
Key components of blockchain-based welfare delivery include:
* Registration of beneficiaries using verified digital identity
* Smart contracts that release payments or entitlements based on eligibility
conditions
* Real-time tracking of disbursements and consumption at point of delivery
* Citizen-facing dashboards for grievance redressal and entitlement history
This system reduces administrative overhead, ensures that funds reach the
intended recipients, and enables data-driven policy design based on actual
consumption trends.
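A stripped-down sketch of pull-based benefit payments, assuming identity
verification happens off-chain and the agency pre-funds the contract (all
names are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Welfare disbursement sketch: an agency registers verified beneficiaries
/// and entitlements; payments release only to registered addresses.
contract BenefitDisbursement {
    address public agency;
    mapping(address => uint256) public entitlement; // wei owed per beneficiary

    event BeneficiaryRegistered(address indexed beneficiary, uint256 amount);
    event BenefitPaid(address indexed beneficiary, uint256 amount);

    constructor() payable { agency = msg.sender; }

    function registerBeneficiary(address beneficiary, uint256 amount) external {
        require(msg.sender == agency, "only agency");
        entitlement[beneficiary] = amount;
        emit BeneficiaryRegistered(beneficiary, amount);
    }

    /// Beneficiaries pull their entitlement; the transfer is logged on-chain.
    function claim() external {
        uint256 amount = entitlement[msg.sender];
        require(amount > 0, "no entitlement");
        entitlement[msg.sender] = 0; // zero before transfer (checks-effects-interactions)
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        emit BenefitPaid(msg.sender, amount);
    }

    receive() external payable {} // the agency tops up the pool
}
```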
Countries including Kenya, the Philippines, and South Korea have explored
blockchain pilots for pension disbursement, conditional cash transfers, and
humanitarian aid distribution.
## Healthcare records and vaccine traceability
Healthcare systems rely on accurate, timely access to patient data and medical
histories. Yet in many jurisdictions, health records are fragmented across
hospitals, clinics, and labs. Blockchain creates a unified patient-centric
health record where access is controlled by the patient and verified by
cryptographic signatures.
Use cases include:
* Cross-provider health record access with patient consent
* Tamper-proof storage of vaccination records and test results
* Public health dashboards based on anonymized and aggregated blockchain data
* Pharmaceutical supply chain tracking to detect counterfeits
During the COVID-19 pandemic, several countries explored blockchain-based
vaccine certificates and distribution tracking to ensure transparency and
prevent misuse. Estonia and the UAE are leading examples of blockchain adoption
in national healthcare systems.
## Urban governance and smart city platforms
City governments manage an array of digital services—from public transport and
utilities to permitting and community feedback. As cities adopt smart
infrastructure, blockchain can serve as the secure, interoperable data layer
that connects devices, departments, and citizens in real time.
Applications in urban governance include:
* Tokenized incentives for recycling, mobility, or energy conservation
* Transparent tracking of utility usage and billing
* Citizen complaint management with traceable resolution timelines
* Decentralized identity for accessing municipal services
By using blockchain as a coordination mechanism, cities can deliver more
responsive, efficient, and citizen-friendly digital public services. Barcelona,
Dubai, and Amsterdam have deployed blockchain projects to modernize service
delivery and enhance citizen engagement.
## Taxation and revenue management
Efficient taxation is vital for government sustainability, yet public tax
systems often suffer from limited integration, complex compliance procedures,
evasion, and corruption. Blockchain offers a transparent and auditable
infrastructure to automate tax collection, reporting, and reconciliation while
enabling real-time oversight.
Blockchain-based taxation systems can provide:
* Immutable recordkeeping of invoices and taxable events
* Smart contract enforcement of tax calculations and deductions
* Integration with digital payment systems for instant tax remittance
* Real-time dashboards for tax authorities and audit agencies
* Cross-border tax validation for goods and services
For example, a government could link its value-added tax system to a
blockchain-enabled e-invoicing network. Each invoice issued by a registered
business is hashed and recorded on-chain. Smart contracts compute the applicable
tax and enforce split payments—sending the net amount to the seller and the tax
portion directly to the treasury. This reduces fraud, prevents underreporting,
and ensures timely revenue collection.
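A simplified sketch of this split-payment logic, assuming a flat 20% rate and
an invoice identified by its hash; a real system would source rates and
vendor registration status from authoritative registries:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Split-payment sketch: the buyer pays the gross invoice amount; the
/// contract forwards the net to the seller and the tax to the treasury.
contract VatSplitPayment {
    address public immutable treasury;
    uint256 public constant TAX_BPS = 2000; // 20% VAT in basis points (assumed rate)

    event InvoiceSettled(bytes32 indexed invoiceHash, address indexed seller, uint256 net, uint256 tax);

    constructor(address treasury_) { treasury = treasury_; }

    function settleInvoice(bytes32 invoiceHash, address payable seller) external payable {
        uint256 tax = (msg.value * TAX_BPS) / 10_000;
        uint256 net = msg.value - tax;

        (bool okTax, ) = treasury.call{value: tax}("");
        require(okTax, "tax transfer failed");
        (bool okNet, ) = seller.call{value: net}("");
        require(okNet, "seller transfer failed");

        emit InvoiceSettled(invoiceHash, seller, net, tax);
    }
}
```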
Brazil, India, and China have explored integrating blockchain with e-invoicing
and GST (Goods and Services Tax) systems to enhance tax transparency and reduce
evasion.
## Elections and voting systems
Free and fair elections are the foundation of democratic governance. However,
conventional voting systems are vulnerable to issues such as ballot tampering,
voter fraud, low turnout, and delayed counting. Blockchain provides a secure and
transparent method for digital voting where each vote is recorded immutably and
counted accurately.
A blockchain-based voting system can support:
* Voter registration through self-sovereign digital identity
* Issuance of unique, one-time voting tokens to verified citizens
* Secure ballot casting using encrypted, pseudonymous identities
* Transparent counting process visible to all stakeholders
* Immutable audit trail of every vote and counting action
These systems can be used for local, national, or institutional elections as
well as participatory governance mechanisms such as citizen budgeting or policy
consultations.
For instance, a city could implement a blockchain-based digital voting app where
residents cast votes on budget allocations. Each vote is verified using a
digital ID and timestamped on-chain. The results are publicly auditable and
final within seconds of vote closing.
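A naive sketch of one-ballot-per-citizen voting follows. Note that in this
simplified form the sender of each vote transaction is publicly visible, so
production systems layer encryption or zero-knowledge techniques on top; all
names here are hypothetical:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// One-person-one-vote sketch: an election authority credits verified
/// citizens with a single ballot; tallies are publicly auditable.
contract BudgetVote {
    address public authority;
    uint256 public immutable votingEnds;
    string[] public options;
    mapping(address => bool) public hasBallot;
    mapping(address => bool) public hasVoted;
    mapping(uint256 => uint256) public tally;

    event Voted(uint256 indexed option);

    constructor(string[] memory options_, uint256 votingPeriodSeconds) {
        authority = msg.sender;
        for (uint256 i = 0; i < options_.length; i++) {
            options.push(options_[i]);
        }
        votingEnds = block.timestamp + votingPeriodSeconds;
    }

    /// Off-chain identity verification happens first; then the authority
    /// issues exactly one ballot to the verified citizen's address.
    function issueBallot(address voter) external {
        require(msg.sender == authority, "only authority");
        hasBallot[voter] = true;
    }

    function vote(uint256 option) external {
        require(block.timestamp < votingEnds, "voting closed");
        require(hasBallot[msg.sender] && !hasVoted[msg.sender], "no ballot");
        require(option < options.length, "unknown option");
        hasVoted[msg.sender] = true;
        tally[option] += 1;
        emit Voted(option); // caveat: the voter's address is visible in the transaction
    }
}
```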
Estonia, South Korea, and Utah County in the United States have piloted
blockchain voting technologies for both government and internal organizational
elections.
## Public finance and budget tracking
Governments manage large and complex budgets that involve multiple departments,
programs, and vendors. Traditional financial systems often lack transparency and
are susceptible to misreporting or misallocation of funds. Blockchain provides a
mechanism for real-time, transparent, and programmable budget execution.
Key features of blockchain-enabled public finance systems include:
* On-chain disbursement of public funds through smart contracts
* Multi-signature approvals and audit trails for each transaction
* Real-time dashboards for citizens, media, and auditors
* Conditional fund release based on project milestones or delivery proofs
* Tamper-proof logging of receipts, invoices, and contracts
A municipal government could use blockchain to track infrastructure project
budgets. Funds are allocated through a smart contract that releases payments
based on verified completion stages, such as road paving or building
inspections. Citizens can view project status and spending directly on a public
interface.
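A minimal escrow sketch along these lines, assuming one independent inspector
and a fixed payment schedule (both hypothetical simplifications):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Milestone escrow sketch: funds for a public works project release
/// only after an independent inspector signs off on each stage.
contract ProjectEscrow {
    address public immutable inspector;
    address payable public immutable contractor;
    uint256[] public milestonePayments; // wei per milestone
    uint256 public nextMilestone;

    event MilestoneApproved(uint256 indexed milestone, uint256 amount);

    constructor(address inspector_, address payable contractor_, uint256[] memory payments) payable {
        inspector = inspector_;
        contractor = contractor_;
        milestonePayments = payments;
    }

    function approveMilestone() external {
        require(msg.sender == inspector, "only inspector");
        require(nextMilestone < milestonePayments.length, "project complete");
        uint256 amount = milestonePayments[nextMilestone];
        nextMilestone += 1;
        (bool ok, ) = contractor.call{value: amount}("");
        require(ok, "payment failed");
        emit MilestoneApproved(nextMilestone - 1, amount);
    }
}
```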
The World Bank and several African nations have explored blockchain in public
finance management to increase transparency and reduce fraud in development aid
and infrastructure projects.
## Environmental regulation and carbon markets
Climate change mitigation requires reliable tracking of emissions, enforcement
of environmental regulations, and management of carbon credits and offsets.
Blockchain offers verifiable, decentralized infrastructure for environmental
monitoring and sustainable finance.
Applications include:
* Tokenization of carbon credits and offsets
* Transparent emissions tracking and registry management
* Smart contracts for automatic compliance enforcement
* Peer-to-peer carbon credit marketplaces
* Verifiable impact measurement for green finance initiatives
For example, an environmental agency could deploy IoT sensors at industrial
facilities to measure emissions. These sensors send data to the blockchain via
trusted oracles. If emissions exceed a permitted threshold, the smart contract
triggers penalties or requires the company to purchase additional carbon
credits.
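A compact sketch of this trigger logic, assuming a single trusted oracle
address posts readings; production deployments would use decentralized oracle
networks and a concrete pricing mechanism for penalties:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Emissions compliance sketch: a trusted oracle posts sensor readings;
/// exceeding the permitted threshold accrues an on-chain penalty.
contract EmissionsMonitor {
    address public immutable oracle; // trusted data feed (assumed)
    uint256 public immutable limit;  // permitted emissions per reading
    mapping(address => uint256) public penaltiesOwed;

    event ReadingPosted(address indexed facility, uint256 value);
    event PenaltyAssessed(address indexed facility, uint256 excess);

    constructor(address oracle_, uint256 limit_) {
        oracle = oracle_;
        limit = limit_;
    }

    function postReading(address facility, uint256 value) external {
        require(msg.sender == oracle, "only oracle");
        emit ReadingPosted(facility, value);
        if (value > limit) {
            uint256 excess = value - limit;
            penaltiesOwed[facility] += excess; // priced per excess unit off-chain
            emit PenaltyAssessed(facility, excess);
        }
    }
}
```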
Blockchain also enables decentralized registries of carbon offsets where each
credit is uniquely identified, traceable, and permanently recorded upon
retirement. This prevents double counting and increases market integrity.
Companies like IBM, Verra, and the Energy Web Foundation are working with
governments to develop blockchain-based environmental monitoring and carbon
trading platforms.
## Law enforcement and judicial systems
Legal systems rely heavily on documentation, evidence integrity, and
traceability of procedures. Blockchain enhances these functions by offering
immutable storage of case files, chain-of-custody records, digital warrants, and
procedural logs.
Use cases for law enforcement and judiciary include:
* Digital evidence management with time-stamped verification
* Secure sharing of case files among police, prosecutors, and courts
* Smart contracts to manage parole, bail conditions, or sentencing rules
* Citizen portals for reporting, tracking complaints, or receiving summons
* Automated fine collection and citation management
For instance, digital surveillance footage, once verified and hashed, can be
anchored on a blockchain to prove its authenticity and time of capture. A
digital warrant
issued by a magistrate can be recorded on-chain with access granted only to
authorized enforcement officers.
Countries such as China and India have piloted blockchain in court systems for
evidence management, bail tracking, and smart legal document notarization.
## Intellectual property and public registries
Governments maintain registries of intellectual property such as patents,
copyrights, and trademarks. These records are often siloed, vulnerable to
tampering, and slow to verify. Blockchain introduces a tamper-evident registry
where creators can register and timestamp their works, and examiners can audit
and validate claims transparently.
Applications include:
* On-chain registration of creative works and inventions
* Smart contract licensing and royalty distribution
* Open access to ownership history and litigation status
* Cross-border IP collaboration with verifiable timelines
A national IP office could offer a blockchain-based portal where authors,
artists, and inventors register their work. Each registration is hashed and
anchored on-chain, allowing for instant verification of submission time and
ownership. Disputes are resolved based on the immutable history of claims and
usage rights.
WIPO and national agencies in South Korea, Australia, and the UAE have explored
blockchain use in IP protection and licensing workflows.
## Transport, logistics, and infrastructure projects
Public infrastructure and logistics services involve complex coordination
between agencies, contractors, and stakeholders. Projects such as road
construction, public transport networks, and airport expansions often face
delays, cost overruns, and misreporting. Blockchain can be used to improve
tracking, transparency, and accountability.
Use cases include:
* Supply chain tracking for construction materials
* On-chain project milestones and performance records
* Permitting and inspection logs with timestamped validations
* Integration with GPS and IoT devices for fleet tracking
A public works department could issue tenders on-chain and monitor the delivery
of materials such as cement or steel using blockchain-enabled logistics. Each
batch is recorded with origin, quantity, and delivery status. Payments are
released based on delivery confirmation and project milestones verified by
independent inspectors.
Blockchain enables transparency in contractor payments, prevents procurement
fraud, and builds citizen trust in public spending.
## Border control and customs
Customs and immigration departments require accurate and secure exchange of data
on travelers, cargo, and declarations. Blockchain can streamline cross-border
operations by allowing trusted parties to access verified records, reduce
paperwork, and speed up clearance processes.
Use cases for blockchain in customs and immigration include:
* Tokenized cargo manifests with on-chain declarations
* Cross-border customs agreements using smart contracts
* Shared traveler and immigration data between countries
* Blockchain-based visa and travel permit registries
For example, a shipment moving across multiple borders can be tracked on a
blockchain where each customs authority verifies its passage and inspection. If
all records are valid, the next border crossing is pre-cleared, speeding up
transit and reducing administrative load.
Organizations like the World Customs Organization and Singapore Customs are
experimenting with blockchain-enabled trade facilitation tools.
## Emergency response and disaster relief
In disaster scenarios, timely and transparent relief distribution is critical.
Coordination among governments, NGOs, and local stakeholders requires real-time
information and audit trails to prevent duplication, theft, or misuse of aid.
Blockchain helps manage:
* Beneficiary registration and entitlement verification
* Transparent allocation of relief funds and supplies
* Smart contracts for conditional disbursement based on verified need
* Real-time dashboards for donors, agencies, and field workers
Following a natural disaster, affected families could register their needs via
mobile applications. Once verified, they receive digital vouchers or tokens that
can be redeemed for food, shelter, or medicine. All transactions are recorded
on-chain and visible to donors and government agencies for oversight.
UNICEF and the World Food Programme have run blockchain pilots to deliver aid
and track usage in refugee camps and disaster-affected regions.
## Archiving and public record preservation
Governments are custodians of vast historical, legal, and administrative
records—from legislative documents to census data. These archives must be
preserved, authenticated, and accessible for public trust and institutional
memory.
Blockchain provides:
* Permanent digital hashes of public records stored in distributed ledgers
* Tamper-proof timestamping of original document versions
* Long-term access policies through decentralized storage systems
* Immutable audit trails of who accessed or altered a record
For example, a national archive could hash and record every digital law or court
ruling on-chain, preserving its authenticity even if the website or file system
changes. Researchers and journalists could verify that the document is original
and unaltered.
Decentralized storage platforms such as IPFS can be integrated with blockchain
to host the files, while the hashes and metadata remain permanently accessible
and verifiable.
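A sketch of this anchoring pattern, where only the document hash and an
off-chain locator such as an IPFS CID go on-chain (contract names are
illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Archive anchoring sketch: the archive stores each record's hash and an
/// off-chain locator (e.g., an IPFS CID); anyone can verify authenticity.
contract PublicArchive {
    address public immutable archivist;

    struct Record {
        uint256 anchoredAt; // block timestamp of anchoring
        string locator;     // e.g., the IPFS CID where the file lives
    }

    mapping(bytes32 => Record) public records; // documentHash => record

    event RecordAnchored(bytes32 indexed documentHash, string locator);

    constructor() { archivist = msg.sender; }

    function anchor(bytes32 documentHash, string calldata locator) external {
        require(msg.sender == archivist, "only archivist");
        require(records[documentHash].anchoredAt == 0, "already anchored");
        records[documentHash] = Record(block.timestamp, locator);
        emit RecordAnchored(documentHash, locator);
    }

    /// Verifiers hash the file they hold and check that it was anchored, and when.
    function verify(bytes32 documentHash) external view returns (bool, uint256) {
        Record memory r = records[documentHash];
        return (r.anchoredAt != 0, r.anchoredAt);
    }
}
```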
## Freedom of information and open data
Transparency is a cornerstone of good governance. Many countries have
right-to-information laws and open data platforms, but their effectiveness
depends on the accuracy, availability, and credibility of published data.
Blockchain can be used to:
* Certify that public datasets are complete and unmodified
* Record the origin and update history of government statistics
* Enable citizen auditing of public expenditures, laws, and policies
* Build APIs that return real-time, verified data to applications and dashboards
For instance, a finance ministry could publish its annual budget data on-chain,
including line items, departmental allocations, and disbursements. Journalists,
researchers, and citizens can verify every update against the ledger, ensuring
data integrity and institutional accountability.
Blockchain enhances the transparency and reliability of open data while
discouraging manipulation or concealment of information.
## Agricultural subsidies and supply chain transparency
Agriculture is a vital sector in most economies and a key focus area for public
policy. Governments often provide subsidies, crop insurance, procurement
services, and disaster relief to farmers. However, these programs face
challenges such as delayed disbursements, lack of transparency, and fraudulent
claims. Blockchain can improve efficiency, trust, and traceability across
agricultural value chains.
Blockchain applications in agriculture include:
* Digital farmer identity and land ownership verification
* On-chain registration of subsidies and insurance policies
* Smart contract-based payouts linked to weather or yield data
* Transparent procurement tracking from farm to warehouse to market
* Food traceability systems for quality assurance and export compliance
For example, a state government could issue digital tokens representing
fertilizer subsidies. Registered farmers receive these tokens in their wallets
and redeem them at approved vendors. Every transaction is recorded on the
blockchain, ensuring transparency, preventing duplication, and enabling
data-driven policy reforms.
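A hedged sketch of such a voucher scheme, with non-transferable credits and an
on-chain redemption log (the names and the settlement flow are assumptions for
illustration):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Subsidy voucher sketch: the agency issues non-transferable fertilizer
/// credits to registered farmers, redeemable only at approved vendors.
contract FertilizerSubsidy {
    address public agency;
    mapping(address => bool) public approvedVendor;
    mapping(address => uint256) public credits; // units of subsidized fertilizer

    event CreditsIssued(address indexed farmer, uint256 units);
    event CreditsRedeemed(address indexed farmer, address indexed vendor, uint256 units);

    constructor() { agency = msg.sender; }

    function approveVendor(address vendor) external {
        require(msg.sender == agency, "only agency");
        approvedVendor[vendor] = true;
    }

    function issueCredits(address farmer, uint256 units) external {
        require(msg.sender == agency, "only agency");
        credits[farmer] += units;
        emit CreditsIssued(farmer, units);
    }

    /// The farmer redeems at the point of sale; the vendor later settles
    /// with the agency using the on-chain redemption log as evidence.
    function redeem(address vendor, uint256 units) external {
        require(approvedVendor[vendor], "vendor not approved");
        require(credits[msg.sender] >= units, "insufficient credits");
        credits[msg.sender] -= units;
        emit CreditsRedeemed(msg.sender, vendor, units);
    }
}
```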
Blockchain also supports agricultural cooperatives and marketplaces by tracking
produce origin, quality, and payment history. This helps small farmers access
better pricing and reduces losses due to middlemen or delayed payments.
## Public safety and emergency services
Public safety agencies such as police, fire departments, and emergency medical
responders manage sensitive data, operate in fast-changing environments, and
require high coordination. Blockchain can enhance accountability, inter-agency
coordination, and real-time access to critical data.
Key use cases in public safety include:
* Tamper-evident digital logs of incident reports and actions taken
* Chain-of-custody tracking for evidence and forensic materials
* Emergency call routing and escalation protocols using smart contracts
* Identity verification for field responders and citizens
* Blockchain-backed audit trails for use-of-force reporting or disciplinary
cases
Imagine a blockchain network that links police stations, ambulance services, and
local hospitals. When a distress call is received, a smart contract triggers
dispatch, logs each step of the response, and updates relevant stakeholders.
Once an incident is closed, the record is sealed with timestamps and access
rights based on role and jurisdiction.
These systems create more accountability in high-stakes situations and reduce
manual reporting burdens for frontline personnel.
## Defense procurement and military logistics
Defense and security organizations handle complex procurement, maintenance, and
logistics operations with high security requirements. The opacity and volume of
these systems can lead to inefficiencies, overspending, or supply chain
vulnerabilities. Blockchain offers traceability, automation, and integrity in
defense operations.
Blockchain in defense may include:
* On-chain records of parts manufacturing, inspection, and certification
* Digital defense contracts with milestone-based payments
* Equipment lifecycle tracking and predictive maintenance triggers
* Inter-agency coordination on classified logistics with permissioned access
For instance, when procuring military-grade hardware, each component is recorded
on a blockchain during production, testing, and delivery. If a fault is
discovered later, the exact manufacturing batch and supplier can be traced,
enabling faster recalls and accountability.
Several defense agencies globally, including those in the United States and NATO
members, have launched research programs on using blockchain for supply chain
integrity, secure communications, and asset tracking.
## Public transportation and mobility platforms
Transportation systems such as metro rail, buses, and bike-sharing schemes are
often subsidized and managed by public authorities. These systems need secure
ticketing, dynamic pricing, usage tracking, and intermodal coordination.
Blockchain can support a unified digital layer for mobility services.
Use cases for blockchain in transport include:
* Multi-vendor smart ticketing systems with real-time settlement
* Subsidy verification and fraud prevention in concession fares
* Ride or pass ownership through NFTs or tokenized passes
* Mobility-as-a-service (MaaS) platforms with shared incentives
A city might implement a blockchain-based transport wallet where citizens hold
ride credits, monthly passes, or tokens earned through eco-friendly behavior
such as cycling or carpooling. These tokens are interoperable across bus, metro,
and last-mile services, with automatic routing and fare calculations done via
smart contracts.
Projects in Sweden, Dubai, and Singapore have investigated blockchain-based
digital mobility networks that integrate public and private transport operators
under common governance rules.
## Government research funding and grants
Research and innovation are central to national development, and governments
allocate substantial funds to universities, startups, and independent labs.
However, research funding mechanisms can suffer from opaque selection criteria,
delayed disbursement, and limited visibility into project progress.
Blockchain can enhance trust and efficiency by:
* Registering funding calls, proposals, and reviews on-chain
* Automating grant approval and release based on smart contracts
* Tracking expenditure, milestones, and deliverables
* Publishing research outcomes and peer reviews immutably
Consider a national research foundation that operates a blockchain-based grant
portal. Each grant call is published with criteria and evaluation workflows.
Researchers submit proposals that are timestamped and assigned pseudonymous
reviewers. Funding is disbursed in phases, triggered by milestone approvals and
submission of verified outputs.
Such a system improves fairness in selection, reduces administrative overhead,
and increases the credibility of public-funded research.
## Utility billing and energy systems
Public utilities such as electricity, water, and gas need accurate billing,
meter data management, and fraud prevention. With the rise of decentralized
energy generation, blockchain enables peer-to-peer energy trading, smart meter
integration, and verifiable consumption history.
Utility applications for blockchain include:
* Smart metering and usage-based billing using oracles
* Subsidy application and redemption via tokens
* Tokenization of carbon credits or solar incentives
* Settlement of cross-grid energy trades between households or municipalities
A municipality could deploy solar panels on public buildings and track their
energy output on-chain. Residents participate in a tokenized scheme where excess
energy is rewarded and usage is billed automatically. All data is visible to
regulators, auditors, and citizens via a public dashboard.
Governments in Australia, Germany, and India have supported pilots involving
blockchain-based metering, microgrids, and decentralized energy settlements.
## Immigration, refugee, and cross-border identity systems
Migration and refugee movements present humanitarian and logistical challenges.
Governments and international bodies require systems that respect privacy,
provide legal identity, and support service delivery across borders. Blockchain
enables secure, portable, and user-controlled identity frameworks.
Use cases include:
* Cross-border digital identity records linked to biometrics
* Tamper-proof logs of visa issuance and immigration status
* Health and vaccination records portable across countries
* Aid and financial inclusion tools for displaced populations
A refugee who loses their documents during displacement can access their
blockchain-based digital ID to prove prior residency, vaccinations, or
education. Aid organizations can use the ID to authenticate beneficiaries and
deliver cash aid via digital wallets.
The United Nations and NGOs have explored blockchain to issue portable identity
credentials to stateless individuals, enabling access to healthcare, education,
and mobility in host nations.
## Tourism, culture, and heritage preservation
Tourism departments manage heritage sites, event access, and revenue collection.
Cultural institutions face challenges in provenance, ticket fraud, and visitor
data fragmentation. Blockchain can protect cultural assets and streamline
tourism services.
Applications include:
* NFT-based access passes for museums and festivals
* Traceable registries of historical artifact ownership
* Smart contract distribution of tourism revenues among local communities
* Visitor badges and loyalty points for frequent travelers
For instance, a national heritage board could issue digital collectibles that
double as entry passes for cultural events. These NFTs can include embedded
discounts, local business tie-ins, or audio guides. Tourists build a verifiable
on-chain record of site visits and contribute reviews or donations via the same
platform.
Projects in France, Japan, and Italy are exploring blockchain’s potential in
digital tourism ecosystems.
## Cooperative governance and rural development
Decentralized cooperatives, often supported by public grants, play a major role
in agriculture, fisheries, housing, and credit in rural regions. Blockchain
strengthens these cooperatives by providing digital infrastructure for
governance, finance, and recordkeeping.
Use cases include:
* On-chain voting and decision-making for cooperative members
* Transparent ledger of contributions, loans, and dividends
* Smart contract enforcement of bylaws and dispute resolution
* Integration with rural banking and microfinance institutions
A dairy cooperative might use a blockchain-based app to track milk production,
allocate shared costs, and distribute revenues. Members vote on investment
proposals using digital tokens, and outcomes are instantly reflected on-chain
for all to review.
This fosters trust, financial inclusion, and digital governance in remote areas.
## Public libraries, open knowledge, and academic records
Public libraries and national knowledge networks can use blockchain to preserve
open access to books, documents, and academic work. Blockchain ensures that
content is original, uncensored, and credited to the rightful author.
Applications include:
* Immutable digital records of publications and revisions
* Peer-reviewed knowledge sharing with timestamped edits
* Library card tokens that allow borrowing and community contributions
* Royalty or grant flows to authors through smart contract licensing
An open-source research portal can use blockchain to manage version control,
prevent plagiarism, and reward contributors. Each contribution is hashed,
logged, and acknowledged publicly, creating transparent academic incentives.
Institutions such as MIT and research groups in the Netherlands have
experimented with blockchain for open science, academic reputation, and public
knowledge registries.
## Real estate development and zoning regulation
Urban planning, land use control, and real estate development involve
interdependent approvals from public bodies. Blockchain brings traceability and
efficiency to the issuance of permits, zoning adjustments, and developer
commitments.
Use cases include:
* Permit application workflows with digital signatures and time tracking
* On-chain representation of zoning maps and development rights
* Citizen dashboards for monitoring construction activity and grievances
* Smart contracts that enforce escrow, impact fees, and inspection results
When a developer applies for a building permit, the application is submitted
on-chain with required documents and stakeholder endorsements. Inspection
reports and approvals are digitally signed and linked. Once completed, the
project’s regulatory compliance history is preserved forever, deterring misuse
and improving oversight.
Cities like Dubai and San Francisco have considered blockchain-based zoning and
permitting platforms for their smart city initiatives.
## Interoperability between agencies and jurisdictions
In public administration, most blockchain use cases require collaboration between multiple departments, ministries, or even sovereign governments. However, siloed digital infrastructures and incompatible data formats often hinder cooperation. Blockchain offers a shared infrastructure that can facilitate interoperability without requiring centralized control.
Key interoperability scenarios include:
* Cross-border data exchange for customs, immigration, and trade
* Shared ledgers across central and local governments for budget and taxation
* Legal and regulatory frameworks that enable multi-agency contract execution
* Standards for exchanging verifiable credentials, certificates, and licenses
A practical example involves a shared national digital ID system used by banks, tax departments, and health agencies. Each agency issues and verifies attributes (e.g., income status, citizenship, insurance coverage) on a blockchain ledger. Citizens share proofs without re-verifying data or completing repeated applications.
To support such interoperability, governments must adopt common data schemas, define smart contract interfaces, and build cross-chain bridges where necessary. This requires strong collaboration between public sector IT teams, standards bodies, and regulatory authorities.
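At the smart contract level, a "common interface" can be as simple as a shared
Solidity interface that every agency's contract implements, so relying parties
integrate against one ABI regardless of which agency deployed the contract. A
hypothetical example:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// A shared, hypothetical credential-verification interface that several
/// agencies could implement, so verifiers integrate against a single ABI.
interface ICredentialVerifier {
    /// Returns true if `credentialHash` was issued by a recognized agency
    /// and has not been revoked.
    function isValid(bytes32 credentialHash) external view returns (bool);

    /// Returns the issuing agency's address, for audit purposes.
    function issuerOf(bytes32 credentialHash) external view returns (address);
}
```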
## Phased implementation roadmap for blockchain adoption
Introducing blockchain into government operations requires a careful, phased approach. Blockchain projects impact multiple stakeholders and involve changes to legal processes, citizen interaction models, and back-office systems. A phased roadmap helps manage these complexities.
### Phase 1: Assessment and pilot
* Identify high-impact use cases with limited integration requirements
* Evaluate legal and regulatory constraints
* Develop proof of concept with a focus on traceability or transparency
* Use testnets or sandboxes for evaluation and learning
### Phase 2: Integration and scaling
* Build production-grade blockchain infrastructure (public or permissioned)
* Onboard multiple departments or agencies as network participants
* Integrate with existing systems through middleware and APIs
* Establish identity and access control frameworks
### Phase 3: Governance and interoperability
* Create cross-agency governance boards for smart contract management
* Define standards for data sharing, privacy, and key recovery
* Enable interoperability with other blockchain networks or international systems
### Phase 4: Public engagement and citizen adoption
* Launch mobile apps, dashboards, and portals for citizen participation
* Offer self-sovereign digital identities and reusable credentials
* Provide public education and feedback loops to improve adoption
Each phase builds on the previous one, gradually replacing manual workflows with verifiable automation while preserving trust and accountability.
## Key technology components for public sector blockchain systems
Deploying a blockchain solution for government services involves a number of supporting components, each of which must be secure, scalable, and legally compliant.
* **Blockchain node infrastructure**: Public, permissioned, or hybrid networks operated by government bodies or certified entities
* **Smart contracts**: Encoded logic for verification, disbursement, entitlement, or record updates
* **Wallets and credentials**: Digital wallets for citizens, agencies, and employees with identity verification features
* **APIs and oracles**: Integration with real-world data sources such as payment systems, biometrics, or sensors
* **Monitoring and analytics**: Dashboards to track adoption, usage, and performance in real time
* **Auditing tools**: Forensics, logging, and replay capabilities to verify decision trails and compliance
* **Data protection layers**: Encryption, selective disclosure, and privacy-preserving computation
These tools must be orchestrated within legal frameworks and designed with a user-first approach to ensure usability by both civil servants and citizens.
## Legal, regulatory, and data protection considerations
Government use of blockchain must comply with laws around data protection, procurement, access to information, and administrative procedure. Every implementation should assess the legal context in areas such as:
* **Data privacy laws**: Ensuring compliance with regulations such as GDPR, India’s DPDP Act, or HIPAA when storing personal data or identifiers
* **Legal admissibility**: Determining whether blockchain entries can serve as evidence or official records under existing statutes
* **Procurement frameworks**: Updating RFPs and contracts to include open-source protocols, smart contract audits, and long-term upgrade plans
* **Sovereignty and hosting**: Ensuring blockchain nodes and digital infrastructure remain under national jurisdiction and are resilient to external attacks
Data minimization, encryption, and proper consent models are critical when dealing with public registries, identity, health, or education data. Zero-knowledge proofs, selective disclosure, and verifiable credentials help meet these obligations without compromising decentralization.
## Capacity building for blockchain governance
Beyond technology, successful blockchain deployment in the public sector requires investment in human capacity and institutional governance.
Governments should build:
* **Blockchain literacy among policymakers**: Training workshops, courses, and secondments for senior civil servants
* **Technical teams**: In-house or contracted developers familiar with Solidity, Rust, Go, and smart contract security
* **Audit and compliance units**: Capable of verifying on-chain logic, validating oracle data, and responding to system changes
* **Citizen engagement programs**: Focused on digital literacy, wallet onboarding, and service access through mobile platforms
Open government platforms can publish documentation, roadmaps, and source code to involve academia, civic tech, and citizen watchdogs in shaping policy and ensuring accountability.
## Monitoring, metrics, and key performance indicators
To evaluate the impact of blockchain use in the public sector, projects must define and track KPIs aligned with policy goals, such as:
* **Service delivery metrics**: Time saved, cost per transaction, uptime and error rates
* **Transparency metrics**: Number of publicly auditable contracts, number of accesses to dashboards, citizen satisfaction
* **Efficiency metrics**: Reduction in redundant processes, automation rate, decreased manual interventions
* **Trust metrics**: Surveyed trust in service reliability, openness of procurement, complaint resolution rates
* **Security and compliance metrics**: Number of incidents, vulnerabilities resolved, smart contract audit coverage
Monitoring frameworks should publish regular updates to internal dashboards as well as public portals that demonstrate continuous improvement and performance.
## Examples of blockchain success stories in the public sector
Across the globe, governments and public institutions are experimenting with blockchain to solve practical problems. Some noteworthy examples include:
### Estonia
Estonia uses blockchain infrastructure to secure public records such as health data, identity registries, and judicial files. X-Road, Estonia’s national data exchange layer, integrates blockchain anchoring to detect tampering and ensure that data requests are auditable by citizens.
### Georgia
The Republic of Georgia implemented a blockchain-based land registry in partnership with Bitfury. More than 1.5 million land titles are recorded immutably, reducing fraud and improving access to legal documentation.
### Colombia
Colombia’s National Agency for Public Procurement (Colombia Compra Eficiente) piloted a blockchain-based procurement platform to reduce corruption, ensure transparency, and allow public scrutiny of contract awards.
### Dubai
Dubai launched the “Dubai Blockchain Strategy” to become the first city fully powered by blockchain. The strategy includes paperless government services, smart visas, and business registration on blockchain infrastructure.
### Brazil
The Brazilian tax authority implemented a blockchain platform to facilitate data exchange between customs and tax agencies, improving cross-border trade and reducing compliance complexity for exporters.
These projects demonstrate that blockchain, when deployed thoughtfully, can deliver measurable improvements in service delivery, transparency, and operational resilience.
## Challenges and limitations
Despite its promise, blockchain adoption in government comes with real-world limitations that must be considered:
* **Technical complexity**: Integrating blockchain with legacy systems can be difficult, especially when internal IT teams lack experience
* **Scalability and performance**: Public blockchains may struggle with high-throughput use cases such as real-time payments or micro-transactions
* **Legal ambiguity**: Smart contracts may lack clear legal status or mechanisms for dispute resolution
* **Resistance to change**: Bureaucratic inertia, internal politics, and job security concerns can delay adoption
* **Security risks**: Misconfigured smart contracts, wallet mismanagement, and oracle manipulation can lead to data loss or unauthorized access
Risk assessments and contingency planning should accompany every pilot. Incremental rollout, sandbox environments, and external audits help mitigate these risks while building institutional confidence.
## The future of blockchain in the public sector
Blockchain represents a foundational shift in how governments can manage data, processes, and relationships with citizens. Over the next decade, we expect to see:
* **Self-sovereign public identity**: Citizens controlling their identity credentials across borders, institutions, and private services
* **Decentralized administrative platforms**: Ministries, cities, and international organizations coordinating over shared infrastructure
* **Public digital assets**: Tokenization of land, licenses, permits, and carbon credits becoming standard practice
* **Hybrid public-private service layers**: Nonprofits, banks, and startups interoperating with public infrastructure through APIs and open protocols
* **Citizen-centric governance**: Transparent, participatory mechanisms embedded in software, from budgeting to dispute resolution
file: ./content/docs/knowledge-bank/smart-contracts.mdx
meta: {
"title": "Smart contracts",
"description": "Understanding smart contract development and best practices"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
# Smart contracts
Smart contracts are self-executing contracts with terms directly written into
code.
## Core concepts
Blockchain has introduced a shift in how applications execute logic and manage
data in decentralized environments. Two core components that enable this
automation are smart contracts and chaincode. While the terminology varies by
platform (“smart contracts” on Ethereum and other EVM-based platforms,
“chaincode” on Hyperledger Fabric), the conceptual foundation is the same:
encapsulating business logic in a secure, verifiable, and autonomous format.
Both serve as deterministic code that gets triggered by transactions, leading to
changes in state or the execution of defined logic. These units of automation
replace the need for centralized backend services or intermediaries, reducing
operational costs and increasing transparency and efficiency. However, the
differences in architecture, programming model, governance, and performance
between public and permissioned networks have led to platform-specific design
choices and development methodologies for these artifacts.
## Historical context and conceptual foundation
The concept of smart contracts dates back to the 1990s, introduced by Nick
Szabo. His definition was more theoretical, focused on creating
digitally-enforced contracts that automatically execute when predefined
conditions are met. At the time, there was no platform robust enough to
implement such logic in a decentralized and tamper-proof environment.
This changed with the advent of Ethereum in 2015. Ethereum was the first
blockchain platform designed from the ground up with smart contracts in mind. It
introduced the Ethereum Virtual Machine (EVM), a fully isolated runtime
environment where smart contracts could be deployed and executed in a
decentralized and trustless way.
Chaincode emerged later as part of Hyperledger Fabric, a permissioned blockchain
platform developed under the Linux Foundation’s Hyperledger project. Fabric was
built with enterprise requirements in mind, such as access control, privacy, and
modular consensus, making it suitable for supply chain, finance, government, and
other regulated industries. Chaincode plays the same functional role as smart
contracts but operates in a controlled and governed environment.
## Smart contracts in Ethereum
Ethereum smart contracts are programs written primarily in Solidity, a
statically typed, contract-oriented language inspired by JavaScript and C++.
These contracts are compiled into bytecode, which runs on the Ethereum Virtual
Machine (EVM). Each deployed contract is stored at a specific address and
maintains its own storage, execution logic, and interface functions.
A smart contract in Ethereum is deployed through a transaction containing its
compiled bytecode. Once deployed, the contract is immutable: its logic cannot
be changed unless an upgrade pattern, such as a proxy contract, is used.
Smart contracts are triggered when a user or another contract sends a
transaction to the contract’s address. The transaction includes a function
selector (derived from the function signature) and the required parameters
encoded using the Ethereum ABI (Application Binary Interface). When executed,
the EVM processes the contract logic deterministically on every full node across
the network.
Key constructs inside a smart contract include:
* msg.sender: The address of the account or contract that called the function
* msg.value: Amount of Ether sent with the call
* block.timestamp: The timestamp of the current block
* storage: A persistent key-value store associated with the contract
* memory: A temporary, volatile area used during execution
Contracts can hold Ether, interact with other contracts, emit events, perform
mathematical computations, and enforce access control using modifiers.
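As a minimal, self-contained illustration of these constructs (a hypothetical
example, not production code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Minimal contract demonstrating the constructs above: persistent
/// storage, msg.sender / msg.value, events, and an access modifier.
contract Counter {
    address public owner; // persistent contract storage
    uint256 public count;
    mapping(address => uint256) public deposits;

    event Incremented(address indexed by, uint256 newCount);

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    constructor() {
        owner = msg.sender; // the deploying account
    }

    function increment() external {
        count += 1;
        emit Incremented(msg.sender, count); // log readable off-chain
    }

    /// Contracts can hold Ether; msg.value is the amount sent with the call.
    function deposit() external payable {
        deposits[msg.sender] += msg.value;
    }

    function reset() external onlyOwner {
        count = 0;
    }
}
```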
## Chaincode in Hyperledger Fabric
Chaincode in Hyperledger Fabric is the equivalent of a smart contract but
designed for a permissioned, enterprise environment. It is commonly written in
Go, but support is also available for Java and Node.js. Instead of being
compiled to bytecode for a virtual machine, chaincode runs as a Docker
container, isolated from the peer nodes.
The chaincode lifecycle in Fabric is significantly more structured and involves
organizational governance:

1. Package: The chaincode is bundled into a .tar.gz archive that includes the
   source code and metadata.
2. Install: This package is installed on endorsing peers.
3. Approve: Each participating organization in the consortium must approve the
   chaincode definition.
4. Commit: After approval, the definition is committed to the channel.
The primary interface centers on two entry points, with reads handled as
read-only invocations:
* Init: Invoked when the chaincode is first deployed or upgraded.
* Invoke: Handles logic execution for transaction proposals.
* Queries: Invoke functions that only read state, returning data without
  altering the ledger.
Unlike in Ethereum, the state is not stored directly on the blockchain but
maintained in a key-value world state database like LevelDB or CouchDB. Each
peer maintains its own copy of this world state, while the blockchain itself
acts as an immutable log of all transactions.
## Execution models: Ethereum vs. Fabric
One of the most fundamental differences lies in how smart contracts and
chaincode are executed and validated.
In Ethereum, every transaction is:
* Sent to the network
* Mined into a block by a validator
* Executed by every node on the network to ensure consistency
* Recorded in the blockchain
This is a replicated state machine approach. All full nodes execute the
transaction and reach consensus on the outcome.
In Hyperledger Fabric, the process is more modular:
* Execution: Proposals are simulated by endorsing peers.
* Ordering: The endorsed transactions are submitted to an ordering service.
* Validation: Committing peers validate the endorsements before updating the
state.
This execute-order-validate model allows Fabric to achieve high throughput and
low latency while maintaining governance and security. It also enables
confidential transactions using private data collections and organizational
policies.
## Smart contract storage and gas economy
In Ethereum, smart contracts operate under a tightly resource-constrained
environment. Every operation within a contract consumes gas, which is paid in
Ether. Gas acts as a safeguard against misuse or infinite loops and compensates
miners or validators for the computation performed.
This makes storage optimization a major design concern. On-chain storage is
expensive, so developers often use lean data structures:
* mapping(address => uint) for lookup tables
* Arrays for indexed access (though costly when large)
* bytes32 hashes to reference off-chain content (like IPFS data)
* Event logs to emit data retrievable by off-chain services without being stored
on-chain
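A short sketch tying these storage-economy patterns together:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Storage-economy sketch: keep a lean on-chain mapping, reference bulky
/// content by hash, and emit events instead of storing derived data.
contract DocumentIndex {
    // Cheap lookup table: one storage slot per entry.
    mapping(address => bytes32) public latestDocumentHash;

    // Bulky payloads live off-chain (e.g., on IPFS); only the hash is stored.
    event DocumentPublished(address indexed author, bytes32 contentHash, string offchainLocator);

    function publish(bytes32 contentHash, string calldata offchainLocator) external {
        latestDocumentHash[msg.sender] = contentHash;
        // The locator is emitted, not stored: event logs are far cheaper
        // than contract storage and can be indexed by off-chain services.
        emit DocumentPublished(msg.sender, contentHash, offchainLocator);
    }
}
```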
Additionally, there is no built-in concept of relational data or SQL-like
queries. Developers must implement their own indexing, filtering, and pagination
logic, or use off-chain services like The Graph to index contract events and
expose a GraphQL API.
Due to Ethereum’s immutable nature, upgrading a contract means deploying a new
version. Developers implement upgradeable contract patterns, such as:
* Proxy pattern: Separate storage from logic. The proxy contract forwards calls
to the logic contract.
* EIP-1967 and EIP-1822: Standard layouts for upgradeable contracts
* UUPS (Universal Upgradeable Proxy Standard): A minimal and efficient upgrade
pattern
These patterns allow the contract logic to be changed without losing state.
However, they introduce complexity and must be handled with precision to avoid
bricking the contract or introducing security vulnerabilities.
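For illustration, a deliberately simplified proxy appears below; production
systems rely on EIP-1967 storage slots and audited implementations such as
OpenZeppelin's rather than hand-rolled forwarding:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Simplified proxy: forwards every call to an upgradeable implementation
/// via delegatecall, so state lives in the proxy, not the logic contract.
contract SimpleProxy {
    address public admin;          // slot 0 -- the implementation must mirror
    address public implementation; // slot 1 -- this layout or storage collides

    constructor(address implementation_) {
        admin = msg.sender;
        implementation = implementation_;
    }

    function upgradeTo(address newImplementation) external {
        require(msg.sender == admin, "only admin");
        implementation = newImplementation;
    }

    fallback() external payable {
        address impl = implementation;
        assembly {
            calldatacopy(0, 0, calldatasize())
            let ok := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch ok
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

    receive() external payable {}
}
```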
## Chaincode storage and private data
In Fabric, the storage model is more flexible and familiar for enterprise
developers. The world state is a database, typically:
* LevelDB: Default key-value store, fast and lightweight.
* CouchDB: Optional document-oriented store, supports complex queries.
Each key in the world state maps to a value (usually a JSON document).
Transactions that change the state are recorded in blocks on the ledger but the
current state is stored separately. This model separates state from the
immutable transaction log.
Unlike Ethereum, Fabric allows for private data collections (PDCs). These are
used when some data should only be visible to a subset of organizations in the
consortium. Instead of storing sensitive data on the ledger, Fabric stores a
hash of the data and shares the actual payload directly between authorized
peers.
This enables compliance with privacy regulations and use cases such as:
* Trade finance (sharing sensitive invoice data)
* Pharmaceutical supply chains (batch data confidentiality)
* Government and inter-agency workflows
Chaincode can access both public state and private collections using the Fabric
SDK or the GetPrivateData API. This modularity gives developers fine-grained
control over data visibility and trust.
## Access control and authorization
Security and permissioning differ significantly between public and private
blockchains. Ethereum contracts are public by default. Anyone can call a
function unless explicitly restricted. Developers implement access control
using:
* Modifiers (e.g., onlyOwner)
* Role-based access (hasRole in OpenZeppelin’s AccessControl)
* Multi-signature schemes for administrative operations
* ecrecover to verify signatures and off-chain identities
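A brief sketch of role-based restriction using OpenZeppelin's AccessControl
(this assumes the OpenZeppelin contracts package is installed; the contract and
role names are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";

/// Role-based access sketch: only accounts holding ISSUER_ROLE may issue.
contract PermitIssuer is AccessControl {
    bytes32 public constant ISSUER_ROLE = keccak256("ISSUER_ROLE");

    event PermitIssued(address indexed to, bytes32 permitHash);

    constructor() {
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender); // the admin manages roles
    }

    function issuePermit(address to, bytes32 permitHash) external onlyRole(ISSUER_ROLE) {
        emit PermitIssued(to, permitHash);
    }
}
```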
Fabric uses Membership Service Providers (MSPs) to manage identities. Each
participant has an X.509 certificate issued by a recognized CA (Certificate
Authority). Access control is enforced at several levels:
* Chaincode logic can inspect the client’s identity using APIs like GetCreator()
or attribute-based logic.
* Channel configuration defines which organizations have access to which data
and chaincode.
* Endorsement policies define which organizations must sign off on a transaction
for it to be considered valid.
This makes Fabric highly suitable for inter-organization workflows, supply
chains, and regulated environments where access must be restricted and
auditable.
## Language support and developer experience
The choice of programming language and development tools is another key
distinction.
Ethereum:
* Primary language: Solidity
* Others: Vyper (Python-inspired), Huff (low-level), Fe (experimental)
* Tooling: Hardhat, Foundry, Truffle, Remix
* Testing frameworks: Mocha, Chai, Waffle
* Deployment: Infura, Alchemy, custom RPC nodes
Smart contract development includes writing Solidity code, compiling with the
Solidity compiler (solc), and testing with local testnets like Ganache or
Hardhat’s in-memory network. Advanced debugging, gas estimation, stack tracing,
and coverage analysis are critical.
Hyperledger Fabric:
* Language: Go (recommended), JavaScript (Node.js), Java
* Tooling: Fabric SDKs for Node.js, Java, Go
* Development: peer lifecycle CLI, Docker-based containerization, Fabric CA for
identity issuance
* Local deployment: Using Fabric samples or Docker Compose environments
Fabric provides structured sample chaincode templates and encourages modular
design. Testing is typically performed using scripts that invoke chaincode
through the SDK or CLI, simulating proposals and observing ledger updates.
In contrast to Ethereum, Fabric’s containerized environment allows for more
traditional application development practices such as version control, unit
testing, and CI/CD pipelines.
## Event handling and off-chain integration
Smart contracts and chaincode both support emitting events, which are crucial
for off-chain applications to track blockchain activity.
In Ethereum, contracts use the event keyword and the EVM logs these emissions.
These logs are:
* Indexed by topics (event signature and arguments)
* Accessible via JSON-RPC (eth\_getLogs)
* Frequently consumed by tools like The Graph, Moralis, or custom Node.js
listeners using Web3.js or Ethers.js
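A minimal emitting contract, with a note on how indexing affects filtering:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Event emission sketch: indexed parameters become log topics that
/// off-chain listeners can filter on via eth_getLogs or subscriptions.
contract PaymentLog {
    // Up to three parameters can be indexed (filterable); the rest are
    // ABI-encoded into the log's data field.
    event PaymentReceived(address indexed from, uint256 amount, string memo);

    function pay(string calldata memo) external payable {
        emit PaymentReceived(msg.sender, msg.value, memo);
    }
}
```

An off-chain listener, for example an Ethers.js subscription on the
PaymentReceived event, can then react to these logs in real time.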
In Fabric, chaincode emits events using the SetEvent API. These events are
embedded in the transaction block and picked up by clients subscribed to the
peer’s event services. Applications can register for block events, filtered
events, or chaincode-specific events using the Fabric SDK.
This event-driven model is essential for building responsive frontends,
notification systems, and external integrations (e.g., triggering a payment,
updating ERP records, or syncing with cloud services).
## Inter-contract communication and composability
In Ethereum, smart contracts can interact with one another using direct function
calls or delegate calls. This enables composability, the property that allows
multiple contracts to be used together like building blocks. Popular DeFi
protocols (like Yearn, Aave, and Compound) rely heavily on this feature.
For example:
* A staking contract might call a reward distribution contract
* A DEX aggregator might route trades through multiple liquidity pools
* A governance contract might control upgrades to other contracts
However, care must be taken with reentrancy, gas limits, and fallback behaviors.
Contracts should implement reentrancy guards and adhere to the
“checks-effects-interactions” pattern.
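A small vault sketch showing both defenses together, a reentrancy guard plus
checks-effects-interactions ordering:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Checks-effects-interactions sketch: state is updated *before* the
/// external call, so a reentrant call sees a zero balance and reverts.
contract SafeVault {
    mapping(address => uint256) public balances;
    bool private locked; // simple reentrancy guard

    modifier nonReentrant() {
        require(!locked, "reentrant call");
        locked = true;
        _;
        locked = false;
    }

    function deposit() external payable {
        balances[msg.sender] += msg.value; // effect
    }

    function withdraw() external nonReentrant {
        uint256 amount = balances[msg.sender]; // check
        require(amount > 0, "nothing to withdraw");
        balances[msg.sender] = 0;              // effect (before the interaction)
        (bool ok, ) = msg.sender.call{value: amount}(""); // interaction
        require(ok, "transfer failed");
    }
}
```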
Fabric does not support the same kind of composability because of its
endorsement model and modular design. However, chaincode can still invoke other
chaincodes via the InvokeChaincode function. This enables modular architecture
where different chaincodes handle asset types, regulatory validation, or
cross-channel data queries.
## Governance and version management
Governance refers to the mechanisms used to control, upgrade, and manage access
to contracts or chaincode in a blockchain network. In Ethereum, governance is
often implemented at the application level, since the network itself is public
and decentralized. Developers of decentralized applications (dApps) must build
their own logic for:
* Administrator roles
* Multi-signature control
* DAO (Decentralized Autonomous Organization) structures for on-chain voting
* Time-locked functions and upgrade proposals
For example, a DeFi protocol may use a governance token that allows holders to
propose and vote on upgrades to interest rate models or reserve factors. These
votes trigger execution of functions in a “Governor” contract, which then calls
the upgradable logic components. Governance becomes an intrinsic part of the
protocol’s trust model.
Hyperledger Fabric takes a different approach. Governance is built into the
platform:
* Only authorized members of the consortium can install or approve chaincode
* Channel configurations define organizational privileges
* Chaincode upgrades require unanimous or majority approval, depending on the
endorsement policy
* Identity revocation and certificate expiration are centrally managed
Because Fabric is designed for permissioned consortia, every upgrade, channel
change, or access configuration involves signed transactions from member
organizations. This makes Fabric highly auditable and controlled, essential for
enterprise-grade applications where legal compliance and operational risk are
non-negotiable.
## Performance and scalability
Smart contracts and chaincode differ significantly in performance, throughput,
and scalability.
Ethereum, operating as a public blockchain, must optimize for decentralization
and trust minimization. This limits throughput to a few dozen TPS (transactions
per second) on the base layer. Although Ethereum 2.0 and layer 2 solutions (like
Optimism, Arbitrum, zkSync) have increased capacity through rollups and
sidechains, the base layer remains a bottleneck for compute-heavy logic.
Gas fees also influence scalability. During peak periods, gas prices can spike,
making contract interactions prohibitively expensive for users. This pushes
developers to optimize logic, reduce storage use, and batch operations to
minimize user costs.
Hyperledger Fabric, by design, separates execution from consensus. It can
achieve hundreds to thousands of TPS depending on the hardware, network
configuration, and endorsement complexity. Since all participants are known and
authenticated, Fabric eliminates the need for mining or staking, reducing
overhead. Key scalability factors include:
* Number of peers and organizations
* Complexity of endorsement policies
* Volume of reads and writes per transaction
* Use of private data collections
Fabric can scale horizontally by splitting data across channels: independent
ledgers with their own policies and chaincode. This allows one network to serve
multiple business units or use cases in parallel.
## Auditing and traceability
Smart contracts provide transparency because all interactions and state changes
are visible on the blockchain. This is a double-edged sword. While it improves
accountability, it also exposes sensitive logic and data. To mitigate this,
developers use techniques like:
* Abstracting logic behind proxies
* Emitting only hashed or obfuscated data
* Encrypting off-chain payloads and linking via content hashes
Ethereum’s public nature is useful for use cases like:
* Verifying ownership (NFTs)
* Proving event occurrence (e.g., time of creation)
* Demonstrating fairness in auctions or gaming
Tools like Etherscan, Tenderly, and The Graph enhance auditability by providing
indexed access to contract history, call traces, and error diagnostics.
Fabric provides native auditing capabilities tailored to enterprises. Each block
contains a full cryptographic record of transactions, signed by the submitter
and validated by the network. Logs can be exported to SIEM systems, fed into
compliance dashboards, or attached to legal evidence trails.
Moreover, private data collections in Fabric allow organizations to prove data
existence or perform zero-knowledge proofs without revealing raw data. This
capability is invaluable for industries like:
* Pharmaceuticals (e.g., batch integrity)
* Trade finance (e.g., invoice fraud prevention)
* Government (e.g., tamper-proof registries)
## Real-world use cases
Both smart contracts and chaincode have seen wide adoption, but in different
domains:
**Ethereum smart contracts**
* DeFi: Lending (Aave), AMMs (Uniswap), stablecoins (DAI)
* NFTs: ERC-721 and ERC-1155 used in marketplaces like OpenSea
* Gaming: On-chain assets and play-to-earn mechanics (Axie Infinity)
* DAOs: Governance via tokenized voting
* Identity: Soulbound tokens and decentralized identifiers
These applications benefit from Ethereum’s global accessibility, network
effects, and liquidity. However, they face limitations on speed, privacy, and
cost.
**Fabric chaincode**
* Supply chain: Tracking agriculture, vaccines, mining products
* Finance: Cross-border payments, factoring, CBDCs
* Government: Land registry, voting, customs compliance
* Health care: Clinical trials, pharma cold chain, patient consent
* Insurance: Fraud prevention, parametric claims
These use cases prioritize privacy, trust, auditability, and control. They are
often implemented as B2B consortia with legal agreements backing the chain.
## Developer challenges and solutions
Developing smart contracts is intellectually rigorous and security-critical.
Common challenges include:
* Unintended logic bugs (e.g., integer overflow, reentrancy)
* Upgrade complexity
* Gas estimation and limits
* Eventual consistency of cross-contract calls
To address these, best practices include:
* Using tested libraries like OpenZeppelin
* Auditing with tools like MythX, Slither, and Certora
* Writing extensive test cases and simulations
* Using static analyzers and symbolic execution tools
Chaincode developers face different hurdles:
* Understanding the Fabric lifecycle and endorsement policies
* Designing for modularity across organizations
* Managing certificate-based identities and wallets
* Setting up dev environments with Docker and CA nodes
Fabric provides starter kits, test networks, and sample chaincode to simplify
onboarding. Larger projects benefit from using CI/CD pipelines, Helm charts,
Kubernetes orchestration, and secrets management for production deployments.
## Interoperability and cross-chain functionality
As blockchain ecosystems diversify, interoperability is becoming a crucial
requirement. Smart contracts and chaincode must increasingly interact with
systems beyond their native network, whether that’s another blockchain, a legacy
ERP system, or a cloud-based analytics engine.
## Ethereum and cross-chain interactions
Smart contracts on Ethereum can’t directly interact with other blockchains or
external systems. To bridge this gap, developers rely on:
* Oracles (e.g., Chainlink, Band Protocol): These bring external data onto the
blockchain. For example, fetching off-chain asset prices, weather information,
or compliance results.
* Bridges: Used to transfer tokens and data between chains (e.g., Ethereum,
Polygon). This allows liquidity movement and contract invocation across
chains.
* Relayers and Message Passing Protocols: Protocols like LayerZero, Axelar, or
Wormhole enable generic message-passing between smart contracts deployed on
different blockchains.
However, these tools introduce new attack surfaces. Oracle manipulation (e.g.,
price feed exploits) and bridge vulnerabilities (e.g., reentrancy in token
wrapping contracts) have led to several high-profile exploits in recent years.
To mitigate risk:
* Use decentralized oracles with reputation models
* Implement fail-safes and kill switches in contracts
* Design time-locks for sensitive operations
* Use multi-signature schemes for bridge control
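As a concrete example of the oracle pattern with a built-in fail-safe, the consumer below reads a Chainlink-style price feed and rejects stale answers. The interface mirrors Chainlink's `AggregatorV3Interface` (inlined here for self-containment), and the one-hour staleness threshold is an illustrative choice:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal mirror of Chainlink's AggregatorV3Interface
interface AggregatorV3Interface {
    function latestRoundData()
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

contract PriceConsumer {
    AggregatorV3Interface public immutable feed;

    constructor(address feedAddress) {
        feed = AggregatorV3Interface(feedAddress);
    }

    function latestPrice() external view returns (int256) {
        (, int256 answer, , uint256 updatedAt, ) = feed.latestRoundData();
        // Basic staleness check as a fail-safe against a stuck feed
        require(block.timestamp - updatedAt < 1 hours, "Stale price");
        return answer;
    }
}
```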
## Fabric and system integration
In contrast, Fabric is designed from the outset to be interoperable with
enterprise IT systems. Chaincode interacts with:
* Client applications through SDKs that can relay events, fetch ledger data, and
trigger invokes
* External databases or APIs through intermediary integration layers
* IoT networks feeding real-world data for automation (e.g., temperature sensors
in pharma supply chain)
Furthermore, Fabric supports Chaincode-as-an-External-Service (CCaaS). This
allows developers to write chaincode in any language and run it outside of the
Fabric peer container. It opens the door to integrating with services like:
* Payment gateways
* Identity verification systems
* Cloud AI inference engines
* Enterprise databases (e.g., Oracle, SQL Server)
Integration patterns often include:
* Kafka streams or RabbitMQ for event-driven flows
* RESTful APIs with JWT tokens or OAuth for user authentication
* Hash-based proof mechanisms for verifiable claims
This flexibility is ideal for industries where blockchain is not a replacement,
but a layer of integrity and auditability within a broader digital architecture.
## Legal and compliance considerations
Smart contracts and chaincode carry real-world implications. As they
increasingly encode legal agreements, regulatory frameworks are evolving to
catch up.
Smart contracts can:
* Represent enforceable contracts (e.g., escrow logic for marketplaces)
* Trigger financial transactions or settlements
* Record legal rights (e.g., ownership of an NFT or real-world asset)
Challenges include:
* Immutability: Once deployed, a contract may be impossible to change, even if
laws or user needs evolve.
* Jurisdiction: Ethereum nodes operate globally, but disputes are governed by
national laws.
* Dispute resolution: There is no inherent mechanism for arbitration or human
override in most public chain contracts.
Legal engineers are exploring hybrid contracts, where legal language and smart
contract logic are linked. Tools like OpenLaw, Clause.io, and Accord Project
attempt to bridge legal prose and executable code.
Efforts are also underway to formalize smart contract legality. For instance:
* EU’s MiCA regulation outlines requirements for crypto-asset service providers
* UK Law Commission recognizes smart contracts as enforceable under existing
contract law
* The UNIDROIT project on digital assets and private law includes smart contract
frameworks
## Chaincode and compliance
Fabric, being permissioned, offers more compliance-aligned features:
* Identity and certificate traceability
* Role-based access control
* Confidential data sharing
* GDPR-aligned data deletion via off-chain referencing
* Regulatory reporting through block event logs
This makes Fabric well-suited for:
* Regulated industries like banking, insurance, pharmaceuticals
* Governments requiring high transparency and control
* Auditable workflow tracking (e.g., customs clearance, tax collection)
In many deployments, smart legal clauses are enforced via chaincode, while audit
trails and logs are integrated with compliance reporting tools, reducing manual
oversight and regulatory risk.
## Security practices and threat models
Security in blockchain is binary: you are either secure or exploited. The cost of
failure is high, especially when code is immutable and value is at stake. Thus,
security architecture is a fundamental concern for both smart contracts and
chaincode.
## Ethereum smart contract security
Risks include:
* Reentrancy: Attackers reenter the contract before state updates
* Integer overflow/underflow: Before Solidity 0.8.x, arithmetic bugs caused
major losses
* Access control flaws: Misconfigured admin logic
* Front-running: Transaction order manipulation on public mempools
* Flash loan exploits: Temporary capital used to manipulate or drain funds
Security practices:
* Use established libraries (e.g., OpenZeppelin)
* Adopt design patterns like the following (see the sketch after this list):
  * Checks-Effects-Interactions
  * Pull over Push payments
  * Guarded fallback functions
* Write extensive unit and integration tests
* Conduct formal verification for critical logic
* Use static and dynamic analysis tools (MythX, Slither, Echidna)
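The sketch below combines a hand-rolled reentrancy guard with the checks-effects-interactions pattern; in practice, OpenZeppelin's `ReentrancyGuard` is the audited, battle-tested equivalent:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract GuardedVault {
    uint256 private locked = 1;
    mapping(address => uint256) public balances;

    // Hand-rolled guard for illustration only
    modifier nonReentrant() {
        require(locked == 1, "Reentrant call");
        locked = 2;
        _;
        locked = 1;
    }

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external nonReentrant {
        uint256 amount = balances[msg.sender];
        balances[msg.sender] = 0; // effects before the external interaction
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "Transfer failed");
    }
}
```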
Auditing firms such as Trail of Bits, Certik, and OpenZeppelin are often engaged
before mainnet deployment.
## Chaincode security
Chaincode operates in a more trusted and controlled environment but is not
immune to risks:
* Logic bugs (e.g., flawed endorsement conditions)
* Data leakage via logs or improper use of private collections
* Identity spoofing if MSP or CA is compromised
* Insufficient validation of inputs
* Chaincode-to-chaincode access abuse
Security controls include:
* Certificate revocation and renewal mechanisms
* Endorsement and validation policies
* TLS-secured communication between peers and clients
* Audit logging and block-level integrity proofs
* Isolated container execution for chaincode logic
Organizations often deploy Fabric in hardened Kubernetes clusters with
firewalls, DDoS protection, and intrusion detection systems to defend the full
stack.
## Enterprise adoption strategies
The path to adopting blockchain solutions differs significantly based on whether
an organization builds with public smart contracts or private chaincode. Each
comes with distinct architectural choices, legal implications, and business
models.
## Public blockchain adoption with smart contracts
Organizations building on Ethereum or other public blockchains often aim to:
* Tap into public liquidity and composability (e.g., DeFi or tokenized assets)
* Establish transparent and decentralized infrastructure for services like
identity, media, or decentralized storage
* Implement open innovation models, such as incentivized networks and
community-owned protocols
Adoption journey:
1. Prototype on testnets like Goerli or Sepolia using frameworks like Hardhat or Foundry
2. Deploy contracts on mainnet or L2 chains (Polygon, Arbitrum, Base)
3. Integrate frontends with wallet connectors (e.g., MetaMask, WalletConnect)
4. Secure through audits and bug bounty programs
5. Onboard users with token incentives, governance rights, or NFT rewards
Regulatory risks, gas cost volatility, and UX friction remain barriers. However,
the community-driven innovation and developer tool maturity in Ethereum’s
ecosystem make it the preferred platform for open financial and digital
ecosystems.
## Enterprise blockchain adoption with Fabric chaincode
Private consortiums adopting Fabric begin by identifying multi-party workflows
that require:
* Trusted execution across organizational boundaries
* Verifiable audit trails
* Fine-grained access control
* Integration with legacy systems (ERP, CRM, payment rails)
Adoption journey:
1. Define the consortium: roles, data owners, regulators, service providers
2. Design the network: number of peers, channels, ordering nodes
3. Develop chaincode in Go or Node.js for domain-specific workflows
4. Configure MSPs, CAs, and identities for each organization
5. Launch pilot networks on local or cloud infrastructure
6. Scale deployment to production with orchestration, monitoring, and compliance processes
Industries like logistics, healthcare, government, and finance are actively
deploying Fabric-based networks due to its modular, governed, and
privacy-preserving design.
## Trends shaping the future
Smart contracts and chaincode are evolving rapidly in response to both technical
innovation and market demand. The following trends are shaping the future of
blockchain development and deployment:
1. Zero-knowledge proofs (ZKPs)
Smart contracts are incorporating zk-SNARKs and zk-STARKs for privacy-preserving
computation. Use cases include:
* Private voting (e.g., MACI)
* Anonymous identity (e.g., Semaphore, zkLogin)
* Scalable L2 rollups with zero-knowledge validity proofs (zkSync, Scroll,
Polygon zkEVM)
Fabric is also exploring ZKP integration through custom chaincode modules that
validate off-chain assertions without revealing underlying data.
2. Account abstraction
Ethereum is transitioning toward account abstraction (EIP-4337), allowing smart
contracts to act as user wallets. This will enable:
* Gasless transactions (sponsored by dApps)
* Biometric or social login
* Session keys and programmable recovery
It transforms the UX of smart contract interaction and lowers the barrier to
Web3 adoption for non-technical users.
3. Tokenization of real-world assets
Both smart contracts and chaincode are powering the tokenization of assets:
* Real estate, commodities, bonds, and even invoices
* On-chain trading, settlement, and collateralization
* Compliance with local regulations via role-based access and KYC integration
Platforms like SettleMint, ConsenSys Codefi, and R3 Corda are at the forefront
of building asset tokenization infrastructure using these paradigms.
4. CBDCs and digital cash
Central banks are exploring digital currencies built on smart contracts or
chaincode. Examples:
* Ethereum-based pilots (e.g., Banque de France, MAS)
* Hyperledger-based implementations (e.g., Project Bakong in Cambodia)
* Interbank settlement using private blockchains (e.g., Jasper-Ubin, mBridge)
These systems use programmable logic for issuance, circulation control,
compliance, and analytics.
5. Decentralized identity and verifiable credentials
Smart contracts are increasingly tied to DIDs and VCs:
* Establish on-chain identifiers
* Issue credentials via trusted institutions
* Validate claims without revealing user data (e.g., zero-knowledge credentials)
Fabric supports similar models using attribute-based certificates and private
data collections, making it ideal for enterprise-grade identity networks.
file: ./content/docs/knowledge-bank/solidity.mdx
meta: {
"title": "Solidity programming",
"description": "Guide to Solidity smart contract development"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Tabs, Tab } from "fumadocs-ui/components/ui/tabs";
## Introduction to Solidity
Solidity is the primary programming language used for writing smart contracts on
the Ethereum blockchain and other EVM-compatible platforms. It is a statically
typed, contract-oriented language influenced by JavaScript, Python, and C++.
Solidity enables developers to encode business logic and digital agreements
directly onto the blockchain in the form of executable contracts.
Solidity compiles into bytecode that runs on the Ethereum Virtual Machine. Each
deployed contract becomes part of the blockchain's permanent history and can
interact with users, other contracts, or itself based on its defined functions
and data structures. The language supports inheritance, libraries, user-defined
types, event emission, and cryptographic primitives.
## The Ethereum Virtual Machine
Before diving into Solidity syntax and logic, it is crucial to understand the
execution environment. Solidity contracts run on the Ethereum Virtual Machine,
which is a sandboxed runtime capable of executing bytecode deterministically
across all Ethereum nodes. The EVM has access to the blockchain’s current state
and can modify it as part of transaction execution.
The EVM operates on a stack-based architecture with its own instruction set.
Developers interact with it indirectly through high-level code written in
Solidity. The EVM is responsible for managing account balances, contract
storage, and gas usage. Each operation within a contract costs a specific amount
of gas and transactions must supply a sufficient gas limit to execute
successfully.
## Contract Structure
A Solidity smart contract starts with a version pragma to define the compiler
version. This is followed by imports, state variable declarations, function
definitions, events, modifiers, and any supporting types. The structure must be
clear and organized to ensure maintainability and readability.
Here is a basic example of a Solidity contract:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract HelloWorld {
string public message;
constructor(string memory initMessage) {
message = initMessage;
}
function updateMessage(string memory newMessage) public {
message = newMessage;
}
}
```
This contract demonstrates core concepts such as constructor initialization,
public state variables, and transaction-triggered updates. Once deployed, this
contract can store and retrieve a message on-chain and allow users to update it.
## Data Types in Solidity
Solidity offers a range of data types to handle values. These include primitive
types such as integers, booleans, and addresses, as well as complex structures
like arrays, mappings, structs, and enums. Memory and storage handling is
critical since the location of data impacts gas usage and state persistence.
The basic types include the following:
**Boolean** Used to store true or false values. It consumes minimal storage and
is commonly used for conditions and flags.
**Integer** Solidity provides signed and unsigned integers with widths ranging
from 8 to 256 bits. Integer overflow was a critical issue prior to version
0.8.x, which now has built-in overflow checks.
**Address** The address type holds Ethereum addresses. It includes functions
such as transfer, send, and call to interact with other accounts or contracts.
**String and Bytes** Strings are dynamically sized UTF-8 sequences. Bytes can be
fixed or dynamic in size and are used for efficient binary data storage.
**Arrays** Arrays can be fixed or dynamic and support indexing. They can hold
any type including other arrays or structs.
**Mappings** Mappings are hash tables that associate keys with values. They are
particularly efficient for lookups and are widely used for token balances or
permissions.
**Structs and Enums** Structs group multiple fields under a single type. Enums
define a restricted set of named values and are useful for state machines and
access modes.
```solidity
struct Product {
string name;
uint price;
bool available;
}
enum Status { Pending, Shipped, Delivered }
```
Understanding these types and their appropriate use cases is essential for
writing efficient and secure smart contracts.
## Functions and Visibility
Functions are the building blocks of a Solidity smart contract. They define the
logic that interacts with and modifies the contract’s state. A function can be
called internally by other functions or externally via transactions and
off-chain calls.
Every function has a signature that may include arguments, return values,
visibility specifiers, mutability specifiers, and modifiers. Solidity supports
multiple visibility levels to control access to functions and variables.
**Public** Functions and variables marked as public can be accessed from both
inside and outside the contract. Solidity automatically creates a getter method
for public state variables.
**Private** Only visible within the contract that defines them. Private
functions and variables are not accessible by derived contracts.
**Internal** Accessible within the contract and by contracts that inherit from
it. Internal visibility allows reuse through inheritance but prevents access by
external actors.
**External** Callable only from outside the contract. External functions are
optimized for gas and used for API-like interfaces that interact with users or
other contracts.
```solidity
contract AccessExample {
string internal name;
function setName(string memory newName) public {
name = newName;
}
function getName() external view returns (string memory) {
return name;
}
}
```
## Function Modifiers
Modifiers are custom logic wrappers used to change the behavior of functions.
They are typically used for access control, validation, or logging. A modifier
can execute code before or after the target function runs and uses the
underscore character as a placeholder for the function body.
Common use cases include role-based access, locking mechanisms, and input
validations.
```solidity
contract Settings {
    address public owner = msg.sender; // deployer becomes owner

    modifier onlyOwner() {
        require(msg.sender == owner, "Not authorized");
        _;
    }

    function updateSettings() public onlyOwner {
        // Only the owner can perform this action
    }
}
```
Modifiers make contracts easier to read and maintain by isolating repetitive
checks or preconditions.
## Memory vs Storage
Solidity uses two main locations for data: memory and storage. Choosing the
correct location is important for both performance and correctness.
**Storage** Storage variables persist on-chain and retain their values between
transactions. They are more expensive to use and are associated with the
contract's permanent state.
**Memory** Memory variables exist only during function execution. They are
cheaper and are reset after each external call or function return.
Function arguments of reference types like arrays, structs, and strings must
explicitly declare whether they are stored in memory or storage.
```solidity
function setMessage(string memory _msg) public {
message = _msg;
}
```
Local variables should use memory unless they need to persist across function
calls. Operations on storage references can unexpectedly modify contract state
if not handled correctly.
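The following sketch makes the difference concrete: writing through a `storage` reference mutates contract state, while a `memory` copy affects only a local value:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract StorageVsMemory {
    struct Item { uint256 value; }
    Item[] public items;

    function addItem() external {
        items.push(Item(0));
    }

    function mutate(uint256 i) external {
        Item storage s = items[i];
        s.value = 42; // writes through to contract storage

        Item memory m = items[i];
        m.value = 7;  // modifies only a local copy; storage is untouched
    }
}
```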
## Constructors and Initialization
Solidity contracts support constructors to initialize state variables during
deployment. A constructor is defined using the keyword `constructor` and can
accept parameters. It is called once and only during deployment.
```solidity
contract Token {
string public name;
constructor(string memory _name) {
name = _name;
}
}
```
If no constructor is defined, Solidity provides a default one. Constructors are
useful for passing values such as token names, owner addresses, or initial
configurations.
## Events and Logging
Solidity provides events for emitting logs from contracts. Events allow
contracts to communicate with the outside world by triggering logs that can be
captured by off-chain applications or indexed by external services.
Events are declared with the `event` keyword and triggered with the `emit`
statement.
```solidity
mapping(address => uint) public balances;

event Transfer(address indexed from, address indexed to, uint amount);

function transfer(address to, uint amount) public {
    balances[msg.sender] -= amount;
    balances[to] += amount;
    emit Transfer(msg.sender, to, amount);
}
```
Indexed parameters allow external applications to filter events efficiently.
Event logs are stored in the transaction receipt and are not accessible from
within contracts.
## Error Handling and Assertions
Solidity offers several mechanisms to handle errors and enforce correctness.
**Require** Checks for valid conditions and reverts with a message if the
condition fails. It refunds unused gas and is typically used for input
validation and access control.
**Revert** Explicitly causes a failure and reverts all changes. It is used to
signal errors deeper in the call stack or to create custom error messages.
**Assert** Used to check internal consistency and invariants. It consumes all
remaining gas and is usually reserved for cases that should never fail unless
there is a bug.
```solidity
mapping(address => uint) public balance;

function withdraw(uint amount) public {
    require(balance[msg.sender] >= amount, "Insufficient funds");
    balance[msg.sender] -= amount;
    payable(msg.sender).transfer(amount);
}
```
Proper error handling improves user experience and guards against contract
misuse.
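Since Solidity 0.8.4, custom errors provide a cheaper alternative to revert strings while carrying structured data. A small sketch (the error and contract names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

// Custom errors revert with less gas than string messages
error InsufficientFunds(uint256 requested, uint256 available);

contract Vault {
    mapping(address => uint256) public balance;

    function withdraw(uint256 amount) public {
        if (balance[msg.sender] < amount) {
            revert InsufficientFunds(amount, balance[msg.sender]);
        }
        balance[msg.sender] -= amount;
        payable(msg.sender).transfer(amount);
    }
}
```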
## Inheritance and Contract Composition
Solidity supports single and multiple inheritance, allowing contracts to inherit
state variables and functions from one or more base contracts. This enables
reuse of code, modular design, and extensibility of functionality.
A derived contract can override functions from a base contract and use the
`super` keyword to reference parent implementations. This pattern is widely used
in frameworks like OpenZeppelin where base contracts implement common features
such as ownership, pausability, or token standards.
```solidity
contract Base {
function greet() public pure virtual returns (string memory) {
return "Hello from Base";
}
}
contract Derived is Base {
function greet() public pure override returns (string memory) {
return "Hello from Derived";
}
}
```
The `virtual` keyword must be used on base functions that are intended to be
overridden, and the `override` keyword must be applied to the derived functions
that replace them.
## Abstract Contracts
An abstract contract is one that contains at least one function without an
implementation. These contracts cannot be deployed directly and are intended to
serve as base definitions that must be extended by child contracts.
Abstract contracts define reusable logic and interfaces for complex systems.
They enforce structure while allowing customization in derived implementations.
```solidity
abstract contract Account {
function deposit() public virtual;
}
contract BankAccount is Account {
function deposit() public override {
// Implementation
}
}
```
Abstract contracts are particularly useful when designing modular applications
with interchangeable components.
## Interfaces in Solidity
Interfaces are similar to abstract contracts but with stricter rules. They
define function signatures without implementations and cannot include state
variables, constructors, or non-external functions.
Interfaces are commonly used to interact with external contracts, such as ERC20
or ERC721 tokens. They allow contracts to call functions on other contracts
without needing the full source code.
```solidity
interface IERC20 {
function totalSupply() external view returns (uint);
function transfer(address to, uint amount) external returns (bool);
}
```
Any contract that implements the interface must provide concrete implementations
of the defined functions. Interfaces enable modularity, upgradeability, and
protocol compatibility.
## Libraries and Code Reuse
Solidity provides libraries as a way to organize and reuse logic without
maintaining state. Libraries can contain reusable functions that operate on
primitive types or user-defined structs. They are deployed once and linked to
other contracts either statically or dynamically.
Stateless libraries reduce code duplication and optimize for gas by sharing
logic across contracts. Solidity allows both internal and external library
calls, with `using for` syntax enabling method chaining on types.
```solidity
library Math {
function add(uint a, uint b) internal pure returns (uint) {
return a + b;
}
}
contract Calculator {
using Math for uint;
function compute(uint x, uint y) public pure returns (uint) {
return x.add(y);
}
}
```
Libraries are essential in building secure and efficient systems. Popular
libraries include SafeMath, Address, Strings, and EnumerableSet from
OpenZeppelin.
## Contract-to-Contract Interaction
Solidity contracts can interact with other contracts through their interfaces or
direct references. This allows building composable systems, delegating
functionality, or creating dependency chains.
There are three primary methods to interact with contracts:
**Direct Instantiation** The contract is deployed and its address is used to
create an instance in another contract.
**Interfaces** An interface is defined for the external contract and used to
make safe calls.
**Low-level Calls** Functions like `address.call`, `delegatecall`, and
`staticcall` provide low-level access but require caution due to lack of type
safety.
```solidity
interface IExternal {
function getValue() external view returns (uint);
}
contract Caller {
function fetch(address target) public view returns (uint) {
IExternal ext = IExternal(target);
return ext.getValue();
}
}
```
Care must be taken to handle failed calls, manage gas, and validate external
data. Contract interactions are powerful but must be audited for reentrancy,
access control, and unexpected behaviors.
## Gas Optimization Techniques
Every operation in Solidity costs gas. Efficient contracts reduce cost for users
and optimize blockchain storage. Developers must consider gas costs when
designing logic, especially for loops, storage writes, and external calls.
Common gas-saving techniques include:
Using `uint256` instead of smaller types like `uint8` unless packing structs.
The default word size of the EVM is 256 bits and aligning types prevents
unnecessary operations.
Packing multiple small variables into a single storage slot by placing them
sequentially in a struct. This reduces the number of SSTORE operations and
lowers gas usage.
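For instance, the fields in the struct below together occupy 29 bytes and therefore share a single 32-byte storage slot (the field names and widths are illustrative):

```solidity
// Four small fields packed into one 256-bit storage slot
struct PackedOrder {
    uint128 amount;   // 16 bytes
    uint64 deadline;  // 8 bytes
    uint32 nonce;     // 4 bytes
    bool filled;      // 1 byte, still fits in the same slot
}
```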
Avoiding expensive operations such as writing to storage inside loops or
repeatedly calling functions that return the same value. Instead, cache results
in memory and update storage only once.
Using constants and immutable variables for values that never change. Constants
are inlined during compilation, and immutables are set once during deployment.
```solidity
uint256 constant MAX_SUPPLY = 1000000;
address immutable creator;
constructor() {
creator = msg.sender;
}
```
Precomputing values off-chain when possible and storing minimal references (such
as hashes or IPFS links) on-chain. This ensures auditability while saving gas.
## Fallback and Receive Functions
Solidity supports special functions to handle unexpected calls and Ether
transfers. These include the fallback and receive functions.
**Receive** Called when the contract receives plain Ether with no data. It must
be declared as external and payable.
**Fallback** Called when a function is not found or data is provided with the
Ether transfer. Can be used to handle dynamic calls or proxy behavior.
```solidity
receive() external payable {
// Handle incoming Ether
}
fallback() external payable {
// Handle unknown function calls
}
```
Contracts with neither a receive nor a fallback function will reject Ether
transfers. These functions must be handled carefully to avoid exposing
vulnerabilities such as uncontrolled proxy logic or denial of service.
## Storage Layout and Upgradability
Understanding the layout of storage is critical for writing upgradeable
contracts. Solidity stores state variables sequentially in storage slots. In
upgradable contracts using proxy patterns, storage layout must be preserved
across versions.
Breaking layout compatibility can lead to overwritten values or locked state.
Developers use patterns like:
* Reserved storage slots that leave gaps for future variables (see the sketch below)
* Structs with consistent layouts
* Avoiding reordering of variables between upgrades
* Libraries like OpenZeppelin's Upgradeable Contracts that handle these constraints with automated tools
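A minimal sketch of the reserved-gap pattern, mirroring the `__gap` convention used by OpenZeppelin's upgradeable contracts (the variable names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract LogicV1 {
    uint256 public value; // slot 0
    address public owner; // slot 1

    // Reserve slots so future versions can append variables
    // without shifting the storage layout of child contracts
    uint256[48] private __gap;
}
```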
## Deployment Considerations
Deploying a Solidity contract involves compiling it with the Solidity compiler
and broadcasting a transaction containing the bytecode. Deployment must be
planned considering:
* Gas limits and funding
* Network congestion
* Correct configuration of constructor arguments
* Initial state validation and post-deployment scripts
Tooling like Hardhat, Truffle, and Foundry streamline deployment. Developers can
write migration scripts, automate deployment pipelines, and deploy to testnets
like Goerli or Sepolia before mainnet launches.
Contracts once deployed are immutable unless designed with upgradability.
Therefore, deployments must be audited, documented, and verified using block
explorers.
## Testing and Debugging Contracts
Testing is crucial in Solidity development. Bugs in smart contracts can cause
financial losses, loss of data, or legal issues. Testing strategies include:
* Unit testing with JavaScript or TypeScript using frameworks like Mocha and Chai
* Integration testing using Hardhat or Foundry to simulate full user workflows
* Property-based testing with tools like Echidna to check for unexpected failures
* Gas profiling to detect inefficient logic
* Stack tracing with Hardhat and debugging failed transactions on local networks
Tests should cover edge cases, reentrancy, state transitions, permissioned
functions, and math boundaries.
Example of a simple unit test in Hardhat:
```javascript
it("should update the message", async function () {
const [owner] = await ethers.getSigners();
const Contract = await ethers.getContractFactory("HelloWorld");
const contract = await Contract.deploy("Initial");
await contract.updateMessage("New message");
expect(await contract.message()).to.equal("New message");
});
```
Writing thorough, automated tests improves code quality and confidence, and
reduces the risk of deployment errors.
## Real-World Applications of Solidity
Solidity is the backbone of many real-world blockchain applications. It is used
to build decentralized finance platforms, NFT marketplaces, DAOs, identity
management solutions, and more. These applications run autonomously on the
blockchain and rely on Solidity contracts to manage state, enforce rules, and
handle value.
In decentralized finance, Solidity is used to implement lending protocols,
decentralized exchanges, automated market makers, and staking systems. Contracts
manage user deposits, interest accruals, liquidity pools, and real-time asset
swaps. Protocols like Aave, Compound, and Uniswap rely on robust and secure
Solidity contracts.
In NFTs, Solidity is used to encode ownership of digital assets, media, and
collectibles. NFT standards such as ERC721 and ERC1155 define how tokens are
minted, transferred, and traded. These standards allow creators to build
marketplaces, auctions, and royalties systems that are fully on-chain.
In DAOs, Solidity enables governance through smart contracts that manage
proposals, voting, and treasury disbursements. Token holders can interact with
DAO contracts to steer the direction of decentralized communities and allocate
funds democratically.
## ERC Standards and Token Contracts
Ethereum Request for Comments (ERC) standards define common interfaces and
behaviors for tokens. The most widely used standards in Solidity are ERC20,
ERC721, and ERC1155.
**ERC20** Defines a fungible token interface. Each token is identical and
divisible. Used for currencies, governance tokens, and utility tokens.
**ERC721** Defines non-fungible tokens. Each token has a unique identifier and
is used for collectibles, art, and identity.
**ERC1155** Defines a multi-token standard that can manage both fungible and
non-fungible assets in one contract. Useful for gaming and marketplaces.
Example of an ERC20 token in Solidity:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract MyToken is ERC20 {
    constructor() ERC20("MyToken", "MTK") {
        _mint(msg.sender, 1000000 * 10 ** decimals());
    }
}
```
These standards promote interoperability across wallets, exchanges, and dApps.
## Upgradeable Contracts and Proxy Patterns
Smart contracts are immutable by design, but upgradeability can be achieved
using proxy patterns. This involves separating logic and storage. A proxy
contract delegates calls to an implementation contract while preserving state.
Common upgrade patterns include:
* Transparent Proxy Pattern, where an admin can upgrade the implementation and users interact with the proxy
* UUPS (Universal Upgradeable Proxy Standard), a lightweight proxy approach with upgrade logic embedded in the implementation
* Beacon Proxy, where multiple proxies share a common upgrade point via a beacon contract
Upgradeability requires careful management of storage layout and access control.
Libraries like OpenZeppelin provide secure implementations for deploying and
managing upgradeable contracts.
## DeFi and Composability
DeFi applications are built with composability in mind. This allows contracts to
interact with each other to form complex financial instruments. A vault may use
a lending protocol as collateral, an exchange to swap tokens, and an oracle for
pricing.
Solidity enables this through safe contract interactions, event logging, and
shared standards. Developers must be aware of reentrancy risks, flash loan
attacks, and front-running vulnerabilities.
To build secure DeFi protocols, developers use:
* Price oracles with time-weighted averages
* Reentrancy guards and withdrawal patterns
* Permit functions for gasless approvals using signatures
* Circuit breakers and emergency pause functionality
* Treasury contracts and time-locked governance
## NFT Use Cases and Marketplace Contracts
NFTs are digital representations of unique assets. Solidity allows for minting,
transferring, and auctioning NFTs. Common features include:
* On-chain metadata linking to IPFS or Arweave
* Minting limits, royalties, and whitelists
* Batch transfers and airdrops
* Integration with off-chain marketplaces via events and standards
An NFT contract must comply with ERC721 or ERC1155 and implement functions such
as `tokenURI`, `safeTransferFrom`, and approval mechanisms.
Example snippet for minting an ERC721:
```solidity
function mint(address to, uint tokenId) public onlyOwner {
_safeMint(to, tokenId);
}
```
Marketplaces often rely on Solidity for order matching, escrow, and bidding
systems. Events like `Transfer`, `Approval`, and `Sale` enable real-time
indexing and discovery.
## Smart Contract Auditing
Auditing is a critical step before deploying Solidity contracts to mainnet. It
involves a deep review of the codebase to identify bugs, vulnerabilities, and
inefficiencies. Audit activities include:
* Manual review of logic, access control, and storage layout
* Static analysis for known patterns and anti-patterns
* Unit and integration test coverage evaluation
* Formal verification of critical invariants
Security researchers simulate attack vectors and suggest mitigation strategies.
Common audit findings include unprotected ownership transfers, unchecked
external calls, and improper math operations.
Well-audited contracts are essential for DeFi, token launches, and enterprise
applications. Auditors provide reports with severity classifications and
recommended fixes.
## Advanced Design Patterns in Solidity
Solidity supports several advanced design patterns that enhance flexibility,
modularity, and safety in smart contract development.
**Factory Pattern** Used to create multiple instances of a contract from a
parent contract. Common in NFT collections, lending vaults, or token launches.
The factory contract handles deployment and registration of new child contracts.
```solidity
contract Child {
    // Minimal child contract deployed by the factory
}

contract Factory {
    address[] public children;

    function createChild() external {
        Child c = new Child();
        children.push(address(c));
    }
}
```
**Proxy Pattern** Separates logic and data to enable upgrades. Uses
`delegatecall` to forward calls from a proxy to an implementation. Requires
careful management of storage slots and admin privileges.
**Pull over Push** Reduces risks by letting users withdraw funds instead of
having them sent automatically. Prevents reentrancy and unexpected failures.
```solidity
mapping(address => uint) public balances;
function withdraw() public {
uint amount = balances[msg.sender];
balances[msg.sender] = 0;
payable(msg.sender).transfer(amount);
}
```
**Access Control and Role Management** Implementing granular permissions using
role-based patterns enhances security and decentralization. Contracts use
mappings and modifiers to enforce role ownership and administrative boundaries.
**Pausable Contracts** Include pause functionality to temporarily disable
sensitive functions during emergencies or maintenance. Commonly used in DeFi
protocols to prevent exploits during volatile periods.
## Solidity Development Tools
A rich ecosystem of tools supports Solidity development across the lifecycle
from writing code to deploying it on-chain.
**Solidity Compiler (solc)** The core compiler that transforms Solidity source
code into bytecode and ABI. Supported by most frameworks and used in custom
build setups.
**Hardhat** A flexible development framework for Solidity. Offers in-memory EVM
for testing, plugin system, network forking, stack traces, and deployment
automation.
**Foundry** A fast, Rust-based toolkit for smart contract development. Supports
fuzzing, property testing, Solidity scripting, and efficient builds.
**Truffle** Legacy framework offering test and deployment tooling. Used in
conjunction with Ganache for local chain simulation.
**Remix IDE** A browser-based Solidity editor for quick experimentation.
Includes a Solidity compiler, debugger, and testing console.
**Ethers.js and Web3.js** JavaScript libraries for interacting with Solidity
contracts from frontend or backend applications. Provide contract instantiation,
event listeners, and signer abstractions.
**The Graph** Indexes blockchain data emitted by Solidity events. Allows dApps
to query historical and real-time data using GraphQL.
**Slither and MythX** Static analysis tools that detect common bugs and
vulnerabilities in Solidity code. Often used during audits.
## Best Practices for Solidity Development
Following best practices in Solidity improves code security, readability, and
maintainability.
* Use the latest stable compiler version for security improvements and bug fixes
* Always specify an exact compiler version using pragma to avoid incompatibilities
* Favor short, readable functions with clear logic separation
* Validate all external inputs with `require` and explicit checks
* Avoid complex nested loops or deep inheritance trees
* Use modifiers for role enforcement and repeated checks
* Write unit and integration tests covering edge cases
* Audit for reentrancy, access control, overflow, and race conditions
* Use established libraries such as OpenZeppelin for tokens, roles, and safe math
* Document contracts and public APIs using NatSpec comments
## The Future of Solidity
Solidity is actively maintained by the Ethereum Foundation and community
contributors. Its evolution is shaped by developer feedback, security research,
and EVM ecosystem needs.
Key areas of ongoing and future improvement include:
* Optimizing for gas efficiency with new opcodes and compiler outputs
* Improving developer ergonomics with better debugging and error reporting
* Supporting language features such as generics, custom types, and macros
* Integrating with zero-knowledge tools to enable private computations
* Enabling more native cross-chain and asynchronous execution patterns
* Expanding formal verification support for mission-critical systems
The language has matured from simple token contracts to powering
multi-billion-dollar decentralized systems. With new features, patterns, and
tooling, Solidity will continue to be a foundation for programmable value and
decentralized governance.
Solidity is a gateway into decentralized systems that shift control from
centralized authorities to code-enforced logic. From tokens and DAOs to DeFi and
NFTs, Solidity enables developers to build unstoppable applications with trust,
transparency, and autonomy.
Mastering Solidity involves understanding not just syntax but also the
principles of blockchain execution, gas efficiency, state management, and
security. With the right tools and discipline, developers can design, build, and
maintain smart contracts that are robust, upgradeable, and impactful across
industries.
As Ethereum and the EVM ecosystem evolve, Solidity will continue to play a key
role in shaping the future of decentralized applications and programmable
finance.
file: ./content/docs/knowledge-bank/subgraphs.mdx
meta: {
"title": "Subgraphs",
"description": "A complete guide to building, deploying, and querying subgraphs for blockchain data indexing using The Graph protocol"
}
## Introduction to subgraphs
Subgraphs are the indexing and querying units used within The Graph protocol.
They define how on-chain data should be extracted, processed, and served through
a GraphQL API. Subgraphs are essential for building decentralized applications
that require fast and reliable access to blockchain state and historical data.
Rather than querying a blockchain node directly for each interaction, developers
create subgraphs to transform raw events and calls into structured, queryable
datasets. These subgraphs run on The Graph’s decentralized network or its hosted
service and serve as the backend data layer for many of the most popular dApps.
Subgraphs are written in a declarative way. Developers specify which smart
contract events to listen to, how to transform those events into entities, and
what GraphQL schema the final API should expose. This model enables clean
separation between data generation on-chain and data consumption off-chain.
## Understanding The Graph protocol
The Graph protocol is an indexing infrastructure that allows querying blockchain
data through GraphQL. It supports various EVM-compatible chains like Ethereum,
Polygon, Avalanche, and others. The core components of the protocol include:
* Indexers, who run nodes that index data and serve queries
* Curators, who signal valuable subgraphs using GRT tokens
* Delegators, who stake GRT with indexers to secure the network
* Consumers, who query subgraphs using GraphQL
Subgraph developers define the indexing logic and deploy it to the network. Once
deployed, their subgraphs become accessible via GraphQL endpoints and are
maintained by indexers without requiring centralized APIs.
The protocol provides deterministic indexing through WASM-based mappings,
scalability through sharding and modular design, and economic incentives to
ensure data availability and integrity.
## Anatomy of a subgraph
A subgraph project is composed of a few critical files and directories. These
define how The Graph will extract and structure the data:
**subgraph.yaml** This is the manifest file. It defines the network, data
sources (contracts), event handlers, and mappings. It instructs The Graph which
chain to listen to, which contracts to monitor, and how to process specific
events or function calls.
**schema.graphql** This file defines the GraphQL schema for the subgraph. It
declares entity types, their fields, relationships, and any indexing options.
These types become the basis for how data can be queried later on.
**AssemblyScript mappings** Mapping files are written in AssemblyScript, a
TypeScript-like language that compiles to WebAssembly. These files include event
handlers that transform blockchain data into entities. They run in a sandboxed
environment and cannot make HTTP calls or access off-chain storage.
**Generated types** Using the Graph CLI, developers generate TypeScript
definitions for events, entities, and contract bindings. This allows safe and
predictable manipulation of blockchain data within mapping functions.
These elements combine to form a complete subgraph that listens to smart
contract events, transforms them into structured data, and exposes them via a
GraphQL API.
## Sample subgraph structure
A typical subgraph structure in the project directory might look like this:
```
├── subgraph.yaml
├── schema.graphql
├── src/
│ └── mapping.ts
├── generated/
│ ├── schema.ts
│ └── contract/
│ └── Contract.ts
├── abis/
│ └── Contract.json
```
Each file plays a specific role in defining how data flows from the blockchain
into a queryable dataset.
* `subgraph.yaml` specifies the smart contract and handlers
* `schema.graphql` defines the API
* `mapping.ts` transforms events into entities
* `generated/` holds types used in the mappings
* `abis/` contains the smart contract ABI required to decode events
This structure ensures the subgraph remains modular, readable, and easy to
maintain.
## Defining the GraphQL schema
The schema file in a subgraph project is named `schema.graphql`. It defines the
data model of the subgraph. Each data model is known as an entity and is
represented using GraphQL type definitions.
Entities are stored in the subgraph's underlying database and can be queried via
GraphQL. Each entity must have an `id` field of type `ID`, which serves as the
unique identifier.
Example schema:
```graphql
type Profile @entity {
id: ID!
owner: Bytes!
name: String!
createdAt: BigInt!
}
```
The schema supports scalar types like `String`, `Int`, `BigInt`, `Bytes`,
`Boolean`, and `ID`. Entities can also reference other entities using one-to-one
or one-to-many relationships, which are established by using `@derivedFrom` on
the related side.
```graphql
type User @entity {
id: ID!
profiles: [Profile!]! @derivedFrom(field: "owner")
}
```
This schema describes a one-to-many relationship from `User` to `Profile`, where
each profile is linked to a user via the `owner` field.
The schema must be aligned with the data emitted by smart contract events. Each
time an event is handled, new instances of these entities are created or updated
through the mapping logic.
## Writing the subgraph manifest
The manifest file `subgraph.yaml` tells The Graph which blockchain to connect
to, which contracts to monitor, and which handlers to invoke. It also defines
the schema and mapping files.
A minimal example of a subgraph manifest:
```yaml
specVersion: 0.0.4
description: Tracks profiles on-chain
repository: https://github.com/example/profile-subgraph
schema:
file: ./schema.graphql
dataSources:
- kind: ethereum
name: ProfileContract
network: mainnet
source:
address: "0x1234567890abcdef..."
abi: ProfileContract
startBlock: 15000000
mapping:
kind: ethereum/events
apiVersion: 0.0.7
language: wasm/assemblyscript
entities:
- Profile
abis:
- name: ProfileContract
file: ./abis/ProfileContract.json
eventHandlers:
- event: ProfileCreated(indexed address,indexed bytes32,string,uint256)
handler: handleProfileCreated
file: ./src/mapping.ts
```
This manifest tells The Graph to watch the `ProfileCreated` event emitted by a
contract deployed at a specific address on Ethereum mainnet. The event will be
handled by the `handleProfileCreated` function in the mapping file.
The `startBlock` is used to optimize syncing by skipping historical blocks that
do not contain relevant data. The ABI is needed to decode event parameters.
## Creating the mapping logic
Mapping logic is written in AssemblyScript and placed in the `src` folder. It
contains functions that respond to smart contract events and generate or update
entities based on event data.
Example mapping for `ProfileCreated`:
```ts
import { ProfileCreated } from "../generated/ProfileContract/ProfileContract";
import { Profile } from "../generated/schema";
export function handleProfileCreated(event: ProfileCreated): void {
let entity = new Profile(event.params.profileId.toHex());
entity.owner = event.params.owner;
entity.name = event.params.name;
entity.createdAt = event.block.timestamp;
entity.save();
}
```
The `event` object gives access to event parameters, transaction metadata, and
block context. The handler creates a new `Profile` entity using the profile ID
as the key, assigns values, and saves it to the store.
Handlers must always ensure that IDs are unique and types are compatible with
the schema. Entities are saved using the `.save()` method and will be queryable
via the GraphQL API after indexing.
## Generating types from the schema
To safely work with entity types and contract events in the mapping file,
developers use the Graph CLI to generate code from the schema and ABI.
The command is:
```
graph codegen
```
This generates TypeScript files under the `generated/` directory. These include:
* Type-safe classes for each entity in `schema.ts`
* Smart contract bindings for each ABI in its respective folder
These generated types eliminate common errors, offer autocompletion, and ensure
consistency between the schema, mappings, and event definitions.
Entities in the generated code extend the `Entity` class and expose getters and
setters for each field, along with type casting and default value helpers.
## Deploying a subgraph
Once the schema, mappings, and manifest are ready, the subgraph is deployed with
the Graph CLI. The CLI packages the files, uploads the schema, manifest,
mappings, and ABIs, and registers the subgraph with the indexing service. A
deployment hash is generated, which serves as a reference to the current
version.
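A typical flow with the Graph CLI looks like the following; the deploy endpoint and subgraph name are placeholders, and the exact flags vary by CLI version and deployment target (hosted service, Subgraph Studio, or a self-hosted graph-node):

```
graph codegen
graph build
graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph.com/ipfs/ example/profile-subgraph
```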
## Querying with GraphQL
Once deployed and synced, subgraphs expose a GraphQL endpoint. This allows
front-end applications, analytics tools, or external APIs to query the data.
The GraphQL schema is derived directly from `schema.graphql`, and every entity
becomes a queryable type. Developers can retrieve data with flexible queries
such as:
```graphql
{
profiles(first: 5, orderBy: createdAt, orderDirection: desc) {
id
name
owner
createdAt
}
}
```
Filters can be applied using `where` clauses to narrow down results.
```graphql
{
profiles(where: { owner: "0xabc..." }) {
name
id
}
}
```
Pagination is supported using `first` and `skip`, enabling efficient rendering
of lists and infinite scroll interfaces.
Sorting can be applied using `orderBy` on indexed fields, with `asc` or `desc`
directions.
Subgraphs enable complex data consumption with minimal performance cost, as all
queries run against a pre-indexed store maintained by indexers.
## Versioning and schema migration
As smart contracts evolve or new requirements emerge, subgraphs need to be
versioned. Developers can deploy multiple versions of a subgraph and mark one as
the current production version.
Each version has a unique deployment ID, and changes to the schema, event
handlers, or mappings require a new deployment.
To support schema changes:
* Update the `schema.graphql`
* Regenerate types using `graph codegen`
* Update mappings accordingly
* Test and deploy as a new version
The old version remains accessible but no longer receives new data. This allows
safe rollouts, testing, and rollback if necessary.
## Performance optimization and indexing
Efficient indexing is crucial to ensure that subgraphs sync quickly and serve
queries promptly. Developers can improve performance by:
* Reducing the number of entities created per event
* Avoiding heavy use of derived fields or reverse lookups
* Minimizing unnecessary state updates to entities
* Reducing large loops or repetitive logic in mappings
* Setting an appropriate `startBlock` to skip unnecessary historical data
* Avoiding cross-contract or recursive calls in mappings, which are unsupported
Indexing speed depends on block density, event frequency, and mapping logic
complexity. For high-volume subgraphs, batching and conditional logic can help
reduce bottlenecks.
Indexing logs can be monitored via the dashboard or CLI to debug issues such as
failed events, missing ABI entries, or type mismatches.
## Debugging and testing
Subgraphs can be tested locally using a forked chain or mock events. The Graph
CLI supports:
* Code generation and validation
* Manifest linting
* Subgraph simulation
For contract testing, frameworks like Hardhat or Foundry can emit events and
verify the subgraph behavior using integration test setups.
Event handlers should be designed to be deterministic and side-effect-free.
Subgraphs do not persist intermediate state or allow external API calls, so all
logic must be pure and repeatable.
Logs and sync statuses provide real-time feedback on indexing progress, failed
handlers, or schema violations.
Proper testing reduces production errors and ensures reliable data for users and
applications.
## Using dynamic data sources
In many decentralized applications, new smart contracts are deployed at runtime.
These contracts are not known in advance, so they cannot be declared statically
in the manifest file. To support this, subgraphs allow the use of dynamic data
sources.
Dynamic data sources are created at runtime through templates. When a subgraph
encounters a specific event, it can instantiate a new data source to begin
listening to events from a new contract.
For example, a factory contract may emit a `ProfileDeployed` event each time a
new profile contract is created. The subgraph listens to this factory and, upon
detecting the event, dynamically spawns a new data source for the deployed
profile contract.
```ts
import { ProfileTemplate } from "../generated/templates";
import { ProfileDeployed } from "../generated/Factory/Factory";

export function handleProfileDeployed(event: ProfileDeployed): void {
  // Start indexing the newly deployed contract at the address
  // carried by the factory event
  ProfileTemplate.create(event.params.newContract);
}
```
The template must be defined in the `subgraph.yaml` file:
```yaml
templates:
  - name: ProfileTemplate
    kind: ethereum/contract
    network: mainnet
    source:
      abi: Profile
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Profile
      abis:
        - name: Profile
          file: ./abis/Profile.json
      eventHandlers:
        - event: ProfileUpdated(indexed address,string)
          handler: handleProfileUpdated
      file: ./src/profile-mapping.ts
```
This pattern is commonly used in factory-based architectures such as token
vaults, NFT launchpads, or DeFi protocols.
## Indexing multiple contracts
Subgraphs can index multiple contracts by declaring multiple data sources in the
manifest. Each data source can monitor a different contract, respond to
different events, and use distinct handlers.
Example use case:
* One contract manages token minting
* Another manages metadata updates
* A third handles permissions
Each contract can be added with its own `dataSource` entry, ABI, and event
handlers.
```yaml
dataSources:
  - kind: ethereum
    name: TokenContract
    source:
      address: "0x..."
      abi: Token
    mapping: ...
  - kind: ethereum
    name: MetadataContract
    source:
      address: "0x..."
      abi: Metadata
    mapping: ...
```
This approach allows modular indexing and accommodates complex systems where
state is distributed across multiple contracts.
## Subgraph templating and reuse
Subgraph templates promote reuse across similar deployments. Developers can
publish and share subgraph configurations for common contract standards like
ERC20, ERC721, or governance protocols.
Reusable subgraphs help accelerate development for ecosystems like:
* NFT marketplaces
* DAO voting systems
* Liquidity pools
These templates abstract the contract interactions and provide ready-made
GraphQL APIs. Developers only need to supply addresses and optional
configuration overrides.
Templates can be versioned and imported using packages or remote includes via
IPFS. This enables teams to collaborate and standardize data access across
multiple dApps or sub-networks.
## Use cases enabled by subgraphs
Subgraphs are a critical backend component in many types of decentralized
applications. Some common use cases include:
* Real-time token balances and holder snapshots
* Historical trade activity for decentralized exchanges
* NFT ownership and metadata browsing
* DAO governance participation, proposals, and vote history
* Streaming payments and claimable rewards tracking
* Cross-chain bridge activity and relay verification
* Portfolio analytics for wallet dashboards
* Decentralized identity registries and attestations
* Event-driven notification systems and protocol health monitors
By exposing data through a flexible and reliable GraphQL API, subgraphs allow
developers to build rich user experiences without overloading blockchain nodes
or dealing with raw logs.
Subgraphs also improve security and decentralization by removing the need for
proprietary or centralized data APIs.
## Community and ecosystem
The Graph ecosystem includes developers, indexers, curators, and contributors
building tools and maintaining subgraphs across major chains. Popular subgraphs
are used by Uniswap, Aave, Balancer, ENS, and many other leading protocols.
Developers can contribute by:
* Publishing subgraphs to Graph Explorer or Subgraph Studio
* Signaling high-quality subgraphs to support indexing on the network
* Writing custom handlers for niche protocols or contract types
* Sharing templates and tutorials for emerging standards
The community maintains libraries, tutorials, and starter kits to help
developers bootstrap new subgraphs quickly and follow best practices.
## Best practices for subgraph development
Subgraph development benefits from a consistent, modular, and testable approach.
Following best practices ensures high performance, clarity, and reliability over
time.
* Use consistent entity naming and schema versioning. Entities should be named in
singular form and grouped logically. Fields should be explicit, typed, and avoid
ambiguous names.
* Always define a unique and deterministic ID for each entity. Event parameters
such as transaction hash, log index, or custom identifiers can be combined to
avoid duplication.
* Avoid using optional or nullable fields when not necessary. A well-defined
schema allows for cleaner queries and better indexing performance.
* Batch updates and reduce redundant writes to the store. Saving an entity after
each minor change can increase load and indexing time. Perform calculations in
memory before saving.
* Use event-level filtering logic in mappings to avoid unnecessary processing.
Only create or update entities when specific conditions are met.
* Comment your mapping logic to explain transformations. AssemblyScript is
  unfamiliar to many developers, and clear logic helps with collaboration.
* Keep mapping logic pure and deterministic. Avoid side effects or state-dependent
behavior that relies on prior events. Subgraphs must be replayable and
idempotent.
* Regenerate types with `graph codegen` after every schema or ABI change. This
prevents runtime errors and ensures that mappings are aligned with the current
data model.
* Use `@derivedFrom` carefully. Derived fields must not be relied on for event
handling, as they are computed post-indexing and cannot trigger side effects.
* Test subgraphs locally using Ganache or Hardhat with a forked mainnet. Emit
sample events and ensure that the Graph node indexes the subgraph correctly and
returns accurate results.
## Known limitations and constraints
While powerful, subgraphs come with design limitations developers must account
for.
* Subgraphs are read-only and cannot write to the blockchain or trigger
transactions. They cannot send messages or invoke contract methods.
* Mappings are sandboxed and cannot perform asynchronous operations, call external
APIs, or access off-chain state.
* Cross-contract reads are limited. Handlers primarily receive event data;
  reading additional contract state requires explicit calls through generated
  bindings, which are slow and best kept to a minimum.
* There is no native support for joins in GraphQL queries. Relationships must be
explicitly defined in the schema and managed through entity linking.
* Historical contract state prior to deployment of the subgraph is not available
unless explicitly emitted and indexed.
* Block-by-block tracking is only supported using block handlers, which can be
computationally expensive and should be used sparingly.
* There is no persistent cache between handler calls. All context must be passed
through the event or reconstructed from existing entities.
* Debugging AssemblyScript has limited IDE and debugger support compared to
  mainstream languages. Logging and replaying test events is often necessary to
  trace logic issues.
## Performance tuning and scaling
Subgraphs can be optimized for speed and scalability through a series of
techniques.
* Set a meaningful `startBlock` to avoid indexing irrelevant history. Use the
deployment block of the contract or the earliest event of interest.
* Avoid storing large text strings or unnecessary data. Use IPFS hashes for
content and load media off-chain when needed.
* Use indexed fields for `orderBy` and `where` filters. This makes queries more
efficient and improves response time.
* Handle large event volumes by minimizing computations in each handler. Avoid
loops over arrays when working with block or transaction metadata.
* Group entity updates together and reduce entity cardinality when possible.
Instead of creating a new entity per interaction, consider updating an
aggregate counter or history record.
* Use Graph Studio and Explorer tools to monitor indexing speed, handler
runtimes, and potential performance bottlenecks.
* Use dynamic data sources only when necessary. Static indexing is faster and
simpler when contract addresses are known.
## Future of subgraphs and The Graph protocol
The Graph protocol is evolving to support more chains, richer indexing features,
and increased decentralization. The Graph Network is live and continues to
expand its capabilities.
Support for non-EVM chains such as Solana, NEAR, and Cosmos continues to
expand, enabling cross-chain indexing and unified data layers across
ecosystems. Improvements to GraphQL querying, pagination, and derived
field performance are underway. This will make front-end development faster and
more flexible.
Zero-knowledge proofs and cryptographic verification of data integrity are being
explored to ensure trustless querying at scale. Custom indexers and modular
handlers are being developed to allow alternative indexing logic and
chain-specific plugins. Subgraph Studio is expected to offer improved debugging,
real-time monitoring, and better multi-environment management tools for
developers.
The subgraph ecosystem will continue to grow as more protocols adopt The Graph
as their indexing standard. Curated registries, version control, and marketplace
discovery will be enhanced to make data services more accessible and
sustainable.

Subgraphs are a powerful abstraction that bridges on-chain data with
off-chain usability. They allow developers to expose rich, queryable APIs from
blockchain events with minimal overhead and maximum flexibility.
By defining clear schemas, mapping contract events, and deploying to a
decentralized network of indexers, developers create robust backends that scale
with their applications. Subgraphs are an essential part of Web3 infrastructure,
enabling wallets, dashboards, marketplaces, and governance platforms to deliver
fast and reliable data to users. Mastering their architecture and development
process will remain a vital skill for anyone building the decentralized
internet.
file: ./content/docs/knowledge-bank/supply-chain.mdx
meta: {
"title": "Supply chain use cases",
"label": "Blockchain use cases in supply chain management",
"description": "Comprehensive guide to blockchain applications in global supply chains, covering traceability, logistics, procurement, compliance, and resilience"
}
## Introduction to blockchain in supply chain management
Global supply chains are complex networks involving manufacturers, suppliers, logistics providers, financial institutions, customs agencies, and end customers. These ecosystems span geographies, languages, and legal systems. As supply chains grow in complexity, they face increasing challenges around transparency, efficiency, fraud prevention, and real-time visibility.
Traditional supply chain systems are often siloed, paper-driven, and dependent on centralized intermediaries. These limitations hinder trust between parties, delay resolution of disputes, and prevent accurate tracking of materials and goods.
Blockchain introduces a decentralized infrastructure that enables shared, tamper-evident records across all stakeholders. Each event—such as shipping, certification, inspection, payment, or customs clearance—can be recorded on-chain, providing a single source of truth accessible to authorized parties.
By enabling auditable data, programmable workflows, and verifiable identities, blockchain has the potential to transform the way supply chains operate—from raw material sourcing to last-mile delivery.
## Benefits of blockchain in supply chain ecosystems
Blockchain brings a set of capabilities uniquely suited to solve common supply chain challenges:
* Immutable, timestamped records of transactions, movements, and ownership changes
* Multi-party data sharing without exposing confidential internal systems
* Smart contract automation of payments, compliance checks, and service-level agreements
* Provenance tracking and material authenticity for quality control and certification
* Reduced reliance on intermediaries for coordination, dispute resolution, and enforcement
These benefits translate into improved traceability, faster dispute resolution, lower costs, and increased customer confidence.
## Traceability from origin to consumption
End-to-end traceability is a cornerstone of supply chain reliability. Businesses and regulators increasingly demand proof of where a product came from, what processes it underwent, and whether it met required standards. Blockchain provides a verifiable audit trail for every stage of a product’s lifecycle.
Key features of blockchain-based traceability include:
* Unique digital IDs for raw materials, components, and finished goods
* On-chain logging of transformations, shipments, and handling
* Cryptographic signatures from suppliers, labs, or certifiers
* Public or permissioned access to traceability data by auditors and customers
Example:
* A chocolate manufacturer logs each cocoa shipment on blockchain with origin, farm ID, and batch code
* As the cocoa is roasted, blended, and packaged, each step is recorded and linked to the final product
* Retailers and consumers scan the packaging to view the full supply chain, including sustainability claims
This level of traceability improves food safety, supports ethical sourcing, and helps companies meet compliance obligations across geographies.
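A minimal sketch of what one traceability event could look like as a data
structure; the field names are illustrative, not a standard:

```ts
// One on-chain traceability event for a batch of goods.
type TraceEvent = {
  batchId: string; // unique digital ID assigned at origin
  step: "harvest" | "process" | "package" | "ship" | "receive";
  actor: string; // address or decentralized identifier of the signing party
  timestamp: number; // unix seconds
  evidenceHash: string; // hash of off-chain evidence such as a lab report
};

// A product's provenance is the ordered list of its events.
type Provenance = TraceEvent[];
```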
## Anti-counterfeiting and product authentication
Counterfeit goods are a global problem affecting pharmaceuticals, electronics, luxury items, and industrial components. Blockchain allows manufacturers to issue digital certificates of authenticity that can be verified at any point in the supply chain.
Blockchain supports product authentication by:
* Assigning tamper-resistant digital identities to each product unit
* Allowing customers or partners to verify authenticity using mobile apps
* Recording every handoff between suppliers, distributors, and retailers
* Detecting duplicate or mismatched entries that indicate counterfeit activity
Example:
* A medical device manufacturer embeds a QR code on each unit, linked to a blockchain entry
* Distributors, hospitals, and regulators can scan and verify product origin and batch information
* If a counterfeit unit enters the market, it lacks a valid blockchain record and is flagged automatically
This protects brand reputation, reduces liability, and increases buyer trust—especially in regulated or high-value industries.
## Inventory visibility and logistics tracking
Supply chains often suffer from poor visibility into inventory levels, shipment location, and logistics handoffs. Delays, theft, and misrouting are common when multiple parties rely on disconnected systems. Blockchain improves logistics coordination through real-time, shared records of asset movement.
Logistics tracking on blockchain includes:
* Recording departure, arrival, and transit events with timestamps and geolocation
* Linking each shipment to its bill of lading, invoices, and customs forms
* Updating shipment status through IoT sensors or mobile scanning
* Providing a tamper-evident log accessible to all stakeholders
Example:
* A high-value electronics shipment is tracked from factory to warehouse to retailer
* Each leg of the journey is confirmed on blockchain, along with temperature and vibration readings from onboard sensors
* If a delay or tampering is detected, automated alerts are sent and insurance contracts are triggered
By replacing emails and phone calls with verifiable data, blockchain improves delivery reliability, reduces insurance claims, and optimizes fleet utilization.
## Smart contracts for procurement and payments
Procurement involves multiple layers of approvals, verifications, and invoice processing. Delays in payments or errors in contract enforcement can strain supplier relationships. Smart contracts on blockchain automate these processes by encoding terms and executing them automatically when conditions are met.
Smart procurement applications include:
* Supplier onboarding with KYC and contract registration
* Automatic invoice generation upon goods delivery
* Multi-tier approvals and conditional payment triggers
* Volume-based discounts or penalty clauses enforced on-chain
Example:
* A retailer agrees to pay a food supplier within three days of successful delivery and inspection
* The smart contract releases funds automatically once GPS and temperature data confirm compliance
* In case of non-compliance, penalties are calculated and deducted before payment
This reduces processing time, increases trust, and provides a transparent audit trail for both buyers and suppliers.
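A rough sketch of the settlement rule described above; the thresholds, penalty
rates, and field names are invented for illustration:

```ts
// Delivery facts confirmed by oracles (GPS, IoT sensors, inspection report).
interface DeliveryReport {
  deliveredOnTime: boolean; // arrival within the agreed window
  maxTempCelsius: number; // peak temperature recorded in transit
  inspectionPassed: boolean;
}

// Returns the amount to release to the supplier after penalties.
function settlementAmount(
  invoiceAmount: number,
  report: DeliveryReport,
  tempLimitCelsius: number
): number {
  if (!report.inspectionPassed) return 0; // funds stay escrowed
  let penalty = 0;
  if (!report.deliveredOnTime) penalty += invoiceAmount * 0.02; // 2% late fee
  const excessDegrees = Math.max(0, report.maxTempCelsius - tempLimitCelsius);
  penalty += invoiceAmount * 0.05 * excessDegrees; // 5% per degree over limit
  return Math.max(0, invoiceAmount - penalty);
}
```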
## Compliance and regulatory reporting
Many supply chains are subject to environmental, safety, labor, and trade regulations. Demonstrating compliance typically involves collecting certifications, inspection records, and documentation across multiple suppliers and jurisdictions. Blockchain simplifies compliance by ensuring records are immutable, time-stamped, and accessible to auditors.
Blockchain-based compliance systems offer:
* Digital storage of certifications such as RoHS, REACH, ISO, or fair-trade labels
* Timestamped proof of inspections, tests, and training events
* Smart contract validation of compliance before processing orders
* Regulator dashboards for real-time audit access
Example:
* An apparel brand sources organic cotton and needs to validate that farms meet sustainability standards
* Third-party certifiers upload audit results and GPS-verified field data to blockchain
* Suppliers must pass compliance checks before orders are fulfilled
* Brands use blockchain data for annual ESG disclosures and investor reporting
This reduces the risk of non-compliance penalties, supports certification credibility, and saves time during audits.
## Cold chain and condition-sensitive logistics
Transporting perishable goods such as pharmaceuticals, food, and chemicals requires strict control of temperature, humidity, and handling conditions. Any deviation can spoil the product or invalidate safety certifications. Blockchain, combined with IoT sensors, ensures that all condition-sensitive events are logged and verified.
Cold chain use cases include:
* IoT-based temperature and humidity monitoring throughout transit
* Smart contract alerts triggered by threshold violations
* Real-time data access for logistics partners, insurers, and receivers
* Immutable logs for quality assurance and dispute resolution
Example:
* A vaccine shipment is monitored for temperature throughout international transport
* Any excursion beyond the allowed range is recorded and alerts are sent
* The receiving clinic checks the blockchain record before accepting or rejecting the shipment
* If rejected, a smart contract claims insurance and initiates replacement logistics
Blockchain makes cold chain logistics more transparent, accountable, and compliant with health and safety standards.
## Ethical sourcing and sustainability verification
Consumers and regulators increasingly demand evidence that goods are sourced ethically, produced sustainably, and comply with environmental or social standards. Blockchain enables traceability of sustainability data and third-party certifications throughout the supply chain.
Applications include:
* Logging carbon emissions, water usage, or waste at each production stage
* Verifying renewable energy usage or low-impact materials
* Recording fair labor compliance and community engagement
* Connecting product SKUs to full sustainability metadata
Example:
* A coffee company logs each harvest, washing, and transport stage with sustainability metrics
* Blockchain records include farm practices, labor treatment, and deforestation risk
* Retailers and consumers can view this data via a QR scan on the final product
* Investors and regulators use the same data for sustainability reports
This helps companies meet ESG goals, attract responsible investors, and build brand loyalty among ethical consumers.
## Multi-tier supplier management
Large manufacturers often rely on suppliers several tiers removed from their operations. Lack of visibility into Tier 2 and Tier 3 suppliers leads to risks around quality, capacity, and compliance. Blockchain provides a way to register and monitor multiple supplier tiers through a shared, permissioned ledger.
Features of multi-tier supply chain visibility include:
* Supplier registration with identity and capability verification
* Event-driven logging of subcontracted activities
* Dynamic tracking of material flow through each supply layer
* Permission-based access for OEMs and auditors
Example:
* An automotive company wants to trace semiconductors used in its electric vehicles
* The Tier 1 supplier sources chips from Tier 2 foundries, who source silicon from Tier 3 refiners
* Each tier registers their activity on blockchain, enabling full traceability of inputs
* If a recall is needed, the OEM can identify affected batches and root causes instantly
This approach reduces risk, increases supply chain resilience, and supports responsible sourcing across distributed production ecosystems.
## Supply chain financing and working capital optimization
Access to working capital is a persistent challenge for small and mid-size suppliers in global supply chains. Delays in invoice payments or unclear contract performance often prevent them from accessing affordable credit. Blockchain improves supply chain financing by providing verified, real-time data about deliveries, service milestones, and fulfillment status.
Applications of blockchain in supplier finance include:
* Tokenization of invoices based on confirmed delivery events
* Smart contracts that trigger financing eligibility upon verification
* Credit scoring algorithms using on-chain performance history
* Peer-to-peer finance marketplaces for invoice-backed loans
Example:
* A textile supplier delivers fabric to a fashion brand and logs delivery confirmation on blockchain
* The confirmed record is used to tokenize the invoice and post it on a finance platform
* A financial institution buys the invoice at a discount and is repaid automatically when the buyer releases funds
* The supplier receives instant liquidity without waiting for 60-day payment terms
Blockchain reduces lending risk by enabling trustworthy data, expanding financing access, and increasing liquidity across the supply chain.
## Cargo insurance and automated claims
Cargo insurance covers goods in transit against risks like damage, theft, or delay. However, processing claims is often slow and disputes are common due to incomplete documentation or ambiguous responsibilities. Blockchain streamlines insurance by linking verified supply chain events with smart contract logic.
Use cases include:
* Digital policies with condition-based triggers
* On-chain logging of sensor data from insured shipments
* Automated claim validation using smart contracts and oracles
* Immutable records of carrier handoffs and delivery exceptions
Example:
* A shipment of chemicals is insured for temperature-related spoilage
* During transit, an IoT sensor records a breach of acceptable temperature
* The event is logged to the blockchain and triggers the insurance smart contract
* Based on predefined rules, a payout is processed and delivered without manual claim submission
This enhances insurer transparency, reduces fraud, and provides faster relief to affected parties — while lowering operational costs for underwriters.
## Customs clearance and cross-border trade
International trade requires goods to pass through customs and border protection agencies, often involving manual documentation, delayed approvals, and risk of corruption. Blockchain enables real-time, verifiable exchange of shipping, inspection, and compliance data to accelerate customs processing.
Blockchain customs applications include:
* Digital bills of lading, packing lists, and invoices stored immutably
* Permissioned access for customs authorities in importing and exporting countries
* Smart contracts that validate compliance with origin and tariff rules
* Traceability of goods to verify sanctions, restrictions, or quotas
Example:
* An electronics manufacturer ships goods from Korea to Germany
* All export and import documents are registered on blockchain, accessible to customs in both countries
* The receiving customs officer verifies that origin certifications, safety checks, and tariff classifications are valid
* Clearance is granted instantly without manual verification or paper forms
Blockchain reduces customs delays, enhances security, and enables interoperable trade ecosystems aligned with digital trade agreements.
## Product recalls and quality incident response
When a defective or unsafe product reaches consumers, recalls must be executed quickly and precisely. Traditional systems struggle to identify affected batches, trace shipments, or coordinate notifications. Blockchain enhances recall management through immutable product histories and real-time stakeholder access.
Blockchain recall systems provide:
* Batch-level traceability from factory to point of sale
* Real-time identification of affected goods and delivery locations
* Smart contract-based trigger of distributor notifications and refunds
* Audit logs of response times, corrective actions, and regulatory reporting
Example:
* A food manufacturer identifies contamination in a specific production lot
* The lot code is mapped on-chain to all retailers and logistics partners that received it
* A recall smart contract notifies affected parties, halts sales, and processes refunds to customers
* Regulators access a full timeline of events and company actions
This system limits brand damage, accelerates consumer safety actions, and improves regulatory compliance through full supply chain visibility.
## Supplier onboarding and verification
Global supply chains often involve onboarding new suppliers for raw materials, packaging, services, or logistics. Verifying legitimacy, credentials, and performance history is time-consuming and error-prone. Blockchain creates a decentralized supplier registry that simplifies onboarding and reduces fraud.
Features include:
* On-chain registration of supplier identity, certifications, and performance metrics
* Access-controlled sharing of sensitive documentation
* Timestamped records of KYC, audits, and blacklist checks
* Smart contract validation of eligibility for procurement events
Example:
* A large retailer sources new suppliers for sustainable packaging
* Each candidate uploads proof of environmental certifications and past contract references to the blockchain
* Procurement officers view and compare verified profiles without needing to contact third parties
* When selected, the supplier's compliance and payment terms are enforced through smart contracts
Blockchain builds trust across new partnerships and reduces time-to-contract while preserving auditability.
## Multi-modal and last-mile delivery coordination
Delivering goods involves multiple transport modes including sea, air, rail, and road. Coordinating between carriers, warehouses, and retailers introduces complexity, especially in last-mile delivery. Blockchain creates a unified platform for tracking goods and managing dynamic delivery routes.
Use cases:
* Shared visibility across ocean freight, port terminals, inland logistics, and couriers
* Smart contract handoffs at mode transitions with condition verification
* On-chain confirmation of proof-of-delivery (POD) and signature
* Dynamic rerouting based on real-time delivery data and constraints
Example:
* A shipment of electronics arrives at a port and is transferred to a rail operator
* Blockchain records each transfer and verifies container seal integrity
* Upon last-mile delivery, a mobile app logs recipient signature and GPS timestamp to blockchain
* If delays or deviations occur, stakeholders are notified instantly and can adapt routing
This reduces handoff errors, improves delivery KPIs, and supports end-to-end performance optimization.
## Freight booking, scheduling, and asset utilization
Shippers, freight forwarders, and carriers often rely on separate platforms to book cargo, schedule pickups, and assign capacity. This fragmentation leads to inefficiencies and unused space. Blockchain enables decentralized freight marketplaces and real-time scheduling coordination.
Applications include:
* Smart contract-based freight bidding and auction mechanisms
* On-chain booking calendars linked to carrier availability
* Incentive systems for full-load optimization and return-trip planning
* Verifiable service records for carriers and drivers
Example:
* A manufacturer posts a time-sensitive cargo job on a blockchain freight platform
* Verified carriers bid for the job, with smart contracts enforcing pickup time, price, and delivery conditions
* GPS and sensor data feed into the contract, and payment is released upon validated delivery
* Carrier ratings and on-time history are updated automatically on-chain
This improves operational agility, lowers transport costs, and increases fleet utilization across fragmented networks.
## Packaging reuse, pallet pooling, and return logistics
Reusable packaging such as crates, pallets, and containers reduces environmental impact but is difficult to track and manage. Blockchain provides a ledger to monitor asset ownership, location, and usage history, supporting circular logistics models.
Use cases:
* Tokenization of packaging assets for tracking and accountability
* Shared logistics models where returnable assets are exchanged between participants
* Smart contracts to charge deposit, usage, or damage fees
* Integration with reverse logistics and recycling flows
Example:
* A beverage company distributes drinks in reusable crates
* Each crate is tagged and logged on blockchain with delivery, return, and cleaning cycles
* Retailers receive tokens as deposit, returned when crates are scanned and verified
* Damage or loss triggers automatic fines, and reuse rates are published for ESG tracking
Blockchain makes reuse systems scalable, financially viable, and trustworthy for all stakeholders.
## Digitization of trade documents
Global trade requires a significant number of documents including bills of lading, letters of credit, invoices, packing lists, and inspection reports. These documents are often printed, couriered, or emailed—causing delays, errors, and security risks. Blockchain digitizes and secures these documents for instant, verified access.
Document digitization features:
* Issuance of digital bills of lading with transfer history on-chain
* Anchoring of scanned PDFs or XML to tamper-proof hashes
* Access control policies for banks, freight forwarders, and customs
* Smart contract conditions for releasing payments or goods
Example:
* A cargo shipment is issued a digital bill of lading by the shipping line
* The document is signed by origin authorities, approved by the bank, and transferred to the buyer on delivery
* Each action is recorded on blockchain, eliminating paper trails and manual delays
* A letter of credit is executed via smart contract once delivery is confirmed and inspection reports match terms
This reduces fraud, speeds up trade cycles, and aligns with international digital trade frameworks.
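A sketch of the hash-anchoring step mentioned above: only a fingerprint of the
document goes on-chain, so the file itself stays private. This uses the Node.js
standard library; the file path is a placeholder:

```ts
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Fingerprint a trade document; the hash can be anchored on-chain and later
// recomputed to prove the file was not altered.
function documentFingerprint(path: string): string {
  const bytes = readFileSync(path);
  return "0x" + createHash("sha256").update(bytes).digest("hex");
}

console.log(documentFingerprint("./bill-of-lading.pdf")); // placeholder path
```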
## Integrated supply chain dashboards and analytics
With blockchain providing structured, consistent, and verifiable data across supply chain processes, businesses can build real-time dashboards that go beyond traditional ERP systems. These dashboards provide visibility, risk monitoring, and predictive analytics.
Dashboards can include:
* Shipment status, temperature, and route analytics from on-chain IoT data
* Supplier performance, delay history, and compliance flags
* Financial exposure based on tokenized invoices and receivables
* Carbon footprint and sustainability scorecards based on full chain emissions
Example:
* A global food company monitors its key ingredient supply chain in real time
* Dashboards track shipment delays, supplier disruptions, and inventory shortages
* Predictive models suggest when to reorder, switch suppliers, or adjust production
* On-chain data improves planning accuracy and reduces response times during crises
Blockchain serves as the backbone for intelligent, resilient, and data-driven supply chains.
## Port operations and warehouse automation
Ports and warehouses are pivotal nodes in the global supply chain. Congestion, miscommunication, and manual document handling at these points lead to delays, revenue loss, and inventory mismanagement. Blockchain enhances port and warehouse operations by improving coordination, automating workflows, and offering shared visibility across stakeholders.
Use cases include:
* Smart gate access with digital vehicle and cargo authorization
* On-chain record of warehouse receipts and proof of inventory movements
* Automation of loading, unloading, and storage workflows via smart contracts
* Real-time dashboards for port authorities, customs, carriers, and freight forwarders
Example:
* A shipping container arrives at a port and is scanned upon entry
* The digital bill of lading, customs declaration, and warehouse slot allocation are all linked on blockchain
* The container is routed, stored, and scheduled for release through a series of smart contracts that verify readiness and payment
* All actions, delays, and handling notes are recorded and accessible to approved parties
Blockchain reduces manual handoffs, enables just-in-time cargo management, and fosters accountability in high-volume logistics zones.
## Industry-specific use cases: pharmaceuticals
Pharmaceutical supply chains demand high precision, traceability, and compliance with global health regulations. Temperature excursions, counterfeiting, and non-compliant handling can threaten patient safety. Blockchain delivers tamper-proof traceability, real-time monitoring, and compliance reporting.
Applications in pharma:
* Serialization of each drug unit or batch with on-chain authentication
* Cold chain compliance monitored via IoT and logged immutably
* Certification of each manufacturing, shipping, and handling step
* Smart recall systems and regulatory access dashboards
Example:
* A vaccine batch is produced and assigned a digital identity on blockchain
* Each step of shipment—packing, storage, airport transfer, customs, and delivery—is logged with IoT telemetry and timestamps
* Pharmacists scan and verify the product before administering
* If a recall is issued, all stakeholders are instantly notified with affected units identified by batch and distribution point
Blockchain ensures regulatory alignment, improves safety, and boosts public confidence in life-saving pharmaceuticals.
## Industry-specific use cases: food and agriculture
Food supply chains face pressure to ensure freshness, safety, and ethical sourcing while minimizing waste and fraud. Blockchain enables full traceability from farm to fork, automates compliance with food safety laws, and supports certification of origin and organic standards.
Applications include:
* On-chain logging of farm practices, harvest dates, and storage conditions
* Certification of organic or fair-trade status by verified third parties
* Cold chain tracking for perishable items
* Expiry-based smart contracts for automatic recalls or price reductions
Example:
* A batch of mangoes is harvested, sorted, and shipped with blockchain-registered tags
* Each shipment includes pesticide test results, refrigeration logs, and shipping metadata
* Upon reaching the supermarket, staff verify freshness and storage compliance via mobile apps
* In case of contamination, affected batches are quickly identified and removed from shelves
Blockchain reduces waste, supports food security, and enhances brand trust in competitive markets.
## Industry-specific use cases: fashion and apparel
Fashion brands face reputational risks tied to labor exploitation, environmental impact, and fast-moving global supply chains. Blockchain offers provenance tracking, ethical sourcing transparency, and lifecycle documentation for apparel products.
Use cases include:
* Digital product passports from raw material to retail
* Verification of certifications such as GOTS, BCI, or carbon-neutral sourcing
* Tracking of factory conditions, subcontracting, and inspection history
* Integration with resale and recycling platforms for circular fashion
Example:
* A designer brand creates a digital twin for each item of clothing it produces
* The blockchain record includes material source, labor conditions, and environmental impact data
* Customers scan a QR code on the tag to see full product provenance and sustainability score
* When the product is resold or recycled, those events are recorded, completing a circular record
Blockchain helps fashion brands build credibility in sustainability while creating new channels for consumer engagement and loyalty.
## Risk management and contingency planning
Supply chains face disruptions from weather, geopolitical conflict, labor strikes, and pandemics. Traditional systems struggle to adapt due to opaque processes and delayed information flow. Blockchain provides real-time risk monitoring and contingency execution.
Applications in risk management:
* Distributed event logging from on-ground partners (e.g., port shutdowns, accidents)
* Smart contracts that activate alternative routing or supplier options
* Performance logs for resilience scoring and supplier redundancy planning
* Risk-sharing contracts with dynamic insurance coverage and pooled reserves
Example:
* A key shipping route is disrupted due to a geopolitical conflict
* A smart contract evaluates risk exposure and triggers re-routing to a secondary supplier
* Inventory stock levels and transit delays are updated in a dashboard shared with operations, procurement, and finance teams
* Insurance payouts and penalty waivers are triggered where thresholds are breached
Blockchain allows faster, data-driven responses to uncertainty and creates more adaptive supply chain networks.
## Supplier diversity and inclusion tracking
Large corporations are increasingly required to meet targets for supplier diversity, such as engaging with minority-owned, women-owned, or small local businesses. Blockchain helps companies document and verify diversity metrics without manual oversight.
Blockchain supports diversity initiatives through:
* Verified registry of supplier certifications (e.g., MWBE, LGBTBE)
* On-chain tracking of purchase orders and invoice volume across diverse vendors
* Smart contract-enforced allocation quotas or bidding preferences
* Audit-ready reporting for ESG disclosures and compliance
Example:
* A government agency sets a 30 percent procurement goal for small and minority-owned businesses
* All supplier profiles and contracts are logged on blockchain with diversity certification
* Monthly analytics track spend percentages, fulfillment rates, and vendor performance
* Reports are submitted automatically to oversight boards with data transparency
Blockchain improves accountability, reduces tokenism, and helps expand economic opportunity within global sourcing strategies.
## Product lifecycle tracking and circular economy models
As sustainability becomes central to business models, companies are shifting from linear (produce-use-dispose) to circular (reuse-recycle-regenerate) approaches. Blockchain enables tracking of products beyond initial sale into repair, resale, and recycling.
Applications include:
* Assigning persistent digital IDs to products for tracking over time
* Recording repair, refurbishment, and resale events on-chain
* Tokenizing returns or recycling incentives for customers
* Measuring and verifying extended product usage and impact
Example:
* An electronics company tracks each laptop from assembly to customer delivery
* Repairs at service centers, battery replacements, and trade-ins are logged to the product’s digital ID
* Once recycled, valuable metals and components are traced and remanufactured
* Customers receive loyalty tokens for sustainable behavior, linked to product lifecycle
This enables compliance with circular economy legislation, reduces e-waste, and strengthens long-term relationships with environmentally conscious customers.
## Supplier collaboration and innovation
Innovation in supply chains often depends on strong relationships between buyers and suppliers. However, IP protection concerns, coordination barriers, and delayed payments hinder co-innovation. Blockchain fosters collaboration with secure data sharing, shared rewards, and traceable contributions.
Applications include:
* Secure upload of supplier prototypes, design iterations, and test results
* Timestamped attribution of innovation to specific partners
* Royalty sharing through programmable smart contracts
* Joint innovation challenges with voting and reward distribution on-chain
Example:
* A consumer electronics firm runs an open call for component innovations among its supplier base
* Submissions are logged with contributor identity and encrypted designs
* Voters assess the best solution, and the smart contract disburses funds and recognition
* If the design becomes a commercial product, downstream sales trigger royalty payments
Blockchain aligns incentives, protects IP, and opens new innovation models in competitive supply chains.
## AI integration and data integrity for forecasting
AI and machine learning are increasingly used in supply chain planning for demand forecasting, pricing optimization, and route planning. However, these models rely on high-quality, trustworthy data. Blockchain ensures that the data feeding AI systems is tamper-proof and transparently sourced.
Benefits include:
* Trusted data pipelines from verified sensors, partners, and processes
* Model inputs and predictions traceable to specific datasets and timestamps
* Auditable histories of model training data for regulatory compliance
* Shared learning models governed through DAO-based data cooperatives
Example:
* A food distributor uses AI to predict seasonal demand for fresh produce
* Blockchain ensures that delivery, weather, and consumption data are accurate and unaltered
* The AI model outputs are visible to supply chain managers and linked to smart contracts for procurement
* In case of anomalies, the data trail can be examined for integrity and source credibility
Blockchain improves explainability, fairness, and auditability of AI in complex, data-rich supply chains.
## Smart labeling and interactive packaging
Physical products can be linked to their digital records using smart labels, enabling end users to verify authenticity, origin, and lifecycle information. Blockchain enhances this capability by storing immutable metadata that is accessible via QR codes, NFC tags, or RFID chips on the product itself.
Use cases include:
* Packaging that links to tamper-proof blockchain history
* Real-time updates on sourcing, delivery, and certifications
* Integration with consumer-facing apps for authenticity and sustainability info
* Engagement tools such as reward redemption and resale verification
Example:
* A wine bottle carries a QR code that links to a blockchain record of grape origin, vineyard processing, shipping, and bottling details
* Consumers scan the label to verify temperature compliance during transport and explore tasting notes and vintage data
* Resellers verify provenance and validate that the bottle was not opened or tampered with
This approach enhances brand engagement, combats counterfeiting, and supports digital twin strategies for physical goods.
## Implementation models and deployment frameworks
Blockchain implementation in supply chains requires careful planning and integration with existing IT systems, operational processes, and partner ecosystems. There is no one-size-fits-all approach, but common deployment models include:
### Private or consortium blockchains
* Used among a closed group of stakeholders such as manufacturers, suppliers, and logistics providers
* Offers control over participation, data visibility, and performance tuning
* Common platforms: Hyperledger Fabric, Quorum, Corda
### Public-permissioned blockchains
* Enable broader visibility while restricting write permissions to verified entities
* Ideal for scenarios involving regulators, certifiers, or consumers
* Example platforms: Polygon, Avalanche, Hedera, LACChain
### Public blockchains
* Provide full transparency and immutability for applications requiring open access
* Suitable for consumer verification, decentralized trade, or open marketplaces
* Common choices: Ethereum, Tezos, Arbitrum
Factors to consider in implementation:
* Interoperability with ERP, WMS, TMS, and IoT platforms
* Data privacy policies, compliance requirements, and user roles
* Onboarding and training for suppliers, inspectors, and internal teams
* Integration with smart contracts, wallets, and analytics systems
Phased rollouts often start with pilot use cases, followed by multi-node expansion, middleware deployment, and eventually full enterprise integration.
## Interoperability and standards
Supply chains span geographies, legal frameworks, and technical systems. For blockchain to be effective, networks must interoperate across public and private chains, industry consortia, and national platforms.
Key strategies include:
* Using cross-chain bridges or interoperability protocols (e.g., Polkadot, Cosmos, Chainlink CCIP)
* Adopting data standards like GS1, EPCIS, and UN/CEFACT for structured messaging
* Leveraging verifiable credentials and decentralized identifiers (DIDs) for entity verification
* Designing APIs and SDKs for seamless application integration
Example:
* A textile exporter in India uses a local blockchain to register compliance and production data
* This information is relayed to a European customs platform via a cross-chain bridge and verified by regulators in real time
* Brands, distributors, and auditors access this data through GraphQL APIs or dashboards
Standardization enables true global scalability, reduces vendor lock-in, and builds ecosystems where blockchain networks work together instead of in silos.
## Digital governance and consortium coordination
Blockchain introduces new governance models for multi-party collaboration, where trust and rules are embedded in code. Supply chain consortia must define how decisions are made, who maintains smart contracts, and how data access is managed.
Governance models may include:
* Multi-signature approval schemes for protocol upgrades or node onboarding
* Token-weighted or stake-based voting systems for feature prioritization
* Smart contract-controlled treasuries for funding maintenance or incentives
* Arbitration protocols for handling disputes between participants
Example:
* A group of food retailers, producers, and logistics firms form a consortium to track agricultural sourcing
* Smart contracts define onboarding rules, data-sharing permissions, and voting mechanisms
* Members periodically vote on adding new certifications or changing metadata schemas
* Funding for infrastructure upgrades is automatically drawn from a pooled treasury based on vote outcomes
Digital governance ensures alignment, fairness, and adaptability in decentralized supply chain networks.
## Future outlook for blockchain in supply chain
Blockchain’s role in supply chain management will continue to expand as organizations prioritize resilience, transparency, and automation. Over the next decade, we expect to see:
* Mainstream integration with digital twins and industrial IoT for real-time visibility
* Proliferation of smart product passports aligned with sustainability and trade regulations
* Tokenization of supply chain assets including invoices, carbon credits, and raw materials
* Broader adoption of decentralized marketplaces and trustless procurement systems
* Convergence with AI for automated decision-making and exception handling
As blockchain matures and interoperability frameworks solidify, supply chains will shift from opaque, reactive systems to proactive, data-driven networks where trust is automated, and performance is optimized across every transaction.
Blockchain offers a fundamentally new architecture for trust, data sharing, and automation in global supply chains. It does not replace existing systems but augments them by creating a shared ledger of truth that spans organizations, borders, and industries.
Its impact includes:
* Enabling traceability and provenance where fraud and opacity once thrived
* Automating settlements, inspections, and compliance in real time
* Empowering consumers, regulators, and partners with trustworthy data
* Facilitating sustainable, inclusive, and ethical sourcing practices
The success of blockchain in supply chain management depends not just on technology, but on leadership, collaboration, and willingness to redefine how value is created and exchanged. With thoughtful deployment and strategic alignment, blockchain can serve as the backbone of next-generation supply chains that are secure, transparent, and built to last.
file: ./content/docs/security/application-security.mdx
meta: {
"title": "Application security"
}
Our development process integrates security at every stage. We follow best
practices and employ advanced tools to ensure the security of our applications.
## Secure software development lifecycle (sdlc)
Our SDLC incorporates security activities at each stage of development, such as
requirements gathering, design, coding, testing, and deployment.
* **Secure Coding Practices**: Promote secure coding practices within the
development team, including adhering to coding standards and conducting code
reviews.
* **Threat Modeling**: Perform threat modeling exercises to identify potential
security threats and vulnerabilities at the design stage.
* **Secure Dependencies**: Manage and update all dependencies and third-party
libraries used in the software to ensure they are free of vulnerabilities.
## Regular security testing
We conduct regular security testing throughout the development lifecycle to
identify and address potential security weaknesses.
* **Vulnerability Scanning**: Automated vulnerability scanning tools are used to
identify common vulnerabilities.
* **Penetration Testing**: Regular third-party penetration tests are conducted
to identify and remediate vulnerabilities. Our penetration testing includes
network, application, and infrastructure assessments to ensure comprehensive
coverage. SettleMint does not publicly share detailed results of network
penetration tests, but high-level summaries and compliance reports can be
provided to customers upon request.
* **Code Analysis**: Automated and manual code analysis to ensure that security
flaws are identified and addressed.
file: ./content/docs/security/compliance-and-certifications.mdx
meta: {
"title": "Compliance and certifications"
}
SettleMint is committed to maintaining compliance with industry standards and
regulations. We have obtained several certifications that demonstrate our
dedication to security and quality.
## Industry standards and certifications
We adhere to industry standards and best practices to ensure the highest level
of security.
* **ISO 27001**: Our information security management system is certified to ISO
27001 standards, ensuring a systematic approach to managing sensitive
information.
* **SOC 2 Type II**: We undergo regular SOC 2 Type II audits to ensure the
security and availability of our services. SettleMint conducts regular
internal and external audits to ensure compliance with relevant standards and
to identify areas for improvement.
* **GIA (Global Information Assurance)**: We follow GIA standards to ensure
robust information security practices.
* **COBIT (Control Objectives for Information and Related Technologies)**: Our
  adherence to COBIT standards ensures that our IT management and governance
processes are aligned with business goals and risks.
## Information security management system (isms)
SettleMint provides customers with documentation describing our Information
Security Management System (ISMS). This documentation details our security
policies, procedures, and controls, demonstrating our commitment to maintaining
a robust security framework in line with industry standards.
## Regular audits
We conduct regular internal and external audits to ensure compliance with
relevant standards and to identify areas for improvement.
* **Internal Audits**: Conducted by our internal audit team according to
industry best practices.
* **External Audits**: Conducted by independent third-party auditors to provide
an objective assessment of our security posture.
## Continuous improvement
We are committed to continuously improving our security practices to stay ahead
of emerging threats and to meet the evolving needs of our clients.
* **Security Reviews**: Regular reviews of our security policies and procedures
to ensure they are up-to-date and effective.
* **Client Feedback**: We actively seek feedback from our clients to improve our
security measures and address any concerns they may have.
file: ./content/docs/security/data-security.mdx
meta: {
"title": "Data security"
}
We employ advanced encryption techniques and data protection measures to ensure
the security of data at all times.
## Data encryption
Sensitive data is encrypted both in transit and at rest using industry-standard
encryption protocols.
* **In Transit**: Data is encrypted using TLS 1.2 or higher to protect it during
transmission.
* **At Rest**: Data is encrypted using AES-256 to ensure it remains secure when
stored.
## Data backup and recovery
Regular backups are performed, and data recovery plans are in place to ensure
quick restoration of services in the event of an incident.
* **Backup Frequency**: Backups are performed regularly to ensure that data can
be restored to a recent state.
* **Recovery Plans**: Detailed recovery plans are in place to ensure quick and
efficient restoration of services.
## Data retention and deletion
We have policies and procedures in place for data retention and secure deletion.
* **Data Retention**: Data is retained only as long as necessary for business
purposes or as required by law.
* **Secure Deletion**: Data is securely deleted when it is no longer needed,
using techniques such as degaussing and cryptographic wiping.
file: ./content/docs/security/incident-response.mdx
meta: {
"title": "Incident response"
}
We have a detailed incident response plan in place to address security incidents
promptly and effectively.
## Incident detection
Continuous monitoring and automated alerting systems are used to detect
potential security incidents.
* **Monitoring Systems**: Comprehensive monitoring systems are in place to
detect suspicious activity and potential security incidents.
* **Automated Alerts**: Automated alerting systems notify the incident response
team of potential incidents in real-time.
## Incident handling
A dedicated incident response team is available 24/7 to handle security
incidents promptly.
* **Incident Response Team**: A team of trained professionals is available to
respond to security incidents at any time.
* **Incident Management**: Incidents are managed according to a predefined
process, ensuring a quick and efficient response.
## Incident recovery
Comprehensive recovery plans are in place to ensure the quick restoration of
services and data integrity.
* **Recovery Procedures**: Detailed procedures are in place to ensure the quick
and efficient recovery of services.
* **Post-Incident Analysis**: After an incident, a thorough analysis is
conducted to identify root causes and implement measures to prevent future
occurrences.
file: ./content/docs/security/index.mdx
meta: {
"title": "Introduction"
}
At SettleMint, we prioritize the security of our clients' data and systems. Our
comprehensive security posture encompasses policies, procedures, and
technologies designed to protect against a wide range of threats. This document
outlines the key elements of our security strategy and demonstrates our
commitment to maintaining the highest standards of security.
## Our commitment to security
SettleMint is committed to providing a secure environment for all our digital
asset solutions. We understand the critical importance of security in the
blockchain industry and continuously work to ensure that our infrastructure and
applications meet the highest standards.
## Key elements of our security posture
* **Proactive Security Measures**: Implementing proactive security measures to
prevent incidents before they occur.
* **Continuous Monitoring**: Continuous monitoring and regular audits to ensure
compliance with security standards.
* **Employee Training**: Ongoing employee training and awareness programs to
foster a culture of security.
* **Client Collaboration**: Working closely with clients to understand their
security needs and incorporate their requirements into our solutions.
file: ./content/docs/security/infrastructure-security.mdx
meta: {
"title": "Infrastructure security"
}
Our infrastructure is designed with multiple layers of security to protect
against various threats. We employ advanced technologies and best practices to
ensure the security and resilience of our systems.
## Cloud security
Our cloud providers are industry leaders, offering robust security features and
compliance certifications.
* **DDoS Protection**: Advanced DDoS protection mechanisms to prevent and
mitigate distributed denial-of-service attacks.
* **Network Security**: Secure network architecture with firewalls, intrusion
detection systems, and network segmentation to protect against unauthorized
access and threats.
## High availability and disaster recovery
Our blockchain platform is designed with a focus on ensuring high availability
and robust disaster recovery to maintain uninterrupted service and secure data
integrity under various conditions.
* **Redundancy**: Critical components are redundant, ensuring that the failure
of a single component does not affect the overall system.
* **Backup and Recovery**: We use Velero for efficient backup and restoration
in disaster recovery (DR) scenarios, managed by cluster operators.
* **Geographically Distributed Nodes**: Enabling blockchain node deployment
across multiple data centers in different regions to enhance resilience
against regional outages and optimize performance globally.
* **Inter-Cluster Synchronization**: We use advanced consensus protocols for
real-time data synchronization across clusters, ensuring data consistency and
integrity.
* **Automatic Failover Mechanisms**: Critical components like transaction
processing nodes and storage have automatic failover, with hot standby nodes
for immediate takeover.
* **Load Balancing**: We apply sophisticated load balancing to evenly distribute
workloads and prevent overloads, enhancing network performance.
## Tamper audit and software integrity
SettleMint's Kubernetes and container management infrastructure includes tamper
audit and software integrity functions to detect changes in container builds or
configurations. These measures ensure the integrity of release artifacts and
workloads by using tools such as image signing, admission controllers, and
runtime security tools to monitor and secure the environment. Continuous
monitoring and automated checks help maintain a secure Kubernetes deployment.
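
Image signing and admission control are tool-specific, but the core idea
behind software integrity checks is comparing a cryptographic digest of an
artifact against a trusted reference recorded at build time. A minimal,
generic sketch follows; the file path and digest are placeholders, not
SettleMint tooling.

```ts
// Generic artifact-integrity check: recompute a release artifact's
// SHA-256 digest and compare it against the digest recorded in a
// trusted manifest at build time.
import { createHash } from "crypto";
import { readFileSync } from "fs";

function verifyArtifact(path: string, expectedDigest: string): boolean {
  const actual = createHash("sha256").update(readFileSync(path)).digest("hex");
  return actual === expectedDigest; // a mismatch indicates tampering or corruption
}

// Example: verifyArtifact("release.tar.gz", "3a7bd3...") -> true/false
```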
## Access control and monitoring
SettleMint restricts, logs, and monitors access to all critical systems,
including hypervisors, firewalls, vulnerability scanners, network sniffers, and
APIs. This comprehensive access control and monitoring ensure that only
authorized personnel can access these systems, enhancing security and
accountability.
### Monitoring privileged access
SettleMint monitors and logs privileged access (administrator level) to
information security management systems. This practice ensures that all
administrative actions are tracked and reviewed, enhancing security and
accountability by detecting and responding to any unauthorized or suspicious
activities.
file: ./content/docs/security/security-policies.mdx
meta: {
"title": "Security policies"
}
SettleMint has established comprehensive security policies to safeguard our
systems and data. These policies are designed to ensure the confidentiality,
integrity, and availability of information.
## Data protection and privacy
We adhere to strict data protection regulations such as GDPR and CCPA. Personal
data is handled with the utmost care, ensuring confidentiality and integrity.
* **Data Encryption**: All sensitive data is encrypted both in transit and at
rest using industry-standard encryption protocols.
* **Data Minimization**: We collect only the data necessary for our operations
and limit access to it based on the principle of least privilege.
## Access control
Multi-factor authentication (MFA) is required for access to sensitive systems.
Role-based access control (RBAC) ensures that employees have the minimum
necessary access.
* **Authentication**: Strong authentication mechanisms, including MFA and SSO,
are enforced across our systems.
* **Authorization**: Access to resources is granted based on roles and
responsibilities, ensuring that users only have access to what they need.
## Incident response
Our incident response policy outlines the procedures for detecting, responding
to, and recovering from security incidents.
* **Incident Detection**: Continuous monitoring and automated alerting systems
to detect potential security incidents.
* **Incident Handling**: A dedicated incident response team is available 24/7 to
handle security incidents promptly.
* **Incident Recovery**: Comprehensive recovery plans to ensure quick
restoration of services and data integrity.
## Employee training and awareness
Continuous training and awareness programs are crucial to maintaining our
security posture. Employees undergo regular security training to stay updated on
the latest threats and best practices.
* **Training Programs**: Regular security training sessions for all employees.
* **Awareness Campaigns**: Ongoing awareness campaigns to reinforce the
importance of security in daily operations.
## Third-party security
SettleMint's third-party agreements include provisions for the security and
protection of information and assets. These agreements ensure that all partners
and vendors adhere to our stringent security requirements, maintaining a
consistent security posture across our supply chain.
* **Vendor Assessments**: We conduct regular security assessments of our vendors
to ensure compliance with our security standards.
* **Contractual Obligations**: Security requirements are embedded in our
third-party contracts to ensure ongoing compliance.
file: ./content/docs/security/security-scanners.mdx
meta: {
"title": "Security scanners"
}
SettleMint uses advanced security scanners to maintain the integrity and
security of our codebase and dependencies. This page provides detailed
information about the scanners we use, including Aikido, TruffleHog, and
Renovate.
## Aikido
Aikido is a comprehensive security platform that provides a variety of tools for
vulnerability management and penetration testing. It includes multiple scanners
to cover different aspects of security:
* **ZAP (Zed Attack Proxy)**: Used for penetration testing and finding
vulnerabilities in web applications. It helps identify issues such as SQL
injection, cross-site scripting (XSS), and other security threats.
* **Trivy**: A comprehensive security scanner for container images, file
systems, and Git repositories. It detects vulnerabilities, misconfigurations,
and secrets.
* **Clair**: An open-source project for the static analysis of vulnerabilities
in application containers (currently supports Docker). It scans container
images for known vulnerabilities in the packages installed.
* **Nuclei**: A fast, customizable vulnerability scanner based on templates. It
helps in identifying security issues across various protocols.
* **Bandit**: A security linter for Python code that finds common security
issues in Python code.
* **Gitleaks**: A tool for detecting hardcoded secrets like passwords, API keys,
and tokens in Git repositories.
* **Syft**: Used for generating Software Bill of Materials (SBOMs) and open
source license scanning.
* **Grype**: A vulnerability scanner for container images and filesystems.
* **Checkov**: An infrastructure as code (IaC) static analysis tool that detects
misconfigurations in cloud infrastructure.
* **Phylum**: Detects malware in dependencies.
* **endoflife.date**: Detects outdated and end-of-life software.
Aikido ensures that security is maintained throughout the development lifecycle
by providing continuous monitoring and automated testing.
You can request the Aikido security scan report by following this
[link](https://app.aikido.dev/audit-report/external/ifiVHdPo7XlO1kmSjOoPtofe/request).
### Cloud infrastructure integration
In addition to these scanners, Aikido is integrated with our cloud
infrastructure to ensure secure operations. This integration allows us to run
our infrastructure in a secure manner, leveraging the power of these tools to
continuously monitor, assess, and improve the security posture of our cloud
environments.
## TruffleHog
TruffleHog is a tool for detecting secrets in the codebase. It scans for
high-entropy strings and other potential secrets in the code repositories,
ensuring that sensitive information such as API keys, passwords, and tokens is
not exposed in the source code.
* **High-Entropy String Detection**: Identifies strings that may be secrets
based on their entropy (see the sketch after this list).
* **Pattern Matching**: Uses regular expressions to identify potential secrets
based on known patterns.
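
To make the high-entropy approach concrete, the toy function below computes
the Shannon entropy of a string; random tokens score markedly higher than
ordinary identifiers. This is only an illustration of the general idea, not
TruffleHog's actual algorithm or thresholds.

```ts
// Toy high-entropy string detector: Shannon entropy in bits per
// character. Not TruffleHog's actual algorithm or thresholds.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

console.log(shannonEntropy("getUserById"));                      // ~3.3
console.log(shannonEntropy("9f8Bq2LxT7vKwZ4mR1sYhE6uN0aPdC3j")); // 5.0
```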
## Renovate
Renovate is a dependency management tool that automates the process of updating
dependencies. It regularly scans for outdated or vulnerable dependencies and
creates pull requests to update them.
* **Automated Dependency Updates**: Regularly scans and updates dependencies to
the latest versions.
* **Pull Request Creation**: Automatically generates pull requests for updates,
simplifying the update process.
* **Compatibility Checks**: Ensures that updates are compatible with the
existing codebase, reducing the risk of breaking changes.
## Integration with CI/CD pipeline
These security scanners are integrated into our CI/CD pipeline to provide
continuous security checks and ensure that vulnerabilities are identified and
addressed promptly.
* **Continuous Integration**: Automated security scans are performed at each
stage of the development process.
* **Continuous Deployment**: Ensures that only secure and compliant code is
deployed to production.
By using these advanced security scanners, SettleMint maintains a high level of
security for its applications and infrastructure, protecting against a wide
range of threats and vulnerabilities.
file: ./content/docs/support/faqs.mdx
meta: {
"title": "FAQs",
"description": "Frequently asked questions"
}
## Frequently Asked Questions (FAQs)
**1. Why is SettleMint considered the best blockchain platform for enterprises?**
SettleMint offers a high level of abstraction without limiting control. It
accelerates enterprise blockchain adoption with a full-stack low-code
development environment, built-in protocol support (Ethereum, Hyperledger
Fabric, etc.), smart contract lifecycle tools, and robust middleware for
integration.
It simplifies the entire lifecycle from development to production deployment,
making blockchain projects faster to implement, easier to scale, and
cost-efficient to maintain.
**2. What blockchain protocols are supported by SettleMint?**
SettleMint supports a wide variety of blockchain protocols:
* Private Networks - Hyperledger Besu, Hyperledger Fabric and Quorum.
* Layer 1 Public Networks - Ethereum, Avalanche, Hedera and Fantom.
* Layer 2 Public Networks - Polygon PoS, Polygon zkEVM, Optimism, Arbitrum and Soneium.
The platform's extensibility allows onboarding of additional protocols based on
project requirements.
**3. How does SettleMint simplify smart contract development and deployment?**
SettleMint provides:
* A contract IDE for authoring Solidity or chaincode
* Templates and reusable libraries
* One-click deployment to any supported network
* Version control and upgrade lifecycle management
* Auto-generated GraphQL and REST APIs
* Event binding and subscription to smart contract logs
These tools reduce development effort while providing deep control over contract
logic.
**4. Can SettleMint integrate with existing enterprise systems?**
Yes. SettleMint offers:
* Middleware connectors
* SDKs for languages like JavaScript, Python, and Java
* Integration studio
* Zero config APIs
This enables seamless integration with legacy systems, workflows, and data
pipelines.
**5. How does SettleMint manage identity and access control?**
Identity and access are managed using:
* **RBAC (Role-Based Access Control):** Assign roles across apps, nodes, and
contracts
* **Membership Service Providers:** Especially in Fabric networks for
certificate-based access
* **API Gateways with Auth:** Support for JWT, OAuth2, and API keys
* **On-chain permissions:** Smart contracts can enforce ACLs for method calls
Security is enforced across both infrastructure and app layers.
**6. Is it possible to monitor and debug blockchain applications on SettleMint?**
Yes, SettleMint provides observability features including:
* Real-time logs from smart contracts and nodes
* Transaction explorer and state diffing tools
* Metrics dashboards for node health and API usage
* Subgraph query monitoring
* Alerts and triggers on transaction or performance failures
These capabilities help developers maintain SLAs and proactively resolve issues.
**7. What kind of deployment environments does SettleMint support?**
SettleMint supports:
* **Hosted environments** managed by SettleMint for rapid prototyping
* **Self-hosted** Kubernetes or cloud-native deployments (AWS, Azure, GCP)
* **On-premise** installations for highly regulated industries
* **Multi-region support** for high availability and compliance needs
The platform abstracts DevOps complexity while maintaining flexibility.
file: ./content/docs/support/slas.mdx
meta: {
"title": "Service Level Commitment",
"description": "Overview of SettleMint's reliability, support tiers, and enterprise-grade guarantees."
}
# SettleMint Service Level Commitment
SettleMint delivers enterprise-grade reliability, support, and performance to
ensure your blockchain solutions operate smoothly and securely. Our Service
Level Commitments are designed to match the needs of mission-critical workloads
across industries.
## Platform Uptime
We commit to delivering **99.9%+ availability** across our managed environments,
with infrastructure built for resilience, redundancy, and rapid recovery.
* High-availability cloud deployments
* Disaster recovery and failover procedures
* Regular security patching and proactive maintenance
## Support Tiers
SettleMint offers multiple support tiers to match your operational needs:
| Support Plan | First Response Time | Intervention Window | Customer Success Engineer | SLAs & Penalties |
| ------------ | ------------------- | ------------------- | ------------------------- | ---------------- |
| Standard | ≤90 mins (P1/P2) | 10h/5d | Included | Not included |
| Silver | ≤60 mins (P1/P2) | 10h/5d, 15h/6d | Included | Included |
| Gold | ≤30 mins (P1) | 24h/7d | Included | Included |
| Platinum | Immediate (P1) | 24h/7d | Included | Included |
> P1 = Critical priority | P2 = High priority. Detailed SLA and penalty
> conditions are available upon request.
## Incident Prioritization
Incidents are classified by severity to ensure appropriate response and
resolution times.
* **P1 - Critical:** Complete service disruption or production outage.
* **P2 - High:** Major degradation impacting business operations.
* **P3 - Medium:** Non-critical issues or partial functionality loss.
* **P4 - Low:** Minor bugs, requests, or informational questions.
## Maintenance and Updates
* **Minor updates** (patches, bug fixes) are deployed frequently and safely.
* **Major updates** are planned and communicated in advance.
* **Scheduled maintenance** is limited to four hours per month, with 10 business
days’ notice.
## Backup & Monitoring
* **Daily backups** of non-volatile data with 30-day retention
* **Proactive monitoring** across all tiers
* **Advanced monitoring and reporting** available on request
## Enterprise Assurance
Our full SLA document, including detailed KPIs, penalty clauses, and escalation
procedures, is available as part of enterprise contracts.
file: ./content/docs/support/support.mdx
meta: {
"title": "Get support"
}
For any technical issues or troubleshooting support, feel free to reach out to us.
Our team is available to assist you with any queries you may have.
Contact us at [support@settlemint.com](mailto:support@settlemint.com); we’re here to help.
If you have an existing contract, you can also get in touch with your Account
Manager or Customer Success Manager for any assistance.
file: ./content/docs/terms-and-policies/cookie-policy.mdx
meta: {
"title": "Cookie policy"
}
## 1. What are cookies?
Cookies are small text files that are stored on your device by the server of
the website you visit. Cookies contain information used by the server to:
* Optimize functionality of the website.
* Optimize rendering of the website.
* Retain and reuse selected preferences.
* Analyze visitor behavior.
* Provide targeted advertisement.
Cookies typically do not register any personal data, such as your name, address,
phone number, email address or other data that can be traced back to you. If you
wish, you can configure most browsers to reject cookies or to notify you when
cookies are being sent.
## 2. Which cookies do we use?
Cookies can be distinguished by who controls and processes them:
* First party cookies: Cookies fully controlled by SettleMint.
* Third party cookies: Cookies controlled by a third party related to
SettleMint, e.g., Google or Facebook.
A second distinction can be made based on the purpose of the cookies:
* Necessary cookies: Cookies required to use the website.
* Functional cookies: Cookies that facilitate the use of the website and
provide you with a more personalized experience.
* Analytical cookies: Cookies used to compile visitor statistics, providing a
better understanding of how the website functions.
* Marketing cookies: Cookies that monitor internet user behavior in order to
show personalized online advertisements and customized content.
Necessary, functional and analytical cookies, whether first or third party, are
always used and placed when visiting our website. We ask for your consent to use
and place first party cookies related to marketing purposes.
file: ./content/docs/terms-and-policies/gdpr-guide.mdx
meta: {
"title": "GDPR Compliance"
}
The General Data Protection Regulation (GDPR) is a comprehensive privacy
regulation that governs the collection, processing, and storage of personal data
within the European Union (EU) and the European Economic Area (EEA). As a
European company building a blockchain application, it is essential to ensure
your application complies with GDPR regulations. This documentation will outline
key considerations and provide guidance for achieving compliance.
To support our clients in aligning with GDPR requirements, SettleMint provides
platform-level features and architectural best practices that help ensure
privacy, security, and regulatory alignment while building decentralized
applications.
***
## Key considerations
### 1. Data minimization
Under GDPR, companies must practice data minimization, collecting and processing
only what is necessary for a specific, declared purpose. Blockchain’s inherent
immutability introduces challenges here.
SettleMint supports this principle by:
* Providing integrated off-chain storage modules where sensitive user data can
be stored securely, keeping only cryptographic references or hashes on-chain.
* Allowing developers to configure smart contracts to avoid direct storage of
personally identifiable information (PII).
**Best practices suggested:**
* Store only deterministic hashes or proofs on-chain (see the sketch after
this list).
* Use secure IPFS or cloud connectors to manage off-chain personal data.
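
A minimal sketch of the hash-on-chain pattern described above: only a
deterministic fingerprint of the record would be anchored on-chain, while the
PII itself stays in erasable off-chain storage. The code below uses Node's
`crypto` module and is illustrative, not a SettleMint API.

```ts
// Sketch of the "hash on-chain, data off-chain" pattern: anchor only a
// deterministic fingerprint of the record on the ledger; the PII itself
// lives in erasable off-chain storage.
import { createHash } from "crypto";

const record = { name: "Alice Example", email: "alice@example.com" };

// Deterministic hash; a stable field order matters for reproducibility.
const fingerprint = createHash("sha256")
  .update(JSON.stringify(record))
  .digest("hex");

// fingerprint -> written to the smart contract / ledger
// record      -> stored off-chain (database, IPFS, vault) where it can be erased
console.log(fingerprint);
```

Note that hashing low-entropy PII (names, email addresses) without a salt can
be reversed by brute force, so a salted or keyed hash is generally preferable.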
***
### 2. Identifying data controllers and data processors
GDPR requires clear distinction between **data controllers** (who determine the
purpose and means of processing) and **data processors** (who act on behalf of
controllers).
On the SettleMint platform:
* Access roles and data flows can be clearly modeled using permissioned
blockchain channels.
* Organizations on a blockchain network can be mapped to controller/processor
roles via Membership Service Provider (MSP) structures.
**Best practices suggested:**
* Maintain a registry of actors and their responsibilities in your governance
model.
* Document data processing agreements between consortium members.
***
### 3. Right to erasure (Right to be Forgotten)
The immutability of blockchain makes deletion of personal data difficult or
impossible.
SettleMint addresses this challenge through:
* Off-chain personal data storage, enabling full erasure of user data without
breaking blockchain references.
* Support for advanced cryptographic patterns such as zero-knowledge proofs and
hashed identifiers to make data unlinkable.
**Best practices suggested:**
* Never store raw PII on-chain.
* Design smart contracts to support revocation and pointer invalidation
mechanisms (sketched below).
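
The sketch below illustrates erasure under this pattern: deleting the
off-chain record removes the personal data, and the on-chain reference is
marked revoked. The `Map`s merely model the two sides for illustration; a
real design would use a smart contract and a database or vault.

```ts
// Erasure sketch under the hash-on-chain pattern. The ledger side holds
// only a fingerprint and a revocation flag (modeled here as a plain Map).
const offChain = new Map<string, string>();                             // erasable PII store
const onChain = new Map<string, { hash: string; revoked: boolean }>(); // modeled ledger side

function erase(recordId: string) {
  offChain.delete(recordId);   // the personal data itself is gone
  const ptr = onChain.get(recordId);
  if (ptr) ptr.revoked = true; // the on-chain reference is marked invalid
  // The hash remains on-chain, but with the off-chain data deleted it can
  // no longer be linked back to a person.
}
```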
***
### 4. Pseudonymization and anonymization
SettleMint enables privacy-by-design through data transformation tools that
support:
* **Pseudonymization**: Replacing user identifiers with random tokens or
blockchain addresses.
* **Anonymization**: Removing or irreversibly altering PII such that it cannot
be re-linked.
**Best practices suggested:**
* Use public-private key pairs to abstract identities.
* Avoid reusing pseudonyms across different datasets (see the sketch after
this list).
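
As a small illustration of keyed pseudonymization, the sketch below derives a
stable pseudonym from an identifier with an HMAC; without the secret key the
mapping cannot be reversed, and using a different key per dataset prevents
pseudonym reuse across datasets. This is a generic pattern, not a specific
SettleMint feature.

```ts
// Keyed pseudonymization sketch: an HMAC maps a real identifier to a
// stable pseudonym that cannot be reversed without the secret key.
import { createHmac, randomBytes } from "crypto";

const datasetKey = randomBytes(32); // one secret key per dataset

function pseudonymize(userId: string): string {
  return createHmac("sha256", datasetKey).update(userId).digest("hex");
}

console.log(pseudonymize("alice@example.com")); // stable within this dataset
```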
***
### 5. Consent management
GDPR mandates that users provide clear and revocable consent for processing
their personal data.
SettleMint provides application kits and templates for:
* Building smart contract-based consent registries that are transparent and
auditable.
* Logging and timestamping user consent and withdrawals immutably, while storing
detailed consent data off-chain.
**Best practices suggested:**
* Design explicit consent flows in the application UI.
* Allow users to view and manage consent history via self-sovereign identity
interfaces (a minimal registry sketch follows below).
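
A minimal, illustrative shape for a consent registry is sketched below:
consent grants and withdrawals are timestamped events, and the current state
is the most recent event for a given subject and purpose. In the on-chain
variant described above, only a hash of each event would be anchored on the
ledger; this sketch is not SettleMint's actual application kit.

```ts
// Illustrative consent registry: an append-only log of timestamped
// grant/withdrawal events. On-chain, only a hash of each event would be
// anchored; details stay off-chain.
type ConsentEvent = {
  subject: string;   // pseudonymous user identifier
  purpose: string;   // e.g. "newsletter", "analytics"
  granted: boolean;  // true = consent given, false = withdrawn
  timestamp: number; // Unix epoch milliseconds
};

const log: ConsentEvent[] = [];

function recordConsent(subject: string, purpose: string, granted: boolean) {
  log.push({ subject, purpose, granted, timestamp: Date.now() });
}

// Current consent = the most recent event for (subject, purpose).
function hasConsent(subject: string, purpose: string): boolean {
  for (let i = log.length - 1; i >= 0; i--) {
    const e = log[i];
    if (e.subject === subject && e.purpose === purpose) return e.granted;
  }
  return false;
}

recordConsent("pseudonym-123", "newsletter", true);
recordConsent("pseudonym-123", "newsletter", false);
console.log(hasConsent("pseudonym-123", "newsletter")); // false
```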
***
### 6. Data protection impact assessment (DPIA)
A DPIA is essential to proactively assess and mitigate privacy risks.
SettleMint supports DPIA efforts by:
* Providing visual workflows and configuration templates that help document data
flows, access levels, and risk areas.
* Enabling rapid prototyping and simulation of data processing within your
decentralized architecture.
**Best practices suggested:**
* Use DPIA templates early in the design phase.
* Update DPIA documentation with each chaincode upgrade or network policy
change.
***
### 7. Cross-border transfers
Transfers of personal data outside the EU/EEA require appropriate safeguards
such as Standard Contractual Clauses (SCCs) or Binding Corporate Rules (BCRs).
For permissioned blockchains built with SettleMint:
* Data residency policies can be enforced through organization-specific data
nodes and localized off-chain storage.
* Data access policies can be enforced through Fabric/Quorum consortium rules
and smart contract-level whitelisting.
**Best practices suggested:**
* Ensure all network participants agree to and implement SCCs where applicable.
* Architect the network with geographic boundaries in mind when dealing with
sensitive user data.
***
## SettleMint’s GDPR-aligned features
SettleMint is committed to privacy-first blockchain development and offers the
following GDPR-supportive features:
* **Off-chain Secure Data Vaults**: Integration with IPFS, cloud, and database
connectors for compliant data storage.
* **Zero-knowledge Pattern Support**: Capability to implement zk-proofs, Merkle
proofs, and hashed pointers to minimize on-chain data exposure.
* **Granular Access Controls**: Role-based access, smart contract permissions,
and organization-level policies enforce strict data governance.
* **Audit Logging and Consent Trails**: Tamper-proof registries to track user
consent and system actions in accordance with GDPR transparency requirements.
* **Chaincode Lifecycle Management**: Ensures that every upgrade or change in
data logic is reviewed, versioned, and auditable.
***
Achieving GDPR compliance for blockchain applications requires thoughtful
design, clear governance, and secure implementation practices. SettleMint
simplifies this journey by embedding privacy-focused capabilities directly into
its blockchain development platform. Whether you're building enterprise
applications or public-facing dApps, SettleMint provides the tools,
architecture, and support to meet your data protection obligations under GDPR.
file: ./content/docs/terms-and-policies/privacy-policy.mdx
meta: {
"title": "Privacy policy"
}
## 1. Who we are
"We", "us", "our", SettleMint, CertiMint or Databroker means SettleMint NV, with
its registered office at Arnould Nobelstraat 38, 3000 Leuven, Belgium and with
company number BE0661674810.
Your privacy is important to us, therefore we've developed this Privacy Policy,
which sets out how we collect, disclose, transfer and use ("process") the
personal data that you share with us, as well as which rights you have. Please
take a moment to read through this policy. We only process personal data in
accordance with this Privacy Policy. SettleMint acts both as a "controller" and
a "processor" of personal data. The controller of the personal data determines
the purposes and means of the processing of personal data and the processor
processes the personal data on behalf of the controller.
Personal data are all data that can be traced back to individual persons and
identify them directly or indirectly, such as a name, phone number, location,
email or home address. Should you have any questions, concerns or complaints
regarding this Privacy Policy or our processing of your personal data; or you
wish to submit a request to exercise your rights as set out by the GDPR, you can
contact us:
* Via email: [privacy@settlemint.com](mailto:privacy@settlemint.com).
* By post: Arnould Nobelstraat 30, 3000 Leuven, Belgium to the attention of our
Data Protection Officer.
This Privacy Policy was last revised on February 21, 2021.
## 2. How and for which purpose do we collect your personal data?
### 2.1 Contact form
When filling in the contact form on our website, we need certain information
about you in order to be able to answer your questions or requests. We will use
the information collected through the contact form only for the purpose of
dealing with your request.
For this purpose, we collect the following data:
* Full name and Surname
* Company name
* E-mail address
* Phone number
* Any additional information you provide to us regarding your project
Alternatively, you can contact us by email via [support@settlemint.com](mailto:support@settlemint.com). We process
this information based on your consent as you provided this information freely
to us.
### 2.2 Newsletter
In the event you register for our newsletter, your email address will be used in
order to send you our newsletters, which may include invites to events,
seminars, etc. organized by us. All other data fields are marked as "voluntary"
and you can submit your question without having to fill in this additional
requested information.
For this purpose, we collect the following data:
* Name
* E-mail address
We process this information based on your consent as you provided this
information freely to us.
### 2.3 Website maintenance and improvement
In order to improve our website, we offer the possibility to provide us with
feedback through the Hotjar tool. Providing feedback, with or without the
Hotjar tool, is neither mandatory nor required to view and browse our website.
For this purpose, we collect the following data:
* Emoticon representing your general feeling about your experience.
* Free text field.
* Email address.
* Connection with data related to visits (device-specific, usage data, cookies,
behavior and interactions) of previous and future visits. Combination of
feedback with any other feedback previously submitted from your device,
location (limited to country), language used, technology used (device and
browser), custom attributes (e.g. products or services you are using), your
behavior and interactions (pages visited).
We furthermore use Google Analytics and Hubspot to provide us insights on the
website performance, conversion rates and other visitor metrics. Google
Analytics and Hubspot use cookies in order to collect the data which is being
processed. For more information on cookies, we refer to our cookie policy. We
process this information based on our legitimate interest.
### 2.4 Job applicants (including unsuccessful applicants)
SettleMint processes personal data of applicants seeking to be employed by
SettleMint and (potential) business relations. Business relations include
clients, suppliers and subcontractors who provide services or carry out
assignments for or on behalf of SettleMint (processors). The information we
collect from you depends on your relationship with SettleMint or the services
you use within SettleMint.
For this purpose, we collect the following data:
* Name;
* Curriculum vitae (CV), which may include:
* Address.
* Place of residence.
* Date of birth.
* Telephone number.
* E-mail address.
* References.
* Certificates.
We process this information based on the execution of a (future) contract.
### 2.5 Employees and former employees
SettleMint processes personal data of employees and former employees,
self-employed persons working for SettleMint, and (potential) business
relations. Business relations include clients, suppliers and subcontractors
who provide services or carry out assignments for or on behalf of SettleMint
(processors). The information we collect from you depends on your relationship
with SettleMint or the services you use within SettleMint. The data is used,
for example, for salary payments, mobile phone number registration, mobility
and insurance.
For these purposes, we collect the following data:
* Name;
* Address;
* Contact details, including email address and telephone number;
* Date of birth;
* Place of birth;
* Nationality;
* National register number;
* Gender;
* Language;
* Details of your qualifications, skills, experience and employment history,
including start and end dates, with previous employers and with the
organization;
* Information about your pay and benefits;
* Bank account number;
* Information about your marital status, next of kin, dependents and emergency
contacts;
* Employment contract.
We process this information based on the execution of a contract.
### 2.6 (Potential) business connections
During any interaction with you, we may collect personal data for business and
marketing purposes. Interaction may include events (collection of business
cards), your use of our contact options, or interactions with you when you
serve as a contact point for the collaboration with your company.
For this purpose, we collect the following data:
* Company information (name, address, sector...);
* Contact details (name, email-address and/or phone number...)
* Job title;
* Notes on our meetings/conversations/history in general;
* Contract information, including billing.
We process this information based on our legitimate interest.
### 2.7 Cookies
Our website makes use of cookies to facilitate the rendering and functioning.
For further information relating to our use of cookies, we refer you to our
Cookie Policy.
We process this information based on legitimate interest.
### 2.8 Training
Under the name of Blockchainacademy.global, SettleMint organizes training
sessions to which any individual can subscribe. For this purpose, we collect the
following data:
* Name;
* Email address;
* Bank data.
We may furthermore ask the attendees of the training for feedback on the
attended training, in order to improve our training activities. For this
purpose, we anonymously collect the following data:
* Feedback.
We process this information based on the execution of a contract.
## 3. Do we share or transfer your personal data?
We actively and passively share data with a number of affiliated third parties
which we engage to assist us in the execution of our daily activities. Active
sharing means that the third party processes the information as input in the
process of our collaboration with said third party. Passive sharing, on the
other hand, means that we use a service/software provided and hosted by the
third party; however, the third party does not process the information as an
input in the process of our collaboration with said third party.
Our active sharing collaborations are:
* KBC - KBC uses the data for insurance purposes commissioned by SettleMint for
SettleMint employees.
* NMBS/SNCB - NMBS/SNCB uses the data for issuance of subscription purposes
commissioned by SettleMint for SettleMint employees.
* Orange - Orange uses the data for mobile phone number registration purposes
commissioned by SettleMint for SettleMint employees.
* SD Worx - SD Worx uses the data for salary payment purposes commissioned by
SettleMint for SettleMint employees.
Our passive sharing collaborations are:
* Deloitte - We use this supplier for accounting purposes.
* Eventbrite - We use this supplier for organization of training purposes.
SettleMint actively processes the information and provides the training, while
Eventbrite hosts the website on which individuals can register for the
training.
* Google Mail - We use this Software as a Service for digital communication
purposes. SettleMint actively processes the information while Google hosts the
software.
* Leadfeeder - We use this Software as a Service for customer relationship
management purposes. SettleMint actively processes the information while
Leadfeeder hosts the software.
* MailChimp - We use this Software as a Service for newsletter purposes.
SettleMint actively processes the information while MailChimp hosts the
software.
* Microsoft Office Lens - We use this Software as a Service for customer
relationship management purposes. SettleMint actively processes the
information and Microsoft provides the software.
* Pipedrive - We use this Software as a Service for customer relationship
management purposes. SettleMint actively processes the information while
Pipedrive hosts the software.
* SurveyMonkey - We use this Software as a Service for training feedback
purposes. SettleMint actively processes the information while SurveyMonkey
hosts the software.
* Hotjar - We use this Software as a Service for website visit experience
feedback. SettleMint actively processes the information, while Hotjar hosts
the software.
* Google Analytics - We use this Software as a Service for website traffic
analysis. SettleMint actively processes the information, while Google hosts
the software.
* Zoom - We use this Software as a Service for web conferencing purposes.
SettleMint actively processes the information while Zoom hosts the software.
* Phantombuster - We use this Software as a Service for sales automation
purposes. SettleMint actively processes the information while Phantombuster
hosts the software.
* Zapier - We use this Software as a Service for marketing automation purposes.
SettleMint actively processes the information while Zapier hosts the software.
* Leadpages - We use this Software as a Service for website visit experience
feedback. SettleMint actively processes the information, while Leadpages hosts
the software.
* Segment - We use this Software as a Service for data management purposes.
SettleMint actively processes the information, while Segment hosts the
software.
* Hubspot - We use this Software as a Service for customer relationship
management purposes & marketing automation. SettleMint actively processes the
information while Hubspot hosts the software.
For each of the above-mentioned third parties, we have a data processing
agreement, governing the use by these third parties and the protection of your
personal data. Besides the aforementioned affiliated third parties, we make use
of social media and their plugins, which enable you to be directed to our social
media channels and to interact with our content and employees. We do not however
disclose your personal data to any of our social media partners. Any reference
made to you will be discussed with you upfront to obtain your consent.
These social media channels on which we are represented, and related management
tools are:
* Facebook;
* LinkedIn;
* Twitter;
* Instagram;
* YouTube;
* Reddit;
* Medium;
* GitHub;
* Telegram;
* Hootsuite
In the event you click such a link, such social media service provider may
collect personal data about you and may link this information to your existing
profile on such social media. We are not responsible for the use of your
personal data by such social media service provider. In this case, the social
media service provider will act as controller.
## 4. What techniques do we use to protect the privacy of your personal data?
SettleMint has implemented technical and organizational measures that are
appropriate to the obtained personal data. These safeguards are designed to
secure all your personal data from loss and unauthorized access, copying, use or
modification.
1. Technical measures:
* Use of anti-virus software, firewalls, etc.;
* Authentication;
* Encrypted hard disks;
* Access restriction;
* Encryption of data;
* Secure backup.
2. Organizational measures:
* Access for specific persons;
* Internal Privacy Policy for employees;
* Training of employees;
* Confidentiality clauses;
* Incident & data breach management.
We can transfer your personal data to parties that are based outside the EEA. In
such a case, we ensure that your personal data is processed in a country that
has a similar degree of data protection and where at least one of the following
safeguards is implemented:
* Countries that have been deemed to provide an adequate level of data
protection by the European Commission;
* Where we use specific providers, we may use specific contracts approved by the
European Commission which gives personal data the same protection it has
within the EEA;
* Where we use providers based in the US, we may transfer your data if they are
certified under the EU-US Privacy Shield, which requires a similar level of
data protection as if it were processed within the EEA.
## 5. How long do we keep your personal data?
We retain your data for as long as necessary to fulfill the purposes we
collected it for. In some circumstances we may anonymize your personal data,
which means it can no longer be associated with you, for research or
statistical purposes, in which case we may use this information without
further notice to you. In cases where local law requires it, we retain your
personal data for the following periods:
* Hotjar website visit experience feedback - 1 year
* CV obtained through external recruiters - 2 years
* CV uploaded via our website - 1 year
* Employee data - 5 years or as legally required
* Orange mobile phone registration data - 5 years or as legally required
* KBC employee insurance data - 5 years or as legally required
* Deloitte accounting - 5 years or as legally required
## 6. What are your rights?
You have rights under the GDPR in relation to your personal data. We have
summarized them for you in a clear and legible way. To exercise any of your
rights, please send us a written request in accordance with paragraph 1 of this
Privacy Policy. We will respond to your request without undue delay, but in any
event within one month of receipt of the request. In the case of complex or
numerous requests, we may extend this period by two additional months. In such
a case, we shall inform you of the extension and the reasons for the delay
within one month of receipt of your request.
### 6.1 The right to be informed
In accordance with Article 12 of the GDPR, we as controller shall take
appropriate measures to provide any information referred to in Articles 13
to 22 and Article 34 relating to the processing of your personal data in a
concise, transparent, intelligible and easily accessible form, using clear and
plain language, in particular for any information addressed specifically to a
child. The information shall be provided in writing, or by other means,
including, where appropriate, by electronic means. When requested by you, the
information may be provided orally, given that your identity is proven.
Where we obtain personal data, collected directly from you, we shall provide you
with:
* Our contact details.
* The contact details of our data protection officer where applicable.
* The purposes of the processing for which the personal data is intended, as
well as the legal basis for the processing.
* Details of the purposes for processing in case of legitimate interests.
* The recipients of the personal data, if any.
* Where applicable, the intention to transfer personal data to a third country
or international organization and the existence or absence of an adequacy
decision by the Commission and any appropriate or suitable safeguards.
* The period for which the personal data will be stored.
* Information on further processing other than for the purposes originally
stated, prior to further processing.
Where we obtain personal data, not collected directly from you, we shall provide
you with:
* The information as mentioned in the paragraph above, on information we
collected directly from you.
* The identity and contact details of the controller/controller's
representative.
* The contact details of the data protection officer, where applicable.
### 6.2 The right to access
In accordance with Article 15 of the GDPR, you have the right to ask us if we
process personal data concerning you. In the case that we process your personal
data, you have the right to ask us:
* The purpose for which it is being processed;
* Which personal data;
* Duration of the retention;
* The source of data (third party or automated processing such as profiling);
* Safeguards related to transfer;
* A copy of the data. Note that for any additional copies, we reserve the right
to charge a reasonable fee to cover administrative costs.
### 6.3 The right to rectification
In accordance with Article 16 of the GDPR, you have the right to request a
correction of the stored personal data concerning you if they are inaccurate or
incorrect.
### 6.4 The right to erasure (right to be forgotten)
In accordance with Article 17 of the GDPR, you have the right to request that
your personal data held by us is erased. In other words, you have the right to
be forgotten by us if:
* Personal data is no longer necessary in relation to the purpose for which it
was collected;
* You withdraw your consent for the processing and we based our processing on
your consent;
* No overriding legitimate grounds for processing are presented by the
controller in response to the objection by the data subject;
* The personal data has been unlawfully processed;
* The personal data has to be erased for compliance with legal obligations;
* The data subject is younger than 16 years and consent of the holder of
parental responsibility has not been obtained.
The right to be forgotten does not apply for:
* Exercising the right of freedom of expression and information;
* Compliance with legal obligations which requires processing by law;
* Reasons of public interest in the area of public health;
* Archiving purposes in the public interest, scientific or historical research
purposes or statistical purposes.
### 6.5 The right to restrict processing
In accordance with Article 18 of the GDPR, you have the right to restrict the
processing of your personal data (meaning that the personal data may only be
stored by us and may only be used for limited purposes), if:
* You contest the accuracy of the personal data (and only for as long as it
takes to verify that accuracy);
* The processing is unlawful, and you request restriction (as opposed to
exercising the right to erasure);
* We no longer need the personal data for the purposes of our processing, but
you require personal data for the establishment, exercise or defense of legal
claims;
* You have objected to processing, pending the verification of that objection.
In addition to our right to store your personal data, we may still otherwise
process it but only:
* With your consent;
* For the establishment, exercise or defense of legal claims;
* For the protection of the rights of another natural or legal person;
* For reasons of important public interest.
We will inform you before we lift the restriction of processing.
### 6.6 The right to data portability
In accordance with Article 20 of the GDPR, you have the right to receive your
personal data, which you have provided to us, in an understandable and readable
format. You furthermore have the right to transmit that data to another
organization without hindrance from us if our processing of the data was based
on your consent and is processed in an automated manner. Where technically
feasible, you have the right to have your data transferred directly by us to the
organization.
Exercising your right to data portability shall be without prejudice to your
right to erasure. Note that the right to data portability does not apply if:
* The processing is necessary for the performance of a task carried out in the
public interest.
* The processing is in the exercise of official authority vested in us.
* It adversely affects the rights and freedoms of others.
### 6.7 The right to object to processing
In accordance with Article 21 of the GDPR, you are entitled to object to the
processing of your personal data, meaning that we have to terminate the
processing of your personal data. The right of objection exists only within the
limits provided for in art. 21 GDPR. In addition, our interests may prevent the
processing from being terminated, so that we are entitled to process your
personal data despite your objection.
### 6.8 Automated individual decision-making, including profiling
In accordance with Article 22 of the GDPR, you have the right not to be subject
to a decision based solely on automated processing, including profiling, which
produces legal effects concerning you or similarly affects you.
This right shall not apply if the decision is:
* Necessary for entering into, or performance of, a contract between you and us.
* Authorized by Union or Member State law to which we are subject, and which
also lays down suitable measures to safeguard the data subject's rights and
freedoms and legitimate interests.
* Based on your explicit consent.
### 6.9 Right of appeal to a supervisory authority
If you consider that our processing of your personal information infringes data
protection laws, you have a legal right to lodge a complaint with a supervisory
authority responsible for data protection. You may do so in the EU member state
of your habitual residence, your place of work or the place of the alleged
infringement. In Belgium, you can submit a complaint to the Authority for the
protection of personal data: De Gegevensbeschermingsautoriteit (GBA)
Drukpersstraat 35 1000 Brussel Tel.: +32 (0)2 274 48 00 Fax.: +32 (0)2 274 48 35
[commission@privacycommission.be](mailto:commission@privacycommission.be) ``
## 7. Amendments to the privacy policy
In a world of continuous technological change, we will need to update this
Privacy Statement on a regular basis. We invite you to consult the latest
version of this Privacy Statement online and we will keep you informed of
important changes through our website or through our other usual communication
channels.
file: ./content/docs/terms-and-policies/terms-of-service.mdx
meta: {
"title": "Terms of service"
}
SettleMint Platform -- Terms of Service
DISCLAIMER: Please read these Terms of Service carefully before using the
SettleMint Platform (as defined below). By using the platform, you agree that
your use of the SettleMint Platform shall be governed by these Terms of Service.
Version 2.0 -- October 15, 2021
If you have any questions about the SettleMint Platform or these Terms of
Service, please contact us at [support@settlemint.com](mailto:support@settlemint.com).
The SettleMint Platform (as defined hereafter) is operated and managed by
SettleMint, a limited liability company (naamloze vennootschap) having its
registered office at 7Tuinen, Building B, Arnould Nobelstraat 38, 3000 Leuven
(Belgium) and registered with the Crossroads Bank of Enterprises (Kruispuntbank
van Ondernemingen) under company number 0661.674.810 (RLE Leuven) ("SettleMint"
or "we").
These terms of service (the "Terms of Service") describe the terms and
conditions under which user(s) ("User(s)" or "you") can access and use the
SettleMint Platform, except when other contractual arrangements are expressly
made between SettleMint and User. The general terms and conditions of the User
are not applicable and are therefore expressly excluded, even if such general
terms and conditions would contain a similar clause. In the event of any
conflict or inconsistency between the provisions of these Terms of Service and
the provisions of any contractual arrangements between SettleMint and User, the
provisions of the latter shall prevail.
SettleMint and the User are hereinafter jointly referred to as "Parties" and
each individually as a "Party".
## 1. Description of the SettleMint platform
The SettleMint Platform is a cloud-based blockchain application building,
integration and hosting platform allowing developers to build and integrate
blockchain applications available at [https://console.settlemint.com](https://console.settlemint.com) (the
"Platform").
## 2. Applicability
2.1. The access and use of the Platform is subject to acceptance without
modification of all terms and conditions as contained in these Terms of Service.
2.2. By clicking the "I agree" button, you engage in our service and acknowledge
and agree that your use of the Platform is exclusively governed by these Terms
of Service. If you do not agree to any provision of these Terms of Service, you
may not access and use the Platform in any manner, even if you already have an
Account.
2.3. In the event the Platform uses services or components (which may include
open source software) of third parties or provides access to any third party
websites, services and applications ("Third Party Services"), these Terms of
Service will not apply to these Third Party Services and the terms of service,
license agreements and/or privacy policies of those third parties will govern
your use of the Third Party Services. You shall be notified if and when such
third party terms of service, license agreements and/or privacy policies are
applicable. By accessing such third party service, you agree to comply with the
applicable terms and you acknowledge that you are the sole party to such terms.
SettleMint cannot be held liable in any way with regard to the use of the Third
Party Services and the content of such third parties' terms, license agreements
or privacy policy.
2.4. We reserve the right at any time, and from time to time, with or without
cause to:
* amend these Terms of Service;
* change the Platform, including, adding, eliminating or discontinuing,
temporarily or permanently any tool, service or other feature of the Platform
without any liability against the User or any third parties; or
* deny or terminate, in part, temporarily or permanently, your use of and/or
access to the Platform as set forth herein. Any such amendments or changes
made will be effective immediately upon SettleMint making such changes
available in the Platform or otherwise providing notice thereof. You agree
that your continued use of the Platform after such changes constitutes your
acceptance of such changes.
## 3. Use of the platform
3.1. You are responsible for providing at your own expense, all equipment
necessary to connect to, access and otherwise use the Platform, including but
not limited to modems, hardware, server, operating system, software and internet
access (the "Equipment"). You are responsible for ensuring that such Equipment
is compatible with the Platform and complies with all minimum system
requirements as set out on the webpage. You will also be responsible for
maintaining the security of the Equipment. SettleMint will not be liable for any
loss or damage arising from your failure to comply with the above requirements.
3.2. In order to access the Platform's app creation and management tools you
will be required to create an account providing you access to the Platform (the
"Account") and provide certain registration information. Every individual with
such access Account is a "Direct User" (as opposed to "End Users" who are
individuals invited by User to use the SettleMint Platform Apps created in the
Platform). When creating your Account, you agree (i) to provide accurate,
truthful, current and complete information and (ii) to maintain and promptly
update your Account information. SettleMint reserves the right to suspend or
terminate the Account of anyone who provides inaccurate, untrue, or incomplete
information or who fails to comply with the account registration requirements.
You shall be solely responsible for maintaining the confidentiality and security
of your Account login information such as your password and shall be fully
responsible for all activities that occur under your Account. You agree to
immediately notify SettleMint of any unauthorized use, or suspected unauthorized
use of your Account or any other breach of security.
3.3. During the Term, SettleMint may, in its sole discretion, provide you with
certain updates of the Platform.
## 4. Access to the platform
### 4.1 License by SettleMint
4.1.1. During the Term and subject to these Terms of Service and to the timely
payment of the Fees, SettleMint grants you a non-exclusive, personal,
restricted, revocable and, subject to the conditions set forth in section 4.1.7,
transferable and sub-licensable license to access and use the functionality of
the Platform, including updates, solely to develop, use and host a blockchain
application that you make available to End Users (a "SettleMint Platform App")
(the "License").
4.1.2. Term and Renewal. Your initial license term is one year and will
automatically renew at the end of the license term.
4.1.3. Notice of Non-Renewal. To prevent renewal of your license, you must give
a written notice of non-renewal at least 60 days before the end of the license
term.
4.1.4. Early Cancellation. You may choose to cancel your license early at your
convenience provided that we will not provide any refunds of prepaid fees or
unused license Fees, and you will promptly pay all unpaid fees due through the
end of the license Term.
4.1.5. Free Trial. If you register for a free trial, we will make the applicable
license available to you on a trial basis free of charge until the earlier of
(a) the end of the free trial period (if not terminated earlier) or (b) the
start date of your paid license. Unless you purchase a license before the end of
the free trial, all your data may be permanently deleted at the end of the
trial, and we will not recover it. If we include additional terms and conditions
on the trial registration web page, those will apply as well.
4.1.6. You are not allowed to use the Platform in a manner not authorized by
SettleMint. You shall use the Platform solely in full compliance with (i) these
Terms of Service; (ii) any additional instructions or policies issued by
SettleMint, including, but not limited to, those posted within the Platform and
(iii) any applicable legislation, rules or regulations.
4.1.7. Provided you are offering the Platform exclusively as an integrated
solution for your own use and for your proper commercial purposes to offer your
End Users a SettleMint Platform App in your own name and for your proper
account, the License set forth herein is transferable and sub-licensable for
purposes of integration only and subject to the restrictions set out in section
4.2.
### 4.2 Restrictions
You agree to use the Platform only for its intended use as set forth in these
Terms of Service. Within the limits of the applicable law, you are not permitted
to (or allow any other third party to) (i) access the Platform functionalities
by any means other than through the interface and Account that is provided
to you by SettleMint; (ii) copy, adapt, alter, translate or modify in any manner
the Platform or underlying software; (iii) lease, rent, loan, distribute, or
otherwise transfer the Platform to any third party; (iv) decompile, reverse
engineer, disassemble, or otherwise derive or determine or attempt to derive or
determine the software code (or the underlying ideas, algorithms, structure or
organization) of the Platform, except and only to the extent that such activity
is expressly permitted by applicable law notwithstanding this limitation; (v)
gain unauthorized access to accounts of other Users or use the Platform to
conduct or promote any illegal activities; (vi) use the Platform to generate
unsolicited email advertisements or spam; (vii) impersonate any person or
entity, or otherwise misrepresent your affiliation with a person or entity;
(viii) use any high volume automatic, electronic or manual process to access,
search or harvest information from the Platform (including without limitation
robots, spiders or scripts); (ix) alter, remove, or obscure any copyright
notice, digital watermarks, proprietary legends or other notice included in the
Platform; (x) intentionally distribute any worms, Trojan horses, corrupted
files, or other items of a destructive or deceptive nature; (xi) use the Platform
for any unlawful, invasive, infringing, defamatory or fraudulent purpose; or
(xii) remove or in any manner circumvent any technical or other protective
measures in the Platform. Except as expressly set forth herein, no express or
implied license or any rights of any kind are granted to you regarding the
Platform, including but not limited to any right to obtain possession of any
source code, data or other technical material relating to the Platform.
### 4. 3. license by user
By uploading, creating or otherwise sharing data on or through the Platform, you
grant SettleMint a non-exclusive, royalty-free, worldwide, sublicensable,
transferable license to use, copy, store, modify, transmit and display such
data and documents uploaded by you (the "User Data"), to the extent necessary
and always in compliance with the provisions set forth in Article 12 of these
Terms of Service. To provide and maintain the Platform, SettleMint reserves the
right, but is not obliged, to review and remove any User Data which is deemed to
be in violation with the provisions of these Terms of Service or is deemed
inappropriate in accordance with any rights of third parties or any applicable
legislation or regulation.
## 5. Ownership
5.1. As between the User and SettleMint, the Platform and all Intellectual
Property Rights pertaining thereto, are the exclusive property of SettleMint
and/or its licensors. For the purpose of this Agreement, "Intellectual Property
Rights" means any and all now known or hereafter existing (a) rights associated
with works of authorship, including copyrights, and moral rights, (b) trademark
or service mark rights, (c) trade secret rights, know-how, (d) patents, patent
rights, and industrial property rights, (e) layout design rights, design rights,
(f) semiconductor topography rights, (g) rights in trade names, brand names,
business names and domain names, (h) database rights, and any other industrial or intellectual
proprietary rights or similar right (whether registered or unregistered), and
(i) all registrations, applications for registration, renewals, extensions,
divisions, improvements or reissues relating to any of these rights and the
right to apply for, maintain and enforce any of the preceding items, in each
case in any jurisdiction throughout the world.
5.2. All rights, including Intellectual Property Rights, titles and interests in
and to the Platform or any part thereof not expressly granted to the User by
these Terms of Service are reserved by SettleMint and its licensors. Except as
expressly set forth herein, no express or implied license or right of any kind
is granted to the User regarding the Platform, including any right to obtain
possession of any software code, data or other technical material related to the
Platform.
5.3. Feedback. If you provide SettleMint with any feedback or suggestions
regarding the Sites or Services ("Feedback"), you hereby assign to SettleMint
all rights in such Feedback and agree that SettleMint shall have the right to
use and fully exploit such Feedback and related information in any manner it
deems appropriate. SettleMint will treat any Feedback you provide to SettleMint
as non-confidential and non-proprietary. You agree that you will not submit to
SettleMint any information or ideas that you consider to be confidential or
proprietary.
## 6. Suspension for breach
If SettleMint becomes aware of, or suspects, in its sole discretion, any violation
by you of these Terms of Service, or any other instructions, guidelines or
policies issued by SettleMint, then SettleMint may suspend or limit your access
to the Platform. The duration of any suspension will be until you have cured the
breach which caused such suspension or limitation, except when such breach is
incurable.
## 7. Support
In case you need technical support, you can contact SettleMint at the following
email address: [support@settlemint.com](mailto:support@settlemint.com).
## 8. Payment
8.1. In consideration for the License and the access to and use of the Platform
as set out in these Terms of Service, SettleMint will charge the usage fees as
displayed on the Platform.
8.2. All payments for the use of the Platform can be made by credit card or wire
transfer (upon approval by the credit committee). SettleMint will only process
card transactions that have been authorized by the applicable network or card
issuer. Users shall authorize their banks to hold, receive, disburse and settle
funds on their behalf, including generating a paper draft or electronic funds
transfer to process each payment transaction initiated by the User and relating
to the use of the Platform. Subject to these Terms of Service, Users shall also
authorize their banks to debit or credit any payment card or other payment
method accepted by SettleMint.
8.3. If payments are made by credit card, the User shall be solely responsible
for the security of its data (including but not limited to the information
associated with a payment card, such as card holder, account number, expiration
date and CVC (the "Cardholder Data")) in its possession or control. Users agree
to comply with all applicable laws, regulations and rules relating to the
collection, security and dissemination of any personal, financial or transaction
information. Users agree to notify SettleMint immediately if they provide any
third party with access (or otherwise permit, authorize, or enable such third
party's access) to any Cardholder Data.
8.4. If payments are settled via wire transfer, the User should pay the invoices
within 30 days of issuance. For late payment, interest charges of 1.5% per
month or the highest rate permissible by applicable law will be charged. Under
no circumstances will SettleMint refund the usage fees.
YOU MUST PROVIDE CURRENT, COMPLETE AND ACCURATE INFORMATION FOR YOUR BILLING
ACCOUNT. YOU MUST PROMPTLY UPDATE ALL INFORMATION TO KEEP YOUR BILLING ACCOUNT
CURRENT, COMPLETE AND ACCURATE (SUCH AS A CHANGE IN BILLING ADDRESS, CREDIT CARD
NUMBER, OR CREDIT CARD EXPIRATION DATE), AND YOU MUST PROMPTLY NOTIFY US OR OUR
PAYMENT PROCESSORS IF YOUR PAYMENT METHOD IS CANCELED (E.G., FOR LOSS OR THEFT)
OR IF YOU BECOME AWARE OF A POTENTIAL BREACH OF SECURITY, SUCH AS THE
UNAUTHORIZED DISCLOSURE OR USE OF YOUR USER NAME OR PASSWORD. CHANGES TO SUCH
INFORMATION CAN BE MADE AT [billing@settlemint.com](mailto:billing@settlemint.com).
## 9. Liability
9.1. To the maximum extent permitted under applicable law, SettleMint shall only
be liable for personal injury or any damages resulting from (i) its gross
negligence; (ii) its willful misconduct or (iii) any fraud committed by
SettleMint.
9.2. To the extent permitted under applicable law, SettleMint shall not be
liable to the User or any third party, for any special, indirect, exemplary,
punitive, incidental or consequential damages of any nature including, but not
limited to damages or costs due to loss of profits, data, revenue, goodwill,
production or use, procurement of substitute services, or property damage
arising out of or in connection with the Platform under these Terms of Service,
including but not limited to any miscalculations, or the use, misuse, or
inability to access or use the Platform, regardless of the cause of action or
the theory of liability, whether in tort, contract, or otherwise, even if
SettleMint has been notified of the likelihood of such damages. The limitation
in this section 9.2. shall not apply to the obligations of SettleMint under
section 11 ("Indemnification").
9.3. You agree that SettleMint can only be held liable as per the terms of this
section 9 to the extent damages suffered by you are directly attributable to
SettleMint. You further agree that SettleMint is only liable to you directly,
and not to the End Users. For the avoidance of doubt, SettleMint shall not be
liable for any claims resulting from (i) your or any third party's unauthorized
use of the Platform, (ii) your or any third party's use of the SettleMint
Platform Apps, (iii) Third Parties Services, (iv) your failure to use the most
recent version of the Platform made available to you or your failure to
integrate or install any corrections to the Platform issued by SettleMint, or
(v) your use of the Platform in combination with any non-SettleMint products or
services. The exclusions and limitations of liability under this section shall
operate to the benefit of any of SettleMint's affiliates and subcontractors
under these Terms of Service to the same extent such provisions operate to the
benefit of SettleMint.
9.4. To the extent permitted by applicable law, and except in the case of fraud,
willful misconduct or gross negligence by SettleMint, SettleMint's aggregate
liability arising from or relating to these Terms of Service will be limited to
the Fees paid to SettleMint during a period of twelve (12) months prior to the
occurrence giving rise to the liability.
## 10. Warranties and disclaimers
### 10. 1. by settlemint
10.1.1. General. Except as expressly provided in this section 10 and to the
maximum extent permitted by applicable law, the Platform is provided "AS IS,"
and SettleMint makes no (and hereby disclaims all) other warranties, covenants
or representations, or conditions, whether written, oral, express or implied
including, without limitation, any implied warranties of satisfactory quality,
course of dealing, trade usage or practice, merchantability, suitability,
availability, accessibility, title, non-infringement, or fitness for a
particular use or purpose, with respect to the use, misuse, or inability to use
the Platform or any other products or services provided to the User by
SettleMint. SettleMint does not warrant that all errors can be corrected, or
that access to or operation of the Platform shall be uninterrupted, secure, or
error-free.
10.1.2. Network control. The User acknowledges and agrees that there are risks
inherent in transmitting and storing information on the internet and on
blockchains, and that SettleMint is not responsible and cannot be held
liable for any loss of your data. The User further acknowledges and agrees that
SettleMint does not own or control any of the underlying software through which
blockchain networks are formed nor, as the case may be, through which
cryptocurrencies are created and transacted. In general, the underlying software for blockchain networks
tends to be open source such that anyone can use, copy, modify, and distribute
it. By accessing and using the Platform, you understand and acknowledge that
SettleMint is not responsible for operation of the underlying software and
networks that support blockchain and cryptocurrencies and that SettleMint makes
no guarantee of functionality, security, or availability of such software and
networks.
10.1.3. Forks. The underlying protocols are subject to sudden changes in
operating rules, and third parties may from time to time create a copy of a
digital asset network and implement changes in operating rules or other features
("Forks") that may result in more than one version of a network (each, a "Forked
Network"). You understand and acknowledge that Forked Networks are wholly
outside of the control of SettleMint. In the event of a Fork, you understand and
acknowledge that SettleMint may temporarily suspend services on the Platform and
SettleMint Platform Apps (with or without advance notice to you) while we
determine, at our sole discretion, if and which Forked Network(s) to support.
### 10. 2. by user
You represent and warrant to SettleMint that (a) you have the authority to enter
into this binding agreement personally, (b) that you are liable for any User
Data and that this User Data is accurate and truthful and shall not (i) infringe
any Intellectual Property Rights of third parties; (ii) misappropriate any trade
secret; (iii) be deceptive, defamatory, obscene, pornographic or unlawful; (iv)
contain any viruses, worms or other malicious computer programming codes
intended to damage the Platform or data; or (v) otherwise violate the rights of
a third party, (c) that you and all transactions initiated by you will comply
with all rules and regulations applicable to such transaction, (d) you are
solely responsible for the SettleMint Platform Applications created by you on
the Platform and (e) you will not use the Platform, directly or indirectly, for
any fraudulent undertaking or in any manner so as to interfere with the use of
the Platform. If SettleMint determines you have used the Platform for a
fraudulent, unauthorized, illegal or criminal purpose, you hereby authorize
SettleMint to share information about you, your Account or your access to the
Platform with the competent authorities. You agree that any use of the Platform
contrary to or in violation of these representations and warranties shall
constitute unauthorized and improper use of the Platform for which SettleMint
cannot be held liable.
## 11. Indemnification
### 11. 1. by settlemint
SettleMint shall defend and indemnify you as specified herein against any
founded and well-substantiated claims brought by third parties to the extent
such claim is based on an infringement of the Intellectual Property Rights of
such third party by the Platform and excluding any claims resulting from (i)
your or any third party's unauthorized use of the Platform, (ii) your or any
third party's use of the SettleMint Platform Apps, (iii) your failure to use the
most recent version of the Platform made available to you, or your failure to
install any corrections or updates to the Platform issued by SettleMint, if
SettleMint indicated that such update or correction was required to prevent a
potential infringement, (iv) Third Parties Services, or (v) your use of the
Platform in combination with any non-SettleMint products or services.
Such indemnity obligation shall be conditional upon the following: (i)
SettleMint is given prompt written notice of any such claim; (ii) SettleMint is
granted sole control of the defense and settlement of such a claim; (iii) upon
SettleMint's request, the User fully cooperates with SettleMint in the defense
and settlement of such a claim, at SettleMint's expense; and (iv) the User makes
no admission as to SettleMint's liability in respect of such a claim, nor does
the User agree to any settlement in respect of such a claim without SettleMint's
prior written consent. Provided these conditions are met, SettleMint shall
indemnify the User for all damages and costs incurred by the User as a result of
such a claim, as awarded by a competent court of final instance, or as agreed to
by SettleMint pursuant to a settlement agreement.
In the event the Platform, in SettleMint's reasonable opinion, is likely to
become, or becomes, the subject of a third-party infringement claim (as per this section
11.1.), SettleMint shall have the right, at its sole option and expense, to: (i)
modify the (allegedly) infringing part of the Platform so that it becomes
non-infringing while preserving materially equivalent functionalities; (ii)
obtain for the User a license to continue using the Platform in accordance with
these Terms of Service; or (iii) terminate the Terms of Service for that portion
of the Platform which is the subject of such infringement.
The foregoing states the entire liability and obligation of SettleMint and the
sole remedy of the User with respect to any infringement or alleged infringement
of any Intellectual Property Rights caused by the Platform or any part thereof.
### 11. 2. by user
You hereby agree to indemnify and hold harmless SettleMint and its current and
future affiliates, officers, directors, employees, agents and representatives
from each and every demand, claim, loss, liability, or damage of any kind
whatsoever, including reasonable attorney's fees, whether in tort or in
contract, that it or any of them may incur by reason of, or arising out of, any
claim which is made by any third party with respect to (i) any breach or
violation by you of any provisions of these Terms of Service or any other
instructions or policies issued by SettleMint; (ii) any data violating any
Intellectual Property Rights of a third party and (iii) fraud, intentional
misconduct, or gross negligence committed by you.
## 12. Privacy statement
SettleMint recognizes and understands the importance of your privacy and wants
to respect your desire to store and access personal information in a private and
secure environment. Please note that SettleMint has to be considered as the Data
Processor and the User as the Data Controller for the processing of any Personal
Data in accordance with the EU Regulation 2016/679 together with the codes of
practice, codes of conduct, regulatory guidance and standard clauses and other
related legislation resulting from such Regulation, as updated from time to time
(the "General Data Protection Regulation"), via the Platform or any part
thereof. Please note that SettleMint shall only process any Personal Data
relating to you on the documented instructions from the Data Controller and
takes appropriate technical and organizational measures against any unauthorized
or unlawful processing of your Personal Data or its accidental loss, destruction
or any unauthorized access thereto. In the event you as a User request
from SettleMint a copy, correction or deletion of the Personal Data, or you want to
restrict or object to the processing activities, you shall inform SettleMint of
such request within two (2) calendar days. SettleMint shall, as Data Processor,
provide the User with full details of such request, objection or restriction of
the User, together with a copy of the Personal Data held by SettleMint. We shall
not use your Personal Data for any other purpose than instructed by the Data
Controller and allowing you to make use of the features of the Platform. For the
purpose of these Terms of Service, "Data Controller", "Data Processor" and
"Personal Data", shall have the meaning given thereto in the Data Protection
Regulation.
## 13. Term and termination
13.1. The term of this Agreement will commence on the Effective Date and remain
in effect as long as subscription and usage fees are paid, unless terminated
earlier in accordance with section 13.3. The termination of this Agreement can
be requested by you at any time, upon which you will pay the outstanding
balance, after which there will be no further charges.
13.2. SettleMint will not be liable to you for compensation, reimbursement, or
damages in connection with any termination or suspension of the use of the
Platform. Any termination of these Terms of Service does not relieve Users from
any obligations to pay Fees or costs accrued prior to termination and any other
amounts owed by you to SettleMint as provided in these Terms of Service.
13.3. Termination for breach. SettleMint may terminate with immediate effect
these Terms of Service and your right to access and use the Platform (i) if
SettleMint believes or has reasonable grounds to suspect that you are violating
these Terms of Service (including but not limited to any violation of the
Intellectual Property Rights of SettleMint) or any other guidelines or policies
issued by SettleMint or (ii) if you are suspended for non-payment for more than
30 (thirty) days.
13.4. Effects of termination. Upon the termination of these Terms of Service for
any reason whatsoever in accordance with the provisions of these Terms of
Service, at the moment of effective termination: (i) you will no longer be
authorized to access or use the Platform; (ii) SettleMint shall sanitize and
destroy the Personal Data related to your Account, including but not limited to
the data on the Platform within thirty (30) calendar days upon termination of
these Terms of Service in a secure way that ensures that all Personal Data is
deleted and unrecoverable. Personal Data that needs to be kept to comply with
relevant legal and regulatory retention requirements may be kept by SettleMint
beyond expiry of the period of thirty (30) calendar days as long as required by
such laws or regulations, and (iii) all rights and obligations of SettleMint or
the User under these Terms of Service shall terminate, except those rights and
obligations under those sections specifically designated in section 14.6. Upon
written request submitted by the User to SettleMint no later than fourteen (14)
calendar days prior to the termination of the agreement, SettleMint shall
provide the User, immediately prior to the sanitization and destruction thereof,
with a readable and usable copy of the Personal Data and/or the systems
containing Personal Data.
13.5. Outstanding Fees. Termination shall not relieve you of the obligation to
pay any fees payable to SettleMint prior to the effective date of termination.
In the event of termination by SettleMint pursuant to Section 13.3, all amounts
payable by you under this Agreement will become immediately due and payable.
## 14. Miscellaneous
### 14. 1. force majeure
SettleMint shall not be liable for any failure or delay in the performance of
its obligations with regard to the Platform if such delay or failure is due to
causes beyond our control, including but not limited to acts of God, war,
pandemic, strikes or labor disputes, embargoes, government orders,
telecommunications, network, computer, server or Internet downtime, unauthorized
access to SettleMint's information technology systems by third parties or any
other cause beyond the reasonable control of SettleMint (the "Force Majeure
Event"). We shall notify you of the nature of such Force Majeure Event and the
effect on our ability to perform our obligations under these Terms of Service
and how we plan to mitigate the effect of such Force Majeure Event.
### 14. 2. severability
If any provision of these Terms of Service is, for any reason, held to be
invalid or unenforceable, the other provisions of these Terms of Service will
remain enforceable and the invalid or unenforceable provision will be deemed
modified so that it is valid and enforceable to the maximum extent permitted by
law.
### 14. 3. waiver
Any failure to enforce any provision of the Terms of Service shall not
constitute a waiver thereof or of any other provision.
### 14. 4. assignment
You may not assign or transfer these Terms of Service or any rights or
obligations to any third party. SettleMint shall be free to (i) transfer or
assign (part of) its obligations or rights under the Terms of Service to one of
its affiliates and (ii) to subcontract performance or the support of the
performance of these Terms of Service to its affiliates, to individual
contractors and to third party service providers without prior notification to
the User.
### 14. 5. notices
All notices from SettleMint intended for receipt by you shall be deemed
delivered and effective when sent to the email address provided by you on your
Account. If you change this email address, you must update your email address on
your personal settings page.
### 14. 6. survival
Sections 5, 9, 10 and 11 shall survive any termination or expiration of these Terms
of Service.
### 14. 7. governing law and jurisdiction
These Terms of Service shall be exclusively governed by and construed in
accordance with the laws of Belgium, without giving effect to any of its
conflict of law principles or rules. The courts and tribunals of Leuven shall
have sole jurisdiction should any dispute arise relating to these Terms of
Service.
file: ./content/docs/use-case-guides/asset-tokenization.mdx
meta: {
"title": "Asset tokenization",
"description": "A Guide to Connecting a Frontend to Your Blockchain Application",
"sidebar_position": 3,
"keywords": [
"asset tokenization",
"solidity",
"smart contract"
]
}
This guide will show you how to build an asset tokenization application using
SettleMint.
In this guide, you will learn:
* What Asset Tokenization Is
* The Benefits of using Asset Tokenization
* Asset Tokenization Use-Cases
* How to build and deploy an Asset Tokenization Application
## What is asset tokenization?
Asset tokenization is the process of representing ownership rights to an asset
through digital tokens on a blockchain. These tokens serve as a digital
representation of the asset and are recorded and managed on the blockchain
network, enabling secure ownership transfer and efficient trading.
## Benefits of asset tokenization
* **Increased Liquidity:** Tokenizing assets enables fractional ownership,
allowing investors to buy and sell smaller units, thereby increasing liquidity
for traditionally illiquid assets.
* **Accessibility:** Tokenization removes barriers to entry by enabling
participation in asset ownership, allowing investors of all sizes to access
previously exclusive investment opportunities.
* **Efficiency:** Digital tokens can be traded 24/7, reducing settlement times,
and eliminating intermediaries, thereby streamlining the process and reducing
costs.
* **Transparency:** Blockchain provides a transparent and immutable ledger,
offering a clear audit trail for asset ownership, transfers, and transactions.
## Asset tokenization use-cases
* **Real Estate:** Tokenizing real estate assets enables fractional ownership,
making it more accessible to a broader investor base and facilitating
efficient trading.
* **Supply Chain:** Tokenizing supply chain assets such as goods, inventory, or
documents can enhance traceability, provenance, and efficient transfer of
ownership.
* **Art and Collectibles:** Tokenizing artwork and collectibles allows for easy
ownership transfer, provenance verification, and fractional ownership, making
it more inclusive and liquid.
* **Investment Funds:** Tokenizing investment funds allows for fractional
ownership, streamlined distribution, and automated compliance with regulatory
requirements.
## Building an asset tokenization application
## Part 1: resource setup
### 1. Create an application
To start, you need to create an application on SettleMint. An application is a
collection of the different components on SettleMint that will help run your
solution.

To create an application on SettleMint, select the application launcher in the
top right of the dashboard (four boxes). Click `Add an application`.
You will now be able to create a blockchain application and give it a name.
### 2. Deploy a network and node
After creating an application, you can now deploy a network and node. We will
use both of these resources to deploy our Asset Tokenization Smart Contract.

To create a network and node, click on the `Start Here` button. Then select
`Add a Blockchain Network`. This will show all the supported blockchains on
SettleMint.
For this guide, select `Hyperledger Besu`.

After selecting `Hyperledger Besu`, you now have the option to select your
deployment plan.
For this guide, you can use the following settings:
**Type**: Shared
**Cloud Provider**: Google Cloud
**Region**: Location closest to you
**Resource Pack**: Small

After clicking confirm, the node and network will start deploying at the same
time. You will see the status as `Running` once both have been successfully
deployed.
### 3. Create ipfs storage
This guide uses a simple image as the tokenized asset. This image will be pinned
on IPFS, so the next step is to deploy a storage service.

Click on `Storage` and then select `Add storage`. Then select `IPFS` and create
an instance called `Token Storage`. You can choose the same deployment plan that
you did earlier with the network and node.
### 4. Deploy a private key
To get access to the node you deployed, you will need to generate a private key.

To create a key, click on the `Private Keys` option, then select the
`Accessible ECDSA P256` option. Create a name and select the node that you
deployed in the earlier step.
## Part 2: the smart contract
Now that you have deployed the needed resources, you can create and deploy the
Asset Tokenization smart contract.
### 1. Create a smart contract set
To create a Smart contract set, navigate to the `Dev tools` section in the left
sidebar. From there, click on `Add a dev tool`, choose `Code Studio` and then
`Smart Contract Set`.
You will now be given the option to select a template. Choose the `Empty`
option. Create a name and select the same deployment plan as you did earlier.
For more information on how to add a smart contract set,
[see our Smart Contract Sets section](/building-with-settlemint/dev-tools/code-studio/smart-contract-sets/add-smart-contract-set)

### 2. Opening the integrated development environment (IDE)
To add and edit the smart contract code, you will use the IDE.

Once the resource has been deployed, select the `IDE` tab and then
`View in fullscreen mode`.
### 3. Adding the smart contract code
With the IDE open in fullscreen, create a new file for your Asset Tokenization
smart contract.

1. On the File Explorer on the left side, select the `Contracts` option.
2. Right-click and select `New File...`
3. Create a new file called `AssetTokenization.sol`
Before adding the contract code, you'll need to install the OpenZeppelin
contracts dependency. This provides the base contracts we'll inherit from for
features like upgradability and access control.
Open the terminal in the IDE and run:
```bash
npm install @openzeppelin/contracts-upgradeable
```
This package provides the base contracts we'll use like `UUPSUpgradeable`,
`OwnableUpgradeable`, and `ERC1155SupplyUpgradeable`.
After installing the dependency, copy and paste the Solidity code below:
Solidity Code
```solidity
// SPDX-License-Identifier: MIT
// SettleMint.com
pragma solidity ^0.8.13;

import "@openzeppelin/contracts-upgradeable/proxy/utils/UUPSUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/token/ERC1155/extensions/ERC1155SupplyUpgradeable.sol";

/**
 * @title AssetTokenization
 * @dev A contract for tokenizing assets using the ERC1155 standard with upgradeable functionality.
 */
contract AssetTokenization is Initializable, UUPSUpgradeable, ERC1155SupplyUpgradeable, OwnableUpgradeable {
    /**
     * @dev Struct representing an asset.
     * @param assetId Unique identifier number.
     * @param name Name of the asset.
     * @param symbol Symbol of the asset.
     * @param maxSupply Maximum number of tokens for the asset.
     * @param faceValue Initial value of the asset.
     * @param maturityTimestamp Maturity date as a unix timestamp.
     * @param assetUri URI for the asset metadata.
     */
    struct Asset {
        uint256 assetId;
        string name;
        string symbol;
        uint256 maxSupply;
        uint256 faceValue;
        uint256 maturityTimestamp;
        string assetUri;
    }

    /// @notice Mapping from asset ID to asset details.
    mapping(uint256 => Asset) public assetToDetails;

    /**
     * @dev Event emitted on asset transfer.
     * @param from Address from which the asset is transferred.
     * @param to Address to which the asset is transferred.
     * @param assetIds Array of asset IDs being transferred.
     * @param amounts Array of amounts of each asset being transferred.
     */
    event AssetTransferEvent(address indexed from, address indexed to, uint256[] assetIds, uint256[] amounts);

    /**
     * @dev Initializes the contract.
     */
    function initialize() external initializer {
        __ERC1155_init("");
        __Ownable_init(msg.sender);
        __UUPSUpgradeable_init();
    }

    /**
     * @dev Creates a new asset.
     * @param assetId Unique identifier for the asset.
     * @param name Name of the asset.
     * @param symbol Symbol of the asset.
     * @param maxSupply Maximum supply of the asset.
     * @param faceValue Initial value of the asset.
     * @param maturityTimestamp Maturity date of the asset as a unix timestamp.
     * @param assetUri URI for the asset metadata.
     */
    function createAsset(
        uint256 assetId,
        string memory name,
        string memory symbol,
        uint256 maxSupply,
        uint256 faceValue,
        uint256 maturityTimestamp,
        string memory assetUri
    ) external onlyOwner {
        require(assetToDetails[assetId].assetId != assetId, "Asset already exists");
        Asset memory asset = Asset(assetId, name, symbol, maxSupply, faceValue, maturityTimestamp, assetUri);
        assetToDetails[assetId] = asset;
    }

    /**
     * @dev Mints a specified amount of an asset to a recipient.
     * @param assetId ID of the asset to mint.
     * @param amounts Amount of the asset to mint.
     * @param recipient Address to receive the minted assets.
     */
    function mint(uint256 assetId, uint256 amounts, address recipient) external onlyOwner {
        require(assetToDetails[assetId].assetId == assetId, "Asset does not exist");
        require(totalSupply(assetId) + amounts <= assetToDetails[assetId].maxSupply, "Max supply exceeded");
        require(assetToDetails[assetId].maturityTimestamp > block.timestamp, "Asset is already matured");
        _mint(recipient, assetId, amounts, "");
    }

    /**
     * @dev Mints multiple assets in a batch to a recipient.
     * @param assetIds Array of asset IDs to mint.
     * @param amounts Array of amounts for each asset to mint.
     * @param recipient Address to receive the minted assets.
     */
    function mintBatch(uint256[] memory assetIds, uint256[] memory amounts, address recipient) public onlyOwner {
        uint256 length = assetIds.length;
        for (uint256 i = 0; i < length; i++) {
            require(assetToDetails[assetIds[i]].assetId == assetIds[i], "Asset does not exist");
            require(
                totalSupply(assetIds[i]) + amounts[i] <= assetToDetails[assetIds[i]].maxSupply,
                "Max supply exceeded"
            );
            require(assetToDetails[assetIds[i]].maturityTimestamp > block.timestamp, "Asset is already matured");
        }
        _mintBatch(recipient, assetIds, amounts, "");
    }

    /**
     * @dev Burns a specified amount of an asset from the sender.
     * @param assetId ID of the asset to burn.
     * @param amounts Amount of the asset to burn.
     */
    function burn(uint256 assetId, uint256 amounts) external {
        require(assetToDetails[assetId].assetId == assetId, "Asset does not exist");
        _burn(msg.sender, assetId, amounts);
    }

    /**
     * @dev Burns multiple assets in a batch from the sender.
     * @param assetIds Array of asset IDs to burn.
     * @param amounts Array of amounts for each asset to burn.
     */
    function burnBatch(uint256[] memory assetIds, uint256[] memory amounts) external {
        uint256 length = assetIds.length;
        for (uint256 i = 0; i < length; i++) {
            require(assetToDetails[assetIds[i]].assetId == assetIds[i], "Asset does not exist");
        }
        _burnBatch(msg.sender, assetIds, amounts);
    }

    /**
     * @dev Returns the URI for a specific asset ID.
     * @param id Asset ID to query the URI for.
     * @return URI of the specified asset ID.
     */
    function uri(uint256 id) public view override returns (string memory) {
        return assetToDetails[id].assetUri;
    }

    /**
     * @dev Updates the state on asset transfer and emits the transfer event.
     * @param from Address from which the asset is transferred.
     * @param to Address to which the asset is transferred.
     * @param assetIds Array of asset IDs being transferred.
     * @param amounts Array of amounts of each asset being transferred.
     */
    function _update(address from, address to, uint256[] memory assetIds, uint256[] memory amounts)
        internal
        override(ERC1155SupplyUpgradeable)
    {
        super._update(from, to, assetIds, amounts);
        emit AssetTransferEvent(from, to, assetIds, amounts);
    }

    /**
     * @dev Authorizes the upgrade of the contract to a new implementation.
     * @param newImplementation Address of the new implementation.
     */
    function _authorizeUpgrade(address newImplementation) internal override onlyOwner {}
}
```
### 4. Change the deployment configuration
With the code pasted in the IDE, you now need to change the deployment settings
to include the smart contract you have just created.

In the file explorer on the left, select the `ignition` folder. Then open the
`main.ts` file under `modules`.
Replace the content of `main.ts` with the code below:
Ignition Module Code
```typescript
// SPDX-License-Identifier: MIT
// SettleMint.com
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";

const AssetTokenizationModule = buildModule("AssetTokenizationModule", (m) => {
  const assetTokenization = m.contract("AssetTokenization");

  return { assetTokenization };
});

export default AssetTokenizationModule;
```
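Note that this module deploys the `AssetTokenization` implementation contract directly rather than behind a proxy, so `initialize()` still needs to be called once after deployment to set the contract owner (otherwise every `onlyOwner` function will revert). If the flow you import later does not do this for you, a short script can. Below is a minimal post-deployment sketch, assuming ethers v6; `RPC_URL`, `PRIVATE_KEY` and `CONTRACT_ADDRESS` are hypothetical placeholders for values you will gather later in this guide:

```typescript
// Post-deployment sketch: call initialize() once so the deployer becomes owner.
// RPC_URL, PRIVATE_KEY and CONTRACT_ADDRESS are placeholders (see Parts 1-3).
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("RPC_URL");
const signer = new ethers.Wallet("PRIVATE_KEY", provider);

// Only the two functions this sketch calls are needed in the ABI fragment.
const contract = new ethers.Contract(
  "CONTRACT_ADDRESS",
  ["function initialize()", "function owner() view returns (address)"],
  signer
);

const tx = await contract.initialize(); // reverts if already initialized
await tx.wait();
console.log("Contract owner is now:", await contract.owner());
```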
### 5. Deploy the contract
With those settings changed, you are now ready to compile and deploy your smart
contract.

To compile the smart contract:
1. Select the `Task Manager` on the left menu
2. Click `Foundry - Build` or `Hardhat - Build` to compile the contract
3. A terminal window below will show the status of the contract compilation
To deploy your smart contract:
1. Select the `Hardhat - Deploy to platform network` option
2. The terminal will open to show the status of deploying your contract
3. The terminal will show the contract address of your smart contract

The contract address can also be found in `deployed_addresses.json` in the
`deployments` folder created when deploying the smart contract code. You will
need it later for the integration.
## Part 3: connect the resources
### 1. Upload an image to ipfs
You will now upload the image to the IPFS storage service you deployed earlier.

Save the image above to your computer. It is what you will use to represent your
asset.

To upload this image to IPFS:
1. Click on Storage
2. Select File Manager
3. Select the `Import` option

After the image has been imported, select the `Share Link` option by clicking on
the 3 dots next to the file size.
Save this URL as you will use it later in this guide when building the
integration.
Select the `Set pinning` option. This will make sure your file remains on IPFS.

Choose the local node option and click `Apply`.
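As an optional sanity check, you can fetch the share link to confirm the pinned file resolves through the gateway. A minimal sketch, where `SHARE_LINK` is a hypothetical placeholder for the URL you copied above:

```typescript
// Confirm the pinned file is reachable through the IPFS gateway share link.
// SHARE_LINK is a placeholder for the URL copied in the previous step.
const SHARE_LINK = "https://<your-ipfs-gateway>/ipfs/<cid>";

const res = await fetch(SHARE_LINK);
console.log("Reachable:", res.ok, "| content-type:", res.headers.get("content-type"));
```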
### 2. Get the json-rpc endpoint
To connect to the network that you have created, you need to get your JSON-RPC
connection URL.

The URL can be found by:
1. Selecting `Blockchain nodes`
2. Clicking on the `Connect` tab
3. Copy the `JSON-RPC` URL
Save this URL as you will use it later in this guide when building the
integration.
### 3. Creating an access token
To connect to your node and storage, you will need an access token. We recommend
you use an application access token.
You can create an application access token by navigating to the application
dashboard, and then clicking on the `Access Tokens` section in the left sidebar.

You can now create an application access token with an expiration and the scopes
you want to use. For this guide, we recommend you create an access token scoped
to your node and storage.
You will now see your access token. Copy the token since you cannot see it
again! For more information on how to use access tokens,
[see our Access Tokens section](/building-with-settlemint/application-access-tokens).
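With the JSON-RPC URL and access token in hand, you can do a quick connectivity check before wiring up the Integration Studio. A minimal sketch, assuming the token is sent in an `x-auth-token` header; check the node's `Connect` tab for the exact authentication mechanism your deployment expects:

```typescript
// Query the node for its latest block number over JSON-RPC.
// JSON_RPC_URL and ACCESS_TOKEN are placeholders for the values saved above.
const JSON_RPC_URL = "https://<your-node>.settlemint.com";
const ACCESS_TOKEN = "<your-application-access-token>";

const res = await fetch(JSON_RPC_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-auth-token": ACCESS_TOKEN },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
});
const { result } = await res.json();
console.log("Current block:", parseInt(result, 16)); // result is a hex quantity, e.g. "0x1a"
```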
### 4. Setup integration studio deployment
The final step is to create a deployment of the `Integration Studio`.

To create an integration studio deployment:
1. Click on `Integration Tools` on the left menu
2. Name the Integration Studio
3. Choose the same deployment plan you have used in this guide

Open your Integration Studio by selecting the `Interface` tab and then opening
it in fullscreen mode.
For this guide, import the template below into the Integration Studio.

To import the below JSON file:
1. Click on the hamburger icon in the top right next to the `Deploy` button.
2. Select the import option
3. Paste the below JSON code into the window
JSON Code
```json
[ { "id": "8154b1dd0912e484", "type": "function", "z": "a781da6f697711d2",
"name": "Set Global Variables", "func": "const glbVar = {\n privateKey:
\"PRIVATE_KEY\",\n privateKeyAddress: \"ADDRESS\",\n smartContract:
\"ADDRESS\",\n accessToken: \"ACCESS_TOKEN\",\n rpcEndpoint: \"RCP_ENDPOINT\",\n
abi: [\n {\n \"inputs\": [\n {\n \"internalType\": \"address\",\n \"name\":
\"target\",\n \"type\": \"address\"\n }\n ],\n \"name\": \"AddressEmptyCode\",\n
\"type\": \"error\"\n },\n {\n \"inputs\": [\n {\n \"internalType\":
\"address\",\n \"name\": \"sender\",\n \"type\": \"address\"\n },\n {\n
\"internalType\": \"uint256\",\n \"name\": \"balance\",\n \"type\":
\"uint256\"\n },\n {\n \"internalType\": \"uint256\",\n \"name\": \"needed\",\n
\"type\": \"uint256\"\n },\n {\n \"internalType\": \"uint256\",\n \"name\":
\"tokenId\",\n \"type\": \"uint256\"\n }\n ],\n \"name\":
\"ERC1155InsufficientBalance\",\n \"type\": \"error\"\n },\n {\n \"inputs\": [\n
{\n \"internalType\": \"address\",\n \"name\": \"approver\",\n \"type\":
\"address\"\n }\n ],\n \"name\": \"ERC1155InvalidApprover\",\n \"type\":
\"error\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n
\"name\": \"idsLength\",\n \"type\": \"uint256\"\n },\n {\n \"internalType\":
\"uint256\",\n \"name\": \"valuesLength\",\n \"type\": \"uint256\"\n }\n ],\n
\"name\": \"ERC1155InvalidArrayLength\",\n \"type\": \"error\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"address\",\n \"name\": \"operator\",\n
\"type\": \"address\"\n }\n ],\n \"name\": \"ERC1155InvalidOperator\",\n
\"type\": \"error\"\n },\n {\n \"inputs\": [\n {\n \"internalType\":
\"address\",\n \"name\": \"receiver\",\n \"type\": \"address\"\n }\n ],\n
\"name\": \"ERC1155InvalidReceiver\",\n \"type\": \"error\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"address\",\n \"name\": \"sender\",\n
\"type\": \"address\"\n }\n ],\n \"name\": \"ERC1155InvalidSender\",\n \"type\":
\"error\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"address\",\n
\"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n \"internalType\":
\"address\",\n \"name\": \"owner\",\n \"type\": \"address\"\n }\n ],\n \"name\":
\"ERC1155MissingApprovalForAll\",\n \"type\": \"error\"\n },\n {\n \"inputs\":
[\n {\n \"internalType\": \"address\",\n \"name\": \"implementation\",\n
\"type\": \"address\"\n }\n ],\n \"name\": \"ERC1967InvalidImplementation\",\n
\"type\": \"error\"\n },\n {\n \"inputs\": [],\n \"name\":
\"ERC1967NonPayable\",\n \"type\": \"error\"\n },\n {\n \"inputs\": [],\n
\"name\": \"FailedInnerCall\",\n \"type\": \"error\"\n },\n {\n \"inputs\":
[],\n \"name\": \"InvalidInitialization\",\n \"type\": \"error\"\n },\n {\n
\"inputs\": [],\n \"name\": \"NotInitializing\",\n \"type\": \"error\"\n },\n
{\n \"inputs\": [\n {\n \"internalType\": \"address\",\n \"name\": \"owner\",\n
\"type\": \"address\"\n }\n ],\n \"name\": \"OwnableInvalidOwner\",\n \"type\":
\"error\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"address\",\n
\"name\": \"account\",\n \"type\": \"address\"\n }\n ],\n \"name\":
\"OwnableUnauthorizedAccount\",\n \"type\": \"error\"\n },\n {\n \"inputs\":
[],\n \"name\": \"UUPSUnauthorizedCallContext\",\n \"type\": \"error\"\n },\n
{\n \"inputs\": [\n {\n \"internalType\": \"bytes32\",\n \"name\": \"slot\",\n
\"type\": \"bytes32\"\n }\n ],\n \"name\": \"UUPSUnsupportedProxiableUUID\",\n
\"type\": \"error\"\n },\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n
\"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"account\",\n
\"type\": \"address\"\n },\n {\n \"indexed\": true,\n \"internalType\":
\"address\",\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n
\"indexed\": false,\n \"internalType\": \"bool\",\n \"name\": \"approved\",\n
\"type\": \"bool\"\n }\n ],\n \"name\": \"ApprovalForAll\",\n \"type\":
\"event\"\n },\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\":
true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\":
\"address\"\n },\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n
\"name\": \"to\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n
\"internalType\": \"uint256[]\",\n \"name\": \"assetIds\",\n \"type\":
\"uint256[]\"\n },\n {\n \"indexed\": false,\n \"internalType\":
\"uint256[]\",\n \"name\": \"amounts\",\n \"type\": \"uint256[]\"\n }\n ],\n
\"name\": \"AssetTransferEvent\",\n \"type\": \"event\"\n },\n {\n
\"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": false,\n
\"internalType\": \"uint64\",\n \"name\": \"version\",\n \"type\": \"uint64\"\n
}\n ],\n \"name\": \"Initialized\",\n \"type\": \"event\"\n },\n {\n
\"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n
\"internalType\": \"address\",\n \"name\": \"previousOwner\",\n \"type\":
\"address\"\n },\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n
\"name\": \"newOwner\",\n \"type\": \"address\"\n }\n ],\n \"name\":
\"OwnershipTransferred\",\n \"type\": \"event\"\n },\n {\n \"anonymous\":
false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\":
\"address\",\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n
\"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n
\"type\": \"address\"\n },\n {\n \"indexed\": true,\n \"internalType\":
\"address\",\n \"name\": \"to\",\n \"type\": \"address\"\n },\n {\n \"indexed\":
false,\n \"internalType\": \"uint256[]\",\n \"name\": \"ids\",\n \"type\":
\"uint256[]\"\n },\n {\n \"indexed\": false,\n \"internalType\":
\"uint256[]\",\n \"name\": \"values\",\n \"type\": \"uint256[]\"\n }\n ],\n
\"name\": \"TransferBatch\",\n \"type\": \"event\"\n },\n {\n \"anonymous\":
false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\":
\"address\",\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n
\"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n
\"type\": \"address\"\n },\n {\n \"indexed\": true,\n \"internalType\":
\"address\",\n \"name\": \"to\",\n \"type\": \"address\"\n },\n {\n \"indexed\":
false,\n \"internalType\": \"uint256\",\n \"name\": \"id\",\n \"type\":
\"uint256\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n
\"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\":
\"TransferSingle\",\n \"type\": \"event\"\n },\n {\n \"anonymous\": false,\n
\"inputs\": [\n {\n \"indexed\": false,\n \"internalType\": \"string\",\n
\"name\": \"value\",\n \"type\": \"string\"\n },\n {\n \"indexed\": true,\n
\"internalType\": \"uint256\",\n \"name\": \"id\",\n \"type\": \"uint256\"\n }\n
],\n \"name\": \"URI\",\n \"type\": \"event\"\n },\n {\n \"anonymous\": false,\n
\"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n
\"name\": \"implementation\",\n \"type\": \"address\"\n }\n ],\n \"name\":
\"Upgraded\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\":
\"UPGRADE_INTERFACE_VERSION\",\n \"outputs\": [\n {\n \"internalType\":
\"string\",\n \"name\": \"\",\n \"type\": \"string\"\n }\n ],\n
\"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\":
[\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\":
\"uint256\"\n }\n ],\n \"name\": \"assetToDetails\",\n \"outputs\": [\n {\n
\"internalType\": \"uint256\",\n \"name\": \"assetId\",\n \"type\":
\"uint256\"\n },\n {\n \"internalType\": \"string\",\n \"name\": \"name\",\n
\"type\": \"string\"\n },\n {\n \"internalType\": \"string\",\n \"name\":
\"symbol\",\n \"type\": \"string\"\n },\n {\n \"internalType\": \"uint256\",\n
\"name\": \"maxSupply\",\n \"type\": \"uint256\"\n },\n {\n \"internalType\":
\"uint256\",\n \"name\": \"faceValue\",\n \"type\": \"uint256\"\n },\n {\n
\"internalType\": \"uint256\",\n \"name\": \"maturityTimestamp\",\n \"type\":
\"uint256\"\n },\n {\n \"internalType\": \"string\",\n \"name\": \"assetUri\",\n
\"type\": \"string\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\":
\"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"address\",\n
\"name\": \"account\",\n \"type\": \"address\"\n },\n {\n \"internalType\":
\"uint256\",\n \"name\": \"id\",\n \"type\": \"uint256\"\n }\n ],\n \"name\":
\"balanceOf\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\":
\"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n
\"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\":
\"address[]\",\n \"name\": \"accounts\",\n \"type\": \"address[]\"\n },\n {\n
\"internalType\": \"uint256[]\",\n \"name\": \"ids\",\n \"type\":
\"uint256[]\"\n }\n ],\n \"name\": \"balanceOfBatch\",\n \"outputs\": [\n {\n
\"internalType\": \"uint256[]\",\n \"name\": \"\",\n \"type\": \"uint256[]\"\n
}\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"assetId\",\n
\"type\": \"uint256\"\n },\n {\n \"internalType\": \"uint256\",\n \"name\":
\"amounts\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"burn\",\n
\"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\":
\"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256[]\",\n
\"name\": \"assetIds\",\n \"type\": \"uint256[]\"\n },\n {\n \"internalType\":
\"uint256[]\",\n \"name\": \"amounts\",\n \"type\": \"uint256[]\"\n }\n ],\n
\"name\": \"burnBatch\",\n \"outputs\": [],\n \"stateMutability\":
\"nonpayable\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n
\"internalType\": \"uint256\",\n \"name\": \"assetId\",\n \"type\":
\"uint256\"\n },\n {\n \"internalType\": \"string\",\n \"name\": \"name\",\n
\"type\": \"string\"\n },\n {\n \"internalType\": \"string\",\n \"name\":
\"symbol\",\n \"type\": \"string\"\n },\n {\n \"internalType\": \"uint256\",\n
\"name\": \"maxSupply\",\n \"type\": \"uint256\"\n },\n {\n \"internalType\":
\"uint256\",\n \"name\": \"faceValue\",\n \"type\": \"uint256\"\n },\n {\n
\"internalType\": \"uint256\",\n \"name\": \"maturityTimestamp\",\n \"type\":
\"uint256\"\n },\n {\n \"internalType\": \"string\",\n \"name\": \"assetUri\",\n
\"type\": \"string\"\n }\n ],\n \"name\": \"createAsset\",\n \"outputs\": [],\n
\"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"id\",\n
\"type\": \"uint256\"\n }\n ],\n \"name\": \"exists\",\n \"outputs\": [\n {\n
\"internalType\": \"bool\",\n \"name\": \"\",\n \"type\": \"bool\"\n }\n ],\n
\"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\":
[],\n \"name\": \"initialize\",\n \"outputs\": [],\n \"stateMutability\":
\"nonpayable\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n
\"internalType\": \"address\",\n \"name\": \"account\",\n \"type\":
\"address\"\n },\n {\n \"internalType\": \"address\",\n \"name\":
\"operator\",\n \"type\": \"address\"\n }\n ],\n \"name\":
\"isApprovedForAll\",\n \"outputs\": [\n {\n \"internalType\": \"bool\",\n
\"name\": \"\",\n \"type\": \"bool\"\n }\n ],\n \"stateMutability\": \"view\",\n
\"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\":
\"uint256\",\n \"name\": \"assetId\",\n \"type\": \"uint256\"\n },\n {\n
\"internalType\": \"uint256\",\n \"name\": \"amounts\",\n \"type\":
\"uint256\"\n },\n {\n \"internalType\": \"address\",\n \"name\":
\"recipient\",\n \"type\": \"address\"\n }\n ],\n \"name\": \"mint\",\n
\"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\":
\"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256[]\",\n
\"name\": \"assetIds\",\n \"type\": \"uint256[]\"\n },\n {\n \"internalType\":
\"uint256[]\",\n \"name\": \"amounts\",\n \"type\": \"uint256[]\"\n },\n {\n
\"internalType\": \"address\",\n \"name\": \"recipient\",\n \"type\":
\"address\"\n }\n ],\n \"name\": \"mintBatch\",\n \"outputs\": [],\n
\"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [],\n \"name\": \"owner\",\n \"outputs\": [\n {\n \"internalType\":
\"address\",\n \"name\": \"\",\n \"type\": \"address\"\n }\n ],\n
\"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\":
[],\n \"name\": \"proxiableUUID\",\n \"outputs\": [\n {\n \"internalType\":
\"bytes32\",\n \"name\": \"\",\n \"type\": \"bytes32\"\n }\n ],\n
\"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\":
[],\n \"name\": \"renounceOwnership\",\n \"outputs\": [],\n \"stateMutability\":
\"nonpayable\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n
\"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n
},\n {\n \"internalType\": \"address\",\n \"name\": \"to\",\n \"type\":
\"address\"\n },\n {\n \"internalType\": \"uint256[]\",\n \"name\": \"ids\",\n
\"type\": \"uint256[]\"\n },\n {\n \"internalType\": \"uint256[]\",\n \"name\":
\"values\",\n \"type\": \"uint256[]\"\n },\n {\n \"internalType\": \"bytes\",\n
\"name\": \"data\",\n \"type\": \"bytes\"\n }\n ],\n \"name\":
\"safeBatchTransferFrom\",\n \"outputs\": [],\n \"stateMutability\":
\"nonpayable\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n
\"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n
},\n {\n \"internalType\": \"address\",\n \"name\": \"to\",\n \"type\":
\"address\"\n },\n {\n \"internalType\": \"uint256\",\n \"name\": \"id\",\n
\"type\": \"uint256\"\n },\n {\n \"internalType\": \"uint256\",\n \"name\":
\"value\",\n \"type\": \"uint256\"\n },\n {\n \"internalType\": \"bytes\",\n
\"name\": \"data\",\n \"type\": \"bytes\"\n }\n ],\n \"name\":
\"safeTransferFrom\",\n \"outputs\": [],\n \"stateMutability\":
\"nonpayable\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n
\"internalType\": \"address\",\n \"name\": \"operator\",\n \"type\":
\"address\"\n },\n {\n \"internalType\": \"bool\",\n \"name\": \"approved\",\n
\"type\": \"bool\"\n }\n ],\n \"name\": \"setApprovalForAll\",\n \"outputs\":
[],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"bytes4\",\n \"name\": \"interfaceId\",\n
\"type\": \"bytes4\"\n }\n ],\n \"name\": \"supportsInterface\",\n \"outputs\":
[\n {\n \"internalType\": \"bool\",\n \"name\": \"\",\n \"type\": \"bool\"\n }\n
],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [],\n \"name\": \"totalSupply\",\n \"outputs\": [\n {\n
\"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n
],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"id\",\n
\"type\": \"uint256\"\n }\n ],\n \"name\": \"totalSupply\",\n \"outputs\": [\n
{\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n
}\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"address\",\n \"name\": \"newOwner\",\n
\"type\": \"address\"\n }\n ],\n \"name\": \"transferOwnership\",\n \"outputs\":
[],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n },\n {\n
\"inputs\": [\n {\n \"internalType\": \"address\",\n \"name\":
\"newImplementation\",\n \"type\": \"address\"\n },\n {\n \"internalType\":
\"bytes\",\n \"name\": \"data\",\n \"type\": \"bytes\"\n }\n ],\n \"name\":
\"upgradeToAndCall\",\n \"outputs\": [],\n \"stateMutability\": \"payable\",\n
\"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\":
\"uint256\",\n \"name\": \"id\",\n \"type\": \"uint256\"\n }\n ],\n \"name\":
\"uri\",\n \"outputs\": [\n {\n \"internalType\": \"string\",\n \"name\":
\"\",\n \"type\": \"string\"\n }\n ],\n \"stateMutability\": \"view\",\n
\"type\": \"function\"\n }\n ]\n\n}\n\nglobal.set('privateKey',
glbVar.privateKey);\nglobal.set('privateKeyAddress',glbVar.privateKeyAddress)\nglobal.set('contract',
glbVar.smartContract);\nglobal.set('accessToken',
glbVar.accessToken);\nglobal.set('rpcEndpoint',
glbVar.rpcEndpoint);\nglobal.set('abi',glbVar.abi)\n\nreturn msg;", "outputs":
1, "timeout": "", "noerr": 0, "initialize": "", "finalize": "", "libs": [], "x":
460, "y": 80, "wires": [ [ "a7c63a0fd0d1a779" ] ] } ]
```
### 5. Interact with the smart contract
The Integration Studio allows you to interact with your smart contract and add business logic.
Go to the newly created `Asset Tokenisation` tab in the Integration Studio.

The first step is to set the global variables of the integration.

To do this, click on the middle item in the diagram labeled `Set Global Variables`. There you will see a variable called `glbVar`. Here is where you will enter the information to start interacting with your smart contract.

1. **privateKey** - Enter your private key that you created in [Part 1 / Step 4](#4-deploy-a-private-key)
2. **privateKeyAddress** - The address created after completing [Part 1 / Step 4](#4-deploy-a-private-key)
3. **smartContract** - The address of your deployed smart contract after completing [Part 2 / Step 5](#5-deploy-the-contract)
4. **accessToken** - The API key created when completing [Part 3 / Step 3](#3-creating-an-access-token)
5. **rpcEndpoint** - The JSON RPC URL that was shown when completing [Part 3 / Step 2](#2-get-the-json-rpc-endpoint)
With this information entered, click on the blue square next to the `Inject` item.
Next, create an asset by setting an asset name, an asset symbol, and an asset URI.

To create an asset, double click on the `Inject` option next to the `Initialise Asset` item.
In this window you can set:
**msg.assetName** - Bond
**msg.assetSymbol** - BND
**msg.assetUri** - The IPFS URL of the asset you created after completing [Part 3 / Step 1](#1-upload-an-image-to-ipfs)
From here you can now click on the other `inject` options to:
1. Create an Asset
2. View the Asset
3. Mint the Asset
4. View the Balance

To see the interactions with your smart contract, choose the `Debug` option under the Deploy button.
## Great job
You have now created and deployed an Asset Tokenization smart contract using SettleMint!
Find other guides in our [Guide Library](/developer-guides/guide-library) to help you build with SettleMint.
file: ./content/docs/use-case-guides/attestation-service.mdx
meta: {
"title": "Ethereum attestation indexer",
"description": "A comprehensive guide to implementing and using the Ethereum Attestation Service (EAS) for creating, managing, and verifying on-chain attestations",
"keywords": [
"ethereum",
"eas",
"attestation",
"blockchain",
"web3",
"smart contracts",
"verification",
"schema registry",
"resolver"
]
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## 1. Introduction to EAS
### What is EAS?
Ethereum Attestation Service (EAS) is a decentralized protocol that allows users
to create, verify, and manage attestations (verifiable claims) on the Ethereum
blockchain. It provides a standardized way to make claims about data,
identities, or events that can be independently verified by others.
### Why use EAS?
* **Decentralization**: No central authority is needed to verify claims.
* **Interoperability**: Standardized schemas allow for cross-platform
compatibility.
* **Security**: Attestations are secured by the Ethereum blockchain.
* **Transparency**: All attestations are publicly verifiable.
***
## 2. Key concepts
### Core components
1. **SchemaRegistry**:
* A smart contract that stores and manages schemas.
* Schemas define the structure and data types of attestations, ensuring that
all attestations conform to a predefined format.
2. **EAS Contract**:
* The main contract that handles the creation and management of attestations.
* It interacts with the `SchemaRegistry` to ensure that attestations adhere
to the defined schemas.
3. **Attestations**:
* Verifiable claims stored on the blockchain.
* Created and managed by the `EAS Contract`.
4. **Resolvers**:
* Optional contracts that provide additional validation logic for
attestations.
***
## 3. How EAS works
```mermaid
graph TD
SchemaRegistry["SchemaRegistry"]
UsersSystems["Users/Systems"]
EASContract["EAS Contract"]
Verifiers["Verifiers"]
Attestations["Attestations"]
SchemaRegistry -- "Defines Data Structure" --> EASContract
UsersSystems -- "Interact" --> EASContract
EASContract -- "Creates" --> Attestations
Verifiers -- "Verify" --> Attestations
```
### Workflow
1. **Schema Definition**: Start by defining a schema using the
**SchemaRegistry** contract.
2. **Attestation Creation**: Use the **EAS Contract** to create attestations
based on the schema.
3. **Optional Validation**: Resolvers can be used for further validation logic.
4. **On-chain Storage**: Attestations are securely stored and retrievable
on-chain.
***
## 4. Contract deployment
Before deploying the EAS contracts, you must add the smart contract set to your
project.
### Adding the smart contract set
1. **Navigate to the Dev tools Section**: Go to the application dashboard of the
application where you want to deploy the EAS contracts, then navigate to the
**Dev tools** section in the left sidebar.
2. **Select the Attestation Service Set**: From there, click on **Add a dev
tool**, choose **Code Studio** and then **Smart Contract Set**. Choose the
**Attestation Service** template.
3. **Customize**: Modify the set as needed for your specific project.
4. **Save**: Save the configuration.
For detailed instructions, visit the
[Smart Contract Sets Documentation](/platform-components/dev-tools/code-studio).
***
### Deploying the contracts
Once the contract set is ready, you can deploy it using either the **Task Menu**
in the SettleMint IDE or via the **Terminal**.
#### Deploy using the task menu
1. **Open the Task Menu**:
* In the SettleMint Integrated IDE, access the **Task Menu** from the
sidebar.
2. **Select Deployment Task**:
* Choose the **Hardhat - Reset & Deploy to platform network** task.
3. **Monitor Deployment Logs**:
* The terminal output will display the deployment progress and contract
addresses.
#### Deploy using the terminal
1. **Prepare the Deployment Module**:\
Ensure the module is defined in `ignition/modules/main.ts`:
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomEASModule = buildModule("EASDeployment", (m) => {
const schemaRegistry = m.contract("SchemaRegistry", [], {});
const EAS = m.contract("EAS", [schemaRegistry], {});
return { schemaRegistry, EAS };
});
export default CustomEASModule;
```
2. **Run the Deployment Command**:\
Execute the following command in your terminal:
```bash
bunx settlemint scs hardhat deploy remote --reset -m ignition/modules/main.ts
```
3. **Monitor Deployment Logs**:
* The terminal output will display the deployment progress and contract
addresses.
***
## 5. Registering a schema
### Example use case
Imagine building a service where users prove ownership of their social media
profiles. The schema might include:
* **Username**: A unique identifier for the user.
* **Platform**: The social media platform name (e.g., Twitter).
* **Handle**: The user's handle on that platform (e.g., `@coolcoder123`).
### Example
```javascript
const { ethers } = require("ethers");
// Configuration object for network and contract details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // The network endpoint (e.g., Ethereum mainnet/testnet)
registryAddress: "YOUR_SCHEMA_REGISTRY_ADDRESS_HERE", // Where the SchemaRegistry contract lives
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key (keep this secret!)
};
// Create connection to blockchain and setup contract interaction
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const schemaRegistry = new ethers.Contract(
config.registryAddress,
[
// This event helps us track when new schemas are registered
"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)",
// This function lets us register new schemas
"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)",
],
signer
);
async function registerSchema() {
try {
// Define what data fields our attestations will contain
const schema = "string username, string platform, string handle";
const resolverAddress = ethers.ZeroAddress; // No special validation needed
const revocable = true; // Attestations can be revoked if needed
console.log("🚀 Registering schema for social media ownership...");
// Send the transaction to create our schema
const tx = await schemaRegistry.register(
schema,
resolverAddress,
revocable
);
const receipt = await tx.wait(); // Wait for blockchain confirmation
// Get our schema's unique ID from the transaction
const schemaUID = receipt.logs[0].topics[1];
console.log("✅ Schema registered successfully! UID:", schemaUID);
} catch (error) {
console.error("❌ Error registering schema:", error.message);
}
}
registerSchema();
```
***
## 6. Creating attestations
### Example use case
Let's create an attestation that proves:
* **Username**: `awesome_developer`
* **Platform**: `GitHub`
* **Handle**: `@devmaster`
### Example
```javascript
const { EAS, SchemaEncoder } = require("@ethereum-attestation-service/eas-sdk");
const { ethers } = require("ethers");
// Setup our connection details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract address
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key
schemaUID: "YOUR_SCHEMA_UID_HERE", // The UID from when we registered our schema
};
// Connect to the blockchain
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const eas = new EAS(config.easAddress);
eas.connect(signer);
// Create an encoder that matches our schema structure
const schemaEncoder = new SchemaEncoder(
"string username, string platform, string handle"
);
// The actual data we want to attest to
const attestationData = [
{ name: "username", value: "awesome_developer", type: "string" },
{ name: "platform", value: "GitHub", type: "string" },
{ name: "handle", value: "@devmaster", type: "string" },
];
async function createAttestation() {
try {
// Convert our data into the format EAS expects
const encodedData = schemaEncoder.encodeData(attestationData);
// Create the attestation
const tx = await eas.attest({
schema: config.schemaUID,
data: {
recipient: ethers.ZeroAddress, // Public attestation (no specific recipient)
expirationTime: 0n, // Never expires
revocable: true, // Can be revoked later if needed
data: encodedData, // Our encoded attestation data
},
});
// Wait for confirmation and get the result
const newAttestationUID = await tx.wait(); // tx.wait() resolves to the new attestation UID
console.log(
"✅ Attestation created successfully! UID:",
newAttestationUID
);
} catch (error) {
console.error("❌ Error creating attestation:", error.message);
}
}
createAttestation();
```
## 7. Verifying attestations
Verification is essential to ensure the integrity and authenticity of
attestations. You can verify attestations using one of the following methods:
1. **Using the EAS SDK**: Perform lightweight, off-chain verification
programmatically.
2. **Using a Custom Smart Contract Resolver**: Add custom on-chain validation
logic for attestations.
### Choose your verification method
#### Verification using the EAS SDK
The EAS SDK provides an easy way to verify attestations programmatically, making
it ideal for off-chain use cases.
##### Example
```javascript
const { ethers } = require("ethers");
const { EAS } = require("@ethereum-attestation-service/eas-sdk");
// Basic configuration for connecting to the network
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract
};
async function verifyAttestation(attestationUID) {
// Setup our blockchain connection
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const eas = new EAS(config.easAddress);
eas.connect(provider);
console.log("🔍 Verifying attestation:", attestationUID);
// Try to find the attestation on the blockchain
const attestation = await eas.getAttestation(attestationUID);
// Check if we found anything
if (!attestation) {
console.error("❌ Attestation not found");
return;
}
// Show the attestation details
console.log("✅ Attestation Details:");
console.log("Attester:", attestation.attester); // Who created this attestation
console.log("Data:", attestation.data); // The actual attested data
console.log("Revoked:", attestation.revoked ? "Yes" : "No"); // Is it still valid?
}
// Replace with your attestation UID
verifyAttestation("YOUR_ATTESTATION_UID_HERE");
```
##### Key points
* **Lightweight**: Suitable for most off-chain verifications.
* **No Custom Logic**: Fetches and verifies data stored in EAS.
#### Verification using a custom smart contract resolver
Custom resolvers enable on-chain validation with additional business rules or
logic.
##### Example: trusted attester verification
The following smart contract resolver ensures that attestations are valid only
if made by trusted attesters.
###### Smart contract code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
// This contract checks if attestations come from trusted sources
contract CustomResolver {
// Keep track of which addresses we trust to make attestations
mapping(address => bool) public trustedAttesters;
// When deploying, we set up our initial list of trusted attesters
constructor(address[] memory initialAttesters) {
for (uint256 i = 0; i < initialAttesters.length; i++) {
trustedAttesters[initialAttesters[i]] = true;
}
}
// EAS calls this function before accepting an attestation
function validate(
bytes32 attestationUID, // Unique ID of the attestation
address attester, // Who's trying to create the attestation
bytes memory data // The attestation data (unused in this example)
) external view returns (bool) {
// Only allow attestations from addresses we trust
if (!trustedAttesters[attester]) {
return false;
}
return true;
}
}
```
###### Deploying the resolver with hardhat ignition
Deploy this custom resolver using the Hardhat Ignition framework.
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomResolverDeployment = buildModule("CustomResolver", (m) => {
const initialAttesters = ["0xTrustedAddress1", "0xTrustedAddress2"];
const resolver = m.contract("CustomResolver", [initialAttesters], {});
return { resolver };
});
export default CustomResolverDeployment;
```
Run the following command in your terminal to deploy:
```bash
bunx settlemint scs hardhat deploy remote -m ignition/modules/main.ts
```
###### Linking the resolver to a schema
When registering a schema, include the resolver's address for on-chain
validation.
```javascript
const resolverAddress = "YOUR_DEPLOYED_RESOLVER_ADDRESS";
const schema = "string username, string platform, string handle";
const tx = await schemaRegistry.register(schema, resolverAddress, true);
const receipt = await tx.wait();
const schemaUID = receipt.logs[0].topics[1]; // UID emitted in the Registered event
console.log("✅ Schema with resolver registered! UID:", schemaUID);
```
###### Validating attestations with the resolver
To validate an attestation, call the `validate` function of your deployed
resolver contract.
```javascript
const resolver = new ethers.Contract(
"YOUR_RESOLVER_ADDRESS",
["function validate(bytes32, address, bytes) external view returns (bool)"],
provider
);
const isValid = await resolver.validate(
"YOUR_ATTESTATION_UID",
"ATTESTER_ADDRESS",
"ATTESTATION_DATA"
);
console.log("✅ Is the attestation valid?", isValid);
```
##### Key points
* **Customizable Rules**: Add your own validation logic to the resolver.
* **On-Chain Validation**: Ensures attestations meet specific conditions before
they are considered valid.
***
### When to use each method?
* **EAS SDK**: Best for off-chain applications where simple validation suffices.
* **Custom Resolver**: Use for on-chain validation with additional rules, such
as verifying trusted attesters or specific data formats.
## 8. Using the attestation indexer
### Setup attestation indexer
1. Go to your application's **Middleware** section
2. Click "Add a middleware"
3. Select "Attestation Indexer"
4. Configure with your contract addresses:
* EAS Contract: `EAS contract address`
* Schema Registry: `Schema Registry contract address`
### Querying attestations
#### Connection details
After deployment:
1. Go to your Attestation Indexer
2. Click "Connections" tab
3. You'll find your GraphQL endpoint URL
4. Create an Application Access Token (Settings → Application Access Tokens)
#### Using the GraphQL UI
The indexer provides a built-in GraphQL UI where you can test queries. Click
"GraphQL UI" in your indexer to access it.
#### Example query implementation
```javascript
// Example fetch request to query attestations
async function queryAttestations(schemaId) {
const response = await fetch("YOUR_INDEXER_URL", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: "Bearer YOUR_APP_TOKEN",
},
body: JSON.stringify({
query: `{
attestations(
where: {
schemaId: {
equals: "${schemaId}"
}
}
) {
id
attester
recipient
revoked
data
}
}`,
}),
});
const data = await response.json();
return data.data.attestations;
}
// Usage example:
const schemaId = "YOUR_SCHEMA_ID"; // From the registration step
const attestations = await queryAttestations(schemaId);
console.log("Attestations:", attestations);
```
## 9. Integration Studio implementation
For those using Integration Studio, we've created a complete flow implementation
of the EAS interactions. This flow automates the entire process we covered in
this guide.
### Flow overview
The flow includes:
* EAS Configuration Setup
* Schema Registration
* Attestation Creation
* Attestation Verification
* Debug nodes for monitoring results
### Installation
1. In Integration Studio, go to Import → Clipboard
2. Paste the flow JSON below
3. Click Import
The complete Node-RED flow JSON is below; copy it to your clipboard:
```json
[
{
"id": "eas_flow",
"type": "tab",
"label": "EAS Attestation Flow",
"disabled": false,
"info": ""
},
{
"id": "setup_inject",
"type": "inject",
"z": "eas_flow",
"name": "Inputs: RpcUrl, Registry address,Eas address, Private key",
"props": [
{
"p": "rpcUrl",
"v": "RPC-URL/API-KEY",
"vt": "str"
},
{
"p": "registryAddress",
"v": "REGISTERY-ADDRESS",
"vt": "str"
},
{
"p": "easAddress",
"v": "EAS-ADDRESS",
"vt": "str"
},
{
"p": "privateKey",
"v": "PRIVATE-KEY",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 250,
"y": 120,
"wires": [["setup_function"]]
},
{
"id": "setup_function",
"type": "function",
"z": "eas_flow",
"name": "Setup Global Variables",
"func": "// Initialize provider with specific network parameters\nconst provider = new ethers.JsonRpcProvider(msg.rpcUrl)\n\nconst signer = new ethers.Wallet(msg.privateKey, provider);\n\n// Initialize EAS with specific gas settings\nconst EAS = new eassdk.EAS(msg.easAddress);\neas.connect(signer);\n\n// Store in global context\nglobal.set('provider', provider);\nglobal.set('signer', signer);\nglobal.set('eas', eas);\nglobal.set('registryAddress', msg.registryAddress);\n\nmsg.payload = 'EAS Configuration Initialized';\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
},
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
}
],
"x": 580,
"y": 120,
"wires": [["setup_debug"]]
},
{
"id": "register_inject",
"type": "inject",
"z": "eas_flow",
"name": "Register Schema",
"props": [],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 120,
"y": 260,
"wires": [["register_function"]]
},
{
"id": "register_function",
"type": "function",
"z": "eas_flow",
"name": "Register Schema",
"func": "// Get global variables set in init\nconst signer = global.get('signer');\nconst registryAddress = global.get('registryAddress');\n\n// Initialize SchemaRegistry contract\nconst schemaRegistry = new ethers.Contract(\n registryAddress,\n [\n \"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)\",\n \"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)\"\n ],\n signer\n);\n\n// Define what data fields our attestations will contain\nconst schema = \"string username, string platform, string handle\";\nconst resolverAddress = \"0x0000000000000000000000000000000000000000\"; // No special validation needed\nconst revocable = true; // Attestations can be revoked if needed\n\ntry {\n const tx = await schemaRegistry.register(schema, resolverAddress, revocable);\n const receipt = await tx.wait();\n\n const schemaUID = receipt.logs[0].topics[1];\n // Store schemaUID in global context for later use\n global.set('schemaUID', schemaUID);\n\n msg.payload = {\n success: true,\n schemaUID: schemaUID,\n message: \"Schema registered successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
}
],
"x": 310,
"y": 260,
"wires": [["register_debug"]]
},
{
"id": "create_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: Schema uid",
"props": [
{
"p": "schemaUID",
"v": "SCHEMA-UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 130,
"y": 400,
"wires": [["create_function"]]
},
{
"id": "create_function",
"type": "function",
"z": "eas_flow",
"name": "Create Attestation",
"func": "// Get global variables\nconst EAS = global.get('eas');\nconst schemaUID = msg.schemaUID;\n\n// Create an encoder that matches our schema structure\nconst schemaEncoder = new eassdk.SchemaEncoder(\"string username, string platform, string handle\");\n\n// The actual data we want to attest to\nconst attestationData = [\n { name: \"username\", value: \"awesome_developer\", type: \"string\" },\n { name: \"platform\", value: \"GitHub\", type: \"string\" },\n { name: \"handle\", value: \"@devmaster\", type: \"string\" }\n];\n\ntry {\n // Convert our data into the format EAS expects\n const encodedData = schemaEncoder.encodeData(attestationData);\n\n // Create the attestation\n const tx = await eas.attest({\n schema: schemaUID,\n data: {\n recipient: \"0x0000000000000000000000000000000000000000\", // Public attestation\n expirationTime: 0, // Never expires\n revocable: true, // Can be revoked later if needed\n data: encodedData // Our encoded attestation data\n }\n });\n\n // Wait for confirmation and get the result\n const receipt = await tx.wait();\n\n // Store attestation UID for later verification\n global.set('attestationUID', receipt.attestationUID);\n\n msg.payload = {\n success: true,\n attestationUID: receipt,\n message: \"Attestation created successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 330,
"y": 400,
"wires": [["create_debug"]]
},
{
"id": "verify_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: Attestation UID",
"props": [
{
"p": "attestationUID",
"v": "Attestation UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 140,
"y": 540,
"wires": [["verify_function"]]
},
{
"id": "verify_function",
"type": "function",
"z": "eas_flow",
"name": "Verify Attestation",
"func": "const EAS = global.get('eas');\nconst attestationUID = msg.attestationUID;\n\ntry {\n const attestation = await eas.getAttestation(attestationUID);\n const schemaEncoder = new eassdk.SchemaEncoder(\"string pshandle, string socialMedia, string socialMediaHandle\");\n const decodedData = schemaEncoder.decodeData(attestation.data);\n\n msg.payload = {\n isValid: !attestation.revoked,\n attestation: {\n attester: attestation.attester,\n time: new Date(Number(attestation.time) * 1000).toLocaleString(),\n expirationTime: attestation.expirationTime > 0 \n ? new Date(Number(attestation.expirationTime) * 1000).toLocaleString()\n : 'Never',\n revoked: attestation.revoked\n },\n data: {\n psHandle: decodedData[0].value.toString(),\n socialMedia: decodedData[1].value.toString(),\n socialMediaHandle: decodedData[2].value.toString()\n }\n };\n} catch (error) {\n msg.payload = { \n success: false, \n error: error.message,\n details: JSON.stringify(error, Object.getOwnPropertyNames(error))\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 350,
"y": 540,
"wires": [["verify_debug"]]
},
{
"id": "setup_debug",
"type": "debug",
"z": "eas_flow",
"name": "Setup Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 770,
"y": 120,
"wires": []
},
{
"id": "register_debug",
"type": "debug",
"z": "eas_flow",
"name": "Register Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 500,
"y": 260,
"wires": []
},
{
"id": "create_debug",
"type": "debug",
"z": "eas_flow",
"name": "Create Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 520,
"y": 400,
"wires": []
},
{
"id": "verify_debug",
"type": "debug",
"z": "eas_flow",
"name": "Verify Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 530,
"y": 540,
"wires": []
},
{
"id": "1322bb7438d96baf",
"type": "comment",
"z": "eas_flow",
"name": "Initialize EAS Config",
"info": "",
"x": 110,
"y": 60,
"wires": []
},
{
"id": "e5e3294119a80c1b",
"type": "comment",
"z": "eas_flow",
"name": "Register a new schema",
"info": "/* SCHEMA GUIDE\nEdit the schema variable to define your attestation fields.\nFormat: \"type name, type name, type name\"\n\nAvailable Types:\n- string (text)\n- bool (true/false)\n- address (wallet address)\n- uint256 (number)\n- bytes32 (hash)\n\nExamples:\n\"string name, string email, bool isVerified\"\n\"string twitter, address wallet, uint256 age\"\n\"string discord, string github, string telegram\"\n*/\n\nconst schema = \"string pshandle, string socialMedia, string socialMediaHandle\";",
"x": 120,
"y": 200,
"wires": []
},
{
"id": "2be090c17b5e4fce",
"type": "comment",
"z": "eas_flow",
"name": "Create Attestation",
"info": "",
"x": 110,
"y": 340,
"wires": []
},
{
"id": "3d99f76c5c0bdaf0",
"type": "comment",
"z": "eas_flow",
"name": "Verify Attestation",
"info": "",
"x": 110,
"y": 480,
"wires": []
}
]
```
### Configuration steps
1. Update the setup inject node with your:
* RPC URL
* Registry Address
* EAS Address
* Private Key
2. Customize the schema in the register function
3. Deploy the flow
4. Test each step sequentially using the inject nodes
The flow provides debug outputs at each step to monitor the process.
file: ./content/docs/application-kits/asset-tokenization/api-portal.mdx
meta: {
"title": "API Portal",
"description": "APIs for easy integrations with external world"
}
## API Reference — Asset Tokenization Kit
To start integrating with the SettleMint Asset Tokenization Kit APIs, the first
step is to generate an API key. This key is required to authenticate all
programmatic access to your deployed instance, and it ensures that only
authorized applications or developers can perform operations such as deploying
assets, transferring tokens, updating access roles, or retrieving transaction
and portfolio data.
From the API portal interface, navigate to the “API Keys” tab and click on the
“Create API Key” button in the top-right corner. This will open a modal where
you are required to enter a Name for the key (e.g., “Admin UI”, “Investor App”,
or “Integration Layer”) and optionally set an expiry date/time if you want to
limit the duration of access. Setting an expiry is a recommended practice when
issuing temporary or scoped keys for testing, third-party vendors, or automated
scripts.

Once the key is created, it will appear in the list of active keys along with
its name, creation date, expiry (if set), and a masked view of the key value.
You will use this key in your requests by adding it to the HTTP header as
follows: `x-api-key: YOUR_GENERATED_KEY_HERE`.
This key must be included in every API request to authenticate and authorize
your actions. Without it, the backend will reject the request with a 401
Unauthorized error. Be sure to store this key securely and avoid hardcoding it
into public repositories or frontend applications exposed to browsers. For web
apps, it is recommended to proxy API requests through a backend or use
serverless middleware that injects the key securely.
Each deployment has its own scoped API keys, meaning keys are only valid for the
specific environment they were created in (e.g., staging, production). The base
URL and available endpoints are listed in the API Documentation tab, where you
can also download the full OpenAPI (Swagger) schema for code generation or
external API tooling.
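As a quick illustration, the snippet below lists bond contracts (the `/api/bond` endpoint from the tables further down) using the `x-api-key` header described above. The base URL is a placeholder for your deployment's endpoint; treat this as a sketch rather than a canonical client.
```javascript
// Minimal sketch: calling the Asset Tokenization Kit API with an API key.
// BASE_URL is a placeholder for your deployment's endpoint; keep the key
// in an environment variable rather than in source control.
const BASE_URL = "https://your-deployment.example.com"; // placeholder
const API_KEY = process.env.ATK_API_KEY;

async function listBonds() {
  const response = await fetch(`${BASE_URL}/api/bond`, {
    headers: { "x-api-key": API_KEY },
  });
  if (response.status === 401) {
    // Missing or expired keys are rejected with 401 Unauthorized
    throw new Error("Unauthorized: check that the API key is valid");
  }
  return response.json();
}

listBonds().then(console.log).catch(console.error);
```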

The API documentation portal is accessible through an interactive web interface
that provides detailed specifications for each endpoint, including request
formats, parameters, and response structures. Built on the Swagger framework,
the portal also includes a “Try It Out” feature, allowing users to test API
calls in real-time using sample or authorized credentials. This environment
supports rapid integration, debugging, and validation of token operations
directly from the browser.

## Available APIs
SettleMint’s Asset Tokenization Kit APIs provide a unified interface for
creating, managing, and interacting with a wide range of tokenized financial
instruments on blockchain. These APIs cover asset classes such as bonds,
equities, cryptocurrencies, funds, stablecoins, and deposits, each with
capabilities to deploy contracts, mint and transfer tokens, configure financial
parameters, and enforce role-based access control. Additional endpoints support
user identity lookup, transaction tracking, yield management, and portfolio
analytics, offering full transparency and operational control. Built for modular
integration, these APIs enable secure, compliant, and scalable deployment of
digital assets across enterprise and institutional platforms, while supporting
extensibility through customizable settings and real-time market data
integrations.
### Bond APIs
| Method | Endpoint | Description |
| ------ | ----------------------------------------------- | --------------------------------------------- |
| GET | `/api/bond` | List all bond contracts |
| GET | `/api/bond/{address}` | Get bond contract details |
| GET | `/api/bond/factory/address-available/{address}` | Check if a bond contract address is available |
| POST | `/api/bond/factory/predict-address` | Predict a future bond contract address |
| POST | `/api/bond/factory` | Deploy a new bond contract |
| POST | `/api/bond/transfer` | Transfer bond tokens |
| POST | `/api/bond/mint` | Mint new bond tokens |
| POST | `/api/bond/mature` | Mark bond as matured |
| POST | `/api/bond/redeem` | Redeem matured bond tokens |
| PATCH | `/api/bond/set-yield-schedule` | Set or update yield distribution |
| PATCH | `/api/bond/top-up` | Top up underlying bond capital |
| POST | `/api/bond/withdraw` | Withdraw underlying bond asset |
| DELETE | `/api/bond/burn` | Burn bond tokens |
| PUT | `/api/bond/access-control/grant-role` | Grant access control role |
| DELETE | `/api/bond/access-control/revoke-role` | Revoke access control role |
| PATCH | `/api/bond/access-control/update-roles` | Update assigned roles |
| PUT | `/api/bond/block-user` | Block a user from bond contract |
| DELETE | `/api/bond/unblock-user` | Unblock a user |
***
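To give a feel for the write endpoints, here is a hedged sketch of a bond token transfer via `POST /api/bond/transfer`. The request body is not specified on this page, so the field names (`address`, `to`, `amount`) are hypothetical placeholders; check the Swagger schema in your deployment for the authoritative payload.
```javascript
// Hypothetical sketch of POST /api/bond/transfer. The body field names are
// assumptions for illustration only; verify them against /api/swagger/json.
async function transferBond({ baseUrl, apiKey, bondAddress, to, amount }) {
  const response = await fetch(`${baseUrl}/api/bond/transfer`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey,
    },
    body: JSON.stringify({ address: bondAddress, to, amount }), // illustrative fields
  });
  if (!response.ok) throw new Error(`Transfer failed with status ${response.status}`);
  return response.json();
}
```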
### Cryptocurrency APIs
| Method | Endpoint | Description |
| ------ | --------------------------------------------------------- | --------------------------------- |
| GET | `/api/cryptocurrency` | List all cryptocurrency contracts |
| GET | `/api/cryptocurrency/{address}` | Get cryptocurrency details |
| GET | `/api/cryptocurrency/factory/address-available/{address}` | Check if address is available |
| POST | `/api/cryptocurrency/factory/predict-address` | Predict future contract address |
| POST | `/api/cryptocurrency/factory` | Deploy new cryptocurrency |
| POST | `/api/cryptocurrency/transfer` | Transfer tokens |
| POST | `/api/cryptocurrency/mint` | Mint new tokens |
| POST | `/api/cryptocurrency/withdraw` | Withdraw tokens |
| PUT | `/api/cryptocurrency/access-control/grant-role` | Grant a role |
| DELETE | `/api/cryptocurrency/access-control/revoke-role` | Revoke a role |
| PATCH | `/api/cryptocurrency/access-control/update-roles` | Update user roles |
***
### Equity APIs
| Method | Endpoint | Description |
| ------ | ------------------------------------------------- | ------------------------------------ |
| GET | `/api/equity` | List all equity contracts |
| GET | `/api/equity/{address}` | Get equity contract details |
| GET | `/api/equity/factory/address-available/{address}` | Check if equity address is available |
| POST | `/api/equity/factory/predict-address` | Predict equity contract address |
| POST | `/api/equity/factory` | Deploy new equity |
| POST | `/api/equity/transfer` | Transfer equity tokens |
| POST | `/api/equity/mint` | Mint new equity tokens |
| POST | `/api/equity/withdraw` | Withdraw token |
| DELETE | `/api/equity/burn` | Burn equity tokens |
| PUT | `/api/equity/access-control/grant-role` | Grant role |
| DELETE | `/api/equity/access-control/revoke-role` | Revoke role |
| PATCH | `/api/equity/access-control/update-roles` | Update roles |
| PUT | `/api/equity/block-user` | Block user |
| DELETE | `/api/equity/unblock-user` | Unblock user |
***
### Fund APIs
| Method | Endpoint | Description |
| ------ | ----------------------------------------------- | ---------------------------------- |
| GET | `/api/fund` | List all fund contracts |
| GET | `/api/fund/{address}` | Get fund contract details |
| GET | `/api/fund/factory/address-available/{address}` | Check if fund address is available |
| POST | `/api/fund/factory/predict-address` | Predict contract address |
| POST | `/api/fund/factory` | Deploy new fund |
| POST | `/api/fund/transfer` | Transfer fund tokens |
| POST | `/api/fund/mint` | Mint fund tokens |
| POST | `/api/fund/withdraw` | Withdraw token |
| DELETE | `/api/fund/burn` | Burn fund tokens |
| PUT | `/api/fund/access-control/grant-role` | Grant role |
| DELETE | `/api/fund/access-control/revoke-role` | Revoke role |
| PATCH | `/api/fund/access-control/update-roles` | Update roles |
| PUT | `/api/fund/block-user` | Block user |
| DELETE | `/api/fund/unblock-user` | Unblock user |
***
### Stablecoin APIs
| Method | Endpoint | Description |
| ------ | ----------------------------------------------------- | ----------------------------- |
| GET | `/api/stablecoin` | List stablecoin contracts |
| GET | `/api/stablecoin/{address}` | Get stablecoin details |
| GET | `/api/stablecoin/factory/address-available/{address}` | Check if address is available |
| POST | `/api/stablecoin/factory/predict-address` | Predict contract address |
| POST | `/api/stablecoin/factory` | Deploy stablecoin contract |
| POST | `/api/stablecoin/transfer` | Transfer stablecoin |
| POST | `/api/stablecoin/mint` | Mint new stablecoin |
| DELETE | `/api/stablecoin/burn` | Burn stablecoin |
| PUT | `/api/stablecoin/freeze` | Freeze user account |
| PUT | `/api/stablecoin/pause` | Pause contract |
| DELETE | `/api/stablecoin/unpause` | Unpause contract |
| PATCH | `/api/stablecoin/update-collateral` | Update collateral data |
| PUT | `/api/stablecoin/block-user` | Block user |
| DELETE | `/api/stablecoin/unblock-user` | Unblock user |
| POST | `/api/stablecoin/withdraw` | Withdraw token |
| PUT | `/api/stablecoin/access-control/grant-role` | Grant role |
| DELETE | `/api/stablecoin/access-control/revoke-role` | Revoke role |
| PATCH | `/api/stablecoin/access-control/update-roles` | Update roles |
***
### Deposit APIs
| Method | Endpoint | Description |
| ------ | -------------------------------------------------- | ------------------------------------- |
| GET | `/api/deposit` | List deposit contracts |
| GET | `/api/deposit/{address}` | Get deposit details |
| GET | `/api/deposit/factory/address-available/{address}` | Check if deposit address is available |
| POST | `/api/deposit/factory/predict-address` | Predict contract address |
| POST | `/api/deposit/factory` | Deploy deposit contract |
| POST | `/api/deposit/transfer` | Transfer deposit token |
| POST | `/api/deposit/mint` | Mint new deposit tokens |
| DELETE | `/api/deposit/burn` | Burn deposit tokens |
| PUT | `/api/deposit/freeze` | Freeze account |
| PUT | `/api/deposit/pause` | Pause deposit contract |
| DELETE | `/api/deposit/unpause` | Unpause contract |
| PATCH | `/api/deposit/update-collateral` | Update collateral data |
| PUT | `/api/deposit/allow-user` | Allow user access |
| DELETE | `/api/deposit/disallow-user` | Disallow user |
| POST | `/api/deposit/withdraw` | Withdraw token |
| PUT | `/api/deposit/access-control/grant-role` | Grant role |
| DELETE | `/api/deposit/access-control/revoke-role` | Revoke role |
| PATCH | `/api/deposit/access-control/update-roles` | Update roles |
***
### Fixed Yield
| Method | Endpoint | Description |
| ------ | ------------------------------------- | ---------------------------- |
| GET | `/api/fixed-yield` | List all fixed yield entries |
| GET | `/api/fixed-yield/{address}` | Get details by address |
| GET | `/api/fixed-yield/bond/{bondAddress}` | Get yield by bond address |
***
### User & Contact APIs
| Method | Endpoint | Description |
| ------ | ---------------------------- | -------------------------- |
| GET | `/api/user` | List users |
| GET | `/api/user/{id}` | Get user by ID |
| GET | `/api/user/wallet/{address}` | Get user by wallet address |
| GET | `/api/user/search` | Search users |
| GET | `/api/contact` | List contacts |
| GET | `/api/contact/{id}` | Get contact details |
***
### Transaction APIs
| Method | Endpoint | Description |
| ------ | ------------------------------------ | ---------------------------- |
| GET | `/api/transaction` | List all transactions |
| GET | `/api/transaction/address/{address}` | Get transactions by address |
| GET | `/api/transaction/{transactionHash}` | Get transaction details |
| GET | `/api/transaction/recent` | Get recent transactions |
| GET | `/api/transaction/count` | Get transaction count |
| GET | `/api/transaction/timeline` | Get timeline of transactions |
***
### Asset Events, Stats & Balances
| Method | Endpoint | Description |
| ------ | --------------------------------------- | ----------------------------- |
| GET | `/api/asset-events` | List all asset events |
| GET | `/api/asset-events/{asset}` | List events for asset |
| GET | `/api/asset-stats/{address}` | Get asset statistics |
| GET | `/api/asset-balance` | List all balances |
| GET | `/api/asset-balance/{asset}/{account}` | Get account balance for asset |
| GET | `/api/asset-balance/portfolio/{wallet}` | Get user portfolio balances |
| GET | `/api/asset-activity` | Get asset activity data |
***
### Settings & Provider APIs
| Method | Endpoint | Description |
| ------ | -------------------------------------- | ------------------------ |
| GET | `/api/setting/{key}` | Get setting value by key |
| GET | `/api/providers/exchange-rates/{base}` | Get exchange rates |
| PATCH | `/api/providers/exchange-rates/` | Update exchange rates |
| GET | `/api/providers/asset-price/{assetId}` | Get asset price |
| PATCH | `/api/providers/asset-price/{assetId}` | Update asset price |
***
### Swagger / API Schema
| Method | Endpoint | Description |
| ------ | ------------------- | ------------------- |
| GET | `/api/swagger` | Swagger UI |
| GET | `/api/swagger/json` | Swagger JSON schema |
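The Swagger JSON endpoint above serves the same OpenAPI schema mentioned earlier for code generation and external tooling. A small sketch of downloading it (the base URL is again a placeholder):
```javascript
// Sketch: download the OpenAPI (Swagger) schema for code generation or
// external API tooling. BASE_URL is a placeholder for your deployment.
const BASE_URL = "https://your-deployment.example.com";

const res = await fetch(`${BASE_URL}/api/swagger/json`, {
  headers: { "x-api-key": process.env.ATK_API_KEY },
});
const schema = await res.json();
console.log(`Schema with ${Object.keys(schema.paths ?? {}).length} documented paths`);
```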
file: ./content/docs/application-kits/asset-tokenization/asset-designer.mdx
meta: {
"title": "Asset designer",
"description": "Getting started with Application Kit"
}
## SettleMint asset designer
The Asset Designer is purpose-built to streamline the creation, issuance, and
lifecycle management of regulated, on-chain financial instruments. Designed for
use by financial institutions, governments, fintech companies, and asset
managers, it provides an intuitive web-based interface that abstracts blockchain
infrastructure and smart contract logic, allowing users to focus on financial
structuring, regulatory compliance, and business execution.
The platform supports tokenization across five key asset classes: bonds,
stablecoins, investment funds, equities, and cryptocurrencies, each with
tailored configuration workflows aligned to their economic models and legal
requirements. Through a modular, guided process, users define asset metadata,
supply models, pricing mechanisms, backing logic, and governance rules. Tokens
issued through the Asset Designer are fully compatible with Ethereum token
standards.
### Tokenized bonds

The bond module enables digital issuance of debt instruments such as corporate,
sovereign, or municipal bonds. Tokenized bonds retain core traditional
attributes: face value, coupon structure, maturity date, ISIN, and issuance cap,
while embedding programmable rules around interest payments, redemption
conditions, and transferability.
The interface allows users to define symbol, ISIN, maximum supply, and link the
bond to an underlying reserve or collateral (e.g., real estate, cash). These
blockchain-based bonds are interoperable with DeFi protocols, support automated
coupon disbursements, and can be fractionalized to expand market access. Issuers
benefit from real-time visibility, shortened settlement cycles, and improved
auditability.
### Stablecoins

The stablecoin module facilitates the issuance of price-stable digital
currencies pegged to fiat currencies like USD or EUR. Users configure token
name, symbol, decimal precision, peg currency, and collateral verification
interval, determining how often proof-of-reserve updates must occur.
The platform supports a variety of backing models, including fiat reserves,
algorithmic mechanisms, and on-chain collateralization. Stablecoins created via
the Asset Designer can integrate with proof-of-reserve systems or off-chain
oracles, enabling compliance with regulatory reporting standards. These tokens
enable low-volatility payments, programmable finance, and cross-border
settlements, bridging traditional finance with digital ecosystems.
### Tokenized funds

The fund tokenization module allows asset managers to issue programmable tokens
representing units or shares in collective investment schemes, such as hedge
funds, venture capital funds, or hybrid strategies.
Users define fund category (e.g., Commodity, Event-Driven, Fixed Income
Arbitrage) and fund class (e.g., Absolute Return, Income-Focused, Small-Cap),
configure management fees (in basis points), and set token pricing logic aligned
with traditional NAV calculation. Smart contracts automate redemptions, lock-up
enforcement, and performance fees, while enabling faster onboarding, enhanced
transparency, and optional secondary liquidity for LP units.
### Tokenized equities

SettleMint’s equity tokenization module enables companies to digitize capital
ownership through blockchain-based programmable shares. Users configure equity
class (Common, Preferred, Private Equity) and category (ESOP Shares, Convertible
Stock, Sector-Based Equity), define denomination parameters, and embed rights
such as dividends, voting power, or liquidation preferences.
These tokens serve as legally-aligned digital counterparts to traditional
securities, making them suitable for private placements, employee stock plans,
early-stage funding, or regulated market participation. Real-time cap table
updates, vesting schedules, and compliance integrations help streamline
governance, enhance transparency, and broaden shareholder engagement.
### Cryptocurrencies


The cryptocurrency creation module supports issuance of native digital tokens for
projects building decentralized applications, platforms, or token economies.
Users define token name, symbol, decimals, initial supply, and reference price
unit. These tokens can power governance, staking, utility access, or economic
incentives within digital ecosystems.
SettleMint supports both fixed-supply and dynamic supply models, with optional
smart contract features such as inflation control, burn logic, and vesting
schedules. All tokens follow Ethereum token standards and are designed for seamless integration with
wallets, exchanges, DeFi protocols, and DAO frameworks.
file: ./content/docs/application-kits/asset-tokenization/asset-manager.mdx
meta: {
"title": "Asset manager",
"description": "Create and manage all the assets on the platform"
}
## SettleMint asset manager

Once assets are created using the Asset Designer module, they are listed and
made available in the Asset Management section. This module allows users to
manage, monitor, and operate digital assets across multiple categories such as
bonds, cryptocurrencies, equities, funds, stablecoins, and deposits. The
following sections describe the Asset Management interface and all available
options in detail.
## Asset manager
Asset Manager serves as the operational control center for managing the
lifecycle of digital assets issued on the SettleMint platform. It forms an
integral part of the Issuer Portal and provides a suite of tools and insights
that empower users with administrative roles—such as platform admins or asset
managers—to handle every aspect of asset governance.
### Available asset classes
* **Stablecoins**: These are digital tokens typically backed by fiat currency or
crypto assets. They are collateralized and pegged to maintain price stability.
* **Bonds**: Represent tokenized debt instruments issued by corporate or
government entities. Their lifecycle may include maturity dates, interest
distributions, and other traditional bond mechanics.
* **Funds**: These tokens represent pooled investments or mutual fund-like
structures. Investors can hold shares proportional to the fund's net asset
value (NAV).
* **Equities**: Tokenized shares representing ownership in a private or public
company.
* **Cryptocurrencies**: Either native assets (created directly on-chain) or
wrapped versions of external assets.
* **Deposits**: Represent stored value, vouchers, or fiat-collateralized digital
tokens often used in regulated financial settings.
Each asset type can contain multiple instances. For example, under
"Stablecoins," you might find entries such as "OmniDollar" or "SigmaDollar,"
each with its own symbol and contract address. Clicking on a specific asset
entry opens its management dashboard.
## The asset dashboard: a centralized view
Each asset has its own Asset Overview Page, which is where detailed data and
actionable controls are presented. This view is separated into tabs, each
focusing on a specific operational or analytical aspect. These tabs include:
1. Details
2. Collateral
3. Statistics
4. Holders
5. Events
6. Permissions
7. Block List
### 1. Details tab: asset identity and metrics
This section gives the user a comprehensive view of the asset's fundamental
details and performance indicators. The following fields are visible:
* **Name and Symbol**: Identifies the asset using the custom branding and ticker
symbol defined at creation.
* **Smart Contract Address**: This is the Ethereum-compatible address where the
asset is deployed. It serves as the point of interaction for any
blockchain-related queries.
* **Creator**: Displays the wallet or user who initiated asset creation.
* **Decimals**: Defines the number of decimal places the token supports. For
example, a value of 16 means the token can be subdivided into units as small as
10^-16 (see the conversion sketch after this list).
* **Total Supply**: Total number of tokens that have been minted and are in
circulation.
* **Total Burned**: Tokens that have been permanently destroyed or removed from
circulation.
* **Number of Holders**: Unique wallet addresses currently holding the token.
* **Ownership Concentration**: Reflects the percentage of total supply held by
the top wallet. A 100% concentration means all tokens are held by a single
address.
* **Unit Price**: The price of one token, often set during asset creation.
* **Total Value**: This is calculated as Total Supply \* Unit Price.
This section enables users to perform a quick but comprehensive review of the
asset's current state.
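Two of the figures above are derived rather than stored directly: on-chain balances are integers scaled by the token's decimals, and Total Value is Total Supply multiplied by Unit Price. The sketch below illustrates both conversions; it is an illustration of the arithmetic, not platform code.
```javascript
// Illustrative conversions between on-chain integer amounts and
// human-readable token amounts, given the token's `decimals`, plus the
// Total Value formula described above. BigInt avoids floating-point loss.
function toBaseUnits(humanAmount, decimals) {
  const [whole, frac = ""] = humanAmount.toString().split(".");
  return BigInt(whole + frac.padEnd(decimals, "0").slice(0, decimals));
}

function fromBaseUnits(rawAmount, decimals) {
  return Number(rawAmount) / 10 ** decimals;
}

// Total Value = Total Supply * Unit Price
const totalValue = (totalSupply, unitPrice) => totalSupply * unitPrice;

console.log(toBaseUnits("1.5", 16)); // 15000000000000000n
console.log(fromBaseUnits(15000000000000000n, 16)); // 1.5
console.log(totalValue(1_000_000, 1)); // 1000000
```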
### 2. Collateral section: ensuring backing integrity
For assets that require collateral—typically stablecoins or deposits—this
section provides detailed visibility into collateral compliance.
* **Proven Collateral**: The actual amount of collateral that has been locked
and validated.
* **Required Collateral Threshold**: The minimum collateralization ratio
mandated (usually 100%).
* **Committed Collateral Ratio**: Shows the ratio between committed collateral
and total asset value. If this dips below 100%, the system may trigger alerts
or restrictions (see the sketch at the end of this section).
* **Collateral Proof Expiration**: A timestamp after which the current
collateral proof is considered expired.
* **Collateral Proof Validity**: Indicates how long the current proof remains
valid.
These data points are critical for regulatory compliance and to maintain user
trust, especially in financial or governmental use cases.
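As a hedged sketch of the compliance check described above, assuming the ratio is proven collateral divided by total asset value against a 100% minimum threshold:
```javascript
// Illustrative collateral-ratio check, assuming ratio = provenCollateral /
// totalAssetValue and the usual 100% minimum threshold described above.
function collateralStatus(provenCollateral, totalAssetValue, thresholdPct = 100) {
  const ratioPct = (provenCollateral / totalAssetValue) * 100;
  return {
    ratioPct: Number(ratioPct.toFixed(2)),
    compliant: ratioPct >= thresholdPct, // below threshold may trigger alerts
  };
}

console.log(collateralStatus(950_000, 1_000_000));
// { ratioPct: 95, compliant: false }
```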
### 3. Statistics section: operational intelligence
The statistics section of the Asset Manager provides visual dashboards that
allow users to analyze the historical and real-time performance of the asset:
* **Collateral Ratio**: A pie chart showing what percentage of collateral is
free vs. committed.
* **Total Supply**: A time-based chart showing changes in circulating supply.
* **Supply Changes**: Shows mint and burn actions over time.
* **Wallet Distribution**: Displays a bar chart showing how many tokens are held
across different wallet size brackets.
* **Total Transfers**: Tracks the number of token transfers over time.
* **Total Volume**: Cumulative transaction volume in fiat value.
These visuals are useful for internal reporting, risk assessment, and investor
communications.
### 4. Holders tab: know your token holders
The holders tab lists all wallet addresses that currently hold the token. It
presents the following information:
* **Wallet Address and Label**
* **Token Balance and Value**
* **Wallet Type** (e.g., Regular, Admin, Frozen)
* **Frozen Balance**: If part of the wallet's balance is currently frozen.
* **Status**: Whether the wallet is active or has been restricted.
* **Last Activity**: Timestamp of the most recent interaction with the asset.
This feature helps track concentration risk, compliance exposure, and user
engagement.
### 5. Events tab: auditable history of actions
This tab logs every major operation performed on the asset:
* **Event Timestamp**
* **Event Type** (Mint, Transfer, Create, Collateral Update, etc.)
* **Initiator**: Wallet address or user that performed the action
* **Asset Involved**
* **Details**: Deep link to the event's full metadata
The Events tab is essential for compliance auditing and helps organizations
maintain a verifiable trail of all token-related actions.
### 6. Permissions tab: role-based access control
This section shows all users and wallets with specific roles associated with
asset management. The platform supports granular access levels:
* **Admin**: Full rights including contract pause, role management, and supply
control
* **Supply Manager**: Can mint or burn tokens
* **User Manager**: Can assign and manage user roles
* **Auditor**: Can only view asset data, but cannot make changes
Each role is attached to a wallet address, and a single wallet can hold multiple
roles.
### 7. Block list tab: managing restricted access
The block list contains addresses that are forbidden from holding or interacting
with the asset. This may be used for:
* Regulatory sanctions
* Suspicious activity prevention
* Operational policy enforcement
Blocked wallets cannot transfer, receive, or interact with the asset. Admins can
add or remove wallets from the block list using the Manage dropdown.
## Manage actions menu
Each asset page includes a Manage button at the top-right. Clicking this opens a
dropdown list of actionable controls:
* **Mint**: Add new supply to circulation.
* **Update Collateral**: Adjust or reaffirm backing.
* **Pause Contract**: Temporarily suspend activity.
* **Add Asset Admin**: Delegate control to another wallet.
* **Block/Unblock User**: Modify access status of specific wallets.
* **View Events**: Jump directly to the events tab for inspection.
These controls ensure that platform users can respond quickly to operational
needs while maintaining control and auditability.
file: ./content/docs/application-kits/asset-tokenization/deployment.mdx
meta: {
"title": "Deployment",
"description": "Development setup and deployment"
}
Log in to the SettleMint platform and create an organization. For more details, see the
[Account setup guide](/building-with-settlemint/setup-account-and-billing).
Select the "Add application" option and choose "Asset tokenization kit".

Select the network you would like to use, then, in the next step, decide whether
to deploy a development or production environment. Optionally, enable Code
Studio to get access to the smart contract and front-end UI IDE within the
platform UI.

You can choose between development and production environments based on the
stage and requirements of your application.
The development environment operates on a shared infrastructure within
SettleMint’s managed SaaS offering. It is provisioned with a small resource
pack, suitable for prototyping, testing, and early-stage development. This
environment includes one validator node and one non-validator node, sufficient
for basic functionality validation and integration testing.
The production environment, on the other hand, is deployed on a dedicated
cluster and is provisioned with a medium resource pack by default. It is
designed to support high availability, performance, and scalability for
enterprise-grade deployments. The production setup includes four validator nodes
and two non-validator nodes, ensuring fault tolerance and improved network
consensus performance.
Both environments support dynamic resource scaling, allowing resource packs to
be scaled up or down at any point based on application demand or usage patterns.
## Custom deployment module for front end deployment
The **Asset Tokenization Kit (ATK) Frontend UI** is a containerized web
application that provides a user interface for interacting with tokenized assets
on the SettleMint platform. This guide covers the configuration and deployment
of the frontend UI as a **custom deployment** within SettleMint.
### **Container Image Setup**
The ATK Frontend UI is deployed using a prebuilt container image. Configure the
following:
| Field | Description | Example Value |
| ------------------------------------- | ----------------------------------------------------------- | -------------------------------------------------- |
| **Container Image** | The Docker image containing the frontend UI. | `ghcr.io/settlemint/asset-tokenization-kit:0.3.14` |
| **Exposed Port** | The port on which the frontend serves HTTP traffic. | `3000` |
| **Registry Credentials** (if private) | Username and access token for private container registries. | (Provided by DevOps) |
### **Access Control**
Define who can access the deployed frontend:
* **Anyone with the link** – Public access (suitable for demos).
* **Members of the organization** – Restricted to authenticated users within the
SettleMint organization.
### **Environment Variables**
The frontend requires the following environment variables to connect to backend
services:
| Variable | Purpose |
| ----------------------------------------- | ----------------------------------------------- |
| `SETTLEMINT_INSTANCE` | Internal SettleMint instance identifier. |
| `SETTLEMINT_ACCESS_TOKEN` | Authentication token for SettleMint APIs. |
| `SETTLEMINT_HD_PRIVATE_KEY` | Private key for blockchain transaction signing. |
| `SETTLEMINT_BLOCKSCOUT_UI_ENDPOINT` | URL for the blockchain explorer (Blockscout). |
| `SETTLEMINT_HASURA_ENDPOINT` | Hasura GraphQL engine endpoint. |
| `SETTLEMINT_HASURA_DATABASE_URL` | Database connection URL (internal). |
| `SETTLEMINT_HASURA_ADMIN_SECRET` | Admin secret for Hasura access. |
| `SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT` | SettleMint Portal GraphQL API endpoint. |
| `SETTLEMINT_THEGRAPH_SUBGRAPHS_ENDPOINTS` | The Graph indexing service endpoints. |
**Security Note:**
* Sensitive values (e.g., `HD_PRIVATE_KEY`, `HASURA_ADMIN_SECRET`) are masked in
the UI.
* Rotate credentials periodically following security best practices.
### **Custom Domains (Optional)**
For production or client-facing deployments, bind a custom domain: e.g.
[https://demo.tokenmint.be/](https://demo.tokenmint.be/) This ensures a branded URL for end users.
The **Asset Tokenization Kit Frontend UI** can be deployed as a standalone
service while maintaining integration with SettleMint's blockchain and backend
infrastructure. Key benefits include:
* **Independent frontend management** – Deploy UI updates without affecting
other components.
* **Flexible access control** – Configure visibility for internal testing or
public demos.
* **Secure environment injection** – Sensitive keys and endpoints are securely
passed at runtime.
* **Custom domain support** – Use branded URLs for professional deployments.
## Local development
For local development, go to our
[GitHub repository](https://github.com/settlemint/asset-tokenization-kit) and
follow the README setup guide.
There are two ways to use this kit:
1. **Predeployed Setup** - Using pre-deployed contracts (fastest)
2. **Customized Setup** - Deploy your own contracts
### Predeployed Setup (Fastest)
This is the fastest way to get started with the kit. It uses pre-deployed
contracts, subgraphs, and ABIs.
```bash
# Install dependencies
bun install
# Login and connect to SettleMint
bunx settlemint login
bunx settlemint connect
# Generate types and start development server
cd kit/dapp
bun codegen:settlemint
bun addresses
bun dev
```
Browse to [http://localhost:3000](http://localhost:3000) to access the
application. Create an account by clicking "Sign up" - the first account created
will have admin privileges.
### Customized Setup
If you want to deploy and use your customized contracts, subgraph, and ABIs,
follow these steps:
#### Prerequisites
1. Forge v0.3.0 - Install the latest Foundry from
[https://book.getfoundry.sh/getting-started/installation](https://book.getfoundry.sh/getting-started/installation)
2. Node.js version >=20.18.1 - Required for The Graph CLI. We recommend using
[fnm](https://github.com/Schniz/fnm) for Node.js installation.
#### Deployment Steps
```bash
# Install dependencies
bun install
# Login and connect to SettleMint
bun settlemint login
bun settlemint connect
# Deploy contracts
cd kit/contracts
bun deploy:remote
# Deploy subgraph
cd ../subgraph
bun deploy:remote
cd ../../
# Codegen
bun codegen
# Setup dapp
cd kit/dapp
bun addresses
bun db:push
# Start development server
bun dev
```
Browse to [http://localhost:3000](http://localhost:3000) to access the
application. Create an account by clicking "Sign up" - the first account created
will have admin privileges.
### Database Customization
To modify database schema:
1. Update your schema definitions in the schema folder (a hypothetical example is sketched at the end of this section):
```bash
# Navigate to schema directory
cd kit/dapp/src/lib/db
```
2. Apply your changes to the database:
```bash
# Run in the kit/dapp directory
cd kit/dapp
bun db:push
```
3. Ensure your updates are registered with Hasura by executing:
```bash
settlemint hasura track -a
```
4. Regenerate GraphQL types by running the following command in the root
directory. It is important to use the `--force` flag to ensure the types are
regenerated:
```bash
bun codegen --force
```
5. Launch the application to verify your changes:
```bash
bun dev
```
> **Note**: When modifying tables managed by Better Auth (user, session,
> account, verification), you may need to update `additionalFields` in
> `kit/dapp/src/lib/auth/auth.ts`. If user object field changes aren't reflected
> in the `useSession` hook, try clearing cookies and signing in again. See
> [Better Auth database core schema](https://www.better-auth.com/docs/concepts/database#core-schema)
> for more information.
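As a reference for step 1, a new table definition might look like the following. This is a hedged sketch assuming the kit uses Drizzle-style schema definitions under `kit/dapp/src/lib/db`; check the existing files in that folder for the exact conventions.

```ts
// Hypothetical example of adding a table in kit/dapp/src/lib/db
// (assumes Drizzle ORM conventions; adapt to the schema files already present).
import { pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";

export const assetNotes = pgTable("asset_notes", {
  id: uuid("id").primaryKey().defaultRandom(),
  assetAddress: text("asset_address").notNull(),
  note: text("note").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```

After saving the schema change, continue with `bun db:push` and the Hasura tracking step as described above.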
file: ./content/docs/application-kits/asset-tokenization/introduction.mdx
meta: {
"title": "Introduction",
"description": "Build your digital assets platform in minutes"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
The **SettleMint Asset Tokenization Kit** is a complete development toolkit and
reference application for launching digital asset platforms quickly and
securely. It eliminates the usual complexity of building from scratch by
offering a pre-integrated stack of smart contracts, backend services, and a
web-based user interface. Whether issuing stablecoins, tokenized securities, or
loyalty points, the kit equips developers with all the core components needed to
move from idea to prototype in days.

## Core components
The kit’s foundation lies in **battle-tested smart contract templates** that
follow standards like ERC-20. These templates are extensible and suitable for
multiple asset classes:
* Stablecoins and fiat-backed tokens
* Tokenized bonds and securities
* Loyalty points or reward systems
* Real-world asset representations (e.g., real estate, IP)
In addition to the contract layer, a **fully functional dApp UI** is bundled
with the kit. This includes both an admin console and user portal, designed to
support common workflows from day one:
* Define and configure new tokenized assets using the **Asset Designer**
* Issue, transfer, and monitor digital assets
* Manage users through wallet creation, KYC approvals, access control, and
blacklisting
Because the backend and frontend layers are already wired together, teams can
focus on business logic and design without investing months in integration
efforts.
***
## Supported token classes
The Asset Tokenization Kit supports a diverse set of token classes, each built
to represent real-world financial instruments on-chain. These include bonds,
equities, funds, stablecoins, cryptocurrencies, and deposit-backed tokens,
covering both asset ownership and capital instruments. Each class follows a
standardized contract interface enriched with configurable parameters,
role-based access control, and compliance mechanisms suited for enterprise or
institutional deployment.
Token classes are designed to be modular yet interoperable, allowing
institutions to digitize traditional instruments, launch programmable assets,
and meet regulatory obligations without reengineering core systems. Whether
issuing tokenized debt, enabling digital equity programs, or piloting CBDC use
cases, these templates serve as production-ready building blocks with built-in
lifecycle, governance, and auditability features.
***
### Bond

Bond tokens represent debt instruments issued on-chain, offering fixed or floating returns over a predefined maturity period. These can be structured to support traditional coupon-bearing bonds, zero-coupon notes, or tokenized commercial papers. Bonds enable capital raising with regulatory-grade access control and transparency for both issuers and investors.
Bond tokens can integrate programmable yield schedules, automate redemption workflows, and support on-chain recordkeeping for audits and compliance. These instruments are suitable for both public market issuance and private placements across regulated jurisdictions.
**Key Features**
* Maturity schedules and yield configuration.
* Support for redemption, top-up, and withdrawal.
* Role-based access control and user-level restrictions.
**Example Use Cases**
* Tokenized corporate bonds or government securities.
* Real estate-backed debt issuance.
* Treasury bond digitization for regulated investors.
***
### Equity

Equity tokens represent ownership in a company or asset, digitally mirroring shares or voting rights. These tokens can be embedded with compliance logic such as lock-in periods, vesting, or shareholder rights, making them suitable for fundraising, ESOPs, and investor governance in both private and public markets.
Equity tokens support automated cap table management, enabling real-time updates to ownership records and seamless transferability under controlled conditions. They can also be integrated with governance modules to facilitate on-chain voting and shareholder resolutions.
**Key Features**
* Ownership tracking and transfer controls.
* Role-based minting, burning, and blocking of accounts.
* Smart contract hooks for voting or governance modules.
**Example Use Cases**
* Startup equity or cap table management.
* Private equity fund tokenization.
* Real-world asset fractional ownership.
***
### Fund

Fund tokens allow the creation and management of pooled investment vehicles where token holders share profits and risks. These tokens can support minting, redemption, NAV tracking, and enforce access controls as per fund structures like mutual funds, hedge funds, or DAOs managing diversified assets.
Fund tokens enable dynamic portfolio management with real-time NAV adjustments and support for multi-asset backing or performance-linked issuance. They offer operational transparency and auditability, making them suitable for both retail-facing platforms and institutional fund structures.
**Key Features**
* NAV-based token minting and withdrawal.
* User access restrictions and investor limits.
* Full transferability and burn logic.
**Example Use Cases**
* Tokenized mutual or hedge funds.
* Venture capital DAOs.
* ESG or green finance investment vehicles.
***
### Stablecoin

Stablecoins are blockchain-based representations of fiat currencies or other stable assets. These tokens aim to minimize volatility and serve as mediums of exchange or units of account within digital ecosystems. The framework supports collateral management, freezing, and pausing mechanisms for regulatory compliance and risk control.
Stablecoins also serve as a foundational layer for central bank digital currency (CBDC) prototypes and regulated digital cash systems. They can integrate with identity verification, transaction monitoring, and monetary policy controls, making them adaptable for both commercial applications and government-led digital currency pilots.
**Key Features**
* Collateral tracking and updates.
* Freezing and pausing of accounts and contracts.
* On-chain minting, burning, and withdrawal mechanisms.
**Example Use Cases**
* Bank-issued tokenized fiat currencies.
* Stable medium of exchange in DeFi protocols.
* Asset-backed token for remittance or settlement.
***
### Cryptocurrency

Cryptocurrency tokens are fungible digital currencies that can be freely minted, transferred, or withdrawn. They may represent utility, governance, or native currencies of ecosystems, and the tokenization framework supports full lifecycle management with standard access control features for centralized issuance models.
Cryptocurrency tokens can power digital economies within platforms, enabling payments, staking, and access to gated services or features. The framework also allows seamless integration with EVM-compatible wallets, decentralized exchanges, and automated distribution mechanisms for ecosystem incentives.
**Key Features**
* Mintable, burnable, and transferable.
* Role-based issuance and withdrawal.
* Compatible with EVM wallets and DEXs.
**Example Use Cases**
* In-app or platform utility tokens.
* Centralized exchange-listed assets.
* Loyalty and rewards token programs.
***
### Deposit

Deposit tokens represent tokenized claims against reserved assets, often used in custodial or institutional settings. These tokens allow fine-grained access control and account-level permissioning, suitable for asset-backed financing, digital guarantees, and central bank-backed pilot systems.
Deposit tokens can be tailored for use in interbank settlement networks, collateralized lending platforms, or programmable escrow arrangements. They support auditability, restricted transfer logic, and integration with core banking systems, making them ideal for regulated financial infrastructure and institutional-grade asset tokenization.
**Key Features**
* Freeze, pause, allow/disallow user functions.
* Collateral-based minting and burning.
* On-chain controls for regulated issuance.
**Example Use Cases**
* CBDC sandbox or pilot programs.
* Regulated commercial bank deposits.
* Asset-backed loan tokenization platforms.
## Compliance and security
Compliance is not an add-on but an embedded principle in the kit’s architecture.
It is built to align with enterprise-grade regulatory expectations, supporting
both internal governance and external obligations.
The kit includes:
* **Whitelisted address logic** to restrict transfers to approved participants
* **Transaction limits** configurable per asset or user category
* **Audit logs** to track all key operations on-chain and off-chain
* **Role-based access control** to separate admin and user capabilities
* **KYC/AML workflows** that integrate identity checks into the onboarding
process
It also supports alignment with evolving regulations such as **Europe’s MiCA**,
reducing the effort for institutions to stay compliant over time.
***
## Operational monitoring
Institutions need visibility into their asset operations, and the kit offers
this out of the box.
The **analytics module** provides:
* Real-time dashboards of asset supply, ownership distribution, and transaction
history
* Visual breakdowns of token activity for operational and compliance teams
* Exportable data views for reporting, audits, or internal governance
This monitoring framework helps organizations maintain transparency and enforce
accountability across tokenized programs.
***
## Developer enablement
Developers are not left to glue components together manually. The Asset
Tokenization Kit ensures all layers work in harmony and offers powerful tools to
accelerate custom development.
### Integrated tools
* **SettleMint SDK and CLI** to scaffold, manage, and deploy projects
* **Web-based IDE** for instant cloud development
* **Local dev compatibility** with Git access for use with any code editor
### Pre-built blockchain integrations
* **IPFS** for decentralized document and metadata storage
* **The Graph** for indexing on-chain data
* **Hasura** for GraphQL API access to blockchain data
### External connectivity
* REST and GraphQL APIs to connect with CRMs, core banking systems, and
reporting platforms
* Hooks and webhooks to automate workflows or trigger third-party actions

This results in a developer experience where the focus is on building
business-specific logic, not plumbing infrastructure.
***
## Deployment and automation
Launching environments with the kit is straightforward and scalable. Most setup
steps are automated and repeatable across development, testing, and production.
* **One-click deployment** available via SettleMint’s managed infrastructure
* **CLI-based deployment** for more control or private cloud hosting
* **Environment presets** for Dev, Test, and Prod configurations
* Integration with standard **CI/CD pipelines** to support enterprise release
cycles
With minimal DevOps overhead, organizations can maintain faster iteration
cycles and lower deployment risk.
***
## Speed and efficiency gains
Adopting the kit significantly reduces project timelines and developer workload.
Organizations will benefit from:
* **4x faster smart contract development** using pre-audited templates
* **8x faster front-end development** thanks to pre-built dApp interfaces
* **Launch time in days**, not months, for MVPs or pilot rollouts
* A **modular codebase** that enables easy customization without rework
Development teams no longer need to reinvent the wheel, and product teams can
validate ideas quickly with real users.
***
## Customization and extensibility
Unlike rigid SaaS platforms, the kit offers complete flexibility. Every
component is open and editable:
* Modify or extend smart contracts for unique financial instruments
* Customize the UI for branding, UX, or business-specific workflows
* Add new integrations, APIs, or on-chain data sources as required
* Build new features or compliance rules without breaking the architecture
This extensibility ensures that the kit remains relevant as use cases evolve,
making it suitable for both pilots and scaled production environments.
***
## Ideal use cases
The Asset Tokenization Kit is well-suited for:
* **Banks and financial institutions** creating programmable money or tokenized
debt
* **Fintech startups** building platforms for fractional ownership, stablecoins,
or tokenized securities
* **Corporates** issuing loyalty tokens or digitizing internal assets like
carbon credits
* **Governments and regulators** running sandbox projects for CBDCs or digital
bonds
Its flexibility and compliance-focused design allow it to operate in diverse
industry contexts with minimal configuration.
***
## Getting started
To begin using the kit:
1. **Clone the source code** from SettleMint's Git repository
[SettleMint Asset Tokenization Kit on GitHub](https://github.com/settlemint/asset-tokenization-kit)
2. **Install the SDK and CLI** to scaffold a new project
3. **Launch the Web IDE** or integrate into your local development environment
4. **Review documentation and API references** to begin customizing the
application

Comprehensive guides, code samples, and pre-configured environments are
available to reduce onboarding time for development teams.
***
## Ongoing support and roadmap
* The kit is **actively maintained** and updated to meet new technical and
regulatory requirements
* **Support channels** are available for both developer troubleshooting and
enterprise onboarding
***
file: ./content/docs/application-kits/asset-tokenization/portfolio-manager.mdx
meta: {
"title": "Portfolio manager",
"description": "Manage your assets, as an indivudal user or as an admin/treasury"
}
## Dashboard
The dashboard section provides a high-level overview of the user's asset
holdings and current portfolio valuation. It shows the total value of all assets
in the wallet, typically denominated in EUR, calculated using the real-time
balances and latest token pricing data. A line chart allows users to visualize
how the portfolio’s value has changed over time.
Users can switch between three chart modes. The total value view displays the
entire portfolio's historical valuation. The stacked by asset type view breaks
down the value by token categories such as equity and deposit. The compare asset
types view plots each category on separate lines to allow performance
comparisons. The dashboard also includes a date range selector for
period-specific analysis and a transfer button to initiate outbound token
transfers quickly.
## My assets

The my assets section displays a detailed breakdown of tokens held in the
wallet. This includes both visual and tabular representations. A donut chart on
the left shows asset allocation by type, enabling users to analyze
diversification across different token categories. Each token is listed with its
name, symbol, classification (e.g. deposit, equity), and available balance.
A transaction chart is shown alongside the table, which displays the number of
token operations performed over time. This helps identify active versus dormant
tokens. The asset list also includes filtering options, data export features,
and a details button for each token entry. These details can be used for further
inspection or reporting.
## My activity
The my activity section presents a historical record of on-chain actions
associated with the user's wallet. It is divided into two tabs. The recent
transactions tab lists blockchain transactions that invoked smart contract
functions, such as mint, burn, and transfer. The all events tab includes a
broader set of events including role assignments and collateral updates.
Each activity entry includes the timestamp, the token involved, the type of
event, and the initiating wallet address. A details button opens metadata
associated with the action, including any additional contract-level information.
Filters are available for narrowing the list, and users can export all activity
data in CSV or Excel formats for audits or compliance purposes.
## My contacts
The my contacts section allows users to maintain a personal address book for
frequently used wallet addresses. This helps prevent errors during manual input
and speeds up the token transfer process. A form is provided to save new
contacts, where users can enter the destination wallet address along with a
first and last name.
Saved contacts are displayed in a searchable table and can be selected directly
during transfers. This feature is particularly useful for investors or operators
who regularly send tokens to the same set of recipients.
## Transfer
The transfer functionality enables users to move tokens to external wallets
directly through the platform. Users can select the token they want to transfer,
specify the destination address either manually or by selecting from contacts,
and enter the amount to transfer.
Real-time input validation checks the token balance and ensures the address
format is correct. Upon confirmation, the transfer is submitted to the
blockchain, and its outcome is recorded in the activity log.
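For illustration, the kind of client-side validation described above can be sketched with viem's utilities. The function below is hypothetical and not part of the kit's code.

```ts
import { isAddress, parseUnits } from "viem";

// Hypothetical client-side checks mirroring the validation described above.
function validateTransfer(
  to: string,
  amount: string,
  decimals: number,
  balance: bigint,
): string | null {
  if (!isAddress(to)) return "Invalid destination address";
  let value: bigint;
  try {
    value = parseUnits(amount, decimals);
  } catch {
    return "Amount is not a valid number";
  }
  if (value <= 0n) return "Amount must be greater than zero";
  if (value > balance) return "Insufficient balance";
  return null; // input is valid
}
```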
file: ./content/docs/application-kits/asset-tokenization/signup-and-login.mdx
meta: {
"title": "Signup and Login",
"description": "Getting started with Application Kit"
}
Find the connection endpoint on the connect tab of the custom deployment section.

Go to the issuer portal and sign up to create the admin or issuer account. The
first user to sign up is granted admin rights to the application.

The user is required to create a secure 6-digit PIN. This PIN is used to
authorize all wallet-related transactions and ensures that only the legitimate
wallet holder can perform actions such as transferring assets or interacting
with contracts.
After setting the PIN, the system generates a list of one-time-use recovery
codes. These codes allow the user to verify transactions or recover wallet
access in case of credential loss. The interface prompts the user to store these
codes securely, such as in a password manager or offline backup. These codes are
not retrievable after this step.

## Passkey sign in
As part of its secure authentication framework, the Asset Tokenization Kit
supports sign-in via passkeys, offering users a passwordless and
phishing-resistant login experience. Passkeys are based on public key
cryptography and are stored securely on the user’s device or cloud identity
provider (such as Apple iCloud Keychain, Google Password Manager, or Windows
Hello). This ensures that private keys never leave the user’s device,
significantly reducing the risk of credential theft or reuse.
During the sign-in process, users will be prompted to authenticate using their
device’s biometric or PIN-based identity (e.g., fingerprint, Face ID, or system
passcode). Upon successful verification, the device automatically signs a
challenge issued by the platform using the previously registered passkey. The
backend verifies the signature against the stored public key, granting access
without requiring a username or password.
This approach improves both user convenience and security posture. It eliminates
the need for password resets and prevents common attack vectors such as
credential stuffing, phishing, and man-in-the-middle attacks. For enterprise
users, passkeys can be linked to organizational identity systems or integrated
with device-level access policies for compliance and centralized control.
Passkey sign-in can be used as a primary login method or combined with other
mechanisms such as wallet authentication, OAuth providers, or multi-factor
authentication (MFA), depending on the deployment’s security requirements. The
implementation is fully compatible with modern browsers and mobile devices,
providing a seamless experience across environments.
After logging in, you will see the dashboard and gain access to the various
modules and services related to asset tokenization.

file: ./content/docs/application-kits/asset-tokenization/ui-customization.mdx
meta: {
"title": "UI customization",
"description": "How to rebrand and enhance the front-end interface of the application"
}
The **SettleMint asset tokenization kit (ATK)** is a modular open-source
platform that enables fast deployment of asset tokenization solutions. While it
includes infrastructure and smart contract templates, the frontend DApp, built
with **Next.js and Tailwind CSS**, serves as the primary interface for users and
clients. The source code is available on GitHub: [SettleMint Asset Tokenization Kit on GitHub](https://github.com/settlemint/asset-tokenization-kit)

Smart contract templates (under `kit/contracts/`) and Helm-based infrastructure
charts (under `kit/charts/`) are available, but this document will focus on the
frontend (`kit/dapp/`) only.
The APIs exposed by the SettleMint Asset Tokenization Kit are designed to be
developer-friendly, REST-compliant, and suitable for seamless integration with
both modern web applications and legacy enterprise systems. These endpoints
cover a wide range of functionality including asset issuance, role management,
token transfers, transaction monitoring, yield scheduling, and user account
control. Each API is authenticated via an `x-api-key` header using the
application-level access token, which ensures secure and controlled access to
the backend services.
Developers can use these APIs to extend the current frontend or create entirely
new modules and views. These APIs are also ideal for external integrations with third-party systems.
Whether you’re connecting the platform to an ERP system, investor onboarding
portal, custodial wallet service, or regulatory reporting system, the exposed
endpoints provide the necessary flexibility. Legacy frontends or enterprise
tools can call these APIs directly to query asset metadata, initiate token
actions (e.g., mint, redeem, burn), or enforce access control based on
organizational workflows.
In UI-driven customizations, you can build new React components in the
`src/components/` directory that consume these APIs to render data-rich views such
as investor summaries, issuance pipelines, fund performance breakdowns, or
compliance logs. With consistent schema definitions available through the
`/api/swagger` endpoint, developers can easily generate client libraries or use
tools like Postman and Swagger UI for rapid prototyping.
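As a sketch of how an external system might call these APIs, the snippet below sends an authenticated request with the `x-api-key` header. The route and response shape are assumptions for illustration; consult the `/api/swagger` definitions for the actual API surface.

```ts
// Illustrative only: the route and response shape are assumptions;
// consult /api/swagger for the real endpoints.
async function fetchAssets(baseUrl: string, apiKey: string) {
  const response = await fetch(`${baseUrl}/api/assets`, {
    headers: { "x-api-key": apiKey },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

const assets = await fetchAssets(
  "https://your-deployment.settlemint.com", // placeholder deployment URL
  process.env.API_KEY!,
);
```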
***
## Project structure overview
The frontend application is located inside the following path:
`kit/dapp/`
The core source files reside in the `src/` directory, which is organized as
follows:
| Folder | Purpose |
| ----------------- | ---------------------------------------------------------------- |
| `src/app/` | Route-based pages and views |
| `src/components/` | Reusable UI blocks like asset displays, login cards, and widgets |
| `src/hooks/` | React hooks for business logic (wallets, data fetching, etc.) |
| `src/lib/` | Shared libraries and service clients |
| `src/utils/` | Utility functions used across the app |
| `src/types/` | TypeScript types and interfaces |
| `src/i18n/` | Translatable message files for multi-language support |
| `public/` | Static assets, logos, and background images |
***
## Understanding the frontend stack
The frontend stack is built using modern JavaScript tools:
* **Next.js** for routing, page rendering, and performance optimization
* **Tailwind CSS** for styling and theming
* **TypeScript** for strong typing across components and logic
* **React hooks** to abstract state management and user flows
* **Headless modular architecture** that makes the kit adaptable to new use
cases
***
## Component-based UI customization
The core of the customization effort will revolve around modifying or replacing
components located in:
`kit/dapp/src/components/`
### Example components:
* `asset-info`: Displays asset metadata like class, valuation, issuer
* `asset-status-pill`: Shows colored status labels for active, expired, or
upcoming assets
* `auth`: Handles wallet login and user authentication flows
* `layout`: Base page layout including navigation, theming, and background
visuals
### How to customize:
* Replace text labels or buttons with domain-specific terminology (e.g., "Carbon
credits" instead of "Assets")
* Modify visual hierarchy using Tailwind utility classes (`text-lg`,
`bg-blue-200`, etc.)
* Replace logos and color palettes using your organization's design tokens
* Introduce new UI elements like KPI cards, charts, or asset filtering widgets
You can also **introduce new components** by following the same conventions and
placing them in the same directory with self-contained logic and style.
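For instance, a new KPI card component following these conventions might look like the sketch below. The component, its props, and its styling are hypothetical placeholders.

```tsx
// kit/dapp/src/components/kpi-card.tsx (hypothetical example component)
interface KpiCardProps {
  label: string;
  value: string;
}

export function KpiCard({ label, value }: KpiCardProps) {
  return (
    <div className="rounded-lg border p-4 shadow-sm">
      <p className="text-sm text-gray-500">{label}</p>
      <p className="text-2xl font-semibold">{value}</p>
    </div>
  );
}
```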
***
## Routing and views
All page-level routes and user workflows are managed inside:
`kit/dapp/src/app/`
This is where you'll add or update views for actions like:
* Viewing asset lists
* Tokenizing new assets
* Dashboard analytics
* User KYC status or permission management
Each folder in `app/` represents a route and includes components and logic
specific to that page. For example, to create a new route `/reports`, you would
create a folder `src/app/reports/` and define the page structure using standard
Next.js conventions.
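For example, a minimal page for that route might look like the following sketch, using the standard App Router `page.tsx` convention; the content is a placeholder.

```tsx
// kit/dapp/src/app/reports/page.tsx (hypothetical route)
export default function ReportsPage() {
  return (
    <main className="p-8">
      <h1 className="text-2xl font-semibold">Reports</h1>
      {/* Add report tables, charts, or export actions here */}
    </main>
  );
}
```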
***
## Logic and hooks
Hooks abstract key logic into reusable functions and live in:
`kit/dapp/src/hooks/`
These include:
* `useWallet()`: Connect and manage wallet state
* `useAssets()`: Fetch and format asset metadata
* `usePermissions()`: Handle access roles and conditional UI behavior
You may add your own hooks to encapsulate logic like integrating third-party
APIs (e.g., for KYC, ESG scoring, legal docs, etc.).
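A hypothetical example of such a hook is sketched below; the endpoint it calls is a placeholder, not part of the kit.

```ts
// kit/dapp/src/hooks/use-kyc-status.ts (hypothetical hook)
import { useEffect, useState } from "react";

export function useKycStatus(walletAddress: string) {
  const [status, setStatus] = useState<"pending" | "verified" | "unknown">("unknown");

  useEffect(() => {
    if (!walletAddress) return;
    // Placeholder endpoint: replace with your KYC provider's API.
    fetch(`/api/kyc/${walletAddress}`)
      .then((res) => res.json())
      .then((data) => setStatus(data.status))
      .catch(() => setStatus("unknown"));
  }, [walletAddress]);

  return status;
}
```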
***
## Theming and branding
You can change the **visual branding** of the app by modifying:
### Global styles:
* Tailwind config in `tailwind.config.ts`
* Colors, font sizes, border radius, spacing
### Backgrounds and logos:
* Replace files in `kit/dapp/public/backgrounds/` and `kit/dapp/public/logo.svg`
* Update layout components to reference your assets
### Dark/light themes:
* Theming is handled by Tailwind and can be adjusted using conditional CSS
classes (`dark:bg-gray-800`, `light:text-black`, etc.)
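As an example, brand colors can be introduced by extending the Tailwind theme in `tailwind.config.ts`. This is a sketch rather than the kit's actual config, and the color values are placeholders.

```ts
// tailwind.config.ts (excerpt; color values are placeholders)
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        brand: {
          DEFAULT: "#0052cc",
          dark: "#003d99",
        },
      },
    },
  },
};

export default config;
```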
***
## Language & text customization
Multi-language support is implemented using message files in:
`kit/dapp/src/i18n/`
Each locale file (e.g., `en.json`, `fr.json`) contains key-value pairs for
interface text. This makes it easy to:
* Translate the platform for different regions
* Customize the tone of language (corporate vs. casual)
* Handle jurisdiction-specific legal disclaimers
***
## Utilities, types, and libraries
Supporting folders for maintaining consistent and scalable architecture:
* `src/utils/`: Reusable utility functions (e.g., formatting, validators)
* `src/types/`: TypeScript types for API responses, asset metadata, user models
* `src/lib/`: External service clients (e.g., subgraph queries, REST API
fetchers)
These folders help keep business logic clean and reusable across your
customizations.
***
## Running locally
To preview and test your changes:
```bash
cd kit/dapp
bun install
bun dev
```
file: ./content/docs/application-kits/asset-tokenization/user-management.mdx
meta: {
"title": "User management",
"description": "Getting started with Application Kit"
}

The user profile section within the asset tokenization kit serves as a
centralized and dynamic control panel for managing individual user accounts,
identities, and on-chain activities. It is designed to give platform
administrators full visibility into each user’s lifecycle, from account creation
and wallet assignment to asset ownership and blockchain interactions, enabling
efficient user governance, compliance enforcement, and operational insight.
## User creation
User creation happens via the signup form on the login page.
Upon user onboarding, the system automatically generates and assigns a unique
blockchain wallet address to the user. This wallet becomes the user’s on-chain
identity and is securely linked to their profile for executing token-related
operations such as asset issuance, transfers, minting, and participation in
permissioned flows. The wallet address is displayed in truncated format for
readability and includes quick copy functionality for administrative
convenience.
The profile interface is divided into multiple functional tabs such as Details,
Holdings, Latest Events, and Permissions, each providing focused data views and
administrative controls.
## User details
The Details tab displays a comprehensive snapshot of the user’s identity and
account state. It includes the user’s display name, email address, wallet
address, account status (e.g., active or banned), and KYC verification status.
It also logs platform activity details such as the date of account creation,
last login timestamp, and most recent interaction. Administrators can view
real-time operational metrics, including the number of tokenized assets supplied
by the user and the total number of transactions performed. This data is
critical for tracking user engagement, identifying high-value users, and
ensuring active participation in the tokenization ecosystem.
To complement the raw data, the interface incorporates data visualizations that
provide meaningful insights into user activity and asset allocation. The Asset
Distribution chart displays a visual breakdown of the user’s holdings by asset
class such as bonds and deposits, offering a portfolio-level perspective. The
Transactions Volume per Day graph highlights daily activity over the past month,
capturing transaction spikes and behavioral trends, while the Transactions
Volume per Month chart offers a longitudinal view of user interaction across the
year. These charts assist in behavioral analysis and help identify users with
consistent platform engagement versus those with sporadic activity.
## Role management and KYC
Administrative controls are accessible through the Edit User dropdown. This
includes the ability to update the user’s role via a dedicated interface where
platform roles (User, Issuer, or Admin) can be reassigned based on functional
requirements or access privileges. A separate confirmation dialog allows KYC
verification to be completed manually, typically after the necessary identity
documents have been reviewed by the compliance team. These capabilities ensure
the platform adheres to both role-based access control and regulatory compliance
mandates.
## User holdings

The Holdings tab provides a ledger-like view of all assets linked to the user’s
wallet. It lists each asset’s name, token symbol, asset type (bond, deposit,
etc.), balance, holder type (creator/owner), operational status, and the last
recorded activity. This section is especially important for financial
administrators or asset managers who need to assess the user’s exposure, asset
diversity, and the lifecycle stage of each instrument under their control.
## Events audit trail

The Latest Events tab functions as a real-time audit trail of all blockchain
events associated with the user. Each entry is timestamped and includes the
asset name, event type (such as minting, creation, or permission grants), and
the sender identity. This detailed ledger helps in tracking operational changes,
validating ownership claims, monitoring permission shifts, and performing
forensic analysis in the event of disputes or compliance inquiries.
file: ./content/docs/application-kits/asset-tokenization/xvp-settlement.mdx
meta: {
"title": "Atomic settlements (X versus Payment Settlements)",
"description": "Enable atomic settlements between parties with X versus Payment (XvP) Settlements"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import {
Clock,
LayoutGrid,
Workflow,
ShieldCheck,
FormInput,
MonitorCheck,
CalendarClock,
FileCheck,
} from "lucide-react";
## What is X versus Payment Settlement?
The X versus Payment (XvP) Settlement is a solution that enables atomic,
trustless settlement of digital assets between parties. This feature provides a
mechanism for executing delivery-versus-payment (DvP), payment-versus-payment
(PvP), or any asset-versus-asset exchange in a single, indivisible transaction.

### Key benefits
} title="Multi-party transfers">
Enable multiple participants to exchange various assets in a single atomic
transaction.
} title="Conditional triggering">
Execute settlements automatically when predefined conditions or time
thresholds are met.
} title="Settlement monitoring">
Track approvals, execution status, and transaction progress in real-time.
} title="Compliance integration">
Enforce regulatory compliance across settlements through the upcoming SMART
protocol.
} title="Programmable settlement">
Configure expiration dates and custom parameters to control settlement
behavior.
} title="Post-settlement actions">
Configure automated notifications, emails, or workflows triggered upon
settlement execution.
## Creating an X versus Payment Settlement
To create a new X versus Payment Settlement, navigate to the XvP Settlement
section and click "Create new XvP Settlement." The creation interface allows you
to configure:
1. **Settlement flows**: Configure two or more flows, where each flow specifies:
* **From**: The sending party's wallet address
* **To**: The receiving party's wallet address
* **Asset**: The digital asset to be transferred
* **Amount**: The quantity of the asset to be transferred

2. **Expiry date**: The date until which the settlement can be executed
3. **Auto-execute**: Whether to automatically execute the settlement on final
approval

Lastly, review the settlement details and submit the transaction to deploy a new
XvP Settlement contract.

Compliance conditions for digital assets can be configured through the SMART
protocol (coming soon) directly on the assets themselves.
The newly created contract will appear in your settlement list for monitoring.
## Approving an X versus Payment Settlement
When you're involved in an X versus Payment Settlement as a sender, you'll need
to approve it before it can be executed. The approval process includes:
1. Granting the settlement contract an allowance to transfer the specified
assets from your wallet
2. Then, sending your approval to the settlement contract itself
Once all involved parties have sent their approval, the settlement is ready to
be executed.
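As an illustration of the two approval steps above, the sketch below uses viem. The settlement contract's ABI, the function name, and all addresses are hypothetical; refer to the deployed XvP Settlement contract for the real interface.

```ts
import { createWalletClient, http, parseAbi } from "viem";
import { privateKeyToAccount } from "viem/accounts";
import { mainnet } from "viem/chains"; // placeholder; use your network's chain

const account = privateKeyToAccount("0x..."); // placeholder private key
const wallet = createWalletClient({
  account,
  chain: mainnet, // replace with your network
  transport: http("https://your-node.settlemint.com"), // placeholder RPC URL
});

// Step 1: grant the settlement contract an allowance for the asset you send.
await wallet.writeContract({
  address: "0xAssetTokenAddress", // placeholder: the asset being transferred
  abi: parseAbi(["function approve(address spender, uint256 amount) returns (bool)"]),
  functionName: "approve",
  args: ["0xXvpSettlementAddress", 1_000n], // amount in the asset's smallest unit
});

// Step 2: send your approval to the settlement contract itself.
// The function name here is hypothetical; check the actual contract ABI.
await wallet.writeContract({
  address: "0xXvpSettlementAddress", // placeholder settlement contract address
  abi: parseAbi(["function approve()"]),
  functionName: "approve",
});
```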

You can set up workflows for execution at a certain time or on other events.
## Execution phase of an X versus Payment Settlement
The execution phase is where the X versus Payment Settlement demonstrates its
value. During execution:
1. The settlement contract verifies all required approvals are in place
2. It checks that the settlement hasn't expired or been cancelled
3. It atomically transfers all assets between parties in a single transaction
If auto-execution is enabled during creation, the settlement contract will
automatically execute when the final party approves it. In this case, the last
approver pays the gas fees for the execution transaction.

## Cancelling an X versus Payment Settlement
Any involved party can cancel an X versus Payment Settlement before it has been
executed. This provides a safety mechanism if:
* Settlement parameters were incorrectly configured
* Market conditions changed before execution
* A party needs to withdraw from the agreement
Once a settlement is cancelled, it cannot be reactivated or executed.

## Settlement states
An X versus Payment Settlement can exist in one of the following states:
* **Pending**: Created but not yet fully approved by all parties
* **Ready**: Fully approved and ready for execution
* **Executed**: Successfully completed, with all assets transferred
* **Expired**: Past the cutoff date and no longer executable
* **Cancelled**: Explicitly cancelled by an involved party
These states help track the lifecycle of each settlement and provide clarity on
its current status.
## Technical implementation
The X versus Payment Settlement is powered by a secure smart contract that
follows best practices for atomic exchanges. The contract:
* Utilizes OpenZeppelin's security libraries
* Implements reentrancy protection
* Supports meta-transactions for gasless operations
* Includes comprehensive error handling
* Emits events for all key actions for auditability
Each settlement contract maintains its own state and manages the asset flows
between parties, ensuring settlement integrity and security.
file: ./content/docs/building-with-settlemint/building-with-sdk/blockscout.mdx
meta: {
"title": "Blockscout Explorer",
"description": "Integrating Blockscout blockchain explorer in your SettleMint dApp"
}
## About
The SettleMint Blockscout SDK provides a seamless way to interact with Blockscout APIs for blockchain data exploration and analysis. It enables you to easily query transaction data, blocks, addresses, smart contracts and more from your SettleMint-powered blockchain networks.
## API Reference
### Functions
#### createBlockscoutClient()
> **createBlockscoutClient**\<`Setup`>(`options`, `clientOptions?`): `object`
Defined in: [sdk/blockscout/src/blockscout.ts:76](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/blockscout/src/blockscout.ts#L76)
Creates a Blockscout GraphQL client with proper type safety using gql.tada
##### Type Parameters
| Type Parameter |
| --------------------------------------- |
| `Setup` *extends* `AbstractSetupSchema` |
##### Parameters
| Parameter | Type | Description |
| ---------------------- | ---------------------------------------------------- | --------------------------------------------- |
| `options` | \{ `accessToken?`: `string`; `instance`: `string`; } | Configuration options for the client |
| `options.accessToken?` | `string` | - |
| `options.instance?` | `string` | - |
| `clientOptions?` | `RequestConfig` | Optional GraphQL client configuration options |
##### Returns
`object`
An object containing the GraphQL client and initialized gql.tada function
| Name | Type | Defined in |
| --------- | --------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `client` | `GraphQLClient` | [sdk/blockscout/src/blockscout.ts:80](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/blockscout/src/blockscout.ts#L80) |
| `graphql` | `initGraphQLTada`\<`Setup`> | [sdk/blockscout/src/blockscout.ts:81](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/blockscout/src/blockscout.ts#L81) |
##### Throws
Will throw an error if the options fail validation
##### Example
```ts
import { createBlockscoutClient } from '@settlemint/sdk-blockscout';
import type { introspection } from "@schemas/blockscout-env";
import { createLogger, requestLogger } from '@settlemint/sdk-utils/logging';
const logger = createLogger();
const { client, graphql } = createBlockscoutClient<{
introspection: introspection;
disableMasking: true;
scalars: {
AddressHash: string;
Data: string;
DateTime: string;
Decimal: string;
FullHash: string;
Json: string;
NonceHash: string;
Wei: string;
};
}>({
instance: process.env.SETTLEMINT_BLOCKSCOUT_ENDPOINT,
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN
}, {
fetch: requestLogger(logger, "blockscout", fetch) as typeof fetch,
});
// Making GraphQL queries
const query = graphql(`
query GetTransaction($hash: String!) {
transaction(hash: $hash) {
hash
blockNumber
value
gasUsed
}
}
`);
const result = await client.request(query, {
hash: "0x123abc..."
});
```
### Type Aliases
#### ClientOptions
> **ClientOptions** = `z.infer`\<*typeof* [`ClientOptionsSchema`](#clientoptionsschema)>
Defined in: [sdk/blockscout/src/blockscout.ts:24](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/blockscout/src/blockscout.ts#L24)
Type definition for client options derived from the ClientOptionsSchema
***
#### RequestConfig
> **RequestConfig** = `ConstructorParameters`\<*typeof* `GraphQLClient`>\[`1`]
Defined in: [sdk/blockscout/src/blockscout.ts:11](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/blockscout/src/blockscout.ts#L11)
Type definition for GraphQL client configuration options
### Variables
#### ClientOptionsSchema
> `const` **ClientOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodOptional`\<`ZodString`>; `instance`: `ZodUnion`\; }, `$strip`>
Defined in: [sdk/blockscout/src/blockscout.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/blockscout/src/blockscout.ts#L16)
Schema for validating client options for the Blockscout client.
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/eas.mdx
meta: {
"title": "Ethereum Attestation Service (EAS)",
"description": "Integrating Ethereum Attestation Service (EAS) in your SettleMint dApp"
}
## About
The SettleMint EAS SDK provides a lightweight wrapper for the Ethereum Attestation Service (EAS), enabling developers to easily create, manage, and verify attestations within their applications. It simplifies the process of working with EAS by handling contract interactions, schema management, and The Graph integration, while ensuring proper integration with the SettleMint platform. This allows developers to quickly implement document verification, identity attestation, and other EAS-based features without manual setup.
## API Reference
### Functions
#### createEASClient()
> **createEASClient**(`options`): `object`
Defined in: [sdk/eas/src/eas.ts:36](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/eas.ts#L36)
Creates an EAS client for interacting with the Ethereum Attestation Service.
##### Parameters
| Parameter | Type | Description |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| `options` | \{ `accessToken`: `string`; `attestationAddress`: `string`; `chainId`: `string`; `chainName`: `string`; `rpcUrl`: `string`; `schemaRegistryAddress`: `string`; } | Configuration options for the client |
| `options.accessToken` | `string` | Access token for the RPC provider (must start with 'sm\_aat\_' or 'sm\_pat\_') |
| `options.attestationAddress` | `string` | The address of the EAS Attestation contract |
| `options.chainId` | `string` | The chain ID to connect to |
| `options.chainName` | `string` | The name of the chain to connect to |
| `options.rpcUrl` | `string` | The RPC URL to connect to (must be a valid URL) |
| `options.schemaRegistryAddress` | `string` | The address of the EAS Schema Registry contract |
##### Returns
`object`
An object containing the EAS client instance
| Name | Type | Defined in |
| ------------------ | ----------------------------------- | --------------------------------------------------------------------------------------------- |
| `getSchema()` | (`uid`) => `Promise`\<`string`> | [sdk/eas/src/eas.ts:96](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/eas.ts#L96) |
| `registerSchema()` | (`options`) => `Promise`\<`string`> | [sdk/eas/src/eas.ts:95](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/eas.ts#L95) |
##### Throws
Will throw an error if the options fail validation
##### Example
```ts
import { createEASClient } from '@settlemint/sdk-eas';
const client = createEASClient({
schemaRegistryAddress: "0x1234567890123456789012345678901234567890",
attestationAddress: "0x1234567890123456789012345678901234567890",
accessToken: "your-access-token",
chainId: "1",
chainName: "Ethereum",
rpcUrl: "http://localhost:8545"
});
```
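Building on the client created above and the documented `registerSchema` and `getSchema` signatures, a schema registration might look like this; the field names and resolver address are illustrative.

```ts
// Register a schema and read it back (field names are illustrative).
const schemaUid = await client.registerSchema({
  fields: [
    { name: "documentHash", type: "bytes32", description: "Hash of the attested document" },
    { name: "verified", type: "bool" },
  ],
  resolverAddress: "0x0000000000000000000000000000000000000000", // no custom resolver
  revocable: true,
});

const schema = await client.getSchema(schemaUid);
console.log(schema); // e.g. "bytes32 documentHash, bool verified"
```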
### Interfaces
#### RegisterSchemaOptions
Defined in: [sdk/eas/src/types.ts:34](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L34)
Options for registering a new schema in the EAS Schema Registry.
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------------------- | -------------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| `fields` | [`SchemaField`](#schemafield)\[] | Array of fields that make up the schema | [sdk/eas/src/types.ts:36](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L36) |
| `resolverAddress` | `string` | Address of the resolver contract that will handle attestations | [sdk/eas/src/types.ts:38](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L38) |
| `revocable` | `boolean` | Whether attestations using this schema can be revoked | [sdk/eas/src/types.ts:40](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L40) |
***
#### SchemaField
Defined in: [sdk/eas/src/types.ts:22](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L22)
Represents a single field in an EAS schema.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| `description?` | `string` | Optional description of the field's purpose | [sdk/eas/src/types.ts:28](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L28) |
| `name` | `string` | The name of the field | [sdk/eas/src/types.ts:24](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L24) |
| `type` | `"string"` \| `"address"` \| `"bool"` \| `"bytes"` \| `"bytes32"` \| `"int8"` \| `"int256"` \| `"uint8"` \| `"uint256"` | The Solidity type of the field | [sdk/eas/src/types.ts:26](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L26) |
### Type Aliases
#### ClientOptions
> **ClientOptions** = `z.infer`\<*typeof* [`ClientOptionsSchema`](#clientoptionsschema)>
Defined in: [sdk/eas/src/client-options.schema.ts:28](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/client-options.schema.ts#L28)
Configuration options for creating an EAS client.
Combines EAS-specific options with base Viem client options.
### Variables
#### ClientOptionsSchema
> `const` **ClientOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodString`; `attestationAddress`: `ZodString`; `chainId`: `ZodString`; `chainName`: `ZodString`; `rpcUrl`: `ZodString`; `schemaRegistryAddress`: `ZodString`; }, `$strip`>
Defined in: [sdk/eas/src/client-options.schema.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/client-options.schema.ts#L9)
Schema for validating EAS client configuration options.
Extends the base Viem client options with EAS-specific requirements.
***
#### EAS\_FIELD\_TYPES
> `const` **EAS\_FIELD\_TYPES**: `object`
Defined in: [sdk/eas/src/types.ts:5](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L5)
Supported field types for EAS schema fields.
Maps to the Solidity types that can be used in EAS schemas.
##### Type declaration
| Name | Type | Default value | Defined in |
| ---------------------------- | ----------- | ------------- | ------------------------------------------------------------------------------------------------- |
| `address` | `"address"` | `"address"` | [sdk/eas/src/types.ts:7](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L7) |
| `bool` | `"bool"` | `"bool"` | [sdk/eas/src/types.ts:8](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L8) |
| `bytes` | `"bytes"` | `"bytes"` | [sdk/eas/src/types.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L9) |
| `bytes32` | `"bytes32"` | `"bytes32"` | [sdk/eas/src/types.ts:10](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L10) |
| `int256` | `"int256"` | `"int256"` | [sdk/eas/src/types.ts:12](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L12) |
| `int8` | `"int8"` | `"int8"` | [sdk/eas/src/types.ts:14](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L14) |
| `string` | `"string"` | `"string"` | [sdk/eas/src/types.ts:6](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L6) |
| `uint256` | `"uint256"` | `"uint256"` | [sdk/eas/src/types.ts:11](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L11) |
| `uint8` | `"uint8"` | `"uint8"` | [sdk/eas/src/types.ts:13](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/eas/src/types.ts#L13) |
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/hasura.mdx
meta: {
"title": "Hasura",
"description": "Integrating Hasura in your SettleMint dApp"
}
## About
The SettleMint Hasura SDK provides a seamless way to interact with Hasura GraphQL APIs for managing application data. It enables you to easily query and mutate data stored in your SettleMint-powered PostgreSQL databases through a type-safe GraphQL interface.
## API Reference
### Functions
#### createHasuraClient()
> **createHasuraClient**\<`Setup`>(`options`, `clientOptions?`, `logger?`): `object`
Defined in: [sdk/hasura/src/hasura.ts:82](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/hasura.ts#L82)
Creates a Hasura GraphQL client with proper type safety using gql.tada
##### Type Parameters
| Type Parameter |
| --------------------------------------- |
| `Setup` *extends* `AbstractSetupSchema` |
##### Parameters
| Parameter | Type | Description |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- |
| `options` | \{ `accessToken?`: `string`; `adminSecret`: `string`; `cache?`: `"default"` \| `"force-cache"` \| `"no-cache"` \| `"no-store"` \| `"only-if-cached"` \| `"reload"`; `instance`: `string`; } | Configuration options for the client |
| `options.accessToken?` | `string` | - |
| `options.adminSecret?` | `string` | - |
| `options.cache?` | `"default"` \| `"force-cache"` \| `"no-cache"` \| `"no-store"` \| `"only-if-cached"` \| `"reload"` | - |
| `options.instance?` | `string` | - |
| `clientOptions?` | `RequestConfig` | Optional GraphQL client configuration options |
| `logger?` | `Logger` | Optional logger to use for logging the requests |
##### Returns
`object`
An object containing:
* client: The configured GraphQL client instance
* graphql: The initialized gql.tada function for type-safe queries
| Name | Type | Defined in |
| --------- | --------------------------- | --------------------------------------------------------------------------------------------------------- |
| `client` | `GraphQLClient` | [sdk/hasura/src/hasura.ts:87](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/hasura.ts#L87) |
| `graphql` | `initGraphQLTada`\<`Setup`> | [sdk/hasura/src/hasura.ts:88](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/hasura.ts#L88) |
##### Throws
Will throw an error if the options fail validation against ClientOptionsSchema
##### Example
```ts
import { createHasuraClient } from '@settlemint/sdk-hasura';
import type { introspection } from "@schemas/hasura-env";
import { createLogger, requestLogger } from "@settlemint/sdk-utils/logging";
const logger = createLogger();
const { client, graphql } = createHasuraClient<{
introspection: introspection;
disableMasking: true;
scalars: {
timestamp: string;
timestampz: string;
uuid: string;
date: string;
time: string;
jsonb: string;
numeric: string;
interval: string;
geometry: string;
geography: string;
};
}>({
instance: process.env.SETTLEMINT_HASURA_ENDPOINT,
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
adminSecret: process.env.SETTLEMINT_HASURA_ADMIN_SECRET,
}, {
fetch: requestLogger(logger, "hasura", fetch) as typeof fetch,
});
// Making GraphQL queries
const query = graphql(`
query GetUsers {
users {
id
name
email
}
}
`);
const result = await client.request(query);
```
***
#### createPostgresPool()
> **createPostgresPool**(`databaseUrl`): `Pool`
Defined in: [sdk/hasura/src/postgres.ts:107](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/postgres.ts#L107)
Creates a PostgreSQL connection pool with error handling and retry mechanisms
##### Parameters
| Parameter | Type | Description |
| ------------- | -------- | ----------------------------- |
| `databaseUrl` | `string` | The PostgreSQL connection URL |
##### Returns
`Pool`
A configured PostgreSQL connection pool
##### Throws
Will throw an error if called from browser runtime
##### Example
```ts
import { createPostgresPool } from '@settlemint/sdk-hasura';
const pool = createPostgresPool(process.env.SETTLEMINT_HASURA_DATABASE_URL);
// The pool will automatically handle connection errors and retries
const client = await pool.connect();
try {
const result = await client.query('SELECT NOW()');
console.log(result.rows[0]);
} finally {
client.release();
}
```
### Type Aliases
#### ClientOptions
> **ClientOptions** = `z.infer`\<*typeof* [`ClientOptionsSchema`](#clientoptionsschema)>
Defined in: [sdk/hasura/src/hasura.ts:27](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/hasura.ts#L27)
Type definition for client options derived from the ClientOptionsSchema.
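A minimal sketch of how this alias can be used, assuming it is exported from the package root: typing a reusable configuration object keeps it aligned with what `createHasuraClient` accepts.
```ts
import type { ClientOptions } from "@settlemint/sdk-hasura";
// Typed, reusable configuration; the compiler flags any field that
// drifts from what ClientOptionsSchema validates at runtime.
const options: ClientOptions = {
  instance: process.env.SETTLEMINT_HASURA_ENDPOINT!,
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
  adminSecret: process.env.SETTLEMINT_HASURA_ADMIN_SECRET!,
  cache: "no-store",
};
```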
***
#### RequestConfig
> **RequestConfig** = `ConstructorParameters`\<*typeof* `GraphQLClient`>\[`1`]
Defined in: [sdk/hasura/src/hasura.ts:12](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/hasura.ts#L12)
Type definition for GraphQL client configuration options
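Since `RequestConfig` maps to the second constructor parameter of `graphql-request`'s `GraphQLClient`, it can carry options such as extra headers or a custom `fetch` implementation. A minimal sketch (the header name is illustrative, not required by the SDK):
```ts
import type { RequestConfig } from "@settlemint/sdk-hasura";
// Forwarded to the underlying GraphQLClient as its second argument
const clientOptions: RequestConfig = {
  headers: { "x-correlation-id": "docs-example" }, // hypothetical header
};
```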
### Variables
#### ClientOptionsSchema
> `const` **ClientOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodOptional`\<`ZodString`>; `adminSecret`: `ZodString`; `cache`: `ZodOptional`\<`ZodEnum`\<\{ `default`: `"default"`; `force-cache`: `"force-cache"`; `no-cache`: `"no-cache"`; `no-store`: `"no-store"`; `only-if-cached`: `"only-if-cached"`; `reload`: `"reload"`; }>>; `instance`: `ZodUnion`\; }, `$strip`>
Defined in: [sdk/hasura/src/hasura.ts:17](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/hasura/src/hasura.ts#L17)
Schema for validating client options for the Hasura client.
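Because this is a plain Zod object, it can also be used to validate configuration ahead of time without constructing a client; a minimal sketch:
```ts
import { ClientOptionsSchema } from "@settlemint/sdk-hasura";
// safeParse returns a result object instead of throwing on invalid input
const parsed = ClientOptionsSchema.safeParse({
  instance: process.env.SETTLEMINT_HASURA_ENDPOINT,
  adminSecret: process.env.SETTLEMINT_HASURA_ADMIN_SECRET,
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
});
if (!parsed.success) {
  console.error("Invalid Hasura client options:", parsed.error.issues);
}
```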
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/ipfs.mdx
meta: {
"title": "IPFS Storage",
"description": "Integrating IPFS storage in your SettleMint dApp"
}
## About
The SettleMint IPFS SDK provides a simple way to interact with IPFS (InterPlanetary File System) through the SettleMint platform. It enables you to easily store and retrieve files using IPFS in a decentralized manner.
## API Reference
### Functions
#### createIpfsClient()
> **createIpfsClient**(`options`): `object`
Defined in: [sdk/ipfs/src/ipfs.ts:31](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/ipfs/src/ipfs.ts#L31)
Creates an IPFS client for client-side use
##### Parameters
| Parameter | Type | Description |
| ------------------ | -------------------------- | ------------------------------------------ |
| `options` | \{ `instance`: `string`; } | Configuration options for the client |
| `options.instance` | `string` | The URL of the IPFS instance to connect to |
##### Returns
`object`
An object containing the configured IPFS client instance
| Name | Type | Defined in |
| -------- | --------------- | ------------------------------------------------------------------------------------------------- |
| `client` | `KuboRPCClient` | [sdk/ipfs/src/ipfs.ts:31](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/ipfs/src/ipfs.ts#L31) |
##### Throws
Will throw an error if the options fail validation
##### Example
```ts
import { createIpfsClient } from '@settlemint/sdk-ipfs';
const { client } = createIpfsClient({
instance: 'https://ipfs.settlemint.com'
});
// Upload a file using Blob
const blob = new Blob(['Hello, world!'], { type: 'text/plain' });
const result = await client.add(blob);
console.log(result.cid.toString());
```
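Retrieval works through the same Kubo RPC client. A minimal sketch, where the CID placeholder stands in for a value returned by `client.add`:
```ts
import { createIpfsClient } from '@settlemint/sdk-ipfs';
const { client } = createIpfsClient({
  instance: 'https://ipfs.settlemint.com'
});
// cat() streams the file content as Uint8Array chunks
const chunks: Uint8Array[] = [];
for await (const chunk of client.cat('<your-cid>')) {
  chunks.push(chunk);
}
console.log(new TextDecoder().decode(Buffer.concat(chunks)));
```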
***
#### createServerIpfsClient()
> **createServerIpfsClient**(`options`): `object`
Defined in: [sdk/ipfs/src/ipfs.ts:60](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/ipfs/src/ipfs.ts#L60)
Creates an IPFS client for server-side use with authentication
##### Parameters
| Parameter | Type | Description |
| --------------------- | --------------------------------------------------- | ------------------------------------------------------------------ |
| `options` | \{ `accessToken`: `string`; `instance`: `string`; } | Configuration options for the client including authentication |
| `options.accessToken` | `string` | The access token used to authenticate with the SettleMint platform |
| `options.instance` | `string` | The URL of the IPFS instance to connect to |
##### Returns
`object`
An object containing the authenticated IPFS client instance
| Name | Type | Defined in |
| -------- | --------------- | ------------------------------------------------------------------------------------------------- |
| `client` | `KuboRPCClient` | [sdk/ipfs/src/ipfs.ts:60](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/ipfs/src/ipfs.ts#L60) |
##### Throws
Will throw an error if called on the client side or if options validation fails
##### Example
```ts
import { createServerIpfsClient } from '@settlemint/sdk-ipfs';
const { client } = createServerIpfsClient({
instance: process.env.SETTLEMINT_IPFS_ENDPOINT,
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN
});
// Upload a file using Blob
const blob = new Blob(['Hello, world!'], { type: 'text/plain' });
const result = await client.add(blob);
console.log(result.cid.toString());
```
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/minio.mdx
meta: {
"title": "MinIO/S3 storage",
"description": "Integrating MinIO/S3 storage in your SettleMint dApp"
}
## About
The SettleMint MinIO SDK provides a simple way to interact with MinIO object storage through the SettleMint platform. It enables you to easily store and retrieve files using MinIO's S3-compatible API in a secure and scalable manner.
## API Reference
### Functions
#### createPresignedUploadUrl()
> **createPresignedUploadUrl**(`client`, `fileName`, `path`, `bucket`, `expirySeconds`): `Promise`\<`string`>
Defined in: [sdk/minio/src/helpers/functions.ts:261](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/functions.ts#L261)
Creates a presigned upload URL for direct browser uploads
##### Parameters
| Parameter | Type | Default value | Description |
| --------------- | -------- | ---------------- | -------------------------------------------------- |
| `client` | `Client` | `undefined` | The MinIO client to use |
| `fileName` | `string` | `undefined` | The file name to use |
| `path` | `string` | `""` | Optional path/folder |
| `bucket` | `string` | `DEFAULT_BUCKET` | Optional bucket name (defaults to DEFAULT\_BUCKET) |
| `expirySeconds` | `number` | `3600` | How long the URL should be valid for |
##### Returns
`Promise`\<`string`>
Presigned URL for PUT operation
##### Throws
Will throw an error if URL creation fails or client initialization fails
##### Example
```ts
import { createServerMinioClient, createPresignedUploadUrl } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
// Generate the presigned URL on the server
const url = await createPresignedUploadUrl(client, "report.pdf", "documents/");
// Send the URL to the client/browser via HTTP response
return Response.json({ uploadUrl: url });
// Then in the browser:
const response = await fetch('/api/get-upload-url');
const { uploadUrl } = await response.json();
await fetch(uploadUrl, {
method: 'PUT',
headers: { 'Content-Type': 'application/pdf' },
body: pdfFile
});
```
***
#### createServerMinioClient()
> **createServerMinioClient**(`options`): `object`
Defined in: [sdk/minio/src/minio.ts:23](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/minio.ts#L23)
Creates a MinIO client for server-side use with authentication.
##### Parameters
| Parameter | Type | Description |
| ------------------- | ------------------------------------------------------------------------ | --------------------------------------------------------------- |
| `options` | \{ `accessKey`: `string`; `instance`: `string`; `secretKey`: `string`; } | The server client options for configuring the MinIO client |
| `options.accessKey` | `string` | The MinIO access key used to authenticate with the MinIO server |
| `options.instance` | `string` | The URL of the MinIO instance to connect to |
| `options.secretKey` | `string` | The MinIO secret key used to authenticate with the MinIO server |
##### Returns
`object`
An object containing the initialized MinIO client
| Name | Type | Defined in |
| -------- | -------- | ----------------------------------------------------------------------------------------------------- |
| `client` | `Client` | [sdk/minio/src/minio.ts:23](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/minio.ts#L23) |
##### Throws
Will throw an error if not called on the server or if the options fail validation
##### Example
```ts
import { createServerMinioClient } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
console.log(await client.listBuckets());
```
***
#### deleteFile()
> **deleteFile**(`client`, `fileId`, `bucket`): `Promise`\<`boolean`>
Defined in: [sdk/minio/src/helpers/functions.ts:214](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/functions.ts#L214)
Deletes a file from storage
##### Parameters
| Parameter | Type | Default value | Description |
| --------- | -------- | ---------------- | -------------------------------------------------- |
| `client` | `Client` | `undefined` | The MinIO client to use |
| `fileId` | `string` | `undefined` | The file identifier/path |
| `bucket` | `string` | `DEFAULT_BUCKET` | Optional bucket name (defaults to DEFAULT\_BUCKET) |
##### Returns
`Promise`\<`boolean`>
Success status
##### Throws
Will throw an error if deletion fails or client initialization fails
##### Example
```ts
import { createServerMinioClient, deleteFile } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
await deleteFile(client, "documents/report.pdf");
```
***
#### getFileById()
> **getFileById**(`client`, `fileId`, `bucket`): `Promise`\<[`FileMetadata`](#filemetadata)>
Defined in: [sdk/minio/src/helpers/functions.ts:141](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/functions.ts#L141)
Gets a single file by its object name
##### Parameters
| Parameter | Type | Default value | Description |
| --------- | -------- | ---------------- | -------------------------------------------------- |
| `client` | `Client` | `undefined` | The MinIO client to use |
| `fileId` | `string` | `undefined` | The file identifier/path |
| `bucket` | `string` | `DEFAULT_BUCKET` | Optional bucket name (defaults to DEFAULT\_BUCKET) |
##### Returns
`Promise`\<[`FileMetadata`](#filemetadata)>
File metadata with presigned URL
##### Throws
Will throw an error if the file doesn't exist or client initialization fails
##### Example
```ts
import { createServerMinioClient, getFileById } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
const file = await getFileById(client, "documents/report.pdf");
```
***
#### getFilesList()
> **getFilesList**(`client`, `prefix`, `bucket`): `Promise`\<[`FileMetadata`](#filemetadata)\[]>
Defined in: [sdk/minio/src/helpers/functions.ts:62](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/functions.ts#L62)
Gets a list of files with optional prefix filter
##### Parameters
| Parameter | Type | Default value | Description |
| --------- | -------- | ---------------- | ---------------------------------------------------- |
| `client` | `Client` | `undefined` | The MinIO client to use |
| `prefix` | `string` | `""` | Optional prefix to filter files (like a folder path) |
| `bucket` | `string` | `DEFAULT_BUCKET` | Optional bucket name (defaults to DEFAULT\_BUCKET) |
##### Returns
`Promise`\<[`FileMetadata`](#filemetadata)\[]>
Array of file metadata objects
##### Throws
Will throw an error if the operation fails or client initialization fails
##### Example
```ts
import { createServerMinioClient, getFilesList } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
const files = await getFilesList(client, "documents/");
```
***
#### uploadFile()
> **uploadFile**(`client`, `buffer`, `objectName`, `contentType`, `bucket`): `Promise`\<[`FileMetadata`](#filemetadata)>
Defined in: [sdk/minio/src/helpers/functions.ts:311](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/functions.ts#L311)
Uploads a buffer directly to storage
##### Parameters
| Parameter | Type | Default value | Description |
| ------------- | -------- | ---------------- | -------------------------------------------------- |
| `client` | `Client` | `undefined` | The MinIO client to use |
| `buffer` | `Buffer` | `undefined` | The buffer to upload |
| `objectName` | `string` | `undefined` | The full object name/path |
| `contentType` | `string` | `undefined` | The content type of the file |
| `bucket` | `string` | `DEFAULT_BUCKET` | Optional bucket name (defaults to DEFAULT\_BUCKET) |
##### Returns
`Promise`\<[`FileMetadata`](#filemetadata)>
The uploaded file metadata
##### Throws
Will throw an error if upload fails or client initialization fails
##### Example
```ts
import { createServerMinioClient, uploadFile } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
const buffer = Buffer.from("Hello, world!");
const uploadedFile = await uploadFile(client, buffer, "documents/hello.txt", "text/plain");
```
### Interfaces
#### FileMetadata
Defined in: [sdk/minio/src/helpers/schema.ts:29](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L29)
Type representing file metadata after validation.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------ | -------- | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| `contentType` | `string` | The content type of the file. | [sdk/minio/src/helpers/schema.ts:41](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L41) |
| `etag` | `string` | The ETag of the file. | [sdk/minio/src/helpers/schema.ts:56](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L56) |
| `id` | `string` | The unique identifier for the file. | [sdk/minio/src/helpers/schema.ts:33](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L33) |
| `name` | `string` | The name of the file. | [sdk/minio/src/helpers/schema.ts:37](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L37) |
| `size` | `number` | The size of the file in bytes. | [sdk/minio/src/helpers/schema.ts:46](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L46) |
| `uploadedAt` | `string` | The date and time the file was uploaded. | [sdk/minio/src/helpers/schema.ts:51](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L51) |
| `url?` | `string` | The URL of the file. | [sdk/minio/src/helpers/schema.ts:61](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L61) |
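All helpers that return file information resolve to this shape. A minimal sketch that lists files and reads their metadata:
```ts
import { createServerMinioClient, getFilesList } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
  instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
  accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
  secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
// Each entry is a FileMetadata object
for (const file of await getFilesList(client, "documents/")) {
  console.log(`${file.name} (${file.contentType}): ${file.size} bytes, uploaded ${file.uploadedAt}`);
}
```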
### Variables
#### DEFAULT\_BUCKET
> `const` **DEFAULT\_BUCKET**: `"uploads"` = `"uploads"`
Defined in: [sdk/minio/src/helpers/schema.ts:67](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/minio/src/helpers/schema.ts#L67)
Default bucket name to use for file storage when none is specified.
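A minimal sketch of how the default interacts with the optional `bucket` parameter, assuming the constant is exported alongside the helpers (the "archive" bucket name is illustrative):
```ts
import { DEFAULT_BUCKET, createServerMinioClient, uploadFile } from "@settlemint/sdk-minio";
const { client } = createServerMinioClient({
  instance: process.env.SETTLEMINT_MINIO_ENDPOINT!,
  accessKey: process.env.SETTLEMINT_MINIO_ACCESS_KEY!,
  secretKey: process.env.SETTLEMINT_MINIO_SECRET_KEY!
});
// Equivalent to omitting the bucket argument: stored in "uploads"
await uploadFile(client, Buffer.from("hello"), "hello.txt", "text/plain", DEFAULT_BUCKET);
// Stored in an explicitly named bucket instead
await uploadFile(client, Buffer.from("hello"), "hello.txt", "text/plain", "archive");
```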
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/portal.mdx
meta: {
"title": "Smart Contract Portal",
"description": "Using the Smart Contract Portal in your SettleMint dApp"
}
## About
The SettleMint Smart Contract Portal SDK provides a seamless way to interact with the Smart Contract Portal Middleware API. It enables you to easily interact with your smart contracts using a REST or GraphQL API.
The SDK offers a type-safe interface for all Portal API operations, with comprehensive error handling and validation. It integrates smoothly with modern TypeScript applications while providing a simple and intuitive developer experience.
## Examples
### Deploy contract
```ts
/**
* This example demonstrates how to deploy a contract.
*
* The process involves:
* 1. Creating a portal client
* 2. Deploying a forwarder contract
* 3. Waiting for the forwarder contract deployment to be finalized
* 4. Deploying a stablecoin factory contract
* 5. Getting all contracts and filtering by ABI name
*
* This pattern is useful for applications that need to deploy smart contracts
* in the SettleMint Portal, providing a way to track the progress of blockchain operations.
*/
import { loadEnv } from "@settlemint/sdk-utils/environment";
import { createLogger, requestLogger } from "@settlemint/sdk-utils/logging";
import { getAddress } from "viem";
import { createPortalClient } from "../portal.js"; // Replace this path with "@settlemint/sdk-portal"
import { waitForTransactionReceipt } from "../utils/wait-for-transaction-receipt.js"; // Replace this path with "@settlemint/sdk-portal"
import type { introspection } from "./schemas/portal-env.d.ts"; // Replace this path with the generated introspection type
const env = await loadEnv(false, false);
const logger = createLogger();
const { client: portalClient, graphql: portalGraphql } = createPortalClient<{
introspection: introspection;
disableMasking: true;
scalars: {
// Change unknown to the type you are using to store metadata
JSON: unknown;
};
}>(
{
instance: env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT!,
accessToken: env.SETTLEMINT_ACCESS_TOKEN!,
},
{
fetch: requestLogger(logger, "portal", fetch) as typeof fetch,
},
);
// Replace with the address associated with the private key you use to deploy smart contracts
const FROM = getAddress("0x4B03331cF2db1497ec58CAa4AFD8b93611906960");
/**
* Deploy a forwarder contract
*/
const deployForwarder = await portalClient.request(
portalGraphql(`
mutation DeployContractForwarder($from: String!) {
DeployContractForwarder(from: $from, gasLimit: "0x3d0900") {
transactionHash
}
}
`),
{
from: FROM,
},
);
/**
* Wait for the forwarder contract deployment to be finalized
*/
const transaction = await waitForTransactionReceipt(deployForwarder.DeployContractForwarder?.transactionHash!, {
portalGraphqlEndpoint: env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT!,
accessToken: env.SETTLEMINT_ACCESS_TOKEN!,
});
/**
* Deploy a stablecoin factory contract
*/
const deployStableCoinFactory = await portalClient.request(
portalGraphql(`
mutation DeployContractStableCoinFactory($from: String!, $constructorArguments: DeployContractStableCoinFactoryInput!) {
DeployContractStableCoinFactory(from: $from, constructorArguments: $constructorArguments, gasLimit: "0x3d0900") {
transactionHash
}
}
`),
{
from: FROM,
constructorArguments: {
forwarder: getAddress(transaction?.receipt.contractAddress!),
},
},
);
console.log(deployStableCoinFactory?.DeployContractStableCoinFactory?.transactionHash);
const contractAddresses = await portalClient.request(
portalGraphql(`
query GetContracts {
getContracts {
count
records {
address
abiName
createdAt
}
}
}
`),
);
// Print total count
console.log(`Total contracts: ${contractAddresses.getContracts?.count}`);
// Contracts for StableCoinFactory
console.log(contractAddresses.getContracts?.records.filter((record) => record.abiName === "StableCoinFactory"));
```
### Get pending transactions
```ts
/**
* This example demonstrates how to get the number of pending transactions.
*
* The process involves:
* 1. Creating a portal client
* 2. Making a GraphQL query to get the number of pending transactions
*
* This pattern is useful for applications that need to monitor the status of pending transactions
* in the SettleMint Portal, providing a way to track the progress of blockchain operations.
*/
import { loadEnv } from "@settlemint/sdk-utils/environment";
import { createLogger, requestLogger } from "@settlemint/sdk-utils/logging";
import { createPortalClient } from "../portal.js"; // Replace this path with "@settlemint/sdk-portal"
import type { introspection } from "./schemas/portal-env.d.ts"; // Replace this path with the generated introspection type
const env = await loadEnv(false, false);
const logger = createLogger();
const { client: portalClient, graphql: portalGraphql } = createPortalClient<{
introspection: introspection;
disableMasking: true;
scalars: {
// Change unknown to the type you are using to store metadata
JSON: unknown;
};
}>(
{
instance: env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT!,
accessToken: env.SETTLEMINT_ACCESS_TOKEN!,
},
{
fetch: requestLogger(logger, "portal", fetch) as typeof fetch,
},
);
// Making GraphQL queries
const query = portalGraphql(`
query GetPendingTransactions {
getPendingTransactions {
count
}
}
`);
const result = await portalClient.request(query);
console.log(`There are ${result.getPendingTransactions?.count} pending transactions`);
```
### Monitoring and alerting
```ts
/**
* This example demonstrates how to implement real-time transaction monitoring and alerting.
*
* The process involves:
* 1. Creating a WebSocket subscription to monitor all blockchain transactions
* 2. Setting up custom handlers for different monitoring scenarios
* 3. Processing transactions in real-time as they are confirmed
* 4. Implementing specific monitoring functions for addresses, events, and failures
* 5. Triggering alerts based on predefined conditions
*
* This pattern is useful for applications that need to:
* - Detect suspicious activities for security purposes
* - Track high-value transfers or specific contract interactions
* - Monitor for failed transactions that require attention
* - Implement compliance reporting and audit trails
* - Build automated workflows that respond to on-chain events
* - Provide real-time notifications to stakeholders
*/
import type { FormattedExecutionResult } from "graphql";
import { type Transaction, type WebsocketClientOptions, getWebsocketClient } from "../portal.js"; // Replace this path with "@settlemint/sdk-portal"
/**
* Handlers for different monitoring scenarios
* You can implement your own handlers
*/
export type AlertHandlers = {
onAddressActivity: (transaction: Transaction, addresses: string[]) => void;
onEvent: (transaction: Transaction, eventNames: string[]) => void;
onFailure: (transaction: Transaction) => void;
};
/**
* Monitors all blockchain transactions by subscribing to transaction updates via GraphQL.
* This function continuously logs all transaction receipts as they are received.
*
* @param options - Configuration options for connecting to the Portal API
* @param handlers - Handlers for the different monitoring scenarios
* @throws Error if the subscription fails
*
* @example
* import { monitorAllTransactions } from "@settlemint/sdk-portal";
*
* monitorAllTransactions({
* portalGraphqlEndpoint: "https://example.settlemint.com/graphql",
* accessToken: "your-access-token"
* }, {
*   onAddressActivity: (tx, addresses) => {
*     console.log(`Checking transaction ${tx.transactionHash} for addresses ${addresses.join(", ")}`);
*   },
*   onEvent: (tx, eventNames) => {
*     console.log(`Checking transaction ${tx.transactionHash} for events ${eventNames.join(", ")}`);
*   },
*   onFailure: (tx) => {
*     console.log(`Transaction ${tx.transactionHash} status: ${tx.receipt.status}`);
*   }
* });
*/
export function monitorAllTransactions(options: WebsocketClientOptions, handlers: AlertHandlers) {
const wsClient = getWebsocketClient(options);
const subscription = wsClient.iterate<{
getProcessedTransactions: {
records: Transaction[];
};
}>({
query: `subscription getProcessedTransactions {
getProcessedTransactions(pageSize: 1) {
records {
receipt {
transactionHash
to
status
from
type
revertReason
revertReasonDecoded
logs
events
contractAddress
}
transactionHash
from
createdAt
address
functionName
isContract
}
}
}`,
});
// Start the monitoring process
processSubscription(subscription, handlers);
return subscription;
}
/**
* Internal helper to process the subscription stream
*/
async function processSubscription(
subscription: AsyncIterable<
FormattedExecutionResult<
{
getProcessedTransactions: {
records: Transaction[];
};
},
unknown
>
>,
handlers: AlertHandlers,
) {
(async () => {
for await (const result of subscription) {
if (result?.data?.getProcessedTransactions?.records) {
const records = result.data.getProcessedTransactions.records;
const transaction = records.at(-1);
if (transaction) {
processTransaction(transaction, handlers);
}
}
}
})();
}
/**
* Process a single transaction with the configured handlers
*/
function processTransaction(transaction: Transaction, handlers: AlertHandlers) {
// Monitor specific addresses (example addresses)
handlers.onAddressActivity(transaction, ["0x742d35Cc6634C0532925a3b844Bc454e4438f44e"]);
// Monitor for specific events
handlers.onEvent(transaction, ["Transfer", "Approval"]);
// Monitor for failed transactions
handlers.onFailure(transaction);
}
/**
* Monitors transactions from or to specific addresses.
*
* @param transaction - The transaction to check
* @param addresses - The addresses to monitor
*
* @example
* import { monitorSpecificAddresses } from "@settlemint/sdk-portal";
*
* monitorSpecificAddresses(transaction, ["0x742d35Cc6634C0532925a3b844Bc454e4438f44e"]);
*/
export function monitorSpecificAddresses(transaction: Transaction, addresses: string[]) {
  const { from } = transaction;
  const { to } = transaction.receipt;
  // Find which monitored address (if any) is the sender or recipient
  const involvedAddress = addresses.find((address) => [from, to].includes(address));
  if (involvedAddress) {
    notify(`[ADDRESS] Address ${involvedAddress} was involved in transaction ${transaction.transactionHash}`);
  }
}
/**
* Monitors transactions for specific contract events.
*
* @param transaction - The transaction to check
* @param eventNames - The event names to monitor
*
* @example
* import { monitorContractEvents } from "@settlemint/sdk-portal";
*
* monitorContractEvents(transaction, ["Transfer", "Approval"]);
*/
export function monitorContractEvents(transaction: Transaction, eventNames: string[]) {
const events = transaction.receipt.events;
const eventDetected = events.find((event) => eventNames.includes(event.eventName));
if (eventDetected) {
notify(`[EVENT] Event ${eventDetected.eventName} detected in transaction ${transaction.transactionHash}`);
}
}
/**
* Monitors for failed transactions that require attention.
*
* @param transaction - The transaction to check
*
* @example
* import { monitorFailedTransactions } from "@settlemint/sdk-portal";
*
* monitorFailedTransactions(transaction);
*/
export function monitorFailedTransactions(transaction: Transaction) {
const status = transaction.receipt?.status;
if (status === "Reverted") {
const reason = transaction.receipt.revertReasonDecoded;
notify(`[FAILED] Transaction ${transaction.transactionHash} failed: ${reason}`);
}
}
const notify = (message: string) => {
console.log(message);
};
/**
* Example usage - monitoring specific on-chain activity
*/
export function runMonitoringExample() {
// Basic usage
monitorAllTransactions(
{
portalGraphqlEndpoint: "https://example.settlemint.com/graphql",
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN!,
},
{
onAddressActivity: monitorSpecificAddresses,
onEvent: monitorContractEvents,
onFailure: monitorFailedTransactions,
},
);
}
runMonitoringExample();
```
### Send transaction using an HD wallet
```ts
/**
* This example demonstrates how to send a transaction using an HD wallet.
*
* The process involves:
* 1. Creating a wallet for a user using the HD private key
* 2. Setting up a pincode for wallet verification
* 3. Handling the wallet verification challenge
* 4. Sending a transaction to the blockchain
*
* This pattern is useful for applications that need to manage multiple user wallets
* derived from a single HD wallet, providing a secure and scalable approach to
* blockchain interactions in enterprise applications.
*/
import { loadEnv } from "@settlemint/sdk-utils/environment";
import { createLogger, requestLogger } from "@settlemint/sdk-utils/logging";
import type { Address } from "viem";
import { createPortalClient } from "../portal.js"; // Replace this path with "@settlemint/sdk-portal"
import { handleWalletVerificationChallenge } from "../utils/wallet-verification-challenge.js"; // Replace this path with "@settlemint/sdk-portal"
import type { introspection } from "./schemas/portal-env.js"; // Replace this path with the generated introspection type
const env = await loadEnv(false, false);
const logger = createLogger();
const { client: portalClient, graphql: portalGraphql } = createPortalClient<{
introspection: introspection;
disableMasking: true;
scalars: {
// Change unknown to the type you are using to store metadata
JSON: unknown;
};
}>(
{
instance: env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT!,
accessToken: env.SETTLEMINT_ACCESS_TOKEN!,
},
{
fetch: requestLogger(logger, "portal", fetch) as typeof fetch,
},
);
/**
* First create a wallet using the HD private key; this must be done for every user of your app
*/
const wallet = await portalClient.request(
portalGraphql(`
mutation createUserWallet($keyVaultId: String!, $name: String!) {
createWallet(keyVaultId: $keyVaultId, walletInfo: { name: $name }) {
address
}
}
`),
{
keyVaultId: env.SETTLEMINT_HD_PRIVATE_KEY!,
name: "My Wallet",
},
);
/**
* Set a pincode for the wallet; it is used to verify the wallet when the user sends a transaction to the chain
*/
const pincodeVerification = await portalClient.request(
portalGraphql(`
mutation setPinCode($address: String!, $pincode: String!) {
createWalletVerification(
userWalletAddress: $address
verificationInfo: {pincode: {name: "PINCODE", pincode: $pincode}}
) {
id
name
parameters
verificationType
}
}
`),
{
address: wallet.createWallet?.address!,
pincode: "123456",
},
);
/**
* Generate a challenge response for the pincode verification
*/
const challengeResponse = await handleWalletVerificationChallenge({
portalClient,
portalGraphql,
verificationId: pincodeVerification.createWalletVerification?.id!,
userWalletAddress: wallet.createWallet?.address! as Address,
code: "123456",
verificationType: "pincode",
});
/**
* Send a transaction to the chain
* This is a sample of how to send a transaction to the chain using the portal client and the asset tokenization kit.
* The challenge response, generated by the handleWalletVerificationChallenge function, is used to verify wallet access.
* @see https://github.com/settlemint/asset-tokenization-kit
*/
const result = await portalClient.request(
portalGraphql(`
mutation StableCoinFactoryCreate(
$challengeResponse: String!
$verificationId: String
$address: String!
$from: String!
$input: StableCoinFactoryCreateInput!
) {
StableCoinFactoryCreate(
challengeResponse: $challengeResponse
verificationId: $verificationId
address: $address
from: $from
input: $input
) {
transactionHash
}
}
`),
{
challengeResponse: challengeResponse.challengeResponse,
verificationId: pincodeVerification.createWalletVerification?.id!,
address: "0x5e771e1417100000000000000000000000000004",
from: wallet.createWallet?.address!,
input: {
name: "Test Coin",
symbol: "TEST",
decimals: 18,
collateralLivenessSeconds: 3_600,
},
},
);
// Log the transaction hash
console.log("Transaction hash:", result.StableCoinFactoryCreate?.transactionHash);
```
## API Reference
### Functions
#### createPortalClient()
> **createPortalClient**\<`Setup`>(`options`, `clientOptions?`): `object`
Defined in: [sdk/portal/src/portal.ts:72](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/portal.ts#L72)
Creates a Portal GraphQL client with the provided configuration.
##### Type Parameters
| Type Parameter |
| --------------------------------------- |
| `Setup` *extends* `AbstractSetupSchema` |
##### Parameters
| Parameter | Type | Description |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------- |
| `options` | \{ `accessToken?`: `string`; `cache?`: `"default"` \| `"force-cache"` \| `"no-cache"` \| `"no-store"` \| `"only-if-cached"` \| `"reload"`; `instance`: `string`; } | Configuration options for the Portal client |
| `options.accessToken?` | `string` | - |
| `options.cache?` | `"default"` \| `"force-cache"` \| `"no-cache"` \| `"no-store"` \| `"only-if-cached"` \| `"reload"` | - |
| `options.instance` | `string` | - |
| `clientOptions?` | `RequestConfig` | Additional GraphQL client configuration options |
##### Returns
`object`
An object containing the configured GraphQL client and graphql helper function
| Name | Type | Defined in |
| --------- | --------------------------- | --------------------------------------------------------------------------------------------------------- |
| `client` | `GraphQLClient` | [sdk/portal/src/portal.ts:76](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/portal.ts#L76) |
| `graphql` | `initGraphQLTada`\<`Setup`> | [sdk/portal/src/portal.ts:77](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/portal.ts#L77) |
##### Throws
If the provided options fail validation
##### Example
```ts
import { createPortalClient } from "@settlemint/sdk-portal";
import { loadEnv } from "@settlemint/sdk-utils/environment";
import { createLogger, requestLogger } from "@settlemint/sdk-utils/logging";
import type { introspection } from "@schemas/portal-env";
const env = await loadEnv(false, false);
const logger = createLogger();
const { client: portalClient, graphql: portalGraphql } = createPortalClient<{
introspection: introspection;
disableMasking: true;
scalars: {
// Change unknown to the type you are using to store metadata
JSON: unknown;
};
}>(
{
instance: env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT!,
accessToken: env.SETTLEMINT_ACCESS_TOKEN!,
},
{
fetch: requestLogger(logger, "portal", fetch) as typeof fetch,
},
);
// Making GraphQL queries
const query = portalGraphql(`
query GetPendingTransactions {
getPendingTransactions {
count
}
}
`);
const result = await portalClient.request(query);
```
***
#### getWebsocketClient()
> **getWebsocketClient**(`options`): `Client`
Defined in: [sdk/portal/src/utils/websocket-client.ts:30](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/websocket-client.ts#L30)
Creates a GraphQL WebSocket client for the Portal API
##### Parameters
| Parameter | Type | Description |
| --------- | --------------------------------------------------- | -------------------------- |
| `options` | [`WebsocketClientOptions`](#websocketclientoptions) | The options for the client |
##### Returns
`Client`
The GraphQL WebSocket client
##### Example
```ts
import { getWebsocketClient } from "@settlemint/sdk-portal";
const client = getWebsocketClient({
portalGraphqlEndpoint: "https://portal.settlemint.com/graphql",
accessToken: "your-access-token",
});
```
***
#### handleWalletVerificationChallenge()
> **handleWalletVerificationChallenge**\<`Setup`>(`options`): `Promise`\<\{ `challengeResponse`: `string`; `verificationId?`: `string`; }>
Defined in: [sdk/portal/src/utils/wallet-verification-challenge.ts:103](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L103)
Handles a wallet verification challenge by generating an appropriate response
##### Type Parameters
| Type Parameter |
| --------------------------------------- |
| `Setup` *extends* `AbstractSetupSchema` |
##### Parameters
| Parameter | Type | Description |
| --------- | ------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- |
| `options` | [`HandleWalletVerificationChallengeOptions`](#handlewalletverificationchallengeoptions)\<`Setup`> | The options for handling the wallet verification challenge |
##### Returns
`Promise`\<\{ `challengeResponse`: `string`; `verificationId?`: `string`; }>
Promise resolving to an object containing the challenge response and optionally the verification ID
##### Throws
If the challenge cannot be created or is invalid
##### Example
```ts
import { createPortalClient } from "@settlemint/sdk-portal";
import { handleWalletVerificationChallenge } from "@settlemint/sdk-portal";
const { client, graphql } = createPortalClient({
instance: "https://portal.example.com/graphql",
accessToken: "your-access-token"
});
const result = await handleWalletVerificationChallenge({
portalClient: client,
portalGraphql: graphql,
verificationId: "verification-123",
userWalletAddress: "0x123...",
code: "123456",
verificationType: "otp"
});
```
***
#### waitForTransactionReceipt()
> **waitForTransactionReceipt**(`transactionHash`, `options`): `Promise`\<[`Transaction`](#transaction)>
Defined in: [sdk/portal/src/utils/wait-for-transaction-receipt.ts:80](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L80)
Waits for a blockchain transaction receipt by subscribing to transaction updates via GraphQL.
It resolves once the transaction is confirmed, or throws if the timeout is reached.
##### Parameters
| Parameter | Type | Description |
| ----------------- | ----------------------------------------------------------------------- | --------------------------------------------- |
| `transactionHash` | `string` | The hash of the transaction to wait for |
| `options` | [`WaitForTransactionReceiptOptions`](#waitfortransactionreceiptoptions) | Configuration options for the waiting process |
##### Returns
`Promise`\<[`Transaction`](#transaction)>
The transaction details including receipt information when the transaction is confirmed
##### Throws
Error if the transaction receipt cannot be retrieved within the specified timeout
##### Example
```ts
import { waitForTransactionReceipt } from "@settlemint/sdk-portal";
const transaction = await waitForTransactionReceipt("0x123...", {
portalGraphqlEndpoint: "https://example.settlemint.com/graphql",
accessToken: "your-access-token",
timeout: 30000 // 30 seconds timeout
});
```
### Interfaces
#### HandleWalletVerificationChallengeOptions\<Setup>
Defined in: [sdk/portal/src/utils/wallet-verification-challenge.ts:64](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L64)
Options for handling a wallet verification challenge
##### Type Parameters
| Type Parameter |
| --------------------------------------- |
| `Setup` *extends* `AbstractSetupSchema` |
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------------------ | ----------------------------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `code` | `string` \| `number` | The verification code provided by the user | [sdk/portal/src/utils/wallet-verification-challenge.ts:74](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L74) |
| `portalClient` | `GraphQLClient` | The portal client instance | [sdk/portal/src/utils/wallet-verification-challenge.ts:66](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L66) |
| `portalGraphql` | `initGraphQLTada`\<`Setup`> | The GraphQL query builder | [sdk/portal/src/utils/wallet-verification-challenge.ts:68](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L68) |
| `userWalletAddress` | `` `0x${string}` `` | The wallet address to verify | [sdk/portal/src/utils/wallet-verification-challenge.ts:72](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L72) |
| `verificationId` | `string` | The ID of the verification challenge | [sdk/portal/src/utils/wallet-verification-challenge.ts:70](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L70) |
| `verificationType` | `"otp"` \| `"secret-code"` \| `"pincode"` | The type of verification being performed | [sdk/portal/src/utils/wallet-verification-challenge.ts:76](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wallet-verification-challenge.ts#L76) |
***
#### Transaction
Defined in: [sdk/portal/src/utils/wait-for-transaction-receipt.ts:34](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L34)
Represents the structure of a blockchain transaction with its receipt
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------------------- | --------- | ------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `address` | `string` | The contract address involved in the transaction | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:43](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L43) |
| `createdAt` | `string` | Timestamp when the transaction was created | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:41](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L41) |
| `from` | `string` | The sender address (duplicate of receipt.from) | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:39](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L39) |
| `functionName` | `string` | The name of the function called in the transaction | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:45](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L45) |
| `isContract` | `boolean` | Whether the transaction is a contract deployment | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:47](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L47) |
| `transactionHash` | `string` | The hash of the transaction (duplicate of receipt.transactionHash) | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:37](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L37) |
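This is the shape resolved by `waitForTransactionReceipt` and streamed by the monitoring subscription. A minimal sketch that inspects a confirmed transaction (the hash placeholder follows the examples above):
```ts
import { waitForTransactionReceipt } from "@settlemint/sdk-portal";
const transaction = await waitForTransactionReceipt("0x123...", {
  portalGraphqlEndpoint: process.env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT!,
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN!,
});
console.log(`${transaction.functionName} from ${transaction.from}: ${transaction.receipt.status}`);
// Surface decoded revert information for failed transactions
if (transaction.receipt.status === "Reverted") {
  console.error(transaction.receipt.revertReasonDecoded);
}
// Scan emitted events (see TransactionEvent below)
const transfer = transaction.receipt.events.find((event) => event.eventName === "Transfer");
if (transfer) {
  console.log(transfer.args);
}
```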
***
#### TransactionEvent
Defined in: [sdk/portal/src/utils/wait-for-transaction-receipt.ts:8](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L8)
Represents an event emitted during a transaction execution
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------- | ------------------------------ | --------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `args` | `Record`\<`string`, `unknown`> | The arguments emitted by the event | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:12](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L12) |
| `eventName` | `string` | The name of the event that was emitted | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:10](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L10) |
| `topics` | `` `0x${string}` ``\[] | Indexed event parameters used for filtering and searching | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:14](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L14) |
***
#### TransactionReceipt
Defined in: [sdk/portal/src/utils/wait-for-transaction-receipt.ts:20](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L20)
Represents the structure of a blockchain transaction receipt
##### Extends
* `TransactionReceipt`\<`string`, `number`, `"Success"` | `"Reverted"`>
##### Properties
| Property | Type | Description | Overrides | Defined in |
| ---------------------------------------------------- | ------------------------------------------ | ------------------------------------------------------- | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `contractAddress` | `` `0x${string}` `` | The address of the contract deployed in the transaction | `TransactionReceiptViem.contractAddress` | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:28](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L28) |
| `events` | [`TransactionEvent`](#transactionevent)\[] | Array of events emitted during the transaction | - | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:26](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L26) |
| `revertReason` | `string` | The raw reason for transaction reversion, if applicable | - | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:22](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L22) |
| `revertReasonDecoded` | `string` | Human-readable version of the revert reason | - | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:24](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L24) |
***
#### WaitForTransactionReceiptOptions
Defined in: [sdk/portal/src/utils/wait-for-transaction-receipt.ts:57](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L57)
Options for waiting for a transaction receipt
##### Extends
* [`WebsocketClientOptions`](#websocketclientoptions)
##### Properties
| Property | Type | Description | Inherited from | Defined in |
| -------------------------------------------------------- | -------- | ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `accessToken?` | `string` | The access token for authentication with the Portal API | [`WebsocketClientOptions`](#websocketclientoptions).[`accessToken`](#accesstoken-1) | [sdk/portal/src/utils/websocket-client.ts:14](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/websocket-client.ts#L14) |
| `portalGraphqlEndpoint` | `string` | The GraphQL endpoint URL for the Portal API | [`WebsocketClientOptions`](#websocketclientoptions).[`portalGraphqlEndpoint`](#portalgraphqlendpoint-1) | [sdk/portal/src/utils/websocket-client.ts:10](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/websocket-client.ts#L10) |
| `timeout?` | `number` | Optional timeout in milliseconds before the operation fails | - | [sdk/portal/src/utils/wait-for-transaction-receipt.ts:59](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/wait-for-transaction-receipt.ts#L59) |
***
#### WebsocketClientOptions
Defined in: [sdk/portal/src/utils/websocket-client.ts:6](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/websocket-client.ts#L6)
Options for the GraphQL WebSocket client
##### Extended by
* [`WaitForTransactionReceiptOptions`](#waitfortransactionreceiptoptions)
##### Properties
| Property | Type | Description | Defined in |
| ---------------------------------------------------------- | -------- | ------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| `accessToken?` | `string` | The access token for authentication with the Portal API | [sdk/portal/src/utils/websocket-client.ts:14](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/websocket-client.ts#L14) |
| `portalGraphqlEndpoint` | `string` | The GraphQL endpoint URL for the Portal API | [sdk/portal/src/utils/websocket-client.ts:10](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/utils/websocket-client.ts#L10) |
### Type Aliases
#### ClientOptions
> **ClientOptions** = `z.infer`\<*typeof* [`ClientOptionsSchema`](#clientoptionsschema)>
Defined in: [sdk/portal/src/portal.ts:25](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/portal.ts#L25)
Type representing the validated client options.
***
#### RequestConfig
> **RequestConfig** = `ConstructorParameters`\<*typeof* `GraphQLClient`>\[`1`]
Defined in: [sdk/portal/src/portal.ts:11](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/portal.ts#L11)
Configuration options for the GraphQL client, excluding 'url' and 'exchanges'.
### Variables
#### ClientOptionsSchema
> `const` **ClientOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodOptional`\<`ZodString`>; `cache`: `ZodOptional`\<`ZodEnum`\<\{ `default`: `"default"`; `force-cache`: `"force-cache"`; `no-cache`: `"no-cache"`; `no-store`: `"no-store"`; `only-if-cached`: `"only-if-cached"`; `reload`: `"reload"`; }>>; `instance`: `ZodUnion`\; }, `$strip`>
Defined in: [sdk/portal/src/portal.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/portal/src/portal.ts#L16)
Schema for validating Portal client configuration options.
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/the-graph.mdx
meta: {
"title": "The Graph",
"description": "Integrating The Graph in your SettleMint dApp"
}
## About
The SettleMint SDK for The Graph provides a seamless way to interact with The Graph's APIs for blockchain data indexing and querying. It enables you to easily create and manage subgraphs, define schemas, and query indexed blockchain data from your SettleMint-powered blockchain networks using GraphQL.
The SDK offers a type-safe interface for all The Graph operations, with comprehensive error handling and validation. It integrates smoothly with modern TypeScript applications while providing a simple and intuitive developer experience.
## API Reference
### Functions
#### createTheGraphClient()
> **createTheGraphClient**\<`Setup`>(`options`, `clientOptions?`): `object`
Defined in: [sdk/thegraph/src/thegraph.ts:91](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/thegraph/src/thegraph.ts#L91)
Creates a GraphQL client for The Graph with proper type safety using gql.tada
##### Type Parameters
| Type Parameter |
| --------------------------------------- |
| `Setup` *extends* `AbstractSetupSchema` |
##### Parameters
| Parameter | Type | Description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------- |
| `options` | \{ `accessToken?`: `string`; `cache?`: `"default"` \| `"force-cache"` \| `"no-cache"` \| `"no-store"` \| `"only-if-cached"` \| `"reload"`; `instances`: `string`\[]; `subgraphName`: `string`; } | Configuration options for the client including instance URLs, access token and subgraph name |
| `options.accessToken?` | `string` | - |
| `options.cache?` | `"default"` \| `"force-cache"` \| `"no-cache"` \| `"no-store"` \| `"only-if-cached"` \| `"reload"` | - |
| `options.instances` | `string`\[] | - |
| `options.subgraphName` | `string` | - |
| `clientOptions?` | `RequestConfig` | Optional GraphQL client configuration options |
##### Returns
`object`
An object containing:
* client: The configured GraphQL client instance
* graphql: The initialized gql.tada function for type-safe queries
| Name | Type | Defined in |
| --------- | --------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| `client` | `GraphQLClient` | [sdk/thegraph/src/thegraph.ts:95](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/thegraph/src/thegraph.ts#L95) |
| `graphql` | `initGraphQLTada`\<`Setup`> | [sdk/thegraph/src/thegraph.ts:96](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/thegraph/src/thegraph.ts#L96) |
##### Throws
Will throw an error if the options fail validation against ClientOptionsSchema
##### Example
```ts
import { createTheGraphClient } from '@settlemint/sdk-thegraph';
import type { introspection } from '@schemas/the-graph-env-kits';
import { createLogger, requestLogger } from '@settlemint/sdk-utils/logging';
const logger = createLogger();
const { client, graphql } = createTheGraphClient<{
introspection: introspection;
disableMasking: true;
scalars: {
Bytes: string;
Int8: string;
BigInt: string;
BigDecimal: string;
Timestamp: string;
};
}>({
instances: JSON.parse(process.env.SETTLEMINT_THEGRAPH_SUBGRAPHS_ENDPOINTS || '[]'),
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
subgraphName: 'kits'
}, {
fetch: requestLogger(logger, "the-graph-kits", fetch) as typeof fetch,
});
// Making GraphQL queries
const query = graphql(`
query SearchAssets {
assets {
id
name
symbol
}
}
`);
const result = await client.request(query);
```
### Type Aliases
#### ClientOptions
> **ClientOptions** = `z.infer`\<*typeof* [`ClientOptionsSchema`](#clientoptionsschema)>
Defined in: [sdk/thegraph/src/thegraph.ts:26](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/thegraph/src/thegraph.ts#L26)
Type definition for client options derived from the ClientOptionsSchema
***
#### RequestConfig
> **RequestConfig** = `ConstructorParameters`\<*typeof* `GraphQLClient`>\[`1`]
Defined in: [sdk/thegraph/src/thegraph.ts:11](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/thegraph/src/thegraph.ts#L11)
Type definition for GraphQL client configuration options
### Variables
#### ClientOptionsSchema
> `const` **ClientOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodOptional`\<`ZodString`>; `cache`: `ZodOptional`\<`ZodEnum`\<\{ `default`: `"default"`; `force-cache`: `"force-cache"`; `no-cache`: `"no-cache"`; `no-store`: `"no-store"`; `only-if-cached`: `"only-if-cached"`; `reload`: `"reload"`; }>>; `instances`: `ZodArray`\<`ZodUnion`\>; `subgraphName`: `ZodString`; }, `$strip`>
Defined in: [sdk/thegraph/src/thegraph.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/thegraph/src/thegraph.ts#L16)
Schema for validating client options for the TheGraph client.
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/building-with-sdk/viem.mdx
meta: {
"title": "Viem",
"description": "Integrating Viem for Ethereum interactions in your SettleMint dApp"
}
## About
The SettleMint Viem SDK provides a lightweight wrapper that automatically configures and sets up a Viem client based on your connected SettleMint application. It simplifies the process of establishing connections to SettleMint-managed blockchain networks by handling authentication, endpoint configuration, and chain selection. This allows developers to quickly start using Viem's powerful Ethereum interaction capabilities without manual setup, while ensuring proper integration with the SettleMint platform.
## API Reference
### Functions
#### getChainId()
> **getChainId**(`options`): `Promise`\<`number`>
Defined in: [sdk/viem/src/viem.ts:217](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L217)
Get the chain id of a blockchain network.
##### Parameters
| Parameter | Type | Description |
| --------- | ----------------------------------------- | ---------------------------------- |
| `options` | [`GetChainIdOptions`](#getchainidoptions) | The options for the public client. |
##### Returns
`Promise`\<`number`>
The chain id.
##### Example
```ts
import { getChainId } from '@settlemint/sdk-viem';
const chainId = await getChainId({
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
rpcUrl: process.env.SETTLEMINT_BLOCKCHAIN_NODE_OR_LOAD_BALANCER_JSON_RPC_ENDPOINT!,
});
console.log(chainId);
```
***
#### getPublicClient()
> **getPublicClient**(`options`): `object`
Defined in: [sdk/viem/src/viem.ts:75](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L75)
Get a public client. Use this if you need to read from the blockchain.
##### Parameters
| Parameter | Type | Description |
| --------- | --------------------------------- | ---------------------------------- |
| `options` | [`ClientOptions`](#clientoptions) | The options for the public client. |
##### Returns
`object`
The public client. see [https://viem.sh/docs/clients/public](https://viem.sh/docs/clients/public)
##### Example
```ts
import { getPublicClient } from '@settlemint/sdk-viem';
const publicClient = getPublicClient({
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
chainId: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK_CHAIN_ID!,
chainName: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK!,
rpcUrl: process.env.SETTLEMINT_BLOCKCHAIN_NODE_OR_LOAD_BALANCER_JSON_RPC_ENDPOINT!,
});
// Get the block number
const block = await publicClient.getBlockNumber();
console.log(block);
```
***
#### getWalletClient()
> **getWalletClient**(`options`): (`verificationOptions?`) => `Client`\<`HttpTransport`\<`undefined` | `RpcSchema`, `boolean`>, `Chain`, `undefined`, `WalletRpcSchema`, `object` & `object` & `object` & `object` & `object` & `object` & `object` & `WalletActions`\<`Chain`, `undefined`>>
Defined in: [sdk/viem/src/viem.ts:143](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L143)
Get a wallet client. Use this if you need to write to the blockchain.
##### Parameters
| Parameter | Type | Description |
| --------- | --------------------------------- | ---------------------------------- |
| `options` | [`ClientOptions`](#clientoptions) | The options for the wallet client. |
##### Returns
A function that returns a wallet client. The function can be called with verification options for HD wallets. see [https://viem.sh/docs/clients/wallet](https://viem.sh/docs/clients/wallet)
> (`verificationOptions?`): `Client`\<`HttpTransport`\<`undefined` | `RpcSchema`, `boolean`>, `Chain`, `undefined`, `WalletRpcSchema`, `object` & `object` & `object` & `object` & `object` & `object` & `object` & `WalletActions`\<`Chain`, `undefined`>>
###### Parameters
| Parameter | Type |
| ---------------------- | --------------------------------------------------------- |
| `verificationOptions?` | [`WalletVerificationOptions`](#walletverificationoptions) |
###### Returns
`Client`\<`HttpTransport`\<`undefined` | `RpcSchema`, `boolean`>, `Chain`, `undefined`, `WalletRpcSchema`, `object` & `object` & `object` & `object` & `object` & `object` & `object` & `WalletActions`\<`Chain`, `undefined`>>
##### Example
```ts
import { getWalletClient } from '@settlemint/sdk-viem';
import { parseAbi } from "viem";
const walletClient = getWalletClient({
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
chainId: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK_CHAIN_ID!,
chainName: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK!,
rpcUrl: process.env.SETTLEMINT_BLOCKCHAIN_NODE_OR_LOAD_BALANCER_JSON_RPC_ENDPOINT!,
});
// Get the chain id
const chainId = await walletClient().getChainId();
console.log(chainId);
// write to the blockchain
const transactionHash = await walletClient().writeContract({
account: "0x0000000000000000000000000000000000000000",
address: "0xFBA3912Ca04dd458c843e2EE08967fC04f3579c2",
abi: parseAbi(["function mint(uint32 tokenId) nonpayable"]),
functionName: "mint",
args: [69420],
});
console.log(transactionHash);
```
### Enumerations
#### OTPAlgorithm
Defined in: [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:18](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L18)
Supported hash algorithms for One-Time Password (OTP) verification.
These algorithms determine the cryptographic function used to generate OTP codes.
##### Enumeration Members
| Enumeration Member | Value | Description | Defined in |
| ------------------------------ | ------------ | ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SHA1` | `"SHA1"` | SHA-1 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:20](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L20) |
| `SHA224` | `"SHA224"` | SHA-224 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:22](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L22) |
| `SHA256` | `"SHA256"` | SHA-256 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:24](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L24) |
| `SHA3_224` | `"SHA3-224"` | SHA3-224 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:30](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L30) |
| `SHA3_256` | `"SHA3-256"` | SHA3-256 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:32](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L32) |
| `SHA3_384` | `"SHA3-384"` | SHA3-384 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:34](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L34) |
| `SHA3_512` | `"SHA3-512"` | SHA3-512 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:36](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L36) |
| `SHA384` | `"SHA384"` | SHA-384 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:26](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L26) |
| `SHA512` | `"SHA512"` | SHA-512 hash algorithm | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:28](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L28) |
***
#### WalletVerificationType
Defined in: [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:5](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L5)
Types of wallet verification methods supported by the system.
Used to identify different verification mechanisms when creating or managing wallet verifications.
##### Enumeration Members
| Enumeration Member | Value | Description | Defined in |
| -------------------------------------- | ---------------- | ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `OTP` | `"OTP"` | One-Time Password verification method | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L9) |
| `PINCODE` | `"PINCODE"` | PIN code verification method | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:7](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L7) |
| `SECRET_CODES` | `"SECRET_CODES"` | Secret recovery codes verification method | [sdk/viem/src/custom-actions/types/wallet-verification.enum.ts:11](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/types/wallet-verification.enum.ts#L11) |
### Interfaces
#### CreateWalletParameters
Defined in: [sdk/viem/src/custom-actions/create-wallet.action.ts:14](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L14)
Parameters for creating a wallet.
##### Properties
| Property | Type | Description | Defined in |
| ---------------------------------- | ----------------------------- | ------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `keyVaultId` | `string` | The unique name of the key vault where the wallet will be created. | [sdk/viem/src/custom-actions/create-wallet.action.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L16) |
| `walletInfo` | [`WalletInfo`](#walletinfo-1) | Information about the wallet to be created. | [sdk/viem/src/custom-actions/create-wallet.action.ts:18](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L18) |
***
#### CreateWalletResponse
Defined in: [sdk/viem/src/custom-actions/create-wallet.action.ts:24](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L24)
Response from creating a wallet.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------------ | -------- | ------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `address` | `string` | The blockchain address of the wallet. | [sdk/viem/src/custom-actions/create-wallet.action.ts:30](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L30) |
| `derivationPath` | `string` | The HD derivation path used to create the wallet. | [sdk/viem/src/custom-actions/create-wallet.action.ts:32](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L32) |
| `id` | `string` | The unique identifier of the wallet. | [sdk/viem/src/custom-actions/create-wallet.action.ts:26](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L26) |
| `name` | `string` | The name of the wallet. | [sdk/viem/src/custom-actions/create-wallet.action.ts:28](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L28) |
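Taken together with `CreateWalletParameters`, creating a wallet might look like the sketch below. Note that the `createWallet` action name and its availability on the wallet client are assumptions inferred from the source file name (`create-wallet.action.ts`), and the key vault and wallet names are illustrative:
```typescript
import { getWalletClient } from '@settlemint/sdk-viem';

const walletClient = getWalletClient({
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
  chainId: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK_CHAIN_ID!,
  chainName: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK!,
  rpcUrl: process.env.SETTLEMINT_BLOCKCHAIN_NODE_OR_LOAD_BALANCER_JSON_RPC_ENDPOINT!,
});

// Assumed action name, inferred from create-wallet.action.ts
const wallet = await walletClient().createWallet({
  keyVaultId: 'my-key-vault',            // illustrative key vault unique name
  walletInfo: { name: 'user-wallet-1' }, // documented WalletInfo shape
});
console.log(wallet.id, wallet.address, wallet.derivationPath);
```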
***
#### CreateWalletVerificationChallengesParameters
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:8](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L8)
Parameters for creating wallet verification challenges.
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `addressOrObject` | [`AddressOrObject`](#addressorobject-2) | The wallet address or object containing wallet address and optional verification ID. | [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:10](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L10) |
***
#### CreateWalletVerificationParameters
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:59](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L59)
Parameters for creating a wallet verification.
##### Properties
| Property | Type | Description | Defined in |
| ---------------------------------------------------------- | ----------------------------------------------------- | -------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `userWalletAddress` | `string` | The wallet address for which to create the verification. | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:61](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L61) |
| `walletVerificationInfo` | [`WalletVerificationInfo`](#walletverificationinfo-1) | The verification information to create. | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:63](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L63) |
***
#### CreateWalletVerificationResponse
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:69](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L69)
Response from creating a wallet verification.
##### Properties
| Property | Type | Description | Defined in |
| ---------------------------------------------- | --------------------------------------------------- | -------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `id` | `string` | The unique identifier of the verification. | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:71](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L71) |
| `name` | `string` | The name of the verification method. | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:73](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L73) |
| `parameters` | `Record`\<`string`, `string`> | Additional parameters specific to the verification type. | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:77](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L77) |
| `verificationType` | [`WalletVerificationType`](#walletverificationtype) | The type of verification method. | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:75](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L75) |
***
#### DeleteWalletVerificationParameters
Defined in: [sdk/viem/src/custom-actions/delete-wallet-verification.action.ts:6](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/delete-wallet-verification.action.ts#L6)
Parameters for deleting a wallet verification.
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------------------------- | -------- | -------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `userWalletAddress` | `string` | The wallet address for which to delete the verification. | [sdk/viem/src/custom-actions/delete-wallet-verification.action.ts:8](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/delete-wallet-verification.action.ts#L8) |
| `verificationId` | `string` | The unique identifier of the verification to delete. | [sdk/viem/src/custom-actions/delete-wallet-verification.action.ts:10](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/delete-wallet-verification.action.ts#L10) |
***
#### DeleteWalletVerificationResponse
Defined in: [sdk/viem/src/custom-actions/delete-wallet-verification.action.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/delete-wallet-verification.action.ts#L16)
Response from deleting a wallet verification.
##### Properties
| Property | Type | Description | Defined in |
| ---------------------------- | --------- | ------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `success` | `boolean` | Whether the deletion was successful. | [sdk/viem/src/custom-actions/delete-wallet-verification.action.ts:18](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/delete-wallet-verification.action.ts#L18) |
***
#### GetWalletVerificationsParameters
Defined in: [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:7](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L7)
Parameters for getting wallet verifications.
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------------------------- | -------- | ---------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `userWalletAddress` | `string` | The wallet address for which to fetch verifications. | [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L9) |
***
#### VerificationResult
Defined in: [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:26](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L26)
Result of a wallet verification challenge.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------ | --------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `verified` | `boolean` | Whether the verification was successful. | [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:28](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L28) |
***
#### VerifyWalletVerificationChallengeParameters
Defined in: [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L16)
Parameters for verifying a wallet verification challenge.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------------------ | --------------------------------------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `addressOrObject` | [`AddressOrObject`](#addressorobject-2) | The wallet address or object containing wallet address and optional verification ID. | [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:18](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L18) |
| `challengeResponse` | `string` | The response to the verification challenge. | [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:20](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L20) |
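The challenge/response flow implied by these types might look like the sketch below. The action names are assumptions inferred from the custom-action source file names, the addresses and codes are illustrative, and `walletClient` is obtained via `getWalletClient` as shown earlier:
```typescript
// Assumed action names, inferred from the custom-action source file names
const challenges = await walletClient().createWalletVerificationChallenges({
  addressOrObject: {
    userWalletAddress: '0x0000000000000000000000000000000000000000',
    verificationId: 'verification-id', // optional; targets one specific verification
  },
});

// Answer a challenge, e.g. with the user's OTP code, then check the result
const results = await walletClient().verifyWalletVerificationChallenge({
  addressOrObject: '0x0000000000000000000000000000000000000000',
  challengeResponse: '123456',
});
console.log(results[0]?.verified);
```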
***
#### WalletInfo
Defined in: [sdk/viem/src/custom-actions/create-wallet.action.ts:6](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L6)
Information about the wallet to be created.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------ | -------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | `string` | The name of the wallet. | [sdk/viem/src/custom-actions/create-wallet.action.ts:8](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet.action.ts#L8) |
***
#### WalletOTPVerificationInfo
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:27](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L27)
Information for One-Time Password (OTP) verification.
##### Extends
* `BaseWalletVerificationInfo`
##### Properties
| Property | Type | Description | Overrides | Inherited from | Defined in |
| ------------------------------------------------ | ------------------------------- | --------------------------------------------- | --------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `algorithm?` | [`OTPAlgorithm`](#otpalgorithm) | The hash algorithm to use for OTP generation. | - | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:31](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L31) |
| `digits?` | `number` | The number of digits in the OTP code. | - | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:33](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L33) |
| `issuer?` | `string` | The issuer of the OTP. | - | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:37](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L37) |
| `name` | `string` | The name of the verification method. | - | `BaseWalletVerificationInfo.name` | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L9) |
| `period?` | `number` | The time period in seconds for OTP validity. | - | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:35](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L35) |
| `verificationType` | [`OTP`](#otp) | The type of verification method. | `BaseWalletVerificationInfo.verificationType` | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:29](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L29) |
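Combined with `CreateWalletVerificationParameters`, registering an OTP verification might look like this sketch. The `createWalletVerification` action name is an assumption inferred from `create-wallet-verification.action.ts`, and `walletClient` comes from `getWalletClient` as before:
```typescript
import { OTPAlgorithm, WalletVerificationType } from '@settlemint/sdk-viem';

// Assumed action name; the payload follows the WalletOTPVerificationInfo shape above
const verification = await walletClient().createWalletVerification({
  userWalletAddress: '0x0000000000000000000000000000000000000000',
  walletVerificationInfo: {
    name: 'authenticator-app',
    verificationType: WalletVerificationType.OTP,
    algorithm: OTPAlgorithm.SHA256, // optional, as are digits, period, and issuer
    digits: 6,
    period: 30,
  },
});
console.log(verification.id);
```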
***
#### WalletPincodeVerificationInfo
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:17](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L17)
Information for PIN code verification.
##### Extends
* `BaseWalletVerificationInfo`
##### Properties
| Property | Type | Description | Overrides | Inherited from | Defined in |
| ------------------------------------------------ | --------------------- | ------------------------------------- | --------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | `string` | The name of the verification method. | - | `BaseWalletVerificationInfo.name` | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L9) |
| `pincode` | `string` | The PIN code to use for verification. | - | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:21](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L21) |
| `verificationType` | [`PINCODE`](#pincode) | The type of verification method. | `BaseWalletVerificationInfo.verificationType` | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:19](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L19) |
***
#### WalletSecretCodesVerificationInfo
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:43](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L43)
Information for secret recovery codes verification.
##### Extends
* `BaseWalletVerificationInfo`
##### Properties
| Property | Type | Description | Overrides | Inherited from | Defined in |
| ------------------------------------------------ | ------------------------------- | ------------------------------------ | --------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | `string` | The name of the verification method. | - | `BaseWalletVerificationInfo.name` | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:9](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L9) |
| `verificationType` | [`SECRET_CODES`](#secret_codes) | The type of verification method. | `BaseWalletVerificationInfo.verificationType` | - | [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:45](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L45) |
***
#### WalletVerification
Defined in: [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:15](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L15)
Represents a wallet verification.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------------------ | --------------------------------------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `id` | `string` | The unique identifier of the verification. | [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:17](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L17) |
| `name` | `string` | The name of the verification method. | [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:19](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L19) |
| `verificationType` | [`WalletVerificationType`](#walletverificationtype) | The type of verification method. | [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:21](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L21) |
***
#### WalletVerificationChallenge
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:16](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L16)
Represents a wallet verification challenge.
##### Properties
| Property | Type | Description | Defined in |
| ------------------------------------------------ | --------------------------------------------------- | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `challenge` | `Record`\<`string`, `string`> | The challenge parameters specific to the verification type. | [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:24](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L24) |
| `id` | `string` | The unique identifier of the challenge. | [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:18](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L18) |
| `name` | `string` | The name of the challenge. | [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:20](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L20) |
| `verificationType` | [`WalletVerificationType`](#walletverificationtype) | The type of verification required. | [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:22](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L22) |
***
#### WalletVerificationOptions
Defined in: [sdk/viem/src/viem.ts:101](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L101)
The options for the wallet client.
##### Properties
| Property | Type | Description | Defined in |
| -------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| `challengeResponse` | `string` | The challenge response (used for HD wallets) | [sdk/viem/src/viem.ts:109](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L109) |
| `verificationId?` | `string` | The verification id (used for HD wallets), if not provided, the challenge response will be validated against all active verifications. | [sdk/viem/src/viem.ts:105](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L105) |
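In practice, this means an HD-wallet client is obtained by calling the function returned by `getWalletClient` with these options. A minimal sketch, where the verification id and challenge response values are illustrative:
```typescript
import { getWalletClient } from '@settlemint/sdk-viem';

const walletClient = getWalletClient({
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
  chainId: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK_CHAIN_ID!,
  chainName: process.env.SETTLEMINT_BLOCKCHAIN_NETWORK!,
  rpcUrl: process.env.SETTLEMINT_BLOCKCHAIN_NODE_OR_LOAD_BALANCER_JSON_RPC_ENDPOINT!,
});

// For HD wallets, pass the verification options when instantiating the client.
// Omitting verificationId validates the response against all active verifications.
const client = walletClient({
  verificationId: 'verification-id', // illustrative
  challengeResponse: '123456',       // e.g. the user's OTP or PIN code
});
const chainId = await client.getChainId();
```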
### Type Aliases
#### AddressOrObject
> **AddressOrObject** = `string` | \{ `userWalletAddress`: `string`; `verificationId?`: `string`; }
Defined in: [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:6](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L6)
Represents either a wallet address string or an object containing wallet address and optional verification ID.
***
#### ClientOptions
> **ClientOptions** = `Omit`\<`z.infer`\<*typeof* [`ClientOptionsSchema`](#clientoptionsschema)>, `"httpTransportConfig"`> & `object`
Defined in: [sdk/viem/src/viem.ts:51](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L51)
Type representing the validated client options.
##### Type declaration
| Name | Type | Defined in |
| ---------------------- | --------------------- | ------------------------------------------------------------------------------------------------- |
| `httpTransportConfig?` | `HttpTransportConfig` | [sdk/viem/src/viem.ts:52](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L52) |
***
#### CreateWalletVerificationChallengesResponse
> **CreateWalletVerificationChallengesResponse** = [`WalletVerificationChallenge`](#walletverificationchallenge)\[]
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts:30](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification-challenges.action.ts#L30)
Response from creating wallet verification challenges.
***
#### GetChainIdOptions
> **GetChainIdOptions** = `Omit`\<`z.infer`\<*typeof* [`GetChainIdOptionsSchema`](#getchainidoptionsschema)>, `"httpTransportConfig"`> & `object`
Defined in: [sdk/viem/src/viem.ts:198](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L198)
Type representing the validated get chain id options.
##### Type declaration
| Name | Type | Defined in |
| ---------------------- | --------------------- | --------------------------------------------------------------------------------------------------- |
| `httpTransportConfig?` | `HttpTransportConfig` | [sdk/viem/src/viem.ts:199](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L199) |
***
#### GetWalletVerificationsResponse
> **GetWalletVerificationsResponse** = [`WalletVerification`](#walletverification)\[]
Defined in: [sdk/viem/src/custom-actions/get-wallet-verifications.action.ts:27](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/get-wallet-verifications.action.ts#L27)
Response from getting wallet verifications.
***
#### VerifyWalletVerificationChallengeResponse
> **VerifyWalletVerificationChallengeResponse** = [`VerificationResult`](#verificationresult)\[]
Defined in: [sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts:34](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/verify-wallet-verification-challenge.action.ts#L34)
Response from verifying a wallet verification challenge.
***
#### WalletVerificationInfo
> **WalletVerificationInfo** = [`WalletPincodeVerificationInfo`](#walletpincodeverificationinfo) | [`WalletOTPVerificationInfo`](#walletotpverificationinfo) | [`WalletSecretCodesVerificationInfo`](#walletsecretcodesverificationinfo)
Defined in: [sdk/viem/src/custom-actions/create-wallet-verification.action.ts:51](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/custom-actions/create-wallet-verification.action.ts#L51)
Union type of all possible wallet verification information types.
### Variables
#### ClientOptionsSchema
> `const` **ClientOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodOptional`\<`ZodString`>; `chainId`: `ZodString`; `chainName`: `ZodString`; `httpTransportConfig`: `ZodOptional`\<`ZodAny`>; `rpcUrl`: `ZodUnion`; }, `$strip`>
Defined in: [sdk/viem/src/viem.ts:25](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L25)
Schema for the viem client options.
***
#### GetChainIdOptionsSchema
> `const` **GetChainIdOptionsSchema**: `ZodObject`\<\{ `accessToken`: `ZodOptional`\<`ZodString`>; `httpTransportConfig`: `ZodOptional`\<`ZodAny`>; `rpcUrl`: `ZodUnion`; }, `$strip`>
Defined in: [sdk/viem/src/viem.ts:180](https://github.com/settlemint/sdk/blob/v2.3.5/sdk/viem/src/viem.ts#L180)
Schema for the viem client options.
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/building-with-settlemint/cli/command-reference.mdx
meta: {
"title": "Command reference",
"description": "CLI command reference for SettleMint platform"
}
Welcome to the SettleMint CLI documentation. This command-line interface provides tools for managing your SettleMint projects, smart contracts, and platform resources.

To get started:

1. Use `settlemint login` to authenticate with your SettleMint account
2. Create a new project with `settlemint create`
3. Connect your project to SettleMint using `settlemint connect`

Browse through the available commands below to learn more about each one. You can click the command names to view detailed documentation, or use `settlemint [command] --help` in your terminal.

```
Usage: settlemint [command]

CLI for SettleMint

Options:
  -v, --version                                Output the current version
  -h, --help                                   Display help for command

Commands:
  codegen [options]                            Generate GraphQL and REST types and queries
  connect [options]                            Connects your dApp to your application
  create [options]                             Create a new application from a template
  hasura|ha                                    Manage Hasura service in the SettleMint platform
  login [options]                              Login to your SettleMint account
  logout [options]                             Logout from your SettleMint account
  pincode-verification-response|pvr [options]  Get pincode verification response for a blockchain node
  platform                                     Manage SettleMint platform resources
  smart-contract-set|scs                       Manage smart contract sets and subgraphs
  help [command]                               Display help for command
```
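Putting the getting-started steps together, a first session might look like this; most commands are interactive and will prompt for any required input:
```bash
# Authenticate with your SettleMint account
settlemint login
# Scaffold a new application from a template
settlemint create
# Connect the project to your SettleMint application
settlemint connect
```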
file: ./content/docs/building-with-settlemint/evm-chains-guide/add-network-and-nodes.mdx
meta: {
"title": "Add Network and nodes",
"description": "Guide to adding a blockchain network to your application"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
import React from "react";
Summary
To build a blockchain application, the first step is setting up a blockchain
network. You can either deploy a permissioned network such as Hyperledger
Besu or Quorum, or connect to an existing L1 or L2 public network like
Ethereum, Polygon PoS, Hedera, Polygon zkEVM, Avalanche, Arbitrum, or
Optimism. Both mainnet and testnet versions are available for public
networks.
When creating an application on SettleMint, you will be prompted to select a
network type and assign it a name. For permissioned networks, you control
the entire network infrastructure, including validator nodes. A first
validating node is automatically deployed along with your permissioned
network. For public networks, the validators are already established by the
network's consensus participants, and SettleMint will deploy an archive node
that connects to the chosen public network.
In SettleMint-managed (SaaS) mode, you will need to choose between a shared
or dedicated cluster for deployment. You can also select a cloud provider
and a data center of your choice. Additionally, you will have the option to
select from small, medium, or large resource packs, which can be scaled up
or down later as needed.
For permissioned networks, you can configure network settings and customize
the genesis file before deployment. For most use cases, keeping the default
settings is recommended. After deployment, your network manager and first
node will be fully operational within minutes.
To enhance reliability in permissioned networks, you should add more nodes
for fault tolerance. The best practice is to deploy four validator nodes and
two non-validator nodes to ensure Byzantine fault tolerance. For public
networks, you might want to deploy additional archive or full nodes to
improve the reliability of your connection to the network.
Once your network and nodes are running, adding a load balancer will help
distribute network traffic efficiently and improve performance. You can then
access the Insights tab to integrate monitoring tools. For permissioned
networks, you can add Blockscout blockchain explorer to track transactions
and network activity. If you are using public networks, you can use their
publicly available blockchain explorers instead.
## Prerequisites
Before setting up a blockchain network, you need to have an application created
in your workspace. Applications provide the organizational context for all your
blockchain resources including networks, nodes, and development tools. If you
haven't created an application yet, follow our
[create application](/building-with-settlemint/evm-chains-guide/create-an-application)
guide first.
## 1. Add a blockchain network
For EVM chains, SettleMint offers Hyperledger Besu and Quorum for permissioned
networks, along with a range of public networks to choose from. For the list of
supported networks, please refer to
[supported networks](/platform-components/blockchain-infrastructure/network-manager#supported-blockchain-network-protocols)

You can perform the same action via the SettleMint SDK CLI. First, ensure you're authenticated:
```bash
settlemint login
```
Create a blockchain network:
```bash
settlemint platform create blockchain-network besu \
  --node-name <node-name>
# Get information about the command and all available options
settlemint platform create blockchain-network besu --help
```
1. **Navigate to application**: Go to the application containing your network.
2. **Add network**: Click **Add blockchain network** to open a form.
3. **Configure network**: Select the protocol of your choice and click **Continue**. Choose a network name and a node name, configure your deployment settings and network parameters, then click **Confirm** to add the network.
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create network
const createNetwork = async () => {
const result = await client.blockchainNetwork.create({
applicationUniqueName: "your-app",
name: "my-network",
nodeName: "validator-1",
consensusAlgorithm: "BESU_QBFT",
provider: "GKE", // GKE, EKS, AKS
region: "EUROPE"
});
console.log('Network created:', result);
};
// List networks
const listNetworks = async () => {
const networks = await client.blockchainNetwork.list("your-app");
console.log('Networks:', networks);
};
// Get network details
const getNetwork = async () => {
const network = await client.blockchainNetwork.read("network-unique-name");
console.log('Network details:', network);
};
// Restart network
const restartNetwork = async () => {
await client.blockchainNetwork.restart("network-unique-name");
};
```
Get your access token from the Platform UI under User Settings → API Tokens.
While deploying a network, you can tune various parameters to optimize performance and execution. The Chain ID serves as a unique identifier for your blockchain network, ensuring proper differentiation from others. The Seconds per block setting controls the block time interval, impacting transaction finality speed. Gas price defines the transaction cost per unit of gas, influencing network fees, while the Gas limit determines the maximum gas allowed per block, affecting computational capacity.
The EVM stack size configures the stack depth for smart contract execution, and the Contract size limit sets the maximum contract code size to manage deployment constraints. Adjusting these settings allows for greater scalability, efficiency, and cost control based on your specific use case.
For EVM chains, SettleMint allows you to set key genesis file parameters for a custom network configuration.
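To make the mapping concrete, here is an illustrative sketch of how those settings typically appear in a Besu-style genesis file, expressed as a TypeScript object. The field names and values below are assumptions based on common Besu conventions, not a definitive SettleMint configuration; check them against your protocol's documentation:
```typescript
// Illustrative only: field names follow Besu genesis conventions and may
// differ for your chosen protocol or platform version.
const genesisFragment = {
  config: {
    chainId: 44444,                // unique identifier for your network
    contractSizeLimit: 2147483647, // maximum deployed contract size, in bytes
    evmstacksize: 2048,            // EVM stack depth for contract execution
    qbft: {
      blockperiodseconds: 2,       // seconds per block, i.e. transaction finality speed
    },
  },
  gasLimit: "0x1fffffffffffff",    // maximum gas allowed per block
};
```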
## Manage a network
Network management can be done via the SettleMint SDK CLI using these commands:
```bash
# List networks
settlemint platform list blockchain-networks --application <application-unique-name>
# Get network details
settlemint platform read blockchain-network <network-unique-name>
# Restart network
settlemint platform restart blockchain-network <network-unique-name>
```
Navigate to your network and click **Manage network** to see available actions:
* View network details and status
* Monitor network health
* Restart network operations
```typescript
// List networks
await client.blockchainNetwork.list("your-app");
// Get network details
await client.blockchainNetwork.read("network-unique-name");
// Restart network
await client.blockchainNetwork.restart("network-unique-name");
```
When you deploy a network, the first node is automatically deployed with it and
acts as a validator node. Once you have deployed a permissioned network or joined
a public network, you can add more nodes to it.
## 2. Add blockchain nodes
To see and add nodes, click the **Blockchain nodes** tile on the dashboard
or use the **Blockchain nodes** link in the left menu.

We recommend the following number of nodes for each environment (BFT consensus requires 3f + 1 validators to tolerate f faulty ones, hence four validators for single-fault tolerance):
| Blockchain Network | Node Type | Minimum Nodes for Fault Tolerance |
| -------------------- | ------------------- | --------------------------------- |
| **Hyperledger Besu** | Validator Nodes | 4 (Byzantine Fault Tolerant BFT) |
| **Hyperledger Besu** | Non-Validator Nodes | 2 (for redundancy) |
| **GoQuorum** | Validator Nodes | 4 (Istanbul BFT) |
| **GoQuorum** | Non-Validator Nodes | 2 (for redundancy) |
Nodes can be added via the Platform UI or the SettleMint SDK CLI, as shown below.
1. **Navigate to application**: Go to the application containing your network.
2. **Access nodes**: Click **Blockchain nodes** in the left navigation.
3. **Configure node**: Click **Add a blockchain node**, select the blockchain network to add this node to, choose a node name and node type (validator or non-validator), configure deployment settings, and click **Confirm**.
First ensure you're authenticated:
```bash
settlemint login
```
Create a blockchain node:
```bash
settlemint platform create blockchain-node besu \
  --blockchain-network <network-unique-name> \
  --node-type <node-type> \
  --provider <provider> \
  --region <region>
# Get help
settlemint platform create blockchain-node --help
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
const createNode = async () => {
const result = await client.blockchainNode.create({
applicationUniqueName: "your-application",
blockchainNetworkUniqueName: "your-network",
name: "my-node",
nodeType: "VALIDATOR",
provider: "provider",
region: "region"
});
console.log('Node created:', result);
};
```
Get your access token from the Platform UI in the left menu bar under **Access tokens**.
## Manage node
You can view node details and status, monitor node health, pause and restart
nodes, or upgrade them via the SDK CLI or the Platform UI.
Navigate to your node and click **Manage node** to see available actions:
* View node details and status
* Monitor node health
* Restart node operations
```bash
# List nodes
settlemint platform list services --application <application-unique-name>
# Restart node
settlemint platform restart blockchain-node <node-unique-name>
```
```typescript
// List nodes
await client.blockchainNode.list("your-application");
// Get node details
await client.blockchainNode.read("node-unique-name");
// Restart node
await client.blockchainNode.restart("node-unique-name");
```
All operations require appropriate permissions in your workspace.
## 3. Add load balancer
To add a load balancer, navigate to the **Blockchain nodes** section in the
SettleMint platform and select your deployed network. Click "Add load balancer",
choose the region, provider, and desired resource configuration. Once deployed,
the load balancer helps distribute traffic efficiently, improving network
reliability and performance.
When selecting nodes to connect to the load balancer, ensure you include at
least two non-validator nodes for optimal redundancy. The load balancer can be
configured to route requests to specific nodes based on workload distribution,
ensuring high availability and fault tolerance in your blockchain network.

## 4. Add blockchain explorer
To add the Blockscout blockchain explorer for EVM-based permissioned networks,
navigate to **Insights** via the left menu or the dashboard tile. For public
networks, you may use the publicly available blockchain explorers for the
respective network.


### For public networks, please use the following blockchain explorers
| **Network** | **Mainnet Explorer** | **Testnet Explorer** |
| -------------------- | -------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| **Ethereum** | [Etherscan](https://etherscan.io/) | [Sepolia](https://sepolia.etherscan.io/) / [Holesky](https://holesky.etherscan.io/) |
| **Avalanche** | [SnowTrace](https://snowtrace.io/) | [Fuji](https://testnet.snowtrace.io/) |
| **Hedera Hashgraph** | [HashScan](https://hashscan.io/mainnet) | [HashScan Testnet](https://hashscan.io/testnet) |
| **Polygon PoS**      | [PolygonScan](https://polygonscan.com/)                  | [Amoy](https://amoy.polygonscan.com/)                                               |
| **Polygon zkEVM** | [zkEVM Explorer](https://zkevm.polygonscan.com/) | [zkEVM Testnet](https://testnet-zkevm.polygonscan.com/) |
| **Optimism** | [Optimistic Etherscan](https://optimistic.etherscan.io/) | [Optimism Goerli](https://goerli-optimism.etherscan.io/) |
| **Arbitrum** | [Arbiscan](https://arbiscan.io/) | [Arbitrum Goerli](https://goerli.arbiscan.io/) |
Congratulations!
You have successfully built the blockchain infrastructure layer for your
application. From here, you can proceed to create or set up private keys for
transaction signing and user wallets.
file: ./content/docs/building-with-settlemint/evm-chains-guide/add-private-keys.mdx
meta: {
"title": "Add private keys",
"description": "How to create and use private keys on SettleMint platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To send transactions on a blockchain, you need a private key, with enough
funds to cover gas on networks that charge fees. You can create and manage
private keys directly within SettleMint using ECDSA, HD ECDSA, or HSM key
types. After creating a key, it must be attached to at least one node to
enable transaction signing; this is critical for smart contract deployment,
which will fail without it.
SettleMint also offers user wallets, a scalable solution that generates
wallets from a single HD ECDSA 256 key. Each user gets a unique address for
privacy and parallel transaction support. You can create user wallets once
your HD key is deployed and running, and fund them as needed for gas-based
transactions.
## How to add private keys and user wallets in the SettleMint platform
# Private keys
Sending transactions on a blockchain network requires two essential components:
a private key to cryptographically sign your transactions, and sufficient funds
in your wallet to cover the associated gas fees. Without either element,
transaction execution will fail.
While you can use external private keys created with tools like MetaMask or
other wallet solutions, **SettleMint offers a more integrated approach**. The
platform provides built-in functionality to **create and manage private keys
directly within your environment**, eliminating the need for external wallet
management.
When you deploy a blockchain node, it contains a signing proxy that captures the
eth\_sendTransaction call, uses the appropriate key from the private key section
to sign it, and sends it onwards to the blockchain node. You can use this proxy
directly via the node's JSON-RPC endpoints
([JSON-RPC](https://ethereum.org/en/developers/docs/apis/json-rpc/)) and via
tools like Hardhat ([HardHat
RPC](https://hardhat.org/config/#json-rpc-based-networks)) configured to use the
"remote" default option for signing.
## Create a private key
To add a private key in the SettleMint platform, navigate to the private keys
section and click **Create a private key**. You'll be prompted to select the type
of private key: ECDSA P256, HD ECDSA P256, or HSM ECDSA P256.

Navigate to your **application**, click **private keys** in the left navigation, and then click **create a private key**. This opens a form.
Follow these steps to create the private key:
1. Choose a **private key type**:
* **Accessible ECDSA P256**: Standard Ethereum-style private keys with exposed mnemonic
* **HD ECDSA P256**: Hierarchical Deterministic keys for advanced key management
* **HSM ECDSA P256**: Hardware Security Module protected keys for maximum security
2. Choose a **name** for your private key
3. Select the **nodes** on which you want the key to be active
4. Click **confirm** to create the key
```bash
# Create Accessible ECDSA P256 key
settlemint platform create private-key accessible-ecdsa-p256 my-key \
--application my-app \
--blockchain-node node-123
# Create HD ECDSA P256 key
settlemint platform create private-key hd-ecdsa-p256 my-key \
--application my-app
# Create HSM ECDSA P256 key
settlemint platform create private-key hsm-ecdsa-p256 my-key \
--application my-app
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create private key
const createKey = async () => {
const result = await client.privateKey.create({
name: "my-key",
applicationUniqueName: "my-app",
privateKeyType: "ACCESSIBLE_ECDSA_P256", // or "HD_ECDSA_P256" or "HSM_ECDSA_P256"
blockchainNodeUniqueNames: ["node-123"] // optional
});
console.log('Private key created:', result);
};
```
## Attaching private keys to blockchain nodes (transaction signer)

Every smart contract deployment involves a transaction that must be signed by an
authorized account. This signature proves that the transaction came from a valid
identity and permits it to be processed by the network. When using SettleMint,
deploying a smart contract via the platform UI or SDK initiates an
eth\_sendTransaction call, which must be signed by a private key. However, nodes
cannot inherently sign transactions unless a key has been explicitly activated
and attached to them.
If no private key is attached to the node involved in the deployment, the
process will halt at the signing step. The platform will not be able to
authorize the deployment transaction, resulting in a failed operation. This
makes key-to-node assignment a required step in any production or test setup
involving deployment, contract interactions, or any state-changing blockchain
transaction.
**How to attach a private key to a node**
1. Go to the private keys section of your SettleMint workspace.
2. Click on the private key (e.g., "Deployer") you wish to use for signing
transactions.
3. Navigate to the nodes tab of that private key's page.
4. You'll see a list of available nodes in your network (validator and RPC
nodes).
5. Select the nodes that should use this key for transaction signing. These will
usually be RPC nodes or validators depending on your setup.
6. Once selected, the key becomes active on these nodes and is used for signing
all outgoing transactions initiated from the platform.
**Best practices and nuances**
1. Always attach the key to at least one node before deploying a smart contract.
In most cases, attaching it to an RPC node is sufficient.
2. Avoid attaching the same key to multiple nodes unless required, to reduce the
risk of key misuse or unnecessary transaction replay.
3. Ensure the private key has sufficient funds (ETH or native token) to pay for
gas costs associated with contract deployment if working on public chains or
non-zero gas fee networks.
4. For security reasons, only assign signing permissions to nodes you trust and
control.
5. Consider using an HD key if you want to manage multiple identities derived
from the same mnemonic, but ensure the correct derivation path is used.
## Manage private keys
1. Navigate to your application's **private keys** section
2. Click on a private key to:
* View details and status
* Manage node associations
* Check balances
* Fund the key
```bash
# List all private keys
settlemint platform list private-keys --application my-app
# View specific key details
settlemint platform read private-key my-key
# Restart a private key
settlemint platform restart private-key my-key
```
```typescript
// List private keys
const listKeys = async () => {
const keys = await client.privateKey.list("your-app-name");
};
// Get key details
const getKey = async () => {
const key = await client.privateKey.read("key-unique-name");
};
// Restart key
const restartKey = async () => {
await client.privateKey.restart("key-unique-name");
};
```
## Fund the private key
For networks that require gas to perform a transaction, your private key should
contain enough funds to cover the gas price.
1. Click the **private key** in the overview to see detailed information
2. Open the **balances tab**
3. Send tokens/currency to the wallet's public address to fund it
Ensure your private key has sufficient funds before attempting transactions on
networks that require gas fees; a scripted balance check is sketched below.
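If you'd rather script this check, the sketch below reads the key's balance with ethers v6 before you attempt a deployment. The endpoint, address, and the 0.01 threshold are placeholders, not platform values:

```typescript
import { JsonRpcProvider, formatEther, parseEther } from "ethers";

// Placeholders: your node's JSON-RPC endpoint and the public address shown
// on the private key's balances tab.
const provider = new JsonRpcProvider("https://your-node.settlemint.com");
const keyAddress = "0xYourPrivateKeyAddress";

const checkFunds = async () => {
  // The balance is returned in wei as a bigint
  const balance = await provider.getBalance(keyAddress);
  console.log(`Balance: ${formatEther(balance)} native tokens`);
  // 0.01 is an arbitrary threshold; size it to your network's gas prices
  if (balance < parseEther("0.01")) {
    console.warn("Balance may be too low to cover deployment gas costs.");
  }
};

checkFunds();
```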
## User wallets
SettleMint's **user wallets** feature offers a production-ready solution for
managing a virtually unlimited number of wallets with efficiency and
scalability. Wallets are generated on demand at no additional cost, and because
**each user gets a unique address**, privacy is significantly enhanced.
Separate nonces per wallet also enable faster, parallel transaction processing.
User wallets simplify recovery as well, since all wallets are derived from a
single master key, and they use the same signing proxy to sign transactions
with the corresponding user private key.
## Create and setup user wallets
To set up your user wallets, navigate to your application, click **private
keys** in the left navigation, and then click **create a private key**. This
opens a form.
Select **HD ECDSA P256** as the private key type, then enter a **name** for your
deployment. You can also select the nodes or load balancers on which you want to
enable the user wallets. You can change this later if you want to use your user
wallets on a different node. Click **confirm** to deploy the wallet.
## Difference between ECDSA and HD ECDSA keys, and why simple ECDSA keys don't support user wallets
A simple ECDSA key is just one key pair: a private key and its corresponding
public key. It can be used to sign transactions and control a blockchain
address, but it's standalone; there's no built-in mechanism to derive more keys
from it. If you want multiple accounts, you need to generate and store each key
separately. An HD (Hierarchical Deterministic) wallet, on the other hand,
starts from a single master seed. From this seed, it can generate an entire
tree of ECDSA key pairs in a structured and repeatable way. This system follows
the BIP-32 standard and includes concepts like key derivation paths and chain
codes.
The reason HD wallets are suitable for managing user wallets is that they
support deterministic key generation: you can recreate the full wallet tree
from just the seed phrase, and each new account or address is simply a key
derived at a known path. This is efficient and secure, and it also simplifies
backup and recovery. Simple ECDSA keys lack this structure; they are isolated,
and generating multiple keys would require managing each one individually. This
doesn't scale for wallets that require many accounts, addresses, or identities,
which is why HD ECDSA keys are preferred in wallet implementations.
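As a minimal illustration of this derivation, the sketch below uses ethers v6 and the well-known public Hardhat test mnemonic (shown for illustration only, never for real funds) to derive several user wallets from one seed:

```typescript
import { HDNodeWallet } from "ethers";

// Public, well-known test mnemonic, for illustration only.
const mnemonic =
  "test test test test test test test test test test test junk";

// Each user wallet is a child key of the same master seed at a
// different BIP-44 derivation path.
for (let i = 0; i < 3; i++) {
  const wallet = HDNodeWallet.fromPhrase(
    mnemonic,
    undefined, // no passphrase
    `m/44'/60'/0'/0/${i}`
  );
  console.log(`User wallet ${i}: ${wallet.address}`);
}
```

Rerunning this always produces the same addresses, which is exactly the determinism that makes backup and recovery from a single seed phrase possible.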
When your deployment status is **running**, you can click on it to check the
details. You can see the mnemonic from which the user wallets are generated
under **key material**. Upon initialization, the user wallets section is empty.
To create your first user wallet, click on **create a user wallet**.

Remember that for networks that require gas to perform a transaction, the user
wallet should contain enough funds to cover the gas price. You can fund it using
the address displayed in the list.
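Funding a user wallet is an ordinary value transfer to the address shown in the list. Here is a minimal sketch with ethers v6, where the endpoint, funder key, and amount are all placeholders:

```typescript
import { JsonRpcProvider, Wallet, parseEther } from "ethers";

// Placeholders: your node's JSON-RPC endpoint and a funded private key.
const provider = new JsonRpcProvider("https://your-node.settlemint.com");
const funder = new Wallet("0xFUNDER_PRIVATE_KEY", provider);

const fundUserWallet = async (userWalletAddress: string) => {
  const tx = await funder.sendTransaction({
    to: userWalletAddress,
    value: parseEther("0.1"), // adjust to expected gas usage
  });
  await tx.wait();
  console.log(`Funded ${userWalletAddress} in tx ${tx.hash}`);
};

// Use the address displayed in the user wallets list.
fundUserWallet("0xUSER_WALLET_ADDRESS");
```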
Congratulations!
You have successfully created private keys and user wallets.
You have also attached private keys to nodes as transaction signers, and you are
ready for smart contract development and deployment.
file: ./content/docs/building-with-settlemint/evm-chains-guide/attestation-indexer.mdx
meta: {
"title": "Ethereum attestation indexer",
"description": "How to work with ethereum attestation indexer"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
The Ethereum attestation indexer is a tool that allows you to track, store, and
query verifiable claims (attestations) made using the Ethereum attestation
service (EAS). It provides a GraphQL API to easily fetch attestation data based
on schemas you define.
To use it, you'll first deploy the necessary EAS smart contracts (schema
registry and EAS) on your blockchain network using SettleMint's code studio and
task manager. Once deployed, you can register custom schemas and create
attestations that follow those schema structures.
After setup, the attestation indexer can be added via the middleware section of
your application. Once connected to your contract addresses, it will index
attestation events, and you can use the built-in GraphQL UI or API access to
query them in real time.
## 1. Introduction to EAS
### What is EAS?
Ethereum attestation service (EAS) is a decentralized protocol that allows users
to create, verify, and manage attestations (verifiable claims) on the Ethereum
blockchain. It provides a standardized way to make claims about data,
identities, or events that can be independently verified by others.

### Why use EAS?
* **Decentralization**: No central authority is needed to verify claims.
* **Interoperability**: Standardized schemas allow for cross-platform
compatibility.
* **Security**: Attestations are secured by the Ethereum blockchain.
* **Transparency**: All attestations are publicly verifiable.
***
## 2. Key concepts
### Core components
1. **SchemaRegistry**:
* A smart contract that stores and manages schemas.
* Schemas define the structure and data types of attestations, ensuring that
all attestations conform to a predefined format.
2. **EAS contract**:
* The main contract that handles the creation and management of attestations.
* It interacts with the `SchemaRegistry` to ensure that attestations adhere
to the defined schemas.

3. **Attestations**:
* Verifiable claims stored on the blockchain.
* Created and managed by the `EAS contract`.
4. **Resolvers**:
* Optional contracts that provide additional validation logic for
attestations.
***
## 3. How EAS works
```mermaid
graph TD
SchemaRegistry["SchemaRegistry"]
UsersSystems["Users/Systems"]
EASContract["EAS contract"]
Verifiers["Verifiers"]
Attestations["Attestations"]
SchemaRegistry -- "Defines data structure" --> EASContract
UsersSystems -- "Interact" --> EASContract
EASContract -- "Creates" --> Attestations
Verifiers -- "Verify" --> Attestations
```
### Workflow
1. **Schema definition**: Start by defining a schema using the
**SchemaRegistry** contract.
2. **Attestation creation**: Use the **EAS contract** to create attestations
based on the schema.
3. **Optional validation**: Resolvers can be used for further validation logic.
4. **On-chain storage**: Attestations are securely stored and retrievable
on-chain.
***
## 4. Contract deployment
Before deploying the EAS contracts, you must add the smart contract set to your
project.
### Adding the smart contract set
1. **Navigate to the dev tools section**: Go to the application dashboard of the
application where you want to deploy the EAS contracts, then navigate to the
**dev tools** section in the left sidebar.
2. **Select the attestation service set**: From there, click on **add a dev
tool**, choose **code studio** and then **smart contract set**. Choose the
**attestation service** template.
3. **Customize**: Modify the set as needed for your specific project.
4. **Save**: Save the configuration.
For detailed instructions, visit the
[smart contract sets documentation](/platfrom-components/dev-tools/code-studio/smart-contract-sets/smart-contract-sets).
***
### Deploying the contracts
Once the contract set is ready, you can deploy it using either the **task menu**
in the SettleMint IDE or via the **terminal**.
#### Deploy using the task menu
1. **Open the task menu**:
* In the SettleMint integrated IDE, access the **task menu** from the
sidebar.
2. **Select deployment task**:
* Choose the task corresponding to the **Hardhat - reset & deploy to platform
network** module.
3. **Monitor deployment logs**:
* The terminal output will display the deployment progress and contract
addresses.
#### Deploy using the terminal
1. **Prepare the deployment module**:\
Ensure the module is defined in `ignition/modules/main.ts`:
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomEASModule = buildModule("EASDeployment", (m) => {
const schemaRegistry = m.contract("SchemaRegistry", [], {});
const EAS = m.contract("EAS", [schemaRegistry], {});
return { schemaRegistry, EAS };
});
export default CustomEASModule;
```
2. **Run the deployment command**:\
Execute the following command in your terminal:
```bash
bunx settlemint scs hardhat deploy remote -m ignition/modules/main.ts
```
3. **Monitor deployment logs**:
* The terminal output will display the deployment progress and contract
addresses.
***
## 5. Registering a schema
### Example use case
Imagine building a service where users prove ownership of their social media
profiles. The schema might include:
* **Username**: A unique identifier for the user.
* **Platform**: The social media platform name (e.g., Twitter).
* **Handle**: The user's handle on that platform (e.g., `@coolcoder123`).
### Example
```javascript
const { ethers } = require("ethers");
// Configuration object for network and contract details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // The network endpoint (e.g., Ethereum mainnet/testnet)
registryAddress: "YOUR_SCHEMA_REGISTRY_ADDRESS_HERE", // Where the SchemaRegistry contract lives
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key (keep this secret!)
};
// Create connection to blockchain and setup contract interaction
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const schemaRegistry = new ethers.Contract(
config.registryAddress,
[
// This event helps us track when new schemas are registered
"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)",
// This function lets us register new schemas
"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)",
],
signer
);
async function registerSchema() {
try {
// Define what data fields our attestations will contain
const schema = "string username, string platform, string handle";
const resolverAddress = ethers.ZeroAddress; // No special validation needed
const revocable = true; // Attestations can be revoked if needed
console.log("🚀 Registering schema for social media ownership...");
// Send the transaction to create our schema
const tx = await schemaRegistry.register(
schema,
resolverAddress,
revocable
);
const receipt = await tx.wait(); // Wait for blockchain confirmation
// Get our schema's unique ID from the transaction
const schemaUID = receipt.logs[0].topics[1];
console.log("✅ Schema registered successfully! UID:", schemaUID);
} catch (error) {
console.error("❌ Error registering schema:", error.message);
}
}
registerSchema();
```
***
## 6. Creating attestations
### Example use case
Let's create an attestation that proves:
* **Username**: `awesome_developer`
* **Platform**: `GitHub`
* **Handle**: `@devmaster`
### Example
```javascript
const { EAS, SchemaEncoder } = require("@ethereum-attestation-service/eas-sdk");
const { ethers } = require("ethers");
// Setup our connection details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract address
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key
schemaUID: "YOUR_SCHEMA_UID_HERE", // The UID from when we registered our schema
};
// Connect to the blockchain
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const eas = new EAS(config.easAddress);
eas.connect(signer);
// Create an encoder that matches our schema structure
const schemaEncoder = new SchemaEncoder(
"string username, string platform, string handle"
);
// The actual data we want to attest to
const attestationData = [
{ name: "username", value: "awesome_developer", type: "string" },
{ name: "platform", value: "GitHub", type: "string" },
{ name: "handle", value: "@devmaster", type: "string" },
];
async function createAttestation() {
try {
// Convert our data into the format EAS expects
const encodedData = schemaEncoder.encodeData(attestationData);
// Create the attestation
const tx = await eas.attest({
schema: config.schemaUID,
data: {
recipient: ethers.ZeroAddress, // Public attestation (no specific recipient)
expirationTime: 0, // Never expires
revocable: true, // Can be revoked later if needed
data: encodedData, // Our encoded attestation data
},
});
// Wait for confirmation; wait() resolves to the new attestation UID
const newAttestationUID = await tx.wait();
console.log("✅ Attestation created successfully! UID:", newAttestationUID);
} catch (error) {
console.error("❌ Error creating attestation:", error.message);
}
}
createAttestation();
```
## 7. Verifying attestations
Verification is essential to ensure the integrity and authenticity of
attestations. You can verify attestations using one of the following methods:
1. **Using the EAS SDK**: Perform lightweight, off-chain verification
programmatically.
2. **Using a custom smart contract resolver**: Add custom on-chain validation
logic for attestations.
### Choose your verification method
#### Verification using the EAS sdk
The EAS SDK provides an easy way to verify attestations programmatically, making
it ideal for off-chain use cases.
##### Example
```javascript
const { ethers } = require("ethers");
const { EAS } = require("@ethereum-attestation-service/eas-sdk");
// Basic configuration for connecting to the network
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract
};
async function verifyAttestation(attestationUID) {
// Setup our blockchain connection
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const eas = new EAS(config.easAddress);
eas.connect(provider);
console.log("🔍 Verifying attestation:", attestationUID);
// Try to find the attestation on the blockchain
const attestation = await eas.getAttestation(attestationUID);
// Check if we found anything
if (!attestation) {
console.error("❌ Attestation not found");
return;
}
// Show the attestation details
console.log("✅ Attestation details:");
console.log("Attester:", attestation.attester); // Who created this attestation
console.log("Data:", attestation.data); // The actual attested data
console.log("Revoked:", attestation.revoked ? "Yes" : "No"); // Is it still valid?
}
// Replace with your attestation UID
verifyAttestation("YOUR_ATTESTATION_UID_HERE");
```
##### Key points
* **Lightweight**: Suitable for most off-chain verifications.
* **No custom logic**: Fetches and verifies data stored in EAS.
#### Verification using a custom smart contract resolver
Custom resolvers enable on-chain validation with additional business rules or
logic.
##### Example: trusted attester verification
The following smart contract resolver ensures that attestations are valid only
if made by trusted attesters.
###### Smart contract code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
// This contract checks if attestations come from trusted sources
contract CustomResolver {
// Keep track of which addresses we trust to make attestations
mapping(address => bool) public trustedAttesters;
// When deploying, we set up our initial list of trusted attesters
constructor(address[] memory initialAttesters) {
for (uint256 i = 0; i < initialAttesters.length; i++) {
trustedAttesters[initialAttesters[i]] = true;
}
}
// EAS calls this function before accepting an attestation
function validate(
bytes32 attestationUID, // Unique ID of the attestation
address attester, // Who's trying to create the attestation
bytes memory data // The attestation data (unused in this example)
) external view returns (bool) {
// Only allow attestations from addresses we trust
if (!trustedAttesters[attester]) {
return false;
}
return true;
}
}
```
###### Deploying the resolver with hardhat ignition
Deploy this custom resolver using the Hardhat Ignition framework.
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomResolverDeployment = buildModule("CustomResolver", (m) => {
const initialAttesters = ["0xTrustedAddress1", "0xTrustedAddress2"];
const resolver = m.contract("CustomResolver", [initialAttesters], {});
return { resolver };
});
export default CustomResolverDeployment;
```
Run the following command in your terminal to deploy:
```bash
bunx settlemint scs hardhat deploy remote -m ignition/modules/main.ts
```
###### Linking the resolver to a schema
When registering a schema, include the resolver's address for on-chain
validation.
```javascript
const resolverAddress = "YOUR_DEPLOYED_RESOLVER_ADDRESS";
const schema = "string username, string platform, string handle";
const tx = await schemaRegistry.register(schema, resolverAddress, true);
const receipt = await tx.wait();
const schemaUID = receipt.logs[0].topics[1]; // parse the UID from the Registered event, as in section 5
console.log("✅ Schema with resolver registered! UID:", schemaUID);
```
###### Validating attestations with the resolver
To validate an attestation, call the `validate` function of your deployed
resolver contract.
```javascript
const resolver = new ethers.Contract(
"YOUR_RESOLVER_ADDRESS",
["function validate(bytes32, address, bytes) external view returns (bool)"],
provider
);
const isValid = await resolver.validate(
"YOUR_ATTESTATION_UID", // bytes32 attestation UID (hex string)
"ATTESTER_ADDRESS",
"0x" // ABI-encoded attestation data as hex bytes
);
console.log("✅ Is the attestation valid?", isValid);
```
##### Key points
* **Customizable rules**: Add your own validation logic to the resolver.
* **On-chain validation**: Ensures attestations meet specific conditions before
they are considered valid.
***
### When to use each method?
* **EAS SDK**: Best for off-chain applications where simple validation suffices.
* **Custom resolver**: Use for on-chain validation with additional rules, such
as verifying trusted attesters or specific data formats.
## 8. Using the attestation indexer
### Setup attestation indexer
1. Go to your application's **middleware** section
2. Click "add a middleware"
3. Select "attestation indexer"
4. Configure with your contract addresses:
* EAS contract: `EAS contract address`
* Schema registry: `Schema registry contract address`
### Querying attestations
#### Connection details
After deployment:
1. Go to your attestation indexer
2. Click "connections" tab
3. You'll find your GraphQL endpoint URL
4. Create an application access token (settings → application access tokens)
#### Using the graphql ui
The indexer provides a built-in GraphQL UI where you can test queries. Click
"GraphQL UI" in your indexer to access it.
#### Example query implementation
```javascript
// Example fetch request to query attestations
async function queryAttestations(schemaId) {
const response = await fetch("YOUR_INDEXER_URL", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: "Bearer YOUR_APP_TOKEN",
},
body: JSON.stringify({
query: `{
attestations(
where: {
schemaId: {
equals: "${schemaId}"
}
}
) {
id
attester
recipient
revoked
data
}
}`,
}),
});
const data = await response.json();
return data.data.attestations;
}
// Usage example:
const schemaId = "YOUR_SCHEMA_ID"; // From the registration step
const attestations = await queryAttestations(schemaId);
console.log("Attestations:", attestations);
```
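The `data` field returned by the indexer is still ABI-encoded. Assuming the schema registered earlier in this guide, you can decode it with the EAS SDK's `SchemaEncoder`; in this sketch, `attestations` is the array returned by `queryAttestations` above:

```typescript
import { SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";

// Must match the schema string the attestations were created against.
const schemaEncoder = new SchemaEncoder(
  "string username, string platform, string handle"
);

// Turn one raw attestation `data` hex string into a plain object.
const decodeAttestation = (rawData: string) => {
  const decoded = schemaEncoder.decodeData(rawData);
  return Object.fromEntries(
    decoded.map((field) => [field.name, field.value.value.toString()])
  );
};

// e.g. decodeAttestation(attestations[0].data)
// -> { username: "awesome_developer", platform: "GitHub", handle: "@devmaster" }
```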
## 9. Integration studio implementation
For those using integration studio, we've created a complete flow implementation
of the EAS interactions. This flow automates the entire process we covered in
this guide.
### Flow overview
The flow includes:
* EAS configuration setup
* Schema registration
* Attestation creation
* Attestation verification
* Debug nodes for monitoring results
### Installation
1. In integration studio, go to import → clipboard
2. Paste the flow JSON below
3. Click import
Click to view/copy the complete Node-RED flow JSON
```json
[
{
"id": "eas_flow",
"type": "tab",
"label": "EAS attestation flow",
"disabled": false,
"info": ""
},
{
"id": "setup_inject",
"type": "inject",
"z": "eas_flow",
"name": "Inputs: RpcUrl, registry address, EAS address, private key",
"props": [
{
"p": "rpcUrl",
"v": "RPC-URL/API-KEY",
"vt": "str"
},
{
"p": "registryAddress",
"v": "REGISTRY-ADDRESS",
"vt": "str"
},
{
"p": "easAddress",
"v": "EAS-ADDRESS",
"vt": "str"
},
{
"p": "privateKey",
"v": "PRIVATE-KEY",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 250,
"y": 120,
"wires": [["setup_function"]]
},
{
"id": "setup_function",
"type": "function",
"z": "eas_flow",
"name": "Setup global variables",
"func": "// Initialize provider with the network endpoint\nconst provider = new ethers.JsonRpcProvider(msg.rpcUrl);\n\nconst signer = new ethers.Wallet(msg.privateKey, provider);\n\n// Initialize the EAS client\nconst eas = new eassdk.EAS(msg.easAddress);\neas.connect(signer);\n\n// Store in global context\nglobal.set('provider', provider);\nglobal.set('signer', signer);\nglobal.set('eas', eas);\nglobal.set('registryAddress', msg.registryAddress);\n\nmsg.payload = 'EAS configuration initialized';\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
},
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
}
],
"x": 580,
"y": 120,
"wires": [["setup_debug"]]
},
{
"id": "register_inject",
"type": "inject",
"z": "eas_flow",
"name": "Register schema",
"props": [],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 120,
"y": 260,
"wires": [["register_function"]]
},
{
"id": "register_function",
"type": "function",
"z": "eas_flow",
"name": "Register schema",
"func": "// Get global variables set in init\nconst signer = global.get('signer');\nconst registryAddress = global.get('registryAddress');\n\n// Initialize SchemaRegistry contract\nconst schemaRegistry = new ethers.Contract(\n registryAddress,\n [\n \"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)\",\n \"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)\"\n ],\n signer\n);\n\n// Define what data fields our attestations will contain\nconst schema = \"string username, string platform, string handle\";\nconst resolverAddress = \"0x0000000000000000000000000000000000000000\"; // No special validation needed\nconst revocable = true; // Attestations can be revoked if needed\n\ntry {\n const tx = await schemaRegistry.register(schema, resolverAddress, revocable);\n const receipt = await tx.wait();\n\n const schemaUID = receipt.logs[0].topics[1];\n // Store schemaUID in global context for later use\n global.set('schemaUID', schemaUID);\n\n msg.payload = {\n success: true,\n schemaUID: schemaUID,\n message: \"Schema registered successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
}
],
"x": 310,
"y": 260,
"wires": [["register_debug"]]
},
{
"id": "create_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: schema uid",
"props": [
{
"p": "schemaUID",
"v": "SCHEMA-UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 130,
"y": 400,
"wires": [["create_function"]]
},
{
"id": "create_function",
"type": "function",
"z": "eas_flow",
"name": "Create attestation",
"func": "// Get global variables\nconst eas = global.get('eas');\nconst schemaUID = msg.schemaUID;\n\n// Create an encoder that matches our schema structure\nconst schemaEncoder = new eassdk.SchemaEncoder(\"string username, string platform, string handle\");\n\n// The actual data we want to attest to\nconst attestationData = [\n { name: \"username\", value: \"awesome_developer\", type: \"string\" },\n { name: \"platform\", value: \"GitHub\", type: \"string\" },\n { name: \"handle\", value: \"@devmaster\", type: \"string\" }\n];\n\ntry {\n // Convert our data into the format EAS expects\n const encodedData = schemaEncoder.encodeData(attestationData);\n\n // Create the attestation\n const tx = await eas.attest({\n schema: schemaUID,\n data: {\n recipient: \"0x0000000000000000000000000000000000000000\", // Public attestation\n expirationTime: 0, // Never expires\n revocable: true, // Can be revoked later if needed\n data: encodedData // Our encoded attestation data\n }\n });\n\n // Wait for confirmation; wait() resolves to the new attestation UID\n const newAttestationUID = await tx.wait();\n\n // Store attestation UID for later verification\n global.set('attestationUID', newAttestationUID);\n\n msg.payload = {\n success: true,\n attestationUID: newAttestationUID,\n message: \"Attestation created successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 330,
"y": 400,
"wires": [["create_debug"]]
},
{
"id": "verify_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: attestation UID",
"props": [
{
"p": "attestationUID",
"v": "Attestation UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 140,
"y": 540,
"wires": [["verify_function"]]
},
{
"id": "verify_function",
"type": "function",
"z": "eas_flow",
"name": "Verify attestation",
"func": "const eas = global.get('eas');\nconst attestationUID = msg.attestationUID;\n\ntry {\n const attestation = await eas.getAttestation(attestationUID);\n const schemaEncoder = new eassdk.SchemaEncoder(\"string username, string platform, string handle\");\n const decodedData = schemaEncoder.decodeData(attestation.data);\n\n msg.payload = {\n isValid: !attestation.revoked,\n attestation: {\n attester: attestation.attester,\n time: new Date(Number(attestation.time) * 1000).toLocaleString(),\n expirationTime: attestation.expirationTime > 0 \n ? new Date(Number(attestation.expirationTime) * 1000).toLocaleString()\n : 'Never',\n revoked: attestation.revoked\n },\n data: {\n username: decodedData[0].value.value.toString(),\n platform: decodedData[1].value.value.toString(),\n handle: decodedData[2].value.value.toString()\n }\n };\n} catch (error) {\n msg.payload = { \n success: false, \n error: error.message,\n details: JSON.stringify(error, Object.getOwnPropertyNames(error))\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 350,
"y": 540,
"wires": [["verify_debug"]]
},
{
"id": "setup_debug",
"type": "debug",
"z": "eas_flow",
"name": "Setup result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 770,
"y": 120,
"wires": []
},
{
"id": "register_debug",
"type": "debug",
"z": "eas_flow",
"name": "Register result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 500,
"y": 260,
"wires": []
},
{
"id": "create_debug",
"type": "debug",
"z": "eas_flow",
"name": "Create result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 520,
"y": 400,
"wires": []
},
{
"id": "verify_debug",
"type": "debug",
"z": "eas_flow",
"name": "Verify result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 530,
"y": 540,
"wires": []
},
{
"id": "1322bb7438d96baf",
"type": "comment",
"z": "eas_flow",
"name": "Initialize EAS config",
"info": "",
"x": 110,
"y": 60,
"wires": []
},
{
"id": "e5e3294119a80c1b",
"type": "comment",
"z": "eas_flow",
"name": "Register a new schema",
"info": "/* SCHEMA GUIDE\nEdit the schema variable to define your attestation fields.\nFormat: \"type name, type name, type name\"\n\nAvailable Types:\n- string (text)\n- bool (true/false)\n- address (wallet address)\n- uint256 (number)\n- bytes32 (hash)\n\nExamples:\n\"string name, string email, bool isVerified\"\n\"string twitter, address wallet, uint256 age\"\n\"string discord, string github, string telegram\"\n*/\n\nconst schema = \"string pshandle, string socialMedia, string socialMediaHandle\";",
"x": 120,
"y": 200,
"wires": []
},
{
"id": "2be090c17b5e4fce",
"type": "comment",
"z": "eas_flow",
"name": "Create attestation",
"info": "",
"x": 110,
"y": 340,
"wires": []
},
{
"id": "3d99f76c5c0bdaf0",
"type": "comment",
"z": "eas_flow",
"name": "Verify attestation",
"info": "",
"x": 110,
"y": 480,
"wires": []
}
]
```
### Configuration steps
1. Update the setup inject node with your:
* RPC URL
* Registry address
* EAS address
* Private key
2. Customize the schema in the register function
3. Deploy the flow
4. Test each step sequentially using the inject nodes
The flow provides debug outputs at each step to monitor the process.
file: ./content/docs/building-with-settlemint/evm-chains-guide/audit-logs.mdx
meta: {
"title": "Audit logs",
"description": "Audit logs for the actions performed on SettleMint platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
The audit log keeps a detailed record of user actions across the system, helping
teams monitor activity, track changes, and stay compliant with internal and
external requirements. Each entry includes a timestamp, showing exactly when
something was done, which makes it easier to follow the flow of events and spot
any irregularities.

It also records the user who performed the action, adding a layer of
accountability by linking every change to a specific individual or system role.
This is especially useful when reviewing changes or troubleshooting unexpected
behavior.
The service field highlights which part of the platform was involved, whether
it’s an integration, middleware component, or another system area. Alongside
that, the action field captures what was done, like creating, editing, or
deleting something. Together, these fields give teams a clear snapshot of what
happened, where, and by whom.
file: ./content/docs/building-with-settlemint/evm-chains-guide/create-an-application.mdx
meta: {
"title": "Create an application",
"description": "Guide to creating a blockchain application on SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To get started on the SettleMint platform, you need to create an
organization by going to the homepage or clicking the grid icon, then
selecting "create new organization." You'll need to enter a name and
complete the billing setup using Stripe to activate it.
Once your organization is ready, you need to invite your team members by
entering their email addresses, selecting their roles, and sending the
invitation. After that, you need to create an application within the
organization by giving it a name and confirming.
You can manage your organization and applications from the dashboard, change
names, invite more members, or delete resources when needed. You can also
create and manage applications using the SDK CLI or SDK JS if you prefer to
work programmatically.
## How to create an organization and application in SettleMint platform
An organization is the highest level of hierarchy in SettleMint. It's at this
level that you can create and manage blockchain applications, invite team
members to collaborate and manage billing.


You created your first organization when you signed up to use the SettleMint
platform, but you can create as many organizations as you want, e.g. for your
company, departments, teams, clients, etc. Organizations help you structure your
work, manage collaboration, and keep your invoices clearly organized.
Create an organization
Navigate to the homepage, or click the grid icon in the upper right corner.
Click **create new organization**. This opens a form. Follow these steps to create your organization:
Choose a **name** for your organization that is easily recognizable in your
dashboards, e.g. your company name, department name, or team name. You can
change the name of your organization at any time.
Enter **billing information**. SettleMint creates a billing account for this
organization, and you are billed monthly for the resources you use within it, with invoices issued on the 1st of every month. Provide your billing details securely via Stripe (Visa, Mastercard, and Amex are supported) to activate your organization and gain full access to SettleMint's blockchain development tools.
Click **confirm** to go to the organization dashboard. From here, you can create
your first application in this organization. The dashboard will show you a
summary of your organization's applications, the members in this organization,
and a status of the resource costs for the current month.
When you create an organization, you are the owner, and therefore an
administrator of the organization. This means you can perform all actions within
this organization, with no limitations.
## Invite new organization members

Navigate to the **members section** of your organization, via the homepage, or
via your organization dashboard.
Follow these steps to invite new members to your organization:
1. Click **invite new member**.
2. Enter the **email address** of the person you want to invite.
3. Select their **role**, i.e. whether they will be an administrator or a user.
4. Optionally, you can add a **message** to be included in the invitation email.
5. Click **confirm** to go to the list of your organization's members. Your
email invitation has now been sent, and you see in the list that it is
pending.
## Manage an organization
Navigate to the **organization dashboard**.
Click **manage organization** to see the available actions. You can only perform
these actions if you have administrator rights for this organization.
* **change name** - Changes the organization name without any further impact.
* **delete organization** - Removes the organization from the platform.
On the organization dashboard, you can:
* See all applications in that organization
* See all members of the organization
* See all internal applications and clients when in partner mode
You can only delete an organization when it has no applications related to it.
Applications have to be deleted one by one, once all their related resources
(e.g. networks, nodes, smart contract sets, etc.) have been deleted.
## Create an application
An application is the context in which you organize your networks, nodes, smart
contract sets and any other related blockchain resource.
You will always need to create an application before you can deploy or join
networks, and add nodes.
## How to create a new application

### Access application creation
In the upper right corner of any page, click the **grid icon**
### Navigate & create
* Navigate to your workspace
* Click **create new application**
### Configure application
* Choose a **name** for your application
* Click **confirm** to create the application
First, install the [SDK CLI](https://github.com/settlemint/sdk/blob/main/sdk/cli/README.md#usage) as a global dependency.
Then, ensure you're authenticated. For more information on authentication, see the [SDK CLI documentation](https://github.com/settlemint/sdk/blob/main/sdk/cli/README.md#login-to-the-platform).
```bash
settlemint login
```
Create an application:
```bash
settlemint platform create application
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create application
const createApp = async () => {
const result = await client.application.create({
workspaceUniqueName: "your-workspace",
name: "myApp"
});
console.log('Application created:', result);
};
// List applications
const listApps = async () => {
const apps = await client.application.list("your-workspace");
console.log('Applications:', apps);
};
// Read application details
const readApp = async () => {
const app = await client.application.read("app-unique-name");
console.log('Application details:', app);
};
// Delete application
const deleteApp = async () => {
await client.application.delete("application-unique-name");
};
```
Get your access token from the platform UI under user settings → API tokens.
## Manage an application
The SettleMint platform dashboard provides a centralized view of blockchain
infrastructure, offering real-time insights into system components. With health
status indicators, including error and warning counts, it ensures system
stability while enabling users to proactively address potential issues. Resource
usage tracking helps manage costs efficiently, providing month-to-date expense
insights.
Each component features a "details" link for quick access to in-depth
information, while the intuitive navigation panel allows seamless access to key
modules such as audit logs, access tokens, and insights. Built-in support
options further enhance usability, ensuring users can quickly troubleshoot and
resolve issues.

Navigate to your application and click **manage app** to see available actions:
* View application details
* Update application name
* Delete application
```bash
# List applications
settlemint platform list applications
# Delete application
settlemint platform delete application
```
```typescript
// List applications
await client.application.list("your-workspace");
// Read application
await client.application.read("app-unique-name");
// Delete application
await client.application.delete("app-unique-name");
```
All operations require appropriate permissions in your workspace.
Congratulations!
You have successfully created an organization and added an application within
it. From here, you can proceed to deploy a network and add nodes, a load
balancer, and a blockchain explorer.
file: ./content/docs/building-with-settlemint/evm-chains-guide/deploy-custom-services.mdx
meta: {
"title": "Host dApp UI or custom services",
"description": "How to deploy containerised application frontend or other custom services"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
Deploying frontend applications or custom backend services on SettleMint can be
done through custom deployments, which allow you to run containerized
applications using your own Docker images. This enables seamless integration of
user interfaces, REST APIs, microservices, or other utilities directly within
the blockchain-powered environment of your application.
The typical use cases include hosting React/Vue/Next.js-based UIs, creating
custom indexers or oracles, exposing specialized API services, or deploying
off-chain business logic in containerized environments. These deployments are
sandboxed, stateless, and run in secure, managed infrastructure, making them
suitable for both development and production.
To get started, you'll first need to containerize your application (if not
already done) and push the image to a container registry; this can be Docker
Hub, GitHub Container Registry, or a private registry. The image must be built
for the AMD64 (x86-64) architecture, as the SettleMint infrastructure currently
supports AMD64-based workloads.
Once your image is ready, you can initiate a custom deployment through the
platform UI, CLI, or SDK. You'll provide the container image path, optional
environment variables, deployment region, and resource configurations. After the
container spins up successfully, your service will be publicly accessible via
the auto-assigned endpoint. For frontend apps, this can act as your live
production URL.
For applications requiring a custom domain, SettleMint allows you to bind domain
names to the deployed container. You can configure the domain in the platform
and then update your DNS records accordingly. The platform supports both ALIAS
records for top-level domains and CNAME records for subdomains. SSL/TLS
certificates are automatically handled unless you opt for a custom cert setup.
Once the deployment is live, you can manage it using the custom deployment
dashboard in the platform. This includes editing environment variables,
restarting the container, updating the image version, checking logs, and
monitoring availability. You can also script or automate these tasks using the
SDK or CLI as needed.
A few considerations: custom deployments are stateless by design, so any data
you want to persist should be stored using services like Hasura for off-chain
database functionality or MinIO/IPFS for file storage. The container's
filesystem is read-only to enhance security and portability. Additionally, apps
won't run with root privileges, so ensure your container adheres to standard
non-root user practices.
This feature is especially useful when you need to tightly couple your UI or
service logic with the on-chain components, enabling a clean, integrated workflow
for dApps, admin consoles, analytics dashboards, API bridges, or token utility
services. It offers flexibility without leaving the SettleMint ecosystem, all
while adhering to scalable and cloud-native design principles.
## How to use custom deployments to host application frontend or other custom services in SettleMint platform
A custom deployment allows you to deploy your own Docker images, such as
frontend applications, on the SettleMint platform. This feature provides
flexibility for integrating custom solutions within your blockchain-based
applications.

## Create a custom deployment
1. Prepare your container image and push it to a container registry (public or private).
2. In the SettleMint platform, navigate to the custom deployments section.
3. Click on the "add custom deployment" button to create a new deployment.
4. Provide the necessary details:
* Container image path (e.g., registry.example.com/my-app:latest)
* Container registry credentials (if using a private registry)
* Environment variables (if required)
* Custom domain information (if applicable)
5. Configure any additional settings as needed.
6. Click on 'confirm' and wait for the custom deployment to be in the running status.
```bash
# Create a custom deployment
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--port 3000 \
--provider gcp \
--region europe-west1
# With environment variables
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--env-vars NODE_ENV=production,DEBUG=false
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
const createDeployment = async () => {
const result = await client.customDeployment.create({
applicationId: "app-123",
name: "my-deployment",
imageRepository: "registry.example.com",
imageName: "my-app",
imageTag: "latest",
port: 3000,
provider: "gcp",
region: "europe-west1",
environmentVariables: {
NODE_ENV: "production"
}
});
};
```
## DNS configuration for custom domains
When using custom domains with your custom deployment, you'll need to configure
your DNS settings correctly. Here's how to set it up:
1. **Add custom domain to the SettleMint platform**:
* Navigate to your custom deployment in the SettleMint platform.
* In the manage custom deployment menu, click on the edit custom deployment
action.
* Locate the custom domains configuration section.
* Enter your desired custom domain (e.g., example.com for top-level domain or
app.example.com for subdomain).
* Save the changes to update your custom deployment settings.
2. **Obtain your application's hostname**: After adding your custom domain, the
SettleMint platform will provide you with an ALIAS (for top-level domains) or
CNAME (for subdomains) record. This can be found in the "connect" tab of your
custom deployment.
3. **Access your domain's DNS settings**: Log in to your domain registrar or DNS
provider's control panel.
4. **Configure DNS records**:
For Top-Level Domains (e.g., example.com):
* Remove any existing A and AAAA records for the domain you're configuring.
* Remove any existing A and AAAA records for the www domain (e.g.,
[www.example.com](http://www.example.com)) if you're using it.
```
ALIAS example.com gke-europe.settlemint.com
ALIAS www.example.com gke-europe.settlemint.com
```
For Subdomains (e.g., app.example.com):
```
CNAME app.example.com gke-europe.settlemint.com
```
5. **Set TTL (Time to Live)**:
* Set a lower TTL (e.g., 300 seconds) initially to allow for quicker
propagation.
* You can increase it later for better caching (e.g., 3600 seconds).
6. **Verify DNS propagation**:
* Use online DNS lookup tools to check if your DNS changes have propagated
(see the scripted check after this list).
* Note that DNS propagation can take up to 48 hours, although it's often much
quicker.
7. **SSL/TLS configuration**:
* The SettleMint platform typically handles SSL/TLS certificates
automatically for both top-level domains and subdomains.
* If you need to use your own certificates, please contact us for assistance
and further instructions.
Note: The configuration process is similar for both top-level domains and
subdomains. The main difference lies in the type of DNS record you create (ALIAS
for top-level domains, CNAME for subdomains) and whether you need to remove
existing records.
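To script the propagation check from step 6, Node's built-in `dns` module is enough. A minimal sketch, where `app.example.com` is a placeholder and the expected target is the hostname from your deployment's connect tab:

```typescript
import { resolveCname } from "node:dns/promises";

// Placeholder: the subdomain you configured for your deployment.
const domain = "app.example.com";

const checkPropagation = async () => {
  const records = await resolveCname(domain);
  console.log(`CNAME for ${domain}:`, records);
  // Expect the hostname from the connect tab, e.g. gke-europe.settlemint.com
};

checkPropagation();
```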
## Manage custom deployments
1. Navigate to your application's **custom deployments** section
2. Click on a deployment to:
* View deployment status and details
* Manage environment variables
* Configure custom domains
* View logs
* Check endpoints
```bash
# List custom deployments
settlemint platform list custom-deployments --application my-app
# Get deployment details
settlemint platform read custom-deployment my-deployment
# Restart deployment
settlemint platform restart custom-deployment my-deployment
# Edit deployment
settlemint platform edit custom-deployment my-deployment \
--container-image registry.example.com/my-app:v2
```
```typescript
// List deployments
const listDeployments = async () => {
const deployments = await client.customDeployment.list("my-app");
};
// Get deployment details
const getDeployment = async () => {
const deployment = await client.customDeployment.read("deployment-unique-name");
};
// Restart deployment
const restartDeployment = async () => {
await client.customDeployment.restart("deployment-unique-name");
};
// Edit deployment
const editDeployment = async () => {
await client.customDeployment.edit("deployment-unique-name", {
imageTag: "v2"
});
};
```
## Limitations and considerations
When using custom deployment, keep the following limitations in mind:
1. **No root user privileges**: Your application will run without root user
privileges for security reasons.
2. **Read-only filesystem**: The filesystem is read-only. For data persistence,
consider using:
* Hasura: A GraphQL engine that provides a scalable database solution. See
[Hasura](/building-with-settlemint/hasura-backend-as-a-service).
* Other external services: Depending on your specific needs, you may use
other cloud-based storage or database services
3. **Stateless applications**: Your applications should be designed to be
stateless. This ensures better scalability and reliability in a cloud
environment.
4. **Use AMD64-based images**: Currently, our platform supports AMD64 (x86-64)
container images. Ensure your Docker images are built for the AMD64
architecture to guarantee smooth compatibility with our infrastructure.
## Best practices
* Design your applications to be stateless and horizontally scalable
* Use environment variables for configuration to make your deployments more
flexible
* Implement proper logging to facilitate debugging and monitoring
* Regularly update your container images to include the latest security patches
Custom deployment offers a powerful way to extend the capabilities of your
blockchain solutions on the SettleMint platform. By following these guidelines
and best practices, you can seamlessly integrate your custom applications into
your blockchain ecosystem.
Custom deployments support automatic SSL/TLS certificate management for custom
domains.
Congratulations!
You have successfully deployed your application front end and have a working
full-stack application built on SettleMint tools and services.
We hope your journey was smooth, please write to us at [support@settlemint.com](mailto:support@settlemint.com)
for any help or feedback.
file: ./content/docs/building-with-settlemint/evm-chains-guide/deploy-smart-contracts.mdx
meta: {
"title": "Deploy smart contracts",
"description": "Guide to deploy smart contracts and sub-graphs"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To begin, you'll need to write your Solidity smart contract that defines
your application's business logic. This includes designing the data
structure using struct, storing the data with mapping, and emitting events
to support off-chain indexing. Once written, the contract should be placed
in the contracts/ folder inside your code studio workspace.
Next, you need to prepare a deployment script using Hardhat Ignition. This
script should go into the ignition/modules/ folder and will declare how your
smart contract should be deployed. You'll use the buildModule function to
specify which contract to deploy and how it should be initialized.
After setting up the script, you should compile the contract. This step
generates the necessary build artifacts, including the ABI and bytecode,
which are essential for testing, deploying, and integrating the contract
with other components. Depending on the tool used (Hardhat or Foundry), the
output will be stored in the artifacts/ or out/ directory respectively.
Once compiled, it's important to thoroughly test your contract using either
Foundry or Hardhat. These tests will simulate real-world conditions. Writing
these tests helps you catch logic errors early before deployment.
When the contract passes all tests, you're ready to deploy. Start your local
network using the Hardhat - start network script and run the deployment
script through the IDE task manager. You'll be prompted to select your
custom deployment script file before the deployment begins.
Finally, to deploy to a SettleMint-hosted blockchain network, authenticate
using the SettleMint login script, select the appropriate node and private
key, and confirm deployment. The deployed address will be saved in a JSON
file under ignition/deployments/, which can then be used in middleware or
frontend applications to interact with the contract.
## Learning with a user data manager smart contract example
The goal of this tutorial is to design and build a simple user data manager
using Solidity. While the visible use case is centered around managing user data
(such as name, email, age, etc.), the hidden objective is to demonstrate the
core thought process behind building a smart contract that can store, update,
read, and soft delete data on the blockchain.
This example is intentionally kept simple and non-technical in terms of
blockchain identity (no wallets or signatures involved) to help beginners focus
on the fundamentals of:
* Designing smart contract data structures (structs and mappings)
* Writing public and restricted functions to interact with data
* Emitting and responding to events
* Handling update and soft delete logic to mimic realistic scenarios
(understand that transaction data is never deleted; a soft delete simply adds a
more recent entry about that record in a newer block on the blockchain)
By the end of this tutorial, you'll not only learn the foundational patterns
that apply to many real-world blockchain applications but also understand how to
develop and deploy smart contracts on SettleMint platform.
## 1. Let's start with the solidity smart contract code
A smart contract is a self-executing program deployed on the blockchain that
defines rules and logic for how data or assets are managed without relying on
intermediaries. In this tutorial, we are writing our smart contract using
Solidity, the most widely adopted programming language for Ethereum and
EVM-compatible blockchains. Solidity is a statically typed, contract-oriented
language designed specifically for writing smart contracts that run on the
Ethereum Virtual Machine (EVM).
If you're new to Solidity or want to deepen your understanding, here are some
helpful resources:
* Official Solidity documentation: [https://soliditylang.org/](https://soliditylang.org/)
* Solidity by Example (interactive guide): [https://solidity-by-example.org](https://solidity-by-example.org)
* CryptoZombies (gamified Solidity learning): [https://cryptozombies.io/en/solidity](https://cryptozombies.io/en/solidity)
These resources provide both foundational knowledge and hands-on coding
exercises to help you become comfortable with writing and deploying smart
contracts.
In your learning phase, you can also use ChatGPT ([https://chatgpt.com/](https://chatgpt.com/)) or any of
your go-to AI tools to generate basic Solidity smart contracts.
### Example userdata smart contract solidity code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
/**
* @title UserData
* @notice This contract manages user profiles through create, update, and delete operations.
* It emits events for each operation to enable off-chain indexing and notifications.
*/
contract UserData {
// ===================================================
// Section 1: Structs
// ===================================================
/**
* @notice Struct 1.1: Represents a user's profile.
* @param name Full name of the user.
* @param email Email address of the user.
* @param age Age of the user.
* @param country Country of residence.
* @param isKYCApproved Boolean flag indicating if KYC has been approved.
* @param isDeleted Boolean flag indicating if the profile is soft-deleted.
*/
struct UserProfile {
string name;
string email;
uint8 age;
string country;
bool isKYCApproved;
bool isDeleted;
}
// ===================================================
// Section 2: Storage
// ===================================================
/**
* @notice Storage 2.1: Mapping from a unique user ID to a user profile.
*/
mapping(uint256 => UserProfile) public profiles;
// ===================================================
// Section 3: Events
// ===================================================
/**
* @notice Event 3.1: Emitted when a new profile is created.
* @dev Emits full profile details for indexing by off-chain systems.
* @param userId The unique identifier for the user.
* @param name The user's full name.
* @param email The user's email address.
* @param age The user's age.
* @param country The user's country of residence.
* @param isKYCApproved Whether the user is KYC approved.
*/
event ProfileCreated(
uint256 indexed userId,
string name,
string email,
uint8 age,
string country,
bool isKYCApproved
);
/**
* @notice Event 3.2: Emitted when an existing profile is updated.
* @dev Emits updated profile details for indexing by off-chain systems.
* @param userId The unique identifier for the user.
* @param name The updated full name.
* @param email The updated email address.
* @param age The updated age.
* @param country The updated country.
* @param isKYCApproved The updated KYC approval status.
*/
event ProfileUpdated(
uint256 indexed userId,
string name,
string email,
uint8 age,
string country,
bool isKYCApproved
);
/**
* @notice Event 3.3: Emitted when a profile is soft-deleted.
* @param userId The unique identifier for the user.
*/
event ProfileDeleted(uint256 indexed userId);
// ===================================================
// Section 4: Functions
// ===================================================
/**
* @notice Function 4.1: Creates a new user profile.
* @dev The function reverts if a profile already exists for the given userId (unless it's soft-deleted).
* @param userId Unique identifier for the user.
* @param name The user's full name.
* @param email The user's email address.
* @param age The user's age.
* @param country The user's country of residence.
* @param isKYCApproved Boolean flag indicating if KYC is approved.
*/
function createProfile(
uint256 userId,
string memory name,
string memory email,
uint8 age,
string memory country,
bool isKYCApproved
) public {
// 4.1.1 Allow creation if profile is soft-deleted or does not exist (empty name indicates non-existence)
require(
profiles[userId].isDeleted || bytes(profiles[userId].name).length == 0,
"Profile already exists"
);
// 4.1.2 Create and store the new profile
profiles[userId] = UserProfile({
name: name,
email: email,
age: age,
country: country,
isKYCApproved: isKYCApproved,
isDeleted: false
});
// 4.1.3 Emit full profile data so off-chain indexers like The Graph can index it
emit ProfileCreated(userId, name, email, age, country, isKYCApproved);
}
/**
* @notice Function 4.2: Updates an existing user profile.
* @dev Reverts if the profile does not exist or has been soft-deleted.
* @param userId Unique identifier for the user.
* @param name New full name for the user.
* @param email New email address for the user.
* @param age New age for the user.
* @param country New country of residence for the user.
* @param isKYCApproved New KYC approval status.
*/
function updateProfile(
uint256 userId,
string memory name,
string memory email,
uint8 age,
string memory country,
bool isKYCApproved
) public {
// 4.2.1 Ensure the profile exists and is not deleted
require(
bytes(profiles[userId].name).length > 0 && !profiles[userId].isDeleted,
"Profile does not exist or has been deleted"
);
// 4.2.2 Update the profile with new details
profiles[userId] = UserProfile({
name: name,
email: email,
age: age,
country: country,
isKYCApproved: isKYCApproved,
isDeleted: false
});
// 4.2.3 Emit updated full profile data so subgraph can index changes
emit ProfileUpdated(userId, name, email, age, country, isKYCApproved);
}
/**
* @notice Function 4.3: Retrieves the profile of a given user.
* @dev Reverts if the profile has been soft-deleted or does not exist.
* @param userId Unique identifier for the user.
* @return The UserProfile struct containing the user's information.
*/
function getProfile(uint256 userId) public view returns (UserProfile memory) {
// 4.3.1 Ensure the profile exists (not soft-deleted)
require(!profiles[userId].isDeleted, "Profile not found or has been deleted");
return profiles[userId];
}
/**
* @notice Function 4.4: Soft-deletes a user profile.
* @dev Marks a profile as deleted without removing its data, reverting if the profile doesn't exist or is already deleted.
* @param userId Unique identifier for the user.
*/
function deleteProfile(uint256 userId) public {
// 4.4.1 Ensure that the profile exists and is not already deleted
require(
bytes(profiles[userId].name).length > 0 && !profiles[userId].isDeleted,
"Profile already deleted or doesn't exist"
);
// 4.4.2 Soft-delete the profile by setting its isDeleted flag to true
profiles[userId].isDeleted = true;
// 4.4.3 Emit event to notify that the profile has been deleted
emit ProfileDeleted(userId);
}
}
```
> Please ensure that the smart contract emits all required parameters in every
> event; otherwise, parameters that are not emitted will not be available during
> indexing.
## Smart contract, events & functions overview
In a smart contract, we define a clear set of events and functions to manage the
lifecycle of user profiles. These building blocks enable seamless interaction
with the contract, supporting profile creation, updates, retrieval, and soft
deletion, while ensuring all changes are traceable through emitted events.
Events play a crucial role in allowing off-chain services like The Graph to
listen for and respond to changes in contract state, whereas functions provide
the core interface for interacting with profile data on-chain.
Below is a structured overview of the key events and functions included in the
contract:
| # | Events | Parameters | Description |
| --- | ---------------- | ---------------------------------------------------------------------------------------------------- | -------------------------------------- |
| 3.1 | `ProfileCreated` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | Emitted when a new profile is created |
| 3.2 | `ProfileUpdated` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | Emitted when a profile is updated |
| 3.3 | `ProfileDeleted` | `uint256 userId`                                                                                       | Emitted when a profile is soft-deleted  |

| #   | Functions       | Parameters                                                                                             | Returns              | Description                                |
| --- | --------------- | ---------------------------------------------------------------------------------------------------- | -------------------- | ----------------------------------------- |
| 4.1 | `createProfile` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | – | Creates a new user profile |
| 4.2 | `updateProfile` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | – | Updates an existing profile |
| 4.3 | `getProfile` | `uint256 userId` | `UserProfile memory` | Retrieves the profile if not soft-deleted |
| 4.4 | `deleteProfile` | `uint256 userId` | – | Soft-deletes the profile |
## CRUD mapping for the smart contract
This table maps traditional Web2-style CRUD operations to the equivalent
Solidity functions in the smart contract:
| **CRUD** | **Solidity Function** | **Explanation** |
| ---------- | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Create** | `createProfile()` | Adds a new user profile to the blockchain using a unique `userId`. This simulates an `INSERT` operation in databases. It checks that the profile does not already exist (unless soft-deleted) and stores the user's details. Emits `ProfileCreated` with full data for off-chain indexing. |
| **Read** | `getProfile()` | Retrieves an existing profile by its `userId` , similar to a `SELECT` query in SQL. It returns the user's profile only if it hasn't been soft-deleted. This function is marked `view`, meaning it does not modify blockchain state and can be called without gas. |
| **Update** | `updateProfile()` | Modifies all fields of an existing user profile. Acts like an `UPDATE` in Web2 databases. It ensures the profile exists and is not deleted, then updates it with the provided values. Emits `ProfileUpdated` with full details for off-chain use. |
| **Delete** | `deleteProfile()` | Performs a **soft delete** by setting the `isDeleted` flag to `true`, without removing the actual data from storage. This is similar to a logical delete used in many enterprise databases. The data remains on-chain (for auditability), but `getProfile()` will no longer return it. Emits `ProfileDeleted`. |
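To make this mapping concrete, here is a minimal round-trip sketch in the
Hardhat + viem style used by the tests later in this guide; `userContract` is
assumed to be an already-deployed `UserData` instance:

```ts
// Create: insert a new profile under userId 1
await userContract.write.createProfile([1n, "Alice", "alice@email.com", 30, "USA", true]);

// Read: fetch the profile (view call, no gas needed)
const profile = await userContract.read.getProfile([1n]);

// Update: overwrite all fields for userId 1
await userContract.write.updateProfile([1n, "Alice", "alice@new.com", 31, "USA", true]);

// Delete: soft-delete; the data stays on-chain but getProfile now reverts
await userContract.write.deleteProfile([1n]);
```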
## 2. Let's add this smart contract to code studio
When you deploy an **empty** smart contract set on the SettleMint platform, you
get a very simple **Counter.sol** contract as an example; you may delete it.
In the contracts folder, create a file called **UserData.sol** and copy-paste
the content of the smart contract code above.
## 3. Prepare deployment script
In the **ignition** folder, you will find a folder called **modules** containing
a **main.ts** file, which is a basic contract deployment script. You may delete
it once you understand its structure. In this folder, create a file called
**deployUserData.ts**.
### Understanding the deployment script code structure
```ts
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const UserDataModule = buildModule("UserDataModule", (m) => {
const userdata = m.contract("UserData");
return { userdata };
});
export default UserDataModule;
```
**Let's understand the key parts of this code:**

This deployment script uses Hardhat Ignition to define and execute the
deployment of a smart contract. It begins by importing the `buildModule`
function from the Ignition library, which is used to define a deployment
module. The module is named `"UserDataModule"` and is constructed using a
callback function that receives a context object `m`.

Within this function, `m.contract("UserData")` declares that a contract named
`UserData` (which must match the contract name inside the Solidity source file)
should be deployed. This is how Ignition knows which contract is being referred
to.

The deployed contract instance is stored in a variable called `userdata` and
returned from the module so it can be accessed later if needed. Finally, the
module is exported as the default export so it can be run by Hardhat's Ignition
system using the CLI.
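For reference, outside the task manager the same module can be deployed with the
standard Hardhat Ignition CLI, along the lines of
`npx hardhat ignition deploy ignition/modules/deployUserData.ts --network localhost`
(the exact network name depends on your Hardhat configuration).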
## 4. Compile the smart contract code
To run the various scripts that compile, test, and deploy smart contracts and
subgraphs, open the task manager section in the top-left area of the IDE.
When a Solidity smart contract is compiled, the source code is transformed into
low-level bytecode that can be executed on the Ethereum Virtual Machine (EVM).
This process also generates important metadata such as the ABI (Application
Binary Interface), which defines how external applications or scripts can
interact with the contract's functions and events. Additionally, the compiler
produces debugging information, source maps, and compiler settings. These
outputs are essential for deploying, testing, and integrating the contract with
dApps or frontend applications.
### Foundry build
If you compile using Foundry Build, a folder named after your smart contract
file is created in the **out** folder. Within it, ContractName.json and
ContractName.metadata.json are generated, where ContractName is the name of the
contract inside the Solidity file.

### Hardhat build
If you compile using Hardhat Build, a folder named after your smart contract
file is created in the **artifacts** folder. Within it, artifacts.d.ts,
ContractName.d.ts, ContractName.dbg.json, and ContractName.json are generated.
ContractName.json contains the ABI.

When you compile a Solidity smart contract in SettleMint, it processes .sol
files and generates various output artifacts needed for deployment and
interaction. For example, after compiling UserData.sol, you get the following
inside the artifacts/ directory:
📂 artifacts/contracts/UserData.sol/
* UserData.json – This is the main artifact file. It contains the ABI
  (Application Binary Interface) and the compiler metadata
* UserData.dbg.json – Debugging info including source maps and AST
* UserData.d.ts – TypeScript definition file for better type safety when using
the contract in frontend or scripting environments
* artifacts.d.ts – Global TypeScript declarations for all compiled contracts
📂 artifacts/build-info/
* hash.json – Contains detailed compiler input/output and full metadata for the
build process, useful for verifying or analyzing compilation details
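As a quick sanity check, you can load the generated artifact and inspect its
ABI from a Node.js script (a minimal sketch; the path assumes the Hardhat
layout described above):

```ts
import { readFileSync } from "node:fs";

// Load the compiled artifact produced by the Hardhat build
const artifact = JSON.parse(
  readFileSync("artifacts/contracts/UserData.sol/UserData.json", "utf8")
);

// List the functions and events the contract exposes
for (const entry of artifact.abi) {
  console.log(entry.type, entry.name ?? "(unnamed)");
}
```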
## 5. Test the smart contract
Smart contract testing is a critical part of the development lifecycle in
blockchain and decentralized application (dApp) projects. Since smart contracts
are immutable once deployed to the blockchain, bugs or vulnerabilities can
result in permanent loss of funds, data corruption, or security breaches.
Thorough testing ensures that smart contracts behave as expected under various
scenarios and edge cases before they go live on the mainnet.
Testing frameworks like Hardhat and Foundry provide robust tooling to write and
execute tests in Solidity or JavaScript/TypeScript. These frameworks offer
helpful utilities such as assertions, mock accounts, blockchain state
manipulation (e.g., time travel or snapshot/rollback), and expected reverts.
Additionally, testing libraries like forge-std/Test.sol (in Foundry) or chai (in
Hardhat) enable expressive and readable test assertions.
### Foundry test
In the **test** folder in the IDE, create a **UserData.t.sol** file for the
Foundry test script.
It uses forge-std/Test.sol, a powerful utility library provided by Foundry's
standard library (forge-std) that simplifies writing and executing tests for
smart contracts. It extends the base Solidity Test contract and includes a rich
set of assertions, cheatcodes, and debugging tools that make testing more
expressive and efficient.
When a test contract inherits from Test, it gains access to functions like
assertEq, assertTrue, fail, and testing cheatcodes such as vm.prank,
vm.expectRevert, vm.roll, and many more. These tools simulate complex behaviors
and edge cases in a local testing environment without the need to manually
manipulate the EVM state. For example, vm.expectRevert allows developers to
anticipate and verify error conditions, while assertEq simplifies comparisons
between expected and actual results.
```solidity
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.24;
import "forge-std/Test.sol";
import "../contracts/UserData.sol"; // Adjust the import path if needed
contract UserTest is Test {
UserData public user;
function setUp() public {
// Deploy the contract before each test
user = new UserData();
}
function testCreateProfile() public {
// Call createProfile
user.createProfile(1, "Alice", "alice@email.com", 30, "USA", true);
// Fetch the profile struct
UserData.UserProfile memory profile = user.getProfile(1);
// Assert values match what we set
assertEq(profile.name, "Alice");
assertEq(profile.email, "alice@email.com");
assertEq(profile.age, 30);
assertEq(profile.country, "USA");
assertEq(profile.isKYCApproved, true);
assertEq(profile.isDeleted, false);
}
function testUpdateProfile() public {
// First create a profile
user.createProfile(2, "Bob", "bob@email.com", 28, "UK", false);
// Update profile with new values
user.updateProfile(2, "Bob Updated", "bob@new.com", 29, "Canada", true);
// Fetch the updated profile
UserData.UserProfile memory profile = user.getProfile(2);
// Assert updated values
assertEq(profile.name, "Bob Updated");
assertEq(profile.email, "bob@new.com");
assertEq(profile.age, 29);
assertEq(profile.country, "Canada");
assertEq(profile.isKYCApproved, true);
assertEq(profile.isDeleted, false);
}
function testDeleteProfile() public {
// Create and delete a profile
user.createProfile(3, "Charlie", "charlie@email.com", 25, "Germany", true);
user.deleteProfile(3);
// Expect revert on reading a deleted profile
vm.expectRevert("Profile not found or has been deleted");
user.getProfile(3);
}
function testCannotCreateDuplicateProfile() public {
// Create the profile
user.createProfile(4, "Dan", "dan@email.com", 35, "India", false);
// Attempt to create with the same ID again should revert
vm.expectRevert("Profile already exists");
user.createProfile(4, "DanAgain", "dan@retry.com", 36, "India", true);
}
function testCannotUpdateNonexistentProfile() public {
// Try to update a profile that was never created
vm.expectRevert("Profile does not exist or has been deleted");
user.updateProfile(5, "Eve", "eve@email.com", 31, "Brazil", true);
}
function testCannotDeleteNonexistentProfile() public {
// Try to delete a profile that doesn't exist
vm.expectRevert("Profile already deleted or doesn't exist");
user.deleteProfile(6);
}
function testSoftDeletedCannotBeRead() public {
// Create and delete a profile
user.createProfile(7, "Zed", "zed@email.com", 44, "Japan", true);
user.deleteProfile(7);
// Trying to read it should revert
vm.expectRevert("Profile not found or has been deleted");
user.getProfile(7);
}
function testRecreateAfterSoftDelete() public {
// Create and delete a profile
user.createProfile(8, "Tom", "tom@email.com", 20, "Italy", true);
user.deleteProfile(8);
// Re-create it with new data (allowed due to soft-deletion)
user.createProfile(8, "TomNew", "tom@new.com", 21, "Spain", false);
UserData.UserProfile memory profile = user.getProfile(8);
assertEq(profile.name, "TomNew");
assertEq(profile.email, "tom@new.com");
assertEq(profile.age, 21);
assertEq(profile.country, "Spain");
assertEq(profile.isKYCApproved, false);
assertEq(profile.isDeleted, false);
}
}
```

### Hardhat test
In the **test** folder in the IDE, create a **UserData.ts** file for the Hardhat
test script.
```ts
import { loadFixture } from "@nomicfoundation/hardhat-toolbox-viem/network-helpers";
import { expect } from "chai";
import hre from "hardhat";
// Describe our test suite for the UserData contract
describe("UserData", function () {
// deployUserFixture deploys the UserData contract using viem and returns the deployed contract instance
// along with the address of the first wallet client.
async function deployUserFixture() {
// Deploy the UserData contract using viem.
// The contract name ("UserData") must match your contract's name.
const userContract = await hre.viem.deployContract("UserData");
// Get the first wallet client's account address to use as a signer for simulate calls.
const account = (await hre.viem.getWalletClients())[0].account.address;
return { userContract, account };
}
// Define a sample user profile object for tests.
const sampleProfile = {
userId: 1n, // BigInt literal is used for user IDs
name: "Alice",
email: "alice@example.com",
age: 30,
country: "Wonderland",
isKYCApproved: true,
};
// -------------------------------
// Tests for createProfile functionality
// -------------------------------
describe("createProfile", function () {
it("should create a new profile", async function () {
// Use loadFixture to deploy a fresh instance of the contract.
const { userContract } = await loadFixture(deployUserFixture);
// Call the write method for createProfile with sampleProfile data.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Read the stored profile from the contract using the read method.
const profile = (await userContract.read.getProfile([
sampleProfile.userId,
])) as {
name: string;
email: string;
age: number;
country: string;
isKYCApproved: boolean;
};
// Assert that the returned profile data matches our input values.
expect(profile.name).to.equal(sampleProfile.name);
expect(profile.email).to.equal(sampleProfile.email);
});
it("should not allow duplicate profile creation", async function () {
// Deploy a fresh instance using the fixture.
const { userContract, account } = await loadFixture(deployUserFixture);
// Create a profile with the sample data.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Attempt to simulate (dry-run) creating a duplicate profile.
// We use simulate.createProfile so that no state change occurs if it fails.
try {
await userContract.simulate.createProfile(
[sampleProfile.userId, "Bob", "bob@example.com", 25, "Utopia", false],
{ account }
);
// If no error is thrown, the test should fail.
expect.fail("Expected simulate.createProfile to revert");
} catch (err: any) {
// Check that an error is thrown.
expect(err).to.exist;
}
});
});
// -------------------------------
// Tests for updateProfile functionality
// -------------------------------
describe("updateProfile", function () {
it("should update an existing profile", async function () {
// Deploy a fresh instance.
const { userContract } = await loadFixture(deployUserFixture);
// First, create the profile using the sample data.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Update the profile's email using updateProfile.
await userContract.write.updateProfile([
sampleProfile.userId,
sampleProfile.name,
"alice@updated.com", // new email value
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Read the updated profile.
const updated = (await userContract.read.getProfile([
sampleProfile.userId,
])) as {
name: string;
email: string;
age: number;
country: string;
isKYCApproved: boolean;
};
// Verify that the email was updated.
expect(updated.email).to.equal("alice@updated.com");
});
it("should fail to update non-existent profile", async function () {
// Deploy a fresh instance.
const { userContract, account } = await loadFixture(deployUserFixture);
// Attempt to simulate updating a profile that does not exist.
try {
await userContract.simulate.updateProfile(
[999n, "Ghost", "ghost@void.com", 99, "Nowhere", false],
{ account }
);
expect.fail("Expected simulate.updateProfile to revert");
} catch (err: any) {
// Just ensure that an error was thrown.
expect(err).to.exist;
}
});
});
// -------------------------------
// Tests for deleteProfile functionality
// -------------------------------
describe("deleteProfile", function () {
it("should soft delete a profile", async function () {
// Deploy a fresh instance.
const { userContract } = await loadFixture(deployUserFixture);
// Create the profile.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Delete the profile.
await userContract.write.deleteProfile([sampleProfile.userId]);
// Try reading the profile, expecting it to revert.
try {
await userContract.read.getProfile([sampleProfile.userId]);
expect.fail("Expected getProfile to revert");
} catch (err: any) {
expect(err).to.exist;
}
});
it("should fail to delete a non-existent profile", async function () {
// Deploy a fresh instance.
const { userContract, account } = await loadFixture(deployUserFixture);
// Attempt to simulate deleting a profile that does not exist.
try {
await userContract.simulate.deleteProfile([123n], { account });
expect.fail("Expected simulate.deleteProfile to revert");
} catch (err: any) {
expect(err).to.exist;
}
});
});
});
```
This test script leverages Hardhat's modern support for viem, a lightweight and
fast alternative to Ethers.js designed for more efficient interaction with
Ethereum contracts. The test uses loadFixture from
hardhat-toolbox-viem/network-helpers to ensure test isolation and efficient
deployments; each test gets a clean contract instance to work with.
Inside the script, we define a fixture function (deployUserFixture) that deploys
the UserData contract and exposes the first wallet client's account. The tests
cover all core functionalities of the contract: creating, updating, reading, and
soft-deleting user profiles. Assertions are written using Chai's expect syntax,
while contract interactions (like write.createProfile and read.getProfile)
follow the viem pattern, making the test code both concise and expressive.
Run the **hardhat test** script to test the smart contract.

Once the tests pass, you can deploy to the local Hardhat network. First start
the test network using the **hardhat - start network** script in the task
manager, then run the **hardhat - deploy to local network** script.

Deploy to the test network:

If you click on **hardhat - deploy to local network** and nothing happens, you
have missed the step of selecting the correct deployment script and pressing
the enter key. You will see a message, **extra commandline arguments, e.g.
--verify (press 'enter' to confirm or 'escape' to cancel)**, in the top middle
of the IDE. Hit enter and you will see **ignition/modules/main.ts**; edit the
last part to the correct filename (e.g. deployUserData.ts), i.e. the name of
the deployment script you created in the ignition folder, and hit enter again
to run the deployment script. This applies to all deploy cases, whether on the
local network or the platform network.
## 6. Deploy the smart contract to platform network
Use the **SettleMint Login** script in the task manager to log in; you will
need your personal access token. To generate a personal access token, refer to
[Personal access token](/platform-components/security-and-authentication/personal-access-tokens).
Run the **hardhat - deploy to platform network** script and enter the path of
the deployment script:

ignition/modules/deployUserData.ts
> Before deploying to a network, please do not forget to log in to SettleMint
> via the **settlemint login** script.
Select the node to which you wish to deploy this smart contract. If you get an
error, please ensure that a private key was created and attached to the node on
which you wish to deploy the smart contract.
Select the private key you wish to use to deploy the smart contract. If you are
using a public network or a network with gas fees, make sure that this private
key's wallet is funded.
Select yes when prompted - **confirm deploy to network (network name)? ›
(y/N)**.
Wait for a few minutes for the contract to be deployed.
## Deployed contract address
The deployed contract address is stored in the deployed\_addresses.json file
located in the ignition/deployments folder.

Congratulations!
You have successfully compiled, tested, and deployed your smart contract on a
blockchain network. Now you can proceed to the middlewares to get APIs for
making smart contract transactions, writing data to the chain, and reading data
in a structured format.
file: ./content/docs/building-with-settlemint/evm-chains-guide/integration-studio.mdx
meta: {
"title": "Integration studio",
"description": "Visual workflow builder for custom APIs and integrations"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
The Integration Studio is a dedicated low-code environment that enables
developers and business users to build backend workflows, API endpoints, and
custom logic using a visual interface. Powered by Node-RED, it offers an
intuitive drag-and-drop experience for orchestrating flows between smart
contracts, external APIs, databases, and storage systems, all within the
SettleMint ecosystem.
Instead of writing boilerplate backend code, developers will define logic using
nodes and flows, visually representing how data moves between services. These
flows can be triggered by webhooks, user interactions, smart contract events, or
timed executions. Under the hood, each Integration Studio is deployed as an
isolated and scalable container that supports JavaScript-based execution,
environment configuration, and secure API access.
Each node in the flow is designed to perform a specific task, such as receiving
HTTP input, transforming payloads, calling external APIs, or executing custom
JavaScript functions. These nodes are connected inside a flow, which represents
a unit of logic or an end-to-end integration path. You can create multiple flows
within the same Integration Studio instance, allowing you to modularize your
business logic and deploy distinct endpoints for different application use
cases.
When developers deploy the Integration Studio to their application, a secure
Node-RED editor is provisioned, accessible via the platform UI. The visual
interface includes common built-in nodes and pre-integrated libraries like
ethers (for blockchain interaction), ipfsHttpClient (for decentralized storage),
and others. Additional libraries can also be added manually in the project
settings.
A common scenario might involve triggering a flow via an HTTP request, fetching
on-chain data from a smart contract using ethers.js, formatting the result, and
returning it as a JSON response. These kinds of flows can be designed in
minutes, providing API endpoints that are automatically hosted and secured by
SettleMint infrastructure.
Developers can configure API Keys to restrict access to these endpoints and
monitor calls using the platform's access token management system. Every
endpoint is served over HTTPS and can be integrated with frontend dApps, backend
services, or third-party platforms.
The simplicity of visual programming, combined with the power of JavaScript,
makes Integration Studio a robust backend builder tailored for blockchain
applications. It significantly reduces development time while maintaining
flexibility for custom use cases. Developers gain fine-grained control over how
their dApp behaves off-chain, without leaving the SettleMint environment.
The SettleMint Integration Studio is a low-code development environment which
enables you to implement business logic for your application simply by dragging
and dropping.
Under the hood, the Integration Studio is powered by a **Node-RED** instance
dedicated to your application. It is a low-code programming platform built on
Node.js and designed for event-driven application development.
[Learn more about Node-RED here](https://nodered.org/docs/).
## Basic concepts
The business logic for your application can be represented as a sequence of
actions. Such a sequence of actions is represented by a **flow** in the
Integration Studio. To bring your application to life, you need to create flows.
**Nodes** are the smallest building blocks of a flow.
### Nodes
Nodes are the smallest building blocks. They can have at most one input
port and multiple output ports. They are triggered by some event (e.g. an HTTP
request). When triggered, they perform some user-defined actions and generate an
output. This output can be passed to the input of another node to trigger
another action.
### Flows
A flow is represented as a tab within the editor workspace and is the main way
to organize nodes. You can have more than one set of connected nodes in a flow
tab.
The Integration Studio allows you to create flows in the fastest way possible.
You can drag and drop nodes in the workspace and easily connect them by clicking
from the output port of one node to the input port of another to create complex
flows. This allows you to visualise the orchestration and interaction between
your components (your nodes). Since you can clearly visualize the sequence of
actions your application is going to perform, it is not only more interpretable
but also much easier to debug in the future.
The use cases include interacting with other web services, applications, and
even IoT devices - orchestrating them for any kind of purpose to bring your
business solution to life.
[Learn more about the basic concepts of Node-RED here](https://nodered.org/docs/user-guide/concepts)
## Adding the integration studio
Navigate to the **application** where you want to add the integration studio.
Click **Integration tools** in the left navigation, and then click **Add an
integration tool**. This opens a form.

### Select integration studio
Select **Integration Studio** and click **Continue** to proceed.
### Choose a name
Choose a **name** for your Integration Studio. Choose one that will be easily
recognizable in your dashboards (e.g. Crowdsale Flow).
### Select deployment plan
Choose a deployment plan. Select the type, cloud provider, region and resource
pack.
[More about deployment plans](/launching-the-platform/managed-cloud-saas/deployment-plans)
### Confirm setup
You can see the **resource cost** for the Integration Studio displayed at the
bottom of the form. Click **Confirm** to add the Integration Studio.
## Using the integration studio
When the Integration Studio is deployed, click on it from the list, and go to
the **Interface** tab to start building your flows. You can also view the
interface in full screen mode.
Once the Integration Studio interface is loaded, you will see 2 flow tabs: "Flow
1" and "Example". Head over to the **"Example" tab** to see some full-blown
example flows to get you started.
Double-click any of the nodes to see the code they are running. This code is
written in JavaScript, and it represents the actions the particular node
performs.

### Setting up a flow
Before we show you how to set up your own flow, we recommend reading this
[article by Node-RED on creating your first flow](https://nodered.org/docs/tutorials/first-flow).
Now let's set up an example flow together and build an endpoint to get the
latest block number of the Polygon Mumbai Testnet using the Integration Studio.
If you do not have a Polygon Mumbai Node, you can easily
[deploy a node](/platform-components/blockchain-infrastructure/blockchain-nodes)
first.
### Add http input node
Drag and drop an **Http In node** to listen for requests. If you double-click the node, you will see a couple of parameters to set:
* `METHOD` - set it to `GET`. This is the HTTP method that your node is configured
  to listen to.
* `URL` - set it to `/getLatestBlock`. This is the endpoint that your node will
  listen on.
### Add function node
Drag and drop a **function node**. This is the node that will query the
blockchain for the block number. Double-click the node to configure it.
`rpcEndpoint` is the RPC url of your Polygon Mumbai Node.
Under the **Connect tab** of your Polygon Mumbai node, you will find its RPC url.
`accessToken` - You will need an access token for your application. If you do
not have one, you can easily
[create an access token](/platform-components/security-and-authentication/application-access-tokens)
first.
Enter the following snippet in the Message tab:
```javascript
///////////////////////////////////////////////////////////
// Configuration //
///////////////////////////////////////////////////////////
const rpcEndpoint = "https://YOUR_NODE_RPC_ENDPOINT.settlemint.com";
const accessToken = "YOUR_APPLICATION_ACCESS_TOKEN_HERE";
///////////////////////////////////////////////////////////
// Logic //
///////////////////////////////////////////////////////////
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
msg.payload = await provider.getBlockNumber();
return msg;
///////////////////////////////////////////////////////////
// End //
///////////////////////////////////////////////////////////
```
**Note:** ethers and some ipfs libraries are already available by default and can be used like this:
```javascript
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
const ipfsHttpClient = global.get("ipfsHttpClient");
const client = ipfsHttpClient.create(`${ipfsEndpoint}/${accessToken}/api/v0`);
const uint8arrays = global.get("uint8arrays");
const itAll = global.get("itAll");
const data = uint8arrays.toString(
uint8arrays.concat(await itAll(client.cat(cid)))
);
```
If the library you need isn't available by default, you will need to import it
in the setup tab. Example for the ethers providers:

### Add http response node
Drag and drop a **Http Response node** to reply to the request. Double-click and
configure:
* `Status code` - This is the HTTP status code that the node will respond with
  after completion of the request. We set it to 200 (`OK`).
Click on the `Deploy` button in the top right corner to save and deploy your
changes.
### Test your endpoint
Now, go back to the **Connect tab** of your Integration Studio to see your **API
endpoint**, which looks something like
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com`.

You can now send requests to
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock` to get
the latest block number. Do not forget to create an API Key for your Integration
Studio and pass it as the `x-auth-token` authorization header with your request.
Example terminal command:
```bash
curl -H "x-auth-token: bpaas-YOUR_INTEGRATION_KEY_HERE" https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock
```
The API is live and protected by the authorization header, and you can
seamlessly integrate with your application.
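For example, calling the endpoint from a TypeScript client could look like the
following sketch (the URL and key are placeholders for the values from your
connect tab):

```ts
// Call the Integration Studio endpoint; URL and API key are placeholders
const response = await fetch(
  "https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock",
  { headers: { "x-auth-token": "bpaas-YOUR_INTEGRATION_KEY_HERE" } }
);

// The flow returns the latest block number as the response body
console.log(await response.text());
```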
You can access more than 4,000 pre-built modules from the built-in library.

You can use the Integration Studio to build very complex flows. Learn more in
this [cookbook by Node-RED](https://cookbook.nodered.org/) on the different
types of flows.
file: ./content/docs/building-with-settlemint/evm-chains-guide/setup-api-portal.mdx
meta: {
"title": "Setup smart contract portal",
"description": "Setup smart contract portal"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To set up the smart contract portal for a smart contract, you first need to
compile the contract and locate its ABI file. The ABI is auto-generated during
compilation and acts as a translation layer between your contract and external
tools like frontends or API layers. You'll find this ABI in the
artifacts/contracts/ContractName.sol/ContractName.json file if you used Hardhat,
or under out/ if you used Foundry. This file contains structured definitions of
all contract functions and events, and is essential for enabling external calls
through REST or GraphQL.
Once you have the ABI file, navigate to your application on the SettleMint
platform and go to the middleware section. Here, you'll need to add a new
middleware of type API portal. You will assign a name, select the blockchain
node where your contract is deployed, and upload the ABI file. Make sure the ABI
file is named appropriately because that name will reflect in your API
structure. After confirming the setup, the API portal will automatically expose
both REST and GraphQL endpoints based on your contract's ABI.
To connect the API portal with your contract logic, you must provide the smart
contract's deployed address. This can be found inside the
deployed\_addresses.json file generated by Ignition after a successful
deployment. The portal will use this address to direct requests to the correct
contract instance.
Once deployed, you can start making REST API calls using standard HTTP requests.
You'll use the base URL shown under the portal's connect tab, and structure your
API requests according to the contract's ABI. Each call should include
authentication via your application access token, and a JSON payload specifying
the function name, parameters, and caller details such as from, gasLimit, and
simulate. The response will return transaction hashes for writes, or data for
reads.
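The sketch below is illustrative only: the exact route and payload shape for
your contract are generated from the ABI and shown in the portal's connect tab
and API reference, so treat the URL, path, and `input` field here as
hypothetical placeholders:

```ts
// Hypothetical route and payload; copy the real ones from your portal's
// connect tab / generated API reference
const response = await fetch(
  "https://YOUR_PORTAL_URL.settlemint.com/api/user-data/createProfile",
  {
    method: "POST",
    headers: {
      "x-auth-token": "YOUR_APPLICATION_ACCESS_TOKEN",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "0xYOUR_SIGNER_ADDRESS", // caller details described above
      gasLimit: "0",
      simulate: false, // set true for a dry run
      input: {
        userId: "1",
        name: "Alice",
        email: "alice@example.com",
        age: 30,
        country: "USA",
        isKYCApproved: true,
      },
    }),
  }
);

// Writes return a transaction hash; reads return the decoded data
console.log(await response.json());
```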
## How to setup the smart contract portal in the SettleMint platform
### 1. Understanding application binary interface (ABI)
The application binary interface (ABI) is an essential artifact in Ethereum and
other EVM-based blockchain ecosystems that defines how smart contracts
communicate with the outside world. It acts as a formal agreement between a
deployed smart contract and any external entity, such as web applications,
backend servers, wallets, or command-line tools, about how to encode and decode
data for function calls, returns, and events. The ABI describes, in a structured
JSON format, each function's name, inputs, outputs, type (e.g., function,
constructor, event), and visibility (view, pure, payable, etc.).
The ABI is generated automatically when a Solidity smart contract is compiled.
When developers write a Solidity contract and run it through the Solidity
compiler (solc), or through development frameworks like Hardhat or Truffle, the
output includes several artifacts, one of which is the ABI. This ABI is derived
by analyzing the contract's function signatures, input and output types, event
declarations, and constructor. For each function, the compiler calculates a
unique function selector, a 4-byte identifier based on the first 4 bytes of the
keccak256 hash of the function's signature (e.g., transfer(address,uint256)).
The ABI then maps these selectors to their corresponding human-readable
definitions in JSON form.
At runtime, when an application (like a frontend built with Web3.js or
Ethers.js) wants to interact with the contract, it uses this ABI to encode the
function call and its parameters into hexadecimal data that the Ethereum Virtual
Machine (EVM) can understand. Similarly, when the EVM returns data (e.g., the
result of a view function or an event emitted during a transaction), the ABI
provides the blueprint for decoding this binary data back into usable JavaScript
objects. In addition to function calls, the ABI is also critical for subscribing
to and decoding events emitted by the contract. Each event in the contract is
represented in the ABI with a structure that allows applications to listen for
specific logs on-chain and parse them into structured data.
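To make the selector derivation concrete, here is a small sketch using ethers
v5 (the same library version used elsewhere in this guide); `ethers.utils.id`
computes the keccak256 hash of a UTF-8 string:

```ts
import { ethers } from "ethers";

// The canonical signature omits parameter names and uses only types
const signature = "createProfile(uint256,string,string,uint8,string,bool)";

// 4-byte selector = first 4 bytes (8 hex chars after 0x) of keccak256(signature)
const selector = ethers.utils.id(signature).slice(0, 10);
console.log(selector);
```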
### 2. Using ABI from the **UserData.sol** smart contract which we deployed in the previous step
Navigate to **/artifacts/contracts/UserData.sol/UserData.json** to find the ABI
of the contract we compiled and deployed in the previous step. Download the JSON
file.
If you build using Foundry, you will find the ABI in the **out** folder, at
out/ContractName.sol/ContractName.json.

The ABI you will get is the following:
```json
{
"_format": "hh-sol-artifact-1",
"contractName": "UserData",
"sourceName": "contracts/UserData.sol",
"abi": [
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"indexed": false,
"internalType": "string",
"name": "name",
"type": "string"
},
{
"indexed": false,
"internalType": "string",
"name": "email",
"type": "string"
},
{
"indexed": false,
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"indexed": false,
"internalType": "string",
"name": "country",
"type": "string"
},
{
"indexed": false,
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "ProfileCreated",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "uint256",
"name": "userId",
"type": "uint256"
}
],
"name": "ProfileDeleted",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"indexed": false,
"internalType": "string",
"name": "name",
"type": "string"
},
{
"indexed": false,
"internalType": "string",
"name": "email",
"type": "string"
},
{
"indexed": false,
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"indexed": false,
"internalType": "string",
"name": "country",
"type": "string"
},
{
"indexed": false,
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "ProfileUpdated",
"type": "event"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "createProfile",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
}
],
"name": "deleteProfile",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
}
],
"name": "getProfile",
"outputs": [
{
"components": [
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
},
{
"internalType": "bool",
"name": "isDeleted",
"type": "bool"
}
],
"internalType": "struct UserData.UserProfile",
"name": "",
"type": "tuple"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"name": "profiles",
"outputs": [
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
},
{
"internalType": "bool",
"name": "isDeleted",
"type": "bool"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "updateProfile",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
}
],
"bytecode": "0x6080806040523460155761121d908161001b8239f35b600080fdfe6080604052600436101561001257600080fd5b60003560e01c806328279308146109da578063985736ce1461087f578063c36fe3d6146107b5578063eb5339291461023d5763f08f4f641461005357600080fd5b346102385760207ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc36011261023857600435600060a060405161009581610f12565b6060815260606020820152826040820152606080820152826080820152015280600052600060205260ff60046040600020015460081c166101b45760005260006020526101726040600020604051906100ed82610f12565b6100f6816110a6565b8252610104600182016110a6565b906020830191825261019f60ff60028301541660408501908152600461012c600385016110a6565b936060870194855201549260ff6101856080880196828716151588528260a08a019760081c1615158752604051998a9960208b525160c060208c015260e08b0190611168565b9051601f198a83030160408b0152611168565b925116606087015251601f19868303016080870152611168565b9151151560a084015251151560c08301520390f35b60846040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602560248201527f50726f66696c65206e6f7420666f756e64206f7220686173206265656e20646560448201527f6c657465640000000000000000000000000000000000000000000000000000006064820152fd5b600080fd5b346102385761024b36610fa8565b908560009695939652600060205260ff60046040600020015460081c168015610797575b15610739576040519561028187610f12565b83875260208701858152604088019060ff831682526060890198848a526080810192861515845260a0820192600084528a60005260006020526040600020925180519067ffffffffffffffff82116105885781906102df8654611053565b601f81116106e6575b50602090601f831160011461068357600092610678575b50506000198260011b9260031b1c19161783555b518051600184019167ffffffffffffffff82116105885781906103368454611053565b601f8111610625575b50602090601f83116001146105c2576000926105b7575b50506000198260011b9260031b1c19161790555b60ff600283019151167fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00825416179055600381019951998a5167ffffffffffffffff8111610588576103bc8254611053565b601f8111610540575b5060209b601f82116001146104a8579261048c9492826004937fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a999760009261049d575b50506000198260011b9260031b1c19161790555b019151151560ff7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0084541691161782555115157fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00ff61ff00835492151560081b169116179055565b9b601f1982169c83600052816000209d60005b818110610c8d5750837fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a99979461048c9997946004976001951061050f575b505050811b019055610425565b929e8f83015181556001019e60200192602001610c3a565b826000526020600020601f830160051c81019160208410610ce3575b601f0160051c01905b818110610cd75750610b62565b60008155600101610cca565b9091508190610cc1565b015190508e80610af3565b600085815282812093601f1916905b818110610d435750908460019594939210610d2a575b505050811b019055610b07565b015160001960f88460031b161c191690558e8080610d1d565b92936020600181928786015181550195019301610d07565b909150836000526020600020601f840160051c81019160208510610da4575b90601f859493920160051c01905b818110610d955750610adc565b60008155849350600101610d88565b9091508190610d7a565b015190508e80610a9c565b600087815282812093601f1916905b818110610e045750908460019594939210610deb575b505050811b018355610ab0565b015160001960f88460031b161c191690558e8080610dde565b92936020600181928786015181550195019301610dc8565b909150856000526020600020601f840160051c81019160208510610e65575b90601f859493920160051c01905b818110610e565750610a85565b60008155849350600101610e49565b909150
8190610e3b565b60846040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602a60248201527f50726f66696c6520646f6573206e6f74206578697374206f722068617320626560448201527f656e2064656c65746564000000000000000000000000000000000000000000006064820152fd5b5084600052600060205260ff60046040600020015460081c1615610a0c565b60c0810190811067ffffffffffffffff82111761058857604052565b90601f601f19910116810190811067ffffffffffffffff82111761058857604052565b81601f820112156102385780359067ffffffffffffffff82116105885760405192610f866020601f19601f8601160185610f2e565b8284526020838301011161023857816000926020809301838601378301015290565b60c07ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc820112610238576004359160243567ffffffffffffffff81116102385782610ff591600401610f51565b9160443567ffffffffffffffff8111610238578161101591600401610f51565b9160643560ff8116810361023857916084359067ffffffffffffffff82116102385761104391600401610f51565b9060a43580151581036102385790565b90600182811c9216801561109c575b602083101461106d57565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052602260045260246000fd5b91607f1691611062565b90604051918260008254926110ba84611053565b808452936001811690811561112857506001146110e1575b506110df92500383610f2e565b565b90506000929192526020600020906000915b81831061110c5750509060206110df92820101386110d2565b60209193508060019154838589010152019101909184926110f3565b602093506110df9592507fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0091501682840152151560051b820101386110d2565b919082519283825260005b848110611194575050601f19601f8460006020809697860101520116010190565b80602080928401015182828601015201611173565b9360ff6111cb6111df94610844608097959a999a60a08a5260a08a0190611168565b921660408601528482036060860152611168565b93151591015256fea2646970667358221220e734baef00a48587a6925ab9e9c2ba63acf5e71a194aeb1359347e94b1f78f8a64736f6c634300081b0033",
"deployedBytecode": "0x6080604052600436101561001257600080fd5b60003560e01c806328279308146109da578063985736ce1461087f578063c36fe3d6146107b5578063eb5339291461023d5763f08f4f641461005357600080fd5b346102385760207ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc36011261023857600435600060a060405161009581610f12565b6060815260606020820152826040820152606080820152826080820152015280600052600060205260ff60046040600020015460081c166101b45760005260006020526101726040600020604051906100ed82610f12565b6100f6816110a6565b8252610104600182016110a6565b906020830191825261019f60ff60028301541660408501908152600461012c600385016110a6565b936060870194855201549260ff6101856080880196828716151588528260a08a019760081c1615158752604051998a9960208b525160c060208c015260e08b0190611168565b9051601f198a83030160408b0152611168565b925116606087015251601f19868303016080870152611168565b9151151560a084015251151560c08301520390f35b60846040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602560248201527f50726f66696c65206e6f7420666f756e64206f7220686173206265656e20646560448201527f6c657465640000000000000000000000000000000000000000000000000000006064820152fd5b600080fd5b346102385761024b36610fa8565b908560009695939652600060205260ff60046040600020015460081c168015610797575b15610739576040519561028187610f12565b83875260208701858152604088019060ff831682526060890198848a526080810192861515845260a0820192600084528a60005260006020526040600020925180519067ffffffffffffffff82116105885781906102df8654611053565b601f81116106e6575b50602090601f831160011461068357600092610678575b50506000198260011b9260031b1c19161783555b518051600184019167ffffffffffffffff82116105885781906103368454611053565b601f8111610625575b50602090601f83116001146105c2576000926105b7575b50506000198260011b9260031b1c19161790555b60ff600283019151167fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00825416179055600381019951998a5167ffffffffffffffff8111610588576103bc8254611053565b601f8111610540575b5060209b601f82116001146104a8579261048c9492826004937fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a999760009261049d575b50506000198260011b9260031b1c19161790555b019151151560ff7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0084541691161782555115157fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00ff61ff00835492151560081b169116179055565b9b601f1982169c83600052816000209d60005b818110610c8d5750837fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a99979461048c9997946004976001951061050f575b505050811b019055610425565b929e8f83015181556001019e60200192602001610c3a565b826000526020600020601f830160051c81019160208410610ce3575b601f0160051c01905b818110610cd75750610b62565b60008155600101610cca565b9091508190610cc1565b015190508e80610af3565b600085815282812093601f1916905b818110610d435750908460019594939210610d2a575b505050811b019055610b07565b015160001960f88460031b161c191690558e8080610d1d565b92936020600181928786015181550195019301610d07565b909150836000526020600020601f840160051c81019160208510610da4575b90601f859493920160051c01905b818110610d955750610adc565b60008155849350600101610d88565b9091508190610d7a565b015190508e80610a9c565b600087815282812093601f1916905b818110610e045750908460019594939210610deb575b505050811b018355610ab0565b015160001960f88460031b161c191690558e8080610dde565b92936020600181928786015181550195019301610dc8565b909150856000526020600020601f840160051c81019160208510610e65575b90601f859493920160051c01905b818110610e565750610a85565b60008155849350600101610e49565b9091508190610e3b565b60846040517f08c379a0000000000000
00000000000000000000000000000000000000000000815260206004820152602a60248201527f50726f66696c6520646f6573206e6f74206578697374206f722068617320626560448201527f656e2064656c65746564000000000000000000000000000000000000000000006064820152fd5b5084600052600060205260ff60046040600020015460081c1615610a0c565b60c0810190811067ffffffffffffffff82111761058857604052565b90601f601f19910116810190811067ffffffffffffffff82111761058857604052565b81601f820112156102385780359067ffffffffffffffff82116105885760405192610f866020601f19601f8601160185610f2e565b8284526020838301011161023857816000926020809301838601378301015290565b60c07ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc820112610238576004359160243567ffffffffffffffff81116102385782610ff591600401610f51565b9160443567ffffffffffffffff8111610238578161101591600401610f51565b9160643560ff8116810361023857916084359067ffffffffffffffff82116102385761104391600401610f51565b9060a43580151581036102385790565b90600182811c9216801561109c575b602083101461106d57565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052602260045260246000fd5b91607f1691611062565b90604051918260008254926110ba84611053565b808452936001811690811561112857506001146110e1575b506110df92500383610f2e565b565b90506000929192526020600020906000915b81831061110c5750509060206110df92820101386110d2565b60209193508060019154838589010152019101909184926110f3565b602093506110df9592507fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0091501682840152151560051b820101386110d2565b919082519283825260005b848110611194575050601f19601f8460006020809697860101520116010190565b80602080928401015182828601015201611173565b9360ff6111cb6111df94610844608097959a999a60a08a5260a08a0190611168565b921660408601528482036060860152611168565b93151591015256fea2646970667358221220e734baef00a48587a6925ab9e9c2ba63acf5e71a194aeb1359347e94b1f78f8a64736f6c634300081b0033",
"linkReferences": {},
"deployedLinkReferences": {}
}
```
This ABI captures the functions, inputs, outputs, and events of the
UserData.sol smart contract. It defines how external applications can interact
with the contract by providing a structured definition for each callable
function and emitted event.
#### UserData contract ABI summary
| Events | Indexed Params | Non-Indexed Params |
| ---------------- | ------------------ | -------------------------------------------------------------------------------------------- |
| `ProfileCreated` | `userId (uint256)` | `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` |
| `ProfileUpdated` | `userId (uint256)` | `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` |
| `ProfileDeleted` | `userId (uint256)` | — |

| Functions | Inputs | Outputs |
| --------------- | ---------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| `createProfile` | `userId (uint256)`, `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` | — |
| `updateProfile` | `userId (uint256)`, `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` | — |
| `deleteProfile` | `userId (uint256)` | — |
| `getProfile` | `userId (uint256)` | Tuple: `{ name (string), email (string), age (uint8), country (string), isKYCApproved (bool), isDeleted (bool) }` |
| `profiles` | `userId (uint256)` | Tuple: `{ name (string), email (string), age (uint8), country (string), isKYCApproved (bool), isDeleted (bool) }` |
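As a quick illustration of consuming this ABI from an external application, here is a minimal read-only sketch using ethers.js (an assumption for illustration; the RPC endpoint, artifact path, and contract address are placeholders):
```typescript
import { ethers } from "ethers";
// The compiled Hardhat artifact shown above; the import path is illustrative
import artifact from "./UserData.json";

// Placeholders: substitute your node's JSON-RPC endpoint and deployed address
const provider = new ethers.JsonRpcProvider("https://your-node.settlemint.com");
const userData = new ethers.Contract(
  "0xYourDeployedContractAddress",
  artifact.abi,
  provider
);

// Read-only call: fetch the profile stored under userId 101
const profile = await userData.getProfile(101n);
console.log(profile.name, profile.email, profile.isKYCApproved);
```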
### 3. Add smart contract portal middleware to your application
Middleware acts as a bridge between your blockchain network and applications,
providing essential services like data indexing, API access, and event
monitoring. Before adding middleware, ensure you have an application and
blockchain node in place.
#### How to add middleware
**Navigate to application**
Navigate to the **application** where you want to add middleware.
**Access middleware section**
Click **middleware** in the left navigation, and then click **add a middleware**. This opens a form.
**Configure middleware**
1. Choose middleware type (graph or portal)
2. Choose a **middleware name**
3. Select the **blockchain node** (preferred option for portal) or **load balancer** (preferred option for the graph)
4. Configure deployment settings
5. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create a middleware:
```bash
# Get information about the command and all available options
settlemint platform create middleware --help
# Create a middleware
settlemint platform create middleware
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create middleware
const result = await client.middleware.create({
applicationUniqueName: "your-app-unique-name",
name: "my-middleware",
type: "SHARED",
interface: "HA_GRAPH", // Valid options: "HA_GRAPH" | "SMART_CONTRACT_PORTAL"
blockchainNodeUniqueName: "your-node-unique-name",
region: "EUROPE", // Required
provider: "GKE", // Required
size: "SMALL" // Valid options: "SMALL" | "MEDIUM" | "LARGE"
});
console.log('Middleware created:', result);
```
Get your access token from the Platform UI under User Settings → API Tokens.
#### Manage middleware
Navigate to your middleware and click **manage middleware** to:
* View middleware details and status
* Update configurations
* Monitor health
* Access endpoints
```bash
# List middlewares
settlemint platform list middlewares --application
```
```bash
# Get middleware details
settlemint platform read middleware
```
```typescript
// List middlewares
await client.middleware.list("your-app-unique-name");
```
```typescript
// Get middleware details
await client.middleware.read("middleware-unique-name");
```

You can upload or copy-paste the ABI. Note that if you upload the ABI, the file
name is used as the ABI name, so rename the ABI JSON file appropriately before
uploading.

Within a few minutes, a REST and GraphQL API layer is generated.

To update the ABIs of an existing smart contract portal middleware, navigate to
the middleware, go to the details page and click the **manage middleware**
button at the top right. Click **update ABIs** and a dialog will open. In this
dialog, upload the ABI file(s) you saved to your local filesystem in the
previous step.
### 4. How to configure REST API requests in the portal
To interact with your smart contract via the API portal, follow these steps:

#### Get the base URL
Navigate to the **connect** tab in the portal middleware to obtain the base API
URL. It will look something like:
`https://api-portal-affe9.gke-europe.settlemint.com/`
For exact endpoints, refer to the portal UI. An example endpoint might look like
this:
`https://api-portal-affe9.gke-europe.settlemint.com/api/user-smart-contract-abi/{address}/create-profile`
Here, `{address}` should be replaced with the deployed smart contract address on
the blockchain.
> You can find the deployed contract address in the `deployed_addresses.json`
> file located inside the `ignition/deployments` folder.
#### Sample request body
Here's an example JSON body for a smart contract function like `createProfile`:
```json
{
"from": "",
"gasLimit": "",
"gasPrice": "",
"simulate": true,
"metadata": {},
"input": {
"userId": "",
"name": "",
"email": "",
"age": 0,
"country": "",
"isKYCApproved": true
}
}
```
#### Field descriptions
* **`from`**: Public key of the wallet that will initiate the transaction.
Typically, this is the deployer's address. For advanced scenarios, this can be
a specific user's public address, depending on roles.
* **`gasLimit`**: Use a reasonably high value for zero-gas private networks. For
others, determine a realistic value through trial and error. You can fine-tune
this based on actual gas usage from previous transactions.
* **`gasPrice`**: Set to `0` for zero-gas networks, or specify an appropriate
value for gas-charging private or public networks.
* **`simulate`**: Leave as `true` for a dry run before sending actual
transactions.
* **`metadata`**: Can be left empty or with default values unless your
application requires it.
* **`input`**: Include all parameters required by the smart contract function
you are calling.
#### Authentication
Use your **application access token** as the API key for authentication. You can
generate this token from the **access tokens** section in your application
dashboard (left sidebar menu); it will look something like
**sm\_aat\_fd0fbe61cf102b6c**.
#### Expected response
If the request is valid, the API returns **200 OK** along with the
**transaction hash** in the response body. Otherwise, an error code with a
descriptive message is returned.
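Putting these pieces together, the request can be sent with a plain `fetch` call. This is a minimal sketch; the endpoint, header name, token, and input values are placeholders, so confirm the exact values in the portal's **connect** tab:
```typescript
// Minimal sketch of the REST call described above (Node 18+ has a global fetch).
// All values below are placeholders - verify them in the portal UI.
const response = await fetch(
  "https://api-portal-affe9.gke-europe.settlemint.com/api/user-smart-contract-abi/0xYourContractAddress/create-profile",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-auth-token": "sm_aat_...", // application access token
    },
    body: JSON.stringify({
      from: "0xYourWalletAddress",
      gasLimit: "4000000", // illustrative; tune for your network
      gasPrice: "0",       // zero-gas private network assumed
      simulate: true,
      metadata: {},
      input: {
        userId: "101",
        name: "Alice",
        email: "alice@example.com",
        age: 30,
        country: "BE",
        isKYCApproved: true,
      },
    }),
  }
);
// A valid request returns 200 OK with the transaction hash in the body
console.log(response.status, await response.json());
```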
### 5. How to configure GraphQL API requests in the portal

To query smart contract data using GraphQL in the SettleMint API portal,
navigate to the **GraphQL** tab in the portal interface. You will see a visual
GraphQL explorer that allows you to construct and test your queries easily. The
endpoint for GraphQL is provided under the **connect** tab, typically looking
like:
```
https://api-portal-affe9.gke-europe.settlemint.com/graphql
```
In the explorer, start by selecting the appropriate query object exposed in the
API, such as `UserSmartContractAbi`. You'll need to provide the `address`
parameter, which corresponds to the deployed smart contract address. This
address ensures that your request is directed to the correct smart contract
instance on-chain.
Once the address is entered, you can choose the function or field you want to
query. For example, selecting the `profiles` field and providing a `uint256` ID
(such as `"101"`) will retrieve the user profile associated with that ID. You
can then pick which fields of the profile you want to fetch, like `name`,
`email`, `age`, `country`, `isKYCApproved`, and `isDeleted`.
After you've built your query, hit the play button to execute it. If successful,
the response will appear on the right-hand panel, showing the structured result
returned from the smart contract. In this case, you might get a profile with a
name, email, country, age, and flags indicating whether the profile is deleted
or KYC-approved.
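For reference, the query built in the explorer corresponds to something like the following (the query object, arguments, and field names are generated from your ABI, so verify the exact shape in the explorer):
```graphql
{
  UserSmartContractAbi(address: "0xYourDeployedContractAddress") {
    profiles(userId: "101") {
      name
      email
      age
      country
      isKYCApproved
      isDeleted
    }
  }
}
```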
This intuitive interface allows developers to rapidly test GraphQL queries
without needing to write code or leave the portal. This can be used for
debugging, exploring contract data, and integrating smart contract logic into
frontend or backend systems using GraphQL.
Congratulations!
You have successfully deployed the smart contract API portal and generated APIs
to write data on chain.
From here you can proceed to setting up the graph middleware to index data and
get a GraphQL API layer for reading data stored on chain via smart contract
interactions.
file: ./content/docs/building-with-settlemint/evm-chains-guide/setup-code-studio.mdx
meta: {
"title": "Setup code studio",
"description": "Guide to setup code studio IDE to develop and deploy smart contracts and sub-graphs"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To start developing and deploying smart contracts on the SettleMint platform,
you'll first need to add code studio to your application. This provides you with
a full-featured web-based IDE, pre-configured for blockchain development using
tools like Hardhat, Foundry, and The Graph. Once added, you can use built-in
tasks to build, test, deploy, and index your smart contracts and subgraphs, all
within the same environment.
You can add code studio through the platform UI by selecting it as a dev tool
and linking it with a smart contract set and a template. Alternatively, you can
use the SDK CLI or SDK JS to programmatically create and manage smart contract
sets. These interfaces give you flexibility depending on whether you're working
from the console or integrating via scripts or automation.
After setup, you'll be able to customize your smart contracts directly within
the IDE. A task manager will guide you through building and deploying them to
local or SettleMint-hosted blockchain networks. You can also integrate subgraphs
for indexing and querying contract data using The Graph.
To speed up development, SettleMint offers a rich library of open-source smart
contract templates, from ERC standards to more complex business use cases. These
templates can be modified, extended, or used as-is, and you also have the option
to create and manage custom templates within your consortium for reuse across
projects.
## How to setup code studio and deploy smart contracts on the SettleMint platform
Code studio is SettleMint's fully integrated, web-based IDE built specifically
for blockchain development. It provides developers with a familiar Visual Studio
Code experience directly in the browser, pre-configured with essential tools
like Hardhat, Foundry, and The Graph. Code studio enables seamless development,
testing, deployment, and indexing of smart contracts and subgraphs, all within a
unified environment.
It eliminates the need for complex local setups, simplifies DevOps workflows,
and reduces time-to-market by combining infrastructure, templates, and
automation under one interface. By offering pre-built tasks, contract templates,
and GitHub integration, it solves the traditional challenges of fragmented
tooling, inconsistent environments, and steep setup requirements for web3
development.

Despite offering full configurability, code studio includes all essential
dependencies pre-installed, saving time and avoiding setup friction. It supports
extensions for formatting, linting, testing, and AI-assisted development,
mirroring the convenience of a local VS Code setup. Every component, from
contracts to testing and subgraph development, is wired into a well-structured,
maintainable codebase that is continuously updated and thoroughly tested to
align with the latest development standards. This makes it ideal for both rapid
prototyping and production-grade blockchain applications.

Smart contract sets allow you to incorporate **business logic** into your
application by deploying smart contracts that run on the blockchain. You can add
a smart contract set via different methods as part of your development workflow.
## IDE project structure
The EVM IDE project structure in code studio is thoughtfully organized to
support efficient smart contract development, testing, and deployment. Each
folder serves a specific purpose in the dApp development lifecycle, aligning
with industry-standard tools like Hardhat, Foundry, and The Graph.
| Folder | Description |
| --------------- | ------------------------------------------------------------------------------------------------- |
| `contracts/` | Contains Solidity smart contracts that define the core logic and business rules of the dApp. |
| `test/` | Holds test files. These can be written in **TypeScript** for Hardhat or **Solidity** for Foundry. |
| `script/` | Stores deployment and interaction scripts, often used to automate tasks like contract deployment. |
| `lib/` | Optional directory for external Solidity libraries or reusable modules to avoid code repetition. |
| `ignition/`     | Contains **Hardhat Ignition** configuration for defining declarative deployment plans.             |
| `out/` | Output folder used by **Foundry**, containing compiled contract artifacts like ABIs and bytecode. |
| `artifacts/` | Output folder used by **Hardhat**, similar to `out/`, containing build artifacts and metadata. |
| `subgraphs/` | Contains files for **The Graph** integration, schema, mappings, and manifest for data indexing. |
| `cache/` | Caching directory for Hardhat to improve build performance by avoiding redundant compilation. |
| `cache_forge/` | Caching directory for Foundry to speed up compilation and reuse outputs. |
| `node_modules/` | Contains installed npm packages and dependencies used in Hardhat or other JS-based tools. |
## Code studio task manager
The code studio IDE task manager acts as a centralized hub for running all
essential development scripts, giving developers a streamlined way to manage the
entire smart contract lifecycle. It also includes integrated SettleMint CLI
tasks for logging in and managing authenticated platform interactions, ensuring
that everything needed for blockchain development is accessible and executable
directly from within the IDE.
Below is a categorized table of tasks or scripts available with concise
explanations.
| Task | Tool | Description |
| -------------------------------------------- | -------------- | ------------------------------------------------------------------------ |
| SettleMint - Login | SettleMint CLI | Logs into the SettleMint platform via CLI for authenticated deployments. |
| Foundry - Build | Foundry | Compiles the smart contracts using Foundry. |
| Hardhat - Build | Hardhat | Compiles the smart contracts using Hardhat. |
| Foundry - Test | Foundry | Runs tests using Foundry's native testing framework. |
| Hardhat - Test | Hardhat | Executes tests using Hardhat's JavaScript-based test suite. |
| Foundry - Format | Foundry | Formats smart contract code for readability (optional). |
| Foundry - Start network | Foundry | Starts a local Foundry testnet environment. |
| Hardhat - Start network | Hardhat | Starts a local Hardhat network for JS-based testing. |
| Hardhat - Deploy to local network | Hardhat | Deploys compiled contracts to the local Hardhat network. |
| Hardhat - Reset & Deploy to local network | Hardhat | Resets the local chain state and redeploys contracts. |
| Hardhat - Deploy to platform network | Hardhat | Deploys contracts to a blockchain network hosted on SettleMint. |
| Hardhat - Reset & Deploy to platform network | Hardhat | Resets the platform network state and redeploys contracts. |
| The Graph - Codegen the subgraph types | The Graph CLI | Generates TypeScript types based on subgraph GraphQL schema. |
| The Graph - Build the subgraph | The Graph CLI | Compiles the subgraph for deployment to The Graph. |
| The Graph - Deploy or update the subgraph | The Graph CLI | Deploys or updates the subgraph on The Graph's hosted service. |
When using Hardhat Ignition for deploying smart contracts, the deployed contract
addresses are stored in the file
ignition/deployments/chain-CHAIN\_ID/deployed\_addresses.json. This file serves as
a reliable reference for all contracts deployed on a specific network. It maps
contract names to their respective blockchain addresses, making it easy to
retrieve addresses later for interactions, frontend integrations, or upgrades.
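For example, the file's contents look like this (the `UserDataModule#UserData` key is illustrative; Ignition keys each entry as `ModuleName#ContractName`):
```json
{
  "UserDataModule#UserData": "0x8b1544B8e0d21aef575Ce51e0c243c2D73C3C7B9"
}
```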
You must have an existing application before you add a smart contract set.
## How to add code studio
### Navigate to application
Navigate to the **application** where you want to add the smart contract set.
### Open dev tools
Open **dev tools** and click on **add a dev tool**.

### Select code studio
Select **code studio** as the dev tool type.

### Choose smart contract set
Then choose **smart contract set**.

### Pick a template
Pick a **template**; the code studio will load with your chosen smart contract template.

### Enter details
Click **continue** to enter details such as the dev tool name, user, and deployment plan.

### Confirm
Confirm the resource cost and click **confirm** to add the smart contract set.
You can now further configure and eventually deploy your smart contracts.
First, ensure you are authenticated:
```bash
settlemint login
```
You can create a smart contract set either on the platform or locally:
### Create on platform
Then create a smart contract set with the following command (refer to the
[CLI docs](/building-with-settlemint/15_dev-tools/1_SDK.md) for more details):
```bash
settlemint platform create smart-contract-set \
--application \
--template \
--deployment-plan
```
For example:
```bash
settlemint platform create smart-contract-set my-scset \
--application my-app \
--template default \
--deployment-plan starter
```
### Working with smart contract sets locally
You can also work with smart contract sets in your local development environment. This is useful for development and testing before deploying to the platform.
To create a smart contract set locally:
```bash
# Create a new smart contract set
settlemint scs create
# You'll see the SettleMint ASCII art and then be prompted:
✔ What is the name of your new SettleMint project? my awesome project
# Choose from available templates:
❯ ERC20 token
Empty typescript
Empty typescript with PDC
ERC1155 token
ERC20 token with crowdsale mechanism
ERC20 token with MetaTx
ERC721
# ... and more
```
Once created, you can use these commands to work with your local smart contract set:
```bash
settlemint scs -h # Show all available commands
# Main commands:
settlemint scs create # Create a new smart contract set
settlemint scs foundry # Foundry commands for building and testing
settlemint scs hardhat # Hardhat commands for building, testing and deploying
settlemint scs subgraph # Commands for managing TheGraph subgraphs
```
The scaffolded project includes everything you need to start developing smart contracts:
* Contract templates
* Testing framework
* Deployment scripts
* Development tools configuration
### Managing platform smart contract sets
Manage your platform smart contract sets with:
```bash
# List smart contract sets
settlemint platform list smart-contract-sets --application
# Read smart contract set details
settlemint platform read smart-contract-set
```
You can also add a smart contract set programmatically using the JS SDK. The API follows the same pattern as for applications and blockchain networks:
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN!,
instance: 'https://console.settlemint.com'
});
// Create a Smart Contract Set
const createSmartContractSet = async () => {
const result = await client.smartContractSet.create({
applicationUniqueName: "your-app", // Your application unique name
name: "my-smart-contract-set", // The smart contract set name
template: "default" // Template to use (choose from available templates)
});
console.log('Smart Contract Set created:', result);
};
// List Smart Contract Sets
const listSmartContractSets = async () => {
const sets = await client.smartContractSet.list("your-app");
console.log('Smart Contract Sets:', sets);
};
// Read Smart Contract Set details
const readSmartContractSet = async () => {
const details = await client.smartContractSet.read("smart-contract-set-unique-name");
console.log('Smart Contract Set details:', details);
};
```
Get your access token from the platform UI under **user settings → API tokens**.
All operations require that you have the necessary permissions in your
workspace.
## Customize smart contracts
You can customize your smart contracts using the built-in IDE. The smart
contract sets include a generative AI plugin to assist with development.
[Learn more about the AI plugin here.](./ai-plugin)
## Smart contract template library
SettleMint's smart contract templates serve as open-source, ready-to-use
foundations for blockchain application development, significantly accelerating
the deployment process. These templates enable users to quickly customize and
extend their blockchain applications, leveraging tested and community-enhanced
frameworks to reduce development time and accelerate market entry.
## Open-source smart contract templates under the MIT license
Benefit from the expertise of the blockchain community and trust in the
reliability of your smart contracts. These templates are vetted and used by
major enterprises and institutions, ensuring enhanced security and confidence in
your deployments.
The programming language used depends on the target protocol:
* **Solidity** for EVM-compatible networks
| Template | Description |
| ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
| [Empty](https://github.com/settlemint/solidity-empty) | Basic Solidity project scaffold with no predefined logic. Ideal for starting from scratch. |
| [ERC20 token](https://github.com/settlemint/solidity-token-erc20) | Standard ERC20 token implementation for fungible tokens. |
| [ERC1155 token](https://github.com/settlemint/solidity-token-erc1155) | Multi-token standard supporting both fungible and non-fungible tokens in a single contract. |
| [ERC20 token with MetaTx](https://github.com/settlemint/solidity-token-erc20-metatx) | ERC20 token with meta-transaction support to enable gasless transfers. |
| [Supplychain](https://github.com/settlemint/solidity-supplychain) | Token-based supply chain logic for tracking assets and ownership across stages. |
| [State Machine](https://github.com/settlemint/solidity-statemachine) | Contract template for building stateful workflows and processes using a finite state machine. |
| [ERC20 token with crowdsale mechanism](https://github.com/settlemint/solidity-token-erc20-crowdsale) | ERC20 token with built-in crowdsale logic for fundraising campaigns. |
| [ERC721](https://github.com/settlemint/solidity-token-erc721) | Standard implementation of ERC721 non-fungible tokens (NFTs). |
| [ERC721a](https://github.com/settlemint/solidity-token-erc721a) | Gas-optimized ERC721 implementation for efficient batch minting. |
| [ERC721 Generative Art](https://github.com/settlemint/solidity-token-erc721-generative-art) | NFT template for generating on-chain artwork using ERC721 standard. |
| [Soulbound Token](https://github.com/settlemint/solidity-token-soulbound) | Non-transferable token (SBT) representing identity or credentials. |
| [Diamond bond](https://github.com/settlemint/solidity-diamond-bond) | Example of a tokenized bond using modular smart contracts (Diamond pattern). |
| [Attestation Service](https://github.com/settlemint/solidity-attestation-service) | Service template for managing on-chain verifiable claims and attestations. |
## Create your own smart contract templates for your consortium
Within the self-managed SettleMint Platform, you can create and add your own
templates for use within your consortium. This fosters a collaborative
environment where templates can be reused and built upon, promoting innovation
and efficiency within your network.
To get started, visit:
[SettleMint GitHub Repository](https://github.com/settlemint/solidity-empty)
Congratulations!
You have successfully deployed the code studio. From here you can proceed to
developing and deploying smart contracts and indexing subgraphs.
file: ./content/docs/building-with-settlemint/evm-chains-guide/setup-graph-middleware.mdx
meta: {
"title": "Setup graph middleware",
"description": "Setup read middleware"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To set up a graph middleware in SettleMint, you'll begin by ensuring that your
application and blockchain node are ready. The graph middleware will serve as
your read layer, enabling powerful querying of on-chain events using a GraphQL
interface. This is particularly useful when you want to retrieve and analyze
historical smart contract data in a structured, filterable format.
First, you'll need to add the middleware itself. Head to the middleware section
inside your application on the SettleMint platform. Click add a middleware, and
select graph as the type. Assign a name, pick the blockchain node (where your
smart contract is deployed), configure the deployment settings, and confirm.
This action will provision the underlying infrastructure required to run your
subgraph.
Next, you will create the subgraph package in code studio. The subgraph folder
contains all the code and configuration required for indexing and querying your
smart contract's events. You will define a subgraph.config.json file that lists
the network (via chain ID), your contract address, and the data sources (i.e.,
smart contracts and associated modules) that the subgraph will index.
Inside the datasources folder, you will create a userdata.yaml manifest file
that outlines the smart contract address, ABI path, start block, and
event-handler mappings. This YAML file connects emitted events like
ProfileCreated, ProfileUpdated, and ProfileDeleted with specific AssemblyScript
functions that define how the data is processed and stored.
You will then define the schema in userdata.gql.json. This is your GraphQL
schema, which defines the structure of your indexed data. Entities like
UserProfile, ProfileCreated, and ProfileUpdated are defined here, each with the
fields to be stored and queried later via GraphQL.
Once the schema is ready, you will implement the mapping logic in userdata.ts,
which listens for emitted events and updates the subgraph's entities
accordingly. A helper file inside the fetch directory will provide utility logic
to create or retrieve entities without code repetition.
After writing all files, you will run the codegen, build, and deploy scripts
using the provided task buttons in code studio. These scripts will compile your
schema and mapping into WebAssembly (WASM), bundle it for deployment, and push
it to the graph middleware node.
Once deployed, you will be able to open the graph middleware's GraphQL explorer
and run queries against your indexed data. You can query by ID or use the plural
form to get a list of entries. This enables your application or analytics layer
to fetch historical state data in a fast and reliable way.
## How to setup graph middleware and API portal in the SettleMint platform
Middleware acts as a bridge between your blockchain network and applications,
providing essential services like data indexing, API access, and event
monitoring. Before adding middleware, ensure you have an application and
blockchain node in place.
### How to add middleware

**Navigate to application**
Navigate to the **application** where you want to add middleware.
**Access middleware section**
Click **middleware** in the left navigation, and then click **add a middleware**. This opens a form.
**Configure middleware**
1. Choose middleware type (graph or portal)
2. Choose a **middleware name**
3. Select the **blockchain node** (preferred option for portal) or **load balancer** (preferred option for the graph)
4. Configure deployment settings
5. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create a middleware:
```bash
# Get information about the command and all available options
settlemint platform create middleware --help
# Create a middleware
settlemint platform create middleware
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create middleware
const result = await client.middleware.create({
applicationUniqueName: "your-app-unique-name",
name: "my-middleware",
type: "SHARED",
interface: "HA_GRAPH", // Valid options: "HA_GRAPH" | "SMART_CONTRACT_PORTAL"
blockchainNodeUniqueName: "your-node-unique-name",
region: "EUROPE", // Required
provider: "GKE", // Required
size: "SMALL" // Valid options: "SMALL" | "MEDIUM" | "LARGE"
});
console.log('Middleware created:', result);
```
Get your access token from the Platform UI under User Settings → API Tokens.
### Manage middleware
Navigate to your middleware and click **manage middleware** to:
* View middleware details and status
* Update configurations
* Monitor health
* Access endpoints
```bash
# List middlewares
settlemint platform list middlewares --application
```
```bash
# Get middleware details
settlemint platform read middleware
```
```typescript
// List middlewares
await client.middleware.list("your-app-unique-name");
```
```typescript
// Get middleware details
await client.middleware.read("middleware-unique-name");
```
## Subgraph folder structure in the code studio IDE
```bash
subgraph/
│
├── subgraph.config.json
│
├── datasources/
│ ├── mycontract.gql.json
│ ├── mycontract.ts
│ └── mycontract.yaml
│
└── fetch/
└── mycontract.ts
```
## Subgraph deployment process
### 1. Collect constants needed
Find the chain ID of the network from the ignition > deployments folder name
(chain-ID) or from the platform UI at blockchain networks > selected network >
details page; it will be something like **47440**.
Locate the contract address; the deployed contract address is stored in the
deployed\_addresses.json file located in the ignition > deployments folder.
### 2. Building subgraph.config.json file
This file is the foundational configuration for your subgraph. It defines how
and where the subgraph will be generated and which contracts it will be
tracking. Think of it as the control panel that the subgraph compiler reads to
understand what contracts to index, where to start indexing from (which block),
and which folder contains the relevant configurations (e.g., YAML manifest,
mappings, schema, etc.).
Each object in the datasources array represents a separate contract. You specify
the contract's name, address, the block number at which the indexer should begin
listening, and the path to the module folder (which holds the YAML manifest and
mapping logic). This file is essential when working with Graph CLI or SDKs for
compiling and deploying subgraphs.
When writing this file from scratch, you will need to gather the deployed
contract address, decide the indexing start block (can be 0 or a specific block
to save resources), and organize contract-related files in a logical module
folder.
```json
{
"output": "generated/scs.",
"chain": "44819",
"datasources": [
{
"name": "UserData",
"address": "0x8b1544B8e0d21aef575Ce51e0c243c2D73C3C7B9",
"startBlock": 0,
"module": ["userdata"]
}
]
}
```
### 3. Create userdata.yaml file
This is the YAML manifest file that tells the subgraph how to interact with a
specific smart contract on-chain. It defines the contract's ABI, address, the
events to listen to, and the mapping logic that should be triggered for each
event.
The structure must follow a strict YAML format; wrong indentation or a missing
property can break the subgraph. Under the source section, you provide the
contract's address, the ABI name, and the block from which indexing should
begin.
The mapping section details how the subgraph handles events. It specifies the
API version, programming language (AssemblyScript), the entities it will touch,
and the path to the mapping file. Each eventHandler entry pairs an event
signature (from the contract) with a function that will process it. When writing
this from scratch, ensure that all event signatures exactly match those in your
contract (parameter order and types must be accurate), and align them with the
corresponding handler function names.
```yaml
- kind: ethereum/contract
name: {id}
network: {chain}
source:
address: "{address}"
abi: UserData
startBlock: {startBlock}
mapping:
kind: ethereum/events
apiVersion: 0.0.5
language: wasm/assemblyscript
entities:
- UserProfile
- ProfileCreated
- ProfileUpdated
- ProfileDeleted
abis:
- name: UserData
file: "{root}/out/UserData.sol/UserData.json"
eventHandlers:
- event: ProfileCreated(indexed uint256,string,string,uint8,string,bool)
handler: handleProfileCreated
- event: ProfileUpdated(indexed uint256,string,string,uint8,string,bool)
handler: handleProfileUpdated
- event: ProfileDeleted(indexed uint256)
handler: handleProfileDeleted
file: {file}
```
### 4. Create userdata.gql.json file
This JSON file defines the GraphQL schema that powers your subgraph's data
structure. It outlines the shape of your data, which entities will be stored in
the Graph Node's underlying database, and the fields each entity will expose to
users via GraphQL queries.
Every event-based entity (like ProfileCreated, ProfileUpdated, ProfileDeleted)
is linked to the main entity (here, UserProfile) to maintain a historical audit
trail. Each entity must have an id field of type ID!, which serves as the
primary key.
You then define all other fields with their data types and nullability. When
writing this schema, think in terms of how data will be queried: What
information will consumers of the subgraph want to retrieve? The names and types
must exactly reflect the logic in your mapping files. For reuse across projects,
just align this schema with the domain model of your contract.
```json
[
{
"name": "UserProfile",
"description": "Represents the current state of a user's profile.",
"fields": [
{ "name": "id", "type": "ID!" },
{ "name": "name", "type": "String!" },
{ "name": "email", "type": "String!" },
{ "name": "age", "type": "Int!" },
{ "name": "country", "type": "String!" },
{ "name": "isKYCApproved", "type": "Boolean!" },
{ "name": "isDeleted", "type": "Boolean!" }
]
},
{
"name": "ProfileCreated",
"description": "Captures the event when a new user profile is created.",
"fields": [
{ "name": "id", "type": "ID!" },
{ "name": "userId", "type": "BigInt!" },
{ "name": "userProfile", "type": "UserProfile!" }
]
},
{
"name": "ProfileUpdated",
"description": "Captures the event when an existing user profile is updated.",
"fields": [
{ "name": "id", "type": "ID!" },
{ "name": "userId", "type": "BigInt!" },
{ "name": "userProfile", "type": "UserProfile!" }
]
},
{
"name": "ProfileDeleted",
"description": "Captures the event when a user profile is soft-deleted.",
"fields": [
{ "name": "id", "type": "ID!" },
{ "name": "userId", "type": "BigInt!" },
{ "name": "userProfile", "type": "UserProfile!" }
]
}
]
```
### 5. Create userdata.ts file
This file contains the event handler functions written in AssemblyScript. It
directly responds to the events emitted by your smart contract and updates the
subgraph's store accordingly. Each exported function matches an event in the
YAML manifest. Inside each function, the handler builds a unique ID for the
event (usually combining the transaction hash and log index), processes the
event payload, and updates or creates the relevant entity (here, UserProfile).
The logic can include custom processing like formatting values, filtering, or
even transforming data types. This file is where your business logic resides,
similar to an event-driven backend microservice. You should keep this file
modular and focused, avoiding code repetition by calling reusable helper
functions like fetchUserProfile. When writing this from scratch, always import
the generated event types and schema entities, and handle edge cases like entity
non-existence or inconsistent values.
```ts
import { BigInt } from "@graphprotocol/graph-ts";
import {
ProfileCreated as ProfileCreatedEvent,
ProfileUpdated as ProfileUpdatedEvent,
ProfileDeleted as ProfileDeletedEvent,
} from "../../generated/generated/userdata/UserData";
import {
UserProfile,
ProfileCreated,
ProfileUpdated,
ProfileDeleted,
} from "../../generated/generated/schema";
import { fetchUserProfile } from "../fetch/userdata";
export function handleProfileCreated(event: ProfileCreatedEvent): void {
// Generate a unique event ID using transaction hash and log index
let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
let entity = new ProfileCreated(id);
entity.userId = event.params.userId;
// Fetch or create the UserProfile entity
let profile = fetchUserProfile(event.params.userId);
profile.name = event.params.name;
profile.email = event.params.email;
profile.age = event.params.age;
profile.country = event.params.country;
profile.isKYCApproved = event.params.isKYCApproved;
profile.isDeleted = false;
profile.save();
// Link the event entity to the user profile and save
entity.userProfile = profile.id;
entity.save();
}
export function handleProfileUpdated(event: ProfileUpdatedEvent): void {
let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
let entity = new ProfileUpdated(id);
entity.userId = event.params.userId;
// Retrieve and update the existing UserProfile entity
let profile = fetchUserProfile(event.params.userId);
profile.name = event.params.name;
profile.email = event.params.email;
profile.age = event.params.age;
profile.country = event.params.country;
profile.isKYCApproved = event.params.isKYCApproved;
profile.isDeleted = false;
profile.save();
entity.userProfile = profile.id;
entity.save();
}
export function handleProfileDeleted(event: ProfileDeletedEvent): void {
let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
let entity = new ProfileDeleted(id);
entity.userId = event.params.userId;
// Retrieve the UserProfile entity and mark it as deleted
let profile = fetchUserProfile(event.params.userId);
profile.isDeleted = true;
profile.save();
entity.userProfile = profile.id;
entity.save();
}
```
### 6. Create another userdata.ts in the fetch folder
This is a helper utility designed to avoid redundancy in your mapping file. It
abstracts the logic of either loading an existing entity or creating a new one
if it doesn't exist.
It enhances reusability and reduces boilerplate in each handler function. The
naming convention of this file usually mirrors the module or entity it's
associated with (e.g., fetch/userdata.ts).
The logic inside the function uses the userId (or other unique identifier) as a
string key and ensures that all required fields have a default value. When
writing this from scratch, ensure every field in your GraphQL schema has an
initialized value to prevent errors during Graph Node processing.
```ts
import { BigInt } from "@graphprotocol/graph-ts";
import { UserProfile } from "../../generated/generated/schema";
/**
* Fetches a UserProfile entity using the given userId.
* If it does not exist, a new UserProfile entity is created with default values.
*
* @param userId - The user ID as a BigInt.
* @returns The UserProfile entity.
*/
export function fetchUserProfile(userId: BigInt): UserProfile {
let id = userId.toString();
let user = UserProfile.load(id);
if (!user) {
user = new UserProfile(id);
user.name = "";
user.email = "";
user.age = 0;
user.country = "";
user.isKYCApproved = false;
user.isDeleted = false;
}
return user;
}
```
```mermaid
flowchart TD
%% --- Inputs ---
F1["out/UserData.json (ABI from compiler) "]:::tooling
F2["deployed_addresses.json (Deployed contract address) "]:::tooling
F3["deployments/[chain-id] (Defines network chain ID) "]:::tooling
%% --- Configuration Files ---
A1["1 - subgraph.config.json - Declares network, output, and datasources "]:::config
A2["2 - userdata.yaml - Sets ABI, contract address, event handlers "]:::config
%% --- Contract & Events ---
B1["UserData.sol - Smart contract with profile lifecycle logic "]:::contract
B2["Events: ProfileCreated, ProfileUpdated, ProfileDeleted "]:::event
%% --- Mappings & Helpers ---
C1["3 - userdata.ts - Mapping logic to handle events and update entities "]:::mapping
C2["4 - fetch/userdata.ts - Loads or creates UserProfile entity "]:::helper
%% --- Schema & Storage ---
D1["5 - userdata.gql.json - GraphQL schema defining types and relationships"]:::schema
D2["Graph Node DB - Stores UserProfile and events, queryable via GraphQL "]:::db
%% --- API Layer ---
E1["GraphQL API - Exposes indexed data to dApps and dashboards "]:::api
%% --- Connections ---
F1 --> A2
F2 --> A1
F3 --> A1
A1 --> A2
A2 --> B1
B1 --> B2
B2 --> C1
A2 --> C1
C1 --> C2
C1 --> D2
D1 --> D2
D2 --> E1
%% --- Styling ---
classDef config fill:#D0EBFF,stroke:#1E40AF,stroke-width:1px
classDef mapping fill:#FEF3C7,stroke:#B45309,stroke-width:1px
classDef schema fill:#E0F2FE,stroke:#0369A1,stroke-width:1px
classDef contract fill:#FECACA,stroke:#B91C1C,stroke-width:1px
classDef event fill:#FCD34D,stroke:#92400E,stroke-width:1px
classDef db fill:#DCFCE7,stroke:#15803D,stroke-width:1px
classDef api fill:#E9D5FF,stroke:#7C3AED,stroke-width:1px
classDef abi fill:#F3E8FF,stroke:#9333EA,stroke-width:1px
classDef helper fill:#F5F5F4,stroke:#3F3F46,stroke-width:1px
classDef tooling fill:#F0F9FF,stroke:#0284C7,stroke-width:1px
```
## Codegen, build and deploy subgraph
### Run the codegen script using the task manager of the IDE

### Run the graph build script using the task manager of the IDE

### Run the graph deploy script using the task manager of the IDE
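Under the hood, these IDE tasks broadly correspond to the standard Graph CLI commands below (shown for orientation only; the tasks wire in the correct endpoints and arguments for you):
```bash
graph codegen   # generate AssemblyScript types from the schema and ABIs
graph build     # compile the mappings and schema to WebAssembly
graph deploy --node <graph-middleware-endpoint> <subgraph-name>   # push to the middleware
```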

### Why we see a duplication in the GraphQL schema
In The Graph's autogenerated schema, each entity is provided with two types of
queries by default:
* **Single-Entity Query:** `userProfile(id: ID!): UserProfile` *Fetches a single
`UserProfile` by its unique ID.*
* **Multi-Entity Query:** `userProfiles(...): [UserProfile]` *Fetches a list of
`UserProfile` entities, with optional filters to refine the results.*
Why this duplication exists:
* **Flexibility in Data Access:** By offering both single-entity and
multi-entity queries, The Graph allows you to choose the most efficient way to
access your data. If you know the exact ID, you can use the single query for a
quick lookup. If you need to display or analyze a collection of records, the
multi-entity query is available.
* **Optimized Performance:** Retrieving a specific record via the single-entity
query avoids unnecessary overhead that comes with filtering through a list,
ensuring more efficient data access when the unique identifier is known.
* **Catering to Different Use Cases:** Different parts of your application may
require different query types. Detailed views might need a single record
(using userProfile), while list views benefit from the filtering and
pagination offered by userProfiles.
* **Consistency Across the Schema:** Generating both queries for every entity
ensures a consistent API design, which simplifies development by providing a
predictable pattern for data access regardless of the entity.
### Graph middleware - querying data
We can query a single record by its ID, or use the plural query to return all
entries, as shown in the sketch below.
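For example (the selected fields are illustrative; adjust them to your schema):
```graphql
{
  # Single-entity lookup by ID
  userProfile(id: "101") {
    name
    email
    isKYCApproved
  }
  # Plural query with optional pagination
  userProfiles(first: 10) {
    id
    name
    country
    isDeleted
  }
}
```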

Congratulations!
You have successfully configured the graph middleware and deployed subgraphs to
enable smart contract indexing. With this, you have both read and write
middleware for your smart contracts.
This marks the end of the core Web3 development. From here, we will proceed to
adding off-chain database and storage options to build a holistic backend and
storage layer for our application.
file: ./content/docs/building-with-settlemint/evm-chains-guide/setup-offchain-database.mdx
meta: {
"title": "Setup off-chain database",
"description": "Add Hasura backend-as-a-service with off-chain database"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To integrate off-chain storage into your blockchain application, you should
begin by adding Hasura as a backend-as-a-service via SettleMint. This will
provision a fully managed PostgreSQL database, paired with a real-time
GraphQL API layer. It enables you to manage non-critical or frequently
updated data that doesn't need to live on-chain, without compromising
performance or flexibility.
Start by navigating to your application and opening the integration tools
section. Click on add an integration tool, select Hasura, and follow the
steps to choose a name, provider, region, and resource plan. Once deployed,
a dedicated Hasura instance will be available, complete with its own admin
console, GraphQL API, and Postgres connection string. You can manage and
monitor the instance from the same interface.
Once Hasura is set up, you can define your database schema by creating
tables and relationships under the data tab. You can add, modify, and delete
rows directly from the console, or connect to the database using a
PostgreSQL client or code. Every schema and table you define becomes
instantly queryable using the GraphQL API. The API tab will auto-generate
queries and mutations, and also allow you to derive REST endpoints or export
code snippets for frontend/backend use.
For custom business logic, you can implement actions, which are HTTP
handlers triggered by GraphQL mutations. These are useful for data
validation, enrichment, or chaining smart contract calls. If you want to
respond to database changes in real-time, use event triggers to invoke
webhooks when specific inserts, updates, or deletions happen. For recurring
jobs, cron triggers can invoke workflows on a schedule, and one-off
scheduled events allow precision control over future events.
Authentication and authorization can be finely controlled through role-based
access rules. Hasura allows you to enforce row-level permissions and
restrict query types based on user roles. To ensure secure API access, use
the Hasura admin secret and your application access token, both available
from the connect tab of your Hasura console.
You'll also have the option to connect to the Hasura PostgreSQL instance
directly using the connection string. This is useful for running SQL
scripts, performing migrations, or executing batch jobs. Whether you're
using a Node.js backend or a command-line tool like psql, your Hasura
database acts like any standard PostgreSQL instance, with enterprise-grade
reliability.
Backups are easy to configure using the pg\_dump utility or via the Hasura
CLI. You can export both your database data and metadata, and restore them
in new environments as needed. Use hasura metadata export to get a full
snapshot of your permissions, tracked tables, actions, and relationships.
Then use hasura metadata apply or hasura metadata reload to rehydrate or
sync a new instance.
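As a quick sketch of that backup flow (the connection string is a placeholder;
take the real one from the **connect** tab of your Hasura console):
```bash
# Export a snapshot of tracked tables, permissions, actions and relationships
hasura metadata export

# Re-apply or refresh the metadata on a new or restored instance
hasura metadata apply
hasura metadata reload

# Back up the underlying database itself with pg_dump
pg_dump "postgres://user:password@host:5432/dbname" > backup.sql
```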
By combining Hasura's flexibility with the immutability of your on-chain
smart contracts, you will be able to design a clean hybrid architecture,
critical operations are stored securely on-chain, while scalable, queryable,
and user-driven data remains off-chain. This setup dramatically improves
user experience, simplifies front-end development, and keeps infrastructure
costs under control.
Many dApps need more than just decentralized tools to build an end-to-end
solution. The SettleMint Hasura SDK provides a seamless way to interact with
Hasura GraphQL APIs for managing application data.

## Need for an on-chain and off-chain data architecture
In blockchain-based applications, not all data needs to, or should, reside
on-chain. While critical state changes, token ownerships, or verifiable proofs
are best kept immutable and transparent on a blockchain, a large portion of
application data such as user profiles, analytics, logs, metadata, and UI-driven
state is better suited to an off-chain data store. Storing everything on-chain
is neither cost-effective nor performance-friendly. On-chain data is expensive
to store and slow to query for complex front-end or dashboard use cases.
This is where a **hybrid architecture** becomes essential. In such an approach,
data is partitioned based on its importance and usage:
* **On-chain layer** serves as the source of truth for verifiable,
consensus-driven actions like token transfers, proofs, and governance.
* **Off-chain layer** handles high-volume, user-generated, or fast-changing data
that benefits from relational structure, rich queries, and low latency.
This model provides the best of both worlds: **immutability and trust from
blockchain**, and **speed, flexibility, and developer-friendliness from
traditional databases**.
## How Hasura on SettleMint supports this architecture
SettleMint offers Hasura as a Backend-as-a-Service (BaaS), tightly integrated
into its low-code blockchain development stack. Hasura provides a
high-performance, real-time GraphQL API layer on top of a PostgreSQL database,
and allows developers to instantly query, filter, and subscribe to changes in
the data without writing custom backend logic.
### Key capabilities of Hasura on SettleMint
* A fully managed **PostgreSQL database** is provisioned automatically with each
Hasura instance.
* Hasura auto-generates a powerful and expressive **GraphQL API** for all the
tables and relationships defined in the database.
* It allows **integration with external databases** or REST/GraphQL services,
making it possible to unify multiple data sources behind one GraphQL endpoint.
* **Role-based access control** ensures secure data access aligned with business
logic and user permissions.
## Benefits of using Hasura in a blockchain project
Hasura is especially useful for building interfaces, dashboards, and off-chain
tools in blockchain applications. Developers can use it to:
* Store non-critical or frequently updated data like user preferences, audit
logs, or API call metadata.
* Power admin panels or reporting dashboards with complex filtering, sorting,
and aggregation capabilities.
* Perform fast and reliable queries without the overhead of smart contract reads
or event processing.
* Sync or mirror blockchain data into Postgres via indexing services (like The
Graph or custom workers), and build additional logic around it.
For example, while the verification of a credential or the execution of a
transaction happens on-chain, the user's profile details, usage history, or
interactions with the platform can be managed off-chain using Hasura. This
results in a responsive and scalable user experience, without compromising on
the core security and trust guarantees of blockchain.
## Off-chain database use cases in blockchain applications
| Category | Use Cases |
| ------------------------------- | ------------------------------------------------------------------------------------------------ |
| **User Management & Metadata** | User profiles, KYC/AML data, Recovery info, Social links, Preferences, Session tokens |
| **Dashboards & Reporting** | Admin panels, KPIs, Filters & aggregation, Charts, Audit logs, Time-series insights |
| **App Logic & State** | Workflow states, Business rules, Off-chain approvals, Drafts, Automation triggers, API call logs |
| **User Content** | Blog posts, Comments, Ratings, Articles, Feedback, Forum threads, Attachments |
| **External/API Data** | Oracle/cache data, API mirrors, Off-chain credentials, IoT inputs, External system sync |
| **Historical & Time Data** | Snapshots, Transition logs, Archived state, Event sync history, Audit trails |
| **Content & Config** | UI content, Static pages, Themes, Menus, Feature flags, Editable app config |
| **UX & Transactions** | Pending tx queues, Gas estimates, Slippage data, NFT views, Pre-submit staging, Local metadata |
| **Admin & Dev Tools** | Schema maps, Dev notes, Admin dashboards, Background jobs, Flagged items |
| **Security & Access** | Role bindings, Access logs, Encrypted fields, Policy metadata, Permissions history |
| **Hybrid & Indexing** | Enriched on-chain data, Indexed events, ID mapping, Postgres mirroring, ETL-ready layers |
| **E-commerce / Token Economy** | Product catalog, Shopping cart, Delivery tracking, Disputes, Refund metadata |
| **Education / DAO / Community** | Learning progress, Badges, Voting drafts, Moderation flags, Contribution history |
| **Data Ops & Recovery** | Data backups, Exportable datasets, Disaster recovery layer, Compliance archiving |
## Add Hasura
### Navigate to application
Navigate to the **application** where you want to add Hasura.
### Access integration tools
Click **integration tools** in the left navigation, and then click **add an integration tool**. This opens a form.
### Configure Hasura
1. Select **Hasura**, and click **continue**
2. Choose a **name** for your backend-as-a-service
3. Choose a deployment plan (provider, region, resource pack)
4. Click **confirm** to add it
First ensure you're authenticated:
```bash
settlemint login
```
Create Hasura instance:
```bash
settlemint platform create integration-tool hasura
# Get information about the command and all available options
settlemint platform create integration-tool hasura --help
```
For a full example of how to work with Hasura using the SDK, see the [Hasura SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-hasura#api-reference).
The SDK enables you to easily query and mutate data stored in your SettleMint-powered PostgreSQL databases through a type-safe GraphQL interface. For detailed API reference, check out the [Hasura SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/hasura).
## Some basic features
* Under the data subtab you can create an arbitrary number of **schemas**. A
schema is a collection of tables.
* In a schema you can create **tables**, choose which columns you want and
define relations and indexes.
* You can add, edit and delete **data** in these columns as well.
[Learn more here](https://hasura.io/docs/2.0/schema/postgres/tables/)
Any table you make is instantly visible in the **API subtab**. Note that by
using the **REST and derive action buttons** you can convert queries into REST
endpoints if that fits your application better. Using the **code exporter
button** you can get the actual code snippets you can use in your application or
the integration studio.
A bit more advanced are **actions**. Actions are custom queries or mutations
that are resolved via HTTP handlers. Actions can be used to carry out complex
data validations, data enrichment from external sources or execute just about
any custom business logic. Actions can be kickstarted by using the **derive
action button** in the **API subtab**.
[Learn more here.](https://hasura.io/docs/2.0/actions/overview/)
If you need to execute tasks based on changes to your database you can leverage
**events**. An **event trigger** atomically captures events (insert, update,
delete) on a specified table and then reliably calls a HTTP webhook to run some
custom business logic.
[Learn more here.](https://hasura.io/docs/latest/graphql/core/event-triggers/index.html)
**Cron triggers** can be used to reliably trigger HTTP endpoints to run some
custom business logic periodically based on a cron schedule.
**One-off scheduled events** are individual events that can be scheduled to
reliably trigger a HTTP webhook to run some custom business logic at a
particular timestamp.
**Access to your database** can be controlled all the way down to the row level
using the authentication and authorization options available in Hasura.
[Learn more here.](https://hasura.io/docs/2.0/auth/overview/)
This is of course on top of the
[application access tokens](/platform-components/security-and-authentication/application-access-tokens)
and
[personal access tokens](/platform-components/security-and-authentication/personal-access-tokens)
in the platform, which you can use to close off access to the entire API.
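To make the layering concrete, here is a minimal sketch of a role-scoped request, assuming a `user` role with a row-level permission has been configured in the console; when the admin secret is present, Hasura lets you impersonate a role via the `x-hasura-role` and `x-hasura-user-id` session headers (the endpoint, secret, and user id below are placeholders):

```typescript
// Role-scoped request sketch; endpoint, secret, and user id are placeholders.
const response = await fetch("YOUR_HASURA_ENDPOINT", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-hasura-admin-secret": "YOUR_HASURA_ADMIN_SECRET",
    "x-hasura-role": "user",            // evaluate permissions as this role
    "x-hasura-user-id": "some-user-id", // session variable used by the row filter
  },
  body: JSON.stringify({
    // Only rows allowed by the role's row-level permission are returned.
    query: `{ verification { id } }`,
  }),
});
console.log(await response.json());
```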
## Usage examples
You can interact with your Hasura database in two ways: through the GraphQL API
(recommended) or directly via PostgreSQL connection.
Example using the GraphQL API:
```javascript
import fetch from 'node-fetch';
// Configure your authentication details
const HASURA_ENDPOINT = "YOUR_HASURA_ENDPOINT";
const HASURA_ADMIN_SECRET = "YOUR_HASURA_ADMIN_SECRET"; // Found in the "Connect" tab of Hasura console
const APP_ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN"; // Generated following the Application Access Tokens guide
// Reusable function to make GraphQL requests
async function fetchGraphQL(operationsDoc, operationName, variables) {
try {
const result = await fetch(
HASURA_ENDPOINT,
{
method: "POST",
headers: {
'Content-Type': 'application/json',
'x-hasura-admin-secret': HASURA_ADMIN_SECRET,
'x-auth-token': APP_ACCESS_TOKEN
},
body: JSON.stringify({
query: operationsDoc,
variables: variables,
operationName: operationName
})
}
);
if (!result.ok) {
const text = await result.text();
throw new Error(`HTTP error! status: ${result.status}, body: ${text}`);
}
return await result.json();
} catch (error) {
console.error('Request failed:', error);
throw error;
}
}
// Query to fetch verification records
const operationsDoc = `
query MyQuery {
verification {
id
}
}
`;
// Mutation to insert a new verification record
const insertOperationDoc = `
mutation InsertVerification($name: String!, $status: String!) {
insert_verification_one(object: {name: $name, status: $status}) {
id
name
status
}
}
`;
// Function to fetch verification records
async function main() {
try {
const { errors, data } = await fetchGraphQL(operationsDoc, "MyQuery", {});
if (errors) {
console.error('GraphQL Errors:', errors);
return;
}
console.log('Data:', data);
} catch (error) {
console.error('Failed:', error);
}
}
// Function to insert a new verification record
async function insertWithGraphQL() {
try {
const { errors, data } = await fetchGraphQL(
insertOperationDoc,
"InsertVerification",
{
name: "Test User",
status: "pending"
}
);
if (errors) {
console.error('GraphQL Errors:', errors);
return;
}
console.log('Inserted Data:', data);
} catch (error) {
console.error('Failed:', error);
}
}
// Execute both query and mutation
main();
insertWithGraphQL();
```
Example using a direct PostgreSQL connection:
```javascript
import pkg from 'pg';
const { Pool } = pkg;
// Initialize PostgreSQL connection (get connection string from Hasura console -> "Connect" tab)
const pool = new Pool({
connectionString: 'YOUR_POSTGRES_CONNECTION_STRING'
});
// Simple query to read all records from verification table
const readData = async () => {
const query = 'SELECT * FROM verification';
const result = await pool.query(query);
console.log('Current Data:', result.rows);
};
// Insert a new verification record with sample data
const insertData = async () => {
const query = `
INSERT INTO verification (id, identifier, value, created_at, expires_at)
VALUES ($1, $2, $3, $4, $5)
RETURNING *`;
// Sample values - modify according to your needs
const values = [
'test-id-123',
'test-identifier',
'test-value',
new Date(),
new Date(Date.now() + 24 * 60 * 60 * 1000) // Sets expiry to 24h from now
];
const result = await pool.query(query, values);
console.log('Inserted:', result.rows[0]);
};
// Update an existing record by ID
const updateData = async () => {
const query = `
UPDATE verification
SET value = $1, updated_at = $2
WHERE id = $3
RETURNING *`;
const values = ['updated-value', new Date(), 'test-id-123'];
const result = await pool.query(query, values);
console.log('Updated:', result.rows[0]);
};
// Execute all operations in sequence
async function main() {
try {
await readData();
await insertData();
await updateData();
await readData();
} finally {
await pool.end(); // Close database connection
}
}
main();
```
## Hasura PostgreSQL database access and connection

For GraphQL API:
1. **Hasura Admin Secret**: Found in the "connect" tab of Hasura console
2. **Application Access Token**: Generate this by following our
[Application Access Tokens guide](/building-with-settlemint/application-access-tokens)
For PostgreSQL:
1. **PostgreSQL Connection String**: Found in the "connect" tab of Hasura
console under "Database URL"
Always keep your credentials secure and never expose them in client-side code.
Use environment variables or a secure configuration management system in
production environments.
### Understanding the PostgreSQL connection string
`postgresql://hasura-f1cd9:0c510604a378d348e7ed@p2p.gke-europe.settlemint.com:30787/hasura-f1cd9`
Here's how it's broken down:
* **Protocol**: `postgresql://`\
Indicates the connection type: a PostgreSQL database over TCP.
* **Username**: `hasura-f1cd9`\
The database username used for authentication.
* **Password**: `0c510604a378d348e7ed`\
The corresponding password for the above username.
* **Host**: `p2p.gke-europe.settlemint.com`\
The server address (domain or IP) where the PostgreSQL database is hosted.
* **Port**: `30787`\
The network port on which the PostgreSQL service is listening.
* **Database Name**: `hasura-f1cd9`\
The specific PostgreSQL database to connect to on that server.
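As a quick sketch, the same string can be split with Node's built-in `URL` parser and handed directly to `pg`; the `HASURA_PG_URL` environment variable name is an assumption for illustration:

```typescript
// Sketch: keep the connection string in an environment variable and
// let Node's URL parser split it into the parts described above.
import pkg from "pg";
const { Pool } = pkg;

const connectionString = process.env.HASURA_PG_URL!; // e.g. postgresql://user:pass@host:port/db
const url = new URL(connectionString);

console.log("User:", url.username);              // database username
console.log("Host:", url.hostname);              // server address
console.log("Port:", url.port);                  // PostgreSQL port
console.log("Database:", url.pathname.slice(1)); // database name (path without leading "/")

// pg accepts the full string directly, so no manual splitting is needed:
const pool = new Pool({ connectionString });
```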
## Hasura backup
Via the `pg_dump` CLI command:
```bash
PGPASSWORD=0c510604a378d348e7ed pg_dump \
-h p2p.gke-europe.settlemint.com \
-p 30787 \
-U hasura-f1cd9 \
-d hasura-f1cd9 \
-F p \
-f ~/Desktop/hasura_backup.sql
```
## Taking a backup via the Hasura CLI
The Hasura CLI can back up two things:
1. Hasura database
2. Hasura metadata
### Steps for taking a backup of the Hasura database
1. Install the Hasura CLI
   ([https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/](https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/))
2. Run the `hasura init` command to initialize a new Hasura project in the
   working directory.
3. Edit the `config.yaml` file to configure the remote Hasura instance. You need
   to generate an API key in BPaaS and pass it with the endpoint.
Syntax of config.yaml:
```yaml
version: 3
endpoint:
admin_secret:
metadata_directory: metadata
actions:
kind: synchronous
handler_webhook_baseurl: http://localhost:3000
```
Example:
```yaml
endpoint: https://hasuradb-15ce.gke-japan.settlemint.com/sm_aat_86530f5bf93d82a9
admin_secret: dc5eb1b93f43fd28c53e
metadata_directory: metadata
actions:
kind: synchronous
handler_webhook_baseurl: http://localhost:3000
```
4. Run the `hasura console` command (this syncs everything to your local Hasura
   instance).
5. Run this curl command to generate the DB export (replace `<ADMIN_SECRET>` and
   `<HASURA_ENDPOINT>` with your values):
Curl format:
```bash
curl -d '{"opts": [ "-O", "-x", "--schema=public", "--inserts"], "clean_output": true, "source": "default"}' -H "x-hasura-admin-secret: <ADMIN_SECRET>" <HASURA_ENDPOINT>/v1alpha1/pg_dump > db.sql
```
Example:
```bash
curl -d '{"opts": [ "-O", "-x", "--schema=public", "--inserts"], "clean_output": true, "source": "default"}' -H "x-hasura-admin-secret:78b0e4618125322de0eb" https://fuchsiacapybara-7f70.gke-europe.settlemint.com/bpaas-1d79Acd6A2f112EA450F1C07a372a7D582E6121F/v1alpha1/pg_dump > db.sql
```
### Importing data into a new instance
Copy the content of the exported `db.sql` file, paste it, and execute it as a
SQL statement on the new instance.
### Steps for taking a backup of hasura metadata
Hasura Metadata Export is a collection of yaml files which captures all the
metadata required by the GraphQL Engine. This includes info about tables that
are tracked, permission rules, relationships, and event triggers that are
defined on those tables.
If you have already initialized your project via the Hasura CLI you should see
the metadata directory structure in your project directory.
To export your entire metadata using the Hasura CLI execute the following
command in your terminal:
```bash
# In hasura CLI
hasura metadata export
```
This will export the metadata as YAML files in the `/metadata` directory.
### Steps for importing or applying hasura metadata
You can apply metadata from one Hasura Server instance to another. You can also
apply an older or modified version of an instance's metadata onto itself to
replace the existing metadata. Applying or importing completely replaces the
metadata on that instance, i.e. you lose any metadata that existed before
applying.
```bash
# In hasura CLI
hasura metadata apply
```
### Reload hasura metadata
In some cases, the metadata can be out of sync with the database schema. For
example, when a new column has been added to a table via an external tool.
```bash
# In hasura CLI
hasura metadata reload
```
For more on Hasura metadata, refer to
[https://hasura.io/docs/latest/migrations-metadata-seeds/manage-metadata/](https://hasura.io/docs/latest/migrations-metadata-seeds/manage-metadata/).
For more on Hasura migrations, refer to
[https://hasura.io/docs/latest/migrations-metadata-seeds/manage-migrations/](https://hasura.io/docs/latest/migrations-metadata-seeds/manage-migrations/).
Congratulations!
You have successfully configured the Hasura backend-as-a-service layer with the
off-chain database of your choice.
From here we will proceed to adding centralized and decentralized storage for
our images, documents, videos, archive files and other storage needs.
file: ./content/docs/building-with-settlemint/evm-chains-guide/setup-storage.mdx
meta: {
"title": "Setup storage",
"description": "Add S3 or IPFS storage"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To integrate off-chain file storage into your blockchain application, you can
configure either IPFS (for decentralized content addressing) or MinIO (an
S3-compatible private storage layer) through the SettleMint platform. Both
options serve different use cases: IPFS excels at immutability and decentralized
access, while S3-style storage is better for secure, private, and
high-performance file delivery.
To get started, navigate to the relevant application in your SettleMint
workspace and open the storage section from the left-hand menu. Click add
storage, which opens a configuration form. Choose the storage type, either IPFS
for decentralized or MinIO for private object storage. Assign a name and
configure your deployment settings like region, provider, and resource pack.
Once confirmed, the storage service will be deployed and available for use.
Once provisioned, you can access and manage your storage instance from the
manage storage section. Here, you will be able to view the storage endpoint,
health status, and metadata configuration. If using IPFS, you'll be interacting
with content hashes (CIDs), while MinIO offers an S3-compatible interface where
files are stored under buckets and can be accessed via signed URLs.
Using the SettleMint SDK or CLI, developers can list, query, and manage storage
instances programmatically. The SDK provides a typed interface to
connect, upload, retrieve, and delete files. For example, the
@settlemint/sdk-ipfs package allows seamless pinning and retrieval of files
using CIDs. Similarly, @settlemint/sdk-minio wraps around common S3 operations
like uploading files, generating expirable download URLs, and managing buckets.
Depending on your use case, both IPFS and MinIO can serve as complementary
layers. For public-facing and immutable content, such as NFT metadata, DAO
governance artifacts, or verifiable documents, IPFS is well suited. For private,
regulated, or access-controlled files, like KYC documents, user uploads, admin
reports, and internal metadata, MinIO offers a robust alternative with access
control and performance guarantees.
In practice, a dApp may use both systems in tandem: the file is stored in
S3/MinIO for fast access and usability, while its content hash is stored on IPFS
(and optionally, linked on-chain) to provide tamper-proof guarantees and content
validation. This hybrid model ensures performance, security, and
decentralization where it matters most.
Once storage is connected, users and developers can begin uploading files via
frontend integrations, backend scripts, or SDK calls. Content uploaded to IPFS
will return a CID, which can be persisted on-chain or referenced in subgraphs
and APIs. Files on S3/MinIO can be secured using signed URLs or policies, making
them suitable for user role–based access or limited-time file sharing.
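A minimal sketch of this hybrid pattern follows. The `uploadToMinio` and `pinToIpfs` helpers are hypothetical placeholders, not actual `@settlemint/sdk-minio` or `@settlemint/sdk-ipfs` calls; see the SDK documentation for the real interfaces:

```typescript
// Hybrid storage sketch; the two helpers below are hypothetical placeholders.
import { createHash } from "node:crypto";

// Stand-ins for your real storage client calls (MinIO / IPFS SDKs).
async function uploadToMinio(bucket: string, key: string, data: Buffer): Promise<string> {
  throw new Error("replace with your MinIO/S3 client call; should return a signed URL");
}
async function pinToIpfs(data: Buffer): Promise<string> {
  throw new Error("replace with your IPFS client call; should return the CID");
}

async function storeDocument(file: Buffer, name: string) {
  // Fast, access-controlled copy for the application UI.
  const signedUrl = await uploadToMinio("user-uploads", name, file);

  // Content-addressed copy (or just its hash) for tamper-proofing.
  const cid = await pinToIpfs(file);

  // A local digest can additionally be anchored on-chain or in a subgraph.
  const sha256 = createHash("sha256").update(file).digest("hex");

  return { signedUrl, cid, sha256 };
}
```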
## Off-chain file storage use cases in blockchain applications
Blockchain applications often require storing documents, images, videos, or
metadata off-chain due to cost, performance, or privacy reasons. Two common
approaches are:
* **IPFS**: A decentralized, content-addressed file system ideal for immutable,
verifiable, and censorship-resistant data.
* **MinIO S3**: A centralized, enterprise-grade storage solution that supports
private files, fine-grained access control, and fast retrieval.
Below are separate use case tables for each.
***
## 🌐 IPFS (InterPlanetary File System)
IPFS is a decentralized protocol for storing and sharing files in a peer-to-peer
network. Files are addressed by their content hash (CID), ensuring immutability
and verification.
| Category | Use Cases |
| -------------------------- | -------------------------------------------------------------------------------------- |
| **NFTs & Metadata** | NFT images and media, Metadata JSON, Reveal assets, Provenance data |
| **Decentralized Identity** | Hash of KYC documents, Verifiable credentials, DID documents, Encrypted identity data |
| **DAOs & Governance** | Proposals with supporting files, Community manifestos, Off-chain vote metadata |
| **Public Records** | Timestamped proofs, Open access research, Transparent regulatory disclosures |
| **Content Publishing** | Articles, Audio files, Podcasts, Open knowledge archives |
| **Gaming & Metaverse** | 3D assets, Wearables, In-game items, IPFS-based map data |
| **Token Ecosystems** | Whitepapers, Token metadata, Proof-of-reserve documents |
| **Data Integrity Proofs** | Merkle tree files, Hashed content for audit, CID-linked validation |
| **Hybrid dApps** | On-chain reference to CID, IPFS-pinned metadata, Public shareable URIs |
| **Data Portability** | Decentralized content backups, Peer-to-peer file sharing, Long-term open data archives |
***
## ☁️ MinIO (S3-compatible storage)
MinIO is a centralized, S3-compatible object storage platform that offers speed,
scalability, and rich security features. It is especially suitable for private or
enterprise-grade applications.
| Category | Use Cases |
| ----------------------------- | --------------------------------------------------------------------------------------- |
| **KYC / Identity Management** | Encrypted KYC files, ID document storage, Compliance scans, Signature uploads |
| **User Uploads** | Profile pictures, File attachments, Media uploads, Form attachments |
| **Admin Dashboards** | Exported reports, Internal analytics files, Logs and snapshots |
| **E-Commerce / Marketplaces** | Product images, Order confirmations, Receipts, Invoices |
| **Private DAO Ops** | Budget spreadsheets, Voting records, Internal documents |
| **Education Platforms** | Certificates, Course PDFs, Student submissions |
| **Customer Support** | Ticket attachments, User-submitted evidence, File-based case history |
| **Real-Time Interfaces** | UI asset delivery, Previews, Optimized media for front-end display |
| **Data Recovery** | Automatic backups, Encrypted snapshots, Versioned file histories |
| **Secure Downloads** | Signed URLs for restricted access, Expirable public links, S3-based token-gated content |
***
## Summary: when to use which?
| Use Case Pattern | Recommended Storage |
| ------------------------------------- | ------------------- |
| Public, immutable content | **IPFS** |
| Verifiable, on-chain linked data | **IPFS** |
| Private or role-based content | **S3** |
| Fast real-time access (UI/media) | **S3** |
| Hybrid (IPFS for hash, S3 for access) | **Both** |
Each system has unique advantages. For truly decentralized applications where
transparency and verifiability matter, IPFS is a natural fit. For operational
scalability, secure access, and enterprise-grade needs, S3 provides a reliable
foundation.
In hybrid dApps, combining both ensures performance without compromising on
decentralization.
## Add storage
Navigate to the **application** where you want to add storage. Click **storage** in the left navigation, and then click **add storage**. This opens a form.
### Configure storage
1. Choose storage type (IPFS or MinIO)
2. Choose a **storage name**
3. Configure deployment settings
4. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create storage:
```bash
# Create storage
settlemint platform create storage
# Get information about the command and all available options,
# including the list of available storage types
settlemint platform create storage --help
```
For a full example of how to connect to a storage using the SDK, see the [MinIO SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-minio#api-reference) or [IPFS SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-ipfs#api-reference).
Get your access token from the platform UI under **user settings → API tokens**.
The SDK enables you to:
* Use IPFS for decentralized storage - check out the [IPFS SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/ipfs)
* Use MinIO for S3-compatible storage - check out the [MinIO SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/minio)
## Manage storage
Navigate to your storage and click **manage storage** to:
* View storage details and status
* Monitor health
* Access storage interface
* Update configurations
```bash
# List storage instances
settlemint platform list storage --application <application-name>
# Get storage details
settlemint platform read storage <storage-unique-name>
```
```typescript
// List storage instances
const listStorage = async () => {
const storages = await client.storage.list("your-app-id");
console.log('Storage instances:', storages);
};
// Get storage details
const getStorage = async () => {
const storage = await client.storage.read("storage-unique-name");
console.log('Storage details:', storage);
};
```
Congratulations!
You have successfully added S3 and IPFS storage to your application environment.
From here we will proceed to adding custom container deployments, where you can
host your application's front-end user interface or any other services required
to complete your application.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/add-network-and-nodes.mdx
meta: {
"title": "Add Network and nodes",
"description": "Guide to adding a blockchain network to your application"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
import React from "react";
Summary
To build a blockchain application on Hedera using SettleMint, you start by
selecting Hedera as your network when creating the application. Hedera is a
public network, so validators are already established by the network’s
consensus participants. SettleMint will deploy an archive node that connects
to the chosen Hedera network (mainnet or testnet).
When deploying on SettleMint in SaaS mode, you'll choose between a shared or
dedicated cluster, select a cloud provider and data center, and pick a
resource pack (small, medium, or large) that can be scaled later.
Once your node is deployed, SettleMint provides access to the Insights tab
for monitoring tools. Since Hedera is a public network, you can use Hedera’s
public blockchain explorer to track transactions and network activity.
## Prerequisites
Before setting up a blockchain network, you need to have an application created
in your workspace. Applications provide the organizational context for all your
blockchain resources including networks, nodes, and development tools. If you
haven't created an application yet, follow our
[create application](/building-with-settlemint/evm-chains-guide/create-an-application)
guide first.
## 1. Add a blockchain network
To build your application on **Hedera**, go to the **network manager** in
SettleMint and **select "Hedera"** from the list of supported public networks.
SettleMint will automatically connect your app to the Hedera network by
deploying an archive node. You can choose between mainnet or testnet depending
on your use case.
For reference, see the full list of supported networks
[here](/platform-components/blockchain-infrastructure/network-manager#supported-blockchain-network-protocols).

You can perform the same action via the SettleMint SDK CLI as well -
First ensure you're authenticated:
```bash
settlemint login
```
Create a blockchain network:
```bash
settlemint platform create blockchain-network besu my-network \
  --node-name validator-1
# Get information about the command and all available options
settlemint platform create blockchain-network besu --help
```
1. **Navigate to application**: Go to the application containing your network.
2. **Add network**: Click **Add blockchain network** to open a form.
3. **Configure network**: Select the protocol of your choice and click
   **Continue**. Choose a network name and a node name, configure your
   deployment settings and network parameters, then click **Confirm** to add
   the network.
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create network
const createNetwork = async () => {
const result = await client.blockchainNetwork.create({
applicationUniqueName: "your-app",
name: "my-network",
nodeName: "validator-1",
consensusAlgorithm: "BESU_QBFT",
provider: "GKE", // GKE, EKS, AKS
region: "EUROPE"
});
console.log('Network created:', result);
};
// List networks
const listNetworks = async () => {
const networks = await client.blockchainNetwork.list("your-app");
console.log('Networks:', networks);
};
// Get network details
const getNetwork = async () => {
const network = await client.blockchainNetwork.read("network-unique-name");
console.log('Network details:', network);
};
// Restart network
const restartNetwork = async () => {
await client.blockchainNetwork.restart("network-unique-name");
};
```
Get your access token from the Platform UI under User Settings → API Tokens.
While deploying a network, you can tune various parameters to optimize performance and execution. The Chain ID serves as a unique identifier for your blockchain network, ensuring proper differentiation from others. The Seconds per block setting controls the block time interval, impacting transaction finality speed. Gas price defines the transaction cost per unit of gas, influencing network fees, while the Gas limit determines the maximum gas allowed per block, affecting computational capacity.
The EVM stack size configures the stack depth for smart contract execution, and the Contract size limit sets the maximum contract code size to manage deployment constraints. Adjusting these settings allows for greater scalability, efficiency, and cost control based on your specific use case.
For EVM chains, SettleMint allows you to set key genesis file parameters for a custom network configuration.
## Manage a network
Network management can be done via the SettleMint SDK CLI using these commands:
```bash
# List networks
settlemint platform list blockchain-networks --application <application-name>
# Get network details
settlemint platform read blockchain-network <network-unique-name>
# Restart network
settlemint platform restart blockchain-network <network-unique-name>
```
Navigate to your network and click **Manage network** to see available actions:
* View network details and status
* Monitor network health
* Restart network operations
```typescript
// List networks
await client.blockchainNetwork.list("your-app");
// Get network details
await client.blockchainNetwork.read("network-unique-name");
// Restart network
await client.blockchainNetwork.restart("network-unique-name");
```
When you deploy a network, the first node is automatically deployed with it as a
validator node. Once you have deployed a permissioned network or joined a public
network, you can add more nodes to it.
## 2. Add blockchain nodes
To see and add nodes, click the **Blockchain Nodes** tile on the dashboard
or use the **Blockchain Nodes** link in the left menu.

We recommend the following number of nodes for each environment (BFT consensus
tolerates f faulty validators with 3f + 1 validators in total, so 4 validators
tolerate one fault):
| Blockchain Network | Node Type | Minimum Nodes for Fault Tolerance |
| -------------------- | ------------------- | --------------------------------- |
| **Hyperledger Besu** | Validator Nodes | 4 (Byzantine Fault Tolerant BFT) |
| **Hyperledger Besu** | Non-Validator Nodes | 2 (for redundancy) |
| **GoQuorum** | Validator Nodes | 4 (Istanbul BFT) |
| **GoQuorum** | Non-Validator Nodes | 2 (for redundancy) |
Nodes can be added via the Platform UI or the SettleMint SDK CLI as follows:
1. **Navigate to application**: Go to the application containing your network.
2. **Access nodes**: Click **Blockchain nodes** in the left navigation.
3. **Configure node**: Click **Add a blockchain node**, select the blockchain
   network to add this node to, choose a node name and node type
   (validator/non-validator), configure deployment settings, and click
   **Confirm**.
First ensure you're authenticated:
```bash
settlemint login
```
Create a blockchain node:
```bash
settlemint platform create blockchain-node besu my-node \
  --blockchain-network <network-unique-name> \
  --node-type VALIDATOR \
  --provider <provider> \
  --region <region>
# Get help
settlemint platform create blockchain-node --help
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
const createNode = async () => {
const result = await client.blockchainNode.create({
applicationUniqueName: "your-application",
blockchainNetworkUniqueName: "your-network",
name: "my-node",
nodeType: "VALIDATOR",
provider: "provider",
region: "region"
});
console.log('Node created:', result);
};
```
Get your access token from the Platform UI in left menu bar > Access Tokens.
## Manage node
You can view node details and status, can monitor node health, pause and
restart, or upgrade the node via the SDK CLI or the Platform UI.
Navigate to your node and click **Manage node** to see available actions:
* View node details and status
* Monitor node health
* Restart node operations
```bash
# List nodes
settlemint platform list services --application <application-name>
# Restart node
settlemint platform restart blockchain-node <node-unique-name>
```
```typescript
// List nodes
await client.blockchainNode.list("your-application");
// Get node details
await client.blockchainNode.read("node-unique-name");
// Restart node
await client.blockchainNode.restart("node-unique-name");
```
All operations require appropriate permissions in your workspace.
## 3. Add load balancer
To add a load balancer, navigate to the **Blockchain nodes** section in the
SettleMint platform and select your deployed network. Click "Add load balancer",
choose the region, provider, and desired resource configuration. Once deployed,
the load balancer helps distribute traffic efficiently, improving network
reliability and performance.
When selecting nodes to connect to the load balancer, ensure you include at
least two non-validator nodes for optimal redundancy. The load balancer can be
configured to route requests to specific nodes based on workload distribution,
ensuring high availability and fault tolerance in your blockchain network.

## 4. Add blockchain explorer
To add the Blockscout blockchain explorer for EVM-based permissioned networks,
navigate to **Insights** via the left menu or the dashboard tile. For public
networks, you may use publicly available blockchain explorers for the respective
network.


### For public networks, please use the following blockchain explorers
| **Network** | **Mainnet Explorer** | **Testnet Explorer** |
| -------------------- | -------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| **Ethereum** | [Etherscan](https://etherscan.io/) | [Sepolia](https://sepolia.etherscan.io/) / [Holesky](https://holesky.etherscan.io/) |
| **Avalanche** | [SnowTrace](https://snowtrace.io/) | [Fuji](https://testnet.snowtrace.io/) |
| **Hedera Hashgraph** | [HashScan](https://hashscan.io/mainnet) | [HashScan Testnet](https://hashscan.io/testnet) |
| **Polygon PoS** | [PolygonScan](https://polygonscan.com/) | [Amoy](https://amoy.polygonscan.com/) |
| **Polygon zkEVM** | [zkEVM Explorer](https://zkevm.polygonscan.com/) | [zkEVM Testnet](https://testnet-zkevm.polygonscan.com/) |
| **Optimism** | [Optimistic Etherscan](https://optimistic.etherscan.io/) | [Optimism Goerli](https://goerli-optimism.etherscan.io/) |
| **Arbitrum** | [Arbiscan](https://arbiscan.io/) | [Arbitrum Goerli](https://goerli.arbiscan.io/) |
Congratulations!
You have successfully built the blockchain infrastructure layer for your
application. From here you can proceed to creating or setting up private keys
for the transaction signer and user wallets.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/add-private-keys.mdx
meta: {
"title": "Add private keys",
"description": "How to create and use private keys on SettleMint platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To send transactions on a blockchain, you need a private key (with enough
funds to cover gas on networks that charge fees). You can create and manage private
keys directly within SettleMint using ECDSA, HD ECDSA, or HSM key types.
After creating a key, it must be attached to at least one node to enable
transaction signing; this is critical for smart contract deployment. Without
it, the deployment will fail.
SettleMint also offers user wallets, a scalable solution that generates
wallets from a single HD ECDSA 256 key. Each user gets a unique address for
privacy and parallel transaction support. You can create user wallets once
your HD key is deployed and running, and fund them as needed for gas-based
transactions.
## How to add private keys and user wallets in SettleMint platform
# Private keys
Sending transactions on a blockchain network requires two essential components:
a private key to cryptographically sign your transactions, and sufficient funds
in your wallet to cover the associated gas fees. Without either element,
transaction execution will fail.
While you can use external private keys created with tools like MetaMask or
other wallet solutions, **SettleMint offers a more integrated approach**. The
platform provides built-in functionality to **create and manage private keys
directly within your environment**, eliminating the need for external wallet
management.
When you deploy a blockchain node, it contains a signing proxy that captures the
eth\_sendTransaction call, uses the appropriate key from the private key section
to sign it, and sends it onwards to the blockchain node. You can use this proxy
directly via the node's JSON-RPC endpoints
([JSON-RPC](https://ethereum.org/en/developers/docs/apis/json-rpc/)) and via
tools like Hardhat ([HardHat
RPC](https://hardhat.org/config/#json-rpc-based-networks)) configured to use the
"remote" default option for signing.
## Create a private key
To add a private key in the SettleMint platform, navigate to the private keys
section and click create a private key. You'll be prompted to select the type of
private key: ECDSA P256, HD ECDSA P256, or HSM ECDSA P256.

Navigate to your **application**, click **private keys** in the left navigation, and then click **create a private key**. This opens a form.
Follow these steps to create the private key:
1. Choose a **private key type**:
* **Accessible ECDSA P256**: Standard Ethereum-style private keys with exposed mnemonic
* **HD ECDSA P256**: Hierarchical Deterministic keys for advanced key management
* **HSM ECDSA P256**: Hardware Security Module protected keys for maximum security
2. Choose a **name** for your private key
3. Select the **nodes** on which you want the key to be active
4. Click **confirm** to create the key
```bash
# Create Accessible ECDSA P256 key
settlemint platform create private-key accessible-ecdsa-p256 my-key \
--application my-app \
--blockchain-node node-123
# Create HD ECDSA P256 key
settlemint platform create private-key hd-ecdsa-p256 my-key \
--application my-app
# Create HSM ECDSA P256 key
settlemint platform create private-key hsm-ecdsa-p256 my-key \
--application my-app
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create private key
const createKey = async () => {
const result = await client.privateKey.create({
name: "my-key",
applicationUniqueName: "my-app",
privateKeyType: "ACCESSIBLE_ECDSA_P256", // or "HD_ECDSA_P256" or "HSM_ECDSA_P256"
blockchainNodeUniqueNames: ["node-123"] // optional
});
console.log('Private key created:', result);
};
```
## Attaching private keys to blockchain nodes (transaction signer)

Every smart contract deployment involves a transaction that must be signed by an
authorized account. This signature proves that the transaction came from a valid
identity and permits it to be processed by the network. When using SettleMint,
deploying a smart contract via the platform UI or SDK initiates an
eth\_sendTransaction call, which must be signed by a private key. However, nodes
cannot inherently sign transactions unless a key has been explicitly activated
and attached to them.
If no private key is attached to the node involved in the deployment, the
process will halt at the signing step. The platform will not be able to
authorize the deployment transaction, resulting in a failed operation. This
makes key-to-node assignment a required step in any production or test setup
involving deployment, contract interactions, or any state-changing blockchain
transaction.
**How to attach a private key to a node**
1. Go to the private keys section of your SettleMint workspace.
2. Click on the private key (e.g., "Deployer") you wish to use for signing
transactions.
3. Navigate to the nodes tab of that private key's page.
4. You'll see a list of available nodes in your network (validator and RPC
nodes).
5. Select the nodes that should use this key for transaction signing. These will
usually be RPC nodes or validators depending on your setup.
6. Once selected, the key becomes active on these nodes and is used for signing
all outgoing transactions initiated from the platform.
**Best practices and nuances**
1. Always attach the key to at least one node before deploying a smart contract.
In most cases, attaching it to an RPC node is sufficient.
2. Avoid attaching the same key to multiple nodes unless required, to reduce the
risk of key misuse or unnecessary transaction replay.
3. Ensure the private key has sufficient funds (ETH or native token) to pay for
gas costs associated with contract deployment if working on public chains or
non-zero gas fee networks.
4. For security reasons, only assign signing permissions to nodes you trust and
control.
5. Consider using an HD key if you want to manage multiple identities derived
from the same mnemonic, but ensure the correct derivation path is used.
## Manage private keys
1. Navigate to your application's **private keys** section
2. Click on a private key to:
* View details and status
* Manage node associations
* Check balances
* Fund the key
```bash
# List all private keys
settlemint platform list private-keys --application <application-name>
# View specific key details
settlemint platform read private-key <key-unique-name>
# Restart a private key
settlemint platform restart private-key <key-unique-name>
```
```typescript
// List private keys
const listKeys = async () => {
const keys = await client.privateKey.list("your-app-name");
};
// Get key details
const getKey = async () => {
const key = await client.privateKey.read("key-unique-name");
};
// Restart key
const restartKey = async () => {
await client.privateKey.restart("key-unique-name");
};
```
## Fund the private key
For networks that require gas to perform a transaction, your private key should
contain enough funds to cover the gas price.
For Hedera Testnet, you can get free test HBAR from the [Hedera Portal](https://portal.hedera.com).
This will provide you with enough test HBAR to cover transaction fees during development and testing.
1. Visit [portal.hedera.com](https://portal.hedera.com)
2. Create an account if you don't have one
3. Navigate to the Testnet Faucet section
4. Copy your SettleMint wallet address (in 0x format)
5. Paste it into the faucet form
6. Submit to receive your test HBAR (typically 10,000 test HBAR)
The HBAR balance will appear in your SettleMint wallet once the transaction is
processed.
For funding your wallet:
1. Click the **private key** in the overview to see detailed information
2. Open the **balances tab**
3. Copy the public address of the wallet you want to fund and send
tokens/currency to it.
Ensure your private key has sufficient funds before attempting transactions on
networks that require gas fees.
## User wallets
SettleMint's **user wallets** feature offers a production-ready solution for
managing a virtually unlimited number of wallets with efficiency and
scalability. It provides seamless wallet generation, ensuring **cost-effective
management** and eliminating additional expenses. By generating **unique
addresses for each user**, privacy is significantly enhanced, while separate
nonces enable faster, parallel transaction processing. User wallets also
simplify wallet recovery, since all wallets are derived from a single master
key, and they use the same signing proxy to sign transactions with the
corresponding user's private key.
## Create and setup user wallets
To set up your user wallets, navigate to your application, click **private
keys** in the left navigation, and then click **create a private key**. This
opens a form.
Select **HD ECDSA P256** as the private key type, then enter a **name** for your
deployment. You can also select the nodes or load balancers on which you want to
enable the user wallets. You can change this later if you want to use your user
wallets on a different node. Click **confirm** to deploy the wallet.
## Difference between ECDSA and HD ECDSA keys, and why user wallets require HD keys
A simple ECDSA key is just one key pair: a private key and its corresponding
public key. It can be used to sign transactions and control a blockchain
address, but it’s standalone. There’s no built-in mechanism to derive more keys
from it. If you want multiple accounts, you’d need to manually generate and
store each key separately. An HD (Hierarchical Deterministic) wallet, on the
other hand, starts from a single master seed. From this seed, it can generate an
entire tree of ECDSA key pairs in a structured and repeatable way. This system
follows the BIP-32 standard and includes concepts like key derivation paths and
chain codes.
The reason HD wallets are suitable for managing wallets is that they support
deterministic key generation. You can recreate the full wallet from just the
seed phrase. Each new account or address is simply a derived key from a known
path. This is efficient and secure, and it also simplifies backup and recovery.
Simple ECDSA keys lack this structure. They are isolated, and generating
multiple keys would require managing each one individually. This doesn’t scale
for wallets, especially those that require many accounts, addresses, or
identities. That’s why HD ECDSA key systems are preferred in wallet
implementation.
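To illustrate the difference, here is a small sketch of BIP-32 derivation using ethers v6; it shows the concept only and is not SettleMint's internal implementation (the mnemonic is a well-known test phrase):

```typescript
// Sketch of BIP-32 derivation with ethers v6: one mnemonic, many accounts.
import { HDNodeWallet, Mnemonic } from "ethers";

const mnemonic = Mnemonic.fromPhrase(
  "test test test test test test test test test test test junk"
);

// Derive the standard Ethereum account path m/44'/60'/0'/0/<index>.
const root = HDNodeWallet.fromMnemonic(mnemonic, "m/44'/60'/0'/0");

for (let i = 0; i < 3; i++) {
  const child = root.deriveChild(i); // m/44'/60'/0'/0/i
  console.log(`User wallet ${i}:`, child.address);
}
```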
When your deployment status is **running**, you can click on it to check the
details. You can see the mnemonic from which the user wallets are generated
under **key material**. Upon initialization, the user wallets section is empty.
To create your first user wallet, click on **create a user wallet**.

Remember that for networks that require gas to perform a transaction, the user
wallet should contain enough funds to cover the gas price. You can fund it using
the address displayed in the list.
Congratulations!
You have successfully created private keys and user wallets.
You have also attached private keys to node transaction signer and you are ready
for smart contract development and deployment.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/attestation-indexer.mdx
meta: {
"title": "Ethereum attestation indexer",
"description": "How to work with ethereum attestation indexer"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
The ethereum attestation indexer is a tool that allows you to track, store, and
query verifiable claims (attestations) made using the ethereum attestation
service (EAS). It provides a GraphQL API to easily fetch attestation data based
on schemas you define.
To use it, you'll first deploy the necessary EAS smart contracts (schema
registry and EAS) on your blockchain network using SettleMint's code studio and
task manager. Once deployed, you can register custom schemas and create
attestations that follow those schema structures.
After setup, the attestation indexer can be added via the middleware section of
your application. Once connected to your contract addresses, it will index
attestation events, and you can use the built-in GraphQL UI or API access to
query them in real time.
## 1. Introduction to EAS
### What is EAS?
Ethereum attestation service (EAS) is a decentralized protocol that allows users
to create, verify, and manage attestations (verifiable claims) on the Ethereum
blockchain. It provides a standardized way to make claims about data,
identities, or events that can be independently verified by others.

### Why use EAS?
* **Decentralization**: No central authority is needed to verify claims.
* **Interoperability**: Standardized schemas allow for cross-platform
compatibility.
* **Security**: Attestations are secured by the Ethereum blockchain.
* **Transparency**: All attestations are publicly verifiable.
***
## 2. Key concepts
### Core components
1. **SchemaRegistry**:
* A smart contract that stores and manages schemas.
* Schemas define the structure and data types of attestations, ensuring that
all attestations conform to a predefined format.
2. **EAS contract**:
* The main contract that handles the creation and management of attestations.
* It interacts with the `SchemaRegistry` to ensure that attestations adhere
to the defined schemas.

3. **Attestations**:
* Verifiable claims stored on the blockchain.
* Created and managed by the `EAS contract`.
4. **Resolvers**:
* Optional contracts that provide additional validation logic for
attestations.
***
## 3. How EAS works
```mermaid
graph TD
SchemaRegistry["SchemaRegistry"]
UsersSystems["Users/Systems"]
EASContract["EAS contract"]
Verifiers["Verifiers"]
Attestations["Attestations"]
SchemaRegistry -- "Defines data structure" --> EASContract
UsersSystems -- "Interact" --> EASContract
EASContract -- "Creates" --> Attestations
Verifiers -- "Verify" --> Attestations
```
### Workflow
1. **Schema definition**: Start by defining a schema using the
**SchemaRegistry** contract.
2. **Attestation creation**: Use the **EAS contract** to create attestations
based on the schema.
3. **Optional validation**: Resolvers can be used for further validation logic.
4. **On-chain storage**: Attestations are securely stored and retrievable
on-chain.
***
## 4. Contract deployment
Before deploying the EAS contracts, you must add the smart contract set to your
project.
### Adding the smart contract set
1. **Navigate to the dev tools section**: Go to the application dashboard of the
application where you want to deploy the EAS contracts, then navigate to the
**dev tools** section in the left sidebar.
2. **Select the attestation service set**: From there, click on **add a dev
tool**, choose **code studio** and then **smart contract set**. Choose the
**attestation service** template.
3. **Customize**: Modify the set as needed for your specific project.
4. **Save**: Save the configuration.
For detailed instructions, visit the
[smart contract sets documentation](/platform-components/dev-tools/code-studio/smart-contract-sets/smart-contract-sets).
***
### Deploying the contracts
Once the contract set is ready, you can deploy it using either the **task menu**
in the SettleMint IDE or via the **terminal**.
#### Deploy using the task menu
1. **Open the task menu**:
* In the SettleMint integrated IDE, access the **task menu** from the
sidebar.
2. **Select deployment task**:
* Choose the task corresponding to the **Hardhat - reset & deploy to platform
network** module.
3. **Monitor deployment logs**:
* The terminal output will display the deployment progress and contract
addresses.
#### Deploy using the terminal
1. **Prepare the deployment module**:\
Ensure the module is defined in `ignition/modules/main.ts`:
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomEASModule = buildModule("EASDeployment", (m) => {
const schemaRegistry = m.contract("SchemaRegistry", [], {});
const EAS = m.contract("EAS", [schemaRegistry], {});
return { schemaRegistry, EAS };
});
export default CustomEASModule;
```
2. **Run the deployment command**:\
Execute the following command in your terminal:
```bash
bunx settlemint hardhat deploy remote -m ignition/modules/main.ts
```
3. **Monitor deployment logs**:
* The terminal output will display the deployment progress and contract
addresses.
***
## 5. Registering a schema
### Example use case
Imagine building a service where users prove ownership of their social media
profiles. The schema might include:
* **Username**: A unique identifier for the user.
* **Platform**: The social media platform name (e.g., Twitter).
* **Handle**: The user's handle on that platform (e.g., `@coolcoder123`).
### Example
```javascript
const { ethers } = require("ethers");
// Configuration object for network and contract details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // The network endpoint (e.g., Ethereum mainnet/testnet)
registryAddress: "YOUR_SCHEMA_REGISTRY_ADDRESS_HERE", // Where the SchemaRegistry contract lives
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key (keep this secret!)
};
// Create connection to blockchain and setup contract interaction
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const schemaRegistry = new ethers.Contract(
config.registryAddress,
[
// This event helps us track when new schemas are registered
"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)",
// This function lets us register new schemas
"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)",
],
signer
);
async function registerSchema() {
try {
// Define what data fields our attestations will contain
const schema = "string username, string platform, string handle";
const resolverAddress = ethers.ZeroAddress; // No special validation needed
const revocable = true; // Attestations can be revoked if needed
console.log("🚀 Registering schema for social media ownership...");
// Send the transaction to create our schema
const tx = await schemaRegistry.register(
schema,
resolverAddress,
revocable
);
const receipt = await tx.wait(); // Wait for blockchain confirmation
// Get our schema's unique ID from the transaction
const schemaUID = receipt.logs[0].topics[1];
console.log("✅ Schema registered successfully! UID:", schemaUID);
} catch (error) {
console.error("❌ Error registering schema:", error.message);
}
}
registerSchema();
```
***
## 6. Creating attestations
### Example use case
Let's create an attestation that proves:
* **Username**: `awesome_developer`
* **Platform**: `GitHub`
* **Handle**: `@devmaster`
### Example
```javascript
const { EAS, SchemaEncoder } = require("@ethereum-attestation-service/eas-sdk");
const { ethers } = require("ethers");
// Setup our connection details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract address
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key
schemaUID: "YOUR_SCHEMA_UID_HERE", // The UID from when we registered our schema
};
// Connect to the blockchain
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const eas = new EAS(config.easAddress);
eas.connect(signer);
// Create an encoder that matches our schema structure
const schemaEncoder = new SchemaEncoder(
"string username, string platform, string handle"
);
// The actual data we want to attest to
const attestationData = [
{ name: "username", value: "awesome_developer", type: "string" },
{ name: "platform", value: "GitHub", type: "string" },
{ name: "handle", value: "@devmaster", type: "string" },
];
async function createAttestation() {
try {
// Convert our data into the format EAS expects
const encodedData = schemaEncoder.encodeData(attestationData);
// Create the attestation
const tx = await eas.attest({
schema: config.schemaUID,
data: {
recipient: ethers.ZeroAddress, // Public attestation (no specific recipient)
expirationTime: 0, // Never expires
revocable: true, // Can be revoked later if needed
data: encodedData, // Our encoded attestation data
},
});
// Wait for confirmation; in the EAS SDK, tx.wait() resolves to the new attestation UID
const newAttestationUID = await tx.wait();
console.log(
"✅ Attestation created successfully! UID:",
newAttestationUID
);
} catch (error) {
console.error("❌ Error creating attestation:", error.message);
}
}
createAttestation();
```
## 7. Verifying attestations
Verification is essential to ensure the integrity and authenticity of
attestations. You can verify attestations using one of the following methods:
1. **Using the EAS SDK**: Perform lightweight, off-chain verification
programmatically.
2. **Using a custom smart contract resolver**: Add custom on-chain validation
logic for attestations.
### Choose your verification method
#### Verification using the EAS sdk
The EAS SDK provides an easy way to verify attestations programmatically, making
it ideal for off-chain use cases.
##### Example
```javascript
const { ethers } = require("ethers");
const { EAS } = require("@ethereum-attestation-service/eas-sdk");
// Basic configuration for connecting to the network
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract
};
async function verifyAttestation(attestationUID) {
// Setup our blockchain connection
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const eas = new EAS(config.easAddress);
eas.connect(provider);
console.log("🔍 Verifying attestation:", attestationUID);
// Try to find the attestation on the blockchain
const attestation = await eas.getAttestation(attestationUID);
// Check if we found anything
if (!attestation) {
console.error("❌ Attestation not found");
return;
}
// Show the attestation details
console.log("✅ Attestation details:");
console.log("Attester:", attestation.attester); // Who created this attestation
console.log("Data:", attestation.data); // The actual attested data
console.log("Revoked:", attestation.revoked ? "Yes" : "No"); // Is it still valid?
}
// Replace with your attestation UID
verifyAttestation("YOUR_ATTESTATION_UID_HERE");
```
##### Key points
* **Lightweight**: Suitable for most off-chain verifications.
* **No custom logic**: Fetches and verifies data stored in EAS.
#### Verification using a custom smart contract resolver
Custom resolvers enable on-chain validation with additional business rules or
logic.
##### Example: trusted attester verification
The following smart contract resolver ensures that attestations are valid only
if made by trusted attesters.
###### Smart contract code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
// This contract checks if attestations come from trusted sources
contract CustomResolver {
// Keep track of which addresses we trust to make attestations
mapping(address => bool) public trustedAttesters;
// When deploying, we set up our initial list of trusted attesters
constructor(address[] memory initialAttesters) {
for (uint256 i = 0; i < initialAttesters.length; i++) {
trustedAttesters[initialAttesters[i]] = true;
}
}
// EAS calls this function before accepting an attestation
function validate(
bytes32 attestationUID, // Unique ID of the attestation
address attester, // Who's trying to create the attestation
bytes memory data // The attestation data (unused in this example)
) external view returns (bool) {
// Only allow attestations from addresses we trust
if (!trustedAttesters[attester]) {
return false;
}
return true;
}
}
```
###### Deploying the resolver with hardhat ignition
Deploy this custom resolver using the Hardhat Ignition framework.
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomResolverDeployment = buildModule("CustomResolver", (m) => {
const initialAttesters = ["0xTrustedAddress1", "0xTrustedAddress2"];
const resolver = m.contract("CustomResolver", [initialAttesters], {});
return { resolver };
});
export default CustomResolverDeployment;
```
Run the following command in your terminal to deploy:
```bash
npx hardhat ignition deploy ignition/modules/main.ts
```
###### Linking the resolver to a schema
When registering a schema, include the resolver's address for on-chain
validation.
```javascript
const resolverAddress = "YOUR_DEPLOYED_RESOLVER_ADDRESS";
const schema = "string username, string platform, string handle";
const tx = await schemaRegistry.register(schema, resolverAddress, true);
const receipt = await tx.wait();
const schemaUID = receipt.logs[0].topics[1]; // UID from the Registered event
console.log("✅ Schema with resolver registered! UID:", schemaUID);
```
###### Validating attestations with the resolver
To validate an attestation, call the `validate` function of your deployed
resolver contract.
```javascript
const resolver = new ethers.Contract(
"YOUR_RESOLVER_ADDRESS",
["function validate(bytes32, address, bytes) external view returns (bool)"],
provider
);
const isValid = await resolver.validate(
"YOUR_ATTESTATION_UID",
"ATTESTER_ADDRESS",
"ATTESTATION_DATA"
);
console.log("✅ Is the attestation valid?", isValid);
```
##### Key points
* **Customizable rules**: Add your own validation logic to the resolver.
* **On-chain validation**: Ensures attestations meet specific conditions before
they are considered valid.
***
### When to use each method?
* **EAS SDK**: Best for off-chain applications where simple validation suffices.
* **Custom resolver**: Use for on-chain validation with additional rules, such
as verifying trusted attesters or specific data formats.
## 8. Using the attestation indexer
### Setup attestation indexer
1. Go to your application's **middleware** section
2. Click "add a middleware"
3. Select "attestation indexer"
4. Configure with your contract addresses:
* EAS contract: `EAS contract address`
* Schema registry: `Schema registry contract address`
### Querying attestations
#### Connection details
After deployment:
1. Go to your attestation indexer
2. Click "connections" tab
3. You'll find your GraphQL endpoint URL
4. Create an application access token (settings → application access tokens)
#### Using the graphql ui
The indexer provides a built-in GraphQL UI where you can test queries. Click
"GraphQL UI" in your indexer to access it.
#### Example query implementation
```javascript
// Example fetch request to query attestations
async function queryAttestations(schemaId) {
const response = await fetch("YOUR_INDEXER_URL", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: "Bearer YOUR_APP_TOKEN",
},
body: JSON.stringify({
query: `{
attestations(
where: {
schemaId: {
equals: "${schemaId}"
}
}
) {
id
attester
recipient
revoked
data
}
}`,
}),
});
const data = await response.json();
return data.data.attestations;
}
// Usage example:
const schemaId = "YOUR_SCHEMA_ID"; // From the registration step
const attestations = await queryAttestations(schemaId);
console.log("Attestations:", attestations);
```
## 9. Integration studio implementation
For those using integration studio, we've created a complete flow implementation
of the EAS interactions. This flow automates the entire process we covered in
this guide.
### Flow overview
The flow includes:
* EAS configuration setup
* Schema registration
* Attestation creation
* Attestation verification
* Debug nodes for monitoring results
### Installation
1. In integration studio, go to import → clipboard
2. Paste the flow JSON below
3. Click import
Click to view/copy the complete Node-RED flow JSON
```json
[
{
"id": "eas_flow",
"type": "tab",
"label": "EAS attestation flow",
"disabled": false,
"info": ""
},
{
"id": "setup_inject",
"type": "inject",
"z": "eas_flow",
"name": "Inputs: RpcUrl, registry address, EAS address, private key",
"props": [
{
"p": "rpcUrl",
"v": "RPC-URL/API-KEY",
"vt": "str"
},
{
"p": "registryAddress",
"v": "REGISTERY-ADDRESS",
"vt": "str"
},
{
"p": "easAddress",
"v": "EAS-ADDRESS",
"vt": "str"
},
{
"p": "privateKey",
"v": "PRIVATE-KEY",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 250,
"y": 120,
"wires": [["setup_function"]]
},
{
"id": "setup_function",
"type": "function",
"z": "eas_flow",
"name": "Setup global variables",
"func": "// Initialize provider with specific network parameters\nconst provider = new ethers.JsonRpcProvider(msg.rpcUrl)\n\nconst signer = new ethers.Wallet(msg.privateKey, provider);\n\n// Initialize EAS with specific gas settings\nconst EAS = new eassdk.EAS(msg.easAddress);\neas.connect(signer);\n\n// Store in global context\nglobal.set('provider', provider);\nglobal.set('signer', signer);\nglobal.set('eas', eas);\nglobal.set('registryAddress', msg.registryAddress);\n\nmsg.payload = 'EAS configuration initialized';\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
},
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
}
],
"x": 580,
"y": 120,
"wires": [["setup_debug"]]
},
{
"id": "register_inject",
"type": "inject",
"z": "eas_flow",
"name": "Register schema",
"props": [],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 120,
"y": 260,
"wires": [["register_function"]]
},
{
"id": "register_function",
"type": "function",
"z": "eas_flow",
"name": "Register schema",
"func": "// Get global variables set in init\nconst signer = global.get('signer');\nconst registryAddress = global.get('registryAddress');\n\n// Initialize SchemaRegistry contract\nconst schemaRegistry = new ethers.Contract(\n registryAddress,\n [\n \"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)\",\n \"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)\"\n ],\n signer\n);\n\n// Define what data fields our attestations will contain\nconst schema = \"string username, string platform, string handle\";\nconst resolverAddress = \"0x0000000000000000000000000000000000000000\"; // No special validation needed\nconst revocable = true; // Attestations can be revoked if needed\n\ntry {\n const tx = await schemaRegistry.register(schema, resolverAddress, revocable);\n const receipt = await tx.wait();\n\n const schemaUID = receipt.logs[0].topics[1];\n // Store schemaUID in global context for later use\n global.set('schemaUID', schemaUID);\n\n msg.payload = {\n success: true,\n schemaUID: schemaUID,\n message: \"Schema registered successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
}
],
"x": 310,
"y": 260,
"wires": [["register_debug"]]
},
{
"id": "create_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: schema uid",
"props": [
{
"p": "schemaUID",
"v": "SCHEMA-UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 130,
"y": 400,
"wires": [["create_function"]]
},
{
"id": "create_function",
"type": "function",
"z": "eas_flow",
"name": "Create attestation",
"func": "// Get global variables\nconst EAS = global.get('eas');\nconst schemaUID = msg.schemaUID;\n\n// Create an encoder that matches our schema structure\nconst schemaEncoder = new eassdk.SchemaEncoder(\"string username, string platform, string handle\");\n\n// The actual data we want to attest to\nconst attestationData = [\n { name: \"username\", value: \"awesome_developer\", type: \"string\" },\n { name: \"platform\", value: \"GitHub\", type: \"string\" },\n { name: \"handle\", value: \"@devmaster\", type: \"string\" }\n];\n\ntry {\n // Convert our data into the format EAS expects\n const encodedData = schemaEncoder.encodeData(attestationData);\n\n // Create the attestation\n const tx = await eas.attest({\n schema: schemaUID,\n data: {\n recipient: \"0x0000000000000000000000000000000000000000\", // Public attestation\n expirationTime: 0, // Never expires\n revocable: true, // Can be revoked later if needed\n data: encodedData // Our encoded attestation data\n }\n });\n\n // Wait for confirmation and get the result\n const receipt = await tx.wait();\n\n // Store attestation UID for later verification\n global.set('attestationUID', receipt.attestationUID);\n\n msg.payload = {\n success: true,\n attestationUID: receipt,\n message: \"Attestation created successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 330,
"y": 400,
"wires": [["create_debug"]]
},
{
"id": "verify_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: attestation UID",
"props": [
{
"p": "attestationUID",
"v": "Attestation UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 140,
"y": 540,
"wires": [["verify_function"]]
},
{
"id": "verify_function",
"type": "function",
"z": "eas_flow",
"name": "Verify attestation",
"func": "const EAS = global.get('eas');\nconst attestationUID = msg.attestationUID;\n\ntry {\n const attestation = await eas.getAttestation(attestationUID);\n const schemaEncoder = new eassdk.SchemaEncoder(\"string pshandle, string socialMedia, string socialMediaHandle\");\n const decodedData = schemaEncoder.decodeData(attestation.data);\n\n msg.payload = {\n isValid: !attestation.revoked,\n attestation: {\n attester: attestation.attester,\n time: new Date(Number(attestation.time) * 1000).toLocaleString(),\n expirationTime: attestation.expirationTime > 0 \n ? new Date(Number(attestation.expirationTime) * 1000).toLocaleString()\n : 'Never',\n revoked: attestation.revoked\n },\n data: {\n psHandle: decodedData[0].value.toString(),\n socialMedia: decodedData[1].value.toString(),\n socialMediaHandle: decodedData[2].value.toString()\n }\n };\n} catch (error) {\n msg.payload = { \n success: false, \n error: error.message,\n details: JSON.stringify(error, Object.getOwnPropertyNames(error))\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 350,
"y": 540,
"wires": [["verify_debug"]]
},
{
"id": "setup_debug",
"type": "debug",
"z": "eas_flow",
"name": "Setup result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 770,
"y": 120,
"wires": []
},
{
"id": "register_debug",
"type": "debug",
"z": "eas_flow",
"name": "Register result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 500,
"y": 260,
"wires": []
},
{
"id": "create_debug",
"type": "debug",
"z": "eas_flow",
"name": "Create result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 520,
"y": 400,
"wires": []
},
{
"id": "verify_debug",
"type": "debug",
"z": "eas_flow",
"name": "Verify result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 530,
"y": 540,
"wires": []
},
{
"id": "1322bb7438d96baf",
"type": "comment",
"z": "eas_flow",
"name": "Initialize EAS config",
"info": "",
"x": 110,
"y": 60,
"wires": []
},
{
"id": "e5e3294119a80c1b",
"type": "comment",
"z": "eas_flow",
"name": "Register a new schema",
"info": "/* SCHEMA GUIDE\nEdit the schema variable to define your attestation fields.\nFormat: \"type name, type name, type name\"\n\nAvailable Types:\n- string (text)\n- bool (true/false)\n- address (wallet address)\n- uint256 (number)\n- bytes32 (hash)\n\nExamples:\n\"string name, string email, bool isVerified\"\n\"string twitter, address wallet, uint256 age\"\n\"string discord, string github, string telegram\"\n*/\n\nconst schema = \"string pshandle, string socialMedia, string socialMediaHandle\";",
"x": 120,
"y": 200,
"wires": []
},
{
"id": "2be090c17b5e4fce",
"type": "comment",
"z": "eas_flow",
"name": "Create attestation",
"info": "",
"x": 110,
"y": 340,
"wires": []
},
{
"id": "3d99f76c5c0bdaf0",
"type": "comment",
"z": "eas_flow",
"name": "Verify attestation",
"info": "",
"x": 110,
"y": 480,
"wires": []
}
]
```
### Configuration steps
1. Update the setup inject node with your:
* RPC URL
* Registry address
* EAS address
* Private key
2. Customize the schema in the register function
3. Deploy the flow
4. Test each step sequentially using the inject nodes
The flow provides debug outputs at each step to monitor the process.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/audit-logs.mdx
meta: {
"title": "Audit logs",
"description": "Audit logs for the actions performed on SettleMint platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
The audit log keeps a detailed record of user actions across the system, helping
teams monitor activity, track changes, and stay compliant with internal and
external requirements. Each entry includes a timestamp, showing exactly when
something was done, which makes it easier to follow the flow of events and spot
any irregularities.

It also records the user who performed the action, adding a layer of
accountability by linking every change to a specific individual or system role.
This is especially useful when reviewing changes or troubleshooting unexpected
behavior.
The service field highlights which part of the platform was involved, whether
it’s an integration, middleware component, or another system area. Alongside
that, the action field captures what was done, like creating, editing, or
deleting something. Together, these fields give teams a clear snapshot of what
happened, where, and by whom.
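As a mental model, each entry can be pictured as a simple record. The TypeScript shape below is illustrative only, mirroring the fields described above rather than the platform's exact schema:
```typescript
// Illustrative shape of an audit log entry; field names mirror the columns
// described above, not the platform's exact schema.
interface AuditLogEntry {
  timestamp: string; // when the action was performed
  user: string;      // who performed it (individual or system role)
  service: string;   // which part of the platform was involved
  action: string;    // what was done (e.g. create, edit, delete)
}
```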
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/create-an-application.mdx
meta: {
"title": "Create an application",
"description": "Guide to creating a blockchain application on SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To get started on the SettleMint platform, you need to create an
organization by going to the homepage or clicking the grid icon, then
selecting "create new organization." You'll need to enter a name and
complete the billing setup using Stripe to activate it.
Once your organization is ready, you need to invite your team members by
entering their email addresses, selecting their roles, and sending the
invitation. After that, you need to create an application within the
organization by giving it a name and confirming.
You can manage your organization and applications from the dashboard, change
names, invite more members, or delete resources when needed. You can also
create and manage applications using the SDK CLI or SDK JS if you prefer to
work programmatically.
## How to create an organization and application in SettleMint platform
An organization is the highest level of hierarchy in SettleMint. It's at this
level that you can create and manage blockchain applications, invite team
members to collaborate and manage billing.


You created your first organization when you signed up to use the SettleMint
platform, but you can create as many organizations as you want, e.g. for your
company, departments, teams, clients, etc. Organizations help you structure your
work, manage collaboration, and keep your invoices clearly organized.
Create an organization
Navigate to the homepage, or click the grid icon in the upper right corner.
Click **create new organization**. This opens a form. Follow these steps to create your organization:
Choose a **name** for your organization that is easily recognizable in your
dashboards, e.g. your company name, department name, team name, etc.
You can change the name of your organization at any time.
Enter **billing information**. SettleMint creates a billing account for this
organization, and you will be billed monthly for the resources you use within
it. Provide your billing details securely via Stripe (Visa, Mastercard, and Amex
are supported) to activate your organization, then follow the prompts to
complete the setup and gain full access to SettleMint's blockchain development
tools. Ensure all details are accurate for a smooth onboarding experience;
invoices are issued on the 1st of every month.
Click **confirm** to go to the organization dashboard. From here, you can create
your first application in this organization. The dashboard will show you a
summary of your organization's applications, the members in this organization,
and a status of the resource costs for the current month.
When you create an organization, you are the owner, and therefore an
administrator of the organization. This means you can perform all actions within
this organization, with no limitations.
## Invite new organization members

Navigate to the **members section** of your organization, via the homepage, or
via your organization dashboard.
Follow these steps to invite new members to your organization:
1. Click **invite new member**.
2. Enter the **email address** of the person you want to invite.
3. Select their **role**, i.e. whether they will be an administrator or a user.
4. Optionally, you can add a **message** to be included in the invitation email.
5. Click **confirm** to go to the list of your organization's members. Your
email invitation has now been sent, and you see in the list that it is
pending.
## Manage an organization
Navigate to the **organization dashboard**.
Click **manage organization** to see the available actions. You can only perform
these actions if you have administrator rights for this organization.
* **change name** - Changes the organization name without any further impact.
* **delete organization** - Removes the organization from the platform.
On the organization dashboard you can:
* See all applications in that organization
* See all members of the organization
* See all the internal applications and clients if in partner mode
You can only delete an organization when it has no applications related to it.
Applications have to be deleted one by one, once all their related resources
(e.g. networks, nodes, smart contract sets, etc.) have been deleted.
## Create an application
An application is the context in which you organize your networks, nodes, smart
contract sets and any other related blockchain resource.
You will always need to create an application before you can deploy or join
networks, and add nodes.
## How to create a new application

### Access application creation
In the upper right corner of any page, click the **grid icon**
### Navigate & create
* Navigate to your workspace
* Click **create new application**
### Configure application
* Choose a **name** for your application
* Click **confirm** to create the application
First, install the [SDK CLI](https://github.com/settlemint/sdk/blob/main/sdk/cli/README.md#usage) as a global dependency.
Then, ensure you're authenticated. For more information on authentication, see the [SDK CLI documentation](https://github.com/settlemint/sdk/blob/main/sdk/cli/README.md#login-to-the-platform).
```bash
settlemint login
```
Create an application:
```bash
settlemint platform create application
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create application
const createApp = async () => {
const result = await client.application.create({
workspaceUniqueName: "your-workspace",
name: "myApp"
});
console.log('Application created:', result);
};
// List applications
const listApps = async () => {
const apps = await client.application.list("your-workspace");
console.log('Applications:', apps);
};
// Read application details
const readApp = async () => {
const app = await client.application.read("app-unique-name");
console.log('Application details:', app);
};
// Delete application
const deleteApp = async () => {
await client.application.delete("application-unique-name");
};
```
Get your access token from the platform UI under user settings → API tokens.
## Manage an application
The SettleMint platform dashboard provides a centralized view of blockchain
infrastructure, offering real-time insights into system components. With health
status indicators, including error and warning counts, it ensures system
stability while enabling users to proactively address potential issues. Resource
usage tracking helps manage costs efficiently, providing month-to-date expense
insights.
Each component features a "details" link for quick access to in-depth
information, while the intuitive navigation panel allows seamless access to key
modules such as audit logs, access tokens, and insights. Built-in support
options further enhance usability, ensuring users can quickly troubleshoot and
resolve issues.

Navigate to your application and click **manage app** to see available actions:
* View application details
* Update application name
* Delete application
```bash
# List applications
settlemint platform list applications
# Delete application
settlemint platform delete application
```
```typescript
// List applications
await client.application.list("your-workspace");
// Read application
await client.application.read("app-unique-name");
// Delete application
await client.application.delete("app-unique-name");
```
All operations require appropriate permissions in your workspace.
Congratulations!
You have successfully created an organization and added an application within
it. From here, you can proceed to deploy a network, add nodes, a load balancer,
and a blockchain explorer.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/deploy-custom-services.mdx
meta: {
"title": "Host dApp UI or custom services",
"description": "How to deploy containerised application frontend or other custom services"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
Deploying frontend applications or custom backend services on SettleMint can be
done through custom deployments, which allow you to run containerized
applications using your own Docker images. This enables seamless integration of
user interfaces, REST APIs, microservices, or other utilities directly within
the blockchain-powered environment of your application.
The typical use cases include hosting React/Vue/Next.js-based UIs, creating
custom indexers or oracles, exposing specialized API services, or deploying
off-chain business logic in containerized environments. These deployments are
sandboxed, stateless, and run in secure, managed infrastructure, making them
suitable for both development and production.
To get started, you'll first need to containerize your application (if not
already done) and push the image to a container registry; this can be Docker Hub,
GitHub Container Registry, or a private registry. The image must be built for
the AMD64 (x86-64) architecture, as the SettleMint infrastructure currently
supports AMD64-based workloads.
Once your image is ready, you can initiate a custom deployment through the
platform UI, CLI, or SDK. You'll provide the container image path, optional
environment variables, deployment region, and resource configurations. After the
container spins up successfully, your service will be publicly accessible via
the auto-assigned endpoint. For frontend apps, this can act as your live
production URL.
For applications requiring a custom domain, SettleMint allows you to bind domain
names to the deployed container. You can configure the domain in the platform
and then update your DNS records accordingly. The platform supports both ALIAS
records for top-level domains and CNAME records for subdomains. SSL/TLS
certificates are automatically handled unless you opt for a custom cert setup.
Once the deployment is live, you can manage it using the custom deployment
dashboard in the platform. This includes editing environment variables,
restarting the container, updating the image version, checking logs, and
monitoring availability. You can also script or automate these tasks using the
SDK or CLI as needed.
A few considerations: custom deployments are stateless by design, so any data
you want to persist should be stored using services like Hasura for off-chain
database functionality or MinIO/IPFS for file storage. The container's
filesystem is read-only to enhance security and portability. Additionally, apps
won't run with root privileges, so ensure your container adheres to standard
non-root user practices.
This feature is especially useful when you need to tightly couple your UI or
service logic with the on-chain components, enabling a clean, integrated workflow
for dApps, admin consoles, analytics dashboards, API bridges, or token utility
services. It offers flexibility without leaving the SettleMint ecosystem, all
while adhering to scalable and cloud-native design principles.
## How to use custom deployments to host application frontend or other custom services in SettleMint platform
A custom deployment allows you to deploy your own Docker images, such as
frontend applications, on the SettleMint platform. This feature provides
flexibility for integrating custom solutions within your blockchain-based
applications.

## Create a custom deployment
1. Prepare your container image and push it to a container registry (public or private).
2. In the SettleMint platform, navigate to the custom deployments section.
3. Click on the "add custom deployment" button to create a new deployment.
4. Provide the necessary details:
* Container image path (e.g., registry.example.com/my-app:latest)
* Container registry credentials (if using a private registry)
* Environment variables (if required)
* Custom domain information (if applicable)
5. Configure any additional settings as needed.
6. Click on 'confirm' and wait for the custom deployment to be in the running status.
```bash
# Create a custom deployment
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--port 3000 \
--provider gcp \
--region europe-west1
# With environment variables
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--env-vars NODE_ENV=production,DEBUG=false
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
const createDeployment = async () => {
const result = await client.customDeployment.create({
applicationId: "app-123",
name: "my-deployment",
imageRepository: "registry.example.com",
imageName: "my-app",
imageTag: "latest",
port: 3000,
provider: "gcp",
region: "europe-west1",
environmentVariables: {
NODE_ENV: "production"
}
});
};
```
## DNS configuration for custom domains
When using custom domains with your custom deployment, you'll need to configure
your DNS settings correctly. Here's how to set it up:
1. **Add custom domain to the SettleMint platform**:
* Navigate to your custom deployment in the SettleMint platform.
* In the manage custom deployment menu, click on the edit custom deployment
action.
* Locate the custom domains configuration section.
* Enter your desired custom domain (e.g., example.com for top-level domain or
app.example.com for subdomain).
* Save the changes to update your custom deployment settings.
2. **Obtain your application's hostname**: After adding your custom domain, the
SettleMint platform will provide you with an ALIAS (for top-level domains) or
CNAME (for subdomains) record. This can be found in the "connect" tab of your
custom deployment.
3. **Access your domain's DNS settings**: Log in to your domain registrar or DNS
provider's control panel.
4. **Configure DNS records**:
For Top-Level Domains (e.g., example.com):
* Remove any existing A and AAAA records for the domain you're configuring.
* Remove any existing A and AAAA records for the www domain (e.g.,
[www.example.com](http://www.example.com)) if you're using it.
```
ALIAS example.com gke-europe.settlemint.com
ALIAS www.example.com gke-europe.settlemint.com
```
For Subdomains (e.g., app.example.com):
```
CNAME app.example.com gke-europe.settlemint.com
```
5. **Set TTL (Time to Live)**:
* Set a lower TTL (e.g., 300 seconds) initially to allow for quicker
propagation.
* You can increase it later for better caching (e.g., 3600 seconds).
6. **Verify DNS propagation**:
* Use online DNS lookup tools to check if your DNS changes have propagated.
* Note that DNS propagation can take up to 48 hours, although it's often much
quicker.
7. **SSL/TLS configuration**:
* The SettleMint platform typically handles SSL/TLS certificates
automatically for both top-level domains and subdomains.
* If you need to use your own certificates, please contact us for assistance
and further instructions.
Note: The configuration process is similar for both top-level domains and
subdomains. The main difference lies in the type of DNS record you create (ALIAS
for top-level domains, CNAME for subdomains) and whether you need to remove
existing records.
## Manage custom deployments
1. Navigate to your application's **custom deployments** section
2. Click on a deployment to:
* View deployment status and details
* Manage environment variables
* Configure custom domains
* View logs
* Check endpoints
```bash
# List custom deployments
settlemint platform list custom-deployments --application my-app
# Get deployment details
settlemint platform read custom-deployment my-deployment
# Restart deployment
settlemint platform restart custom-deployment my-deployment
# Edit deployment
settlemint platform edit custom-deployment my-deployment \
--container-image registry.example.com/my-app:v2
```
```typescript
// List deployments
const listDeployments = async () => {
const deployments = await client.customDeployment.list("my-app");
};
// Get deployment details
const getDeployment = async () => {
const deployment = await client.customDeployment.read("deployment-unique-name");
};
// Restart deployment
const restartDeployment = async () => {
await client.customDeployment.restart("deployment-unique-name");
};
// Edit deployment
const editDeployment = async () => {
await client.customDeployment.edit("deployment-unique-name", {
imageTag: "v2"
});
};
```
## Limitations and considerations
When using custom deployment, keep the following limitations in mind:
1. **No root user privileges**: Your application will run without root user
privileges for security reasons.
2. **Read-only filesystem**: The filesystem is read-only. For data persistence,
consider using:
* Hasura: A GraphQL engine that provides a scalable database solution. See
[Hasura](/building-with-settlemint/hasura-backend-as-a-service). A minimal
example follows this list.
* Other external services: Depending on your specific needs, you may use
other cloud-based storage or database services.
3. **Stateless applications**: Your applications should be designed to be
stateless. This ensures better scalability and reliability in a cloud
environment.
4. **Use AMD64-based images**: Currently, our platform supports AMD64 (x86-64)
container images. Ensure your Docker images are built for the AMD64 architecture
to guarantee smooth compatibility with our infrastructure.
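To make the read-only filesystem point concrete, here is a minimal sketch of persisting data to Hasura over GraphQL instead of writing to disk. The endpoint, admin secret, and `notes` table below are illustrative placeholders, not values provisioned by the platform:
```typescript
// Minimal sketch: write a row to Hasura instead of the (read-only) local
// filesystem. Requires Node 18+ for the global fetch API. All names below
// (URL, secret, notes table) are placeholders for your own setup.
async function saveNote(note: string): Promise<void> {
  const response = await fetch("YOUR_HASURA_URL/v1/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": "YOUR_ADMIN_SECRET",
    },
    body: JSON.stringify({
      // Hasura auto-generates an insert_<table>_one mutation per tracked table
      query: `mutation ($note: String!) {
        insert_notes_one(object: { note: $note }) { id }
      }`,
      variables: { note },
    }),
  });
  const result = await response.json();
  console.log("Inserted row:", result.data?.insert_notes_one);
}
```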
## Best practices
* Design your applications to be stateless and horizontally scalable
* Use environment variables for configuration to make your deployments more
flexible (see the sketch after this list)
* Implement proper logging to facilitate debugging and monitoring
* Regularly update your container images to include the latest security patches
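As a minimal illustration of the environment-variable practice, a Node.js service might read its configuration like this (all variable names are illustrative):
```typescript
// Read configuration from environment variables with sensible defaults so the
// same container image can run unchanged across environments.
const config = {
  port: Number(process.env.PORT ?? 3000),
  logLevel: process.env.LOG_LEVEL ?? "info",
  apiBaseUrl: process.env.API_BASE_URL ?? "http://localhost:8080",
};

console.log(`Listening on port ${config.port} at log level ${config.logLevel}`);
```
Values are then supplied per deployment via the `--env-vars` flag or the environment variables section in the platform UI, without rebuilding the image.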
Custom deployment offers a powerful way to extend the capabilities of your
blockchain solutions on the SettleMint platform. By following these guidelines
and best practices, you can seamlessly integrate your custom applications into
your blockchain ecosystem.
Custom deployments support automatic SSL/TLS certificate management for custom
domains.
Congratulations!
You have successfully deployed your application front end and have a working
full-stack application built on SettleMint tools and services.
We hope your journey was smooth, please write to us at [support@settlemint.com](mailto:support@settlemint.com)
for any help or feedback.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/deploy-smart-contracts.mdx
meta: {
"title": "Deploy smart contracts",
"description": "Guide to deploy smart contracts and sub-graphs"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To begin, you'll need to write your Solidity smart contract that defines
your application's business logic. This includes designing the data
structure using struct, storing the data with mapping, and emitting events
to support off-chain indexing. Once written, the contract should be placed
in the contracts/ folder inside your code studio workspace.
Next, you need to prepare a deployment script using Hardhat Ignition. This
script should go into the ignition/modules/ folder and will declare how your
smart contract should be deployed. You'll use the buildModule function to
specify which contract to deploy and how it should be initialized.
After setting up the script, you should compile the contract. This step
generates the necessary build artifacts, including the ABI and bytecode,
which are essential for testing, deploying, and integrating the contract
with other components. Depending on the tool used (Hardhat or Foundry), the
output will be stored in the artifacts/ or out/ directory respectively.
Once compiled, it's important to thoroughly test your contract using either
Foundry or Hardhat. These tests will simulate real-world conditions. Writing
these tests helps you catch logic errors early before deployment.
When the contract passes all tests, you're ready to deploy. Start your local
network using the Hardhat - start network script and run the deployment
script through the IDE task manager. You'll be prompted to select your
custom deployment script file before the deployment begins.
Finally, to deploy to a SettleMint-hosted blockchain network, authenticate
using the SettleMint login script, select the appropriate node and private
key, and confirm deployment. The deployed address will be saved in a JSON
file under ignition/deployments/, which can then be used in middleware or
frontend applications to interact with the contract.
## Learning with a user data manager smart contract example
The goal of this tutorial is to design and build a simple user data manager
using Solidity. While the visible use case is centered around managing user data
(such as name, email, age, etc.), the hidden objective is to demonstrate the
core thought process behind building a smart contract that can store, update,
read, and soft delete data on the blockchain.
This example is intentionally kept simple and non-technical in terms of
blockchain identity (no wallets or signatures involved) to help beginners focus
on the fundamentals of:
* Designing smart contract data structures (structs and mappings)
* Writing public and restricted functions to interact with data
* Emitting and responding to events
* Handling update and soft-delete logic to mimic realistic scenarios
(transaction data is never deleted on a blockchain; a more recent entry about
the record is simply added in a newer block)
By the end of this tutorial, you'll not only learn the foundational patterns
that apply to many real-world blockchain applications but also understand how to
develop and deploy smart contracts on SettleMint platform.
## 1. Let's start with the solidity smart contract code
A smart contract is a self-executing program deployed on the blockchain that
defines rules and logic for how data or assets are managed without relying on
intermediaries. In this tutorial, we are writing our smart contract using
Solidity, the most widely adopted programming language for Ethereum and
EVM-compatible blockchains. Solidity is a statically typed, contract-oriented
language designed specifically for writing smart contracts that run on the
Ethereum Virtual Machine (EVM).
If you're new to Solidity or want to deepen your understanding, here are some
helpful resources:
* Official Solidity Documentation: [https://soliditylang.org/](https://soliditylang.org/)
* Solidity by Example (interactive guide): [https://solidity-by-example.org](https://solidity-by-example.org)
* CryptoZombies (gamified Solidity learning): [https://cryptozombies.io/en/solidity](https://cryptozombies.io/en/solidity)
These resources provide both foundational knowledge and hands-on coding
exercises to help you become comfortable with writing and deploying smart
contracts.
In your learning phase, you can also use ChatGPT: [https://chatgpt.com/](https://chatgpt.com/) or any of
your go-to AI tools to generate basic Solidity smart contracts.
### Example userdata smart contract solidity code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
/**
* @title UserData
* @notice This contract manages user profiles through create, update, and delete operations.
* It emits events for each operation to enable off-chain indexing and notifications.
*/
contract UserData {
// ===================================================
// Section 1: Structs
// ===================================================
/**
* @notice Struct 1.1: Represents a user's profile.
* @param name Full name of the user.
* @param email Email address of the user.
* @param age Age of the user.
* @param country Country of residence.
* @param isKYCApproved Boolean flag indicating if KYC has been approved.
* @param isDeleted Boolean flag indicating if the profile is soft-deleted.
*/
struct UserProfile {
string name;
string email;
uint8 age;
string country;
bool isKYCApproved;
bool isDeleted;
}
// ===================================================
// Section 2: Storage
// ===================================================
/**
* @notice Storage 2.1: Mapping from a unique user ID to a user profile.
*/
mapping(uint256 => UserProfile) public profiles;
// ===================================================
// Section 3: Events
// ===================================================
/**
* @notice Event 3.1: Emitted when a new profile is created.
* @dev Emits full profile details for indexing by off-chain systems.
* @param userId The unique identifier for the user.
* @param name The user's full name.
* @param email The user's email address.
* @param age The user's age.
* @param country The user's country of residence.
* @param isKYCApproved Whether the user is KYC approved.
*/
event ProfileCreated(
uint256 indexed userId,
string name,
string email,
uint8 age,
string country,
bool isKYCApproved
);
/**
* @notice Event 3.2: Emitted when an existing profile is updated.
* @dev Emits updated profile details for indexing by off-chain systems.
* @param userId The unique identifier for the user.
* @param name The updated full name.
* @param email The updated email address.
* @param age The updated age.
* @param country The updated country.
* @param isKYCApproved The updated KYC approval status.
*/
event ProfileUpdated(
uint256 indexed userId,
string name,
string email,
uint8 age,
string country,
bool isKYCApproved
);
/**
* @notice Event 3.3: Emitted when a profile is soft-deleted.
* @param userId The unique identifier for the user.
*/
event ProfileDeleted(uint256 indexed userId);
// ===================================================
// Section 4: Functions
// ===================================================
/**
* @notice Function 4.1: Creates a new user profile.
* @dev The function reverts if a profile already exists for the given userId (unless it's soft-deleted).
* @param userId Unique identifier for the user.
* @param name The user's full name.
* @param email The user's email address.
* @param age The user's age.
* @param country The user's country of residence.
* @param isKYCApproved Boolean flag indicating if KYC is approved.
*/
function createProfile(
uint256 userId,
string memory name,
string memory email,
uint8 age,
string memory country,
bool isKYCApproved
) public {
// 4.1.1 Allow creation if profile is soft-deleted or does not exist (empty name indicates non-existence)
require(
profiles[userId].isDeleted || bytes(profiles[userId].name).length == 0,
"Profile already exists"
);
// 4.1.2 Create and store the new profile
profiles[userId] = UserProfile({
name: name,
email: email,
age: age,
country: country,
isKYCApproved: isKYCApproved,
isDeleted: false
});
// 4.1.3 Emit full profile data so off-chain indexers like The Graph can index it
emit ProfileCreated(userId, name, email, age, country, isKYCApproved);
}
/**
* @notice Function 4.2: Updates an existing user profile.
* @dev Reverts if the profile does not exist or has been soft-deleted.
* @param userId Unique identifier for the user.
* @param name New full name for the user.
* @param email New email address for the user.
* @param age New age for the user.
* @param country New country of residence for the user.
* @param isKYCApproved New KYC approval status.
*/
function updateProfile(
uint256 userId,
string memory name,
string memory email,
uint8 age,
string memory country,
bool isKYCApproved
) public {
// 4.2.1 Ensure the profile exists and is not deleted
require(
bytes(profiles[userId].name).length > 0 && !profiles[userId].isDeleted,
"Profile does not exist or has been deleted"
);
// 4.2.2 Update the profile with new details
profiles[userId] = UserProfile({
name: name,
email: email,
age: age,
country: country,
isKYCApproved: isKYCApproved,
isDeleted: false
});
// 4.2.3 Emit updated full profile data so subgraph can index changes
emit ProfileUpdated(userId, name, email, age, country, isKYCApproved);
}
/**
* @notice Function 4.3: Retrieves the profile of a given user.
* @dev Reverts if the profile has been soft-deleted or does not exist.
* @param userId Unique identifier for the user.
* @return The UserProfile struct containing the user's information.
*/
function getProfile(uint256 userId) public view returns (UserData.UserProfile memory) {
// 4.3.1 Ensure the profile exists (not soft-deleted)
require(!profiles[userId].isDeleted, "Profile not found or has been deleted");
return profiles[userId];
}
/**
* @notice Function 4.4: Soft-deletes a user profile.
* @dev Marks a profile as deleted without removing its data, reverting if the profile doesn't exist or is already deleted.
* @param userId Unique identifier for the user.
*/
function deleteProfile(uint256 userId) public {
// 4.4.1 Ensure that the profile exists and is not already deleted
require(
bytes(profiles[userId].name).length > 0 && !profiles[userId].isDeleted,
"Profile already deleted or doesn't exist"
);
// 4.4.2 Soft-delete the profile by setting its isDeleted flag to true
profiles[userId].isDeleted = true;
// 4.4.3 Emit event to notify that the profile has been deleted
emit ProfileDeleted(userId);
}
}
```
> Please ensure that the smart contract emits all required parameters in every
> event; otherwise, parameters that are not emitted will not be available during
> indexing.
## Smart contract, events & functions overview
In a smart contract, we define a clear set of events and functions to manage the
lifecycle of user profiles. These building blocks enable seamless interaction
with the contract, supporting profile creation, updates, retrieval, and soft
deletion, while ensuring all changes are traceable through emitted events.
Events play a crucial role in allowing off-chain services like The Graph to
listen for and respond to changes in contract state, whereas functions provide
the core interface for interacting with profile data on-chain.
Below is a structured overview of the key events and functions included in the
contract:
| # | Events | Parameters | Description |
| --- | ---------------- | ---------------------------------------------------------------------------------------------------- | -------------------------------------- |
| 3.1 | `ProfileCreated` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | Emitted when a new profile is created |
| 3.2 | `ProfileUpdated` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | Emitted when a profile is updated |
| 3.3 | `ProfileDeleted` | `uint256 userId` | Emitted when a profile is soft-deleted |
| # | Functions | Parameters | Returns | Description |
| --- | --------------- | ---------------------------------------------------------------------------------------------------- | -------------------- | ----------------------------------------- |
| 4.1 | `createProfile` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | – | Creates a new user profile |
| 4.2 | `updateProfile` | `uint256 userId`, `string name`, `string email`, `uint8 age`, `string country`, `bool isKYCApproved` | – | Updates an existing profile |
| 4.3 | `getProfile` | `uint256 userId` | `UserProfile memory` | Retrieves the profile if not soft-deleted |
| 4.4 | `deleteProfile` | `uint256 userId` | – | Soft-deletes the profile |
## Crud mapping for the smart contract
This table maps traditional Web2-style CRUD operations to the equivalent
Solidity functions in the smart contract:
| **CRUD** | **Solidity Function** | **Explanation** |
| ---------- | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Create** | `createProfile()` | Adds a new user profile to the blockchain using a unique `userId`. This simulates an `INSERT` operation in databases. It checks that the profile does not already exist (unless soft-deleted) and stores the user's details. Emits `ProfileCreated` with full data for off-chain indexing. |
| **Read** | `getProfile()` | Retrieves an existing profile by its `userId` , similar to a `SELECT` query in SQL. It returns the user's profile only if it hasn't been soft-deleted. This function is marked `view`, meaning it does not modify blockchain state and can be called without gas. |
| **Update** | `updateProfile()` | Modifies all fields of an existing user profile. Acts like an `UPDATE` in Web2 databases. It ensures the profile exists and is not deleted, then updates it with the provided values. Emits `ProfileUpdated` with full details for off-chain use. |
| **Delete** | `deleteProfile()` | Performs a **soft delete** by setting the `isDeleted` flag to `true`, without removing the actual data from storage. This is similar to a logical delete used in many enterprise databases. The data remains on-chain (for auditability), but `getProfile()` will no longer return it. Emits `ProfileDeleted`. |
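To make the mapping concrete, here is a minimal sketch of driving all four operations from a script, assuming ethers v6 and placeholder values for the RPC URL, private key, and deployed contract address:
```typescript
import { ethers } from "ethers";

// Placeholders: supply your own RPC URL, private key, and deployed address.
const provider = new ethers.JsonRpcProvider("RPC_URL");
const signer = new ethers.Wallet("PRIVATE_KEY", provider);

// Human-readable ABI fragments for the four CRUD functions.
const userData = new ethers.Contract(
  "CONTRACT_ADDRESS",
  [
    "function createProfile(uint256,string,string,uint8,string,bool)",
    "function updateProfile(uint256,string,string,uint8,string,bool)",
    "function getProfile(uint256) view returns (tuple(string name, string email, uint8 age, string country, bool isKYCApproved, bool isDeleted))",
    "function deleteProfile(uint256)",
  ],
  signer
);

// Create, read, update, then soft-delete, mirroring the table above.
await (await userData.createProfile(1, "Alice", "alice@email.com", 30, "USA", true)).wait();
console.log(await userData.getProfile(1));
await (await userData.updateProfile(1, "Alice", "alice@new.com", 31, "USA", true)).wait();
await (await userData.deleteProfile(1)).wait();
```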
## 2. Let's add this smart contract to code studio
When you deploy an **empty** smart contract set on the SettleMint platform, you
get a very simple **Counter.sol** contract as an example; you may delete it.
In the contracts folder create a file called **UserData.sol** and copy paste the
content of the above smart contract code.
## 3. Prepare deployment script
In **ignition** folder, you will find a folder called **modules**, there you
will find a **main.ts** file which is basically a contract deployment script.
You may delete it if you already know or once you understand the structure. In
this folder create a file called **deployUser.ts**
### Understanding the deployment script code structure.
```ts
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const UserDataModule = buildModule("UserDataModule", (m) => {
const userdata = m.contract("UserData");
return { userdata };
});
export default UserDataModule;
```
**Let's understand the key parts of this code:**
This deployment script uses Hardhat Ignition to define and execute the
deployment of a smart contract. It begins by importing the buildModule function
from the Ignition library, which is used to define a deployment module. The
module is named "UserDataModule" and is constructed using a callback function
that receives a context object m.
Within this function, m.contract("UserData") declares that a contract named
UserData (which must match the contract name inside the Solidity source file)
should be deployed. This is how Ignition knows which contract is being referred
to.
The deployed contract instance is stored in a variable called userdata. This
instance is then returned from the module so it can be accessed later if needed.
Finally, the module is exported as the default export so it can be run by
Hardhat's Ignition system using the CLI.
## 4. Compile the smart contract code
To run the various scripts that compile, test, and deploy smart contracts and
subgraphs, open the task manager section in the top-left area of the IDE. When
a Solidity smart contract is compiled, the source code is transformed into
low-level bytecode that can be executed on the Ethereum Virtual Machine (EVM).
This process also generates important metadata such as the ABI (Application
Binary Interface), which defines how external applications or scripts can
interact with the contract's functions and events. Additionally, the compiler
produces debugging information, source maps, and compiler settings. These
outputs are essential for deploying, testing, and integrating the contract with
dApps or frontend applications.
## Foundry build
If you compile using Foundry Build, a folder named after your smart contract
file is created in the **out** folder; within it, contractname.json and
contractname.metadata.json are generated. Here, contractname is the name of the
contract inside the Solidity file.

## Hardhat build
If you compile using Hardhat Build, a folder named after your smart contract
file is created in the **artifacts** folder; within it, artifacts.d.ts,
ContractName.d.ts, ContractName.dbg.ts, and ContractName.json are generated.
ContractName.json contains the ABI.

When you compile a Solidity smart contract in SettleMint, it processes .sol
files and generates various output artifacts needed for deployment and
interaction. For example, after compiling UserData.sol, you get the following
inside the artifacts/ directory:
📂 artifacts/contracts/UserData.sol/
* UserData.json – This is the main artifact file. It contains the ABI
(Application Binary Interface) and the compiler metadata
* UserData.dbg.json – Debugging info including source maps and AST
* UserData.d.ts – TypeScript definition file for better type safety when using
the contract in frontend or scripting environments
* artifacts.d.ts – Global TypeScript declarations for all compiled contracts
📂 artifacts/build-info/
* hash.json – Contains detailed compiler input/output and full metadata for the
build process, useful for verifying or analyzing compilation details
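For orientation, the ABI entry for createProfile inside UserData.json looks roughly like the following, shown here as a TypeScript constant and trimmed for brevity:
```typescript
// Illustrative ABI fragment for createProfile as emitted by the compiler;
// the real UserData.json holds one such entry per function and event.
const createProfileAbi = {
  type: "function",
  name: "createProfile",
  stateMutability: "nonpayable",
  inputs: [
    { name: "userId", type: "uint256" },
    { name: "name", type: "string" },
    { name: "email", type: "string" },
    { name: "age", type: "uint8" },
    { name: "country", type: "string" },
    { name: "isKYCApproved", type: "bool" },
  ],
  outputs: [],
};
```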
## 5. Test the smart contract
Smart contract testing is a critical part of the development lifecycle in
blockchain and decentralized application (dApp) projects. Since smart contracts
are immutable once deployed to the blockchain, bugs or vulnerabilities can
result in permanent loss of funds, data corruption, or security breaches.
Thorough testing ensures that smart contracts behave as expected under various
scenarios and edge cases before they go live on the mainnet.
Testing frameworks like Hardhat and Foundry provide robust tooling to write and
execute tests in Solidity or JavaScript/TypeScript. These frameworks offer
helpful utilities such as assertions, mock accounts, blockchain state
manipulation (e.g., time travel or snapshot/rollback), and expected reverts.
Additionally, testing libraries like forge-std/Test.sol (in Foundry) or chai (in
Hardhat) enable expressive and readable test assertions.
### Foundry test
In the **test** folder in IDE, create a **UserData.t.sol** file for Foundry test
script.
It uses forge-std/Test.sol, a powerful utility library provided by Foundry's
standard library (forge-std) that simplifies writing and executing tests for
smart contracts. It extends the base Solidity Test contract and includes a rich
set of assertions, cheatcodes, and debugging tools that make testing more
expressive and efficient.
When a test contract inherits from Test, it gains access to functions like
assertEq, assertTrue, fail, and testing cheatcodes such as vm.prank,
vm.expectRevert, vm.roll, and many more. These tools simulate complex behaviors
and edge cases in a local testing environment without the need to manually
manipulate the EVM state. For example, vm.expectRevert allows developers to
anticipate and verify error conditions, while assertEq simplifies comparisons
between expected and actual results.
```solidity
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.24;
import "forge-std/Test.sol";
import "../contracts/UserData.sol"; // Adjust the import path if needed
contract UserTest is Test {
UserData public user;
function setUp() public {
// Deploy the contract before each test
user = new UserData();
}
function testCreateProfile() public {
// Call createProfile
user.createProfile(1, "Alice", "alice@email.com", 30, "USA", true);
// Fetch the profile struct
UserData.UserProfile memory profile = user.getProfile(1);
// Assert values match what we set
assertEq(profile.name, "Alice");
assertEq(profile.email, "alice@email.com");
assertEq(profile.age, 30);
assertEq(profile.country, "USA");
assertEq(profile.isKYCApproved, true);
assertEq(profile.isDeleted, false);
}
function testUpdateProfile() public {
// First create a profile
user.createProfile(2, "Bob", "bob@email.com", 28, "UK", false);
// Update profile with new values
user.updateProfile(2, "Bob Updated", "bob@new.com", 29, "Canada", true);
// Fetch the updated profile
UserData.UserProfile memory profile = user.getProfile(2);
// Assert updated values
assertEq(profile.name, "Bob Updated");
assertEq(profile.email, "bob@new.com");
assertEq(profile.age, 29);
assertEq(profile.country, "Canada");
assertEq(profile.isKYCApproved, true);
assertEq(profile.isDeleted, false);
}
function testDeleteProfile() public {
// Create and delete a profile
user.createProfile(3, "Charlie", "charlie@email.com", 25, "Germany", true);
user.deleteProfile(3);
// Expect revert on reading a deleted profile
vm.expectRevert("Profile not found or has been deleted");
user.getProfile(3);
}
function testCannotCreateDuplicateProfile() public {
// Create the profile
user.createProfile(4, "Dan", "dan@email.com", 35, "India", false);
// Attempt to create with the same ID again should revert
vm.expectRevert("Profile already exists");
user.createProfile(4, "DanAgain", "dan@retry.com", 36, "India", true);
}
function testCannotUpdateNonexistentProfile() public {
// Try to update a profile that was never created
vm.expectRevert("Profile does not exist or has been deleted");
user.updateProfile(5, "Eve", "eve@email.com", 31, "Brazil", true);
}
function testCannotDeleteNonexistentProfile() public {
// Try to delete a profile that doesn't exist
vm.expectRevert("Profile already deleted or doesn't exist");
user.deleteProfile(6);
}
function testSoftDeletedCannotBeRead() public {
// Create and delete a profile
user.createProfile(7, "Zed", "zed@email.com", 44, "Japan", true);
user.deleteProfile(7);
// Trying to read it should revert
vm.expectRevert("Profile not found or has been deleted");
user.getProfile(7);
}
function testRecreateAfterSoftDelete() public {
// Create and delete a profile
user.createProfile(8, "Tom", "tom@email.com", 20, "Italy", true);
user.deleteProfile(8);
// Re-create it with new data (allowed due to soft-deletion)
user.createProfile(8, "TomNew", "tom@new.com", 21, "Spain", false);
UserData.UserProfile memory profile = user.getProfile(8);
assertEq(profile.name, "TomNew");
assertEq(profile.email, "tom@new.com");
assertEq(profile.age, 21);
assertEq(profile.country, "Spain");
assertEq(profile.isKYCApproved, false);
assertEq(profile.isDeleted, false);
}
}
```

### Hardhat test
In the **test** folder in IDE, create a **UserData.ts** file for HardHat test
script.
```ts
import { loadFixture } from "@nomicfoundation/hardhat-toolbox-viem/network-helpers";
import { expect } from "chai";
import hre from "hardhat";
// Describe our test suite for the UserData contract
describe("UserData", function () {
// deployUserFixture deploys the UserData contract using viem and returns the deployed contract instance
// along with the address of the first wallet client.
async function deployUserFixture() {
// Deploy the UserData contract using viem.
// The contract name ("UserData") must match your contract's name.
const userContract = await hre.viem.deployContract("UserData");
// Get the first wallet client's account address to use as a signer for simulate calls.
const account = (await hre.viem.getWalletClients())[0].account.address;
return { userContract, account };
}
// Define a sample user profile object for tests.
const sampleProfile = {
userId: 1n, // BigInt literal is used for user IDs
name: "Alice",
email: "alice@example.com",
age: 30,
country: "Wonderland",
isKYCApproved: true,
};
// -------------------------------
// Tests for createProfile functionality
// -------------------------------
describe("createProfile", function () {
it("should create a new profile", async function () {
// Use loadFixture to deploy a fresh instance of the contract.
const { userContract } = await loadFixture(deployUserFixture);
// Call the write method for createProfile with sampleProfile data.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Read the stored profile from the contract using the read method.
const profile = (await userContract.read.getProfile([
sampleProfile.userId,
])) as {
name: string;
email: string;
age: number;
country: string;
isKYCApproved: boolean;
};
// Assert that the returned profile data matches our input values.
expect(profile.name).to.equal(sampleProfile.name);
expect(profile.email).to.equal(sampleProfile.email);
});
it("should not allow duplicate profile creation", async function () {
// Deploy a fresh instance using the fixture.
const { userContract, account } = await loadFixture(deployUserFixture);
// Create a profile with the sample data.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Attempt to simulate (dry-run) creating a duplicate profile.
// We use simulate.createProfile so that no state change occurs if it fails.
try {
await userContract.simulate.createProfile(
[sampleProfile.userId, "Bob", "bob@example.com", 25, "Utopia", false],
{ account }
);
// If no error is thrown, the test should fail.
expect.fail("Expected simulate.createProfile to revert");
} catch (err: any) {
// Check that an error is thrown.
expect(err).to.exist;
}
});
});
// -------------------------------
// Tests for updateProfile functionality
// -------------------------------
describe("updateProfile", function () {
it("should update an existing profile", async function () {
// Deploy a fresh instance.
const { userContract } = await loadFixture(deployUserFixture);
// First, create the profile using the sample data.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Update the profile's email using updateProfile.
await userContract.write.updateProfile([
sampleProfile.userId,
sampleProfile.name,
"alice@updated.com", // new email value
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Read the updated profile.
const updated = (await userContract.read.getProfile([
sampleProfile.userId,
])) as {
name: string;
email: string;
age: number;
country: string;
isKYCApproved: boolean;
};
// Verify that the email was updated.
expect(updated.email).to.equal("alice@updated.com");
});
it("should fail to update non-existent profile", async function () {
// Deploy a fresh instance.
const { userContract, account } = await loadFixture(deployUserFixture);
// Attempt to simulate updating a profile that does not exist.
try {
await userContract.simulate.updateProfile(
[999n, "Ghost", "ghost@void.com", 99, "Nowhere", false],
{ account }
);
expect.fail("Expected simulate.updateProfile to revert");
} catch (err: any) {
// Just ensure that an error was thrown.
expect(err).to.exist;
}
});
});
// -------------------------------
// Tests for deleteProfile functionality
// -------------------------------
describe("deleteProfile", function () {
it("should soft delete a profile", async function () {
// Deploy a fresh instance.
const { userContract } = await loadFixture(deployUserFixture);
// Create the profile.
await userContract.write.createProfile([
sampleProfile.userId,
sampleProfile.name,
sampleProfile.email,
sampleProfile.age,
sampleProfile.country,
sampleProfile.isKYCApproved,
]);
// Delete the profile.
await userContract.write.deleteProfile([sampleProfile.userId]);
// Try reading the profile, expecting it to revert.
try {
await userContract.read.getProfile([sampleProfile.userId]);
expect.fail("Expected getProfile to revert");
} catch (err: any) {
expect(err).to.exist;
}
});
it("should fail to delete a non-existent profile", async function () {
// Deploy a fresh instance.
const { userContract, account } = await loadFixture(deployUserFixture);
// Attempt to simulate deleting a profile that does not exist.
try {
await userContract.simulate.deleteProfile([123n], { account });
expect.fail("Expected simulate.deleteProfile to revert");
} catch (err: any) {
expect(err).to.exist;
}
});
});
});
```
This test script leverages Hardhat's modern support for viem, a lightweight and
fast alternative to Ethers.js designed for more efficient interaction with
Ethereum contracts. The tests use loadFixture from
hardhat-toolbox-viem/network-helpers to ensure test isolation and efficient
deployments: each test gets a clean contract instance to work with.
Inside the script, we define a fixture function (deployUserFixture) that deploys
the UserData contract and returns the deployed instance along with the first
wallet client's account address. The tests cover all core functionalities of the
contract: creating, updating, reading, and soft-deleting user profiles.
Assertions are written using Chai's expect syntax, while contract interactions
(like write.createProfile and read.getProfile) follow the viem pattern, making
the test code both concise and expressive.
Run the **hardhat - test** script in the task manager to test the smart contract.

Once the tests pass, start a local test network using the **hardhat - start
network** script in the task manager, then deploy the contract with the
**hardhat - deploy to local network** script.


If you click on **hardhat - deploy to local network** and nothing happens, you
are likely missing the step of selecting the correct deployment script and
pressing the enter key. You will see a message, **extra commandline arguments,
e.g. --verify (press 'enter' to confirm or 'escape' to cancel)**, in the top
middle of the IDE. Hit enter, and you will see **ignition/modules/main.ts**;
edit the last part to the correct filename (e.g. deployUserData.ts), i.e. the
name of the deployment script you created in the ignition folder, and hit enter
again to run the deployment script. This applies to all deploy cases, whether
on the local network or the platform network.
## 6. Deploy the smart contract to platform network
Use the **SettleMint - Login** script in the task manager to log in; you will
need your personal access token. To generate one, refer to
[Personal access tokens](/platform-components/security-and-authentication/personal-access-tokens).
When running the **hardhat - deploy to platform network** script, enter the path
of the deployment script, e.g. `ignition/modules/deployUserData.ts`.
> Before deploying to a network, do not forget to log in to the SettleMint
> platform via the **settlemint login** script. If the deploy task appears to do
> nothing, revisit the note above about selecting the correct deployment script.
Select the node to which you wish to deploy this smart contract. If you get an
error, please ensure that a private key was created and attached to the node on
which you wish to deploy the smart contract.
Select the private key you wish to use to deploy the smart contract. If you are
using a public network or a network with gas fees, make sure that this private
key's wallet is funded.
Select yes when prompted - **confirm deploy to network (network name)? ›
(y/N)**.
Wait for a few minutes for the contract to be deployed.
## Deployed contract address
The deployed contract address is stored in the `deployed_addresses.json` file
located in the `ignition/deployments` folder.
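If you need this address programmatically, for example to feed it into a
frontend or an API call, a minimal TypeScript sketch can read it from Ignition's
output. This assumes Ignition's default layout, where the deployment folder is
named after the chain ID (e.g. `chain-31337` locally) and keys look like
`ModuleName#ContractName`; adjust both to your setup.
```typescript
import { readFileSync } from "node:fs";

// Parse Ignition's record of deployed addresses (chain ID folder is an assumption).
const deployments = JSON.parse(
  readFileSync(
    "ignition/deployments/chain-31337/deployed_addresses.json",
    "utf8"
  )
) as Record<string, string>;

// Find the UserData entry regardless of which module deployed it.
const entry = Object.entries(deployments).find(([key]) =>
  key.endsWith("#UserData")
);
if (entry) {
  const [moduleKey, address] = entry;
  console.log(`UserData (${moduleKey}) deployed at ${address}`);
}
```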

Congratulations!
You have successfully compiled, tested, and deployed your smart contract on a
blockchain network. You can now proceed to the middleware section to get APIs
for smart contract transactions, writing data to the chain, and reading it back
in a structured format.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/integration-studio.mdx
meta: {
"title": "Integration studio",
"description": "Visual workflow builder for custom APIs and integrations"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
The Integration Studio is a dedicated low-code environment that enables
developers and business users to build backend workflows, API endpoints, and
custom logic using a visual interface. Powered by Node-RED, it offers an
intuitive drag-and-drop experience for orchestrating flows between smart
contracts, external APIs, databases, and storage systems, all within the
SettleMint ecosystem.
Instead of writing boilerplate backend code, developers will define logic using
nodes and flows, visually representing how data moves between services. These
flows can be triggered by webhooks, user interactions, smart contract events, or
timed executions. Under the hood, each Integration Studio is deployed as an
isolated and scalable container that supports JavaScript-based execution,
environment configuration, and secure API access.
Each node in the flow is designed to perform a specific task, such as receiving
HTTP input, transforming payloads, calling external APIs, or executing custom
JavaScript functions. These nodes are connected inside a flow, which represents
a unit of logic or an end-to-end integration path. You can create multiple flows
within the same Integration Studio instance, allowing you to modularize your
business logic and deploy distinct endpoints for different application use
cases.
When developers deploy the Integration Studio to their application, a secure
Node-RED editor is provisioned, accessible via the platform UI. The visual
interface includes common built-in nodes and pre-integrated libraries like
ethers (for blockchain interaction), ipfsHttpClient (for decentralized storage),
and others. Additional libraries can also be added manually in the project
settings.
A common scenario might involve triggering a flow via an HTTP request, fetching
on-chain data from a smart contract using ethers.js, formatting the result, and
returning it as a JSON response. These kinds of flows can be designed in
minutes, providing API endpoints that are automatically hosted and secured by
SettleMint infrastructure.
Developers can configure API Keys to restrict access to these endpoints and
monitor calls using the platform's access token management system. Every
endpoint is served over HTTPS and can be integrated with frontend dApps, backend
services, or third-party platforms.
The simplicity of visual programming, combined with the power of JavaScript,
makes Integration Studio a robust backend builder tailored for blockchain
applications. It significantly reduces development time while maintaining
flexibility for custom use cases. Developers gain fine-grained control over how
their dApp behaves off-chain, without leaving the SettleMint environment.
The SettleMint Integration Studio is a low-code development environment which
enables you to implement business logic for your application simply by dragging
and dropping.
Under the hood, the Integration Studio is powered by a **Node-RED** instance
dedicated to your application. It is a low-code programming platform built on
Node.js and designed for event-driven application development.
[Learn more about Node-RED here](https://nodered.org/docs/).
## Basic concepts
The business logic for your application can be represented as a sequence of
actions. Such a sequence of actions is represented by a **flow** in the
Integration Studio. To bring your application to life, you need to create flows.
**Nodes** are the smallest building blocks of a flow.
### Nodes
Nodes are the smallest building blocks. A node can have at most one input port
and multiple output ports. It is triggered by some event (e.g. an HTTP
request); when triggered, it performs some user-defined actions and generates
an output. This output can be passed to the input of another node to trigger
another action.
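For intuition, the logic inside a function node is just a short script that
receives a `msg` object on the node's input port and returns it onward to the
next node. A minimal, hypothetical function-node body might look like this
(Node-RED function nodes run plain JavaScript):
```javascript
// Hypothetical function-node body: wrap whatever arrived on the input
// port with a timestamp, then pass the message to the next node.
msg.payload = {
  receivedAt: new Date().toISOString(),
  original: msg.payload,
};
return msg;
```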
### Flows
A flow is represented as a tab within the editor workspace and is the main way
to organize nodes. You can have more than one set of connected nodes in a flow
tab.
The Integration Studio makes creating flows fast: drag and drop nodes in the
workspace and connect them by clicking from the output port of one node to the
input port of another to build complex flows. This lets you visualize the
orchestration and interaction between your components (your nodes). Since you
can clearly see the sequence of actions your application will perform, the
result is not only easier to interpret but also much easier to debug later.
The use cases include interacting with other web services, applications, and
even IoT devices - orchestrating them for any kind of purpose to bring your
business solution to life.
[Learn more about the basic concepts of Node-RED here](https://nodered.org/docs/user-guide/concepts)
## Adding the integration studio
Navigate to the **application** where you want to add the integration studio.
Click **Integration tools** in the left navigation, and then click **Add an
integration tool**. This opens a form.

### Select integration studio
Select **Integration Studio** and click **Continue** to proceed.
### Choose a name
Choose a **name** for your Integration Studio. Pick one that will be easily
recognizable in your dashboards (e.g. Crowdsale Flow).
### Select deployment plan
Choose a deployment plan. Select the type, cloud provider, region and resource
pack.
[More about deployment plans](/launching-the-platform/managed-cloud-saas/deployment-plans)
### Confirm setup
You can see the **resource cost** for the Integration Studio displayed at the
bottom of the form. Click **Confirm** to add the Integration Studio.
## Using the integration studio
When the Integration Studio is deployed, click on it from the list, and go to
the **Interface** tab to start building your flows. You can also view the
interface in full screen mode.
Once the Integration Studio interface is loaded, you will see two flow tabs:
"Flow 1" and "Example". Head over to the **"Example" tab** to see some
full-blown example flows to get you started.
Double-click any of the nodes to see the code they are running. This code is
written in JavaScript, and it represents the actions the particular node
performs.

### Setting up a flow
Before we show you how to set up your own flow, we recommend reading this
[article by Node-RED on creating your first flow](https://nodered.org/docs/tutorials/first-flow).
Now let's set up an example flow together and build an endpoint to get the
latest block number of the Polygon Mumbai Testnet using the Integration Studio.
If you do not have a Polygon Mumbai Node, you can easily
[deploy a node](/platform-components/blockchain-infrastructure/blockchain-nodes)
first.
### Add http input node
Drag and drop a **Http In node** to listen for requests. If you double-click the node, you will see you have a couple parameters to set:
* `METHOD` - set it to `GET`. This is the HTTP method that your node is
configured to listen to.
* `URL` - set it to `/getLatestBlock`. This is the endpoint that your node will
listen on.
### Add function node
Drag and drop a **function node**. This is the node that will query the
blockchain for the block number. Double-click the node to configure it.
`rpcEndpoint` is the RPC URL of your Polygon Mumbai node, which you will find
under the **Connect tab** of the node.
`accessToken` - You will need an access token for your application. If you do
not have one, you can easily
[create an access token](/platform-components/security-and-authentication/application-access-tokens)
first.
Enter the following snippet in the Message tab:
```javascript
///////////////////////////////////////////////////////////
// Configuration //
///////////////////////////////////////////////////////////
const rpcEndpoint = "https://YOUR_NODE_RPC_ENDPOINT.settlemint.com";
const accessToken = "YOUR_APPLICATION_ACCESS_TOKEN_HERE";
///////////////////////////////////////////////////////////
// Logic //
///////////////////////////////////////////////////////////
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
msg.payload = await provider.getBlockNumber();
return msg;
///////////////////////////////////////////////////////////
// End //
///////////////////////////////////////////////////////////
```
**Note:** ethers and some ipfs libraries are already available by default and can be used like this:
```javascript
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
const ipfsHttpClient = global.get("ipfsHttpClient");
const client = ipfsHttpClient.create(`${ipfsEndpoint}/${accessToken}/api/v0`);
const uint8arrays = global.get("uint8arrays");
const itAll = global.get("itAll");
const data = uint8arrays.toString(
uint8arrays.concat(await itAll(client.cat(cid)))
);
```
If the library you need isn't available by default, you will need to import it
in the setup tab. Example for ethers providers:

### Add http response node
Drag and drop a **Http Response node** to reply to the request. Double-click and
configure:
* `Status code` - This is the HTTP status code that the node will respond with
after completion of the request. We set it to 200 (`OK`)
Click on the `Deploy` button in the top right corner to save and deploy your
changes.
### Test your endpoint
Now, go back to the **Connect tab** of your Integration Studio to see your **API
endpoint**, which looks something like
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com`.

You can now send requests to
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock` to get
the latest block number. Do not forget to create an API key for your
Integration Studio and pass it in the `x-auth-token` authorization header with
your request.
Example terminal command:
```bash
curl -H "x-auth-token: bpaas-YOUR_INTEGRATION_KEY_HERE" https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock
```
The API is live and protected by the authorization header, and you can
seamlessly integrate with your application.
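If you prefer calling the endpoint from code rather than curl, here is a
minimal sketch for Node.js 18+ (which ships a global `fetch`); the URL and the
`bpaas-...` key are placeholders for your own values:
```typescript
// Call the flow's endpoint with the Integration Studio API key.
const response = await fetch(
  "https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock",
  { headers: { "x-auth-token": "bpaas-YOUR_INTEGRATION_KEY_HERE" } }
);

// The flow returns the block number set in msg.payload.
console.log("Latest block number:", await response.json());
```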
You can access more than 4,000 pre-built modules from the built-in library.

You can use the Integration Studio to build very complex flows. Learn more
about the different types of flows in this
[Node-RED cookbook](https://cookbook.nodered.org/).
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/introduction-to-hedera.mdx
meta: {
"title": "Introduction to Hedera",
"description": "Understanding Hedera network and SettleMint's EVM implementation"
}
import { Callout } from "fumadocs-ui/components/callout";
For a comprehensive overview of Hedera as a supported blockchain, visit our
[Hedera network
overview](/documentation/supported-blockchains/L1-public-networks/hedera).
## Using Hedera with EVM compatibility
When working with Hedera through SettleMint, you'll be using Hedera's EVM
compatibility layer. This means:
* Your smart contracts are written in **standard Solidity**
* Addresses follow the Ethereum **0x format** rather than Hedera's native 0.0.x
format
* You can use **SettleMint's code studio IDE** and existing smart contract
templates directly with Hedera
* Deploy using **SettleMint's built-in tools** (task manager, SDK, CLI) with the
same workflow as other EVM chains
This EVM compatibility layer acts as a bridge between Ethereum's developer
experience and Hedera's performance advantages, allowing you to deploy the same
contracts you'd use on Ethereum while benefiting from Hedera's speed and lower
costs.
While working through the SettleMint platform, you don't need to worry about
the complexities of Hedera's native format - all conversion between EVM and
native Hedera formats is handled automatically.
## What is Hedera?
Hedera is an enterprise-grade public network that offers the security of
blockchain with significantly faster transaction speeds and lower costs. Built
on the hashgraph consensus algorithm, Hedera provides a foundation for
developers to create decentralized applications with predictable performance.
### Key features
* **High throughput**: Processes thousands of transactions per second
* **Fast finality**: Transactions finalize in 3-5 seconds
* **Low, predictable fees**: Transaction costs significantly lower than
traditional blockchains
* **Enterprise governance**: Governed by a council of global organizations
* **Energy efficient**: Uses proof-of-stake with minimal environmental impact
For more information, please refer to the
[Hedera documentation](https://docs.hedera.com/hedera)
## Hedera's EVM compatibility
Hedera offers full Ethereum Virtual Machine (EVM) compatibility, enabling
developers to deploy and interact with smart contracts using familiar Solidity
code and Ethereum tools. This compatibility layer provides the best of both
worlds:
* **Ethereum developer experience**: Use standard Solidity and existing Ethereum
tools
* **Hedera's performance benefits**: Leverage high throughput and low fees
* **No lock-in**: Your smart contracts work across EVM-compatible chains
```mermaid
flowchart TD
A[Solidity smart contract] --> B[SettleMint platform]
B --> C[JSON-RPC relay]
C --> D[Hedera EVM layer]
D --> E[Hedera network]
style A fill:#f9f9f9,stroke:#333,stroke-width:1px
style B fill:#e6f7ff,stroke:#333,stroke-width:1px
style C fill:#e6f7ff,stroke:#333,stroke-width:1px
style D fill:#f0f0f0,stroke:#333,stroke-width:1px
style E fill:#f0f0f0,stroke:#333,stroke-width:1px
```
## High-level differences: Hedera vs. Ethereum
The following table highlights foundational differences that may affect your
smart contract development workflow:
| **Feature** | **Hedera** | **Ethereum** |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------- |
| **Consensus mechanism** | Asynchronous Byzantine Fault Tolerance (aBFT), Proof of Stake (PoS) | Byzantine Fault Tolerance (BFT), Proof of Stake (PoS) |
| **Transaction fees** | Low and predictable fees | Variable gas fees; can be high during network congestion |
| **Governance model** | Governed by the Hedera Governing Council, comprising leading global organizations | Decentralized; governed by the Ethereum community |
| **Native token** | HBAR | ETH |
| **Token standard** | Supports ERC-20 and ERC-721 standards, with Hedera Token Service (HTS) for native token issuance and management without smart contracts | ERC-20 and ERC-721 standards for fungible and non-fungible tokens |
| **Network state data structure** | Virtual Merkle Tree | Merkle Patricia Trie |
| **Historical data** | Off-chain mirror nodes provide access to historical data and state queries | On-chain `stateRoot`; historical data can be accessed through the blockchain |
| **Key management** | Supports ED25519 (Hedera-native accounts), ECDSA (secp256k1), and complex keys (keylist and threshold) | Accounts are managed using ECDSA (secp256k1) keys |
| **Network upgrades** | Upgrades are proposed through Hedera Improvement Proposals (HIPs) and governed by the Hedera Governing Council. Upgrades are backward compatible, not forks. | Upgrades are proposed and implemented through Ethereum Improvement Proposals (EIPs) |
## SettleMint's Hedera integration
SettleMint has built a complete, managed infrastructure layer that makes
developing on Hedera's EVM as simple as developing on any other EVM chain:
* **Fully managed infrastructure**: No need to configure complex Hedera
components
* **Zero-setup JSON-RPC relay**: Direct connection to Hedera through standard
Ethereum interfaces
* **Built-in mirror nodes**: Query blockchain data through familiar APIs
SettleMint handles all the complexity of connecting to Hedera behind the
scenes, allowing you to focus on your application's business logic rather than
infrastructure.
## Transaction flow: how it works
When you deploy or interact with a smart contract on Hedera through SettleMint,
here's what happens:
```mermaid
sequenceDiagram
participant Developer
participant SettleMint
participant JSON-RPC Relay
participant Hedera EVM
participant Hedera Consensus
Developer->>SettleMint: Deploy Solidity contract
SettleMint->>JSON-RPC Relay: Submit EVM transaction
JSON-RPC Relay->>Hedera EVM: Translate to Hedera format
Hedera EVM->>Hedera Consensus: Process transaction
Hedera Consensus-->>Hedera EVM: Confirm transaction
Hedera EVM-->>JSON-RPC Relay: Return EVM format result
JSON-RPC Relay-->>SettleMint: Return transaction status
SettleMint-->>Developer: Display success/contract address
note over JSON-RPC Relay: Handles all format conversion
note over Hedera Consensus: 3-5 second finality
```
This seamless flow enables you to use standard Web3 libraries, frameworks, and
tools while benefiting from Hedera's performance and security.
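Because the relay speaks standard JSON-RPC, everyday EVM tooling works
unchanged. As a minimal sketch with viem (the endpoint URL is a placeholder for
your node's RPC URL), querying the latest block looks exactly as it would on
Ethereum:
```typescript
import { createPublicClient, http } from "viem";

// Point any standard EVM client at the SettleMint-hosted JSON-RPC relay.
const client = createPublicClient({
  transport: http("https://YOUR_HEDERA_NODE_RPC_ENDPOINT.settlemint.com"),
});

// No Hedera-specific code is needed for ordinary reads.
console.log(await client.getBlockNumber());
```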
## Networks available in SettleMint
SettleMint provides access to both Hedera networks:
* **Hedera mainnet**: Production environment with real HBAR tokens
* **Hedera testnet**: Development environment with free test HBAR
For development and testing, always start with testnet. You can get free test
HBAR from the Hedera Portal at [portal.hedera.com](https://portal.hedera.com).
## Getting started with Hedera on SettleMint
To begin building on Hedera through SettleMint:
1. [Set up your SettleMint workspace and application](/documentation/building-with-settlemint/hedera-hashgraph-guide/create-an-application)
2. Configure your Hedera network connection
3. Add private keys and obtain HBAR
4. Deploy your smart contracts with familiar tools
The subsequent guides will walk you through each step in detail.
When deploying your first contract to Hedera, you might experience slightly
longer deployment times than on other EVM chains. This is normal due to the
conversion between EVM and Hedera's native transaction format.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/setup-api-portal.mdx
meta: {
"title": "Setup smart contract portal",
"description": "Setup smart contract portal"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To set up the smart contract portal for a smart contract, you first need to
compile the contract and locate its ABI file. The ABI is auto-generated during
compilation and acts as a translation layer between your contract and external
tools like frontends or API layers. You'll find this ABI in the
artifacts/contracts/ContractName.sol/ContractName.json file if you used Hardhat,
or under out/ if you used Foundry. This file contains structured definitions of
all contract functions and events, and is essential for enabling external calls
through REST or GraphQL.
Once you have the ABI file, navigate to your application on the SettleMint
platform and go to the middleware section. Here, you'll need to add a new
middleware of type API portal. You will assign a name, select the blockchain
node where your contract is deployed, and upload the ABI file. Make sure the ABI
file is named appropriately because that name will be reflected in your API
structure. After confirming the setup, the API portal will automatically expose
both REST and GraphQL endpoints based on your contract's ABI.
To connect the API portal with your contract logic, you must provide the smart
contract's deployed address. This can be found inside the
deployed\_addresses.json file generated by Ignition after a successful
deployment. The portal will use this address to direct requests to the correct
contract instance.
Once deployed, you can start making REST API calls using standard HTTP requests.
You'll use the base URL shown under the portal's connect tab, and structure your
API requests according to the contract's ABI. Each call should include
authentication via your application access token, and a JSON payload specifying
the function name, parameters, and caller details such as from, gasLimit, and
simulate. The response will return transaction hashes for writes, or data for
reads.
## How to setup the smart contract portal in the SettleMint platform
### 1. Understanding application binary interface (ABI)
The application binary interface (ABI) is an essential artifact in Ethereum and
other EVM-based blockchain ecosystems that defines how smart contracts
communicate with the outside world. It acts as a formal agreement between a
deployed smart contract and any external entity, such as web applications,
backend servers, wallets, or command-line tools, about how to encode and decode
data for function calls, returns, and events. The ABI describes, in a structured
JSON format, each function's name, inputs, outputs, type (e.g., function,
constructor, event), and state mutability (view, pure, payable, etc.).
The ABI is generated automatically when a Solidity smart contract is compiled.
When developers write a Solidity contract and run it through the Solidity
compiler (solc), or through development frameworks like Hardhat or Truffle, the
output includes several artifacts, one of which is the ABI. This ABI is derived
by analyzing the contract's function signatures, input and output types, event
declarations, and constructor. For each function, the compiler calculates a
unique function selector, a 4-byte identifier based on the first 4 bytes of the
keccak256 hash of the function's signature (e.g., transfer(address,uint256)).
The ABI then maps these selectors to their corresponding human-readable
definitions in JSON form.
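To make that concrete, here is a small sketch of the selector calculation using
viem utilities (available in recent viem versions, the same library used in the
Hardhat tests above):
```typescript
import { keccak256, toBytes, toFunctionSelector } from "viem";

// Manual computation: first 4 bytes of keccak256 of the canonical signature.
console.log(keccak256(toBytes("transfer(address,uint256)")).slice(0, 10));
// -> 0xa9059cbb

// The same result via viem's helper.
console.log(toFunctionSelector("function transfer(address, uint256)"));
// -> 0xa9059cbb
```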
At runtime, when an application (like a frontend built with Web3.js or
Ethers.js) wants to interact with the contract, it uses this ABI to encode the
function call and its parameters into hexadecimal data that the Ethereum Virtual
Machine (EVM) can understand. Similarly, when the EVM returns data (e.g., the
result of a view function or an event emitted during a transaction), the ABI
provides the blueprint for decoding this binary data back into usable JavaScript
objects. In addition to function calls, the ABI is also critical for subscribing
to and decoding events emitted by the contract. Each event in the contract is
represented in the ABI with a structure that allows applications to listen for
specific logs on-chain and parse them into structured data.
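As an illustration of the encoding side, the sketch below uses viem's
`parseAbi` and `encodeFunctionData` to build the calldata for our contract's
`createProfile` function; any ABI-aware library would produce the same bytes:
```typescript
import { encodeFunctionData, parseAbi } from "viem";

// Human-readable ABI fragment for the function we want to call.
const abi = parseAbi([
  "function createProfile(uint256 userId, string name, string email, uint8 age, string country, bool isKYCApproved)",
]);

// Produces the hex calldata (4-byte selector + ABI-encoded arguments)
// that the EVM receives when createProfile is invoked.
const data = encodeFunctionData({
  abi,
  functionName: "createProfile",
  args: [1n, "Alice", "alice@example.com", 30, "Wonderland", true],
});
console.log(data);
```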
### 2. Using the ABI from the **UserData.sol** smart contract deployed in the previous step
Navigate to **/artifacts/contracts/UserData.sol/UserData.json** to find the ABI
of the contract we compiled and deployed in the previous step. Download the
JSON file.
If you build using Foundry, you will find the ABI in the **out** folder, at
`out/ContractName.sol/ContractName.json`.

The ABI you will get is the following:
```json
{
"_format": "hh-sol-artifact-1",
"contractName": "UserData",
"sourceName": "contracts/UserData.sol",
"abi": [
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"indexed": false,
"internalType": "string",
"name": "name",
"type": "string"
},
{
"indexed": false,
"internalType": "string",
"name": "email",
"type": "string"
},
{
"indexed": false,
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"indexed": false,
"internalType": "string",
"name": "country",
"type": "string"
},
{
"indexed": false,
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "ProfileCreated",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "uint256",
"name": "userId",
"type": "uint256"
}
],
"name": "ProfileDeleted",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"indexed": false,
"internalType": "string",
"name": "name",
"type": "string"
},
{
"indexed": false,
"internalType": "string",
"name": "email",
"type": "string"
},
{
"indexed": false,
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"indexed": false,
"internalType": "string",
"name": "country",
"type": "string"
},
{
"indexed": false,
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "ProfileUpdated",
"type": "event"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "createProfile",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
}
],
"name": "deleteProfile",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
}
],
"name": "getProfile",
"outputs": [
{
"components": [
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
},
{
"internalType": "bool",
"name": "isDeleted",
"type": "bool"
}
],
"internalType": "struct UserData.UserProfile",
"name": "",
"type": "tuple"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"name": "profiles",
"outputs": [
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
},
{
"internalType": "bool",
"name": "isDeleted",
"type": "bool"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "userId",
"type": "uint256"
},
{
"internalType": "string",
"name": "name",
"type": "string"
},
{
"internalType": "string",
"name": "email",
"type": "string"
},
{
"internalType": "uint8",
"name": "age",
"type": "uint8"
},
{
"internalType": "string",
"name": "country",
"type": "string"
},
{
"internalType": "bool",
"name": "isKYCApproved",
"type": "bool"
}
],
"name": "updateProfile",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
}
],
"bytecode": "0x6080806040523460155761121d908161001b8239f35b600080fdfe6080604052600436101561001257600080fd5b60003560e01c806328279308146109da578063985736ce1461087f578063c36fe3d6146107b5578063eb5339291461023d5763f08f4f641461005357600080fd5b346102385760207ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc36011261023857600435600060a060405161009581610f12565b6060815260606020820152826040820152606080820152826080820152015280600052600060205260ff60046040600020015460081c166101b45760005260006020526101726040600020604051906100ed82610f12565b6100f6816110a6565b8252610104600182016110a6565b906020830191825261019f60ff60028301541660408501908152600461012c600385016110a6565b936060870194855201549260ff6101856080880196828716151588528260a08a019760081c1615158752604051998a9960208b525160c060208c015260e08b0190611168565b9051601f198a83030160408b0152611168565b925116606087015251601f19868303016080870152611168565b9151151560a084015251151560c08301520390f35b60846040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602560248201527f50726f66696c65206e6f7420666f756e64206f7220686173206265656e20646560448201527f6c657465640000000000000000000000000000000000000000000000000000006064820152fd5b600080fd5b346102385761024b36610fa8565b908560009695939652600060205260ff60046040600020015460081c168015610797575b15610739576040519561028187610f12565b83875260208701858152604088019060ff831682526060890198848a526080810192861515845260a0820192600084528a60005260006020526040600020925180519067ffffffffffffffff82116105885781906102df8654611053565b601f81116106e6575b50602090601f831160011461068357600092610678575b50506000198260011b9260031b1c19161783555b518051600184019167ffffffffffffffff82116105885781906103368454611053565b601f8111610625575b50602090601f83116001146105c2576000926105b7575b50506000198260011b9260031b1c19161790555b60ff600283019151167fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00825416179055600381019951998a5167ffffffffffffffff8111610588576103bc8254611053565b601f8111610540575b5060209b601f82116001146104a8579261048c9492826004937fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a999760009261049d575b50506000198260011b9260031b1c19161790555b019151151560ff7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0084541691161782555115157fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00ff61ff00835492151560081b169116179055565b9b601f1982169c83600052816000209d60005b818110610c8d5750837fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a99979461048c9997946004976001951061050f575b505050811b019055610425565b929e8f83015181556001019e60200192602001610c3a565b826000526020600020601f830160051c81019160208410610ce3575b601f0160051c01905b818110610cd75750610b62565b60008155600101610cca565b9091508190610cc1565b015190508e80610af3565b600085815282812093601f1916905b818110610d435750908460019594939210610d2a575b505050811b019055610b07565b015160001960f88460031b161c191690558e8080610d1d565b92936020600181928786015181550195019301610d07565b909150836000526020600020601f840160051c81019160208510610da4575b90601f859493920160051c01905b818110610d955750610adc565b60008155849350600101610d88565b9091508190610d7a565b015190508e80610a9c565b600087815282812093601f1916905b818110610e045750908460019594939210610deb575b505050811b018355610ab0565b015160001960f88460031b161c191690558e8080610dde565b92936020600181928786015181550195019301610dc8565b909150856000526020600020601f840160051c81019160208510610e65575b90601f859493920160051c01905b818110610e565750610a85565b60008155849350600101610e49565b909150
8190610e3b565b60846040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602a60248201527f50726f66696c6520646f6573206e6f74206578697374206f722068617320626560448201527f656e2064656c65746564000000000000000000000000000000000000000000006064820152fd5b5084600052600060205260ff60046040600020015460081c1615610a0c565b60c0810190811067ffffffffffffffff82111761058857604052565b90601f601f19910116810190811067ffffffffffffffff82111761058857604052565b81601f820112156102385780359067ffffffffffffffff82116105885760405192610f866020601f19601f8601160185610f2e565b8284526020838301011161023857816000926020809301838601378301015290565b60c07ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc820112610238576004359160243567ffffffffffffffff81116102385782610ff591600401610f51565b9160443567ffffffffffffffff8111610238578161101591600401610f51565b9160643560ff8116810361023857916084359067ffffffffffffffff82116102385761104391600401610f51565b9060a43580151581036102385790565b90600182811c9216801561109c575b602083101461106d57565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052602260045260246000fd5b91607f1691611062565b90604051918260008254926110ba84611053565b808452936001811690811561112857506001146110e1575b506110df92500383610f2e565b565b90506000929192526020600020906000915b81831061110c5750509060206110df92820101386110d2565b60209193508060019154838589010152019101909184926110f3565b602093506110df9592507fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0091501682840152151560051b820101386110d2565b919082519283825260005b848110611194575050601f19601f8460006020809697860101520116010190565b80602080928401015182828601015201611173565b9360ff6111cb6111df94610844608097959a999a60a08a5260a08a0190611168565b921660408601528482036060860152611168565b93151591015256fea2646970667358221220e734baef00a48587a6925ab9e9c2ba63acf5e71a194aeb1359347e94b1f78f8a64736f6c634300081b0033",
"deployedBytecode": "0x6080604052600436101561001257600080fd5b60003560e01c806328279308146109da578063985736ce1461087f578063c36fe3d6146107b5578063eb5339291461023d5763f08f4f641461005357600080fd5b346102385760207ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc36011261023857600435600060a060405161009581610f12565b6060815260606020820152826040820152606080820152826080820152015280600052600060205260ff60046040600020015460081c166101b45760005260006020526101726040600020604051906100ed82610f12565b6100f6816110a6565b8252610104600182016110a6565b906020830191825261019f60ff60028301541660408501908152600461012c600385016110a6565b936060870194855201549260ff6101856080880196828716151588528260a08a019760081c1615158752604051998a9960208b525160c060208c015260e08b0190611168565b9051601f198a83030160408b0152611168565b925116606087015251601f19868303016080870152611168565b9151151560a084015251151560c08301520390f35b60846040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602560248201527f50726f66696c65206e6f7420666f756e64206f7220686173206265656e20646560448201527f6c657465640000000000000000000000000000000000000000000000000000006064820152fd5b600080fd5b346102385761024b36610fa8565b908560009695939652600060205260ff60046040600020015460081c168015610797575b15610739576040519561028187610f12565b83875260208701858152604088019060ff831682526060890198848a526080810192861515845260a0820192600084528a60005260006020526040600020925180519067ffffffffffffffff82116105885781906102df8654611053565b601f81116106e6575b50602090601f831160011461068357600092610678575b50506000198260011b9260031b1c19161783555b518051600184019167ffffffffffffffff82116105885781906103368454611053565b601f8111610625575b50602090601f83116001146105c2576000926105b7575b50506000198260011b9260031b1c19161790555b60ff600283019151167fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00825416179055600381019951998a5167ffffffffffffffff8111610588576103bc8254611053565b601f8111610540575b5060209b601f82116001146104a8579261048c9492826004937fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a999760009261049d575b50506000198260011b9260031b1c19161790555b019151151560ff7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0084541691161782555115157fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00ff61ff00835492151560081b169116179055565b9b601f1982169c83600052816000209d60005b818110610c8d5750837fca34bc1ece01e1f6e787e2fcbd4c56766978c283996ee9eb1055109936cf34259e9f6104989c9b9a99979461048c9997946004976001951061050f575b505050811b019055610425565b929e8f83015181556001019e60200192602001610c3a565b826000526020600020601f830160051c81019160208410610ce3575b601f0160051c01905b818110610cd75750610b62565b60008155600101610cca565b9091508190610cc1565b015190508e80610af3565b600085815282812093601f1916905b818110610d435750908460019594939210610d2a575b505050811b019055610b07565b015160001960f88460031b161c191690558e8080610d1d565b92936020600181928786015181550195019301610d07565b909150836000526020600020601f840160051c81019160208510610da4575b90601f859493920160051c01905b818110610d955750610adc565b60008155849350600101610d88565b9091508190610d7a565b015190508e80610a9c565b600087815282812093601f1916905b818110610e045750908460019594939210610deb575b505050811b018355610ab0565b015160001960f88460031b161c191690558e8080610dde565b92936020600181928786015181550195019301610dc8565b909150856000526020600020601f840160051c81019160208510610e65575b90601f859493920160051c01905b818110610e565750610a85565b60008155849350600101610e49565b9091508190610e3b565b60846040517f08c379a0000000000000
00000000000000000000000000000000000000000000815260206004820152602a60248201527f50726f66696c6520646f6573206e6f74206578697374206f722068617320626560448201527f656e2064656c65746564000000000000000000000000000000000000000000006064820152fd5b5084600052600060205260ff60046040600020015460081c1615610a0c565b60c0810190811067ffffffffffffffff82111761058857604052565b90601f601f19910116810190811067ffffffffffffffff82111761058857604052565b81601f820112156102385780359067ffffffffffffffff82116105885760405192610f866020601f19601f8601160185610f2e565b8284526020838301011161023857816000926020809301838601378301015290565b60c07ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc820112610238576004359160243567ffffffffffffffff81116102385782610ff591600401610f51565b9160443567ffffffffffffffff8111610238578161101591600401610f51565b9160643560ff8116810361023857916084359067ffffffffffffffff82116102385761104391600401610f51565b9060a43580151581036102385790565b90600182811c9216801561109c575b602083101461106d57565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052602260045260246000fd5b91607f1691611062565b90604051918260008254926110ba84611053565b808452936001811690811561112857506001146110e1575b506110df92500383610f2e565b565b90506000929192526020600020906000915b81831061110c5750509060206110df92820101386110d2565b60209193508060019154838589010152019101909184926110f3565b602093506110df9592507fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0091501682840152151560051b820101386110d2565b919082519283825260005b848110611194575050601f19601f8460006020809697860101520116010190565b80602080928401015182828601015201611173565b9360ff6111cb6111df94610844608097959a999a60a08a5260a08a0190611168565b921660408601528482036060860152611168565b93151591015256fea2646970667358221220e734baef00a48587a6925ab9e9c2ba63acf5e71a194aeb1359347e94b1f78f8a64736f6c634300081b0033",
"linkReferences": {},
"deployedLinkReferences": {}
}
```
In this ABI, we have a set of functions, inputs, outputs, and events captured
from the UserData.sol smart contract. It outlines how external applications can
interact with the contract by providing structured definitions for each callable
function and emitted event.
#### UserData contract ABI summary
| Events | Indexed Params | Non-Indexed Params |
| ---------------- | ------------------ | -------------------------------------------------------------------------------------------- |
| `ProfileCreated` | `userId (uint256)` | `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` |
| `ProfileUpdated` | `userId (uint256)` | `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` |
| `ProfileDeleted` | `userId (uint256)` | — |
| Functions | Inputs | Outputs |
| --------------- | ---------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| `createProfile` | `userId (uint256)`, `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` | — |
| `updateProfile` | `userId (uint256)`, `name (string)`, `email (string)`, `age (uint8)`, `country (string)`, `isKYCApproved (bool)` | — |
| `deleteProfile` | `userId (uint256)` | — |
| `getProfile` | `userId (uint256)` | Tuple: `{ name (string), email (string), age (uint8), country (string), isKYCApproved (bool), isDeleted (bool) }` |
| `profiles` | `userId (uint256)` | Tuple: `{ name (string), email (string), age (uint8), country (string), isKYCApproved (bool), isDeleted (bool) }` |
### 3. Add smart contract portal middleware to your application
Middleware acts as a bridge between your blockchain network and applications,
providing essential services like data indexing, API access, and event
monitoring. Before adding middleware, ensure you have an application and
blockchain node in place.
#### How to add middleware
**Navigate to application**
Navigate to the **application** where you want to add middleware.
**Access middleware section**
Click **middleware** in the left navigation, and then click **add a middleware**. This opens a form.
**Configure middleware**
1. Choose middleware type (graph or portal)
2. Choose a **middleware name**
3. Select the **blockchain node** (preferred option for the portal) or **load balancer** (preferred option for the graph)
4. Configure deployment settings
5. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create a middleware:
```bash
# Get information about the command and all available options
settlemint platform create middleware --help
# Create a middleware
settlemint platform create middleware
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create middleware
const result = await client.middleware.create({
applicationUniqueName: "your-app-unique-name",
name: "my-middleware",
type: "SHARED",
interface: "HA_GRAPH", // Valid options: "HA_GRAPH" | "SMART_CONTRACT_PORTAL"
blockchainNodeUniqueName: "your-node-unique-name",
region: "EUROPE", // Required
provider: "GKE", // Required
size: "SMALL" // Valid options: "SMALL" | "MEDIUM" | "LARGE"
});
console.log('Middleware created:', result);
```
Get your access token from the Platform UI under User Settings → API Tokens.
#### Manage middleware
Navigate to your middleware and click **manage middleware** to:
* View middleware details and status
* Update configurations
* Monitor health
* Access endpoints
```bash
# List middlewares
settlemint platform list middlewares --application
```
```bash
# Get middleware details
settlemint platform read middleware
```
```typescript
// List middlewares
await client.middleware.list("your-app-unique-name");
```
```typescript
// Get middleware details
await client.middleware.read("middleware-unique-name");
```

You can upload or copy-paste the ABI. Note that if you upload the ABI, the file
name will be used as the ABI name, so make sure you edit the file name of the
ABI JSON file before uploading.

In a few minutes, the portal generates a REST and a GraphQL API layer.

To update the ABIs of an existing smart contract portal middleware, navigate to
the middleware, go to the details, and click the 'manage middleware' button at
the top right. Click the 'update ABIs' item and a dialog will open. In this
dialog, upload the ABI file(s) you saved on your local filesystem in the
previous step.
### 4. How to configure REST API requests in the portal
To interact with your smart contract via the API portal, follow these steps:

#### Get the base URL
Navigate to the **connect** tab in the portal middleware to obtain the base API
URL. It will look something like:
`https://api-portal-affe9.gke-europe.settlemint.com/`
For exact endpoints, refer to the portal UI. An example endpoint might look like
this:
`https://api-portal-affe9.gke-europe.settlemint.com/api/user-smart-contract-abi/{address}/create-profile`
Here, `{address}` should be replaced with the deployed smart contract address on
the blockchain.
> You can find the deployed contract address in the `deployed_addresses.json`
> file located inside the `ignition/deployments` folder.
#### Sample request body
Here's an example JSON body for a smart contract function like `createProfile`:
```json
{
"from": "",
"gasLimit": "",
"gasPrice": "",
"simulate": true,
"metadata": {},
"input": {
"userId": "",
"name": "",
"email": "",
"age": 0,
"country": "",
"isKYCApproved": true
}
}
```
#### Field descriptions
* **`from`**: Public key of the wallet that will initiate the transaction.
Typically, this is the deployer's address. For advanced scenarios, this can be
a specific user's public address, depending on roles.
* **`gasLimit`**: Use a reasonably high value for zero-gas private networks. For
others, determine a realistic value through trial and error. You can fine-tune
this based on actual gas usage from previous transactions.
* **`gasPrice`**: Set to `0` for zero-gas networks, or specify an appropriate
value for gas-charging private or public networks.
* **`simulate`**: Leave as `true` for a dry run before sending actual
transactions.
* **`metadata`**: Can be left empty or with default values unless your
application requires it.
* **`input`**: Include all parameters required by the smart contract function
you are calling.
#### Authentication
Use your **application access token** as the API key for authentication. You can
generate this token from the **access tokens** section in your application
dashboard (left sidebar menu); it will look something like
**sm\_aat\_fd0fbe61cf102b6c**.
#### Expected response
If the request is valid, the API returns **200 OK** along with the
**transaction hash** in the response body. Otherwise, an error code with a
descriptive message is returned.
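Putting the pieces together, a request from code might look like the sketch
below. The base URL and endpoint path follow the examples above; the contract
address, wallet address, and token are placeholders, and the `x-auth-token`
header mirrors the platform convention shown earlier (check the portal's
connect tab for the exact header expected by your instance):
```typescript
// Sketch of a createProfile call against the portal's REST API (Node.js 18+).
const res = await fetch(
  "https://api-portal-affe9.gke-europe.settlemint.com/api/user-smart-contract-abi/0xYOUR_CONTRACT_ADDRESS/create-profile",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-auth-token": "sm_aat_YOUR_APPLICATION_ACCESS_TOKEN",
    },
    body: JSON.stringify({
      from: "0xYOUR_WALLET_ADDRESS",
      gasLimit: "10000000", // generous value for a zero-gas private network
      gasPrice: "0",
      simulate: true,
      metadata: {},
      input: {
        userId: "101",
        name: "Alice",
        email: "alice@example.com",
        age: 30,
        country: "Wonderland",
        isKYCApproved: true,
      },
    }),
  }
);
console.log(res.status, await res.json()); // 200 OK with a transaction hash on success
```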
### 5. How to configure GraphQL API requests in the portal

To query smart contract data using GraphQL in the SettleMint API portal,
navigate to the **GraphQL** tab in the portal interface. You will see a visual
GraphQL explorer that allows you to construct and test your queries easily. The
endpoint for GraphQL is provided under the **connect** tab, typically looking
like:
```
https://api-portal-affe9.gke-europe.settlemint.com/graphql
```
In the explorer, start by selecting the appropriate query object exposed in the
API, such as `UserSmartContractAbi`. You'll need to provide the `address`
parameter, which corresponds to the deployed smart contract address. This
address ensures that your request is directed to the correct smart contract
instance on-chain.
Once the address is entered, you can choose the function or field you want to
query. For example, selecting the `profiles` field and providing a `uint256` ID
(such as `"101"`) will retrieve the user profile associated with that ID. You
can then pick which fields of the profile you want to fetch, like `name`,
`email`, `age`, `country`, `isKYCApproved`, and `isDeleted`.
After you've built your query, hit the play button to execute it. If successful,
the response will appear on the right-hand panel, showing the structured result
returned from the smart contract. In this case, you might get a profile with a
name, email, country, age, and flags indicating whether the profile is deleted
or KYC-approved.
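The same query can also be sent programmatically. The sketch below assumes the
schema shape described above (`UserSmartContractAbi` taking an `address`
argument, with a `profiles` field); the exact field and argument names are
generated from your uploaded ABI, so treat them as illustrative:
```typescript
// Sketch of a GraphQL read against the portal (Node.js 18+); values are placeholders.
const gqlRes = await fetch(
  "https://api-portal-affe9.gke-europe.settlemint.com/graphql",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-auth-token": "sm_aat_YOUR_APPLICATION_ACCESS_TOKEN",
    },
    body: JSON.stringify({
      query: `{
        UserSmartContractAbi(address: "0xYOUR_CONTRACT_ADDRESS") {
          profiles(userId: "101") {
            name
            email
            age
            country
            isKYCApproved
            isDeleted
          }
        }
      }`,
    }),
  }
);
console.log(JSON.stringify(await gqlRes.json(), null, 2));
```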
This intuitive interface allows developers to rapidly test GraphQL queries
without needing to write code or leave the portal. This can be used for
debugging, exploring contract data, and integrating smart contract logic into
frontend or backend systems using GraphQL.
Congratulations!
You have successfully deployed the smart contract API portal and generated APIs
to write data on chain. From here you can proceed to set up the graph
middleware, which indexes on-chain data and provides a GraphQL API layer for
reading data written through smart contract interactions.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/setup-code-studio.mdx
meta: {
"title": "Setup code studio",
"description": "Guide to setup code studio IDE to develop and deploy smart contracts and sub-graphs"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To start developing and deploying smart contracts on the SettleMint platform,
you'll first need to add code studio to your application. This provides you with
a full-featured web-based IDE, pre-configured for blockchain development using
tools like Hardhat, Foundry, and The Graph. Once added, you can use built-in
tasks to build, test, deploy, and index your smart contracts and subgraphs, all
within the same environment.
You can add code studio through the platform UI by selecting it as a dev tool
and linking it with a smart contract set and a template. Alternatively, you can
use the SDK CLI or SDK JS to programmatically create and manage smart contract
sets. These interfaces give you flexibility depending on whether you're working
from the console or integrating via scripts or automation.
After setup, you'll be able to customize your smart contracts directly within
the IDE. A task manager will guide you through building and deploying them to
local or SettleMint-hosted blockchain networks. You can also integrate subgraphs
for indexing and querying contract data using The Graph.
To speed up development, SettleMint offers a rich library of open-source smart
contract templates, from ERC standards to more complex business use cases. These
templates can be modified, extended, or used as-is, and you also have the option
to create and manage custom templates within your consortium for reuse across
projects.
## How to setup code studio and deploy smart contracts on SettleMint platform
Code studio is SettleMint's fully integrated, web-based IDE built specifically
for blockchain development. It provides developers with a familiar Visual Studio
Code experience directly in the browser, pre-configured with essential tools
like Hardhat, Foundry, and The Graph. Code studio enables seamless development,
testing, deployment, and indexing of smart contracts and subgraphs, all within a
unified environment.
It eliminates the need for complex local setups, simplifies DevOps workflows,
and reduces time-to-market by combining infrastructure, templates, and
automation under one interface. By offering pre-built tasks, contract templates,
and GitHub integration, it solves the traditional challenges of fragmented
tooling, inconsistent environments, and steep setup requirements for web3
development.

Despite offering full configurability, code studio includes all essential
dependencies pre-installed, saving time and avoiding setup friction. It supports
extensions for formatting, linting, testing, and AI-assisted development,
mirroring the convenience of a local VS Code setup. Every component, from
contracts to testing and subgraph development, is wired into a well-structured,
maintainable codebase that is continuously updated and thoroughly tested to
align with the latest development standards. This makes it ideal for both rapid
prototyping and production-grade blockchain applications.

Smart contract sets allow you to incorporate **business logic** into your
application by deploying smart contracts that run on the blockchain. You can add
a smart contract set via different methods as part of your development workflow.
## IDE project structure
The EVM IDE project structure in code studio is thoughtfully organized to
support efficient smart contract development, testing, and deployment. Each
folder serves a specific purpose in the dApp development lifecycle, aligning
with industry-standard tools like Hardhat, Foundry, and The Graph.
| Folder | Description |
| --------------- | ------------------------------------------------------------------------------------------------- |
| `contracts/` | Contains Solidity smart contracts that define the core logic and business rules of the dApp. |
| `test/` | Holds test files. These can be written in **TypeScript** for Hardhat or **Solidity** for Foundry. |
| `script/` | Stores deployment and interaction scripts, often used to automate tasks like contract deployment. |
| `lib/` | Optional directory for external Solidity libraries or reusable modules to avoid code repetition. |
| `ignition/`     | Contains **Hardhat Ignition** configuration for defining declarative deployment plans.             |
| `out/` | Output folder used by **Foundry**, containing compiled contract artifacts like ABIs and bytecode. |
| `artifacts/` | Output folder used by **Hardhat**, similar to `out/`, containing build artifacts and metadata. |
| `subgraphs/` | Contains files for **The Graph** integration, schema, mappings, and manifest for data indexing. |
| `cache/` | Caching directory for Hardhat to improve build performance by avoiding redundant compilation. |
| `cache_forge/` | Caching directory for Foundry to speed up compilation and reuse outputs. |
| `node_modules/` | Contains installed npm packages and dependencies used in Hardhat or other JS-based tools. |
## Code studio task manager
The code studio IDE task manager acts as a centralized hub for running all
essential development scripts, giving developers a streamlined way to manage the
entire smart contract lifecycle. It also includes integrated SettleMint CLI
tasks for logging in and managing authenticated platform interactions, ensuring
that everything needed for blockchain development is accessible and executable
directly from within the IDE.
Below is a categorized table of tasks or scripts available with concise
explanations.
| Task | Tool | Description |
| -------------------------------------------- | -------------- | ------------------------------------------------------------------------ |
| SettleMint - Login | SettleMint CLI | Logs into the SettleMint platform via CLI for authenticated deployments. |
| Foundry - Build | Foundry | Compiles the smart contracts using Foundry. |
| Hardhat - Build | Hardhat | Compiles the smart contracts using Hardhat. |
| Foundry - Test | Foundry | Runs tests using Foundry's native testing framework. |
| Hardhat - Test | Hardhat | Executes tests using Hardhat's JavaScript-based test suite. |
| Foundry - Format | Foundry | Formats smart contract code for readability (optional). |
| Foundry - Start network | Foundry | Starts a local Foundry testnet environment. |
| Hardhat - Start network | Hardhat | Starts a local Hardhat network for JS-based testing. |
| Hardhat - Deploy to local network | Hardhat | Deploys compiled contracts to the local Hardhat network. |
| Hardhat - Reset & Deploy to local network | Hardhat | Resets the local chain state and redeploys contracts. |
| Hardhat - Deploy to platform network | Hardhat | Deploys contracts to a blockchain network hosted on SettleMint. |
| Hardhat - Reset & Deploy to platform network | Hardhat | Resets the platform network state and redeploys contracts. |
| The Graph - Codegen the subgraph types | The Graph CLI | Generates TypeScript types based on subgraph GraphQL schema. |
| The Graph - Build the subgraph | The Graph CLI | Compiles the subgraph for deployment to The Graph. |
| The Graph - Deploy or update the subgraph | The Graph CLI | Deploys or updates the subgraph on The Graph's hosted service. |
When using Hardhat Ignition for deploying smart contracts, the deployed contract
addresses are stored in
`ignition/deployments/chain-CHAIN_ID/deployed_addresses.json`. This file serves as
a reliable reference for all contracts deployed on a specific network. It maps
contract names to their respective blockchain addresses, making it easy to
retrieve addresses later for interactions, frontend integrations, or upgrades.
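Because it is plain JSON, a script can read an address back directly. A minimal sketch, assuming a hypothetical chain ID of `44819` and Ignition's `Module#Contract` key convention:

```typescript
import { readFileSync } from "node:fs";

// Chain ID 44819 is a placeholder; use the folder name from your own deployment.
const deployed: Record<string, string> = JSON.parse(
  readFileSync("ignition/deployments/chain-44819/deployed_addresses.json", "utf8")
);

// Hardhat Ignition keys entries as "ModuleName#ContractName".
const userDataAddress = deployed["UserDataModule#UserData"];
console.log("UserData deployed at:", userDataAddress);
```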
You must have an existing application before you add a smart contract set.
## How to add code studio
### Navigate to application
Navigate to the **application** where you want to add the smart contract set.
### Open dev tools
Open **dev tools** and click on **add a dev tool**.

### Select code studio
Select **code studio** as the dev tool type.

### Choose smart contract set
Then choose **smart contract set**.

### Pick a template
Pick a **template**; the code studio will load with your chosen smart contract template.

### Enter details
Click **continue** to enter details such as the dev tool name, user, and deployment plan.

### Confirm
Confirm the resource cost and click **confirm** to add the smart contract set.
You can now further configure and eventually deploy your smart contracts.
First, ensure you are authenticated:
```bash
settlemint login
```
You can create a smart contract set either on the platform or locally:
### Create on platform
Then create a smart contract set with the following command (refer to the
[CLI docs](/building-with-settlemint/15_dev-tools/1_SDK.md) for more details):
```bash
settlemint platform create smart-contract-set <name> \
  --application <application-name> \
  --template <template-name> \
  --deployment-plan <plan-name>
```
For example:
```bash
settlemint platform create smart-contract-set my-scset \
  --application my-app \
  --template default \
  --deployment-plan starter
```
### Working with smart contract sets locally
You can also work with smart contract sets in your local development environment. This is useful for development and testing before deploying to the platform.
To create a smart contract set locally:
```bash
# Create a new smart contract set
settlemint scs create
# You'll see the SettleMint ASCII art and then be prompted:
✔ What is the name of your new SettleMint project? my awesome project
# Choose from available templates:
❯ ERC20 token
Empty typescript
Empty typescript with PDC
ERC1155 token
ERC20 token with crowdsale mechanism
ERC20 token with MetaTx
ERC721
# ... and more
```
Once created, you can use these commands to work with your local smart contract set:
```bash
settlemint scs -h # Show all available commands
# Main commands:
settlemint scs create # Create a new smart contract set
settlemint scs foundry # Foundry commands for building and testing
settlemint scs hardhat # Hardhat commands for building, testing and deploying
settlemint scs subgraph # Commands for managing TheGraph subgraphs
```
The scaffolded project includes everything you need to start developing smart contracts:
* Contract templates
* Testing framework
* Deployment scripts
* Development tools configuration
### Managing platform smart contract sets
Manage your platform smart contract sets with:
```bash
# List smart contract sets
settlemint platform list smart-contract-sets --application <application-name>

# Read smart contract set details
settlemint platform read smart-contract-set <smart-contract-set-name>
```
You can also add a smart contract set programmatically using the JS SDK. The API follows the same pattern as for applications and blockchain networks:
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';

const client = createSettleMintClient({
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN!,
  instance: 'https://console.settlemint.com'
});

// Create a Smart Contract Set
const createSmartContractSet = async () => {
  const result = await client.smartContractSet.create({
    applicationUniqueName: "your-app", // Your application unique name
    name: "my-smart-contract-set", // The smart contract set name
    template: "default" // Template to use (choose from available templates)
  });
  console.log('Smart Contract Set created:', result);
};

// List Smart Contract Sets
const listSmartContractSets = async () => {
  const sets = await client.smartContractSet.list("your-app");
  console.log('Smart Contract Sets:', sets);
};

// Read Smart Contract Set details
const readSmartContractSet = async () => {
  const details = await client.smartContractSet.read("smart-contract-set-unique-name");
  console.log('Smart Contract Set details:', details);
};
```
Get your access token from the platform UI under **user settings → API tokens**.
All operations require that you have the necessary permissions in your
workspace.
## Customize smart contracts
You can customize your smart contracts using the built-in IDE. The smart
contract sets include a generative AI plugin to assist with development.
[Learn more about the AI plugin here.](./ai-plugin)
## Smart contract template library
SettleMint's smart contract templates serve as open-source, ready-to-use
foundations for blockchain application development, significantly accelerating
the deployment process. These templates enable users to quickly customize and
extend their blockchain applications, leveraging tested and community-enhanced
frameworks to reduce development time and accelerate market entry.
## Open-source smart contract templates under the MIT license
Benefit from the expertise of the blockchain community and trust in the
reliability of your smart contracts. These templates are vetted and used by
major enterprises and institutions, ensuring enhanced security and confidence in
your deployments.
The programming language used depends on the target protocol:
* **Solidity** for EVM-compatible networks
| Template | Description |
| ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
| [Empty](https://github.com/settlemint/solidity-empty) | Basic Solidity project scaffold with no predefined logic. Ideal for starting from scratch. |
| [ERC20 token](https://github.com/settlemint/solidity-token-erc20) | Standard ERC20 token implementation for fungible tokens. |
| [ERC1155 token](https://github.com/settlemint/solidity-token-erc1155) | Multi-token standard supporting both fungible and non-fungible tokens in a single contract. |
| [ERC20 token with MetaTx](https://github.com/settlemint/solidity-token-erc20-metatx) | ERC20 token with meta-transaction support to enable gasless transfers. |
| [Supplychain](https://github.com/settlemint/solidity-supplychain) | Token-based supply chain logic for tracking assets and ownership across stages. |
| [State Machine](https://github.com/settlemint/solidity-statemachine) | Contract template for building stateful workflows and processes using a finite state machine. |
| [ERC20 token with crowdsale mechanism](https://github.com/settlemint/solidity-token-erc20-crowdsale) | ERC20 token with built-in crowdsale logic for fundraising campaigns. |
| [ERC721](https://github.com/settlemint/solidity-token-erc721) | Standard implementation of ERC721 non-fungible tokens (NFTs). |
| [ERC721a](https://github.com/settlemint/solidity-token-erc721a) | Gas-optimized ERC721 implementation for efficient batch minting. |
| [ERC721 Generative Art](https://github.com/settlemint/solidity-token-erc721-generative-art) | NFT template for generating on-chain artwork using ERC721 standard. |
| [Soulbound Token](https://github.com/settlemint/solidity-token-soulbound) | Non-transferable token (SBT) representing identity or credentials. |
| [Diamond bond](https://github.com/settlemint/solidity-diamond-bond) | Example of a tokenized bond using modular smart contracts (Diamond pattern). |
| [Attestation Service](https://github.com/settlemint/solidity-attestation-service) | Service template for managing on-chain verifiable claims and attestations. |
## Create your own smart contract templates for your consortium
Within the self-managed SettleMint Platform, you can create and add your own
templates for use within your consortium. This fosters a collaborative
environment where templates can be reused and built upon, promoting innovation
and efficiency within your network.
To get started, visit:
[SettleMint GitHub Repository](https://github.com/settlemint/solidity-empty)
Congratulations!
You have successfully deployed the code studio. From here you can proceed to
the development and deployment of smart contracts and the indexing of subgraphs.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/setup-graph-middleware.mdx
meta: {
"title": "Setup graph middleware",
"description": "Setup read middleware"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To set up a graph middleware in SettleMint, you'll begin by ensuring that your
application and blockchain node are ready. The graph middleware will serve as
your read layer, enabling powerful querying of on-chain events using a GraphQL
interface. This is particularly useful when you want to retrieve and analyze
historical smart contract data in a structured, filterable format.
First, you'll need to add the middleware itself. Head to the middleware section
inside your application on the SettleMint platform. Click add a middleware, and
select graph as the type. Assign a name, pick the blockchain node (where your
smart contract is deployed), configure the deployment settings, and confirm.
This action will provision the underlying infrastructure required to run your
subgraph.
Next, you will create the subgraph package in code studio. The subgraph folder
contains all the code and configuration required for indexing and querying your
smart contract's events. You will define a subgraph.config.json file that lists
the network (via chain ID), your contract address, and the data sources (i.e.,
smart contracts and associated modules) that the subgraph will index.
Inside the datasources folder, you will create a userdata.yaml manifest file
that outlines the smart contract address, ABI path, start block, and
event-handler mappings. This YAML file connects emitted events like
ProfileCreated, ProfileUpdated, and ProfileDeleted with specific AssemblyScript
functions that define how the data is processed and stored.
You will then define the schema in userdata.gql.json. This is your GraphQL
schema, which defines the structure of your indexed data. Entities like
UserProfile, ProfileCreated, and ProfileUpdated are defined here, each with the
fields to be stored and queried later via GraphQL.
Once the schema is ready, you will implement the mapping logic in userdata.ts,
which listens for emitted events and updates the subgraph's entities
accordingly. A helper file inside the fetch directory will provide utility logic
to create or retrieve entities without code repetition.
After writing all files, you will run the codegen, build, and deploy scripts
using the provided task buttons in code studio. These scripts will compile your
schema and mapping into WebAssembly (WASM), bundle it for deployment, and push
it to the graph middleware node.
Once deployed, you will be able to open the graph middleware's GraphQL explorer
and run queries against your indexed data. You can query by ID or use the plural
form to get a list of entries. This enables your application or analytics layer
to fetch historical state data in a fast and reliable way.
## How to set up graph middleware and the API portal on the SettleMint platform
Middleware acts as a bridge between your blockchain network and applications,
providing essential services like data indexing, API access, and event
monitoring. Before adding middleware, ensure you have an application and
blockchain node in place.
### How to add middleware

**Navigate to application**
Navigate to the **application** where you want to add middleware.
**Access middleware section**
Click **middleware** in the left navigation, and then click **add a middleware**. This opens a form.
**Configure middleware**
1. Choose middleware type (graph or portal)
2. Choose a **middleware name**
3. Select the **blockchain node** (preferred option for the portal) or **load balancer** (preferred option for the graph)
4. Configure deployment settings
5. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create a middleware:
```bash
# Get information about the command and all available options
settlemint platform create middleware --help

# Create a middleware
settlemint platform create middleware
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';

const client = createSettleMintClient({
  accessToken: 'your_access_token',
  instance: 'https://console.settlemint.com'
});

// Create middleware
const result = await client.middleware.create({
  applicationUniqueName: "your-app-unique-name",
  name: "my-middleware",
  type: "SHARED",
  interface: "HA_GRAPH", // Valid options: "HA_GRAPH" | "SMART_CONTRACT_PORTAL"
  blockchainNodeUniqueName: "your-node-unique-name",
  region: "EUROPE", // Required
  provider: "GKE", // Required
  size: "SMALL" // Valid options: "SMALL" | "MEDIUM" | "LARGE"
});
console.log('Middleware created:', result);
```
Get your access token from the platform UI under **user settings → API tokens**.
### Manage middleware
Navigate to your middleware and click **manage middleware** to:
* View middleware details and status
* Update configurations
* Monitor health
* Access endpoints
```bash
# List middlewares
settlemint platform list middlewares --application <application-name>
```
```bash
# Get middleware details
settlemint platform read middleware <middleware-unique-name>
```
```typescript
// List middlewares
await client.middleware.list("your-app-unique-name");
```
```typescript
// Get middleware details
await client.middleware.read("middleware-unique-name");
```
## Subgraph folder structure in the code studio IDE
```bash
subgraph/
│
├── subgraph.config.json
│
├── datasources/
│ ├── mycontract.gql.json
│ ├── mycontract.ts
│ └── mycontract.yaml
│
└── fetch/
└── mycontract.ts
```
## Subgraph deployment process
### 1. Collect constants needed
Find the chain ID of the network from the `ignition/deployments` folder name
(chain-ID) or from the platform UI at blockchain networks > selected network >
details page; it will be something like **47440**.
Locate the contract address: the deployed contract address is stored in the
`deployed_addresses.json` file located in the `ignition/deployments` folder.
### 2. Building subgraph.config.json file
This file is the foundational configuration for your subgraph. It defines how
and where the subgraph will be generated and which contracts it will be
tracking. Think of it as the control panel that the subgraph compiler reads to
understand what contracts to index, where to start indexing from (which block),
and which folder contains the relevant configurations (e.g., YAML manifest,
mappings, schema, etc.).
Each object in the datasources array represents a separate contract. You specify
the contract's name, address, the block number at which the indexer should begin
listening, and the path to the module folder (which holds the YAML manifest and
mapping logic). This file is essential when working with Graph CLI or SDKs for
compiling and deploying subgraphs.
When writing this file from scratch, you will need to gather the deployed
contract address, decide the indexing start block (can be 0 or a specific block
to save resources), and organize contract-related files in a logical module
folder.
```json
{
  "output": "generated/scs.",
  "chain": "44819",
  "datasources": [
    {
      "name": "UserData",
      "address": "0x8b1544B8e0d21aef575Ce51e0c243c2D73C3C7B9",
      "startBlock": 0,
      "module": ["userdata"]
    }
  ]
}
```
### 3. Create userdata.yaml file
This is the YAML manifest file that tells the subgraph how to interact with a
specific smart contract on-chain. It defines the contract's ABI, address, the
events to listen to, and the mapping logic that should be triggered for each
event.
The structure must follow strict YAML format; wrong indentation or a missing
property can break the subgraph. Under the source section, you provide the
contract's address, the ABI name, and the block from which indexing should
begin.
The mapping section details how the subgraph handles events. It specifies the
API version, programming language (AssemblyScript), the entities it will touch,
and the path to the mapping file. Each eventHandler entry pairs an event
signature (from the contract) with a function that will process it. When writing
this from scratch, ensure that all event signatures exactly match those in your
contract (parameter order and types must be accurate), and align them with the
corresponding handler function names.
```yaml
- kind: ethereum/contract
  name: {id}
  network: {chain}
  source:
    address: "{address}"
    abi: UserData
    startBlock: {startBlock}
  mapping:
    kind: ethereum/events
    apiVersion: 0.0.5
    language: wasm/assemblyscript
    entities:
      - UserProfile
      - ProfileCreated
      - ProfileUpdated
      - ProfileDeleted
    abis:
      - name: UserData
        file: "{root}/out/UserData.sol/UserData.json"
    eventHandlers:
      - event: ProfileCreated(indexed uint256,string,string,uint8,string,bool)
        handler: handleProfileCreated
      - event: ProfileUpdated(indexed uint256,string,string,uint8,string,bool)
        handler: handleProfileUpdated
      - event: ProfileDeleted(indexed uint256)
        handler: handleProfileDeleted
    file: {file}
```
### 4. Create userdata.gql.json file
This JSON file defines the GraphQL schema that powers your subgraph's data
structure. It outlines the shape of your data, which entities will be stored in
the Graph Node's underlying database, and the fields each entity will expose to
users via GraphQL queries.
Every event-based entity (like ProfileCreated, ProfileUpdated, ProfileDeleted)
is linked to the main entity (here, UserProfile) to maintain a historical audit
trail. Each entity must have an id field of type ID!, which serves as the
primary key.
You then define all other fields with their data types and nullability. When
writing this schema, think in terms of how data will be queried: What
information will consumers of the subgraph want to retrieve? The names and types
must exactly reflect the logic in your mapping files. For reuse across projects,
just align this schema with the domain model of your contract.
```json
[
  {
    "name": "UserProfile",
    "description": "Represents the current state of a user's profile.",
    "fields": [
      { "name": "id", "type": "ID!" },
      { "name": "name", "type": "String!" },
      { "name": "email", "type": "String!" },
      { "name": "age", "type": "Int!" },
      { "name": "country", "type": "String!" },
      { "name": "isKYCApproved", "type": "Boolean!" },
      { "name": "isDeleted", "type": "Boolean!" }
    ]
  },
  {
    "name": "ProfileCreated",
    "description": "Captures the event when a new user profile is created.",
    "fields": [
      { "name": "id", "type": "ID!" },
      { "name": "userId", "type": "BigInt!" },
      { "name": "userProfile", "type": "UserProfile!" }
    ]
  },
  {
    "name": "ProfileUpdated",
    "description": "Captures the event when an existing user profile is updated.",
    "fields": [
      { "name": "id", "type": "ID!" },
      { "name": "userId", "type": "BigInt!" },
      { "name": "userProfile", "type": "UserProfile!" }
    ]
  },
  {
    "name": "ProfileDeleted",
    "description": "Captures the event when a user profile is soft-deleted.",
    "fields": [
      { "name": "id", "type": "ID!" },
      { "name": "userId", "type": "BigInt!" },
      { "name": "userProfile", "type": "UserProfile!" }
    ]
  }
]
```
### 5. Create userdata.ts file
This file contains the event handler functions written in AssemblyScript. It
directly responds to the events emitted by your smart contract and updates the
subgraph's store accordingly. Each exported function matches an event in the
YAML manifest. Inside each function, the handler builds a unique ID for the
event (usually combining the transaction hash and log index), processes the
event payload, and updates or creates the relevant entity (here, UserProfile).
The logic can include custom processing like formatting values, filtering, or
even transforming data types. This file is where your business logic resides,
similar to an event-driven backend microservice. You should keep this file
modular and focused, avoiding code repetition by calling reusable helper
functions like fetchUserProfile. When writing this from scratch, always import
the generated event types and schema entities, and handle edge cases like entity
non-existence or inconsistent values.
```ts
import { BigInt } from "@graphprotocol/graph-ts";
import {
  ProfileCreated as ProfileCreatedEvent,
  ProfileUpdated as ProfileUpdatedEvent,
  ProfileDeleted as ProfileDeletedEvent,
} from "../../generated/generated/userdata/UserData";
import {
  UserProfile,
  ProfileCreated,
  ProfileUpdated,
  ProfileDeleted,
} from "../../generated/generated/schema";
import { fetchUserProfile } from "../fetch/userdata";

export function handleProfileCreated(event: ProfileCreatedEvent): void {
  // Generate a unique event ID using transaction hash and log index
  let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
  let entity = new ProfileCreated(id);
  entity.userId = event.params.userId;
  // Fetch or create the UserProfile entity
  let profile = fetchUserProfile(event.params.userId);
  profile.name = event.params.name;
  profile.email = event.params.email;
  profile.age = event.params.age;
  profile.country = event.params.country;
  profile.isKYCApproved = event.params.isKYCApproved;
  profile.isDeleted = false;
  profile.save();
  // Link the event entity to the user profile and save
  entity.userProfile = profile.id;
  entity.save();
}

export function handleProfileUpdated(event: ProfileUpdatedEvent): void {
  let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
  let entity = new ProfileUpdated(id);
  entity.userId = event.params.userId;
  // Retrieve and update the existing UserProfile entity
  let profile = fetchUserProfile(event.params.userId);
  profile.name = event.params.name;
  profile.email = event.params.email;
  profile.age = event.params.age;
  profile.country = event.params.country;
  profile.isKYCApproved = event.params.isKYCApproved;
  profile.isDeleted = false;
  profile.save();
  entity.userProfile = profile.id;
  entity.save();
}

export function handleProfileDeleted(event: ProfileDeletedEvent): void {
  let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
  let entity = new ProfileDeleted(id);
  entity.userId = event.params.userId;
  // Retrieve the UserProfile entity and mark it as deleted
  let profile = fetchUserProfile(event.params.userId);
  profile.isDeleted = true;
  profile.save();
  entity.userProfile = profile.id;
  entity.save();
}
```
### 6. Create another userdata.ts in the fetch folder
This is a helper utility designed to avoid redundancy in your mapping file. It
abstracts the logic of either loading an existing entity or creating a new one
if it doesn't exist.
It enhances reusability and reduces boilerplate in each handler function. The
naming convention of this file usually mirrors the module or entity it's
associated with (e.g., fetch/userdata.ts).
The logic inside the function uses the userId (or other unique identifier) as a
string key and ensures that all required fields have a default value. When
writing this from scratch, ensure every field in your GraphQL schema has an
initialized value to prevent errors during Graph Node processing.
```ts
import { BigInt } from "@graphprotocol/graph-ts";
import { UserProfile } from "../../generated/generated/schema";

/**
 * Fetches a UserProfile entity using the given userId.
 * If it does not exist, a new UserProfile entity is created with default values.
 *
 * @param userId - The user ID as a BigInt.
 * @returns The UserProfile entity.
 */
export function fetchUserProfile(userId: BigInt): UserProfile {
  let id = userId.toString();
  let user = UserProfile.load(id);
  if (!user) {
    user = new UserProfile(id);
    user.name = "";
    user.email = "";
    user.age = 0;
    user.country = "";
    user.isKYCApproved = false;
    user.isDeleted = false;
  }
  return user;
}
```
```mermaid
flowchart TD
%% --- Inputs ---
F1["out/UserData.json (ABI from compiler) "]:::tooling
F2["deployed_addresses.json (Deployed contract address) "]:::tooling
F3["deployments/[chain-id] (Defines network chain ID) "]:::tooling
%% --- Configuration Files ---
A1["1 - subgraph.config.json - Declares network, output, and datasources "]:::config
A2["2 - userdata.yaml - Sets ABI, contract address, event handlers "]:::config
%% --- Contract & Events ---
B1["UserData.sol - Smart contract with profile lifecycle logic "]:::contract
B2["Events: ProfileCreated, ProfileUpdated, ProfileDeleted "]:::event
%% --- Mappings & Helpers ---
C1["3 - userdata.ts - Mapping logic to handle events and update entities "]:::mapping
C2["4 - fetch/userdata.ts - Loads or creates UserProfile entity "]:::helper
%% --- Schema & Storage ---
D1["5 - userdata.gql.json - GraphQL schema defining types and relationships"]:::schema
D2["Graph Node DB - Stores UserProfile and events, queryable via GraphQL "]:::db
%% --- API Layer ---
E1["GraphQL API - Exposes indexed data to dApps and dashboards "]:::api
%% --- Connections ---
F1 --> A2
F2 --> A1
F3 --> A1
A1 --> A2
A2 --> B1
B1 --> B2
B2 --> C1
A2 --> C1
C1 --> C2
C1 --> D2
D1 --> D2
D2 --> E1
%% --- Styling ---
classDef config fill:#D0EBFF,stroke:#1E40AF,stroke-width:1px
classDef mapping fill:#FEF3C7,stroke:#B45309,stroke-width:1px
classDef schema fill:#E0F2FE,stroke:#0369A1,stroke-width:1px
classDef contract fill:#FECACA,stroke:#B91C1C,stroke-width:1px
classDef event fill:#FCD34D,stroke:#92400E,stroke-width:1px
classDef db fill:#DCFCE7,stroke:#15803D,stroke-width:1px
classDef api fill:#E9D5FF,stroke:#7C3AED,stroke-width:1px
classDef abi fill:#F3E8FF,stroke:#9333EA,stroke-width:1px
classDef helper fill:#F5F5F4,stroke:#3F3F46,stroke-width:1px
classDef tooling fill:#F0F9FF,stroke:#0284C7,stroke-width:1px
```
## Codegen, build and deploy subgraph
### Run codegen script using the task manager of the ide

### Run graph build script using the task manager of the ide

### Run graph deploy script using the task manager of the ide

### Why we see duplication in the GraphQL schema
In The Graph's autogenerated schema, each entity is provided with two types of
queries by default:
* **Single-Entity Query:** `userProfile(id: ID!): UserProfile` *Fetches a single
`UserProfile` by its unique ID.*
* **Multi-Entity Query:** `userProfiles(...): [UserProfile]` *Fetches a list of
`UserProfile` entities, with optional filters to refine the results.*
**Why this duplication exists:**
* **Flexibility in Data Access:** By offering both single-entity and
multi-entity queries, The Graph allows you to choose the most efficient way to
access your data. If you know the exact ID, you can use the single query for a
quick lookup. If you need to display or analyze a collection of records, the
multi-entity query is available.
* **Optimized Performance:** Retrieving a specific record via the single-entity
query avoids unnecessary overhead that comes with filtering through a list,
ensuring more efficient data access when the unique identifier is known.
* **Catering to Different Use Cases:** Different parts of your application may
require different query types. Detailed views might need a single record
(using userProfile), while list views benefit from the filtering and
pagination offered by userProfiles.
* **Consistency Across the Schema:** Generating both queries for every entity
ensures a consistent API design, which simplifies development by providing a
predictable pattern for data access regardless of the entity.
### Graph middleware - querying data
We can query based on the ID

Or we can query to return all entries

Congratulations!
You have successfully configured graph middleware and deployed subgraphs to
enable smart contract indexing. With this you have both read and write
middleware for your smart contracts.
This marks the end of the core Web3 development. From here we will proceed to
adding off-chain database and storage options, giving us a holistic backend and
storage layer for our application.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/setup-offchain-database.mdx
meta: {
"title": "Setup off-chain database",
"description": "Add Hasura backend-as-a-service with off-chain database"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To integrate off-chain storage into your blockchain application, you should
begin by adding Hasura as a backend-as-a-service via SettleMint. This will
provision a fully managed PostgreSQL database, paired with a real-time
GraphQL API layer. It enables you to manage non-critical or frequently
updated data that doesn't need to live on-chain, without compromising
performance or flexibility.
Start by navigating to your application and opening the integration tools
section. Click on add an integration tool, select Hasura, and follow the
steps to choose a name, provider, region, and resource plan. Once deployed,
a dedicated Hasura instance will be available, complete with its own admin
console, GraphQL API, and Postgres connection string. You can manage and
monitor the instance from the same interface.
Once Hasura is set up, you can define your database schema by creating
tables and relationships under the data tab. You can add, modify, and delete
rows directly from the console, or connect to the database using a
PostgreSQL client or code. Every schema and table you define becomes
instantly queryable using the GraphQL API. The API tab will auto-generate
queries and mutations, and also allow you to derive REST endpoints or export
code snippets for frontend/backend use.
For custom business logic, you can implement actions, which are HTTP
handlers triggered by GraphQL mutations. These are useful for data
validation, enrichment, or chaining smart contract calls. If you want to
respond to database changes in real-time, use event triggers to invoke
webhooks when specific inserts, updates, or deletions happen. For recurring
jobs, cron triggers can invoke workflows on a schedule, and one-off
scheduled events allow precision control over future events.
Authentication and authorization can be finely controlled through role-based
access rules. Hasura allows you to enforce row-level permissions and
restrict query types based on user roles. To ensure secure API access, use
the Hasura admin secret and your application access token, both available
from the connect tab of your Hasura console.
You'll also have the option to connect to the Hasura PostgreSQL instance
directly using the connection string. This is useful for running SQL
scripts, performing migrations, or executing batch jobs. Whether you're
using a Node.js backend or a command-line tool like psql, your Hasura
database acts like any standard PostgreSQL instance, with enterprise-grade
reliability.
Backups are easy to configure using the pg\_dump utility or via the Hasura
CLI. You can export both your database data and metadata, and restore them
in new environments as needed. Use hasura metadata export to get a full
snapshot of your permissions, tracked tables, actions, and relationships.
Then use hasura metadata apply or hasura metadata reload to rehydrate or
sync a new instance.
By combining Hasura's flexibility with the immutability of your on-chain
smart contracts, you will be able to design a clean hybrid architecture,
critical operations are stored securely on-chain, while scalable, queryable,
and user-driven data remains off-chain. This setup dramatically improves
user experience, simplifies front-end development, and keeps infrastructure
costs under control.
Many dApps need more than just decentralized tools to build an end-to-end
solution. The SettleMint Hasura SDK provides a seamless way to interact with
Hasura GraphQL APIs for managing application data.

## Need for an on-chain and off-chain data architecture
In blockchain-based applications, not all data needs to, or should, reside
on-chain. While critical state changes, token ownerships, or verifiable proofs
are best kept immutable and transparent on a blockchain, a large portion of
application data such as user profiles, analytics, logs, metadata, and UI-driven
state is better suited to an off-chain data store. Storing everything on-chain
is neither cost-effective nor performance-friendly. On-chain data is expensive
to store and slow to query for complex front-end or dashboard use cases.
This is where a **hybrid architecture** becomes essential. In such an approach,
data is partitioned based on its importance and usage:
* **On-chain layer** serves as the source of truth for verifiable,
consensus-driven actions like token transfers, proofs, and governance.
* **Off-chain layer** handles high-volume, user-generated, or fast-changing data
that benefits from relational structure, rich queries, and low latency.
This model provides the best of both worlds: **immutability and trust from
blockchain**, and **speed, flexibility, and developer-friendliness from
traditional databases**.
## How Hasura on SettleMint supports this architecture
SettleMint offers Hasura as a Backend-as-a-Service (BaaS), tightly integrated
into its low-code blockchain development stack. Hasura provides a
high-performance, real-time GraphQL API layer on top of a PostgreSQL database,
and allows developers to instantly query, filter, and subscribe to changes in
the data without writing custom backend logic.
### Key capabilities of Hasura on SettleMint
* A fully managed **PostgreSQL database** is provisioned automatically with each
Hasura instance.
* Hasura auto-generates a powerful and expressive **GraphQL API** for all the
tables and relationships defined in the database.
* It allows **integration with external databases** or REST/GraphQL services,
making it possible to unify multiple data sources behind one GraphQL endpoint.
* **Role-based access control** ensures secure data access aligned with business
logic and user permissions.
## Benefits of using Hasura in a blockchain project
Hasura is especially useful for building interfaces, dashboards, and off-chain
tools in blockchain applications. Developers can use it to:
* Store non-critical or frequently updated data like user preferences, audit
logs, or API call metadata.
* Power admin panels or reporting dashboards with complex filtering, sorting,
and aggregation capabilities.
* Perform fast and reliable queries without the overhead of smart contract reads
or event processing.
* Sync or mirror blockchain data into Postgres via indexing services (like The
Graph or custom workers), and build additional logic around it.
For example, while the verification of a credential or the execution of a
transaction happens on-chain, the user's profile details, usage history, or
interactions with the platform can be managed off-chain using Hasura. This
results in a responsive and scalable user experience, without compromising on
the core security and trust guarantees of blockchain.
## Off-chain database use cases in blockchain applications
| Category | Use Cases |
| ------------------------------- | ------------------------------------------------------------------------------------------------ |
| **User Management & Metadata** | User profiles, KYC/AML data, Recovery info, Social links, Preferences, Session tokens |
| **Dashboards & Reporting** | Admin panels, KPIs, Filters & aggregation, Charts, Audit logs, Time-series insights |
| **App Logic & State** | Workflow states, Business rules, Off-chain approvals, Drafts, Automation triggers, API call logs |
| **User Content** | Blog posts, Comments, Ratings, Articles, Feedback, Forum threads, Attachments |
| **External/API Data** | Oracle/cache data, API mirrors, Off-chain credentials, IoT inputs, External system sync |
| **Historical & Time Data** | Snapshots, Transition logs, Archived state, Event sync history, Audit trails |
| **Content & Config** | UI content, Static pages, Themes, Menus, Feature flags, Editable app config |
| **UX & Transactions** | Pending tx queues, Gas estimates, Slippage data, NFT views, Pre-submit staging, Local metadata |
| **Admin & Dev Tools** | Schema maps, Dev notes, Admin dashboards, Background jobs, Flagged items |
| **Security & Access** | Role bindings, Access logs, Encrypted fields, Policy metadata, Permissions history |
| **Hybrid & Indexing** | Enriched on-chain data, Indexed events, ID mapping, Postgres mirroring, ETL-ready layers |
| **E-commerce / Token Economy** | Product catalog, Shopping cart, Delivery tracking, Disputes, Refund metadata |
| **Education / DAO / Community** | Learning progress, Badges, Voting drafts, Moderation flags, Contribution history |
| **Data Ops & Recovery** | Data backups, Exportable datasets, Disaster recovery layer, Compliance archiving |
## Add hasura
### Navigate to application
Navigate to the **application** where you want to add Hasura.
### Access integration tools
Click **integration tools** in the left navigation, and then click **add an integration tool**. This opens a form.
### Configure Hasura
1. Select **Hasura**, and click **continue**
2. Choose a **name** for your backend-as-a-service
3. Choose a deployment plan (provider, region, resource pack)
4. Click **confirm** to add it
First ensure you're authenticated:
```bash
settlemint login
```
Create Hasura instance:
```bash
settlemint platform create integration-tool hasura
# Get information about the command and all available options
settlemint platform create integration-tool hasura --help
```
For a full example of how to work with Hasura using the SDK, see the [Hasura SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-hasura#api-reference).
The SDK enables you to easily query and mutate data stored in your SettleMint-powered PostgreSQL databases through a type-safe GraphQL interface. For detailed API reference, check out the [Hasura SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/hasura).
## Some basic features
* Under the data subtab you can create an arbitrary number of **schemas**. A
schema is a collection of tables.
* In a schema you can create **tables**, choose which columns you want and
define relations and indexes.
* You can add, edit and delete **data** in these columns as well.
[Learn more here](https://hasura.io/docs/2.0/schema/postgres/tables/)
Any table you make is instantly visible in the **API subtab**. Note that by
using the **REST and derive action buttons** you can convert queries into REST
endpoints if that fits your application better. Using the **code exporter
button** you can get the actual code snippets you can use in your application or
the integration studio.
A bit more advanced are **actions**. Actions are custom queries or mutations
that are resolved via HTTP handlers. Actions can be used to carry out complex
data validations, data enrichment from external sources or execute just about
any custom business logic. Actions can be kickstarted by using the **derive
action button** in the **API subtab**.
[Learn more here.](https://hasura.io/docs/2.0/actions/overview/)
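For illustration, an action handler is just an HTTP endpoint that receives the mutation's input and returns the action's output type. A minimal sketch in Node.js, where the `validateProfile` action name and its fields are hypothetical (the request body shape follows Hasura's action webhook format):

```typescript
import { createServer } from "node:http";

// Hasura POSTs a JSON body of the shape:
// { action: { name }, input: { ...mutation arguments }, session_variables: { ... } }
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const { action, input } = JSON.parse(body);
    if (action.name === "validateProfile") {
      // Hypothetical validation/enrichment logic before data reaches the database
      const valid = typeof input.email === "string" && input.email.includes("@");
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ valid }));
    } else {
      res.writeHead(400, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ message: `unknown action: ${action.name}` }));
    }
  });
}).listen(3000); // Matches the handler_webhook_baseurl used later in this guide
```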
If you need to execute tasks based on changes to your database you can leverage
**events**. An **event trigger** atomically captures events (insert, update,
delete) on a specified table and then reliably calls a HTTP webhook to run some
custom business logic.
[Learn more here.](https://hasura.io/docs/latest/graphql/core/event-triggers/index.html)
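The webhook called by an event trigger receives the affected row in its payload; the sketch below simply logs each captured operation (the table columns and port are assumptions):

```typescript
import { createServer } from "node:http";

// Event trigger payloads include event.op ("INSERT" | "UPDATE" | "DELETE")
// and event.data.old / event.data.new with the row before and after the change.
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const { event, table } = JSON.parse(body);
    console.log(`${event.op} on ${table.schema}.${table.name}:`, event.data.new);
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ received: true }));
  });
}).listen(4000);
```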
**Cron triggers** can be used to reliably trigger HTTP endpoints to run some
custom business logic periodically based on a cron schedule.
**One-off scheduled events** are individual events that can be scheduled to
reliably trigger a HTTP webhook to run some custom business logic at a
particular timestamp.
**Access to your database** can be handled all the way to the row level by using
the authentication and authorization options available in Hasura.
[Learn more here.](https://hasura.io/docs/2.0/auth/overview/)
This is of course on top of the
[application access tokens](/platform-components/security-and-authentication/application-access-tokens)
and
[personal access tokens](/platform-components/security-and-authentication/personal-access-tokens)
in the platform you can use to close off access to the entire API.
## Usage examples
You can interact with your Hasura database in two ways: through the GraphQL API
(recommended) or directly via PostgreSQL connection.
```javascript
import fetch from 'node-fetch';

// Configure your authentication details
const HASURA_ENDPOINT = "YOUR_HASURA_ENDPOINT";
const HASURA_ADMIN_SECRET = "YOUR_HASURA_ADMIN_SECRET"; // Found in the "Connect" tab of Hasura console
const APP_ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN"; // Generated following the Application Access Tokens guide

// Reusable function to make GraphQL requests
async function fetchGraphQL(operationsDoc, operationName, variables) {
  try {
    const result = await fetch(HASURA_ENDPOINT, {
      method: "POST",
      headers: {
        'Content-Type': 'application/json',
        'x-hasura-admin-secret': HASURA_ADMIN_SECRET,
        'x-auth-token': APP_ACCESS_TOKEN
      },
      body: JSON.stringify({
        query: operationsDoc,
        variables: variables,
        operationName: operationName
      })
    });

    if (!result.ok) {
      const text = await result.text();
      throw new Error(`HTTP error! status: ${result.status}, body: ${text}`);
    }

    return await result.json();
  } catch (error) {
    console.error('Request failed:', error);
    throw error;
  }
}

// Query to fetch verification records
const operationsDoc = `
  query MyQuery {
    verification {
      id
    }
  }
`;

// Mutation to insert a new verification record
const insertOperationDoc = `
  mutation InsertVerification($name: String!, $status: String!) {
    insert_verification_one(object: {name: $name, status: $status}) {
      id
      name
      status
    }
  }
`;

// Function to fetch verification records
async function main() {
  try {
    const { errors, data } = await fetchGraphQL(operationsDoc, "MyQuery", {});
    if (errors) {
      console.error('GraphQL Errors:', errors);
      return;
    }
    console.log('Data:', data);
  } catch (error) {
    console.error('Failed:', error);
  }
}

// Function to insert a new verification record
async function insertWithGraphQL() {
  try {
    const { errors, data } = await fetchGraphQL(insertOperationDoc, "InsertVerification", {
      name: "Test User",
      status: "pending"
    });
    if (errors) {
      console.error('GraphQL Errors:', errors);
      return;
    }
    console.log('Inserted Data:', data);
  } catch (error) {
    console.error('Failed:', error);
  }
}

// Execute both query and mutation
main();
insertWithGraphQL();
```
```javascript
import pkg from 'pg';
const { Pool } = pkg;

// Initialize PostgreSQL connection (get connection string from Hasura console -> "Connect" tab)
const pool = new Pool({
  connectionString: 'YOUR_POSTGRES_CONNECTION_STRING'
});

// Simple query to read all records from verification table
const readData = async () => {
  const query = 'SELECT * FROM verification';
  const result = await pool.query(query);
  console.log('Current Data:', result.rows);
};

// Insert a new verification record with sample data
const insertData = async () => {
  const query = `
    INSERT INTO verification (id, identifier, value, created_at, expires_at)
    VALUES ($1, $2, $3, $4, $5)
    RETURNING *`;
  // Sample values - modify according to your needs
  const values = [
    'test-id-123',
    'test-identifier',
    'test-value',
    new Date(),
    new Date(Date.now() + 24 * 60 * 60 * 1000) // Sets expiry to 24h from now
  ];
  const result = await pool.query(query, values);
  console.log('Inserted:', result.rows[0]);
};

// Update an existing record by ID
const updateData = async () => {
  const query = `
    UPDATE verification
    SET value = $1, updated_at = $2
    WHERE id = $3
    RETURNING *`;
  const values = ['updated-value', new Date(), 'test-id-123'];
  const result = await pool.query(query, values);
  console.log('Updated:', result.rows[0]);
};

// Execute all operations in sequence
async function main() {
  try {
    await readData();
    await insertData();
    await updateData();
    await readData();
  } finally {
    await pool.end(); // Close database connection
  }
}

main();
```
## Hasura PostgreSQL database access and connection

For GraphQL API:
1. **Hasura Admin Secret**: Found in the "connect" tab of Hasura console
2. **Application Access Token**: Generate this by following our
[Application Access Tokens guide](/building-with-settlemint/application-access-tokens)
For PostgreSQL:
1. **PostgreSQL Connection String**: Found in the "connect" tab of Hasura
console under "Database URL"
Always keep your credentials secure and never expose them in client-side code.
Use environment variables or a secure configuration management system in
production environments.
Understanding the PostgreSQL connection string:
`postgresql://hasura-f1cd9:0c510604a378d348e7ed@p2p.gke-europe.settlemint.com:30787/hasura-f1cd9`
Here's how it's broken down:
* **Protocol**: `postgresql://`\
Indicates the connection type: a PostgreSQL database over TCP.
* **Username**: `hasura-f1cd9`\
The database username used for authentication.
* **Password**: `0c510604a378d348e7ed`\
The corresponding password for the above username.
* **Host**: `p2p.gke-europe.settlemint.com`\
The server address (domain or IP) where the PostgreSQL database is hosted.
* **Port**: `30787`\
The network port on which the PostgreSQL service is listening.
* **Database Name**: `hasura-f1cd9`\
The specific PostgreSQL database to connect to on that server.
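Since the connection string is a standard URL, its parts can also be extracted programmatically, for example with Node's built-in WHATWG `URL` parser (using the sample string from above):

```typescript
// Parse the sample connection string into its components.
const url = new URL(
  "postgresql://hasura-f1cd9:0c510604a378d348e7ed@p2p.gke-europe.settlemint.com:30787/hasura-f1cd9"
);

console.log(url.protocol);          // "postgresql:"
console.log(url.username);          // "hasura-f1cd9"
console.log(url.password);          // "0c510604a378d348e7ed"
console.log(url.hostname);          // "p2p.gke-europe.settlemint.com"
console.log(url.port);              // "30787"
console.log(url.pathname.slice(1)); // database name: "hasura-f1cd9"
```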
## Hasura backup
Via the CLI `pg_dump` command:
```bash
PGPASSWORD=0c510604a378d348e7ed pg_dump \
-h p2p.gke-europe.settlemint.com \
-p 30787 \
-U hasura-f1cd9 \
-d hasura-f1cd9 \
-F p \
-f ~/Desktop/hasura_backup.sql
```
## Taking a backup via the Hasura CLI
There are two things to back up:
1. Hasura database
2. Hasura metadata
### Steps for taking a backup of the Hasura database
1. Install the Hasura CLI
   ([https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/](https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/))
2. Run the `hasura init` command to initialize a new Hasura project in the
   working directory.
3. Edit the `config.yaml` file to configure the remote Hasura instance. We need
   to generate an API key in BPaaS and pass it with the endpoint.
Syntax of `config.yaml`:
```yaml
version: 3
endpoint:
admin_secret:
metadata_directory: metadata
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:3000
```
Example:
```yaml
endpoint: https://hasuradb-15ce.gke-japan.settlemint.com/sm_aat_86530f5bf93d82a9
admin_secret: dc5eb1b93f43fd28c53e
metadata_directory: metadata
actions:
  kind: synchronous
  handler_webhook_baseurl: http://localhost:3000
```
4. Run the `hasura console` command (this will sync everything to your local
   Hasura instance).
5. Run this curl command to generate a DB export:
Curl format:
```bash
curl -d '{"opts": [ "-O", "-x", "--schema=public", "--inserts"], "clean_output": true, "source": "default"}' -H "x-hasura-admin-secret: <admin-secret>" <hasura-endpoint>/v1alpha1/pg_dump > db.sql
```
Example:
```bash
curl -d '{"opts": [ "-O", "-x", "--schema=public", "--inserts"], "clean_output": true, "source": "default"}' -H "x-hasura-admin-secret:78b0e4618125322de0eb" https://fuchsiacapybara-7f70.gke-europe.settlemint.com/bpaas-1d79Acd6A2f112EA450F1C07a372a7D582E6121F/v1alpha1/pg_dump > db.sql
```
### Importing data into a new instance
Copy the content of the exported `db.sql` file, paste it, and execute it as a
SQL statement.
### Steps for taking a backup of Hasura metadata
Hasura metadata export is a collection of YAML files which captures all the
metadata required by the GraphQL Engine. This includes info about tables that
are tracked, permission rules, relationships, and event triggers that are
defined on those tables.
If you have already initialized your project via the Hasura CLI you should see
the metadata directory structure in your project directory.
To export your entire metadata using the Hasura CLI execute the following
command in your terminal:
```
# In hasura CLI
hasura metadata export
```
This will export the metadata as YAML files in the `/metadata` directory.
### Steps for importing or applying Hasura metadata
You can apply metadata from one Hasura Server instance to another. You can also
apply an older or modified version of an instance's metadata onto itself to
replace the existing metadata. Applying or importing completely replaces the
metadata on that instance, i.e. you lose any metadata that existed before
applying.
```
# In hasura CLI
hasura metadata apply
```
### Reload Hasura metadata
In some cases, the metadata can be out of sync with the database schema. For
example, when a new column has been added to a table via an external tool.
```
# In hasura CLI
hasura metadata reload
```
For more on Hasura metadata, refer to
[https://hasura.io/docs/latest/migrations-metadata-seeds/manage-metadata/](https://hasura.io/docs/latest/migrations-metadata-seeds/manage-metadata/).
For more on Hasura migrations, refer to
[https://hasura.io/docs/latest/migrations-metadata-seeds/manage-migrations/](https://hasura.io/docs/latest/migrations-metadata-seeds/manage-migrations/).
Congratulations!
You have successfully configured the Hasura backend-as-a-service layer with the
off-chain database of your choice.
From here we will proceed to adding centralized and decentralized storage for
our images, documents, videos, archive files, and other storage needs.
file: ./content/docs/building-with-settlemint/hedera-hashgraph-guide/setup-storage.mdx
meta: {
"title": "Setup storage",
"description": "Add S3 or IPFS storage"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To integrate off-chain file storage into your blockchain application, you can
configure either IPFS (for decentralized content addressing) or MinIO (an
S3-compatible private storage layer) through the SettleMint platform. Both
options serve different use cases, IPFS excels in immutability and decentralized
access, while S3-style storage is better for secure, private, and
high-performance file delivery.
To get started, navigate to the relevant application in your SettleMint
workspace and open the storage section from the left-hand menu. Click add
storage, which opens a configuration form. Choose the storage type, either IPFS
for decentralized or MinIO for private object storage. Assign a name and
configure your deployment settings like region, provider, and resource pack.
Once confirmed, the storage service will be deployed and available for use.
Once provisioned, you can access and manage your storage instance from the
manage storage section. Here, you will be able to view the storage endpoint,
health status, and metadata configuration. If using IPFS, you'll be interacting
with content hashes (CIDs), while MinIO offers an S3-compatible interface where
files are stored under buckets and can be accessed via signed URLs.
Using the SettleMint SDK or CLI, developers will be able to list, query, and
manage storage instances programmatically. The SDK provides a typed interface to
connect, upload, retrieve, and delete files. For example, the
@settlemint/sdk-ipfs package allows seamless pinning and retrieval of files
using CIDs. Similarly, @settlemint/sdk-minio wraps around common S3 operations
like uploading files, generating expirable download URLs, and managing buckets.
Depending on your use case, both IPFS and MinIO can serve as complementary
layers. For public-facing and immutable content, such as NFT metadata, DAO
governance artifacts, or verifiable documents, IPFS is well suited. For private,
regulated, or access-controlled files, like KYC documents, user uploads, admin
reports, and internal metadata, MinIO offers a robust alternative with access
control and performance guarantees.
In practice, a dApp may use both systems in tandem: the file is stored in
S3/MinIO for fast access and usability, while its content hash is stored on IPFS
(and optionally, linked on-chain) to provide tamper-proof guarantees and content
validation. This hybrid model ensures performance, security, and
decentralization where it matters most.
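A minimal sketch of that hybrid pattern, using Node's built-in crypto module to derive the content hash that would be pinned to IPFS or anchored on-chain; the upload call is a hypothetical stand-in for whichever storage client you use:
```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

// Hybrid pattern sketch: keep the file itself in S3/MinIO for fast
// access, and keep a content hash for tamper-proof verification.
const storeWithIntegrityProof = async (path: string) => {
  const bytes = await readFile(path);

  // Recomputing this digest over a downloaded copy later proves the
  // stored file was not modified.
  const sha256 = createHash("sha256").update(bytes).digest("hex");

  // await uploadToMinio("uploads", path, bytes); // hypothetical helper
  return { path, sha256 };
};

storeWithIntegrityProof("./report.pdf").then(console.log);
```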
Once storage is connected, users and developers can begin uploading files via
frontend integrations, backend scripts, or SDK calls. Content uploaded to IPFS
will return a CID, which can be persisted on-chain or referenced in subgraphs
and APIs. Files on S3/MinIO can be secured using signed URLs or policies, making
them suitable for user role–based access or limited-time file sharing.
## Off-chain file storage use cases in blockchain applications
Blockchain applications often require storing documents, images, videos, or
metadata off-chain due to cost, performance, or privacy reasons. Two common
approaches are:
* **IPFS**: A decentralized, content-addressed file system ideal for immutable,
verifiable, and censorship-resistant data.
* **MinIO (S3-compatible)**: A centralized, enterprise-grade storage solution that supports
private files, fine-grained access control, and fast retrieval.
Below are separate use case tables for each.
***
## 🌐 IPFS (InterPlanetary File System)
IPFS is a decentralized protocol for storing and sharing files in a peer-to-peer
network. Files are addressed by their content hash (CID), ensuring immutability
and verification.
| Category | Use Cases |
| -------------------------- | -------------------------------------------------------------------------------------- |
| **NFTs & Metadata** | NFT images and media, Metadata JSON, Reveal assets, Provenance data |
| **Decentralized Identity** | Hash of KYC documents, Verifiable credentials, DID documents, Encrypted identity data |
| **DAOs & Governance** | Proposals with supporting files, Community manifestos, Off-chain vote metadata |
| **Public Records** | Timestamped proofs, Open access research, Transparent regulatory disclosures |
| **Content Publishing** | Articles, Audio files, Podcasts, Open knowledge archives |
| **Gaming & Metaverse** | 3D assets, Wearables, In-game items, IPFS-based map data |
| **Token Ecosystems** | Whitepapers, Token metadata, Proof-of-reserve documents |
| **Data Integrity Proofs** | Merkle tree files, Hashed content for audit, CID-linked validation |
| **Hybrid dApps** | On-chain reference to CID, IPFS-pinned metadata, Public shareable URIs |
| **Data Portability** | Decentralized content backups, Peer-to-peer file sharing, Long-term open data archives |
***
## ☁️ MinIO (S3-compatible object storage)
MinIO is an S3-compatible object storage service that offers speed, scalability,
and rich security features. It is especially suitable for private or
enterprise-grade applications.
| Category | Use Cases |
| ----------------------------- | --------------------------------------------------------------------------------------- |
| **KYC / Identity Management** | Encrypted KYC files, ID document storage, Compliance scans, Signature uploads |
| **User Uploads** | Profile pictures, File attachments, Media uploads, Form attachments |
| **Admin Dashboards** | Exported reports, Internal analytics files, Logs and snapshots |
| **E-Commerce / Marketplaces** | Product images, Order confirmations, Receipts, Invoices |
| **Private DAO Ops** | Budget spreadsheets, Voting records, Internal documents |
| **Education Platforms** | Certificates, Course PDFs, Student submissions |
| **Customer Support** | Ticket attachments, User-submitted evidence, File-based case history |
| **Real-Time Interfaces** | UI asset delivery, Previews, Optimized media for front-end display |
| **Data Recovery** | Automatic backups, Encrypted snapshots, Versioned file histories |
| **Secure Downloads** | Signed URLs for restricted access, Expirable public links, S3-based token-gated content |
***
## Summary: when to use which?
| Use Case Pattern | Recommended Storage |
| ------------------------------------- | ------------------- |
| Public, immutable content | **IPFS** |
| Verifiable, on-chain linked data | **IPFS** |
| Private or role-based content | **S3** |
| Fast real-time access (UI/media) | **S3** |
| Hybrid (IPFS for hash, S3 for access) | **Both** |
Each system has unique advantages. For truly decentralized applications where
transparency and verifiability matter, IPFS is a natural fit. For operational
scalability, secure access, and enterprise-grade needs, S3 provides a reliable
foundation.
In hybrid dApps, combining both ensures performance without compromising on
decentralization.
## Add storage
Navigate to the **application** where you want to add storage. Click **storage** in the left navigation, and then click **add storage**. This opens a form.
### Configure storage
1. Choose storage type (IPFS or MinIO)
2. Choose a **storage name**
3. Configure deployment settings
4. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create storage:
```bash
# Get the list of available storage types
settlemint platform create storage --help
# Create storage
settlemint platform create storage
```
For a full example of how to connect to a storage using the SDK, see the [MinIO SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-minio#api-reference) or [IPFS SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-ipfs#api-reference).
Get your access token from the platform UI under **user settings → API tokens**.
The SDK enables you to:
* Use IPFS for decentralized storage - check out the [IPFS SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/ipfs)
* Use MinIO for S3-compatible storage - check out the [MinIO SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/minio)
## Manage storage
Navigate to your storage and click **manage storage** to:
* View storage details and status
* Monitor health
* Access storage interface
* Update configurations
```bash
# List storage instances
settlemint platform list storage --application my-app
# Get storage details
settlemint platform read storage my-storage
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';

const client = createSettleMintClient({
  accessToken: 'your_access_token',
  instance: 'https://console.settlemint.com'
});

// List storage instances
const listStorage = async () => {
  const storages = await client.storage.list("your-app-id");
  console.log('Storage instances:', storages);
};

// Get storage details
const getStorage = async () => {
  const storage = await client.storage.read("storage-unique-name");
  console.log('Storage details:', storage);
};
```
Congratulations!
You have successfully added S3 and IPFS storage to your application environment.
From here we will proceed to adding custom container deployments, where you can
host your application's front-end user interface or any other services required
to complete your application.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/add-network-and-nodes.mdx
meta: {
"title": "Add network and nodes",
"description": "Guide to adding a Blockchain Network to your application"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
import React from "react";
Summary
To build a blockchain application, the first step is setting up a blockchain network with the correct number of validating and non-validating nodes. You can either deploy a permissioned network such as Hyperledger Besu or GoQuorum, or connect to an L1 or L2 Public Network like Ethereum, Polygon PoS, Hedera, Polygon zkEVM, Avalanche, Arbitrum, or Optimism. Both mainnet and testnet versions are available for public networks.
When creating an application on SettleMint, you will be prompted to select a
network and assign it a name. By default, a first validating node is deployed
along with the network, and you must assign a name to it as well. You may
optionally provide an ECDSA P-256 private key to use as custom key material for
the node identity. If no key is provided, SettleMint will generate one
automatically and save it in your private keys.
In SettleMint-managed (SaaS) mode, you will need to choose between a shared or
dedicated cluster for deployment. You can also select a cloud provider and a
data center of your choice. Additionally, you will have the option to select
from small, medium, or large resource packs, which can be scaled up or down
later as needed.
Before deploying the network, you will have the option to configure network
settings and customize the genesis file. For most use cases, it is recommended
to keep the default settings. Once configured, you can proceed with deployment.
After a few minutes, your network manager and first node will be fully
operational.
To enhance reliability, you should add more nodes to your network for fault
tolerance. The best practice is to deploy four validator nodes and two
non-validator nodes. Once the nodes are set up, adding a load balancer will help
distribute network traffic efficiently and improve performance.
Once your network, nodes, and load balancer are running, you can access the
Insights tab to integrate monitoring tools. For permissioned networks, you can
add Blockscout Blockchain Explorer to track transactions and network activity.
If you are using public EVM networks, publicly available blockchain explorers
can be used instead.
## Prerequisites
Before setting up a blockchain network, you need to have an application created
in your workspace. Applications provide the organizational context for all your
blockchain resources including networks, nodes, and development tools. If you
haven't created an application yet, follow our
[Create Application](/building-with-settlemint/evm-chains-guide/create-an-organization-and-application)
guide first.
## 1. Add blockchain network
For EVM chains, SettleMint offers Hyperledger Besu and Quorum for permissioned
networks and a range of public networks to choose from. For the list of
supported networks, please refer to
[Supported Networks](/platform-components/blockchain-infrastructure/network-manager#supported-blockchain-network-protocols)

When you deploy a network, the first node is automatically deployed with it; in
a Fabric network this is an orderer node. Once you have deployed a permissioned
network or joined a public network, you can add more nodes to it.
## 2. Add blockchain nodes
To see and add nodes, please click on **Blockchain Nodes** tile on the dashboard
or use the **Blockchain Nodes** link in the left menu.

To bootstrap a functional and fault-tolerant Hyperledger Fabric network, certain
minimum infrastructure elements are needed. These include peer nodes for each
organization and orderer nodes for the consensus layer. While development or
test networks can run on minimal nodes, production networks typically enforce
fault-tolerant configurations using Raft consensus and multiple organizations.
### Minimum recommended node setup for fabric
| Component | Minimum Nodes | Recommended Setup | Notes |
| ----------------------- | ------------- | ------------------------ | --------------------------------------------------------------------- |
| Peer Nodes | 1 per org | 2+ per org | At least one anchor peer is needed per org; more for load balancing. |
| Orderer Nodes | 3 (Raft) | 5 (odd number preferred) | Raft requires a quorum (>50%) for consensus; use odd numbers. |
| Organizations | 2 | 3+ | More orgs simulate decentralized governance and endorsement policies. |
| Certificate Authorities | 1 per org | 1 per org + TLS CA | Separate TLS and root CA improves security isolation. |
* **Peer Nodes**: A network with a single organization and one peer node is
technically valid but does not reflect real consortium setups. For endorsement
policies and distributed state replication, at least two orgs with peers are
recommended.
* **Orderers with Raft**: The Raft ordering service requires **at least 3 nodes**
for high availability. Using 5 nodes allows up to two nodes to go down without
losing quorum (see the quorum sketch below).
* **CA Services**: Each org should operate its own CA for identity issuance. TLS
CAs are often split for better separation of concerns.
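The quorum arithmetic behind these numbers is simple; the sketch below only makes the majority rule concrete:
```typescript
// Raft quorum math: a cluster of n orderers needs a majority
// (floor(n / 2) + 1) to make progress, so it tolerates
// floor((n - 1) / 2) failed nodes.
const raftTolerance = (orderers: number) => ({
  orderers,
  quorum: Math.floor(orderers / 2) + 1,
  toleratedFailures: Math.floor((orderers - 1) / 2),
});

console.log(raftTolerance(3)); // quorum 2, tolerates 1 failure
console.log(raftTolerance(5)); // quorum 3, tolerates 2 failures
```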

Users can configure the following settings before deploying a **Fabric
network**:
| Parameter | Description |
| ---------------------------- | ---------------------------------------------------------------------------------------- |
| Endorsement Policy | Defines transaction endorsement requirements ("By all peers" or "By majority of peers"). |
| Batch Timeout | Time before transactions are grouped into a block. |
| Max Messages in Batch | Maximum number of messages in a batch. |
| Absolute Max Bytes in Batch | Upper limit on batch size in megabytes (MB). |
| Preferred Max Bytes in Batch | Preferred batch size in megabytes (MB). |
#### Channel configuration and policies
Hyperledger Fabric networks use a `configtx.json` file to define network
channels, membership rules, and policies. Key components include the following
(an illustrative sketch follows the list):
* **Application Group**: Defines policies for participating organizations,
specifying details such as:
* **Organization Name**
* **Policies**:
* **Admin**: Roles allow users to modify configurations.
* **Endorsement**: Policies require transaction approvals from specific
peers.
* **Readers and Writers**: Policies define access to channel data.
* **Orderer Group**: Configures the ordering service responsible for transaction
finalization. Settings include:
* **Batch Timeout**: Determines the time before transactions are grouped into
a block.
* **Max Messages Per Batch**: Controls block size.
* **Consensus Type**: Typically `etcdraft`, a Raft-based ordering service.
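To make this structure concrete, here is a trimmed, illustrative fragment of what such a channel configuration expresses, written as a TypeScript object rather than the literal `configtx.json` schema (field names are simplified for readability):
```typescript
// Illustrative sketch only: a simplified view of the channel
// configuration concepts above, not the exact configtx.json layout.
const channelConfig = {
  applicationGroup: {
    organizations: [
      {
        name: "Org1",
        policies: {
          admins: "which identities may modify configuration",
          endorsement: "which peers must approve transactions",
          readers: "who may read channel data",
          writers: "who may submit transactions",
        },
      },
    ],
  },
  ordererGroup: {
    consensusType: "etcdraft",
    batchTimeout: "2s",
    maxMessagesPerBatch: 500,
  },
};

console.log(JSON.stringify(channelConfig, null, 2));
```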
#### Network governance and security
Hyperledger Fabric networks require robust security and governance mechanisms:
* **Membership Service Provider (MSP)**: Controls identity verification and
authentication, ensuring only authorized participants can access the network.
* **Root Certificates and TLS Certificates**: Define trusted entities for secure
communication.
* **Endorsement Policies**: Determine how transactions are validated across
organizations, enforcing compliance and preventing unauthorized modifications.
* **Block Validation Policies**: Ensure the integrity and security of the
distributed ledger, maintaining network trustworthiness.
***
### Hyperledger fabric networks

The **dashboard** offers comprehensive network monitoring, including:
* **Network Overview**: Name, deployment location, creation date, blockchain
version, protocol type, channel ID, MSP ID.
* **Channel Configuration JSON File Access**.
* **Batch Processing Settings**:
* Timeout
* Maximum messages
* Batch size
#### Real-time performance monitoring

* Number and location of nodes.
* Active consensus nodes and cluster size.
* Latest block committed.
* Real-time transaction monitoring, allowing users to keep track of all
blockchain activities.
* Health status of orderer and peer nodes.
* Performance analytics, including block generation times, to help organizations
optimize their blockchain operations.
* Endorsement policy compliance tracking to ensure transactions adhere to
predefined security and governance policies.
#### System recommendations
> **Recommendation**\
> Alerts for **fault tolerance** and **orderer node requirements** are provided
> in the system.
#### Key benefits
* Simplifies the deployment process for Hyperledger Fabric networks through a
guided setup approach.
* Efficiently configures access control, consensus models, and governance
settings, ensuring a seamless blockchain deployment experience.
* Designed for scalability, supporting multi-organization setups with secure
identity management.
* Integrated monitoring provides organizations with real-time insights into
network performance and compliance adherence.
***
## Hyperledger fabric explorer
Hyperledger Explorer is a web-based tool designed to provide a **comprehensive
and real-time** view of blockchain operations within **Hyperledger Fabric**
networks. It enables users to monitor and analyze blockchain activities,
including **blocks, transactions, and chaincodes**, while maintaining privacy
and security. With its feature-rich dashboard, Hyperledger Explorer allows users
to **navigate through blocks, transactions, peers, and channels** with ease. The
tool provides advanced search and filtering capabilities, real-time
notifications for new blocks, and interactive metrics for visualizing blockchain
trends. By offering deep insights into ledger data and enabling efficient
network management, Hyperledger Explorer becomes an essential solution for
organizations leveraging **Hyperledger Fabric**.

* **Real-time Monitoring**: Displays network activity as it happens, providing
immediate visibility into new blocks and transactions.
* **Comprehensive Dashboard**: A central hub for monitoring network health,
including metrics such as the number of blocks, transactions, nodes, and
chaincodes.
* **Detailed Block & Transaction Views**:
* Block list with metadata such as block hash, transaction count, and
timestamps.
* Transaction explorer for tracking transaction details, types, and associated
metadata.
* **Search & Filtering**:
* Filter transactions and blocks by **date range, channel, or organization**.
* Advanced sorting capabilities for customized data views.
* **Channel & Chaincode Management**:
* View and manage available channels.
* Display installed chaincodes with versioning details.
* **Interactive Metrics & Analytics**:
* Graphical visualizations of blockchain activity.
* Hover-based insights for precise data analysis.
## Dashboard overview
The **Dashboard** serves as the main interface, providing an overview of the
blockchain network. It includes various panels such as **Peer Lists, Network
Metrics, and Recent Transactions by Organization**. Users can dynamically switch
channels via a dropdown to customize their view. Additionally, a **Latest Blocks
Notification Panel** presents key block details, including:
* Block number
* Channel name
* Data hash
* Transaction count
Each block link redirects to an in-depth **Block Details** view, offering
insights into timestamps, hashes, and transaction summaries.
## Network & channel management
The **Network View** presents details on configured properties for each channel.
Users can analyze peer statuses, their roles, and network configurations,
including **ledger height and Membership Service Provider (MSP) identity**.
The **Channel List** section provides an overview of available channels,
enabling users to navigate different segments of the blockchain network
effortlessly.
## Exploring blocks & transactions
Hyperledger Explorer provides powerful tools for tracking blockchain activities:
* **Block List**: A sortable, filterable table displaying block metadata like
block hash, transaction count, and creation timestamps.
* **Transaction List**: Supports up to **100 rows per page** with pagination and
allows users to drill down into transaction specifics.
* **JSON Transaction Views**: Enables structured previews with fold/unfold
options for easy data inspection.
## Chaincodes & smart contracts
The **Chaincode List** presents installed chaincodes across the network,
allowing filtering and sorting by:
* Chaincode name
* Version
* Deployment status
* Associated transactions
This section helps users manage smart contracts efficiently and track changes
over time.
## Analytics & metrics
A dedicated **Metrics Panel** delivers real-time statistics, such as:
* Number of blocks and transactions processed per hour or minute
* Network activity trends over time
* Interactive charts for monitoring blockchain operations
These visual analytics tools enhance user insights and ensure efficient
blockchain monitoring.
Congratulations!
You have successfully built the blockchain infrastructure layer for your
application.
From here you can proceed to the development and deployment of chaincodes.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/audit-logs.mdx
meta: {
"title": "Audit logs",
"description": "Audit logs for the actions performed on SettleMint platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
The audit log keeps a detailed record of user actions across the system, helping
teams monitor activity, track changes, and stay compliant with internal and
external requirements. Each entry includes a timestamp, showing exactly when
something was done, which makes it easier to follow the flow of events and spot
any irregularities.

It also records the user who performed the action, adding a layer of
accountability by linking every change to a specific individual or system role.
This is especially useful when reviewing changes or troubleshooting unexpected
behavior.
The service field highlights which part of the platform was involved, whether
it’s an integration, middleware component, or another system area. Alongside
that, the action field captures what was done, like creating, editing, or
deleting something. Together, these fields give teams a clear snapshot of what
happened, where, and by whom.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/create-an-application.mdx
meta: {
"title": "Create an application",
"description": "Guide to creating a blockchain application on SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To get started on the SettleMint platform, you need to create an
organization by going to the homepage or clicking the grid icon, then
selecting “Create new organization.” You’ll need to enter a name and
complete the billing setup using Stripe to activate it.
Once your organization is ready, you need to invite your team members by
entering their email addresses, selecting their roles, and sending the
invitation. After that, you need to create an application within the
organization by giving it a name and confirming.
You can manage your organization and applications from the dashboard, change
names, invite more members, or delete resources when needed. You can also
create and manage applications using the SDK CLI or SDK JS if you prefer to
work programmatically.
## How to create an organization and application in SettleMint platform
An organization is the highest level of hierarchy in SettleMint. It's at this
level that you can create and manage blockchain applications, invite team
members to collaborate and manage billing.


You created your first organization when you signed up to use the SettleMint
platform, but you can create as many organizations as you want, e.g. for your
company, departments, teams, clients, etc. Organizations help you structure your
work, manage collaboration, and keep your invoices clearly organized.
Create an organization:
1. Navigate to the homepage, or click the grid icon in the upper right corner.
2. Click **Create new organization**. This opens a form.
3. Choose a **name** for your organization. Pick a name that is easily
   recognizable in your dashboards, e.g. your company name, department name, or
   team name. You can change the name of your organization at any time.
4. Enter **billing information**. SettleMint creates a billing account for this
   organization, and you will be billed monthly for the resources you use within
   it. Provide your billing details securely via Stripe, with support for Visa,
   Mastercard, and Amex, to activate your organization. Follow the prompts to
   complete the setup and gain full access to SettleMint's blockchain development
   tools, and ensure all details are accurate to enable a smooth onboarding
   experience. Invoices are issued on the 1st of every month.
5. Click **Confirm** to go to the organization dashboard. From here, you can
   create your first application in this organization. The dashboard shows a
   summary of your organization's applications, the members of the organization,
   and the resource costs for the current month.
When you create an organization, you are the owner, and therefore an
administrator of the organization. This means you can perform all actions within
this organization, with no limitations.
## Invite new organization members

Navigate to the **Members section** of your organization, via the homepage, or
via your organization dashboard.
Follow these steps to invite new members to your organization:
1. Click **Invite new member**.
2. Enter the **email address** of the person you want to invite.
3. Select their **role**, i.e. whether they will be an administrator or a user.
4. Optionally, you can add a **message** to be included in the invitation email.
5. Click **Confirm** to go to the list of your organization's members. Your
email invitation has now been sent, and you see in the list that it is
pending.
## Manage an organization
Navigate to the **organization dashboard**.
Click **Manage organization** to see the available actions. You can only perform
these actions if you have administrator rights for this organization.
* **Change name** - Changes the organization name without any further impact.
* **Delete organization** - Removes the organization from the platform.
On the organization dashboard you can:
* See all applications in that organization
* See all members of the organization
* See all internal applications and clients, if in partner mode
You can only delete an organization when it has no applications related to it.
Applications have to be deleted one by one, once all their related resources
(e.g. networks, nodes, smart contract sets, etc.) have been deleted.
## Create an application
An application is the context in which you organize your networks, nodes, smart
contract sets and any other related blockchain resource.
You will always need to create an application before you can deploy or join
networks, and add nodes.
## How to create a new application

### Access Application Creation
In the upper right corner of any page, click the **grid icon**
### Navigate & Create
* Navigate to your workspace
* Click **Create new application**
### Configure Application
* Choose a **name** for your application
* Click **Confirm** to create the application
First, install the [SDK CLI](https://github.com/settlemint/sdk/blob/main/sdk/cli/README.md#usage) as a global dependency.
Then, ensure you're authenticated. For more information on authentication, see the [SDK CLI documentation](https://github.com/settlemint/sdk/blob/main/sdk/cli/README.md#login-to-the-platform).
```bash
settlemint login
```
Create an application:
```bash
settlemint platform create application
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create application
const createApp = async () => {
const result = await client.application.create({
workspaceUniqueName: "your-workspace",
name: "myApp"
});
console.log('Application created:', result);
};
// List applications
const listApps = async () => {
const apps = await client.application.list("your-workspace");
console.log('Applications:', apps);
};
// Read application details
const readApp = async () => {
const app = await client.application.read("app-unique-name");
console.log('Application details:', app);
};
// Delete application
const deleteApp = async () => {
await client.application.delete("application-unique-name");
};
```
Get your access token from the Platform UI under User Settings → API Tokens.
## Manage an application
The SettleMint Platform Dashboard provides a centralized view of blockchain
infrastructure, offering real-time insights into system components. With health
status indicators, including error and warning counts, it ensures system
stability while enabling users to proactively address potential issues. Resource
usage tracking helps manage costs efficiently, providing month-to-date expense
insights.
Each component features a “Details” link for quick access to in-depth
information, while the intuitive navigation panel allows seamless access to key
modules such as Audit Logs, Access Tokens, and Insights. Built-in support
options further enhance usability, ensuring users can quickly troubleshoot and
resolve issues.

Navigate to your application and click **Manage app** to see available actions:
* View application details
* Update application name
* Delete application
```bash
# List applications
settlemint platform list applications
# Delete application
settlemint platform delete application
```
```typescript
// List applications
await client.application.list("your-workspace");
// Read application
await client.application.read("app-unique-name");
// Delete application
await client.application.delete("app-unique-name");
```
All operations require appropriate permissions in your workspace.
Congratulations!
You have successfully created an organization and added an application within
it. From here, you can proceed to deploy a network and add nodes, a load
balancer, and a blockchain explorer.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/deploy-chain-code.mdx
meta: {
"title": "Deploy chaincode",
"description": "Guide to deploy chaincode"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
The process starts by defining a clean and structured chaincode using
TypeScript. Transaction functions are added to handle common operations,
creating new entries, reading existing ones, updating data, and deleting
records. These functions follow familiar CRUD patterns but are designed to
work within the immutable environment of a blockchain. Events are emitted
during each transaction to make state changes observable by off-chain
systems or external services.
Once the code is ready, it's compiled and packaged into a deployable
archive. The chaincode is then installed on peers, approved by
organizations, and committed to a Fabric channel using a lifecycle
management script. If needed, an init function can be invoked to populate
default data or configure initial state.
After deployment, functions can be tested directly by invoking or querying
them through the provided CLI. This end-to-end workflow, from writing the
chaincode to interacting with it on-chain, follows a consistent and
repeatable pattern, making it easier to manage chaincode across different
stages of development and deployment.
## Learning with an asset transfer chaincode example
The goal of this tutorial is to design and build a simple asset transfer
chaincode using TypeScript. While the visible use case is centered around
managing asset data (such as ID, color, size, owner, and appraised value), the
hidden objective is to demonstrate the core thought process behind building a
chaincode that can store, update, read, and delete data on the blockchain.
This example is intentionally kept simple and non-technical in terms of
blockchain identity (no wallets or signatures involved) to help beginners focus
on the fundamentals of:
* Designing chaincode data structures
* Writing submit and read-only (evaluate) functions to interact with data
* Emitting and responding to events
* Handling update and delete logic realistically (note that transaction history
  is never erased; deleting only removes the key from the current world state,
  while earlier blocks still record the data)
By the end of this tutorial, you’ll not only learn the foundational patterns
that apply to many real-world blockchain applications but also understand how to
develop and deploy chaincodes on the SettleMint platform.
## 1. Let's start with the TypeScript chaincode
A **chaincode** is a self-executing smart contract deployed on the Hyperledger
Fabric network. It defines the business logic that governs how assets are
created, modified, or queried on the ledger, all in a trusted, peer-to-peer
environment without intermediaries. In this tutorial, we will write our
chaincode using **TypeScript**, a statically typed superset of JavaScript that
is well-suited for building secure, modular, and maintainable Fabric chaincode.
Hyperledger Fabric officially supports chaincode written in TypeScript (and
JavaScript), allowing developers to leverage familiar web development tools,
strong typing, and modern asynchronous programming patterns.
If you're new to Fabric chaincode development or want to go deeper with
TypeScript-based chaincodes, here are some useful resources to get started:
* **Official Fabric Chaincode (TypeScript) Docs**:\
[https://hyperledger-fabric.readthedocs.io/en/latest/chaincode4ade.html](https://hyperledger-fabric.readthedocs.io/en/latest/chaincode4ade.html)
* **Fabric Samples Repository (TypeScript Chaincode)**:\
[https://github.com/hyperledger/fabric-samples/tree/main/asset-transfer-typescript](https://github.com/hyperledger/fabric-samples/tree/main/asset-transfer-typescript)
* **fabric-shim API Reference (Chaincode SDK for TypeScript)**:\
[https://hyperledger.github.io/fabric-chaincode-node/](https://hyperledger.github.io/fabric-chaincode-node/)
These resources will help you understand how to structure your project, use the
`fabric-shim` SDK, define transaction functions, and handle data interactions
securely and efficiently.
For generating boilerplate chaincode templates or testing simple logic, you can
also leverage AI tools like [ChatGPT](https://chatgpt.com/) or your preferred
code generation assistant to scaffold TypeScript chaincode quickly. Just ensure
you validate the generated logic against Fabric’s lifecycle and endorsement
policies before using it in production environments.
With TypeScript, you get the benefits of modern tooling, better type safety, and
an easier development experience, especially if you're coming from a web or
full-stack background.
### Example asset transfer chaincode typescript code
**index.ts**
```ts
import type { Contract } from "fabric-contract-api";
import { AssetTransferContract } from "./assetTransfer";
export { AssetTransferContract } from "./assetTransfer";
export const contracts: Array<typeof Contract> = [AssetTransferContract];
```
**asset.ts**
```ts
import { Object, Property } from "fabric-contract-api";
@Object()
export class Asset {
@Property()
public docType?: string;
@Property()
public ID: string;
@Property()
public Color: string;
@Property()
public Size: number;
@Property()
public Owner: string;
@Property()
public AppraisedValue: number;
constructor(
ID: string,
Color: string,
Size: number,
Owner: string,
AppraisedValue: number
) {
this.ID = ID;
this.Color = Color;
this.Size = Size;
this.Owner = Owner;
this.AppraisedValue = AppraisedValue;
}
}
```
**assetTransfer.ts**
```ts
import { instanceToPlain } from "class-transformer";
import {
Context,
Contract,
Info,
Returns,
Transaction,
} from "fabric-contract-api";
import stringify from "json-stringify-deterministic";
import sortKeysRecursive from "sort-keys-recursive";
import { Asset } from "./asset";
@Info({
title: "AssetTransfer",
description: "Smart contract for trading assets",
})
export class AssetTransferContract extends Contract {
@Transaction()
public async InitLedger(ctx: Context): Promise<void> {
const assets: Asset[] = [
new Asset("asset1", "blue", 5, "Tomoko", 300),
new Asset("asset2", "red", 5, "Brad", 400),
new Asset("asset3", "green", 10, "Jin Soo", 500),
new Asset("asset4", "yellow", 10, "Max", 600),
new Asset("asset5", "black", 15, "Adriana", 700),
new Asset("asset6", "white", 15, "Michel", 800),
];
for (const asset of assets) {
asset.docType = "asset";
await ctx.stub.putState(
asset.ID,
Buffer.from(stringify(sortKeysRecursive(instanceToPlain(asset))))
);
console.info(`Asset ${asset.ID} initialized`);
}
}
// CreateAsset issues a new asset to the world state with given details.
@Transaction()
public async CreateAsset(
ctx: Context,
id: string,
color: string,
size: number,
owner: string,
appraisedValue: number
): Promise<void> {
const exists = await this.AssetExists(ctx, id);
if (exists) {
throw new Error(`The asset ${id} already exists`);
}
const asset = new Asset(id, color, size, owner, appraisedValue);
asset.docType = "asset";
const assetBuffer = Buffer.from(
stringify(sortKeysRecursive(instanceToPlain(asset)))
);
// Publish event
ctx.stub.setEvent("CreateAsset", assetBuffer);
// we insert data in alphabetic order using 'json-stringify-deterministic' and 'sort-keys-recursive'
await ctx.stub.putState(id, assetBuffer);
}
// ReadAsset returns the asset stored in the world state with given id.
@Transaction(false)
public async ReadAsset(ctx: Context, id: string): Promise<string> {
const assetJSON = await ctx.stub.getState(id); // get the asset from chaincode state
if (!assetJSON || assetJSON.length === 0) {
throw new Error(`The asset ${id} does not exist`);
}
return assetJSON.toString();
}
// UpdateAsset updates an existing asset in the world state with provided parameters.
@Transaction()
public async UpdateAsset(
ctx: Context,
id: string,
color: string,
size: number,
owner: string,
appraisedValue: number
): Promise<void> {
const exists = await this.AssetExists(ctx, id);
if (!exists) {
throw new Error(`The asset ${id} does not exist`);
}
// overwriting original asset with new asset
const updatedAsset = new Asset(id, color, size, owner, appraisedValue);
updatedAsset.docType = "asset";
const assetBuffer = Buffer.from(
stringify(sortKeysRecursive(instanceToPlain(updatedAsset)))
);
// Publish event
ctx.stub.setEvent("UpdateAsset", assetBuffer);
// we insert data in alphabetic order using 'json-stringify-deterministic' and 'sort-keys-recursive'
await ctx.stub.putState(id, assetBuffer);
}
// DeleteAsset deletes a given asset from the world state.
@Transaction()
public async DeleteAsset(ctx: Context, id: string): Promise<void> {
// ReadAsset throws if the asset does not exist and already returns the
// deterministic JSON stored in state, so we reuse it as the event payload.
const assetString = await this.ReadAsset(ctx, id);
const assetBuffer = Buffer.from(assetString);
// Publish event
ctx.stub.setEvent("DeleteAsset", assetBuffer);
await ctx.stub.deleteState(id);
}
// AssetExists returns true when an asset with the given ID exists in the world state.
@Transaction(false)
@Returns("boolean")
public async AssetExists(ctx: Context, id: string): Promise<boolean> {
// getState returns an empty buffer when the key is absent
const assetJSON = await ctx.stub.getState(id);
return assetJSON.length > 0;
}
// TransferAsset updates the owner field of asset with given id in the world state, and returns the old owner.
@Transaction()
public async TransferAsset(
ctx: Context,
id: string,
newOwner: string
): Promise<string> {
const assetString = await this.ReadAsset(ctx, id);
const asset: Asset = JSON.parse(assetString);
const oldOwner = asset.Owner;
asset.Owner = newOwner;
const assetBuffer = Buffer.from(stringify(sortKeysRecursive(asset)));
// Publish event
ctx.stub.setEvent("TransferAsset", assetBuffer);
// we insert data in alphabetic order using 'json-stringify-deterministic' and 'sort-keys-recursive'
await ctx.stub.putState(id, assetBuffer);
return oldOwner;
}
// GetAllAssets returns all assets found in the world state.
@Transaction(false)
@Returns("string")
public async GetAllAssets(ctx: Context): Promise<string> {
const allResults = [];
// range query with empty string for startKey and endKey does an open-ended query of all assets in the chaincode namespace.
const iterator = await ctx.stub.getStateByRange("", "");
let result = await iterator.next();
while (!result.done) {
const strValue = Buffer.from(result.value.value.toString()).toString(
"utf8"
);
let record;
try {
record = JSON.parse(strValue);
} catch (err) {
console.log(err);
record = strValue;
}
allResults.push(record);
result = await iterator.next();
}
return JSON.stringify(allResults);
}
}
```
## Chaincode components
In this Hyperledger Fabric chaincode, we define a clear set of events and
transaction functions that manage the lifecycle of assets. These components
provide a complete interface for interacting with on-chain asset data,
supporting creation, updates, transfers, deletions, and querying, while also
ensuring visibility through event emissions.
Events play a crucial role in enabling off-chain services to listen for changes
in the blockchain state, while the transaction functions serve as the core API
for modifying and reading asset data stored in the ledger.
Below is a structured overview of the key events and functions defined in the
contract:
### Events
| Event Name | Parameters | Description |
| --------------- | ----------------------------------------------------------------------------------- | ------------------------------------------ |
| `CreateAsset` | `string id`, `string color`, `number size`, `string owner`, `number appraisedValue` | Emitted when a new asset is created |
| `UpdateAsset` | `string id`, `string color`, `number size`, `string owner`, `number appraisedValue` | Emitted when an asset is updated |
| `DeleteAsset` | `string id` | Emitted when an asset is deleted |
| `TransferAsset` | `string id`, `string newOwner` | Emitted when ownership of an asset changes |
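As an example of how an off-chain service could consume these events, here is a minimal sketch using the `fabric-network` SDK; the connection profile, wallet contents, identity name, channel name (`default-channel`), and chaincode name (`asset-transfer`) are all assumptions for illustration:
```typescript
import { Gateway, type Wallet } from "fabric-network";

// Sketch: listen for chaincode events off-chain. The wallet is assumed
// to already contain the "appUser" identity.
const listenForAssetEvents = async (connectionProfile: object, wallet: Wallet) => {
  const gateway = new Gateway();
  await gateway.connect(connectionProfile, {
    wallet,
    identity: "appUser",
    discovery: { enabled: true, asLocalhost: false },
  });

  const network = await gateway.getNetwork("default-channel");
  const contract = network.getContract("asset-transfer");

  // Each ctx.stub.setEvent() call in the chaincode surfaces here,
  // carrying the event name and the serialized asset as payload.
  await contract.addContractListener(async (event) => {
    const asset = JSON.parse(event.payload?.toString() ?? "{}");
    console.log(`${event.eventName}:`, asset);
  });
};
```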
### Functions
| Function Name | Parameters | Returns | Description |
| --------------- | ----------------------------------------------------------------------------------- | --------------- | ------------------------------------------------------ |
| `InitLedger` | – | `void` | Initializes the ledger with default sample assets |
| `CreateAsset` | `string id`, `string color`, `number size`, `string owner`, `number appraisedValue` | `void` | Creates and stores a new asset in the ledger |
| `ReadAsset` | `string id` | `string` | Retrieves asset details by ID |
| `UpdateAsset` | `string id`, `string color`, `number size`, `string owner`, `number appraisedValue` | `void` | Updates asset data in the ledger |
| `DeleteAsset` | `string id` | `void` | Deletes an asset from the ledger |
| `AssetExists` | `string id` | `boolean` | Checks if an asset with the given ID exists |
| `TransferAsset` | `string id`, `string newOwner` | `string` | Transfers asset ownership, returns previous owner name |
| `GetAllAssets` | – | `string` (JSON) | Retrieves all assets stored in the ledger |
## CRUD mapping for the chaincode
This table maps traditional Web2-style CRUD operations to the equivalent
transaction functions in the Fabric chaincode:
| **CRUD** | **Chaincode Function** | **Explanation** |
| ---------- | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Create** | `CreateAsset()` | Adds a new asset to the world state with a unique `ID`. Similar to an `INSERT` in databases. It first checks if the asset already exists, and if not, creates it. Emits a `CreateAsset` event to support off-chain indexing or listeners. |
| **Read** | `ReadAsset()` | Retrieves a specific asset by its `ID`. This acts like a `SELECT` query. It reads the raw asset from the ledger state and returns it as a stringified JSON. This function does not modify state and is marked `@Transaction(false)`. |
| **Update** | `UpdateAsset()` | Replaces all fields of an existing asset with new values. This is equivalent to a full `UPDATE` operation in traditional databases. The function ensures the asset exists before applying changes. Emits `UpdateAsset` for monitoring and traceability. |
| **Delete** | `DeleteAsset()` | Removes the asset from the ledger entirely, performing a **hard delete**. Unlike soft deletes, this operation erases the state key from the ledger. The original state is lost, but the deletion is tracked via the `DeleteAsset` event for audit purposes. |
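The same `fabric-network` gateway pattern can drive these CRUD functions from a client application. A minimal sketch, reusing the assumed channel and chaincode names from the listener example above:
```typescript
import { Gateway, type Wallet } from "fabric-network";

// Sketch: exercise the CRUD functions from a client. Identity, channel
// and chaincode names are illustrative assumptions.
const runCrud = async (connectionProfile: object, wallet: Wallet) => {
  const gateway = new Gateway();
  await gateway.connect(connectionProfile, { wallet, identity: "appUser" });
  const network = await gateway.getNetwork("default-channel");
  const contract = network.getContract("asset-transfer");

  // Create: a submitted transaction goes through endorsement and ordering
  await contract.submitTransaction("CreateAsset", "asset7", "purple", "20", "Sam", "1200");

  // Read: an evaluated transaction is a local query with no state change
  const asset = await contract.evaluateTransaction("ReadAsset", "asset7");
  console.log(JSON.parse(asset.toString()));

  // Update and Delete follow the same submit pattern
  await contract.submitTransaction("TransferAsset", "asset7", "Dana");
  await contract.submitTransaction("DeleteAsset", "asset7");

  gateway.disconnect();
};
```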
## Packaging and deploying chaincode

## Chaincode lifecycle management on SettleMint (Fabric)
This guide explains the process to **compile**, **package**, **install**,
**approve**, **commit**, **initialize**, **query**, and **invoke** Hyperledger
Fabric chaincode using `chaincode.sh` provided in the SettleMint platform IDE.
## Prerequisites
Ensure the following environment variables are set before you begin (a quick
validation sketch follows this list):
* `CC_NAME`: Chaincode name
* `CC_VERSION`: Chaincode version (e.g., `1.0`)
* `CC_SEQUENCE`: Lifecycle sequence number (e.g., `1`)
* `CC_SRC_PATH`: Path to built chaincode files (`./dist`)
* `CC_RUNTIME_LANGUAGE`: Language used (`node`)
* Optional:
* `CC_INIT_FCN`: Init function name (e.g., `InitLedger`)
* `CC_INIT_ARGS`: JSON arguments for init
* `CC_CHANNEL`: Channel name (default: `default-channel`)
* `CC_COLLECTIONS_CONFIG_PATH`: For private data collections
* `CC_SIGNATURE_POLICY`: Signature policy, if any
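Before running the lifecycle commands, it can help to fail fast when something is missing. A small, illustrative check of the required variables from the list above:
```typescript
// Sketch: abort early if a required lifecycle variable is unset.
// The variable names come from the list above; the check is illustrative.
const required = ["CC_NAME", "CC_VERSION", "CC_SEQUENCE", "CC_SRC_PATH", "CC_RUNTIME_LANGUAGE"];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(", ")}`);
  process.exit(1);
}
console.log(`Deploying ${process.env.CC_NAME}@${process.env.CC_VERSION}`);
```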
***
## 1. Compile and package chaincode
**Step:** Compile TypeScript into JavaScript (if runtime is Node.js) and create
the chaincode `.tar.gz` package.
**Command:**\
**`./chaincode.sh package`**
* Transpiles and bundles code under `dist/`.
* Runs `peer lifecycle chaincode package`.
* Outputs a tarball `chaincodeName.tar.gz`.
***
## 2. Install chaincode on peer
**Step:** Uploads the packaged chaincode file to a specified peer.
**Command:**\
**`./chaincode.sh install <peer>`**
* Uses the `BTP_SERVICE_TOKEN` to authenticate with the SettleMint backend.
* Automatically polls until the chaincode is detected as installed.
***
## 3. Approve chaincode for org
**Step:** Approves the chaincode definition using the provided peer and orderer.
**Command:**\
**`./chaincode.sh approve <peer> <orderer>`**
* Sends the chaincode metadata and version to the orderer.
* Can include private data configs and signature policies if required.
***
## 4. Check commit readiness (optional)
**Step:** Validates whether enough orgs have approved the chaincode.
**Command:**\
**`./chaincode.sh commit-readiness <peer>`**
* Useful for multi-org setups to confirm endorsement policy compliance.
***
## 5. Commit chaincode
**Step:** Commits the approved chaincode definition to the channel.
**Command:**\
**`./chaincode.sh commit <peer> <orderer>`**
* Requires a majority of orgs to have approved the chaincode.
* Confirms commit status by polling continuously.
***
## 6. Initialize chaincode (optional)
**Step:** Calls the init function (e.g., `InitLedger`) if defined.
**Command:**\
**`./chaincode.sh init <peer> <orderer>`**
* Only needed when `initRequired` was set to `true` during approval.
***
## 7. Query chaincode
**Step:** Read data from ledger via chaincode function.
**Command:**\
**`./chaincode.sh query <peer> <function> --arguments '["arg1"]'`**
* No state change is made.
* Supports channel override via `--channel`.
***
## 8. Invoke chaincode
**Step:** Write/update ledger state via chaincode.
**Command:**\
**`./chaincode.sh invoke <peer> <function> --arguments '["arg1"]'`**
* Supports `--transient` for sensitive/private inputs.
* Supports channel override.
***
## 9. Test build locally
Optional workspace tasks like `build package`, `test workspace` are available in
Code Studio’s **Task Manager** panel to automate test builds.
***
## Helpful queries
| Command | Description |
| ------------------------------------- | -------------------------------------- |
| **`./chaincode.sh peers`** | List available peers |
| **`./chaincode.sh orderers`** | List available orderers |
| **`./chaincode.sh nodes`** | List all nodes in the application |
| **`./chaincode.sh installed <peer>`** | View installed chaincodes |
| **`./chaincode.sh approved <peer>`** | Check approved definition of chaincode |
| **`./chaincode.sh committed <peer>`** | Check committed chaincode |
***
## Channel management
| Command | Purpose |
| -------------------------------------------------------- | --------------------------- |
| **`./chaincode.sh create-channel <channel>`** | Create a new channel |
| **`./chaincode.sh orderer-join-channel <orderer> <channel>`** | Add orderer to channel |
| **`./chaincode.sh peer-join-channel <peer> <channel>`** | Add peer to channel |
| **`./chaincode.sh orderer-leave-channel <orderer> <channel>`** | Remove orderer from channel |
| **`./chaincode.sh peer-leave-channel <peer> <channel>`** | Remove peer from channel |
The `chaincode.sh` utility acts as a full-featured DevOps toolkit for Fabric
chaincode lifecycle management. It integrates with SettleMint APIs to
orchestrate everything from packaging to deployment to post-deployment
operations across multiple peers and orderers.
Congratulations!
You have successfully packaged and deployed your chaincode on the blockchain
network.
Now you can proceed to the middleware layer for APIs to submit chaincode
transactions, write data to the chain, and read data in a structured format.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/deploy-custom-services.mdx
meta: {
"title": "Host dApp UI or custom services",
"description": "How to deploy containerised application frontend or other custom services"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
Deploying frontend applications or custom backend services on SettleMint can be
done through Custom Deployments, which allow you to run containerized
applications using your own Docker images. This enables seamless integration of
user interfaces, REST APIs, microservices, or other utilities directly within
the blockchain-powered environment of your application.
The typical use cases include hosting React/Vue/Next.js-based UIs, creating
custom indexers or oracles, exposing specialized API services, or deploying
off-chain business logic in containerized environments. These deployments are
sandboxed, stateless, and run in secure, managed infrastructure, making them
suitable for both development and production.
To get started, you’ll first need to containerize your application (if not
already done) and push the image to a container registry; this can be Docker
Hub, GitHub Container Registry, or a private registry. The image must be built
for the AMD64 (x86-64) architecture, as the SettleMint infrastructure currently
supports AMD64-based workloads.
Once your image is ready, you can initiate a Custom Deployment through the
platform UI, CLI, or SDK. You’ll provide the container image path, optional
environment variables, deployment region, and resource configurations. After the
container spins up successfully, your service will be publicly accessible via
the auto-assigned endpoint. For frontend apps, this can act as your live
production URL.
For applications requiring a custom domain, SettleMint allows you to bind domain
names to the deployed container. You can configure the domain in the platform
and then update your DNS records accordingly. The platform supports both ALIAS
records for top-level domains and CNAME records for subdomains. SSL/TLS
certificates are automatically handled unless you opt for a custom cert setup.
Once the deployment is live, you can manage it using the Custom Deployment
dashboard in the platform. This includes editing environment variables,
restarting the container, updating the image version, checking logs, and
monitoring availability. You can also script or automate these tasks using the
SDK or CLI as needed.
A few considerations: custom deployments are stateless by design, so any data
you want to persist should be stored using services like Hasura for off-chain
database functionality or MinIO/IPFS for file storage. The container’s
filesystem is read-only to enhance security and portability. Additionally, apps
won’t run with root privileges, so ensure your container adheres to standard
non-root user practices.
This feature is especially useful when you need to tightly couple your UI or
service logic with the on-chain components, enabling a clean, integrated workflow
for dApps, admin consoles, analytics dashboards, API bridges, or token utility
services. It offers flexibility without leaving the SettleMint ecosystem, all
while adhering to scalable and cloud-native design principles.
## How to use custom deployments to host application frontend or other custom services in SettleMint platform
A Custom Deployment allows you to deploy your own Docker images, such as
frontend applications, on the SettleMint platform. This feature provides
flexibility for integrating custom solutions within your blockchain-based
applications.

## Create a custom deployment
1. Prepare your container image and push it to a container registry (public or private).
2. In the SettleMint platform, navigate to the Custom Deployments section.
3. Click on the "Add Custom Deployment" button to create a new deployment.
4. Provide the necessary details:
* Container image path (e.g., registry.example.com/my-app:latest)
* Container registry credentials (if using a private registry)
* Environment variables (if required)
* Custom domain information (if applicable)
5. Configure any additional settings as needed.
6. Click on 'Confirm' and wait for the Custom Deployment to be in the Running status.
```bash
# Create a custom deployment
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--port 3000 \
--provider gcp \
--region europe-west1
# With environment variables
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--env-vars NODE_ENV=production,DEBUG=false
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
const createDeployment = async () => {
const result = await client.customDeployment.create({
applicationId: "app-123",
name: "my-deployment",
imageRepository: "registry.example.com",
imageName: "my-app",
imageTag: "latest",
port: 3000,
provider: "gcp",
region: "europe-west1",
environmentVariables: {
NODE_ENV: "production"
}
});
};
```
## DNS configuration for custom domains
When using custom domains with your Custom Deployment, you'll need to configure
your DNS settings correctly. Here's how to set it up:
1. **Add Custom Domain to the SettleMint Platform**:
* Navigate to your Custom Deployment in the SettleMint platform.
* In the manage custom deployment menu, click on the edit custom deployment
action.
* Locate the custom domains configuration section.
* Enter your desired custom domain (e.g., example.com for top-level domain or
app.example.com for subdomain).
* Save the changes to update your Custom Deployment settings.
2. **Obtain Your Application's Hostname**: After adding your custom domain, the
SettleMint platform will provide you with an ALIAS (for top-level domains) or
CNAME (for subdomains) record. This can be found in the "Connect" tab of your
Custom Deployment.
3. **Access Your Domain's DNS Settings**: Log in to your domain registrar or DNS
provider's control panel.
4. **Configure DNS Records**:
For Top-Level Domains (e.g., example.com):
* Remove any existing A and AAAA records for the domain you're configuring.
* Remove any existing A and AAAA records for the www domain (e.g.,
www.example.com) if you're using it.
```
ALIAS example.com gke-europe.settlemint.com
ALIAS www.example.com gke-europe.settlemint.com
```
For Subdomains (e.g., app.example.com):
```
CNAME app.example.com gke-europe.settlemint.com
```
5. **Set TTL (Time to Live)**:
* Set a lower TTL (e.g., 300 seconds) initially to allow for quicker
propagation.
* You can increase it later for better caching (e.g., 3600 seconds).
6. **Verify DNS Propagation**:
* Use online DNS lookup tools, or the script sketch after the note below, to check if your DNS changes have propagated.
* Note that DNS propagation can take up to 48 hours, although it's often much
quicker.
7. **SSL/TLS Configuration**:
* The SettleMint platform typically handles SSL/TLS certificates
automatically for both top-level domains and subdomains.
* If you need to use your own certificates, please contact us for assistance
and further instructions.
Note: The configuration process is similar for both top-level domains and
subdomains. The main difference lies in the type of DNS record you create (ALIAS
for top-level domains, CNAME for subdomains) and whether you need to remove
existing records.
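If you prefer to check propagation from a script rather than an online lookup tool, Node's built-in dns module is enough; the domain names below are the placeholders from the examples above, not real endpoints:
```typescript
import { promises as dns } from "node:dns";

// Sketch: verify the records from the examples above have propagated.
const checkDns = async () => {
  // Subdomain: expect a CNAME pointing at the platform hostname
  const cname = await dns.resolveCname("app.example.com");
  console.log("CNAME:", cname); // e.g. [ 'gke-europe.settlemint.com' ]

  // Top-level domain: ALIAS records resolve as ordinary A records
  const addresses = await dns.resolve4("example.com");
  console.log("A:", addresses);
};

checkDns().catch(console.error);
```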
## Manage custom deployments
1. Navigate to your application's **Custom Deployments** section
2. Click on a deployment to:
* View deployment status and details
* Manage environment variables
* Configure custom domains
* View logs
* Check endpoints
```bash
# List custom deployments
settlemint platform list custom-deployments --application my-app
# Get deployment details
settlemint platform read custom-deployment my-deployment
# Restart deployment
settlemint platform restart custom-deployment my-deployment
# Edit deployment
settlemint platform edit custom-deployment my-deployment \
--container-image registry.example.com/my-app:v2
```
```typescript
// List deployments
const listDeployments = async () => {
const deployments = await client.customDeployment.list("my-app");
};
// Get deployment details
const getDeployment = async () => {
const deployment = await client.customDeployment.read("deployment-unique-name");
};
// Restart deployment
const restartDeployment = async () => {
await client.customDeployment.restart("deployment-unique-name");
};
// Edit deployment
const editDeployment = async () => {
await client.customDeployment.edit("deployment-unique-name", {
imageTag: "v2"
});
};
```
## Limitations and considerations
When using Custom Deployment, keep the following limitations in mind:
1. **No Root User Privileges**: Your application will run without root user
privileges for security reasons.
2. **Read-Only Filesystem**: The filesystem is read-only. For data persistence,
consider using:
* Hasura: A GraphQL engine that provides a scalable database solution. See
[Hasura](/building-with-settlemint/hasura-backend-as-a-service).
* Other External Services: Depending on your specific needs, you may use
other cloud-based storage or database services
3. **Stateless Applications**: Your applications should be designed to be
stateless. This ensures better scalability and reliability in a cloud
environment.
4. **Use AMD64-based Images**: Currently, our platform supports AMD64 (x86-64)
container images. Ensure your Docker images are built for the AMD64 architecture
to guarantee smooth compatibility with our infrastructure.
## Best practices
* Design your applications to be stateless and horizontally scalable
* Use environment variables for configuration to make your deployments more
flexible (see the sketch after this list)
* Implement proper logging to facilitate debugging and monitoring
* Regularly update your container images to include the latest security patches
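As a minimal sketch of the environment-variable practice above (`NODE_ENV` and `DEBUG` mirror the `--env-vars` example earlier in this section; `PORT` is illustrative), read and validate configuration once at startup:
```typescript
// Minimal sketch: centralize configuration read from environment variables.
// NODE_ENV and DEBUG mirror the --env-vars example earlier; PORT is illustrative.
const config = {
  nodeEnv: process.env.NODE_ENV ?? "development",
  debug: process.env.DEBUG === "true",
  port: Number(process.env.PORT ?? 3000),
};

if (Number.isNaN(config.port)) {
  throw new Error("PORT must be a number");
}

console.log(`Starting in ${config.nodeEnv} mode on port ${config.port}`);
```
Keeping all configuration in the environment means the same container image can be promoted across environments unchanged, which also helps keep the deployment stateless.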
Custom Deployment offers a powerful way to extend the capabilities of your
blockchain solutions on the SettleMint platform. By following these guidelines
and best practices, you can seamlessly integrate your custom applications into
your blockchain ecosystem.
Custom Deployments support automatic SSL/TLS certificate management for custom
domains.
Congratulations!
You have successfully deployed your application front end and now have a working
full-stack application built on SettleMint tools and services.
We hope your journey was smooth. Please write to us at [support@settlemint.com](mailto:support@settlemint.com)
for any help or feedback.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/integration-studio.mdx
meta: {
"title": "Integration studio",
"description": "Visual workflow builder for custom APIs and integrations"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
The Integration Studio is a dedicated low-code environment that enables
developers and business users to build backend workflows, API endpoints, and
custom logic using a visual interface. Powered by Node-RED, it offers an
intuitive drag-and-drop experience for orchestrating flows between smart
contracts, external APIs, databases, and storage systems, all within the
SettleMint ecosystem.
Instead of writing boilerplate backend code, developers will define logic using
nodes and flows, visually representing how data moves between services. These
flows can be triggered by webhooks, user interactions, smart contract events, or
timed executions. Under the hood, each Integration Studio is deployed as an
isolated and scalable container that supports JavaScript-based execution,
environment configuration, and secure API access.
Each node in the flow is designed to perform a specific task, such as receiving
HTTP input, transforming payloads, calling external APIs, or executing custom
JavaScript functions. These nodes are connected inside a flow, which represents
a unit of logic or an end-to-end integration path. You can create multiple flows
within the same Integration Studio instance, allowing you to modularize your
business logic and deploy distinct endpoints for different application use
cases.
When developers deploy the Integration Studio to their application, a secure
Node-RED editor is provisioned, accessible via the platform UI. The visual
interface includes common built-in nodes and pre-integrated libraries like
ethers (for blockchain interaction), ipfsHttpClient (for decentralized storage),
and others. Additional libraries can also be added manually in the project
settings.
A common scenario might involve triggering a flow via an HTTP request, fetching
on-chain data from a smart contract using ethers.js, formatting the result, and
returning it as a JSON response. These kinds of flows can be designed in
minutes, providing API endpoints that are automatically hosted and secured by
SettleMint infrastructure.
Developers can configure API Keys to restrict access to these endpoints and
monitor calls using the platform’s access token management system. Every
endpoint is served over HTTPS and can be integrated with frontend dApps, backend
services, or third-party platforms.
The simplicity of visual programming, combined with the power of JavaScript,
makes Integration Studio a robust backend builder tailored for blockchain
applications. It significantly reduces development time while maintaining
flexibility for custom use cases. Developers gain fine-grained control over how
their dApp behaves off-chain, without leaving the SettleMint environment.
The SettleMint Integration Studio is a low-code development environment which
enables you to implement business logic for your application simply by dragging
and dropping.
Under the hood, the Integration Studio is powered by a **Node-RED** instance
dedicated to your application. It is a low-code programming platform built on
Node.js and designed for event-driven application development.
[Learn more about Node-RED here](https://nodered.org/docs/).
## Basic concepts
The business logic for your application can be represented as a sequence of
actions. Such a sequence of actions is represented by a **flow** in the
Integration Studio. To bring your application to life, you need to create flows.
**Nodes** are the smallest building blocks of a flow.
### Nodes
The nodes are the smallest building blocks. They can have at most one input
port and multiple output ports. They are triggered by some event (e.g. an HTTP
request). When triggered, they perform user-defined actions and generate an
output. This output can be passed to the input of another node to trigger
another action.
### Flows
A flow is represented as a tab within the editor workspace and is the main way
to organize nodes. You can have more than one set of connected nodes in a flow
tab.
The Integration Studio allows you to create flows quickly. You can drag and
drop nodes in the workspace and connect them by clicking from the output port
of one node to the input port of another to create complex flows. This allows
you to visualise the orchestration and interaction between
your components (your nodes). Since you can clearly visualize the sequence of
actions your application is going to perform, it is not only more interpretable
but also much easier to debug in the future.
The use cases include interacting with other web services, applications, and
even IoT devices - orchestrating them for any kind of purpose to bring your
business solution to life.
[Learn more about the basic concepts of Node-RED here](https://nodered.org/docs/user-guide/concepts)
## Adding the integration studio
Navigate to the **application** where you want to add the integration studio.
Click **Integration tools** in the left navigation, and then click **Add an
integration tool**. This opens a form.

### Select integration studio
Select **Integration Studio** and click **Continue** to proceed.
### Choose a name
Choose a **name** for your Integration Studio. Choose one that will be easily
recognizable in your dashboards (e.g. Crowdsale Flow).
### Select deployment plan
Choose a deployment plan. Select the type, cloud provider, region and resource
pack.
[More about deployment plans](/launching-the-platform/managed-cloud-saas/deployment-plans)
### Confirm setup
You can see the **resource cost** for the Integration Studio displayed at the
bottom of the form. Click **Confirm** to add the Integration Studio.
## Using the integration studio
When the Integration Studio is deployed, click on it from the list, and go to
the **Interface** tab to start building your flows. You can also view the
interface in full screen mode.
Once the Integration Studio interface is loaded, you will see two flow tabs:
"Flow 1" and "Example". Head over to the **"Example" tab** to see some
full-blown example flows to get you started.
Double-click any of the nodes to see the code they are running. This code is
written in JavaScript, and it represents the actions the particular node
performs.

### Setting up a flow
Before we show you how to set up your own flow, we recommend reading this
[article by Node-RED on creating your first flow](https://nodered.org/docs/tutorials/first-flow).
Now let's set up an example flow together and build an endpoint to get the
latest block number of the Polygon Mumbai Testnet using the Integration Studio.
If you do not have a Polygon Mumbai Node, you can easily
[deploy a node](/platform-components/add-a-node-to-a-network) first.
### Add http input node
Drag and drop an **Http In node** to listen for requests. If you double-click the node, you will see a couple of parameters to set:
* `METHOD` - set it to `GET`. This is the HTTP method that your node is
configured to listen for.
* `URL` - set it to `/getLatestBlock`. This is the endpoint that your node will
listen on.
### Add function node
Drag and drop a **function node**. This is the node that will query the
blockchain for the block number. Double-click the node to configure it.
`rpcEndpoint` is the RPC URL of your Polygon Mumbai node. You will find it
under the **Connect tab** of the node.
`accessToken` - You will need an access token for your application. If you do
not have one, you can easily
[create an access token](/platform-components/application-access-tokens) first.
Enter the following snippet in the Message tab:
```javascript
///////////////////////////////////////////////////////////
// Configuration //
///////////////////////////////////////////////////////////
const rpcEndpoint = "https://YOUR_NODE_RPC_ENDPOINT.settlemint.com";
const accessToken = "YOUR_APPLICATION_ACCESS_TOKEN_HERE";
///////////////////////////////////////////////////////////
// Logic //
///////////////////////////////////////////////////////////
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
msg.payload = await provider.getBlockNumber();
return msg;
///////////////////////////////////////////////////////////
// End //
///////////////////////////////////////////////////////////
```
**Note:** ethers and some ipfs libraries are already available by default and can be used like this:
```javascript
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
const ipfsHttpClient = global.get("ipfsHttpClient");
const client = ipfsHttpClient.create(`${ipfsEndpoint}/${accessToken}/api/v0`);
const uint8arrays = global.get("uint8arrays");
const itAll = global.get("itAll");
const data = uint8arrays.toString(
uint8arrays.concat(await itAll(client.cat(cid)))
);
```
If the library you need isn't available by default, you will need to import it
in the Setup tab. Example for ethers providers:

### Add http response node
Drag and drop an **Http Response node** to reply to the request. Double-click
and configure:
* `Status code` - This is the HTTP status code that the node will respond with
after completing the request. Set it to 200 (`OK`).
Click on the `Deploy` button in the top right corner to save and deploy your
changes.
### Test your endpoint
Now, go back to the **Connect tab** of your Integration Studio to see your **API
endpoint**, which looks something like
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com`.

You can now send requests to
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock` to get
the latest block number. Do not forget to create an API Key for your Integration
Studio and pass it as the `x-auth-token` authorization header with your request.
Example terminal command:
```bash
curl -H "x-auth-token: bpaas-YOUR_INTEGRATION_KEY_HERE" https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock
```
The API is live and protected by the authorization header, and you can
seamlessly integrate with your application.
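The same request can be made from application code. Here is a minimal TypeScript sketch using the global `fetch` API (Node 18+); the URL and key are the same placeholders as in the curl example:
```typescript
// Minimal sketch: call the Integration Studio endpoint from a TypeScript client.
// Both values below are placeholders, as in the curl example above.
const ENDPOINT =
  "https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock";
const API_KEY = "bpaas-YOUR_INTEGRATION_KEY_HERE";

async function getLatestBlock(): Promise<void> {
  const response = await fetch(ENDPOINT, {
    headers: { "x-auth-token": API_KEY }, // API key created for the Integration Studio
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  console.log("Latest block number:", await response.text());
}

getLatestBlock().catch(console.error);
```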
You can access more than 4,000 pre-built modules from the built-in library.

You can use the Integration Studio to build very complex flows. Learn more in
this [cookbook by Node-RED](https://cookbook.nodered.org/) on the different
types of flows.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/setup-code-studio.mdx
meta: {
"title": "Setup code studio",
"description": "Guide to setup Code Studio IDE to develop and deploy chaincodes and sub-graphs"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To start developing and deploying chaincodes on the SettleMint platform, you’ll
first need to add Code Studio to your application. This provides you with a
full-featured web-based IDE, pre-configured for blockchain development. Once
added, you can use built-in tasks to build, test and deploy chaincodes, all
within the same environment.
You can add Code Studio through the Platform UI by selecting it as a dev tool
and linking it with a chaincode set and a template. Alternatively, you can use
the SDK CLI or SDK JS to programmatically create and manage chaincode sets.
These interfaces give you flexibility depending on whether you’re working from
the console or integrating via scripts or automation.
After setup, you’ll be able to customize your chaincodes directly within the
IDE. A task manager will guide you through building and deploying them to local
or SettleMint-hosted blockchain networks.
To speed up development, SettleMint offers a rich library of open-source
chaincode templates. These templates can be modified, extended, or used as-is,
and you also have the option to create and manage custom templates within your
consortium for reuse across projects.
## How to setup code studio and deploy chaincode on SettleMint platform
Code Studio is SettleMint’s fully integrated, web-based IDE built specifically
for blockchain development. It provides developers with a familiar Visual Studio
Code experience directly in the browser, pre-configured with essential tools.
Code Studio enables seamless development, testing, deployment, and indexing of
chaincodes and subgraphs, all within a unified environment.
It eliminates the need for complex local setups, simplifies DevOps workflows,
and reduces time-to-market by combining infrastructure, templates, and
automation under one interface. By offering pre-built tasks, contract templates,
and GitHub integration, it solves the traditional challenges of fragmented
tooling, inconsistent environments, and steep setup requirements for web3
development.

Despite offering full configurability, Code Studio includes all essential
dependencies pre-installed, saving time and avoiding setup friction. It supports
extensions for formatting, linting, testing, and AI-assisted development,
mirroring the convenience of a local VS Code setup. Every component, from
contracts to testing and subgraph development is wired into a well-structured,
maintainable codebase that is continuously updated and thoroughly tested to
align with the latest development standards. This makes it ideal for both rapid
prototyping and production-grade blockchain applications.

Smart contract sets allow you to incorporate **business logic** into your
application by deploying chaincodes that run on the blockchain. You can add a
chaincode set via different methods as part of your development workflow.
## Fabric IDE project structure
The Fabric IDE project structure in Code Studio is tailored to support robust
development and lifecycle management of Hyperledger Fabric chaincode. It
includes all necessary files for building, deploying, and managing chaincode
with automation support, organized in a modular and maintainable layout.
| Folder / File | Description |
| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------- |
| `lib/btp-chaincode-lifecycle/` | Contains shell scripts and helper logic to automate the Fabric chaincode lifecycle (e.g., install, approve, commit). |
| ├─ `chaincode.sh` | Shell script to execute full chaincode lifecycle steps such as packaging, installing, and committing. |
| ├─ `utils.sh` | Utility shell functions used within lifecycle scripts for reusability and simplification. |
| ├─ `README.md` | Documentation for using the lifecycle management tools provided in this library. |
| `src/` | Contains the TypeScript source code for your chaincode logic. Each file typically represents a chaincode or asset handler. |
| ├─ `asset.ts` | Defines the asset structure and related helper functions. |
| ├─ `assetTransfer.ts` | Implements the core logic for asset transfer operations (e.g., create, read, update, delete). |
| ├─ `index.ts` | Entry point of the chaincode; registers the contracts to be used by the Fabric runtime. |
| `.env` | Environment configuration file for local or platform-specific variables (e.g., peer addresses, identities). |
| `.eslintrc.js` | Linting configuration for enforcing code quality and style in the TypeScript codebase. |
| `.gitignore` | Specifies intentionally untracked files to ignore in version control. |
| `.gitmodules` | References Git submodules (if any) used in the project. |
| `index.d.ts` | Type declaration file for shared interfaces or types used across the project. |
| `LICENSE` | Open-source license governing use and distribution of the project. |
| `npm-shrinkwrap.json` | Locked npm dependency versions to ensure deterministic builds. |
| `package.json` | Declares project metadata, dependencies, and npm scripts (e.g., lint, test, build). |
| `bun.lock` | Lock file used by Bun (alternative JavaScript runtime) to freeze dependency versions. |
| `README.md` | Project-level documentation outlining setup, commands, and usage for developers. |
| `tsconfig.json` | TypeScript compiler configuration, defining how the chaincode is transpiled to JavaScript. |
| `node_modules/` | Automatically generated directory containing all installed npm dependencies. |
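To make the `src/` layout concrete, here is a minimal, illustrative sketch of the pattern the template follows, built on the `fabric-contract-api` package. The asset shape and method names are simplified stand-ins for the template's own `asset.ts` and `assetTransfer.ts`:
```typescript
// Illustrative sketch of a TypeScript chaincode contract (not the template's full code).
import { Context, Contract, Transaction } from "fabric-contract-api";

interface Asset {
  ID: string;
  Owner: string;
  Value: number;
}

export class AssetTransferContract extends Contract {
  @Transaction()
  public async CreateAsset(
    ctx: Context,
    id: string,
    owner: string,
    value: number
  ): Promise<void> {
    const asset: Asset = { ID: id, Owner: owner, Value: value };
    // World-state writes are key/value puts of serialized JSON.
    await ctx.stub.putState(id, Buffer.from(JSON.stringify(asset)));
  }

  @Transaction(false) // read-only: evaluated by a peer, not submitted for ordering
  public async ReadAsset(ctx: Context, id: string): Promise<string> {
    const data = await ctx.stub.getState(id);
    if (data.length === 0) {
      throw new Error(`Asset ${id} does not exist`);
    }
    return Buffer.from(data).toString("utf8");
  }
}

// index.ts exports the contract classes so the Fabric runtime can register them.
export const contracts = [AssetTransferContract];
```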
***
## Code studio task manager for fabric
The **Code Studio Task Manager** for Hyperledger Fabric provides a curated set
of tasks to manage the full chaincode lifecycle, perform channel operations, and
view network configuration, all without needing to manually run CLI commands.
These tasks are pre-wired into the Fabric project template and are executable
from within the IDE interface.
### Chaincode lifecycle tasks
| Task Name | Description |
| ------------------------------------------------- | ----------------------------------------------------------------------- |
| `chaincode lifecycle - 1. package` | Packages the chaincode source into a compressed archive for deployment. |
| `chaincode lifecycle - 2. deploy` | Installs the packaged chaincode on the target peer. |
| `chaincode lifecycle - 3. approve` | Approves the chaincode definition for the organization. |
| `chaincode lifecycle - 4. check commit readiness` | Checks if all required organizations have approved the chaincode. |
| `chaincode lifecycle - 5. commit` | Commits the chaincode definition to the channel. |
| `chaincode lifecycle - 6. init` | Initializes the chaincode after commit (if required). |
### Chaincode interactions
| Task Name | Description |
| -------------------- | -------------------------------------------------------------- |
| `chaincode - invoke` | Executes a transaction on the deployed chaincode. |
| `chaincode - query` | Reads data from the ledger using the chaincode query function. |
### Channel operations
| Task Name | Description |
| ------------------------- | ------------------------------------------ |
| `channel - create` | Creates a new Fabric channel. |
| `channel - orderer join` | Adds an orderer to the specified channel. |
| `channel - orderer leave` | Removes an orderer from the channel. |
| `channel - peer join` | Adds a peer node to the specified channel. |
| `channel - peer leave` | Removes a peer node from the channel. |
### Listing & query utilities
| Task Name | Description |
| ---------------------------- | --------------------------------------------------------- |
| `list - approved chaincode` | Lists chaincode definitions approved by the organization. |
| `list - committed chaincode` | Lists chaincodes that have been committed to the channel. |
| `list - deployed chaincode` | Lists deployed chaincode instances on the network. |
| `list - channels - orderer` | Lists channels associated with the orderer. |
| `list - channels - peer` | Lists channels associated with a specific peer. |
| `list - nodes` | Displays all nodes in the current environment. |
| `list - orderers` | Lists all orderer nodes available. |
| `list - peers` | Lists all peer nodes available in the network. |
### Build & test tasks
| Task Name | Description |
| ----------------- | --------------------------------------------------- |
| `build package` | Compiles the chaincode package. |
| `build workspace` | Builds the full workspace and all its dependencies. |
| `test package` | Executes tests for the chaincode package. |
| `test workspace` | Runs all available tests in the workspace. |
## Customize chaincodes
You can customize your chaincodes using the built-in IDE. The smart contract
sets include a Generative AI plugin to assist with development.
[Learn more about the AI plugin here.](./ai-plugin)
## Chaincode template library
SettleMint's chaincode templates serve as open-source, ready-to-use foundations
for blockchain application development, significantly accelerating the
deployment process. These templates enable users to quickly customize and extend
their blockchain applications, leveraging tested and community-enhanced
frameworks to reduce development time and accelerate market entry.
## Open-source chaincode templates under the MIT license
Benefit from the expertise of the blockchain community and trust in the
reliability of your chaincodes. These templates are vetted and used by major
enterprises and institutions, ensuring enhanced security and confidence in your
deployments.
## Available chaincode templates
The programming language used depends on the target protocol:
* **TypeScript** or **Go** for Hyperledger Fabric
| Template | Description |
| ----------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| [Empty typescript](https://github.com/settlemint/chaincode-typescript-empty) | Basic TypeScript chaincode for Hyperledger Fabric with no business logic. |
| [Empty typescript with PDC](https://github.com/settlemint/chaincode-typescript-empty-pdc) | Empty TypeScript template with support for private data collections in Fabric. |
| [Empty go](https://github.com/settlemint/chaincode-go-empty) | Minimal Go chaincode scaffold for Hyperledger Fabric. |
## Create your own chaincode templates for your consortium
Within the self-managed SettleMint platform, you can create and add
your own templates for use within your consortium. This fosters a collaborative
environment where templates can be reused and built upon, promoting innovation
and efficiency within your network.
To get started, visit:
[SettleMint GitHub Repository](https://github.com/settlemint/solidity-empty)
Congratulations!
You have successfully deployed the Code Studio. From here you can proceed to
develop and deploy chaincodes.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/setup-fabconnect-middleware.mdx
meta: {
"title": "Setup fabconnect middleware",
"description": "Setup fabric API layer"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To enable API-based interaction with your Fabric smart contracts, the first step
is to configure FabConnect middleware. This component exposes REST and WebSocket
endpoints that allow you to submit transactions, query chaincode, retrieve
blocks, and manage events programmatically.
Once FabConnect is added to your application, you gain full access to the underlying blockchain network through structured HTTP calls. You can register and manage identities via Fabric CA, monitor ledger state, submit transactions in sync or async mode, and configure event listeners for real-time responses.
## How to setup FabConnect middleware in SettleMint platform
Middleware acts as a bridge between your blockchain network and applications,
providing essential services like data indexing, API access, and event
monitoring. Before adding middleware, ensure you have an application and
blockchain node in place.
## How to add middleware

## Firefly fabconnect api reference overview
This reference outlines the key API endpoints exposed by **Hyperledger FireFly
FabConnect**, a REST and WebSocket gateway that enables interaction with
Hyperledger Fabric networks via structured HTTP requests and event
subscriptions.
## Authentication & setup
| Field | Description |
| ------------- | ---------------------------------------------------- |
| **Auth Type** | Bearer Token (JWT) |
| **Header** | `Authorization: Bearer <token>` |
| **Base URL** | `https://fabconnect-34253.gke-europe.settlemint.com` |
***
## Identity management
Manage user identities through Fabric CA:
| Endpoint | Method | Description |
| --------------------------------- | ------ | -------------------------------------------- |
| `/identities` | GET | List all registered identities |
| `/identities` | POST | Register a new identity with the CA |
| `/identities/{username}` | GET | Get details of a specific identity |
| `/identities/{username}` | PUT | Modify existing identity’s attributes |
| `/identities/{username}/enroll` | POST | Enroll the identity to receive certificates |
| `/identities/{username}/reenroll` | POST | Re-enroll the identity to renew certificates |
| `/identities/{username}/revoke` | POST | Revoke the identity’s certificates |
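As an illustration of the identity endpoints above, the snippet below registers and then enrolls an identity over plain HTTP. The paths come from the table; the request body fields are assumptions, so verify them against your instance's API documentation:
```typescript
// Illustrative sketch: register and enroll a Fabric CA identity via FabConnect.
// BASE_URL and TOKEN are placeholders; the body fields are assumed, not confirmed.
const BASE_URL = "https://fabconnect-34253.gke-europe.settlemint.com";
const TOKEN = "YOUR_JWT_HERE";

async function fabconnect(path: string, body?: unknown): Promise<unknown> {
  const response = await fetch(`${BASE_URL}${path}`, {
    method: body ? "POST" : "GET",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: body ? JSON.stringify(body) : undefined,
  });
  if (!response.ok) throw new Error(`${path} failed: ${response.status}`);
  return response.json();
}

// Register the identity with the CA, then enroll it to receive certificates.
await fabconnect("/identities", { name: "app-user-1", type: "client" });
await fabconnect("/identities/app-user-1/enroll", { secret: "enrollment-secret" });
```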
***
## Ledger & block info
Fetch metadata and raw data from the Fabric blockchain:
| Endpoint | Method | Description |
| ----------------------------- | ------ | --------------------------------------------------- |
| `/chaininfo` | GET | Get current block height and hashes for a channel |
| `/blocks/{blockNumberOrHash}` | GET | Retrieve block by number or hash |
| `/blockByTxId/{txId}` | GET | Retrieve block that contains a specific transaction |
***
## Transaction handling
Submit transactions or check transaction state:
| Endpoint | Method | Description |
| ---------------------- | ------ | ----------------------------------------------------- |
| `/transactions` | POST | Submit a transaction (sync or async) |
| `/transactions/{txId}` | GET | Fetch transaction details using transaction ID (hash) |
***
## Chaincode queries
Execute read-only chaincode function calls:
| Endpoint | Method | Description |
| -------- | ------ | --------------------------------- |
| `/query` | POST | Send a query request to chaincode |
***
## Transaction receipts (for async mode)
Access receipts from async transaction submissions (`fly-sync=false`):
| Endpoint | Method | Description |
| ----------------------- | ------ | ----------------------------------- |
| `/receipts` | GET | List available transaction receipts |
| `/receipts/{receiptId}` | GET | Get a specific receipt by ID |
***
## Event streams
Create and manage WebSocket or Webhook-based event delivery pipelines:
| Endpoint | Method | Description |
| ------------------------------- | ------ | -------------------------------- |
| `/eventstreams` | GET | List all existing event streams |
| `/eventstreams` | POST | Create a new event stream |
| `/eventstreams/{eventstreamId}` | GET | Retrieve a specific event stream |
| `/eventstreams/{eventstreamId}` | DELETE | Delete a specific event stream |
***
## Event subscriptions
Configure event listening rules on chaincode or block events:
| Endpoint | Method | Description |
| --------------------------------- | ------ | --------------------------------- |
| `/subscriptions` | GET | List all subscriptions |
| `/subscriptions` | POST | Create a new subscription |
| `/subscriptions/{subscriptionId}` | GET | Get a specific subscription by ID |
| `/subscriptions/{subscriptionId}` | DELETE | Remove a subscription by ID |
***
## Advanced controls
| Feature | Details |
| -------------------------- | ------------------------------------------------------------------- |
| **Sync Mode (`fly-sync`)** | Use `true` for synchronous (wait for commit), `false` for async |
| **Custom Channel** | Override with `fly-channel` parameter in query |
| **Signer Identity** | Use `fly-signer` to choose which identity signs the request |
| **Schema Support** | Structured mode supports input validation using JSON schema headers |
***
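Putting the endpoints and `fly-*` parameters together, here is an illustrative sketch that submits a transaction synchronously and then runs a read-only query. The payload shape is an assumption; check it against your FabConnect instance's schema:
```typescript
// Illustrative sketch: submit a transaction, then query chaincode via FabConnect.
// BASE_URL and TOKEN are placeholders; the body schema is assumed, not confirmed.
const BASE_URL = "https://fabconnect-34253.gke-europe.settlemint.com";
const TOKEN = "YOUR_JWT_HERE";
const headers = {
  Authorization: `Bearer ${TOKEN}`,
  "Content-Type": "application/json",
};

// fly-sync=true waits for the transaction to be committed before responding.
const submitted = await fetch(`${BASE_URL}/transactions?fly-sync=true`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    headers: { signer: "app-user-1", channel: "default-channel", chaincode: "asset-transfer" },
    func: "CreateAsset",
    args: ["asset1", "alice", "100"],
  }),
});
console.log("Submit:", await submitted.json());

// Read-only calls go through /query and do not create a transaction.
const queried = await fetch(`${BASE_URL}/query`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    headers: { signer: "app-user-1", channel: "default-channel", chaincode: "asset-transfer" },
    func: "ReadAsset",
    args: ["asset1"],
  }),
});
console.log("Query:", await queried.json());
```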
Congratulations!
You have successfully configured FabConnect middleware and now have API access
to your chaincode.
From here, we will proceed to add off-chain database and storage options, giving
our application a complete backend and storage layer.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/setup-offchain-database.mdx
meta: {
"title": "Setup off-chain database",
"description": "Add Hasura backend-as-a-service with off-chain database"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To integrate off-chain storage into your blockchain application, you should
begin by adding Hasura as a backend-as-a-service via SettleMint. This will
provision a fully managed PostgreSQL database, paired with a real-time
GraphQL API layer. It enables you to manage non-critical or frequently
updated data that doesn’t need to live on-chain, without compromising
performance or flexibility.
Start by navigating to your application and opening the Integration Tools
section. Click on Add an integration tool, select Hasura, and follow the
steps to choose a name, provider, region, and resource plan. Once deployed,
a dedicated Hasura instance will be available, complete with its own admin
console, GraphQL API, and Postgres connection string. You can manage and
monitor the instance from the same interface.
Once Hasura is set up, you can define your database schema by creating
tables and relationships under the Data tab. You can add, modify, and delete
rows directly from the console, or connect to the database using a
PostgreSQL client or code. Every schema and table you define becomes
instantly queryable using the GraphQL API. The API tab will auto-generate
queries and mutations, and also allow you to derive REST endpoints or export
code snippets for frontend/backend use.
For custom business logic, you can implement Actions, which are HTTP
handlers triggered by GraphQL mutations. These are useful for data
validation, enrichment, or chaining smart contract calls. If you want to
respond to database changes in real-time, use Event Triggers to invoke
webhooks when specific inserts, updates, or deletions happen. For recurring
jobs, Cron Triggers can invoke workflows on a schedule, and One-off
Scheduled Events allow precision control over future events.
Authentication and authorization can be finely controlled through role-based
access rules. Hasura allows you to enforce row-level permissions and
restrict query types based on user roles. To ensure secure API access, use
the Hasura admin secret and your application access token, both available
from the Connect tab of your Hasura console.
You’ll also have the option to connect to the Hasura PostgreSQL instance
directly using the connection string. This is useful for running SQL
scripts, performing migrations, or executing batch jobs. Whether you’re
using a Node.js backend or a command-line tool like psql, your Hasura
database acts like any standard PostgreSQL instance, with enterprise-grade
reliability.
Backups are easy to configure using the pg\_dump utility or via the Hasura
CLI. You can export both your database data and metadata, and restore them
in new environments as needed. Use hasura metadata export to get a full
snapshot of your permissions, tracked tables, actions, and relationships.
Then use hasura metadata apply or hasura metadata reload to rehydrate or
sync a new instance.
By combining Hasura’s flexibility with the immutability of your on-chain
smart contracts, you will be able to design a clean hybrid architecture,
critical operations are stored securely on-chain, while scalable, queryable,
and user-driven data remains off-chain. This setup dramatically improves
user experience, simplifies front-end development, and keeps infrastructure
costs under control.
Many dApps need more than just decentralized tools to build an end-to-end
solution. The SettleMint Hasura SDK provides a seamless way to interact with
Hasura GraphQL APIs for managing application data.

## Need for an on-chain and off-chain data architecture
In blockchain-based applications, not all data needs to, or should, reside
on-chain. While critical state changes, token ownerships, or verifiable proofs
are best kept immutable and transparent on a blockchain, a large portion of
application data such as user profiles, analytics, logs, metadata, and UI-driven
state is better suited to an off-chain data store. Storing everything on-chain
is neither cost-effective nor performance-friendly. On-chain data is expensive
to store and slow to query for complex front-end or dashboard use cases.
This is where a **hybrid architecture** becomes essential. In such an approach,
data is partitioned based on its importance and usage:
* **On-chain layer** serves as the source of truth for verifiable,
consensus-driven actions like token transfers, proofs, and governance.
* **Off-chain layer** handles high-volume, user-generated, or fast-changing data
that benefits from relational structure, rich queries, and low latency.
This model provides the best of both worlds: **immutability and trust from
blockchain**, and **speed, flexibility, and developer-friendliness from
traditional databases**.
## How hasura on SettleMint supports this architecture
SettleMint offers Hasura as a Backend-as-a-Service (BaaS), tightly integrated
into its low-code blockchain development stack. Hasura provides a
high-performance, real-time GraphQL API layer on top of a PostgreSQL database,
and allows developers to instantly query, filter, and subscribe to changes in
the data without writing custom backend logic.
### Key capabilities of hasura on settlemint
* A fully managed **PostgreSQL database** is provisioned automatically with each
Hasura instance.
* Hasura auto-generates a powerful and expressive **GraphQL API** for all the
tables and relationships defined in the database.
* It allows **integration with external databases** or REST/GraphQL services,
making it possible to unify multiple data sources behind one GraphQL endpoint.
* **Role-based access control** ensures secure data access aligned with business
logic and user permissions.
## Benefits of using hasura in a blockchain project
Hasura is especially useful for building interfaces, dashboards, and off-chain
tools in blockchain applications. Developers can use it to:
* Store non-critical or frequently updated data like user preferences, audit
logs, or API call metadata.
* Power admin panels or reporting dashboards with complex filtering, sorting,
and aggregation capabilities.
* Perform fast and reliable queries without the overhead of smart contract reads
or event processing.
* Sync or mirror blockchain data into Postgres via indexing services (like The
Graph or custom workers), and build additional logic around it.
For example, while the verification of a credential or the execution of a
transaction happens on-chain, the user’s profile details, usage history, or
interactions with the platform can be managed off-chain using Hasura. This
results in a responsive and scalable user experience, without compromising on
the core security and trust guarantees of blockchain.
## Off-chain database use cases in blockchain applications
| Category | Use Cases |
| ------------------------------- | ------------------------------------------------------------------------------------------------ |
| **User Management & Metadata** | User profiles, KYC/AML data, Recovery info, Social links, Preferences, Session tokens |
| **Dashboards & Reporting** | Admin panels, KPIs, Filters & aggregation, Charts, Audit logs, Time-series insights |
| **App Logic & State** | Workflow states, Business rules, Off-chain approvals, Drafts, Automation triggers, API call logs |
| **User Content** | Blog posts, Comments, Ratings, Articles, Feedback, Forum threads, Attachments |
| **External/API Data** | Oracle/cache data, API mirrors, Off-chain credentials, IoT inputs, External system sync |
| **Historical & Time Data** | Snapshots, Transition logs, Archived state, Event sync history, Audit trails |
| **Content & Config** | UI content, Static pages, Themes, Menus, Feature flags, Editable app config |
| **UX & Transactions** | Pending tx queues, Gas estimates, Slippage data, NFT views, Pre-submit staging, Local metadata |
| **Admin & Dev Tools** | Schema maps, Dev notes, Admin dashboards, Background jobs, Flagged items |
| **Security & Access** | Role bindings, Access logs, Encrypted fields, Policy metadata, Permissions history |
| **Hybrid & Indexing** | Enriched on-chain data, Indexed events, ID mapping, Postgres mirroring, ETL-ready layers |
| **E-commerce / Token Economy** | Product catalog, Shopping cart, Delivery tracking, Disputes, Refund metadata |
| **Education / DAO / Community** | Learning progress, Badges, Voting drafts, Moderation flags, Contribution history |
| **Data Ops & Recovery** | Data backups, Exportable datasets, Disaster recovery layer, Compliance archiving |
## Add hasura
### Navigate to Application
Navigate to the **application** where you want to add Hasura.
### Access Integration Tools
Click **Integration tools** in the left navigation, and then click **Add an integration tool**. This opens a form.
### Configure Hasura
1. Select **Hasura**, and click **Continue**
2. Choose a **name** for your backend-as-a-service
3. Choose a deployment plan (provider, region, resource pack)
4. Click **Confirm** to add it
First ensure you're authenticated:
```bash
settlemint login
```
Create Hasura instance:
```bash
settlemint platform create integration-tool hasura
# Get information about the command and all available options
settlemint platform create integration-tool hasura --help
```
For a full example of how to query and mutate data using the SDK, see the [Hasura SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-hasura#api-reference).
The SDK enables you to easily query and mutate data stored in your SettleMint-powered PostgreSQL databases through a type-safe GraphQL interface. For detailed API reference, check out the [Hasura SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/hasura).
## Some basic features
* Under the data subtab you can create an arbitrary number of **schemas**. A
schema is a collection of tables.
* In a schema you can create **tables**, choose which columns you want and
define relations and indexes.
* You can add, edit and delete **data** in these columns as well.
[Learn more here](https://hasura.io/docs/2.0/schema/postgres/tables/)
Any table you make is instantly visible in the **API subtab**. Note that by
using the **REST and Derive Action buttons** you can convert queries into REST
endpoints if that fits your application better. Using the **Code Exporter
button** you can get the actual code snippets you can use in your application or
the integration studio.
A bit more advanced are **actions**. Actions are custom queries or mutations
that are resolved via HTTP handlers. Actions can be used to carry out complex
data validations, data enrichment from external sources or execute just about
any custom business logic. Actions can be kickstarted by using the **Derive
Action button** in the **API subtab**.
[Learn more here.](https://hasura.io/docs/2.0/actions/overview/)
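As a minimal sketch of an Action handler (Hasura POSTs a JSON body containing the action's `input` to the webhook you configure; the field names handled below are illustrative):
```typescript
// Minimal sketch: an HTTP handler for a Hasura Action using Node's http module.
// Hasura sends { action, input, session_variables }; the input fields here are illustrative.
import { createServer } from "node:http";

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const { input } = JSON.parse(body);
    // Example enrichment: normalize an email before it is stored off-chain.
    const email = String(input.email ?? "").trim().toLowerCase();
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ email, valid: email.includes("@") }));
  });
}).listen(3000); // matches the handler_webhook_baseurl used later in this guide
```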
If you need to execute tasks based on changes to your database you can leverage
**Events**. An **Event Trigger** atomically captures events (insert, update,
delete) on a specified table and then reliably calls a HTTP webhook to run some
custom business logic.
[Learn more here.](https://hasura.io/docs/latest/graphql/core/event-triggers/index.html)
**Cron Triggers** can be used to reliably trigger HTTP endpoints to run some
custom business logic periodically based on a cron schedule.
**One-off Scheduled Events** are individual events that can be scheduled to
reliably trigger a HTTP webhook to run some custom business logic at a
particular timestamp.
**Access to your database** can be handled all the way to the row level by using
the authentication and authorisation options available in Hasura.
[Learn more here.](https://hasura.io/docs/2.0/auth/overview/)
This is of course on top of the
[application access tokens](/platform-components/security-and-authentication/application-access-tokens)
and
[personal access tokens](/platform-components/security-and-authentication/personal-access-tokens)
in the platform you can use to close off access to the entire API.
## Usage examples
You can interact with your Hasura database in two ways: through the GraphQL API
(recommended) or directly via PostgreSQL connection.
```javascript
import fetch from 'node-fetch';
// Configure your authentication details
const HASURA_ENDPOINT = "YOUR_HASURA_ENDPOINT";
const HASURA_ADMIN_SECRET = "YOUR_HASURA_ADMIN_SECRET"; // Found in the "Connect" tab of Hasura console
const APP_ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN"; // Generated following the Application Access Tokens guide
// Reusable function to make GraphQL requests
async function fetchGraphQL(operationsDoc, operationName, variables) {
try {
const result = await fetch(
HASURA_ENDPOINT,
{
method: "POST",
headers: {
'Content-Type': 'application/json',
'x-hasura-admin-secret': HASURA_ADMIN_SECRET,
'x-auth-token': APP_ACCESS_TOKEN
},
body: JSON.stringify({
query: operationsDoc,
variables: variables,
operationName: operationName
})
}
);
if (!result.ok) {
const text = await result.text();
throw new Error(`HTTP error! status: ${result.status}, body: ${text}`);
}
return await result.json();
} catch (error) {
console.error('Request failed:', error);
throw error;
}
}
// Query to fetch verification records
const operationsDoc = `
query MyQuery {
verification {
id
}
}
`;
// Mutation to insert a new verification record
const insertOperationDoc = `
mutation InsertVerification($name: String!, $status: String!) {
insert_verification_one(object: {name: $name, status: $status}) {
id
name
status
}
}
`;
// Function to fetch verification records
async function main() {
try {
const { errors, data } = await fetchGraphQL(operationsDoc, "MyQuery", {});
if (errors) {
console.error('GraphQL Errors:', errors);
return;
}
console.log('Data:', data);
} catch (error) {
console.error('Failed:', error);
}
}
// Function to insert a new verification record
async function insertWithGraphQL() {
try {
const { errors, data } = await fetchGraphQL(
insertOperationDoc,
"InsertVerification",
{
name: "Test User",
status: "pending"
}
);
if (errors) {
console.error('GraphQL Errors:', errors);
return;
}
console.log('Inserted Data:', data);
} catch (error) {
console.error('Failed:', error);
}
}
// Execute both query and mutation
main();
insertWithGraphQL();
```
```javascript
import pkg from 'pg';
const { Pool } = pkg;
// Initialize PostgreSQL connection (get connection string from Hasura console -> "Connect" tab)
const pool = new Pool({
connectionString: 'YOUR_POSTGRES_CONNECTION_STRING'
});
// Simple query to read all records from verification table
const readData = async () => {
const query = 'SELECT * FROM verification';
const result = await pool.query(query);
console.log('Current Data:', result.rows);
};
// Insert a new verification record with sample data
const insertData = async () => {
const query = `
INSERT INTO verification (id, identifier, value, created_at, expires_at)
VALUES ($1, $2, $3, $4, $5)
RETURNING *`;
// Sample values - modify according to your needs
const values = [
'test-id-123',
'test-identifier',
'test-value',
new Date(),
new Date(Date.now() + 24 * 60 * 60 * 1000) // Sets expiry to 24h from now
];
const result = await pool.query(query, values);
console.log('Inserted:', result.rows[0]);
};
// Update an existing record by ID
const updateData = async () => {
const query = `
UPDATE verification
SET value = $1, updated_at = $2
WHERE id = $3
RETURNING *`;
const values = ['updated-value', new Date(), 'test-id-123'];
const result = await pool.query(query, values);
console.log('Updated:', result.rows[0]);
};
// Execute all operations in sequence
async function main() {
try {
await readData();
await insertData();
await updateData();
await readData();
} finally {
await pool.end(); // Close database connection
}
}
main();
```
## Hasura PostgreSQL database access and connection

For GraphQL API:
1. **Hasura Admin Secret**: Found in the "Connect" tab of Hasura console
2. **Application Access Token**: Generate this by following our
[Application Access Tokens guide](/building-with-settlemint/application-access-tokens)
For PostgreSQL:
1. **PostgreSQL Connection String**: Found in the "Connect" tab of Hasura
console under "Database URL"
Always keep your credentials secure and never expose them in client-side code.
Use environment variables or a secure configuration management system in
production environments.
Understanding the PostgreSQL connection string:
`postgresql://hasura-f1cd9:0c510604a378d348e7ed@p2p.gke-europe.settlemint.com:30787/hasura-f1cd9`
Here's how it's broken down:
* **Protocol**: `postgresql://`\
Indicates the connection type: a PostgreSQL database over TCP.
* **Username**: `hasura-f1cd9`\
The database username used for authentication.
* **Password**: `0c510604a378d348e7ed`\
The corresponding password for the above username.
* **Host**: `p2p.gke-europe.settlemint.com`\
The server address (domain or IP) where the PostgreSQL database is hosted.
* **Port**: `30787`\
The network port on which the PostgreSQL service is listening.
* **Database Name**: `hasura-f1cd9`\
The specific PostgreSQL database to connect to on that server.
## Hasura backup
Via the `pg_dump` CLI command:
```bash
PGPASSWORD=0c510604a378d348e7ed pg_dump \
-h p2p.gke-europe.settlemint.com \
-p 30787 \
-U hasura-f1cd9 \
-d hasura-f1cd9 \
-F p \
-f ~/Desktop/hasura_backup.sql
```
## Taking backup via hasura CLI
You can back up two things via the Hasura CLI:
1. Hasura Database
2. Hasura Metadata
### Steps for taking a backup of hasura database
1. Install Hasura CLI
([https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/](https://hasura.io/docs/latest/hasura-cli/install-hasura-cli/))
2. Run the `hasura init` command to initialize a new Hasura project in the
working directory.
3. Edit the config.yaml file to configure the remote Hasura instance. You need
to generate an API Key in BPaaS and pass it with the endpoint.
Syntax of config.yaml:
```yaml
version: 3
endpoint:
admin_secret:
metadata_directory: metadata
actions:
kind: synchronous
handler_webhook_baseurl: http://localhost:3000
```
Example:
```yaml
endpoint: https://hasuradb-15ce.gke-japan.settlemint.com/sm_aat_86530f5bf93d82a9
admin_secret: dc5eb1b93f43fd28c53e
metadata_directory: metadata
actions:
kind: synchronous
handler_webhook_baseurl: http://localhost:3000
```
4. Run the `hasura console` command (this will sync everything to your local
Hasura instance).
5. Run this curl command to generate a DB export:
Curl format:
```
curl -d '{"opts": [ "-O", "-x", "--schema=public", "--inserts"], "clean_output": true, "source": "default"}' -H "x-hasura-admin-secret: <ADMIN_SECRET>" <HASURA_ENDPOINT>/v1alpha1/pg_dump > db.sql
```
Example:
```
curl -d '{"opts": [ "-O", "-x", "--schema=public", "--inserts"], "clean_output": true, "source": "default"}' -H "x-hasura-admin-secret:78b0e4618125322de0eb" https://fuchsiacapybara-7f70.gke-europe.settlemint.com/bpaas-1d79Acd6A2f112EA450F1C07a372a7D582E6121F/v1alpha1/pg_dump > db.sql
```
### Importing data into a new instance
Copy the content of the exported db.sql file into the new instance and execute
it as a SQL statement.
### Steps for taking a backup of hasura metadata
Hasura Metadata Export is a collection of yaml files which captures all the
Metadata required by the GraphQL Engine. This includes info about tables that
are tracked, permission rules, relationships, and event triggers that are
defined on those tables.
If you have already initialized your project via the Hasura CLI you should see
the Metadata directory structure in your project directory.
To export your entire Metadata using the Hasura CLI execute the following
command in your terminal:
```bash
# In hasura CLI
hasura metadata export
```
This will export the Metadata as YAML files in the `/metadata` directory.
### Steps for importing or applying hasura metadata
You can apply Metadata from one Hasura Server instance to another. You can also
apply an older or modified version of an instance's Metadata onto itself to
replace the existing Metadata. Applying or importing completely replaces the
Metadata on that instance, i.e. you lose any Metadata that existed before
applying.
```bash
# In hasura CLI
hasura metadata apply
```
### Reload hasura metadata
In some cases, the Metadata can be out of sync with the database schema. For
example, when a new column has been added to a table via an external tool.
```bash
# In hasura CLI
hasura metadata reload
```
For more on Hasura Metadata, refer to
[https://hasura.io/docs/latest/migrations-metadata-seeds/manage-metadata/](https://hasura.io/docs/latest/migrations-metadata-seeds/manage-metadata/).
For more on Hasura Migrations, refer to
[https://hasura.io/docs/latest/migrations-metadata-seeds/manage-migrations/](https://hasura.io/docs/latest/migrations-metadata-seeds/manage-migrations/).
Congratulations!
You have successfully configured the Hasura backend-as-a-service layer with the
off-chain database of your choice.
From here, we will proceed to add centralized and decentralized storage for our
images, documents, videos, archive files, and other storage needs.
file: ./content/docs/building-with-settlemint/hyperledger-fabric-guide/setup-storage.mdx
meta: {
"title": "Setup storage",
"description": "Add S3 or IPFS storage"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Summary
To integrate off-chain file storage into your blockchain application, you can
configure either IPFS (for decentralized content addressing) or MinIO (an
S3-compatible private storage layer) through the SettleMint platform. Both
options serve different use cases: IPFS excels in immutability and decentralized
access, while S3-style storage is better for secure, private, and
high-performance file delivery.
To get started, navigate to the relevant application in your SettleMint
workspace and open the Storage section from the left-hand menu. Click Add
Storage, which opens a configuration form. Choose the storage type, either IPFS
for decentralized or MinIO for private object storage. Assign a name and
configure your deployment settings like region, provider, and resource pack.
Once confirmed, the storage service will be deployed and available for use.
Once provisioned, you can access and manage your storage instance from the
Manage Storage section. Here, you will be able to view the storage endpoint,
health status, and metadata configuration. If using IPFS, you’ll be interacting
with content hashes (CIDs), while MinIO offers an S3-compatible interface where
files are stored under buckets and can be accessed via signed URLs.
Using the SettleMint SDK or CLI, developers will be able to list, query, and
manage storage instances programmatically. The SDK provides a typed interface to
connect, upload, retrieve, and delete files. For example, the
@settlemint/sdk-ipfs package allows seamless pinning and retrieval of files
using CIDs. Similarly, @settlemint/sdk-minio wraps around common S3 operations
like uploading files, generating expirable download URLs, and managing buckets.
Depending on your use case, both IPFS and MinIO can serve as complementary
layers. For public-facing and immutable content, such as NFT metadata, DAO
governance artifacts, or verifiable documents, IPFS is well suited. For private,
regulated, or access-controlled files, like KYC documents, user uploads, admin
reports, and internal metadata, MinIO offers a robust alternative with access
control and performance guarantees.
In practice, a dApp may use both systems in tandem: the file is stored in
S3/MinIO for fast access and usability, while its content hash is stored on IPFS
(and optionally, linked on-chain) to provide tamper-proof guarantees and content
validation. This hybrid model ensures performance, security, and
decentralization where it matters most.
Once storage is connected, users and developers can begin uploading files via
frontend integrations, backend scripts, or SDK calls. Content uploaded to IPFS
will return a CID, which can be persisted on-chain or referenced in subgraphs
and APIs. Files on S3/MinIO can be secured using signed URLs or policies, making
them suitable for user role–based access or limited-time file sharing.
## Off-chain file storage use cases in blockchain applications
Blockchain applications often require storing documents, images, videos, or
metadata off-chain due to cost, performance, or privacy reasons. Two common
approaches are:
* **IPFS**: A decentralized, content-addressed file system ideal for immutable,
verifiable, and censorship-resistant data.
* **MinIO (S3-compatible)**: A centralized, enterprise-grade storage solution that
supports private files, fine-grained access control, and fast retrieval.
Below are separate use case tables for each.
***
## 🌐 ipfs (interplanetary file system)
IPFS is a decentralized protocol for storing and sharing files in a peer-to-peer
network. Files are addressed by their content hash (CID), ensuring immutability
and verification.
| Category | Use Cases |
| -------------------------- | -------------------------------------------------------------------------------------- |
| **NFTs & Metadata** | NFT images and media, Metadata JSON, Reveal assets, Provenance data |
| **Decentralized Identity** | Hash of KYC documents, Verifiable credentials, DID documents, Encrypted identity data |
| **DAOs & Governance** | Proposals with supporting files, Community manifestos, Off-chain vote metadata |
| **Public Records** | Timestamped proofs, Open access research, Transparent regulatory disclosures |
| **Content Publishing** | Articles, Audio files, Podcasts, Open knowledge archives |
| **Gaming & Metaverse** | 3D assets, Wearables, In-game items, IPFS-based map data |
| **Token Ecosystems** | Whitepapers, Token metadata, Proof-of-reserve documents |
| **Data Integrity Proofs** | Merkle tree files, Hashed content for audit, CID-linked validation |
| **Hybrid dApps** | On-chain reference to CID, IPFS-pinned metadata, Public shareable URIs |
| **Data Portability** | Decentralized content backups, Peer-to-peer file sharing, Long-term open data archives |
***
## ☁️ minio (s3-compatible object storage)
MinIO is a centralized, S3-compatible object storage platform that offers speed,
scalability, and rich security features. It is especially suitable for private
or enterprise-grade applications.
| Category | Use Cases |
| ----------------------------- | --------------------------------------------------------------------------------------- |
| **KYC / Identity Management** | Encrypted KYC files, ID document storage, Compliance scans, Signature uploads |
| **User Uploads** | Profile pictures, File attachments, Media uploads, Form attachments |
| **Admin Dashboards** | Exported reports, Internal analytics files, Logs and snapshots |
| **E-Commerce / Marketplaces** | Product images, Order confirmations, Receipts, Invoices |
| **Private DAO Ops** | Budget spreadsheets, Voting records, Internal documents |
| **Education Platforms** | Certificates, Course PDFs, Student submissions |
| **Customer Support** | Ticket attachments, User-submitted evidence, File-based case history |
| **Real-Time Interfaces** | UI asset delivery, Previews, Optimized media for front-end display |
| **Data Recovery** | Automatic backups, Encrypted snapshots, Versioned file histories |
| **Secure Downloads** | Signed URLs for restricted access, Expirable public links, S3-based token-gated content |
***
## Summary: when to use which?
| Use Case Pattern | Recommended Storage |
| ------------------------------------- | ------------------- |
| Public, immutable content | **IPFS** |
| Verifiable, on-chain linked data | **IPFS** |
| Private or role-based content | **S3** |
| Fast real-time access (UI/media) | **S3** |
| Hybrid (IPFS for hash, S3 for access) | **Both** |
Each system has unique advantages. For truly decentralized applications where
transparency and verifiability matter, IPFS is a natural fit. For operational
scalability, secure access, and enterprise-grade needs, S3 provides a reliable
foundation.
In hybrid dApps, combining both ensures performance without compromising on
decentralization.
## Add storage
Navigate to the **application** where you want to add storage. Click **Storage** in the left navigation, and then click **Add storage**. This opens a form.
### Configure Storage
1. Choose storage type (IPFS or MinIO)
2. Choose a **Storage name**
3. Configure deployment settings
4. Click **Confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create storage:
```bash
# Get information about the command, available storage types, and all options
settlemint platform create storage --help

# Create storage
settlemint platform create storage
```
For a full example of how to connect to a storage using the SDK, see the [MinIO SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-minio#api-reference) or [IPFS SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-ipfs#api-reference).
Get your access token from the Platform UI under User Settings → API Tokens.
The SDK enables you to:
* Use IPFS for decentralized storage - check out the [IPFS SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/ipfs)
* Use MinIO for S3-compatible storage - check out the [MinIO SDK documentation](https://github.com/settlemint/sdk/tree/main/sdk/minio)
## Manage storage
Navigate to your storage and click **Manage storage** to:
* View storage details and status
* Monitor health
* Access storage interface
* Update configurations
```bash
# List storage instances
settlemint platform list storage --application <application-name>

# Get storage details
settlemint platform read storage <storage-unique-name>
```
```typescript
// Assumes a platform client created with the SettleMint SDK, e.g.:
// import { createSettleMintClient } from "@settlemint/sdk-js";
// const client = createSettleMintClient({ accessToken: "...", instance: "https://console.settlemint.com" });

// List storage instances
const listStorage = async () => {
  const storages = await client.storage.list("your-app-id");
  console.log("Storage instances:", storages);
};

// Get storage details
const getStorage = async () => {
  const storage = await client.storage.read("storage-unique-name");
  console.log("Storage details:", storage);
};
```
Congratulations!
You have successfully added S3 and IPFS storage to your application environment.
Next, we will add custom container deployments, where you can host your
application's front-end user interface or any other services required to
complete your application.
file: ./content/docs/launching-the-platform/managed-cloud-saas/deployment-plans.mdx
meta: {
"title": "Deployment plans",
"description": "Guide to deployment plans and resource allocation in SettleMint"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
For On-premise and BYOC options, please [contact
us](mailto:support@settlemint.com).
## Cloud providers & regions
### Choose provider
Select from supported cloud providers:
* Google Cloud Platform
* Amazon Web Services
* Microsoft Azure
### Select region
Pick available regions based on:
* Geographic location
* Compliance requirements
* Performance needs
## Resource packs
### Small
* Basic memory allocation
* Standard vCPU
* Minimal storage
* Development use
### Medium
* Enhanced memory
* Multiple vCPUs
* Extended storage
* Production ready
### Large
* Maximum memory
* Dedicated vCPUs
* Extensive storage
* High performance
## Recommended setups
### Development/PoC
* Shared infrastructure
* Small resource pack
* Basic monitoring
* Cost optimized
### Production
* Dedicated infrastructure
* Medium/Large resource pack
* Full monitoring
* High availability
For each service you deploy (network, node, smart contract set, etc.) you need
to select a deployment plan. The deployment plan defines the infrastructure
type, the cloud provider and region of your choice, and the resources (memory,
vCPU and disk space) that will be allocated to your service.
## Infrastructure type
Not all applications are equal. Some are for experimentation, some are pilots,
while others are high volume and mission critical. We make it easy to match the
infrastructure to the scale of the project.
* **Shared** - This is typically the most cost-effective deployment
configuration. Resources are deployed in a shared cluster. The performance
will vary based on the demand from other services sharing the infrastructure.
This configuration is like living on an island with other inhabitants with
whom you need to share limited resources.
* **Dedicated** - This configuration offers the highest specifications without
requiring additional technical overhead. Your service runs on its own
exclusively-used cloud infrastructure, meaning it can't be impacted by others.
To continue the metaphor, with this configuration you choose the size of the
island based on your needs, and you don't share its resources with anyone
else.
**On-premise** and **Bring Your Own Cloud (BYOC)** are also supported. Feel free
to [contact us](mailto:support@settlemint.com) to discuss these options.
## Cloud provider and region
We offer you the flexibility to deploy your services in the cloud of your
choice, and to easily build cross-cloud provider and cross-geographical region
networks. All leading cloud providers are supported and we are continuously
working on adding support for more regions.
[Discover all supported cloud providers and available regions](/launching-the-platform/managed-cloud-saas/supported-cloud-providers)
## Resource pack
The resource pack refers to the memory, vCPU and disk space allocated to your
service. You can choose between **small, medium and large**. If at some point
the current resource usage is about to reach its limit, and the service risks
getting stuck, you can scale the resource pack.
## Recommended setup
* Non-production application or Proof of Concept: shared infrastructure and
small resource pack
* Application in production mode: dedicated infrastructure and medium resource
pack
file: ./content/docs/launching-the-platform/managed-cloud-saas/introduction.mdx
meta: {
"title": "Introduction",
"description": "SaaS Platform"
}
SettleMint’s SaaS delivery model is designed to be developer-friendly from day
one. Developers gain instant access to pre-configured blockchain environments
via a simple web interface or APIs, removing the need to install, configure, or
troubleshoot infrastructure components. Everything from network provisioning to
smart contract deployment is abstracted into intuitive workflows, letting
developers build, test, and iterate applications with minimal friction.
One of the key strengths of SettleMint’s SaaS approach is the future-proof
flexibility it offers. Applications built on the SaaS platform can be migrated
to self-managed or on-premise environments at any stage, without needing to
re-architect the solution. This provides enterprises with full control over
deployment models, whether they start on SaaS for speed and scale or later shift
to on-premise for compliance, data sovereignty, or regulatory alignment.
By enabling such smooth transitions between deployment models, SettleMint
ensures that technical decisions made early in the project lifecycle do not
become limitations later. Organizations are empowered to innovate quickly while
maintaining the freedom to adapt their infrastructure strategy as business or
compliance needs evolve.
file: ./content/docs/launching-the-platform/managed-cloud-saas/supported-cloud-providers.mdx
meta: {
"title": "Supported cloud providers",
"description": "Overview of cloud providers supported by the SettleMint platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
## Overview
When launching a blockchain development project, you need to decide where your
project will be hosted. SettleMint offers you the flexibility to deploy in the
cloud of your choice, and easily build cross-cloud provider and
cross-geographical region networks.
Every deployment uses Kubernetes as an orchestration layer for all the different
services. Kubernetes is a widely used open-source system for automating the
deployment, scaling, and management of containerized applications. By leveraging
managed Kubernetes services by cloud providers, our platform can offer
affordable, stable, and scalable environments to run the blockchain nodes and
additional services.
### Features
* Cross-cloud deployment support
* Regional flexibility
* Managed Kubernetes services
* Enterprise-grade infrastructure
### Benefits
* High availability
* Scalable infrastructure
* Cost optimization
* Global reach
## Cloud provider options
### AWS: Available Regions
#### Ready to Deploy
* Frankfurt
* Mumbai
* Singapore
#### Contact Required
* Ohio [Contact us](mailto:support@settlemint.com)
* Bahrain [Contact us](mailto:support@settlemint.com)
* Osaka [Contact us](mailto:support@settlemint.com)
**AWS Benefits**
* Global infrastructure
* Enterprise-grade security
* Extensive service integration
* Flexible pricing options
### GCP: Available Regions
#### Ready to Deploy
* Brussels
* Mumbai
* Singapore
* Tokyo
#### Contact Required
* Oregon [Contact us](mailto:support@settlemint.com)
**GCP Benefits**
* Advanced networking
* Strong container support
* Integrated DevOps tools
* AI/ML capabilities
### Azure: Available Regions
#### Ready to Deploy
* Dubai
* Tokyo
#### Contact Required
* Amsterdam [Contact us](mailto:support@settlemint.com)
* Singapore [Contact us](mailto:support@settlemint.com)
* California [Contact us](mailto:support@settlemint.com)
**Azure Benefits**
* Enterprise integration
* Hybrid cloud support
* Comprehensive compliance
* Advanced security features
## Infrastructure details
Every deployment uses Kubernetes as an orchestration layer for all services,
providing:
* Automated deployment
* Scalable operations
* Container management
* Service orchestration
* High availability
* Resource optimization
## Getting started
1. Choose your preferred cloud provider
2. Select an available region
3. Contact us for regions marked with "Contact Required"
4. Begin your deployment process
### Looking for more options?
We are continuously working on adding support for more cloud providers and
regions.
* Need a specific region?
* Interested in on-premise setup?
* Want to learn about 'Bring Your Own Cloud'?
[Contact our team](mailto:support@settlemint.com) to discuss your requirements.
file: ./content/docs/launching-the-platform/self-hosted-onprem/infrastructure-requirements.mdx
meta: {
"title": "Infrastructure requirements",
"description": "Infrastructure requirements for self-hosting the platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
The requirements listed below are for the core platform components only.
Additional resources will be needed for prerequisites and services you plan to
deploy.
## Compute resources
### Minimum
* **CPU**: 4 cores
* **RAM**: 16GB
* **Storage**: 100GB SSD
Minimum requirements are suitable for testing and development environments only.
### Recommended
* **CPU**: 8+ cores
* **RAM**: 32GB
* **Storage**: 250GB+ SSD
These specifications provide headroom for growth and better performance.
## Network requirements
The platform requires specific network configurations to ensure secure and
reliable communication between components:
### Connectivity
* **Internet Access**: Required for pulling container images and updates
* **Load Balancer**: For distributing traffic across nodes
* **Ingress Controller**: For routing external traffic
* **SSL/TLS**: Valid certificates for secure communication
### Required Ports
* **80/443**: HTTP/HTTPS traffic
* **6443**: Kubernetes API server
* **30000-32767**: NodePort services range
* **10250**: Kubelet API
* **179**: Calico BGP (if using Calico)
**Network Security** We recommend implementing network policies and security
groups to control traffic flow between components.
## Storage requirements
Proper storage configuration is crucial for platform stability and performance.
Consider the following requirements:
### Performance Requirements
* **Type**: SSD storage required for all components
* **IOPS**: Minimum 3000 IOPS for database volumes
* **Latency**: \< 10ms average
* **Throughput**: 125MB/s minimum for database volumes
### Capacity Planning
* **Initial Allocation**: Start with recommended sizes
* **Growth Buffer**: Plan for 30% annual growth
* **Backup Storage**: Equal to primary storage
* **Monitoring**: Implement storage usage alerts
### Storage best practices
* Use separate volumes for different components
* Implement regular backup procedures
* Monitor storage performance metrics
* Set up alerts for capacity thresholds
## Prerequisites resource requirements
When hosting prerequisites on the same infrastructure, these requirements must
be added to the base platform specifications. Each component can be hosted
separately or together depending on your architecture.
### PostgreSQL
### Production considerations
These are baseline requirements. For production environments, consider:
* High availability configurations may require 2-3x these resources
* Monitoring and logging overhead
* Backup storage requirements
* Scaling headroom for growth
### Total resource summary
For a production setup hosting both platform and prerequisites:
## Service requirements
The platform allows you to deploy services in two ways:
1. On the same cluster as the platform
2. On separate target clusters
### Same Cluster Deployment
If you plan to deploy services on the same cluster as the platform:
* Add service requirements to the platform requirements
* Include them in capacity planning
* Account for resource overhead
* Plan for scaling headroom
### Target Cluster Deployment
Using separate target clusters for services:
* Keeps platform and service workloads isolated
* Requires separate infrastructure planning
* Can be optimized for specific service needs
* Enables geographic distribution
### Infrastructure planning strategy
We recommend:
1. List all services you plan to deploy
2. Decide on deployment strategy (same cluster or target clusters)
3. For same cluster: Add service requirements to platform requirements
4. For target clusters: Plan separate infrastructure
5. Include 30% buffer for growth and peak loads
### Example calculation
Let's calculate requirements for a setup with:
* 2 Polygon nodes (Mainnet & Mumbai)
* 1 Hyperledger Besu node
* 1 Smart Contract Portal
* 1 Integration Studio
* 1 Blockscout Explorer
**Service Requirements (Medium size, Dedicated mode):**
* Polygon Nodes (2x):
* CPU: 2 × 1.5 cores = 3 cores
* RAM: 2 × 1.0 GB = 2 GB
* Storage: Minimal
* Besu Node (1x):
* CPU: 1.5 cores
* RAM: 2.5 GB
* Storage: 100 GB
* Smart Contract Portal:
* CPU: 2.0 cores
* RAM: 2.0 GB
* Storage: 10 GB
* Integration Studio:
* CPU: 2.0 cores
* RAM: 4.0 GB
* Storage: 10 GB
* Blockscout Explorer:
* CPU: 2.0 cores
* RAM: 4.0 GB
* Storage: 50 GB
**Same Cluster Approach:**
* Total CPU: 27+ cores (16 platform/prereqs + 11 services)
* Total RAM: 67GB+ (52GB platform/prereqs + 15GB services)
* Total Storage: 620GB+ (440GB platform/prereqs + 180GB services)
**Target Cluster Approach:**
* Platform Cluster: 16+ cores, 52GB+ RAM, 440GB+ storage
* Service Cluster: 11+ cores, 15GB+ RAM, 180GB+ storage
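If you are planning a larger setup, the same arithmetic can be scripted. The sketch below recomputes the service totals from this example and applies the 30% buffer recommended in the planning strategy above; note that the summary quotes rounded-up figures (≈11 cores, 15 GB RAM, 180 GB storage).
```typescript
type Service = { name: string; count: number; cpu: number; ramGb: number; storageGb: number };

// Per-service requirements from the example above (medium size, dedicated mode).
const services: Service[] = [
  { name: "Polygon node", count: 2, cpu: 1.5, ramGb: 1.0, storageGb: 0 },
  { name: "Besu node", count: 1, cpu: 1.5, ramGb: 2.5, storageGb: 100 },
  { name: "Smart Contract Portal", count: 1, cpu: 2.0, ramGb: 2.0, storageGb: 10 },
  { name: "Integration Studio", count: 1, cpu: 2.0, ramGb: 4.0, storageGb: 10 },
  { name: "Blockscout Explorer", count: 1, cpu: 2.0, ramGb: 4.0, storageGb: 50 },
];

const sum = (f: (s: Service) => number) =>
  services.reduce((total, s) => total + s.count * f(s), 0);

const totals = { cpu: sum(s => s.cpu), ramGb: sum(s => s.ramGb), storageGb: sum(s => s.storageGb) };
const buffer = 1.3; // 30% headroom for growth and peak loads

console.log("Services:", totals); // { cpu: 10.5, ramGb: 14.5, storageGb: 170 }
console.log("With 30% buffer:", {
  cpu: Math.ceil(totals.cpu * buffer),             // 14 cores
  ramGb: Math.ceil(totals.ramGb * buffer),         // 19 GB
  storageGb: Math.ceil(totals.storageGb * buffer), // 221 GB
});
```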
file: ./content/docs/launching-the-platform/self-hosted-onprem/introduction.mdx
meta: {
"title": "Introduction",
"description": "Getting started with the SettleMint Platform self-hosted installation"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
SettleMint can be deployed within an organization’s own air-gapped
infrastructure, providing enhanced security, data sovereignty, and compliance
with internal and regulatory requirements. Enterprises can build and manage
blockchain applications directly within their existing environments, including
Kubernetes and OpenShift, without reliance on external hosting or cloud
dependencies. This enables seamless deployment across multiple clusters and
regions, supporting highly distributed, hybrid blockchain architectures.
Welcome to the SettleMint Platform self-hosted installation guide. This
comprehensive guide will walk you through deploying the SettleMint Platform in
your own infrastructure.
With centralized control through a unified dashboard, organizations can govern
deployments across environments, assign role-based permissions, and enforce
secure access policies. Integrated load balancing ensures high availability and
scalability, while full ownership over hardware configurations allows teams to
fine-tune compute, memory, and storage resources according to workload demands.
This setup offers a highly secure, scalable, and customizable platform for
enterprise-grade blockchain deployments.
## Self-hosted installation guide
**Installation Time**
Complete installation typically takes 2-4 hours, depending on your
infrastructure setup and familiarity with the components.
## Guide structure
This installation guide is organized into three main sections. Select each tab
below to learn more:
### Requirements Section
Start here to ensure your infrastructure meets all necessary specifications before proceeding. This section covers:
* Kubernetes cluster requirements
* Network and storage specifications
* Access and security requirements
👉 [View Requirements Guide](/launching-the-platform/self-hosted-onprem/infrastructure-requirements)
### Prerequisites Section
After confirming requirements, set up the required supporting services. This section provides:
* Step-by-step setup guides
* Multiple deployment options
* Configuration requirements
* Information collection checklists
👉 [View Prerequisites Guide](/launching-the-platform/self-hosted-onprem/prerequisites/overview)
### Installation Section
Finally, deploy the SettleMint Platform using Helm:
* Standard Kubernetes deployment
* Flexible configuration options
* Production-ready setup
👉 [View Installation Guide](/launching-the-platform/self-hosted-onprem/platform-installation)
**Using This Guide**
We recommend:
1. Read through each section before starting
2. Complete all requirements and prerequisites
3. Collect required information as you progress
4. Follow the guides in order
## Before you begin
### Required Access
* Administrative access to your infrastructure
* Ability to create/modify DNS records
* Permission to deploy Kubernetes resources
* Access to cloud resources (if applicable)
## Partner support
A thorough understanding of Kubernetes concepts, architecture, and operation is
essential for successfully deploying and managing the SettleMint Platform. This
includes expertise in:
* Kubernetes cluster management
* Helm chart deployment and customization
* Infrastructure maintenance and monitoring
* Security best practices
If your team lacks the in-house expertise required for managing these
deployments, we strongly recommend collaborating with one of our certified
partners. Our partners are specifically trained to:
* Guide you through the installation process
* Help with infrastructure setup and configuration
* Provide ongoing maintenance and support
* Assist with troubleshooting and optimizations
Additionally, our blockchain technology experts are available to support you
with any technical questions or challenges you might encounter.
To connect with a certified partner or for direct assistance, please contact us
at [support@settlemint.com](mailto:support@settlemint.com).
## Information collection
Throughout the installation process, you'll need to collect configuration
details from each prerequisite service. We've made this easy by including
"Information Collection Boxes" in each guide.
### How it works
* Each prerequisite guide contains an Information Collection Box
* Required values are clearly marked
* Values are needed during platform installation
* Keep track of sensitive information securely
### Example collection box
Here's what an Information Collection Box looks like in the prerequisite guides:
**Required Values Example**
This is a sample of what you'll see in the guides. For Redis setup, you would
collect values like:
* Endpoint: redis-master.default.svc.cluster.local
* Password: your-secure-password
* Port: 6379
Note: This is just an example. Actual values will be collected during the
prerequisite setup.
## Need help?
### Documentation resources
* Review installation guides
* Check troubleshooting sections
* Follow best practices
* Consult platform architecture
### Support channels
* Email: [support@settlemint.com](mailto:support@settlemint.com)
* Schedule technical consultation
* Contact your account manager
**Next Step**
Start by reviewing the
[Infrastructure Requirements](/launching-the-platform/self-hosted-onprem/infrastructure-requirements)
to ensure your environment meets all necessary specifications.
file: ./content/docs/launching-the-platform/self-hosted-onprem/platform-installation.mdx
meta: {
"title": "Platform installation",
"sidebar_position": 3
}
## Overview
This guide walks you through installing the SettleMint Platform using Helm,
providing a command-line based installation method with full control over the
deployment process.
## Prerequisites
Before starting the installation, ensure you have:
* Completed all [prerequisite services](prerequisites/overview) setup
* Collected all required information from the prerequisite guides
* Met all infrastructure requirements
* Helm 3.x installed
* kubectl access to your cluster
* Admin permissions
## Installation steps
### 1. Sign in to the SettleMint helm registry
```bash
helm registry login harbor.settlemint.com --username <username> --password <password>
```
Replace `<username>` and `<password>` with your provided credentials.
### 2. Review configuration options
View all available configuration options:
```bash
helm show values oci://registry.settlemint.com/settlemint-platform/settlemint --version 7.0.0
```
### 3. Install the platform
Create a values file (values.yaml) with your configuration:
```yaml
ingress:
enabled: true
className: "nginx"
host: ''
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
nginx.ingress.kubernetes.io/proxy-body-size: "500m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
cert-manager.io/cluster-issuer: "letsencrypt" # If using cert-manager
tls:
- secretName: 'platform-tls'
hosts:
- ''
- '*.'
redis:
host: ''
port: ''
password: ''
tls: true
postgresql:
host: ''
port: ''
user: ''
password: ''
database: ''
sslMode: require
auth:
jwtSigningKey: ''
providers:
google:
enabled: true
clientID: ''
clientSecret: ''
microsoftEntraId:
enabled: true
clientID: ''
clientSecret: ''
tenantId: ''
vault:
address: ''
roleId: ''
secretId: ''
namespace: 'vault'
features:
observability:
metrics:
enabled: true
apiUrl: ''
logs:
enabled: true
apiUrl: ''
deploymentEngine:
platform:
domain:
hostname: ''
clusterManager:
domain:
hostname: ''
state:
      connectionUrl: 's3://<bucket-name>?region=<region>'
secretsProvider: 'passphrase'
credentials:
encryptionKey: ''
aws:
accessKeyId: ''
secretAccessKey: ''
region: ''
# azure:
# # -- Azure storage account name
# storageAccount: ''
# # -- Azure storage account key
# storageKey: ''
targets:
- id: ''
name: ''
icon: ''
clusters:
- id: ''
name: ''
icon: ''
location:
lat: ''
lon: ''
connection:
sameCluster:
enabled: true
namespace:
single:
name: ''
domains:
service:
tls: true
hostname: ''
storage:
storageClass: ''
ingress:
ingressClass: ''
capabilities:
mixedLoadBalancers: false
app:
replicaCount: ''
api:
replicaCount: ''
existingSecret: ''
job:
resources:
requests:
cpu: ''
memory: ''
autoscaling:
enabled: true
deployWorker:
resources:
requests:
cpu: ''
memory: ''
autoscaling:
enabled: true
clusterManager:
replicaCount: ''
docs:
replicaCount: ''
imagePullCredentials:
registries:
harbor:
enabled: true
registry: "harbor.settlemint.com"
username: ''
password: ''
email: ''
support:
kubernetes-replicator:
enabled: true
features:
billing:
enabled: false
alerting:
slack:
enabled: false
webhookUrl: ''
stripe:
apiSecret: ''
webhookSecret: ''
webhookUrl: ''
apiLiveMode: false
taxRateId: ''
publishableKey: ''
autoDelete:
enabled: false
emailUsageExcel:
enabled: true
privateKeys:
hsm:
awsKms:
enabled: false
txsigner:
image:
registry: ghcr.io
repository: settlemint/btp-signer
tag: '7.6.10'
networks:
besu:
image:
registry: docker.io
repository: hyperledger/besu
tag: '24.12.2'
quorum:
image:
registry: docker.io
repository: quorumengineering/quorum
tag: '24.4.1'
geth:
image:
registry: docker.io
repository: ethereum/client-go
tag: 'alltools-v1.13.4'
fabric:
ca:
image:
registry: docker.io
repository: hyperledger/fabric-ca
tag: '1.5.13'
orderer:
image:
registry: docker.io
repository: hyperledger/fabric-orderer
tag: '2.5.10'
tools:
image:
registry: docker.io
repository: hyperledger/fabric-tools
tag: '2.5.10'
peer:
image:
registry: docker.io
repository: hyperledger/fabric-peer
tag: '2.5.10'
couchdb:
image:
registry: docker.io
repository: apache/couchdb
tag: '3.4.2'
dind:
image:
registry: docker.io
repository: library/docker
tag: '24.0.7-alpine3.18'
mainnets:
enabled: true
ethereumMetricsExporter:
image:
registry: docker.io
repository: ethpandaops/ethereum-metrics-exporter
tag: '0.26.0'
smartContractSets:
etherscan:
apiKeys:
etherscan: ""
polyscan: ""
zkevmpolyscan: ""
bscscan: ""
arbiscan: ""
optimistic: ""
IDE:
image:
registry: ghcr.io
repository: settlemint/btp-ide
tag: 'v7.6.5'
sets:
- id: starterkit-asset-tokenization
name: Asset Tokenization
image:
registry: ghcr.io
repository: settlemint/starterkit-asset-tokenization
tag: '0.0.11'
# ... (other sets can be added as needed)
customDomains:
enabled: false
outerIngressClass: "nginx"
email: ""
crons:
cleanup: "0 */10 * * * *"
```
Replace all placeholder values with your actual configuration:
* The license section should be configured with your provided license file
* Image tags should be verified for the latest stable versions
* Remove any unused features to keep the configuration clean
Below is a complete example values file:
```yaml
ingress:
enabled: true
className: "nginx"
host: "example.company.com"
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
nginx.ingress.kubernetes.io/proxy-body-size: "500m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
cert-manager.io/cluster-issuer: "letsencrypt"
tls:
- secretName: "example-tls"
hosts:
- "example.company.com"
- "*.example.company.com"
redis:
host: "redis.example.local"
port: "6379"
password: "abc123password"
tls: true
postgresql:
host: "postgresql.example.local"
port: "5432"
user: "db_user"
password: "xyz789password"
database: "platform_db"
sslMode: require
auth:
jwtSigningKey: "abc123jwt456xyz789signing000key111example"
providers:
google:
enabled: true
clientID: "example-123456789.apps.googleusercontent.com"
clientSecret: "abcdef-example-google-secret"
vault:
address: "http://vault.example.local:8200"
roleId: "abc123-role-id"
secretId: "xyz789-secret-id"
namespace: "vault"
features:
observability:
metrics:
enabled: true
apiUrl: "http://metrics.example.local/api/v1"
logs:
enabled: true
apiUrl: "http://logs.example.local/api/v1"
deploymentEngine:
platform:
domain:
hostname: "example.company.com"
state:
connectionUrl: "s3-compatible-endpoint-url"
secretsProvider: "passphrase"
credentials:
encryptionKey: "abc123encryption456key789example000key"
aws:
accessKeyId: "EXAMPLEKEYID123456"
secretAccessKey: "abc123example456secret789key000aws"
region: "us-east-1"
azure:
storageAccount: "example-storage-account"
storageKey: "abc123example456key789key000azure"
google:
project: "example-project-id"
credentials: |
{
"type": "service_account",
"project_id": "your-project",
"private_key_id": "key-id",
"private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
"client_email": "service-account@project.iam.gserviceaccount.com",
"client_id": "client-id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account@project.iam.gserviceaccount.com"
}
targets:
- id: "example"
name: "Example Cluster"
icon: "kubernetes"
clusters:
- id: "main"
name: "Main"
icon: "global"
location:
lat: 0.0000
lon: 0.0000
connection:
sameCluster:
enabled: true
namespace:
single:
name: "example"
domains:
service:
tls: true
hostname: "example.company.com"
storage:
storageClass: "standard"
ingress:
ingressClass: "nginx"
app:
replicaCount: "2"
api:
replicaCount: "2"
existingSecret: "example-secret"
job:
resources:
requests:
cpu: "100m"
memory: "512Mi"
deployWorker:
resources:
requests:
cpu: "100m"
memory: "512Mi"
clusterManager:
replicaCount: "2"
imagePullCredentials:
registries:
harbor:
enabled: true
registry: "harbor.settlemint.com"
username: "example_user"
password: "abc123registry456password"
email: "example@company.com"
support:
kubernetes-replicator:
enabled: true
```
Install the platform:
```bash
helm upgrade --install settlemint oci://registry.settlemint.com/settlemint-platform/settlemint \
  --namespace settlemint \
--version 7.0.0 \
--create-namespace \
--values values.yaml
```
### 4. Verify installation
Check the deployment status:
```bash
kubectl get pods -n settlemint
```
Verify all pods are running and ready.
### 5. Access the platform
Once all pods are running, access the platform at `https://<your-hostname>`, where `<your-hostname>` is the ingress host you configured in your values file.
### 6. Target clusters configuration
The platform supports deploying blockchain nodes and applications to multiple
target clusters across different cloud providers and regions. This section
explains how to configure target clusters in your values file.
#### Target structure
The targets configuration uses a simple 2-level hierarchy:
* **Target** (top level grouping)
* **Clusters** (individual Kubernetes clusters)
#### Basic configuration example
```yaml
features:
deploymentEngine:
targets:
- id: GROUP1
name: First Group
icon: cloud
clusters:
- id: CLUSTER1
name: Primary Cluster
icon: kubernetes
location:
lat: 50.8505
lon: 4.3488
namespace:
multiple:
enabled: true
prefix: "sm"
connection:
kubeconfig:
enabled: true
domains:
service:
tls: true
hostname: "cluster1.example.com"
storage:
storageClass: "standard"
ingress:
ingressClass: "nginx"
capabilities:
mixedLoadBalancers: false
nodePorts:
enabled: true
range:
min: 30000
max: 32767
- id: GROUP2
name: Second Group
icon: cloud
clusters:
- id: CLUSTER2
name: Secondary Cluster
icon: kubernetes
location:
lat: 1.3521
lon: 103.8198
namespace:
multiple:
enabled: true
prefix: "prod"
connection:
kubeconfig:
enabled: true
domains:
service:
tls: true
hostname: "cluster2.example.com"
storage:
storageClass: "standard"
ingress:
ingressClass: "nginx"
capabilities:
mixedLoadBalancers: true
nodePorts:
enabled: true
range:
min: 30000
max: 32767
```
#### Configuration options
##### Target level
* `id`: Unique identifier for the target group
* `name`: Display name
* `icon`: Icon identifier for the UI
##### Cluster level
* `id`: Unique identifier for the cluster
* `name`: Display name for the region/location
* `icon`: Icon identifier for the UI
* `disabled`: (Optional) Set to true to disable this cluster
* `location`: Geographic coordinates for visualization
* `lat`: Latitude
* `lon`: Longitude
##### Namespace configuration
```yaml
namespace:
single:
enabled: false # Use for single namespace deployments
name: deployments
runAsUser: 2024
fsGroup: 2024
multiple:
enabled: true # Use for multiple namespace deployments
prefix: "sm" # Prefix for created namespaces
```
##### Connection settings
```yaml
connection:
sameCluster:
enabled: false
kubeconfig:
enabled: true
```
##### Domain configuration
```yaml
domains:
service:
tls: true # Enable TLS for the domain
hostname: "cluster.example.com" # Domain for accessing services
```
The domain configuration determines how services in the cluster will be
accessed. Each cluster needs a unique domain that resolves to its ingress
controller.
##### Storage configuration
```yaml
storage:
storageClass: "standard" # Default storage class for the cluster
```
Storage class recommendations per cloud provider:
* GKE: Use `"standard"` for general purpose or `"premium-rwo"` for better
performance
* EKS: Use `"gp3"` for general purpose or `"io1"` for high-performance workloads
* AKS: Use `"managed-premium"` for production or `"default"` for development
##### Ingress configuration
```yaml
ingress:
ingressClass: "nginx" # Ingress controller class name
```
The ingress class should match your installed ingress controller. Common
options:
* `"nginx"` for NGINX Ingress Controller
* `"azure/application-gateway"` for Azure Application Gateway
* `"alb"` for AWS Application Load Balancer
##### Capabilities configuration
```yaml
capabilities:
mixedLoadBalancers: false # Support for mixed LoadBalancer services
nodePorts:
enabled: true # Enable NodePort service type
range: # Port range for NodePort services
min: 30000
max: 32767
```
Capabilities determine what features are available in the cluster:
* `mixedLoadBalancers`: Enable if your cluster supports both internal and
external load balancers
* `nodePorts`: Configure if you need to expose services using NodePort type
* The port range should be within Kubernetes defaults (30000-32767)
* Ensure the range doesn't conflict with other services
#### Important considerations
1. **Domain Names**
* Each cluster must have a unique domain name
* Domains should be properly configured in your DNS provider
* TLS certificates will be automatically managed if cert-manager is
configured
2. **Storage Classes**
* Verify the storage class exists in your cluster before using it
* Consider performance requirements when selecting storage classes
* Some features may require specific storage capabilities (e.g., RWX support)
3. **Network Capabilities**
* `mixedLoadBalancers` should match your cloud provider's capabilities
* NodePort ranges should not conflict with other services
* Ensure network policies allow required communication
When setting up a new cluster, start with the basic configuration and
gradually enable additional capabilities as needed. This approach helps in
identifying potential issues early in the deployment process.
## Troubleshooting
If you encounter issues during installation:
1. Debug the installation:
```bash
helm upgrade --install --debug --dry-run settlemint oci://registry.settlemint.com/settlemint-platform/settlemint \
  --namespace settlemint \
--values values.yaml
```
2. Check pod logs:
```bash
kubectl logs -n settlemint <pod-name>
```
3. Generate a support bundle:
```bash
# Install support bundle plugin
curl https://krew.sh/support-bundle | bash
# Generate bundle
kubectl support-bundle --load-cluster-specs
```
Send the generated support bundle to
[support@settlemint.com](mailto:support@settlemint.com) for assistance.
## Uninstalling
To remove the platform:
```bash
helm delete settlemint --namespace settlemint
```
**Note:** This will not delete persistent volumes or other resources outside of
Helm's control. You may need to clean these up manually.
file: ./content/docs/platform-components/account-billing/add-a-client.mdx
meta: {
"custom_edit_url": null,
"title": "Add a client",
"description": "Guide to adding a client on SettleMint",
"sidebar_position": 2
}
This guide explains how to add a client's organization to your organization as a
SettleMint partner. This feature helps partners manage their client's resources
and applications on SettleMint.
If you are not a SettleMint partner, this guide may not be useful for you. For
general account and organization information, visit the
[Setup account and billing](/building-with-settlemint/setup-account-and-billing)
section.
Learn about the SettleMint Partner program and how to become a partner on our
[Partner Program Page](https://www.settlemint.com/partner-program).
## Understanding the partner-client model
As a SettleMint partner, you can manage your client's resource usage and
applications on the platform. This is done by transferring your client's
organization to your organization.
Your organization's other clients **WILL NOT** see information from any other
clients.
Clients already using SettleMint can request to link their organization to
yours. The client selects [join a partner](/account-billing/join-a-partner) from
their account to start the process.
## Enabling partner access
To enable your organization to have partner access on SettleMint, you must first
contact the SettleMint customer success team at [support@settlemint.com](mailto:support@settlemint.com) to
request access. We will confirm your request and do the needed configuration
steps.
## What changes after becoming a partner?
### Organization access
Once a client organization is linked to yours, all users of your organization
will have access to the client's applications, and you will be able to manage
their resources and applications (e.g. blockchain networks and nodes).
### Manage client apps
After setting up your organization as a partner on SettleMint, you can manage
your clients' applications and resource usage. On the SettleMint dashboard your
organization's own applications will be labeled as `Internal applications`,
while the `Clients & apps` section on the homepage displays your clients'
applications.
### Manage client invoicing
Clients can be configured to either receive invoices directly from SettleMint or
through your organization.
### Manage client pricing
Clients can be configured to either see the pricing of their resource usage or
have these hidden.
### Resource cost monitoring
Your client will not see the cost of the resources they use on SettleMint
(e.g. blockchain nodes). This information is only shared with you.
### Billing
The client will no longer receive any invoices or billing directly from
SettleMint. They will receive one last invoice from SettleMint to close out the
current billing period.
## How to add a client
### Clients already using settlemint
Clients already using SettleMint can request to transfer their organization to
yours. The client selects [join a partner](/account-billing/join-a-partner) from
their account to start the process.
### Client new to settlemint
Clients new to SettleMint can get access to organizations you have created for
them. The client receives an invite once an administrator has
[added them as a new client](#how-to-add-a-client) and invited them as members.
1. **Open Organizations & Apps**
If your account has been enabled as a partner, you will find both a `Clients`
list and an `Add a client` option under the `Organization & Apps` menu. Clicking
on `Organization Menu` (4 squares) in the top right opens this menu.
The `Add Client` option is only available to users with administrative access to
the organization.
2. **Add the Client Name or Transfer Code**
For clients new to SettleMint, you can create a new organization for them by
entering a client name.
For clients already using SettleMint, you can transfer their organization by
selecting the transfer code option. The transfer code is what you received by
email when a client has requested to
[join a partner](/account-billing/join-a-partner).
3. **Confirming Client Added**
For clients new to SettleMint, you will be redirected to the client's SettleMint
Dashboard. You can begin to
[create an organization](/building-with-settlemint/evm-chains-guide/create-an-application#how-to-create-an-organization-and-application-in-settlemint-platform)
and
[invite members to your client's organization](/building-with-settlemint/evm-chains-guide/create-an-application#invite-new-organization-members).
Clients already using SettleMint will receive an email confirming the transfer
of the client's organization. You will see their organization shown under the
`Clients & apps` list on the main dashboard.
file: ./content/docs/platform-components/account-billing/join-a-partner.mdx
meta: {
"title": "Join a partner",
"description": "Guide explaining how to join a SettleMint partner.",
"sidebar_position": 3
}
This guide explains how to transfer your organization to a SettleMint partner.
This feature helps clients of partners better manage their resources and
applications on SettleMint.
If you have not been in contact with a SettleMint partner, this guide may not be
useful for you. For general account and organization information, visit the
[Setup account and billing](/building-with-settlemint/setup-account-and-billing)
section.
## Understanding the client-partner model
As a client of a SettleMint Partner, you can transfer your organization to a
partner's organization.
Other clients that have been transferred to this partner **WILL NOT** have
access to any of your organization's information.
## What changes after joining a partner?
### Organization access
After joining a partner, all users that are in the partner's organization on
SettleMint will now have access to manage your applications. This includes any
sensitive information attached to them.
### Resource usage cost
Depending on your configuration, the resource usage costs of your organization
(e.g. blockchain networks and nodes) can be hidden. In this case, these costs are now
managed by the partner's organization.
### Billing
Depending on your configuration, the invoicing for your resource usage can
either be handled by the partner or directly with SettleMint. If handled by the
partner, the billing changes will take effect in the same billing cycle when the
transfer is made.
### Resource cost monitoring
The cost of resources your organization uses on SettleMint (e.g. blockchain
networks and nodes) will no longer be shown. This will be handled by the partner.
## How to join a partner
1. **Open Organizations & Apps**
By going to the homepage and selecting the organization dashboard, then the
manage menu of the organization, you will see the option to
`Join a partner`. Clicking on this opens the `Join a partner`
form to complete.
The `Join a Partner` option is only available to users with administrative
access to your organization.
2. **Add the Partner Email**
On this form, it is required that you enter a `Partner Contact email`. This
email is the email of a contact person at the partner that you are joining.
3. **Confirmation**
After completing the form, the partner will receive a transfer code via the
email you entered. They will then use this transfer code to add your
organization as [a client](/account-billing/add-a-client).
Once completed, a note confirming that your organization has been transferred
will be shown in the SettleMint platform for 24 hours.
file: ./content/docs/platform-components/blockchain-infrastructure/blockchain-nodes.mdx
meta: {
"title": "Blockchain nodes",
"description": "Blockchain node management in SettleMint Platform"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
A **blockchain node** is a **computer program** that maintains and verifies the
integrity of a distributed ledger. A **set of nodes** forms a **network**,
ensuring decentralized operations by synchronizing blockchain data and executing
transactions. Nodes collectively validate and store information, making
blockchains secure, transparent, and immutable.
**EVM-based blockchains** support both **public** and **permissioned** networks.
**Public EVM blockchains**, such as **Ethereum and Polygon**, are open to
anyone, allowing decentralized participation in transaction validation and smart
contract execution. These networks rely on economic incentives and
permissionless consensus mechanisms like **Proof of Stake (PoS)** to maintain
security and decentralization. On the other hand, **permissioned EVM
blockchains**, such as **Quorum and Hyperledger Besu**, restrict participation
to authorized entities. These networks prioritize privacy, regulatory
compliance, and efficiency, often using **Proof of Authority (PoA)** or **Quorum
Byzantine Fault Tolerance (QBFT)** to validate transactions. Whether public or
permissioned, all EVM-based blockchains rely on a distributed set of nodes to
maintain state synchronization and network security.
**Hyperledger Fabric** is a **permissioned blockchain framework** built for
enterprises requiring secure, controlled, and scalable blockchain solutions.
Unlike EVM-based blockchains, which may be **public or permissioned**, Fabric
networks are exclusively **permissioned**, ensuring that only pre-approved
entities can operate nodes, validate transactions, and access network data.
Fabric’s modular architecture allows organizations to define governance models,
identity management policies, and consensus mechanisms tailored to their
specific needs. Transactions in Fabric are endorsed by selected peers and then
ordered into blocks using mechanisms like **Raft** or **Byzantine Fault Tolerant
(BFT) ordering**, prioritizing efficiency and compliance over decentralization.
***
### **Types of nodes in evm-based blockchains**
* **Validator Nodes:** Actively participate in consensus by proposing and
finalizing new blocks. These nodes maintain ledger integrity by operating
under consensus mechanisms like **Proof of Authority (PoA)** or **Istanbul
Byzantine Fault Tolerance (IBFT)**.
* **Non-Validator Nodes (Observer Nodes):** Do not take part in consensus but
instead **synchronize blockchain data, respond to queries, and facilitate
smart contract execution**. They maintain updated copies of the blockchain
without finalizing blocks.
### **Among non-validator nodes:**
* **Full Nodes:** Store the entire blockchain history and verify transactions
without participating in block finalization.
* **Archive Nodes:** Extend full node functionality by retaining complete
historical blockchain states, making them essential for querying past
transactions.
### **Core components of an evm-based node**
* **Execution Layer (EVM):** Processes smart contracts and updates the
blockchain state.
* **Networking Layer (devp2p Protocol):** Facilitates peer-to-peer communication
for efficient data propagation.
* **Storage Layer:** Manages blockchain data, including account states, logs,
receipts, and the **Merkle Patricia Trie** for world-state management.
* **JSON-RPC Interface:** Provides an API for dApps, wallets, and external
  applications to interact with the blockchain (see the example below).
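As referenced above, here is a minimal illustration of calling a node's JSON-RPC interface with the standard `eth_blockNumber` method; the endpoint URL is a placeholder for whatever JSON-RPC endpoint your node exposes.
```typescript
// Query the latest block number from an EVM node over JSON-RPC.
// Replace the URL with your node's JSON-RPC endpoint.
const response = await fetch("https://your-node.example.com/rpc", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "eth_blockNumber",
    params: [],
    id: 1,
  }),
});

const { result } = await response.json();
console.log("Latest block:", parseInt(result, 16)); // result is a hex string, e.g. "0x10d4f"
```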
***
### **Types of nodes in hyperledger fabric**
* **Peer Nodes:** Maintain the ledger and execute smart contracts
(**Chaincode**).
* **Endorsing Peers:** Simulate transactions and provide endorsement
signatures, ensuring compliance with business rules.
* **Committing Peers:** Validate endorsed transactions and update the ledger.
* **Ordering Nodes (Orderers):** Handle transaction sequencing, package
transactions into blocks, and distribute them to peers. These nodes ensure
consensus across the network using mechanisms like **Raft** or **Byzantine
Fault Tolerant (BFT) ordering**.
* **Certificate Authority (CA) Nodes:** Issue and authenticate identities using
**Public Key Infrastructure (PKI)**, enforcing security policies and access
controls.
Unlike EVM-based blockchains, Fabric nodes **do not store a global state in
shared memory**. Instead, they use separate **key-value stores and dedicated
ledgers**, enhancing **privacy and scalability**.
## **Certificate authorities**
Certificate Authorities play a key role in the network because they dispense
X.509 certificates that can be used to identify components as belonging to an
organization. Certificates issued by CAs can also be used to sign transactions
to indicate that an organization endorses the transaction result - a
precondition of it being accepted onto the ledger. Let's examine these two
aspects of a CA in a little more detail.
Firstly, different components of the blockchain network use certificates to
identify themselves to each other as being from a particular organization. CAs
are so important that Hyperledger Fabric provides you with a built-in one
(called the Fabric-CA) to help you get going.
The mapping of certificates to member organizations is achieved via a structure
called a Membership Services Provider (MSP). An MSP defines an organization by
tying it to a root CA certificate, identifying the components and identities
that were created by that root CA. The channel configuration can then assign
certain rights and permissions to the organization through a policy.
Secondly, certificates issued by CAs are at the heart of the transaction
generation and validation process. Specifically, X.509 certificates are used in
client application transaction proposals and smart contract transaction
responses to digitally sign transactions. Subsequently the network nodes who
host copies of the ledger verify that transaction signatures are valid before
accepting transactions onto the ledger.
More information about the Fabric-CA can be found on the official Hyperledger
Fabric-CA documentation website.
## **Identities**
The different actors in a blockchain network include peers, orderers, client
applications, administrators and more. Each of these actors, an active element
inside or outside a network able to consume services, has a digital identity
encapsulated in an X.509 digital certificate. These identities really matter
because they determine the exact permissions over resources and access to
information that actors have in a blockchain network.
A digital identity furthermore has some additional attributes that Fabric uses
to determine permissions, and the union of an identity and its associated
attributes is given a special name: principal. Principals are just like
userIDs or groupIDs, but a little more flexible because they can include a wide
range of properties of an actor's identity, such as the actor's organization,
organizational unit, role or even the actor's specific identity. In short, a
principal bundles the identity properties that determine an actor's permissions.
For an identity to be verifiable, it must come from a trusted authority. A
membership service provider (MSP) is that trusted authority in Fabric. More
specifically, an MSP is a component that defines the rules that govern the valid
identities for this organization. The default MSP implementation in Fabric uses
X.509 certificates as identities, adopting a traditional Public Key
Infrastructure (PKI) hierarchical model.
More information about identities can be found on the official Hyperledger
Fabric documentation website.
SettleMint's platform uses the Fabric-CA to create a root CA. This CA acts as a
dual-headed CA, meaning that it is used for issuing both MSP and TLS
certificates. This CA must be used to issue all certificates for the network,
orderers, peers, administrators, and client applications.
## **Peers**
Peers are a fundamental element of the network because they host ledgers and
chaincode (which contain smart contracts) and are therefore one of the physical
points at which organizations that transact on a channel connect to the channel
(the other being an application). A peer can belong to as many channels as an
organization deems appropriate (depending on factors like the processing
limitations of the peer pod and data residency rules that exist in a particular
country).
More information about peers can be found on the official Hyperledger Fabric
documentation website.
## **Orderers**
An orderer (also known as an "ordering node") does transaction ordering, which
along with other orderer nodes forms an ordering service. Because Fabric's
design relies on deterministic consensus algorithms, any block validated by the
peer is guaranteed to be final and correct. Orderers also enforce basic access
control for channels, restricting who can read and write data to them, and who
can configure them.
The ordering service gathers endorsed transactions from applications and orders
them into transaction blocks, which are subsequently distributed to every peer
node in the channel. At each of these committing peers, transactions are
recorded and the local copy of the ledger updated appropriately. An ordering
service is unique to a particular channel, with the nodes servicing that channel
also known as a "consenter set". Even if a node (or group of nodes) services
multiple channels, each channel's ordering service is considered to be a
distinct instance of the ordering service.
More information about orderers and the ordering service can be found on the
official Hyperledger Fabric documentation website.
## **Application channels**
Application channels, or simply channels: SettleMint's platform creates by
default an application channel called "default-channel", which is used to
create an initial ledger for the network.
Users can create additional application channels using the binaries provided by
Hyperledger Fabric (install them locally and download the node's certificates)
or by creating a smart contract set, which is a configured web IDE with all the
necessary files and binaries to interact with your peer node and orderers.
### **Core components of a hyperledger fabric node**
* **Chaincode Layer:** Executes smart contract logic and enforces business
rules.
* **Ledger Layer:** Stores transactional data using **LevelDB** or **CouchDB**
as key-value state databases.
* **Communication Layer:** Manages gRPC-based interactions between nodes.
* **Membership Service Provider (MSP):** Governs identity verification, access
policies, and network governance.
## **SettleMint platform’s node manager**

The **SettleMint Node Manager** simplifies blockchain node deployment and
management, offering a **user-friendly interface** to configure, monitor, and
maintain nodes on different blockchain networks. It enables businesses and
developers to:
* **Deploy nodes in a few clicks** without complex configurations.
* **Monitor network health and performance** with real-time statistics.
* **Pause and resume nodes** to optimize resource usage and costs.
* **Interact with blockchain APIs** through JSON-RPC, WebSockets, and GraphQL.
* **Manage security and identity credentials** via cryptographic key management.
## Node types on SettleMint platform
For Ethereum, SettleMint provides support for the Geth client. This means that
when you add an Ethereum node, you get a Geth node by default. All nodes running
in SettleMint are configured as archive nodes, meaning they retain all previous
states of a given blockchain since its origin.
For Hyperledger Besu, SettleMint offers the choice between validator and
non-validator nodes. All nodes are configured as archive nodes.
For Hyperledger Fabric, SettleMint offers the choice between peer and orderer
nodes.
## 1. Hyperledger Besu node overview
The **Besu Node Dashboard** serves as a comprehensive interface for managing and
monitoring a **Hyperledger Besu node**. It provides insights into node status,
network connectivity, blockchain interactions, and system performance.
## Consensus mechanisms
A consensus mechanism defines the rules for the nodes in a blockchain network to
reach an agreement on the current state of the blockchain ledger.
Besu comes with several consensus mechanisms. As an Ethereum implementation it
supports Proof of Work (PoW), but the **Proof of Authority (PoA)** options are
more suitable for enterprise projects. These can be used when participants know
each other and there is a level of trust between them, e.g. in a permissioned
consortium network.
PoA is a light and practical consensus mechanism that gives a small and
designated number of blockchain actors the power to validate transactions within
the network and to create new blocks. This results in faster block times and a
much greater transaction throughput.
**SettleMint's Enterprise Ethereum networks always use QBFT**, the successor to
IBFT 2.0, which follows the same principles. In IBFT 2.0 networks, a group of
nodes is selected to form the pool of validators. These nodes are in charge of
determining whether a proposed block is suitable for addition to the chain. One
of these validator nodes is arbitrarily selected as the proposer. This single
proposer, having received messages from the pool of validators, decides what to
add to the chain and presents it as a proposed block to the other validators.
Only if a super-majority (66% or more) of the validators deems the block valid
is it added to the ledger. At the end of each consensus round, the validators
select a new proposer and the process is repeated. IBFT 2.0 has immediate
finality: there are no forks, and all valid blocks are included in the main
chain.
When you deploy a Hyperledger Besu blockchain network on SettleMint, it is
therefore Byzantine fault tolerant, provided enough validators are deployed (at
least four validator nodes are recommended to tolerate one faulty node).
More information on Hyperledger Besu can be found in the official
[Hyperledger Besu documentation](https://besu.hyperledger.org/en/stable/).
### 1.1 Details tab

The **Details** tab provides key deployment information, including:
* **Node Name & Status** – Identifies the instance and operational state.
* **Deployment Location** – Specifies where the node is hosted.
* **Blockchain Network & Protocol** – Indicates network participation and
protocol in use.
* **Node Type** – Indicates whether the node is a **validator** or a non-validator node.

### Node identity (evm chains)

The **Node Identity section** holds cryptographic keys and credentials that
establish the node’s uniqueness within the network.
* **Mnemonic** – A set of words that generate the private key.
* **Derivation Path** – Defines how keys are generated from the mnemonic.
* **Private Key** – The secret key used for signing transactions.
* **Public Key** – The public identifier associated with the node.
* **Blockchain Address** – The node’s address on the Ethereum network.
* **Enode URL** – A unique identifier for node-to-node communication.
The **node identity** plays a vital role in **establishing trust, securing
transactions, and enabling peer-to-peer connectivity**.
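As an illustration of how these fields relate, the following sketch uses the
Ethers library (v5) with well-known placeholder values (never reuse real
credentials) to derive the address and public key from a mnemonic and
derivation path:

```typescript
import { ethers } from "ethers";

// Placeholder values for illustration only, never hard-code real credentials.
const mnemonic =
  "test test test test test test test test test test test junk";
const derivationPath = "m/44'/60'/0'/0/0";

// The private key is derived from the mnemonic along the derivation path.
const wallet = ethers.Wallet.fromMnemonic(mnemonic, derivationPath);

console.log("Address:", wallet.address); // the blockchain address
console.log("Public key:", wallet.publicKey); // the public identifier
```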
***
### 1.2 Connect with node

This tab provides the **API endpoints** to interact with the node via different
protocols:
* **JSON-RPC Endpoint** – Used for blockchain interactions and queries.
* **WebSocket (JSON-WS) Endpoint** – Enables real-time event streaming.
* **GraphQL Endpoint** – Allows structured data querying.
For a software application to interact with a blockchain (e.g. by sending
transactions/data to the network, or even just by reading data), it must connect
to a node.
This section describes how to connect to your Besu node.
## Backend APIs
Once a node has been deployed on an EVM (Ethereum Virtual Machine) compatible
network, it can be accessed through different endpoints: JSON-RPC, JSON-WS, or
GraphQL. You can connect to your already deployed node using these three most
common endpoint types.
### JSON-RPC
JSON-RPC is a stateless, lightweight remote procedure call (RPC) protocol.
Primarily, the specification defines several data structures and the rules
around their processing. The JSON-RPC protocol version must be 2.0, and each
request carries a request id (used to match responses to requests) as well as a
method name and parameters.
There are different kinds of methods that can be used: ADMIN methods, DEBUG
methods, ETH methods etc. The entire list of methods that can be used can be
found in the [Besu official documentation](https://besu.hyperledger.org).
If you want to correctly connect to your node, you need to respect the right
structure for the request, which is always the same:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "methodName",
  "params": []
}
```
If you want to connect to a node deployed on the SettleMint platform, go to the
**Connect** tab on the **Node detail page** in the **Blockchain nodes** section
of your application. Select JSON-RPC or any other endpoint and click **Try it
out**. You will then be redirected to a new tab where you will be able to test
different methods as well as the related Curl command line.
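Any HTTP client can send such a request. As a minimal sketch, assuming
placeholder values for the node URL and auth token, the following fetches the
latest block number:

```typescript
// A minimal sketch, assuming placeholder values for the node URL and token.
const rpcUrl = "https://your-node-url";

async function getBlockNumber(): Promise<number> {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-auth-token": "token", // replace with your node's token
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_blockNumber",
      params: [],
    }),
  });
  const { result } = await response.json();
  return parseInt(result, 16); // the result is a hex string, e.g. "0x4b7"
}

getBlockNumber().then((n) => console.log("Latest block:", n));
```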
### JSON-WS
To make RPC requests over WebSockets, you can use wscat, which is by definition
a Node.js based command-line tool. First you will need to connect to your node's
WebSocket server using wscat, as follows:
`"wscat -c ws://"` . All the credentials are provided
in the **Connect** tab on the **Node detail page** in the **Blockchain nodes**
section of your application. After you have established a connection, the
terminal should display a ">" prompt. You will then be able to send individual
requests as a JSON data package, as above, for instance:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_blockNumber",
  "params": []
}
```
### GraphQL
GraphQL is a query language and server-side runtime for APIs. It is designed to
make APIs fast, flexible, and developer-friendly. We have a GraphQL interface
that can be used with many different queries. These queries can be tested out in
our GraphQL playground. You can also test out the different GraphQL queries with
cURL; such a request would look like this:
```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -H "x-auth-token: token" \
  --data '{ "query": "{syncing{startingBlock currentBlock highestBlock}}" }' \
  https://your-node-url.settlemint.com/graphql
```
If you want to connect to a node deployed on the SettleMint platform, go to the
**Connect** tab on the **Node detail page** in the **Blockchain nodes section**
of your application. Select GraphQL and click **Try it out**. This will bring
you to the GraphQL playground where you can use all the different queries.
## Javascript API
If you do not want to use the above endpoints to connect to your node, it is
possible to use plain JavaScript. Several convenience libraries exist within the
different ecosystems, which make connecting much easier. With these libraries,
developers can write one-line methods that, under the hood, initiate requests to
Ethereum or any other EVM-compatible network. Note that some libraries might be
available for Ethereum but not for the other networks.
These libraries are very helpful and abstract away much of the complexity of
interacting directly with your node. Most also provide useful and
straightforward functions such as converting ETH to Gwei, so that you can spend
less time dealing with decimal issues and more time on the functionality of your
underlying application.
One of the most commonly used libraries, Ethers, is extremely easy to use for
signing transactions, sending tokens etc. For example:
```typescript
import { ethers } from "ethers";

// If you don't specify a URL, Ethers connects to the default
// (i.e. http://localhost:8545)
const provider = new ethers.providers.JsonRpcProvider();

// The provider also allows signing transactions to
// send ether and pay to change state within the blockchain.
// For this, we need the account signer...
const signer = provider.getSigner();
```
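Building on that provider, a few one-liners illustrate the convenience functions
mentioned above (a sketch, assuming a node reachable at the default URL):

```typescript
import { ethers } from "ethers";

// Connects to the default URL (http://localhost:8545) when none is given.
const provider = new ethers.providers.JsonRpcProvider();

async function main() {
  // One-line request for the latest block number.
  console.log("Block:", await provider.getBlockNumber());

  // Built-in unit conversion, so you spend less time on decimal issues.
  const balance = await provider.getBalance(
    "0x0000000000000000000000000000000000000000"
  );
  console.log("Balance:", ethers.utils.formatEther(balance), "ETH");
}

main();
```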
***
### 1.3 JSON-RPC
The **JSON-RPC** interface enables blockchain queries, transactions, and
debugging.

#### **Key features:**
* Querying blockchain information (blocks, transactions, accounts).
* Sending transactions and interacting with smart contracts.
* Debugging and tracing blockchain activities.
#### **Json-rpc methods (grouped by category):**
##### **Transaction & gas management**
| Method | Description |
| -------------------------- | ------------------------------------------ |
| `eth_gasPrice` | Retrieves the current gas price. |
| `eth_maxPriorityFeePerGas` | Returns the max priority fee per gas unit. |
| `eth_feeHistory` | Fetches historical gas fees. |
| `eth_estimateGas` | Estimates gas required for a transaction. |
| `eth_sendRawTransaction` | Sends a signed transaction. |
| `eth_sendTransaction` | Sends a new transaction. |
##### **Transaction pool**
| Method | Description |
| -------------------------------- | -------------------------------------------- |
| `txpool_content` | Retrieves pending and queued transactions. |
| `txpool_status` | Returns transaction pool status. |
| `txpool_besuPendingTransactions` | Fetches Besu pending transactions. |
| `txpool_besuStatistics` | Provides statistics on the transaction pool. |
##### **Blockchain data**
| Method | Description |
| --------------------------- | ------------------------------------ |
| `eth_blockNumber` | Returns the latest block number. |
| `eth_getBlockByHash` | Retrieves block details by hash. |
| `eth_getBlockByNumber` | Retrieves block details by number. |
| `eth_getTransactionByHash` | Fetches transaction details by hash. |
| `eth_getTransactionReceipt` | Retrieves transaction receipt. |
| `eth_getLogs` | Fetches logs for a given filter. |
##### **Mining & network**
| Method | Description |
| --------------------- | --------------------------------------- |
| `eth_mining` | Checks if the node is currently mining. |
| `eth_hashrate` | Returns the mining hashrate. |
| `eth_protocolVersion` | Retrieves the protocol version. |
| `eth_syncing` | Checks if the node is syncing. |
| `net_peerCount` | Returns the number of peers connected. |
##### **Debug & trace**
| Method | Description |
| ------------------------- | -------------------------------------------------- |
| `debug_traceTransaction` | Returns a full trace of a transaction. |
| `debug_traceBlock` | Provides traces for an entire block. |
| `debug_getRawTransaction` | Fetches raw transaction data. |
| `debug_storageRangeAt` | Retrieves a storage range from a specific account. |
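Most of these methods take positional parameters. As a sketch (placeholder URL
and token), an `eth_getBlockByNumber` call passes a block tag and a flag for
full transaction objects:

```typescript
// A sketch, assuming placeholder connection values.
async function latestBlock() {
  const response = await fetch("https://your-node-url", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-auth-token": "token" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBlockByNumber",
      params: ["latest", false], // block tag, and whether to include full transactions
    }),
  });
  console.log(await response.json());
}

latestBlock();
```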
***
### 1.4 GraphQL
GraphQL provides an efficient querying method for blockchain data.

#### **Benefits:**
* Fetch specific blockchain details, such as transaction history.
* Optimize data retrieval for dApps and front-end applications.
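For instance, a single query can fetch selected fields of the latest block in
one round trip (a sketch; the available fields follow the Besu GraphQL schema):

```graphql
{
  block {
    number
    hash
    transactionCount
  }
}
```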
***
### 1.5 Resources
Monitors **CPU, memory, disk usage, and network connections** to optimize
performance.

#### **Capabilities:**
* Prevent system overload by tracking hardware usage.
* Ensure proper network connectivity and peer synchronization.
***
### 1.6 Logs

Real-time logs provide visibility into **node operations**, including:
* **Peer connections and network status.**
* **Transaction processing and block synchronization.**
* **Debugging and security auditing.**
***
## 2. Hyperledger Fabric node overview
The **Hyperledger Fabric Node Dashboard** is designed to monitor and manage
Fabric nodes, focusing on consensus, network identity, and operational metrics.
### 2.1 Details tab

The **Details** tab provides:
* **Node Name & Deployment Location** – Identifies the instance.
* **Blockchain Network & Protocol** – Hyperledger Fabric-specific details.
* **Node Type** – Defines the node as an **Orderer** or **Peer**.
* **Version** – Displays the running Hyperledger Fabric version.

***
### 2.2 Node stats
Provides critical **consensus and participation metrics**, including:

* **Consensus Leader Status** – Indicates if the node is leading consensus.
* **Consensus Relation** – Defines the node’s role (e.g., **Consenter**).
* **Participation Status** – Specifies whether the node is actively
participating.
* **Ledger Height** – Displays the blockchain height.
* **Proposal Metrics** – Tracks transaction proposals received per time
interval.
***
### 2.3 Node identity & security

This section contains **cryptographic details** for network security:
* **Administrator’s TLS Certificates & Keys** – Ensures secure communication.
* **Node’s TLS Certificates & Public Keys** – Verifies node identity.
* **Private Keys & Certificates** – Used for authentication and encryption.
#### Capabilities:
* **Secure Network Communication** – TLS encryption for data integrity.
* **Node Authentication** – Ensures trusted interactions within Fabric.
* **Certificate Management** – Maintains cryptographic security.
***
### 2.4 Resources tab
Similar to Besu, this tab provides:
* **CPU & Memory Monitoring** – Optimizes resource allocation.
* **Storage Usage** – Ensures sufficient disk space for operations.
* **Peer & Orderer Connectivity** – Maintains network stability.
***
### 2.5 Logs tab

Displays **Fabric-specific logs** related to:
* **Orderer operations & consensus messages.**
* **Transaction validation & endorsement events.**
* **Network health & debugging information.**
***
## Node connections
For an application to interact with a blockchain (e.g. by sending
transactions/data to the network, or even just by reading data), it must connect
to a node.
To connect to a node, you use an endpoint, which is a URL that enables an API to
gain access to the node. You interact with the node by sending requests to, and
receiving responses from it via an API.
You can find the endpoints on the node detail page, in the Connect tab, together
with node interaction tools with playgrounds for real-time tryouts (e.g.
JSON-RPC, GraphQL, etc.).
## Connection details
### Access Node
Navigate to your node in the application
### View Connection Info
Open the **Connect** tab to find:
* Endpoint URLs
* Authentication tokens
* Connection examples
```bash
# Get node connection details
settlemint platform read node --show-connection
```
```typescript
const getConnectionDetails = async () => {
  const node = await client.node.read("node-name");
  console.log('Connection details:', node.connection);
};
```
## Connection examples
```javascript
const Web3 = require('web3');
const web3 = new Web3('https://your-node-url/token');
```
```javascript
const { ethers } = require('ethers');
const provider = new ethers.JsonRpcProvider('https://your-node-url/token');
```

```bash
curl -X POST https://your-node-url \
  -H "Content-Type: application/json" \
  -H "x-auth-token: token" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
Replace `your-node-url` and `token` with the actual values from your node's
connection details.
file: ./content/docs/platform-components/blockchain-infrastructure/consortium-manager.mdx
meta: {
"title": "Consortium manager",
"description": "Guide to using blockchain explorers in SettleMint"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
Permissioned networks, although started by a single organization, allow multiple
organizations with a shared business goal to come together and form the
consortium. The different organizations transacting with each other in a
permissioned network are called **network participants**. The organization that
created the network, i.e. the owner, can invite network participants and set
specific permissions for the organizations joining the network.

Depending on how you organize your work, you can grow the network with new
participants in two ways:
* **Invite an organization** to join the SettleMint platform so they can join
the network (e.g. if the organization itself is responsible for adding and
managing their nodes)
* **Add an organization** to the network yourself (e.g. if the organization is
your client and you are managing the project for them)
## Invite an organization
Navigate to the relevant **application**, and click **Blockchain network** in
the left navigation.

Open the **Participants** tab and click **Invite organization**. This opens a
form.
### Enter contact information
Enter the **email address** of the contact person from the organization you want to invite.
### Set permissions
Set the **permissions** for this new network participant. You can change these
permissions at any time.
### Add optional message
Optionally, you can add a **message** to be included in the invitation email.
### Confirm invitation
Click **Confirm** to go to the list of organizations participating in the
network. Your email invitation has now been sent, and you see in the list that
it is pending.
The invitation email includes a code that the recipient can use to get access to
the network.
## Add an organization
Navigate to the relevant **application**, and click **Blockchain network** in
the left navigation.
Open the **Participants** tab and click **Add organization**. This opens a form.
### Define organization
Define the **organization**. You can select an organization you already have in place, or create a new one and choose a name for it. Invoices are generated per organization, so creating a new organization can be convenient when you need separate billing.
### Enter billing information
Enter **billing information** if you created a new organization. SettleMint
creates a billing account for this organization. You will be billed monthly for
the resources you use within this organization.
### Define application
Define the **application**. You can select an application you already created,
or create a new one and choose a name for this new application.
### Set permissions
Set the **permissions** for this new network participant. You can change these
permissions at any time.
### Confirm addition
Click **Confirm** to go to the list of organizations participating in the
network. You see the new participant added to the list.
## Manage a network participant
Navigate to the relevant **application**, and click **Blockchain network** in
the left navigation.
Open the **Participants** tab and click **Manage participant** to see available
actions. You can only perform these actions if you have administrator rights.
**Change permissions** - Changes the network participant's permissions with
immediate effect.
All operations require appropriate permissions in your workspace.
# Join a network by invitation
In a permissioned blockchain network (often called a consortium network),
participants need to be invited by the network's owner to join the network.

You need an invitation code from the network owner to join a permissioned
network.
file: ./content/docs/platform-components/blockchain-infrastructure/insights.mdx
meta: {
"title": "Insights",
"description": "Guide to using blockchain explorers in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
To view and inspect transactions in your blockchain application, SettleMint
provides insightful dashboards via integrated blockchain explorers:
* **Blockscout** - For EVM-compatible networks (Besu, Polygon Edge)
* **Hyperledger Explorer** - For Fabric networks
## Add blockchain explorer
Navigate to the **application** where you want to add a blockchain explorer. Click **Insights** in the left navigation, and then click **Add Insights**. This opens a form.
Follow these steps:
1. Select **Blockchain Explorer**
2. Select the target **blockchain node** and click **Continue**
3. Enter a **name** for your explorer instance
4. Configure deployment settings (provider, region, size)
5. Click **Confirm** to add the explorer
First ensure you're authenticated:
```bash
settlemint login
```
Create blockchain explorer:
```bash
# Create blockchain explorer
settlemint platform create insights blockscout
# Get information about the command and all available options
settlemint platform create insights blockscout --help
```
For a full example of how to create a blockchain explorer using the SDK, see the [Blockscout SDK API Reference](https://www.npmjs.com/package/@settlemint/sdk-blockscout#api-reference).
## Manage explorer
Navigate to your explorer and click **Manage insights** to:
* View explorer details and status
* Monitor health status
* Access the explorer interface
* Update configurations
Current status values:
* `DEPLOYING` - Initial deployment in progress
* `COMPLETED` - Running normally
* `FAILED` - Deployment or operation failed
* `PAUSED` - Explorer is paused
* `RESTARTING` - Explorer is restarting
Health status indicators:
* `HEALTHY` - Operating normally
* `HAS_INDEXING_BACKLOG` - Processing backlog
* `NOT_HA` - High availability issue
* `NO_PEERS` - Network connectivity issue
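Where automation is needed, these status values can be polled with the SDK
client used in the examples below; a minimal sketch, assuming an arbitrary
5-second polling interval:

```typescript
// A minimal sketch using the SDK client from the examples below.
const waitForExplorer = async (name: string) => {
  for (;;) {
    const explorer = await client.insights.read(name);
    if (explorer.status === "COMPLETED") return explorer;
    if (explorer.status === "FAILED") throw new Error("Deployment failed");
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // wait 5s
  }
};
```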
```bash
# List explorers
settlemint platform list services --type insights
# Restart explorer
settlemint platform restart insights blockscout
```
```typescript
// List explorers
const listExplorers = async () => {
  const explorers = await client.insights.list("your-app");
  console.log('Explorers:', explorers);
};

// Get explorer details
const getExplorer = async () => {
  const explorer = await client.insights.read("explorer-unique-name");
  console.log('Explorer details:', explorer);
};

// Restart explorer
const restartExplorer = async () => {
  await client.insights.restart("explorer-unique-name");
};
```
## Using the explorer
When the blockchain explorer is deployed and running successfully, you can:
1. Access the web interface through the **Interface tab**
2. View in fullscreen mode for better visibility
3. Inspect blocks, transactions, addresses and balances
Key features:
* View latest blocks and transactions
* Search by block number, transaction hash, or address
* Inspect transaction details and status
* View account balances and token transfers
* Monitor smart contract interactions

### Transaction details
Click a Transaction hash to see detailed information including:
* Gas usage and fees
* Input data and events
* Status and confirmations
* Related addresses

### Address details
Click an Account address to view:
* Balance and token holdings
* Transaction history
* Contract interactions
* Analytics and graphs

All operations require appropriate permissions in your workspace.
# Blockscout explorer for EVM chains
Blockscout is an open-source blockchain explorer optimized for Ethereum Virtual
Machine (EVM)-compatible networks. It provides a comprehensive interface for
querying and analyzing blockchain data, including transactions, blocks, token
transfers, addresses, and smart contracts. Designed for real-time visibility,
Blockscout delivers structured access to on-chain operations, serving as a
critical tool for developers, auditors, and system architects. Its extensible
architecture and detailed data presentation facilitate both high-level
monitoring and granular inspection of EVM-based ecosystems.
Transactions refer to standard on-chain actions initiated by externally owned
accounts (EOAs), like sending tokens, deploying contracts, or interacting with
smart contracts. These are recorded directly on the blockchain with their own
transaction hash.

Internal Transactions (also called “message calls”) are operations triggered
within smart contracts, often as a result of a transaction. For example, a
contract calling another contract or transferring ETH/token internally. These
are not standalone transactions but are captured through execution traces and
don’t appear directly on-chain.

Blockscout enables precise interrogation of blockchain state through its block
and transaction monitoring capabilities. Blocks are indexed by unique hashes or
sequential numbers, exposing attributes such as block height, timestamp, gas
consumption, and transaction volume. Transaction data includes sender and
recipient addresses, transferred values, gas costs, and execution status (e.g.,
success, failure, pending). For smart contract interactions, Blockscout parses
input data to extract function calls and parameters, providing developers with
actionable insights for debugging and validation workflows.

The explorer supports detailed address inspection for both externally owned
accounts (EOAs) and smart contracts. Queryable data encompasses current
balances, transaction histories, and token associations. For verified smart
contracts, Blockscout exposes source code and the Application Binary Interface
(ABI), enabling direct interaction via the platform. This functionality supports
use cases such as wallet tracking, address investigation, and contract
deployment verification, making it an indispensable resource for EVM developers
and security analysts.
## Api overview
Blockscout provides multiple API interfaces to interact with blockchain data,
including REST API, JSON RPC & ETH Compatible RPC Endpoints, and GraphQL. These
APIs are designed for ease of use, supporting developers transitioning from
other explorers like Etherscan to Blockscout, as well as those requiring general
API and data support.

### Api access
```
REST API URL: /api
JSON RPC URL: /api/eth-rpc
GraphQL URL: /graphiql
```
## Rest api endpoints
The REST API supports both GET and POST requests and is structured around
modules and actions. The following modules are supported: Account, Logs, Token,
Stats, Block, Contract, and Transaction.
### Search
```http
GET /search # Perform a general search
GET /search/check-redirect # Search redirect
```
### Transactions
```http
GET /transactions # Retrieve transactions
GET /transactions/{transaction_hash} # Get transaction details
GET /transactions/{transaction_hash}/token-transfers # Get token transfers for a transaction
GET /transactions/{transaction_hash}/internal-transactions # Get internal transactions
GET /transactions/{transaction_hash}/logs # Get transaction logs
GET /transactions/{transaction_hash}/raw-trace # Get transaction raw trace
GET /transactions/{transaction_hash}/state-changes # Get transaction state changes
GET /transactions/{transaction_hash}/summary # Get a human-readable transaction summary
```
### Blocks
```http
GET /blocks # Retrieve blocks
GET /blocks/{block_number_or_hash} # Get block details
GET /blocks/{block_number_or_hash}/transactions # Get transactions in a block
GET /blocks/{block_number_or_hash}/withdrawals # Get block withdrawals
```
### Addresses
```http
GET /addresses # Get native coin holders list
GET /addresses/{address_hash} # Get address details
GET /addresses/{address_hash}/transactions # Get transactions related to an address
GET /addresses/{address_hash}/token-transfers # Get token transfers
GET /addresses/{address_hash}/internal-transactions # Get internal transactions
GET /addresses/{address_hash}/logs # Get logs related to an address
GET /addresses/{address_hash}/blocks-validated # Get blocks validated by the address
GET /addresses/{address_hash}/token-balances # Get all token balances
GET /addresses/{address_hash}/tokens # Get token balances with filtering and pagination
GET /addresses/{address_hash}/coin-balance-history # Get coin balance history
GET /addresses/{address_hash}/coin-balance-history-by-day # Get balance history by day
GET /addresses/{address_hash}/withdrawals # Get withdrawals related to an address
GET /addresses/{address_hash}/nft # Get list of NFTs owned by an address
GET /addresses/{address_hash}/nft/collections # Get NFTs grouped by collection
```
### Tokens
```http
GET /tokens # Get a list of tokens
GET /tokens/{address_hash} # Get token details
GET /tokens/{address_hash}/transfers # Get token transfers
GET /tokens/{address_hash}/holders # Get token holders
GET /tokens/{address_hash}/counters # Get token statistics
GET /tokens/{address_hash}/instances # Get NFT instances
GET /tokens/{address_hash}/instances/{id} # Get NFT instance by ID
GET /tokens/{address_hash}/instances/{id}/transfers # Get NFT instance transfers
GET /tokens/{address_hash}/instances/{id}/holders # Get NFT instance holders
GET /tokens/{address_hash}/instances/{id}/transfers-count # Get NFT transfer count
PATCH /tokens/{address_hash}/instances/{id}/refetch-metadata # Re-fetch NFT metadata
```
### Smart contracts
```http
GET /smart-contracts # Get verified smart contracts
GET /smart-contracts/{address_hash} # Get smart contract details
GET /smart-contracts/{address_hash}/methods-read # Get read methods of a smart contract
GET /smart-contracts/{address_hash}/methods-write # Get write methods of a smart contract
POST /smart-contracts/{address_hash}/query-read-method # Query a smart contract's read method
```
### Statistics & charts
```http
GET /stats # Get statistics counters
GET /stats/charts/transactions # Get transactions chart
GET /stats/charts/market # Get market chart
```
### Other endpoints
```http
GET /config/json-rpc-url # Get JSON-RPC URL
GET /withdrawals # Get withdrawals
GET /proxy/account-abstraction/status # Get account abstraction indexing status
```
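As a usage sketch, the paths above are appended to your explorer's REST API URL
(see the API access section). The example below, with a placeholder base URL and
transaction hash, fetches a human-readable transaction summary:

```typescript
// A sketch, assuming a placeholder explorer base URL and transaction hash.
const base = "https://your-explorer-url/api";
const txHash = "0x..."; // replace with a real transaction hash

async function getTransactionSummary() {
  const response = await fetch(`${base}/transactions/${txHash}/summary`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}

getTransactionSummary().then(console.log);
```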
### Schemas
Blockscout provides multiple schemas representing different blockchain data
structures, including: Block, Transaction, TokenTransfer, InternalTransaction,
SmartContract, NFTInstance, TokenInfo, and TransactionSummary.
## Json rpc & eth compatible rpc endpoints
In addition to custom RPC endpoints, the Blockscout ETH RPC API supports most
commonly used methods in the exact format specified for Ethereum nodes, as per
the Ethereum JSON-RPC Specification. These methods are provided for convenience
and are most suitable as a fallback option alongside your primary JSON-RPC
provider. For other use cases, REST or custom RPC methods are recommended.
### Supported methods
```text
eth_blockNumber # Returns the latest block number in the chain in hexadecimal format
eth_getBalance # Returns the balance of a given address in wei
eth_getLogs # Returns an array of logs matching a specified filter object
eth_gasPrice # Returns the current gas price
eth_getTransactionByHash # Retrieves a transaction by its hash
eth_getTransactionReceipt # Retrieves the receipt of a transaction
eth_chainId # Returns the chain ID
eth_maxPriorityFeePerGas # Returns the maximum priority fee per gas
eth_getTransactionCount # Returns the number of transactions sent from an address
eth_getCode # Returns the code at a given address
eth_getStorageAt # Returns the value from a storage position at a given address
eth_estimateGas # Estimates the gas needed for a transaction
eth_getBlockByNumber # Retrieves a block by number
eth_getBlockByHash # Retrieves a block by hash
eth_sendRawTransaction # Sends a raw transaction
eth_call # Executes a new message call immediately without creating a transaction
```
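These methods accept standard Ethereum JSON-RPC payloads POSTed to the
`/api/eth-rpc` endpoint; for example (placeholder explorer URL):

```typescript
// A sketch, assuming a placeholder explorer URL.
async function getBalance() {
  const response = await fetch("https://your-explorer-url/api/eth-rpc", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance",
      params: ["0x0000000000000000000000000000000000000000", "latest"],
    }),
  });
  console.log(await response.json()); // balance in wei, hex-encoded
}

getBalance();
```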
## Graphql in blockscout
The Graph is a decentralized protocol for indexing and querying blockchain data,
making it easier to access and use. It acts like a librarian for blockchain
data, organizing it for quick retrieval. It decentralizes the reading layer,
ensuring reliability and security by avoiding single points of failure.
Subgraphs are custom databases within The Graph that define how to collect,
organize, and store data from blockchain smart contracts. They make data
queryable via GraphQL, simplifying access to complex information like NFT
transfer histories. Blockscout integrates with The Graph to enhance its data
querying capabilities. Subgraphs can be created to index data from EVM chains
supported by Blockscout, such as Ethereum or Sepolia. Once deployed to The
Graph's network, which includes over 450 indexers worldwide, this data can be
queried efficiently using GraphQL. This integration allows developers to combine
Blockscout's detailed blockchain exploration with The Graph's powerful indexing
and querying, enabling more advanced dApp development.

### What is graphql?
GraphQL is an open-source data query and manipulation language for APIs, and a
runtime for fulfilling queries with existing data. It provides an efficient,
powerful, and flexible approach to developing web APIs. It allows clients to
define the structure of the data required, and exactly the same structure of the
data is returned from the server, preventing excessively large amounts of data
from being returned.
#### Key concepts of graphql
* **Hierarchical:** Queries are structured hierarchically, allowing nested data
retrieval.
* **Strongly Typed:** Schemas define types for all data, ensuring predictable
responses.
* **Client-Specified Queries:** Clients can request exactly the data they need,
reducing over-fetching or under-fetching.
#### Advantages of graphql
* **Declarative Integration on Client:** Clients specify what data/operations
they need.
* **Standard Way to Expose Data and Operations:** Provides a consistent API
structure.
* **Support for Real-Time Data:** Enables real-time updates with subscriptions.
### Query types
There are three main query types in a GraphQL schema:
1. **Query:** Fetch data, such as retrieving posts or transactions.
2. **Mutation:** Change data, such as updating a post or modifying a record.
3. **Subscription:** Subscribe to real-time data, such as new posts in a
category.
### Access graphql api
To access Blockscout's GraphQL interface, use **GraphiQL**, an in-browser IDE
for exploring GraphQL, which is built into Blockscout. From the APIs dropdown
menu, choose GraphQL. Alternatively, you can use your favorite HTTP client to
send requests to the GraphQL endpoint.
#### Graphiql interface
The GraphiQL interface provides a user-friendly environment to explore and test
GraphQL queries. It includes a documentation explorer (Docs section) that
provides schema details, such as root types, and a query editor to write and
execute queries.
### Queries
Blockscout's GraphQL API provides queries and a subscription, viewable in the
GraphQL interface under the Docs menu. Example queries include:
```graphql
address(hash: AddressHash!): Address # Gets an address by hash
addresses(hashes: [AddressHash!]): [Address] # Gets addresses by hashes
block(number: Int!): Block # Gets a block by number
transaction(hash: FullHash!): Transaction # Gets a transaction by hash
```
#### Example query to retrieve transactions for a specific address
```graphql
{
  address(hash: "0xaddressHash") {
    transactions(first: 10) {
      edges {
        node {
          blockNumber
          createdContractAddressHash
          fromAddressHash
          gas
          hash
        }
      }
    }
  }
}
```
## Hyperledger fabric explorer
Hyperledger Explorer is a web-based tool designed to provide a **comprehensive
and real-time** view of blockchain operations within **Hyperledger Fabric**
networks. It enables users to monitor and analyze blockchain activities,
including **blocks, transactions, and chaincodes**, while maintaining privacy
and security. With its feature-rich dashboard, Hyperledger Explorer allows users
to **navigate through blocks, transactions, peers, and channels** with ease. The
tool provides advanced search and filtering capabilities, real-time
notifications for new blocks, and interactive metrics for visualizing blockchain
trends. By offering deep insights into ledger data and enabling efficient
network management, Hyperledger Explorer becomes an essential solution for
organizations leveraging **Hyperledger Fabric**.

* **Real-time Monitoring**: Displays network activity as it happens, providing
immediate visibility into new blocks and transactions.
* **Comprehensive Dashboard**: A central hub for monitoring network health,
including metrics such as the number of blocks, transactions, nodes, and
chaincodes.
* **Detailed Block & Transaction Views**:
* Block list with metadata such as block hash, transaction count, and
timestamps.
* Transaction explorer for tracking transaction details, types, and associated
metadata.
* **Search & Filtering**:
* Filter transactions and blocks by **date range, channel, or organization**.
* Advanced sorting capabilities for customized data views.
* **Channel & Chaincode Management**:
* View and manage available channels.
* Display installed chaincodes with versioning details.
* **Interactive Metrics & Analytics**:
* Graphical visualizations of blockchain activity.
* Hover-based insights for precise data analysis.
## Dashboard overview
The **Dashboard** serves as the main interface, providing an overview of the
blockchain network. It includes various panels such as **Peer Lists, Network
Metrics, and Recent Transactions by Organization**. Users can dynamically switch
channels via a dropdown to customize their view. Additionally, a **Latest Blocks
Notification Panel** presents key block details, including:
* Block number
* Channel name
* Data hash
* Transaction count
Each block link redirects to an in-depth **Block Details** view, offering
insights into timestamps, hashes, and transaction summaries.
## Network & channel management
The **Network View** presents details on configured properties for each channel.
Users can analyze peer statuses, their roles, and network configurations,
including **ledger height and Membership Service Provider (MSP) identity**.
The **Channel List** section provides an overview of available channels,
enabling users to navigate different segments of the blockchain network
effortlessly.
## Exploring blocks & transactions
Hyperledger Explorer provides powerful tools for tracking blockchain activities:
* **Block List**: A sortable, filterable table displaying block metadata like
block hash, transaction count, and creation timestamps.
* **Transaction List**: Supports up to **100 rows per page** with pagination and
allows users to drill down into transaction specifics.
* **JSON Transaction Views**: Enables structured previews with fold/unfold
options for easy data inspection.
## Chaincodes & smart contracts
The **Chaincode List** presents installed chaincodes across the network,
allowing filtering and sorting by:
* Chaincode name
* Version
* Deployment status
* Associated transactions
This section helps users manage smart contracts efficiently and track changes
over time.
## Analytics & metrics
A dedicated **Metrics Panel** delivers real-time statistics, such as:
* Number of blocks and transactions processed per hour or minute
* Network activity trends over time
* Interactive charts for monitoring blockchain operations
These visual analytics tools enhance user insights and ensure efficient
blockchain monitoring.
file: ./content/docs/platform-components/blockchain-infrastructure/load-balancer.mdx
meta: {
"title": "Load balancer",
"description": "Guide to adding load balancers in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
A **blockchain load balancer** is a **networking component** designed to
**distribute traffic efficiently across multiple blockchain nodes** to optimize
performance, reliability, and scalability. It ensures that transaction requests,
queries, and smart contract interactions are handled efficiently, preventing
overloading on any single node.

Load balancing is particularly important in blockchain environments where
**multiple nodes serve API requests** for wallets, dApps, and enterprise
applications. By intelligently routing requests, a blockchain load balancer
enhances **availability, fault tolerance, and transaction throughput**.

A blockchain load balancer operates as an **intermediary layer** between
blockchain clients (wallets, dApps, APIs) and backend blockchain nodes. It
ensures that requests are distributed efficiently based on predefined rules,
improving performance and resilience.
### **Key steps in load balancing:**
1. **Incoming Request Handling** – The load balancer receives API requests from
users, smart contracts, or external applications.
2. **Node Health Check** – It continuously monitors node health, availability,
and performance to route traffic efficiently.
3. **Request Distribution** – Transactions and queries are forwarded to the most
appropriate blockchain node using load-balancing strategies.
4. **Response Management** – The selected node processes the request and returns
the response to the client.
5. **Failover Handling** – If a node becomes unresponsive, the load balancer
automatically reroutes requests to healthy nodes.
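To make the flow concrete, here is a conceptual TypeScript sketch of round-robin
distribution with failover, the pattern described in steps 3 and 5; it is an
illustration of the technique, not SettleMint's implementation:

```typescript
// Conceptual sketch: round-robin request distribution with failover.
// Node URLs are placeholders, not SettleMint endpoints.
const nodes = [
  "https://node-1.example.com",
  "https://node-2.example.com",
  "https://node-3.example.com",
];

let cursor = 0;

async function forward(payload: unknown): Promise<unknown> {
  // Try each node at most once, starting from the round-robin cursor.
  for (let attempt = 0; attempt < nodes.length; attempt++) {
    const node = nodes[(cursor + attempt) % nodes.length];
    try {
      const response = await fetch(node, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      cursor = (cursor + attempt + 1) % nodes.length; // advance for next call
      return await response.json();
    } catch {
      // Unhealthy node: fall through to the next one (failover).
    }
  }
  throw new Error("All nodes are unavailable");
}
```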
***
### Settlemint blockchain platform: load balancer
***

SettleMint provides an **integrated blockchain load balancer** to optimize the
performance of blockchain networks deployed on its platform. The load balancer
ensures **high availability, fault tolerance, and scalable performance** by
**distributing traffic** across multiple blockchain nodes.

SettleMint employs a multi-layered, application-aware load-balancing strategy to
ensure optimal performance and network resilience. Our approach dynamically
adapts to varying workloads and network conditions, ensuring seamless
transaction processing and high availability. By leveraging a combination of
intelligent request distribution and fault-tolerant mechanisms, we optimize
efficiency while maintaining a robust and scalable blockchain environment. This
feature is particularly beneficial for applications that require **high
throughput, low latency, and continuous uptime** in blockchain transactions and
queries.
The **SettleMint blockchain load balancer** intelligently routes transaction and
API requests across active nodes based on **network health, workload
distribution, and failover mechanisms**.
### **Load balancer process in settlemint:**
1. **Traffic Reception** – The load balancer receives requests from users, smart
contracts, and external systems.
2. **Node Monitoring & Health Check** – It continuously checks node
availability, latency, and processing load.
3. **Intelligent Routing** – Requests are distributed based on real-time node
performance using strategies like **round-robin, least connections, or
weighted routing**.
4. **Failover Protection** – If a node goes offline, the load balancer
automatically redirects traffic to healthy nodes, ensuring uninterrupted
blockchain operations.
5. **Response Handling** – The processed response is returned to the client from
the assigned node.
***
## Features of settlemint’s blockchain load balancer
* **Auto-Scaling Support** – Dynamically adds or removes nodes to optimize
resource usage.
* **High Availability** – Ensures continuous uptime by redirecting requests to
healthy nodes.
* **Performance Optimization** – Reduces network congestion by balancing
workloads effectively.
* **Multi-Protocol Support** – Compatible with **Ethereum JSON-RPC, Hyperledger
Fabric APIs, and custom blockchain endpoints**.
* **Security & Rate Limiting** – Protects against **DDoS attacks and excessive
API calls**.
***
## Deployment options
SettleMint allows **flexible deployment** of its blockchain load balancer based
on the specific needs of an organization:
### **1. Cloud-based load balancer**
* Deployed on cloud infrastructure (AWS, Azure, GCP) with **auto-scaling
capabilities**.
* Ideal for **enterprise-grade blockchain solutions**.
### **2. On-premises load balancer**
* Runs within a **private network** for **enhanced security and regulatory
compliance**.
* Suitable for **financial, government, and enterprise applications**.
### **3. Hybrid load balancer**
* A combination of **cloud and on-prem** nodes to balance traffic dynamically.
* Enables **cost efficiency and scalability** while ensuring **data privacy**.
***
## Security considerations
When implementing a **blockchain load balancer**, security must be a top
priority. SettleMint incorporates the following best practices:
* **API Rate Limiting:** Prevents misuse by limiting excessive transaction
requests.
* **Node Authentication & Access Control:** Ensures only authorized users can
interact with nodes.
* **DDoS Protection:** Detects and mitigates distributed denial-of-service
(DDoS) attacks.
* **Encrypted Communications:** Uses **TLS encryption** for secure node-to-node
communication.
***
file: ./content/docs/platform-components/blockchain-infrastructure/network-manager.mdx
meta: {
"title": "Network manager",
"description": "Blockchain network manager offers a user-friendly interface for configuring private permissioned networks, connecting to Layer 1 and Layer 2 public networks, and joining existing external networks."
}
The Blockchain Network Manager in SettleMint simplifies the setup and management of blockchain infrastructure across any environment. It enables users to deploy, configure, and monitor blockchain nodes on both public and permissioned networks without requiring manual DevOps effort. With built-in capabilities for node scaling, load balancing, and multi-region deployments, it ensures high availability, operational efficiency, and full lifecycle control.
The Blockchain Network Manager offers integrated monitoring and control features to manage blockchain infrastructure with precision.
It provides real-time insights into node health, transaction throughput, and resource utilization across all environments.
Administrators can view logs, receive alerts, and track operational metrics through a unified dashboard, ensuring proactive management, faster issue resolution, and greater reliability of blockchain network deployments.
## Key capabilities
### Create private permissioned networks
* Deploy **Hyperledger Besu, Hyperledger Fabric, or Quorum** networks with
pre-configured templates and guided workflows, ensuring rapid setup for
enterprise use cases.
### Connect with public networks
* Seamlessly integrate with **Ethereum, Polygon, Hedera, Avalanche, Arbitrum,
Optimism, Sonic, Soneium**, and other leading public blockchains, enabling
hybrid solutions that leverage both public and private networks.
### Join external networks
* Use **SettleMint's tooling** to connect to existing networks, expand
infrastructure, or migrate to the SettleMint platform, providing flexibility
for organizations with pre-existing blockchain deployments.
### Join via invitation code
* Easily connect to **pre-existing networks** within SettleMint using an
invitation code, streamlining collaboration in consortium setups.
***

## Supported networks
The SettleMint Network Manager supports a wide range of blockchain protocols,
catering to both private permissioned and public network requirements. Below is
a summary of the supported networks:
| Network Type | Protocol | Description |
| ------------ | ------------------ | ---------------------------------------------------------------------- |
| Permissioned | Hyperledger Besu | Enterprise-grade permissioned blockchain with QBFT consensus. |
| Permissioned | Quorum | Ethereum fork with privacy features and encrypted transactions. |
| Permissioned | Hyperledger Fabric | Modular blockchain with pluggable consensus and customizable policies. |
| Public L1 | Ethereum | Decentralized blockchain with PoS, known for smart contracts. |
| Public L1 | Avalanche | High-speed chain with subnet support and PoS. |
| Public L1 | Hedera Hashgraph | Scalable public ledger with aBFT and low fees. |
| Public L1 | Sonic | High-performance EVM-compatible blockchain with sub-second finality. |
| Public L2 | Polygon PoS | Ethereum sidechain for faster, cheaper transactions. |
| Public L2 | Polygon zkEVM | Zero-knowledge rollup for efficient Ethereum scaling. |
| Public L2 | Optimism | Optimistic Rollup solution for Ethereum scalability. |
| Public L2 | Arbitrum | Optimistic Rollup for improved Ethereum performance and lower fees. |
| Public L2 | Soneium | Ethereum L2 scaling solution with high throughput and low costs. |
***
## Network deployment & configuration
### Evm-based private networks (besu, quorum)

Users can configure the following parameters before deploying a private
permissioned **EVM-based network**:
| Parameter | Description |
| ------------------- | --------------------------------------------------------------- |
| Chain ID | A unique identifier for the blockchain network. |
| Seconds per Block | Time interval for block creation. |
| Gas Price | Minimum gas price required for transactions (specified in wei). |
| Gas Limit | Maximum amount of gas allowed per block. |
| EVM Stack Size | Maximum stack size for the Ethereum Virtual Machine (EVM). |
| Contract Size Limit | Maximum size of a smart contract in kilobytes (KB). |
#### Genesis block configuration
The `genesis.json` file is a critical component of EVM-based blockchain
networks, defining the initial state and parameters of the blockchain to ensure
secure and structured network operations. Key elements include:
* **Chain ID**: Uniquely identifies the network to prevent replay attacks.
* **Consensus Mechanism**: Determines how blocks are validated, providing the
necessary governance structure (e.g. QBFT, PoA, PoS).
* **Pre-allocated Balances**: Specify the initial allocation of tokens for
specific addresses.
* **QBFT Validator Information**: Defines the nodes responsible for validating
transactions in Quorum Byzantine Fault Tolerance (QBFT)-based networks.
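For orientation, a minimal illustrative `genesis.json` for a QBFT network might
look like the sketch below; all values are placeholders, and the file SettleMint
generates contains network-specific data (for example, `extraData` encodes the
initial validator set):

```json
{
  "config": {
    "chainId": 44444,
    "qbft": {
      "blockperiodseconds": 2,
      "epochlength": 30000,
      "requesttimeoutseconds": 4
    }
  },
  "gasLimit": "0x1fffffffffffff",
  "difficulty": "0x1",
  "alloc": {
    "0xf17f52151EbEF6C7334FAD080c5704D77216b732": {
      "balance": "0xad78ebc5ac6200000"
    }
  },
  "extraData": "0x..."
}
```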
#### Developer integrations
The SettleMint Network Manager provides developer-friendly tools to facilitate
smart contract development and network interactions:
* **Faucet Wallets**: Enable test token distribution for private networks,
making it easier for developers to test transactions.
* **Genesis file availability**: Users can access and download the `genesis.json`
  file, allowing for easy network expansion outside the SettleMint platform.
### Hyperledger fabric networks

Users can configure the following settings before deploying a **Fabric
network**:
| Parameter | Description |
| ---------------------------- | ---------------------------------------------------------------------------------------- |
| Endorsement Policy | Defines transaction endorsement requirements ("By all peers" or "By majority of peers"). |
| Batch Timeout | Time before transactions are grouped into a block. |
| Max Messages in Batch | Maximum number of messages in a batch. |
| Absolute Max Bytes in Batch | Upper limit on batch size in megabytes (MB). |
| Preferred Max Bytes in Batch | Preferred batch size in megabytes (MB). |
#### Channel configuration and policies
Hyperledger Fabric networks use a `configtx.json` file to define network
channels, membership rules, and policies. Key components include:
* **Application Group**: Defines policies for participating organizations,
  specifying details such as:
  * **Organization Name**
  * **Policies**:
    * **Admin**: Roles allowed to modify configurations.
    * **Endorsement**: Transaction approvals required from specific peers.
    * **Readers and Writers**: Access rights to channel data.
* **Orderer Group**: Configures the ordering service responsible for transaction
  finalization. Settings include:
  * **Batch Timeout**: Determines the time before transactions are grouped into
    a block.
  * **Max Messages Per Batch**: Controls block size.
  * **Consensus Type**: Typically `etcdraft`, a Raft-based ordering service.
#### Network governance and security
Hyperledger Fabric networks require robust security and governance mechanisms:
* **Membership Service Provider (MSP)**: Controls identity verification and
authentication, ensuring only authorized participants can access the network.
* **Root Certificates and TLS Certificates**: Define trusted entities for secure
communication.
* **Endorsement Policies**: Determine how transactions are validated across
organizations, enforcing compliance and preventing unauthorized modifications.
* **Block Validation Policies**: Ensure the integrity and security of the
distributed ledger, maintaining network trustworthiness.
***
## Network monitoring & management
### Evm-based networks

The **dashboard** provides insights into:
* **Network Details**: Name, deployment location, creation date, blockchain
version, protocol type.
* **Key Configurations**: Chain ID, block time, gas price, gas limit, contract
size limit, EVM stack size.
* **Genesis File Access**: Contains initial network configuration.
#### System recommendations
> **Recommendation** At least **four validator nodes** are required to ensure
> **Byzantine Fault Tolerance**.
#### Faucet wallet
* Includes **mnemonic phrase, private key, derivation path, Ethereum address**.
* Provides a **large test balance** for development and testing.
#### Public network monitoring parameters
The **dashboard** provides real-time analytics on:
* Best block height
* Current gas price
* Current gas used
* Block height over time
* Suggested gas price over time
* Gas used over time
* Transactions per block
* Pending transactions over time
* Gas limit over time
* Block size over time
* Geographical location of nodes
#### Monitoring and analytics

The Network Manager provides real-time insights into network performance:
* **Current Block Height**: Represents the latest block processed.
* **Transaction Volume**: Gives an overview of the number and frequency of
transactions, allowing organizations to analyze usage trends.
* **Node Health Monitoring**: Ensures that validator and RPC nodes remain active
and operational.
* **Gas Consumption**: Analytics provide insights into network congestion and
transaction costs.
* **Pending Transactions**: Monitoring helps identify potential bottlenecks in
the system, enhancing troubleshooting and optimization efforts.
### Hyperledger fabric networks

The **dashboard** offers comprehensive network monitoring, including:
* **Network Overview**: Name, deployment location, creation date, blockchain
version, protocol type, channel ID, MSP ID.
* **Channel Configuration JSON File Access**.
* **Batch Processing Settings**:
* Timeout
* Maximum messages
* Batch size
#### Real-time performance monitoring

* Number and location of nodes.
* Active consensus nodes and cluster size.
* Latest block committed.
* Real-time transaction monitoring, allowing users to keep track of all
blockchain activities.
* Health status of orderer and peer nodes.
* Performance analytics, including block generation times, to help organizations
optimize their blockchain operations.
* Endorsement policy compliance tracking to ensure transactions adhere to
predefined security and governance policies.
#### System recommendations
> **Recommendation** Alerts for **fault tolerance** and **orderer node
> requirements** are provided in the system.
#### Key benefits
* Simplifies the deployment process for Hyperledger Fabric networks through a
guided setup approach.
* Efficiently configures access control, consensus models, and governance
settings, ensuring a seamless blockchain deployment experience.
* Designed for scalability, supporting multi-organization setups with secure
identity management.
* Integrated monitoring provides organizations with real-time insights into
network performance and compliance adherence.
***
## Supported blockchain network protocols
## Private permissioned networks
SettleMint's Network Manager excels at creating and managing private
permissioned networks, which are ideal for enterprises requiring strict control
over data privacy, access, and governance. Below are the supported frameworks,
enriched with additional details and practical use cases.
### Hyperledger besu
Hyperledger Besu is an enterprise-grade, Ethereum-based blockchain framework
designed for permissioned and consortium networks. It offers private
transactions, configurable consensus mechanisms (IBFT, QBFT), and Ethereum
Virtual Machine (EVM) compatibility, allowing seamless integration with existing
Ethereum tools. Its modular architecture makes it a flexible choice for
businesses that require high security and compliance.
**Key Features**
* Private transactions via Tessera.
* High performance with configurable consensus mechanisms.
* Ethereum compatibility for smart contract development.
**Example Use Cases**
* Enterprise financial settlements.
* Private blockchain networks for regulated industries.
* Supply chain tracking and transparency.
### Quorum
Quorum is an Ethereum-based blockchain designed for enterprises needing privacy
and confidentiality in transactions. It supports Raft and IBFT consensus,
ensuring high throughput and fast finality. With private transactions and smart
contract execution, Quorum is widely used in finance, healthcare, and government
sectors where data protection is crucial.
**Key Features**
* Privacy through private transactions and contract execution.
* Ethereum compatibility for existing smart contracts.
* High throughput with Raft consensus.
**Example Use Cases**
* Banking and financial services requiring confidential transactions.
* Healthcare data exchange with controlled access.
* Corporate consortia with secure shared ledgers.
### Hyperledger fabric
Hyperledger Fabric is a modular and scalable blockchain framework, ideal for
enterprise solutions requiring private and permissioned networks. Its
multi-channel architecture allows organizations to share data securely while
maintaining privacy. Fabric's pluggable consensus mechanisms and endorsement
policies make it a powerful choice for industries needing custom governance
models.
**Key Features**
* Channels for private data sharing.
* Pluggable consensus for optimization.
* Endorsement policies for transaction validation.
**Example Use Cases**
* Enterprise supply chain management.
* Trade finance and document verification.
* Interbank settlements with controlled access.
***
## Layer 1 (l1) public networks
### Ethereum
Ethereum is the most widely adopted decentralized blockchain, supporting smart
contracts, decentralized applications (dApps), and financial services (DeFi). It
transitioned to Proof of Stake (PoS) with Ethereum 2.0, reducing energy
consumption while improving scalability. Ethereum's rich ecosystem makes it a
leading choice for developers and enterprises.
**Key Features**
* Smart contracts written in Solidity.
* Large ecosystem of tools and dApps.
* PoS consensus for scalability.
### Avalanche
Avalanche is a high-speed Layer 1 blockchain that enables the creation of custom
subnets, making it highly scalable and interoperable. It uses a unique Proof of
Stake (PoS) consensus, ensuring low fees, near-instant finality, and high
transaction throughput. Its EVM compatibility allows seamless migration of
Ethereum-based dApps.
**Key Features**
* Subnets for isolated, customizable blockchain environments.
* High throughput with transaction finality in under 2 seconds.
* EVM compatibility for easy migration of Ethereum dApps.
**Example Use Cases**
* Institutional DeFi solutions.
* Gaming and metaverse projects.
* Tokenized assets and securities.
### Hedera hashgraph
Hedera Hashgraph is an enterprise-ready public ledger that uses asynchronous
Byzantine Fault Tolerance (aBFT) for security and efficiency. It provides
low-cost transactions, predictable pricing, and high throughput with up to
10,000 transactions per second. Its governance model, managed by major
enterprises, ensures stability and regulatory compliance.
**Key Features**
* aBFT consensus for high security and fault tolerance.
* Scalability with up to 10,000 transactions per second.
* Fixed, low-cost fees for predictable budgeting.
**Example Use Cases**
* Government ID and voting systems.
* Secure digital asset management.
* Supply chain and logistics solutions.
### Sonic
Sonic is a high-performance Layer 1 blockchain designed for speed and scalability,
offering EVM compatibility and sub-second transaction finality. Built by the team
behind Fantom, it supports up to 10,000 transactions per second, making it ideal
for DeFi and real-time applications. Its native bridge to Ethereum enhances
liquidity and interoperability.
**Key Features**
* High throughput with up to 10,000 transactions per second.
* Sub-second finality for real-time processing.
* EVM compatibility with a secure Ethereum bridge.
**Example Use Cases**
* Real-time DeFi applications.
* High-frequency trading platforms.
* Scalable gaming and NFT ecosystems.
***
## Layer 2 (l2) public networks
### Polygon pos
Polygon PoS is an Ethereum-compatible Layer 2 sidechain that provides faster
transactions and lower fees while being secured by the Ethereum mainnet. It
supports high-volume applications like gaming, DeFi, and NFTs, reducing
congestion on Ethereum. The PoS mechanism ensures low-cost and scalable
transaction processing.
**Key Features**
* Near-instant transactions with minimal fees.
* Full Ethereum compatibility for smart contracts and dApps.
* Bridging mechanism for secure asset transfers between Polygon and Ethereum.
**Example Use Cases**
* High-volume gaming applications.
* NFT marketplaces with low transaction fees.
* Scalable DeFi platforms.
### Polygon zkevm
Polygon zkEVM is a zero-knowledge rollup solution that enables secure and
private transactions while maintaining Ethereum compatibility. It improves
scalability, reduces gas fees, and supports high-throughput applications.
Businesses looking for privacy-preserving blockchain solutions benefit from its
advanced cryptographic techniques.
**Key Features**
* zk-rollups for high throughput and low costs.
* EVM compatibility for seamless dApp migration.
* Enhanced privacy through zero-knowledge proofs.
**Example Use Cases**
* Secure enterprise transactions with privacy.
* Scalable Ethereum-based DeFi applications.
* Cost-effective NFT minting and trading.
### Optimism
Optimism uses Optimistic Rollups to batch process transactions off-chain,
reducing Ethereum's congestion and gas fees. It ensures faster finality while
maintaining Ethereum's security and decentralization. Optimism is widely used
for scalable DeFi applications and developer-friendly dApps.
**Key Features**
* Optimistic Rollups for cost-effective scaling.
* Near-identical Ethereum experience for developers.
* Fast transaction confirmation with Ethereum finality.
### Arbitrum
Arbitrum is an Optimistic Rollup solution that enhances Ethereum's scalability
by processing transactions off-chain while ensuring on-chain security and
fraud-proof verification. It provides reduced fees, high throughput, and
seamless Ethereum compatibility, making it a popular choice for scalable dApps
and DeFi applications.
**Key Features**
* High throughput with reduced gas costs.
* Full EVM compatibility for smart contracts.
* Seamless integration with Ethereum's ecosystem.
**Example Use Cases**
* Cost-effective Layer 2 scaling for DeFi applications.
* Scalable smart contract automation.
* Low-fee gaming and metaverse applications.
### Soneium
Soneium is an Ethereum Layer 2 blockchain developed by Sony Block Solutions Labs,
leveraging Optimistic Rollups for scalability and efficiency. Built on the OP Stack,
it offers high throughput, low-cost transactions, and EVM compatibility, targeting
entertainment and enterprise use cases with seamless Web3 integration.
**Key Features**
* Optimistic Rollups for scalable, low-cost transactions.
* EVM compatibility for developer-friendly dApp deployment.
* Integration with Ethereum via the OP Stack Superchain.
**Example Use Cases**
* Entertainment-focused dApps and NFT platforms.
* Scalable enterprise solutions on Ethereum.
* Cost-efficient gaming and content ecosystems.
***
file: ./content/docs/platform-components/blockchain-infrastructure/transaction-signer.mdx
meta: {
"title": "Transaction signer",
"description": "Blockchain Transaction Signer in SettleMint"
}
## Blockchain transaction signer
## Introduction
A **blockchain transaction signer** is a **cryptographic mechanism** used to
authorize and verify transactions before they are submitted to the blockchain.
It ensures that only the legitimate owner of an account can initiate
transactions, thereby preventing unauthorized access and fraud.\
Transaction signing plays a crucial role in blockchain security by using
**public-key cryptography** to create digital signatures. These signatures
confirm transaction authenticity without revealing the sender's private key,
ensuring both security and integrity.
***
Transaction signing follows these key steps:
1. **Transaction Creation** – The user or system constructs a transaction,
specifying details such as sender, recipient, amount, and gas fees.
2. **Transaction Hashing** – The transaction data is hashed to generate a unique
digest, ensuring the data remains tamper-proof.
3. **Digital Signing** – The sender's **private key** is used to sign the
transaction hash, creating a **digital signature**.
4. **Transaction Submission** – The signed transaction is broadcast to the
blockchain network.
5. **Verification by Nodes** – Other nodes verify the transaction using the
sender’s **public key** to confirm authenticity before processing it.
By signing transactions locally before submission, users ensure that their
private keys remain secure and are never exposed to the network.
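As a minimal illustration (not a SettleMint-specific API), the following sketch
uses the ethers.js library to sign and submit a transaction locally; the RPC
URL, private key, and recipient address are placeholders:

```typescript
import { ethers } from "ethers";

// Placeholder values for illustration only - never hardcode real keys.
const provider = new ethers.JsonRpcProvider("https://your-node.example.com");
const wallet = new ethers.Wallet("0xYOUR_PRIVATE_KEY", provider);

async function sendSignedTransaction() {
  // The wallet hashes and signs the transaction locally; only the
  // signed payload is broadcast, so the private key never leaves the client.
  const tx = await wallet.sendTransaction({
    to: "0xRecipientAddress",
    value: ethers.parseEther("0.01"),
  });
  const receipt = await tx.wait(); // nodes verify the signature before inclusion
  console.log("Mined in block:", receipt?.blockNumber);
}
```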
***

## Settlemint blockchain platform: transaction signing
SettleMint allows blockchain nodes to **automatically sign transactions** using
pre-attached private keys. This process ensures seamless execution of
transactions such as **smart contract interactions, token transfers, and
API-driven blockchain operations**.
The process includes:
1. **Private Key Attachment** – A private key is securely linked to a blockchain
node.
2. **Transaction Request Generation** – An application or smart contract submits
a transaction request.
3. **Transaction Hashing & Signing** – The node hashes and signs the transaction
using the attached private key.
4. **Transaction Broadcast** – The signed transaction is submitted to the
blockchain network.
5. **Network Validation** – Other nodes verify the transaction signature before
adding it to the blockchain.
The platform supports **`eth_sendTransaction`**, enabling automated signing for
transactions on Ethereum-compatible networks.
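As a sketch, a raw `eth_sendTransaction` call against a node with an attached
key could look like this; the node URL and addresses are placeholders, and the
node performs the signing server-side with the attached key:

```typescript
// Hypothetical node RPC endpoint - the attached private key signs on the node.
async function sendViaNode() {
  const response = await fetch("https://your-node.settlemint.com", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendTransaction",
      params: [
        {
          from: "0xAddressOfAttachedKey", // must match a key attached to the node
          to: "0xRecipientAddress",
          value: "0x2386f26fc10000", // 0.01 ETH in wei, hex-encoded
        },
      ],
    }),
  });
  const { result } = await response.json(); // transaction hash on success
  console.log("Transaction hash:", result);
}
```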
***
## Attaching private keys to nodes on settlemint
SettleMint provides multiple ways to attach private keys securely to blockchain
nodes:
### **1. SettleMint dashboard configuration**
* Users can create private keys securely via the **SettleMint UI**.
* The private key is encrypted and stored within the **node’s secure
environment**.

### **2. Hardware security module (HSM) or vault integration**
* SettleMint allows integration with **AWS KMS, HashiCorp Vault, and other
secure key management services**.
* This approach keeps private keys **off-chain and protected from unauthorized
access**.
### **3. Remote signing via API**
* Instead of storing private keys on nodes, SettleMint supports **external
signing services** that handle digital signatures remotely.
* This enhances security by **reducing node exposure** to potential attacks.
***
## Security considerations
When attaching private keys to nodes for transaction signing, **strong security
measures** should be implemented:
* **Use Encrypted Storage:** Private keys should always be **encrypted at rest**
to prevent unauthorized access.
* **Restrict Access Controls:** Only authorized applications should have access
to the signing key.
* **Enable Multi-Signature if Needed:** High-value transactions should use
**multisig wallets** for enhanced security.
* **Monitor & Audit Transactions:** Logging and monitoring signed transactions
help detect **unauthorized or suspicious activities**.
***
file: ./content/docs/platform-components/custom-deployments/custom-deployment.mdx
meta: {
"title": "Custom deployment",
"description": "Guide to deploying custom Docker images on SettleMint"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
A Custom Deployment allows you to deploy your own Docker images, such as
frontend applications, on the SettleMint platform. This feature provides
flexibility for integrating custom solutions within your blockchain-based
applications.
## Create a custom deployment
1. Prepare your container image and push it to a container registry (public or private).
2. In the SettleMint platform, navigate to the Custom Deployments section.
3. Click on the "Add Custom Deployment" button to create a new deployment.
4. Provide the necessary details:
* Container image path (e.g., registry.example.com/my-app:latest)
* Container registry credentials (if using a private registry)
* Environment variables (if required)
* Custom domain information (if applicable)
5. Configure any additional settings as needed.
6. Click on 'Confirm' and wait for the Custom Deployment to be in the Running status.
```bash
# Create a custom deployment
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--port 3000 \
--provider gcp \
--region europe-west1
# With environment variables
settlemint platform create custom-deployment my-deployment \
--application my-app \
--image-repository registry.example.com \
--image-name my-app \
--image-tag latest \
--env-vars NODE_ENV=production,DEBUG=false
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
const createDeployment = async () => {
const result = await client.customDeployment.create({
applicationId: "app-123",
name: "my-deployment",
imageRepository: "registry.example.com",
imageName: "my-app",
imageTag: "latest",
port: 3000,
provider: "gcp",
region: "europe-west1",
environmentVariables: {
NODE_ENV: "production"
}
});
};
```
## Dns configuration for custom domains
When using custom domains with your Custom Deployment, you'll need to configure
your DNS settings correctly. Here's how to set it up:
1. **Add Custom Domain to the SettleMint Platform**:
* Navigate to your Custom Deployment in the SettleMint platform.
* In the manage custom deployment menu, click on the edit custom deployment
action.
* Locate the custom domains configuration section.
* Enter your desired custom domain (e.g., example.com for top-level domain or
app.example.com for subdomain).
* Save the changes to update your Custom Deployment settings.
2. **Obtain Your Application's Hostname**: After adding your custom domain, the
SettleMint platform will provide you with an ALIAS (for top-level domains) or
CNAME (for subdomains) record. This can be found in the "Connect" tab of your
Custom Deployment.
3. **Access Your Domain's DNS Settings**: Log in to your domain registrar or DNS
provider's control panel.
4. **Configure DNS Records**:
For Top-Level Domains (e.g., example.com):
* Remove any existing A and AAAA records for the domain you're configuring.
* Remove any existing A and AAAA records for the www domain (e.g.,
  www.example.com) if you're using it.
```
ALIAS example.com gke-europe.settlemint.com
ALIAS www.example.com gke-europe.settlemint.com
```
For Subdomains (e.g., app.example.com):
```
CNAME app.example.com gke-europe.settlemint.com
```
5. **Set TTL (Time to Live)**:
* Set a lower TTL (e.g., 300 seconds) initially to allow for quicker
propagation.
* You can increase it later for better caching (e.g., 3600 seconds).
6. **Verify DNS Propagation**:
* Use online DNS lookup tools to check if your DNS changes have propagated.
* Note that DNS propagation can take up to 48 hours, although it's often much
quicker.
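For example, with the `dig` CLI (domain names are placeholders):

```bash
# ALIAS records resolve as A records; CNAMEs are returned directly
dig +short example.com A
dig +short app.example.com CNAME
```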
7. **SSL/TLS Configuration**:
* The SettleMint platform typically handles SSL/TLS certificates
automatically for both top-level domains and subdomains.
* If you need to use your own certificates, please contact us for assistance
and further instructions.
Note: The configuration process is similar for both top-level domains and
subdomains. The main difference lies in the type of DNS record you create (ALIAS
for top-level domains, CNAME for subdomains) and whether you need to remove
existing records.
## Manage custom deployments
1. Navigate to your application's **Custom Deployments** section
2. Click on a deployment to:
* View deployment status and details
* Manage environment variables
* Configure custom domains
* View logs
* Check endpoints
```bash
# List custom deployments
settlemint platform list custom-deployments --application my-app
# Get deployment details
settlemint platform read custom-deployment my-deployment
# Restart deployment
settlemint platform restart custom-deployment my-deployment
# Edit deployment
settlemint platform edit custom-deployment my-deployment \
--container-image registry.example.com/my-app:v2
```
```typescript
// List deployments
const listDeployments = async () => {
const deployments = await client.customDeployment.list("my-app");
};
// Get deployment details
const getDeployment = async () => {
const deployment = await client.customDeployment.read("deployment-unique-name");
};
// Restart deployment
const restartDeployment = async () => {
await client.customDeployment.restart("deployment-unique-name");
};
// Edit deployment
const editDeployment = async () => {
await client.customDeployment.edit("deployment-unique-name", {
imageTag: "v2"
});
};
```
## Limitations and considerations
When using Custom Deployment, keep the following limitations in mind:
1. **No Root User Privileges**: Your application will run without root user
privileges for security reasons.
2. **Read-Only Filesystem**: The filesystem is read-only. For data persistence,
consider using:
* Hasura: A GraphQL engine that provides a scalable database solution. See
[Hasura](/platform-components/database-and-storage/hasura-backend-as-a-service).
* Other External Services: Depending on your specific needs, you may use
other cloud-based storage or database services
3. **Stateless Applications**: Your applications should be designed to be
stateless. This ensures better scalability and reliability in a cloud
environment.
4. **Use AMD-based Images**: Currently, our platform supports AMD-based
container images. Ensure your Docker images are built for AMD architecture to
guarantee smooth compatibility with our infrastructure.
## Best practices
* Design your applications to be stateless and horizontally scalable
* Use environment variables for configuration to make your deployments more
flexible
* Implement proper logging to facilitate debugging and monitoring
* Regularly update your container images to include the latest security patches
Custom Deployment offers a powerful way to extend the capabilities of your
blockchain solutions on the SettleMint platform. By following these guidelines
and best practices, you can seamlessly integrate your custom applications into
your blockchain ecosystem.
Custom Deployments support automatic SSL/TLS certificate management for custom
domains.
file: ./content/docs/platform-components/database-and-storage/hasura-backend-as-a-service.mdx
meta: {
"title": "Hasura backend-as-a-service",
"description": "Guide to using Hasura in SettleMint"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Hasura - backend-as-a-service
Many dApps need more than just decentralized tools to build an end-to-end
solution. The SettleMint Hasura SDK provides a seamless way to interact with
Hasura GraphQL APIs for managing application data.
Hasura is an open-source Backend-as-a-Service (BaaS) platform that provides
instant, real-time GraphQL APIs backed by your relational databases. It connects
to your data sources (e.g. PostgreSQL, MS SQL, etc.) and **auto-generates a
unified GraphQL schema** with queries, mutations, and subscriptions for your
data – all secured by a built-in authorization layer. In practical terms,
simply pointing Hasura at an existing database gives you a ready-to-use GraphQL
API with CRUD operations, real-time capabilities, and fine-grained access
control out of the box. This allows development teams to **rapidly build
data-driven applications** without writing boilerplate backend code, while still
retaining the flexibility to add custom business logic when needed.
## Core functionality: auto-generated graphql from your database
At the heart of Hasura is its ability to instantly create a full-featured
GraphQL API from a relational database schema. When you connect Hasura to a
database (commonly PostgreSQL, though Hasura supports multiple databases like
MySQL, SQL Server, etc.), it introspects the schema and automatically
**generates GraphQL types and operations for each table**.
For example, if you have a `users` table, Hasura will provide:
* **Query fields** – to fetch data (with powerful filtering, ordering,
pagination arguments) or fetch by primary key.
* **Mutation fields** – to insert new records (with support for bulk inserts and
upserts), update existing records (optionally by primary key or conditions),
and delete records.
* **Aggregate queries** – to get counts and aggregates (min, max, sum, etc.) of
data.
* **Subscriptions** – to listen for real-time changes on query results.
Behind the scenes, Hasura’s engine compiles incoming GraphQL requests **directly
into optimized SQL** queries. This means there are no traditional resolvers to
write or maintain. The GraphQL engine handles the translation of GraphQL into
efficient SQL, including complex joins or deeply nested queries, all while
applying any permission rules. In fact, Hasura acts like a just-in-time compiler
for GraphQL: it parses the client’s GraphQL request and produces a single SQL
statement (with your access control rules embedded as `WHERE` clauses) that hits
the database. This yields very high performance and avoids common pitfalls like
the N+1 query problem, even as your schema grows. Developers get the **benefits
of GraphQL (strong typing, flexible querying)** without having to manually
implement resolvers or ORM code for basic data fetching.
**Tracking Tables & Schema:** In Hasura’s console or via its API, you “track”
the tables and views you want to expose. Once tracked, those tables are
instantly available in the GraphQL schema. Hasura generates a GraphQL type for
each table and a comprehensive set of operations for CRUD and real-time queries
on that table. For example, if `users` is tracked, your API might include
`query { users(...) { ... } }` for fetching data,
`mutation { insert_users(...) }` for inserts, and `subscription { users(...) }`
to subscribe to live changes. All of this happens **without writing any server
code** – Hasura’s automation covers \~80% of typical API needs, letting
developers focus on the unique parts of their application.
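To make this concrete, here is a sketch of calling one of the generated
operations over HTTP; the endpoint URL is a placeholder and `users` is assumed
to be a tracked table with a `name` column:

```typescript
// Hypothetical Hasura endpoint; insert_users_one follows Hasura's
// standard naming for single-row inserts on a tracked `users` table.
async function addUser(name: string) {
  const response = await fetch("https://your-hasura.example.com/v1/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `
        mutation AddUser($name: String!) {
          insert_users_one(object: { name: $name }) {
            id
            name
          }
        }
      `,
      variables: { name },
    }),
  });
  const { data } = await response.json();
  return data.insert_users_one; // the inserted row
}
```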
## Real-time graphql subscriptions
One of Hasura’s standout features is its **real-time capabilities**. Any GraphQL
query that you can perform on Hasura can also be made as a **subscription**,
enabling clients to get live updates whenever the underlying data changes. Under
the hood, Hasura handles the complexity of monitoring the database for changes
and pushing those updates to subscribed clients over WebSocket connections.
Developers don’t need to set up separate real-time servers or polling; you
simply use GraphQL subscriptions and Hasura streams the data changes to the
client.
This makes building **real-time applications (chat apps, live dashboards, data
monitors, etc.) very straightforward**. For example, a subscription like
`subscription { users { id, name } }` will emit a new result to the client
whenever a user is added, updated, or deleted in the `users` table (according to
the subscription’s filter conditions). Hasura ensures these updates are
delivered reliably and efficiently. It leverages PostgreSQL’s capabilities and a
high-performance push mechanism so that **clients see changes with minimal
latency**, without overwhelming the database. In fact, Hasura’s GraphQL engine
was designed to provide “**instant realtime APIs on Postgres**” from day one.
This real-time functionality is not an add-on, but a first-class part of the
GraphQL API – meaning you can convert any query into a live result feed simply
by using the GraphQL subscription operation.
Subscriptions are useful for a variety of use cases: live feeds, notifications,
collaborative editing apps, or any scenario where you want the UI to reflect
server state in real time. By handling the heavy lifting for you, Hasura’s
real-time engine greatly reduces the effort to build reactive applications.
Moreover, because Hasura’s subscription handling is built into its compiled
query engine, it **scales** well – the engine can handle high numbers of
concurrent subscriptions by sharing work and using efficient data push
algorithms (using techniques like live query invalidation and batching of
updates). In summary, Hasura delivers **real-time GraphQL out-of-the-box**,
turning your database into a live data source for clients with virtually no
extra code or infrastructure.
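As a sketch using the `graphql-ws` client (the endpoint URL and token are
placeholders), subscribing to live changes looks like this:

```typescript
import { createClient } from "graphql-ws";

const jwt = "your-jwt-token"; // placeholder

// Hypothetical Hasura websocket endpoint; auth headers are passed
// in the connection payload.
const client = createClient({
  url: "wss://your-hasura.example.com/v1/graphql",
  connectionParams: { headers: { Authorization: `Bearer ${jwt}` } },
});

// Emits a fresh result every time the users table changes.
client.subscribe(
  { query: "subscription { users { id name } }" },
  {
    next: ({ data }) => console.log("live users:", data?.users),
    error: (err) => console.error("subscription error:", err),
    complete: () => console.log("subscription closed"),
  }
);
```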
## Event triggers for async workflows
Besides querying data in real-time, Hasura allows you to react to data changes
via **Event Triggers**. Event Triggers are a mechanism to invoke custom business
logic whenever certain database events occur. You can configure Hasura to listen
on specific tables (for inserts, updates, or deletions) and call a webhook or
serverless function when those events happen. This effectively turns your
database into an event source for your application – enabling an **event-driven
architecture** with minimal effort.
**How it works:** When you create an Event Trigger, you specify a table, the
event types to listen for (INSERT, UPDATE, DELETE), and a webhook URL to call.
When a matching event occurs on that table, Hasura captures the event (ensuring
it’s not lost even if transient failures occur) and delivers an HTTP POST
request to your webhook with a JSON payload describing the change. This allows
you to **automate backend actions in response to data changes**. For example,
you could set up an event trigger on a `users` table for new inserts to send a
welcome email via a third-party service, or trigger a serverless function to
propagate the change to another system.
Event Triggers are designed with reliability in mind – Hasura uses an **atomic,
durable queue** internally to track events and will retry delivery if your
webhook fails. This means you can trust that your business logic (e.g. an AWS
Lambda or any HTTP endpoint) will eventually receive the event even if it’s
temporarily unavailable, ensuring no critical events are dropped. You can also
configure retry schedules and dead-letter queues for advanced use cases.
Typical uses of Event Triggers include:
* **Async Processing** – e.g., when an order is placed (row inserted), call a
webhook to handle payment processing or inventory updates.
* **Notifications** – e.g., trigger an SMS or push notification when a certain
record changes.
* **Data Pipelines / ETL** – e.g., on data insert, forward the data to an
analytics index or search engine (such as indexing a new record in
Algolia/Elasticsearch).
* **Sync with external systems** – e.g., propagate a change in your app’s
database to a legacy system via an API call.
By offloading these event triggers to Hasura, developers can decouple complex
workflows from the request-response cycle of the app. The **integration with
serverless functions is seamless** – Hasura can effectively serve as the glue
between the database and cloud function triggers. In summary, Event Triggers
empower you to **extend Hasura with custom business logic in an asynchronous,
scalable way**, turning database changes into actionable events in your
architecture.
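As a minimal sketch of the receiving side (the route name and table are
hypothetical; the payload shape follows Hasura's documented event format):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical webhook that a Hasura Event Trigger calls on INSERTs into `users`.
app.post("/hasura/user-created", (req, res) => {
  const { event, table } = req.body;
  if (event.op === "INSERT" && table.name === "users") {
    const newUser = event.data.new; // the row as inserted
    console.log("Send welcome email to:", newUser.email);
  }
  // Respond 2xx so Hasura marks the event as delivered (otherwise it retries).
  res.status(200).json({ ok: true });
});

app.listen(3000);
```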
## Remote schemas for api integration (graphql federation)
Hasura supports a modular, federated architecture through **Remote Schemas**.
This feature allows you to **merge external GraphQL schemas** (from other
services or third-party APIs) into Hasura’s unified GraphQL API
([Remote Schemas Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/remote-schemas/overview/)).
In essence, Hasura can act as a **single GraphQL gateway** that combines your
database data with other GraphQL-based services, so clients can query both
through one endpoint.
For example, suppose you have Hasura connected to your primary database, but you
also have a separate GraphQL service for payments or an external GraphQL API
(like a CMS or analytics service). Instead of having your frontend hit two
different GraphQL endpoints, you can **add the external service as a remote
schema in Hasura**. Hasura will stitch that schema together with the
auto-generated database schema, presenting them as one cohesive GraphQL API to
clients (no manual schema stitching required). **Queries and mutations to
disparate sources can be made from the single Hasura endpoint**
([Remote Schemas Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/remote-schemas/overview/))
– for instance, a GraphQL query could fetch data from both the Hasura-tracked
database and the remote payment service in one request.
*Figure: Hasura’s architecture can unify data from both a database and remote
GraphQL services into a single endpoint. The Hasura engine merges the schema of
connected sources (the database and any remote GraphQL APIs) so that the client
sees one **merged GraphQL schema**. This allows an app to query across systems
(e.g., a payment API and the app database) through one GraphQL gateway, with
Hasura handling the schema stitching and auth context propagation
([Remote Schemas Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/remote-schemas/overview/)).*
Setting up a remote schema is straightforward – you provide Hasura with the
remote GraphQL server’s URL (and any auth if needed), and Hasura introspects its
schema and incorporates it. Once added, you can also **join data between your
database and the remote schema** using Hasura’s remote join feature, effectively
allowing foreign-key-like connections across services (e.g., resolve a field in
a database query by calling a remote API). Hasura also allows you to forward
**authentication context** (JWT claims/headers) to the remote service, ensuring
that permissions can be consistently enforced across the unified API.
Remote Schemas enable a **microservices-friendly architecture**: you can keep
specialized GraphQL services for certain domains (or use third-party GraphQL
APIs) and let Hasura aggregate them. This provides **modularity** (each service
can be developed/maintained independently) while still giving clients a single
endpoint. In practice, many teams use Hasura to front multiple databases and
services – for example, combining a PostgreSQL database, a legacy REST API
(exposed via Hasura Actions or a GraphQL wrapper), and maybe a cloud service’s
GraphQL API, all into one GraphQL schema. The result is a **unified, federated
GraphQL graph** that greatly simplifies client interactions.
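For illustration, adding a remote schema can also be done programmatically
through Hasura's metadata API; the service name and URLs below are
placeholders:

```typescript
const adminSecret = "your-admin-secret"; // placeholder

// Registers an external GraphQL service as a remote schema
// via Hasura's /v1/metadata API (admin-only operation).
async function addRemoteSchema() {
  await fetch("https://your-hasura.example.com/v1/metadata", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": adminSecret,
    },
    body: JSON.stringify({
      type: "add_remote_schema",
      args: {
        name: "payments",
        definition: {
          url: "https://payments.example.com/graphql",
          forward_client_headers: true, // propagate auth context to the remote service
        },
      },
    }),
  });
}
```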
## Role-based access control and security
Hasura includes a powerful **role-based access control (RBAC)** system to
enforce fine-grained authorization rules on the API. From the Hasura console or
via metadata, you can define **roles** (e.g. `user`, `admin`, `manager`,
`anonymous`) and specify, for each role, what operations are allowed on which
tables, columns, and rows. These permissions are applied automatically by Hasura
for every GraphQL query or mutation, ensuring that each request **only returns
or modifies data that the requesting role is allowed to access**
([Authentication and Authorization Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/auth/overview/)).
Key aspects of Hasura’s RBAC and security model:
* **Granular Permissions:** You can restrict data at multiple levels. For each
table, you can choose which roles can *select* (query), *insert*, *update*, or
*delete*, and even set conditions (Boolean expressions) that filter which rows
each role can see or modify. For example, a role `user` might have a select
permission on the `orders` table limited to
`orders.user_id = X-Hasura-User-Id` (a session variable), effectively
enforcing row-level security so users only see their own orders. You can also
limit which columns are selectable or updatable by a given role, and define
check constraints for inserts/updates (ensuring, say, a user can only create
an order with their own user\_id). These rules map to SQL `WHERE` clauses under
the hood, which Hasura adds to generated queries for that role.
* **Role Hierarchy and Combined Access:** Hasura can attach **multiple roles**
to a single request (especially useful when using JWT authentication with
multiple role claims). There is also a notion of a superuser role (commonly
`admin`) which by default bypasses all checks. Typically, you secure the admin
role with a secret key (the admin secret) and use it only for trusted access
or console operations, while normal client requests use non-admin roles with
limited permissions.
* **Authentication Integration:** While Hasura doesn’t handle user
authentication itself, it integrates with your auth provider to figure out the
role and identity of the user making each request. Commonly, this is done via
JWT tokens or an authorization webhook. For instance, if using a JWT-based
auth (Auth0, Firebase Auth, your custom JWT), you configure Hasura with the
signing key and the token’s claims format. Clients then include their JWT in
the `Authorization` header when querying Hasura. Hasura will verify the token
and extract the user’s role and other attributes (like user ID) from custom
claims (e.g. `x-hasura-role`, `x-hasura-user-id`). Those become **session
variables** available in permission rules. This way, Hasura works with “many
popular auth services or your own custom solution” seamlessly
([Authentication and Authorization Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/auth/overview/)),
and you offload authentication to proven providers.
* **Column and Field Permissions:** The RBAC not only covers database tables but
also extends to custom actions and remote schema fields. You can restrict
which roles can call a given Action (custom resolver) or which roles can see
fields from a merged remote schema. This ensures a consistent security policy
even when you extend Hasura beyond the database.
* **No-Code Security Rules:** All permissions are declarative and part of
Hasura’s metadata. Setting up rules does not require coding business logic
checks in resolvers – it’s configured in Hasura and enforced centrally. This
significantly reduces the surface for mistakes and makes auditing easier. You
can review a **permissions summary** in the console to see all access rules at
a glance, helping to verify that roles like `anonymous` (unauthenticated) have
only intended access.
Through this RBAC system, Hasura ensures that your auto-generated API is
**secure for production use**. It essentially brings database-level access
control to the GraphQL layer, including the ability to leverage your database’s
features (like row-level security policies in Postgres) in combination with
Hasura’s rules. The result is a GraphQL API where each request is scoped to the
user’s privileges, without requiring manual checks in application code. By
combining Hasura’s RBAC with your authentication of choice, you get a robust
security model covering authentication **and** authorization for all data
operations.
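As a sketch of how this looks from the client side (the endpoint and token are
placeholders), the same query returns different rows depending on the role and
user id carried in the JWT:

```typescript
const jwt = "your-jwt-token"; // carries x-hasura-role / x-hasura-user-id claims

// Hasura verifies the JWT, resolves the role, and injects the permission
// filter (e.g. orders.user_id = X-Hasura-User-Id) into the generated SQL.
async function fetchMyOrders() {
  const response = await fetch("https://your-hasura.example.com/v1/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${jwt}`,
    },
    body: JSON.stringify({
      query: "query { orders { id total } }", // only the caller's own orders are returned
    }),
  });
  const { data } = await response.json();
  return data.orders;
}
```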
## Metadata management and migrations
In Hasura, the state of your GraphQL API (what tables are tracked, what
relationships exist, what permissions are defined, etc.) is represented as
**metadata**. This metadata is essentially a collection of configurations that
tells the GraphQL engine how to expose your data. Hasura provides tools to
manage this metadata and your database schema changes in a version-controlled,
reproducible way – crucial for teams working across development, staging, and
production environments.
**Hasura Metadata:** This includes all the “non-database” configuration in
Hasura – tracked tables/views, GraphQL schema customizations, permission rules
for roles, event triggers, remote schema configurations, actions (custom
business logic endpoints), and REST endpoint mappings. Hasura lets you export or
save this metadata as YAML/JSON files. Using the Hasura Console, you can make
changes (e.g., add a permission, track a table) and then export the whole
metadata as a file. With the Hasura CLI, you can pull metadata into a local
project directory. Because metadata defines the entire GraphQL API setup,
checking these files into source control allows you to **treat your Hasura
config as code**.
**Database Migrations:** While Hasura can track existing tables, you often need
to evolve your database schema itself (create tables, alter columns, etc.).
Hasura’s CLI includes a migration system to manage SQL schema changes. Whenever
you modify the database through the Hasura console (or manually), you can record
a migration – typically, the CLI intercepts schema changes made via the console
and writes out SQL migration files. Each migration is a SQL script (or a pair of
up/down scripts) that can be applied to recreate that change. These migration
files, alongside the metadata files, together represent the entire state of your
backend. Hasura’s migration tool (inspired by Rails’ ActiveRecord migrations)
allows you to apply these to another environment easily.
By using **migrations and metadata files**, teams can propagate changes in a
controlled manner. For example, you might develop your schema changes locally,
run `hasura migrate create` and `hasura metadata export` to capture them, push
to Git, and then in a CI/CD pipeline apply those in a staging or production
environment with `hasura migrate apply` and `hasura metadata apply`. This
ensures that the Hasura service in each environment has the same schema and API
configuration.
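A typical cycle with the Hasura CLI might look like the following (the
endpoint, admin secret, and migration name are placeholders):

```bash
# Capture local changes as versioned artifacts
hasura migrate create "add_orders_table" --database-name default
hasura metadata export

# Apply the same schema and API configuration to another environment
hasura migrate apply --database-name default \
  --endpoint https://staging-hasura.example.com \
  --admin-secret "$HASURA_ADMIN_SECRET"
hasura metadata apply \
  --endpoint https://staging-hasura.example.com \
  --admin-secret "$HASURA_ADMIN_SECRET"
```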
Hasura’s docs describe these pieces clearly: “Hasura’s Metadata represents the
configuration state of your project. Hasura Migrations are SQL files
representing changes to your database, and Seeds are SQL files for populating
initial data”
([Migrations, Metadata, and Seeds Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/migrations-metadata-seeds/overview/)).
**Combined with version control**, these allow you to reliably move changes
through environments and keep track of how your schema/API evolves over time.
Some best practices with migrations/metadata include: developing with the Hasura
CLI in a persistent project directory (so all changes are tracked), using
migrations for any database changes instead of making manual production DB
edits, and grouping metadata changes as needed. Hasura can even **automatically
apply migrations and metadata on server startup** (or in CI) for continuous
delivery
([Migrations, Metadata, and Seeds Overview | Hasura GraphQL Docs](https://hasura.io/docs/2.0/migrations-metadata-seeds/overview/)).
This makes it possible to fully script the deployment of your backend.
In short, Hasura treats the database schema and GraphQL API config as
first-class artifacts that can be **managed like code**. This is essential for
collaborating on a Hasura project in a team and for maintaining **consistency
across dev/staging/prod** in enterprise setups.
## Auto-generated rest endpoints
While GraphQL is Hasura’s primary interface, Hasura also caters to RESTful
patterns by allowing you to **expose REST endpoints** for specific queries or
mutations. This feature (often called “RESTified Endpoints”) is useful for cases
where you might need a traditional REST API for integration or backward
compatibility, without giving up Hasura’s automation.
There are two ways to create REST endpoints in Hasura:
* **Automatic CRUD Endpoints:** Hasura can automatically generate RESTful
endpoints for each tracked table (enabled via the console). With a few clicks,
you can get standard endpoints like `GET /api/rest/users` (to fetch data from
a table), `POST /api/rest/users` (to insert), etc., corresponding to the
underlying GraphQL queries/mutations for that table. These require no custom
code and provide a quick way to serve a basic REST API in addition to GraphQL
([Create a RESTified Endpoint | Hasura GraphQL Docs](https://hasura.io/docs/2.0/restified/create/)).
This is helpful if you want to support legacy clients or third-party services
that expect RESTful JSON APIs.
* **Custom REST Endpoints from GraphQL:** You can also create a REST endpoint
from any saved GraphQL query or mutation. Using the Hasura Console’s API
Explorer, a developer can build a GraphQL query or mutation and save it with
an alias. By clicking “REST” and giving an endpoint path and HTTP method,
Hasura will expose that operation at a REST endpoint
([Add REST endpoints to Hasura GraphQL queries and mutations](https://hasura.io/blog/adding-rest-endpoints-to-hasura-cloud)).
You can include path parameters in the URL (e.g., `/users/:id` mapping to a
GraphQL query with a variable) and choose which HTTP verb to use. The request
to that REST endpoint will internally execute the associated GraphQL operation
and return the result. Hasura also supports parsing a JSON body for POST/PATCH
requests to pass dynamic variables to the GraphQL operation.
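For example, calling a hypothetical RESTified endpoint created from a saved
query (the endpoint path and fields are placeholders; the JSON result is keyed
by the saved query's root field):

```typescript
const jwt = "your-jwt-token"; // placeholder; the usual permission rules apply

// Hypothetical endpoint created from a saved query such as
// `query ($id: Int!) { users_by_pk(id: $id) { id name } }` mounted at /users/:id.
async function getUserViaRest(id: number) {
  const res = await fetch(
    `https://your-hasura.example.com/api/rest/users/${id}`,
    { headers: { Authorization: `Bearer ${jwt}` } }
  );
  const body = await res.json();
  return body.users_by_pk; // result keyed by the query's root field
}
```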
With these features, Hasura effectively **bridges REST and GraphQL**. You get
the flexibility to use GraphQL for new development (and benefit from its power
and type-safety), while still offering REST endpoints for specific use cases or
clients that need them. This can ease the transition for teams moving from REST
to GraphQL, or enable incremental adoption (for example, gradually replacing
REST endpoints with GraphQL without breaking existing clients). All the usual
Hasura advantages – like authorization rules – apply equally to requests coming
through these RESTified endpoints. Underneath, it’s the same permission-checked
GraphQL execution, just accessed via a RESTful URL.
## Hasura actions for custom business logic
**Hasura Actions** allow developers to extend the GraphQL API with custom
business logic: a client calls a custom query or mutation defined as an Action,
Hasura makes an HTTP request to the endpoint you specify, the custom logic
executes (e.g., in a serverless function or microservice), and Hasura merges
the response back into the GraphQL response sent to the client
([Introducing Actions: Add custom business logic to Hasura](https://hasura.io/blog/introducing-actions)).
This architecture lets you offload standard CRUD to Hasura and focus on custom
operations where needed, without sacrificing the unified API experience.
Additionally, you might use **Event Triggers** (as discussed) for tasks that
should happen asynchronously after a database change, or **Remote Schemas** if
you decide to pull in data from other GraphQL services. For example, you could
mount a GitHub GraphQL API as a remote schema for certain queries, or use remote
joins to link a Hasura table with data from an external API.
Using Hasura as your BaaS can significantly accelerate development and provide a
robust, enterprise-grade API layer. Some of the key benefits and advantages
include:
* **Rapid Development & Productivity:** Hasura eliminates a huge amount of
boilerplate work. As noted, it **“automatically exposes full-featured GraphQL
query, mutation, subscription CRUD types for each table”**, saving you from
writing basic create/read/update/delete logic
([FAQs | Hasura GraphQL Docs](https://hasura.io/docs/2.0/faq/index/)).
Teams can skip months of API development – studies and case reports suggest
Hasura can cut development time by **50-80%** in building a data backend. This
lets developers focus on core business logic rather than repetitive CRUD
coding. New features can be prototyped and shipped faster, since adding a new
table or field to the database instantly updates the API. The learning curve
for GraphQL is also smoother for teams, because Hasura provides a working
example of queries and schema to start from.
* **Real-Time and Reactive by Default:** Unlike many backend solutions where
real-time features are an afterthought, Hasura was built with live queries in
mind. You get **instant realtime APIs** (GraphQL subscriptions) as a
first-class feature. This is a huge benefit for applications that need live
updates, like collaborative apps or dashboards – you don’t need a separate
socket server or polling mechanism. The fact that subscriptions are integrated
at the query level (with the same filtering and permission logic) means you
can turn any data feed into realtime with minimal effort. This real-time
support can be a differentiator in user experience, enabling push-based UI
updates with the simplicity of writing a GraphQL query.
* **Scalability and Performance:** Hasura’s architecture is cloud-native and
designed for scale. The engine is stateless and **horizontally scalable** –
you can run multiple instances behind a load balancer to handle increased
load, without any special coordination (no primary/secondary roles to worry
about). This makes it easy to scale up to high traffic: scale vertically by
giving Hasura more CPU/RAM, or scale horizontally by adding more instances,
even enable auto-scaling on Kubernetes or your cloud of choice. The compiled
approach and use of prepared statements means Hasura can often outperform
hand-written resolvers, especially for complex relational data, since it
optimizes join fetching and batching internally. In terms of resource
footprint, Hasura is lightweight (written in Haskell/C++ with aggressive
optimization); it’s known to handle thousands of requests per second on modest
hardware, and use only a few hundred MB of RAM even under load. The bottom
line: Hasura’s **performance is production-proven**, and it scales with
relatively little ops effort (you mainly ensure your database scales, as
Hasura will efficiently utilize it).
* **Unified & Modular Architecture:** With Hasura, you can consolidate multiple
data backends into a single **unified API**. This is beneficial for
microservice architectures or enterprises with many data silos. Hasura can
integrate databases, REST services, and external GraphQL APIs into one graph,
simplifying the client side considerably. It effectively acts as a **data
federation layer**, but one that is easy to configure. At the same time, its
modular design (via Actions and Remote Schemas) means you’re not constrained –
you can always drop down to custom logic or plug in another service. This
gives a clean separation of concerns: use Hasura for what it’s good at (CRUD,
realtime, auth, relationships), and augment it for domain-specific functions.
Many developers find this hybrid approach very productive: Hasura covers the
generic 80%, and the remaining 20% you implement as isolated services that
mesh in via Hasura. The result is an architecture that is both **extensible
and maintainable**, leveraging Hasura as an engine and router for various
backend pieces.
* **Robust Security and Access Control:** Hasura’s in-built security features
(RBAC, row-level permissions, allow-lists, etc.) allow you to build a secure
API without writing a lot of custom code or middleware. Permissions are
declarative and enforced at the query level, reducing chances of oversight.
You can confidently expose your database through Hasura because you can
tightly control what each user can do. Furthermore, Hasura supports
**enterprise security practices**: you can enforce SSL, configure CORS
domains, require authentication for all requests, and even turn on a query
**allow-list** to restrict operations to only pre-approved queries in
production (preventing malicious or expensive queries). This, coupled with
detailed logging and monitoring, gives ops teams the tools to run a GraphQL
API in production with the same confidence as a traditional REST API behind an
API gateway.
* **Seamless Developer Experience:** Developers working with Hasura often praise
its DX. The Hasura Console GUI makes it easy to visualize your data model, run
test queries, and manage everything from permissions to events with a
point-and-click interface. It also has a built-in migration/metadata system
that plugs into Git workflows, so everything can be scripted and reviewed. The
GraphQL API Hasura generates is fully introspectable and compatible with
standard GraphQL tooling (you can use GraphiQL, Apollo Client, Relay, etc. out
of the box). Documentation for your API is automatically available (since
GraphQL’s schema can be queried for documentation strings or visualized in
tools like GraphQL Playground). Moreover, because Hasura is open-source and
widely adopted, there is a rich community and plenty of examples for various
use cases. You’ll find that using Hasura can standardize how your team builds
backends – it encourages a consistent, declarative style which often leads to
fewer bugs and faster onboarding for new developers. As a testament to its DX
and reliability, Hasura has been **widely adopted in production by companies
of all sizes** and has a large GitHub community (it gained popularity quickly
due to the “seamless developer experience” it provides).
file: ./content/docs/platform-components/database-and-storage/ipfs-storage.mdx
meta: {
"title": "Ipfs storage",
"description": "Guide to using IPFS storage solutions in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
**InterPlanetary File System (IPFS)** is a **decentralized and distributed file
storage system** designed to enhance the way data is stored and accessed on the
web. Unlike traditional cloud storage solutions that rely on centralized
servers, **IPFS stores content across a peer-to-peer (P2P) network**, making it
**efficient, secure, and resilient** against failures and censorship.
Within **SettleMint**, **IPFS is available as a storage option**, allowing
developers to **store, retrieve, and manage files seamlessly** for blockchain
applications. It enables **off-chain data storage**, ensuring that blockchain
networks remain efficient while still being able to reference large datasets
securely. [Learn more on IPFS here](https://docs.ipfs.tech/concepts/)
### Why use ipfs?
* **Decentralized & Fault-Tolerant** – No single point of failure.
* **Content Addressability** – Files are retrieved using unique cryptographic
**Content Identifiers (CIDs)** rather than URLs.
* **Efficient Storage & Bandwidth Optimization** – Files are **de-duplicated and
distributed** across nodes.
* **Ideal for Blockchain Applications** – Enables **off-chain storage** while
linking data securely to on-chain smart contracts.
* **Scalable & Cost-Effective** – No dependency on expensive centralized storage
solutions.
***
SettleMint offers **IPFS as a decentralized storage solution**, allowing users
to store data in a **distributed, verifiable, and tamper-proof manner**. This is
particularly useful for **storing large files, metadata, documents, NFTs, and
other digital assets** that would otherwise be expensive or inefficient to store
directly on-chain.
### Features of ipfs storage in settlemint
* **Seamless File Upload & Retrieval** – Store files and retrieve them via
**CIDs**.
* **Blockchain Integration** – Reference IPFS-stored files within **smart
contracts**.
* **Secure & Immutable Storage** – Files stored on IPFS remain **tamper-proof**.
* **Enhanced Performance** – Optimized file access through **IPFS gateways**.
* **Redundancy & Availability** – Files are distributed across the network for
increased resilience.
***
## Api reference
SettleMint provides **multiple IPFS APIs** as shown in your dashboard:
| **API** | **Endpoint** | **Purpose** |
| ----------------- | ----------------------------------------------------------- | -------------------------------------------------------------- |
| **IPFS HTTP API** | `https://your-ipfs-name.gke-region.settlemint.com/api/v0` | Core IPFS node operations (add, cat, get, pin) |
| **Gateway** | `https://your-ipfs-name.gke-region.settlemint.com/gateway/` | Public access to IPFS content via HTTP |
| **Cluster API** | `https://your-ipfs-name.gke-region.settlemint.com/cluster` | Manage content across the IPFS cluster |
| **Pinning API** | `https://your-ipfs-name.gke-region.settlemint.com/pinning` | Control pinning operations using the standard IPFS Pinning API |
### Common IPFS HTTP API endpoints
| **HTTP Method** | **Endpoint** | **Description** |
| --------------- | ----------------------------------- | ------------------------------------------------------- |
| `POST` | `/api/v0/add` | Uploads a file to IPFS and returns its **CID** |
| `POST` | `/api/v0/cat?arg=` | Retrieves a file's contents from IPFS using its **CID** |
| `POST` | `/api/v0/get?arg=` | Downloads a file (with directory structure) |
| `POST` | `/api/v0/pin/add?arg=` | Pins a file to prevent it from being garbage collected |
| `POST` | `/api/v0/pin/rm?arg=` | Unpins a file |
| `POST` | `/api/v0/pin/ls?arg=&type=all` | Lists pinned content |
For complete API reference, see the [official IPFS HTTP API documentation](https://docs.ipfs.tech/reference/http/api/).
***
## Credentials & authentication
Your SettleMint IPFS instance provides the following credentials for authentication:
| **Credential** | **Description** | **Used With** |
| ------------------------ | -------------------------------------------------------- | ------------------------------------- |
| **Cluster API Username** | Identifies your IPFS cluster user (e.g., `your-ipfs-id`) | Basic Auth for API requests |
| **Cluster API Password** | Password for authenticating API requests | Basic Auth for API requests |
| **Peer ID** | Unique identifier for your node in the IPFS network | P2P communication |
| **Public Key** | Used for cryptographic verification | P2P communication |
| **Private Key** | For signing operations (keep secure) | P2P operations/advanced functionality |
| **Cluster Pinning JWT** | JWT token for authenticating to the IPFS Pinning API | Remote pinning services |
These credentials can be found in your SettleMint dashboard under the IPFS storage instance details.
### Authentication examples
#### Basic authentication (for HTTP API and Cluster API)
```javascript
// Basic authentication with your Cluster API credentials
const username = "your-ipfs-id"; // Your Cluster API Username
const password = "your-password"; // Your Cluster API Password
const authString = btoa(`${username}:${password}`);
fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/add", {
method: "POST",
headers: {
"Authorization": `Basic ${authString}`
},
body: formData // Your file in FormData format
});
```
#### JWT authentication (for Pinning API)
```javascript
// JWT authentication with your Cluster Pinning JWT Token
const jwtToken = "your-pinning-jwt-token";
fetch("https://your-ipfs-name.gke-region.settlemint.com/pinning/pins", {
method: "POST",
headers: {
"Authorization": `Bearer ${jwtToken}`,
"Content-Type": "application/json"
},
body: JSON.stringify({
cid: "QmExample...",
name: "important-file.txt"
})
});
```
## Usage examples
Here are practical examples of common IPFS operations with SettleMint:
### 1. Uploading a file
```javascript
async function uploadToIPFS(file) {
const formData = new FormData();
formData.append('file', file);
// Get credentials from your SettleMint dashboard
const username = "your-ipfs-id"; // Your Cluster API Username
const password = "your-password"; // Your Cluster API Password
const authString = btoa(`${username}:${password}`);
const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/add", {
method: "POST",
headers: {
"Authorization": `Basic ${authString}`
},
body: formData
});
const data = await response.json();
console.log("File uploaded with CID:", data.Hash);
return data.Hash; // Returns the CID of the uploaded file
}
// Example usage with file input
// const file = document.getElementById('fileInput').files[0];
// uploadToIPFS(file).then(cid => console.log("Use this CID in your smart contracts:", cid));
```
### 2. Retrieving a file
```javascript
async function getFileFromIPFS(cid) {
// Get credentials from your SettleMint dashboard
const username = "your-ipfs-id";
const password = "your-password";
const authString = btoa(`${username}:${password}`);
const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/cat?arg=${cid}`, {
method: "POST",
headers: {
"Authorization": `Basic ${authString}`
}
});
// For text files
const content = await response.text();
console.log("File content:", content);
return content;
// For binary files (uncomment as needed)
// const blob = await response.blob();
// return blob;
}
```
### 3. Pinning a file permanently
```javascript
async function pinFile(cid) {
// Get credentials from your SettleMint dashboard
const username = "your-ipfs-id";
const password = "your-password";
const authString = btoa(`${username}:${password}`);
const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/pin/add?arg=${cid}`, {
method: "POST",
headers: {
"Authorization": `Basic ${authString}`
}
});
const data = await response.json();
console.log("Pinned:", data.Pins);
return data;
}
```
### 4. Using the Gateway for public access
The IPFS gateway allows anyone to access content by CID through a standard web browser without authentication:
```
https://your-ipfs-name.gke-region.settlemint.com/gateway/ipfs/QmYourCidHere
```
This is useful for:
* Sharing files publicly
* Accessing IPFS content in web applications
* Integrating IPFS content with smart contracts that need to read the data
### 5. Using with remote pinning services
SettleMint's IPFS implements the IPFS Pinning Service API, allowing you to pin content from other IPFS nodes:
```javascript
async function remotePinFile(cid, name) {
// Get pinning JWT token from your SettleMint dashboard
const jwtToken = "your-pinning-jwt-token";
const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/pinning/pins", {
method: "POST",
headers: {
"Authorization": `Bearer ${jwtToken}`,
"Content-Type": "application/json"
},
body: JSON.stringify({
cid: cid,
name: name
})
});
const data = await response.json();
return data;
}
```
# Interplanetary file system (ipfs)
InterPlanetary File System (IPFS) is an open-source, peer-to-peer distributed
file system for storing and accessing content on the decentralized web. Unlike
traditional HTTP-based systems that locate data by server address (URL), IPFS
uses content addressing – identifying data by its content hash – to retrieve
files from any node that holds them. By combining ideas from well-established
technologies (like distributed hash tables and Git-like Merkle trees), IPFS
enables a global, versioned, and content-addressable storage network.
This document provides a technical overview of IPFS, explaining its underlying
architecture and practical usage for developers, architects, and technically
inclined stakeholders. We will cover IPFS's foundational principles, how files
are stored and retrieved via content IDs (CIDs), the core components and
protocols that make up IPFS, methods of interacting with the network, common
usage patterns (from file sharing to dApp integration), as well as the benefits,
limitations, and best practices for deploying IPFS in production environments.
## Foundational principles of ipfs
IPFS is built on several key principles that distinguish it from traditional
file storage and sharing systems:
### Content addressing and cids
At the heart of IPFS is content addressing – every piece of data is identified
by a content identifier (CID) which is derived from the data itself (via a
cryptographic hash). In simpler terms, the address of a file in IPFS is its
content, not the location of a server. This means that if two files have exactly
the same content, they will have the same CID, and that CID will always refer to
that content regardless of where it is stored.
When a file is added to IPFS, it's split into fixed-size blocks (chunks) and
each block is hashed; a final hash (CID) is produced for the entire file (often
represented as a root node linking all the chunks). The CID is effectively a
unique fingerprint of the content, and even a small change in the file will
produce a completely different CID.
Content addressing provides strong integrity guarantees: if you fetch data by
its CID, you can verify (by re-hashing) that the content matches the CID you
requested. This removes the need to trust a particular server – you're trusting
the cryptographic hash. CIDs are designed with a flexible format (using
multihash and multicodec conventions) that includes metadata about the hashing
and encoding, but conceptually a CID is just a hash of content.
In summary, IPFS's content addressing decouples the what (the data) from the
where (the location), enabling data to be retrieved from any peer in the network
that has it.
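To make this concrete, here is a minimal sketch reusing the endpoint and credential placeholders from the examples above. It relies on the RPC API's `only-hash=true` option, which chunks and hashes data without storing it; calling it twice with the same content returns the same CID.

```javascript
// Hedged sketch: identical content always hashes to the same CID.
// Endpoint and credentials are placeholders from your SettleMint dashboard.
async function cidFor(text) {
  const formData = new FormData();
  formData.append("file", new Blob([text]));
  const authString = btoa("your-ipfs-id:your-password");
  // only-hash=true chunks and hashes the data without writing it to disk
  const response = await fetch(
    "https://your-ipfs-name.gke-region.settlemint.com/api/v0/add?only-hash=true",
    {
      method: "POST",
      headers: { "Authorization": `Basic ${authString}` },
      body: formData
    }
  );
  const data = await response.json();
  return data.Hash;
}

// cidFor("hello ipfs") and cidFor("hello ipfs") resolve to the same CID;
// changing a single byte of the input yields a completely different one.
```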
### Merkle dag (content linking and immutability)
IPFS represents data as a Merkle Directed Acyclic Graph (Merkle DAG) – a
structure in which each node (file block or object) is linked via hashes to
other nodes. Every file added to IPFS is stored as a Merkle DAG: if the file is
small enough it might be a single block (node) whose CID is the hash of the
file, but larger files are broken into many hashed blocks linked together.
The top-level node (often called a root) contains links (hash pointers) to its
constituent blocks, which may themselves link to sub-blocks, forming a tree of
hashes. Because each link is a hash, the entire structure is
self-authenticating: the root CID effectively seals the content of the whole
file tree.
The use of Merkle DAGs means content is immutable – once data is added, that
exact data will always correspond to the same CID. If you update a file, the
modified file will produce a new CID, while the old version remains addressable
by the old hash (this enables versioning, as discussed later). Merkle DAGs also
enable deduplication: if two files share common chunks, those chunks (being
identical content) have the same CID and can be stored only once and referenced
in multiple graphs, saving space.
IPFS's data model, called IPLD (InterPlanetary Linked Data), generalizes this
Merkle DAG structure so that many data types (files, directories, Git trees,
blockchains, etc.) can be represented and linked by their hashes. On top of
IPLD, IPFS uses UnixFS as a higher-level schema for files and directories,
allowing hierarchical file systems to be built using Merkle DAG nodes (for
example, directories are nodes that list hashes of their children).
The Merkle DAG approach gives IPFS its properties of content integrity and
naturally supports a versioned file system (much like how Git commits form a DAG
of versions).
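You can inspect this structure directly: the RPC API can return any DAG node as JSON. A minimal sketch, with the same placeholders as before (the exact JSON shape depends on the node's codec, but for a chunked file the root's links list the child block CIDs):

```javascript
// Hedged sketch: inspect a file's Merkle DAG root via the dag/get RPC endpoint.
async function inspectDag(cid) {
  const authString = btoa("your-ipfs-id:your-password");
  const response = await fetch(
    `https://your-ipfs-name.gke-region.settlemint.com/api/v0/dag/get?arg=${cid}`,
    {
      method: "POST",
      headers: { "Authorization": `Basic ${authString}` }
    }
  );
  const node = await response.json();
  // For a large file, the links list the CIDs of its chunks;
  // a small single-block file has no links at all.
  console.log(JSON.stringify(node, null, 2));
  return node;
}
```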
### Peer-to-peer networking and decentralization
IPFS operates over a distributed peer-to-peer (P2P) network of nodes rather than
client-server architecture. Any computer running an IPFS node can participate in
the network, storing data and fulfilling requests from others. There are no
central servers holding the authoritative copy of content; instead, content is
shared and cached by many peers.
When you request a CID on IPFS, the system doesn't query a single location – it
asks the network which peers have the content and retrieves it from whichever
peer (or peers) can serve it fastest. This P2P design makes IPFS inherently
decentralized and resilient: even if some nodes leave or go offline, data can
still be retrieved from other nodes that have it, with no single point of
failure.
Peers discover and communicate with each other using a networking library called
libp2p, which handles peer addressing, secure transport, and multiplexing for
the IPFS network. Each IPFS node has a unique Peer ID (derived from a
cryptographic key) which is used to identify it in the network, and nodes
connect to each other via swarm addresses (multiaddresses) over various
transport protocols (TCP, UDP, QUIC, etc.).
This peer network forms the substrate over which IPFS content is distributed. A
new node joining IPFS initially connects to a set of bootstrap peers and then
learns about other peers progressively. In essence, IPFS creates a global swarm
of peers that collectively store and serve content, akin to a sophisticated
BitTorrent-like swarm but for a unified filesystem.
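You can observe this from your own node: the RPC API exposes the node's Peer ID, its swarm addresses, and the peers it is currently connected to. A short sketch with the usual placeholder credentials:

```javascript
// Hedged sketch: query the node's identity and its current swarm connections.
const base = "https://your-ipfs-name.gke-region.settlemint.com/api/v0";
const headers = {
  "Authorization": `Basic ${btoa("your-ipfs-id:your-password")}`
};

const id = await (await fetch(`${base}/id`, { method: "POST", headers })).json();
console.log("Peer ID:", id.ID);
console.log("Swarm addresses:", id.Addresses);

const peers = await (await fetch(`${base}/swarm/peers`, { method: "POST", headers })).json();
console.log("Connected peers:", (peers.Peers || []).length);
```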
## Ipfs architecture and core components
Under the hood, IPFS is composed of several interconnected subsystems that work
together to enable content-addressed storage and retrieval. The core components
of IPFS include the node software (sometimes called a Kubo node, formerly
go-ipfs), the distributed hash table for peer/content discovery, the Bitswap
exchange protocol, and the IPLD data model layer. Let's explore these components
in more detail:
### Ipfs nodes and repositories
An IPFS node is a program (and by extension, the machine running it) that
participates in the IPFS network. Each node stores data in a local repository
which contains the content blocks the node has pinned or cached, as well as
indexing information. Nodes can be run by anyone – from personal laptops to
servers – and all nodes collectively form the IPFS network.
Each node is identified by a Peer ID, and can have multiple network addresses
through which it connects to others. IPFS nodes communicate over libp2p, meaning
they use a modular networking stack that can run over various transports and
apply encryption and NAT traversal as needed. When running, a node continuously
maintains connections to a set of peers.
Nodes do not automatically replicate all data; instead, a node stores only the
content it intentionally adds or "pins", plus any other content it has fetched
(which it may cache temporarily). By default, IPFS treats stored data like a
cache – it may be garbage-collected if not pinned (more on pinning in a later
section). This design ensures that participating in IPFS doesn't mean storing
the entire network's data, only what each node finds relevant.
The node exposes a few interfaces for users/applications: a command-line
interface, a RESTful HTTP API, and optionally a gateway interface for browser
access. In essence, an IPFS node is a self-contained peer that can store content
(in a local Merkle block store), connect to other peers, advertise the content
it holds, and fetch content from others upon request.
### Distributed hash table (dht) for content routing
To locate which nodes have a given piece of content (CID), IPFS relies on a
distributed hash table (DHT) based on Kademlia. The IPFS DHT is a decentralized
index that maps CIDs to the peer IDs of nodes that can provide that content.
When a node adds content to IPFS, it announces (publishes) to the DHT that "Peer
X has content with CID Y". Later, when some node wants to retrieve that CID, it
performs a DHT lookup to find provider records – essentially the network
addresses of peers who have the content.
The DHT is spread across all IPFS nodes (or specifically, those that support the
DHT – some light nodes might use delegate servers). It uses Kademlia's XOR-based
routing: each peer is responsible for a portion of the hash space and knows how
to route queries closer to the target CID's key. In practical terms, an IPFS
node searching for content will query the DHT by hashing the CID and finding the
closest peers in the key space, who either know the provider or can refer the
query further along.
The public IPFS DHT (sometimes called the Amino DHT) is a global, open network
averaging thousands of peers. It is designed to handle churn (peers
joining/leaving) gracefully and to find providers within a short time (most
lookups complete in well under 2 seconds on the public network). The DHT makes
IPFS content routing decentralized – there is no central index server, the "who
has what" information is distributed among all peers.
In addition to the DHT, IPFS can also use mDNS (multicast DNS) for discovering
peers on a local network (useful for LAN or offline scenarios), and can fall
back to delegated routing (asking a trusted server to perform DHT queries on
behalf of lightweight nodes) in constrained environments. But the primary
mechanism is the Kademlia DHT which allows any node to ask the network for
providers of a given CID.
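Provider lookups can be issued directly against the RPC API. A hedged sketch (newer Kubo versions expose this as `/routing/findprovs`, older ones as `/dht/findprovs`; the response is a stream of newline-delimited JSON routing records):

```javascript
// Hedged sketch: ask the DHT which peers can provide a given CID.
async function findProviders(cid) {
  const authString = btoa("your-ipfs-id:your-password");
  const response = await fetch(
    `https://your-ipfs-name.gke-region.settlemint.com/api/v0/routing/findprovs?arg=${cid}`,
    {
      method: "POST",
      headers: { "Authorization": `Basic ${authString}` }
    }
  );
  // Each line of the body is one JSON routing record; provider records carry
  // the peer IDs and addresses of nodes announcing the CID.
  const text = await response.text();
  return text.trim().split("\n").map((line) => JSON.parse(line));
}
```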
### Bitswap: the block exchange protocol
Once you know which peers have the content you want, the next step is to
retrieve the data. IPFS uses a protocol called Bitswap to coordinate the
transfer of content blocks between peers. When an IPFS node needs blocks
(identified by CIDs), it sends out Bitswap wantlist messages to peers it's
connected to, asking for those CIDs. Peers that have the requested blocks will
respond by sending them back.
A key feature of Bitswap is that it's not restricted to a single file "swarm" –
an IPFS node might be simultaneously exchanging blocks for many different files
with many peers. Bitswap also allows parallel downloads: if multiple peers have
a block, a node can fetch different blocks from different peers, increasing
throughput for large files. Essentially, Bitswap enables a node to assemble
content by grabbing pieces from any peers that can provide them, which can
dramatically speed up retrieval for popular content (swarming).
Peers running Bitswap maintain a ledger to incentivize fairness (they track data
exchanged and generally prefer to send to peers who reciprocate, to avoid
freeloaders). Interestingly, Bitswap can also discover content in the process of
transfer – if you connect to some peers and request a block, even if they don't
have it, they might forward the request or later receive the block and then send
it, functioning as a dynamic supply network. This means Bitswap acts as both a
data transfer protocol and a limited content discovery mechanism (for example, a
node might learn about a provider when that provider responds to a third party's
request in a swarm).
Overall, Bitswap is the engine that moves blocks around in IPFS: it's how data
actually gets from point A to point B (or C, D, etc.), once point B knows that
point A (and others) have what it needs.
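Bitswap activity is also visible through the RPC API. A small sketch reading the local wantlist and transfer counters (field names follow the Kubo `bitswap/stat` response):

```javascript
// Hedged sketch: read the node's Bitswap statistics.
const authString = btoa("your-ipfs-id:your-password");
const response = await fetch(
  "https://your-ipfs-name.gke-region.settlemint.com/api/v0/bitswap/stat",
  {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` }
  }
);
const stats = await response.json();
// The wantlist holds the CIDs this node is currently asking peers for;
// the counters track blocks and bytes exchanged with the swarm.
console.log("Wantlist:", stats.Wantlist);
console.log("Blocks received:", stats.BlocksReceived);
```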
### Ipld and data formats
IPLD (InterPlanetary Linked Data) is the data model layer of IPFS that defines
how structured data is represented as content-addressable objects. All content
in IPFS – files, directories, and other complex data – is expressed in terms of
IPLD nodes and links. An IPLD node can be thought of as a small data object
(e.g., a file chunk or a directory listing) with a content-addressable
identifier (CID). Links between IPLD nodes are just CIDs pointing to other
nodes, which, thanks to content addressing, also serve as cryptographic
pointers. The Merkle DAG we discussed earlier is essentially an IPLD instance.
IPLD is designed to be flexible: it supports multiple codecs and formats
(through the multicodec mechanism) so that it can interoperate with data from
other systems. For example, Git commits and Ethereum blocks can both be
represented as IPLD nodes – IPFS doesn't just handle "files" but any data that
can be content-addressed. In the context of typical file storage, the main IPLD
format is UnixFS, which defines how file data and metadata (like filenames,
sizes, directory structure) are represented in the DAG. When you add a file via
ipfs add, it is chunked and encoded into an IPLD UnixFS DAG automatically.
IPLD gives IPFS advantages like easy upgradability and interoperability: new
data structures or hash functions can be introduced without breaking the system,
because CIDs are self-describing and IPLD provides a common framework. IPLD is
the component that ensures IPFS isn't limited to a single file format or data
type – it's a universal graph layer where any content-addressed data structure
can be modeled and linked.
## Storing and retrieving files on ipfs
One of the core functions of IPFS is, of course, adding files and getting them
back. Let's walk through how files are stored and retrieved in IPFS using
content addressing and the components above:
### Adding (storing) a file
Suppose you want to store a file on IPFS. Using the IPFS CLI or API, you run
`ipfs add <file>`. The IPFS node first splits the file into blocks
(default \~256 KB each) and generates cryptographic hashes for each block. If the
file is small enough to fit in one block, that block's hash is the file's CID.
If the file is larger, IPFS creates a Merkle DAG: it will create a root IPFS
object (a kind of meta-block) that contains the hashes (CIDs) of all the file's
chunks as links. This root object gets its own hash which becomes the CID
representing the entire file.
IPFS then stores all these blocks in the local repository. As a final step, the
node announces to the network that it has this content. It does so by publishing
a provider record in the DHT for the file's root CID (and possibly for each
block CID) – essentially telling the DHT, "Peer X can provide CID Y". Once
added, the content is now available to any other IPFS peer that requests it by
that CID.
Notably, adding a file to IPFS does not mean the whole world immediately gets a
copy – it means the file is now available on the network through your node.
Other peers can retrieve it if they know the CID or discover it. Also, IPFS
ensures identical content isn't duplicated: if you add a file that contains some
blocks already present on your node (or if you add the exact same file twice),
it will reuse the existing blocks rather than store duplicates, thanks to
content-addressing (this deduplication can work across files and even across
users in the network in cases where the same chunks are shared).
### Retrieving a file
Now, how does someone fetch that file using IPFS? Given the CID of the content,
an IPFS node will perform a lookup to find who has the data, then fetch the data
block by block. In practice, the process works in two phases: content routing
and content transfer.
First, the node uses the DHT (or other discovery methods) to ask, "Who has CID
X?" It sends queries through the DHT network until it finds one or more provider
records for that CID (e.g., it learns that Peer X at such-and-such address can
serve it). With provider info in hand, the node opens connections (via libp2p)
to one or more of those peers.
Next comes the Bitswap phase: the node sends a wantlist for the CID (if it's a
multi-block file, it will request the root block first, then proceed to request
the linked blocks). The peer(s) holding the data respond by sending the blocks
over. If multiple peers have the content, the downloading node can get different
chunks from different peers in parallel, potentially speeding up the transfer.
As blocks arrive, IPFS verifies each block's hash against the expected CID,
ensuring data integrity.
When all the pieces are retrieved, IPFS assembles them (following the DAG links)
to reconstruct the original file. The user who requested the file can now read
the content (e.g., the `ipfs cat <cid>` command will output the file's data once
fetched). Importantly, the act of retrieving also caches the data on the
downloader's node: now that node can serve the file to others as well, at least
temporarily.
By design, IPFS retrieval is location-agnostic – it doesn't matter where the
content comes from, as long as the content hash matches. You could get one chunk
from a server across the ocean and another from a peer on your local network;
the resulting file is verified and identical to the original. This distributed
retrieval provides robustness and potentially better performance through
locality (if a nearby node has the data) and redundancy. And since content is
addressed by hash, IPFS ensures you never get the wrong file – if someone tries
to send bogus data, the hash won't match and it will be rejected.
## Data persistence and pinning
By design, IPFS is agnostic about persistence: it doesn't automatically make
content permanent or highly available; it simply provides the mechanism to
distribute and retrieve it. Persistence in IPFS is the responsibility of nodes
that care about the data. When you add a file to IPFS, your local node now has a
copy and will serve it to others – as long as you keep that node running and
don't remove the data. Other nodes that download that file may cache it, but
caches are not guaranteed to stay forever (nodes have limited storage and will
eventually clean up data that isn't explicitly marked to keep).
The act of marking data as "do not delete" on an IPFS node is called pinning.
Pinning a CID on a node tells that node to store the data indefinitely, exempt
from garbage collection. For example, after adding content, you'd typically pin
it on at least one node (the node that adds content pins it automatically by
default). If content is not pinned, an IPFS node treats it as
cache – it might be dropped if space is needed or if the node operator runs
`ipfs repo gc` (garbage collect). Therefore, to achieve persistence, someone
must pin the data on a persistent IPFS node.
In a decentralized context, this could be the original uploader or any number of
volunteers or service providers who decide the content is worth keeping. IPFS
itself doesn't replicate content across the network without instruction; you
either rely on others to fetch and thereby temporarily host it, or you use
additional services to distribute copies. Many decentralized storage setups use
multiple IPFS nodes (or pinning services) to ensure there are always several
online copies of important data. If no node pins a piece of content and no
cached copies remain, that content effectively becomes unreachable until a node
that still holds the data comes back online or re-adds it.
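To see what a node is committed to keeping, you can list its pins through the RPC API. A minimal sketch, again with placeholder credentials:

```javascript
// Hedged sketch: list the CIDs this node has pinned (and will therefore
// keep through garbage collection).
async function listPins() {
  const authString = btoa("your-ipfs-id:your-password");
  const response = await fetch(
    "https://your-ipfs-name.gke-region.settlemint.com/api/v0/pin/ls?type=recursive",
    {
      method: "POST",
      headers: { "Authorization": `Basic ${authString}` }
    }
  );
  const data = await response.json();
  // data.Keys maps each pinned CID to its pin type
  console.log(Object.keys(data.Keys || {}));
  return data.Keys;
}
```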
## Use cases for ipfs in blockchain applications
### 1. **Smart contract data storage**
* Instead of storing large data **on-chain**, store it **off-chain** on IPFS and
reference the **CID** in smart contracts.
* Example: **Legal documents, digital agreements, audit records**.
### 2. **NFT metadata & digital assets**
* Store **metadata, images, and media** for NFTs in a decentralized and
tamper-proof manner.
* Example: **NFT artwork, game assets, token metadata**.
### 3. **Decentralized identity & credentials**
* Store and verify **identity documents, certificates, and credentials**
securely on IPFS.
* Example: **Verifiable credentials in education, healthcare, and finance**.
### 4. **Immutable data storage for regulatory compliance**
* Ensure **auditable and tamper-proof records** for compliance-heavy industries.
* Example: **Financial records, compliance reports, supply chain tracking**.
***
## Alternative Ways to Interact with IPFS
### File Manager Interface
SettleMint provides a convenient **web-based File Manager** for your IPFS storage, allowing you to manage files without writing any code.
#### Key features of the file manager
* **Upload Files** – Drag-and-drop or select files to upload to your IPFS node
* **Browse Files** – View all files stored on your IPFS node
* **Search** – Find files using CID or filenames
* **Pin Management** – Pin/unpin files to control persistence
* **File Details** – View metadata including size, CID, and pin status
* **Copy CIDs** – Easily copy content identifiers for use in applications
#### To access the File Manager:
1. Navigate to your IPFS instance in the SettleMint dashboard
2. Click the **File Manager** tab
3. Use the **Import** button to add new files
The File Manager provides an easy-to-use interface for basic IPFS operations without requiring command-line tools or programming knowledge, making it ideal for testing and management tasks.
### JavaScript/TypeScript Integration
#### js-ipfs-http-client
The official JavaScript client library for IPFS HTTP API:
```bash
# Install using npm
npm install ipfs-http-client
# or using yarn
yarn add ipfs-http-client
```
```javascript
import { create } from 'ipfs-http-client'
// Connect to your SettleMint IPFS node
const auth = 'Basic ' + Buffer.from('your-ipfs-id:your-password').toString('base64')
const client = create({
host: 'your-ipfs-name.gke-region.settlemint.com',
port: 443,
protocol: 'https',
apiPath: '/api/v0',
headers: {
authorization: auth
}
})
// Upload file example
const addFile = async (file) => {
const added = await client.add(file)
return added.cid.toString()
}
// Retrieve file example
const getFile = async (cid) => {
const chunks = []
for await (const chunk of client.cat(cid)) {
chunks.push(chunk)
}
return Buffer.concat(chunks)
}
```
### Python Integration
#### ipfshttpclient
Python client library for the IPFS HTTP API:
```bash
pip install ipfshttpclient
```
```python
import ipfshttpclient
import base64
# Connect to your SettleMint IPFS node
auth = base64.b64encode(b"your-ipfs-id:your-password").decode("ascii")
client = ipfshttpclient.connect(
    '/dns/your-ipfs-name.gke-region.settlemint.com/tcp/443/https',
headers={'Authorization': f'Basic {auth}'}
)
# Upload file example
def add_file(file_path):
res = client.add(file_path)
return res['Hash']
# Retrieve file example
def get_file(cid):
return client.cat(cid)
```
### Go Integration
#### go-ipfs-api
Official Go client library for the IPFS HTTP API:
```bash
go get github.com/ipfs/go-ipfs-api
```
```go
package main
import (
"fmt"
"os"
"encoding/base64"
shell "github.com/ipfs/go-ipfs-api"
)
func main() {
// Connect to your SettleMint IPFS node
auth := base64.StdEncoding.EncodeToString([]byte("your-ipfs-id:your-password"))
sh := shell.NewShell("https://your-ipfs-name.gke-region.settlemint.com/api/v0")
sh.SetHeader("Authorization", "Basic "+auth)
// Upload file example
cid, err := sh.Add(os.Stdin)
if err != nil {
fmt.Fprintf(os.Stderr, "error: %s", err)
os.Exit(1)
}
fmt.Printf("added %s\n", cid)
// Retrieve file example
data, err := sh.Cat(cid)
if err != nil {
fmt.Fprintf(os.Stderr, "error: %s", err)
os.Exit(1)
}
// Process data stream
// ...
}
```
### Java Integration
#### java-ipfs-http-client
Java implementation for the IPFS HTTP API:
```groovy
dependencies {
implementation 'com.github.ipfs:java-ipfs-http-client:1.3.3'
}
```
```java
import io.ipfs.api.IPFS;
import io.ipfs.api.MerkleNode;
import io.ipfs.api.NamedStreamable;
import java.io.File;
import java.io.IOException;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
public class IPFSExample {
public static void main(String[] args) throws IOException {
// Connect to your SettleMint IPFS node
String auth = Base64.getEncoder().encodeToString("your-ipfs-id:your-password".getBytes());
        IPFS ipfs = new IPFS("your-ipfs-name.gke-region.settlemint.com", 443, "/api/v0", true); // boolean flag enables TLS
        Map<String, String> headers = new HashMap<>();
headers.put("Authorization", "Basic " + auth);
ipfs.setRequestHeaders(headers);
// Upload file example
NamedStreamable.FileWrapper file = new NamedStreamable.FileWrapper(new File("path/to/file"));
MerkleNode added = ipfs.add(file).get(0);
String cid = added.hash.toString();
// Retrieve file example
byte[] fileContents = ipfs.cat(cid);
}
}
```
## Best practices & considerations
* **Pinning Important Files** – IPFS operates on a caching mechanism, meaning
files may be garbage-collected if not pinned.
* **CID Management** – Since CIDs are content-based hashes, file updates result
in new CIDs. Maintain proper reference tracking.
* **Security & Privacy** – IPFS is **public by default**. For sensitive data,
consider **encryption before storing files**.
***
## Troubleshooting
* **File Not Found?** – Ensure the file is **pinned** and accessible through an
active IPFS node.
* **Slow Retrieval?** – Use **SettleMint's dedicated IPFS gateway** or public
**IPFS gateways** for faster access.
* **Storage Limitations?** – Consider using **external pinning services** to
maintain long-term file availability.
For further assistance, refer to **SettleMint's documentation** or the
**official IPFS documentation**.
***
## Additional resources
* **[IPFS Official Documentation](https://docs.ipfs.io/)**
* **[SettleMint Platform Guide](https://console.settlemint.com/documentation)**
* **[IPFS GitHub Repository](https://github.com/ipfs/ipfs)**
* **[IPFS & Blockchain Use Cases](https://ipfs.io/#use-cases)**
***
IPFS provides a **scalable, decentralized, and efficient** storage solution for
blockchain applications. Within **SettleMint**, IPFS can be easily used as a
**storage option**, allowing users to **store, retrieve, and reference files**
with minimal setup. By integrating IPFS into blockchain workflows, developers
can ensure **secure, tamper-proof, and cost-efficient off-chain storage**, while
keeping essential references **on-chain**.
file: ./content/docs/platform-components/database-and-storage/s3-storage.mdx
meta: {
"title": "S3 storage",
"description": "Guide to using S3 Minio storage solutions"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
## Minio as an s3-compatible storage solution
## Overview
MinIO is an **open-source, high-performance object storage system** that
provides full compatibility with the **Amazon S3 API**. It is designed to handle
**large-scale unstructured data storage**, making it a reliable option for
**cloud-native applications, AI/ML pipelines, big data analytics, and backup
solutions**. MinIO supports **self-hosted, private cloud, and hybrid
environments**, allowing users to maintain full control over their storage
infrastructure.
MinIO's architecture enables **high-speed read/write operations, horizontal
scalability, and strong data protection**. It supports **erasure coding,
encryption, and identity access management (IAM) policies**, ensuring secure and
efficient storage. Its lightweight nature allows it to run on **bare-metal
servers, containers, and Kubernetes clusters**, making it a versatile
alternative to centralized cloud storage solutions.
MinIO is an open-source, high-performance distributed object storage system
designed for large-scale unstructured data and cloud-native applications. It
provides a software-defined storage solution that is fully Amazon S3 API
compatible, allowing existing S3-based tools and applications to work with MinIO
seamlessly. MinIO can handle a wide range of unstructured data (e.g. photos,
videos, log files, backups, container images) with objects up to 5 TB in size,
making it suitable for demanding workloads in analytics, machine learning, and
enterprise IT environments. Released under an open-source AGPL v3.0 license,
MinIO has become a popular choice for building on-premises and hybrid cloud
storage infrastructures as a drop-in alternative to public cloud storage.
MinIO's architecture is purpose-built for scalability and reliability. A running
MinIO server instance manages local storage (drives or volumes) and exposes an
S3-compatible endpoint. In a standalone deployment, a single MinIO server with
one or more drives forms an object storage instance (useful for dev/test or edge
use). For production, MinIO is typically deployed in a distributed cluster:
multiple MinIO server nodes join to act as one unified object storage system.
MinIO recommends at least 4 nodes for proper durability and high availability in
distributed mode. Each node should have similar compute, storage, and network
resources, and they are grouped into a server pool that presents itself as a
single object storage service.
**Data distribution and erasure coding:**
MinIO automatically organizes the drives across all nodes into one or more
erasure sets (groups of drives) to protect data against failures. When an object
is stored, MinIO partitions the object's data into multiple shards and computes
parity shards (using Reed-Solomon erasure coding). These data and parity shards
are distributed across the drives in an erasure set. As long as a threshold
number of shards (e.g. M of M+K) are available, the object can be reconstructed,
meaning the cluster can tolerate up to K failures (drives or nodes) without data
loss. By default, MinIO uses a 12 data + 4 parity configuration (EC:4), so any 4
drives in an erasure set can fail and the data remains intact. This
erasure-coded design is central to MinIO's ability to provide high durability
and automatic data healing: if a drive or node fails, missing shards are
re-calculated from parity and healed onto healthy drives in the background once
the cluster stabilizes.
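The arithmetic behind this data/parity trade-off is easy to sketch. The helper below is illustrative only (not MinIO code) and computes the failure tolerance and storage overhead for any shard layout:

```javascript
// Illustrative sketch of erasure-coding trade-offs (not MinIO internals).
function erasureProfile(dataShards, parityShards) {
  return {
    tolerableFailures: parityShards,            // shards that may be lost
    minShardsToRead: dataShards,                // needed to reconstruct an object
    storageOverhead: parityShards / dataShards, // extra space vs. raw data
    usableFraction: dataShards / (dataShards + parityShards)
  };
}

// MinIO's default 12 data + 4 parity (EC:4) layout:
console.log(erasureProfile(12, 4));
// -> tolerates 4 failures at ~33% overhead, versus 3-way replication,
//    which tolerates 2 lost copies at 200% overhead.
```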
**No external metadata database:**
MinIO stores object metadata alongside the data on the storage drives (e.g. in a
hidden .minio.sys directory), avoiding any separate metadata server that could
become a single point of failure. Each object's metadata (such as its name,
size, content-type, and user-defined metadata) is stored with the object or in
distributed files, and MinIO uses consensus and locking protocols to keep
metadata consistent cluster-wide. This simplifies the architecture and ensures
that the system remains fully distributed, with every node capable of serving
any request for data or metadata.
**Cluster expansion:**
A MinIO deployment can scale horizontally by adding additional sets of nodes
(called new server pools) to the cluster. Once added, the new pool's capacity is
available for new objects, and MinIO will write incoming objects to whichever
pool has the most free space (to naturally balance utilization). Each server
pool has a fixed size (number of nodes and drives) once created; to scale
further, you add another pool rather than arbitrarily adding one node to an
existing pool. This approach avoids expensive data rebalancing - new data simply
flows into the new pool, while existing data stays in the old pool (unless
explicitly migrated). This design provides incremental scalability: enterprises
can start with a small cluster and grow capacity by adding more nodes/pools over
time, with minimal disruption.
**Multi-tenancy architecture:**
MinIO can be deployed in a multi-tenant fashion by running separate MinIO server
instances for each tenant (each with its own data volumes and network
endpoints). For example, on a single server or Kubernetes node, multiple MinIO
processes can run on different ports, each serving a different tenant's buckets.
In Kubernetes, the MinIO Operator automates multi-tenant deployments, creating
isolated MinIO clusters (tenants) in separate namespaces, each with dedicated
drives and credentials. This ensures strong isolation between tenants while
still allowing efficient use of the same underlying hardware. MinIO secures each
tenant separately - each instance has its own access control configuration and
optionally its own encryption keys - so that data and credentials are not shared
between tenants. Multi-tenancy improves hardware utilization and lowers cost by
consolidating services, without compromising on security or performance.
**Key components summary:**
In a MinIO deployment, the primary components include the MinIO server process
(which handles S3 API requests and manages storage), the storage drives (where
object data and metadata shards reside), and the internal erasure coding engine
(performing shard splitting and recombination). There is also an embedded or
external Key Management Service (KMS) if server-side encryption is enabled, and
an optional identity provider integration for external authentication (discussed
later). Administrators interact with the cluster using the MinIO Client (mc) CLI
or the MinIO Console UI, which are described in the next section. Overall,
MinIO's architecture is deliberately minimalistic - a single binary can run the
entire server - yet capable of scaling out to hundreds of nodes and many
petabytes of data while maintaining high throughput and fault tolerance through
its distributed erasure-coded design.
## How minio works: object storage features and functionality
MinIO provides a rich set of features that implement the core principles of
object storage while adding enterprise-grade capabilities. Below are the key
aspects of how MinIO works, covering its object model and major features:
* **Object Storage Model:** MinIO stores data as objects within buckets,
analogous to folders in cloud storage. Buckets serve as the top-level
namespace for objects, but the namespace is flat - there is no hierarchical
directory tree. Each object is identified by a unique key (name) within a
bucket. This flat address space allows MinIO to scale to billions of objects
without performance degradation. Users or applications interact with objects
via HTTP RESTful operations (PUT, GET, DELETE, etc.), with MinIO handling the
persistence, metadata, and indexing behind the scenes. Objects can be
accompanied by metadata (user-defined key-value pairs and system metadata like
content-type), enabling rich descriptions of data. By embracing a pure object
storage design, MinIO simplifies storing large unstructured data sets and
decouples storage from any file-system semantics - applications do not need to
manage or be aware of underlying file paths or mount points, they simply
address objects via bucket and key.
* **Amazon S3 API Compatibility:** One of MinIO's hallmark features is full
compatibility with the S3 API, including its authentication methods and
bucket/object operations. Applications can use AWS SDKs or CLI tools (or
MinIO's provided SDKs) to interact with MinIO as if it were AWS S3. This
includes support for multipart uploads (splitting large objects into parts for
efficient upload) , object versioning, object locking (WORM compliance),
bucket policies and ACLs, access signatures (AWS V4 signatures for
authenticated requests), and more. MinIO implements a wide range of S3 API
calls - for example, creating and listing buckets, putting and getting
objects, copying objects, and retrieving object metadata - following the same
request and response formats as AWS S3. Unsupported AWS-specific features
(like certain Glacier or RDS integrations) are generally not relevant to
MinIO's scope, but core object storage functionalities are covered. This S3
compatibility makes MinIO a drop-in replacement in many scenarios: any
software written for S3 (backup tools, data lake frameworks, etc.) can be
pointed at MinIO's endpoint with minimal changes. It also means developers can
leverage existing S3 SDKs in their language of choice, or MinIO's official
SDKs, to build applications against MinIO. In essence, MinIO provides the
experience of S3 on whatever infrastructure you choose, enabling hybrid-cloud
  and on-premises setups with the same APIs used in public cloud (see the client
  sketch after this list).
* **Erasure Coding and Data Durability:** As described in the architecture
section, MinIO uses erasure coding to achieve high durability and
availability. When an object is stored, MinIO splits the data into slices
(shards) and generates parity slices, distributing them across the cluster's
drives. This method provides similar fault-tolerance to RAID or replication
but with better space efficiency. For example, with 12 data and 4 parity
shards (the default), MinIO only incurs \~33% storage overhead to tolerate 4
simultaneous drive failures (whereas making 3 full replicas would be 200%
overhead). Erasure coding also means that data is still available during
failures - clients can read objects even if some drives or an entire node is
down, as long as the remaining shards suffice to reconstruct the data. MinIO's
implementation includes bit-rot detection and checksums on each shard, so it
can detect corruption and automatically heal or rebuild corrupted fragments
from parity. If a new drive replaces a failed drive, MinIO will automatically
  heal all missing fragments to the new drive in the background, restoring
full protection. From a user perspective, all of this is transparent - you
simply see an object stored, and MinIO ensures that it remains intact and
retrievable despite hardware issues. This design gives MinIO high resiliency
(no single point of failure) and enterprise-grade data protection in any
environment.
* **High Availability and Scalability:** MinIO was built for scalability from
day one. In a distributed deployment, all nodes in a MinIO cluster actively
serve data, and the system has no central coordinator that could bottleneck
throughput. Clients can connect to any node (via a load balancer or
round-robin DNS) and perform reads/writes; the cluster internally manages data
placement and replication of shards. A MinIO cluster can scale horizontally by
adding additional nodes and drives (in new server pools) to increase capacity
and aggregate throughput. Because MinIO doesn't rebalance existing data on
expansion, new capacity is immediately utilized for new objects, and the
cluster's performance scales roughly linearly with each additional node. This
makes MinIO suitable for exabyte-scale storage deployments - multiple clusters
can even be federated if needed. High availability is achieved through the
combination of erasure coding and distribution: even if one or multiple nodes
go offline, the data remains available via surviving nodes. MinIO clusters
have built-in distributed locking to handle concurrent writes and prevent
conflicts, which is crucial for consistency when clients perform operations
like multipart upload commits or object overwrites in a HA environment.
Additionally, MinIO supports geo-replication (via bucket replication) across
clusters for disaster recovery: you can configure bucket-level continuous
replication to a secondary MinIO deployment in another data center or cloud.
This can provide site-level redundancy on top of the local HA, ensuring
business continuity even if an entire site is lost. In summary, MinIO's
distributed design yields a highly available object store where capacity and
performance can grow with your needs, and hardware failures cause minimal
disruption to uptime.
* **Multi-Tenancy and Isolation:** While MinIO is inherently a single-tenant
service (one set of IAM accounts and buckets per server/cluster), it provides
mechanisms to serve multiple tenants in practice. As noted, you can run
separate MinIO instances for different tenants on the same hardware (each with
their own data directories and network ports). The MinIO Operator for
Kubernetes makes this easier by provisioning dedicated MinIO Tenant clusters
on-demand, each with its own set of pods and volumes. Each tenant's data is
cryptographically isolated and access-controlled so that one tenant cannot
access another's buckets. Encryption is applied per tenant (with unique keys
or keystores), and data in transit is isolated by separate endpoints and
credentials. This approach ensures multi-tenant deployments maintain strong
security boundaries. From an admin perspective, multi-tenancy allows
consolidation - you might host storage for multiple applications or even
multiple external customers on a single physical cluster - but MinIO's design
avoids any shared state that could leak across tenants. Each MinIO instance
(or tenant) can integrate with different identity providers or use different
policy sets, further separating their environments. In large organizations,
this means you can offer "Object Storage as a Service" internally, with MinIO,
safely sharing the infrastructure among teams or departments. Hardware
resources are efficiently utilized across tenants, while each tenant
experiences the service as if it were their own isolated S3-compatible storage
system.
* **Security (Encryption and Immutability):** MinIO incorporates robust security
features suitable for enterprise use. All communication with MinIO can be
encrypted via TLS for privacy in transit. For data at rest, MinIO supports
Server-Side Encryption (SSE) of objects: it can encrypt each object with a
unique key using AES-256 and store the ciphertext on disk. Encryption keys can
be managed by an external KMS; MinIO provides integration with KMS solutions
(like HashiCorp Vault or others) to manage master keys for SSE-S3 and SSE-KMS
modes. This means data remains encrypted on the drives, and even if someone
gains access to the raw storage, they cannot read the content without proper
keys. Additionally, MinIO supports object lock (immutability) feature in
compliance mode, similar to AWS S3 Object Lock, which allows buckets or
objects to be WORM (Write-Once-Read-Many) protected for a specified retention
period. This is crucial for use cases that require immutability (e.g.
blockchain ledger archives, financial records, compliance logs) to prevent
data from being tampered with or deleted before a retention period ends. The
combination of encryption and immutability means MinIO can be used in highly
regulated environments - it ensures confidentiality, integrity, and retention
of data as needed.
* **Performance Optimizations:** MinIO, written in Go, is optimized for
  high-throughput workloads, using SIMD acceleration for erasure coding and
checksums. It can saturate network links with sequential reads/writes and
handle many small objects efficiently with its in-memory indexing and
batching. MinIO is designed to take advantage of modern hardware (NVMe SSDs,
100 GbE networking, etc.) and can utilize features like RDMA or multiple cores
for parallel transfer. Benchmarks by MinIO and third parties often demonstrate
very high aggregate throughput (in the range of tens of gigabits per second)
with proper hardware. This makes MinIO suitable for intense workloads such as
AI training data pipelines, real-time analytics on large datasets, or backing
high-traffic web applications. It is not uncommon to see MinIO deployed in
hyper-converged configurations where storage and compute are co-located (for
example, running MinIO on the same nodes that run Spark or Presto, to serve
data with minimal network hops). The server also supports caching (MinIO can
tier hot data to fast media) and selective compression, further boosting
performance for certain I/O patterns. All of these ensure that MinIO can meet
the performance demands of enterprise and cloud-native environments, often
rivaling or exceeding the performance of managed cloud storage.
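Because the API surface is identical, pointing a stock AWS SDK at MinIO usually comes down to changing the endpoint. A hedged sketch with the AWS SDK for JavaScript v3, where the endpoint (`minio.example.com`), bucket, and keys are placeholders:

```javascript
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

// Placeholder endpoint and credentials - any S3 client can target MinIO.
const s3 = new S3Client({
  endpoint: "https://minio.example.com",
  region: "us-east-1",   // MinIO accepts an arbitrary region value
  forcePathStyle: true,  // MinIO deployments are typically addressed path-style
  credentials: {
    accessKeyId: "your-access-key",
    secretAccessKey: "your-secret-key"
  }
});

// Write and read back an object exactly as you would against AWS S3
await s3.send(new PutObjectCommand({ Bucket: "demo", Key: "hello.txt", Body: "hello MinIO" }));
const result = await s3.send(new GetObjectCommand({ Bucket: "demo", Key: "hello.txt" }));
console.log(await result.Body.transformToString());
```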
## Developer and user capabilities
MinIO offers numerous interfaces and tools that developers, DevOps engineers,
and end-users can utilize to interact with and manage the storage system. These
capabilities include SDKs and APIs for application integration, command-line
tools for scripting and administration, a web-based UI for management, as well
as robust access control and logging features for security and observability.
Below is an overview of these key capabilities:
* **S3 APIs and SDKs:** As noted, MinIO exposes an Amazon S3-compatible API
endpoint. Developers can integrate applications with MinIO using any S3 API
client. Common choices include the official AWS SDKs (for Python, Java, Go,
JavaScript, etc.) which can be pointed to MinIO by simply changing the
endpoint URL and credentials. MinIO also provides native SDKs optimized for
MinIO in various languages (Java, Python, Go, JavaScript, .NET, and more) -
these are essentially S3 client libraries with MinIO-specific documentation
and examples. The SDKs cover all typical operations: bucket management
(create/delete/list bucket), object operations (upload, download, list, copy,
delete objects), presigned URL generation, and even advanced features like
implementing S3 Select or bucket notifications where supported. For temporary
security credentials, MinIO includes a Security Token Service (STS)
implementation to generate temporary access tokens (analogous to AWS STS).
This is useful for federated access scenarios, e.g., a web app requesting
temporary upload credentials for a user. In short, developers have a rich set
of API capabilities to build MinIO-backed applications, with the advantage
  that these skills and code are transferable from the AWS ecosystem (see the
  presigned-URL sketch after this list).
* **Command-Line Interface (CLI) Tools:** MinIO provides a powerful CLI tool
called mc (MinIO Client) for interacting with the object store. The mc command
is analogous to AWS's S3 CLI: it allows users to browse buckets,
upload/download objects, set policies, and more from a terminal. For example,
mc cp copies files to/from MinIO, mc ls lists bucket contents, and mc mb makes
a new bucket. In addition to general object operations, there is mc admin
subcommand set for administrative tasks. Using mc admin, an operator can
manage users and groups, get status info on the servers, monitor cluster
health, and configure settings like bucket replication or lifecycle policies.
These CLI tools are scriptable and ideal for automation (e.g., in CI/CD
pipelines or cron jobs for backups). They support alias configurations so you
can easily target multiple MinIO endpoints or even AWS S3 from one CLI by
name. The CLI is noted for being simple and consistent - it uses UNIX-like
commands and syntax - making it easy for developers and sysadmins to learn.
According to user feedback, "the CLI version for MinIO is also simple and can
  be used with any AWS S3 compatible object storage product", highlighting its
ease of use and flexibility.
* **MinIO Console (Web UI):** MinIO includes an embedded web-based
administration UI called the MinIO Console. When the server is running, the
console can be accessed (by default) on port 9001, providing a graphical
interface in the browser. Through this console, users can log in (using the
same credentials as the CLI) and perform many tasks: browse buckets and
objects (upload/download through the browser), view object details and
metadata, create new buckets, and apply settings like bucket policies or
object locks. Admins can also use the console for operational monitoring: it
displays real-time performance metrics (like I/O throughput and number of API
operations), storage usage per disk and node, and system logs or
notifications. There are panels for configuring identity providers, managing
users/service accounts and their access keys, and setting up replication or
bucket event endpoints. Essentially, anything you could do via CLI or API, you
can also do in the Console with a point-and-click interface. This is
particularly useful for less technical users or for quickly checking cluster
status at a glance. The console is a separate process (launched by the main
server binary) but integrated, and it supports features like dark mode,
multi-language, etc. In multi-tenant setups, each MinIO instance has its own
console. Because it's web-based, you can secure it behind SSO or corporate
portals if needed. The presence of a user-friendly UI makes MinIO accessible
to a broader range of users and speeds up tasks like debugging issues or
inspecting data.
* **Identity and Access Management (IAM):** MinIO implements a robust IAM system
inspired by AWS IAM for S3. At a basic level, access to MinIO is controlled by
Access Key / Secret Key pairs (analogous to AWS access key ID and secret). The
root user is created on startup (by default minioadmin:minioadmin, but in
practice you set your own secure keys). Administrators can create additional
users, each with their own credentials. More powerfully, MinIO supports groups
and policy-based access control. Policies are JSON documents that define
allowed or denied actions on resources (buckets and objects), very much like
AWS S3 bucket policies or IAM policies. For example, you can create a policy
that grants read-only access to a certain bucket, and then assign that policy
to a user or group. By attaching policies, you avoid hard-coding permissions
per user. MinIO comes with some built-in policies (e.g., readwrite, readonly,
writeonly) which can be used or you can define custom ones. The server
enforces these policies on each API request. Additionally, MinIO can integrate
with external IAM systems: it supports Active Directory/LDAP integration so
that enterprise users can authenticate with their directory credentials and be
automatically mapped to MinIO groups/policies. It also supports OIDC (OpenID
Connect) for single sign-on using providers like Okta, Keycloak, or even
social logins. In those setups, an external identity token can be exchanged
for temporary MinIO credentials, enabling SSO login to the MinIO Console or
API without managing separate accounts. These features let MinIO slot into
enterprise security environments easily, reusing existing user directories and
  auth flows. In terms of network security, MinIO supports IP filtering, and its
  roadmap points toward bucket-level firewall rules for allowing or blocking
  specific IPs or VPC-style restrictions. Overall, IAM in
MinIO ensures that access to data is controlled and auditable, satisfying
requirements for multi-user environments.
* **Logging and Auditing:** For operational insight and compliance, MinIO
provides extensive logging capabilities. All server actions (startup messages,
errors, HTTP requests) are logged to the console or syslog by default. Admins
can plug into this by capturing the stdout or system journal. Importantly,
MinIO also offers an audit logging feature which records every API call in
detail (who did what and when). These audit logs can be configured to include
essential information like the timestamp, user, API method (GET/PUT/etc.),
resource accessed, response status, and even the client IP. Audit logs are
critical for security review and compliance (for example, tracking data access
in regulated industries). MinIO can publish both the standard server logs and
audit logs to external systems via a webhook mechanism. You can configure one
or multiple HTTP endpoints to receive log events; MinIO will PUT a JSON log
entry for each action to those endpoints in real time. This allows integration
with log management solutions (Splunk, ELK stack, etc.) or cloud monitoring
services. Alternatively, because MinIO's metrics and logs are compatible with
Prometheus format , you can set up Prometheus to scrape MinIO's metrics
endpoint and combine that with logs for a full observability stack. The
Prometheus metrics include internal stats such as number of requests, errors,
latency, throughput, capacity usage, etc., which can be visualized in Grafana
(MinIO provides sample Grafana dashboards for this). In the MinIO Console UI,
many of these metrics are displayed in a built-in dashboard as well, and
there's an option to view recent logs and traces. For tracing, MinIO can emit
traces in a format consumable by tracing systems (OpenTelemetry support is
being added). In summary, monitoring MinIO is straightforward: it provides the
  hooks needed to log every operation (for audit/security) and to monitor
performance and health (for SREs to track). This level of transparency is
crucial in enterprise deployments where one needs to ensure the storage is
functioning correctly, track access patterns, and meet audit requirements.
* **Administration and Management:** Beyond the above, MinIO provides various
admin-focused capabilities. The mc admin CLI can perform configuration
management on a running cluster (e.g., setting environment variables, managing
encryption keys, or rotating credentials), and it can trigger consistency
checks or prints of cluster info. If a node needs to be decommissioned, an
admin can instruct the cluster to rebalance and evacuate data from that node
(by adding a new pool and migrating data off the old one). MinIO also supports
scheduled disk usage checks and background data scanner threads that
constantly verify data integrity in the background. These can be tuned to run
at certain times or throttled to reduce impact. Backup of configuration is as
simple as saving the MinIO config file (as most state is on the drives
themselves). There are also notification hooks: MinIO can publish events (like
"object created" or "object removed" events) to various targets - e.g., HTTP
endpoints, AWS Lambda, NATS, or Kafka - enabling serverless processing or
alerting based on bucket activity. This is part of its event notification
feature set, useful for building reactive data pipelines (for instance,
automatically processing an image when it's uploaded to a bucket). Finally,
updates to MinIO are seamless; it's a single binary upgrade and MinIO's design
allows upgrading one node at a time in a cluster (since clients can tolerate a
node going down for a brief period), enabling zero-downtime upgrades in
distributed environments.
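As one concrete example of these capabilities, the sketch below uses MinIO's official JavaScript SDK (`npm install minio`) to generate a presigned download URL, so a client can fetch an object without ever holding the storage credentials; the endpoint and keys are placeholders:

```javascript
import * as Minio from "minio";

// Placeholder endpoint and credentials for a MinIO deployment.
const client = new Minio.Client({
  endPoint: "minio.example.com",
  port: 443,
  useSSL: true,
  accessKey: "your-access-key",
  secretKey: "your-secret-key"
});

// Presigned GET URL valid for one hour; hand this to a browser or service
// that should download the object without receiving the access keys.
const url = await client.presignedGetObject("demo", "report.pdf", 60 * 60);
console.log(url);
```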
## Advantages for enterprise and cloud-native workloads
MinIO's design and feature set bring a number of advantages that cater to
enterprise requirements and modern cloud-native environments. Here we outline
the key benefits that MinIO offers in these contexts:
* **Cloud-Native Architecture:** MinIO is built to run natively on containerized
and orchestrated platforms. Its lightweight, stateless container (all state is
in object data and minimal config) makes it easy to deploy on Kubernetes,
Docker, or Nomad. The MinIO Kubernetes Operator further streamlines operations
like provisioning new clusters, expanding capacity, and managing failover.
Because MinIO can run anywhere - on-premises bare metal, VMs, containers on
any cloud, or at the edge - it aligns perfectly with hybrid and multi-cloud
strategies. Enterprises can deploy MinIO on their own infrastructure and
achieve the cloud operating model internally. The Kubernetes-native approach
also means MinIO inherits benefits like easy scaling via Kubernetes APIs,
automated restarts, and upgrades, and integration with Kubernetes storage
classes. In essence, MinIO delivers object storage as a cloud-native
microservice, which is a big advantage for organizations embracing DevOps and
infrastructure-as-code.
* **High Performance at Scale:** Enterprises running data-intensive workloads
(AI/ML training, big data analytics, streaming, etc.) benefit from MinIO's
extreme performance optimizations. MinIO is highly parallel - it can use all
CPU cores and disks across the cluster concurrently to serve different parts
of data. It has been shown to achieve very high throughput and low latency,
often comparable to or better than alternative object stores. For example,
MinIO is commonly used to feed data to AI/ML pipelines, as it can deliver
training data to GPU servers at line-rate speeds. Performance tuning features
(like adjusting block and shard sizes, using direct disk I/O, enabling
client-side compression) allow tailoring MinIO to the workload. Unlike some
legacy storage, MinIO has no bottleneck like a metadata server, so performance
scales with the hardware. This is crucial for enterprise AI deployments and
large-scale analytics, where storage throughput can be a limiting factor. By
deploying MinIO on modern NVMe storage and 100 Gbit networks, organizations
have demonstrated multi-gigabyte-per-second read/write rates, enabling
them to keep GPUs and distributed compute jobs fully utilized. Consistent
performance under heavy load and during failures (thanks to erasure coding) is
another plus - enterprises can trust that even if a drive fails, the
performance degrades gracefully rather than catastrophically.
* **Enterprise-Grade Security and Compliance:** MinIO incorporates the security
features needed by enterprise IT. Encryption of data at rest (with external
key management) helps meet compliance standards like HIPAA, GDPR, etc., by
ensuring sensitive data is safe even if disks are stolen or breached. The
immutability (WORM) feature allows financial or healthcare institutions to use
MinIO as a compliant archive that meets regulations such as SEC 17a-4(f) or
FINRA rules for data retention. MinIO's fine-grained IAM and audit logging
make it possible to enforce least-privilege access and track exactly who
accessed what data. Additionally, MinIO is SOC 2 compliant and undergoes
security audits, giving enterprises confidence in its security posture.
Multi-tenancy and integration with enterprise identity systems (AD/OIDC) allow
MinIO to slot into existing security frameworks - users can seamlessly
authenticate with corporate credentials, and admins can manage permissions
centrally. These features reduce the friction of adopting MinIO in an
enterprise environment, where security reviews are mandatory. Overall, MinIO's
security capabilities mean it can be trusted to store even the most sensitive
enterprise data.
* **S3 Ecosystem and Compatibility:** Many enterprises have developed an
ecosystem of tools, workflows, and skills around AWS S3. By being fully
S3-compatible, MinIO allows organizations to reuse those tools and skills
on-premises or in any environment. For example, an enterprise could use MinIO
as a target for backups with software like Veeam, Commvault, or Veritas, all
of which support writing to S3-compatible storage. Data lake frameworks such
as Spark, Presto, Hive can use MinIO as the storage layer (via S3 API),
enabling on-premises data lakes that behave like those on AWS. By avoiding
proprietary APIs, MinIO ensures vendor neutrality and prevents lock-in: data
can be moved to or from AWS S3 or other clouds without reprocessing, since
it's stored in the same format. This compatibility extends to third-party
middleware - for instance, in big data workflows, MinIO can work with Apache
Kafka Connect, Apache NiFi, or other ingest pipelines that have S3 connectors.
For developers, this means learning MinIO has virtually no learning curve if
they know S3, and for architects, it means the entire AWS S3 ecosystem of
integrations (from data processing to AI frameworks to monitoring tools) is
available for use with MinIO. This leverage of the existing ecosystem is a
major advantage in enterprise settings, as it accelerates development and
integration efforts.
* **Flexible Deployment and Cost Efficiency:** MinIO's flexibility in deployment
translates to cost and operational benefits. Enterprises can deploy MinIO on
commodity x86 or ARM servers, leveraging existing hardware investments or
choosing cost-optimized gear (no need for specialized appliances). They can
also choose the optimal mix of storage media (HDD for capacity, SSD for
performance, or a tiered mix using MinIO's caching feature) to balance cost
and speed. Because MinIO is lightweight, it can even run on smaller edge
devices or in remote offices without requiring a large footprint, avoiding
expensive hardware at the edge. Licensing cost is another area: MinIO's
open-source licensing means there are no software license fees for usage
(though support subscriptions are available from MinIO Inc.). This can
significantly lower TCO compared to proprietary storage solutions.
Additionally, MinIO's multi-cloud readiness means enterprises can avoid data
egress costs by keeping data on-prem or only moving data to public cloud when
needed - yet still interface with cloud services via S3 if required. In a
hybrid cloud scenario, MinIO might store data locally for low-latency access
and periodically replicate critical data to AWS S3 or Azure Blob (using
built-in replication) for off-site backup, thus leveraging cloud strategically
rather than for all data. This flexibility ensures that enterprises can
optimize for both performance and cost. Finally, MinIO's minimal admin
overhead - thanks to self-healing and simple scaling - can reduce operational
expenses. It doesn't require a large storage admin team to babysit; typically
a small DevOps team can manage MinIO alongside other cloud-native apps, using
common automation tools. All these factors contribute to a strong value
proposition for enterprise adoption.
* **Designed for Modern Workloads:** Traditional storage systems can struggle
with modern workloads that involve microservices, CI/CD, and agile
development. MinIO, being software-defined and easily redeployable, fits well
into continuous deployment pipelines. Developers can run a local MinIO
instance for testing (since it's just a single executable), use the same
object storage API in dev, staging, and production, which promotes environment
parity. For containerized applications, MinIO is often used as a backing store
for stateful cloud-native apps - for example, a cloud-native CI system might
use MinIO to store build artifacts or logs. Because it's built to handle
massive concurrency and ephemeral container lifecycles, it's more aligned with
these patterns than legacy NAS or SAN storage. MinIO also shines in edge and
IoT scenarios where data is generated outside the data center: its small
footprint allows it to run on edge gateways or even IoT appliances, buffering
and storing data where it is created, and then syncing needed subsets to
central cloud storage. This supports modern architectures that push compute
and storage closer to data sources (e.g., for real-time analytics on factory
floors or content delivery at edge sites). In summary, MinIO's feature set and
architecture are in tune with today's IT trends - containerization,
microservices, distributed computing - which is a decisive advantage for
enterprises undergoing digital transformation or cloud-native initiatives.
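As a sketch of the Kubernetes-native workflow described above, the MinIO Operator can be installed with Helm roughly as follows. The chart repository URL matches MinIO's published charts, while the release names and namespaces are assumptions:
```bash
# Install the MinIO Operator from MinIO's chart repository
helm repo add minio-operator https://operator.min.io
helm repo update
helm install operator minio-operator/operator \
  --namespace minio-operator --create-namespace

# Provision a tenant (an operator-managed MinIO cluster) in its own namespace
helm install my-tenant minio-operator/tenant \
  --namespace my-tenant --create-namespace
```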
## Common use cases and deployment scenarios
MinIO's versatility allows it to be applied to a wide range of storage use
cases. Below are some of the typical usage patterns and deployment scenarios
where MinIO is commonly utilized:
* **Private Cloud Object Storage:** Organizations often deploy MinIO to build a
private cloud storage service that mimics public cloud S3 within their own
data centers. In this scenario, MinIO clusters run on-premises (on physical or
virtual servers) and provide internal teams with S3-compatible storage for
various applications. Use cases include storing internal application data,
database backups, VM images, documents and media files, and more. The private
MinIO cloud offers low-latency access (since it's on the local network), data
residency (important for companies that need to keep data on-site for
compliance), and cost predictability. For example, a company might replace
traditional SAN/NAS appliances with MinIO to serve as a unified object store
for all departments, each with their own buckets and access controls.
Developers get the benefit of cloud-like storage (self-service provisioning of
buckets, infinite scalability from their perspective) without data leaving the
premises. Many IT departments also use MinIO behind the scenes to store things
like container registry storage (Harbor or other registries can use an S3
backend), logging and metrics data, or artifacts from CI/CD systems -
essentially acting as a central storage hub in the private cloud.
* **Hybrid Cloud and Multi-Cloud Deployments:** MinIO is frequently used in
hybrid cloud architectures, where some infrastructure is on-premises and some
in public cloud, or in multi-cloud setups across different cloud providers.
Because MinIO speaks S3, it can bridge on-prem and cloud: data can be
replicated from on-prem MinIO to AWS S3 or vice versa, enabling data mobility.
A common pattern is to use MinIO on-prem for active data, and periodically
replicate critical datasets to a cloud bucket for off-site backup (disaster
recovery). Conversely, an organization might ingest data in the cloud (from
cloud-native apps or IoT feeds), but then sync it down to an on-prem MinIO for
local processing (to save on cloud egress costs or to use on-prem GPU
clusters). MinIO's gateway mode (legacy feature) or simply running MinIO in
the cloud can present a uniform interface to applications while internally
tiering data to different storage backends. In multi-cloud usage, MinIO
provides a consistent API across AWS, Azure, GCP, etc. - an app can be
deployed in any environment and use MinIO to abstract away the differences.
This also simplifies cloud migrations: an application can be developed against
MinIO (S3 API) and later pointed to AWS S3 when deployed in AWS, or vice
versa, giving a form of cloud agnosticism. Hybrid cloud file sharing is
another use: MinIO can be a target for backing up cloud application data onto
on-prem storage or consolidating data from multiple edge sites into a central
repository (with the cloud serving as the intermediary). Overall, MinIO's
portability and identical behavior in any environment make it a linchpin in
hybrid strategies - data can flow between on-prem and cloud easily, and
applications see a single unified storage interface (a replication sketch
follows this list).
* **Edge Computing and IoT Deployments:** In edge scenarios, MinIO is valued for
its small footprint and robustness. Companies are deploying MinIO at edge
locations such as retail stores, factory floors, vehicles, and remote
facilities to store data generated on-site (for example from sensors, cameras,
or user devices). These edge MinIO instances provide local buffering and quick
local access, which is crucial when connectivity to central cloud is limited
or when real-time processing is needed. For instance, a factory might generate
high-volume sensor data; an on-site MinIO can ingest and store this data in
real-time. Local applications (perhaps performing anomaly detection or
aggregating sensor readings) read from MinIO with minimal latency. Later, the
data (or just summarized results) can be replicated to a core data center or
cloud for long-term storage. In retail, an edge MinIO might store video
footage from security cameras or daily transaction logs, ensuring they are
safely kept even if the store's internet link goes down, and then sync to
central storage overnight. Because MinIO can run on modest hardware (even a
single small server or an industrial PC), it's feasible to deploy at many
distributed sites. And with features like erasure coding, even a single
location with a few drives can have resilience against device failures (useful
in edge where physical maintenance may be infrequent). Some edge deployments
also use MinIO in disconnected mode - e.g., an oil rig or a naval vessel might
run MinIO to store data during long periods offline, then when a connection is
available, data is pushed to headquarters. The ability to operate autonomously
and reliably makes MinIO a good fit for these cases. Additionally, edge AI
deployments use MinIO to hold AI models and collected data at the edge, making
updates and access efficient. In summary, MinIO extends the cloud's storage
paradigm to the edge, enabling a true edge-to-core data pipeline with
consistent tooling.
* **AI/ML and Big Data Analytics:** Modern AI and analytics workflows often
involve reading and writing enormous datasets (imagine training data for image
recognition, or a data lake of billions of logs for analysis). MinIO has
become a popular storage backend in these scenarios due to its scalability and
performance. In AI/ML, MinIO is used to store training datasets (which could
be millions of images or videos) and to serve them to distributed training
jobs. Frameworks like TensorFlow or PyTorch can directly read data from MinIO
using S3 APIs, or via connectors like TensorFlow's built-in S3 support.
Because MinIO can saturate high-speed networks, it can keep GPU nodes fed with
data. It's also used to version and store ML models and results - data
scientists often save checkpoints, trained model files, and metrics to MinIO
for sharing and later retrieval. In analytics and data lake use cases, MinIO
serves as the storage layer for Hadoop/Spark or Presto/Trino clusters. Instead
of HDFS or other storage, companies use MinIO to hold raw data (CSV, Parquet
files, etc.) and query them with engines like Spark SQL or Presto using S3
connectors. The advantage is that MinIO provides easier scalability and
management than HDFS (no namenode, easier to expand) and is more
general-purpose. Also, multiple frameworks can all access the same data on
MinIO (one team might use Spark, another uses Presto, a third uses Python
pandas - all can fetch from the same MinIO source). Many organizations
migrating from Hadoop to cloud-native architectures use MinIO to replace HDFS
as they containerize their analytics stack. Additionally, MinIO's object
versioning and locking can be used to implement data lake features like
time-travel or audit trails for data changes. The combination of unlimited
scalability, strong read/write performance, and S3 compatibility with analytic
tools makes MinIO a natural choice for on-premises data lakes, AI data stores,
and general big data repositories (an analytics access sketch follows this
list).
* **Backup, Archive, and Disaster Recovery:** MinIO is frequently deployed as a
backup target in enterprise IT setups. Many modern backup solutions can output
backup images or streams to an S3-compatible store. By using MinIO as the
target, organizations can keep backups within their controlled environment.
For example, VMware admins use MinIO as an alternative to AWS S3 or Azure Blob
to store VM backups (via products like Veeam which support S3 endpoints).
Database administrators might use tools that dump database backups directly to
MinIO for safekeeping. Because MinIO can be configured for WORM, it's useful
for immutable backups - ensuring backup files cannot be altered by ransomware.
In terms of archival, MinIO's efficiency and erasure coding make it
cost-effective for cold storage of data that is not frequently accessed but
must be retained (e.g., compliance archives, historical records). It can be
deployed on high-density drives with relaxed performance, giving a private
"glacier-like" archive that the company controls. If needed, MinIO can later
transition these archives to tape or other storage via external processes, but
having them in MinIO first (with S3 API) means any application expecting S3
can still access even archive data (though maybe with some delay if stored on
slower media). For disaster recovery, organizations set up geo-replication
between MinIO clusters across sites. For instance, a primary data center and a
secondary data center each run MinIO; critical buckets are replicated
asynchronously so that the secondary always has a copy of the data. In a DR
scenario (primary site outage), applications can failover and point to the
secondary MinIO with up-to-date data. This provides cloud-like cross-region
redundancy in a completely self-managed way. Some also replicate data from
MinIO to true cloud storage as a tertiary copy (3-2-1 backup rule: 3 copies, 2
mediums, 1 off-site - MinIO can cover the first two, and cloud copy covers
off-site). Overall, MinIO serves as a reliable sink for backups and a
repository for archives, leveraging its integrity features to ensure these
last-resort copies are safe (see the locking sketch after this list).
* **Software Development Artifacts and CI/CD:** A slightly less obvious but
common use is using MinIO for storing build artifacts, container images, and
other outputs of the development process. Many on-premises CI/CD pipelines
prefer an S3-like storage for artifacts because it is easier to scale than
shared file systems. MinIO is deployed to store things like compiled binaries,
test reports, dependency caches, etc. Jenkins, for example, has plugins to
publish artifacts to S3 - pointing those to MinIO allows large binaries to be
kept external to the Jenkins master. Teams using containerized builds might
push intermediate layers or finished Docker images to a registry backed by
MinIO (some caching proxies for Docker registries can use S3 storage). By
using MinIO, these artifact repositories benefit from the same durability and
scalability, ensuring that even as the number of builds grows, the storage can
keep up. MinIO's ability to handle lots of small objects efficiently is useful
here, as artifact stores often have many files. Similarly, for logging and
monitoring in dev/test environments, MinIO can be the target for aggregated
logs or metrics before they are processed, giving developers an easy way to
retrieve raw logs via S3 API if needed. In summary, MinIO plays a role in
DevOps toolchains by providing a unified storage backend for various pipeline
components, thereby simplifying data management in software production
workflows.
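The replication, immutable-backup, and analytics patterns flagged above each reduce to a few commands. A hedged sketch follows; the aliases, endpoints, credentials, and bucket names are placeholders, and the flags should be verified against your mc and Spark versions:
```bash
# Hybrid/DR: asynchronously replicate a bucket to a second MinIO site
mc replicate add myminio/critical-data \
  --remote-bucket "https://ACCESS_KEY:SECRET_KEY@dr-site.example.com/critical-data"

# Immutable backups: a bucket created with object locking and a default
# 90-day COMPLIANCE retention cannot be altered or pruned early
mc mb --with-lock myminio/backups
mc retention set --default COMPLIANCE 90d myminio/backups

# Analytics: point a Spark job at MinIO through the Hadoop s3a connector
spark-submit \
  --conf spark.hadoop.fs.s3a.endpoint=https://minio.example.com \
  --conf spark.hadoop.fs.s3a.access.key=ACCESS_KEY \
  --conf spark.hadoop.fs.s3a.secret.key=SECRET_KEY \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  analytics_job.py
```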
## Minio in blockchain ecosystems
In addition to traditional IT use cases, MinIO is increasingly being recognized
for its role in blockchain and decentralized application ecosystems. While
blockchains themselves provide a distributed ledger for transaction data, they
often rely on external storage for handling large binary data, logs, and
off-chain information. MinIO's capabilities align well with these needs, making
it a valuable component alongside blockchain networks. Here's how MinIO
integrates into blockchain scenarios:
* **Storing Blockchain Data and Node Backups:** Running a blockchain node (e.g.,
Bitcoin, Ethereum, Hyperledger Fabric) generates significant data - blockchain
ledgers can grow to hundreds of gigabytes or more. MinIO can be used to store
blockchain ledger data off-chain or as a backup repository. For instance, a
blockchain explorer service or analytics platform might periodically dump the
state of the blockchain (blocks and state trie for Ethereum, etc.) to object
storage. Using MinIO for this means those dumps are durably stored and easily
accessible via S3 API for anyone who needs to download a blockchain snapshot
to spin up a new node. Some enterprises running private blockchain networks
use MinIO to keep point-in-time archives of the ledger for compliance: by
exporting daily or weekly ledger data to a WORM-enabled MinIO bucket, they
ensure an immutable record of the ledger history outside the blockchain
itself. This can be important if they need to restore a network to a past
state or verify historical data independently. Additionally, node backup
solutions can integrate with MinIO - for example, an Ethereum client could be
modified or accompanied by a script to backup its key data directories to
MinIO at intervals. In case of node failure, the data can be retrieved from
MinIO to quickly bring a new node up to sync. The high throughput of MinIO
helps in such scenarios, as blockchain data can be large and time-sensitive to
restore (a backup sketch follows this list).
* **Blockchain Logs and Analytics:** Blockchain nodes and decentralized apps
produce a wealth of logs (transaction logs, events, smart contract outputs).
MinIO serves as an excellent sink for collecting these logs and event data.
Rather than storing logs on local disk (where they might be lost if a node
crashes), nodes can push their logs to MinIO in near real-time. This creates a
centralized (yet distributed and reliable) repository of all logs across a
blockchain network. From there, monitoring or analytics systems can consume
the logs - for example, a SIEM system could pull logs from MinIO to analyze
security or a big data system like Spark could periodically load new log
objects for trend analysis. The advantage of MinIO here is durability and
organization: each node could write to a separate bucket or prefix, timestamps
can be part of object keys, making it easy to partition and query the data.
Moreover, if logs are stored as objects, they can be retained long-term
cheaply and even versioned if needed. For analytics, teams often export
detailed blockchain data (such as all transactions or contract events) into
CSV or Parquet files for off-chain analysis. These large files can reside on
MinIO, where data scientists or analytic tools can access them via the S3
interface. This decouples heavy analytical workloads from the operational
blockchain nodes, improving overall system performance and allowing richer
analysis. In summary, MinIO becomes the off-chain data lake for
blockchain-generated data, benefiting from its scalability to handle the
ever-growing log volumes.
* **Off-Chain Asset Storage for Decentralized Apps (dApps):** Many blockchain
applications (dApps) deal with assets or data that are not stored on-chain due
to blockchain size/cost constraints. Examples include large files in NFT
platforms (images, videos linked to tokens), user profile data in
decentralized social networks, documents in supply chain blockchain solutions,
etc. Instead of putting this data on the blockchain (which is impractical),
the blockchain stores a reference (like a hash or URL) to the data, and the
actual data is stored off-chain in a storage system. MinIO is a compelling
choice for this off-chain storage. It can store NFT metadata JSON files and
associated media securely, with the content hash used as the object key to
ensure integrity (if the object's name or metadata includes its hash, one can
verify the content hasn't changed - aligning with blockchain's immutability
ethos). Similarly, in a supply chain dApp, PDFs of certificates or images of
products can be kept in MinIO, with the blockchain storing only a pointer or
fingerprint. MinIO's object immutability can be leveraged here: you could mark
these off-chain assets as non-deletable for a certain period or indefinitely,
to mirror the immutability of the blockchain (e.g., once an NFT is minted and
its image stored, you lock the image object to prevent alteration). Since
MinIO is S3-compatible, dApp developers find it easy to integrate - many
blockchain frameworks (like Truffle, Hardhat, or others) can call HTTP APIs,
so pushing or fetching data from MinIO is straightforward. Also, using MinIO
in an enterprise blockchain context keeps sensitive off-chain data within the
organization's control, as opposed to using a public IPFS or third-party
storage (which might be less compliant or reliable). In essence, MinIO acts as
the decentralized app's data layer for anything too bulky or unsuitable for
on-chain storage, while still maintaining the spirit of decentralization by
being self-hosted and distributed (a content-addressing sketch follows this
list).
* **Integration with Decentralized Storage Networks:** Interestingly, MinIO
itself is not a blockchain or token-based system - it's a traditional
distributed storage. However, its open-source nature and S3 API have made it
compatible with blockchain-based storage solutions and Web3 projects. For
example, the decentralized storage network Storj uses MinIO internally as an
S3 gateway for its service. Storj is a blockchain-enabled network where
storage nodes are compensated with cryptocurrency; by using MinIO's gateway,
Storj offers an S3 front-end to developers while the actual storage is on a
blockchain-managed network. This showcases MinIO's flexibility - it can
operate as a middleware that speaks S3 on one side and interfaces with a
decentralized backend on the other. Similarly, projects like Filecoin or IPFS
could be fronted by MinIO to provide familiar APIs. Moreover, some blockchain
projects explore using MinIO on the network itself: for instance, a
Polkadot-based project (Acurast) has proposed integrating MinIO into their
decentralized compute cloud to provide storage for compute tasks, accessible
through a peer-to-peer layer. By running MinIO on decentralized compute nodes
(like phones or community-run servers), and exposing it via the network, they
aim to offer "true decentralized storage" for Polkadot-native projects with S3
compatibility. This indicates that MinIO is seen as a building block in Web3
infrastructure, bridging the gap between blockchain networks and conventional
storage interfaces. Its open-source nature and API standards make it a neutral
component that can plug into decentralized contexts without proprietary
barriers. Developers in blockchain ecosystems are packaging MinIO as part of
their toolset - for example, the SettleMint blockchain platform includes a
MinIO integration to give dApps an easy way to store files with reliability
and scale. All these points illustrate that MinIO not only coexists with
blockchain tech but actively complements it by handling off-chain data in a
decentralized architecture.
* **Data Immutability and Auditability:** Blockchains are valued for their
immutability and transparency. MinIO can reinforce these qualities in the
off-chain domain. With object locking enabled, any data written to MinIO can
be made tamper-proof for a specified duration or indefinitely. This is useful
for logging critical blockchain events or storing compliance data that should
align with the immutable ledger - once written, it cannot be changed, only
appended. In a consortium blockchain (say multiple organizations sharing a
Fabric or Corda ledger), they might agree to also use a shared MinIO instance
for documents or backups, with WORM settings to ensure no single party can
covertly alter off-chain data. Furthermore, MinIO's detailed audit logs
(logging every access) provide an audit trail that complements the
blockchain's own transaction history. For example, if a blockchain transaction
references a file in MinIO, one can cross-verify by checking MinIO's logs that
the file was indeed accessed or created at the right time by the authorized
party. This synergy between on-chain and off-chain auditability is important
in applications like supply chain provenance or digital identity, where both
ledger events and supporting documents must be provably untampered.
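The node-backup and content-addressing ideas flagged above can be sketched in a few lines of shell; the data directory, alias, and bucket names here are assumptions:
```bash
# Periodic node backup: archive a blockchain client's data directory
# and ship it to MinIO (pair with a WORM bucket for tamper-proof copies)
tar czf geth-$(date +%F).tar.gz /var/lib/geth
mc cp geth-$(date +%F).tar.gz myminio/node-backups/

# Content-addressed off-chain asset: use the file's SHA-256 digest as the
# object key, so anyone holding the on-chain hash can verify the content
HASH=$(sha256sum nft-image.png | cut -d' ' -f1)
mc cp nft-image.png "myminio/nft-assets/${HASH}"
```
Storing the asset under its own hash, in a locked bucket, mirrors on-chain immutability in exactly the way the dApp pattern above describes.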
## Key features
* **MinIO S3 Compatibility** – Fully supports the **AWS S3 API**, enabling
seamless integration with existing S3-based applications and tools.
* **Scalable Object Storage** – Designed for **petabyte-scale** data storage
with horizontal scaling capabilities.
* **Data Security & Encryption** – Provides **AES-256 encryption** for data at
rest and **TLS encryption** for data in transit.
* **Erasure Coding** – Ensures data protection and redundancy across distributed
nodes.
* **Fine-Grained Access Control** – Implements IAM policies for **role-based
access management**.
* **Efficient Performance** – Optimized for **fast data retrieval and
low-latency transactions**.
* **Flexible Deployment** – Can be deployed on **on-premises, cloud, or hybrid
infrastructure**.
***
## Minio as an object storage solution
MinIO serves as a **scalable and resilient object storage system**, providing
**reliable data storage and accessibility** across multiple environments. Its
compatibility with **S3 APIs** ensures that applications built for AWS S3 can
work with MinIO **without modification**.
### Benefits of using minio:
* **Cost-Effective Alternative to Managed Cloud Storage** – Avoids the pricing
models of public cloud storage providers.
* **Self-Hosted Data Management** – Enables complete **control over data
security and storage policies**.
* **High Availability & Resilience** – Supports **distributed clustering** for
fault tolerance and redundancy.
* **Optimized for Large Workloads** – Designed to handle **big data, analytics,
and AI/ML storage needs**.
***
## Use cases for minio
### 1. **Off-chain storage for blockchain applications**
MinIO enables **off-chain storage** for **smart contracts, audit logs, and
regulatory documents**, reducing blockchain transaction costs while maintaining
secure, accessible data.
### 2. **Backup and disaster recovery**
MinIO provides **reliable and redundant storage** for **business-critical data,
logs, and archives**. It integrates with **backup systems** to ensure data
protection and availability.
### 3. **Storing large-scale application data**
Applications handling **high-volume transactional data, logs, and analytics**
can use MinIO to **store and retrieve data efficiently**.
### 4. **AI/ML and big data workloads**
MinIO supports **high-speed storage and retrieval of structured and unstructured
data**, making it ideal for **machine learning models, analytics, and research
datasets**.
### 5. **NFT & digital asset storage**
For applications managing **NFTs, gaming assets, or tokenized data**, MinIO
provides **secure, scalable storage** while ensuring fast access to large media
files.
***
## Api & integration
MinIO in SettleMint provides industry-standard **S3-compatible APIs** for object storage operations. When you deploy MinIO in the SettleMint platform, you'll receive the following endpoint information:
| **Service** | **Endpoint Format** | **Purpose** |
| -------------- | --------------------------------------------------- | -------------------------------------- |
| **S3 API** | `https://your-minio-name.gke-region.settlemint.com` | Primary endpoint for S3 API operations |
| **Console UI** | `https://your-minio-name.gke-region.settlemint.com` | Web-based administration interface |
### S3 API operations
The MinIO S3 API supports standard S3 operations, including:
| **Operation** | **S3 API Method** | **Description** |
| ---------------------- | --------------------------- | --------------------------------------- |
| Create bucket | `PUT /bucket` | Creates a new storage bucket |
| Upload object | `PUT /bucket/key` | Uploads a file to specified bucket path |
| Download object | `GET /bucket/key` | Retrieves a file from storage |
| Delete object | `DELETE /bucket/key` | Removes a file from storage |
| List objects | `GET /bucket` | Lists objects in a bucket |
| Generate presigned URL | `GET /bucket/key?X-Amz-...` | Creates temporary access link to object |
These operations use the standard S3 protocol and authentication mechanisms, not simplified HTTP endpoints.
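For example, the presigned-URL row maps to a single AWS CLI call. This sketch assumes a profile configured as in the AWS CLI section below, plus a placeholder bucket and key:
```bash
# Create a temporary (1-hour) download link for an object
aws s3 presign s3://my-bucket/report.pdf \
  --expires-in 3600 \
  --endpoint-url https://your-minio-name.gke-region.settlemint.com \
  --profile settlemint-minio
```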
### Authentication credentials
Your SettleMint MinIO instance provides the following credentials:
| **Credential** | **Description** | **Where to Find** |
| -------------------- | ---------------------------------- | ----------------------------------- |
| **Access Key** | Username for S3 API authentication | MinIO instance details in dashboard |
| **Secret Key** | Password for S3 API authentication | MinIO instance details in dashboard |
| **Console Username** | Credentials for web console login | MinIO instance details in dashboard |
| **Console Password** | Password for web console access | MinIO instance details in dashboard |
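As a quick sanity check of these credentials, you can also register the instance with MinIO's own `mc` client (a hedged sketch; substitute your real endpoint and keys):
```bash
# Register the SettleMint-hosted instance under a local alias
mc alias set settlemint \
  https://your-minio-name.gke-region.settlemint.com \
  YOUR_MINIO_ACCESS_KEY YOUR_MINIO_SECRET_KEY

# Verify connectivity by listing buckets
mc ls settlemint
```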
## Ways to interact with MinIO
### Web Console Interface
SettleMint's MinIO provides a modern web-based management console for easy bucket and object management.
#### Key features
* **Visual file browser** – Upload, download, and manage files with drag-and-drop
* **Bucket management** – Create, delete, and configure buckets
* **Access policy configuration** – Set permissions and access controls
* **Monitoring dashboard** – View storage usage and performance metrics
#### To access the Console
1. Navigate to your MinIO instance URL in the SettleMint dashboard
2. Log in with your Console Username and Password
3. Use the interface to manage buckets and objects
The console provides an intuitive way to manage your storage without coding.
### AWS CLI Integration
The AWS Command Line Interface works seamlessly with SettleMint's MinIO service.
#### Setup
```bash
# Configure a named profile for your MinIO instance
aws configure --profile settlemint-minio
# Enter the following details when prompted:
# AWS Access Key ID: [Your MinIO Access Key]
# AWS Secret Access Key: [Your MinIO Secret Key]
# Default region name: us-east-1 (or any region, MinIO ignores this)
# Default output format: json
```
#### Example commands
```bash
# List buckets
aws s3 ls --endpoint-url https://your-minio-name.gke-region.settlemint.com --profile settlemint-minio
# Create a bucket
aws s3 mb s3://my-bucket --endpoint-url https://your-minio-name.gke-region.settlemint.com --profile settlemint-minio
# Upload a file
aws s3 cp local-file.txt s3://my-bucket/ --endpoint-url https://your-minio-name.gke-region.settlemint.com --profile settlemint-minio
# Download a file
aws s3 cp s3://my-bucket/remote-file.txt local-file.txt --endpoint-url https://your-minio-name.gke-region.settlemint.com --profile settlemint-minio
```
### JavaScript/Node.js Integration
The AWS SDK for JavaScript works with MinIO's S3-compatible API.
#### Installation
```bash
npm install aws-sdk
# or
bun add aws-sdk
```
#### Example usage
```javascript
const AWS = require('aws-sdk');
// Configure the S3 client
const s3 = new AWS.S3({
accessKeyId: 'YOUR_MINIO_ACCESS_KEY',
secretAccessKey: 'YOUR_MINIO_SECRET_KEY',
endpoint: 'https://your-minio-name.gke-region.settlemint.com',
s3ForcePathStyle: true, // Required for MinIO
signatureVersion: 'v4',
region: 'us-east-1' // MinIO ignores this, but it's required
});
// List buckets
async function listBuckets() {
try {
const data = await s3.listBuckets().promise();
console.log('Buckets:', data.Buckets);
return data.Buckets;
} catch (err) {
console.error('Error listing buckets:', err);
}
}
// Upload a file
async function uploadFile(bucketName, objectKey, fileContent) {
const params = {
Bucket: bucketName,
Key: objectKey,
Body: fileContent
};
try {
const data = await s3.upload(params).promise();
console.log('File uploaded successfully:', data.Location);
return data;
} catch (err) {
console.error('Error uploading file:', err);
}
}
// Download a file
async function downloadFile(bucketName, objectKey) {
const params = {
Bucket: bucketName,
Key: objectKey
};
try {
const data = await s3.getObject(params).promise();
console.log('File content:', data.Body.toString());
return data.Body;
} catch (err) {
console.error('Error downloading file:', err);
}
}
```
### Python Integration
Use the boto3 library to interact with MinIO's S3-compatible API.
#### Installation
```bash
pip install boto3
```
#### Example usage
```python
import boto3
from botocore.client import Config
# Configure the S3 client
s3 = boto3.client(
's3',
endpoint_url='https://your-minio-name.gke-region.settlemint.com',
aws_access_key_id='YOUR_MINIO_ACCESS_KEY',
aws_secret_access_key='YOUR_MINIO_SECRET_KEY',
config=Config(signature_version='s3v4'),
region_name='us-east-1' # MinIO ignores this, but boto3 requires it
)
# List buckets
def list_buckets():
try:
response = s3.list_buckets()
for bucket in response['Buckets']:
print(f'Bucket: {bucket["Name"]}')
return response['Buckets']
except Exception as e:
print(f'Error listing buckets: {e}')
# Upload a file
def upload_file(bucket_name, object_key, file_path):
try:
s3.upload_file(file_path, bucket_name, object_key)
print(f'File {file_path} uploaded to {bucket_name}/{object_key}')
return True
except Exception as e:
print(f'Error uploading file: {e}')
return False
# Download a file
def download_file(bucket_name, object_key, file_path):
try:
s3.download_file(bucket_name, object_key, file_path)
print(f'File downloaded to {file_path}')
return True
except Exception as e:
print(f'Error downloading file: {e}')
return False
```
### Go Integration
Use the AWS SDK for Go to interact with MinIO's S3-compatible API.
#### Installation
```bash
go get github.com/aws/aws-sdk-go/aws
go get github.com/aws/aws-sdk-go/aws/credentials
go get github.com/aws/aws-sdk-go/aws/session
go get github.com/aws/aws-sdk-go/service/s3
```
#### Example usage
```go
package main
import (
"fmt"
"os"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
)
func main() {
// Configure the S3 client
s3Config := &aws.Config{
Credentials: credentials.NewStaticCredentials("YOUR_MINIO_ACCESS_KEY", "YOUR_MINIO_SECRET_KEY", ""),
Endpoint: aws.String("https://your-minio-name.gke-region.settlemint.com"),
Region: aws.String("us-east-1"), // MinIO ignores this, but the SDK requires it
S3ForcePathStyle: aws.Bool(true), // Required for MinIO
}
newSession, err := session.NewSession(s3Config)
if err != nil {
fmt.Println("Error creating session:", err)
return
}
s3Client := s3.New(newSession)
// List buckets
listBucketsResult, err := s3Client.ListBuckets(nil)
if err != nil {
fmt.Println("Error listing buckets:", err)
return
}
fmt.Println("Buckets:")
for _, bucket := range listBucketsResult.Buckets {
fmt.Println(*bucket.Name)
}
// Upload a file
file, err := os.Open("file-to-upload.txt")
if err != nil {
fmt.Println("Error opening file:", err)
return
}
defer file.Close()
uploadParams := &s3.PutObjectInput{
Bucket: aws.String("my-bucket"),
Key: aws.String("file-key.txt"),
Body: file,
}
_, err = s3Client.PutObject(uploadParams)
if err != nil {
fmt.Println("Error uploading file:", err)
return
}
fmt.Println("File uploaded successfully")
}
```
## Security & access control
MinIO includes robust **security mechanisms** for protecting stored data:
* **IAM Policies** – Define user roles and permissions for managing storage
access (a policy sketch follows this list).
* **Data Encryption** – Supports **AES-256 encryption** for data at rest and
**TLS encryption** for data in transit.
* **Access Control Lists (ACLs)** – Configure **bucket-level and object-level
permissions** for restricted access.
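A hedged sketch of the IAM flow with mc; recent releases use `mc admin policy create`/`attach` (older ones use `add`/`set`), and the alias, policy name, bucket, and user are placeholders:
```bash
# Define a read-only policy scoped to a single bucket
cat > /tmp/readonly-reports.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::reports", "arn:aws:s3:::reports/*"]
    }
  ]
}
EOF

# Register the policy and attach it to a user
mc admin policy create myminio readonly-reports /tmp/readonly-reports.json
mc admin policy attach myminio readonly-reports --user alice
```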
### Managing storage credentials
Users interacting with MinIO storage typically require **secure credentials**:
| **Credential** | **Description** |
| --------------- | --------------------------------------------- |
| **Access Key** | Used for authenticating API requests. |
| **Secret Key** | Provides secure access to MinIO buckets. |
| **Bucket Name** | Represents the storage container for objects. |
| **Region** | Defines the MinIO deployment environment. |
These authentication mechanisms ensure **data integrity and controlled access**
across all storage operations.
***
## Best practices for using minio
* **Enable Encryption** – Encrypt stored data to **prevent unauthorized
access**.
* **Use IAM Policies for Access Control** – Implement **role-based policies** to
manage user permissions effectively.
* **Monitor Storage Utilization** – Regularly track **storage usage and
performance** to optimize efficiency (a command sketch follows this list).
* **Replicate Data for Redundancy** – Ensure high availability by using
**multi-node clusters**.
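These practices map to concrete commands; a hedged sketch (the alias and bucket are placeholders, and command names should be checked against your mc release):
```bash
# Monitoring: emit a Prometheus scrape config for this deployment
mc admin prometheus generate myminio

# Quick utilization check from the CLI
mc du myminio/critical-data

# Redundancy: inspect server-side replication configured on a bucket
mc replicate status myminio/critical-data
```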
***
## Additional resources
* **[MinIO Official Documentation](https://docs.min.io/)**
* **[MinIO S3 API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html)**
* **[MinIO GitHub Repository](https://github.com/minio/minio)**
***
file: ./content/docs/platform-components/dev-tools/ai-code-assistant.mdx
meta: {
"title": "AI code assistant",
"description": "RooCode Assistant"
}
## AI code assistant
RooCode is an AI-powered coding assistant integrated into SettleMint’s Code
Studio, replacing the former “AI Genie”. It enhances Code Studio by introducing
a more versatile and powerful AI engine directly in your development
environment. With RooCode, you can generate and improve code using natural
language, leverage multiple AI models for different tasks, and even integrate
custom or local AI instances to meet your project’s needs. This guide will walk
you through what RooCode is, how to set it up, and how to make the most of its
features.

### What is roocode and how does it enhance code studio?
RooCode is a next-generation AI assistant that lives in your Code Studio editor.
Think of it as your intelligent pair programmer: you can ask it to write code,
explain code, suggest improvements, or even create new project files – all
through simple prompts. Unlike the previous AI Genie (which was tied to a single
AI model), RooCode is built to be provider-agnostic and highly extensible. This
means it can connect to a range of AI models and services:
* Multiple AI Providers: Out of the box, RooCode supports popular AI providers
like OpenAI (GPT models), Anthropic (Claude), Google Vertex AI, AWS Bedrock,
and more. You’re not limited to one AI engine; you can choose the model that
best fits your task for better results.
* Advanced Context Awareness: RooCode can handle larger context windows and
smarter context management than before. It “remembers” more of your codebase
and conversation history, which helps it generate responses that consider your
entire project scope. In practice, you’ll notice more coherent help even as
your files grow or you switch between different parts of your project.
* Extensibility via MCP: RooCode supports the Model Context Protocol (MCP) , a
framework that lets the AI assistant use external tools and services,
including the [SettleMint MCP server](/platform-components/dev-tools/mcp).
This is a big enhancement for Code Studio – it means the AI can potentially
perform complex operations like looking up information in a knowledge base,
running test suites, or controlling a web browser for web-related tasks, all
from within the coding session. (By default, you’ll only use these features if
you choose to enable or add them, so the environment stays straightforward
unless you need the extra power.)
* Seamless Code Studio Integration: RooCode is fully embedded in SettleMint’s
Code Studio interface. You can access it through the familiar chat or prompt
panel. It
works alongside your code in real-time – for example, you can highlight a
piece of code and ask RooCode to explain or refactor it, and it will provide
the answer or suggestion in seconds. This tight integration means your
development workflow is smoother and more efficient, with AI help always at
your fingertips.
In summary, RooCode enhances Code Studio by making the AI assistance more
powerful, flexible, and context-aware. Whether you’re a developer looking for
quick code generation or an enterprise user needing compliance-friendly AI,
RooCode adapts to provide the best experience.
### Step-by-step setup and configuration
Getting started with RooCode in Code Studio is straightforward. Here’s how to
set up and configure it for your needs:
1. Open Code Studio: Log in to the SettleMint Console and open your Code Studio
environment. Ensure you have the latest version of the Code Studio where
RooCode is available (if SettleMint releases updates, make sure your
environment is updated). You should notice references to RooCode or AI
Assistant in the IDE interface.
2. Access RooCode Settings: In Code Studio, locate the RooCode settings panel.
This is accessible via a rocket icon in the Code Studio toolbar. Click it to
open the configuration settings.
3. Choose an AI Provider: In the RooCode settings, you’ll see an option to
select your AI provider or model. RooCode supports many providers; common
options include OpenAI, Anthropic, Google Vertex AI, AWS Bedrock, etc. Decide
which AI service you want to use for generating suggestions. For instance, if
you have an OpenAI API key and want to use GPT-4, select “OpenAI.” If you
prefer Anthropic’s Claude, choose “Anthropic” from the dropdown. (You can
change this later or even set up multiple profiles for different
providers.)
4. Enter API Keys/Credentials: After selecting a provider, you’ll need to
provide the API key or credentials for that service:
* For cloud providers like OpenAI or Anthropic: Enter your API key in the
provided field. You might also need to specify any additional info (for
example, an OpenAI Organization ID if applicable, or select the model
variant from a list). RooCode’s Anthropic integration, for example, will
have a field for the Anthropic API Key and a dropdown to pick which Claude
model to use.
* If you choose OpenAI Compatible or custom endpoints (for instance, via a
service like OpenRouter or Requesty that aggregates models), input the base
URL or choose the service name, and then provide the corresponding API key.
* For Azure OpenAI or enterprise-specific endpoints: you’ll typically enter
an endpoint URL and an API key (and possibly a deployment name) as required
by that service. RooCode allows configuring a custom base URL for providers
like Anthropic or OpenAI if needed, which is useful for enterprise proxies
or Azure endpoints.
5. Configure Model and Settings: Once your key is in place, select the exact
model or version you want to use. For example, choose “GPT-4” or a specific
Claude variant from the model dropdown. You can also adjust any optional
settings here:
* Context Limit or Mode Settings: Some providers/models allow adjusting the
maximum tokens or response length. RooCode might expose these or just
manage them automatically. (By default, it optimizes context usage for
you.)
* MCP and Tools: If you plan to use advanced features, ensure that MCP
servers are enabled in settings (this might be on by default). There may be
an option like “Enable MCP Tools” or similar. If you don’t need these, you
can leave it as is. (Advanced users can add specific MCP server
configurations later, this is optional and not required for basic usage.)
* Profiles (Optional): RooCode supports multiple configuration profiles. You
might see an option to create or switch “API Profiles.” This is useful if
you want to quickly switch between different providers or keys (say one
profile for OpenAI, another for a local model). For now, using the default
or a single profile is fine.
6. Save and Test: Save your settings (there might be a “Save” button or it may
apply changes immediately). Now test RooCode to confirm it’s working:
* Look for the RooCode chat panel or command input in Code Studio. It might
be a sidebar or bottom panel where you can type a prompt.
* Try a simple prompt like: “Hello RooCode” or ask it to write a snippet,
e.g., “// Prompt: write a Solidity function to add two numbers”.
* RooCode should respond with a code suggestion or answer. If it prompts for
any permissions (like file access, since RooCode can write to files),
approve it to allow the AI to assist with coding tasks.
* If you get an error (e.g., unauthorized or no response), double-check your
API key and internet connectivity, or see if the provider might have usage
limits. Adjust keys or settings as needed.
* With setup complete, you can now fully leverage RooCode in your development
workflow. Use natural language to ask for code, explanations, or
improvements. For example:
* “Create a unit test for the above function.” – RooCode will generate test
code.
* “I’m getting a validation error in this contract, can you help find the
bug?” – RooCode can analyze your code and point out potential issues.
* “Document this function.” – RooCode will write documentation comments
explaining the code.
* You can interact with it as you code, and it will utilize the configured AI
model to assist you. Feel free to adjust the provider or model as you see
what works best for your project.
### Key features and benefits of roocode
RooCode brings a rich set of features to improve your development experience in
Code Studio. Here are some of the highlights:
* 🎯 Multiple AI Models & Providers: Connect RooCode to various AI backends.
You’re not locked into one AI engine – choose from OpenAI’s GPT series,
Anthropic’s Claude, Google’s PaLM/Gemini (via Vertex AI), or even open-source
models through services like Ollama or LM Studio. This flexibility means you
can leverage the strengths of different models (e.g., one might be better at
code completion, another at explaining concepts) as needed.
* 📚 Advanced Context Management: RooCode is designed to handle large codebases
and lengthy conversations more gracefully. It uses intelligent context
management to include relevant parts of your project when generating answers.
For you, this means less time spent copy-pasting code to show the AI – RooCode
will automatically consider the files you’re working on and recent
interactions. The result is more informed suggestions that truly understand
your project’s context.
* 🤖 MCP (Model Context Protocol) Support: One of the standout advanced features
is RooCode’s ability to use MCP. This allows the AI assistant to interface
with external tools and services in a standardized way. For example, with an
appropriate MCP server configured, RooCode could perform a task like searching
your company’s knowledge base, querying a database for a value, or running a
custom script – all triggered by an AI command. This extends what the AI can
do beyond text generation, turning it into a mini agent that can act on your
behalf. (This is an optional power-user feature; you can use Code Studio and
RooCode fully without ever touching MCP, but it’s there for those who need to
integrate with other systems.)
* 🛠 In-Editor Tools & Actions: RooCode comes with a variety of built-in
capabilities accessible directly in the editor. It can read from and write to
files in your project (with your permission), meaning it can create new code
files or modify existing ones when you accept its suggestions. It can execute
terminal commands in the Code Studio environment – useful for running tests or
compiling code to verify solutions. It even has the ability to control a
browser or other tools via MCP, as mentioned. These actions help automate
routine tasks: imagine generating code and then automatically running your
test suite to verify it, all through AI assistance.
* 🔒 Customization & Control: Despite its power, RooCode gives you control over
the AI’s behavior. You can set custom instructions (for example, telling the
AI about project-specific guidelines or coding style preferences). You can
also adjust approval settings – e.g., require manual approval every time
RooCode tries to write to a file or run a command, or relax this for trusted
actions to speed up your workflow. For enterprise scenarios, features like
disabling MCP entirely or restricting certain actions are available for
compliance (administrators can centrally manage these policies). This balance
ensures you get helpful automation without sacrificing oversight.
* 🚀 Continuous Improvement: RooCode is regularly updated with performance
improvements and new features. Being a part of the SettleMint platform means
it’s tested for our specific use cases (like blockchain and smart contract
development) and tuned for reliability. Expect faster responses and new
capabilities over time – for instance, support for the latest AI models as
they become available, improved prompt handling, and more. All these benefits
come to you automatically through platform updates.
Together, these features make RooCode a robust AI co-developer. You’ll find that
repetitive tasks get easier, complex tasks become more approachable with AI
guidance, and your team’s overall development speed and quality can increase.
### Integrating personal api keys and enterprise/local instances
One of the great advantages of RooCode is its flexibility in how it connects to
AI models. Depending on your needs, you can either use personal API keys for
public AI services, or leverage local/enterprise instances for more control.
Here’s how to manage those scenarios:
* Using Your Own API Keys: If you have your own accounts with AI providers (such
as an OpenAI API subscription or access to Anthropic’s Claude), you can plug
those credentials into RooCode. In the RooCode settings profile, select the
provider and enter your API key (as described in the setup steps). This will
make Code Studio use your allotment of that AI service for all AI completions
and chats. The benefit is that you can tailor which model and version you use
(and often get the newest models immediately), and you have full visibility
into your usage via the provider’s dashboard. For instance, you might use your
OpenAI key to get GPT-4’s latest features. RooCode will respect any rate
limits or quotas on your key, and you’ll be billed by the provider according
to your plan with them (if applicable). This approach is ideal for individual
power users or teams who want the best models and are okay managing their own
API costs.
* Enterprise API Integrations: Enterprises often have special arrangements or
requirements for AI usage – such as using Azure OpenAI Service, deploying
models via AWS Bedrock, or using a private endpoint hosted in a secure
environment. RooCode supports these cases. You can configure a custom base URL
and API key to point RooCode to your enterprise’s AI endpoint. For example, if
your company uses Azure OpenAI, you’d select “OpenAI Compatible” and provide
the Azure endpoint URI and key. Similarly, for AWS Bedrock, choose the Bedrock
option and enter the necessary credentials. By doing so, all AI requests from
Code Studio will route through those enterprise channels, ensuring compliance
with your org’s data policies (no data leaves your approved environment). This
is crucial for sectors with strict data governance – you get the convenience
of AI coding assistance while keeping data management in line with internal
rules.
* Local Instances (Offline/On-Premises Use): RooCode can also work with local AI
models running on your own hardware. This is a powerful feature if you need
full offline capability or extra privacy. Using a tool like Ollama or LM
Studio, you can host language models on a local server that mimics the
OpenAI API. In RooCode’s settings, you would choose a “Local” provider option
(for instance, LM Studio appears as an option) and set the base URL to your
local server (often something like [http://localhost:PORT](http://localhost:PORT) with no API key
needed or a token if the local server requires one). Once configured, RooCode
will send all requests to the local model, meaning your code and queries never
leave your machine. Keep in mind, running local models may require a powerful
computer, and the AI’s performance depends on the model you use (some
open-source models are smaller than the big cloud ones). Still, this option is
fantastic for experimentation, working offline, or ensuring absolute
confidentiality for sensitive code.
* Switching and Managing Configurations: Through RooCode’s configuration
profiles feature, you can maintain multiple setups. For instance, you might
have one profile called “Personal-OpenAI” with your OpenAI key and GPT-4,
another called “Enterprise-Internal” for your company’s endpoint, and a third
called “Local-LLM” for a model on your machine. In Code Studio, you can
quickly switch between these depending on the project or context. This
flexibility means you’re never locked in – you can always choose the best
route for AI assistance on a case-by-case basis.
> Tip: Always ensure that when using external API keys or services, you follow
> the provider’s usage policies and secure your keys. Never commit API keys into
> your code repositories. Set them via the Code Studio interface or environment
> variables if supported. SettleMint’s platform will store any keys you enter in
> a secure way, but it’s good practice to manage and rotate keys periodically.
> For enterprise setups, work with your system administrators to obtain the
> correct endpoints and credentials.
By integrating your own keys or instances with RooCode, you essentially bring
your preferred AI brain into SettleMint’s Code Studio. This empowers you to use
the AI on your terms – whether prioritizing cost, performance, or compliance.
It’s all about giving you the choice.
### Conclusion and next steps
RooCode dramatically expands the AI capabilities of SettleMint Code Studio,
making it a versatile assistant for blockchain development and beyond. We’ve
covered what RooCode is, how to get it up and running, its key features, and how
to tailor it to your environment. As you start using RooCode, you may discover
new ways it can help in your daily coding tasks – don’t hesitate to explore
features like custom modes or ask RooCode itself for tips on how it can assist
you best!
For more detailed technical information, troubleshooting, and advanced tips,
check out the [official RooCode documentation](https://docs.roocode.com). The
RooCode community is also active – you can find resources like FAQ pages or
community forums (e.g., RooCode’s Discord or subreddit) via the documentation
site if you’re interested in deep dives or sharing experiences.
file: ./content/docs/platform-components/dev-tools/cli.mdx
meta: {
"title": "CLI",
"description": "Overview of the SettleMint CLI"
}
## About
The SettleMint CLI provides a command-line interface for interacting with the SettleMint platform. It enables you to manage your blockchain networks, deploy smart contracts, and configure your SettleMint infrastructure directly from the terminal.
## Usage
### As a dependency in your package.json
```bash
# npm
npm install @settlemint/sdk-cli
npx settlemint --version
# bun
bun add @settlemint/sdk-cli
bunx settlemint --version
# pnpm
pnpm add @settlemint/sdk-cli
pnpm exec settlemint --version
# yarn
yarn add @settlemint/sdk-cli
yarn settlemint --version
```
### Globally install the CLI
```bash
# npm
npm install -g @settlemint/sdk-cli
# bun
bun install -g @settlemint/sdk-cli
# pnpm
pnpm add -g @settlemint/sdk-cli
# yarn
yarn global add @settlemint/sdk-cli
```
You can access the CLI globally by running `settlemint` in your terminal.
## GitHub Action
Execute SettleMint CLI commands directly in your GitHub Actions workflows using our official GitHub Action.
For detailed setup and usage instructions, check out our [documentation](https://github.com/settlemint/settlemint-action/blob/main/README.md).
Basic example:
```yaml
steps:
- name: Get SettleMint CLI version
uses: settlemint/settlemint-action@main
with:
access-token: ${{ secrets.SETTLEMINT_ACCESS_TOKEN }}
command: "--version"
```
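Since the action simply shells out to the CLI, the same pattern extends to any command from this page. For instance, a deploy step might look like the following; the checkout step and secret name are illustrative:
```yaml
steps:
  - uses: actions/checkout@v4
  - name: Deploy smart contracts with the SettleMint CLI
    uses: settlemint/settlemint-action@main
    with:
      access-token: ${{ secrets.SETTLEMINT_ACCESS_TOKEN }}
      command: "scs hardhat deploy remote --accept-defaults"
```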
## Examples
### Get the version of the CLI
```bash
settlemint --version
```
### Get help for a command
The CLI uses a hierarchical command structure. You can navigate through available commands and subcommands using the `--help` flag at any level.
```bash
settlemint --help
settlemint platform --help
settlemint platform create --help
```
### Login to the platform
To use the SettleMint CLI, you first need to authenticate with the platform using a Personal Access Token (PAT) created on the SettleMint platform.
Visit [the documentation](https://console.settlemint.com/documentation/platform-components/security-and-authentication/personal-access-tokens) to learn how to create a Personal Access Token.
Then run the login command and paste your token when prompted:
```bash
settlemint login
```
### Creating a new project from a template
To create a new project from a template, use the `create` command with the `--template` flag:
```bash
settlemint create --project-name <project-name> --template <template>
```
#### Installing dependencies
To install the dependencies for your project, use the `dependencies` command.
```bash
# bun
bun install
bun run dependencies
# npm
npm install
npm run dependencies
# yarn
yarn install
yarn run dependencies
# pnpm
pnpm install
pnpm run dependencies
```
#### Connecting to your SettleMint infrastructure
After creating your project, you'll need to connect it to your SettleMint infrastructure. This requires setting up environment variables with your SettleMint credentials and infrastructure details.
You can use the `connect` command to automatically configure your project and select the services you want to connect to.
```bash
settlemint connect
```
#### Deploying your smart contracts and subgraphs
To deploy your smart contracts and subgraphs, you can use the `deploy` command.
```bash
settlemint scs hardhat deploy remote --accept-defaults
```
To deploy your subgraphs, use the `subgraph` command.
```bash
settlemint scs subgraph deploy --accept-defaults
```
#### Generating code for your dApp
After deploying your smart contracts and subgraphs, you can generate TypeScript code for your dApp to interact with them. The `codegen` command will generate type-safe code for your integrations with the services selected in the `connect` command.
```bash
settlemint codegen
```
#### Start your dApp in development mode
```bash
# bun
bun run dev
# npm
npm run dev
# yarn
yarn dev
# pnpm
pnpm dev
```
### Creating a new project from a smart contract template
To create a new project from a smart contract template, use the `create` command with the `--use-case` flag:
```bash
settlemint scs create --project-name <project-name> --use-case <use-case>
```
#### Testing your smart contracts
To test your smart contracts, you can use the `test` command.
```bash
settlemint scs foundry test
```
#### Deploying your smart contracts and subgraphs
To deploy your smart contracts and subgraphs, you can use the `deploy` command.
```bash
settlemint scs hardhat deploy remote --accept-defaults
```
To deploy your subgraphs, use the `subgraph` command.
```bash
settlemint scs subgraph deploy --accept-defaults
```
## API Reference
See the [documentation](/building-with-settlemint/cli/command-reference) for available commands.
## Contributing
We welcome contributions from the community! Please check out our [Contributing](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md) guide to learn how you can help improve the SettleMint SDK through bug reports, feature requests, documentation updates, or code contributions.
## License
The SettleMint SDK is released under the [FSL Software License](https://fsl.software). See the [LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for more details.
file: ./content/docs/platform-components/dev-tools/code-studio.mdx
meta: {
"title": "Code studio",
"description": "Code Studio introduction"
}
## Introduction
The Code Studio is a web-based Visual Studio Code IDE. It offers a comprehensive
toolset for building decentralized applications (dApps), including
pre-configured extensions and a seamless GitHub integration.
With the built-in SettleMint SDK Command Line Interface (CLI), you can easily
use platform services directly from within the Code Studio, making it easier to
build your dApp.
### Types of code studio
Currently, we offer the following types of Code Studio:
* [Smart contract sets](/platform-components/dev-tools/code-studio#solidity-contracts-ide) -
A powerful tool that accelerates the development of your smart contracts. This
code studio comes with pre-built smart contract set templates for your chosen
use case, which are easily customizable to match your needs. It also includes
compilation and migration scripts that drastically simplify deployment to the
relevant blockchain.

A Smart Contract Set is a
[code studio](/platform-components/dev-tools/code-studio) that comes
with a [smart contract set template](/platform-components/dev-tools/code-studio#template-library) for your chosen
use case. It is a powerful tool that accelerates the development of your smart
contracts.
You can choose from a wide variety of templates in our open-source
[template library](/platform-components/dev-tools/code-studio#template-library). Each template includes pre-built
smart contracts which you can then customize to meet your specific needs.

## Overview of the smart contract deployment process on SettleMint
SettleMint's smart contract sets include both Hardhat and Foundry, enabling you
to compile, test, and deploy using your preferred framework or a combination of
both. This flexibility allows you to optimize your development process to best
suit your project needs and preferences.
The following is a high-level overview of smart contract development processes
at SettleMint.
### 1. Adding a smart contract set
* **Add dev tool**: Navigate to the application you want to create the smart
contract set in, then to the dev tools page and press the button "Add dev
tool".
* **Code studio**: Select the "Code studio" option as the type of dev tool.
* **Smart contract set**: Select the "Smart contract set" option as the type of
Code studio.
* **Picking Your Template**: Pick the template of your choice.
For detailed instructions, please see
[add a smart contract set](/platform-components/dev-tools/code-studio#adding-a-smart-contract-set).
### 2. Compiling and configuring the smart contract
* **Compiling**: Convert your smart contract code into a format that the
blockchain can understand and execute.
* **Configuring**: SettleMint sets all the necessary configurations for you.
* **Purpose**: Tailors the deployment process to your specific requirements and
ensures your contract can run on the blockchain.
### 3. Deploying and interacting with the smart contract
* **Deploying**: Upload your compiled smart contract to a blockchain network.
* **Interacting**: Once deployed, interact with the smart contract through
transactions that call its functions.
* **Purpose**: Makes the contract accessible on the blockchain so users can
interact with it and utilize its features to perform actions defined in its
logic.
## Tools to use
At SettleMint, we provide the option to use either Foundry or Hardhat. Both of
these tools allow you to compile and deploy smart contracts within the
SettleMint IDE. The workflow in both frameworks is very similar: you compile and
then deploy the smart contracts.
### Foundry
Foundry is a toolkit for EVM development. It provides tools to compile, test,
and deploy smart contracts.
1. **Initialize Project**: Set up your project folder and deploy a Foundry smart
contract set.
2. **Write and Configure Contract**: Create your smart contract code in Solidity
and set up your project settings in a `foundry.toml` file if needed.
3. **Compile and Deploy Contract**: Convert your Solidity code into bytecode and
deploy your compiled contract to the blockchain network of your choice.
### Hardhat
Hardhat is a development environment for EVM software. It provides a flexible
and extensible ecosystem for building, testing, and deploying smart contracts.
1. **Initialize Project**: Set up your project folder and deploy a Hardhat smart
contract set.
2. **Write and Configure Contract**: Create your smart contract code in Solidity
and set up your project settings in a `hardhat.config.js` file if needed.
3. **Compile and Deploy Contract**: Convert your Solidity code into bytecode and
deploy your compiled contract to the blockchain network of your choice.
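As an illustration of step 2, here is a minimal `hardhat.config.ts` sketch; the network entry, URL, and environment variables are placeholders rather than SettleMint-specific settings:

```typescript
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    // Placeholder: point this at your node's JSON-RPC endpoint.
    settlemint: {
      url: process.env.BLOCKCHAIN_NODE_URL ?? "http://localhost:8545",
      accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [],
    },
  },
};

export default config;
```

A deploy can then target the network by name, e.g. `npx hardhat run scripts/deploy.ts --network settlemint` (the script path is hypothetical).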
## Key points
* **Smart Contracts**: Self-executing programs with predefined rules.
* **Compiling and Configuring**: Converts code into a format the blockchain can
run and tailors the deployment process.
* **Deploying and Interacting**: Uploads the compiled code to the blockchain and
  makes it accessible for interaction.
By following these steps and using the appropriate tools, you can easily create,
compile, and deploy smart contracts to automate and secure your business
processes on the blockchain.
SettleMint's smart contract templates serve as open-source, ready-to-use
foundations for blockchain application development, significantly accelerating
the deployment process. These templates enable users to quickly customize and
extend their blockchain applications, leveraging tested and community-enhanced
frameworks to reduce development time and accelerate market entry.
## Open-source smart contract templates under the MIT license
Benefit from the expertise of the blockchain community and trust in the
reliability of your smart contracts. These templates are vetted and used by
major enterprises and institutions, ensuring enhanced security and confidence in
your deployments.
## Template library
The programming languages for smart contracts differ depending on the protocol:
* For **EVM-compatible networks** (like Ethereum), smart contracts are written
in **Solidity**.
* For **Hyperledger Fabric**, smart contracts (also called chaincode) are
written in **TypeScript** or **Go**.
***
### Solidity contracts IDE
| Template | Description |
| ------------------------------------------------------------------------------------------- | ----------------------------------------- |
| [Empty](https://github.com/settlemint/solidity-empty) | A minimal smart contract in Solidity |
| [ERC20 Token](https://github.com/settlemint/solidity-token-erc20) | Standard ERC20 token implementation |
| [ERC20 with MetaTx](https://github.com/settlemint/solidity-token-erc20-metatx) | ERC20 token with meta-transaction support |
| [ERC20 with Crowdsale](https://github.com/settlemint/solidity-token-erc20-crowdsale) | ERC20 token with integrated crowdsale |
| [ERC1155 Token](https://github.com/settlemint/solidity-token-erc1155) | Multi-token standard (ERC1155) |
| [ERC721](https://github.com/settlemint/solidity-token-erc721) | Standard NFT token (ERC721) |
| [ERC721a](https://github.com/settlemint/solidity-token-erc721a) | Gas-optimized NFT (ERC721A) |
| [ERC721 Generative Art](https://github.com/settlemint/solidity-token-erc721-generative-art) | NFT with generative art logic |
| [Soulbound Token](https://github.com/settlemint/solidity-token-soulbound) | Non-transferable token |
| [Supply Chain](https://github.com/settlemint/solidity-supplychain) | Asset tracking across supply chain |
| [State Machine](https://github.com/settlemint/solidity-statemachine) | State transition logic |
| [Diamond Bond](https://github.com/settlemint/solidity-diamond-bond) | Bond issuance and tracking |
| [Attestation Service](https://github.com/settlemint/solidity-attestation-service) | Verifiable claim attestations |
***
### Chaincode templates (hyperledger fabric)
| Template | Description |
| ------------------------------------------------------------------------------------------- | ---------------------------------------- |
| [Empty (TypeScript)](https://github.com/settlemint/chaincode-typescript-empty) | Minimal TypeScript chaincode |
| [Empty with PDC (TypeScript)](https://github.com/settlemint/chaincode-typescript-empty-pdc) | Chaincode using private data collections |
| [Empty (Go)](https://github.com/settlemint/chaincode-go-empty) | Minimal Go chaincode |
***
## Create your own smart contract templates for your consortium
Within the self-managed SettleMint platform, you can create
and add your own templates for use within your consortium. This fosters a
collaborative environment where templates can be reused and built upon,
promoting innovation and efficiency within your network.
To get started, visit:
[SettleMint GitHub Repository](https://github.com/settlemint/solidity-empty)
file: ./content/docs/platform-components/dev-tools/mcp.mdx
meta: {
"title": "MCP",
"description": "Using the Model Context Protocol (MCP) to connect LLM to blockchain"
}
## Introduction to the Model Context Protocol (MCP)
The Model Context Protocol (MCP) is a framework designed to enhance the
capabilities of AI agents and large language models (LLMs) by providing
structured, contextual access to external data. It acts as a bridge between AI
models and a variety of data sources such as blockchain networks, external APIs,
databases, and developer environments. In essence, MCP allows an AI model to
pull in relevant context from the outside world, enabling more informed
reasoning and interaction.

MCP is not a single tool but a standardized protocol. This means it defines how
an AI should request information and how external systems should respond. By
following this standard, different tools and systems can communicate with AI
agents in a consistent way. The result is that AI models can go beyond their
trained knowledge and interact with live data and real-world applications
seamlessly.
### Why does MCP matter?
Modern AI models are powerful but traditionally operate as closed systems - they
generate responses based on patterns learned from training data, without
awareness of the current state of external systems. This lack of live context
can be a limitation. MCP matters because it bridges that gap, allowing AI to
become context-aware and action-oriented in real time.
Here are a few reasons MCP is important:
* Dynamic Data Access: MCP allows AI models to interact seamlessly with external
ecosystems (e.g., blockchain networks or web APIs). This means an AI agent can
query a database or blockchain ledger at runtime to get the latest
information, rather than relying solely on stale training data.
* Real-Time Context: By providing structured, real-time access to data (such as
smart contract states or application status), MCP ensures that the AI's
decisions and responses are informed by the current state of the world. This
contextual awareness leads to more accurate and relevant outcomes.
* Extended Capabilities: With MCP, AI agents can execute actions, not just
retrieve data. For example, an AI might use MCP to trigger a blockchain
transaction or update a record. This enhances the agent's decision-making
ability with precise, domain-specific context and the power to act on it.
* Reduced Complexity: Developers benefit from MCP because it offers a unified
interface to various data sources. Instead of writing custom integration code
for each external system, an AI agent can use MCP as a single conduit for many
sources. This streamlines development and reduces errors.
Overall, MCP makes AI more aware, adaptable, and useful by connecting it to live
data and enabling it to perform tasks in external systems. It's a significant
step toward AI that can truly understand and interact with the world around it.
### Key features and benefits
MCP introduces several key features that offer significant benefits to both AI
developers and end-users:
* Contextual Awareness: AI models gain the ability to access live information
and context on demand. Instead of operating in isolation, an AI agent can ask
for specific data (like "What's the latest block on the blockchain?" or "Fetch
the user profile from the database") and use that context to tailor its
responses. This results in more accurate and situationally appropriate
outcomes.
* Blockchain Integration: MCP provides a direct connection to on-chain data and
smart contract functionality. An AI agent can query blockchain state (for
example, checking a token balance or reading a contract's variable) and even
invoke contract methods via MCP. This opens up possibilities for AI-managed
blockchain operations, DeFi automation, and more, all through a standardized
interface.
* Automation Capabilities: With structured access to external systems, AI agents
can not only read data but also take actions. For instance, an AI could
automatically adjust parameters of a smart contract, initiate a transaction,
or update a configuration file in a repository. These automation capabilities
allow the creation of intelligent agents that manage infrastructure or
applications autonomously, under specified guidelines.
* Security and Control: MCP is designed with security in mind (covered in more
detail later). It provides a controlled environment where access to external
data and operations can be monitored and sandboxed. This ensures that an AI
agent only performs allowed actions, and sensitive data can be protected
through authentication and permissioning within the MCP framework.
By combining these features, MCP greatly expands what AI agents can do. It
transforms passive models into active participants that can sense and influence
external systems - all in a safe, structured manner.
## How MCP works
### The core concept
At its core, MCP acts as middleware between an AI model and external data
sources. Rather than embedding all possible knowledge and tools inside the AI,
MCP keeps the AI model lean and offloads the data fetching and execution tasks
to external services. The AI and the MCP communicate through a defined protocol:
1. AI Agent (Client): The AI agent (e.g., an LLM or any AI-driven application)
formulates a request for information or an action. This request is expressed
in a standard format understood by MCP. For example, the AI might ask, "Get
the value of variable X from smart contract Y on blockchain Z," or "Fetch the
contents of file ABC from the project directory."
2. MCP Server (Mediator): The MCP server receives the request and interprets it.
It acts as a mediator that knows how to connect to various external systems.
The server will determine which external source is needed for the request
(blockchain, API, file system, etc.) and use the appropriate connector or
handler to fulfill the query.
3. External Data Source: This can be a blockchain node, an API endpoint, a
database, or even a local development environment. The MCP server
communicates with the external source, for example by making an API call,
querying a blockchain node, or reading a file from disk.
4. Contextual Response: The external source returns the requested data (or the
result of an action). The MCP server then formats this information into a
structured response that the AI agent can easily understand. This might
involve converting raw data into a simpler JSON structure or text format.
5. Return to AI: The MCP server sends the formatted data back to the AI agent.
The AI can then incorporate this data into its reasoning or continue its
workflow with this new context. From the perspective of the AI model, it's as
if it just extended its knowledge or took an external action successfully.
The beauty of MCP is that it abstracts away the differences between various data
sources. The AI agent doesn't need to know how to call a blockchain or how to
query a database; it simply makes a generic request and MCP handles the rest.
This modular approach means new connectors can be added to MCP for additional
data sources without changing how the AI formulates requests.
### Technical workflow
Let's walk through a typical technical workflow with MCP step by step:
1. AI Makes a Request: The AI agent uses an MCP SDK or API to send a request.
For example, in code it might call something like mcp.fetch("settlemint",
"getContractState", params) - where "settlemint" could specify a target MCP
server or context.
2. MCP Parses the Request: The MCP server (in this case, perhaps the SettleMint
MCP server) receives the request. The request will include an identifier of
the desired operation and any necessary parameters (like which blockchain
network, contract address, or file path is needed).
3. Connector Activation: Based on the request type, MCP selects the appropriate
connector or module. For a blockchain query, it might use a blockchain
connector configured with network access and credentials. For a file system
query, it would use a file connector with the specified path.
4. Data Retrieval/Action Execution: MCP executes the action. If it's a data
retrieval, it fetches the data: e.g., calls a blockchain node's API to get
contract state, or reads from a local file. If it's an action (like executing
a transaction or writing to a file), it will perform that operation using the
credentials and context it has.
5. Data Formatting: The raw result is often in a format specific to the source
(JSON from a web API, binary from a file, etc.). MCP will format or serialize
this result into a standard format (commonly JSON or a text representation)
that can be easily consumed by the AI model. It may also include metadata,
like timestamps or success/failure status.
6. Response to AI: MCP sends the formatted response back to the AI agent. In
practice, this could be a return value from an SDK function call or a message
sent over a websocket or HTTP if using a networked setup.
7. AI Continues Processing: With the new data, the AI can adjust its plan,
generate a more informed answer, or trigger further actions. For example, if
the AI was asked a question about a user's blockchain balance, it now has the
balance from MCP and can include it in its answer. If the AI was autonomously
managing something, it might decide the next step based on the data.
This workflow happens quickly and often behind the scenes. From a high-level
perspective, MCP extends the AI's capabilities on-the-fly. The AI remains
focused on decision-making and language generation, while MCP handles the grunt
work of fetching data and executing commands in external systems.
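As a schematic illustration of these seven steps, consider the sketch below; the request/response shapes and the HTTP transport are assumptions made for the sketch, not the actual MCP wire format:
```typescript
// Illustrative request/response shapes - the real MCP protocol may differ.
interface McpRequest {
  target: string;                  // which MCP server/context, e.g. "settlemint"
  operation: string;               // e.g. "getContractState"
  params: Record<string, unknown>; // network, contract address, file path, ...
}

interface McpResponse<T = unknown> {
  ok: boolean;
  data?: T;                        // formatted result, commonly JSON (step 5)
  error?: string;
  meta?: { timestamp: string };    // optional metadata
}

// Step 1: the agent formulates a request; steps 2-6 happen inside the MCP server.
async function mcpFetch(serverUrl: string, request: McpRequest): Promise<McpResponse> {
  const res = await fetch(serverUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  return (await res.json()) as McpResponse; // step 6: response back to the AI
}

// Step 7: the AI continues processing with the returned context.
async function demo() {
  const response = await mcpFetch("http://localhost:3000", {
    target: "settlemint",
    operation: "getContractState",
    params: { contract: "0xABC...", variable: "totalSupply" },
  });
  if (response.ok) {
    console.log("Contract state:", response.data);
  }
}

demo();
```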
### Key components
MCP consists of a few core components that work together to make the above
workflow possible:
```mermaid
flowchart LR
    A[AI Agent / LLM] -- "(1) request" --> B{{MCP Server}}
    subgraph Server[MCP Server]
        B --> C1[Blockchain Connector]
        B --> C2[API Connector]
        B --> C3[File System Connector]
    end
    C1 -- fetch/query --> D[(Blockchain Network)]
    C2 -- API call --> E[(External API/Data Source)]
    C3 -- read/write --> F[(Local File System)]
    D -- data --> C1
    E -- data --> C2
    F -- file data --> C3
    B -- "(2) formatted data" --> A
```
* MCP Server: This is the central service or daemon that runs and listens for
requests from AI agents. It can be thought of as the brain of MCP that
coordinates everything. The MCP server is configured to know about various
data sources and how to connect to them. In practice, you might run an MCP
server process locally or on a server, and your AI agent will communicate with
it via an API (like HTTP requests, RPC calls, or through an SDK).
* MCP SDK / Client Library: To simplify usage, MCP provides SDKs in different
programming languages. Developers include these in their AI agent code. The
SDK handles the communication details with the MCP server, so a developer can
simply call functions or methods (like mcp.getData(...)) without manually
constructing network calls. The SDK ensures requests are properly formatted
and sends them to the MCP server, then receives the response and hands it to
the AI program.
* Connectors / Adapters: These are modules or plugins within the MCP server that
know how to talk to specific types of external systems. One connector might
handle blockchain interactions (with sub-modules for Ethereum, Hyperledger,
etc.), another might handle web APIs (performing HTTP calls), another might
manage local OS operations (file system access, running shell commands). Each
connector understands a set of actions and data formats for its domain.
Connectors make MCP extensible - new connectors can be added to support new
systems or protocols.
* Configuration Files: MCP often uses configuration (like JSON or YAML) to know
which connectors to activate and how to reach external services. For example,
you might configure an MCP instance with the URL of your blockchain node, API
keys for external services, or file path permissions. The configuration
ensures that at runtime the MCP server has the info it needs to carry out
requests safely and correctly.
* Security Layer: Since MCP can access sensitive data and perform actions, it
includes a security layer. This may involve API keys (like the --pat personal
access token in the example) or authentication for connecting to blockchains
and databases. The security layer also enforces permissions: it can restrict
what an AI agent is allowed to do via MCP, preventing misuse. For instance,
you might allow read-only access to some data but not allow any write or
state-changing operations without additional approval.
These components together make MCP robust and flexible. The separation of
concerns (AI vs MCP vs Connectors) means each part can evolve or be maintained
independently. For example, if a new blockchain is introduced, you can add a
connector for it without changing how the AI asks for data. Or if the AI model
is updated, it can still use the same MCP server and connectors as before.
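To picture that separation of concerns, here is a purely illustrative sketch of a connector interface and a dispatching server; none of these types are the actual SettleMint MCP internals:
```typescript
// Illustrative only - not the actual SettleMint MCP server implementation.
interface Connector {
  readonly name: string;                 // e.g. "blockchain", "api", "filesystem"
  canHandle(operation: string): boolean;
  execute(operation: string, params: Record<string, unknown>): Promise<unknown>;
}

class McpServer {
  constructor(private readonly connectors: Connector[]) {}

  // Dispatch a request to the first connector that understands the operation.
  async handle(operation: string, params: Record<string, unknown>): Promise<unknown> {
    const connector = this.connectors.find((c) => c.canHandle(operation));
    if (!connector) {
      throw new Error(`No connector registered for operation: ${operation}`);
    }
    return connector.execute(operation, params);
  }
}
```
Adding support for a new system then means registering one more `Connector`, without touching how the AI formulates its requests.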
## SettleMint's implementation of MCP
SettleMint is a leading blockchain integration platform that has adopted and
implemented MCP to empower AI agents with blockchain intelligence and
infrastructure control. In SettleMint's implementation, MCP serves as a bridge
between AI-driven applications and blockchain environments managed or monitored
by SettleMint's platform. This means AI agents can interact deeply not only with
blockchain resources (like smart contracts, transactions, and network data) but
also with the underlying infrastructure (nodes, middlewares) through a
standardized interface.
By leveraging MCP, SettleMint enables scenarios where:
* An AI assistant can query on-chain data in real time, such as retrieving the
state of a smart contract or the latest block information.
* Autonomous agents can manage blockchain infrastructure tasks (deploying
contracts, adjusting configurations) without human intervention, guided by AI
decision-making.
* Developers using SettleMint can integrate advanced AI functionalities into
their blockchain applications with relatively little effort, because MCP
handles the heavy lifting of connecting the two worlds.
```mermaid
sequenceDiagram
participant AI as AI Model (Agent)
participant MCP as MCP Server
participant Chain as The Graph / Portal / Node
participant API as External API
AI->>MCP: (1) Query request (e.g., get contract state)
Note over AI,MCP: AI asks MCP for on-chain data
MCP-->>AI: (2) Acknowledgement & processing
MCP->>Chain: (3) Fetch data from blockchain
Chain-->>MCP: (4) Return contract state
MCP->>API: (5) [Optional] Fetch related off-chain data
API-->>MCP: (6) Return external data
MCP-->>AI: (7) Send combined response
Note over AI,MCP: AI receives on-chain data (and any other context)
AI->>MCP: (8) Action request (e.g., execute transaction)
MCP->>Chain: (9) Submit transaction to blockchain
Chain-->>MCP: (10) Return tx result/receipt
MCP-->>AI: (11) Confirm action result
```
In summary, SettleMint's version of MCP extends their platform's capabilities,
allowing for AI-driven blockchain operations. This combination brings together
the trust and transparency of blockchain with the adaptability and intelligence
of AI.
### Capabilities and features
SettleMint's MCP implementation comes with a rich set of capabilities tailored
for blockchain-AI integration:
* Seamless IDE Integration: SettleMint's tools work within common developer
environments, meaning you can use MCP in the context of your development
workflow. For example, if you're coding a smart contract or an application, an
AI agent (like a code assistant) can use MCP to fetch blockchain state or
deploy contracts right from your IDE. This streamlines development by giving
real-time blockchain feedback and actions as you code.
* Automated Contract Management: AI agents can interact with and even modify
smart contracts autonomously through MCP. This includes deploying new
contracts, calling functions on existing contracts, or listening to events.
For instance, an AI ops agent could detect an anomaly in a DeFi contract and
use MCP via SettleMint to trigger a safeguard function on that contract, all
automatically.
* AI-Driven Analytics: Through MCP, AI models can analyze blockchain data for
insights and predictions. SettleMint's platform might feed transaction
histories, token movements, or network metrics via MCP to an AI model
specialized in analytics. The AI could then, say, identify patterns of
fraudulent transactions or predict network congestion and feed those insights
back into the blockchain application or to administrators.
These features demonstrate how SettleMint's integration of MCP isn't just a
basic link to blockchain, but a comprehensive suite that makes blockchain data
and control accessible to AI in a meaningful way. It effectively makes
blockchain networks intelligent by allowing AI to continuously monitor and react
to on-chain events.
### Usage in AI and blockchain
By combining the strengths of AI and blockchain via MCP, SettleMint unlocks
several powerful use cases:
* AI-Powered Smart Contract Management: Smart contracts often need tuning or
updates based on external conditions (like market prices or usage load). An AI
agent can use MCP to monitor these conditions and proactively adjust smart
contract parameters (or advise humans to do so) through SettleMint's tools.
This creates more adaptive and resilient blockchain applications.
* Real-time Blockchain Monitoring: Instead of static dashboards, imagine an AI
that watches blockchain transactions and alerts you to important events. With
MCP, an AI can continuously query the chain for specific patterns (like large
transfers, or certain contract events) and then analyze and explain these to a
user or trigger automated responses.
* Autonomous Governance: In blockchain governance (e.g., DAOs), proposals and
decisions could be informed by AI insights. Using MCP, an AI agent could
gather all relevant on-chain data about a proposal's impact, simulate
different outcomes, and even cast votes or execute approved decisions
automatically on the blockchain. This merges AI decision support with
blockchain's execution capabilities.
* Cross-System Orchestration: SettleMint's MCP doesn't have to be limited to
blockchain data. AI can use it to orchestrate actions that span blockchain and
off-chain systems. For example, an AI agent might detect that a supply chain
shipment (tracked on a blockchain) is delayed, and then through MCP, update an
off-chain database or send a notification to a logistics system. The AI acts
as an intelligent middleware, using MCP to ensure both blockchain and
traditional systems stay in sync.
In practice, using MCP with SettleMint's SDK (discussed next) makes implementing
these scenarios much easier. Developers can focus on the high-level logic of
what the AI should do, while the MCP layer (managed by SettleMint's platform)
deals with the complexity of connecting to the blockchain and other services.
## Practical examples
To solidify the understanding, let's look at some concrete examples of how MCP
can be used in a development workflow and in applications, especially with
SettleMint's tooling.
### Implementing AI in a development workflow
Suppose you are a developer working on a blockchain project, and you want to use
an AI assistant to help manage your smart contracts. You can integrate MCP into
your workflow so that the AI assistant has direct access to your project's
context (code, files) and the blockchain environment.
For instance, you might use a command (via a CLI or an npm script) to start an
MCP server that is pointed at your project directory and connected to the
SettleMint platform. An example command could be:
```sh
npx -y @settlemint/sdk-mcp@latest --path=/Users/llm/asset-tokenization-kit/ --pat=sm_pat_xxx
```
Here's what this command does:
* npx is used to execute the latest version of the @settlemint/sdk-mcp package
without needing a separate install.
* \--path=/Users/llm/asset-tokenization-kit/ specifies the local project
directory that the MCP server will have context about. This could allow the AI
to query files or code in that directory through MCP and have access to the
environment settings from `settlemint connect`.
* \--pat=sm\_pat\_xxx provides a Personal Access Token (PAT) for authenticating
with SettleMint's services. This token (masked here as xxx) is required for
the MCP server to connect to the SettleMint platform on your behalf.
After running this command, you would have a local MCP server up and running,
connected to both your local project and the SettleMint platform. Your AI
assistant (say a specialized Claude Sonnet-based agent) could then do things
like:
* Ask MCP to write forms and lists based on the data you indexed in, for
  example, The Graph.
* Query the live blockchain to get the current state of a contract you're
  working on, to verify something or test changes.
* Deploy an extra node in your network.
* List and later mint new tokens in your stablecoin contract.
This greatly enhances a development workflow by making the AI an active
participant that can fetch and act on real information, rather than just being a
passive code suggestion tool.
#### Using the SettleMint MCP in Cursor
Cursor (0.47.0 and up) provides a global `~/.cursor/mcp.json` file where you can
configure the SettleMint MCP server. Point the path to the folder of your
program, and set your personal access token.
> The reason we use the global MCP configuration file is that your personal
> access token should never, ever be committed into git. Putting the
> configuration in the project folder, which is also possible in Cursor, opens
> up that possibility.
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Open Cursor and navigate to Settings/MCP. You should see a green active status
after the server is successfully connected.
#### Using the SettleMint MCP in Claude Desktop
Open Claude desktop and navigate to Settings. Under the Developer tab, tap Edit
Config to open the configuration file and add the following configuration:
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Save the configuration file and restart Claude desktop. From the new chat
screen, you should see a hammer (MCP) icon appear with the new MCP server
available.
#### Using the SettleMint MCP in Cline
Open the Cline extension in VS Code and tap the MCP Servers icon. Tap Configure
MCP Servers to open the configuration file and add the following configuration:
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Save the configuration file. Cline should automatically reload the
configuration. You should see a green active status after the server is
successfully connected.
#### Using the SettleMint MCP in Windsurf
Open Windsurf and navigate to the Cascade assistant. Tap on the hammer (MCP)
icon, then Configure to open the configuration file and add the following
configuration:
```json
{
"mcpServers": {
"settlemint": {
"command": "npx",
"args": [
"-y",
"@settlemint/sdk-mcp@latest",
"--path=/Users/llm/asset-tokenization-kit/",
"--pat=sm_pat_xxx"
]
}
}
}
```
Save the configuration file and reload by tapping Refresh in the Cascade
assistant. You should see a green active status after the server is successfully
connected.
### AI-driven blockchain application or agent
To illustrate a real-world scenario, consider an AI-driven Decentralized Finance
(DeFi) application. In DeFi, conditions change rapidly (prices, liquidity, user
activity), and it's critical to respond quickly.
Scenario: You have a smart contract that manages an automatic liquidity pool.
You want to ensure it remains balanced - if one asset's price drops or the pool
becomes unbalanced, you'd like to adjust fees or parameters automatically.
Using MCP in this scenario:
1. An AI agent monitors the liquidity pool via MCP. Every few minutes, it
requests the latest pool balances and external price data (from on-chain or
off-chain oracles) through the MCP server.
2. MCP fetches the latest state from the blockchain (pool reserves, recent
trades) and maybe calls an external price API for current market prices, then
returns that data to the AI.
3. The AI analyzes the data. Suppose it finds that Asset A's proportion in the
pool has drastically increased relative to Asset B (perhaps because Asset A's
price fell sharply).
4. The AI decides that to protect the pool, it should increase the swap fee
temporarily (a common measure to discourage arbitrage draining the pool).
5. Through MCP, the AI calls a function on the smart contract to update the fee
parameter. The MCP's blockchain connector handles creating and sending the
transaction to the network via SettleMint's infrastructure.
6. The transaction is executed on-chain, adjusting the fee. MCP catches the
success response and any relevant event (like an event that the contract
might emit for a fee change).
7. The AI receives confirmation and can log the change or inform administrators
that it took action.
In this use case, MCP enabled the AI to be a real-time guardian of the DeFi
contract. Without MCP, the AI would not have access to the live on-chain state
or the ability to execute a change. With MCP, the AI becomes a powerful
autonomous agent that ensures the blockchain application adapts to current
conditions.
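As a rough sketch of those seven steps, the agent's monitoring loop might look like the following; `McpClient`, its methods, and all addresses and URLs are hypothetical stand-ins, and the ratio check is a naive placeholder for the AI's actual analysis:
```typescript
// Hypothetical client interface - a stand-in, not a documented SDK API.
interface McpClient {
  fetch(target: string, operation: string, params: object): Promise<any>;
  execute(target: string, operation: string, params: object): Promise<any>;
}

const POOL_ADDRESS = "0xPoolAddress";                 // placeholder
const PRICE_FEED_URL = "https://example.com/prices";  // placeholder

async function rebalanceGuard(mcp: McpClient): Promise<void> {
  // Steps 1-2: pull pool reserves and market prices through MCP.
  const pool = await mcp.fetch("settlemint", "getContractState", {
    contract: POOL_ADDRESS,
    variables: ["reserveA", "reserveB"],
  });
  const prices = await mcp.fetch("settlemint", "callExternalApi", {
    url: PRICE_FEED_URL,
  });

  // Steps 3-4: analyze. A simple value-ratio check stands in for the AI's reasoning.
  const ratio =
    (Number(pool.reserveA) * prices.assetA) /
    (Number(pool.reserveB) * prices.assetB);

  if (ratio > 1.2 || ratio < 0.8) {
    // Steps 5-6: act - raise the swap fee via a state-changing contract call.
    const receipt = await mcp.execute("settlemint", "sendTransaction", {
      contract: POOL_ADDRESS,
      method: "setSwapFee",
      args: [50], // hypothetical fee parameter (e.g. 0.50%)
    });
    // Step 7: log the change or notify administrators.
    console.log("Swap fee raised due to pool imbalance", { ratio, receipt });
  }
}
```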
This is just one example. AI-driven blockchain applications could range from
automatic NFT marketplace management, to AI moderators for DAO proposals, to
intelligent supply chain contracts that react to sensor data. MCP provides the
pathway for these AI agents to communicate and act where it matters - on the
blockchain and connected systems.
file: ./content/docs/platform-components/dev-tools/sdk.mdx
meta: {
"title": "SDK",
"description": "SDK introduction"
}
A Software Development Kit (SDK) is a collection of tools, libraries, and
documentation designed to help developers build applications efficiently. SDKs
provide pre-built functions, APIs, and utilities that eliminate the need to
write code from scratch, making development faster and more reliable. Whether
for mobile apps, cloud integrations, or blockchain applications, SDKs streamline
the process by offering standardized solutions that ensure compatibility and
ease of use. By leveraging SDKs, developers can focus on innovation rather than
the complexities of low-level coding, leading to quicker deployment and enhanced
functionality.

The SettleMint Blockchain SDK is designed to simplify blockchain development and
integration for enterprises and developers. It provides a modular set of tools
that allow seamless interaction with the SettleMint Blockchain Transformation
Platform, enabling smart contract deployment, dApp connectivity, and blockchain
infrastructure management. With support for multiple blockchain protocols,
developer-friendly APIs, and integrations with frameworks like JavaScript,
TypeScript, and Next.js, the SDK empowers organizations to build scalable
blockchain applications effortlessly. By reducing complexity and offering
plug-and-play functionality, SettleMint’s SDK accelerates blockchain adoption
across industries.
**[NPM Package](https://www.npmjs.com/package/@settlemint/sdk-cli)**
**[GitHub Repository](https://github.com/settlemint/sdk)**
### Key features
* **Modular Design** – Pick and use only the packages you need.
* **Seamless Integration** – Connect applications to the SettleMint platform
effortlessly.
* **Blockchain Agnostic** – Supports multiple protocols and networks.
* **Developer-Friendly** – Works with JavaScript, TypeScript, CLI, and
frameworks like Next.js.
* **Open Source** – Contributions are welcome to enhance functionality.
## Getting started
### Prerequisites
* **Node.js or Bun**: Use the latest LTS version
* **Package manager**: npm, bun, pnpm and yarn are supported
* **SettleMint Account**: Sign up at
[console.settlemint.com](https://console.settlemint.com)
* **[Personal Access Token (PAT)](/platform-components/security-and-authentication/personal-access-tokens)**:
Required for authenticated SDK usage
### SDK module overview
The **SettleMint SDK** is a modular suite of tools designed for seamless
blockchain development. Each package is specialized for different blockchain
functionalities, allowing developers to integrate only the components they need.
| Package | Description | NPM |
| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| [`@settlemint/sdk-blockscout`](sdk/blockscout) | Blockscout integration module for SettleMint SDK, enabling blockchain explorer and analytics functionality | [](https://www.npmjs.com/package/@settlemint/sdk-blockscout) |
| [`@settlemint/sdk-cli`](sdk/cli) | Command-line interface for SettleMint SDK, providing development tools and project management capabilities | [](https://www.npmjs.com/package/@settlemint/sdk-cli) |
| [`@settlemint/sdk-eas`](sdk/eas) | Ethereum Attestation Service (EAS) integration for SettleMint SDK | [](https://www.npmjs.com/package/@settlemint/sdk-eas) |
| [`@settlemint/sdk-hasura`](sdk/hasura) | Hasura and PostgreSQL integration module for SettleMint SDK, enabling database operations and GraphQL queries | [](https://www.npmjs.com/package/@settlemint/sdk-hasura) |
| [`@settlemint/sdk-ipfs`](sdk/ipfs) | IPFS integration module for SettleMint SDK, enabling decentralized storage and content addressing | [](https://www.npmjs.com/package/@settlemint/sdk-ipfs) |
| [`@settlemint/sdk-js`](sdk/js) | Core JavaScript SDK for integrating SettleMint's blockchain platform services into your applications | [](https://www.npmjs.com/package/@settlemint/sdk-js) |
| [`@settlemint/sdk-mcp`](sdk/mcp) | MCP interface for SettleMint SDK, providing development tools and project management capabilities | [](https://www.npmjs.com/package/@settlemint/sdk-mcp) |
| [`@settlemint/sdk-minio`](sdk/minio) | MinIO integration module for SettleMint SDK, providing S3-compatible object storage capabilities | [](https://www.npmjs.com/package/@settlemint/sdk-minio) |
| [`@settlemint/sdk-next`](sdk/next) | Next.js integration module for SettleMint SDK, providing React components and middleware for web applications | [](https://www.npmjs.com/package/@settlemint/sdk-next) |
| [`@settlemint/sdk-portal`](sdk/portal) | Portal API client module for SettleMint SDK, providing access to smart contract portal services and APIs | [](https://www.npmjs.com/package/@settlemint/sdk-portal) |
| [`@settlemint/sdk-thegraph`](sdk/thegraph) | TheGraph integration module for SettleMint SDK, enabling querying and indexing of blockchain data through subgraphs | [](https://www.npmjs.com/package/@settlemint/sdk-thegraph) |
| [`@settlemint/sdk-utils`](sdk/utils) | Shared utilities and helper functions for SettleMint SDK modules | [](https://www.npmjs.com/package/@settlemint/sdk-utils) |
| [`@settlemint/sdk-viem`](sdk/viem) | Viem (TypeScript Interface for Ethereum) module for SettleMint SDK | [](https://www.npmjs.com/package/@settlemint/sdk-viem) |
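As a starting point, the core package exposes a client factory. The sketch below assumes the `createSettleMintClient` entry point and its `accessToken`/`instance` options as shown in the SDK repository; the option values are placeholders:
```typescript
import { createSettleMintClient } from "@settlemint/sdk-js";

// Placeholder values - supply your own PAT and console URL.
const client = createSettleMintClient({
  accessToken: process.env.SETTLEMINT_ACCESS_TOKEN!, // Personal Access Token
  instance: "https://console.settlemint.com",        // your platform instance
});

// The client then exposes typed resources (workspaces, applications, ...);
// see the module pages in the table above for concrete operations.
```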
## How to contribute
We welcome contributions from the community! Please check out our
[Contributing guide](https://github.com/settlemint/sdk/blob/main/.github/CONTRIBUTING.md)
to learn how you can help improve the SettleMint SDK through bug reports,
feature requests, documentation updates, or code contributions.
## Reporting issues
If you find a bug or have a suggestion, please open an issue on GitHub:
* Go to the [Issues Page](https://github.com/settlemint/sdk/issues).
* Click **New Issue** and provide a detailed description.
## License
The SettleMint SDK is released under the
**[FSL Software License](https://fsl.software/)**. See the
[LICENSE](https://github.com/settlemint/sdk/blob/main/LICENSE) file for details.
file: ./content/docs/platform-components/middleware-and-api-layer/attestation-indexer.mdx
meta: {
"title": " Ethereum attestation service (EAS)",
"description": "A comprehensive guide to implementing and using the Ethereum Attestation Service (EAS) for creating, managing, and verifying on-chain attestations",
"keywords": [
"ethereum",
"eas",
"attestation",
"blockchain",
"web3",
"smart contracts",
"verification",
"schema registry",
"resolver"
]
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## 1. Introduction to EAS
### What is EAS?
Ethereum Attestation Service (EAS) is a decentralized protocol that allows users
to create, verify, and manage attestations (verifiable claims) on the Ethereum
blockchain. It provides a standardized way to make claims about data,
identities, or events that can be independently verified by others.
### Why use EAS?
* **Decentralization**: No central authority is needed to verify claims.
* **Interoperability**: Standardized schemas allow for cross-platform
compatibility.
* **Security**: Attestations are secured by the Ethereum blockchain.
* **Transparency**: All attestations are publicly verifiable.
***
## 2. Key concepts
### Core components
1. **SchemaRegistry**:
* A smart contract that stores and manages schemas.
* Schemas define the structure and data types of attestations, ensuring that
all attestations conform to a predefined format.
2. **EAS Contract**:
* The main contract that handles the creation and management of attestations.
* It interacts with the `SchemaRegistry` to ensure that attestations adhere
to the defined schemas.
3. **Attestations**:
* Verifiable claims stored on the blockchain.
* Created and managed by the `EAS Contract`.
4. **Resolvers**:
* Optional contracts that provide additional validation logic for
attestations.
***
## 3. How EAS works
```mermaid
graph TD
SchemaRegistry["SchemaRegistry"]
UsersSystems["Users/Systems"]
EASContract["EAS Contract"]
Verifiers["Verifiers"]
Attestations["Attestations"]
SchemaRegistry -- "Defines Data Structure" --> EASContract
UsersSystems -- "Interact" --> EASContract
EASContract -- "Creates" --> Attestations
Verifiers -- "Verify" --> Attestations
```
### Workflow
1. **Schema Definition**: Start by defining a schema using the
**SchemaRegistry** contract.
2. **Attestation Creation**: Use the **EAS Contract** to create attestations
based on the schema.
3. **Optional Validation**: Resolvers can be used for further validation logic.
4. **On-chain Storage**: Attestations are securely stored and retrievable
on-chain.
***
## 4. Contract deployment
Before deploying the EAS contracts, you must add the smart contract set to your
project.
### Adding the smart contract set
1. **Navigate to the Dev tools Section**: Go to the application dashboard of the
application where you want to deploy the EAS contracts, then navigate to the
**Dev tools** section in the left sidebar.
2. **Select the Attestation Service Set**: From there, click on **Add a dev
tool**, choose **Code Studio** and then **Smart Contract Set**. Choose the
**Attestation Service** template.
3. **Customize**: Modify the set as needed for your specific project.
4. **Save**: Save the configuration.
For detailed instructions, visit the
[Smart Contract Sets Documentation](/platform-components/dev-tools/code-studio).
***
### Deploying the contracts
Once the contract set is ready, you can deploy it using either the **Task Menu**
in the SettleMint IDE or via the **Terminal**.
#### Deploy using the task menu
1. **Open the Task Menu**:
* In the SettleMint Integrated IDE, access the **Task Menu** from the
sidebar.
2. **Select Deployment Task**:
* Choose the task corresponding to the **Hardhat- Reset & Deploy to platform
network** module.
3. **Monitor Deployment Logs**:
* The terminal output will display the deployment progress and contract
addresses.
#### Deploy using the terminal
1. **Prepare the Deployment Module**:\
Ensure the module is defined in `ignition/modules/main.ts`:
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomEASModule = buildModule("EASDeployment", (m) => {
const schemaRegistry = m.contract("SchemaRegistry", [], {});
const EAS = m.contract("EAS", [schemaRegistry], {});
return { schemaRegistry, EAS };
});
export default CustomEASModule;
```
2. **Run the Deployment Command**:\
Execute the following command in your terminal:
   ```bash
   bunx settlemint scs hardhat deploy remote -m ignition/modules/main.ts
   ```
3. **Monitor Deployment Logs**:
* The terminal output will display the deployment progress and contract
addresses.
***
## 5. Registering a schema
### Example use case
Imagine building a service where users prove ownership of their social media
profiles. The schema might include:
* **Username**: A unique identifier for the user.
* **Platform**: The social media platform name (e.g., Twitter).
* **Handle**: The user's handle on that platform (e.g., `@coolcoder123`).
### Example
```javascript
const { ethers } = require("ethers");
// Configuration object for network and contract details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // The network endpoint (e.g., Ethereum mainnet/testnet)
registryAddress: "YOUR_SCHEMA_REGISTRY_ADDRESS_HERE", // Where the SchemaRegistry contract lives
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key (keep this secret!)
};
// Create connection to blockchain and setup contract interaction
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const schemaRegistry = new ethers.Contract(
config.registryAddress,
[
// This event helps us track when new schemas are registered
"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)",
// This function lets us register new schemas
"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)",
],
signer
);
async function registerSchema() {
try {
// Define what data fields our attestations will contain
const schema = "string username, string platform, string handle";
const resolverAddress = ethers.ZeroAddress; // No special validation needed
const revocable = true; // Attestations can be revoked if needed
console.log("🚀 Registering schema for social media ownership...");
// Send the transaction to create our schema
const tx = await schemaRegistry.register(
schema,
resolverAddress,
revocable
);
const receipt = await tx.wait(); // Wait for blockchain confirmation
// Get our schema's unique ID from the transaction
const schemaUID = receipt.logs[0].topics[1];
console.log("✅ Schema registered successfully! UID:", schemaUID);
} catch (error) {
console.error("❌ Error registering schema:", error.message);
}
}
registerSchema();
```
***
## 6. Creating attestations
### Example use case
Let's create an attestation that proves:
* **Username**: `awesome_developer`
* **Platform**: `GitHub`
* **Handle**: `@devmaster`
### Example
```javascript
const { EAS, SchemaEncoder } = require("@ethereum-attestation-service/eas-sdk");
const { ethers } = require("ethers");
// Setup our connection details
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract address
privateKey: "YOUR_PRIVATE_KEY_HERE", // Your wallet's private key
schemaUID: "YOUR_SCHEMA_UID_HERE", // The UID from when we registered our schema
};
// Connect to the blockchain
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
const signer = new ethers.Wallet(config.privateKey, provider);
const eas = new EAS(config.easAddress);
eas.connect(signer);
// Create an encoder that matches our schema structure
const schemaEncoder = new SchemaEncoder(
"string username, string platform, string handle"
);
// The actual data we want to attest to
const attestationData = [
{ name: "username", value: "awesome_developer", type: "string" },
{ name: "platform", value: "GitHub", type: "string" },
{ name: "handle", value: "@devmaster", type: "string" },
];
async function createAttestation() {
try {
// Convert our data into the format EAS expects
const encodedData = schemaEncoder.encodeData(attestationData);
// Create the attestation
const tx = await eas.attest({
schema: config.schemaUID,
data: {
recipient: ethers.ZeroAddress, // Public attestation (no specific recipient)
expirationTime: 0, // Never expires
revocable: true, // Can be revoked later if needed
data: encodedData, // Our encoded attestation data
},
});
    // Wait for confirmation; the EAS SDK resolves with the new attestation UID
    const newAttestationUID = await tx.wait();
    console.log("✅ Attestation created successfully! UID:", newAttestationUID);
} catch (error) {
console.error("❌ Error creating attestation:", error.message);
}
}
createAttestation();
```
## 7. Verifying attestations
Verification is essential to ensure the integrity and authenticity of
attestations. You can verify attestations using one of the following methods:
1. **Using the EAS SDK**: Perform lightweight, off-chain verification
programmatically.
2. **Using a Custom Smart Contract Resolver**: Add custom on-chain validation
logic for attestations.
### Choose your verification method
#### Verification using the EAS sdk
The EAS SDK provides an easy way to verify attestations programmatically, making
it ideal for off-chain use cases.
##### Example
```javascript
const { ethers } = require("ethers");
const { EAS } = require("@ethereum-attestation-service/eas-sdk");
// Basic configuration for connecting to the network
const config = {
rpcUrl: "YOUR_RPC_URL_HERE", // Network endpoint
easAddress: "YOUR_EAS_CONTRACT_ADDRESS_HERE", // Main EAS contract
};
async function verifyAttestation(attestationUID) {
// Setup our blockchain connection
const provider = new ethers.JsonRpcProvider(config.rpcUrl);
  const eas = new EAS(config.easAddress);
  eas.connect(provider);
console.log("🔍 Verifying attestation:", attestationUID);
// Try to find the attestation on the blockchain
const attestation = await eas.getAttestation(attestationUID);
// Check if we found anything
if (!attestation) {
console.error("❌ Attestation not found");
return;
}
// Show the attestation details
console.log("✅ Attestation Details:");
console.log("Attester:", attestation.attester); // Who created this attestation
console.log("Data:", attestation.data); // The actual attested data
console.log("Revoked:", attestation.revoked ? "Yes" : "No"); // Is it still valid?
}
// Replace with your attestation UID
verifyAttestation("YOUR_ATTESTATION_UID_HERE");
```
##### Key points
* **Lightweight**: Suitable for most off-chain verifications.
* **No Custom Logic**: Fetches and verifies data stored in EAS.
#### Verification using a custom smart contract resolver
Custom resolvers enable on-chain validation with additional business rules or
logic.
##### Example: trusted attester verification
The following smart contract resolver ensures that attestations are valid only
if made by trusted attesters.
###### Smart contract code
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
// This contract checks if attestations come from trusted sources
contract CustomResolver {
// Keep track of which addresses we trust to make attestations
mapping(address => bool) public trustedAttesters;
// When deploying, we set up our initial list of trusted attesters
constructor(address[] memory initialAttesters) {
for (uint256 i = 0; i < initialAttesters.length; i++) {
trustedAttesters[initialAttesters[i]] = true;
}
}
// EAS calls this function before accepting an attestation
function validate(
bytes32 attestationUID, // Unique ID of the attestation
address attester, // Who's trying to create the attestation
bytes memory data // The attestation data (unused in this example)
) external view returns (bool) {
// Only allow attestations from addresses we trust
if (!trustedAttesters[attester]) {
return false;
}
return true;
}
}
```
###### Deploying the resolver with hardhat ignition
Deploy this custom resolver using the Hardhat Ignition framework.
```typescript
import { buildModule } from "@nomicfoundation/hardhat-ignition/modules";
const CustomResolverDeployment = buildModule("CustomResolver", (m) => {
const initialAttesters = ["0xTrustedAddress1", "0xTrustedAddress2"];
const resolver = m.contract("CustomResolver", [initialAttesters], {});
return { resolver };
});
export default CustomResolverDeployment;
```
Run the following command in your terminal to deploy:
```bash
npx hardhat ignition deploy ignition/modules/main.ts
```
###### Linking the resolver to a schema
When registering a schema, include the resolver's address for on-chain
validation.
```javascript
const resolverAddress = "YOUR_DEPLOYED_RESOLVER_ADDRESS";
const schema = "string username, string platform, string handle";
const tx = await schemaRegistry.register(schema, resolverAddress, true);
const receipt = await tx.wait();
const schemaUID = receipt.logs[0].topics[1]; // Extract the UID from the Registered event
console.log("✅ Schema with resolver registered! UID:", schemaUID);
```
###### Validating attestations with the resolver
To validate an attestation, call the `validate` function of your deployed
resolver contract.
```javascript
const resolver = new ethers.Contract(
"YOUR_RESOLVER_ADDRESS",
["function validate(bytes32, address, bytes) external view returns (bool)"],
provider
);
const isValid = await resolver.validate(
"YOUR_ATTESTATION_UID",
"ATTESTER_ADDRESS",
"ATTESTATION_DATA"
);
console.log("✅ Is the attestation valid?", isValid);
```
##### Key points
* **Customizable Rules**: Add your own validation logic to the resolver.
* **On-Chain Validation**: Ensures attestations meet specific conditions before
they are considered valid.
***
### When to use each method?
* **EAS SDK**: Best for off-chain applications where simple validation suffices.
* **Custom Resolver**: Use for on-chain validation with additional rules, such
as verifying trusted attesters or specific data formats.
## 8. Using the attestation indexer
### What is the attestation indexer?
The SettleMint attestation indexer is a specialized middleware that indexes and provides API access to blockchain-based attestation data. Based on the [EAS Indexing Service](https://github.com/ethereum-attestation-service/eas-indexing-service), it serves as the critical bridge between your EAS smart contracts and applications, offering:
* **Real-time indexing** of on-chain attestation events
* **High-performance GraphQL API** for complex data queries
* **Scalable architecture** designed for production workloads
* **Seamless integration** with any EVM-compatible chain
SettleMint's implementation provides a complete attestation solution with both the required smart contracts and optimized indexing service, eliminating the complexity of manually setting up and maintaining these components separately.
### How the indexer works
The attestation indexer continuously monitors your EAS contract deployments, capturing events like:
* Schema registrations
* Attestation creations
* Attestation revocations
It processes these events into a structured database and exposes them through a GraphQL API. This architectural approach delivers several key benefits:
1. **Performance optimization**: Queries execute in milliseconds instead of requiring multiple on-chain RPC calls
2. **Cost efficiency**: Eliminate expensive blockchain read operations
3. **Enhanced functionality**: Complex filtering, sorting, and pagination capabilities
4. **Simplified integration**: Standard GraphQL API familiar to developers
### Setup attestation indexer
1. Go to your application's **Middleware** section
2. Click "Add a middleware"
3. Select "Attestation Indexer"
4. Configure with your contract addresses:
* EAS Contract: `EAS contract address`
* Schema Registry: `Schema Registry contract address`
Once deployed, the indexer automatically begins synchronizing with the blockchain, processing historical attestations and monitoring for new events in real-time.
### Querying attestations
#### Connection details
After deployment:
1. Go to your Attestation Indexer
2. Click "Connections" tab
3. You'll find your GraphQL endpoint URL
4. Create an Application Access Token (Settings → Application Access Tokens)
#### Using the GraphQL UI
The indexer provides a built-in GraphQL playground where you can interactively test queries. Click "GraphQL UI" in your indexer to access it.
#### Example query implementation
```javascript
// Example fetch request to query attestations by schema
async function queryAttestations(schemaId) {
const response = await fetch("YOUR_INDEXER_URL", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: "Bearer YOUR_APP_TOKEN",
},
body: JSON.stringify({
query: `{
attestations(
where: {
schemaId: {
equals: "${schemaId}"
}
}
orderBy: { time: desc }
take: 10
) {
id
attester
recipient
revoked
time
data
}
}`,
}),
});
const data = await response.json();
return data.data.attestations;
}
// Usage example:
const schemaId = "YOUR_SCHEMA_ID"; // From the schema registration step
const attestations = await queryAttestations(schemaId);
console.log("Attestations:", attestations);
```
#### Advanced querying capabilities
The EAS indexer supports sophisticated query patterns for complex use cases:
```graphql
{
  # Find attestations by a specific attester (aliased, since two
  # attestations fields in one request need distinct response names)
  byAttester: attestations(
where: {
attester: { equals: "0x123..." }
revoked: { equals: false }
}
) {
id
time
data
}
# Get schema information with related attestations
  schema(where: { id: "0xschema..." }) {
id
schema
revocable
attestations(take: 5) {
id
attester
}
}
  # Filter attestations by timestamp range (time is stored as unix epoch seconds)
  byTimeRange: attestations(
    where: {
      time: {
        gte: 1672531200 # 2023-01-01T00:00:00Z
        lt: 1675209600 # 2023-02-01T00:00:00Z
      }
}
) {
id
time
}
}
```
### Building applications with the indexer
The attestation indexer enables powerful use cases including:
* **Verifiable credentials** for KYC/identity solutions
* **Reputation systems** based on on-chain attestations
* **Trust frameworks** with customizable validation rules
* **Certification management** for professional qualifications
* **Claim verification** for insurance or compliance scenarios
By combining SettleMint's EAS smart contracts with the attestation indexer, developers can rapidly implement robust attestation-based solutions while maintaining full control over their data models and business logic.
## 9. Integration studio implementation
For those using the Integration Studio, we've created a complete flow
implementation of the EAS interactions. This flow automates the entire process
covered in this guide.
### Flow overview
The flow includes:
* EAS Configuration Setup
* Schema Registration
* Attestation Creation
* Attestation Verification
* Debug nodes for monitoring results
### Installation
1. In Integration Studio, go to Import → Clipboard
2. Paste the flow JSON below
3. Click Import
Copy the complete Node-RED flow JSON below:
```json
[
{
"id": "eas_flow",
"type": "tab",
"label": "EAS Attestation Flow",
"disabled": false,
"info": ""
},
{
"id": "setup_inject",
"type": "inject",
"z": "eas_flow",
"name": "Inputs: RpcUrl, Registry address,Eas address, Private key",
"props": [
{
"p": "rpcUrl",
"v": "RPC-URL/API-KEY",
"vt": "str"
},
{
"p": "registryAddress",
"v": "REGISTERY-ADDRESS",
"vt": "str"
},
{
"p": "easAddress",
"v": "EAS-ADDRESS",
"vt": "str"
},
{
"p": "privateKey",
"v": "PRIVATE-KEY",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 250,
"y": 120,
"wires": [["setup_function"]]
},
{
"id": "setup_function",
"type": "function",
"z": "eas_flow",
"name": "Setup Global Variables",
"func": "// Initialize provider with specific network parameters\nconst provider = new ethers.JsonRpcProvider(msg.rpcUrl)\n\nconst signer = new ethers.Wallet(msg.privateKey, provider);\n\n// Initialize EAS with specific gas settings\nconst EAS = new eassdk.EAS(msg.easAddress);\neas.connect(signer);\n\n// Store in global context\nglobal.set('provider', provider);\nglobal.set('signer', signer);\nglobal.set('eas', eas);\nglobal.set('registryAddress', msg.registryAddress);\n\nmsg.payload = 'EAS Configuration Initialized';\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
},
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
}
],
"x": 580,
"y": 120,
"wires": [["setup_debug"]]
},
{
"id": "register_inject",
"type": "inject",
"z": "eas_flow",
"name": "Register Schema",
"props": [],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 120,
"y": 260,
"wires": [["register_function"]]
},
{
"id": "register_function",
"type": "function",
"z": "eas_flow",
"name": "Register Schema",
"func": "// Get global variables set in init\nconst signer = global.get('signer');\nconst registryAddress = global.get('registryAddress');\n\n// Initialize SchemaRegistry contract\nconst schemaRegistry = new ethers.Contract(\n registryAddress,\n [\n \"event Registered(bytes32 indexed uid, address indexed owner, string schema, address resolver, bool revocable)\",\n \"function register(string calldata schema, address resolver, bool revocable) external returns (bytes32)\"\n ],\n signer\n);\n\n// Define what data fields our attestations will contain\nconst schema = \"string username, string platform, string handle\";\nconst resolverAddress = \"0x0000000000000000000000000000000000000000\"; // No special validation needed\nconst revocable = true; // Attestations can be revoked if needed\n\ntry {\n const tx = await schemaRegistry.register(schema, resolverAddress, revocable);\n const receipt = await tx.wait();\n\n const schemaUID = receipt.logs[0].topics[1];\n // Store schemaUID in global context for later use\n global.set('schemaUID', schemaUID);\n\n msg.payload = {\n success: true,\n schemaUID: schemaUID,\n message: \"Schema registered successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "ethers",
"module": "ethers"
}
],
"x": 310,
"y": 260,
"wires": [["register_debug"]]
},
{
"id": "create_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: Schema uid",
"props": [
{
"p": "schemaUID",
"v": "SCHEMA-UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 130,
"y": 400,
"wires": [["create_function"]]
},
{
"id": "create_function",
"type": "function",
"z": "eas_flow",
"name": "Create Attestation",
"func": "// Get global variables\nconst EAS = global.get('eas');\nconst schemaUID = msg.schemaUID;\n\n// Create an encoder that matches our schema structure\nconst schemaEncoder = new eassdk.SchemaEncoder(\"string username, string platform, string handle\");\n\n// The actual data we want to attest to\nconst attestationData = [\n { name: \"username\", value: \"awesome_developer\", type: \"string\" },\n { name: \"platform\", value: \"GitHub\", type: \"string\" },\n { name: \"handle\", value: \"@devmaster\", type: \"string\" }\n];\n\ntry {\n // Convert our data into the format EAS expects\n const encodedData = schemaEncoder.encodeData(attestationData);\n\n // Create the attestation\n const tx = await eas.attest({\n schema: schemaUID,\n data: {\n recipient: \"0x0000000000000000000000000000000000000000\", // Public attestation\n expirationTime: 0, // Never expires\n revocable: true, // Can be revoked later if needed\n data: encodedData // Our encoded attestation data\n }\n });\n\n // Wait for confirmation and get the result\n const receipt = await tx.wait();\n\n // Store attestation UID for later verification\n global.set('attestationUID', receipt.attestationUID);\n\n msg.payload = {\n success: true,\n attestationUID: receipt,\n message: \"Attestation created successfully!\"\n };\n} catch (error) {\n msg.payload = {\n success: false,\n error: error.message\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 330,
"y": 400,
"wires": [["create_debug"]]
},
{
"id": "verify_inject",
"type": "inject",
"z": "eas_flow",
"name": "Input: Attestation UID",
"props": [
{
"p": "attestationUID",
"v": "Attestation UID",
"vt": "str"
}
],
"repeat": "",
"crontab": "",
"once": false,
"onceDelay": "",
"topic": "",
"x": 140,
"y": 540,
"wires": [["verify_function"]]
},
{
"id": "verify_function",
"type": "function",
"z": "eas_flow",
"name": "Verify Attestation",
"func": "const EAS = global.get('eas');\nconst attestationUID = msg.attestationUID;\n\ntry {\n const attestation = await eas.getAttestation(attestationUID);\n const schemaEncoder = new eassdk.SchemaEncoder(\"string pshandle, string socialMedia, string socialMediaHandle\");\n const decodedData = schemaEncoder.decodeData(attestation.data);\n\n msg.payload = {\n isValid: !attestation.revoked,\n attestation: {\n attester: attestation.attester,\n time: new Date(Number(attestation.time) * 1000).toLocaleString(),\n expirationTime: attestation.expirationTime > 0 \n ? new Date(Number(attestation.expirationTime) * 1000).toLocaleString()\n : 'Never',\n revoked: attestation.revoked\n },\n data: {\n psHandle: decodedData[0].value.toString(),\n socialMedia: decodedData[1].value.toString(),\n socialMediaHandle: decodedData[2].value.toString()\n }\n };\n} catch (error) {\n msg.payload = { \n success: false, \n error: error.message,\n details: JSON.stringify(error, Object.getOwnPropertyNames(error))\n };\n}\n\nreturn msg;",
"outputs": 1,
"timeout": "",
"noerr": 0,
"initialize": "",
"finalize": "",
"libs": [
{
"var": "eassdk",
"module": "@ethereum-attestation-service/eas-sdk"
},
{
"var": "ethers",
"module": "ethers"
}
],
"x": 350,
"y": 540,
"wires": [["verify_debug"]]
},
{
"id": "setup_debug",
"type": "debug",
"z": "eas_flow",
"name": "Setup Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 770,
"y": 120,
"wires": []
},
{
"id": "register_debug",
"type": "debug",
"z": "eas_flow",
"name": "Register Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 500,
"y": 260,
"wires": []
},
{
"id": "create_debug",
"type": "debug",
"z": "eas_flow",
"name": "Create Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 520,
"y": 400,
"wires": []
},
{
"id": "verify_debug",
"type": "debug",
"z": "eas_flow",
"name": "Verify Result",
"active": true,
"tosidebar": true,
"console": false,
"tostatus": false,
"complete": "payload",
"targetType": "msg",
"x": 530,
"y": 540,
"wires": []
},
{
"id": "1322bb7438d96baf",
"type": "comment",
"z": "eas_flow",
"name": "Initialize EAS Config",
"info": "",
"x": 110,
"y": 60,
"wires": []
},
{
"id": "e5e3294119a80c1b",
"type": "comment",
"z": "eas_flow",
"name": "Register a new schema",
"info": "/* SCHEMA GUIDE\nEdit the schema variable to define your attestation fields.\nFormat: \"type name, type name, type name\"\n\nAvailable Types:\n- string (text)\n- bool (true/false)\n- address (wallet address)\n- uint256 (number)\n- bytes32 (hash)\n\nExamples:\n\"string name, string email, bool isVerified\"\n\"string twitter, address wallet, uint256 age\"\n\"string discord, string github, string telegram\"\n*/\n\nconst schema = \"string pshandle, string socialMedia, string socialMediaHandle\";",
"x": 120,
"y": 200,
"wires": []
},
{
"id": "2be090c17b5e4fce",
"type": "comment",
"z": "eas_flow",
"name": "Create Attestation",
"info": "",
"x": 110,
"y": 340,
"wires": []
},
{
"id": "3d99f76c5c0bdaf0",
"type": "comment",
"z": "eas_flow",
"name": "Verify Attestation",
"info": "",
"x": 110,
"y": 480,
"wires": []
}
]
```
### Configuration steps
1. Update the setup inject node with your:
* RPC URL
* Registry Address
* EAS Address
* Private Key
2. Customize the schema in the register function
3. Deploy the flow
4. Test each step sequentially using the inject nodes
The flow provides debug outputs at each step to monitor the process.
file: ./content/docs/platform-components/middleware-and-api-layer/fabconnect.mdx
meta: {
"title": "Firefly fabconnect",
"description": "Firefly fabconnect for Hyperledger Fabric Networks"
}
FireFly FabConnect is a **Fabric middleware** that provides an API layer for
interacting with **Hyperledger Fabric networks**. It abstracts the complexities
of Fabric’s native SDKs, enabling developers to interact with the blockchain
using **RESTful APIs** and **WebSocket-based event streaming**. FireFly
FabConnect is designed to handle **identity management, transaction submission,
and event subscriptions**, making it an essential component for enterprise
blockchain solutions.
### Adding fabconnect middleware in settlemint
1. **Navigate to Middleware Selection**
* Go to the **Middleware** section in SettleMint.
* Select **FireFly FabConnect** from the available options.
2. **Select Network and Nodes**
* **Step 1:** Enter a name for the middleware instance (e.g., `FabConnect`).
* **Step 2:** Choose a **peer node** from the available **Hyperledger
Fabric** nodes.
* **Step 3:** Select an **orderer node** to process transactions.
* **Step 4:** Confirm the setup and deploy FabConnect.
Once selected, SettleMint automatically installs and configures FireFly
FabConnect, making it ready for use with minimal effort.
***
## Api categories
FireFly FabConnect provides three main sets of API endpoints:
1. **Client MSPs (Wallet)**
* Register and enroll Fabric identities.
* Modify and revoke existing identities.
* Retrieve identity details.
2. **Transactions**
* Submit transactions to Fabric networks.
* Query transaction results and receipts.
* Retrieve ledger details, such as blocks and chain information.
3. **Events**
* Subscribe to blockchain events using regex-based filters.
* Stream real-time events via WebSocket.
Below is a summary of the available API endpoints:
| Path                              | Method | Summary                                                                        |
| --------------------------------- | ------ | ------------------------------------------------------------------------------ |
| `/identities`                     | GET    | List all signing identities registered with the Fabric CA                     |
| `/identities`                     | POST   | Register a new signing identity with the Fabric CA                            |
| `/identities/{username}`          | GET    | Get the signing identity registered with the Fabric CA                        |
| `/identities/{username}`          | PUT    | Modify an existing signing identity                                           |
| `/identities/{username}/enroll`   | POST   | Enroll the registered signing identity with the Fabric CA                     |
| `/identities/{username}/reenroll` | POST   | Re-enroll the registered signing identity with the Fabric CA                  |
| `/identities/{username}/revoke`   | POST   | Revoke enrollment certificates for the registered signing identity            |
| `/chaininfo`                      | GET    | Return ledger information for a specified channel                             |
| `/blocks/{blockNumberOrHash}`     | GET    | Query a block by number or hash                                               |
| `/blockByTxId/{txId}`             | GET    | Query a block by a transaction ID included in the block                       |
| `/transactions`                   | POST   | Send a proposal to peers and the transaction with endorsements to the orderer |
| `/transactions/{txId}`            | GET    | Query a transaction by ID (hash) on a channel                                 |
| `/query`                          | POST   | Send a query request to the target chaincode                                  |
| `/receipts`                       | GET    | Retrieve transaction receipts from the receipts store                         |
| `/receipts/{receiptId}`           | GET    | Retrieve a transaction receipt by receipt ID                                  |
| `/eventstreams`                   | GET    | List all event streams                                                        |
| `/eventstreams`                   | POST   | Create a new event stream                                                     |
| `/eventstreams/{eventstreamId}`   | GET    | Get an event stream by ID                                                     |
| `/eventstreams/{eventstreamId}`   | DELETE | Delete an event stream by ID                                                  |
| `/subscriptions`                  | GET    | List all subscriptions under the specified event stream                       |
| `/subscriptions`                  | POST   | Create a new subscription under the specified event stream                    |
| `/subscriptions/{subscriptionId}` | GET    | Get a subscription by ID                                                      |
| `/subscriptions/{subscriptionId}` | DELETE | Delete a subscription by ID                                                   |
For a full API specification, refer to the
[Swagger Documentation](https://github.com/hyperledger/firefly-fabconnect).
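To make the endpoints concrete, here is a minimal sketch of invoking a chaincode function through the `/transactions` endpoint. The request shape follows the FireFly FabConnect conventions; the base URL, signer, channel, chaincode, function name, and arguments are all placeholders to replace with your own values:
```javascript
// A minimal sketch: invoke a chaincode function via FabConnect's REST API.
async function submitTransaction() {
  const response = await fetch("https://YOUR_FABCONNECT_URL/transactions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      headers: {
        type: "SendTransaction",
        signer: "user1", // A registered and enrolled identity
        channel: "default-channel", // The Fabric channel to submit to
        chaincode: "asset-transfer", // The target chaincode name
      },
      func: "CreateAsset", // The chaincode function to invoke
      args: ["asset1", "blue", "10", "owner1"],
    }),
  });
  return response.json();
}

submitTransaction().then((result) => console.log("Result:", result));
```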
***
## Prerequisites
To use FireFly FabConnect within SettleMint, ensure:
* A **Hyperledger Fabric Network** is deployed and accessible.
* Fabric **peer and orderer nodes** are available for selection.
* API credentials for Fabric CA (if identity management is required).
Since FireFly FabConnect is managed within SettleMint, there is no need to
manually install or configure the service.
***
## Integration & usage
Once FireFly FabConnect is set up in SettleMint:
1. **API Access**
* Use the provided **RESTful API endpoints** to interact with the Fabric
network.
* Query blockchain data, submit transactions, and manage identities.
2. **Event Subscriptions**
* Set up WebSocket or webhook-based event listeners.
* Receive real-time updates on transactions and smart contract events.
3. **Identity Management**
* Register new Fabric identities.
* Enroll and revoke identities as needed.
The API simplifies interaction with the Fabric network, making blockchain
integration seamless.
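For example, registering and enrolling a new identity combines the first two identity endpoints from the table above. A minimal sketch, with a placeholder URL and identity name (the register call returns an enrollment secret, which the enroll call then consumes):
```javascript
// A minimal sketch: register a new identity with the Fabric CA, then enroll it.
async function registerAndEnroll() {
  // Step 1: register the identity; the CA responds with an enrollment secret
  const registerResponse = await fetch("https://YOUR_FABCONNECT_URL/identities", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "user1", type: "client" }),
  });
  const { secret } = await registerResponse.json();

  // Step 2: enroll the identity to obtain its signing certificates
  const enrollResponse = await fetch(
    "https://YOUR_FABCONNECT_URL/identities/user1/enroll",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ secret }),
    }
  );
  return enrollResponse.json();
}
```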
***
## Troubleshooting
* **Nodes Not Available?** Ensure your Fabric network is running with active
peer and orderer nodes.
* **Transaction Errors?** Check the transaction payload format and ensure it
matches the Fabric chaincode specifications.
* **Event Subscription Issues?** Validate the WebSocket connection and event
filters.
For further support, refer to FireFly FabConnect's official documentation.
***
## Additional resources
* **[FireFly FabConnect GitHub](https://github.com/hyperledger/firefly-fabconnect)**
* **[Hyperledger FireFly Documentation](https://hyperledger.github.io/firefly/)**
* **[Hyperledger Fabric Documentation](https://hyperledger-fabric.readthedocs.io/)**
file: ./content/docs/platform-components/middleware-and-api-layer/graph-middleware.mdx
meta: {
"title": "Graph middleware",
"description": "Guide to using middleware in SettleMint"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
Middleware acts as a bridge between your blockchain network and applications,
providing essential services like data indexing, API access, and event
monitoring. Before adding middleware, ensure you have an application and
blockchain node in place.
## Available Options
* **Graph Middleware** - For EVM chains, providing subgraph-based indexing with GraphQL API
* **Smart contract portal** - For EVM chains, offering REST & GraphQL APIs with webhooks
* **FabConnect** - For Hyperledger Fabric, providing RESTful API
* **Attestation Indexer** - Specialized indexer for attestations with GraphQL API
## Key Features
* Data indexing
* API access
* Event monitoring
* Webhook support
## How to add middleware
**Navigate to application**
Navigate to the **application** where you want to add middleware.
**Access middleware section**
Click **middleware** in the left navigation, and then click **add a middleware**. This opens a form.
**Configure middleware**
1. Choose middleware type (graph or portal)
2. Choose a **middleware name**
3. Select the **blockchain node** (preferred option for the portal) or **load balancer** (preferred option for the graph)
4. Configure deployment settings
5. Click **confirm**
First ensure you're authenticated:
```bash
settlemint login
```
Create a middleware:
```bash
# Get information about the command, the available middleware types, and all options
settlemint platform create middleware --help

# Create a middleware
settlemint platform create middleware
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create middleware
const result = await client.middleware.create({
applicationUniqueName: "your-app-unique-name",
name: "my-middleware",
type: "SHARED",
interface: "HA_GRAPH", // Valid options: "HA_GRAPH" | "SMART_CONTRACT_PORTAL"
blockchainNodeUniqueName: "your-node-unique-name",
region: "EUROPE", // Required
provider: "GKE", // Required
size: "SMALL" // Valid options: "SMALL" | "MEDIUM" | "LARGE"
});
console.log('Middleware created:', result);
```
Get your access token from the Platform UI under User Settings → API Tokens.
## Manage middleware
Navigate to your middleware and click **manage middleware** to:
* View middleware details and status
* Update configurations
* Monitor health
* Access endpoints
```bash
# List middlewares
settlemint platform list middlewares --application
```
```bash
# Get middleware details
settlemint platform read middleware
```
```typescript
// List middlewares
await client.middleware.list("your-app-unique-name");
```
```typescript
// Get middleware details
await client.middleware.read("middleware-unique-name");
```
## The graph middleware
The Graph provides powerful indexing capabilities for EVM chains through
subgraphs. Use this middleware when you need:
* Custom indexing logic through subgraph manifests
* Complex GraphQL queries
* Real-time data updates
### Using the graph sdk
```typescript
import { createTheGraphClient } from "@settlemint/sdk-thegraph";
const { client: graphClient, graphql } = createTheGraphClient({
instances: JSON.parse(
process.env.SETTLEMINT_THEGRAPH_SUBGRAPHS_ENDPOINTS || "[]"
),
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN!,
subgraphName: "your-subgraph",
});
```
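
Once the client is created, you write queries with the `graphql` tag and execute them through the client. A minimal sketch, assuming the client exposes a `graphql-request`-style `request` method and that your subgraph defines an illustrative `transfers` entity:

```javascript
// A minimal sketch; replace the entity and fields with ones from your subgraph
const query = graphql(`
  query RecentTransfers {
    transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
      id
      from
      to
      value
    }
  }
`);

const result = await graphClient.request(query);
console.log(result.transfers);
```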
For detailed API reference and advanced usage examples, check out the
[TheGraph SDK
documentation](https://github.com/settlemint/sdk/tree/main/sdk/thegraph).
### Using the graph middleware
[The Graph](https://thegraph.com/en/) is a protocol for indexing and querying
blockchain data from networks. It can be used with all EVM-compatible chains
like Ethereum, Hyperledger Besu, Polygon, Avalanche, etc. You can run it on your
own blockchain nodes (both public and permissioned).
Using the Graph protocol, you can create **subgraphs** that define which
blockchain data will be indexed. The middleware will then use these subgraphs to
correctly index your smart contracts and expose a developer-friendly and
efficient **GraphQL API**, allowing you to query the data you need.
We have some prebuilt subgraph indexing modules included in the smart contract
set, and you can build your own modules if you have a custom smart contract set.
Before you start, make sure you are running an EVM-compatible network
(Ethereum, Polygon, Hyperledger Besu, Avalanche, etc.)
When the middleware is deployed, follow these steps to start using it:
### Define and deploy a subgraph
Navigate to the **smart contract set** which you want to index, go to the
**details** and open the **IDE**. Here you will define the subgraph to set the
indexing specifications, and deploy it so it can be loaded into the middleware.
There are instructions included in the IDE to help you.
#### Subgraph raw configuration
Inside the root you will find a file called `subgraph.config.json` that contains
the raw configuration of the subgraph. The important section is the
**datasources** section.
* **Name** - the name of the smart contract (the name of the artifact created
  in the 'deployments' folder when running the deploy task).
* **Address & Startblock** - you will notice the start block and address are 0.
  You must fill these in once your contract has been deployed. The block number
  and the address can be found in the **deployment** folder, under **ignition**.
* **Module** - the modules array lists all the indexing modules to activate for
  this smart contract (see the illustrative example below).
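For illustration, a minimal `datasources` entry might look like the sketch below. The field names follow the description above; the contract name, address, start block, and module names are placeholders, and other top-level keys of the file are omitted here:
```json
{
  "datasources": [
    {
      "name": "MyToken",
      "address": "0x0000000000000000000000000000000000000000",
      "startBlock": 0,
      "module": ["erc20", "pausable"]
    }
  ]
}
```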
#### About the indexing modules
We provide **two libraries** of indexing modules: one by the **OpenZeppelin**
team for all the common smart contracts in their smart contract library, and one
by the **SettleMint** team to extend the capabilities of the OpenZeppelin one,
and to provide indexing of the specific SettleMint smart contract sets.
The OpenZeppelin set contains the following indexing modules:
* accesscontrol
* erc1155
* erc1967upgrade
* erc20
* erc721
* governor
* ownable
* pausable
* timelock
* voting
The SettleMint set contains the following indexing modules:
* erc721ipfs: to extend the ERC-721 from OpenZeppelin to index IPFS metadata of
your ERC-721 tokens
* crowdsale/vestingvault/vestingwallet: to index and expose all the data for the
crowdsale contract set
* forwarder: for the ERC-20 Meta transactions forwarder data
* statemachinemetadata: to index IPFS metadata for state machines
These are available in the `subgraph` folder in your IDE. You can create your
own modules for any other data you want to index, or for custom smart contracts
not part of the default sets. And you can modify the existing ones if you want
to index things a bit differently.
#### Write your own indexing module
You can also write your own custom indexing module for smart contracts that are
not part of the default sets.
Follow these steps to create a custom indexing module:
* Primitives to generate a GraphQL schema: `subgraph/datasource/x.gql.json` - In
  order to allow composability, the schemas are not defined in the GraphQL format
  but rather in a dedicated JSON format which can be assembled and compiled to
  GraphQL.
* Template to generate a subgraph manifest: `subgraph/datasource/x.yaml` - This
file lists all the events that the datasources should listen to, and links
that to the corresponding indexing logic.
* Indexing logic: `subgraph/datasources/x.ts` and (optionally)
`subgraph/fetch/x.ts` - This is the core logic that processes the events to
index the onchain activity.
[To learn more, check it out on Github.](https://github.com/OpenZeppelin/openzeppelin-subgraphs)
For detailed step-by-step guides on setting up custom Graph Middleware, refer
to:
* [EVM Chains Guide: Setting up Graph Middleware](/building-with-settlemint/evm-chains-guide/setup-graph-middleware)
* [Hedera Hashgraph Guide: Setting up Graph Middleware](/building-with-settlemint/hedera-hashgraph-guide/setup-graph-middleware)
#### Start your subgraph
The following tasks need to be run in this sequence:
* `bunx settlemint scs subgraph codegen` - Generates the AssemblyScript types
for your contracts ABI.
* `bunx settlemint scs subgraph build` - Compiles the WASM files based on the
outputs generated by `bunx settlemint scs subgraph codegen`.
* `bunx settlemint scs subgraph deploy` - Deploys the WASM files to IPFS and
updates the middleware to start or update the indexing.
The indexing of your smart contracts has now started. This can take a while, but
once done you can query the middleware for your data in seconds using the
**GraphQL API**. You can find the **endpoint** in the **Connect** tab.
file: ./content/docs/platform-components/middleware-and-api-layer/integration-studio.mdx
meta: {
"title": "Integration studio",
"description": "Low-code development environment for implementing business logic"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
The SettleMint Integration Studio is a low-code development environment which
enables you to implement business logic for your application simply by dragging
and dropping.
Under the hood, the Integration Studio is powered by a **Node-RED** instance
dedicated to your application. It is a low-code programming platform built on
Node.js and designed for event-driven application development.
[Learn more about Node-RED here](https://nodered.org/docs/).
## Basic concepts
The business logic for your application can be represented as a sequence of
actions. Such a sequence of actions is represented by a **flow** in the
Integration Studio. To bring your application to life, you need to create flows.
**Nodes** are the smallest building blocks of a flow.
### Nodes
A node can have at most one input port and multiple output ports. Nodes are
triggered by an event (e.g. an HTTP request); when triggered, they perform
user-defined actions and generate an output. This output can be passed to the
input of another node to trigger the next action.
### Flows
A flow is represented as a tab within the editor workspace and is the main way
to organize nodes. You can have more than one set of connected nodes in a flow
tab.
The Integration Studio allows you to create flows quickly. You can drag and
drop nodes in the workspace and connect them by clicking from the output port
of one node to the input port of another, building complex flows with ease.
This lets you visualize the orchestration and interaction between your
components (your nodes). Since you can clearly see the sequence of actions your
application will perform, flows are not only easier to interpret but also much
easier to debug later.
The use cases include interacting with other web services, applications, and
even IoT devices - orchestrating them for any kind of purpose to bring your
business solution to life.
[Learn more about the basic concepts of Node-RED here](https://nodered.org/docs/user-guide/concepts)
## Adding the integration studio
Navigate to the **application** where you want to add the integration studio.
Click **Integration tools** in the left navigation, and then click **Add an
integration tool**. This opens a form.
### Select integration studio
Select **Integration Studio** and click **Continue** to proceed.
### Choose a name
Choose a **name** for your Integration Studio. Choose one that will be easily
recognizable in your dashboards (e.g. Crowdsale Flow).
### Select deployment plan
Choose a deployment plan. Select the type, cloud provider, region and resource
pack.
[More about deployment plans](/launching-the-platform/managed-cloud-saas/deployment-plans)
### Confirm setup
You can see the **resource cost** for the Integration Studio displayed at the
bottom of the form. Click **Confirm** to add the Integration Studio.
## Using the integration studio
When the Integration Studio is deployed, click on it from the list, and go to
the **Interface** tab to start building your flows. You can also view the
interface in full screen mode.
Once the Integration Studio interface is loaded, you will see two flow tabs:
"Flow 1" and "Example". Head over to the **"Example" tab** to see some
full-blown example flows to get you started.
Double-click any of the nodes to see the code they are running. This code is
written in JavaScript, and it represents the actions the particular node
performs.

### Setting up a flow
Before we show you how to set up your own flow, we recommend reading this
[article by Node-RED on creating your first flow](https://nodered.org/docs/tutorials/first-flow).
Now let's set up an example flow together and build an endpoint to get the
latest block number of the Polygon Mumbai Testnet using the Integration Studio.
If you do not have a Polygon Mumbai Node, you can easily
[deploy a node](/platform-components/add-a-node-to-a-network) first.
### Add http input node
Drag and drop an **Http In node** to listen for requests. If you double-click the node, you will see a couple of parameters to set:
* `METHOD` - set it to `GET`. This is the HTTP method that your node is configured
  to listen for.
* `URL` - set it to `/getLatestBlock`. This is the endpoint that your node will
  listen on.
### Add function node
Drag and drop a **function node**. This is the node that will query the
blockchain for the block number. Double-click the node to configure it.
`rpcEndpoint` is the RPC URL of your Polygon Mumbai node.
Under the **Connect tab** of your Polygon Mumbai node, you will find its RPC URL.
`accessToken` - You will need an access token for your application. If you do
not have one, you can easily
[create an access token](/platform-components/application-access-tokens) first.
Enter the following snippet in the Message tab:
```javascript
///////////////////////////////////////////////////////////
// Configuration //
///////////////////////////////////////////////////////////
const rpcEndpoint = "https://YOUR_NODE_RPC_ENDPOINT.settlemint.com";
const accessToken = "YOUR_APPLICATION_ACCESS_TOKEN_HERE";
///////////////////////////////////////////////////////////
// Logic //
///////////////////////////////////////////////////////////
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
msg.payload = await provider.getBlockNumber();
return msg;
///////////////////////////////////////////////////////////
// End //
///////////////////////////////////////////////////////////
```
**Note:** ethers and some IPFS libraries are already available by default and can be used like this:
```javascript
const ethers = global.get("ethers");
const provider = new ethers.providers.JsonRpcProvider(
`${rpcEndpoint}/${accessToken}`
);
const ipfsHttpClient = global.get("ipfsHttpClient");
const client = ipfsHttpClient.create(`${ipfsEndpoint}/${accessToken}/api/v0`);
const uint8arrays = global.get("uint8arrays");
const itAll = global.get("itAll");
const data = uint8arrays.toString(
uint8arrays.concat(await itAll(client.cat(cid)))
);
```
If the library you need isn't available by default you will need to import it in
the setup tab. Example for ethers providers:

### Add http response node
Drag and drop a **Http Response node** to reply to the request. Double-click and
configure:
* `Status code` - This is the HTTP status code that the node will respond with
after completion of the request. We set it to 200 (`OK`)
Click on the `Deploy` button in the top right corner to save and deploy your
changes.
### Test your endpoint
Now, go back to the **Connect tab** of your Integration Studio to see your **API
endpoint**, which looks something like
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com`.
You can now send requests to
`https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock` to get
the latest block number. Do not forget to create an API key for your Integration
Studio and pass it as the `x-auth-token` authorization header with your request.
Example terminal command:
```bash
curl -H "x-auth-token: bpaas-YOUR_INTEGRATION_KEY_HERE" https://YOUR_INTEGRATION_STUDIO_API_URL.settlemint.com/getLatestBlock
```
The API is live and protected by the authorization header, and you can
seamlessly integrate with your application.
You can use the Integration Studio to build very complex flows. Learn more in
this [cookbook by Node-RED](https://cookbook.nodered.org/) on the different
types of flows.
file: ./content/docs/platform-components/middleware-and-api-layer/smart-contract-api-portal.mdx
meta: {
"title": "Smart contract portal",
"description": "Smart contract portal for zero-config self generated APIs"
}
## The smart contract portal middleware
The smart contract portal is a middleware which creates an easy-to-use API on
top of your smart contracts. It can be used with all EVM-compatible chains like
Ethereum, Hyperledger Besu, Polygon, Avalanche, etc. You can run it on your own
blockchain nodes (both public and permissioned) or on a Load Balancer.
Benefits of using the smart contract portal:
1. Simplified Integration: APIs allow developers to interact with complex smart
contract functions through familiar interfaces, reducing the need to
understand blockchain-specific languages and protocols.
2. Data Aggregation: APIs can consolidate data from multiple smart contracts,
providing a unified view.
3. Improved Performance: GraphQL optimizes data fetching, ensuring that clients
retrieve only the necessary data in a single request, reducing network load
and improving performance.
4. Stack agnostic: Teams are free to choose their own technology stack.
5. Transaction Monitoring & Alerting: Monitor blockchain transactions in
real-time, filter by parameters like sender, receiver, block number, function
name, and contract address. Set up custom alerts to trigger actions such as
email notifications, webhooks, or automated processes when specific
conditions are met.
Before you start, make sure you are running an EVM-compatible network
(Ethereum, Polygon, Hyperledger Besu, Avalanche, etc.) and have a private key
to deploy your smart contracts.
### Using the smart contract portal middleware
The Portal middleware provides instant API access to your smart contracts. Key
features include:
* Auto-generated REST & GraphQL APIs
* Built-in webhooks for event notifications
* Type-safe contract interactions
* Automatic ABI parsing
### Using the portal sdk
```typescript
import { createPortalClient } from "@settlemint/sdk-portal";
const { client: portalClient, graphql: portalGraphql } = createPortalClient({
instance: process.env.SETTLEMINT_PORTAL_GRAPHQL_ENDPOINT,
accessToken: process.env.SETTLEMINT_ACCESS_TOKEN,
});
```
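
After creating the client, queries run against the GraphQL schema that the portal generates from your uploaded ABIs. A minimal sketch, assuming the client follows the same `graphql-request`-style interface as the other SettleMint SDK clients; the selection set below is a placeholder to swap for a real query from your GraphQL tab:

```javascript
// A minimal sketch; replace the selection set with a query generated
// from your own ABIs (see the GraphQL tab of your portal).
const query = portalGraphql(`
  query {
    __typename
  }
`);

const result = await portalClient.request(query);
console.log(result);
```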
For comprehensive API documentation and advanced features, check out the
[Portal SDK
documentation](https://github.com/settlemint/sdk/tree/main/sdk/portal).
### Upload an abi
A smart contract ABI (Application Binary Interface) is a standardized way for
interacting with smart contracts in the Ethereum blockchain and other compatible
systems. It serves as the bridge between human-readable contract code (written
in languages like Solidity) and the Ethereum Virtual Machine (EVM), which
executes the contract. The ABI specifies the functions that can be called on the
contract, including their names, input parameters, and output types.
When deploying a smart contract, the ABI file can be found as part of the
artifacts. See
[Deploying the Smart Contract](/building-with-settlemint/evm-chains-guide/deploy-smart-contracts).
Download the ABI json files and save them on your local filesystem.
When creating a new middleware you'll need to upload at least one ABI.
To update the ABIs of an existing smart contract portal middleware, navigate to
the middleware, go to the details and click on the 'Manage Middleware' button on
the top right. Click on the 'Update ABIs' item and a dialog will open. In this
dialog upload the ABI file(s) you saved on your local filesystem in the previous
step.

### Rest
A fully typed REST API with documentation is generated from the smart contract
ABI; you can discover all its endpoints on the REST tab. To see examples in your
technology of choice, use the dropdown in the example section on the right.

### Graphql
The GraphQL API exposes the same functionality as the REST API; you can discover
it on the GraphQL tab.

### Webhooks
On the Webhooks tab you can register your own webhook. The portal will send
events to this webhook when a transaction is processed.
Each message carries a signature, which allows the receiver to verify that the
event has not been tampered with.
The secret to validate the signature can be copied from the details page of your
webhook.

Standard Webhooks has built
[SDKs and useful tools](https://www.standardwebhooks.com/#resources) using
different programming languages that make it easy to start using webhooks.
An example using Typescript, [Elysia](https://elysiajs.com/) and
[standard webhooks](https://www.standardwebhooks.com/).
```ts
import { Elysia, t } from "elysia";
import { Webhook } from "standardwebhooks";
async function webhookConsumerBootstrap(secret: string) {
const webhookConsumer = new Elysia().post(
"/scp-listener",
({ headers, body }) => {
try {
const wh = new Webhook(btoa(secret));
const verifiedPayload = wh.verify(JSON.stringify(body.payload), {
"webhook-id": headers["btp-portal-event-id"]!,
"webhook-signature": headers["btp-portal-event-signature"]!,
"webhook-timestamp": headers["btp-portal-event-timestamp"]!,
});
console.log(
`Received a webhook event: ${JSON.stringify(verifiedPayload)}`
);
} catch (err) {
console.error("Webhook payload invalid", err);
throw err;
}
},
{
body: t.Object({
payload: t.Object({
apiVersion: t.String(),
eventId: t.String(),
eventName: t.String(),
timestamp: t.Number(),
data: t.Any(),
}),
}),
}
);
const app = new Elysia().use(webhookConsumer).onStart(({ server }) => {
console.log(
`Started the test webhook consumer on ${server?.url.toString()}`
);
});
}
```
### Transaction Monitoring & Alerting
The smart contract portal provides powerful on-chain monitoring capabilities
that enable you to track, filter, and respond to blockchain transactions in
real-time.
On-chain monitoring is critical for:
* Security: Detect and respond to suspicious activities instantly
* Compliance: Track transactions for regulatory reporting
* Operations: Ensure critical transactions are processed correctly
* Business Intelligence: Gain insights from transaction patterns
Common monitoring scenarios include:
* High-value transfers exceeding threshold amounts
* Contract interactions from specific addresses
* Contract events signaling state changes
* Failed transactions that require attention
With the portal, you can set up customized alerting rules that trigger actions
when specified conditions are met. These actions can include:
* Sending email notifications to stakeholders
* Triggering webhooks to external systems
* Logging events for audit purposes
* Executing automated workflows in response
The combination of real-time monitoring and flexible alerting provides a
powerful foundation for building robust dApps that can respond dynamically to
on-chain activities.
## Further reading
* [The Graph Middleware](/platform-components/middleware-and-api-layer/graph-middleware)
* [The Smart contract portal Middleware](/platform-components/middleware-and-api-layer/smart-contract-api-portal)
* [Attestation Indexer](/platform-components/middleware-and-api-layer/attestation-indexer)
* [Firefly FabConnect](/platform-components/middleware-and-api-layer/fabconnect)
* [Configure transaction monitoring & alerting](/building-with-settlemint/building-with-sdk/portal#examples)
All operations require appropriate permissions in your workspace.
file: ./content/docs/platform-components/platform-info/connect-external-network.mdx
meta: {
"title": "Besu - External network"
}
The SettleMint platform seamlessly integrates with existing external networks.
You can deploy nodes on your external network within the SettleMint platform,
enabling you to leverage the platform's robust features, including monitoring,
resource scaling, an intuitive JSON-RPC UI, and reliable uptime management.
## Prerequisites
* A Hyperledger Besu or Quorum QBFT network
* The genesis file of the network
* At least one enode URL of an existing running node on the network (required to
sync the platform node with the existing network)
## Joining a Network
1. Navigate to the create network form (see
[how to do this here](/building-with-settlemint/evm-chains-guide/add-network-and-nodes)).
2. Select **Join permissioned network**.
3. Choose **Hyperledger Besu** or **Quorum** depending on the network you want
to join.
4. Enter names for the network and the node.
5. Upload the network's genesis file. Bootnodes specified in the genesis file
will be automatically identified and added as external nodes.
6. Add at least one enode URL of an existing running node on the network. Note:
If a bootnode is specified in the genesis file, it will be added
automatically as an external node, allowing you to skip this step.
7. Choose the deployment plan for the node. For more information about
deployment plans,
[see here](/launching-the-platform/managed-cloud-saas/deployment-plans).
This process will create a new non-validator node in your existing network.
## Adding Nodes
To add more nodes to your network:
1. Navigate to the create node form.
2. Choose between creating the node as a validator or non-validator.
3. Note: To deploy nodes as validators, a majority (66%) of validators must be
running on the SettleMint platform.
4. If you don't have a majority, create the node as a non-validator first, then
follow the process in [Add a Validator](#add-a-validator) to make it a
validator.
Once a majority of validators are running on the platform, deploying new nodes
as validators becomes possible without voting on external validators. We
recommend having a majority of validators running on the platform for seamless
addition and removal of validators from the network.
## Add a Validator
Unless a majority of validators are running on the platform, you need to send
votes on the externally running validators to add the platform node as a
validator.
Execute the following on all your validator nodes:
* For Hyperledger Besu:
[qbft\_proposeValidatorVote](https://besu.hyperledger.org/stable/private-networks/reference/api#qbft_proposevalidatorvote)
* For Quorum:
[istanbul\_propose](https://docs.goquorum.consensys.io/reference/api-methods#istanbul_propose)
Find the enode URL of the platform node in the 'Details' tab of the node under
the 'Node Identity' section. Once the vote is reflected in the network, restart
the node in the platform. The node will be added as a validator and will start
proposing blocks.
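For example, on a Hyperledger Besu validator the vote can be submitted over JSON-RPC. A minimal sketch, with a placeholder RPC endpoint and validator address:
```javascript
// Propose adding the platform node as a validator on a Besu QBFT network.
// Run this against the RPC endpoint of each existing validator node.
const response = await fetch("http://localhost:8545", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "qbft_proposeValidatorVote",
    params: ["0xPLATFORM_NODE_VALIDATOR_ADDRESS", true], // true = vote to add
    id: 1,
  }),
});
console.log(await response.json()); // expect { ..., "result": true }
```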
## Remove a Validator
To make a platform validator a non-validator, execute the following on every
validator node:
* For Hyperledger Besu:
[qbft\_proposeValidatorVote](https://besu.hyperledger.org/stable/private-networks/reference/api#qbft_proposevalidatorvote)
with proposal "false"
* For Quorum:
[istanbul\_propose](https://docs.goquorum.consensys.io/reference/api-methods#istanbul_propose)
with proposal "false"
Once the vote is reflected in the network, restart the node in the platform. The
node will be removed as a validator and will stop proposing blocks.
## Node Type Conflict Warning
The platform displays a node type conflict warning when there's a discrepancy
between the node type in the platform and the node type on the network.
This can occur when:
* The node is added as a non-validator on the platform but runs as a validator
on the network.
* The node is added as a validator on the platform but runs as a non-validator
on the network.
To resolve this, you can either:
1. Update the node type in the platform to match the node type on the network,
or
2. Add or remove the node as a validator on the network using the steps
mentioned above.
The platform will automatically resolve the node type conflict warning shortly
after the necessary changes are made.
## Migrating existing networks to SettleMint platform
Migrating an existing Hyperledger Besu or Quorum (using QBFT or IBFT2 consensus)
network to the SettleMint platform enables organizations to move from
self-managed infrastructure to a robust, cloud-native blockchain operations
environment. The platform provides an intuitive and secure environment for node
management, validator orchestration, and real-time monitoring while ensuring
compatibility with existing private networks. This process begins with ensuring
that a few key prerequisites are in place: access to the current network’s
genesis file, at least one `enode://` address of an active node for
synchronization, and the contract ABIs and addresses of any deployed smart
contracts.
## Joining the existing network
To initiate the migration, the organization will log into the SettleMint
platform and navigate to the "Create Network" form. From there, they will choose
the "Join permissioned network" option and select either Hyperledger Besu or
Quorum as the client, depending on their existing setup. They will assign a name
to the network and the joining node and then upload the `genesis.json` file. If
the genesis file includes bootnodes, the platform automatically identifies and
configures these as external peers. If not, at least one enode address must be
manually added. The organization will then choose a deployment plan based on
performance requirements, and SettleMint will spin up a non-validator node that
connects to the external network.
## Syncing data and smart contracts
Once the node joins the network, it will begin full synchronization using the
Ethereum protocol. This includes downloading block headers, transaction bodies,
receipts, and reconstructing the entire state trie up to the current block. This
process ensures that all smart contracts deployed on the existing network are
immediately accessible from the SettleMint node without requiring redeployment.
The platform uses standard syncing algorithms such as snap sync or full sync to
ensure the node reconstructs the full world state, including account balances,
contract bytecode, and storage variables. As a result, all transaction history,
event logs, and deployed contract states will be visible and accessible via the
SettleMint platform’s JSON-RPC explorer or API endpoints.
## Migrating validators
To migrate validator nodes, organizations can use SettleMint to deploy
additional nodes and vote them in as validators from the currently running
validator nodes. This involves retrieving the enode of the SettleMint node from
the platform’s dashboard and issuing a validator proposal on each legacy
validator node. For Besu networks, this is done using the
`qbft_proposeValidatorVote` RPC method, while for Quorum, it involves calling
`istanbul_propose`. Once the validator vote is reflected in the consensus state,
the platform node must be restarted. This process can be repeated for each node
until a majority (66% or more) of the validators are hosted on SettleMint. At
that point, further changes to validator sets can be handled exclusively within
the platform, streamlining validator governance.
## Dismantling external infrastructure
After all SettleMint nodes are synced and operating correctly, and once
validator roles have been transitioned, the organization may proceed to
decommission their old infrastructure. This includes shutting down legacy
non-validator nodes and removing any remaining external validators by proposing
removal votes (`qbft_proposeValidatorVote` or `istanbul_propose` with `false`).
It is also important to update all dependent applications—dApps, API services,
and frontends—to point to the new SettleMint-managed JSON-RPC or HTTP endpoints.
## Using SettleMint platform features
Following the migration, the organization will gain access to the full suite of
SettleMint platform tools. These include live node health dashboards, block
explorers, logs, a contract management interface, metrics and alerts via Grafana
and Prometheus, and scalable infrastructure for increasing throughput and fault
tolerance. In the event of a node type conflict—where a node’s role (validator
or non-validator) differs between the network and platform—the platform will
flag this discrepancy and guide the user to either update the node type in the
platform or modify the node’s role on the network. Once the correction is made
and the node is restarted, the conflict will automatically resolve.
The migration process is designed to be non-disruptive and reversible until the
point of final infrastructure decommissioning. The platform allows organizations
to run SettleMint nodes alongside their existing infrastructure, enabling a
phased and secure migration path that aligns with operational and governance
policies.
file: ./content/docs/platform-components/platform-info/connect-external-node.mdx
meta: {
"title": "Besu - External node"
}
There are many use cases where not all nodes are running on the SettleMint
platform. For example, you might want to connect to a node running on a
different server or on another blockchain platform, or you might simply need an
external node for development purposes. In this guide, we will show you how to
connect to an external node.
## Prerequisites
* A running Hyperledger Besu network on the SettleMint platform with at least
one node hosted on either Amazon Web Services (AWS) or Microsoft Azure.
* For this guide we will use Docker and Docker Compose, but you can also use
your own setup.
* If you don't have Docker installed, you can find the installation
instructions [here](https://docs.docker.com/get-docker/).
If you don't have Docker Compose installed, you can find the installation
instructions [here](https://docs.docker.com/compose/install/).
## Step 1: Getting the genesis file
The genesis file of a network contains all the information about your network,
including a list of bootnodes. This list is automatically updated upon each
change you make in the platform. If you add or remove nodes, it makes sense to
redownload the file.
You can download the genesis file by going to the network details page and
clicking on the genesis.json link in the Info box.
Create a folder (e.g. MyNetwork) on your computer and add the file into it:
```
MyNetwork
|- genesis.json
```
## Step 2: Create the docker compose file
Create a docker-compose.yml file in the same folder:
```
MyNetwork
|- genesis.json
|- docker-compose.yml
```
Add the following content to the docker-compose.yml file:
```yaml
services:
my-besu-node:
# Not required but recommended to use the same version as your nodes on the platform
image: hyperledger/besu:23.7.2
volumes:
# Mounts the genesis.json file into the container
- ./genesis.json:/config/genesis.json
# Mounts the data folder into the container, this will hold your actual blockchain data
- ./data:/data
ports:
# Exposes the port for the JSON-RPC HTTP API on http://localhost:8545
- 8545:8545
# Exposes the port for the JSON-RPC WebSocket API on ws://localhost:8546
- 8546:8546
# Exposes the port for the GraphQL HTTP API on http://localhost:8547
- 8547:8547
# Exposes the port for the P2P connection between nodes
- 30303:30303
# Exposes the port for the P2P discovery mechanism between nodes
- 30303:30303/udp
entrypoint:
- /opt/besu/bin/besu
# More info on these options on https://besu.hyperledger.org/stable/public-networks/reference/cli/options
- --Xdns-enabled=true
- --Xdns-update-enabled=true
- --genesis-file=/config/genesis.json
- --data-path=/data
- --tx-pool-retention-hours=999
- --tx-pool-max-size=1024
- --min-gas-price=0
- --random-peer-priority-enabled=true
- --host-allowlist="*"
- --rpc-http-enabled=true
- --rpc-http-host=0.0.0.0
- --rpc-http-port=8545
- --rpc-http-api=DEBUG,ETH,ADMIN,WEB3,IBFT,NET,TRACE,EEA,PRIV,QBFT,PERM,TXPOOL,PLUGINS
- --rpc-http-cors-origins=all
- --rpc-http-authentication-enabled=false
- --revert-reason-enabled=true
- --rpc-http-max-active-connections=1000
- --graphql-http-enabled=true
- --graphql-http-host=0.0.0.0
- --graphql-http-port=8547
- --graphql-http-cors-origins=all
- --rpc-ws-enabled=true
- --rpc-ws-host=0.0.0.0
- --rpc-ws-port=8546
- --rpc-ws-api=DEBUG,ETH,ADMIN,WEB3,IBFT,NET,TRACE,EEA,PRIV,QBFT,PERM,TXPOOL,PLUGINS
- --rpc-ws-authentication-enabled=false
- --rpc-ws-max-active-connections=1000
- --logging=INFO
- --nat-method=DOCKER
```
## Step 3: Start your node
```bash
docker compose up -d
```
Your node will now search for peers and connect to them. You can check the logs
to see if it is working correctly:
```
mynetwork-my-besu-node-1 | 2023-09-12 12:07:13.576+00:00 | nioEventLoopGroup-3-2 | INFO | FullSyncTargetManager | Unable to find sync target. Currently checking 3 peers for usefulness
```
Once connected, it will sync the chain locally and stay up to date from then on:
```
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.023+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 200 (0x654418ab6edb96d7cf25f2e4a5955810b09dbb59b0e1cf018f0673b824356b31), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.215+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 400 (0xa9e9c3c0a085fb1afb3bbf178a2f5dd8ed0bcee0600e50c31bda41ba4d0cab98), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.360+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 600 (0xbf6353b60120d11fe964f233fd1b0c9d383c550cd038b70c3f6d60fb7704e528), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.499+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 800 (0x1915c4a8ca9a7bbf8058459156cbd8232c59fb119fae535e64782fc9c6e0c453), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.626+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 1000 (0x9fbecae6acfb202d45d60e970ff3a10136b2dadca7888f0caffdb4f8406b99a6), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.754+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 1200 (0x13603e5902fa92f08d27d28a5d3e194099b770eb5f3cea070853a7e6a2dcd88c), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.873+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 1400 (0x31d888a5f6b4fc2deb9f519e3c596d5eb89656e408d448692e073e9efcebc390), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:19.985+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 1600 (0x55a27a108fa2b6eeb7b8a2b74b3736f051955e20f0aefa1938827bfd860e3e7a), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.093+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 1800 (0x151cec20ab87e2ab54752464128cefbbbc295e1817b3f2ed663a4091b2434df6), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.195+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 2000 (0x1ee6a31c0e6d79ea9bb4e616214671951c73846e6e2b1a5e2e4fa1b51ac11144), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.308+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 2200 (0xddf8d8e64c38892206a527bc32302401b52642a777524c65d5d558018fee3ea6), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.420+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 2400 (0x7ef25e58acc437a6a7472b5fe7423b0b966e4ac4dc7cc843366603dae84c4765), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.539+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 2600 (0x86358a90b88cebafb88f9a1725218c42eac66fe0675c42d0387c0cf0cda31db1), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.659+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 2800 (0x5734a0cac8304fc157c8a4fafb55073094f036c9b8ac6ca3eb6fbd73428e12e8), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.774+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 3000 (0xd32ba1c77e7f3850b75cab37b561831bcf57d59e67f0873ed75f8f3b63c3946c), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:20.892+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 3200 (0x29391cee08c1c04323f9fd66cbd3d6475b54ecad246ab1f53e0d693327d2bb56), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:21.021+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 3400 (0xb6ed5326cc98bbdf43cdbf62ce766df13f8b8120d9506346b55c20739cc3288b), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:21.126+00:00 | EthScheduler-Services-5 (importBlock) | INFO | FullImportBlockStep | Import reached block 3600 (0xeda0eb9d0853d3f9bff2da577772914c72b38c29bd9cfc9b90a9b92a0e6f9f8f), - Mg/s, Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:32.042+00:00 | EthScheduler-Workers-0 | INFO | PersistBlockTask | Imported #3,729 / 0 tx / 0 om / 0 (0.0%) gas / (0xb5c7eeea2ad6b7c7cd32f7af950f3651e0063007349346213cf441b144dff5ac) in 0.008s. Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:07:47.219+00:00 | EthScheduler-Workers-0 | INFO | PersistBlockTask | Imported #3,730 / 0 tx / 0 om / 0 (0.0%) gas / (0x555663fe3f7a47ea06e4d9f510d1d7c9c34da68659853a170c9fcd817d268e9b) in 0.001s. Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:08:02.068+00:00 | EthScheduler-Workers-0 | INFO | PersistBlockTask | Imported #3,731 / 0 tx / 0 om / 0 (0.0%) gas / (0xae4e049605e4fa1de5f15f3e48cd99de7c0b319c9f1d7bd20a613de06d5a129a) in 0.003s. Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:08:17.069+00:00 | EthScheduler-Workers-0 | INFO | PersistBlockTask | Imported #3,732 / 0 tx / 0 om / 0 (0.0%) gas / (0x085ba2ebec981dd69230a13e8e2301a9f0d4318bdf376850e51b1f9e79e51c11) in 0.001s. Peers: 3
mynetwork-my-besu-node-1 | 2023-09-12 12:08:32.009+00:00 | EthScheduler-Workers-0 | INFO | PersistBlockTask | Imported #3,733 / 0 tx / 0 om / 0 (0.0%) gas / (0xf4b63cfcdca4a83e8810361e03de158b827e4d2850ccc65fc70310d4f6963fcc) in 0.005s. Peers: 3
```
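Optionally, you can check that the node is serving requests by querying its JSON-RPC endpoint, which the compose file above exposes on port 8545. A minimal TypeScript sketch:
```typescript
// Query the locally exposed JSON-RPC endpoint for the latest block number.
const response = await fetch("http://localhost:8545", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "eth_blockNumber",
    params: [],
    id: 1,
  }),
});
const { result } = await response.json();
console.log(`Current block: ${parseInt(result, 16)}`); // result is hex-encoded
```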
## Step 4: Validators
This is a dangerous step that can break your network without a way to recover.
You can assign this new node as a validator in the platform. This will make it
sign blocks and transactions. Note that more than 66% of your validators need to
be online for the network to keep functioning.
Execute
[qbft\_proposeValidatorVote](https://besu.hyperledger.org/stable/private-networks/reference/api#qbft_proposevalidatorvote)
on all your validator nodes. You can find the enode address of your new node in
the logs of the container or by executing
[admin\_nodeInfo](https://besu.hyperledger.org/stable/public-networks/reference/api#admin_nodeinfo).
Similarly, you can turn a platform validator back into a regular node by executing
[qbft\_proposeValidatorVote](https://besu.hyperledger.org/stable/private-networks/reference/api#qbft_proposevalidatorvote)
with the proposal set to "false" on every validator node.
file: ./content/docs/platform-components/platform-info/performance.mdx
meta: {
"title": "Besu Performance"
}
This document aims to provide a comprehensive overview of transaction throughput
on a Hyperledger Besu node, explaining key concepts for readers who may not be
familiar with blockchain technology.
## Factors Affecting Transaction Performance
Several crucial factors influence the transaction performance of a Besu node:
1. **Network Latency**: The time it takes for data to travel between nodes in
the network and between the sender of the transaction and the node can
significantly impact transaction speed.
2. **Cloud Provider**: Different cloud providers offer varying levels of
performance, which can affect node operation.
3. **Resource Pack**: The selected resource pack determines the computational
resources (CPU, RAM, storage) allocated to a node and the requests-per-second
rate limit, both of which significantly impact performance and maximum
throughput.
## Read vs. Write Transactions in Blockchain
In a blockchain context, particularly for Ethereum-based systems like Besu,
transactions can be categorized into two types:
### Read Transactions
Read transactions, also known as "calls," do not modify the blockchain state.
They retrieve information from the blockchain without consuming gas or requiring
mining. Examples include checking an account balance or reading smart contract
data.
Using the load balancer feature, traditional scaling methods (horizontal
scaling) can be applied to scale beyond a single node. This means that multiple
Besu nodes can be set up behind a load balancer, allowing for increased read
transaction throughput by distributing the load across multiple nodes. This
approach is particularly effective for read-heavy applications, as it allows for
parallel processing of read requests across multiple nodes, significantly
increasing the overall capacity of the system.
### Write Transactions
Write transactions, on the other hand, modify the blockchain state. They require
gas, must be mined into a block, and permanently alter the blockchain's data.
Examples include transferring tokens or updating smart contract state.
The process of executing a write transaction involves several time-consuming
steps:
1. Transaction Preparation: This includes constructing the transaction object
with the necessary parameters such as recipient address, value, and data.
2. Gas Estimation: Before sending a transaction, it's crucial to estimate the
gas required. This involves a call to the node to simulate the transaction
and determine the appropriate gas limit.
3. Nonce Management: Each account has a nonce that must be incremented
sequentially for each transaction. Managing nonces, especially for
high-frequency transactions, requires careful tracking and can introduce
delays.
4. Transaction Signing: The transaction must be cryptographically signed. This
can be done using Accessible Keys stored in Hashicorp Vault (in memory) or
via AWS KMS. The signing process, while quick, adds to the overall
transaction time.
5. Transaction Submission: Once prepared and signed, the transaction is
submitted to the node's transaction pool.
6. Mining and Confirmation: Finally, the transaction must be picked up by a
miner, included in a block, and confirmed by the network.
Each of these steps contributes to the overall time taken for a write
transaction, impacting the achievable transactions per second. The complexity of
these operations, especially when dealing with high volumes of transactions,
underscores the difference in throughput between read and write operations in
blockchain systems.
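To make these steps concrete, here is a minimal sketch of the same lifecycle using ethers.js (v6). The endpoint URL and private key are placeholders, not platform-provided values:
```typescript
import { ethers } from "ethers";

// Placeholder endpoint and key; substitute your node's JSON-RPC URL and a funded key.
const provider = new ethers.JsonRpcProvider("https://my-node.example.com");
const wallet = new ethers.Wallet("0x<private-key>", provider);

async function sendWriteTransaction(): Promise<void> {
  // 1. Preparation: construct the transaction object.
  const tx = {
    to: "0x0000000000000000000000000000000000000000",
    value: ethers.parseEther("0.01"),
  };
  // 2. Gas estimation: simulate the transaction to size the gas limit.
  const gasLimit = await provider.estimateGas({ ...tx, from: wallet.address });
  // 3. Nonce management: next usable nonce, including pending transactions.
  const nonce = await provider.getTransactionCount(wallet.address, "pending");
  // 4 & 5. Signing and submission: sendTransaction signs and broadcasts in one call.
  const sent = await wallet.sendTransaction({ ...tx, gasLimit, nonce });
  // 6. Mining and confirmation: wait until the transaction is included in a block.
  const receipt = await sent.wait();
  console.log("Mined in block", receipt?.blockNumber);
}

sendWriteTransaction();
```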
## Factors Affecting Write Transactions
Several elements influence the performance of write transactions:
1. **Block Time**: The average time between blocks being added to the chain
affects how quickly transactions can be processed.
2. **Block Gas Limit**: This caps the total amount of computational work that
can be done in a single block, limiting the number of transactions per block.
3. **Transaction Gas Usage**: Different transactions consume varying amounts of
gas, affecting how many can fit into a block.
4. **Nonce Management**: In Ethereum-style transactions, each account has a
nonce that must increment correctly for each transaction. This can impact the
rate at which transactions from a single account can be processed.
## Real-World Benchmarks
To provide concrete performance metrics, we conducted several tests using a Besu
node with the following configuration:
* Resource Pack: Large (rate limiter disabled)
* Cloud Provider: Google Cloud
* Location: Brussels
* Testing Location: Within Belgium, using a high-speed internet connection
* Volume: 100 virtual users over 30 seconds
### Read Transaction Performance
In our tests, a single Besu node was able to handle over 2,000 read requests per
second. This high throughput is possible because read operations do not modify
the blockchain state and do not require consensus.
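As a rough illustration of what such a read workload looks like (not the benchmark harness used for the numbers above), the following sketch fires a batch of concurrent `eth_blockNumber` calls against a placeholder endpoint:
```typescript
// Placeholder endpoint; substitute your node's JSON-RPC URL (including a token).
const RPC_URL = "https://my-node.example.com";

async function readCall(id: number): Promise<void> {
  await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", method: "eth_blockNumber", params: [], id }),
  });
}

// Fire 1,000 reads in parallel and report the elapsed time.
const start = Date.now();
await Promise.all(Array.from({ length: 1000 }, (_, i) => readCall(i)));
console.log(`1000 reads in ${(Date.now() - start) / 1000}s`);
```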
### Write Transaction Performance
For write transactions, we tested two scenarios. The first scenario involved
sending pre-signed transactions using the `eth_sendRawTransaction` method. This
approach bypasses the need for real-time transaction signing on the node,
potentially reducing processing overhead. This method demonstrated impressive
performance, achieving between 700 and 800 transactions per second (TPS).
This high throughput is attributed to the reduced computational load on the
node, as it doesn't need to perform signature verification for each incoming
transaction. However, it's important to note that while this method can
significantly boost transaction throughput, it requires careful management of
nonces and pre-signing of transactions, which may introduce additional
complexity in the transaction preparation process.
In the second scenario, transactions were signed in real time using
`eth_sendTransaction`, with no nonce supplied in the call and all sent from a
single address (the slowest possible scenario). Here we observed a lower but
still significant throughput of over 120 TPS accepted by the node. This
reduction in throughput compared to a multi-address scenario is due to the
sequential nature of transactions from a single sender.
Each transaction from an address must use a unique, incrementing nonce value.
This nonce tracking ensures transactions are processed in the correct order and
prevents double-spending, but it also means that transactions from a single
address cannot be processed in parallel. This bookkeeping is handled by the
built-in transaction signer, which will also retry transactions in the case of
a duplicate nonce.
The node must process each transaction sequentially, verifying and incrementing
the nonce for each one, which naturally limits the throughput compared to
transactions from multiple addresses that can be processed concurrently.
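To illustrate why a single sender serializes throughput, the sketch below (ethers.js v6, placeholder values) assigns explicit, incrementing nonces so several transactions can be submitted back to back without waiting for each receipt:
```typescript
import { ethers } from "ethers";

// Placeholder endpoint and key; substitute your node's JSON-RPC URL and a funded key.
const provider = new ethers.JsonRpcProvider("https://my-node.example.com");
const wallet = new ethers.Wallet("0x<private-key>", provider);

async function submitBatch(count: number): Promise<void> {
  let nonce = await provider.getTransactionCount(wallet.address, "pending");
  const submissions = [];
  for (let i = 0; i < count; i++) {
    submissions.push(
      wallet.sendTransaction({
        to: "0x0000000000000000000000000000000000000000",
        value: ethers.parseEther("0.001"),
        nonce: nonce++, // each transaction must use the next sequential nonce
      })
    );
  }
  // Accepted into the pool in parallel, but mined strictly in nonce order.
  await Promise.all(submissions);
}

submitBatch(10);
```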
file: ./content/docs/platform-components/security-and-authentication/application-access-tokens.mdx
meta: {
"title": "Application access tokens",
"description": "Guide to managing application access tokens in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Application access tokens let you connect your SettleMint services with other
apps securely. They represent your application, not individual users, and can be
created by both admins and users. They can be used to connect to all or selected
services of an application.
## Create an application access token
Go to your application's dashboard and click on "App access tokens" in the left
navigation.
Click on the "Add an application access token" button. This opens a form where
you can create your application access token.
1. Choose a **name** for your application access token.
2. Select an **expiration date**. You cannot update this later.
3. Select a scope type. There are two types of scope: **All** or **Specific**.
1. If you selected **All**, you grant access to all services of the
application. If you add more services to the application later, this
access token will grant access to these new services as well.
2. If you selected **Specific**, you can choose which specific services this
access token will grant access to.
3. You can also update the scopes of your application access token later.
4. Click **Confirm** to create your application access token.
Copy and save your token securely - you won't see it again. Treat it like a
password and keep it secret.
## Update an application access token
Navigate to the **application** whose token you want to update.
1. Click **App Access Tokens** in the left navigation; you will see a list of
   all application access tokens for this application.
2. Click on **View scopes** of the token you wish to update. This will first
open a list where you can view the current scopes of the token.
3. Click on **Update** in the bottom right corner to open a form where you can
update your application access token.
4. Choose the new scopes for your application access token.
5. Click **Confirm** to update your application access token.
## Delete an application access token
If you are worried that an application access token has been compromised, or you
no longer use the integration for which you had generated a particular
application access token, you can delete that application access token.
1. Navigate to the application dashboard whose application access tokens you
wish to delete.
2. Click **App Access Tokens** in the left navigation.
3. Click **Delete** next to the application access token you want to delete.
4. Type **DELETE** to confirm. The application access token will no longer be
usable.
## Use an application access token
You can use these application access tokens in three ways depending on what
works for your use case:
* As a header, you can use the header `x-auth-token: TOKEN`.
* As a query parameter using `https://myservice.settlemint.com/?token=TOKEN`
appended to any URL.
* As the last part of the URL `https://myservice.settlemint.com/TOKEN`.
* For IPFS nodes, build your URI so it becomes
  `https://myservice.settlemint.com/TOKEN/api/v0/...`
* For Avalanche and Fuji, build your URIs so they look like
  `https://myservice.settlemint.com/ext/bc/C/rpc/TOKEN`
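For example, calling a service with the token in a header could look like the following TypeScript sketch; the service URL and token are placeholders:
```typescript
// Placeholder URL and token; substitute your service endpoint and access token.
const response = await fetch("https://myservice.settlemint.com", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-auth-token": "TOKEN",
  },
  body: JSON.stringify({ jsonrpc: "2.0", method: "eth_blockNumber", params: [], id: 1 }),
});
console.log(await response.json());
```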
file: ./content/docs/platform-components/security-and-authentication/personal-access-tokens.mdx
meta: {
"title": "Personal access tokens",
"description": "Guide to managing personal access tokens in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Personal access tokens (or
[Application access tokens](/platform-components/security-and-authentication/application-access-tokens)) let
you connect your SettleMint services with other apps securely. They represent an
individual user, and have the same rights as the user's role in the organization
(admin or user). They can be used to connect to all services that the user has
access to.
## Create a personal access token
In the upper right corner of any page, click your **profile picture or avatar**,
and then click **Personal access tokens**.
Click on the **Add a personal access token** button. This opens a form where
you can create your personal access token.
1. Choose a **name** for your personal access token.
2. Select an **expiration date**. You cannot update this later.
3. Click **Confirm** to create your personal access token.
Copy and save your token securely - you won't see it again. Treat it like a
password and keep it secret.
## Delete a personal access token
If you are worried that your personal access token has been compromised, or you
no longer use the integration for which you had generated a particular personal
access token, you can delete that personal access token.
1. Navigate to the list of your personal access tokens, and find the personal
access token you want to delete.
2. Click **Delete** next to the personal access token.
3. Type **DELETE** to confirm. The personal access token will no longer be
usable.
## Use a personal access token
You can use these personal access tokens in three ways depending on what works
for your use case:
* As a header, you can use the header `x-auth-token: TOKEN`.
* As a query parameter using `https://myservice.settlemint.com/?token=TOKEN`
appended to any URL.
* As the last part of the URL `https://myservice.settlemint.com/TOKEN`.
* For IPFS nodes, build your URI so it becomes
  `https://myservice.settlemint.com/TOKEN/api/v0/...`
* For Avalanche and Fuji, build your URIs so they look like
  `https://myservice.settlemint.com/ext/bc/C/rpc/TOKEN`
## Using application access tokens vs personal access tokens
For most use cases, you should use application access tokens. Since they are
directly linked to the application, the token continues to work even if the user
leaves the organization. They also provide more granular access control.
Personal access tokens are a simpler way to authenticate, but they are linked to
the user's account. If the user leaves the organization, the token will no
longer work for the services of that organization.
file: ./content/docs/platform-components/security-and-authentication/private-keys.mdx
meta: {
"title": "Private keys",
"description": "Guide to managing private keys on SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Private key management is a crucial aspect of blockchain security, ensuring that
transactions are securely signed and verified. Blockchain transactions require
**private keys** for **authentication, ownership verification, and signing**,
and the associated blockchain address must have sufficient funds to cover
**transaction fees (gas costs)**.
Depending on risk, compliance requirements, and scale, SettleMint supports
multiple approaches:
* **Accessible ECDSA P-256**: Straightforward software-based storage.
* **Hierarchical Deterministic (HD) ECDSA P-256**: Generate multiple child keys
from a master seed for structured backups.
* **Hardware Security Modules (HSMs)**: Tamper-resistant devices ensuring
  maximum security for enterprise or regulated use cases.

Each approach integrates with the **Transaction Signer**, guaranteeing seamless
and secure execution of on-chain operations.
***
## Importance of private key management
Private keys are essential for:
* **Digitally signing transactions** before they are broadcast to the
blockchain.
* **Generating public addresses** for receiving assets.
* **Ensuring ownership** of blockchain-based assets.
* **Providing security** through cryptographic encryption.
Improper handling of private keys can lead to **unauthorized access,
irreversible loss of funds, and compromised blockchain operations**.
***
## Key management approaches
SettleMint supports multiple **key storage and generation options** to balance
security, usability, and compliance.
### 1. **Externally generated private keys**
Users can create and manage their private keys **outside of SettleMint** and
import them when needed. Common options include:
* **MetaMask** – A popular Ethereum-based wallet.
* **Ledger/Trezor** – Secure hardware wallets for private key storage.
* **OpenSSL & CLI Tools** – For manually generating and managing key pairs.
* **Cloud-Based Key Vaults** – Secure solutions like AWS KMS, Azure Key Vault,
and GCP KMS.
### 2. **Private key management in SettleMint**
SettleMint provides **built-in private key generation and storage**, allowing
users to:
* **Generate ECDSA P-256, (HD) ECDSA P-256, and HSM private keys** within the
platform.
* **Manage multiple keys** for different blockchain networks.
* **Securely sign transactions** without exposing private keys externally.
### 3. **Custom key material: mnemonic & derivation path**
For advanced users, SettleMint allows the use of **custom key material**,
including:
* **Mnemonics** – A **12/24-word recovery phrase** that generates a
**Hierarchical Deterministic (HD) Wallet**.
* **Derivation Paths** – Structured key generation paths, such as:
* `m/44'/60'/0'/0/0` (Ethereum Standard)
* `m/44'/0'/0'/0/0` (Bitcoin Standard)
Using **HD ECDSA P-256**, users can derive multiple child keys from a **master
seed**, allowing structured backups and better key management.
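As an illustration of how a mnemonic plus a derivation path deterministically yields child keys, here is a minimal sketch using ethers.js (v6); the mnemonic below is a well-known test phrase and must never hold real funds:
```typescript
import { ethers } from "ethers";

// Well-known test mnemonic, for illustration only.
const mnemonic =
  "test test test test test test test test test test test junk";

// Derive the first two child accounts on the Ethereum standard path.
const child0 = ethers.HDNodeWallet.fromPhrase(mnemonic, undefined, "m/44'/60'/0'/0/0");
const child1 = ethers.HDNodeWallet.fromPhrase(mnemonic, undefined, "m/44'/60'/0'/0/1");

// Same mnemonic + same path always produces the same address.
console.log(child0.address);
console.log(child1.address);
```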
***
## Signing transactions in SettleMint
SettleMint integrates **a signing proxy** that ensures secure transaction
execution.
* **Captures `eth_sendTransaction` calls** made via **JSON-RPC endpoints**.
* **Uses the appropriate private key** from the key management section.
* **Signs the transaction** before sending it to the blockchain node.
This setup allows **seamless integration** with **external blockchain
development tools**, such as:
* **JSON-RPC API** – Direct node interactions
([Ethereum JSON-RPC](https://eth.wiki/json-rpc/API)).
* **Hardhat Framework** – Remote transaction signing
([Hardhat Configuration](https://hardhat.org/config/#json-rpc-based-networks)).
***
## Security considerations
### 1. **Software-based private key storage**
* Keys are stored in **software memory** for easy access.
* Suitable for **low-risk applications** but vulnerable to potential attacks.
### 2. **Hierarchical deterministic (HD) key management**
* Generates **multiple child keys** from a **single mnemonic seed**.
* Ensures **structured key backups and recovery**.
### 3. **Hardware security modules (HSMs)**
* Uses **tamper-resistant hardware devices** for **strong security**.
* Preferred for **enterprise-grade and regulated environments**.
| **Storage Method** | **Security Level** | **Best Use Case** |
| ------------------ | ------------------ | ----------------------------------------- |
| Software (ECDSA) | Medium | Low-risk applications, fast access needs. |
| HD Wallets | High | Structured backups, multi-account setups. |
| HSM | Very High | Enterprise, regulatory compliance. |
***
## Best practices for private key security
* **Use Secure Backups** – Store keys in **encrypted, air-gapped storage
solutions**.
* **Enable Multi-Factor Authentication (MFA)** – Adds an extra layer of
protection.
* **Restrict Access via IAM Policies** – Implement **role-based security
controls**.
* **Rotate Keys Regularly** – Minimizes risk associated with long-term key
exposure.
* **Monitor Transactions** – Set up **alerts for unauthorized activity**.
***
## Additional resources
* **[Ethereum JSON-RPC API](https://eth.wiki/json-rpc/API)**
* **[Hardhat JSON-RPC Signing Guide](https://hardhat.org/config/#json-rpc-based-networks)**
* **[MetaMask Private Key Management](https://metamask.io/)**
* **[Ledger Hardware Wallets](https://www.ledger.com/)**
* **[AWS Key Management Service](https://aws.amazon.com/kms/)**
***
You can sign transactions with private keys you created outside SettleMint,
e.g. with MetaMask or other wallet solutions. SettleMint, however, provides an
option to **create and manage private keys within the platform**.

When you deploy a blockchain node, it contains a signing proxy that captures
the `eth_sendTransaction` call, uses the appropriate key from the private key
section to sign it, and sends it onwards to the blockchain node. You can use
this proxy directly via the node's JSON-RPC endpoints
([https://eth.wiki/json-rpc/API](https://eth.wiki/json-rpc/API)) and via tools
like Hardhat
([https://hardhat.org/config/#json-rpc-based-networks](https://hardhat.org/config/#json-rpc-based-networks))
configured to use the default "remote" option for signing.
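A minimal Hardhat network entry relying on this proxy could look like the sketch below; the URL is a placeholder, and leaving `accounts` unset keeps Hardhat's default `"remote"` behavior, so `eth_sendTransaction` calls are signed by the node's signing proxy:
```typescript
// hardhat.config.ts (sketch)
import { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    settlemint: {
      // Placeholder endpoint; use your node's JSON-RPC URL, including the access token.
      url: "https://my-node.settlemint.com/TOKEN",
      // `accounts` defaults to "remote": transactions are sent unsigned and
      // signed by the node's signing proxy.
    },
  },
};

export default config;
```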
## Create a private key
Navigate to your **application**, click **Private keys** in the left navigation, and then click **Create a private key**. This opens a form.
Follow these steps to create the private key:
1. Choose a **private key type**:
* **Accessible ECDSA P256**: Standard Ethereum-style private keys with exposed mnemonic
* **HD ECDSA P256**: Hierarchical Deterministic keys for advanced key management
* **HSM ECDSA P256**: Hardware Security Module protected keys for maximum security
2. Choose a **name** for your private key
3. Select the **nodes** on which you want the key to be active
4. Click **Confirm** to create the key
```bash
# Create Accessible ECDSA P256 key
settlemint platform create private-key accessible-ecdsa-p256 my-key \
--application my-app \
--blockchain-node node-123
# Create HD ECDSA P256 key
settlemint platform create private-key hd-ecdsa-p256 my-key \
--application my-app
# Create HSM ECDSA P256 key
settlemint platform create private-key hsm-ecdsa-p256 my-key \
--application my-app
```
```typescript
import { createSettleMintClient } from '@settlemint/sdk-js';
const client = createSettleMintClient({
accessToken: 'your_access_token',
instance: 'https://console.settlemint.com'
});
// Create private key
const createKey = async () => {
const result = await client.privateKey.create({
name: "my-key",
applicationUniqueName: "my-app",
privateKeyType: "ACCESSIBLE_ECDSA_P256", // or "HD_ECDSA_P256" or "HSM_ECDSA_P256"
blockchainNodeUniqueNames: ["node-123"] // optional
});
console.log('Private key created:', result);
};
```
## Manage private keys
1. Navigate to your application's **Private keys** section
2. Click on a private key to:
* View details and status
* Manage node associations
* Check balances
* Fund the key
```bash
# List all private keys
settlemint platform list private-keys --application
# View specific key details
settlemint platform read private-key
# Restart a private key
settlemint platform restart private-key
```
```typescript
// List private keys
const listKeys = async () => {
const keys = await client.privateKey.list("your-app-name");
};
// Get key details
const getKey = async () => {
const key = await client.privateKey.read("key-unique-name");
};
// Restart key
const restartKey = async () => {
await client.privateKey.restart("key-unique-name");
};
```
## Fund the private key
For networks that require gas to perform a transaction, your private key should
contain enough funds to cover the gas price.
1. Click the **private key** in the overview to see detailed information
2. Open the **Balances tab**
3. Click **Fund**
4. Scan the **QR code** with your wallet/exchange to fund the key
Ensure your private key has sufficient funds before attempting transactions on
networks that require gas fees.
file: ./content/docs/platform-components/security-and-authentication/user-wallets.mdx
meta: {
"title": "User wallets",
"description": "Guide to managing user wallets in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
SettleMint's **User Wallets** feature offers a production-ready solution for
managing a virtually unlimited number of wallets with efficiency and
scalability. This tool provides seamless wallet generation and
**cost-effective management** without additional expenses. By generating
**unique addresses for each user**, privacy is significantly enhanced, while
improved performance enables faster, parallel transaction processing through
separate nonces. User wallets also simplify recovery, since all wallets are
derived from a single master key, and they use the same signing proxy to sign
transactions with the corresponding user's private key.
## Set up user wallets
To set up your user wallets, navigate to your application, click **Private
keys** in the left navigation, and then click **Create a private key**. This
opens a form.
Select **HD ECDSA P256** as the private key type, then enter a **name** for your
deployment. You can also select the nodes or load balancers on which you want to
enable the user wallets. You can change this later if you want to use your user
wallets on a different node. Click **Confirm** to deploy the wallet.

## Create user wallets
When your deployment status is **Running**, you can click on it to check the
details. You can see the Mnemonic from which the user wallets are generated
under **Key material**.

Upon initialization, the User Wallets section is empty. To create your first
user wallet, click on **Create a user wallet**.

This opens a form in which you must enter a wallet name.

The new user wallet appears in the list.

You can now see the address associated with that user. Remember that for
networks that require gas to perform a transaction, the user wallet should
contain enough funds to cover the gas price. You can fund it using the address
displayed in the list.
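Because user wallets sit behind the same signing proxy, an application can, under this setup, submit a transaction on behalf of a user by setting `from` to that user's wallet address in an `eth_sendTransaction` call. A minimal sketch with placeholder values:
```typescript
// Placeholder endpoint and addresses; the proxy signs with the key derived for `from`.
const response = await fetch("https://my-node.example.com", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    method: "eth_sendTransaction",
    params: [
      {
        from: "0x<user-wallet-address>",
        to: "0x<recipient-address>",
        value: "0x2386f26fc10000", // 0.01 ETH in wei, hex-encoded
      },
    ],
    id: 1,
  }),
});
console.log(await response.json());
```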
file: ./content/docs/platform-components/usage-and-logs/audit-logs.mdx
meta: {
"title": "Audit logs",
"description": "Guide to using audit logs in SettleMint"
}
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps } from "fumadocs-ui/components/steps";
import { Card } from "fumadocs-ui/components/card";
Audit logs provide a detailed record of activities in any application deployed
on the SettleMint Blockchain Transformation Platform. These logs provide the
following key benefits:
* Compliance: Ensure adherence to regulatory requirements and industry
standards.
* Accountability: Maintain a clear record of actions and changes made by users
for transparency.
* Troubleshooting: Facilitate the identification and resolution of issues by
tracking system activity.
* Data integrity: Provide a reliable trail of data access and modifications,
  protecting against data tampering.
Audit logs can be accessed from the application menu, on the left.
Four filters are available to find specific entries in the logs:
* Timestamp: select a time range or a day for which you want to get the logs.
* Service: choose the service you want to analyze.
* User: select a user from your application to see their actions.
* Action: filter based on a specific action (e.g. create, delete, pause,...).
## Access audit logs
### Navigate to logs
Access audit logs from the application menu on the left.
### Apply filters
Use available filters:
* Timestamp: Select time range
* Service: Choose specific service
* User: Filter by user
* Action: Filter by action type
### View details
Examine detailed log entries
## Log categories
### System events
* Service deployments
* Configuration changes
* Resource scaling
* System updates
### User actions
* Resource creation
* Permission changes
* Token management
* Access attempts
Audit logs provide essential tracking for regulatory compliance and security
monitoring.
file: ./content/docs/platform-components/usage-and-logs/monitoring-tools.mdx
meta: {
"title": "Monitoring tools"
}
For all your running services in a blockchain application, a set of monitoring
tools is available to gain insights into the health and performance of those
services.

## Service statuses
For SettleMint platform status, you can go to the
[SettleMint status monitor](https://status.settlemint.com/).
The service status indicates whether your service is running well, facing
issues, or needs your attention.
Go to the **service's overview page** or a **service detail page** to view the
status of a particular service (e.g. network, node, smart contract set, etc.).
## Resource usage status & metrics
You can view the resources (memory, vCPU, and disk space) allocated to your
services at any time, and follow up on the current usage. When the current
resource usage is about to reach its limit, you will see a warning with the
recommendation to scale your resource pack to keep the service running.
Go to the **Resource tab** of a **service detail page** to view the resource
usage status and metrics.
## Network and node stats
Live dashboards allow you to follow up on important stats for network and node
monitoring and identify possible bottlenecks.
Go to the **Details tab** of a **network or node detail page** to view the
stats.
## Network and node logs
You can view real-time logs of your networks and nodes for granular insights
into activity related to them. The logs can be used as a troubleshooting tool if
something goes wrong.


Go to the **Logs tab** of a **network or node detail page** to view the logs.
file: ./content/docs/platform-components/usage-and-logs/resource-usage.mdx
meta: {
"title": "Resource usage",
"description": "Guide to monitoring resource usage"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
When you add a service (network, node, smart contract set, etc.), you select the
amount of resources to allocate to that service by choosing a small, medium or
large resource pack. Resources are the **memory**, **vCPU**, and **disk space**
you need to keep your services running. You can see the resources allocated to
your services at any time, and follow up on the current resource usage.
## Resource usage metrics

Navigate to your **application**. Click **Blockchain networks**, **Blockchain
nodes** or **Smart contracts** in the left navigation, depending on which
resource usage you want to see, and choose the relevant one from the list.
Open the **Metrics tab**.
Here you have clear visual representation of **used versus allocated** memory,
vCPU, and disk space.
Moreover, detailed graphs are available to check memory, vCPU, and disk space
usage for the last hour, day, week and month.
## Resource usage status

The **status** indicates whether or not your resources are still sufficient for
optimal functioning of the service.
The following statuses are defined:
* **Healthy** - the used resources are less than 75% of the allocated resources
* **Suboptimal** - the used resources are between 75% and 90% of the allocated
resources
* **Critical** - the used resources are over 90% of the allocated resources
When the status is **Suboptimal**, and the current resource usage is about to
reach its limit, you will see a **warning** with the recommendation to scale
your resource pack to keep the service running.
file: ./content/docs/supported-blockchains/L1-public-networks/avalanche.mdx
meta: {
"title": "Avalanche"
}
Avalanche was launched in 2020 by Ava Labs, and has its own cryptocurrency
called AVAX. It focuses on scalability, speed and low transaction costs.
Avalanche is fully compatible with Ethereum components, dApps (distributed
applications), and tooling.
## Mainnet and testnet
SettleMint supports both the Avalanche **Mainnet** and the **Fuji Testnet**.
The Mainnet is the primary public Avalanche production blockchain, where
actual-value transactions take place. Each transaction requires payment of a
transaction fee, payable in the native coin AVAX. The Testnet is an instance of
the blockchain to be used for testing and experimentation. There are also coins
used in the Testnet but they have no value, so there is no risk of losing real funds.
You can consider the Testnet as a prototype and the Mainnet as the official
production blockchain. Or think of this as an analog to production versus
staging servers.
## X-chain, c-chain and p-chain
Avalanche is somewhat unique, because its Mainnet is made up of three
blockchains: X-Chain, C-Chain, and P-Chain.
### X-chain: exchange chain
This blockchain is used to create digital assets and carry out transactions,
which are paid in AVAX.
### C-chain: contract chain
This blockchain runs smart contracts and is compatible with Ethereum.
### P-chain: platform chain
This is the blockchain where anyone can create their own custom blockchain
network, called a subnet or subnetwork. A subnet is managed by its own
validators.
With each blockchain having different roles, Avalanche improves speed and
scalability compared to running all processes on just one chain.
## Consensus mechanism
A consensus mechanism defines the rules for the nodes in a blockchain network to
reach agreement on the current state of the blockchain ledger.
Avalanche uses a novel consensus mechanism that is close to Proof of Stake
(PoS), but that has its own specific characteristics. In fact, there are two
consensus mechanisms, called **Avalanche** and **Snowman**.
The core concept of both is **random subsampling**. Validators randomly poll
other validators to determine whether a new transaction is valid. After a
certain amount of repeated random subsampling, it's statistically proven that it
would be almost impossible for a transaction to be false. This repeated sampling
happens extremely fast regardless of the number of validators in the network.
Any node that has staked AVAX can vote on every transaction, which makes the
network more robust and decentralized.
Avalanche and Snowman are quite similar, but support a different structure to
store data, in line with the different needs of the three specific blockchains.
The X-Chain uses the Avalanche consensus mechanism, while the C-Chain and the
P-Chain use the Snowman mechanism.
More information can be found on the
[official Avalanche website](https://build.avax.network/docs).
file: ./content/docs/supported-blockchains/L1-public-networks/ethereum.mdx
meta: {
"title": "Ethereum"
}
Ethereum is one of the most popular public blockchains. It has its own
cryptocurrency, called Ether (ETH) or Ethereum, and its own native programming
language called Solidity to build and publish dApps (distributed applications)
on the Ethereum blockchain.
## Mainnet and testnet
SettleMint supports the Ethereum **Mainnet**, the **Sepolia Testnet** and
the **Holesky Testnet**.
The Mainnet is the primary public Ethereum production blockchain, where
actual-value transactions take place. Each transaction requires payment of a
transaction fee, payable in the native coin ETH. The Testnet is an instance of
the blockchain to be used for testing and experimentation. There are also coins
used in the Testnet but they have no value, so there is no risk of losing real funds.
You can consider the Testnet as a prototype and the Mainnet as the official
production blockchain. Or think of this as an analog to production versus
staging servers.
## Geth client
In order to participate in a blockchain, you need some form of client software
that implements the features required to run a node. SettleMint uses **Geth**
which is the official Ethereum client, written in the programming language Go,
and fully open source. While there are other clients (like Parity), Geth can be
seen as the de facto reference implementation for running an Ethereum node. It
is the most widespread client with the biggest user base and variety of tooling
for developers.
More information on Geth can be found on the
[official Geth website](https://geth.ethereum.org/).
## Consensus mechanism
Proof of Stake (PoS) is Ethereum's new consensus mechanism. In PoS, nodes are
chosen to validate transactions and create new blocks based on the amount of
Ether they hold and lock up as a stake. This means that the more Ether a node
stakes, the higher its chances of being chosen to validate transactions and earn
rewards for its work.
Unlike Proof of Work (PoW), which requires miners to perform computationally
intensive tasks, PoS relies on the concept of "finality" – the idea that once a
block is added to the blockchain, it is irreversible and has been "finalized."
This makes the PoS consensus mechanism more energy-efficient than PoW.
Validators are chosen to participate in block validation based on a randomized
algorithm that takes into account their staked Ether. This collateral can be
slashed if they misbehave or act maliciously. The PoS consensus mechanism is
live on the Ethereum Mainnet as well as on its public testnets.
More information on the consensus mechanism can be found on the
[official Ethereum website](https://ethereum.org/en/developers/docs/consensus-mechanisms/).
file: ./content/docs/supported-blockchains/L1-public-networks/hedera.mdx
meta: {
"title": "Hedera"
}
Hedera is a public distributed ledger technology (DLT) network that was launched
in August 2018 by Hedera Hashgraph, LLC. It uses the Hashgraph consensus
algorithm, which is a unique and novel approach to achieving consensus in a
distributed network. Hedera's native cryptocurrency is called HBAR, and it is
used to power the network's services, including smart contracts, file storage,
and regular transactions.
Hedera focuses on providing high throughput, low latency, and fair transaction
ordering, making it suitable for enterprise-grade applications. Unlike
blockchain-based systems, Hedera's Hashgraph algorithm ensures fast, fair, and
secure transactions without compromising decentralization.
## Mainnet and testnet
SettleMint supports the **Hedera Mainnet** and the **Hedera Testnet**.
The Mainnet is the primary public Hedera production network, where actual-value
transactions occur. Each transaction requires payment of a transaction fee,
payable in HBAR, the native cryptocurrency. The Testnet, on the other hand, is
an environment used for testing and experimentation. Testnet transactions use
test HBAR, which have no real-world value, ensuring no risk of real fund loss
during development and testing.
You can think of the Testnet as a sandbox for developers and the Mainnet as the
official production network. This setup is similar to the concept of production
versus staging servers in software development.
## Consensus mechanism
Hedera uses the Hashgraph consensus algorithm, which is based on a directed
acyclic graph (DAG). This algorithm provides several advantages:
* `High Throughput`: The network can process thousands of transactions per second.
* `Fairness`: Transactions are timestamped, ensuring fair ordering.
* `Security`: It achieves asynchronous Byzantine Fault Tolerance (aBFT), the highest level of security for a consensus algorithm.
In Hashgraph, each node in the network shares information (events) with other
nodes, and through a process called gossip-about-gossip and virtual voting,
consensus is reached efficiently and quickly. This method ensures that Hedera
remains decentralized and resilient against attacks while maintaining high
performance.
## JSON-RPC relay
The Hedera JSON-RPC Relay is an open-source project implementing the Ethereum
JSON-RPC standard. The JSON-RPC relay allows developers to interact with Hedera
nodes using familiar Ethereum tools. This allows Ethereum developers and users
to deploy, query, and execute contracts as they usually would.
Hedera's JSON-RPC relay is a complex software component that relies on multiple
elements, including consensus nodes and the mirror node. This complexity can
lead to issues such as connection timeouts, especially during smart contract
deployments. One Ethereum transaction can generate more than ten Hedera
transactions, which increases the likelihood of encountering these problems.
However, these issues are typically limited to the initial contract deployment
phase. Once a contract is successfully deployed, subsequent contract calls
should not experience such problems.
More information can be found on the
[official Hedera website](https://hedera.com/).
file: ./content/docs/supported-blockchains/L1-public-networks/sonic.mdx
meta: {
"title": "Sonic"
}
Sonic, originally launched as Fantom in June 2018 by the Fantom Foundation
and now rebranded, is a high-performance public blockchain network. It
leverages a unique consensus mechanism called Lachesis and supports its
native cryptocurrency, FTM. Sonic is designed to power decentralized
applications (dApps), smart contracts, and fast transactions, using the
Ethereum Virtual Machine (EVM) and Solidity for development.
Sonic emphasizes rapid transaction processing, low costs, and scalability,
making it an attractive choice for both developers and enterprises. Unlike
traditional blockchain systems, Sonic’s Lachesis consensus delivers near-instant
finality and high throughput without sacrificing decentralization.
## Mainnet and testnet
SettleMint supports the **Sonic Mainnet** and the **Sonic Testnet**.
The Mainnet is the primary public Sonic network, where real-value transactions
take place. Each transaction incurs a fee, payable in FTM, the native
cryptocurrency. The Testnet, conversely, is a development environment for
testing and experimentation. Testnet FTM has no real-world value, allowing
developers to experiment freely without financial risk.
You can view the Testnet as a sandbox for prototyping and the Mainnet as the
live production network, akin to staging versus production environments in
software development.
## Consensus mechanism
Sonic utilizes the **Lachesis** consensus algorithm, an asynchronous Byzantine
Fault Tolerant (aBFT) protocol built on a directed acyclic graph (DAG)
structure. This approach offers several key benefits:
* `High Throughput`: Sonic can handle thousands of transactions per second.
* `Near-Instant Finality`: Transactions are confirmed in seconds, ensuring quick
settlement.
* `Security`: The aBFT design provides robust protection against malicious
actors.
In Lachesis, nodes process transactions independently and share updates
asynchronously through a gossip protocol. This allows the network to reach
consensus rapidly and efficiently, maintaining decentralization and resilience.
Validators stake FTM to participate, with penalties for misbehavior, ensuring
network integrity while keeping energy use minimal compared to Proof of Work
systems.
More information can be found on the
[official Sonic website](https://www.soniclabs.com/).
file: ./content/docs/supported-blockchains/L2-public-networks/arbitrum.mdx
meta: {
"title": "Arbitrum"
}
Arbitrum was launched in August 2021 by Offchain Labs, and has its own
cryptocurrency since March 2023 called ARB. It is a layer 2 scaling solution for
Ethereum that focuses on security, scalability and compatibility. Arbitrum uses
optimistic rollup technology to process transactions off-chain, which allows it
to offer significantly faster transaction speeds and lower fees than Ethereum
mainnet.
It uses AVM (Arbitrum Virtual Machine) which is a custom virtual machine that
was created for the Arbitrum Layer 2 scaling solution. The AVM is designed to be
fully compatible with the Ethereum Virtual Machine (EVM), but it also includes a
number of optimizations that make it more efficient and scalable. In addition to
scalability and compatibility, Arbitrum is also focused on decentralization. The
Arbitrum network is secured by a decentralized network of validators, and it is
governed by a DAO (decentralized autonomous organization). This ensures that
Arbitrum is not controlled by any single entity.
## Mainnet and testnet
SettleMint supports the **Arbitrum One Mainnet** and the **Arbitrum Testnet**.
The Mainnet is the primary public Arbitrum production blockchain, where
actual-value transactions take place. Each transaction requires payment of a
transaction fee, payable in ETH. The Testnet is an instance of
the blockchain to be used for testing and experimentation. There are also coins
used in the Testnet but they have no value, so there is no risk of losing real funds.
You can consider the Testnet as a prototype and the Mainnet as the official
production blockchain. Or think of this as an analog to production versus
staging servers.
## Consensus mechanism
The Arbitrum consensus algorithm is a hybrid system that combines **optimistic
rollups** with a **sequencer**. Optimistic rollups assume that all transactions
are valid unless proven otherwise, which allows transactions to be processed
off-chain, resulting in faster and cheaper transactions.
The sequencer is a node that is responsible for proposing batches of
transactions to the Arbitrum network. The sequencer is not trusted by the
network, and any node can challenge the sequencer's proposal if they believe
that it contains invalid transactions.
The Arbitrum consensus algorithm is scalable, secure, and decentralized.
However, it currently has a seven-day withdrawal period and relies on a
centralized sequencer; the team is looking into creating a decentralized
sequencer and reducing the withdrawal period to two days.
More information can be found on the
[official Arbitrum website](https://docs.arbitrum.io/intro/).
file: ./content/docs/supported-blockchains/L2-public-networks/optimism.mdx
meta: {
"title": "Optimism"
}
Optimism was launched in March 2021 by Optimism PBC, and has its own
cryptocurrency since May 2022 called OP. It is a layer 2 scaling solution for
Ethereum that focuses on security, scalability and ease of use. Optimism uses
optimistic rollup technology to process transactions off-chain, which allows it
to offer significantly faster transaction speeds and lower fees than Ethereum
mainnet.
## The basics
It uses Ethereum Virtual Machine (EVM), which means that developers can deploy
their existing Ethereum dapps to Optimism without any changes. In addition to
scalability and compatibility, Optimism is also focused on decentralization. The
Optimism network is secured by a decentralized network of validators, and it is
governed by a DAO (decentralized autonomous organization). This ensures that
Optimism is not controlled by any single entity.
## Mainnet and testnet
SettleMint supports the **OP Mainnet** and the **OP Goerli Testnet**.
The Mainnet is the primary public Optimism production blockchain, where
actual-value transactions take place. Each transaction requires payment of a
transaction fee, payable in ETH; OP is the network's governance token. The
Testnet is an instance of the blockchain used for testing and experimentation.
It also uses coins, but they have no value, so there is no risk of losing real
funds. You can consider the Testnet as a prototype and the Mainnet as the
official production blockchain, or think of this as an analog to production
versus staging servers.
## Consensus mechanism
The Optimism consensus algorithm is a hybrid system that combines **optimistic
rollups** with a **sequencer**. Optimistic rollups assume that all transactions
are valid unless proven otherwise, which allows transactions to be processed
off-chain, resulting in faster and cheaper transactions.
The sequencer is a node responsible for proposing batches of transactions to
the Optimism network. The sequencer is not trusted by the network, and any node
can challenge the sequencer's proposal if it believes the proposal contains
invalid transactions.
The Optimism consensus algorithm is scalable, secure, and decentralized.
However, it has a seven-day withdrawal period and relies on a centralized
sequencer.
More information can be found on the
[official Optimism website](https://console.optimism.io/).
file: ./content/docs/supported-blockchains/L2-public-networks/polygon-zkevm.mdx
meta: {
"title": "Polygon zkevm"
}
Polygon zkEVM, introduced by the Polygon (formerly Matic) team in March 2023,
represents the latest advancement in Polygon's efforts to provide a Layer 2
scalability solution. It uses cryptographic zero-knowledge proofs to provide
validity and quick finality for off-chain transaction computation, an approach
known as a ZK-Rollup.
It is the first zkEVM to be fully equivalent to an EVM, meaning that all
existing smart contracts, developer toolings, and wallets work seamlessly.
Polygon zkEVM provides a complete EVM-like experience for developers and users
alike, with significantly lower transaction costs and higher throughput than
Ethereum.
## Mainnet and testnet
SettleMint supports both the Polygon zkEVM **Mainnet** and the **Testnet**.
The Mainnet is the primary public Polygon zkEVM production blockchain, where
actual-value transactions take place. Polygon zkEVM does not have its own
native token; gas fees are paid in ETH, while MATIC, the native token of the
Polygon network, is used for protocol incentives such as rewarding sequencers
and aggregators. The Testnet is an instance of the blockchain used for testing
and experimentation. It also uses coins, but they have no value, so there is no
risk of losing real funds.
You can consider the Testnet as a prototype and the Mainnet as the official
production blockchain. Or think of this as an analog to production versus
staging servers.
## zkEVM
zkEVM stands for "zero-knowledge Ethereum Virtual Machine". It is a type of
Ethereum scaling solution that uses zero-knowledge proofs (ZKPs) to verify the
validity of transactions off-chain. This means that zkEVMs can process thousands
or even millions of transactions per second, with very low fees.
ZKPs are a cryptographic technique that allows someone to prove that they know a
piece of information without revealing the information itself. In the context of
zkEVMs, ZKPs are used to prove that a batch of transactions has been processed
correctly, without revealing the individual transactions in the batch.
Here are some of the benefits of zkEVMs:
* **Scalability**: zkEVMs can process thousands or even millions of transactions
per second, which is much faster than Ethereum's current throughput.
* **Low fees**: zkEVMs can significantly reduce transaction fees, making
Ethereum more affordable to use.
* **Security**: zkEVMs are just as secure as Ethereum, as they use the same
underlying blockchain technology.
* **Compatibility**: zkEVMs are compatible with existing Ethereum smart
contracts and wallets, so developers and users can start using them
immediately.
In a nutshell, Polygon zkEVM allows you to use Ethereum with the scalability and
low fees of a Layer 2 solution.
## Consensus mechanism
The Polygon zkEVM consensus algorithm is a permissionless system that allows
anyone to participate in the process of validating and finalizing transactions.
It is based on a Proof of Efficiency (PoE) mechanism, which rewards participants
for their contributions to the network.
The Polygon zkEVM consensus algorithm works as follows:
1. **Sequencers** propose batches of transactions to the network.
2. **Validators** verify the validity of the proposed batches and generate
proofs of correctness.
3. **Aggregators** aggregate the proofs from the validators and submit them to
the Consensus Contract.
4. **The Consensus Contract** verifies the proofs and finalizes the
transactions.
Sequencers, validators, and aggregators are all incentivized to participate in
the network by receiving rewards in MATIC, the native token of the Polygon
network.
The Polygon zkEVM consensus algorithm is designed to be secure, efficient, and
fair. It is also designed to be compatible with the Ethereum mainnet, so that
users can easily move their assets between the two networks.
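The four-step flow above can be modeled in a few lines of TypeScript. This is purely an illustrative sketch with hypothetical types, not the actual Polygon zkEVM interfaces.
```ts
// Hypothetical model of the batch lifecycle described above.

type Batch = { id: number; txs: string[] };
type Proof = { batchId: number; valid: boolean };

// 1. A sequencer proposes a batch of transactions.
const proposeBatch = (id: number, txs: string[]): Batch => ({ id, txs });

// 2. A validator checks the batch and produces a proof of correctness
//    (stubbed here; the real proof is a zero-knowledge proof).
const prove = (batch: Batch): Proof => ({
  batchId: batch.id,
  valid: batch.txs.length > 0,
});

// 3.-4. An aggregator submits the proofs to the consensus contract, which
//    verifies them and finalizes every batch whose proof checks out.
const finalize = (proofs: Proof[]): number[] =>
  proofs.filter((p) => p.valid).map((p) => p.batchId);

const batch = proposeBatch(1, ["tx-a", "tx-b"]);
console.log(finalize([prove(batch)])); // [1]
```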
More information on the consensus mechanism can be found on the
[official Polygon website](https://wiki.polygon.technology/docs/zkevm/architecture/).
file: ./content/docs/supported-blockchains/L2-public-networks/polygon.mdx
meta: {
"title": "Polygon"
}
Polygon, previously known as Matic Network, was launched in 2017, mainly to tackle Ethereum's
scaling problem. Polygon is a layer 2 commit chain to the Ethereum network, and
acts as an add-on layer to Ethereum. It does not seek to change the original
Ethereum blockchain layer, but solves pain points associated with it, like high
gas fees and slow speeds, without sacrificing on security. Polygon supports all
the existing Ethereum tooling, along with faster and cheaper transactions.
Polygon allows developers to easily launch Ethereum-compatible scaling solutions
and stand-alone blockchains as part of a network of interconnecting blockchains.
Polygon is often referred to as "Ethereum's internet of blockchains", and has
gained wide adoption within the Web3 community. It has gained popularity because
of the great throughput and low gas expenses, and as a consequence the Polygon
ecosystem is growing fast.
Polygon has its own cryptocurrency, called MATIC.
## Mainnet and testnet
SettleMint supports both the Polygon **Mainnet** and the **Amoy Testnet**.
The Mainnet is the primary public Polygon production blockchain, where
actual-value transactions take place. Each transaction requires payment of a
transaction fee, payable in the native coin MATIC. The Testnet is an instance of
the blockchain used for testing and experimentation. It also uses coins, but
they have no value, so there is no risk of losing real funds.
You can consider the Testnet as a prototype and the Mainnet as the official
production blockchain. Or think of this as an analog to production versus
staging servers.
## Consensus mechanism
A consensus mechanism defines the rules for the nodes in a blockchain network to
reach agreement on the current state of the blockchain ledger.
Polygon uses a **Proof of Stake (PoS)** consensus mechanism, giving any actor
who stakes their MATIC tokens the power to validate transactions and create new
blocks within the network.
More information can be found on the
[official Polygon website](https://docs.polygon.technology/).
file: ./content/docs/supported-blockchains/L2-public-networks/soneium.mdx
meta: {
"title": "Soneium"
}
Soneium was launched in early 2025 by Sony Block Solutions Labs, a joint
venture between Sony Group and Startale Labs. Soneium operates as a layer 2
solution built atop Ethereum, emphasizing high throughput, low-cost
transactions, and seamless cross-chain connectivity. It leverages optimistic
rollup technology to process transactions off-chain, delivering faster speeds
and reduced fees compared to the Ethereum mainnet.
## The basics
Soneium is fully compatible with the Ethereum Virtual Machine (EVM), allowing
developers to port their Ethereum-based dapps effortlessly. Beyond scalability,
Soneium prioritizes decentralization and user empowerment. The network is
secured by a distributed set of validators and governed through a decentralized
autonomous organization (DAO), ensuring no single party holds control over its
operations.
## Mainnet and testnet
SettleMint supports the **Soneium Mainnet** and the **Soneium Testnet**.
The Mainnet serves as Soneium's primary public blockchain, where real-value
transactions occur, with transaction fees paid in ETH. The Testnet, on the
other hand, is a sandbox environment for developers to test and experiment
without financial risk, using valueless test tokens.
You can think of the Testnet as a sandbox for developers and the Mainnet as the
official production network. This setup is similar to the concept of production
versus staging servers in software development.
## Consensus mechanism
Soneium employs a hybrid consensus model combining **optimistic rollups** with
a **sequencer**. Optimistic rollups assume transaction validity by default,
processing transactions off-chain for efficiency while enabling challenges if
fraud is detected. The sequencer batches transactions and submits them to the
network, but it is not trusted: any validator can dispute invalid batches.
This design keeps Soneium scalable, secure, and decentralized. However,
withdrawals involve a challenge delay (typically seven days), and transaction
ordering relies on a centralized sequencer, a trade-off for its performance
gains.
More information can be found on the
[official Soneium website](https://soneium.org/).
file: ./content/docs/supported-blockchains/permissioned-networks/hyperledger-besu.mdx
meta: {
"title": "Hyperledger besu"
}
Enterprise Ethereum is the permissioned blockchain version of public Ethereum.
The two major Enterprise Ethereum clients are **Hyperledger Besu and Quorum**.
Both clients implement a permissioning layer, designed specifically for use in
a consortium environment, that allows only known nodes to join the network.
## What is hyperledger besu?
Hyperledger Besu is an open-source Ethereum client developed under the Apache
2.0 license and written in Java. It is hosted by the Linux Foundation under
the Hyperledger umbrella project. This project is largely known for its
Hyperledger Fabric component, which is one of the most prominent permissioned
protocols in the blockchain space. While they both exist under the Hyperledger
umbrella, Fabric and Besu have little in common in terms of the underlying
technology. More specifically, whereas Fabric is a private protocol designed
from the ground up to support enterprise-grade solutions, Besu seeks to utilize
the public Ethereum network. Besu can run on the public network or on private
networks, as well as on a number of testnets. The project, formerly known as
Pantheon, joined the Hyperledger family in 2019, adding for the first time a
public blockchain implementation to Hyperledger's suite of private blockchain
frameworks.
## Features
Hyperledger Besu's main features include:
* **Permissioning**: Contrary to the Ethereum Mainnet, a permissioned network
allows only specified nodes to join the network and to participate.
* **The Ethereum Virtual Machine (EVM)**: The EVM is what enables the deployment
and execution of Ethereum smart contracts.
* **Privacy**: The Private Transaction Manager makes it possible to keep
transactions between predefined parties private from other users of the
network.
* **User-facing API**: The client provides mainnet Ethereum and EEA JSON-RPC
APIs over HTTP and WebSocket protocols. It also supports a GraphQL API.
Hyperledger Besu's enterprise features are designed to adhere to the
requirements of the
[Enterprise Ethereum Alliance (EEA)](https://entethalliance.org/) client
specification.
## Consensus mechanisms
A consensus mechanism defines the rules for the nodes in a blockchain network to
reach an agreement on the current state of the blockchain ledger.
Besu comes with several consensus mechanisms. As an Ethereum implementation,
Proof of Work (PoW) is a given, but the **Proof of Authority (PoA)** options are
more suitable for enterprise projects. These can be used when participants know
each other and there is a level of trust between them, e.g. in a permissioned
consortium network.
PoA is a light and practical consensus mechanism that gives a small and
designated number of blockchain actors the power to validate transactions within
the network and to create new blocks. This results in faster block times and a
much greater transaction throughput.
**SettleMint's Enterprise Ethereum networks always use QBFT**
In QBFT networks, a set of validator nodes is responsible for maintaining
consensus on the blockchain. Similar to IBFT, these validators participate in a
round-based process to propose and validate blocks. At the start of each round,
one validator is chosen as the proposer. This proposer puts forward a new block
to be added to the chain.
The remaining validators review the proposed block and share their votes. If a
supermajority (more than two-thirds) of the validators agree that the block is
valid, it is finalized and committed to the chain. Like IBFT 2.0, QBFT achieves
immediate finality: blocks are either committed or discarded in a single round.
There are no forks, and every valid block becomes a permanent part of the main
chain.
QBFT enhances performance and fault tolerance over IBFT by introducing
optimizations for large validator sets, making it better suited for
enterprise-grade and high-throughput networks.
When you deploy a Hyperledger Besu blockchain network on SettleMint, it uses
QBFT and is therefore Byzantine fault tolerant.
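For intuition, the sketch below encodes the validator arithmetic described above: the more-than-two-thirds supermajority needed to commit a block, and how many Byzantine validators a network of a given size can tolerate. The exact quorum formula is implementation-specific, so treat this as an approximation rather than the client's actual logic.
```ts
// Smallest vote count strictly greater than two-thirds of n validators.
function supermajority(n: number): number {
  return Math.floor((2 * n) / 3) + 1;
}

// A BFT network of n validators tolerates f Byzantine validators
// where n >= 3f + 1, i.e. f = floor((n - 1) / 3).
function byzantineFaultTolerance(n: number): number {
  return Math.floor((n - 1) / 3);
}

for (const n of [4, 7, 10]) {
  console.log(
    `n=${n}: quorum=${supermajority(n)}, tolerates f=${byzantineFaultTolerance(n)}`
  );
}
// n=4: quorum=3, tolerates f=1
// n=7: quorum=5, tolerates f=2
// n=10: quorum=7, tolerates f=3
```
This is also why four is the minimum validator count for a Byzantine fault tolerant network: with n = 4, one faulty validator can be tolerated.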
More information on Hyperledger Besu can be found in the official
[Hyperledger Besu documentation](https://besu.hyperledger.org/en/stable/).
file: ./content/docs/supported-blockchains/permissioned-networks/hyperledger-fabric.mdx
meta: {
"title": "Hyperledger fabric"
}
Hyperledger Fabric is an open source enterprise-grade permissioned distributed
ledger technology (DLT) platform, designed for use in enterprise contexts, that
delivers some key differentiating capabilities over other popular distributed
ledger or blockchain platforms.
Fabric has a highly modular and configurable architecture, enabling innovation,
versatility and optimization for a broad range of industry use cases including
banking, finance, insurance, healthcare, human resources, supply chain and even
digital music delivery.
Fabric is the first distributed ledger platform to support smart contracts
authored in general-purpose programming languages such as Java, Go and Node.js,
rather than constrained domain-specific languages (DSL). This means that most
enterprises already have the skill set needed to develop smart contracts, and no
additional training to learn a new language or DSL is needed.
The Fabric platform is also permissioned, meaning that, unlike with a public
permissionless network, the participants are known to each other, rather than
anonymous and therefore fully untrusted. This means that while the participants
may not fully trust one another (they may, for example, be competitors in the
same industry), a network can be operated under a governance model that is built
off of what trust does exist between participants, such as a legal agreement or
framework for handling disputes.
Fabric can leverage consensus protocols that do not require a native
cryptocurrency to incent costly mining or to fuel smart contract execution.
Avoidance of a cryptocurrency reduces some significant risk/attack vectors, and
absence of cryptographic mining operations means that the platform can be
deployed with roughly the same operational cost as any other distributed system.
The combination of these differentiating design features makes Fabric one of the
better performing platforms available today both in terms of transaction
processing and transaction confirmation latency, and it enables privacy and
confidentiality of transactions and the smart contracts (what Fabric calls
"chaincode") that implement them.
## Consensus mechanism
Fabric currently offers a CFT (crash fault-tolerant) ordering service
implementation based on the Raft protocol, as implemented in the
[etcd](https://coreos.com/etcd/) library.
The Raft protocol is the go-to ordering service choice for production networks.
The Fabric implementation of Raft uses a "leader and follower" model, in which
a leader is dynamically elected among the ordering nodes in a channel (this
collection of nodes is known as the "consenter set") and replicates messages to
the follower nodes. Because the system
can sustain the loss of nodes, including leader nodes, as long as there is a
majority of ordering nodes (what's known as a "quorum") remaining, Raft is said
to be "crash fault tolerant" (CFT). In other words, if there are three nodes in
a channel, it can withstand the loss of one node (leaving two remaining). If you
have five nodes in a channel, you can lose two nodes (leaving three remaining
nodes). This feature of a Raft ordering service is a factor in the establishment
of a high availability strategy for your ordering service. Additionally, in a
production environment, you would want to spread these nodes across data centers
and even locations. For example, by putting one node in three different data
centers. That way, if a data center or entire location becomes unavailable, the
nodes in the other data centers continue to operate.
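The availability arithmetic in the paragraph above is easy to express in code; the sketch below (illustrative only) computes the Raft majority quorum and the number of crashed ordering nodes a channel can tolerate.
```ts
// A Raft ordering service stays available while a majority of ordering
// nodes (the quorum) survives.

function raftQuorum(n: number): number {
  return Math.floor(n / 2) + 1; // simple majority
}

function tolerableCrashes(n: number): number {
  return n - raftQuorum(n); // equivalently floor((n - 1) / 2)
}

for (const n of [3, 5, 7]) {
  console.log(`${n} orderers: quorum=${raftQuorum(n)}, can lose ${tolerableCrashes(n)}`);
}
// 3 orderers: quorum=2, can lose 1
// 5 orderers: quorum=3, can lose 2
// 7 orderers: quorum=4, can lose 3
```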
More information can be found on the official
[Hyperledger Fabric documentation website](https://hyperledger-fabric.readthedocs.io/en/latest).
file: ./content/docs/supported-blockchains/permissioned-networks/quorum.mdx
meta: {
"title": "Quorum"
}
Quorum is an enterprise-grade, permissioned blockchain platform derived from Ethereum and originally developed by J.P. Morgan. It extends Ethereum's features with robust, enterprise-centric innovations, including enhanced performance, flexible consensus, and advanced governance.
Quorum empowers organizations to build and deploy **secure, high-performance blockchain applications** while retaining full Ethereum compatibility. Its design caters specifically to industries such as finance, supply chain, healthcare, and government, where compliance, transparency, and security are paramount.
## Dynamic Block Production
One of the key strengths of Quorum is its **dynamic block production** mechanism. Unlike public blockchains that generate blocks at fixed intervals regardless of network activity, Quorum's consensus protocols (such as Raft or IBFT) are designed to produce blocks **only when there is an actual transaction to process**. This behavior minimizes wasted computational resources and removes the overhead of mining empty blocks.
### Consensus Models in Quorum
Quorum supports multiple consensus mechanisms designed for enterprise use:
#### Raft (Crash Fault Tolerance)
* **Efficiency:** The Raft protocol uses a leader/follower structure to order transactions efficiently. Blocks are generated only in response to incoming transactions—there is no periodic block production.
* **Fault Tolerance:** As a Crash Fault Tolerant (CFT) system, it can handle node failures as long as a majority of the network remains operational. For example, in a three-node network, the failure of one node does not affect consensus.
#### Istanbul Byzantine Fault Tolerance (IBFT)
* **Immediate Finality:** IBFT is a Byzantine Fault Tolerant (BFT) consensus mechanism that requires a supermajority (at least two-thirds) of nodes to agree on a block before it is finalized. This provides robust security against malicious actors while ensuring that blocks are only created when necessary.
* **Resilience:** IBFT is ideal in environments with a mix of trusted and untrusted participants and offers rapid confirmation of transactions.
By eliminating the need to produce empty blocks, Quorum ensures that system resources are used only when necessary—improving both throughput and latency.
## Transaction Privacy Considerations
While native implementations of Quorum support **private transactions** (using privacy managers such as Tessera to encrypt sensitive data), **SettleMint does not enable private transaction support**. Our focus is on leveraging Quorum's efficient consensus and on-demand block production to create a secure and performant network environment. This model is particularly attractive for enterprises that prioritize governance, scalability, and resource efficiency over on-chain privacy.
## Key Benefits for Enterprise Deployments
* **Resource Efficiency:** Blocks are created only in response to transaction activity, leading to lower processing overhead.
* **Scalability:** Flexible consensus options (Raft and IBFT) allow enterprises to tailor the network to their operational needs.
* **Security and Governance:** As a permissioned platform, Quorum ensures that only authorized participants can transact, supporting higher levels of security and regulatory compliance.
* **Ethereum Compatibility:** Quorum maintains compatibility with Ethereum, enabling the use of existing smart contract tools and developer resources.
For further details, please refer to the official\
[Quorum documentation](https://consensys.net/docs/quorum/).
file: ./content/docs/use-case-guides/template-libraries/introduction.mdx
meta: {
"title": "Introduction",
"description": "Overview of EVM and Fabric contract templates"
}
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
## Smart Contract & Chaincode Templates (EVM & Fabric)
SettleMint provides ready-to-use templates for:
* **EVM-compatible networks** using **Solidity** (e.g., Ethereum, Besu, Quorum,
  Polygon, Avalanche)
* **Hyperledger Fabric** networks using **GoLang** and **TypeScript**
These templates are crafted for developers looking to **accelerate smart
contract or chaincode development**, ensure **security alignment**, and
streamline **integration with middleware, APIs, and off-chain systems**.
***
## What Are These Templates?
These are **starter libraries** built with industry-relevant design patterns and
reusable code components. Each one includes:
### EVM Templates (Solidity + TypeScript)
* Core Solidity contracts structured for modular deployment
* Built-in patterns like Ownable, role-based access control, and event logging
* TypeScript SDK bindings for contract interaction and deployment
### Fabric Templates (GoLang + TypeScript)
* GoLang-based chaincode modules for key-value data storage, access control, and
function invocation
* Transaction flow patterns using Fabric Contract API
* TypeScript client SDKs for ledger access, endorsement submission, and result
handling
Each template is intended to work as a **scaffold**, allowing developers to
focus on **domain logic**, not base plumbing.
***
## What Do These Templates Help With?
By using these templates, you will:
* Avoid writing redundant boilerplate for state reads/writes, ACLs, and access
layers
* Kick-start **full-stack blockchain workflows** with backend, smart contracts,
and integration layers
* Implement **clean separation of concerns** between business logic and system
orchestration
* Follow best practices for security, gas/resource efficiency, and observability
* Accelerate delivery of POCs, MVPs, or production-grade logic
***
SettleMint's smart contract templates serve as open-source, ready-to-use
foundations for blockchain application development, significantly accelerating
the deployment process. These templates enable users to quickly customize and
extend their blockchain applications, leveraging tested and community-enhanced
frameworks to reduce development time and accelerate market entry.
## Open-source smart contract templates under the MIT license
Benefit from the expertise of the blockchain community and trust in the
reliability of your smart contracts. These templates are vetted and used by
major enterprises and institutions, ensuring enhanced security and confidence in
your deployments.
## Template library
The programming languages for smart contracts differ depending on the protocol:
* For **EVM-compatible networks** (like Ethereum), smart contracts are written
in **Solidity**.
* For **Hyperledger Fabric**, smart contracts (also called chaincode) are
written in **TypeScript** or **Go**.
***
### Solidity contract templates
| Template | Description |
| ------------------------------------------------------------------------------------------- | ----------------------------------------- |
| [Empty](https://github.com/settlemint/solidity-empty) | A minimal smart contract in Solidity |
| [ERC20 Token](https://github.com/settlemint/solidity-token-erc20) | Standard ERC20 token implementation |
| [ERC20 with MetaTx](https://github.com/settlemint/solidity-token-erc20-metatx) | ERC20 token with meta-transaction support |
| [ERC20 with Crowdsale](https://github.com/settlemint/solidity-token-erc20-crowdsale) | ERC20 token with integrated crowdsale |
| [ERC1155 Token](https://github.com/settlemint/solidity-token-erc1155) | Multi-token standard (ERC1155) |
| [ERC721](https://github.com/settlemint/solidity-token-erc721) | Standard NFT token (ERC721) |
| [ERC721a](https://github.com/settlemint/solidity-token-erc721a) | Gas-optimized NFT (ERC721A) |
| [ERC721 Generative Art](https://github.com/settlemint/solidity-token-erc721-generative-art) | NFT with generative art logic |
| [Soulbound Token](https://github.com/settlemint/solidity-token-soulbound) | Non-transferable token |
| [Supply Chain](https://github.com/settlemint/solidity-supplychain) | Asset tracking across supply chain |
| [State Machine](https://github.com/settlemint/solidity-statemachine) | State transition logic |
| [Diamond Bond](https://github.com/settlemint/solidity-diamond-bond) | Bond issuance and tracking |
| [Attestation Service](https://github.com/settlemint/solidity-attestation-service) | Verifiable claim attestations |
***
### Chaincode templates (Hyperledger Fabric)
| Template | Description |
| ------------------------------------------------------------------------------------------- | ---------------------------------------- |
| [Empty (TypeScript)](https://github.com/settlemint/chaincode-typescript-empty) | Minimal TypeScript chaincode |
| [Empty with PDC (TypeScript)](https://github.com/settlemint/chaincode-typescript-empty-pdc) | Chaincode using private data collections |
| [Empty (Go)](https://github.com/settlemint/chaincode-go-empty) | Minimal Go chaincode |
***
## Important Notes Before Usage
These templates are reference implementations, not
ready-to-deploy production contracts or chaincode. Always customize and test
thoroughly before using in any live or sensitive environment.
**Key cautions:**
* **Security**: Templates should be audited for your use case and environment
specifics. Critical paths like funds transfer or permission control must be
reviewed manually and/or formally verified.
* **Customization Required**: The templates are generic. Specific business
logic, regulatory constraints, and contract-specific behaviors must be
implemented manually.
* **Resource Considerations**: Smart contract templates must be gas-efficient
(EVM), and Fabric templates must avoid long execution or response times due to
chaincode lifecycle constraints.
* **Environment Sensitivity**: Test on devnets (e.g., Ganache, local Fabric)
before deploying on public/mainnet infrastructure.
***
## Best Practices for Template Adoption
### 1. **Separate Core Logic and Permissions**
* Define role-based modifiers (Solidity) or access logic (GoLang) as reusable
layers
* Avoid embedded ACLs in core business functions to maintain clarity and
auditability
### 2. **Follow Transactional Boundaries**
* In Fabric: Avoid multiple read/write invocations inside a loop unless
atomicity is ensured
* In EVM: Minimize state changes per call to avoid unnecessary gas consumption
or complexity
### 3. **Structure Events Consistently**
* Use semantic, structured event naming (`ProfileCreated`, `AssetTransferred`)
* Emit key identifiers and indexes for off-chain sync and indexing systems
(e.g., The Graph, Kafka listeners)
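As one hedged illustration of this pattern, the snippet below subscribes to a semantically named event and hands it to an off-chain pipeline. It assumes ethers.js v6; the event signature, contract address, and RPC URL are placeholders, not values from any specific template.
```ts
import { ethers } from "ethers";

// Human-readable ABI fragment; indexed parameters let listeners and
// indexers filter efficiently.
const abi = [
  "event ProfileCreated(uint256 indexed profileId, address indexed owner, string metadataURI)",
];

const provider = new ethers.JsonRpcProvider("https://your-node-rpc-url");
const contract = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // replace with your contract address
  abi,
  provider
);

// Forward each event to your sync target (The Graph, Kafka, a database, ...).
contract.on("ProfileCreated", (profileId, owner, metadataURI) => {
  console.log(`ProfileCreated #${profileId} by ${owner}: ${metadataURI}`);
});
```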
### 4. **Leverage Strong Typing in TypeScript SDKs**
* Use typed interfaces for contract interaction and payload schemas
* Ensure runtime error handling for all transaction submission flows and network
responses
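A minimal sketch of what this can look like, with hypothetical payload and SDK names:
```ts
// Typed payload for a token transfer (illustrative field names).
interface TransferPayload {
  to: string;     // recipient address
  amount: bigint; // token amount in base units
}

type TxResult =
  | { ok: true; txHash: string }
  | { ok: false; error: string };

// Wrap any SDK's submit function so callers always get a typed result
// instead of an unhandled rejection.
async function submitTransfer(
  send: (p: TransferPayload) => Promise<string>,
  payload: TransferPayload
): Promise<TxResult> {
  try {
    return { ok: true, txHash: await send(payload) };
  } catch (err) {
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```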
### 5. **Use Version Control for Models**
* Store contract/chaincode hashes and deployment metadata
* Implement migration logic for versioned schema changes in smart contracts or
ledger records
### 6. **Align With CI/CD and Test Frameworks**
* Add unit and integration tests with frameworks like Mocha, Chai, Jest (for
TS), and Go test (for Fabric)
* Include basic gas profiling or peer endorsement simulations
* Use GitHub Actions, GitLab CI, or similar for continuous checks
### 7. **Adopt Secure Development Patterns**
* Avoid dynamic `delegatecall` or unbounded loops in Solidity
* Use Fabric's built-in MSP identity abstraction for managing user roles
* Sanitize and validate all inputs rigorously, both on-chain and at the API
level
***
## Languages & Tooling Used
| Template Layer | Language | Usage |
| --------------------- | ---------- | ----------------------------------- |
| Smart Contracts (EVM) | Solidity | Core business logic on-chain |
| Chaincode (Fabric) | GoLang | Permissioned chain logic |
| Integration Layer | TypeScript | Contract calls, event listeners, UI |
***
## What's Next?
You can now browse specific templates in:
* `EVM Contracts` (Solidity)
* `Fabric Chaincode` (GoLang + TypeScript)
The smart contract and chaincode libraries aim to help developers and teams:
* Move from zero to working logic quickly
* Avoid reinventing core state and ACL patterns
* Align with best practices across Solidity, GoLang, and TypeScript
* Establish a common starting point for internal or client-facing solutions
Use them as a **launchpad**, not an endpoint: always customize to your
environment, test with realistic data, and plan for maintainability and audit
readiness.
file: ./content/docs/application-kits/asset-tokenization/asset-classes/bond.mdx
meta: {
"title": "Bond",
"description": "Secure, Collateralized Fixed-Income Digital Asset"
}
Digital bonds represent traditional fixed-income securities securely on
blockchain, backed by real-world collateral. They combine blockchain
transparency with predictable returns, redemption at maturity, and automated
yield distribution. This Bond asset ensures secure issuance, precise maturity
management, comprehensive compliance capabilities, and investor-friendly
redemption processes tailored specifically for financial institutions.
Bond token is a tokenized representation of fixed-income securities, issued and
managed through blockchain infrastructure. Designed for institutional-grade use,
bond tokens offer a secure, transparent, and fully automated experience for
fixed-income product management. Each bond token is collateralized by underlying
assets, ensuring capital preservation and trust among investors. The system
enables seamless automation of the entire bond lifecycle, from issuance and
distribution to interest payments and final redemption, using smart contracts to
eliminate manual intervention and reduce operational overhead.
## Why tokenized bonds?
Tokenized bonds combine the reliability of traditional debt instruments with the
operational advantages of distributed ledger technology. Institutions adopting
this model benefit from significant cost savings, improved processing speeds,
and enhanced transparency. The use of programmable logic in smart contracts
makes it possible to enforce bond terms, automate interest distribution, and
maintain real-time audit trails. Tokenized bonds also provide a single source of
truth for all stakeholders, including regulators, thereby simplifying compliance
and reporting obligations.
## Institutional use cases
Institutions can now automate the process of bond issuance, reducing reliance on
manual paperwork and underwriting services. The system supports workflows where
investors are whitelisted, subscription processes are handled digitally, and
tokens representing the bond are issued directly to participants. All essential
bond parameters, such as coupon rates and maturity dates, are embedded into the
token itself. This allows for automated execution of interest payouts and
redemption of principal without requiring manual intervention. In live
implementations, this automation has led to dramatic efficiency gains, such as
reducing coupon processing time by over 90%.
When it comes to secondary market transactions, tokenized bonds introduce
real-time settlement through atomic delivery-versus-payment mechanisms. This
means trades are either completed entirely or not at all, effectively
eliminating counterparty risk. Unlike traditional settlement cycles which can
take two or more days, these digital bonds enable immediate clearance. This
reduction in settlement time translates into lower collateral requirements,
improved liquidity, and a notable decrease in systemic risk. Institutions such
as HSBC have reported significant operational cost reductions and improved
capital efficiency through tokenized bond pilots.
Tokenization also enhances market accessibility and liquidity by enabling
fractional ownership. Bonds can be broken into smaller units, allowing a broader
base of investors to participate in markets that were previously limited to
large financial entities. These digital bonds are tradable around the clock on
regulated exchanges, creating a continuous and liquid market. By expanding the
investor base and improving ease of entry and exit, tokenized bonds facilitate
stronger price discovery and greater market dynamism.
From a compliance and regulatory perspective, tokenized bonds offer real-time
visibility into transactions and ownership. Every transfer and holding is
recorded immutably on the blockchain, eliminating the need for manual
reconciliation and reducing the potential for error or fraud. Regulatory rules
such as whitelisting, ownership limits, and anti-money laundering checks can be
enforced directly through smart contracts. This level of automation not only
simplifies internal compliance processes but also supports transparent reporting
for regulators and auditors.
In addition, tokenized bonds enable financial innovation through
programmability. Institutions can design bonds with dynamic features, for
instance, automatically adjusting interest rates based on a reference index or
market condition. This programmability can also be extended to structured
products and composite offerings like digital bond ETFs, where multiple
tokenized bonds are bundled into a single instrument. These programmable bonds
can integrate with treasury management systems, participate in lending or repo
operations, and enable instant transfer of ownership, thereby increasing the
velocity of money and utility of assets across financial operations.
## Contract capabilities
The bond contract includes a built-in collateralization mechanism, ensuring that
each issued token maintains real-world value backing. Bonds follow a defined
lifecycle with clear maturity terms. Upon reaching maturity, token holders are
able to redeem their holdings for the equivalent value of the underlying asset.
This predictability provides investors with confidence in liquidity and return,
while also enabling institutions to manage asset flows effectively.
Interest or yield distributions are automated, allowing institutions to fulfill
coupon payments without manual calculations or third-party processing. These
payments are executed at predefined intervals and are fully recorded on the
ledger, supporting transparency and simplifying post-distribution
reconciliation. Historical balance tracking is integrated into the contract to
ensure accuracy in payment calculations and audit readiness.
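As a rough illustration of the kind of coupon arithmetic this automation performs (not the contract's actual code), assuming a face value in base units and an annual rate in basis points:
```ts
// Fixed-rate coupon per payment period (illustrative only).
function couponPayment(
  faceValue: bigint,      // bond face value in base units
  couponRateBps: bigint,  // annual rate in basis points, e.g. 500n = 5%
  paymentsPerYear: bigint // e.g. 2n for semi-annual coupons
): bigint {
  return (faceValue * couponRateBps) / (10_000n * paymentsPerYear);
}

// A 1,000,000-unit bond paying 5% per year in semi-annual coupons:
console.log(couponPayment(1_000_000n, 500n, 2n)); // 25000n per coupon
```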
To maintain robust access controls, the system defines specific roles. Supply
management is handled by designated administrators who are responsible for
minting and controlling token supply. User permissions, including the ability to
block or unblock accounts for regulatory compliance purposes, are managed
through a dedicated role. A higher-level administrative role provides authority
to pause token operations in critical situations, such as regulatory
interventions or security incidents.
The system also incorporates advanced compliance mechanisms, such as transaction
pausing and user blocklisting. These features support AML/KYC enforcement,
protect against unauthorized activity, and provide regulatory bodies with the
ability to enforce controls directly on the digital asset infrastructure. All
actions within the contract, from yield payments to administrative changes, are
logged in detail to support thorough auditing and reporting.
To improve accessibility for enterprise users, the contract supports
meta-transactions. This allows third parties to sponsor gas fees on behalf of
institutional stakeholders, removing friction from the user experience and
enabling smoother onboarding of participants who may not manage digital wallets
directly.
Tokenized bonds support a wide range of enterprise applications. Banks and
financial institutions can issue bonds programmatically, reducing costs and
improving speed-to-market. Treasury departments can manage fixed-income
portfolios digitally with enhanced visibility and automation. These bonds can
also be used as high-quality collateral in lending or repo markets, with
real-time ownership transfer enabling faster settlement and liquidity access.
Additionally, programmable bonds can support dynamic financial instruments and
serve as foundational components in more complex investment structures.
file: ./content/docs/application-kits/asset-tokenization/asset-classes/cryptocurrency.mdx
meta: {
"title": "Cryptocurrency",
"description": "Reliable and Customizable Digital Asset"
}
CryptoCurrency assets are customizable digital instruments offering controlled minting, robust security, and full regulatory compliance, ideal for institutional use. They streamline financial operations through programmable transactions, enhanced transparency, and integrated meta-transactions for improved usability. Key features include role-based access management, secure issuance, and easy integration into institutional payment processes.
Cryptocurrency token is a secure and programmable digital token designed to support a wide range of financial and enterprise use cases. This asset is suitable for institutions seeking to issue or manage digital currencies with built-in control, compliance, and automation. It includes role-based access, minting and supply oversight, programmable behaviors, and the ability to interact with other smart contracts. It simplifies digital asset operations while ensuring regulatory alignment and security.
## Why use digital tokens?
Digital tokens like cryptocurrency enable institutions to improve the efficiency, traceability, and speed of financial operations. By replacing legacy systems with programmable digital assets, enterprises can reduce transaction costs, enhance auditability, and unlock new models of financial interaction. These tokens allow for real-time settlement, automation of fund flows, and better control over how assets are issued, distributed, or retired.
## Institutional use cases
Banks and financial institutions can utilize customizable digital currencies to support a variety of high-value functions. In cross-border remittances, for example, cryptocurrency can be used to bypass slow and costly correspondent banking networks. Instead, a bank can convert fiat into a digital currency and transfer it instantly to a partner abroad, where it is redeemed into local currency. These crypto rails dramatically lower the cost and time required for international transfers, improving accessibility for migrant workers and reducing barriers to financial inclusion.
Large organizations can also integrate digital tokens into treasury operations. Tokens can be programmed to automate supplier payments, escrow arrangements, or intra-company liquidity transfers. Funds can be released automatically when predefined conditions are met, reducing the need for manual intervention. This transforms cash management into a responsive, real-time process, improving security and reducing operational risk. Corporate trials have demonstrated how programmable tokens eliminate friction caused by bank cut-off times and manual reconciliations.
For financial inclusion, banks or NGOs may issue digital currencies that can be used by unbanked populations via mobile applications. These tokens can be used for micro-payments and micro-savings, enabling access to financial services without a traditional bank account. With low transaction fees and no minimum balance requirements, digital tokens can bring underserved users into the financial system while still allowing banks to enforce rules for usage, compliance, and traceability.
A digital token can also act as a bridge asset in foreign exchange settlements. Bank A may convert one currency into a token and transfer it to Bank B, which redeems it into another currency. This simplifies FX operations by eliminating intermediaries and speeding up settlement, while also reducing exchange risk. These digital assets can be enhanced with programmable features, such as hedging or time-bound expiration, to further control and secure cross-currency transactions.
Banks exploring decentralized finance (DeFi) opportunities can use regulated, institutionally controlled tokens to participate in yield-generating activities. For instance, a digital token can be used in pre-approved lending pools or liquidity platforms to earn returns on idle capital. Access can be strictly controlled through whitelisting, ensuring only vetted contracts are allowed to interact with the asset. This enables secure exposure to DeFi mechanisms while maintaining institutional compliance and control.
## Token capabilities
The cryptocurrency contract includes robust mechanisms for managing token supply and access control. Authorized administrators can mint or burn tokens in accordance with operational policies or regulatory requirements. This enables institutions to maintain precise oversight over the lifecycle of the digital asset, including issuance, redemption, and adjustments to supply.
Comprehensive access roles are built into the contract. A supply management role is responsible for minting tokens and ensuring issuance policies are followed. A higher-level admin role oversees system governance, allowing emergency actions, security interventions, or pausing of transactions if needed.
To further support regulatory compliance, the system includes strict access controls ensuring that only authorized personnel can execute sensitive functions. Operational transparency is supported through full on-chain logging, which allows for complete auditability of asset movements and governance actions.
For better accessibility, the token supports meta-transactions. This feature allows third parties to relay transactions on behalf of users, effectively removing the need for end-users to pay gas fees. This improves user experience, especially for enterprise clients who may operate without managing wallet infrastructure directly.
Tokens can be fully customized during deployment. Parameters such as token name, decimal precision, and initial supply are all configurable. This ensures that the token can be adapted to fit diverse enterprise or institutional needs, whether it's used for payments, settlement, loyalty programs, or cross-border value transfers.
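A hypothetical sketch of those deploy-time parameters follows; the field names are illustrative and will differ from the kit's actual tooling.
```ts
// Illustrative deploy-time configuration for a token.
interface TokenConfig {
  name: string;          // human-readable token name
  symbol: string;        // ticker symbol
  decimals: number;      // decimal precision (18 is the common EVM convention)
  initialSupply: bigint; // minted at deployment, in base units
}

const config: TokenConfig = {
  name: "Example Settlement Token",
  symbol: "EST",
  decimals: 18,
  initialSupply: 1_000_000n * 10n ** 18n, // one million whole tokens
};
```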
## Enterprise applications
Enterprises can leverage cryptocurrency for a wide range of use cases. Institutional asset managers can track and manage funds with real-time visibility. Corporations can automate supplier payments and treasury operations. Digital loyalty programs can be issued and managed securely on-chain. In global supply chains, these tokens can streamline settlement processes, creating end-to-end transparency and efficiency across multiple jurisdictions.
Cryptocurrency enables enterprises and financial institutions to securely manage digital assets in a programmable, compliant, and efficient way. By combining automation, security, and flexibility, it serves as a foundation for financial innovation and operational modernization across industries.
file: ./content/docs/application-kits/asset-tokenization/asset-classes/equity.mdx
meta: {
"title": "Equity",
"description": "Institutional-Grade Digital Equity Management"
}
Equity assets digitize traditional equity securities, combining advanced blockchain capabilities with robust governance and compliance tools. Designed for institutional investors, they offer secure equity issuance, shareholder voting rights, detailed access controls, and regulatory compliance mechanisms. This asset simplifies equity management, enhances transparency, and ensures secure investor participation.
Equity token is a secure and programmable digital token designed to represent and manage ownership in companies, funds, and investment vehicles. Built for use by banks, corporations, and financial institutions, it enables the issuance, distribution, and governance of equity digitally. The system supports features such as shareholder voting, access control, compliance enforcement, and seamless interaction with digital capital markets. By leveraging blockchain infrastructure, it brings transparency, automation, and real-time visibility to equity management, while remaining compliant with regulatory frameworks.
## Why digital equity tokens?
Digital equity tokens modernize traditional equity systems by digitizing share ownership and governance. They eliminate manual workflows, reduce administrative overhead, and bring instant transparency to shareholder records and corporate actions. Institutions can use these tokens to issue multiple classes of equity, automate cap table management, and execute dividends, stock splits, or buybacks directly through smart contracts. This makes equity management more scalable, compliant, and investor-friendly.
## Institutional use cases
Tokenized equity allows financial institutions to digitize private equity shares, limited partnership interests, or real assets. These tokens can represent fractional ownership, enabling smaller investors to participate in traditionally illiquid asset classes. For example, a private equity fund could tokenize its shares, allowing smaller institutions or accredited investors to invest and trade portions of their holdings. This increases liquidity and enables peer-to-peer trading of shares that would otherwise take years to exit. Investors benefit from earlier rebalancing opportunities, while asset owners gain access to broader capital pools and improved valuation potential.
Governance is significantly improved through on-chain voting. Each token can carry proportional voting rights, and shareholders can participate in ballots remotely using blockchain-based interfaces. Votes are recorded immutably and instantly tallied by smart contracts, ensuring transparency and reducing fraud. This replaces the legacy model of paper proxies and manual vote counting, lowering costs and enabling cross-border shareholder participation. For custodians and banks managing shareholder services, on-chain voting simplifies operations and builds investor confidence in corporate decision-making.
With tokenized equity, cap table management becomes real-time and fully transparent. Each transfer of tokens updates the ownership register automatically. Companies always have an up-to-date view of who holds their shares, and certain transactions can be restricted to ensure regulatory compliance, such as blocking unaccredited investors or enforcing jurisdictional rules. Corporate actions like stock splits or dividend distributions can be executed directly through smart contracts, eliminating intermediaries and manual recordkeeping. This reduces legal complexity, enhances due diligence accuracy, and simplifies audits.
Tokenized equity also improves overall transparency and governance practices. Key corporate data, such as financial disclosures or shareholder updates, can be shared directly through the token interface. If the equity is used in a consortium or private market, regulators may even be granted observer access to monitor token movements in real time. Tokens can be coded to enforce ownership thresholds, disclosure rules, or trading restrictions, embedding compliance into the asset itself. This dramatically reduces the effort required for regulatory reporting and enhances the integrity of capital markets.
Financial institutions can also use tokenized equity to design innovative investment products. For example, a bank might create a tokenized index fund or exchange-traded product that includes shares of multiple private or public companies. Each token would represent a small share of a diversified portfolio and could be traded on secondary markets. Revenue-sharing rights, performance-based rewards, or conditional access rights can all be programmed into the token. Distribution and management of these products are streamlined, reducing cost and complexity while opening up new revenue channels and margin opportunities for issuers.
## Token capabilities
The equity token system supports advanced functionality tailored for institutional equity issuance and management. Tokens are issued and distributed through a permissioned process, ensuring that only authorized parties can mint or transfer ownership. Supply management roles can control how shares are created and distributed, while user management roles handle permissions, including account blocking or whitelisting to ensure compliance with securities regulations.
On-chain governance capabilities are embedded to support digital voting and transparent decision-making. Each tokenholder’s voting rights are tracked and enforced, allowing secure and immediate execution of shareholder resolutions. These governance features simplify board approvals, annual general meetings, and corporate referenda, reducing administrative effort and enabling broader stakeholder engagement.
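For intuition, here is a minimal sketch of token-weighted tallying; the names are hypothetical, and production systems typically snapshot balances per proposal rather than reading live balances.
```ts
type Ballot = { holder: string; support: boolean };

// Voting power is proportional to token holdings.
function tally(
  ballots: Ballot[],
  balanceOf: (holder: string) => bigint
): { forVotes: bigint; againstVotes: bigint } {
  let forVotes = 0n;
  let againstVotes = 0n;
  for (const b of ballots) {
    const weight = balanceOf(b.holder);
    if (b.support) forVotes += weight;
    else againstVotes += weight;
  }
  return { forVotes, againstVotes };
}
```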
Administrative controls are built into the token infrastructure, allowing designated operators to pause transfers or take emergency action during audits, legal interventions, or market disruptions. Compliance-related features such as blocklisting are integrated to prevent unauthorized access or use of the equity token by flagged addresses. This helps maintain the integrity of the ownership structure and supports regulatory adherence.
Meta-transaction support is also included, allowing institutions to offer gasless transactions to investors. Transactions can be relayed through approved third parties, reducing the need for investors to manage blockchain wallets or cover transaction fees directly. This improves onboarding and ease of use, particularly in regulated environments or investor portals where user experience must meet enterprise-grade standards.
In addition, the system allows customization of equity types. Tokens can represent different share classes, such as common stock, preferred equity, or specialized investment units. Each class can be configured with unique rules, rights, or constraints, offering flexibility for complex capital structures.
## Enterprise applications
Equity tokens support a wide range of enterprise scenarios. Companies can issue digital shares to investors, update cap tables in real time, and execute governance actions securely online. Institutions can manage investor communications, enforce compliance rules, and distribute profits or voting rights through programmable logic. In private markets, these tokens can improve transparency and liquidity, while in public offerings they can streamline regulatory filings and settlement operations. From startups managing early-stage funding rounds to large institutions launching tokenized ETFs, the equity token system supports the full spectrum of digital capital management.
Equity provides institutions with a modern, compliant, and efficient approach to managing equity digitally. By integrating programmable governance, automated compliance, and secure ownership tracking, it significantly enhances transparency, reduces administrative costs, and creates new opportunities for financial innovation. Through tokenized equity, organizations can transform how they issue, manage, and trade shares in a secure and future-ready framework.
file: ./content/docs/application-kits/asset-tokenization/asset-classes/fund.mdx
meta: {
"title": "Fund",
"description": "Institutional-Grade Digital Fund Management"
}
Fund assets digitize investment fund shares, offering automated management fee
collection, investor governance, and comprehensive regulatory compliance. Ideal
for financial institutions, they streamline administrative processes, ensure
transparent fee management, and enable secure investor voting. The contract
provides strong security, customizable fund attributes, and enhanced operational
efficiency.
Fund token is a programmable digital token designed to represent ownership in
investment vehicles such as mutual funds, hedge funds, or alternative investment
portfolios. Built for financial institutions and fund administrators, it enables
seamless digitization of fund operations, including issuance, investor
management, fee calculation, governance, and compliance. By leveraging
blockchain infrastructure, Fund enhances transparency, security, and operational
efficiency across the lifecycle of a fund.
## Why digital fund tokens?
Digital fund tokens modernize how funds are structured and managed by
transforming traditional fund shares into programmable digital assets. This
transition enables real-time administration, automated fee collection, and
greater auditability, while enforcing regulatory standards at the protocol
level. Institutions gain clear visibility into transactions, streamlined
reporting, and more responsive governance processes. Investors benefit from
timely distributions, improved access to information, and, in some cases,
enhanced liquidity.
## Institutional use cases
Tokenizing a fund means representing investor ownership as digital tokens on a
blockchain. This allows distributions of profits or dividends to be automated
via smart contracts. When the fund earns income from interest, rent, or asset
exits, the smart contract can instantly calculate each investor’s share and
distribute payouts or reinvest funds based on predefined settings. This removes
delays typically associated with manual processing and ensures transparent,
timely delivery of returns. The result is reduced administrative burden,
increased trust, and faster capital rotation for investors.
A tokenized fund can also integrate investor governance by enabling token-based
voting. Investors can vote on important matters such as extending the fund’s
term or approving significant asset acquisitions. These votes are conducted
securely and transparently on-chain, ensuring that all outcomes are verifiable
and instantly recorded. This democratized approach increases investor engagement
and is particularly valuable for alternative investment funds, where governance
rights often play a key role in attracting capital. Global participation is made
easier, as physical meetings are no longer required for decision-making.
Transparency is further enhanced through real-time auditability and reporting.
Every fund activity, contributions, redemptions, valuations, trades, is recorded
immutably on the blockchain. Auditors and regulators can be granted permissioned
access to verify operations or Net Asset Value (NAV) calculations in near real
time. This constant audit trail reduces the need for reconciliation, prevents
manipulation or hidden activity, and assures investors that fund operations
align with investment mandates and restrictions.
Compliance and administration are also significantly streamlined. Investor
onboarding workflows, including KYC and AML checks, can be integrated such that
only verified investors are permitted to hold or transfer fund tokens. Transfers
can automatically trigger compliance validation, ensuring that the fund remains
within regulatory boundaries. Smart contracts can also automate capital calls,
fee calculations, and investor notifications. These features reduce operational
friction, allowing managers to accept more investors, even with smaller
contributions, without increasing administrative complexity.
Fund tokens are also designed to meet institutional security and custody
standards. Using blockchain infrastructure ensures distributed recordkeeping and
eliminates a single point of failure. Ownership records are cryptographically
secured and recoverable in the event of a system issue. These tokens can be held
in secure custodial wallets, protected through multi-signature or MPC protocols,
similar to how traditional securities are stored. Institutions benefit from
instant settlement of redemptions, precise ownership granularity, and the global
accessibility of blockchain-based assets.
## Token capabilities
The Fund token system includes automated mechanisms for fee management.
Management and performance fees are calculated in real time based on time
elapsed and assets under management. This provides predictable, transparent, and
consistent administration of fund expenses without manual involvement. All
calculations and distributions are logged on-chain, supporting full
auditability.
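To make the accrual model concrete, here is a minimal Solidity sketch, assuming a flat annual management fee in basis points accrued linearly against a reported AUM figure. The contract and variable names (`FundFeeAccrual`, `annualFeeBps`) are illustrative, not part of the platform's Fund contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Illustrative sketch of time-based management fee accrual.
/// Assumes a flat annual fee in basis points against reported AUM;
/// names and structure are hypothetical.
contract FundFeeAccrual {
    uint256 public constant BASIS_POINTS = 10_000;
    uint256 public immutable annualFeeBps; // e.g. 200 = 2% per year
    uint256 public lastAccrual;            // timestamp of last fee collection
    uint256 public assetsUnderManagement;  // reported AUM, in payment-token units

    constructor(uint256 _annualFeeBps, uint256 _initialAum) {
        annualFeeBps = _annualFeeBps;
        assetsUnderManagement = _initialAum;
        lastAccrual = block.timestamp;
    }

    /// Fee owed since the last accrual, pro-rated by elapsed time.
    function accruedFee() public view returns (uint256) {
        uint256 elapsed = block.timestamp - lastAccrual;
        return (assetsUnderManagement * annualFeeBps * elapsed)
            / (BASIS_POINTS * 365 days);
    }
}
```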
Voting and governance features are embedded to support secure investor
engagement. Token holders can participate in proposals, and their voting rights
are tracked automatically. The system tallies votes transparently, ensuring
accurate and trustworthy governance outcomes. These capabilities enable remote
participation in key decisions while reducing the cost and complexity of
corporate governance procedures.
Role-based access control provides operational flexibility and security. Fund
administrators can control token supply, process investor onboarding, or block
users when needed for compliance. Key roles are defined to separate
responsibilities such as supply management, user operations, and administrative
oversight. During audits or emergencies, transfers can be paused, preserving the
integrity of fund operations.
Compliance is further enforced through programmable blocklist functionality,
which allows the restriction of certain users or jurisdictions from
participating in fund transactions. All critical actions are logged for
reporting purposes, and compliance rules can be configured to reflect applicable
laws and regulatory frameworks.
The token system supports meta-transactions, allowing third parties to sponsor
transaction costs on behalf of investors. This feature improves accessibility,
especially for institutional investors or platforms where end users should not
be burdened with gas or network fees.
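One common way to implement sponsored transactions is the ERC-2771 pattern. The sketch below assumes OpenZeppelin's `ERC2771Context` helper: a relayer registered as the trusted forwarder pays gas, while `_msgSender()` still resolves to the original investor. Contract and function names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/metatx/ERC2771Context.sol";

/// Sketch: the trusted forwarder (a relayer) pays gas, while the contract
/// still attributes the action to the original investor via _msgSender().
contract SponsoredFundActions is ERC2771Context {
    mapping(address => uint256) public subscriptions;

    constructor(address trustedForwarder) ERC2771Context(trustedForwarder) {}

    function subscribe(uint256 amount) external {
        // _msgSender() resolves to the investor, not the gas-paying relayer.
        subscriptions[_msgSender()] += amount;
    }
}
```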
At deployment, fund managers can configure token attributes to reflect the
structure of the fund, including fund types (e.g., hedge fund, mutual fund) and
strategy categories (e.g., long/short equity, private debt). This ensures that
the digital representation of the fund aligns with its real-world objectives and
reporting requirements.
## Enterprise applications
Institutions can use fund tokens to manage digital versions of pooled investment
vehicles. These include mutual funds, private equity funds, venture funds, and
real estate investment trusts. Smart contracts simplify the administration of
shareholder records, distributions, voting, fee processing, and compliance
workflows. Fund managers can scale operations by accepting smaller investors
globally without increasing operational risk or administrative effort.
Regulators benefit from continuous oversight, while investors gain faster access
to data and, where peer trading of tokens is enabled, improved liquidity.
The Fund token enables institutional-grade digital fund management by automating core
operational processes, enhancing transparency, and embedding compliance into the
asset layer. With programmable logic, role-based controls, and secure governance
tools, it helps institutions modernize fund administration, reduce costs, and
improve investor confidence. This represents a foundational shift in how
investment vehicles can be built, managed, and scaled in the digital era.
file: ./content/docs/application-kits/asset-tokenization/asset-classes/stablecoin.mdx
meta: {
"title": "Stablecoin",
"description": "A Secure and Collateralized Digital Currency"
}
Stablecoins are digital currencies designed to maintain a stable value by being
backed by real-world assets or reserves. They offer the advantages of digital
assets, such as speed, transparency, and programmability, while avoiding the
price volatility typically associated with cryptocurrencies. This stablecoin
contract ensures every token issued is fully collateralized, providing
institutions with secure, auditable, and reliable digital money management. Key
features include collateral-backed issuance, comprehensive role-based controls,
robust pause mechanisms, and regulatory compliance capabilities.
The stablecoin token is a secure and transparent digital token designed to maintain
a consistent value by being fully backed by real-world assets or fiat currency
reserves. It enables banks and financial institutions to manage digital money
with the trust and compliance expected in regulated environments. The stablecoin
framework combines the speed and programmability of blockchain technology with
the assurance of full collateralization, offering a powerful solution for
modernizing financial operations. Built-in features support collateral
management, transaction control, regulatory enforcement, and role-based
administration.
## Why stablecoins?
Stablecoins offer a reliable way to bridge traditional finance with digital
infrastructure. Unlike cryptocurrencies that experience significant price
fluctuations, stablecoins are designed to hold a steady value. This makes them
ideal for institutions that require predictability and compliance in their
financial operations. With stablecoins, banks can perform real-time payments,
reduce transaction processing times, and remove intermediaries from complex
financial workflows. These benefits lead to faster settlements, lower costs, and
streamlined international remittance and treasury management systems.
## Institutional use cases
Banks and financial institutions are increasingly adopting stablecoins for
real-time interbank settlements. Instead of relying on legacy systems like
SWIFT, institutions can transfer funds on-chain instantly, any time of day.
Settlement time is reduced from days to seconds, improving liquidity and
minimizing operational risk. In cross-border payments, stablecoins backed by
fiat reserves can serve as bridge currencies, achieving finality quickly and
with significantly lower counterparty exposure.
Stablecoins also support institutional payment networks where multiple banks
form a consortium to process large-scale financial transactions. These networks
can be established on permissioned blockchains where each participant is fully
verified through KYC procedures. Transactions such as loan syndications or trade
finance settlements can be executed securely within seconds. The use of a
stablecoin backed 1:1 by fiat reserves ensures that all transactions remain
stable and compliant with regulatory frameworks.
From a compliance and auditability standpoint, stablecoin transactions are fully
traceable. Each transfer is recorded immutably on the blockchain, providing a
verifiable audit trail for regulators and compliance teams. Financial
institutions can monitor transactions in real time to ensure they align with
anti-money laundering (AML) and know-your-customer (KYC) requirements. This
visibility simplifies internal reporting and significantly reduces
reconciliation time, enabling better oversight of fund flows.
Stablecoins also enable programmable payment solutions. Smart contracts can be
used to automate conditional payments, such as releasing funds from escrow only
after the delivery of goods. This capability introduces efficiency and security
into complex payment scenarios. Corporate actions like interest payments,
milestone-based disbursements, or liquidity transfers can all be executed
automatically on-chain. With programmable logic embedded into each token,
stablecoins reduce manual processing and administrative overhead while improving
transactional accuracy.
## Token capabilities
The stablecoin is issued through a fully collateralized mechanism. Each token is
backed by underlying reserves, ensuring that its value remains stable and
trustworthy. Institutions can conduct regular collateral reporting and publish
proof of reserves to maintain transparency and market confidence. This approach
enables the token to function as a secure medium of exchange in both domestic
and cross-border contexts.
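As a rough illustration of how collateral-backed issuance and role gating can be enforced on-chain, the sketch below (built on OpenZeppelin's ERC20 and AccessControl, with illustrative names) refuses to mint beyond the reserve figure attested by the supply manager.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import "@openzeppelin/contracts/access/AccessControl.sol";

/// Sketch of collateral-gated issuance: supply can never exceed the
/// reserve figure attested by the supply management role. Illustrative only.
contract CollateralizedStablecoin is ERC20, AccessControl {
    bytes32 public constant SUPPLY_ROLE = keccak256("SUPPLY_ROLE");
    uint256 public reportedCollateral; // attested off-chain reserves

    constructor() ERC20("Example Stablecoin", "EUSD") {
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender);
    }

    /// The supply manager updates the attested reserve figure.
    function updateCollateral(uint256 newAmount) external onlyRole(SUPPLY_ROLE) {
        require(newAmount >= totalSupply(), "collateral below supply");
        reportedCollateral = newAmount;
    }

    /// Minting is capped at the attested collateral.
    function mint(address to, uint256 amount) external onlyRole(SUPPLY_ROLE) {
        require(totalSupply() + amount <= reportedCollateral, "insufficient collateral");
        _mint(to, amount);
    }
}
```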
Role-based access control is integrated into the token architecture. A supply
management role handles the creation of new tokens and updates to the collateral
pool, ensuring proper governance over monetary supply. A user management role
manages investor interactions, including the ability to block or unblock
accounts in line with compliance policies. A dedicated administrative role
oversees operational controls and can pause token operations during audits or
regulatory reviews, enhancing risk response capabilities.
Security and compliance are central to the design. The system includes pause
functionality to halt all transfers in the event of a breach or compliance
requirement. A built-in blocklist prevents restricted users from accessing the
token, supporting strict adherence to AML and KYC regulations. These
mechanisms enforce institutional policies at the protocol level.
To improve accessibility, the token architecture supports custodial account
management, allowing institutions to manage user accounts securely.
Meta-transaction support enables third-party transaction relaying, so users can
interact with the system without directly paying network fees. This feature is
especially valuable for banks managing end-user transactions at scale or
integrating digital currencies into existing payment systems.
Error handling and event tracking are embedded to support operational oversight.
All major actions, such as token issuance, collateral updates, user
restrictions, and administrative interventions, are logged transparently.
Descriptive error messages reduce ambiguity, while the event log supports audits
and compliance reviews.
## Enterprise applications
Stablecoins are applicable across a wide range of financial functions. They can
be used to simplify treasury operations, enable faster cross-border payments,
support supply chain financing, or serve as foundational infrastructure for
central bank digital currency initiatives. Institutions can tokenize assets,
facilitate liquidity, and build digital financial products with embedded
compliance and real-time transparency. Whether operating in a domestic context
or enabling global financial interactions, stablecoins deliver stability,
control, and innovation simultaneously.
The stablecoin provides financial institutions and enterprises with a secure and
fully compliant digital currency solution. By combining full collateral backing,
strong regulatory features, and programmable transaction logic, it allows for
faster, safer, and more transparent financial operations. This modern digital
asset aligns with institutional standards for trust, auditability, and
performance, enabling banks and businesses to confidently embrace the future of
digital finance.
file: ./content/docs/application-kits/asset-tokenization/asset-classes/tokenized-deposits.mdx
meta: {
"title": "Tokenized deposits",
"description": "A Secure and Compliant Digital Deposits"
}
Deposits digitally represent traditional banking deposits, providing financial
institutions with enhanced security, real-time transparency, and rigorous
compliance capabilities. Key features include customizable allowlists, custodial
oversight, robust role-based management, and integrated meta-transactions,
simplifying institutional deposit administration.
Tokenized deposit tokens are digital representations of traditional bank
deposits, issued and managed within regulated banking environments. They offer
the trust and backing of conventional deposits while introducing the speed,
programmability, and transparency of blockchain technology. Tokenized deposits
allow banks to modernize core operations, enhance payment infrastructure, and
create customer-centric services, all while maintaining compliance with
financial regulations. This solution ensures deposits remain securely within the
banking system, while enabling 24/7 digital cash-like functionality.
## Why tokenized deposits?
Tokenized deposits modernize legacy banking processes by allowing real-time
transactions, streamlined interbank transfers, and programmable banking
products. They are issued on permissioned blockchains by regulated banks and
represent a direct claim on a customer’s deposit held on the bank’s balance
sheet. Unlike stablecoins that may be issued by private entities off-chain,
tokenized deposits retain their regulatory clarity, deposit insurance
eligibility, and balance sheet integrity. These digital instruments bring
efficiency, transparency, and interoperability to institutional and retail
banking services alike.
## Institutional use cases
Tokenized deposits enable instant interbank payments without reliance on
traditional settlement systems such as ACH or wire transfers. Using a shared
ledger, banks can transfer tokenized deposits 24/7, with immediate settlement
and finality. For example, Citi has demonstrated how tokenized deposits can
facilitate real-time cross-border liquidity transfers between branches, enabling
large-value payments to settle continuously, regardless of time zones or central
bank hours. This significantly improves liquidity management and reduces delays
in time-sensitive financial operations.
They also allow banks to build programmable and customer-friendly deposit
products. A deposit represented as a token can be programmed with instructions,
such as auto-paying bills, sweeping balances into savings or investment
accounts, or enforcing conditions like escrow releases or overdraft protection.
Smart contracts attached to tokenized deposits allow real-time interest
calculation and distribution without batch processing. This flexibility empowers
customers with greater control over their funds, while banks maintain compliance
and security in the background.
In interbank lending markets, tokenized deposits simplify short-term liquidity
management. Banks can issue, lend, and repay funds using smart contracts that
enforce repayment schedules, interest rates, and collateralization terms.
Settlement occurs instantly, eliminating the delays and risks associated with
legacy systems. Since tokenized deposits are considered digital cash, they serve
as high-quality collateral, improving confidence and accessibility in secured
lending. This improves agility in liquidity operations and reduces both
operational risk and cost.
Compliance and monitoring are automated at the protocol level. Each transfer can
carry embedded rules for KYC/AML checks, and only verified accounts may hold or
transfer tokens. Large transactions can trigger alerts or require additional
sign-offs automatically. The entire transaction history is recorded immutably
on-chain, which regulators and auditors can access in near real time. This
drastically simplifies audit preparation, reduces reconciliation effort, and
ensures full transparency of fund flows. Banks can demonstrate reserve backing,
transaction traceability, and compliance adherence with minimal overhead.
Tokenized deposits also support a broader customer-centric payment ecosystem.
Consumers could make instant peer-to-peer payments or retail purchases by
transferring tokenized deposits between wallets. This transaction occurs in
seconds, without delays seen in traditional interbank retail payment rails.
Banks can offer APIs to fintech apps and connected devices, enabling use cases
like IoT-triggered payments, automated subscriptions, or digital commerce
experiences. Unlike crypto wallets, these digital deposits remain fully
regulated and covered by deposit insurance, preserving the safety of customer
funds. Banks benefit by offering modern digital money services while retaining
customer balances within a compliant framework.
## Token capabilities
Tokenized deposits are issued through a controlled and compliant mechanism.
Role-based permissions ensure that minting, transfers, and redemptions are only
executed by authorized users. The issuance of tokens reflects actual deposits
held, maintaining 1:1 reserve backing and aligning with regulatory obligations.
Access is managed using allowlists and account-level permissions. User
onboarding can be controlled through pre-approved lists, ensuring only
KYC-verified clients may hold or transact tokenized deposits. This built-in
compliance infrastructure aligns with AML regulations and supports real-time
monitoring of account activity. Administrators are also equipped with pause
capabilities to suspend operations during audits, regulatory reviews, or risk
events.
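A minimal sketch of the allowlist-and-pause mechanics might look as follows. It assumes OpenZeppelin v5's `_update` transfer hook and simplifies the operator model to a single bank-controlled address.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

/// Sketch of allowlist-gated deposit tokens. Only approved wallets may hold
/// or transfer; the operator can pause all movement during reviews.
contract TokenizedDeposit is ERC20 {
    address public immutable bankOperator;
    bool public paused;
    mapping(address => bool) public allowlist;

    constructor() ERC20("Tokenized Deposit", "TDEP") {
        bankOperator = msg.sender;
        allowlist[msg.sender] = true;
    }

    modifier onlyOperator() {
        require(msg.sender == bankOperator, "not operator");
        _;
    }

    function setAllowed(address account, bool allowed) external onlyOperator {
        allowlist[account] = allowed;
    }

    function setPaused(bool state) external onlyOperator {
        paused = state;
    }

    /// Every mint, burn, and transfer passes through this hook (OZ v5).
    function _update(address from, address to, uint256 value) internal override {
        require(!paused, "transfers paused");
        // Mint (from == 0) and burn (to == 0) skip the counterparty check.
        require(from == address(0) || allowlist[from], "sender not allowed");
        require(to == address(0) || allowlist[to], "recipient not allowed");
        super._update(from, to, value);
    }
}
```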
Custodial support is available for enterprise-grade asset management.
Institutions can hold and manage client tokens under secure custodianship with
oversight capabilities. Meta-transaction functionality is also supported,
allowing third-party relayers to pay for gas on behalf of users. This improves
accessibility and user experience, particularly in consumer banking environments
where end users may not manage wallets or gas fees directly.
## Enterprise applications
Tokenized deposits are well suited for a wide range of institutional and retail
applications. Banks can digitize large-value settlements between branches,
automate treasury operations, and offer programmable account services to
corporate clients. Fintechs can build new applications on top of tokenized
deposits through APIs, while regulators gain visibility into the flow of funds
in real time. Use cases span instant payroll, programmable savings, micro-loans,
escrow services, and integrated financial services across web and mobile
channels. The infrastructure can also support central bank digital currency
pilots or be integrated into existing national payment systems.
Tokenized deposits deliver a secure, compliant, and modern alternative to
traditional deposit instruments. They combine the trust and regulatory
safeguards of conventional bank deposits with the flexibility and efficiency of
blockchain. By supporting real-time transfers, programmability, auditability,
and seamless integration into banking systems, tokenized deposits empower
financial institutions to offer next-generation digital cash services with
confidence.
file: ./content/docs/application-kits/asset-tokenization/use-cases/bond-tokenization.mdx
meta: {
"title": "Bond tokenization",
"description": "A comprehensive guide to the tokenization of bond instruments using blockchain infrastructure for issuance, trading, and lifecycle management"
}
## Introduction to bond tokenization
Bond tokenization refers to the digital representation of a traditional bond
instrument on a blockchain ledger. This transformation involves issuing a
security token that reflects the same legal and financial obligations as a
conventional bond — including coupon payments, maturity terms, and credit
exposure — but with the added benefits of programmability, transparency, and
enhanced accessibility.
The traditional bond market, valued in the hundreds of trillions globally, is a
cornerstone of institutional finance. However, the infrastructure that underpins
it remains fragmented, slow, and reliant on manual processes and intermediaries.
Settlement cycles often span several days. Custody, clearing, and reconciliation
involve multiple layers of legal and operational friction. Secondary market
liquidity is restricted to established players with access to central
clearinghouses.
Tokenizing bonds on blockchain addresses these inefficiencies by introducing a
shared ledger for asset issuance, transfer, and lifecycle event tracking. This
unlocks instant settlement, fractional ownership, automated compliance, and
broader participation — including in emerging markets, retail channels, and
digital-native ecosystems.
This documentation presents a comprehensive view of how bond tokenization works,
what benefits it offers, what challenges it must overcome, and what
architectural components are required to deploy such systems in real-world
financial settings.
## Understanding traditional bonds and lifecycle components
A bond is a fixed-income instrument representing a loan made by an investor to a
borrower, typically a corporate, sovereign, or municipal entity. It includes key
features such as:
* **Principal**: The face value of the bond, repaid at maturity
* **Coupon**: The periodic interest payment made to the bondholder
* **Maturity**: The date on which the principal must be repaid
* **Yield**: The effective return based on market price and interest
* **Covenants**: Terms and conditions that protect the interests of the issuer
and investor
The bond lifecycle includes the following phases:
* **Origination**: Structuring, documentation, and pricing of the bond
* **Issuance**: Placement of the bond into the market via underwriters or direct
channels
* **Trading**: Secondary market exchange among investors
* **Settlement**: Transfer of ownership and payment processing
* **Servicing**: Coupon distribution, tax reporting, and investor communications
* **Maturity or Redemption**: Principal repayment and de-listing
In traditional systems, these processes are facilitated by banks, central
securities depositories (CSDs), custodians, registrars, and market makers. Each
introduces latency, operational cost, and settlement risk.
## Challenges in traditional bond markets
Despite their scale and importance, bond markets face persistent inefficiencies
and barriers to innovation:
* **Slow settlement**: T+2 or T+3 settlement introduces counterparty risk and
capital inefficiency
* **High costs**: Intermediary fees, legal structuring, and compliance overhead
reduce yield
* **Limited transparency**: Ownership records and pricing data are siloed across
platforms
* **Restricted access**: Retail and emerging market participants face high entry
barriers
* **Manual reconciliation**: Cross-institution recordkeeping relies on batch
processing
* **Illiquid instruments**: Smaller issuers struggle to find liquidity in opaque
over-the-counter (OTC) markets
These frictions hinder innovation in product design, delay funding for issuers,
and reduce net returns for investors. Market participants seek digitization
solutions that retain compliance and regulatory alignment while reducing
complexity.
## Tokenization as a digitization strategy
Tokenization is the process of converting real-world assets into digital tokens
recorded on a blockchain. In the context of bonds, this means representing bond
instruments as security tokens with embedded rights and obligations.
Tokenized bonds may follow standards such as ERC-1400 (Ethereum), RToken
(Reserve), or bespoke formats depending on the target network and regulatory
jurisdiction.
Core principles include:
* **Programmability**: Token logic encodes ownership, compliance, and transfer
conditions
* **Interoperability**: Tokens integrate with wallets, trading platforms, and
analytics tools
* **Auditability**: All transactions are timestamped and cryptographically
verifiable
* **Custody**: Token ownership is tracked via public-private key pairs, or
institutional custody services
* **Compliance enforcement**: Smart contracts can enforce KYC/AML, transfer
restrictions, and whitelist management
The goal is not to disrupt bond markets but to re-platform them — keeping legal
frameworks and investor protections intact while modernizing the infrastructure
layer.
## Benefits of tokenized bonds for market participants
Tokenizing bond instruments brings concrete advantages to issuers, investors,
regulators, and infrastructure providers.
### For issuers
* **Faster issuance**: Deploy contracts and mint tokens without long
underwriting timelines
* **Lower costs**: Reduce dependence on intermediaries and reduce documentation
burdens
* **Programmable compliance**: Automate eligibility checks and distribution
rules
* **Expanded investor base**: Reach digital-native and global retail segments
via wallet onboarding
* **Fractionalization**: Break large instruments into small denominations to
enhance participation
### For investors
* **Real-time settlement**: Reduce counterparty risk and free capital faster
* **Greater liquidity**: Enable trading in secondary markets via decentralized
platforms or exchanges
* **Improved transparency**: View ownership records, coupon schedules, and
issuer activity on-chain
* **Direct access**: Hold assets without relying on custodians or brokers
* **Portfolio composability**: Integrate tokenized bonds with DeFi products,
robo-advisors, or custom portfolios
### For regulators and auditors
* **Full traceability**: Access real-time data for compliance and supervision
* **Event tracking**: Monitor coupon issuance, investor eligibility, and
ownership changes
* **Automated reporting**: Generate proofs of compliance or breach with minimal
overhead
Tokenization aligns technological flexibility with the needs of highly regulated
capital markets.
## Key components of a tokenized bond architecture
To tokenize and operate bond instruments on blockchain, a complete architecture
includes the following layers:
### Asset origination and structuring
* Legal documentation that defines bond terms and reference to the digital twin
* Digital agreement frameworks (e.g., DLT-compatible ISDA templates)
* API integration with capital markets infrastructure for deal rooms or data
feeds
### Token issuance and registry
* Smart contracts that mint bond tokens with embedded logic
* On-chain registry of holders with whitelist management and caps
* Integration with wallet onboarding, investor ID verification, and AML/KYC
databases
### Trading and secondary markets
* Listing on compliant security token exchanges or bulletin boards
* Support for OTC transfers with regulatory checks
* Automated market maker (AMM) pools or bonding curves for long-tail liquidity
### Settlement and clearing
* Atomic delivery-vs-payment (DvP) via escrow smart contracts or digital
currency rails
* Integration with CBDC pilots or stablecoins for cross-border or fiat-linked
transfers
* Institutional APIs for reconciliation with traditional accounting systems
### Lifecycle servicing and investor relations
* Scheduled coupon distributions via programmable payouts
* On-chain governance (e.g., bondholder votes for covenant changes)
* Event notifications, tax handling, and digital document distribution
The full stack ensures that a bond token is not just a digital representation,
but a fully operable financial instrument across its entire lifecycle.
## Regulatory considerations for tokenized bonds
Bond issuance and trading are subject to strict legal frameworks globally. Any
tokenization approach must comply with existing securities laws, investor
protection rules, and reporting requirements. While blockchain introduces novel
capabilities, it does not override the underlying legal obligations associated
with a bond.
Key regulatory domains include:
* **Securities classification**: Tokenized bonds are considered securities in
most jurisdictions, requiring registration or exemptions
* **Investor eligibility**: Accredited, qualified, or institutional investor
restrictions must be enforced based on jurisdiction
* **Transfer restrictions**: Secondary transfers may require approval,
whitelisting, or holding period compliance
* **Disclosure obligations**: Offering memorandums, risk factors, and issuer
background must be accessible and auditable
* **Settlement systems**: If tokenized instruments are used within CSD-linked
infrastructure, interoperability must be maintained
To navigate these requirements, issuers and platform providers typically:
* Engage legal counsel early to align token design with regulatory norms
* Register with securities authorities or seek sandbox exemptions
* Use permissioned blockchains or identity-linked token systems
* Implement transfer restriction smart contracts
* Partner with licensed custodians, brokers, and exchanges
The regulatory landscape is evolving rapidly. Jurisdictions such as Switzerland,
Luxembourg, Singapore, and the UAE have published guidelines or created
frameworks for security tokens, making them leading venues for tokenized bond
pilots.
## Smart contract design patterns for bond tokens
Smart contracts are the backbone of tokenized bond infrastructure. They define
the behavior of the token, enforce compliance, and trigger financial flows such
as coupon payments.
Essential components of bond smart contracts include:
### Token metadata
* Name, symbol, and version of the token
* Reference to legal documentation (on-chain or off-chain)
* Link to underlying asset or purpose (e.g., green bond, real estate-backed,
etc.)
### Transfer control logic
* Whitelisting checks based on KYC/AML registries
* Jurisdictional and investor-type validation
* Lock-up periods or vesting schedules
* Pausing or blacklisting capabilities for regulatory intervention
### Payment scheduling
* Coupon payment frequency, day count convention, and calculation formula
* Automatic transfer of stablecoins or CBDC equivalents to token holders
* Pro-rata distribution logic based on ownership snapshot at record date
* Non-payment alert triggers and remedy periods
### Redemption and maturity
* Principal repayment logic at bond maturity
* Early redemption clauses (e.g., callable bonds)
* Default scenarios and automated voting for creditor action
### Event handling and notifications
* Logging of lifecycle events: issuance, transfer, payment, amendment
* Hooks for DAO voting, investor consent, or document distribution
* Integration with oracles for interest rate indexes or external benchmarks
Well-designed smart contracts abstract complexity for users and encode
traditional legal mechanisms into programmable conditions.
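To ground the payment-scheduling pattern, here is a minimal pull-based coupon distributor in Solidity. It reads live balances rather than a record-date snapshot, so it is a simplification: a production contract would snapshot holdings to prevent double claims across transfers. All names are illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

/// Sketch of a pull-based coupon distributor: the issuer funds a period,
/// holders claim pro-rata to their bond balance.
contract CouponDistributor {
    IERC20 public immutable bondToken;
    IERC20 public immutable paymentAsset;   // e.g. a stablecoin
    uint256 public couponPerToken;          // cumulative, scaled by 1e18
    mapping(address => uint256) public claimedPerToken;

    constructor(IERC20 _bondToken, IERC20 _paymentAsset) {
        bondToken = _bondToken;
        paymentAsset = _paymentAsset;
    }

    /// Issuer deposits a coupon pot; entitlement per bond token increases.
    function fundCoupon(uint256 amount) external {
        paymentAsset.transferFrom(msg.sender, address(this), amount);
        couponPerToken += (amount * 1e18) / bondToken.totalSupply();
    }

    /// Holder withdraws everything accrued since their last claim.
    function claim() external {
        uint256 owedPerToken = couponPerToken - claimedPerToken[msg.sender];
        claimedPerToken[msg.sender] = couponPerToken;
        uint256 payout = (bondToken.balanceOf(msg.sender) * owedPerToken) / 1e18;
        paymentAsset.transfer(msg.sender, payout);
    }
}
```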
## Settlement infrastructure and stablecoin integration
For tokenized bonds to be viable, the associated cash leg — coupon payments,
principal, and trade settlement — must be reliably managed. This can be achieved
via:
### Stablecoins
* Fiat-pegged digital currencies issued on blockchain (e.g., USDC, EURC, XSGD)
* Used for DvP settlement in secondary trades or scheduled payments
* Require custody, reserve audits, and network liquidity
### CBDCs (Central Bank Digital Currencies)
* State-issued digital currencies piloted or live in several jurisdictions
* Enable legally recognized settlement finality for tokenized financial
instruments
* Can be used for direct cash-leg fulfillment in issuance or secondary trades
### On-chain escrow mechanisms
* Smart contracts that hold buyer and seller assets during transfer
* Settle when both asset and payment conditions are satisfied
* Enable trustless bilateral trades in peer-to-peer models
### Interoperability with traditional systems
* Use of digital representations (e.g., wrapped tokens or mirrored assets)
* API integration with banks, custodians, or clearing systems
* Hybrid workflows that reconcile on-chain and off-chain asset transfers
Example settlement flow:
* An investor purchases tokenized bonds via a DEX integrated with a whitelist
module
* The investor sends USDC to a DvP contract
* The bond token is transferred upon receipt of funds
* A transaction receipt is emitted, and balances are updated in both wallets and
investor records
Settlement rails are a critical link between digital and fiat worlds,
determining the operational feasibility of tokenized debt markets.
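The escrowed DvP step can be sketched as a single atomic settlement function: if either leg fails, the entire transaction reverts. This assumes both parties have granted ERC-20 approvals to the contract beforehand and is an illustration rather than a production settlement engine.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

/// Sketch of atomic delivery-versus-payment: the bond leg and the cash leg
/// settle in one transaction or not at all.
contract DvPSettlement {
    function settle(
        IERC20 bondToken,
        IERC20 cashToken,
        address seller,
        address buyer,
        uint256 bondAmount,
        uint256 cashAmount
    ) external {
        // If either transfer fails, the whole transaction reverts,
        // so neither leg can settle without the other.
        require(bondToken.transferFrom(seller, buyer, bondAmount), "bond leg failed");
        require(cashToken.transferFrom(buyer, seller, cashAmount), "cash leg failed");
    }
}
```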
## Use case: Government bonds on blockchain
Sovereign and municipal bonds represent a significant portion of global debt
markets. Tokenizing these instruments opens the door to broader investor access,
enhanced transparency, and better cost management.
### Benefits
* Democratized access to public debt markets for retail and regional investors
* Streamlined tax reporting and regulatory compliance
* Transparent use-of-proceeds tracking (especially for green or infrastructure
bonds)
* Real-time analytics and open performance data
### Implementation
* A government issues a tokenized treasury bond denominated in local currency
stablecoins
* Investors access the offering via a mobile app with embedded KYC onboarding
* Coupons are paid directly to user wallets in programmable digital currency
* Token transfers are restricted to verified citizens or regulated exchanges
### Real-world examples
* **Thailand**: The Public Debt Management Office issued savings bonds via
blockchain infrastructure
* **Philippines**: Bonds.PH allowed retail users to purchase tokenized bonds
using mobile apps
* **El Salvador**: Explored Bitcoin-linked bonds with blockchain-based
distribution
Tokenization increases the inclusivity and efficiency of public financing while
reducing dependency on centralized issuance platforms.
## Use case: Corporate bonds and private placements
Corporate issuers, especially in mid-market or emerging sectors, often face
challenges accessing capital through public bond markets. Tokenization enables
private placements and alternative bond structures that are more flexible,
cost-effective, and digitally native.
### Features
* Customizable tenor, coupon, and redemption schedules
* Direct distribution to targeted investor pools via smart contract whitelists
* Reduced time-to-market and documentation cost
* Secondary liquidity via bulletin boards or DeFi marketplaces
### Workflow
* A logistics company issues a tokenized bond to finance fleet expansion
* The bond pays 6 percent annually and matures in 3 years
* Investors sign a digital subscription agreement and receive tokens in exchange
for stablecoins
* A platform handles coupon distribution, tax compliance, and investor
communications
* Upon maturity, the principal is returned automatically via smart contract
logic
Private placements on blockchain are particularly useful for impact investing,
ESG-linked bonds, or tokenized revenue-backed securities.
## Use case: Structured products and tranching via tokens
Structured bonds combine traditional debt features with embedded options, asset
backing, or payout dependencies. Blockchain allows precise modeling of these
instruments through token tranching and layered payout logic.
### Components
* Multiple token classes representing senior, mezzanine, and equity tranches
* Smart contract-managed cash flow waterfalls
* Rule-based distribution based on performance triggers or external indexes
* Built-in rating migration or dynamic conversion
### Example
* A real estate development firm tokenizes a bond backed by rental cash flows
* Senior tokens receive fixed payments; mezzanine holders get variable income
based on occupancy
* Token holders can vote on capital deployment or refinancing terms
* The underlying data (leases, expenses) is streamed on-chain for transparency
This approach supports bespoke instruments and aligns stakeholder incentives
through programmable financial logic.
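A two-tranche waterfall can be sketched in a few lines of Solidity: the senior pool receives a fixed entitlement first and the mezzanine pool absorbs the residual. The pool addresses and fixed senior amount are illustrative assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

/// Sketch of a two-tranche cash flow waterfall: senior holders receive a
/// fixed amount first, the mezzanine tranche absorbs whatever remains.
contract TrancheWaterfall {
    IERC20 public immutable paymentAsset;
    address public immutable seniorPool;        // distributor for senior tokens
    address public immutable mezzaninePool;     // distributor for mezzanine tokens
    uint256 public immutable seniorFixedAmount; // senior entitlement per period

    constructor(IERC20 _asset, address _senior, address _mezz, uint256 _seniorAmount) {
        paymentAsset = _asset;
        seniorPool = _senior;
        mezzaninePool = _mezz;
        seniorFixedAmount = _seniorAmount;
    }

    /// Route incoming revenue down the waterfall.
    function distribute(uint256 revenue) external {
        paymentAsset.transferFrom(msg.sender, address(this), revenue);
        uint256 seniorShare = revenue < seniorFixedAmount ? revenue : seniorFixedAmount;
        paymentAsset.transfer(seniorPool, seniorShare);
        uint256 residual = revenue - seniorShare;
        if (residual > 0) paymentAsset.transfer(mezzaninePool, residual);
    }
}
```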
## Case study: World Bank’s bond-i on blockchain
The World Bank pioneered blockchain-based bond issuance through its **bond-i**
(Blockchain Operated New Debt Instrument) program.
### Highlights
* First issued in 2018 on a private Ethereum-based network managed by the
Commonwealth Bank of Australia
* Aimed at simplifying bond issuance and lifecycle management through
distributed ledgers
* Included investor onboarding, cash flow tracking, and smart
contract-controlled events
* Resulted in improved settlement time, transparency, and audit readiness
The project demonstrated how multilaterals and public finance entities can use
blockchain to modernize sovereign bond infrastructure in a legally compliant,
operationally efficient way.
## Enhancing investor experience through tokenized bonds
Tokenized bonds deliver a fundamentally different investor experience, replacing
outdated systems of communication and recordkeeping with seamless, real-time,
digital interactions.
### Investor onboarding
* Instant KYC/AML screening via integrated identity platforms
* Onboarding through mobile apps, browser wallets, or embedded flows in fintech
platforms
* Support for institutional wallet configurations and multisig custody
### Portfolio management
* Real-time updates on bond balances, coupons, and maturity timelines
* Dashboards with yield curves, rating feeds, and capital allocation tools
* On-chain governance tools to vote on amendments or restructuring
### Reporting and compliance
* Downloadable or API-accessible tax reports, interest certificates, and
statements
* Integration with personal finance tools or robo-advisory engines
* Audit trails for investor disputes or regulatory compliance
### Engagement and liquidity
* Secondary market access through built-in trade widgets or external exchanges
* Alerts for new issuance opportunities or coupon receipt
* Ability to fractionalize, pledge, or lend tokenized bonds within digital
portfolios
The investor journey becomes transparent, mobile-first, and interoperable across
finance, DeFi, and wealth management platforms.
## Enabling cross-border participation in bond markets
One of the most transformative aspects of tokenized bonds is the ability to open
capital markets to a global investor base, without compromising regulatory
compliance or security.
### Challenges in traditional cross-border bond investment
* Foreign exchange risks and high conversion costs
* Custody and settlement complexity between jurisdictions
* Barriers to investor verification or capital controls
* Local market access limitations for offshore retail or SME investors
### Blockchain-enabled improvements
* Multi-currency stablecoin settlement with real-time exchange rates
* Smart contract-based transfer compliance using jurisdictional whitelists
* Token passports with linked KYC profiles stored on-chain or through
zero-knowledge attestations
* API integrations with regulated exchanges in multiple regions
### Use case example
* A Latin American infrastructure project issues tokenized bonds denominated in
USD
* Retail investors from Southeast Asia access the bonds via a fintech app that
handles onboarding and stablecoin conversion
* Bond tokens are custodied by a licensed platform in Singapore, which offers
liquidity through a secondary market pool
* Investors track impact metrics, coupon payments, and local currency yield
performance on their dashboard
Blockchain’s borderless and programmable nature makes it possible to build truly
global fixed-income participation networks with compliant access rails.
## Integrating tokenized bonds into decentralized finance
Decentralized finance (DeFi) protocols unlock composability, permissionless
access, and automation for on-chain assets. Integrating tokenized bonds into
DeFi expands their utility beyond passive holding.
### Key integrations
* **Lending markets**: Use tokenized bonds as collateral in money markets such
as Aave or Compound
* **Yield aggregators**: Optimize coupon flows with auto-compounding strategies
* **Decentralized exchanges (DEXs)**: Enable peer-to-peer trading of bonds
without centralized order books
* **Staking derivatives**: Tokenize yield streams for additional financial
engineering (e.g., split coupon from principal)
* **Composability**: Combine tokenized bonds with insurance, leverage, or
prediction markets
### Considerations for DeFi integration
* Oracles for bond pricing, coupon schedules, and risk metrics
* Compliance layers that ensure only eligible users can interact with security
tokens
* Wrapped versions of bond tokens for broader protocol compatibility
* Risk frameworks and stress tests for liquidity and market risk
Example flow:
* A tokenized green bond is added to a DeFi pool as collateral
* The protocol calculates a loan-to-value (LTV) based on on-chain pricing feeds
* A user borrows stablecoins using their bond holdings while continuing to earn
coupon yield
* If the bond price drops, smart contracts trigger a liquidation auction with
programmable rules
This transforms fixed-income assets from static holdings into active components
of decentralized financial portfolios.
## Comparing token standards for bond tokenization
Several token standards have emerged for representing regulated financial
instruments on blockchain. These standards define how data is stored, who can
transfer tokens, and what event hooks are available.
### ERC-20
* Widely supported fungible token standard on Ethereum
* Lacks compliance, transfer restriction, or document linking functionality
* Can be extended with permissioned wrappers or additional layers
### ERC-1400
* Security token standard combining ERC-20 and ERC-777 capabilities
* Supports partitions (e.g., tranches), transfer validation, and on-chain
document references
* Includes hooks for issuance, redemption, and regulatory reporting
* Developed for institutional compatibility and compliance enforcement
### ERC-3643 (formerly T-REX)
* Modular and identity-focused standard developed by Tokeny
* Emphasizes issuer control, transfer rules, and investor whitelisting
* Includes compliance frameworks for jurisdictional and role-based restrictions
### RToken (Reserve)
* Designed for compliant asset-backed tokens with programmable collateral
* Strong focus on stability, reserve auditability, and automated governance
### Custom or hybrid standards
* Many platforms define proprietary token standards tailored to specific
workflows
* Some projects use off-chain compliance engines with generic token interfaces
* Wrappers are often used to make security tokens compatible with DEX or lending
protocols
When selecting a standard, issuers and developers must consider:
* Target investor base and compliance jurisdiction
* Interoperability with custody, trading, and reporting tools
* Lifecycle management complexity (e.g., callable bonds, variable coupons)
* Gas efficiency, upgradeability, and security track record
Standardization improves portability and reduces integration effort across the
digital bond ecosystem.
## Managing off-chain data and oracles in bond platforms
Bond performance and compliance depend on data that may not originate on-chain.
Integrating trusted off-chain data sources into smart contracts is critical for
reliable operation and investor confidence.
### Oracle use cases in tokenized bonds
* Pricing feeds from rating agencies, benchmark indices, or market data
providers
* Legal event triggers (e.g., issuer bankruptcy, jurisdictional changes)
* Macroeconomic variables for inflation-linked or floating-rate bonds
* ESG metrics, sustainability scores, or carbon tracking for green bonds
* Tax rates and regulatory status by geography
### Types of oracles
* **First-party oracles**: Issuer or regulated third-party provides signed data
to the chain
* **Decentralized oracle networks**: Use multiple independent data providers to
reduce trust risk (e.g., Chainlink)
* **Zero-knowledge oracles**: Prove that data meets conditions without revealing
the full dataset
* **Off-chain attestations**: Use event logs and APIs to publish metadata to
IPFS, Arweave, or sidechains
### Oracle integration pattern
* The smart contract defines an external call for a pricing or macroeconomic
variable
* The oracle fetches and verifies the data, then signs a message or posts it to
chain
* The bond contract reads the update and triggers internal logic (e.g., coupon
adjustment)
* A record is emitted for transparency and regulator access
Oracles bridge the gap between legal realities, real-world events, and
programmable financial instruments.
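A minimal push-style version of this pattern might look as follows, with an authorized reporter posting a benchmark rate that drives a floating coupon. The single-reporter authorization and fixed spread are simplifying assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of a push-style oracle consumer: an authorized reporter posts a
/// benchmark rate and the bond recomputes its floating coupon.
contract FloatingRateBond {
    address public immutable oracle;          // authorized data reporter
    uint256 public benchmarkRateBps;          // reference index, in basis points
    uint256 public constant SPREAD_BPS = 150; // fixed spread over the benchmark

    event RateUpdated(uint256 newBenchmarkBps, uint256 newCouponBps);

    constructor(address _oracle) {
        oracle = _oracle;
    }

    function reportRate(uint256 newRateBps) external {
        require(msg.sender == oracle, "not authorized oracle");
        benchmarkRateBps = newRateBps;
        emit RateUpdated(newRateBps, couponRateBps()); // auditable record
    }

    function couponRateBps() public view returns (uint256) {
        return benchmarkRateBps + SPREAD_BPS;
    }
}
```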
## Modular architecture for bond tokenization platforms

Bond tokenization platforms benefit from modularity, allowing various components
to evolve independently and interoperate with external systems. A robust modular
design supports scalability, integration, and regulatory flexibility.

### Core modules

* **Token issuance engine**: Responsible for minting, distribution, and
  redemption logic. Includes smart contracts for asset metadata and lifecycle
  events.
* **Investor identity and compliance**: Integrates KYC/AML verification tools,
  jurisdictional whitelisting, and access control modules. Can be connected to
  digital identity wallets or national ID registries.
* **Settlement and custody**: Manages digital wallets, stablecoin rails, and
  optional escrow accounts. Supports institutional custody options for regulated
  environments.
* **Lifecycle management**: Automates coupon payments, maturity redemptions, and
  event notifications. Allows overrides for restructuring, extension, or early
  redemption.
* **Analytics and reporting**: Generates dashboards, investor reports, and
  regulator summaries. Tracks performance, compliance, and market data for
  auditability.
* **Governance and voting**: Facilitates bondholder decisions, such as covenant
  changes or issuer actions. Implements quorum logic, delegation, and result
  execution via contracts.
* **APIs and integration layers**: Exposes REST, GraphQL, or Web3 interfaces for
  external platforms, fintech apps, exchanges, or regulatory portals.
## Legal modeling and enforceability of tokenized bonds

A tokenized bond must align with legal enforceability requirements to ensure
investor protection and cross-border recognition. Legal modeling defines how the
token relates to traditional security definitions and how disputes or defaults
are resolved.

### Legal form options

* **Digitally native bond**: Issued entirely on blockchain, recognized through
  jurisdiction-specific digital securities laws
* **Digital representation of traditional bond**: Mirror token issued alongside
  a conventional security and linked through a custodial agreement
* **Wrapped security**: A wrapper token references a legal agreement stored
  off-chain or tokenized through a trust structure

### Core legal components

* Bond prospectus or information memorandum
* Terms and conditions, covenants, governing law, and jurisdiction clause
* Issuer obligations and event of default procedures
* Registered holder definition (wallet address as beneficiary)

### Enhancing enforceability

* Align token terms with national digital asset regulations (e.g., Liechtenstein
  TVTG, Swiss DLT Act)
* Maintain legal documentation references in token metadata or an on-chain
  registry
* Use digital signatures for investor consent and governance participation
* Integrate notarization or timestamping mechanisms for legal proof

Tokenization does not eliminate the need for legal frameworks. It requires them
to evolve toward digital-native implementation while preserving enforceable
investor rights.
## Digital identity in bond market access

Digital identity systems enable secure, compliant, and scalable access to
tokenized bonds. They support investor verification, role management, and access
enforcement at the protocol level.

### Identity features

* **KYC credentials**: Issued by regulated verifiers and linked to wallet
  addresses
* **Zero-knowledge proof support**: Allows users to prove eligibility without
  revealing sensitive data
* **Reputation scoring**: Tracks participation in bond votes, coupon claim
  timeliness, or trading activity
* **Credential revocation and updates**: Supports dynamic eligibility (e.g.,
  jurisdictional changes or sanctions)

### Implementation models

* Self-sovereign identity (SSI) using frameworks like DIDs (Decentralized
  Identifiers)
* On-chain registries of verified addresses maintained by issuers or agents
* Reusable credentials across multiple issuances, platforms, and markets

Digital identity infrastructure enables permissioned compliance without
introducing centralized chokepoints or unnecessary friction for users.
## Investor dashboards and analytics in tokenized bond ecosystems

Investor-facing tools enhance visibility, usability, and trust in tokenized
fixed-income instruments. Dashboards help users track performance, manage
holdings, and assess risks.

### Dashboard components

* **Holdings overview**: Display of owned bonds, tranches, maturity dates, and
  total value
* **Coupon calendar**: Visual interface showing upcoming payments and payment
  history
* **Yield analytics**: Real-time and projected yield, price history, and spread
  comparisons
* **Governance status**: Participation in bondholder votes, outcomes, and open
  proposals
* **Market feed**: News, issuer updates, ESG performance, and credit rating
  changes

### Technical considerations

* Integration with on-chain data and off-chain oracle feeds
* Mobile responsiveness and accessibility in emerging markets
* Export features for tax reports and regulatory filings
* User notification settings and multi-wallet support

By combining blockchain transparency with investor-centric design, dashboards
make tokenized bonds more intuitive and interactive than legacy instruments.
## Risk modeling and credit analysis in tokenized bonds
Effective risk modeling is essential for investors evaluating tokenized bonds.
While blockchain improves transparency, the underlying credit risk must still be
quantified and managed.
### Risk dimensions
* **Credit risk**: Likelihood of default by the issuer, based on financial
health and market conditions
* **Liquidity risk**: Ability to exit a position without significant price
impact
* **Market risk**: Impact of interest rate changes or macroeconomic events on
bond valuation
* **Operational risk**: Smart contract vulnerabilities, custody risks, or oracle
failures
### Blockchain-enhanced risk metrics
* Real-time exposure tracking across wallets and tranches
* On-chain default probability estimates using machine learning models
* Programmatic alerts for covenant breaches or payment delays
* Historical yield performance from trade and coupon logs
### Smart contract role
* Define conditional cash flows based on performance thresholds
* Trigger restructuring proposals when covenants are at risk
* Integrate risk scores from decentralized oracles into UI/UX elements
Digital bonds must offer familiar fixed-income analytics — duration, convexity,
spread — while augmenting them with programmable transparency and event-driven
logic.
## Structuring multi-asset and hybrid tokenized bonds
Blockchain allows for sophisticated financial engineering through composable
digital assets. Multi-asset and hybrid bonds represent an evolution of
traditional debt into programmable, yield-optimizing products.
### Multi-asset structures
* Collateralized token baskets backing a single bond token (e.g., stablecoins +
real estate tokens)
* Tranches with differentiated claims on revenue from multiple projects
* Dynamic asset weighting based on price feeds or external triggers
### Hybrid instruments
* Equity-convertible tokenized bonds based on issuer valuation or milestones
* ESG-linked payouts based on verified sustainability performance
* Embedded options such as callability or step-up coupons
### Example
* A renewable energy cooperative issues a bond token backed by two solar
projects and one wind farm
* Revenue flows are aggregated and split between a senior bond token and a
junior equity token
* Coupons are paid in stablecoin or tokenized energy credits, depending on
market conditions
* Token holders vote on asset reallocation if one project underperforms
These structures enhance yield customization, risk distribution, and investor
engagement across digital finance ecosystems.
## Automating compliance and audit readiness
Blockchain’s immutability supports real-time compliance automation and
continuous auditability for regulated bond instruments.
### Compliance layers
* Role-based access control to ensure only eligible wallets interact with tokens
* Smart contract enforcement of holding limits, jurisdictional exclusions, or
transfer thresholds
* Dynamic screening for sanctions, blacklists, or regulatory changes
### Audit automation
* Timestamped records of all transactions, redemptions, and interest payments
* Wallet-linked identity records (when consented) for regulator reviews
* API-exportable logs for external audit software or supervision portals
### Example workflow
* An exchange-integrated token checks sender and receiver wallets against an
OFAC-sanctioned address list before allowing transfer
* Weekly compliance snapshots are automatically hashed and published for
third-party verification
* Any KYC update or incident report is linked to the investor’s token activity
for risk profiling
These systems reduce the cost and complexity of compliance while enhancing
transparency and regulatory alignment.
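The screening step in this workflow might be wired up as follows, assuming a hypothetical on-chain sanctions registry interface (`ISanctionsRegistry`) maintained by a compliance agent; the balance bookkeeping is reduced to a bare minimum.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical registry interface; not a specific product.
interface ISanctionsRegistry {
    function isSanctioned(address account) external view returns (bool);
}

/// Sketch of protocol-level screening: both wallets are checked against the
/// registry before any transfer, and each transfer emits an audit event.
contract ScreenedTransfers {
    ISanctionsRegistry public immutable registry;
    mapping(address => uint256) public balances;

    event ScreenedTransfer(address indexed from, address indexed to, uint256 amount);

    constructor(ISanctionsRegistry _registry, uint256 initialBalance) {
        registry = _registry;
        balances[msg.sender] = initialBalance; // demo supply for the deployer
    }

    function transfer(address to, uint256 amount) external {
        require(!registry.isSanctioned(msg.sender), "sender sanctioned");
        require(!registry.isSanctioned(to), "recipient sanctioned");
        balances[msg.sender] -= amount;
        balances[to] += amount;
        emit ScreenedTransfer(msg.sender, to, amount); // audit trail entry
    }
}
```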
## Impact finance and green bonds on blockchain
Tokenized bonds are well-suited to support impact finance, where capital is
linked to measurable environmental or social outcomes.
### Green bond features
* Use-of-proceeds tracking for ESG-aligned projects
* Verified emissions reductions or sustainability targets
* On-chain attestation from independent verifiers
* Coupon adjustments or bonuses based on impact score achievements
### Social and SDG-linked bonds
* Proceeds funding education, housing, or healthcare infrastructure
* Performance indicators tied to human development metrics
* Tokenized engagement rewards for communities or investors
### Blockchain advantages
* Transparent tracking of disbursements, outcomes, and validator reports
* Verifiable metrics to satisfy investor mandates or regulatory frameworks
* Integration with sustainability-focused DAOs or grant programs
Example:
* A city issues a tokenized sustainability-linked bond to upgrade water
infrastructure
* An oracle system reports progress on pollution reduction targets
* If goals are met ahead of schedule, investors receive a bonus coupon payment
* The entire bond lifecycle is visible through a public dashboard with ESG
indicators
Tokenization empowers a new level of alignment between capital and positive
impact, with measurable, auditable outcomes.
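The bonus mechanism in this example can be sketched as a verifier-gated coupon step-up. The base rate, bonus size, and single-verifier model are illustrative assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Sketch of a sustainability-linked coupon: an independent verifier attests
/// whether the impact target was met, and the coupon steps up accordingly.
contract SustainabilityLinkedCoupon {
    address public immutable verifier;              // accredited impact auditor
    uint256 public constant BASE_COUPON_BPS = 400;  // 4.00%
    uint256 public constant BONUS_BPS = 50;         // +0.50% if target met
    bool public targetMet;

    event ImpactAttested(bool met, uint256 couponBps);

    constructor(address _verifier) {
        verifier = _verifier;
    }

    function attestImpact(bool met) external {
        require(msg.sender == verifier, "not verifier");
        targetMet = met;
        emit ImpactAttested(met, currentCouponBps()); // public audit record
    }

    function currentCouponBps() public view returns (uint256) {
        return targetMet ? BASE_COUPON_BPS + BONUS_BPS : BASE_COUPON_BPS;
    }
}
```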
## Long-term vision for digital bond markets
Bond tokenization is not just a digitization step — it represents a
transformation of how capital formation, credit, and public debt function in a
programmable economy.
### Expected trends
* Convergence between CeFi and DeFi platforms for fixed-income assets
* Tokenized sovereign debt issued directly to digital wallets with programmable
benefits
* AI-driven structuring tools that design and deploy bond smart contracts on
demand
* Real-time credit risk scoring powered by decentralized data feeds and ML
models
* DAO-managed bond issuance pools with community governance over lending terms
### Institutional adoption
* Central banks integrating tokenized bonds into monetary policy operations
* Tier-1 banks offering tokenized debt investment products through APIs
* ESG and sustainable finance mandates enforcing on-chain proof of impact
### Global accessibility
* Millions of users globally accessing tokenized treasuries via mobile wallets
* Micro-denominated bonds enabling savings and investment for unbanked
populations
* Cross-chain interoperability for digital bonds settled across global
stablecoin networks
The future of bonds is decentralized, accessible, and intelligent —
re-engineered to meet the liquidity, compliance, and transparency demands of the
next century of finance.
## Developer tools and SDKs for tokenized bond platforms
Building tokenized bond systems requires developer access to tested tools,
modular SDKs, and extensible frameworks that simplify integration with
blockchain infrastructure, wallets, and financial backends.
### Common toolkits
* **OpenZeppelin Contracts**: Reusable audited smart contract building blocks
  (token standards, access control, pausing) commonly extended for security
  tokens
* **Hardhat and Foundry**: Development environments for compiling, testing, and
deploying bond smart contracts
* **Graph Protocol**: Real-time indexing and querying layer to support
dashboards, reporting, and investor portals
* **Chainlink Oracles**: Secure oracle infrastructure for pricing, economic, or
ESG data integration
* **SettleMint SDKs**: Low-code abstraction over smart contract deployment,
identity management, and workflow automation
### Use cases
* Rapid deployment of bond issuance flows with coupon logic pre-configured
* UI layers for issuer dashboards and investor onboarding portals
* Automation of lifecycle events with customizable contract hooks
* Role-based access with middleware identity adapters (e.g., Civic, Fractal,
World ID)
By using modular SDKs, fintech startups, investment banks, and public
institutions can launch bond tokenization products with reduced engineering
complexity and increased speed to market.
## Smart contract templates for tokenized bonds
Developers benefit from audited contract templates that encapsulate standard
bond behavior while allowing custom logic for features like callable schedules,
ESG-linked coupons, or voting.
### Core contract structure
* **BondToken.sol**: Inherits from ERC-1400 or a custom security token
implementation
* **BondTerms.sol**: Stores metadata for maturity date, coupon rate, payment
intervals
* **CouponDistributor.sol**: Automates payouts and withdrawal logic for
investors
* **RedemptionModule.sol**: Enables principal repayment and optional early calls
* **GovernanceHooks.sol**: Optional voting or amendment logic by bondholders
### Example constructor
```solidity
constructor(
    uint256 _maturityDate,
    uint256 _couponRate,
    address _stablecoin,
    address[] memory _eligibleInvestors
) {
    maturity = _maturityDate;
    coupon = _couponRate;
    paymentAsset = _stablecoin;
    // A mapping cannot be indexed with an entire array; whitelist each
    // eligible investor individually.
    for (uint256 i = 0; i < _eligibleInvestors.length; i++) {
        whitelist[_eligibleInvestors[i]] = true;
    }
}
```
A production version of this template would also include validation checks,
ownership controls, and integration with compliance oracles.
## Open API specifications for bond platforms
Tokenized bond ecosystems require reliable APIs for wallet apps, custodians,
exchanges, and regulators. These APIs allow external systems to query, submit,
and automate key actions.
### Key endpoints
* **GET /bond/:id** — Retrieve metadata, terms, and payment history
* **POST /subscribe** — Submit investor application with KYC reference
* **GET /holder/:wallet** — Return holdings, upcoming coupons, and redemption
status
* **POST /transfer** — Initiate peer-to-peer transfer with compliance
verification
* **POST /governance/vote** — Submit or update votes on proposals
* **GET /report/tax** — Export income, capital gain, and jurisdiction summaries
APIs can be RESTful or GraphQL-based and are typically secured by wallet-based
authentication or investor credentials.
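For illustration, a thin TypeScript client for two of these endpoints could look
like the sketch below. The base URL, bearer-token scheme, and response field
names are assumptions made for the example, not a published specification.

```ts
// Minimal client sketch for a hypothetical tokenized bond API.
// Endpoint paths follow the list above; all field names are illustrative.

interface BondSummary {
  id: string;
  couponRateBps: number; // coupon rate in basis points
  maturityDate: string;  // ISO-8601 date
  paymentHistory: { date: string; amountPerToken: string }[];
}

async function getBond(baseUrl: string, bondId: string, authToken: string): Promise<BondSummary> {
  const res = await fetch(`${baseUrl}/bond/${bondId}`, {
    headers: { Authorization: `Bearer ${authToken}` },
  });
  if (!res.ok) throw new Error(`GET /bond/${bondId} failed: ${res.status}`);
  return (await res.json()) as BondSummary;
}

async function submitSubscription(
  baseUrl: string,
  authToken: string,
  application: { wallet: string; amount: string; kycReference: string },
): Promise<{ subscriptionId: string }> {
  const res = await fetch(`${baseUrl}/subscribe`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${authToken}` },
    body: JSON.stringify(application),
  });
  if (!res.ok) throw new Error(`POST /subscribe failed: ${res.status}`);
  return (await res.json()) as { subscriptionId: string };
}
```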
## Ecosystem tools and user interfaces
User adoption depends on high-quality frontend tools that abstract away
blockchain complexity while retaining transparency and control.
### Platform modules
* **Issuer portal**: Smart contract deployment, bond configuration, and investor
management
* **Investor wallet**: Bond holding overview, payout notifications, transfer
interface
* **Analytics dashboard**: Market data, ESG scoring, transaction heatmaps
* **Governance interface**: Bondholder voting, quorum tracking, and amendment
logs
* **Compliance monitor**: Real-time transfer scanning and whitelist enforcement
logs
### UI/UX considerations
* Gasless interactions using meta-transactions or account abstraction
* Mobile-first layouts for emerging markets
* Multilingual support for cross-border issuance
* Educational overlays to simplify bond terminology for retail users
Interfaces can be built using frameworks like Next.js, React, Vue, or embedded
into mobile apps through APIs and web views.
## Summary: The path forward for tokenized bond markets
Bond tokenization represents one of the most compelling applications of
blockchain in regulated finance. It brings together programmable automation,
transparency, global access, and operational efficiency to transform
fixed-income instruments.
Key takeaways:
* Tokenized bonds retain the legal and economic properties of traditional debt
but operate on a programmable infrastructure layer
* Blockchain enables faster issuance, fractional ownership, and real-time
settlement while reducing intermediary overhead
* Smart contracts enforce compliance, automate payments, and enable advanced
features like ESG linkage, voting, and embedded options
* Tools including SDKs, APIs, and governance modules empower institutions to
launch compliant, scalable products with global reach
* Integration with DeFi, stablecoins, identity systems, and impact oracles
expands the utility of bonds in digital ecosystems
The future of capital markets lies in infrastructure that is open, intelligent,
and decentralized — with tokenized bonds playing a foundational role in shaping
the next generation of financial services.
## Appendix: Bond coupon calculation and distribution models
Tokenized bonds can use programmable smart contracts to automate traditional
coupon logic. While the formulas remain consistent with traditional finance,
blockchain enables transparent, predictable, and on-chain computation of
payments.
### Common coupon formulas
**Fixed rate coupon**
```
Coupon = (Face Value × Coupon Rate × Days in Period) / Days in Year
```
**Floating rate coupon**
```
Coupon = (Reference Rate + Spread) × Face Value × (Days in Period / Days in Year)
```
**Zero-coupon bond** (no periodic coupons; the return comes from buying at a
discount to face value)
```
Purchase Price = Face Value / (1 + Yield) ^ (Years to Maturity)
```
Smart contracts should:
* Fetch reference rates from trusted oracles (e.g., SOFR or EURIBOR; legacy
instruments may still reference discontinued benchmarks like LIBOR)
* Apply the appropriate day-count convention (e.g., ACT/ACT, 30/360)
* Trigger distributions to token holders based on wallet snapshots
* Log successful and failed payment attempts with re-attempt or escalation logic
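As a concrete sketch, the fixed-rate formula translates directly into code. The
basis-point representation and the two day-count conventions shown here are
illustrative choices:

```ts
type DayCount = "30/360" | "ACT/365";

// Fixed-rate coupon: (Face Value × Coupon Rate × Days in Period) / Days in Year
function fixedCoupon(
  faceValue: number,
  couponRateBps: number, // e.g., 500 = 5.00% annual coupon
  daysInPeriod: number,
  convention: DayCount,
): number {
  const daysInYear = convention === "30/360" ? 360 : 365;
  return (faceValue * (couponRateBps / 10_000) * daysInPeriod) / daysInYear;
}

// A 5% bond with 1,000 face value over a 90-day period on 30/360:
// (1000 × 0.05 × 90) / 360 = 12.5
console.log(fixedCoupon(1_000, 500, 90, "30/360")); // 12.5
```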
### Payment options
* Stablecoins pegged to fiat (e.g., USDC, EURC)
* CBDCs via whitelisted settlement contracts
* Wrapped fiat tokens from authorized financial institutions
Coupon modules can support both push (issuer sends) and pull (investor claims)
models depending on legal requirements and investor preference.
## Lifecycle pseudocode for tokenized bond execution
Below is simplified pseudocode describing the flow of a programmable bond smart
contract.
```
on TokenMint(issuer, bondTerms, investorList):
    store bondTerms
    initialize paymentSchedule
    for investor in investorList:
        whitelist[investor.address] = true
        balances[investor.address] = investor.allocation
    emit TokenMinted(bondTerms)

on Transfer(from, to, amount):
    require whitelist[to] == true
    update balances
    emit Transfer(from, to, amount)

on PayCoupon():
    currentPeriod = getCurrentPaymentPeriod()
    for (holder, balance) in balances:
        coupon = calculateCoupon(balance, bondTerms.rate)
        if payoutAsset.balance >= coupon:
            payoutAsset.transfer(holder, coupon)
            emit CouponPaid(holder, coupon)
        else:
            emit PaymentFailed(holder, coupon)

on RedeemAtMaturity():
    if block.timestamp >= bondTerms.maturityDate:
        for (holder, balance) in balances:
            payoutAsset.transfer(holder, principalAmount(balance))
        burnToken()
        emit BondRedeemed()
```
This model can be extended with early redemption, slashing for defaults,
governance integration, or ESG-linked conditional logic.
## Role definitions and smart contract access control
Institutional-grade bond platforms must implement robust access control for
various roles, which are enforced on-chain.
### Role-based permissions
* **Issuer**: Initiates bond, updates metadata, triggers redemption
* **Investor**: Receives token, transfers within limits, claims coupon
* **Custodian**: Optional co-signer or multisig participant for managed accounts
* **Regulator**: Read access to whitelists, events, and reporting API
* **Auditor**: Downloads proof of payment, compliance logs, or tax summaries
### Smart contract patterns
* OpenZeppelin’s `AccessControl` or `Ownable` for modular role checks
* Event emissions for every privileged action for auditability
* Emergency pause or circuit breaker in case of oracle/data failure
Tokenized bond infrastructure benefits from formal, minimal, and verifiable
access control logic that aligns with off-chain governance rules and investor
protections.
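On-chain, these checks usually live in Solidity contracts built on the patterns
above. The same role matrix can also be mirrored at the application layer; a
minimal TypeScript sketch of such a matrix, with role and action names chosen
purely for illustration:

```ts
type Role = "issuer" | "investor" | "custodian" | "regulator" | "auditor";
type Action =
  | "initiateBond"
  | "triggerRedemption"
  | "transfer"
  | "claimCoupon"
  | "readWhitelist"
  | "downloadAuditLogs";

// Permission matrix mirroring the role definitions above.
const PERMISSIONS: Record<Role, ReadonlySet<Action>> = {
  issuer: new Set<Action>(["initiateBond", "triggerRedemption"]),
  investor: new Set<Action>(["transfer", "claimCoupon"]),
  custodian: new Set<Action>(["transfer"]), // co-signing handled by multisig elsewhere
  regulator: new Set<Action>(["readWhitelist"]),
  auditor: new Set<Action>(["readWhitelist", "downloadAuditLogs"]),
};

function requireRole(role: Role, action: Action): void {
  if (!PERMISSIONS[role].has(action)) {
    throw new Error(`role "${role}" is not permitted to perform "${action}"`);
  }
}

requireRole("issuer", "initiateBond"); // ok
// requireRole("investor", "triggerRedemption"); // would throw
```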
## Digital escrow design for primary issuance
Primary issuance of bonds may use on-chain escrow logic to guarantee delivery vs
payment without trusted intermediaries.
### Escrow process
1. Investor submits stablecoin to escrow contract with signed subscription
intent
2. Smart contract validates KYC/whitelist inclusion
3. Token allocation is held until funding period ends
4. On success:
* Tokens distributed pro-rata
* Funds released to issuer
5. On failure:
* Funds returned to investors
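This settlement flow is essentially a small state machine. The TypeScript below
simulates it off-chain as a sketch; an on-chain version would implement the same
transitions in a smart contract, and the minimum-raise and whitelist parameters
are assumptions:

```ts
type EscrowState = "funding" | "settled" | "refunded";

class IssuanceEscrow {
  private deposits = new Map<string, number>(); // investor wallet -> stablecoin amount
  state: EscrowState = "funding";

  constructor(private minRaise: number, private whitelist: Set<string>) {}

  deposit(wallet: string, amount: number): void {
    if (this.state !== "funding") throw new Error("funding window closed");
    if (!this.whitelist.has(wallet)) throw new Error("wallet not KYC-whitelisted");
    this.deposits.set(wallet, (this.deposits.get(wallet) ?? 0) + amount);
  }

  // Called when the funding period ends: settle or refund.
  close(): { state: EscrowState; allocations: Map<string, number> } {
    const raised = [...this.deposits.values()].reduce((a, b) => a + b, 0);
    if (raised >= this.minRaise) {
      this.state = "settled"; // tokens distributed pro-rata, funds released to issuer
    } else {
      this.state = "refunded"; // deposits returned to investors
    }
    return { state: this.state, allocations: new Map(this.deposits) };
  }
}
```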
### Benefits
* No manual reconciliation of bank wires or custody mismatches
* Automatic enforcement of minimum/maximum raise conditions
* Transparent subscription and refund activity for regulators
Escrow contracts are critical to digitizing the subscription phase of bond
issuance.
## Blockchain network selection criteria
Tokenized bond deployments require careful selection of blockchain
infrastructure that balances security, cost, interoperability, and regulatory
clarity.
### Key evaluation factors
* **Finality**: Is the network deterministic (e.g., Tendermint) or probabilistic
(e.g., Bitcoin)?
* **Gas costs**: Are transaction fees predictable and affordable at scale?
* **Permissioning**: Can access to contract functions be restricted for
compliance?
* **Tooling**: Is developer tooling (indexing, wallets, explorers) mature?
* **Legal recognition**: Does the jurisdiction recognize digital ledger entries
as legal records?
### Common choices
* **Ethereum Mainnet**: Deepest liquidity, highest decentralization, but
expensive and congested
* **Polygon / Avalanche**: Compatible with EVM and more affordable, used in
multiple regulated pilots
* **Quorum / Besu**: Enterprise Ethereum variants suitable for permissioned
environments
* **Stellar / Algorand**: Strong focus on asset issuance, low fees, high
finality
* **Hyperledger Fabric**: Used for internal settlement networks and CSD-level
integrations
Selecting the right chain is as important as structuring the instrument itself.
Migration or interoperability support should be considered from day one.
## Final closing reflections
Bond tokenization stands at the intersection of tradition and transformation. It
preserves the rigor and stability of fixed-income instruments while unlocking
new paradigms of access, automation, and programmability.
By delivering:
* Real-time, permissioned, and secure financial flows
* Cost savings through automation and direct issuance
* Flexibility in structuring and investor targeting
* Auditability for regulators and compliance officers
* New utility and composability through DeFi and ESG linkages
...blockchain-based bond markets represent not just a digital replica of analog
finance, but a complete reimagining of how debt, investment, and economic
coordination can function in a global digital-first world.
file: ./content/docs/application-kits/asset-tokenization/use-cases/cryptocurrency.mdx
meta: {
"title": "Cryptocurrency tokens",
"description": "A comprehensive technical and functional guide to understanding cryptocurrencies, their ecosystem, architecture, and use cases"
}
## Introduction to cryptocurrency
Cryptocurrency refers to a digital medium of exchange that operates on
decentralized networks using blockchain or similar distributed ledger
technologies. Unlike fiat currencies issued by central banks, cryptocurrencies
are secured through cryptographic principles and operate without reliance on
intermediaries like banks or clearinghouses.
Since the launch of Bitcoin in 2009, cryptocurrencies have evolved from niche
experiments in peer-to-peer money into a sprawling ecosystem of programmable
assets, decentralized finance (DeFi), Web3 applications, and alternative
economic systems. This guide explores the technical foundations, economic
principles, key players, infrastructure layers, and future trajectory of the
cryptocurrency space.
Cryptocurrencies challenge conventional assumptions about monetary policy,
censorship, transparency, and control. They empower users with self-custody,
privacy, global access, and programmable financial tools. At the same time, they
raise concerns around volatility, regulation, scalability, and illicit finance.
Understanding cryptocurrency requires a blend of computer science, cryptography,
economics, game theory, and governance. This document walks through all major
aspects of cryptocurrency, from consensus and issuance models to wallets,
mining, tokenomics, and regulatory impact.
## Evolution of digital currencies and monetary innovation
The idea of digital money precedes Bitcoin. Prior to blockchain-based
cryptocurrencies, there were several attempts at creating internet-native
currencies or anonymous value transfer systems. Examples include:
* **DigiCash (1990s)**: An early e-cash system based on blind signature
cryptography by David Chaum
* **e-gold (1996)**: A centralized digital currency backed by gold reserves,
eventually shut down for regulatory violations
* **Liberty Reserve (2006)**: A digital payment processor used for anonymous
transfers, also shut down for enabling illicit activity
These projects failed to gain sustained adoption due to centralization risks,
regulatory fragility, or technical limitations. The breakthrough came with
Bitcoin’s introduction of decentralized consensus, eliminating the need for
trusted intermediaries.
Bitcoin combined:
* Public-key cryptography (for wallets and signatures)
* Proof-of-work mining (to secure the network)
* A capped supply schedule (mimicking digital scarcity)
* A distributed ledger (to synchronize state without a central server)
This architecture enabled censorship-resistant, globally accessible, and
programmable money.
## Defining characteristics of cryptocurrencies
Cryptocurrencies share several key properties that distinguish them from fiat
currencies, electronic money, or central bank digital currencies (CBDCs):
* **Decentralization**: Operate without a single point of control or failure
* **Cryptographic security**: Use cryptographic techniques for identity,
transaction authorization, and network protection
* **Immutable ledger**: Transactions recorded on-chain cannot be altered without
consensus
* **Pseudonymity**: Users are represented by public keys, not personal
identities
* **Global accessibility**: Usable across borders without reliance on banking
infrastructure
* **Programmability**: Smart contracts allow logic to be executed based on
on-chain conditions
* **Digital scarcity**: Most cryptocurrencies have hard-coded or algorithmic
supply schedules
These characteristics make cryptocurrencies suitable for a variety of roles —
from alternative money and digital gold to programmable capital and governance
tokens.
## Cryptographic foundations
Cryptocurrency networks rely heavily on cryptographic primitives for security,
privacy, and consensus.
### Public-key cryptography
Each user is associated with a cryptographic key pair:
* **Private key**: Known only to the user, used to sign transactions
* **Public key**: Derived from the private key, used to generate wallet
addresses
Transactions are authorized by signing them with the private key. Anyone can
verify the signature using the public key, ensuring authenticity without
revealing the private key.
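A minimal sketch of this sign-and-verify round trip using ethers.js (v6); the
message content is arbitrary:

```ts
import { Wallet, verifyMessage } from "ethers";

async function demo(): Promise<void> {
  // Generate a key pair; the address is derived from the public key.
  const wallet = Wallet.createRandom();
  const message = "transfer 10 tokens to 0xabc...";

  // Sign with the private key (which never needs to be revealed).
  const signature = await wallet.signMessage(message);

  // Anyone can recover the signer's address from message + signature.
  const recovered = verifyMessage(message, signature);
  console.log(recovered === wallet.address); // true
}

demo().catch(console.error);
```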
### Hash functions
Cryptocurrencies use hash functions for:
* Block linking in blockchains (e.g., SHA-256 in Bitcoin)
* Proof-of-work mining (finding a hash below a difficulty target)
* Transaction identifiers and Merkle trees (for efficient state verification)
Hash functions are designed to be:
* Deterministic
* Collision-resistant
* Non-invertible
* Uniformly distributed
### Digital signatures
Most cryptocurrencies use elliptic curve digital signatures (ECDSA or Schnorr)
to prove ownership of funds and authorize state changes. The digital signature
proves the message was created by someone with the private key, without
revealing the key itself.
### Zero-knowledge proofs
Some privacy-focused cryptocurrencies (e.g., Zcash) use zero-knowledge proofs
(zk-SNARKs, zk-STARKs) to enable transaction validation without revealing
sender, receiver, or amount.
Cryptographic soundness is critical to the security and trustworthiness of any
cryptocurrency system.
## Blockchain architecture and transaction lifecycle
Cryptocurrencies typically operate on blockchain infrastructure, a linear
sequence of blocks containing transactions, validated by consensus rules.
### Key components
* **Ledger**: Tracks account balances or UTXO (unspent transaction output)
states
* **Nodes**: Devices running client software that store and propagate the
blockchain
* **Validators/miners**: Participants who validate transactions and propose new
blocks
* **Mempool**: Queue of pending transactions waiting to be confirmed
### Transaction flow
1. User signs transaction with private key
2. Transaction is broadcast to the network and enters the mempool
3. Miner/validator includes transaction in a new block
4. Block is validated, appended to the chain, and propagated to nodes
5. User receives confirmation once the block is accepted by the majority
The transaction becomes increasingly irreversible as more blocks are added after
it.
### Block contents
Each block typically contains:
* Block header (timestamp, previous block hash, nonce)
* Merkle root (hash of all transactions in the block)
* List of validated transactions
* Optional metadata (e.g., miner’s message, smart contract logs)
Different chains may use account-based models (like Ethereum) or UTXO-based
models (like Bitcoin).
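The hash linking between blocks can be demonstrated in a few lines of
TypeScript with Node's built-in crypto module; the block fields here are heavily
simplified:

```ts
import { createHash } from "node:crypto";

interface Block {
  timestamp: number;
  prevHash: string;       // hash of the previous block
  transactions: string[]; // simplified: raw transaction strings
}

function blockHash(block: Block): string {
  return createHash("sha256").update(JSON.stringify(block)).digest("hex");
}

// Each block commits to its predecessor, so altering any historical block
// changes every subsequent hash.
const genesis: Block = { timestamp: 0, prevHash: "0".repeat(64), transactions: [] };
const next: Block = {
  timestamp: 1,
  prevHash: blockHash(genesis),
  transactions: ["alice -> bob: 5"],
};
console.log(blockHash(next));
```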
## Consensus mechanisms
To maintain a consistent and tamper-resistant ledger across all participants,
cryptocurrencies use consensus algorithms.
### Proof of work (PoW)
* First implemented by Bitcoin
* Miners solve cryptographic puzzles to propose new blocks
* Requires energy and hardware investment
* Secure but resource-intensive and slower in throughput
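A toy version of the mining puzzle, assuming difficulty is expressed as a
required number of leading zero hex digits rather than a full difficulty target:

```ts
import { createHash } from "node:crypto";

// Find a nonce whose SHA-256 hash of (header + nonce) starts with
// `difficulty` zero hex digits.
function mine(header: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const hash = createHash("sha256").update(header + nonce).digest("hex");
    if (hash.startsWith(target)) return { nonce, hash };
  }
}

console.log(mine("block-header-data", 4)); // ~65,000 attempts on average
```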
### Proof of stake (PoS)
* Validators are chosen based on staked tokens
* Incentivizes honest behavior through slashing and reward distribution
* Used in post-merge Ethereum, Cardano, Solana, and others
* Reduces energy usage and increases scalability
### Other models
* **Delegated PoS** (e.g., EOS): Token holders vote on a limited set of
validators
* **Proof of authority** (e.g., BSC): Validators are pre-approved or
institutionally known
* **Byzantine Fault Tolerant (BFT)** (e.g., Cosmos, Tendermint): Fast finality
with limited validator sets
* **Hybrid systems**: Combine PoW and PoS (e.g., Decred)
The choice of consensus mechanism affects the network’s decentralization,
security, energy efficiency, and governance dynamics.
## Cryptocurrency types and classification
There are thousands of cryptocurrencies, but they can be grouped based on
function and design.
### Native coins
* Used to secure and operate a blockchain network
* Examples: BTC (Bitcoin), ETH (Ethereum), ADA (Cardano), SOL (Solana)
* Typically issued at genesis or through block rewards
### Stablecoins
* Pegged to fiat currencies or commodities
* Used for trading, payments, and DeFi stability
* Categories:
* Fiat-backed (e.g., USDC, USDT)
* Crypto-collateralized (e.g., DAI)
* Algorithmic (e.g., former UST)
### Utility tokens
* Provide access to services or features within a protocol
* Not intended as currencies but as access or incentive layers
* Examples: LINK (Chainlink), BAT (Brave), GRT (The Graph)
### Governance tokens
* Represent voting rights in protocol decisions
* Used in DAOs to allocate resources, update parameters, or deploy changes
* Examples: UNI (Uniswap), AAVE (Aave), MKR (MakerDAO)
### Privacy coins
* Emphasize anonymous transactions
* Use advanced cryptography to obscure sender, receiver, or amount
* Examples: XMR (Monero), ZEC (Zcash)
### Meme coins and experimental tokens
* Community-driven or satirical in nature
* Often high volatility, speculative use
* Examples: DOGE (Dogecoin), SHIB (Shiba Inu)
Each token type reflects specific use cases, incentive models, and governance
frameworks.
## Cryptocurrency wallets and custody
Cryptocurrency wallets are software or hardware tools that allow users to manage
their private keys and interact with blockchain networks. Wallets do not hold
coins themselves but provide access to the cryptographic credentials required to
control assets stored on-chain.
### Types of wallets
* **Hot wallets**: Connected to the internet; convenient but exposed to higher
risk. Examples: MetaMask, Trust Wallet, Coinbase Wallet.
* **Cold wallets**: Offline storage; includes hardware wallets (e.g., Ledger,
Trezor) and paper wallets.
* **Custodial wallets**: Managed by a third party (e.g., exchange or
institution).
* **Non-custodial wallets**: User holds full control over private keys.
### Wallet features
* Seed phrase generation and recovery
* Public address management
* Token support (multi-chain or EVM-compatible)
* Integration with dApps via Web3 interfaces
* Support for NFTs and smart contract interactions
Choosing the right wallet depends on use case, risk profile, and technical
comfort level.
## Cryptocurrency exchanges and trading infrastructure
Cryptocurrency exchanges are platforms where users can buy, sell, or trade
digital assets. They serve as liquidity hubs and price discovery mechanisms for
the entire crypto market.
### Types of exchanges
* **Centralized exchanges (CEXs)**: Operated by companies with order books and
custody (e.g., Binance, Coinbase).
* **Decentralized exchanges (DEXs)**: Peer-to-peer trading via smart contracts
(e.g., Uniswap, Curve, PancakeSwap).
* **Hybrid exchanges**: Combine elements of CEX and DEX (e.g., dYdX, Loopring).
### Core components
* Order books or automated market makers (AMMs)
* Trading pairs (e.g., BTC/USDT)
* Liquidity pools or order routing
* KYC/AML processes for regulated platforms
* Fiat on-ramps and off-ramps
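The AMM mechanism mentioned above is easy to sketch: a constant-product pool
prices each swap purely from its reserves. The 0.3% fee below is an assumption
matching common Uniswap-v2-style pools:

```ts
// Constant-product AMM (x * y = k), the pricing rule behind many DEXs.
// Given pool reserves and an input amount, compute the output after fees.
function getAmountOut(amountIn: number, reserveIn: number, reserveOut: number): number {
  const amountInWithFee = amountIn * 0.997; // 0.3% swap fee
  return (amountInWithFee * reserveOut) / (reserveIn + amountInWithFee);
}

// Swapping 1,000 USDT into a pool holding 5,000,000 USDT and 100 BTC:
console.log(getAmountOut(1_000, 5_000_000, 100)); // ≈ 0.0199 BTC
```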
### Trading tools
* Spot, margin, and futures trading
* API access and algorithmic strategies
* Price alerts and charting interfaces
* Risk management tools (stop-loss, limit orders)
Exchanges are critical for price formation, liquidity provisioning, and user
adoption of cryptocurrencies.
## Mining and validator economics
Consensus mechanisms reward participants for securing the network. The economic
structure behind mining or validation defines incentives, costs, and
competition.
### Proof of Work (PoW) mining
* Miners invest in hardware (ASICs, GPUs) and electricity
* Compete to solve hash puzzles and win block rewards
* Revenue = block subsidy + transaction fees
* Margins depend on difficulty, hardware efficiency, and energy costs
### Proof of Stake (PoS) validation
* Validators lock tokens as collateral (stake)
* Are selected to propose and attest to blocks
* Earn staking rewards and fees; risk slashing for malicious behavior
* Staking can be solo, pooled, or delegated via staking-as-a-service
### Security and alignment
* Game-theoretic models align economic incentives with honest participation
* Costs to attack scale with network value (e.g., 51% attack)
* Reward mechanisms adjust dynamically to maintain decentralization and liveness
Mining and staking underpin the trust and robustness of permissionless
cryptocurrency networks.
## Tokenomics and issuance models
Tokenomics refers to the design of a token's economic model, including supply,
distribution, inflation, and utility. Strong tokenomics align incentives for
network growth and value creation.
### Key components
* **Total supply**: Fixed (e.g., 21 million BTC) vs. inflationary (e.g., ETH
post-merge)
* **Distribution**: Mining, staking, airdrops, ICOs, liquidity mining, or dev
grants
* **Utility**: Payments, governance, gas fees, collateral, or service access
* **Burn mechanisms**: Supply reduction through fee burns or redemption events
* **Treasury management**: DAO or foundation-managed reserves for ecosystem
development
### Common issuance models
* **Hard cap**: Max total supply never exceeds fixed amount (Bitcoin)
* **Tail emission**: Small ongoing inflation for security or incentives (Monero)
* **Burn and mint**: Elastic supply based on demand and usage (Luna/UST
pre-collapse)
* **Bonding curves**: Price discovery based on supply-demand interaction (e.g.,
Balancer)
Tokenomics must balance scarcity, incentives, and utility to sustain value over
time.
## Economic use cases of cryptocurrencies
Cryptocurrencies support a range of economic roles beyond simple payments. These
include:
### Store of value
* Digital scarcity and fixed supply mimic gold-like properties
* Popular in inflation-prone or capital-restricted economies
### Medium of exchange
* Used for remittances, P2P payments, microtransactions
* Low-fee stablecoins enable merchant adoption and cross-border commerce
### Unit of account
* Less common due to volatility
* Used in DeFi protocols, NFTs, and DAOs for internal accounting
### Collateral and yield generation
* Locked in DeFi protocols to borrow, earn, or mint synthetic assets
* Staking yields or lending interest incentivize holding
### Speculation and hedging
* Crypto derivatives offer exposure to volatility and risk management
* Options, perpetuals, and structured products deepen market complexity
The economic function of a cryptocurrency depends on adoption, network effects,
and policy frameworks.
## Network effects and ecosystem growth
Cryptocurrency value often grows non-linearly due to network effects. These
feedback loops include:
* **Developer adoption**: More devs → more dApps → more users → more demand for
native token
* **Liquidity flywheels**: High volume attracts LPs, traders, and integrations
* **Security via stake**: More tokens staked → higher cost to attack → more
trust
* **Community alignment**: Token holders support ecosystem growth via DAOs and
governance
Metcalfe’s Law applies: value grows roughly in proportion to the square of
connected users. Strong ecosystems like Ethereum, Solana, or Cosmos leverage
these effects to build defensible value.
## Smart contracts and programmable cryptocurrencies
Smart contracts are self-executing agreements that run on blockchain networks.
They enable cryptocurrencies to move beyond simple payments into programmable
systems for finance, governance, and applications.
### Characteristics of smart contracts
* Deterministic and tamper-proof execution
* Autonomous code triggered by on-chain events
* Transparency of logic and state changes
* Immutable unless designed with upgrade paths
Smart contracts are most commonly associated with Ethereum but are supported on
many networks including Solana, Avalanche, Polygon, Near, and Fantom.
### Common use cases
* Token issuance and management (ERC-20, ERC-721)
* Lending and borrowing protocols
* Automated market makers and DEXs
* Governance and DAO tooling
* Crowdfunding (e.g., initial DEX offerings)
Smart contracts turn cryptocurrencies into composable primitives for building
decentralized infrastructure.
## Decentralized finance (DeFi)
DeFi is an ecosystem of financial services built on smart contracts and public
blockchains. It recreates traditional instruments in a trustless and open
format.
### Key components
* **Stablecoins**: Act as unit of account and collateral (e.g., DAI, USDC)
* **Lending markets**: Allow users to supply and borrow assets (e.g., Aave,
Compound)
* **DEXs**: Trade tokens without intermediaries (e.g., Uniswap, SushiSwap)
* **Derivatives**: Synthetics, options, and perpetuals (e.g., Synthetix, dYdX)
* **Aggregators**: Optimize for best yields or swap rates (e.g., Yearn, 1inch)
### DeFi advantages
* Global, permissionless access
* Composability between protocols
* Transparent and verifiable logic
* Non-custodial control
DeFi has grown into a multibillion-dollar ecosystem with hundreds of
interoperable protocols leveraging cryptocurrency as collateral and value
medium.
## Decentralized autonomous organizations (DAOs)
DAOs are governance structures where rules and decisions are encoded in smart
contracts and enforced by token-weighted voting.
### DAO architecture
* **Governance tokens**: Provide voting power and access rights
* **Proposals**: Submitted by community members or core teams
* **Voting**: Based on stake or delegation models
* **Execution**: Smart contract-based execution of approved proposals
### DAO use cases
* Protocol upgrades and parameter tuning
* Treasury allocation and grant programs
* Community curation and incentives
* Investment and M\&A decisions
DAOs demonstrate how cryptocurrencies can enable scalable, programmable
governance models without centralized intermediaries.
## Regulatory considerations for cryptocurrencies
As cryptocurrencies grow in adoption, they intersect increasingly with financial
regulation. Governments classify and regulate crypto assets differently based on
their design and use.
### Common regulatory topics
* **Securities classification**: Some tokens may be deemed investment contracts
* **AML/KYC compliance**: Exchanges and DeFi frontends may require user
verification
* **Taxation**: Treated as property or income depending on jurisdiction
* **Consumer protection**: Ensuring fair disclosures and protocol risk
visibility
* **Stablecoin regulation**: Scrutiny on reserves, audits, and systemic risk
### Jurisdictional approaches
* **United States**: SEC and CFTC jurisdiction; state-level licensing; pending
legislation
* **European Union**: Markets in Crypto-Assets (MiCA) regulation framework
* **Asia**: Varying policies, with regulatory sandboxes and bans in different
regions
Developers and users must consider jurisdictional risks, especially for tokens
involved in fundraising, yield products, or derivatives.
## Cross-chain bridges and interoperability
Cryptocurrencies and dApps increasingly operate across multiple chains. Bridges
and interoperability protocols allow assets and data to move between ecosystems.
### Interoperability mechanisms
* **Wrapped tokens**: Represent assets from one chain on another (e.g., WBTC)
* **Bridges**: Lock assets on origin chain and mint them on target chain
* **Message passing**: Enables cross-chain calls and event triggers
* **Interchain protocols**: IBC (Cosmos), LayerZero, Axelar
### Use cases
* Cross-chain yield farming
* Arbitrage between DEXs on different chains
* NFT minting on one chain and trading on another
* Governance proposals affecting multiple ecosystems
Interoperability enhances liquidity, developer optionality, and cross-ecosystem
collaboration, positioning cryptocurrencies as modular and scalable digital
infrastructure.
## NFTs and cryptocurrency-based digital assets
Non-fungible tokens (NFTs) are unique digital assets secured by cryptocurrency
networks. Each NFT represents ownership of a distinct piece of data, media, or
logic stored on-chain or referenced off-chain.
### Characteristics of NFTs
* Uniqueness and indivisibility
* Cryptographic proof of ownership
* Metadata and media linkage via IPFS or Arweave
* Transferable and tradable across NFT marketplaces
### NFT standards
* **ERC-721**: Standard for single-instance NFTs
* **ERC-1155**: Multi-token standard allowing both fungible and non-fungible
tokens
* **Solana SPL NFTs**: Used on Solana-based NFT platforms
### NFT use cases
* Digital art and collectibles
* Music and film rights
* Virtual land and in-game items
* Identity, credentials, and certificates
* On-chain licensing and IP management
NFTs represent a major new application class of cryptocurrencies, one where
ownership, scarcity, and creativity converge.
## Privacy and anonymity in cryptocurrencies
While blockchains are transparent, some cryptocurrencies and privacy protocols
focus on preserving user anonymity.
### Privacy features
* Obfuscation of sender and receiver addresses
* Confidential transaction amounts
* Hidden transaction graphs and metadata
### Privacy mechanisms
* **Ring signatures** (Monero): Mix real transactions with decoys
* **zk-SNARKs** (Zcash): Allow private transfers with zero-knowledge proofs
* **Stealth addresses**: Enable untraceable receiver identities
* **Mimblewimble** (Grin, Beam): Aggregated and compact transaction design
### Trade-offs
* Reduced traceability vs. regulatory concerns
* Greater computational complexity
* Optional privacy vs. default privacy approaches
Privacy-preserving cryptocurrencies are vital for financial freedom, especially
in oppressive regimes or under surveillance-heavy systems.
## Sustainability and energy in cryptocurrency networks
Environmental concerns, particularly with PoW-based cryptocurrencies, have
driven debate over sustainability and energy use.
### PoW energy profile
* Bitcoin mining consumes energy at scale comparable to small nations
* Relies on competition and physical infrastructure for security
* Incentivizes renewable energy use where it is cheapest
### Responses and alternatives
* Migration to PoS (Ethereum merged to PoS in 2022)
* Layer 2 solutions reduce on-chain load
* Mining efficiency improvements and heat recycling
* Token incentives for carbon offset and regenerative finance (ReFi)
Sustainable cryptocurrency development focuses on efficiency, environmental
offsetting, and emerging green-native protocols.
## Public narratives and adoption psychology
Cryptocurrency movements are shaped by both technology and culture. Public
narratives define user behavior, perception, and investment cycles.
### Key themes
* **Bitcoin as digital gold**: Store of value, hedge against inflation
* **Ethereum as programmable money**: Infrastructure for decentralized
applications
* **Web3 and ownership**: Users control their data, assets, and identity
* **DeFi as open finance**: Equal access to financial tools globally
* **NFTs as digital creativity**: Scarcity and monetization of culture
### Behavioral dynamics
* Hype cycles, bubbles, and market crashes
* FOMO and network effects
* Meme culture and viral adoption (e.g., DOGE, PEPE)
* Influence of key personalities and influencers
Narratives change over time but serve as a powerful force in directing capital,
community, and attention within cryptocurrency ecosystems.
## Layer 2 solutions and scalability advancements
As demand on Layer 1 blockchains increases, Layer 2 (L2) solutions provide
scalability without compromising security or decentralization.
### Types of Layer 2
* **Rollups**: Bundle transactions and post compressed data to L1
* Optimistic rollups (Arbitrum, Optimism)
* ZK-rollups (zkSync, Starknet)
* **State channels**: Off-chain agreements settled on-chain when necessary
* **Plasma**: Child chains with root chain verification
* **Sidechains**: Independent chains bridged to mainnet (e.g., Polygon PoS)
### Benefits
* Lower fees and higher throughput
* Fast confirmation times
* Extended composability and smart contract support
Layer 2s unlock mainstream scalability, allowing cryptocurrency applications to
serve millions of users with minimal cost and friction.
## Economic theories and monetary design in cryptocurrency
Cryptocurrencies are not just technological innovations; they also experiment
with novel economic systems and monetary policies.
### Economic schools of influence
* **Austrian economics**: Emphasis on sound money, fixed supply (Bitcoin)
* **Modern monetary theory**: Inspiration for flexible stablecoin models
* **Game theory**: Underpins incentive design in consensus and governance
* **Post-Keynesian thinking**: Used in community treasury and resource
allocation mechanisms
### Monetary models in crypto
* **Deflationary**: Supply decreases over time (e.g., BNB burn model)
* **Inflationary**: Rewards distributed to validators (e.g., Ethereum
post-merge)
* **Elastic supply**: Adjusts based on demand (e.g., rebase tokens like
Ampleforth)
* **Dual-token models**: Separate utility and governance or collateral tokens
(e.g., Maker's DAI and MKR)
These models are tested in open markets, offering live experiments in
programmable economic design.
## Governance models in cryptocurrency protocols
Governance determines how changes are made to protocols, treasuries are managed,
and communities evolve.
### On-chain governance
* Voting is executed via smart contracts
* Tokens represent stake and decision power
* Proposals have thresholds, quorums, and timelocks
* Examples: Compound, Uniswap, Curve
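Token-weighted tallying with a quorum check reduces to a short function. The
quorum basis points and simple-majority rule below are illustrative parameters,
not any specific protocol's logic:

```ts
interface Vote {
  voter: string;
  weight: bigint; // token balance or delegated voting power
  support: boolean;
}

function tally(
  votes: Vote[],
  totalSupply: bigint,
  quorumBps: number, // e.g., 400 = 4% of total supply must participate
): "passed" | "failed" | "no quorum" {
  const cast = votes.reduce((sum, v) => sum + v.weight, 0n);
  if (cast * 10_000n < totalSupply * BigInt(quorumBps)) return "no quorum";
  const forVotes = votes.filter((v) => v.support).reduce((s, v) => s + v.weight, 0n);
  return forVotes * 2n > cast ? "passed" : "failed"; // simple majority of votes cast
}

const result = tally(
  [
    { voter: "0xA", weight: 600_000n, support: true },
    { voter: "0xB", weight: 300_000n, support: false },
  ],
  10_000_000n,
  400,
);
console.log(result); // "passed"
```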
### Off-chain governance
* Social consensus and coordination on forums, GitHub, or Discord
* Voting conducted via tools like Snapshot (off-chain signaling)
* Maintainers and multisig signers execute decisions manually
* Examples: Bitcoin, Ethereum core protocol upgrades
### Governance tools
* Proposal builders and templates
* Delegate registries and vote delegation
* Real-time dashboards for voter participation and proposal impact
Strong governance aligns community incentives and ensures protocol adaptability.
## Risks and vulnerabilities in cryptocurrency networks
Cryptocurrency ecosystems face risks across technical, economic, social, and
legal dimensions.
### Smart contract risks
* Bugs or logic errors leading to fund loss
* Reentrancy attacks (e.g., The DAO hack)
* Oracle manipulation (e.g., flash loan exploits)
* Insufficient testing or audit coverage
### Protocol-level risks
* Economic attacks (e.g., stablecoin de-pegs)
* Governance capture or voter apathy
* Consensus manipulation (e.g., 51% attacks)
### User risks
* Phishing and wallet compromise
* Key mismanagement and loss of funds
* Scams, rug pulls, and social engineering
Risk mitigation strategies include audits, bug bounties, insurance protocols,
multisig wallets, and circuit breaker contracts.
## Developer ecosystems and open-source collaboration
Open-source development is central to cryptocurrency innovation. Protocols
compete for talent, community, and composability.
### Developer tools
* Smart contract frameworks: Hardhat, Foundry, Truffle
* Programming languages: Solidity, Vyper, Rust, Move
* Blockchain clients and APIs: Web3.js, Ethers.js, viem
* Indexing: The Graph, subgraphs, third-party APIs
### Ecosystem support
* Grants and public goods funding (e.g., Gitcoin, Ethereum Foundation)
* Developer DAOs and guilds
* Testnets and devnets for experimentation
* Hackathons and incentive programs
The success of a cryptocurrency often correlates with the strength, size, and
openness of its developer ecosystem.
## Future outlook for cryptocurrencies
Cryptocurrencies are moving beyond speculative assets into foundational
infrastructure for the digital economy.
### Long-term trends
* **Integration with traditional finance**: Tokenized securities, stablecoins,
and institutional custody
* **Decentralized identity and reputation**: Wallet-linked credentials and
social graphs
* **Modular blockchain architectures**: App-specific chains, shared sequencers,
rollup-as-a-service
* **AI + crypto convergence**: Autonomous agents, reputation markets,
decentralized inference
* **Mass adoption**: Embedded crypto in social apps, games, and payment
platforms
Cryptocurrencies will continue evolving, driven by new use cases, social
coordination, and programmable innovation.
## Glossary of key cryptocurrency terms
Understanding cryptocurrency involves navigating a wide range of technical and
financial terms. This glossary provides concise definitions for common concepts.
### Core terms
* **Blockchain**: A distributed ledger recording transactions in a sequential,
immutable format
* **Cryptocurrency**: A digital asset used as money or utility, secured by
cryptography and blockchain
* **Wallet**: Software or hardware that stores private keys and facilitates
interactions with the blockchain
* **Private key**: A secret cryptographic key used to authorize transactions
* **Public address**: A blockchain-visible address derived from the public key,
used to receive assets
### Network and protocol
* **Node**: A computer that participates in the network by validating or
relaying transactions
* **Validator**: A participant responsible for block production and consensus in
PoS systems
* **Miner**: A participant solving PoW challenges to add new blocks and earn
rewards
* **Consensus mechanism**: The method by which a network agrees on the current
state (e.g., PoW, PoS)
### Token types
* **Native token**: A base asset used for fees and consensus in a blockchain
(e.g., ETH, SOL)
* **ERC-20**: Standard for fungible tokens on Ethereum
* **ERC-721**: Standard for NFTs
* **Stablecoin**: Token pegged to fiat currency (e.g., USDC, DAI)
* **Governance token**: Token used to vote on protocol decisions
### Financial and DeFi
* **DEX**: Decentralized exchange
* **Liquidity pool**: A pool of assets enabling token swaps without traditional
order books
* **Yield farming**: Strategy of providing liquidity to earn token incentives
* **Impermanent loss**: Potential loss from providing liquidity due to price
divergence
* **Staking**: Locking tokens to support consensus or earn yield
### Security and privacy
* **Reentrancy attack**: Exploit where external calls re-enter a contract
unexpectedly
* **Slashing**: Penalty for malicious validator behavior
* **zk-SNARKs**: Zero-knowledge proofs for privacy-preserving computations
* **Cold wallet**: Offline storage for maximum key security
***
## Token lifecycle and transaction flow
Understanding the journey of a token across its lifecycle is essential to
understanding how cryptocurrency systems work.
### Token lifecycle stages
1. **Creation**: Deployed via smart contract or genesis block
2. **Distribution**: Airdrop, ICO, mining, staking, or bonding curve
3. **Listing**: Made available for trading on exchanges or DEXs
4. **Transfer**: Sent between wallets with optional logic (tax, limits)
5. **Utility**: Used for payments, governance, or service access
6. **Burning**: Sent to inaccessible address to reduce supply
7. **Redemption**: Converted back to another asset or value unit
### Sample ERC-20 transfer flow
* User signs transaction with wallet
* TX is broadcast to mempool
* Validator includes TX in block
* Contract updates balances
* Event logs notify frontends or dApps
This flow is secured, transparent, and final once confirmed.
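From a dApp's perspective, the same flow can be sketched with ethers.js (v6).
The RPC URL, private key variable, and token and recipient addresses are
placeholders:

```ts
import { Contract, JsonRpcProvider, Wallet, parseUnits } from "ethers";

const ERC20_ABI = [
  "function transfer(address to, uint256 value) returns (bool)",
  "event Transfer(address indexed from, address indexed to, uint256 value)",
];

async function sendTokens(): Promise<void> {
  const provider = new JsonRpcProvider("https://rpc.example.org"); // placeholder RPC
  const wallet = new Wallet(process.env.PRIVATE_KEY!, provider);   // signing key
  const token = new Contract("0xTokenAddress", ERC20_ABI, wallet); // placeholder address

  // Broadcast the transfer; it waits in the mempool until a validator includes it.
  const tx = await token.transfer("0xRecipientAddress", parseUnits("10", 18));
  const receipt = await tx.wait(); // resolves once the block is accepted
  console.log("included in block", receipt?.blockNumber);
}

sendTokens().catch(console.error);
```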
Cryptocurrency is more than a financial trend; it is a foundational shift in
how societies manage value, governance, and digital ownership.
From Bitcoin’s immutable scarcity to Ethereum’s composable logic, from
zero-knowledge breakthroughs to DAO treasuries and DeFi markets —
cryptocurrencies represent an ongoing experiment in redesigning financial and
social systems.
Whether used for remittances in underserved regions, powering virtual economies
in the metaverse, or securing global supply chains, cryptocurrency is no longer
optional to understand; it is becoming essential.
As we move into a future shaped by open protocols, programmable money, and
digital-native networks, cryptocurrency will continue to redefine the boundaries
of what money can do.
file: ./content/docs/application-kits/asset-tokenization/use-cases/equity-tokenization.mdx
meta: {
"title": "Equity tokenization",
"description": "A comprehensive guide to equity tokenization using blockchain, including cap table automation, investor onboarding, and lifecycle management"
}
## Introduction to equity tokenization
Equity tokenization refers to the digital representation of ownership shares in
a company or legal entity on a blockchain network. These tokens carry the same
rights and obligations as traditional shares, such as dividends, voting rights,
and liquidation preference, but offer far greater efficiency, transparency, and
programmability.
Traditional equity issuance, especially in private markets, is plagued by manual
record-keeping, inefficient fundraising processes, and fragmented shareholder
management. Cap tables are stored in spreadsheets, shares are transferred
through wet-ink signatures, and investor rights are enforced via legal
intermediaries. These constraints limit access to capital, increase transaction
costs, and reduce liquidity.
By tokenizing equity, companies create a programmable representation of shares
that can be issued, transferred, and governed through smart contracts. This
enables seamless investor onboarding, real-time cap table updates, automated
dividend distribution, and enhanced liquidity through regulated secondary
markets or peer-to-peer transfers.
Equity tokenization is not about replacing legal structures; it’s about
upgrading how equity is issued and operated in a digital-first world.
## Limitations of traditional equity management
Equity issuance and cap table management have remained largely unchanged for
decades. The traditional model creates friction for both founders and investors,
especially in private companies.
### Key challenges
* **Manual processes**: Equity issuance, transfer, and record-keeping rely on
PDFs, spreadsheets, and email workflows
* **Lack of transparency**: Investors have limited visibility into ownership
changes or dilution events
* **Compliance complexity**: Jurisdictional rules, accreditation checks, and
transfer restrictions are enforced manually
* **High friction fundraising**: Subscription agreements, KYC, and payments
require legal and administrative overhead
* **Secondary market illiquidity**: Selling shares requires legal consent,
broker involvement, and trust in counterparties
For early-stage and growth companies, these inefficiencies lead to longer
fundraising cycles, reduced investor reach, and increased legal risk.
## What is a cap table?
A capitalization table (cap table) is a record of the equity ownership structure
of a company. It outlines how shares are allocated among founders, investors,
employees, and other stakeholders.
### Typical components
* Shareholder name or entity
* Share class (common, preferred, etc.)
* Number of shares or tokens
* Ownership percentage
* Vesting schedule (if applicable)
* Rights (voting, dividend, liquidation preference)
The cap table evolves over time as new equity rounds are issued, options are
exercised, or shares are transferred. Maintaining an accurate, auditable, and
real-time cap table is critical for governance, reporting, and valuation.
In the tokenized model, the cap table is embedded within the blockchain itself —
with every transaction updating the shareholder ledger programmatically.
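As a sketch, a cap table entry and its derived ownership percentage can be
modeled directly; the field names mirror the components listed above:

```ts
interface CapTableEntry {
  holder: string;
  shareClass: "common" | "preferred";
  shares: bigint;
}

// Ownership percentage with two decimals, using integer math as a
// smart contract registry would.
function ownershipPct(entry: CapTableEntry, table: CapTableEntry[]): number {
  const totalShares = table.reduce((sum, e) => sum + e.shares, 0n);
  return Number((entry.shares * 10_000n) / totalShares) / 100;
}

const table: CapTableEntry[] = [
  { holder: "founder", shareClass: "common", shares: 7_000_000n },
  { holder: "seed-investor", shareClass: "preferred", shares: 3_000_000n },
];
console.log(ownershipPct(table[0], table)); // 70
```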
## Equity token structure and rights
A tokenized equity instrument mirrors traditional share structures, encoded as a
smart contract that manages ownership, transfer rules, and rights.
### Token attributes
* **Name and symbol**: Identifiers for the equity token
* **Decimals and supply**: Number of shares and precision
* **Holder registry**: Mapping of wallet addresses to shareholder IDs
* **Transfer conditions**: Whitelist requirements, lockups, and jurisdictional
restrictions
* **Governance hooks**: Voting eligibility, quorum logic, and delegation
* **Payout logic**: Dividend distribution based on token holdings
### Equity rights encoded on-chain
* **Voting**: Token-weighted voting via governance contracts
* **Dividends**: Stablecoin or fiat payments distributed automatically
* **Liquidation preference**: Tranching and payout hierarchy encoded in
contract logic
* **Vesting**: Time-based or milestone-based token release for
founders/employees
Tokenized equity does not require reinventing securities law; it brings the
same rights into a verifiable and programmable format, reducing disputes and
legal overhead.
## Technical standards for equity tokens
To ensure interoperability and upgradeability, equity tokens often follow
standardized protocols across EVM-compatible blockchains.
### Common standards
* **ERC-20 with access control**: Basic fungible token with transfer
restrictions
* **ERC-1400 / ERC-1410**: Security token standard for compliant equity
instruments
* **ERC-3643 (T-REX)**: Includes modular compliance layers, identity management,
and on-chain documentation references
### On-chain features enabled by standards
* Role-based permissions for issuers, verifiers, and investors
* Partitioning of tokens (e.g., separate tranches or classes)
* Document linkage (offering memorandum, shareholder agreement)
* Transfer pre-checks via modular compliance contracts
Using well-adopted standards simplifies integration with exchanges, custody
platforms, and governance interfaces.
## Cap table tokenization architecture
Tokenizing a cap table involves replacing a spreadsheet or legal ledger with a
blockchain-native smart contract registry.
### Core components
* **Issuer contract**: Deploys and manages equity token supply
* **Compliance layer**: Ensures only eligible investors can hold or transfer
tokens
* **Investor registry**: Maps wallet addresses to legal identities
* **Governance module**: Facilitates voting and shareholder proposals
* **Payout engine**: Automates dividend or profit-sharing distributions
* **Dashboard UI**: Frontend for founders, investors, and legal teams
Each component interacts through on-chain events and APIs, enabling real-time
updates, immutable history, and external auditability.
### Cap table updates
* New issuance (mint)
* Secondary transfers (subject to compliance)
* Option exercises (triggered by HR or vesting contracts)
* Shareholder exits (burn or treasury redemption)
This model transforms the cap table from a static document to a living, secure,
and transparent system.
## Use cases for equity tokenization
Tokenized equity infrastructure serves multiple types of companies and investor
structures. Common use cases include:
### Venture-backed startups
* Streamlined seed, Series A, and follow-on rounds
* Automated vesting for founders and employees
* Cap table clarity for due diligence and exit planning
### Private equity and funds
* Tokenized LP units with programmable waterfall logic
* Real-time NAV tracking and investor reporting
* Simplified capital call and distribution workflows
### Real estate and asset holding companies
* Tokenized shares of SPVs for individual buildings or projects
* Investor onboarding with AML/KYC and accredited investor verification
* On-chain cash flow distribution from rent or profit
### Franchises and cooperatives
* Community-owned or member-driven equity structures
* Transparent share issuance and governance participation
* Regulated secondary liquidity through bulletin boards or exchanges
These use cases reduce issuance cost, broaden access, and modernize governance
for previously illiquid and manual equity systems.
## Investor onboarding and compliance workflows
Equity tokenization platforms streamline the investor onboarding process by
integrating KYC, AML, and accreditation checks directly into the issuance and
transfer logic of the equity tokens.
### Onboarding flow
1. **Investor signs up** via web or mobile interface
2. **Identity verification** using KYC provider APIs (e.g., Sumsub, Persona)
3. **Accreditation check** for applicable jurisdictions (e.g., US, EU)
4. **Wallet linkage** to verified investor identity
5. **Subscription agreement** digitally signed and stored off-chain (e.g., on
IPFS), with its hash referenced on-chain
6. **Equity tokens minted** to investor's wallet upon payment confirmation
### Compliance mechanisms
* **Whitelist enforcement** at smart contract level
* **Jurisdictional flags** for each investor wallet
* **Transfer pre-checks** to prevent unauthorized resale
* **Token lockups** for vesting, cliff periods, or founder restrictions
Investor onboarding becomes seamless, secure, and compliant, supporting both
institutional and retail participation.
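These compliance mechanisms amount to a pre-transfer check. A simplified
off-chain model of that check, with field names and the deny list chosen purely
for illustration:

```ts
interface InvestorRecord {
  kycVerified: boolean;
  jurisdiction: string; // ISO country code
  lockupEndsAt: number; // unix timestamp; 0 = no lockup
}

const BLOCKED_JURISDICTIONS = new Set(["XX"]); // placeholder deny list

function canTransfer(
  from: InvestorRecord | undefined,
  to: InvestorRecord | undefined,
  now: number,
): { ok: boolean; reason?: string } {
  if (!from || !to) return { ok: false, reason: "unknown wallet" };
  if (!to.kycVerified) return { ok: false, reason: "recipient not KYC-verified" };
  if (BLOCKED_JURISDICTIONS.has(to.jurisdiction)) {
    return { ok: false, reason: "recipient jurisdiction restricted" };
  }
  if (now < from.lockupEndsAt) return { ok: false, reason: "sender tokens in lockup" };
  return { ok: true };
}
```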
## Smart contract modules for equity lifecycle
Tokenized equity systems are built with composable smart contract modules that
handle issuance, transfer, payout, and governance.
## Dividend distribution logic
Dividends in tokenized equity can be distributed automatically using smart
contracts that trigger payments based on wallet balances at record dates.
### Process
1. **Company deposits payout** amount in contract (in stablecoins or native
token)
2. **Snapshot block** captures current token holders and balances
3. **Pro-rata calculation** based on share class and ownership percentage
4. **Distribution execution** using looped transfers or Merkle proof-based
claims
5. **Investor notifications** via UI or blockchain events
### Dividend modes
* **Push**: Company initiates payout directly
* **Pull**: Investors claim their share via interface
* **Scheduled**: Periodic (monthly, quarterly) distributions
Smart contracts remove administrative burden and reduce error or dispute risk in
dividend issuance.
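The snapshot-and-distribute step is a short reduction over balances. A sketch
assuming integer token balances and a stablecoin payout pool, where rounding
dust simply remains in the pool:

```ts
// Distribute `pool` stablecoin units pro-rata over a snapshot of balances.
// Integer math mirrors on-chain behavior; remainders stay undistributed.
function proRata(snapshot: Map<string, bigint>, pool: bigint): Map<string, bigint> {
  const totalShares = [...snapshot.values()].reduce((a, b) => a + b, 0n);
  const payouts = new Map<string, bigint>();
  for (const [holder, shares] of snapshot) {
    payouts.set(holder, (pool * shares) / totalShares);
  }
  return payouts;
}

const snapshot = new Map([
  ["0xA", 600n],
  ["0xB", 400n],
]);
console.log(proRata(snapshot, 1_000_000n)); // 0xA -> 600000n, 0xB -> 400000n
```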
## Secondary market strategies for equity tokens
Liquidity is one of the key benefits of tokenizing equity. Platforms may support
compliant secondary trading through multiple mechanisms.
### Trading models
* **Bulletin boards**: Match buyers and sellers with off-chain agreements and
on-chain settlement
* **Whitelisted DEXs**: Permissioned exchanges with KYC-only wallets
* **Security token marketplaces**: Licensed platforms offering compliant
secondary liquidity (e.g., tZERO, INX)
* **P2P Transfers**: Direct wallet-to-wallet transactions with transfer logic
enforced by smart contracts
### Liquidity features
* Transfer approvals for private companies
* Volume limits for regulatory compliance
* Escrow services for fiat or stablecoin settlement
* Price discovery via bonding curves or auction mechanisms
Secondary market tools extend the usability and reach of tokenized equity while
remaining within regulatory boundaries.
## Alignment with legal structures
Tokenized equity does not eliminate legal contracts; it enhances and enforces
them through code. Proper alignment between smart contracts and legal
documentation ensures enforceability.
### Legal documents
* Shareholder agreements
* Subscription agreements
* Corporate resolutions
* Offering memoranda
* Governance charters
### Alignment strategies
* Off-chain documents referenced in on-chain token metadata
* Dual-record systems with blockchain as single source of truth
* Jurisdictional compliance baked into token transfer logic
* Notarization or timestamping of signed agreements for dispute resolution
Tokenized equity should integrate with existing corporate law rather than
compete with it, reducing ambiguity and improving transparency.
## Multi-class equity tokenization
Equity tokenization supports multiple share classes, each with unique rights and
privileges. Smart contracts can issue, track, and enforce conditions across
common, preferred, or special-purpose equity instruments.
### Typical share classes
* **Common shares**: Standard voting and economic rights
* **Preferred shares**: Priority in dividends and liquidation, sometimes
convertible
* **Non-voting shares**: Economic rights without governance access
* **Employee options**: Vesting-based equity convertible into common shares
## Exit and liquidity event management
Tokenized equity platforms can encode exit strategies into smart contracts,
automating investor return scenarios and compliance.
### Common events
* **M\&A**: Tokens are converted to acquirer assets or settled at agreed price
* **IPO**: Tokenized shares convert to listed equity
* **Buyback**: Company offers repurchase at predefined terms
* **Redemption**: Time-based or event-based token burning in exchange for payout
### Smart contract logic
* Trigger events by board or majority vote
* Lock token transfers during transaction finalization
* Execute pro-rata payouts based on cap table snapshot
* Manage partial or full conversions into new token structures
These mechanisms reduce legal ambiguity and administrative complexity in exit
workflows.
## Integration with fundraising platforms and investors
Equity tokenization platforms integrate with investor ecosystems to simplify
capital raising, investment management, and compliance.
### Platform integrations
* **Investor onboarding portals**: Embedded KYC and document flows
* **Payment gateways**: Fiat, stablecoin, or crypto investment support
* **Fund administration tools**: NAV calculation, investor statements
* **Digital custodians**: Wallet infrastructure and off-chain document vaults
* **Regulatory APIs**: Real-time filings and jurisdictional reporting
Tokenized equity fits seamlessly into digital fundraising platforms, venture
marketplaces, and digital banking APIs.
## Token vesting and employee ownership plans
Employee stock ownership and founder vesting can be enforced through
programmable schedules and smart contract logic.
### Vesting parameters
* **Start date**
* **Cliff period**
* **Vesting frequency**
* **Total duration**
* **Event-based acceleration**
### Smart contract enforcement
* Tokens held in a vesting contract
* Transfer restrictions until unlocked
* Partial unlocks emitted per schedule
* Dashboard interface for progress visibility
This eliminates spreadsheet-based tracking and manual release approvals,
creating transparency and trust in incentive structures.
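A cliff-plus-linear schedule reduces to a small function of elapsed time; the
parameter shapes below are assumptions for the sketch:

```ts
interface VestingSchedule {
  start: number;           // unix timestamp when vesting begins
  cliffSeconds: number;    // nothing vests before start + cliff
  durationSeconds: number; // total vesting duration from start
  totalTokens: bigint;
}

// Tokens vested at time `now` under a cliff + linear schedule.
function vestedAmount(s: VestingSchedule, now: number): bigint {
  if (now < s.start + s.cliffSeconds) return 0n; // before cliff: nothing
  const elapsed = BigInt(Math.min(now - s.start, s.durationSeconds));
  return (s.totalTokens * elapsed) / BigInt(s.durationSeconds); // linear thereafter
}

const schedule: VestingSchedule = {
  start: 0,
  cliffSeconds: 365 * 86_400,        // 1-year cliff
  durationSeconds: 4 * 365 * 86_400, // 4-year total vesting
  totalTokens: 4_000_000n,
};
console.log(vestedAmount(schedule, 2 * 365 * 86_400)); // 2000000n after 2 years
```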
## Analytics and reporting dashboards
Real-time access to equity metrics improves decision-making and transparency for
stakeholders.
### Key metrics
* Cap table breakdown by share class, holder type, jurisdiction
* Ownership concentration and dilution history
* Vesting progress and equity reserved
* Dividend payout history and yields
* Governance participation rates
### Technical components
* Subgraphs or on-chain indexers for data aggregation
* Role-based dashboards (issuer, investor, admin)
* Exportable reports for compliance and audits
* Alerts for voting, dividends, or transfer requests
Dashboards bridge the gap between on-chain equity data and business-level
insights, making tokenized equity actionable.
## Regulatory compliance in equity tokenization
Compliance with securities laws and corporate governance regulations is critical
for equity tokenization. Smart contracts and token architectures must be
tailored to regional legal frameworks to ensure enforceability and trust.
### Core compliance pillars
* **Securities classification**: Tokens must comply with local definitions of
equity instruments
* **Investor eligibility**: Accreditation, residency, and qualification checks
* **Transfer restrictions**: Lockups, jurisdictional bans, or board approval
logic
* **Disclosure obligations**: Offering documents, risk factors, and financials
### Enforcement methods
* On-chain KYC registry and transfer allowlists
* Smart contract pre-transfer compliance modules
* Dynamic jurisdiction mapping based on wallet identity
* Integration with regulated platforms and custodians
Legal wrappers or digital securities regulations such as Liechtenstein’s TVTG,
Switzerland’s DLT law, or Singapore’s sandbox frameworks can support full legal
alignment.
## Comparing equity token standards
Multiple token standards are available for creating programmable equity tokens,
each with trade-offs in flexibility, compliance, and integration.
### ERC-20 (with restrictions)
* Fungible, widely supported
* Needs external compliance layers
* No native document references or class logic
### ERC-1400
* Modular and composable
* Supports document linking, partitions, and hooks
* Tailored for security tokens
### ERC-1410
* Focused on partitioned ownership
* Allows tranches or share class structures
* Works well for multi-class equity
### ERC-3643 (T-REX)
* Compliance-centric, identity-linked
* Built-in role management and event hooks
* Developed for institutional adoption
Choosing the right standard depends on jurisdiction, investor base, and
integration roadmap.
## Regulated secondary markets and transfer frameworks
Secondary market activity for tokenized equity must comply with private
placement rules, transfer restrictions, and jurisdictional limits.
### Regulated models
* **ATS or MTF platforms**: Licensed venues for security token trading
* **Whitelisted DEXs**: Smart contract-enforced access with KYC-verified wallets
* **Peer-to-peer transfers**: Direct, permissioned exchanges with legal backend
* **Broker networks**: Transfer agents or custodians manage off-chain agreements
### Transfer controls
* Time-based lockups
* Ownership thresholds per investor type
* Jurisdictional allow/deny logic
* Audit trails for regulatory inspection
Smart contracts can enforce these frameworks natively, enabling programmable
liquidity that respects legal constraints.
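
One way to encode several of these controls natively is a restricted ERC-20 whose transfer hook checks an allowlist and per-holder lockups. A sketch assuming OpenZeppelin v5, where mints, burns, and transfers all route through `_update`; names are hypothetical:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

// Illustrative restricted share token: transfers succeed only between
// allowlisted (KYC-verified) wallets and after any lockup has expired.
contract RestrictedEquityToken is ERC20 {
    address public immutable issuer;
    mapping(address => bool) public allowlisted;     // KYC-verified wallets
    mapping(address => uint256) public lockedUntil;  // e.g., 12-month lockups

    constructor() ERC20("Restricted Equity", "REQ") {
        issuer = msg.sender;
    }

    function setAllowlisted(address wallet, bool ok) external {
        require(msg.sender == issuer, "issuer only");
        allowlisted[wallet] = ok;
    }

    function setLockup(address wallet, uint256 until) external {
        require(msg.sender == issuer, "issuer only");
        lockedUntil[wallet] = until;
    }

    // Pre-transfer compliance checks; every balance change passes through here.
    function _update(address from, address to, uint256 value) internal override {
        if (from != address(0)) { // skip sender checks on mint
            require(allowlisted[from], "sender not verified");
            require(block.timestamp >= lockedUntil[from], "lockup active");
        }
        if (to != address(0)) {   // skip recipient checks on burn
            require(allowlisted[to], "recipient not verified");
        }
        super._update(from, to, value);
    }
}
```

Jurisdictional allow/deny logic would extend the same hook with a wallet-to-jurisdiction mapping.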
## Automating investor rights and corporate actions
Tokenized equity platforms support end-to-end automation of investor rights,
including governance, disclosures, and consent mechanisms.
### Rights automation
* **Voting**: Direct or delegated token-based voting
* **Consent**: Approval or rejection of company actions (e.g., M\&A, new
issuance)
* **Notification**: On-chain event logs for shareholder updates
* **Information access**: Whitelisted data rooms and document vaults
### Corporate action logic
* Board-authorized mint or burn operations
* Dividends and bonus issuance
* Rights offerings and redemption flows
* Token upgrades via vote-triggered contract migration
These features replace manual legal coordination with verifiable smart contract
enforcement.
## Jurisdictional mapping and cross-border tokenization
Equity tokenization introduces new opportunities and complexities when issuing
or managing shares across multiple jurisdictions.
### Considerations
* Local recognition of digital securities
* Cross-border KYC and tax obligations
* Foreign ownership limits
* Securities offering exemptions (e.g., Reg D, Reg S)
### Jurisdictional alignment
* EU’s MiCA framework and national sandbox pilots
* Singapore’s Project Guardian and MAS guidance
* UAE’s ADGM and DIFC tokenization frameworks
* US SEC no-action letters and broker-dealer guidance
Global expansion of tokenized equity depends on region-specific strategies that
integrate legal, technical, and operational compliance.
## Equity token lifecycle and state transitions
Tokenized equity instruments follow a structured lifecycle, from issuance to
exit. Each stage is recorded on-chain, enabling full traceability and
auditability.
### Key stages
1. **Authorization**: Board or DAO approves issuance
2. **Minting**: Tokens created and allocated to wallets
3. **Distribution**: Investors complete onboarding and receive shares
4. **Transfers**: Subject to compliance logic and shareholder rights
5. **Vesting and unlocks**: For founders, employees, or reserved equity pools
6. **Corporate actions**: Dividends, buybacks, splits, conversions
7. **Exit**: M\&A, IPO, or token redemption logic
Each state transition is triggered via smart contract calls, governed by
permissions, and logged for legal and compliance tracking.
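
A compact way to model these transitions is an explicit on-chain state machine; the stage names and board-only gating below are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative lifecycle state machine: each transition is an explicit,
// permissioned contract call that emits an auditable event.
contract EquityLifecycle {
    enum Stage { Authorized, Minting, Distribution, Trading, Exited }

    Stage public stage = Stage.Authorized;
    address public immutable board;

    event StageAdvanced(Stage from, Stage to);

    constructor(address _board) {
        board = _board;
    }

    modifier atStage(Stage s) {
        require(stage == s, "wrong lifecycle stage");
        _;
    }

    // Advance to the next stage; logged for legal and compliance tracking.
    function advance() external {
        require(msg.sender == board, "board only");
        require(stage != Stage.Exited, "terminal stage");
        Stage previous = stage;
        stage = Stage(uint8(stage) + 1);
        emit StageAdvanced(previous, stage);
    }
}
```

The `atStage` modifier would then guard functions such as minting or transfers so they are callable only in the appropriate stage.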
## Developer toolkits and infrastructure
Equity tokenization projects rely on a mature and modular tech stack to build
secure, compliant, and scalable systems.
### Smart contract tools
* **OpenZeppelin**: Base contracts, access control, and governance
* **Foundry / Hardhat**: Contract development and testing frameworks
* **Superfluid / Sablier**: Streaming payments for dividends or vesting
### Identity and compliance
* **Civic, Fractal, Polygon ID**: Wallet-based KYC identity layers
* **Kleros**: Token-based dispute resolution mechanisms
* **T-REX**: Identity-linked token registry and compliance engine
### UI/UX and frontend
* **React, Next.js, Vue**: Web interfaces for issuers and investors
* **The Graph**: Real-time cap table and transaction indexing
* **Tailwind CSS / shadcn/ui**: Component libraries for enterprise dashboards
SDKs and APIs from issuance platforms can speed up MVP and production
deployments for startups, issuers, and service providers.
## Stakeholder roles and responsibilities
Tokenized equity introduces on-chain representations of traditional roles in the
equity lifecycle.
### Founders and issuers
* Initiate token issuance
* Set governance and compliance policies
* Trigger dividends, upgrades, or exits
### Investors and shareholders
* Own and transfer equity tokens
* Participate in governance
* Receive distributions or redemption payments
### Legal and compliance teams
* Map smart contract behavior to legal obligations
* Ensure regional compatibility and disclosures
* Interface with regulators and auditors
### Technical operators
* Maintain smart contracts and infrastructure
* Implement UI features and data pipelines
* Perform contract upgrades and bug fixes
Proper separation of roles, both automated and manual, ensures secure, compliant
equity token operations.
## Integrations and ecosystem partners
Successful equity tokenization deployments are built around integrations with
service providers, legal tech, and compliance rails.
### Key integrations
* **Digital signature platforms**: DocuSign, HelloSign for subscription
agreements
* **Custodians and banks**: Fiat on/off ramps, escrow accounts
* **Cap table services**: Carta, Pulley, or custom dashboards
* **Auditors and regulators**: Read-only access for financial and compliance
inspections
API-first architectures and modular design enable ecosystem interoperability and
future-proofing.
## Future outlook for equity tokenization
Tokenized equity is poised to reshape private capital markets, startup
financing, and corporate governance.
### Emerging trends
* **DAO equity models**: Hybrid legal-on-chain governance for internet-native
startups
* **Tokenized venture funds**: Tradable LP positions with pro-rata deal rights
* **Compliant global secondary markets**: 24/7 peer-to-peer share trading within
legal rails
* **Smart equity clauses**: Code-enforced legal agreements tied to token
ownership
### Strategic impacts
* Reduced friction in startup funding rounds
* Democratized access to equity ownership and liquidity
* Transparent, real-time governance and reporting
* Better alignment between founders, investors, and employees
As equity instruments become programmable, transparent, and composable, they
will unlock capital efficiency, stakeholder alignment, and new economic models.
file: ./content/docs/application-kits/asset-tokenization/use-cases/fund-tokenization.mdx
meta: {
"title": "Fund tokenization",
"description": "Institutional-Grade Digital Fund Management"
}
## Introduction to fund tokenization
Fund tokenization refers to the creation of blockchain-based representations of
investment fund units. These tokens encapsulate investor shares in a fund and
carry the associated economic and governance rights, such as returns,
redemption privileges, and voting, while offering improved liquidity,
automation, and transparency.
Fund managers operate in an environment where capital formation, investor
onboarding, compliance, and redemption processes are time-consuming and
expensive. Traditional fund administration systems rely on centralized
registries, batch-based settlements, manual reporting, and delayed NAV
disclosures. These inefficiencies increase the operational burden, limit
distribution capabilities, and reduce investor access.
Tokenization offers a programmable representation of fund units through smart
contracts. Investors receive digital tokens representing their stake in the
fund, while the underlying logic automates issuance, redemptions, distributions,
governance, and compliance. This results in lower costs, faster settlement,
broader distribution, and improved investor experiences.
Tokenized funds serve multiple models: hedge funds, venture funds, private
equity vehicles, ETFs, and alternative investment structures, across both
open-ended and closed-ended configurations.
## Traditional fund management limitations
Conventional fund administration relies on legacy software, intermediated
operations, and asynchronous workflows. Fund managers face challenges across
investor onboarding, capital allocation, and regulatory compliance.
### Key challenges
* **Manual record-keeping** of share registries and capital commitments
* **Delayed settlement cycles** due to bank wires and netting processes
* **High operational costs** from fund accountants and transfer agents
* **Lack of real-time NAV** and asset transparency
* **Illiquid shares** with lengthy redemption lockups
* **Investor onboarding friction** from jurisdictional compliance requirements
Fund tokenization provides a modern architecture that replaces fragmented
back-office infrastructure with transparent, secure, and real-time ledgers.
## Fund structures and token mapping
Fund tokens represent claims on the assets of the underlying legal structure,
which may vary depending on regulatory environment and investor target.
### Common structures
* **Limited partnerships (LPs)**: General Partner (GP) manages the fund; LPs
invest and receive tokenized units
* **Special purpose vehicles (SPVs)**: Single-asset or deal-specific holding
companies with tokenized ownership
* **Trusts or regulated funds**: Jurisdiction-specific regulated vehicles
issuing tokens as legal share equivalents
* **Feeder/master funds**: Multi-tiered structures issuing feeder tokens with
pass-through economics
### Tokenization approach
* Tokens issued on-chain mirror fund shares or units
* Smart contracts maintain the investor registry and cap table
* Each token represents a proportional claim on fund NAV, payouts, or
redemptions
* Metadata links tokens to legal documentation such as PPM, LPA, or NAV
statements
These models can support both perpetual funds (continuous issuance/redemption)
and closed-ended funds (fixed term, with no interim redemptions).
## Benefits of tokenized funds
Tokenizing fund shares results in operational, investor, and strategic
advantages for fund managers and stakeholders.
### Operational efficiency
* Automated NAV calculation and redemption
* Reduced reliance on fund accountants and intermediaries
* Real-time transaction recording with immutable audit trails
### Investor experience
* Instant subscription and redemption flows
* Access to dashboard with position, NAV, and performance tracking
* Direct wallet-based custody or integration with digital custodians
### Regulatory and reporting
* On-chain compliance enforcement via smart contract logic
* Standardized investor onboarding workflows with reusable KYC credentials
* Timestamped documents and role-based data room access
### Strategic access
* Broader distribution to global investor base
* Support for micro-investments via fractional token ownership
* Interoperability with DeFi tools for staking, collateralization, or liquidity
provisioning
Tokenized fund shares create a digitized wrapper around traditional vehicles —
enabling operational improvements without disrupting existing regulatory
alignment.
## Core architecture of a tokenized fund
Tokenized funds are structured as modular systems, combining smart contracts,
investor UIs, custody integrations, and off-chain legal infrastructure.
### Core components
* **Fund token smart contract**: Manages issuance, transfer, and redemption
logic
* **Investor registry**: KYC/AML-compliant wallet mappings
* **NAV oracle or admin module**: Periodically updates asset value on-chain
* **Redemption module**: Handles investor exits and fund asset netting
* **Distribution engine**: Executes dividends, interest, or profit-sharing
* **Admin dashboard**: For fund manager operations and approvals
* **Investor portal**: Wallet-based UI for onboarding and investment tracking
These components are connected via event-based data flows and REST or GraphQL
APIs for external integration.
## Smart contract design for fund tokens
Smart contracts define how fund shares are issued, transferred, redeemed, and
linked to compliance frameworks.
### Typical variables
* Token name and symbol
* Decimals and total supply
* NAV per share (mutable or oracle-fed)
* Compliance rules and jurisdiction mappings
* Lockup periods and redemption rules
* Distribution policies (e.g., reinvestment vs. payout)
### Functions
* `subscribe(amount, investorMetadata)`: Issues new tokens against stablecoin or
fiat
* `redeem(amount)`: Burns tokens and initiates asset payout or queue
* `updateNAV(numerator, denominator)`: Updates NAV with admin or oracle call
* `distributeDividends()`: Transfers payouts to wallet holders
* `pauseTransfers()`: Enables emergency halts or fund closure logic
Modular smart contracts ensure maintainability, upgrade paths, and compliance
alignment.
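
A simplified sketch of this function surface, assuming an 18-decimal share token, a 6-decimal stablecoin, and admin-fed NAV; compliance checks, investor metadata, and pause logic are omitted:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

interface IStable {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
    function transfer(address to, uint256 amount) external returns (bool);
}

// Sketch of a fund token with NAV-priced subscription and redemption.
// NAV is stored as numerator/denominator: stablecoin units per fund share.
contract FundToken is ERC20 {
    address public immutable admin;
    IStable public immutable stable;
    uint256 public navNumerator = 1e6;    // 1.00 in 6-decimal stablecoin units
    uint256 public navDenominator = 1e18; // per one whole 18-decimal share

    constructor(IStable _stable) ERC20("Fund Share", "FUND") {
        admin = msg.sender;
        stable = _stable;
    }

    // Admin (or an oracle adapter) posts the latest NAV.
    function updateNAV(uint256 numerator, uint256 denominator) external {
        require(msg.sender == admin, "admin only");
        navNumerator = numerator;
        navDenominator = denominator;
    }

    // Investor pays stablecoin and receives shares at the current NAV.
    function subscribe(uint256 amount) external {
        require(stable.transferFrom(msg.sender, address(this), amount), "pay failed");
        _mint(msg.sender, (amount * navDenominator) / navNumerator);
    }

    // Shares are burned and the NAV-equivalent stablecoin amount paid out.
    function redeem(uint256 shares) external {
        _burn(msg.sender, shares);
        require(stable.transfer(msg.sender, (shares * navNumerator) / navDenominator), "payout failed");
    }
}
```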
## Investor onboarding and compliance in tokenized funds
Investor onboarding in tokenized fund systems integrates digital identity
verification, subscription agreement signing, and wallet binding, creating a
seamless and compliant investment flow.
### Onboarding workflow
1. **Investor registration** via fund portal or DApp
2. **KYC/AML verification** through integrated providers (e.g., Sumsub, Veriff)
3. **Wallet linkage** to verified identity with on-chain or off-chain
attestation
4. **Jurisdiction check** for eligibility and regulatory limits
5. **Subscription agreement** signed digitally and recorded with metadata
6. **Stablecoin or fiat transfer** to fund’s treasury address
7. **Fund tokens minted** to investor’s wallet after verification
### Compliance mechanisms
* On-chain allowlist of verified wallet addresses
* Token transfer pre-checks enforcing jurisdictional rules
* Dynamic investor eligibility logic based on changing residency, accreditation,
or sanctions status
* Hash-linked metadata for document and identity records
Compliance logic can be updated dynamically using proxy contracts or modular
compliance layers tied to investor registries.
## Net asset value (NAV) tracking and update mechanisms
NAV updates reflect the value of underlying fund assets and determine token
redemption price and investor equity.
### NAV sources
* Manual NAV entry by fund admin
* NAV feed from oracle providers
* On-chain indexers for tokenized portfolios (e.g., DeFi yield funds)
### Update flow
1. **Fund administrator** uploads new NAV
2. **Smart contract stores** NAV numerator and denominator or per-token unit
price
3. **Events emitted** for investor dashboards to update metrics
4. **Redemptions and subscriptions** use updated NAV to determine amount
NAV updates can be hourly, daily, or monthly, depending on asset class and
reporting cadence.
## Redemption and liquidity models
Fund token redemption mechanisms vary based on fund structure, liquidity model,
and jurisdiction.
### Common models
* **Open-ended fund**: Investors can redeem any time at NAV (subject to lockup
or gate)
* **Closed-ended fund**: No redemption; tokens tradable on secondary markets
* **Queued redemption**: Redemptions processed in batches with delayed
settlement
* **Rolling lockup**: Minimum holding period per investment before redemption
eligibility
### Smart contract logic
* `requestRedemption(amount)`: Queues redemption and timestamps request
* `processRedemption(batchId)`: Admin-triggered batch execution
* `withdrawPayout()`: Investor claims stablecoin or on-chain asset equivalent
* `setRedemptionFee(feeRate)`: Optional penalty or liquidity protection logic
Well-designed redemption logic balances liquidity access with fund solvency and
compliance safeguards.
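
A sketch of the queued model with hypothetical names; NAV pricing, token burning, and redemption fees are reduced to a claimable balance to keep the pattern visible:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative redemption queue: requests are timestamped on entry and
// settled in admin-triggered batches, then claimed by the investor.
contract RedemptionQueue {
    struct Request {
        address investor;
        uint256 amount;
        uint64 requestedAt;
        bool processed;
    }

    address public immutable admin;
    Request[] public queue;
    mapping(address => uint256) public claimable; // payout owed per investor

    event RedemptionRequested(uint256 indexed id, address investor, uint256 amount);
    event RedemptionProcessed(uint256 indexed id);

    constructor() { admin = msg.sender; }

    // Queues a redemption and timestamps the request.
    function requestRedemption(uint256 amount) external returns (uint256 id) {
        id = queue.length;
        queue.push(Request(msg.sender, amount, uint64(block.timestamp), false));
        emit RedemptionRequested(id, msg.sender, amount);
    }

    // Admin settles a queued request; a real fund would burn fund tokens here
    // and price the payout at the batch NAV.
    function processRedemption(uint256 id) external {
        require(msg.sender == admin, "admin only");
        Request storage r = queue[id];
        require(!r.processed, "already processed");
        r.processed = true;
        claimable[r.investor] += r.amount;
        emit RedemptionProcessed(id);
    }
}
```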
## Distributions and yield flows
Income-generating funds must distribute dividends, interest, or performance fees
to token holders.
### Distribution types
* **Pro-rata income**: Distributed based on ownership share and share class
* **Performance fees**: Smart contract calculates high watermark and allocates
carry
* **Streaming payouts**: Distributed continuously via protocols like Superfluid
* **Manual payouts**: Admin deposits funds to contract and triggers batch payout
### Distribution workflow
1. Admin deposits payout asset to contract (e.g., USDC, DAI)
2. Snapshot block identifies eligible wallets and token balances
3. Smart contract executes payout to all wallets
4. Investors receive notification and transaction confirmation
All distributions are logged on-chain and displayed in dashboards for audit and
investor clarity.
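
A pull-based sketch of steps 1 through 3, assuming transfers are paused between funding and claims so live balances can serve as the snapshot; all names are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IShares {
    function balanceOf(address holder) external view returns (uint256);
    function totalSupply() external view returns (uint256);
}
interface IStable {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
    function transfer(address to, uint256 amount) external returns (bool);
}

// Illustrative pull-based distributor: the admin funds the contract, a
// per-share rate is fixed against current balances, and holders claim.
contract DividendDistributor {
    IShares public immutable shares;
    IStable public immutable payoutAsset;
    address public immutable admin;
    uint256 public ratePerShare; // payout units per share, scaled by 1e18
    mapping(address => bool) public claimed;

    constructor(IShares _shares, IStable _payoutAsset) {
        shares = _shares;
        payoutAsset = _payoutAsset;
        admin = msg.sender;
    }

    // Steps 1-2: deposit the payout asset and fix the pro-rata rate.
    function fund(uint256 amount) external {
        require(msg.sender == admin, "admin only");
        require(payoutAsset.transferFrom(msg.sender, address(this), amount), "fund failed");
        ratePerShare = (amount * 1e18) / shares.totalSupply();
    }

    // Step 3: each holder claims their share; the transfer is the on-chain log.
    function claim() external {
        require(!claimed[msg.sender], "already claimed");
        claimed[msg.sender] = true;
        uint256 owed = (shares.balanceOf(msg.sender) * ratePerShare) / 1e18;
        require(payoutAsset.transfer(msg.sender, owed), "payout failed");
    }
}
```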
## Fund manager and investor dashboards
Dashboards provide real-time visibility and interaction for both fund managers
and investors in tokenized fund systems.
### Fund manager features
* Mint/burn controls for subscriptions and redemptions
* NAV updates and performance tracking
* Redemption queue and liquidity monitoring
* Distribution triggers and reporting tools
* Compliance override and audit logs
### Investor portal features
* Portfolio value and NAV-based balance tracking
* Subscription and redemption status
* Payout and income history
* Governance proposals or votes (if enabled)
* Integrated messaging and support
Dashboards are typically built with React, Tailwind, viem, and subgraph-based
indexing for fast performance and reliable data.
## Secondary market access for fund tokens
Tokenized fund units may be eligible for secondary trading depending on the
legal structure, jurisdiction, and investor profile. While some tokens are
non-transferable, others allow peer-to-peer transactions or trading on licensed
platforms.
### Liquidity models
* **Transfer-restricted tokens**: Transfers permitted only between allowlisted
wallets
* **Compliance-aware DEX integration**: Transfer logic enforced by smart
contract middleware
* **Brokered secondary sales**: Legal and compliance layers off-chain with
on-chain settlement
* **ATS or MTF listing**: Tokenized funds listed on regulated exchanges for
security tokens
### Transfer controls
* Jurisdictional allow/deny lists
* Ownership caps per investor type or geography
* Minimum holding period enforcement (e.g., 12-month lockups)
* Event logs for audit and filing requirements
Secondary market access can be implemented progressively, with manual transfer
approvals or on-chain governance triggering new transfer rights.
## Fund categories and tokenization benefits
Different types of investment funds benefit from tokenization in different ways.
Each structure can be tailored with smart contracts that encode unique logic for
issuance, redemption, and governance.
### Hedge funds
* Enhanced investor access with digital onboarding
* Real-time NAV feeds from on-chain assets
* Token-based redemption requests replacing fax/email cycles
### Private equity and venture funds
* LP tokens representing capital commitments and carried interest
* Transfer rights based on time-based or event-based vesting
* DAO-managed fund governance and investment committees
### Real estate funds
* Tokenized SPVs for specific properties or portfolios
* On-chain rental income distribution and property valuation updates
* Lower investment minimums and broader distribution
### DeFi yield aggregators
* Vault tokens with auto-compounding strategies
* Transparent on-chain strategy execution and fee disclosures
* Live performance dashboards and instant redemption mechanisms
Each fund type can leverage blockchain infrastructure to streamline operations,
reduce counterparty risk, and enhance investor experience.
## Advanced compliance and transfer logic
Tokenized funds use programmable compliance frameworks to ensure investor
eligibility, jurisdictional alignment, and auditability.
### Smart compliance features
* **Dynamic allowlists**: Updated based on regulatory changes or fund policies
* **Document signatures**: On-chain verification of PPM, LPA, or subscription
agreement hashes
* **Geo-fencing**: Blocking wallet interactions based on IP or residency data
* **Transfer pre-checks**: Blocking transfers unless all criteria are met in
real time
### Third-party integrations
* AML screening APIs for wallet history
* Sanctions list checks (OFAC, EU, UN)
* Accreditation verification via credential providers
* Tax identity frameworks for reporting obligations
These capabilities ensure ongoing legal compliance without manual intervention.
## Governance rights in fund tokens
Governance rights may be offered to token holders, allowing investors to
participate in fund-level decisions such as strategy changes, new investments,
or fee structures.
### Voting models
* **Token-weighted voting**: 1 token = 1 vote
* **Quadratic voting**: Reduces whale influence in smaller funds
* **Multi-class governance**: Preferred vs. common token structures
* **Off-chain signaling with on-chain execution**: Snapshot + governance
contract
### Governance use cases
* Approval of new investment mandates
* Fee structure changes or NAV calculation method updates
* Dissolution or exit trigger votes
* Auditor or administrator selection
On-chain governance adds transparency and alignment but may be optional or
advisory in most fund jurisdictions.
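
For token-weighted voting, a ballot can read checkpointed balances through OpenZeppelin's `IVotes` interface so that weight is fixed at a snapshot block and cannot be double-counted by moving tokens mid-vote; the ballot contract itself is a sketch:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// IVotes is the (real) OpenZeppelin checkpointing interface implemented by
// ERC20Votes tokens; the surrounding ballot is illustrative.
interface IVotes {
    function getPastVotes(address account, uint256 timepoint) external view returns (uint256);
}

contract FundBallot {
    IVotes public immutable token;
    uint256 public immutable snapshotBlock;
    mapping(address => bool) public hasVoted;
    uint256 public forVotes;
    uint256 public againstVotes;

    constructor(IVotes _token) {
        token = _token;
        snapshotBlock = block.number - 1; // weight fixed before voting opens
    }

    // 1 token = 1 vote, measured at the snapshot block.
    function castVote(bool support) external {
        require(!hasVoted[msg.sender], "already voted");
        hasVoted[msg.sender] = true;
        uint256 weight = token.getPastVotes(msg.sender, snapshotBlock);
        if (support) forVotes += weight;
        else againstVotes += weight;
    }
}
```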
## Extending utility of fund tokens
Beyond basic investment representation, fund tokens can serve additional roles
across Web3 and DeFi ecosystems.
### Utility extensions
* **Collateral**: Used to mint stablecoins or borrow against NAV
* **Access**: Token gating to fund-sponsored services, reporting, or strategy
dashboards
* **Incentives**: Liquidity mining or staking rewards for long-term holders
* **Token upgrade paths**: Migration to new contract versions with improved
features
Fund tokens evolve from static receipts to interactive instruments with
programmable logic and ecosystem integration.
## Legal frameworks for tokenized funds
The legal enforceability of fund tokens depends on their alignment with existing
securities laws, fund structures, and regulatory definitions. Tokenization
should enhance, not replace, traditional legal documentation.
### Legal wrapper approaches
* **LP token mapping**: Tokens represent LP shares in a limited partnership; GP
retains management authority
* **SPV tokenization**: Shares of a special purpose vehicle are tokenized and
managed via shareholder agreements
* **Tokenized feeder funds**: On-chain issuance for feeder fund units that
invest in off-chain master fund
* **Regulated tokenized funds**: Full registration under jurisdictions like the
Cayman Islands, Liechtenstein, Switzerland, or Luxembourg
### Contractual alignment
* Token metadata links to PPM, LPA, and other fund documents
* Off-chain docs reference token ID, wallet address, and legal identity
* Court-recognizable timestamping via on-chain proofs or notary integrations
Maintaining dual-record systems (legal contracts off-chain and mirrored logic
on-chain) enables enforceability in dispute or audit scenarios.
## Fund administrator and auditor integration
Tokenized funds may collaborate with existing service providers or adopt
on-chain equivalents to fulfill required fund administration tasks.
### Administrator functions
* Capital call scheduling and collection tracking
* NAV calculation and reporting
* Redemption processing and compliance review
* Investor communication and reporting
### On-chain administrator tooling
* Admin-only smart contract functions (NAV updates, redemptions)
* Multi-sig controls for minting or fund closure
* Off-chain systems feed data to contracts via oracle bridges
* Read-only access for auditors and regulators
Tokenization augments administrators with transparency and automation, while
reducing reconciliation errors and manual delays.
## Performance fee and carried interest mechanics
Smart contracts support programmable logic for management and performance fees,
carried interest, and hurdle rates.
### Performance fee models
* **High watermark**: Fees apply only to new profits beyond previous peaks
* **Hurdle rate**: Minimum annualized return before fees apply
* **Crystallization period**: Defines when fees are realized and claimable
* **Tokenized carry**: Distributes profits to a carry wallet or GP token pool
### Example logic
```solidity
function calculateFee(uint nav, uint previousHigh, uint feeRate) public pure returns (uint) {
    // A fee accrues only on gains above the previous high watermark.
    if (nav > previousHigh) {
        // feeRate is expressed in basis points (10000 = 100%).
        return (nav - previousHigh) * feeRate / 10000;
    }
    return 0;
}
```
Automating these mechanics ensures investor alignment and transparent fee
disclosures.
## DAO-managed and community-governed funds
Decentralized Autonomous Organizations (DAOs) can operate tokenized funds,
allowing community-led investment strategies and governance.
### DAO fund models
* **Treasury-backed funds**: Community allocates DAO assets to fund strategies
* **Tokenized syndicates**: Permissioned DAOs pooled for deal-by-deal
investments
* **Grant or impact funds**: Token holders vote to allocate to public goods or
research
### Tooling
* Snapshot for off-chain voting
* Gnosis Safe for treasury control
* Coordinape or Karma for contributor coordination
* Quadratic voting and reputation-based weights
DAO-managed funds require clear documentation, smart contract security, and
responsible community engagement for success.
## Exit mechanics and fund dissolution
Tokenized funds may include structured exit mechanisms that determine how tokens
are redeemed or settled when the fund closes or hits maturity.
### Exit scenarios
* **Maturity-based closure**: Fixed-term funds distribute assets on end date
* **Asset liquidation**: Portfolio sold off, and stablecoins or other tokens
distributed pro-rata
* **Redemption window**: Open period for all redemptions before token contract
is closed
* **Exit vote**: Governance vote triggers fund wind-down or strategy change
### Final settlement logic
* Snapshot token holders
* Calculate NAV per share at closure
* Execute stablecoin transfers to all holders
* Emit closure event and revoke permissions
Structured exits ensure orderly transitions and maintain investor trust in
tokenized fund frameworks.
## Developer tools and infrastructure for fund tokenization
Tokenized fund ecosystems rely on developer toolkits to build secure,
customizable, and compliant infrastructure across the entire lifecycle.
### Smart contract development
* **Foundry and Hardhat**: Tooling for testing, deployment, and scripting
* **OpenZeppelin libraries**: Base ERC-20, access control, upgradeable patterns
* **ERC-4626**: Standard for tokenized yield-bearing vaults
* **Modular architecture**: Separate contracts for compliance, NAV, and payout
logic
### Frontend and UI/UX
* **Next.js / React**: Web frameworks for fund portals and dashboards
* **Tailwind / shadcn/ui**: Design systems for responsive and accessible
interfaces
* **Web3 libraries**: viem or ethers.js for wallet connections, event
subscriptions, and contract interactions
* **Subgraphs**: Index fund metadata, token balances, payout events, and NAV
history
These tools form the foundation for investor apps, fund manager portals, and
admin dashboards.
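
As a reference point, OpenZeppelin's ERC-4626 implementation reduces a basic yield-bearing vault to a few lines; `deposit`, `mint`, `withdraw`, and `redeem` then come from the standard, and compliance or fee layers would be built on top:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import {ERC4626} from "@openzeppelin/contracts/token/ERC20/extensions/ERC4626.sol";
import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";

// Minimal ERC-4626 vault: deposits of the underlying asset mint proportional
// shares, and the share price tracks the vault's asset balance.
contract FundVault is ERC4626 {
    constructor(IERC20 asset_)
        ERC20("Fund Vault Share", "fvSHARE")
        ERC4626(asset_)
    {}
}
```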
## Token lifecycle and event mapping
Fund tokens follow a full lifecycle from creation to redemption. Understanding
state transitions is essential for both legal and technical stakeholders.
### Lifecycle stages
1. **Creation**: Fund contract deployed, token initialized with parameters
2. **Subscription**: Investors complete KYC and receive tokens
3. **NAV updates**: Fund assets updated manually or by oracle
4. **Redemptions**: Token holders request and receive payouts
5. **Distributions**: Income or profit shared with holders
6. **Transfers**: Secondary transactions subject to compliance rules
7. **Exit**: Tokens settled or burned upon closure or redemption
Each stage emits on-chain events and interacts with investor interfaces or
compliance middleware.
## Ecosystem partnerships and integration
Tokenized fund platforms benefit from a network of service providers, DeFi
integrations, and legal tech partners.
### Strategic partners
* **Custodians**: Anchorage, Fireblocks, BitGo for regulated custody
* **Compliance providers**: Sumsub, Blockpass for KYC/AML
* **Stablecoins**: USDC, DAI, EURC for subscription and payouts
* **Auditors**: Armanino, Certora for financial or contract audits
* **Regulatory tech**: Notabene, Chainalysis for reporting and monitoring
Ecosystem alignment enables end-to-end product delivery with reduced operational
overhead.
## Global regulatory landscape
Tokenized funds must adhere to the evolving regulatory environments across
multiple jurisdictions.
### Regional frameworks
* **United States**: SEC regulation under Reg D, Reg S, or Reg A+; broker-dealer
or ATS licenses required for trading
* **European Union**: MiFID II and MiCA regulations; sandbox support in France,
Germany, and the Netherlands
* **Asia**: MAS regulatory sandbox (Singapore), SFC token guidance (Hong Kong),
RBI pilot projects (India)
* **Middle East**: DFSA and ADGM regulatory regimes in the UAE
Legal compliance should be handled proactively through licensed partners or
regulatory engagement.
## Future outlook for tokenized funds
Fund tokenization is evolving from proof-of-concept pilots to scalable,
regulated asset management platforms.
### Key trends
* **Tokenized money market funds** and short-term treasuries
* **Retail-accessible real estate and private equity**
* **Composability with DeFi**: Tokenized LP shares used in lending, staking, or
as DAO treasuries
* **Automated investment DAOs** using streaming subscriptions and real-time
allocation
* **Modular compliance SDKs** that handle global rules and reporting
As tokenized capital markets mature, funds will become programmable, composable,
and globally interoperable, reducing friction while maintaining security and
investor trust.
file: ./content/docs/application-kits/asset-tokenization/use-cases/stablecoin.mdx
meta: {
"title": "Stablecoins",
"description": "Comprehensive documentation on stablecoin architecture, classifications, mechanisms, and ecosystem use cases"
}
## Introduction to stablecoins
Stablecoins are blockchain-based digital assets designed to maintain a stable value relative to an external benchmark, typically a fiat currency like the US dollar or euro. By combining the programmability and transferability of crypto tokens with price stability, stablecoins serve as essential infrastructure in decentralized finance (DeFi), remittance systems, and on-chain financial applications.
Unlike traditional cryptocurrencies such as Bitcoin or Ether, which can experience significant price volatility, stablecoins aim to preserve purchasing power and enable predictable exchange value. They are commonly used for trading, settlement, borrowing, payments, and as collateral for other assets or contracts.
Stablecoins are not a single asset class; they represent a spectrum of mechanisms, from fiat-collateralized tokens held in bank accounts to algorithmically adjusted token supplies and fully crypto-backed systems. The design, collateral structure, and governance model of a stablecoin determine its behavior, scalability, and regulatory treatment.
## The case for stablecoins
Stablecoins address one of the primary limitations of blockchain-based currencies: volatility. Their stable value unlocks a range of applications and benefits that cannot be practically achieved with fluctuating tokens.
### Key advantages
* **Medium of exchange**: Enables everyday payments, pricing, and contracts
* **Unit of account**: Supports denominating values in fiat terms on-chain
* **Store of value**: More consistent preservation of capital over short timeframes
* **Bridge to traditional finance**: Enables seamless on/off ramps and compliance workflows
* **Liquidity anchor**: Used as base pairs and collateral in DeFi and exchanges
From cross-border settlements and payroll to NFTs and staking platforms, stablecoins power many of the most widely used blockchain applications today.
## Major types of stablecoins
Stablecoins are categorized based on the mechanism used to maintain their peg to a target value. Each model presents trade-offs in scalability, transparency, decentralization, and stability guarantees.
### Fiat-collateralized stablecoins
These tokens are backed 1:1 by fiat currency held in custodial accounts and are issued and redeemed by a centralized entity.
* **Examples**: USDC, USDT, EURC, GUSD
* **Collateral**: Bank deposits, T-bills, commercial paper, money market funds
* **Redemption**: Issuer guarantees redemption for fiat upon request
* **Use cases**: High-volume trading, compliance-friendly financial applications
### Crypto-collateralized stablecoins
These are overcollateralized with cryptocurrencies and operate through smart contracts, enabling non-custodial and permissionless issuance.
* **Examples**: DAI, MIM, LUSD
* **Collateral**: ETH, BTC, liquid staking tokens, LP tokens
* **Mechanism**: Users lock collateral and mint stablecoins up to a safe debt ceiling
* **Liquidation**: Triggered automatically if collateral value falls below thresholds
### Algorithmic stablecoins
These use supply control mechanisms, incentive loops, and smart contracts to maintain peg without explicit collateral.
* **Examples**: FRAX (partially algo), formerly UST, AMPL (rebase model)
* **Mechanism**: Dynamic minting and burning, oracle-fed price signals, market arbitrage
* **Risks**: Vulnerable to feedback loops and depegging under stress
### Hybrid models
Some stablecoins use a combination of collateral backing and algorithmic controls, aiming to achieve decentralization and efficiency.
* **Examples**: FRAX (fractional), sUSD (Synth-based), UXD (delta-neutral crypto hedge)
These categories are fluid and evolving, with many projects experimenting across the design spectrum.
## Design goals and trade-offs
Creating a sustainable stablecoin involves balancing multiple design factors that affect usability, security, adoption, and compliance.
### Key design goals
* **Price stability**: Maintain peg across volatility and market conditions
* **Liquidity**: Ensure sufficient issuance and redemption pathways
* **Scalability**: Support increasing supply without degrading performance
* **Transparency**: Provide verifiable information on reserves and mechanisms
* **Censorship resistance**: Operate independently of single points of control
* **Compliance**: Align with regulatory frameworks and user jurisdictions
### Trade-offs
* Centralized vs. decentralized governance
* Collateral efficiency vs. systemic safety
* Reserve transparency vs. privacy
* Redemption rights vs. transfer restrictions
A well-designed stablecoin makes these trade-offs explicit and manageable for its intended audience and use case.
## Technical architecture of stablecoins
The structure of a stablecoin system varies by type, but typically includes a token contract, mint/burn logic, collateral manager, and oracle integration.
### Token contract
* ERC-20 compliant for fungibility and integrations
* Supports metadata, transfer events, and balance tracking
* May include blacklisting, freezing, or pausing mechanisms
### Minting and redemption
* Custodial tokens use off-chain APIs and banking operations to mint/burn
* Crypto-backed systems use smart contract functions like `deposit()`, `mint()`, `burn()`
* Oracle feeds ensure peg accuracy and update price reference
### Collateral management
* Vaults, treasuries, or custodians hold the underlying reserves
* Liquidation contracts handle overcollateralized positions
* Auditors and oracles validate reserve sufficiency and performance
### Governance and upgrades
* DAO or multisig-controlled smart contracts manage parameters and upgrades
* Timelocks and proposal systems used for changes in stability fees, risk thresholds, or collateral whitelisting
Technical design must be secure, auditable, and responsive to evolving market and governance needs.
## Stablecoin issuance and redemption mechanisms
The stability of a stablecoin is tightly coupled to its issuance and redemption mechanisms. These determine how tokens enter and exit circulation and how the peg is maintained under changing demand.
### Fiat-backed issuance
* Users send fiat to the issuer's bank account
* Issuer mints an equivalent amount of stablecoins
* Redemption occurs when users return stablecoins for fiat withdrawal
* Reserves are managed off-chain and audited periodically
### Crypto-backed issuance
* Users deposit collateral into smart contract vaults (e.g., ETH, wBTC)
* Smart contracts mint stablecoins up to a collateralization threshold (e.g., 150%)
* Stablecoins are burned upon repayment of the loan
* Collateral is returned, minus fees or penalties
* Liquidation logic ensures collateral is sold if value drops below thresholds
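
A deliberately simplified CDP-style vault sketch of this flow, enforcing a 150% collateralization floor; liquidation auctions, stability fees, and careful rounding are omitted, and the oracle and stablecoin interfaces are assumptions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IStablecoin {
    function mint(address to, uint256 amount) external;
    function burn(address from, uint256 amount) external;
}
interface IPriceOracle {
    function ethPriceUsd() external view returns (uint256); // 18 decimals
}

// Illustrative collateral vault: lock ETH, mint stablecoin up to a 150%
// collateralization floor, burn to repay.
contract CollateralVault {
    IStablecoin public immutable stable;
    IPriceOracle public immutable oracle;
    uint256 public constant MIN_RATIO = 150; // percent

    mapping(address => uint256) public collateral; // in wei
    mapping(address => uint256) public debt;       // stablecoin, 18 decimals

    constructor(IStablecoin _stable, IPriceOracle _oracle) {
        stable = _stable;
        oracle = _oracle;
    }

    function deposit() external payable {
        collateral[msg.sender] += msg.value;
    }

    // Mint only while the position stays at or above the 150% floor.
    function mintStable(uint256 amount) external {
        debt[msg.sender] += amount;
        uint256 collateralUsd = (collateral[msg.sender] * oracle.ethPriceUsd()) / 1e18;
        require(collateralUsd * 100 >= debt[msg.sender] * MIN_RATIO, "below 150%");
        stable.mint(msg.sender, amount);
    }

    // Burning stablecoin reduces debt; collateral withdrawal would re-check the ratio.
    function repay(uint256 amount) external {
        stable.burn(msg.sender, amount);
        debt[msg.sender] -= amount;
    }
}
```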
### Algorithmic mechanisms
* Supply expands when price exceeds $1 (mint new coins, incentivize arbitrage)
* Supply contracts when price falls below $1 (burn coins or issue bonds)
* Peg depends on responsive market participants and trusted price feeds
### Hybrid structures
* Partially collateralized stablecoins use a mix of on-chain assets and algorithmic balancing
* May use rebase models, redemption rights, or secondary token (e.g., governance/coupon token) to absorb volatility
## Oracle systems for stablecoins
Stablecoins rely on price oracles to accurately determine collateral values, peg status, and trigger system behaviors like liquidation or rebase.
### Oracle sources
* Chainlink and other decentralized oracles
* Time-weighted average prices (TWAP) from DEXs
* Off-chain data feeds via API aggregators and bridges
* Multi-oracle configurations with fallback mechanisms
### Oracle responsibilities
* Feed fiat-crypto exchange rates (e.g., USD/ETH)
* Update price of collateral assets in crypto-backed systems
* Inform rebase or coupon mechanisms in algorithmic systems
* Signal depegging events and trigger stabilization routines
Oracle attacks (e.g., price manipulation, latency) represent one of the largest risks to stablecoin health. Systems should implement redundancy and tamper-proof logic.
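
A defensive read against Chainlink's `AggregatorV3Interface` (a real interface) illustrates the point: the answer is rejected if it is non-positive or older than a freshness window, the window itself being an illustrative parameter:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Chainlink's standard aggregator interface.
interface AggregatorV3Interface {
    function latestRoundData()
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

contract OracleGuard {
    AggregatorV3Interface public immutable feed;
    uint256 public constant MAX_AGE = 1 hours; // illustrative freshness window

    constructor(AggregatorV3Interface _feed) {
        feed = _feed;
    }

    // Reverts on non-positive or stale answers instead of acting on them.
    function safePrice() external view returns (uint256) {
        (, int256 answer, , uint256 updatedAt, ) = feed.latestRoundData();
        require(answer > 0, "invalid price");
        require(block.timestamp - updatedAt <= MAX_AGE, "stale price");
        return uint256(answer);
    }
}
```

Multi-oracle setups extend this pattern by comparing independent feeds and reverting on divergence.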
## Smart contract components of a decentralized stablecoin
Stablecoins built on public blockchains use modular smart contracts to execute their economic logic and user interactions.
### Core modules
* **Token contract**: ERC-20-compatible with mint/burn logic
* **Vault contract**: Manages collateral deposits and debt positions
* **Liquidator contract**: Auctions or sells collateral when under-collateralized
* **Oracle contract**: Feeds external prices into the system
* **Stability fee module**: Tracks borrowing costs and fee accrual
* **Governance module**: DAO or multisig controls parameter updates and upgrades
All interactions, from minting new tokens to triggering liquidations, happen transparently on-chain and emit events for monitoring and analytics.
## Cross-chain stablecoin deployments
Stablecoins often operate across multiple blockchains to serve users and dApps on different ecosystems.
### Deployment strategies
* **Native issuance**: Smart contracts deployed independently on each chain with custody bridges between
* **Bridged models**: Token minted on one chain and locked when bridged to another; synthetic version minted on destination
* **Canonical minting**: Stablecoin issuer authorizes direct minting on multiple chains, with oracles and custodians per network
### Interoperability tools
* LayerZero, Axelar, Wormhole for messaging and asset bridging
* Omnichain token standards (e.g., OFT, ERC-5164)
* Liquidity providers and market makers ensure cross-chain parity
Cross-chain minting and redemption processes must handle settlement risk, latency, and oracle dependency with strong verification layers.
## Composability in DeFi and beyond
Stablecoins are foundational to DeFi ecosystems. Their composability means they can be used as money legos across a wide array of protocols.
### DeFi integrations
* **DEXs**: Used in trading pairs (e.g., USDC/ETH)
* **Lending protocols**: As collateral or borrowable asset (e.g., Compound, Aave)
* **Staking**: Used in LP positions, farming strategies, and reward systems
* **Derivatives**: Used to settle futures, options, and perpetual contracts
* **DAOs**: Held in treasuries or used for on-chain budget proposals
### Other use cases
* Payroll and remittance tools (e.g., StablePay, Request Finance)
* NFT marketplaces (denominated in USDC or DAI)
* Real-world asset tokens and on-chain real estate
* Micro-payments, subscriptions, and streaming payments
The utility of a stablecoin grows as its integrations expand, making DeFi composability both a distribution strategy and a value driver.
## Risk models and failure modes in stablecoin systems
Every stablecoin architecture carries a set of inherent risks depending on its design. Understanding and modeling these risks is essential to building resilient systems that can handle market shocks and preserve the peg.
### Key risk categories
* **Peg deviation**: Failure to maintain 1:1 parity with fiat
* **Liquidity crunch**: Inability to redeem or trade at fair value
* **Smart contract bugs**: Code vulnerabilities or exploits
* **Oracle failure**: Incorrect price feeds triggering false liquidations
* **Governance abuse**: Malicious proposals or admin key compromises
* **Regulatory seizure**: Asset freezing or custodial shutdowns
### Examples of historic failures
* **UST/LUNA collapse**: Algorithmic feedback loop collapse and overreliance on reflexive value
* **Iron Finance**: Partial collateral model with panic-induced bank run
* **Basis Cash**: Inability to maintain sufficient demand for secondary token
A sound stablecoin must include mitigation strategies for each of these categories through modular controls, capital buffers, and transparency mechanisms.
## Governance frameworks
Stablecoins may be governed by centralized entities, multisig administrators, or decentralized autonomous organizations (DAOs). The choice of governance affects trust, flexibility, and legal exposure.
### Governance models
* **Centralized issuer**: Corporate entity governs minting, redemption, and compliance
* **Multisig governance**: Limited group of trusted actors manage upgrades and parameters
* **DAO governance**: Token-weighted voting or reputation systems control protocol-level changes
### Governable parameters
* Stability fees and redemption incentives
* Oracle sources and quorum thresholds
* Accepted collateral types and risk weights
* Minting limits, transfer permissions, or emergency pause switches
Transparent and auditable governance systems are essential for credibility and security, especially for crypto-native stablecoins.
## Regulatory frameworks and classification
Stablecoins are the subject of intense regulatory scrutiny worldwide. They touch on issues of consumer protection, financial stability, money transmission, and systemic risk.
### Regulatory classification
* **Payment instrument**: Recognized as digital money for transactions (e.g., EU MiCA)
* **Security**: If offering yields, governance rights, or investment expectations
* **Commodity or property**: Depending on jurisdiction (e.g., IRS treatment of crypto)
* **Bank-like liability**: Treated as depository instrument if redeemable 1:1
### Key jurisdictions
* **United States**: Oversight by SEC, CFTC, OCC, and FinCEN; pending legislation (e.g., Stablecoin TRUST Act)
* **European Union**: MiCA regulation defines e-money tokens vs. asset-referenced tokens
* **Asia**: Japan and Singapore have stablecoin-specific guidance and licensing
* **G20**: Ongoing global coordination via FSB and BIS frameworks
Regulated stablecoins must implement AML/KYC processes, disclosure policies, and capital reserve mechanisms to maintain licenses and public trust.
## Reserve audits and transparency
For fiat-backed stablecoins, third-party verification of reserves is critical to user confidence and regulatory approval.
### Transparency techniques
* Monthly or real-time attestation of bank holdings
* Proof of reserves via Merkle tree snapshots (see the sketch below)
* On-chain visibility of backing assets (e.g., tokenized T-bills)
* Independent audits by certified accounting firms
Stablecoins like USDC publish reserve breakdowns, while newer entrants use tokenized treasuries for real-time reserve composition.
For decentralized stablecoins, transparency includes:
* On-chain collateral dashboards
* Public liquidation event logs
* DAO voting records and fee accrual models
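
A sketch of on-chain verification for the Merkle-tree technique above: the attestor publishes a root over `(account, balance)` leaves, and any holder can check inclusion of their balance without seeing the full dataset; the sorted-pair hashing scheme is illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative proof-of-reserves attestation with Merkle inclusion proofs.
contract ReserveAttestation {
    bytes32 public merkleRoot; // published per attestation period
    address public immutable attestor;

    constructor() { attestor = msg.sender; }

    function publishRoot(bytes32 root) external {
        require(msg.sender == attestor, "attestor only");
        merkleRoot = root;
    }

    // Verifies that (account, balance) is a leaf under the published root,
    // hashing sibling pairs in sorted order up to the root.
    function verifyBalance(address account, uint256 balance, bytes32[] calldata proof)
        external
        view
        returns (bool)
    {
        bytes32 node = keccak256(abi.encodePacked(account, balance));
        for (uint256 i = 0; i < proof.length; i++) {
            node = node <= proof[i]
                ? keccak256(abi.encodePacked(node, proof[i]))
                : keccak256(abi.encodePacked(proof[i], node));
        }
        return node == merkleRoot;
    }
}
```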
## Stress testing and peg resilience
Resilience is tested during high volatility, redemption surges, or smart contract exploits. Stablecoins must include both proactive and reactive mechanisms to handle these shocks.
### Stress testing methods
* Simulations of mass redemptions and liquidity outflows
* Oracle manipulation scenarios and failover tests
* Collateral price drops and liquidation slippage
* Governance capture attempts and proposal attacks
### Peg defense tools
* Minting fees or redemption delays to slow bank runs
* Automated market operations using treasury reserves
* Circuit breakers or emergency pausing mechanisms
* Dynamic interest rate adjustments for crypto-backed debt
By modeling and preparing for worst-case scenarios, stablecoin systems can maintain credibility and market adoption over the long term.
## Adoption metrics and usage analysis
Stablecoins are among the most widely adopted applications in blockchain. Analyzing their usage metrics helps assess economic impact, user behavior, and areas of risk or growth.
### Key adoption indicators
* **Total supply in circulation**: Indicates demand and monetary base
* **On-chain activity**: Number of unique holders, transactions per day, wallet retention
* **Exchange listings**: Presence on centralized and decentralized markets
* **Redemption volume**: Rate at which stablecoins are converted back to fiat or collateral
* **Protocol integrations**: Number of DeFi platforms, wallets, and applications using the stablecoin
### Market-leading stablecoins (as of 2024)
* **USDT**: Highest circulating supply, broadest CEX support
* **USDC**: Strong compliance reputation and institutional adoption
* **DAI**: Most widely used decentralized stablecoin
* **LUSD**: Fully crypto-backed, censorship-resistant alternative
* **FRAX**: Pioneering hybrid stability mechanism
Adoption is also influenced by regional access to fiat currencies, capital controls, and DeFi ecosystem maturity.
## Stablecoin monetization and sustainability
Stablecoin issuers must generate revenue to cover operational costs, maintain reserves, and incentivize governance or expansion.
### Monetization models
* **Interest on reserves**: Yield from T-bills, repo markets, and custodial accounts
* **Minting and redemption fees**: Charged for creating or destroying tokens
* **Stability fees**: For collateralized debt positions in crypto-backed systems
* **Treasury yield**: Protocols retain some share of generated interest
* **Interchain bridge fees**: For moving tokens across chains
Sustainable models must balance revenue with user fees, decentralization goals, and long-term stability.
## Stablecoins in treasury and enterprise finance
Stablecoins are increasingly adopted by DAOs, fintechs, and traditional companies for treasury management, payroll, and cross-border payments.
### Treasury use cases
* **Working capital**: Stablecoins used for day-to-day operations, expenses, or vendor payments
* **Yield generation**: Idle funds deployed into low-risk DeFi protocols (e.g., Aave, Compound)
* **Risk hedging**: Diversification against volatility of native tokens or operating currencies
### Tools and integrations
* Accounting APIs and dashboards (e.g., Request Finance, Multis)
* DAO multisigs with stablecoin allocations (e.g., Gnosis Safe)
* Corporate stablecoin rails (e.g., Circle APIs, Fireblocks infrastructure)
Stablecoins reduce friction in global money movement and lower operational barriers for digital-native organizations.
## Stablecoin narratives and public perception
Narratives drive adoption, influence regulation, and shape investment interest. The stablecoin space is framed by various stories based on product model and issuer identity.
### Narrative examples
* **Dollar-denominated crypto**: USDC and USDT as on-chain versions of USD
* **Decentralized money**: DAI and LUSD as censorship-resistant value layers
* **Tokenized central bank money**: Institutional stablecoins backed by central banks or regulated banks
* **Stablecoin as infrastructure**: Base layer for DeFi, gaming, creator economy, and cross-border finance
* **Algorithmic innovation**: Risk-optimized models like FRAX for efficient liquidity with decentralization goals
Public perception varies by geography, use case, and market maturity. Clarity in messaging improves adoption and regulatory alignment.
## Future of programmable stablecoins
The next generation of stablecoins will go beyond stability, enabling programmability, native integration with apps, and compliance by design.
### Trends and innovations
* **Account abstraction**: Stablecoins used as gas or fee tokens on L2 networks
* **Smart wallets**: Native stablecoin balances embedded in identity-linked wallets
* **Compliance-aware tokens**: Transfer rules, AML/KYC modules, and jurisdictional whitelisting baked into token logic
* **Interest-bearing stablecoins**: Automatically accrue yield from underlying reserves
* **ZK-native privacy**: Confidential stablecoin balances for selective disclosure
### Role in global finance
* Digital dollar and euro equivalents at scale
* Cross-border settlement layer for real-time payments
* Collateral for synthetic assets, RWAs, and CBDCs
* Bridges between central banks, fintechs, and public blockchains
Stablecoins are poised to be the connective tissue between Web3 innovation and real-world financial systems.
## Stablecoin implementation patterns
Designing and deploying a stablecoin system involves careful selection of architecture, collateral strategy, governance, and integration points.
### Implementation types
* **Custodial model**: Backed by fiat, requires banking and licensing partnerships
* **Non-custodial crypto-backed**: Smart contract vaults with on-chain collateral management
* **Algorithmic model**: Supply control based on peg signals and market incentives
* **Synthetic model**: Token value tracked via oracle and collateralized by protocol shares
* **Hybrid model**: Combines off-chain assets and algorithmic mechanisms
### Key decisions
* Minting logic and access control
* Token upgrade paths and governance constraints
* Oracle providers and failover logic
* Reserve audit and reporting practices
* Ecosystem partners for distribution and integration
Choosing the right model depends on target users, geographic regulations, ecosystem compatibility, and desired decentralization level.
## Toolkit and reference infrastructure
Developers and issuers can leverage open-source libraries, protocol templates, and API services to build and manage stablecoin systems.
### Smart contract libraries
* **OpenZeppelin**: ERC-20 base contracts, roles, pausable modules
* **MakerDAO modules**: CDP vaults, liquidation engines, DSR
* **Liquity Protocol**: Zero-interest stablecoin architecture
* **Frax**: Fractional reserve and AMO extensions
### Infrastructure tools
* Chainlink, Redstone, Pyth: Oracles
* Viem, Ethers.js: Frontend blockchain APIs
* Gnosis Safe: Admin controls and multisig
* Hardhat, Foundry: Testing and deployment frameworks
### Third-party APIs
* Circle or Coinbase APIs for fiat-backed minting and compliance
* The Graph for subgraph indexing and frontend queries
* Chainalysis / TRM Labs for risk and sanctions screening
These tools provide a foundation for rapid iteration and robust deployment across L1s and L2s.
## Security and audit best practices
Stablecoins often carry systemic risk for users and protocols. Thorough security practices are essential from launch through scale.
### Best practices
* Independent audits before deployment and after upgrades
* Formal verification of economic logic and oracle functions
* Bug bounty programs and responsible disclosure channels
* Multi-oracle setups with redundancy and fallback checks
* Role separation and timelocks for governance controls
### Common vulnerabilities
* Incorrect collateral accounting (rounding, decimals, or math bugs)
* Oracle manipulation or latency
* Under-collateralized positions due to delayed liquidations
* Infinite mint bugs or flawed mint/burn permissions
Secure stablecoins must combine code-level security with operational rigor and active monitoring.
## Ecosystem roles and stakeholder responsibilities
Stablecoin systems depend on coordinated efforts by multiple actors in the ecosystem.
### Stakeholders
* **Issuer or DAO**: Maintains peg, reserves, and system upgrades
* **Minters/redeemers**: Create and burn tokens with collateral or fiat
* **Traders and arbitrageurs**: Maintain peg via market activity
* **Oracles**: Feed price data for collateral and peg logic
* **Custodians**: Hold reserves in fiat-backed models
* **Regulators**: Define and enforce operational boundaries
Clear documentation, transparent policy enforcement, and on-chain governance tools help manage these stakeholder relationships.
## Stablecoin lifecycle mapping
Stablecoin products evolve across multiple phases, from launch to maturity. Lifecycle mapping ensures smooth protocol growth and user trust.
### Lifecycle phases
1. **Design**: Define peg model, governance, minting logic, collateral types
2. **Deployment**: Launch contracts, fund reserves, open initial minting
3. **Bootstrapping**: Incentivize adoption, deepen liquidity, build integrations
4. **Stabilization**: Adjust parameters based on market performance and feedback
5. **Expansion**: Scale to new chains, asset pairs, and fiat equivalents
6. **Compliance**: Engage with regulators, evolve legal wrapper, conduct audits
7. **Maturity**: System maintains peg reliably across market cycles and grows into core infrastructure
By tracking each stage with metrics and governance processes, stablecoins can scale responsibly and sustainably.
file: ./content/docs/application-kits/asset-tokenization/use-cases/tokenized-deposits.mdx
meta: {
"title": "Tokenized deposits",
"description": "Comprehensive technical documentation for tokenized deposit systems in blockchain-based financial infrastructure"
}
## Introduction to tokenized deposits
Tokenized deposits represent bank-issued liabilities in digital form on a blockchain network. They are distinct from traditional stablecoins in that they are directly linked to a customer’s deposit in a regulated financial institution and operate under banking oversight. These tokens maintain 1:1 parity with fiat currency and function as programmable representations of commercial bank money.
Tokenized deposits bridge the gap between conventional financial systems and blockchain-based platforms. By offering compliance-aware, fiat-backed digital money with banking-grade guarantees, they enable real-time payments, improved settlement efficiency, and programmable financial workflows.
Unlike stablecoins, which are typically issued by fintech entities or crypto-native protocols, tokenized deposits originate from banks or licensed intermediaries and are governed by deposit protection frameworks. They integrate with core banking systems, support regulatory reporting, and offer traceability and settlement finality.
## Rationale and industry context
The financial industry is undergoing a shift toward programmable money, and tokenized deposits represent a key pillar in this evolution. Banks are exploring digital currencies to stay competitive, modernize infrastructure, and meet the demands of institutional and retail clients engaging with blockchain applications.
### Limitations of traditional bank money
* **Batch-based settlement**: End-of-day or T+2 processing delays
* **Limited interoperability**: Closed systems, siloed data, and restricted access
* **High cost of reconciliation**: Manual reporting and transaction matching
* **No native programmability**: Banking rails are not API-native or smart contract-aware
* **Restricted availability**: No 24/7 global access to account-based money
Tokenized deposits solve these limitations by combining the trust of bank money with the flexibility of digital tokens on programmable networks.
## Differentiating from stablecoins and CBDCs
Tokenized deposits must be understood in contrast with other digital fiat representations like stablecoins and central bank digital currencies (CBDCs). Each serves a distinct role in the digital money stack.
### Tokenized deposits vs. stablecoins
| Feature | Tokenized Deposits | Stablecoins |
| ------------------- | -------------------------- | ------------------------------- |
| Issuer | Licensed commercial bank | Private fintech or DAO |
| Backing asset | Customer deposits | Bank reserves, treasuries |
| Regulation | Banking law | Varies by issuer/jurisdiction |
| Convertibility | Redeemable in bank account | May involve off-chain processes |
| Use case | On-chain bank payments | DeFi, trading, cross-border |
| Programmability | Yes | Yes |
| KYC/AML enforcement | Enforced at token level | Varies |
### Tokenized deposits vs. CBDCs
| Feature | Tokenized Deposits | CBDCs |
| ----------------------- | -------------------------- | ----------------------- |
| Issuer | Commercial banks | Central bank |
| Form | Private money | Public money |
| Monetary policy control | Indirect | Direct |
| Accessibility | Based on bank relationship | Defined by central bank |
| Distribution | Bank-mediated | Direct or tiered |
Tokenized deposits complement CBDCs by maintaining the role of commercial banks in money creation, risk management, and credit allocation.
## Use cases for tokenized deposits
Tokenized deposits are applicable across a range of financial services and digital ecosystems, including retail, institutional, and wholesale banking.
### Retail and SME banking
* Real-time peer-to-peer payments between customers of different banks
* Smart contract-based salary disbursement and invoice automation
* Tokenized bank loyalty programs and merchant offers
### Corporate and treasury operations
* 24/7 settlement of treasury cash management workflows
* Just-in-time supplier payments with programmable release conditions
* Integration with ERP systems and finance automation platforms
### Capital markets and asset tokenization
* Delivery-versus-payment (DvP) for tokenized bonds, securities, and digital assets
* Real-time fund subscriptions and redemptions
* FX settlement between multiple bank-issued tokens on different chains
### Interbank settlement and clearing
* Real-time gross settlement (RTGS) with programmable netting
* Interoperability with existing systems like SWIFT, SEPA, or Fedwire
* Integration with CBDC corridors or multi-bank shared ledgers
By embedding bank liabilities into programmable networks, tokenized deposits power next-generation financial services with reduced cost, enhanced speed, and improved auditability.
## Technology architecture
Tokenized deposit systems typically consist of several integrated components: core banking interfaces, smart contracts, compliance services, and user-facing APIs.
### Core layers
* **Token contract**: Represents the deposit token; complies with ERC-20 or other standards
* **Mint/burn gateway**: Interfaces with the bank’s core ledger to ensure 1:1 issuance
* **KYC/AML engine**: Validates wallet addresses and manages identity-linked controls
* **Access control**: Implements transfer restrictions and policy enforcement
* **Bank API bridge**: Links on-chain actions to bank account databases and transaction records
### Optional modules
* **Event logging**: Real-time notifications to core systems for compliance and reporting
* **Smart contract wallet integration**: For programmable disbursement and conditional payments
* **Multi-chain deployment**: Enables token issuance across L2s and private chains
* **Digital identity integration**: Wallet-bound IDs or zero-knowledge credentials
Systems must balance compliance with user experience, ensuring that deposit tokens remain programmable, secure, and transparently governed.
## Lifecycle and token behavior
Tokenized deposits exhibit a predictable lifecycle that reflects the customer’s bank balance on-chain.
### Lifecycle stages
1. **Account onboarding**: User completes KYC and links blockchain wallet to bank account
2. **Deposit funding**: Fiat is deposited into the bank account
3. **Token minting**: Bank smart contract mints equivalent tokens to user wallet
4. **Transfer**: Tokens are transferred on-chain with optional checks
5. **Redemption**: User burns tokens and receives fiat in bank account
6. **Reconciliation**: Bank system logs mint/burn for internal and regulatory records
Each action may emit on-chain events or trigger API calls to bank infrastructure for audit, compliance, or accounting workflows.
## Compliance frameworks and regulatory alignment
Tokenized deposits operate within a banking compliance perimeter, requiring robust identity, AML, and reporting controls across all layers of the system.
### Core compliance requirements
* **KYC and onboarding**: Each wallet address is linked to a verified customer identity
* **AML monitoring**: Transactions are screened in real time for suspicious activity
* **Jurisdiction enforcement**: Token transfers restricted to permitted geographies
* **Transfer constraints**: Enforced rules based on account type, transaction size, or purpose
* **Record-keeping**: On-chain and off-chain logs maintained for audit and regulatory reporting
### Governance and audit
* Transaction logs are timestamped and cryptographically secured
* Banks retain ultimate control over mint/burn permissions and token logic
* Compliance modules are auditable and upgradeable under regulatory supervision
* Integration with internal systems for SAR (suspicious activity report) generation and regulatory filings
Tokenized deposits are not “permissionless” assets; they are trust-enhanced digital instruments embedded in the bank’s operational and legal infrastructure.
## Technical standards and token design
Tokenized deposit systems implement token standards that are compatible with public and private blockchains, and allow fine-grained control over token transfer logic.
### Common standards
* **ERC-20 with transfer hooks**: Basic token interface extended to support compliance logic
* **ERC-777 or ERC-1155**: For more advanced messaging, metadata, or multi-token models
* **Permissioned token templates**: Whitelist-based logic using OpenZeppelin’s `AccessControl`
* **Enterprise standards**: ISO 20022 alignment for financial messaging interoperability
### Token attributes
* Token name and symbol (e.g., `eINR`, `USDdb`)
* Decimals set to match fiat precision
* Mint and burn restricted to designated roles
* Metadata may include issuance time, jurisdictional tags, or compliance IDs
Programmable tokens must remain composable with DeFi primitives while respecting the constraints of regulated financial infrastructure.
## Smart contract logic and control
The smart contract layer enforces critical business logic, including issuance policies, transfer authorization, and event notifications.
### Functional modules
* `mint(to, amount)`: Issues tokens on successful fiat deposit
* `burn(from, amount)`: Destroys tokens on withdrawal or redemption
* `transfer(from, to, amount)`: Performs compliance checks before allowing movement
* `pause()` / `unpause()`: Emergency controls for operational or legal response
* `setComplianceRule(ruleId, enabled)`: Enables modular compliance logic by type
Smart contracts are deployed under bank-controlled keys or via multisig to ensure upgradeability and response capacity.
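To make these modules concrete, the sketch below combines them in a single contract. This is a minimal, hypothetical example (the contract name `DepositToken`, the role names, and the `kycApproved` allowlist are illustrative, not part of any specific product) assuming OpenZeppelin Contracts v4; a production deposit token would replace the allowlist with a full KYC/AML engine and wire mint/burn events into the bank's core ledger.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import "@openzeppelin/contracts/security/Pausable.sol";
import "@openzeppelin/contracts/access/AccessControl.sol";

/// Minimal sketch of a bank-issued deposit token (illustrative, not production code).
contract DepositToken is ERC20, Pausable, AccessControl {
    bytes32 public constant GATEWAY_ROLE = keccak256("GATEWAY_ROLE"); // mint/burn gateway
    bytes32 public constant COMPLIANCE_ROLE = keccak256("COMPLIANCE_ROLE"); // KYC/AML engine

    mapping(address => bool) public kycApproved; // stand-in for a full identity registry

    constructor() ERC20("Deposit Token", "DEP") {
        _setupRole(DEFAULT_ADMIN_ROLE, msg.sender);
    }

    /// The gateway mints tokens after a confirmed fiat deposit.
    function mint(address to, uint256 amount) external onlyRole(GATEWAY_ROLE) {
        _mint(to, amount);
    }

    /// The gateway burns tokens when the holder redeems for fiat.
    function burn(address from, uint256 amount) external onlyRole(GATEWAY_ROLE) {
        _burn(from, amount);
    }

    /// The compliance engine maintains the allowlist of verified customers.
    function setKycApproved(address account, bool approved) external onlyRole(COMPLIANCE_ROLE) {
        kycApproved[account] = approved;
    }

    function pause() external onlyRole(DEFAULT_ADMIN_ROLE) { _pause(); }
    function unpause() external onlyRole(DEFAULT_ADMIN_ROLE) { _unpause(); }

    /// Transfer hook: all movements are blocked while paused and require KYC on
    /// both customer wallets (mints and burns skip the zero-address side).
    function _beforeTokenTransfer(address from, address to, uint256 amount)
        internal
        override
        whenNotPaused
    {
        if (from != address(0)) require(kycApproved[from], "sender not KYC approved");
        if (to != address(0)) require(kycApproved[to], "recipient not KYC approved");
        super._beforeTokenTransfer(from, to, amount);
    }
}
```
Separating the gateway and compliance roles mirrors the mint/burn gateway and KYC/AML engine described above, so each operational function can be keyed, rotated, and audited independently.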
## Programmability patterns
Tokenized deposits offer native support for programmable money, enabling innovation in payment workflows and financial automation.
### Key patterns
* **Escrow contracts**: Conditional disbursement of tokens based on legal agreements or service delivery
* **Streaming payments**: Scheduled micro-payments for salaries or subscriptions
* **Trigger-based flows**: Token release linked to off-chain events or sensor data
* **Time locks and vesting**: Delayed availability for compliance or investment scenarios
* **Custom spend controls**: Merchant-specific allowances or category restrictions
Smart contracts can embed business logic directly into money, enabling real-time, logic-aware payments for institutions and consumers.
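As an illustration of the escrow pattern above, the following hypothetical sketch locks deposit tokens until a designated arbiter confirms the off-chain condition (for example, service delivery). It assumes the standard OpenZeppelin `IERC20` interface and omits disputes, timeouts, and compliance checks for brevity.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

/// Minimal escrow sketch for conditional disbursement of deposit tokens.
contract SimpleEscrow {
    IERC20 public immutable token;
    address public immutable payer;
    address public immutable beneficiary;
    address public immutable arbiter; // confirms the off-chain condition

    constructor(IERC20 token_, address beneficiary_, address arbiter_) {
        token = token_;
        payer = msg.sender;
        beneficiary = beneficiary_;
        arbiter = arbiter_;
    }

    /// Payer funds the escrow (requires a prior ERC-20 approval).
    function fund(uint256 amount) external {
        require(msg.sender == payer, "only payer");
        require(token.transferFrom(msg.sender, address(this), amount), "transfer failed");
    }

    /// Arbiter releases the locked balance once the condition is met.
    function release() external {
        require(msg.sender == arbiter, "only arbiter");
        require(token.transfer(beneficiary, token.balanceOf(address(this))), "transfer failed");
    }
}
```
In practice the payer would first `approve` the escrow on the token contract, then call `fund`; once the condition is met, the arbiter calls `release`.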
## Custody models and wallet design
Tokenized deposit systems support a range of custody and wallet models tailored to different user types and risk profiles.
### Custody options
* **Self-custody**: End users hold keys via browser wallets or mobile apps
* **Smart contract wallets**: Role-based access, transaction limits, or social recovery
* **Institutional custody**: Bank or regulated third-party holds assets on behalf of clients
* **Custody-as-a-service**: API-driven key management and compliance overlays
### Wallet integrations
* Web-based wallets linked via Web3 APIs and bank identity layers
* Hardware wallets or biometric signers for high-value accounts
* Embedded wallets in banking apps for seamless UX
Banks may offer their own wallet infrastructure or integrate with partners to ensure secure, compliant, and scalable access for deposit token holders.
## Interbank interoperability and network design
Tokenized deposits gain strategic value when they operate within interoperable ecosystems, allowing transactions across different banks, currencies, and blockchains.
### Interbank use cases
* **Real-time interbank payments**: Settlement of obligations directly via tokenized deposits
* **Cross-border corridors**: Tokenized deposits from different banks exchanged under FX rules
* **Clearing networks**: Automated netting and batch settlement of B2B obligations
* **Interbank repo and liquidity**: Collateralized lending using tokenized deposits between regulated institutions
### Network architectures
* **Private consortium chain**: Shared ledger among participating banks
* **Public-permissioned L2**: Rollup or sidechain model with compliance layer
* **Hub-and-spoke model**: Central bank or utility manages routing between bank-issued tokens
* **Cross-chain bridges**: Interoperability through messaging protocols and custody-backed synthetic models
Interbank tokenized deposit networks require governance frameworks, SLA definitions, dispute resolution, and standardized APIs.
## Settlement mechanics and transaction finality
Tokenized deposit systems enable atomic, deterministic, and auditable settlement with lower risk than traditional systems.
### Settlement models
* **Atomic settlement**: Transaction is final immediately after inclusion in the ledger
* **Deferred netting**: Multiple token transfers netted and settled periodically
* **Instant DvP**: Delivery-versus-payment for tokenized assets settled simultaneously with payment
* **Programmatic clearing**: Rules-based batch processing of internal or interbank transactions
### Finality guarantees
* Smart contract confirmations plus bank ledger reconciliation
* Integration with national RTGS or payment system to synchronize off-chain ledger
* Timestamped events and immutability provide cryptographic proof of payment
This reduces credit risk, fraud, and reconciliation overhead while providing real-time visibility to counterparties and regulators.
## Risk models and operational safeguards
Tokenized deposit systems must model operational, liquidity, and systemic risks, especially when embedded in broader financial markets.
### Key risk domains
* **Liquidity mismatch**: Tokens issued without sufficient fiat reserves or settlement buffers
* **Redemption pressure**: Spike in withdrawals due to macro or legal risk perception
* **Smart contract failure**: Bugs or logic flaws disrupting mint/burn or compliance enforcement
* **Oracle errors**: Inaccurate triggers for programmatic flows or FX settlement
### Safeguards
* Real-time reserve tracking and circuit breakers for minting caps
* Daily or intra-day reconciliation with banking core
* Emergency pause and manual override functions
* Redundant oracles and compliance layer monitoring
A well-designed system includes both proactive limits and reactive tools to contain stress scenarios and maintain confidence.
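The “circuit breakers for minting caps” safeguard can be sketched as a reusable mixin. The contract below is hypothetical: the rolling 24-hour window and the cap parameter are illustrative choices, and a real deployment would tie the cap to attested reserve levels.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

/// Illustrative circuit breaker: caps gateway minting per rolling 24h window.
abstract contract MintCircuitBreaker {
    uint256 public immutable dailyMintCap;
    uint256 private mintedInWindow;
    uint256 private windowStart;

    constructor(uint256 dailyMintCap_) {
        dailyMintCap = dailyMintCap_;
    }

    /// Call before minting; reverts when the window's cap would be exceeded.
    function _checkMintCap(uint256 amount) internal {
        if (block.timestamp >= windowStart + 1 days) {
            windowStart = block.timestamp; // open a fresh 24-hour window
            mintedInWindow = 0;
        }
        require(mintedInWindow + amount <= dailyMintCap, "daily mint cap exceeded");
        mintedInWindow += amount;
    }
}
```
An issuing contract would call `_checkMintCap(amount)` at the top of its `mint` function, so a bug or compromised key can only inflate supply up to the configured cap per window.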
## Dashboards and analytics
Transparency is critical in tokenized financial systems. Banks, users, and regulators benefit from real-time analytics interfaces.
### Admin dashboards
* Total supply, mint/burn volume, redemption trends
* KYC status by address, wallet jurisdiction distribution
* Alerts on suspicious activity or rule violations
* Pending upgrades, governance actions, or system flags
### User dashboards
* Token balance and fiat equivalent
* Transaction history and fiat ledger linkage
* Active contracts or scheduled disbursements
* Regulatory disclosures and policies
### Technical monitoring
* Smart contract gas usage and execution logs
* Oracle update latency and quorum analysis
* Cross-chain bridge activity and liquidity depth
Dashboards support operational decision-making, regulatory compliance, and product monitoring in production-grade deployments.
## Deployment strategies and scaling
Launching tokenized deposits requires careful planning across legal, technical, and operational domains.
### Phased deployment
1. **Prototype**: Internal testing on testnet with fiat simulation
2. **Regulatory sandbox**: Launch with limited users and oversight
3. **Closed loop deployment**: Launch with specific ecosystem partners or bank-internal use cases
4. **Open issuance**: Allow retail and institutional users on mainnet
5. **Cross-bank integration**: Connect with other issuers and payment networks
### Scaling considerations
* Transaction throughput and chain congestion
* KYC throughput and onboarding flow UX
* API rate limits and real-time event propagation
* Support for multi-currency and multi-chain extensions
A successful deployment roadmap aligns technical capabilities with legal comfort, user demand, and ecosystem readiness.
## Integration with treasury and corporate systems
Tokenized deposits provide a bridge between traditional finance tools and programmable money, offering automation and liquidity benefits for enterprises and institutions.
### Treasury use cases
* **Cash management**: Tokenized balances used to optimize liquidity across subsidiaries
* **Just-in-time payments**: Automated supplier disbursement via smart contract triggers
* **Yield management**: Deployment of idle funds into regulated on-chain liquidity pools
* **Multi-bank visibility**: Unified dashboard of tokenized balances across issuing institutions
### ERP and system integrations
* RESTful APIs or GraphQL endpoints for ERP systems (e.g., SAP, Oracle)
* Webhooks for event-driven updates to treasury software
* Role-based access control for CFOs, auditors, and compliance officers
* Exportable transaction reports for reconciliation and audit trails
Tokenized deposits unlock real-time treasury operations while ensuring compliance with corporate finance and tax regulations.
## DeFi and programmable finance compatibility
While inherently more controlled than stablecoins, tokenized deposits can still be integrated into decentralized finance environments with appropriate safeguards.
### Integration scenarios
* **Permissioned DeFi**: Whitelist-only lending or trading protocols restricted to KYC-verified wallets
* **Liquidity pools**: Bank-issued tokens paired with digital assets in AMMs or DEXs
* **Collateralization**: Tokenized deposits used to mint synthetic assets or stablecoins
* **Streaming payments**: Real-time payroll and grant disbursement to DAO contributors
### Considerations
* Transfer hooks and compliance modules must be preserved
* Composability limited by protocol-level permissioning
* Chain choice influences available integrations (e.g., EVM L2s vs. private chains)
Tokenized deposits in DeFi require careful balancing of innovation, legal enforceability, and risk management.
## Policy and regulatory evolution
The growth of tokenized deposit systems will be shaped by evolving regulations, central bank guidance, and public-private collaboration frameworks.
### Regulatory focus areas
* **Deposit classification**: Whether tokens are considered liabilities or new instruments
* **Licensing requirements**: For issuers, custodians, and wallet providers
* **Interoperability mandates**: Encouragement of open standards and anti-fragmentation
* **Anti-money laundering**: Enforcement of FATF Travel Rule and transaction traceability
* **Prudential oversight**: Risk management standards akin to Basel III or local equivalents
### Policy milestones
* BIS Innovation Hub pilots (e.g., mBridge, Project Icebreaker)
* ECB and MAS guidelines on tokenized bank money
* US and EU legislation under discussion for digital asset classification
Ongoing dialogue between regulators, technologists, and financial institutions will determine the speed and scope of adoption.
## Public-private collaboration
The success of tokenized deposits depends on cooperation between public sector entities (e.g., central banks, regulators) and private actors (e.g., banks, fintechs).
### Collaborative models
* **Industry consortia**: Shared infrastructure and token standards across banks
* **Central bank nodes**: CBDC infrastructure acting as clearing agent for tokenized deposits
* **Utility settlement coins**: Specialized stable units for interbank use under public oversight
* **Open-source infrastructure**: Publicly auditable contract templates and SDKs
This collaboration ensures system resilience, compliance alignment, and shared innovation across the ecosystem.
## Long-term outlook and transformation potential
Tokenized deposits will reshape financial infrastructure, merging the stability and trust of the traditional banking system with the speed, efficiency, and programmability of blockchain.
### Strategic shifts
* Embedded bank money in consumer and enterprise software
* Fragmentation of legacy correspondent banking and FX rails
* New business models for banks offering programmable payment services
* Seamless digital identity and account integration across chains and platforms
### Ecosystem convergence
* Integration with CBDCs, stablecoins, and RWAs in unified liquidity layers
* Cross-chain operability between banks, fintechs, and DeFi platforms
* Transformation of payments from settlement rails into composable money flows
Tokenized deposits will become foundational infrastructure in the programmable financial ecosystem, offering secure, compliant, and universally accessible digital money for the next generation of finance.
file: ./content/docs/building-with-settlemint/cli/settlemint/codegen.mdx
meta: {
"title": "Codegen"
}
## settlemint codegen

Generate GraphQL types and queries for your dApp.

Usage: `settlemint codegen`

Examples:

* Generate GraphQL types and queries for your dApp: `settlemint codegen`
* Generate GraphQL types and queries for specific TheGraph subgraphs: `settlemint codegen --thegraph-subgraph-names subgraph1 subgraph2`

Options:

* `--prod`: Connect to your production environment
* `--thegraph-subgraph-names <subgraph-names...>`: The name(s) of the TheGraph subgraph(s) to generate (skip if you want to generate all)
* `--generate-viem`: Generate Viem resources
* `-h, --help`: Display help for command

## settlemint connect

Connects your dApp to your application.

Examples:

* Connect to your environment: `settlemint connect`
* Connect to your environment using defaults from the .env file: `settlemint connect --accept-defaults`
* Connect to your production environment: `settlemint connect --prod`
* Connect to a standalone environment (when not using the SettleMint platform): `settlemint connect --instance standalone`

Options:

* `--prod`: Connect to your production environment
* `-a, --accept-defaults`: Accept the default and previously set values
* `-i, --instance <instance>`: The instance to connect to (defaults to the instance in the .env file). Use 'standalone' if your resources are not deployed on the SettleMint platform
* `-h, --help`: Display help for command

## settlemint create

Create a new application from a template.

Examples:

* Create a new application from a template: `settlemint create`
* Create a new asset tokenization application: `settlemint create --template asset-tokenization`
* Create a new asset tokenization application from a specific version: `settlemint create --template asset-tokenization --version 1.0.0`

Options:

* `-n, --project-name <name>`: The name for your SettleMint project
* `-t, --template <template>`: The template for your SettleMint project (run `settlemint platform config` to see available templates)
* `-v, --version <version>`: Specify the template version to use (defaults to latest stable version)
* `-i, --instance <instance>`: The instance to connect to
* `-h, --help`: Display help for command

## settlemint login

Login to your SettleMint account.

Examples:

* Login to your SettleMint account: `settlemint login`
* Login using a token from STDIN: `cat ~/my_token.txt | settlemint login --token-stdin --accept-defaults`

Options:

* `-a, --accept-defaults`: Accept the default and previously set values
* `--token-stdin`: Provide a token using STDIN
* `-i, --instance <instance>`: The instance to login to (defaults to the instance in the .env file)
* `-h, --help`: Display help for command

## settlemint pincode-verification-response

Get pincode verification response for a blockchain node.

Examples:

* Get pincode verification response for a wallet address: `settlemint pincode-verification-response --wallet-address 0x1234567890123456789012345678901234567890`
* Get pincode verification response for a wallet address and connect to a specific blockchain node: `settlemint pincode-verification-response --wallet-address 0x1234567890123456789012345678901234567890 --blockchain-node my-blockchain-node`

Options:

* `--wallet-address <walletAddress>`: The wallet address to get pincode verification response for
* `-i, --instance <instance>`: The instance to connect to (defaults to the instance in the .env file)
* `--blockchain-node <blockchainNode>`: Blockchain Node unique name to get pincode verification response for
* `-h, --help`: Display help for command

## Platform subcommands

Subcommands of `settlemint platform`:

* `config|cfg [options]`: Get platform configuration
* `create|c`: Create a resource in the SettleMint platform
* `delete|d`: Delete a resource in the SettleMint platform
* `list|ls`: List resources in the SettleMint platform
* `restart`: Restart a resource in the SettleMint platform
* `update|u`: Update a resource in the SettleMint platform
* `help [command]`: Display help for command

## Smart contract set subcommands

* `create [options]`: Bootstrap your smart contract set
* `foundry|f`: Foundry commands for building and testing smart contracts
* `hardhat|h`: Hardhat commands for building, testing and deploying smart contracts
* `subgraph|sg`: Commands for managing TheGraph subgraphs for smart contract indexing
* `help [command]`: Display help for command
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/domain-and-tls.mdx
meta: {
"title": "Domain and tls configuration",
"description": "Configure domain names and TLS certificates for your self-hosted platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Overview
### Purpose
* Secure platform access
* Service-to-service communication
* API endpoint security
* User authentication
### Requirements
* Registered domain name
* DNS management access
* Ability to create DNS records
* TLS certificate provider
## Domain configuration
### 1. Configure Main Domain
* Create an A record pointing to your ingress controller IP
* Example: `platform.company.com → 203.0.113.1`
### 2. Add Wildcard Subdomain
* Create a CNAME record for all subdomains
* Pattern: `*.platform.company.com → platform.company.com`
### DNS Resolution Tests
```bash
# Check A record
dig +short platform.company.com
# Check CNAME record
dig +short test.platform.company.com
# Verify IP matches ingress
kubectl -n ingress-nginx get svc ingress-nginx-controller \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
## Tls configuration
### Quick Setup with Cloudflare
### Add Domain to Cloudflare
* Transfer DNS management
* Update nameservers
### Configure SSL/TLS
* Purchase Advanced Certificate Manager (ACM)
* Enable Total TLS
* Set SSL/TLS mode to Full (Strict)
**Benefits**
* Automatic certificate management
* DDoS protection included
* Easy wildcard certificate support
* Global CDN
### Setup with cert-manager
### Install cert-manager
```bash
helm repo add jetstack https://charts.jetstack.io --force-update
helm repo update
helm upgrade --install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
```
### Configure DNS Provider
```bash
# Create API token secret
# (example uses a Cloudflare API token; adjust the secret for your DNS provider)
kubectl apply -n cert-manager -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token   # example name; referenced by the ClusterIssuer below
type: Opaque
stringData:
  api-token: your-cloudflare-api-token
EOF
```
### Create ClusterIssuer
```bash
# Example Let's Encrypt issuer using a DNS-01 challenge (Cloudflare shown;
# adjust the solver and email for your environment)
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt   # matches the cert-manager.io/cluster-issuer annotation below
spec:
  acme:
    email: you@company.com   # example; use a real address for expiry notifications
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
EOF
```

**Important**
* Use a valid email address for certificate notifications
* Ensure DNS provider API token has sufficient permissions
* Allow time for initial certificate issuance
## Information collection
### Required values for platform installation
* [ ] Domain name (e.g., `platform.company.com`)
* [ ] Ingress annotations (if using cert-manager:
`cert-manager.io/cluster-issuer: "letsencrypt"`)
* [ ] TLS secret name for the certificate
* [ ] SSL redirect setting (`true` or `false`)
```yaml
ingress:
enabled: true
className: nginx
host: "platform.company.com"
annotations:
cert-manager.io/cluster-issuer: "letsencrypt"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
tls:
- secretName: "tls-secret"
hosts:
- "platform.company.com"
- "*.platform.company.com"
deploymentEngine:
platform:
domain:
hostname: "platform.company.com"
clusterManager:
domain:
hostname: "platform.company.com"
targets:
- clusters:
- domains:
service:
tls: true
hostname: "platform.company.com"
ingress:
ingressClass: "nginx"
```
## Troubleshooting
### DNS Issues
**Not Resolving**
* Verify A record IP
* Check CNAME configuration
* Allow DNS propagation (48h max)
**Wrong IP**
* Confirm ingress controller IP
* Update DNS records
* Clear local DNS cache
### Certificate Issues
**cert-manager**
* Check issuer status
* Verify DNS01 challenge
* Review cert-manager logs
**Cloudflare**
* Verify SSL/TLS mode
* Check certificate status
* Confirm proxy status
Need help? Contact [support@settlemint.com](mailto:support@settlemint.com) if
you encounter any issues.
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/ingress-controller.mdx
meta: {
"title": "Ingress controller",
"description": "Setup and configure the ingress controller for your self-hosted platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Deployment options
### Install with Helm
```bash
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--create-namespace
```
Wait for the load balancer IP to be assigned:
```bash
kubectl get service -n ingress-nginx ingress-nginx-controller \
--output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
### Cloud Provider Marketplaces
Choose your cloud provider's marketplace offering:
#### Digital Ocean
* Install "NGINX Ingress Controller" from marketplace
* Automatically configures load balancer
#### CIVO
* Enable "Nginx ingress controller" during cluster creation
* Or add from marketplace post-creation
#### Other Providers
* Most cloud providers offer similar marketplace solutions
* Follow provider-specific installation steps
## Validation
### Check pods are running
```bash
kubectl get pods -n ingress-nginx
```
### Verify service and ip allocation
```bash
kubectl get svc -n ingress-nginx
```
## Information collection
### Required values for platform installation
* [ ] Ingress class name (default: `nginx`)
* [ ] Load balancer IP address
* [ ] Ingress controller namespace
```yaml
ingress:
enabled: true
className: nginx
# Other ingress settings will be configured in Domain & TLS section
```
## Troubleshooting
### No Load Balancer IP
* Verify cloud provider load balancer service is running
* Check cloud provider quotas
* Ensure correct service annotations
### Controller Not Ready
* Check pod logs: `kubectl logs -n ingress-nginx <pod-name>`
* Verify resource requirements are met
* Check network policies
## Next steps
### Verify ingress controller is running

### Note down the load balancer IP

### Proceed to [Domain and TLS Setup](/launching-the-platform/self-hosted-onprem/prerequisites/domain-and-tls)
Need help? Contact [support@settlemint.com](mailto:support@settlemint.com) if
you encounter any issues.
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/metrics-and-logs.mdx
meta: {
"title": "Metrics and logs",
"description": "Configure monitoring and logging for your self-hosted platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Overview
### Monitoring Stack
* Metrics collection (Prometheus/VictoriaMetrics)
* Log aggregation (Loki)
* Metrics server for resource metrics
* Kube-state-metrics for cluster state
### Key Benefits
* Complete observability
* Performance monitoring
* Resource tracking
* Centralized logging
## Troubleshooting

### Metrics Not Collecting
* Verify service endpoints
* Check scrape configurations
* Review service monitors
* Validate permissions
### Log Issues
* Check Loki status
* Verify storage configuration
* Review retention settings
* Check network policies
Need help? Contact [support@settlemint.com](mailto:support@settlemint.com) if
you encounter any issues.
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/oauth.mdx
meta: {
"title": "Oauth provider",
"description": "Setup and configure OAuth provider for your self-hosted platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Overview
### Purpose
* User authentication
* Access control
* Single sign-on capabilities
* Identity management
### Key Features
* OpenID Connect support
* OAuth 2.0 compliance
* User profile information
* Email verification
## Provider options
### Google OAuth Setup
### Access Google Cloud Console
* Go to [Google Cloud Console](https://console.developers.google.com/apis/credentials)
* Select or create a project
### Create OAuth Client
* Click `+ CREATE CREDENTIALS`
* Select `OAuth client ID`
* Choose `Web application` type
### Configure OAuth Client
* Add Authorized JavaScript origins:
```
https://your-domain.com
```
* Add Authorized redirect URIs:
```
https://your-domain.com/api/auth/callback/google
```
Make sure to replace `your-domain.com` with your actual platform domain.
### Azure Active Directory Setup
### Access Azure Portal
* Go to Azure Active Directory
* Register a new application
### Configure Application
* Add redirect URIs
* Set up platform configurations
* Configure authentication settings
### Set Required Permissions
* OpenID Connect permissions
* User.Read permissions
* Additional scopes as needed
### Custom OIDC Provider
For enterprise setups, you can use any OpenID Connect compliant provider:
* Okta
* Auth0
* Keycloak
* Other OIDC-compliant providers
Required provider capabilities:
* OpenID Connect support
* OAuth 2.0 compliance
* User profile information
* Email verification
## Jwt configuration
### Generate a secure signing key

```bash
openssl rand -base64 32
```
Store this key securely - it's used to sign user sessions.
## Information collection
### Required values for platform installation
* [ ] OAuth Client ID
* [ ] OAuth Client Secret
* [ ] JWT signing key
* [ ] Configured redirect URI
```yaml
auth:
jwtSigningKey: "your-generated-key" # From openssl command
providers:
google:
enabled: true
clientID: "your-client-id" # From OAuth provider
clientSecret: "your-secret" # From OAuth provider
```
## Validation
### Verify OAuth client is properly configured

### Confirm redirect URIs match your domain

### Check JWT signing key is generated and saved

### Validate required scopes are enabled
## Troubleshooting
### Invalid Redirect URI
* Verify exact URI match
* Check for protocol (https) mismatch
* Confirm domain spelling
### Authentication Failures
* Verify client credentials
* Check scope configurations
* Validate JWT signing key
Need help? Contact [support@settlemint.com](mailto:support@settlemint.com) if
you encounter any issues.
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/overview.mdx
meta: {
"title": "Prerequisites overview",
"description": "Complete guide to setting up prerequisites for the SettleMint Platform installation"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
Before installing the SettleMint Platform, you'll need to set up several core
services. This guide will walk you through each prerequisite and help you
collect the necessary information for installation.
Use the sidebar to navigate between different prerequisites. We recommend
following them in order, but you can skip to specific sections if needed.
## How to use this section
1. Review each prerequisite service
2. Choose your preferred deployment method for each service
3. Follow the setup instructions
4. Record the required information in a secure location
5. Proceed to the next prerequisite
Make sure to complete **all** prerequisites before proceeding with the
platform installation. Missing or incorrectly configured services can cause
installation failures.
## Required services
### Ingress Controller
* Traffic management and load balancing
* SSL/TLS termination
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/ingress-controller)
### Domain and TLS
* Domain name configuration
* SSL/TLS certificates
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/domain-and-tls)
### Metrics and Logs
* Prometheus metrics collection
* Grafana visualization
* Loki log aggregation
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/metrics-and-logs)
### PostgreSQL Database
* Primary platform database
* Stores user data and configurations
* Minimum version: PostgreSQL 13+
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/postgresql)
### Redis Cache
* Session management
* Real-time features
* Performance optimization
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/redis)
### S3-Compatible Storage
* Platform assets storage
* Blockchain data persistence
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/s3-compatible-storage)
### Secrets management
* Secrets management
* Encryption keys
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/secret-management)
### OAuth Provider
* Authentication service
* User identity management
* [Setup Guide](/launching-the-platform/self-hosted-onprem/prerequisites/oauth)
## Deployment options
Choose deployment options based on your:
* Security requirements
* Infrastructure capabilities
* Operational expertise
* Budget constraints
## Information collection
As you complete each prerequisite, you'll need to collect specific information
required for the platform installation.
### Information collection checklist
* [ ] Domain and TLS certificates
* [ ] Database connection strings
* [ ] Redis credentials
* [ ] S3 bucket details
* [ ] Vault access tokens
* [ ] OAuth client credentials
* [ ] Metrics endpoints
## Next steps
1. Start with the
[Ingress Controller](/launching-the-platform/self-hosted-onprem/prerequisites/ingress-controller)
setup
2. Follow each prerequisite guide in order
3. Validate your configurations
4. Proceed to
[Platform Installation](/launching-the-platform/self-hosted-onprem/platform-installation)
## Need help?
### Documentation
* Review the prerequisites guides
* Check troubleshooting sections
* Follow best practices
* Consult platform architecture
### Support
* Email: [support@settlemint.com](mailto:support@settlemint.com)
* Schedule technical consultation
* Contact your account manager
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/postgresql.mdx
meta: {
"title": "Postgresql database",
"description": "Setup and configure PostgreSQL database for your self-hosted platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Overview
### Primary Database
* User data and configurations
* Platform state
* Application data
* Minimum version: PostgreSQL 13+
### Key Features
* High availability
* Data persistence
* Backup support
* Performance monitoring
## Deployment options
### Cloud Provider Options
### Digital Ocean Managed Database
* Create new database cluster
* Choose PostgreSQL 13+
* Select plan (minimum 2 vCPU, 4GB RAM)
* Enable connection pooling (recommended: 50 connections)
### Neon Serverless PostgreSQL
* Create new project
* Set up new database
* Enable connection pooling
* Note the connection string
### Other Providers
* Amazon RDS
* Google Cloud SQL
* Azure Database for PostgreSQL
**Benefits**
* Automatic backups
* High availability
* Security patches
* Performance monitoring
### Bitnami PostgreSQL Chart
### Install PostgreSQL
```bash
helm upgrade --install postgresql oci://registry-1.docker.io/bitnamicharts/postgresql \
--namespace postgresql \
--version 14.3.3 \
--create-namespace \
--set global.postgresql.auth.username=platform \
--set global.postgresql.auth.password=your-secure-password \
--set global.postgresql.auth.database=platform
```
### Wait for deployment
```bash
kubectl -n postgresql get pods -w
```
**For Production Use**
* Configure proper resource limits
* Set up regular backups
* Consider high availability setup
## Information collection
### Required values for platform installation
* [ ] Redis hostname/endpoint
* [ ] Port number (default: 6379)
* [ ] Password (if authentication enabled)
* [ ] TLS enabled/disabled
```yaml
redis:
host: "" # Redis host collected in prerequisites
port: 6379 # Redis port collected in prerequisites
password: "" # Redis password collected in prerequisites
prefix: "sm" # In shared redis servers, this separates queues
tls: false # Set to true to use TLS mode
```
When using Google Memorystore:
1. Enable only one Redis solution (`redis.enabled` or `redis.memorystore.enabled`)
2. Ensure your GKE cluster has access to the Memorystore instance
3. Configure the same region as your GKE cluster
## Validation
```bash
# Get the Memorystore instance connection details
REDIS_HOST=$(gcloud redis instances describe [INSTANCE_ID] \
--region=[REGION] --format='get(host)')
REDIS_PORT=$(gcloud redis instances describe [INSTANCE_ID] \
--region=[REGION] --format='get(port)')
# Test connection using redis-cli
redis-cli -h $REDIS_HOST -p $REDIS_PORT ping
```
```bash
# Using redis-cli
redis-cli -h your-redis-host -p 6379 -a your-password ping
# Expected response
PONG
```
## Deployment options
### GCP Secret Manager Setup
### Enable the Secret Manager API
* Go to [Google Cloud Console](https://console.cloud.google.com)
* Navigate to Secret Manager
* Enable the Secret Manager API for your project
### Create Service Account
* Navigate to IAM & Admin > Service Accounts
* Create a new service account
* Grant the following roles:
* `Secret Manager Admin`
### Download Credentials
* Create and download a JSON key for the service account
* Keep this file secure - you'll need it during platform installation
**GCP Secret Manager provides:**
* Fully managed service
* Automatic replication
* Fine-grained IAM controls
* Audit logging
**Helm Chart Values:**
```yaml
# values.yaml for Helm installation
gcpSecretManager:
# -- Enable Google Secret Manager integration
enabled: true
# -- The Google Cloud project ID
projectId: "your-project-id"
# -- The Google Cloud service account credentials JSON
credentials: |
{
// Your service account JSON key
}
```
Make sure to:
1. Enable Google Secret Manager in your Helm values
2. Use the same project ID and credentials as in your platform configuration
3. Properly format the service account JSON credentials
### HashiCorp Cloud Platform Setup
### Create Vault Cluster
* Sign up for [HashiCorp Cloud](https://portal.cloud.hashicorp.com)
* Choose Development tier (sufficient for most setups)
* Select "Start from Scratch" template
* Pick your preferred region
### Configure Secret Engines
```bash
vault secrets enable -path=ethereum kv-v2
vault secrets enable -path=ipfs kv-v2
vault secrets enable -path=fabric kv-v2
```
### Set Up Authentication
```bash
# Enable AppRole auth method
vault auth enable approle
# Create platform policy
vault policy write btp - <<EOF
# Example policy covering the secret engines enabled above; adjust paths as needed
path "ethereum/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "ipfs/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "fabric/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF

# Create an AppRole for the platform (TTL values are illustrative; see below)
vault write auth/approle/role/platform \
  token_policies="btp" \
  token_ttl=1h \
  token_max_ttl=4h \
  secret_id_ttl=0
```
**TTL Configuration**
* `token_ttl`: How long tokens are valid (e.g., `1h`, `24h`, `30m`)
* `token_max_ttl`: Maximum token lifetime including renewals
* `secret_id_ttl`: How long secret IDs remain valid
* Set to `0` for non-expiring secret IDs
* Or specify duration like `6h`, `24h`, `168h` (1 week)
**HCP Vault provides:**
* Managed infrastructure
* Automatic updates
* Built-in high availability
* Professional support
### Helm Chart Installation
### Install Vault
```bash
helm upgrade --install vault vault \
--repo https://helm.releases.hashicorp.com \
--namespace vault \
--create-namespace
```
### Initialize Vault
```bash
# Initialize and save keys
kubectl exec vault-0 -n vault -- vault operator init \
-key-shares=1 \
-key-threshold=1
# Unseal Vault (replace with your key)
kubectl exec vault-0 -n vault -- vault operator unseal $VAULT_UNSEAL_KEY
```
### Configure Vault
Follow the same configuration steps as HCP Vault (steps 2-5) after logging in with the root token.
**For Production Use:**
* Use multiple key shares
* Configure proper storage backend
* Set up high availability
* Implement proper unsealing strategy
### AWS Secret Manager Setup
### Create IAM User
* Go to AWS IAM Console
* Create a new IAM user
* Grant the following permissions:
* `secretsmanager:CreateSecret`
* `secretsmanager:GetSecretValue`
* `secretsmanager:PutSecretValue`
* `secretsmanager:DeleteSecret`
* `secretsmanager:ListSecrets`
### Generate Access Keys
* In the IAM console, select your user
* Go to "Security credentials" tab
* Create new access key
* Save both the Access Key ID and Secret Access Key
**AWS Secret Manager provides:**
* Regional availability
* Automatic encryption
* Fine-grained IAM controls
* AWS CloudTrail integration
**Helm Chart Values:**
```yaml
# values.yaml for Helm installation
awsSecretManager:
# -- Enable AWS Secret Manager integration
enabled: true
# -- The AWS region
region: 'us-east-1'
# -- The AWS access key ID
accessKeyId: 'your-access-key-id'
# -- The AWS secret access key
secretAccessKey: 'your-secret-access-key'
```
## Information collection
### Required values for platform installation
Choose one of the following configurations for your Helm values:
**For GCP Secret Manager:**
* [ ] GCP Project ID
* [ ] Service Account JSON key
```yaml
# Values.yaml
vault:
enabled: false
awsSecretManager:
enabled: false
gcpSecretManager:
enabled: true
projectId: "your-project-id"
credentials: |
{
// Your service account JSON key
}
```
**For HashiCorp Vault:**
* [ ] Vault address/endpoint
* [ ] Role ID
* [ ] Secret ID
* [ ] Namespace (if using HCP Vault: `admin`)
```yaml
# Values.yaml
gcpSecretManager:
enabled: false
awsSecretManager:
enabled: false
vault:
enabled: true
address: "https://vault-cluster.hashicorp.cloud:8200"
namespace: "admin" # Required for HCP Vault
roleId: "your-role-id"
secretId: "your-secret-id"
```
**For AWS Secret Manager:**
* [ ] AWS Region
* [ ] AWS Access Key ID
* [ ] AWS Secret Access Key
```yaml
# Values.yaml
vault:
enabled: false
gcpSecretManager:
enabled: false
awsSecretManager:
enabled: true
region: "your-aws-region"
accessKeyId: "your-access-key-id"
secretAccessKey: "your-secret-access-key"
```
Make sure to:
1. Enable only one secret management solution
2. Explicitly disable all other secret management options by setting `enabled: false`
3. Provide all required values for your chosen solution
## Validation
```bash
# Set environment variables
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
export PROJECT_ID="your-project-id"
# Verify access
gcloud secrets list --project=$PROJECT_ID
```
```bash
# Set environment variables
export VAULT_ADDR="your-vault-address"
export VAULT_NAMESPACE="admin" # For HCP Vault
export VAULT_ROLE_ID="your-role-id"
export VAULT_SECRET_ID="your-secret-id"
# Verify access
vault write auth/approle/login \
role_id=$VAULT_ROLE_ID \
secret_id=$VAULT_SECRET_ID
```
```bash
# Set environment variables
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="your-aws-region"
# Verify access (requires AWS CLI)
aws secretsmanager list-secrets
```
## Troubleshooting
### GCP Secret Manager Issues
* Verify service account permissions
* Check credentials file format
* Confirm API is enabled
* Validate project ID
### Vault Issues

* Verify Vault address
* Check network access
* Confirm TLS settings
* Validate namespace (HCP)
### AWS Secret Manager Issues
* Verify IAM permissions
* Check access key validity
* Confirm region setting
* Validate network access
Need help? Contact [support@settlemint.com](mailto:support@settlemint.com) if
you encounter any issues.
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/storage.mdx
meta: {
"title": "Storage",
"description": "Setup and configure S3-compatible storage for your self-hosted platform"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Overview
### Purpose
* Platform assets storage
* Blockchain data persistence
* File management
* State storage
### Key Features
* Built-in redundancy
* Automatic scaling
* Global availability
* Integrated monitoring
## Deployment options
### AWS S3 (Native)
### Create S3 bucket
* Choose region
* Enable versioning
* Configure default encryption
### Create IAM user
* Generate access key and secret
* Attach minimal required permissions
### Digital Ocean Spaces
### Setup Spaces
* Access Digital Ocean Console
* Create new Spaces bucket:
* Choose datacenter region
* Configure CDN (optional)
* Create Spaces access key
### Azure Blob Storage
### Create Storage Account
* Go to Azure Portal
* Create new Storage Account
* Select performance tier and redundancy
* Enable hierarchical namespace (recommended)
### Create Container
* Navigate to Storage Account
* Create new container
* Set access level (private recommended)
### Get Access Credentials
* Generate Shared Access Signature (SAS)
* Or use Storage Account access keys
* Note the connection string
**Azure Blob Storage offers:**
* Geo-redundant storage options
* Integration with Azure AD
* Built-in disaster recovery
* Pay-as-you-go pricing
### Google Cloud Storage
### Create Storage Bucket
* Go to Google Cloud Console
* Create new bucket
* Choose location type
* Set storage class
* Configure access control
### Set up Service Account
* Create new service account
* Generate JSON key file
* Assign Storage Object Admin role
* Download credentials
**GCP Storage benefits:**
* Multi-regional deployment
* Object lifecycle management
* Strong consistency
* Integrated security controls
### MinIO Installation
### Install MinIO
```bash
helm upgrade --install minio oci://registry-1.docker.io/bitnamicharts/minio \
--namespace minio \
--version 13.8.4 \
--create-namespace \
--set defaultBuckets=platform-bucket \
--set auth.rootUser=admin \
--set auth.rootPassword=your-secure-password \
--set provisioning.enabled=true \
--set "provisioning.config[0].name=region" \
--set "provisioning.config[0].options.name=us-east-1"
```
### Create Service Account
```bash
mc admin user svcacct add minio platform-user
```
**For Production Use:**
* Configure proper storage class
* Set up backup procedures
* Enable encryption
* Configure monitoring
## State encryption
### Generate encryption key

```bash
openssl rand -base64 32
```
Store this encryption key securely - it's used to protect platform state data.
## Information collection
### Required values for platform installation
**AWS S3:**

* [ ] S3 endpoint URL (e.g., s3.amazonaws.com)
* [ ] Bucket name
* [ ] Access key ID
* [ ] Secret access key
* [ ] Region (e.g., us-east-1)
* [ ] State encryption key

**Azure Blob Storage:**

* [ ] Storage account name
* [ ] Container name
* [ ] Storage account key
* [ ] State encryption key

**Google Cloud Storage:**

* [ ] Project ID
* [ ] Bucket name
* [ ] Service account credentials (JSON)
* [ ] State encryption key

**MinIO:**

* [ ] MinIO endpoint URL
* [ ] Bucket name
* [ ] Access key
* [ ] Secret key
* [ ] Region
* [ ] State encryption key
```yaml
deploymentEngine:
state:
# AWS S3
connectionUrl: 's3://bucket-name?region=us-east-1&endpoint=s3.amazonaws.com'
# Azure Blob Storage
connectionUrl: 'azblob://container-name'
# Google Cloud Storage
connectionUrl: 'gs://bucket-name'
credentials:
encryptionKey: 'your-generated-key' # From openssl command
# AWS Credentials
aws:
accessKeyId: 'your-access-key'
secretAccessKey: 'your-secret-key'
region: 'us-east-1'
# Azure Credentials
azure:
storageAccount: 'storage-account-name'
storageKey: 'storage-account-key'
# GCP Credentials
google:
project: 'project-id'
credentials: |
{
"type": "service_account",
"project_id": "your-project",
"private_key_id": "key-id",
"private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
"client_email": "service-account@project.iam.gserviceaccount.com",
"client_id": "client-id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account@project.iam.gserviceaccount.com"
}
```
## Validation
### Test AWS S3
```bash
AWS_ACCESS_KEY_ID=your-access-key \
AWS_SECRET_ACCESS_KEY=your-secret-key \
aws s3 ls s3://your-bucket \
  --endpoint-url your-endpoint
```
### Test azure storage
```bash
az storage blob list \
--container-name your-container \
--account-name your-storage-account \
--account-key your-storage-key
```
### Test google cloud storage
```bash
gsutil ls gs://your-bucket
```
Make sure you have installed and configured the respective CLI tools:

* AWS CLI: `aws configure`
* Azure CLI: `az login`
* Google Cloud CLI: `gcloud auth login`
## Troubleshooting
Need help? Contact [support@settlemint.com](mailto:support@settlemint.com) if
you encounter any issues.
file: ./content/docs/launching-the-platform/self-hosted-onprem/prerequisites/terraform.mdx
meta: {
"title": "Terraform installation (optional)",
"description": "Optional quick setup using Terraform for testing environments"
}
import { Callout } from "fumadocs-ui/components/callout";
import { Card } from "fumadocs-ui/components/card";
import { Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
### Quick setup only
This Terraform-based installation is designed for quick setup and testing environments only. For production deployments, we strongly recommend following the manual installation process to properly configure and secure each component according to your organization's requirements.
**Key limitations:**
* Components run locally in the cluster without High Availability
* Basic security configurations
* Limited customization options
* Not suitable for production workloads
## Required apis
### Enable Container API
Visit: `https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=<your-project-id>`
### Enable cloud kms api
Visit:
`https://console.developers.google.com/apis/api/cloudkms.googleapis.com/overview?project=<your-project-id>`
## Iam permissions
### Recommended for Quick Setup
* `Owner` role
### Required Roles
* `Editor`
* `Cloud KMS Admin`
* `Project IAM Admin`
* `Kubernetes Engine Admin`
* `Service Account Admin`
## Installation steps
### Clone Repository
```bash
git clone git@github.com:settlemint/tutorial-btp-on-gcp.git
```
### Set environment variables
```bash
# DNS zone (subdomain) for platform access
export TF_VAR_gcp_dns_zone='YOUR_DNS_ZONE'
# Your GCP project ID
export TF_VAR_gcp_project_id='YOUR_GCP_PROJECT_ID'
# Target GCP region
export TF_VAR_gcp_region='YOUR_GCP_REGION'
# OAuth credentials
export TF_VAR_gcp_client_id='YOUR_GCP_CLIENT_ID'
export TF_VAR_gcp_client_secret='YOUR_GCP_CLIENT_SECRET'
# Registry credentials (provided by SettleMint)
export TF_VAR_oci_registry_username='YOUR_REGISTRY_USERNAME'
export TF_VAR_oci_registry_password='YOUR_REGISTRY_PASSWORD'
export TF_VAR_btp_version='BTP_VERSION'
```
## Dns zone setup
### Navigate to DNS Zone Directory
```bash
cd tutorial-btp-on-gcp/00_dns_zone
```
### Create dns zone
```bash
terraform init
terraform apply
```
### Configure domain registrar
Add NS records for your subdomain (e.g., btp.settlemint.com) pointing to Google
nameservers:
* ns-cloud-a1.googledomains.com
* ns-cloud-a2.googledomains.com
* ns-cloud-a3.googledomains.com
* ns-cloud-a4.googledomains.com
### Verify dns delegation
```bash
dig NS btp.settlemint.com
```
## Platform infrastructure setup
### Navigate to Infrastructure Directory
```bash
cd ../01_infrastructure
```
### Deploy infrastructure
```bash
terraform init
terraform apply
```
## Cleanup
### Remove Resources
```bash
terraform destroy
```
You may need to run the destroy command twice if the first attempt fails.
## Next steps
### Access Platform
Visit `https://btp.<your-domain>`
### Complete setup
Follow the initial setup wizard
### Review documentation
Check the
[platform documentation](/launching-the-platform/self-hosted-onprem/introduction)
## Troubleshooting
### Common Issues
* Verify all environment variables are set correctly
* Ensure DNS delegation is complete (can take up to 48 hours)
* Check Terraform logs for specific error messages
### Get Help
* Review error messages in detail
* Check GCP quotas and limits
* Contact [support@settlemint.com](mailto:support@settlemint.com)
The Terraform installation is designed for demonstration and testing. For
production deployments, we recommend following the manual installation process
to configure each component according to your specific requirements.
file: ./content/docs/use-case-guides/template-libraries/evm-smart-contracts/erc20.mdx
meta: {
"title": "ERC20 token"
}
ERC-20 tokens are blockchain-based assets, issued on the Ethereum network, that
have value and can be sent and received. These tokens are fungible: every token
of a given type has identical properties and equal value, so any one token can
be exchanged for any other. For example, Alice's ERC-20 token is exactly the
same as Bob's, and they can swap tokens without any loss of value.
Examples of fungible assets are currencies, stocks of a company, bonds, gold and
other precious metals.
The ERC-20 smart contract is the perfect standard to organize a crowdsale and
let companies raise funds to launch their project.
## ERC-20 smart contract features
An ERC-20 smart contract is used to create fungible tokens and bring them to the
blockchain. This process is called minting. It also keeps track of the total
supply as well as the balances of users as they exchange their tokens.
The ERC-20 smart contract on the SettleMint platform has the following features:
* Custom name, symbol and initial supply that can be chosen by the user.
* Minting capabilities that let the admin of the smart contract mint (i.e.,
  create) new tokens.
* Pausable capabilities that let the admin pause the contract in case of
emergency.
* Burnable capabilities that let users burn (i.e., destroy) their tokens.
By default, the account that deploys the ERC-20 smart contract gets 1,000,000
tokens. You can change this behaviour by modifying the **“constructor”** in
**“GenericToken.sol”**. If you do not mint tokens in the constructor, make sure
to mint some after the deployment.
```solidity
contract GenericToken is ERC20, ERC20Burnable, Pausable, AccessControl {
    constructor(string memory name_, string memory symbol_) ERC20(name_, symbol_) {
        _setupRole(DEFAULT_ADMIN_ROLE, msg.sender);
        _mint(msg.sender, 1000000 * 10**decimals());
    }
    // ...
}
```
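If you remove the `_mint` call from the constructor, you can mint the initial
supply afterwards instead. Below is a minimal sketch of such a post-deployment
mint, assuming the template exposes an admin-only `mint(address,uint256)`
function and the hardhat-deploy ethers helpers used elsewhere in the template
set; adapt it to the actual ABI.

```typescript
// Hypothetical post-deployment mint script; the "GenericToken" deployment name
// and the mint() function are assumptions based on the template description.
import { ethers } from "hardhat";

async function mintInitialSupply() {
  const [admin] = await ethers.getSigners();
  // getContract comes from the hardhat-deploy ethers extension.
  const token = await ethers.getContract("GenericToken", admin);
  // Mint 1,000,000 tokens (18 decimals) to the admin address.
  const tx = await token.mint(admin.address, ethers.utils.parseUnits("1000000", 18));
  await tx.wait();
}

mintInitialSupply().catch(console.error);
```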
## Deploying an ERC-20 smart contract
To set the name and symbol for your token, go to the **“deploy”** folder and in
**“00\_Deploy\_GenericToken.ts”**, change the values in **“args”** in the
**“deploy”** function.
```typescript
await deploy("GenericToken", {
from: deployer,
args: ["GenericToken", "GT"],
log: true,
});
```
As soon as you are happy with the changes you made, just click on **“deploy”**
in the **“task runner”** of the IDE and after a few seconds, your ERC-20 smart
contract should be deployed on the network of your choice.
The **“GenericToken.ts”** script in the **“test”** folder showcases all the
functionalities of the ERC-20 standard. It shows you how to use the smart
contract in your dapp.
## ERC-20 with meta transactions
The SettleMint platform also provides an ERC-20 template set with meta
transaction capabilities. Meta transactions are used to fill the need for
EVM-based contracts to accept transactions from externally owned accounts that
do not have ETH to pay for gas. In short, implementing such an interface removes
the need for the end user to pay for gas. With meta transactions, gas is paid by
a `gas relayer`, and a smart contract known as a `trusted forwarder` forwards the
transactions to the recipient contract, i.e. the one that the end user wants to
interact with in the first place.
### Setting up meta transactions
When looking at the `GenericTokenMeta.sol` smart contract in the IDE, we can see
that the main differences with the basic ERC-20 are the following:
```solidity
...
constructor(
...
address trustedForwarder_
) ERC20(name_, symbol_) ERC2771Context(trustedForwarder_)
...
function _msgSender() internal view override(Context, ERC2771Context) returns (address sender) {
sender = ERC2771Context._msgSender();
}
function _msgData() internal view override(Context, ERC2771Context) returns (bytes calldata) {
return ERC2771Context._msgData();
}
```
Let’s unpack this:
1. In the constructor, we pass an Ethereum address, the `trustedForwarder`, to
   the `ERC2771Context` constructor. This enables the smart contract to accept
   transactions coming from the `trusted forwarder`.
2. The `_msgSender()` function is effectively an alias for `msg.sender`. When
   called, it returns `msg.sender` for regular transactions, but for meta
   transactions it returns the end user (rather than the `relayer` or the
   `trusted forwarder`).
3. The `_msgData()` function is similarly an alias for `msg.data`. For meta
   transactions it returns the raw transaction data from the perspective of the
   end user rather than the `relayer`.
### Sending a meta transaction to the ERC-20 contract
Sending a meta transaction is slightly different than sending a regular
transaction, but the template set comes with an example in which 10 tokens are
transferred between two wallets without ETH.
First, to send meta transactions using the `forwarder`, we have to define three
objects called `EIP712Domain`, `domain` and `types` as follows:
```typescript
const EIP712Domain = [
{ name: "name", type: "string" },
{ name: "version", type: "string" },
{ name: "chainId", type: "uint256" },
{ name: "verifyingContract", type: "address" },
];
const domain = {
name: "MinimalForwarder",
version: "0.0.1",
chainId: parseInt(await getChainId()),
verifyingContract: forwarderAddress,
};
const types = {
EIP712Domain,
ForwardRequest: [
{ name: "from", type: "address" },
{ name: "to", type: "address" },
{ name: "value", type: "uint256" },
{ name: "gas", type: "uint256" },
{ name: "nonce", type: "uint256" },
{ name: "data", type: "bytes" },
],
};
```
The name and version of domain have to match those of the forwarder (see the
contract `Forwarder.sol`).
Then, we need to generate the function data as follows:
```typescript
const functionData = token.interface.encodeFunctionData("transfer", [
walletTwoAddress,
ethers.utils.parseUnits("10"),
]);
```
In that expression, `transfer` is the ERC-20 function we want to execute,
`walletTwoAddress` is the account that will receive the tokens and the last
parameter is the amount of tokens to be transferred.
The last step before sending the meta transaction is to create and sign the
message containing the underlying transaction as follows:
```typescript
const walletOneNonce = Number(
await read("Forwarder", "getNonce", walletOneAddress)
);
const req = {
from: walletOneAddress,
to: token.address,
value: "0",
gas: "100000",
nonce: walletOneNonce,
data: functionData,
};
const signedData = ethSigUtil.signTypedData({
privateKey: walletOne.getPrivateKey(),
data: {
types: types,
domain: domain,
primaryType: "ForwardRequest",
message: req,
},
version: ethSigUtil.SignTypedDataVersion.V4,
});
```
Finally, once the transaction is signed, we can send it to the `forwarder`:
```typescript
await forwarder.execute(req, signedData, { gasLimit: "100000" });
```
## ERC-20 Crowdsale
Crowdsales allow the participants of a network to purchase tokens, usually in
exchange for ether. A crowdsale can take many different forms, and our powerful
templateset allows you the flexibility to shape and deploy the crowdsale
according to your needs.
### Stage 1: Creating a supply of the tokens being sold
This is done in the templateset by deploying an ERC-20 contract.
### Stage 2: Configuring and deploying the crowdsale
When deploying a crowdsale, there are different specifications to be considered:
* Crowdsale rates - price of the token being sold
* Validation - Who can actually purchase the tokens?
* Distribution - When does the distribution of the tokens actually take place?
* Token emission - Who actually transfers the tokens to the beneficiaries?
* Phases of the crowdsale - Are all the tokens going to be distributed in one go
or are they going to be distributed in different phases?
The ERC-20 crowdsale templateset we provide is designed to give you full
flexibility to modify these different parameters according to your requirements.
#### Price of the token being sold
In our Crowdsale templateset, you set the price of the token via the `_usdRate`
field on the `CrowdSale` contract.
Its value is the number of tokens one USD buys.
#### Validation
Validation refers to ensuring that the buyers meet certain conditions before
they can purchase tokens.
Our templateset provides KYC / AML whitelisting capabilities out of the box. The
buyers must be whitelisted before they can purchase tokens.
This is implemented using OpenZeppelin’s `AccessControl`. The address with the
`DEFAULT_ADMIN_ROLE` grants the `WHITELISTED_ROLE` to buyers before they can
purchase tokens.
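As an illustration, whitelisting a buyer from a script could look like the
sketch below. It assumes the deployed contract is registered as `CrowdSale`
with hardhat-deploy and exposes the `WHITELISTED_ROLE` constant alongside the
standard `AccessControl` `grantRole` function.

```typescript
// Hypothetical whitelisting script; the deployment name and role constant
// are assumptions based on the description above.
import { ethers } from "hardhat";

async function whitelistBuyer(buyer: string) {
  const [admin] = await ethers.getSigners(); // must hold DEFAULT_ADMIN_ROLE
  const crowdsale = await ethers.getContract("CrowdSale", admin);
  const role: string = await crowdsale.WHITELISTED_ROLE();
  await (await crowdsale.grantRole(role, buyer)).wait();
}
```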
#### Purchase of tokens
The buyers can be allocated the tokens in two ways.
In the first method, whitelisted buyers send ETH to the contract directly. The
equivalent amount of tokens is calculated automatically: the ETH is first
converted to USD using a Chainlink price oracle, and the USD amount is then
converted to tokens using the `_usdRate` set in the crowdsale.
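To make the conversion concrete, here is an off-chain sketch of the same math,
assuming a Chainlink ETH/USD feed with 8 decimals and an 18-decimal token; the
exact scaling used on-chain may differ.

```typescript
// Illustrative mirror of the ETH -> USD -> tokens conversion.
import { BigNumber } from "ethers";

function tokensForEth(
  weiAmount: BigNumber,   // ETH sent, in wei (18 decimals)
  ethUsdPrice: BigNumber, // Chainlink answer, 8 decimals
  usdRate: BigNumber      // tokens per USD (_usdRate)
): BigNumber {
  // USD value with 8 decimals = wei * price / 1e18
  const usdValue = weiAmount.mul(ethUsdPrice).div(BigNumber.from(10).pow(18));
  // Token amount with 18 decimals = usdValue * usdRate, rescaled 8 -> 18 decimals
  return usdValue.mul(usdRate).mul(BigNumber.from(10).pow(10));
}

// Example: 1 ETH at $2,000 with _usdRate = 250 yields 500,000 tokens.
```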
Alternatively, the admin of the crowdsale can directly allocate tokens to
certain buyers by calling the `externalBuyTokens` function. This function can
only be called by addresses which have been granted `DEFAULT_ADMIN_ROLE`. Here
the tokens get transferred from the sender of the transaction to the beneficiary
address listed as a parameter. The reason for providing such a function is to
support allocating tokens to buyers:
* who do not know how to send ETH to a contract, or
* who pay in forms other than ETH. For example, fiat payments can be supported:
  the buyer sends fiat to the crowdsale admin, who then calls this function to
  allocate the equivalent amount of tokens to the buyer.
#### Distribution
Our templateset gives you flexibility over when you want to actually credit the
tokens to the beneficiary.
This can be done immediately after the tokens have been purchased, or a certain
amount of time after the purchase, called the vesting period.
To transfer the tokens immediately, set the `_vestingEndDate` field on the
`CrowdSale` contract to `0` while deploying the contract.
When the vesting end date is not set, the tokens purchased get transferred
immediately to the beneficiary’s address.
Transferring the tokens a certain amount of time after the purchase is achieved
using two pieces:
* setting the `_vestingEndDate` on the contract to the timestamp at the end of
  the vesting period, and
* deploying a `VestingVault` contract and initializing the `_vestingVault` field
  on the `CrowdSale` contract to its address.
The beneficiary in this case is the `VestingVault` contract. All the tokens
purchased by buyers get stored in the `VestingVault` contract.
A point to note here is that to store tokens in the `VestingVault` contract, the
sender of the transaction must hold the `VAULT_CONTROLLER_ROLE` on the
`VestingVault`. This can be seen in the deploy steps, explained below.
The buyers withdraw the tokens they bought after the vesting period has ended by
calling the `release` method on the `VestingVault` of the crowdsale contract.
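A buyer-side sketch of that withdrawal is shown below, assuming a no-argument
`release()` callable by the beneficiary once `_vestingEndDate` has passed;
check the actual signature in the templateset before using it.

```typescript
// Hypothetical release script for a vested buyer; the deployment name and
// release() signature are assumptions.
import { ethers } from "hardhat";

async function releaseVestedTokens() {
  const [buyer] = await ethers.getSigners();
  const vault = await ethers.getContract("VestingVault", buyer);
  await (await vault.release()).wait();
}
```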
#### Token emission
Token emission refers to the actual transaction of transfer of tokens to the
beneficiary. In our templateset, the tokens are transferred from the crowdsale
contract itself to the beneficiary.
The actual transfer from the contract to the beneficiary happens in the
`deliverTokens` function on the `CrowdSale` contract. This is the standard
method for token emission.
There are two other patterns of token emission; you can read more about them
here:
[https://docs.openzeppelin.com/contracts/2.x/crowdsales#token-emission](https://docs.openzeppelin.com/contracts/2.x/crowdsales#token-emission)
#### Phases of the crowdsale
Crowdsales fall into two broad categories: crowdsales where all the tokens are
distributed in one sale, and crowdsales with multiple phases, where each phase
usually sells the token at a different price and caps the number of tokens that
can be sold.
To run a crowdsale where there is only one phase, you need to deploy only one
`CrowdSale` contract.
To orchestrate a crowdsale with multiple phases, you need to deploy multiple
`CrowdSale` contracts. Here, you deploy a `CrowdSale` contract for each phase.
#### Deploy steps
Now that we know the different specifications needed for a crowdsale, we will
walk you through how we have configured them and deployed the templateset
example.
We are deploying an `ExampleCrowdSale` which is going to sell `ExampleToken`s at
the rate of 250 tokens for 1 USD. The tokens are not going to be transferred to
the beneficiary directly; they are going to be vested for 30 months in the
`ExampleVestingVault`.
Here we are going to deploy:
1. An ERC20 token, the token to be sold. In step `00_deploy_token` we deploy
   the `ExampleToken`, which is the actual token to be sold.
2. A `VestingVault` contract to store the tokens for the vesting period. In step
   `02_deploy_vestingvault` we deploy the `ExampleVestingVault` to store the
   tokens for the vesting period.
3. A `CrowdSale` contract, which is the actual crowdsale. In step
   `03_deploy_crowdsale` we deploy the `ExampleCrowdSale` contract. We pass the
   address of the token to be sold and the address of the vesting vault, along
   with other parameters like the USD rate.
4. Enabling the `ExampleCrowdSale` to store tokens in the `ExampleVestingVault`.
   In step `04_enable_crowdsale` we grant the `ExampleCrowdSale` a
   `VAULT_CONTROLLER_ROLE`, which allows the crowdsale to store tokens in the
   vesting vault.
5. A transfer of tokens to the `ExampleCrowdSale` to start the crowdsale
   process. We transfer 100 million Example Tokens to the crowdsale to be sold.
## Integration with the Middleware
Working with complex or large data in your dApp can be a challenge. In the
SettleMint platform we provide you with a
[middleware solution](/building-with-settlemint/evm-chains-guide/setup-graph-middleware)
that allows you to index and query this data easily and efficiently.
file: ./content/docs/use-case-guides/template-libraries/evm-smart-contracts/erc721.mdx
meta: {
"title": "ERC721 token"
}
ERC-721 tokens are blockchain-based assets, issued on the Ethereum network, that
have value and can be sent and received. Contrary to ERC-20 tokens, ERC-721
tokens are non-fungible, meaning that two tokens from the same smart contract
are not equivalent.
Non-fungible tokens, or NFTs, are digitally unique: no two NFTs are the same.
For example, if Alice and Bob exchange their NFTs, one of them might feel
unlucky as their new token is worth less than their previous one. NFTs give the
ability to assign or claim ownership of any unique piece of digital data,
trackable on the blockchain. An NFT can be created from digital objects, as a
representation of digital or non-digital assets.
Examples of what an NFT can represent are real estate properties, collectibles,
event tickets, music videos, and artwork.
The SettleMint platform comes with three ERC-721 contract sets.
* The first one, simply called **ERC-721 Token**, has all the functionalities to
create the token, but it has no specific asset attached to it. It is up to you
to create one. The optimised **ERC-721a Token** provides significant gas
savings for minting multiple NFTs in a single transaction.
* The second set, called **ERC-721 trading cards**, shows you how you can create
  trading cards with different scarcities.
* Finally, the third set, called **ERC-721 Generative Art**, demonstrates how
you can automatically create images by combining several layers of assets.
This is the process that was used to create famous NFT collections such as the
Bored Ape Yacht Club or CryptoPunks.
The trading cards and the generative art sets are extensions of the ERC-721
Token set. The specific features related to these two sets are presented in
their respective sections.
## ERC-721 smart contract features
An ERC-721 smart contract is used to create non-fungible tokens and bring them
to the blockchain.
The process of creating an ERC-721 has a few distinct phases. The smart contract
sets define one such process, which is what we describe below. This is by no
means the only way to run your ERC-721 project; if you do not plan to follow the
playbook below, you can still use it to set up your own flow easily.
### Phase 0: Image generation
#### Generative Art
The image generation code for the generative art set is based on the
[Hashlips Art Engine](https://github.com/HashLips/hashlips_art_engine); please
check out the README file in the `art_engine` folder for usage instructions.
In short, replace the images in the `art_engine/layers` folder, change the
settings in the `art_engine/src/config.js` file, and run `yarn artengine:build`
to generate your images. Rinse and repeat until you are happy with the result.
Note that the generated images are randomized to prevent streaks of similar
images; this can be configured in the `art_engine/src/config.js` file.
If you want to use the engine to generate a preview image, run
`yarn artengine:preview` for a static image and `yarn artengine:preview_gif` for
a gif.
Using `yarn artengine:rarity` you can check the rarity of each generated image.
If you want to pixelate your images, use `yarn artengine:pixelate`, the settings
are again in the `art_engine/src/config.js` file.
Note that the generated metadata does not have a real base URI set; after we
have uploaded everything to IPFS, we can set it in the `art_engine/src/config.js`
file and update all the metadata using `yarn artengine:update_info`.
The end result looks like this:

```json
{
"name": "thumbzup #419",
"image": "ipfs://bafybeihroeexeljv5yoyum2x4jz6riuqp6xwg6y7cg7jaumcdpyrjxg5zi",
"attributes": [
{
"trait_type": "background",
"value": "yellow"
},
{
"trait_type": "body",
"value": "thumb"
},
{
"trait_type": "face",
"value": "happy"
},
{
"trait_type": "hair",
"value": "long brown hair"
},
{
"trait_type": "accessories",
"value": "sunglasses"
}
]
}
```
#### Trading Cards
The image generation code for Trading Cards is based on a Hardhat task found in
the `tasks` folder. This task is written specifically for the cards in this
example project, but it should be fairly simple to adapt it to your needs.
In short, replace the images in the `assets/layers` folder and change the logic
in the `task/generate-assets.ts` file. To generate the trading cards, execute
`yarn artengine:build --common 10 --limited 5 --rare 2 --unique 1 --ipfsnode `.
The IPFS node key can be found in `.secrets/default.hardhat.config.ts`.
The end result would look like this:

```json
{
"name": "Aiko (#1/1)",
"description": "Aiko can express more with his tail in seconds than his owner can express with his tongue in hours.",
"image": "ipfs://bafybeia5truvedhrtdfne3qmoh3tvsvpku6h4airpku6eqvcmrfoja7h4m",
"attributes": [
{
"trait_type": "Serial Number",
"value": 1,
"max_value": 1,
"display_type": "number"
},
{
"trait_type": "Breed",
"value": "English Cocker Spaniel"
},
{
"trait_type": "Shedding",
"value": 3,
"max_value": 5,
"display_type": "number"
},
{
"trait_type": "Affectionate",
"value": 5,
"max_value": 5,
"display_type": "number"
},
{
"trait_type": "Playfulness",
"value": 3,
"max_value": 5,
"display_type": "number"
},
{
"trait_type": "Floof",
"display_type": "boost_number",
"value": 100
},
{
"trait_type": "Birthday",
"value": 1605465513,
"display_type": "date"
}
]
}
```
### Phase 1: Initial Setup
The first step of the process is to deploy the ERC721 contract and claim the
reserve tokens.
Reserves are an initial amount of tokens that are created at the start of the
sale. They are typically used to generate tokens for the team members and to
mint tokens for later use (e.g. for marketing purposes).
During this setup phase, some of the important parameters of the sale and
collection are set. In the contract look for the `Configuration` section and
tweak the parameters as needed.
```solidity
//////////////////////////////////////////////////////////////////
// CONFIGURATION //
//////////////////////////////////////////////////////////////////
uint256 public constant RESERVES = 5; // amount of tokens for the team, or to sell afterwards
uint256 public constant PRICE_IN_WEI_WHITELIST = 0.0069 ether; // price per token in the whitelist sale
uint256 public constant PRICE_IN_WEI_PUBLIC = 0.0420 ether; // price per token in the public sale
uint256 public constant MAX_PER_TX = 6; // maximum amount of tokens one can mint in one transaction
uint256 public constant MAX_SUPPLY = 100; // the total amount of tokens for this NFT
```
Furthermore, the collection will be launched without exposing any of the
metadata or art, leaving the reveal for after the public sale. In the
`assets/placeholder` folder, modify the artwork and metadata which will be
exposed until the reveal.
Also make sure to go through the files in the `deploy` folder to change any of
the values to match your project.
When you are happy with the setup, you can deploy the contract and claim the
reserves by running:
```bash
yarn smartcontract:deploy:setup
```
### Phase 2: Building the whitelist
To have a successful launch, you will engage in a lot of marketing efforts and
community building. Typically, before the actual sale, various marketing actions
are taken to build a whitelist. This list allows people to buy in before the
public sale; adding a person to the whitelist should be close to a concrete
commitment to the sale.
The whitelist process is built to be very gas efficient using
[Merkle Trees](https://medium.com/@ItsCuzzo/using-merkle-trees-for-nft-whitelists-523b58ada3f9).
You start by filling the `assets/whitelist.json` file with the addresses of the
whitelisted participants and the amount they can buy in the pre-sale.
When you have enough commitments, we build the Merkle Tree, generate all the
proofs and store the Merkle Root in the contract.
```bash
yarn smartcontract:deploy:whitelist
```
This will export the proofs needed to include in your dAPP in the
`./assets/generated/whitelist.json` file. Your dAPP will provide a page where
the participants connect their wallet. Using the address of the wallet, you can
load the proofs and allowances from this JSON file. The dAPP then displays a
form where the participant can choose, up to a maximum of their allowance, how
many tokens they want to buy. Pressing the submit button triggers a transaction
to the `whitelistMint` function with all the parameters filled in and the
correct amount of ETH/MATIC/etc. as a value. The user signs this transaction in
their wallet and the transaction is sent to the network.
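A dApp-side sketch of that flow is shown below. It assumes the generated
`whitelist.json` maps each address to a `{ proof, allowance }` entry and that
`whitelistMint` takes the count, allowance, and proof; adapt the call to the
actual ABI of your contract.

```typescript
// Hypothetical whitelist mint call from the dApp.
import { ethers } from "ethers";
import whitelist from "./assets/generated/whitelist.json";

async function mintFromWhitelist(
  signer: ethers.Signer,
  nft: ethers.Contract,
  count: number
) {
  const address = await signer.getAddress();
  const entry = (whitelist as Record<string, { proof: string[]; allowance: number }>)[address];
  const pricePerToken = ethers.utils.parseEther("0.0069"); // PRICE_IN_WEI_WHITELIST
  await nft
    .connect(signer)
    .whitelistMint(count, entry.allowance, entry.proof, { value: pricePerToken.mul(count) });
}
```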
To display the state of the sale, the items minted and the items left, use the
GraphQL endpoint from The Graph node you can launch in the SettleMint platform.
### Phase 3: Opening up the pre-sale
As soon as you execute the following command, the pre-sale is live.
```bash
yarn smartcontract:deploy:presale
```
### Phase 4: Opening up the public sale
As soon as you execute the following command, the pre-sale is terminated and the
public sale is live.
```bash
yarn smartcontract:deploy:publicsale
```
### Phase 5: The big reveal
At some point during the process, you will want to reveal the metadata. Some
projects choose to reveal immediately, others choose to reveal after the
whitelist sale, and others will wait until a point during the public sale or
even after it has concluded.
Revealing the metadata is done by switching the base URI to the final IPFS
folder with `setBaseURI`. This can be followed up by running the following to
freeze the metadata and prevent further changes.
```bash
yarn smartcontract:deploy:reveal
```
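For reference, the direct contract call performed by that script could look
like this sketch, assuming a `setBaseURI(string)` function as named above and
the hardhat-deploy helpers; the deployment name and CID placeholder are
hypothetical.

```typescript
// Hypothetical manual reveal, equivalent to what the deploy script automates.
import { ethers } from "hardhat";

async function reveal() {
  const [owner] = await ethers.getSigners();
  const nft = await ethers.getContract("ERC721Token", owner);
  // Point the metadata at the final IPFS folder; token URIs resolve as baseURI + tokenId.
  await (await nft.setBaseURI("ipfs://<final-metadata-cid>/")).wait();
}
```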
## Integration with the Middleware
Working with complex or large data in your dApp can be a challenge. In the
SettleMint platform we provide you with a
[middleware solution](/building-with-settlemint/evm-chains-guide/setup-graph-middleware)
that allows you to index and query this data easily and efficiently.
file: ./content/docs/use-case-guides/template-libraries/evm-smart-contracts/health-records.mdx
meta: {
"title": "Health Records"
}
A healthcare smart contract is a blockchain-based program designed to automate
and securely manage various processes within the healthcare ecosystem, such as
patient data access, provider accreditation, insurance claim processing, and
treatment record keeping. These contracts enforce logic through code, ensuring
that only authorized entities can perform sensitive actions like registering
patients, submitting medical claims, or accessing health records. By using
cryptographic consent and role-based access control, healthcare smart contracts
give patients greater control over their data while reducing the administrative
overhead for providers and insurers. All interactions are logged immutably,
providing transparency and accountability for regulators and auditors.
The use of healthcare smart contracts supports a wide range of applications,
including the creation of national electronic health record systems, streamlined
insurance claim workflows, public health campaign tracking, and secure sharing
of health credentials. These contracts help address major challenges in the
healthcare sector such as data fragmentation, fraud, manual processing delays,
and lack of traceability. They ensure compliance with legal frameworks by
encoding privacy rules directly into the system and by enabling real-time
auditability of care delivery and financial transactions.
## Disclaimer
This smart contract implementation is provided for educational and illustrative
purposes only. It represents a conceptual framework for blockchain-based
healthcare systems.
Key considerations before any production use:
* **Legal Compliance**: Healthcare systems are highly regulated and vary
significantly by jurisdiction. This implementation would require substantial
modification to meet specific legal requirements in any given location.
* **Security Review**: The contract should undergo comprehensive security
auditing by qualified blockchain security professionals before handling real
user or financial transactions.
* **No Warranty**: The authors and contributors disclaim all liability for any
use of this code. Users assume all risks associated with implementation and
operation.
* **Consultation Required**: Any organization considering using this template
  should obtain advice from qualified legal counsel and blockchain security
  experts before deployment.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;
import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";
import "@openzeppelin/contracts-upgradeable/proxy/utils/UUPSUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/access/AccessControlUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/utils/CountersUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/security/PausableUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/security/ReentrancyGuardUpgradeable.sol";
contract NationalHealthcareSystem is
Initializable,
UUPSUpgradeable,
AccessControlUpgradeable,
PausableUpgradeable,
ReentrancyGuardUpgradeable
{
using CountersUpgradeable for CountersUpgradeable.Counter;
// ========== CONSTANTS ==========
bytes32 public constant ADMIN_ROLE = keccak256("ADMIN_ROLE");
bytes32 public constant PROVIDER_ROLE = keccak256("PROVIDER_ROLE");
bytes32 public constant AUDITOR_ROLE = keccak256("AUDITOR_ROLE");
// ICD-10 code length validation
uint256 public constant MIN_DIAGNOSIS_CODE_LENGTH = 3;
uint256 public constant MAX_DIAGNOSIS_CODE_LENGTH = 7;
// ========== STRUCTS ==========
struct Patient {
address walletAddress;
bytes32 nationalIdHash; // SHA-3 hashed national ID
bool isActive;
uint256 registrationDate;
uint256 lastUpdated;
}
struct Provider {
string name;
string licenseNumber;
string providerType; // "HOSPITAL"|"CLINIC"|"LAB"
bool isActive;
bool isSuspended;
uint256 registrationDate;
uint256 lastUpdated;
}
struct Consent {
address providerAddress;
bool isGranted;
uint256 grantDate;
uint256 revokeDate;
string purpose; // "TREATMENT"|"CLAIMS"|"RESEARCH"
}
struct EHR {
string ipfsHash;
address providerAddress;
string documentType; // "PRESCRIPTION"|"LAB_RESULT"|"IMAGE"
uint256 timestamp;
bytes32 dataHash; // Hash of original data for integrity
}
struct InsuranceClaim {
address patientAddress;
address providerAddress;
string diagnosisCode; // ICD-10
uint256 amountRequested;
uint256 amountApproved;
ClaimStatus status;
uint256 submissionDate;
uint256 approvalDate;
uint256 settlementDate;
uint256 denialDate;
string denialReason;
string[] supportingEHRs; // Supporting EHR documents (e.g., IPFS hashes)
}
enum ClaimStatus {
PENDING,
APPROVED,
DENIED,
SETTLED
}
// ========== STATE VARIABLES ==========
CountersUpgradeable.Counter private _patientIds;
CountersUpgradeable.Counter private _claimIds;
mapping(uint256 => Patient) private _patients;
mapping(address => Provider) private _providers;
mapping(address => mapping(address => Consent)) private _consents;
mapping(address => EHR[]) private _patientEHRs;
mapping(uint256 => InsuranceClaim) private _claims;
mapping(bytes32 => bool) private _registeredNationalIds;
mapping(address => uint256) private _addressToPatientId;
mapping(string => uint256) private _licenseToProviderCount;
// New: Array to track provider addresses
address[] private _providerAddresses;
// ========== EVENTS ==========
event PatientRegistered(uint256 indexed patientId, address indexed walletAddress);
event PatientUpdated(uint256 indexed patientId, bool isActive);
event ProviderRegistered(address indexed providerAddress, string providerType);
event ProviderUpdated(address indexed providerAddress, bool isActive, bool isSuspended);
event ConsentGranted(address indexed patientAddress, address indexed providerAddress, string purpose);
event ConsentRevoked(address indexed patientAddress, address indexed providerAddress);
event EHRAdded(address indexed patientAddress, address indexed providerAddress, string documentType, string ipfsHash);
event ClaimSubmitted(uint256 indexed claimId, address indexed patientAddress, string diagnosisCode);
event ClaimApproved(uint256 indexed claimId, uint256 amountApproved);
event ClaimDenied(uint256 indexed claimId, string reason);
event ClaimSettled(uint256 indexed claimId);
event EmergencyPaused(address indexed admin);
event EmergencyUnpaused(address indexed admin);
// ========== MODIFIERS ==========
modifier onlyActiveProvider() {
require(
_providers[msg.sender].isActive && !_providers[msg.sender].isSuspended,
"Provider not active"
);
_;
}
modifier onlyValidPatient(address patientAddress) {
require(_addressToPatientId[patientAddress] != 0, "Patient not registered");
_;
}
modifier onlyWithConsent(address patientAddress) {
require(
_consents[patientAddress][msg.sender].isGranted,
"Consent not granted"
);
_;
}
modifier validDiagnosisCode(string memory code) {
bytes memory codeBytes = bytes(code);
require(
codeBytes.length >= MIN_DIAGNOSIS_CODE_LENGTH &&
codeBytes.length <= MAX_DIAGNOSIS_CODE_LENGTH,
"Invalid diagnosis code"
);
_;
}
// ========== INITIALIZATION ==========
/// @custom:oz-upgrades-unsafe-allow constructor
constructor() {
_disableInitializers();
}
function initialize(address superAdmin) public initializer {
__AccessControl_init();
__UUPSUpgradeable_init();
__Pausable_init();
__ReentrancyGuard_init();
_setupRole(DEFAULT_ADMIN_ROLE, superAdmin);
_setupRole(ADMIN_ROLE, superAdmin);
_setupRole(AUDITOR_ROLE, superAdmin);
// Note: registerPatient increments the counter before assigning an ID, so the
// first valid patientId is 1 and a zero lookup means "not registered".
}
function _authorizeUpgrade(address newImplementation)
internal
override
onlyRole(DEFAULT_ADMIN_ROLE)
{
require(newImplementation != address(0), "Invalid new implementation");
}
// ========== PATIENT REGISTRY (ENHANCED) ==========
function registerPatient(
address walletAddress,
bytes32 nationalIdHash,
bytes calldata governmentSignature
) external onlyRole(ADMIN_ROLE) whenNotPaused nonReentrant {
require(!_registeredNationalIds[nationalIdHash], "Patient already registered");
require(_addressToPatientId[walletAddress] == 0, "Wallet already registered");
require(_verifyGovernmentSignature(walletAddress, nationalIdHash, governmentSignature), "Invalid signature");
_patientIds.increment();
uint256 patientId = _patientIds.current();
_patients[patientId] = Patient({
walletAddress: walletAddress,
nationalIdHash: nationalIdHash,
isActive: true,
registrationDate: block.timestamp,
lastUpdated: block.timestamp
});
_registeredNationalIds[nationalIdHash] = true;
_addressToPatientId[walletAddress] = patientId;
emit PatientRegistered(patientId, walletAddress);
}
function updatePatientStatus(uint256 patientId, bool isActive) external onlyRole(ADMIN_ROLE) {
require(_patients[patientId].walletAddress != address(0), "Patient not found");
_patients[patientId].isActive = isActive;
_patients[patientId].lastUpdated = block.timestamp;
emit PatientUpdated(patientId, isActive);
}
// ========== PROVIDER REGISTRY (ENHANCED) ==========
function registerProvider(
address providerAddress,
string memory name,
string memory licenseNumber,
string memory providerType,
bytes calldata accreditationProof
) external onlyRole(ADMIN_ROLE) whenNotPaused {
require(!_providerExists(providerAddress), "Provider already registered");
require(_verifyAccreditation(providerAddress, licenseNumber, providerType, accreditationProof), "Invalid accreditation");
_providers[providerAddress] = Provider({
name: name,
licenseNumber: licenseNumber,
providerType: providerType,
isActive: true,
isSuspended: false,
registrationDate: block.timestamp,
lastUpdated: block.timestamp
});
_licenseToProviderCount[licenseNumber]++;
_grantRole(PROVIDER_ROLE, providerAddress);
// Track the provider address for consent lookups
_providerAddresses.push(providerAddress);
emit ProviderRegistered(providerAddress, providerType);
}
function suspendProvider(address providerAddress, bool suspend) external onlyRole(ADMIN_ROLE) {
require(_providerExists(providerAddress), "Provider not found");
_providers[providerAddress].isSuspended = suspend;
_providers[providerAddress].lastUpdated = block.timestamp;
emit ProviderUpdated(providerAddress, _providers[providerAddress].isActive, suspend);
}
// ========== CONSENT MANAGEMENT (ENHANCED) ==========
function grantConsent(
address providerAddress,
string memory purpose
) external onlyValidPatient(msg.sender) whenNotPaused {
require(_providerExists(providerAddress), "Provider not found");
require(!_consents[msg.sender][providerAddress].isGranted, "Consent already granted");
_consents[msg.sender][providerAddress] = Consent({
providerAddress: providerAddress,
isGranted: true,
grantDate: block.timestamp,
revokeDate: 0,
purpose: purpose
});
emit ConsentGranted(msg.sender, providerAddress, purpose);
}
function revokeConsent(address providerAddress) external onlyValidPatient(msg.sender) {
require(_consents[msg.sender][providerAddress].isGranted, "No active consent");
_consents[msg.sender][providerAddress].isGranted = false;
_consents[msg.sender][providerAddress].revokeDate = block.timestamp;
emit ConsentRevoked(msg.sender, providerAddress);
}
// ========== EHR MANAGEMENT (ENHANCED) ==========
function addEHR(
address patientAddress,
string memory ipfsHash,
string memory documentType,
bytes32 dataHash
) external onlyActiveProvider onlyWithConsent(patientAddress) whenNotPaused nonReentrant {
_patientEHRs[patientAddress].push(EHR({
ipfsHash: ipfsHash,
providerAddress: msg.sender,
documentType: documentType,
timestamp: block.timestamp,
dataHash: dataHash
}));
emit EHRAdded(patientAddress, msg.sender, documentType, ipfsHash);
}
// ========== CLAIMS MANAGEMENT (ENHANCED) ==========
function submitClaim(
address patientAddress,
string memory diagnosisCode,
uint256 amountRequested,
string[] calldata supportingEHRs
) external onlyActiveProvider onlyWithConsent(patientAddress) validDiagnosisCode(diagnosisCode)
whenNotPaused nonReentrant returns (uint256) {
_claimIds.increment();
uint256 claimId = _claimIds.current();
_claims[claimId] = InsuranceClaim({
patientAddress: patientAddress,
providerAddress: msg.sender,
diagnosisCode: diagnosisCode,
amountRequested: amountRequested,
amountApproved: 0,
status: ClaimStatus.PENDING,
submissionDate: block.timestamp,
approvalDate: 0,
settlementDate: 0,
denialDate: 0,
denialReason: "",
supportingEHRs: supportingEHRs
});
emit ClaimSubmitted(claimId, patientAddress, diagnosisCode);
return claimId;
}
function approveClaim(
uint256 claimId,
uint256 approvedAmount,
string memory approvalNotes
) external onlyRole(ADMIN_ROLE) whenNotPaused {
require(_claims[claimId].status == ClaimStatus.PENDING, "Claim not pending");
require(approvedAmount <= _claims[claimId].amountRequested, "Amount exceeds request");
_claims[claimId].status = ClaimStatus.APPROVED;
_claims[claimId].amountApproved = approvedAmount;
_claims[claimId].approvalDate = block.timestamp;
emit ClaimApproved(claimId, approvedAmount);
}
function denyClaim(
uint256 claimId,
string memory reason
) external onlyRole(ADMIN_ROLE) whenNotPaused {
require(_claims[claimId].status == ClaimStatus.PENDING, "Claim not pending");
_claims[claimId].status = ClaimStatus.DENIED;
_claims[claimId].denialReason = reason;
_claims[claimId].denialDate = block.timestamp;
emit ClaimDenied(claimId, reason);
}
// New: Settle an approved claim
function settleClaim(uint256 claimId) external onlyRole(ADMIN_ROLE) whenNotPaused {
require(_claims[claimId].status == ClaimStatus.APPROVED, "Claim must be approved to settle");
_claims[claimId].status = ClaimStatus.SETTLED;
_claims[claimId].settlementDate = block.timestamp;
emit ClaimSettled(claimId);
}
// ========== EMERGENCY FUNCTIONS ==========
function emergencyPause() external onlyRole(DEFAULT_ADMIN_ROLE) {
_pause();
emit EmergencyPaused(msg.sender);
}
function emergencyUnpause() external onlyRole(DEFAULT_ADMIN_ROLE) {
_unpause();
emit EmergencyUnpaused(msg.sender);
}
// ========== VIEW FUNCTIONS ==========
function getPatientConsents(address patientAddress) external view returns (Consent[] memory) {
uint256 count;
address[] memory providers = _getAllProviders();
// First pass: count valid consents
for (uint i = 0; i < providers.length; i++) {
if (_consents[patientAddress][providers[i]].isGranted) {
count++;
}
}
// Second pass: populate result array
Consent[] memory result = new Consent[](count);
uint256 index;
for (uint i = 0; i < providers.length; i++) {
if (_consents[patientAddress][providers[i]].isGranted) {
result[index] = _consents[patientAddress][providers[i]];
index++;
}
}
return result;
}
// ========== PRIVATE HELPERS ==========
function _providerExists(address providerAddress) private view returns (bool) {
return bytes(_providers[providerAddress].licenseNumber).length > 0;
}
function _getAllProviders() private view returns (address[] memory) {
return _providerAddresses;
}
function _verifyGovernmentSignature(
address walletAddress,
bytes32 nationalIdHash,
bytes calldata signature
) private pure returns (bool) {
// Robust verification logic to be implemented.
return true;
}
function _verifyAccreditation(
address providerAddress,
string memory licenseNumber,
string memory providerType,
bytes calldata proof
) private pure returns (bool) {
// Robust accreditation verification to be implemented.
return true;
}
}
```
## Detailed explanation
***
## 1. Upgradeability and initialization
* **Upgradeable architecture**:\
The contract uses the UUPS (Universal Upgradeable Proxy Standard) pattern to
allow its implementation to be updated without losing state. The
`_authorizeUpgrade` function restricts upgrades to addresses that hold the
`DEFAULT_ADMIN_ROLE` and ensures that the new implementation address is valid
(non-zero).
* **Initialization process**:\
Instead of a traditional constructor, the contract has an `initialize`
function that:
* Initializes inherited modules such as access control, upgradeability,
pausing, and reentrancy guards.
* Sets up key roles (`DEFAULT_ADMIN_ROLE`, `ADMIN_ROLE`, and `AUDITOR_ROLE`)
with a provided super administrator.
* Leaves the patient ID counter at zero; `registerPatient` increments it before
  assigning, so the first patient receives an ID of 1 and a zero lookup
  unambiguously indicates an unregistered state.
***
## 2. Access control and role management
* **Defined Roles**:\
The contract establishes three main roles:
* **ADMIN\_ROLE**: Grants permissions for administrative actions including
patient and provider registration, status updates, and claim management.
* **PROVIDER\_ROLE**: Assigned to healthcare providers after successful
registration and accreditation.
* **AUDITOR\_ROLE**: Intended for users responsible for oversight and auditing
without direct control over system operations.
* **Modifiers for Security**:\
Several modifiers enforce strict access control and state validations:
* `onlyActiveProvider`: Ensures that a provider is active and not suspended
before they can execute certain functions.
* `onlyValidPatient`: Confirms that a patient is registered by checking for a
non-zero patient ID.
* `onlyWithConsent`: Validates that a provider has been granted consent by the
patient.
* `validDiagnosisCode`: Checks that a diagnosis code is within the acceptable
ICD-10 length range (between 3 and 7 characters).
***
## 3. Patient registry
* **Patient Data Structure**:\
The `Patient` struct holds critical information including:
* The wallet address.
* A SHA-3 hashed version of the national ID.
* An active status flag.
* Timestamps for registration and the most recent update.
* **Patient registration process**:\
The `registerPatient` function (accessible only by an admin) performs several
validations:
* It confirms that the hashed national ID has not been registered before.
* It ensures that the wallet address is not already associated with another
patient.
* It verifies a government signature (currently a placeholder) to authenticate
the registration.
On successful validation, a new patient ID is generated (starting from 1), and
the patient’s data is stored. The mapping from wallet address to patient ID is
also updated for quick lookups.
* **Updating patient status**:\
Administrators can change the active status of a patient using the
`updatePatientStatus` function, which also updates the patient’s last modified
timestamp.
***
## 4. Provider registry
* **Provider data structure**:\
The `Provider` struct contains:
* The provider's name, license number, and type (e.g., "HOSPITAL", "CLINIC",
"LAB").
* Flags indicating if the provider is active or suspended.
* Timestamps for when the provider was registered and last updated.
* **Provider registration and tracking**:\
The `registerProvider` function allows an admin to add a new provider after:
* Checking that the provider is not already registered.
* Verifying the provider’s accreditation using a placeholder function.
Once validated, the provider’s data is stored, the `PROVIDER_ROLE` is granted,
and the provider’s address is added to a dedicated array
(`_providerAddresses`). This array is later used to retrieve all provider
addresses for operations like consent management.
* **Provider suspension**:\
Administrators can suspend or reinstate a provider using the `suspendProvider`
function, which updates the provider’s suspension status and last updated
timestamp.
***
## 5. Consent management
* **Consent data structure**:\
The `Consent` struct records:
* The provider’s address for which consent is granted.
* A flag indicating whether the consent is active.
* Timestamps for when the consent was granted and, if applicable, when it was
revoked.
* The purpose for which consent is provided (e.g., "TREATMENT", "CLAIMS",
"RESEARCH").
* **Granting and revoking consent**:
* **Granting consent**:\
Patients can call `grantConsent` to allow a provider to access their data
for a specific purpose. This function checks that the provider exists and
that consent has not been previously granted.
* **Revoking consent**:\
The `revokeConsent` function allows patients to withdraw consent, updating
the consent status and recording the time of revocation.
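A caller-side sketch of this consent flow, assuming the proxy is deployed under
the name `NationalHealthcareSystem` (an assumption) and the patient's
registered wallet is the connected signer:

```typescript
// Patient grants a provider access for treatment, then later revokes it.
import { ethers } from "hardhat";

async function manageConsent(providerAddress: string) {
  const [patient] = await ethers.getSigners();
  const healthcare = await ethers.getContract("NationalHealthcareSystem", patient);
  await (await healthcare.grantConsent(providerAddress, "TREATMENT")).wait();
  // ...later, withdraw access again:
  await (await healthcare.revokeConsent(providerAddress)).wait();
}
```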
***
## 6. Electronic health records (EHR) Management
* **EHR data structure**:\
The `EHR` struct is used to store:
* An IPFS hash that points to the off-chain location of the health record.
* The provider’s address that is adding the record.
* The document type (such as "PRESCRIPTION", "LAB\_RESULT", or "IMAGE").
* A timestamp of when the record was added.
* A data hash for ensuring the integrity of the record.
* **Recording EHRs**:\
The `addEHR` function enables an active provider (with valid consent from the
patient) to add an EHR. The record is stored in an array associated with the
patient’s address and an event is emitted to log this addition.
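A provider-side sketch of anchoring a record, assuming the document bytes were
already pinned to IPFS and the caller is an active provider with patient
consent (the deployment name is an assumption):

```typescript
// Provider anchors a lab result: the IPFS hash locates the document,
// the keccak256 digest lets anyone verify its integrity later.
import { ethers } from "hardhat";

async function anchorRecord(patient: string, ipfsHash: string, documentBytes: Uint8Array) {
  const [provider] = await ethers.getSigners();
  const healthcare = await ethers.getContract("NationalHealthcareSystem", provider);
  const dataHash = ethers.utils.keccak256(documentBytes);
  await (await healthcare.addEHR(patient, ipfsHash, "LAB_RESULT", dataHash)).wait();
}
```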
***
## 7. Insurance claims management
* **Insurance claim data structure**:\
The `InsuranceClaim` struct encapsulates:
* Addresses for both the patient and the provider.
* The diagnosis code (which must adhere to ICD-10 length requirements).
* The requested amount and the amount that gets approved.
* A status field using an enum (`PENDING`, `APPROVED`, `DENIED`, `SETTLED`).
* Timestamps for submission, approval, settlement, and denial.
* A string to record the reason for denial.
* An array to hold supporting EHR document references (e.g., IPFS hashes).
* **Claims workflow**:
* **Submission**:\
The `submitClaim` function lets a provider submit a claim on behalf of a
patient (with proper consent). It validates the diagnosis code, accepts
supporting EHRs, and logs the claim submission.
* **Approval**:\
Through the `approveClaim` function, an admin can approve a claim, ensuring
the approved amount does not exceed the requested amount. The approval
timestamp is recorded.
* **Denial**:\
The `denyClaim` function allows an admin to deny a claim, documenting the
reason and timestamp for the denial.
* **Settlement**:\
The newly added `settleClaim` function permits an admin to mark an approved
claim as settled, updating its status and recording the settlement
timestamp.
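The full happy path can be exercised end to end as in the sketch below; the
diagnosis code and amounts are illustrative, and the claim ID is hard-coded as
1 for the first claim (in practice you would parse it from the
`ClaimSubmitted` event):

```typescript
// Provider submits a claim, then an admin approves and settles it.
import { ethers } from "hardhat";

async function runClaim(patient: string) {
  const [admin, provider] = await ethers.getSigners();
  const healthcare = await ethers.getContract("NationalHealthcareSystem");
  await (await healthcare
    .connect(provider)
    .submitClaim(patient, "J45.901", 1200, [])).wait();
  const claimId = 1; // illustrative; read from the ClaimSubmitted event in practice
  await (await healthcare.connect(admin).approveClaim(claimId, 1000, "ok")).wait();
  await (await healthcare.connect(admin).settleClaim(claimId)).wait();
}
```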
***
## 8. Emergency controls
* **Pausing operations**:\
The contract integrates a pausing mechanism using OpenZeppelin’s pausable
module.
* The `emergencyPause` function allows an admin to pause all contract
operations, which is useful in a crisis or security incident.
* The `emergencyUnpause` function restores normal operations.
Both functions emit events to provide an audit trail of emergency actions.
***
## 9. View functions and private helpers
* **Retrieving patient consents**:\
The `getPatientConsents` function gathers all active consents for a given
patient by iterating through the tracked provider addresses. It compiles and
returns an array of active consent records.
* **Private helper functions**:
* `_providerExists`: Checks if a provider is registered by verifying the
existence of a license number.
* `_getAllProviders`: Returns the list of all registered provider addresses
stored in `_providerAddresses`.
* `_verifyGovernmentSignature` and `_verifyAccreditation`: These are stub
functions that currently always return true; they serve as placeholders
where robust external verification logic will be integrated.
file: ./content/docs/use-case-guides/template-libraries/evm-smart-contracts/intellectual-property.mdx
meta: {
"title": "Intellectual Property"
}
The intellectual property management system is designed to streamline the
registration, verification, and dispute resolution processes for IP assets in a
decentralized environment. It addresses the challenges faced by creators and
organizations in protecting their intellectual property by providing a secure
and transparent platform for asset registration. This solution integrates with
the Ethereum Attestation Service to generate immutable proofs of ownership and
asset details, thereby reducing administrative overhead and improving trust
among stakeholders. It is particularly useful for industries where the
protection and transfer of intellectual property rights are critical, such as
technology, media, and pharmaceuticals.
The smart contract is built using Solidity and leverages OpenZeppelin’s secure
libraries for role-based access control, reentrancy protection, and pausing
capabilities, ensuring robust security for production deployment.
## Disclaimer
This smart contract implementation is provided for educational and illustrative
purposes only. It represents a conceptual framework for a blockchain-based
intellectual property management system.
Key considerations before any production use:
* **Legal Compliance**: Intellectual property registrations are highly regulated
and vary significantly by jurisdiction. This implementation would require
substantial modification to meet specific legal requirements in any given
location.
* **Security Review**: The contract should undergo comprehensive security
auditing by qualified blockchain security professionals before handling real
registration or financial transactions.
* **No Warranty**: The authors and contributors disclaim all liability for any
use of this code. Users assume all risks associated with implementation and
operation.
* **Consultation Required**: Any organization considering using this template
  should obtain advice from qualified legal counsel and blockchain security
  experts before deployment.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;
// Import OpenZeppelin contracts for security and role management.
import "@openzeppelin/contracts/access/AccessControl.sol";
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";
import "@openzeppelin/contracts/security/Pausable.sol";
import "@openzeppelin/contracts/utils/Counters.sol";
/// @notice Interface for the Ethereum Attestation Service (EAS)
interface IEAS {
/**
* @notice Creates an attestation.
* @param schema The attestation schema identifier.
* @param recipient The address receiving the attestation.
* @param expirationTime Unix timestamp for expiration (0 for non-expiring).
* @param revocable Whether the attestation is revocable.
* @param data Encoded attestation data.
* @return attestationId The unique identifier for the created attestation.
*/
function attest(
bytes32 schema,
address recipient,
uint256 expirationTime,
bool revocable,
bytes calldata data
) external payable returns (uint256 attestationId);
/**
* @notice Revokes an attestation.
* @param attestationId The identifier of the attestation to revoke.
* @param data Encoded revocation data.
* @return success A boolean indicating whether revocation was successful.
*/
function revoke(
uint256 attestationId,
bytes calldata data
) external payable returns (bool success);
}
/// @title Intellectual Property Management Contract
/// @notice This contract manages the registration, verification, dispute resolution,
/// and attestation of intellectual property (IP) assets.
/// It integrates with the Ethereum Attestation Service (EAS) for immutable proofs.
contract IntellectualPropertyManagement is AccessControl, Pausable, ReentrancyGuard {
using Counters for Counters.Counter;
// ====================================================
// Role Definitions
// ====================================================
// DEFAULT_ADMIN_ROLE (0x00) is inherited from AccessControl, so it is not redeclared here.
// Role for addresses authorized to register new IP assets.
bytes32 public constant REGISTRAR_ROLE = keccak256("REGISTRAR_ROLE");
// Role for verifying IP assets.
bytes32 public constant VERIFIER_ROLE = keccak256("VERIFIER_ROLE");
// Role for resolving disputes related to IP assets.
bytes32 public constant DISPUTE_RESOLVER_ROLE = keccak256("DISPUTE_RESOLVER_ROLE");
// ====================================================
// EAS Integration and Attestation Schema
// ====================================================
IEAS public eas;
// Attestation schema for IP assets; should be set according to the deployed schema.
bytes32 public constant IP_ASSET_SCHEMA = keccak256(
"ipAssetSchema(uint256 assetId,string title,string description,string ipfsHash,uint256 registrationTime,address owner)"
);
// ====================================================
// Asset Management Data Structures
// ====================================================
// Counter to generate unique asset IDs.
Counters.Counter private _assetIdCounter;
/// @notice Structure representing an Intellectual Property asset.
struct IPAsset {
uint256 id; // Unique asset identifier.
address owner; // Current owner of the asset.
string title; // Title of the IP asset.
string description; // Detailed description of the asset.
string ipfsHash; // IPFS hash linking to off-chain metadata.
uint256 registrationTime; // Timestamp when the asset was registered.
bool verified; // Verification status by a verifier.
uint256 attestationId; // Attestation ID returned by the EAS.
bool disputeFiled; // Indicates if a dispute has been filed.
string disputeDetails; // Details about the dispute.
bool disputeResolved; // Indicates if the dispute has been resolved.
string disputeResolution; // Outcome or remarks regarding dispute resolution.
}
// Mapping from asset ID to its corresponding IPAsset details.
mapping(uint256 => IPAsset) public ipAssets;
// ====================================================
// Events for Off-Chain Tracking and Auditing
// ====================================================
event AssetRegistered(
uint256 indexed assetId,
address indexed owner,
string title,
uint256 registrationTime,
uint256 attestationId
);
event AssetVerified(uint256 indexed assetId, bool verified, string verificationComments);
event DisputeFiled(uint256 indexed assetId, address indexed filer, string disputeDetails);
event DisputeResolved(uint256 indexed assetId, string resolution);
event AssetTransferred(uint256 indexed assetId, address indexed from, address indexed to);
event AssetReattested(uint256 indexed assetId, uint256 oldAttestationId, uint256 newAttestationId);
event EASAddressUpdated(address indexed newEASAddress);
// ====================================================
// Constructor and Role Setup
// ====================================================
constructor(address easAddress) {
require(easAddress != address(0), "Invalid EAS address");
// Set up default admin role.
_setupRole(DEFAULT_ADMIN_ROLE, msg.sender);
// Grant deployer additional roles.
_setupRole(REGISTRAR_ROLE, msg.sender);
_setupRole(VERIFIER_ROLE, msg.sender);
_setupRole(DISPUTE_RESOLVER_ROLE, msg.sender);
// Set the EAS contract address.
eas = IEAS(easAddress);
}
// ====================================================
// IP Asset Registration with EAS Attestation
// ====================================================
/**
* @notice Registers a new IP asset and creates an attestation using EAS.
* @dev Only accounts with the REGISTRAR_ROLE can register an asset.
* @param title The title of the asset (must be non-empty).
* @param description A detailed description of the asset.
* @param ipfsHash The IPFS hash pointing to off-chain asset metadata.
* @return assetId The unique identifier assigned to the registered asset.
*/
function registerAsset(
string memory title,
string memory description,
string memory ipfsHash
) external whenNotPaused nonReentrant onlyRole(REGISTRAR_ROLE) returns (uint256 assetId) {
require(bytes(title).length > 0, "Title is required");
require(bytes(ipfsHash).length > 0, "IPFS hash is required");
// Increment asset counter and assign new asset ID.
_assetIdCounter.increment();
assetId = _assetIdCounter.current();
uint256 registrationTime = block.timestamp;
// Create a new asset record.
IPAsset storage asset = ipAssets[assetId];
asset.id = assetId;
asset.owner = msg.sender;
asset.title = title;
asset.description = description;
asset.ipfsHash = ipfsHash;
asset.registrationTime = registrationTime;
asset.verified = false;
asset.disputeFiled = false;
asset.disputeResolved = false;
// Encode the attestation data per the defined schema.
bytes memory attestationData = abi.encode(
assetId,
title,
description,
ipfsHash,
registrationTime,
msg.sender
);
// Create an attestation via the EAS.
// Parameters: schema, recipient, expiration (0 means non-expiring), revocable flag, and data.
uint256 attestationId = eas.attest(IP_ASSET_SCHEMA, msg.sender, 0, true, attestationData);
asset.attestationId = attestationId;
emit AssetRegistered(assetId, msg.sender, title, registrationTime, attestationId);
}
// ====================================================
// Asset Verification and Attestation Update
// ====================================================
/**
* @notice Verifies an IP asset.
* @dev Only accounts with the VERIFIER_ROLE can call this function.
* @param assetId The ID of the asset to verify.
* @param isVerified The verification result (true if verified).
* @param verificationComments Additional comments regarding the verification.
*/
function verifyAsset(
uint256 assetId,
bool isVerified,
string memory verificationComments
) external whenNotPaused nonReentrant onlyRole(VERIFIER_ROLE) {
IPAsset storage asset = ipAssets[assetId];
require(asset.id != 0, "Asset does not exist");
asset.verified = isVerified;
emit AssetVerified(assetId, isVerified, verificationComments);
}
// ====================================================
// Dispute Resolution with Attestation Revocation Option
// ====================================================
/**
* @notice Files a dispute for an asset.
* @dev Only the asset owner is allowed to file a dispute to prevent frivolous claims.
* @param assetId The ID of the asset for which the dispute is filed.
* @param disputeDetails Detailed information about the dispute.
*/
function fileDispute(
uint256 assetId,
string memory disputeDetails
) external whenNotPaused nonReentrant {
IPAsset storage asset = ipAssets[assetId];
require(asset.id != 0, "Asset does not exist");
require(msg.sender == asset.owner, "Only asset owner can file a dispute");
require(!asset.disputeFiled, "Dispute already filed");
asset.disputeFiled = true;
asset.disputeDetails = disputeDetails;
asset.disputeResolved = false;
asset.disputeResolution = "";
emit DisputeFiled(assetId, msg.sender, disputeDetails);
}
/**
* @notice Resolves a dispute for an asset.
* @dev Only accounts with the DISPUTE_RESOLVER_ROLE can resolve disputes.
* @param assetId The ID of the asset with the dispute.
* @param resolution The outcome of the dispute resolution.
* @param revokeAttestation If true, revokes the asset's attestation using EAS.
*/
function resolveDispute(
uint256 assetId,
string memory resolution,
bool revokeAttestation
) external whenNotPaused nonReentrant onlyRole(DISPUTE_RESOLVER_ROLE) {
IPAsset storage asset = ipAssets[assetId];
require(asset.id != 0, "Asset does not exist");
require(asset.disputeFiled, "No dispute filed");
require(!asset.disputeResolved, "Dispute already resolved");
asset.disputeResolved = true;
asset.disputeResolution = resolution;
// If required, revoke the attestation through EAS.
if (revokeAttestation && asset.attestationId != 0) {
// Prepare revocation data with resolution details.
bytes memory revocationData = abi.encode(assetId, resolution, block.timestamp);
bool success = eas.revoke(asset.attestationId, revocationData);
require(success, "Attestation revocation failed");
asset.attestationId = 0;
}
emit DisputeResolved(assetId, resolution);
}
// ====================================================
// Asset Ownership Transfer and Re-Attestation
// ====================================================
/**
* @notice Transfers ownership of an asset to a new owner.
* @dev Only the current asset owner can initiate the transfer.
* @param assetId The ID of the asset to transfer.
* @param newOwner The address of the new owner.
*/
function transferAsset(
uint256 assetId,
address newOwner
) external whenNotPaused nonReentrant {
require(newOwner != address(0), "New owner address cannot be zero");
IPAsset storage asset = ipAssets[assetId];
require(asset.id != 0, "Asset does not exist");
require(msg.sender == asset.owner, "Only owner can transfer asset");
address previousOwner = asset.owner;
asset.owner = newOwner;
emit AssetTransferred(assetId, previousOwner, newOwner);
}
/**
* @notice Re-attests an asset to update its attestation (e.g., after transfer).
* @dev Callable only by the current asset owner. Revokes the old attestation if it exists.
* @param assetId The ID of the asset to re-attest.
*/
function reattestAsset(
uint256 assetId
) external whenNotPaused nonReentrant {
IPAsset storage asset = ipAssets[assetId];
require(asset.id != 0, "Asset does not exist");
require(msg.sender == asset.owner, "Only owner can re-attest asset");
require(!asset.disputeFiled, "Cannot re-attest during active dispute");
uint256 oldAttestationId = asset.attestationId;
// If an attestation exists, revoke it.
if (oldAttestationId != 0) {
bytes memory revocationData = abi.encode(assetId, "Re-attestation", block.timestamp);
bool success = eas.revoke(oldAttestationId, revocationData);
require(success, "Old attestation revocation failed");
}
// Update registration time for new attestation.
asset.registrationTime = block.timestamp;
// Re-encode attestation data.
bytes memory attestationData = abi.encode(
assetId,
asset.title,
asset.description,
asset.ipfsHash,
asset.registrationTime,
asset.owner
);
// Create a new attestation.
uint256 newAttestationId = eas.attest(IP_ASSET_SCHEMA, asset.owner, 0, true, attestationData);
asset.attestationId = newAttestationId;
emit AssetReattested(assetId, oldAttestationId, newAttestationId);
}
// ====================================================
// Administrative Functions
// ====================================================
/**
* @notice Updates the EAS contract address.
* @dev Only callable by an account with DEFAULT_ADMIN_ROLE.
* @param newEASAddress The new EAS contract address.
*/
function updateEASAddress(address newEASAddress) external onlyRole(DEFAULT_ADMIN_ROLE) {
require(newEASAddress != address(0), "Invalid EAS address");
eas = IEAS(newEASAddress);
emit EASAddressUpdated(newEASAddress);
}
/**
* @notice Pauses the contract in case of emergency.
* @dev Only callable by an account with DEFAULT_ADMIN_ROLE.
*/
function pause() external onlyRole(DEFAULT_ADMIN_ROLE) {
_pause();
}
/**
* @notice Unpauses the contract.
* @dev Only callable by an account with DEFAULT_ADMIN_ROLE.
*/
function unpause() external onlyRole(DEFAULT_ADMIN_ROLE) {
_unpause();
}
}
```
## Contract overview
The contract leverages OpenZeppelin libraries, such as AccessControl, Pausable,
ReentrancyGuard, and Counters, to implement secure role management, emergency
stops, reentrancy protection, and unique ID generation for IP assets. It also
integrates with an external EAS via a defined interface, allowing the creation
and revocation of immutable attestations that certify IP asset registration.
## Role management and access control
Several roles are defined to segregate duties and protect sensitive operations:
* **Default admin role**: Holds ultimate control to perform administrative tasks
like updating the EAS address and pausing or unpausing the contract.
* **Registrar role**: Authorized to register new IP assets. Only accounts with
this role can call the registration function.
* **Verifier role**: Permitted to verify the authenticity of IP assets. Accounts
with this role can update an asset’s verification status.
* **Dispute resolver role**: Empowered to resolve disputes related to IP assets.
Only these accounts can resolve disputes and, if needed, revoke attestations.
Access to each function is restricted by modifiers that check the caller’s role,
ensuring that only authorized addresses can perform sensitive operations.
## EAS integration and attestation schema
The contract interacts with the Ethereum Attestation Service (EAS) using the
defined `IEAS` interface. Two main functions are used:
* **attest**: Creates an attestation for an IP asset using a predefined schema
that includes the asset ID, title, description, IPFS hash, registration time,
and owner address.
* **revoke**: Revokes an existing attestation, typically used during dispute
resolution or asset re-attestation.
The attestation schema is stored as a constant (`IP_ASSET_SCHEMA`), ensuring
that each attestation follows a consistent structure.
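For reference, the tuple that the contract encodes and passes to `attest` can be reproduced off-chain with the same ABI types. A minimal sketch with ethers v5 (all values are placeholders):

```typescript
import { ethers } from "ethers";

// ABI-encode the attestation payload exactly as registerAsset does on-chain:
// abi.encode(assetId, title, description, ipfsHash, registrationTime, owner)
const attestationData = ethers.utils.defaultAbiCoder.encode(
  ["uint256", "string", "string", "string", "uint256", "address"],
  [
    1,                                            // assetId
    "My artwork",                                 // title
    "Original digital illustration",              // description
    "Qm<EXAMPLE_IPFS_HASH>",                      // IPFS hash of the asset
    Math.floor(Date.now() / 1000),                // registration time (unix seconds)
    "0x1111111111111111111111111111111111111111", // owner address (placeholder)
  ]
);
console.log(attestationData);
```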
## Asset registration with EAS attestation
The `registerAsset` function allows an account with the `REGISTRAR_ROLE` to
register a new IP asset. Key steps include:
* Validating that the title and IPFS hash are non-empty.
* Generating a unique asset ID using a counter.
* Storing asset details such as title, description, IPFS hash, and the
registration timestamp.
* Encoding the asset data as per the attestation schema and calling the EAS
`attest` function.
* Storing the returned attestation ID in the asset record.
* Emitting an event to log the registration and attestation details.
This process creates an immutable on-chain record backed by an EAS attestation,
with the asset's underlying content stored off-chain on IPFS.
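As an illustration, a registrar could drive this flow from a script like the following (RPC endpoint, keys, contract address, and the exact function signature are assumptions; take the real ABI from your compiled artifact):

```typescript
import { ethers } from "ethers";

async function main() {
  const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com");
  const registrar = new ethers.Wallet("0x<REGISTRAR_PRIVATE_KEY>", provider);

  // Minimal, assumed ABI fragment for the registration function
  const registry = new ethers.Contract(
    "0x<CONTRACT_ADDRESS>",
    ["function registerAsset(string title, string description, string ipfsHash) returns (uint256 assetId)"],
    registrar
  );

  // Preview the asset ID the call would return, then send the transaction
  const assetId = await registry.callStatic.registerAsset(
    "My artwork",
    "Original digital illustration",
    "Qm<EXAMPLE_IPFS_HASH>"
  );
  const tx = await registry.registerAsset(
    "My artwork",
    "Original digital illustration",
    "Qm<EXAMPLE_IPFS_HASH>"
  );
  await tx.wait();
  console.log("Registered asset", assetId.toString());
}

main().catch(console.error);
```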
## Asset verification and update
The `verifyAsset` function enables an account with the `VERIFIER_ROLE` to update
an asset’s verification status. This function sets the asset’s `verified` flag
and logs the verification result along with any comments. This verification step
is crucial for ensuring that only validated IP assets are recognized by the
system.
## Dispute resolution and attestation revocation
Disputes can be filed by the asset owner using the `fileDispute` function. This
function:
* Ensures that only the asset owner can file a dispute.
* Checks that no dispute has been filed already.
* Records dispute details and updates the asset’s dispute status.
* Emits an event to log the dispute filing.
To resolve a dispute, an account with the `DISPUTE_RESOLVER_ROLE` calls the
`resolveDispute` function. This function:
* Updates the asset’s dispute resolution status and records the resolution
outcome.
* Optionally revokes the asset’s attestation via the EAS if the resolution deems
the asset registration invalid.
* Emits an event to log the dispute resolution and any attestation revocation.
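A dispute resolver could trigger that resolution from a script like this sketch (placeholders as before; the final argument controls attestation revocation):

```typescript
import { ethers } from "ethers";

async function main() {
  const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com");
  const resolver = new ethers.Wallet("0x<DISPUTE_RESOLVER_KEY>", provider);
  const registry = new ethers.Contract(
    "0x<CONTRACT_ADDRESS>",
    ["function resolveDispute(uint256 assetId, string resolution, bool revokeAttestation)"],
    resolver
  );

  // Resolve the dispute on asset #1; passing true revokes the EAS
  // attestation because the registration was found to be invalid.
  const tx = await registry.resolveDispute(1, "Registration found invalid", true);
  await tx.wait();
}

main().catch(console.error);
```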
## Asset ownership transfer and re-attestation
Ownership transfer is handled by the `transferAsset` function, which allows the
current owner to transfer an asset to a new owner. It verifies that:
* The caller is the current asset owner.
* The new owner address is valid.
Although the ownership change is recorded on-chain, the original attestation
remains unless an update is required. For updating the attestation, the
`reattestAsset` function can be called by the current asset owner. This
function:
* Revokes the old attestation (if one exists) by calling the EAS `revoke`
function.
* Updates the registration time.
* Encodes new attestation data and creates a new attestation via the EAS.
* Stores the new attestation ID in the asset record and emits an event to log
the re-attestation.
## Administrative functions
Critical administrative functions are restricted to accounts with the default
admin role:
* **Updating the EAS address**: The `updateEASAddress` function lets the admin
update the EAS contract address, which is vital if the EAS implementation
changes.
* **Pausing and unpausing the contract**: The `pause` and `unpause` functions
allow an emergency stop of all contract operations, safeguarding against
unexpected issues.
## Security enhancements and production-grade features
To ensure robust security and production readiness, the contract includes:
* **Reentrancy protection**: All external state-changing functions use the
`nonReentrant` modifier to prevent reentrancy attacks.
* **Pausability**: The `whenNotPaused` modifier is applied to critical
functions, allowing the contract to be paused during emergencies.
* **Role-based access control**: Functions are strictly restricted by roles,
ensuring proper separation of duties.
* **Detailed event logging**: Every critical operation emits events to create an
audit trail, facilitating off-chain monitoring and accountability.
* **Attestation management**: Integration with EAS provides immutable
attestations of asset registration, with options for revocation and
re-attestation in case of disputes or ownership changes.
file: ./content/docs/use-case-guides/template-libraries/evm-smart-contracts/land-registry.mdx
meta: {
"title": "Land Registry"
}
A land registry smart contract in Solidity can handle the registration,
transfer, split, and merge of land parcels, including details for land and
multistory buildings. It ensures security through role-based access control and
tracks essential information like area, GPS coordinates, and buyer/seller
details, while addressing privacy concerns by hashing sensitive data like
national IDs.
## Disclaimer
This smart contract implementation is provided for educational and illustrative
purposes only. It represents a conceptual framework for blockchain-based land
registry systems.
Key considerations before any production use:
* **Legal Compliance**: Land registration systems are highly regulated and vary
significantly by jurisdiction. This implementation would require substantial
modification to meet specific legal requirements in any given location.
* **Security Review**: The contract should undergo comprehensive security
auditing by qualified blockchain security professionals before handling real
property or financial transactions.
* **No Warranty**: The authors and contributors disclaim all liability for any
use of this software. Users assume all risks associated with implementation
and operation.
* **Consultation Required**: Any organization considering use of this technology
should obtain advice from qualified legal counsel, real estate professionals,
and blockchain security experts before deployment.
#### Contract Features
* **Registration and Ownership:** Only authorized registrars can register new
land parcels, specifying area, GPS coordinates (as polygons), jurisdiction,
and initial owner. Owners can transfer parcels, ensuring the new owner has
verified details.
* **Transfer and Transactions:** Transfers record buyer and seller addresses,
with transaction history logged on-chain, accessible via events for auditing.
* **Split and Merge:** Registrars manage splitting parcels into smaller ones or
merging adjacent parcels, ensuring area consistency, which is crucial for land
management.
* **Building Details:** Multistory buildings are tracked per parcel, including
name, number of stories, and area, supporting comprehensive property records.
* **Security and Access:** Roles like registrar and dispute resolver control
critical operations, with modifiers ensuring only authorized parties can act,
enhancing security.
* **Dispute and Jurisdiction:** The contract flags disputes, managed by dispute
resolvers, and records jurisdiction, aiding legal compliance.
* **Taxes:** The contract tracks tax payment status and history for each parcel,
ensuring compliance with fiscal regulations.
* **Inheritance:** The contract supports ownership transfers via inheritance,
enabling updates upon verified inheritance events, aligning with succession
laws.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721Enumerable.sol";
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";
import "@openzeppelin/contracts/access/AccessControl.sol";
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";
import "@openzeppelin/contracts/security/Pausable.sol";
import "@openzeppelin/contracts/utils/Counters.sol";
import "@openzeppelin/contracts/utils/math/SafeMath.sol";
import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";
import "@openzeppelin/contracts/utils/cryptography/draft-EIP712.sol";
contract LandRegistry is
ERC721,
ERC721Enumerable,
ERC721URIStorage,
AccessControl,
ReentrancyGuard,
Pausable,
EIP712
{
using Counters for Counters.Counter;
using SafeMath for uint256;
using ECDSA for bytes32;
// ========== CONSTANTS ==========
bytes32 public constant REGISTRAR_ROLE = keccak256("REGISTRAR_ROLE");
bytes32 public constant DISPUTE_RESOLVER_ROLE = keccak256("DISPUTE_RESOLVER_ROLE");
bytes32 public constant TAX_AUTHORITY_ROLE = keccak256("TAX_AUTHORITY_ROLE");
bytes32 public constant COURT_ROLE = keccak256("COURT_ROLE");
// ========== STRUCTS ==========
struct PaymentDetail {
string paymentReference;
string currency; // ISO 4217
uint256 amount;
bool isForeign;
string sourceBank;
string proofOfPayment;
uint256 timestamp;
}
struct LandParcel {
uint256 id;
string parcelNumber;
address owner;
uint256 area; // sqm with 4 decimals
string gpsPolygon; // GeoJSON
string jurisdiction;
string landUseType; // RESIDENTIAL/COMMERCIAL/AGRICULTURAL
uint256 landRate; // Local currency per sqm
bool hasDispute;
uint256[] buildingIds;
uint256[] parentParcels; // For merged/split parcels
PaymentDetail[] paymentHistory;
uint256 lastTaxPaid;
string ipfsHash;
}
struct Building {
uint256 id;
string name;
uint256 stories;
uint256 builtArea;
string constructionType;
}
struct SplitMergeRequest {
uint256[] parcelIds;
uint256[] newAreas;
string[] newParcelNumbers;
string[] newGpsPolygons;
bool isMerge;
bool approved;
bytes[] approvalSignatures;
}
// ========== STATE VARIABLES ==========
Counters.Counter private _parcelIdCounter;
Counters.Counter private _buildingIdCounter;
mapping(uint256 => LandParcel) private _parcels;
mapping(uint256 => Building) private _buildings;
mapping(string => bool) private _usedNationalIds;
mapping(bytes32 => bool) private _usedSignatures;
mapping(uint256 => SplitMergeRequest) private _splitMergeRequests;
mapping(uint256 => uint256) private _parcelToRequest;
uint256 public baseTaxRate = 100; // 1%
uint256 public foreignTransferSurcharge = 200; // +2%
uint256 public lateTaxPenalty = 50; // 0.5% per month
uint256 public governanceApprovalThreshold = 2;
// ========== EVENTS ==========
event ParcelRegistered(uint256 indexed id, address owner);
event ParcelTransferred(uint256 indexed id, address from, address to, uint256 taxPaid);
event BuildingAdded(uint256 indexed parcelId, uint256 buildingId);
event TaxPaid(uint256 indexed parcelId, uint256 amount, string currency);
event DisputeFiled(uint256 indexed parcelId, string details);
event DisputeResolved(uint256 indexed parcelId);
event SplitRequested(uint256 indexed requestId, uint256 indexed originalParcelId);
event MergeRequested(uint256 indexed requestId, uint256[] sourceParcelIds);
event SplitCompleted(uint256 indexed requestId, uint256[] newParcelIds);
event MergeCompleted(uint256 indexed requestId, uint256 newParcelId);
// ========== MODIFIERS ==========
modifier onlyRegistrar() {
require(hasRole(REGISTRAR_ROLE, msg.sender), "Unauthorized: Registrar only");
_;
}
modifier onlyTaxAuthority() {
require(hasRole(TAX_AUTHORITY_ROLE, msg.sender), "Unauthorized: Tax authority only");
_;
}
modifier noActiveRequest(uint256 parcelId) {
require(_parcelToRequest[parcelId] == 0, "Parcel has active request");
_;
}
// ========== CONSTRUCTOR ==========
constructor()
ERC721("NationalLandToken", "NLT")
EIP712("LandRegistry", "1")
{
_setupRole(DEFAULT_ADMIN_ROLE, msg.sender);
_setupRole(REGISTRAR_ROLE, msg.sender);
}
// ========== PAUSE/UNPAUSE FUNCTIONS ==========
function pause() external onlyRole(DEFAULT_ADMIN_ROLE) {
_pause();
}
function unpause() external onlyRole(DEFAULT_ADMIN_ROLE) {
_unpause();
}
// ========== CORE FUNCTIONS ==========
function registerParcel(
address owner,
string memory parcelNumber,
uint256 area,
string memory gpsPolygon,
string memory jurisdiction,
string memory landUseType,
uint256 landRate,
string memory nationalId,
string memory ipfsHash,
bytes memory kycSignature
) external onlyRegistrar nonReentrant returns (uint256) {
require(!_usedNationalIds[nationalId], "National ID already registered");
require(_verifyKYC(owner, nationalId, kycSignature), "KYC verification failed");
uint256 parcelId = _parcelIdCounter.current();
_parcelIdCounter.increment();
// Assign field by field: copying a memory struct that contains an array
// of structs (paymentHistory) into storage is not supported by the
// default solc codegen.
LandParcel storage parcel = _parcels[parcelId];
parcel.id = parcelId;
parcel.parcelNumber = parcelNumber;
parcel.owner = owner;
parcel.area = area;
parcel.gpsPolygon = gpsPolygon;
parcel.jurisdiction = jurisdiction;
parcel.landUseType = landUseType;
parcel.landRate = landRate;
parcel.ipfsHash = ipfsHash;
// hasDispute, lastTaxPaid, and the dynamic arrays keep their zero defaults.
_mint(owner, parcelId);
_setTokenURI(parcelId, ipfsHash);
_usedNationalIds[nationalId] = true;
emit ParcelRegistered(parcelId, owner);
return parcelId;
}
// ========== PARCEL TRANSFER & TAXATION ==========
function transferParcel(
uint256 parcelId,
address buyer,
string memory buyerNationalId,
PaymentDetail memory payment
) external onlyRegistrar nonReentrant noActiveRequest(parcelId) {
require(!_parcels[parcelId].hasDispute, "Parcel has active dispute");
require(!_usedNationalIds[buyerNationalId], "Buyer ID already registered");
address seller = _parcels[parcelId].owner;
// Calculate tax
uint256 taxRate = payment.isForeign ?
baseTaxRate.add(foreignTransferSurcharge) : baseTaxRate;
uint256 taxAmount = payment.amount.mul(taxRate).div(10000);
// Record payment and tax
_parcels[parcelId].paymentHistory.push(payment);
_parcels[parcelId].paymentHistory.push(PaymentDetail({
paymentReference: string(abi.encodePacked("TAX-", payment.paymentReference)),
currency: payment.currency,
amount: taxAmount,
isForeign: payment.isForeign,
sourceBank: "National Treasury",
proofOfPayment: string(abi.encodePacked("TAX-RECEIPT-", payment.paymentReference)),
timestamp: block.timestamp
}));
_parcels[parcelId].lastTaxPaid = block.timestamp;
// Execute transfer
_transfer(seller, buyer, parcelId);
_parcels[parcelId].owner = buyer;
_usedNationalIds[buyerNationalId] = true;
emit ParcelTransferred(parcelId, seller, buyer, taxAmount);
emit TaxPaid(parcelId, taxAmount, payment.currency);
}
// ========== BUILDING MANAGEMENT ==========
function addBuilding(
uint256 parcelId,
string memory name,
uint256 stories,
uint256 builtArea,
string memory constructionType
) external onlyRegistrar returns (uint256) {
uint256 buildingId = _buildingIdCounter.current();
_buildingIdCounter.increment();
_buildings[buildingId] = Building({
id: buildingId,
name: name,
stories: stories,
builtArea: builtArea,
constructionType: constructionType
});
_parcels[parcelId].buildingIds.push(buildingId);
emit BuildingAdded(parcelId, buildingId);
return buildingId;
}
// ========== PARCEL SPLIT/MERGE ==========
function requestSplit(
uint256 parcelId,
uint256[] memory newAreas,
string[] memory newParcelNumbers,
string[] memory newGpsPolygons
) external onlyRegistrar noActiveRequest(parcelId) {
require(newAreas.length > 1, "Must split into at least 2 parcels");
require(newAreas.length == newParcelNumbers.length, "Mismatched arrays");
require(newAreas.length == newGpsPolygons.length, "Mismatched polygons");
require(!_parcels[parcelId].hasDispute, "Parcel has dispute");
uint256 requestId = uint256(keccak256(abi.encodePacked(parcelId, block.timestamp)));
_splitMergeRequests[requestId] = SplitMergeRequest({
parcelIds: _asSingletonArray(parcelId),
newAreas: newAreas,
newParcelNumbers: newParcelNumbers,
newGpsPolygons: newGpsPolygons,
isMerge: false,
approved: false,
approvalSignatures: new bytes[](0)
});
_parcelToRequest[parcelId] = requestId;
emit SplitRequested(requestId, parcelId);
}
function requestMerge(
uint256[] memory parcelIds,
string memory newParcelNumber
) external onlyRegistrar {
require(parcelIds.length > 1, "Need multiple parcels to merge");
uint256 requestId = uint256(keccak256(abi.encodePacked(parcelIds[0], block.timestamp)));
_splitMergeRequests[requestId] = SplitMergeRequest({
parcelIds: parcelIds,
newAreas: new uint256[](0),
newParcelNumbers: _asSingletonArray(newParcelNumber),
newGpsPolygons: new string[](0),
isMerge: true,
approved: false,
approvalSignatures: new bytes[](0)
});
for (uint i = 0; i < parcelIds.length; i++) {
require(_parcelToRequest[parcelIds[i]] == 0, "Parcel has active request");
_parcelToRequest[parcelIds[i]] = requestId;
}
emit MergeRequested(requestId, parcelIds);
}
function approveRequest(
uint256 requestId,
bytes memory signature
) external onlyRegistrar {
SplitMergeRequest storage request = _splitMergeRequests[requestId];
require(!request.approved, "Already approved");
bytes32 digest = _hashTypedDataV4(keccak256(abi.encode(
keccak256("SplitMergeApproval(uint256 requestId,bool isMerge)"),
requestId,
request.isMerge
)));
address signer = digest.recover(signature);
require(hasRole(REGISTRAR_ROLE, signer), "Invalid signer");
// Key the replay check on (digest, signer): the digest is identical for
// every approver, so marking the digest alone as used would block the
// second required signature.
bytes32 approvalKey = keccak256(abi.encodePacked(digest, signer));
require(!_usedSignatures[approvalKey], "Signer already approved");
_usedSignatures[approvalKey] = true;
request.approvalSignatures.push(signature);
if (request.approvalSignatures.length >= governanceApprovalThreshold) {
_executeRequest(requestId);
}
}
// ========== DISPUTE RESOLUTION ==========
function fileDispute(uint256 parcelId, string memory details)
external
onlyRole(DISPUTE_RESOLVER_ROLE)
noActiveRequest(parcelId)
{
_parcels[parcelId].hasDispute = true;
emit DisputeFiled(parcelId, details);
}
function resolveDispute(uint256 parcelId)
external
onlyRole(DISPUTE_RESOLVER_ROLE)
{
_parcels[parcelId].hasDispute = false;
emit DisputeResolved(parcelId);
}
// ========== TAX ADMINISTRATION ==========
function setTaxRate(uint256 newRate) external onlyTaxAuthority {
require(newRate <= 1000, "Exceeds maximum 10% tax rate");
baseTaxRate = newRate;
}
function collectDelayedTax(uint256 parcelId, uint256 monthsDelayed)
external
onlyTaxAuthority
{
uint256 penalty = _parcels[parcelId].landRate
.mul(_parcels[parcelId].area)
.mul(lateTaxPenalty)
.mul(monthsDelayed)
.div(10000);
_parcels[parcelId].paymentHistory.push(PaymentDetail({
paymentReference: string(abi.encodePacked("PENALTY-", parcelId)),
currency: "LOCAL",
amount: penalty,
isForeign: false,
sourceBank: "Tax Authority",
proofOfPayment: string(abi.encodePacked("PENALTY-INVOICE-", parcelId)),
timestamp: block.timestamp
}));
emit TaxPaid(parcelId, penalty, "LOCAL");
}
// ========== VIEW FUNCTIONS ==========
function getParcelDetails(uint256 parcelId) public view returns (
LandParcel memory parcel,
Building[] memory buildings,
PaymentDetail[] memory payments
) {
parcel = _parcels[parcelId];
buildings = new Building[](parcel.buildingIds.length);
for (uint256 i = 0; i < parcel.buildingIds.length; i++) {
buildings[i] = _buildings[parcel.buildingIds[i]];
}
return (parcel, buildings, parcel.paymentHistory);
}
function getRequestDetails(uint256 requestId) public view returns (
SplitMergeRequest memory request,
LandParcel[] memory parcels
) {
request = _splitMergeRequests[requestId];
parcels = new LandParcel[](request.parcelIds.length);
for (uint i = 0; i < request.parcelIds.length; i++) {
parcels[i] = _parcels[request.parcelIds[i]];
}
}
// ========== PRIVATE FUNCTIONS ==========
function _executeRequest(uint256 requestId) private {
SplitMergeRequest storage request = _splitMergeRequests[requestId];
if (request.isMerge) {
_executeMerge(requestId);
} else {
_executeSplit(requestId);
}
request.approved = true;
}
function _executeSplit(uint256 requestId) private {
SplitMergeRequest storage request = _splitMergeRequests[requestId];
uint256 originalParcelId = request.parcelIds[0];
LandParcel storage original = _parcels[originalParcelId];
uint256[] memory newParcelIds = new uint256[](request.newAreas.length);
uint256 totalArea = 0;
for (uint i = 0; i < request.newAreas.length; i++) {
totalArea = totalArea.add(request.newAreas[i]);
uint256 newParcelId = _parcelIdCounter.current();
_parcelIdCounter.increment();
// Assign field by field: structs containing arrays of structs
// cannot be copied from memory to storage in one assignment.
LandParcel storage newParcel = _parcels[newParcelId];
newParcel.id = newParcelId;
newParcel.parcelNumber = request.newParcelNumbers[i];
newParcel.owner = original.owner;
newParcel.area = request.newAreas[i];
newParcel.gpsPolygon = request.newGpsPolygons[i];
newParcel.jurisdiction = original.jurisdiction;
newParcel.landUseType = original.landUseType;
newParcel.landRate = original.landRate;
newParcel.parentParcels.push(originalParcelId);
for (uint j = 0; j < original.paymentHistory.length; j++) {
newParcel.paymentHistory.push(original.paymentHistory[j]);
}
newParcel.lastTaxPaid = original.lastTaxPaid;
newParcel.ipfsHash = original.ipfsHash; // Default to original IPFS hash; update off-chain if necessary
_mint(original.owner, newParcelId);
_setTokenURI(newParcelId, original.ipfsHash);
newParcelIds[i] = newParcelId;
}
require(totalArea == original.area, "Area mismatch in split");
// Clear the request flag before burning, otherwise _beforeTokenTransfer
// rejects the burn with "Parcel in split/merge process".
_parcelToRequest[originalParcelId] = 0;
_burn(originalParcelId);
emit SplitCompleted(requestId, newParcelIds);
}
function _executeMerge(uint256 requestId) private {
SplitMergeRequest storage request = _splitMergeRequests[requestId];
// Verify parcels can be merged
uint256 totalArea = 0;
address commonOwner = _parcels[request.parcelIds[0]].owner;
string memory commonJurisdiction = _parcels[request.parcelIds[0]].jurisdiction;
for (uint i = 0; i < request.parcelIds.length; i++) {
LandParcel storage parcel = _parcels[request.parcelIds[i]];
require(parcel.owner == commonOwner, "Different owners");
require(keccak256(bytes(parcel.jurisdiction)) == keccak256(bytes(commonJurisdiction)), "Different jurisdictions");
require(!parcel.hasDispute, "Parcel has dispute");
totalArea = totalArea.add(parcel.area);
}
// Create merged parcel
uint256 newParcelId = _parcelIdCounter.current();
_parcelIdCounter.increment();
// Assign field by field: structs containing arrays of structs
// cannot be copied from memory to storage in one assignment.
LandParcel storage merged = _parcels[newParcelId];
merged.id = newParcelId;
merged.parcelNumber = request.newParcelNumbers[0];
merged.owner = commonOwner;
merged.area = totalArea;
merged.gpsPolygon = ""; // To be set by off-chain service
merged.jurisdiction = commonJurisdiction;
merged.landUseType = _parcels[request.parcelIds[0]].landUseType;
merged.landRate = _calculateAverageRate(request.parcelIds);
merged.buildingIds = _combineBuildingIds(request.parcelIds);
merged.parentParcels = request.parcelIds;
PaymentDetail[] memory combinedPayments = _combinePaymentHistories(request.parcelIds);
for (uint i = 0; i < combinedPayments.length; i++) {
merged.paymentHistory.push(combinedPayments[i]);
}
merged.lastTaxPaid = block.timestamp;
merged.ipfsHash = ""; // To be updated off-chain
_mint(commonOwner, newParcelId);
// Optionally, set token URI when IPFS hash is available
_setTokenURI(newParcelId, "");
// Burn original parcels, clearing the request flag first so that
// _beforeTokenTransfer does not reject the burn.
for (uint i = 0; i < request.parcelIds.length; i++) {
_parcelToRequest[request.parcelIds[i]] = 0;
_burn(request.parcelIds[i]);
}
emit MergeCompleted(requestId, newParcelId);
}
function _calculateAverageRate(uint256[] memory parcelIds) private view returns (uint256) {
uint256 total = 0;
for (uint i = 0; i < parcelIds.length; i++) {
total = total.add(_parcels[parcelIds[i]].landRate);
}
return total.div(parcelIds.length);
}
function _combineBuildingIds(uint256[] memory parcelIds) private view returns (uint256[] memory) {
uint256 totalBuildings = 0;
for (uint i = 0; i < parcelIds.length; i++) {
totalBuildings = totalBuildings.add(_parcels[parcelIds[i]].buildingIds.length);
}
uint256[] memory combined = new uint256[](totalBuildings);
uint256 counter = 0;
for (uint i = 0; i < parcelIds.length; i++) {
for (uint j = 0; j < _parcels[parcelIds[i]].buildingIds.length; j++) {
combined[counter] = _parcels[parcelIds[i]].buildingIds[j];
counter++;
}
}
return combined;
}
function _combinePaymentHistories(uint256[] memory parcelIds) private view returns (PaymentDetail[] memory) {
uint256 totalPayments = 0;
for (uint i = 0; i < parcelIds.length; i++) {
totalPayments = totalPayments.add(_parcels[parcelIds[i]].paymentHistory.length);
}
PaymentDetail[] memory combined = new PaymentDetail[](totalPayments);
uint256 counter = 0;
for (uint i = 0; i < parcelIds.length; i++) {
for (uint j = 0; j < _parcels[parcelIds[i]].paymentHistory.length; j++) {
combined[counter] = _parcels[parcelIds[i]].paymentHistory[j];
counter++;
}
}
return combined;
}
function _asSingletonArray(uint256 element) private pure returns (uint256[] memory) {
uint256[] memory array = new uint256[](1);
array[0] = element;
return array;
}
function _verifyKYC(address, string memory, bytes memory) private pure returns (bool) {
return true; // Integration with KYC provider should be implemented
}
// ========== OVERRIDES ==========
function _beforeTokenTransfer(
address from,
address to,
uint256 tokenId,
uint256 batchSize
) internal override(ERC721, ERC721Enumerable) {
super._beforeTokenTransfer(from, to, tokenId, batchSize);
require(!paused(), "Transfers paused");
require(!_parcels[tokenId].hasDispute, "Parcel has dispute");
require(_parcelToRequest[tokenId] == 0, "Parcel in split/merge process");
}
function _burn(uint256 tokenId) internal override(ERC721, ERC721URIStorage) {
super._burn(tokenId);
}
function tokenURI(uint256 tokenId)
public
view
override(ERC721, ERC721URIStorage)
returns (string memory)
{
return super.tokenURI(tokenId);
}
function supportsInterface(bytes4 interfaceId)
public
view
override(ERC721, ERC721Enumerable, AccessControl)
returns (bool)
{
return super.supportsInterface(interfaceId);
}
}
```
## 1. Introduction
### 1.1 Purpose and Context
This smart contract represents a decentralized land registry system built on
Ethereum blockchain technology. It serves as a tamper-proof solution for
recording, transferring, and managing ownership of land parcels and associated
buildings. The system transforms traditional paper-based land records into
non-fungible tokens (NFTs), providing immutable proof of ownership while
maintaining all critical property information on-chain.
The contract addresses several pain points in conventional land registry systems
including bureaucratic delays, fraudulent transactions, lack of transparency in
ownership history, and inefficient dispute resolution processes. By leveraging
blockchain technology, it creates a single source of truth for property
ownership that is accessible to all authorized parties while maintaining
necessary privacy controls.
## 2. Core Features
### 2.1 Parcel Lifecycle Management
The contract provides comprehensive tools for managing the entire lifecycle of
land parcels:
**Registration**: Each new land parcel is minted as an NFT containing all
relevant metadata including geographic boundaries (as GeoJSON polygons),
jurisdictional information, land use classification, and current valuation. The
registration process requires verification of both the property details and the
owner's identity through a KYC process.
**Transfers**: Ownership transfers are executed through a secure process that
automatically calculates and records applicable taxes. Each transfer maintains a
complete audit trail including payment details, tax calculations, and
participant information.
**Splitting and Merging**: The contract supports complex parcel modifications
through formal split and merge operations. When a parcel is split, the new
smaller parcels maintain references to their parent parcel. Merged parcels
inherit characteristics from their source parcels while creating a new unified
property record.
### 2.2 Financial Compliance
The built-in taxation system handles:
**Automated Tax Calculations**: Transfers are subject to configurable tax rates
that can vary based on property type, location, and transaction type (domestic
vs. foreign). The system automatically calculates the tax due and creates
permanent records of payments.
**Penalty Enforcement**: For overdue tax payments, the system calculates and
applies penalties based on the duration of delinquency. These penalties are
recorded as separate transactions in the property's payment history.
**Payment Tracking**: Every financial transaction related to a property is
permanently recorded including payment references, bank details, currency
information, and timestamps. This creates a complete financial history for audit
and compliance purposes.
## 3. Technical Architecture
### 3.1 Inheritance Structure
The contract utilizes several OpenZeppelin base contracts:
```
ERC721 - For NFT-based land title representation
ERC721Enumerable - For efficient parcel indexing and listing
ERC721URIStorage - For decentralized metadata storage
AccessControl - For role-based permissions
ReentrancyGuard - For protection against reentrancy attacks
Pausable - For emergency circuit breaker functionality
EIP712 - For structured data signing
```
### 3.2 Role Definitions
| Role | Responsibilities | Key Functions |
| ----------------------- | -------------------------------------------- | ---------------------------------------------- |
| REGISTRAR\_ROLE | Parcel registration, transfers, split/merge | registerParcel, transferParcel, approveRequest |
| DISPUTE\_RESOLVER\_ROLE | Dispute management | fileDispute, resolveDispute |
| TAX\_AUTHORITY\_ROLE | Tax configuration and collection | setTaxRate, collectDelayedTax |
| COURT\_ROLE | Legal oversight | Reserved; no functions in this version |
| DEFAULT\_ADMIN\_ROLE | System administration | grantRole, pause, unpause |
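Because role identifiers are `keccak256` hashes of their names, they can be computed and granted from a script. A minimal sketch with ethers v5 (addresses and keys are placeholders):

```typescript
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com");
const admin = new ethers.Wallet("0x<ADMIN_PRIVATE_KEY>", provider); // holds DEFAULT_ADMIN_ROLE

// Matches the on-chain constants, e.g.
// bytes32 public constant REGISTRAR_ROLE = keccak256("REGISTRAR_ROLE");
const REGISTRAR_ROLE = ethers.utils.id("REGISTRAR_ROLE");
const TAX_AUTHORITY_ROLE = ethers.utils.id("TAX_AUTHORITY_ROLE");

const registry = new ethers.Contract(
  "0x<CONTRACT_ADDRESS>",
  ["function grantRole(bytes32 role, address account)"], // from AccessControl
  admin
);

async function main() {
  await (await registry.grantRole(REGISTRAR_ROLE, "0x<REGISTRAR_ADDRESS>")).wait();
  await (await registry.grantRole(TAX_AUTHORITY_ROLE, "0x<TAX_AUTHORITY_ADDRESS>")).wait();
}

main().catch(console.error);
```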
## 4. Detailed Functionality
```mermaid
graph TD
A[Start] --> B[Registration]
B --> B1[Registrar Verifies Identity]
B1 --> B2[Mint Parcel NFT]
B2 --> B3[Store Metadata]
A --> C[Transfer]
C --> C1[Check Dispute Status]
C1 --> C2[Verify Buyer ID]
C2 --> C3[Calculate Tax]
C3 --> C4[Record Payment]
C4 --> C5[Transfer NFT]
A --> D[Split/Merge]
D --> D1[Request Creation]
D1 --> D2[Multi-Sig Approval]
D2 --> D3[Execute Operation]
D3 --> D31[For Splits: Mint New NFTs]
D3 --> D32[For Merges: Create Unified Parcel]
A --> E[Dispute]
E --> E1[Flag Parcel]
E1 --> E2[Resolution Process]
E2 --> E3[Clear Flag]
A --> F[Taxation]
F --> F1[Automatic Calculation]
F1 --> F2[Payment Recording]
F2 --> F3[Penalty Enforcement]
subgraph Security Layer
G[Role-Based Access]
H[Reentrancy Guards]
I[Input Validation]
J[Emergency Pause]
end
B --> G
C --> G
D --> G
E --> G
B --> H
C --> H
D --> H
B --> I
C --> I
D --> I
A --> J
```
### 4.1 Parcel Registration Process
The registration workflow involves multiple verification steps:
1. **Data Submission**: A registrar submits the complete property details
including geographic coordinates, legal description, and owner information.
The coordinates must be provided as a valid GeoJSON polygon defining the
parcel boundaries.
2. **Identity Verification**: The owner's national ID is checked against
existing records to prevent duplicate registrations. A cryptographic
signature verifies the owner's consent to the registration.
3. **Document Storage**: All supporting legal documents are stored on IPFS, with
only the content hash recorded on-chain. This balances transparency with
storage efficiency.
4. **NFT Minting**: Upon successful verification, a new NFT is minted to
represent the property. This NFT contains all metadata and serves as the
immutable ownership record.
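For illustration, the full registration call from a registrar's script could look like the sketch below (all addresses, identifiers, the RPC endpoint, and the KYC signature are placeholder assumptions; in this template `_verifyKYC` is a stub that accepts any signature):

```typescript
import { ethers } from "ethers";

async function main() {
  const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com");
  const registrar = new ethers.Wallet("0x<REGISTRAR_PRIVATE_KEY>", provider);

  const registry = new ethers.Contract(
    "0x<CONTRACT_ADDRESS>",
    [
      "function registerParcel(address owner, string parcelNumber, uint256 area, string gpsPolygon, string jurisdiction, string landUseType, uint256 landRate, string nationalId, string ipfsHash, bytes kycSignature) returns (uint256)",
    ],
    registrar
  );

  // GeoJSON polygon describing the parcel boundary (stored on-chain as a string)
  const gpsPolygon = JSON.stringify({
    type: "Polygon",
    coordinates: [[[4.35, 50.85], [4.36, 50.85], [4.36, 50.86], [4.35, 50.85]]],
  });

  const tx = await registry.registerParcel(
    "0x<OWNER_ADDRESS>",
    "BRU-2024-000123",        // parcel number
    12_500_000,               // 1,250 sqm with 4 decimals
    gpsPolygon,
    "Brussels-Capital",       // jurisdiction
    "RESIDENTIAL",
    50_000,                   // land rate, local currency per sqm
    "<HASHED_NATIONAL_ID>",   // hash the raw ID off-chain before submitting
    "Qm<IPFS_DOCUMENT_HASH>", // supporting documents on IPFS
    "0x<KYC_SIGNATURE>"       // consent signature checked by _verifyKYC
  );
  await tx.wait();
}

main().catch(console.error);
```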
### 4.2 Ownership Transfers
The transfer process ensures secure and compliant property transactions:
**Initiation**: An authorized registrar initiates the transfer on behalf of the
parties by specifying the recipient and transaction details. For private sales,
this typically follows an off-chain agreement between buyer and seller.
**Verification**: The system confirms the property has no active disputes or
restrictions. The buyer's identity is verified through their national ID to
prevent fraudulent transactions.
**Tax Calculation**: The system automatically computes the applicable transfer
tax based on the property value and transaction type. Foreign transactions incur
an additional surcharge.
**Payment Recording**: All payment details are recorded including the
transaction reference, amount, currency, and participating financial
institutions. This creates an auditable money trail.
**NFT Transfer**: Only after all checks are completed and taxes recorded does
the actual NFT transfer occur, officially changing the property ownership.
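Because rates are stored in basis points and divided by 10,000 on-chain, the tax due can be quoted off-chain with the same arithmetic. A minimal sketch using the contract's default rates:

```typescript
// Rates in basis points, as in the contract:
// baseTaxRate = 100 (1%), foreignTransferSurcharge = 200 (+2%)
const baseTaxRate = 100n;
const foreignTransferSurcharge = 200n;

function transferTax(amount: bigint, isForeign: boolean): bigint {
  const rate = isForeign ? baseTaxRate + foreignTransferSurcharge : baseTaxRate;
  return (amount * rate) / 10_000n;
}

// A foreign buyer paying 1,000,000 units owes 3% tax:
console.log(transferTax(1_000_000n, true));  // 30000n
console.log(transferTax(1_000_000n, false)); // 10000n (1%)
```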
## 5. Advanced Operations
### 5.1 Parcel Subdivision
The parcel splitting functionality enables:
**Geometric Division**: A single parcel can be divided into multiple smaller
parcels with custom boundaries. Each new parcel receives its own GeoJSON polygon
definition.
**Inherited Attributes**: New parcels maintain references to their parent parcel
and inherit key characteristics like jurisdiction and land use type unless
explicitly overridden.
**Tax Treatment**: Each new parcel carries over the parent parcel's payment
history and last tax payment timestamp, so outstanding obligations remain
traceable after the subdivision.
### 5.2 Parcel Consolidation
Merging adjacent parcels involves:
**Pre-Merge Validation**: The system verifies all parcels share common
ownership, are in the same jurisdiction, and have no active disputes or
restrictions.
**Combined Metadata**: The new merged parcel combines characteristics from its
source parcels. The area becomes the sum of all components, the land rate is
averaged across the sources, and the land use type is carried over from the
first source parcel.
**Historical Preservation**: Even after merging, the complete lineage of the
property remains available through the parent parcel references.
## 6. Security Considerations
### 6.1 Protection Mechanisms
**Multi-Signature Controls**: Critical operations like parcel splits and merges
require approval from multiple authorized registrars. This prevents unilateral
actions that could affect property records.
**Dispute Locking**: Properties under legal dispute are automatically locked
against transfers or modifications until the dispute is formally resolved.
**Emergency Pausing**: The contract includes a circuit breaker that allows
authorized administrators to temporarily halt all operations in case of detected
vulnerabilities or system compromises.
### 6.2 Audit Capabilities
**Comprehensive History**: Every property maintains a complete chronological
record of all ownership changes, modifications, and financial transactions.
**Immutable Records**: Once recorded, no party can alter or delete historical
data, ensuring the integrity of the land registry.
**Transparent Access**: Authorized auditors can access the complete transaction
history of any property for verification and compliance purposes.
## 7. Integration and Deployment
### 7.1 System Requirements
**Blockchain Network**: Designed for Ethereum-compatible networks with support
for ERC-721 tokens and EIP-712 signatures.
**Off-Chain Components**: Requires integration with:
* IPFS for document storage
* KYC provider for identity verification
* Geographic information system for boundary validation
### 7.2 Deployment Process
1. **Contract Compilation**: Verify the contract compiles without errors in the
target environment.
2. **Initial Deployment**: Deploy the contract to the chosen network; the
   constructor takes no parameters and grants the deployer the default admin
   and registrar roles.
3. **Role Configuration**: Establish the initial set of administrators and
assign operational roles to authorized entities.
4. **Tax Configuration**: Set the initial tax rates and penalties according to
jurisdictional requirements.
5. **Integration Testing**: Thoroughly test all functions in a controlled
environment before processing live transactions.
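To make the sequence concrete, here is a minimal Hardhat deployment sketch covering steps 2 through 4 (the signer layout and the 1.5% rate are illustrative assumptions):

```typescript
import { ethers } from "hardhat";

async function main() {
  const [admin, taxAuthority, resolver] = await ethers.getSigners();

  // Steps 1-2: deploy; the constructor grants DEFAULT_ADMIN_ROLE and
  // REGISTRAR_ROLE to the deployer.
  const LandRegistry = await ethers.getContractFactory("LandRegistry");
  const registry = await LandRegistry.deploy();
  await registry.deployed();

  // Step 3: assign operational roles to authorized entities
  await registry.grantRole(ethers.utils.id("TAX_AUTHORITY_ROLE"), taxAuthority.address);
  await registry.grantRole(ethers.utils.id("DISPUTE_RESOLVER_ROLE"), resolver.address);

  // Step 4: set the initial tax rate (basis points; 150 = 1.5%, max 1000 = 10%)
  await registry.connect(taxAuthority).setTaxRate(150);

  console.log("LandRegistry deployed at", registry.address);
}

main().catch(console.error);
```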
file: ./content/docs/use-case-guides/template-libraries/evm-smart-contracts/state-machine.mdx
meta: {
"title": "State Machine"
}
# State Machine
This smart contract set implements a state machine. State machines are usually
used to represent a system where an entity goes through several sequential
states.
Each state has different functions and different roles associated with it. You
can call certain functions only if the current state is associated with that
function. The roles associated with a state are the roles that are allowed to
perform the transition from that state to the next.
Our templateset provides a powerful, highly customisable way to create a
statemachine for your use case.
## Description
### Representation of a state
Each state in the statemachine is represented using a `State` struct. A `struct`
is a data type in Solidity used to represent an object containing different
attributes. The `State` struct is declared in the `StateMachine.sol` file as
follows:
```solidity
struct State {
// a boolean to check if the state is actually created
bool hasBeenCreated;
// a mapping of functions that can be executed when in this state
mapping(bytes4 => bool) allowedFunctions;
// a mapping of all roles that have been configured for this state
mapping(bytes32 => bool) allAllowedRoles;
// a list of all the roles that have been configured for this state
bytes32[] allowedRoles;
// a list of all the preconditions that have been configured for this state
function(bytes32, bytes32) internal view[] preConditions;
// a list of callbacks to execute before the state transition completes
function(bytes32, bytes32) internal[] callbacks;
// a list of states that can be transitioned to
bytes32[] nextStates;
// function that executes logic and then does a StateTransition
bytes4 preFunction;
}
```
To create a state, you can call the `createState` function which our templateset
provides out of the box. It is defined in the `StateMachine.sol` contract.
### Defining a state
To comprehensively represent your state, you need to define:
1. The next states that you can transition to from the given state (`nextStates`
field)
2. The functions you can access when you are in the given state
(`allowedFunctions` field)
3. The roles who can perform the state transition to the next state
(`allowedRoles` field)
Our templateset allows you to do this very easily through the functions
`addNextStateForState`, `addAllowedFunctionForState` and `addRoleForState`.
These functions are defined in the `StateMachine.sol` file.
One thing to note is that these three functions can be only called by the admin.
The admin is set to the address from which the contract creation transaction is
sent.
### Transitioning from a state
#### 1. Pre-conditions
There are usually some conditions that need to be satisfied before transitioning
to the next state. Our templateset allows you to effortlessly add these
conditions using the `preConditions` field. It is important to note that while
defining pre-condition functions, you need to ensure they throw an exception
when the condition fails. Add a pre-condition for a state using
`addPreConditionForState` function.
#### 2. Callbacks
Before you transition to another state, there may be several actions you need to
perform first. To accommodate this, our templateset provides support for
`callbacks`. `callbacks` for a given state are functions which are called before
moving to the next state from the given state. Add a callback for a state using
`addCallbackForState` function.
#### 3. State transition
Once pre-conditions are satisfied and the callbacks have been executed, we are
ready to perform our state transition.
Generally in a state transition, there is some business logic to be executed,
followed by the actual change of state. Both these steps are bundled in a
function, let's call such a function a transition function for the state. This
transition function should be called when you want to transition from the given
state.
To build your transition function - all you need to do is define the business
logic you want to execute before the state transition. To perform the actual
state transition, you can simply call the `transitionState` function.
The `transitionState` function we provide does all the heavy lifting of a state
transition for you (for example: checking the preconditions, execution of
callbacks, etc.). It also verifies all the edge conditions associated with state
transitions so you don't need to worry about them.
After you have defined your transition function for a state, you can calculate
its function signature and set the `preFunction` field on the state to the
calculated function signature. You can do this by calling
`setPreFunctionForState` defined in the `StateMachine.sol` contract.
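Since `preFunction` is a `bytes4` selector, you can compute it off-chain from the transition function's canonical signature. A sketch with ethers v5 (the function name `startProduction()` is purely illustrative):

```typescript
import { ethers } from "ethers";

// A bytes4 selector is the first 4 bytes of the keccak256 hash of the
// canonical function signature.
const selector = ethers.utils.id("startProduction()").slice(0, 10); // "0x" + 8 hex chars
console.log(selector);

// It can then be registered for the state, e.g. (sketch; assumes the
// setter is reachable from your deployment flow):
// await stateMachine.setPreFunctionForState(
//   ethers.utils.formatBytes32String("STATE_START"),
//   selector
// );
```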
### History of state transitions
To have more transparency, our templateset also records the history of state
transitions.
We use a `StateTransition` struct to store information on a state transition. It
is defined in the `StateMachine.sol` contract as:
```solidity
struct StateTransition {
bytes32 fromState;
bytes32 toState;
address actor;
uint256 timestamp;
}
```
The history is stored as an array of `StateTransition`s. To view the transition
at a particular index in the `history` array, you can use the `getHistory`
function defined in the `StateMachine.sol` contract.
You can also get the length of the `history` array using `getHistoryLength`
function.
To view the whole history, you can get the length of the history array using
`getHistoryLength` and call `getHistory` for each index in the array.
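A sketch of that loop with ethers v5 (the ABI fragments and the shape of the returned `StateTransition` tuple are assumptions based on the struct above):

```typescript
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com");
const stateMachine = new ethers.Contract(
  "0x<CONTRACT_ADDRESS>",
  [
    "function getHistoryLength() view returns (uint256)",
    "function getHistory(uint256 index) view returns (bytes32 fromState, bytes32 toState, address actor, uint256 timestamp)",
  ],
  provider
);

async function printHistory() {
  const length = (await stateMachine.getHistoryLength()).toNumber();
  for (let i = 0; i < length; i++) {
    const t = await stateMachine.getHistory(i);
    console.log(
      ethers.utils.parseBytes32String(t.fromState), "->",
      ethers.utils.parseBytes32String(t.toState),
      "by", t.actor,
      "at", new Date(t.timestamp.toNumber() * 1000).toISOString()
    );
  }
}

printHistory().catch(console.error);
```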
## Usage
### 1. Setting up the state machine states and flow
For your convenience, the `Generic.sol` template contract has all the necessary
boilerplate code set up, so you can modify the different states and their
relationships at your leisure.
#### Creating states
```solidity
// Create the variable
bytes32 public constant STATE_START = "STATE_START";
// use the createState helper inside the setupStateMachine function
function setupStateMachine(address adminAddress) internal override {
...
createState(STATE_START);
...
// set the correct state as the starting state
setInitialState(STATE_START);
}
```
#### Defining flow and roles
```solidity
// Add next states to existing states to define flow
// Add Roles to state
function setupStateMachine(address adminAddress) internal override {
...
addNextStateForState(STATE_START, STATE_END);
addRoleForState(STATE_START, ROLE_ADMIN, adminAddress);
...
}
```
### 2. Understanding the deployment process and deploying the contract
Before we dive in you should know that the `Generic.sol` contract extends from
the `StateMachineMetadata.sol` contract. It's an interface on top of the base
`StateMachine.sol` contract that allows linking metadata to a contract. The
setup for this takes place during the deployment of the contract. So let's do
just that.
Now when deploying the contract you may want to bind some IPFS data to it. A
potential use case for this could be lifecycle tracing of a vehicle for example.
So you could define a state machine that describes the various states a vehicle
goes through (manufacturing, maintenance, etc...), but the vehicle itself has
several unchanging attributes that don't need to be stored on-chain (f.e. a
repair manual). IPFS is perfect for that.
Let's look at the constructor of our `Generic.sol` contract to get an idea of
the inputs it will need for deployment:
```solidity
constructor(
uint256 entityId,
string memory ipfsHash,
string memory baseURI
) {
address adminAddress = msg.sender;
_roles = [ROLE_ADMIN, ROLE_MANUFACTURER, ROLE_ONE, ROLE_TWO, ROLE_THREE, ROLE_FOUR];
_setRoleAdmin(ROLE_ADMIN, DEFAULT_ADMIN_ROLE);
_grantRole(DEFAULT_ADMIN_ROLE, adminAddress);
setupStateMachine(adminAddress);
_entityId = entityId;
_baseURI = baseURI;
_setEntityURI(_entityId, ipfsHash);
}
```
Since the metadata binding is key-value based, the `entityId` is the key and the
value is the URL, IPFS hash, or any other identifier used to retrieve your
metadata from the web (for example, the unique suffix of a WeTransfer link). The
`baseURI` is an optional prefix attached to the passed identifier (here
`ipfsHash`); in this case it is `ipfs://`, indicating where the metadata lives.
Using an `entityId` as the key to retrieve the bound value gives the end user
flexibility. For example, if only the deployer knows the key, only they can
retrieve the value; or you could extend the functionality of the
`StateMachineMetadata.sol` contract to allow users to add metadata later on.
Your `entityId` can be any slug you want, but we recommend using a crypto
library to generate one that is unique and at least 32 bytes long to avoid
collisions.
Now to deploy our contract you define an entityId and the data you want to
upload to IPFS:
```typescript
// This identifier is used as a key to attach metadata to the smart contract
// See StateMachineMetadata.sol for more info
// This value is hardcoded here to make graph indexing of the metadata possible
export const entityId = 3073193977; // crypto.randomBytes(32).readUInt32LE()
// Let's define the metadata for our entity that we want to upload to IPFS
const metadata = {
param1: "param1",
param2: "param2",
};
// using our hardhat task to upload the data to IPFS
const jsonCid: string = await run("ipfs-upload-string", {
data: JSON.stringify(metadata),
ipfspath: `/generic-statemachine/metadata/metadata-${entityId}.json`,
ipfsnode,
});
// Then we deploy
const statemachine = await factory.deploy(
BigNumber.from(entityId),
jsonCid,
"ipfs://"
);
```
### 3. Indexing on-chain data
#### Statemachinemetadata Indexing Module
The statemachinemetadata indexing module has three main files and one helper file:
1. `subgraph/datasource/statemachinemetadata.gql.json` - Schema definition file
2. `subgraph/datasource/statemachinemetadata.yaml` - Subgraph manifest template
file
3. `subgraph/datasource/statemachinemetadata.ts` - Mapping functions file
And a helper file at `subgraph/fetch/statemachinemetadata.ts`
#### 1. Statemachinemetadata Schema
We define 2 entities in the schema:
1. `StateMachineMetadataContract`
This is the entity modelling the `Generic.sol` statemachine contract.
* The field `currentState` holds the current state the entity represented by
the statemachine is in.
* The `stateTransitions` field holds the list of transitions that the entity
has gone through.
* The two fields `param1` and `param2` may seem confusing, since we don't see
them as state variables on the `Generic.sol` contract or any of the
contracts that `Generic.sol` inherits. This is because they are not state
variables on the contract, but metadata for the entity that we have
uploaded to IPFS.
You can see them being set in the deploy script at
`deploy > 00_deploy_StateMachine.ts`.
If you wish to change the name of the parameters in the metadata from `param1`,
`param2` to your custom field name in the deploy script, please be sure to
propagate the changes in:
a. schema definition at `subgraph/datasource/statemachinemetadata.gql.json`
b. handler at `subgraph/datasource/statemachinemetadata.ts`
2. `StateTransition`
This is the entity representing the `Transition` event emitted by the
`Generic.sol` statemachine contract. Its fields represent the information
emitted by the event.
#### 2. Statemachinemetadata Subgraph Manifest Template
The field of interest to us in the subgraph manifest template at
`subgraph/datasource/statemachinemetadata.yaml` is the `eventHandlers` field.
Here, we list the events we want to listen to, as given here:
```yaml
- event: Transition(address,bytes32,bytes32)
handler: handleTransitions
```
We listen to the `Transition` event emitted by the `Generic` statemachine
contract. When that event is emitted, we call the `handleTransitions` mapping
function defined in `subgraph/datasource/statemachinemetadata.ts`
#### 3. Statemachinemetadata Mapping function
The mapping functions for the `statemachinemetadata` indexing module are defined
in `subgraph/datasource/statemachinemetadata.ts`
It is advisable to run the `graph:config`, `graph:compile`, and `graph:codegen`
tasks before working with this file, as they generate the required types and
classes (see the note at the end of this section).
Now that we have our types and classes, let's see how they are used.
The `handleTransitions` handler takes in the `Transition` event. Then, it
performs three main tasks:
* fetches the `StateMachineMetadataContract` entity which emitted the
`Transition` event
To do this, a custom fetcher was written
(`./subgraph/fetch/statemachinemetadata.ts`).
Inside the fetcher, you will see the hard coded `entityId` from the deploy
script `deploy > 00_deploy_StateMachine` again:
```typescript
const try_entityURI = sm.try_entityURI(BigInt.fromString(`3073193977`));
```
We fetch the metadata from IPFS using this entity ID and populate the fields on
the `StateMachineMetadataContract` entity accordingly:
```typescript
const try_entityURI = sm.try_entityURI(BigInt.fromString(`3073193977`));
const metadataURI = try_entityURI.reverted ? "" : try_entityURI.value;
if (metadataURI.includes("ipfs://")) {
const ipfsHash = metadataURI.replace("ipfs://", "");
const metadataURIBytes = ipfs.cat(ipfsHash);
if (metadataURIBytes) {
const metadataURIContent = json.try_fromBytes(metadataURIBytes);
if (
metadataURIContent.isOk &&
metadataURIContent.value.kind == JSONValueKind.OBJECT
) {
const entityMetadata = metadataURIContent.value.toObject();
const param1 = entityMetadata.get("param1");
const param2 = entityMetadata.get("param2");
contract.param1 = param1 ? param1.toString() : null;
contract.param2 = param2 ? param2.toString() : null;
contract.save();
}
}
}
```
It is important that this `entityId` matches the one defined in the deployment
script. Note also that the general indexing logic can be reused for other
contracts based on the protocol prefix defined at deployment (`ipfs://`).
* Then, we create a new `StateTransition` entity to keep a track of the events
emitted by the contract
* Finally, we save the changes in the storage
## Note
Before using this file, it is recommended to run the tasks `graph:config`,
`graph:compile` and `graph:codegen`.
The `graph:codegen` task is where the types/classes are generated based on the
entities defined in the schema (at `subgraphs > x.gql.json`). These
types/classes are imported and used in the mapping functions.
Without running this task, you will run into several `Cannot find module..`
linter errors while trying to use this file.
file: ./content/docs/use-case-guides/template-libraries/fabric-chaincodes/cbdc.mdx
meta: {
"title": "CBDC Chaincode"
}
## Disclaimer
This chaincode is provided solely for educational and prototyping purposes.

* It must not be used in live financial environments without thorough auditing,
  testing, and tailoring for legal, regulatory, and security requirements.
* CBDC systems involve complex central banking policies, cryptographic controls,
  compliance audits, and jurisdictional regulations that this simplified
  implementation does not cover.
* Any real-world deployment of such a contract must go through a complete
  security audit, formal verification, and regulatory alignment in the context
  of the target financial system.
```go
package main
import (
"encoding/json"
"fmt"
"regexp"
"strconv"
"strings"
"time"
"github.com/hyperledger/fabric-contract-api-go/contractapi"
)
// Regex pattern for account ID validation
var idPattern = regexp.MustCompile(`^[a-zA-Z0-9_.-]{4,64}$`)
const (
RoleCentralBank = "centralbank"
RoleRetailBank = "retailbank"
RoleAuditor = "auditor"
RetailTransferCap = 100000
MultisigThreshold = 500000
)
// CBDCContract defines the chaincode structure
type CBDCContract struct {
contractapi.Contract
}
// Account represents a CBDC wallet
type Account struct {
Owner string `json:"owner"`
Balance uint64 `json:"balance"`
CreatedAt string `json:"createdAt"`
LastActive string `json:"lastActive"`
Frozen bool `json:"frozen"`
Tags map[string]string `json:"tags"`
History []TransactionLog `json:"history"`
}
// TransactionLog stores audit trails for an account
type TransactionLog struct {
Action string `json:"action"`
Amount uint64 `json:"amount,omitempty"`
Counterparty string `json:"counterparty,omitempty"`
Timestamp string `json:"timestamp"`
Initiator string `json:"initiator"`
}
// Role mapping from MSP ID
func getRoleFromMSP(msp string) string {
switch strings.ToLower(msp) {
case "centralbankmsp":
return RoleCentralBank
case "retailbankmsp":
return RoleRetailBank
case "auditormsp":
return RoleAuditor
default:
return ""
}
}
// Role-based access control
func (c *CBDCContract) hasRole(ctx contractapi.TransactionContextInterface, allowedRoles ...string) bool {
mspID, err := ctx.GetClientIdentity().GetMSPID()
if err != nil {
return false
}
role := getRoleFromMSP(mspID)
for _, r := range allowedRoles {
if r == role {
return true
}
}
return false
}
// Enforce transfer caps for retail banks
func (c *CBDCContract) enforceTransactionCap(ctx contractapi.TransactionContextInterface, amount uint64) error {
mspID, err := ctx.GetClientIdentity().GetMSPID()
if err != nil {
return fmt.Errorf("unable to determine MSPID")
}
role := getRoleFromMSP(mspID)
if role == RoleRetailBank && amount > RetailTransferCap {
return fmt.Errorf("transfer amount exceeds retail bank cap of %d", RetailTransferCap)
}
return nil
}
// If multisig approval is needed
func (c *CBDCContract) multisigApprovalRequired(amount uint64) bool {
return amount > MultisigThreshold
}
// Stub for future multisig enforcement
func (c *CBDCContract) verifyMultisigApproval(ctx contractapi.TransactionContextInterface, txID string) error {
return nil // To be implemented
}
// Account ID validation
func validateID(id string) error {
if !idPattern.MatchString(id) {
return fmt.Errorf("invalid account ID format")
}
return nil
}
// Create or load account, and persist if new
func (c *CBDCContract) getOrCreateAccount(ctx contractapi.TransactionContextInterface, id string) (*Account, error) {
a, err := c.getAccount(ctx, id)
if err == nil {
return a, nil
}
ts, err := ctx.GetStub().GetTxTimestamp()
if err != nil {
return nil, fmt.Errorf("unable to read transaction timestamp: %v", err)
}
timestamp := time.Unix(ts.Seconds, int64(ts.Nanos)).Format(time.RFC3339)
newAccount := &Account{
Owner: id,
Balance: 0,
CreatedAt: timestamp,
LastActive: timestamp,
Frozen: false,
Tags: make(map[string]string),
History: []TransactionLog{},
}
if err := c.saveAccount(ctx, id, newAccount); err != nil {
return nil, err
}
return newAccount, nil
}
// Load existing account from state
func (c *CBDCContract) getAccount(ctx contractapi.TransactionContextInterface, id string) (*Account, error) {
data, err := ctx.GetStub().GetState(id)
if err != nil {
return nil, err
}
if data == nil {
return nil, fmt.Errorf("account not found")
}
var acc Account
if err := json.Unmarshal(data, &acc); err != nil {
return nil, err
}
return &acc, nil
}
// Persist account to world state
func (c *CBDCContract) saveAccount(ctx contractapi.TransactionContextInterface, id string, acc *Account) error {
data, err := json.Marshal(acc)
if err != nil {
return err
}
return ctx.GetStub().PutState(id, data)
}
// Get client identity
func (c *CBDCContract) GetInvoker(ctx contractapi.TransactionContextInterface) (string, error) {
id, err := ctx.GetClientIdentity().GetID()
if err != nil || id == "" {
return "", fmt.Errorf("unable to retrieve or validate invoker ID")
}
return id, nil
}
// Central bank can issue tokens
func (c *CBDCContract) IssueTokens(ctx contractapi.TransactionContextInterface, recipient string, amount uint64) error {
if !c.hasRole(ctx, RoleCentralBank) {
return fmt.Errorf("only central bank can issue tokens")
}
if err := validateID(recipient); err != nil {
return err
}
if amount == 0 {
return fmt.Errorf("amount must be greater than zero")
}
invoker, err := c.GetInvoker(ctx)
if err != nil {
return err
}
account, err := c.getOrCreateAccount(ctx, recipient)
if err != nil {
return err
}
if account.Frozen {
return fmt.Errorf("account is frozen")
}
ts, err := ctx.GetStub().GetTxTimestamp()
if err != nil {
return fmt.Errorf("unable to read transaction timestamp: %v", err)
}
timestamp := time.Unix(ts.Seconds, int64(ts.Nanos)).Format(time.RFC3339)
account.Balance += amount
account.LastActive = timestamp
account.History = append(account.History, TransactionLog{"ISSUE", amount, recipient, timestamp, invoker})
if err := c.saveAccount(ctx, recipient, account); err != nil {
return err
}
return ctx.GetStub().SetEvent("TokensIssued", []byte(fmt.Sprintf("%s:%d", recipient, amount)))
}
// Central bank can burn tokens
func (c *CBDCContract) BurnTokens(ctx contractapi.TransactionContextInterface, account string, amount uint64) error {
if !c.hasRole(ctx, RoleCentralBank) {
return fmt.Errorf("only central bank can burn tokens")
}
if err := validateID(account); err != nil {
return err
}
if amount == 0 {
return fmt.Errorf("amount must be greater than zero")
}
invoker, err := c.GetInvoker(ctx)
if err != nil {
return err
}
a, err := c.getAccount(ctx, account)
if err != nil {
return err
}
if a.Frozen {
return fmt.Errorf("account is frozen")
}
if a.Balance < amount {
return fmt.Errorf("insufficient balance")
}
ts, err := ctx.GetStub().GetTxTimestamp()
if err != nil {
return fmt.Errorf("unable to read transaction timestamp: %v", err)
}
timestamp := time.Unix(ts.Seconds, int64(ts.Nanos)).Format(time.RFC3339)
a.Balance -= amount
a.LastActive = timestamp
a.History = append(a.History, TransactionLog{"BURN", amount, "", timestamp, invoker})
if err := c.saveAccount(ctx, account, a); err != nil {
return err
}
return ctx.GetStub().SetEvent("TokensBurned", []byte(fmt.Sprintf("%s:%d", account, amount)))
}
// Freeze an account
func (c *CBDCContract) FreezeAccount(ctx contractapi.TransactionContextInterface, account string) error {
if !c.hasRole(ctx, RoleCentralBank) {
return fmt.Errorf("only central bank can freeze accounts")
}
a, err := c.getAccount(ctx, account)
if err != nil {
return err
}
a.Frozen = true
return c.saveAccount(ctx, account, a)
}
// Unfreeze an account
func (c *CBDCContract) UnfreezeAccount(ctx contractapi.TransactionContextInterface, account string) error {
if !c.hasRole(ctx, RoleCentralBank) {
return fmt.Errorf("only central bank can unfreeze accounts")
}
a, err := c.getAccount(ctx, account)
if err != nil {
return err
}
a.Frozen = false
return c.saveAccount(ctx, account, a)
}
// Get account balance with no access control (can be restricted further)
func (c *CBDCContract) GetBalance(ctx contractapi.TransactionContextInterface, account string) (uint64, error) {
a, err := c.getAccount(ctx, account)
if err != nil {
return 0, err
}
return a.Balance, nil
}
// Get transaction history
func (c *CBDCContract) GetHistory(ctx contractapi.TransactionContextInterface, account string) ([]TransactionLog, error) {
a, err := c.getAccount(ctx, account)
if err != nil {
return nil, err
}
return a.History, nil
}
// Get account tags
func (c *CBDCContract) GetTags(ctx contractapi.TransactionContextInterface, account string) (map[string]string, error) {
a, err := c.getAccount(ctx, account)
if err != nil {
return nil, err
}
return a.Tags, nil
}
// Admin can tag accounts
func (c *CBDCContract) AdminAddTag(ctx contractapi.TransactionContextInterface, account, key, value string) error {
if !c.hasRole(ctx, RoleCentralBank) {
return fmt.Errorf("only central bank can tag accounts")
}
if len(key) > 32 || len(value) > 64 {
return fmt.Errorf("tag key/value too long")
}
a, err := c.getAccount(ctx, account)
if err != nil {
return err
}
a.Tags[key] = value
return c.saveAccount(ctx, account, a)
}
// Chaincode entry point
func main() {
chaincode, err := contractapi.NewChaincode(new(CBDCContract))
if err != nil {
panic(fmt.Sprintf("Error creating CBDC chaincode: %v", err))
}
if err := chaincode.Start(); err != nil {
panic(fmt.Sprintf("Error starting CBDC chaincode: %v", err))
}
}
```
This CBDC (Central Bank Digital Currency) chaincode is written for Hyperledger
Fabric and is intended strictly for educational and prototyping purposes. It is
not production-ready and must not be deployed in a live financial system without
substantial auditing, rigorous testing, and tailoring to specific regulatory and
operational requirements. Real-world CBDC implementations are complex, involving
monetary policy, central banking rules, and advanced security mechanisms, none
of which are fully captured in this simplified contract. The contract does not
include protections against replay attacks, does not implement cryptographic
signature verification, lacks privacy guarantees, and omits enforcement of
multisignature approvals and advanced compliance policies.
Conceptually, this chaincode simulates a basic CBDC management system deployed
on a permissioned Hyperledger Fabric network. It provides core features that a
central bank might need to issue and manage digital fiat currency. These
features include the ability to issue or burn currency, freeze or unfreeze
accounts, set metadata tags on accounts, enforce role-based access, and maintain
an account-level audit trail. Additionally, it includes a mechanism to apply
transaction limits for retail banks. The contract uses Go and the Fabric
Contract API and relies on standard transaction context interfaces to interact
with the ledger state.
The system recognizes three roles based on the MSP ID of the organization
invoking a transaction. The central bank has full administrative control,
allowing it to issue and burn tokens, freeze or unfreeze accounts, and add tags.
Retail banks are permitted to interact with the system under certain
constraints, such as a transfer cap. Auditors are not yet integrated but are
envisioned as read-only participants. These roles are determined by mapping the
MSP ID to predefined role labels, and permissions are enforced using a utility
method that checks if the invoker’s role is among those allowed for a specific
operation.
Data in the system is centered around the concept of an account. Each account
includes an owner ID, token balance, creation and last active timestamps, a
frozen flag, a tag map for metadata, and a list of transaction logs that serve
as the audit trail. When an account is created, it is initialized with default
values, and every transaction affecting the account updates its state and
appends a corresponding entry to its history. Account state is stored in the
ledger as a serialized JSON object.
The chaincode allows the central bank to issue tokens to any valid account.
Before issuing, it checks that the recipient ID is properly formatted, that the
amount is positive, and that the target account is not frozen. It then updates
the account balance, sets the last active timestamp, and logs the issuance
event. The burning of tokens follows a similar logic but deducts from the
account balance and ensures that the balance is sufficient to cover the burn
request. Both actions emit chaincode events for external observability.
Accounts can be frozen or unfrozen by the central bank. When an account is
frozen, it becomes ineligible for token issuance or burning. This provides a
simple control mechanism for suspending suspicious or non-compliant actors in
the system. In addition to these lifecycle operations, the chaincode supports
tagging, allowing the central bank to attach short metadata entries to accounts.
This could be used for tagging accounts as KYC-verified, associating them with a
branch ID, or any other administrative classification.
Several query functions are exposed, including the ability to read an
account’s balance, view its transaction history, or retrieve its metadata
tags. Currently, these functions are unrestricted, meaning that any network
participant can query any account’s data. In a real-world deployment, access
would need to be restricted to protect financial privacy and enforce access
policies, potentially using role checks, Fabric’s private data collections, or
attribute-based access control, as sketched below.
The chaincode also introduces the concept of transaction caps for retail
banks. A configurable threshold is meant to prevent retail banks from
processing operations above a specified amount, although the helper that
checks it (`enforceTransactionCap`) is not yet called by any transaction in
this sample. The cap does not apply to the central bank, which retains full
authority over token issuance and burning. Additionally, there is a
placeholder mechanism for enforcing multisignature approvals on high-value
transactions. While the code identifies when such an approval would be
required (`multisigApprovalRequired`), `verifyMultisigApproval` does not
currently validate any approvals or signatures and remains a stub for future
enhancement. The sketch below shows how both helpers could be wired into a
payment flow.
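The following `Transfer` function is hypothetical and not part of the sample
contract; it only illustrates where the cap check and the multisig stub would
sit in an actual payment path (history logging is omitted for brevity).

```go
// Hypothetical Transfer: shows where enforceTransactionCap and the multisig
// stub would plug into a payment flow. Illustrative only.
func (c *CBDCContract) Transfer(ctx contractapi.TransactionContextInterface, from, to string, amount uint64) error {
	if err := validateID(from); err != nil {
		return err
	}
	if err := validateID(to); err != nil {
		return err
	}
	if from == to {
		return fmt.Errorf("sender and recipient must differ")
	}
	if amount == 0 {
		return fmt.Errorf("amount must be greater than zero")
	}
	// Retail banks may not move more than RetailTransferCap in one transaction.
	if err := c.enforceTransactionCap(ctx, amount); err != nil {
		return err
	}
	// High-value transfers would additionally require multisig approval;
	// verifyMultisigApproval is still an unimplemented stub in this sample.
	if c.multisigApprovalRequired(amount) {
		if err := c.verifyMultisigApproval(ctx, ctx.GetStub().GetTxID()); err != nil {
			return err
		}
	}
	sender, err := c.getAccount(ctx, from)
	if err != nil {
		return err
	}
	recipient, err := c.getOrCreateAccount(ctx, to)
	if err != nil {
		return err
	}
	if sender.Frozen || recipient.Frozen {
		return fmt.Errorf("account is frozen")
	}
	if sender.Balance < amount {
		return fmt.Errorf("insufficient balance")
	}
	sender.Balance -= amount
	recipient.Balance += amount
	if err := c.saveAccount(ctx, from, sender); err != nil {
		return err
	}
	if err := c.saveAccount(ctx, to, recipient); err != nil {
		return err
	}
	return ctx.GetStub().SetEvent("TokensTransferred", []byte(fmt.Sprintf("%s:%s:%d", from, to, amount)))
}
```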
This chaincode implements a simplified CBDC (Central Bank Digital Currency)
logic using the Hyperledger Fabric framework. It demonstrates the key
responsibilities of a central bank in managing digital token issuance and
enforcement controls.
***
## Key Functionalities
* Role-based access via MSP ID mapping (Central Bank, Retail Bank, Auditor)
* Token issuance and burning (by Central Bank only)
* Account freezing and unfreezing
* Transfer caps for Retail Banks
* Metadata tagging for accounts
* Transaction history logging
* Event emission for observability
* Uses `contractapi` in Go for implementation
***
## Roles
Roles are inferred from MSP IDs:
```go
const (
	RoleCentralBank = "centralbank"
	RoleRetailBank  = "retailbank"
	RoleAuditor     = "auditor"
)
```
Role resolution is done using:
```go
func getRoleFromMSP(msp string) string {
	switch strings.ToLower(msp) {
	case "centralbankmsp":
		return RoleCentralBank
	case "retailbankmsp":
		return RoleRetailBank
	case "auditormsp":
		return RoleAuditor
	default:
		return ""
	}
}
```
***
## Account Structure
```go
type Account struct {
Owner string
Balance uint64
CreatedAt string
LastActive string
Frozen bool
Tags map[string]string
History []TransactionLog
}
```
Each account maintains metadata, balance, timestamps, and a full transaction
log.
***
## Transaction Logging
Audit logs are captured using:
```go
type TransactionLog struct {
Action string
Amount uint64
Counterparty string
Timestamp string
Initiator string
}
```
***
## Token Issuance (Central Bank Only)
```go
func (c *CBDCContract) IssueTokens(ctx contractapi.TransactionContextInterface, recipient string, amount uint64) error
```
* Only accessible by centralbank role
* Fails if recipient is frozen
* Updates balance and appends to history
* Emits TokensIssued event
***
## Token Burning (Central Bank Only)
```go
func (c *CBDCContract) BurnTokens(ctx contractapi.TransactionContextInterface, account string, amount uint64) error
```
* Deducts from account
* Fails if frozen or underfunded
* Emits TokensBurned event
***
## Account Freezing
```go
func (c *CBDCContract) FreezeAccount(ctx contractapi.TransactionContextInterface, account string) error
func (c *CBDCContract) UnfreezeAccount(ctx contractapi.TransactionContextInterface, account string) error
```
* Only the central bank may freeze/unfreeze accounts
* Prevents future operations on frozen accounts
***
## Metadata Tagging
```go
func (c *CBDCContract) AdminAddTag(ctx contractapi.TransactionContextInterface, account, key, value string) error
```
* Adds key-value metadata (e.g., "kyc": "verified")
* Length restrictions: key ≤ 32, value ≤ 64
***
## Account Queries
```go
func (c *CBDCContract) GetBalance(ctx contractapi.TransactionContextInterface, account string) (uint64, error)
func (c *CBDCContract) GetHistory(ctx contractapi.TransactionContextInterface, account string) ([]TransactionLog, error)
func (c *CBDCContract) GetTags(ctx contractapi.TransactionContextInterface, account string) (map[string]string, error)
```
* Currently unrestricted
* Should be secured using role-based visibility or private data (see the `GetBalanceRestricted` sketch earlier)
***
## Transfer Cap for Retail Banks
Retail banks are restricted from performing operations above a defined
threshold:
```go
const RetailTransferCap = 100000
```
The check is implemented by the following helper, which is defined but not yet
invoked by any transaction in this sample (see the `Transfer` sketch earlier):
```go
func (c *CBDCContract) enforceTransactionCap(ctx contractapi.TransactionContextInterface, amount uint64) error
```
***
file: ./content/docs/building-with-settlemint/cli/settlemint/hasura/track.mdx
meta: {
"title": "Track"
}
{
Usage: settlemint hasura track|t Examples:
# Track all tables of the default database $ settlemint hasura track
# Track all tables of a specific database $ settlemint hasura track --database my-database
Track all tables in Hasura
Options: -a, --accept-defaults Accept the default and previously set values -d, --database <database> Database name (default: "default") -h, --help display help for command
# Get platform configuration in JSON format $ settlemint config -o json
# Get platform configuration in YAML format $ settlemint config -o yaml
Get platform configuration
Options: --prod Connect to your production environment -i, --instance <instance> The instance to connect to (defaults to the instance in the .env file) -o, --output <output> The output format (choices: "json", "yaml") -h, --help display help for command
Commands: application-access-token|aat [options] <name> Create a new application access token in the SettleMint platform. application|a [options] <name> Create a new application in the SettleMint platform. blockchain-network|bnw Create a blockchain network in the SettleMint platform blockchain-node|bn Create a blockchain node in the SettleMint platform insights|in Create a new insights integration-tool|it Create a new integration tool load-balancer|lb Create a load balancer in the SettleMint platform middleware|mw Create a middleware service in the SettleMint platform private-key|pk Create a private key in the SettleMint platform storage|st Create a storage service in the SettleMint platform workspace|w [options] <name> Create a new workspace in the SettleMint platform. help [command] display help for command
# Create an application access token and save as default $ settlemint platform create application-access-token my-token --accept-defaults -d
# Create an application access token with custom validity period $ settlemint platform create application-access-token my-token --validity-period ONE_DAY -a -d
Create a new application access token in the SettleMint platform.
Arguments: name The application access token name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default application access token --prod Connect to production environment --app, --application <application> The application unique name to create the application access token for (defaults to application from env) -v, --validity-period <period> The validity period for the token (choices: "DAYS_7", "DAYS_30", "DAYS_60", "DAYS_90", "NONE", default: "DAYS_7") -h, --help display help for command
# Create an application in a workspace $ settlemint platform create application my-app --accept-defaults
# Create an application and save as default $ settlemint platform create application my-app -d
# Create an application in a specific workspace $ settlemint platform create application my-app --workspace my-workspace
Create a new application in the SettleMint platform.
Arguments: name The application name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default application --prod Connect to production environment -w, --workspace <workspace> The workspace unique name to create the application in (defaults to workspace from env) -h, --help display help for command
# Create a Besu blockchain network and save as default $ settlemint platform create blockchain-network besu my-network --node-name validator-1 --accept-defaults -d
# Create a Besu blockchain network in a different application $ settlemint platform create blockchain-network besu my-network --application app-123 --node-name validator-1 --chain-id 12345 --gas-limit 10000000 --seconds-per-block 5
Create a new Besu blockchain network in the SettleMint platform.
Arguments: name The Besu blockchain network name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default blockchain network --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --app, --application <application> The unique name of the application to create the network in (defaults to application from env) --node-name <name> Name of the node --chain-id <chainId> The chain ID for the network --contract-size-limit <limit> Maximum contract size limit --evm-stack-size <size> EVM stack size --gas-limit <limit> Block gas limit --gas-price <price> Gas price in wei --seconds-per-block <seconds> Block time in seconds -h, --help display help for command
# Create a Besu blockchain node and save as default $ settlemint platform create blockchain-node besu my-node --node-type VALIDATOR --accept-defaults -d
# Create a Besu blockchain node in a different network $ settlemint platform create blockchain-node besu my-node --blockchain-network-id 12345 --node-type NON_VALIDATOR --accept-defaults
# Create a Besu blockchain node in a different application $ settlemint platform create blockchain-node besu my-node --application-id 123456789 --node-type NON_VALIDATOR --accept-defaults
Create a new Besu blockchain node in the SettleMint platform.
Arguments: name The Besu blockchain node name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default blockchain node --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --app, --application <application> The application unique name to create the node in (defaults to application from env) --blockchain-network <blockchainNetwork> Blockchain network unique name to add this node to --node-identity <nodeIdentity> EC DSA P256 private key to use as the node identity --node-type <nodeType> Type of the node (choices: "VALIDATOR", "NON_VALIDATOR") -h, --help display help for command
# Create a Blockscout insights service and save as default $ settlemint platform create insights blockscout my-blockscout --accept-defaults -d
# Create a Blockscout insights service in a different application $ settlemint platform create insights blockscout my-blockscout --application app-123
# Create a Blockscout insights service and connect to a specific load balancer $ settlemint platform create insights blockscout my-blockscout --load-balancer my-load-balancer
# Create a Blockscout insights service and connect to a specific blockchain node $ settlemint platform create insights blockscout my-blockscout --blockchain-node my-blockchain-node
Create a new Blockscout insights in the SettleMint platform.
Arguments: name The Blockscout insights name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default insights --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --application <application> Application unique name --load-balancer <loadBalancer> Load Balancer unique name (mutually exclusive with blockchain-node) --blockchain-node <blockchainNode> Blockchain Node unique name (mutually exclusive with load-balancer) -h, --help display help for command
# Create a Hasura integration and save as default $ settlemint platform create integration-tool hasura my-hasura --accept-defaults -d
# Create a Hasura integration in a different application $ settlemint platform create integration-tool hasura my-hasura --application app-123
Create a new Hasura integration tool in the SettleMint platform.
Arguments: name The Hasura integration tool name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default integration tool --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --application <application> Application unique name -h, --help display help for command
# Create an EVM load balancer and save as default $ settlemint platform create load-balancer evm my-lb --accept-defaults -d
# Create an EVM load balancer and connect to specific blockchain nodes $ settlemint platform create load-balancer evm my-lb --blockchain-network my-network --accept-defaults
# Create an EVM load balancer in a different application $ settlemint platform create load-balancer evm my-lb --application my-app --accept-defaults
Create a new EVM load balancer in the SettleMint platform.
Arguments: name The EVM load balancer name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default load balancer --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --app, --application <application> The application unique name to create the load balancer in (defaults to application from env) --blockchain-nodes <blockchainNodes...> Blockchain node unique names where the load balancer connects to (must be from the same network) --blockchain-network <blockchainNetwork> Blockchain network unique name where the load balancer connects to, can be skipped if the --blockchain-nodes option is used (defaults to network from env) -h, --help display help for command
Create a middleware service in the SettleMint platform
Options: -h, --help display help for command
Commands: graph|gr [options] <name> Create a new The Graph middleware in the SettleMint platform. smart-contract-portal|scp [options] <name> Create a new Smart Contract Portal middleware in the SettleMint platform. help [command] display help for command
# Create a graph middleware and save as default $ settlemint platform create middleware graph my-graph --accept-defaults -d
# Create a graph middleware in a different application $ settlemint platform create middleware graph my-graph --application my-app --blockchain-node node-123
# Create a graph middleware and connect to a specific load balancer $ settlemint platform create middleware graph my-graph --load-balancer my-load-balancer
# Create a graph middleware and connect to a specific blockchain node $ settlemint platform create middleware graph my-graph --blockchain-node my-blockchain-node
Create a new The Graph middleware in the SettleMint platform.
Arguments: name The The Graph middleware name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default middleware --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --application <application> Application unique name --blockchain-node <blockchainNode> Blockchain Node unique name (mutually exclusive with load-balancer) --load-balancer <loadBalancer> Load Balancer unique name (mutually exclusive with blockchain-node) -h, --help display help for command
# Create a smart contract portal middleware and save as default $ settlemint platform create middleware smart-contract-portal my-portal --accept-defaults -d
# Create a smart contract portal middleware in a different application $ settlemint platform create middleware smart-contract-portal my-portal --application my-app --blockchain-node node-123
# Create a smart contract portal middleware and connect to a specific blockchain node $ settlemint platform create middleware smart-contract-portal my-portal --blockchain-node my-blockchain-node
# Create a smart contract portal middleware and connect to a specific load balancer $ settlemint platform create middleware smart-contract-portal my-portal --load-balancer my-load-balancer
Create a new Smart Contract Portal middleware in the SettleMint platform.
Arguments: name The Smart Contract Portal middleware name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default middleware --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --application <application> Application unique name --load-balancer <loadBalancer> Load Balancer unique name (mutually exclusive with blockchain-node) --blockchain-node <blockchainNode> Blockchain Node unique name (mutually exclusive with load-balancer) --abis <abis...> Path to abi file(s) --include-predeployed-abis <includePredeployedAbis...> Include pre-deployed abis (run `settlemint platform config` to see available pre-deployed abis) -h, --help display help for command
Commands: hd-ecdsa-p256|hd [options] <name> Create a new HD-ECDSA-P256 private key in the SettleMint platform. hsm-ecdsa-p256|hsm [options] <name> Create a new HSM-ECDSA-P256 private key in the SettleMint platform. accessible-ecdsa-p256|acc [options] <name> Create a new ACCESSIBLE-ECDSA-P256 private key in the SettleMint platform. help [command] display help for command
# Create a private key and save as default $ settlemint platform create private-key hd-ecdsa-p256 my-key --accept-defaults -d
# Create a private key in a different application $ settlemint platform create private-key hd-ecdsa-p256 my-key --application my-app
# Create a private key linked to a blockchain node $ settlemint platform create private-key hd-ecdsa-p256 my-key --blockchain-node node-123
Create a new HD-ECDSA-P256 private key in the SettleMint platform.
Arguments: name The HD-ECDSA-P256 private key name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default private key --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --application <application> Application unique name --blockchain-node <blockchainNode> Blockchain Node unique name --trusted-forwarder-address <trustedForwarderAddress> The address of the trusted forwarder contract. Must inherit from OpenZeppelin's ERC2771Forwarder contract --trusted-forwarder-name <trustedForwarderName> The name of the trusted forwarder contract as known to OpenZeppelin's extension (e.g. 'OpenZeppelinERC2771Forwarder'). This exact name is required for the verification process --relayer-key-unique-name <relayerKeyUniqueName> Private key unique name to use for relaying meta-transactions -h, --help display help for command
# Create a private key and save as default $ settlemint platform create private-key hsm-ecdsa-p256 my-key --accept-defaults -d
# Create a private key in a different application $ settlemint platform create private-key hsm-ecdsa-p256 my-key --application 123456789
# Create a private key linked to a blockchain node $ settlemint platform create private-key hsm-ecdsa-p256 my-key --blockchain-node node-123
Create a new HSM-ECDSA-P256 private key in the SettleMint platform.
Arguments: name The HSM-ECDSA-P256 private key name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default private key --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --application <application> Application unique name --blockchain-node <blockchainNode> Blockchain Node unique name -h, --help display help for command
# Create a private key and save as default $ settlemint platform create private-key accessible-ecdsa-p256 my-key --accept-defaults -d
# Create a private key in a different application $ settlemint platform create private-key accessible-ecdsa-p256 my-key --application my-app
# Create a private key linked to a blockchain node $ settlemint platform create private-key accessible-ecdsa-p256 my-key --blockchain-node node-123
Create a new ACCESSIBLE-ECDSA-P256 private key in the SettleMint platform.
Arguments: name The ACCESSIBLE-ECDSA-P256 private key name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default private key --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --application <application> Application unique name --blockchain-node <blockchainNode> Blockchain Node unique name --trusted-forwarder-address <trustedForwarderAddress> The address of the trusted forwarder contract. Must inherit from OpenZeppelin's ERC2771Forwarder contract --trusted-forwarder-name <trustedForwarderName> The name of the trusted forwarder contract as known to OpenZeppelin's extension (e.g. 'OpenZeppelinERC2771Forwarder'). This exact name is required for the verification process --relayer-key-unique-name <relayerKeyUniqueName> Private key unique name to use for relaying meta-transactions -h, --help display help for command
Create a storage service in the SettleMint platform
Options: -h, --help display help for command
Commands: ipfs|ip [options] <name> Create a new IPFS storage in the SettleMint platform. minio|m [options] <name> Create a new MinIO storage in the SettleMint platform. help [command] display help for command
# Create an IPFS storage and save as default $ settlemint platform create storage ipfs my-storage --accept-defaults -d
# Create an IPFS storage in a different application $ settlemint platform create storage ipfs my-storage --application app-123
Create a new IPFS storage in the SettleMint platform.
Arguments: name The IPFS storage name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default storage --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --application <application> Application unique name -h, --help display help for command
# Create a MinIO storage and save as default $ settlemint platform create storage minio my-storage --accept-defaults -d
# Create a MinIO storage in a different application $ settlemint platform create storage minio my-storage --application app-123
Create a new MinIO storage in the SettleMint platform.
Arguments: name The MinIO storage name
Options: -a, --accept-defaults Accept the default values -d, --default Save as default storage --prod Connect to production environment -w, --wait Wait until deployed -r, --restart-if-timeout Restart if wait time is exceeded --provider <provider> Network provider (run `settlemint platform config` to see available providers) --region <region> Deployment region (run `settlemint platform config` to see available regions) --size <size> Network size (choices: "LARGE", "MEDIUM", "SMALL", default: "SMALL") --type <type> Network type (choices: "DEDICATED", "SHARED", default: "SHARED") --application <application> Application unique name -h, --help display help for command
Commands: application|a [options] <unique-name> Delete an application in the SettleMint platform. Provide the application unique name or use 'default' to delete the default application from your .env file. workspace|w [options] <unique-name> Delete a workspace in the SettleMint platform. Provide the workspace unique name or use 'default' to delete the default workspace from your .env file. help [command] display help for command
# Deletes the specified application by unique name $ settlemint platform delete application <application-unique-name>
# Deletes the default application in the production environment $ settlemint platform delete application default --prod
# Force deletes the specified application without confirmation $ settlemint platform delete application <application-unique-name> --force
Delete an application in the SettleMint platform. Provide the application unique name or use 'default' to delete the default application from your .env file.
Arguments: unique-name The unique name of the application, use 'default' to delete the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -f, --force Force delete the application without confirmation -h, --help display help for command
# Deletes the specified workspace by unique name $ settlemint platform delete workspace <workspace-unique-name>
# Deletes the default workspace in the production environment $ settlemint platform delete workspace default --prod
# Force deletes the specified workspace without confirmation $ settlemint platform delete workspace <workspace-unique-name> --force
Delete a workspace in the SettleMint platform. Provide the workspace unique name or use 'default' to delete the default workspace from your .env file.
Arguments: unique-name The unique name of the workspace, use 'default' to delete the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -f, --force Force delete the workspace without confirmation -h, --help display help for command
Commands: applications|a [options] List applications services|s [options] [typeOperands...] List the application services workspaces|w [options] List workspaces help [command] display help for command
}
## Applications
{
Usage: settlemint platform list applications|a Examples:
# List applications $ settlemint platform list applications
# List applications in wide format with more information $ settlemint platform list applications -o wide
# List applications in JSON format $ settlemint platform list applications -o json > applications.json
# List applications in YAML format $ settlemint platform list applications -o yaml > applications.yaml
List applications
Options: -w, --workspace <workspace> The workspace unique name to list applications for (defaults to workspace from env) -o, --output <output> The output format (choices: "wide", "json", "yaml") -h, --help display help for command
}
## Services
{
Usage: settlemint platform list services|s Examples:
# List the application services $ settlemint platform list services
# List the application services in wide format with more information (such as console url) $ settlemint platform list services -o wide
# List the application services in JSON format $ settlemint platform list services -o json > services.json
# List the application services in YAML format $ settlemint platform list services -o yaml > services.yaml
# List the application services for a specific application $ settlemint platform list services --application my-app
# List the application services for a specific application and type $ settlemint platform list services --application my-app --type middleware
# List the application services for multiple types $ settlemint platform list services --type blockchain-network blockchain-node middleware
List the application services
Options: --app, --application <application> The application unique name to list the services in (defaults to application from env) -t, --type <type...> The type(s) of service to list (choices: "blockchain-network", "blockchain-node", "load-balancer", "custom-deployment", "insights", "integration-tool", "middleware", "private-key", "storage") -o, --output <output> The output format (choices: "wide", "json", "yaml") -h, --help display help for command
}
## Workspaces
{
Usage: settlemint platform list workspaces|w Examples:
# List workspaces $ settlemint platform list workspaces
# List workspaces in wide format with more information $ settlemint platform list workspaces -o wide
# List workspaces in JSON format $ settlemint platform list workspaces -o json > workspaces.json
# List workspaces in YAML format $ settlemint platform list workspaces -o yaml > workspaces.yaml
List workspaces
Options: -o, --output <output> The output format (choices: "wide", "json", "yaml") -h, --help display help for command
Commands: blockchain-network|bnw [options] <unique-name> Restart a blockchain network in the SettleMint platform. Provide the blockchain network unique name or use 'default' to restart the default blockchain network from your .env file. blockchain-node|bn [options] <unique-name> Restart a blockchain node in the SettleMint platform. Provide the blockchain node unique name or use 'default' to restart the default blockchain node from your .env file. custom-deployment|cd [options] <unique-name> Restart a custom deployment in the SettleMint platform. Provide the custom deployment unique name or use 'default' to restart the default custom deployment from your .env file. insights|in Restart an insights service in the SettleMint platform integration-tool|it Restart an integration tool service in the SettleMint platform load-balancer|lb [options] <unique-name> Restart a load balancer in the SettleMint platform. Provide the load balancer unique name or use 'default' to restart the default load balancer from your .env file. middleware|mw Restart a middleware service in the SettleMint platform storage|st Restart a storage service in the SettleMint platform help [command] display help for command
# Restarts the specified blockchain network by id $ settlemint platform restart blockchain-network <blockchain network-id>
# Restarts the default blockchain network in the production environment $ settlemint platform restart blockchain-network default --prod
Restart a blockchain network in the SettleMint platform. Provide the blockchain network unique name or use 'default' to restart the default blockchain network from your .env file.
Arguments: unique-name The unique name of the blockchain network, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
# Restarts the specified blockchain node by id $ settlemint platform restart blockchain-node <blockchain node-id>
# Restarts the default blockchain node in the production environment $ settlemint platform restart blockchain-node default --prod
Restart a blockchain node in the SettleMint platform. Provide the blockchain node unique name or use 'default' to restart the default blockchain node from your .env file.
Arguments: unique-name The unique name of the blockchain node, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
# Restarts the specified custom deployment by id $ settlemint platform restart custom-deployment <custom deployment-id>
# Restarts the default custom deployment in the production environment $ settlemint platform restart custom-deployment default --prod
Restart a custom deployment in the SettleMint platform. Provide the custom deployment unique name or use 'default' to restart the default custom deployment from your .env file.
Arguments: unique-name The unique name of the custom deployment, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
Restart an insights service in the SettleMint platform
Options: -h, --help display help for command
Commands: blockscout|bs [options] <unique-name> Restart an insights service in the SettleMint platform. Provide the insights unique name or use 'default' to restart the default insights from your .env file. help [command] display help for command
# Restarts the specified insights by id $ settlemint platform restart insights blockscout <insights-id>
# Restarts the default insights in the production environment $ settlemint platform restart insights blockscout default --prod
Restart an insights service in the SettleMint platform. Provide the insights unique name or use 'default' to restart the default insights from your .env file.
Arguments: unique-name The unique name of the insights, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
Restart an integration tool service in the SettleMint platform
Options: -h, --help display help for command
Commands: hasura|ha [options] <unique-name> Restart an integration tool in the SettleMint platform. Provide the integration tool unique name or use 'default' to restart the default integration tool from your .env file. help [command] display help for command
# Restarts the specified integration tool by id $ settlemint platform restart integration-tool hasura <integration tool-id>
# Restarts the default integration tool in the production environment $ settlemint platform restart integration-tool hasura default --prod
Restart an integration tool in the SettleMint platform. Provide the integration tool unique name or use 'default' to restart the default integration tool from your .env file.
Arguments: unique-name The unique name of the integration tool, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
# Restarts the specified load balancer by id $ settlemint platform restart load-balancer <load balancer-id>
# Restarts the default load balancer in the production environment $ settlemint platform restart load-balancer default --prod
Restart a load balancer in the SettleMint platform. Provide the load balancer unique name or use 'default' to restart the default load balancer from your .env file.
Arguments: unique-name The unique name of the load balancer, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
Restart a middleware service in the SettleMint platform
Options: -h, --help display help for command
Commands: graph|gr [options] <unique-name> Restart a middleware in the SettleMint platform. Provide the middleware unique name or use 'default' to restart the default middleware from your .env file. smart-contract-portal|scp [options] <unique-name> Restart a middleware in the SettleMint platform. Provide the middleware unique name or use 'default' to restart the default middleware from your .env file. help [command] display help for command
# Restarts the specified middleware by id $ settlemint platform restart middleware graph <middleware-id>
# Restarts the default middleware in the production environment $ settlemint platform restart middleware graph default --prod
Restart a middleware in the SettleMint platform. Provide the middleware unique name or use 'default' to restart the default middleware from your .env file.
Arguments: unique-name The unique name of the middleware, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
# Restarts the specified middleware by id $ settlemint platform restart middleware smart-contract-portal <middleware-id>
# Restarts the default middleware in the production environment $ settlemint platform restart middleware smart-contract-portal default --prod
Restart a middleware in the SettleMint platform. Provide the middleware unique name or use 'default' to restart the default middleware from your .env file.
Arguments: unique-name The unique name of the middleware, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
Restart a storage service in the SettleMint platform
Options: -h, --help display help for command
Commands: ipfs|ip [options] <unique-name> Restart a storage in the SettleMint platform. Provide the storage unique name or use 'default' to restart the default storage from your .env file. minio|m [options] <unique-name> Restart a storage in the SettleMint platform. Provide the storage unique name or use 'default' to restart the default storage from your .env file. help [command] display help for command
# Restarts the specified storage by id $ settlemint platform restart storage ipfs <storage-id>
# Restarts the default storage in the production environment $ settlemint platform restart storage ipfs default --prod
Restart a storage in the SettleMint platform. Provide the storage unique name or use 'default' to restart the default storage from your .env file.
Arguments: unique-name The unique name of the storage, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
# Restarts the specified storage by id $ settlemint platform restart storage minio <storage-id>
# Restarts the default storage in the production environment $ settlemint platform restart storage minio default --prod
Restart a storage in the SettleMint platform. Provide the storage unique name or use 'default' to restart the default storage from your .env file.
Arguments: unique-name The unique name of the storage, use 'default' to restart the default one from your .env file
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -w, --wait Wait until restarted -h, --help display help for command
# Update a custom deployment $ settlemint custom-deployment update v1.0.0
# Update a custom deployment with a specific unique name $ settlemint custom-deployment update v1.0.0 --unique-name my-custom-deployment
Update a custom deployment in the SettleMint platform
Arguments: tag The tag to update the custom deployment to
Options: --unique-name <uniqueName> The unique name of the custom deployment to update. If not provided, will use SETTLEMINT_CUSTOM_DEPLOYMENT from env --prod Connect to your production environment --wait Wait for the custom deployment to be redeployed -h, --help display help for command
# Create a new solidity-token-erc20 smart contract set $ settlemint smart-contract-set create --project-name erc20-contracts --use-case solidity-token-erc20
Bootstrap your smart contract set
Options: -n, --project-name <name> The name for your smart contract set project --use-case <useCase> Use case for the smart contract set (run `settlemint platform config` to see available use cases) -i, --instance <instance> The instance to connect to -h, --help display help for command
Foundry commands for building and testing smart contracts
Options: -h, --help display help for command
Commands: build [options] [operands...] Build the smart contracts using Foundry/forge format [options] [operands...] Format the smart contracts using Foundry/forge network [options] [operands...] Start a development network Foundry/anvil test [options] [operands...] Test the smart contracts using Foundry/forge help [command] display help for command
Hardhat commands for building, testing and deploying smart contracts
Options: -h, --help display help for command
Commands: build [options] [operands...] Build the smart contracts using Hardhat deploy Deploy the smart contracts using Hardhat network [options] [operands...] Start a development network using Hardhat script Run a script using Hardhat test [options] [operands...] Test the smart contracts using Hardhat help [command] display help for command
Commands: local [options] Deploy the smart contracts using Hardhat/ignition to the local development network remote [options] Deploy the smart contracts using Hardhat/ignition to the remote network on the platform help [command] display help for command
}
### Local
{
Usage: settlemint smart-contract-set hardhat deploy local Examples:
# Deploy smart contracts to local network using Hardhat/Ignition $ settlemint scs hardhat deploy local
# Deploy a specific Ignition module $ settlemint scs hardhat deploy local --module ignition/modules/custom.ts
# Deploy with a clean deployment state $ settlemint scs hardhat deploy local --reset
# Deploy and verify contracts on Etherscan $ settlemint scs hardhat deploy local --verify
Deploy the smart contracts using Hardhat/ignition to the local development network
Options: -m, --module <ignitionmodule> The module to deploy with Ignition, defaults to "ignition/modules/main.ts" --deployment-id <deploymentId> Set the id of the deployment -r, --reset Wipes the existing deployment state before deploying -v, --verify Verify the deployment on Etherscan -h, --help display help for command
# Deploy smart contracts to remote network using Hardhat/Ignition $ settlemint scs hardhat deploy remote
# Deploy a specific Ignition module to remote network $ settlemint scs hardhat deploy remote --module ignition/modules/custom.ts
# Deploy with a clean deployment state to remote network $ settlemint scs hardhat deploy remote --reset
# Deploy and verify contracts on remote network $ settlemint scs hardhat deploy remote --verify
# Deploy to remote network with specific blockchain node $ settlemint scs hardhat deploy remote --blockchain-node my-node
# Deploy to production environment $ settlemint scs hardhat deploy remote --prod
Deploy the smart contracts using Hardhat/ignition to the remote network on the platform
Options: -m, --module <ignitionmodule> The module to deploy with Ignition, defaults to "ignition/modules/main.ts" --deployment-id <deploymentId> Set the id of the deployment -r, --reset Wipes the existing deployment state before deploying -v, --verify Verify the deployment on Etherscan --default-sender <defaultSender> Set the default sender for the deployment --parameters <parameters> A relative path to a JSON file to use for the module parameters --strategy <strategy> Set the deployment strategy to use (default: "basic") --blockchain-node <blockchainNode> Blockchain Node unique name (optional, defaults to the blockchain node in the environment) --prod Connect to your production environment -a, --accept-defaults Accept the default and previously set values -h, --help display help for command
Commands: remote [options] Run a Hardhat script on a remote network on the platform. local [options] Run a Hardhat script on a local development network. help [command] display help for command
# Run a Hardhat script on a remote network $ settlemint scs hardhat script remote --script scripts/deploy.ts
# Run a Hardhat script on a remote network with a specific blockchain node $ settlemint scs hardhat script remote --script scripts/deploy.ts --blockchain-node my-blockchain-node
# Run a Hardhat script on a remote network without compiling $ settlemint scs hardhat script remote --script scripts/deploy.ts --no-compile
Run a Hardhat script on a remote network on the platform.
Options: -s, --script <script> The script to run with Hardhat, e.g. "scripts/deploy.ts" --blockchain-node <blockchainNode> Blockchain Node unique name (optional, defaults to the blockchain node in the environment) --prod Connect to your production environment -a, --accept-defaults Accept the default and previously set values --no-compile Don't compile before running this task -h, --help display help for command
}
### Local
{
Usage: settlemint smart-contract-set hardhat script local Examples:
# Run a Hardhat script on a local network $ settlemint scs hardhat script local --script scripts/deploy.ts
Run a Hardhat script on a local development network.
Options: -s, --script <script> The script to run with Hardhat, e.g. "scripts/deploy.ts" --no-compile Don't compile before running this task -h, --help display help for command
}
## Test
{
Usage: settlemint smart-contract-set hardhat test Examples:
# Run tests using Hardhat $ settlemint scs hardhat test
# Get list of possible Hardhat test options $ settlemint scs hardhat test --help
# Run tests and stop on the first test that fails $ settlemint scs hardhat test --bail
# Run a specific test file $ settlemint scs hardhat test test/token.test.ts
Test the smart contracts using Hardhat
Options: -h, --help Get list of possible hardhat test options
Commands for managing TheGraph subgraphs for smart contract indexing
Options: -h, --help display help for command
Commands: build Build the subgraph codegen Codegen the subgraph types deploy [options] [subgraph-name] Deploy the subgraph remove [options] [subgraph-name] Remove a subgraph help [command] display help for command
# Deploy the subgraph $ settlemint scs subgraph deploy
# Deploy the subgraph with a specific name $ settlemint scs subgraph deploy my-subgraph
Deploy the subgraph
Arguments: subgraph-name The name of the subgraph to deploy (defaults to value in .env if not provided)
Options: --ipfs <ipfs-url> The IPFS URL to use for the subgraph deployment (defaults to https://ipfs.console.settlemint.com) -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -h, --help display help for command
# Remove a subgraph $ settlemint scs subgraph remove my-subgraph
Remove a subgraph
Arguments: subgraph-name The name of the subgraph to remove (defaults to value in .env if not provided)
Options: -a, --accept-defaults Accept the default and previously set values --prod Connect to your production environment -f, --force Force remove the subgraph without confirmation -h, --help display help for command