Platform Components/Database and Storage

IPFS storage

Guide to using IPFS storage solutions in SettleMint

InterPlanetary File System (IPFS) is a decentralized and distributed file storage system designed to enhance the way data is stored and accessed on the web. Unlike traditional cloud storage solutions that rely on centralized servers, IPFS stores content across a peer-to-peer (P2P) network, making it efficient, secure, and resilient against failures and censorship.

Within SettleMint, IPFS is available as a storage option, allowing developers to store, retrieve, and manage files seamlessly for blockchain applications. It enables off-chain data storage, ensuring that blockchain networks remain efficient while still being able to reference large datasets securely. To learn more, see the official IPFS documentation.

Why use IPFS?

  • Decentralized & Fault-Tolerant – No single point of failure.
  • Content Addressability – Files are retrieved using unique cryptographic Content Identifiers (CIDs) rather than URLs.
  • Efficient Storage & Bandwidth Optimization – Files are de-duplicated and distributed across nodes.
  • Ideal for Blockchain Applications – Enables off-chain storage while linking data securely to on-chain smart contracts.
  • Scalable & Cost-Effective – No dependency on expensive centralized storage solutions.

SettleMint offers IPFS as a decentralized storage solution, allowing users to store data in a distributed, verifiable, and tamper-proof manner. This is particularly useful for storing large files, metadata, documents, NFTs, and other digital assets that would otherwise be expensive or inefficient to store directly on-chain.

Features of IPFS storage in SettleMint

  • Seamless File Upload & Retrieval – Store files and retrieve them via CIDs.
  • Blockchain Integration – Reference IPFS-stored files within smart contracts.
  • Secure & Immutable Storage – Files stored on IPFS remain tamper-proof.
  • Enhanced Performance – Optimized file access through IPFS gateways.
  • Redundancy & Availability – Files are distributed across the network for increased resilience.

API reference

SettleMint provides multiple IPFS APIs as shown in your dashboard:

| API | Endpoint | Purpose |
| --- | --- | --- |
| IPFS HTTP API | https://your-ipfs-name.gke-region.settlemint.com/api/v0 | Core IPFS node operations (add, cat, get, pin) |
| Gateway | https://your-ipfs-name.gke-region.settlemint.com/gateway/ | Public access to IPFS content via HTTP |
| Cluster API | https://your-ipfs-name.gke-region.settlemint.com/cluster | Manage content across the IPFS cluster |
| Pinning API | https://your-ipfs-name.gke-region.settlemint.com/pinning | Control pinning operations using the standard IPFS Pinning API |

Common IPFS HTTP API endpoints

| HTTP Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/v0/add | Uploads a file to IPFS and returns its CID |
| POST | /api/v0/cat?arg=<CID> | Retrieves a file's contents from IPFS using its CID |
| POST | /api/v0/get?arg=<CID> | Downloads a file (with directory structure) |
| POST | /api/v0/pin/add?arg=<CID> | Pins a file to prevent it from being garbage collected |
| POST | /api/v0/pin/rm?arg=<CID> | Unpins a file |
| POST | /api/v0/pin/ls?arg=<CID>&type=all | Lists pinned content |

For complete API reference, see the official IPFS HTTP API documentation.
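
As a sketch, a small helper that wraps Basic Auth (credentials are covered in the next section) and can call any of the endpoints above might look like this:

// Minimal helper for calling the IPFS HTTP API endpoints listed above (sketch)
async function ipfsApi(path, body = null) {
  const username = "your-ipfs-id";   // Cluster API Username from your dashboard
  const password = "your-password";  // Cluster API Password from your dashboard
  const authString = btoa(`${username}:${password}`);

  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com${path}`, {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` },
    body
  });
  if (!response.ok) {
    throw new Error(`IPFS API error ${response.status}: ${await response.text()}`);
  }
  return response;
}

// Example: list all pinned content
// const pins = await (await ipfsApi("/api/v0/pin/ls?type=all")).json();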


Credentials & authentication

Your SettleMint IPFS instance provides the following credentials for authentication:

| Credential | Description | Used With |
| --- | --- | --- |
| Cluster API Username | Identifies your IPFS cluster user (e.g., your-ipfs-id) | Basic Auth for API requests |
| Cluster API Password | Password for authenticating API requests | Basic Auth for API requests |
| Peer ID | Unique identifier for your node in the IPFS network | P2P communication |
| Public Key | Used for cryptographic verification | P2P communication |
| Private Key | For signing operations (keep secure) | P2P operations / advanced functionality |
| Cluster Pinning JWT | JWT token for authenticating to the IPFS Pinning API | Remote pinning services |

These credentials can be found in your SettleMint dashboard under the IPFS storage instance details.

Authentication examples

Basic authentication (for HTTP API and Cluster API)

// Basic authentication with your Cluster API credentials
const username = "your-ipfs-id"; // Your Cluster API Username
const password = "your-password"; // Your Cluster API Password
const authString = btoa(`${username}:${password}`);
 
fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/add", {
  method: "POST",
  headers: {
    "Authorization": `Basic ${authString}`
  },
  body: formData // Your file in FormData format
});

JWT authentication (for Pinning API)

// JWT authentication with your Cluster Pinning JWT Token
const jwtToken = "your-pinning-jwt-token";
 
fetch("https://your-ipfs-name.gke-region.settlemint.com/pinning/pins", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${jwtToken}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    cid: "QmExample...",
    name: "important-file.txt"
  })
});

Usage examples

Here are practical examples of common IPFS operations with SettleMint:

1. Uploading a file

async function uploadToIPFS(file) {
  const formData = new FormData();
  formData.append('file', file);
  
  // Get credentials from your SettleMint dashboard
  const username = "your-ipfs-id"; // Your Cluster API Username
  const password = "your-password"; // Your Cluster API Password
  const authString = btoa(`${username}:${password}`);
  
  const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/add", {
    method: "POST",
    headers: {
      "Authorization": `Basic ${authString}`
    },
    body: formData
  });
  
  const data = await response.json();
  console.log("File uploaded with CID:", data.Hash);
  return data.Hash; // Returns the CID of the uploaded file
}
 
// Example usage with file input
// const file = document.getElementById('fileInput').files[0];
// uploadToIPFS(file).then(cid => console.log("Use this CID in your smart contracts:", cid));

2. Retrieving a file

async function getFileFromIPFS(cid) {
  // Get credentials from your SettleMint dashboard
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);
  
  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/cat?arg=${cid}`, {
    method: "POST",
    headers: {
      "Authorization": `Basic ${authString}`
    }
  });
  
  // For text files
  const content = await response.text();
  console.log("File content:", content);
  return content;
  
  // For binary files (uncomment as needed)
  // const blob = await response.blob();
  // return blob;
}

3. Pinning a file permanently

async function pinFile(cid) {
  // Get credentials from your SettleMint dashboard
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);
  
  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/pin/add?arg=${cid}`, {
    method: "POST",
    headers: {
      "Authorization": `Basic ${authString}`
    }
  });
  
  const data = await response.json();
  console.log("Pinned:", data.Pins);
  return data;
}

4. Using the Gateway for public access

The IPFS gateway allows anyone to access content by CID through a standard web browser without authentication:

https://your-ipfs-name.gke-region.settlemint.com/gateway/ipfs/QmYourCidHere

This is useful for:

  • Sharing files publicly
  • Accessing IPFS content in web applications
  • Integrating IPFS content with smart contracts that need to read the data
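
As an illustration, reading public content through the gateway needs nothing more than a plain HTTP request, no credentials required (the CID below is a placeholder):

// Fetch a public file through the SettleMint IPFS gateway
async function fetchFromGateway(cid) {
  const url = `https://your-ipfs-name.gke-region.settlemint.com/gateway/ipfs/${cid}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Gateway returned ${response.status}`);
  }
  return await response.text(); // use response.blob() instead for images or other binary content
}

// fetchFromGateway("QmYourCidHere").then(content => console.log(content));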

5. Using with remote pinning services

SettleMint's IPFS implements the IPFS Pinning Service API, allowing you to pin content from other IPFS nodes:

async function remotePinFile(cid, name) {
  // Get pinning JWT token from your SettleMint dashboard
  const jwtToken = "your-pinning-jwt-token";
  
  const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/pinning/pins", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${jwtToken}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      cid: cid,
      name: name
    })
  });
  
  const data = await response.json();
  return data;
}
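
The Pinning Service API also lets you check the progress of a pin request. A sketch, assuming the standard response shape defined by the spec (the POST above returns a requestid, and status moves through queued, pinning, pinned, or failed):

// Check the status of a remote pin request (sketch)
async function getPinStatus(requestid) {
  // Get pinning JWT token from your SettleMint dashboard
  const jwtToken = "your-pinning-jwt-token";

  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/pinning/pins/${requestid}`, {
    headers: { "Authorization": `Bearer ${jwtToken}` }
  });

  const pin = await response.json();
  console.log(`Pin request ${requestid} is currently: ${pin.status}`);
  return pin;
}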

InterPlanetary File System (IPFS)

InterPlanetary File System (IPFS) is an open-source, peer-to-peer distributed file system for storing and accessing content on the decentralized web. Unlike traditional HTTP-based systems that locate data by server address (URL), IPFS uses content addressing – identifying data by its content hash – to retrieve files from any node that holds them. By combining ideas from well-established technologies (like distributed hash tables and Git-like Merkle trees), IPFS enables a global, versioned, and content-addressable storage network.

This document provides a technical overview of IPFS, explaining its underlying architecture and practical usage for developers, architects, and technically inclined stakeholders. We will cover IPFS's foundational principles, how files are stored and retrieved via content IDs (CIDs), the core components and protocols that make up IPFS, methods of interacting with the network, common usage patterns (from file sharing to dApp integration), as well as the benefits, limitations, and best practices for deploying IPFS in production environments.

Foundational principles of IPFS

IPFS is built on several key principles that distinguish it from traditional file storage and sharing systems:

Content addressing and CIDs

At the heart of IPFS is content addressing – every piece of data is identified by a content identifier (CID) which is derived from the data itself (via a cryptographic hash). In simpler terms, the address of a file in IPFS is its content, not the location of a server. This means that if two files have exactly the same content, they will have the same CID, and that CID will always refer to that content regardless of where it is stored.

When a file is added to IPFS, it's split into fixed-size blocks (chunks) and each block is hashed; a final hash (CID) is produced for the entire file (often represented as a root node linking all the chunks). The CID is effectively a unique fingerprint of the content, and even a small change in the file will produce a completely different CID.

Content addressing provides strong integrity guarantees: if you fetch data by its CID, you can verify (by re-hashing) that the content matches the CID you requested. This removes the need to trust a particular server – you're trusting the cryptographic hash. CIDs are designed with a flexible format (using multihash and multicodec conventions) that includes metadata about the hashing and encoding, but conceptually a CID is just a hash of content.

In summary, IPFS's content addressing decouples the what (the data) from the where (the location), enabling data to be retrieved from any peer in the network that has it.
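
A quick way to see this behaviour against your own node is to hash content without storing it. The sketch below relies on the standard only-hash=true option of the add endpoint and the same placeholder credentials used throughout this guide:

// Compute the CID of some content without storing it (sketch)
async function cidOf(text) {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const formData = new FormData();
  formData.append("file", new Blob([text]));

  const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/add?only-hash=true", {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` },
    body: formData
  });
  return (await response.json()).Hash;
}

// Identical content always yields the identical CID; one changed byte yields a different one.
// cidOf("hello ipfs").then(console.log);
// cidOf("hello ipfs").then(console.log);  // same CID as the first call
// cidOf("hello ipfs!").then(console.log); // completely different CID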

Merkle DAG (content linking and immutability)

IPFS represents data as a Merkle Directed Acyclic Graph (Merkle DAG) – a structure in which each node (file block or object) is linked via hashes to other nodes. Every file added to IPFS is stored as a Merkle DAG: if the file is small enough it might be a single block (node) whose CID is the hash of the file, but larger files are broken into many hashed blocks linked together (Lesson: Turn a File into a Tree of Hashes | IPFS Primer).

The top-level node (often called a root) contains links (hash pointers) to its constituent blocks, which may themselves link to sub-blocks, forming a tree of hashes. Because each link is a hash, the entire structure is self-authenticating: the root CID effectively seals the content of the whole file tree.

The use of Merkle DAGs means content is immutable – once data is added, that exact data will always correspond to the same CID. If you update a file, the modified file will produce a new CID, while the old version remains addressable by the old hash (this enables versioning, as discussed later). Merkle DAGs also enable deduplication: if two files share common chunks, those chunks (being identical content) have the same CID and can be stored only once and referenced in multiple graphs, saving space.

IPFS's data model, called IPLD (InterPlanetary Linked Data), generalizes this Merkle DAG structure so that many data types (files, directories, Git trees, blockchains, etc.) can be represented and linked by their hashes. On top of IPLD, IPFS uses UnixFS as a higher-level schema for files and directories, allowing hierarchical file systems to be built using Merkle DAG nodes (for example, directories are nodes that list hashes of their children).

The Merkle DAG approach gives IPFS its properties of content integrity and naturally supports a versioned file system (much like how Git commits form a DAG of versions).
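
To make the DAG structure tangible, you can ask the node which blocks a given CID links to. A sketch using the standard refs endpoint, which streams one JSON object per linked block (a single-block file returns an empty list):

// List the child blocks that a root CID links to (sketch)
async function listLinkedBlocks(cid) {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/refs?arg=${cid}`, {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` }
  });

  // The endpoint returns newline-delimited JSON, one entry per linked block
  const text = await response.text();
  const refs = text.trim().split("\n").filter(Boolean).map(line => JSON.parse(line).Ref);
  console.log(`${cid} links to ${refs.length} blocks:`, refs);
  return refs;
}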

Peer-to-peer networking and decentralization

IPFS operates over a distributed peer-to-peer (P2P) network of nodes rather than client-server architecture. Any computer running an IPFS node can participate in the network, storing data and fulfilling requests from others. There are no central servers holding the authoritative copy of content; instead, content is shared and cached by many peers.

When you request a CID on IPFS, the system doesn't query a single location – it asks the network which peers have the content and retrieves it from whichever peer (or peers) can serve it fastest. This P2P design makes IPFS inherently decentralized and resilient: even if some nodes leave or go offline, data can still be retrieved from other nodes that have it, with no single point of failure.

Peers discover and communicate with each other using a networking library called libp2p, which handles peer addressing, secure transport, and multiplexing for the IPFS network. Each IPFS node has a unique Peer ID (derived from a cryptographic key) which is used to identify it in the network, and nodes connect to each other via swarm addresses (multiaddresses) over various transport protocols (TCP, UDP, QUIC, etc.).

This peer network forms the substrate over which IPFS content is distributed. A new node joining IPFS initially connects to a set of bootstrap peers and then learns about other peers progressively. In essence, IPFS creates a global swarm of peers that collectively store and serve content, akin to a sophisticated BitTorrent-like swarm but for a unified filesystem.

IPFS architecture and core components

Under the hood, IPFS is composed of several interconnected subsystems that work together to enable content-addressed storage and retrieval. The core components of IPFS include the node software (sometimes called a Kubo node, formerly go-ipfs), the distributed hash table for peer/content discovery, the Bitswap exchange protocol, and the IPLD data model layer. Let's explore these components in more detail:

IPFS nodes and repositories

An IPFS node is a program (and by extension, the machine running it) that participates in the IPFS network. Each node stores data in a local repository which contains the content blocks the node has pinned or cached, as well as indexing information. Nodes can be run by anyone – from personal laptops to servers – and all nodes collectively form the IPFS network.

Each node is identified by a Peer ID, and can have multiple network addresses through which it connects to others. IPFS nodes communicate over libp2p, meaning they use a modular networking stack that can run over various transports and apply encryption and NAT traversal as needed. When running, a node continuously maintains connections to a set of peers.

Nodes do not automatically replicate all data; instead, a node stores only the content it intentionally adds or "pins", plus any other content it has fetched (which it may cache temporarily). By default, IPFS treats stored data like a cache – it may be garbage-collected if not pinned (more on pinning in a later section). This design ensures that participating in IPFS doesn't mean storing the entire network's data, only what each node finds relevant.

The node exposes a few interfaces for users/applications: a command-line interface, a RESTful HTTP API, and optionally a gateway interface for browser access. In essence, an IPFS node is a self-contained peer that can store content (in a local Merkle block store), connect to other peers, advertise the content it holds, and fetch content from others upon request.
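
For example, the id endpoint reports exactly this identity information. A sketch (same placeholder credentials as the rest of this guide):

// Inspect your node's Peer ID and swarm addresses (sketch)
async function getNodeIdentity() {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/id", {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` }
  });

  const info = await response.json();
  console.log("Peer ID:", info.ID);
  console.log("Swarm addresses:", info.Addresses); // multiaddresses the node is reachable on
  return info;
}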

Distributed hash table (DHT) for content routing

To locate which nodes have a given piece of content (CID), IPFS relies on a distributed hash table (DHT) called Kademlia. The IPFS DHT is a decentralized index that maps CIDs to the peer IDs of nodes that can provide that content. When a node adds content to IPFS, it announces (publishes) to the DHT that "Peer X has content with CID Y". Later, when some node wants to retrieve that CID, it performs a DHT lookup to find provider records – essentially the network addresses of peers who have the content.

The DHT is spread across all IPFS nodes (or specifically, those that support the DHT – some light nodes might use delegate servers). It uses Kademlia's XOR-based routing: each peer is responsible for a portion of the hash space and knows how to route queries closer to the target CID's key. In practical terms, an IPFS node searching for content will query the DHT by hashing the CID and finding the closest peers in the key space, who either know the provider or can refer the query further along.

The public IPFS DHT (sometimes called the Amino DHT) is a global, open network of thousands of peers. It is designed to handle churn (peers joining/leaving) gracefully and to find providers within a short time (most lookups complete in well under 2 seconds on the public network). The DHT makes IPFS content routing decentralized – there is no central index server; the "who has what" information is distributed among all peers.

In addition to the DHT, IPFS can also use mDNS (multicast DNS) for discovering peers on a local network (useful for LAN or offline scenarios), and can fall back to delegated routing (asking a trusted server to perform DHT queries on behalf of lightweight nodes) in constrained environments. But the primary mechanism is the Kademlia DHT which allows any node to ask the network for providers of a given CID.
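
You can watch content routing at work by asking your node to find providers for a CID. A sketch, with the caveat that the endpoint is /api/v0/dht/findprovs on most Kubo versions and /api/v0/routing/findprovs on newer ones; it streams newline-delimited JSON describing routing events and provider peers:

// Ask the DHT which peers can provide a given CID (sketch)
async function findProviders(cid) {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/dht/findprovs?arg=${cid}`, {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` }
  });

  const text = await response.text();
  console.log("Provider lookup output:", text); // one JSON object per line
  return text;
}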

Bitswap: the block exchange protocol

Once you know which peers have the content you want, the next step is to retrieve the data. IPFS uses a protocol called Bitswap to coordinate the transfer of content blocks between peers. When an IPFS node needs blocks (identified by CIDs), it sends out Bitswap wantlist messages to peers it's connected to, asking for those CIDs. Peers that have the requested blocks will respond by sending them back.

A key feature of Bitswap is that it's not restricted to a single file "swarm" – an IPFS node might be simultaneously exchanging blocks for many different files with many peers. Bitswap also allows parallel downloads: if multiple peers have a block, a node can fetch different blocks from different peers, increasing throughput for large files. Essentially, Bitswap enables a node to assemble content by grabbing pieces from any peers that can provide them, which can dramatically speed up retrieval for popular content (swarming).

Peers running Bitswap maintain a ledger to incentivize fairness (they track data exchanged and generally prefer to send to peers who reciprocate, to avoid freeloaders). Interestingly, Bitswap can also discover content in the process of transfer – if you connect to some peers and request a block, even if they don't have it, they might forward the request or later receive the block and then send it, functioning as a dynamic supply network. This means Bitswap acts as both a data transfer protocol and a limited content discovery mechanism (for example, a node might learn about a provider when that provider responds to a third party's request in a swarm).

Overall, Bitswap is the engine that moves blocks around in IPFS: it's how data actually gets from point A to point B (or C, D, etc.), once point B knows that point A (and others) have what it needs.
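
Kubo exposes Bitswap counters that make this exchange visible. A sketch reading them via the standard bitswap/stat endpoint:

// Read Bitswap statistics from your node (sketch)
async function bitswapStats() {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/bitswap/stat", {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` }
  });

  const stats = await response.json();
  console.log("Blocks received:", stats.BlocksReceived);
  console.log("Blocks sent:", stats.BlocksSent);
  console.log("Current wantlist:", stats.Wantlist); // CIDs this node is still asking peers for
  return stats;
}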

IPLD and data formats

IPLD (InterPlanetary Linked Data) is the data model layer of IPFS that defines how structured data is represented as content-addressable objects. All content in IPFS – files, directories, and other complex data – is expressed in terms of IPLD nodes and links. An IPLD node can be thought of as a small data object (e.g., a file chunk or a directory listing) with a content-addressable identifier (CID). Links between IPLD nodes are just CIDs pointing to other nodes, which, thanks to content addressing, also serve as cryptographic pointers. The Merkle DAG we discussed earlier is essentially an IPLD instance.

IPLD is designed to be flexible: it supports multiple codecs and formats (through the multicodec mechanism) so that it can interoperate with data from other systems. For example, Git commits and Ethereum blocks can both be represented as IPLD nodes – IPFS doesn't just handle "files" but any data that can be content-addressed. In the context of typical file storage, the main IPLD format is UnixFS, which defines how file data and metadata (like filenames, sizes, directory structure) are represented in the DAG. When you add a file via ipfs add, it is chunked and encoded into an IPLD UnixFS DAG automatically.

IPLD gives IPFS advantages like easy upgradability and interoperability: new data structures or hash functions can be introduced without breaking the system, because CIDs are self-describing and IPLD provides a common framework. IPLD is the component that ensures IPFS isn't limited to a single file format or data type – it's a universal graph layer where any content-addressed data structure can be modeled and linked.
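
To see the IPLD view of any object, the dag/get endpoint returns the node, links included, as DAG-JSON. A sketch (credentials as above; for a chunked UnixFS file the result shows a Data field plus an array of Links):

// Fetch the raw IPLD node behind a CID (sketch)
async function getDagNode(cid) {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const response = await fetch(`https://your-ipfs-name.gke-region.settlemint.com/api/v0/dag/get?arg=${cid}`, {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` }
  });

  const node = await response.json();
  console.log("IPLD node:", JSON.stringify(node, null, 2));
  return node;
}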

Storing and retrieving files on IPFS

One of the core functions of IPFS is, of course, adding files and getting them back. Let's walk through how files are stored and retrieved in IPFS using content addressing and the components above:

Adding (storing) a file

Suppose you want to store a file on IPFS. Using the IPFS CLI or API, you run ipfs add <filename>. The IPFS node first splits the file into blocks (default ~256 KB each) and generates a cryptographic hash for each block. If the file is small enough to fit in one block, that block's hash is the file's CID. If the file is larger, IPFS builds a Merkle DAG: a root IPFS object (a kind of meta-block) that contains the hashes (CIDs) of all the file's chunks as links. This root object gets its own hash, which becomes the CID representing the entire file.

IPFS then stores all these blocks in the local repository. As a final step, the node announces to the network that it has this content. It does so by publishing a provider record in the DHT for the file's root CID (and possibly for each block CID) – essentially telling the DHT, "Peer X can provide CID Y". Once added, the content is now available to any other IPFS peer that requests it by that CID.

Notably, adding a file to IPFS does not mean the whole world immediately gets a copy – it means the file is now available on the network through your node. Other peers can retrieve it if they know the CID or discover it. Also, IPFS ensures identical content isn't duplicated: if you add a file that contains some blocks already present on your node (or if you add the exact same file twice), it will reuse the existing blocks rather than store duplicates, thanks to content-addressing (this deduplication can work across files and even across users in the network in cases where the same chunks are shared).
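
The add endpoint accepts a few useful options. The sketch below uploads with CIDv1 and explicit pinning (cid-version and pin are standard Kubo parameters); re-adding identical bytes simply returns the same CID again, which is the deduplication described above:

// Upload a file with explicit add options (sketch)
async function addWithOptions(file) {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);

  const formData = new FormData();
  formData.append("file", file);

  const response = await fetch("https://your-ipfs-name.gke-region.settlemint.com/api/v0/add?cid-version=1&pin=true", {
    method: "POST",
    headers: { "Authorization": `Basic ${authString}` },
    body: formData
  });

  const { Hash, Size } = await response.json();
  console.log(`Stored ${Size} bytes as ${Hash}`); // adding the same bytes again returns this same CID
  return Hash;
}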

Retrieving a file

Now, how does someone fetch that file using IPFS? Given the CID of the content, an IPFS node will perform a lookup to find who has the data, then fetch the data block by block. In practice, the process works in two phases: content routing and content transfer.

First, the node uses the DHT (or other discovery methods) to ask, "Who has CID X?" It sends queries through the DHT network until it finds one or more provider records for that CID (e.g., it learns that Peer X at such-and-such address can serve it). With provider info in hand, the node opens connections (via libp2p) to one or more of those peers.

Next comes the Bitswap phase: the node sends a wantlist for the CID (if it's a multi-block file, it will request the root block first, then proceed to request the linked blocks). The peer(s) holding the data respond by sending the blocks over. If multiple peers have the content, the downloading node can get different chunks from different peers in parallel, potentially speeding up the transfer. As blocks arrive, IPFS verifies each block's hash against the expected CID, ensuring data integrity.

When all the pieces are retrieved, IPFS assembles them (following the DAG links) to reconstruct the original file. The user who requested the file can now read the content (e.g., the ipfs cat <CID> command will output the file's data once fetched). Importantly, the act of retrieving also caches the data on the downloader's node: now that node can serve the file to others as well, at least temporarily.

By design, IPFS retrieval is location-agnostic – it doesn't matter where the content comes from, as long as the content hash matches. You could get one chunk from a server across the ocean and another from a peer on your local network; the resulting file is verified and identical to the original. This distributed retrieval provides robustness and potentially better performance through locality (if a nearby node has the data) and redundancy. And since content is addressed by hash, IPFS ensures you never get the wrong file – if someone tries to send bogus data, the hash won't match and it will be rejected.

Data persistence and pinning

By design, IPFS is agnostic about persistence: it doesn't automatically make content permanent or highly available; it simply provides the mechanism to distribute and retrieve it. Persistence in IPFS is the responsibility of nodes that care about the data. When you add a file to IPFS, your local node now has a copy and will serve it to others – as long as you keep that node running and don't remove the data. Other nodes that download that file may cache it, but caches are not guaranteed to stay forever (nodes have limited storage and will eventually clean up data that isn't explicitly marked to keep).

The act of marking data as "do not delete" on an IPFS node is called pinning. Pinning a CID on a node tells that node to store the data indefinitely, exempt from garbage collection. For example, after adding content, you'd typically pin it on at least one node (by default, the node that added the content pins it automatically). If content is not pinned, an IPFS node treats it as cache – it might be dropped if space is needed or if the node operator runs ipfs repo gc (garbage collection). Therefore, to achieve persistence, someone must pin the data on a persistent IPFS node.

In a decentralized context, this could be the original uploader or any number of volunteers or service providers who decide the content is worth keeping. IPFS itself doesn't replicate content across the network without instruction; you either rely on others to fetch and thereby temporarily host it, or you use additional services to distribute copies. Many decentralized storage setups use multiple IPFS nodes (or pinning services) to ensure there are always several online copies of important data. If no node pins a piece of content and no peer still holds a cached copy, that content effectively becomes unavailable until a node that has the original data comes back online or adds it again.
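
In practice this means periodically verifying that your important CIDs are still pinned. A sketch that checks a specific CID and, only if you explicitly ask, reclaims space with garbage collection (pin/ls and repo/gc are standard Kubo endpoints; gc removes every unpinned block, so use it deliberately):

// Check whether a CID is pinned, optionally triggering garbage collection (sketch)
async function checkPin(cid, runGc = false) {
  const username = "your-ipfs-id";
  const password = "your-password";
  const authString = btoa(`${username}:${password}`);
  const base = "https://your-ipfs-name.gke-region.settlemint.com/api/v0";
  const headers = { "Authorization": `Basic ${authString}` };

  // pin/ls with an explicit CID responds with an error if that CID is not pinned
  const pinRes = await fetch(`${base}/pin/ls?arg=${cid}`, { method: "POST", headers });
  console.log(cid, pinRes.ok ? "is pinned" : "is NOT pinned");

  if (runGc) {
    await fetch(`${base}/repo/gc`, { method: "POST", headers }); // drops all unpinned blocks
    console.log("Garbage collection triggered");
  }
  return pinRes.ok;
}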

Use cases for IPFS in blockchain applications

1. Smart contract data storage

  • Instead of storing large data on-chain, store it off-chain on IPFS and reference the CID in smart contracts (see the sketch after this list).
  • Example: Legal documents, digital agreements, audit records.

2. NFT metadata & digital assets

  • Store metadata, images, and media for NFTs in a decentralized and tamper-proof manner.
  • Example: NFT artwork, game assets, token metadata.

3. Decentralized identity & credentials

  • Store and verify identity documents, certificates, and credentials securely on IPFS.
  • Example: Verifiable credentials in education, healthcare, and finance.

4. Immutable data storage for regulatory compliance

  • Ensure auditable and tamper-proof records for compliance-heavy industries.
  • Example: Financial records, compliance reports, supply chain tracking.
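
As referenced in use case 1 above, here is a minimal sketch of anchoring an IPFS CID on-chain. It assumes a hypothetical registry contract exposing a setDocumentCID(string) function, placeholder RPC URL, private key, and contract address, plus ethers v6 and the uploadToIPFS helper from the usage examples earlier in this guide:

import { ethers } from "ethers";

// Store a document off-chain on IPFS and record only its CID on-chain (sketch)
async function storeDocumentReference(file) {
  // 1. Upload the document to IPFS (helper defined in the usage examples above)
  const cid = await uploadToIPFS(file);

  // 2. Anchor the CID in a smart contract (all values below are placeholders)
  const provider = new ethers.JsonRpcProvider("https://your-evm-node.settlemint.com");
  const signer = new ethers.Wallet("0xYOUR_PRIVATE_KEY", provider);

  const abi = ["function setDocumentCID(string cid)"]; // hypothetical contract interface
  const registry = new ethers.Contract("0xYourContractAddress", abi, signer);

  const tx = await registry.setDocumentCID(cid);
  await tx.wait();
  console.log(`CID ${cid} anchored on-chain in transaction ${tx.hash}`);
  return cid;
}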

Alternative ways to interact with IPFS

File Manager Interface

SettleMint provides a convenient web-based File Manager for your IPFS storage, allowing you to manage files without writing any code.

Key features of the file manager

  • Upload Files – Drag-and-drop or select files to upload to your IPFS node
  • Browse Files – View all files stored on your IPFS node
  • Search – Find files using CID or filenames
  • Pin Management – Pin/unpin files to control persistence
  • File Details – View metadata including size, CID, and pin status
  • Copy CIDs – Easily copy content identifiers for use in applications

To access the File Manager:

  1. Navigate to your IPFS instance in the SettleMint dashboard
  2. Click the File Manager tab
  3. Use the Import button to add new files

The File Manager provides an easy-to-use interface for basic IPFS operations without requiring command-line tools or programming knowledge, making it ideal for testing and management tasks.

Best practices & considerations

  • Pinning Important Files – IPFS operates on a caching mechanism, meaning files may be garbage-collected if not pinned.
  • CID Management – Since CIDs are content-based hashes, file updates result in new CIDs. Maintain proper reference tracking.
  • Security & Privacy – IPFS is public by default. For sensitive data, consider encrypting files before storing them (see the sketch below).
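
For the last point, a minimal sketch of client-side encryption before upload, using the browser's Web Crypto API with AES-GCM (key and IV handling are up to your application, and the upload helper is the one from the usage examples; losing the key means losing the data):

// Encrypt a file locally, then upload only the ciphertext to IPFS (sketch)
async function encryptAndUpload(file) {
  // Generate a symmetric key and a random IV - persist both securely
  const key = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));

  const plaintext = await file.arrayBuffer();
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);

  // The resulting CID refers to encrypted bytes only
  const cid = await uploadToIPFS(new Blob([ciphertext])); // helper from the usage examples above
  return { cid, key, iv };
}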

Troubleshooting

  • File Not Found? – Ensure the file is pinned and accessible through an active IPFS node.
  • Slow Retrieval? – Use SettleMint's dedicated IPFS gateway or public IPFS gateways for faster access.
  • Storage Limitations? – Consider using external pinning services to maintain long-term file availability.

For further assistance, refer to SettleMint's documentation or the official IPFS documentation.



IPFS provides a scalable, decentralized, and efficient storage solution for blockchain applications. Within SettleMint, IPFS can be easily used as a storage option, allowing users to store, retrieve, and reference files with minimal setup. By integrating IPFS into blockchain workflows, developers can ensure secure, tamper-proof, and cost-efficient off-chain storage, while keeping essential references on-chain.
