Set up graph middleware
Set up read middleware
Summary
To set up a graph middleware in SettleMint, you'll begin by ensuring that your application and blockchain node are ready. The graph middleware will serve as your read layer, enabling powerful querying of on-chain events using a GraphQL interface. This is particularly useful when you want to retrieve and analyze historical smart contract data in a structured, filterable format.
First, you'll need to add the middleware itself. Head to the middleware section inside your application on the SettleMint platform. Click add a middleware, and select graph as the type. Assign a name, pick the blockchain node (where your smart contract is deployed), configure the deployment settings, and confirm. This action will provision the underlying infrastructure required to run your subgraph.
Next, you will create the subgraph package in code studio. The subgraph folder contains all the code and configuration required for indexing and querying your smart contract's events. You will define a subgraph.config.json file that lists the network (via chain ID), your contract address, and the data sources (i.e., smart contracts and associated modules) that the subgraph will index.
Inside the datasources folder, you will create a userdata.yaml manifest file that outlines the smart contract address, ABI path, start block, and event-handler mappings. This YAML file connects emitted events like ProfileCreated, ProfileUpdated, and ProfileDeleted with specific AssemblyScript functions that define how the data is processed and stored.
You will then define the schema in userdata.gql.json. This is your GraphQL schema, which defines the structure of your indexed data. Entities like UserProfile, ProfileCreated, and ProfileUpdated are defined here, each with the fields to be stored and queried later via GraphQL.
Once the schema is ready, you will implement the mapping logic in userdata.ts, which listens for emitted events and updates the subgraph's entities accordingly. A helper file inside the fetch directory will provide utility logic to create or retrieve entities without code repetition.
After writing all files, you will run the codegen, build, and deploy scripts using the provided task buttons in code studio. These scripts will compile your schema and mapping into WebAssembly (WASM), bundle it for deployment, and push it to the graph middleware node.
Once deployed, you will be able to open the graph middleware's GraphQL explorer and run queries against your indexed data. You can query by ID or use the plural form to get a list of entries. This enables your application or analytics layer to fetch historical state data in a fast and reliable way.
How to set up graph middleware and API portal in the SettleMint platform
Middleware acts as a bridge between your blockchain network and applications, providing essential services like data indexing, API access, and event monitoring. Before adding middleware, ensure you have an application and blockchain node in place.
How to add middleware
First, ensure you're authenticated:
Create a middleware:
Get your access token from the platform UI under user settings → API tokens.
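The steps above can be sketched on the command line. The exact command names below are an assumption based on the SettleMint SDK CLI; consult the CLI reference for the current syntax, and note that the middleware name and node name are placeholders:

```shell
# Authenticate against the platform using the access token
# from user settings → API tokens (command name is an assumption).
settlemint login

# Provision a graph middleware attached to an existing blockchain node.
# "my-graph" and "my-node" are illustrative placeholders.
settlemint platform create middleware graph my-graph --blockchain-node my-node
```

The same result can be achieved through the platform UI, as described above, by clicking add a middleware inside your application.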
Manage middleware
Subgraph folder structure in the code studio IDE
Subgraph deployment process
1. Collect the required constants
Find the chain ID of the network from the ignition > deployments folder name (chain-ID), or from the platform UI under blockchain networks > selected network > details page; it will be something like 47440.
Locate the deployed contract address, which is stored in the deployed_addresses.json file inside the ignition > deployments folder.
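Hardhat Ignition writes deployed_addresses.json as a flat map from module#contract keys to addresses. A minimal sketch, assuming a hypothetical UserDataModule deploying a UserData contract:

```json
{
  "UserDataModule#UserData": "0x1234567890abcdef1234567890abcdef12345678"
}
```

Copy the address value for the contract you want to index; it goes into subgraph.config.json in the next step.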
2. Building subgraph.config.json file
This file is the foundational configuration for your subgraph. It defines how and where the subgraph will be generated and which contracts it will be tracking. Think of it as the control panel that the subgraph compiler reads to understand what contracts to index, where to start indexing from (which block), and which folder contains the relevant configurations (e.g., YAML manifest, mappings, schema, etc.).
Each object in the datasources array represents a separate contract. You specify the contract's name, address, the block number at which the indexer should begin listening, and the path to the module folder (which holds the YAML manifest and mapping logic). This file is essential when working with Graph CLI or SDKs for compiling and deploying subgraphs.
When writing this file from scratch, you will need to gather the deployed contract address, decide the indexing start block (can be 0 or a specific block to save resources), and organize contract-related files in a logical module folder.
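A minimal sketch of subgraph.config.json, assuming the chain ID, address, and module folder name from this example project (the exact key names may differ in your code studio template, so compare against the generated file):

```json
{
  "output": "generated/scs.",
  "chain": "47440",
  "datasources": [
    {
      "name": "UserData",
      "address": "0x1234567890abcdef1234567890abcdef12345678",
      "startBlock": 0,
      "module": ["userdata"]
    }
  ]
}
```

Setting startBlock to the deployment block instead of 0 saves indexing time on networks with a long history.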
3. Create userdata.yaml file
This is the YAML manifest file that tells the subgraph how to interact with a specific smart contract on-chain. It defines the contract's ABI, address, the events to listen to, and the mapping logic that should be triggered for each event.
The structure must follow strict YAML formatting; wrong indentation or a missing property can break the subgraph. Under the source section, you provide the contract's address, the ABI name, and the block from which indexing should begin.
The mapping section details how the subgraph handles events. It specifies the API version, programming language (AssemblyScript), the entities it will touch, and the path to the mapping file. Each eventHandler entry pairs an event signature (from the contract) with a function that will process it. When writing this from scratch, ensure that all event signatures exactly match those in your contract (parameter order and types must be accurate), and align them with the corresponding handler function names.
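A sketch of the userdata.yaml manifest following the standard subgraph data-source structure. The event parameter types, file paths, and entity names are illustrative assumptions; they must exactly match your contract and project layout:

```yaml
- kind: ethereum/contract
  name: UserData
  network: settlemint
  source:
    address: "0x1234567890abcdef1234567890abcdef12345678"
    abi: UserData
    startBlock: 0
  mapping:
    kind: ethereum/events
    apiVersion: 0.0.7
    language: wasm/assemblyscript
    entities:
      - UserProfile
    abis:
      - name: UserData
        file: ../abis/UserData.json
    eventHandlers:
      # Signatures are assumptions; copy them verbatim from your contract.
      - event: ProfileCreated(indexed uint256,string,string,uint8)
        handler: handleProfileCreated
      - event: ProfileUpdated(indexed uint256,string,string,uint8)
        handler: handleProfileUpdated
      - event: ProfileDeleted(indexed uint256)
        handler: handleProfileDeleted
    file: ../userdata.ts
```

Each eventHandler pairs one on-chain event signature with one exported AssemblyScript function.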
4. Create userdata.gql.json file
This JSON file defines the GraphQL schema that powers your subgraph's data structure. It outlines the shape of your data, which entities will be stored in the Graph Node's underlying database, and the fields each entity will expose to users via GraphQL queries.
Every event-based entity (like ProfileCreated, ProfileUpdated, ProfileDeleted) is linked to the main entity (here, UserProfile) to maintain a historical audit trail. Each entity must have an id field of type ID!, which serves as the primary key.
You then define all other fields with their data types and nullability. When writing this schema, think in terms of how data will be queried: What information will consumers of the subgraph want to retrieve? The names and types must exactly reflect the logic in your mapping files. For reuse across projects, just align this schema with the domain model of your contract.
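The schema in userdata.gql.json corresponds to the following GraphQL shape (field names here are illustrative assumptions; the platform stores the schema in its JSON form):

```graphql
type UserProfile @entity {
  id: ID!
  userId: BigInt!
  name: String!
  email: String!
  age: Int!
}

type ProfileCreated @entity {
  id: ID!
  userProfile: UserProfile!
}

type ProfileUpdated @entity {
  id: ID!
  userProfile: UserProfile!
}
```

Linking each event entity back to UserProfile is what preserves the historical audit trail described above.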
5. Create userdata.ts file
This file contains the event handler functions written in AssemblyScript. It directly responds to the events emitted by your smart contract and updates the subgraph's store accordingly. Each exported function matches an event in the YAML manifest. Inside each function, the handler builds a unique ID for the event (usually combining the transaction hash and log index), processes the event payload, and updates or creates the relevant entity (here, UserProfile).
The logic can include custom processing like formatting values, filtering, or even transforming data types. This file is where your business logic resides, similar to an event-driven backend microservice. You should keep this file modular and focused, avoiding code repetition by calling reusable helper functions like fetchUserProfile. When writing this from scratch, always import the generated event types and schema entities, and handle edge cases like entity non-existence or inconsistent values.
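A minimal handler sketch in AssemblyScript. It depends on @graphprotocol/graph-ts and on the event/entity types generated by codegen; the event fields and the fetchUserProfile helper are assumptions matching the example schema above:

```typescript
// AssemblyScript mapping sketch; import paths follow the generated layout.
import { ProfileCreated as ProfileCreatedEvent } from "../generated/userdata/UserData";
import { ProfileCreated } from "../generated/schema";
import { fetchUserProfile } from "./fetch/userdata";

export function handleProfileCreated(event: ProfileCreatedEvent): void {
  // Unique event ID: transaction hash combined with the log index.
  let id = event.transaction.hash.concatI32(event.logIndex.toI32());

  // Load or create the main entity via the shared helper.
  let profile = fetchUserProfile(event.params.userId.toString());
  profile.name = event.params.name;
  profile.save();

  // Record the event itself for the historical audit trail.
  let created = new ProfileCreated(id);
  created.userProfile = profile.id;
  created.save();
}
```

handleProfileUpdated and handleProfileDeleted follow the same pattern against their respective event types.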
6. Create another userdata.ts in the fetch folder
This is a helper utility designed to avoid redundancy in your mapping file. It abstracts the logic of either loading an existing entity or creating a new one if it doesn't exist.
It enhances reusability and reduces boilerplate in each handler function. The naming convention of this file usually mirrors the module or entity it's associated with (e.g., fetch/userdata.ts).
The logic inside the function uses the userId (or other unique identifier) as a string key and ensures that all required fields have a default value. When writing this from scratch, ensure every field in your GraphQL schema has an initialized value to prevent errors during Graph Node processing.
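A sketch of the fetch helper, assuming the illustrative UserProfile fields from the schema above. Every non-nullable field gets a default so Graph Node never encounters an uninitialized value:

```typescript
// fetch/userdata.ts (AssemblyScript helper sketch; field names are assumptions).
import { BigInt } from "@graphprotocol/graph-ts";
import { UserProfile } from "../../generated/schema";

export function fetchUserProfile(id: string): UserProfile {
  let profile = UserProfile.load(id);
  if (profile == null) {
    // Entity does not exist yet: create it and initialize every schema field.
    profile = new UserProfile(id);
    profile.userId = BigInt.zero();
    profile.name = "";
    profile.email = "";
    profile.age = 0;
  }
  return profile as UserProfile;
}
```

Handlers then call fetchUserProfile instead of repeating load-or-create logic.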
Codegen, build and deploy subgraph
Run the codegen script using the task manager of the IDE.
Run the graph build script using the task manager of the IDE.
Run the graph deploy script using the task manager of the IDE.
Why we see duplication in the GraphQL schema
In The Graph's autogenerated schema, each entity is provided with two types of queries by default:
- Single-entity query: userProfile(id: ID!): UserProfile, which fetches a single UserProfile by its unique ID.
- Multi-entity query: userProfiles(...): [UserProfile], which fetches a list of UserProfile entities, with optional filters to refine the results.
Why this duplication exists:
- Flexibility in data access: By offering both single-entity and multi-entity queries, The Graph allows you to choose the most efficient way to access your data. If you know the exact ID, you can use the single query for a quick lookup; if you need to display or analyze a collection of records, the multi-entity query is available.
- Optimized performance: Retrieving a specific record via the single-entity query avoids the overhead of filtering through a list, ensuring more efficient data access when the unique identifier is known.
- Catering to different use cases: Different parts of your application may require different query types. Detailed views might need a single record (using userProfile), while list views benefit from the filtering and pagination offered by userProfiles.
- Consistency across the schema: Generating both queries for every entity ensures a consistent API design, which simplifies development by providing a predictable pattern for data access regardless of the entity.
Graph middleware - querying data
We can query based on the ID
Or we can query to return all entries
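Both query styles can be tried directly in the middleware's GraphQL explorer. Entity and field names below are the illustrative ones from this example:

```graphql
# Single-entity lookup by ID.
query GetOne {
  userProfile(id: "1") {
    id
    name
    email
  }
}

# Plural form: list entries, with optional pagination and ordering.
query GetAll {
  userProfiles(first: 10, orderBy: id) {
    id
    name
  }
}
```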
Congratulations!
You have successfully configured the graph middleware and deployed a subgraph to enable smart contract indexing. With this, you have both read and write middleware for your smart contracts.
This marks the end of the core Web3 development. From here, we will proceed to adding off-chain database and storage options, giving us a holistic backend and storage layer for our application.