Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.
The Global Indexer is a pivotal tool within the Guardian ecosystem on the Hedera Network, designed to optimize data search, retrieval, and management. It offers advanced search capabilities across all Guardian data, including policies and documents, while improving data storage and indexing for efficient analytical queries and duplicate checks. With access to a comprehensive dataset from all Guardian instances, the Indexer ensures thorough data retrieval. Its user-friendly interface simplifies navigation, and its integration with Hedera and IPFS enhances the handling of large datasets and complex queries, making it an essential component for efficient data management.
Before we begin, let's figure out what type of user you are:
Welcome to the Managed Guardian Service (MGS)! We give applications a way to mint emissions and carbon offset tokens without worrying about the complexities of managing the technology infrastructure.
Note: We are currently in the Beta phase. Documentation and usage are subject to change.
Overview
With regard to ecological markets, business leaders will find themselves in these four phases:
Creating Verified Supply
Establishing Demand
Buying & Selling
Offsetting
Many examples apply here, such as Greenhouse Gas Emission Profiles, Renewable Energy Credits, and Carbon Offsets. While emission allowances are subject to government regulation, a Carbon Offset, for example, is an intangible asset created through a project or program whose activity can be claimed to reduce or remove carbon; that claim is independently verified and turned into a carbon offset. These offsets are minted, or issued, by an environmental registry that created the standard methodology or protocol used to produce the verified carbon offset claim. The offset then represents the original owner's property-right claim to the carbon-related benefits. The asset owner(s) can then sell their credits directly to buyers, or at wholesale. The ultimate end user has the right to claim the benefits and can retire the offset permanently, usually as part of a netting process in which the claimed CO2 benefits are subtracted from that end user's other Greenhouse Gas (GHG) emissions. For example, a company with 1,000 tCO2e of reported emissions that retires 100 tCO2e of offsets would report 900 tCO2e net.
The process of creating renewable energy or carbon offset claims that can be validated, verified, and turned into a product is known as measurement, reporting, and verification (MRV). Today, collecting the supporting data for these carbon offsets is heavily manual and prone to errors. The main factors driving these errors are:
Poor data quality
Lack of assurance
Potential double counting
Greenwashing
Overall lack of trust.
This is where the Guardian solution, which leverages a Policy Workflow Engine (PWE), offers a sensible approach to remedying the issues with current processes. The dynamic PWE can mirror the standards and business requirements of regulatory bodies. In particular, the Guardian solution offers carbon markets the ability to operate in a fully auditable ecosystem by including:
W3C Decentralized Identifiers (DIDs): Decentralized Identifiers (DIDs) are a new type of globally unique identifier that are designed to enable individuals and organizations to generate their own identifiers using systems they trust.
W3C Verifiable Credentials (VCs): A digital document that consists of information related to identifying a subject, information related to the issuing authority, information related to the type of credential, information related to specific attributes of the subject, and evidence related to how the credential was derived.
W3C Verifiable Presentations (VPs): A collection of one or more VCs.
Public ledger technologies.
Policy workflow engines through fully configurable and human-readable “logic blocks” accessible through either a user interface or an application programming interface (API).
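To make the W3C building blocks above concrete, here is a minimal sketch of a Verifiable Credential and a Verifiable Presentation as data structures. Every identifier and attribute value below is a hypothetical placeholder, not a value issued by the Guardian.

```javascript
// Minimal illustrative shape of a W3C Verifiable Credential (VC) and a
// Verifiable Presentation (VP) wrapping it. All identifiers are hypothetical.
const verifiableCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:example:registry-123",   // the issuing authority's DID
  issuanceDate: "2024-01-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:project-456",      // the subject's DID
    co2Reduced: "100 tCO2e",            // example subject attribute
  },
  proof: { /* cryptographic evidence of how the credential was derived */ },
};

// A VP is simply a collection of one or more VCs, plus the holder's own proof.
const verifiablePresentation = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiablePresentation"],
  verifiableCredential: [verifiableCredential],
  proof: { /* holder's signature */ },
};
```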
Types of Users
About Schemas
Watch this quick 2-minute video to learn about Schemas:
New User Without an MGS Account
Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.
Step 1: Access the Indexer Homepage
Open the Indexer homepage in your browser.
On the "Welcome to Indexer" page, click the "Log In Through MGS" button.
You will be redirected to the MGS Login Screen.
Step 2: Sign Up for an MGS Account
At the MGS Login Screen:
Click the "Don’t Have an Account? Sign Up" link at the bottom of the page.
Step 3: Review Terms and Conditions
Carefully read the Terms and Conditions.
Click "Accept" to proceed.
Step 4: Fill Out the Request Form
Provide the following information:
Username.
Email address.
Password.
Click "Request Access" to submit the form.
Step 5: Authenticate with Your New Tenant Admin Account
Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.
Step 1: Choose a Method to Access the Indexer
Option 1: Log in through the MGS Sidebar Menu:
If you are logged into your MGS account, locate the quick link to the Indexer at the bottom of the sidebar menu and click it.
You will be redirected to the Indexer application.
Option 2: Access the Indexer Homepage:
Open the Indexer homepage in your browser.
On the "Welcome to Indexer" page, click the "Log In Through MGS" button.
Step 2: Authenticate Your Session
If You Are Already Logged Into MGS:
The Indexer will detect your existing MGS session and automatically authenticate you. You’ll be directed into the Indexer without needing to log in again.
Step 3: Log in Using Your Credentials
At the MGS Login Screen:
Select the appropriate tab:
"Admin": For Tenant Admin users.
Custom MGS ChatGPT Assistant
The Managed Guardian Service (MGS) Custom GPT is a specialized AI assistant designed to facilitate users in understanding and utilizing the Managed Guardian Service platform. This tool represents an advanced version of ChatGPT, customized specifically to cater to the needs of users engaging with MGS. It has been meticulously programmed with a comprehensive range of MGS-related documentation, policies, and operational guidelines.
The primary function of the MGS Custom GPT is to act as an interactive knowledge base. It helps users navigate the complexities of the MGS platform, which is pivotal for businesses involved in emissions reporting, carbon offset, and renewable energy credit creation. This AI assistant stands out due to its ability to quickly process and provide insights from a vast array of MGS documentation, which has been integrated into its system.
Key Features of MGS Custom GPT
Customized Assistance: Tailored specifically for the Managed Guardian Service, it provides focused and relevant information, making it a reliable source for MGS-related queries.
Rich Knowledge Base: Equipped with extensive data from MGS documentation, it can answer a wide range of questions, from basic setup to complex operational procedures.
Efficient Query Resolution: Designed to interpret and respond to user queries by referencing the integrated MGS documentation, ensuring accurate and up-to-date information.
User-Friendly Interface: Simplifies the user's experience with the MGS platform, offering step-by-step guidance and clarifications on various aspects of the service.
Support for Various MGS Aspects: Capable of assisting with vault setups, Hedera account integration, policy understanding, token operations, and trust chain features.
Troubleshooting and Support: The tool can offer troubleshooting advice and guide users through resolving common issues encountered on the MGS platform.
How to Use MGS Custom GPT:
Ask Specific Questions: Users can inquire about specific aspects of MGS, such as setting up user profiles, managing tenants, or understanding policy implementations.
Seek Clarifications: If there are aspects of the MGS documentation or operations that are unclear, the tool can provide detailed explanations.
Explore Features: Users can explore different functionalities and features of the MGS platform through interactive questioning.
The Managed Guardian Service Custom GPT serves as an invaluable asset for users, significantly enhancing their ability to effectively utilize the MGS platform. By providing instant access to a wealth of information and guidance, it empowers users to make the most out of the MGS services and capabilities.
About MGS Vault
Watch this quick video to learn about the Managed Guardian Service Vault
Changelog
Tenant Operations
Tenant Admins
Beta v10.1
We are thrilled to bring you MGS Beta v10.1, aligned with open-source Guardian 3.0 and featuring key updates designed to enhance security, user experience, and accessibility.
New Features
Seamless Email-Based Login & Tenant Selection
Logging in to MGS is now smoother than ever! We've removed the friction of manually entering Tenant IDs. Instead, users can now log in using just their email and password, with a tenant selection screen for those associated with multiple tenants. This streamlined approach eliminates confusion, improves accessibility, and enhances the overall user experience.
Methodology Breakdown
A set of regulations or instructions that specifies how carbon offset projects are created, validated, confirmed, and tracked is referred to as a policy in the Guardian. These regulations help ensure that carbon offsets are legitimate, quantifiable, and capable of reducing or eliminating actual emissions.
The Guardian platform offers a framework for developing and overseeing carbon offset projects in accordance with a number of widely accepted international norms, such as the Verified Carbon Standard (VCS) or the Gold Standard. For various carbon offset project types, such as renewable energy, energy efficiency, forestry, or agriculture, these standards provide specific requirements.
The Guardian platform's policies are made to be flexible and adaptable to the particular requirements and objectives of each carbon offset project. They cover a variety of options and conditions, such as project parameters, baseline emissions, additionality standards, monitoring techniques, and reporting needs.
By establishing policies within the Guardian platform, project developers can make sure that their carbon offset projects adhere to the highest standards of reliability and quality. Policies also provide transparency and accountability to involved parties, investors, and buyers of carbon offsets who want to ensure that their investments contribute to real and significant emissions reductions or removals.
Watch the videos in this YouTube playlist to learn how to break down methodologies and create policies for the Guardian:
Beta v5.1
This minor upgrade brings the monthly Guardian update into MGS.
New
Core Guardian Upgrade to v2.21
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
About Dry Run
Watch this quick video to learn about Dry Run:
About Retirement
Watch this quick video to learn about Retirement Contracts
Key Features of Managed Guardian Service (MGS)
Managed Guardian Service (MGS) builds on the core capabilities of the open-source Guardian, incorporating powerful cloud-driven enhancements to streamline and elevate your carbon market and environmental data management experience. Here's a look at the key features that make MGS the ideal solution:
Use the UI or APIs to create your own digital methodologies
Policies are one of the most important concepts to understand in the Guardian. We recommend that you take a moment to watch the video about Policies. The Policy Workflow Engine defines and validates all requirements for methodologies. We give you the option to create them using the UI as well as the APIs.
Beta v10.2
We’re excited to introduce MGS Beta v10.2, bringing important enhancements and fixes to improve system stability, user experience, and performance.
New Features & Enhancements
Trustchain Stability Fix
Resolved an issue where accessing the Trustchain triggered a 422 error if the associated policy lacked a mint block. This fix ensures Trustchain visibility remains consistent and error-free regardless of policy configurations.
Improved Multi-Account Login Experience
Users with multiple accounts tied to the same email can now easily select which account to log into during authentication. This streamlined selection process enhances usability across tenants and roles.
Token Minting Performance Optimization
Addressed major performance bottlenecks during token minting, especially for high-load policies. This update significantly improves loading speed and prevents UI freezing during heavy operations.
Beta v11
We’re excited to introduce MGS Beta v11 — a release that reaffirms our commitment to uptime, enterprise integration, security, and Guardian innovation.
New Features & Enhancements
Infrastructure Resilience & Uptime Enhancements
MGS Beta v11 introduces major upgrades to our core infrastructure — built to support seamless, zero-downtime deployments. This behind-the-scenes improvement ensures that updates, hotfixes, and feature rollouts happen without interrupting your operations.
Whether you're streaming MRV data, minting tokens, or managing live policies, your work stays uninterrupted. These enhancements reinforce one of MGS’s core promises: maximum uptime, continuous performance, and uninterrupted trust.
Azure B2C Single Sign-On Integration
For enterprise teams building custom front ends, MGS now supports Azure Active Directory B2C (SSO) integration. Organizations can authenticate users through their own identity systems while seamlessly accessing MGS — no separate login required.
Web3.Storage
Overview
With the recent shift in the Managed Guardian Service (MGS) infrastructure, incorporating web3.storage as a critical component for data storage and management is essential. This guide provides an introduction to web3.storage, outlining its significance, operational mechanics, and the steps required for Tenant Admins to integrate it with their MGS tenants.
web3.storage is a platform designed to facilitate easy and efficient storage of data on the decentralized web. Utilizing IPFS (InterPlanetary File System) and Filecoin, web3.storage offers a robust and scalable solution for data storage needs, particularly suited for applications within the web3 ecosystem.
Chapter 17: (Reserved for completion)
Complete guide to deploying, monitoring, and maintaining carbon certification policies in production environments
Chapter 16 covered advanced policy patterns and testing. Chapter 17 focuses on the critical final phase: deploying policies to production, managing live carbon credit certification systems, handling upgrades, and ensuring ongoing operational excellence.
This chapter addresses the real-world challenges of running production carbon registries with financial and environmental stakeholders depending on reliable, accurate policy execution.
Production Deployment Architecture
Guardian Production Infrastructure
Production carbon registries require robust infrastructure supporting high availability, data integrity, and regulatory compliance.
[This chapter is in progress]
About Policies
Watch this quick 2-minute video to learn about policies:
Beta v3
We're thrilled to announce MGS is upgrading from Beta v2 to the powerful Beta v3
Beta v5
This release includes updates to the available IPFS storage providers.
New
New Hedera Testnet reconfiguration after testnet reset
Web3.storage Validation
User Invite Status Tracking
A new UI panel within the Tenant dashboard now displays the real-time status of user invites. Admins can track if an invite was sent, accepted, or expired—with support for resending expired invites—making user onboarding more transparent and manageable.
Direct Guardian-to-Indexer Connection (Enhanced Integration)
Guardian instances can now be directly connected to the MGS hosted Indexer, enabling access to advanced UI elements and functionality previously limited to manual local setups. This integration is authenticated, streamlined, and ensures high availability with minimal user intervention.
HashScan Integration for Hedera Links
All system-generated links for Topics, Tokens, and Hedera Account IDs have been updated from LedgerWorks to HashScan. This resolves broken link issues and ensures consistent access to Hedera ledger data.
Updates
Guardian Upgrade to v3.1.1
MGS is now upgraded to align with open-source Guardian v3.1.1. This ensures continued compatibility and leverages the latest core improvements from the Guardian ecosystem.
Two-Factor Authentication for SR and Policy Users
Security is non-negotiable. MGS now enforces Two-Factor Authentication (2FA) for both Standard Registry and Policy User accounts. This ensures only verified users can access sensitive workflows and manage assets across tenants.
Aligned with Guardian Open Source v3.2
MGS now supports the latest Guardian 3.2 release, bringing expanded interoperability, rich data visualizations, and admin-friendly controls.
Highlights include:
Self-custody of keys via MGS Vault
High Availability
Improved DB CPU consumption
More pre-loaded open-source policies
This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases
In response to the evolving needs of our Managed Guardian Service (MGS) infrastructure, we are thrilled to introduce the Managed IPFS node as a pivotal addition to our suite of data storage solutions. This section is dedicated to providing a comprehensive understanding of the Managed IPFS node, highlighting its importance, operational dynamics, and integration process for Tenant Admins within the MGS ecosystem.
What is a Managed IPFS Node?
The Managed IPFS node is a fully managed and hosted service provided by MGS, designed to streamline the storage and management of data on the decentralized web. Leveraging the power of the InterPlanetary File System (IPFS), our Managed IPFS node offers a seamless and scalable approach to handling vast amounts of data with enhanced security, redundancy, and ease of access.
filebase
Overview
In response to evolving data management needs within the Managed Guardian Service (MGS) infrastructure, integrating filebase as a key IPFS provider has become imperative. This documentation serves as a comprehensive guide to incorporating Filebase, emphasizing its importance, functionality, and the step-by-step process required for Tenant Admins to seamlessly integrate it with their MGS setup.
What is filebase?
filebase is a pioneering platform that leverages the InterPlanetary File System (IPFS) to offer scalable and decentralized data storage solutions. By harnessing the power of IPFS and blockchain technology, Filebase provides users with a secure, efficient, and cost-effective method for storing data across a distributed network. This platform is exceptionally well-suited for applications demanding high data integrity, availability, and redundancy — characteristics that align with the core objectives of the MGS ecosystem.
The integration of filebase with MGS not only enhances the platform's data storage capabilities but also aligns with the overarching goal of leveraging decentralized technologies for improved security and efficiency. This guide will navigate through the necessary steps to integrate filebase with MGS, ensuring a smooth transition for Tenant Admins aiming to optimize their data management strategies within the MGS framework.
Tenant APIs
Can't create a policy? Use one that we have already preloaded for you!
There's nothing worse than wanting to jump into the action, but not having all of the tools! The open-source Guardian community is ever growing and so is the collection of tested policies. As more become available, we'll add them to the list of preloaded policies for you to quickly drop them in.
Multi-tenancy
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each environment is called a tenant. During the beta phase, we will allow up to 3 tenants per Tenant Admin account. You will also be able to select which Hedera network you'd like to point each tenant to: the Hedera Mainnet, Testnet, or Previewnet. Let your imagination run wild on how you will use this feature. Tenants can serve your customers, act as sandbox/production environments, or even support different use case designs. We look forward to hearing everyone's feedback on this!
Flexible Data Storage with IPFS Storage Providers
The Managed Guardian Service (MGS) enhances its data storage capabilities through integration with various IPFS Storage Providers, ensuring that organizations and individuals have access to a decentralized and secure method for managing their digital assets and environmental data. This approach not only bolsters data integrity and accessibility but also aligns with the decentralized ethos of blockchain technologies, offering a robust solution for the storage of sensitive information across a distributed network.
Integrate with your system
The Managed Guardian Service is a hosted environment where we provide you with resources, tools, and support. Once registered for the Managed Guardian Service, users will be given two options to get started. One option is to use a simple user interface to develop policies and run proof of concepts quickly. The other option is APIs for a fully customizable application experience.
Secure self-custody with the MGS Vault
The Managed Guardian Service Vault is designed to benefit organizations and individuals looking to securely store their user account secrets, such as private keys. The Vault solution leverages the open-source version of HashiCorp Vault and is intended to be used with the Managed Guardian Service. Keep in mind that MGS has integrations with many other popular vaults on its roadmap, so requests are welcome. Once registered for the Managed Guardian Service, users will need to configure their profiles. They may choose to bring their own compatible vault or use the MGS vault solution we deployed across all major cloud provider Marketplaces, including the Microsoft Azure Marketplace, Google Cloud Platform Marketplace, and AWS Marketplace.
Hosted Indexer: Data Tracking and Retrieval
The Indexer in Managed Guardian Service (MGS) enables tracking and retrieval of data across carbon offsets, policies, and transactions. It offers advanced search capabilities, allowing users to quickly locate specific records, such as policy updates or carbon credit histories, by filtering attributes like project type and issuance date. Designed to improve data transparency and accessibility, the Indexer supports compliance reporting, impact analysis, and audit trails, ensuring that all indexed data is up-to-date, traceable, and easily accessible for informed decision-making.
Get access to our support desk
The monitoring and alerting system is the backbone of our service. It allows us to detect any issues before they manifest themselves to users and enables us to take timely action. MGS is widely covered by monitoring and alerts that allow us to react to, prevent, and analyze any issues that can happen. However, in the event that technical support is needed, the MGS team has a help desk with SLAs to address needs.
Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.
Click on "Sign Up" and enter your username, email, password, and agree to the terms of use.
Access will be automatically granted upon completion.
Step 2: Admin Login and Tenant Configuration
Log in with your Admin Email and Password.
If it's a new Admin account, you will be able to do the following:
Access the Tenant Admin screen; configure your subscription under the Subscription tab.
Navigate to the Tenants tab and click "Add New Tenant" for tenant configuration.
Set configurations like Tenant Name, Network Selection (Testnet, Mainnet, or Previewnet) and IPFS Storage Provider (Managed IPFS Node, Filebase, Web3.Storage).
If you choose Web3.Storage or Filebase, enter the necessary API Key and API Proof values (refer to the documentation for creation instructions).
Step 4: Inviting Users and Customizing Tenant Branding
Use the Users tab to invite new members to your tenant by entering their email address and assigning a role. Select Standard Registry for users who will be managing and publishing registry policies. Choose User for individuals who will interact with the policies published by the registry.
Customize tenant branding with unique names, colors, logos, and background images.
Adjust IPFS Service Providers and modify API keys and proofs as needed.
Step 5: Setting Up a Standard Registry User Account
The first user is typically a Standard Registry account. This user establishes methodology requirements and issues tokens.
Vault Selection
Follow the on-screen instructions to select a vault. The MGS Vault is designed for organizations or individuals seeking a secure, self-custody option for storing account secrets like private keys.
ℹ️ Note: Vault selection is required for Mainnet, but may be skipped when working on Testnet.
Refer to the step-by-step setup guides.
Hedera Account Credentials
Step 6: Exploring Advanced Features
In the side bar, navigate to the Policy tab and the Schemas section to create and manage schemas.
Dive into features like Artifacts, Modules, Policies, Tools, and Tokens.
Learn to create policies from scratch or import them using the Policies tab.
Step 7: Testing Policies with Dry Run Mode
Use Dry Run mode to test policies in a simulated environment. Create virtual users and interact with policies as real-world users would.
Step 8: Publishing and Inviting Policy Users
Publish your policies for interaction by Policy users.
Invite policy users to your tenant to submit data and engage with published policies.
Step 9: Setting Up Policy User Account
Similar to steps 4 and 5, Policy users need to be invited, and will also need to follow the steps to finish setting up their user account. Policy users then engage with the specific Standard Registry and interact with policies.
Step 10: Final Steps for Policy Users
Explore the List of Tokens and Policies tabs to associate with tokens and access published policies.
Use advanced search features for finding relevant policies for MRV activities.
Step 11: Use the MGS Custom GPT Assistant
Feel free to use the Managed Guardian Service Custom GPT, the specialized AI assistant described earlier in this guide. It is designed to help you understand and utilize the MGS platform, and has been programmed with a comprehensive range of MGS-related documentation, policies, and operational guidelines.
Note: If you are currently using the open-source Guardian APIs, migrating to Managed Guardian Service is really easy!
Simply change the API base URL from what you are currently using (e.g., http://localhost:3002/api/v1/) to https://guardianservice.app/api/v1/
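As a minimal sketch of what that migration looks like in client code, only the base URL constant changes; routes and payloads stay the same. The login route and payload below are illustrative examples for this sketch, not a guaranteed contract.

```javascript
// Before: const BASE_URL = "http://localhost:3002/api/v1/";  // open-source Guardian
const BASE_URL = "https://guardianservice.app/api/v1/";        // Managed Guardian Service

// Illustrative login call; the route and body shape are assumptions.
async function login(username, password) {
  const res = await fetch(`${BASE_URL}accounts/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  if (!res.ok) throw new Error(`Login failed: ${res.status}`);
  return res.json(); // typically includes an access token for subsequent calls
}
```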
Beta v10
We are thrilled to bring you MGS Beta V10, aligned with open-source Guardian 3.0 and featuring key updates designed to enhance security, user experience, and data accessibility.
New Features
Database Vault Restriction for Mainnet API Access
To ensure data security on the Mainnet, API access to the Hashicorp vault is now restricted, aligning API functionality with UI standards. This measure prevents unauthorized use of the vault, providing an added layer of protection for mainnet operations.
Enhanced Tenant ID Accessibility
Tenant ID visibility has been streamlined for improved user experience. Previously accessible only by email or Admin login, the Tenant ID is now displayed directly within the tenant dashboard, making it easier for users to locate this essential information.
User Role Selection on Invite
To improve the invitation flow, Tenant Admins can now select user roles (Standard Registry or User) when inviting new users. This update allows for better management of permissions and access levels, streamlining the onboarding process. Invitations are tailored to reflect the specific user role, enhancing clarity and usability.
Indexer Enhancements
Login Access for Indexer Interface
We've added a dedicated Login/Signup button to the Indexer UI page, allowing users to directly access the login page without redirecting through the main MGS portal. This update simplifies access to Indexer functionality, making navigation smoother and more intuitive.
Global Search Integration with Indexer
The Indexer is now integrated with Global Search, enabling users to search policies across the entire Hedera Testnet and Mainnet from within MGS. This enhancement improves search capabilities, providing faster, more comprehensive access to policies across instances and standard registries.
Customizable Indexer Hosting
In this release, we’ve added options for hosting and customizing the Indexer. Organizations can now configure the Indexer to suit specific search and data access needs, making MGS even more flexible and adaptable to unique use cases.
Updates
MGS Upgrade to Open-Source Guardian 3.0
In alignment with the latest open-source Guardian version 3.0, MGS Beta V10 includes all the latest features and improvements from the Guardian platform. This update ensures that MGS users have access to cutting-edge functionality, security enhancements, and performance optimizations.
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Beta v1
We launched into production with the Managed Guardian Service Beta v1. It has all of the core features included in the open-source Guardian, but with some special cloud-driven features.
New
Below is a list of features that are included in the initial launch of the Managed Guardian Service Beta v1
Open-Source Guardian Version 2.7
Introduction to Admin Users
Multi-tenancy
Pre-loaded Policies
Carbon Offsets Policies:
Verra REDD+ VM0007, developed by Envision Blockchain
Carbon Reduction Measurement - GHG Corporate Standard, developed by TYMLEZ
Downloadable APIs
This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Integrating Managed IPFS Node with MGS Tenants
Tenant Admins play a crucial role in integrating the Managed IPFS node with their MGS tenants. Here's a step-by-step guide to get you started:
For New Tenants
Click the Add New Tenant Button
Fill out the Tenant Name and select the appropriate Network
When asked for the IPFS Storage Provider click the drop down and select Managed IPFS node
For Existing Tenants
From the Tenant Admin screen, click the "Open" button
Navigate to the "Settings" tab
When asked for the IPFS Storage Provider click the drop down and select Managed IPFS node
Save Changes
Beta v2
The Beta v2 Release includes new features such as improved tenant and user management, full asset lifecycles such as asset retirement, and much more.
New
Below is a list of improvements that are included in the Managed Guardian Service Beta v2
Upgrade to core open-source Guardian v2.8 (Retirement process for assets, Matched Assets, 3rd Party Content Providers, Modular Benefit Projects, and LedgerWorks Eco Explorer Implementation)
Adding 2 DOVU CRU methodologies to the preloaded policies (Agrecalc and Cool Farm)
Extend the POST/tenants/invite endpoint with the ability to return the inviteId in response
Updated Swagger Documentation for Beta V2
Enable Admins to manage users
Improved Internal Alerts
Enhanced Autoscaling for performance loading
Bug fixes
This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Beta v9
We are excited to introduce MGS Beta V9, packed with new features and improvements to enhance your experience and provide greater flexibility in managing your operations.
New
Email Alert on Successful Publishing of Policies
To improve communication and ensure that users are promptly informed, we have added an email notification feature. Whenever there is a successful publishing of any methodology on Testnet/Mainnet, an email alert with detailed information will be sent to the user.
Further Evolution of Policy Comparison (Mass Diff)
We have extended our policy comparison functionality to allow for mass-comparison of policies across the entire ecosystem. Users can now search for local policies similar or different to a given policy based on a similarity threshold, without needing to import them into the Guardian instance. This feature enhances the efficiency and breadth of policy analysis.
Updates
UI Improvements
We have made several enhancements to the user interface, including updates to dialog boxes, notification bars, and login screens. These improvements aim to provide a more cohesive and user-friendly experience.
Obsolete Banner Display
The obsolete banner, which should appear at the top of the page when launching MGS, is now functioning correctly and will be displayed as intended.
Policies with Tools in Dry Run Mode: Performance Improvement
The performance of executing policies with tools referenced in dry run mode has been significantly improved. Users will now experience faster execution speeds when importing and running these policies.
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Compatible IPFS Storage Providers
To ensure seamless integration and compatibility with a wide range of data storage needs within the Managed Guardian Service (MGS) framework, we have expanded our list of supported IPFS (InterPlanetary File System) storage providers. Each provider brings unique features and benefits tailored to different requirements, offering flexibility and choice to our users. Whether you are looking for enhanced security, specific geographic data residency, cost-efficiency, or scalability, our diverse range of compatible IPFS storage providers ensures that your data storage needs are met with the highest standards. Below is a list of IPFS storage providers that are fully compatible with MGS, designed to enhance your experience and optimize your data management strategy within the MGS ecosystem.
This is a patch update to Beta v3.2 that fixes some known issues. Essentially, it enables easier token discoverability and smoother operation of large policies.
New
Guardian Core patches
Fixed an issue where created TokenIds were published with UUID formatting instead of a tokenId property.
Improvement of how the Policy Service handles large policies.
This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Beta v6
We're excited to announce MGS Beta v6. This release includes several new features and improvements designed to enhance your experience and provide more options and flexibility for managing your operations.
New
1. Filebase Support Added
In our continuous effort to expand and improve the IPFS solutions available in MGS, we have now added Filebase as an additional option. This integration allows users to choose Filebase for their IPFS needs, alongside the existing options. With Filebase support, users can leverage its unique features and benefits as part of their workflow in MGS.
2. Downtime Notification System
Understanding the importance of effective communication, especially during downtimes, we have introduced a new notification system for all users. This feature is designed to inform users about any planned or unexpected downtime promptly. Here's what makes the downtime notification system stand out:
Location and Visibility: The notification is prominently displayed at the top of the screen when enabled, ensuring maximum visibility.
Interactivity: Users can dismiss the notification with a simple click of the [X] button. Once closed, it will not reappear until a new message is issued.
3. Enhanced Final User Profile Setup Wizard Descriptions
To make the setup process as smooth and understandable as possible, we have added helpful descriptions to each step of the Final Setup wizard. These descriptions are designed to provide users with clear information about what is required at each step, ensuring that both Standard Registry and Default User roles can be configured with ease and confidence.
4. Update to Guardian v2.22
Beta v6 includes Guardian version 2.22, bringing all the latest improvements and fixes from the Guardian platform into MGS.
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
IPFS Storage Providers
Overview
As part of the evolving Managed Guardian Service (MGS) platform, Tenant Admins now have the flexibility and autonomy to select their own IPFS (InterPlanetary File System) storage providers. They must select and configure their preferred IPFS storage provider prior to creating a tenant.
This new feature significantly enhances the customization and control Tenant Admins have over their data storage solutions within the MGS ecosystem. This introduction aims to guide Tenant Admins through the process of selecting and integrating an IPFS storage provider with their MGS tenant.
The Role of IPFS in MGS
IPFS is a peer-to-peer network protocol that enables decentralized data storage and sharing. In the context of MGS, it serves as a backbone for storing digital environmental assets securely and efficiently. Choosing the right IPFS storage provider is crucial for optimizing data accessibility, redundancy, and overall system performance.
Importance of Selecting an IPFS Provider
Customized Data Storage Solutions: Tenant Admins can choose a provider that best fits their specific data storage needs and requirements.
Enhanced Data Sovereignty: By selecting their own provider, Tenant Admins have greater control over where and how their data is stored.
Scalability and Flexibility: Different providers offer varying levels of scalability and flexibility, allowing Tenant Admins to tailor their storage solutions as their needs evolve.
Cost Optimization: With the ability to choose from various providers, Tenant Admins can select a cost-effective solution that aligns with their budget constraints.
Steps for Tenant Admins
Research and Evaluate IPFS Providers: Understand the offerings, features, and pricing models of various IPFS storage providers. Key factors to consider include storage capacity, redundancy, security measures, and network performance.
Compatibility with MGS: Ensure that the chosen IPFS provider is compatible with the MGS platform. This compatibility is essential for seamless integration and operation within the MGS ecosystem.
Integration Process: Follow the specific steps provided to integrate the selected IPFS storage provider with your MGS tenant. This may involve configuring API connections, setting up access credentials, and customizing storage settings.
Testing and Validation: After integration, thoroughly test the setup to ensure that data storage and retrieval functionalities are working correctly and efficiently within your MGS tenant.
Conclusion
The ability to select their own IPFS storage providers empowers Tenant Admins with greater control and flexibility in managing their data storage solutions within the MGS platform. This feature aligns with the overarching goal of MGS to provide a customizable, secure, and efficient environment for managing digital environmental assets. Tenant Admins are encouraged to take advantage of this feature to optimize their MGS experience and meet their specific data storage needs.
Setting up filebase
Logging into filebase
To start using filebase as an IPFS provider in MGS, you need to first register your account. If you already have an account, you can directly go to https://console.filebase.com. If you don’t, follow these steps:
To sign up for a filebase account, navigate to https://filebase.com. To create a new account, click the ‘Try for Free’ button in the top right corner of the webpage.
Next, fill out the form fields, including an email address and password, and agree to the filebase terms to create your account.
You will receive an email with confirmation instructions. Click the link included in the email to confirm your account and complete the registration process. Once finished, you can access the filebase console.
Buckets
Buckets are like file folders; they store data and associated metadata. Buckets are containers for objects. Navigate to the Buckets dashboard by clicking on the ‘Buckets’ menu option. Here you can view your existing buckets and create new ones.
If you already have the Bucket you wish to use with MGS, skip this step. To create a new bucket, click the ‘Create Bucket’ button in the top right corner of the webpage, enter the name for the new bucket, and click the ‘Create Bucket’ button.
If successful, you will be redirected to the Bucket dashboard with your newly created bucket.
Access Keys
The Access Keys menu option leads you to the access keys dashboard. Here you can view, manage, and rotate your access keys. From this menu, you can also generate a Secret Access Token to be used with MGS. To generate this token, click the dropdown menu for 'Choose Bucket to Generate Token', then select the IPFS filebase Bucket you intend to use.
Copy the generated Secret Access Token.
Now, with a Secret Access Token generated, you can proceed with configuring a new or existing tenant in MGS. If you wish to create a new tenant, log in as Tenant Admin, select the ‘Tenants’ menu option, and click ‘+ Add New Tenant’. In the modal window, enter the Tenant Name, choose the appropriate Network from the list, and select ‘filebase’ among the IPFS Storage Provider options. In the ‘filebase token’ field, enter the Secret Access Token you copied earlier. Click the ‘Create Tenant’ button to finalize the creation of this tenant.
If you need to change the IPFS Storage Provider for an existing tenant, select the ‘Tenants’ menu item, find the tenant, and click the ‘Open’ button. Then, go to the Settings tab, select ‘filebase’ from the IPFS Storage Provider list, and paste the Secret Access Token you copied earlier into the filebase token field. Click the Save Changes button at the bottom of the page to apply your changes.
Integrating Web3.Storage with MGS Tenants
Tenant Admins play a crucial role in integrating Web3.Storage with their MGS tenants. Here's a step-by-step guide to get you started:
For New Tenants
Click the Add New Tenant Button
Fill out the Tenant Name and select the appropriate Network
When asked for the IPFS Storage Provider click the drop down and select Web3.Storage
Fill out the IPFS Storage API Key and IPFS Storage API Proof that you obtained from Web3.Storage.
For Existing Tenants
From the Tenant Admin screen, click the "Open" button
Navigate to the "Settings" tab
When asked for the IPFS Storage Provider click the drop down and select Web3.Storage.
Fill out the settings and click "Save Changes."
Beta v7
We're thrilled to introduce MGS Beta v7, featuring significant updates and enhancements to optimize your experience and increase the flexibility for managing your operations.
New
Fix for IPFS Resolution Issue
When using the MGS-hosted IPFS storage provider option, we've resolved the "IPFS not resolved" error, enhancing the stability and reliability of our IPFS integrations.
Enhancements in Policy Import Process
Addressing performance issues, we've fixed a critical bug in the circuit traversal loop during policy comparisons, significantly reducing processing times and CPU utilization. Additional optimizations have also been made to improve memory usage.
MGS Vault Integration for BYOD Key
To further secure and customize your experience, we've integrated MGS Vault to support Bring Your Own DID (BYOD) Key, allowing for enhanced security and personalization within the MGS framework.
User Interface Improvements
This version rolls out several UI enhancements designed to improve interaction and usability across the MGS platform.
Update to Guardian v2.23
Continuing our commitment to staying current with the latest technological advances, MGS has been updated to Guardian core version 2.23, incorporating all the new features and improvements.
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Return Tenant Related Settings
Return Tenant related settings.
GET /tenants/settings
Get Tenant related settings. For Tenant Admin role only.
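A hedged sketch of calling this endpoint from JavaScript follows. The bearer-token header and the response shape are assumptions; only the route and the Tenant Admin role requirement are documented above.

```javascript
// Hypothetical call to GET /tenants/settings; auth header and response shape assumed.
async function getTenantSettings(accessToken) {
  const res = await fetch("https://guardianservice.app/api/v1/tenants/settings", {
    headers: { Authorization: `Bearer ${accessToken}` }, // Tenant Admin token
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // Tenant-related settings object
}
```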
Beta v4
This Beta v4 release brings a new UI, new features, the introduction of AI, and more!
New
Revolutionized User Interface: Navigate with ease and enjoy a more intuitive experience.
Custom Tenant Branding: Tailor every one of your tenant spaces with unique branding elements for a personalized touch.
Enhanced Standard Registry Attributes: Dive into a more comprehensive and detailed asset management journey.
MGS Vault Additions: Secure your data with integration options including Azure Key Vault and GCP Secret Manager.
Core Guardian Upgrade to v2.20: Experience the pinnacle of our foundational technology, ensuring efficiency and reliability.
AI-Powered Search Capabilities: Navigate through data with unprecedented ease and intelligence.
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Beta v8
We are excited to introduce MGS Beta V8, packed with new features and improvements to enhance your experience and provide greater flexibility in managing your operations.
New
Improve the UI/UX for OpenSource Policy Import Function
We have added a cleaner and more intuitive way to search for open-sourced policies directly within MGS, making it easier to find and import the policies you need.
Expose APIs for User Setup Flow
To improve integration capabilities, we've exposed public APIs for user creation functionalities (e.g., Standard Registry and Policy Users). This allows customers to seamlessly integrate MGS into their existing systems, managing user setup processes including IPFS storage providers and vault selections through their own interfaces.
Policy Lifecycle Management
Addressing performance inefficiencies, we've optimized the policy service to handle policy states more effectively. By managing obsolete policies post-Hedera testnet reset, we've reduced unnecessary load and SaaS infrastructure costs. Users can now better manage their policy data, minimizing potential data loss and improving overall satisfaction.
Updates
Update MGS to Guardian v2.24
In our commitment to staying current with technological advances, MGS has been updated to open-source Guardian version 2.24. This update brings all the latest features and improvements from the Guardian platform into MGS.
For the full changelog and release notes on the open-source Guardian, please visit: https://github.com/hashgraph/guardian/releases
Send Invite Link
Send Invite link.
POST /tenants/invite
Send an Invite link for a new user. For Tenant Admin role only.
Request Body
Name
Type
Description
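The request body fields are not enumerated above, so the sketch below assumes an email and a role, based on the invite flow described elsewhere in these docs (role selection on invite, and the inviteId returned in the response per the Beta v2 notes).

```javascript
// Illustrative invite request; body fields and role values are assumptions.
async function sendInvite(accessToken, email, role /* e.g. "StandardRegistry" or "User" */) {
  const res = await fetch("https://guardianservice.app/api/v1/tenants/invite", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`, // Tenant Admin token
    },
    body: JSON.stringify({ email, role }),
  });
  if (!res.ok) throw new Error(`Invite failed: ${res.status}`);
  const data = await res.json();
  return data.inviteId; // the Beta v2 notes state the inviteId is returned
}
```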
Part I: Foundation and Preparation
Establishing the foundational knowledge for methodology digitization on the Guardian platform
Overview
Part I provides the essential foundation for understanding methodology digitization, the Guardian platform, and the VM0033 reference methodology. This part consists of three focused chapters designed to prepare readers for the technical implementation phases that follow.
Part V: Calculation Logic Implementation
Status: ✅ Complete and Available
Implementation Focus: VM0033 emission reduction calculations, Guardian Tools architecture, and comprehensive testing frameworks
This part covers the implementation of calculation logic in Guardian environmental methodologies, with VM0033 as the primary example and AR Tool 14 demonstrating Guardian's Tools architecture.
Part Overview
Part V provides comprehensive guidance on implementing and testing calculation logic for environmental methodologies in Guardian:
Beta v3.1
Introducing MGS Beta v3.1 - delivering enhanced tenant logs, a faster Guardian experience, Guardian v2.12, and more pre-loaded policies!
New
Tenant Logs
Two-Factor Authentication (2FA) Setup Guide
Overview
Two-factor authentication (2FA) adds an extra layer of security to your MGS account. Once enabled, signing in will require your password and a one-time code from your mobile device. This applies to all user types, including Tenant Admins, Standard Registry and Policy User accounts.
Access Your Profile
Log into your MGS account.
Click your user name at the bottom left of the sidebar.
Delete Tenant User
Delete Tenant User
DELETE /tenants/{tenantId}/users/{userId}
Delete Tenant User
Return user Tenants
Return user Tenants only.
GET /tenants/user
Return user Tenants. For Tenant Admin role only.
Return Users for Tenant
Return Tenant Users
POST /tenants/{tenantId}/users
Return users for Tenant. For Tenant Admin role only.
Create New Tenant
Create new Tenant.
PUT /tenants/user
Create new Tenant. For Tenant Admin role only.
Delete Tenant
Delete Tenant
POST /tenants/delete
Delete a Tenant and all related data. This action can't be undone. For Tenant Admin role only.
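The sketch below strings two of these endpoints together. The routes are copied verbatim from this reference; the authorization header and the request/response shapes are assumptions.

```javascript
const API = "https://guardianservice.app/api/v1";

// Delete a single user from a tenant (Tenant Admin only).
async function deleteTenantUser(token, tenantId, userId) {
  const res = await fetch(`${API}/tenants/${tenantId}/users/${userId}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`User delete failed: ${res.status}`);
}

// Delete a tenant and all related data. This cannot be undone.
async function deleteTenant(token, tenantId) {
  const res = await fetch(`${API}/tenants/delete`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({ tenantId }), // body shape assumed; not documented above
  });
  if (!res.ok) throw new Error(`Tenant delete failed: ${res.status}`);
}
```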
Complete implementation of VM0033 emission reduction calculations using Guardian's customLogicBlock, including baseline emissions, project emissions, leakage calculations, and final net emission reductions with real JavaScript production code.
Foundation concepts and architectural framework for parameter relationships and dependencies in environmental methodologies, establishing patterns for future FLD implementation.
Complete guide to building Guardian Tools using AR Tool 14 as practical example, covering the extractDataBlock → customLogicBlock → extractDataBlock mini-policy pattern for standardized calculation tools.
Comprehensive testing framework using Guardian's dry-run mode and customLogicBlock testing interface, with validation against VM0033 test artifacts at every calculation stage.
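The chapter descriptions above center on Guardian's customLogicBlock. The following minimal sketch shows the shape of such a calculation, assuming the block's usual conventions of an input documents array and a done() callback; the field names are illustrative stand-ins, not VM0033's actual schema keys.

```javascript
// Net emission reductions = baseline - project - leakage (all in tCO2e).
// Wrapped in a plain function with mock data so it runs standalone; inside
// Guardian the block supplies `documents` and you finish with done().
function calcNetReductions(documents) {
  return documents.map((doc) => {
    const s = doc.document.credentialSubject[0];
    s.netEmissionReductions =
      Number(s.baselineEmissions) - Number(s.projectEmissions) - Number(s.leakageEmissions);
    return doc;
  });
}

const mock = [{ document: { credentialSubject: [
  { baselineEmissions: 1200, projectEmissions: 150, leakageEmissions: 50 },
] } }];
console.log(
  calcNetReductions(mock)[0].document.credentialSubject[0].netEmissionReductions
); // 1000
// In a real customLogicBlock: done(documents);
```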
Prerequisites
Completed Parts I-IV: Foundation through Policy Workflow Implementation
Understanding of Guardian's Policy Workflow Engine (PWE)
Basic JavaScript programming knowledge
Familiarity with environmental methodology calculations
Learning Outcomes
After completing Part V, you will be able to:
✅ Implement calculation logic using Guardian's customLogicBlock with real production examples
✅ Build Guardian Tools using the extractDataBlock and customLogicBlock pattern
✅ Test and validate calculations using Guardian's dry-run mode and testing interfaces
✅ Debug calculation issues using Guardian's built-in debugging tools
✅ Create production-ready environmental methodology implementations
Next Steps
Part V completes the core implementation knowledge needed for Guardian methodology digitization. Future parts will cover:
Part VI: Integration and Testing - End-to-end policy testing and API automation
Part VII: Deployment and Maintenance - Production deployment and user management
Part VIII: Advanced Topics - External system integration and troubleshooting
Part V Complete: You now have comprehensive knowledge of calculation logic implementation in Guardian, from individual customLogicBlocks to complete testing frameworks. These skills enable building production-ready environmental methodologies with confidence in calculation accuracy.
Tenant Admins can now access comprehensive logs specific to their tenant activity.
Guardian v2.12 Upgrade
Improved minting speed due to new batching process.
Enhanced error handling for smoother operation.
Improved memory performance for faster processing.
Artifact tagging for easier identification and handling.
Enhanced policy configurator now offers customizable "themes".
Overall, expect a quicker user experience.
More Pre-loaded Policies
Addition of more pre-loaded policies for a more comprehensive policy creator experience.
This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases
Click the three-dot (ellipsis) menu next to your user name and select Profile.
Find "Security Settings."
Click Setup next to Two-factor authentication.
Start the Setup Process
A window will open titled “Enable two-factor authentication.”
Scan the QR Code or Enter the Key
Open an authenticator app on your mobile device (such as Authy, Google Authenticator, or similar).
Scan the QR code displayed on the screen.
If you cannot scan the code, copy the provided key and enter it manually into your authentication app.
Enter the Code From Your Authenticator App
The authenticator app will generate a 6-digit code.
Enter this code in the “Code” field.
Click Enable.
Download Your Recovery Codes
After enabling 2FA, you will be prompted to download your recovery codes.
Save these codes in a safe place. If you ever lose access to your authenticator app, you can use a recovery code to log in.
2FA Status Confirmation
Once setup is complete, your profile will display:
Two-factor authentication: Active
You can deactivate 2FA at any time from this screen if needed.
Additional Notes
2FA is optional but strongly recommended for all users.
The setup process is the same for both Standard Registry and Policy User accounts.
If you lose both your authenticator app and recovery codes, contact MGS support for assistance.
If it's an existing Admin account with multiple users sharing the same email address, you will be able to select from a list of users (SRs/Users/Tenants).
You’ll need to enter your Hedera Account ID and Private Key.
Choose the ED25519 key type — do not select ECDSA.
Download or copy the DER Encoded Private Key — do not use the HEX Encoded format.
For Mainnet:
Use a Hedera-enabled wallet.
Create a Mainnet account and ensure it is funded with HBAR.
Export the ED25519 key in DER Encoded format.
⚠️ Only ED25519 keys in DER format are supported by the Managed Guardian Service.
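For example, the Hedera JavaScript SDK can generate an ED25519 key and print it in the DER encoding MGS expects. This sketch covers key generation only; creating and funding the account still happens in your wallet.

```javascript
// Generate an ED25519 key pair with @hashgraph/sdk and print the DER encodings.
const { PrivateKey } = require("@hashgraph/sdk");

const privateKey = PrivateKey.generateED25519();
console.log("DER private key:", privateKey.toStringDer()); // the format MGS expects
console.log("DER public key :", privateKey.publicKey.toStringDer());
// A HEX/raw encoding of the same key will not be accepted by MGS.
```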
Digital Identity (DID) Setup
Next, set up your digital identity. You can either:
Allow MGS to create a new DID document for you, or
Select "Bring Your Own DID", in which case you’ll need to input your existing DID keys.
Organization Profile
Fill out your company profile. This information will appear in the Standard Registry Hedera Topic for network visibility.
Upon completion of Part I, proceed to Part II: Analysis and Planning (coming soon) to begin systematic methodology analysis and implementation planning.
Common templates, frameworks, and systems used across all parts of the Methodology Digitization Handbook
Overview
This directory contains shared infrastructure used across all parts (I-VIII) of the Methodology Digitization Handbook to ensure consistency, quality, and maintainability.
Shared Components
Standard templates for consistent content structure across all chapters and parts
System for ensuring accurate VM0033 references throughout all handbook content
System for linking handbook content with existing Guardian documentation
Comprehensive collection of test artifacts, Guardian implementations, calculation tools, and validation materials including:
VM0033 Reference Materials: Complete methodology documentation and Guardian policy implementation
Test Data & Validation: Official test cases, real project data, and Guardian VC documents
Guardian Tools & Code: Production implementations including AR Tool 14 and calculation JavaScript
Schema Templates: Excel-first schema development templates for Guardian integration
Usage Guidelines
For Content Developers
Use Standard Templates: All chapters must follow templates in templates/
Follow VM0033 Integration: Use vm0033-integration/ system for all methodology references
Link Guardian Docs: Follow guardian-integration/ patterns for existing documentation
For Methodology Implementers
Start with Artifacts: Use test artifacts and reference implementations as foundation
Validate Calculations: All implementations must match test artifact results exactly
Use Production Code: Reference er-calculations.js and AR-Tool-14.json for proven patterns
For Part Maintainers
Reference Shared Systems: Link to shared infrastructure rather than duplicating
Contribute Improvements: Enhance shared systems for all parts
Update Artifacts: Keep artifact collection current with platform changes
Integration with Parts
Each part should reference these shared systems:
Maintenance
Shared System Updates
Updates to shared systems benefit all parts automatically
Version control ensures consistency across handbook
Centralized maintenance reduces duplication
Artifact collection updated with Guardian platform evolution
Quality Assurance
Calculation Accuracy: All artifacts validated against methodology requirements
Guardian Compatibility: Production code tested in Guardian environment
Test Coverage: Comprehensive test cases covering all calculation scenarios
Documentation Quality: All artifacts include usage instructions and integration examples
Complete Shared Infrastructure: This comprehensive shared system provides templates, integration frameworks, and a complete artifact collection including production Guardian implementations, test data, and validation materials. Everything needed for methodology digitization is centralized here for consistency and efficiency.
Artifact Collection Highlights: The artifacts collection includes real production code (er-calculations.js), complete Guardian Tools (AR-Tool-14.json), official test cases (VM0033_Allcot_Test_Case_Artifact.xlsx), and Guardian-ready documents (final-PDD-vc.json) for comprehensive testing and validation.
Part VIII: Advanced Topics and Best Practices
Advanced integration techniques, troubleshooting procedures, and expert-level methodology implementation patterns
Part VIII covers advanced topics for expert-level methodology implementation, including sophisticated external system integration, comprehensive troubleshooting procedures, and best practices learned from production deployments.
Overview
Building on operational deployment from Part VII, Part VIII addresses complex integration scenarios, advanced troubleshooting techniques, and optimization strategies for large-scale methodology implementations serving thousands of users.
Production issues demand systematic troubleshooting and resolution procedures
Performance optimization enables scaling to enterprise-level deployments
Best practices prevent common pitfalls and ensure long-term success
Part VIII Structure
Bidirectional data exchange between Guardian and external platforms. Covers data transformation using VM0033's dataTransformationAddon block and external data reception using MRV configuration patterns from metered energy policies.
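To give a flavor of Chapter 27's material, below is a sketch of the kind of transformation script such a block runs. The `documents` input and `done` callback follow Guardian's custom-logic scripting conventions, and the field names are hypothetical; verify the exact contract against your Guardian version.

```javascript
// Illustrative transformation script (field names are hypothetical).
// `documents` and `done` are provided by the Guardian block's sandbox.
const rows = documents.map((doc) => {
  const subject = doc.document.credentialSubject[0];
  return {
    projectId: subject.id,
    reportingPeriod: subject.reportingPeriod,   // hypothetical field
    netEmissionReductions: subject.netERR       // hypothetical field
  };
});

// Hand the export-ready payload back to the policy workflow.
done(JSON.stringify(rows));
```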
Common problems encountered during methodology digitization and their solutions, with specific examples from VM0033 implementation. Covers debugging techniques, performance optimization, and issue resolution.
Prerequisites
From Previous Parts
Parts I-VII: Complete methodology implementation through production deployment
Experience with methodology operations and user management
Understanding of production system monitoring and maintenance
Technical Requirements
Advanced Guardian platform knowledge and API expertise
Experience with external system integration and data transformation
Understanding of production troubleshooting and debugging techniques
Learning Outcomes
After completing Part VIII, you will be able to:
Advanced Integration Mastery
Implement data transformation using dataTransformationAddon blocks with JavaScript
Configure external data reception using externalDataBlock and MRV patterns
Handle Guardian-to-external system data export and formatting
Set up automated monitoring data collection from external devices and systems
Expert Troubleshooting
Diagnose and resolve complex methodology implementation issues
Optimize performance for large-scale production deployments
Implement comprehensive monitoring and alerting systems
Handle edge cases and unusual integration scenarios
Best Practices Implementation
Apply proven patterns from successful methodology deployments
Avoid common pitfalls and implementation mistakes
Optimize for maintainability, scalability, and performance
Establish expert-level quality assurance and testing procedures
Implementation Timeline
Chapter 27 (External Integration): 3-4 hours
Advanced integration pattern implementation
Enterprise system connectivity and data transformation
Chapter 28 (Troubleshooting): 2-3 hours
Comprehensive troubleshooting procedures and issue resolution
Performance optimization and advanced debugging techniques
Total Part VIII Time: 5-7 hours for advanced mastery and expert-level implementation
Status
✅ Available - Part VIII chapters are complete and ready for use.
Completion: Part VIII completes the Methodology Digitization Handbook, providing comprehensive coverage from foundation concepts through expert-level implementation and troubleshooting.
Templates
Standard templates for consistent content structure across all handbook parts
Overview
These templates ensure consistent structure, formatting, and quality across all chapters in the Methodology Digitization Handbook (Parts I-VIII).
Available Templates
Standard structure for individual chapter sections with:
Reading Time Constraints: Specific time limits per template type
Dual Audience Focus: Content serves both Verra maintenance and newcomer learning
Practical Focus: Emphasis on actionable guidance over theory
Accuracy Requirements: All examples must be user-validated
Template Customization
Part-Specific Adaptations
Templates can be adapted for specific parts while maintaining core structure:
Part-specific learning objectives
Relevant Guardian documentation references
Appropriate VM0033 examples for the part's focus
Chapter-Specific Modifications
Individual chapters may modify templates for specific needs:
Additional sections for complex topics
Specialized validation procedures
Extended examples for difficult concepts
Custom formatting for technical content
Quality Assurance
Template Compliance Validation
Template Usage: All handbook content must follow these templates to ensure consistency, quality, and maintainability across all parts.
Azure B2C Single Sign-On (SSO) Integration Guide
Overview
Managed Guardian Service (MGS) supports Single Sign-On (SSO) through Azure B2C for organizations integrating their own front-end application with MGS. This capability is available as part of the Cortex integration pattern, allowing organizations to use their existing Azure B2C tenant for authentication. Azure B2C SSO is not available in the default MGS UI—it is supported only for integrated front ends.
Key Points
Azure B2C SSO can be enabled for any MGS tenant, but configuration is tenant-specific (one Azure B2C connection per tenant).
All Azure B2C application setup and management must be performed in the end user’s Azure portal before connecting to MGS.
Only tenant admins can configure Azure B2C SSO in MGS.
Prerequisites
An Azure B2C tenant and application registered in the organization’s Azure portal.
The following details from Azure B2C:
Issuer URL
Application (Client) ID
Enabling Azure B2C SSO in MGS
1. Create or Select a Tenant
Log into the MGS admin interface.
As a tenant admin, create a new tenant or select an existing tenant from the “Tenants” list.
2. Access the Azure B2C Tab
Click “Open” for the desired tenant.
Navigate to the Azure B2C tab in the tenant configuration.
3. Enable Azure B2C
Click the Enable button.
4. Enter Azure B2C Details
Fill in the following fields using information from your Azure B2C portal:
Application (Client) ID (from the Azure B2C registered application)
JWKS URL (public key set endpoint, typically available from Azure B2C)
Click Save Changes.
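For reference, the values you enter typically look like the following; the tenant name, policy name, and client ID below are placeholders, so take the real values from your own Azure B2C portal.

```javascript
// Placeholder Azure B2C details (shapes only; substitute your own values).
const azureB2cDetails = {
  // Application (Client) ID of the app registered in Azure B2C
  clientId: "00000000-0000-0000-0000-000000000000",
  // JWKS URL: the public key set endpoint published by Azure B2C
  jwksUrl:
    "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/" +
    "<policy-name>/discovery/v2.0/keys"
};
```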
5. Confirm Configuration
Once saved, MGS will use your Azure B2C settings for authentication to this tenant through your integrated/custom front end.
Notes
Azure B2C setup and application registration must be completed in your own Azure portal. MGS only connects to the already-configured Azure B2C app.
If you need to disable or update Azure B2C, use the Disable button or update the configuration fields as needed.
Azure B2C SSO is not available on the default MGS user interface; it is supported only through integrated or custom UI implementations following the Cortex integration pattern.
Troubleshooting
Ensure all URLs and IDs are entered correctly from your Azure B2C portal.
For issues with SSO login, verify the Azure B2C configuration and application permissions in Azure.
Contact your organization’s Azure administrator or MGS support for assistance.
Part VI: Integration and Testing
Complete methodology validation and production deployment preparation using Guardian's testing and API frameworks
Part VI transforms your methodology implementation from working calculations into production-ready systems. Using VM0033's patterns and Guardian's testing capabilities, you'll learn to validate complete workflows, automate operations through APIs, and prepare for large-scale deployment.
Overview
Building on the calculation logic from Part V, Part VI focuses on system-level validation and operational readiness. These chapters teach you to test methodology implementations as complete systems, automate workflows through Guardian's API framework, and validate production readiness using real-world scenarios.
Test complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities. Learn to create virtual users, simulate multi-year project lifecycles, and validate complex state transitions using VM0033's production patterns.
Key Learning:
Multi-role testing with virtual users (Project Proponent, VVB, Standard Registry)
Complete workflow simulation from PDD submission to token issuance
Production-scale data validation and performance testing
Automate methodology operations using Guardian's REST API framework. Build production-ready automation systems, integrate with external monitoring platforms, and create comprehensive testing suites for continuous validation.
Key Learning:
Guardian API authentication and endpoint mapping (see the authentication sketch after this list)
Automated data submission using VM0033 policy block APIs
Virtual user management for programmatic testing
Cypress integration for automated regression testing
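As a preview of the automation work, the sketch below follows Guardian's standard two-step login flow (endpoint paths assumed from Guardian's REST API; confirm them against your deployment's Swagger documentation):

```javascript
// Two-step Guardian authentication sketch (Node 18+, global fetch).
const BASE = "https://your-guardian-host/api/v1"; // hypothetical host

async function getAccessToken(username, password) {
  // Step 1: log in and receive a refresh token.
  const login = await fetch(`${BASE}/accounts/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password })
  }).then((r) => r.json());

  // Step 2: exchange the refresh token for a short-lived access token.
  const { accessToken } = await fetch(`${BASE}/accounts/access-token`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ refreshToken: login.refreshToken })
  }).then((r) => r.json());

  return accessToken; // send as "Authorization: Bearer <token>" on subsequent calls
}
```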
Prerequisites
From Previous Parts
Part I-II: Understanding of Guardian platform and methodology analysis
Part III: Production-ready schemas for data capture and processing
Part IV: Complete policy workflows with stakeholder role management
Part V: Working calculation logic with individual component testing
Technical Requirements
Guardian platform access with API capabilities
VM0033 policy imported and configured for dry-run testing
Development tools for API testing (Postman, curl, or similar)
Basic understanding of automated testing concepts
Learning Outcomes
After completing Part VI, you will be able to:
Testing Mastery
Multi-Role Testing: Create and manage virtual users for complete stakeholder workflow validation
Production Simulation: Test methodology implementations under realistic data volumes and user loads
Integration Validation: Ensure seamless operation between schemas, workflows, and calculations
Error Handling: Validate error conditions and edge cases across complete methodology workflows
API Integration Excellence
Automated Operations: Build production-ready automation systems using Guardian's API framework
External Integration: Connect methodology workflows with monitoring systems and external registries
Testing Automation: Create comprehensive testing suites for continuous validation and regression testing
Production Deployment: Prepare methodology implementations for large-scale operational deployment
Production Readiness
Scalability Validation: Confirm methodology implementations handle production user volumes and data processing
Operational Monitoring: Implement monitoring and alerting for production methodology operations
Maintenance Procedures: Establish procedures for ongoing methodology maintenance and updates
Stakeholder Readiness: Prepare documentation and training materials for methodology users
Success Metrics
For Methodology Developers
Confident deployment of methodology implementations in production environments
Automated testing reducing manual validation effort by >80%
Integration capabilities enabling connection with organizational systems
Scalable operations supporting hundreds of concurrent projects
For Technical Teams
Complete testing coverage validating all methodology workflows and calculations
API automation enabling programmatic methodology operations and integration
Production monitoring and alerting systems ensuring methodology reliability
Reduced operational overhead through automated workflow processing
Improved data quality through comprehensive validation and error handling
Enhanced user experience through reliable, scalable methodology implementations
Lower support burden through robust testing and error prevention
Implementation Timeline
Chapter 22 (End-to-End Testing): 3-4 hours
Multi-role testing framework setup and execution
VM0033 complete workflow validation
Production-scale testing and performance validation
Chapter 23 (API Integration): 2-3 hours
Guardian API authentication and endpoint mapping
Automated workflow development and testing
External system integration patterns
Total Part VI Time: 5-7 hours for complete integration and testing mastery
Getting Started
Begin with Chapter 22: End-to-End Policy Testing to establish comprehensive testing frameworks, then proceed to Chapter 23: API Integration and Automation to automate operations and prepare for production deployment.
Part VI completes your methodology digitization journey, transforming individual components into production-ready systems that scale to serve thousands of users while maintaining accuracy and compliance with methodology requirements.
Next Steps: After completing Part VI, your methodology implementation is ready for production deployment. Parts VII-VIII (coming soon) will cover deployment procedures, maintenance protocols, and advanced integration patterns.
Understand stakeholder ecosystem and roles
Grasp emission sources and carbon pools
Learn monitoring requirements and verification processes
Understand blockchain integration and user management
<!-- In each part's README.md -->
## Content Development Guidelines
This part follows the shared handbook infrastructure:
- **Templates**: [Shared Templates](../_shared/templates/README.md)
- **VM0033 Integration**: [VM0033 System](../_shared/vm0033-integration/README.md)
- **Guardian Integration**: [Guardian System](../_shared/guardian-integration/README.md)
- **Artifacts Collection**: [Test Data & Implementation Examples](../_shared/artifacts/README.md)
## Testing & Validation
All examples and implementations in this part are validated against:
- **Official Test Cases**: VM0033_Allcot_Test_Case_Artifact.xlsx
- **Production Code**: er-calculations.js and AR-Tool-14.json
- **Guardian Integration**: final-PDD-vc.json and vm0033-policy.json
## Template Compliance Checklist
For each chapter section:
- [ ] Follows appropriate template structure
- [ ] Includes all required elements
- [ ] Uses proper GitBook formatting
- [ ] Marks user input requirements
- [ ] Links to Guardian documentation appropriately
- [ ] Meets reading time constraints
- [ ] Serves dual audience effectively
Step By Step Process
The following steps generate the Web3.Storage API values:
Create an account on https://web3.storage. Specify an email address you have access to, since account authentication is based on email validation. Follow the registration process through to the end, choose a billing plan appropriate for your needs (e.g., 'starter'), and enter your payment details.
Install w3cli as described in the corresponding section of the web3.storage documentation.
You'll need Node version 18 or higher and NPM version 7 or higher to complete the installation.
Methodology analysis and digitization planning using the approach developed during VM0033 digitization
Part II transforms the foundation established in Part I into practical, actionable digitization plans through analysis and planning. Building directly on your understanding of methodology digitization concepts, VM0033 domain knowledge, and Guardian platform capabilities, this section shares the workflow developed during the first VM0033 methodology digitization project.
The four chapters in Part II follow the sequence we found most effective during VM0033 digitization: methodology decomposition → equation mapping → tool integration → test artifact development. Each chapter builds incrementally toward the technical implementation phases that will come in Part III, ensuring you have the analysis and planning foundation needed for successful digitization.
The Approach We Developed
Part II follows the approach developed during VM0033 digitization work. This workflow emerged from experience working through environmental methodology requirements and represents what was learned about moving from methodology understanding to implementation readiness.
The Analysis Approach:
Methodology Analysis (Chapter 4): Break down methodology documents into manageable components using structured reading techniques
Mathematical Component Extraction (Chapter 5): Use recursive analysis to identify all equations and parameter dependencies
External Dependencies Integration (Chapter 6): Handle CDM tools, VCS modules, and other external calculation resources
Validation Framework Creation (Chapter 7): Create test spreadsheets that serve as validation benchmarks for digitized implementations
This sequence ensures that no critical elements are missed while building toward implementation. Each step validates and builds upon previous work, reducing the risk of discovering missing requirements during technical development.
Why This Approach Works: Starting with broad understanding, then progressively narrowing focus to specific components, handling external dependencies, and finally creating validation frameworks helps manage methodology complexity while ensuring important requirements aren't missed. This natural problem-solving approach works well for methodology digitization by keeping each stage manageable while building toward implementation.
Chapter Progression and Learning Objectives
Focus: Approach to reading and analyzing methodology PDFs, identifying key components, stakeholders, and workflow requirements.
What You'll Learn: Techniques for breaking down methodologies like VM0033 into digitization-ready components. You'll learn structured reading approaches that focus on core methodology sections, parameter extraction techniques, and recursive analysis fundamentals that serve as the foundation for all subsequent work.
VM0033 Application: Step-by-step analysis of VM0033's structure, demonstrating how to identify and prioritize the most critical sections for digitization. You'll see how VM0033's complexity can be decomposed into manageable components while maintaining the integrity of the overall methodology requirements.
Focus: Mathematical component extraction using recursive analysis techniques starting from final emission reduction formulas.
What You'll Learn: The recursive approach to equation mapping that helps ensure no mathematical dependencies are missed. You'll learn parameter classification systems, dependency tree construction, and documentation techniques that create calculation frameworks ready for implementation.
VM0033 Application: Mapping of VM0033's emission reduction equations, including baseline emissions, project emissions, and leakage calculations. You'll work through actual VM0033 equations using the recursive approach, building dependency trees that capture all parameter relationships.
Focus: Approach to handling external tools and modules that methodologies reference, creating unified calculation frameworks.
What You'll Learn: Integration techniques for CDM tools, VCS modules, and other standardized calculation components. You'll learn to create cohesive calculation systems that integrate multiple external dependencies while managing versioning and compatibility requirements.
VM0033 Application: Integration of the tools implemented for VM0033, including AR-Tool14 for biomass calculations, AR-Tool05 for fossil fuel emissions, and the AFOLU non-permanence risk tool for risk assessment. You'll see how to create unified frameworks that incorporate external calculation resources while maintaining VM0033's specific requirements.
Focus: Creating test spreadsheets that serve as validation benchmarks for digitized methodology implementations.
What You'll Learn: How to work with Verra to develop test scenarios using real Allcot project data, covering all methodology pathways and creating input datasets. You'll learn to create test artifacts that serve as accuracy standards for digital policy validation.
VM0033 Application: Development of VM0033 test spreadsheet with multiple project scenarios covering different wetland types and restoration activities. Using the actual VM0033 test case artifact, you'll understand how test frameworks validate digitized methodologies.
Building on Part I Foundation
Part II assumes you have completed Part I and builds directly on that foundation. The concepts introduced in Part I - methodology digitization principles, VM0033 domain knowledge, and Guardian platform capabilities - form the essential context for the analysis and planning techniques introduced in Part II.
Progressive Technical Depth: While Part I focused on understanding and context, Part II introduces the technical rigor needed for implementation. However, the technical depth remains focused on analysis and planning rather than coding or configuration. You'll work with methodology content, equation structures, and test frameworks, but the implementation details come in Part III.
Practical Industry Focus: Every technique in Part II comes from real-world digitization projects. The approaches, recursive analysis methods, and integration techniques represent practices used successfully in methodology implementations like VM0033.
Part II Completion and Part III Readiness
Completing Part II ensures you have the analysis and planning foundation needed for Part III (Schema Design and Development). The approach developed through these four chapters provides the detailed understanding required for technical implementation.
What You'll Have Accomplished:
Methodology analysis skills applicable to any environmental methodology
Mathematical component extraction using recursive techniques
External tool integration planning and unified framework design
Validation framework with test artifacts serving as accuracy benchmarks
Preparation for Part III: The detailed analysis and planning work in Part II directly supports the schema design and policy workflow development covered in Part III. The parameter classifications, dependency trees, and test artifacts created in Part II become the foundation for Guardian schema development and policy implementation.
Time Investment and Learning Approach
Part II is designed for focused, practical learning with each chapter requiring 15-20 minutes of reading time. The total investment of approximately 60-80 minutes provides comprehensive analysis and planning capabilities that significantly reduce the time required for technical implementation phases.
Recommended Approach: Complete Part II chapters sequentially, as each builds on previous analysis work. The systematic progression ensures you develop comprehensive analysis capabilities while maintaining practical focus on implementation preparation.
The industry techniques introduced in Part II represent knowledge gained through real-world methodology digitization experience. Mastering these systematic approaches provides the foundation for efficient, accurate methodology implementation across any environmental standard or framework.
Chapter Navigation
| Chapter | Title | Focus | Reading Time |
| --- | --- | --- | --- |
| 4 | Methodology Analysis | Structured reading and decomposition of methodology PDFs | 15-20 minutes |
| 5 | Mathematical Component Extraction | Recursive extraction of equations and parameter dependencies | 15-20 minutes |
| 6 | Tools and Modules Integration | Integrating CDM tools, VCS modules, and external resources | 15-20 minutes |
| 7 | Test Artifact Development | Building validation spreadsheets and accuracy benchmarks | 15-20 minutes |
Sequential Learning: Complete chapters in order for optimal learning progression and systematic skill development.
Ready to Begin: With Part I foundation complete, you're prepared for the systematic analysis and planning techniques in Part II. Start with Chapter 4 to begin learning the methodology digitization approach we developed.
Chapter 12: Schema Testing and Validation Checklist
After defining schemas, you need to test and validate them before deployment. This chapter provides a practical checklist to ensure your schemas work correctly and provide good user experience.
Schema Validation Checklist
1. Set Default, Suggested, and Test Values
Add values to help users and enable testing. These are helpful but not mandatory.
In Guardian Schema Editor:
Default Value: Pre-filled value that appears when users first see the field
Suggested Value: Recommended value shown to guide users
Test Value: Value used for testing schema functionality
Example Values Setup:
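A hypothetical setup for a numeric field might look like this (the field name and values are illustrative):

```javascript
// Illustrative value configuration for a "project area" field.
const projectAreaHaField = {
  defaultValue: 0,       // pre-filled when users first open the form
  suggestedValue: 100,   // recommended value shown as guidance
  testValue: 42.5        // used when exercising the schema in preview/testing
};
```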
Benefits:
Users see helpful starting values
Testing becomes easier with pre-filled data
New users understand expected input formats
2. Preview and Test Schema Functionality
Use Guardian's preview feature to test your schema before deployment.
Preview Testing Process:
Click "Preview" in Guardian schema interface
Fill out form fields using test values
Test conditional logic by changing enum selections
Verify required field validation works
Test These Elements:
All enum selections show/hide correct fields
Required fields prevent form submission when empty
Field types validate input correctly (numbers, dates, emails)
Help text displays properly
3. Update Schema UUIDs in Policy Workflows
Insert your new schema UUIDs where documents are requested or listed in policy workflow blocks.
UUID Replacement Process:
Copy the new schema UUID from the JSON schema (click the hamburger menu next to the schema row, then click "Schema")
Open policy workflow configuration
Find blocks that use old schema references:
requestVcDocumentBlock
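For orientation, a schema reference inside such a block looks roughly like this (structure simplified; the UUID is a placeholder):

```javascript
// Simplified sketch of a policy block that references a schema by UUID.
const requestBlock = {
  blockType: "requestVcDocumentBlock",
  schema: "#b4076e2c-0000-0000-0000-000000000000", // paste the new schema UUID here
  idType: "UUID"
};
```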
4. Verify Test Artifact Completeness
Ensure no fields are missing compared to your test artifact design from Part II.
Completeness Check:
Open your test artifact spreadsheet from Part II analysis
List all required parameters from methodology
Check each parameter has corresponding schema field
Verify calculation fields capture all intermediate results
Missing Field Checklist:
All methodology parameters have schema fields
Calculation intermediate results are captured
Evidence requirements have file upload fields
Conditional parameters appear based on method selection
5. Optimize Logical Flow and User Experience
Organize fields and sections for intuitive user experience.
UX Organization Principles:
Logical Grouping: Group related fields together
Progressive Disclosure: Basic information first, complex details later
Clear Labels: Use terminology familiar to domain experts
Helpful Ordering: Required fields before optional ones
Calculation parameters grouped by methodology section
Evidence fields grouped near related data fields
Example Logical Flow: basic project information → methodology and approach selections → calculation parameters grouped by methodology section → supporting evidence uploads.
Once schemas pass this validation checklist, they're ready for integration into Guardian policy workflows. Well-tested schemas provide:
Smooth user experience for data entry
Accurate data types for calculations
Proper validation to prevent errors
Clear organization for efficient workflows
The next part of the handbook covers policy workflow design, where these validated schemas integrate with Guardian's policy engine to create complete methodology automation.
Chapter 19: Formula Linked Definitions (FLDs)
Understanding Guardian's parameter relationship framework for environmental methodologies
This chapter details the use of Formula Linked Definitions (FLDs) and how they enable users to view and cross-check human-readable mathematical representations of the customLogicBlock calculations whenever they look at relevant schemas, policies, or documents with data. It also describes how to create Formula Linked Definitions by linking the relevant fields in schemas with the parameters in the methodology's mathematical equations.
Once the FLDs are created and the relevant Verifiable Credentials (VCs) or schemas are viewed in the published policy, the formulas are displayed alongside the relevant fields, enabling users such as VVBs and auditors to verify that the formulas are in sync with the methodology and that the calculations are accurate.
When navigating to "Manage Formulas" from the sidebar in Guardian, you can choose to create a new formula or import one from a .formula file. This documentation walks through creating a new formula (FLD) from scratch.
Once you click on create a new formula, you will see three tabs:
Overview Tab
In this tab, you enter basic details about your formula, such as its name, description, and the policy it belongs to.
Edit Formula
There are four types of items available for composing a formula:
Constants are fixed values that can be used in a formula. This item contains three fields for the constant:
Name
Description
Value
Variables are the data coming in from documents. A variable can be linked to a particular field in the policy's schemas or to a component of another FLD formula. Along with the name and description, this item also has a
Link (Input) field, where the particular field from the schemas, or a component from other formulas (FLDs), can be added.
The Formulas item can be used to input the mathematical formula. Along with name and description fields, the formula item also has a
Formula field, where the mathematical formula can be added with the built-in math keyboard or in LaTeX form
Link (Output) field, which indicates the field in the document schema where the result of the calculation defined in the customLogicBlock is located
Text is a component that allows describing the calculation algorithm without using mathematical notation. This component does not require any specific syntax. The Text item contains the following fields:
Name of the text
Description of the text
Using a combination of the above four items, a Formula Linked Definition can be generated that explains the code and calculations that happen in the customLogicBlock. The best approach is to work from the bottom up: create all the small formulas and the variables/constants they depend on, then work your way up to the final formula that represents the main formula of the methodology. A formula item can be used inside another formula, creating a hierarchy that lets end users track how each component is calculated.
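For example, the top-level formula item for a methodology's net result might be entered in LaTeX form along the following lines (an illustrative emission-reduction structure, not VM0033's exact notation), with the baseline, project, and leakage terms each defined as their own formula items lower in the hierarchy:

```latex
ERR_t = BE_t - PE_t - LK_t
```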
For better readability, it is recommended to add relevant names and descriptions to the above items.
Attach Files
Here you can attach all the relevant documents concerning the methodology that can help with verification of the formulas. This lets users (VVBs, auditors, etc.) look up the documents in Guardian itself instead of finding them on the Internet. Attached files are shown to users in the Files tab when the published document is viewed (refer to Viewing Formula Linked Definitions).
Viewing Formula Linked Definitions
Once the policy and the formulas are published, every relevant document (VC) will have a button beside the linked fields to view the FLD. Once clicked, the formula display dialog shows all linked formulas and provides facilities to navigate through their components. In the dialog, all the relationships that were added can be seen along with the values filled in by the user, making verification of the calculations and formulas easier.
Along with the formulas, there is a Files tab showing all the files attached by the FLD developer (usually the policy developer).
Chapter Summary
Formula Linked Definitions provide a structured approach to managing parameter relationships in Guardian methodologies, so that users can cross-verify that the formulas used and the calculations behind the scenes (customLogicBlock) are correct.
Key takeaways:
FLDs enable users to view human readable mathematical representations of the calculations taking place in the CustomLogicBlock
VM0033 offers clear examples of parameter relationships suitable for FLD implementation
FLDs allow users to browse associations between fields in schemas/documents and the corresponding variables in the displayed math formulas.
The Guardian platform allows users to navigate the hierarchy of formulas and the data they represent, and to view the mapping of formula variables to schema fields.
Next Steps
Chapter 20 will demonstrate implementing specific AR Tool calculation patterns, showing how the parameter relationships we've identified in FLDs translate into working calculation code for biomass and soil carbon assessments.
Guardian's schema system is more sophisticated than simple data collection forms. When implementing VM0033, we needed to translate over 400 structured data components for wetland restoration methodology requirements into Guardian's schema architecture. This required understanding how schemas integrate with Guardian's broader platform capabilities while maintaining usability for different stakeholder types.
This chapter demonstrates schema development foundations using VM0033 implementation as a concrete example. VM0033's complexity provides practical examples of architectural patterns, design principles, and implementation approaches that apply to environmental methodology digitization more broadly.
The schema architecture establishes the foundation for translating methodology requirements from Part II analysis into working Guardian data structures. Rather than building everything at once, establishing architectural understanding first enables building schemas that handle complexity while remaining practical for real-world use.
VM0033 Schemas
Guardian Schema System Foundation
Guardian schemas serve multiple functions beyond data collection. They define data structures, generate user interfaces, implement validation rules, support calculation frameworks, and create audit trails through Verifiable Credentials integration.
Guardian Schema Functions:
Data Structure Definition: Specify exactly what information gets collected and how it's organized
User Interface Generation: Automatically create forms that stakeholders use for data input
Validation Rule Implementation: Ensure data meets methodology requirements before acceptance
Calculation Framework Support: Provide data structures that calculation logic operates on
VM0033 demonstrates how these functions work together. The methodology's complex calculation requirements needed schemas that could capture parameter data accurately, generate usable interfaces for Project Developers and VVBs, validate data according to VM0033 specifications, and support calculation workflows for emission reduction quantification.
JSON Schema Integration: Guardian builds on JSON Schema specifications for data structure definitions. Every parameter identified in Part II analysis translates into JSON Schema field definitions with appropriate types, validation rules, and relationships.
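For instance, a numeric methodology parameter might translate into a field definition along these lines (names and constraints are illustrative, not taken from the production VM0033 schema):

```javascript
// Illustrative JSON Schema fragment for a methodology parameter field.
const biomassDensityField = {
  title: "BD - Biomass density",
  description: "Biomass density of vegetation, in t d.m./ha",
  type: "number",
  minimum: 0 // validation rule: negative densities are rejected on input
};
```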
Verifiable Credentials Structure: Each schema generates Verifiable Credentials (VCs) that create cryptographic proof of data integrity. For VM0033, this means every project submission, monitoring report, and verification result becomes an immutable record with full audit trail capabilities.
Schema Content Classifications
Guardian organizes schema content into five distinct types, each serving different purposes in methodology digitization. VM0033 uses all five types across its schema implementation:
general-data: Basic project information, stakeholder details, geographic data, and descriptive content that doesn't require complex validation. VM0033's project description sections use general-data for project locations, implementation schedules, and stakeholder consultation results.
parameter-data: Methodology-specific parameters with equations, units, data sources, and justifications. These components implement the mathematical framework from Part II analysis. VM0033's parameter-data includes biomass density values, emission factors, and quantification approach selections.
validation-data: Calculation results, emission reduction outcomes, and verification results that require special audit trail handling. VM0033's validation-data captures final carbon stock calculations, emission reduction totals, and VVB verification decisions.
tool-integration: External tool implementations including AR Tools, VCS modules, and methodology-specific calculation frameworks. VM0033 integrates AR Tool 5 for fossil fuel emissions and AR Tool 14 for biomass calculations through tool-integration components.
guardian-schema: Complex nested schemas and advanced Guardian features requiring sophisticated configuration. VM0033's monitoring period management and multi-year calculation tracking use guardian-schema features for handling temporal data relationships.
This classification system helps organize complex methodologies like VM0033 while ensuring each component uses appropriate Guardian features and validation approaches.
Two-Part Schema Architecture
For VM0033 we implemented a two-part schema structure that separates project description from calculation implementation. This pattern worked because methodologies have foundational project information that establishes context, and calculation machinery that processes that information into emission reduction or removal results.
Project Description Foundation
The Project Description schema establishes all foundational project information while supporting multiple certification pathways. For VM0033, this meant supporting both VCS-only projects and VCS+CCB projects through conditional logic that adapts the interface based on certification selection.
Stakeholder Information: Project developer details, VVB assignments, and community consultation documentation
Methodology Implementation
VM0033's Project Description schema contains 3,779 rows of structured data. This demonstrates how complex environmental methodologies require extensive information capture while maintaining usability for stakeholder workflows.
Why This Foundation Approach Works: Establishing clear project context before calculations helps stakeholders understand what they're implementing and why. The foundation information also provides the context that calculation engines need to process parameters correctly.
Calculations and Parameter Engine
The Calculations section implements VM0033's computational requirements through structured parameter management and automated calculation workflows. This architecture handles the recursive calculation dependencies identified during Part II analysis.
Calculation Engine Components:
Monitoring Period Inputs: Time-series data collection framework with 47 structured fields handling annual data requirements across 100-year crediting periods. This component manages the temporal aspects of VM0033's monitoring requirements.
Annual Input Parameters: Year-specific parameter tracking with 44-50 configured fields supporting VM0033's requirement for annual updates to key variables like biomass density, emission factors, and area measurements.
Baseline Emissions Calculation: 204-field calculation engine implementing VM0033's baseline scenario quantification including soil carbon stocks, biomass calculations, and greenhouse gas emissions across all relevant carbon pools.
Project Emissions Calculation: 196-203 field calculation framework processing project scenario emissions with restoration activity impacts, modified emission factors, and project-specific boundary conditions.
Net ERR Calculation: 21-field validation engine that processes baseline and project calculations into final emission reduction results, including leakage accounting, uncertainty deductions, and buffer requirements.
This calculation architecture handles VM0033's complex dependencies where final results depend on annual calculations, which depend on monitoring data, which depend on project-specific parameters established in the Project Description foundation.
Guardian Field Mapping Patterns
Translating methodology parameters into Guardian field configurations requires patterns that preserve methodology integrity while generating usable interfaces. VM0033's implementation established consistent approaches for different types of methodology content.
Standard Parameter Field Structure
Every methodology parameter from Part II analysis translates into Guardian fields using a consistent structure that captures all necessary information for implementation and validation.
Required Parameter Fields:
Description: Clear explanation of what the parameter represents and how it's used in methodology calculations
Equation: Reference to specific methodology equations where the parameter appears
Source of data: Methodology requirements for how this parameter should be determined
For example, VM0033's BD (Biomass Density) parameter implementation:
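The sketch below shows how such an entry might be structured; the equation reference and data-source text are illustrative rather than the production configuration:

```javascript
// Hypothetical parameter entry following the standard structure above.
const bdParameter = {
  name: "BD",
  description: "Biomass density used to convert vegetation volume to biomass",
  equation: "Referenced in the aboveground biomass carbon stock equations", // illustrative
  sourceOfData: "Field measurements or published values, per VM0033 guidance",
  unit: "t d.m./ha"
};
```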
This pattern ensures that every parameter implementation maintains full methodology traceability while providing clear guidance for data collection and validation.
Conditional Logic Implementation Patterns
VM0033's multiple calculation pathways required conditional logic that shows relevant fields based on user selections while maintaining methodology coverage.
Conditional Logic Examples from VM0033:
Certification Type Selection:
Selecting "VCS v4.4" shows core VCS requirements
Selecting "VCS + CCB" adds community and biodiversity benefit documentation requirements
Each pathway maintains methodology compliance while avoiding unnecessary complexity
Quantification Approach Selection:
"Direct method" shows field measurement data entry forms
Each method implements VM0033's approved calculation approaches
Soil Emission Calculation Selection:
CO2 approach selection determines which soil carbon stock calculation methods appear
CH4 and N2O approach selections control emission factor parameter visibility
Each combination implements VM0033's flexible calculation framework
This conditional structure ensures users see only methodology-relevant fields based on their project characteristics, reducing complexity while ensuring requirements coverage.
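In pseudocode terms, the certification-type selection above behaves like this (a sketch of the visibility rule, not Guardian's internal API):

```javascript
// Sketch of enum-driven section visibility for the certification selection.
function visibleSections(certificationType) {
  const sections = ["coreVcsRequirements"];
  if (certificationType === "VCS + CCB") {
    sections.push("communityBenefits", "biodiversityBenefits");
  }
  return sections;
}
```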
UX Patterns
Progressive Disclosure: Complex calculation parameters appear only after basic project information completion. This prevents overwhelming initial experiences while ensuring users understand project context before diving into technical details.
Role-Based Interface: Different stakeholder roles see appropriate field sets:
Project Developers see data entry requirements with guidance
VVBs see verification-focused interfaces with tabs for validation & verification reports
Standard Registry sees approval-focused documentation with key decision points highlighted
Contextual Help: We're working on a new feature to enable field-level methodology references, calculation explanations and source justifications in Guardian schemas.
Validation Checks: Real-time validation feedback helps users understand data requirements and correct issues immediately rather than discovering problems during submission review.
Next Steps
This chapter established the architectural foundation for Guardian schema development using patterns demonstrated through VM0033's production implementation. The two-part architecture, field mapping patterns, and other techniques provide the framework for implementing granular data collection effectively.
The next chapter applies these principles to PDD schema development, demonstrating how to implement project description requirements and calculation frameworks using the patterns and techniques established here.
Chapter 6: Tools and Modules Integration
One of the most challenging aspects of VM0033 digitization was handling the external calculation tools that the methodology references. These aren't just simple formulas; they're complete calculation systems developed by other organizations, with their own parameter requirements, validation rules, and output formats. This chapter shares our experience integrating the three tools we implemented: AR-Tool05 for fossil fuel emissions, AR-Tool14 for biomass calculations, and the AFOLU non-permanence risk tool.
The integration challenge went beyond just implementing calculations. Each tool was designed as a standalone system, but we needed to make them work seamlessly within VM0033's calculation framework while maintaining their original logic and validation requirements. The approach we developed balances faithful implementation of tool requirements with practical usability in the Guardian platform.
Understanding External Tool Dependencies
When we first analyzed VM0033, we found references to numerous CDM tools and VCS modules scattered throughout the methodology. Initially, this seemed overwhelming - how could we possibly implement all these external systems? The recursive analysis from Chapter 5 helped us understand which tools were actually needed for our mangrove restoration focus.
VM0033's Tool References: The methodology mentions over a dozen external tools, but our boundary condition analysis revealed that the Allcot ABC Mangrove project only required three:
AR-Tool05: For calculating fossil fuel emissions from project activities
AR-Tool14: For estimating carbon stocks in trees and shrubs
AFOLU Non-Permanence Risk Tool: For assessing project risks that might reverse carbon benefits
Why Only These Three: The Allcot project boundary decisions eliminated the need for most other tools. No fire reduction premium meant no fire-related tools. Mineral soil only meant no peat-specific calculations. Simple planting activities meant minimal fossil fuel calculations.
Tool Integration Strategy: Rather than trying to implement complete standalone versions of each tool, we focused on integrating the specific calculation procedures that VM0033 actually uses from each tool.
Reference Materials: For tool integration context, see the relevant reference implementations in our Artifacts Collection, which also contains real project data for validation (covered in Chapter 7).
Tool vs. Methodology Calculations
Distinguishing Tool Logic from Methodology Logic: VM0033 uses tool calculations as components within its larger framework. For example, AR-Tool14 calculates biomass for a single tree or plot, but VM0033 scales this across multiple strata and time periods. Understanding this distinction helped us design integration that preserves tool accuracy while meeting methodology requirements.
Data Flow Management: Each tool expects inputs in specific formats and produces outputs that need to be transformed for use in VM0033 calculations. We had to map data flows carefully to ensure information passes correctly between tool calculations and methodology calculations.
AR-Tool05: Fossil Fuel Emission Calculations
AR-Tool05 handles emissions from fossil fuel use in project activities. Even though the Allcot project excludes fossil fuel emissions (mangrove planting doesn't require heavy machinery), we implemented this tool because it's commonly needed in other restoration projects.
Tool Purpose: AR-Tool05 provides standardized approaches for calculating CO₂ emissions from equipment, vehicles, and energy use during project implementation. This includes direct fuel combustion and indirect emissions from electricity consumption.
Integration Challenge: AR-Tool05 is designed as a comprehensive energy accounting system, but VM0033 only needs specific emissions calculations. We had to extract the relevant calculation procedures while maintaining the tool's validation logic.
Key Calculation Components We Implemented:
Direct Combustion Emissions: Calculate CO₂ from fuel burned in vehicles and equipment using fuel consumption data and standard emission factors.
Equipment-Specific Calculations: Different equipment types (boats, trucks, generators) have different fuel consumption patterns and emission factors that the tool accounts for systematically.
Activity-Based Scaling: The tool calculates emissions per activity (hours of operation, distance traveled, area covered) which VM0033 then scales across project implementation schedules.
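As a minimal sketch of the direct-combustion arithmetic (the emission factor below is a commonly cited approximation, not an official AR-Tool05 value):

```javascript
// Illustrative direct-combustion calculation: fuel volume times emission factor.
const DIESEL_KG_CO2_PER_LITRE = 2.68; // approximate factor; use the tool's value

function dieselCombustionTonnesCO2(litresConsumed) {
  return (litresConsumed * DIESEL_KG_CO2_PER_LITRE) / 1000; // kg -> tonnes
}

// Example: a boat consuming 500 litres over a monitoring year
// emits roughly 1.34 t CO2.
console.log(dieselCombustionTonnesCO2(500));
```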
AR-Tool05 Implementation Approach
Simplified Parameter Collection: Instead of implementing AR-Tool05's complete equipment catalog, we focused on the equipment types commonly used in mangrove restoration: boats for site access, small equipment for planting, and vehicles for transportation.
Validation Logic: AR-Tool05 includes validation rules for fuel consumption rates and emission factors. We preserved this validation because it catches data entry errors that could significantly affect results.
Output Integration: AR-Tool05 produces total CO₂ emissions that get added to VM0033's project emission calculations. The integration required unit conversions and time period alignment with VM0033's annual calculation cycles.
AR-Tool14: Biomass and Carbon Stock Calculations
AR-Tool14 is central to mangrove restoration because it provides the standardized allometric equations for calculating carbon storage in trees and shrubs. This tool became one of our most important integrations because it directly affects the project's carbon benefit calculations.
Tool Purpose: AR-Tool14 contains allometric equations that estimate biomass from tree measurements (diameter, height, species). These equations were developed from extensive field research and provide standardized approaches for different forest types and species groups.
Why This Tool Matters: Without AR-Tool14, every project would need to develop its own biomass equations, which is expensive and time-consuming. The tool provides scientifically validated equations that are accepted by carbon standards worldwide.
VM0033 Integration Points: VM0033 uses AR-Tool14 calculations in several places:
Baseline biomass estimation for existing vegetation
Project biomass growth projections over time
Above-ground and below-ground biomass calculations
Dead wood and litter biomass when included
AR-Tool14 Implementation Details
Species-Specific Equations: AR-Tool14 includes different allometric equations for different species groups. For mangrove restoration, we implemented equations specific to tropical wetland species that match the restoration targets in the Allcot project.
Multi-Component Calculations: The tool calculates separate estimates for above-ground biomass, below-ground biomass, dead wood, and litter. VM0033 uses these component estimates in different parts of its calculation framework.
Growth Projection Logic: AR-Tool14 provides approaches for projecting biomass growth over time using diameter increment data. This became critical for VM0033's long-term carbon benefit projections.
Parameter Requirements We Mapped:
Tree diameter at breast height (DBH) measurements
Tree height measurements for species without height-specific equations
Species identification or species group classification
Site condition factors (soil type, climate region, management intensity)
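Given these inputs, the calculation has the general shape sketched below; the coefficient and exponent are placeholders, since the actual values come from the AR-Tool14 equation prescribed for your species group:

```javascript
// Illustrative allometric equation of the form AGB = a * rho * DBH^b.
// `a` and `b` are placeholder values, not AR-Tool14 coefficients.
function abovegroundBiomassKg(dbhCm, woodDensityGPerCm3) {
  const a = 0.25;
  const b = 2.46;
  return a * woodDensityGPerCm3 * Math.pow(dbhCm, b);
}
```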
Handling AR-Tool14 Complexity
Equation Selection Logic: AR-Tool14 contains dozens of allometric equations for different species and conditions. We implemented selection logic that chooses appropriate equations based on user-provided species and site information.
Unit Management: The tool uses various units for different equations (DBH in cm, height in m, biomass in kg or tons). Our implementation handles unit conversions automatically to prevent errors.
Validation and Error Handling: AR-Tool14 includes validation rules for measurement ranges and species applicability. We preserved these validations because they prevent calculation errors from invalid input data.
AFOLU Non-Permanence Risk Assessment
The AFOLU (Agriculture, Forestry, and Other Land Use) non-permanence risk tool assesses the likelihood that carbon benefits might be reversed due to various risk factors. This tool was essential for VM0033 because it determines buffer pool contributions that affect final credit calculations.
Tool Purpose: AFOLU evaluates project risks across multiple categories (natural disasters, management failures, political instability, economic factors) and calculates a risk score that determines what percentage of credits must be held in buffer pools.
Why Risk Assessment Matters: Carbon projects can lose stored carbon through storms, fires, disease, or management changes. The AFOLU tool provides standardized risk assessment that ensures projects contribute appropriately to insurance buffer pools.
Integration with VM0033: VM0033 uses AFOLU risk scores to calculate buffer pool contributions that reduce the net credits a project can claim. Higher risk scores mean higher buffer contributions and fewer credits available for sale.
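The arithmetic that connects a risk score to creditable volume is simple; here is a sketch (the scoring itself and any minimum buffer percentage are defined by the AFOLU tool):

```javascript
// Sketch: buffer pool contribution reduces the net credits a project can claim.
function netCreditsAfterBuffer(grossCredits, riskScorePercent) {
  const bufferContribution = grossCredits * (riskScorePercent / 100);
  return grossCredits - bufferContribution;
}

// Example: 10,000 gross credits at a 15% risk score leaves 8,500 net credits.
console.log(netCreditsAfterBuffer(10000, 15));
```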
AFOLU Implementation Approach
Risk Category Assessment: AFOLU evaluates risks across multiple standardized categories. For mangrove restoration, the most relevant categories include:
Management and financial risks (funding stability, technical capacity)
Market and political risks (land tenure, regulatory changes)
Scoring Integration: AFOLU produces risk scores that feed into VM0033's buffer pool calculations. We implemented the scoring logic while simplifying the user interface to focus on risks most relevant to mangrove restoration.
Project-Specific Customization: The tool allows project-specific risk assessments based on local conditions. Our implementation guides users through risk evaluation while maintaining consistency with AFOLU's standardized approaches.
Creating Unified Integration Framework
Rather than implementing three separate tools, we designed a unified integration framework that manages data flows between tools and VM0033 calculations while maintaining each tool's specific requirements.
Shared Parameter Management: Many parameters are used by multiple tools. For example, tree species information affects both AR-Tool14 biomass calculations and AFLOU risk assessments. Our framework ensures parameter consistency across tool integrations.
Calculation Sequencing: Some tool calculations depend on outputs from other tools. Our framework manages calculation sequences to ensure data is available when needed while handling dependencies gracefully.
Validation Coordination: Each tool has its own validation requirements, but some validations overlap or conflict. We designed validation logic that satisfies all tool requirements while providing clear feedback to users about any issues.
Framework Benefits
Consistent User Experience: Users interact with a single interface that handles all tool integrations rather than switching between different tool interfaces.
Data Quality Assurance: The unified framework ensures data consistency across all tool calculations and catches errors that might arise from parameter mismatches between tools.
Maintenance Efficiency: Updates to tool calculations or requirements can be managed in one place rather than updating multiple separate integrations.
Practical Integration Lessons
Start with Core Functionality: Our initial approach tried to implement complete tool functionality, which was overwhelming. It worked much better to start with the specific functions VM0033 actually uses and expand from there.
Preserve Tool Validation: Each tool's validation logic exists for good reasons - usually to prevent calculation errors or inappropriate application. Preserving this validation prevented problems during implementation and ongoing use.
Plan for Tool Updates: CDM tools and VCS modules get updated periodically. We designed our integration to accommodate updates without requiring complete reimplementation.
Test with Known Results: Each tool typically includes example calculations or test cases. We used these to validate our integration implementation before connecting it to VM0033 calculations.
Document Integration Decisions: When tools provide multiple calculation options, we documented which options we implemented and why. This helped with maintenance and troubleshooting later.
Integration Testing and Validation
Tool-Level Testing: We first tested each tool integration separately using the tool's own test cases and examples to ensure calculation accuracy.
VM0033 Integration Testing: After individual tool testing, we tested the complete integration using VM0033 calculation examples to ensure data flows correctly through the full calculation chain.
Cross-Tool Consistency: We tested scenarios where multiple tools use the same input parameters to ensure consistent results and catch parameter handling errors.
Edge Case Testing: Each tool handles edge cases (unusual measurements, boundary conditions) differently. We tested these scenarios to ensure graceful handling across the integrated system.
From Tool Integration to Test Artifacts
The tool integration work creates the foundation for comprehensive test artifact development in Chapter 7. Understanding how tools connect to VM0033 calculations enables creating test scenarios that validate not just methodology calculations, but also the integration points where tools provide inputs to methodology calculations.
Test Coverage Requirements: Tool integrations add complexity that must be covered in test artifacts. Tests need to validate tool calculations individually and integration points where tools connect to methodology calculations.
Parameter Coverage: Tools introduce additional parameters that must be included in test scenarios. The parameter mapping work from tool integration directly informs test artifact parameter requirements.
Validation Testing: Tool validation logic must be tested to ensure it properly prevents calculation errors without blocking valid parameter combinations.
Tool Integration Summary and Next Steps
Integration Framework Complete: You now understand the approach we used to integrate external calculation tools into VM0033 digitization.
Key Integration Outcomes:
External tool identification and prioritization based on project boundary conditions
AR-Tool05 integration for fossil fuel emission calculations
AR-Tool14 integration for biomass and carbon stock calculations
AFOLU integration for non-permanence risk assessment
Preparation for Chapter 7: The tool integration work provides essential components for test artifact development. The parameter requirements, calculation procedures, and validation logic from tool integration become key elements in comprehensive test scenarios.
Real-World Application: While we focused on three specific tools for the Allcot mangrove project, the integration approach applies to any external calculation tools referenced by environmental methodologies. The unified framework approach scales to handle additional tools as project requirements expand.
Implementation Reality: Tool integration took significant time during VM0033 digitization, but it provides reusable calculation capabilities that can be applied to other projects using the same tools.
Part III: Schema Design and Development
Practical schema development using Excel-first approach and Guardian's schema management features
Part III transforms your methodology analysis from Part II into working Guardian schemas through hands-on, step-by-step implementation. Using VM0033 as a concrete example, this section teaches practical schema development from architectural foundations through testing and validation.
The five chapters follow a logical progression: Guardian schema basics → PDD schema development → monitoring schema development → advanced schema management techniques → practical testing checklist.
Schema Development Approach
Part III focuses on practical schema development using proven patterns from VM0033 implementation. Rather than theoretical concepts, each chapter provides step-by-step instructions for creating working schemas that capture methodology requirements accurately.
Development Sequence:
Schema Architecture Foundations (Chapter 8): Guardian schema system basics and field mapping principles
PDD Schema Development (Chapter 9): Approach to building comprehensive PDD schemas step-by-step
Monitoring Schema Development (Chapter 10): Time-series monitoring schemas with temporal data management
Advanced Schema Techniques (Chapter 11): API schema management, field properties, Required types, and UUIDs
Schema Testing and Validation (Chapter 12): Practical validation checklist using Guardian's testing features before deployment
This hands-on approach ensures you can build production-ready schemas while understanding Guardian's schema management capabilities.
Chapter Progression and Learning Objectives
Focus: Guardian schema system fundamentals and the two-part architecture pattern used in VM0033.
What You'll Learn: Guardian's JSON Schema integration, Verifiable Credentials structure, and the proven two-part architecture (Project Description + Calculations) that handles methodology complexity. You'll understand how to map methodology parameters to Guardian field types.
Practical Skills: Field type selection, parameter mapping, and architectural patterns that simplify complex methodologies into manageable schema structures.
Focus: Step-by-step Excel-first approach to building comprehensive PDD schemas.
What You'll Learn: Complete PDD schema development process from Excel template through Guardian import. Includes conditional logic implementation, sub-schema creation, and essential field key management for calculation code readability.
Practical Skills: Excel schema template usage, Guardian field configuration, conditional visibility logic, and proper field key naming for maintainable calculation code.
Focus: Time-series monitoring schemas that handle annual data collection and calculation updates.
What You'll Learn: Monitoring schema development with temporal data structures, quality control fields, and evidence documentation. Covers field key management specific to time-series calculations and VVB verification workflows.
Practical Skills: Annual parameter tracking, temporal data organization, monitoring-specific field key naming, and verification support structures.
Focus: API schema management, standardized properties, Required field types, and UUID management.
What You'll Learn: Schema management with API operations, the four Required field types (None/Hidden/Required/Auto Calculate), standardized property definitions from GBBC specifications, and UUID management for efficient development.
Practical Skills: API schema updates, Auto Calculate field implementation, standardized property usage, and UUID-based schema version management.
Focus: Practical validation steps using Guardian's testing features before schema deployment.
What You'll Learn: Systematic testing approach using Default Values, Suggested Values, and Test Values. Covers schema preview testing, UUID integration into policy workflows, and user experience validation.
Practical Skills: Guardian schema testing tools usage, validation rule configuration, logical field organization, and pre-deployment checklist completion.
Building on Part II Foundation
Part III directly implements the analysis work from Part II. Your methodology decomposition, parameter identification, and test artifacts become the inputs for schema development.
Implementation Translation: The parameter lists, dependency trees, and calculation frameworks from Part II translate directly into Guardian schema configurations through the techniques taught in Part III.
Test Integration: Test artifacts from Chapter 7 integrate with schema testing in Chapter 12, ensuring implementations maintain accuracy while providing good user experience.
Part III Completion
Completing Part III provides you with:
Production-ready PDD and monitoring schemas for your methodology
Guardian schema development skills transferable to other methodologies
Understanding of schema testing and validation best practices
Schema management techniques for efficient development and maintenance
Preparation for Part IV: The schemas created in Part III integrate directly with Guardian policy workflow blocks. Your data structures and validation rules become the foundation for complete methodology automation.
Time Investment
Each chapter requires approximately 15-25 minutes reading plus 30-60 minutes hands-on practice:
Chapter 8: 20 min reading + 30 min practice (architectural understanding)
Chapter 9: 25 min reading + 60 min practice (comprehensive PDD schema development)
Chapter 10: 20 min reading + 45 min practice (monitoring schema development)
Chapter 11: 25 min reading + 45 min practice (advanced techniques)
Total Investment: ~3-4 hours for complete schema development capabilities
Chapter Navigation
| Chapter | Title | Focus | Reading Time | Practice Time |
| --- | --- | --- | --- | --- |
| 8 | Schema Architecture Foundations | Guardian schema system basics and field mapping | 20 min | 30 min |
| 9 | PDD Schema Development | Excel-first PDD schema building | 25 min | 60 min |
| 10 | Monitoring Schema Development | Time-series monitoring schemas | 20 min | 45 min |
| 11 | Advanced Schema Techniques | API schema management, Required types, and UUIDs | 25 min | 45 min |
Ready to Begin: With Part II analysis complete, you're prepared for hands-on schema development. Start with Chapter 8 for Guardian schema system foundations.
Part IV: Policy Workflow Design and Implementation
Building complete Guardian policies using your schemas from Part III
Part IV transforms your schemas from Part III into working Guardian policies that automate complete certification workflows. You'll learn Guardian's Policy Workflow Engine by building on VM0033's production policy, creating stakeholder workflows, and implementing token minting based on verified emission reductions/removals.
The five chapters progress logically: policy architecture understanding → workflow block configuration → VM0033 implementation deep dive → advanced patterns → testing and deployment.
Policy Development Approach
Part IV uses VM0033's complete policy implementation as your guide. You'll see how real production policies handle Project Developer submissions, VVB verification, and Standard Registry oversight through Guardian's workflow blocks.
Development Sequence:
Policy Architecture and Design Principles (Chapter 13): Guardian PWE fundamentals and integration with Part III schemas
Guardian Workflow Blocks and Configuration (Chapter 14): Step-by-step configuration of Guardian's 25+ workflow blocks
VM0033 Policy Implementation Deep Dive (Chapter 15): Complete analysis of VM0033's production policy patterns
Advanced Policy Patterns and Testing (Chapter 16): Multi-methodology support, testing strategies, and security patterns
This hands-on approach ensures you can build production-ready policies that handle real-world methodology requirements.
Chapter Progression and Learning Objectives
Focus: Guardian Policy Workflow Engine basics and integration with Part III schemas.
What You'll Learn: Guardian's workflow block system, event-driven architecture, and how to connect your schemas to policy automation. You'll understand stakeholder roles, permissions, and document flow patterns using VM0033's implementation.
Practical Skills: Policy architecture design, schema UUID integration, role-based access control, and workflow planning for methodology certification processes.
Focus: Step-by-step configuration of Guardian's workflow blocks for data collection, calculations, and token management.
What You'll Learn: Complete guide to Guardian's 25+ workflow blocks including data input blocks (requestVcDocumentBlock), calculation blocks (customLogicBlock), and token blocks (mintDocumentBlock). Each block is explained with VM0033 configuration examples.
Practical Skills: Workflow block configuration, form generation from schemas, calculation logic implementation, and token minting rule setup.
Focus: Complete analysis of VM0033's production policy with 37 schemas and 2 AR Tools.
What You'll Learn: How VM0033 implements Project Developer submission workflows, VVB verification processes, and Standard Registry oversight. You'll trace the complete flow from PDD submission to VCU token issuance using real policy configurations.
Practical Skills: Multi-stakeholder workflow design, document state management, verification workflows, and production policy patterns.
Focus: Multi-methodology support, comprehensive testing strategies, and production-grade security patterns.
What You'll Learn: Advanced policy architecture including multi-methodology integration, external data sources, comprehensive testing frameworks, and security implementations. You'll see how to optimize policies for performance and handle complex methodology requirements.
Practical Skills: Multi-methodology pattern design, policy testing automation, performance optimization, external API integration, and security implementation.
Focus: Production deployment strategies, monitoring, and operational excellence for Guardian policies.
What You'll Learn: Production deployment architecture, monitoring and alerting systems, incident response procedures, cost optimization, and stakeholder management for live policy operations.
Practical Skills: Production deployment configuration, monitoring setup, incident response planning, cost management, and policy lifecycle management.
Building on Part III Foundation
Part IV directly implements your schemas from Part III. Your schema UUIDs become references in policy workflow blocks, your field keys become calculation variables, and your validation rules become workflow automation.
Implementation Translation:
Part III PDD schema → requestVcDocumentBlock for project submission
Part III monitoring schema → requestVcDocumentBlock for monitoring reports
Schema field keys → customLogicBlock calculation variables
Direct Integration: VM0033 shows exactly how schemas integrate with policy workflows, providing concrete examples for your methodology implementation.
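As a rough illustration of this translation, a requestVcDocumentBlock might reference a Part III schema UUID along these lines. The property names follow the general Guardian block-config pattern; the tag, role, and UUID are placeholders to verify against your own policy:

```javascript
// Illustrative sketch only, not an excerpt from the VM0033 policy JSON.
// Verify the exact configuration shape against your Guardian version.
const pddSubmissionBlock = {
  blockType: "requestVcDocumentBlock",
  tag: "add_project_pdd",                  // hypothetical block tag
  permissions: ["Project_Proponent"],      // role defined in the policy's role setup
  schema: "#b5c1d2e3-1111-2222-3333-444455556666", // placeholder Part III schema UUID
  idType: "UUID",
  uiMetaData: { type: "page", title: "Submit Project Description" },
};
```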
Practical Implementation Focus
Part IV emphasizes real-world policy development:
VM0033 Production Policy: Complete policy with 37 schemas extracted and analyzed
Stakeholder Workflows: Project_Proponent, VVB, and OWNER role implementations
Event-Driven Architecture: Real triggers, state changes, and workflow coordination
Token Minting Integration: From emission reduction calculations to VCU issuance
Part IV Completion
Completing Part IV provides you with:
Complete Guardian policy implementing your methodology
Multi-stakeholder workflows with proper access control
Token minting based on verified emission reductions
Production deployment and maintenance capabilities
Ready for Production: Your methodology will be fully automated on Guardian with proper stakeholder workflows, audit trails, and token management.
Time Investment
Each chapter requires approximately 20-30 minutes reading plus 45-90 minutes hands-on practice:
Chapter 13: 25 min reading + 60 min practice (policy architecture and planning)
Chapter 14: 30 min reading + 90 min practice (workflow block configuration)
Chapter 15: 25 min reading + 75 min practice (VM0033 implementation analysis)
Chapter 16: 30 min reading + 60 min practice (advanced patterns and integration)
Total Investment: ~5-6 hours for complete policy development capabilities
Chapter Navigation
| Chapter | Title | Focus | Reading Time | Practice Time |
| --- | --- | --- | --- | --- |
| 13 | Policy Architecture and Design Principles | PWE fundamentals and schema integration | 25 min | 60 min |
| 14 | Guardian Workflow Blocks and Configuration | Workflow block configuration | 30 min | 90 min |
| 15 | VM0033 Policy Implementation Deep Dive | Production policy analysis | 25 min | 75 min |
| 16 | Advanced Policy Patterns and Testing | Advanced patterns and testing | 30 min | 60 min |
Policy Development Path: Follow chapters sequentially to build from basic policy understanding to complete production deployment.
Ready to Begin: With Part III schemas complete, you're prepared for policy workflow development. Start with Chapter 13 for Guardian Policy Workflow Engine foundations.
Guardian Integration
Integration system for linking handbook content with existing Guardian documentation
Overview
This system ensures that handbook content properly references existing Guardian documentation from docs/SUMMARY.md rather than duplicating information, while maintaining focus on methodology digitization context.
VM0033 Integration
System for leveraging existing VM0033 documentation and requesting only Guardian-specific implementation details
Overview
This system ensures accurate VM0033 references by:
Using existing parsed documentation in docs/VM0033-methodology-pdf-parsed/
Chapter 28: Troubleshooting and Common Issues
Practical tips and solutions for common problems encountered during methodology digitization
This chapter provides informal, practical guidance for resolving common issues during Guardian methodology development. These tips come from real-world experience and can save significant development time.
Schema Building Best Practices
Guardian lets you configure Default, Suggested, and Test values for schema fields to streamline development and testing. For example:
Field: "Project Area (hectares)"
Default Value: 100
Suggested Value: 500
Test Value: 250
When configuring Formulas, Guardian provides:
Relationships field: add all the variables and constants that are related to or used in the formula. This enables navigating a published formula via its variables when viewing it in schemas or VC documents.
Text: the text that needs to be added to the document.
Link (Output): indicates the field in the document schema where the text should be shown.
Relationships field for text components: select all the variables, constants, and formulas that are related.
Based on docs/SUMMARY.md, the following Guardian documentation sections are relevant for methodology digitization:
Core Architecture References
Policy Workflow Engine References
Schema System References
User Management References
Installation and Setup References
Integration Patterns
Environment Setup Integration Pattern
Methodology Understanding Integration Pattern
Platform Overview Integration Pattern
Reference Integration Templates
Documentation Link Template
Cross-Reference Template
Content Integration Guidelines
What to Link vs. What to Explain
Always Link (Don't Duplicate)
Guardian installation procedures
Complete API documentation
Comprehensive feature explanations
Technical architecture details
User interface guides
Provide Methodology Context For
How Guardian features apply to methodology digitization
VM0033-specific implementation examples
Methodology developer workflow considerations
Integration points between Guardian and methodology requirements
Integration Quality Checklist
Maintenance Procedures
Link Validation
Documentation Sync Process
User Input Integration
Guardian-Specific User Input Requirements
Integration Success: This system ensures handbook content leverages existing Guardian documentation effectively while maintaining focus on methodology digitization and implementation.
Using the parsed documentation for basic methodology questions
Requesting user input only for Guardian-specific implementation details, screenshots, and current system status
Available VM0033 Documentation
Parsed VM0033 Content
The system can access comprehensive VM0033 methodology content from:
docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md - Full methodology text
docs/VM0033-methodology-pdf-parsed/VM0033-Methodology_meta.json - Structured metadata and table of contents
What NOT to Ask Users
Basic methodology information that is already available in the parsed docs
What TO Ask Users
User experience challenges with Guardian implementation
Performance considerations and optimizations
Integration issues and solutions
Content Integration Guidelines
Using VM0033 Parsed Documentation
For methodology content, reference the parsed documentation directly:
Guardian Implementation Request Template
Only use this template for Guardian-specific details:
Content Validation System
VM0033 Content Integration Checklist
For each VM0033 reference:
Basic methodology content: Referenced from parsed documentation (docs/VM0033-methodology-pdf-parsed/)
Specific section citations: Include section numbers and page references
Guardian implementation: User input obtained for system-specific details only
Context appropriate: Content serves both maintenance and learning audiences
No assumptions: No hallucinated methodology details
Guardian Integration Checklist
For each Guardian reference:
Current status confirmed: Implementation status verified with user
Screenshots obtained: Current Guardian interface examples from user
Code examples validated: Guardian-specific configurations from user
Documentation links: References to existing Guardian documentation
Feature availability: Current Guardian capabilities confirmed
Implementation Guidelines
Content Creation Process
Check Parsed Documentation: First, check if VM0033 information is available in parsed docs
Reference Methodology Content: Use parsed documentation for basic methodology details
Identify Guardian Gaps: Determine what Guardian-specific information is needed
Request Guardian Details: Use templates to request only Guardian implementation details
Integrate Content: Combine methodology references with Guardian implementation
Quality Check: Ensure no methodology assumptions or hallucinations
Content Integration Examples
Methodology Reference Pattern
Quality Assurance
Content Review Process
Methodology Source Check: VM0033 content referenced from parsed documentation
Guardian Input Validation: Guardian-specific details obtained from user input only
Documentation Integration: Guardian references link to existing documentation
Accuracy Check: No methodology assumptions or hallucinations
Completeness Review: All Guardian implementation details obtained
Error Prevention
Use Parsed Documentation: Always check VM0033 parsed docs before asking users
No Methodology Assumptions: Never assume or hallucinate VM0033 content
Guardian-Specific Requests: Only request Guardian implementation details from users
Source Attribution: Always reference specific VM0033 sections from parsed docs
Clear Boundaries: Distinguish between methodology content and Guardian implementation
Common Mistakes to Avoid
❌ Wrong: Asking the user "What does VM0033 say about blue carbon?"
✅ Right: Reference the VM0033 parsed documentation for the blue carbon definition

❌ Wrong: Asking the user "What are VM0033 applicability conditions?"
✅ Right: Reference Section 4 of the parsed VM0033 documentation

❌ Wrong: Assuming Guardian implementation details
✅ Right: Request specific Guardian screenshots and configurations from the user
Maintenance
Ongoing Updates
VM0033 Changes: System for handling methodology updates
Guardian Updates: Process for updating Guardian references
User Feedback: Integration of user corrections and improvements
Documentation Sync: Keeping Guardian documentation references current
Version Control
Content Versioning: Track changes to user-provided content
Reference Updates: Maintain current links to Guardian documentation
Accuracy Tracking: Monitor and update VM0033 references as needed
Key Principle: Use existing VM0033 parsed documentation for methodology content. Only request Guardian-specific implementation details from users.
Critical Requirement: Never ask users for basic VM0033 methodology information that's already available in the parsed documentation. This prevents unnecessary interruptions and ensures efficient content creation.
## Guardian Documentation Integration for Setup
### Development Environment Setup Section
Instead of rewriting setup instructions:
{% hint style="info" %}
**Guardian Setup**: For complete Guardian platform setup instructions, see the [Installation Guide](../../../guardian/readme/getting-started/README.md).
{% endhint %}
**Methodology-Specific Setup Considerations**:
- [User input required: Specific setup requirements for methodology development]
- [User input required: Additional tools needed for methodology work]
- [User input required: Environment configuration for methodology testing]
**Quick Setup Validation**:
1. Follow the [Prerequisites](../../../guardian/readme/getting-started/prerequisites.md) guide
2. Complete [Building from Source](../../../guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/README.md)
3. Verify methodology development capabilities: [User input required]
## Guardian Documentation Integration for Methodology Context
### Methodology Domain Knowledge Context
This content focuses on methodology understanding. For Guardian platform details, see:
- [Guardian Architecture](../../../guardian/architecture/README.md) - How Guardian supports methodology implementation
- [Policy Workflow Blocks](../../../guardian/standard-registry/policies/policy-creation/introduction/README.md) - Available blocks for methodology workflow
- [Schema Types](../../../guardian/standard-registry/schemas/available-schema-types.md) - Data structures for methodology requirements
**Methodology-Specific Context**: [User input required for methodology-specific domain knowledge]
## Guardian Documentation Integration for Platform Overview
### Architecture Overview Section
{% hint style="info" %}
**Detailed Architecture**: For comprehensive Guardian architecture documentation, see [Guardian Architecture](../../../guardian/architecture/README.md).
{% endhint %}
**Methodology Developer Focus**:
This section highlights Guardian architecture aspects most relevant to methodology digitization:
1. **Service Architecture for Methodologies**
- [Link to detailed architecture docs](../../../guardian/architecture/reference-architecture.md)
- [User input required: How methodologies use Guardian services]
2. **Data Flow for Methodology Workflows**
- [Link to data flow documentation](../../../guardian/architecture/schema-architecture.md)
- [User input required: Methodology data flow examples]
## [Guardian Feature] for Methodology Development
{% hint style="info" %}
**Complete Documentation**: For full details on [Guardian Feature], see [Link to Guardian Docs](../../../guardian/path/to/docs.md).
{% endhint %}
**Methodology Context**: [How this feature applies to methodology digitization]
**VM0033 Example**: [User input required: Specific VM0033 application]
**Key Points for Methodology Developers**:
- [Methodology-specific consideration 1]
- [Methodology-specific consideration 2]
- [Methodology-specific consideration 3]
**Next Steps**: [How this prepares for methodology implementation]
## Related Guardian Documentation
For deeper understanding of concepts covered in this section:
### Core Documentation
- **[Feature Name]**: [Link](../../../guardian/path/to/docs.md) - [Brief description of relevance]
- **[Feature Name]**: [Link](../../../guardian/path/to/docs.md) - [Brief description of relevance]
### API References
- **[API Category]**: [Link](../../../guardian/path/to/api-docs.md) - [Relevance to methodology work]
### Advanced Topics
- **[Advanced Feature]**: [Link](../../../guardian/path/to/advanced-docs.md) - [When this becomes relevant]
## Guardian Integration Quality Checklist
For each Guardian reference:
- [ ] Links to existing documentation rather than duplicating
- [ ] Provides methodology-specific context
- [ ] Explains relevance to VM0033 implementation
- [ ] Maintains focus on methodology digitization
- [ ] Includes user input requirements for examples
- [ ] Validates links are current and functional
```bash
#!/bin/bash
# Validate Guardian documentation links in Part I
echo "Validating Guardian documentation links..."

# Check all Guardian documentation references
find docs/methodology-digitization-handbook/part-1 -name "*.md" -exec grep -l "\.\.\/\.\.\/\.\.\/guardian\/" {} \; | while read file; do
    echo "Checking Guardian links in $file"
    # Extract and validate Guardian documentation links
    grep -o "\.\.\/\.\.\/\.\.\/guardian\/[^)]*" "$file" | while read link; do
        if [ ! -f "docs/guardian/${link#../../../guardian/}" ]; then
            echo "BROKEN LINK: $link in $file"
        fi
    done
done

echo "Guardian link validation complete"
```
## Guardian Documentation Sync Process
### Monthly Sync
1. Review docs/SUMMARY.md for structural changes
2. Validate all Guardian documentation links
3. Update broken or moved references
4. Check for new relevant documentation
### Quarterly Review
1. Assess new Guardian features for methodology relevance
2. Update integration patterns as needed
3. Review user feedback on documentation usefulness
4. Optimize cross-reference effectiveness
### Annual Assessment
1. Comprehensive review of Guardian documentation integration
2. Update integration templates and patterns
3. Assess methodology developer needs evolution
4. Plan integration improvements
## Guardian Implementation Details Requiring User Input
### Environment Setup Content
- [ ] Current Guardian setup requirements for methodology development
- [ ] Specific tools and configurations needed for methodology work
- [ ] Guardian platform capabilities relevant to methodology digitization
- [ ] Screenshots of current Guardian interface
### Methodology Understanding Content
- [ ] How methodologies map to Guardian documentation structure
- [ ] Specific Guardian features used in methodology implementation
- [ ] Guardian workflow patterns relevant to methodologies
### Platform Overview Content
- [ ] Current Guardian architecture screenshots
- [ ] Methodology implementation details in Guardian
- [ ] Specific Guardian capabilities used for methodologies
- [ ] User interface examples from methodology implementations
<!-- Example: Referencing VM0033 definitions -->
According to VM0033 Section 3 (Definitions), a "Tidal Wetland" is defined as:
[Reference: docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md]
<!-- Example: Referencing applicability conditions -->
VM0033 applicability conditions (Section 4) specify that projects must:
[Reference: docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md]
## Guardian Implementation Detail Needed
**Chapter**: [Chapter Number and Title]
**Section**: [Specific Section]
**Guardian Feature**: [Specific Guardian capability or implementation]
**Required Information**:
- [ ] Current implementation status in Guardian
- [ ] Screenshots of Guardian interface
- [ ] Configuration files or code examples
- [ ] API endpoints or database schema
- [ ] User workflow in Guardian system
**Context**: How this Guardian implementation supports VM0033 methodology
**Note**: Basic VM0033 methodology details will be referenced from parsed documentation
<!-- CORRECT: Using parsed documentation for methodology content -->
## VM0033 Baseline Scenarios
According to VM0033 Section 6.1 "Determination of the Most Plausible Baseline Scenario",
the methodology requires [specific requirements from parsed documentation].
{% hint style="info" %}
**Guardian Implementation**: The following shows how Guardian implements VM0033 baseline scenario determination.
{% endhint %}
[USER INPUT NEEDED: Guardian screenshots and configuration for baseline scenario implementation]
<!-- INCORRECT: Asking user for basic methodology information -->
[USER INPUT NEEDED: What are VM0033 baseline scenario requirements?]
<!-- Standard pattern for referencing VM0033 content -->
**VM0033 Reference**: Section [X.X] - [Section Title]
**Source**: `docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md`
**Content**: [Direct reference to methodology requirements]
**Guardian Implementation**:
[USER INPUT NEEDED: How Guardian implements this VM0033 requirement]
Building complex schemas via Excel and importing them to Guardian is the fastest way to develop schemas, but there are important pitfalls to avoid:
⚠️ Guardian Duplicate Schema Issue: Guardian doesn't distinguish between duplicate schemas during import and will create duplicates if the same schema is imported twice. This is especially problematic when teams make small adjustments to Excel schemas and are tempted to re-import the entire file.
Solution: Track schema versions carefully and delete duplicates manually when they occur. Consider maintaining a schema change log to avoid confusion.
Field Key Names from Excel Import
Issue: Key names of fields imported via Excel aren't human-readable by default. They appear as generic identifiers that make calculation code difficult to maintain.
Solution: Modify field keys manually after import:
Go to the schema's Advanced tab
Edit the Excel cell IDs in the key field
Use descriptive names that match your calculation variables
Required Fields and Auto-Calculate Pitfalls
Guardian offers three field requirement options:
Required: the user must provide a value
Non-required: optional user input
Auto-calculate: the value is computed via expressions
⚠️ Auto-Calculate Limitation: Auto-calculate fields may reference fields from different schemas. If the referenced fields are left empty, the auto-calculate fields won't appear in the Indexer.
Solution: Use non-required fields and implement calculations in custom logic blocks instead.
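For example, a cross-schema total can be computed in a customLogicBlock along these lines. This is a minimal sketch: `documents` and `done()` are supplied by the block's runtime, and the field keys are hypothetical:

```javascript
// Minimal customLogicBlock sketch. `documents` and `done` come from the
// block's runtime; the field keys below are hypothetical examples.
const doc = documents[0].document;
const fields = doc.credentialSubject[0];

// Pull inputs that may originate from different sub-schemas, defaulting
// missing optional values to zero instead of silently dropping the field.
const baseline = Number(fields.baseline_emissions || 0);
const project = Number(fields.project_emissions || 0);
const leakage = Number(fields.leakage_emissions || 0);

// Write the computed value back so it appears in the output VC document.
fields.net_emission_reductions = baseline - project - leakage;

done(documents[0]);
```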
Development and Testing Workflow
Guardian Savepoint Feature
Use Guardian's savepoint feature to save progress of forms or certification processes, then resume from that stage even after making policy changes and re-triggering dry runs.
How to Use Savepoints:
Complete part of a workflow (e.g., PDD submission)
Create savepoint before making policy changes
Modify policy blocks
Restore savepoint and continue testing
This prevents having to fill out long forms repeatedly during development.
API Development vs Manual Forms
Tip: Using APIs to submit data is often faster than filling long forms manually during development.
API Development Workflow:
Fill form manually once with example values
Open Chrome DevTools → Network tab
Submit form and capture the request payload
Extract and modify payload for API testing
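Here is a sketch of step 4, replaying a captured payload programmatically. The endpoint shape follows Guardian's policy-block API pattern, but the host, IDs, token, and payload fields are placeholders to substitute from your own DevTools capture:

```javascript
// Replay a captured form payload against a policy block (sketch).
const POLICY_ID = "<policy-id>";
const BLOCK_UUID = "<block-uuid>";
const TOKEN = "<access-token>";

async function submitTestDocument(payload) {
  const res = await fetch(
    `http://localhost:3000/api/v1/policies/${POLICY_ID}/blocks/${BLOCK_UUID}`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${TOKEN}`,
      },
      body: JSON.stringify(payload),
    }
  );
  if (!res.ok) throw new Error(`Submission failed: ${res.status}`);
  return res.json();
}

// Reuse the captured payload with tweaked values for each test run.
submitTestDocument({ document: { project_area_ha: 250 } }).then(console.log);
```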
Custom Logic Block Testing
Thorough Testing Approach
Test custom logic blocks thoroughly using Guardian's testing features. Make sure all edge cases are covered and output VC documents are correct.
Testing Process:
Test with Minimal Data: Ensure calculations work with required fields only
Test with Maximum Data: Verify calculations with all optional fields populated
Test Edge Cases: Zero values, negative values, missing optional data
Event Troubleshooting Checklist
When documents don't appear where expected, check:
✅ Event Actor: Confirm event actor matches document ownership
✅ Block Permissions: Ensure viewing user has access to target block
✅ Policy State: Verify policy is in correct state (published/dry run)
✅ Browser Cache: Clear cache and refresh (sometimes needed for UI updates)
Performance and Optimization
Large Schema Performance
Issue: Forms with many fields (50+ fields) can load slowly and affect user experience.
Solutions:
Group Related Fields: Use schema composition to break large schemas into logical sections
Conditional Fields: Use conditional visibility to show only relevant fields
Progressive Disclosure: Show basic fields first, advanced fields on demand
Common Calculation Issues
Precision and Rounding
Issue: JavaScript floating-point arithmetic can cause precision issues in calculations.
Solution: Use fixed decimal precision for monetary and emission calculations:
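A small rounding helper applied to intermediate and final results is usually enough. This sketch assumes three decimal places as the default precision:

```javascript
// Fixed-precision helper for emission calculations. Integer scaling avoids
// floating-point surprises like 0.1 + 0.2 !== 0.3.
function round(value, decimals = 3) {
  const factor = 10 ** decimals;
  return Math.round((value + Number.EPSILON) * factor) / factor;
}

const net = round(0.1 + 0.2);          // 0.3 rather than 0.30000000000000004
const vcuTotal = round(12345.6789, 2); // 12345.68
```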
Missing Validation
Issue: Calculations proceed with invalid or missing input data.
Solution: Add comprehensive input and output document validation using documentValidatorBlock as well as within the calculation code itself. Use the provided debug function to add debug logs while developing.
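A minimal in-code validation sketch (field keys and sample values are hypothetical):

```javascript
// Defensive input validation inside calculation code, complementing
// documentValidatorBlock checks upstream.
function requireNumber(fields, key) {
  const value = Number(fields[key]);
  if (!Number.isFinite(value)) {
    throw new Error(`Missing or non-numeric input: ${key}`);
  }
  return value;
}

const fields = { project_area_ha: 250, planting_density: 5500 }; // sample inputs
const area = requireNumber(fields, "project_area_ha");
const density = requireNumber(fields, "planting_density");
```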
Best Practices Summary
Schema Development
✅ Use custom logic blocks instead of auto-calculate for cross-schema references
Development Workflow
✅ Use savepoints to preserve testing progress
✅ Capture API payloads from DevTools for faster testing
✅ Test custom logic blocks with all edge cases
✅ Use document history to debug calculation progressions
Troubleshooting
✅ Check event propagation when documents don't appear
✅ Validate input data before calculations
✅ Use fixed precision for financial/emission calculations
✅ Add delays between bulk API operations
These practical tips can prevent many common issues and significantly speed up development. Remember that methodical debugging and thorough testing are key to successful Guardian implementations.
Chapter 4: Methodology Analysis and Decomposition
When we first tackled digitizing VM0033, we quickly realized that jumping straight into coding or configuration would be overwhelming. A 130-page methodology document with complex calculations needed a systematic approach to break it down into manageable pieces. This chapter shares the analysis approach we developed during VM0033 digitization - what worked, what we learned, and how you can apply these techniques to other methodologies.
The analysis process transforms a complex PDF into organized components ready for digital implementation. Rather than trying to understand everything at once, we found it more effective to use structured reading techniques that focus on the most important sections for digitization while building understanding progressively.
Structured Reading Approach for Methodology Analysis
During VM0033 digitization, we developed a reading approach that prioritizes sections based on their importance for digital implementation. This approach emerged from trial and error - we initially tried to understand everything equally, which led to information overload.
Reading Priority Order We Used:
Applicability Conditions - Tells us what projects can use this methodology
Quantification of GHG Emission Reductions and Removals - Contains all the math we need to implement
Monitoring - Defines what data users need to collect
Project Boundary - Shows what's included in calculations
This order worked well because it builds understanding logically. We need to know what projects qualify before diving into calculations, and we need to understand the calculations before figuring out how to collect the required data.
First Pass - Structure Mapping: Start by reading the table of contents to understand how the methodology is organized. VM0033 follows the standard VCS format with 10 main sections, but we found that Section 8 (Quantification) contains most of the mathematical complexity we needed to implement.
Second Pass - Core Section Focus: Read the priority sections thoroughly, taking notes on requirements that need to be implemented digitally. During this pass, we identified calculation procedures, parameter definitions, decision logic, and validation rules that would become digital components.
Third Pass - Integration Details: Read the remaining sections to understand how the methodology connects to external tools and handles edge cases. This reading helped us understand dependencies and special situations we needed to account for.
Note-Taking Techniques That Worked
Focus on Digital Implementation: As we read, we kept asking "What here needs to be automated?" and "What decisions does a user need to make?" This helped us identify the specific elements that would become features in our digital implementation.
Consistent Marking System: We developed a simple system for marking different types of content - equations got one color, parameters another, decision points a third. This made it easier to find information later when we were building the digital version.
Cross-Reference Tracking: We noted how different sections referenced each other, especially how the quantification section built on the boundary definitions and how monitoring requirements supported calculations. These connections were important for making sure our digital implementation maintained the methodology's logic.
Understanding the Three-Actor Workflow
Most carbon methodologies, including VM0033, work within a standard three-actor certification process. Understanding this workflow was crucial for designing our digital implementation because the platform needed to support all three actors and their interactions.
The Three Actors:
Standard Registry (Verra in VM0033's case): The organization that maintains the methodology and oversees the certification process. They approve projects, oversee validation and verification, and issue the final carbon credits.
Validation and Verification Body (VVB): Independent auditors who check that projects comply with the methodology requirements. They validate project designs initially and verify monitoring results ongoing.
Project Developer: The organization implementing the restoration project and seeking carbon credits. For VM0033, this would be whoever is planting and maintaining the mangroves.
How They Interact:
Project Registration: Project developer submits project documents to the registry
Validation: Project developer hires a VVB to validate their project design
Project Approval: Registry approves the project based on VVB validation
Monitoring: Project developer collects data and submits monitoring reports
Verification: Project developer hires a VVB to verify the monitoring results
Credit Issuance: Registry issues carbon credits based on the verified results
When we designed the Guardian policy for VM0033, we built this workflow into the platform so that each actor has appropriate permissions and can only see and do what they're supposed to according to their role.
VM0033 Specific Considerations
For the Allcot ABC Mangrove project, we focused on mangrove restoration as the primary activity. The project involves planting mangroves in coastal areas where they had been lost or degraded. This kept our initial implementation focused rather than trying to handle all possible restoration activities that VM0033 theoretically allows.
The three-actor workflow works well for mangrove projects because:
Project developers can focus on planting and monitoring mangroves
VVBs can verify that restoration activities meet VM0033 requirements
The registry can issue credits knowing the work has been independently validated
Parameter Extraction and Organization
One of the most time-consuming parts of analysis was identifying all the parameters (data inputs) that users would need to provide. VM0033 has many parameters scattered throughout the document, and some are used in multiple calculations.
Parameter Types We Identified:
Monitored Parameters: Data that project developers collect through measurements. For mangrove projects, this includes things like tree diameter measurements, survival rates, soil samples, and water level measurements.
User-Input Parameters: Project-specific information that users provide during setup. This includes project area size, crediting period length, restoration activities planned, and location details.
Default Values: Standard values provided by VM0033 that can be used when site-specific measurements aren't available. These include default growth rates, carbon content factors, and emission factors.
Calculated Parameters: Values that get computed from other parameters using equations in the methodology. These form chains of calculations that we needed to map carefully.
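A simple structured registry can help keep these categories straight during extraction. The entries below are illustrative examples with made-up keys, not the full VM0033 parameter list:

```javascript
// Illustrative parameter registry covering the four types above.
const parameters = [
  { key: "dbh_cm", type: "monitored", units: "cm", source: "field measurement" },
  { key: "project_area_ha", type: "user-input", units: "ha", source: "project setup" },
  { key: "carbon_fraction", type: "default", units: "tC per t biomass", value: 0.47 },
  { key: "biomass_carbon", type: "calculated", units: "tC", dependsOn: ["dbh_cm", "carbon_fraction"] },
];

// Reuse identification: count how many calculations consume each parameter.
const useCounts = {};
for (const p of parameters) {
  for (const dep of p.dependsOn || []) {
    useCounts[dep] = (useCounts[dep] || 0) + 1;
  }
}
console.log(useCounts); // parameters with counts > 1 are reuse opportunities
```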
Parameter Organization Approach
Systematic Extraction: We went through each section methodically, making lists of every parameter mentioned, along with its definition, units, and where it gets used. This was tedious but essential for making sure we didn't miss anything.
Reuse Identification: Many parameters appear in multiple calculations. Identifying these reuse opportunities helped us design efficient data collection where users enter information once and it gets used wherever needed.
Validation Requirements: Each parameter has requirements about valid ranges, formats, or dependencies. We documented these during analysis because they would become validation rules in our digital implementation.
Introduction to Recursive Analysis
When we first looked at VM0033's final calculation equation, it seemed simple. But we quickly realized that each term in that equation depends on other calculations, which depend on still other calculations, creating a complex web of dependencies.
Starting Point: VM0033's goal is calculating Net GHG Emission Reductions and Removals (NERRWE). The basic equation is:
NERRWE = BE - PE - LK
Where:
NERRWE = Net emission reductions from the wetland project
BE = Baseline emissions (what would have happened without the project)
PE = Project emissions (emissions from project activities)
LK = Leakage (emissions that might occur elsewhere because of the project)
The Challenge: Each of these terms (BE, PE, LK) has its own complex calculations with many sub-components. To implement this digitally, we needed to trace back from the final answer to identify every piece of data a user would need to provide.
Recursive Approach: Starting with NERRWE, we asked "What do we need to calculate this?" Then for each dependency, we asked the same question, continuing until we reached basic measured values or user inputs. This created a tree-like structure showing all the calculation dependencies.
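Here is a toy sketch of that recursion. The dependency map uses simplified, hypothetical parameter names; the real VM0033 tree is far larger and deeper:

```javascript
// Toy dependency map for the recursive analysis. Leaves (empty arrays)
// are measured values or user inputs.
const dependsOn = {
  NERRWE: ["BE", "PE", "LK"],
  BE: ["baseline_soil_carbon", "baseline_methane"],
  PE: ["fossil_fuel_emissions"],
  LK: ["market_leakage_factor"],
  baseline_soil_carbon: ["soil_samples"],
  baseline_methane: ["water_level"],
  fossil_fuel_emissions: ["fuel_consumption"],
  soil_samples: [],
  water_level: [],
  fuel_consumption: [],
  market_leakage_factor: [],
};

// Ask "what do we need to calculate this?" recursively until only
// basic inputs remain.
function collectInputs(param, inputs = new Set()) {
  const deps = dependsOn[param] || [];
  if (deps.length === 0) {
    inputs.add(param);
  } else {
    for (const dep of deps) collectInputs(dep, inputs);
  }
  return inputs;
}

console.log([...collectInputs("NERRWE")]);
// → ["soil_samples", "water_level", "fuel_consumption", "market_leakage_factor"]
```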
Benefits of This Approach
Complete Parameter Discovery: Working backward from final results ensured we found all required inputs, even ones that are referenced indirectly through multiple calculation layers.
Logical Implementation Order: Understanding dependencies helped us sequence implementation so that basic inputs are collected before calculations that depend on them.
Validation Points: The dependency tree showed us where validation should happen - we could catch problems early rather than only discovering them at the final calculation stage.
Tools and External References
VM0033 references several external calculation tools that we needed to understand and integrate. During our first digitization attempt, we implemented the ones that were most essential for the mangrove restoration focus.
Reference Materials: For detailed VM0033 analysis, consult the supporting reference materials in our Artifacts Collection.
Tools We Implemented:
AR-Tool05: This CDM tool calculates emissions from fossil fuel use during project activities. For mangrove projects, this covers emissions from boats, equipment, and transportation used during planting and monitoring.
AR-Tool14: This CDM tool estimates carbon stocks in trees and shrubs using standard equations. We used this for calculating carbon storage in mangrove biomass as the trees grow.
AFOLU Non-Permanence Risk Tool: This VCS tool assesses the risk that carbon benefits might be reversed. For mangrove projects, this considers risks like storm damage, disease, or land use changes.
Tool Integration Approach
Understanding Tool Purpose: For each tool, we figured out what specific problem it solves and how that fits into the overall VM0033 calculation framework.
Data Flow Mapping: We traced how data flows between VM0033 calculations and the external tools - what information goes in, what results come out, and how those results get used in other calculations.
Implementation Decisions: Rather than trying to implement every referenced tool perfectly, we focused on the core functionality needed for mangrove projects. This kept our initial implementation manageable while still meeting methodology requirements.
VM0033 Analysis Walkthrough
Let's walk through how we applied these analysis techniques to specific parts of VM0033, using examples from our actual digitization work.
Applicability Analysis: VM0033 Section 4 defines what projects can use the methodology. For mangrove restoration, the key requirements are that projects restore degraded tidal wetlands through activities like replanting native species and improving hydrological conditions. We identified the specific criteria that our digital implementation needed to check during project registration.
Calculation Structure: Section 8 contains VM0033's mathematical core. We found that baseline emissions calculations (what would happen without restoration) were quite complex, involving soil carbon loss, methane emissions, and biomass decay. Project emissions were simpler for mangrove planting but still required careful tracking of fossil fuel use and disturbance effects.
Monitoring Requirements: Sections 9.1 and 9.2 define what data projects need to collect. For mangrove restoration, this includes regular measurements of tree survival, growth rates, soil conditions, and water levels. We organized these into data collection schedules that could be built into the Guardian interface.
Practical Lessons Learned
Start Simple: We initially tried to handle all possible restoration activities VM0033 allows, but this created too much complexity. Focusing on mangrove planting first gave us a working system that we could expand later.
Document Everything: Even seemingly small details about parameter definitions or calculation procedures became important during implementation. Good documentation during analysis saved time later.
Test Understanding: We regularly tested our understanding by trying to work through example calculations manually. This helped us catch misunderstandings before they became implementation problems.
From Analysis to Implementation Planning
The analysis work creates a foundation for the more detailed equation mapping and parameter identification that comes in Chapter 5. Here's how the analysis results feed into subsequent work.
Parameter Lists: The parameters we identified during analysis become the basis for detailed dependency mapping in Chapter 5.
Calculation Structure: Our understanding of how VM0033's calculations fit together guides the recursive analysis work that systematically maps every mathematical dependency.
Tool Integration: The external tools we identified need detailed integration planning, which we'll cover in Chapter 6.
Validation Framework: The validation requirements we identified during analysis inform the test artifact development in Chapter 7.
Analysis Summary and Next Steps
Analysis Foundation Complete: You now understand the systematic approach we used to break down VM0033 into implementable components.
Key Analysis Outcomes:
Structured methodology reading with focus on implementation requirements
Three-actor workflow understanding with role and permission implications
Parameter extraction with classification and reuse opportunities identified
Introduction to recursive analysis concepts for dependency mapping
Preparation for Chapter 5: Your parameter extraction work and understanding of calculation structure from this chapter will be essential for the detailed equation mapping we'll cover next. Chapter 5 builds directly on this foundation to create complete mathematical dependency maps.
Applying to Other Methodologies: While we used VM0033 as our example, these analysis techniques apply to other environmental methodologies. The structured reading approach, parameter extraction methods, and recursive analysis concepts work for any methodology you might want to digitize.
Learning from Experience: These techniques represent what we learned during VM0033 digitization. They worked for us, but you might find improvements or adaptations that work better for your specific methodology or implementation approach.
Table of Contents
Navigation Tip: Use the sidebar navigation or click on any chapter title to jump directly to detailed chapter outlines.
Understanding the digitization process, Guardian platform capabilities, and the role of VM0033 as our reference methodology. This chapter establishes the context and objectives for methodology digitization.
Deep dive into the VM0033 methodology structure, applicability conditions, baseline scenarios, and emission reduction calculations. This chapter provides the domain knowledge foundation needed before digitization begins.
Comprehensive introduction to Guardian's architecture, Policy Workflow Engine (PWE), schema system, and key concepts specifically relevant to methodology digitization.
Systematic approach to reading and analyzing methodology PDFs, identifying key components, stakeholders, and workflow requirements. Includes techniques for extracting calculation logic and parameter dependencies using industry-proven recursive analysis techniques.
Step-by-step process for identifying all equations used in baseline emissions, project emissions, and leakage calculations. Covers recursive parameter analysis and dependency mapping using VM0033 examples with comprehensive mathematical component extraction.
Understanding and incorporating external tools and modules referenced in methodologies. Covers CDM tools, VCS modules, and other standard calculation tools used in VM0033, including unified calculation framework development.
Creating comprehensive test spreadsheets containing all input parameters, output parameters, and final emission reduction calculations. This artifact becomes the validation benchmark for the digitized policy, with real VM0033 test artifact examples.
Guardian schema system fundamentals, JSON Schema integration, and two-part architecture patterns. Establishes field mapping principles and architectural understanding for methodology schema development.
Step-by-step Excel-first approach to building comprehensive PDD schemas. Covers Guardian template usage, conditional logic implementation, sub-schema creation, and essential field key management for calculation code readability.
Time-series monitoring schema development with temporal data structures, annual parameter tracking, and field key management for time-series calculations. Includes VVB verification workflow support.
API schema management, standardized property definitions, Required field types (None/Hidden/Required/Auto Calculate), and UUID management for efficient schema development and maintenance.
Practical schema validation using Guardian's testing features including Default/Suggested/Test values, preview testing, UUID integration, and pre-deployment checklist for production readiness.
Guardian policy architecture fundamentals, workflow block system, event-driven communication, and design patterns. Establishes core concepts for building production-ready environmental policies using VM0033 as the implementation reference.
Complete guide to Guardian's workflow blocks including interfaceDocumentsSourceBlock, buttonBlock, requestVcDocumentBlock, and role management. Covers block configuration, permissions, event routing, and UI integration with practical VM0033 examples.
Deep technical analysis of VM0033 policy implementation using actual JSON configurations. Covers VVB approval workflows, project submission processes, and role-based access patterns with real Guardian block configurations extracted from production policy.
Advanced policy implementation patterns including transformation blocks for Verra API integration, document validation blocks, external data integration, policy testing frameworks, and demo mode configuration using VM0033 production examples.
Comprehensive guide to implementing VM0033 emission reduction calculations using Guardian's customLogicBlock. Covers baseline emissions, project emissions, leakage calculations, and final net emission reductions using real JavaScript implementation with VM0033 test artifacts validation.
Brief foundation chapter establishing FLD concepts for parameter relationship management in Guardian methodologies. Covers parameter reuse patterns and integration with customLogicBlock calculations using VM0033 examples.
Complete guide to building Guardian Tools using AR Tool 14 as practical example. Covers Tools as mini-policies, extractDataBlock workflows, customLogicBlock integration, and production implementation patterns for standardized calculation tools that integrate with multiple methodologies.
Comprehensive testing using Guardian's built-in testing capabilities including dry-run mode and customLogicBlock testing interface. Covers interactive testing with three input methods, validation against VM0033 test artifacts, testing at every calculation stage, and API-based automated testing using Guardian's REST APIs.
Testing complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities and VM0033 production patterns. Covers multi-role testing frameworks, virtual user management, production-scale data validation, and cross-component integration testing.
Automating methodology operations using Guardian's REST API framework. Covers authentication patterns, VM0033 policy block API structure, dry-run operations with virtual users, automated workflow execution, and Cypress testing integration for production deployment.
Part VII: Deployment and Maintenance
Chapter 24: User Management and Role Assignment
🚧 In Development - Setting up user roles, permissions, and access controls for different stakeholders in the methodology workflow. Covers user onboarding, organization management, security policies, and role-based access controls.
Chapter 25: Monitoring and Analytics - Guardian Indexer
🚧 In Development - Implementing monitoring, logging, and analytics for deployed methodologies using Guardian Indexer. Covers usage analytics, compliance reporting, audit trails, and performance monitoring.
Chapter 26: Maintenance and Updates
🚧 In Development - Strategies for maintaining deployed methodologies, handling methodology updates, and managing backward compatibility. Covers version management, bug fixing, and regulatory change management.
✅ Available - Bidirectional data exchange between Guardian and external platforms. Covers data transformation using dataTransformationAddon blocks and external data reception using MRV configuration patterns.
Chapter 28: Troubleshooting and Common Issues
✅ Available - Practical tips and solutions for common problems encountered during methodology digitization. Covers schema development pitfalls, development workflow optimization, custom logic testing, and event troubleshooting.
Appendix A
Complete code examples, schema definitions, and configuration files for the VM0033 implementation.
Appendix B: Guardian Block Reference Guide
Quick reference guide for all Guardian policy workflow blocks with methodology-specific usage examples.
Appendix C: Calculation Templates and Examples
Reusable calculation templates and examples for common methodology patterns.
Appendix D: Testing Checklists and Templates
Comprehensive checklists and templates for testing methodology implementations.
Appendix E: API Reference for Methodology Developers
Focused API documentation for methodology-specific use cases and automation.
Appendix F: Glossary and Terminology
Comprehensive glossary of terms used in methodology digitization and Guardian platform.
Chapter Organization
Consistent Structure: Each chapter follows the same format for easy navigation and learning, listing each section with a description and an estimated reading time.
Total Time: 20-30 hours - Comprehensive coverage of all aspects of methodology digitization from foundation to advanced topics.
Part I-III: 12-16 hours - Essential knowledge for understanding Guardian platform and designing data structures.
Part IV-V: 8-11 hours - Core implementation skills for policy workflows and calculation logic.
Part VI-VIII: 5-8 hours - Testing, deployment, and maintenance topics.
Prerequisites
Before You Begin: Ensure you have the following prerequisites in place.
Basic understanding of environmental methodologies and carbon markets
Familiarity with JSON and basic programming concepts
Access to Guardian platform instance for hands-on practice
VM0033 methodology document for reference
Next Steps: Ready to begin? Start with the detailed chapter outlines or jump directly to Chapter 1.
Chapter 7: Test Artifact Development
Creating comprehensive test artifacts was one of the most valuable parts of our VM0033 digitization work, and we couldn't have done it without Verra's help. The test artifacts became our foundation for schema design, calculation verification, and ongoing validation. This chapter explains how we worked with Verra to develop test cases using real Allcot project data and how these artifacts guided every aspect of our implementation.
The collaboration with Verra was crucial because they provided the methodology expertise needed to create realistic test scenarios, while Allcot provided real project data from their ABC Mangrove Senegal project. This combination gave us authentic test cases that reflected actual project conditions rather than hypothetical examples we might have created ourselves.
The Collaborative Approach with Verra
When we started digitization work, we realized that creating accurate test cases would require deep methodology expertise that we didn't have. We needed someone who understood VM0033's intricacies and could create test scenarios that properly exercised all the calculation pathways we had identified through recursive analysis.
Verra's Contribution: Verra brought methodology expertise to help us understand which test scenarios would be most valuable and how to structure test cases that would validate both individual calculations and overall methodology compliance.
Allcot's Data Contribution: Allcot provided real project data from their ABC Mangrove Senegal project, including:
Actual PDD data with site-specific parameters
Real emission reduction calculations from their project development work
Authentic assumptions about growth rates, mortality, and site conditions
Practical boundary condition decisions for a working mangrove project
Our Role: We provided the technical framework requirements - what parameters we needed, how calculations would be structured in Guardian, and what validation scenarios would help us verify digital implementation accuracy.
The Result: Two comprehensive Excel artifacts that became our validation benchmarks - the detailed test case artifact and the original ER calculations from Allcot's PDD work.
Why This Collaboration Worked
Real Project Grounding: Using actual Allcot project data meant our test cases reflected real-world conditions and decision-making rather than theoretical scenarios.
Methodology Validation: Verra's involvement ensured our test cases properly interpreted VM0033 requirements and followed accepted calculation procedures.
Implementation Focus: Our technical requirements kept the test development focused on what we actually needed for digitization rather than creating comprehensive academic examples.
Understanding the Allcot ABC Mangrove Project Data
The Allcot ABC Mangrove Senegal project provided an ideal test case because it represented a straightforward mangrove restoration approach with well-documented assumptions and calculations.
Project Characteristics:
Total Area: 7,000 hectares across 4 strata with different baseline conditions
Planting Approach: Manual propagule planting by local communities - no heavy machinery
Species Focus: Rhizophora mangle (red mangrove) with known allometric equations
Timeframe: 40-year crediting period starting in 2022
Key Project Parameters from ER Calculations:
Planting Density: 5,500 trees per hectare initially planted
Growth Model: Chapman-Richards function for DBH growth over time
Root:Shoot Ratio: 0.29 for below-ground biomass calculations
Boundary Simplifications:
No fire reduction premium (eliminated fire calculations)
No fossil fuel emissions (simple planting activities)
Mineral soil only (no peat calculations)
No wood products (no harvesting planned)
How Project Data Informed Test Scenarios
Realistic Parameter Ranges: The Allcot data showed us realistic ranges for key parameters - growth rates that reflect actual site conditions, mortality patterns based on field experience, and carbon accumulation rates based on literature and site measurements.
Calculation Complexity: The project showed us how many calculations were actually needed vs. the full VM0033 complexity. This helped us focus test development on calculations that would actually be used.
Multi-Stratum Scenarios: With 4 different strata having different baseline biomass levels (1149, 2115, 2397, 1339 t C/ha), we could test how calculations handle different starting conditions and scaling across project areas.
Test Artifact Structure and Organization
The test artifacts we developed with Verra create a comprehensive validation framework organized around VM0033's calculation structure.
Primary Test Case Artifact: VM0033_Allcot_Test_Case_Artifact.xlsx
This artifact contains the complete parameter set and calculation framework needed for Guardian implementation:
Project Boundary Definition: Documents exactly which carbon pools and emission sources are included/excluded, providing the conditional logic needed for Guardian's schema design.
Quantification Approach Selection: Shows which calculation methods are used (field data vs. proxies, stock approach vs. flow approach) and when different parameters are required.
Stratum-Level Parameters: Complete parameter sets for all 4 project strata, showing how site conditions vary and how this affects calculation requirements.
Temporal Boundaries: Peat depletion time (PDT) and soil organic carbon depletion time (SDT) calculations for each stratum, though simplified for mineral soil conditions.
Annual Calculation Framework: Year-by-year calculations from 2022 to 2061 showing how parameters change over time and how calculations scale across the 40-year crediting period.
Monitoring Requirements: Complete parameter lists organized by validation vs. monitoring periods, showing when different data needs to be collected.
Supporting ER Calculations Artifact
Original Allcot Calculations: ER_calculations_ABC Senegal.xlsx
This artifact contains the original project calculations that Allcot developed for their PDD:
Assumptions and Parameters: Detailed documentation of all project assumptions including growth models, mortality rates, allometric equations, and site-specific factors.
Growth Projections: Complete DBH growth projections using Chapman-Richards model, providing year-by-year diameter estimates that feed into biomass calculations.
Calculation Results: Annual emission reduction calculations over the 40-year period, providing expected results that our digital implementation should match.
Validation Benchmarks: Final totals and annual averages that became our accuracy targets during implementation testing.
How Test Artifacts Guided Schema Design
The test artifacts became our primary reference during Guardian schema development because they showed us exactly what data users would need to provide and how it would be structured.
PDD Schema Requirements: The project boundary and quantification approach selections from the test artifact directly translated into conditional field requirements in our PDD schema design.
Monitoring Report Structure: The annual calculation requirements showed us which parameters needed to be collected each year vs. only at validation, informing our monitoring report schema organization.
Parameter Grouping: The test artifact's organization by strata, time periods, and calculation components helped us design schema sections that match how users actually think about project data.
Validation Logic: The conditional parameter requirements (like "when fire reduction premium = true") became validation rules in our schema design that show/hide fields based on user selections.
From Test Artifact to Guardian Implementation
Direct Translation: Many sections of the test artifact could be directly translated into Guardian schema fields. For example, the stratum-level input parameters became repeating sections in our project schema.
Calculation Verification: The test artifact calculations became our verification benchmark - our Guardian implementation needed to produce the same results using the same input parameters.
User Experience Insights: Seeing how parameters were organized in the test artifact helped us understand how to structure Guardian forms and data collection workflows.
Verification and Validation Process
The test artifacts enabled systematic verification of our Guardian implementation by providing known-good calculation results that we could compare against our digital calculations.
Baseline Verification: Using the test artifact's baseline biomass values and parameters, we verified that our Guardian calculations produced matching baseline calculations.
Project Calculation Testing: The annual growth projections and biomass calculations from the test artifact became our benchmark for testing AR-Tool14 integration and biomass calculation accuracy.
Net Emission Reductions: The final ER calculations provided year-by-year targets that our complete Guardian implementation needed to match within acceptable precision tolerances.
Parameter Validation: The test artifact showed us which parameter combinations were valid and which should trigger validation errors, informing our schema validation rule design.
Testing Methodology We Used
Individual Component Testing: We tested each calculation component (baseline, project, leakage) separately using test artifact parameters to isolate any calculation errors.
Integration Testing: After individual components worked correctly, we tested the complete calculation chain using full test artifact scenarios.
Precision Analysis: We documented acceptable precision differences between our calculations and test artifact results, accounting for rounding differences and calculation sequence variations.
Edge Case Testing: The test artifact parameters helped us identify edge cases (like zero values, boundary conditions) that needed special handling in our implementation.
Real-World Application Benefits
Having comprehensive test artifacts based on real project data provided benefits throughout our digitization work and continues to be valuable for ongoing development.
Implementation Confidence: Knowing our calculations matched real project calculations gave us confidence that our Guardian implementation would work correctly for actual projects.
Schema Validation: The test artifacts helped us verify that our Guardian schemas could handle real project complexity and data requirements.
User Testing: When we tested Guardian with potential users, having realistic test data made the testing sessions much more meaningful than using hypothetical examples.
Documentation Reference: The test artifacts became our reference for writing user documentation and help text, providing concrete examples of how parameters are used.
Quality Assurance: Ongoing development work uses the test artifacts as regression tests to ensure code changes don't break existing calculation accuracy.
Long-Term Value
Maintenance Reference: When we need to modify calculations or add new features, the test artifacts provide a comprehensive reference for ensuring changes maintain calculation accuracy.
Expansion Foundation: If we extend Guardian to handle additional VM0033 features or variations, the test artifacts provide a foundation for developing additional test scenarios.
Training Resource: The test artifacts help new team members understand VM0033 requirements and Guardian implementation by providing concrete examples of complete calculation scenarios.
Lessons from Test Artifact Development
Collaboration is Essential: We could not have created effective test artifacts without Verra's methodology expertise and Allcot's real project data. The collaborative approach was crucial for creating useful validation tools.
Real Data Matters: Using actual project data rather than hypothetical scenarios made our test artifacts much more valuable for validating implementation accuracy and user experience.
Comprehensive Coverage: Attempting to create test scenarios that cover all calculation pathways, parameter combinations, and edge cases requires systematic organization and significant effort.
Living Documents: Test artifacts need to be maintained and updated as understanding improves and requirements evolve. We continue to reference and occasionally update our artifacts based on implementation experience.
Implementation Integration: Test artifacts are most valuable when they're designed from the beginning to support the specific implementation work being done, rather than created as general methodology examples.
Test Artifact Development Summary and Implementation Readiness
Validation Framework Complete: You now understand how collaborative test artifact development creates the foundation for accurate methodology digitization.
Key Test Development Outcomes:
Collaborative development approach with Verra methodology expertise and Allcot real project data
Comprehensive test case artifact covering all VM0033 calculation components and parameter requirements
Original ER calculations providing validation benchmarks and expected results
Schema design guidance through realistic parameter organization and conditional logic examples
Implementation Readiness: The systematic analysis and planning work completed in Part II provides comprehensive foundation for technical implementation. The methodology analysis, equation mapping, tool integration, and test artifact development create detailed requirements and validation frameworks that directly support schema design and policy development.
Real-World Validation: Using actual project data from a real mangrove restoration project ensures that digitization work addresses practical implementation needs rather than theoretical scenarios, improving the likelihood of successful deployment and user adoption.
Collaborative Success: The test artifact development demonstrates the value of combining technical digitization expertise with domain knowledge and real project experience to create comprehensive validation frameworks.
Field 1 - Description: "Biomass density of vegetation in stratum i"
Field 2 - Unit: "t d.m. ha-1"
Field 3 - Equation: "Equation 15, Equation 23"
Field 4 - Source of data: "Field measurements or literature values"
Field 5 - Value applied: [Stratum-specific data table]
Field 6 - Justification: [Required text explanation]
```
# Example: Check for duplicate schemas via API
GET /api/v1/schemas
# Look for schemas with identical names but different UUIDs
```

```js
// Before: Unreadable keys from Excel import
document.credentialSubject.field_1
document.credentialSubject.field_2

// After: Readable keys after manual editing
document.credentialSubject.projectArea
document.credentialSubject.emissionReductions
```

```js
// In customLogicBlock instead of auto-calculate
const projectArea = document.credentialSubject.projectArea || 0;
const emissionFactor = artifacts[0].emissionFactor || 1;
const totalEmissions = projectArea * emissionFactor;

// Output with calculated value
outputDocument.credentialSubject.calculatedEmissions = totalEmissions;
```

```js
// Use for API testing (an Authorization header is typically required as well)
await fetch(`/api/v1/policies/${policyId}/blocks/${blockId}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(pddPayload)
});
```
This chapter covers essential advanced techniques for schema management that extend beyond the Excel-first approach. You'll learn API-based schema operations, field properties, the four Required field types, and UUID management for efficient schema development.
These techniques are crucial for efficient schema management, especially when working with complex methodologies or managing multiple schemas across policies.
API-Based Schema Management
While the Excel-first approach works well for initial development, API operations can be helpful for schema updates, bulk operations, and automated workflows if you're comfortable with backend programming. Guardian provides comprehensive schema APIs for create, read, update, and delete operations.
When to Use Schema APIs
API operations are essential for:
Schema Updates: Modifying existing schemas without rebuilding from Excel
Bulk Operations: Managing multiple schemas across different policies
Integration: Connecting schema management to external workflows
Field Key Updates: Programmatically renaming field keys for better calculation code
For detailed API operations, see Schema Creation Using APIs.
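As a minimal sketch of what an API-driven field-key update might look like (the endpoint paths, response shape, and payload structure here are assumptions based on Guardian's published schema APIs - confirm against your instance's Swagger documentation before use):

```js
// Hypothetical sketch: renaming a field key on an existing schema via the schema APIs
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${accessToken}` // token obtained from the login flow
};

// 1. Fetch existing schemas and locate the one to update
const schemas = await (await fetch('/api/v1/schemas', { headers })).json();
const schema = schemas.find(s => s.name === 'VM0033 PDD'); // illustrative name

// 2. Rename a field key inside the schema definition
const definition = JSON.parse(schema.document);
definition.properties.projectArea = definition.properties.field_1;
delete definition.properties.field_1;
schema.document = JSON.stringify(definition);

// 3. Push the update back (assumed endpoint - verify in Swagger)
await fetch(`/api/v1/schemas/${schema.id}`, {
  method: 'PUT',
  headers,
  body: JSON.stringify(schema)
});
```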
Importance of Good Key Names
Field key names are crucial for calculation code readability and maintenance. Good key names become especially important when schemas are used in complex calculations and policy workflows.
carbonStock - Ambiguous (baseline? project? which period?)
Impact on Calculation Code:
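A quick illustration (key names invented for the example) of how key quality shows up in calculation code:

```js
// Ambiguous: which carbon stock is this?
const stock = document.credentialSubject.carbonStock;

// Specific keys make the calculation self-documenting
const baselineStock = document.credentialSubject.baseline_carbon_stock_t0 || 0;
const projectStock = document.credentialSubject.project_carbon_stock_t || 0;
const netChange = projectStock - baselineStock;
```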
Standardized Property Definitions
Guardian's Property Glossary provides standardized data definitions based on the GBBC dMRV Specification that ensure data consistency and comparability across different methodologies and projects. These standardized properties enable interoperability and universal data mapping.
Understanding Standardized Properties
For complete property definitions, see Available Schema Types and Property Glossary.
Purpose of Standardized Properties:
Data Consistency: Ensure uniform interpretation of data across different methodology schemas
Cross-Methodology Comparability: Enable comparison of projects using different methodologies
Enhanced Searchability: Allow efficient data retrieval across the Guardian ecosystem
GBBC Compliance: Align with industry-standard dMRV specifications
Key Standardized Property Categories
Organization Properties:
AccountableImpactOrganization: Project developers and responsible entities
Signatory: Agreement signatories with defined roles (IssuingRegistry, ValidationAndVerificationBody, ProjectOwner, VerificationPlatformProvider)
Address: Standardized address format with addressType, city, state, country
Project Properties:
ActivityImpactModule: Core project information including classification (Carbon Avoidance/Reduction/Removal)
GeographicLocation: Standardized location data with longitude, latitude, geoJsonOrKml
MitigationActivity: Mitigation activity classification and methods
Credit Properties:
CRU (Carbon Reduction Unit): Standardized carbon credit structure with quantity, unit, vintage, status
REC (Renewable Energy Certificate): Renewable energy certificate format with recType, validJurisdiction
CoreCarbonPrinciples: Core carbon principles compliance including generationType, verificationStandard
Verification Properties:
Validation: Standardized validation structure with validationDate, validatingPartyId, validationMethod
VerificationProcessAgreement: Verification agreements with signatories, qualityStandard, mrvRequirements
Attestation: Attestation structure with attestor, signature, proofType
Using Standardized Properties in Schemas
Example: Geographic Location Implementation:
Using standardized GeographicLocation structure:
longitude (string): Longitude coordinate
latitude (string): Latitude coordinate
geoJsonOrKml (string): Geographic boundary data
Example: Carbon Credit Implementation:
Using standardized CRU structure:
quantity (string): Amount of credits
unit (enum): CO₂e or other unit specification
vintage (string): Year of emission reduction
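As a sketch of how these standardized structures might appear inside a credential subject (values and nesting are illustrative; the field names follow the property lists above):

```js
const credentialSubject = {
  GeographicLocation: {
    longitude: "-16.48",
    latitude: "14.15",
    geoJsonOrKml: '{"type":"Point","coordinates":[-16.48,14.15]}'
  },
  CRU: {
    quantity: "1250",
    unit: "tCO2e",   // CO2e unit specification
    vintage: "2024", // year of emission reduction
    status: "issued" // illustrative status value
  }
};
```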
Benefits of Standardized Properties
Cross-Methodology Interoperability: Projects can transition between methodologies while preserving core data structure:
Registry Aggregation: Registries can aggregate and compare data from different methodology implementations using consistent property structures.
Automated Quality Control: Standardized properties include built-in validation rules ensuring data consistency and preventing incomplete submissions.
Four Types of Required Field Settings
Guardian provides four distinct Required field settings that control field behavior and visibility. Understanding these types is crucial for proper schema design.
Required Field Types
1. None
Behavior: Optional field, visible to users
Use Case: Optional project information, supplementary data
Example: project_website_url, additional_notes
2. Hidden
Behavior: Not visible to users; used for system data or auto-calculatable fields whose expression is defined within a custom logic block
Use Case: Net VCUs, final baseline emission calculations
3. Required
Behavior: Mandatory field, visible to users; the form cannot be submitted until it is completed
Use Case: Essential user inputs
4. Auto Calculate
Behavior: Not visible to users, calculated automatically
Use Case: LHS parameters of equations, intermediate calculation results
Assignment: Must be assigned via expression field or custom logic block
Auto Calculate Field Details
Auto Calculate fields are essential for methodology calculations but require special handling:
Purpose:
Store Left-Hand Side (LHS) parameters of methodology equations
Hold intermediate calculation results for complex formulas
Maintain calculated values for audit trails and verification
Assignment Methods:
Expression Field: Set calculation formula directly in schema UI. Note that via UI, you'd only be able to access variables within that particular schema. Subschema or other schema variables won't be available.
Custom Logic Block: Assign values through policy calculation blocks; this is the most powerful and comprehensive approach.
Auto Calculate Example:
Calculation Assignment:
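A minimal customLogicBlock sketch (field keys are illustrative, not VM0033's actual keys) assigning an Auto Calculate field, following the same `document.credentialSubject` pattern used elsewhere in this handbook:

```js
// Inputs collected from the user
const baselineEmissions = document.credentialSubject.baseline_emissions_total || 0;
const projectEmissions = document.credentialSubject.project_emissions_total || 0;
const leakage = document.credentialSubject.leakage_total || 0;

// LHS parameter of the net ER equation, stored in an Auto Calculate field
document.credentialSubject.net_emission_reductions =
  baselineEmissions - projectEmissions - leakage;
```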
Schema UUIDs and Efficient Development
Every Guardian schema receives a unique identifier (UUID) when created. Understanding and leveraging schema UUIDs enables efficient development workflows, especially for large-scale policy management.
Schema UUID Structure
Guardian schema UUIDs follow the standard UUID format and are referenced in policies with a leading hash, e.g. `#9122bbd0-d96e-40b1-92f6-7bf60b68137c`.
UUID Properties:
Unique: Each schema gets a distinct identifier
Persistent: UUID remains constant after schema creation
Reference: Used in policy blocks to reference specific schemas
Immutable: UUID doesn't change when schema content is updated
UUID Benefits for Development
1. Bulk Find and Replace Operations
When updating policies with new schema versions, UUIDs enable efficient bulk operations: open the policy JSON in your favorite editor and do a single find-and-replace instead of manually re-selecting the schema from dropdowns in multiple places.
2. Policy Block Configuration
Policy workflow blocks reference schemas by UUID:
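A trimmed excerpt (reduced to the relevant properties; the tag is illustrative) showing how a requestVcDocumentBlock points at a schema by UUID:

```json
{
  "blockType": "requestVcDocumentBlock",
  "tag": "add_project_form",
  "schema": "#9122bbd0-d96e-40b1-92f6-7bf60b68137c",
  "idType": "UUID"
}
```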
Best Practices Summary
API Management: Use APIs for schema updates and bulk operations rather than recreating schemas from Excel.
Field Key Quality: Invest time in meaningful field key names during initial development - changing them later requires calculation code updates.
Required Type Planning: Choose appropriate Required types based on field purpose:
Use Auto Calculate for methodology equation results (only simple expressions that access variables within the same schema)
Use Required for essential user inputs
Use Hidden for intermediate results or calculation-related fields assigned in a custom logic block
Use None for optional information
Testing Integration: Test schema changes across all policy blocks that reference the schema UUIDs.
Ready for Next Steps
This chapter covered the essential advanced techniques: API schema management, proper field naming, Required field types, and UUID management. These concepts are fundamental for efficient methodology implementation and policy management.
The next chapter focuses on the testing and validation checklists that ensure schema implementations meet production requirements and maintain accuracy across complex methodology calculations.
Chapter 10: Monitoring Report Schema Development
This chapter teaches you how to build monitoring report schemas that handle time-series data collection and calculation updates. You'll learn the exact field-by-field process used for VM0033's monitoring schema, building on the PDD foundation from Chapter 9.
By the end of this chapter, you'll know how to create the structure yourself, understanding temporal data management, annual parameter tracking, and calculation update mechanisms.
Monitoring Schema Purpose and Structure
Monitoring schemas extend your PDD implementation to handle ongoing project operations over crediting periods. Unlike PDD schemas that capture initial project design, monitoring schemas handle:
Artifacts Collection
Comprehensive collection of test artifacts, calculation implementations, Guardian tools, and reference materials for methodology digitization
Overview
This directory contains essential artifacts used throughout the methodology digitization process, including real test data, production implementations, Guardian tools, and validation materials. All artifacts have been tested and validated for accuracy against their respective environmental methodologies.
Methodology Digitization Handbook
A comprehensive guide to digitizing environmental methodologies on Guardian platform
Summary
The Methodology Digitization Handbook is a comprehensive guide for transforming environmental methodologies from PDF documents into fully functional, automated policies on the Guardian platform. Using VM0033 (Methodology for Tidal Wetland and Seagrass Restoration) as our primary reference example, this handbook provides step-by-step instructions, best practices, and real-world examples for every aspect of the digitization process.
Chapter 3: Guardian Platform Overview
Guardian is a production platform specifically engineered for digitizing environmental certification processes and creating verifiable digital assets. This chapter provides the technical foundation for understanding how complex methodologies like VM0033 are transformed into automated, blockchain-verified workflows that maintain scientific rigor while dramatically improving process efficiency.
Technical Focus Areas:
Architecture Design: How Guardian's microservices architecture supports methodology complexity at scale
Policy Workflow Engine: The core system that converts methodology requirements into executable digital processes
Chapter 13: Policy Workflow Architecture and Design Principles
Understanding Guardian's Policy Workflow Engine and connecting your Part III schemas to automated certification workflows
Part III gave you production-ready schemas. Chapter 13 transforms those static data structures into living, breathing policy workflows that automate your entire methodology certification process.
Guardian's Policy Workflow Engine (PWE) operates on a simple but powerful principle: connect modular blocks together to create sophisticated automation. Think of it like building with LEGO blocks, where each block serves a specific purpose but gains meaning through its connections with others.
Confirm integration with Guardian Tools like AR Tool 14
Quality Assurance Standards
Validation Criteria
✅ Calculation Accuracy: All calculations must match methodology requirements exactly
✅ Guardian Compatibility: All artifacts tested with Guardian platform
✅ Production Ready: Code and configurations validated in production environment
✅ Documentation Complete: All artifacts include usage instructions and validation results
File Integrity
All JSON files validated for proper formatting
All Excel files tested for calculation accuracy
All JavaScript code tested in Guardian environment
All PDF documents verified for completeness
Integration with Handbook Parts
Part III (Schema Design)
Use schema templates for consistent schema development
Reference PDD and Monitoring schema examples
Follow Excel-first approach patterns
Part IV (Policy Workflow)
Reference vm0033-policy.json for production workflow patterns
Use AR Tool integration examples for Guardian Tools
Part V (Calculation Logic)
Use er-calculations.js for customLogicBlock implementation
Reference AR-Tool-14.json for Guardian Tools development
Test with final-PDD-vc.json for validation
Common Usage Patterns
For New Methodology Implementation
Start with schema-template-excel.xlsx for schema design
Reference VM0033-Methodology.md for methodology understanding
Use er-calculations.js patterns for calculation implementation
Validate against test artifacts like VM0033_Allcot_Test_Case_Artifact.xlsx
For Guardian Tools Development
Study AR-Tool-14.json for three-block pattern implementation
Reference ar-am-tool-14-v4.1.pdf for methodology understanding
Test customLogicBlocks with final-PDD-vc.json input
Validate results against Excel test artifacts
Compare with real-world project data
Complete Artifact Collection: This collection provides everything needed for Guardian methodology digitization, from initial schema design through production deployment and testing.
Regular Updates: Artifacts are continuously updated based on Guardian platform evolution and methodology refinements. Always use the latest versions for development.
Production Validation Required: While all artifacts are tested, always validate in your specific Guardian environment before production deployment.
The Block-Event Architecture
Guardian policies work through workflow blocks that communicate via events. When a user submits a document, completes a calculation, or makes an approval decision, these actions trigger events that flow to other blocks, creating automated workflows.
VM0033 demonstrates this perfectly. For instance - when a Project Developer submits a PDD using a requestVcDocumentBlock, it triggers events that:
Refresh the document grid for Standard Registry review
Update project status to "Waiting to be Added" (Listing process)
Enable VVB assignment workflow once registry accepts the listing
Workflow Block Categories
Guardian provides four main block categories:
Data Input and Management: Collect and store information
requestVcDocumentBlock: Generate forms from your Part III schemas
sendToGuardianBlock: Save documents to database or Hedera blockchain
interfaceDocumentsSourceBlock: Display document grids with filtering capabilities
Logic and Calculation: Process and validate data
customLogicBlock: Execute JavaScript or Python calculations for emission reductions
documentValidatorBlock: Validate data against your methodology rules
switchBlock: Create conditional workflow branches
Token and Asset Management: Handle the credit issuance and retirement lifecycle
mintDocumentBlock: Issue VCUs (tokens) based on verified emission reductions or removals
tokenActionBlock: Transfer, retire, or manage existing tokens
retirementDocumentBlock: Permanently remove tokens from circulation
Container and Navigation: Organize user experience
interfaceContainerBlock: Create tabs, steps, and layouts
policyRolesBlock: Manage user role assignment
buttonBlock: Add custom actions and state transitions
From Part III Schemas to Policy Workflows
Your schemas become the foundation for workflow automation. Here's how they connect:
Schema UUID Integration
Each schema from Part III has a unique UUID that becomes a reference in policy blocks:
That schema UUID (#9122bbd0-d96e-40b1-92f6-7bf60b68137c) is your PDD schema from Part III. Guardian automatically generates a form with all your schema fields, validation rules, and input types.
Field Key Mapping
Schema field keys become variables in calculation blocks:
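For example (field keys are illustrative), a customLogicBlock reads schema fields directly off the credential subject:

```js
const projectArea = document.credentialSubject.project_area_ha || 0;
const erPerHectare = document.credentialSubject.er_per_hectare || 0;

document.credentialSubject.annual_emission_reductions = projectArea * erPerHectare;
```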
Validation Rule Translation
Schema validation rules automatically enforce data quality:
Required fields become mandatory form inputs
Number ranges become input validation
Enum values become dropdown selections
Pattern matching ensures data format consistency
Role-Based Workflow Design
Environmental methodologies require clear stakeholder separation. Guardian implements this through role-based access control:
Standard Stakeholder Roles
OWNER (Standard Registry)
Manages the overall certification program and policy
Approves VVBs and validates projects
Authorizes token minting (issuance)
Reviews all documentation received from developers or VVBs and requests clarifications
Maintains audit trails and program integrity
Project_Proponent (Project Developer)
Submits PDDs and monitoring reports
Assigns VVBs for validation/verification
Receives carbon credits (minted tokens) upon successful verification
Tracks project status and documentation
VVB (Validation and Verification Body)
Registers as independent auditor
Validates project submissions
Verifies monitoring reports
Submits validation/verification reports
Document Access Patterns
Each role sees different views of the same data:
Project Developers only see their own projects, while Standard Registry sees all projects for oversight. VVBs see projects assigned to them for validation/verification.
Event-Driven Workflow Patterns
Traditional workflows are linear: Step 1 → Step 2 → Step 3. Guardian workflows are event-driven, allowing flexible, responsive automation.
Event Types and Flow
RunEvent: Triggered when a block completes
RefreshEvent: Updates UI displays and document grids
TimerEvent: Time-based triggers for deadlines or schedules
ErrorEvent: Handles validation failures and error recovery
VM0033 shows sophisticated event patterns:
When a Project Developer clicks "Add Project" (Button_0 output), it triggers the save_added block, which stores the project and refreshes the interface.
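In the policy JSON, that wiring looks roughly like the following (a trimmed sketch; exact property sets vary by Guardian version):

```json
{
  "events": [
    {
      "source": "Button_0",
      "target": "save_added",
      "input": "RunEvent",
      "output": "RunEvent",
      "disabled": false
    }
  ]
}
```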
Multi-Path Workflows
Events enable conditional branching. A VVB's validation decision creates different event paths:
This flexibility mirrors real-world certification processes where outcomes depend on validation results, not predetermined sequences.
VM0033 Architecture Patterns
VM0033's production policy demonstrates proven architecture patterns worth understanding:
Three-Tier Stakeholder Design
Tier 1: Registration and Setup
VVB registration and approval
Project listing and initial review
Role assignment and permissions setup
Tier 2: Validation and Verification
Project validation workflows
Monitoring report submission and verification
Document review and approval processes
Tier 3: Token Management and Audit
Emission reduction calculation and validation
VCU token minting based on verified results
Trust chain generation and audit trail creation
Document State Management
VM0033 tracks document states throughout the certification lifecycle:
Each state transition triggers the appropriate events, status updates, notifications, and access control changes.
Practical Implementation Strategy
Reuse Rather Than Rebuild: Instead of creating policies from scratch, import existing policies like VM0033, remove their schemas, add your Part III schemas, and modify the workflow logic. This approach saves weeks of development time and provides proven workflow patterns as your foundation.
To reuse VM0033: Import the policy → Delete existing schemas → Import your Part III schemas → Update schema IDs at relevant places with bulk find and replace → Modify token minting rules → Test with your data.
Start with Document Flow
Begin by mapping your methodology's document flow:
What documents need submission? (PDD, monitoring reports)
Who reviews each document? (Registry, VVBs)
What approvals are required? (Validation, verification)
When are tokens minted? (After verification approval)
Schema Integration Planning
Map your Part III schemas to workflow purposes:
PDD Schema: Project submission and validation workflow
Monitoring Schema: Ongoing reporting and verification workflow
Your Part III schemas contain calculation fields that become variables in customLogicBlock:
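A hedged sketch (field keys and the 20% buffer are illustrative, not VM0033's actual values) of how calculation output is written back for downstream review and minting:

```js
const baseline = document.credentialSubject.baseline_emissions || 0;
const project = document.credentialSubject.project_emissions || 0;
const leakage = document.credentialSubject.leakage || 0;

// Net ERs after an illustrative 20% non-permanence buffer deduction
const netER = baseline - project - leakage;
document.credentialSubject.net_vcus = netER * (1 - 0.2);
```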
The calculation results feed directly into schemas for review by the VVB/Registry and are later accessed via mintDocumentBlock for VCU issuance.
Development Workflow
Phase 1: Architecture Planning
Map stakeholder roles and permissions
Design document flow and state transitions
Plan event connections between workflow blocks
Identify calculation requirements and token minting rules
Phase 2: Block Configuration
Configure data input blocks with Part III schemas
Set up calculation blocks with methodology formulas
Create container blocks for user interface organization
Connect blocks through event definitions
Phase 3: Testing and Refinement
Test complete workflows with sample data
Validate calculations against Part III test artifacts
Refine user interfaces and error handling
Optimize performance and user experience
Key Takeaways
Guardian's Policy Workflow Engine transforms static schemas into dynamic certification workflows. The event-driven architecture provides flexibility while maintaining audit trails and stakeholder separation.
VM0033 offers a proven template for environmental methodology automation. Rather than building from scratch, leverage existing patterns and focus your effort on methodology-specific calculations and business rules.
Part III schemas integrate seamlessly with policy workflows. Schema UUIDs become block references, field keys become calculation variables, and validation rules become workflow automation.
Next Steps: Chapter 14 explores Guardian's 25+ workflow blocks in detail, showing step-by-step configuration for data collection, calculations, and token management using VM0033's production examples.
Prerequisites Check: Ensure you have:
Completed Part III with production-ready schemas
Access to Guardian platform for hands-on practice
VM0033.policy file for reference and reuse
Understanding of your methodology's stakeholder workflow
Time Investment: ~25 minutes reading + ~60 minutes hands-on practice with Guardian policy architecture and planning
Annual Data Collection: Time-series parameter measurements across project lifetime
Calculation Updates: Dynamic recalculation of emission reductions based on new monitoring data
Quality Control: Data validation and evidence documentation for verification activities
Temporal Relationships: Maintaining connections between annual data and cumulative results
Methodology PDFs (including VM0033) almost always include a section on the data and parameters to be monitored. Typically, those fields are submitted as part of the monitoring report.
Subsection of Herbaceous Vegetation Stratum Data for Project in MR schema
Building the Primary Monitoring Schema
Step 1: Create Main Monitoring Schema Header
Start your monitoring Excel file with the main schema structure:
This establishes the monitoring schema as a Verifiable Credentials type that will create on-chain records for each monitoring submission.
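As an illustrative sketch (the questions are hypothetical, not VM0033's exact rows), the top of the worksheet might pair the template columns like this:

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | Date | | TRUE | Monitoring period start date | No | |
| Yes | Date | | TRUE | Monitoring period end date | No | |
| Yes | String | | TRUE | Monitoring report identifier | No | |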
Step 2: Add Temporal Framework Fields
The first fields should establish the temporal context for monitoring data:
These fields establish when the monitoring data was collected and create unique identifiers for each monitoring period.
Step 3: Add Monitoring Period Input Structure
Create the main monitoring data collection framework:
This references a sub-schema containing the detailed monitoring parameter collection fields.
Step 4: Create Monitoring Period Inputs Sub-Schema
Create a new worksheet "(New) Monitoring Period Inputs" with the monitoring parameter structure:
Monitoring Period Inputs Sheet
Implementing Stratum-Level Data Collection
Creating Stratum Monitoring Sub-Schemas
For methodologies with multiple strata like VM0033, create stratum-specific monitoring:
Create "(New) MP Herbaceous Vegetat 1" worksheet(names are trimmed to accomodate excel's limitations):
Adding Change Detection Fields
Monitor changes from baseline or previous periods:
Create "(New) Annual Inputs Paramet 1" worksheet with project-specific parameters:
Implementing Quality Control and Evidence Collection
Adding Data Quality Indicators
Include quality control fields in your monitoring schemas:
Create "Data quality level (enum)" tab:
Evidence Documentation Structure
Add fields for verification evidence:
Calculation Update Mechanisms
Adding Calculation Fields
Include fields that trigger calculation updates:
Linking to PDD Parameters
Ensure monitoring parameters connect to PDD estimates:
Temporal Boundary Management
Crediting Period Tracking
Add fields to manage crediting periods:
Create "Crediting period (enum)" tab:
Historical Data References
Enable access to previous monitoring data:
VVB Verification Support Fields
Adding Verification Workflow Fields
Include fields supporting VVB verification activities:
Create "Verification status (enum)" tab:
Audit Trail Fields
Maintain audit trail for verification:
Advanced Monitoring Features
Conditional Monitoring Based on PDD Selections
Link monitoring requirements to PDD method selections:
Multi-Year Averaging Fields
For parameters requiring multi-year tracking:
Uncertainty Quantification
Add uncertainty tracking as required by methodology:
Performance Optimization for Long-term Monitoring
Efficient Data Structure Design
Use sub-schemas to group related annual data:
Create efficient annual data structure in "(New) Project Emissions Annual":
Archive and Retrieval Planning
Include fields supporting long-term data management:
Testing Your Monitoring Schema
Guardian provides built-in validation when importing Excel schemas and testing schema functionality through the UI.
Validation Checklist for Monitoring Schemas
Before deploying, verify:
Temporal fields properly identify monitoring periods
Parameter names match PDD schema conventions
Calculation fields properly reference annual data
Quality control fields support verification requirements
Evidence fields accept appropriate file types
Sub-schemas properly handle stratum-level data
Conditional logic aligns with PDD method selections
Important: Field Key Management for Monitoring Schemas
Just like PDD schemas, Guardian generates default field keys when importing monitoring Excel schemas. This is especially important for monitoring schemas since they often have time-series calculations.
After Import - Review and Rename Field Keys:
Navigate to schema management in Guardian
Open your imported monitoring schema for editing
Review each field's "Field Key" property
Rename keys for calculation-friendly monitoring code:
monitoring_year_t instead of G5
carbon_stock_current_period instead of carbonStockCurrentPeriod
emission_reduction_annual instead of emissionReductionAnnual
biomass_change_since_baseline instead of biomassChangeSinceBaseline
Why This Matters for Monitoring: Time-series calculations rely heavily on clear field naming:
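A short sketch (keys follow the renamed examples above; the calculation itself is illustrative) of monitoring-period code inside a customLogicBlock:

```js
const year = document.credentialSubject.monitoring_year_t;
const currentStock = document.credentialSubject.carbon_stock_current_period || 0;
const baselineChange = document.credentialSubject.biomass_change_since_baseline || 0;

// Annual emission reduction for this monitoring period
const annualER = currentStock - baselineChange;
document.credentialSubject.emission_reduction_annual = annualER;

// Descriptive keys also keep year-indexed bookkeeping readable,
// which matters across a 40-year time series
document.credentialSubject[`er_year_${year}`] = annualER;
```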
Integration Testing with PDD Schema
Test parameter name consistency between PDD and monitoring field keys
Validate calculation updates when monitoring data changes
Verify temporal relationship tracking works correctly
Test VVB verification workflow with monitoring submissions
Validate cumulative calculation accuracy over multiple periods
Trigger Automatic Calculations
Monitoring data submission triggers emission reduction calculations
Updated results flow to token minting calculations
Quality control validation occurs before calculation updates
Support Verification Processes
VVB receives monitoring data with evidence documentation
Verification decisions update project status and calculation eligibility
Approved monitoring data enables token issuance for the monitoring period
Best Practices for Monitoring Schema Development
Parameter Consistency: Ensure monitoring parameter names and units exactly match PDD schema definitions to enable proper calculation updates.
Quality Control Integration: Include quality indicators and evidence fields for every critical measurement to support verification workflows.
Performance Planning: It's important to design efficient sub-schema structures that maintain performance as historical monitoring data accumulates over project lifetimes.
Temporal Logic: Plan temporal relationships carefully to support both period-specific and cumulative calculations across crediting periods.
Evidence Management: Include appropriate file upload and documentation fields to support verification requirements and audit trail maintenance.
VVB Workflow Design: Design verification support fields that enable efficient VVB review and approval processes without overwhelming interfaces.
Maintain and update existing digitized methodologies
Ensure compliance with evolving regulatory requirements
Optimize methodology performance and user experience
Methodology Developers and Carbon Market Professionals
New to Guardian ecosystem seeking to digitize methodologies
Environmental consultants expanding into digital MRV
Carbon project developers wanting to understand the digitization process
Technical Implementers
Developers working on Guardian-based solutions
System integrators connecting Guardian with external systems
QA teams testing methodology implementations
Regulatory and Compliance Teams
Key Features and Benefits
Complete Process Coverage: From initial PDF analysis to production deployment with VM0033 digitization example throughout.
Features
Description
Comprehensive Coverage
• Complete process from PDF analysis to deployment
• Real examples from VM0033 implementation
• Practical focus with actionable steps
• Best practices from successful digitizations
Why VM0033?
• 135-page methodology that covers most challenges
• Active use in blue carbon projects
• Guardian policy being used by Verra in production
• Built in collaboration with Verra & Allcot with real project data and testing
Streamlined Structure
• 27 focused chapters across 8 parts
• 20-30 hours total reading time
• Practical, hands-on approach throughout
• Reduced complexity while maintaining comprehensive coverage
Handbook Structure and Flow
Total Time Investment: 20-30 hours for complete reading
Part I: Foundation (Chapters 1-3) - 20-30 minutes
Purpose: Establish understanding of methodology digitization and Guardian platform Outcome: Clear comprehension of the digitization process and platform capabilities
Chapter 1: Introduction to Methodology Digitization
Chapter 2: Understanding VM0033 Methodology
Chapter 3: Guardian Platform Overview for Methodology Developers
Part II: Analysis and Planning (Chapters 4-7) - 30-40 minutes
Purpose: Systematic analysis of methodology documents and preparation for digitization Outcome: Complete understanding of methodology requirements and test artifacts
Chapter 4: Methodology Analysis and Decomposition
Chapter 5: Equation Mapping and Parameter Identification
Chapter 6: Tools and Modules Integration
Chapter 7: Test Artifact Development
Part III: Schema Design and Development (Chapters 8-12) - 3-4 hours
Purpose: Practical schema development and Guardian management features Outcome: Production-ready PDD and monitoring schemas with testing validation
Chapter 8: Schema Architecture and Foundations
Chapter 9: Project Design Document (PDD) Schema Development
Chapter 12: Schema Testing and Validation Checklist
Part IV: Policy Workflow Design and Implementation (Chapters 13-17) - 3-4 hours
Purpose: Transform Part III schemas into complete Guardian policies with automated workflows Outcome: Production-ready policies with stakeholder workflows and token minting
Chapter 13: Policy Workflow Architecture and Design Principles
Chapter 14: Guardian Workflow Blocks and Configuration
Chapter 15: VM0033 Policy Implementation Deep Dive
Chapter 16: Advanced Policy Patterns and Testing
Chapter 17: Policy Deployment and Production Management
Part V: Calculation Logic Implementation (Chapters 18-21) - 2-3 hours
Purpose: Convert methodology equations into executable code and implement comprehensive testing Outcome: Production-ready calculation implementations with Guardian's testing framework
Chapter 18: Custom Logic Block Development
Chapter 19: Formula Linked Definitions (FLDs)
Chapter 20: Guardian Tools Architecture and Implementation
Chapter 21: Calculation Testing and Validation
Part VI: Integration and Testing (Chapters 22-23) - 1-2 hours
Purpose: End-to-end testing and API automation for production deployment Outcome: Production-ready methodology with testing coverage and API integration
🏗️ Part III: Schema development and testing (Available Now)
⚙️ Part IV: Complete policy workflow development (Available Now)
🧮 Part V: CustomLogicBlock development, Guardian Tools, and testing (Available Now)
🔗 Part VI: End-to-end testing, API integration, and production deployment validation (Available Now)
🚀 Part VII: Deployment and Maintenance - User management, monitoring, and maintenance procedures (In Progress)
⚡ Part VIII: External integration and troubleshooting (In Progress)
Available Content
Parts I-VI are now available with all twenty-three chapters complete and ready for use, covering the complete foundation through production deployment and API integration.
| Part | Status | Chapters | Description |
| --- | --- | --- | --- |
| Part I | ✅ Available | 1-3 | Foundation concepts, VM0033 overview, Guardian platform introduction |
| Part II | ✅ Available | 4-7 | Methodology analysis, equation mapping, tools integration, test artifacts |
| Part III | ✅ Available | 8-12 | Schema design and development with testing validation |
Shared Resources
🔧 Shared Resources - Templates, integration guides, and reference materials
📄 Templates - Standardized chapter and section templates
This handbook represents the collective knowledge and experience of the Guardian community, with special thanks to the Verra and Allcot team for their collaboration on the VM0033 implementation that serves as our primary example throughout this guide.
Integration Capabilities: Technical mechanisms for embedding methodology logic within broader certification workflows
Implementation Framework: Systematic approach to transforming methodology documents into functional digital systems
Guardian Architecture for Methodologies
Guardian's microservices architecture provides the technical foundation needed to handle the computational and organizational complexity of advanced environmental methodologies like VM0033 at production scale.
Core Technical Components:
Service Architecture:
guardian-service: Central orchestration service managing policy execution and business logic
policy-service: Workflow execution engine that processes methodology-specific rules and requirements
worker-service: Dedicated calculation processing service handling intensive computational tasks
api-gateway: External integration hub providing secure interfaces for data exchange and validation
frontend: Multi-role user interface system supporting complex stakeholder interactions
Architecture Benefits for Complex Methodologies:
Computational Scalability: Distributed processing handles simultaneous calculation across multiple carbon pools, thousands of monitoring points, and multi-decade time series
Stakeholder Complexity: Service separation enables tailored interfaces and access control for diverse stakeholder types (project developers, validators, registries, technical experts)
Reliability at Scale: Microservices isolation ensures that processing intensive calculations doesn't impact user interface responsiveness or data integrity
Integration Flexibility: Modular design supports integration with external validation systems, monitoring equipment, and third-party calculation tools
Integration Capabilities:
Hedera Hashgraph: Immutable record-keeping
IPFS: Decentralized document storage
External APIs: Data validation and verification
Result: VM0033's extensive documentation, monitoring data, and verification records stored in tamper-proof, auditable formats
The Policy Workflow Engine (PWE) is Guardian's core innovation, transforming certification processes into dynamic, executable workflows for environmental asset creation and verification.
Complexity Consideration: VM0033 methodology contains intricate decision trees and calculation procedures requiring careful mapping to Guardian's workflow blocks for complete compliance.
Core PWE Concept: Environmental certification processes are sophisticated workflows where methodology-specific requirements (like VM0033's carbon accounting) are embedded within broader certification procedures involving multiple stakeholders, decision points, data collection, calculations, and verification steps.
PWE Components for VM0033:
Workflow Block Types:
Container Blocks: Organize processes into logical groupings
Step Blocks: Guide users through sequential procedures
Calculation Blocks: Handle mathematical operations for carbon accounting
Request Blocks: Manage extensive data collection requirements
Guardian's schema system provides the foundation for structured data management, defining data structures, validation rules, and relationships that ensure methodology compliance and enable automated processing.
Schema Architecture:
System vs. Custom Schemas:
System Schemas: Core platform functionality
Custom Schemas: Methodology-specific data (PDD, MR, and the project and baseline emissions sections within them, etc.)
VM0033 Data Requirements:
Project boundaries and baseline conditions
Monitoring results and stakeholder information
Calculation parameters with specific validation requirements
Complex relationships between data elements
Key Capabilities:
Verifiable Credentials Integration:
Purpose: Extensive documentation and verification requirements
Guardian's integration with Hedera Hashgraph provides immutable record-keeping essential for environmental asset verification and trading, ensuring all methodology implementation activities are recorded in tamper-proof, publicly auditable formats.
Production Validation: VM0033's Guardian implementation successfully deployed in production, demonstrating platform capability to handle complex, real-world methodology requirements at scale.
Result: Automated verification compliance with appropriate stakeholder involvement
Credit Issuance (VM0033 Embedded):
Certification Process: Complete credit issuance workflow incorporating VM0033's calculation procedures, buffer requirements, and metadata within broader registry standards
Guardian: Token blocks + minting procedures ensuring compliance with both registry standards and VM0033 methodology requirements
Result: Automated credit issuance where VM0033 compliance is embedded within complete certification process
Platform Capability Demonstration: Guardian's flexible architecture, comprehensive workflow blocks, and robust data management transform complete certification processes - with embedded methodology requirements like VM0033 - into automated, verifiable, auditable digital workflows that maintain full compliance while enabling efficient processing from project registration through credit issuance.
- Working examples, test cases, and validation tools
- Python tool for data extraction and validation
Key Capabilities Covered
Guardian's microservices architecture for methodology complexity
Policy Workflow Engine for automated compliance
Schema system for structured data management
Blockchain integration for immutable records
VM0033 complexity mapping to Guardian features
Part I Complete: You now have the complete foundation needed for methodology digitization - conceptual understanding, domain knowledge, and technical platform capabilities. You're ready to begin systematic methodology analysis in Part II.
Chapter 9: Project Design Document (PDD) Schema Development
This chapter teaches you how to build Guardian schemas step-by-step for PDD implementation. You'll learn the exact field-by-field process used for VM0033, translating methodology analysis from Part II into working Guardian schema structures.
By the end of this chapter, you'll know how to create a VM0033-style PDD schema structure yourself, understanding each Guardian field type, conditional logic implementation, and how methodology parameters become functional data collection forms.
Guardian Schema Development Process
Complex Guardian schemas can be built using Excel templates that define the data structure, and then imported into Guardian. The schema template shows all available field types and their configuration options.
Alternative Schema Building Methods:
Excel-first approach (recommended for complex methodologies): Design in Excel, then import - covered in this chapter
Guardian UI approach: Build directly in Guardian interface - see Creating Schemas Using UI
The Excel-first approach also enables easier collaboration with carbon domain experts and non-technical stakeholders, allowing better back-and-forth feedback when schemas are complex.
Schema Template Structure
Every Guardian schema follows this Excel structure:
| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
Field Configuration Meaning:
Required Field: Whether users must complete this field before submission
Field Type: Data type (String, Number, Date, Enum, Boolean, Sub-Schema, etc.)
Parameter: Reference to enum options or calculation parameters
Visibility: Field display conditions (TRUE=always visible, FALSE=hidden unless condition met)
Building the Primary Schema Structure
Let's build a PDD schema step-by-step, starting with the main schema definition like VM0033's "Project Description (Auto)" tab.
Step 1: Create Main Schema Header
Start your Excel file with these header rows:
This establishes your schema as a Verifiable Credentials type that Guardian will process into on-chain records.
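A minimal sketch of the header area (labels are illustrative of the template's header rows, not exact cell contents):

```
Schema name:        Project Description (Auto)
Schema description: PDD data collection for VM0033 projects
Entity:             VC (Verifiable Credential)
```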
Step 2: Add Certification Pathway Selection
The first functional field should be your primary conditional logic driver. For VM0033, this is certification type selection:
This creates an enum field that determines which additional requirements appear. The parameter reference "Choose project certific (enum)" points to a separate enum tab defining the options.
Create the Enum Tab: Add a new worksheet named "Choose project certific (enum)" with the option values (sheet names may be trimmed to accommodate Excel's limitations):
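The option values themselves are an assumption here, inferred from the conditional logic described below (CCB-specific sections appear only when a CCB option is selected):

```
VCS
VCS + CCB
```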
Step 3: Add Conditional Sub-Schemas
Based on the certification selection, different sub-schemas should appear. Add conditional schema references:
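Row 6: No | VCS Project Description v4.4 | | TRUE | VCS Project Description | No |
Row 7: No | CCB | | FALSE | CCB & VCS Project Description | No |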
The VCS sub-schema always appears (TRUE visibility), while CCB appears only when CCB certification is selected (FALSE = conditional visibility based on enum selection).
Step 4: Create Sub-Schema Structures
VCS Project Description Sub-Schema
Create a new worksheet "VCS Project Description v4.4" with basic project information:
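Description
Schema Type | Sub-Schema
Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | String | | | Project title | No | example
Yes | String | | | Project ID | No | example
Yes | URL | | | Project Website | No | https://example.com
Yes | Date | | | Start Date | No | 2000-01-01
Yes | Date | | | End Date | No | 2000-01-01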
CCB Sub-Schema (Conditional)
Create "CCB" worksheet for additional community/biodiversity requirements:
Implementing Project Information Fields
Geographic Data Capture
Add geographic information fields to your main schema or sub-schema:
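Yes | Number | | | Latitude (Decimal Degrees) | No | 1
Yes | Number | | | Longitude (Decimal Degrees) | No | 1
Yes | Number | | | Acres/Hectares | No | 1
Yes | Enum | AcresHectares (enum) | | Acres/Hectares | No | Acres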
Create the unit selection enum tab "AcresHectares (enum)":
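Schema name | Project Description (Auto)
Field name | Acres/Hectares
Loaded to IPFS | No
Acres |
Hectares |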
Project Timeline Fields
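Capture the project's temporal boundaries:
Yes | Date | | | Project Start Date | No | 2000-01-01
Yes | Date | | | Project End Date | No | 2000-01-01
Yes | Number | | | Crediting Period Length (years) | No | 10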
Adding Methodology-Specific Parameters
Now translate your Part II parameter analysis into Guardian fields. For VM0033's biomass parameters:
Step 1: Add Parameter Collection Fields
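Yes | String | | | Stratum number | No | example
Yes | Number | | | Area of stratum (ha) – Ai,t | No | 1
Yes | Number | | | Biomass density (t d.m. ha-1) | No | 1
Yes | String | | | Data source for biomass density | No | example
Yes | String | | | Justification for parameter selection | No | example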
Step 2: Add Calculation Method Selection
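Add an enum field that captures the user's choice of calculation method:
Yes | Enum | Which method did you us (enum) | | Which method did you use for estimating change in carbon stock in trees? | No | Between two points of time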
Create the method enum tab:
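Schema name | Project Description (Auto)
Field name | Which method did you use for estimating change in carbon stock in trees?
Loaded to IPFS | No
Between two points of time |
Difference of two independent stock estimations |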
Step 3: Add Method-Specific Parameter Fields
Add conditional fields that appear based on method selection:
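No | Number | | FALSE | Mean annual change in carbon stock (t CO2e yr-1) | No | 1
No | Number | | FALSE | Carbon fraction of tree biomass (CF_TREE) | No | 1
No | Number | | FALSE | Default mean annual increment (Δb_FOREST) | No | 1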
These fields have FALSE visibility, meaning they appear conditionally based on the method selection enum.
Integrating AR Tools and External Modules
Adding AR Tool Integration
VM0033 uses AR Tool 14 for biomass calculations. Add tool integration:
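Yes | AR Tool 14 | | | AR Tool 14 | No |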
Create AR Tool Sub-Schema
Create "AR Tool 14" worksheet for tool-specific parameters:
Implementing Baseline and Project Calculations
Baseline Scenario Fields
Create a sub-schema for baseline emissions:
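Reference it from the main schema:
Yes | (New) Final Baseline Emissions | | | Baseline Emissions | No |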
Create "(New) Final Baseline Emissions" worksheet:
Understanding Visibility Options:
TRUE: Always visible to users
FALSE: Visible only when referenced condition is met
Hidden: Never visible to users (system fields)
Complex Conditional Logic
For multiple conditions, Guardian evaluates enum selections to determine field visibility. The FALSE visibility fields become visible when their referenced enum is selected appropriately.
Quality Control and Validation
Required Field Validation
Use "Yes" in Required Field column to enforce completion:
Data Type Validation
Guardian automatically validates based on Field Type:
Number: Only accepts numeric values
Date: Validates date format (e.g., 2000-01-01)
Email: Validates email format
URL: Validates URL format
Pattern Validation
For custom validation patterns (for example, regular expressions on string fields), configure the pattern on the field in Guardian's schema editor where supported.
Testing Your Schema Structure
Validation Checklist
Before importing to Guardian, verify:
All enum references have corresponding enum tabs
Required Field column uses only Yes/No
Field Types match Guardian template options
Visibility logic is consistent (TRUE/FALSE/Hidden)
Import Testing and Schema Refinement
Save Excel file with proper structure
Import to Guardian
Test conditional logic with different selections
Validate auto-calculate fields
Important: Field Key Management
When Guardian imports Excel schemas, it generates default field keys that may not be meaningful for calculation code. For example:
Excel field "Biomass density (t d.m. ha⁻¹)" becomes field key "G5", named after the Excel cell where the field was found
Default keys make calculation code harder to read and maintain
Best Practice: After import, open the schema in Guardian UI to rename field keys:
Navigate to schema management in Guardian
Open your imported schema for editing
Review each field's "Field Key" property
Rename keys to be calculation-friendly (for example, biomass_density_stratum_i instead of G5)
Why This Matters: Meaningful field keys make calculation code much easier to write and maintain, as the field-key comparison example later in this guide shows.
Connecting to Monitoring Schemas
Your PDD schema establishes the foundation that monitoring schemas build upon. Key connections:
Parameter Continuity
Ensure PDD parameters have corresponding monitoring equivalents:
PDD: Initial biomass density estimate
Monitoring: Annual biomass density measurements
Calculation Consistency
Use same parameter names and calculation approaches:
PDD parameter key: biomass_density_initial
Monitoring parameter key: biomass_density_year_t
Conditional Logic Alignment
Method selections in PDD should drive monitoring parameter requirements:
Direct method PDD → Direct measurement monitoring fields
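As a minimal sketch (pdd and monitoringReport are hypothetical document objects exposing these field keys), aligned naming makes cross-schema calculations self-explanatory:

```javascript
// Hypothetical PDD and monitoring documents with aligned field keys
const initialStock = pdd.biomass_density_initial;             // estimate at project start
const currentStock = monitoringReport.biomass_density_year_t; // measurement in year t

// Change in biomass density since project start
const stockChange = currentStock - initialStock;
```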
PDD Schema Development Best Practices:
Start Simple: Begin with basic project information, then add complexity systematically.
Test Incrementally: Validate each section before adding the next level of complexity.
Use Sub-Schemas: Break complex sections into manageable sub-schema components.
Plan Conditionals: Design conditional logic to reduce user interface complexity while maintaining requirement coverage.
Link to Analysis: Every parameter should trace back to specific methodology requirements from Part II analysis.
Validate with Stakeholders: Test schema workflows with actual Project Developers and VVBs before production deployment.
The next chapter builds on this PDD foundation to create monitoring schemas that handle time-series data collection and calculation updates over project lifetimes.
Chapter 5: Equation Mapping and Parameter Identification
After completing the analysis approach in Chapter 4, we faced the challenge of extracting all the mathematical components from VM0033's 130-page methodology. The document contained dozens of equations scattered across different sections, with complex dependencies between parameters that weren't always obvious. This chapter shares the recursive analysis approach we developed to systematically map every calculation and identify all required parameters.
The recursive analysis technique works backwards from the final calculation goal to identify every single input needed. Instead of trying to read through equations linearly, we start with what we want to calculate and trace backwards until we reach basic measured values or user inputs. This approach ensured we didn't miss any dependencies and helped us understand how all the calculations fit together.
Understanding the Recursive Analysis Approach
When we first looked at VM0033's main equation, it seemed straightforward:
NERRWE = GHGBSL - GHGWPS + FRP - GHGLK
Where:
NERRWE = Net CO₂e emission reductions from the wetland project activity
GHGBSL = Net CO₂e emissions in the baseline scenario
GHGWPS = Net CO₂e emissions in the project scenario
FRP = Fire reduction premium (bonus for reducing fire risk)
GHGLK = Net CO₂e emissions due to leakage
But each of these terms turned out to have its own complex calculations. GHGBSL alone involved multiple sub-calculations for different types of emissions, time periods, and restoration activities. We quickly realized we needed a systematic way to trace through all these dependencies.
The Recursive Process We Used:
Start with final goal: NERRWE (what we ultimately want to calculate)
Identify direct dependencies: GHGBSL, GHGWPS, FRP, GHGLK
For each dependency, repeat the process: What do we need to calculate GHGBSL?
Continue until reaching basic inputs: Measured values, user inputs, or default factors
This process revealed that calculating NERRWE for a mangrove project requires hundreds of individual parameters and intermediate calculations, many of which weren't obvious from just reading the methodology sequentially.
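A minimal sketch of this recursive tracing in JavaScript (the dependency map below is illustrative, covering only a tiny fraction of VM0033's real structure):

```javascript
// Illustrative dependency map: each parameter lists the inputs it needs.
// Parameters absent from the map are basic inputs (measurements, user inputs, or defaults).
const dependencies = {
  NERRWE: ["GHGBSL", "GHGWPS", "FRP", "GHGLK"],
  GHGBSL: ["baselineBiomassStock", "strataAreas"],
  GHGWPS: ["treeGrowth", "projectActivityEmissions"]
};

// Walk backwards from the final goal, collecting every required input exactly once.
function traceInputs(parameter, found = new Set()) {
  for (const dep of dependencies[parameter] ?? []) {
    if (!found.has(dep)) {
      found.add(dep);
      traceInputs(dep, found); // recurse until we reach basic inputs
    }
  }
  return found;
}

console.log([...traceInputs("NERRWE")]);
// ["GHGBSL", "baselineBiomassStock", "strataAreas", "GHGWPS",
//  "treeGrowth", "projectActivityEmissions", "FRP", "GHGLK"]
```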
Why This Approach Worked
Comprehensive Coverage: Working backwards ensured we found every required input, even parameters that were buried deep in sub-calculations or referenced indirectly through multiple layers.
Logical Implementation Order: Understanding dependencies helped us plan implementation sequence - we knew we needed basic measurements before intermediate calculations, and intermediate calculations before final results.
Error Prevention: The dependency mapping showed us where validation should happen at each step, rather than only discovering problems at the final calculation stage.
Parameter Classification System
As we traced through VM0033's calculations, we realized we needed to organize the hundreds of parameters we were discovering. We developed a classification system that helped us understand what data users would need to provide and when.
Parameter Categories We Used:
Monitored Parameters
These are values that project developers collect through field measurements or laboratory analysis. The Allcot ABC Mangrove project shows how these measurements connect to actual calculations:
Tree Measurements: The project tracks baseline biomass (ABSL,i) and project biomass (AWPS,i) for each stratum. For example, Stratum 1 starts with 1149 t C/ha baseline biomass, while Stratum 3 has 2397 t C/ha - these differences required separate tracking because they feed into different calculation pathways.
Soil Measurements: Soil sampling provides bulk density (BD), organic matter content (%OMsoil), and carbon content (%Csoil) that the recursive analysis revealed are needed for soil carbon change calculations. The project requires "stratum and horizon average" values since conditions vary within each restoration area.
Site Conditions: Sediment accretion rates (SA) and ecosystem classifications affect growth projections and carbon accumulation calculations. The recursive analysis showed these seemingly simple inputs actually influence multiple calculation branches.
Project Activity Data: Area measurements for each stratum (ranging from 1090 to 2222 hectares in the Allcot project) become critical because all carbon calculations get multiplied by area - missing or incorrect area data would invalidate all results.
User-Input Parameters
These are project-specific values that users provide during setup or periodically update:
Project Description: Project area size, crediting period length, restoration activities planned, geographic location.
Management Decisions: Choice of monitoring frequency, selection of calculation methods where VM0033 provides options, decisions about which optional calculations to include.
Economic Data: Costs for fossil fuel use calculations (needed for AR-Tool05), labor and equipment information for project emission calculations.
Default Values
VM0033 provides standard values that can be used when site-specific measurements aren't available:
Growth Factors: Default allometric equations for different mangrove species, default root-to-shoot ratios, standard wood density values.
Emission Factors: Default factors for methane and nitrous oxide emissions, fossil fuel emission factors from AR-Tool05, decomposition rates for different organic matter types.
Conversion Factors: Units conversions, carbon content factors, global warming potential values for different greenhouse gases.
Calculated Parameters
These values get computed from other parameters using VM0033's equations:
Intermediate Calculations: Area-weighted averages across different project zones, annual growth increments, cumulative totals over time periods.
Complex Dependencies: Parameters that depend on multiple inputs and conditional logic, such as eligibility determinations that vary based on site conditions and project activities.
Building Parameter Dependency Trees
The most challenging part of our recursive analysis was mapping how parameters depend on each other. Some dependencies were simple and direct, while others involved complex conditional logic or calculations that changed over time.
Simple Dependencies: Many parameters have straightforward relationships. For example, total project carbon stock depends on individual tree biomass calculations, which depend on DBH measurements and species-specific allometric equations.
Conditional Dependencies: VM0033 includes many calculations that only apply under certain conditions. Fire reduction premiums only apply if projects reduce fire risk. Methane emission calculations depend on whether soil stays flooded or gets drained.
Time-Dependent Relationships: Many calculations change over time as trees grow and conditions change. We had to map not just what parameters were needed, but when they were needed and how they changed over the project lifetime.
Dependency Mapping Process
Visual Mapping: We created flowcharts and tree diagrams showing how parameters related to each other. This helped us see the big picture and identify where we might have missed connections.
Calculation Sequences: We documented the order in which calculations need to happen, ensuring that required inputs are available before calculations that depend on them.
Validation Points: The dependency trees showed us where to include validation checks - if a parameter fails validation, which calculations would be affected, and how to provide helpful error messages.
Working Through VM0033's Key Calculations with Allcot Project Examples
Let me walk through how we applied recursive analysis to VM0033's main calculation components, using the actual Allcot ABC Mangrove project to show how boundary decisions simplify the recursive analysis.
Baseline Emissions (GHGBSL) Analysis
The Allcot project made a key decision that simplified baseline calculations: "Does the project quantify baseline emission reduction? = False". This eliminated entire calculation branches from our recursive analysis.
What This Decision Meant: Instead of calculating emissions from continued degradation, the project only claims benefits from restoration activities. This removed complex soil carbon loss calculations that would have required:
Peat depletion rates (not applicable - all mineral soil)
Soil organic carbon loss rates
Temporal boundary calculations (PDT and SDT both = 0)
Simplified Baseline for Allcot: With mineral soil across all strata and no baseline emission reduction claims, the baseline scenario becomes straightforward - track existing biomass levels (1149, 2115, 2397, 1339 t C/ha across the four strata) without complex degradation modeling.
Recursive Analysis Benefit: By starting with NERRWE and working backwards, we discovered early that the boundary decisions eliminated major calculation branches, allowing us to focus implementation effort on the actual requirements rather than building unused functionality.
Project Emissions (GHGWPS) Analysis
Project emissions include both the carbon benefits from restoration and any emissions caused by project activities.
Carbon Benefits (Negative Emissions):
Tree Growth: Mangroves sequester carbon as they grow, calculated using AR-Tool14 equations
Soil Improvement: Restoration improves soil conditions, reducing carbon loss rates
Project Activity Emissions (Positive Emissions):
Fossil Fuel Use: Boats, equipment, and transportation for project activities (calculated using AR-Tool05)
Disturbance Effects: Temporary emissions from site preparation activities
Parameter Dependencies We Mapped:
Tree growth rates (species-specific, site conditions)
Fuel consumption for project activities (equipment types, distances, frequencies)
Soil improvement rates (depends on restoration techniques and site conditions)
Tools Integration Through Recursive Analysis
VM0033 references external tools (AR-Tool05, AR-Tool14, AFLOU) that have their own parameter requirements. Recursive analysis helped us understand how these tools fit into the overall calculation framework.
Calculation Reference: See the complete equation mapping and parameter dependencies available in the Artifacts Collection.
AR-Tool14 for Biomass Calculations:
Inputs Required: Tree diameter measurements, species identification, site conditions
Outputs Provided: Above-ground and below-ground biomass estimates
Integration Point: Biomass outputs feed into project emission calculations
AR-Tool05 for Fossil Fuel Emission Calculations:
Inputs Required: Fuel consumption data for project activities (equipment types, distances, frequencies)
Outputs Provided: CO₂ emissions from project activities
Integration Point: Fossil fuel emissions get added to project emission totals
Handling Conditional Calculations and Alternative Methods
VM0033 includes many situations where calculations depend on project-specific conditions or where multiple calculation methods are available. Our recursive analysis had to account for these variations, and the Allcot ABC Mangrove project provides concrete examples of how these decisions affect implementation.
Allcot ABC Mangrove Project Boundary Decisions:
From the project boundary analysis in our test artifact, the Allcot ABC Mangrove project made specific choices about what to include in calculations:
Carbon Pools Included:
Above-ground tree biomass (CO₂): Included - This is the main carbon benefit from planting mangroves
Below-ground tree biomass (CO₂): Included - Root systems store significant carbon in mangrove restoration
Soil organic carbon: Excluded in baseline, Included in project - The project improves soil conditions over time
Carbon Pools Excluded:
Litter and Dead Wood: Excluded - Methodology allows these to be optional for wetland restoration
Wood Products: Excluded - No harvesting planned in the mangrove restoration project
Non-tree Biomass: Excluded - Focus is on tree restoration, not herbaceous vegetation
Greenhouse Gas Sources:
Methane (CH₄) from soil microbes: Excluded - Conservatively omitted to simplify calculations
Nitrous oxide (N₂O): Excluded - Also conservatively excluded
The Allcot project made specific methodological choices that affected parameter requirements:
Soil Carbon Approach: "Total stock approach" - This means comparing final soil carbon stocks rather than tracking annual loss rates
Baseline Emission Reductions: False - The project doesn't claim benefits from stopping degradation, only from restoration activities
NERRWE-max Cap: False - No maximum cap on annual credit generation
Fire Reduction Premium: False - No fire risk reduction claimed (this removed all fire-related parameters from our implementation)
Conditional Parameter Logic from Allcot Project
Soil Type Conditions: All four strata in the Allcot project have "Mineral soil" type, which means:
Peat-related parameters (Depthpeat,i,t0, Ratepeatloss-BSL,i) are "Not applicable"
Soil disturbance parameters don't apply
Temporal boundary calculations are simplified (PDT = 0, SDT = 0 for all strata)
Project Activity Dependencies: Since Fire Reduction Premium = False:
All fire-related emission factors are excluded
GWP factors for CH₄ and N₂O are needed only if soil methane/nitrous oxide emissions are included
Burning emission calculations completely skipped
Site-Specific vs. Default Values: The Allcot project required site-specific measurements for:
Soil bulk density (BD) - "User provide stratum and horizon average in the value applied field"
Soil carbon content (%OMsoil, %Csoil) - Collected through soil sampling data upload
Tree measurements for biomass calculations (ABSL,i and AWPS,i values)
These boundary decisions eliminated entire categories of parameters:
No peat soil calculations needed (all mineral soil)
No fire premium calculations (eliminated ~15 parameters)
No wood product calculations (eliminated long-term storage complexity)
No fossil fuel tracking for project activities (simple planting operation)
Monitoring Frequency: The project uses annual monitoring with field measurements for tree growth, avoiding the need for complex growth modeling between measurement periods.
Stratum Management: Four distinct strata with different baseline biomass values (1149, 2115, 2397, 1339 t C/ha), each requiring separate parameter tracking but using the same calculation procedures.
Managing Calculation Alternatives
Implementation Strategy: Rather than trying to implement every possible variation initially, we focused on the most common approaches for mangrove restoration projects. This kept our initial implementation manageable while still meeting methodology requirements.
Future Expansion: The dependency maps we created during recursive analysis provide roadmaps for adding additional calculation options later as needed.
Creating Documentation and Validation Framework
The recursive analysis process generated extensive documentation that became essential for both implementation and ongoing maintenance.
Parameter Documentation: For each parameter we identified, we documented:
Calculation Flowcharts: We created visual diagrams showing how data flows through the calculation system from basic inputs to final results. These flowcharts helped us:
Verify our understanding of VM0033's requirements
Plan implementation sequence
Design user interfaces that collect information in logical order
Create validation checks at appropriate points
Validation Logic: The dependency trees revealed exactly where validation should happen:
Input Validation: Check individual parameters as users enter them
Intermediate Validation: Verify calculated values make sense before using them in subsequent calculations
Final Validation: Confirm overall results are reasonable and meet methodology requirements
Practical Lessons from VM0033 Implementation
Start Simple, Build Complexity: We initially tried to map every possible calculation path in VM0033, which was overwhelming. It worked better to start with the most basic mangrove restoration scenario and add complexity gradually.
Documentation is Critical: The recursive analysis generates a lot of information. We learned to document everything systematically because details that seemed obvious at the time became confusing weeks later during implementation.
Test Understanding Early: We regularly tested our understanding by working through example calculations manually. This helped us catch misunderstandings in the recursive analysis before they became implementation problems.
Plan for Iteration: Our first attempt at recursive analysis missed some dependencies and misunderstood some relationships. Building in time for multiple iterations helped us refine our understanding and improve the parameter mapping.
From Parameter Mapping to Implementation Planning
The recursive analysis and parameter identification work creates the foundation for the tool integration and test artifact development covered in the next chapters.
Tool Integration Preparation: Understanding parameter dependencies helps identify which external tools are needed and how they integrate with methodology-specific calculations.
Test Artifact Requirements: The complete parameter lists and calculation sequences become the basis for creating comprehensive test spreadsheets that validate implementation accuracy.
Schema Design Foundation: Although schema design comes in Part III, the parameter classification and dependency mapping from this chapter directly informs what data structures and validation rules we'll need.
Parameter Mapping Summary and Next Steps
Mathematical Foundation Complete: You now understand the systematic approach we used to extract and organize all mathematical components from VM0033.
Key Analysis Outcomes:
Recursive analysis technique for complete dependency mapping
Parameter classification system (monitored, user-input, default, calculated)
Dependency tree construction with validation point identification
Conditional calculation management and alternative method handling
Preparation for Chapter 6: The parameter dependencies and tool integration points identified in this chapter become the focus of Chapter 6, where we'll cover systematic integration of AR-Tool05, AR-Tool14, and AFLOU non-permanence risk tool.
Real-World Application: While we used VM0033 as our example, the recursive analysis technique works for any methodology with complex calculations. The approach of starting from final results and working backwards systematically ensures comprehensive coverage regardless of methodology complexity.
Implementation Reality: This recursive analysis work took several weeks during VM0033 digitization, but it prevented months of problems later by ensuring we understood all dependencies before starting implementation.
Chapter 21: Calculation Testing and Validation
Comprehensive testing and validation using Guardian's dry-run mode and testing framework with VM0033 and AR Tool 14 test artifacts
This chapter demonstrates how to leverage Guardian's built-in testing capabilities to validate environmental methodology calculations. Using Guardian's dry-run mode, customLogicBlock testing interface, and our comprehensive VM0033 and AR Tool 14 test artifacts, you'll learn to validate calculations at every stage: baseline, project, leakage, and final net emission reductions.
Learning Objectives
After completing this chapter, you will be able to:
Utilize Guardian's dry-run mode for comprehensive policy testing
Use Guardian's customLogicBlock testing interface for debugging calculations
Validate calculations against methodology test artifacts at each stage
Test baseline emissions, project emissions, leakage, and net emission reductions
Debug calculation discrepancies using Guardian's built-in tools
Implement automated testing using Guardian's API framework
Create test suites using real methodology test data
Prerequisites
Completed Chapters 18-20: Custom Logic Block Development, Formula Linked Definitions, and Guardian Tools Architecture
Access to the test artifacts referenced in this chapter
Understanding of Guardian dry-run mode
Familiarity with Guardian testing interface
Guardian's Built-in Testing Framework
Why Guardian's Native Testing is Essential
Environmental methodology calculations directly impact carbon credit credibility and market trust. Guardian provides comprehensive testing capabilities specifically designed for environmental methodologies:
Dry-run mode - Complete policy execution without blockchain transactions
CustomLogicBlock testing interface - Interactive testing and debugging
Virtual users - Multi-role workflow testing
Artifact tracking - Complete audit trail of calculations
Our methodology implementation includes comprehensive test artifacts extracted from the official VM0033 test spreadsheet:
- VM0033_Allcot_Test_Case_Artifact.xlsx - Complete Allcot test case with all calculation stages
- Complete Guardian Verifiable Credential with net ERR data and test calculations
- JavaScript implementation of emission reduction calculations
Understanding VM0033 Test Data Structure
The VM0033 test artifacts provide validation data for all calculation stages:
Key Test Values from VM0033 Allcot Test Case:
Baseline Emissions: Multiple ecosystem types and emission sources
Project Emissions: Restoration activities and maintenance
Leakage: Market and activity displacement calculations
Net Emission Reductions: Final creditable emission reductions
Using Guardian's CustomLogicBlock Testing Interface
Interactive Testing and Debugging
Guardian provides a powerful testing interface specifically designed for customLogicBlock validation. This interface allows you to test calculation logic independently without running the entire policy.
Accessing the Testing Interface
Following Guardian's testing documentation:
Navigate to Policy Editor - Open your methodology policy in draft mode
Select customLogicBlock - Click on the calculation block you want to test
Enter Testing Mode - Click the "Test" button in the block configuration
Configure Test Data - Use schema-based input, JSON editor, or file upload
Testing Input Methods
Guardian supports three primary input methods for testing:
a. Schema-Based Input
Select a data schema from dropdown list
Dynamic form generated based on schema
Ideal for structured and guided input interface
b. JSON Editor
Direct JSON-formatted data input
Best for advanced users needing precise control
Supports complex data structures
c. File Upload
Upload JSON file containing test data
Must be well-formed JSON
Perfect for using our VM0033 test artifacts
Testing VM0033 Calculations
Step 1: Get the PDD VC generated after submitting the new project data
Using our test case artifact, fill in the JSON input data
Step 2: Execute Test
Open CustomLogicBlock - Navigate to baseline calculation block in policy editor
Upload Test Data - Use file upload method with baselineTestInput JSON
Run Test - Execute the calculation
Validate Results - Compare outputs against expected values from VM0033 spreadsheet
Step 3: Using Debug Function
Guardian provides a debug() function for calculation tracing:
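A minimal sketch of how debug() might be used inside a customLogicBlock (assuming the block's standard documents array and done() callback; the field keys here are illustrative):

```javascript
// documents: input VCs that Guardian passes to the customLogicBlock
const input = documents[0].document.credentialSubject[0];

debug(`Baseline input received: ${JSON.stringify(input)}`);

// Illustrative calculation using renamed, calculation-friendly field keys
const baselineEmissions = input.biomass_density_stratum_i * input.area_hectares_stratum_i;
debug(`Baseline emissions computed: ${baselineEmissions}`);

// done() hands the result back to the policy workflow
done({ baselineEmissions });
```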
Debug output appears in the Logs tab of the testing interface.
In summary, Guardian's native testing framework provides:
CustomLogicBlock Testing Interface - Interactive testing and debugging with multiple input methods
Dry-Run Mode - Complete policy workflow testing without blockchain transactions
Test Artifact Integration - Validation against official methodology test cases
API Testing Framework - Automated testing using Guardian's REST APIs
Key Testing Workflow
Extract test data from methodology spreadsheets like VM0033_Allcot_Test_Case_Artifact.xlsx
Test individual calculations using CustomLogicBlock testing interface
Validate complete workflows using dry-run mode with virtual users
Compare results against expected values from official test cases
Next Steps
This completes Part V: Calculation Logic Implementation. With comprehensive testing validation, your Guardian methodology implementations are ready for production deployment with confidence in calculation accuracy.
References and Further Reading
VM0033 Test Artifacts - Complete test dataset for validation
Chapter 22: End-to-End Policy Testing
Testing complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities and VM0033 production patterns
Part V covered calculation writing and testing within individual blocks. Chapter 22 takes you beyond component testing to validate entire methodology workflows. Using Guardian's dry-run mode and VM0033's multi-stakeholder patterns, you'll learn to test complete project lifecycles from PDD submission through VCU token issuance.
Real-world methodology deployment demands testing workflows that span months of project activity, multiple stakeholder roles, and hundreds of documents. Guardian's dry-run system lets you simulate these workflows without blockchain costs or time delays.
Multi-Role Testing Framework
Chapter 27: Integration with External Systems
Strategies for data exchange between Guardian and external platforms
This chapter demonstrates two critical integration patterns for connecting Guardian policies with external environmental registry systems. You'll learn how to transform Guardian data for external platforms like Verra Project Hub and how to receive MRV data from external devices and systems.
Integration Architecture Overview
Guardian's policy workflow engine supports bidirectional integration with external systems through specialized workflow blocks and API endpoints. This enables Guardian to function as both a data provider and consumer in complex environmental certification ecosystems.
Two Primary Integration Patterns:
Data Transformation for External Systems: Converting Guardian project data to external system formats
External Data Reception: Accepting monitoring data from external devices and aggregating systems
```javascript
// With good field keys - self-documenting
const totalEmissions = (
  data.biomass_density_stratum_i * data.area_hectares_stratum_i *
  data.carbon_fraction_tree * data.co2_conversion_factor
);

// With poor field keys - requires comments and documentation
const totalEmissions = (
  data.field0 * data.field1 * data.field2 * data.G5
); // What calculation is this performing?
```
Row 5: Yes | Object | | | Project Location | No |
Row 10: Yes | Object | | | Carbon Credits | No |
Standard property sub-schemas defined once carry over to other methodologies (VM0033 Project → Standard Properties → Different Methodology): GeographicLocation, AccountableImpactOrganization, CRU (Carbon Credits), and Validation Records are all reusable.
Row 1: Monitoring Report (Auto)
Row 2: Description | Monitoring period input parameters for measuring carbon stock changes and GHG emissions
Row 3: Schema Type | Verifiable Credentials
Row 4: Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Row 5: Yes | Number | | | Monitoring year | No | 7
Row 6: Yes | Number | | | Monitoring period (years since project start) | No | 1
Row 7: Yes | Date | | | Monitoring report submission date | No | 2000-01-01
Row 8: Yes | String | | | Monitoring period identifier | No | MP-2024-01
Row 9: Yes | (New) Monitoring Period Inputs | | | Monitoring Period Inputs | No |
(New) Monitoring Period Inputs
Description | Monitoring period input parameters for measuring carbon stock changes and GHG emissions
Schema Type | Verifiable Credentials
Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | Boolean | | | Baseline Aboveground non-tree biomass | No | True
No | (New) MP Baseline Herbaceous V | | | Baseline herbaceous vegetation monitoring data | Yes |
Yes | Number | | | Monitoring year | No | 7
Yes | (New) MP Herbaceous Vegetat 1 | | | Measurements for each stratum | Yes |
(New) MP Herbaceous Vegetation Stratum Data for Project
Description | Stratum-level herbaceous vegetation monitoring
Schema Type | Sub-Schema
Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | String | | | Stratum number | No | 1
Yes | Number | | | Carbon stock in herbaceous vegetation (t C/ha) - CBSL-herb,i,t | No | 1.5
Yes | Number | | | Initial time T for measurement - Start_T (BSL) | No | True
Yes | Number | | | Carbon stock at time T - CBSL-herb,i,(t-T) | No | 0.5
Yes | Number | | | Change in carbon stock since last period | No | 0.2
Yes | String | | | Explanation for significant changes | No | example
Yes | Boolean | | | Data quality meets methodology requirements | No | True
(New) Annual Inputs Parameters Baseline
Description | Annual input parameters for project calculations
Schema Type | Sub-Schema
Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | Number | | | Area of stratum (ha) – Ai,t | No | 1
Yes | Number | | | Change in project tree-biomass carbon stock (t CO₂-e yr⁻¹) – ΔCTREE_WPS,i,t | No | 1
Yes | Number | | | CO₂ emissions from project soil (t CO₂e ha⁻¹ yr⁻¹) – GHGWPS-insitu-CO₂,i,t | No | 1
Yes | Number | | | Percentage of organic carbon in project soil (%) – C%WPS-soil,i,t | No | 1
Yes | Enum | Data quality level (enum) | | Data quality level for this measurement | No | High
Yes | String | | | Quality control procedures followed | No | example
Yes | Image | | | Site photograph for verification | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf
Yes | String | | | GPS coordinates of measurement location | No | example
Schema name | Monitoring Report (Auto)
Field name | Data quality level for this measurement
Loaded to IPFS | No
High |
Medium |
Low |
Yes | String | | | Measurement methodology used | No | example
Yes | Date | | | Date of field measurement | No | 2000-01-01
Yes | String | | | Personnel responsible for measurement | No | example
No | String | | | Laboratory analysis results | No | example
No | Image | | | Laboratory report scan | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf
No | Auto-Calculate | | | Updated baseline emissions (t CO2e) | No | 150.5
No | Auto-Calculate | | | Updated project emissions (t CO2e) | No | 45.2
No | Auto-Calculate | | | Net emission reductions this period (t CO2e) | No | 105.3
No | Auto-Calculate | | | Cumulative emission reductions (t CO2e) | No | 850.7
Yes | Number | | | Initial PDD estimate for comparison | No | 1
Yes | Number | | | Variance from PDD estimate (%) | No | 5.2
Yes | String | | | Explanation for variance | No | example
Yes | Enum | Crediting period (enum) | | Crediting period | No | 1st period (0-10 years)
Yes | Number | | | Year within current crediting period | No | 3
Yes | Boolean | | | Final monitoring report for this period | No | False
Schema name | Monitoring Report (Auto)
Field name | Crediting period
Loaded to IPFS | No
1st period (0-10 years) |
2nd period (10-20 years) |
3rd period (20-30 years) |
[continue as needed for methodology requirements]
No | String | | Hidden | Previous monitoring report ID | No | example
No | Number | | | Change since previous monitoring period | No | 2.5
Yes | Boolean | | | Significant changes requiring explanation | No | False
Yes | String | | | VVB assigned for verification | No | example
No | Date | | | VVB site visit date | No | 2000-01-01
No | Enum | Verification status (enum) | | Verification status | No | Under review
No | String | | | VVB comments | No | example
No | Boolean | | | Verification approved | No | False
Schema name | Monitoring Report (Auto)
Field name | Verification status
Loaded to IPFS | No
Under review |
Approved |
Requires revision |
Rejected |
No | String | | Hidden | Monitoring report version | No | v1.0
No | Date | | Hidden | Last modification date | No | 2000-01-01
No | String | | Hidden | Modification log | No | example
No | Number | | FALSE | Direct measurement biomass (if direct method selected) | No | 1
No | Number | | FALSE | Indirect calculation biomass (if indirect method selected) | No | 1
Yes | Number | | | 3-year average carbon stock | No | 12.5
Yes | Number | | | 5-year trend in carbon accumulation | No | 0.8
Yes | String | | | Trend analysis explanation | No | example
Yes | Number | | | Measurement uncertainty (%) | No | 5.0
Yes | String | | | Uncertainty calculation method | No | example
Yes | Number | | | Confidence interval lower bound | No | 10.2
Yes | Number | | | Confidence interval upper bound | No | 14.8
Yes | (New) Project Emissions Annual | | | Project Emissions Annual Data | No |
(New) Project Emissions Annual
Description | Annual project emissions data
Schema Type | Sub-Schema
Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | Number | | | Year | No | 2024
Yes | String | | | Data collector | No | example
[Include only essential annual fields to maintain performance]
No | String | | Hidden | Archive status | No | Active
No | Date | | Hidden | Archive date | No | 2000-01-01
No | Boolean | | Hidden | Available for new calculations | No | True
```javascript
// With good field keys - monitoring calculation
const annualChange = data.carbon_stock_current_period - data.carbon_stock_previous_period;
const cumulativeER = data.emission_reduction_total + annualChange;

// With default keys - confusing for time-series
const annualChange = data.field5 - data.field12;
const cumulativeER = data.field8 + annualChange;
```
Guardian's dry-run mode creates a sandbox environment where you can simulate multiple users working simultaneously on different parts of your methodology. This approach mirrors production deployment while keeping testing fast and cost-effective.
Setting Up Dry-Run Testing Environment:
Import VM0033 Policy - Start with the VM0033 policy from shared artifacts
Enable Dry-Run Mode - Switch policy status from Draft to Dry-Run
Create Virtual Users - Set up users for each role (Project Proponent, VVB, OWNER)
Execute Complete Workflows - Test full project lifecycle with role transitions
Choose role during dry run
Switch role UI
VVB documents review UI for Registry role
Creating Virtual Users for Multi-Role Testing
Guardian allows Standard Registry users (OWNER role) to create virtual users for testing different stakeholder workflows. This feature enables testing approval chains and document handoffs. You can do so via API as well.
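For example, a sketch using Guardian's dry-run endpoints (run inside an async function; base, accessToken, policyId, and virtualUserDid are placeholders, and paths should be verified against your Guardian version):

```javascript
// Create a virtual user for the policy running in dry-run mode
await fetch(`${base}/policies/${policyId}/dry-run/user`, {
  method: "POST",
  headers: { Authorization: `Bearer ${accessToken}` }
});

// Log in as a specific virtual user to act in that stakeholder's role
await fetch(`${base}/policies/${policyId}/dry-run/login`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({ did: virtualUserDid })
});
```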
Testing User Progression Pattern:
Project Developer submits PDD using VM0033 project description schema
Standard Registry reviews and lists project on their platform
VVB accepts validation assignment from project proponent and conducts project review
VVB submits validation report with project assessment
Standard Registry approves or rejects project based on VVB validation
Project Developer submits monitoring reports over crediting period
VVB verifies monitoring data and submits verification reports
Standard Registry issues VCU tokens based on verified emission reductions
VM0033 Complete Workflow Testing
Let's walk through testing VM0033's complete workflow using the navigation structure from the policy JSON. This demonstrates how dry-run testing validates stakeholder interactions across the full methodology implementation.
Project Proponent Workflow Testing
Step 1: Project Creation and PDD Submission
The Project Proponent starts by accessing the "Projects" section and creating a new project using VM0033's PDD schema.
New Project Form
Testing should validate:
PDD form captures all required VM0033 parameters
Conditional schema sections display based on certification type (VCS vs VCS+CCB)
Calculation inputs integrate with custom logic blocks
Document submission creates proper audit trail
VC document submitted
Step 2: VVB Selection and Assignment
After PDD submission and approval by registry, the project developer selects a VVB for validation. Testing confirms:
VVB selection interface displays approved VVB list
Assignment notification reaches selected VVB
Project status updates reflect VVB assignment
Document access permissions transfer correctly
Project approval/rejection UI within SR role
VVB selection via dropdown
VVB Workflow Testing
Step 3: Project Validation Process
VVBs access assigned projects through their dedicated interface. Validation testing includes:
Project document review and download capabilities
Validation checklist and assessment tools
Site visit data collection and documentation
Validation report submission using VM0033 validation schema
Project review UI
Validation Report UI
Validation Report Form
Step 4: Monitoring Report Verification
During the crediting period, VVBs verify monitoring reports:
Annual monitoring data review and validation
Field measurement verification against monitoring plan
Calculation accuracy assessment using VM0033 test artifacts
Verification report submission with emission reduction confirmation
Validated & approved projects see monitoring report button
Add report dialog
Assigned to Earthood
VVB can view the report submitted with auto-calculated values
Standard Registry (OWNER) Workflow Testing
Step 5: Project Pipeline Management
Standard Registry manages the complete project pipeline:
Project listing approval after successful validation
VVB accreditation and performance monitoring
Monitoring report review and compliance tracking
Token issuance authorization based on verified reductions
A verifiable presentation of minted tokens; each mint must trace back to all the steps and data backing it.
Testing Workflow State Transitions
Guardian policies manage complex state transitions across multiple documents and stakeholders. Effective testing validates these transitions handle edge cases and error conditions properly.
Document Status Flow Testing:
Potential Error Conditions:
VVB rejection scenarios and resubmission workflows
Incomplete document submission handling
Calculation errors and correction procedures
Role permission violations and access control
Concurrent user conflicts and resolution
Integration Testing with Production-Scale Data
Large Dataset Processing Validation
VM0033 projects can involve hundreds of hectares with complex stratification requiring extensive monitoring data. Testing with realistic data volumes validates performance and accuracy under production conditions.
Creating Test Datasets Based on VM0033 Allcot Case:
Using the VM0033_Allcot_Test_Case_Artifact.xlsx as a foundation, create expanded datasets covering the scenarios below.
Multi-Year Monitoring Period Simulation
VM0033 projects can operate over crediting periods of up to 100 years with annual monitoring in the best case. Testing long-term scenarios validates data consistency and calculation accuracy across extended timeframes using data patterns from our VM0033 test case artifact.
Testing should validate:
Calculation consistency across monitoring/crediting periods
Carbon stock accumulation tracking over decades
Emission reduction trend validation
Cross-Component Integration Validation
Schema-Workflow-Calculation Integration Testing
Part VI testing validates that components from Parts III-V work together seamlessly. This integration testing catches issues that component testing misses.
Schema Field Mapping Validation:
Using VM0033's schema structure, test field key consistency:
Important blocks for integration testing:
Test document flow through complete policy execution:
requestVcDocumentBlock captures schema data correctly
customLogicBlock processes schema fields without errors
mintTokenBlock uses calculation outputs for token quantities
External Tool Integration
VM0033 integrates AR-Tool14 and AR-Tool05 for biomass and soil carbon calculations. Make sure you validate that these tools work correctly within complete policy execution.
Testing Best Practices and Procedures
Incremental Testing Approach
Start with simple workflows and progressively add complexity. This approach isolates issues and builds confidence in policy functionality.
Testing Progression:
Single User, Single Document - Basic PDD submission and processing
Single User, Complete Project - Full project lifecycle for one user type
Multi-User, Single Project - Role interactions and handoffs
Multi-User, Multiple Projects - Concurrent operations and scaling
Production Simulation - Full-scale testing with realistic data volumes
Dry-Run Artifacts and Validation
Guardian's dry-run mode creates artifacts that help validate testing results and provide audit trails for methodology compliance.
Dry-Run Artifacts:
Transaction Log: Mock blockchain transactions that would occur in production
Document Archive: Complete document history with version tracking
IPFS Files: Files that would be stored in distributed storage
Token Operations: Credit issuance and transfer records
Audit Trail: Complete workflow execution history
Menu bar showing artifacts tab
Test Data Management and Version Control
Maintain test datasets that evolve with your methodology. Version control ensures testing remains valid as policies change.
Sample Test Data Organization:
Each test case should include:
Input parameters matching your schema structure
Expected calculation results from methodology spreadsheets
Documentation explaining test scenario purpose
Success criteria and validation checkpoints
Chapter Summary
End-to-end testing validates that your methodology digitization works correctly under real-world conditions. Guardian's dry-run capabilities provide the foundation for this testing, enabling multi-role workflows, production-scale data processing, and component integration validation.
Key Testing Strategies:
Multi-Role Testing Framework:
Virtual user creation and management
Complete stakeholder workflow simulation
Role transition and permission testing
Document handoff validation
Production-Scale Validation:
Large dataset processing performance
Multi-year monitoring period simulation
Concurrent user and project handling
Integration with external systems
Cross-Component Integration:
Schema-workflow-calculation consistency
Field mapping and data flow validation
External tool integration testing
End-to-end document processing
Testing Workflow:
Setup dry-run environment with VM0033 policy configuration
Create virtual users representing each methodology stakeholder
Execute complete workflows following VM0033 navigation patterns
Validate integration between schemas, workflows, and calculations
Test production scenarios with realistic data volumes and timeframes
Document results and maintain test case version control
This testing approach ensures your methodology implementation handles the complexity and scale requirements of production carbon credit programs while maintaining accuracy and compliance with methodology requirements.
Next Steps: Chapter 23 covers API integration and automation, building on the testing foundation established here to enable programmatic methodology operations and external system integration.
Data Transformation for External Systems: Converting Guardian project data to external system formats
External Data Reception: Accepting monitoring data from external devices and aggregating systems
Use Case 1: Transforming Data for External Systems
Introduction to dataTransformationAddon
Guardian's dataTransformationAddon block enables transformation of Guardian project data into formats required by external registry systems. This block executes JavaScript transformation code that converts Guardian document structures into external API formats.
Primary Applications:
Submitting project data to Verra Project Hub
Integrating with Gold Standard registry systems
Preparing data for CDM project submissions
Custom registry platform integration
VM0033 DataTransformation Implementation
The VM0033 policy demonstrates production-grade data transformation in the project-description block:
Data transformation block in VM0033
Transformation Code Structure
The dataTransformationAddon block executes JavaScript code that transforms Guardian documents into any format needed. The core pattern is a mapping from Guardian document fields to the external system's expected structure.
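A minimal sketch of that pattern (the field keys and output structure are illustrative assumptions, not Verra's actual API format; check your Guardian version for the block's exact execution contract):

```javascript
// documents: Guardian VCs available to the dataTransformationAddon
const project = documents[0].document.credentialSubject[0];

// Map Guardian schema fields to the external registry's expected structure
const externalPayload = {
  projectName: project.project_title, // illustrative field keys
  location: {
    latitude: Number(project.latitude),
    longitude: Number(project.longitude)
  },
  creditingPeriodYears: Number(project.crediting_period_length)
};

// The returned value becomes the transformed output sent to the external system
return externalPayload;
```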
Data Transformation Best Practices
1. Field Mapping Strategy
2. Data Type Conversions
3. Complex Object Transformations
Use Case 2: Receiving Data from External Systems
External Data Reception Architecture
Guardian's externalDataBlock enables reception of monitoring data from external devices, IoT sensors, and third-party MRV systems. This pattern can be used for automated monitoring reports and real-time project tracking. It is the approach used in Gold Standard's metered energy cooking policy implemented on Guardian.
External MRV data integration flow in metered policy
External devices/servers use the config to prepare a VC and send data to /external endpoint
externalDataBlock processes and validates incoming data
Data aggregates into monitoring reports with a frequency set in the timer block.
MRV Configuration Download Pattern
Guardian implements a download-based pattern for external data integration. When a project is validated, a comprehensive MRV configuration file becomes available for download:
External Data Submission Endpoint
Guardian exposes an /external endpoint for receiving data from external systems:
Endpoint Structure:
Authentication:
Data Payload Format:
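A minimal sketch of an external submission (assuming the common Guardian pattern of posting a signed VC with a policy tag to the /external endpoint; field names should be verified against your downloaded MRV configuration):

```javascript
// signedVcDocument: a VC prepared using the schema context and DID keys
// from the downloaded MRV configuration file
const payload = {
  policyTag: "Tag_VM0033", // illustrative tag from the MRV configuration
  document: signedVcDocument
};

await fetch("https://guardianservice.app/api/v1/external", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload)
});
```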
ExternalDataBlock Implementation
The externalDataBlock handles incoming external data with validation and processing:
MRV Sender Integration
Guardian includes an MRV sender tool that simulates external data submission. The source code is available here - https://github.com/hashgraph/guardian/tree/main/mrv-sender
The MRV configuration file includes:
Hedera Integration: Account ID and private key for blockchain transactions
Schema Context: Complete JSON-LD schema definition with field types
DID Documents: Verification methods and authentication keys
Policy References: Policy ID, tag, and document reference for linking
Data Generation Options:
Values Mode: Use specific values for each field
Templates Mode: Use predefined data templates
Random Mode: Generate random values within specified ranges
Chapter Summary
This chapter demonstrated Guardian's bidirectional integration capabilities through two essential patterns:
Data Transformation for External Systems using dataTransformationAddon blocks enables Guardian to export project data in formats required by external registries. The VM0033 implementation shows production-grade JavaScript transformation code that converts Guardian documents into external system formats.
External Data Reception using externalDataBlock and MRV configurations enables automated monitoring data collection from external devices and systems. The metered energy policy pattern demonstrates how projects generate downloadable MRV configuration files that external systems use to submit data back to Guardian.
Key Implementation Elements:
JavaScript-based data transformation within Guardian policy blocks
Comprehensive MRV configuration files with schema definitions and DID documents
Hedera blockchain integration for secure data transactions
Schema validation and document verification for incoming data
Timer-based aggregation for monitoring report generation
These integration patterns enable Guardian to function as a comprehensive platform in environmental certification ecosystems, supporting both automated data collection and seamless registry integration.
Next Steps: Chapter 28 will explore advanced Guardian features including multi-methodology support, AI-powered search capabilities, and future platform developments.
Automating methodology operations and integrating with external systems using Guardian's REST API framework
Chapter 22 covered manual testing workflows. Chapter 23 shows you how to automate these processes using Guardian's comprehensive API framework. Using the same VM0033 patterns, you'll learn to automate data submission, integrate with monitoring systems, and build testing frameworks that scale.
Guardian's APIs enable programmatic access to all functionality available through the UI. This automation capability transforms methodology operations from manual processes into scalable, integrated systems that connect with existing organizational infrastructure.
Guardian API Framework Overview
Authentication and API Access
Guardian uses JWT-based authentication for API access. All API calls require authentication headers except for initial login and registration endpoints.
Access Token API:
The refresh token is returned by the login (or loginByEmail) endpoint; exchange it for an access token as needed.
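A minimal sketch of the flow (credentials are placeholders; adjust the base URL for your deployment):

```javascript
const base = "https://guardianservice.app/api/v1";

// 1. Log in with username/password to obtain a refresh token
const loginRes = await fetch(`${base}/accounts/login`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username: "Registry", password: "..." })
});
const { refreshToken } = await loginRes.json();

// 2. Exchange the refresh token for a short-lived access token
const tokenRes = await fetch(`${base}/accounts/access-token`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ refreshToken })
});
const { accessToken } = await tokenRes.json();

// 3. Send the access token on subsequent calls:
//    Authorization: Bearer <accessToken>
```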
Base API URL Pattern: All Guardian APIs follow the pattern https://guardianservice.app/api/v1/. If you're using a local setup, the host changes to http://localhost:3000 (or whatever your port configuration specifies).
For dry-run operations, the typical URL structure is:
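POST https://guardianservice.app/api/v1/policies/{policyId}/dry-run/{action}

Common actions include user (create a virtual user), login (switch to a virtual user), and restart (reset the dry run); confirm the exact action set against the Guardian API reference for your version.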
Submitting data via APIs is much faster than manual form filling when schemas are large. Using the VM0033 policy JSON we analyzed, here's how API endpoints map to actual policy blocks:
VM0033 Key Block IDs from Policy JSON:
PDD Submission Block: 55df4f18-d3e5-4b93-af87-703a52c704d6 - UUID of add_project_bnt
Monitoring Report Block: 53caa366-4c21-46ff-b16d-f95a850f7c7c - UUID of add_report_bnt
These block IDs change for every dry run triggered, so make sure you fetch the latest ones.
Using dry-run APIs, you can execute complete VM0033 workflows programmatically to validate methodology implementation.
Complete VM0033 Workflow Automation:
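A condensed sketch of the PDD submission step (block IDs change per dry run, so fetch the current policy JSON first; base, accessToken, policyId, and pddDocument are placeholders):

```javascript
// UUID of add_project_bnt from the current dry-run's policy JSON
const pddBlockId = "55df4f18-d3e5-4b93-af87-703a52c704d6";

// Submit PDD data to the block; the body must match the schema's field keys
await fetch(`${base}/policies/${policyId}/blocks/${pddBlockId}`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({ document: pddDocument })
});
```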
Automated Testing Frameworks
Cypress Testing Integration
Building on Guardian's API patterns, you could create automated testing suites that validate methodology implementation across multiple scenarios.
VM0033 Cypress Test Suite (sample):
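A sample spec along these lines (endpoint paths follow the patterns above; credentials, policy ID, and block ID come from Cypress environment variables and fixtures):

```javascript
describe("VM0033 PDD submission", () => {
  let accessToken;

  before(() => {
    // Authenticate once and reuse the access token across tests
    cy.request("POST", "/api/v1/accounts/login", {
      username: Cypress.env("username"),
      password: Cypress.env("password")
    })
      .then(({ body }) =>
        cy.request("POST", "/api/v1/accounts/access-token", {
          refreshToken: body.refreshToken
        })
      )
      .then(({ body }) => {
        accessToken = body.accessToken;
      });
  });

  it("submits a PDD and receives a success response", () => {
    cy.fixture("vm0033-pdd.json").then((pdd) => {
      cy.request({
        method: "POST",
        url: `/api/v1/policies/${Cypress.env("policyId")}/blocks/${Cypress.env("pddBlockId")}`,
        headers: { Authorization: `Bearer ${accessToken}` },
        body: { document: pdd }
      })
        .its("status")
        .should("eq", 200);
    });
  });
});
```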
Chapter Summary
API integration transforms Guardian methodology implementations from manual processes into automated, scalable systems. Using VM0033's patterns, you can automate data submission, integrate with external monitoring systems, build comprehensive testing frameworks, and manage production operations efficiently.
Key API Integration Patterns:
Automated Data Submission:
PDD and monitoring report API automation using requestVcDocumentBlock endpoints
Multi-year monitoring data generation and submission workflows
Error handling and validation for automated submissions
Dry-Run API Operations:
Virtual user creation and management for multi-stakeholder testing
Programmatic workflow execution and validation
Artifact collection and analysis for testing validation
External System Integration:
IoT sensor data transformation and submission to Guardian monitoring workflows
Registry integration with automated project listing and status synchronization
Real-time data pipeline integration for continuous monitoring operations
Production API Management:
Rate limiting and retry logic for robust production operations
Performance testing and load validation for production scalability
Error handling and monitoring for long-term operational reliability
Implementation Workflow:
Establish API authentication and access token management
Map policy block IDs to API endpoints using policy JSON structure
Build automation scripts for data submission and workflow execution
Create testing frameworks using Cypress and Guardian's dry-run APIs
API integration enables methodology implementations that scale from prototype testing to production operations, supporting hundreds of projects and thousands of stakeholders while maintaining accuracy and compliance with methodology requirements.
Next Steps: This completes Part VI: Integration and Testing. Your methodology implementation is now ready for production deployment with comprehensive testing coverage and scalable API automation capabilities.
Chapter 1: Introduction to Methodology Digitization
Methodology digitization transforms how environmental certification actually works in carbon markets. Instead of manual processes where projects spend months navigating paper-based workflows, digitization creates automated, blockchain-verified systems that can handle the complexity of modern carbon methodologies while maintaining the rigor these markets require.
This isn't just about converting PDFs to digital forms. We're talking about recreating entire certification processes - from project registration through credit issuance - as executable digital policies where methodology requirements like VM0033 become part of streamlined, transparent workflows.
What You'll Learn: Core concepts for methodology digitization using VM0033 as a working example. You'll understand why digitization is becoming essential and how the Guardian platform makes complex methodology implementation practical.
VM0033 Schema Excerpt (Excel template rows referenced in the PDD schema development chapter):
Row 1: Project Description (Auto)
Row 2: Description
Row 3: Schema Type | Verifiable Credentials
Row 4: Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Schema name | Project Description (Auto)
Field name | Choose project certification type
Loaded to IPFS | No
VCS v4.4 |
CCB v3.0 & VCS v4.4 |
Row 6: No | VCS Project Description v4.4 | | TRUE | VCS Project Description | No |
Row 7: No | CCB | | FALSE | CCB & VCS Project Description | No |
VCS Project Description v4.4 (Sub-Schema):

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | String | | | Project title | No | example |
| Yes | String | | | Project ID | No | example |
| Yes | URL | | | Project Website | No | https://example.com |
| Yes | Date | | | Start Date | No | 2000-01-01 |
| Yes | Date | | | End Date | No | 2000-01-01 |
CCB (Sub-Schema):

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | String | | | CCB Standard | No | example |
| Yes | String | | | CCB Project Type | No | example |
| Yes | Date | | | Auditor Site Visit Start Date | No | 2000-01-01 |
| Yes | Number | | | Latitude (Decimal Degrees) | No | 1 |
| Yes | Number | | | Longitude (Decimal Degrees) | No | 1 |
| Yes | Number | | | Acres/Hectares | No | 1 |
| Yes | Enum | AcresHectares (enum) | | Acres/Hectares | No | Acres |

Enum sheet for the Acres/Hectares field:
Schema name | Project Description (Auto)
Field name | Acres/Hectares
Loaded to IPFS | No
Enum values: Acres, Hectares
Further field rows capture project timing and biomass parameters:

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | Date | | | Project Start Date | No | 2000-01-01 |
| Yes | Date | | | Project End Date | No | 2000-01-01 |
| Yes | Number | | | Crediting Period Length (years) | No | 10 |
| Yes | String | | | Stratum number | No | example |
| Yes | Number | | | Area of stratum (ha) – Ai,t | No | 1 |
| Yes | Number | | | Biomass density (t d.m. ha-1) | No | 1 |
| Yes | String | | | Data source for biomass density | No | example |
| Yes | String | | | Justification for parameter selection | No | example |
| Yes | Enum | Which method did you us (enum) | | Which method did you use for estimating change in carbon stock in trees? | No | Between two points of time |

Enum sheet for the method field:
Schema name | Project Description (Auto)
Field name | Which method did you use for estimating change in carbon stock in trees?
Loaded to IPFS | No
Enum values: Between two points of time, Difference of two independent stock estimations

Conditional fields shown only for the relevant method selection, plus the AR Tool 14 sub-schema reference:

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| No | Number | | FALSE | Mean annual change in carbon stock (t CO2e yr-1) | No | 1 |
| No | Number | | FALSE | Carbon fraction of tree biomass (CF_TREE) | No | 1 |
| No | Number | | FALSE | Default mean annual increment (Δb_FOREST) | No | 1 |
| Yes | AR Tool 14 | | | AR Tool 14 | No | |
AR Tool 14 (Tool-integration schema):
Description | Biomass estimation using AR Tool 14
Schema Type | Tool-integration
Tool | AR Tool 14
Tool Id | [tool message id if available]

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | Number | | | Tree height (m) | No | 1 |
| Yes | Number | | | Diameter at breast height (cm) | No | 1 |
| Yes | Number | | | Wood density (g cm-3) | No | 1 |
| Yes | (New) Final Baseline Emissions | | | Baseline Emissions | No | |
(New) Final Baseline Emissions (Sub-Schema):

| Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | Number | | | Year t | No | 1 |
| Yes | String | | | Stratum number | No | example |
| Yes | Enum | It's a baseline scenari (enum) | | It's a baseline scenario or project scenario? | No | Baseline scenario |
| Yes | Number | | | Mean annual change in carbon stock in trees (t CO2e yr-1) | No | 1 |
| No | Auto-Calculate | | | Total Emission Reductions (t CO2e) | No | 2 |
| Yes | Image | | | Site photograph | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf |
| No | String | | | Document description | No | example |
| No | Help Text | {"color":"#FF0000","size":"14px"} | | Parameter Help | No | This parameter represents... |
| No | String | | Hidden | Internal project ID | No | example |
| Yes | String | | | Project Developer Name | No | example |
| Yes | Pattern | [0-9]{4} | | Four-digit year | No | 2024 |
Field keys chosen during schema design determine how readable your calculation code is:
// With good field keys
const totalEmissions = data.biomass_density_stratum_i * data.area_hectares;
// With default keys
const totalEmissions = data.field0 * data.field1; // What do these represent?
// Structure from final-PDD-vc.json artifact
const vm0033TestData = {
"document": {
"credentialSubject": [{
// Complete VM0033 test case data including:
// - Baseline emissions calculations
// - Project emissions calculations
// - Leakage calculations
// - Final net emission reduction results
// - All intermediate calculation values
}]
}
};
// Example debugging in customLogicBlock
function calculateBaseline(document) {
const baseline = document.baseline_scenario;
// Calculate fire emissions
const fireEmissions = baseline.area_data.baseline_fire_area *
baseline.emission_factors.fire_emission_factor;
debug("Fire Emissions Calculation", {
area: baseline.area_data.baseline_fire_area,
factor: baseline.emission_factors.fire_emission_factor,
result: fireEmissions
});
// Calculate total baseline emissions
const totalBaseline = fireEmissions; // + other emission sources omitted in this excerpt
debug("Total Baseline Emissions", totalBaseline);
return totalBaseline;
}
# From /e2e-tests folder
npm install cypress --save-dev
# Configure authorization in cypress.env.json
{
"authorization": "your_access_token_here"
}
# Run specific methodology tests
npx cypress run --spec "tests/vm0033-methodology.cy.js"
# Start dry-run mode
PUT /api/v1/policies/{policyId}/dry-run
# Create virtual user
POST /api/v1/policies/{policyId}/dry-run/user
# Execute block dry-run
POST /api/v1/policies/{policyId}/dry-run/block
# Get transaction history
GET /api/v1/policies/{policyId}/dry-run/transactions
# Get artifacts
GET /api/v1/policies/{policyId}/dry-run/artifacts
# Restart policy execution
POST /api/v1/policies/{policyId}/dry-run/restart
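As a minimal illustration of these endpoints in a script (the base URL parameter and response handling are assumptions; adapt both to your deployment):

// Start dry-run mode and create a virtual user for a policy.
async function setupDryRun(baseUrl, token, policyId) {
  const headers = {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  };
  // Switch the policy into dry-run mode
  await fetch(`${baseUrl}/api/v1/policies/${policyId}/dry-run`, { method: 'PUT', headers });
  // Create a virtual user to act as a stakeholder during testing
  const res = await fetch(`${baseUrl}/api/v1/policies/${policyId}/dry-run/user`, {
    method: 'POST',
    headers,
  });
  return res.json(); // virtual user record returned by Guardian
}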
VM0033 document status flows:
PDD: Draft → Submitted → Under Review → Validated → Approved
Monitoring Report: Draft → Submitted → Under Verification → Verified → Credits Issued
VVB Status: Applicant → Under Review → Approved → Active → Suspended/Revoked
// Generate multiple project instances for load testing
function generateTestProjects(baseProject, count) {
const testProjects = [];
for (let i = 0; i < count; i++) {
const project = JSON.parse(JSON.stringify(baseProject));
project.project_details.G5 = `Test Project ${i + 1}`;
project.baseline_scenario.area_data.total_project_area = 100 + (i * 50);
testProjects.push(project);
}
return testProjects;
}
// Test concurrent project submissions
const multipleProjects = generateTestProjects(vm0033BaseProject, 25);
The Authorization bearer token can be copied from your browser's developer tools while logged in to Guardian (for example, from the network tab or console).
What is Methodology Digitization?
The Challenge: Carbon markets still rely heavily on manual processes. Project developers submit PDFs, validators review paper documents, and registries track everything through email chains and spreadsheets. This works, but it's slow, error-prone, and difficult to verify.
Our Approach: Instead of digitizing documents, we digitize entire certification processes. We transform workflows themselves into automated, blockchain-verified systems where methodology requirements are embedded directly into the certification process. Every step becomes traceable, calculations are automated, and stakeholders can work within a single platform rather than juggling multiple systems. Three benefits follow:
Immutable transparency: Every transaction and decision recorded on Hedera Hashgraph for complete audit trails
Process efficiency: Certification workflows accelerated from weeks to hours through automation
Systematic accuracy: Embedded validation logic prevents implementation mistakes that occur in manual processes
Implementation Approach:
Systematic analysis of certification workflows and stakeholder interactions across the complete process
Technical mapping of roles, data flows, and decision points within certification frameworks
Integration design where methodology requirements (like VM0033) are embedded into automated certification workflows
Policy implementation as executable digital workflows that maintain methodology precision while automating processes
Validation framework ensuring both methodology integrity and certification standard compliance
VM0033 Example: The Digital Policy for Tidal Wetland and Seagrass Restoration demonstrates how digitization transforms entire certification processes:
Scope: Complete blue carbon project certification from registration to credit issuance
Stakeholders: Full ecosystem including Project Developers, VVBs, Registry Operators, and communities
Embedded Methodology: VM0033's specific requirements for soil carbon accounting and monitoring integrated into broader certification workflows
Process Automation: Manual certification steps (document review, calculation verification, stakeholder coordination) converted to automated digital workflows
Result: Complete digital certification process where VM0033 methodology requirements are embedded within automated policy workflows
Production Impact: VM0033 digitization resulted in the first fully automated blue carbon project certification workflow in production use on Verra's platform.
Why VM0033 Works as Our Reference:
Market significance: Leading methodology in the rapidly expanding blue carbon sector
Technical complexity: 130-page methodology with sophisticated calculation requirements ideal for demonstrating digitization capabilities
Real-world validation: Currently in production use, proving the digitization approach works at scale
Comprehensive scope: Global applicability across diverse coastal restoration contexts provides robust testing ground
Guardian Platform Overview
Guardian is a production-ready platform for environmental asset tokenization and certification workflow digitization, built on Hedera Hashgraph's distributed ledger technology. The platform is designed to handle the complexity requirements of real environmental methodologies while maintaining the performance and reliability needed for carbon market operations.
Technical Architecture:
Policy Workflow Engine (PWE): Configurable workflow system that adapts to any environmental methodology's specific requirements
Microservices Design: Distributed architecture with dedicated services for authentication, policy execution, calculation processing, and data management
Hedera Hashgraph Integration: Immutable transaction recording and consensus mechanisms for audit trail integrity
See Guardian architecture for detailed technical specifications and the Artifacts Collection for working examples and validation tools.
The VM0033 Case Study
VM0033 (Methodology for Tidal Wetland and Seagrass Restoration) serves as the ideal digitization case study due to its comprehensive complexity and ongoing real-world production use by Verra.
Methodology Scope and Complexity
Ecosystem Coverage:
Tidal Forests: Mangroves and other woody vegetation under tidal influence
Tidal Marshes: Emergent herbaceous vegetation in intertidal zones
Seagrass Meadows: Submerged aquatic vegetation in shallow coastal waters
See Guardian's schema system for data validation details.
Real-World Digitization Challenges
Scale and Complexity:
Parameter Management: Hundreds of parameters across multiple strata
Long-term Projections: 100-year data requirements for permanence calculations
Ecological Zones: Numerous variables with specific calculation and validation rules
Schema Design: Substantial complexity in data structure management
Complexity Reality Check: VM0033 requires managing hundreds of parameters across multiple strata, with some calculations requiring 100-year data projections. This scale requires systematic approaches and robust data management strategies.
API Integration: Guardian's RESTful APIs enable integration with existing monitoring systems, data collection platforms, and verification tools for seamless workflow incorporation.
Foundation Complete: You now understand methodology digitization concepts and Guardian's role in it. Chapter 2 will provide the VM0033 domain knowledge needed before we begin technical implementation.
# VM0033 Policy ID from dry-run URL or policy JSON
POLICY_ID="689d5badaf8487e6c32c8a2a"
# PDD Submission endpoint
POST https://guardianservice.app/api/v1/policies/689d5badaf8487e6c32c8a2a/blocks/55df4f18-d3e5-4b93-af87-703a52c704d6
# Pass bearer token in Authorization header
# Request body available in artifacts as [PDD_MR_request_body.json](../../_shared/artifacts/PDD_MR_request_body.json)
# Monitoring Report submission endpoint
POST https://guardianservice.app/api/v1/policies/689d5badaf8487e6c32c8a2a/blocks/53caa366-4c21-46ff-b16d-f95a850f7c7c
# Pass bearer token in Authorization header
# Request body available in artifacts as [PDD_MR_request_body.json](../../_shared/artifacts/PDD_MR_request_body.json)
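A minimal submission helper tying these pieces together (a fetch-based sketch; the request body is the JSON from the artifact file above):

// Submit a PDD to the VM0033 requestVcDocumentBlock endpoint shown above.
const POLICY_ID = '689d5badaf8487e6c32c8a2a';
const PDD_BLOCK_ID = '55df4f18-d3e5-4b93-af87-703a52c704d6';

async function submitPDD(token, requestBody) {
  const res = await fetch(
    `https://guardianservice.app/api/v1/policies/${POLICY_ID}/blocks/${PDD_BLOCK_ID}`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(requestBody), // contents of PDD_MR_request_body.json
    },
  );
  if (!res.ok) throw new Error(`PDD submission failed: ${res.status}`);
  return res.json();
}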
Chapter 20: Guardian Tools Architecture and Implementation
Building standardized calculation tools using Guardian's extractDataBlock and customLogicBlock mini-policy pattern
This chapter details how to build Guardian Tools - think of them as mini policies that implement standardized calculation methodologies like CDM AR Tools. Using AR Tool 14 as our example, you'll learn the complete architecture for creating reusable calculation tools that can be integrated into any environmental methodology.
Learning Objectives
After completing this chapter, you will be able to:
Understand Guardian's Tools architecture as reusable mini policies with data extraction and calculation blocks
Analyze AR Tool 14's production implementation in Guardian format
Build extractDataBlock workflows for schema input/output operations
Implement standardized calculation logic using customLogicBlock
Create modular, reusable tools for integration across multiple methodologies
Test and validate tool calculations against methodology test artifacts
Prerequisites
Completed Chapter 18: Custom Logic Block Development
Understanding of Guardian workflow blocks from Part IV
Access to AR Tool 14 artifacts: the tool policy configuration and the original CDM tool document (see References and Further Reading)
Familiarity with extractDataBlock documentation
What is AR Tool 14?
AR Tool 14 is a CDM (Clean Development Mechanism) methodological tool for "Estimation of carbon stocks and change in carbon stocks of trees and shrubs in A/R CDM project activities." It provides standardized methods for:
Primary Purpose
Tree biomass estimation using allometric equations, sampling plots, or proportionate crown cover
Shrub biomass estimation based on crown cover measurements
Carbon stock changes calculated between two points in time or as annual changes
Uncertainty management with discount factors for conservative estimates
Key Calculation Methods
From the original CDM tool document, AR Tool 14 supports multiple approaches:
Measurement of sample plots - Stratified random sampling and double sampling
Modelling approaches - Tree growth and stand development models
Proportionate crown cover - For sparse vegetation scenarios
Direct change estimation - Re-measurement of permanent plots
Guardian Tools Architecture
Mini-Policy Pattern
Guardian Tools usually follow a three-block pattern.
Block Flow Architecture
The Tool workflow follows this pattern:
Input Event → get_ar_tool_14 (extractDataBlock)
Data Processing → calc_ar_tool_14 (customLogicBlock)
Output Event → set_ar_tool_14 (extractDataBlock)
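A skeletal tool configuration following this pattern might look like the JSON below. The block tags mirror the flow above, but the action property values and the omitted fields are assumptions for illustration, not verbatim production config:

{
  "blockType": "tool",
  "tag": "ar_tool_14",
  "children": [
    { "blockType": "extractDataBlock", "tag": "get_ar_tool_14", "action": "get" },
    { "blockType": "customLogicBlock", "tag": "calc_ar_tool_14" },
    { "blockType": "extractDataBlock", "tag": "set_ar_tool_14", "action": "set" }
  ]
}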
extractDataBlock: Data Input/Output Engine
Understanding extractDataBlock
The extractDataBlock is Guardian's mechanism for working with embedded schema data. From the documentation:
"This block is used for VC documents which are based on (or 'conform to') a schema which contains embedded schemas, extractDataBlock provides means to extract a data set which corresponds to any of these embedded schemas (at any depth level), and if required after processing to return the updated values back into the VC dataset to their original 'place'."
AR Tool 14 Schema Integration
In our AR Tool 14 implementation, the extractDataBlock references schema #632fd070-d788-49ae-889b-cd281c6c7194&1.0.0, which is the published version of the Tool 14 schema. The schema Excel is included in the chapter artifacts.
This extracts the AR Tool 14 input data structure from the parent document, containing parameters like:
Tree measurements - DBH, height, species data
Plot information - Area, sampling design, stratum details
Calculation methods - Selected approaches for biomass estimation
Uncertainty parameters - Confidence levels and discount factors
Data Extraction Process
When a policy workflow calls the AR Tool 14, the extraction process works as follows:
The parent VC document (for example, a VM0033 PDD) arrives on the tool's input event.
get_ar_tool_14 locates the embedded sub-document conforming to the AR Tool 14 schema, at whatever depth it sits in the parent document.
The extracted data set is passed to calc_ar_tool_14 for processing.
After calculation, set_ar_tool_14 writes the updated values back into their original place in the parent VC dataset.
customLogicBlock: AR Tool 14 Calculation Engine
Production JavaScript Implementation
The AR Tool 14 customLogicBlock contains the actual calculation engine; the structure outlined below is drawn from our tool artifact.
Stratified Random Sampling Implementation
Code for stratified random sampling from AR Tool 14:
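The production code ships with the tool artifact; the condensed sketch below illustrates the core idea. The function and field names are ours, and the carbon fraction default is the IPCC 0.47, not necessarily the artifact's value:

// Illustrative sketch of stratified random sampling for tree biomass.
function estimateTreeBiomassStratified(strata) {
  let totalDryMatter = 0; // t d.m. across all strata
  for (const stratum of strata) {
    // Mean biomass per hectare from this stratum's sample plots
    const perHa = stratum.plots.map((p) => p.treeBiomass / p.areaHa);
    const meanPerHa = perHa.reduce((a, b) => a + b, 0) / perHa.length;
    totalDryMatter += meanPerHa * stratum.areaHa; // scale to stratum area
  }
  // Dry matter → CO2e: carbon fraction (0.47 assumed) × 44/12
  return totalDryMatter * 0.47 * (44 / 12);
}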
Uncertainty Management System
AR Tool 14 also implements a sophisticated uncertainty discount system:
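A simplified sketch of the idea follows. The 10% threshold and linear scaling here are illustrative assumptions; the actual tool specifies precise discount procedures:

// Apply a conservative discount when estimate uncertainty is too high.
function applyUncertaintyDiscount(estimate, uncertaintyPercent) {
  if (uncertaintyPercent <= 10) return estimate; // within allowed uncertainty
  const discount = (uncertaintyPercent - 10) / 100; // assumed linear discount
  return estimate * (1 - discount); // conservative downward adjustment
}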
Building Your Own Tool
Step 1: Define Tool Schema
First, create a schema that captures all the input parameters for your calculation methodology.
Step 2: Implement Tool Policy Structure
Create the three-block tool structure shown earlier (extract → calculate → extract).
Step 3: Implement Calculation Logic
Build your customLogicBlock calculation function following the Guardian pattern:
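A hedged skeleton is shown below. The documents array and done() callback follow Guardian's customLogicBlock conventions from Chapter 18; everything inside is placeholder:

// Skeleton calculation function for a Guardian Tool's customLogicBlock
(function calculate() {
  const doc = documents[0].document;       // VC handed over by the get_ block
  const input = doc.credentialSubject[0];  // tool input fields

  // 1. Read the user's selected calculation method
  // 2. Run the standardized methodology calculations
  const outputs = {
    // e.g. carbon_stock_change_tCO2e: ...
  };

  // 3. Attach results so the set_ block can write them back to the parent VC
  Object.assign(input, outputs);
  done(documents);
})();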
Tool Integration in Parent Policies
Calling Tools from Methodologies
Guardian Tools are designed to be called from parent methodology policies. VM0033, for example, would pull AR Tool 14 into its PDD workflow wherever tree biomass must be quantified, using the event configuration described below.
Tool Event Configuration
Tools communicate with parent policies through Guardian's event system:
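Wiring is declared in the policy JSON's events arrays. An illustrative entry follows; the parent block tag is hypothetical, and the field names follow Guardian's general event schema:

{
  "events": [
    {
      "target": "get_ar_tool_14",
      "source": "pdd_submission_block",
      "input": "RunEvent",
      "output": "RunEvent",
      "actor": "",
      "disabled": false
    }
  ]
}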
Testing and Validation Framework
Unit Testing Tool Calculations
Test individual calculation functions against methodology test cases:
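For instance, using Node's built-in assert module against the stratified-sampling sketch from earlier in this chapter (expected values should come from the methodology test artifacts, not the hand-computed number used here):

const assert = require('node:assert');

// One stratum, two 0.1 ha plots → mean 100 t d.m./ha over 10 ha
const strata = [
  { areaHa: 10, plots: [{ treeBiomass: 12, areaHa: 0.1 }, { treeBiomass: 8, areaHa: 0.1 }] },
];
const result = estimateTreeBiomassStratified(strata);
// 1000 t d.m. × 0.47 × 44/12 ≈ 1723.33 t CO2e
assert.ok(Math.abs(result - 1723.33) < 0.01, 'biomass estimate within tolerance');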
Best Practices for Guardian Tools
Design Principles
Single Responsibility: Each tool should implement exactly one methodology or calculation standard
Modular Architecture: Break complex calculations into testable functions
Error Resilience: Handle edge cases and invalid inputs gracefully
Chapter Summary
Guardian Tools provide a powerful architecture for implementing standardized calculation methodologies as reusable mini policies. Key concepts:
Tools are like mini policies that follow the extractDataBlock → customLogicBlock → extractDataBlock pattern
AR Tool 14 demonstrates complete implementation of complex biomass calculations with uncertainty management
extractDataBlock handles schema-based data input and output operations automatically
customLogicBlock contains the actual methodology calculation logic in JavaScript
Next Steps
Chapter 21 will demonstrate comprehensive testing and validation frameworks for custom logic blocks, covering both individual tools and complete policies.
References and Further Reading
AR Tool 14 tool policy artifact - complete tool policy configuration
CDM AR Tool 14 document - original CDM methodology document
Guardian extractDataBlock Documentation
Tool Building Success: You now understand how to build complete Guardian Tools using the extractDataBlock and customLogicBlock pattern. The AR Tool 14 example provides a production-ready template for implementing any standardized calculation methodology in Guardian.
Chapter 16: Advanced Policy Patterns
Exploring advanced Guardian policy features for production methodologies including external data integration, document validation, API transformation, and policy testing
Building on VM0033's implementation patterns from Chapter 15, Chapter 16 explores advanced features that enable production-scale policy deployment. These patterns handle external data integration, document validation, API transformations, and testing workflows essential for real-world carbon credit programs.
1. Data Transformation Blocks for API Integration
Verra Project Hub API Integration
VM0033 implements a dataTransformationAddon that converts Guardian project submissions into Verra's Project Hub compatible API payloads, enabling automatic project registration with external registries.
VM0033 Project Description Transformation Block
The transformation block in VM0033 (tag: project-description) demonstrates how Guardian can transform internal project data into external API formats:
Key Transformation Features:
API Compatibility: Creates Verra Project Hub API-compatible JSON structure
Data Mapping: Maps Guardian schema fields to external registry requirements
Standard Integration: Handles VCS and CCB standard-specific fields
Default Values: Sets appropriate defaults for registry submission status
Implementation Pattern:
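A simplified sketch of such a transformation expression follows. All field names on both the Guardian and registry sides are illustrative, not the production block's exact mapping:

// Sketch: map Guardian PDD fields to a Verra Project Hub-style payload.
function transform(documents) {
  const pdd = documents[0].document.credentialSubject[0];
  return {
    name: pdd.project_details?.project_title, // Guardian field → registry field
    description: pdd.project_details?.summary,
    standard: 'VCS',                          // VCS / CCB handling per project type
    status: 'Under Development',              // default registry submission status
  };
}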
Implementation Use Cases
Carbon Registry Integration:
Automatic project listing with Verra, Gold Standard, or other registries
Real-time status synchronization between Guardian and external systems
Standardized data exchange for multi-registry projects
Corporate Reporting:
Transform carbon project data for corporate sustainability reporting
Generate API payloads for ESG reporting platforms
Create standardized data formats for carbon accounting systems
2. Document Validation Blocks
Guardian's documentValidatorBlock ensures document integrity and compliance throughout policy workflows. This block validates document structure, content, and relationships before processing continues.
Document Validation Architecture
Validation Types:
Schema Validation: Ensures documents conform to defined JSON schemas
Ownership Validation: Verifies document ownership and assignment rules
Content Validation: Checks specific field values and business logic
Relationship Validation: Validates links between related documents
Condition Types: each validation condition is defined by a type, a description, and an example use case.
Practical Validation Examples
Typical validations include project eligibility checks (for example, required land-use history) and VVB assignment checks (confirming the assigned VVB is approved and active). A hedged configuration sketch follows.
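In the sketch below, the property names follow the block's general condition structure, but the values, field path, and schema IRI are invented for illustration:

{
  "blockType": "documentValidatorBlock",
  "tag": "validate_project_eligibility",
  "documentType": "vc-document",
  "schema": "#pdd-schema-uuid",
  "conditions": [
    {
      "type": "Equal",
      "field": "document.credentialSubject.0.land_use_status",
      "value": "abandoned_2_plus_years"
    }
  ]
}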
Note: While VM0033 doesn't use documentValidatorBlock in its current implementation, it relies on other validation mechanisms including documentsSourceAddon filters and customLogicBlock validations to ensure document integrity.
3. External Data Integration
Guardian's externalDataBlock enables policies to integrate with external APIs and data providers for real-time environmental monitoring and verification.
External Data Block Architecture
Example 1: Kanop Environmental Data Integration
Kanop provides satellite-based MRV technology for nature-based carbon projects. Integration enables automatic data retrieval for biomass monitoring, forest cover analysis, and carbon stock assessments. An external data block can be used to pull this data from Kanop into Guardian workflows.
Example 2: IoT Device Integration for Cookstove Projects
For metered cookstove projects, external data blocks can integrate with IoT devices to collect real-time usage data:
IoT Data Processing:
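A sketch of such processing is below; the device payload fields and the emission factor are assumptions for illustration:

// Transform a raw IoT cookstove reading into a monitoring data submission.
function processCookstoveReading(reading) {
  // Basic range check before anything reaches the policy workflow
  if (typeof reading.fuelSavedKg !== 'number' || reading.fuelSavedKg < 0) {
    throw new Error(`Invalid reading from device ${reading.deviceId}`);
  }
  return {
    device_id: reading.deviceId,
    monitoring_period: reading.timestamp.slice(0, 10), // YYYY-MM-DD
    fuel_saved_kg: reading.fuelSavedKg,
    // Emission factor (kg CO2e per kg fuelwood) is a placeholder value
    emissions_avoided_kg: reading.fuelSavedKg * 1.8,
  };
}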
Real-Time Data Validation
External data integration should include validation mechanisms to ensure data quality; the cookstove sketch above shows a basic range check before submission.
4. Policy Testing Framework
Guardian provides robust testing capabilities for policy validation before production deployment, including manual dry-run testing and programmatic test automation.
Dry-Run Mode Testing
Dry-run mode enables complete policy testing without real blockchain transactions. A policy developer can take on different roles and simulate the entire process end to end to verify everything works.
Starting Dry-Run Mode:
You can trigger dry-run either via the policy editor UI or the API (PUT /api/v1/policies/{policyId}/dry-run).
Dry-Run Features:
Virtual Users: Create test users without real Hedera accounts
View IPFS Files: Check files that would be stored in IPFS
Programmatic Policy Testing
Guardian supports automated policy testing with predefined test scenarios and expected outcomes.
Adding Test Cases:
Tests are embedded in policy files and executed programmatically.
Running Automated Tests:
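Guardian exposes test-run endpoints (listed in this handbook's API reference); a minimal runner sketch, with base URL and token handling following the earlier examples:

// Run a policy's embedded test via Guardian's REST API and fetch results.
async function runPolicyTest(baseUrl, token, policyId, testId) {
  const headers = { Authorization: `Bearer ${token}` };
  // Kick off the test
  await fetch(`${baseUrl}/api/v1/policies/${policyId}/tests/${testId}/run`, {
    method: 'POST',
    headers,
  });
  // Fetch results (production code should poll until the run completes)
  const res = await fetch(
    `${baseUrl}/api/v1/policies/${policyId}/tests/${testId}/results`,
    { headers },
  );
  return res.json();
}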
Test Result Analysis
Test Failure Analysis
When tests fail, Guardian provides detailed comparison and debugging information.
Testing Best Practices:
Test Coverage Strategy: Test each stakeholder workflow independently, validate all document state transitions, test error handling and edge cases
Test Data Management: Create realistic test datasets matching production scenarios, use boundary value testing for numerical inputs
Continuous Testing: Run tests after each policy modification, automate testing in CI/CD pipelines
5. Demo Mode for Simplified Testing
Guardian provides Demo Mode as a simplified approach to policy testing, particularly useful for novice users and quick policy validation. Demo mode is selected during policy import.
Demo Mode Features
Demo Mode operates similarly to dry-run but with enhanced user interface simplification:
Read-Only Policy Processing: All policy processing is read-only; policy editing is not possible
No External Communication: No communication with external systems such as Hedera network or IPFS
Simplified UI: Streamlined interface designed for ease of use
Local Storage: All artifacts stored locally similar to dry-run mode
Summary
Chapter 16 demonstrated Guardian's advanced policy patterns essential for production deployment:
Data Transformation: VM0033's project-description transformation block converts Guardian project data to Verra API-compatible formats for automatic registry integration
Document Validation: documentValidatorBlock provides robust validation with condition-based rules for ensuring document integrity and business logic compliance
External Data Integration: externalDataBlock enables integration with providers like Kanop for satellite monitoring and IoT devices for real-time environmental data
These patterns enable Guardian policies to integrate with real-world carbon markets, environmental monitoring systems, and corporate reporting platforms while maintaining data integrity and audit trails.
Next Steps: Part V covers the calculation logic implementation, diving deep into methodology-specific emission reduction calculations and the JavaScript calculation engine that powers Guardian's environmental accounting.
Prerequisites Check: Ensure you have:
Completed Chapters 14-15 (Policy architecture and VM0033 implementation)
Access to external API documentation for your methodology
Test datasets for policy validation
Understanding of your methodology's data requirements
Time Investment: ~25 minutes reading + ~90 minutes hands-on testing with dry-run mode
Practical Exercises:
Dry-Run Testing: Import and set up VM0033 in dry-run mode and test complete project lifecycle
External Data Integration: Configure external data block for your methodology's monitoring requirements
Document Validation: Implement validation rules for your specific business logic
API Transformation: Create transformation block for your target registry's API format
Chapter Outlines
Purpose: Establish the foundation for understanding methodology digitization on Guardian platform.
Key Topics:
What is methodology digitization and why it matters
Guardian platform's role in environmental asset tokenization
Overview of the digitization process from PDF to working policy
VM0033 as our reference case study - why it was chosen
Benefits of digitization: transparency, efficiency, automation
Common challenges and how this handbook addresses them
Setting up your development environment
VM0033 Context: Introduction to VM0033's significance in blue carbon markets and its complexity as a comprehensive tidal wetland restoration methodology.
Purpose: Teach systematic approach to analyzing methodology documents for digitization.
Key Topics:
Structured reading techniques for methodology PDFs
Identifying workflow stages and decision points
Mapping stakeholder interactions and document flows
Extracting data requirements and validation rules
Understanding temporal boundaries and crediting periods
Identifying calculation dependencies and parameter relationships
VM0033 Context: Step-by-step analysis of VM0033 document, breaking down its content into digestible components and identifying digitization priorities.
Purpose: Master the process of extracting and organizing all mathematical components of a methodology.
Key Topics:
Recursive equation analysis starting from final emission reduction formula
Parameter classification: monitored vs. non-monitored vs. user-input
Building parameter dependency trees
Identifying default values and lookup tables
Handling conditional calculations and alternative methods
Creating calculation flowcharts and documentation
VM0033 Context: Complete mapping of VM0033's emission reduction equations, including baseline emissions, project emissions, and leakage calculations with all parameter dependencies.
Purpose: Create comprehensive test cases that validate the digitized methodology.
Key Topics:
Designing test scenarios covering all methodology pathways
Creating input parameter datasets for testing
Establishing expected output benchmarks
Building validation spreadsheets with all calculations
Documenting test cases and acceptance criteria
Version control for test artifacts
VM0033 Context: Development of complete VM0033 test spreadsheet with multiple project scenarios, covering different wetland types, restoration activities, and calculation methods.
Purpose: Build comprehensive PDD schemas using Excel-first approach with step-by-step implementation.
Key Topics:
Excel schema template usage and structure
Step-by-step PDD schema construction process
Conditional logic implementation with enum selections
Sub-schema creation and organization
Field key management for calculation code readability
Guardian import process and testing
VM0033 Context: Complete walkthrough of building VM0033 PDD schema from Excel template, including certification pathway conditionals and calculation parameter capture.
Purpose: Master API schema management, field properties, and advanced Guardian features.
Key Topics:
API-based schema operations and updates
Field key naming best practices for calculation code
Standardized Property Definitions from GBBC specifications
Four Required field types: None, Hidden, Required, Auto Calculate
Schema UUID management for efficient development
Bulk operations and version control strategies
VM0033 Context: Advanced schema management techniques used in VM0033 development, including Auto Calculate field implementation for equation results and UUID management for policy integration.
Purpose: Validate schemas using Guardian's testing features before deployment.
Key Topics:
Default Values, Suggested Values, and Test Values configuration
Schema preview testing and functionality validation
UUID integration into policy workflow blocks
Test artifact completeness checking
Field validation rules and user experience optimization
Pre-deployment checklist and user testing
VM0033 Context: Practical testing approach used for VM0033 schema validation, including systematic testing of conditional logic and calculation field behavior.
Purpose: Establish foundational understanding of Guardian policy architecture and design patterns for environmental methodology implementation.
Key Topics:
Guardian policy architecture fundamentals and component overview
Event-driven workflow block communication system
Policy lifecycle management and versioning strategies
Hedera blockchain integration for immutable audit trails
Document flow design patterns and state management
Security considerations and access control architecture
VM0033 Context: Guardian policy architecture analysis using VM0033 production implementation as reference for tidal wetland restoration methodology digitization.
Document filtering and status management implementations
Button configuration patterns for workflow transitions
End-to-end integration patterns and event routing
VM0033 Context: Complete analysis of VM0033 production policy JSON with extracted block configurations, focusing on real-world implementation patterns for tidal wetland restoration certification.
Purpose: Advanced Guardian policy implementation patterns using production VM0033 configurations.
Key Topics:
Transformation blocks for external API integration (Verra project hub)
Document validation blocks for data integrity and business rule enforcement
External data integration patterns (Kanop satellite monitoring, IoT devices)
Policy testing frameworks including dry-run mode and programmatic testing
Demo mode configuration for training and development environments
Production deployment patterns and error handling strategies
VM0033 Context: Real implementation examples from VM0033 production policy including dataTransformationAddon for Verra API integration, documentValidatorBlock configurations, and comprehensive testing approaches.
Purpose: Implement emission reduction calculations using JavaScript in Guardian's customLogicBlock.
Key Topics:
Guardian customLogicBlock architecture and JavaScript execution environment
Document input/output handling with credentialSubject field access
VM0033 baseline emissions, project emissions, and net emission reduction calculations
Schema field integration and Auto Calculate field implementation
Error handling and validation within calculation blocks
Testing calculation logic outside and within Guardian environment
VM0033 Context: Complete implementation of VM0033 emission reduction calculations using real production JavaScript from er-calculations.js artifact, including field mapping to PDD and monitoring report schemas.
Purpose: Brief foundation chapter establishing Formula Linked Definition (FLD) concepts for parameter relationship management in Guardian methodologies.
Key Topics:
FLD concept and basic architectural understanding
Parameter reuse across multiple schema documents in policy workflows
VM0033 parameter relationship examples suitable for FLD implementation
Integration patterns with customLogicBlock calculations
Basic design principles for FLD frameworks
VM0033 Context: Concise overview establishing FLD concepts with VM0033 parameter relationship examples, focusing on foundational understanding rather than detailed implementation.
Purpose: Build Guardian Tools using extractDataBlock and customLogicBlock patterns, with AR Tool 14 as practical example.
Key Topics:
Guardian Tools architecture as mini-policies with three-block pattern
ExtractDataBlock workflows for schema-based data input/output operations
CustomLogicBlock integration for standardized calculation implementations
AR Tool 14 complete implementation with stratified random sampling
Tool versioning, schema evolution, and production deployment patterns
Tool integration patterns for use across multiple methodologies
VM0033 Context: Real AR Tool 14 implementation from Guardian production artifacts showing complete biomass calculation tool that integrates with VM0033 wetland restoration methodology.
Purpose: Comprehensive testing using Guardian's dry-run mode and customLogicBlock testing interface with VM0033 and AR Tool 14 test artifacts.
Key Topics:
Guardian's customLogicBlock testing interface with three input methods (schema-based, JSON editor, file upload)
Interactive testing and debugging with Guardian's built-in debug() function
Dry-run mode for complete policy workflow testing without blockchain transactions
Test artifact validation using final-PDD-vc.json and official methodology spreadsheets
Testing at every calculation stage: baseline, project, leakage, and net ERR
API-based automated testing using Guardian's REST APIs and Cypress framework
Best practices for test data management and systematic testing approaches
VM0033 Context: Practical testing implementation using VM0033_Allcot_Test_Case_Artifact.xlsx and final-PDD-vc.json with Guardian's testing interface, demonstrating complete validation workflow from individual calculations to full policy testing.
Purpose: Automating methodology operations using Guardian's REST API framework for production deployment and integration.
Key Topics:
Guardian API authentication patterns with JWT tokens and refresh token management
VM0033 policy block API structure using real block IDs for PDD and monitoring report submission
Dry-run API operations with virtual user creation and management for automated testing
Automated workflow execution class demonstrating complete VM0033 project lifecycle via APIs
Cypress testing integration for automated methodology validation and regression testing
VM0033 Context: Practical API automation using VM0033 policy endpoints, demonstrating automated data submission, virtual user workflows, and production API patterns for scalable methodology operations.
Part VII: Deployment and Maintenance
Chapter 24: User Management and Role Assignment
Purpose: Set up and manage users, roles, and permissions for deployed methodologies.
Key Topics:
User onboarding and account management
Role assignment and permission configuration
Organization management and multi-tenancy
Access control and security policies
User training and support procedures
Audit and compliance reporting
VM0033 Context: User management for VM0033 implementation, including VVB accreditation, project developer registration, and Verra administrator roles.
Chapter 25: Monitoring and Analytics - Guardian Indexer
Purpose: Monitoring and analytics for deployed methodologies and data submitted via Indexer
Key Topics:
Usage analytics and reporting
Data export and reporting capabilities
Compliance monitoring and audit trails
VM0033 Context: Viewing all data on Indexer, tracking project registrations, credit issuances
Chapter 26: Maintenance and Updates
Purpose: Maintain and evolve deployed methodologies over time.
Key Topics:
Maintenance procedures and schedules
Bug fixing and issue resolution
Methodology updates and regulatory changes
User feedback integration and feature requests
Long-term support and lifecycle planning
VM0033 Context: Maintenance strategy for VM0033 implementation, including handling Verra methodology updates and regulatory changes.
Purpose: Provide solutions for common problems encountered during methodology digitization.
Key Topics:
Common digitization pitfalls and solutions
Debugging techniques and tools
Data quality issues and resolution
User experience problems and fixes
Integration and compatibility issues
VM0033 Context: Specific troubleshooting scenarios encountered during VM0033 implementation and their solutions.
Implementation Notes
Each chapter will include:
Practical Examples: Real code, configurations, and screenshots from VM0033 implementation
Best Practices: Lessons learned and recommended approaches
Common Pitfalls: What to avoid and how to prevent issues
Testing Strategies: How to validate each component
Performance Considerations: Optimization tips and scalability guidance
Maintenance Notes: Long-term considerations and update strategies
The handbook is designed to be both a learning resource and a reference guide, with clear navigation between conceptual understanding and practical implementation.
# Run all policy tests
POST /api/v1/policies/{policyId}/tests/run
# Run specific test
POST /api/v1/policies/{policyId}/tests/{testId}/run
# Get test results
GET /api/v1/policies/{policyId}/tests/{testId}/results
VM0033 "Methodology for Tidal Wetland and Seagrass Restoration" is a sophisticated 130-page framework designed specifically for blue carbon projects. Understanding this methodology is essential because it represents the technical complexity that modern digitization platforms must handle - comprehensive calculation requirements, multiple stakeholder roles, and intricate validation logic that must all be preserved when moving from manual to automated processes.
Digitization Context: VM0033 demonstrates why methodology digitization is more than document conversion. The methodology's complexity requires sophisticated digital systems that can embed technical requirements within automated certification workflows while maintaining scientific rigor.
VM0033 Scope and Applicability
VM0033 addresses tidal wetland restoration across three interconnected ecosystem types, reflecting the scientific understanding that coastal restoration requires integrated approaches rather than isolated interventions. This systems-thinking approach creates complexity that demands sophisticated digital implementation.
Ecosystem Coverage:
Tidal Forests: Mangroves and woody vegetation under tidal influence, representing some of the most carbon-dense ecosystems on Earth
Tidal Marshes: Emergent herbaceous vegetation in intertidal zones, providing critical habitat while storing substantial carbon in soils
Seagrass Meadows: Submerged aquatic vegetation in shallow coastal waters, supporting marine biodiversity while sequestering carbon in biomass and sediments
Core Definition: "Re-establishing or improving the hydrology, salinity, water quality, sediment supply and/or vegetation in degraded or converted tidal wetlands."
This definition emphasizes that restoration goes beyond simple replanting to address the fundamental processes that support healthy wetland function.
Eligible Project Activities
VM0033 recognizes that successful restoration requires addressing multiple stressors simultaneously rather than implementing single interventions. The methodology organizes eligible activities into four primary categories:
Address underlying stressors favoring invasive species
Applicability Requirements and Exclusions
VM0033 includes specific requirements to ensure projects deliver genuine emission reductions without causing negative impacts elsewhere. Project areas must be free of displaceable land uses, demonstrated through evidence of abandonment for two or more years, economic unprofitability, or legal prohibitions on alternative uses. This requirement prevents projects from simply displacing activities to other locations where they might cause emissions.
Critical Exclusions: VM0033 projects cannot include commercial forestry, water table lowering (except specific conversions), organic soil burning, or nitrogen fertilizer application during the crediting period.
The methodology excludes several activities that could undermine restoration objectives or create perverse incentives. Commercial forestry is prohibited in baseline activities to prevent projects from claiming credit for avoiding timber harvest that was never economically viable. Water table lowering is generally prohibited except for specific conversions from open water to tidal wetland. Organic soil burning and nitrogen fertilizer application are excluded due to their potential to increase greenhouse gas emissions and compromise ecosystem integrity.
Project Boundaries and Temporal Considerations
VM0033 establishes sophisticated temporal boundaries that account for the long-term nature of soil carbon dynamics in coastal systems. The methodology introduces two innovative concepts that address a fundamental challenge in wetland carbon accounting: how to claim credit for preserving carbon stocks that are finite and will eventually be depleted even under restoration scenarios.
Key Innovation: VM0033's PDT and SDT concepts provide practical tools for addressing finite soil carbon stocks while maintaining scientific rigor in long-term carbon accounting.
Temporal Boundary Concepts:
Peat Depletion Time (PDT) - Organic Soils:
Definition: Time when all peat disappears or reaches no further oxidation level
Calculation Factors: Average organic soil depth above drainage limit, soil loss rate from subsidence/fire
Requirement: Conservative estimates remaining constant over time
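As an illustration of how these two calculation factors combine (a sketch only; VM0033's actual equations add conservativeness requirements):

$$\text{PDT} \approx \frac{\text{average organic soil depth above the drainage limit}}{\text{soil loss rate from subsidence and fire}}$$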
Soil Organic Carbon Depletion Time (SDT) - Mineral Soils:
Eroded Soils: Conservatively set at 5 years
Excavated/Drained Soils: Based on average organic carbon stock and oxidation loss rate
Purpose: Limits period for claiming emission reductions from restoration
These temporal concepts reflect VM0033's practical approach to carbon accounting in dynamic coastal environments where complete permanence is unrealistic but significant climate benefits can still be achieved through restoration activities.
Geographic Boundary Requirements:
Mandatory Stratification Factors:
Organic vs. mineral soil areas
Seagrass meadows vs. other wetland types
Native ecosystems vs. degraded areas
Purpose: Ensure emission calculations reflect diverse project conditions
Salinity Stratification (Unique VM0033 Feature):
Basis: Methane emissions vary significantly with salinity levels
Requirements: Stratify by salinity averages and low points during peak emission periods
Timing: Focus on growing seasons in temperate ecosystems
Result: Accurate methane accounting across salinity gradients
Sea Level Rise Integration:
Assessment Required: Potential area loss due to sea level rise
Procedures: Estimate eroded strata areas over time
Purpose: Ensure emission reduction claims remain valid under changing climate
Greenhouse Gas Sources:
CH₄: Emissions from soil and biomass (salinity-dependent)
N₂O: Emissions from soil and biomass
Flexibility: Conservative approaches allowed where direct measurement not feasible
The comprehensive boundary approach recognizes that tidal wetland restoration involves complex, interconnected systems where changes in one component affect multiple others. Safeguards prevent double-counting and leakage that could undermine project integrity while ensuring that complex requirements can be translated into automated policy workflows for diverse coastal restoration contexts.
Baseline Scenarios and Project Activities
VM0033 recognizes that tidal wetland systems exist along a continuum from highly degraded to fully functional ecosystems. The baseline scenario represents what would occur without the restoration project, serving as the reference point for measuring emission reductions.
Baseline Scenario Determination:
Analysis Requirements:
Systematic analysis of historical trends, current conditions, likely future developments
Consider continued degradation, drainage, natural recovery potential, existing management practices
Account for regulatory frameworks and protected area designations
Degraded Wetland Baselines:
Organic Soils: Continued oxidation releasing stored carbon as CO₂, subsidence from decomposition
Mineral Soils: Continued erosion and organic carbon loss, particularly from wave action/altered hydrology
Fire-Prone Areas: Organic soil combustion as additional emission source
Equilibrium Consideration: Carbon loss rates may decrease as readily available organic matter depletes
Grazing Management: Modify livestock access/timing while potentially maintaining traditional use
Adaptive Approach: Moderate grazing may be beneficial in historically grazed systems
Key Principle: Successful restoration requires addressing multiple stressors simultaneously with adaptive management approaches that maintain rigorous emission reduction standards.
Stakeholder Ecosystem and Roles
VM0033 projects operate within a complex network of stakeholders, each bringing distinct expertise, responsibilities, and interests to coastal restoration initiatives. The methodology's success depends on effective coordination among these diverse participants, from technical specialists to local communities to financial institutions. Understanding this stakeholder ecosystem is crucial for project implementation and for designing digital platforms that can accommodate varied needs and capabilities.
Guardian Integration: The platform's roles and permissions system accommodates VM0033's diverse stakeholder types, from project proponents to VVBs, each with different access needs and responsibilities.
Key Stakeholder Types:
Project Proponents (Primary Drivers):
Entity Types: Government agencies, non-profits, private companies, collaborative partnerships
The interconnected nature of these stakeholder relationships requires coordination mechanisms that can accommodate diverse interests while maintaining focus on restoration objectives and carbon market requirements. Digital platforms must support these complex relationships through appropriate access controls, communication tools, and workflow management capabilities.
Emission Sources and Carbon Pools
VM0033 addresses the complex biogeochemical processes occurring in tidal wetland systems through comprehensive accounting of multiple greenhouse gas sources and carbon pools. Understanding these sources and pools is essential for accurate emission reduction quantification and for designing monitoring programs that capture all significant changes in greenhouse gas fluxes.
Carbon Pools Overview
Primary Pools:
Soil organic carbon (most significant)
Aboveground biomass (trees, shrubs, herbaceous)
Belowground biomass (root systems)
Dead wood and litter (significant in forested systems)
Carbon Storage in Wetland Systems
Soil organic carbon represents the most significant carbon pool in most wetland systems, with the potential to accumulate enormous quantities over centuries or millennia under anaerobic conditions. This carbon exists in forms ranging from recently deposited plant material to highly decomposed organic matter that can persist for thousands of years. The methodology distinguishes between autochthonous carbon derived from internal vegetation and allochthonous carbon from upstream, tidal, or atmospheric sources. Projects can only claim credit for carbon that wouldn't accumulate under baseline conditions, preventing overestimation of restoration benefits.
Biomass carbon pools encompass both aboveground components including trees, shrubs, and herbaceous vegetation, and belowground root systems. Wetland systems can achieve remarkable productivity under appropriate conditions, ranking among Earth's most productive ecosystems. However, biomass carbon stocks are highly variable based on species composition, age structure, and environmental conditions. Quantification requires specialized procedures adapted for wetland systems, particularly for herbaceous vegetation with significant seasonal variability.
Dead wood and litter can represent substantial carbon pools in forested wetland systems. Under anaerobic conditions, these materials accumulate carbon rather than decomposing rapidly. However, they become emission sources when exposed to aerobic conditions through drainage or other disturbances, requiring careful consideration in project design and monitoring.
Greenhouse Gas Dynamics
Carbon dioxide (CO₂) represents the most significant greenhouse gas flux in most wetland restoration projects. Emissions occur primarily through soil organic carbon oxidation when anaerobic soils are exposed to oxygen through drainage or excavation activities. These emissions can continue for years or decades depending on soil carbon content and environmental conditions. Removals occur through photosynthesis and subsequent carbon storage in biomass and soil pools, with plant material decomposing under anaerobic conditions to form stable organic matter.
Methane (CH₄) presents a unique challenge in wetland carbon accounting due to its natural production through anaerobic decomposition processes. Methane emissions vary significantly based on salinity, temperature, vegetation type, and organic matter availability. The salinity effect is particularly important: freshwater systems typically produce more methane than saltwater systems because sulfate in seawater inhibits methanogenic bacteria. VM0033 addresses this variability through stratification by salinity conditions and provides default emission factors when site-specific data are unavailable.
Nitrous oxide (N₂O) emissions occur primarily at the interface between aerobic and anaerobic zones where nitrification and denitrification processes take place. While typically smaller in magnitude than CO₂ or methane fluxes, N₂O emissions are significant due to the gas's high global warming potential. The methodology allows conservative approaches that avoid overestimation while capturing significant sources, with options for direct monitoring or use of conservative default values.
The comprehensive approach to greenhouse gas accounting ensures that VM0033 projects deliver net climate benefits by accounting for all significant emission sources and removals. This thorough accounting builds confidence in the methodology's environmental integrity while providing practical guidance for project implementation across diverse coastal restoration contexts.
Monitoring Requirements and Verification Processes
VM0033's monitoring requirements address the complexity of tracking carbon dynamics across multiple pools and greenhouse gas sources in dynamic coastal environments. The monitoring program must capture measurable changes while accounting for natural variability and measurement uncertainty inherent in wetland systems.
Monitoring Program Objectives
Wetland restoration projects operate over multi-decade timeframes, requiring monitoring systems that can demonstrate carbon performance throughout extended crediting periods. The monitoring program serves three primary functions:
Performance Verification: Quantifying actual carbon sequestration and emission reductions against projected baselines
Adaptive Management: Identifying restoration challenges early to enable corrective actions
Compliance Documentation: Providing verifiable evidence of methodology adherence for carbon market participation
Core Monitoring Components
Soil Carbon Monitoring
Soil carbon represents the largest carbon pool in most wetland systems but presents significant measurement challenges due to high spatial variability and slow rates of change. VM0033 requires establishment of permanent monitoring plots with precise geospatial coordinates to enable repeated measurements over time.
The methodology specifies stratified sampling approaches based on ecosystem type, soil characteristics, and restoration activities. Each stratum requires sufficient sample plots to achieve statistical significance when scaling plot-level measurements to project-level estimates. Soil sampling protocols address sampling depth, timing, and laboratory analysis procedures to ensure consistency and accuracy.
Soil carbon changes occur gradually, requiring monitoring programs with sufficient statistical power to detect meaningful changes above background variability. The methodology provides guidance on sampling intensity and frequency based on expected rates of change and required precision levels.
Biomass Carbon Monitoring
Tree and shrub monitoring follows established forestry protocols adapted for wetland conditions. Standard diameter and height measurements combine with species-specific allometric equations to estimate biomass and carbon content. The methodology incorporates procedures from CDM AR-Tool14 for woody biomass quantification.
Herbaceous vegetation monitoring requires different approaches due to seasonal variability and diverse growth forms. Monitoring protocols must account for seasonal patterns, species composition changes, and disturbance effects while providing reliable estimates of carbon stock changes.
Hydrological Monitoring
Hydrological conditions directly influence ecosystem function and carbon dynamics. Continuous monitoring of water levels documents changes in hydroperiod and water depth that affect both ecosystem restoration success and carbon sequestration rates.
Salinity monitoring tracks water chemistry changes that influence species composition and biogeochemical processes, particularly methane emissions. The methodology requires stratification by salinity conditions due to significant effects on greenhouse gas production rates.
Vegetation Community Monitoring
Vegetation monitoring documents changes in species composition, cover, and structural characteristics resulting from restoration activities. This monitoring validates restoration success, documents habitat improvements, and supports carbon stock change calculations.
Monitoring protocols must be appropriate for target ecosystem types and restoration objectives, incorporating quantitative sampling methods, qualitative condition assessments, and photographic documentation of temporal changes.
Verification Process Requirements
Independent verification provides objective assessment of project implementation and carbon performance. Verification bodies (VVBs) must possess expertise in both carbon accounting methodologies and wetland ecology to adequately evaluate project compliance.
Verification Scope and Activities
The verification process encompasses multiple assessment components:
Field Verification: On-site assessment of restoration implementation, monitoring equipment, and ecosystem conditions
Data Validation: Review of monitoring data quality, calculation procedures, and quality assurance measures
Methodology Compliance: Evaluation of project adherence to VM0033 requirements and procedures
Stakeholder Consultation: Interviews with project personnel, local communities, and relevant stakeholders
Verification Timeline
Initial validation occurs before credit issuance, confirming project design compliance with VM0033 requirements. Periodic verification throughout the crediting period validates ongoing performance and continued methodology compliance.
Quality Assurance Framework
VM0033 requires comprehensive quality assurance measures throughout the monitoring program:
Equipment Calibration: All monitoring equipment requires regular calibration and maintenance according to manufacturer specifications. GPS units, water level sensors, and laboratory equipment need documented calibration schedules.
Data Management Systems: Monitoring data must be stored in secure systems with backup procedures and clear chain of custody documentation. Data management systems must ensure long-term preservation while enabling independent verification access.
Personnel Training: Monitoring staff require training in standardized procedures to ensure consistency across time periods and personnel changes. Training documentation and competency verification are required.
Documentation Standards: All monitoring activities require detailed documentation including protocols, equipment specifications, environmental conditions, and quality control measures.
Implementation Challenges and Solutions
Site Access Limitations: Coastal wetland sites may be inaccessible during certain seasons or weather conditions. Monitoring programs require contingency plans and flexible scheduling to maintain data continuity.
Equipment Durability: Saltwater environments and extreme weather conditions can compromise monitoring equipment. Projects need maintenance schedules, backup equipment, and weather-resistant installations.
Natural System Variability: Wetland systems exhibit natural variation across multiple temporal scales. Monitoring programs must distinguish between natural fluctuations and restoration-induced changes through appropriate statistical approaches and baseline data collection.
Long-term Program Consistency: Multi-decade projects face inevitable personnel turnover. Detailed standard operating procedures, training programs, and institutional knowledge management systems help maintain monitoring consistency.
Guardian Integration: The platform supports monitoring through mobile data collection applications, external dMRV platform integrations, automated quality validation, and integrated verification workflows connecting project teams with verification bodies.
The comprehensive monitoring and verification requirements ensure that VM0033 projects deliver measurable, verifiable carbon benefits while maintaining the scientific rigor necessary for carbon market credibility. These requirements, while demanding, provide the foundation for scaling coastal ecosystem restoration through market-based mechanisms.
Methodology Relationships and Integration
VM0033 operates within an interconnected framework of environmental methodologies and standardized tools. The methodology builds upon established procedures while introducing innovative approaches specific to tidal wetland restoration. Understanding these relationships is essential for effective implementation and recognizing opportunities for cross-methodology integration.
CDM Tool Integration: VM0033 incorporates multiple CDM tools (AR-Tool02, AR-Tool03, AR-Tool14, AR-Tool05) that are available in Guardian's methodology library.
Foundation on CDM Tools
VM0033 leverages several Clean Development Mechanism (CDM) tools that provide standardized approaches for common carbon accounting challenges:
AR-Tool02 - Additionality Assessment: This combined tool provides the framework for VM0033's additionality demonstration, ensuring consistency with established approaches for proving that projects would not occur without carbon market incentives. The tool's structured approach helps project developers navigate complex additionality requirements while maintaining credibility with verification bodies.
AR-Tool03 - Statistical Sampling: This tool informs VM0033's approach to determining appropriate sample sizes for biomass and carbon stock measurements. It ensures monitoring programs achieve sufficient statistical power to detect meaningful changes while avoiding unnecessarily intensive sampling that could compromise project economics.
AR-Tool14 - Woody Biomass Quantification: VM0033 directly incorporates procedures from this tool for estimating carbon stocks and changes in trees and shrubs. This integration ensures consistency with established forestry carbon accounting while adapting to the unique challenges of wetland environments.
AR-Tool05 - Fossil Fuel Emissions: The methodology uses this tool to account for emissions from project implementation activities including equipment operation, transportation, and prescribed burning. This ensures comprehensive accounting of all significant emission sources in net benefit calculations.
VCS Methodology Relationships
VM0033's development built upon lessons from related VCS methodologies, particularly those addressing coastal and wetland ecosystems:
VM0024 - Coastal Wetland Creation: This earlier methodology provided important precedents for coastal ecosystem carbon dynamics, though VM0033 significantly expands the scope to include restoration activities and addresses a broader range of ecosystem types.
Cross-Methodology Learning: VM0033's approaches to addressing sea level rise, stakeholder engagement complexity, and ecosystem service integration provide models that inform development of other environmental methodologies.
VCS Module Integration
The methodology incorporates several VCS modules that provide standardized approaches for common implementation challenges:
VMD0005 - Wood Products: This module enables VM0033 projects to account for carbon storage in harvested wood products, recognizing that coastal forests may require strategic harvesting before tree mortality due to sea level rise impacts.
VMD0016 - Area Stratification: This module provides guidance for dividing project areas into homogeneous units for monitoring and accounting. It's particularly important for VM0033 given the high spatial variability in coastal ecosystems and requirements for stratification based on ecosystem type, soil characteristics, and restoration activities.
VMD0019 - Future Projections: This module supports VM0033's baseline scenario development, particularly for incorporating sea level rise impacts and long-term ecosystem trajectories. It provides standardized approaches for integrating climate change projections into baseline development.
VMD0052 - Wetland Additionality: Developed specifically to support VM0033 implementation, this module provides detailed guidance for demonstrating additionality in wetland restoration contexts where multiple benefits beyond carbon sequestration may motivate project development.
Scientific Literature Integration
VM0033 incorporates extensive scientific literature to inform default values, calculation procedures, and monitoring approaches. The methodology references peer-reviewed studies to ensure carbon accounting reflects current scientific understanding of wetland carbon dynamics.
The approach balances scientific rigor with practical implementation requirements. Default values and procedures are based on comprehensive literature reviews but designed to be conservative and applicable across diverse geographic and ecological contexts.
Regulatory Framework Coordination
VM0033's relationship with regulatory frameworks varies by jurisdiction but often involves coordination with existing wetland protection and restoration programs. Many jurisdictions have established wetland conservation policies that may complement or conflict with carbon market objectives.
The methodology anticipates integration with existing environmental monitoring and reporting systems, recognizing that many restoration projects occur within broader environmental management programs. This integration can reduce monitoring costs and improve data quality while ensuring compliance with multiple regulatory requirements.
International Framework Alignment
VM0033 aligns with several international environmental frameworks:
Ramsar Convention on Wetlands: The methodology supports wetland conservation objectives while providing economic incentives for restoration.
Convention on Biological Diversity: VM0033 projects often deliver biodiversity co-benefits that support national biodiversity strategies.
UNFCCC: The methodology contributes to national climate commitments while providing practical implementation tools at project scales.
Innovation and Contribution
VM0033 contributes several innovations to the broader methodology landscape:
Temporal Boundary Concepts: The Peat Depletion Time (PDT) and Soil Organic Carbon Depletion Time (SDT) concepts provide practical approaches to addressing long-term carbon dynamics that may be applicable to other ecosystem types.
Sea Level Rise Integration: The methodology's systematic approach to incorporating climate change impacts provides a model for other methodologies addressing climate-vulnerable ecosystems.
Comprehensive GHG Accounting: VM0033's integration of multiple greenhouse gases and carbon pools provides a model for comprehensive carbon accounting that addresses the full range of climate impacts from ecosystem management.
Guardian Platform Integration
Understanding VM0033's methodology relationships provides essential context for Guardian platform implementation. The platform's modular architecture enables reuse of common tools and procedures across multiple methodologies while maintaining specific requirements for each methodology.
Cross-methodology references and shared calculation procedures must be reflected in policy workflows that can accommodate the interconnected nature of environmental methodologies. This integration capability is crucial for scaling environmental asset tokenization across diverse project types and geographic contexts.
The methodology's sophisticated integration requirements demonstrate both the challenges and opportunities in environmental asset digitization, where complex ecological and regulatory systems must be translated into automated workflows that maintain scientific rigor while enabling efficient implementation and verification.
Preparing for Guardian Implementation
With this deep understanding of VM0033's requirements, stakeholders, and processes, you're now prepared to explore how Guardian's technical architecture can accommodate this methodology's complexity. The platform's Policy Workflow Engine must handle VM0033's sophisticated temporal boundaries, multi-stakeholder processes, and comprehensive monitoring requirements.
Key implementation considerations include:
Workflow Complexity: VM0033's multiple project activity types and stakeholder roles require flexible workflow designs that can accommodate diverse restoration approaches while maintaining consistent carbon accounting standards.
Data Management: The methodology's extensive monitoring requirements necessitate robust data collection, validation, and storage systems that can handle long-term datasets with high spatial and temporal resolution.
Calculation Engines: VM0033's sophisticated carbon accounting procedures, including PDT and SDT calculations, require automated calculation engines that can handle complex biogeochemical models while maintaining transparency and auditability.
Integration Capabilities: The methodology's relationships with CDM tools and other VCS methodologies require platform capabilities for cross-methodology integration and shared calculation procedures.
Related Resources
- Complete parsed methodology document
- Working test scenarios with real project data
- Real Allcot project calculations
- Complete validation tools and reference materials
Key Concepts Covered
VM0033 scope and applicability conditions
Baseline scenarios and project activities
Complex stakeholder ecosystem requirements
Carbon pools and emission sources
Domain Knowledge Complete: You now understand VM0033's complexity and requirements. Chapter 3 will show how Guardian's architecture handles this complexity through automated workflows.
All the content in this chapter - including technical details, calculation procedures, and requirements referenced are derived from the actual VM0033 methodology document to ensure accuracy and completeness.
Chapter 14: Guardian Workflow Blocks and Configuration
Step-by-step configuration of Guardian's workflow blocks for complete methodology automation
Chapter 13 introduced Guardian's block-event architecture. Chapter 14 gets hands-on, showing you how to configure each workflow block type using real examples from VM0033's production policy.
Guardian provides over 25 workflow blocks, each serving specific purposes in methodology automation. Rather than memorizing every block parameter, this chapter teaches you configuration patterns that apply across different block types.
Configuration Fundamentals
Block Configuration Methods
Guardian offers three ways to configure workflow blocks:
Properties Tab: Visual interface for common settings
Events Tab: Graphical event connection management
JSON Tab: Direct JSON manipulation for advanced configurations
Block Structure Basics
Every Guardian workflow block follows a similar JSON structure:
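A minimal sketch of that shared structure (the block type and tag here are illustrative; Guardian generates the id automatically):

{
  "id": "c0a80121-...",                 // Unique identifier, auto-generated by Guardian
  "blockType": "interfaceContainerBlock",
  "tag": "my_container",                // Human-readable name referenced by events
  "permissions": ["OWNER"],             // Roles allowed to access this block
  "defaultActive": true,
  "children": [],                       // Nested blocks
  "events": []                          // Connections to other blocks
}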
Key Configuration Elements:
id: Unique identifier (Guardian auto-generates)
blockType: Defines block functionality
tag: Human-readable name for referencing in events
permissions: Which roles can access this block
Permission Patterns
Guardian uses role-based permissions consistently across blocks:
["OWNER"]: Standard Registry only
["Project_Proponent"]: Project Developers only
["VVB"]: Validation/Verification Bodies only
Data Input and Management Blocks
These blocks handle document collection, storage, and display.
requestVcDocumentBlock: Schema-Based Forms
Transforms your Part III schemas into interactive forms. VM0033 uses this for PDD and monitoring report submission.
Basic Configuration:
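A minimal configuration sketch, assuming a placeholder schema UUID rather than VM0033's actual value:

{
  "blockType": "requestVcDocumentBlock",
  "tag": "new_project",
  "permissions": ["Project_Proponent"],
  "schema": "#your-pdd-schema-uuid",    // Drives automatic form generation
  "idType": "UUID",
  "uiMetaData": {
    "type": "dialog",                   // Render as a modal dialog
    "dialogContent": "New project"
  }
}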
VM0033 Example - PDD Submission: The VM0033 policy uses the new_project block to collect Project Design Documents. The schemaId references the PDD schema, automatically generating form fields for:
Project details and location
Baseline emission calculations
Project emission parameters
Monitoring plan specifications
Your Implementation: Replace VM0033's schema ID with your Part III PDD schema UUID. Guardian automatically creates form fields based on your schema structure.
sendToGuardianBlock: Document Storage
Stores submitted documents to database or Hedera blockchain with status tracking.
Database Storage Configuration:
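A sketch of the database variant (tag and status value are illustrative):

{
  "blockType": "sendToGuardianBlock",
  "tag": "save_project_db",
  "dataSource": "database",             // Keep working documents off-chain
  "documentType": "vc",
  "entityType": "project",
  "options": [
    { "name": "status", "value": "Waiting to be Added" }   // Status other blocks filter on
  ]
}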
Hedera Blockchain Storage Configuration:
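And a corresponding sketch for Hedera storage (the topic name is illustrative):

{
  "blockType": "sendToGuardianBlock",
  "tag": "save_project_hedera",
  "dataSource": "hedera",               // Anchor the final document on-chain
  "documentType": "vc",
  "entityType": "project",
  "topic": "Project"                    // Hedera topic for storage organization
}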
Storage Options Comparison:

| Feature | Database | Hedera |
| --- | --- | --- |
| Usage | Working documents and status tracking | Final, immutable records |
Status Management: The options array sets document status values that other blocks can filter on. For example:
"Waiting to be Added": New submissions awaiting registry review
"Waiting for Validation": Projects ready for VVB assignment
"Validated": Successfully validated projects
Key Configuration Fields:
entityType: Groups related documents (e.g., "project", "report")
topic: Hedera topic used to organize blockchain storage; helpful when querying through the Guardian Indexer
documentType: "vc" for Verifiable Credentials, "vp" for Verifiable Presentations
interfaceDocumentsSourceBlock: Document Grids
Displays document collections with filtering, search, and action buttons. Works with child documentsSourceAddon blocks to provide data.
VM0033 Real Configuration - Project Pipeline:
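A simplified sketch of such a grid (tags and titles are placeholders; the field paths follow the patterns used elsewhere in this guide):

{
  "blockType": "interfaceDocumentsSourceBlock",
  "tag": "projects_grid",
  "permissions": ["OWNER"],
  "uiMetaData": {
    "fields": [
      { "name": "document.issuer", "title": "Owner", "type": "text" },
      { "name": "document.credentialSubject.0.field0", "title": "Project Name", "type": "text" },
      { "name": "option.status", "title": "Status", "type": "text" },
      { "title": "Operation", "type": "block", "bindBlock": "approve_btn", "bindGroup": "projects_to_approve" }
    ]
  },
  "children": []    // documentsSourceAddon blocks that supply the documents
}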
Key Configuration Properties:
uiMetaData.fields: Array defining grid columns and their properties
dataType: Handled by child documentsSourceAddon blocks
bindBlock: References another block (buttonBlock) to embed in the column
bindGroup: Links a grid column to a specific child documentsSourceAddon data source
Field Type Details:
Text Fields:
Button Fields:
Block Fields (for embedded buttons):
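As a sketch of the three field types above (names and bindings are illustrative):

// Text field - displays a value read from the document by path
{ "name": "document.credentialSubject.0.field0", "title": "Project Name", "type": "text" }

// Button field - renders a document action (e.g., open/view) as a button
{ "name": "document", "title": "Document", "type": "button" }

// Block field - embeds another block, such as approve/reject buttons
{ "title": "Operation", "type": "block", "bindBlock": "approve_btn", "bindGroup": "projects_to_approve" }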
Required Child Blocks: interfaceDocumentsSourceBlock must have child documentsSourceAddon blocks that provide the actual data. The bindGroup property links specific columns to specific data sources.
Logic and Calculation Blocks
These blocks process data, validate inputs, and execute methodology calculations.
customLogicBlock: Calculation Engine
Executes JavaScript or Python for emission reduction calculations using schema field data.
VM0033 Real Configuration:
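The configuration follows this general shape (a sketch; the expression body is abbreviated):

{
  "blockType": "customLogicBlock",
  "tag": "automatic_report",
  "permissions": ["Project_Proponent"],
  "defaultActive": true,
  "onErrorAction": "no-action",
  "expression": "(function calc() { /* emission reduction calculations */ })"
}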
Key Configuration Properties:
expression: JavaScript or Python code as a string
permissions: Which roles can trigger the calculation
defaultActive: Whether the block executes automatically
onErrorAction: How to handle calculation errors
VM0033 JavaScript Example:
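A simplified sketch of the pattern (field names are illustrative, not VM0033's actual schema fields):

(function calc() {
  const documents = arguments[0] || [];
  const doc = documents[0].document.credentialSubject[0];

  // Schema fields are read directly as variables
  const baselineEmissions = doc.baseline_emissions ?? 0;
  const projectEmissions = doc.project_emissions ?? 0;

  // The result becomes a new field that downstream blocks can reference
  doc.net_emission_reductions = baselineEmissions - projectEmissions;

  done(doc);
})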
Your Implementation: Use your Part III schema field names as JavaScript variables. The calculation result creates new document fields accessible by other blocks.
documentValidatorBlock: Data Validation
Validates documents against methodology rules beyond basic schema validation.
Configuration Pattern:
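A sketch of the general shape, assuming the condition format shown here (verify exact option names against Guardian's documentation):

{
  "blockType": "documentValidatorBlock",
  "tag": "validate_report",
  "documentType": "vc-document",
  "checkSchema": "#your-monitoring-schema-uuid",
  "conditions": [
    { "type": "Equal", "field": "option.status", "value": "Verified" }
  ]
}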
Validation Rules:
Field value comparisons (>=, <=, ==, !=)
Cross-field validation (one field depends on another)
Date range checking for monitoring periods
switchBlock: Conditional Branching
Creates different workflow paths based on data values or user decisions.
Configuration Pattern:
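A sketch of the branching configuration, assuming expression-style conditions (tags and status values are illustrative):

{
  "blockType": "switchBlock",
  "tag": "validation_decision",
  "executionFlow": "firstTrue",           // Take the first matching branch
  "conditions": [
    { "tag": "Condition_approved", "type": "equal", "value": "option.status == 'Validated'" },
    { "tag": "Condition_rejected", "type": "equal", "value": "option.status == 'Rejected'" }
  ]
}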
VM0033 Usage: VVB validation decisions create different paths:
Approved: Project proceeds to monitoring phase
Rejected: Project returns to developer for revision
Conditional Approval: Project requires minor corrections
Token and Asset Management Blocks
These blocks handle carbon credit lifecycle from calculation to retirement.
mintDocumentBlock: Token Issuance
Issues VCU tokens based on verified emission reduction calculations.
VM0033 Real Configuration:
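The configuration takes this shape (the token UUID is a placeholder; the rule path matches the VM0033 field described below):

{
  "blockType": "mintDocumentBlock",
  "tag": "mint_token",
  "permissions": ["OWNER"],
  "tokenId": "your-token-template-uuid",
  "rule": "net_GHG_emissions_reductions_and_removals.NERRWE",
  "accountType": "default"
}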
Key Configuration Properties:
rule: JSON path to calculated emission reduction value (without "document.credentialSubject.0." prefix)
tokenId: UUID of the token template defined in policy configuration
accountType: Account that receives the minted tokens; "default" uses the Hedera account associated with the document owner
Token Template Reference: The tokenId must match a token defined in the policy's policyTokens array:
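A sketch of such an entry (field names approximate Guardian's token template format and should be verified against your policy file):

"policyTokens": [
  {
    "templateTokenTag": "your-token-template-uuid",   // Must match mintDocumentBlock's tokenId
    "tokenName": "Verified Carbon Unit",
    "tokenSymbol": "VCU",
    "tokenType": "fungible",
    "decimals": "2"
  }
]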
VM0033 Integration: VM0033 uses automatic_report customLogicBlock to calculate emission reductions, which outputs the net_GHG_emissions_reductions_and_removals.NERRWE field that the mint block references.
tokenActionBlock: Token Operations
Handles token transfers, retirements, and account management.
Configuration Pattern:
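A sketch using the action values listed below (tag and token reference are placeholders):

{
  "blockType": "tokenActionBlock",
  "tag": "freeze_tokens",
  "permissions": ["OWNER"],
  "tokenId": "your-token-template-uuid",
  "accountType": "default",
  "action": "freeze"                    // Or "transfer", "unfreeze", etc.
}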
Available Actions:
"transfer": Move tokens between accounts
"freeze": Temporarily lock tokens
"unfreeze": Unlock frozen tokens
retirementDocumentBlock: Permanent Token Removal
Permanently removes tokens from circulation with retirement certificates.
Configuration Pattern:
Container and Navigation Blocks
These blocks organize user interfaces and manage workflow progression.
interfaceContainerBlock: Layout Organization
Creates tabbed layouts or a simple vertical layout for organizing workflow interfaces.
Tab Container Pattern:
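A sketch modeled on VM0033's role headers (child tags are illustrative):

{
  "blockType": "interfaceContainerBlock",
  "tag": "Verra_header",
  "permissions": ["OWNER"],
  "uiMetaData": { "type": "tabs" },     // "blank" produces a simple vertical layout
  "children": [
    { "blockType": "interfaceContainerBlock", "tag": "approve_VVB", "uiMetaData": { "type": "blank" } },
    { "blockType": "interfaceContainerBlock", "tag": "projects_pipeline", "uiMetaData": { "type": "blank" } }
  ]
}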
policyRolesBlock: Role Assignment
Manages user role selection and assignment within policies.
Configuration Pattern:
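A sketch of role assignment (role names follow this guide's conventions; the NO_ROLE permission targets users who have not yet selected a role):

{
  "blockType": "policyRolesBlock",
  "tag": "choose_role",
  "permissions": ["NO_ROLE"],
  "roles": ["Project_Proponent", "VVB"],
  "uiMetaData": { "title": "Registration", "description": "Choose your role" }
}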
buttonBlock: Custom Actions
Creates buttons for state transitions and custom workflow actions. Used for approve/reject decisions with optional dialogs.
VM0033 Real Configuration - Approve/Reject Buttons:
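A sketch reflecting the button properties described below; the status values mirror the VVB workflow analyzed later in this guide:

{
  "blockType": "buttonBlock",
  "tag": "approve_documents_btn",
  "permissions": ["OWNER"],
  "uiMetaData": {
    "buttons": [
      {
        "tag": "Button_0",
        "name": "Approve",
        "type": "selector",               // Direct action
        "field": "option.status",
        "value": "APPROVED",
        "uiClass": "btn-approve"
      },
      {
        "tag": "Button_1",
        "name": "Reject",
        "type": "selector-dialog",        // Opens a dialog for a reject reason
        "field": "option.status",
        "value": "REJECTED",
        "uiClass": "btn-reject"
      }
    ]
  }
}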
Button Types:
selector: Simple button that sets a field value
selector-dialog: Button with confirmation dialog for additional input
Button Configuration Properties:
tag: Button identifier for event configuration (Button_0, Button_1, etc.)
field: Document field to modify (typically "option.status")
value: Value to set when button is clicked
uiClass: CSS class for styling (btn-approve, btn-reject, etc.)
VM0033 Event Integration:
Each button output (Button_0, Button_1) can trigger different target blocks, allowing different workflows based on which button is clicked.
Event Configuration Patterns
Events connect blocks together, creating automated workflows. Guardian provides both graphical and JSON-based event configuration.
Visual Event Configuration
The Events tab provides an intuitive interface for connecting blocks:
Event Configuration Fields:
Event Type: Output Event (triggers when block completes)
Source: Current Block (the triggering block)
Output Event: RunEvent (completion trigger)
Target: Next Block (destination block)
Basic Event Structure
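In policy JSON, each connection is an entry in a block's events array. A sketch with illustrative tags:

"events": [
  {
    "source": "new_project",          // Tag of the triggering block
    "output": "RunEvent",             // Event the source emits on completion
    "target": "save_project_db",      // Tag of the destination block
    "input": "RunEvent",              // Event the target consumes
    "actor": "",
    "disabled": false
  }
]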
Common Event Patterns
Document Submission Flow:
UI Refresh After Save:
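Sketches of both patterns, reusing the illustrative tags from earlier examples:

// Document submission flow: form -> storage -> calculation
{ "source": "new_project", "output": "RunEvent", "target": "save_project_db", "input": "RunEvent" }
{ "source": "save_project_db", "output": "RunEvent", "target": "automatic_report", "input": "RunEvent" }

// UI refresh after save: re-render the document grid once storage completes
{ "source": "save_project_db", "output": "RunEvent", "target": "projects_grid", "input": "RefreshEvent" }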
Advanced Block Configuration
Dynamic Filtering with filtersAddon
Creates dynamic document filters based on status, date, or custom criteria.
VM0033 Real Configuration:
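A sketch of the filter configuration (tag and field are illustrative):

{
  "blockType": "filtersAddon",
  "tag": "status_filter",
  "type": "dropdown",                  // Or "text" for free-form input
  "field": "option.status",            // Document field the filter applies to
  "queryType": "equal",
  "canBeEmpty": true                   // Allow "no filter selected"
}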
Key Configuration Properties:
type: Filter UI type - "dropdown" for select options, "text" for input fields
queryType: Filter logic - "equal", "not_equal", "contains", etc.
Document Data Source with documentsSourceAddon
Provides filtered document collections to interfaceDocumentsSourceBlock parent containers.
VM0033 Real Configuration:
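A sketch of a data source addon (schema UUID and status value are placeholders):

{
  "blockType": "documentsSourceAddon",
  "tag": "projects_to_approve",
  "dataType": "vc-documents",
  "schema": "#your-pdd-schema-uuid",
  "onlyOwnDocuments": false,
  "filters": [
    { "field": "type", "type": "equal", "value": "project" },
    { "field": "option.status", "type": "equal", "value": "Waiting to be Added" }
  ]
}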
Key Configuration Properties:
dataType: Document type - "vc-documents" for Verifiable Credentials, "vp-documents" for Verifiable Presentations
schema: Schema UUID to filter documents by
filters: Array of filter conditions to apply to document collection
As a general permission guideline, restrict administrative functions to the Standard Registry only (["OWNER"]).
Error Handling
Include validation and error handling blocks:
Pre-validation before expensive operations
Clear error messages for user guidance
Fallback paths for edge cases
Performance Optimization
Optimize for user experience:
Use onlyOwnDocuments: true for large document sets
Implement pagination for document grids
Cache calculation results where appropriate
Testing Your Block Configuration
Configuration Validation
Test block configurations incrementally using Guardian's policy editor:
Individual Block Testing: Configure each block using Properties tab, verify JSON structure
Event Chain Testing: Use Events tab to connect blocks, test trigger flows
Role Permission Testing: Switch user roles to verify permission restrictions
Data Flow Testing: Submit test data through complete workflows using policy dry runs
Guardian UI Testing Tips:
Properties Tab: Quick validation of basic settings and permissions
JSON Tab: Verify complex configurations and nested structures
Events Tab: Visual verification of workflow connections and event flows
Policy Preview: Test complete workflows before publishing
Common Configuration Issues
Schema Reference Errors:
Verify schema UUIDs match your Part III schemas
Check field path references in grids and calculations
Permission Problems:
Ensure users have appropriate roles assigned
Check onlyOwnDocuments settings for document visibility
Event Connection Issues:
Verify source and target block tags match exactly
Check event input/output types are compatible
Integration with Part III Schemas
Schema Field Mapping
Your Part III schemas become form fields and calculation variables:
PDD Schema → Form Fields:
Schema Field: "project_title" → Form Input: Text field with validation
Schema Field: "baseline_emissions" → Form Input: Number field with units
Schema Field: "monitoring_frequency" → Form Input: Dropdown selection
Monitoring Schema → Calculation Variables:
Validation Rule Integration
Schema validation rules automatically apply to requestVcDocumentBlock forms:
Required fields become mandatory
Number ranges enforce min/max values
Pattern validation ensures data format consistency
Enum values create dropdown selections
Next Steps and Chapter 15 Preview
Chapter 14 covered Guardian's workflow blocks and configuration patterns. You now understand how to:
Configure data input blocks with your Part III schemas
Set up calculation blocks for emission reduction formulas
Create token management workflows for VCU issuance
Design user interfaces with container and navigation blocks
Chapter 15 Deep Dive: Now that you understand individual blocks, Chapter 15 analyzes VM0033's complete policy implementation, showing how these blocks work together in a production methodology. You'll trace the complete workflow from PDD submission to VCU token issuance, understanding real-world policy patterns.
Schema Field: "project_title" → Form Input: Text field with validation
Schema Field: "baseline_emissions" → Form Input: Number field with units
Schema Field: "monitoring_frequency" → Form Input: Dropdown selection
Complete end-to-end analysis of VM0033 tidal wetland restoration policy implementation in Guardian
Chapter 14 covered individual workflow blocks. Chapter 15 dissects VM0033's complete policy implementation, showing how blocks connect into multi-stakeholder certification workflows that automate the entire lifecycle from project submission to VCU token issuance.
VM0033 represents Guardian's most advanced and production-ready methodology implementation, featuring complex emission calculations, multi-role workflows, and state management across the complete credit certification process.
Let's examine VM0033's VVB approval workflow as our first detailed use case. This workflow demonstrates how Guardian's interfaceDocumentsSourceBlock, documentsSourceAddon, buttonBlock, and status update mechanisms work together to create an advanced approval system.
The VVB Approval Interface Architecture
When a Standard Registry (OWNER) needs to approve VVB registrations, VM0033 creates an interface with three different document states and interactive approval controls.
1. Main Interface Block Configuration
The VVB approval interface starts with an interfaceDocumentsSourceBlock that defines the UI layout:
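A simplified sketch of that layout block (tags are placeholders; the column paths are the ones explained below):

{
  "blockType": "interfaceDocumentsSourceBlock",
  "tag": "vvb_grid",
  "permissions": ["OWNER"],
  "uiMetaData": {
    "fields": [
      { "name": "document.issuer", "title": "Owner", "type": "text" },
      { "name": "document.credentialSubject.0.field0", "title": "Organization", "type": "text" },
      { "title": "Operation", "type": "block", "bindBlock": "approve_vvb_btn", "bindGroup": "vvb_to_approve" }
    ]
  }
}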
How This Creates the UI:
Owner Column: Shows the DID of who submitted the VVB registration (document.issuer)
Text Column: Displays the first field from the VVB registration form (document.credentialSubject.0.field0)
2. Document Source Configuration - The Filtering Engine
Three separate documentsSourceAddon blocks are used to populate VVB documents with different statuses in the same interface:
A. Documents Waiting for Approval:
B. Approved Documents:
C. Rejected Documents:
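Condensed sketches of the three addons (the schema UUID is the one noted below; tags are illustrative):

// A. Waiting for approval - visible by default, paired with Approve/Reject buttons
{ "blockType": "documentsSourceAddon", "tag": "vvb_to_approve", "defaultActive": true,
  "dataType": "vc-documents", "schema": "#41db8188-04c1-4f57-b73e-4b7d2efc797c",
  "filters": [
    { "field": "type", "type": "equal", "value": "vvb" },
    { "field": "option.status", "type": "equal", "value": "Waiting for Approval" }
  ] }

// B. Approved - the document type changes on approval
{ "blockType": "documentsSourceAddon", "tag": "vvb_approved",
  "dataType": "vc-documents", "schema": "#41db8188-04c1-4f57-b73e-4b7d2efc797c",
  "filters": [ { "field": "type", "type": "equal", "value": "approved_vvb" } ] }

// C. Rejected
{ "blockType": "documentsSourceAddon", "tag": "vvb_rejected",
  "dataType": "vc-documents", "schema": "#41db8188-04c1-4f57-b73e-4b7d2efc797c",
  "filters": [ { "field": "type", "type": "equal", "value": "rejected_vvb" } ] }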
How the Filtering Works:
Multiple Filters = AND Logic: type = "vvb" AND option.status = "Waiting for Approval"
defaultActive: Only "waiting for approval" shows immediately (defaultActive: true), others show when status changes
Schema Filtering: All use the same VVB registration schema (#41db8188-04c1-4f57-b73e-4b7d2efc797c)
3. Button Workflow Implementation
The approval buttons are defined in a separate buttonBlock that gets embedded in the interface:
Button Behavior Differences:
Approve Button (Button_0):
Type: "selector" = direct action
Sets option.status = "APPROVED" immediately
4. Status Update Processing
When buttons are clicked, Guardian routes events to status update blocks:
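A sketch of the wiring (tags follow the names used in this walkthrough; the update block's exact options are illustrative):

// Button_0 (Approve) routes to a status update block
{ "source": "approve_vvb_btn", "output": "Button_0",
  "target": "update_approve_document_status", "input": "RunEvent" }

// The update block persists the changed status back to storage
{ "blockType": "sendToGuardianBlock", "tag": "update_approve_document_status",
  "dataSource": "database", "documentType": "vc",
  "options": [ { "name": "status", "value": "APPROVED" } ] }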
What Happens During Status Update:
Button Click: User clicks "Approve" or "Reject"
Event Trigger: Button emits Button_0 or Button_1 events
Event Routing: Guardian routes to corresponding update_approve_document_status block
Complete VVB Approval Flow Summary
Initial State:
VVB submits registration → Document created with type: "vvb", option.status: "Waiting for Approval"
Document appears in "documents to approve" filter with Approve/Reject buttons
Approval Flow:
OWNER clicks "Approve" → option.status changes to "APPROVED" → Document type changes to "approved_vvb"
Document disappears from "waiting for approval" and appears in "approved documents" with Revoke button
Rejection Flow:
OWNER clicks "Reject" → Dialog opens for reason → option.status changes to "REJECTED" → Document type changes to "rejected_vvb"
Document disappears from "waiting for approval" and appears in "rejected documents" section
This was one simple example of how Guardian's block system can create powerful, multi-state workflows with automatic UI updates and proper audit trails.
Use Case 2: Project Submission and Calculation Workflow Deep Dive
Let's examine how Project_Proponents submit PDDs and how VM0033 processes them through form generation, data storage, and calculation integration. This workflow showcases Guardian's ability to transform schemas into working forms and process complex scientific data.
The Project Submission Architecture
When Project_Proponents create new projects, VM0033 transforms your Part III PDD schema into a working form, processes the submission through automated calculations, and stores the results for validation workflows.
1. Project Submission Form Block
The project submission starts with a requestVcDocumentBlock that generates forms from schema:
How This Creates the Project Form:
Schema Integration: Guardian reads the PDD schema (#55df4f18-d3e5-4b93-af87-703a52c704d6) from Part III and automatically generates form fields
Dialog Type: Opens as modal dialog (type: "dialog") with title "New project"
JavaScript processes all emission calculations using VM0033 methodology formulas
Calculated results added to original document structure
Step 5: Document Storage
Enhanced document (original + calculations) stored in database
Document ready for validation assignment and approval workflows
Later moved to Hedera after validation approval
Key Technical Insights:
Schema-to-Form Integration: Guardian automatically creates complex forms from JSON Schema definitions, eliminating manual UI development
Event-Driven Processing: Form submission triggers calculation workflows through Guardian's event system, enabling sophisticated processing chains
Cost-Optimized Storage: Working documents in database, final documents on blockchain optimizes cost while maintaining audit integrity
Data Enhancement
This demonstrates how VM0033 transforms simple form submissions into scientifically processed project documents ready for carbon credit certification workflows.
OWNER (Standard Registry) Role Workflow
The OWNER represents the Standard Registry (Verra) and manages the overall certification program. VM0033 implements their workflow through a tabbed interface that organizes different operational areas.
Verra Header Structure
The OWNER interface uses VM0033's Verra_header container that creates a tabbed navigation system:
OWNER Navigation Tabs:
Approve VVB: VVB registration management (detailed in Use Case 1)
Projects Pipeline: Project listing and status management
Monitoring Reports: Report review and approval workflows
Validation & Verification: Oversight of VVB activities
1. VVB Management (approve_VVB)
VVB Approval Workflow: Detailed in Use Case 1, this section manages the complete VVB lifecycle from registration through approval and ongoing management.
2. Project Pipeline Management
Project Status Oversight: OWNER reviews all project submissions, approvals, and workflow progression across all Project_Proponents.
3. Monitoring Reports Review
Report Validation: OWNER has oversight access to all monitoring reports and can review calculation accuracy and methodology compliance.
4. Validation & Verification Oversight
VVB Performance Monitoring: OWNER tracks VVB validation and verification activities, ensuring quality and compliance across all assignments.
5. Token Management and Trust Chains
VCU Issuance Control: OWNER controls final token minting decisions and maintains complete audit trails for all issued carbon credits.
Project_Proponent Role Workflow
The Project_Proponent drives the main certification workflow from project creation through monitoring report submission. VM0033 policy provides them with a dedicated header container and navigation structure.
Project_Proponent Header Structure
The Project_Proponent interface uses VM0033's Project_Proponent_header container:
Project_Proponent Navigation Structure:
Projects: Project creation and management (Projects_pp)
Communication: Receives feedback and requests for additional information
6. Token Management
VCU Receipt: Final step where Project_Proponent receives issued carbon credits
Token Display: Shows minted VCUs with quantity and metadata
Transfer Capability: Can transfer or retire tokens as needed
Audit Trail: Complete history from project submission to token receipt
VVB Role Workflow
VVBs provide independent validation and verification services. VM0033 policy structures their workflow through a dedicated header container with role-specific navigation.
VVB Header Structure
The VVB interface uses VM0033's VVB_Header container:
VVB Navigation Structure:
VVB Documents: Registration and credential management
Report Submission: VVBs submit detailed validation and verification reports
Validation Reports: Document project eligibility and methodology compliance
Verification Reports: Confirm monitoring data accuracy and calculations
Status Updates: Reports trigger project status changes upon submission
6. Minting Events Participation
Token Issuance: VVBs participate in final token minting decisions
Final Review: Last verification before token issuance
Minting Approval: Confirm readiness for VCU generation
Audit Trail: Complete validation/verification history attached to tokens
End-to-End Workflow Integration
VM0033's real power emerges from connecting individual role workflows into seamless automation. Here's how documents flow through the complete certification process:
Enhanced documents contain both original data and calculated results
Consistent calculation logic across project and monitoring phases
This end-to-end integration creates a seamless experience where stakeholders focus on their expertise while Guardian handles workflow coordination, document routing, and audit trail generation automatically.
Key Implementation Takeaways
1. Role-Based Interface Design
VM0033 succeeds through clear separation of stakeholder interfaces. Each role sees only relevant documents and actions, reducing complexity while maintaining complete audit trails.
2. Document Lifecycle Management
Status-driven filtering automatically routes documents to appropriate stakeholders at each certification stage, eliminating manual coordination overhead.
3. Schema-Driven Development
Form generation from JSON schemas enables rapid methodology adaptation while ensuring data consistency across all workflow stages.
4. Event-Driven Architecture
Guardian's event system coordinates between roles without tight coupling, enabling flexible workflow modifications and easy extension for additional stakeholder types.
5. Cost-Optimized Blockchain Integration
Strategic use of database storage for working documents and Hedera storage for final records optimizes costs while maintaining audit integrity.
Practical Implementation Guidance
For Your Methodology Implementation:
1. Start with VM0033 as Foundation: Import VM0033.policy, replace schemas with your Part III designs, then modify role workflows and calculation logic.
2. Map Stakeholder Workflows First: Define your specific stakeholder roles and their document review processes before implementing detailed block configurations.
3. Design Status Progression: Plan document status values and transitions to drive automatic workflow routing between stakeholder roles.
4. Implement Role Sections: Create navigation sections for each stakeholder role, ensuring users see only relevant documents and actions.
5. Test Complete Workflows: Validate end-to-end document flows from initial submission through final token issuance with realistic test data.
Advanced Implementation Patterns
Navigation Structure Implementation
VM0033's navigation structure from the policy configuration drives the role-based interface organization:
Navigation Level System:
Level 1: Primary navigation tabs (main sections)
Level 2: Sub-sections within primary tabs
Block Mapping: Each navigation item maps to specific workflow blocks
Container Block Hierarchy
VM0033's container organization creates the role-based workflow structure:
Role-Based Project Interface Implementation
Each role sees different views of the same project data through permission-based filtering:
Project_Proponent Project View:
VVB Project View:
Key Interface Differences:
Project_Proponent: Shows assign button, status text, focuses on project management
VVB: Shows operation buttons for approval/rejection actions
OWNER: Shows all projects with administrative oversight capabilities
Document Filtering and Status Management
VM0033 uses advanced filtering to show role-appropriate documents:
Project_Proponent Filter (Own Documents Only):
VVB Filter (Assigned Documents):
OWNER Filter (All Documents):
Status Progression Management:
VM0033 manages document status through workflow stages:
Project_Proponent Submission: "Waiting to be Added"
OWNER Approval: "Approved for Assignment"
VVB Assignment: "Assigned for Validation"
Each status change triggers automatic document filtering updates across all user interfaces.
Token Management Implementation
VM0033's token management connects calculation results to VCU issuance:
Token Minting Process:
Calculation Completion: customLogicBlock calculates final emission reductions
Token Minting: VCUs issued based on calculated emission reductions
Summary: VM0033 Policy Implementation
Chapter 15 demonstrated how VM0033 transforms Guardian's block system into production-ready multi-stakeholder workflows. Through detailed analysis of VVB approval workflows, project submission processes, and role-based interfaces, we examined how JSON configurations create working certification systems.
Key Technical Achievements:
Role-Based Architecture: Each stakeholder (OWNER, Project_Proponent, VVB) receives tailored interfaces with appropriate permissions and document filtering
Event-Driven Coordination: Button clicks trigger status updates that automatically refresh filtered document views across all user interfaces
Schema-Driven Form Generation: Part III schemas automatically generate working forms with calculation integration
Cost-Optimized Storage: Database storage for working documents combined with Hedera anchoring for final records
VM0033 policy demonstrates Guardian's ability to implement complex environmental methodologies as automated workflows. The policy serves as both a working carbon credit system and a template for implementing other methodologies using similar patterns.
Implementation Readiness: VM0033's patterns directly apply to your methodology implementation. The role structures, document filtering, and workflow coordination patterns adapt to different stakeholder arrangements and certification requirements.
Next Steps: Chapter 16 explores advanced policy patterns including multi-methodology support, external data integration, and production optimization techniques using VM0033's proven implementation as a foundation.
Prerequisites Check: Ensure you have:
Completed Chapters 13-14 (Policy architecture and block configuration)
Access to VM0033.policy file for hands-on analysis
Understanding of your methodology's stakeholder workflow requirements
Part III schemas ready for integration
Time Investment: ~45 minutes reading + ~120 minutes hands-on VM0033 analysis and workflow tracing
Practical Exercises:
VM0033 Workflow Tracing: Follow a complete project lifecycle through VM0033's policy editor
Calculation Analysis: Examine VM0033's emission calculation engine and map to your methodology
Role Simulation: Test VM0033 workflows from each stakeholder perspective (OWNER, Project_Proponent, VVB)
Converting methodology equations into executable code using Guardian's customLogicBlock
This chapter teaches you how to implement methodology calculations as working code that produces accurate emission reductions or removals. You'll learn to translate VM0033's mathematical formulas into executable functions, using the ABC Mangrove project's real-world data artifact as your validation benchmark. By the end, you'll write code that transforms methodology equations into verified carbon credit calculations.
Learning Objectives
After completing this chapter, you will be able to:
Translate methodology equations into executable JavaScript or Python code
Implement formulas for baseline emissions, project emissions, and net emission reductions
Process monitoring data through mathematical models defined in VM0033 methodology
Validate equation implementations against Allcot test artifact input/output data
Handle data precision and validation requirements for accurate calculations
Structure mathematical calculations for production-ready environmental credit systems
Prerequisites
Completed Part IV: Policy Workflow Design and Implementation
Understanding of VM0033 methodology and equations from Part I
Basic programming knowledge for implementing mathematical formulas (JavaScript or Python)
Access to validation artifacts: the VM0033 Allcot test case spreadsheet (VM0033_Allcot_Test_Case_Artifact.xlsx), the er-calculations.js reference implementation, and the final PDD verifiable credential (final-PDD-vc.json)
Guardian customLogicBlock: Your Calculation Engine
The Mathematical Execution Environment
Guardian's customLogicBlock is your calculation engine for environmental methodologies - it's where mathematical equations become executable code. Think of it as a computational engine that processes monitoring data through formulas to produce emission reductions that match methodology equations precisely.
You can write your calculations in JavaScript or Python - Guardian supports both languages. Most of our examples use JavaScript, but the concepts apply equally to Python.
Understanding Your Input Data
Every customLogicBlock receives Guardian documents through arguments[0]. These contain the measured variables and parameters needed for your methodology equations - real data from environmental monitoring. Here's the data structure you'll process through mathematical formulas:
This is actual data from the ABC Blue Carbon Mangrove Project in Senegal - the same project used in our test case spreadsheet.
Accessing Data Like a Pro
Field Access Patterns from Production Code
Let's look at how VM0033's production code accesses data. These utility functions from er-calculations.js make your code clean and readable:
The ?? operator provides safe defaults when data might be missing.
Building Your Calculation Engine
The Main Calculation Function
Every customLogicBlock starts with a main function that processes the documents. Here's the pattern from VM0033's production code:
Processing Project Instances
Each project instance represents a restoration site. The processInstance function is where you implement the methodology calculations:
Implementing Baseline Emission Equations
From Methodology Equations to Code
Baseline emissions implement the scientific equations from VM0033 Section 8.1 - representing the "business as usual" scenario without restoration. Each equation in the methodology PDF more or less becomes a function in your code.
Example: VM0033 Equation 8.1.1 - Soil CO2 Emissions
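A sketch of this equation in code. The 44/12 factor converts carbon to CO2, and the field names (delta_C_BSL_soil_i_t, A_i_t) and sign convention match the test function shown near the end of this chapter:

function calculateSoilCO2Emissions(stratum) {
  const C_TO_CO2 = 44 / 12;                          // Molecular weight ratio: tC -> tCO2
  const deltaC = stratum.delta_C_BSL_soil_i_t ?? 0;  // Baseline soil carbon stock change (tC/ha)
  const area = stratum.A_i_t ?? 0;                   // Stratum area in year t (ha)
  return -(C_TO_CO2 * deltaC) * area;                // Sign convention per the chapter's test case
}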
Implementing Project Emission Equations
Translating VM0033 Section 8.2 Equations
Project emissions implement equations from VM0033 Section 8.2 - the restoration scenario calculations. These equations typically show reduced emissions and increased sequestration compared to baseline.
VM0033 Section 8.5 - The Final Scientific Calculation
This implements VM0033's core equation that transforms baseline and project emissions into verified carbon units (VCUs). Each line of code corresponds to specific equations in Section 8.5 of the methodology.
Example: VM0033 Equation 8.5.1 - Net Emission Reductions
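A sketch following the NERRWE formula stated in this chapter's deep dive (net reductions = baseline emissions + project emissions + fire reduction premium - leakage - stock loss deductions); the variable names are illustrative:

function calculateNetERR(year) {
  const baseline = year.GHG_BSL ?? 0;        // Aggregated baseline emissions
  const project = year.GHG_WPS ?? 0;         // Aggregated project scenario emissions
  const firePremium = year.FRP ?? 0;         // Optional fire reduction premium
  const leakage = year.GHG_LK ?? 0;          // Leakage deduction
  const stockLoss = year.SLD ?? 0;           // Stock loss approach deduction
  return baseline + project + firePremium - leakage - stockLoss;
}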
Handling Real-World Data Challenges
Defensive Programming Patterns
Real project data is messy. Projects miss monitoring periods, equipment fails, and data gets corrupted. They might send a different data type than you might expect. Your code needs to handle this gracefully:
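A sketch of defensive patterns for messy monitoring data (helper names are illustrative; the yearly-data field name matches the production code excerpt later in this chapter):

// Coerce possibly-missing or string-typed values into safe numbers
function safeNumber(value, fallback = 0) {
  const n = Number(value);
  return Number.isFinite(n) ? n : fallback;
}

// Missing monitoring periods come back as an empty array instead of crashing
function getYearlyData(data) {
  return Array.isArray(data?.yearly_data_for_baseline_GHG_emissions)
    ? data.yearly_data_for_baseline_GHG_emissions
    : [];
}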
Error Handling
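A minimal sketch: wrapping the entry point so a failing instance reports a clear cause instead of silently producing bad numbers (the debug helper appears in the production test code):

function calc() {
  try {
    const documents = arguments[0] || [];
    if (!documents.length) throw new Error('No input documents received');
    // ... process instances ...
    done(documents[0].document.credentialSubject[0]);
  } catch (err) {
    debug('Calculation failed:', err.message);  // Surface the cause for debugging
    throw err;                                  // Let the block's onErrorAction take over
  }
}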
Validation: Allcot Test Artifact as Your Benchmark
Ensuring Mathematical Accuracy
The Allcot test artifact (VM0033_Allcot_Test_Case_Artifact.xlsx) is your validation benchmark - it contains input parameters and expected output results calculated manually according to VM0033 methodology equations. Your code must reproduce these results exactly to ensure mathematical accuracy.
Your equation implementations must produce the same results as the manual calculations to be valid.
Python Alternative
Writing CustomLogicBlocks in Python
Guardian also supports Python for customLogicBlock development. The concepts are the same, just different syntax:
Choose the language you're more comfortable with - both produce identical results.
Testing Your Code
Quick Testing Tips
While Chapter 21 covers comprehensive testing, here are quick validation techniques while you're developing:
1. Console Logging for Debug
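A quick sketch using the debug helper seen in the production test code (assumes these variables exist in your calc() scope):

// Log intermediate values while developing calculations
debug('Parameters:', JSON.stringify({ GWP_CH4, crediting_period }));
debug('Running VCU total:', totalVcus);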
2. Guardian's Built-in Testing Use Guardian's customLogicBlock testing interface (covered in Chapter 21) to test with real data.
3. Unit Testing Individual Functions
Real Results: ABC Mangrove Project
Production Calculation Results
Using VM0033's calculation engine with the ABC Blue Carbon Mangrove Project data, the policy produces year-by-year VCU projections over the 40-year crediting period (data entered through 2055 only).
Total Project Impact: 2,861,923 VCU credits over 40 years
This demonstrates what your code should produce - substantial carbon credits from mangrove restoration that follow the methodology calculations exactly.
Deep Dive: VM0033 Production Implementation Analysis
Note for Readers: This section provides a detailed analysis of the VM0033 calculation implementation in Guardian's customLogicBlock. It's intended for developers who need to understand, write, or maintain VM0033 calculation code. You can skip this section if you only need the basic customLogicBlock concepts.
This deep dive examines the complete production implementation of VM0033 tidal wetland restoration calculations in Guardian, using the er-calculations.js production code and the VM0033_Allcot_Test_Case_Artifact.xlsx test artifact as our references.
Complete VM0033 Production Code Architecture
The 1261-line er-calculations.js contains 25+ interconnected functions implementing the full VM0033 methodology. Here's the complete function catalog mapped to test artifact worksheets:
Core Architecture Overview
Test Artifact Mapping
Each function maps directly to specific data models defined within VM0033_Allcot_Test_Case_Artifact.xlsx.
Section 3: Temporal Boundary System (Lines 181-350)
Peat and Soil Depletion Time Calculations
VM0033 calculates when carbon pools will be depleted to determine crediting periods. This maps directly to the 5.1_TemporalBoundary worksheet (36x24) in our test artifact.
Two Ways to Calculate Soil Organic Carbon Benefits
VM0033 offers two approaches for calculating soil organic carbon benefits. Both map to the 5.2.4_Ineligible wetland areas worksheet (47x30) in our test artifact.
The SOC_MAX_calculation() function selects which approach to use and computes the final SOC_MAX value.
Test Artifact Cross-Reference:
SOC calculations map to 5.2.4_Ineligible wetland areas worksheet columns A-AD
Total stock approach uses 100-year projections from columns B-H
Stock loss approach uses carbon loss rates from columns I-O
Both approaches feed into SOC_MAX value in column AD
Section 1: Monitoring and Submergence Processing (Lines 39-94)
Processing Time-Series Monitoring Data
VM0033 tracks wetland submergence over time to calculate biomass changes. This maps to the MonitoringPeriodInputs worksheet (158x8) in our test artifact.
The Master Controller: How All 25+ Functions Work Together
The processInstance() function is where the entire VM0033 methodology comes together. It orchestrates all the functions we've covered and maps to multiple test artifact worksheets. This is the production-level implementation that processes a complete project instance.
Parameter Extraction Phase (Lines 1126-1184)
The function starts by extracting parameters from every section of the Guardian document:
Monitoring Data Processing Phase (Lines 1185-1221)
Next, the function processes monitoring period inputs:
Calculation Orchestration Phase (Lines 1221-1241)
Finally, the function orchestrates all the calculations in the correct order:
This is the production implementation that processes VM0033 baseline emissions, mapping directly to the 8.1BaselineEmissions worksheet (158x84) in our test artifact.
Key Production Features:
AR Tool Integration - incorporates AR Tool 14 (afforestation) and AR Tool 05 (fossil fuel) results
Temporal Boundary Application - PDT/SDT constraints applied to actual emission calculations
Submergence Integration - Monitoring data affects biomass calculations
Multiple Calculation Methods - Field data, proxies, IPCC factors handled
Each function processes multi-dimensional calculations across temporal and spatial boundaries
Baseline Emissions Processing
Let's examine the baseline emissions calculation in detail, cross-referencing with test artifact data:
The processProjectEmissions function calculates the project scenario emissions. It follows a parallel structure to baseline processing but applies project-specific parameters.
AR Tool Results Integration
The function begins by extracting AR Tool results for each stratum:
This corresponds to the 6.2ARTool14ProjectData worksheet (2x4 dimensions) where AR Tool 14 calculates carbon stock changes in:
Tree biomass - Column C in test data
Shrub biomass - Column D in test data
And 6.4ARTool5ProjectData worksheet (43x4 dimensions) where AR Tool 05 calculates fossil fuel consumption for project machinery and operations.
Biomass Application Logic
The function includes conditional logic for biomass components:
This checks stratum configuration flags to determine which biomass pools should be included in calculations. The corresponding test data in 7.2ProjectScenarioData worksheet (43x28 dimensions) shows these boolean flags in columns H-J.
Project Scenario Soil Emissions
The soil emissions calculation follows the same three-method approach as baseline but applies project scenario parameters:
This maps to 7.2ProjectScenarioData columns K-M which contain project scenario soil carbon change data calculated using the same methods as baseline but with project-specific parameters.
Non-CO2 Gas Calculations
The function handles CH4 and N2O emissions from soil using project-specific approaches:
This corresponds to columns N-P in 7.2ProjectScenarioData where CH4 emissions are calculated using project-specific approaches and emission factors.
Prescribed Burning Calculations
The function includes specialized calculations for prescribed burning activities:
This calculates emissions from biomass burning using emission factors for N2O and CH4, converted to CO2 equivalent using Global Warming Potentials. The calculations use the Math.pow(10, -6) conversion factor for unit consistency. Test data in columns S-U of 7.2ProjectScenarioData validate these burning emission calculations.
Annual Aggregation
The function aggregates all emission components for each monitoring year:
This produces the annual project scenario emissions that feed into net emission reduction calculations. The final aggregation creates cumulative totals across all monitoring years using the reduce operations.
The function outputs correspond to the 7.3ProjectScenarioGHGEmissions worksheet (43x7 dimensions), which contains the aggregated annual project scenario emissions.
The processNETERR function calculates the net emission reductions for each monitoring year. This function brings together baseline and project scenario results to determine final creditable volumes.
Baseline and Project Aggregation
The function begins by aggregating baseline and project scenario results across all strata for each monitoring year:
This aggregation corresponds to 8.1NetERRCoreData worksheet (43x8 dimensions) where baseline and project scenario emissions are aggregated across all strata to produce project-level totals for each monitoring year.
Cumulative Calculations
The function maintains cumulative sums across monitoring years using running totals:
This produces cumulative emission totals that are essential for stock loss approach calculations and buffer pool management. Test data in columns C-F of 8.1NetERRCoreData shows these cumulative progressions.
Stock Loss Deduction Logic
The function implements stock loss approach deductions when enabled:
This logic deducts any emissions above the maximum soil organic carbon limit (SOC_MAX) to ensure conservative crediting. The calculation corresponds to column G in 8.1NetERRCoreData which shows stock loss deductions applied when cumulative differences exceed the methodology limits.
Fire Reduction Premium Integration
The function includes optional fire reduction premium credits:
This applies fire reduction credits based on documented fire management activities. Test data in column H of 8.1NetERRCoreData shows annual fire reduction premium applications.
NERRWE Calculation
The core net emission reduction calculation combines all components:
This formula represents the fundamental VM0033 equation: Net Emission Reductions = Baseline Emissions + Project Emissions + Fire Reduction Premium - Leakage - Stock Loss Deductions.
Capping Logic
The function applies optional annual emission reduction caps:
This ensures annual emission reductions don't exceed methodology-defined limits. Test data in 8.2NetERRAdjustments worksheet (43x6 dimensions) shows the application of caps in column C.
Uncertainty Adjustments
The function applies measurement and model uncertainties:
This incorporates both positive (allowable) and negative (model error) uncertainty adjustments. The calculation corresponds to column D in 8.2NetERRAdjustments where uncertainty percentages are applied to final emission reductions.
Buffer Pool Calculations
The function calculates buffer pool deductions using an incremental approach:
This calculates buffer deductions based on incremental changes between monitoring years rather than applying the buffer percentage to total accumulations. Test data in 8.3NetERRBufferDeduction worksheet (43x6 dimensions) validates these buffer calculations.
Final VCU Calculations
The function produces final Verified Carbon Units:
This produces the final creditable carbon units for each monitoring year. The outputs correspond to 8.4NetERRFinalCalculations worksheet (43x6 dimensions) which contains:
Gross emission reductions - Column C
Uncertainty-adjusted reductions - Column D
Buffer deductions - Column E
Final VCU issuance - Column F
The function establishes total VCU quantities that determine final carbon credit issuance amounts for the project.
Chapter Summary
You've learned how to translate scientific equations from environmental methodologies into executable code that produces verified carbon credits. The key principles:
Equation-to-Code Translation - Every methodology equation becomes a function in your customLogicBlock
Scientific Precision Required - Use defensive programming to handle edge cases while maintaining mathematical accuracy
Allcot Test Artifact is Your Benchmark - Your code must reproduce manual calculations exactly for scientific validity
Field Access Utilities
Your equation implementations are the foundation of environmental credit integrity. When coded properly, they transform scientific methodology equations into verified carbon units that represent real, measured emission reductions from restoration projects.
The next chapter explores Formula Linked Definitions (FLDs) for managing parameter relationships, and Chapter 21 covers comprehensive testing to ensure your calculations are production-ready.
// Guardian customLogicBlock structure - this is your equation implementation workspace
{
"blockType": "customLogicBlock",
"tag": "methodology_equation_implementation",
"expression": "(function calc() {\n // Implement methodology equations here\n const documents = arguments[0] || [];\n // Process monitoring data through scientific formulas\n return calculatedResults;\n})"
}
// Real document structure from final-PDD-vc.json
const document = {
document: {
credentialSubject: [
{
// Real project information
project_cert_type: "CCB v3.0 & VCS v4.4",
project_details: {
registry_vcs: {
vcs_project_description: "ABC Blue Carbon Mangrove Project..."
}
},
// The data your calculations need
project_data_per_instance: [{
project_instance: {
// Baseline emissions data
baseline_emissions: { /* monitoring data */ },
// Project emissions data
project_emissions: { /* monitoring data */ },
// Where your calculations go
net_ERR: {
total_VCU_per_instance: 0 // You'll calculate this!
}
}
}],
// Project settings and parameters
project_boundary: { /* boundary conditions */ },
individual_parameters: { /* methodology parameters */ }
}
]
}
};
// These utility functions handle the complexity for you
function getProjectBoundaryValue(data, key) {
return data.project_boundary_baseline_scenario?.[key]?.included ??
data.project_boundary_project_scenario?.[key]?.included ??
undefined;
}
function getIndividualParam(data, key) {
return data?.individual_parameters?.[key] ?? undefined;
}
function getMonitoringValue(data, key) {
return data?.monitoring_period_inputs?.[key] ?? undefined;
}
// Using these in your calculations
function processInstance(instance, project_boundary) {
const data = instance.project_instance;
// Get project settings cleanly
const BaselineSoil = getProjectBoundaryValue(project_boundary, 'baseline_soil');
// Get methodology parameters
const GWP_CH4 = getIndividualParam(data, 'gwp_ch4');
// Get monitoring data
const SubmergenceData = getMonitoringValue(data, 'submergence_monitoring_data');
}
// Main entry point - this is where your calculations begin
function calc() {
// Guardian passes documents as arguments[0]
const documents = arguments[0] || [];
const document = documents[0].document;
const creds = document.credentialSubject;
let totalVcus = 0;
// Process each project instance (some projects have multiple sites)
for (const cred of creds) {
for (const instance of cred.project_data_per_instance) {
// This is where the real work happens
processInstance(instance, cred.project_boundary);
// Add up the verified carbon units
totalVcus += instance.project_instance.net_ERR.total_VCU_per_instance;
}
// Set the total for this credential
cred.total_vcus = totalVcus;
}
// Guardian expects this callback
done(adjustValues(document.credentialSubject[0]));
}
function processInstance(instance, project_boundary) {
const data = instance.project_instance;
// Extract key parameters you'll need
const crediting_period = getIndividualParam(data, 'crediting_period') || 40;
const GWP_CH4 = getIndividualParam(data, 'gwp_ch4') || 28;
const GWP_N2O = getIndividualParam(data, 'gwp_n2o') || 265;
// Get project boundary settings
const baseline_soil_CH4 = getProjectBoundaryValue(project_boundary, 'baseline_soil_ch4');
const project_soil_CH4 = getProjectBoundaryValue(project_boundary, 'project_soil_ch4');
// Process the main calculations
processBaselineEmissions(data.baseline_emissions, /* parameters */);
processProjectEmissions(data.project_emissions, /* parameters */);
processNETERR(data.baseline_emissions, data.project_emissions, data.net_ERR, /* parameters */);
}
// Quick test of a calculation function
function testSoilEmissions() {
const testData = { delta_C_BSL_soil_i_t: 100, A_i_t: 10 };
const result = calculateSoilCO2Emissions(testData);
const expected = -(44 / 12) * 100 * 10; // -(44/12 x delta_C) x A, per the soil CO2 equation
debug('Test passed:', Math.abs(result - expected) < 0.01);
}
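The test above assumes a calculateSoilCO2Emissions helper. One plausible shape, sketched from the field-collected-data branch of the soil CO2 logic shown later in this chapter (not the verbatim production code), is:
// Sketch of the helper under test: GHGBSL_soil_CO2 = -(44/12 x delta_C) scaled by area
function calculateSoilCO2Emissions({ delta_C_BSL_soil_i_t, A_i_t }) {
const const_44_by_12 = 44 / 12; // tC -> tCO2 mass conversion
return -(const_44_by_12 * delta_C_BSL_soil_i_t) * A_i_t;
}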
// VM0033 Production Implementation: 25+ Functions in 6 Major Categories
// ── 1. DATA ACCESS UTILITIES (Lines 7-37) ──
adjustValues() // Document post-processing
getStartYear() // Find earliest monitoring year
getProjectBoundaryValue() // Extract project boundary settings
getQuantificationValue() // Get quantification approach parameters
getIndividualParam() // Access individual methodology parameters
getMonitoringValue() // Extract monitoring period data
getWoodProductValue() // Access wood product parameters
// ── 2. TEMPORAL BOUNDARY SYSTEM (Lines 39-350) ──
processMonitoringSubmergence() // Process submergence monitoring data
getDeltaCBSLAGBiomassForStratumAndYear() // Biomass delta calculations across time
calculatePDTSDT() // Peat & Soil Depletion Time calculations
getEndPDTPerStratum() // Stratum-specific PDT boundaries
getEndSDTPerStratum() // Stratum-specific SDT boundaries
calculate_peat_strata_input_coverage_100_years() // 100-year peat projections
calculate_non_peat_strata_input_coverage_100_years() // 100-year mineral soil projections
getCBSL_i_t0() // Initial baseline carbon stocks
calculateRemainingPercentage() // Remaining depletion percentages
// ── 3. SOC CALCULATION APPROACHES (Lines 352-516) ──
totalStockApproach() // VM0033 Total Stock Approach (Section 5.2.1)
stockLossApproach() // VM0033 Stock Loss Approach (Section 5.2.2)
SOC_MAX_calculation() // Soil Organic Carbon maximum calculations
// ── 4. EMISSION PROCESSING ENGINES (Lines 517-926) ──
processBaselineEmissions() // Complete baseline scenario processing
processProjectEmissions() // Complete project scenario processing
processNETERR() // Net emission reduction calculations
// ── 5. SPECIALIZED CALCULATORS (Lines 95-180) ──
computeDeductionAllochBaseline() // Allochthonous carbon deductions for baseline
computeDeductionAllochProject() // Allochthonous carbon deductions for project
getFireReductionPremiumPerYear() // Fire reduction premium by year
getGHGBSL/WPS/Biomass() // GHG emission getters by type
calculateNetERRChange() // VCU change between monitoring periods
calculateNetVCU() // Net VCU calculations
// ── 6. ORCHESTRATION & CONTROL (Lines 1121-1261) ──
calculateTotalVCUPerInstance() // Sum VCUs across monitoring periods
processInstance() // Main instance processing orchestrator
calc() // Entry point function
// From er-calculations.js:181-286 - VM0033 temporal boundary calculation
function calculatePDTSDT(baseline, isProjectQuantifyBSLReduction, temporalBoundary, crediting_period) {
if (isProjectQuantifyBSLReduction) {
// Work on earliest year for temporal boundary establishment
const baselineEmissionsSorted = (baseline.yearly_data_for_baseline_GHG_emissions || [])
.slice() // Prevent mutation of original array
.sort((a, b) => a.year_t - b.year_t);
if (!baselineEmissionsSorted.length) return;
baselineEmissionsSorted[0].annual_stratum_parameters.forEach(stratum => {
const sc = stratum.stratum_characteristics ?? {};
const asl = stratum.annual_stratum_level_parameters ?? {};
// Extract critical parameters from test artifact StratumLevelInput worksheet
const {
soil_disturbance_type, // From Column C in test data
drained_20_yr, // From Column D in test data
significant_soil_erosion_as_non_peat_soil, // From Column E
RateCloss_BSL_i // From Column F - soil carbon loss rate
} = sc;
let SDT = {}; // Soil organic carbon Depletion Time
let PDT = {}; // Peat Depletion Time
// VM0033 Equation 5.1.1 - Initial soil carbon calculation
SDT.CBSL_i_t0 = (isProjectQuantifyBSLReduction && sc.is_project_quantify_BSL_reduction)
? sc.depth_soil_i_t0 * sc.VC_I_mineral_soil_portion * 10 // Convert to tC/ha
: 0;
// VM0033 Equation 5.1.2 - Soil Depletion Time calculation
if (isProjectQuantifyBSLReduction && sc.is_project_quantify_BSL_reduction) {
if (significant_soil_erosion_as_non_peat_soil || drained_20_yr) {
// Immediate depletion scenarios
SDT.t_SDT_BSL_i = 0;
} else {
// Calculate remaining time after peat depletion
const duration = crediting_period - (sc.soil_type_t0 === 'Peatsoil'
? (sc.depth_peat_i_t0 / sc.Ratepeatloss_BSL_i) // Peat depletion duration
: 0
);
if (duration > 0) {
SDT.t_SDT_BSL_i = soil_disturbance_type === "Erosion"
? 5 // Fixed 5-year erosion period per methodology
: (RateCloss_BSL_i !== 0 ? SDT.CBSL_i_t0 / RateCloss_BSL_i : 0);
} else {
SDT.t_SDT_BSL_i = 0; // Peat depletion consumes the crediting period, so no SDT remains
}
}
} else {
SDT.t_SDT_BSL_i = 0;
}
// VM0033 Equation 5.1.3 - Peat Depletion Time for peat soils
if (sc.soil_type_t0 === 'Peatsoil' && sc.is_project_quantify_BSL_reduction) {
PDT.t_PDT_BSL_i = sc.depth_peat_i_t0 / sc.Ratepeatloss_BSL_i; // Years until peat depleted
PDT.start_PDT = 0; // Peat depletion starts immediately
PDT.end_PDT = PDT.t_PDT_BSL_i; // When peat is fully depleted
} else {
// Non-peat soils have no peat depletion
PDT.t_PDT_BSL_i = 0;
PDT.start_PDT = 0;
PDT.end_PDT = 0;
}
// Coordinate PDT and SDT temporal boundaries
SDT.start_PDT = PDT.start_PDT;
SDT.end_PDT = Math.min(PDT.end_PDT, crediting_period); // Cap at crediting period
// Soil depletion starts after peat depletion ends
if (SDT.t_SDT_BSL_i > 0) {
SDT.start_SDT = SDT.end_PDT; // Start when peat depletion ends
} else {
SDT.start_SDT = 0; // No soil depletion
}
SDT.end_SDT = SDT.start_SDT + SDT.t_SDT_BSL_i; // When soil is depleted
// Store temporal boundary data for this stratum
temporalBoundary.push({
stratum_i: stratum.stratum_i,
peat_depletion_time: {
"t_PDT_BSL_i": PDT.t_PDT_BSL_i,
"start_PDT": PDT.start_PDT,
"end_PDT": PDT.end_PDT,
// Guardian metadata for schema validation
type: temporalBoundary[0]?.peat_depletion_time?.type,
'@context': temporalBoundary[0]?.peat_depletion_time?.['@context'] ?? [],
},
soil_organic_carbon_depletion_time: {
"t_SDT_BSL_i": SDT.t_SDT_BSL_i,
'CBSL_i_t0': SDT.CBSL_i_t0,
"start_SDT": SDT.start_SDT,
"end_SDT": SDT.end_SDT,
"start_PDT": SDT.start_PDT,
"end_PDT": SDT.end_PDT,
type: temporalBoundary[0]?.soil_organic_carbon_depletion_time?.type,
'@context': temporalBoundary[0]?.soil_organic_carbon_depletion_time?.['@context'] ?? [],
},
type: temporalBoundary?.[0]?.type,
'@context': temporalBoundary?.[0]?.['@context'] ?? [],
});
});
// Remove template element after processing
temporalBoundary.shift();
}
}
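A quick numeric check makes the temporal boundary math concrete: with invented numbers, a peat stratum 150 cm deep losing 1.5 cm/yr has a PDT of 100 years, so a 40-year crediting period caps end_PDT at 40 and soil depletion never starts inside the period.
// Worked example with invented numbers (not from the test artifact)
const depth_peat_i_t0 = 150; // cm of peat at t0
const Ratepeatloss_BSL_i = 1.5; // cm lost per year
const crediting_period = 40;
const t_PDT = depth_peat_i_t0 / Ratepeatloss_BSL_i; // 100 years to deplete the peat
const end_PDT = Math.min(t_PDT, crediting_period); // Capped at 40
debug({ t_PDT, end_PDT }); // { t_PDT: 100, end_PDT: 40 }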
// From er-calculations.js:288-298 - Access PDT end time for specific stratum
function getEndPDTPerStratum(temporal_boundary, stratum_i) {
const stratumTemporalBoundary = temporal_boundary.find(
(boundary) => boundary.stratum_i === stratum_i
);
if (stratumTemporalBoundary) {
return stratumTemporalBoundary.soil_organic_carbon_depletion_time.end_PDT;
}
return 0; // Default if no temporal boundary found
}
// From er-calculations.js:300-310 - Access SDT end time for specific stratum
function getEndSDTPerStratum(temporal_boundary, stratum_i) {
const stratumTemporalBoundary = temporal_boundary.find(
(boundary) => boundary.stratum_i === stratum_i
);
if (stratumTemporalBoundary) {
return stratumTemporalBoundary.soil_organic_carbon_depletion_time.end_SDT;
}
return 0; // Default if no temporal boundary found
}
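Both accessors fall back to 0 for unknown strata, which keeps downstream subtractions safe. A quick check against a mock boundary record (shape only; values invented):
// Mock temporal boundary record
const mockBoundary = [{
stratum_i: 'S1',
soil_organic_carbon_depletion_time: { end_PDT: 12, end_SDT: 30 }
}];
debug(getEndPDTPerStratum(mockBoundary, 'S1')); // 12
debug(getEndSDTPerStratum(mockBoundary, 'S2')); // 0 - unknown stratum defaults safely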
// From er-calculations.js:312-321 - Look up a stratum's peat carbon coverage over 100 years
function calculate_peat_strata_input_coverage_100_years(data, strata) {
const match = data.find(item => String(item.stratum_i) === String(strata));
return match ? Number(match.peat_strata_input_coverage_100_years) || 0 : 0;
}
// From er-calculations.js:322-331 - Look up a stratum's mineral soil carbon coverage over 100 years
function calculate_non_peat_strata_input_coverage_100_years(data, strata) {
const match = data.find(item => String(item.stratum_i) === String(strata));
return match ? Number(match.non_peat_strata_input_coverage_100_years) || 0 : 0;
}
// From er-calculations.js:332-338 - Get initial baseline carbon stock for stratum
function getCBSL_i_t0(temporalBoundary = [], strata) {
const match = temporalBoundary.find(item => String(item.stratum_i) === String(strata));
return match ? Number(match.soil_organic_carbon_depletion_time.CBSL_i_t0) || 0 : 0;
}
// From er-calculations.js:340-349 - Calculate remaining carbon after depletion
function calculateRemainingPercentage(match, D41) {
if (match === 0) return 100; // No depletion = 100% remaining
if (D41 === 0) return 0; // No carbon = 0% remaining
const percentage = (D41 / match) * 100;
return Math.min(percentage, 100); // Cap at 100%
}
// From er-calculations.js:352-458 - Total Stock Approach implementation
function totalStockApproach(
baseline,
total_stock_approach_parameters,
peat_strata_input_coverage_100_years,
non_peat_strata_input_coverage_100_years,
temporal_boundary
) {
let sumWPS = 0; // Σ C_WPS_i_t100 × A_WPS_i_t100 (project carbon at 100 years)
let sumBSL = 0; // Σ C_BSL_i_t100 × A_BSL_i_t100 (baseline carbon at 100 years)
// Process each stratum in the first-year baseline record
baseline.yearly_data_for_baseline_GHG_emissions[0].annual_stratum_parameters
.forEach((stratum) => {
const { stratum_i } = stratum;
const charac = stratum.stratum_characteristics ?? {};
// Extract parameters with safe defaults (defensive programming)
const depth_peat_i_t0 = Number(charac.depth_peat_i_t0) || 0;
const VC_I_peat_portion = Number(charac.VC_I_peat_portion) || 0;
const VC_I_mineral_soil_portion = Number(charac.VC_I_mineral_soil_portion) || 0;
const Ratepeatloss_BSL_i = Number(charac.Ratepeatloss_BSL_i) || 0;
const RateCloss_BSL_i = Number(charac.RateCloss_BSL_i) || 0;
const A_WPS_i_t100 = Number(charac.A_WPS_i_t100) || 0;
const A_BSL_i_t100 = Number(charac.A_BSL_i_t100) || 0;
// VM0033 Equation 5.2.1.1 - Project scenario carbon at 100 years
const depth_peat_WPS_t100 =
depth_peat_i_t0 -
calculate_peat_strata_input_coverage_100_years(
peat_strata_input_coverage_100_years,
stratum_i
);
// Project organic soil carbon (preserved peat)
const C_WPS_i_t100_organic_soil =
charac.soil_type_t0 === "Peatsoil"
? depth_peat_WPS_t100 * VC_I_peat_portion * 10 // Convert to tC/ha
: 0;
// Project mineral soil carbon (preserved mineral soil)
const C_WPS_i_t100_mineral_soil =
getCBSL_i_t0(temporal_boundary, stratum_i) -
calculate_non_peat_strata_input_coverage_100_years(
non_peat_strata_input_coverage_100_years,
stratum_i
);
const C_WPS_i_t100 =
C_WPS_i_t100_organic_soil + C_WPS_i_t100_mineral_soil;
// VM0033 Equation 5.2.1.2 - Baseline scenario carbon at 100 years
const depth_peat_BSL_t100 =
depth_peat_i_t0 - 100 * Ratepeatloss_BSL_i; // Peat lost over 100 years
const C_BSL_i_t100_organic_soil =
charac.soil_type_t0 === "Peatsoil"
? depth_peat_BSL_t100 * VC_I_peat_portion * 10
: 0;
// Calculate remaining years after peat depletion for mineral soil loss
const remaining_years_after_peat_depletion_BSL =
calculateRemainingPercentage(Ratepeatloss_BSL_i, depth_peat_i_t0);
const C_BSL_i_t100_mineral_soil =
getCBSL_i_t0(temporal_boundary, stratum_i) -
remaining_years_after_peat_depletion_BSL * RateCloss_BSL_i;
const C_BSL_i_t100 =
charac.soil_type_t0 === "Peatsoil"
? C_BSL_i_t100_organic_soil
: C_BSL_i_t100_mineral_soil;
// VM0033 Equation 5.2.1.3 - Area-weighted carbon stock sums
sumWPS += C_WPS_i_t100 * A_WPS_i_t100;
sumBSL += C_BSL_i_t100 * A_BSL_i_t100;
// Store detailed calculations for each stratum
total_stock_approach_parameters.push({
stratum_i,
C_WPS_i_t100,
depthpeat_WPS_i_t100: Math.max(depth_peat_WPS_t100, 0),
C_WPS_i_t100_organic_soil,
C_WPS_i_t100_mineral_soil: Math.max(C_WPS_i_t100_mineral_soil, 0),
Depthpeat_BSL_i_t100: Math.max(depth_peat_BSL_t100, 0),
C_BSL_i_t100_organic_soil,
remaining_years_after_peat_depletion_BSL,
C_BSL_i_t100_mineral_soil: Math.max(
getCBSL_i_t0(temporal_boundary, stratum_i) - 100 * RateCloss_BSL_i,
0
),
C_BSL_i_t100,
type: total_stock_approach_parameters?.[0]?.type,
"@context": total_stock_approach_parameters?.[0]?.["@context"] ?? [],
});
});
// Remove template element after processing
total_stock_approach_parameters.shift();
// VM0033 Equation 5.2.1.4 - Check if project stocks are ≥ 105% of baseline
const condition = sumWPS >= sumBSL * 1.05;
return {
condition,
sumWPS,
sumBSL,
diff: condition ? sumWPS - sumBSL : 0, // Only credit if condition met
};
}
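The 105% gate is easiest to sanity-check with a single stratum. With invented numbers, a project holding 210 tC/ha over 100 ha against a baseline of 190 tC/ha over the same area passes the condition and credits the difference:
// Single-stratum sanity check of the >= 105% condition (numbers invented)
const sumWPS = 210 * 100; // C_WPS_i_t100 x A_WPS_i_t100 = 21,000 tC
const sumBSL = 190 * 100; // C_BSL_i_t100 x A_BSL_i_t100 = 19,000 tC
const condition = sumWPS >= sumBSL * 1.05; // 21,000 >= 19,950 -> true
debug({ condition, diff: condition ? sumWPS - sumBSL : 0 }); // diff: 2000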
// From er-calculations.js:39-69 - Process submergence monitoring data
function processMonitoringSubmergence(subInputs = {}) {
const years = subInputs.submergence_monitoring_data ?? [];
for (const yrRec of years) {
const {
monitoring_year,
submergence_measurements_for_each_stratum: strata = []
} = yrRec;
// Process each stratum's submergence data for this monitoring year
for (const s of strata) {
const {
stratum_i, // Stratum identifier
is_submerged, // Boolean: is this stratum submerged?
submergence_T, // Time period of submergence (years)
area_submerged_percentage, // Percentage of stratum area submerged
C_BSL_agbiomass_i_t_ar_tool_14, // Initial baseline above-ground biomass
C_BSL_agbiomass_i_t_to_T_ar_tool_14 // Baseline biomass at time T
} = s; // delta_C_BSL_agbiomass_i_t is written back to s below, so it isn't destructured
if (is_submerged) {
// VM0033 Equation 6.1 - Calculate biomass change due to submergence
const tempDelta = submergence_T ? (C_BSL_agbiomass_i_t_ar_tool_14 - C_BSL_agbiomass_i_t_to_T_ar_tool_14) / submergence_T : 0; // Guard against a zero submergence period
const tempDeltaFinal = tempDelta * area_submerged_percentage;
// Apply methodology constraint: negative deltas set to zero
if (tempDeltaFinal < 0) {
s.delta_C_BSL_agbiomass_i_t = 0;
} else {
s.delta_C_BSL_agbiomass_i_t = tempDeltaFinal;
}
} else {
// No submergence = no biomass change
s.delta_C_BSL_agbiomass_i_t = 0;
}
}
}
}
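To watch Equation 6.1 work, feed the processor a one-year, one-stratum fixture. With invented numbers - biomass falling from 120 to 90 tC/ha over a 5-year submergence affecting 40% of the stratum (expressed as a fraction here) - the stored delta is (120 - 90) / 5 x 0.4 = 2.4:
// Hypothetical submergence fixture for a single stratum and year
const subInputs = {
submergence_monitoring_data: [{
monitoring_year: 2024,
submergence_measurements_for_each_stratum: [{
stratum_i: 'S1',
is_submerged: true,
submergence_T: 5,
area_submerged_percentage: 0.4, // Treated as a fraction in this sketch
C_BSL_agbiomass_i_t_ar_tool_14: 120,
C_BSL_agbiomass_i_t_to_T_ar_tool_14: 90
}]
}]
};
processMonitoringSubmergence(subInputs);
debug(subInputs.submergence_monitoring_data[0]
.submergence_measurements_for_each_stratum[0].delta_C_BSL_agbiomass_i_t); // 2.4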
// From er-calculations.js:71-91 - Retrieve biomass delta for specific stratum/year
function getDeltaCBSLAGBiomassForStratumAndYear(
subInputs = {},
stratumId,
year
) {
const results = [];
// Search through all monitoring year records
for (const yrRec of subInputs.submergence_monitoring_data ?? []) {
// Check each stratum measurement in this monitoring year
for (const s of yrRec.submergence_measurements_for_each_stratum ?? []) {
// Match stratum ID and year criteria
if (String(s.stratum_i) === String(stratumId) && (year < yrRec.monitoring_year)) {
results.push({
year: yrRec.monitoring_year,
delta: s.delta_C_BSL_agbiomass_i_t,
});
}
}
}
// Return results or default if no matches found
return results.length ? results : [{ year: null, delta: 0 }];
}
// From er-calculations.js:95-115 - Baseline allochthonous carbon deduction calculation
function computeDeductionAllochBaseline(params) {
const {
baseline_soil_SOC, // Is baseline soil SOC included?
soil_insitu_approach, // Soil measurement approach
soil_type, // Soil type (Peatsoil vs others)
AU5, // Soil emissions value
AV5, // Allochthonous carbon percentage
BB5 // Alternative emissions value
} = params;
// No deduction if soil SOC not included or peat soil
if (baseline_soil_SOC !== true) return 0;
if (soil_type === "Peatsoil") return 0;
const fraction = AV5 / 100; // Convert percentage to fraction
// Apply appropriate calculation based on measurement approach
if (soil_insitu_approach === "Proxies" || soil_insitu_approach === "Field-collected data") {
return AU5 * fraction;
}
return BB5 * fraction;
}
// From er-calculations.js:117-137 - Project allochthonous carbon deduction calculation
function computeDeductionAllochProject(params) {
const {
project_soil_SOC, // Is project soil SOC included?
soil_insitu_approach, // Soil measurement approach
soil_type, // Soil type
AK5, // Project soil emissions value
AL5, // Allochthonous carbon percentage
AR5 // Alternative emissions value
} = params;
// Same logic as baseline but for project scenario
if (project_soil_SOC !== true) return 0;
if (soil_type === "Peatsoil") return 0;
const fraction = AL5 / 100;
if (soil_insitu_approach === "Proxies" || soil_insitu_approach === "Field-collected data") {
return AK5 * fraction;
}
return AR5 * fraction;
}
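A quick numeric pass shows the deduction in action: a mineral-soil stratum measured via proxies with 500 tCO2e of soil emissions and 12% allochthonous carbon yields a 60 tCO2e deduction, while a peat soil always returns 0 (numbers invented):
// Worked examples against the baseline deduction helper
debug(computeDeductionAllochBaseline({
baseline_soil_SOC: true,
soil_insitu_approach: 'Proxies',
soil_type: 'Mineral',
AU5: 500, // Soil CO2 emissions
AV5: 12, // Allochthonous carbon percentage
BB5: 0
})); // 60 = 500 x 0.12
debug(computeDeductionAllochBaseline({
baseline_soil_SOC: true,
soil_insitu_approach: 'Proxies',
soil_type: 'Peatsoil',
AU5: 500, AV5: 12, BB5: 0
})); // 0 - peat soils take no allochthonous deduction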
// From er-calculations.js:1185-1221 - Monitoring data processing
// ── MONITORING PERIOD INPUTS (Maps to MonitoringPeriodInputs worksheet) ──
const IsBaselineAbovegroundNonTreeBiomass = getMonitoringValue(data, 'is_baseline_aboveground_non_tree_biomass');
const IsProjectAbovegroundNonTreeBiomass = getMonitoringValue(data, 'is_project_aboveground_non_tree_biomass');
// Initialize monitoring data arrays
let BaselineSoilCarbonStockMonitoringData = [];
let ProjectSoilCarbonStockMonitoringData = [];
let BaselineHerbaceousVegetationMonitoringData = [];
let ProjectHerbaceousVegetationMonitoringData = [];
// Extract submergence monitoring data (critical for VM0033)
const SubmergenceMonitoringData = getMonitoringValue(data, 'submergence_monitoring_data');
// Conditional data extraction based on project boundary and quantification approach
BaselineSoilCarbonStockMonitoringData = (BaselineSoil && QuantificationCO2EmissionsSoil === 'Field-collected data') ?
getMonitoringValue(data, 'baseline_soil_carbon_stock_monitoring_data') : [];
ProjectSoilCarbonStockMonitoringData = (ProjectSoil && QuantificationCO2EmissionsSoil === 'Field-collected data') ?
getMonitoringValue(data, 'project_soil_carbon_stock_monitoring_data') : [];
BaselineHerbaceousVegetationMonitoringData = IsBaselineAbovegroundNonTreeBiomass ?
getMonitoringValue(data, 'baseline_herbaceous_vegetation_monitoring_data') : [];
ProjectHerbaceousVegetationMonitoringData = IsProjectAbovegroundNonTreeBiomass ?
getMonitoringValue(data, 'project_herbaceous_vegetation_monitoring_data') : [];
// ── WOOD PRODUCT PROJECT SCENARIO (Maps to IF Wood Product Is Included worksheet) ──
let WoodProductDjCFjBCEF = [];
let WoodProductSLFty = [];
let WoodProductOfty = [];
let WoodProductVexPcomi = [];
let WoodProductCAVGTREEi = [];
// Only extract wood product data if project boundary includes it
if (ProjectWoodProducts) {
WoodProductDjCFjBCEF = getWoodProductValue(data, 'wood_product_Dj_CFj_BCEF');
WoodProductSLFty = getWoodProductValue(data, 'wood_product_SLFty');
WoodProductOfty = getWoodProductValue(data, 'wood_product_Ofty');
WoodProductVexPcomi = getWoodProductValue(data, 'wood_product_Vex_Pcomi');
WoodProductCAVGTREEi = getWoodProductValue(data, 'wood_product_CAVG_TREE_i');
}
// From er-calculations.js:1221-1241 - Calculation orchestration
// ── CALCULATION SEQUENCE ──
// Step 1: Process submergence monitoring data (required for biomass calculations)
processMonitoringSubmergence(data.monitoring_period_inputs);
// Step 2: Establish temporal boundaries (required for all subsequent calculations)
const temporalBoundary = data.temporal_boundary;
calculatePDTSDT(data.baseline_emissions, QuantificationBaselineCO2Reduction, temporalBoundary, CreditingPeriod);
// Step 3: Calculate baseline emissions (maps to 8.1BaselineEmissions worksheet)
processBaselineEmissions(
data.baseline_emissions,
CreditingPeriod,
BaselineMethaneProductionByMicrobes,
QuantificationCH4EmissionsSoil,
GWP_CH4,
BaselineDenitrificationNitrification,
QuantificationN2OEmissionsSoil,
GWP_N2O,
data.monitoring_period_inputs,
temporalBoundary
);
// Step 4: Calculate project emissions (maps to 8.2ProjectEmissions worksheet)
processProjectEmissions(
data.project_emissions,
ProjectMethaneProductionByMicrobes,
QuantificationCH4EmissionsSoil,
GWP_CH4,
ProjectDenitrificationNitrification,
QuantificationN2OEmissionsSoil,
GWP_N2O,
EF_N2O_Burn,
EF_CH4_Burn,
ProjectBurningBiomass
);
// Step 5: Calculate SOC_MAX using appropriate approach (maps to 5.2.4_Ineligible wetland areas worksheet)
SOC_MAX_calculation(
data.baseline_emissions,
data.peat_strata_input_coverage_100_years,
data.non_peat_strata_input_coverage_100_years,
temporalBoundary,
QuantificationSOCCapApproach,
data.ineligible_wetland_areas
);
// Step 6: Calculate final net emission reductions and VCUs (maps to 8.5NetERR worksheet)
processNETERR(
data.baseline_emissions,
data.project_emissions,
data.net_ERR,
data.ineligible_wetland_areas.SOC_MAX,
QuantificationBaselineCO2Reduction,
QuantificationFireReductionPremium,
FireReductionPremiumArray,
IsNERRWEMaxCap,
NERRWE_Max,
NERError,
AllowableUncertainty,
BufferPercent
);
}
// From er-calculations.js:1243-1261 - Guardian customLogicBlock entry point
function calc() {
const documents = arguments[0] || []; // Guardian passes documents as arguments[0]
const document = documents[0].document; // First document in the array
const creds = document.credentialSubject; // Extract credential subjects
// Process each credential (can be multiple projects)
for (const cred of creds) {
let totalVcus = 0; // Reset per credential
// Process each project instance (can be multiple sites per project)
for (const instance of cred.project_data_per_instance) {
// This calls the complete processInstance orchestration we covered
processInstance(instance, cred.project_boundary);
// Accumulate VCUs from this instance
totalVcus += instance.project_instance.net_ERR.total_VCU_per_instance;
}
// Store total for this credential
cred.total_vcus = totalVcus;
}
// Guardian callback - return processed document
done(adjustValues(document.credentialSubject[0]));
}
// From er-calculations.js:517-713 - Complete baseline emissions processing
function processBaselineEmissions(baseline, crediting_period, baseline_soil_CH4, soil_CH4_approach,
GWP_CH4, baseline_soil_N2O, soil_N2O_approach, GWP_N2O, monitoring_submergence_data, temporal_boundary) {
// Process each monitoring year in the baseline scenario
for (const yearRec of baseline.yearly_data_for_baseline_GHG_emissions ?? []) {
const { year_t } = yearRec;
// Process each stratum within this year
for (const stratum of yearRec.annual_stratum_parameters ?? []) {
const { stratum_i } = stratum;
const sc = stratum.stratum_characteristics ?? {};
const asl = stratum.annual_stratum_level_parameters ?? {};
// ── AR TOOL INTEGRATION ────────────────────────────────────────
// Extract AR Tool 14 results (afforestation/reforestation calculations)
asl.delta_CTREE_BSL_i_t_ar_tool_14 = stratum.ar_tool_14.delta_C_TREE;
asl.delta_CSHRUB_BSL_i_t_ar_tool_14 = stratum.ar_tool_14.delta_C_SHRUB;
// Extract AR Tool 05 results (fuel consumption calculations)
asl.ET_FC_I_t_ar_tool_5_BSL = stratum.ar_tool_05.ET_FC_y;
// Check if this stratum quantifies baseline reduction
const isProjectQuantifyBSLReduction = sc.is_project_quantify_BSL_reduction;
// ── BIOMASS CALCULATIONS ───────────────────────────────────────
// Apply above-ground non-tree biomass logic
if (asl.is_aboveground_non_tree_biomass) {
asl.delta_CSHRUB_BSL_i_t_ar_tool_14 = 0; // Zero out shrubs if non-tree biomass included
}
// VM0033 Equation 8.1.2 - Tree and shrub biomass change
asl.delta_C_BSL_tree_or_shrub_i_t = const_12_by_44 * (
asl.delta_CTREE_BSL_i_t_ar_tool_14 + asl.delta_CSHRUB_BSL_i_t_ar_tool_14
);
// Handle herbaceous vegetation
if (asl.is_aboveground_non_tree_biomass) {
asl.delta_C_BSL_herb_i_t = 0; // Set to zero if already included above
}
// ── SOIL CO2 EMISSIONS ─────────────────────────────────────────
if (asl.is_soil) {
const method = sc.co2_emissions_from_soil;
switch (method) {
case "Field-collected data":
// VM0033 Equation 8.1.1 - Direct field measurements
asl.GHGBSL_soil_CO2_i_t = -(const_44_by_12 * asl.delta_C_BSL_soil_i_t);
break;
case "Proxies":
// Use proxy data when direct measurement not available
asl.GHGBSL_soil_CO2_i_t = asl.GHG_emission_proxy_GHGBSL_soil_CO2_i_t;
break;
default:
// Sum of individual emission sources
asl.GHGBSL_soil_CO2_i_t = (asl.GHGBSL_insitu_CO2_i_t ?? 0) +
(asl.GHGBSL_eroded_CO2_i_t ?? 0) +
(asl.GHGBSL_excav_CO2_i_t ?? 0);
}
} else {
asl.GHGBSL_soil_CO2_i_t = 0; // No soil emissions for this stratum
}
// ── ALLOCATION DEDUCTIONS ──────────────────────────────────────
// Calculate allocation deductions using the utility function
asl.Deduction_alloch = computeDeductionAllochBaseline({
baseline_soil_SOC: asl.is_soil,
soil_insitu_approach: sc.co2_emissions_from_soil,
soil_type: sc.soil_type_t0,
AU5: asl.GHGBSL_soil_CO2_i_t,
AV5: asl.is_soil ? asl.percentage_C_alloch_BSL : 0,
BB5: (asl.is_soil && sc.co2_emissions_from_soil === "Others") ?
asl.GHGBSL_insitu_CO2_i_t : 0
});
// ── CH4 EMISSIONS FROM SOIL ────────────────────────────────────
if (baseline_soil_CH4) {
const method = soil_CH4_approach;
switch (method) {
case "IPCC emission factors":
asl.GHGBSL_soil_CH4_i_t = asl.IPCC_emission_factor_ch4_BSL * GWP_CH4;
break;
case "Proxies":
asl.GHGBSL_soil_CH4_i_t = asl.GHG_emission_proxy_ch4_BSL * GWP_CH4;
break;
default:
asl.GHGBSL_soil_CH4_i_t = asl.CH4_BSL_soil_i_t * GWP_CH4;
}
} else {
asl.GHGBSL_soil_CH4_i_t = 0;
}
// ── N2O EMISSIONS FROM SOIL ────────────────────────────────────
if (baseline_soil_N2O) {
const method = soil_N2O_approach;
switch (method) {
case "IPCC emission factors":
asl.GHGBSL_soil_N2O_i_t = asl.IPCC_emission_factor_n2o_BSL * GWP_N2O;
break;
case "Proxies":
asl.GHGBSL_soil_N2O_i_t = asl.N2O_emission_proxy_BSL * GWP_N2O;
break;
default:
asl.GHGBSL_soil_N2O_i_t = asl.N2O_BSL_soil_I_t * GWP_N2O;
}
} else {
asl.GHGBSL_soil_N2O_i_t = 0;
}
// ── TEMPORAL BOUNDARY APPLICATION ──────────────────────────────
// This is where the PDT/SDT system gets applied to actual calculations
const endPDT = isProjectQuantifyBSLReduction ?
getEndPDTPerStratum(temporal_boundary, stratum_i) : crediting_period;
const endSDT = isProjectQuantifyBSLReduction ?
getEndSDTPerStratum(temporal_boundary, stratum_i) : crediting_period;
if (isProjectQuantifyBSLReduction) {
const emissionsArray = baseline.yearly_data_for_baseline_GHG_emissions || [];
const startYear = getStartYear(emissionsArray);
const period = year_t - startYear + 1;
// VM0033 Equation 8.1.26 - Apply temporal boundary constraints
if (period > endPDT && period > endSDT) {
// Beyond depletion periods - no soil emissions
asl.GHGBSL_soil_i_t = 0;
} else {
// Within depletion periods - calculate full soil emissions
asl.GHGBSL_soil_i_t = asl.A_i_t * (
asl.GHGBSL_soil_CO2_i_t - asl.Deduction_alloch +
asl.GHGBSL_soil_CH4_i_t + asl.GHGBSL_soil_N2O_i_t
);
}
} else {
// No temporal boundary constraints
asl.GHGBSL_soil_i_t = asl.A_i_t * (
asl.GHGBSL_soil_CO2_i_t - asl.Deduction_alloch +
asl.GHGBSL_soil_CH4_i_t + asl.GHGBSL_soil_N2O_i_t
);
}
// ── BIOMASS CALCULATION WITH SUBMERGENCE ──────────────────────
// VM0033 Equation 8.1.23 - Integrate submergence monitoring data
const monitoring_submergence = getDeltaCBSLAGBiomassForStratumAndYear(
monitoring_submergence_data, stratum_i, yearRec.year_t
);
asl.delta_C_BSL_biomass_i_t = asl.delta_C_BSL_tree_or_shrub_i_t +
asl.delta_C_BSL_herb_i_t -
monitoring_submergence[0].delta;
// ── FUEL CONSUMPTION EMISSIONS ─────────────────────────────────
if (asl.is_fossil_fuel_use) {
asl.GHGBSL_fuel_i_t = asl.ET_FC_I_t_ar_tool_5_BSL; // From AR Tool 05
} else {
asl.GHGBSL_fuel_i_t = 0;
}
}
// ── YEAR-LEVEL AGGREGATIONS ────────────────────────────────────
// Sum biomass changes across all strata for this year
const sum_delta_C_BSL_biomass = yearRec.annual_stratum_parameters
.reduce((acc, s) => acc + (Number(s.annual_stratum_level_parameters
.delta_C_BSL_biomass_i_t) || 0), 0);
// Convert carbon changes to CO2 equivalent
yearRec.GHG_BSL_biomass = -(sum_delta_C_BSL_biomass * const_44_by_12);
// Sum soil emissions across all strata
const sum_GHG_BSL_soil = yearRec.annual_stratum_parameters.reduce(
(acc, s) => acc + (Number(s.annual_stratum_level_parameters.GHGBSL_soil_i_t) || 0), 0
);
yearRec.GHG_BSL_soil = sum_GHG_BSL_soil;
// Sum fuel emissions across all strata
const sum_GHG_BSL_fuel = yearRec.annual_stratum_parameters.reduce(
(acc, s) => acc + (Number(s.annual_stratum_level_parameters.GHGBSL_fuel_i_t) || 0), 0
);
yearRec.GHG_BSL_fuel = sum_GHG_BSL_fuel;
}
// ── CUMULATIVE CALCULATIONS ────────────────────────────────────────
// Calculate cumulative totals across all years
baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
rec.GHG_BSL_biomass = acc + rec.GHG_BSL_biomass;
return rec.GHG_BSL_biomass;
}, 0);
baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
rec.GHG_BSL_soil = acc + rec.GHG_BSL_soil;
return rec.GHG_BSL_soil;
}, 0);
baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
rec.GHG_BSL_fuel = acc + rec.GHG_BSL_fuel;
return rec.GHG_BSL_fuel;
}, 0);
// Calculate total baseline emissions per year
baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
rec.GHG_BSL = rec.GHG_BSL_biomass + rec.GHG_BSL_soil + rec.GHG_BSL_fuel;
return rec.GHG_BSL;
}, 0);
}
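Tracing one stratum-year through the soil aggregation ties the pieces together. With invented per-hectare terms - a 2 tC/ha field-measured soil carbon loss, a 0.5 tCO2e allochthonous deduction, CH4 and N2O terms of 1.4 and 0.8 tCO2e - a 100 ha stratum contributes about 903 tCO2e:
// One stratum-year through the soil aggregation (numbers invented)
const const_44_by_12 = 44 / 12;
const A_i_t = 100; // Stratum area, ha
const delta_C_BSL_soil_i_t = -2; // Field-collected soil carbon change, tC/ha (a loss)
const GHGBSL_soil_CO2_i_t = -(const_44_by_12 * delta_C_BSL_soil_i_t); // +7.33 tCO2e/ha
const Deduction_alloch = 0.5; // Allochthonous deduction, tCO2e/ha
const GHGBSL_soil_CH4_i_t = 1.4; // CH4 term, tCO2e/ha
const GHGBSL_soil_N2O_i_t = 0.8; // N2O term, tCO2e/ha
const GHGBSL_soil_i_t = A_i_t * (GHGBSL_soil_CO2_i_t - Deduction_alloch +
GHGBSL_soil_CH4_i_t + GHGBSL_soil_N2O_i_t);
debug(GHGBSL_soil_i_t.toFixed(2)); // "903.33"
// The calculatePDTSDT variant below is a simplified sketch driven by the test artifact's
// flattened stratum_data shape, rather than the production yearly_data structure above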
function calculatePDTSDT(baseline, isProjectQuantifyBSLReduction, temporalBoundary, crediting_period) {
let PDT = null; // Peat Depletion Time
let SDT = null; // Soil organic carbon Depletion Time
// Processing each stratum's peat depth data from test artifact
baseline.stratum_data.forEach((stratum, stratum_index) => {
if (stratum.peat_depth_data && stratum.peat_depth_data.length > 0) {
stratum.peat_depth_data.forEach((peat_data, peat_index) => {
// VM0033 Equation 5.1 - Peat Depletion Time calculation
if (peat_data.peat_thickness_cm && peat_data.subsidence_rate_cm_yr) {
const calculated_PDT = peat_data.peat_thickness_cm / peat_data.subsidence_rate_cm_yr;
// Take minimum PDT across all strata (most conservative approach)
PDT = Math.min(PDT || calculated_PDT, calculated_PDT);
}
// VM0033 Equation 5.2 - Soil organic carbon Depletion Time
if (peat_data.soc_stock_t_ha && peat_data.soc_loss_rate_t_ha_yr) {
const calculated_SDT = peat_data.soc_stock_t_ha / peat_data.soc_loss_rate_t_ha_yr;
SDT = Math.min(SDT || calculated_SDT, calculated_SDT);
}
});
}
});
// Apply crediting period constraint from methodology
const temporal_boundary_years = Math.min(PDT || crediting_period, SDT || crediting_period, crediting_period);
return {
PDT: PDT,
SDT: SDT,
temporal_boundary_years: temporal_boundary_years
};
}
// From er-calculations.js:850-920 - fire emissions processing
function processFireEmissions(baseline, temporal_boundary) {
const fireEmissionsArray = {};
baseline.stratum_data.forEach((stratum, stratum_index) => {
if (stratum.fire_data && stratum.fire_data.length > 0) {
stratum.fire_data.forEach((fire_data, fire_index) => {
const year = parseInt(fire_data.year);
// Above-ground biomass fire emissions (VM0033 Equation 8.1.3)
if (fire_data.fire_area_ha && fire_data.AGB_tC_ha &&
fire_data.combustion_factor && fire_data.CF_root) {
// AGB fire emissions calculation
const fire_emissions_AGB = fire_data.fire_area_ha *
fire_data.AGB_tC_ha *
fire_data.combustion_factor *
(44/12); // CO2 conversion factor
// Below-ground biomass fire emissions (VM0033 Equation 8.1.4)
const fire_emissions_BGB = fire_data.fire_area_ha *
(fire_data.BGB_tC_ha || 0) *
fire_data.CF_root *
(44/12);
// Dead wood fire emissions (VM0033 Equation 8.1.5) - pools default to 0 when absent
const fire_emissions_DW = fire_data.fire_area_ha *
(fire_data.dead_wood_tC_ha || 0) *
(fire_data.CF_dead_wood || 0) *
(44/12);
// Litter fire emissions (VM0033 Equation 8.1.6)
const fire_emissions_litter = fire_data.fire_area_ha *
(fire_data.litter_tC_ha || 0) *
(fire_data.CF_litter || 0) *
(44/12);
// Total fire emissions for this event
const total_fire_emissions = fire_emissions_AGB +
fire_emissions_BGB +
fire_emissions_DW +
fire_emissions_litter;
// Apply temporal boundary constraints
if (year <= temporal_boundary.temporal_boundary_years) {
fireEmissionsArray[year] = (fireEmissionsArray[year] || 0) + total_fire_emissions;
}
// Debug output for validation against test artifact
debug(`Fire emissions Year ${year}:`, {
stratum: stratum_index,
fire_event: fire_index,
AGB_emissions: fire_emissions_AGB,
BGB_emissions: fire_emissions_BGB,
total_emissions: total_fire_emissions
});
}
});
}
});
return fireEmissionsArray;
}
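With one fire event over 10 ha burning 50 tC/ha of above-ground biomass at a combustion factor of 0.5, the AGB term alone contributes 10 x 50 x 0.5 x 44/12, roughly 917 tCO2e (numbers invented):
// Single fire event, AGB pool only (numbers invented)
const fire_emissions_AGB = 10 * 50 * 0.5 * (44 / 12);
debug(Math.round(fire_emissions_AGB)); // 917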
// From er-calculations.js:1090-1144 - Stock approach selection (handles both total stock and stock loss approaches)
function totalStockApproach(baseline, crediting_period, monitoring_submergence_data) {
const stockData = {};
const approachType = baseline.soil_carbon_quantification_approach;
if (approachType === "total_stock_approach") {
// Total Stock Approach: VM0033 Equation 5.2
baseline.stratum_data.forEach((stratum, stratum_index) => {
if (stratum.soil_carbon_data && stratum.soil_carbon_data.length > 0) {
stratum.soil_carbon_data.forEach((soc_data, soc_index) => {
const year = parseInt(soc_data.year);
// Calculate SOC_MAX using VM0033 Equation 5.2 parameters
if (soc_data.area_ha && soc_data.soc_stock_t_ha) {
// SOC_MAX = Area × SOC stock × CO2 conversion factor
const soc_max = soc_data.area_ha *
soc_data.soc_stock_t_ha *
(44/12); // tCO2 conversion
// Apply depth-weighted calculation if multiple soil layers
let depth_weighted_soc = soc_max;
if (soc_data.soil_layers && soc_data.soil_layers.length > 0) {
depth_weighted_soc = soc_data.soil_layers.reduce((total, layer) => {
return total + (layer.thickness_cm * layer.soc_density_tC_m3 *
soc_data.area_ha * 0.01 * (44/12));
}, 0);
}
stockData[year] = (stockData[year] || 0) + depth_weighted_soc;
// Validate against test artifact expected values
debug(`SOC calculation Year ${year}:`, {
stratum: stratum_index,
area_ha: soc_data.area_ha,
soc_stock_t_ha: soc_data.soc_stock_t_ha,
calculated_soc_max: depth_weighted_soc
});
}
});
}
});
} else if (approachType === "stock_loss_approach") {
// Stock Loss Approach: VM0033 Equation 5.3
baseline.stratum_data.forEach((stratum, stratum_index) => {
if (stratum.soil_carbon_data && stratum.soil_carbon_data.length > 0) {
stratum.soil_carbon_data.forEach((soc_data, soc_index) => {
const year = parseInt(soc_data.year);
// Calculate annual SOC loss using VM0033 Equation 5.3
if (soc_data.area_ha && soc_data.annual_soc_loss_rate_t_ha_yr) {
const annual_soc_loss = soc_data.area_ha *
soc_data.annual_soc_loss_rate_t_ha_yr *
(44/12); // tCO2 conversion
// Apply submergence factor if wetland is partially submerged
let submergence_factor = 1.0;
if (monitoring_submergence_data && monitoring_submergence_data[year]) {
submergence_factor = monitoring_submergence_data[year].submergence_fraction;
}
const adjusted_soc_loss = annual_soc_loss * submergence_factor;
stockData[year] = (stockData[year] || 0) + adjusted_soc_loss;
}
});
}
});
}
return stockData;
}